Social Media is increasingly an impediment rather than a benefit to everything from elections to maintaining brand image. Trolls, paid influencers, and opponents (either political or business) can enter the process with false information, poorly founded conspiracy theories, and FUD (fear, uncertainty, and doubt) to destroy reputations and brand equity and manipulate outcomes, doing a massive amount of harm in the process.
On an individual basis, skills like TA (Transactional Analysis) and research like that from Pew Research showcase methods to deal with these trolls effectively, but they don’t scale to the kind of response you’d need if the troll’s post went viral, which such posts too often do. And these attacks don’t just damage our brands, parties, and products; they also damage the social media companies’ reputations and lead advertisers and users to abandon the platforms.
We need something I’ve termed a Super Moderator. But at the scale you’d need to address the problem on a service like Facebook, the staffing level would be onerous. And even if you could afford that level of staffing, assuring that those human moderators wouldn’t misbehave, show bias, or get angry would, at scale, be virtually impossible.
One tool that might address this problem is IBM’s Project Debater, a relatively new capability of the Watson AI platform. Let’s talk about that this week.
Applying Facts At Scale
When dealing with a troll, the general advice is to combat them with facts, keep your temper, and, if they continue to misbehave, ignore them. A moderator adds to this the power to erase posts and ban the people who make them, but humans are subjective. AIs, if trained (and that training part is essential), are objective: they can be taught to recognize different kinds of attacks and to learn the ideal response over time. If you instrument the social media platform, you not only see attacks in real time, you also get a sense of which types of responses shut an attack down earliest; I’ve sketched what that learning loop might look like below.
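To make that concrete, here is a rough sketch, in Python, of the kind of learning loop I’m describing: a simple epsilon-greedy bandit that, for each attack type, gradually learns which canned response best contains the spread. The attack types, the candidate responses, and the “containment” signal are my own illustrative assumptions; nothing here is a real platform or IBM API.

import random
from collections import defaultdict

# Hypothetical sketch of the response-learning loop described above.
# Attack types, candidate responses, and the containment signal are
# all assumptions for illustration, not any real platform's API.

ATTACK_TYPES = ["misinformation", "conspiracy", "fud", "personal_attack"]
RESPONSES = ["cite_fact_check", "post_sources", "warn_user", "hide_and_review"]

EPSILON = 0.1  # exploration rate: how often we try a non-best response

# Running average of how well each response contained each attack type.
scores = defaultdict(lambda: {r: 0.0 for r in RESPONSES})
counts = defaultdict(lambda: {r: 0 for r in RESPONSES})

def choose_response(attack_type: str) -> str:
    """Epsilon-greedy: usually pick the best-known response, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(RESPONSES)
    return max(RESPONSES, key=lambda r: scores[attack_type][r])

def record_outcome(attack_type: str, response: str, containment: float) -> None:
    """Update the running average with how well the response worked.

    `containment` is an assumed instrumented signal in [0, 1], e.g. how much
    the post's share rate dropped in the hour after the response was applied.
    """
    counts[attack_type][response] += 1
    n = counts[attack_type][response]
    old = scores[attack_type][response]
    scores[attack_type][response] = old + (containment - old) / n

# Example: the platform flags a FUD post, we respond, then measure the result.
resp = choose_response("fud")
record_outcome("fud", resp, containment=0.7)

The specific algorithm matters less than the feedback loop: the platform’s own instrumentation tells the system how well each response worked, so its choice of response keeps improving with every attack it handles.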
A human Super Moderator doing that would have trouble responding fast enough to keep a post from going viral, and would be incapable of scaling to address the problem once it had gone viral. But an AI, assuming it was adequately provisioned, could impersonally apply the learned best response, and it has no human limit on scaling if the post still reached the viral stage.
Besides, human moderators can face severe backlash if they make a mistake or do something unpopular with many people. An AI doesn’t have a home, pets, or vehicles that could be vandalized or harmed, and it is pretty much immune to personal attacks. Granted, the blowback would still land on whoever deployed the tool, but that would happen regardless of whether the Super Moderator was human or not.
Returning To Fact-Based Decisions
There is an interesting book called True Enough: Learning to Live in a Post-Fact Society by Farhad Manjoo. (Recall that Steve Jobs used to fabricate stories so often that folks argued he had a “Reality Distortion Field” surrounding him.) It is on my list of books that are not only fun to read but explain most of the media issues surrounding us. It highlights that the only real defense against being misled is always looking up sources and verifying facts. But with social media, no one has time for that. Thus, we need something that doesn’t remove the fun of Social Media but does address its ability to do massive harm.
I believe the closest thing we currently have to a mature tool that might be able to do this is IBM’s Watson AI-based Project Debater. And I also believe that, with a focused effort, it could be adapted to provide reputation and brand protection at scale for enterprises and a similar service, at scale, for social media networks.
In the end, we are all better served if everyone is using facts, not beliefs, in their decisions and positions, because the path we are currently on is unlikely to end well. By the way, and this likely goes without saying, this same tool could also provide a better real-time warning when a post you are writing is likely to damage your career or company; I’ve sketched what such a check could look like below. I still think this capability is critical to the long-term success of most Social Networks because no one wants to use a service that could get them in trouble with their manager, or fired, after the fact.
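For what it’s worth, here is a minimal sketch of what that pre-post warning might look like, with a simple term-based risk score standing in for what would really be a trained Debater-style model. The risky terms and the threshold are illustrative assumptions only.

# Hypothetical pre-post risk check, sketched as a simple keyword gate.
# A real tool would use a trained classifier; the terms and threshold
# below are illustrative assumptions, not any product's actual logic.

RISKY_TERMS = {"confidential", "lawsuit", "leaked", "my boss", "nda"}
WARN_THRESHOLD = 2

def risk_score(draft: str) -> int:
    """Count how many risky terms appear in the draft post."""
    text = draft.lower()
    return sum(1 for term in RISKY_TERMS if term in text)

def check_before_posting(draft: str) -> str:
    """Warn the author in real time if the draft looks career-damaging."""
    if risk_score(draft) >= WARN_THRESHOLD:
        return "Warning: this post may damage your career or company. Post anyway?"
    return "OK to post."

print(check_before_posting("Leaked NDA details about my boss..."))

The value is the same as with the Super Moderator: the check happens before the damage is done, not after.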