Social media platforms are a marketplace for content exchange. Depending on the platform, the content can be text, visual, audio, or a combination of these. Content is the currency of every social media platform, and as a result, each platform benefits from owning and moderating that currency, just like any country or government.
Of course, owning and moderating a currency comes with problems, such as theft and abuse. For social media platforms, these problems take the form of content theft, profanity, adult content, gaslighting, abuse, and more.
To maintain control, every platform deploys its own solution for content moderation. While the exact solution differs from platform to platform, each is often a concoction of processes, technologies, and people.
Two of those three measures, processes and technology, handle content moderation well when the content blatantly violates platform rules. For example, porn or graphic images posted from a non-verified or under-age Instagram account are easy to detect and block. Similarly, deleting tweets or flagging users whose content contains blocklisted words (as happened with my own tweet) is easy, effective, and fast.
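To show just how mechanical that kind of filtering is, here is a minimal Python sketch of blocklist matching. The word list and function name are placeholders I made up for illustration; real platforms maintain far larger curated lists and layer image classifiers and fuzzy matching on top.

```python
import re

# Hypothetical blocklist; placeholder terms for illustration only.
BLOCKLIST = {"badword", "slur"}

def violates_blocklist(text: str) -> bool:
    """Return True if the text contains any blocklisted word."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKLIST for word in words)

if __name__ == "__main__":
    print(violates_blocklist("This post contains badword!"))  # True
    print(violates_blocklist("A perfectly benign post."))     # False
```

A check like this runs in microseconds per post, which is why blatant violations get caught so quickly.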
However, this only addresses a fraction of the content moderation problem. If we were to graph all content violations on a bell curve, these blatant violations would sit in the tails, accounting for only 15-30% of the overall content.
The bulk of the content, residing in the center of the bell curve, needs human intervention because judging it requires an understanding of context. For that, social media platforms deploy content moderators, i.e., humans who read and interpret content in context and enforce platform policy on a case-by-case basis.
And AI is going to change, and likely break, that. AI is getting exceptionally good at understanding human context. Once AI can be trained to moderate content effectively, content moderation jobs become obsolete.
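To make this concrete, here is a hedged sketch of what context-aware moderation with a language model could look like. The policy text, the labels, and the `llm_complete` callable are all stand-ins I invented for illustration, not any platform's actual setup.

```python
# Illustrative policy prompt; real platforms have far more detailed rules.
POLICY = (
    "You are a content moderator. Given a post and its surrounding thread, "
    "answer with exactly one label: ALLOW, FLAG, or REMOVE."
)

def moderate(post: str, thread_context: str, llm_complete) -> str:
    """Ask a language model to judge a post in context, not in isolation."""
    prompt = f"{POLICY}\n\nThread:\n{thread_context}\n\nPost:\n{post}\n\nLabel:"
    return llm_complete(prompt).strip()

if __name__ == "__main__":
    # Stub model for demonstration; a real deployment would call an LLM API.
    fake_llm = lambda prompt: "FLAG"
    print(moderate("you people are the worst", "a heated sports argument", fake_llm))
```

The key difference from the blocklist approach is that the model sees the surrounding thread, which is exactly the contextual judgment human moderators provide today.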
Big social media platforms have more incentive than ever to deploy AI content moderators at scale, simply because it's going to be cheap, and who doesn't want to improve their bottom line?
Now you may think that AI-driven content moderation is a few years away.
Sadly not.
Remember the technology piece I mentioned above, the one already flagging blatant violations in images and text? An AI is doing it. While AI that understands human context isn't here yet, I wouldn't be surprised if next year brings another round of mass layoffs of folks in content moderation roles.
Now, this post turned out gloomier than I expected, and I don't want to end it on a down note. So here is the glass-half-full perspective.
While content moderation jobs will be eliminated, new jobs will emerge. What will those be? It is hard to say, but there will be plenty for sure. How do I know? Because this is what new technology does: it replaces the most inefficient jobs with efficient ones, just as industrialization, the Internet, and the smartphone did before. AI will do the same. The first example is prompt engineering. Although it's not the same as content moderation, it's a new job that did not exist before.
So stay optimistic and happy thinking, everyone!!