A Trump supporter at a protest against the US election result last November. Photograph: Bing Guan/Reuters

It's taken Donald Trump to show social media giants the meaning of moderation

Twitter and Facebook need to be more transparent about how they choose who gets censored

How popular social media platforms such as Facebook and Twitter enforce their content moderation policies is a perennially contentious topic – particularly so after both companies booted Donald Trump off their services in early January, just before he exited the White House in disgrace. Facebook’s decision was criticised as too little, too late, while Twitter CEO Jack Dorsey defended his company’s decision but nevertheless said that it set a “dangerous precedent”.

But these actions hardly set any real precedent: millions of people find their content or accounts removed from these platforms every day, often for far lesser offences than Trump’s incitement of his supporters to riot at the Capitol building (which has also led to his impeachment trial this week) or his streams of Covid-19 misinformation. While there’s plenty to criticise about how these companies operate – they’re too opaque in their decision-making, offer inadequate support for their content moderation workers and provide next to zero customer service, among other things – the truth is that moderating the expression of billions of users is a genuinely hard problem, perhaps even an impossible one at scale.

While Trump’s deplatforming made headlines in January, those most often affected by the tech platforms’ labyrinthine policies and seemingly poor judgment calls are not world leaders but marginalised and vulnerable communities, whose posts and accounts are routinely deemed to be in violation of the rules.

Twitter recently suspended the account of an artist who wrote a comical limerick about the Covid-19 vaccine, because the company’s blunt automated tools for spotting perceived offences – the use of which has increased during the pandemic – can’t tell the difference between misinformation and a joke. Such mistakes would presumably be far less likely if Twitter (and other companies) ensured that flagged content was reviewed by human moderators – though we don’t know the full process, because the companies won’t tell us.

Of course, Trump’s account was assuredly viewed by humans, which leads us to ask: why did Twitter and Facebook wait so long to act? Some have suggested that Trump didn’t truly cross the line until early January, but it’s clear that earlier tweets violated the rules and would probably have been removed had he been a lower-profile user. Other political pundits and media watchers have claimed that it was the threat of further violence from his supporters after the events of 6 January that finally prompted the suspensions – entirely possible, though it could equally be argued that removing his account sooner might also have provoked violence.

In my opinion, these companies waited until the last possible minute, just days before Trump left office, because they could – it cost them nothing and made them look good in the eyes of the public at the final hour. Cynical, perhaps, but after a decade of studying how Silicon Valley behaves, I’d gladly bet my last dollar on it.

When it comes to content moderation on these platforms, I strongly believe that rules should be clear; that users should consent to them (including each time they’re updated), be properly informed of the consequences of violating them, and be given a clear path to remedy if they believe a mistake has been made. Under that sort of regime, it is entirely fair for a platform to ban me, the president or anyone else.

But under the status quo, serious questions abound, not least of which is why we’ve decided to trust a largely elite, wealthy, white and American group of unelected leaders with vital decisions about what we are and are not allowed to say in public forums. Seeing Trump removed from Twitter last month was certainly pleasing to me personally – and I’d be lying if I said I didn’t sleep a little better that first night – but a few weeks later, all the concerns I’ve voiced for more than a decade are back.

So where do we go next? It’s past time that social media companies changed the way they operate. They need to bring in more diverse staff and executives, pay all their workers an adequate wage and operate with far more transparency. Tech companies should also take this moment of relative quiet to conduct a full audit of their existing policies and update them for the new era – to name but two examples, is banning nudity, or requiring “authentic names” at registration, really necessary in 2021? And how can they ensure that policies on incitement to violence apply evenly to ordinary citizens and world leaders alike? These are important questions, and they need to be answered not by me, Jack Dorsey or Mark Zuckerberg, but by civil society far more broadly, however that might be achieved.
