TL;DR
X has pledged to speed up the review of hate and terrorist content in the UK, aiming to assess reports within 24 hours and to remove accounts that post illegal content. This follows increased scrutiny from regulators and concerns over an escalation in hate speech on the platform.
X has committed to significantly reducing hate and terrorist content in the UK, promising faster review times and account bans, according to the regulator Ofcom. This development comes despite evidence of increased hate speech following Elon Musk’s acquisition of the platform, raising questions about the platform’s actual commitment to change.
In a statement, Ofcom, the UK’s communications regulator, confirmed that X plans to review and assess reports of terrorist and hate content within 24 hours on average, and at least 85 percent of such content within 48 hours. The platform also intends to collaborate with UK experts on hate and terror content and to remove offending accounts. Ofcom stated it will monitor X’s performance quarterly over the next year.
This initiative follows increased regulatory scrutiny, including Ofcom’s ongoing investigation into Elon Musk’s Grok AI for illegal content generation, and a recent fine imposed on 4chan for violations of the UK’s Online Safety Act. Critics, however, question whether these commitments will translate into meaningful action, citing Musk’s history of posting racist content and the platform’s prior increase in hate speech after Musk’s takeover.
Why It Matters
This development is significant because it reflects regulatory efforts to curb online hate speech in the UK, a country that has experienced a rise in hate-motivated crimes, especially those targeting Jewish communities. The platform's actions could influence broader social media policies and accountability standards, but skepticism remains about the platform's priorities under Musk given past behavior.
Background
Following Elon Musk’s acquisition of Twitter, renamed X, a UC Berkeley study reported a 50 percent increase in weekly hate speech, partly driven by bots. The UK government and regulators like Ofcom have ramped up efforts to pressure social media companies to address illegal and harmful content. Previously, Ofcom fined 4chan nearly $700,000 for violations of the Online Safety Act, highlighting ongoing enforcement challenges.
“We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action.”
— Oliver Griffiths, Ofcom’s Online Safety Group Director
“X will review and assess terrorist and hate content in the UK within 24 hours of reporting, or at least 85 percent within 48 hours.”
— X's commitment, as reported by Ofcom
What Remains Unclear
It remains unclear whether X will meet its self-imposed review timelines consistently or if the platform will effectively enforce account bans. Skepticism persists regarding Musk’s personal posting history and the platform’s overall commitment to genuine change, especially given ongoing regulatory investigations and past increases in hate speech.
What’s Next
Ofcom will continue to monitor X’s performance data quarterly over the next year, and further regulatory actions or fines may follow if commitments are not met. The platform’s actual impact on hate speech levels in the UK will become clearer as these measures are implemented and assessed.
Key Questions
Will X actually reduce hate content in the UK?
It is not yet clear whether X will meet its commitments, as skepticism remains due to past increases in hate speech and Musk’s posting history.
How will Ofcom evaluate X’s performance?
Ofcom will review X’s performance data quarterly over the next year to assess the platform’s effectiveness in removing hate and terrorist content.
What actions can regulators take if X fails to meet its commitments?
Regulators could impose fines or other enforcement actions if X does not comply with UK online safety regulations.
Does this mean hate speech will disappear from X?
While the platform aims to reduce hate speech, complete elimination is unlikely; the effectiveness of these measures remains to be seen.