The United Kingdom has recently experienced a surge of violent anti-immigration riots, fueled in significant part by social media. Elon Musk, CEO of Tesla and owner of the platform X (formerly Twitter), has been at the center of the online frenzy with a series of controversial statements. Notably, Musk declared that a “civil war is inevitable” in response to posts blaming mass migration and open borders for the unrest.
The UK government has strongly opposed Musk’s incendiary comments, and Prime Minister Keir Starmer’s office denounced the remarks as unfounded and inflammatory. Nonetheless, Musk continued his criticism, dubbing Starmer “#TwoTierKeir” in reference to debunked allegations that police treat right-wing and left-wing protesters differently. His rhetoric intensified as he likened the UK’s regulation of offensive speech on social media to Soviet-era censorship.
The spread of false information online has become a significant concern in the UK because of its role in inciting real-world violence. The riots have caused extensive property damage, including the destruction of cars and the torching of two Holiday Inn hotels believed to be housing asylum seekers. Public buildings have been defaced, and police officers have been attacked with bricks. Law enforcement has made numerous arrests in an effort to restore order.
The riots were sparked by a stabbing attack in Southport that killed three children. Far-right groups swiftly spread false claims on social media that a Muslim asylum seeker was responsible. Even after police confirmed that the 17-year-old suspect, Axel Rudakubana, was born in the UK, the misleading narrative persisted and continued to fuel the riots.
According to the Institute for Strategic Dialogue (ISD), a think tank, the false name attributed to the supposed asylum seeker was mentioned more than 30,000 times on X by over 18,000 unique accounts in the day after the attack. The ISD highlighted how platform algorithms amplified the misinformation, pushing it to users who might not otherwise have seen it. The UK government suspects that bots, potentially state-backed, also helped spread the false information.
Social media companies face ongoing challenges in enforcing policies against hate speech and incitement to violence. Despite existing guidelines, the volume of content during crises often overwhelms moderation efforts. Musk’s actions have worsened this issue by promoting provocative content and loosening content moderation on X. This approach has alarmed European regulators, who accuse the platform of misleading users.
The UK government is committed to prosecuting online criminality and urges social media companies to act decisively against false information. Home Secretary Yvette Cooper emphasized social media’s role in escalating the situation and called for stricter enforcement against those inciting violence online. Prime Minister Starmer reiterated the government’s dedication to bringing both online and offline offenders to justice during a recent cabinet meeting.
The UK’s Online Safety Act, enacted last year, seeks to impose new responsibilities on social media platforms, including the rapid removal of illegal content. Although the Act is not yet in force, the regulator Ofcom is currently developing codes of practice and guidance. Once implemented, the Act will empower Ofcom to fine companies up to 10% of their global revenue for non-compliance. Ofcom has already initiated discussions with tech platforms to ensure they are ready for the new regulations.
As the situation continues to develop, the UK government remains resolute in holding to account both those responsible for the riots and those who facilitated them online. The actions of influential figures like Elon Musk and the policies of social media platforms will be closely scrutinized as authorities work to restore order and prevent further violence.