The FBI and US Department of Homeland Security have scaled back efforts over the past two years to disrupt violent extremists’ online activities, according to current and former US officials and Internet radicalisation specialists who fear the trend will accelerate under the incoming Trump administration.
FBI and DHS officials are requesting fewer content takedowns and sharing less threat-related information with social media companies, according to a US official, two former US officials and three researchers who work with the agencies, all of whom requested anonymity to preserve government relationships. In particular, the agencies have largely stopped flagging networks of white supremacist accounts that try to recruit or radicalise new followers, according to the researchers.
Law enforcement officials had worked closely with platforms after a mob of then-President Donald Trump’s supporters stormed the US Capitol on Jan 6, 2021, fuelled by election-related conspiracy theories popularised online. The FBI, for example, alerted social media and gaming platforms about online communities where users had been observed floating plans for violent attacks, according to several researchers who worked with the bureau.
The pullback by federal law enforcement agencies echoes a retreat from content moderation by Silicon Valley companies. Meta Platforms Inc announced Jan 7 that it would end its third-party fact-checking program in the US and move to a user-generated model for content oversight on Facebook and Instagram, akin to the community-notes model deployed by Elon Musk on X following his 2022 purchase of the platform then known as Twitter.
Meta’s new policy drew rare praise from Trump, who told reporters Tuesday he thought the company had “come a long way.” That marked a shift for the president-elect, who frequently railed against Meta following his ban from Facebook in early 2021 over the Capitol riot and called the company “an enemy of the people” as recently as last year. His account was reinstated in 2023.
FBI and DHS efforts had drawn a backlash from Trump’s Republican allies who described any federal push to monitor and remove false information online as censorship. The agencies dialed back their engagement with social media firms in July 2023 after a federal judge sided with Republican attorneys general, who claimed that the government’s pursuit of disinformation was suppressing free speech.
Extremism remains an urgent concern for US authorities, highlighted by the New Year’s Day terror attack in New Orleans that left 14 dead and dozens injured. The suspect had a terrorist organisation’s flag in the rented vehicle he used to run down revelers, and he posted social media videos hours before the incident declaring his loyalty to the terrorist group, FBI officials said.
The agencies’ shift puts new pressure on the contractors and research organisations that social media companies use to identify calls for violence, such as threats of politically motivated attacks or mass shootings, according to experts who continue to monitor such activity. It will also offload difficult, time-consuming work to state and local law enforcement agencies that already struggle to track extremist content broadly, experts say.
"We don’t have a national policy that requires social media firms to do anything about extremists, so a lot of investigating that falls on organisations like ours and law enforcement,” said Katherine Keneally, a former New York Police Department official who now works as director of threat analysis and prevention at the Institute for Strategic Dialogue, which tracks hate groups on the Internet. "If investment in this area slows down, either financially or in terms of manpower, that could create a very real security threat.”
The Federal Bureau of Investigation declined to comment. DHS didn’t respond to questions about its handling of domestic extremist content online.
Since the court’s ruling, DHS and the FBI have almost entirely refrained from sharing information on extremists with online platforms, instead saving the data for their own investigations, according to one current and two former US officials not authorised to speak on law enforcement tactics. While some aspects of the judge’s injunction have since been lifted, the ruling continues to have a chilling effect on interactions with social media companies, according to the current and former government officials.
US officials are also steering federal grant money away from research into online extremism and more toward communication education efforts and offline local police training, according to multiple people familiar with the matter. None of the applications that DHS approved in 2024 for its terrorism prevention program focused on Internet radicalisation. By contrast, in 2022, DHS awarded US$9.2mil (RM40.94mil) to 19 grant applicants who sought money for work related to digital misinformation and studying extremists’ online messaging.
DHS also recently shuttered a unit that swept the Internet for threatening materials in public posts, social media messages and online forums, according to two of the former government officials who were involved in that effort. The agency previously used covert social media profiles to research activities on Meta’s Facebook, Instagram and other sites without disclosing their DHS affiliation.
Silicon Valley firms have also stood down on content moderation, with Meta no longer labeling misleading posts about the election and Musk’s X allowing the spread of a range of conspiracies. Brendan Carr, the incoming chair of the Federal Communications Commission, has alleged that major social media firms have played a role in a “censorship cartel” and sought details about their work with anti-misinformation firms.
The changes have left the FBI and DHS without a cohesive strategy for exchanging information with social media companies about domestic extremism, despite a Government Accountability Office recommendation in January 2024 to develop one, according to a US official who wasn’t authorised to speak on the matter. Another GAO report in 2023 faulted the two agencies for their levels of collaboration on countering domestic terrorism threats.
That lack of clarity comes as incendiary rhetoric online has recently contributed to real-world violence, such as attacks on US power substations, white supremacist riots in the UK and threats against recovery workers who responded to devastating hurricanes.
Trump’s return to the White House on Jan 20 promises to cement the agencies’ hands-off approach toward online behaviour by domestic extremists, according to online radicalisation experts who work with the government and requested anonymity to protect their relationships. His hand-picked nominee to lead the FBI – Kash Patel, who faces a potentially tough confirmation in the Senate – has previously attacked government agencies that he claims unfairly target Trump and Republicans.
A Heritage Foundation-led initiative known as Project 2025, widely viewed as a set of policy proposals for a second Trump administration, advocates barring the FBI from curbing the spread of misinformation. “The United States government and, by extension, the FBI, have absolutely no business policing speech,” a passage in the almost 1,000-page document reads.
The president-elect has previously urged firing DHS or FBI officials who directly or indirectly worked with technology companies to scrub the web of misleading information about the 2020 election results. In 2020, he fired Christopher Krebs as director of the Cybersecurity and Infrastructure Security Agency, a unit of DHS, after Krebs contradicted the president’s false claims about widespread election fraud.
DHS has also come under pressure from Congress, including Representative Jim Jordan, an Ohio Republican who chairs the House Judiciary Committee. The agency largely abandoned its work to counter domestic online disinformation after Jordan led an effort to portray it as unconstitutional.
In 2022, DHS closed its Disinformation Governance Board, which had been designed to offer best practices in countering viral conspiracies.

Hateful or threatening language on social media and gaming platforms is often First Amendment-protected speech, according to anti-extremism researchers. More formal policing begins when users make overt calls for violence or when investigators determine that a suspect has taken concrete steps to act on a threat, such as saying they’ve purchased weapons.
Numerous violent criminals had a history of hateful social media activity before they committed atrocities. For instance, the gunman who murdered 10 people and injured three others in Buffalo, New York, in May 2022 had spent months explicitly discussing plans to commit a terrorist act on the Discord chat app.
Scouring platforms for signs of extremist activity involves tracing the spread of memes that use coded racist language, understanding the behaviours and connections of users who amplify hate speech and watching for calls to violence. Gamers and social media users who use dehumanising language about specific populations, such as comparing people to animals or insects, are of particular concern.
The decline in interactions between federal agencies and the technology community also reflects a change in investigative tactics, said John Cohen, executive director of the Center for Internet Security and the former DHS acting undersecretary for intelligence and analysis and counterterrorism coordinator. For instance, local police departments and school resource officers use Internet posts to increase their awareness about potential threats, Cohen said.
"The emphasis for security officials in the past was working with platforms on content removal,” he said. "Now law enforcement, particularly at the local level, looks at violent extremist online content as intelligence that informs criminal investigations and violence prevention activities.” – Bloomberg