The recent deadly racist attack in Buffalo, N.Y., planned with tactical advice from online chat groups, is sparking calls in Canada and beyond for better oversight of internet content. But civil liberties activists say trying to effectively sanitize the web of hateful or violent material is logistically difficult.
The massacre at Tops supermarket left 10 dead and three injured. Officials believe the attack was a racially motivated hate crime.
An online cache of disturbing posts suggests the alleged Buffalo shooter was seeking advice from like-minded individuals on loosely moderated online discussion forums.
The shooting has once again raised questions about how effectively social media platforms can respond to threatening content while maintaining free speech online.
The alleged Buffalo shooter also discussed the specifics of launching the attack on the online platform Discord. The platform lets users create private channels that can only be accessed by invitation, but it also hosts public channels that anyone can join.
‘What kind of bullets will defeat body armour?’
On Discord, the suspect posted a diary dating back two years detailing a racist manifesto heavily inspired by the Christchurch perpetrator’s manifesto. There were also detailed plans for executing an attack.
The alleged Buffalo shooter posted messages asking for advice on tactical gear, like bulletproof vests and armour, what weapon to use, and where to access certain bullets. “Is there a Discord that mainly talks about tactical gear?” one post from August 2020 reads. “And what kind of bullets will defeat body armour?”
Alternative media outlet Unicorn Riot uncovered the web posts seemingly linked to the Buffalo suspect and shared the content with CBC News. The network is not reposting the most disturbing and racist material contained in the posts.
In addition to asking for specific advice on conducting a mass shooting, the suspect live-streamed the attack on Twitch, an Amazon-owned platform often used to broadcast video game play. Twitch removed the video within two minutes of the violence beginning.
But the footage was re-uploaded elsewhere, going viral on platforms like Facebook and Twitter.
Amarnath Amarasingam, a professor at Queen’s University and an expert on extremism and online communities, said diary entries uploaded by the suspect reveal that Discord flagged one of his posts when he tried to upload the Christchurch shooter’s manifesto, but the platform did nothing to follow up.
“If they even bothered to look at his diary, it would have been immediately clear that he’s planning an attack because he says so directly and openly from the very beginning,” Amarasingam said.
“In the long list of red flags that were missed, you can also add this one.”
‘Hate has no place on Discord’
In an email to CBC News, Discord provided a response to the attack. “Our deepest sympathies are with the victims and their families,” a company spokesperson wrote. “Hate has no place on Discord and we are committed to combating violence and extremism.”
Discord said that, as far as it knows, the alleged shooter maintained “a private, invite-only server … to serve as a personal diary chat log.” But around 30 minutes before the attack, “a small group of people were invited to and joined the server.”
Effectively and quickly moderating this kind of content is not easy. Last year, the Liberals proposed a bill that was criticized for not striking the right balance between privacy rights and online safety.
“Regulation needs to be thoughtful and nuanced, recognizing how vital freedom of expression is to a democratic society,” said Cara Zwibel of the Canadian Civil Liberties Association in a statement to the CBC. “A government that believes it can root out online hate or sanitize the internet by imposing strict takedown requirements on platforms is engaged in a losing battle.”
“Governments should focus on requiring platforms to be more transparent about how they address these issues and in particular around the tools and methods they use to amplify, promote and monetize certain types of online expression,” Zwibel said.
During the 2021 federal election campaign, the Liberals promised to introduce new legislation within the first 100 days of their mandate “to combat serious forms of harmful online content, specifically hate speech, terrorist content, content that incites violence, child sexual abuse material and the non-consensual distribution of intimate images.”
They pledged to “make sure that social media platforms and other online services are held accountable for the content that they host.” The move was partly in response to the hate-motivated attack on a mosque in Quebec City in 2017 and the deadly London, Ont., van attack in June of 2021.
While the government missed the 100-day mark, which passed in early February, it has since established a panel of experts to make recommendations to Heritage Minister Pablo Rodriguez. The panel’s findings will inform policy on regulating social media platforms.
“What happens online doesn’t stay online,” said Rodriguez. “Online violence is real violence and we have to tackle that.”
Amarasingam is on that expert panel.
“That all needs to fall under some sort of legislation that compels some of these platforms to kind of think through the risks that are built into their service so that they can think about how to prevent it,” said Amarasingam.
New Zealand’s response
New Zealand faced a similar challenge in 2019, when the Christchurch shooter live-streamed his attack and posted his manifesto online; authorities moved that year to ban the video from public circulation. The country’s chief censor has also classified the Buffalo video, diary and manifesto as “objectionable,” because the attack was inspired by the ones in Christchurch, creating more trauma for people there.
Academics and others can apply for an exemption to use the banned content in limited contexts for research purposes.
Rupert Ablett-Hampson, the acting chief censor of New Zealand, said removing content like what the Christchurch gunman posted doesn’t stop the spread of racist manifestos or misinformation entirely.
“What we can’t classify is the underlying misinformation and hate … that’s ultimately behind these actions,” said Ablett-Hampson.
“We really need to look to the tech companies to be able to take some responsible action for misinformation online.”