In keeping with its allegiance to the Christchurch Call, Canada has committed another $1 million to the cause of fighting digital terrorism. Brought before international leaders and tech multinationals by New Zealand PM Jacinda Ardern, the Christchurch Call was a direct reaction to the video of the terrorist attacks in Christchurch, New Zealand, in March of this year, which was live streamed and virally shared.
I have nothing but empathy and compassion for Ardern’s desire that this kind of thing never happen again, and certainly that no one be made to watch it. But the vagueness of just what the Christchurch Call is asking leaders and tech giants to do raises free speech concerns.
Canada’s new pledge to the Tech Against Terrorism program is “to create a digital database that will notify smaller companies when terrorist content is detected and help eliminate it.” While this is noble and all that, there is no real definition of what constitutes terrorist content. In one sense, we can all tell what terrorist content is when we see it, much as American lawmakers in the 1980s insisted they could with porn.
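For what it’s worth, the mechanism being funded is easier to picture than the policy behind it. Here is a minimal sketch of how such a shared database might notify platforms, assuming a simple exact-hash lookup for illustration; real hash-sharing systems (such as the industry’s GIFCT database) use perceptual hashes so that re-encoded copies still match, and every name and function below is hypothetical.

```python
# Minimal sketch of a shared "known terrorist content" database.
# Assumes exact SHA-256 matching for illustration only; real systems
# use perceptual hashing, and all names here are hypothetical.

import hashlib

# Fingerprints contributed for content a human has already judged "terrorist".
shared_hash_database = set()

def register_known_content(file_bytes):
    """Add a fingerprint of already-flagged content to the shared database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    shared_hash_database.add(digest)
    return digest

def check_upload(file_bytes):
    """Return True if an upload matches previously flagged content."""
    return hashlib.sha256(file_bytes).hexdigest() in shared_hash_database

# Example: a smaller platform checks a new upload against the shared list.
register_known_content(b"...bytes of a previously flagged video...")
print(check_upload(b"...bytes of a previously flagged video..."))  # True
print(check_upload(b"...an eyewitness's own footage..."))          # False
```

Note what a system like this can and cannot do: it can recognize a copy of something a human has already labeled, but it cannot decide whether new footage is an atrocity, evidence, or a cry for help. That is exactly the definitional problem at issue.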
But when those definitions become more broadly applied, as when Robert Mapplethorpe’s homoerotic photographs were labeled pornography, or the legendary performance artist Karen Finley was branded obscene, free speech protections fly out the window.
Context matters, and tech companies will not have the bandwidth to sift through all the violent content to determine what is and isn’t terrorism. When Diamond Reynolds live streamed the police shooting of her boyfriend Philando Castile, that was not terrorism, but an urgent cry for help. It was also valuable evidence. Would it have been terrorism if the officer who did the shooting had live streamed it? When the attacker live streamed the terrorist attack in Christchurch, that was terrorism. Would it have been terrorism if one of the victims or a witness had live streamed it, and it was used as evidence?
The non-binding Christchurch Call is an effort to “eliminate terrorist and violent extremist content online,” but it offers no definition. In addition to the problem of context, there is the concern over subjectivity and bias. The conversation over what constitutes violence and hate speech online has become so heated that even the wild and rabble-rousing knitting community has gotten into the act.
Will Canada’s federal funding to root out extremist content dive into the macramé or semaphore communities next, to make sure no one there is sending out messages of hate? If not, why not? It seems pretty logical that without real definitions of “terrorist and violent extremist content,” no forum is safe from Canada’s well-funded speech policing.
The U.S. didn’t sign onto the Christchurch Call. Mostly because it can’t. The free speech protections that are enshrined in our Bill of Rights are more important than eliminating, well, free expression, even if that free expression sucks. That’s as it should be. When government and corporations team up to clamp down on individual rights, no one is safe.
These are weapons that look big and shiny and powerful in the left’s hands right now, but once a sword is forged, anyone can wield it, including your enemies. Why create these kinds of restrictions when they will just be used against you once the pendulum swings back the other way?
Maybe as AI and machine learning algorithms advance there will be an easy fix for this, a way for content to be parsed and trashed according to specific standards, but as of right now, it’s basically a judgment call. The decision-making process over exactly what content gets pulled and what stays rests with individuals, based in many instances on user-flagged content. Whether or not a post gets pulled down is all in the eye of the beholder. Much of this work is outsourced to the tech equivalent of call centers, where employees scan through content and field reports of user violations.
Ethical guidelines are hardly clear, and the standards keep shifting. While no sane person wants to see extremist and terrorist content proliferating online, or hate speech in subculture forums, or to accidentally stumble upon horribly violent videos, we cannot ban it all. The quest for perpetual safety, in body or in thought, leads to imprisonment. And speaking as an American, we’ve got quite enough people locked up already without jailing our speech as well.