Deep fakes are weird, they’re pervasive, and they’re getting worse. The digital public is already wary of reality, and with this new assault, it’s likely we’ll break from it altogether. Jordan B. Peterson was spoofed in a deep fake, as was Mark Zuckerberg. Victims of this practice also include random women who end up doctored into porn videos. Deep fakes are more insidious than doctored videos or photos because they use AI and deep learning not to alter existing reality but to create an alternate one.
Deep fakes are made possible by a GAN, a generative adversarial network. Introduced in a 2014 paper out of the University of Montreal, a GAN is essentially two neural nets that get better by challenging each other. The two can be thought of as the maker and the investigator, both in service to the puppeteer: the human being who intends to create the fake for whatever purpose they have. The maker submits a forgery; the investigator picks it apart for flaws.
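To make the maker-and-investigator loop concrete, here is a minimal sketch of a GAN in Python using PyTorch. Everything in it is an illustrative assumption: the "real" data is a toy cloud of numbers rather than audio or video, and the network sizes are arbitrary. It shows the shape of the technique, not the implementation behind any actual deep fake tool.

```python
# A toy GAN: the "maker" (generator) learns to forge samples that the
# "investigator" (discriminator) cannot tell apart from real data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for real content: 64-dimensional vectors drawn from a
# Gaussian centered at 3.0. A real deep fake would use audio or video.
def real_batch(n=32):
    return torch.randn(n, 64) + 3.0

maker = nn.Sequential(          # generator: noise in, forgery out
    nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 64))
investigator = nn.Sequential(   # discriminator: sample in, "real?" score out
    nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

loss = nn.BCEWithLogitsLoss()
opt_m = torch.optim.Adam(maker.parameters(), lr=1e-3)
opt_i = torch.optim.Adam(investigator.parameters(), lr=1e-3)

for step in range(2000):
    # 1. Investigator's turn: learn to score real data high, forgeries low.
    real = real_batch()
    fake = maker(torch.randn(32, 16)).detach()  # freeze the maker this step
    d_loss = (loss(investigator(real), torch.ones(32, 1)) +
              loss(investigator(fake), torch.zeros(32, 1)))
    opt_i.zero_grad(); d_loss.backward(); opt_i.step()

    # 2. Maker's turn: adjust so the investigator scores forgeries as real.
    fake = maker(torch.randn(32, 16))
    g_loss = loss(investigator(fake), torch.ones(32, 1))
    opt_m.zero_grad(); g_loss.backward(); opt_m.step()

print(maker(torch.randn(5, 16)).mean())  # forgeries drift toward ~3.0
```

The key design point is the alternation: each side's improvement becomes the other side's harder training signal, which is why the forgeries keep getting better without a human tuning them.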
For example, NotJordanPeterson.com was a website where users could plug in any content they wanted and have it spit back at them in the voice of Jordan B. Peterson. The actual Peterson was rightly disturbed, and the owners took the site down. The site worked by having the maker algorithm generate the audio while the investigator algorithm countered with the ways it could tell the clip was fake, prompting the maker algorithm to try again, over and over. Peterson’s cadence, pronunciation, accent, and common word usage would be interrogated by the investigative side and adjusted by the maker side until accuracy was achieved.
So if you had used NotJordanPeterson.com to create a clip of Jordan B. Peterson talking about how great Human Rights Tribunals are, or how much he likes strawberry froyo, or simply to whisper inappropriate things to you, it would have sounded super realistic because it would have been Peterson’s voice talking the way Peterson talks about things Peterson probably would never say in public, which is of course why he got pissed off.
The Linear Digressions podcast got into GANs and deep fakes: “the technology, in general, has been pretty democratized in the sense that you don’t have to be an AI graduate student to be able to use some of this software, it’s just like floating around out there. And… there are definitely people who are researching how good the GANs are… so even if we can sort of tell the difference right now, they’re improving so much that that’s not something that’s going to be maintained for very long. So at this point, it looks like for some of the better algorithms out there the only way that you can detect which ones are fake is not with humans but with algorithms that are specially trained to pick out the ones that are fake.”
Algorithms are being used to create fake realities that humans cannot distinguish from actual reality, and only other algorithms can tell them apart. We are smart enough to create computer programs that alter reality to the point where we don’t know what’s real and what’s fake, and stupid enough to ensure that the only way to tell the difference is by employing another computer program. The more advanced we become, the more ridiculous our predicament with reality.
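The podcast's point that detection now requires algorithms "specially trained to pick out the ones that are fake" can also be made concrete: a fake detector is itself just another classifier, trained on labeled examples of genuine and generated content. Here is a minimal sketch in the same toy setting; the data, network sizes, and 0.5 threshold are all assumptions for illustration, not any production detector.

```python
# A toy deep fake detector: just another neural net, trained on
# examples labeled real (1) or fake (0), then used to flag new content.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-ins: "real" content clusters around 3.0; "fake" content comes
# from an untrained, purely illustrative generator network.
fake_source = nn.Sequential(nn.Linear(16, 64))

detector = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 64) + 3.0                  # genuine samples
    fake = fake_source(torch.randn(32, 16)).detach()  # generated samples
    batch = torch.cat([real, fake])
    labels = torch.cat([torch.ones(32, 1), torch.zeros(32, 1)])
    step_loss = loss(detector(batch), labels)
    opt.zero_grad(); step_loss.backward(); opt.step()

def looks_fake(sample):
    # Sigmoid of the detector's output is its probability of "real".
    return torch.sigmoid(detector(sample)).item() < 0.5

print(looks_fake(torch.randn(64) + 3.0))  # a genuine-like sample: False
```

The catch, as the podcast notes, is that this is the same arms race as the GAN itself: whatever signal the detector learns to key on is exactly the flaw the next generation of makers learns to remove.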
Speaking to the 1A podcast, Rachel Thomas, co-founder of fast.ai, brought up another aspect of the problem: once we no longer believe what our eyes tell us, even genuine content can be dismissed as fake. “The more and more that we see deep fakes spread, and then we educate people, that you can’t believe what your eyes are telling you, then if they don’t realize, also that people can sort of leverage the phenomenon, and say ‘y’know that genuine video that shows me committing a crime? Well sure, that’s a fake, that’s not real.’ If we don’t have people on guard to recognize that as well, the misuse of the deep fake phenomenon to run away from reality, that, too, leads us to this nihilistic place where we say ‘there are no truths.'”
This wouldn’t be the first time that people have arrived at the place where we think ‘there are no truths,’ where we can’t tell the difference between fact and fiction, and where we look at reality with total skepticism. Writing in the early 18th century, Irish philosopher George Berkeley treated reality as though it might not have any existence outside the minds of people. He wrote that ideas and sensations do not exist independent of a mind to perceive them, but that those things which cause the ideas and sensations do objectively exist.
In A Treatise Concerning the Principles of Human Knowledge, he wrote: “Ideas imprinted on the senses are ‘real’ things, or do really exist: this we do not deny; but we deny they can subsist without the minds which perceive them, or that they are resemblances or archetypes existing without the mind; since the very being of a sensation or idea consists in being perceived, and an idea can be like nothing but an idea.– Again, the things perceived by sense may be termed ‘external,’ with regard to their origin, in that they are not generated from within by the mind itself, but imprinted by a Spirit distinct from that which perceives them.– Sensible objects may likewise be said to be ‘without the mind’ in another sense, namely when they exist in some other mind; thus, when I shut my eyes, the thing I saw may still exist, but it must be in another mind.”
With the emergence of deep fakes, a totally altered yet believable reality, Berkeley’s hypothesis has been proven. The fake exists, but its status as reality exists only in our minds. Is there a difference between a perceived reality that everyone agrees is real and the actual, independent reality? In some respect, once the deep fake is created and understood to be part of reality, isn’t it as real as real life? This is the problem. Deep fake content looks as real as real content, and when it confirms our perspective instead of invalidating it, we are more likely to believe it.
The problem with deep fakes speaks to issues of defamation, libel, and slander. There are low-stakes deep fakes, like showing Trump eating ice cream, altering entertainment content, or some fan fic type stuff, but there is also the realm of the high-stakes deep fake. For women, this can all be boiled down to one word: porn. Deep fake porn is revenge porn on steroids. A machine learning GAN can stitch a woman’s face, voice, and mannerisms into a porn video to make it look like she really and truly engaged in that content. The consequences of such a video being publicized could damage her reputation, cost her a job, humiliate her family and children, and cause her deep embarrassment when she realizes how many people saw it. It would be a hard image to erase from the imagination, never mind from the boundless memory banks of the infinite world wide web.
In essence, the phenomenon of deep fakes asks us how much control we are willing to relinquish over our understanding of reality. It may be that we never had much, but with the emergence of deep fakes, we have even less. Entering a hellscape where we can trust neither our eyes nor any of our senses to determine what is real poses problems we are not ready for, from establishing actual facts to discerning actual fiction. From now on, what we make of reality is up to each of us alone, and we had better get it right.