Tech ethicists have been sounding the alarm about deepfakes for some time now, and tech think tank Future Advocacy has decided to show just how feasible, and how damaging, this technology can be. They’ve released a fake campaign video that shows the two main candidates for prime minister in the coming U.K. election endorsing each other.
Rationally, we know that Jeremy Corbyn and Boris Johnson would not actually endorse each other for the office they both covet, yet our eyes deceive us when we view a video like this. In the hands of Future Advocacy, the video is revealed to be a fake. But this tech could be used by bad actors to disrupt elections all over the world.
Unlike the magician who guards his sleight of hand with care, Future Advocacy reveals how the trick was turned. First, they choose the source video: the clip that supplies the base image and movement of the person they are going to fake. Then they analyze the words the person uses most often and write a script that sounds like something that person would say. After that, a synthetic voice is laid in and aligned with the mouth movements.
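To make those steps concrete, here is a minimal Python sketch of that pipeline. Every helper below is a hypothetical stand-in, not Future Advocacy’s actual tooling; in a real deepfake, each step is backed by its own trained model.

```python
# A minimal sketch of the pipeline described above. All helpers are
# hypothetical placeholders, not a real deepfake library.

def load_frames(path: str) -> list:
    """Step 1: the source clip supplies the base image and movement."""
    return [f"frame from {path}"]           # placeholder frames

def draft_script(corpus: list[str]) -> str:
    """Step 2: mine the subject's most-used words, draft a plausible script."""
    return " ".join(corpus[:20])            # placeholder 'script'

def synthesize_voice(script: str) -> bytes:
    """Step 3a: synthesize audio in the subject's voice."""
    return script.encode()                  # placeholder audio

def align_lips(frames: list, audio: bytes) -> list:
    """Step 3b: re-time mouth movements so they match the new audio."""
    return frames                           # placeholder alignment

def build_fake(source_video: str, corpus: list[str]) -> tuple[list, bytes]:
    frames = load_frames(source_video)
    script = draft_script(corpus)
    audio = synthesize_voice(script)
    return align_lips(frames, audio), audio
```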
Last month, the U.S. Senate passed the Deepfake Report Act, which “would require the Department of Homeland Security to publish an annual report on the use of deepfake technology that would be required to include an assessment of how both foreign governments and domestic groups are using deepfakes to harm national security.”
The Senate became more concerned about the problem earlier this year when a parody video of Nancy Pelosi was released that made her look drunk. The video was not actually a deepfake but a genuine clip slowed down to make her appear sluggish. Still, it was enough to strike fear into the hearts of legislators.
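That kind of manipulation requires no machine learning at all. A few lines of Python with OpenCV can slow a clip simply by rewriting it at a lower frame rate; the file names here are illustrative, and audio would need to be slowed separately (with pitch correction) to complete the effect.

```python
import cv2  # pip install opencv-python

# Rewrite a clip at 75% of its original frame rate, making the subject
# appear sluggish: the same basic trick used on the Pelosi clip.
# "input.mp4" / "slowed.mp4" are illustrative file names.
cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("slowed.mp4", fourcc, fps * 0.75, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)  # identical frames, slower playback

cap.release()
out.release()  # note: OpenCV writes no audio track
```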
While the Deepfake Report Act is a step toward understanding how the technology is used, tools to detect it are still needed. Facebook, ever in the spotlight when it comes to criticism of big tech, has dedicated $10 million to the study of deepfakes.
The Pentagon’s Defense Advanced Research Projects Agency (DARPA) has been researching deepfakes, learning first how to make them so that it can learn how to detect them. The creation of deepfakes depends entirely on computer analysis, and so does their detection.
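As one illustration of what that computer analysis can look like (a crude heuristic for this article, not DARPA’s method), a detector might flag frames whose sharpness is a statistical outlier for the clip, since face-swapped regions often show blurring artifacts:

```python
import cv2
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of the Laplacian: a standard, crude blur/focus measure."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def flag_suspect_frames(path: str, z_threshold: float = 3.0) -> list[int]:
    """Flag frames whose sharpness is an outlier for the clip.

    Purely illustrative: real detectors are trained models,
    not a single hand-picked statistic.
    """
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(sharpness(frame))
    cap.release()

    scores = np.array(scores)
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return [i for i, s in enumerate(z) if abs(s) > z_threshold]
```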
It’s a good bet that while Future Advocacy raises awareness of this problem and the Pentagon works out how to combat it, those who would sow the seeds of chaos around the world are working just as hard to make their fakes undetectable.
The very concept of reality is under threat. Libel and defamation laws could punish those who make faked campaign videos such as the one conjured by Future Advocacy. But where does that leave us with regard to the videos that go undetected? Even when a video, like the slowed Pelosi clip, is proved to be false, the damage is already done. That clip went viral before anyone raised a question, probably before Pelosi saw it herself.
Even more recently, friends of the Royals have floated the theory that the infamous photo of Prince Andrew with his 17-year-old accuser, Virginia Roberts Giuffre, is “doctored” and that “his fingers look too chubby.”
Giuffre responded by saying “This photo has been verified as an original and it’s been since given to the FBI and they’ve never contested that it’s a fake. I know it’s real. He needs to stop with all of these lame excuses. We’re sick of hearing it. This is a real photo. That’s the very first time I met him.”
As this recent example illustrates, the implications go beyond fooling voters. Allegations of deepfakery could be used to cover up crimes or, in other cases, to falsely implicate people in them.
If the goal of those who make deepfakes is to create chaos and confusion in the U.S. and the U.K., they are proving they are already capable of success. We must maintain our vigilance, good humour, and wariness of everything that flickers across our screens. Yet this wariness, this inability to rely on trusted sources, is exactly the chaos, confusion, and disorder that bad actors have engendered. When we don’t know who to trust, when we can’t believe our own eyes, when every conceivable source of data and information needs to be interrogated, where does that leave us?
In many ways, humans make snap judgements. Perhaps it’s a remnant of a survival instinct, a fight-or-flight impulse. But thinking on our feet, making quick determinations, is how we get through life. We do not question everything, because there is simply not enough time in the day. If we find we are unable to trust new sources of information, we may lock down our views, solidify them, and begin to believe that anything contradicting them is false.
The hardest part, for each individual, in dealing with this emerging technology is not knowing what incoming information to trust. That means that when we read or see something that confirms a view we hold dear, we should question it, challenge it, investigate it. We need to know why we believe what we believe, and not assume something is true just because it feels right (or wrong) to us. As deepfakes threaten our reality in every sphere from education to crime to democracy, we must stay aware of what is being thrown at us. If not, it is going to knock us over.