The Newest WMD: Weapon of Mass Deception


When we think of a security threat, we may picture something we can see (like climate change) or cannot see (like cyber hacking). But what if the threat were believing your own eyes? That is precisely the question deepfakes raise.

What are Deepfakes?

Deepfakes are video and audio forgeries produced by artificial intelligence programs that realistically portray people doing and saying things that never happened. Deepfakes are among the greatest emerging threats to U.S. security because their power to disinform and divide the public can undermine figures and institutions and lead to violent confrontation.

When trying to understand deepfakes and their significance, it helps to be familiar with the concepts of artificial intelligence and machine learning. While these terms are often used interchangeably, they refer to distinct things. Machine learning is the field within computer science concerned with designing algorithms that adapt their models to improve their ability to predict; artificial intelligence is the application of those techniques to simulate intelligent behavior and problem-solving capabilities in machines and software.[2]
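To make the definition of machine learning concrete, here is a minimal, purely illustrative sketch (not taken from any deepfake system) of an algorithm adapting its model to improve its predictions. The toy data, starting weight, and learning rate are all invented for illustration:

```python
# A tiny model: one adjustable weight w, predicting y from x as w * x.
# "Learning" means repeatedly nudging w to shrink the prediction error.

data = [(1, 2), (2, 4), (3, 6)]  # toy examples where the true rule is y = 2x

w = 0.0              # the model starts out knowing nothing
learning_rate = 0.05

for step in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x  # adjust w in the direction that reduces error

print(round(w, 2))  # the learned weight approaches 2.0, the true rule
```

Deep learning scales this same idea up to millions of adjustable weights; that is what allows a model to "learn" a face well enough to synthesize it convincingly.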

Essentially, the term deepfake combines deep (machine) learning and fake to describe an audio or visual clip that appears true but is in fact false. What started in 2017 as a technology used for satire and adult entertainment has opened the door to major disinformation campaigns, discord around the globe, and far greater potential for disruption to U.S. security. The world is already behind in identifying altered content: far more people are working on creating deepfakes than on detecting them.

“The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1,” said Hany Farid, a University of California, Berkeley professor, in an interview with The Washington Post.

According to a report from Sensity, the number of deepfakes online has been roughly doubling every six months since December 2018—a nearly 100% increase by July 2019. While the majority of doctored audio and visual media still targets the adult entertainment industry rather than business sectors or elected officials, the report notes that those categories are increasingly becoming targets. In an interconnected world, inflammatory media such as disinformation and hate speech is sure to exacerbate divisions across countries and regions, ultimately begetting violent domestic and international consequences.

The deepfake problem is global. It has impacted political leaders from Gabon to Malaysia, leading to political scandals and an unsuccessful coup. On an individual level, deepfakes can discredit people, ruin reputations (as when prominent figures are forged into pornographic videos), and even destroy credit when identities are recreated for financial harm. On a national level, deepfakes can sow mass discord and undermine the public’s trust in their government and non-partisan institutions. On a global level, deepfakes may dissuade a leader from joining a multilateral agreement or pledging aid, erode international accountability as media becomes contested, or encourage mass atrocities such as ethnic cleansing.

In 2019, a survey conducted in the U.S. found that 63% of Americans believed that videos and images were altered to mislead the public and push made-up news.

The country and the world recently witnessed a pro-Trump mob violently storm the U.S. Capitol—the tangible impact of a disinformation campaign aimed at the presidential election that ultimately left our lawmaking institutions vulnerable. Misinformation and disinformation campaigns continue to sow political division in the U.S. and around the world. The alarming throughline is that the security threat of undermined institutions and civil unrest in the U.S. will grow so long as awareness lags behind action to combat software that has become simpler, cheaper, and accessible to virtually anyone.

Much as the presence of deepfakes is growing exponentially, the legal and ethical questions raised by combating them are growing as well. This technology, while used primarily in harmful ways, can be and is used in the medical industry to advance health needs. Additionally, free speech and ownership laws surrounding deepfakes—which technically are not copyrighted footage—have enabled tech giants to invoke their legal immunity from content posted on social media platforms, collectively offering spotty policies and frameworks for this content, if any at all.

Currently, most deepfake software performs simple face swapping, but it is only a matter of time before other techniques, such as altering lip or body movement, become just as prevalent. Fortunately, steps can be taken to minimize the presence of deepfakes and prevent them from triggering disruption among political leaders, but doing so will require the cooperation of policymakers, the global legal community, and, most importantly, big tech to combat the distribution and ill-intentioned use of this software.

Emily Nunez

Emily is a Latina political mobilizer and community engagement specialist. She seeks to inform and energize policymakers and the public to push for individualized, humane solutions to complex policy issues in technology and foreign affairs through organizing and policy advocacy. Emily holds a Bachelor of Arts in Politics and Law from Bryant University. Follow Emily on Instagram: @emilyjnunez2

