Decoding the Threat: AI-Driven Misinformation and U.S. National Security

Harun Vemulapalli, Apr 28, 2024

In response to rumors that US intelligence agencies were aware of a potential bombing or hijacking before 9/11, then-United States Secretary of State Colin Powell said, “Yes, it’s true.” He explained shortly after the attacks, “We received information about something like this, bombings, and so on. But we always receive lots of information we are not able to process or even to see. We have too much of it, this is the problem. We have too much information” [1].


The digital age has long been the status quo, and humans have become accustomed to living in a world where information is readily available whenever and wherever it is needed. However, the capacity to produce information has grown at unprecedented rates, especially with the development of generative Artificial Intelligence (AI). This growth is fundamentally at odds with humans' limited ability to analyze large volumes of information.

Professors Eppler and Mengis characterize this relationship as an inverted U-curve: more information correlates with better decision-making only up to a maximum point, beyond which additional information degrades decisions rather than improving them [2]. Moreover, the human capacity to sift through information is limited and unlikely to improve [3]. There is a foundational imbalance between how much information is produced and how well humans can digest it. Thus, the oversaturation of information, a security vulnerability in the past, will continue to threaten national security today.
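To make the shape of that inverted U-curve concrete, here is a minimal Python sketch that models decision quality as a simple concave function of information load. The quadratic form, the optimum at a load of 50, and the scale constant are illustrative assumptions chosen for this sketch, not parameters from Eppler and Mengis's framework.

```python
# Toy inverted U-curve: decision quality rises with information load up to
# an optimum, then falls as overload sets in. The quadratic form and the
# optimum at load = 50 are illustrative assumptions, not values taken from
# Eppler and Mengis [2].
def decision_quality(load: float, optimum: float = 50.0, scale: float = 1000.0) -> float:
    return 1.0 - (load - optimum) ** 2 / scale

for load in (0, 25, 50, 75, 100):
    print(f"information load {load:3d} -> decision quality {decision_quality(load):+.2f}")
```

The output peaks at the optimum and declines symmetrically on either side: past the turning point, each additional unit of information makes decisions worse, which is precisely the regime this article is concerned with.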


While the quality of AI-produced information still has room for improvement, the quantity of content produced is a unique threat [4]. AI can generate fake text, audio files, and even strikingly realistic videos of famous figures. Deepfake videos, for example, have become extremely popular on social media sites like Instagram and TikTok and have been used for politically malicious purposes. One deepfake video showed Ukrainian President Volodymyr Zelenskyy announcing a surrender to Russia. Right before the New Hampshire primary election, an AI-generated robocall imitating President Biden told residents not to attend the polls [5]. AI was also used to produce an audio clip of London Mayor Sadiq Khan appearing to dismiss Armistice Day, the holiday commemorating the end of World War I; the clip then circulated widely among far-right groups [6]. While the impact of AI-generated media is currently low, the volume of media produced is likely to grow as the barrier to entry for the technology falls. Through the production of misinformation, deepfakes, and propaganda, AI can aid the operations of terrorist organizations and the continued rise of autocratic states. The United States must therefore invest resources to counter the emerging threat.


AI-generated content is particularly dangerous because it can exploit psychological vulnerabilities to influence the decisions of citizens, leaders, and adversaries. Studies indicate that individuals tend to trust their intuition that content is unaltered, accepting AI-manipulated audiovisual material at face value instead of scrutinizing it analytically [7]. Even imperfect media, with glitches and visible signs of manipulation, often go unrecognized as fake, and people routinely overestimate their ability to detect deepfakes. In a University of Amsterdam study, participants judged around 70 percent of the videos shown to them as authentic even after being told that only half were real; even if every genuine video had been labeled correctly, at least 40 percent of the fakes must have been accepted as real [8]. Overconfidence, inattention, and the tendency to take media at face value leave individuals and nations vulnerable to influence by synthetic media.


While ordinary citizens are frequent targets of misinformation, intelligence analysts face an understated threat: an oversaturation of information [9]. Humans can only process so much information at a time, and information overload results in poor decision-making. Just as the sheer volume of media affects everyday citizens on social media apps, the flood of data requiring analysis degrades the performance of government institutions. Agencies like the National Security Agency and the Department of Defense are not institutionally equipped to handle such extreme amounts of data, causing them to accomplish less with more [9]. AI functions more as a culprit than a solution in this age of info-saturation: according to the European Union Agency for Law Enforcement Cooperation (Europol), AI-generated media may make up as much as 90 percent of online content by 2026 [10]. Adversaries can “barrage jam” the United States with mass-produced media, much as the electronic-warfare tactic of the same name blinds radar systems with noise [11]. And for better or worse, AI technology is becoming more readily available at a lower cost.
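As a rough illustration of why sheer volume is itself a weapon, the sketch below assumes a fixed daily triage capacity and shows how the expected number of genuine items analysts actually review collapses as synthetic content floods the stream. Every number in it is an invented assumption for illustration, not an estimate of any agency's real workload.

```python
# Back-of-the-envelope "barrage jamming" arithmetic: review capacity is
# fixed, so as synthetic items swamp the stream, the expected number of
# genuine items that receive human attention collapses. All figures are
# invented for illustration.
DAILY_REVIEW_CAPACITY = 10_000  # items analysts can triage per day (assumed)
GENUINE_ITEMS = 50_000          # genuine items in the daily stream (assumed)

for synthetic_items in (0, 100_000, 1_000_000, 10_000_000):
    total = GENUINE_ITEMS + synthetic_items
    genuine_share = GENUINE_ITEMS / total
    # With undirected triage, reviewed genuine items scale with the genuine share.
    genuine_reviewed = DAILY_REVIEW_CAPACITY * genuine_share
    print(f"synthetic={synthetic_items:>10,}  genuine share={genuine_share:7.2%}  "
          f"genuine items reviewed={genuine_reviewed:8,.0f}")
```

The particular numbers do not matter; the shape does. Capacity stays constant while the denominator grows without bound, which is the “accomplish less with more” dynamic described above.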


AI exemplifies a sector where the cost and complexity of development have fallen in recent years. In contrast to capital-intensive technologies like nuclear weapons or ballistic missiles, AI can be developed even by those without comprehensive technical knowledge [12]. Because AI is a general-purpose technology, regulation has lagged, leaving a wide range of entities free to create and access it, and it appeals to state and non-state actors alike. China, for example, has accelerated its focus on AI Research & Development (R&D), investing a larger share of the People’s Liberation Army’s (PLA) defense budget in AI development than the United States does [13]. China’s capacity to manipulate social media users will only expand, and reports indicate that Chinese actors have already used generative AI to cultivate a more favorable perception of the Chinese Communist Party (CCP) [14]. China can shape how its various geopolitical strategies are framed: it may use thousands of fake AI-driven accounts to drown out human rights reporting, legitimize its territorial claims involving Taiwan and Hong Kong, and promote its expansion into Africa and Latin America. Generative AI may allow China to blur narratives that oppose its interests while allowing other state and non-state adversaries of the United States to amplify political divisions and manufacture controversy.


Terrorist groups and other non-state actors face inherent asymmetric disadvantages against state actors, which incentivizes them to pursue cheap technology. AI is a force multiplier that lets non-state actors offset power disparities. Like state adversaries, terrorists can use generative AI to produce propaganda on social media [15]. Generative AI lets terrorist groups spread their ideology, radicalizing users with selective information to boost recruitment. Deepfakes in particular can incite hate, extremism, and terror among social media users. The Islamic State of Iraq and Syria (ISIS), for example, has created AI-generated content that romanticizes radicalization and the experience of belonging to a terrorist group. Terrorist organizations in India, such as the Resistance Front and the Tehreeki-Milat-i-Islami group, have similarly exploited generative AI to produce videos and photos designed to sow division among young people. There is also untapped potential for terrorists to use AI to identify individuals already sympathetic to their causes and to target them with tailored recruitment messages.


AI can also supplement larger conventional attacks against the United States and its allies. The ability to generate media at will can be used to legitimize military action. China, for example, could use generative AI to manufacture the impression of broad support for a potential invasion of Taiwan, or to fabricate an incident of Taiwanese aggression that justifies a PLA response [16]. Deepfakes and other synthetic media could likewise justify preemptive military action by depicting plans for Taiwanese or US incursions: if AI-generated plans for a Taiwanese invasion of China were ‘leaked,’ China could cite them to justify a preemptive invasion of Taiwan. Returning to the doctored video of Zelenskyy instructing soldiers to surrender to Russian forces, it is clear that generative AI can also falsify orders during a conflict. Falsified messages can influence troop positioning, behavior, and vulnerability, supplementing a military’s force with the power of disinformation. Even as online communities grow accustomed to a post-truth environment, that changing landscape will bleed into how warfare is waged. AI-generated audio and video already permeate the war in Israel and Palestine: a fake video depicted Jordan’s Queen Rania claiming that Jordan was pro-Israel, and another depicted Hamas fighters destroying Israeli helicopters and tanks [17]. There is no clear way for citizens to distinguish real from fake media in wartime. If leaders attempt to broadcast instructions during a war, citizens’ hesitancy to accept them could prove detrimental; likewise, soldiers could be encouraged to surrender or retreat by doctored media of victorious enemies.


The least recognized effect of generative AI as a supplemental weapon may be its effect on the citizenry. Governments, especially democratic ones, remain mindful of public opinion even during wartime. The military-industrial complex relies on popular support, and doctored media could create divisions that impede the government’s ability to respond to an attack. While resistance to military action is sometimes justifiable, a loss of public support during a legitimate crisis can be detrimental to national security. In the United States, where agreement among political leaders is already scarce, divisions along ideological lines are holding up critical aid to Ukraine and spilling over into the public sphere. Even perceived minor costs, such as economic investment or military casualties, can erode public support [18]. Deepfakes portraying military forces committing atrocities or acting extra-judicially could easily turn the public against a military response to a legitimate threat. Similarly, generative AI can depict political allies making unsavory comments about one another, or discredit leaders by showing them making controversial statements.


Why is the United States so vulnerable to such a dangerous threat? The Western emphasis on free speech makes combating misinformation and propaganda more complicated, and there is resistance to tactics like information filtering or regulation [19]. Many approaches are deemed unacceptable because they would threaten the Internet as a place of freedom. As a result, the United States will have to walk a fine line; freedom may come at the price of security.


Because policy change would require a drastic shift in public opinion, the United States must continue researching and funding efforts to use AI to counter AI. The ability to detect AI-generated media must improve at a rate that matches or exceeds the evolution of deepfake techniques [20]. Because the cost of adopting the technology for nefarious purposes is low, the United States can do little to prevent adversaries from accessing it. Instead, the US must engage in the information arms race, knowing there will be, as Beshoy Benjamin frames it, an “endless loop of technological one-upmanship” [20]. Even though better detection technology feeds back into newer creation techniques, the US cannot afford to drop out of this cycle: the cost of falling behind adversaries outweighs the cost of continuing it. AI can identify and combat misinformation and propaganda online with high accuracy, and it can analyze enormous information loads without fatigue. Continued investment in AI thus helps ensure that no adversary gains a decisive advantage. The future will be determined on an invisible front, with AI fighting itself in an endless battle.
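To make “AI to counter AI” concrete at the smallest possible scale, here is a minimal Python sketch that trains a toy text classifier to flag likely synthetic or manipulative posts. The four labeled examples are invented for illustration, and real detection systems rely on far larger corpora and far richer signals (stylometry, provenance metadata, watermark artifacts); this is a sketch of the idea, not a working detector.

```python
# Toy "AI to counter AI" detector: a bag-of-words classifier trained on a
# tiny invented dataset. Real systems use vastly larger corpora and richer
# features; this only sketches the mechanism.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "breaking footage shows troops surrendering share before it is deleted",
    "leaked audio proves the mayor canceled the holiday spread the word",
    "city council minutes from tuesday discuss the new crosswalk proposal",
    "eyewitness account of storm damage on elm street this morning",
]
labels = [1, 1, 0, 0]  # 1 = synthetic/manipulative, 0 = authentic (invented labels)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score an unseen item. Unlike a human analyst, the model's throughput
# scales with compute rather than attention, and it does not fatigue.
score = detector.predict_proba(["leaked video reveals secret surrender order share now"])[0][1]
print(f"estimated probability synthetic: {score:.2f}")
```

Even this toy exhibits the property the paragraph above leans on: once trained, the classifier scores items at machine speed without tiring, so detection throughput can, in principle, keep pace with generation.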


Sources

[1] Berardi, Franco, and Erik Empson. Precarious Rhapsody: Semiocapitalism and the Pathologies of the Post-alpha Generation. Minor Compositions, 2009.

[2] Eppler, Martin, and Jeanne Mengis. “A Framework for Information Overload Research in Organizations: Insights from Organization Science, Accounting, Marketing, MIS, and Related Disciplines.” ResearchGate, 2003.

[3] Marr, Bernard. “Why Too Much Data Is Stressing Us Out.” Forbes. November 25th, 2015. www.forbes.com/sites/bernardmarr/2015/11/25/why-too-much-data-is-stressing-us-out/?sh=c61074df7630.

[4] Allen, Gregory C. “Advanced Technology: Examining Threats to National Security.” CSIS, 2023, www.csis.org/analysis/advanced-technology-examining-threats-national-security.

[5] Higgins, Neal. “From Disinformation to Deepfakes: The Evolving Fight to Protect Democracy.” The Hill. February 20th, 2024. thehill.com/opinion/technology/4470012-from-disinformation-to-deepfakes-the-evolving-fight-to-protect-democracy/.

[6] Sabbagh, Dan. “Faked Audio of Sadiq Khan Dismissing Armistice Day Shared among Far-Right Groups.” The Guardian. November 10th, 2023. www.theguardian.com/politics/2023/nov/10/faked-audio-sadiq-khan-armistice-day-shared-among-far-right.

[7] Appel, Markus, and Fabian Prietzel. “The Detection of Political Deepfakes.” Journal of Computer-Mediated Communication, vol. 27, no. 4, July 2022, zmac008. https://doi.org/10.1093/jcmc/zmac008.

[8] Köbis, Nils C., et al. “Fooled Twice: People Cannot Detect Deepfakes but Think They Can.” iScience, vol. 24, no. 11, November 1st, 2021, p. 103364. https://doi.org/10.1016/j.isci.2021.103364.

[9] Young, Alex. “Too Much Information: Ineffective Intelligence Collection.” Harvard International Review. August 18th, 2019. hir.harvard.edu/too-much-information/.

[10] Dupré, Maggie Harrison. “Experts: 90% of Online Content Will Be AI-Generated by 2026.” Futurism. September 18th, 2022. www.futurism.com/the-byte/experts-90-online-content-ai-generated.

[11] Helmus, Todd C. “Artificial Intelligence, Deepfakes, and Disinformation: A Primer.” RAND Corporation. July 6th, 2022. www.rand.org/pubs/perspectives/PEA1043-1.html.

[12] Kreps, Sarah. “Democratizing Harm: Artificial Intelligence in the Hands of Nonstate Actors.” Brookings. November 29th, 2021. www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/.

[13] Honrada, Gabriel. “Why US Is Lagging behind China in AI Race.” Asia Times. July 25th, 2023. www.asiatimes.com/2023/07/why-us-is-lagging-behind-china-in-ai-race/.

[14] Myers, Steven Lee. “China Uses ‘Deceptive’ Methods to Sow Disinformation, U.S. Says.” The New York Times. September 28th, 2023. www.nytimes.com/2023/09/28/technology/china-disinformation-us-state-department.html.

[15] Gao, Chongyang, et al. “Deepfakes and International Conflict.” Brookings. January 5th, 2023. www.brookings.edu/articles/deepfakes-and-international-conflict/.

[16] Tucker, Patrick. “How China Could Use Generative AI to Manipulate the Globe on Taiwan.” Defense One. September 10th, 2023. www.defenseone.com/technology/2023/09/how-china-could-use-generative-ai-manipulate-globe-taiwan/390147/.

[17] Walt, Vivienne. “Deepfakes Are Another Front in the Israel-Hamas War That Risk Unleashing Even More Violence and Confusion in the Future: ‘This Is Moving Incredibly Fast.’” Yahoo Finance. December 4th, 2023. finance.yahoo.com/news/deepfakes-yet-another-front-israel-211500187.html.

[18] Larson, Eric V. “Public Support for U.S. Military Operations.” RAND Corporation, 1996, www.rand.org/pubs/research_briefs/RB2502.html.

[19] Davies, Pascale. “How NATO Is Preparing for a New Era of AI Cyber Attacks.” Euronews. December 26th, 2022. www.euronews.com/next/2022/12/26/ai-cyber-attacks-are-a-critical-threat-this-is-how-nato-is-countering-them.

[20] Benjamin, Beshoy. “Weaponizing Intelligence: How AI Is Revolutionizing Warfare, Ethics, and Global Defense.” Modern Diplomacy. September 23rd, 2023. moderndiplomacy.eu/2023/09/23/weaponizing-intelligence-how-ai-is-revolutionizing-warfare-ethics-and-global-defense/.