
How AI Creates and Spreads Disinformation and What Businesses Can Do About It

by Erich Kron, July 28th, 2024

Too Long; Didn't Read

Disinformation is one of the biggest threats to governments, businesses, and societies. Organizations must develop a deep understanding of AI’s capabilities, its potential for misuse, and the ways in which its harmful effects can be mitigated. Businesses should train employees to identify deepfakes. Learn about the challenges and risks posed by fake narratives and manipulated algorithms, and discover how to combat AI-driven disinformation and safeguard trust.

Combating the deluge of disinformation: Best practices for organizations in the age of AI.




Since the emergence of AI technologies, disinformation has been skyrocketing. As we navigate this new age of content fabrication, manipulation, and deception, it’s crucial that organizations develop a deep understanding of AI’s capabilities, its potential for misuse, and the ways in which its harmful effects can be mitigated.

AI Is Responsible for a Deluge of Disinformation

Generative AI enables bad actors to quickly, cheaply, and easily produce synthetic media (a.k.a. deepfakes): hyper-realistic images, videos, and voice impersonations of real people. Fake audio and video clips depicting individuals saying or doing things they never said or did can be weaponized in social engineering attacks, fuel conspiracy theories, and manipulate public opinion.


AI can also be used to create highly targeted and personalized disinformation narratives. For example, AI can analyze an individual’s online behavior, hobbies and interests, connections, location, work experience, and history to make fabricated content more plausible. Such microtargeting can be highly effective in influencing business decisions or harming reputations.


AI can significantly magnify the speed, scale, and spread of false content. Social media bots can be deployed to share and promote disinformation at scale; recommendation engines can be intentionally programmed to create so-called “filter bubbles,” in which users are exposed only to information (including fraudulent content) that reinforces their existing biases, preferences, and online behaviors.
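
To see why filter bubbles form, consider a deliberately simplified sketch (a toy model, not any real platform’s algorithm; the topic names and probabilities below are invented for illustration): a recommender that weights content purely by past engagement will compound even a small initial bias until the user sees little else.

```python
import random
from collections import Counter

TOPICS = ["politics", "sports", "health", "finance", "entertainment"]

def recommend(history: Counter, k: int = 5) -> list:
    # Weight each topic by past engagement, plus a small floor so
    # unseen topics still have a nonzero chance of appearing.
    weights = [history[t] + 0.1 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=k)

history = Counter({"politics": 3})  # a slight initial bias

for _ in range(10):
    for item in recommend(history):
        # Assume users engage more with topics they already follow.
        share = history[item] / sum(history.values())
        if random.random() < 0.2 + 0.8 * share:
            history[item] += 1

print(history.most_common())  # the initial bias now dominates the feed
```

Run it a few times: the topic with the small head start almost always ends up dominating the engagement history. That feedback loop is exactly what makes such systems attractive targets for manipulation.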


AI can also be operationalized to control and contaminate information more broadly. For example, the algorithms of search engines and other information discovery channels can be manipulated to artificially boost the visibility (or perceived importance) of certain content, altering or influencing narratives around trending topics, issues, or events. Search engines like Google are already struggling with a rising tide of low-quality AI-generated content that pollutes search results and erodes their quality.


AI has made it trivially easy for scammers and bad actors to create fake profiles, fake reviews, and fake websites. What’s more, generative language models can produce large volumes of original content, enabling propagandists to sound authoritative on a subject without having to copy and paste the same text across multiple websites.


This makes it difficult for everyday users to distinguish between truth and falsehood and can undermine efforts to fact-check or debunk disinformation.


Nation-state threat actors are using AI tools to conduct large-scale influence campaigns aimed at destabilizing democratic institutions and processes. The U.S. Department of Justice recently announced the disruption of a Russian bot farm whose nearly 1,000 accounts on X (formerly Twitter) impersonated Americans using AI-generated profile pictures and biographies.


On underground market forums, threat actors are offering disinformation-as-a-service. This commercialization of disinformation is making it easier for amateur threat actors who lack the infrastructure and technical skills to launch their own large-scale and highly coordinated disinformation campaigns. As the market for such services grows, disinformation will become more widespread and commoditized.

How Can Businesses Combat AI Disinformation?

Disinformation is one of the biggest threats to governments, businesses, and societies. Sadly, research suggests that people are often more likely to trust AI-generated content than human-generated content. If efforts are not made to stop this wild dissemination of false and misleading information, it will ultimately lead to an erosion of societal trust and the potential unraveling of our values.


Regulators around the world have sprung into action, working on mandates and policies to thwart or curtail disinformation. But governments cannot tackle this alone. It’s also incumbent on businesses to foster security awareness around these issues and to develop effective strategies for countering these threats. Below are some best practices:


  1. Teach employees to fact-check: Encourage the workforce to exercise critical thinking; to fact-check suspect content; to rely on independent, diverse, and trusted sources of information; to share responsibly; and to report fake stories or identities they encounter online.


  2. Provide real-world training: Phishing simulation exercises can train employees to identify deepfakes, social engineering and phishing scams, bogus content, and fake websites. Use gamification to boost training participation and engagement. For example, run a fact-check challenge (pick a controversial topic and ask users to fact-check it), a “spot the deepfake” contest (using websites like WhichFaceIsReal.com), or a “find the real image” competition (provide an AI-altered image and ask users to track down the original photo). A minimal quiz sketch follows this list.


  3. Update user policies: Provide clear instructions on how employees must identify and report disinformation to the security team. Promote fact-checking websites such as FactCheck.org, PolitiFact, and Snopes. Instruct users to practice the SIFT method (stop; investigate the source; find better coverage; trace claims to the original context) before believing, clicking, or sharing content.


  4. Foster collaboration and conversation: Engage in open conversations with employees, support research and development efforts, advocate strict adherence to cybersecurity policies, promote the use of responsible AI, and partner with analysts, regulators, institutions, and associations to develop collective strategies. These steps can help safeguard the business and mitigate the spread and ill effects of disinformation.
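
As promised in item 2, here is a minimal sketch of a command-line “spot the deepfake” quiz for a training session. The filenames and answers below are hypothetical placeholders; a real contest would pair curated genuine images with AI-generated ones (for example, samples drawn from WhichFaceIsReal.com).

```python
import random

# Placeholder data: (image shown to the trainee, whether it is AI-generated)
ITEMS = [
    ("portrait_a.jpg", True),
    ("portrait_b.jpg", False),
    ("press_photo_c.jpg", True),
    ("press_photo_d.jpg", False),
]

def run_quiz() -> None:
    random.shuffle(ITEMS)
    score = 0
    for filename, is_fake in ITEMS:
        answer = input(f"Is {filename} AI-generated? [y/n] ").strip().lower()
        if (answer == "y") == is_fake:
            score += 1
            print("Correct!")
        else:
            print("Incorrect. Take a closer look next time.")
    print(f"Final score: {score}/{len(ITEMS)}")

if __name__ == "__main__":
    run_quiz()
```

Scores from a session like this can feed a simple leaderboard, which is often all it takes to drive the participation that gamification is meant to create.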


While the challenges posed by AI-fueled disinformation are significant, they are not insurmountable. By staying vigilant and proactive, and by committing to truth, training, and collaboration, businesses can help mitigate disinformation, preserve consumer trust, and boost market reputation.


About the Author

A 25-year veteran information security professional with experience in the medical, aerospace, manufacturing, and defense fields, Erich Kron is a Security Awareness Advocate for KnowBe4. An author and regular contributor to cybersecurity industry publications, he was a security manager for the U.S. Army’s 2nd Regional Cyber Center-Western Hemisphere and holds CISSP, CISSP-ISSAP, SACP, and many other certifications.


Erich has worked with information security professionals around the world to provide the tools, training, and educational opportunities to succeed in information security. LinkedIn: https://www.linkedin.com/in/erichkron/