
What Governor Newsom’s AI Deepfake Regulations Mean for the 2024 U.S. Election

by Victor Dey, October 21st, 2024

Too Long; Didn't Read

Gov. Newsom signed five AI-focused bills into law, three of which tackle the spread of election-related misinformation. The laws come after a high-profile incident in which X CEO Elon Musk shared a deepfake video that falsely portrayed Vice President Kamala Harris calling herself a “diversity hire.”

California Governor Gavin Newsom enacted three new laws targeting the use of AI deepfakes in political campaigns ahead of the 2024 U.S. election.


California Governor Gavin Newsom signed three major bills into law last month to curb the use of artificial intelligence (AI) in creating misleading “deepfake” images and videos for political campaigns. As the state prepares for the November elections, Newsom’s actions extend California’s legal framework around AI: five AI-focused bills were signed, three of which tackle the spread of election-related misinformation. The tighter regulations come after a high-profile incident involving X CEO Elon Musk, who shared a deepfake video that falsely portrayed Vice President Kamala Harris calling herself a “diversity hire.”


Among the enacted regulations, the Defending Democracy from Deepfake Deception Act (AB 2655) requires social media platforms to restrict users from posting “deceptive” content, including altered media that misrepresents a public figure’s words or actions. The act builds on a 2019 law (AB 730), also signed by Newsom, which made it illegal to distribute misleading media intended to discredit political candidates. Another law, AB 2839, described as an “emergency measure,” prohibits the distribution of false election-related content within 120 days of an election in California. A third, AB 2355, mandates that political ads or videos generated using AI disclose that fact to viewers, in the interest of transparent political campaigning.


“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate,” Newsom said in a blog post. “These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.”


The laws come at a pivotal time, as deepfake content created with generative AI (genAI) models has evolved to the point where human eyes can no longer distinguish real media from AI-generated media. According to a 2024 report from Europol, the European Union’s law enforcement agency, 90% of online content could be synthetically generated by 2026. Likewise, a recent KPMG study titled “Deepfakes: Real Threat” found that deepfakes have increased by 900% over the past few years.


However, Newsom’s legislative efforts have ignited a debate on social media over the future of AI regulation. While supporters view the laws as crucial for protecting voters from election misinformation, critics, including Musk, argue that such laws stifle free speech. In a post on X, Musk compared Gov. Newsom to “The Joker.”



“AI-generated content has gone beyond what we can comprehend as real or imitation or parody. What’s clear is that there is AI-generated content being used to be intentionally deceptive. At a minimum, we need to have systems in place that provide context on AI-generated recordings, videos, and images,” Kevin Guo, CEO and co-founder of the genAI content detection platform Hive, told me. “Platforms and organizations must adopt these models to protect consumers and prevent large-scale misinformation campaigns from influencing elections.”
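For illustration, here is a minimal sketch of the kind of pre-publish check Guo describes: a platform scores uploaded media against a third-party detection service and attaches a context label when the AI-generated confidence is high. The endpoint, field names, response shape, and threshold below are hypothetical placeholders, not Hive’s actual API.

```python
import requests

# Hypothetical detection service -- illustrative only, NOT Hive's actual API.
DETECTION_URL = "https://api.example-detector.com/v1/classify"
API_KEY = "YOUR_API_KEY"  # placeholder credential


def ai_generated_score(media_bytes: bytes) -> float:
    """Send media to a detection service and return its AI-generated confidence."""
    response = requests.post(
        DETECTION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": ("upload.jpg", media_bytes)},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated": 0.97, "real": 0.03}
    return response.json().get("ai_generated", 0.0)


def context_label(media_bytes: bytes, threshold: float = 0.9) -> str | None:
    """Return a context label for likely synthetic media, else None."""
    if ai_generated_score(media_bytes) >= threshold:
        return "This media may be AI-generated."
    return None
```

Labeling rather than blocking outright is one of the design choices platforms weigh under laws like AB 2655, which contemplates both removing some deceptive content and adding context to other content.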


Assemblymember Gail Pellerin, author of AB 2839, stressed the urgency of addressing the dangers of deepfakes. “With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally-altered content that can interfere with the election. With the enactment of AB 2839, California is taking a stand against the manipulative use of deepfake technology to deceive voters,” she said in a statement.


California’s deepfake laws could serve as a model for other states. While 20 states, including New York, Texas, and Florida, have already enacted varying degrees of regulation on election-related deepfakes, the effectiveness of these laws remains an open question. Tech giants including Meta and X recently faced criticism for inconsistent genAI content moderation, while others like Microsoft have integrated dedicated moderation services such as Azure AI Content Safety to monitor and moderate AI-generated content. Tech experts suggest that while extending the protection period to 120 days is a step in the right direction, it alone won’t stop the spread of AI-driven misinformation.
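As a rough sketch of what such an integration can look like in practice, the snippet below calls Azure AI Content Safety through its Python SDK to score a post’s text across the service’s harm categories. The endpoint and key are placeholders for your own Azure resource, and note that the service rates content severity rather than detecting deepfakes specifically.

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholders: point these at your own Azure AI Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

client = ContentSafetyClient(ENDPOINT, AzureKeyCredential(KEY))

# Score a caption or post body across the service's harm categories
# (hate, sexual, violence, self-harm); higher severity means more harmful.
result = client.analyze_text(AnalyzeTextOptions(text="Example post text to moderate"))

for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```

A moderation pipeline would typically run checks like this before or shortly after publication and route high-severity results to human reviewers.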


“The difficulty lies not just in creating laws but in enforcing them across digital platforms that span multiple jurisdictions, both national and international. Designing frameworks that are adaptable enough to accommodate rapid advancements in AI is essential,” Dominik Heinrich, Head of AI Design at Coca-Cola, told me. “Legal frameworks, while important, will always lag behind the technological curve. We cannot rely solely on laws; we need to focus on building AI systems that can self-detect and adapt in real-time. We will soon see a world where AI governs AI with human oversight.”


The episode highlights the growing complexity of balancing AI innovation with societal safeguards. For now, California aims to lead by example, but whether these measures effectively combat AI-driven misinformation or unduly curtail free speech will shape the future of AI in political campaigns for years to come.