Malicious Artificial Intelligence Enables Deepfake Kidnapping Scam

Written by thesociable | Published 2023/06/29
Tech Story Tags: artificial-intelligence | deepfakes | evil | wef | security | cybersecurity | good-company | hackernoon-top-story

TL;DR: Jennifer DeStefano gave testimony before the US Senate on artificial intelligence. The Scottsdale, Arizona mom experienced a terror no mother should ever have to face. "AI is revolutionizing and unraveling the very foundation of our social fabric by creating doubt and fear in what was once never questioned," she said.

The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "evil robot"


Earlier this year, Scottsdale, Arizona mom Jennifer DeStefano experienced a terror no mother should ever have to face: the sound of her daughter's sobbing voice crying that she'd been kidnapped.

But it wasn't her daughter on the phone. It was an AI deepfake so convincing that DeStefano was prepared to hand over $50K to the scammers, who told her they would kill her daughter if she didn't pay up.

Today, DeStefano gave a heartfelt testimony before the US Senate, relaying her harrowing and terrifying story to the Judiciary Subcommittee on Human Rights and the Law.

https://twitter.com/SenOssoff/status/1668702476122615813?s=20&embedable=true

"The longer this form of terror remains unpunishable, the farther more egregious it will become. There's no limit to the depth of evil AI can enable"

"Artificial intelligence is being weaponized to not only invoke fear and terror in the American public, but in the global community at large as it capitalizes on, and redefines, what we have known as familiar," said DeStefano.

"AI is revolutionizing and unraveling the very foundation of our social fabric by creating doubt and fear in what was once never questioned – the sound of a loved one's voice," she added.

After retelling the story of her horrific experience with the kidnapping and extortion scammers, which she first shared with AZ Family back in April, the Arizona mom explained just how real the deepfake voice clone seemed:

"It was my daughter's voice. It was her cries; it was her sobs. It was the way she spoke. I will never be able to shake that voice and the desperate cries for help out of my mind."

"No longer can we trust 'seeing is believing,' or 'I heard it with my own ears,' or even the sound of your own child's voice"

Opining on the future of generative AI used for nefarious purposes, DeStefano warned, "The longer this form of terror remains unpunishable, the farther more egregious it will become. There's no limit to the depth of evil AI can enable."

She went on to say, "As our world moves at a lightning-fast pace, the human element of familiarity that lays foundation to our social fabric of what is known and what is truth is being revolutionized with AI – some for good and some for evil.

"No longer can we trust 'seeing is believing,' or 'I heard it with my own ears,' or even the sound of your own child's voice."

When DeStefano found out that her daughter had not been kidnapped, she called the police, but they told her there was little they could do: the call was treated as a prank, since no actual kidnapping had occurred and no money had been exchanged.

"Is this our new normal?" she questioned.

"Is this the future we are creating by enabling the abuses of artificial intelligence without consequence and without regulation?"

"If left uncontrolled, unregulated, and we are left unprotected without consequence, it will rewrite our understanding and perception of what is and what is not truth."

Following her opening remarks, DeStefano was not called upon for questioning until the very end of the hearing, where she reiterated that "not all AI is evil" and that there were a lot of "hopeful advancements in AI" that could improve people's lives.

https://twitter.com/TimHinchliffe/status/1654148049302585346?s=20&embedable=true

While DeStefano clearly outlined meaningful harm arising from bad actors using AI in terrifying ways, Microsoft chief economist Michael Schwarz told the World Economic Forum (WEF) in May that AI shouldn't be regulated until there was meaningful harm.

"We shouldn't regulate AI until we see some meaningful harm that is actually happening – not imaginary scenarios"

Microsoft Chief Economist Michael Schwarz at the WEF Growth Summit 2023

Speaking at the WEF Growth Summit 2023 during a panel on "Growth Hotspots: Harnessing the Generative AI Revolution," Microsoft's Michael Schwarz argued that it would be best not to regulate AI until something bad happened, so as not to suppress its potentially greater benefits.

"I am quite confident that yes, AI will be used by bad actors; and yes, it will cause real damage; and yes, we have to be very careful and very vigilant," Schwarz told the WEF panel.

When asked about regulating generative AI, the Microsoft chief economist explained:

"What should be our philosophy about regulating AI? Clearly, we have to regulate it, and I think my philosophy there is very simple.

"We should regulate AI in a way where we don't throw away the baby with the bathwater.

"So, I think that regulation should be based not on abstract principles.

"As an economist, I like efficiency, so first, we shouldn't regulate AI until we see some meaningful harm that is actually happening – not imaginary scenarios," he added.

On January 23, 2023, Microsoft extended its partnership with OpenAI – the creators of ChatGPT – investing an additional $10 billion on top of the "$1 billion Microsoft poured into OpenAI in 2019 and another round in 2021," according to Bloomberg.


This article was originally published by Tim Hinchliffe on The Sociable.


Written by thesociable | The Sociable is a technology news publication that picks apart how technology transforms society and vice versa.