AI is a dangerous weapon in the era of disinformation

by The Tech Panda · 2025/04/13

Too Long; Didn't Read

AI is being misused to spread disinformation, create harmful fakes, and commit financial fraud. From celebrity deepfakes to banking scams and eCommerce fraud, the consequences are alarming. As lawmakers and tech companies scramble to regulate, the debate over open vs. controlled AI intensifies.


AI's ability to spread disinformation is reaching disturbing new heights, and perhaps the most alarming form is its weaponization through deepfakes. Recently, Scarlett Johansson called for a ban on deepfake technology after a video appeared online featuring an AI-generated version of her.


“… I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality,” the actress told People Magazine.

People are frequently turning to AI to speed things up or to create sensation, especially in the media industry. Last year, a Wyoming reporter was caught using AI to create fake quotes and stories. Creating sensational stories with the help of AI has already proven dangerous: the anti-migrant violence in the UK was born from online misinformation.

After three girls were tragically stabbed in the UK, rioters created AI-generated images that incited hatred and spread harmful stereotypes. As per The Guardian, far-right groups also used AI music generators to create songs with xenophobic content. This content spread across apps like TikTok via efficient recommendation algorithms.

Last October, according to Wired, AI-powered search engines from Google, Microsoft, and Perplexity were found to be promoting scientific racism in search results.

Remember when Elon Musk’s xAI released Grok-2, an image generator using Flux with almost no safeguards? The feature allowed users to create uncensored deepfakes, such as Vice President Kamala Harris and Donald Trump posing as a couple, sparking deep concerns: is this unprecedented creative freedom, or a dangerous threat to democracy and the integrity of public discourse?

The deepfakes of Taylor Swift, female politicians, and children that went viral last year are forcing tech companies to sit up and pay attention. Henry Ajder, a generative AI expert who has studied deepfakes for nearly a decade, said: “We are at a tipping point where the pressure from lawmakers and awareness among consumers is so great that tech companies can’t ignore the problem any longer.”


Google is taking steps to keep explicit deepfakes from appearing in search results. Watermarks and protective shields haven’t really worked so far, but regulation is being stepped up. The UK, for example, has banned both the creation and distribution of nonconsensual explicit deepfakes, the EU has its AI Act, and the US has been pushing for the Defiance Act.

Meanwhile, startups like Synthesia promise hyperrealistic deepfakes with full bodies that move and hands that wave. Deepfakes are just getting a whole lot more realistic. How will we stop the evil side of this?


AI is aiding financial fraud

AI-generated fake news spread on social media is heightening the risks of bank runs, according to a new British study that says lenders must improve monitoring to detect when disinformation risks impact customer behavior. Other kinds of fraud are also rampant.

Also, Juniper Research predicts that the value of eCommerce fraud will rise from US$44.3 billion in 2024 to US$107 billion in 2029, a growth of 141%.
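
That 141% figure follows directly from the two projected values. As a quick sanity check, here is a minimal sketch in Python; the dollar amounts are the ones quoted above, and the variable names are purely illustrative:

    # Rough check of the quoted eCommerce fraud growth projection
    fraud_2024 = 44.3   # projected eCommerce fraud value in 2024, US$ billions
    fraud_2029 = 107.0  # projected eCommerce fraud value in 2029, US$ billions

    growth_pct = (fraud_2029 - fraud_2024) / fraud_2024 * 100
    print(f"Projected growth 2024-2029: {growth_pct:.1f}%")  # ~141.5%, consistent with the ~141% cited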

All thanks to AI, which is fueling the sophistication of attacks across the eCommerce ecosystem, with the use of AI-generated deepfakes to defeat verification systems constituting a key threat. This threat, combined with rising levels of “friendly fraud”, where the fraud is committed by the customer themselves, such as refund fraud, is increasingly threatening merchants’ profitability.



AI is helping fraudsters stay ahead of security measures and commit sophisticated attacks at a larger scale.

Should AI be as open as the internet?

Meta’s AI chief, Yann LeCun, has urged that AI should be as open as the internet, since eventually all our interactions with the digital world are going to be mediated by AI assistants. LeCun explained that platforms like ChatGPT and Llama will constitute a repository of all human knowledge and culture, creating a shared infrastructure like the internet today.

He said that we cannot have a small number of AI assistants (OpenAI’s ChatGPT and the like) controlling the digital diet of every citizen across the world. “This will be extremely dangerous for diversity of thought, for democracy, for just about everything,” he added.

As AI becomes more and more human-like, we must remember that it is still not human. As Microsoft’s Satya Nadella told Bloomberg Technology, AI is software and it doesn’t display human intelligence.

“It has intelligence, if you want to give it that name, but it’s not the same intelligence that I have,” he said.

Navanwita Bora Sachdev, Editor, The Tech Panda

