As We Learn to Use AI, It Is Learning to Use Us

by The Sociable, July 29th, 2023

Too Long; Didn't Read

Historian Yuval Noah Harari tells the UN’s AI for Good Global Summit that “while we are learning to use AI, it is learning to use us.” Harari opined that AI should continue to be developed, but that it shouldn’t be deployed without safety checks and regulations. The historian likened the development of AI to having a virus in the lab, saying it’s OK to develop it, but not to deploy it to the public.


Historian Yuval Noah Harari tells the UN’s AI for Good Global Summit that “while we are learning to use AI, it is learning to use us,” and that we should slow down its deployment, not development.


Speaking on Thursday at the United Nations’ International Telecommunication Union (ITU) AI for Good Global Summit session on “Guardrails needed for safe and responsible AI,” Harari opined that AI should continue to be developed, but that it shouldn’t be deployed without safety checks and regulations.


“While we are learning to use AI, it is learning to use us”

Yuval Noah Harari, AI for Good Global Summit, 2023


“While we are learning to use AI, it is learning to use us,” said Harari, adding, “It’s very difficult to stop the development [of AI] because we have this arms race mentality. People are aware — some of them — of the dangers, but they don’t want to be left behind.


“But the really crucial thing, and this is the good news, the crucial thing is to slow down deployment — not development,” he added.


The historian likened the development of AI to having a virus in the lab, saying it’s OK to develop it, but not to deploy it to the public.


“It’s like you have this very dangerous virus in your laboratory, but you don’t release it to the public sphere; that’s fine”

Yuval Noah Harari, AI for Good Global Summit, 2023


“You can have an extremely sophisticated AI tool in your laboratory as long as you don’t deploy it out into the public sphere. This is less dangerous,” said Harari.


“You know, it’s like you have this very dangerous virus in your laboratory, but you don’t release it to the public sphere; that’s fine. There is a margin of safety there.”


Just as drug companies and car manufacturers have to go through safety checks, Harari argued that the same should apply to AI.


“Now it is possible for the first time in history to create billions of fake people […] If you can’t know who is a real human and who is a fake human, trust will collapse, and with it at least free society”

Yuval Noah Harari, AI for Good Global Summit, 2023


On the subject of AI-generated deepfakes and bots, Harari said, “Now it is possible for the first time in history to create fake people — to create billions of fake people — that you interact with somebody online and you don’t know if it’s a real human being or a bot.


“In a year probably, this conversation that we’re having now, it will be almost impossible to be sure whether you’re talking with a deepfake or with a real human.


“If this is allowed to happen, it will do to society what fake money threatens to do to the financial system.”


“We should better understand its [AI’s] potential impact on society, on culture, on psychology, and on the economy of the world before we deploy it into the public sphere”

Yuval Noah Harari, AI for Good Global Summit, 2023


The historian added that “If you can’t know who is a real human and who is a fake human, trust will collapse, and with it at least free society. Maybe dictatorships will be able to manage somehow, but not democracies.”


Harari clarified that creating fake people was OK so long as they were labeled as such and not passed off as real — “I need to know if it’s a real human or not,” he said.


The International Telecommunication Union is the United Nations’ specialized agency for information and communication technologies (ICTs).



“We are no longer mysterious souls; we are now hackable animals”

Yuval Noah Harari, World Economic Forum, 2020



Harari has spoken multiple times at the World Economic Forum’s (WEF) annual meetings in Davos where he declared, “We humans should get used to the idea that we are no longer mysterious souls; we are now hackable animals.”


Speaking at the WEF in 2020, Harari said, “To hack human beings you need a lot of biological knowledge, a lot of computing power, and especially a lot of data.


“If you have enough data about me and enough computing power and biological knowledge, you can hack my body, my brain, my life. You can reach a point where you know me better than I know myself.”


With this power to hack human beings, Harari said, “[it] can of course be used for good purposes like provid[ing] much better healthcare, but if this power falls into the hands of a 21st Century Stalin, the result will be the worst totalitarian regime in human history, and we already have a number of applicants for the job of 21st Century Stalin.”


“We shouldn’t regulate AI until we see some meaningful harm that is actually happening — not imaginary scenarios”

Michael Schwarz, WEF Growth Summit, 2023


Speaking at the WEF Growth Summit 2023 during a panel on “Growth Hotspots: Harnessing the Generative AI Revolution,” Microsoft’s Michael Schwarz argued that when it came to AI, it would be best not to regulate it until something bad happens, so as not to suppress the potentially greater benefits.


“I am quite confident that yes, AI will be used by bad actors; and yes, it will cause real damage; and yes, we have to be very careful and very vigilant,” Schwarz told the WEF panel.


When asked about regulating generative AI, the Microsoft chief economist explained:

“What should be our philosophy about regulating AI? Clearly, we have to regulate it, and I think my philosophy there is very simple.


“We should regulate AI in a way where we don’t throw away the baby with the bathwater.

“So, I think that regulation should be based not on abstract principles.


“As an economist, I like efficiency, so first, we shouldn’t regulate AI until we see some meaningful harm that is actually happening — not imaginary scenarios.”


On January 23, 2023, Microsoft extended its partnership with OpenAI — the creators of ChatGPT — investing an additional $10 billion on top of the “$1 billion Microsoft poured into OpenAI in 2019 and another round in 2021,” according to Bloomberg.



This article was originally published by Tim Hinchliffe on The Sociable.