Artificial intelligence beats people at Go and Dota 2, helps diagnose diseases, and tests scientific hypotheses. According to analysts at IDC, global spending on AI in 2022 will total nearly $78 billion, more than tripling in four years.
AI seems omnipotent, but there is a range of tasks it cannot handle. We need to be realistic about artificial intelligence and stop expecting it to solve all of humanity's problems.
AI requires data for analysis and training: without a sufficient amount of structured data, it is impossible to build a practically useful AI solution. For example, to accurately recognize faces in photos, a system needs to analyze tens of thousands of photos.
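To make the data requirement concrete, here is a minimal sketch of a word-count classifier. All names and the tiny dataset are invented for illustration; the point is that with only a handful of labeled examples, the model cannot say anything about text outside its small vocabulary, which is why real systems need thousands of examples.

```python
from collections import Counter

# Invented, deliberately tiny training set -- far too small for real use.
TRAIN = [
    ("great product works well", "positive"),
    ("love it excellent quality", "positive"),
    ("terrible broke after a day", "negative"),
    ("awful waste of money", "negative"),
]

def train(examples):
    """For each word, count how often it appears under each label."""
    model = {}
    for text, label in examples:
        for word in text.split():
            model.setdefault(word, Counter())[label] += 1
    return model

def classify(model, text, default="unknown"):
    """Let every known word in the text vote for a label."""
    votes = Counter()
    for word in text.split():
        if word in model:
            votes += model[word]
    return votes.most_common(1)[0][0] if votes else default

model = train(TRAIN)
print(classify(model, "love it excellent"))         # positive
print(classify(model, "the app crashed yesterday"))  # unknown: no training coverage
```

The second message is perfectly ordinary, but none of its words ever appeared in training, so the model has no basis for a decision. Scaling the labeled data is what closes that gap.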
Many industries, especially the public sector, still rely on paper archives, and fully digitizing them will take time. For businesses, this means that developing AI software is not enough: first you need access to the data.
A McKinsey Global Institute study showed that the leaders in the implementation of artificial intelligence technologies are FinTech and Telecom, while the laggards are construction, education, and tourism, which lack digitized data. At the same time, the quality of data is no less important than its volume. It is impossible to build correct models on the basis of low-quality data.
Because of this, the problem of data security, and of protecting data from contamination and misuse by intruders, becomes acute. Companies need to think about cybersecurity before they start implementing artificial intelligence.
Artificial intelligence is not yet capable of separating truth from fiction and combating misinformation. Although OpenAI has already created artificial intelligence to generate convincing "fake news," algorithms still recognize fakes worse than humans.
For example, Facebook gave up on artificial intelligence for this task and hired 10,000 moderators capable of understanding the cultural nuances of publications.
Another limitation of artificial intelligence is its inability to recognize emotions in social networks. This shortcoming prevents, in particular, the problem of cyberbullying from being effectively addressed: existing mechanisms still require people to report offensive posts.
People do not trust artificial intelligence, which greatly hinders its adoption. IBM's Watson Oncology project can recommend treatment options for 13 types of cancer, and in some cases the algorithm's recommendations matched those of oncology experts 93% of the time. But doctors were not ready to delegate life-and-death decisions to a machine. The question of who bears responsibility for mistakes made by artificial intelligence also arose sharply.
In addition, there was a problem with the data sample on which Watson was trained: hospitals outside the US complained that the program was geared toward American medical practices and treatments. As a result, some of the hospitals that implemented the technology abandoned it, citing high costs and unsatisfactory results.
Perhaps there is a way to address society's distrust of artificial intelligence. A study by American scientists showed that people are more willing to trust AI if they can make minor changes to its algorithms.
Artificial intelligence lacks creativity: it can only imitate the style of people, not create its own. The media have long used AI to write sports news and crime stories, but jokes and novels written by machines still do not stand up to criticism.
In 2018, a neural network trained on a corpus of 43,000 jokes produced nonsense like "What do you get if you crossbreed with a dinosaur? Lawyers." Apparently, we should not expect a machine revolution in the field of humor.
Things are no better with prose: although some systems demonstrate the ability of artificial intelligence to write stories, a computer is still a long way from winning the Nobel Prize in Literature.
Apple co-founder Steve Wozniak suggested using the "Coffee Test" to measure machine intelligence. To pass Wozniak's test, a robot must enter an unfamiliar apartment, find the coffee maker, pour water, take out a mug, and make coffee. So far, no machine has passed. The "Coffee Test" may sound like a joke, but it highlights the serious limitations of modern machine intelligence.
Andrew Ng, founder of Landing AI and Coursera, believes that it is possible to successfully automate those intelligent tasks that take less than a second for humans to solve.
The problem is that people themselves do not yet fully understand what intelligence is. For decades, researchers believed that the ideal measure of intelligence was chess. Today, grandmasters cannot compete with machines, yet the ability of chatbots and voice assistants to hold a meaningful conversation does not exceed that of a five-year-old child.
Every limitation of artificial intelligence today is an opportunity for developers and entrepreneurs. Tasks that are still beyond machines' reach are a challenge to a new generation of researchers.
For example, one could build a service that guesses a person's emotions from their social media messages, or train a neural network to make witty jokes and create a viral application around it that conquers the world.
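A toy version of such an emotion-guessing service can be sketched with a keyword lexicon. The lexicon below is invented for illustration; it also shows why the task is hard, since keyword matching misreads sarcasm, which is exactly the gap a real neural approach would need to close.

```python
# Naive keyword baseline for guessing emotion in short messages.
# The lexicon is invented for illustration; real systems need large
# labeled corpora and still struggle with sarcasm and context.
EMOTION_WORDS = {
    "joy": {"happy", "great", "love", "awesome"},
    "anger": {"hate", "angry", "furious", "annoying"},
    "sadness": {"sad", "miss", "lonely", "crying"},
}

def guess_emotion(message):
    tokens = set(message.lower().split())
    # Score each emotion by how many of its keywords appear.
    scores = {emotion: len(tokens & words)
              for emotion, words in EMOTION_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(guess_emotion("I love this awesome game"))  # joy
print(guess_emotion("oh great another delay"))    # also "joy": sarcasm is missed
```

The second call is the instructive failure: a keyword model happily labels a sarcastic complaint as joy, which is precisely the kind of nuance that still requires either human moderators or far more sophisticated models.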