A Layman’s Guide to Artificial Intelligence (AI)

by Vinoth George Chellamuthu, August 20th, 2018

“The very concept of intelligence is like a stage magician’s trick. Like the concept of ‘the unexplored regions of Africa’. It disappears as soon as we discover it.”

— Marvin Minsky (1927–2016), mathematician and AI pioneer.

What is Artificial Intelligence?

According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”.

Basically, artificial intelligence (AI) is the ability of a machine or a computer program to think and learn. The concept of AI is based on the idea of building machines capable of thinking, acting, and learning like humans.

AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

Goals of AI

  • To Create Expert Systems − Systems that exhibit intelligent behavior: they learn, demonstrate, explain, and advise their users.
  • To Implement Human Intelligence in Machines − Creating systems that understand, think, learn, and behave like humans.
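
The expert-system goal above can be sketched in a few lines: a small knowledge base of if-then rules, plus a step that matches known facts against them. This is only a minimal illustration; the symptom rules and facts below are invented for this example, not taken from any real system.

```python
# Minimal rule-based "expert system" sketch: facts plus if-then rules.
# The medical-style rules and facts below are invented for illustration.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"sneezing"}, "possible cold"),
]

def advise(facts):
    """Return every conclusion whose conditions are all present in the facts."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]

print(advise({"fever", "cough", "headache"}))  # ['possible flu']
```

Real expert systems add chains of inference and explanations for their advice, but the core idea, separating the knowledge (rules) from the reasoning engine, is the same.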

Applications of AI

AI has become prominent in various fields, such as:

  • Gaming − AI plays a crucial role in strategic games such as chess, poker, and tic-tac-toe, where a machine can evaluate a large number of possible positions using heuristic knowledge.
  • Natural Language Processing − It is possible to interact with a computer that understands natural language spoken by humans.
  • Expert Systems − Applications that integrate machines, software, and specialized information to provide reasoning and advice. They offer explanations and advice to their users.
  • Vision Systems − These systems understand, interpret, and comprehend visual input. For example:
      ◦ A surveillance aircraft takes photographs that are used to derive spatial information or maps of an area.
      ◦ Doctors use clinical expert systems to diagnose patients.
      ◦ Police use software that can match a criminal's face against a stored portrait made by a forensic artist.
  • Speech Recognition − Some intelligent systems can hear and comprehend language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang, background noise, changes in a human's voice due to a cold, and so on.
  • Handwriting Recognition − Handwriting recognition software reads text written on paper with a pen or on a screen with a stylus. It can recognize the shapes of letters and convert them into editable text.
  • Intelligent Robots − Robots can perform tasks given by humans. They have sensors to detect physical data from the real world such as light, heat, temperature, movement, sound, bumps, and pressure. They have efficient processors, multiple sensors, and large memory to exhibit intelligence. In addition, they can learn from their mistakes and adapt to new environments.
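
The game-playing idea from the list above, a machine thinking through many possible positions, can be sketched with a minimal minimax search over tic-tac-toe boards. The board encoding and scoring below are my own illustrative choices, not a production game engine.

```python
# Minimal game-tree (minimax) search over tic-tac-toe positions, sketching the
# "evaluate many possible positions" idea. Board: 9-char string of "X"/"O"/" ".

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): +1 if X can force a win, -1 for O, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    results = []
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        results.append((score, m))
    # X maximizes the score, O minimizes it.
    return max(results) if player == "X" else min(results)

# X has two in a row on top; the search finds the winning square, index 2.
score, move = minimax("XX OO    ", "X")
print(score, move)  # 1 2
```

Chess engines cannot search exhaustively like this, which is where the "heuristic knowledge" mentioned above comes in: they cut the tree short and estimate positions instead.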

The difference between AI and “machine learning”

Chances are, if you’ve heard the term AI ballooning over the last few years, you’ve also heard “machine learning” as a buzzword. Many ask, “Is AI the same as machine learning?” Not really. Although the two terms are often used interchangeably, they are not the same. Artificial intelligence is a broader concept, while machine learning is the most common application of AI.

Here’s what that means: machines use large data sets to “learn” and find patterns, then use what they’ve learned to recognize more of the unknown.

AI and machine learning relate to each other the way rectangles and squares do. Just as all squares are rectangles but not all rectangles are squares, machine learning is one application of AI, while AI is a broader concept with other uses, too.
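
That "learn from data, then recognize the unknown" idea can be sketched with one of the simplest machine-learning methods, a 1-nearest-neighbour classifier: label a new sample the same as the most similar example it has seen. The fruit measurements (weight in grams, a colour score) are invented for illustration.

```python
# Tiny "learning from data" sketch: a 1-nearest-neighbour classifier.
# The fruit data (weight in grams, colour score) is invented for illustration.

TRAINING = [
    ((150, 0.9), "apple"),
    ((170, 0.8), "apple"),
    ((120, 0.2), "lemon"),
    ((110, 0.3), "lemon"),
]

def classify(sample):
    """Label an unseen sample with the label of its closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda pair: dist(pair[0], sample))[1]

print(classify((160, 0.85)))  # apple
print(classify((115, 0.25)))  # lemon
```

Everything the model "knows" comes from the examples, none of it from hand-written rules, which is the defining trait of machine learning as opposed to the broader field of AI.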

What’s AI Like Today?

Computers haven’t taken over the world, but artificial intelligence is already part of our everyday lives.

Although most of us haven’t taken a ride in a self-driving car, we benefit from AI through apps like Uber and Lyft that use algorithms to connect drivers to passengers.

We don’t have robotic assistants yet, but we do use AI-assisted software like Siri and Google Now.

AI is also used in e-commerce, customer service, and financial services.

IBM’s cognitive computing system, Watson, is best known as a Jeopardy! winner, but Watson is also used for day-to-day data analytics in marketing and for research and diagnostic assistance for physicians at hospitals.

Google’s AI made news by beating the world Go champion, but its computing is also being used to answer email in Inbox, identify photos in Google Photos, and schedule appointments in G Suite, formerly Google Apps for Work.

How AI Will Affect Humans

Experts predict that within the next decade AI will outperform humans at relatively simple tasks such as translating languages, writing school essays, and driving trucks. More complicated tasks, like writing a bestselling book or working as a surgeon, will take machines much longer to learn; AI is expected to master these two skills by 2049 and 2053, respectively.

It is obviously too soon to talk about AI-powered creatures like those from Westworld or Ex Machina stealing our jobs or, worse yet, rising against humanity, but we are certainly moving in that direction. Meanwhile, top tech professionals and scientists are getting increasingly concerned about our future and encourage further research on the potential impact of AI.

Potential for bias

AI has an intrinsic potential for bias in terms of the data used to train each algorithm to do what it’s supposed to. For example, Google Photos came under fire for tagging African American users as gorillas in 2015, and in 2017, the developers of FaceApp “beautified” faces by lightening skin tones. That’s why it’s vital for AI companies to look at the data they’re using and make sure it’s engineered to reduce bias.
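
As a hedged illustration of how bias can live in the data rather than the code: a model that simply predicts the most common label in a skewed training set can look accurate overall while failing the under-represented group entirely. All the numbers below are made up for this sketch.

```python
# Sketch of dataset bias: a model that always predicts the most common label in
# its training data looks accurate overall yet fails the under-represented group.
# All numbers here are invented for illustration.

from collections import Counter

training_labels = ["group_a"] * 95 + ["group_b"] * 5
model_output = Counter(training_labels).most_common(1)[0][0]  # always "group_a"

test_set = ["group_a"] * 90 + ["group_b"] * 10
overall = sum(model_output == y for y in test_set) / len(test_set)
minority = sum(model_output == y for y in test_set if y == "group_b") / 10

print(f"overall accuracy: {overall:.0%}, accuracy on group_b: {minority:.0%}")
# overall accuracy: 90%, accuracy on group_b: 0%
```

A headline accuracy number can hide exactly the failures the Google Photos and FaceApp incidents above exemplify, which is why auditing the training data, not just the overall score, matters.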

What’s next in AI

AI is on the rise in industries across the board. In fact, 30 percent of businesses are predicted to incorporate it before 2019, and that’s up from just 13 percent last year, according to Spiceworks, an information technology company. Google, IBM, Amazon, Microsoft, Apple and many more companies are making AI a priority.

Key players

Tesla CEO Elon Musk, who incorporates AI into his company’s autonomous cars, fears for what the technology could mean for the future of humanity. “If you’re not concerned about AI safety, you should be,” he tweeted in August 2017. “Vastly more risk than North Korea.” He also encouraged the government to regulate the technology before it becomes too advanced. “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated,” he wrote on Twitter. “AI should be too.”

Mark Zuckerberg, on the other hand, seems to disagree wholeheartedly. The Facebook CEO hosted a Facebook Live session in 2017 in which he called his views on AI “really optimistic” and mentioned that those who “drum up doomsday scenarios” about AI are “negative” and, in some ways, “really irresponsible.” People naturally pointed to Elon Musk, who later tweeted, “I’ve talked to Mark about this. His understanding of the subject is limited.”

Key figures at Amazon lean more towards Zuckerberg’s view of the subject, saying the benefits of AI outweigh the risk. “We believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future,” wrote Dr. Matt Wood, general manager of AI at AWS. “The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm.” The company recently sold its Rekognition facial recognition software — which identifies and tracks faces in real time, including those of “people of interest” — to police departments and government agencies. Critics argued it could easily be misused and harm marginalized people.

Sundar Pichai, CEO of Google, recently released new guidelines surrounding the company’s future with AI. His views are more in line with regulation, even if it’s self-regulation, of the company’s use of AI. “We recognize that such powerful technology raises equally powerful questions about its use,” he wrote in a June blog post. “How AI is developed and used will have a significant impact on society for many years to come. … We feel a deep responsibility to get this right.” He clarified that where there’s a material risk of harm, the company will proceed only when it believes the benefits substantially outweigh the risk. The company also said it won’t collaborate on weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Closing Thoughts

Given the innate advantages AI machines have over us humans (accuracy, speed, etc.), an AI rebellion scenario is something we should not completely dismiss. Time will tell whether AI is our greatest existential threat or a technological blessing that will improve our quality of life in many different ways.

So far, one thing remains perfectly clear: creating AI is one of the most remarkable events for humankind. After all, AI is considered a major component of the Fourth Industrial Revolution, and its potential socioeconomic impact is believed to be as great as that of the invention of electricity.

In light of this, the smartest approach is to keep an eye on how the technology evolves, take advantage of the improvements it brings to our lives, and not get too nervous at the thought of a machine takeover.