AI vs the Human Brain: Can AI Beat Human Intelligence?

by Michael Scofield, November 17th, 2023

Too Long; Didn't Read

Explore the dynamic interplay between the human brain and AI. Uncover the complexities and contrasts shaping the future of intelligence.


In a surprising turn of events on Wednesday, OpenAI temporarily halted new sign-ups for its ChatGPT Plus service, citing an overwhelming surge in demand. This unexpected popularity underscores our growing reliance on artificial intelligence (AI) in daily life and raises questions about how these AI systems measure up against the remarkable human brain.


For all its faults, the human brain is pretty incredible. So incredible, in fact, that for more than 60 years, scientists, entrepreneurs, and sci-fi enthusiasts have done everything they can to replicate it in the form of artificial intelligence.


Many people condemn such technology as the harbinger of the apocalypse; Stephen Hawking warned in 2014 that artificial intelligence could seriously threaten humanity if it grew beyond our control. Yet AI has also made countless tasks easier, and some obsolete altogether.

Can there be artificial minds? Can machines be made to think? Can machines be conscious? Is it possible for artificial intelligence to replace the human brain? These and similar questions pervade most discussions and philosophical polemics on the issue of artificial intelligence.

The Human Brain

We attempt to model AI on the human brain. Why? Because the human brain is the most powerful mechanism for intelligence that we know of, and it has been studied extensively over time.

Let's start with a simple observation: the human brain is much more complex than AI. Here's a surprising point: unlike AI, the human brain does not appear to run explicit algorithms. Now, what's an algorithm? It's a fixed set of instructions for solving a problem. The human brain, with its roughly 86 billion neurons, doesn't follow a predetermined set of rules in that sense.
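To make that contrast concrete, here is a minimal sketch of what "a predetermined set of rules" looks like in code: a binary search that follows exactly the same fixed steps every time it runs. The function and the example values are illustrative only, not something referenced in the original discussion.

```python
# A minimal illustration of an algorithm: a fixed, predetermined set of steps.
# Given a sorted list, binary search follows the same rules every time it runs.

def binary_search(sorted_values, target):
    """Return the index of target in sorted_values, or -1 if it is absent."""
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2           # rule 1: inspect the middle element
        if sorted_values[mid] == target:  # rule 2: found it, stop
            return mid
        if sorted_values[mid] < target:   # rule 3: discard the lower half
            low = mid + 1
        else:                             # rule 4: discard the upper half
            high = mid - 1
    return -1

# The same input always produces the same answer via the same steps.
print(binary_search([2, 5, 8, 13, 21], 13))  # -> 3
```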


Now, let's turn our attention to Artificial Intelligence. In the expansive field of AI research, the aspiration is for AI to approach the complexity of the human brain, a milestone referred to as achieving Artificial General Intelligence (AGI).


Notable milestones, such as the introduction of ChatGPT and OpenAI's announcement of a GPT Store, have fueled these conversations. The store lets individuals create and share customized versions of the viral AI chatbot, sparking discussion and debate about how far AI's capabilities and applications have come on the journey toward mirroring the intricacies of the human mind.


It's essential to emphasize that a brain isn't intelligence per se; rather, it serves as a vessel for intelligence. Intelligence, in straightforward terms, revolves around problem-solving; it's an active process, a doing.


In AI research, human intelligence is often treated as the benchmark for what a general AI should accomplish, which is why the two are so frequently compared. The difference between human intelligence and AI is that AI was deliberately constructed by humans; that is what the “artificial” part means.


So, artificial intelligence (AI) was inspired by the human brain.


Interesting facts about the human brain (image from Healthy Living Australia)

AI is a model of the brain, not a copy.

Think of your brain as a super intricate puzzle: billions of tiny pieces (neurons) working together to create thoughts and make decisions. Imagine artificial intelligence (AI) as someone trying to build a similar puzzle, but without copying every single piece. Instead, they create a simplified version that captures the main idea of how the original puzzle works.


So, AI isn't copying your brain like a duplicate. It's more like making a smart, simplified guess at how your brain tackles problems and learns new things. It's not as complex or flexible as your brain, but it's a clever imitation, helping us do specific tasks really well.


This distinction is crucial because it helps us appreciate the unique strengths of both the human brain and AI. In a nutshell, AI is like a model inspired by your brain, but not an exact copy.
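As a rough sketch of what "a smart, simplified guess" can mean in practice, here is a single artificial neuron written in plain Python: a weighted sum of inputs passed through an activation function. Real neurons and real neural networks are far more complicated; the inputs, weights, and bias below are invented purely for illustration.

```python
import math

# A single artificial neuron: a deliberately simplified model of how a
# biological neuron combines incoming signals, not a copy of one.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, squashed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# Illustrative values only: three input signals, three weights, one bias.
output = neuron(inputs=[0.5, 0.9, -0.2], weights=[0.8, -0.4, 0.3], bias=0.1)
print(round(output, 3))  # a single activation value, roughly 0.52
```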


Garry Kasparov, world chess champion, lost to IBM's Deep Blue in 1997, putting AI in the spotlight (image from DailySabah)

Common Sense

One issue that today’s AI has with demonstrating intelligence is exhibiting common sense. While it can rival or surpass human thinking at specific tasks like playing chess or detecting tumors, the real world throws up endless unforeseen circumstances, and there AI often stumbles.


Experts refer to these tricky situations as "corner cases," scenarios lying on the fringes of what's expected or likely. Human minds navigate these by tapping into common sense, but AI systems, operating on set rules or learned patterns, often hit a roadblock in such situations. It's like humans have this built-in knack for handling unexpected twists, while AI, with its rulebook approach, can sometimes miss the mark when faced with the unpredictable.
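To see why a "rulebook approach" can miss the mark, consider a toy sketch (entirely hypothetical, not drawn from any real system): a hand-written rule set that covers the situations its author anticipated and simply has no answer for anything else.

```python
# A toy "rulebook" for a hypothetical driving assistant. It handles the cases
# its author thought of; anything unexpected falls straight through the rules.

KNOWN_RULES = {
    "red_light": "stop",
    "green_light": "go",
    "pedestrian_crossing": "yield",
}

def decide(observation):
    """Return an action for a known situation, or admit defeat on a corner case."""
    return KNOWN_RULES.get(observation, "no rule found: corner case")

print(decide("red_light"))                   # -> stop
print(decide("fire_hydrant_spraying_road"))  # -> no rule found: corner case
```

The fire-hydrant scenario described a little further down is exactly the kind of observation that never makes it into such a rulebook.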


Common sense might seem like no big deal, something everyone naturally has. However, try to picture a world without it, and things get pretty clear. Imagine you're a robot having a day out at the carnival. On the way home, you encounter a fire hydrant going wild, splashing water all over the road. Now, without common sense, figuring out if it's safe to drive through that watery spray becomes a head-scratcher.


You decide to park near a drugstore, and suddenly, a guy on the sidewalk is screaming for help, bleeding like crazy. The common sense question pops up – can you grab bandages from the store without waiting in line to pay? As a human, you tap into your huge stash of common-sense knowledge to make sense of these situations. It's something you always do because, let's face it, life throws lots of unforeseen circumstances at us. And here's the kicker: AIs might struggle with these real-life twists and turns.


How do humans acquire common sense? The quick answer is that we're versatile learners. We experiment, observe outcomes, read books, heed instructions, absorb information silently, and engage in independent reasoning. We face challenges, witness mistakes, and learn from the experiences of others. In contrast, AI systems lack this well-rounded approach. They tend to follow a single path, excluding alternative routes.


Maybe computers won't truly grasp common sense until they have brains and bodies like ours and experience life as we do. On the flip side, the nature of machines might offer them the opportunity to develop an enhanced version of common sense.


Human beings, however, are a complex bunch. Despite holding common-sense views, we often fall short of meeting our own standards. Engaging in activities like texting while driving, procrastinating, and disregarding traffic laws sometimes seems to contradict our common sense principles. Taking a broader perspective, common sense isn't just about possessing knowledge but also about acting on it when it truly matters.


The question arises: could a computer program ever surpass a human in terms of common sense? I'd put the chances at about 60%; the gap, while still significant, is gradually narrowing with advancements like ChatGPT. But for now, despite our occasional lapses, we remain the reigning champions of common sense.

AI and Self-driving vehicles

Driving is a complex task that engages multiple senses and requires delicate decision-making. Achieving full self-driving capabilities is a challenging endeavor precisely because of the intricate nature of human driving skills.


When we drive, our brains seamlessly process information from our eyes, ears, and even touch to navigate through a dynamic environment. The ability to interpret traffic signs, predict the movements of other vehicles, react to sudden changes, and even respond to unexpected sounds – these are tasks that involve a symphony of sensory input and quick decision-making.


Creating a self-driving vehicle that can match the versatility of the human brain in these situations is a formidable challenge. While AI has made significant strides in pattern recognition, obstacle detection, and decision-making, the real world often throws unpredictable scenarios at us. Factors like weather conditions, diverse road environments, and the unexpected behaviors of other drivers add layers of complexity that go beyond what current AI systems can fully comprehend.
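As a hedged illustration of why these layers of complexity are hard, here is a toy sensor-fusion sketch for a hypothetical self-driving stack: several sensors each report a confidence that the lane ahead is clear, and a weighted vote turns those into a single decision. The sensor names, weights, and threshold are invented for this example and do not describe any real vehicle.

```python
# A toy sensor-fusion step: combine per-sensor confidence scores into one
# "is the lane ahead clear?" decision. Values are purely illustrative.

SENSOR_WEIGHTS = {"camera": 0.5, "lidar": 0.3, "radar": 0.2}

def lane_is_clear(confidences, threshold=0.7):
    """Weighted vote over per-sensor confidence that the lane ahead is clear."""
    fused = sum(SENSOR_WEIGHTS[name] * conf for name, conf in confidences.items())
    return fused >= threshold

# Clear day: the sensors agree, and the fused score clears the threshold.
print(lane_is_clear({"camera": 0.95, "lidar": 0.9, "radar": 0.9}))   # True

# Heavy rain degrades the camera; the same fixed fusion rule now refuses to
# proceed, even though a human driver might still judge the road to be fine.
print(lane_is_clear({"camera": 0.3, "lidar": 0.85, "radar": 0.8}))   # False
```

Real systems are enormously more sophisticated than this, but the weakness is the same in kind: the fusion logic only weighs the factors its designers anticipated.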


Additionally, the human aspect of driving extends beyond logical decision-making. Human drivers often rely on intuition, understanding social cues, and making split-second judgments based on experience. Infusing AI with this level of human-like intuition and adaptability is very tough.


So, while progress in self-driving technology is impressive, achieving full autonomy that matches the sensory awareness and adaptability of the human brain remains a significant hurdle. It's not just about replicating individual driving tasks but about creating a holistic system that can handle the unpredictable nature of real-world driving scenarios—a task that requires the integration of advanced AI, sensor technologies, and a deep understanding of human cognition.

Is Superhuman AI a myth?

Artificial General Intelligence (AGI), often referred to as "superhuman" AI, remains, as of now, more myth than reality. A system that can understand, learn, and apply knowledge across a broad spectrum of tasks at a level surpassing human capabilities is still a lofty goal.


AGI would possess a level of generalization and adaptability that mirrors or even surpasses human intelligence. This includes the ability to set its own goals, understand the nuances of diverse tasks, and apply knowledge across a wide range of scenarios.


Elon Musk's concern about AGI being an existential risk stems from the idea that if we create a system smarter than us, there's potential for it to operate beyond our control, potentially with unforeseen consequences. This view aligns with the cautionary perspective that many in the AI community share, emphasizing the need for ethical considerations and robust safety measures in AGI development.


On the other hand, OpenAI's mission, as articulated by its CEO, is to ensure that if AGI is developed, it is aligned with human values and benefits all of humanity. This represents a commitment to responsible AI development, with a focus on avoiding harmful impacts and ensuring broad access and benefits.


With AGI remaining a distant goal, the complexity involved in creating a system that not only surpasses human intelligence but also understands and respects our values is immense. It requires addressing not only technical challenges but also ethical considerations, safety protocols, and a deep understanding of the societal implications of such a powerful technology.


You can read the full details of what Sam Altman (OpenAI CEO) said on AGI here

Conclusion

The mechanisms by which the human brain achieves many of the above are still unknown. Key research is ongoing to understand the human brain better and to emulate it in computer systems. Until such research matures, we may not achieve anything we can classify as a truly intelligent artificial system.


Key takeaways from this article:


  • Considering the discussions we've delved into in this article, it would be a misjudgment to suggest that the intelligence displayed by machines matches or surpasses that of the human mind. Ultimately, it's crucial to recognize that these thinking machines are products crafted by the ingenuity of the human mind.
  • It is apparent that devices like computers, considered "thinking" machines, serve as tools that facilitate the tasks of the human mind. However, it is crucial to emphasize that these machines are not substitutes for the human mind. They enhance and streamline processes but fall short of replicating the holistic range and depth of human intelligence.
  • A hallmark of intelligence is the willingness to change one's mind. In contrast, machines focus solely on precision and exactness. Computing doesn’t entertain opinions.
  • However, it is important to note that while modeling the human brain is a promising approach for AI research, understanding the workings of the brain and effectively reproducing its functions in AI systems poses significant challenges, and there's a substantial amount yet to uncover in this pursuit.
  • Man will always drive the machine. But to ensure it is not the other way around, we need to keep super-smart weapons systems under human control, make AI systems explain their decisions to humans in excruciating detail, and align the interests of machines and humans.


There is more to the brain than AI.

