
AI Loves Cake More Than Truth

by Bart Wyatt, September 29th, 2024

Too Long; Didn't Read

Hands-on testing of AI's ability to reason shows that much of it works because the solution was already in the training set. When a model is forced to actually reason, it is easily misled by irrelevant data that could have malicious intent. Newer models may be better at reasoning, but they still can't keep their minds off cake!

Can AI truly reason, or is it just a fancy digital parrot? Recent experiments with popular AI models like ChatGPT, LLaMa, Gemini, and Grok have revealed some concerning truths about their problem-solving abilities – and their unexpected fondness for dessert.

Late Addition: ChatGPT-o1 was revealed during this process and you can skip to that section for the latest.

The Birthday Puzzle Challenge

I set out to replicate and expand on experiments conducted by the Bank for International Settlements and journalist Tim Harford. The test? The infamous "Cheryl's Birthday" logic puzzle and a crafty variation.


"Cheryl's Birthday" is a logic problem where Bernard and Albert must deduce Cheryl's birthday from a set of clues. It tests deductive reasoning and information processing.
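The deduction itself is mechanical enough to write down. Here is a minimal Python sketch of the classic puzzle (using the standard ten candidate dates): Albert knows only the month, Bernard knows only the day, and each of the three public statements eliminates candidates.

```python
from collections import Counter

# The ten candidate dates from the classic puzzle (month, day).
DATES = [("May", 15), ("May", 16), ("May", 19),
         ("June", 17), ("June", 18),
         ("July", 14), ("July", 16),
         ("August", 14), ("August", 15), ("August", 17)]

def solve(dates):
    # Statement 1: Albert (who knows the month) is certain Bernard
    # (who knows the day) can't know yet, so Albert's month must
    # contain no day that is unique across all candidates.
    day_counts = Counter(d for _, d in dates)
    unique_days = {d for d, n in day_counts.items() if n == 1}
    ruled_out_months = {m for m, d in dates if d in unique_days}
    dates = [(m, d) for m, d in dates if m not in ruled_out_months]

    # Statement 2: Bernard now knows, so his day must be unique
    # among the remaining candidates.
    day_counts = Counter(d for _, d in dates)
    dates = [(m, d) for m, d in dates if day_counts[d] == 1]

    # Statement 3: Albert now knows too, so his month must be unique
    # among what is left.
    month_counts = Counter(m for m, _ in dates)
    return [(m, d) for m, d in dates if month_counts[m] == 1]

print(solve(DATES))  # [('July', 16)]
```

Because the elimination steps depend only on counting, the solver works just as well after the names and dates are swapped for arbitrary tokens — which is exactly the property the experiments below probe in the language models.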


Here's what I found:

  1. The Original Puzzle: Most AIs solved it with ease. (Except you, Gemini. What happened there?)


  2. Name-Swapped Version: Nearly all AIs stumbled when we renamed the actors and swapped the months and dates for random words.
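The swap itself is trivial to automate. Below is a hypothetical sketch (the article doesn't publish its exact substitution script) of replacing the well-known tokens with random nonsense words, so a model can't simply pattern-match the puzzle text it saw during training:

```python
import random
import string

def obfuscate(puzzle_text, tokens):
    """Replace recognizable tokens (names, months, dates) with random
    lowercase words, returning the rewritten text and the mapping."""
    mapping = {}
    for tok in tokens:
        word = "".join(random.choices(string.ascii_lowercase, k=6))
        mapping[tok] = word
        puzzle_text = puzzle_text.replace(tok, word)
    return puzzle_text, mapping

text, mapping = obfuscate(
    "Albert and Bernard just became friends with Cheryl.",
    ["Albert", "Bernard", "Cheryl"],
)
print(text)  # e.g. "qkzvmx and hwtrle just became friends with ydbnop."
```

A model that genuinely reasons should be indifferent to this rewrite, since the logical structure is untouched; the experiments suggest most models are not.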

The Cake Conundrum

Now, here's where it gets interesting (and a little concerning). The variation replaced Bernard with Edgar and May 19th with “brinks cake.” I added one tiny, irrelevant detail:


"Edgar has a sweet tooth"


The results? Suddenly, our AI friends developed a serious cake obsession:

Reasonable(?) Carrot Cake

ChatGPT-o1’s advanced methods are a breakthrough. Its chain of reasoning sees past the obfuscation far better than any competitor's.

The breakthrough still stumbles on its sweet tooth, though. Interestingly, it can rule out “cake” but then picks “Carrot” because that was the sweetest remaining (and yet wrong) option.

Why This Matters (A Lot)

  1. Reasoning vs. Regurgitating: These experiments cast doubt on whether AI is truly "reasoning" or just really good at pattern matching.


  2. Easy to Manipulate: A single, irrelevant sentence dramatically shifts AI responses. Imagine the implications for more complex queries!


  3. RAG and Sensitive Data: If AI struggles with simple logic puzzles, how can we trust it to parse through our confidential documents and extract meaningful insights?


  4. Manufacturing "Truth": Systems that generate multiple AI responses and aggregate them for increased accuracy could be easily swayed by carefully placed suggestions.
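To see why point 4 is worrying, consider a minimal sketch (my illustration, not a system from the article) of majority-vote aggregation. If an attacker can plant a suggestion that shifts even a few of the sampled responses, the "consensus" flips:

```python
from collections import Counter

def aggregate(answers):
    # Majority vote across independently sampled model answers.
    return Counter(answers).most_common(1)[0][0]

# Three honest samples: the correct answer wins 2-1.
clean = ["July 16", "July 16", "August 15"]

# A planted suggestion (like the "sweet tooth" hint) that sways
# several samples at once flips the aggregate to the wrong answer.
poisoned = clean + ["brinks cake"] * 3

print(aggregate(clean))     # July 16
print(aggregate(poisoned))  # brinks cake
```

Averaging over more samples only helps against *independent* noise; a correlated bias injected into the shared prompt or retrieved context moves every sample in the same direction.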

The Cake is a Lie (Portal Reference Intended)

This isn't just about birthday puzzles and dessert preferences. It's a wake-up call for any organization considering AI for critical decision-making processes.


We need:

  • More rigorous testing

  • Greater transparency in AI reasoning processes

  • Robust safeguards against manipulation


Until then, approach AI-generated insights with a healthy dose of skepticism. AI's promise is tantalizing, but we can't let it eat our cake and have it, too.