Plastic influencer. AI Fanboy. Cardboard expert. All terms entering the modern lexicon to describe the wave of ‘hype’ surrounding AI. I’ve long been a skeptic of some of the more outlandish and grandiose claims in the GenAI scene.
1/ Programmers will disappear
2/ AGI will arrive in 2024
3/ All jobs will be automated
4/ Robots will become conscious (Skynet)
All this baseless hyperbole without even delving into the more extremist views (there is a Reddit forum, r/singularity, with 3.4 million members).
I’m particularly bemused by the projection of emotion and fantasy onto computer algorithms capable of doing cool stuff. You won’t find me on a companion app, and I believe that many brilliant people who subscribe to this Skynet perception of AI consciousness are at risk of losing their sanity.
My recent blogs have run contrary to the mainstream and somewhat fantastical AI worldview 👇
All these APIs are doing is converting audio to text, processing it through a language model, and then converting it back to audio. It might seem sophisticated on the surface but underneath it's just basic text generation in a robot's voice. Each individual system is comprehensive and reasonably mature, but glue them all together on our proverbial pig and there is no real understanding of the nuances of audio interactions.
If it looks like a pig, squeals like a pig and walks like a pig, it’s a pig. Even if it’s wearing lipstick.
The barrier for excellence has never been so low, because the competition is increasingly with an algorithm and its unengaged and inexpert master.
The robot will never reach true expertise, because there will never be a sufficient dataset of genuine experts to crowdsource from. And crowdsourcing takes the average result, not the best one. The robot doesn’t think. It repeats.
The problem with providing a tool or framework that allows you to abstract functionality is that it comes with a set of assumptions. When I buy a hammer, I assume it will work. When I buy a pressure cleaner, I assume it will work.
The problem is that when I use a framework, I assume it will work. But this is quite literally impossible given the maturity of the underlying technology. Far from increasing adoption, Agentic Frameworks are selling an illusion on top of highly controlled demos and finite use cases that will never actually work in the hands of the typical user (and there are millions…).
This preface is to make a point.
Believe me when I say that I don’t say this lightly.
What Google has just done with Gemini Flash v2.0 has changed absolutely everything. Everything.
And no one saw it coming.
One of my parents’ favourite stories is how, when I was 5 years old, I was given a part in the local nativity play. Cast as a tree, my role was to silently adorn the set while the older and more capable children performed an interpretation of the birth of Jesus Christ.
I wasn’t particularly happy with this minor role.
Over the next 10-15 minutes, before I was dragged off the stage, I followed the cast about the stage, stealing their lines and thundering out my own interpretation of the play.
Interjecting at perfect moments, performing at others. It was a masterclass of disruption, and every giggle and teary eye from the watching crowd goaded me into more. It was ruthless destruction.
The performance descended into farce, the audience crying with laughter; the actors bemused and confused.
The laughter encouraged me; it built to a crescendo.
The play was converted into pantomime, the job complete. To this day it remains a tale told at dinner parties to new and younger family members.
Of course, this particular play is OpenAI’s 12 Days of Christmas, and how Google has not just stolen their thunder but commanded the narrative, seized the limelight and turned a Christmas celebration from OpenAI into a winter nightmare.
I (like most rational people) tuned into the 12 Days of Christmas by OpenAI with a healthy degree of skepticism, watched as they demoed phone calls and astronomically expensive, slow API calls to a marginally improved LLM, and felt reassured that my cynical worldview was validated.
Then something happened.
It happened in the background, with perfect theatrical timing; like an earthquake, the repercussions are coming, and they will be felt by everyone and seen in every product.
I thought Google had dropped the ball on AI; we all did. They were simply irrelevant in all practical usage. Quality was poor, functionality was limited.
It turns out that they didn’t drop the ball and weren’t asleep on the job. They were simply leaving the competition (now children by comparison) to wrestle with Beta releases, barely functioning APIs and scale issues while quietly building the tooling that is necessary to effectively use GenAI in production.
Until a week ago I didn’t even have a live Google API Key.
This week, I’m in the process of migrating every single one of my services.
This may seem rash, but let me explain.
There are two different factions within the world of AI right now; scientists and builders.
The pioneers and scientists are seeking AGI and novel use cases; this is important work, such as new approaches to cancer treatment or academic breakthroughs in quantum physics. It can be theoretical, or in some cases show green shoots of practical application, especially in domains like robotics.
These folk are interested in pursuing AGI and adapting GenAI into a more hybrid form of intelligence that will exponentially increase utility over current LLMs. This may take years; it may take generations (probably!).
I’m firmly and unashamedly in the second faction; we are builders.
GenAI is already capable of incredible stuff. Things that a year or two ago would have been impossible. I want to build stuff that works, right now.
The craft and job at hand is working with available LLMs and APIs and seeing what use cases we can implement.
A builder needs tools and my stack was derived from countless hours spent testing the utility of all the available APIs and models.
1/ Claude 3.5 Sonnet for Coding (Code)
2/ OpenAI APIs for structured data reasoning (Agents)
3/ Groq / Fireworks AI APIs for cheap and instant inference (Individual calls)
4/ Llama for local/on device (Edge computing)
I thought that most of my bases would be covered for the next 3-5 years.
Potentially at some point I could swap out the OpenAI models for a cheaper alternative, but inference cost isn’t really a problem at my scale anyway. To be honest, I wasn’t really interested in any GenAI model that wasn’t listed above; I wasn’t even paying attention to Gemini Flash v2.0.
I’m paying attention now.
We all know that 2025 is the year of Agents; social media won’t stop telling us.
I hate hype trains but the underlying truth is that AI systems are now basically capable of ‘semi-reliably’ taking actions on our behalf. Thus, it is fair to say that there will be loads of popular software released in 2025 that will use this paradigm.
A typical agentic flow goes something like this.
We receive an instruction (book a flight, call my mum, make my breakfast), which is interpreted by a prompt. A prompt is usually executed via an API (hence your OpenAI, Groq or Fireworks AI key). That prompt calls a tool (Skyscanner, web search), which gets the result, calls some code set up by the developer and does “stuff”. The result of this “stuff” is then returned to another prompt and the cycle continues (n jumps) until we have performed the action. Hurrah.
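For those who prefer code to prose, here is a minimal sketch of that loop in Python. The call_llm(), pick_tool() and run_tool() helpers are hypothetical stand-ins for your LLM API and tool integrations, not a real framework:

```python
# A hand-rolled agentic loop, sketched with hypothetical helpers:
# call_llm() wraps your LLM API (OpenAI, Groq, Fireworks AI...),
# pick_tool() and run_tool() wrap the developer's own "stuff".
MAX_JUMPS = 5

def run_agent(instruction: str) -> str:
    context = instruction
    for _ in range(MAX_JUMPS):
        # 1/ A prompt interprets the instruction (or the latest tool result).
        plan = call_llm(f"Decide the next step for: {context}")
        if plan.get("done"):
            return plan["answer"]
        # 2/ The chosen tool runs outside the model, in our own code.
        tool = pick_tool(plan["tool"])
        result = run_tool(tool, plan["args"])
        # 3/ The result is fed back into another prompt and the cycle continues.
        context = f"{instruction}\nLatest result: {result}"
    raise RuntimeError("Agent gave up after too many jumps")
```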
It doesn’t look like the cleanest architecture does it?
If any of these API calls fails or returns an unexpected result, the whole chain is broken. Dozens of Python frameworks have emerged to abstract this problem, but they can’t solve it. Tooling is improving: we can now see errors in execution, validate structured data and build chains with something approaching reliability, hence the Agent hype for 2025.
But the above architecture remains convoluted, complex and unreliable. Despite this, it was the only way we had to unlock the potential of GenAI in Agentic flows.
In December 2024, Google made the above agentic model obsolete before it even became ubiquitous.
The primary reasons are as follows:
1/ Native search
2/ Integrated orchestration
3/ Multi-modal (which works!)
https://ai.google.dev/gemini-api/docs/models/gemini-v2#search-tool
Have a read of the Gemini API docs, and bear in mind that this isn’t a proposal or a fantasy, but an API that works and can provide results in milliseconds.
Google’s integrated search is reliable and fast. Rivals such as Perplexity have a text-based AI search engine; it has its place in the wider landscape, but bear in mind that its core value proposition has now been integrated as a ‘feature’ of Gemini Flash v2.0.
Perplexity AI’s purpose and reason for existence has been subsumed into an actual AI model capable of the same quality and speed of result, with massive utility in other areas as well.
The fact that Google owns a proprietary Search API is critical here. They have a “Native Tool”, bundled into the same API serving the inference model, that can search the internet just by adding some text to the API call. Ah, but OpenAI can do that too, I hear you say?
OpenAI can’t compete. Their search is not native (or at least not mature), and it really shows. They have a “Realtime API”, but it doesn’t work that well and is noticeably slower and buggier than Google’s Gemini Flash v2.0 implementation. In real time, more than any other domain, latency is everything. The results are not even close.
Google literally runs the search request WHILE the model is responding and has the infrastructure to provide the answer before you have read the response. This small detail covers the critical milliseconds that change the interaction experience from “Lipstick on a Pig” to the “real f**king deal”.
Google's integrated search works, and it works really really quickly.
Loads of talk in the AI world about how no-one has a moat.
Well, Google just filled up a giant moat with Christmas Joy and pulled up the drawbridge.
Price, Speed, Quality… Choose two? Hmmmm…
Google is winning on three counts.
Merry Christmas OpenAI.
But it doesn’t stop there. Google has changed the game in terms of Agentic flows. Search the internet for “AI Tools” and you will find mountains of frameworks, code repos and projects that are basically doing the same thing.
Search the internet; check.
Scrape a website; check.
Convert to markdown; check.
Run code; check.
Fetch some private data; check.
All these tools are automating search, retrieval and code execution. https://python.langchain.com/docs/integrations/tools/
The thing is, Google has just integrated this into their API: a single endpoint to handle all of the above. It is now essentially a solved problem.
We no longer need complex agentic flows for many many use cases.
The below diagram from OpenAI shows how function calling works for Agents.
Until now, the execution environment has sat outside the GenAI API.
Google has just built most of that functionality into a core API that can be used by developers.
For example, if I want to use Llama 3.3 to search the internet, I can do tool calling as follows.
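Something like the following, sketched against Groq’s OpenAI-compatible endpoint (the model name and the search_web() helper are assumptions of mine, not gospel):

```python
# Sketch: manual tool calling with Llama 3.3 on an OpenAI-compatible API (Groq here).
# search_web() is a hypothetical helper; the model name may have changed.
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="YOUR_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the internet and return the top results as text.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Who won the last Ballon d'Or?"}]

# Call 1: the model decides it needs the tool and returns the arguments.
first = client.chat.completions.create(
    model="llama-3.3-70b-versatile", messages=messages, tools=tools
)
call = first.choices[0].message.tool_calls[0]

# The search runs in *our* code, outside the model.
results = search_web(**json.loads(call.function.arguments))  # hypothetical helper

# Call 2: feed the result back so the model can write the final answer.
messages += [
    first.choices[0].message,
    {"role": "tool", "tool_call_id": call.id, "content": results},
]
final = client.chat.completions.create(
    model="llama-3.3-70b-versatile", messages=messages, tools=tools
)
print(final.choices[0].message.content)
```

Two round trips, plus whatever the search itself costs, and all the glue is mine to maintain.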
This same flow with Gemini Flash v2.0:
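A sketch with the google-genai SDK; search is a native tool switched on in the same call (the model name is the experimental one from the docs linked above and may change):

```python
# Sketch: the same "search the internet and answer" flow as a single Gemini call.
# Google Search is a native tool bundled into the endpoint; no glue code, no second round trip.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Who won the last Ballon d'Or?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```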
Back to the previous point: Speed, Quality, Cost…
Google just chose all 3.
Nearly all tools are variations of search, retrieval (convert to markdown and inject into the prompt) and arbitrary code execution, with a sprinkling of private data. Except for the private data (almost certainly coming soon…), these are now core concerns, which has made a lot of Agentic systems obsolete before they have even launched.
It won’t be long before we also have native plugins to your Google data sources (a logical next step), at which point except for a rare few scaled and highly complex AI systems, basically all the current frameworks and processes are just convoluted implementations of what can be achieved better, faster and cheaper in a single API call.
The relevance of this from an architectural point of view is that instead of building chained and complex flows, I can refine a single simple model. Everything just became a lot simpler.
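As a taste of that simplicity, here is a sketch of the native code-execution tool (same SDK assumptions as above): the model writes and runs the code server-side, inside a single call.

```python
# Sketch: native code execution in one Gemini API call.
# The model generates Python, Google runs it, and the result comes back in the response.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="What is the sum of the first 50 prime numbers? Generate and run code to check.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

# The response interleaves text, the generated code and its execution result.
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
```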
Bye bye Python frameworks. (don’t stay in touch).
Even if we can’t do everything we need right now, the line in the sand has been drawn and “tools” will become core concerns, integrated into APIs by providers. We don’t need to DIY our own Agents anymore, we have reliable, scaled and fast APIs to work with.
Like me, you are probably a bit burned by all the multi-modal ‘demo’ integrations of audio/video usage. I remember being so excited to try audio streaming (I’ve been developing on WebRTC for years and in a past life founded an eCommerce video-streaming tool).
The potential is obvious, but the whole thing just doesn’t feel right. For an example, go to the OpenAI playground and try out their Realtime API. It shows potential, but it is miles away from being an enjoyable user experience. Most users (and I’ve spoken to hundreds) just want an experience that “works”. Those milliseconds and natural intonations are not details; they are the very essence of the product.
Gemini Flash v2.0 is the first model that gave me the “wow” moment that I had when I first started using Claude for coding. It is the same feeling as the first time you sceptically asked ChatGPT a question and the “machine” gave you a human response.
The latency, the pauses, the voice intonation. Google has NAILED it. It is still obviously an AI system, but that was never the problem. The problem was always the pauses, the interruptions, the way that the model interacted with humans.
I don’t mind talking to a machine, assuming the machine is knowledgeable, able to interact and capable of doing the things that I need it to do. This is 100% the first time I’ve actually seen a model capable of providing this experience, and the ramifications are tremendous.
If you were excited by audio or video interactions and a bit sceptical of the models, go give Gemini Flash v2.0 a try. Google has obviously invested time, effort and resources into solving issues around latency and cost. No other AI model that I have tried even comes close.
And it’s cheap…
And it’s scalable…
I’m as excited as the first time I asked ChatGPT to write a LinkedIn post all those years ago. At this stage of my life and involvement with GenAI, that isn’t particularly easy.
I didn’t expect this moment to come so soon.
We now have a reality with a cheap, fast and highly capable model that we can interact with in real time.
This is literally the first time in my life that I can speak to a computer, and feel like it understands me, can respond to me, and take actions on my behalf. It isn’t a complex agent, it is a single API call.
This is a technical achievement that will reverberate through the AI world, even if many haven’t yet realised.
Apart from the natural interface and interactions, the model is capable of natively searching the internet, executing code and giving me the response in the time it takes to form a sentence.
There was a dream that was the UX of Generative AI.
In December 2024 it became a reality.
Now if you will excuse me, I’m off to build stuff.