
From OpenAI to Closed AI: Custom Chips Are Closing The Doors—What’s Next?

by susie liu, November 20th, 2024

Too Long; Didn't Read

Word on the street is that OpenAI's working with Broadcom to develop custom chips. Written into this rumor is a message that the days of universally accessible AI may be numbered—here's why designer hardware is inevitable, how tech could use chips to chain us, and what to expect from 2025.

At the close of October, reports surfaced—courtesy of Reuters—that OpenAI is working with Broadcom to develop custom silicon tailored to its colossal AI workloads. Sources disclosed that the company has secured manufacturing capacity with TSMC and assembled a 20-strong team of engineers, poaching veterans of Google’s Tensor Processing Unit (TPU) division. Production timelines remain flexible, with chips potentially rolling out as late as 2026, but the groundwork is already being laid for a seismic shift in how OpenAI handles its infrastructure.


While the move indeed aligns with OpenAI’s ongoing strategy to diversify its supply chain and control escalating infrastructure costs, this rumor also carries a message: the days of universally accessible AI may be numbered. The future of AI isn’t a bigger, brighter world open to anyone with a brilliant mind—it’s a VIP lounge with custom chips for walls, where the membership fee is a billion-dollar budget.


Custom hardware will no doubt bring breakthroughs, but also build barriers; barricades that leave the general public—and most other players—on the outside looking in.


And that might be exactly what tech wanted all along.


Let’s look at why the hardware arms race is inevitable, how chips play into tech’s bigger strategy for domination, and what to expect next.




Custom Chips: A Fancy Way of Saying “We’re Stuck”


AI has had us dreaming big—everything from personalized therapy bots to autonomous delivery drones and AI-driven diagnostics on every phone. But OpenAI’s move to develop custom chips signals that our wild ambitions for AI now require models so formidable that even the most powerful general-purpose processors are waving a white flag. Custom silicon isn’t about making AI faster, better, and freer, but about keeping the whole thing afloat under ever-inflating demands—a silent admission that we’ve hit an innovation ceiling that only hardware can break.


Here’s what’s driving the need for custom silicon.


ChatGPT’s Real Personality: High Maintenance


The wisdom of LLMs like GPT-4 and Gemini is built on transformer-based models that juggle billions of parameters. But this intelligence comes at a price: a backbone of self-attention mechanisms that require massive matrix multiplications, which gobble up memory bandwidth. Self-attention also scales quadratically with sequence length, meaning longer context windows drive compute and memory demands up steeply. When LLMs attempt to level up their game by layering on reinforcement learning (RL) to adapt to feedback in real time, or try to map connections using graph neural networks (GNNs), things spiral into a serious data party, sending power requirements through the roof. If you’ve noticed ChatGPT stuttering and stalling as of late, this is why.
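To see where that quadratic term comes from, here’s a minimal sketch of single-head scaled dot-product attention in plain NumPy—illustrative only, nothing like a production implementation. The score matrix is seq_len × seq_len, so doubling the context quadruples the memory it eats:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Toy single-head scaled dot-product attention."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # shape (seq_len, seq_len): the quadratic term
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V

# The attention score matrix alone, in FP32, per head:
for seq_len in (1_024, 4_096, 32_768):
    print(f"seq_len={seq_len:>6}: ~{seq_len**2 * 4 / 1e6:,.0f} MB per head")
```

At a 32k context, a single head’s score matrix already runs into gigabytes before you multiply by heads and layers—exactly the memory-bandwidth wall that custom chips are built to climb.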


Custom chips like Google’s TPUs tackle these issues by integrating high-bandwidth memory (HBM) onto the package, improving memory hierarchies to cut data-movement latency, and leveraging systolic arrays to parallelize matrix operations.


Generative AI: From Outputs To Outages


Generative AI is shifting from delivering singular outputs like text or images to cross-modal masterpieces through blending multiple forms of media (text, audio, video). This technical sorcery breeds computational chaos—each modality has distinct processing needs, and asking AI to digest everything simultaneously strains general-purpose GPUs that weren’t designed to be master jugglers. In addition, real-time synthesis models for enhanced engagement features, such as dynamically adaptive storylines for games or SFX filters for live-streaming, demand ultra-low latency and rapid inference speeds, requirements that mainstream GPUs struggle to meet without introducing delays and a sky-high electricity bill.


Specialized silicon like NVIDIA’s A100 and Google’s TPUs addresses these issues with multi-instance GPU (MIG) partitioning and tensor cores, enabling power-efficient, real-time cross-modal computation by dividing tasks into parallel, isolated processes on the same chip. Reduced-precision arithmetic can also be introduced to allow processing in formats like FP16 or INT8 instead of FP32, retaining accuracy without melting the hardware.
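As a rough illustration of the reduced-precision idea—assuming a CUDA-capable GPU and an arbitrary toy model, neither of which reflects any vendor’s actual stack—PyTorch’s autocast runs the matrix multiplications in FP16 on tensor cores while keeping sensitive operations in FP32:

```python
import torch

# Toy network standing in for a real multimodal model (illustrative only).
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 4096),
).cuda().eval()

x = torch.randn(32, 4096, device="cuda")

with torch.no_grad():
    y_fp32 = model(x)  # baseline: full-width weights and activations

# Mixed precision: matmuls execute in FP16, roughly halving memory
# traffic; PyTorch keeps accumulations in FP32 to preserve accuracy.
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y_fp16 = model(x)

print("max deviation from FP32:", (y_fp32 - y_fp16.float()).abs().max().item())
```

The deviation is typically tiny relative to the activations themselves, which is why inference services lean so heavily on FP16 and INT8.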


Life-or-Death AI: Precision That Drains the Grid


Navigating high-stakes, real-world chaos—think autonomous driving, robotics, drones—requires event-driven AI that responds at superhuman speed, a task suited to neuromorphic and probabilistic models that would break an off-the-shelf chip. Custom chips like Intel’s Loihi are built with architectures that mimic biological neural networks, relying on spiking neural networks (SNNs) and event-based processing to analyze data only when relevant events occur, while dynamically allocating resources based on incoming data patterns. This design enables low-power, low-latency operation at scale, but it’s nothing you’ll find in hardware on the open market.
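The event-driven idea is easiest to see in a toy leaky integrate-and-fire (LIF) neuron, the building block of SNNs. This is a conceptual NumPy sketch, not how Loihi is actually programmed; all constants are made up:

```python
import numpy as np

def lif_neuron(input_spikes, steps=100, tau=10.0, threshold=1.0, w=0.6):
    """Toy leaky integrate-and-fire neuron.

    The membrane potential leaks away each step; incoming spikes add
    charge. An output spike (an 'event') fires only when the potential
    crosses threshold -- between events there is almost nothing to
    compute, which is where neuromorphic chips save their power.
    """
    v, output_spikes = 0.0, []
    arrivals = set(input_spikes)
    for t in range(steps):
        v *= np.exp(-1.0 / tau)        # passive leak
        if t in arrivals:
            v += w                     # integrate the incoming spike
        if v >= threshold:
            output_spikes.append(t)    # emit an event downstream
            v = 0.0                    # reset after firing
    return output_spikes

# Tight bursts push the neuron over threshold; isolated spikes just
# leak away without triggering any downstream computation at all.
print(lif_neuron([3, 5, 7, 40, 42, 44, 46]))
```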


Social AI: Cracking Humanity Takes Heavy Lifting


Ah, the enticing yet ethically murky application of AI that the titans are hoping will keep us bewitched by their platforms and besotted with their programmed pets. Decoding the elusiveness of human nature requires systems that interpret, predict, and adapt to behavior at both individual and societal levels—cross-modal attention mechanisms, GNNs to dissect collective interactions, affective computing to develop emotional intelligence, knowledge graphs to ensure contextual relevancy, and the list goes on. Furthermore, social AI might operate in sensitive contexts (like inside a depression-curing robo-rodent), necessitating on-device AI to safeguard user data. Needless to say, all this computation sends mainstream chips and their batch processing into a state of paralysis.


These processes demand sparse-data efficiency and high accuracy at low latency, requirements that designer silicon can meet by incorporating features such as unified memory architecture, task-specific accelerators, sparse-data optimization (used in Graphcore’s IPU), and multimodal fusion optimization.
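Why does sparsity matter so much here? A social graph’s adjacency matrix is overwhelmingly zeros, so dense hardware burns most of its cycles multiplying nothing. A hedged sketch of one GNN message-passing step using SciPy’s sparse routines—sizes and data are purely illustrative:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
n_nodes, n_feats = 10_000, 64

# ~20 edges per node out of 10,000 possible neighbors: >99.8% zeros.
adj = sparse.random(n_nodes, n_nodes, density=20 / n_nodes,
                    format="csr", random_state=1)
feats = rng.standard_normal((n_nodes, n_feats)).astype(np.float32)

# One message-passing step: each node aggregates its neighbors'
# features. The sparse matmul touches only stored edges; a dense
# matmul would grind through n_nodes^2 multiply-adds, mostly on zeros.
messages = adj @ feats

dense_flops = n_nodes**2 * n_feats
sparse_flops = adj.nnz * n_feats
print(f"dense: {dense_flops:.1e} FLOPs vs sparse: {sparse_flops:.1e}")
```

Hardware that exploits that gap natively, rather than padding everything out to dense tiles, is precisely the pitch behind accelerators like the IPU.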


Scientific AI: The Final Frontier Too Big for Conventional Hardware


Though less of a media darling than other domains, scientific AI is poised to become the most profoundly revolutionary frontier in artificial intelligence—but only if the hardware can keep up. For generative scientific AI to create new possibilities (e.g., novel molecules, materials, and systems), advanced computational frameworks like diffusion models, VAEs, transformers, and reinforcement learning must be combined with domain-specific knowledge. Non-generative AI used for predictive modeling and simulation deals with petabyte-scale data and high-dimensional systems, leaning on mechanisms such as PDE solvers, GNNs, Bayesian models, and finite element analysis (FEA). Though the two branches of scientific AI serve different purposes, both call for precision, scalability, and raw computational intensity. It’s a no-brainer that ready-made hardware won’t make the cut.
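To make the “PDE solvers” line concrete, here’s a minimal 1-D heat-equation solver via explicit finite differences—a toy with made-up constants, but the same stencil arithmetic that scientific workloads run over grids with billions of cells and thousands of timesteps:

```python
import numpy as np

def heat_1d(u0, alpha=0.01, dx=0.01, dt=0.004, steps=500):
    """Explicit finite-difference solver for u_t = alpha * u_xx.

    Every timestep sweeps a three-point stencil across the grid.
    Real simulations do this in 3-D over enormous meshes, which is
    the computational intensity the paragraph above alludes to.
    """
    u = u0.copy()
    r = alpha * dt / dx**2            # stability requires r <= 0.5
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)   # an initial spike of heat
print(f"peak temperature after diffusion: {heat_1d(u0).max():.3f}")
```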




The Death Of Open AI And The Allure Of Exclusivity


The path to bespoke silicon comes with a price tag that all but guarantees the stratification of access to AI. Economics 101: to cover the astronomical expenses, OpenAI (and all who follow suit) will inevitably pass the burden to customers, bundling access into offerings that’ll make our current subscriptions look like pocket change.


But don’t mistake inflated prices and a hierarchical system where financial muscle is a prerequisite for a reactive survival tactic; it’s a strategic opportunity—because exclusivity isn’t a bug, it’s the feature tech has been waiting for.


1. Vendor Lock-In 2.0: Chaining Companies Through Chips


Proprietary hardware creates a gravitational field: once enterprises are embedded within an ecosystem that marries software and custom silicon, they’re effectively bound to it. Weaving a software stack into silicon creates a mechanism where systems only function at peak within the provider’s domain—an irreversible setup where the hardware dictates the software. After companies build their applications and workflows around this custom environment, leaving isn’t simply a matter of transferring data or software licenses, but of re-engineering from square one—like trying to transfer save files from PlayStation to Xbox.


And as the hardware iterates, the integration grows more seamless, making exit costs soar higher with each update. With increased performance comes cemented loyalty—as with most vertically integrated ecosystems, switching out means starting over.


2. Custom Chips, Custom Rules: Killing Competition by Design


Custom silicon fragments the AI ecosystem into walled gardens where the interoperability that defined the early AI boom goes to die. With the elevated efficiency and control offered by custom chips, OpenAI could set standards (performance bars, features, compatibility requirements) that revolve around its proprietary systems and patents, marginalizing open-source initiatives and smaller players who can’t compete with hardware-dependent advancements. If you’ve got an idea, you might need to take it to OpenAI and grovel for hardware support, the way techies are currently lining up outside NVIDIA’s revolving doors.


Custom silicon creates a knowledge gap as well as an access gap. By designing hardware that is optimized for proprietary AI architectures, OpenAI not only accelerates its own models but also builds unintelligible ecosystems that competitors can’t reverse-engineer or replicate effectively. This learning asymmetry effectively blocks competitors from learning or innovating within the same paradigm, weaponizing exclusivity to slow industry-wide progress.


Exclusive hardware becomes an innovation blockade, enabling providers to control the pace of progress and ensuring that they remain at the center of AI’s next chapter: one where talent and creativity succumb to the muscle of raw capital.


3. Behind Closed Chips: Opacity Secures Domination


Unlike software, which can be reverse-engineered or forked, hardware-based processes are physically opaque and difficult to deconstruct without burning through wads of cash. This impenetrable layer of abstraction acts as the ultimate stronghold, fortifying OpenAI’s claim on the AI Iron Throne.


Sidestepping Scrutiny: Accountability Without Answers


Custom silicon offers a convenient shield from the probing eyes of regulators and the hardball questions of media and advocacy groups, adding layers of complexity that make the inner workings of systems harder to explain and even harder to audit. Companies can argue that certain outputs aren’t deliberate design flaws but byproducts of the hardware-software interaction, deflecting scrutiny by pointing to the system’s inherent opacity.


Predictable performance parameters could also be baked in to reduce system variability for deployment in specific critical environments. This opacity ensures that companies don’t need to reveal trade-offs or vulnerabilities in their models, especially in industries like healthcare, finance, or defense where reliability is paramount.


Internal Insurance: Protecting The Crown Jewels


With most AI companies reliant on a distributed workforce, contractors, or cloud infrastructure providers, the risk of intellectual property leakage grows. Thanks to the universality of programming languages and frameworks, software is inherently portable and replicable. By contrast, hardware development is highly contextual, reliant on specialized, siloed expertise and access to specific manufacturing pipelines, processes, and facilities—this compartmentalization means no single engineer carries enough knowledge or resources to whip up the magic for a competitor. By welding innovations into chips, OpenAI ties its IP to infrastructure rather than individuals, minimizing the risk of losing competitive advantage when engineers hand in their resignations.

The Network Effect: Turning Perception Into Reality


By making the inner workings of AI systems inaccessible, OpenAI ensures only it can define and control the narrative of its capabilities. Much like how NVIDIA’s GPUs became synonymous with AI performance thanks to benchmarks optimized for their architecture, OpenAI could create its own metrics tied to its silicon, framing incremental improvements as game-changing. The lack of transparency also means selective performance milestones (e.g., “5x faster inference”) achieved through minor hardware optimizations can be passed off as genuine breakthroughs, reaping the benefits of being perceived as a pioneer while concealing trade-offs or limitations.


With no way to benchmark or validate claims, customers, investors, and the media are left to trust the company’s PR spiel. A lie can travel halfway around the world while the truth is still putting on its shoes: soon we’re all buying into this rigged version of “innovation”, handing over the cash and headlines, and fabricated dominance becomes tangible reality.

This illusion is as much a branding strategy as a technical one, transforming opacity into a tool for sustained market leadership.




2025 Outlook: There Will Be Innovation, Just Not For You


Hardware’s a bottleneck, but tech’s never slowed down for a speed bump. Just don’t be fooled—true advancements will serve to consolidate corporate influence and competitive control, while novelties ushered to commoners and companies will be data grabs disguised in layers of PR.


Consumer Toys: Data, Distraction, And Your Guard Dropped


Think the hardware development workload’s going to stall consumer operations? Nope, things will appear to speed up, because tech still needs us handing over our behavioral data in exchange for “progress”. And now that our grandparents are finding amusement in chatbots, tech’s going to pivot from trying to captivate you with butlers to gimmicks that feel like they’re straight out of “Back To The Future”. (Don’t worry, tech will periodically give those quirky assistants a facelift to keep them fresh on our radar. OpenAI is reportedly unveiling an AI agent called “Operator” in January 2025. Sources indicate that Operator will interact directly with your computer, functioning as both a workflow-enhancement and web-browsing tool, automating tasks and streamlining the online experience. So, a desktop spy.)


The big dogs will pivot from software to sleek gadgets to engage the sophisticated—Altman just poached Orion’s former hardware lead Caitlin Kalinowski, Zuck’s working on robot hands, and even Cook’s chewing on the thought of smart home devices. Wall Street will take the absurd to market, from outlandish products like Friend’s creepy necklace to questionable apps like Daze.


The incessant hype and headlines won’t just push us to buy; they’ll erode our defenses, priming us to happily embrace whatever the hardware-augmented AI future dishes out.


Enterprise Tools: Scaling Depth, Not Breadth


The real money lies in the wallets of institutions, not individuals. But enterprises will only adopt solutions that hit them where it hurts, which means AI needs to dig beyond surface-level pain points. Next year won’t be about general-purpose tools that soften businesses to the idea of AI, but domain-specific models that expose algorithms to all the nitty-gritty intricacies of each sector, department, team, and employee.


Case in point: Microsoft, in collaboration with Siemens, Bayer, and Rockwell, has just launched a set of AI models to address specific challenges in manufacturing, agriculture, and financial services. Niche-specific AI startups are also raking in the cash—Breakr for music marketing, Dreamwell for influencer automation, Beeble for VFX, and that’s just from the past couple of months. Analysts predict that vertical AI’s market capitalization will be at least 10x the size of legacy vertical SaaS.


Guarded Genius: Pioneering For The Prestigious


Tech’s saving their horsepower for transformative advancements that’ll bind their futures with those of the elite: corporate behemoths and governments. OpenAI’s presenting a blueprint for US AI infrastructure to Trump, and Anthropic’s just partnered with defense contractor Palantir to "process vast amounts of complex data rapidly, elevate data-driven insights, identify patterns and trends more effectively, streamline document review and preparation, and help US officials to make more informed decisions in time-sensitive situations." Microsoft’s partnering with BlackRock, IBM’s in bed with AWS, and Google’s gone to Saudi.


What’s the deal behind the partnerships? Only time will tell.




Final Thoughts: A Case Study For The PR Hall Of Fame


The tale of AI for all—a promise of shared innovation and universal access—always felt too good to be true. But in retrospect, could the same visionaries who cracked the code of intelligence itself truly have been blindsided by the inevitability of bespoke silicon and billion-dollar buy-ins?


This author is unconvinced.


And now, by framing the hardware pivot as a heroic response to AI’s growing demands, the likes of OpenAI neatly sidestep the reality that they’ve been building toward exclusivity from the start.


“Open” was always a branding exercise, a PR plot years in the making, and democratization just the slogan.


Perhaps the most revolutionary aspect of AI may not be the technology, but the narrative we were sold.