The World Is Ready for a New Type of Operating System

by Theo Priestley, November 14th, 2023

Too Long; Didn't Read

This visionary article challenges the status quo of space industry software, advocating for a decentralized operating system infused with AI. Breaking away from outdated monolithic approaches, the proposal emphasizes modularity, real-time capabilities, and open-source collaboration. The goal: to revolutionize space exploration by creating a flexible, resilient, and innovative software foundation for the challenges that lie beyond Earth.


Around this time last year, I wrote “The Metaverse Needs An Operating System”, a deep dive into why we need new software foundations to handle the shift in how we interact through spatial computing. It explored concepts new and old, but the conclusion was that where we are heading requires, in many respects, a rethink of OS design from the ground up.


We simply cannot move forward while thinking is still stuck in kernel design and operating system architecture from the mid-1980s and 90s. Now, with the rise of AI and Large Language Models, data sovereignty and user control, identity, and the age-old argument of ‘proprietary vs open source’, the question of rethinking the OS of yesterday for tomorrow rears its head again.


What I wanted to do was take last year’s exploration and shift the thinking to another industry, one which has received a lot of attention and is touted to be the future of humanity: the space industry. There probably isn’t a sector where the software and hardware demands are as stringent, or the security requirements as severe, given the environment they operate in (save for defence and aerospace, which are of course part of the same overall sector anyway).


Rather large caveats: what follows is purely conceptual, based on desk research in fields in which I am not an expert, but with a fundamental belief (whether right or wrong) that things need to change. I stick to the core tenets of decentralisation, open source, and modularity on purpose. I will avoid altogether the questions around the new CPU and silicon architectures needed to really take advantage of these changes because, let’s face it, we’re stuck with the same thinking precisely because of OS design. It’s a two-fold problem.


No More Monoliths

The space industry, for all its innovation in the last decade or so thanks to SpaceX, still sits on operational software principles that hark back to the 1960s, and this isn’t a foundation on which to build the future of space exploration. (“The [Starlink] constellation has more than 30,000 Linux nodes (and more than 6,000 microcontrollers) in space right now,” said Matt Monson in a Reddit AMA back in 2020.) That’s a lot of code sitting on a fragmented architecture originally conceived in the 90s.


I’m a massive supporter of and believer in decentralization, despite the ideology being usurped by moronic web3 VCs and cryptocoin startups. Fundamentally, a distributed and decentralized architecture points the way to a new internet and a new way of writing software, one which would take us beyond our Earthly confines.


The operating system landscape, particularly within the space sector, is characterized by a patchwork of proprietary and open-source systems, each with its own set of interfaces and protocols. This lack of standardization has led to inefficiencies, increased costs, and complexities in mission design. Something new would directly address these challenges by providing a cohesive platform that ensures compatibility and seamless communication between diverse hardware and software components through a unique approach: a combination of decentralized and real-time (RTOS) architectures.
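
To make the “seamless communication” point concrete, here is a minimal sketch of the kind of common message envelope such a platform could standardise on, so that diverse components interoperate without bespoke point-to-point integrations. It is purely illustrative: the field names, topics, and the use of JSON are my own assumptions, not a proposed wire format.

```python
# Illustrative only: a single message envelope that heterogeneous modules
# (flight software, payloads, ground links) could agree on, so the OS can
# route traffic between them without bespoke point-to-point integrations.
from dataclasses import dataclass, field, asdict
import json
import time
import uuid

@dataclass
class Envelope:
    source: str        # logical node name, e.g. "star-tracker"
    topic: str         # routing key, e.g. "attitude.telemetry"
    payload: dict      # topic-specific content, schema agreed per topic
    msg_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    sent_at: float = field(default_factory=time.time)

    def serialize(self) -> bytes:
        """One wire format every module can parse, whatever it is written in."""
        return json.dumps(asdict(self)).encode()

    @staticmethod
    def deserialize(raw: bytes) -> "Envelope":
        return Envelope(**json.loads(raw))

# Any subscriber only needs to understand the envelope, not the sender.
msg = Envelope("star-tracker", "attitude.telemetry", {"q": [0.0, 0.0, 0.0, 1.0]})
assert Envelope.deserialize(msg.serialize()).topic == "attitude.telemetry"
```

The point is less the format itself than the contract: every module, whatever it runs on, agrees on one envelope and one serialization, and the OS can route the rest.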


The concept of a decentralized OS isn’t new; Bell Labs created Plan 9 back in the 1980s, which showed the path forward, and it’s time to pick up where they left off and complete the work.


For the uninitiated, Plan 9 from Bell Labs is a distributed operating system that originated from the Computing Science Research Center (CSRC) at Bell Labs in the mid-1980s and built on UNIX concepts first developed there in the late 1960s. Since 2000, Plan 9 has been free and open-source. The final official release was in early 2015. Plan 9 replaced Unix as Bell Labs’s primary platform for operating systems research. It explored several changes to the original Unix model that facilitate the use and programming of the system, notably in distributed multi-user environments.
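
To give a flavour of what made Plan 9 different: every resource is presented as a file server, and each process assembles its own namespace by binding the servers it needs onto paths, whether they live locally or across a network. The sketch below is a loose, in-memory imitation of that idea, not the real 9P protocol; the class and method names are mine.

```python
# A loose, in-memory imitation of Plan 9's model: resources are file servers,
# and each process builds its own namespace by binding servers onto paths.
# This is not the 9P protocol; the names are invented for illustration.
class FileServer:
    """Anything that can answer read/write requests for named files."""
    def read(self, name: str) -> bytes:
        raise NotImplementedError
    def write(self, name: str, data: bytes) -> None:
        raise NotImplementedError

class GyroServer(FileServer):
    """A hypothetical instrument exposing its state as files."""
    def __init__(self):
        self.files = {"rate": b"0.01 0.00 0.02"}
    def read(self, name):
        return self.files[name]
    def write(self, name, data):
        self.files[name] = data

class Namespace:
    """Per-process view of the world: path prefix -> file server."""
    def __init__(self):
        self.mounts = {}
    def bind(self, prefix: str, server: FileServer):
        self.mounts[prefix] = server
    def read(self, path: str) -> bytes:
        prefix, _, name = path.rpartition("/")
        return self.mounts[prefix].read(name)

ns = Namespace()
ns.bind("/dev/gyro", GyroServer())   # could just as easily be a remote server
print(ns.read("/dev/gyro/rate"))     # b'0.01 0.00 0.02'
```

In a spacecraft context, a remote instrument, a ground-station link, or another node’s storage could all be bound into a process’s namespace the same way, which is exactly the property a decentralized space OS wants.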


Why care about this at all? Because the concepts behind Plan 9 (and, to a certain extent, GridOS, mentioned in the original Metaverse OS article) point the way to a radical shift in how we need to think about operating system design and kernel architecture, especially in the space industry.


What are the requirements as I see them?


  • Decentralized and Modular: something new should be designed to be decentralized, meaning it can operate across a distributed network, reducing single points of failure and potentially enhancing resilience and fault tolerance, which is critical for space-based operations.


  • Customisability: thanks to a modular microkernel architecture, it should allow for greater flexibility. Modules can be added or removed as needed for different applications or missions, making it highly adaptable to various requirements (see the sketch after this list).


  • Real-Time Capabilities: integrating real-time processing capabilities, crucial for time-sensitive applications such as those found in space exploration and satellite operations, addresses some of the immediate concerns about decentralization and node communication.


  • Community-Driven and Open Source: it has to be built on an open-source model, encouraging community contributions and making the source code available for review, which can foster innovation and trust.


  • Compatibility and Transition: it needs to be designed with compatibility in mind, so it supports existing hardware platforms and can run legacy applications within secure modules, easing the transition from traditional operating systems.
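
Here is the minimal sketch referenced above of the modular idea: mission capabilities register against a small kernel-like core and can be loaded or unloaded at runtime without touching the rest of the system. The `Module` interface, the `Core` registry, and the module names are assumptions for illustration, not a real API.

```python
# Illustrative sketch: mission capabilities implement a small Module contract
# and register with a kernel-like Core that can load and unload them at runtime.
from typing import Protocol

class Module(Protocol):
    name: str
    def start(self) -> None: ...
    def stop(self) -> None: ...

class ThermalControl:
    name = "thermal-control"
    def start(self) -> None:
        print(f"[{self.name}] started")
    def stop(self) -> None:
        print(f"[{self.name}] stopped")

class Core:
    """Stands in for the microkernel's module registry."""
    def __init__(self):
        self.modules: dict[str, Module] = {}
    def load(self, module: Module) -> None:
        self.modules[module.name] = module
        module.start()
    def unload(self, name: str) -> None:
        self.modules.pop(name).stop()

core = Core()
core.load(ThermalControl())      # add a capability for this mission...
core.unload("thermal-control")   # ...and drop it when no longer needed
```

A real microkernel would of course add isolation, scheduling guarantees, and message passing between modules; the point here is only the shape of the contract.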


A design this ambitious would set out to dismantle the incumbent jigsaw puzzle of Red Hat, various Linux flavors, embedded software, and VxWorks by Wind River by making a leap in OS architecture and opening up a new and uncontested market space. By proving the design in one of the toughest markets, you would then be poised to replicate it and work backwards into the adjacent industries that will be necessary for space exploration, including mining, manufacturing, IoT, and other heavy industries that all sit on the same old software principles.


Where Windows is a general-purpose productivity operating system, this would be its opposite: a highly tuned operating platform for the future of humanity in space.


Plan 9 From Outer Space?


If you wanted to take this a stage further, especially as Space Domain Awareness (SDA) is a critical initiative for government agencies, then any new operating system built for space exploration could also become a critical component for both civilian and military entities that operate assets in space.


  • Enhanced Data Integration: its modular nature allows for the seamless integration of various sensors and data sources. This capability is crucial for SDA, where data from radar, telescopes, satellites, and other sensors must be synthesized to provide a comprehensive picture of the space environment.


  • Improved Data Processing and Analysis: the decentralized aspect of a new OS can facilitate distributed data processing, reducing the time it takes to analyze vast amounts of space-domain data. Faster data processing leads to more timely responses to threats such as space debris, adversarial maneuvers, or natural phenomena.


  • Resilience and Redundancy: for military operations, resilience is critical, and a decentralized structure offers greater protection against cyber-attacks and system failures. If one node fails, others can take over, ensuring continuous SDA operations (a toy illustration follows this list).


  • Interoperability: as military operations often involve coalitions, a decentralized OS can provide standardized communication protocols and interfaces, enabling interoperability between systems of different countries and services, which is essential for joint SDA efforts.


  • Adaptability and Scalability: the modular design of a decentralized OS allows for rapid adaptation to new sensors, technologies, or mission requirements. As the space domain evolves, the system can incorporate new modules to address emerging SDA needs without an overhaul of the entire platform.


  • Security: with a new kernel architecture, security protocols can be tightly integrated into each module, providing robust security measures that are vital for military operations. The decentralized nature also means that an attack on one module is less likely to compromise the entire system.

  • Cost Efficiency: standardizing on a modular OS can lead to cost savings by reducing the need for custom software development for each new SDA initiative. This economic efficiency can free up resources for other critical defense needs.
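
As the toy illustration of the resilience point above: several SDA processing nodes share the same work, each reports a heartbeat, and when one goes quiet the survivors pick up its tracking batches. Everything here, the node names, the timeout, the assignment policy, is an assumption made for the sake of the sketch.

```python
# Toy illustration of redundancy: processing nodes report heartbeats, and
# tracking work is spread across whichever nodes are still alive.
import time

class Node:
    def __init__(self, name: str):
        self.name = name
        self.last_beat = time.time()
    def heartbeat(self) -> None:
        self.last_beat = time.time()
    def alive(self, timeout: float = 5.0) -> bool:
        return time.time() - self.last_beat < timeout

def assign(track_batches: list[str], nodes: list[Node]) -> dict[str, str]:
    """Round-robin batches over live nodes; fall back to all nodes if none respond."""
    live = [n for n in nodes if n.alive()] or nodes
    return {batch: live[i % len(live)].name for i, batch in enumerate(track_batches)}

nodes = [Node("ground-eu"), Node("ground-us"), Node("orbital-relay")]
nodes[1].last_beat -= 60   # simulate a node that has stopped responding
print(assign(["debris-field-7", "geo-belt-scan"], nodes))
# work lands only on the two surviving nodes
```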


Artificial Intelligence Points Another Way Forward

Prompt Your Way To A New OS?

Now, let’s discuss the future of operating systems like Windows and Linux in a world of artificial intelligence. Aren’t monolithic OSes redundant when we can use AI to build applications, browse the web, answer complex questions, conduct research, and do the grocery shop with automated agents at our beck and call?


I would say so. The approach right now is just to integrate LLMs and AI into various parts of the OS or productivity platforms rather than architect AI from the ground up to be integral. Subtle difference.


The integration (more like shoehorning) of AI into operating systems like Windows does indeed prompt the question of whether a complete redesign, starting from the kernel upwards, is necessary to fully harness AI’s capabilities in this new era, so we need to take a look at what might be required.


  • Deep Integration vs. Superficial Add-Ons: current operating systems could integrate AI as an additional layer, enhancing certain functionalities. However, this approach may not leverage the full potential of AI. A redesign from the kernel level could embed AI more deeply into the core functions of the OS, leading to a more integral approach.


  • Resource Management and Scheduling: traditional operating systems are not primarily designed for the complexities of AI workloads. Redesigning the kernel could allow for more efficient management of resources (like CPU, GPU, and memory) for AI processes, optimizing performance and energy consumption (a sketch of what this could look like follows this list).


  • Security and Privacy: AI introduces new security and privacy challenges. A kernel redesigned with AI in mind could incorporate more advanced security protocols to handle these challenges, especially in processing large volumes of sensitive data.


  • Real-Time Processing and Edge Computing: AI applications, particularly those involving machine learning and real-time data processing, can benefit from low-latency and high-speed processing. A kernel-level redesign could optimize these processes, especially for edge computing scenarios.


  • Autonomous Operation and Self-Healing: an AI-driven kernel could enable the operating system to perform autonomous optimization and self-healing tasks, predicting and preventing system failures, and optimizing performance without human intervention.


  • Hardware Acceleration: modern AI applications often rely on specialized hardware like GPUs and TPUs. A kernel designed with these in mind could provide better support and optimization for such hardware, enhancing AI application performance. This is much like what Graphcore set out to do with its IPU, though it has fallen foul of product-market fit and the high capital investment required to continue.


  • Backward Compatibility and Transition: a significant challenge in redesigning the kernel for AI is maintaining compatibility with existing applications and systems. This transition would require careful planning and gradual implementation.


  • Adaptive Behaviour: the system could adapt its behavior based on the environment and usage patterns. For instance, it could optimize itself for energy efficiency, performance, or security, depending on the context.
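
As the sketch promised in the resource management item above, here is one way accelerator- and deadline-aware scheduling could look in practice: jobs declare whether they need an accelerator and how soon they must run, and the scheduler orders them accordingly. The job names, the soft-deadline model, and the policy itself are illustrative assumptions, not a proposed scheduler.

```python
# Illustrative policy only: jobs declare whether they need an accelerator
# and how soon they must run; the scheduler orders them by deadline and
# holds accelerator-bound jobs back while the accelerator is busy.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    needs_accelerator: bool   # e.g. GPU/TPU/NPU inference or training
    deadline_s: float         # soft deadline, seconds from now

def schedule(jobs: list[Job], accelerator_free: bool) -> list[Job]:
    runnable = [j for j in jobs if accelerator_free or not j.needs_accelerator]
    return sorted(runnable, key=lambda j: j.deadline_s)

jobs = [
    Job("telemetry-compression", False, 0.5),
    Job("vision-inference", True, 0.2),
    Job("log-rotation", False, 30.0),
]
print([j.name for j in schedule(jobs, accelerator_free=True)])
# ['vision-inference', 'telemetry-compression', 'log-rotation']
```

A real AI-first kernel would fold in preemption, memory pressure, thermal limits, and learned predictions of job behaviour; this only shows where such signals would plug in.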


If we take a revolutionary approach to operating system design, combining AI-first architecture, kernel-level AI integration, and decentralization as core principles, a new kernel and OS architecture would differ significantly from traditional systems like Windows and Linux. Of course, such a shift would also require overcoming significant bumps in the road in terms of development, adoption, and compatibility with existing tech and infrastructure. No mean feat, but if you approached building an OS like this as a Blue Ocean Strategy, and were patient enough to nurture it over a couple of decades, there is a far bigger game and prize to aim for.


Let’s Go Swimming

Blue ocean strategy is the simultaneous pursuit of differentiation and low cost to open up a new market space and create new demand. It is about creating and capturing uncontested market space, thereby making the competition irrelevant. It is based on the view that market boundaries and industry structure are not a given and can be reconstructed by the actions and beliefs of industry players.


Red ocean, blue ocean

Red oceans are all the industries in existence today — the known market space, where industry boundaries are defined and companies try to outperform their rivals to grab a greater share of the existing market. Cutthroat competition turns the ocean bloody red. Hence, the term ‘red’ oceans.


Blue oceans denote all the industries not in existence today — the unknown market space, unexplored and untainted by competition. Like the ‘blue’ ocean, it is vast, deep and powerful in terms of opportunity and profitable growth.


A perfect example of this was when Nintendo released the Wii.


The Nintendo Wii launched in 2006, and at its heart was the concept of value innovation. This is a key principle of blue ocean strategy, which sees low cost and differentiation pursued simultaneously.


To reduce costs, Nintendo did away with the hard disk and DVD functionality found in most game consoles and reduced the processing quality and graphics. At the same time, Nintendo introduced a wireless motion control stick to differentiate itself against the market offering. This allowed the company to offer a range of new features and benefits that hadn’t been seen in the world of gaming previously such as the ability to use a games console to get fit or to play in a larger social group.


By pursuing value innovation, Nintendo could go beyond competing against the likes of PlayStation and Xbox in a crowded and fiercely competitive red ocean. Instead, it was able to open up an entirely new market. The Nintendo Wii, with its innovative new features and affordable price point, appealed to an entirely new and expansive market: a blue ocean spanning non-gamers, the elderly, and parents with young children.


A new operating system that adopted the same approach would obliterate an incumbent market riddled with technical debt and legacy, one unable to respond because of the effort required to change direction.


Where Do We Go From Here?

It’s not a simple or small effort by any means. The reason I chose AI and space is that they are complementary approaches to the same problem, using the same answers. We’re building on concepts and ideas that have never been pulled together like this before, but they could become the building blocks of the next 50–100 years of software architecture, because they have to be fit for purpose for the brave new world that’s coming at us fast.


Take the current experimental deployment of IPFS (the InterPlanetary File System) with Lockheed Martin. This mission is the first of its kind to evaluate in-space use cases for decentralized storage. It will be hosted aboard Lockheed Martin’s self-funded LM 400 Technology Demonstrator, a software-defined satellite about the size of a refrigerator designed to support a wide range of missions and customers. Once the spacecraft is in orbit, it will use its SmartSat™ software-defined satellite technology to upload and perform the IPFS demonstration.
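
The mission details above are Lockheed Martin’s; the snippet below is only a toy illustration of the core idea IPFS is built on, content addressing: a block of data is named by the hash of its bytes, so any node that holds the bytes can serve them and the requester can verify what it received. Real IPFS wraps this in multihash-encoded CIDs and a peer-to-peer block exchange layer; none of that is shown here.

```python
# Toy content-addressed store: a block's address is the hash of its bytes,
# so identity and integrity come for free wherever the block happens to live.
# Real IPFS uses multihash-encoded CIDs and a peer-to-peer block exchange.
import hashlib

store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    assert hashlib.sha256(data).hexdigest() == cid   # self-verifying retrieval
    return data

cid = put(b"orbital telemetry frame 0042")
print(cid[:16], get(cid))
```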


We’re experimenting with decentralised technologies all the time, but we seem hesitant to make them a core foundation of the platforms we build going forward.


These are conceptual frameworks and ideas I’ve been kicking around, and god knows whether they’ll stick, but if there’s anyone out there nodding in violent agreement, whether you’re a software engineer or an investor, beat down my door and let’s talk, because I have a desire to make this a reality.


Also published here.