VERSES AI has developed a unique methodology that combines Active Inference AI with the Spatial Web Protocol: a common network of Intelligent Agents that learns from real-time data about real-world events as they occur and unfold over time.
OpenAI, by contrast, takes a multi-modal generative Machine Learning approach built on LLMs trained on large corpora of text and text/image pairs.
In a conversation of just three questions, ChatGPT4 provided a solid basis for understanding the biggest issues facing Machine Learning LLMs, the advantages of Active Inference AI over LLMs, and how the next evolution of the internet protocol, the Spatial Web, combined with Active Inference AI, can function as a nervous system for a company, a city, or even larger systems.
“Describe some of the biggest problems with LLMs like ChatGPT and Bard.”
ChatGPT Replies:
Alignment: Aligning large language models (LLMs) with human values is challenging. The model learns from a broad dataset, typically without specific instructions about what is ‘good’ or ‘bad’. Misalignment can lead to outputs that are harmful or offensive, and it’s hard to perfectly align these models with any specific set of values.
Grounding: LLMs lack a true understanding of the world. They generate text based on patterns they’ve learned, not on a deep comprehension of the concepts involved. This lack of grounding can lead to responses that sound plausible but are nonsensical or factually incorrect.
Fairness: Bias in the training data can lead to bias in the model’s responses. If the training data contains sexist, racist, or other unfair biases, the model can learn and reproduce these biases, even unintentionally.
Auditability: It’s difficult to know exactly why a model made a specific prediction or decision. This lack of transparency (also known as the “black box” problem) can make it hard to audit the model’s decisions, particularly in sensitive areas like healthcare or finance.
Cost to create: Developing a state-of-the-art LLM requires significant computational resources, not to mention the time and expertise required to train and fine-tune the model. This can make it prohibitively expensive for smaller organizations or individual researchers.
Cost to operate: In addition to the cost of creation, LLMs require substantial resources to operate, particularly for models with billions of parameters. This can include both computational costs (like server time and electricity) and human costs (like monitoring and maintenance).
Hallucination: Sometimes, LLMs generate outputs that are completely fabricated or “hallucinated”. This can range from minor inaccuracies to entirely false statements, which can be problematic especially if the model is relied upon for accurate information.
Jailbreaking: Users might attempt to “jailbreak” the model, or manipulate it to generate harmful or inappropriate content. This could be done through carefully crafted inputs, or by trying to trick the model’s safeguards.
Guardrails: Implementing effective guardrails to prevent misuse is a challenge. While measures can be taken to avoid generating harmful content, these may also limit the model’s usefulness by preventing it from generating certain types of legitimate content.
Generalization: While LLMs are good at interpolating from the data they’ve seen, they can struggle with extrapolation, or making predictions about scenarios that are significantly different from their training data. This can lead to poor performance in novel situations, or when asked to generate content about recent events or advancements that occurred after their training cut-off.
Obsolete Data: An LLM is created on a certain date. It’s something like a magic box filled with billions of learned values called parameters. GPT-2 had 1.5 billion, GPT-3 had 175 billion, and GPT-4 has an estimated 1 trillion. Each of these is a separate system and has to be created anew for each generation. If the creation date is June 15, 2022, then the LLM is unaware of events that have happened after that date. This can create errors, as its responses will not take into account events in the world or breakthroughs in science.
“There is a new type of Artificial Intelligence from VERSES.AI called Active Inference. This new form of AI does not use Big Internal Data like an LLM.
Instead, it can access REAL TIME data about the world to update its internal World Model. Describe the advantages of Active Inference AI over an LLM like ChatGPT.”
ChatGPT Replies:
Active Inference and Large Language Models (LLMs) are quite different types of AI. Active Inference AI from VERSES.AI has a number of unique characteristics that can solve many of the problems found in LLMs like ChatGPT.
Alignment: Active Inference models, because they maintain a generative model of the world, can have a more robust alignment with human values. Their behavior is driven by a combination of their prior beliefs (encoded in the model) and the data they observe, allowing for more explicit encoding of ethical and value-based considerations.
Grounding: A key problem with LLMs is their lack of grounding in a physical world model. Active Inference models maintain a generative model of the world which is constantly updated based on new observations. This allows them to have a more ‘grounded’ understanding of the world, which can lead to more accurate and contextually appropriate responses.
Fairness: Active Inference models can also address issues of fairness in AI. Because they do not rely exclusively on historical data (which may contain biases), they can make decisions based on a more balanced view of the world. However, this does not eliminate the problem entirely, as biases can still be encoded in the model’s prior beliefs.
Auditability: Active Inference models, because of their generative nature, may offer better auditability than LLMs. The decision-making process is based on a combination of prior beliefs and observed data, providing a more transparent view of why a certain decision was made.
Cost to create and operate: Active Inference models do not require massive datasets for training, which significantly reduces the cost to create. The VERSES.AI network uses the World Wide Web model where anyone can create a website and host it wherever they choose. The Spatial Web enables anyone anywhere in the world to create and network an AI application which means that there is no longer a large upfront cost to VERSES — rather the developer of the AI application provides their own data. Moreover, Active Inference operates effectively in real-time by networking many AI applications together — so the cost to operate is a small fraction of an LLM.
Hallucination: Since Active Inference models are constantly updating their world model based on real-time data, they are less likely to ‘hallucinate’ or make up information that is not grounded in their current understanding of the world.
Jailbreaking and Guardrails: The real-time, generative nature of Active Inference models can make them more robust to attempts at jailbreaking. They can be designed with guardrails in mind, making it harder for users to manipulate them into producing harmful or inappropriate content.
Generalization: Active Inference models are better equipped to handle novel situations as they actively seek to minimize the difference between their predictions and the observed data. This allows them to continuously learn and adapt, making them more capable of generalizing to new situations.
Obsolete Data: Active Inference uses Real-Time data from sensors and cameras rather than BIG DATA stored in an LLM. This means that Active Inference AI is always being updated moment after moment so its data is never obsolete.
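The contrast between a frozen training snapshot and a continually updated world model can be illustrated with a toy example. The sketch below is purely illustrative and not VERSES’ actual implementation: a scalar Kalman-style filter shows how a belief (a mean plus an uncertainty) can track a live sensor stream instead of relying on a value fixed at training time.

```python
# Illustrative sketch only (not VERSES' code): a toy recursive Bayesian
# update. The agent's belief is a Gaussian (mean, variance) that is
# refined with every new sensor reading, so it never goes stale.

def update_belief(mean, var, obs, obs_var):
    """Fuse one new observation into the current Gaussian belief."""
    k = var / (var + obs_var)            # gain: how much to trust the reading
    new_mean = mean + k * (obs - mean)   # shift belief toward the observation
    new_var = (1 - k) * var              # fused belief is more certain
    return new_mean, new_var

# Belief starts out stale and uncertain (e.g. a temperature recorded long ago).
mean, var = 10.0, 25.0
for reading in [21.5, 22.0, 21.8, 22.1]:   # live sensor stream
    mean, var = update_belief(mean, var, reading, obs_var=1.0)

print(round(mean, 1))  # → 21.7 (belief has converged near the live readings)
```

Each update both moves the estimate toward the newest observation and shrinks the uncertainty, which is the basic mechanism behind “always being updated moment after moment.”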
“The Spatial Web creates a network of the Internet of Things (IoT), and this network of IoT can provide real-time data to the Active Inference AI acting like the perception system of a human being.
Describe how Active inference when combined with the Spatial Web could function as a nervous system for a company or an entire city.”
ChatGPT Replies:
Active Inference combined with the Spatial Web could indeed function as a “nervous system” for a company, a city, or even larger systems, in the sense that it could perceive, process, and respond to real-time information in a way that’s similar to how biological nervous systems operate.
Here’s how this could work:
Perception: The Spatial Web, through Internet of Things (IoT) devices, collects real-time data from the environment. These could be sensors measuring everything from temperature, light, and sound to more complex data like traffic flow, energy usage, human movement, and much more. Essentially, the IoT devices serve as the “sensory organs” feeding data to the AI.
Processing: The real-time data from the IoT devices is then processed by the Active Inference AI. The AI maintains a generative model of the world that it continually updates based on the incoming data, much like how our brains update our understanding of the world based on sensory input. This allows the AI to maintain a current, accurate “picture” of the state of the system it’s monitoring.
Response: Based on the processed data and its understanding of the world, the AI can then make decisions or recommend actions to optimize certain objectives or minimize free energy (a measure of surprise or prediction error). These actions could then be carried out by various systems connected to the Spatial Web.
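The perceive → process → respond cycle above can be sketched in a few lines. This is a hypothetical illustration, not production code: “free energy” is simplified here to a plain prediction error, and the agent acts to pull the sensed state of the world toward its preferred (predicted) state.

```python
# Hypothetical sketch of the perceive -> process -> respond loop, reduced
# to a single variable. Free energy is simplified to the prediction error;
# the action works to cancel that error. Names and numbers are invented
# for illustration.

def nervous_system_step(predicted, sensed, gain=0.5):
    """One cycle: compare prediction with sensor data, return the error and a corrective action."""
    error = sensed - predicted     # prediction error ("surprise")
    action = -gain * error         # act on the world to reduce the discrepancy
    return error, action

# Example: a city AI prefers a traffic density of 0.3; sensors read 0.9.
preferred = 0.3
density = 0.9
for _ in range(10):                       # repeated cycles drive the error down
    error, action = nervous_system_step(preferred, density)
    density += action                     # acting on the world (e.g. rerouting)

print(round(abs(density - preferred), 3))  # → 0.001
```

Each cycle halves the gap between the predicted and sensed state, so repeated perception–action loops steadily minimize surprise, which is the intuition behind the “nervous system” analogy.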
For example, in a city, this system could monitor traffic flow, energy usage, air quality, and other key metrics in real-time. It could then make decisions to optimize traffic routing, energy distribution, and environmental management based on the current state of the city and its understanding of the city’s dynamics.
In a company, it could monitor production, inventory, sales, and other key metrics, and make decisions to optimize resource allocation, supply chain management, and strategic planning.
In both cases, the combination of the Spatial Web and Active Inference AI could provide a powerful tool for autonomously managing complex systems, with the potential to greatly improve efficiency, responsiveness, and overall performance.
-end ChatGPT4 reply-
ChatGPT4 provided a point-by-point comparison that demonstrates the clear advantages that Active Inference AI has over LLMs.
Active Inference AI combined with the Spatial Web network enables a generative model of the world, grounded in actual occurrences and updated continually in real-time, giving the AI an efficient and accurate basis of evolving information with which to learn, adapt, and make decisions.
The Spatial Web provides a foundation for an evolving network that unites programmable spaces with human activity and real-world, real-time data insights, enabling and empowering a network of Intelligent Agents to perceive, interpret, and carry out actions with a high level of reliability and accuracy.
VERSES AI has developed an artificial intelligence methodology with capabilities that are simply not possible with LLMs: a system of programmable nested intelligence that can sense the world around us, with the potential to monitor and manage complex systems great and small.
Special thanks to Dan Mapes, President & Co-Founder, VERSES AI, and Director of the Spatial Web Foundation.
You can learn more about the Spatial Web Protocol and Active Inference AI by visiting the VERSES AI website: https://www.verses.ai and the Spatial Web Foundation: https://spatialwebfoundation.org.
Feature image by author