Towards Network-Neutral AI and Open Dialogue

by Empereur Pirate, June 30th, 2024


Unlike the contributory model that has underpinned the development of Internet content since the late twentieth century, conversational agents built on AI language models do not allow users to directly publish content, products, or innovations. AI software does not permit individuals to publicly comment on their query results, critique them, or, for instance, evaluate the quality of a commercial product recommended by a chatbot.


The problem lies in the training procedure of language models: currently, the choice of initial data used by conversational agents is controlled by private companies with commercial objectives. A truly participatory language model has yet to be invented, and we are still far from achieving it. Not only does the political will to pursue research in this direction seem nonexistent, even within the open-source community, but AI is currently being used to restrict internet users’ freedom of expression. The censorship policies applied to language models by the companies that distribute them are a striking example.


A conversational agent respecting the principle of network neutrality should provide, for each query, a detailed and quantified account of the various arguments in any given debate, without taking a definitive position and certainly without suppressing viewpoints, whether minority or conspiratorial. This impartiality should also apply to commercial product guides, for fundamental reasons related to competition law, freedom of enterprise, and, more generally, the quality of advice provided by chatbots. Thus, the qualitative performance of AI models is closely linked to their ability to include user contributions and comments transparently.
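To make the idea of a "quantified account" concrete, here is a minimal sketch of what such a response structure could look like. This is not an existing system; all names and fields are hypothetical illustrations of the principle that each viewpoint is reported with its estimated prevalence rather than collapsed into a single verdict:

```python
from dataclasses import dataclass, field

@dataclass
class Viewpoint:
    """One position in a debate, reported rather than arbitrated."""
    summary: str             # neutral restatement of the argument
    prevalence: float        # estimated share of sources holding it (0.0 to 1.0)
    source_count: int        # number of distinct sources supporting it
    is_minority: bool = False

@dataclass
class NeutralAnswer:
    """A quantified account of a debate, with no viewpoint suppressed."""
    query: str
    viewpoints: list[Viewpoint] = field(default_factory=list)

    def render(self) -> str:
        # Present every position, ordered by prevalence, without picking a winner.
        lines = [f"Question: {self.query}"]
        for v in sorted(self.viewpoints, key=lambda v: -v.prevalence):
            tag = " (minority view)" if v.is_minority else ""
            lines.append(
                f"- {v.summary}{tag}: ~{v.prevalence:.0%} of {v.source_count} sources"
            )
        return "\n".join(lines)
```

The point of the structure is that neutrality becomes a property of the output format itself: the agent's job is to estimate and disclose the distribution of arguments, not to select among them.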


The pre-selection of a dogmatic truth through the censorship of commercial language models seems essentially counterproductive with regard to qualitative performance criteria, respect for fundamental rights, innovation, and the incorporation of the participatory method, which was one of the main assets of the old Internet, including among companies like Google that built their growth on advertising revenue. On the one hand, institutional actors, associations, and civil society should intervene to implement and promote AI software that respects fundamental freedoms and network neutrality, in order to enhance the qualitative efficiency of new open and transparent language models. On the other hand, in terms of innovation and research, political will is also needed to integrate modalities of public and direct participation into the training algorithms of language models.


This means that the use of AI technologies in courts or within parliamentary assemblies would require a form of real-time algorithmic self-training, in which each new contribution modifies the collection of initial training data. Just as Web 2.0 attempted (without succeeding) to encourage everyone to publish contributions, or as dynamic databases personalized the old websites according to each user’s usage, we can anticipate a dynamic “AI 2.0” that publishes each user’s contributions and integrates them according to their qualitative evaluation. This research path leads us to wonder how a qualitative evaluation of content or commercial products could emerge from essentially statistical and quantitative language models. Aren’t conversational agents at risk of recommending the most frequently purchased products on average, or of favoring the most widespread ideological arguments among users?
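As a rough illustration of the direction this suggests, and not of any existing design, the sketch below (all names hypothetical) shows a contribution pipeline that publishes every submission openly, collects peer evaluations, and weights inclusion in the next training pass by assessed quality rather than raw frequency, which is one possible answer to the popularity-bias question above:

```python
import statistics
from dataclasses import dataclass, field

@dataclass
class Contribution:
    """A user submission, published openly and rated by peers."""
    author: str
    text: str
    ratings: list[float] = field(default_factory=list)  # peer scores, 0 to 5

    def quality(self) -> float:
        # Median resists brigading better than the mean.
        return statistics.median(self.ratings) if self.ratings else 0.0

class DynamicCorpus:
    """Training pool that weights entries by evaluated quality, not popularity."""

    def __init__(self, min_quality: float = 2.5):
        self.min_quality = min_quality
        self.contributions: list[Contribution] = []

    def publish(self, contribution: Contribution) -> None:
        # Every contribution is public immediately: nothing is suppressed.
        self.contributions.append(contribution)

    def next_training_batch(self) -> list[tuple[str, float]]:
        # Quality-weighted selection: a well-argued minority view can
        # outweigh a frequently repeated majority one.
        return [
            (c.text, c.quality())
            for c in self.contributions
            if c.quality() >= self.min_quality
        ]
```

The decisive design choice is that the weight attached to each text comes from human evaluation rather than from how often the text (or the product it praises) recurs in the data, decoupling statistical frequency from training influence.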


Surprisingly, the answer to these questions about the qualitative performance of generative AI software and its relationship to contributive creativity, in other words to collective intelligence, is already determined by the political and psychological choices of AI designers. Censorship mechanisms, cultural stereotypes embedded in training data, and the economic biases of software development companies represent significant obstacles to the qualitative efficiency of language models.