Algorithms are All Around Us, but Can They Be Trusted to Govern Us?

by Obyte, March 6th, 2025
Too Long; Didn't Read

Algorithms are being used by individuals, companies, and governments to make decisions already. Algorithms increase efficiency and reduce human participation, but they can become dystopian as well. These systems analyze personal data to predict behavior without people realizing it.

“Algocracy,” “government by algorithm,” or “algorithmic regulation” isn’t a precisely defined concept. However, we can sum it up like this: algorithms + governance (laws or rules of some kind). In this sense, we can talk of algocracy when some algorithm-based system, be it a smart contract, an Artificial Intelligence (AI), or anything else, controls, fully or partially, the decision-making processes of a platform, project, venture, or institution.


It’s very much not the same as e-government (governments using digital tools), and it poses its own set of potential issues. An algorithm is more than just a simple digital application; it is a structured set of precise instructions designed to tackle complex problems—while also having the potential to create new ones. In the novel Daemon (2006) by Daniel Suarez, for instance, we can see how an algorithm takes over the world secretly, even murdering people, after its creator passes away.


We haven’t reached such a dystopian level in our own world, but we might be building the path for it. Algorithms are being used by individuals, companies, and governments to make decisions already. And important decisions, at that.


Algorithms in Action


If you’ve been wondering: yes, cryptocurrencies work with algorithms. Smart contracts run on algorithms, and entire crypto networks are built on them. They use these complex mathematical instructions to replace expensive or untrustworthy human middlemen in verifying transactions. Algorithms also help people make collective decisions about their platforms by providing the infrastructure for voting (on-chain governance in crypto networks). These are just a couple of examples of what algorithms do, but that’s far from all they are up to.
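To make that idea concrete, here is a minimal sketch, in plain Python rather than any real network's code, of how a hash-linked chain of records lets anyone verify a transaction history without a trusted middleman: each record commits to the one before it, so tampering anywhere breaks every later link.

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    """Deterministic hash linking a record to its predecessor."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    """Build a list of records, each committing to the previous one."""
    chain = []
    prev = "0" * 64  # genesis marker
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"prev": prev, "payload": p, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Anyone can recompute every link; no trusted party needed."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["payload"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2"])
assert verify_chain(chain)
chain[0]["payload"] = "alice->bob:500"  # tampering breaks verification
assert not verify_chain(chain)
```

Real networks add signatures, consensus, and far more structure, but the core trick is the same: verification is a mechanical computation anyone can repeat.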


In 2017, police in Durham (UK) introduced the Harm Assessment Risk Tool (HART), an AI system that predicts the likelihood of a suspect reoffending. It classified individuals into risk categories, helping to decide whether they were detained or eligible for rehabilitation. Similar predictive policing methods, often powered by machine learning and data analysis, are used in the U.S. and China.



Smart cities provide another real-world application of algorithmic governance. AI-powered infrastructure, such as intelligent street lighting and automated traffic systems, optimizes resource use. Projects like the futuristic city “The Line” in Saudi Arabia are planning to integrate AI for proactive services. Governments are also incorporating AI in decision-making, from automating tax audits to using predictive models for social services. AI judges, tested in China and Estonia, can handle minor legal disputes, while predictive algorithms assist in judicial sentencing. Education sees similar automation with platforms like Knewton, which adjusts learning materials based on student performance.


Language models like ChatGPT or DeepSeek are built on algorithms. Google Search uses algorithms to rank results. YouTube, Netflix, and Spotify use algorithms to suggest personalized content, and Amazon does it to customize shopping suggestions. Algorithms are useful, and we’re already surrounded by them.


Dystopian Algorithms


Algorithms increase efficiency and reduce human participation, but they can become dystopian as well. One major concern is what Evgeny Morozov called “invisible barbed wire”—a system where algorithms subtly guide choices, restricting intellectual and social growth without people realizing it. These systems analyze personal data to predict behavior, nudging individuals toward certain actions while limiting exposure to alternative ideas or opportunities.


Since the constraints aren’t explicit, people assume they are acting freely, even though their options have been carefully curated by unseen forces. This quiet control can weaken freedom by reducing critical thinking and reinforcing pre-existing habits instead of fostering independent decision-making. In other words, people mindlessly follow suggestions from the algorithm, not knowing how the algorithm works, unaware that they might be manipulated, but afraid of taking alternative paths that could harm them.


The opacity of these decision-making systems is another pressing issue. Many algorithms function as black boxes, making critical choices without clear explanations. This lack of transparency is dangerous, especially when algorithms are trained on biased data. If an algorithm's training data reflects historical inequalities, it can perpetuate discrimination while presenting its decisions as objective.


For instance, predictive policing tools have been criticized for unfairly targeting marginalized communities, and AI-driven credit scoring systems have disproportionately disadvantaged certain groups. When these tools shape financial access, law enforcement, and employment, the risks of biased automation become significant.
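A toy sketch makes the mechanism visible. The numbers and group labels below are entirely hypothetical: a "neutral" model that simply learns historical approval rates will reproduce past disparities wholesale, rejecting everyone in the historically disadvantaged group regardless of individual merit.

```python
# Hypothetical historical loan decisions: (group label, approved?)
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 40 + [("B", False)] * 60

def approval_rate(records, group):
    decisions = [ok for g, ok in records if g == group]
    return sum(decisions) / len(decisions)

# A "neutral" model that just memorizes past base rates...
learned_rate = {g: approval_rate(history, g) for g in ("A", "B")}

# ...and recommends approval whenever the group's historical rate
# exceeds 50%. Group B is now rejected wholesale: the bias in the
# data becomes the policy, dressed up as an objective computation.
def model(group):
    return learned_rate[group] > 0.5

print(model("A"), model("B"))  # True False
```

Real systems are subtler (the group label is often inferred from proxies like postcode), but the failure mode is the same: the model's "objectivity" is only as fair as the history it learned from.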


Some Bad History


Real-world examples illustrate the harm poorly designed algorithms can cause. In 2018, the Dutch government deployed the algorithmic system SyRI to identify potential welfare fraudsters, flagging thousands of people for investigation without clear justification. After public backlash, a Dutch court halted the system in 2020 for violating human rights.


Similarly, in 2021 in the US, ATLAS software was used to evaluate immigration applications, drawing criticism for its opaque and potentially discriminatory decisions and for its role in flagging citizens for denaturalization. In the UK, an algorithm assigned student exam grades in 2020, favoring those from wealthier schools while penalizing others. Widespread protests forced the government to reverse the decision, demonstrating how unchecked algorithmic control can directly impact lives.

While algorithms can streamline decision-making, their misuse can entrench systemic issues, limit freedom, and erode trust. Without transparency, accountability, and ethical oversight, centralized algorithms risk becoming tools of control rather than empowerment.

Decentralized Justice

So far, at least, the most tyrannical algorithms have come from the centralized world. It’s always a central party (a company or government) controlling the whole thing to pursue its own dubious purposes, or breaking things through sheer negligence. Luckily for us, algorithms can still be used to advance freedom and justice, especially if they’re decentralized. As you may have guessed, most crypto algorithms are open-source and decentralized, available for everyone to check and use. And we already have some algorithmic systems for decentralized justice.


A key element in decentralized governance, for instance, is the Decentralized Autonomous Organization (DAO), which operates using smart contracts—self-executing agreements. DAOs allow members to participate in decision-making through voting, ensuring that control is distributed among participants rather than a central entity. This is a type of on-chain governance, where rules and decisions are executed through DLT-verified processes, and it helps maintain transparency and security while preventing manipulation by powerful individuals or organizations.
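In practice, a DAO vote often boils down to token-weighted tallying. Here is a simplified Python sketch of that idea; the names, balances, and quorum rule are hypothetical, not any particular DAO's contract logic.

```python
def tally(votes, balances, quorum=0.5):
    """Token-weighted vote count.

    votes:    address -> 'yes' or 'no'
    balances: address -> token weight
    quorum:   minimum fraction of total supply that must vote
    """
    total = sum(balances.values())
    yes = sum(balances[a] for a, v in votes.items() if v == "yes")
    no = sum(balances[a] for a, v in votes.items() if v == "no")
    if (yes + no) / total < quorum:
        return "no quorum"
    return "passed" if yes > no else "rejected"

balances = {"alice": 60, "bob": 30, "carol": 10}
print(tally({"alice": "yes", "bob": "no"}, balances))  # passed
print(tally({"carol": "yes"}, balances))               # no quorum
```

Note the design trade-off this exposes: weighting by tokens is transparent and Sybil-resistant, but it also means large holders can dominate outcomes, which is why many projects layer on quorums, delegation, or time locks.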


Decentralized justice aims to resolve disputes fairly and efficiently while avoiding the risks of centralized control. Traditional courts rely on human judgment, but decentralized justice systems use crypto-economic incentives to encourage impartiality. Participants are rewarded for aligning with the consensus, which is assumed to approximate a fair decision. This method reduces reliance on trust in any single party and instead uses incentives to promote just outcomes.
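The incentive scheme can be sketched like this, as a simplified, hypothetical Schelling-point payout (similar in spirit to systems like Kleros, but not any real protocol's code): jurors stake tokens, and those who vote with the majority split the stakes forfeited by those who voted against it.

```python
from collections import Counter

def settle(votes, stake=10):
    """Schelling-point payout sketch.

    Each juror stakes `stake` tokens. Jurors matching the majority
    verdict split the forfeited stakes of the dissenters, so voting
    honestly (i.e., as you expect others to vote) is the best strategy.
    """
    majority, _ = Counter(votes.values()).most_common(1)[0]
    winners = [j for j, v in votes.items() if v == majority]
    losers = [j for j in votes if j not in winners]
    reward = stake * len(losers) / len(winners) if winners else 0
    return {j: (reward if j in winners else -stake) for j in votes}

payouts = settle({"j1": "refund", "j2": "refund", "j3": "no refund"})
print(payouts)  # j1 and j2 each gain 5; j3 forfeits the 10 staked
```

The assumption doing the work here is that the majority converges on the fair answer; the scheme rewards coordination, not truth itself, which is why juror selection and appeal rounds matter in real deployments.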


Additionally, decentralized justice is designed to be transparent, with rules and decision-making processes openly available on the network. This guarantees predictability, consistency, and resistance to bias or corruption.


Algocracy for Good


An apt algorithmic network for building fairer justice systems is Obyte, a fully decentralized ledger technology (DLT) that removes intermediaries like miners and “validators”. Obyte also enables on-chain governance, allowing its community to make key decisions collectively through voting mechanisms, and it supports smart contracts, which automate transactions and agreements without requiring a central authority.



Additionally, Obyte offers contracts with arbitration, allowing parties to engage in agreements where disputes can be settled not only by smart contracts but also by professional human arbitrators from the ArbStore. These features ensure that transactions and governance remain transparent, fair, and resistant to external control, reinforcing the principles of decentralization in both governance and justice. It’s also a great example of how to use algocracy for good!



Featured Vector Image by vector4stock / Freepik