Have you ever wondered how the most relevant posts end up at the top of your Facebook timeline, or how you get those amazing offers from your favorite restaurants?
If you have, and are still looking for the answer, it is the Algorithm.
This basic tool for analyzing data and producing an output has been pulling the strings of our lives for a long time now. But how is this technical puppeteer affecting our daily life?
And how exactly is it preventing everyone’s voices from being heard? Let’s find out.
From the moment we wake up to the moment we go to sleep, we make around 35,000 remotely conscious decisions. That is a huge number, and each decision has consequences, good or bad. Now, how many of these are affected by the Algorithm?
The Algorithm’s effect on our decision making starts the moment we engage with the digital world. Whether it is choosing what to have for breakfast, or deciding whether to take the train to work or get an Uber, the Algorithm is shaping our decisions. And the fun part is that we are letting it.
It is easy to delegate decision making to the so-called unbiased, infallible system that is the Algorithm.
It will process nutritional data and tell us to have a healthy breakfast, and check traffic data to tell us that taking the train will get us to work faster.
Starting with these small decisions, it works its way up to bigger ones, such as which news is relevant to us and which content we see on social media.
Even though this way of life seems easier, it has larger implications. The human decision-making faculty is a fundamental part of the social, economic, and political fabric of life.
Out of all the creatures on this planet, only we humans have the ability to make decisions that are unrelated to our basic needs.
Even though decision making is a fundamental ability and right, today users are voluntarily removing themselves from the decision-making process. And when this happens, the question we really need to ask ourselves is this:
Is the Algorithmic system taking away our ability to make choices?
While this question may seem like overkill, it remains valid.
Thanks to the smart-device era, we don’t really have to make small decisions such as when to order detergent or what temperature to set the thermostat to.
But how long until the Algorithm starts making serious decisions for us, like which candidate to vote for, or whether a convicted person really committed the crime?
There is definitely something wrong with the way the Algorithm is taking over our decision-making faculty. And if you are wondering why, keep reading.
Let’s begin with a trivial example.
Recently, an experiment by Ben Berman, a game designer in San Francisco, showed how popular dating apps like Tinder, Bumble, and Hinge use collaborative filtering to generate matches according to majority opinion.
This means that once you register on a dating app, the Algorithm will use your data and show you the matches approved by people whose data resembles yours.
If you happen to be a 5’4” blonde woman living in New York who loves hiking, you will be shown the same matches that other women with similar profiles approved.
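To see how this herd effect arises, here is a minimal sketch of user-based collaborative filtering in Python. It is not any dating app’s actual system, and the users, candidates, and like/pass histories are invented purely for illustration:

```python
from collections import defaultdict

# Invented like/pass history: candidate -> 1 (liked) or 0 (passed).
ratings = {
    "alice": {"sam": 1, "jo": 0, "max": 1},
    "bea":   {"sam": 1, "jo": 0, "pat": 1},
    "cara":  {"sam": 0, "jo": 1, "pat": 0},
}

def similarity(a, b):
    """Fraction of shared candidates on which two users agreed."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    agreed = sum(ratings[a][c] == ratings[b][c] for c in shared)
    return agreed / len(shared)

def recommend(user):
    """Score unseen candidates by how users with similar taste rated them."""
    scores = defaultdict(float)
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for candidate, liked in ratings[other].items():
            if candidate not in ratings[user]:
                scores[candidate] += sim * liked
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['pat'] -- driven by bea's taste, not alice's own
```

Even at this toy scale the catch is visible: a candidate that similar users passed on never accumulates a score, so anyone outside the majority taste simply disappears from the results.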
The question this example raises is how long until collaborative filtering is applied to other aspects of life, such as deciding who is more in need of a life-saving medical treatment. While the Algorithm tends to the majority opinion, what happens to the opinions of the few?
The problem is that we don’t know when the Algorithm’s black box will start to curb the voices of the minority to serve the majority opinion. We have already seen instances where medical AI discriminated against Black patients when it came to receiving proper treatment. What if, at a later stage, the Algorithm starts to decide whose voice and opinion is worth hearing and whose isn’t?
We have already seen the extent to which the online world affects the real world. Whether it is fake news spreading terror or malicious opinions influencing the masses, the virtual world has real power over us, and it can turn ugly if the Algorithm messes up.
This is not limited to opinions expressed online. Algorithms are also deciding whom to hire for a job based on their resume AND their voice, and who gets life-saving treatment and who doesn’t.
It is a convenient system, especially when a team of 10 HR members doesn’t have to go through hundreds of thousands of job applications.
But it does put the fairness of the system into question. How can an Algorithm that sifts through data decide who is worthy?
The effects of the Algorithm have seemed a little sinister so far, but both the actual problem and the solution lie elsewhere. At the end of the day, the Algorithm is a system designed by human beings, and the data it uses is shaped by human actions.
Since it is designed by human beings, we can safely assume that it will follow the thoughts and ethics of its designer. On top of that, poor representation of diversity within these institutions has a serious impact on people and on how their voices are heard.
Without standardized rules addressing bias in Algorithm design, the system will continue to be weighed down by the biases and judgment of its designers.
In an age where data fuels almost every system, those who hold the most useful collection of data win the game.
From Google and Facebook to online shopping platforms, each has huge sets of data on us that can be used for almost anything. The question, however, remains: who is going to act as the gatekeeper and make sure that the data is used properly?
Raw data can reveal patterns that ultimately lead the Algorithm to mute the voices of millions, because the data is shaped by human behavior and therefore reflects human bias.
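As a hypothetical illustration (every record below is invented), even the most trivial model trained on skewed historical outcomes will reproduce the skew rather than correct it:

```python
# Invented hiring records: (group, was_hired).
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Learn each group's historical hire rate."""
    totals, hires = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group):
    """Recommend hiring whenever the group's historical rate exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(model)                # {'A': 0.75, 'B': 0.25}
print(predict(model, "A"))  # True  -- group A keeps getting hired
print(predict(model, "B"))  # False -- group B keeps getting rejected
```

Nothing in the code is malicious; the bias lives entirely in the historical data, and the model faithfully carries it forward.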
So how do we prevent the Algorithm from learning these biases and censoring those whose opinion needs to be heard?
While all this sounds very sinister, there are a few possible ways to make sure that Algorithmic systems provide us with fair and unbiased judgments.
Having a standardized legal structure in place for the safe use of data might be the first step towards creating a safer and fairer Algorithmic system. Without any rules for how data can be used, there is sure to be digital anarchy.
While having a legal structure might be the first step, we also need someone to enforce it. That is why we have to choose a gatekeeper who can prevent the misuse of data, stop biased systems from being built, and enforce the legal structure.
While in most places the tech companies themselves act as gatekeepers, in regions such as Europe the government acts as the gatekeeper for public data.
Another way of removing bias from Algorithm data is to have diverse representation among the staff, so that the voices of all demographics have a part in the design and deployment of the Algorithmic system.
There also need to be checks on the use of historical data. Implemented carelessly, historical data can lead the Algorithm to repeat the mistakes of the past.
To prevent this, the computational and data experts at work should seek and implement advice from experts in other domains, such as sociology, cognitive science, and behavioral economics, and run concrete audits like the one sketched below.
This way they will be able to understand the unpredictable human brain and its various dimensions, and create an Algorithm that can account for context before spitting out an outcome.
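One concrete shape such a check can take is an audit of outcome rates across groups before a system ships. The sketch below uses invented predictions and the “four-fifths” rule of thumb from US employment-selection guidelines; it is one possible sanity check, not a complete fairness test:

```python
# Invented model outputs: (group, received_favorable_outcome).
predictions = [
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
]

def outcome_rates(preds):
    """Favorable-outcome rate per group."""
    totals, favorable = {}, {}
    for group, ok in preds:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + ok
    return {g: favorable[g] / totals[g] for g in totals}

def passes_four_fifths(preds, threshold=0.8):
    """Flag the system if any group's rate is below 80% of the best rate."""
    rates = outcome_rates(preds)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

print(outcome_rates(predictions))       # {'A': 0.67, 'B': 0.33} (approx.)
print(passes_four_fifths(predictions))  # False -- the audit fails; review needed
```

An audit like this does not fix the bias by itself, but it forces the conversation between the data team and the domain experts before the system reaches the public.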
Last but not least, the organizations that collect and use data to build algorithmic systems should provide a clear and transparent policy. Sure, they might not tell us exactly how the Algorithm works, but detailing the limits of the Algorithm will go a long way toward letting us know to what extent our data is being used and what is being accomplished with it.
The fact remains that our data can be used for almost anything, from determining what kind of illness we have to predicting when we are most likely to retire.
The problem, on the other hand, is the biased use of that data, which promotes some voices and profiles more than others. And even though the scenario seems sinister, there are many ways to solve it.
The only thing lacking is human conscience and an unbiased outlook. Once we achieve that, the Algorithm will neither make our decisions for us nor shut down the voices that need to be heard.