
Product 'Longtermism' and the Danger it May Bring

by Adrian H. Raudaschl, March 24th, 2022

Too Long; Didn't Read

Trying to correct for societal problems through products will always fail. No single solution is correct in the long term. Long-term solutions don’t account for how the self changes over time, how we acquire and lose traits by virtue of new social locations and relations. If we want to plan for positive long-term change, we must ensure we are transparently imposing corrections upon people’s behaviour, with adequate tools to opt out.



There are no guarantees the future will hold the same values as us, so what right do we have to encode ours upon them?


In Christopher Nolan’s film Tenet, the future is at war with the past “because the oceans rose and the rivers ran dry.”


Thanks to catastrophic climate change, no path lay ahead for our descendants, and their only hope was to carve out a future by orchestrating a genocide in our past.


As the film’s protagonist, Neil, explains:


“Every generation looks out for its own survival.”


“I’ll See You At The Beginning, Friend.” Tenet, Warner Bros. Pictures


It’s hard to watch a film like Tenet and not contemplate the longer-term consequences of our actions — how will the future judge us?


Working in product, we tend to approach most opportunities with good intention. We want to identify and solve valuable problems, lead a comfortable life and leave this spinning rock a better place than when we joined it.


Sometimes, however, when planning for good long-term outcomes, the universe has ways of reminding us how truly short-sighted we can be.


I wonder, for example, if the Facebook product team had any sleepless nights after discovering how Cambridge Analytica hijacked their Open Graph to influence the outcome of the 2016 US presidential election.


Or when their researchers realised that 64% of the time a person joins an extremist group, they do so because Facebook recommended it.


What do you do when you learn your product promotes societal insurrection?


Or when your algorithm gives women a lower credit limit?


Or when your gaming machine is weaponised to create a national security threat?


Try, for a minute, to envision the worst person getting control of your life’s work. How could they co-opt it to some nefarious end? What habits of thought and attention could they cultivate?


It’s challenging to look at our politically fragmented world of polarised opinions and subjective beliefs and not want to do something. Given the opportunity, it’s tempting to work towards decisive approaches for dealing with the world’s epidemiological, climatological, and political ills. If only everyone could just listen to our wise thoughts.


“The heresy of one age becomes the orthodoxy of the next” – Helen Keller


The risk that our work inflicts unintentional societal scar tissue through bias, misuse, and political agendas invites us to question the moral responsibility of product teams to limit the harm of the things we build.


Our reaction is to start building things to prevent such future abuse. We start thinking about how to measure misuse events. What early warning systems could we put in place? What limitations could we impose?


Though I believe such thought experiments are valid, as this article explores, we risk doing more harm by trying to impose long-term good.


We are all at risk of what Evgeny Morozov in 2013 called ‘tech solutionism’. Product people tend to presume, rather than openly investigate, the problems we set out to tackle, reaching for solutions before the questions have been fully asked.


To usher in a more mature tech paradigm, we need to start seeing products not just as solutions but as instruments that teach us how to see the world more fully.


We are all required — tech users, critics, and product teams — to ask ‘what if’ and ‘why not’ about the possibility of technologies designed to support a long-term spirit of questioning. Until we do so, the riskiest person in control of our work will always be us.

The Future is at War With the Present

The idea of trying to orchestrate things to create a better world is a well-understood human desire. It doesn’t matter if we are running for political office, designing a fairer credit score system, managing a charity, or ruling as a plain old dictator. Our motives are the same — we all believe in building things that outlive us and will guide humanity to a better future. Put another way, we are practising a form of longtermism.


Longtermism is the altruistic stance that we should give priority to improving the lives of people in the future over our own.


“Future people matter morally just as much as people alive today; (…) there may well be more people alive in the future than there are in the present or have been in the past; and (…) we can positively affect future peoples’ lives”, summarises Vox journalist Sigal Samuel.


Longtermism argues that our goal above all else is to elevate the fulfilment of humanity’s potential.


In a phrase: people exist for the sake of maximising value, rather than value existing for the sake of benefitting people.


Longtermists take this moral worldview to the extreme by inviting us to imagine the impact our actions have on the very long-term future of the universe — thousands, millions, billions, and even trillions of years from now.


The problem is that under this ideology, even a climate catastrophe that cuts the human population by 75 percent for the next two millennia will, in the grand scheme of things, be nothing more than a minor blip — the equivalent of a 90-year-old stubbing their toe.

Climate change under this ideology, for example, is justified so long as it enables a small portion of humanity to go forth and prosper sometime in the future.


It’s easy to look at this and say, “I don’t think like that”, but if you work in product, you probably have. In fact, the parallel between longtermism and planning for product success is scarily close.


Revenue, retention and referral are typical long-term goals for most products, and they profoundly affect how markets, entrepreneurship, and our startup culture prioritise work. Sure, such principles help develop most innovations, but these solutions are also the most vulnerable to misuse and unintentional societal harm.


A typical product mental model like the Pareto principle (80/20 rule) works similarly to longtermism in that it forces us to consider a long-term goal like revenue, then prioritise the 80% of sales that may come from just 20% of your users. The argument is to ignore the difficult remaining 20% of sales, because capturing it would require 80% of the effort.
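
To make that concrete, here is a minimal, hypothetical sketch of how such a cut-off plays out in code; the user names, revenue figures and the 80% threshold are invented for illustration, not drawn from any real product.

```python
# A minimal, hypothetical sketch of 80/20 prioritisation: given per-user
# revenue, find the smallest group of users that accounts for ~80% of sales.
# All data and the threshold are illustrative only.

def pareto_cut(revenue_by_user: dict[str, float], threshold: float = 0.8) -> list[str]:
    total = sum(revenue_by_user.values())
    prioritised, running = [], 0.0
    # Walk users from highest to lowest revenue until the threshold is met.
    for user, revenue in sorted(revenue_by_user.items(), key=lambda kv: kv[1], reverse=True):
        prioritised.append(user)
        running += revenue
        if running / total >= threshold:
            break
    return prioritised

revenue = {"alice": 900, "bob": 450, "carol": 120, "dan": 60, "erin": 30}
focus = pareto_cut(revenue)
ignored = [u for u in revenue if u not in focus]
print(f"Prioritised: {focus}, quietly ignored: {ignored}")
```

Everyone who falls below the cut simply disappears from the roadmap.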


Imagine if other sectors adopted the same reductionism:


  • If cafes were just about the efficient delivery of calories.


  • If hotels focused solely on their number of beds per square meter.


  • If health care were solely about longevity.


Nowhere is this more dangerous than in services that use AI or rule-based systems to influence user behaviour.


When building solutions that correct injustices for the good of humanity, we have two choices: enforce what we think is fair today, or trust individuals to make better choices in the future.

In short, our question should become: what opportunities do we miss by ignoring that other 80% of users?

Making Others do the Right Thing

Suppose your goal is to reduce long-term bias and discrimination from your services. It is reasonable to begin by agreeing on what discrimination looks like, what demographic segments to use (e.g. race, age or gender), and how to measure and correct it.
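
To make “measure” concrete, here is a minimal, hypothetical sketch of one narrow fairness metric, the demographic parity gap between groups; the group labels, decisions and the choice of metric itself are assumptions for illustration, and real fairness audits use far richer measures.

```python
# A hypothetical sketch of measuring one narrow notion of discrimination:
# the demographic parity gap, i.e. the difference in approval rates between
# demographic groups. Group labels and decisions are illustrative only.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    approved, seen = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        seen[group] += 1
        approved[group] += int(was_approved)
    return {g: approved[g] / seen[g] for g in seen}

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
print(approval_rates(sample))   # per-group approval rates
print(parity_gap(sample))       # ~0.33, the gap we might then try to 'correct'
```

Notice how even this tiny example already bakes in today’s choices: which groups count, which decision matters, and how large a gap is tolerable.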


We are drawn to develop solutions based on our dominant assumptions, and in their absence we do the next worst thing: try to decipher the ‘truth’ of a problem by looking for its mythical average.


You can probably guess why this is so hard to achieve. To even agree on what measurements to correct requires a sizeable homogeneous group within a sterile, clean, static environment. That isn’t the reality for anyone — but it certainly isn’t the reality if you are an outlier or a small minority.


In trying to enshrine fairness into services, we risk enforcing new biases upon people in the future by making them see the world as we do today. Our tools impose rules on us as though all people are the same and the context will never change.


Even if we successfully agree on some unbiased quantification, our representation of how things are is no better than measuring a sand dune on a windy day. You could be out in a desert for a BILLION YEARS trying your hardest to get the measurements and pin down exactly what that sand dune is and STILL fail, through no fault of your own. That sand dune, like a society, is a process, not a static thing that we can ever quantify.


“No man (person) ever steps in the same river twice, for it’s not the same river and he’s not the same man.” — Heraclitus


There are also broader things to consider. If all our systems, for example, require individuals to be easily classifiable, could that in itself inhibit societal progress? When your gender, ethnicity or age is quantified, how could inclusivity movements like cultural integration, gender transitioning or self-identification be impacted? Could it reinforce unhealthy identity association or even encourage social and racial profiling?


Our problem is that in trying to fix things, we start viewing society in terms of networks of harm.


Therefore it becomes easier to view solutions to all our problems in terms of data networks, surveillance coverage, and interactions among distributed people and devices that must be regulated.


Many AI systems incorporated into institutions, government agencies, and corporations are black-box models. They rely on such complex, hyperparameterised classifications of the world that it becomes increasingly challenging to know how such solutions reach a particular decision, prediction, or recommendation.


Austria’s employment profiling algorithm, for example, was designed to “serve a growing number of job seekers with a stagnating budget” by predicting a job seeker’s employment prospects based on factors such as gender, age group, citizenship, health, occupation, and work experience.


The goal of Austria’s job bot was to optimise government spending by supporting individuals with moderate employment prospects while reducing support to those with low or high employment prospects (the reasoning being that such support would have a negligible effect on their hiring chances).
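
Public reporting describes the system only at a high level, so the snippet below is a purely hypothetical reconstruction of that kind of threshold-based triage; the score ranges, tiers and comments are invented, and this is not the actual AMS model.

```python
# A purely hypothetical sketch of threshold-based triage like the one described
# above. This is NOT the actual Austrian (AMS) system; the score, cut-offs and
# tier labels are invented to show how such segmentation works in principle.

def support_tier(predicted_prospects: float) -> str:
    """Map a predicted employment-prospects score in [0, 1] to a support tier."""
    if predicted_prospects < 0.3:
        return "low prospects: minimal support"    # deemed not worth the spend
    if predicted_prospects > 0.7:
        return "high prospects: minimal support"   # deemed not to need the spend
    return "moderate prospects: full support"

# If the upstream score is systematically depressed for certain groups, the
# triage silently routes them away from support, amplifying whatever bias the
# prediction inherited from historical hiring data.
for score in (0.2, 0.5, 0.8):
    print(score, "->", support_tier(score))
```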


Disappointingly, it was academics and civil society groups, in the end, who identified how this job algorithm became unfairly discriminatory towards women over 30, women with childcare obligations, migrants, and those with disabilities. It turned out the model was amplifying racial and ethnic discrimination demonstrated by employers in some parts of the country.


The lesson here is that despite all their superior abilities, these systems have not avoided the kinds of patterns humans are prone to.


Because many algorithmic decisions are informed by historical data — such as loans approved in the past or crimes that have been subject to harsh sentencing — they tend to reinforce historical biases, a problem the journalist Cade Metz compares to “children picking up bad habits from their parents.”


The result, French philosopher Simone Weil would argue, is a society that functions “without a single human being understanding anything at all about what he was doing.”


Our historical opinions risk becoming ghosts in the machine that haunt generations to come.

When employing product longtermist strategies for societal good, perhaps creating solutions based on binary classifications and corrections isn’t the way to go. Instead, we should strive never to solve a long-term problem without first building platforms that help us understand its nuance and complexity.


In other words: create products that provide an expansive awareness.

Informed Consent Should Be Our Longtermist Goal

Even if we successfully create algorithms that encourage users to act in ways we deem “right”, that does not necessarily mean people are reflecting on the issues that prompted such “corrections” in the first place. Instead, we should be doing more to inform users about the biases they are being exposed to.


For example, in 2015, Google was called out by a study from the University of Washington for the profiling, sexualisation and underrepresentation of women in its image search results for queries like ‘ceo’ or ‘construction worker’. I doubt the Google product teams intended such an outcome. More likely, it reflects the dangers of encoding historical click data in the hope that users will find relevant results.


However, thanks to Google’s longtermist goal of increasing visitor engagement, it has unconsciously helped impose historical gender stereotypes on future generations.


Google has since tried to correct this, but unfortunately its longtermist goals don’t seem to have changed.


Sarah McQuate from the University of Washington recently noted that a simple keyword change from ‘CEO’ to ‘CEO united states’ resurfaced the same gendered problems. Not to mention the lack of demographic representation — not all CEOs are white, young, straight-haired and suited.


Image by Sarah McQuate, from “Google’s ‘CEO’ image search gender bias hasn’t really been fixed” (Feb 2022)


Almost seven years after that initial study, it would seem that forcibly correcting gender bias for specific searches hasn’t helped shift the unconscious bias of the broader population. More worrying is how long it has taken to raise these concerns again.


PhD candidate Peter Polack from UCLA best summarises the problem here:


“By imbuing human corrections into computational systems and then designing them to respond to the environment dynamically, we risk automating such extraneous ethical discussions like fairness and representation away.”


In other words, the imposed corrections created a false sense of security, leading to stagnation.


Building products and services should not just be a focused effort to migrate users from pain to solution; they should also be tools of insight and introspection. Instead of imposing correct user behaviour, Google should have focused on creating awareness.


Every time users search, they should understand why they are seeing what they are seeing, how it may deviate from a true reflection of society, and what parameters have shaped that result.

Introducing this type of transparency in all our rule-based solutions can help break the damaging effects of models, AI, governments, or product teams imposing specific views upon the world. The key difference now is that individuals are freer to make informed choices about which biases they wish to opt into.
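
As a thought experiment, a transparent response might look something like the hypothetical sketch below; the field names, the corrections listed and the opt-out flag are all invented for illustration, not part of any real search API.

```python
# A hypothetical sketch of what a 'transparent' search response could expose:
# alongside the results, the system declares which corrections were applied,
# which signals drove the ranking, and lets the user opt out. Everything here
# is invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class TransparentResults:
    query: str
    results: list[str]
    ranking_signals: list[str]                  # e.g. historical click data
    corrections_applied: list[str] = field(default_factory=list)
    corrections_opted_out: bool = False

def search(query: str, opt_out_of_corrections: bool = False) -> TransparentResults:
    # Stand-in for a real retrieval system.
    raw = [f"result {i} for '{query}'" for i in range(1, 4)]
    corrections = [] if opt_out_of_corrections else ["gender rebalancing of image results"]
    return TransparentResults(
        query=query,
        results=raw,
        ranking_signals=["historical click-through data", "freshness"],
        corrections_applied=corrections,
        corrections_opted_out=opt_out_of_corrections,
    )

response = search("ceo")
print(response.corrections_applied)  # the user can see, and question, the imposed correction
```

The point is not the particular fields but that the correction stops being invisible: it is declared, inspectable, and refusable.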


Sure, some will choose to stick with a limited perspective — seeking opinions and imagery that align with their worldview. But the difference is that the process is no longer passive; it becomes a continuous, conscious choice. We have to keep confronting and reflecting on our biases, something that never needs to happen when corrections are automated for us.


We must remember that no single solution is correct in the long term. Once designed, regardless of how well-intentioned or diversely supported, models march toward their preconceived purpose according to an objective logic.


Long-term solutions don’t account for how the self changes over time, how we acquire and lose traits by virtue of new social locations and relations. If we want to plan for positive long-term change, we must ensure we are transparently imposing corrections upon people’s behaviour, with adequate tools to opt out.


Invisible rule systems have a nasty tendency to make us serve the aims of our technology rather than our own. As with Google’s image search click history, we generate the data and behaviours the technology needs to operate more efficiently, and over time we start living in ways that serve not our goals but its.


Philosopher Martin Heidegger would argue that we need to be wary that technology has a logic of its own beyond human motives and intentions. Therefore it is up to us to ensure human agency always has a place in that logic.


But Mr. Gray, there’s no inspiration in logic. There’s no courage in logic. There’s not even happiness in logic. There’s only satisfaction. The only place logic has in my life is in the realisation that the more I am willing to do for my family, the more I shall be able to do for myself. — The Common Denominator of Success, by Albert E. N. Gray


Want more? I found the following articles a great source of inspiration for this piece.

  1. Routine Maintenance, Meghan O’Gieblyn


  2. False Positivism, Peter Polack


  3. Monomania Is Illiberal and Stupefying, Jonathan Haidt


Addendum

I feel it worth mentioning that the EU’s Artificial Intelligence Regulation, which aims to ensure a “well-functioning internal market for artificial intelligence systems” based on “EU values and fundamental rights”, is a step in the right direction.


It encompasses many of my principles around safeguards that surface inequities and failures to protect rights adequately. I felt reassured to see classification systems like “trustworthiness” scores banned and how high-risk AI systems will be formally regulated.


This article was co-published here.