
Why The Ethical Responsibility For Tech Must Extend to Non-Users

by Fiona J McEvoy, August 17th, 2019

Last month, Oscar Schwartz wrote an article for OneZero with a familiarly provocative headline: “What If an Algorithm Could Predict Your Unborn Child’s Intelligence?”. The piece described the work of Genomic Prediction, a US company using machine learning to pick through the genetic data of embryos to establish the risk of health conditions. Given the title of the article, the upshot won’t surprise you. Prospective parents can now use this technology to expand their domain over the “design” of new offspring – and “cognitive ability” is among the features up for selection.

Setting aside the contention over whether intelligence is even heritable, the ethical debate around this sort of pre-screening is hardly new. Gender selection has been a live issue for years now. Way back in 2001, Oxford University bioethicist Julian Savulescu caused controversy by proposing a principle of “Procreative Beneficence”, stating that “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information.” (Opponents of procreative beneficence vociferously pointed out that – regrettably – Savulescu’s principle would likely lead to populations dominated by tall, pale males…)

The topic continues to fascinate. Many instinctively reject the idea of “playing God” or stepping in to set a child’s fate a priori. But those who defend such techniques are usually careful to frame genetic selection as a choice rather than a moral obligation à la Savulescu. In Schwartz’s article, Genomic Prediction co-founder Stephen Hsu is quoted to this effect:

“If I tell a couple that they are carriers of a disease and that there is a one in four chance that their kid is going to die a horrible death, and allow them to select the healthy embryo, is that eugenics? I guess it is. 

“But if I give them the option to make a choice, and don’t coerce them, I don’t think there’s ethically anything wrong with it.”

There’s a lot going on with this statement. But what strikes me is the built-in assumption that granting a choice is uncontroversially okay. The expansion or contraction of human choice is often used as a barometer by which the ethical permissibility of new technologies is judged (indeed, my own work has looked closely at choice).

The argument tends to go the same way: less choice equals bad, more choice equals good. But this isn’t something we should necessarily take as a given, and what Hsu’s comment doesn’t allow for is that broadening choice can itself be a bad thing.

For starters, when a new option is added, we immediately lose the ability not to choose at all. This is particularly pertinent in the kind of situation described by Hsu, where parents might be forced to choose between a healthy embryo and one that carries the risk of a disability or a trait considered less desirable – like a lower intelligence score.

The choice here creates a difficult and deeply troubling responsibility. 

It’s important to note at this juncture that Hsu is not genetically altering embryos. Nor is he facilitating the death of a child (by most people’s estimations). These parents-to-be are choosing which embryo(s) to bring into the world. The others will simply never be born. Yet, if they are procedurally encouraged to select “healthy” embryos – a choice that might strike many as a “no-brainer” – then they could be responsible for a broader harm to society.

After all, whole groups of people would be edited out, and this winnowing could give way to a detrimental cultural shallowing. (Is it really “better” or preferable to live in a future world with no physically or mentally impaired people?)

All of this is really to say that we cannot let technologists side-step issues of misuse and downstream harms by pointing to the alternative option their customers/users had – i.e. not to use their product or service at all. It is a deflection that functions as a flak jacket while granting makers permission not to reflect inwardly. 

The University of California philosopher, Professor Gerald Dworkin, wrote about the problems with choice in his paper “Is More Choice Better than Less?”, in which he challenges the lazy assumption that having more options is always more desirable.

Dworkin cites the work of philosophers like John Rawls, who had gone largely unchallenged in stating that any rational individual would prefer to have their choices widened, since we do not suffer “from a greater liberty.” Dworkin goes on to supply ample ammunition to prove that on occasion we do.

For one, decisions often have costs associated with them that are not present when we only have a single option. Usually, to make a reasoned choice we must amass all the relevant information, which is a cost in itself.

Dworkin says that the more serious the issue, the more difficult the information can be to obtain, which in turn forces costs higher. Moreover, there are also psychic costs to making a choice. We may second-guess ourselves and agonize over whether the right decision was made based on the correct information.

Choice also brings with it the pressure to conform, and adding new choices can be problematic for those who wish not to exercise them. For example, those who choose not to have a designer baby or – more trivially – not to purchase an Alexa or opt in to facial recognition, may find themselves at a disadvantage in a society where increasing numbers choose differently. Feeling outcast or inconvenienced could obviously result from this new and unwanted choice.

Most importantly though, choice creates responsibility. Once a new option is introduced into the world, its use or otherwise becomes the responsibility of the user. As Dworkin says, “Once I am aware I have a choice, my failure to choose counts against me.”

In future scenarios, perhaps we will come to see a decision to prevent a teenager from using a conversational therapy bot as in some way negligent. The same may go for surveillance technology in cars that can perceive emotions and alertness.

Responsibility may just manifest as a feeling, but of course there is also the possibility of being held responsible.

In a world where these new AI-driven systems are increasingly ubiquitous, non-adoption becomes equivalent to rejection. It ceases to be neutral.

To be clear, I (via Dworkin) am not arguing that technological choice should be reduced or that the status quo is always (or even generally) preferable to the broadening of choice. Rather, this is an attempt to frame the idea that choice isn’t always preferable or uncontroversial, because it does create pressure points elsewhere.

And to comment – as Hsu did – that a new technology simply provides an option that can be ignored is to willfully underestimate the ways in which such technologies disrupt, if only by creating that new binary.

So, when considering the downstream ethical effects of new technologies, makers should be encouraged not only to consider the effect on direct and indirect users, but also how creating a new option might affect the choice environment more broadly, including the impact on those who “opt out.”

They have a responsibility to these non-users because they have created this category in much the same way as they have with their consenting users. 

Admittedly, not all technologies will create the kinds of dilemmas that embryo selection throws up, but there are many that create subtle changes to the status quo which should be acknowledged and thought through.

Obviously the sphere of influence for a maker begins with their own product and users, so perhaps the starting point for considering responsibilities to non-users is to imagine how life would change if their new tech option were the only option – and then roll back from there.

Could Genomic Prediction continue to justify their product if it were made mandatory? Or, if they reached 70 per cent saturation, what would life be like for the remaining 30 per cent?

Too often, proclaiming that users have “freely chosen a product” or are free “not to choose” a product seems like a bid to shift responsibility for any ill effects (financial, moral, psychological, etc.) onto the chooser.

In fact, creating a new choice where there was formerly a single option seems to create a broader responsibility toward two categories – both users and non-users.