AI in Medicine: A Physician's Perspective

by Tanisha Bassan, October 31st, 2017

It is funny how much I want to learn, just not at school. It was more important for me to leave school early and instead sit in on a bioethics lecture at the University of Toronto about what to expect from AI in medicine. This article explains everything I learned and why I want to share it with everyone.

So what’s so special about AI in medicine and why should you care?

Personally, I care about the kind of doctors I am willing to trust with saving my life in the future. The fact that stood out to me was that in ten years or so I could be visiting a doctor's office filled with more robots than humans. I don't know how I feel about that, so I decided to find out.

How are robots integrated into the medical field today?

#1. Robotic-assisted procedures

It is common for surgeons to perform operations with the help of robotic systems, which give them extra speed and accuracy. Some of the advantages are:

  • more control over the precision of incisions
  • minimal blood loss and essentially scarless cuts
  • less exposed tissue at risk of infection
  • less pain and a faster recovery time

#2. Telemedicine

Doctors are now able to communicate with patients without regard for geographical limits. With the use of telecommunications, populations in remote rural areas can reach expert medical advice.

This concept has grown to include RP-VITA, a telemedicine robot by iRobot and InTouch Health. The robot navigates hospital floors autonomously, allowing doctors to interact with patients remotely. It processes data in real time, which lets hospitals allocate doctors and nurses more efficiently.


#3. Other ways AI is included in medicine

  • Mining medical records for quicker access to patient data
  • Assisting with repetitive jobs, which helps with time management
  • Improving clinical judgment, such as distinguishing the symptoms of a normal headache from those of a brain tumor
  • Creating individualized treatment plans for patients whose symptoms vary for a common illness
  • Supporting drug discovery and healthcare-system analysis for countries

Robots have proven very useful in the medical field so far, so when do we start worrying?

3 Reasons why AI could be disastrous in medicine:

#1. Disparity of care

Sadly, even medical practice is influenced by discrimination based on gender, race, economic status, and geography in our world today. For example, in the USA the richest 0.001% of Americans saw a 636% surge in income growth over the last 40 years, while income growth for the bottom 50% of the population was close to zero.

Half the population of America not only receives an unequal share of income but also does not have free healthcare. This is tragic because:

  • Medical assistance goes only to those who can afford costly treatment
  • AI will only increase healthcare costs, which will leave middle-class and lower-income citizens struggling to pay medical bills
  • AI companies, like most other companies in our world, often want to raise revenue rather than provide equal service to everyone
  • Sustaining a free healthcare system looks unrealistic for the future, as the cost of individual medical care increases every year and, with the assimilation of AI technologies, will exceed government budgets by 2051

#2. The black box issue

What is a black box?

It refers to complex electronic technologies whose internal mechanisms produce outputs in ways too opaque for a user to understand.

We now have a technology with an exquisite feature called the artificial neural network (ANN). It works much like the way our brains process information through the nervous system, and it is used for pattern identification and data processing.

ANNs may be able to discover underlying causes of and solutions for illnesses using hidden patterns that a doctor cannot identify. It is very likely that an AI will surpass our understanding of medical research and move beyond our comprehension. This raises the question: would you trust your doctor or an AI with your diagnosis?
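To make the black box concrete, here is a minimal sketch of my own (not something shown in the lecture), assuming Python with scikit-learn is available: a tiny ANN learns to separate benign from malignant tumours in a standard dataset, but everything it has learned is stored as matrices of weights that look nothing like a medical explanation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# A standard diagnostic dataset: tumour measurements labelled benign or malignant.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, then train a small neural network to find the patterns.
scaler = StandardScaler().fit(X_train)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

print("test accuracy:", model.score(scaler.transform(X_test), y_test))

# The "knowledge" the network learned is just matrices of numbers.
# Nothing in them reads like a medical explanation, and that is the black-box problem.
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {weights.shape}")
```

Even when a model like this scores well on patients it has never seen, the only thing it can show for its reasoning is those weight matrices, which is exactly why its conclusions are so hard to audit.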

Why is this an issue?

It is easy to assume that AIs are programmed bias-free; however, because we cannot understand how an ANN comes to its decision, it is a risk to assume its conclusion is always right. One case where this risk became real:

In Florida's prison system, an AI was used to assess offenders' probability of committing crimes again. It independently labeled two young girls who stole a bike and a scooter as high-risk and medium-risk offenders, seemingly because they were black. For more in-depth details, refer to this article: https://qz.com/1055145/ai-in-the-prison-system-to-fix-algorithmic-bias-we-first-need-to-fix-ourselves/

This incident is one of many that make us question whether we can fully trust AI decisions without knowing the process that led to them.

Our genomes are complicated systems. A problem for doctors arises when an illness does not show up as statistically likely on paper but is real in a patient's biological data.

An ANN may come to conclusions that do not correspond with a person's genomic differences, because those differences usually appear as outliers to a pattern-matching algorithm.

Dr. Das, the amazing neurosurgeon presenting the lecture, points out that:

“many times the phenomenon of a gut feeling contributes to how he diagnoses or makes decisions about illnesses, something he finds true of his coworkers as well.”

This unexplainable, instinctual kind of decision, which AIs are not programmed to make, can affect medical treatment. The gut feeling is very real and also very useful for doctors.

#3. A Physician’s identity in society

Doctors have two roles in society: the healer and the professional.

A healer is:

  • Empathetic and compassionate, able to heal patients and relate to them.

A professional is:

  • Logical in their reasoning, remaining bias-free throughout the process of caring for a patient.

A doctor must balance both roles to be successful. It is important to care for a patient and to feel guilt when mistakes are made. People only learn from mistakes, and for doctors the stakes are so high that the same mistake can never be allowed to happen again. Even if the outcome is an unhappy one for a patient, a doctor or any caregiver will always carry the burden of bad news, and many times our human emotions are what allow us to care for and relate to people who are suffering.

AIs will not feel the weight of blame from mistakes nor the pressure of guilt from delivering bad news. An AI is strictly professional, which creates a gap in understanding between patient and caregiver. Trust is built on transparency, and people will never trust an AI if they cannot relate to its reasoning. We will lose human contact in medicine, and transparency will become negligible. As far as I know, we cannot program an AI to feel regret or any other human emotion. That is what makes an AI different from a human.

What are the important lessons learned?

  1. AI is already being integrated into medicine through popular practices like robot-assisted surgery, telemedicine, and more
  2. There are risks and advantages to using AI, and everyone should think about them sooner rather than later
  3. The first issue is the risk of a huge gap between the people who can and cannot afford expensive AI treatments
  4. The second problem is that an AI algorithm's path to its conclusions is unknown, so we cannot assume AIs will always give 100% accurate diagnoses
  5. The last risk is the loss of the human-to-doctor relationship, which can make transparency negligible

I want to thank the UofT Ethics Centre for giving me the opportunity to learn about this issue, and Dr. Sunit Das for his thorough presentation. Comment below with your thoughts on whether AIs are a negative or a positive asset to the medical field.