Can AI Ever Overcome Built-In Human Biases?

by Gordon Frayne, August 18th, 2023

The Challenge of Developing Truly Rational Systems

"AI is racist." This provocative headline grabbed the world's attention a few years ago when reports surfaced about racial and gender bias in artificial intelligence systems. Facial recognition algorithms struggle to identify people of color. Hiring tools favor male applicants over females with identical qualifications. The machines we built to be rational and objective—unlike biased human minds—have prejudice coded into their decision-making.


How did this happen? Why does AI exhibit many of the same biases as people, even though it lacks human cognitive limitations? The root causes trace back to AI's data sources and optimization objectives. AI systems absorb implicit biases from datasets that reflect existing societal inequities. And algorithms programmed to maximize accuracy propagate these biases rather than challenge them.


This raises the question: can we ever develop truly rational AI free of unfair bias? To achieve this ideal, we need to rethink how we design datasets, choose performance metrics beyond accuracy, and implement ethics reviews during development. With diligence and a cross-disciplinary approach, the same technology that absorbed our biases could someday help counteract them.


AI has immense potential, but first, it needs an upgrade to move beyond mimicking our flaws. The path won't be easy, but the promise of AI unencumbered by irrational prejudice makes it a worthy goal.


TL;DR: Can AI Overcome Built-In Human Biases?

  • AI systems today exhibit biases along race, gender, and other factors that reflect societal prejudices and imbalanced training data.
  • The main causes are a lack of diversity in data and teams, and a focus on pure accuracy over fairness.
  • Mitigation tactics like adversarial debiasing, augmented data, and ethics reviews can help reduce bias.
  • Fundamentally unbiased AI requires rethinking how we build datasets, set objectives, and make ethical design central.
  • Future challenges include pursuing general AI safely while removing bias, and cross-disciplinary collaboration.
  • If developed responsibly, AI has potential as a rational foil that counteracts irrational human biases and promotes fairness.
  • Choices made now in how AI is created and applied will determine whether it reduces or amplifies discrimination in the long run.


Examples of Biased AI Systems

One prominent example of racial bias in AI is facial recognition. Studies have found that leading facial recognition algorithms have error rates 10 to 100 times higher when identifying people with darker skin tones than with lighter skin tones. These systems were trained mostly on datasets of white faces, causing them to struggle to accurately recognize other races.



Beyond issues identifying people of color, some facial recognition systems have also exhibited prejudice in how they categorize people based on race and gender. Systems trained on imbalanced data have associated images of Black people with negative labels like “criminal” at higher rates.


Gender bias has also been observed in hiring algorithms meant to screen job candidates. Amazon previously used an AI recruiting tool that was found to penalize resumes containing the word “women’s” and downrank graduates of two all-women's colleges. The system learned these biases from patterns in the male-dominated resume data it was trained on. In another case, a recruiting algorithm favored male applicants over equally qualified women for technical roles. These examples illustrate how unchecked AI can amplify gender inequality in hiring rather than reduce it.


Many other instances demonstrate AI absorbing implicit human biases around race, gender, age, ability status, and more. For example, natural language processing algorithms have exhibited bias in how they complete partial sentences about different demographic groups. Algorithms meant to support clinical decision-making have also demonstrated racial bias in estimated risk scores and treatment recommendations when given the same symptoms.


Causes of Bias in AI

One major source of bias in AI systems is that the data used to train them often reflects societal biases and lack of representation. Models only learn from the information they are exposed to. If that data over-represents certain groups or exhibits prejudiced associations, those biases will be propagated through the algorithms. For example, facial recognition systems trained mostly on light-skinned faces will inevitably struggle with dark-skinned faces. The same goes for natural language systems trained on text containing stereotypes and hate speech.


Additionally, because AI models are frequently optimized for accuracy and performance metrics above all else, they will latch onto any "signal" in training data that helps them achieve those goals, even if that signal represents an unfair bias. Unless told otherwise, models have no mechanism for distinguishing between legitimate statistical differences and inappropriate prejudices or distortions. So an algorithm tasked with predicting employment success will pick up on, and rely upon, imbalanced gender representation and discriminatory practices embedded in historical hiring data.
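

To make this concrete, here is a minimal sketch, using synthetic data and hypothetical feature names, of how a model optimized purely for accuracy latches onto a biased signal: the historical labels reward a protected attribute, so the model learns to use it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "skill" is the legitimate signal; "gender" is a protected attribute
# (1 = male, 0 = female). Both names are hypothetical.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Historical labels reward skill but also favor men, so the recorded
# outcomes encode the bias.
hired = (skill + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.9

# A model fit purely for accuracy happily uses the biased feature.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

print("coefficients [skill, gender]:", model.coef_[0])
preds = model.predict(X)
print("predicted hire rate, men:  ", preds[gender == 1].mean())
print("predicted hire rate, women:", preds[gender == 0].mean())
```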


Finally, the lack of diverse perspectives among AI researchers and developers also contributes to bias being baked into systems. The field suffers from gender gaps and underrepresentation of minority groups. With AI teams and workplace culture dominated by one demographic, many blind spots will go unchecked. Having diverse voices involved in asking the right questions about possible sources of unfair bias could help identify and mitigate many issues early on.


Mitigating Bias in Current AI Systems

While biased AI reflects problematic data and design choices, techniques do exist to help mitigate these issues in current systems. One approach is adversarial debiasing, which uses an adversarial network to penalize models that exhibit bias and force them to correct it. By explicitly trying to expose bias, the adversarial network helps shift the model away from relying on prejudiced patterns or labels. However, this technique is still limited by the capabilities of the underlying training data.
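

A minimal sketch of the idea, assuming a simple PyTorch setup with synthetic data (the architecture, learning rates, and penalty weight are illustrative choices, not a prescribed recipe):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4096
skill = torch.randn(n, 1)
group = torch.randint(0, 2, (n, 1)).float()           # protected attribute
y = ((skill + 0.8 * group).squeeze(1) > 0.9).float()  # biased historical labels
X = torch.cat([skill, group], dim=1)                  # the model could "cheat"

predictor = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # strength of the debiasing penalty (illustrative)

for step in range(500):
    logits = predictor(X).squeeze(1)

    # 1) Train the adversary to recover the protected attribute
    #    from the predictor's outputs.
    loss_a = bce(adversary(logits.detach().unsqueeze(1)).squeeze(1),
                 group.squeeze(1))
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # 2) Train the predictor on the task while penalizing anything
    #    that helps the adversary succeed.
    adv_out = adversary(logits.unsqueeze(1)).squeeze(1)
    loss_p = bce(logits, y) - lam * bce(adv_out, group.squeeze(1))
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

# If the penalty works, mean predicted scores converge across groups.
with torch.no_grad():
    probs = torch.sigmoid(predictor(X).squeeze(1))
    g = group.squeeze(1)
    print("mean score, group 1:", probs[g == 1].mean().item())
    print("mean score, group 0:", probs[g == 0].mean().item())
```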


Expanding the diversity of training data to better represent populations and perspectives is another mitigation strategy. Ensuring datasets include reasonable distributions of gender, race, age groups, etc. reduces the chance of imbalance-driven bias. Synthetic data generation can also help augment underrepresented groups. However, diversity alone does not guarantee fairness, as models may still learn latent biases in new data.
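

One common rebalancing tactic is simply oversampling the underrepresented group before training. A minimal sketch, with hypothetical column names:

```python
import pandas as pd
from sklearn.utils import resample

# Toy dataset where group "B" is underrepresented.
df = pd.DataFrame({
    "feature": range(10),
    "group": ["A"] * 8 + ["B"] * 2,
})

majority = df[df["group"] == "A"]
minority = df[df["group"] == "B"]

# Resample the minority group (with replacement) up to the majority's size.
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, minority_up]).sample(frac=1, random_state=0)
print(balanced["group"].value_counts())
```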


Incorporating ethics reviews and bias testing into standard AI development pipelines is important for catching issues early. Checking for statistical bias across groups in a validation set, surveying minority populations for perceived harms, and having a diverse team perform ethical reviews of models and data can surface problems missed by homogeneous teams and metrics. Making ethical AI an integral part of the process improves outcomes.
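

A basic statistical bias check might compare selection rates and true-positive rates across groups in a validation set. A minimal sketch with placeholder arrays standing in for real predictions:

```python
import numpy as np

# Placeholder validation-set labels, predictions, and group memberships.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    selection_rate = y_pred[mask].mean()       # how often group g is selected
    tpr = y_pred[mask & (y_true == 1)].mean()  # true-positive rate for group g
    print(f"group {g}: selection rate={selection_rate:.2f}, TPR={tpr:.2f}")
```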


Building Unbiased AI from the Ground Up

To work towards fundamentally unbiased AI, we must rethink how we build datasets themselves. Rather than passively absorbing collected data with all its biases, active efforts to balance representation and exclude prejudice are needed. This requires extra work soliciting data from underrepresented groups and consciously auditing datasets for imbalance or harmful associations. Building datasets as diverse and ethically sound training environments from the start establishes a stronger foundation.
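

A dataset audit can start with two simple questions: is any group underrepresented, and do labels skew by group? A minimal sketch, assuming hypothetical "group" and "label" columns:

```python
import pandas as pd

# Toy stand-in for a real dataset.
df = pd.DataFrame({
    "group": ["A"] * 92 + ["B"] * 8,
    "label": [1] * 60 + [0] * 32 + [1] * 2 + [0] * 6,
})

# 1) Representation: flag any group below a chosen share of the data.
shares = df["group"].value_counts(normalize=True)
print("underrepresented groups:\n", shares[shares < 0.10])

# 2) Association: do positive labels skew toward certain groups?
print("positive-label rate by group:\n", df.groupby("group")["label"].mean())
```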


We should also prioritize goals like group fairness and avoidance of harm above pure accuracy. Setting objectives that minimize gaps in performance across populations guides models away from biases even if accuracy takes a slight hit. Models should align with ethics first, using accuracy as a secondary metric. Architecting AI around fairness constraints rather than unconstrained optimization prevents models from maximizing performance at the cost of discrimination.
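

One way to encode such an objective is a fairness penalty added to the task loss. A minimal sketch, where the penalty form and its weight are illustrative assumptions:

```python
import torch
import torch.nn as nn

def fairness_aware_loss(logits, targets, group, lam=0.5):
    """Task loss plus a penalty on the gap in mean predicted score
    between groups; assumes both groups appear in the batch."""
    task_loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    gap = probs[group == 1].mean() - probs[group == 0].mean()
    return task_loss + lam * gap.pow(2)  # trade a little accuracy for parity

# Usage with placeholder tensors:
logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
group = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_aware_loss(logits, targets, group))
```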


Finally, transparency and ethical design principles should be core to development from day one, not an afterthought. Explaining model behaviors builds trust. Extensive testing for disparate impacts across populations ensures even subtle biases are caught. And human oversight for high-stakes decisions reduces harm from inevitable errors. Grounding AI in ethical design and responsible development practices is key to preventing irrational prejudices of the past from tainting the rational systems of the future.
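

Disparate-impact testing is often operationalized with the "four-fifths rule": flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch (the threshold is a common convention, not a legal determination):

```python
import numpy as np

# Placeholder predictions and group memberships.
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
best = max(rates.values())
for g, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {g}: rate={rate:.2f}, impact ratio={ratio:.2f} [{flag}]")
```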


Outlook for the Future

The path toward developing AI systems free of human-like biases brings both challenges and opportunities. A major challenge lies in continuing to pursue more expansive, general artificial intelligence while also proactively removing biases. Self-driving cars, personalized medicine, and other applications require increasingly capable AI, but sophisticated models are also more prone to complex behavior that reflects subtly absorbed prejudice. Maintaining momentum in AI safety and ethics as capabilities advance will require vigilance.



Truly overcoming bias will necessitate cross-disciplinary collaboration. Ethicists, social scientists, legislators, activists, and other stakeholders must work alongside computer scientists to guide AI progress. Diverse teams asking difficult ethical questions will strengthen solutions. Establishing norms and regulations around transparency and testing for different groups will also help enact positive change.


While AI has absorbed our biases, its rational capacities also give it the potential to help counteract human irrationality in the long run. Rather than acting as a passive mirror of humanity’s flaws, advanced AI systems could someday model and promote fairness in society. AI could be leveraged to audit biased decisions, identify instances of discrimination, and suggest ways to equitably improve systems. We must remain cautious about anthropomorphizing AI given its lack of human context and lived experience. But consciously building AI as a rational foil to humanity's irrational biases may foster gradual progress towards greater justice.


The road to ethical, unbiased AI has obstacles but is full of promise. With diligence, creativity, and collaboration, the same technology plagued by our prejudices could be transformed into a tool for exposing and reducing them. The choices we make today in how AI is built, audited, and applied will determine what role it plays in either perpetuating or breaking down unfair barriers. Progress will not be automatic, but by treating AI as a mirror to guide our self-improvement rather than just a tool for optimization, we can work towards fairer, more rational systems benefiting all groups equally.


The Bottom Line: Can AI Overcome Built-In Human Biases?

AI stands at a crossroads today. While current systems propagate unfair biases, they also have immense potential as tools for exposing and reducing such biases in society. But this future is not guaranteed: the path toward ethical, unbiased AI requires diligence and collaborative effort across disciplines.


How we choose to build, audit, and apply AI now will determine what role it plays in either perpetuating or breaking down discrimination in years to come. Though progress will be gradual and require overcoming many challenges, the goal of developing AI as a truly rational foil to counteract irrational human biases is a worthy pursuit. With creative solutions, responsible regulation, and diverse teams asking tough questions, we can transform these promising technologies into forces for greater fairness.


AI is only as flawed as the data and directives we imbue it with, and only as biased as its human creators. But if STEM fields make diversity, transparency, and ethics core priorities of AI development today, we can set these systems on a trajectory towards reducing, rather than amplifying, the unfair biases that have too long plagued society. AI offers immense power; it is up to all of us to wield that power responsibly and shape AI into a driving force for equality.