The Ethical AI Libraries that are Critical for Every Data Scientist to Know

by Sharmistha Chatterjee, December 28th, 2020

There has been an exponential rise in applications of AI, data science, and machine learning across a wide variety of industries. In response to the range of individual and societal harms and negative consequences that AI systems may cause, scientists, researchers, and data scientists have become increasingly aware of AI ethics.

AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

In machine learning and AI, any given algorithm is said to be fair, or to have fairness, if its results are independent of given variables, especially those considered sensitive. Sensitive traits of individuals (e.g. gender, ethnicity, sexual orientation, disability) should not correlate with the outcome.

An ML algorithm exhibits disparate treatment if its decisions are (partly) based on the subject’s sensitive attribute, and it has a disparate impact if its outcomes disproportionately hurt (or benefit) people with certain sensitive attribute values (e.g., women or Black people).

Fairness is commonly formalized through the following notions; a numeric sketch of the first few follows the list.

  • Unawareness
  • Demographic Parity
  • Equalized Odds
  • Predictive Rate Parity
  • Individual Fairness
  • Counterfactual Fairness
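
To make the first few notions concrete, the sketch below computes a demographic parity difference, a disparate impact ratio, and equalized-odds gaps from toy predictions. The arrays and group encoding are made up purely for illustration.

```python
import numpy as np

# Toy data: y_true = actual outcomes, y_pred = model decisions,
# s = sensitive attribute (0 = unprivileged group, 1 = privileged group).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
s = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def selection_rate(pred, mask):
    """P(y_pred = 1) within the group selected by `mask`."""
    return pred[mask].mean()

def group_rate(pred, true, mask, true_label):
    """P(y_pred = 1 | y_true = true_label) within the group selected by `mask`."""
    sub = mask & (true == true_label)
    return pred[sub].mean()

# Demographic parity: selection rates should be (near) equal across groups.
dp_diff = selection_rate(y_pred, s == 1) - selection_rate(y_pred, s == 0)

# Disparate impact ratio: the legal "80% rule" flags values below 0.8.
di_ratio = selection_rate(y_pred, s == 0) / selection_rate(y_pred, s == 1)

# Equalized odds: both the TPR gap and the FPR gap should be small.
tpr_gap = group_rate(y_pred, y_true, s == 1, 1) - group_rate(y_pred, y_true, s == 0, 1)
fpr_gap = group_rate(y_pred, y_true, s == 1, 0) - group_rate(y_pred, y_true, s == 0, 0)

print(dp_diff, di_ratio, tpr_gap, fpr_gap)
```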

Research on fair ML algorithms and toolkits shows that they can detect and mitigate bias and yield fair outcomes for both text and visual data. The algorithms developed so far fall into three phases of the ML lifecycle: pre-processing, in-processing (optimization at training time), and post-processing.

In this blog, we primarily discuss the open-source toolkits and algorithms that can help us to design Fair AI solutions.

ToolKits

What-If Tool — By Google

The What-If Tool, launched by Google, is a feature of the open-source TensorBoard web application. It lets you do the following (a minimal notebook sketch follows the list):

  • Analyze an ML model without writing code.
  • Explore model results through an interactive visual interface, given pointers to a TensorFlow model and a dataset.
  • Manually edit examples from your dataset and see the effect of those changes.
  • Automatically generate partial dependence plots, which show how the model’s predictions change as any single feature is changed.
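
Below is a minimal notebook sketch, assuming the witwidget package is installed and following the pattern of the public WIT demos. The toy DataFrame, the scikit-learn model, and the custom prediction function are placeholders for your own data and model.

```python
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Toy numeric dataset; in practice you would use your own test set.
test_df = pd.DataFrame({
    "age":    [25.0, 40.0, 33.0, 51.0],
    "income": [30.0, 80.0, 55.0, 60.0],
    "label":  [0, 1, 1, 0],
})
features = ["age", "income"]
clf = LogisticRegression().fit(test_df[features], test_df["label"])

def df_to_examples(df):
    """Convert a DataFrame into the tf.Example protos that WIT expects."""
    examples = []
    for _, row in df.iterrows():
        ex = tf.train.Example()
        for col in df.columns:
            ex.features.feature[col].float_list.value.append(float(row[col]))
        examples.append(ex)
    return examples

def custom_predict(examples):
    """Return class probabilities for each tf.Example handed over by WIT."""
    X = np.array([[ex.features.feature[f].float_list.value[0] for f in features]
                  for ex in examples])
    return clf.predict_proba(X).tolist()

# Interactive exploration: edit examples, inspect predictions, plot dependence.
config_builder = (WitConfigBuilder(df_to_examples(test_df))
                  .set_custom_predict_fn(custom_predict))
WitWidget(config_builder, height=720)
```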

AI Fairness 360 — By IBM

The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the IBM research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The AI Fairness 360 package is available in both Python and R.

The AI Fairness 360 package includes:

  • a comprehensive set of metrics for datasets and models to test for biases,
  • explanations for these metrics, and
  • algorithms to mitigate bias in datasets and models.

It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education.

The figure in the original post illustrates different fairness metrics for the protected attributes sex and age on the German Credit dataset.


This GitHub link describes in more detail the bias mitigation algorithms (e.g. optimized pre-processing, disparate impact remover) and the supported fairness metrics.
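
As a rough illustration of the pre-processing workflow, the sketch below loads the German Credit data, measures the mean difference in favourable-outcome rates between sex groups, and reweighs the dataset. It follows the pattern of the AIF360 tutorials and assumes the raw German Credit files have been downloaded into AIF360’s data directory as its documentation describes.

```python
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Defaults: protected attributes 'sex' and 'age'
# (requires the raw german.data file in AIF360's data folder).
dataset = GermanDataset()
privileged, unprivileged = [{'sex': 1}], [{'sex': 0}]

# Bias before mitigation: difference in favourable-outcome rates between groups.
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Mean difference before reweighing:", metric.mean_difference())

# Pre-processing mitigation: reweigh examples to remove group-level bias.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         privileged_groups=privileged,
                                         unprivileged_groups=unprivileged)
print("Mean difference after reweighing:", metric_transf.mean_difference())
```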

Fairlearn — By Microsoft

Fairlearn, primarily developed by Microsoft, focuses on how an AI system can behave unfairly in terms of its impact on people, i.e. in terms of harms. It considers:

Allocation harms – These harms can occur when AI systems extend or withhold opportunities, resources, or information. Key applications include hiring, school admissions, and lending.

Quality-of-service harms – Quality of service refers to whether a system works as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld.


This blog describes how the tool incorporates different types of fairness algorithms (reduction variants and post-processing algorithms) that can be applied to classification and regression problems.

To integrate Fairlearn, refer to https://github.com/fairlearn/fairlearn (a short usage sketch follows).
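
Here is a minimal sketch of the assess-then-mitigate flow with Fairlearn’s MetricFrame and the ExponentiatedGradient reduction; the data is synthetic and purely illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic data: two features, a binary sensitive feature `sex`, binary label y.
rng = np.random.default_rng(0)
X = pd.DataFrame({"f1": rng.normal(size=400), "f2": rng.normal(size=400)})
sex = rng.integers(0, 2, size=400)
y = (X["f1"] + 0.8 * sex + rng.normal(scale=0.5, size=400) > 0.4).astype(int)

# Assess: per-group accuracy and selection rate for an unconstrained model.
base = LogisticRegression().fit(X, y)
mf = MetricFrame(metrics={"accuracy": accuracy_score,
                          "selection_rate": selection_rate},
                 y_true=y, y_pred=base.predict(X),
                 sensitive_features=sex)
print(mf.by_group)

# Mitigate: reduction approach enforcing (approximate) demographic parity.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sex)
y_pred_fair = mitigator.predict(X)
```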

Themis-ml

themis-ml is a Python utility built on top of pandas and scikit-learn that implements fairness-aware machine learning algorithms for measuring and mitigating discrimination. Here, discrimination means a preference (bias) for or against a set of social groups that results in the unfair treatment of its members with respect to some outcome.

You can try out its pre-processing, model estimation, and post-processing techniques with standard ML algorithms on the different datasets available on GitHub.

Aequitas — The Bias and Fairness Audit Toolkit

Aequitas is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive tools.
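
A rough sketch of an Aequitas audit, loosely following its documented workflow: compute group-level crosstabs, then disparities relative to a reference group. The toy DataFrame and column choices are illustrative; consult the Aequitas docs for the exact input format.

```python
import pandas as pd
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.bias import Bias

# Aequitas expects a DataFrame with binary `score` and `label_value` columns
# plus one or more attribute columns (toy data, for illustration only).
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],
    "label_value": [1, 0, 1, 0, 0, 1, 1, 0],
    "race":        ["white", "white", "black", "black",
                    "white", "black", "white", "black"],
})
df, _ = preprocess_input_df(df)

# Group-level confusion-matrix metrics (FPR, FNR, precision, ...) per race.
xtab, _ = Group().get_crosstabs(df)

# Disparities of each group relative to a chosen reference group.
bdf = Bias().get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "white"})
print(bdf[["attribute_value", "fpr_disparity", "fnr_disparity"]])
```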

Responsibly — Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems

Responsibly is developed for practitioners and researchers primarily for auditing the bias and fairness of machine learning systems, and for mitigating bias and adjusting fairness through algorithmic interventions, with special emphasis on NLP models.

Derivable Conditional Fairness Regularizer

DCFR is an adversarial learning method for dealing with fairness issues in supervised machine learning tasks. Traditional fairness notions, such as demographic parity and equalized odds, are demonstrated to be special cases of conditional fairness. The main objective of this library is to define a Derivable Conditional Fairness Regularizer (DCFR) that can be integrated into any decision-making model, in order to track the trade-off between precision and fairness of algorithmic decision making and to measure the degree of unfairness via an adversarial representation.

Fairness Comparison

This repository facilitates the benchmarking of fairness-aware machine learning algorithms by accounting for differences between fairness techniques. The benchmark compares a number of different algorithms under a variety of fairness measures on a large number of existing datasets. It also shows that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition.

Counterfactual Local Explanations via Regression (CLEAR)

Though not a fairness tool per se, the model explainability tool CLEAR explains single predictions of machine learning classifiers, based on the view that a satisfactory explanation of a single prediction needs to both explain the value of that prediction and answer ’what-if-things-had-been-different’ questions. It answers such questions by considering the relative importance of the input features and showing how they interact.

Differential Fairness

This library leverages the connections between differential privacy and legal notions of fairness, and measures the fairness cost of a mechanism M(x) with a parameter ε. The concept originates from differential privacy: for protected groups defined by intersecting combinations of attributes such as gender, race, and nationality (marginalizing over the remaining attributes of the dataset x), the probabilities of each outcome must be similar across groups, in the spirit of the legal 80% rule.
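
As a simplified reading of that definition (not the library’s reference implementation), the sketch below estimates ε as the largest absolute difference in log outcome probabilities between any pair of intersectional groups, with Dirichlet smoothing of the counts; the data is made up for illustration.

```python
import numpy as np
import pandas as pd

def differential_fairness_epsilon(df, group_cols, outcome_col, concentration=1.0):
    """Estimate the differential-fairness epsilon of binary decisions:
    the max gap in log P(y | group) over all intersectional groups and
    both outcomes y, with Dirichlet smoothing of the empirical counts."""
    probs = []
    for _, grp in df.groupby(group_cols):
        pos = grp[outcome_col].sum()
        n = len(grp)
        probs.append((pos + concentration / 2) / (n + concentration))
    probs = np.array(probs)
    log_p1 = np.log(probs)       # log P(y=1 | group)
    log_p0 = np.log(1 - probs)   # log P(y=0 | group)
    return max(log_p1.max() - log_p1.min(), log_p0.max() - log_p0.min())

# Toy decisions over intersecting gender x race groups.
data = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "f", "m", "f", "m"],
    "race":   ["a", "b", "a", "b", "a", "a", "b", "b"],
    "hired":  [1, 0, 1, 1, 0, 1, 0, 1],
})
print(differential_fairness_epsilon(data, ["gender", "race"], "hired"))
```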

Flexible-Fairness-Constraints

This library introduces an adversarial framework to enforce fairness constraints on graph embeddings. It uses a composition technique that can flexibly accommodate different combinations of fairness constraints during inference.

In the context of social recommendations, this framework allows one user to request that their recommendations be invariant to both their age and gender, while another user requests invariance to just their age. The approach is demonstrated on standard knowledge-graph and recommender-system benchmarks.

FairRegression

This library optimizes the accuracy of the estimation subject to a user-defined level of fairness with respect to multiple sensitive attributes (e.g., race, gender, age). The fairness constraint induces nonconvexity of the feasible region, which rules out the use of an off-the-shelf convex optimizer.

Fair Classification

This library provides logistic regression implementations in Python for the fair classification mechanisms introduced in the AISTATS’17, WWW’17, and NIPS’17 papers.

Fair Clustering

Below are some of the techniques involved in designing Fair Clustering Algorithms:

Scalable Fair Clustering

This library implements a fair k-median clustering algorithm. It computes a fairlet decomposition of the dataset, followed by a non-fair k-median algorithm on the fairlet centers. The resulting clustering is then extended to the whole dataset by assigning each data point to the cluster that contains its fairlet center, to yield a final fair clustering.
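
A schematic of the second stage described above, assuming a fairlet decomposition has already been computed: cluster the fairlet centers with an off-the-shelf algorithm, then label every point by the cluster of its fairlet center. This is illustrative only, with k-means standing in for a true k-median solver and fairlet centers taken as fairlet means.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assume a fairlet decomposition is already available:
# X = all points, fairlet_of_point[i] = index of the fairlet containing point i.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 2))
fairlet_of_point = np.repeat(np.arange(4), 3)   # 4 fairlets of 3 points each
fairlet_centers = np.array([X[fairlet_of_point == f].mean(axis=0)
                            for f in range(4)])

# Stage 2: cluster the fairlet centers (k-means standing in for k-median here).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(fairlet_centers)

# Extend to the whole dataset: each point inherits its fairlet center's cluster.
labels = km.labels_[fairlet_of_point]
print(labels)
```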

Fair Algorithms for Clustering

Variational Fair Clustering

This library implements a clustering method that finds clusters with specified proportions of different demographic groups pertaining to a sensitive attribute of the dataset (e.g. race, gender). It can be used with any well-known clustering method, such as K-means, K-median, or spectral clustering (normalized cut), in a flexible and scalable way.

Proportionally Fair Clustering

This library implements proportional centroid clustering in a metric context, clustering n points with k centers. Fairness is defined through proportionality: any n/k points are entitled to form their own cluster if there is another center that is closer in distance for all of those n/k points.

Fair Clustering Through Fairlets

This library introduces the concept of fairlets, which are minimal sets that satisfy fair representation while approximately preserving the clustering objective. In its implementation, it shows that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms.

Though finding good fairlets can be NP-hard, the library obtains efficient approximation algorithms based on minimum cost flow.

Fair Recommendation Systems

Below are some of the techniques involved in designing Fair Recommendation Systems:

Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms

This fair recommendation library targets two-sided online platforms, comprising customers on one side and producers on the other. Recommendation services on these platforms have been built primarily to maximize customer satisfaction through the personalized preferences of individual customers.

This library incorporates a fair allocation of indivisible goods by guaranteeing at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer.

FLAG: Frequency Linked Attribute for Evaluating Consumer-side Fairness

The goal of this library is to demonstrate an application of assigning synthetic demographic attributes to different recommendation data sets. Such attributes are personally sensitive and therefore excluded from publicly available data sets.

Fairness-Aware Variational Autoencoder

Variational autoencoders (VAEs) for collaborative filtering are a framework for making recommendations. The objective of this library is to incorporate randomness into the regular operation of VAEs in order to increase fairness (mitigate position bias) across multiple rounds of recommendation.

Fairness-Aware_Tensor-Based_Recommendation

The objective of this library is to enhance recommendation fairness while preserving recommendation quality. It achieves this goal by introducing:

  • a new sensitive latent factor matrix for isolating sensitive features,
  • a sensitive information regularizer that extracts sensitive information that can taint other latent factors,
  • an effective algorithm to solve the proposed optimization model, and
  • an extension to multi-feature and multi-category cases.

SIREN

SIREN is a Python interface built on top of the MyMediaLite toolbox that offers diversity-oriented recommendations and visualizations for two diversity metrics (long-tail and unexpectedness). SIREN can be used by content providers (e.g. news outlets) to investigate which recommendation strategy better fits their needs, and to analyze recommendation effects in different news environments.

Awesome AI Guidelines

A large amount of content has been published that attempts to address these issues through “Principles”, “Ethics Frameworks”, “Standards & Regulations”, “Checklists” and beyond; it is captured through this repository link. In addition, papers, documents, and resources for introducing fairness in computer vision are available on GitHub.

Fairness in Machine Learning

This library contains a Keras & TensorFlow implementation of "Towards fairness in ML with adversarial networks" and a PyTorch implementation of "Fairness in Machine Learning with PyTorch". The principle behind it is a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model. It also includes a hyperparameter for trading off accuracy against fairness.
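
A compressed sketch of the adversarial idea behind these implementations (not the repository’s own code): a classifier predicts the label while an adversary tries to recover the sensitive attribute from the classifier’s output, and a hyperparameter `lam` trades accuracy against fairness. All tensors are synthetic placeholders.

```python
import torch
import torch.nn as nn

# Toy tensors: features X, sensitive attribute s, labels y (placeholders).
torch.manual_seed(0)
X = torch.randn(256, 5)
s = (torch.rand(256) > 0.5).float()
y = ((X[:, 0] + 0.8 * s + 0.3 * torch.randn(256)) > 0).float()

clf = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))  # predicts y
adv = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))    # predicts s from clf output
bce = nn.BCEWithLogitsLoss()
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-2)
lam = 1.0  # fairness/accuracy trade-off hyperparameter

for step in range(200):
    # 1) Train the adversary to recover s from the classifier's logits.
    logits = clf(X).detach()
    adv_loss = bce(adv(logits).squeeze(1), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train the classifier to predict y while fooling the adversary.
    logits = clf(X)
    clf_loss = bce(logits.squeeze(1), y) - lam * bce(adv(logits).squeeze(1), s)
    opt_clf.zero_grad(); clf_loss.backward(); opt_clf.step()
```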

FairPut — Fair Machine Learning Framework

FairPut is a light, open framework that states a preferred process at the end of the machine learning pipeline to enhance model fairness. It simultaneously helps enhance model interpretability and robustness while retaining a reasonable level of accuracy.

Other Libraries

EthicML

EthicML is a researcher’s toolkit for performing and assessing algorithmic fairness. It supports multiple sensitive attributes and vision datasets, and offers a codebase typed with mypy, tested code, and reproducible results.

Collaborative Fairness in Federated Learning – In the standard federated learning setting, all participants receive the same or similar models regardless of their contributions. This library instead utilizes reputation to enforce participants to converge to different models, which helps achieve collaborative fairness without sacrificing predictive performance.

Rich Subgroup Fairness

Classification constraints on small collections of pre-defined groups can appear fair on each individual group, yet badly violate the fairness constraint on one or more structured subgroups defined over the protected attributes (from Kearns et al., https://arxiv.org/abs/1711.05144).

This fairness library implements statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. It works primarily with scikit-learn estimators (e.g. LinearRegression as the learning oracle) and with group classes given by linear threshold functions. The fairness metrics supported for learning and auditing are:

  • False Positive Rate Equality
  • False Negative Rate Equality

Fairness in Sentiment Prediction

Sensitive Subspace Robustness (SenSR) demonstrates how to eliminate gender and racial biases in a sentiment prediction task. The fair distance it relies on is computed on the basis of a truncated SVD, and the code can be tried out from this link on GitHub.
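
One ingredient mentioned above, sketched roughly: estimate a sensitive subspace with a truncated SVD of vectors that encode group information (here, made-up differences of embeddings for gender-paired words), and define a distance that ignores movement inside that subspace. This is an illustrative reading of the approach, not the SenSR code.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
dim = 50

# Toy embeddings for gender-paired words ("he"/"she", "man"/"woman", ...).
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(10)]
diffs = np.array([a - b for a, b in pairs])   # directions carrying group info

# Sensitive subspace: top singular directions of the difference vectors.
svd = TruncatedSVD(n_components=3, random_state=0).fit(diffs)
A = svd.components_                 # (3, dim) orthonormal basis of the subspace
P = np.eye(dim) - A.T @ A           # projector onto its orthogonal complement

def fair_distance(x, z):
    """Distance that ignores displacement inside the sensitive subspace."""
    return np.linalg.norm(P @ (x - z))

x, z = rng.normal(size=dim), rng.normal(size=dim)
print(fair_distance(x, z), np.linalg.norm(x - z))
```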

Unintended Bias Analysis (Conversational AI) – Contains notebooks to train deep learning models as part of the Conversational AI project.

References

Previously published at https://techairesearch.com/most-essential-python-fairness-libraries-every-data-scientist-should-know/