SIAM News Blog

Designing Algorithms to Increase Fairness in Artificial Intelligence

By Anil Aswani and Matt Olfat

The increasing use of artificial intelligence (AI) to automate decision-making has sparked concern about potential AI-based discrimination. Such bias undermines efforts to reduce inequality, and legal regulations require that AI not discriminate against protected classes on the basis of gender, race, age, and the like. One strategy to minimize prejudice is to exclude data on protected classes when training AI. However, this approach is not only naive but also insufficient, because AI systems can learn to use information on protected classes via other data [2]; for example, race can often be inferred from a home address. Consequently, more sophisticated algorithmic and computational approaches are necessary to ensure that AI behaves impartially.

Such concerns about AI fairness are not merely theoretical. Researchers have observed several instances of biased data and prejudiced AI. When discriminatory behavior influences the data that trains AI, the resulting AI output often perpetuates this bias. For instance, doctors frequently undertreat pain in women as compared to men [3]. AI systems for pain management trained using such biased data can algorithmically preserve this gender discrimination. Social media algorithms provide further examples; in one situation, LinkedIn disproportionately advertised high-paying jobs to men. In another, Facebook’s algorithms displayed considerable racial prejudice in censorship [1].

The first step towards developing fair AI is to quantify fairness; to date, investigators have proposed several definitions for supervised learning. Put simply, these definitions require that an individual’s membership in a protected class (e.g., gender) not affect the AI’s outcome (e.g., approval or denial of a loan application). One can tailor this idea to classification, where the goal is to construct a function \(h : \mathbb{R}^p\rightarrow\{-1,+1\}\) (known as a classifier) that uses a vector of descriptive features \(x\in\mathbb{R}^p\) characterizing each individual to predict a binary outcome \(y\in\{-1,+1\}\) for that individual. When an individual’s protected class \(z\in\{-1,+1\}\) is binary, the notion of demographic parity [4] at level \(\Delta\) requires that \(|\mathbb{P}[h(x) = +1 | z = +1] - \mathbb{P}[h(x) = +1 | z = -1]| \leq \Delta\). Intuitively, the probability of receiving a positive outcome from the classifier is approximately independent of the protected class’s value. Other definitions of fairness—such as equal opportunity—also exist [5].
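To make the demographic-parity definition concrete, the gap can be estimated from a sample by comparing the positive-prediction rates of the two groups. The following is a minimal sketch (the function and variable names are our own, not from the references):

```python
import numpy as np

def demographic_parity_gap(h, x, z):
    """Estimate |P[h(x)=+1 | z=+1] - P[h(x)=+1 | z=-1]| from samples.

    h : function mapping an (n, p) feature matrix to predictions in {-1, +1}
    x : (n, p) feature matrix
    z : (n,) protected-class labels in {-1, +1}
    """
    preds = h(x)
    rate_pos = np.mean(preds[z == +1] == +1)  # positive-outcome rate, z = +1
    rate_neg = np.mean(preds[z == -1] == +1)  # positive-outcome rate, z = -1
    return abs(rate_pos - rate_neg)

# Synthetic example: the protected class is independent of the features,
# so a classifier that ignores z should have a small parity gap.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 3))
z = np.where(rng.random(1000) < 0.5, 1, -1)
gap = demographic_parity_gap(lambda x: np.sign(x[:, 0]), x, z)
```

A classifier satisfies demographic parity at level \(\Delta\) precisely when this estimated gap (up to sampling error) is at most \(\Delta\).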

While there are many approaches for constructing accurate classifiers from training data, researchers have only recently begun developing methods that also ensure fairness. Linear classifiers \(h(x) = \mathrm{sign}(x^\mathsf{T}\beta - t)\) compare a linear score \(s(x) = x^\mathsf{T}\beta\) (dependent upon a vector of parameters \(\beta\in\mathbb{R}^p\)) to a threshold \(t\in\mathbb{R}\). One can classically compute the \(\beta\) parameters from training data by solving a convex optimization problem corresponding to either logistic regression or a support vector machine (SVM). In contrast, designing a fair linear classifier requires choosing the \(\beta\) parameters to satisfy the fairness definition for a small value of \(\Delta\) while still maintaining high accuracy in predicting outcome \(y\). This can be accomplished by adding appropriate constraints to the original convex optimization problem and developing algorithms to solve the resulting optimization problem. Here we describe these steps in more detail for the special case where the features have zero mean, \(\mathbb{E}(x) = 0\).
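As a baseline, the unconstrained convex problem can be sketched as plain gradient descent on the logistic loss for \(h(x) = \mathrm{sign}(x^\mathsf{T}\beta - t)\). This is an illustrative sketch only; the step size, iteration count, and synthetic data are our own choices:

```python
import numpy as np

def train_logistic(x, y, steps=500, lr=0.1):
    """Fit a linear classifier h(x) = sign(x^T beta - t) by minimizing the
    (convex) average logistic loss log(1 + exp(-y_i (x_i^T beta - t)))
    with plain gradient descent."""
    n, p = x.shape
    beta, t = np.zeros(p), 0.0
    for _ in range(steps):
        margin = y * (x @ beta - t)
        g = -y / (1.0 + np.exp(margin))    # d(loss_i)/d(score_i)
        beta -= lr * (x.T @ g) / n         # score depends on beta via x_i
        t -= lr * (-g.sum()) / n           # score depends on t with sign -1
    return beta, t

# Synthetic data: the label is a noisy function of the first feature.
rng = np.random.default_rng(0)
x = rng.normal(size=(400, 2))
y = np.sign(x[:, 0] + 0.3 * rng.normal(size=400))
beta, t = train_logistic(x, y)
acc = np.mean(np.sign(x @ beta - t) == y)
```

Fair training then amounts to solving this same minimization subject to additional constraints on \(\beta\), as described next.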

One approach augments the optimization problems that underlie logistic regression or SVM with an additional constraint \(-\delta \leq \mathbb{E}[x^\mathsf{T}\beta | z=+1] - \mathbb{E}[x^\mathsf{T}\beta | z=-1]\leq\delta\), which ensures that the means of the score function—when conditioned on the protected class—are approximately equal [9]. Adding another constraint, \(-\delta \leq \mathbb{E}[\beta^\mathsf{T}xx^\mathsf{T}\beta | z=+1] - \mathbb{E}[\beta^\mathsf{T}xx^\mathsf{T}\beta | z=-1]\leq\delta\), to the optimization problem can further improve the fairness of the linear classifier. This ensures that the variances of the score function—when conditioned on the protected class—are approximately equal. However, training occurs with data \((x_i, \, y_i, \, z_i)\) for \(i = 1,\ldots, n\). Therefore, the actual constraints in the optimization problem correspond to their sample average approximations

\[ -\delta \leq \Big(\sum_{i: z_i=+1} x_i - \sum_{i: z_i=-1} x_i\Big)^\mathsf{T}\beta \leq \delta \\ -\delta \leq \beta^\mathsf{T}\Big(\sum_{i: z_i=+1} x_i x_i^\mathsf{T} - \sum_{i: z_i=-1} x_i x_i^\mathsf{T}\Big)\beta \leq \delta.\]
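Both sample-average quantities are cheap to evaluate for a candidate \(\beta\). A minimal sketch (function and variable names are our own):

```python
import numpy as np

def fairness_constraint_values(beta, x, z):
    """Evaluate the two sample-average fairness terms for a candidate beta.

    Returns (linear_term, quadratic_term); a fair beta keeps both
    within [-delta, +delta].
    """
    # Difference of within-group feature sums (a vector in R^p):
    d_mean = x[z == +1].sum(axis=0) - x[z == -1].sum(axis=0)
    linear_term = d_mean @ beta
    # Difference of within-group second-moment matrices sum_i x_i x_i^T
    # (a p-by-p matrix, generally indefinite):
    d_cov = x[z == +1].T @ x[z == +1] - x[z == -1].T @ x[z == -1]
    quadratic_term = beta @ d_cov @ beta
    return linear_term, quadratic_term

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 4))
z = np.where(rng.random(200) < 0.5, 1, -1)
beta = rng.normal(size=4)
lin, quad = fairness_constraint_values(beta, x, z)
```

Note that the matrix in the quadratic term is built from outer products \(x_i x_i^\mathsf{T}\), so the term equals the difference of \(\sum_i (x_i^\mathsf{T}\beta)^2\) over the two groups.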

Adding only the first (linear) fairness constraint results in a convex optimization problem, while including the second (indefinite quadratic) fairness constraint yields a nonconvex optimization problem. Consequently, attaining good numerical solutions that satisfy the second fairness constraint necessitates careful algorithmic design [7]. Because this second constraint is an indefinite quadratic, a spectral decomposition can separate it into a difference of two convex quadratic functions. Once decomposed in this manner, the problem is solvable via algorithms from difference-of-convex programming [8]. Figure 1 depicts the improved fairness with high accuracy of a linear classifier trained by this method.
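The spectral step can be illustrated directly: an indefinite symmetric matrix splits into a difference of two positive-semidefinite matrices by separating its positive and negative eigenvalues. This sketch shows only the decomposition itself, not the full difference-of-convex algorithm of [7, 8]:

```python
import numpy as np

def split_indefinite_quadratic(A):
    """Split a symmetric matrix A into A = P - N with P, N both positive
    semidefinite, using the spectral decomposition A = V diag(w) V^T."""
    w, V = np.linalg.eigh(A)
    P = (V * np.clip(w, 0, None)) @ V.T   # keeps the positive eigenvalues
    N = (V * np.clip(-w, 0, None)) @ V.T  # keeps the (negated) negative ones
    return P, N

# An indefinite matrix, e.g. a difference of second-moment matrices:
A = np.array([[2.0, 0.0, 1.0],
              [0.0, -3.0, 0.0],
              [1.0, 0.0, 0.5]])
P, N = split_indefinite_quadratic(A)
```

The indefinite constraint function \(\beta^\mathsf{T}A\beta\) then becomes \(\beta^\mathsf{T}P\beta - \beta^\mathsf{T}N\beta\), a difference of two convex quadratics, which is the form required by difference-of-convex programming.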

Figure 1. From left to right: comparison of a linear support vector machine (SVM), linear SVM of [9], and linear SVM of [7]. 1a. We separate red and green points while remaining fair between “x”s and “o”s. 1b. The distribution of the linear score s(x) conditioned on the protected class z. 1c. The distribution of the linear score s(x) conditioned on outcome y. Fairness occurs when conditional distributions in 1b are similar, and accuracy occurs when conditional distributions in 1c are dissimilar. Figure courtesy of [7].

The application of dimensionality reduction algorithms, like the classic principal component analysis (PCA), also raises the question of fairness. This issue is less well-studied because the lack of outcome data makes past fairness definitions inapplicable. Recent work has moved toward developing a fair version of PCA [6]. Let \(x\in\mathbb{R}^p\) be a vector of features describing an individual, and suppose that the protected class \(z\in\{-1,+1\}\) of an individual is binary. In this case, a dimensionality-reducing map \(\Pi: \mathbb{R}^p\rightarrow\mathbb{R}^d\) with \(d < p\) is \(\Delta\)-fair with respect to a family \(\mathcal{F}\) of classifiers if \(|\mathbb{P}[h(\Pi(x)) = +1 | z = +1] - \mathbb{P}[h(\Pi(x)) = +1 | z = -1]| \leq \Delta\) for all classifiers \(h\in\mathcal{F}\). The intuition behind this definition is that a dimensionality reduction is fair if one cannot accurately predict the protected class of any single point using the lower-dimensional data computed after applying map \(\Pi\). The classic PCA framework can be extended using semidefinite programming to construct an algorithm for fair PCA [6]. Figure 2 shows that fair PCA yields an intuitively fairer outcome.
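The flavor of this idea can be conveyed with a deliberately simplified heuristic: deflate the direction along which the two protected groups' means differ, then run ordinary PCA on what remains, so that a linear classifier cannot separate the groups by their means in the reduced space. This is only an illustration of the goal, not the semidefinite-programming method of [6], which also controls higher-order differences:

```python
import numpy as np

def naive_fair_pca(x, z, d):
    """Heuristic fair dimensionality reduction (illustrative only):
    remove the class-mean-difference direction, then apply standard PCA.
    Not the semidefinite-programming formulation of [6]."""
    x = x - x.mean(axis=0)                 # center the features
    v = x[z == +1].mean(axis=0) - x[z == -1].mean(axis=0)
    v = v / np.linalg.norm(v)              # unit "protected" direction
    x_proj = x - np.outer(x @ v, v)        # project that direction out
    # Standard PCA on the deflated data via the SVD:
    _, _, Vt = np.linalg.svd(x_proj, full_matrices=False)
    W = Vt[:d].T                           # top-d principal directions
    return x_proj @ W

rng = np.random.default_rng(2)
z = np.where(rng.random(500) < 0.5, 1, -1)
# Features whose first coordinate leaks the protected class:
x = rng.normal(size=(500, 5)) + np.outer(z, np.array([2.0, 0, 0, 0, 0]))
y_low = naive_fair_pca(x, z, d=2)
```

After this deflation the group means coincide in the reduced space, but covariance differences between groups can still leak the protected class; handling such nonlinear leakage is precisely what motivates the convex formulations in [6].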

Figure 2. Motivation for fairness in unsupervised learning. Dimensionality reduction via fair principal component analysis (PCA) diminishes opportunities for discrimination. The thick red line in 2b and 2c is the optimal linear support vector machine (SVM) that separates by color, and the dotted line is the optimal Gaussian kernel SVM. Figure courtesy of [6].

Most research on impartial AI has focused on the learning problem, and much work remains in the development of algorithms for unbiased decision-making. The fundamental difficulty is that fairness is inherently nonconvex. One can solve typical learning and decision-making problems by minimizing a convex loss function; fairness, however, essentially asks the opposite: that predictions and decisions be as independent of the protected classes as possible. Thus, developing effective and fair AI techniques will require directly confronting the underlying nonconvexity in various instances of the problem.

[1] Angwin, J., & Grassegger, H. (2017). Facebook’s secret censorship rules protect white men from hate speech but not black children. ProPublica.
[2] Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214-226). Cambridge, MA: Association for Computing Machinery.
[3] Edwards, L. (2013, March 16). The gender gap in pain. The New York Times, p. SR8.
[4] Goh, G., Cotter, A., Gupta, M., & Friedlander, M.P. (2016). Satisfying real-world goals with dataset constraints. In Advances in Neural Information Processing Systems 29 (NIPS 2016) (pp. 2415-2423). Barcelona, Spain.
[5] Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29 (NIPS 2016) (pp. 3315-3323). Barcelona, Spain.
[6] Olfat, M., & Aswani, A. (2018). Convex formulations for fair principal component analysis. In 33rd AAAI Conference on Artificial Intelligence. Honolulu, Hawaii. To appear. 
[7] Olfat, M., & Aswani, A. (2018). Spectral algorithms for computing fair support vector machines. In 21st International Conference on Artificial Intelligence and Statistics. Playa Blanca, Lanzarote.
[8] Tuy, H. (1998). Convex Analysis and Global Optimization. In Advances in Natural and Technological Hazards Research (Vol. 22). Boston, MA: Springer.
[9] Zafar, M.B., Valera, I., Rodriguez, M.G., & Gummadi, K.P. (2017). Fairness constraints: Mechanisms for fair classification. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, FL.

Anil Aswani is an assistant professor in Industrial Engineering and Operations Research (IEOR) at the University of California, Berkeley. His current research focuses on developing methodologies for robust data-driven decision-making, particularly for applications in healthcare. Matt Olfat is a Ph.D. candidate in IEOR at UC Berkeley. He holds a B.S. in systems engineering and mathematics from the University of Virginia, and his research interests include fairness and robustness in machine learning and decompositions of high-dimensional datasets.
