How Hidden Prejudices Shape Algorithms and Impact Business & Society
What Exactly is AI Bias?
AI bias occurs when algorithms are trained on data that contains existing, and sometimes hidden, biases. This leads to the perpetuation of prejudices or falsehoods that we as a society wish to avoid. Addressing AI bias matters both for promoting equity, accessibility, and fairness, and because unchecked bias degrades the quality of every business process touched by AI. AI bias is especially pernicious because AI is colloquially viewed as a "neutral actor" that simply analyzes data and produces an output.
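To make the mechanism concrete, here is a minimal, hypothetical sketch (the synthetic data and scikit-learn setup are our illustration, not anything described above) of how a model trained on biased historical labels faithfully reproduces that bias in its predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: one legitimate skill feature, one group label.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # two hypothetical demographic groups

# Historical hiring labels are biased: group 1 was held to a higher bar,
# even though true qualification depends only on skill.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The model learns the historical prejudice as if it were signal:
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"P(hired | average skill, group {g}) = {p:.2f}")
```

With identical skill, the model assigns the disadvantaged group a much lower hiring probability, because the prejudice baked into the training labels is indistinguishable from legitimate signal.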
This is not a simple problem to address, and it strikes at the heart of how AI systems create their output when fed a set of training data. That training data may be an accurate representation of the real world, or it may be a biased one. As Brian Christian, author of The Alignment Problem, notes: "If a certain type of data is underrepresented or absent from the training data but present in the real world, then all bets are off."
Take, for example, the application of prediction algorithms in the criminal justice system. Christian writes about a software program called COMPAS, which was designed to algorithmically calculate an inmate's propensity for recidivism. Investigative journalism by ProPublica found that the algorithm was unfairly biased against African Americans, rating them as higher risk of reoffending than the actual observed data supported; the scores were derived largely from questionnaire inputs rather than outcomes. The application of this type of algorithm can lead to social injustice.
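ProPublica's central finding was a disparity in error rates: defendants who did not go on to reoffend were flagged as high risk far more often if they were Black. The audit itself is a simple computation; below is a hedged sketch of that false-positive-rate comparison on made-up data (not the actual COMPAS records):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical audit data: group membership, the tool's high-risk flag,
# and whether the person actually reoffended.
group = rng.integers(0, 2, size=n)
reoffended = rng.random(n) < 0.35
# Simulate a scoring tool that flags one group more aggressively.
flag_rate = np.where(group == 1, 0.55, 0.30) * np.where(reoffended, 1.4, 1.0)
high_risk = rng.random(n) < flag_rate

for g in (0, 1):
    innocent = (group == g) & ~reoffended   # people who did NOT reoffend
    fpr = high_risk[innocent].mean()        # share wrongly flagged high risk
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

A large gap between the two printed rates is exactly the kind of evidence ProPublica reported.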
While the algorithmic bias of COMPAS describes AI bias in a "pre-LLM" world, the same issue persists with the proliferation of LLMs. Researchers have found that biased output regarding gender and race can permeate LLM responses. To the extent that these tools will help guide and educate people, such bias represents a social problem.
In a business context, the stakes are lower than incarceration, but they are high nonetheless. AI bias can propagate inequity across employees in areas such as hiring, compensation, and promotion as AI is incorporated into functions such as resume review, talent evaluation, and even hiring and firing decisions.
What should we do about AI Bias?
It is crucial to interrogate how AI models are created, and what the risks of bias are, when calibrating their uses across society and business. For example, with the current crop of foundational LLMs, great effort has gone into reinforcement learning from human feedback as a means of steering these models toward output we deem more equitable, even when the biases latent in their training data would otherwise push them the other way.
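At the heart of that reinforcement-learning step is a reward model fit to human preference judgments: annotators compare pairs of responses, and a model learns to score the preferred one higher. The following toy sketch of that preference-fitting idea uses a Bradley-Terry style logistic loss over invented feature vectors; it is an illustration of the concept, not any production RLHF pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_pairs = 8, 2_000

# Each candidate response is a feature vector (a stand-in for a
# language model's internal representation of its own output).
preferred = rng.normal(size=(n_pairs, dim)) + 0.5  # responses annotators chose
rejected = rng.normal(size=(n_pairs, dim))         # responses annotators passed over

w = np.zeros(dim)  # linear reward model: reward(x) = w @ x
lr = 0.1

# Bradley-Terry objective: maximize P(preferred beats rejected),
# where P = sigmoid(reward(preferred) - reward(rejected)).
for _ in range(200):
    margin = (preferred - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = ((p - 1.0)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad  # gradient descent on the negative log-likelihood

# The trained reward model can now rank fresh responses; RLHF then
# steers the language model toward output this scorer rewards.
a, b = rng.normal(size=dim) + 0.5, rng.normal(size=dim)
print("reward(a) > reward(b):", float(w @ a) > float(w @ b))
```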
As Christian notes, "If used wisely—and descriptively rather than prescriptively—the very systems capable of reinforcing and perpetuating the biases latent in society can be used, instead, to make them visible, inarguable."
This is the optimistic take on AI bias, and an effort every researcher and AI company should embrace. It is a value we hold dear at ModuleQ, reflected in our commitment to positive impact and a people-centric application of AI.
How others define AI bias:
IBM: "AI bias, also called machine learning bias or algorithm bias, refers to the occurrence of biased results due to human biases that skew the original training data or AI algorithm—leading to distorted outputs and potentially harmful outcomes."
PwC: "The definition of AI bias is straightforward: AI that makes decisions that are systematically unfair to certain groups of people. Several studies have identified the potential for these biases to cause real harm."
Additional Reading:
Brian Christian - The Alignment Problem (2020)
Roselli, Matthews, and Talagala - Managing Bias in AI (2019)
Ferrara - The Butterfly Effect in Artificial Intelligence Systems: Implications for AI Bias and Fairness (2024)
HBR - What Do We Do About the Biases in AI? (2019)