The link between artificial intelligence (AI) and bias is alarming.
As AI evolves to become more human-like, it’s becoming clear that human bias is impacting technology in negative, potentially dangerous ways.
Here, we explore how AI and bias are linked and what’s being done to reduce the impact of bias in AI applications:
3 questions on AI and bias
1. How does bias in AI impact automated decision systems?
Using AI in decision-making processes has become commonplace, largely because predictive analytics algorithms can do the work of humans faster and often more accurately. AI now makes decisions on small matters, like restaurant preferences, and on critical issues, like determining which patient should receive an organ donation.
While the stakes differ, human bias that finds its way into AI decisions affects outcomes at every level: bad product recommendations cut into retailer profits, and flawed medical decisions can directly affect patients’ lives.
Vincent C. Müller takes a look at AI and bias in his research paper, “Ethics of Artificial Intelligence and Robotics,” included in the Summer 2021 edition of “The Stanford Encyclopedia of Philosophy.” Fairness in policing is a primary concern, Müller says, noting that human bias exists in the data sets used by police to decide, for example, where to focus patrols or which prisoners are likely to re-offend.
This kind of “predictive policing,” Müller says, relies heavily on data influenced by cognitive biases, especially confirmation bias, even when the bias is implicit and unknown to human programmers.
Christina Pazzanese refers to the work of political philosopher Michael Sandel, a professor of government, in her article, “Great promise but potential for peril,” in The Harvard Gazette.
“Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” Sandel says. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society.”
2. Why does bias exist in AI?
To figure out how to remove or at least reduce bias in AI decision-making platforms, we have to consider why it exists in the first place.
Take Microsoft’s chatbot experiment in 2016. The chatbot, Tay, was set up to hold conversations on Twitter, interacting with users through tweets and direct messages. In other words, the general public had a large part in determining the chatbot’s “personality.” Within a few hours of its release, the chatbot was replying to users with offensive and racist messages: it learned from anonymous public interactions, which were quickly co-opted by a group of users feeding it inflammatory content.
In the chatbot’s case, the influence was overt and deliberate, but it’s often not so clear-cut. In their joint Harvard Business Review article, “What Do We Do About the Biases in AI?” James Manyika, Jake Silberg, and Brittany Presten say that implicit human biases, those people don’t realize they hold, can significantly impact AI.
Bias can creep into algorithms in several ways, the article says. It can include biased human decisions or reflect “historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed.” As an example, the researchers point to Amazon, which stopped using a hiring algorithm after finding it favored applications based on words like “executed” or “captured,” which were more commonly included on men’s resumes.
Flawed data sampling is another concern, the trio writes: groups can be overrepresented or underrepresented in the training data that teaches AI algorithms to make decisions. For example, facial analysis technologies examined by MIT researchers Joy Buolamwini and Timnit Gebru had higher error rates for minorities, especially minority women, potentially because those groups were underrepresented in the training data.
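To make the sampling problem concrete, here is a minimal sketch that assumes a hypothetical evaluation table with made-up group labels and predictions (not the data or methodology from the MIT study). It computes a per-group error rate and a simple group count, the kind of basic check that can surface the disparity the researchers describe:

```python
import pandas as pd

# Hypothetical evaluation results: one row per test example, with the
# demographic group, the true label, and the model's prediction.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "B"],
    "true_label": [1, 0, 1, 1, 1, 0, 1, 0],
    "predicted":  [1, 0, 0, 1, 1, 0, 1, 1],
})

# Error rate per group: the share of rows where the prediction
# disagrees with the ground truth.
results["error"] = results["true_label"] != results["predicted"]
print(results.groupby("group")["error"].mean())

# Group sizes reveal sampling imbalance: a group with few examples
# in the data is more likely to be poorly served by the model.
print(results["group"].value_counts())
```

If one group’s error rate is noticeably higher and its sample count noticeably lower, that points to the underrepresentation problem the article describes.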
3. How can we reduce bias in AI?
In the McKinsey Global Institute article, “Tackling bias in artificial intelligence (and in humans),” Jake Silberg and James Manyika lay out six guidelines AI creators can follow to reduce bias in AI:
- Be aware of the contexts in which AI can help correct for bias as well as where there is a high risk that AI could exacerbate bias
- Establish processes and practices to test for and mitigate bias in AI systems
- Engage in fact-based conversations about potential biases in human decisions
- Fully explore how humans and machines can work best together
- Invest more in bias research, make more data available for research (while respecting privacy), and adopt a multidisciplinary approach
- Invest more in diversifying the AI field itself
The researchers acknowledge that these guidelines won’t eliminate bias altogether, but applied consistently, they can significantly improve the situation.
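As a rough illustration of the second guideline, testing for bias, one simple check teams sometimes run is a “disparate impact” ratio, which compares the rate of favorable outcomes a model produces for each group. The data, group labels, and 0.8 threshold below are hypothetical illustrations, not a standard drawn from the McKinsey article:

```python
import pandas as pd

# Hypothetical model outputs: whether each applicant was approved,
# plus a demographic attribute used only for auditing, not prediction.
decisions = pd.DataFrame({
    "group":    ["A"] * 50 + ["B"] * 50,
    "approved": [1] * 30 + [0] * 20 + [1] * 18 + [0] * 32,
})

# Approval (selection) rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest group rate divided by highest.
# A common rule of thumb flags ratios below 0.8 for closer review.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: outcomes differ substantially across groups.")
```

A real audit would pair a metric like this with the other guidelines: examining how the training data was collected, keeping humans in the loop, and involving people from multiple disciplines in reviewing the results.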