As artificial intelligence plays an ever greater role in our world, the question of ethics in our use of AI gains greater urgency. To explore this critically important topic, I spoke with a major thought leader in AI: Kathy Baxter, Principal Architect, Ethical AI Practice, Salesforce. In a wide-ranging conversation, Baxter provided insight on the following:
1) To what extent is bias a major problem in today’s AI systems? What effect is that bias having?
2) On your blog, you write that “To determine if you are making decisions based on unfair criteria like race, gender, geography, or income, you need fairness through awareness. That means collecting sensitive variables to see correlations in the data but not make decisions based on those sensitive variables or their proxies.” But isn’t it true that once the data is collected, it will be used to make decisions? (A brief illustrative sketch of this idea follows this list.)
3) You mention that companies need to commit to “creating and implementing AI responsibly and ethically.” How do you see this process evolving? On a related note, what are your thoughts around surveillance of employees and/or the ethical considerations of individual privacy?
4) What are steps that companies can take to improve the ethical foundation of their AI systems?
5) What about AI as it pertains to back-to-work solutions?
6) The future: If we look several years ahead, what do you see for the future of AI and ethics? It seems like one of the core challenges is that there is no governing body for AI. Will companies be motivated to move forward on their own, given all the competitive advantages they get from aggressively focused AI?
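The “fairness through awareness” approach Baxter quotes in question 2 can be made concrete with a small, hypothetical sketch. Nothing below comes from the interview: the dataset, column names, and model choice are assumptions, used only to illustrate collecting sensitive variables for auditing while keeping them out of the decision itself.

```python
# Minimal sketch of "fairness through awareness": keep sensitive
# variables in the dataset so correlations can be audited, but
# exclude them from the features the model actually decides on.
# All file and column names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("loan_applications.csv")       # hypothetical dataset

SENSITIVE = ["gender", "race"]                   # collected, never used as model inputs
FEATURES = ["credit_score", "debt_to_income"]    # decision inputs only
TARGET = "approved"

# 1) Audit before modeling: how does the historical outcome rate
#    differ across each sensitive group?
for col in SENSITIVE:
    print(df.groupby(col)[TARGET].mean())

# 2) Train: the model never sees the sensitive columns.
model = LogisticRegression().fit(df[FEATURES], df[TARGET])

# 3) Audit after modeling: compare the model's own approval rates
#    across groups to catch bias carried in through proxy features.
df["predicted"] = model.predict(df[FEATURES])
print(df.groupby("gender")["predicted"].mean())
```

The point of the pattern is that the sensitive columns exist only for measurement: the model’s inputs are limited to decision-relevant features, and the post-training audit is what surfaces bias that leaks in through proxies.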
Selected quotes from the full interview:
Is bias in AI widespread? What effect does this bias have?
We are seeing bias come in anywhere humans are involved, because AI just reflects our own human tendencies.
The areas we need to be most concerned about are anything that impacts human rights. So, our right to freedom: parole, bail, predictive policing. Access to benefits, like food and medical assistance, housing assistance. Privacy, so facial recognition. And of course medical, anything involving safety and health.
We know that those have a tendency to be biased if the data are not representative, if we are not collecting data from everyone, or if the data that we do collect reflects our own historical biases, or if we’re not applying decision-making equally to everyone. So, if certain communities are policed more heavily and more data is recorded about them than about other areas, then [the AI] becomes unbalanced.
I think [bias in AI] is probably more widespread than we may be aware of. Because we don’t have regulations or requirements on transparency and explainability, it is very hard to look across the spectrum at all of the different AI systems.
We don’t always even know when AI is being used, so a company might be using AI to screen resumes and highlight the resumes that they think are going to be the best candidates, and those are the ones that are forwarded to the hiring manager. And so the candidate may have zero insight that an AI had anything to do with their resume. So it’s that opaqueness that makes it difficult to know for sure just how prevalent these issues might be.
You mentioned that companies need to commit to creating and implementing AI responsibly and ethically. How do you see this process evolving?
There are definitely companies out there that are leaders and have shared some very good practices for how they’re doing this. But on a regular basis, I have customers that will reach out to me and say, “Hey, we wanna do this too. We don’t even know where to begin. How do we find somebody that we could hire? What are the steps that we need to take?” And so this is very much an evolving area.
Paula Goldman, who is our Chief Ethical and Humane Use Officer, has likened this time in AI ethics to the 1980s with cybersecurity. So cybersecurity wasn’t even a thing. You didn’t do pen testing or red-teaming, and it wasn’t until those malware attacks came out that this became a thing. And so the security industry had to figure out what are the methodologies, what are the standards, how do we build this practice up?
And so the AI ethics industry is very much in a time similar to that today. We’re trying to figure out what are the best practices, can we agree upon a standard methodology for how we’ll identify bias in different types of models, what’s a safe threshold? ‘Cause we can never say a data set or a model is 100% bias-free. And so what’s that safe level that we agree upon?
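Baxter’s point about agreeing on a methodology and a “safe threshold” can be illustrated with one common, though by no means standard, check. The metric, the toy data, and the 0.10 threshold below are assumptions for illustration, not recommendations from the interview.

```python
# Illustrative only: one common bias check (demographic parity
# difference) compared against a threshold the team agrees on.
# There is no industry-standard threshold today; 0.10 here is an
# assumption, not a figure from the interview.
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        rates.setdefault(group, []).append(pred)
    positive_rates = [sum(v) / len(v) for v in rates.values()]
    return max(positive_rates) - min(positive_rates)

THRESHOLD = 0.10  # hypothetical "safe level" a team might agree on

gap = demographic_parity_difference(
    predictions=[1, 0, 1, 1, 0, 0],          # model decisions (toy data)
    groups=["A", "A", "A", "B", "B", "B"],   # group membership per decision
)
print(f"demographic parity gap = {gap:.2f}, within threshold: {gap <= THRESHOLD}")
```

A real practice would pick metrics suited to the use case and re-run the check whenever the data or the model changes; the snippet only shows what “agreeing on a threshold” could look like in code.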
In regulated industries, I think there’s more oversight. It may not be specific to AI, but whether it is a human deciding who gets a loan or an AI deciding, the regulation is the same. You can’t have biased decisions based on someone’s age, race, or gender. But in non-regulated industries, yes, right now, we don’t have a lot of guidelines and regulations, but that is changing.
What advice would you give to management to help them build an ethical AI framework?
There are a lot of different steps to take, but I think first and foremost, there really needs to be buy-in at the executive level that this is going to be a priority, and an incentive structure created to support that.
If people are still incentivized by increasing user engagement and click-through rates and things like that, those tend to be at odds with making society-first decisions. So you have to decide what are the metrics that are compatible with ethical technology, and then how do we incentivize people to hit those metrics instead of perhaps other metrics that may have been prioritized in the past.
And then you need the company to follow through. This really is an all-hands-on-deck effort involving every employee.
It’s not just the ethicist on your team. Every single employee is responsible for thinking about, “Should we build this in the first place,” or the sales team, “Should I sell this feature for this particular use case?”
And so thinking through all of the different ethical pieces and understanding your individual responsibility is important. And so is having the training and the education available so that everybody feels empowered, so they don’t feel like, “Well, I’m being held accountable for a metric that I don’t exactly understand how I’m supposed to meet.” So the effort needs to come from both sides of the company to be successful.
Are you confident that AI can truly be governed in the years ahead, as it grows in sophistication and complexity?
We need regulation. We really do. Obviously for higher-stakes AI, again, anything that impacts human rights like safety, privacy, freedom, we really need to prioritize those, and we do see the EU in particular very seriously putting together AI regulation.
We see in the US that a number of individual states are putting forth their own regulations, and it goes down to the city level. In California, multiple cities have banned the use of facial recognition technology for various use cases. In New York City right now, there’s a bill being passed to regulate the use of AI in hiring.
But not all of the people writing these policies and bills may understand all the complexities of AI. There are the creators of it, the implementers of it, and then the individual users of it, and regulations need to be specific to each one of those parties. Organizations like the World Economic Forum, IEEE, and the Partnership on AI have each been active in suggesting different types of regulations.