ARMONK, N.Y. — A new report shows the gap between companies’ plans for artificial intelligence (AI) ethics and the market reality.
“AI ethics in action,” by the IBM Institute for Business Value (IBV), is intended to show “where executives stand” on AI ethics and how they’re implementing it, IBM said this month.
The report indicates there’s a “strong imperative for advancing trustworthy AI,” including better performance compared to peers in sustainability, social responsibility, and diversity and inclusion.
Yet, there’s an AI ethics gap between “leaders’ intention and meaningful actions.”
“AI ethics in action” also highlights a “radical shift” in the roles responsible for leading and upholding AI ethics at companies: 80% of survey respondents pointed to a non-technical executive, such as a CEO, as the primary “champion” for AI ethics, up from 15% in 2018.
Other findings from “AI ethics in action”
Non-technical business executives are now seen as the driving force in AI ethics:
- The CEO (28%) as well as board members (10%), general counsel (10%), privacy officer (8%), and risk and compliance officer (6%) are viewed as being most accountable for AI ethics
- 66% of respondents cite the CEO or other C-level executive as having a strong influence on their organization’s ethics; 58% cite board directives; 53% cite the shareholder community
Building trustworthy AI is perceived as a strategic differentiator, and organizations are beginning to implement AI ethics mechanisms:
- Over 75% of business leaders agree AI ethics is important to their organizations, up from about 50% in 2018
- 75% believe ethics is a source of competitive differentiation
- Over 67% of respondents who view AI and AI ethics as important indicate their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion
- Over 50% of organizations have taken steps to embed AI ethics into their existing approach to business ethics
- More than 45% of organizations have created AI-specific ethics mechanisms, such as an AI project risk assessment framework and auditing/review process
Ensuring ethical principles are embedded in AI solutions is a pressing need for organizations, but progress is slow:
- 79% of CEOs are prepared to embed AI ethics into their AI practices, up from 20% in 2018
- Over 50% of organizations have publicly endorsed common principles of AI ethics
- Less than 25% have operationalized AI ethics
- Fewer than 20% strongly agreed that their organization’s practices and actions match or exceed their stated principles and values
- 68% acknowledge that having a diverse and inclusive workplace is important to mitigating bias in AI. Findings, however, indicate that AI teams are still less diverse than their organizations’ workforces: 5.5 times less inclusive of women; 4 times less inclusive of LGBT+ individuals; and 1.7 times less racially inclusive.
IBM provides several recommendations for business leaders to implement AI ethics: take a cross-functional, collaborative approach; establish both organizational and AI life cycle governance to operationalize the discipline of AI ethics; and reach beyond the organization for partnership.
“As many companies today use AI algorithms across their business, they potentially face increasing internal and external demands to design these algorithms to be fair, secured, and trustworthy,” said Jesus Mantas, global managing partner, IBM Consulting.
“Yet, there has been little progress across the industry in embedding AI ethics into their practices.”
Mantas said building “trustworthy AI is a business imperative and a societal expectation, not just a compliance issue.”
“As such, companies can implement a governance model and embed ethical principles across the full AI life cycle,” Mantas said.
Methodology
The IBM report “AI ethics in action” is partly based on a survey of 1,200 executives working across 22 countries and 22 industries.
The survey was conducted in partnership with Oxford Economics in 2021.