Humans continue to pose questions about artificial intelligence (AI) and ethics.
Back in 1951, when Christopher Strachey launched his AI-driven checkers program at the University of Manchester in England, people were already questioning the implications of human-mimicking machines.
Today, depending on who is giving the answer, AI is either the key to driving society toward a peaceful, harmonious future or the biggest potential threat to humankind.
We can find examples to support both perspectives in the business world and beyond. But reaching a consensus about the ethical bounds of AI is an endless societal debate that will morph as AI applications evolve, becoming more sophisticated — and more human-like.
Here, we examine a series of questions on AI and ethics that society will wrestle with for the foreseeable future:
5 ethical questions AI raises
1. How much surveillance is too much?
AI applications that help employers and advertisers keep tabs on humans are among the most prominent targets for ethical skepticism. Whether it’s the AI-driven algorithms that track and analyze users’ moves on social media or a “smart” app that runs in the background, clocking every keystroke and click made by an employee, we’ve reached an age where many are under constant digital surveillance.
On one hand, is it really such a terrible thing if employers can keep better track of exactly what employees are doing during working hours? After all, employees receive paychecks, and employers have a significant stake in employee efficiency. On the other hand, what will become of personal autonomy? What kind of workplace environments are we creating when employees feel surveilled and monetized at every moment of the workday?
Work isn’t the only place humans are under increased scrutiny. “Big Brother” is lurking, and as AI grows more sophisticated and accurate, we could eventually see applications like facial recognition become a part of nearly every public interaction.
“In an AI-enabled future, we assume that everyone and everything will have knowledge about everyone else,” says Kathleen Walch, managing partner and principal analyst at the AI firm Cognilytica, in a recent Forbes article, “Ethical Concerns of AI.”
“No longer will we be able to just unplug for a while. We may quickly move to a world where just a few companies and government have an uncomfortable amount of knowledge and level of control over the lives of everyone.”
2. When do the ‘needs of many’ outweigh the needs of a few?
As AI evolves to become more intelligent and autonomous, it’s easy to imagine scenarios where we would hope and expect AI applications to put the needs of the human collective above the needs of individuals.
Imagine AI that can root out, identify, and help detain a potential terrorist — here, there is a clear societal benefit. When situations are grayer, however, it’s a slippery slope.
Take another potential scenario: a factory that is polluting a river and poisoning a small town’s water supply, as Debarshi Chaudhury cites in his Forbes article, “Why the Ethics of AI are Complicated.”
“A machine could decide that for the benefit of the many thousands of people in the town, the best option is to destroy the factory,” Chaudhury says. “That’s something a ‘good’ human being would never do — we’d try to fix it in other ways.”
Until we reach a level where AI is able to examine these kinds of issues with real context, we can’t truly trust the technology when it is left to its own devices. The tendency for AI to follow the most logical, literal path is one AI engineers have yet to adequately address.
3. What happens if AI replaces humans in the workplace?
It’s perhaps one of the most common technological concerns expressed by the public, dating back to the start of the Industrial Revolution in the late 1700s. As factories emerged for the first time, it quickly became apparent how much work could be done with the help of inventions like the mechanical belt.
If machines can perform jobs, the logic goes, we’ll need fewer humans on hand, which means fewer employment options. Machines are often quicker, more precise, and much more efficient, since they don’t need breaks nearly as regularly as human workers. And now, machines powered by AI are, indeed, replacing some jobs previously completed by people.
Some technology watchdogs see signs that the ongoing pandemic has accelerated an inevitable shift toward automation and away from human work. One study estimated that around 400,000 jobs were lost to automation in U.S. factories between 1990 and 2007, according to a recent Time article.
The drive to replace humans with machinery is accelerating, writer Alana Semuels says in the article. While some of the jobs lost during the pandemic have come back, Semuels says, “Some will never return.” One group of economists estimates that some 42% of the jobs lost are gone “forever,” she says.
Ultimately, AI will significantly impact the entire career landscape for would-be employees. Already, “babysitting robots” is a job, and human oncologists are beginning to turn over diagnostic duties to AI, which has proven to be far more accurate in some cases, according to Yulia Gavrilova in her blog post, “10 AI Ethics Questions We Need to Ask,” on the Serokell Software Development Company website.
“What will happen to specialists when AI systems become available in every hospital not only for making diagnoses but also for performing operations?” Gavrilova says.
“If computers take over all the work, what will we do? For many people, work and self-realization are the meaning of life.”
4. Is AI making it easier for bad actors to carry out attacks?
We’ve focused on how AI itself poses several ethical issues in certain applications, but what about AI in the wrong hands? AI can do good, but it can be wielded as a weapon by malicious users.
AI-driven network breaches and other cyber attacks are on the rise. TaskRabbit, an online marketplace that connects freelancers with clients, suffered a data breach in 2018, putting almost 4 million users at risk. Information like Social Security numbers and bank account details made it into the hands of bad actors through an AI-controlled botnet, which used compromised machines to launch DDoS attacks on TaskRabbit’s servers.
Another example is a well-publicized AI-driven attack on WordPress, in which more than 20,000 sites were infected in a botnet-style cyber attack, putting users’ personal information and credit card numbers at risk.
Industry watchers believe a 2019 attack on Instagram was also AI-based, with many experts speculating that hackers used AI to scan user data for potential vulnerabilities.
5. What role is AI playing in the spread of disinformation?
Our hyper-connected world allows us to stay in constant contact with one another and to access information about virtually anything at high speed.
The Internet Age led society to a place where disinformation and “fake news” have reached epidemic levels. AI is at the heart of many harmful disinformation campaigns.
AI-driven programs can create extremely convincing fake images, videos, and conversations, producing alarmingly authentic-feeling content.
What happens if we reach a point where no one, or a small number of people, can tell if images are real or AI-generated? With every passing year, we see evidence that AI-created content is only becoming more sophisticated. Consider the MIT Technology Review article, “These weird, unsettling photos show that AI is getting smarter,” which explores the work being done by researchers to perfect AI image generation.
We’ve already seen first-hand how AI-driven bots can impact world events. The post-mortem on the 2016 U.S. presidential election shows how quickly and deliberately AI can be used to spread convincing disinformation.
Throughout the 2016 election cycle, bots were used to automate social media accounts with the aim of manipulating voters and stoking partisan division. The 2016 election was not an isolated event. Even as research conducted through programs like the University of Oxford’s Computational Propaganda project continues to piece together how foreign actors were able to significantly impact an American election, bots continue to run rampant across social media platforms.