Artificial intelligence (AI) is either our best path to a better tomorrow or the harbinger of an apocalypse, depending on who is talking about it.
Both paths are possible, and I’m increasingly worried that we aren’t focusing enough on creating the former and will, as a result, get the latter. Early AI efforts were primarily focused on things like improving the speed and accuracy of medical diagnoses. Today, they seem more focused on replacing call centers and finding new ways to get people to buy what they don’t need or vote for particular politicians.
As they mature, AIs will have the power to make decisions at machine speeds, but, like any technology, there is no assurance the result will benefit humanity. Current priorities appear to be sales, weapons, replacing human workers, robotics (including autonomous transportation), and entertainment.
Let’s talk about resetting AI priorities to better ensure the future we might want, rather than the one I’m increasingly worried we will get.
Suggested AI Priorities
If we look at where AI investment is concentrated, our highest priorities would seem to be creating more effective ways to kill people in other countries in war, extracting money from customers more successfully, and replacing humans with machines, mostly in call centers and sales.
But the problems we actually face are climate change, pandemics, an overabundance of believable false information, a failing education system, and an inability to balance work and life. Yes, companies like IBM are working to make AI more of a human companion than a replacement, but too often, they seem to be standing alone in that effort.
AIs could be used, even in their current form, to analyze students individually, customize a curriculum for each of them, and help them find a better life path. They could help us understand relationships, identifying both predators and those who might become life-changing friends. Rather than being used as weapons to kill people, they could be deployed against fires and other natural and man-made disasters, which currently do more damage and kill more people than wars. And they certainly could be used to better identify and isolate people who actively spread false information.
The movie “The Terminator” is a cautionary tale about weaponizing an AI before creating a defense against it. We have projects like AI Shield and Shield AI, but their budgets are a small fraction of what is being spent on weaponizing AI. The Lifeboat Foundation’s effort is focused on finding a way to give AIs empathy, a kind of moral compass that is both more flexible and more effective than Isaac Asimov’s old “Three Laws of Robotics.”
Now, I’m not saying we shouldn’t do those other things at all. But when you are watching the world burn, prioritizing solutions that stop the burning over solutions that are even more destructive, like AI-powered robotic weapons systems, would seem prudent.
A simple AI-based tool that flags false information could do a fantastic amount of good, while a tool that uses what it knows about you to convince you of something false could do serious harm.
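To make the first idea concrete, here is a minimal sketch of what such a flagging tool could look like, using an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model choice and candidate labels are illustrative assumptions on my part, not a real fact-checking product; a production system would verify claims against trusted sources rather than trust a single classifier.

```python
# A rough sketch, not a real fact-checker: it uses a general-purpose
# zero-shot classifier to flag text that reads like a dubious claim.
# The model name and labels below are illustrative assumptions; a
# production system would verify claims against trusted sources.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def flag_claim(text: str) -> dict:
    """Return the best-matching label and whether the text gets flagged."""
    labels = ["factual statement", "opinion", "misleading or false claim"]
    result = classifier(text, candidate_labels=labels)
    top = result["labels"][0]  # labels come back sorted by score
    return {"text": text,
            "label": top,
            "flagged": top == "misleading or false claim"}

if __name__ == "__main__":
    print(flag_claim("Drinking bleach cures viral infections."))
```

Even a sketch like this shows the double-edged nature of the technology: the same classifier that flags a dubious claim could be tuned to suppress legitimate speech, which is why the empathy and moral-compass work mentioned above matters.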
Wrapping Up
I don’t think our AI development priorities are being weighed against the threats we face today, and much of the result may end up doing more harm than good.
Technology in general, and artificial intelligence in particular, are force multipliers that today appear more focused on doing harm than protecting against it. We appear to be pulling back from humanitarian efforts, like helping doctors make more accurate decisions, while increasing funding for AI-driven weapons systems.
If we don’t restore balance to these efforts, we are increasingly likely to regret the eventual outcome of what otherwise could be a potent tool to make the world a better place.