Autonomous Vehicles and the Problem With Poorly Created AI

Artificial intelligence (AI) is a critical technology that is advancing incredibly quickly, but most of the implementations that I and other analysts have reviewed have failed to meet expectations.

The reasons often include a lack of AI competence on the customer or vendor side, and a lack of understanding, by the people building the solution, of the problems it is meant to solve.

I was recently traveling in California to attend HPE’s Amplify Partner event. Some of the messaging I heard while in the state concerned AI-based autonomous driving technology that was poorly conceived and unsafe.

Let’s talk about the problem of releasing dangerous AI programs, because I have no doubt that multiple vendors will release dangerous AIs by making a common tech-industry mistake: shipping software to meet a target release date, whether or not it is ready.

AI in autonomous vehicles 

I worry that releasing AI technology long before it is ready will sour the market on autonomous cars.

Autonomous driving AI has been under development for over 20 years, and it started to make more sense when NVIDIA entered the market and favored using the metaverse, rather than physical roads, to train these systems.

With simulation, in this case NVIDIA’s Omniverse-based Drive AI training solution, you can do the equivalent of decades of testing in months without ever putting anyone at risk. As we have seen, road testing has resulted in a number of accidents that could have been avoided had the testing been done in the metaverse rather than on physical roads.
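To make the scale argument concrete, here is a minimal sketch of simulation-based safety testing. It assumes a hypothetical scenario generator and a placeholder driving-stack check; these stand in for a real simulator such as NVIDIA’s, and none of the names below come from an actual API. The point is simply that when each simulated scenario runs in a fraction of a second, you can accumulate the statistical equivalent of years of road exposure in hours.

```python
import random

# Hypothetical stand-ins for a real simulation pipeline (e.g., one built
# on a tool like NVIDIA DRIVE Sim). These names are illustrative only.

def generate_scenario(rng):
    """Randomly parameterize one simulated driving scenario."""
    return {
        "pedestrian_crossing": rng.random() < 0.3,
        "visibility_m": rng.uniform(20, 200),
        "road_friction": rng.uniform(0.3, 1.0),
    }

def run_scenario(scenario, rng):
    """Placeholder for stepping the driving stack through the scenario.

    A real simulator would step physics and sensor models; here we just
    return True when a (made-up) safety violation occurs.
    """
    risk = 0.02
    if scenario["pedestrian_crossing"]:
        risk += 0.05
    if scenario["visibility_m"] < 50:
        risk += 0.05
    if scenario["road_friction"] < 0.5:
        risk += 0.03
    return rng.random() < risk

def estimate_failure_rate(num_scenarios=100_000, seed=0):
    """Run many cheap simulated scenarios and estimate the violation rate."""
    rng = random.Random(seed)
    failures = sum(
        run_scenario(generate_scenario(rng), rng) for _ in range(num_scenarios)
    )
    return failures / num_scenarios

if __name__ == "__main__":
    # 100,000 simulated scenarios finish in seconds on a laptop; gathering
    # a comparable sample of rare events on real roads would take years.
    print(f"Estimated safety-violation rate: {estimate_failure_rate():.4f}")
```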

Some autonomous vehicle companies have consistently over-promised what their AI technology could do, which can result in dangerous situations. 

AIs, particularly those doing jobs that put lives at risk, need extensive testing, and the focus should be on getting the technology right, not rushing it out the door. But the history of companies, even major companies, releasing software that wasn’t ready is long and troubled. While these earlier mistakes resulted in lost work and a lot of aggravation, nobody was physically harmed.

With AIs, particularly those managing systems that interact with people, the risk of catastrophic damage is far greater, suggesting the need for a third-party quality assurance process that ensures the product isn’t released until it is safe.

Google Glass timing

One of the biggest tech examples of releasing a product before it was ready was Google Glass.

Instead of waiting until the software was mature, Google released it to the world while it was still in beta and even got customers to pay for the incomplete product. The result set back consumer augmented reality (AR) efforts for years, and some users wore the product, which included a head-mounted camera, in places where recording was inappropriate.

This again highlighted that products that aren’t ready shouldn’t be introduced to the general public. 

Dangers of prematurely releasing AI

Science fiction movies like “Colossus: The Forbin Project,” “War Games,” and even “2001: A Space Odyssey” have dramatized what could happen if an AI had too much power, couldn’t differentiate between reality and simulation, or was given conflicting directives that destabilized it and turned it against people. What is more likely is that an AI is released into production before it is fully tested and vetted.

The cause of this problem is that decision-makers fail to fully weigh the risk they are taking on: both the potential for devastating outcomes and the likelihood that those outcomes will actually occur.

As we begin to move AI into areas that affect human safety, there needs to be a stronger third-party review process to ensure that steps haven’t been skipped, and, until an AI is designated as ready, it shouldn’t be available to anyone but professional testers operating under highly controlled circumstances.

Skipping this due diligence could damage autonomous vehicle companies irreparably and set back autonomous driving efforts by decades, as drivers move to actively avoid the technology and regulators move to ban it.

Risk and oversight

Leaders at autonomous vehicle makers like to take risks, which is refreshing in a way. An unwillingness to take risks has significantly slowed development and innovation in a number of industries.

However, this same behavior, when applied to technology that has the potential to become human-like, is exceedingly dangerous. It could not only destroy companies but also set back by years the adoption of a technology that, done right, could save thousands of lives.

Given the potential for harm, AIs will increasingly need third-party oversight during the release process, ensuring that the people who take risks are offset by a control structure that forces them to assure the quality of an AI before it is released on a vulnerable public.
