Artificial intelligence, like any new technology, goes through the phases of what is called the “hype cycle.” It begins with a technology trigger, rises to a peak of inflated expectations (the overhype), falls into a trough of disillusionment when the technology doesn’t meet those expectations, climbs the slope of enlightenment as solid examples of success emerge, and finally reaches the plateau of productivity.
With AI, thanks to movies, we got stuck at the peak of this cycle, where expectations run to something like Skynet in the Terminator movies while the reality is closer to what Siri does when you ask it a question. There are AI solutions that do real work, but they aren’t anywhere near an AI capable of addressing a human-level spectrum of decisions, let alone taking over the world.
That’s a problem, because if you think any AI can do any job, it won’t end well for your project. Instead, you need a well-defined problem, coupled with an AI solution designed to address it, before you can even hope for a successful outcome.
Let’s explore the big problem with AI right now: overset expectations.
Level 2 Autonomous Driving
Autonomous driving is the first broad application of AI, and this week the AAA released a report indicating that the current form of the technology isn’t just short of expectations; it’s unsafe.
If you are like me, you’ve likely found your Level 2 system does weird stuff, like trying to take offramps you didn’t intend to take or suddenly shutting itself off when it gets confused. Worse, a number of Tesla drivers have been injured because, arguably, they depended on their car’s AI to do things it can’t yet reliably accomplish. Consumer Reports even tried to get Tesla to change the name of its technology from Autopilot so that drivers wouldn’t treat it like an autopilot.
This reflects the danger of overset expectations, and it can kill a technology, because decision-makers, in turn, set project expectations the technology isn’t yet ready to meet.
When Expectations Fall
As a technology goes through this hype cycle, expectations fall while the technology advances, until the two meet at a crossover point where lowered expectations and increasing capabilities align. Truly autonomous driving is expected in around five years. Until then, there is a real risk that individual efforts could be banned on safety grounds because of overset expectations.
We are effectively sitting in the disillusionment phase of this technology, and that is not a place you want to be for long, because it can kill an effort. Consumer products that never made it past this phase include LaserDiscs, quadraphonic sound, robotic pets, and mono-wheel scooters. Expectations were way ahead of reality, and while some of this stuff came back in other forms (CDs, 7.1 surround sound, the revived Aibo), overset expectations that result in an extended disillusionment phase can kill a technology.
On the corporate side, facial recognition, an applied AI technology, is at very high risk right now. Expectations were overset, and the problems have continued for far too long, producing a disillusionment phase deep enough that one of the technology’s founders, and a leader in AI, recently abandoned it.
Wrapping Up: Will AI Fail?
Frankly, no, AI isn’t at risk of total failure, though we just got a huge warning on facial recognition suggesting that segment isn’t out of the woods yet. I’ve heard from large enterprises like Bank of America that are having great success with AI.
But that success came because they spent the time to understand what AI could, and could not, do, so their expectations were in line with reality. Rather than being disappointed, they were surprised that their solution worked incredibly well. It was a tightly targeted, well-developed, deeply trained solution that didn’t attempt to do what AI cannot yet do.
To be successful with AI, you have to understand, deeply, both the problem you need solved and the capabilities of the AI solutions you are considering. That understanding has to include the limitations of the technology and the costs of getting it to work correctly (people often drastically underbudget for training, for instance) before you pull the trigger on a project. Finally, you have to set reasonable expectations for time to completion based on actual deployments; otherwise, you’ll miss those expectations and have a failed project.
We’ll eventually progress out of this phase, but until we do, your protection is either to avoid AI or to develop a deep, accurate understanding of its current capabilities; otherwise, you’ll likely get bitten by overset expectations.