Dell Technologies World is a fascinating conference, particularly when it comes to robotics. Over the years, and again at this year’s event, Dell has highlighted robotics as the future, yet the company does very little with robotics today. Its systems are used to build and program robots, but Dell seems to have the same mental block about robotics that it has about smartphones: it understands the importance but doesn’t seem to want to resource the effort.
At Dell Technologies World this year, Dr. Kate Darling gave a fascinating talk on robotics, predicting that “from transportation systems to hospitals to the military, to the robotization of workplaces and households robots will be everyplace.” (That quote is right out of the talk summary.)
Let’s talk about what Dr. Kate Darling said about the coming wave of robots.
Dealing With The Perception Problem
Dr. Darling opened with some stories of interesting human-robot interactions. First was a Japanese manufacturer that had its employees and robots exercise together in the morning so the workers would think of the robots as colleagues (these were industrial robots that didn’t look human at all). Then she described the Boston Dynamics video in which an engineer kicked the company’s dog-like robot to showcase its stability; the backlash on social media was strong enough that PETA (People for the Ethical Treatment of Animals) had to take a position on whether the kick was cruel.
People were empathizing with a machine that couldn’t feel pain or emotion and wasn’t alive. I have to admit that when I first saw that Boston Dynamics video years ago, I got pissed as well because it looked like abuse. I’d anthropomorphized the robot, and, according to Dr. Darling, I was far from alone.
She then shared an argument she’d had with a guy named “Scott” (who sounded like one of my relatives), who refused to accept that people attribute lifelike qualities to machines, even in the face of clear evidence that many of us do. I mean, how many of us name our cars? Most of us project emotions onto our pets, and we grow up with stuffed animals we treat as if they were alive, suggesting this behavior is ingrained early.
According to Darling, we scan the world around us and sort what we see into categories, one of them being agents. She pointed out that over 80% of Roomba robotic vacuum owners name them (something I didn’t do). Research has found that soldiers who work with bomb-disposal robots name them and even present them with medals for performance. This anthropomorphic behavior is particularly common with robots that, like those Boston Dynamics creates, use animal-like physics to move.
She then shared a personal story about a baby dinosaur toy that was packed with sensors and behaved like a live animal, and how she got upset when it was mistreated. That experience evolved into a workshop at a robotics conference where attendees first held these same toys and were then told to kill one with a hammer or hatchet. No one would do it until they were told that, if they didn’t, all the robots would be destroyed. One guy finally stepped up, and when he was done, everyone seemed to feel bad for the thing, which, in the end, was only a toy.
Another study used robotic Hexbugs. Researchers tested participants for empathy and then gave some of the bugs a backstory. Participants with high empathy would refuse to smash the bugs; those with low empathy, well, they played the Hulk and smashed away.
Driving Human/Machine Interaction In Medicine
Work like this informed the creation of robots designed to interact with children with autism, helping them engage. Another effort used a robot that looked like a baby seal to comfort elderly nursing-home patients, particularly those with dementia, as an alternative to medication.
This use of robots to comfort people surfaced a problem. Even though these tools were doing good, third parties saw them as dystopian, a substitute for human interaction. Dr. Darling highlighted this as a huge problem in how we talk about and use robots: we prematurely compare robots and AI to humans and human intelligence, and we are decades away from machines that warrant the comparison.
Eliminating The False Human/Machine Bridge
This false bridge between robotics and people creates vast problems for deploying robots because people aren’t looking at what robots actually are. Robots can be much smarter than people at calculation and recall, but they aren’t very situationally aware yet (though IBM’s Watson is getting better through efforts like Project Debater). Dr. Darling gave the example of Apple’s Siri, where someone asked Siri to call him a car, and it began referring to the user as “a car.” Machines perceive the world and interact with it differently than we do.
She argues that we shouldn’t try to turn robots into human clones but should evolve them in ways consistent with their strengths and weaknesses, not ours. The better path is to use them as replacements for animals, much as we used automobiles and trucks to replace horses and drones to replace camera-carrying pigeons (I had no idea we’d used pigeons to carry cameras). We’ve used ferrets to pull wire through tight spaces, a job robots could do better. We even used dolphins for underwater mine detection; sadly, we have yet to figure out a robot that does that job better.
Using AI and Smart Robots Correctly
An example of robotics and AI used correctly is the U.S. Patent Office. There, AI isn’t replacing employees; it augments them by reviewing a patent application for prior art, while the human makes the final determination. The office didn’t cut staff; it made the existing staff more effective. This favorable outcome came from teaming the machines up with the people, each doing what they do best.
Dr. Darling closed by indicating that AIs and robots will become human companions and that eventually we’ll treat them much as we treat our pets. This evolution opens up concerns not only about security but about emotional manipulation. Understanding the social aspect of AIs and robots helps us anticipate the kinds of problems likely to emerge, like advertising-funded or compromised robots that manipulate us into buying or doing things we otherwise wouldn’t.
AI Interaction
It was fascinating that Dr. Darling closed her talk much as I close mine when talking about AI, with one exception. I close by saying I’m good with our coming AI masters, covering my butt against a Terminator outcome; she, more accurately, praises them as our coming partners and helpers.
But I agree with her that one of the industry’s biggest perception problems is that AIs and smart robots are seen as threats to our employment when, as with any form of automation, the threat isn’t the technology itself but the decision-makers and trends that surround it. Used properly, AI and smart robots make us more productive, and the skills needed to work with them aren’t risks; they could assure our value as employees for the foreseeable future.
While listening to her talk, one area that came to mind was self-driving tractor-trailers used for long-haul shipping. The drivers’ union has opposed this technology even though we have a massive driver shortage, driven partly by how much folks hate the boredom of driving a long-haul truck. An AI-driven autonomous system lets the driver, who is still needed to manage and secure the load, hand off the boring part of the job to the AI. You could then string trucks together into a convoy, addressing the driver shortage while making the job more attractive and the driver more productive.