It’s pretty clear that both home and industry use of robots is accelerating. “Robot density”, measured as the ratio of individual robots per 10,000 human employees, is rising at an annual rate of 5% to 9%, according to the International Federation of Robotics.
Yet both industrial and personal robots have a looong way to go before we really see the sci-fi future we’ve all been dreaming of — and even then, that future is likely to look different than we’ve imagined.
To understand the challenges facing robots, it’s important to know that robots — whether personal or industrial — exist in two main types:
Single-Task Robots. These are anything from your tiny household Roomba to the Fanuc M-2000iA (usually seen picking up entire cars in automotive factories). Each of these robots does one thing — vacuuming, lifting, etc. — so don’t ask it to recognize your neighbor at the door or pet the cat.
General-Purpose Robots. Hello, Rosie and C-3PO! True general-purpose robots capable of performing thousands of tasks and learning new skills are our sci-fi dream. But, they don’t exist yet. In the meantime, both they and their single-task cousins share some significant challenges.
1. Human-Friendly Interaction
The first major hurdle for both industrial and personal robots is human interaction.
We humans like our interactions to be friendly (Good morning, coffee machine!) and the people we talk with to, you know, get what we’re saying, right?
But until we design robots that can read our moods and respond to the personalities we project onto them, human-machine interaction is going to feel cold and limited.
In an industrial setting, companies mostly aren’t even trying to hit that bar. Most industrial robots are designed for as little human interaction as possible. Instead, they usually work in an isolated part of a factory floor, caged off from human contact. But that’s not great. The future of the workplace is for businesses to move toward the ideal of cobots, with humans and robots working side by side.
To fully hit the “cobot” ideal, robots must understand humans. They must have a human-machine interface that understands and responds to human emotional states and verbal communications. As we can see with the amount of effort large tech companies are putting into their personal-assistant AI products, this is a real challenge.
Robot companies really do understand how important this is to figure out. Witness Mimus, ATONATON’s industrial robot that interacts with people without relying on pre-planned movements. Mimus’ “personality” is curious and somewhat puppy-like — a far cry from the purely functional behavior robots typically exhibit. This is impressive stuff, if still a pretty long way from where we need to be.
2. Environment Mapping
Other than human safety, there’s another reason why today’s industrial robots are generally bolted to the floor or fixed to an assembly line. The technology robots need to generate a map of their environment, figure out where they are on the map, and safely move by using the map is still in its infancy.
This technology (called “SLAM” for Simultaneous Localization and Mapping) is incredibly complex from a programming perspective. Mobile robots need to be able to perform mapping tasks constantly and with great accuracy.
While a Roomba can get by with a two-dimensional map of the floor it vacuums (creating a privacy dilemma in the process), a general-purpose robot interacting with humans should ideally accommodate three dimensions and, most importantly, do it quickly. If it takes a robot too long to figure out the shape of a new area and where it is in that space, it can make some big mistakes. Whether the robot weighs 2 tons or 2 pounds, having it run into or over humans and other objects is a very bad thing.
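To make the mapping half of SLAM concrete, here’s a minimal sketch (not Misty’s actual implementation — the class and parameter names are invented for illustration) of a 2D occupancy grid, the data structure many mobile robots use to track which parts of a space are free or blocked. Each cell holds a probability-like estimate that is nudged toward “occupied” or “free” as sensor readings repeatedly confirm it:

```python
class OccupancyGrid:
    """Minimal 2D occupancy grid: cells start unknown (0.5) and drift
    toward 'occupied' (1.0) or 'free' (0.0) as range readings arrive."""

    def __init__(self, width, height):
        self.cells = [[0.5] * width for _ in range(height)]

    def update(self, x, y, occupied, rate=0.3):
        """Nudge one cell's estimate toward 1.0 (obstacle) or 0.0 (free)."""
        target = 1.0 if occupied else 0.0
        self.cells[y][x] += rate * (target - self.cells[y][x])

    def is_blocked(self, x, y, threshold=0.7):
        """Treat a cell as an obstacle only once evidence has accumulated."""
        return self.cells[y][x] > threshold

grid = OccupancyGrid(10, 10)
# Simulated readings: a wall repeatedly detected at (3, 4), open floor at (2, 4).
for _ in range(10):
    grid.update(3, 4, occupied=True)
    grid.update(2, 4, occupied=False)

print(grid.is_blocked(3, 4))  # True: repeated hits pushed it past the threshold
print(grid.is_blocked(2, 4))  # False: that cell settled toward 'free'
```

The point of the gradual update is robustness: a single noisy reading shouldn’t flip a cell from “free” to “obstacle,” which matters when a wrong cell means steering a 2-ton machine into something. Real SLAM adds the much harder localization half — estimating where the robot is on this map while building it.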
So, if they’re going to interact with us, robots need to identify objects, obstacles, and pathways as fast as we do. And, yes, it may seem easy for humans and animals, but for robots it’s darn hard.
3. Multitasking
As mentioned earlier, today’s industrial robots are generally really good at performing one particular task. And general-purpose robots are getting better and better at doing multiple things. But, the true revolution can’t occur until robots are capable of performing multiple tasks simultaneously. And this is still super hard.
For example, our Misty robot uses a powerful Qualcomm Snapdragon 820 processor (Samsung Galaxy Note 7 class) to achieve cutting-edge facial detection and recognition. Misty’s facial detection algorithm is very successful, but currently it has to be paused for Misty to recognize any other object simultaneously.
Any mobile robot — general purpose or single task — really needs to recognize people and objects while simultaneously interacting physically with its environment. Humans have no problem recognizing one another, for example, while engaged in other tasks — we don’t need to “stop” or “start” processes in order to do that.
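In software terms, the fix is to stop treating perception and motion as one sequential pipeline. A rough sketch of the idea (the face detector and navigation here are stubs I’ve invented, not Misty’s code) runs the two loops on separate threads, so “seeing” never pauses “moving”:

```python
import threading
import time
import queue

detections = queue.Queue()  # perception results flow out asynchronously

def perception_loop(stop, frames):
    """Stub for a vision pipeline: processes camera frames without
    ever blocking the motion loop."""
    for frame in frames:
        if stop.is_set():
            break
        detections.put(f"face in {frame}")  # placeholder for real detection
        time.sleep(0.01)  # stand-in for inference time

def motion_loop(stop, waypoints, path):
    """Stub for navigation: keeps driving while perception runs."""
    for wp in waypoints:
        if stop.is_set():
            break
        path.append(wp)
        time.sleep(0.01)  # stand-in for actuation time

stop = threading.Event()
path = []
t1 = threading.Thread(target=perception_loop, args=(stop, ["frame0", "frame1", "frame2"]))
t2 = threading.Thread(target=motion_loop, args=(stop, ["A", "B", "C"], path))
t1.start(); t2.start()
t1.join(); t2.join()

print(path)               # ['A', 'B', 'C'] — motion never paused for vision
print(detections.qsize()) # 3 — detections arrived concurrently
```

The structure is easy; the hard part the article describes is resources: on an embedded processor, a heavyweight recognition model can starve everything else of compute, which is why today’s robots end up pausing one capability to run another.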
Or, for another example, consider that the average adult knows between 20,000 and 35,000 words in his or her native language. Every one of those words represents an object, action, description, or idea that a general-purpose robot must also be able to understand in order to offer meaningful multi-purpose functionality. And the robot must be able to do this interpretation in real time even while distracted with other processes, the way human beings do every day.
4. Privacy & Security
It’s often treated like a boring and annoying afterthought, but — at least once something goes wrong — we can all agree that data privacy and security matter.
In the case of robots, they all need to gather and process some amount of information in order to accomplish their tasks. That information may range from a record of spoken queries to face recognition data to mapping data.
But there’s no clear agreement yet as to how robots should treat our data. There’s not even agreement on who’s responsible for it: Robot owners? Manufacturers? Programmers?
In industrial settings, privacy concerns center on hacking, trade secrets, and performance data. In the medical field, it doesn’t take much imagination to see the potential ethical issues if, say, the data surgical robots gather ends up where it doesn’t belong.
At the moment, the most timely parallel the field of robotics has, from a privacy and cybersecurity standpoint, is drones. However, even though drones are seemingly everywhere, we’re still a long way off from seeing any meaningful laws or industry self-regulation.
Bottom line? Robot manufacturers can learn from the software industry, where companies tend to have a bias toward shipping a feature rather than securing it. We can and must do better with robots than we are, say, with social media.
These are all significant challenges and overcoming them isn’t easy. Yet we’re working on them. And not just us at Misty Robotics. Motivated, creative people in garages, universities, and global corporate research labs are all looking for the path forward. We’re pretty confident that progress is going to happen in leaps and bounds over the next few years. And maybe when it does, our own personal robot will safely roll up to (not over) our toes, examine our face to see whether we’re paying attention, and quietly, privately whisper: Thank you.