We think the magic of developing for Misty is in combining her capabilities to solve problems in the physical space. To show this in action, we filmed an example of how we might code Misty to help out the owner of a local distillery. This business owner needed a way to monitor the temperature of his equipment when he couldn’t be around, and we felt Misty had the right skills for the job.
The distillery example combines capabilities like mapping, navigation, image capture, data collection, and communication with third-party APIs into a single “roam and collect” skill. For this post, we’ll walk through (at an abstracted level) the process of piecing those capabilities together.
First, a job like this requires Misty to follow a predetermined route through her workspace. Along the way, she needs to pause, gather data, and send it off to the cloud. Doing so requires that she have a detailed internal map of the environment, so we should start by showing Misty around her new workplace.
Navigating the distillery
Before we can code the skill, we need to help Misty build a map of the distillery. We do this by calling her mapping commands and driving her (programmatically or manually) around her new environment.
When mapping, Misty uses her depth sensor to generate an array of data called an occupancy grid. The occupancy grid is a two-dimensional matrix wherein each element represents a cell of space on Misty’s map. The value of each element tells Misty (and her developers) whether that part of the map is traversable, occupied by an object, or unexplored.
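To make that concrete, here's a toy occupancy grid in Python. The numeric cell codes follow the convention we believe Misty's SLAM system uses (0 for unknown, 1 for open, 2 for occupied, 3 for covered), but treat the exact values as an assumption and check the current docs:

```python
# A toy occupancy grid: each cell value encodes what Misty knows
# about that patch of floor. The codes below are our reading of
# Misty's convention (0 = unknown, 1 = open, 2 = occupied,
# 3 = covered) -- verify against the docs before relying on them.
UNKNOWN, OPEN, OCCUPIED, COVERED = 0, 1, 2, 3

occupancy_grid = [
    [OPEN,    OPEN,     OCCUPIED],
    [OPEN,    OCCUPIED, OCCUPIED],
    [UNKNOWN, OPEN,     OPEN],
]

def is_traversable(grid, x, y):
    """True if the cell at column x, row y is known open floor."""
    return grid[y][x] == OPEN

print(is_traversable(occupancy_grid, 0, 0))  # True: open floor
print(is_traversable(occupancy_grid, 2, 0))  # False: occupied
```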
To have Misty follow a route, we calculate the X and Y position of cells in the occupancy grid. When we write the code for our skill, we pass these coordinates into Misty’s path driving commands to have her navigate from one location to the next.
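As a sketch of what that looks like from outside the robot, here's how an external Python program might serialize a list of grid cells and hand them to a path-driving endpoint. The endpoint name and the "x:y" path format are assumptions drawn from our reading of the API, so double-check them; only the serialization helper runs here:

```python
import json
import urllib.request

def format_path(waypoints):
    """Serialize (x, y) grid cells into the "x1:y1,x2:y2,..." string
    we believe Misty's path-driving command expects (an assumption)."""
    return ",".join(f"{x}:{y}" for x, y in waypoints)

def follow_path(robot_ip, waypoints):
    """POST the route to Misty. The /api/drive/path endpoint name is
    an assumption -- confirm it in Misty's current API reference."""
    body = json.dumps({"Path": format_path(waypoints)}).encode()
    req = urllib.request.Request(
        f"http://{robot_ip}/api/drive/path",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Three stops along the distillery route, as occupancy-grid cells:
route = [(10, 4), (10, 12), (3, 12)]
print(format_path(route))  # 10:4,10:12,3:12
```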
In the distillery example, we need Misty to follow the same predetermined path each time she collects data. But what if the distillery owner gets distracted in the middle of a task and sets something down in the middle of Misty’s route? Misty doesn’t generate a new map unless directed to, and the new obstacle will be entirely unexpected. When that happens (as it inevitably will), Misty will use her time-of-flight sensors for ad-hoc obstacle avoidance.
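Misty makes that halt decision on her own, but the core of it is easy to picture. A minimal sketch, with an assumed 0.25-meter safety threshold:

```python
STOP_DISTANCE_M = 0.25  # assumed safety threshold, in meters

def should_halt(tof_reading_m):
    """Halt decision for a single forward time-of-flight reading.
    A sensor that sees nothing may report None or a large distance."""
    return tof_reading_m is not None and tof_reading_m < STOP_DISTANCE_M

print(should_halt(0.12))  # True  -- something set down in the path
print(should_halt(1.80))  # False -- clear floor ahead
```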
After showing Misty around (and warning her about unexpected obstacles in the workroom), we’re ready to teach her the most important part of her new job: collecting and analyzing data.
Getting by with a little help from the cloud
The distillery example requires that each time Misty pauses on her route, she tilts her head back and snaps a photo of a temperature gauge. She’ll know exactly where to stop and how to position her head because the values for those actions will be coded into her skill. We may need to run through her path a few times before we know precisely what those values should be, and we like to consider these adjustments a form of constructive criticism for our new robot helper.
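One way to capture those tuned values is a simple table of stops. The field names, labels, and pitch units below are our own convention for illustration, not a Misty API:

```python
# Each stop pairs a map cell with the head pose needed to frame the
# gauge at that station. Every value here is a placeholder we'd tune
# by running the route a few times.
STOPS = [
    {"cell": (10, 4),  "head_pitch_deg": -25, "label": "mash tun"},
    {"cell": (10, 12), "head_pitch_deg": -40, "label": "still"},
    {"cell": (3, 12),  "head_pitch_deg": -15, "label": "condenser"},
]

def pose_for(label):
    """Look up the tuned head pitch for a named station."""
    for stop in STOPS:
        if stop["label"] == label:
            return stop["head_pitch_deg"]
    raise KeyError(label)

print(pose_for("still"))  # -40
```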
The next step is preparing Misty to handle all the data she’ll be collecting. Our method here is to have Misty send each picture to Microsoft Azure for analysis. In the video, we use Cognitive Services to extract the relevant temperature data from the image, and then we write that information to an Azure database for later access.
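Here's a rough Python sketch of that round trip. The OCR URL shape, header, and response fields follow Azure's public Computer Vision REST API as we understand it (check your region and API version), and the gauge text and regex are invented for illustration:

```python
import json
import re
import urllib.request

def ocr_gauge_photo(image_bytes, endpoint, key):
    """Send a gauge photo to Azure's OCR endpoint and join the
    recognized words into one string. Endpoint path and the
    Ocp-Apim-Subscription-Key header follow Azure's REST API."""
    req = urllib.request.Request(
        f"{endpoint}/vision/v3.0/ocr",
        data=image_bytes,
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/octet-stream"},
    )
    result = json.loads(urllib.request.urlopen(req).read())
    return " ".join(
        word["text"]
        for region in result.get("regions", [])
        for line in region["lines"]
        for word in line["words"]
    )

def extract_temperature(ocr_text):
    """Pull the first Fahrenheit-looking value out of the OCR text.
    The pattern assumes gauges read like "173 F" or "173.5°F"."""
    match = re.search(r"(\d{2,3}(?:\.\d)?)\s*°?\s*F\b", ocr_text)
    return float(match.group(1)) if match else None

# Sample text as the OCR service might return it (invented):
print(extract_temperature("MASH TUN 173°F OK"))  # 173.0
```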
If we wrapped up development at this stage, our business owner would have a pretty capable helper. We could rest assured that Misty’s able to collect the necessary data and store it in the cloud for remote access. But the reason we’re having Misty collect all that data in the first place is to give our distiller confidence that everything is running smoothly while he’s away. And if he’s away, he might not be able to check that data for the details he needs.
We can help him out by giving Misty the responsibility to let him know when there’s a problem. With a few lines of code, we can teach Misty the acceptable value range of the numbers she collects. And when she catches a temperature outside that range, we can have her call for help by sending an SMS to the distiller via a web request to a service like Twilio.
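Here's what that check and alert might look like from a Python program. The temperature range and credentials are placeholders, and while the Messages endpoint follows Twilio's public REST API, treat this as a sketch rather than production code:

```python
import base64
import urllib.parse
import urllib.request

SAFE_RANGE_F = (150.0, 185.0)  # assumed acceptable temperatures

def out_of_range(temp_f, low=SAFE_RANGE_F[0], high=SAFE_RANGE_F[1]):
    """True when a reading falls outside the acceptable band."""
    return not (low <= temp_f <= high)

def send_sms_alert(account_sid, auth_token, from_num, to_num, temp_f):
    """POST an alert to Twilio's Messages endpoint; the URL and form
    fields follow Twilio's REST API, credentials are placeholders."""
    url = (f"https://api.twilio.com/2010-04-01/Accounts/"
           f"{account_sid}/Messages.json")
    body = urllib.parse.urlencode({
        "From": from_num,
        "To": to_num,
        "Body": f"Misty here: a gauge reads {temp_f} F, "
                f"outside the safe range.",
    }).encode()
    auth = base64.b64encode(
        f"{account_sid}:{auth_token}".encode()).decode()
    req = urllib.request.Request(
        url, data=body, headers={"Authorization": f"Basic {auth}"})
    return urllib.request.urlopen(req)

print(out_of_range(201.5))  # True  -- time to text the distiller
print(out_of_range(172.0))  # False -- all is well
```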
Investigating from afar
Let’s imagine our business owner gets a text from Misty about a concerning temperature reading. Our example shows a prototyped mobile app that lets him drive Misty from a remote location and tap into her video stream, so he can see what Misty sees and evaluate the situation for himself.
At the time of writing, Misty’s video streaming is in active development (you can see the capabilities roadmap on the Misty Robotics website). When it’s ready for developers, you’ll be able to use video streaming alongside the other capabilities featured in this example. In addition to being accessible from the code you write that runs on Misty herself, most of these features are also available through Misty’s REST API and WebSocket interface. This gives developers a way to write programs that send commands to Misty from external devices, or to create their own graphical interfaces for driving Misty and sending her commands.
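For example, an external Python script could send Misty a drive command over her local REST API. The `/api/drive` endpoint and velocity fields match Misty's published API as we understand it, but verify against the current reference before using:

```python
import json
import urllib.request

def drive_payload(linear, angular):
    """Build the JSON body for a drive command; velocities are
    percentages of Misty's max linear and angular speed."""
    return {"LinearVelocity": linear, "AngularVelocity": angular}

def drive(robot_ip, linear, angular):
    """POST a drive command to Misty over her local REST API."""
    req = urllib.request.Request(
        f"http://{robot_ip}/api/drive",
        data=json.dumps(drive_payload(linear, angular)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# e.g. drive("192.168.1.40", 20, 0) creeps forward at 20% speed
print(drive_payload(20, 0))
```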
The right tool for the job
In designing Misty, we worked hard to give her a broad range of advanced capabilities. And while it’s true she won’t be great at everything (it may be a while before she can play Bach on the piano), there’s a lot she can do well, especially in the section of the task space requiring independent mobility, perception, and communication. We’re not going to try to guess all the jobs Misty will have, but we’re confident it won’t be any single capability that makes her the right robot for a given task. Instead, it will be the creativity of the developers who weave those capabilities together.