We’ve built the Misty II as a platform upon which developers can realize their dreams of robots performing useful tasks in their homes and offices. Misty II has it all, from high-quality sensors and robust locomotion to the ability to easily connect to and consume 3rd-party services and APIs. And with Microsoft Azure, it’s easier than ever to bring in the smarts – those elements that orchestrate the connections between performing physical tasks, collecting and reporting data, and then actually using the data to solve problems and improve situations.
As developers, we often depend on other people (particularly DevOps folks) to set up access to cloud services. While the enterprising among us are able to unblock themselves, many of us end up struggling on our own, creating variations of databases and cloud instances like companystaging2 until we stumble onto the correct configuration. Thankfully, with serverless offerings like those from Microsoft Azure becoming more prevalent and easier than ever to create, it's become trivial to quickly build and iterate on robust functionality.
That’s all well and good for projects like web pages and mobile applications, but how does this relate to Misty? Whether you’re reserving Misty’s processing power for other tasks or leveraging the capabilities of a third-party product, there are a lot of situations where a developer might want to offload computationally heavy processes from Misty to the cloud. And, if you’re like me, you probably prefer not to maintain your own virtual machines or databases for those tasks. That’s where Microsoft Azure comes in.
Microsoft Azure makes it pretty straightforward to set up a lot of the heavy lifting you’ll probably want to try when you code Misty. You can leverage Azure for cognitive tasks like sentiment analysis, text-to-speech, scene description, and text extraction – and that just scratches the surface. As an example of how this works, we wrote a skill for Misty that uses serverless functions and cognitive services from Azure to give Misty the ability to read text and describe scenes.
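To make that concrete, here is a minimal sketch of the response-handling half of such a skill. It assumes the JSON shapes returned by the v2.x Computer Vision REST operations ("Describe Image" and "OCR"); field names may differ in other API versions, and the helper names are our own, not part of any SDK.

```python
# Hypothetical helpers for parsing Azure Computer Vision responses.
# The JSON shapes below follow the v2.x "Describe Image" and "OCR"
# REST operations; field names may differ in other API versions.

def top_caption(describe_response: dict) -> str:
    """Return the highest-confidence caption from a Describe Image response."""
    captions = describe_response.get("description", {}).get("captions", [])
    if not captions:
        return ""
    best = max(captions, key=lambda c: c.get("confidence", 0.0))
    return best["text"]

def extracted_text(ocr_response: dict) -> str:
    """Flatten an OCR response (regions -> lines -> words) into plain text."""
    lines_out = []
    for region in ocr_response.get("regions", []):
        for line in region.get("lines", []):
            lines_out.append(" ".join(w["text"] for w in line.get("words", [])))
    return "\n".join(lines_out)
```

In a skill like ours, the caption string would be handed back to Misty for text-to-speech, and the OCR text would be what Misty "reads" aloud.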
The Seamless Cloud
There is a symbiotic relationship between leveraging cloud capabilities and developing skills for Misty. The art is in knowing how and when to use each technology. We’ve discussed the advantage of offloading any heavy processing requirements to the cloud, returning only the results of those calculations to your robot. So what does having a robot physically present in the environments we care about offer the cloud in return?
Let’s brainstorm for a moment. Say you’re training a supervised ML model. Having an intelligent node physically in the environment to gather data is a boon to any practitioner. In this scenario, Misty can be an active extension of an Azure service, gathering and loading data which you can then annotate or modify in Azure’s Machine Learning Studio. We can simultaneously gather, process, and enhance data from a device that’s operating in the physical environment we care about. This distinction makes Misty a hugely versatile tool for developers looking to leverage the potential of robotics in their own domains of knowledge.
In a sense, Misty is the living definition of the intelligent edge, the concept that Satya Nadella presented at the 2018 BUILD conference. That is, she enables a seamless cloud-to-edge computing experience. To grant Misty the gift of literacy, we use Misty’s own local computing capabilities to capture sensor data and kick things off, serverless functions in the cloud for data processing and web requests, and Azure Cognitive Services for enhancements to the data that Misty collects. This kind of combination opens enormous potential that developers can experiment with right away – no DevOps team required.
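The middle step of that pipeline, the serverless function forwarding Misty’s image to Cognitive Services, can be sketched as follows. This is a hedged illustration, not our exact implementation: the region, API version, and operation path mirror the v2.x Computer Vision REST API, and the function name and defaults are placeholders for your own resource.

```python
import urllib.request

# Sketch of the request a serverless function might forward to Azure
# Cognitive Services after receiving an image from Misty. Endpoint
# region, API version, and operation path are assumptions based on the
# v2.x Computer Vision REST API; substitute your own resource's values.

def build_vision_request(image_bytes: bytes, subscription_key: str,
                         operation: str = "describe",
                         region: str = "westus") -> urllib.request.Request:
    """Compose the POST request for a Computer Vision operation
    ("describe" for scene description, "ocr" for text extraction)."""
    url = (f"https://{region}.api.cognitive.microsoft.com"
           f"/vision/v2.0/{operation}")
    return urllib.request.Request(
        url,
        data=image_bytes,  # raw image bytes captured by Misty's camera
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )
```

The serverless function would send this request with urllib (or any HTTP client), parse the JSON response, and return just the caption or extracted text to Misty, keeping the heavy lifting off the robot.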
Interested in learning how we taught Misty to read? We’re working on a tutorial to show how you can use this functionality with your own Misty. Join the community and be the first to know when it’s available. We can’t wait to see how you’ll extend Misty’s capabilities next!