I have used Gemini and OpenAI on projects (looking at and creating Arduino circuits, home repair projects, etc.), and incorporating vision is a big step for robots. Since Misty already has great hardware, merging the two would be incredible. Even if it's just Misty beside you sending an image of what you're doing to ChatGPT and using that analysis to help, that would be a great first step! Then, moving on from there, the next step would be using Misty's vision together with a new 3D-printed arm, with movements calculated from what Misty sees. Thoughts on how we can accomplish this without it being too cumbersome?
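To make the first step concrete, here is a rough, untested sketch of what I'm imagining: grab a photo from Misty over her REST API and forward it to a vision-capable OpenAI model. I'm assuming the `GET /api/cameras/rgb` endpoint with `base64=true` returns the image as a base64 string (please double-check against the current Misty REST API docs), and the IP address, key, and prompt below are just placeholders.

```python
import requests

MISTY_IP = "10.0.0.123"        # placeholder: your Misty's IP on the local network
OPENAI_API_KEY = "sk-..."      # placeholder: your OpenAI API key

# 1. Ask Misty for a photo from her RGB camera.
#    Assumption: GET /api/cameras/rgb with base64=true returns the image
#    as a base64 string under result["base64"] -- verify in the Misty docs.
photo = requests.get(
    f"http://{MISTY_IP}/api/cameras/rgb",
    params={"base64": "true", "width": 1280, "height": 960},
    timeout=10,
).json()["result"]

# 2. Send the image plus a question to a vision-capable OpenAI model.
reply = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
    json={
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What am I working on here, and what should I check next?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{photo['base64']}"}},
            ],
        }],
    },
    timeout=60,
)

print(reply.json()["choices"][0]["message"]["content"])
```

From there, the model's reply could presumably be routed back through Misty's text-to-speech so she reads the advice aloud, and eventually that same analysis could feed the calculations for the 3D-printed arm.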