Ask a child to design a robot, and they'll produce a drawing that looks a little like you or me: the parts may be gray and boxy, but it will have two arms, two legs, and a head (probably with an antenna coming out of the top). From the earliest days of robotics, the human form has seemed like a natural starting point. One of the best places to draw inspiration for robotic design, however, is the world of insects, arachnids, snails, and slugs.
NASA may be known for sending men to the moon, establishing the International Space Station, and planning for a base on Mars—but apart from astronauts, its best-known spokesmen aren’t men at all—they’re robots.
Rovers like Spirit, Opportunity, and Curiosity, and landers like Viking and Philae, make the perfect ambassadors to hostile, freezing, and nearly airless environments. Not only do these explorers bring back valuable scientific data from Earth's planetary neighbors, they also make perfect showcases for practical robotics.
A variety of robots and tools are being created, each to solve one specific piece of the climate change crisis. Together, the engineers who develop them and the bots doing the work can make a difference, lessening the impact of human activity on the planet.
Finding the right approach to automatic speech recognition (ASR) has been a critical step in Jibo's design: get it right, and the experience will be great; get it wrong, and it could seriously detract from the interactions Jibo's owners have with him. In this interview, Jibo's Head of Advanced Conversational Technologies, Roberto Pieraccini, talks about the direction the company's engineering team has taken with ASR.
Jibo may be a robot, but the last thing the team wants is for Jibo to sound like a robot. In these two video interviews, Jibo's design team talks about how they selected Jibo's voice and how it then manifests as the voice you'll hear when you interact with Jibo, as well as the engineering challenges of Text-To-Speech (TTS) technology and how the team solved them.
Through the Jibo SDK, developers have high-level access to Jibo's audio processing, visual processing, persona and interaction, and movement capabilities. The SDK gives developers the tools to build a wide range of Jibo Skills, for personal enjoyment or as a business opportunity. We sincerely believe developers have a huge role to play in extending Jibo's personality and capabilities, making him the social robot people look forward to welcoming into their lives.
Our most recent video update comes from our VP of Engineering, Andy Atkins. Take a sneak peek inside the minds of our engineers as they finish our newest Jibo P2s, and find out what kinds of challenges and hurdles we have overcome in the past year.
Ever wonder why Jibo’s eye was designed as a ball? And why does he have only one eye instead of two? Check out this video with Jibo’s Lead Designer & Animator, Fardad Faridi, as he briefly describes the nature and design of Jibo’s eye.
What is a social robot supposed to look like? We have asked ourselves this question many times during the last year. HUGE design was fortunate to be partnered with the Jibo team and tasked with creating the industrial design and look and feel for Jibo. At first, it seemed that our lack of experience designing anything remotely close to a robot might be a problem. We quickly learned, however, that this product needed to be unlike any existing robot, and that a fresh industrial design would be crucial to defining this new socially charged experience for users.
Ugobe created Pleo, a lifelike robot built on our "LifeOS" technology, which animates robots with movements similar to those of living creatures. After countless experiences of watching people transform into proud pet owners through interactions with Pleo, I am absolutely convinced that people want to connect with robots in a meaningful way that touches their hearts.
Social robots have everything that personal assistants have—speech, display, touch—but also a body that can move, a vision system that can recognize local environments, and microphones that can locate and focus on where sounds and speech are coming from. How will a social robot interact with speech among all the other modalities?
It is a fantastic time for technology. We live in a more connected world, and compared to even a few decades ago, we have vastly improved access to information and content, and have dramatically expanded our ability to connect and socialize with one another in interesting new ways, despite time and distance.