#435791 To Fly Solo, Racing Drones Have a Need ...

Drone racing’s ultimate vision of quadcopters weaving nimbly through obstacle courses has attracted far less excitement and investment than self-driving cars aimed at reshaping ground transportation. But the U.S. military and defense industry are betting on autonomous drone racing as the next frontier for developing AI so that it can handle high-speed navigation within tight spaces without human intervention.

The autonomous drone challenge requires split-second decision-making with six degrees of freedom instead of a car’s mere two degrees of road freedom. One research team developing the AI necessary for controlling autonomous racing drones is the Robotics and Perception Group at the University of Zurich in Switzerland. In late May, the Swiss researchers were among nine teams revealed to be competing in the two-year AlphaPilot open innovation challenge sponsored by U.S. aerospace company Lockheed Martin. The winning team will walk away with up to $2.25 million for beating other autonomous racing drones and a professional human drone pilot in head-to-head competitions.

“I think it is important to first point out that having an autonomous drone to finish a racing track at high speeds or even beating a human pilot does not imply that we can have autonomous drones [capable of] navigating in real-world, complex, unstructured, unknown environments such as disaster zones, collapsed buildings, caves, tunnels or narrow pipes, forests, military scenarios, and so on,” says Davide Scaramuzza, a professor of robotics and perception at the University of Zurich and ETH Zurich. “However, the robust and computationally efficient state estimation algorithms, control, and planning algorithms developed for autonomous drone racing would represent a starting point.”

The nine teams that made the cut—from a pool of 424 AlphaPilot applicants—will compete in four 2019 racing events organized under the Drone Racing League’s Artificial Intelligence Robotic Racing Circuit, says Keith Lynn, program manager for AlphaPilot at Lockheed Martin. To ensure an apples-to-apples comparison of each team’s AI secret sauce, each AlphaPilot team will upload its AI code into identical, specially-built drones that have the NVIDIA Xavier GPU at the core of the onboard computing hardware.

“Lockheed Martin is offering mentorship to the nine AlphaPilot teams to support their AI tech development and innovations,” says Lynn. The company “will be hosting a week-long Developers Summit at MIT in July, dedicated to workshopping and improving AlphaPilot teams’ code,” he added. He notes that each team will retain the intellectual property rights to its AI code.

The AlphaPilot challenge takes inspiration from older autonomous drone racing events hosted by academic researchers, Scaramuzza says. He credits Hyungpil Moon, a professor of robotics and mechanical engineering at Sungkyunkwan University in South Korea, for having organized the annual autonomous drone racing competition at the International Conference on Intelligent Robots and Systems since 2016.

It’s no easy task to create and train AI that can perform high-speed flight through complex environments by relying on visual navigation. One big challenge comes from how drones can accelerate sharply, take sharp turns, fly sideways, do zig-zag patterns and even perform back flips. That means camera images can suddenly appear tilted or even upside down during drone flight. Motion blur may occur when a drone flies very close to structures at high speeds and camera pixels collect light from multiple directions. Both cameras and visual software can also struggle to compensate for sudden changes between light and dark parts of an environment.

To lend AI a helping hand, Scaramuzza’s group recently published a drone racing dataset that includes realistic training data taken from a drone flown by a professional pilot in both indoor and outdoor spaces. The data, which includes complicated aerial maneuvers such as back flips, flight sequences that cover hundreds of meters, and flight speeds of up to 83 kilometers per hour, was presented at the 2019 IEEE International Conference on Robotics and Automation.

The drone racing dataset also includes data captured by the group’s special bioinspired event cameras that can detect changes in motion on a per-pixel basis within microseconds. By comparison, ordinary cameras need milliseconds (each millisecond being 1,000 microseconds) to compare motion changes in each image frame. The event cameras have already proven capable of helping drones nimbly dodge soccer balls thrown at them by the Swiss lab’s researchers.
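Event-camera output is easy to misread as just very fast frame differencing, so here is a minimal, hypothetical Python sketch of the underlying idea: each pixel fires an independent, signed "event" whenever its log brightness changes by more than a contrast threshold. It only illustrates the per-pixel, polarity-carrying nature of the data; it is not the Zurich group's sensor pipeline, and a real event camera reports events asynchronously with microsecond timestamps rather than by comparing two frames.

import numpy as np

CONTRAST_THRESHOLD = 0.15  # hypothetical log-brightness change that triggers an event

def frames_to_events(prev_frame, curr_frame, timestamp):
    """Return (x, y, t, polarity) tuples for pixels whose log brightness
    changed by more than the contrast threshold between two frames."""
    log_prev = np.log(prev_frame.astype(np.float64) + 1e-3)
    log_curr = np.log(curr_frame.astype(np.float64) + 1e-3)
    delta = log_curr - log_prev
    ys, xs = np.nonzero(np.abs(delta) > CONTRAST_THRESHOLD)
    polarities = np.sign(delta[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), timestamp, int(p)) for x, y, p in zip(xs, ys, polarities)]

# Usage (hypothetical): events = frames_to_events(frame_k, frame_k_plus_1, t_k_plus_1)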

The Swiss group’s work on the racing drone dataset received funding in part from the U.S. Defense Advanced Research Projects Agency (DARPA), which acts as the U.S. military’s special R&D arm for more futuristic projects. Specifically, the funding came from DARPA’s Fast Lightweight Autonomy program that envisions small autonomous drones capable of flying at high speeds through cluttered environments without GPS guidance or communication with human pilots.

Such speedy drones could serve as military scouts checking out dangerous buildings or alleys. They could also someday help search-and-rescue teams find people trapped in semi-collapsed buildings or lost in the woods. Being able to fly at high speed without crashing into things also makes a drone more efficient at all sorts of tasks by making the most of limited battery life, Scaramuzza says. After all, most drone battery life gets used up by the need to hover in flight and doesn’t get drained much by flying faster.

Even if AI manages to conquer the drone racing obstacle courses, that would be the end of the beginning of the technology’s development. What would still be required? Scaramuzza singled out low-visibility conditions involving smoke, dust, fog, rain, snow, fire, and hail as some of the biggest challenges for vision-based algorithms and AI in complex real-life environments.

“I think we should develop and release datasets containing smoke, dust, fog, rain, fire, etc. if we want to allow using autonomous robots to complement human rescuers in saving people’s lives after an earthquake or natural disaster in the future,” Scaramuzza says.

#435779 This Robot Ostrich Can Ride Around on ...

Proponents of legged robots say that they make sense because legs are often required to go where humans go. Proponents of wheeled robots say, “Yeah, that’s great, but watch how fast and efficient my robot is compared to yours.” Some robots try to get the best of both wheels and legs with hybrid designs like whegs or wheeled feet, but a simpler and more versatile solution is to do what humans do, and just take advantage of wheels when you need them.

We’ve seen a few experiments with this. The University of Michigan managed to convince Cassie to ride a Segway, with mostly positive (but occasionally quite negative) results. Segways and hoverboard-like systems can provide wheeled mobility for legged robots over flat terrain, but they can’t handle things like stairs, which is kind of the whole point of having a robot with legs anyway.

Image: UC Berkeley

From left, a Segway, a hovercraft, and hovershoes, with complexity in terms of user control increasing from left to right.

At UC Berkeley’s Hybrid Robotics Lab, led by Koushil Sreenath, researchers have taken things a step further. They are teaching their Cassie bipedal robot (called Cassie Cal) to wheel around on a pair of hovershoes. Hovershoes are like hoverboards that have been chopped in half, resulting in a pair of motorized single-wheel skates. You balance on the skates, and control them by leaning forwards and backwards and left and right, which causes each skate to accelerate or decelerate in an attempt to keep itself upright. It’s not easy to get these things to work, even for a human, but by adding a sensor package to Cassie the UC Berkeley researchers have managed to get it to zip around campus fully autonomously.
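The self-righting behavior described above—each skate speeding up or slowing down to stay under the rider—is essentially inverted-pendulum balancing. As a rough illustration only, here is a minimal PD-style sketch in Python with made-up gains and limits; it is not the Berkeley controller, which has to coordinate the full Cassie-plus-hovershoe dynamics rather than a single wheel.

def balance_command(pitch_rad, pitch_rate_rad_s, kp=25.0, kd=3.0, max_accel=4.0):
    """Return a wheel acceleration (m/s^2) that drives the platform's pitch back to zero.
    Leaning forward (positive pitch) commands forward acceleration, which rolls
    the wheel back under the center of mass."""
    accel = kp * pitch_rad + kd * pitch_rate_rad_s
    return max(-max_accel, min(max_accel, accel))  # clamp to the motor's (assumed) limit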

Remember, Cassie is operating autonomously here—it’s performing vSLAM (with an Intel RealSense) and doing all of its own computation onboard in real time. Watching it jolt across that cracked sidewalk is particularly impressive, especially considering that it only has pitch control over its ankles and can’t roll its feet to maintain maximum contact with the hovershoes. But you can see the advantage that this particular platform offers to a robot like Cassie, including the ability to handle stairs. Stairs in one direction, anyway.

It’s a testament to the robustness of UC Berkeley’s controller that they were willing to let the robot operate untethered and outside, and it sounds like they’re thinking long-term about how legged robots on wheels would be real-world useful:

Our feedback control and autonomous system allow for swift movement through urban environments to aid in everything from food delivery to security and surveillance to search and rescue missions. This work can also help with transportation in large factories and warehouses.

For more details, we spoke with the UC Berkeley students (Shuxiao Chen, Jonathan Rogers, and Bike Zhang) via email.

IEEE Spectrum: How representative of Cassie’s real-world performance is what we see in the video? What happens when things go wrong?

Cassie’s real-world performance is similar to what we see in the video. Cassie can ride the hovershoes successfully all around the campus. Our current controller allows Cassie to robustly ride the hovershoes and rejects various perturbations. At present, one of the failure modes is when the hovershoe rolls to the side—this happens when it goes sideways down a step or encounters a large obstacle on one side of it, causing it to roll over. Under these circumstances, Cassie doesn’t have sufficient control authority (due to the thin narrow feet) to get the hovershoe back on its wheel.

The Hybrid Robotics Lab has been working on robots that walk over challenging terrain—how do wheeled platforms like hovershoes fit in with that?

Surprisingly, this research is related to our prior work on walking on discrete terrain. While locomotion using legs is efficient when traveling over rough and discrete terrain, wheeled locomotion is more efficient when traveling over flat continuous terrain. Enabling legged robots to ride on various micro-mobility platforms will offer multimodal locomotion capabilities, improving the efficiency of locomotion over various terrains.

Our current research furthers the locomotion ability for bipedal robots over continuous terrains by using a wheeled platform. In the long run, we would like to develop multi-modal locomotion strategies based on our current and prior work to allow legged robots to robustly and efficiently locomote in our daily life.

Photo: UC Berkeley

In their experiments, the UC Berkeley researchers say Cassie proved quite capable of riding the hovershoes over rough and uneven terrain, including going down stairs.

How long did it take to train Cassie to use the hovershoes? Are there any hovershoe skills that Cassie is better at than an average human?

We spent about eight months to develop our whole system, including a controller, a path planner, and a vision system. This involved developing mathematical models of Cassie and the hovershoes, setting up a dynamical simulation, figuring out how to interface and communicate with various sensors and Cassie, and doing several experiments to slowly improve performance. In contrast, a human with a good sense of balance needs a few hours to learn to use the hovershoes. A human who has never used skates or skis will probably need a longer time.

A human can easily turn in place on the hovershoes, while Cassie cannot do this motion currently due to our algorithm requiring a non-zero forward speed in order to turn. However, Cassie is much better at riding the hovershoes over rough and uneven terrain including riding the hovershoes down some stairs!

What would it take to make Cassie faster or more agile on the hovershoes?

While Cassie can currently move at a decent pace on the hovershoes and navigate obstacles, Cassie’s ability to avoid obstacles at rapid speeds is constrained by the sensing, the controller, and the onboard computation. To enable Cassie to dynamically weave around obstacles at high speeds exhibiting agile motions, we need to make progress on different fronts.

We need planners that take into account the entire dynamics of the Cassie-Hovershoe system and rapidly generate dynamically-feasible trajectories; we need controllers that tightly coordinate all the degrees-of-freedom of Cassie to dynamically move while balancing on the hovershoes; we need sensors that are robust to motion-blur artifacts caused due to fast turns; and we need onboard computation that can execute our algorithms at real-time speeds.

What are you working on next?

We are working on enabling more aggressive movements for Cassie on the hovershoes by fully exploiting Cassie’s dynamics. We are working on approaches that enable us to easily go beyond hovershoes to other challenging micro-mobility platforms. We are working on enabling Cassie to step onto and off from wheeled platforms such as hovershoes. We would like to create a future of multi-modal locomotion strategies for legged robots to enable them to efficiently help people in our daily life.

“Feedback Control for Autonomous Riding of Hovershoes by a Cassie Bipedal Robot,” by Shuxiao Chen, Jonathan Rogers, Bike Zhang, and Koushil Sreenath from the Hybrid Robotics Lab at UC Berkeley, has been submitted to IEEE Robotics and Automation Letters with the option to be presented at the 2019 IEEE-RAS International Conference on Humanoid Robots.

#435775 Jaco Is a Low-Power Robot Arm That Hooks ...

We usually think of robots as taking the place of humans in various tasks, but robots of all kinds can also enhance human capabilities. This may be especially true for people with disabilities. And while the Cybathlon competition showed what's possible when cutting-edge research robotics is paired with expert humans, that competition isn't necessarily reflective of the kind of robotics available to most people today.

Kinova Robotics's Jaco arm is an assistive robotic arm designed to be mounted on an electric wheelchair. With six degrees of freedom plus a three-fingered gripper, the lightweight carbon fiber arm is frequently used in research because it's rugged and versatile. But from the start, Kinova created it to add autonomy to the lives of people with mobility constraints.

Earlier this year, Kinova shared the story of Mary Nelson, an 11-year-old girl with spinal muscular atrophy, who uses her Jaco arm to show her horse in competition. Spinal muscular atrophy is a neuromuscular disorder that impairs voluntary muscle movement, including muscles that help with respiration, and Mary depends on a power chair for mobility.

We wanted to learn more about how Kinova designs its Jaco arm, and what that means for folks like Mary, so we spoke with both Kinova and Mary's parents to find out how much of a difference a robot arm can make.

IEEE Spectrum: How did Mary interact with the world before having her arm, and what was involved in the decision to try a robot arm in general? And why then Kinova's arm specifically?

Ryan Nelson: Mary interacts with the world much like you and I do, she just uses different tools to do so. For example, she is 100 percent independent using her computer, iPad, and phone, and she prefers to use a mouse. However, she cannot move a standard mouse, so she connects her wheelchair to each device with Bluetooth to move the mouse pointer/cursor using her wheelchair joystick.

For years, we had a Manfrotto magic arm and super clamp attached to her wheelchair and she used that much like the robotic arm. We could put a baseball bat, paint brush, toys, etc. in the super clamp so that Mary could hold the object and interact as physically able children do. Mary has always wanted to be more independent, so we knew the robotic arm was something she must try. We had seen videos of the Kinova arm on YouTube and on their website, so we reached out to them to get a trial.

Can you tell us about the Jaco arm, and how the process of designing an assistive robot arm is different from the process of designing a conventional robot arm?

Nathaniel Swenson, Director of U.S. Operations — Assistive Technologies at Kinova: Jaco is our flagship robotic arm. Inspired by and named after our CEO's uncle, Jacques “Jaco” Forest, it was designed as assistive technology with power wheelchair users in mind.

The primary differences between Jaco and our other robots, such as the new Gen3, which was designed to meet the needs of academic and industry research teams, are speed and power consumption. Other robots such as the Gen3 can move faster and draw slightly more power because they aren't limited by the battery size of power wheelchairs. Depending on the use case, they might not interact directly with a human being in the research setting and can safely move more quickly. Jaco is designed to move at safe speeds and make direct contact with the end user and draw very little power directly from their wheelchair.

The most important consideration in the design process of an assistive robot is the safety of the end user. Jaco users operate their robots through their existing drive controls to assist them in daily activities such as eating, drinking, and opening doors, and they don't have to worry about the robot draining their chair's batteries throughout the day. The elegant design that results from meeting the needs of our power chair users has benefited subsequent iterations of products, such as the Gen3, as well: Kinova's robots are lightweight, extremely efficient in their power consumption, and safe for direct human-robot interaction. This is not true of conventional industrial robots.

What was the learning process like for Mary? Does she feel like she's mastered the arm, or is it a continuous learning process?

Ryan Nelson: The learning process was super quick for Mary. However, she amazes us every day with the new things that she can do with the arm. Literally within minutes of installing the arm on her chair, Mary had it figured out and was shaking hands with the Kinova rep. The control of the arm is super intuitive and the Kinova reps say that SMA (Spinal Muscular Atrophy) children are perfect users because they are so smart—they pick it up right away. Mary has learned to do many fine motor tasks with the arm, from picking up small objects like a pencil or a ruler, to adjusting her glasses on her face, to doing science experiments.

Photo: The Nelson Family

Mary uses a headset microphone to amplify her voice, and she will use the arm and finger to adjust the microphone in front of her mouth after she is done eating (also a task she mastered quickly with the arm). Additionally, Mary will use the arm to reach down and adjust her feet or legs by grabbing them with the arm and moving them to a more comfortable position. All of these examples are things she never really asked us to do, but things she needed and just did on her own, with the help of the arm.

What is the most common feedback that you get from new users of the arm? How about from experienced users who have been using the arm for a while?

Nathaniel Swenson: New users always tell us how excited they are to see what they can accomplish with their new Jaco. From day one, they are able to do things that they have longed to do without assistance from a caregiver: take a drink of water or coffee, scratch an itch, push the button to open an “accessible” door or elevator, or even feed their baby with a bottle.

The most common feedback I hear from experienced users is that Jaco has changed their life. Our experienced users like Mary are rock stars: everywhere they go, people get excited to see what they'll do next. The difference between a new user and an experienced user could be as little as two weeks. People who operate power wheelchairs every day are already expert drivers and we just add a new “gear” to their chair: robot mode. It's fun to see how quickly new users master the intuitive Jaco control modes.

What changes would you like to see in the next generation of Jaco arm?

Ryan Nelson: Titanium fingers! Make it lift heavier objects, hold heavier items like a baseball bat, machine gun, flame thrower, etc., and Mary literally said this last night: “I wish the arm moved fast enough to play the piano.”

Nathaniel Swenson: I love the idea of titanium fingers! Jaco's fingers are made from a flexible polymer and designed to avoid harm. This allows the fingers to bend or dislocate, rather than break, but it also means they are not as durable as a material like titanium. Increased payload, the ability to manipulate heavier objects, requires increased power consumption. We've struck a careful balance between providing enough strength to accomplish most medically necessary Activities of Daily Living and efficient use of the power chair's batteries.

We take Isaac Asimov's Laws of Robotics pretty seriously. When we start to combine machine guns, flame throwers, and artificial intelligence with robots, I get very nervous!

I wish the arm moved fast enough to play the piano, too! I am also a musician and I share Mary's dream of an assistive robot that would enable her to make music. In the meantime, while we work on that, please enjoy this beautiful violin piece by Manami Ito and her one-of-a-kind violin prosthesis:

To what extent could more autonomy for the arm be helpful for users? What would be involved in implementing that?

Nathaniel Swenson: Artificial intelligence, machine learning, and deep learning will introduce greater autonomy in future iterations of assistive robots. This will enable them to perform more complex tasks that aren't currently possible, and enable them to accomplish routine tasks more quickly and with less input than the current manual control requires.

For assistive robots, implementation of greater autonomy involves a focus on end-user safety and improvements in the robot's awareness of its environment. Autonomous robots that work in close proximity with humans need vision: they must be able to see to avoid collisions, and haptic feedback tells the robot how much force it is exerting on objects. All of these technologies exist, but the largest obstacle to bringing them to the assistive technology market is to prove to the health insurance companies who will fund them that they are both safe and medically necessary.

#435748 Video Friday: This Robot Is Like a ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s been a while since we last spoke to Joe Jones, the inventor of Roomba, about his solar-powered, weed-killing robot, called Tertill, which he was launching as a Kickstarter project. Tertill is now available for purchase (US $300) and is shipping right now.

[ Tertill ]

Usually, we don’t post videos that involve drone use that looks to be either illegal or unsafe. These flights over the protests in Hong Kong are almost certainly both. However, it’s also a unique perspective on the scale of these protests.

[ Team BlackSheep ]

ICYMI: iRobot announced this week that it has acquired Root Robotics.

[ iRobot ]

This Boston Dynamics parody video went viral this week.

The CGI is good but the gratuitous violence—even if it’s against a fake robot—is a bit too much?

This is still our favorite Boston Dynamics parody video:

[ Corridor ]

Biomedical Engineering Department Head Bin He and his team have developed the first-ever successful non-invasive mind-controlled robotic arm to continuously track a computer cursor.

[ CMU ]

Organic chemists, prepare to meet your replacement:

Automated chemical synthesis carries great promises of safety, efficiency and reproducibility for both research and industry laboratories. Current approaches are based on specifically-designed automation systems, which present two major drawbacks: (i) existing apparatus must be modified to be integrated into the automation systems; (ii) such systems are not flexible and would require substantial re-design to handle new reactions or procedures. In this paper, we propose a system based on a robot arm which, by mimicking the motions of human chemists, is able to perform complex chemical reactions without any modifications to the existing setup used by humans. The system is capable of precise liquid handling, mixing, filtering, and is flexible: new skills and procedures could be added with minimum effort. We show that the robot is able to perform a Michael reaction, reaching a yield of 34%, which is comparable to that obtained by a junior chemist (undergraduate student in Chemistry).

[ arXiv ] via [ NTU ]

So yeah, ICRA 2019 was huge and awesome. Here are some brief highlights.

[ Montreal Gazette ]

For about US $5, this drone will deliver raw meat and beer to you if you live on an uninhabited island in Tokyo Bay.

[ Nikkei ]

The Smart Microsystems Lab at Michigan State University has a new version of their Autonomous Surface Craft. It’s autonomous, open source, and awfully hard to sink.

[ SML ]

As drone shows go, this one is pretty good.

[ CCTV ]

Here’s a remote controlled robot shooting stuff with a very large gun.

[ HDT ]

Over a period of three quarters (September 2018 through May 2019), we’ve had the opportunity to work with five graduating University of Denver students as they brought their idea for a Misty II arm extension to life.

[ Misty Robotics ]

If you wonder how it looks to inspect burners and superheaters of a boiler with an Elios 2, here you are! This inspection was performed by Svenska Elektrod in a peat-fired boiler for Vattenfall in Sweden. Enjoy!

[ Flyability ]

The newest Soft Robotics technology, mGrip mini fingers, is made for tight spaces, small packaging, and delicate items, giving limitless opportunities for your applications.

[ Soft Robotics ]

What if legged robots were able to generate dynamic motions in real time while interacting with a complex environment? Such technology would represent a significant step toward the deployment of legged systems in real-world scenarios. This means being able to replace humans in the execution of dangerous tasks and to collaborate with them in industrial applications.

This workshop aims to bring together researchers from all the relevant communities in legged locomotion, such as numerical optimization, machine learning (ML), model predictive control (MPC), and computational geometry, in order to chart the most promising methods to address the above-mentioned scientific challenges.

[ Num Opt Wkshp ]

Army researchers teamed with the U.S. Marine Corps to fly and test 3D-printed quadcopter prototypes at the Marine Corps Air Ground Combat Center in 29 Palms, California, recently.

[ CCDC ARL ]

Lex Fridman’s Artificial Intelligence podcast featuring Rosalind Picard.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per Sjöborg speaks with Christian Guttmann, executive director of the Nordic Artificial Intelligence Institute.

Christian Guttmann talks about AI and wanting to understand intelligence well enough to recreate it. Christian has been focusing on AI in healthcare and has recently started to communicate the opportunities and challenges of artificial intelligence to the general public. This is something that the host, Per Sjöborg, is also very passionate about. We also get to hear about the Nordic AI institute and the work it does to inform all parts of society about AI.

[ Robots in Depth ]

#435742 This ‘Useless’ Social Robot ...

The recent high-profile failures of some home social robots (and the companies behind them) have made it even more challenging than it was before to develop robots in that space. And it was challenging enough to begin with—making a robot that can autonomously interact with random humans in their homes over a long period of time for a price that people can afford is extraordinarily difficult. However, the massive amount of initial interest in robots like Jibo, Kuri, Vector, and Buddy proves that people do want these things, or at least think they do, and while that’s the case, there’s incentive for other companies to give social home robots a try.

One of those companies is Zoetic, founded in 2017 by Mita Yun and Jitu Das, both ex-Googlers. Their robot, Kiki, is more or less exactly what you’d expect from a social home robot: It’s cute, white, roundish, has big eyes, promises that it will be your “robot sidekick,” and is not cheap: It’s on Kickstarter for $800. Kiki is among what appears to be a sort of tentative second wave of social home robots, where designers have (presumably) had a chance to take everything that they learned from the social home robot pioneers and use it to make things better this time around.

Kiki’s Kickstarter video is, again, more or less exactly what you’d expect from a social home robot crowdfunding campaign:

We won’t get into all of the details on Kiki in this article (the Kickstarter page has tons of information), but a few distinguishing features:

Each Kiki will develop its own personality over time through its daily interactions with its owner, other people, and other Kikis.
Interacting with Kiki is more abstract than with most robots—it can understand some specific words and phrases, and will occasionally use a specific word or two itself, but otherwise it’s mostly listening to your tone of voice and responding with sounds rather than speech.
Kiki doesn’t move on its own, but it can operate for up to two hours away from its charging dock.
Depending on how you treat Kiki, it can get depressed or neurotic. It also needs to be fed, which you can do by drawing different kinds of food in the app.
Everything Kiki does runs on-board the robot. It has Wi-Fi connectivity for updates, but doesn’t rely on the cloud for anything in real-time, meaning that your data stays on the robot and that the robot will continue to function even if its remote service shuts down.

It’s hard to say whether features like these are unique enough to help Kiki be successful where other social home robots haven’t been, so we spoke with Zoetic co-founder Mita Yun and asked her why she believes that Kiki is going to be the social home robot that makes it.

IEEE Spectrum: What’s your background?

Mita Yun: I was an only child growing up, and so I always wanted something like Doraemon or Totoro. Something that when you come home it’s there to greet you, not just because it’s programmed to do that but because it’s actually actively happy to see you, and only you. I was so interested in this that I went to study robotics at CMU and then after I graduated I joined Google and worked there for five years. I tended to go for the more risky and more fun projects, but they always got cancelled—the first project I joined was called Android at Home, and then I joined Google Glass, and then I joined a team called Robots for Kids. That project was building educational robots, and then I just realized that when we’re adding technology to something, to a product, we’re actually taking the life away somehow, and the kids were more connected with stuffed animals compared to the educational robots we were building. That project was also cancelled, and in 2017, I left with a coworker of mine (Jitu Das) to bring this dream into reality. And now we’re building Kiki.

“Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless”
—Mita Yun, Zoetic

You started working on Kiki in 2017, when things were already getting challenging for Jibo—why did you decide to start developing a social home robot at that point?

I thought Jibo was great. It had a special magical way of moving, and it was such a new idea that you could have this robot with embodiment and it can actually be your assistant. The problem with Jibo, in my opinion, was that it took too long to fulfill the orders. It took them three to four years to actually manufacture, because it was a very complex piece of hardware, and then during that period of time Alexa and Google Home came out, and they started selling these voice systems for $30 and then you have Jibo for $800. Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless.

Can you elaborate on “completely useless?”

I feel like people are initially connected with robots because they remind them of a character. And it’s the closest we can get to a character other than an organic character like an animal. So we’re connected to a character like when we have a robot in a mall that’s roaming around, even if it looks really ugly, like if it doesn’t have eyes, people still take selfies with it. Why? Because they think it’s a character. And humans are just hardwired to love characters and love stories. With Kiki, we just wanted to build a character that’s alive, we don’t want to have a character do anything super useful.

I understand why other robotics companies are adding Alexa integration to their robots, and I think that’s great. But the dream I had, and the understanding I have about robotics technology, is that for a consumer robot especially, it is very very difficult for the robot to justify its price through usefulness. And then there’s also research showing that the more useless something is, the easier it is to have an emotional connection, so that’s why we want to keep Kiki very useless.

What kind of character are you creating with Kiki?

The whole design principle around Kiki is we want to make it a very vulnerable character. In terms of its status at home, it’s not going to be higher or equal status as the owner, but slightly lower status than the human, and it’s vulnerable and needs you to take care of it in order to grow up into a good personality robot.

We don’t let Kiki speak full English sentences, because whenever it does that, people are going to think it’s at least as intelligent as a baby, which is impossible for robots at this point. And we also don’t let it move around, because when you have it move around, people are going to think, “I’m going to call Kiki’s name, and then Kiki will come to me.” But that is actually very difficult to build. And then also we don’t have any voice integration so it doesn’t tell you about the stock market price and so on.

Photo: Zoetic

Kiki is designed to be “vulnerable,” and it needs you to take care of it so it can “grow up into a good personality robot,” according to its creators.

That sounds similar to what Mayfield did with Kuri, emphasizing an emotional connection rather than specific functionality.

It is very similar, but one of the key differences from Kuri, I think, is that Kuri started with a Kobuki base, and then it’s wrapped into a cute shell, and they added sounds. So Kuri started with utility in mind—navigation is an important part of Kuri, so they started with that challenge. For Kiki, we started with the eyes. The entire thing started with the character itself.

How will you be able to convince your customers to spend $800 on a robot that you’ve described as “useless” in some ways?

Because it’s useless, it’s actually easier to convince people, because it provides you with an emotional connection. I think Kiki is not a utility-driven product, so the adoption cycle is different. For a functional product, it’s very easy to pick up, because you can justify it by saying “I’m going to pay this much and then my life can become this much more efficient.” But it’s also very easy to be replaced and forgotten. For an emotional-driven product, it’s slower to pick up, but once people actually pick it up, they’re going to be hooked—they get connected with it, and they’re willing to invest more into taking care of the robot so it will grow up to be smarter.

Maintaining value over time has been another challenge for social home robots. How will you make sure that people don’t get bored with Kiki after a few weeks?

Of course Kiki has limits in what it can do. We can combine the eyes, the facial expression, the motors, and lights and sounds, but is it going to be constantly entertaining? So we think of this as: imagine if a human were actually puppeteering Kiki—could Kiki stay interesting if a human were puppeteering it and interacting with the owner? So I think what makes a robot interesting is not just the physical expressions, but the layer in between—how the robot conveys its intentions and emotions.

For example, if you come into the room and then Kiki decides it will turn the other direction, ignore you, and then you feel like, huh, why did the robot do that to me? Did I do something wrong? And then maybe you will come up to it and you will try to figure out why it did that. So, even though Kiki can only express in four different dimensions, it can still make things very interesting, and then when its strategies change, it makes it feel like a new experience.

There’s also an explore and exploit process going on. Kiki wants to make you smile, and it will try different things. It could try to chase its tail, and if you smile, Kiki learns that this works and will exploit it. But maybe after doing it three times, you no longer find it funny, because you’re bored of it, and then Kiki will observe your reactions and be motivated to explore a new strategy.
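The behavior Yun describes—try something, keep doing it while it earns smiles, and drift back to trying new things once it stops working—maps naturally onto a simple explore/exploit loop. The sketch below is an epsilon-greedy toy with made-up behaviors and parameters, not Zoetic's implementation; it only shows how a falling reward estimate naturally pushes the robot back toward exploration.

import random

class BehaviorSelector:
    """Toy epsilon-greedy selector: estimate how well each behavior earns a
    smile and mostly pick the current best, with occasional exploration."""

    def __init__(self, behaviors, epsilon=0.2, learning_rate=0.3):
        self.values = {b: 0.0 for b in behaviors}     # running "smile value" per behavior
        self.epsilon = epsilon                        # probability of exploring
        self.lr = learning_rate

    def choose(self):
        if random.random() < self.epsilon:            # explore a random behavior
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # exploit the current favorite

    def update(self, behavior, got_smile):
        reward = 1.0 if got_smile else -0.5           # boredom drags the estimate down
        self.values[behavior] += self.lr * (reward - self.values[behavior])

# Usage (hypothetical): selector = BehaviorSelector(["chase_tail", "chirp", "blink"])
# behavior = selector.choose(); selector.update(behavior, got_smile=owner_smiled)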

Photo: Zoetic

Kiki’s creators are hoping that, with an emotionally engaging robot, it will be easier for people to get attached to it and willing to spend time taking care of it.

A particular risk with crowdfunding a robot like this is setting expectations unreasonably high. The emphasis on personality and emotional engagement with Kiki seems like it may be very difficult for the robot to live up to in practice.

I think we invested more than most robotics companies into really building out Kiki’s personality, because that is the single most important thing to us. For Jibo a lot of the focus was in the assistant, and for Kuri, it’s more in the movement. For Kiki, it’s very much in the personality.

I feel like when most people talk about personality, they’re mainly talking about expression. With Kiki, it’s not just in the expression itself, not just in the voice or the eyes or the output layer, it’s in the layer in between—when Kiki receives input, how will it make decisions about what to do? We actually don’t think the personality of Kiki is categorizable, which is why I feel like Kiki has a deeper implementation of how personalities should work. And you’re right, Kiki doesn’t really understand why you’re feeling a certain way, it just reads your facial expressions. It’s maybe not your best friend, but maybe closer to your little guinea pig robot.

Photo: Zoetic

The team behind Kiki paid particular attention to its eyes, and designed the robot to always face the person that it is interacting with.

Is that where you’d put Kiki on the scale of human to pet?

Kiki is definitely not human, we want to keep it very far away from human. And it’s also not a dog or cat. When we were designing Kiki, we took inspiration from mammals because humans are deeply connected to mammals since we’re mammals ourselves. And specifically we’re connected to predator animals. With prey animals, their eyes are usually on the sides of their heads, because they need to see different angles. A predator animal needs to hunt, they need to focus. Cats and dogs are predator animals. So with Kiki, that’s why we made sure the eyes are on one side of the face and the head can actuate independently from the body and the body can turn so it’s always facing the person that it’s paying attention to.

I feel like Kiki probably does more than a plant. It does more than a fish, because a fish doesn’t look you in the eyes. It’s not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.
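Keeping the eyes on the person while the head and body move at different speeds is a small control problem in itself. Below is a minimal, hypothetical sketch of one way to split the motion—aim the fast head joint first and let the slower body absorb whatever exceeds the head's travel. The joint limit and the approach are assumptions for illustration, not Kiki's actual control code.

def split_facing(target_bearing_rad, body_yaw_rad, head_limit_rad=0.6):
    """Split a desired facing direction between a fast head joint and a slower
    body turn so that head yaw plus body yaw always points at the person."""
    error = target_bearing_rad - body_yaw_rad
    head_yaw = max(-head_limit_rad, min(head_limit_rad, error))   # fast joint takes what it can
    body_yaw_command = body_yaw_rad + (error - head_yaw)          # body absorbs the remainder
    return head_yaw, body_yaw_command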

What have you found so far when running user studies with Kiki?

When we were first designing Kiki we went through a whole series of prototypes. One of the earlier prototypes of Kiki looked like a CRT, like a very old monitor, and when we were testing that with people they didn’t even want to touch it. Kiki’s design inspiration actually came from an airplane, with a very angular, futuristic look, but based on user feedback we made it more round and more friendly to the touch. The lights were another feature request from the users, which adds another layer of expressivity to Kiki, and they wanted to see multiple Kikis working together with different personalities. Users also wanted different looks for Kiki, to make it look like a deer or a unicorn, for example, and we actually did take that into consideration because it doesn’t look like any particular mammal. In the future, you’ll be able to have different ears to make it look like completely different animals.

There has been a lot of user feedback that we didn’t implement—I believe we should observe the users’ reactions and feedback but not listen to their advice. The users shouldn’t be our product designers, because if you test Kiki with 10 users, eight of them will tell you they want Alexa in it. But we’re never going to add Alexa integration to Kiki because that’s not what it’s meant to do.

While it’s far too early to tell whether Kiki will be a long-term success, the Kickstarter campaign is currently over 95 percent funded with 8 days to go, and 34 robots are still available for a May 2020 delivery.

[ Kickstarter ]
