
#435757 Robotic Animal Agility

An off-shore wind power platform, somewhere in the North Sea, on a freezing cold night, with howling winds and waves crashing against the impressive structure. An imperturbable ANYmal is quietly conducting its inspection.

ANYmal, a medium sized dog-like quadruped robot, walks down the stairs, lifts a “paw” to open doors or to call the elevator and trots along corridors. Darkness is no problem: it knows the place perfectly, having 3D-mapped it. Its laser sensors keep it informed about its precise path, location and potential obstacles. It conducts its inspection across several rooms. Its cameras zoom in on counters, recording the measurements displayed. Its thermal sensors record the temperature of machines and equipment and its ultrasound microphone checks for potential gas leaks. The robot also inspects lever positions as well as the correct positioning of regulatory fire extinguishers. As the electronic buzz of its engines resumes, it carries on working tirelessly.

After a little over two hours of inspection, the robot returns to its docking station for recharging. It will soon head back out to conduct its next solitary patrol. ANYmal played alongside Mulder and Scully in the “X-Files” TV series*, but it is in no way a Hollywood robot. It genuinely exists and surveillance missions are part of its very near future.

Off-shore oil platforms: the first test fields and probably the first actual application of ANYmal. ©ANYbotics

This quadruped robot was designed by ANYbotics, a spinoff of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Made of carbon fibre and aluminium, it weighs about thirty kilos. It is fully ruggedised, waterproof and dustproof (IP67). A Kevlar belly protects its main body, carrying its powerful brain, batteries, network device, power management system and navigational systems.

ANYmal was designed for all types of terrain, including rubble, sand or snow. It has been field tested on industrial sites and is at ease with new obstacles to overcome (and it can even get up after a fall). Depending on its mission, its batteries last 2 to 4 hours.

On its jointed legs, protected by rubber pads, it can walk (at the speed of human steps), trot, climb, curl upon itself to crawl, carry a load or even jump and dance. It is the need to move on all surfaces that has driven its designers to choose a quadruped. “Biped robots are not easy to stabilise, especially on irregular terrain” explains Dr Péter Fankhauser, co-founder and chief business development officer of ANYbotics. “Wheeled or tracked robots can carry heavy loads, but they are bulky and less agile. Flying drones are highly mobile, but cannot carry load, handle objects or operate in bad weather conditions. We believe that quadrupeds combine the optimal characteristics, both in terms of mobility and versatility.”

What served as a source of inspiration for the team behind the project, the Robotic Systems Lab of ETH Zurich, is a champion of agility on rugged terrain: the mountain goat. “We are of course still a long way off,” says Fankhauser. “However, it remains our objective for the longer term.”

The first prototype, ALoF, was designed back in 2009. It was still rather slow, very rigid and clumsy – more of a proof of concept than a robot ready for application. In 2012, StarlETH, fitted with spring joints, could hop, jump and climb. It was with this robot that the team started participating, in 2014, in ARGOS, a full-scale challenge launched by the Total oil group. The idea was to present a robot capable of inspecting an off-shore drilling station autonomously.

Up against dozens of competitors, the ETH Zurich team was the only one to enter the competition with a quadrupedal robot. They didn’t win, but the multiple field tests were growing ever more convincing. Especially because, during the challenge, the team designed new joints with elastic actuators made in-house. These joints, inspired by tendons and muscles, are compact, sealed and include their own custom control electronics. They can regulate joint torque, position and impedance directly. Thanks to this innovation, the team could enter the same competition with a new version of its robot, ANYmal, fitted with three joints on each leg.
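A joint that regulates torque, position and impedance typically behaves like a virtual spring-damper around a setpoint, plus a feed-forward term. Here is a minimal sketch of such a control law; the function name, gains and values are illustrative assumptions, not ANYbotics’ actual ANYdrive controller:

```python
# Sketch of an impedance control law for a single elastic joint.
# All names and gains are hypothetical, for illustration only.

def impedance_torque(q, dq, q_des, dq_des=0.0, tau_ff=0.0,
                     stiffness=40.0, damping=2.0):
    """Commanded joint torque: virtual spring-damper plus feed-forward.

    q, dq         -- measured joint position (rad) and velocity (rad/s)
    q_des, dq_des -- desired position and velocity
    stiffness     -- virtual spring constant (Nm/rad); lower = more compliant
    damping       -- virtual damper (Nm*s/rad)
    tau_ff        -- feed-forward torque, e.g. gravity compensation
    """
    return stiffness * (q_des - q) + damping * (dq_des - dq) + tau_ff

# A stiff joint tracks position tightly; a soft one yields on impact.
tau = impedance_torque(q=0.1, dq=0.0, q_des=0.5)  # 40 * 0.4 = 16.0 Nm
```

Lowering the stiffness gain is what makes a leg compliant when it strikes uneven ground, which is the point of a tendon-and-muscle-inspired design.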

The ARGOS experience confirmed the relevance of the selected means of locomotion. “Our robot is lighter, takes up less space on site and is less noisy,” says Fankhauser. “It also overcomes bigger obstacles than larger wheeled or tracked robots!” As ANYmal generated public interest and its transformation into a genuine product seemed more than possible, the startup ANYbotics was launched in 2016. It sold not only its robot, but also its revolutionary joints, called ANYdrive.

Today, ANYmal is not yet ready for sale to companies. However, ANYbotics has a growing number of partnerships with several industries, testing the robot for a few days or several weeks, for all types of tasks. Last October, for example, ANYmal navigated its way through the dark sewage system of the city of Zurich in order to test its capacity to help workers in similar difficult, repetitive and even dangerous tasks.

Why such an early interest among companies? “Because many companies want to integrate robots into their maintenance tasks” answers Fankhauser. “With ANYmal, they can actually evaluate its feasibility and plan their strategy. Eventually, both the architecture and the equipment of buildings could be rethought to be adapted to these maintenance robots”.

ANYmal requires ruggedised, sealed and extremely reliable interconnection solutions, such as LEMO. ©ANYbotics

Through field demonstrations and testing, ANYbotics gathers masses of information (up to 50,000 measurements are recorded every second during each test!). “It helps us to shape the product.” In due time, the startup will be ready to deliver a commercial product that really caters to companies’ needs.

Inspection and surveillance tasks on industrial sites are not the only applications considered. The startup is also thinking of agricultural inspections – with its onboard sensors, ANYmal is capable of mapping its environment, measuring biomass and even taking soil samples. In the longer term, it could also be used for search and rescue operations. By the way, the robot can already be switched to “remote control” mode at any time and can be easily tele-operated. It is also capable of live audio and video transmission.

The transition from the prototype to the marketed product stage will involve a number of further developments. These include increasing ANYmal’s agility and speed, extending its capacity to map large-scale environments, improving safety, security and user handling, and integrating the system with the customer’s data management software. It will also be necessary to enhance the robot’s reliability “so that it can work for days, weeks, or even months without human supervision.” All required certifications will have to be obtained. The locomotion system, which triggered the whole business, is only one of a number of considerations for ANYbotics.

Designed for extreme environments, for ANYmal smoke is not a problem and it can walk in the snow, through rubble or in water. ©ANYbotics

The startup is not all alone. In fact, it has sold ANYmal robots to a dozen major universities, which use them to develop their know-how in robotics. The startup has also founded ANYmal Research, a community whose members include the Toyota Research Institute, the German Aerospace Center and the computer company Nvidia. Members have full access to ANYmal’s control software, simulations and documentation. Sharing has boosted both software and hardware ideas and developments (built on ROS, the open-source Robot Operating System), in particular payload variations that provide expandability and scalability. For instance, one of the universities uses a robotic arm which enables ANYmal to grasp or handle objects and open doors.

Among possible applications, ANYbotics mentions entertainment. It is not only about playing in more films or TV series, but rather about participating in various attractions (trade shows, museums, etc.). “ANYmal is so novel that it attracts a great amount of interest” confirms Fankhauser with a smile. “Whenever we present it somewhere, people gather around.”

Videos of these events show a fascinated, and sometimes slightly fearful, audience when ANYmal gets too close. Is it fear of the “bad robot”? “This fear exists indeed and we are happy to be able to use ANYmal also to promote public awareness towards robotics and robots.” Reminiscent of a young dog, ANYmal is well suited to the purpose.

However, Péter Fankhauser softens the image of humans and sophisticated robots living together. “In the coming years, robots will continue to work in the background, like they have for a long time in factories. Then, they will be used in public places in a selective and targeted way, for instance for dangerous missions. We will need to wait another ten years before animal-like robots, such as ANYmal, share our everyday lives!”

At the Consumer Electronics Show (CES) in Las Vegas in January, Continental, the German automotive manufacturing company, used robots to demonstrate a last-mile delivery. It showed ANYmal getting out of an autonomous vehicle with a parcel, climbing onto the front porch, lifting a paw to ring the doorbell, depositing the parcel before getting back into the vehicle. This futuristic image seems very close indeed.

*X-Files, season 11, episode 7, aired in February 2018

Posted in Human Robots

#435752 T-RHex Is a Hexapod Robot With ...

In Aaron Johnson’s “Robot Design & Experimentation” class at CMU, teams of students have a semester to design and build an experimental robotic system based on a theme. For spring 2019, that theme was “Bioinspired Robotics,” which is definitely one of our favorite kinds of robotics—animals can do all kinds of crazy things, and it’s always a lot of fun watching robots try to match them. They almost never succeed, of course, but even basic imitation can lead to robots with some unique capabilities.

One of the projects from this year’s course, from Team ScienceParrot, is a new version of RHex called T-RHex (pronounced T-Rex, like the dinosaur). T-RHex comes with a tail, but more importantly, it has tiny tapered toes, which help it grip onto rough surfaces like bricks, wood, and concrete. It’s able to climb its way up very steep slopes, and hang from them, relying on its toes to keep itself from falling off.

T-RHex’s toes are called microspines, and we’ve seen them in all kinds of robots. The most famous of these is probably JPL’s LEMUR IIB (which wins on sheer microspine volume), although the concept goes back at least 15 years to Stanford’s SpinyBot. Robots that use microspines to climb tend to be fairly methodical at it, since the microspines have to be engaged and disengaged with care, limiting their non-climbing mobility.

T-RHex manages to perform many of the same sorts of climbing and hanging maneuvers without losing RHex’s ability for quick, efficient wheel-leg (wheg) locomotion.

If you look closely at T-RHex walking in the video, you’ll notice that in its normal forward gait, it’s sort of walking on its ankles, rather than its toes. This means that the microspines aren’t engaged most of the time, so that the robot can use its regular wheg motion to get around. To engage the microspines, the robot moves its whegs backwards, meaning that its tail is arguably coming out of its head. But since all of T-RHex’s capability is mechanical in nature and it has no active sensors, it doesn’t really need a head, so that’s fine.

The highest climbable slope that T-RHex could manage was 55 degrees, meaning that it can’t, yet, conquer vertical walls. The researchers were most surprised by the robot’s ability to cling to surfaces, where it was perfectly happy to hang out on a slope of 135 degrees, which is a 45 degree overhang (!). I have no idea how it would ever reach that kind of position on its own, but it’s nice to know that if it ever does, its spines will keep doing their job.

Photo: CMU

T-RHex uses laser-cut acrylic legs, with the microspines embedded into 3D-printed toes. The tail is needed to prevent the robot from tipping backward.

For more details about the project, we spoke with Team ScienceParrot member (and CMU PhD student) Catherine Pavlov via email.

IEEE Spectrum: We’re used to seeing RHex with compliant, springy legs—how do the new legs affect T-RHex’s mobility?

Catherine Pavlov: There’s some compliance in the legs, though not as much as RHex—this is driven by the use of acrylic, which was chosen for budget/manufacturing reasons. Matching the compliance of RHex with acrylic would have made the tines too weak (since often only a few hold the load of the robot during climbing). It definitely means you can’t use energy storage in the legs the way RHex does, for example when pronking. T-RHex is probably more limited by motor speed in terms of mobility though. We were using some borrowed Dynamixels that didn’t allow for good positioning at high speeds.

How did you design the climbing gait? Why not use the middle legs, and why is the tail necessary?

The gait was a lot of hand-tuning and trial-and-error. We wanted a left/right symmetric gait to enable load sharing among more spines and prevent out-of-plane twisting of the legs. When using all three pairs, you have to have very accurate angular positioning or one leg pair gets pushed off the wall. Since two legs should be able to hold the full robot, using the middle legs was hurting more than it was helping, with the middle legs sometimes pushing the rear ones off of the wall.

The tail is needed to prevent the robot from tipping backward and “sitting” on the wall. During static testing we saw the robot tip backward, disengaging the front legs, at around 35 degrees incline. The tail allows us to load the front legs, even when they’re at a shallow angle to the surface. The climbing gait we designed uses the tail to allow the rear legs to fully recirculate without the robot tipping backward.

Photo: CMU

Team ScienceParrot with T-RHex.

What prevents T-RHex from climbing even steeper surfaces?

There are a few limiting factors. One is that the tines of the legs break pretty easily. I think we also need a lighter platform to get fully vertical—we’re going to look at MiniRHex for future work. We’re also not convinced our gait is the best it can be; we can probably get marginal improvements with more tuning, which might be enough.

Can the microspines assist with more dynamic maneuvers?

Dynamic climbing maneuvers? I think that would only be possible on surfaces with very good surface adhesion and very good surface strength, but it’s certainly theoretically possible. The current instance of T-RHex would definitely break if you tried to wall jump though.

What are you working on next?

Our main target is exploring the space of materials for leg fabrication, such as fiberglass, PLA, urethanes, and maybe metallic glass. We think there’s a lot of room for improvement in the leg material and geometry. We’d also like to see MiniRHex equipped with microspines, which will require legs about half the scale of what we built for T-RHex. Longer-term improvements would be the addition of sensors, e.g. for wall detection, and reliable floor-to-wall transitions and dynamic gait transitions.

[ T-RHex ]


#435750 Video Friday: Amazon CEO Jeff Bezos ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
Let us know if you have suggestions for next week, and enjoy today’s videos.

Last week at the re:MARS conference, Amazon CEO and aspiring supervillain Jeff Bezos tried out this pair of dexterous robotic hands, which he described as “weirdly natural” to operate. The system combines Shadow Robot’s anthropomorphic robot hands with SynTouch’s biomimetic tactile sensors and HaptX’s haptic feedback gloves.

After playing with the robot, Bezos let out his trademark evil laugh.

[ Shadow Robot ]

The RoboMaster S1 is DJI’s advanced new educational robot that opens the door to limitless learning and entertainment. Develop programming skills, get familiar with AI technology, and enjoy thrilling FPV driving with games and competition. From young learners to tech enthusiasts, get ready to discover endless possibilities with the RoboMaster S1.

[ DJI ]

It’s very impressive to see DLR’s humanoid robot Toro dynamically balancing, even while being handed heavy objects, pushing things, and using multi-contact techniques to kick a fire extinguisher for some reason.

The paper is in RA-L, and you can find it at the link below.

[ RA-L ] via [ DLR ]

Thanks Maximo!

Is it just me, or does the Suzumori Endo Robotics Laboratory’s Super Dragon arm somehow just keep getting longer?

Suzumori Endo Lab, Tokyo Tech developed a 10 m-long articulated manipulator for investigation inside the primary containment vessel of the Fukushima Daiichi Nuclear Power Plants. We employed a coupled tendon-driven mechanism and a gravity compensation mechanism using synthetic fiber ropes to design a lightweight and slender articulated manipulator. This work was published in IEEE Robotics and Automation Letters and Transactions of the JSME.

[ Suzumori Endo Lab ]

From what I can make out thanks to Google Translate, this cute little robot duck (developed by Nissan) helps minimize weeds in rice fields by stirring up the water.

[ Nippon.com ]

Confidence in your robot is when you can just casually throw it off of a balcony 15 meters up.

[ SUTD ]

You had me at “we’re going to completely submerge this apple in chocolate syrup.”

[ Soft Robotics Inc ]

In the mid-2020s, the European Space Agency is planning on sending a robotic sample return mission to the Moon. It’s called Heracles, after the noted snake-strangler of Greek mythology.

[ ESA ]

Rethink Robotics is still around, they’re just much more German than before. And Sawyer is still hard at work stealing jobs from humans.

[ Rethink Robotics ]

The reason to watch this new video of the Ghost Robotics Vision 60 quadruped is for the 3 seconds worth of barrel roll about 40 seconds in.

[ Ghost Robotics ]

This is a relatively low-altitude drop for Squishy Robotics’ tensegrity scout, but it’s still cool to watch a robot that’s resilient enough to fall and just not worry about it.

[ Squishy Robotics ]

We control here the Apptronik DRACO bipedal robot for unsupported dynamic locomotion. DRACO consists of a 10 DoF lower body with liquid cooled viscoelastic actuators to reduce weight, increase payload, and achieve fast dynamic walking. Control and walking algorithms are designed by UT HCRL Laboratory.

I think all robot videos should be required to start with two “oops” clips followed by a “for real now” clip.

[ Apptronik ]

SAKE’s EZGripper manages to pick up a wrench, and also pick up a raspberry without turning it into instajam.

[ SAKE Robotics ]

And now: the robotic long-tongued piggy, courtesy Sony Toio.

[ Toio ]

In this video the ornithopter developed inside the ERC Advanced Grant GRIFFIN project performs its first flight. This projects aims to develop a flapping wing system with manipulation and human interaction capabilities.

A flapping-wing system with manipulation and human interaction capabilities, you say? I would like to subscribe to your newsletter.

[ GRVC ]

KITECH’s robotic hands and arms can manipulate, among other things, five boxes of Elmos. I’m not sure about the conversion of Elmos to Snuffleupaguses, although it turns out that one Snuffleupagus is exactly 1,000 pounds.

[ Ji-Hun Bae ]

The Australian Centre for Field Robotics (ACFR) has been working on agricultural robots for almost a decade, and this video sums up a bunch of the stuff that they’ve been doing, even if it’s more amusing than practical at times.

[ ACFR ]

ROS 2 is great for multi-robot coordination, like when you need your bubble level to stay really, really level.

[ Acutronic Robotics ]

We don’t hear iRobot CEO Colin Angle give a lot of talks, so this recent one (from Amazon’s re:MARS conference) is definitely worth a listen, especially considering how much innovation we’ve seen from iRobot recently.

Colin Angle, founder and CEO of iRobot, will unveil a series of breakthrough innovations in home robots from iRobot. For the first time on stage, he will discuss and demonstrate what it takes to build a truly intelligent system of robots that work together to accomplish more within the home – and enable that home, and the devices within it, to work together as one.

[ iRobot ]

In the latest episode of Robots in Depth, Per speaks with Federico Pecora from the Center for Applied Autonomous Sensor Systems at Örebro University in Sweden.

Federico talks about working on AI and service robotics. In this area he has worked on planning, especially focusing on why a particular goal is the one that the robot should work on. To make robots as useful and user friendly as possible, he works on inferring the goal from the robot’s environment so that the user does not have to tell the robot everything.

Federico has also worked with AI robotics planning in industry to optimize results. Managing the relative importance of tasks is another challenging area there. In this context, he works on automating not only a single robot for its goal, but an entire fleet of robots for their collective goal. We get to hear about how these techniques are being used in warehouse operations, in mines and in agriculture.

[ Robots in Depth ]


#435742 This ‘Useless’ Social Robot ...

The recent high-profile failures of some home social robots (and the companies behind them) have made it even more challenging than before to develop robots in that space. And it was challenging enough to begin with—making a robot that can autonomously interact with random humans in their homes over a long period of time for a price that people can afford is extraordinarily difficult. However, the massive amount of initial interest in robots like Jibo, Kuri, Vector, and Buddy proves that people do want these things, or at least think they do, and while that’s the case, there’s incentive for other companies to give social home robots a try.

One of those companies is Zoetic, founded in 2017 by Mita Yun and Jitu Das, both ex-Googlers. Their robot, Kiki, is more or less exactly what you’d expect from a social home robot: It’s cute, white, roundish, has big eyes, promises that it will be your “robot sidekick,” and is not cheap: It’s on Kickstarter for $800. Kiki is among what appears to be a sort of tentative second wave of social home robots, where designers have (presumably) had a chance to take everything that they learned from the social home robot pioneers and use it to make things better this time around.

Kiki’s Kickstarter video is, again, more or less exactly what you’d expect from a social home robot crowdfunding campaign:

We won’t get into all of the details on Kiki in this article (the Kickstarter page has tons of information), but a few distinguishing features:

Each Kiki will develop its own personality over time through its daily interactions with its owner, other people, and other Kikis.
Interacting with Kiki is more abstract than with most robots—it can understand some specific words and phrases, and will occasionally use a specific word or two of its own, but otherwise it’s mostly listening to your tone of voice and responding with sounds rather than speech.
Kiki doesn’t move on its own, but it can operate for up to two hours away from its charging dock.
Depending on how you treat Kiki, it can get depressed or neurotic. It also needs to be fed, which you can do by drawing different kinds of food in the app.
Everything Kiki does runs on-board the robot. It has Wi-Fi connectivity for updates, but doesn’t rely on the cloud for anything in real-time, meaning that your data stays on the robot and that the robot will continue to function even if its remote service shuts down.

It’s hard to say whether features like these are unique enough to help Kiki be successful where other social home robots haven’t been, so we spoke with Zoetic co-founder Mita Yun and asked her why she believes that Kiki is going to be the social home robot that makes it.

IEEE Spectrum: What’s your background?

Mita Yun: I was an only child growing up, and so I always wanted something like Doraemon or Totoro. Something that when you come home it’s there to greet you, not just because it’s programmed to do that but because it’s actually actively happy to see you, and only you. I was so interested in this that I went to study robotics at CMU and then after I graduated I joined Google and worked there for five years. I tended to go for the more risky and more fun projects, but they always got cancelled—the first project I joined was called Android at Home, and then I joined Google Glass, and then I joined a team called Robots for Kids. That project was building educational robots, and then I just realized that when we’re adding technology to something, to a product, we’re actually taking the life away somehow, and the kids were more connected with stuffed animals compared to the educational robots we were building. That project was also cancelled, and in 2017, I left with a coworker of mine (Jitu Das) to bring this dream into reality. And now we’re building Kiki.

“Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless”
—Mita Yun, Zoetic

You started working on Kiki in 2017, when things were already getting challenging for Jibo—why did you decide to start developing a social home robot at that point?

I thought Jibo was great. It had a special magical way of moving, and it was such a new idea that you could have this robot with embodiment and it can actually be your assistant. The problem with Jibo, in my opinion, was that it took too long to fulfill the orders. It took them three to four years to actually manufacture, because it was a very complex piece of hardware, and then during that period of time Alexa and Google Home came out, and they started selling these voice systems for $30 and then you have Jibo for $800. Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless.

Can you elaborate on “completely useless?”

I feel like people are initially connected with robots because they remind them of a character. And it’s the closest we can get to a character other than an organic character like an animal. So we’re connected to a character like when we have a robot in a mall that’s roaming around, even if it looks really ugly, like if it doesn’t have eyes, people still take selfies with it. Why? Because they think it’s a character. And humans are just hardwired to love characters and love stories. With Kiki, we just wanted to build a character that’s alive, we don’t want to have a character do anything super useful.

I understand why other robotics companies are adding Alexa integration to their robots, and I think that’s great. But the dream I had, and the understanding I have about robotics technology, is that for a consumer robot especially, it is very very difficult for the robot to justify its price through usefulness. And then there’s also research showing that the more useless something is, the easier it is to have an emotional connection, so that’s why we want to keep Kiki very useless.

What kind of character are you creating with Kiki?

The whole design principle around Kiki is we want to make it a very vulnerable character. In terms of its status at home, it’s not going to have higher or equal status to the owner, but slightly lower status than the human, and it’s vulnerable and needs you to take care of it in order to grow up into a good personality robot.

We don’t let Kiki speak full English sentences, because whenever it does that, people are going to think it’s at least as intelligent as a baby, which is impossible for robots at this point. And we also don’t let it move around, because when you have it move around, people are going to think “I’m going to call Kiki’s name, and then Kiki will come to me.” But that is actually very difficult to build. And then also we don’t have any voice integration so it doesn’t tell you about the stock market price and so on.

Photo: Zoetic

Kiki is designed to be “vulnerable,” and it needs you to take care of it so it can “grow up into a good personality robot,” according to its creators.

That sounds similar to what Mayfield did with Kuri, emphasizing an emotional connection rather than specific functionality.

It is very similar, but one of the key differences from Kuri, I think, is that Kuri started with a Kobuki base, and then it’s wrapped into a cute shell, and they added sounds. So Kuri started with utility in mind—navigation is an important part of Kuri, so they started with that challenge. For Kiki, we started with the eyes. The entire thing started with the character itself.

How will you be able to convince your customers to spend $800 on a robot that you’ve described as “useless” in some ways?

Because it’s useless, it’s actually easier to convince people, because it provides you with an emotional connection. I think Kiki is not a utility-driven product, so the adoption cycle is different. For a functional product, it’s very easy to pick up, because you can justify it by saying “I’m going to pay this much and then my life can become this much more efficient.” But it’s also very easy to be replaced and forgotten. For an emotional-driven product, it’s slower to pick up, but once people actually pick it up, they’re going to be hooked—they get connected with it, and they’re willing to invest more into taking care of the robot so it will grow up to be smarter.

Maintaining value over time has been another challenge for social home robots. How will you make sure that people don’t get bored with Kiki after a few weeks?

Of course Kiki has limits in what it can do. We can combine the eyes, the facial expression, the motors, and lights and sounds, but is it going to be constantly entertaining? So we think of this as, imagine if a human is actually puppeteering Kiki—can Kiki stay interesting if a human is puppeteering it and interacting with the owner? So I think what makes a robot interesting is not just the physical expressions, but the part in between: how the robot conveys its intentions and emotions.

For example, if you come into the room and then Kiki decides it will turn the other direction, ignore you, and then you feel like, huh, why did the robot do that to me? Did I do something wrong? And then maybe you will come up to it and you will try to figure out why it did that. So, even though Kiki can only express in four different dimensions, it can still make things very interesting, and then when its strategies change, it makes it feel like a new experience.

There’s also an explore and exploit process going on. Kiki wants to make you smile, and it will try different things. It could try to chase its tail, and if you smile, Kiki learns that this works and will exploit it. But maybe after doing it three times, you no longer find it funny, because you’re bored of it, and then Kiki will observe your reactions and be motivated to explore a new strategy.
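The explore-and-exploit behavior described above can be sketched as a toy bandit-style action selector. This is purely illustrative—the class, action names, and the "boredom" discount are assumptions for the sketch, not anything from Zoetic's actual software:

```python
import random

class BehaviorSelector:
    """Toy explore/exploit loop: pick a trick, observe whether the person
    smiled, and let a repeatedly used trick grow stale (boredom)."""

    def __init__(self, actions, epsilon=0.2, boredom=0.5, seed=0):
        self.actions = list(actions)
        self.epsilon = epsilon   # chance of exploring a random action
        self.boredom = boredom   # discount so a repeated trick loses value
        self.value = {a: 0.0 for a in self.actions}
        self.rng = random.Random(seed)

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the
        # highest-valued trick seen so far.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def feedback(self, action, smiled):
        if smiled:
            # Reinforce the trick, but discount past value so repetition
            # yields diminishing returns.
            self.value[action] = self.value[action] * self.boredom + 1.0
        else:
            # No smile: the trick stops paying off, which pushes the
            # selector back toward exploring something new.
            self.value[action] = 0.0
```

In this sketch, "chase its tail" gets reinforced while it earns smiles, and once the owner stops reacting its value collapses, motivating a new strategy—mirroring the cycle described above.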

Photo: Zoetic

Kiki’s creators are hoping that, with an emotionally engaging robot, it will be easier for people to get attached to it and willing to spend time taking care of it.

A particular risk with crowdfunding a robot like this is setting expectations unreasonably high. The emphasis on personality and emotional engagement with Kiki seems like it may be very difficult for the robot to live up to in practice.

I think we invested more than most robotics companies into really building out Kiki’s personality, because that is the single most important thing to us. For Jibo a lot of the focus was in the assistant, and for Kuri, it’s more in the movement. For Kiki, it’s very much in the personality.

I feel like when most people talk about personality, they’re mainly talking about expression. With Kiki, it’s not just in the expression itself, not just in the voice or the eyes or the output layer, it’s in the layer in between—when Kiki receives input, how will it make decisions about what to do? We actually don’t think the personality of Kiki is categorizable, which is why I feel like Kiki has a deeper implementation of how personalities should work. And you’re right, Kiki doesn’t really understand why you’re feeling a certain way, it just reads your facial expressions. It’s maybe not your best friend, but maybe closer to your little guinea pig robot.

Photo: Zoetic

The team behind Kiki paid particular attention to its eyes, and designed the robot to always face the person that it is interacting with.

Is that where you’d put Kiki on the scale of human to pet?

Kiki is definitely not human, we want to keep it very far away from human. And it’s also not a dog or cat. When we were designing Kiki, we took inspiration from mammals because humans are deeply connected to mammals since we’re mammals ourselves. And specifically we’re connected to predator animals. With prey animals, their eyes are usually on the sides of their heads, because they need to see different angles. A predator animal needs to hunt, they need to focus. Cats and dogs are predator animals. So with Kiki, that’s why we made sure the eyes are on one side of the face and the head can actuate independently from the body and the body can turn so it’s always facing the person that it’s paying attention to.

I feel like Kiki probably does more than a plant. It does more than a fish, because a fish doesn’t look you in the eyes. It’s not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.

What have you found so far when running user studies with Kiki?

When we were first designing Kiki we went through a whole series of prototypes. One of the earlier prototypes of Kiki looked like a CRT, like a very old monitor, and when we were testing that with people they didn’t even want to touch it. Kiki’s design inspiration actually came from an airplane, with a very angular, futuristic look, but based on user feedback we made it more round and more friendly to the touch. The lights were another feature request from the users, which adds another layer of expressivity to Kiki, and they wanted to see multiple Kikis working together with different personalities. Users also wanted different looks for Kiki, to make it look like a deer or a unicorn, for example, and we actually did take that into consideration because it doesn’t look like any particular mammal. In the future, you’ll be able to have different ears to make it look like completely different animals.

There has been a lot of user feedback that we didn’t implement—I believe we should observe the users’ reactions and feedback but not listen to their advice. The users shouldn’t be our product designers, because if you test Kiki with 10 users, eight of them will tell you they want Alexa in it. But we’re never going to add Alexa integration to Kiki, because that’s not what it’s meant to do.

While it’s far too early to tell whether Kiki will be a long-term success, the Kickstarter campaign is currently over 95 percent funded with 8 days to go, and 34 robots are still available for a May 2020 delivery.

[ Kickstarter ]

Posted in Human Robots

#435738 Boing Goes the Trampoline Robot

There are a handful of quadrupedal robots out there that are highly dynamic, with the ability to run and jump, but those robots tend to be rather expensive and complicated, requiring powerful actuators and legs with elasticity. Boxing Wang, a Ph.D. student in the College of Control Science and Engineering at Zhejiang University in China, contacted us to share a project he’s been working on to investigate quadruped jumping with simple, affordable hardware.

“The motivation for this project is quite simple,” Boxing says. “I wanted to study quadrupedal jumping control, but I didn’t have custom-made powerful actuators, and I didn’t want to have to design elastic legs. So I decided to use a trampoline to make a normal servo-driven quadruped robot jump.”

Boxing and his colleagues had wanted to study quadrupedal running and jumping, so they built this robot with the most powerful servos they had access to: Kondo KRS6003RHV actuators, which have a maximum torque of 6 Nm. After some simple testing, it became clear that the servos were simply not fast or powerful enough to get the robot to jump, and that an elastic element was necessary to store energy to help the robot get off the ground.

“Normally, people would choose elastic legs,” says Boxing. “But nobody in my lab knew for sure how to design them. If we tried making elastic legs and we failed to make the robot jump, we couldn’t be sure whether the problem was the legs or the control algorithms. For hardware, we decided that it’s better to start with something reliable, something that definitely won’t be the source of the problem.”

As it turns out, all you need is a trampoline, an inertial measurement unit (IMU), and little tactile switches on the end of each foot to detect touch-down and lift-off events, and you can do some useful jumping research without a jumping robot. And the trampoline has other benefits as well—because it’s stiffer at the edges than at the center, for example, the robot will tend to center itself on the trampoline, and you get some warning before things go wrong.
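Detecting those touch-down and lift-off events from the foot switches amounts to watching for state changes in the contact readings. A minimal sketch of that logic—leg names and the event format are my assumptions, not details from the actual controller:

```python
def contact_events(prev_state, switches):
    """Given the previous contact state and the current foot-switch
    readings (True = pressed), return the new state plus any
    touch-down / lift-off events, one per leg that changed."""
    events = []
    state = {}
    for leg, pressed in switches.items():
        was_down = prev_state.get(leg, False)
        if pressed and not was_down:
            events.append((leg, "touch_down"))   # switch just closed
        elif not pressed and was_down:
            events.append((leg, "lift_off"))     # switch just opened
        state[leg] = pressed
    return state, events
```

A controller would call this once per loop iteration, using the events to switch between stance and flight phases.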

“I can’t say that it’s a breakthrough to make a quadruped robot jump on a trampoline,” Boxing tells us. “But I believe this is useful for prototype testing, especially for people who are interested in quadrupedal jumping control but without a suitable robot at hand.”

To learn more about the project, we emailed him some additional questions.

IEEE Spectrum: Where did this idea come from?

Boxing Wang: The idea of the trampoline came while we were drinking milk tea. I don’t know why it came up, maybe someone saw a trampoline in a gym recently. And I don’t remember who proposed it exactly. It was just like someone said it unintentionally. But I realized that a trampoline would be a perfect choice. It’s reliable, easy to buy, and should have a dynamic model similar to that of jumping with springy legs (we have briefly analyzed this in a paper). So I decided to try the trampoline.

How much do you think you can learn using a quadruped on a trampoline, instead of using a jumping quadruped?

Generally speaking, no contact surfaces are strictly rigid. They all have elasticity. So there are no essential differences between jumping on a trampoline and jumping on a rigid surface. However, using a quadruped on a trampoline can give you more information on how to make use of elasticity to make jumping easier and more efficient. You can use quadruped robots with springy legs to address the same problem, but that usually requires much more time on hardware design.
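This equivalence can be illustrated with a one-dimensional spring-mass hopper: whether the compliance sits in the surface (a trampoline under rigid legs) or in the legs (springy legs on rigid ground), the stance phase reduces to the same mass-on-a-spring dynamics, with ballistic flight in between. The parameters below are illustrative, not taken from the paper:

```python
def simulate_hop(m=5.0, k=2000.0, g=9.81, y0=0.3, dt=1e-4, t_max=2.0):
    """Drop a point mass onto a linear spring and record apex heights.
    y is height above the spring's neutral point; the spring engages
    only while y < 0 (compression), modeling either a trampoline
    surface or a springy leg. Integrated with semi-implicit Euler."""
    y, v, t = y0, 0.0, 0.0
    prev_v = 0.0
    apexes = []
    while t < t_max:
        spring = -k * y if y < 0 else 0.0   # restoring force in stance only
        a = spring / m - g
        v += a * dt
        y += v * dt
        # Apex: velocity crosses zero from above while airborne.
        if prev_v > 0 and v <= 0 and y > 0:
            apexes.append(y)
        prev_v = v
        t += dt
    return apexes
```

With no damping the hop apexes stay near the initial drop height, and adding a stance-phase thrust term to `a` is the natural place to inject jumping control—the same structure whichever side of the contact the elasticity lives on.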

We prefer to treat the trampoline experiment as a kind of early test for further real jumping quadruped design. Unless you’re interested in designing an acrobatic robot on a trampoline, a real jumping quadruped is probably a more useful application, and that is our ultimate goal. The point of the trampoline tests is to develop the control algorithms first, and to examine the stability of the general hardware structure. Due to the similarity between jumping on a trampoline with rigid legs and jumping on hard surfaces with springy legs, the control algorithms you develop could be transferred to hard-surface jumping robots.

“Unless you’re interested in designing an acrobatic robot on a trampoline, a real jumping quadruped is probably a more useful application, and that is our ultimate goal. The point of the trampoline tests is to develop the control algorithms first, and to examine the stability of the general hardware structure”

Do you think that this idea can be beneficial for other kinds of robotics research?

Yes. For jumping quadrupeds with springy legs, the control algorithms could be first designed through trampoline tests using simple rigid legs. And the hardware design for elastic legs could be accelerated with the help of the control algorithms you design. In addition, we believe our work could be a good example of using a position-control robot to realize dynamic motions such as jumping, or even running.

Unlike other dynamic robots, every active joint in our robot is controlled through commercial position-control servos and not custom torque control motors. Most people don’t think that a position-control robot could perform highly dynamic motions such as jumping, because position-control motors usually mean a high gear ratio and slow response. However, our work indicates that, with the help of elasticity, stable jumping could be realized through position-control servos. So for those who already have a position-control robot at hand, they could explore the potential of their robot through trampoline tests.

Why is teaching a robot to jump important?

There are many scenarios where a jumping robot is needed. For example, a real jumping quadruped could be used to design a running quadruped. Both experience moments when all four legs are in the air, and it is easier to start from jumping and then move to running. Specifically, hopping or pronking can easily transform to bounding if the pitch angle is not strictly controlled. A bounding quadruped is similar to a running rabbit, so for now it can already be called a running quadruped.

To the best of our knowledge, a practical use of jumping quadrupeds could be planet exploration, just like what SpaceBok was designed for. In a low-gravity environment, jumping is more efficient than walking, and it’s easier to jump over obstacles. But if I had a jumping quadruped on Earth, I would teach it to catch a ball that I throw at it by jumping. It would be fantastic!

That would be fantastic.

Since the whole point of the trampoline was to get jumping software up and running with a minimum of hardware, the next step is to add some springy legs to the robot so that the control system the researchers developed can be tested on hard surfaces. They have a journal paper currently under revision, and Boxing Wang is joined as first author by his adviser Chunlin Zhou, undergrads Ziheng Duan and Qichao Zhu, and researchers Jun Wu and Rong Xiong.

Posted in Human Robots