#435818 Swappable Flying Batteries Keep Drones ...
Battery power is a limiting factor for robots everywhere, but it’s particularly problematic for drones, which have to make an awkward tradeoff between the amount of battery they carry, the amount of other more useful stuff they carry, and how long they can spend in the air. Consumer drones seem to have settled on devoting about a third of their overall mass to battery, resulting in flight times of 20 to 25 minutes at best before you have to bring the drone back for a battery swap. And if whatever the drone was supposed to be doing depended on it staying in the air, then you’re pretty much out of luck.
When much larger aircraft have this problem, and in particular military aircraft which sometimes need to stay on-station for long periods of time, the solution is mid-air refueling—why send an aircraft all the way back to its fuel source when you can instead bring the fuel source to the aircraft? It’s easier to do this with liquid fuel than it is with batteries, of course, but researchers at UC Berkeley have come up with a clever solution: You just give the batteries wings. Or, in this case, rotors.
The big quadrotor, which weighs 820 grams, carries its own 2.2 Ah lithium-polymer battery that by itself gives it a flight time of about 12 minutes. Each little quadrotor weighs 320 g, including its own 0.8 Ah battery plus a 1.5 Ah battery as cargo. The little ones can’t keep themselves aloft for all that long, but that’s okay, because as flying batteries their only job is to go from ground to the big quadrotor and back again.
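As a quick sanity check on those numbers (a back-of-the-envelope sketch only; it assumes a roughly constant current draw at a fixed pack voltage, which the paper doesn’t claim), the quoted capacities and flight times imply an average draw of around 11 amps for the unloaded quadrotor and around 15 amps while it’s carrying a flying battery:

```python
# Back-of-the-envelope check of the quoted figures. Assumes the main
# quadrotor draws a roughly constant current from a fixed-voltage pack;
# real power draw varies with payload and maneuvering.

MAIN_CAPACITY_AH = 2.2   # main quadrotor's own battery
MAIN_FLIGHT_MIN = 12.0   # endurance on that battery alone

implied_current_a = MAIN_CAPACITY_AH / (MAIN_FLIGHT_MIN / 60.0)
print(f"Implied average draw, unloaded: {implied_current_a:.1f} A")   # ~11 A

CARGO_CAPACITY_AH = 1.5  # battery carried by each small quadrotor
CARGO_POWERED_MIN = 6.0  # time the main drone runs off one cargo pack

implied_loaded_current_a = CARGO_CAPACITY_AH / (CARGO_POWERED_MIN / 60.0)
print(f"Implied draw while carrying a flying battery: {implied_loaded_current_a:.1f} A")  # ~15 A
```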
Photo: UC Berkeley
The flying batteries land on a tray mounted atop the main drone and align their legs with electrical contacts.
How the flying batteries work
As each flying battery approaches the main quadrotor, the smaller quadrotor takes a position about 30 centimeters above a passive docking tray mounted on top of the bigger drone. It then slowly descends to about 3 cm above, waits for its alignment to be just right, and then drops, landing on the tray, which helps align its legs with electrical contacts. As soon as a connection is made, the main quadrotor is able to power itself completely from the smaller drone’s battery payload. Each flying battery can power the main quadrotor for about 6 minutes, and then it flies off and a new flying battery takes its place. If everything goes well, the main quadrotor only uses its primary battery during the undocking and docking phases, and in testing, this boosted its flight time from 12 minutes to nearly an hour.
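Described in software terms, that docking cycle is a small state machine: approach, descend, align, drop, draw power, undock, repeat. The sketch below is our own hypothetical outline of that sequence; the state names and thresholds are illustrative and are not taken from the Berkeley implementation.

```python
# Hypothetical outline of the docking/undocking cycle described above.
# State names and thresholds are illustrative only.
from enum import Enum, auto


class DockState(Enum):
    APPROACH = auto()   # hold ~30 cm above the docking tray
    DESCEND = auto()    # creep down to ~3 cm
    ALIGN = auto()      # wait for alignment error to shrink
    DROP = auto()       # land on the tray, legs meet the contacts
    POWERING = auto()   # main drone runs off the cargo battery (~6 min)
    UNDOCK = auto()     # lift off so the next flying battery can dock


def docking_step(state, altitude_above_tray_m, alignment_error_m,
                 contact_made, cargo_battery_fraction):
    """One iteration of a (simplified) docking supervisor."""
    if state is DockState.APPROACH and altitude_above_tray_m <= 0.30:
        return DockState.DESCEND
    if state is DockState.DESCEND and altitude_above_tray_m <= 0.03:
        return DockState.ALIGN
    if state is DockState.ALIGN and alignment_error_m < 0.01:
        return DockState.DROP
    if state is DockState.DROP and contact_made:
        return DockState.POWERING
    if state is DockState.POWERING and cargo_battery_fraction < 0.1:
        return DockState.UNDOCK
    return state
```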
All of this happens in a motion-capture environment, which is a big constraint, and getting this precision(ish) docking maneuver to work outside, or when the primary drone is moving, is something that the researchers would like to figure out. There are potential applications in situations where continuous monitoring by a drone is important—you could argue that alternating between two identical drones might be a simpler way of achieving that, but it also requires two (presumably fancy) drones as opposed to just one plus a bunch of relatively simple and inexpensive flying batteries.
“Flying Batteries: In-flight Battery Switching to Increase Multirotor Flight Time,” by Karan P. Jain and Mark W. Mueller from the High Performance Robotics Lab at UC Berkeley, is available on arXiv.
#435816 This Light-based Nervous System Helps ...
Last night, way past midnight, I stumbled onto my porch blindly grasping for my keys after a hellish day of international travel. Lights were low, I was half-asleep, yet my hand grabbed the keychain, found the lock, and opened the door.
If you’re rolling your eyes—yeah, it’s not exactly an epic feat for a human. Thanks to the intricate wiring between our brain and millions of sensors dotted on—and inside—our skin, we know exactly where our hand is in space and what it’s touching without needing visual confirmation. But this combined sense of the internal and the external is completely lost to robots, which generally rely on computer vision or surface mechanosensors to track their movements and their interaction with the outside world. It’s not always a winning strategy.
What if, instead, we could give robots an artificial nervous system?
This month, a team led by Dr. Rob Shepherd at Cornell University did just that, with a seriously clever twist. Rather than mimicking the electric signals in our nervous system, his team turned to light. By embedding optical fibers inside a 3D printed stretchable material, the team engineered an “optical lace” that can detect pressure changes as light as a fraction of a pound, and pinpoint their location to a spot half the width of a tiny needle.
The invention isn’t just an artificial skin. Instead, the delicate fibers can be distributed both inside a robot and on its surface, giving it both a sense of tactile touch and—most importantly—an idea of its own body position in space. Optical lace isn’t a superficial coating of mechanical sensors; it’s an entire platform that may finally endow robots with nerve-like networks throughout the body.
Eventually, engineers hope to use this fleshy, washable material to coat the sharp, cold metal interior of current robots, transforming the likes of C-3PO into something closer to the human-like hosts of Westworld. Robots with a “bodily” sense could act as better caretakers for the elderly, said Shepherd, because they can assist fragile people without inadvertently bruising or otherwise harming them. The results were published in Science Robotics.
An Unconventional Marriage
The optical lace is especially creative because it marries two contrasting ideas: one biologically inspired, the other wholly alien.
The overarching idea for optical lace is based on the animal kingdom. Through sight, hearing, smell, taste, touch, and other senses, we’re able to interpret the outside world—something scientists call exteroception. Thanks to our nervous system, we perform these computations subconsciously, allowing us to constantly “perceive” what’s going on around us.
Our other perception is purely internal. Proprioception (sorry, it’s not called “inception” though it should be) is how we know where our body parts are in space without having to look at them, which lets us perform complex tasks when blind. Although less intuitive than exteroception, proprioception also relies on stretching and other deformations within the muscles and tendons and receptors under the skin, which generate electrical currents that shoot up into the brain for further interpretation.
In other words, in theory it’s possible to recreate both perceptions with a single information-carrying system.
Here’s where the alien factor comes in. Rather than using electrical properties, the team turned to light as their data carrier. They had good reason. “Compared with electricity, light carries information faster and with higher data densities,” the team explained. Light can also transmit in multiple directions simultaneously, and is less susceptible to electromagnetic interference. Although optical nervous systems don’t exist in the biological world, the team decided to improve on Mother Nature and give it a shot.
Optical Lace
The construction starts with engineering a “sheath” for the optical nerve fibers. The team first used an elastic polyurethane—a synthetic material used in foam cushioning, for example—to make a lattice structure filled with large pores, somewhat like a lattice pie crust. Thanks to rapid, high-resolution 3D printing, the scaffold can vary in stiffness from top to bottom. To increase sensitivity to the outside world, the team made the top of the lattice soft and pliable, to better transfer force to mechanical sensors. In contrast, the “deeper” regions were stiffer and held their shape under pressure.
Now the fun part. The team next threaded stretchable “light guides” into the scaffold. These fibers transmit photons, and are illuminated with a blue LED light. One, the input light guide, ran horizontally across the soft top part of the scaffold. Others ran perpendicular to the input in a “U” shape, going from more surface regions to deeper ones. These are the output guides. The architecture loosely resembles the wiring in our skin and flesh.
Normally, the output guides are separated from the input by a small air gap. When pressed down, the input light fiber distorts slightly, and if the pressure is high enough, it contacts one of the output guides. This causes light from the input fiber to “leak” to the output one, so that it lights up—the stronger the pressure, the brighter the output.
“When the structure deforms, you have contact between the input line and the output lines, and the light jumps into these output loops in the structure, so you can tell where the contact is happening,” said study author Patricia Xu. “The intensity of this determines the intensity of the deformation itself.”
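In software terms, reading the lace is straightforward: scan the output channels, and the brightest one tells you roughly where the contact happened while its intensity tells you roughly how hard the press was. Here’s a toy illustration of that readout; the channel spacing, noise floor, and calibration constant are invented, and this is not the Cornell group’s code.

```python
# Toy readout for an optical-lace-style sensor: each output light guide
# reports a photon intensity; the brightest channel localizes the touch
# and its intensity (roughly) encodes how hard the press was.
# Channel positions and the calibration constant are invented.

def read_lace(channel_intensities, channel_positions_mm, intensity_per_newton=0.25):
    """Return (position_mm, estimated_force_N) of the dominant contact,
    or None if nothing exceeds the noise floor."""
    noise_floor = 0.02
    best = max(range(len(channel_intensities)), key=lambda i: channel_intensities[i])
    peak = channel_intensities[best]
    if peak < noise_floor:
        return None
    return channel_positions_mm[best], peak / intensity_per_newton


# Example: 12 output channels spaced 5 mm apart, with a press near channel 3.
intensities = [0.0, 0.01, 0.05, 0.40, 0.12, 0.02] + [0.0] * 6
positions = [i * 5.0 for i in range(12)]
print(read_lace(intensities, positions))  # -> (15.0, 1.6): ~15 mm in, ~1.6 N
```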
Double Perception
As a proof-of-concept for proprioception, the team made a cylindrical lace with one input and 12 output channels. They varied the stiffness of the scaffold along the cylinder, and by pressing down at different points, were able to calculate how much each part stretched and deformed—a prominent precursor to knowing where different regions of the structure are moving in space. It’s a very rudimentary sort of proprioception, but one that will become more sophisticated with increasing numbers of strategically-placed mechanosensors.
The test for exteroception was a whole lot stranger. Here, the team engineered another optical lace with 15 output channels and turned it into a squishy piano. When pressed down, an Arduino microcontroller translated light output signals into sound based on the position of each touch. The stronger the pressure, the louder the volume. While not a musical masterpiece, the demo proved their point: the optical lace faithfully reported the strength and location of each touch.
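The piano demo is essentially a lookup on top of that same readout: which of the 15 output channels lit up selects the note, and how bright the leaked light is sets the volume. A minimal sketch of that mapping follows; the MIDI note numbers and intensity scaling are our assumptions, and the actual demo runs on an Arduino.

```python
# Minimal sketch of the "squishy piano" mapping: output channel -> note,
# light intensity -> volume. Note numbers and scaling are illustrative.

NUM_CHANNELS = 15
BASE_MIDI_NOTE = 60  # middle C, an arbitrary choice for illustration


def channel_to_note(channel_index):
    return BASE_MIDI_NOTE + channel_index  # one semitone per output channel


def intensity_to_velocity(intensity, max_intensity=1.0):
    """Map normalized light intensity to MIDI velocity (0-127):
    press harder, leak more light, play louder."""
    return min(127, int(127 * intensity / max_intensity))


# A press on channel 4 at 60% of full-scale intensity:
print(channel_to_note(4), intensity_to_velocity(0.6))  # -> 64 76
```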
A More Efficient Robot
Although remarkably novel, the optical lace isn’t yet ready for prime time. One problem is scalability: because of light loss, the material is limited to a certain size. However, rather than coating an entire robot, it may help to add optical lace to body parts where perception is critical—for example, fingertips and hands.
The team sees plenty of potential to keep developing the artificial flesh. Depending on particular needs, both the light guides and scaffold can be modified for sensitivity, spatial resolution, and accuracy. Multiple optical fibers that measure for different aspects—pressure, pain, temperature—can potentially be embedded in the same region, giving robots a multitude of senses.
In this way, the authors said, they hope to reduce the number of electronics and combine signals from multiple sensors without losing information. By taking inspiration from biological networks, it may even be possible to use various inputs through an optical lace to control how the robot behaves, closing the loop from sensation to action.
Image Credit: Cornell Organic Robotics Lab. A flexible, porous lattice structure is threaded with stretchable optical fibers containing more than a dozen mechanosensors and attached to an LED light. When the lattice structure is pressed, the sensors pinpoint changes in the photon flow.
#435806 Boston Dynamics’ Spot Robot Dog ...
Boston Dynamics is announcing this morning that Spot, its versatile quadruped robot, is now for sale. The machine’s animal-like behavior regularly electrifies crowds at tech conferences, and like other Boston Dynamics robots, Spot is a YouTube sensation whose videos amass millions of views.
Now anyone interested in buying a Spot—or a pack of them—can go to the company’s website and submit an order form. But don’t pull out your credit card just yet. Spot may cost as much as a luxury car, and it is not really available to consumers. The initial sale, described as an “early adopter program,” is targeting businesses. Boston Dynamics wants to find customers in select industries and help them deploy Spots in real-world scenarios.
“What we’re doing is the productization of Spot,” Boston Dynamics CEO Marc Raibert tells IEEE Spectrum. “It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field.”
Boston Dynamics has always been a secretive company, but last month, in preparation for launching Spot (formerly SpotMini), it allowed our photographers into its headquarters in Waltham, Mass., for a special shoot. In that session, we captured Spot and also Atlas—the company’s highly dynamic humanoid—in action, walking, climbing, and jumping.
You can see Spot’s photo interactives on our Robots Guide. (The Atlas interactives will appear in coming weeks.)
Gif: Bob O’Connor/Robots.ieee.org
And if you’re in the market for a robot dog, here’s everything we know about Boston Dynamics’ plans for Spot.
Who can buy a Spot?
If you’re interested in one, you should go to Boston Dynamics’ website and take a look at the information the company requires from potential buyers. Again, the focus is on businesses. Boston Dynamics says it wants to get Spots out to initial customers that “either have a compelling use case or a development team that we believe can do something really interesting with the robot,” says VP of business development Michael Perry. “Just because of the scarcity of the robots that we have, we’re going to have to be selective about which partners we start working together with.”
What can Spot do?
As you’ve probably seen on the YouTube videos, Spot can walk, trot, avoid obstacles, climb stairs, and much more. The robot’s hardware is almost completely custom, with powerful compute boards for control, and five sensor modules located on every side of Spot’s body, allowing it to survey the space around itself from any direction. The legs are powered by 12 custom motors with a reduction, with a top speed of 1.6 meters per second. The robot can operate for 90 minutes on a charge. In addition to the basic configuration, you can integrate up to 14 kilograms of extra hardware to a payload interface. Among the payload packages Boston Dynamics plans to offer are a 6 degrees-of-freedom arm, a version of which can be seen in some of the YouTube videos, and a ring of cameras called SpotCam that could be used to create Street View–type images inside buildings.
Image: Boston Dynamics
How do you control Spot?
Learning to drive the robot using its gaming-style controller “takes 15 seconds,” says CEO Marc Raibert. He explains that while teleoperating Spot, you may not realize that the robot is doing a lot of the work. “You don’t really see what that is like until you’re operating the joystick and you go over a box and you don’t have to do anything,” he says. “You’re practically just thinking about what you want to do and the robot takes care of everything.” The control methods have evolved significantly since the company’s first quadruped robots, machines like BigDog and LS3. “The control in those days was much more monolithic, and now we have what we call a sequential composition controller,” Raibert says, “which lets the system have control of the dynamics in a much broader variety of situations.” That means that every time one of Spot’s feet touches or doesn’t touch the ground, this different state of the body affects the basic physical behavior of the robot, and the controller adjusts accordingly. “Our controller is designed to understand what that state is and have different controls depending upon the case,” he says.
How much does Spot cost?
Boston Dynamics would not give us specific details about pricing, saying only that potential customers should contact them for a quote and that there is going to be a leasing option. It’s understandable: As with any expensive and complex product, prices can vary on a case by case basis and depend on factors such as configuration, availability, level of support, and so forth. When we pressed the company for at least an approximate base price, Perry answered: “Our general guidance is that the total cost of the early adopter program lease will be less than the price of a car—but how nice a car will depend on the number of Spots leased and how long the customer will be leasing the robot.”
Can Spot do mapping and SLAM out of the box?
The robot’s perception system includes cameras and 3D sensors (there is no lidar), used to avoid obstacles and sense the terrain so it can climb stairs and walk over rubble. It’s also used to create 3D maps. According to Boston Dynamics, the first software release will offer just teleoperation. But a second release, to be available in the next few weeks, will enable more autonomous behaviors. For example, it will be able to do mapping and autonomous navigation—similar to what the company demonstrated in a video last year, showing how you can drive the robot through an environment, create a 3D point cloud of the environment, and then set waypoints within that map for Spot to go out and execute as a mission. For customers that have their own autonomy stack and are interested in using it on Spot, Boston Dynamics made the robot “as plug and play as possible in terms of how third-party software integrates into Spot’s system,” Perry says. This is done mainly via an API.
How does Spot’s API work?
Boston Dynamics built an API so that customers can create application-level products with Spot without having to deal with low-level control processes. “Rather than going and building joint-level kinematic access to the robot,” Perry explains, “we created a high-level API and SDK that allows people who are used to Web app development or development of missions for drones to use that same scope, and they’ll be able to build applications for Spot.”
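To make that contrast concrete, here is a purely hypothetical sketch of joint-level versus application-level control. None of these class or method names come from the actual Spot SDK, whose details Boston Dynamics had not published at the time; the sketch is only meant to show the difference in abstraction Perry is describing.

```python
# Purely hypothetical sketch of joint-level control versus the kind of
# application-level API the article describes. None of these classes or
# methods come from the real Spot SDK.

class HypotheticalJointInterface:
    """What application developers are *not* expected to write against."""
    def set_joint_torque(self, leg, joint, torque_nm):
        ...  # low-level kinematic/dynamic access


class HypotheticalMissionClient:
    """What an application-level SDK looks like: mission-shaped verbs,
    similar in spirit to drone-mission or web-app development."""
    def __init__(self, robot_address):
        self.robot_address = robot_address

    def goto_waypoint(self, waypoint_id):
        print(f"navigating to {waypoint_id}")

    def capture_panorama(self, label):
        print(f"capturing panorama: {label}")


# A construction-monitoring "app" written against the high-level interface:
robot = HypotheticalMissionClient("192.168.80.3")
for wp in ["stairwell", "slab-pour", "loading-dock"]:
    robot.goto_waypoint(wp)
    robot.capture_panorama(label=wp)
```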
What applications should we see first?
Boston Dynamics envisions Spot as a platform: a versatile mobile robot that companies can use to build applications based on their needs. What types of applications? The company says the best way to find out is to put Spot in the hands of as many users as possible and let them develop the applications. Some possibilities include performing remote data collection and light manipulation in construction sites; monitoring sensors and infrastructure at oil and gas sites; and carrying out dangerous missions such as bomb disposal and hazmat inspections. There are also other promising areas such as security, package delivery, and even entertainment. “We have some initial guesses about which markets could benefit most from this technology, and we’ve been engaging with customers doing proof-of-concept trials,” Perry says. “But at the end of the day, that value story is really going to be determined by people going out and exploring and pushing the limits of the robot.”
Photo: Bob O'Connor
How many Spots have been produced?
Last June, Boston Dynamics said it was planning to build about a hundred Spots by the end of the year, eventually ramping up production to a thousand units per year by the middle of this year. The company admits that it is not quite there yet. It has built close to a hundred beta units, which it has used to test and refine the final design. This version is now being mass manufactured, but the company is still “in the early tens of robots,” Perry says.
How did Boston Dynamics test Spot?
The company has tested the robots during proof-of-concept trials with customers, and at least one is already using Spot to survey construction sites. The company has also done reliability tests at its facility in Waltham, Mass. “We drive around, not quite day and night, but hundreds of miles a week, so that we can collect reliability data and find bugs,” Raibert says.
What about competitors?
In recent years, there’s been a proliferation of quadruped robots that will compete in the same space as Spot. The most prominent of these is ANYmal, from ANYbotics, a Swiss company that spun out of ETH Zurich. Other quadrupeds include Vision from Ghost Robotics, used by one of the teams in the DARPA Subterranean Challenge; and Laikago and Aliengo from Unitree Robotics, a Chinese startup. Raibert views the competition as a positive thing. “We’re excited to see all these companies out there helping validate the space,” he says. “I think we’re more in competition with finding the right need [that robots can satisfy] than we are with the other people building the robots at this point.”
Why is Boston Dynamics selling Spot now?
Boston Dynamics has long been an R&D-centric firm, with most of its early funding coming from military programs, but it says commercializing robots has always been a goal. Productizing its machines probably accelerated when the company was acquired by Google’s parent company, Alphabet, which had an ambitious (and now apparently very dead) robotics program. The commercial focus likely continued after Alphabet sold Boston Dynamics to SoftBank, whose famed CEO, Masayoshi Son, is known for his love of robots—and profits.
Which should I buy, Spot or Aibo?
Don’t laugh. We’ve gotten emails from individuals interested in purchasing a Spot for personal use after seeing our stories on the robot. Alas, Spot is not a bigger, fancier Aibo pet robot. It’s an expensive, industrial-grade machine that requires development and maintenance. If you’re, say, Jeff Bezos, you could probably convince Boston Dynamics to sell you one, but otherwise the company will prioritize businesses.
What’s next for Boston Dynamics?
On the commercial side of things, other than Spot, Boston Dynamics is interested in the logistics space. Earlier this year it announced the acquisition of Kinema Systems, a startup that had developed vision sensors and deep-learning software to enable industrial robot arms to locate and move boxes. There’s also Handle, the mobile robot on whegs (wheels + legs) that can pick up and move packages. Boston Dynamics is hiring both in Waltham, Mass., and Mountain View, Calif., where Kinema was located.
Okay, can I watch a cool video now?
During our visit to Boston Dynamics’ headquarters last month, we saw Atlas and Spot performing some cool new tricks that we unfortunately are not allowed to tell you about. We hope that, although the company is putting a lot of energy and resources into its commercial programs, Boston Dynamics will still find plenty of time to improve its robots, build new ones, and of course, keep making videos. [Update: The company has just released a new Spot video, which we’ve embedded at the top of the post.] [Update 2: We should have known. Boston Dynamics sure knows how to create buzz for itself: It has just released a second video, this time of Atlas doing some of those tricks we saw during our visit and couldn’t tell you about. Enjoy!]
[ Boston Dynamics ]
#435791 To Fly Solo, Racing Drones Have a Need ...
Drone racing’s ultimate vision of quadcopters weaving nimbly through obstacle courses has attracted far less excitement and investment than self-driving cars aimed at reshaping ground transportation. But the U.S. military and defense industry are betting on autonomous drone racing as the next frontier for developing AI so that it can handle high-speed navigation within tight spaces without human intervention.
The autonomous drone challenge requires split-second decision-making with six degrees of freedom instead of a car’s mere two degrees of road freedom. One research team developing the AI necessary for controlling autonomous racing drones is the Robotics and Perception Group at the University of Zurich in Switzerland. In late May, the Swiss researchers were among nine teams revealed to be competing in the two-year AlphaPilot open innovation challenge sponsored by U.S. aerospace company Lockheed Martin. The winning team will walk away with up to $2.25 million for beating other autonomous racing drones and a professional human drone pilot in head-to-head competitions.
“I think it is important to first point out that having an autonomous drone to finish a racing track at high speeds or even beating a human pilot does not imply that we can have autonomous drones [capable of] navigating in real-world, complex, unstructured, unknown environments such as disaster zones, collapsed buildings, caves, tunnels or narrow pipes, forests, military scenarios, and so on,” says Davide Scaramuzza, a professor of robotics and perception at the University of Zurich and ETH Zurich. “However, the robust and computationally efficient state estimation algorithms, control, and planning algorithms developed for autonomous drone racing would represent a starting point.”
The nine teams that made the cut—from a pool of 424 AlphaPilot applicants—will compete in four 2019 racing events organized under the Drone Racing League’s Artificial Intelligence Robotic Racing Circuit, says Keith Lynn, program manager for AlphaPilot at Lockheed Martin. To ensure an apples-to-apples comparison of each team’s AI secret sauce, each AlphaPilot team will upload its AI code into identical, specially-built drones that have the NVIDIA Xavier GPU at the core of the onboard computing hardware.
“Lockheed Martin is offering mentorship to the nine AlphaPilot teams to support their AI tech development and innovations,” says Lynn. The company “will be hosting a week-long Developers Summit at MIT in July, dedicated to workshopping and improving AlphaPilot teams’ code,” he adds. He notes that each team will retain the intellectual property rights to its AI code.
The AlphaPilot challenge takes inspiration from older autonomous drone racing events hosted by academic researchers, Scaramuzza says. He credits Hyungpil Moon, a professor of robotics and mechanical engineering at Sungkyunkwan University in South Korea, for having organized the annual autonomous drone racing competition at the International Conference on Intelligent Robots and Systems since 2016.
It’s no easy task to create and train AI that can perform high-speed flight through complex environments by relying on visual navigation. One big challenge comes from how drones can accelerate abruptly, take sharp turns, fly sideways, do zig-zag patterns, and even perform back flips. That means camera images can suddenly appear tilted or even upside down during drone flight. Motion blur may occur when a drone flies very close to structures at high speeds and camera pixels collect light from multiple directions. Both cameras and visual software can also struggle to compensate for sudden changes between light and dark parts of an environment.
To lend AI a helping hand, Scaramuzza’s group recently published a drone racing dataset that includes realistic training data taken from a drone flown by a professional pilot in both indoor and outdoor spaces. The data, which includes complicated aerial maneuvers such as back flips, flight sequences that cover hundreds of meters, and flight speeds of up to 83 kilometers per hour, was presented at the 2019 IEEE International Conference on Robotics and Automation.
The drone racing dataset also includes data captured by the group’s special bioinspired event cameras that can detect changes in motion on a per-pixel basis within microseconds. By comparison, ordinary cameras need milliseconds (each millisecond being 1,000 microseconds) to compare motion changes in each image frame. The event cameras have already proven capable of helping drones nimbly dodge soccer balls thrown at them by the Swiss lab’s researchers.
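One way to picture the difference: an event camera emits a stream of per-pixel (x, y, timestamp, polarity) events rather than full frames, and software accumulates recent events into an image-like surface at whatever rate it likes. The toy sketch below illustrates that accumulation step; the resolution and time window are arbitrary choices, and this is not the Zurich group’s pipeline.

```python
# Toy illustration of event-camera data: instead of full frames every few
# milliseconds, the sensor emits per-pixel (x, y, timestamp_us, polarity)
# events within microseconds of a brightness change. Accumulating recent
# events gives a sparse "motion image".
import numpy as np

WIDTH, HEIGHT = 346, 260  # a common event-camera resolution, for illustration


def accumulate_events(events, now_us, window_us=5000):
    """Sum event polarities from the last `window_us` microseconds into a frame."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int16)
    for x, y, t_us, polarity in events:
        if now_us - t_us <= window_us:
            frame[y, x] += 1 if polarity else -1
    return frame


# Three events near the same pixel within the last 5 ms:
events = [(100, 50, 999_000, True), (101, 50, 999_400, True), (100, 51, 995_500, False)]
print(accumulate_events(events, now_us=1_000_000)[50, 100])  # -> 1
```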
The Swiss group’s work on the racing drone dataset received funding in part from the U.S. Defense Advanced Research Projects Agency (DARPA), which acts as the U.S. military’s special R&D arm for more futuristic projects. Specifically, the funding came from DARPA’s Fast Lightweight Autonomy program that envisions small autonomous drones capable of flying at high speeds through cluttered environments without GPS guidance or communication with human pilots.
Such speedy drones could serve as military scouts checking out dangerous buildings or alleys. They could also someday help search-and-rescue teams find people trapped in semi-collapsed buildings or lost in the woods. Being able to fly at high speed without crashing into things also makes a drone more efficient at all sorts of tasks by making the most of limited battery life, Scaramuzza says. After all, most drone battery life gets used up by the need to hover in flight and doesn’t get drained much by flying faster.
Even if AI manages to conquer the drone racing obstacle courses, that would be the end of the beginning of the technology’s development. What would still be required? Scaramuzza specifically singled out handling low-visibility conditions (smoke, dust, fog, rain, snow, fire, hail) as some of the biggest challenges for vision-based algorithms and AI in complex real-life environments.
“I think we should develop and release datasets containing smoke, dust, fog, rain, fire, etc. if we want to allow using autonomous robots to complement human rescuers in saving people lives after an earthquake or natural disaster in the future,” Scaramuzza says.
#435779 This Robot Ostrich Can Ride Around on ...
Proponents of legged robots say that they make sense because legs are often required to go where humans go. Proponents of wheeled robots say, “Yeah, that’s great but watch how fast and efficient my robot is, compared to yours.” Some robots try to take advantage of wheels and legs with hybrid designs like whegs or wheeled feet, but a simpler and more versatile solution is to do what humans do, and just take advantage of wheels when you need them.
We’ve seen a few experiments with this. The University of Michigan managed to convince Cassie to ride a Segway, with mostly positive (but occasionally quite negative) results. Segways and hoverboard-like systems can provide wheeled mobility for legged robots over flat terrain, but they can’t handle things like stairs, which is kind of the whole point of having a robot with legs anyway.
Image: UC Berkeley
From left, a Segway, a hovercraft, and hovershoes, with complexity in terms of user control increasing from left to right.
At UC Berkeley’s Hybrid Robotics Lab, led by Koushil Sreenath, researchers have taken things a step further. They are teaching their Cassie bipedal robot (called Cassie Cal) to wheel around on a pair of hovershoes. Hovershoes are like hoverboards that have been chopped in half, resulting in a pair of motorized single-wheel skates. You balance on the skates, and control them by leaning forwards and backwards and left and right, which causes each skate to accelerate or decelerate in an attempt to keep itself upright. It’s not easy to get these things to work, even for a human, but by adding a sensor package to Cassie the UC Berkeley researchers have managed to get it to zip around campus fully autonomously.
Remember, Cassie is operating autonomously here—it’s performing vSLAM (with an Intel RealSense) and doing all of its own computation onboard in real time. Watching it jolt across that cracked sidewalk is particularly impressive, especially considering that it only has pitch control over its ankles and can’t roll its feet to maintain maximum contact with the hovershoes. But you can see the advantage that this particular platform offers to a robot like Cassie, including the ability to handle stairs. Stairs in one direction, anyway.
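The hovershoes themselves close a very simple loop: lean forward and each wheel accelerates to stay underneath you, so Cassie’s job is to produce that lean with its legs while staying balanced. Here’s a toy version of the wheel-level idea, a PD loop on lean angle; the gains and the simplified interface are invented and bear no relation to the actual Berkeley controller.

```python
# Toy PD loop capturing the hovershoe idea: the wheel accelerates to chase
# the rider's lean so the platform stays under the center of mass.
# Gains and the simplified dynamics are invented; the real Cassie controller
# is far more involved (full-body dynamics, vSLAM, onboard planning).
import math


def hovershoe_wheel_accel(lean_angle_rad, lean_rate_rad_s, kp=35.0, kd=4.0):
    """Commanded wheel acceleration (m/s^2) from the rider's lean."""
    return kp * lean_angle_rad + kd * lean_rate_rad_s


# Lean forward 5 degrees and the skate accelerates to roll back under you:
print(hovershoe_wheel_accel(math.radians(5.0), 0.0))  # ~3.1 m/s^2
```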
It’s a testament to the robustness of UC Berkeley’s controller that they were willing to let the robot operate untethered and outside, and it sounds like they’re thinking long-term about how legged robots on wheels would be real-world useful:
Our feedback control and autonomous system allow for swift movement through urban environments to aid in everything from food delivery to security and surveillance to search and rescue missions. This work can also help with transportation in large factories and warehouses.
For more details, we spoke with the UC Berkeley students (Shuxiao Chen, Jonathan Rogers, and Bike Zhang) via email.
IEEE Spectrum: How representative of Cassie’s real-world performance is what we see in the video? What happens when things go wrong?
Cassie’s real-world performance is similar to what we see in the video. Cassie can ride the hovershoes successfully all around the campus. Our current controller allows Cassie to robustly ride the hovershoes and rejects various perturbations. At present, one of the failure modes is when the hovershoe rolls to the side—this happens when it goes sideways down a step or encounters a large obstacle on one side of it, causing it to roll over. Under these circumstances, Cassie doesn’t have sufficient control authority (due to the thin narrow feet) to get the hovershoe back on its wheel.
The Hybrid Robotics Lab has been working on robots that walk over challenging terrain—how do wheeled platforms like hovershoes fit in with that?
Surprisingly, this research is related to our prior work on walking on discrete terrain. While locomotion using legs is efficient when traveling over rough and discrete terrain, wheeled locomotion is more efficient when traveling over flat continuous terrain. Enabling legged robots to ride on various micro-mobility platforms will offer multimodal locomotion capabilities, improving the efficiency of locomotion over various terrains.
Our current research furthers the locomotion ability for bipedal robots over continuous terrains by using a wheeled platform. In the long run, we would like to develop multi-modal locomotion strategies based on our current and prior work to allow legged robots to robustly and efficiently locomote in our daily life.
Photo: UC Berkeley
In their experiments, the UC Berkeley researchers say Cassie proved quite capable of riding the hovershoes over rough and uneven terrain, including going down stairs.
How long did it take to train Cassie to use the hovershoes? Are there any hovershoe skills that Cassie is better at than an average human?
We spent about eight months to develop our whole system, including a controller, a path planner, and a vision system. This involved developing mathematical models of Cassie and the hovershoes, setting up a dynamical simulation, figuring out how to interface and communicate with various sensors and Cassie, and doing several experiments to slowly improve performance. In contrast, a human with a good sense of balance needs a few hours to learn to use the hovershoes. A human who has never used skates or skis will probably need a longer time.
A human can easily turn in place on the hovershoes, while Cassie cannot do this motion currently due to our algorithm requiring a non-zero forward speed in order to turn. However, Cassie is much better at riding the hovershoes over rough and uneven terrain including riding the hovershoes down some stairs!
What would it take to make Cassie faster or more agile on the hovershoes?
While Cassie can currently move at a decent pace on the hovershoes and navigate obstacles, Cassie’s ability to avoid obstacles at rapid speeds is constrained by the sensing, the controller, and the onboard computation. To enable Cassie to dynamically weave around obstacles at high speeds exhibiting agile motions, we need to make progress on different fronts.
We need planners that take into account the entire dynamics of the Cassie-Hovershoe system and rapidly generate dynamically-feasible trajectories; we need controllers that tightly coordinate all the degrees-of-freedom of Cassie to dynamically move while balancing on the hovershoes; we need sensors that are robust to motion-blur artifacts caused due to fast turns; and we need onboard computation that can execute our algorithms at real-time speeds.
What are you working on next?
We are working on enabling more aggressive movements for Cassie on the hovershoes by fully exploiting Cassie’s dynamics. We are working on approaches that enable us to easily go beyond hovershoes to other challenging micro-mobility platforms. We are working on enabling Cassie to step onto and off from wheeled platforms such as hovershoes. We would like to create a future of multi-modal locomotion strategies for legged robots to enable them to efficiently help people in our daily life.
“Feedback Control for Autonomous Riding of Hovershoes by a Cassie Bipedal Robot,” by Shuxiao Chen, Jonathan Rogers, Bike Zhang, and Koushil Sreenath from the Hybrid Robotics Lab at UC Berkeley, has been submitted to IEEE Robotics and Automation Letters with the option to be presented at the 2019 IEEE-RAS International Conference on Humanoid Robots.