#435806 Boston Dynamics’ Spot Robot Dog ...
Boston Dynamics is announcing this morning that Spot, its versatile quadruped robot, is now for sale. The machine’s animal-like behavior regularly electrifies crowds at tech conferences, and like other Boston Dynamics robots, Spot is a YouTube sensation whose videos amass millions of views.
Now anyone interested in buying a Spot—or a pack of them—can go to the company’s website and submit an order form. But don’t pull out your credit card just yet. Spot may cost as much as a luxury car, and it is not really available to consumers. The initial sale, described as an “early adopter program,” is targeting businesses. Boston Dynamics wants to find customers in select industries and help them deploy Spots in real-world scenarios.
“What we’re doing is the productization of Spot,” Boston Dynamics CEO Marc Raibert tells IEEE Spectrum. “It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field.”
Boston Dynamics has always been a secretive company, but last month, in preparation for launching Spot (formerly SpotMini), it allowed our photographers into its headquarters in Waltham, Mass., for a special shoot. In that session, we captured Spot and also Atlas—the company’s highly dynamic humanoid—in action, walking, climbing, and jumping.
You can see Spot’s photo interactives on our Robots Guide. (The Atlas interactives will appear in coming weeks.)
And if you’re in the market for a robot dog, here’s everything we know about Boston Dynamics’ plans for Spot.
Who can buy a Spot?
If you’re interested in one, you should go to Boston Dynamics’ website and take a look at the information the company requires from potential buyers. Again, the focus is on businesses. Boston Dynamics says it wants to get Spots out to initial customers that “either have a compelling use case or a development team that we believe can do something really interesting with the robot,” says VP of business development Michael Perry. “Just because of the scarcity of the robots that we have, we’re going to have to be selective about which partners we start working together with.”
What can Spot do?
As you’ve probably seen on the YouTube videos, Spot can walk, trot, avoid obstacles, climb stairs, and much more. The robot’s hardware is almost completely custom, with powerful compute boards for control, and five sensor modules located on every side of Spot’s body, allowing it to survey the space around itself from any direction. The legs are powered by 12 custom motors with gear reduction, giving the robot a top speed of 1.6 meters per second. The robot can operate for 90 minutes on a charge. In addition to the basic configuration, you can attach up to 14 kilograms of extra hardware to a payload interface. Among the payload packages Boston Dynamics plans to offer are a 6-degree-of-freedom arm, a version of which can be seen in some of the YouTube videos, and a ring of cameras called SpotCam that could be used to create Street View–type images inside buildings.
How do you control Spot?
Learning to drive the robot using its gaming-style controller “takes 15 seconds,” says CEO Marc Raibert. He explains that while teleoperating Spot, you may not realize that the robot is doing a lot of the work. “You don’t really see what that is like until you’re operating the joystick and you go over a box and you don’t have to do anything,” he says. “You’re practically just thinking about what you want to do and the robot takes care of everything.” The control methods have evolved significantly since the company’s first quadruped robots, machines like BigDog and LS3. “The control in those days was much more monolithic, and now we have what we call a sequential composition controller,” Raibert says, “which lets the system have control of the dynamics in a much broader variety of situations.” That means that every time one of Spot’s feet touches down or lifts off the ground, the robot’s contact state changes its basic physical behavior, and the controller adjusts accordingly. “Our controller is designed to understand what that state is and have different controls depending upon the case,” he says.
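To make that idea a bit more concrete, here is a minimal sketch of what switching control laws on contact state can look like. It is purely illustrative: the states, names, and structure are assumptions, not Boston Dynamics’ implementation.

```python
# Minimal sketch of contact-state-dependent control (illustrative only; not
# Boston Dynamics' controller). Every touch-down or lift-off changes which
# feet support the body, and the active control law is re-selected to match.

from dataclasses import dataclass

@dataclass
class LegState:
    in_contact: bool  # reading from a foot contact sensor

def select_control_mode(legs: dict) -> str:
    """Pick a control mode from the set of feet currently on the ground."""
    stance = [name for name, leg in legs.items() if leg.in_contact]
    if len(stance) == 4:
        return "stand"    # full support: stiff posture control
    if len(stance) >= 2:
        return "walk"     # partial support: balance the body over the stance feet
    if len(stance) == 1:
        return "recover"  # single support: prioritize balance recovery
    return "flight"       # no contact: control leg swing and prepare to land

# Example: the left-front foot has just lifted off.
legs = {
    "left_front": LegState(False),
    "right_front": LegState(True),
    "left_hind": LegState(True),
    "right_hind": LegState(True),
}
print(select_control_mode(legs))  # -> "walk"
```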
How much does Spot cost?
Boston Dynamics would not give us specific details about pricing, saying only that potential customers should contact them for a quote and that there is going to be a leasing option. It’s understandable: As with any expensive and complex product, prices can vary on a case by case basis and depend on factors such as configuration, availability, level of support, and so forth. When we pressed the company for at least an approximate base price, Perry answered: “Our general guidance is that the total cost of the early adopter program lease will be less than the price of a car—but how nice a car will depend on the number of Spots leased and how long the customer will be leasing the robot.”
Can Spot do mapping and SLAM out of the box?
The robot’s perception system includes cameras and 3D sensors (there is no lidar), used to avoid obstacles and sense the terrain so it can climb stairs and walk over rubble. It’s also used to create 3D maps. According to Boston Dynamics, the first software release will offer just teleoperation. But a second release, to be available in the next few weeks, will enable more autonomous behaviors. For example, it will be able to do mapping and autonomous navigation—similar to what the company demonstrated in a video last year, showing how you can drive the robot through an environment, create a 3D point cloud of it, and then set waypoints within that map for Spot to go out and execute as a mission. For customers that have their own autonomy stack and are interested in running it on Spot, Boston Dynamics made integration “as plug and play as possible in terms of how third-party software integrates into Spot’s system,” Perry says. This is done mainly via an API.
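As a rough illustration of that record-then-replay workflow, a mapped mission can be thought of as an ordered list of waypoints in the map frame that the robot visits in sequence. The sketch below is an assumption about how such a mission might be represented; the names and the navigation call are invented, not Spot’s software.

```python
# Illustrative sketch of a waypoint mission replayed over a previously
# recorded map. Names and structures are invented; this is not Boston
# Dynamics' software.

from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    x: float    # meters, in the map frame
    y: float
    yaw: float  # radians, desired heading at the waypoint

def run_mission(waypoints, goto):
    """Visit each waypoint in order; `goto` is the robot's navigation call."""
    for wp in waypoints:
        if not goto(wp.x, wp.y, wp.yaw):
            print(f"Failed to reach {wp.name}; aborting mission")
            return False
    return True

# A mission recorded by driving the robot through a site once.
mission = [
    Waypoint("stairs_top", 12.0, 3.5, 1.57),
    Waypoint("gauge_panel", 18.2, 4.1, 0.0),
    Waypoint("charging_dock", 0.0, 0.0, 3.14),
]

if __name__ == "__main__":
    # Stand-in navigation function so the sketch runs on its own.
    run_mission(mission, goto=lambda x, y, yaw: True)
```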
How does Spot’s API work?
Boston Dynamics built an API so that customers can create application-level products with Spot without having to deal with low-level control processes. “Rather than going and building joint-level kinematic access to the robot,” Perry explains, “we created a high-level API and SDK that allows people who are used to Web app development or development of missions for drones to use that same scope, and they’ll be able to build applications for Spot.”
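The article doesn’t detail that SDK, but the spirit of an application-level API, where you request behaviors rather than command joints, might look something like the hypothetical sketch below. Every name here is an assumption made for illustration, not Boston Dynamics’ actual interface.

```python
# Hypothetical application-level robot API, in the spirit Perry describes:
# high-level commands instead of joint-level kinematic access. None of these
# names are Boston Dynamics'; they exist only to illustrate the idea.

class RobotClient:
    def __init__(self, hostname: str):
        self.hostname = hostname

    def power_on(self) -> None:
        print(f"powering on robot at {self.hostname}")

    def stand(self) -> None:
        print("standing up")

    def walk_to(self, x: float, y: float, heading: float) -> None:
        # The robot's own controller handles gait, balance, and obstacle
        # avoidance; the application only states where to go.
        print(f"walking to ({x:.1f}, {y:.1f}), heading {heading:.2f} rad")

robot = RobotClient("192.168.80.3")
robot.power_on()
robot.stand()
robot.walk_to(5.0, 2.0, heading=1.57)
```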
What applications should we see first?
Boston Dynamics envisions Spot as a platform: a versatile mobile robot that companies can use to build applications based on their needs. What types of applications? The company says the best way to find out is to put Spot in the hands of as many users as possible and let them develop the applications. Some possibilities include performing remote data collection and light manipulation in construction sites; monitoring sensors and infrastructure at oil and gas sites; and carrying out dangerous missions such as bomb disposal and hazmat inspections. There are also other promising areas such as security, package delivery, and even entertainment. “We have some initial guesses about which markets could benefit most from this technology, and we’ve been engaging with customers doing proof-of-concept trials,” Perry says. “But at the end of the day, that value story is really going to be determined by people going out and exploring and pushing the limits of the robot.”
How many Spots have been produced?
Last June, Boston Dynamics said it was planning to build about a hundred Spots by the end of the year, eventually ramping up production to a thousand units per year by the middle of this year. The company admits that it is not quite there yet. It has built close to a hundred beta units, which it has used to test and refine the final design. This version is now being mass manufactured, but the company is still “in the early tens of robots,” Perry says.
How did Boston Dynamics test Spot?
The company has tested the robots during proof-of-concept trials with customers, and at least one is already using Spot to survey construction sites. The company has also done reliability tests at its facility in Waltham, Mass. “We drive around, not quite day and night, but hundreds of miles a week, so that we can collect reliability data and find bugs,” Raibert says.
What about competitors?
In recent years, there’s been a proliferation of quadruped robots that will compete in the same space as Spot. The most prominent of these is ANYmal, from ANYbotics, a Swiss company that spun out of ETH Zurich. Other quadrupeds include Vision from Ghost Robotics, used by one of the teams in the DARPA Subterranean Challenge; and Laikago and Aliengo from Unitree Robotics, a Chinese startup. Raibert views the competition as a positive thing. “We’re excited to see all these companies out there helping validate the space,” he says. “I think we’re more in competition with finding the right need [that robots can satisfy] than we are with the other people building the robots at this point.”
Why is Boston Dynamics selling Spot now?
Boston Dynamics has long been an R&D-centric firm, with most of its early funding coming from military programs, but it says commercializing robots has always been a goal. Productizing its machines probably accelerated when the company was acquired by Google’s parent company, Alphabet, which had an ambitious (and now apparently very dead) robotics program. The commercial focus likely continued after Alphabet sold Boston Dynamics to SoftBank, whose famed CEO, Masayoshi Son, is known for his love of robots—and profits.
Which should I buy, Spot or Aibo?
Don’t laugh. We’ve gotten emails from individuals interested in purchasing a Spot for personal use after seeing our stories on the robot. Alas, Spot is not a bigger, fancier Aibo pet robot. It’s an expensive, industrial-grade machine that requires development and maintenance. If you’re Jeff Bezos, you could probably convince Boston Dynamics to sell you one, but otherwise the company will prioritize businesses.
What’s next for Boston Dynamics?
On the commercial side of things, other than Spot, Boston Dynamics is interested in the logistics space. Earlier this year it announced the acquisition of Kinema Systems, a startup that had developed vision sensors and deep-learning software to enable industrial robot arms to locate and move boxes. There’s also Handle, the mobile robot on whegs (wheels + legs), that can pick up and move packages. Boston Dynamics is hiring both in Waltham, Mass., and Mountain View, Calif., where Kinema was located.
Okay, can I watch a cool video now?
During our visit to Boston Dynamics’ headquarters last month, we saw Atlas and Spot performing some cool new tricks that we unfortunately are not allowed to tell you about. We hope that, although the company is putting a lot of energy and resources into its commercial programs, Boston Dynamics will still find plenty of time to improve its robots, build new ones, and of course, keep making videos. [Update: The company has just released a new Spot video, which we’ve embedded at the top of the post.][Update 2: We should have known. Boston Dynamics sure knows how to create buzz for itself: It has just released a second video, this time of Atlas doing some of those tricks we saw during our visit and couldn’t tell you about. Enjoy!]
[ Boston Dynamics ]
#435779 This Robot Ostrich Can Ride Around on ...
Proponents of legged robots say that they make sense because legs are often required to go where humans go. Proponents of wheeled robots say, “Yeah, that’s great, but watch how fast and efficient my robot is compared to yours.” Some robots try to take advantage of both wheels and legs with hybrid designs like whegs or wheeled feet, but a simpler and more versatile solution is to do what humans do, and just take advantage of wheels when you need them.
We’ve seen a few experiments with this. The University of Michigan managed to convince Cassie to ride a Segway, with mostly positive (but occasionally quite negative) results. Segways and hoverboard-like systems can provide wheeled mobility for legged robots over flat terrain, but they can’t handle things like stairs, which is kind of the whole point of having a robot with legs anyway.
Image: UC Berkeley
From left: a Segway, a hoverboard, and hovershoes, with the complexity of user control increasing from left to right.
At UC Berkeley’s Hybrid Robotics Lab, led by Koushil Sreenath, researchers have taken things a step further. They are teaching their Cassie bipedal robot (called Cassie Cal) to wheel around on a pair of hovershoes. Hovershoes are like hoverboards that have been chopped in half, resulting in a pair of motorized single-wheel skates. You balance on the skates, and control them by leaning forwards and backwards and left and right, which causes each skate to accelerate or decelerate in an attempt to keep itself upright. It’s not easy to get these things to work, even for a human, but by adding a sensor package to Cassie the UC Berkeley researchers have managed to get it to zip around campus fully autonomously.
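The self-balancing behavior the hovershoes provide is essentially an inverted-pendulum feedback loop: lean forward and the wheel accelerates to move the support point back under the load. Here is a minimal sketch of that idea, with invented gains and a crude stand-in for the dynamics; it is not the Berkeley controller or the hovershoe firmware.

```python
# Minimal inverted-pendulum-style balance loop, like what a hovershoe does
# internally: pitching forward makes the wheel accelerate to stay under the
# rider. Gains and dynamics are invented for illustration only.

KP, KD = 40.0, 6.0  # proportional / derivative gains on the pitch angle
DT = 0.01           # control period, seconds

def wheel_acceleration(pitch: float, pitch_rate: float) -> float:
    """Commanded wheel acceleration (m/s^2) from the measured lean (rad, rad/s)."""
    return KP * pitch + KD * pitch_rate

# Simulate a rider leaning 0.1 rad forward; the wheel speeds up in response.
speed, pitch, pitch_rate = 0.0, 0.1, 0.0
for _ in range(5):
    speed += wheel_acceleration(pitch, pitch_rate) * DT
    pitch *= 0.8  # crude stand-in for the lean being corrected as the wheel catches up
    print(f"wheel speed {speed:.2f} m/s, pitch {pitch:.3f} rad")
```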
Remember, Cassie is operating autonomously here—it’s performing vSLAM (with an Intel RealSense) and doing all of its own computation onboard in real time. Watching it jolt across that cracked sidewalk is particularly impressive, especially considering that it only has pitch control over its ankles and can’t roll its feet to maintain maximum contact with the hovershoes. But you can see the advantage that this particular platform offers to a robot like Cassie, including the ability to handle stairs. Stairs in one direction, anyway.
It’s a testament to the robustness of UC Berkeley’s controller that they were willing to let the robot operate untethered and outside, and it sounds like they’re thinking long-term about how legged robots on wheels would be real-world useful:
Our feedback control and autonomous system allow for swift movement through urban environments to aid in everything from food delivery to security and surveillance to search and rescue missions. This work can also help with transportation in large factories and warehouses.
For more details, we spoke with the UC Berkeley students (Shuxiao Chen, Jonathan Rogers, and Bike Zhang) via email.
IEEE Spectrum: How representative of Cassie’s real-world performance is what we see in the video? What happens when things go wrong?
Cassie’s real-world performance is similar to what we see in the video. Cassie can ride the hovershoes successfully all around the campus. Our current controller allows Cassie to robustly ride the hovershoes and rejects various perturbations. At present, one of the failure modes is when the hovershoe rolls to the side—this happens when it goes sideways down a step or encounters a large obstacle on one side of it, causing it to roll over. Under these circumstances, Cassie doesn’t have sufficient control authority (due to the thin narrow feet) to get the hovershoe back on its wheel.
The Hybrid Robotics Lab has been working on robots that walk over challenging terrain—how do wheeled platforms like hovershoes fit in with that?
Surprisingly, this research is related to our prior work on walking on discrete terrain. While locomotion using legs is efficient when traveling over rough and discrete terrain, wheeled locomotion is more efficient when traveling over flat continuous terrain. Enabling legged robots to ride on various micro-mobility platforms will offer multimodal locomotion capabilities, improving the efficiency of locomotion over various terrains.
Our current research furthers the locomotion ability for bipedal robots over continuous terrains by using a wheeled platform. In the long run, we would like to develop multi-modal locomotion strategies based on our current and prior work to allow legged robots to robustly and efficiently locomote in our daily life.
Photo: UC Berkeley
In their experiments, the UC Berkeley researchers say Cassie proved quite capable of riding the hovershoes over rough and uneven terrain, including going down stairs.
How long did it take to train Cassie to use the hovershoes? Are there any hovershoe skills that Cassie is better at than an average human?
We spent about eight months to develop our whole system, including a controller, a path planner, and a vision system. This involved developing mathematical models of Cassie and the hovershoes, setting up a dynamical simulation, figuring out how to interface and communicate with various sensors and Cassie, and doing several experiments to slowly improve performance. In contrast, a human with a good sense of balance needs a few hours to learn to use the hovershoes. A human who has never used skates or skis will probably need a longer time.
A human can easily turn in place on the hovershoes, while Cassie cannot do this motion currently due to our algorithm requiring a non-zero forward speed in order to turn. However, Cassie is much better at riding the hovershoes over rough and uneven terrain including riding the hovershoes down some stairs!
What would it take to make Cassie faster or more agile on the hovershoes?
While Cassie can currently move at a decent pace on the hovershoes and navigate obstacles, Cassie’s ability to avoid obstacles at rapid speeds is constrained by the sensing, the controller, and the onboard computation. To enable Cassie to dynamically weave around obstacles at high speeds exhibiting agile motions, we need to make progress on different fronts.
We need planners that take into account the entire dynamics of the Cassie-Hovershoe system and rapidly generate dynamically-feasible trajectories; we need controllers that tightly coordinate all the degrees-of-freedom of Cassie to dynamically move while balancing on the hovershoes; we need sensors that are robust to motion-blur artifacts caused by fast turns; and we need onboard computation that can execute our algorithms at real-time speeds.
What are you working on next?
We are working on enabling more aggressive movements for Cassie on the hovershoes by fully exploiting Cassie’s dynamics. We are working on approaches that enable us to easily go beyond hovershoes to other challenging micro-mobility platforms. We are working on enabling Cassie to step onto and off from wheeled platforms such as hovershoes. We would like to create a future of multi-modal locomotion strategies for legged robots to enable them to efficiently help people in our daily life.
“Feedback Control for Autonomous Riding of Hovershoes by a Cassie Bipedal Robot,” by Shuxiao Chen, Jonathan Rogers, Bike Zhang, and Koushil Sreenath from the Hybrid Robotics Lab at UC Berkeley, has been submitted to IEEE Robotics and Automation Letters with the option to be presented at the 2019 IEEE-RAS International Conference on Humanoid Robots.
#435769 The Ultimate Optimization Problem: How ...
Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.
As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.
The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)
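For readers who want the concrete version of that parenthetical: an objective function is just a scalar score computed from a candidate decision, which an optimizer then maximizes or minimizes. The toy example below, with made-up weights and land-use categories, only illustrates the form of such a function; it is not anything Microsoft has built.

```python
# Toy objective function in the sense Joppa describes: one number that scores
# a candidate decision so an optimizer can maximize it. The categories and
# weights are invented purely for illustration.

def objective(allocation: dict) -> float:
    """Score a land-use allocation given as fractions of area summing to 1.0."""
    weights = {"farmland": 0.3, "cities": 0.2, "wilderness": 0.4, "energy": 0.1}
    assert abs(sum(allocation.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(weights[use] * fraction for use, fraction in allocation.items())

print(objective({"farmland": 0.4, "cities": 0.1, "wilderness": 0.4, "energy": 0.1}))
```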
Photo: Microsoft
Lucas Joppa
AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.
In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.
Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.
I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?
Such questions are increasingly urgent, as climate change has already begun reshaping our planet and our societies. Global sea and air surface temperatures have already risen by an average of 1 degree Celsius above preindustrial levels, according to the Intergovernmental Panel on Climate Change.
Today, people all around the world participated in a “climate strike,” with young people leading the charge and demanding a global transition to renewable energy. On Monday, world leaders will gather in New York for the United Nations Climate Action Summit, where they’re expected to present plans to limit warming to 1.5 degrees Celsius.
Joppa says such summit discussions should aim for a truly holistic solution.
We talk about how to solve climate change. There’s a higher-order question for society: What climate do we want? What output from nature do we want and desire? If we could agree on those things, we could put systems in place for optimizing our environment accordingly. Instead we have this scattered approach, where we try for local optimization. But the sum of local optimizations is never a global optimization.
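A tiny numerical illustration of that last sentence, with entirely made-up numbers: two regions share a fixed amount of water, and letting each optimize its own use in isolation yields less total value than optimizing the split jointly.

```python
# Toy illustration (made-up numbers) of why the sum of local optimizations is
# not a global optimization: two regions share 10 units of water.

def yield_a(water):  # region A has strong diminishing returns
    return 10 * water ** 0.5

def yield_b(water):  # region B has roughly linear returns
    return 3 * water

TOTAL_WATER = 10.0

# "Local" optimization: region A, acting alone, takes all the water it can.
local_total = yield_a(TOTAL_WATER) + yield_b(0.0)

# "Global" optimization: search for the split that maximizes the combined yield.
best_split = max((i / 100 * TOTAL_WATER for i in range(101)),
                 key=lambda w: yield_a(w) + yield_b(TOTAL_WATER - w))
global_total = yield_a(best_split) + yield_b(TOTAL_WATER - best_split)

print(f"local-first total: {local_total:.1f}")   # ~31.6
print(f"jointly optimized: {global_total:.1f}")  # ~38.3
```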
There’s increasing interest in using artificial intelligence to tackle global environmental problems. New sensing technologies enable scientists to collect unprecedented amounts of data about the planet and its denizens, and AI tools are becoming vital for interpreting all that data.
The 2018 report “Harnessing AI for the Earth,” produced by the World Economic Forum and the consulting company PwC, discusses ways that AI can be used to address six of the world’s most pressing environmental challenges: climate change, biodiversity, healthy oceans, water security, clean air, and disaster resilience.
Many of the proposed applications involve better monitoring of human and natural systems, as well as modeling applications that would enable better predictions and more efficient use of natural resources.
Joppa says that AI for Earth is taking a two-pronged approach, funding efforts to collect and interpret vast amounts of data alongside efforts that use that data to help humans make better decisions. And that’s where the global optimization engine would really come in handy.
For any location on earth, you should be able to go and ask: What’s there, how much is there, and how is it changing? And more importantly: What should be there?
On land, the data is really only interesting for the first few hundred feet. Whereas in the ocean, the depth dimension is really important.
We need a planet with sensors, with roving agents, with remote sensing. Otherwise our decisions aren’t going to be any good.
AI for Earth isn’t going to create such an online portal within five years, Joppa stresses. But he hopes the projects that he’s funding will contribute to making such a portal possible—eventually.
We’re asking ourselves: What are the fundamental missing layers in the tech stack that would allow people to build a global optimization engine? Some of them are clear, some are still opaque to me.
By the end of five years, I’d like to have identified these missing layers, and have at least one example of each of the components.
Some of the projects that AI for Earth has funded seem to fit that desire. Examples include SilviaTerra, which used satellite imagery and AI to create a map of the 92 billion trees in forested areas across the United States. There’s also OceanMind, a non-profit that detects illegal fishing and helps marine authorities enforce compliance. Platforms like Wildbook and iNaturalist enable citizen scientists to upload pictures of animals and plants, aiding conservation efforts and research on biodiversity. And FarmBeats aims to enable data-driven agriculture with low-cost sensors, drones, and cloud services.
It’s not impossible to imagine putting such services together into an optimization engine that knows everything about the land, the water, and the creatures who live on planet Earth. Then we’ll just have to tell that engine what we want to do about it.
Editor’s note: This story is published in cooperation with more than 250 media organizations and independent journalists that have focused their coverage on climate change ahead of the UN Climate Action Summit. IEEE Spectrum’s participation in the Covering Climate Now partnership builds on our past reporting about this global issue.
#435738 Boing Goes the Trampoline Robot
There are a handful of quadrupedal robots out there that are highly dynamic, with the ability to run and jump, but those robots tend to be rather expensive and complicated, requiring powerful actuators and legs with elasticity. Boxing Wang, a Ph.D. student in the College of Control Science and Engineering at Zhejiang University in China, contacted us to share a project he’s been working on to investigate quadruped jumping with simple, affordable hardware.
“The motivation for this project is quite simple,” Boxing says. “I wanted to study quadrupedal jumping control, but I didn’t have custom-made powerful actuators, and I didn’t want to have to design elastic legs. So I decided to use a trampoline to make a normal servo-driven quadruped robot to jump.”
Boxing and his colleagues had wanted to study quadrupedal running and jumping, so they built this robot with the most powerful servos they had access to: Kondo KRS6003RHV actuators, which have a maximum torque of 6 Nm. After some simple testing, it became clear that the servos were simply not fast or powerful enough to get the robot to jump, and that an elastic element was necessary to store energy to help the robot get off the ground.
“Normally, people would choose elastic legs,” says Boxing. “But nobody in my lab knew for sure how to design them. If we tried making elastic legs and we failed to make the robot jump, we couldn’t be sure whether the problem was the legs or the control algorithms. For hardware, we decided that it’s better to start with something reliable, something that definitely won’t be the source of the problem.”
As it turns out, all you need is a trampoline, an inertial measurement unit (IMU), and little tactile switches on the end of each foot to detect touch-down and lift-off events, and you can do some useful jumping research without a jumping robot. And the trampoline has other benefits as well—because it’s stiffer at the edges than at the center, for example, the robot will tend to center itself on the trampoline, and you get some warning before things go wrong.
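As a rough sketch of what that minimal sensor suite buys you, touch-down and lift-off events can be read directly from the per-foot switches and time-stamped for the controller. The code below is an assumption made for illustration, not the Zhejiang University implementation.

```python
# Sketch of touch-down / lift-off detection from per-foot tactile switches,
# the minimal sensing described above. Structure is invented for illustration;
# this is not the Zhejiang University code.

def detect_events(prev_contacts, contacts, t):
    """Compare previous and current switch readings and report contact events."""
    events = []
    for foot, now in contacts.items():
        before = prev_contacts[foot]
        if now and not before:
            events.append((t, foot, "touch-down"))
        elif before and not now:
            events.append((t, foot, "lift-off"))
    return events

# Example: at t = 0.42 s the rear feet leave the trampoline while the front
# feet are still pressing into it.
prev = {"LF": True, "RF": True, "LH": True, "RH": True}
curr = {"LF": True, "RF": True, "LH": False, "RH": False}
print(detect_events(prev, curr, t=0.42))
```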
“I can’t say that it’s a breakthrough to make a quadruped robot jump on a trampoline,” Boxing tells us. “But I believe this is useful for prototype testing, especially for people who are interested in quadrupedal jumping control but without a suitable robot at hand.”
To learn more about the project, we emailed him some additional questions.
IEEE Spectrum: Where did this idea come from?
Boxing Wang: The idea of the trampoline came while we were drinking milk tea. I don’t know why it came up, maybe someone saw a trampoline in a gym recently. And I don’t remember who proposed it exactly. It was just like someone said it unintentionally. But I realized that a trampoline would be a perfect choice. It’s reliable, easy to buy, and should have a similar dynamic model with the one of jumping with springy legs (we have briefly analyzed this in a paper). So I decided to try the trampoline.
How much do you think you can learn using a quadruped on a trampoline, instead of using a jumping quadruped?
Generally speaking, no contact surfaces are strictly rigid. They all have elasticity. So there are no essential differences between jumping on a trampoline and jumping on a rigid surface. However, using a quadruped on a trampoline can give you more information on how to make use of elasticity to make jumping easier and more efficient. You can use quadruped robots with springy legs to address the same problem, but that usually requires much more time on hardware design.
We prefer to treat the trampoline experiment as a kind of early test for further real jumping quadruped design. Unless you’re interested in designing an acrobatic robot on a trampoline, a real jumping quadruped is probably a more useful application, and that is our ultimate goal. The point of the trampoline tests is to develop the control algorithms first, and to examine the stability of the general hardware structure. Due to the similarity between jumping on a trampoline with rigid legs and jumping on hard surfaces with springy legs, the control algorithms you develop could be transferred to hard-surface jumping robots.
Do you think that this idea can be beneficial for other kinds of robotics research?
Yes. For jumping quadrupeds with springy legs, the control algorithms could be first designed through trampoline tests using simple rigid legs. And the hardware design for elastic legs could be accelerated with the help of the control algorithms you design. In addition, we believe our work could be a good example of using a position-control robot to realize dynamic motions such as jumping, or even running.
Unlike other dynamic robots, every active joint in our robot is controlled through commercial position-control servos and not custom torque control motors. Most people don’t think that a position-control robot could perform highly dynamic motions such as jumping, because position-control motors usually mean a high gear ratio and slow response. However, our work indicates that, with the help of elasticity, stable jumping could be realized through position-control servos. So for those who already have a position-control robot at hand, they could explore the potential of their robot through trampoline tests.
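To picture how a position-control robot can jump at all, think of a policy that simply switches position setpoints on contact events, letting the trampoline store and return the energy. The sketch below is an invented illustration of that idea (servo counts, angles, and the bus write are assumptions), not the authors’ controller.

```python
# Sketch of a jumping policy built entirely from position commands, relying
# on the trampoline's elasticity to store and return energy. Angles, servo
# numbering, and the bus write are invented for illustration only.

CROUCH_ANGLE = 0.9  # rad: legs folded, ready for the next push
EXTEND_ANGLE = 0.2  # rad: legs extended, pressing into the trampoline

def knee_setpoint(event: str) -> float:
    """Choose the knee position setpoint for the latest contact event."""
    if event == "touch-down":
        return EXTEND_ANGLE  # push into the elastic surface while in contact
    return CROUCH_ANGLE      # retract during flight to prepare the next push

def send_setpoint(servo_id: int, angle: float) -> None:
    # Stand-in for the servo bus write; a real robot would send this over
    # the servos' serial protocol.
    print(f"servo {servo_id} -> {angle:.2f} rad")

for event in ["touch-down", "lift-off", "touch-down"]:
    for servo in range(4):  # one knee servo per leg in this sketch
        send_setpoint(servo, knee_setpoint(event))
```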
Why is teaching a robot to jump important?
There are many scenarios where a jumping robot is needed. For example, a real jumping quadruped could be used to design a running quadruped. Both experience moments when all four legs are in the air, and it is easier to start from jumping and then move to running. Specifically, hopping or pronking can easily transform to bounding if the pitch angle is not strictly controlled. A bounding quadruped is similar to a running rabbit, so for now it can already be called a running quadruped.
To the best of our knowledge, a practical use of jumping quadrupeds could be planet exploration, just like what SpaceBok was designed for. In a low-gravity environment, jumping is more efficient than walking, and it’s easier to jump over obstacles. But if I had a jumping quadruped on Earth, I would teach it to catch a ball that I throw at it by jumping. It would be fantastic!
That would be fantastic.
Since the whole point of the trampoline was to get jumping software up and running with a minimum of hardware, the next step is to add some springy legs to the robot so that the control system the researchers developed can be tested on hard surfaces. They have a journal paper currently under revision, and Boxing Wang is joined as first author by his adviser Chunlin Zhou, undergrads Ziheng Duan and Qichao Zhu, and researchers Jun Wu and Rong Xiong.
#435726 This Is the Most Powerful Robot Arm Ever ...
Last month, engineers at NASA’s Jet Propulsion Laboratory wrapped up the installation of the Mars 2020 rover’s 2.1-meter-long robot arm. This is the most powerful arm ever installed on a Mars rover. Even though the Mars 2020 rover shares much of its design with Curiosity, the new arm was redesigned to be able to do much more complex science, drilling into rocks to collect samples that can be stored for later recovery.
JPL is well known for developing robots that do amazing work in incredibly distant and hostile environments. The Opportunity Mars rover, to name just one example, had a 90-day planned mission but remained operational for 5,498 days in a robot-unfriendly place full of dust and wild temperature swings where even the most basic maintenance or repair is utterly impossible. (Its twin rover, Spirit, operated for 2,269 days.)
To learn more about the process behind designing robotic systems that are capable of feats like these, we talked with Matt Robinson, one of the engineers who designed the Mars 2020 rover’s new robot arm.
The Mars 2020 rover (which will be officially named through a public contest which opens this fall) is scheduled to launch in July of 2020, landing in Jezero Crater on February 18, 2021. The overall design is similar to the Mars Science Laboratory (MSL) rover, named Curiosity, which has been exploring Gale Crater on Mars since August 2012, except Mars 2020 will be a bit bigger and capable of doing even more amazing science. It will outweigh Curiosity by about 150 kilograms, but it’s otherwise about the same size, and uses the same type of radioisotope thermoelectric generator for power. Upgraded aluminum wheels will be more durable than Curiosity’s wheels, which have suffered significant wear. Mars 2020 will land on Mars in the same way that Curiosity did, with a mildly insane descent to the surface from a rocket-powered hovering “skycrane.”
Photo: NASA/JPL-Caltech
Last month, engineers at NASA's Jet Propulsion Laboratory installed the main robotic arm on the Mars 2020 rover. Measuring 2.1 meters long, the arm will allow the rover to work as a human geologist would: by holding and using science tools with its turret.
Mars 2020 really steps it up when it comes to science. The most interesting new capability (besides serving as the base station for a highly experimental autonomous helicopter) is that the rover will be able to take surface samples of rock and soil, put them into tubes, seal the tubes up, and then cache the tubes on the surface for later retrieval (and potentially return to Earth for analysis). Collecting the samples is the job of a drill on the end of the robot arm that can be equipped with a variety of interchangeable bits, but the arm holds a number of other instruments as well. A “turret” can swap between the drill, a mineral identification sensor suite called SHERLOC, and an X-ray spectrometer and camera called PIXL. Fundamentally, most of Mars 2020’s science work is going to depend on the arm and the hardware that it carries, both in terms of close-up surface investigations and collecting samples for caching.
Matt Robinson is the Deputy Delivery Manager for the Sample Caching System on the Mars 2020 rover, which covers the robotic arm itself, the drill at the end of the arm, and the sample caching system within the body of the rover that manages the samples. Robinson has been at JPL since 2001, and he’s worked on the Mars Phoenix Lander mission as the robotic arm flight software developer and robotic arm test and operations engineer, as well as on Curiosity as the robotic arm test and operations lead engineer.
We spoke with Robinson about how the Mars 2020 arm was designed, and what it’s like to be building robots for exploring other planets.
IEEE Spectrum: How’d you end up working on robots at JPL?
Matt Robinson: When I was a grad student, my focus was on vision-based robotics research, so the kinds of things they do at JPL, or that we do at JPL now, were right within my wheelhouse. One of my advisors in grad school had a former student who was out here at JPL, so that’s how I made the contact. But I was very excited to come to JPL—as a young grad student working in robotics, space robotics was where it’s at.
For a robotics engineer, working in space is kind of the gold standard. You’re working in a challenging environment and you have to be prepared for any type of eventuality that may occur. And when you send your robot out to space, there’s no getting it back.
Once the rover arrives on Mars and you receive pictures back from it operating, there’s no greater feeling. You’ve built something that is now working 200+ million miles away. It’s an awesome experience! I have to pinch myself sometimes with the job I do. Working at JPL on space robotics is the holy grail for a roboticist.
What’s different about designing an arm for a rover that will operate on Mars?
We spent over five years designing, manufacturing, assembling, and testing the arm. Scientists have defined the high-level goals for what the mission has to do—acquire core samples and process them for return, carry science instruments on the arm to help determine what rocks to sample, and so on. We, as engineers, define the next level of requirements that support those goals.
When you’re building a robotic arm for another planet, you want to design something that is robust to the environment as well as robust from fault-protection standpoint. On Mars, we’re talking about an environment where the temperature can vary 100 degrees Celsius over the course of the day, so it’s very challenging thermally. With force sensing for instance, that’s a major problem. Force sensors aren’t typically designed to operate or even survive in temperature ranges that we’re talking about. So a lot of effort has to go into force sensor design and testing.
And then there’s a do-no-harm aspect—you’re sending this piece of hardware 200 million miles away, and you can’t get it back, so you want to make sure your hardware and software are robust and cannot do any harm to the system. It’s definitely a change in mindset from a terrestrial robot, where if you make a mistake, you can repair it.
How do you decide how much redundancy is enough?
That’s always a big question. It comes down to a couple of things, typically: mass and volume. You have a certain amount of mass that’s allocated to the robotic arm and we have a volume that it has to fit within, so those are often the drivers of the amount of redundancy that you can fit. We also have a lot of experience with sending arms to other planets, and at the beginning of projects, we establish a number of requirements that the design has to meet, and that’s where the redundancy is captured.
How much is the design of the arm driven by this need for redundancy, as opposed to trying to pack in all of the instrumentation that you want to have on there to do as much science as possible?
The requirements were driven by a couple of things. We knew roughly how big the instruments on the end of the arm were going to be, so the arm design is partially driven by that, because as the instruments get bigger and heavier, the arm has to get bigger and stronger. We have our coring drill at the end of the arm, and coring requires a certain level of force, so the arm has to be strong enough to do that. Those all became requirements that drove the design of the arm. On top of that, the arm also has to operate within the Martian environment, so you have things like the temperature changes and thermal expansion—you have to design for that as well. It’s a combination of both, really.
You were a test engineer for the arm used on the MSL rover. What did you learn from Spirit and Opportunity that informed the design of the arm on Curiosity?
Spirit and Opportunity did not have any force-sensing on the robotic arm. We had contact sensors that were good enough. Spirit and Opportunity’s arms were used to place instruments; that’s all they had to do, primarily. When you’re talking about actually acquiring samples, it’s not a matter of just placing the tool—you also have to apply forces to the environment. And once you start doing that, you really need a force sensor to protect you, and also to determine how much load to apply. So that was a big theme, a big difference between MSL and Spirit and Opportunity.
The size grew a lot too. If you look at Spirit and Opportunity, they’re the size of a riding lawnmower. Curiosity and the Mars 2020 rovers are the size of a small car. The Spirit and Opportunity arm was under a meter long, and the 2020 arm is twice that, and it has to apply forces that are much higher than the Spirit and Opportunity arm. From Curiosity to 2020, the payload of the arm grew by 50 percent, but the mass of the arm did not grow a whole lot, because our mass budget was kind of tight. We had to design an arm that was stronger, that had more capability, without adding more mass. That was a big challenge. We were fairly efficient on Curiosity, but on 2020, we sharpened the pencil even more.
Photo: NASA/JPL-Caltech
Three generations of Mars rovers developed at NASA’s Jet Propulsion Laboratory. Front and center: Sojourner rover, which landed on Mars in 1997 as part of the Mars Pathfinder Project. Left: Mars Exploration Rover Project rover (Spirit and Opportunity), which landed on Mars in 2004. Right: Mars Science Laboratory rover (Curiosity), which landed on Mars in August 2012.
MSL used its arm to drill into rocks like Mars 2020 will—how has the experience of operating MSL on Mars changed your thinking on how to make that work?
On MSL, the force sensor was used primarily for fault protection, just to protect the arm from being overloaded. [When drilling] we used a stiffness model of the arm to apply the force. The force sensor was only used in case you overloaded, and that’s very different from doing active force control, where you’re actually using the force sensor in a control loop.
On Mars 2020, we’re taking it to the next step, using the force sensor to actually actively control the level of force, both for pushing on the ground and for doing bit exchange. That’s a key point because fault protection to prevent damage usually has larger error bars. When you’re trying to actually push on the environment to apply force, and you’re doing active force control, the force sensor has to be significantly more accurate.
So a big thing that we learned on MSL—it was the first time we’d actually flown a force sensor, and we learned a lot about how to design and test force sensors to be used on the surface of Mars.
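The distinction Robinson draws can be made concrete with a small sketch: fault protection only checks the measured force against a limit, while active force control feeds the measurement back into the commanded motion. The gains, limits, and setpoints below are invented for illustration and have nothing to do with JPL flight software.

```python
# Sketch contrasting force-sensor-as-fault-protection with active force
# control. All numbers are invented for illustration; this is not JPL
# flight software.

FORCE_LIMIT = 400.0   # N: overload threshold used for fault protection
FORCE_TARGET = 120.0  # N: desired preload for active force control
KP = 0.0005           # m/N: gain mapping force error to a position adjustment

def overload_fault(measured_force: float) -> bool:
    """MSL-style use of the sensor: only trip a fault if the arm is overloaded."""
    return measured_force > FORCE_LIMIT

def active_force_step(measured_force: float, position: float) -> float:
    """Mars 2020-style use: nudge the commanded position to hold the target force."""
    error = FORCE_TARGET - measured_force
    return position + KP * error  # advance if pressing too lightly, back off if too hard

# Example: the bit is pressing at 90 N, below target, so the arm advances slightly.
position = 0.250  # meters along the approach axis
print(overload_fault(90.0))                           # False: no overload
print(f"{active_force_step(90.0, position):.4f} m")   # slightly deeper than 0.2500 m
```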
How do you effectively test the Mars 2020 arm on Earth?
That’s a good question. The arm was designed to operate on either Earth or Mars. It’s strong enough to do both. We also have a stiffness model of the arm which allows us to compensate for differences in gravity. For testing, we make two copies of the robotic arm. We have our copy that we’re going to fly to Mars, which is what we call our flight model, and we have our engineering model. They’re effectively duplicates of each other. The engineering arm stays on Earth, so even once we’ve sent the flight model to Mars, we can continue to test. And if something were to happen, if say a drill bit got stuck in the ground on Mars, we could try to replicate those conditions on Earth with our engineering model arm, and use that to test out different scenarios to overcome the problem.
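A stiffness model of the kind Robinson mentions predicts how much the arm deflects under load, and gravity is part of that load, so the same commanded pose can be corrected differently on Earth and on Mars. The single-joint toy below uses invented masses and stiffnesses purely to show the idea; it is not the JPL arm model.

```python
# Toy gravity compensation using a stiffness model: the arm droops under its
# own load, by different amounts on Earth and Mars, so the commanded joint
# angle is biased to cancel the predicted droop. All numbers are invented.

G_EARTH, G_MARS = 9.81, 3.71  # m/s^2
PAYLOAD_MASS = 30.0           # kg carried at the end of the arm (illustrative)
ARM_LENGTH = 2.1              # m
JOINT_STIFFNESS = 6.0e4       # N*m/rad, illustrative torsional stiffness

def droop(gravity: float) -> float:
    """Predicted shoulder deflection (rad) from the gravity torque on the payload."""
    torque = PAYLOAD_MASS * gravity * ARM_LENGTH
    return torque / JOINT_STIFFNESS

def compensated_command(desired_angle: float, gravity: float) -> float:
    """Bias the commanded angle so the loaded arm settles where it was intended."""
    return desired_angle + droop(gravity)

for name, g in [("Earth", G_EARTH), ("Mars", G_MARS)]:
    print(f"{name}: command {compensated_command(0.50, g):.4f} rad to reach 0.50 rad")
```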
How much autonomy will the arm have?
We have different levels of autonomy. We have pretty high-level flight software; for instance, we have a command that just says “dock,” which moves the arm and does all the force control to dock the arm with the carousel. For surface interaction, we have stereo cameras on the rover, and those cameras allow us to generate 3D terrain models. Using those 3D terrain models, scientists can select a target on that surface, and then we can position the arm on the target.
Scientists like to select the particular sample targets, because they have very specific types of rocks they’re looking to sample. On 2020, we’re providing the next level of autonomy: the rover can drive up to an area and at least do the initial surveying of that area, so the scientists can select the specific target. So the way that would happen is, if there’s an area off in the distance that the scientists find potentially interesting, the rover will autonomously drive up to it, deploy the arm, and take all the pictures so that we can generate those 3D terrain models, and then the next day the scientists can pick the specific target they want. It’s really cool.
JPL is famous for making robots that operate for far longer than NASA necessarily plans for. What’s it like designing hardware and software for a system that will (hopefully) become part of that legacy?
The way that I look at it is, when you’re building an arm that’s going to go to another planet, you have to think about all the things that could go wrong… You have to build something that’s robust and that can survive all that. It’s not that we’re trying to overdesign arms so that they’ll end up lasting much, much longer; it’s that, given all the things you can encounter within a fairly unknown environment, and the level of robustness you have to design in, it just so happens that we end up with designs that last a lot longer than they need to. Which is great, but we’re not held to that, although we’re very excited when we see them last that long. Without any calibration, without any maintenance, it’s amazing. They show their wear over time, but they still operate. It’s super exciting, it’s very inspirational to see.
[ Mars 2020 Rover ]