Tag Archives: Space

#436042 Video Friday: Caltech’s Drone With ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Caltech has been making progress on LEONARDO (LEg ON Aerial Robotic DrOne), their leggy, thruster-powered humanoid-thing. It can now balance and walk, which is quite impressive to see.

We’ll circle back again when they’ve got it jumping and floating around.

[ Caltech ]

Turn the subtitles on to learn how robots became experts at slicing bubbly, melty, delicious cheese.

These robots learned how to do the traditional Swiss raclette from demonstration. The Robot Learning & Interaction group at the Idiap Research Institute has developed an imitation learning technique allowing the robot to acquire new skills by considering position and force information, with an automatic adaptation to new situations. The range of applications is wide, including industrial robots, service robots, and assistive robots.

[ Idiap ]

Thanks Sylvain!

Some amazing news this week from Skydio, with the announcement of their better-in-every-single-way Skydio 2 autonomous drone. Read our full article for details, but here’s a getting-started video that gives you an overview of what the drone can do.

The first batch sold out in 36 hours, but you can put down a $100 deposit to reserve the $999 drone for 2020 delivery.

[ Skydio ]

UBTECH is introducing a couple new robot kits for the holidays: ChampBot and FireBot.

$130 each, available on October 20.

[ Ubtech ]

NASA’s InSight lander on Mars is trying to use its robotic arm to get the mission’s heat flow probe, or mole, digging again. InSight team engineer Ashitey Trebi-Ollennu, based at NASA’s Jet Propulsion Laboratory in Pasadena, California, explains what has been attempted and the game plan for the coming weeks. The next tactic they’ll try will be “pinning” the mole against the hole it’s in.

[ NASA ]

We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions.

[ Ryo Suzuki ]

Robot abuse!

Vision 60 legged robot managing unstructured terrain without vision or force sensors in its legs. Using only high-transparency actuators and 2 kHz algorithmic stability control… 4 limbs and 12 motors with only a velocity command.

[ Ghost Robotics ]

We asked real people to bring in real products they needed picked for their application. In MINUTES, we assembled the right tool.

This is a cool idea, but for a real challenge they should try it outside a supermarket. Or a pet store.

[ Soft Robotics ]

Good water quality is important to humans and to nature. In a country with as much water as the Netherlands has, ensuring water quality is a very labour-intensive undertaking. To address this issue, researchers from TU Delft have developed a ‘pelican drone’: a drone capable of taking water samples quickly, in combination with a measuring instrument that immediately analyses the water quality. The drone was tested this week at the new Marker Wadden nature area ‘Living Lab’.

[ MAVLab ]

In an international collaboration led by scientists in Switzerland, three amputees merge with their bionic prosthetic legs as they climb over various obstacles without having to look. The amputees report using and feeling their bionic leg as part of their own body, thanks to sensory feedback from the prosthetic leg that is delivered to nerves in the leg’s stump.

[ EPFL ]

It’s a little hard to see, but this is one way of testing out asteroid imaging spacecraft without actually going into space: a fake asteroid and a 2D microgravity simulator.

[ Caltech ]

Drones can help filmmakers do the kinds of shots that would be otherwise impossible.

[ DJI ]

Two long interviews this week from Lex Fridman’s AI Podcast, and both of them are worth watching: Gary Marcus and Peter Norvig.

[ AI Podcast ]

This week’s CMU RI Seminar comes from Tucker Hermans at the University of Utah, on “Improving Multi-fingered Robot Manipulation by Unifying Learning and Planning.”

Multi-fingered hands offer autonomous robots increased dexterity, versatility, and stability over simple two-fingered grippers. Naturally, this increased ability comes with increased complexity in planning and executing manipulation actions. As such, I propose combining model-based planning with learned components to improve over purely data-driven or purely-model based approaches to manipulation. This talk examines multi-fingered autonomous manipulation when the robot has only partial knowledge of the object of interest. I will first present results on planning multi-fingered grasps for novel objects using a learned neural network. I will then present our approach to planning in-hand manipulation tasks when dynamic properties of objects are not known. I will conclude with a discussion of our ongoing and future research to further unify these two approaches.

[ CMU RI ]


#436005 NASA Hiring Engineers to Develop “Next ...

It’s been nearly six years since NASA unveiled Valkyrie, a state-of-the-art full-size humanoid robot. After the DARPA Robotics Challenge, NASA has continued to work with Valkyrie at Johnson Space Center, and has also provided Valkyrie robots to several different universities. Although it’s not a new platform anymore (six years is a long time in robotics), Valkyrie is still very capable, with plenty of potential for robotics research.

With that in mind, we were caught by surprise when, over the last several months, Jacobs, a Dallas-based engineering company that appears to provide a wide variety of technical services to anyone who wants them, posted several job openings for roboticists in the Houston, Texas, area who are interested in working with NASA on “the next generation of humanoid robot.”

Here are the relevant bullet points from one of the job descriptions (which you can view at this link):

Work directly with NASA Johnson Space Center in designing the next generation of humanoid robot.

Join the Valkyrie humanoid robot team in NASA’s Robotic Systems Technology Branch.

Build on the success of the existing Valkyrie and Robonaut 2 humanoid robots and advance NASA’s ability to project a remote human presence and dexterous manipulation capability into challenging, dangerous, and distant environments both in space and here on earth.

The question is, why is NASA developing its own humanoid robot (again) when it could instead save a whole bunch of time and money by using a platform that already exists, whether it’s Atlas, Digit, Valkyrie itself, or one of the small handful of other humanoids that are more or less available? The only answer that I can come up with is that no existing platforms meet NASA’s requirements, whatever those may be. And if that’s the case, what kind of requirements are we talking about? The obvious one would be the ability to work in the kinds of environments that NASA specializes in—space, the Moon, and Mars.

Image: NASA

Artist’s concept of NASA’s Valkyrie humanoid robot working on the surface of Mars.

NASA’s existing humanoid robots, including Robonaut 2 and Valkyrie, were designed to operate on Earth. Robonaut 2 ended up going to space anyway (it’s recently returned to Earth for repairs), but its hardware was certainly never intended to function outside of the International Space Station. Working in a vacuum involves designing for a much more rigorous set of environmental challenges, and things get even worse on the Moon or on Mars, where highly abrasive dust gets everywhere.

We know that it’s possible to design robots for long-term operation in these kinds of environments because we’ve done it before. But if you’re not actually going to send your robot off-world, there’s very little reason to bother making sure that it can operate through (say) 300° Celsius temperature swings like you’d find on the Moon. In the past, NASA has quite sensibly focused on designing robots that can be used as platforms for the development of software and techniques that could one day be applied to off-world operations, without over-engineering those specific robots to operate in places that they would almost certainly never go. As NASA increasingly focuses on a return to the Moon, though, maybe it’s time to start thinking about a humanoid robot that could actually do useful stuff on the lunar surface.

Image: NASA

Artist’s concept of the Gateway moon-orbiting space station (seen on the right) with an Orion crew vehicle approaching.

The other possibility that I can think of, and perhaps the more likely one, is that this next humanoid robot will be a direct successor to Robonaut 2, intended for NASA’s Gateway space station orbiting the Moon. Some of the robotics folks at NASA that we’ve talked to recently have emphasized how important robotics will be for Gateway:

Trey Smith, NASA Ames: Everybody at NASA is really excited about work on the Gateway space station that would be in near lunar space. We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations. And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.

If you have an un-crewed cargo vehicle that shows up stuffed to the rafters with cargo bags and it docks with the Gateway when there’s no crew there, it would be very useful to have intra-vehicular robots that can pull all those cargo bags out, unpack them, stow all the items, and then even allow the cargo vehicle to detach before the crew show up so that the crew don’t have to waste their time with that.

Julia Badger, NASA JSC: One of the systems on board Gateway is going to be intravehicular robots. They’re not going to necessarily look like Robonaut, but they’ll have some of the same functionality as Robonaut—being mobile, being able to carry payloads from one part of the module to another, doing some dexterous manipulation tasks, inspecting behind panels, those sorts of things.

Image: NASA

Artist’s concept of NASA’s Valkyrie humanoid robot working inside a spacecraft.

Since Gateway won’t be crewed by humans all of the time, it’ll be important to have a permanent robotic presence to keep things running while nobody is home, while also saving on resources, since robots aren’t always eating food, drinking water, consuming oxygen, demanding that the temperature stays just so, and producing a variety of disgusting kinds of waste. Obviously, robots won’t be as capable as humans, but if they can manage to do even basic continuing maintenance tasks (most likely through at least partial teleoperation), that would be very useful.

Photo: Evan Ackerman/IEEE Spectrum

NASA’s Robonaut team plans to perform a variety of mobility and motion-planning experiments using the robot’s new legs, which can grab handrails on the International Space Station.

As for whether robots designed for Gateway would really fall into the “humanoid” category, it’s worth considering that Gateway is designed for humans, implying that an effective robotic system on Gateway would need to be able to interact with the station in similar ways to how a human astronaut would. So, you’d expect to see arms with end-effectors that can grip things as well as push buttons, and some kind of mobility system—the legged version of Robonaut 2 seems like a likely template, but redesigned from the ground up to work in space, incorporating all the advances in robotics hardware and computing that have taken place over the last decade.

We’ve been pestering NASA about this for a little bit now, and they’re not ready to comment on this project, or even to confirm it. And again, everything in this article (besides the job post, which you should totally check out and consider applying for) is just speculation on our part, and we could be wrong about absolutely all of it. As soon as we hear more, we’ll definitely let you know.


#435828 Video Friday: Boston Dynamics’ ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, Calif., USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

You’ve almost certainly seen the new Spot and Atlas videos from Boston Dynamics, if for no other reason than we posted about Spot’s commercial availability earlier this week. But what, are we supposed to NOT include them in Video Friday anyway? Psh! Here you go:

[ Boston Dynamics ]

Eight deadly-looking robots. One Giant Nut trophy. Tonight is the BattleBots season finale, airing on Discovery, 8 p.m. ET, or check your local channels.

[ BattleBots ]

Thanks Trey!

Speaking of battling robots… Having giant robots fight each other is one of those things that sounds really great in theory, but doesn’t work out so well in reality. And sadly, MegaBots is having to deal with reality, which means putting their giant fighting robot up on eBay.

As of Friday afternoon, the current bid is just over $100,000 with a week to go.

[ MegaBots ]

Michigan Engineering has figured out the secret formula to getting 150,000 views on YouTube: drone plus nail gun.

[ Michigan Engineering ]

Michael Burke from the University of Edinburgh writes:

We’ve been learning to scoop grapefruit segments using a PR2, by “feeling” the difference between peel and pulp. We use joint torque measurements to predict the probability that the knife is in the peel or pulp, and use this to apply feedback control to a nominal cutting trajectory learned from human demonstration, so that we remain in a position of maximum uncertainty about which medium we’re cutting. This means we slice along the boundary between the two mediums. It works pretty well!
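
The idea is compact enough to sketch: a classifier turns joint torques into a probability that the knife is in peel, and a feedback term nudges the demonstrated trajectory toward the point of maximum uncertainty (p = 0.5), which is where the peel/pulp boundary lies. Below is a minimal, hypothetical Python sketch of that logic; the logistic classifier, gain, and torque values are all placeholders, not the authors’ code.

```python
import numpy as np

def peel_probability(joint_torques, weights, bias):
    """Hypothetical logistic classifier: probability the knife is in peel,
    predicted from measured joint torques (stand-in for the learned model)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, joint_torques) + bias)))

def boundary_following_offset(joint_torques, weights, bias, gain=0.005):
    """Proportional correction that drives the knife toward maximum classifier
    uncertainty (p = 0.5), i.e. toward the peel/pulp boundary."""
    p = peel_probability(joint_torques, weights, bias)
    return -gain * (p - 0.5)  # zero correction exactly on the boundary

# Toy usage: correct the depth of one waypoint from a demonstrated trajectory.
torques = np.array([0.8, -0.2, 0.1, 0.0, 0.3, -0.1, 0.05])  # 7 joints, example values
weights = np.array([2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])     # placeholder classifier
nominal_depth = 0.020                                        # meters, from demonstration
commanded_depth = nominal_depth + boundary_following_offset(torques, weights, 0.0)
print(commanded_depth)
```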

[ Paper ] via [ Robust Autonomy and Decisions Group ]

Thanks Michael!

Hey look, it’s Jan with eight EMYS robot heads. Hi, Jan! Hi, EMYSes!

[ EMYS ]

We’re putting the KRAKEN Arm through its paces, demonstrating that it can unfold from an Express Rack locker on the International Space Station and access neighboring lockers in NASA’s FabLab system to enable transfer of materials and parts between manufacturing, inspection, and storage stations. The KRAKEN arm will be able to change between multiple ’end effector’ tools such as grippers and inspection sensors – those are in development so they’re not shown in this video.

[ Tethers Unlimited ]

UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is providing healthcare support to children at the Praktijk Intraverte Multidisciplinary Institution in the Netherlands.

The institution is using Alpha Mini in counseling sessions on children’s behavior. Alpha Mini can move, talk to the children, and offer games and activities to stimulate and interact with them. The robot talks to, helps, and motivates the children, with the goal of helping them become more comfortable and flexible in society.

[ UBTECH ]

Some impressive work here from Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar at Google Brain, training a dexterous multi-fingered hand to do that thing with two balls that I’m really bad at.

Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills — and that too, on a 24-DoF anthropomorphic hand in the real world, using just 2-4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects.
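
The control scheme described above, replanning at every step with a learned dynamics model, is the standard model-predictive control loop. Here is a generic random-shooting MPC sketch in Python under assumed interfaces (`dynamics_model` and `reward_fn` are placeholders); it illustrates the idea, not the released PDDM code, which uses more sophisticated sampling.

```python
import numpy as np

def mpc_action(state, dynamics_model, reward_fn,
               horizon=10, num_candidates=500, action_dim=24, action_scale=1.0):
    """One step of random-shooting MPC with a learned dynamics model.
    dynamics_model(state, action) -> predicted next state  (assumed interface)
    reward_fn(state, action)      -> scalar reward          (assumed interface)"""
    best_return, best_first_action = -np.inf, None
    for _ in range(num_candidates):
        # Sample a candidate action sequence over the planning horizon.
        actions = np.random.uniform(-action_scale, action_scale,
                                    size=(horizon, action_dim))
        s, total = state, 0.0
        for a in actions:                 # roll out under the learned model
            total += reward_fn(s, a)
            s = dynamics_model(s, a)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    # Execute only the first action, then replan from the new state.
    return best_first_action

# Toy usage with placeholder model and reward (24-DoF, hand-sized action space).
dynamics = lambda s, a: s + 0.01 * a
reward = lambda s, a: -float(np.sum(s ** 2))
action = mpc_action(np.zeros(24), dynamics, reward)
print(action.shape)
```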

[ PDDM ]

Thanks Vikash!

CMU’s Ballbot has a deceptively light touch that’s ideal for leading people around.

A paper on this has been submitted to IROS 2019.

[ CMU ]

The Autonomous Robots Lab at the University of Nevada is sharing some of the work they’ve done on path planning and exploration for aerial robots during the DARPA SubT Challenge.

[ Autonomous Robots Lab ]

More proof that anything can be a drone if you staple some motors to it. Even 32 feet of styrofoam insulation.

[ YouTube ]

Whatever you think of military drones, we can all agree that they look cool.

[ Boeing ]

I appreciate the fact that iCub has eyelids, I really do, but sometimes, it ends up looking kinda sleepy in research videos.

[ EPFL LASA ]

Video shows autonomous flight of a lightweight aerial vehicle outdoors and indoors on the campus of Carnegie Mellon University. The vehicle is equipped with limited onboard sensing from a front-facing camera and a proximity sensor. The aerial autonomy is enabled by utilizing a 3D prior map built in Step 1.

[ CMU ]

The Stanford Space Robotics Facility allows researchers to test innovative guidance and navigation algorithms on a realistic frictionless, underactuated system.

[ Stanford ASL ]

In this video, Ian and CP discuss Misty’s many capabilities including robust locomotion, obstacle avoidance, 3D mapping/SLAM, face detection and recognition, sound localization, hardware extensibility, photo and video capture, and programmable personality. They also talk about some of the skills he’s built using these capabilities (and others) and how those skills can be expanded upon by you.

[ Misty Robotics ]

This week’s CMU RI Seminar comes from Aaron Parness at Caltech and NASA JPL, on “Robotic Grippers for Planetary Applications.”

The previous generation of NASA missions to the outer solar system discovered salt water oceans on Europa and Enceladus, each with more liquid water than Earth – compelling targets to look for extraterrestrial life. Closer to home, JAXA and NASA have imaged sky-light entrances to lava tube caves on the Moon more than 100 m in diameter and ESA has characterized the incredibly varied and complex terrain of Comet 67P. While JPL has successfully landed and operated four rovers on the surface of Mars using a 6-wheeled rocker-bogie architecture, future missions will require new mobility architectures for these extreme environments. Unfortunately, the highest value science targets often lie in the terrain that is hardest to access. This talk will explore robotic grippers that enable missions to these extreme terrains through their ability to grip a wide variety of surfaces (shapes, sizes, and geotechnical properties). To prepare for use in space where repair or replacement is not possible, we field-test these grippers and robots in analog extreme terrain on Earth. Many of these systems are enabled by advances in autonomy. The talk will present a rapid overview of my work and a detailed case study of an underactuated rock gripper for deflecting asteroids.

[ CMU ]

Rod Brooks gives some of the best robotics talks ever. He gave this one earlier this week at UC Berkeley, on “Steps Toward Super Intelligence and the Search for a New Path.”

[ UC Berkeley ]


#435816 This Light-based Nervous System Helps ...

Last night, way past midnight, I stumbled onto my porch blindly grasping for my keys after a hellish day of international travel. Lights were low, I was half-asleep, yet my hand grabbed the keychain, found the lock, and opened the door.

If you’re rolling your eyes—yeah, it’s not exactly an epic feat for a human. Thanks to the intricate wiring between our brain and millions of sensors dotted on—and inside—our skin, we know exactly where our hand is in space and what it’s touching without needing visual confirmation. But this combined sense of the internal and the external is completely lost to robots, which generally rely on computer vision or surface mechanosensors to track their movements and their interaction with the outside world. It’s not always a winning strategy.

What if, instead, we could give robots an artificial nervous system?

This month, a team led by Dr. Rob Shepherd at Cornell University did just that, with a seriously clever twist. Rather than mimicking the electric signals in our nervous system, his team turned to light. By embedding optical fibers inside a 3D printed stretchable material, the team engineered an “optical lace” that can detect changes in pressure less than a fraction of a pound, and pinpoint the location to a spot half the width of a tiny needle.

The invention isn’t just an artificial skin. Instead, the delicate fibers can be distributed both inside a robot and on its surface, giving it both a sense of tactile touch and—most importantly—an idea of its own body position in space. Optical lace isn’t a superficial coating of mechanical sensors; it’s an entire platform that may finally endow robots with nerve-like networks throughout the body.

Eventually, engineers hope to use this fleshy, washable material to coat the sharp, cold metal interior of current robots, transforming something like C-3PO into something closer to the human-like hosts of Westworld. Robots with a “bodily” sense could act as better caretakers for the elderly, said Shepherd, because they can assist fragile people without inadvertently bruising or otherwise harming them. The results were published in Science Robotics.

An Unconventional Marriage
The optical lace is especially creative because it marries two contrasting ideas: one biologically inspired, the other wholly alien.

The overarching idea for optical lace is based on the animal kingdom. Through sight, hearing, smell, taste, touch, and other senses, we’re able to interpret the outside world—something scientists call exteroception. Thanks to our nervous system, we perform these computations subconsciously, allowing us to constantly “perceive” what’s going on around us.

Our other perception is purely internal. Proprioception (sorry, it’s not called “inception” though it should be) is how we know where our body parts are in space without having to look at them, which lets us perform complex tasks without looking. Although less intuitive than exteroception, proprioception also relies on stretching and other deformations sensed by receptors within the muscles, tendons, and skin, which generate electrical signals that travel up to the brain for further interpretation.

In other words, in theory it’s possible to recreate both perceptions with a single information-carrying system.

Here’s where the alien factor comes in. Rather than using electrical properties, the team turned to light as their data carrier. They had good reason. “Compared with electricity, light carries information faster and with higher data densities,” the team explained. Light can also transmit in multiple directions simultaneously, and is less susceptible to electromagnetic interference. Although optical nervous systems don’t exist in the biological world, the team decided to improve on Mother Nature and give it a shot.

Optical Lace
The construction starts with engineering a “sheath” for the optical nerve fibers. The team first used an elastic polyurethane—a synthetic material used in foam cushioning, for example—to make a lattice structure filled with large pores, somewhat like a lattice pie crust. Thanks to rapid, high-resolution 3D printing, the scaffold can have different stiffness from top to bottom. To increase sensitivity to the outside world, the team made the top of the lattice soft and pliable, to better transfer force to the mechanical sensors. In contrast, the “deeper” regions were stiffer, holding their structure under pressure.

Now the fun part. The team next threaded stretchable “light guides” into the scaffold. These fibers transmit photons, and are illuminated with a blue LED light. One, the input light guide, ran horizontally across the soft top part of the scaffold. Others ran perpendicular to the input in a “U” shape, going from more surface regions to deeper ones. These are the output guides. The architecture loosely resembles the wiring in our skin and flesh.

Normally, the output guides are separated from the input by a small air gap. When pressed down, the input light fiber distorts slightly, and if the pressure is high enough, it contacts one of the output guides. This causes light from the input fiber to “leak” to the output one, so that it lights up—the stronger the pressure, the brighter the output.

“When the structure deforms, you have contact between the input line and the output lines, and the light jumps into these output loops in the structure, so you can tell where the contact is happening,” said study author Patricia Xu. “The intensity of this determines the intensity of the deformation itself.”
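
Decoding the lace is conceptually simple: which output guide brightens tells you roughly where the contact is, and how much it brightens tells you how hard the press was. A rough Python sketch of that readout is below; the channel positions, baseline, and light-to-force calibration are invented for illustration and are not Cornell’s values.

```python
import numpy as np

# Hypothetical calibration: position (mm along the lace) of each output guide,
# resting photodetector reading, and a gain converting extra light to force.
CHANNEL_POSITIONS_MM = np.linspace(0.0, 110.0, 12)
BASELINE = 0.05           # normalized reading with no contact (assumed)
NEWTONS_PER_UNIT = 4.0    # assumed linear light-to-force calibration

def decode_touch(channel_readings):
    """Estimate contact location and force from per-channel light intensities."""
    excess = np.clip(np.asarray(channel_readings, dtype=float) - BASELINE, 0.0, None)
    if excess.sum() < 1e-6:
        return None                                   # no contact detected
    # Location: intensity-weighted centroid of the brightened output guides.
    location_mm = float(np.dot(CHANNEL_POSITIONS_MM, excess) / excess.sum())
    # Force: proportional to the brightest channel's extra light.
    force_n = float(excess.max() * NEWTONS_PER_UNIT)
    return location_mm, force_n

# Toy reading: a press centered near the third output channel.
readings = [0.05, 0.06, 0.31, 0.12] + [0.05] * 8
print(decode_touch(readings))
```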

Double Perception
As a proof-of-concept for proprioception, the team made a cylindrical lace with one input and 12 output channels. They varied the stiffness of the scaffold along the cylinder, and by pressing down at different points, were able to calculate how much each part stretched and deformed—a prominent precursor to knowing where different regions of the structure are moving in space. It’s a very rudimentary sort of proprioception, but one that will become more sophisticated with increasing numbers of strategically-placed mechanosensors.

The test for exteroception was a whole lot stranger. Here, the team engineered another optical lace with 15 output channels and turned it into a squishy piano. When pressed down, an Arduino microcontroller translated light output signals into sound based on the position of each touch. The stronger the pressure, the louder the volume. While not a musical masterpiece, the demo proved their point: the optical lace faithfully reported the strength and location of each touch.
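
The piano demo layers a note lookup on top of the same readout: the channel index selects the pitch, and the excess light sets the volume. A toy Python version of that mapping (our own illustration, not the team’s Arduino firmware) could look like this:

```python
# Toy mapping from 15 optical-lace output channels to notes. The frequencies
# and thresholds are chosen for illustration only.
NOTE_FREQS_HZ = [262, 294, 330, 349, 392, 440, 494,
                 523, 587, 659, 698, 784, 880, 988, 1047]
BASELINE = 0.05

def channels_to_notes(channel_readings, threshold=0.02):
    """Return (frequency_hz, volume in 0..1) for every channel pressed hard enough."""
    notes = []
    for idx, reading in enumerate(channel_readings):
        excess = reading - BASELINE
        if excess > threshold:
            volume = min(1.0, excess / 0.5)   # stronger press -> louder note
            notes.append((NOTE_FREQS_HZ[idx], volume))
    return notes

print(channels_to_notes([0.05] * 6 + [0.40] + [0.05] * 8))  # one "key" pressed
```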

A More Efficient Robot
Although remarkably novel, the optical lace isn’t yet ready for prime time. One problem is scalability: because of light loss, the material is limited to a certain size. However, rather than coating an entire robot, it may help to add optical lace to body parts where perception is critical—for example, fingertips and hands.

The team sees plenty of potential to keep developing the artificial flesh. Depending on particular needs, both the light guides and scaffold can be modified for sensitivity, spatial resolution, and accuracy. Multiple optical fibers that measure for different aspects—pressure, pain, temperature—can potentially be embedded in the same region, giving robots a multitude of senses.

In this way, the authors said, they hope to reduce the number of electronics and combine signals from multiple sensors without losing information. By taking inspiration from biological networks, it may even be possible to use various inputs through an optical lace to control how the robot behaves, closing the loop from sensation to action.

Image Credit: Cornell Organic Robotics Lab. A flexible, porous lattice structure is threaded with stretchable optical fibers containing more than a dozen mechanosensors and attached to an LED light. When the lattice structure is pressed, the sensors pinpoint changes in the photon flow.


#435806 Boston Dynamics’ Spot Robot Dog ...

Boston Dynamics is announcing this morning that Spot, its versatile quadruped robot, is now for sale. The machine’s animal-like behavior regularly electrifies crowds at tech conferences, and like other Boston Dynamics robots, Spot is a YouTube sensation whose videos amass millions of views.

Now anyone interested in buying a Spot—or a pack of them—can go to the company’s website and submit an order form. But don’t pull out your credit card just yet. Spot may cost as much as a luxury car, and it is not really available to consumers. The initial sale, described as an “early adopter program,” is targeting businesses. Boston Dynamics wants to find customers in select industries and help them deploy Spots in real-world scenarios.

“What we’re doing is the productization of Spot,” Boston Dynamics CEO Marc Raibert tells IEEE Spectrum. “It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field.”

Boston Dynamics has always been a secretive company, but last month, in preparation for launching Spot (formerly SpotMini), it allowed our photographers into its headquarters in Waltham, Mass., for a special shoot. In that session, we captured Spot and also Atlas—the company’s highly dynamic humanoid—in action, walking, climbing, and jumping.

You can see Spot’s photo interactives on our Robots Guide. (The Atlas interactives will appear in coming weeks.)

Gif: Bob O’Connor/Robots.ieee.org

And if you’re in the market for a robot dog, here’s everything we know about Boston Dynamics’ plans for Spot.

Who can buy a Spot?
If you’re interested in one, you should go to Boston Dynamics’ website and take a look at the information the company requires from potential buyers. Again, the focus is on businesses. Boston Dynamics says it wants to get Spots out to initial customers that “either have a compelling use case or a development team that we believe can do something really interesting with the robot,” says VP of business development Michael Perry. “Just because of the scarcity of the robots that we have, we’re going to have to be selective about which partners we start working together with.”

What can Spot do?
As you’ve probably seen on the YouTube videos, Spot can walk, trot, avoid obstacles, climb stairs, and much more. The robot’s hardware is almost completely custom, with powerful compute boards for control, and five sensor modules located on every side of Spot’s body, allowing it to survey the space around itself from any direction. The legs are powered by 12 custom motors with a reduction, with a top speed of 1.6 meters per second. The robot can operate for 90 minutes on a charge. In addition to the basic configuration, you can integrate up to 14 kilograms of extra hardware to a payload interface. Among the payload packages Boston Dynamics plans to offer are a 6 degrees-of-freedom arm, a version of which can be seen in some of the YouTube videos, and a ring of cameras called SpotCam that could be used to create Street View–type images inside buildings.

Image: Boston Dynamics

How do you control Spot?
Learning to drive the robot using its gaming-style controller “takes 15 seconds,” says CEO Marc Raibert. He explains that while teleoperating Spot, you may not realize that the robot is doing a lot of the work. “You don’t really see what that is like until you’re operating the joystick and you go over a box and you don’t have to do anything,” he says. “You’re practically just thinking about what you want to do and the robot takes care of everything.” The control methods have evolved significantly since the company’s first quadruped robots, machines like BigDog and LS3. “The control in those days was much more monolithic, and now we have what we call a sequential composition controller,” Raibert says, “which lets the system have control of the dynamics in a much broader variety of situations.” That means that every time one of Spot’s feet touches or doesn’t touch the ground, this different state of the body affects the basic physical behavior of the robot, and the controller adjusts accordingly. “Our controller is designed to understand what that state is and have different controls depending upon the case,” he says.
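
Raibert’s “sequential composition” idea, a different control law for each contact state with switching whenever a foot makes or breaks contact, can be sketched in a generic way. The following Python sketch is only an illustration of the concept with placeholder controllers; it is not Boston Dynamics’ implementation.

```python
from itertools import product

def make_placeholder_controller(contact_state):
    """Stand-in for a control law valid in one particular stance."""
    def controller(body_state, velocity_command):
        # A real controller would compute torques from the stance dynamics;
        # this placeholder just returns zeros for 12 actuated joints.
        return [0.0] * 12
    return controller

# One controller per contact state: 2^4 = 16 combinations of feet on/off the ground.
CONTROLLERS = {state: make_placeholder_controller(state)
               for state in product([False, True], repeat=4)}

def control_step(body_state, foot_contacts, velocity_command):
    """Each tick, re-detect which feet touch the ground and switch control laws."""
    controller = CONTROLLERS[tuple(foot_contacts)]
    return controller(body_state, velocity_command)

# Toy usage: trotting stance with two feet down, commanded 1 m/s forward.
torques = control_step(body_state={}, foot_contacts=(True, False, False, True),
                       velocity_command=(1.0, 0.0, 0.0))
print(len(torques))
```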

How much does Spot cost?
Boston Dynamics would not give us specific details about pricing, saying only that potential customers should contact them for a quote and that there is going to be a leasing option. It’s understandable: As with any expensive and complex product, prices can vary on a case by case basis and depend on factors such as configuration, availability, level of support, and so forth. When we pressed the company for at least an approximate base price, Perry answered: “Our general guidance is that the total cost of the early adopter program lease will be less than the price of a car—but how nice a car will depend on the number of Spots leased and how long the customer will be leasing the robot.”

Can Spot do mapping and SLAM out of the box?
The robot’s perception system includes cameras and 3D sensors (there is no lidar), used to avoid obstacles and sense the terrain so it can climb stairs and walk over rubble. It’s also used to create 3D maps. According to Boston Dynamics, the first software release will offer just teleoperation. But a second release, to be available in the next few weeks, will enable more autonomous behaviors. For example, it will be able to do mapping and autonomous navigation—similar to what the company demonstrated in a video last year, showing how you can drive the robot through an environment, create a 3D point cloud of the environment, and then set waypoints within that map for Spot to go out and execute that mission. For customers that have their own autonomy stack and are interested in using those on Spot, Boston Dynamics made it “as plug and play as possible in terms of how third-party software integrates into Spot’s system,” Perry says. This is done mainly via an API.

How does Spot’s API work?
Boston Dynamics built an API so that customers can create application-level products with Spot without having to deal with low-level control processes. “Rather than going and building joint-level kinematic access to the robot,” Perry explains, “we created a high-level API and SDK that allows people who are used to Web app development or development of missions for drones to use that same scope, and they’ll be able to build applications for Spot.”
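
To give a feel for what an application-level (rather than joint-level) interface means, here is a purely hypothetical mission script. Every class and method name below is invented for this illustration; it is not the real Spot SDK, whose documentation is the place to look for actual calls.

```python
# Hypothetical, simplified stand-in for a high-level robot client. The point is
# that the application says *what* to do; locomotion and balance stay on the robot.
class HypotheticalSpotClient:
    def __init__(self, address):
        self.address = address              # network address of the robot (assumed)
    def stand(self):
        print("stand up")
    def go_to(self, x, y):
        print(f"navigate to ({x}, {y})")    # path planning handled by the robot
    def capture_image(self, label):
        print(f"capture image: {label}")
    def return_to_dock(self):
        print("return to dock")

def inspect_site(address, waypoints):
    """Application-level mission: visit named waypoints and photograph each one."""
    robot = HypotheticalSpotClient(address)
    robot.stand()
    for label, (x, y) in waypoints.items():
        robot.go_to(x, y)
        robot.capture_image(label)
    robot.return_to_dock()

inspect_site("192.168.80.3", {"pump_room": (4.0, 1.5), "valve_bank": (9.2, -2.0)})
```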

What applications should we see first?
Boston Dynamics envisions Spot as a platform: a versatile mobile robot that companies can use to build applications based on their needs. What types of applications? The company says the best way to find out is to put Spot in the hands of as many users as possible and let them develop the applications. Some possibilities include performing remote data collection and light manipulation in construction sites; monitoring sensors and infrastructure at oil and gas sites; and carrying out dangerous missions such as bomb disposal and hazmat inspections. There are also other promising areas such as security, package delivery, and even entertainment. “We have some initial guesses about which markets could benefit most from this technology, and we’ve been engaging with customers doing proof-of-concept trials,” Perry says. “But at the end of the day, that value story is really going to be determined by people going out and exploring and pushing the limits of the robot.”

Photo: Bob O'Connor

How many Spots have been produced?
Last June, Boston Dynamics said it was planning to build about a hundred Spots by the end of the year, eventually ramping up production to a thousand units per year by the middle of this year. The company admits that it is not quite there yet. It has built close to a hundred beta units, which it has used to test and refine the final design. This version is now being mass manufactured, but the company is still “in the early tens of robots,” Perry says.

How did Boston Dynamics test Spot?
The company has tested the robots during proof-of-concept trials with customers, and at least one is already using Spot to survey construction sites. It has also done reliability tests at its facility in Waltham, Mass. “We drive around, not quite day and night, but hundreds of miles a week, so that we can collect reliability data and find bugs,” Raibert says.

What about competitors?
In recent years, there’s been a proliferation of quadruped robots that will compete in the same space as Spot. The most prominent of these is ANYmal, from ANYbotics, a Swiss company that spun out of ETH Zurich. Other quadrupeds include Vision from Ghost Robotics, used by one of the teams in the DARPA Subterranean Challenge; and Laikago and Aliengo from Unitree Robotics, a Chinese startup. Raibert views the competition as a positive thing. “We’re excited to see all these companies out there helping validate the space,” he says. “I think we’re more in competition with finding the right need [that robots can satisfy] than we are with the other people building the robots at this point.”

Why is Boston Dynamics selling Spot now?
Boston Dynamics has long been an R&D-centric firm, with most of its early funding coming from military programs, but it says commercializing robots has always been a goal. Productizing its machines probably accelerated when the company was acquired by Google’s parent company, Alphabet, which had an ambitious (and now apparently very dead) robotics program. The commercial focus likely continued after Alphabet sold Boston Dynamics to SoftBank, whose famed CEO, Masayoshi Son, is known for his love of robots—and profits.

Which should I buy, Spot or Aibo?
Don’t laugh. We’ve gotten emails from individuals interested in purchasing a Spot for personal use after seeing our stories on the robot. Alas, Spot is not a bigger, fancier Aibo pet robot. It’s an expensive, industrial-grade machine that requires development and maintenance. If you’re, say, Jeff Bezos, you could probably convince Boston Dynamics to sell you one, but otherwise the company will prioritize businesses.

What’s next for Boston Dynamics?
On the commercial side of things, other than Spot, Boston Dynamics is interested in the logistics space. Earlier this year it announced the acquisition of Kinema Systems, a startup that had developed vision sensors and deep-learning software to enable industrial robot arms to locate and move boxes. There’s also Handle, the mobile robot on whegs (wheels + legs), that can pick up and move packages. Boston Dynamics is hiring both in Waltham, Mass., and Mountain View, Calif., where Kinema was located.

Okay, can I watch a cool video now?
During our visit to Boston Dynamics’ headquarters last month, we saw Atlas and Spot performing some cool new tricks that we unfortunately are not allowed to tell you about. We hope that, although the company is putting a lot of energy and resources into its commercial programs, Boston Dynamics will still find plenty of time to improve its robots, build new ones, and of course, keep making videos. [Update: The company has just released a new Spot video, which we’ve embedded at the top of the post.][Update 2: We should have known. Boston Dynamics sure knows how to create buzz for itself: It has just released a second video, this time of Atlas doing some of those tricks we saw during our visit and couldn’t tell you about. Enjoy!]

[ Boston Dynamics ]
