#437896 Solar-based Electronic Skin Generates ...

Replicating the human sense of touch is complicated—electronic skins need to be flexible, stretchable, and sensitive to temperature, pressure, and texture; they need to be able to read biological data and provide electronic readouts. On top of all that, powering electronic skin for continuous, real-time use is a major challenge.

To address this, researchers at the University of Glasgow have developed an energy-generating e-skin made out of miniaturized solar cells, without dedicated touch sensors. The solar cells not only generate their own power—and some surplus—but also provide tactile capabilities for touch and proximity sensing. An early-view paper of their findings was published in IEEE Transactions on Robotics.

When exposed to a light source, the solar cells on the e-skin generate energy. If a cell is shadowed by an approaching object, the intensity of the light, and therefore the energy generated, drops, reaching zero when the cell makes contact with the object, confirming touch. In proximity mode, the light intensity indicates how far the object is from the cell. “In real time, you can then compare the light intensity…and after calibration find out the distances,” says Ravinder Dahiya of the Bendable Electronics and Sensing Technologies (BEST) Group, James Watt School of Engineering, University of Glasgow, where the study was carried out. For better proximity-sensing results, the team paired the solar cells with infrared LEDs.
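To make the sensing scheme concrete, here is a rough sketch (ours, not the BEST group's actual code) of how a single cell's normalized photocurrent could be mapped to touch and proximity estimates. The calibration table, threshold, and function names are invented for illustration:

```python
import numpy as np

# Hypothetical calibration: normalized cell output measured at known
# distances under a fixed light source. These values are illustrative only.
CAL_DISTANCE_MM = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
CAL_INTENSITY = np.array([0.0, 0.15, 0.35, 0.60, 0.85, 1.0])

TOUCH_THRESHOLD = 0.02  # near-zero output is treated as physical contact


def read_cell(intensity):
    """Classify one cell's reading as touch, proximity (with a distance
    estimate interpolated from the calibration curve), or clear."""
    if intensity < TOUCH_THRESHOLD:
        return "touch", 0.0
    if intensity < CAL_INTENSITY[-1]:
        distance_mm = float(np.interp(intensity, CAL_INTENSITY, CAL_DISTANCE_MM))
        return "proximity", distance_mm
    return "clear", float("inf")


print(read_cell(0.01))  # ('touch', 0.0)
print(read_cell(0.45))  # ('proximity', 14.0) -- object roughly 14 mm away
```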

To demonstrate their concept, the researchers wrapped a generic 3D-printed robotic hand in their solar skin, which was then recorded interacting with its environment. The proof-of-concept tests showed an energy surplus of 383.3 mW from the palm of the robotic hand. “The eSkin could generate more than 100 W if present over the whole body area,” they reported in their paper.
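The jump from a palm-sized patch to more than 100 W is straightforward area scaling. Here is the back-of-the-envelope version; the patch and body-surface areas below are our assumptions, not numbers from the paper:

```python
palm_surplus_w = 0.3833  # 383.3 mW surplus reported for the palm patch
palm_area_m2 = 0.005     # assumed palm-sized patch of ~50 cm^2 (our guess)
body_area_m2 = 1.8       # assumed humanoid body surface area (our guess)

whole_body_w = palm_surplus_w * (body_area_m2 / palm_area_m2)
print(f"~{whole_body_w:.0f} W")  # ~138 W, consistent with the >100 W claim
```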

“If you look at autonomous, battery-powered robots, putting an electronic skin [that] is consuming energy is a big problem because then it leads to reduced operational time,” says Dahiya. “On the other hand, if you have a skin which generates energy, then…it improves the operational time because you can continue to charge [during operation].” In essence, he says, they turned a challenge—how to power the skin’s large surface area—into an opportunity, by making the skin itself an energy-generating resource.

Dahiya envisages numerous applications for BEST’s innovative e-skin, given its material-integrated sensing capabilities, apart from the obvious use in robotics. For instance, in prosthetics: “[As] we are using [a] solar cell as a touch sensor itself…we are also [making it] less bulky than other electronic skins.” This, he adds, will help create prosthetics of optimal weight and size, making life easier for prosthetics users. “If you look at electronic skin research, the real action starts after it makes contact… Solar skin is a step ahead, because it will start to work when the object is approaching…[and] have more time to prepare for action.” This could effectively reduce the time lag that is often seen in brain–computer interfaces.

There are also possibilities in the automotive sector, particularly in electric and interactive vehicles. A car covered with solar e-skin, because of its proximity-sensing capabilities, would be able to “see” an approaching obstacle or a person. It isn’t “seeing” in the biological sense, Dahiya clarifies, but from the point of view of a machine. This can be integrated with other objects, not just cars, for a variety of uses. “Gestures can be recognized as well…[which] could be used for gesture-based control…in gaming or in other sectors.”

In the lab, tests were conducted with a single source of white light at 650 lux, but Dahiya feels there are interesting possibilities if they could work with multiple light sources that the e-skin could differentiate between. “We are exploring different AI techniques [for that],” he says, “processing the data in an innovative way [so] that we can identify the directions of the light sources as well as the object.”

The BEST team’s achievement brings us closer to a flexible, self-powered, cost-effective electronic skin that can touch as well as “see.” At the moment, however, there are still some challenges. One of them is flexibility. In their prototype, they used commercial solar cells made of amorphous silicon, each 1 cm × 1 cm. “They are not flexible, but they are integrated on a flexible substrate,” Dahiya says. “We are currently exploring nanowire-based solar cells…[with which] we hope to achieve good performance in terms of energy as well as sensing functionality.” Another shortcoming is what Dahiya calls “the integration challenge”—how to make the solar skin work with different materials.

#437884 Hyundai Buys Boston Dynamics for Nearly ...

This morning just after 3 a.m. ET, Boston Dynamics sent out a media release confirming that Hyundai Motor Group has acquired a controlling interest in the company that values Boston Dynamics at US $1.1 billion:

Under the agreement, Hyundai Motor Group will hold an approximately 80 percent stake in Boston Dynamics and SoftBank, through one of its affiliates, will retain an approximately 20 percent stake in Boston Dynamics after the closing of the transaction.

The release is very long, but does have some interesting bits—we’ll go through them, and talk about what this might mean for both Boston Dynamics and Hyundai.

We’ve asked Boston Dynamics for comment, but they’ve been unusually quiet for the last few days (I wonder why!). So at this point just keep in mind that the only things we know for sure are the ones in the release. If (when?) we hear anything from either Boston Dynamics or Hyundai, we’ll update this post.

The first thing to be clear on is that the acquisition is split between Hyundai Motor Group’s affiliates, including Hyundai Motor, Hyundai Mobis, and Hyundai Glovis. Hyundai Motor makes cars, Hyundai Mobis makes car parts and seems to be doing some autonomous stuff as well, and Hyundai Glovis does logistics. There are many other groups that share the Hyundai name, but they’re separate entities, at least on paper. For example, there’s a Hyundai Robotics, but that’s part of Hyundai Heavy Industries, a different company than Hyundai Motor Group. But for this article, when we say “Hyundai,” we’re talking about Hyundai Motor Group.

What’s in it for Hyundai?
Let’s get into the press release, which is filled with press release-y terms like “synergies” and “working together”—you can view the whole thing here—but still has some parts that convey useful info.

By establishing a leading presence in the field of robotics, the acquisition will mark another major step for Hyundai Motor Group toward its strategic transformation into a Smart Mobility Solution Provider. To propel this transformation, Hyundai Motor Group has invested substantially in development of future technologies, including in fields such as autonomous driving technology, connectivity, eco-friendly vehicles, smart factories, advanced materials, artificial intelligence (AI), and robots.

If Hyundai wants to be a “Smart Mobility Solution Provider” with a focus on vehicles, it really seems like there’s a whole bunch of other ways they could have spent most of a billion dollars that would get them there quicker. Will Boston Dynamics’ expertise help them develop autonomous driving technology? Sure, I guess, but why not just buy an autonomous car startup instead? Boston Dynamics is more about “robots,” which happens to be dead last on the list above.

There was some speculation a couple of weeks ago that Hyundai was going to try and leverage Boston Dynamics to make a real version of this hybrid wheeled/legged concept car, so if that’s what Hyundai means by “Smart Mobility Solution Provider,” then I suppose the Boston Dynamics acquisition makes more sense. Still, I think that’s unlikely, because it’s just a concept car, after all.

In addition to “smart mobility,” which seems like a longer-term goal for Hyundai, the company also mentions other, more immediate benefits from the acquisition:

Advanced robotics offer opportunities for rapid growth with the potential to positively impact society in multiple ways. Boston Dynamics is the established leader in developing agile, mobile robots that have been successfully integrated into various business operations. The deal is also expected to allow Hyundai Motor Group and Boston Dynamics to leverage each other’s respective strengths in manufacturing, logistics, construction and automation.

“Successfully integrated” might be a little optimistic here. They’re talking about Spot, of course, but I think the best you could say at this point is that Spot is in the middle of some promising pilot projects. Whether it’ll be successfully integrated in the sense that it’ll have long-term commercial usefulness and value remains to be seen. I’m optimistic about this as well, but Spot is definitely not there yet.

What does probably hold a lot of value for Hyundai is getting Spot, Pick, and perhaps even Handle into that “manufacturing, logistics, construction” stuff. This is the bread and butter for robots right now, and Boston Dynamics has plenty of valuable technology to offer in those spaces.

Photo: Bob O’Connor

Boston Dynamics is selling Spot for $74,500, shipping included.

Betting on Spot and Pick
With Boston Dynamics founder Marc Raibert’s transition to Chairman of the company, the CEO position is now occupied by Robert Playter, the long-time VP of engineering and more recently COO at Boston Dynamics. Here’s his statement from the release:

“Boston Dynamics’ commercial business has grown rapidly as we’ve brought to market the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility. We and Hyundai share a view of the transformational power of mobility and look forward to working together to accelerate our plans to enable the world with cutting edge automation, and to continue to solve the world’s hardest robotics challenges for our customers.”

Whether Spot is in fact “the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility” on the market is perhaps something that could be argued against, although I won’t. Whether or not it was the first robot that can do these kinds of things, it’s definitely not the only robot that can do these kinds of things, and going forward, it’s going to be increasingly challenging for Spot to maintain its uniqueness.

For a long time, Boston Dynamics totally owned the quadruped space. Now, they’re one company among many—ANYbotics and Unitree are just two examples of companies successfully commercializing quadrupeds of their own. Spot is certainly very capable and easy to use, and we shouldn’t underestimate the effort required to create a robot as complex as Spot that can be commercially used and supported. But it’s not clear how long they’ll maintain that advantage, with much more affordable platforms coming out of Asia, and other companies offering some unique new capabilities.

Photo: Boston Dynamics

Boston Dynamics’ Handle is an all-electric robot featuring a leg-wheel hybrid mobility system, a manipulator arm with a vacuum gripper, and a counterbalancing tail.

Boston Dynamics’ picking system, which stemmed from their 2019 acquisition of Kinema Systems, faces the same kinds of challenges—it’s very good, but it’s not totally unique.

Boston Dynamics produces highly capable mobile robots with advanced mobility, dexterity and intelligence, enabling automation in difficult, dangerous, or unstructured environments. The company launched sales of its first commercial robot, Spot, in June of 2020 and has since sold hundreds of robots in a variety of industries, such as power utilities, construction, manufacturing, oil and gas, and mining. Boston Dynamics plans to expand the Spot product line early next year with an enterprise version of the robot with greater levels of autonomy and remote inspection capabilities, and the release of a robotic arm, which will be a breakthrough in mobile manipulation.

Boston Dynamics is also entering the logistics automation market with the industry leading Pick, a computer vision-based depalletizing solution, and will introduce a mobile robot for warehouses in 2021.

Huh. We’ll be trying to figure out what “greater levels of autonomy” means, as well as whether the “mobile robot for warehouses” is Handle, or something more like an autonomous mobile robot (AMR) platform. I’d honestly be surprised if Handle was ready for work outside of Boston Dynamics next year, and it’s hard to imagine how Boston Dynamics could leverage their expertise into the AMR space with something that wouldn’t just seem… dull, compared to what they usually do. I hope to be surprised, though!

A new deep-pocketed benefactor

Hyundai Motor Group’s decision to acquire Boston Dynamics is based on its growth potential and wide range of capabilities.

“Wide range of capabilities” we get, but that other phrase, “growth potential,” has a heck of a lot wrapped up in it. At the moment, Boston Dynamics is nowhere near profitable, as far as we know. SoftBank acquired Boston Dynamics in 2017 for between one hundred and two hundred million, and over the last three years they’ve poured hundreds of millions more into Boston Dynamics.

Hyundai’s 80 percent stake just means that they’ll need to take over the majority of that support, and perhaps even increase it if Boston Dynamics’ growth is one of their primary goals. Hyundai can’t have a reasonable expectation that Boston Dynamics will be profitable any time soon; they’re selling Spots now, but it’s an open question whether Spot will manage to find a scalable niche in which it’ll be useful in the sort of volume that will make it a sustainable commercial success. And even if it does become a success, it seems unlikely that Spot by itself will make a significant dent in Boston Dynamics’ burn rate anytime soon. Boston Dynamics will have more products of course, but it’s going to take a while, and Hyundai will need to support them in the interim.

It’s become clear that to sustain itself, Boston Dynamics needs a benefactor with very deep pockets and a long time horizon. Initially, Boston Dynamics’ business model (or whatever you want to call it) was to do bespoke projects for defense-ish folks like DARPA, but from what we understand Boston Dynamics stopped that sort of work after Google acquired them back in 2013. From one perspective, that government funding did exactly what it was supposed to do, which was to fund the development of legged robots through low TRLs (technology readiness levels) to the point where they could start to explore commercialization.

The question now, though, is whether Hyundai is willing to let Boston Dynamics undertake the kinds of low-TRL, high-risk projects that led from BigDog to LS3 to Spot, and from PETMAN to DRC Atlas to the current Atlas. So will Hyundai be cool about the whole thing and be the sort of benefactor that’s willing to give Boston Dynamics the resources that they need to keep doing what they’re doing, without having to answer too many awkward questions about things like practicality and profitability? Hyundai can certainly afford to do this, but so could SoftBank, and Google—the question is whether Hyundai will want to, over the length of time that’s required for the development of the kind of ultra-sophisticated robotics hardware that Boston Dynamics specializes in.

To put it another way: Depending on whether Hyundai sees Boston Dynamics as a company that does research or as a company that makes robots that are useful and profitable, it may be difficult for Boston Dynamics to justify the cost of developing the next Atlas when the current one still seems so far from commercialization.

Google, SoftBank, now Hyundai

Boston Dynamics possesses multiple key technologies for high-performance robots equipped with perception, navigation, and intelligence.

Hyundai Motor Group’s AI and Human Robot Interaction (HRI) expertise is highly synergistic with Boston Dynamics’s 3D vision, manipulation, and bipedal/quadruped expertise.

As it turns out, Hyundai Motors does have its own robotics lab, called Hyundai Motors Robotics Lab. Their website is not all that great, but here’s a video from last year:

I’m not entirely clear on what Hyundai means when they use the word “synergistic” when they talk about their robotics lab and Boston Dynamics, but it’s a little bit concerning. Usually, when a big company buys a little company that specializes in something that the big company is interested in, the idea is that the little company, to some extent, will be absorbed into the big company to give them some expertise in that area. Historically, however, Boston Dynamics has been highly resistant to this, maintaining its post-acquisition independence and appearing to be very reluctant to do anything besides what it wants to do, at whatever pace it wants to do it, and as much on its own as possible.

From what we understand, Boston Dynamics didn’t integrate particularly well with Google’s robotics push in 2013, and we haven’t seen much evidence that SoftBank’s experience was much different. The most direct benefit to SoftBank (or at least the most visible one) was the addition of a fleet of Spot robots to the SoftBank Hawks baseball team cheerleading squad, along with a single (that we know about) choreographed gymnastics routine from an Atlas robot that was only shown on video.

And honestly, if you were a big manufacturing company with a bunch of money and you wanted to build up your own robotics program quickly, you’d probably have much better luck picking up some smaller robotics companies that are a bit less individualistic, more amenable to integration, and cost way less than a billion dollars-ish. And if integration is ultimately Hyundai’s goal, we’ll be very sad, because it’ll likely signal the end of Boston Dynamics doing the unfettered crazy stuff that we’ve grown to love.

Photo: Bob O’Connor

Possibly the most agile humanoid robot ever built, Atlas can run, climb, jump over obstacles, and even get up after a fall.

Boston Dynamics contemplates its future

The release ends by saying that the transaction is “subject to regulatory approvals and other customary closing conditions” and “is expected to close by June of 2021.” Again, you can read the whole thing here.

My initial reaction is that, despite the “synergies” described by Hyundai, it’s certainly not immediately obvious why the company wants to own 80 percent of Boston Dynamics. I’d also like a better understanding of how they arrived at the $1.1 billion valuation. I’m not saying this because I don’t believe in what Boston Dynamics is doing or in the inherent value of the company, because I absolutely do, albeit perhaps in a slightly less tangible sense. But when you start tossing around numbers like these, a big pile of expectations inevitably comes along with them. I hope that Boston Dynamics is unique enough that the kinds of rules that normally apply to robotics companies (or companies in general) can be set aside, at least somewhat, but I also worry that what made Boston Dynamics great was the explicit funding for the kinds of radical ideas that eventually resulted in robots like Atlas and Spot.

Can Hyundai continue giving Boston Dynamics the support and freedom that they need to keep doing the kinds of things that have made them legendary? I certainly hope so.

#437859 We Can Do Better Than Human-Like Hands ...

One strategy for designing robots that are capable in anthropomorphic environments is to make the robots themselves as anthropomorphic as possible. It makes sense—for example, there are stairs all over the place because humans have legs, and legs are good at stairs, so if we give robots legs like humans, they’ll be good at stairs too, right? We also see this tendency when it comes to robotic grippers, because robots need to grip things that have been optimized for human hands.

Despite some amazing robotic hands inspired by the biology of our own human hands, there are also opportunities for creativity in gripper designs that do things human hands are not physically capable of. At ICRA 2020, researchers from Stanford University presented a paper on the design of a robotic hand that has fingers made of actuated rollers, allowing it to manipulate objects in ways that would tie your fingers into knots.

While it’s got a couple of fingers, this prototype “roller grasper” hand tosses anthropomorphic design out the window in favor of unique methods of in-hand manipulation. The roller grasper does share some features with other grippers designed for in-hand manipulation using active surfaces (like conveyor belts embedded in fingers), but what’s new and exciting here is that those articulated active roller fingertips (or whatever non-anthropomorphic name you want to give them) provide active surfaces that are steerable. This means that the hand can grasp objects and rotate them without having to resort to complex sequences of finger repositioning, which is how humans do it.

Photo: Stanford University

Things like picking something flat off of a table, always tricky for robotic hands (and sometimes for human hands as well), are a breeze thanks to the fingertip rollers.

Each of the hand’s fingers has three actuated degrees of freedom, which result in several different ways in which objects can be grasped and manipulated. Things like picking something flat off of a table, always tricky for robotic hands (and sometimes for human hands as well), are a breeze thanks to the fingertip rollers. The motion of an object in this gripper isn’t quite holonomic, meaning that it can’t arbitrarily reorient things without sometimes going through other intermediate steps. And it’s also not compliant in the way that many other grippers are, limiting some types of grasps. This particular design probably won’t replace every gripper out there, but it’s particularly skilled at some specific kinds of manipulations in a way that makes it unique.
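To get a feel for why spinning fingertips enable continuous rotation, consider the simplest case: a cylindrical object pinched between rollers, with rolling contact and no slip. The toy calculation below is our own, not code from the paper. Steering a roller’s spin axis changes the direction of its surface velocity, which is what lets the hand rotate an object about different axes without regrasping.

```python
def object_spin_rate(roller_radius_m, roller_omega_rad_s, object_radius_m):
    """Angular velocity imparted to a pinched cylinder by a spinning roller.
    With rolling contact and no slip, the surface speeds must match at the
    contact point."""
    surface_speed = roller_radius_m * roller_omega_rad_s  # m/s at the contact
    return surface_speed / object_radius_m                # rad/s of the object


# Example: 1 cm rollers spinning at 2 rad/s turning a 3-cm-radius object.
print(object_spin_rate(0.01, 2.0, 0.03))  # ~0.67 rad/s, sustained indefinitely
```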

We should be clear that it’s not the intent of this paper (or of this article!) to belittle five-fingered robotic hands—the point is that there are lots of things that you can do with totally different hand designs, and just because humans use one kind of hand doesn’t mean that robots need to do the same if they want to match (or exceed) some specific human capabilities. If we could make robotic hands with five fingers that had all of the actuation and sensing and control that our own hands do, that would be amazing, but it’s probably decades away. In the meantime, there are plenty of different designs to explore.

And speaking of exploring different designs, these same folks are already at work on version two of their hand, which replaces the fingertip rollers with fingertip balls:

For more on this new version of the hand (among other things), we spoke with lead author Shenli Yuan via email. And the ICRA page is here if you have questions of your own.

IEEE Spectrum: Human hands are often seen as the standard for manipulation. Given that adding degrees of freedom that human hands don’t have (as in your work) can make robotic hands more capable than ours in many ways, do you think we should still treat human hands as something to try and emulate?

Shenli Yuan: Yes, definitely. Not only because human hands have great manipulation capability, but because we’re constantly surrounded by objects that were designed and built specifically to be manipulated by the human hand. Anthropomorphic robot hands are still worth investigating, and still have a long way to go before they truly match the dexterity of a human hand. The design we came up with is an exploration of what unique capabilities may be achieved if we are not bound by the constraints of anthropomorphism, and what a biologically impossible mechanism may achieve in robotic manipulation. In addition, for lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much. The design constraints for robotics and biology have points in common (like mechanical wear and finite tendon stiffness) but also major differences (like continuous rotation for robots and fewer heat-dissipation problems for humans).

“For lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much.”
—Shenli Yuan, Stanford University

What are some manipulation capabilities of human hands that are the most difficult to replicate with your system?

There are a few things that come to mind. It cannot perform a power grasp (using the whole hand for grasping, as opposed to a pinch grasp that uses only the fingertips), which is something that can be easily done by human hands. It cannot move or rotate objects instantaneously in arbitrary directions or about arbitrary axes, though the human hand is somewhat limited in this respect as well. It also cannot perform gaiting. That being said, these limitations exist largely because this grasper only has 9 degrees of freedom, as opposed to the human hand, which has more than 20. We don’t think of this grasper as a replacement for anthropomorphic hands, but rather as a way to provide unique capabilities without all of the complexity associated with a highly actuated, humanlike hand.

What’s the most surprising or impressive thing that your hand is able to do?

The most impressive feature is that it can rotate objects continuously, which is typically difficult or inefficient for humanlike robot hands. Something really surprising was that we put most of our energy into the design and analysis of the grasper, and the control strategy we implemented for demonstrations is very simple. This simple control strategy works surprisingly well with very little tuning or trial-and-error.

With this many degrees of freedom, how complicated is it to get the hand to do what you want it to do?

The number of degrees of freedom is actually not what makes controlling it difficult. Most of the difficulties we encountered were actually due to the rolling contact between the rollers and the object during manipulation. The rolling behavior can be viewed as constantly breaking and re-establishing contacts between the rollers and objects; this very dynamic behavior introduces uncertainties in controlling our grasper. Specifically, it was difficult to estimate the velocity of each contact point with the object, which changes based on object and finger position, object shape (especially curvature), and slip/no slip.
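To illustrate where that uncertainty enters, here is a minimal sketch (our construction, with invented names) of the contact-velocity estimate in the no-slip case. The contact offset shifts with object pose and curvature, and the slip term is unobserved, which is exactly what makes the estimate hard in practice:

```python
import numpy as np


def contact_point_velocity(omega_roller, contact_offset, slip=np.zeros(3)):
    """Velocity of the roller's material point at the contact, expressed in
    the roller frame: v = omega x r, plus any (unobserved) slip velocity.
    omega_roller: angular velocity vector in rad/s.
    contact_offset: vector from roller center to contact point in meters."""
    return np.cross(omega_roller, contact_offset) + slip


# Roller spinning at 2 rad/s about z, contact 1 cm from its center along x:
v = contact_point_velocity(np.array([0.0, 0.0, 2.0]), np.array([0.01, 0.0, 0.0]))
print(v)  # [0.   0.02 0.  ] -> 2 cm/s tangential speed at the contact
```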

What more can you tell us about Roller Grasper V2?

Roller Grasper V2 has spherical rollers, while V1 has cylindrical rollers. We realized that cylindrical rollers are very good at manipulating objects when the rollers and the object form line contacts, but grasps can be unstable when the grasp geometry doesn’t allow for a line contact between each roller and the grasped object. Spherical rollers solve that problem by allowing predictable points of contact regardless of how a surface is oriented.

The parallelogram mechanism of Roller Grasper V1 makes the pivot axis offset a bit from the center of the roller, which made our control and analysis more challenging. The kinematics of the Roller Grasper V2 is simpler. The base joint intersects with the finger, which intersects with the pivot joint, and the pivot joint intersects with the roller joint. Its symmetrical design and simpler kinematics make our control and analysis a lot more straightforward. Roller Grasper V2 also has a larger pivot range of 180 degrees, while V1 is limited to 90 degrees.

In terms of control, we implemented more sophisticated control strategies (including a hand-crafted control strategy and an imitation learning based strategy) for the grasper to perform autonomous in-hand manipulation.
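Because consecutive axes intersect in the chain Yuan describes above (base joint, then pivot, then roller), the forward kinematics of one V2 finger reduces to composing three rotations. Here is a minimal sketch under our own assumed axis conventions, which are not the paper’s actual frames:

```python
import numpy as np


def rot(axis, angle):
    """Rotation matrix about a unit axis, via Rodrigues' formula."""
    ax, ay, az = axis
    K = np.array([[0.0, -az, ay],
                  [az, 0.0, -ax],
                  [-ay, ax, 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)


def roller_orientation(q_base, q_pivot, q_roller):
    """Orientation of one finger's roller. Axis assignments (z for the base
    joint, x for the pivot, y for the roller spin) are our assumptions."""
    return rot((0, 0, 1), q_base) @ rot((1, 0, 0), q_pivot) @ rot((0, 1, 0), q_roller)


# Pivoting 60 degrees re-aims the roller's spin axis (local y); V2's pivot
# can sweep this axis through its full 180-degree range.
R = roller_orientation(0.0, np.deg2rad(60), 0.0)
print(np.round(R @ np.array([0.0, 1.0, 0.0]), 3))  # [0.    0.5   0.866]
```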

“Design of a Roller-Based Dexterous Hand for Object Grasping and Within-Hand Manipulation,” by Shenli Yuan, Austin D. Epps, Jerome B. Nowak, and J. Kenneth Salisbury from Stanford University was presented at ICRA 2020.

#437851 Boston Dynamics’ Spot Robot Dog ...

Boston Dynamics has been fielding questions about when its robots are going to go on sale and how much they’ll cost for at least a dozen years now. I can say this with confidence, because that’s how long I’ve been a robotics journalist, and I’ve been pestering them about it the entire time. But it’s only relatively recently that the company started to make a concerted push away from developing robots exclusively for the likes of DARPA into platforms with more commercial potential, starting with a compact legged robot called Spot, first introduced in 2016.

Since then, we’ve been following closely as Spot has gone from a research platform to a product, and today, Boston Dynamics is announcing the final step in that process: commercial availability. You can now order a Spot Explorer Kit from the Boston Dynamics online store for US $74,500 (plus tax), shipping included, with delivery in 6 to 8 weeks. FINALLY!

Over the past 10 months or so, Boston Dynamics has leased Spot robots to carefully selected companies, research groups, and even a few individuals as part of their early adopter program—that’s where all of the clips in the video below came from. While there are over 100 Spots out in the world right now, getting one of them has required convincing Boston Dynamics up front that you knew more or less exactly what you wanted to do and how you wanted to do it. If you’re a big construction company or the Jet Propulsion Laboratory or Adam Savage, that’s all well and good, but for other folks who think that a Spot could be useful for them somehow and want to give it a shot, this new availability provides a fewer-strings-attached opportunity to do some experimentation with the robot.

There’s a lot of cool stuff going on in that video, but we were told that the one thing that really stood out to the folks at Boston Dynamics was a 2-second clip that you can see on the left-hand side of the screen from 0:19 to 0:21. In it, Spot is somehow managing to walk across a spider web of rebar without getting tripped up, at faster than human speed. This isn’t something that Spot was specifically programmed to do, and in fact the Spot User Guide specifically identifies “rebar mesh” as an unsafe operating environment. But the robot just handles it, and that’s a big part of what makes Spot so useful—its ability to deal with (almost) whatever you can throw at it.

Before you get too excited, Boston Dynamics is fairly explicit that the current license for the robot is intended for commercial use, and the company specifically doesn’t want people to be just using it at home for fun. We know this because we asked (of course we asked), and they told us “we specifically don’t want people to just be using it at home for fun.” Drat. You can still buy one as an individual, but you have to promise that you’ll follow the terms of use and user guidelines, and it sounds like using a robot in your house might be the second-fastest way to invalidate your warranty:

SPOT IS AN AMAZING ROBOT, BUT IS NOT CERTIFIED SAFE FOR IN-HOME USE OR INTENDED FOR USE NEAR CHILDREN OR OTHERS WHO MAY NOT APPRECIATE THE HAZARDS ASSOCIATED WITH ITS OPERATION.

Not being able to get Spot to play with your kids may be disappointing, but for those of you with the sort of kids who are also students, the good news is that Boston Dynamics has carved out a niche for academic institutions, which can buy Spot at a discounted price. And if you want to buy a whole pack of Spots, there’s a bulk discount for Enterprise users as well.

What do you get for $74,500? All this!

Spot robot
Spot battery (2x)
Spot charger
Tablet controller and charger
Robot case for storage and transportation
FREE SHIPPING!

Photo: Boston Dynamics

The basic package includes the robot, two batteries, charger, a tablet controller, and a storage case.

You can view detailed specs here.

So is $75k a lot of money for a robot like Spot, or not all that much? We don’t have many useful points of comparison, partially because it’s not clear to what extent other pre-commercial quadrupedal robots (like ANYmal or Aliengo) share capabilities and features with Spot. For more perspective on Spot’s price tag, we spoke to Michael Perry, vice president of business development at Boston Dynamics.

IEEE Spectrum: Why is Spot so affordable?

Michael Perry: The main goal of selling the robot at this stage is to try to get it into the hands of as many application developers as possible, so that we can learn from the community what the biggest driver of value is for Spot. As a platform, unlocking the value of an ecosystem is our core focus right now.

Spectrum: Why is Spot so expensive?

Perry: Expensive is relative, but compared to the initial prototypes of Spot, we’ve been able to drop down the cost pretty significantly. One key thing has been designing it for robustness—we’ve put hundreds and hundreds of hours on the robot to make sure that it’s able to be successful when it falls, or when it has an electrostatic discharge. We’ve made sure that it’s able to perceive a wide variety of environments that are difficult for traditional vision-based sensors to handle. A lot of that engineering is baked into the core product so that you don’t have to worry about the mobility or robotic side of the equation, you can just focus on application development.

Photos: Boston Dynamics

Accessories for Spot include [clockwise from top left]: Spot GXP with additional ports for payload integration; Spot CAM with panorama camera and advanced comms; Spot CAM+ with pan-tilt-zoom camera for inspections; Spot EAP with lidar to enhance autonomy on large sites; Spot EAP+ with Spot CAM camera plus lidar; and Spot CORE for additional processing power.

The $75k that you’ll pay for the Spot Explorer Kit, it’s important to note, is just the base price for the robot. As with other things that fall into this price range (like a luxury car), there are all kinds of fun ways to drive that cost up with accessories, although for Spot, some of those accessories will be necessary for many (if not most) applications. For example, a couple of expansion ports to make it easier to install your own payloads on Spot will run you $1,275. An additional battery is $4,620. And if you want to really get some work done, the Enhanced Autonomy Package (with 360 cameras, lights, better comms, and a Velodyne VLP-16) will set you back an additional $34,570. If you were hoping for an arm, you’ll have to wait until the end of the year.

Each Spot also includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship” not “I drove my robot off a cliff” or “I tried to take my robot swimming.” For that sort of thing (user error) to be covered, you’ll need to upgrade to the $12,000 Spot CARE premium service plan to cover your robot for a year as long as you don’t subject it to willful abuse, which both of those examples I just gave probably qualify as.
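For a rough sense of how a fully kitted-out robot adds up, here is a quick tally of the list prices quoted above (the structure and item names are our own shorthand):

```python
# List prices quoted in this article, in USD, before tax.
PRICES = {
    "Spot Explorer Kit": 74_500,
    "GXP expansion ports": 1_275,
    "extra battery": 4_620,
    "Enhanced Autonomy Package": 34_570,
    "Spot CARE service plan (1 year)": 12_000,
}

config = list(PRICES)  # everything on the menu
total = sum(PRICES[item] for item in config)
print(f"Total: ${total:,}")  # Total: $126,965
```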

While we’re on the subject of robot abuse, Boston Dynamics has very sensibly devoted a substantial amount of the Spot User Guide to help new users understand how they should not be using their robot, in order to “lessen the risk of serious injury, death, or robot and other property damage.” According to the guide, some things that could cause Spot to fall include holes, cliffs, slippery surfaces (like ice and wet grass), and cords. Spot’s sensors also get confused by “transparent, mirrored, or very bright obstacles,” and the guide specifically says Spot “may crash into glass doors and windows.” Also this: “Spot cannot predict trajectories of moving objects. Do not operate Spot around moving objects such as vehicles, children, or pets.”

We should emphasize that this is all totally reasonable, and while there are certainly a lot of things to be aware of, it’s frankly astonishing that these are the only things that Boston Dynamics explicitly warns users against. Obviously, not every potentially unsafe situation or thing is described above, but the point is that Boston Dynamics is willing to say to new users, “here’s your robot, go do stuff with it” without feeling the need to hold their hand the entire time.

There’s one more thing to be aware of before you decide to buy a Spot, which is the following:

“All orders will be subject to Boston Dynamics’ Terms and Conditions of Sale which require the beneficial use of its robots.”

Specifically, this appears to mean that you aren’t allowed to (or supposed to) use the robot in a way that could hurt living things, or “as a weapon, or to enable any weapon.” The conditions of sale also prohibit using the robot for “any illegal or ultra-hazardous purpose,” and there’s some stuff in there about it not being cool to use Spot for “nuclear, chemical, or biological weapons proliferation, or development of missile technology,” which seems weirdly specific.

“Once you make a technology more broadly available, the story of it starts slipping out of your hands. Our hope is that ahead of time we’re able to clearly articulate the beneficial uses of the robot in environments where we think the robot has a high potential to reduce the risk to people, rather than potentially causing harm.”
—Michael Perry, Boston Dynamics

I’m very glad that Boston Dynamics is being so upfront about requiring that Spot is used beneficially. However, it does put the company in a somewhat challenging position now that these robots are being sold. Boston Dynamics can (and will) perform some amount of due diligence before shipping a Spot, but ultimately, once the robots are in someone else’s hands, there’s only so much that BD can do.

Spectrum: Why is beneficial use important to Boston Dynamics?

Perry: One of the key things that we’ve highlighted many times in our license and terms of use is that we don’t want to see the robot being used in any way that inflicts physical harm on people or animals. There are philosophical reasons for that—I think all of us don’t want to see our technology used in a way that would hurt people. But also from a business perspective, robots are really terrible at conveying intention. In order for the robot to be helpful long-term, it has to be trusted as a piece of technology. So rather than looking at a robot and wondering, “is this something that could potentially hurt me,” we want people to think “this is a robot that’s here to help me.” To the extent that people associate Boston Dynamics with cutting edge robots, we think that this is an important stance for the rollout of our first commercial product. If we find out that somebody’s violated our terms of use, their warranty is invalidated, we won’t repair their product, and we have a licensing timeout that would prevent them from accessing their robot after that timeout has expired. It’s a remediation path, but we do think that it’s important to at least provide that as something that helps enforce our position on use of our technology.

It’s very important to keep all of this in context: Spot is a tool. It’s got some autonomy and the appearance of agency, but it’s still just doing what people tell it to do, even if those things might be unsafe. If you read through the user guide, it’s clear how much of an effort Boston Dynamics is making to try to convey the importance of safety to Spot users—and ultimately, barring some unforeseen and catastrophic software or hardware issues, safety is about the users, rather than Boston Dynamics or Spot itself. I bring this up because as we start seeing more and more Spots doing things without Boston Dynamics watching over them quite so closely, accidents are likely inevitable. Spot might step on someone’s foot. It might knock someone over. If Spot was perfectly safe, it wouldn’t be useful, and we have to acknowledge that its impressive capabilities come with some risks, too.

Photo: Boston Dynamics

Each Spot includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship” not “I drove my robot off a cliff.”

Now that Spot is on the market for real, we’re excited to see who steps up and orders one. Depending on who the potential customer is, Spot could either seem like an impossibly sophisticated piece of technology that they’d never be able to use, or a magical way of solving all of their problems overnight. In reality, it’s of course neither of those things. For the former (folks with an idea but without a lot of robotics knowledge or experience), Spot does a lot out of the box, but BD is happy to talk with people and facilitate connections with partners who might be able to integrate specific software and hardware to get Spot to do a unique task. And for the latter (who may also be folks with an idea but without a lot of robotics knowledge or experience), BD’s Perry offers a reminder that Spot is not Rosie the Robot, and he would be equally happy to talk about what the technology is actually capable of doing.

Looking forward a bit, we asked Perry whether Spot’s capabilities mean that customers are starting to think beyond using robots to simply replace humans, and are instead looking at them as a way of enabling a completely different way of getting things done.

Spectrum: Do customers interested in Spot tend to think of it as a way of replacing humans at a specific task, or as a system that can do things that humans aren’t able to do?

Perry: There are what I imagine as three levels of people understanding the robot applications. Right now, we’re at level one, where you take a person out of this dangerous, dull job, and put a robot in. That’s the entry point. The second level is, using the robot, can we increase the production of that task? For example, take site documentation on a construction site—right now, people do 360 image capture of a site maybe once a week, and they might do a laser scan of the site once per project. At the second level, the question is, what if you were able to get that data collection every day, or multiple times a day? What kinds of benefits would that add to your process? To continue the construction example, the third level would be, how could we completely redesign this space now that we know that this type of automation is available? To take one example, there are some things that we cannot physically build because it’s too unsafe for people to be a part of that process, but if you were to apply robotics to that process, then you could potentially open up a huge envelope of design that has been inaccessible to people.

To order a Spot of your very own, visit shop.bostondynamics.com.

A version of this post appears in the August 2020 print issue as “$74,500 Will Fetch You a Spot.”

#437828 How Roboticists (and Robots) Have Been ...

A few weeks ago, we asked folks on Twitter, Facebook, and LinkedIn to share photos and videos showing how they’ve been adapting to the closures of research labs, classrooms, and businesses by taking their robots home with them to continue their work as best they can. We got dozens of responses (more than we could possibly include in just one post!), but here are 15 that we thought were particularly creative or amusing.

And if any of these pictures and videos inspire you to share your own story, please email us (automaton@ieee.org) with a picture or video and a brief description about how you and your robot from work have been making things happen in your home instead.

Kurt Leucht (NASA Kennedy Space Center)

“During these strange and trying times of the current global pandemic, everyone seems to be trying their best to distance themselves from others while still getting their daily work accomplished. Many people also have the double duty of little ones that need to be managed in the midst of their teleworking duties. This photo series gives you just a glimpse into my new life of teleworking from home, mixed in with the tasks of trying to handle my little ones too. I hope you enjoy it.”

Photo: Kurt Leucht

“I heard a commotion from the next room. I ran into the kitchen to find this.”

Photo: Kurt Leucht

“This is the Swarmies most favorite bedtime story. Not sure why. Seems like an odd choice to me.”

Peter Schaldenbrand (Carnegie Mellon University)

“I’ve been working on a reinforcement learning model that converts an image into a series of brush stroke instructions. I was going to test the model with a beautiful, expensive robot arm, but due to the COVID-19 pandemic, I have not been able to access the laboratory where it resides. I have now been using a lower end robot arm to test the painting model in my bedroom. I have sacrificed machine accuracy/precision for the convenience of getting to watch the arm paint from my bed in the shadow of my clothing rack!”

Photos: Peter Schaldenbrand

Colin Angle (iRobot)

iRobot CEO Colin Angle has been hunkered down in the “iRobot North Shore home command center,” which is probably the cleanest command center ever thanks to his army of Roombas: Beastie, Beauty, Rosie, Roswell, and Bilbo.

Photo: Colin Angle

Vivian Chu (Diligent Robotics)

From Diligent Robotics CEO Andrea Thomaz: “This is how a roboticist works from home! Diligent CTO, Vivian Chu, mans the e-stop while her engineering team runs Moxi experiments remotely from cross-town and even cross-country!”

Video: Diligent Robotics

Raffaello Bonghi (rnext.it)

Raffaello’s robot, Panther, looks perfectly happy to be playing soccer in his living room.

Photo: Raffaello Bonghi

Kod*lab (University of Pennsylvania)

“Another Friday Nuts n Bolts Meeting on Zoom…”

Image: Kodlab

Robin Jonsson (robot choreographer)

“I’ve been doing a school project in which students make up dance moves and then send me a video with all of them. I then teach the moves to my robot, Alex, film Alex dancing, and send the videos back to them. This became a great success and more schools will join. The kids got really into watching the robot perform their moves and really interested in robots. They want to meet Alex the robot live, which will likely happen in the fall.”

Photo: Robin Jonsson

Gabrielle Conard (mechanical engineering undergrad at Lafayette College)

“While the pandemic might have forced college campuses to close and the community to keep their distance from each other, it did not put a stop to learning and research. Working from their respective homes, junior Gabrielle Conard and mechanical engineering professor Alexander Brown from Lafayette College investigated methods of incorporating active compliance in a low-cost quadruped robot. They are continuing to work remotely on this project through Lafayette’s summer research program.”

Image: Gabrielle Conard

Taylor Veltrop (Softbank Robotics)

“After a few weeks of isolation in the corona/covid quarantine lockdown, we started dancing with our robots. Mathieu’s 6th birthday was coming up, and it all just came together.”

Video: Taylor Veltrop

Ross Kessler (Exyn Technologies)

“Quarantine, Day 8: the humans have accepted me as one of their own. I’ve blended seamlessly into their #socialdistancing routines. Even made a furry friend”

Photo: Ross Kessler

Yeah, something a bit sinister is definitely going on at Exyn…

Video: Exyn Technologies

Michael Sobrepera (University of Pennsylvania GRASP Lab)

Predictably, Michael’s cat is more interested in the bag that the robot came in than the robot itself (see if you can spot the cat below). Michael tells us that “the robot is designed to help with tele-rehabilitation, focused on kids with CP [cerebral palsy], so it has been taken to hospitals for demos [hence the cool bag]. It also travels for outreach events and the like. Lately, I’ve been exploring telepresence for COVID.”

Photo: Michael Sobrepera

Jan Kędzierski (EMYS)

“In China a lot of people cannot speak English, even the youngest generation of parents. Thanks to Emys, kids stayed in touch with the English language in their homes even when they couldn’t attend school or extra English classes. They had a lot of fun with their native-English-speaking friend available and ready to play every day.”

Image: Jan Kędzierski

Simon Whitmell (Quanser)

“Simon, a Quanser R&D engineer, is working on low-overhead image processing and line following for the QBot 2e mobile ground robot, with some added challenges due to extra traffic. LEGO engineering by his son, Charles.”

Photo: Simon Whitmell

Robot Design & Experimentation Course (Carnegie Mellon University)

Aaron Johnson’s bioinspired robot design course at CMU had to go full remote, which was a challenge when the course is kind of all about designing and building a robot as part of a team. “I expected some of the teams to drastically alter their project (e.g. go all simulation),” Aaron told us, “but none of them did. We managed to keep all of the projects more or less as planned. We accomplished this by drop/shipping parts to students, buying some simple tools (soldering irons, etc), and having me 3D print parts and mail them.” Each team even managed to put together their final videos from their remote locations; we’ve posted one below, but the entire playlist is here.

Video: Xianyi Cheng

Karen Tatarian (Softbank Robotics)

Karen, who’s both a researcher at Softbank and a PhD student at Sorbonne University, wrote an entire essay about what an average day is like when you’re quarantined with Pepper.

Photo: Karen Tatarian

A Quarantined Day With Pepper, by Karen Tatarian

It is quite common for me to lose my phone somewhere inside my apartment. But it is not that common for me to turn around and ask my robot if it has seen it. So when I found myself doing that, I laughed and it dawned on me that I treated my robot as my quarantine companion (despite the fact that it could not provide me with the answer I needed).

It was probably around day 40 of a completely isolated quarantine here in France when that happened. A little background about me: I am a robotics researcher at SoftBank Robotics Europe and a PhD student at Sorbonne University as part of the EU-funded Marie-Curie project ANIMATAS. And here is a little sneak peek into a quarantined day with a robot.

During this confinement, I had read somewhere that the best way to deal with it is to maintain a routine. So every morning, I wake up, prepare my coffee, and turn on my robot Pepper. I start my day with a daily meeting with the team and get to work. My research is on the synthesis of multi-modal socially intelligent human-robot interaction, so my work varies between programming the robot, analyzing collected data, and reading papers and drafting one. When I am working, I often catch myself glancing at Pepper, who stares back at me in its animated ways. Truthfully, I enjoy that; it makes me feel less alone, as if I have a colleague with me.

Once work is done, I call my friends and family members. I sometimes use a telepresence application on Pepper that a few colleagues and I developed back in December. How does it differ from your typical phone/laptop applications? One word really: embodiment. Telepresence, especially during these times, makes the experience for both sides a bit more realistic and intimate and, well, present.

While I can turn off the robot now that my work hours are done, I do keep it on because I enjoy its presence. The basic awareness of Pepper is a default feature on the robot that allows it to detect a human and follow them with its gaze and base rotation. So whether I am cooking or working out, I always have my robot watching over my shoulder and being a good companion. I also have my email and messages synced on the robot, so I get an enjoyable notification from Pepper. I found that to be a pretty cool way to be notified without it interrupting whatever you are doing on your laptop or phone. Finally, once the day is over, it’s time for both of us to get some rest.

After 60 days of total confinement, alone and away from those I love, and with a pandemic right at my door, I am glad I had the company of my robot. I hope one day a greater audience can share my experience. And I really really hope one day Pepper will be able to find my phone for me, but until then, stay on the lookout for some cool features! But I am curious to know, if you had a robot at home, what application would you have developed on it?

Again, our sincere thanks to everyone who shared these little snapshots of their lives with us, and we’re hoping to be able to share more soon.
