#439055 Stretch Is Boston Dynamics’ Take on a ...

Today, Boston Dynamics is announcing Stretch, a mobile robot designed to autonomously move boxes around warehouses. At first glance, you might be wondering why the heck this is a Boston Dynamics robot at all, since the dynamic mobility that we associate with most of their platforms is notably absent. Look closer, though: the combination of strength and speed in Stretch’s arm is something we haven’t seen before in a mobile robot, and it’s what makes this a unique and potentially exciting entry into the warehouse robotics space.

Useful mobile manipulation in any environment that’s not almost entirely structured is still a significant challenge in robotics, and it requires a very difficult combination of sensing, intelligence, and dynamic motion, all of which are classic Boston Dynamics. But also classic Boston Dynamics is building really cool platforms, and only later trying to figure out a way of making them commercially viable. So why Stretch, why boxes, why now, and (the real question) why not Handle? We talk with Boston Dynamics’ Vice President of Product Engineering Kevin Blankespoor to find out.

Stretch is very explicitly a box-handling mobile robot for relatively well-structured warehouses. It’s in no way designed to be the kind of generalist that many of Boston Dynamics’ other robots are. And to be fair, this is absolutely how to make a robot that’s practical and cost effective right out of the crate: Identify a task that is dull or dirty or dangerous for humans, design a robot to do that task safely and efficiently, and deploy it with the expectation that it’ll be really good at that task but not necessarily much else. This is a very different approach than a robot like Spot, where the platform came first and the practical applications came later—with Stretch, it’s all about that specific task in a specific environment.

There are already robotic solutions for truck unloading, palletizing, and depalletizing, but Stretch seems to be uniquely capable. For truck unloading, the highest performance systems that I’m aware of are monstrous things (here’s one example from Honeywell) that use a ton of custom hardware to just sort of ingest the cargo within a trailer all at once. In a highly structured and predictable warehouse, this sort of thing may pay off over the long term, but it’s going to be extremely expensive and not very versatile at all.

Palletizing and depalletizing robots are much more common in warehouses today. They’re almost always large industrial arms surrounded by a network of custom conveyor belts and whatnot, suffering from the same sorts of constraints as a truck unloader—very capable in some situations, but generally high cost and low flexibility.

Photo: Boston Dynamics

Stretch is probably not going to be able to compete with either of these types of dedicated systems when it comes to sheer speed, but it offers lots of other critical advantages: It’s fast and easy to deploy, easy to use, and adaptable to a variety of different tasks without costly infrastructure changes. It’s also very much not Handle, which was Boston Dynamics’ earlier (although not that much earlier) attempt at a box-handling robot for warehouses, and (let’s be honest here) a much more Boston Dynamics-y thing than Stretch seems to be. To learn more about why the answer is Stretch rather than Handle, and how Stretch will fit into the warehouse of the very near future, we spoke with Kevin Blankespoor, Boston Dynamics’ VP of Product Engineering and chief engineer for both Handle and Stretch.

IEEE Spectrum: Tell me about Stretch!

Kevin Blankespoor: Stretch is the first mobile robot that we’ve designed specifically for the warehouse. It’s all about moving boxes. Stretch is a flexible robot that can move throughout the warehouse and do different tasks. During a typical day in the life of Stretch in the future, it might spend the morning on the inbound side of the warehouse unloading boxes from trucks. It might spend the afternoon in the aisles of the warehouse building up pallets to go to retailers and e-commerce facilities, and it might spend the evening on the outbound side of the warehouse loading boxes into the trucks. So, it really goes to where the work is.

There are already other robots out there, including truck unloading robots, palletizing and depalletizing robots, and mobile bases with arms on them. What makes Boston Dynamics the right company to introduce a new robot in this space?

We definitely thought through this, because there are already autonomous mobile robots [AMRs] out there. Most of them, though, are more like pallet movers or tote movers—they don't have an arm, and most of them are really just about moving something from point A to point B without manipulation capability. We've seen some experiments where people put arms on AMRs, but nothing that's made it very far in the market. And so when we started looking at Stretch, we realized we really needed to make a custom robot, and that it was something we could do quickly.

“We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.”

Stretch is built with pieces from Spot and Atlas and that gave us a big head start. For example, if you look at Stretch’s vision system, it's 2D cameras, depth sensors, and software that allows it to do obstacle detection, box detection, and localization. Those are all the same sensors and software that we've been using for years on our legged robots. And if you look closely at Stretch’s wrist joints, they're actually the same as Spot’s hips. They use the same electric motors, the same gearboxes, the same sensors, and they even have the same closed-loop controller controlling the joints.

If you were to buy an existing industrial robot arm with this kind of performance, it would be about four times heavier than the arm we built, and it’s really hard to make that into a mobile robot. A lot of this came from our leg technology because it’s so important for our leg designs to be lightweight for the robots to balance. We took that same strength-to-weight advantage that we have and built it into this arm. We were able to piece together things from our other robots to get out of the gate quickly, so even though this looks like a totally different robot, we think we have a good head start going into this market.

At what point did you decide to go with an arm on a statically stable base on Stretch, rather than something more, you know, dynamic-y?

Stretch looks really different than the robots that Boston Dynamics has done in the past. But you'd be surprised how much similarity there is between our legged robots and Stretch under the hood. Looking back, we actually got our start on moving boxes with Atlas, and at that point it was just research and development. We were really trying to do force control for box grasping. We were picking up heavy boxes and maintaining balance and working on those fundamentals. We released a video of that as our first next-gen Atlas video, and it was interesting. We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.

So at this point we actually came up with Handle. The intent of Handle was to do a couple things—one was, we thought we could build a simpler robot that had Atlas’ attributes. Handle has a small footprint so it can fit in tight spaces, but it can pick up heavy boxes. And in addition to that, we had always really wanted to combine wheels and legs. We’d been talking about doing that for a decade and so Handle was a chance for us to try it.

We built a couple versions of Handle, and the first one was really just a prototype to kind of explore the morphology. But the second one was more purpose-built for warehouse tasks, and we started building pallets with that one and it looked pretty good. And then we started doing truck unloading with Handle, which was the pivotal moment. Handle could do it, but it took too long. Every time Handle grasped a box, it would have to roll back and then get to a place where it could spin itself to face forward and place the box, and trucks are very tight for a robot this size, so there's not a lot of room to maneuver. We knew the whole time that there was a robot like Stretch that was another alternative, but that's really when it became clear that Stretch would have a lot of advantages, and we started working on it about a year ago.

Stretch is certainly impressive in a practical way, but I’ll admit to really hoping that something like Handle could have turned out to be a viable warehouse robot.

I love the Handle project as well, and I’m very passionate about that robot. And there was a stage before we built Stretch where we thought, “This would be pretty standard looking compared to Handle. Is it going to capture enough of the Boston Dynamics secret sauce?” But when you actually dissect all the problems within Stretch that you have to tackle, there are a lot of cool robotics problems left in there—the vision system, the planning, the manipulation, the grasping of the boxes—it's a lot harder to solve than it looks, and we're excited that we're actually getting fairly far down that road now.

What happens to Handle now?

Stretch has really taken over our team as far as warehouse products go. Handle we still use occasionally as a research robot, but it’s not actively under development. Stretch is really Handle’s descendant. Handle’s not retired, exactly, but we’re just using it for things like the dance video.

There’s still potential to do cool stuff with Handle. I do think that combining wheels with legs is very cool, and largely unexplored compared to its potential. So I still think that you're gonna see versions of robots combining wheels and legs like Handle, and maybe a version of Handle in the future that does more of that. But because we're switching this thread from research into product, Stretch is really the main focus now.

How autonomous is Stretch?

Stretch is semi-autonomous, and that means it really needs to work with people to tap into its full potential. With truck unloading, for example, a person will drive Stretch into the back of the truck and then basically point Stretch in the right direction and say go. And from that point on, everything’s autonomous. Stretch has its vision system and its mobility and it can detect all the boxes, grasp all boxes, and move them onto a conveyor all autonomously. This is something that takes people hours to do manually, and Stretch can go all the way until it gets to the last box, and the truck is empty. There are some parts of the truck unloading task that do require people, like verifying that the truck is in the right place and opening the doors. But this takes a person just a few minutes, and then the robot can spend hours or as long as it takes to do its job autonomously.

There are also other tasks in the warehouse where the autonomy will increase in the future. After truck unloading, the second thing we’ll take on is order building, which will be more in the aisles of a warehouse. For that, Stretch will be navigating around the warehouse, finding the right pallet it needs to take a box from, and loading it onto a new pallet. This will be a different model with more autonomy; you’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.

What kinds of constraints is Stretch operating under? Do the boxes all have to be stacked neatly in the back of the truck, do they have to be the same size, the same color, etc?

“This will be a different model with more autonomy. You’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.”

If you think about manufacturing, where there's been automation for decades, you can go into a modern manufacturing facility and there are robot arms and conveyors and other machines. But if you look at the actual warehouse space, 90+ percent is manually operated, and that's because of what you just asked about—things that are less structured, where there’s more variety, and it's more challenging for a robot. But this is starting to change. This is really, really early days, and you’re going to be seeing a lot more robots in the warehouse space.

The warehouse robotics industry is going to grow a lot over the next decade, and a lot of that boils down to vision—the ability for robots to navigate and to understand what they’re seeing. Actually seeing boxes in real world scenarios is challenging, especially when there's a lot of variety. We've been testing our machine learning-based box detection system on Pick for a few years now, and it's gotten far enough that we know it’s one of the technical hurdles you need to overcome to succeed in the warehouse.

Can you compare the performance of Stretch to the performance of a human in a box-unloading task?

Stretch can move cases up to 50 pounds, which is the OSHA limit for how much a single person's allowed to move. The peak case rate for Stretch is 800 cases per hour. You really need to keep up with the flow of goods throughout the warehouse, and 800 cases per hour should be enough for most applications. This is similar to a really good human; most humans are probably slower, and it’s hard for a human to sustain that rate, and one of the big issues with people doing these jobs is injury rates. Imagine moving really heavy boxes all day, and having to reach up high or bend down to get them—injuries are really common in this area. Truck unloading is one of the hardest jobs in a warehouse, and that’s one of the reasons we’re starting there with Stretch.
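
To put those figures in perspective, here is the quick arithmetic they imply. The numbers come straight from the interview; the snippet below is just a back-of-the-envelope illustration.

```python
cases_per_hour = 800                  # Stretch's peak case rate, per Blankespoor
max_case_weight_lb = 50               # heaviest case Stretch is meant to move

seconds_per_case = 3600 / cases_per_hour                    # 4.5 seconds per box at the peak rate
peak_weight_moved_lb = cases_per_hour * max_case_weight_lb  # up to 40,000 lb shifted per hour

print(f"{seconds_per_case:.1f} s per case, up to {peak_weight_moved_lb:,} lb moved per hour")
```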

Is Stretch safe for humans to be around?

We looked at using collaborative robot arms for Stretch, but they don’t have the combination of strength and speed and reach to do this task. That’s partially just due to the laws of physics—if you want to move a 50-pound box really fast, that’s a lot of energy there. So, Stretch does need to maintain separation from humans, but it’s pretty safe when it’s operating in the back of a truck.

In the middle of a warehouse, Stretch will have a couple different modes. When it's traveling around, it'll be kind of like an AMR, using a safety-rated lidar to make sure it slows down or stops as people get closer. If it's parked and the arm is moving, it'll do the same thing, monitoring anyone getting close and either slowing down or stopping.
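
To make those two modes concrete, here is a minimal, hypothetical sketch of the kind of proximity-based speed scaling Blankespoor describes. The zone thresholds, speeds, and function name are invented for illustration; they are not Boston Dynamics' actual safety parameters, which would come from a safety-rated system.

```python
def speed_limit(nearest_person_m: float,
                stop_zone_m: float = 1.0,
                slow_zone_m: float = 3.0,
                max_speed_mps: float = 1.5) -> float:
    """Hypothetical proximity-based speed cap: stop when someone is inside the
    stop zone, scale speed down linearly through the slow zone, and run at full
    speed otherwise. All threshold values here are made-up illustrative numbers."""
    if nearest_person_m <= stop_zone_m:
        return 0.0                          # person too close: stop the base (or the arm)
    if nearest_person_m < slow_zone_m:
        fraction = (nearest_person_m - stop_zone_m) / (slow_zone_m - stop_zone_m)
        return max_speed_mps * fraction     # person nearby: slow down proportionally
    return max_speed_mps                    # clear space: full speed

# The same cap could apply to base travel speed while roaming, or to arm speed while parked.
print(speed_limit(0.5), speed_limit(2.0), speed_limit(10.0))
```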

How do you see Stretch interacting with other warehouse robots?

For building pallet orders, we can do that in a couple of different ways, and we’re experimenting with partners in the AMR space. So you might have an AMR that moves the pallet around and then rendezvous with Stretch, and Stretch does the manipulation part and moves boxes onto the pallet, and then the AMR scuttles off to the next rendezvous point where maybe a different Stretch meets it. We’re developing prototypes of that behavior now with a few partners. Another way to do it is Stretch can actually pull the pallet around itself and do both tasks. There are two fundamental things that happen in the warehouse: there's movement of goods, and there's manipulation of goods, and Stretch can do both.

You’re aware that Hello Robot has a mobile manipulator called Stretch, right?

Great minds think alike! We know Aaron [Edsinger] from the Google days; we all used to be in the same company, and he’s a great guy. We’re in very different applications and spaces, though— Aaron’s robot is going into research and maybe a little bit into the consumer space, while this robot is on a much bigger scale aimed at industrial applications, so I think there’s actually a lot of space between our robots, in terms of how they’ll be used.

Editor’s Note: We did check in with Aaron Edsinger at Hello Robot, and he sees things a little bit differently. “We're disappointed they chose our name for their robot,” Edsinger told us. “We're seriously concerned about it and considering our options.” We sincerely hope that Boston Dynamics and Hello Robot can come to an amicable solution on this.

What’s the timeline for commercial deployment of Stretch?

This is a prototype of the Stretch robot, and anytime we design a new robot, we always like to build a prototype as quickly as possible so we can figure out what works and what doesn't work. We did that with our bipeds and quadrupeds as well. So, we get an early look at what we need to iterate, because any time you build the first thing, it's not the right thing, and you always need to make changes to get to the final version. We've got about six of those Stretch prototypes operating now. In parallel, our hardware team is finishing up the design of the productized version of Stretch. That version of Stretch looks a lot like the prototype, but every component has been redesigned from the ground up to be manufacturable, to be reliable, and to be higher performance.

For the productized version of Stretch, we’ll build up the first units this summer, and then it’ll go on sale next year. So this is kind of a sneak peek at what the final product will be.

How much does it cost, and will you be selling Stretch, or offering it as a service?

We’re not quite ready to talk about cost yet, but it’ll be cost effective, and similar in cost to existing systems if you were to combine an industrial robot arm, custom gripper, and mobile base. We’re considering both selling and leasing as a service, but we’re not quite ready to narrow it down yet.

Photo: Boston Dynamics

As with all mobile manipulators, what Stretch can do long-term is constrained far more by software than by hardware. With a fast and powerful arm, a mobile base, a solid perception system, and 16 hours of battery life, you can imagine how different grippers could enable all kinds of different capabilities. But we’re getting ahead of ourselves, because it’s a long, long way from getting a prototype to work pretty well to getting robots into warehouses in a way that’s commercially viable long-term, even when the use case is as clear as it seems to be for Stretch.

Stretch also could signal a significant shift in focus for Boston Dynamics. While Blankespoor’s comments about Stretch leveraging Boston Dynamics’ expertise with robots like Spot and Atlas are well taken, Stretch is arguably the most traditional robot that the company has designed, and they’ve done so specifically to be able to sell robots into industry. This is what you do if you’re a robotics company that wants to make money by selling robots commercially, which (historically) has not been what Boston Dynamics is all about. Despite its bonkers valuation, Boston Dynamics ultimately needs to make money, and robots like Stretch are a good way to do it. With that in mind, I wouldn’t be surprised to see more robots like this from Boston Dynamics—robots that leverage the company’s unique technology, but that are designed to do commercially useful tasks in a somewhat less flashy way. And if this strategy keeps Boston Dynamics around (while funding some occasional creative craziness), then I’m all for it.

#439042 How Scientists Used Ultrasound to Read ...

Thanks to neural implants, mind reading is no longer science fiction.

As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!

Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through the brain’s protective membrane and into brain tissue. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.

Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.

In monkeys, the technology could reliably predict their eye movement and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If adopted by humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.

“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.

To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.

Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.

Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times higher, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.

“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.

There are plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?

One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.

The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.

The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, a part of the “motor” aspect of the brain, which plans movement. If right now you’re thinking about scrolling down this page, that’s the brain region already activated, before your fingers actually perform the movement.

Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either left or right, then flashed. The monkeys’ task was to flick their eyes to the most recent dot. It’s something that seems easy for us, but requires sophisticated brain computation.

The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.

Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech
As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
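
The team’s actual decoding pipeline isn’t described in this article, so the sketch below is a generic stand-in rather than the Caltech method. It shows what single-trial decoding of a left-versus-right choice from imaging data typically looks like: flatten each trial’s functional-ultrasound image into a feature vector, compress it, fit a linear classifier, and report cross-validated accuracy. The function name, the synthetic data, and the choice of scikit-learn components are all illustrative assumptions.

```python
# pip install numpy scikit-learn
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def decode_direction(trials: np.ndarray, labels: np.ndarray) -> float:
    """trials: (n_trials, height, width) functional-ultrasound images, one per trial.
    labels: 0 for a leftward movement, 1 for a rightward movement.
    Returns cross-validated single-trial decoding accuracy."""
    X = trials.reshape(len(trials), -1)      # flatten each image into one feature vector
    decoder = make_pipeline(
        StandardScaler(),                    # normalize each voxel's signal across trials
        PCA(n_components=20),                # compress thousands of voxels into a few components
        LinearDiscriminantAnalysis(),        # simple linear classifier
    )
    return cross_val_score(decoder, X, labels, cv=5).mean()


# Synthetic stand-in data (pure noise), just to show the call signature;
# chance-level output (~0.5) is expected here.
rng = np.random.default_rng(0)
fake_trials = rng.normal(size=(200, 64, 64))
fake_labels = rng.integers(0, 2, size=200)
print(decode_direction(fake_trials, fake_labels))
```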

That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.

Even more impressive is the resolution. Sound waves seem omnipresent, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—roughly the width of 10 neurons.

A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, don’t worry. The new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. However, the brain itself is spared. This means that compared to electrodes, ultrasound could cause less damage and potentially allow far longer-lasting mind reading than anything currently possible.

There are downsides. Functional ultrasound is far younger than electrode-based neural implants, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech requires a wire to link the device to a computer, which is off-putting to many people and will prevent widespread adoption. Add to that an inherent downside of functional ultrasound: because it tracks blood flow rather than electrical activity, it lags behind electrical recordings by roughly two seconds.

All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.

“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.

Image Credit: Free-Photos / Pixabay

#438801 This AI Thrashes the Hardest Atari Games ...

Learning from rewards seems like the simplest thing. I make coffee, I sip coffee, I’m happy. My brain registers “brewing coffee” as an action that leads to a reward.

That’s the guiding insight behind deep reinforcement learning, a family of algorithms that famously smashed most of Atari’s gaming catalog and triumphed over humans in strategy games like Go. Here, an AI “agent” explores the game, trying out different actions and registering ones that let it win.

Except it’s not that simple. “Brewing coffee” isn’t one action; it’s a series of actions spanning several minutes, where you’re only rewarded at the very end. By just tasting the final product, how do you learn to fine-tune grind coarseness, water to coffee ratio, brewing temperature, and a gazillion other factors that result in the reward—tasty, perk-me-up coffee?

That’s the problem with “sparse rewards,” which are ironically very abundant in our messy, complex world. We don’t immediately get feedback from our actions—no video-game-style dings or points for just grinding coffee beans—yet somehow we’re able to learn and perform an entire sequence of arm and hand movements while half-asleep.

This week, researchers from Uber AI and OpenAI teamed up to bestow this talent on AI.

The trick is to encourage AI agents to “return” to a previous step, one that’s promising for a winning solution. The agent then keeps a record of that state, reloads it, and branches out again to intentionally explore other solutions that may have been left behind on the first go-around. Video gamers are likely familiar with this idea: live, die, reload a saved point, try something else, repeat for a perfect run-through.

The new family of algorithms, appropriately dubbed “Go-Explore,” smashed notoriously difficult Atari games like Montezuma’s Revenge that were previously unsolvable by its AI predecessors, while trouncing human performance along the way.

It’s not just games and digital fun. In a computer simulation of a robotic arm, the team found that installing Go-Explore as its “brain” allowed it to solve a challenging series of actions when given very sparse rewards. Because the overarching idea is so simple, the authors say, it can be adapted and expanded to other real-world problems, such as drug design or language learning.

Growing Pains
How do you reward an algorithm?

Rewards are very hard to craft, the authors say. Take the problem of asking a robot to go to a fridge. A sparse reward will only give the robot “happy points” if it reaches its destination, which is similar to asking a baby, with no concept of space and danger, to crawl through a potential minefield of toys and other obstacles towards a fridge.

“In practice, reinforcement learning works very well, if you have very rich feedback, if you can tell, ‘hey, this move is good, that move is bad, this move is good, that move is bad,’” said study author Joost Huizinga. However, in situations that offer very little feedback, “rewards can intentionally lead to a dead end. Randomly exploring the space just doesn’t cut it.”

The other extreme is providing denser rewards. In the same robot-to-fridge example, you could frequently reward the bot as it goes along its journey, essentially helping “map out” the exact recipe to success. But that’s troubling as well. Over-holding an AI’s hand could result in an extremely rigid robot that ignores new additions to its path—a pet, for example—leading to dangerous situations. It’s a deceptive AI solution that seems effective in a simple environment, but crashes in the real world.

What we need are AI agents that can tackle both problems, the team said.

Intelligent Exploration
The key is to return to the past.

For AI, motivation usually comes from “exploring new or unusual situations,” said Huizinga. It’s efficient, but comes with significant downsides. For one, the AI agent could prematurely stop going back to promising areas because it thinks it had already found a good solution. For another, it could simply forget a previous decision point because of the mechanics of how it probes the next step in a problem.

For a complex task, the end result is an AI that randomly stumbles around towards a solution while ignoring potentially better ones.

“Detaching from a place that was previously visited after collecting a reward doesn’t work in difficult games, because you might leave out important clues,” Huizinga explained.

Go-Explore solves these problems with a simple principle: first return, then explore. In essence, the algorithm saves different approaches it previously tried and loads promising save points—ones more likely to lead to victory—to explore further.

Digging a bit deeper, the AI stores screenshots from the game. It then analyzes these saved frames and groups images that look alike, treating each group as a promising “save point” to return to. Rinse and repeat. The AI tries to maximize its final score in the game, and updates its save points when it achieves a new record score. Because Atari doesn’t usually allow people to revisit any random point, the team used an emulator, which is a kind of software that mimics the Atari system but with custom abilities such as saving and reloading at any time.
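
The loop described above is simple enough to sketch in code. Below is a minimal, illustrative version of the “first return, then explore” cycle, assuming a hypothetical emulator object with reset, step, get_state, and set_state methods (the save-and-reload abilities mentioned above) and grouping look-alike screenshots by downscaling and coarsely quantizing them. The published Go-Explore system chooses which save point to return to with more careful heuristics, and adds a robustification phase that is not shown here.

```python
import random
import numpy as np


def cell_key(frame: np.ndarray, size=(11, 8), bins=8):
    """Group look-alike screenshots: downscale and coarsely quantize the frame so
    that visually similar frames map to the same hashable key."""
    h, w = frame.shape[:2]
    small = frame[:: max(1, h // size[0]), :: max(1, w // size[1])]
    return tuple((small.astype(np.float64) * bins / 256).astype(np.uint8).ravel())


def go_explore(emulator, n_actions: int, iterations: int = 10_000, explore_steps: int = 100):
    """First return, then explore: keep an archive of save points, reload a
    promising one, and branch out from it in search of higher scores."""
    frame = emulator.reset()
    archive = {cell_key(frame): {"state": emulator.get_state(), "score": 0.0}}
    best_score = 0.0

    for _ in range(iterations):
        # Pick a saved point to return to (uniformly here; the real system
        # weights save points by how promising or rarely visited they are).
        entry = archive[random.choice(list(archive))]
        emulator.set_state(entry["state"])        # "return": reload the save point
        score = entry["score"]

        for _ in range(explore_steps):            # "explore": branch out from it
            frame, reward, done = emulator.step(random.randrange(n_actions))
            score += reward
            key = cell_key(frame)
            # Record new save points, and update old ones reached with a better score.
            if key not in archive or score > archive[key]["score"]:
                archive[key] = {"state": emulator.get_state(), "score": score}
            best_score = max(best_score, score)
            if done:
                break

    return best_score, archive
```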

The trick worked like magic. When pitted against 55 Atari games in the OpenAI gym, now commonly used to benchmark reinforcement learning algorithms, Go-Explore knocked out state-of-the-art AI competitors over 85 percent of the time.

It also crushed games previously unbeatable by AI. Montezuma’s Revenge, for example, requires you to move Pedro, the blocky protagonist, through a labyrinth of underground temples while evading obstacles such as traps and enemies and gathering jewels. One bad jump could derail the path to the next level. It’s a perfect example of sparse rewards: you need a series of good actions to get to the reward—advancing onward.

Go-Explore didn’t just beat all levels of the game, a first for AI. It also scored higher on the lower levels than any previous record set by a reinforcement learning algorithm, while toppling the human world record.

Outside a gaming environment, Go-Explore was also able to boost the performance of a simulated robot arm. While it’s easy for humans to follow high-level guidance like “put the cup on this shelf in a cupboard,” robots often need explicit training—from grasping the cup to recognizing a cupboard, moving towards it while avoiding obstacles, and learning motions to not smash the cup when putting it down.

Here, similar to the real world, the digital robot arm was only rewarded when it placed the cup onto the correct shelf, out of four possible shelves. When pitted against another algorithm, Go-Explore quickly figured out the movements needed to place the cup, while its competitor struggled with even reliably picking the cup up.

Combining Forces
By itself, the “first return, then explore” idea behind Go-Explore is already powerful. The team thinks it can do even better.

One idea is to change the mechanics of save points. Rather than reloading saved states through the emulator, it’s possible to train a neural network to do the same, without needing to relaunch a saved state. It’s a potential way to make the AI even smarter, the team said, because it can “learn” to overcome one obstacle once, instead of solving the same problem again and again. The downside? It’s much more computationally intensive.

Another idea is to combine Go-Explore with an alternative form of learning, called “imitation learning.” Here, an AI observes human behavior and mimics it through a series of actions. Combined with Go-Explore, said study author Adrien Ecoffet, this could make more robust robots capable of handling all the complexity and messiness in the real world.

To the team, the implications go far beyond Go-Explore. The concept of “first return, then explore” seems to be especially powerful, suggesting “it may be a fundamental feature of learning in general.” The team said, “Harnessing these insights…may be essential…to create generally intelligent agents.”

Image Credit: Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, and Jeff Clune

#438779 Meet Catfish Charlie, the CIA’s ...

Photo: CIA Museum

CIA roboticists designed Catfish Charlie to take water samples undetected. Why they wanted a spy fish for such a purpose remains classified.

In 1961, Tom Rogers of the Leo Burnett Agency created Charlie the Tuna, a jive-talking cartoon mascot and spokesfish for the StarKist brand. The popular ad campaign ran for several decades, and its catchphrase “Sorry, Charlie” quickly hooked itself in the American lexicon.

When the CIA’s Office of Advanced Technologies and Programs started conducting some fish-focused research in the 1990s, Charlie must have seemed like the perfect code name. Except that the CIA’s Charlie was a catfish. And it was a robot.

More precisely, Charlie was an unmanned underwater vehicle (UUV) designed to surreptitiously collect water samples. Its handler controlled the fish via a line-of-sight radio handset. Not much has been revealed about the fish’s construction except that its body contained a pressure hull, ballast system, and communications system, while its tail housed the propulsion. At 61 centimeters long, Charlie wouldn’t set any biggest-fish records. (Some species of catfish can grow to 2 meters.) Whether Charlie reeled in any useful intel is unknown, as details of its missions are still classified.

For exploring watery environments, nothing beats a robot
The CIA was far from alone in its pursuit of UUVs, nor was it the first agency to pursue them. In the United States, such research began in earnest in the 1950s, with the U.S. Navy’s funding of technology for deep-sea rescue and salvage operations. Other projects looked at sea drones for surveillance and scientific data collection.

Aaron Marburg, a principal electrical and computer engineer who works on UUVs at the University of Washington’s Applied Physics Laboratory, notes that the world’s oceans are largely off-limits to crewed vessels. “The nature of the oceans is that we can only go there with robots,” he told me in a recent Zoom call. To explore those uncharted regions, he said, “we are forced to solve the technical problems and make the robots work.”

Image: Thomas Wells/Applied Physics Laboratory/University of Washington

An oil painting commemorates SPURV, a series of underwater research robots built by the University of Washington’s Applied Physics Lab. In nearly 400 deployments, no SPURVs were lost.

One of the earliest UUVs happens to sit in the hall outside Marburg’s office: the Self-Propelled Underwater Research Vehicle, or SPURV, developed at the Applied Physics Laboratory beginning in the late ’50s. SPURV’s original purpose was to gather data on the physical properties of the sea, in particular temperature and sound velocity. Unlike Charlie, with its fishy exterior, SPURV had a utilitarian torpedo shape that was more in line with its mission. Just over 3 meters long, it could dive to 3,600 meters, had a top speed of 2.5 m/s, and operated for 5.5 hours on a battery pack. Data was recorded to magnetic tape and later transferred to a photosensitive paper strip recorder or other computer-compatible media and then plotted using an IBM 1130.

Over time, SPURV’s instrumentation grew more capable, and the scope of the project expanded. In one study, for example, SPURV carried a fluorometer to measure the dispersion of dye in the water, to support wake studies. The project was so successful that additional SPURVs were developed, and by the time the program ended in 1979 they had completed nearly 400 missions.

Working on underwater robots, Marburg says, means balancing technical risks and mission objectives against constraints on funding and other resources. Support for purely speculative research in this area is rare. The goal, then, is to build UUVs that are simple, effective, and reliable. “No one wants to write a report to their funders saying, ‘Sorry, the batteries died, and we lost our million-dollar robot fish in a current,’ ” Marburg says.

A robot fish called SoFi
Since SPURV, there have been many other unmanned underwater vehicles, of various shapes and sizes and for various missions, developed in the United States and elsewhere. UUVs and their autonomous cousins, AUVs, are now routinely used for scientific research, education, and surveillance.

At least a few of these robots have been fish-inspired. In the mid-1990s, for instance, engineers at MIT worked on a RoboTuna, also nicknamed Charlie. Modeled loosely on a bluefin tuna, it had a propulsion system that mimicked the tail fin of a real fish. This was a big departure from the screws or propellers used on UUVs like SPURV. But this Charlie never swam on its own; it was always tethered to a bank of instruments. The MIT group’s next effort, a RoboPike called Wanda, overcame this limitation and swam freely, but never learned to avoid running into the sides of its tank.

Fast-forward 25 years, and a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled SoFi, a decidedly more fishy robot designed to swim next to real fish without disturbing them. Controlled by a retrofitted Super Nintendo handset, SoFi could dive more than 15 meters, control its own buoyancy, and swim around for up to 40 minutes between battery charges. Observing that SoFi’s creators tested their robot fish in the gorgeous waters off Fiji, IEEE Spectrum’s Evan Ackerman noted, “Part of me is convinced that roboticists take on projects like these…because it’s a great way to justify a trip somewhere exotic.”

SoFi, Wanda, and both Charlies are all examples of biomimetics, a term coined in 1974 to describe the study of biological mechanisms, processes, structures, and substances. Biomimetics looks to nature to inspire design.

Sometimes, the resulting technology proves to be more efficient than its natural counterpart, as Richard James Clapham discovered while researching robotic fish for his Ph.D. at the University of Essex, in England. Under the supervision of robotics expert Huosheng Hu, Clapham studied the swimming motion of Cyprinus carpio, the common carp. He then developed four robots that incorporated carplike swimming, the most capable of which was iSplash-II. When tested under ideal conditions—that is, a tank 5 meters long, 2 meters wide, and 1.5 meters deep—iSplash-II reached a maximum velocity of 11.6 body lengths per second (or about 3.7 m/s). That’s faster than a real carp, which averages a top velocity of 10 body lengths per second. But iSplash-II fell short of the peak performance of a fish darting quickly to avoid a predator.

Of course, swimming in a test pool or placid lake is one thing; surviving the rough and tumble of a breaking wave is another matter. The latter is something that roboticist Kathryn Daltorio has explored in depth.

Daltorio, an assistant professor at Case Western Reserve University and codirector of the Center for Biologically Inspired Robotics Research there, has studied the movements of cockroaches, earthworms, and crabs for clues on how to build better robots. After watching a crab navigate from the sandy beach to shallow water without being thrown off course by a wave, she was inspired to create an amphibious robot with tapered, curved feet that could dig into the sand. This design allowed her robot to withstand forces up to 138 percent of its body weight.

Photo: Nicole Graf

This robotic crab created by Case Western’s Kathryn Daltorio imitates how real crabs grab the sand to avoid being toppled by waves.

In her designs, Daltorio is following architect Louis Sullivan’s famous maxim: Form follows function. She isn’t trying to imitate the aesthetics of nature—her robot bears only a passing resemblance to a crab—but rather the best functionality. She looks at how animals interact with their environments and steals evolution’s best ideas.

And yet, Daltorio admits, there is also a place for realistic-looking robotic fish, because they can capture the imagination and spark interest in robotics as well as nature. And unlike a hyperrealistic humanoid, a robotic fish is unlikely to fall into the creepiness of the uncanny valley.

In writing this column, I was delighted to come across plenty of recent examples of such robotic fish. Ryomei Engineering, a subsidiary of Mitsubishi Heavy Industries, has developed several: a robo-coelacanth, a robotic gold koi, and a robotic carp. The coelacanth was designed as an educational tool for aquariums, to present a lifelike specimen of a rarely seen fish that is often only known by its fossil record. Meanwhile, engineers at the University of Kitakyushu in Japan created Tai-robot-kun, a credible-looking sea bream. And a team at Evologics, based in Berlin, came up with the BOSS manta ray.

Whatever their official purpose, these nature-inspired robocreatures can inspire us in return. UUVs that open up new and wondrous vistas on the world’s oceans can extend humankind’s ability to explore. We create them, and they enhance us, and that strikes me as a very fair and worthy exchange.

This article appears in the March 2021 print issue as “Catfish, Robot, Swimmer, Spy.”

About the Author
Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

#438774 The World’s First 3D Printed School ...

3D printed houses have been popping up all over the map. Some are hive-shaped, some can float, some are up for sale. Now this practical, cost-cutting technology is being employed for another type of building: a school.

Located on the island of Madagascar, the project is a collaboration between San Francisco-based architecture firm Studio Mortazavi and Thinking Huts, a nonprofit whose mission is to increase global access to education through 3D printing. The school will be built on the campus of a university in Fianarantsoa, a city in the south central area of the island nation.

According to the World Economic Forum, lack of physical infrastructure is one of the biggest barriers to education. Building schools requires not only funds, human capital, and building materials, but also community collaboration and ongoing upkeep and maintenance. For people to feel good about sending their kids to school each day, the buildings should be conveniently located, appealing, comfortable to spend several hours in, and of course safe. All of this is harder to accomplish than you might think, especially in low-income areas.

Because of its comparatively low cost and quick turnaround time, 3D printing has been lauded as a possible solution to housing shortages and a tool to aid in disaster relief. Cost details of the Madagascar school haven’t been released, but if 3D printed houses can go up in a day for under $10,000 or list at a much lower price than their non-3D-printed neighbors, it’s safe to say that 3D printing a school is likely substantially cheaper than building it through traditional construction methods.

The school’s modular design resembles a honeycomb, where as few or as many nodes as needed can be linked together. Each node consists of a room with two bathrooms, a closet, and a front and rear entrance. The Fianarantsoa school will have just one node to start with, but since local technologists will participate in the building process, they’ll learn the ins and outs of 3D printing and subsequently be able to add new nodes or build similar schools in other areas.

Artist rendering of the completed school. Image Credit: Studio Mortazavi/Thinking Huts
The printer for the project is coming from Hyperion Robotics, a Finnish company that specializes in 3D printing solutions for reinforced concrete. The building’s walls will be made of layers of a special cement mixture that Thinking Huts says emits less carbon dioxide than traditional concrete. The roof, doors, and windows will be sourced locally, and the whole process can be completed in less than a week, another major advantage over traditional building methods.

“We can build these schools in less than a week, including the foundation and all the electrical and plumbing work that’s involved,” said Amir Mortazavi, lead architect on the project. “Something like this would typically take months, if not even longer.”

The roof of the building will be equipped with solar panels to provide the school with power, and in a true melding of modern technology and traditional design, the pattern of its walls is based on Malagasy textiles.

Thinking Huts considered seven different countries for its first school, and ended up choosing Madagascar for the pilot based on its need for education infrastructure, stable political outlook, opportunity for growth, and renewable energy potential. However, the team is hoping the pilot will be the first of many similar projects across multiple countries. “We can use this as a case study,” Mortazavi said. “Then we can go to other countries around the world and train the local technologists to use the 3D printer and start a nonprofit there to be able to build schools.”

Construction of the school will take place in the latter half of this year, with hopes of getting students into the classroom as soon as the pandemic is no longer a major threat to the local community’s health.

Image Credit: Studio Mortazavi/Thinking Huts
