#439509 What’s Going on With Amazon’s ...

Amazon’s innovation blog recently published a post entitled “New technologies to improve Amazon employee safety,” which highlighted four different robotic systems that Amazon’s Robotics and Advanced Technology teams have been working on. Three of these robotic systems are mobile robots, which have been making huge contributions to the warehouse space over the past decade. Amazon in particular was one of the first (if not the first) e-commerce companies to really understand the fundamental power of robots in warehouses, with their $775 million acquisition of Kiva Systems’ pod-transporting robots back in 2012.

Since then, a bunch of other robotics companies have started commercially deploying robots in warehouses, and over the past five years or so, we’ve seen some of those robots develop enough autonomy and intelligence to be able to operate outside of restricted, highly structured environments and work directly with humans. Warehouse autonomous mobile robots are now a highly competitive sector, with companies like Fetch Robotics, Locus Robotics, and OTTO Motors all offering systems that can zip payloads around busy warehouse floors safely and efficiently.

But if we’re to take the capabilities of the robots that Amazon showcased over the weekend at face value, the company appears to be substantially behind the curve on warehouse robots.

Let’s take a look at the three mobile robots that Amazon describes in their blog post:

“Bert” is one of Amazon’s first Autonomous Mobile Robots, or AMRs. Historically, it’s been difficult to incorporate robotics into areas of our facilities where people and robots are working in the same physical space. AMRs like Bert, which is being tested to autonomously navigate through our facilities with Amazon-developed advanced safety, perception, and navigation technology, could change that. With Bert, robots no longer need to be confined to restricted areas. This means that in the future, an employee could summon Bert to carry items across a facility. In addition, Bert might at some point be able to move larger, heavier items or carts that are used to transport multiple packages through our facilities. By taking those movements on, Bert could help lessen strain on employees.

This all sounds fairly impressive, but only if you’ve been checked out of the AMR space for the last few years. Amazon is presenting Bert as part of the “new technologies” it’s developing, and while that may be the case, as far as we can make out these technologies are new mostly to Amazon, not to anyone else. Any number of other companies are selling mobile robot tech that looks significantly beyond what we’re seeing here—tech that (unless we’re missing something) has already largely solved many of the same technical problems that Amazon is working on.

We spoke with mobile robot experts from three different robotics companies, none of whom were comfortable going on record (for obvious reasons), but they all agreed that what Amazon is demonstrating in these videos appears to be 2+ years behind the state of the art in commercial mobile robots.

We’re obviously seeing a work in progress with Bert, but I’d be less confused if we were looking at a deployed system, because at least then you could make the argument that Amazon has managed to get something operational at (some) scale, which is much more difficult than a demo or pilot project. But the slow speed, the careful turns, the human chaperones—other AMR companies are way past this stage.

Kermit is an AGC (Autonomously Guided Cart) that is focused on moving empty totes from one location to another within our facilities so we can get empty totes back to the starting line. Kermit follows strategically placed magnetic tape to guide its navigation and uses tags placed along the way to determine if it should speed up, slow down, or modify its course in some way. Kermit is further along in development, currently being tested in several sites across the U.S., and will be introduced in at least a dozen more sites across North America this year.

Most folks in the mobile robots industry would hesitate to call Kermit an autonomous robot at all, which is likely why Amazon doesn’t refer to it as such, instead calling it a “guided cart.” As far as I know, pretty much every other mobile robotics company has done away with stuff like magnetic tape in favor of map-based natural-feature localization (a technology that has been commercially available for years), because then your robots can go anywhere in a mapped warehouse, not just on predefined paths. Even if you have a space and workflow that never changes, busy warehouses have paths that get blocked for one reason or another all the time, and modern AMRs are flexible enough to plan around those blockages and complete their tasks anyway; a toy sketch of the difference follows below. Guided carts locked to their tape can’t even shift over a couple of feet to get around an obstacle.
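
To make that contrast concrete, here’s a deliberately minimal sketch of what map-based navigation buys you: a planner over a free-space map can route around a blocked cell, while a tape-follower has no alternative the moment its one route is obstructed. This is illustrative Python with a plain breadth-first search standing in for a real planner; nothing here is Amazon’s (or anyone’s) actual stack.

```python
# Toy grid "warehouse": a map-based AMR replans around a blocked cell,
# which a tape-guided cart cannot do. BFS stands in for a real planner.
from collections import deque

def plan(free_cells, start, goal):
    """Breadth-first search over free cells; returns a path or None."""
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # no route at all

# A 5x5 patch of floor with one blocked cell in the middle of the aisle.
free_cells = {(x, y) for x in range(5) for y in range(5)} - {(2, 2)}
print(plan(free_cells, (0, 2), (4, 2)))  # the AMR detours around (2, 2)
# A tape-guided cart's "plan" is the fixed row y=2; with (2, 2) blocked,
# it simply stops and waits.
```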

I have no idea why this monstrous system called Scooter would be the best solution for moving carts around a warehouse. It just seems needlessly huge and complicated, especially since we know Amazon already understands that a great way to move carts around is with much smaller robots that can zip underneath a cart, lift it up, and carry it along. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the open warehouse floor just fine.

Why is Amazon at “possibilities” when other companies are at commercial deployments?

I honestly just don’t understand what’s happening here. Amazon has (I assume) a huge R&D budget at its disposal. It was investing in robotic technology for e-commerce warehouses super early, and at an unmatched scale. Even beyond Kiva, Amazon obviously understood the importance of AMRs several years ago, with its $100+ million acquisition of Canvas Technology in 2019. But looking back at Canvas’ old videos, it seems like Canvas was doing in 2017 more or less what we’re seeing Amazon’s Bert robot doing now, nearly half a decade later.

We reached out to Amazon Robotics for comment and sent them a series of questions about the robots in these videos. They sent us this response:

The health and safety of our employees is our number one priority—and has been since day one. We’re excited about the possibilities robotics and other technology can play in helping to improve employee safety.

Hmm.

I mean, sure, I’m excited about the same thing, but I’m still stuck on why Amazon is at possibilities, while other companies are at commercial deployments. It’s certainly possible that the sheer Amazon-ness of Amazon is a significant factor here, in the sense that a commercial deployment for Amazon is orders of magnitude larger and more complex than any of the AMR companies that we’re comparing them to are dealing with. And if Amazon can figure out how to make (say) an AMR without using lidar, it would make a much more significant difference for an in-house large-scale deployment relative to companies offering AMRs as a service.

For another take on what might be going on with this announcement from Amazon, we spoke with Matt Beane, who got his PhD at MIT and studies robotics at UCSB’s Technology Management Program. At the ACM/IEEE International Conference on Human-Robot Interaction (HRI) last year, Beane published a paper on the value of robots as social signals—that is, organizations get valuable outcomes from just announcing they have robots, because this encourages key audiences to see the organization in favorable ways. “My research strongly suggests that Amazon is reaping signaling value from this announcement,” Beane told us. There’s nothing inherently wrong with signaling, because robots can create instrumental value, and that value needs to be communicated to the people who will, ideally, benefit from it. But you have to be careful: “My paper also suggests this can be a risky move,” explains Beane. “Blowback can be pretty nasty if the systems aren’t in full-tilt, high-value use. In other words, it works only if the signal pretty closely matches the internal reality.”

There’s no way for us to know what the internal reality at Amazon is. All we have to go on is this blog post, which isn’t much, and we should reiterate that there may be a significant gap between what the post is showing us about Amazon’s mobile robots and what’s actually going on at Amazon Robotics. My hope is what we’re seeing here is primarily a sign that Amazon Robotics is starting to scale things up, and that we’re about to see them get a lot more serious about developing robots that will help make their warehouses less tedious, safer, and more productive.

Posted in Human Robots

#439280 Google and Harvard Unveil the Largest ...

Last Tuesday, teams from Google and Harvard published an intricate map of every cell and connection in a cubic millimeter of the human brain.

The mapped region encompasses the various layers and cell types of the cerebral cortex, a region of brain tissue associated with higher-level cognition, such as thinking, planning, and language. According to Google, it’s the largest brain map at this level of detail to date, and it’s freely available to scientists (and the rest of us) online. (Really. Go here. Take a stroll.)

To make the map, the teams sliced donated tissue into 5,300 sections, each 30 nanometers thick, and imaged them with a scanning electron microscope at a resolution of 4 nanometers. The resulting 225 million images were computationally aligned and stitched back into a 3D digital representation of the region. Machine learning algorithms segmented individual cells and classified synapses, axons, dendrites, and other structures, and humans checked their work. (The team posted a preprint about the map on bioRxiv.)

Last year, Google and the Janelia Research Campus of the Howard Hughes Medical Institute made headlines when they similarly mapped a portion of a fruit fly brain. That map, at the time the largest yet, covered some 25,000 neurons and 20 million synapses. The new map, beyond being of the human brain (itself noteworthy), covers tens of thousands of neurons and 130 million synapses. It takes up 1.4 petabytes of disk space.
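
That storage figure is easy to sanity-check from the imaging parameters above. Here’s the back-of-the-envelope version, assuming one byte per voxel; the raw estimate lands somewhat above 1.4 petabytes, and our guess (not a published detail) is that alignment cropping and compression account for the difference.

```python
# Voxel count for 1 cubic millimeter at 4 nm x 4 nm in-plane resolution
# and 30 nm section thickness, assuming 1 byte per voxel.
side_nm = 1_000_000  # 1 mm expressed in nanometers
voxels = (side_nm / 4) * (side_nm / 4) * (side_nm / 30)
print(f"{voxels:.2e} voxels -> ~{voxels / 1e15:.1f} PB raw")  # ~2.08e15 -> ~2.1 PB
```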

By comparison, over three decades’ worth of satellite images of Earth by NASA’s Landsat program require 1.3 petabytes of storage. Collections of brain images on the smallest scales are like “a world in a grain of sand,” the Allen Institute’s Clay Reid told Nature, quoting William Blake in reference to an earlier map of the mouse brain.

All that, however, is but a millionth of the human brain. Which is to say, a similarly detailed map of the entire thing is yet years away. Still, the work shows how fast the field is moving. A map of this scale and detail would have been unimaginable a few decades ago.

How to Map a Brain
The study of the brain’s cellular circuitry is known as connectomics.

Obtaining the human connectome, or the wiring diagram of a whole brain, is a moonshot akin to sequencing the human genome. And like the genome, it at first seemed an impossible feat.

The only complete connectomes are for simple creatures: the nematode worm (C. elegans) and the larva of a sea creature called C. intestinalis. There’s a very good reason for that. Until recently, the mapping process was time-consuming and costly.

Researchers mapping C. elegans in the 1980s used a film camera attached to an electron microscope to image slices of the worm, then reconstructed the neurons and synaptic connections by hand, like a maddeningly difficult three-dimensional puzzle. C. elegans has only 302 neurons and roughly 7,000 synapses, but the rough draft of its connectome took 15 years, and a final draft took another 20. Clearly, this approach wouldn’t scale.

What’s changed? In short, automation.

These days the images themselves are, of course, digital. A process known as focused ion beam milling shaves down each slice of tissue a few nanometers at a time. After one layer is vaporized, an electron microscope images the newly exposed layer. The imaged layer is then shaved away by the ion beam and the next one imaged, until all that’s left of the slice of tissue is a nanometer-resolution digital copy. It’s a far cry from the days of Kodachrome.

But maybe the most dramatic improvement is what happens after scientists complete that pile of images.

Instead of assembling them by hand, algorithms take over. Their first job is ordering the imaged slices. Then they do something that was impossible until the last decade: they line up the images just so, tracing the path of cells and synapses between them and thus building a 3D model (a toy sketch of both steps follows below). Humans still proofread the results, but they don’t do the hardest bit anymore. (Even the proofreading can be refined. Renowned neuroscientist and connectomics proponent Sebastian Seung, for example, created a game called Eyewire, where thousands of volunteers review structures.)
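
For the curious, here is a deliberately minimal sketch of those two steps: aligning a pair of consecutive slices (via FFT phase correlation, a standard trick) and segmenting the result (via thresholding plus connected components). The production pipeline uses learned models for this (Google has described “flood-filling networks” for segmentation), so treat everything below as an illustration of the idea, not the method.

```python
# Minimal stand-ins for the two algorithmic steps described above.
import numpy as np
from scipy import ndimage

def estimate_shift(slice_a, slice_b):
    """Estimate (dy, dx) translation between two slices via phase correlation."""
    fa, fb = np.fft.fft2(slice_a), np.fft.fft2(slice_b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = slice_a.shape
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def segment(volume, threshold):
    """Label connected bright regions as candidate cell fragments."""
    labels, count = ndimage.label(volume > threshold)
    return labels, count

# Usage on synthetic data: shift a slice, recover the shift, then label blobs.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, (5, -3), axis=(0, 1))
print(estimate_shift(b, a))               # -> (5, -3)
labels, count = segment(np.stack([a, a]), 0.99)
print(count)                              # a handful of tiny bright "fragments"
```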

“It’s truly beautiful to look at,” Harvard’s Jeff Lichtman, whose lab collaborated with Google on the new map, told Nature in 2019. The programs can trace out neurons faster than the team can churn out image data, he said. “We’re not able to keep up with them. That’s a great place to be.”

But Why…?
In a 2010 TED talk, Seung told the audience you are your connectome. Reconstruct the connections and you reconstruct the mind itself: memories, experience, and personality.

But connectomics has not been without controversy over the years.

Not everyone believes mapping the connectome at this level of detail is necessary for a deep understanding of the brain. And, especially in the field’s earlier, more artisanal past, researchers worried the scale of resources required simply wouldn’t yield comparably valuable (or timely) results.

“I don’t need to know the precise details of the wiring of each cell and each synapse in each of those brains,” neuroscientist Anthony Movshon said in 2019. “What I need to know, instead, is the organizational principles that wire them together.” These, Movshon believes, can likely be inferred from observations at lower resolutions.

Also, a static snapshot of the brain’s physical connections doesn’t necessarily explain how those connections are used in practice.

“A connectome is necessary, but not sufficient,” some scientists have said over the years. Indeed, it may be in the combination of brain maps—including functional, higher-level maps that track signals flowing through neural networks in response to stimuli—that the brain’s inner workings will be illuminated in the sharpest detail.

Still, the C. elegans connectome has proven to be a foundational building block for neuroscience over the years. And the growing speed of mapping is beginning to suggest goals that once seemed impractical may actually be within reach in the coming decades.

Are We There Yet?
Seung has said that when he first started out he estimated it’d take a million years for a person to manually trace all the connections in a cubic millimeter of human cortex. The whole brain, he further inferred, would take on the order of a trillion years.

That’s why automation and algorithms have been so crucial to the field.

Janelia’s Gerry Rubin told Stat he and his team have overseen a 1,000-fold increase in mapping speed since they began work on the fruit fly connectome in 2008. The full connectome—the first part of which was completed last year—may arrive in 2022.

Other groups are working on other animals, like octopuses; comparing how different forms of intelligence are wired up, they say, may prove particularly rich ground for discovery.

The full connectome of a mouse, a project already underway, may follow the fruit fly by the end of the decade. Rubin estimates going from mouse to human would need another million-fold jump in mapping speed. But he points to the trillion-fold increase in DNA sequencing speed since 1973 to show such dramatic technical improvements aren’t unprecedented.
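
The arithmetic behind those estimates is worth a quick sanity check. A minimal sketch, assuming a human brain volume of roughly 1.2 million cubic millimeters (our figure; the article doesn’t give one):

```python
# Extrapolating Seung's manual-tracing estimate and Rubin's speedups.
manual_years_per_mm3 = 1_000_000   # Seung: one person, one cubic millimeter
brain_volume_mm3 = 1.2e6           # assumed human brain volume
print(f"{manual_years_per_mm3 * brain_volume_mm3:.1e} person-years")  # ~1.2e12

fly_speedup_since_2008 = 1e3       # Janelia's reported mapping speedup
mouse_to_human_speedup = 1e6       # Rubin's estimated further jump needed
dna_speedup_since_1973 = 1e12      # the sequencing precedent Rubin cites
# The needed jump is large, but six orders of magnitude smaller than
# the improvement DNA sequencing has already demonstrated.
print(dna_speedup_since_1973 / mouse_to_human_speedup)  # 1e6
```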

The genome may be an apt comparison in another way too. Even after sequencing the first human genome, it’s taken many years to scale genomics to the point we can more fully realize its potential. Perhaps the same will be true of connectomics.

Even as the technology opens new doors, it may take time to understand and make use of all it has to offer.

“I believe people were impatient about what [connectomes] would provide,” Joshua Vogelstein, cofounder of the Open Connectome Project, told The Verge last year. “The amount of time between a good technology being seeded, and doing actual science using that technology is often approximately 15 years. Now it’s 15 years later and we can start doing science.”

Proponents hope brain maps will yield new insights into how the brain works—from thinking to emotion and memory—and how to better diagnose and treat brain disorders. Others, Google among them no doubt, hope to glean insights that could lead to more efficient computing (the brain is astonishing in this respect) and powerful artificial intelligence.

There’s no telling exactly what scientists will find as, neuron by synapse, they map the inner workings of our minds—but it seems all but certain great discoveries await.

Image Credit: Google / Harvard

Posted in Human Robots

#439252 The Cheetah’s Fluffy Tail Points ...

Almost but not quite a decade ago, researchers from UC Berkeley equipped a little robotic car with an actuated metal rod with a weight on the end and used it to show how lizards use their tails to stabilize themselves while jumping through the air. That research inspired a whole bunch of other tailed mobile robots, including a couple of nifty ones from Amir Patel at the University of Cape Town.

The robotic tails that we’ve seen are generally actuated inertial tails: swing a mass one way, and the robot it’s attached to rotates the other way. This is how lizard tails work, and it’s a totally fine way to do things; a minimal sketch of the principle follows below. In fact, people generally figured that many if not most other animals that use their tails to improve their agility leverage this inertial principle, including (most famously) the cheetah. But at least as far as the cheetah was concerned, nobody had actually bothered to check, until Patel took the tails from a collection of ex-cheetahs and showed that cheetah tails are in fact almost entirely fluff. So if it’s not the mass of its tail that helps a cheetah chase down prey, then it must be the aerodynamics.
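
For reference, the inertial principle is just conservation of angular momentum. A minimal sketch with made-up numbers (none of these values come from Patel’s work):

```python
# Inertial tail: with no external torque, angular momentum is conserved,
# so swinging the tail one way counter-rotates the body. Illustrative numbers.
I_body = 0.5       # body moment of inertia, kg*m^2 (assumed)
I_tail = 0.05      # effective tail moment of inertia, kg*m^2 (assumed)
omega_tail = 20.0  # tail swing rate, rad/s (assumed)

# Conservation: I_body * omega_body + I_tail * omega_tail = 0
omega_body = -I_tail * omega_tail / I_body
print(omega_body)  # -2.0 rad/s: the body counter-rotates at a tenth the tail rate
```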

The internet is full of wisdom on cheetah tails, and most of it describes “heavy” tails that “act as a counterbalance” to the rest of the cheetah’s body. This makes intuitive sense, but it’s also quite wrong, as Amir Patel figured out:

The aerodynamics of cheetah tails are super important, and actually something I discovered by accident! Towards the end of my PhD I was invited to a cheetah autopsy at the National Zoological Gardens here in South Africa. The idea was to weigh and measure the inertia of the cheetah tail because no such data existed. Based on what I’d seen in wildlife documentaries (and speaking to any game ranger in South Africa), the cheetah tail is often considered to be heavy, and used as a counterweight.

However, once we removed the fur and skin from the tail during the autopsy, it was surprisingly skinny! We measured it (and the tails of another 6 cheetahs) as being only about 2 percent of the body mass—much lower than my own robotic tails. But the fur made up a significant volume of the tail. So, I figured that there must be something to it: maybe the fur was making the tail appear like a larger object aerodynamically, without the weight penalty of an inertial tail.

A few years ago, Patel started to characterize tail aerodynamics in partnership with Aaron Johnson’s lab at CMU, and that work has led to a recent paper published in IEEE Transactions on Robotics, exploring how aerodynamic drag on a lightweight tail can help robots perform dynamic behaviors more successfully.

The specific tail design that Minitaur is sporting in the video above doesn’t look particularly cheetah-like, being made out of carbon fiber and polyethylene film rather than floof, and only sporting an aerodynamic component at the end of the tail rather than tip to butt. This is explained by cheetahs in the wild not having easy access to either carbon fiber or polyethylene, and by a design that the researchers optimized to maximize drag while minimizing mass rather than for biomimicry. “We experimented with a whole array of furry tails to mimic cheetah fur, but found that the half cylinder shape had by far the most drag,” first author Joseph Norby told us in an email. “And the reduction of the drag component to just the end of the tail was a balance of effectiveness and rigidity—we could have made the drag component cover the entire length, but really the section near the tip produces most of the drag, and reducing the length of the drag component helps maintain the shape of the tail.”

Aerodynamic tails are potentially appealing because, unlike inertial tails, the amount of torque they can produce doesn’t depend on how much they weigh but on how fast they move through the air: the faster the swing, the more torque an aerodynamic tail can produce. We see this in animals, too, with fluffy tails commonly found on fast movers and jumpers like jerboas and flying squirrels. This offers some suggestion about what kinds of robots could benefit most from tails like these, although as Norby points out, the greatest limitation of these tails is the large workspace required for the tail to move around safely.
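
A quick sketch of that scaling, using the standard quadratic-drag formula with assumed numbers (the half-cylinder drag coefficient is a typical textbook value; none of these figures are from the paper):

```python
# Aerodynamic tail: drag force F = 0.5 * rho * Cd * A * v^2 acting at
# the tail tip produces torque about the base. Illustrative values only.
rho = 1.2   # air density, kg/m^3
Cd = 2.3    # drag coefficient for an open half-cylinder, concave side forward
A = 0.02    # frontal area of the drag element, m^2 (assumed)
L = 0.4     # moment arm from tail base to drag element, m (assumed)

def drag_torque(v_tip):
    """Torque about the tail base from quadratic aerodynamic drag."""
    return 0.5 * rho * Cd * A * v_tip**2 * L

for v in (1.0, 2.0, 4.0):
    print(v, round(drag_torque(v), 3))
# Doubling tip speed quadruples torque, with essentially no mass penalty;
# an inertial tail's torque, by contrast, scales with its mass.
```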

Image: Norby et al

A variety of animals (and one robot) with aerodynamic drag tails, including a jerboa and giant Indian squirrel.

While this paper is focused on quantifying the effects of aerodynamic drag on robotic tails, it seems like there’s a lot of potential for some really creative designs—we were wondering about tails with adjustable floofitude, for example, and we asked Norby about some ways in which this research might be extended.

I think a foldable or retractable tail would greatly improve practicality by reducing the workspace when the tail is not needed. Essentially all of the animals we studied had some sort of flexibility to their tails, which I believe is a crucial property for improving both practicality and durability. In a similar vein, we've also thought about employing active or passive designs that could quickly modify the drag coefficient, whether by furling and unfurling, or simply rotating an asymmetric tail like our half cylinder. This could perhaps allow new forms of control similar to paddling and feathering a canoe: increasing drag when moving in one direction and reducing drag in the other could allow for more net control authority. This would be completely impossible with an inertial tail, which cannot do work on the environment.

Photo: Evan Ackerman/IEEE Spectrum

Gratuitous cheetah picture.

Even though animals had the idea for lightweight aerodynamic drag tails first, there’s no reason why we need to restrict ourselves to animal-like form factors when leveraging the advantages that tails like these offer, or indeed with the designs of the tails themselves. Without a mass penalty to worry about, why not put tails on any robot that has trouble keeping its balance, like pretty much every bipedal robot, right? Of course there are plenty of reasons not to do this, but still, it’s exciting to see this whole design space of aerodynamic drag tails potentially open up for any robot platform that needs a little bit of help with dynamic motion.

Enabling Dynamic Behaviors With Aerodynamic Drag in Lightweight Tails, by Joseph Norby, Jun Yang Li, Cameron Selby, Amir Patel, and Aaron M. Johnson from CMU and the University of Cape Town is published in IEEE Transactions on Robotics.

Posted in Human Robots