Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439416 Neuro-evolutionary robotics: A gap ...

Neuro-evolutionary robotics is an attractive approach to realize collective behaviors for swarms of robots. Despite the large number of studies that have been devoted to it and although many methods and ideas have been proposed, empirical evaluations and comparative analyses are rare. Continue reading

Posted in Human Robots

#439414 Air-powered computer memory helps soft ...

Engineers at UC Riverside have unveiled an air-powered computer memory that can be used to control soft robots. The innovation overcomes one of the biggest obstacles to advancing soft robotics: the fundamental mismatch between pneumatics and electronics. The work is published in the open-access journal PLOS ONE. Continue reading

Posted in Human Robots

#439406 Dextrous Robotics Wants To Move Boxes ...

Hype aside, there aren’t necessarily all that many areas where robots have the potential to step into an existing workflow and immediately provide a substantial amount of value. But one area that several robotics companies have jumped into recently is box manipulation—specifically, using robots to unload boxes from the back of a truck, ideally significantly faster than a human can. This is a good task for robots because it plays to their strengths: the environment is semi-structured and usually predictable; speed, power, and precision are all highly valued; and it’s not a job that humans are particularly interested in or designed for.

One of the more novel approaches to this task comes from Dextrous Robotics, a Memphis, Tennessee-based startup led by Evan Drumwright. Drumwright was a professor at GWU before spending a few years at the Toyota Research Institute and then co-founding Dextrous in 2019 with an ex-student of his, Sam Zapolsky. The approach that they’ve come up with is to do box manipulation without any sort of suction, or really any sort of grippers at all. Instead, they’re using what can best be described as a pair of moving arms, each gripping a robotic chopstick.

We can pick up basically anything using chopsticks. If you're good with chopsticks, you can pick up individual grains of rice, and you can pick up things that are relatively large compared to the scale of the chopsticks. Your imagination is about the limit, so wouldn't it be cool if you had a robot that could manipulate things with chopsticks? —Evan Drumwright

It definitely is cool, but are there practical reasons why using chopsticks for box manipulation is a good idea? Of course there are! The nice thing about chopsticks is that they really can grip almost anything (even if you scale them up), making them especially valuable in constrained spaces where you’ve got large disparities in shapes and sizes and weights. They’re good for manipulation, too, able to nudge and reposition things with precision. And while Dextrous is initially focused on a trailer unloading task, having this extra manipulation capability will allow them to consider more difficult manipulation tasks in the future, like trailer loading, a task that necessarily happens just as often as unloading does but which is significantly more complicated to robot-ize.

Even though there are some clear advantages to Dextrous’ chopstick technique, there are disadvantages as well, and the biggest one is likely that it’s just a lot harder to use a manipulation technique like this. “The downside of the chopsticks approach is, as any human will tell you, you need some sophisticated control software to be able to operate,” Drumwright tells us. “But that’s part of what we bring to the game: not just a clever hardware design, but the software to operate it, too.”

Meanwhile, what we’ve seen so far from other companies in this space is pretty consistent use of suction systems for box handling. If you have a flat, non-permeable surface (as with most boxes), suction can work quickly and reliably and with a minimum of fancy planning. However, suction has limits as a form of manipulation, because it’s inherently so sticky, meaning that it can be difficult and/or time-consuming to do anything with precision. Other issues with suction include its sensitivity to temperature and moisture, its propensity to ingest all the dirt it possibly can, and the fact that you need to design the suction array around the biggest and heaviest objects you anticipate having to deal with. That last point is a particular problem because if you also want to manipulate smaller objects, you’re left trying to do so with a suction array that’s way bigger than you’d like it to be. This is not to say that suction is inferior in all cases, and Drumwright readily admits that suction will probably prove to be a good option for some specific tasks. But chopstick manipulation, if they can get it to work, will be a lot more versatile.

Photo: Dextrous Robotics

Dextrous Robotics co-founders Evan Drumwright and Sam Zapolsky.

I think there's a reason that nature has given us hands. Nature knows how to design suction devices—bats have it, octopi have it, frogs have it—and yet we have hands. Why? Hands are a superior instrument. And so, that's why we've gone down this road. I personally believe, based on billions of years of evolution, that there's a reason that manipulation is superior and that that technology is going to win out. —Evan Drumwright

Part of Dextrous’ secret sauce is an emphasis on simulation. Hardware is hard, so ideally, you want to make one thing that just works the first time, rather than having to iterate over and over. Getting it perfect on the first try is probably unrealistic, but the better you can simulate things in advance, the closer you can get. “What we’ve been able to do is set up our entire planning, perception, and control system so that it looks exactly like it does when that code runs on the real robot,” says Drumwright. “When we run something on the simulated robot, it agrees with reality about 95 percent of the time, which is frankly unprecedented.” Using very high-fidelity hardware modeling, a real-time simulator, and software that can transfer directly between sim and real, Dextrous is able to confidently model how their system performs even on things that are notoriously tricky to simulate, like contact and stiction. The idea is that the end result will be a system that can be developed faster while performing more complex tasks better than other solutions.

We also wondered why this system uses smooth, round chopsticks rather than something a little grippier, like chopsticks with a square cross section, perhaps with some higher-friction material on the inside surfaces. Drumwright explains that the advantage of the current design is that it’s symmetrical around its rotational axis, meaning that you only need five degrees of freedom to fully control it. “What that means practically is that things can get a whole lot simpler—the control algorithms get simpler, the inverse kinematics algorithms get simpler, and importantly the number of motors that we need to drive in the robot goes down.”
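
The degree-of-freedom argument can be made concrete with a toy sketch (a generic illustration, not Dextrous’ actual software): because the chopstick is rotationally symmetric about its long axis, its pose is fully captured by a 3-D tip position plus a unit direction vector (3 + 2 = 5 numbers), and spinning it about that axis changes nothing.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate_about_axis(v, axis, angle):
    """Rodrigues' formula: rotate vector v about a unit axis by `angle`."""
    k = normalize(axis)
    c, s = math.cos(angle), math.sin(angle)
    kxv = cross(k, v)
    kdv = dot(k, v)
    return tuple(v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c)
                 for i in range(3))

# A chopstick's pose: tip position (3 numbers) plus a unit direction
# (2 free parameters on the sphere) = 5 degrees of freedom total.
direction = normalize((0.0, 0.0, 1.0))

# Spinning the chopstick about its own long axis leaves its direction
# (and hence its pose) unchanged -- the 6th DOF is redundant.
spun = rotate_about_axis(direction, direction, math.pi / 3)
assert all(abs(a - b) < 1e-9 for a, b in zip(direction, spun))
```

Dropping the redundant roll angle is what buys the simplification Drumwright describes: one fewer motor per chopstick and simpler inverse kinematics.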

Screenshot: Dextrous Robotics

Simulated version of Dextrous Robotics’ hardware.

Dextrous took seed funding 18 months ago, and since then they’ve been working on both the software and hardware for their system as well as finding the time to score an NSF SBIR Phase I grant. The above screenshot shows the simulation of the hardware they’re working towards (chopstick manipulators on two towers that can move laterally), while the Franka Panda arms are what they’re using to validate their software in the meantime. New hardware should be done imminently, and over the next year, Dextrous is looking forward to conducting paid pilots with real customers. Continue reading

Posted in Human Robots

#439400 A Neuron’s Sense of Timing Encodes ...

We like to think of brains as computers: physical systems that process inputs and spit out outputs. But, obviously, what’s between your ears bears little resemblance to your laptop.

Computer scientists know the intimate details of how computers store and process information because they design and build them. But neuroscientists didn’t build brains, which makes them a bit like a piece of alien technology they’ve found and are trying to reverse engineer.

At this point, researchers have catalogued the components fairly well. We know the brain is a vast and intricate network of cells called neurons that communicate by way of electrical and chemical signals. What’s harder to figure out is how this network makes sense of the world.

To do that, scientists try to tie behavior to activity in the brain by listening to the chatter of its neurons firing. If neurons in a region get rowdy when a person is eating chocolate, well, those cells might be processing taste or directing chewing. This method has mostly focused on the frequency at which neurons fire—that is, how often they fire in a given period of time.

But frequency alone is an imprecise measure. For years, research in rats has suggested that the timing of a neuron’s firing relative to its peers—particularly during spatial navigation—may also encode information. This process, in which the timing of some neurons grows increasingly out of step with their neighbors, is called “phase precession.”

It wasn’t known if phase precession was widespread in mammals, but recent studies have found it in bats and marmosets. And now, a new study has shown that it happens in humans too, strengthening the case that phase precession may occur across species.

The new study also found evidence of phase precession outside of spatial tasks, lending some weight to the idea it may be a more general process in learning throughout the brain.

The paper was published in the journal Cell last month by a Columbia University team of researchers led by neuroscientist and biomedical engineer Josh Jacobs.

The researchers say more studies are needed to flesh out the role of phase precession in the brain, and how or if it contributes to learning is still uncertain.

But to Salman Qasim, a post-doctoral fellow on Jacobs’ team and lead author of the paper, the patterns are tantalizing. “[Phase precession is] so prominent and prevalent in the rodent brain that it makes you want to assume it’s a generalizable mechanism,” he told Quanta Magazine this month.

Rat Brains to Human Brains
Though phase precession in rats has been studied for decades, it’s taken longer to unearth it in humans for a couple of reasons. For one, it’s more challenging to study in humans at the level of neurons because it requires placing electrodes deep in the brain. Also, our patterns of brain activity are subtler and more complex, making them harder to untangle.

To solve the first challenge, the team analyzed decade-old recordings of neural chatter from 13 patients with drug-resistant epilepsy. As part of their treatment, the patients had electrodes implanted to map the storms of activity during a seizure.

In one test, they navigated a two-dimensional virtual world—like a simple video game—on a laptop. Their brain activity was recorded as they were instructed to drive and drop off “passengers” at six stores around the perimeter of a rectangular track.

The team combed through this activity for hints of phase precession.

Active regions of the brain tend to fire together at a steady rate. These rhythms, called brain waves, are like a metronome or internal clock. Phase precession occurs when individual neurons fall out of step with the prevailing brain waves nearby. In spatial navigation, as in this study, a particular type of neuron, called a “place cell,” fires earlier and earlier compared to its peers as the subject approaches and passes through a region. Its early firing eventually links up with the late firing of the next place cell in the chain, strengthening the synapse between the two and encoding a path through space.

In rats, theta waves in the hippocampus, which is a region associated with navigation, are strong and clear, making precession easier to pick out. In humans, they’re weaker and more variable. So the team used a clever statistical analysis to widen the observed wave frequencies into a range. And that’s when the phase precession clearly stood out.

This result lined up with prior navigation studies in rats. But the team went a step further.

In another part of the brain, the frontal cortex, they found phase precession in neurons not involved in navigation. The timing of these cells fell out of step with their neighbors as the subject achieved the goal of dropping passengers off at one of the stores. This indicated phase precession may also encode the sequence of steps leading up to a goal.

The findings, therefore, extend the occurrence of phase precession to humans and to new tasks and regions in the brain. The researchers say this suggests the phenomenon may be a general mechanism that encodes experiences over time. Indeed, other research—some very recent and not yet peer-reviewed—validates this idea, tying it to the processing of sounds, smells, and series of images.

And, the cherry on top, the process compresses experience to the length of a single brain wave. That is, an experience that takes seconds—say, a rat moving through several locations in the real world—is compressed to the fraction of a second it takes the associated neurons to fire in sequence.

In theory, this could help explain how we learn so fast from so few examples, something artificial intelligence algorithms struggle to do.

As enticing as the research is, however, both the team involved in the study and other researchers say it’s still too early to draw definitive conclusions. There are other theories for how humans learn so quickly, and it’s possible phase precession is an artifact of the way the brain functions as opposed to a driver of its information processing.

That said, the results justify more serious investigation.

“Anyone who looks at brain activity as much as we do knows that it’s often a chaotic, stochastic mess,” Qasim told Wired last month. “So when you see some order emerge in that chaos, you want to ascribe to it some sort of functional purpose.”

Only time will tell if that order is a fundamental neural algorithm or something else.

Image Credit: Daniele Franchi / Unsplash Continue reading

Posted in Human Robots

#439395 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
Need to Fit Billions of Transistors on Your Chip? Let AI Do It
Will Knight | Wired
“Google, Nvidia, and others are training algorithms in the dark arts of designing semiconductors—some of which will be used to run artificial intelligence programs. …This should help companies draw up more powerful and efficient blueprints in much less time.”

DIGITAL MEDIA
AI Voice Actors Sound More Human Than Ever—and They’re Ready to Hire
Karen Hao | MIT Technology Review
“A new wave of startups are using deep learning to build synthetic voice actors for digital assistants, video-game characters, and corporate videos. …Companies can now license these voices to say whatever they need. They simply feed some text into the voice engine, and out will spool a crisp audio clip of a natural-sounding performance.”

AUGMENTED REALITY
5 Years After Pokémon Go, It’s Time for the Metaverse
Steven Levy | Wired
“That’s right, it’s now the fifth anniversary of Pokémon Go and the craze that marked its launch. That phenomenon was not only a milestone for the company behind the game, a Google offshoot called Niantic, but for the digital world in general. Pokémon Go was the first wildly popular implementation of augmented reality, a budding technology at the time, and it gave us a preview of what techno-pundits now believe is the next big thing.”

SPACE
Startups Aim Beyond Earth
Erin Woo | The New York Times
“Investors are putting more money than ever into space technology. Space start-ups raised over $7 billion in 2020, double the amount from just two years earlier, according to the space analytics firm BryceTech. …The boom, many executives, analysts and investors say, is fueled partly by advancements that have made it affordable for private companies—not just nations—to develop space technology and launch products into space.”

ENVIRONMENT
New Fabric Passively Cools Whatever It’s Covering—Including You
John Timmer | Ars Technica
“Without using energy, [passive cooling] materials take heat from whatever they’re covering and radiate it out to space. Most of these efforts have focused on building materials, with the goal of creating roofs that can keep buildings a few degrees cooler than the surrounding air. But now a team based in China has taken the same principles and applied them to fabric, creating a vest that keeps its users about 3º C cooler than they would be otherwise.”

SCIENCE
NASA Is Supporting the Search for Alien Megastructures
Daniel Oberhaus | Supercluster
“For the first time in history, America’s space agency is officially sponsoring a search for alien megastructures. ‘I’m encouraged that we’ve got NASA funding to support this,’ says [UC Berkeley’s Steve] Croft. ‘We’re using a NASA mission to fulfill a stated NASA objective—the search for life in the universe. But we’re doing it through a technosignature search that is not very expensive for NASA compared to some of their biosignature work.’”

ROBOTICS
Boston Dynamics, BTS, and Ballet: The Next Act for Robotics
Sydney Skybetter | Wired
“Even though Boston Dynamics’ dancing robots are currently relegated to the realm of branded spectacle, I am consistently impressed by the company’s choreographic strides. In artists’ hands, these machines are becoming eminently capable of expression through performance. Boston Dynamics is a company that takes dance seriously, and, per its blog post, uses choreography as ‘a form of highly accelerated lifecycle testing for the hardware.’”

INNOVATION
The Rise of ‘ARPA-Everything’ and What It Means for Science
Jeff Tollefson | Nature
“Enamored with the innovation that DARPA fostered in the United States, governments around the world, including in Europe and Japan, have attempted to duplicate the agency within their own borders. …Scientists who have studied the DARPA model say it works if applied properly, and to the right, ‘ARPA-able’ problems. But replicating DARPA’s recipe isn’t easy.”

AUTOMATION
No Driver? No Problem—This Is the Indy Autonomous Challenge
Gregory Leporati | Ars Technica
“The upcoming competition is, in many ways, the spiritual successor of the DARPA Grand Challenge, a robotics race from the early 2000s. …’If you could bring back the excitement of the DARPA Grand Challenge,’ [ESN’s president and CEO] Paul Mitchell continued, ‘and apply it to a really challenging edge use case, like high-speed racing, then that can leap the industry from where it is to where it needs to be to help us realize our autonomous future.’”

Image Credit: Henry & Co. / Unsplash Continue reading

Posted in Human Robots