#439284 A system to benchmark the posture ...
In recent years, roboticists have developed a wide variety of robots with human-like capabilities. This includes robots with bodies that structurally resemble those of humans, also known as humanoid robots.
#439280 Google and Harvard Unveil the Largest ...
Last Tuesday, teams from Google and Harvard published an intricate map of every cell and connection in a cubic millimeter of the human brain.
The mapped region encompasses the various layers and cell types of the cerebral cortex, a region of brain tissue associated with higher-level cognition, such as thinking, planning, and language. According to Google, it’s the largest brain map at this level of detail to date, and it’s freely available to scientists (and the rest of us) online. (Really. Go here. Take a stroll.)
To make the map, the teams sliced donated tissue into 5,300 sections, each 30 nanometers thick, and imaged them with a scanning electron microscope at a resolution of 4 nanometers. The resulting 225 million images were computationally aligned and stitched back into a 3D digital representation of the region. Machine learning algorithms segmented individual cells and classified synapses, axons, dendrites, and other structures, and humans checked their work. (The team posted a preprint paper about the map on bioRxiv.)
Last year, Google and the Janelia Research Campus of the Howard Hughes Medical Institute made headlines when they similarly mapped a portion of a fruit fly brain. That map, at the time the largest yet, covered some 25,000 neurons and 20 million synapses. The new map not only targets the human brain, notable in itself, it also covers tens of thousands of neurons and 130 million synapses. It takes up 1.4 petabytes of disk space.
By comparison, over three decades’ worth of satellite images of Earth by NASA’s Landsat program require 1.3 petabytes of storage. Collections of brain images on the smallest scales are like “a world in a grain of sand,” the Allen Institute’s Clay Reid told Nature, quoting William Blake in reference to an earlier map of the mouse brain.
All that, however, is but a millionth of the human brain. Which is to say, a similarly detailed map of the entire thing is yet years away. Still, the work shows how fast the field is moving. A map of this scale and detail would have been unimaginable a few decades ago.
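To put "a millionth" in storage terms, a naive linear extrapolation from the article's own figures (an assumption, since formats and compression will surely change by then) puts a whole-brain map at this level of detail in zettabyte territory:

```python
# Naive linear extrapolation from the article's figures: the new map
# covers roughly a millionth of the human brain and occupies 1.4
# petabytes, so a whole-brain map at the same detail would need about
# a million times more storage.
petabytes_per_map = 1.4
scale_factor = 1e6                    # map volume -> whole brain
whole_brain_petabytes = petabytes_per_map * scale_factor
whole_brain_zettabytes = whole_brain_petabytes / 1e6
print(f"{whole_brain_zettabytes:.1f} zettabytes")  # 1.4 zettabytes
```

That is roughly a thousand times the Landsat archive mentioned above, for a single brain.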
How to Map a Brain
The study of the brain’s cellular circuitry is known as connectomics.
Obtaining the human connectome, or the wiring diagram of a whole brain, is a moonshot akin to the human genome. And like the human genome, at first, it seemed an impossible feat.
The only complete connectomes are for simple creatures: the nematode worm (C. elegans) and the larva of a sea creature called C. intestinalis. There’s a very good reason for that. Until recently, the mapping process was time-consuming and costly.
Researchers mapping C. elegans in the 1980s used a film camera attached to an electron microscope to image slices of the worm, then reconstructed the neurons and synaptic connections by hand, like a maddeningly difficult three-dimensional puzzle. C. elegans has only 302 neurons and roughly 7,000 synapses, but the rough draft of its connectome took 15 years, and a final draft took another 20. Clearly, this approach wouldn’t scale.
What’s changed? In short, automation.
These days the images themselves are, of course, digital. A process known as focused ion beam milling shaves down each slice of tissue a few nanometers at a time. After one layer is vaporized, an electron microscope images the newly exposed layer. The imaged layer is then shaved away by the ion beam and the next one imaged, until all that’s left of the slice of tissue is a nanometer-resolution digital copy. It’s a far cry from the days of Kodachrome.
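That mill-and-image cycle can be sketched as a simple loop. The `image_exposed_face` and `mill_away` helpers below are hypothetical stand-ins for instrument control, not a real microscope API:

```python
# Hypothetical stand-ins for instrument control, for illustration only.
def image_exposed_face():
    return "2D electron micrograph"

def mill_away(step_nm):
    pass  # the focused ion beam vaporizes the top few nanometers

def acquire_volume(sample_depth_nm, step_nm=4):
    """Mill-and-image loop: image the exposed face, vaporize a few
    nanometers with the ion beam, and repeat until the sample is
    consumed, leaving a stack of 2D images (a digital 3D copy)."""
    layers = []
    depth = 0
    while depth < sample_depth_nm:
        layers.append(image_exposed_face())
        mill_away(step_nm)
        depth += step_nm
    return layers

# A 30-nanometer-deep sample milled in 4-nanometer steps yields
# 8 imaged layers.
stack = acquire_volume(sample_depth_nm=30, step_nm=4)
print(len(stack))  # 8
```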
But maybe the most dramatic improvement is what happens after scientists complete that pile of images.
Instead of assembling them by hand, algorithms take over. Their first job is ordering the imaged slices. Then they do something that was impossible until the last decade. They line up the images just so, tracing the path of cells and synapses between them and thus building a 3D model. Humans still proofread the results, but they don’t do the hardest bit anymore. (Even the proofreading can be refined. Renowned neuroscientist and connectomics proponent Sebastian Seung, for example, created a game called Eyewire, where thousands of volunteers review structures.)
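One small piece of that alignment step can be illustrated with a toy example: estimating the translation between two consecutive slice images by cross-correlation. Real pipelines handle rotation, warping, and petabytes of data; this sketch handles a one-dimensional strip of pixels:

```python
import numpy as np

def estimate_shift(slice_a, slice_b):
    """Return the integer offset that best aligns slice_b to slice_a,
    found as the peak of their (mean-subtracted) cross-correlation."""
    corr = np.correlate(slice_b - slice_b.mean(),
                        slice_a - slice_a.mean(), mode="full")
    # In "full" mode, index len(slice_a) - 1 corresponds to zero lag.
    return int(corr.argmax()) - (len(slice_a) - 1)

# A toy 1D "slice" and a copy shifted by two pixels.
a = np.array([0., 0., 1., 3., 1., 0., 0.])
b = np.roll(a, 2)
print(estimate_shift(a, b))  # 2
```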
“It’s truly beautiful to look at,” Harvard’s Jeff Lichtman, whose lab collaborated with Google on the new map, told Nature in 2019. The programs can trace out neurons faster than the team can churn out image data, he said. “We’re not able to keep up with them. That’s a great place to be.”
But Why…?
In a 2010 TED talk, Seung told the audience you are your connectome. Reconstruct the connections and you reconstruct the mind itself: memories, experience, and personality.
But connectomics has not been without controversy over the years.
Not everyone believes mapping the connectome at this level of detail is necessary for a deep understanding of the brain. And, especially in the field’s earlier, more artisanal past, researchers worried the scale of resources required simply wouldn’t yield comparably valuable (or timely) results.
“I don’t need to know the precise details of the wiring of each cell and each synapse in each of those brains,” neuroscientist Anthony Movshon said in 2019. “What I need to know, instead, is the organizational principles that wire them together.” These, Movshon believes, can likely be inferred from observations at lower resolutions.
Also, a static snapshot of the brain’s physical connections doesn’t necessarily explain how those connections are used in practice.
“A connectome is necessary, but not sufficient,” some scientists have said over the years. Indeed, it may be in the combination of brain maps—including functional, higher-level maps that track signals flowing through neural networks in response to stimuli—that the brain’s inner workings will be illuminated in the sharpest detail.
Still, the C. elegans connectome has proven to be a foundational building block for neuroscience over the years. And the growing speed of mapping is beginning to suggest goals that once seemed impractical may actually be within reach in the coming decades.
Are We There Yet?
Seung has said that when he first started out he estimated it’d take a million years for a person to manually trace all the connections in a cubic millimeter of human cortex. The whole brain, he further inferred, would take on the order of a trillion years.
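Those two numbers hang together as simple volume scaling: a million years per cubic millimeter times roughly a million cubic millimeters of brain gives a trillion years. A two-line consistency check:

```python
# Checking Seung's manual-tracing estimates against each other: the
# implied brain volume is about a million cubic millimeters, in line
# with the commonly cited ~1.2 liters for a human brain.
years_per_cubic_mm = 1e6       # manual tracing, one cubic millimeter
whole_brain_years = 1e12       # Seung's whole-brain estimate
implied_volume_mm3 = whole_brain_years / years_per_cubic_mm
print(f"{implied_volume_mm3:.0e} cubic millimeters")  # 1e+06 cubic millimeters
```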
That’s why automation and algorithms have been so crucial to the field.
Janelia’s Gerry Rubin told Stat he and his team have overseen a 1,000-fold increase in mapping speed since they began work on the fruit fly connectome in 2008. The full connectome—the first part of which was completed last year—may arrive in 2022.
Other groups are working on other animals, like octopuses; they say comparing how different forms of intelligence are wired up may prove particularly rich ground for discovery.
The full connectome of a mouse, a project already underway, may follow the fruit fly by the end of the decade. Rubin estimates going from mouse to human would need another million-fold jump in mapping speed. But he points to the trillion-fold increase in DNA sequencing speed since 1973 to show such dramatic technical improvements aren’t unprecedented.
The genome may be an apt comparison in another way too. Even after sequencing the first human genome, it’s taken many years to scale genomics to the point we can more fully realize its potential. Perhaps the same will be true of connectomics.
Even as the technology opens new doors, it may take time to understand and make use of all it has to offer.
“I believe people were impatient about what [connectomes] would provide,” Joshua Vogelstein, cofounder of the Open Connectome Project, told The Verge last year. “The amount of time between a good technology being seeded, and doing actual science using that technology is often approximately 15 years. Now it’s 15 years later and we can start doing science.”
Proponents hope brain maps will yield new insights into how the brain works—from thinking to emotion and memory—and how to better diagnose and treat brain disorders. Others, Google among them no doubt, hope to glean insights that could lead to more efficient computing (the brain is astonishing in this respect) and powerful artificial intelligence.
There’s no telling exactly what scientists will find as, neuron by synapse, they map the inner workings of our minds—but it seems all but certain great discoveries await.
Image Credit: Google / Harvard
#439271 Video Friday: NASA Sending Robots to ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers.
It’s ICRA this week, but since the full proceedings are not yet available, we’re going to wait until we can access everything to cover the conference properly. Or, as properly as we can, not being in Xi’an right now.
We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
NASA has selected the DAVINCI+ (Deep Atmosphere Venus Investigation of Noble-gases, Chemistry and Imaging +) mission as part of its Discovery program, and it will be the first spacecraft to enter the Venus atmosphere since NASA’s Pioneer Venus in 1978 and USSR’s Vega in 1985.
The mission will consist of a spacecraft and a probe. The spacecraft will track motions of the clouds and map surface composition by measuring heat emission from Venus’ surface that escapes to space through the massive atmosphere. The probe will descend through the atmosphere, sampling its chemistry as well as the temperature, pressure, and winds. The probe will also take the first high-resolution images of Alpha Regio, an ancient highland twice the size of Texas with rugged mountains, looking for evidence that past crustal water influenced surface materials.
Launch is targeted for FY2030.
[ NASA ]
Skydio has officially launched their 3D Scan software, turning our favorite fully autonomous drone into a reality capture system.
Skydio held a launch event at the U.S. Space & Rocket Center and the keynote is online; it's actually a fairly interesting 20 minutes with some cool rockets thrown in for good measure.
[ Skydio ]
Space robotics is a key technology for space exploration and an enabling factor for future missions, both scientific and commercial. Underwater tests are a valuable tool for validating robotic technologies for space. In DFKI’s test basin, even large robots can be tested in simulated micro-gravity with mostly unrestricted range of motion.
[ DFKI ]
The Harvard Microrobotics Lab has developed a soft robotic hand with dexterous soft fingers capable of some impressive in-hand manipulation, starting (obviously) with a head of broccoli.
Training soft robots in simulation has been a bit of a challenge, but the researchers developed their own simulation framework that matches the real world pretty closely:
The simulation framework is available to download and use, and you can do some nutty things with it, like simulating tentacle basketball:
I’d pay to watch that IRL.
[ Paper ] via [ Harvard ]
Using the navigation cameras on its mast, NASA’s Curiosity Mars rover captured this movie of clouds just after sunset on March 28, 2021, the 3,072nd sol, or Martian day, of the mission. These noctilucent, or twilight, clouds are made of water ice; the ice crystals reflect the setting sun, allowing the detail in each cloud to be seen more easily.
[ JPL ]
Genesis Robotics is working on something, and that's all we know.
[ Genesis Robotics ]
To further improve the autonomous capabilities of future space robots and to advance European efforts in this field, the European Union funded the ADE project, which was completed recently in Wulsbüttel near Bremen. There, the rover “SherpaTT” of the German Research Center for Artificial Intelligence (DFKI) managed to autonomously cover a distance of 500 meters in less than three hours thanks to the successful collaboration of 14 European partners.
[ DFKI ]
For $6.50, a NEXTAGE robot will make an optimized coffee for you. In Japan, of course.
[ Impress ]
Things I’m glad a robot is doing so that I don’t have to: dross skimming.
[ Fanuc ]
Today, anyone can hail a ride to experience the Waymo Driver with our fully autonomous ride-hailing service, Waymo One. Riders Ben and Ida share their experience on one of their recent multi-stop rides. Watch as they take us along for a ride.
[ Waymo ]
The IEEE Robotics and Automation Society Town Hall 2021 featured discussion around Diversity & Inclusion, RAS CARES committee & Code of Conduct, Gender Diversity, and the Developing Country Faculty Engagement Program.
[ IEEE RAS ]
#439267 Video Friday: Digger Finger
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.
[ MIT ]
Bye bye, robots!
I’m sure they'll be fine. And I’m sure because they were, in fact, fine:
[ Squishy Robotics ]
This has to be the most heavily modified Husky I’ve ever seen.
[ ORI ]
TRI is now letting anyone build their own bubble gripper, which is very kind of them.
The Punyo bubbles employ state-of-the-art visuotactile sensing techniques that allow a robot to recognize objects by shape, track their orientation in its grasp, and sense forces as it interacts with the world. This feedback is critical as robots learn to push and pull on the world safely and robustly while assisting people by opening doors, putting things away, using household tools, and other domestic tasks.
[ Punyo ] via [ TRI ]
Thanks, Andrew!
Some impressive work from Giuseppe Loianno’s lab at NYU, showing cooperative aerial transport of a payload using only a monocular camera and IMU on each drone. No external anything!
[ Paper ] via [ ARPL ]
Thanks, Giuseppe!
Highly constrained manipulation tasks continue to be challenging for autonomous robots as they require high levels of precision. This paper demonstrates that the combination of state-of-the-art object tracking with passively adaptive mechanical hardware can be leveraged to complete precision manipulation tasks with tight, industrially-relevant tolerances (0.25mm).
[ Paper ]
Thanks, Fan!
Need a tank cleaned? HEBI's got you.
[ HEBI Robotics ]
Thanks, Hardik!
Multi-robot cooperation is one of several key technologies seen as promising for planetary exploration. In the PRO-ACT project, these technologies were applied and further developed. The involved robotic systems VELES (a six-wheeled rover from PIAP Space, Poland), Mantis (a six-legged walking robot from DFKI, Germany), and the Mobile Gantry (a four-wheeled gantry with a 3D printer from AVS, Spain) were designed to perform tasks together.
[ Pro-Act ]
This work presents a new version of the tactile-sensing finger GelSlim 3.0, which integrates the ability to sense high-resolution shape, force, and slip in a compact form factor for use with small parallel jaw grippers in cluttered bin-picking scenarios. The novel design incorporates the capability to use real-time analytic methods to measure shape, estimate the contact 3D force distribution, and detect incipient slip.
[ GelSlim ]
A swarm of robots and a human collaborate to create paintings: Robotic Canvas was created in the Bristol Robotics Laboratory (University of Bristol and University of the West of England), aiming to combine swarm robotics, human-robot interaction, and art.
[ BRL ] via [ Robohub ]
As someone who plays rec soccer, I'm impressed. Also, lol.
[ Paper ]
It's unclear how big of a deal fomites actually are, but robots are out there zapping stuff anyway.
[ PAL ]
The Magic Queen is the largest 3D-printed biodegradable structure ever made, created with an ABB IRB 2600 robot and tended by an ABB IRB 4600. Showcased by the Austrian architectural bureau MAEID at the 17th International Architecture La Biennale di Venezia, the Magic Queen aims to inspire architects about the possibilities of automation and 3D printing, driving innovation and enabling new ways of building.
[ ABB ]
This video showcases our current research to address the challenges of aerial manipulation with omnidirectional flying robots at ETH Zurich's Autonomous Systems Lab. This topic connects several different topics of active research at our institute, including design and control of omnidirectional aerial manipulators, planning frameworks, and grasp detection.
[ ASL ]
Legged locomotion can extend the operational domain of robots to some of the most challenging environments on Earth. However, conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes. These designs have increased in complexity but fallen short of the generality and robustness of animal locomotion. Here, we present a robust controller for blind quadrupedal locomotion in challenging natural environments.
[ ETHZ ]
We sat down with ElliQ user Deanna Dezern to hear her heartwarming story on what it's been like to have ElliQ at home with her (throughout the pandemic and long before). She explains the meaningful bond she has developed with ElliQ, the value she has found in ElliQ, and how ElliQ helped her when she needed it most.
[ ElliQ ]
On May 13, 2021, the University of Washington’s Center for an Informed Public and Tech Policy Lab co-hosted a virtual book talk featuring Kate Crawford, a leading scholar of the social implications of artificial intelligence and author of the recently published book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, April 2021). This recording features a discussion and Q&A moderated by UW School of Law professor Ryan Calo, a co-founder of the Center for an Informed Public and faculty co-director at the Tech Policy Lab.
[ UW ]