Tag Archives: Stanford
#439042 How Scientists Used Ultrasound to Read ...
Thanks to neural implants, mind reading is no longer science fiction.
As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!
Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through its protective membrane and into brain tissue. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.
Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.
In monkeys, the technology reliably predicted eye movements and hand gestures from just a single trial—without the usual lengthy training process needed to decode a movement. If adopted by humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.
“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.
To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.
Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.
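The arithmetic behind that pulse-echo trick is time of flight: an echo that returns after time t comes from a boundary at depth c × t / 2, where c is the speed of sound in tissue (roughly 1,540 meters per second). Below is a minimal Python sketch of that relationship; the numbers are illustrative and not taken from the study.

```python
# Pulse-echo time of flight: an echo returning after round-trip time t
# comes from a tissue boundary at depth c * t / 2.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical average for soft tissue

def echo_depth_m(round_trip_time_s: float) -> float:
    """Depth of the reflecting boundary, given the echo round-trip time."""
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2.0

# An echo arriving 65 microseconds after the pulse corresponds to a
# boundary roughly 5 centimeters deep.
print(f"{echo_depth_m(65e-6) * 100:.1f} cm")  # ~5.0 cm
```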
Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.
“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.
There are plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?
One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.
The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.
The device is linked to a computer, which controls the direction of the sound waves and captures the signals coming back from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, the part of the brain’s motor system that plans movement. If right now you’re thinking about scrolling down this page, that region is already active, before your fingers actually perform the movement.
Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either to the left or the right, then flashed. The monkeys’ task was to flick their eyes to the second dot. It’s something that seems easy for us, but requires sophisticated brain computation.
The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.
Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech
As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.
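To make the decoding step concrete, here is a minimal sketch of single-trial classification: flattened functional ultrasound frames are reduced in dimensionality and fed to a linear classifier, and cross-validation reports the kind of left-versus-right accuracy described above. The pipeline, array shapes, and synthetic data are illustrative assumptions, not the decoder used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in data: 200 trials, each a flattened 64x64 functional ultrasound
# frame of the posterior parietal cortex, captured while a movement is being
# planned. Real data would come from the transducer, aligned to the cue.
n_trials, n_pixels = 200, 64 * 64
X = rng.normal(size=(n_trials, n_pixels))
y = rng.integers(0, 2, size=n_trials)   # 0 = planned left, 1 = planned right
X[y == 1, :50] += 0.4                   # inject a fake task-related signal

# Reduce dimensionality, then classify each single trial with a linear model.
decoder = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())

# 5-fold cross-validation: every prediction is made on a held-out trial.
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"single-trial decoding accuracy: {accuracy:.2f}")
```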
Even more impressive is the resolution. Sound waves might seem like a blunt instrument, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—roughly the width of 10 neurons.
A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, relax. The new tech still requires skull surgery, in which a small chunk of skull is removed. The brain itself, however, is spared. Compared to electrodes, that means ultrasound causes less damage and could keep reading the mind reliably far longer than anything currently possible.
There are downsides. Focused ultrasound is far younger than electrode-based neural implants, and it can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech requires a wire to link the device to a computer, which is off-putting to many people and will hamper widespread adoption. Add to that an inherent limitation of tracking blood flow: the signal lags behind the underlying electrical activity by roughly two seconds.
All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.
“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.
Image Credit: Free-Photos / Pixabay
#439012 Video Friday: Man-Machine Synergy ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Man-Machine Synergy Effectors, Inc. is a Japanese company working on an absolutely massive “human machine synergistic effect device,” which is a huge robot controlled by a nearby human using a haptic rig.
From the look of things, the next generation will be able to move around. Whoa.
[ MMSE ]
This method of loading and unloading AMRs without having them ever stop moving is so obvious that there must be some equally obvious reason why I've never seen it done in practice.
The LoadRunner is able to transport and sort parcels weighing up to 30 kilograms. This makes it the perfect luggage carrier for airports. These AI-driven go-carts can also work in concert as larger collectives to carry large, heavy and bulky objects. Every LoadRunner can also haul up to four passive trailers. Powered by four electric motors, the LoadRunner sharply brakes at just the right moment right in front of its destination and the payload slides from the robot onto the delivery platform.
[ Fraunhofer ] via [ Gizmodo ]
Ayato Kanada at Kyushu University wrote in to share this clever “dislocatable joint,” a way of combining continuum and rigid robots.
[ Paper ]
Thanks Ayato!
The DodgeDrone challenge revisits the popular dodgeball game in the context of autonomous drones. Specifically, participants will have to code navigation policies to fly drones between waypoints while avoiding dynamic obstacles. Drones are fast but fragile systems: as soon as something hits them, they will crash! Since objects will move towards the drone with different speeds and acceleration, smart algorithms are required to avoid them!
This could totally happen in real life, and we need to be prepared for it!
[ DodgeDrone Challenge ]
In addition to winning the Best Student Design Competition CREATIVITY Award at HRI 2021, this paper would also have won the Best Paper Title award, if that award existed.
[ Paper ]
Robots are traditionally bound by a fixed morphology during their operational lifetime, which is limited to adapting only their control strategies. Here we present the first quadrupedal robot that can morphologically adapt to different environmental conditions in outdoor, unstructured environments.
We show that the robot exploits its training to effectively transition between different morphological configurations, exhibiting substantial performance improvements over a non-adaptive approach. The demonstrated benefits of real-world morphological adaptation point to the potential for a new embodied way of incorporating adaptation into future robotic designs.
[ Nature ]
A drone video shot in a Minneapolis bowling alley was hailed as an instant classic. One Hollywood veteran said it “adds to the language and vocabulary of cinema.” One IEEE Spectrum editor said “hey that's pretty cool.”
[ Bryant Lake Bowl ]
It doesn't take a robot to convince me to buy candy, but I think if I buy candy from Relay it's a business expense, right?
[ RIS ]
DARPA is making progress on its AI dogfighting program, with physical flight tests expected this year.
[ DARPA ACE ]
Unitree Robotics has realized that the Empire needs to be overthrown!
[ Unitree ]
Windhover Labs, an emerging leader in open and reliable flight software and hardware, announces the upcoming availability of its first hardware product, a low cost modular flight computer for commercial drones and small satellites.
[ Windhover ]
As robots and autonomous systems are poised to become part of our everyday lives, the University of Michigan and Ford are opening a one-of-a-kind facility where they’ll develop robots and roboticists that help make lives better, keep people safer and build a more equitable society.
[ U Michigan ]
The adaptive robot Rizon combined with a new hybrid electrostatic and gecko-inspired gripping pad developed by Stanford BDML can manipulate bulky, non-smooth items in the most effort-saving way, which broadens the applications in retail and household environments.
[ Flexiv ]
Thanks Yunfan!
I don't know why anyone would want things to get MORE icy, but if you do for some reason, you can make it happen with a Husky.
Is winter over yet?
[ Clearpath ]
Skip ahead to about 1:20 to see a pair of Gita robots following a Spot following a human like a chain of lil’ robot ducklings.
[ PFF ]
Here are a couple of retro robotics videos: one showing teleoperated humanoids from 2000, and the other showing a robotic guide dog from 1976 (!).
[ Tachi Lab ]
Thanks Fan!
If you missed Chad Jenkins' talk “That Ain’t Right: AI Mistakes and Black Lives” last time, here's another opportunity to watch from Robotics Today, and it includes a top notch panel discussion at the end.
[ Robotics Today ]
Since its founding in 1979, the Robotics Institute (RI) at Carnegie Mellon University has been leading the world in robotics research and education. In the mid 1990s, RI created NREC as the applied R&D center within the Institute with a specific mission to apply robotics technology in an impactful way on real-world applications. In this talk, I will go over numerous R&D programs that I have led at NREC in the past 25 years.
[ CMU ]
#438613 Video Friday: Digit Takes a Hike
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.
It's winter in Oregon, so everything is damp, all the time. No problem for Digit!
Also the case for summer in Oregon.
[ Agility Robotics ]
While other organisms form collective flocks, schools, or swarms for such purposes as mating, predation, and protection, the Lumbriculus variegatus worms are unusual in their ability to braid themselves together to accomplish tasks that unconnected individuals cannot. A new study reported by researchers at the Georgia Institute of Technology describes how the worms self-organize to act as entangled “active matter,” creating surprising collective behaviors whose principles have been applied to help blobs of simple robots evolve their own locomotion.
No, this doesn't squick me out at all, why would it.
[ Georgia Tech ]
A few years ago, we wrote about Zhifeng Huang's jet-foot equipped bipedal robot, and he's been continuing to work on it to the point where it can now step over gaps that are an absolutely astonishing 147% of its leg length.
[ Paper ]
Thanks Zhifeng!
The Inception Drive is a novel, ultra-compact design for an Infinitely Variable Transmission (IVT) that uses nested-pulleys to adjust the gear ratio between input and output shafts. This video shows the first proof-of-concept prototype for a “Fully Balanced” design, where the spinning masses within the drive are completely balanced to reduce vibration, thereby allowing the drive to operate more efficiently and at higher speeds than achievable on an unbalanced design.
As shown in this video, the Inception Drive can change both the speed and direction of rotation of the output shaft while keeping the direction and speed of the input shaft constant. This ability to adjust speed and direction within such a compact package makes the Inception Drive a compelling choice for machine designers in a wide variety of fields, including robotics, automotive, and renewable-energy generation.
[ SRI ]
Robots with kinematic loops are known to have superior mechanical performance. However, due to these loops, their modeling and control are challenging, which prevents more widespread use. In this paper, we describe a versatile Inverse Kinematics (IK) formulation for the retargeting of expressive motions onto mechanical systems with loops.
[ Disney Research ]
Watch Engineered Arts put together one of its Mesmer robots in a not at all uncanny way.
[ Engineered Arts ]
There's been a bunch of interesting research into vision-based tactile sensing recently; here's some from Van Ho at JAIST:
[ Paper ]
Thanks Van!
This is really more of an automated system than a robot, but these little levitating pucks are very very slick.
ACOPOS 6D is based on the principle of magnetic levitation: Shuttles with integrated permanent magnets float over the surface of electromagnetic motor segments. The modular motor segments are 240 x 240 millimeters in size and can be arranged freely in any shape. A variety of shuttle sizes carry payloads of 0.6 to 14 kilograms and reach speeds of up to 2 meters per second. They can move freely in two-dimensional space, rotate and tilt along three axes and offer precise control over the height of levitation. All together, that gives them six degrees of motion control freedom.
[ ACOPOS ]
Navigation and motion control of a robot to a destination are tasks that have historically been performed with the assumption that contact with the environment is harmful. This makes sense for rigid-bodied robots where obstacle collisions are fundamentally dangerous. However, because many soft robots have bodies that are low-inertia and compliant, obstacle contact is inherently safe. We find that a planner that takes into account and capitalizes on environmental contact produces paths that are more robust to uncertainty than a planner that avoids all obstacle contact.
[ CHARM Lab ]
The quadrotor experts at UZH have been really cranking it up recently.
Aerodynamic forces render accurate high-speed trajectory tracking with quadrotors extremely challenging. These complex aerodynamic effects become a significant disturbance at high speeds, introducing large positional tracking errors, and are extremely difficult to model. To fly at high speeds, feedback control must be able to account for these aerodynamic effects in real-time. This necessitates a modelling procedure that is both accurate and efficient to evaluate. Therefore, we present an approach to model aerodynamic effects using Gaussian Processes, which we incorporate into a Model Predictive Controller to achieve efficient and precise real-time feedback control, leading to up to 70% reduction in trajectory tracking error at high speeds. We verify our method by extensive comparison to a state-of-the-art linear drag model in synthetic and real-world experiments at speeds of up to 14 m/s and accelerations beyond 4g.
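As a rough illustration of the idea (not the authors' implementation), a Gaussian process can be fit to the residual acceleration that a nominal quadrotor model fails to predict and then queried inside the controller. The features, kernel, and training data below are placeholder assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Toy training set: body-frame velocity (m/s) -> residual acceleration (m/s^2),
# i.e. the aerodynamic disturbance left over after the nominal rigid-body model.
# In practice these pairs would be extracted from logged high-speed flights.
velocity = rng.uniform(-14.0, 14.0, size=(300, 3))
residual_accel = -0.05 * velocity * np.linalg.norm(velocity, axis=1, keepdims=True)
residual_accel += rng.normal(scale=0.05, size=residual_accel.shape)

# Fit one GP to the multi-output target (an RBF kernel plus a noise term).
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=5.0) + WhiteKernel(),
    normalize_y=True,
)
gp.fit(velocity, residual_accel)

# Inside an MPC loop, the predicted residual would be added to the nominal
# dynamics at each candidate state before optimizing the control inputs.
query_velocity = np.array([[10.0, 0.0, -2.0]])
print(gp.predict(query_velocity))  # predicted aerodynamic disturbance
```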
[ Paper ]
I have not heard much from Harvest Automation over the last couple years and their website was last updated in 2016, but I guess they're selling robots in France, so that's good?
[ Harvest Automation ]
Last year, Clearpath Robotics introduced a ROS package for Spot which enables robotics developers to leverage ROS capabilities out-of-the-box. Here at OTTO Motors, we thought it would be a compelling test case to see just how easy it would be to integrate Spot into our test fleet of OTTO materials handling robots.
[ OTTO Motors ]
Video showcasing recent robotics activities at PRISMA Lab, coordinated by Prof. Bruno Siciliano, at Università di Napoli Federico II.
[ PRISMA Lab ]
Thanks Fan!
State estimation framework developed by the team CoSTAR for the DARPA Subterranean Challenge, where the team achieved 2nd and 1st places in the Tunnel and Urban circuits.
[ Paper ]
Highlights from the 2020 ROS Industrial conference.
[ ROS Industrial ]
Thanks Thilo!
Not robotics, but entertaining anyway. From the CHI 1995 Technical Video Program, “The Tablet Newspaper: a Vision for the Future.”
[ CHI 1995 ]
This week's GRASP on Robotics seminar comes from Allison Okamura at Stanford, on “Wearable Haptic Devices for Ubiquitous Communication.”
Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. We explore the design of a wide array of haptic feedback mechanisms, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality, human-machine communication, and human-human communication.
[ UPenn ]