Tag Archives: gestures
#439042 How Scientists Used Ultrasound to Read ...
Thanks to neural implants, mind reading is no longer science fiction.
As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!
Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through its protective membrane and into brain tissue. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.
Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.
In monkeys, the technology could reliably predict their eye movements and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If adapted for humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.
“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.
To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.
Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.
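For the curious, the core arithmetic is simple enough to fit in a few lines of Python—a minimal sketch, with the echo timing invented for illustration (1,540 m/s is a textbook average for the speed of sound in soft tissue):

```python
# Illustrative pulse-echo ranging: a transducer fires a pulse and times the
# echo; the depth of a tissue boundary follows from the round-trip travel time.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a standard average for soft tissue

def echo_depth_m(round_trip_time_s: float) -> float:
    """Depth of the reflecting boundary: the pulse travels there and back,
    so divide the total path (speed * time) by two."""
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2.0

# Example: an echo arriving 65 microseconds after the pulse (invented timing)
print(f"{echo_depth_m(65e-6) * 100:.1f} cm")  # ~5.0 cm deep
```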
Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.
“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.
There are plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?
One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.
The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.
The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, a region on the “motor” side of the brain that plans movement. If right now you’re thinking about scrolling down this page, that’s the region already activated, before your fingers actually perform the movement.
Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either left or right, then flashed. The monkeys’ task was to flick their eyes to the second dot. It’s something that seems easy for us, but requires sophisticated brain computation.
The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.
Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech
As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.
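To give a flavor of what single-trial decoding looks like in code, here’s a minimal sketch—a generic dimensionality-reduction-plus-linear-classifier pattern on invented data, not the authors’ actual pipeline:

```python
# A minimal sketch of single-trial decoding: reduce each ultrasound image
# to a few components, train a linear classifier on past trials, then
# predict direction from one new trial. The data here is random noise,
# included only to show the shapes involved.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_pixels = 200, 64 * 64          # flattened images (invented size)
X = rng.normal(size=(n_trials, n_pixels))  # one ultrasound image per trial
y = rng.integers(0, 2, size=n_trials)      # 0 = left, 1 = right

decoder = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
decoder.fit(X[:-1], y[:-1])                # train on all but the last trial

single_trial = X[-1:]                      # one new trial, no averaging
print("predicted:", "left" if decoder.predict(single_trial)[0] == 0 else "right")
```

The point is simply that once the decoder is fit, prediction needs only one trial’s worth of data—no averaging over repeats.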
Even more impressive is the resolution. Sound waves might seem too blunt an instrument for precision work, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—the width of roughly 10 neurons.
A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind: don’t. The new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. However, the brain itself is spared. This means that, compared to electrodes, ultrasound could cause less damage and keep reading the mind far longer than anything currently possible.
There are downsides. Focused ultrasound is far younger than electrode-based neural implants, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech requires a wire linking the device to a computer, which is off-putting to many people and stands in the way of widespread adoption. Add to that an inherent downside of any blood-flow-based signal: it lags behind electrical recordings by roughly two seconds.
All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.
“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.
Image Credit: Free-Photos / Pixabay
#438807 Visible Touch: How Cameras Can Help ...
The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs.
To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.
A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.”
Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to that approach is that, even to achieve coarse spatial resolution, many sensors are needed in a small area.
However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of view.
“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.
This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device.
The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users would be able to cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Thus, even though it’s prevented from capturing a high-resolution image of the user or their surrounding environment, using the right kind of training datasets, the robot can continue to monitor some kinds of non-tactile activities.
In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.
As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
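To make that recipe concrete, here’s a minimal transfer-learning sketch in the spirit of what Hoffman describes—freeze a network pre-trained on general images and retrain only its final layer. The model choice, batch shapes, and training details are illustrative assumptions, not the paper’s exact setup:

```python
# Transfer learning sketch: take a network pre-trained on general images,
# freeze its layers, and retrain only the final layer on a small,
# task-specific dataset. Six classes mirror the gestures in the study.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():       # freeze the pre-trained layers
    param.requires_grad = False

num_gestures = 6                       # e.g. touch (1 or 2 hands), point, hug, punch
model.fc = nn.Linear(model.fc.in_features, num_gestures)  # new, trainable head

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on an invented batch standing in for shadow images:
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_gestures, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Because the frozen layers already extract generic visual features, a modest set of shadow images goes much further than it would if the network were trained from scratch.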
#437896 Solar-based Electronic Skin Generates ...
Replicating the human sense of touch is complicated—electronic skins need to be flexible, stretchable, and sensitive to temperature, pressure, and texture; they need to be able to read biological data and provide electronic readouts. On top of all that, powering electronic skin for continuous, real-time use is a big challenge.
To address this, researchers from Glasgow University have developed an energy-generating e-skin made out of miniaturized solar cells, without dedicated touch sensors. The solar cells not only generate their own power—and some surplus—but also provide tactile capabilities for touch and proximity sensing. An early-view paper of their findings was published in IEEE Transactions on Robotics.
When exposed to a light source, the solar cells on the e-skin generate energy. If a cell is shadowed by an approaching object, the intensity of the light—and therefore the energy generated—drops, reaching zero when the cell makes contact with the object, confirming touch. In proximity mode, the light intensity tells you how far away the object is from the cell. “In real time, you can then compare the light intensity…and after calibration find out the distances,” says Ravinder Dahiya of the Bendable Electronics and Sensing Technologies (BEST) Group, James Watt School of Engineering, University of Glasgow, where the study was carried out. For better results, the team paired infrared LEDs with the solar cells for proximity sensing.
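Here’s a toy sketch of that calibrate-then-compare idea—the numbers are invented, but the pattern holds: record the cell’s output at known distances once, then invert the lookup at run time:

```python
# Calibrate-then-compare proximity sensing, in miniature: map measured
# light intensity back to distance through a calibration curve recorded
# in advance. All values below are invented for illustration.
import numpy as np

# Calibration table: approach distance (cm) vs. normalized cell output.
cal_distance_cm = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
cal_intensity   = np.array([0.0, 0.35, 0.60, 0.85, 1.0])  # 0 = touching

def estimate_distance_cm(intensity: float) -> float:
    """Interpolate the calibration curve to estimate object distance."""
    return float(np.interp(intensity, cal_intensity, cal_distance_cm))

print(estimate_distance_cm(0.0))   # 0.0 -> contact, i.e. touch confirmed
print(estimate_distance_cm(0.5))   # ~1.6 -> object between 1 and 2 cm away
```

Contact detection and proximity sensing fall out of the same curve: as the object closes in, its shadow drags the cell’s output toward zero.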
To demonstrate their concept, the researchers wrapped a generic 3D-printed robotic hand in their solar skin, which was then recorded interacting with its environment. The proof-of-concept tests showed an energy surplus of 383.3 mW from the palm of the robotic arm. “The eSkin could generate more than 100 W if present over the whole body area,” they reported in their paper.
“If you look at autonomous, battery-powered robots, putting an electronic skin [that] is consuming energy is a big problem because then it leads to reduced operational time,” says Dahiya. “On the other hand, if you have a skin which generates energy, then…it improves the operational time because you can continue to charge [during operation].” In essence, he says, they turned a challenge—how to power the large surface area of the skin—into an opportunity—by turning it into an energy-generating resource.
Dahiya envisages numerous applications for BEST’s innovative e-skin, given its material-integrated sensing capabilities, apart from the obvious use in robotics. For instance, in prosthetics: “[As] we are using [a] solar cell as a touch sensor itself…we are also [making it] less bulkier than other electronic skins.” This, he adds, will help create prosthetics of optimal weight and size, making life easier for prosthetics users. “If you look at electronic skin research, the real action starts after it makes contact… Solar skin is a step ahead, because it will start to work when the object is approaching…[and] have more time to prepare for action.” This could effectively reduce the time lag that is often seen in brain–computer interfaces.
There are also possibilities in the automotive sector, particularly in electric and interactive vehicles. A car covered with solar e-skin, thanks to its proximity-sensing capabilities, would be able to “see” an approaching obstacle or a person. It isn’t “seeing” in the biological sense, Dahiya clarifies, but from the point of view of a machine. And the skin could be integrated with other objects, not just cars, for a variety of uses. “Gestures can be recognized as well…[which] could be used for gesture-based control…in gaming or in other sectors.”
In the lab, tests were conducted with a single source of white light at 650 lux, but Dahiya sees interesting possibilities if the e-skin could differentiate between multiple light sources. “We are exploring different AI techniques [for that],” he says, “processing the data in an innovative way [so] that we can identify the directions of the light sources as well as the object.”
The BEST team’s achievement brings us closer to a flexible, self-powered, cost-effective electronic skin that can touch as well as “see.” At the moment, however, there are still some challenges. One of them is flexibility. In their prototype, they used commercial solar cells made of amorphous silicon, each 1 cm × 1 cm. “They are not flexible, but they are integrated on a flexible substrate,” Dahiya says. “We are currently exploring nanowire-based solar cells…[with which] we hope to achieve good performance in terms of energy as well as sensing functionality.” Another shortcoming is what Dahiya calls “the integration challenge”—how to make the solar skin work with different materials.
#437373 Microsoft’s New Deepfake Detector Puts ...
The upcoming US presidential election seems set to be something of a mess—to put it lightly. Covid-19 will likely deter millions from voting in person, and mail-in voting isn’t shaping up to be much more promising. This all comes at a time when political tensions are running higher than they have in decades, issues that shouldn’t be political (like mask-wearing) have become highly politicized, and Americans are dramatically divided along party lines.
So the last thing we need right now is yet another wrench in the spokes of democracy, in the form of disinformation; we all saw how that played out in 2016, and it wasn’t pretty. For the record, disinformation purposely misleads people, while misinformation is simply inaccurate, but without malicious intent. While there’s not a ton tech can do to make people feel safe at crowded polling stations or up the Postal Service’s budget, tech can help with disinformation, and Microsoft is trying to do so.
On Tuesday the company released two new tools designed to combat disinformation, described in a blog post by VP of Customer Security and Trust Tom Burt and Chief Scientific Officer Eric Horvitz.
The first is Microsoft Video Authenticator, which is made to detect deepfakes. In case you’re not familiar with this wicked byproduct of AI progress, “deepfakes” refers to audio or visual files made using artificial intelligence that can manipulate people’s voices or likenesses to make it look like they said things they didn’t. Editing a video to string together words and form a sentence someone didn’t say doesn’t count as a deepfake; though there’s manipulation involved, you don’t need a neural network and you’re not generating any original content or footage.
The Authenticator analyzes videos or images and tells users the percentage chance that they’ve been artificially manipulated. For videos, the tool can even analyze individual frames in real time.
Deepfake videos are made by feeding hundreds of hours of video of someone into a neural network, “teaching” the network the minutiae of the person’s voice, pronunciation, mannerisms, gestures, etc. It’s like when you do an imitation of your annoying coworker from accounting, complete with mimicking the way he makes every sentence sound like a question and the way his eyes widen when he talks about complex spreadsheets. You’ve spent hours—no, months—in his presence and have his personality quirks down pat. An AI algorithm that produces deepfakes needs to learn those same quirks, and more, about whoever the creator’s target is.
Given enough real information and examples, the algorithm can then generate its own fake footage, with deepfake creators using computer graphics and manually tweaking the output to make it as realistic as possible.
The scariest part? To make a deepfake, you don’t need a fancy computer or even a ton of knowledge about software. There are open-source programs people can access for free online, and as far as finding video footage of famous people—well, we’ve got YouTube to thank for how easy that is.
Microsoft’s Video Authenticator can detect the blending boundary of a deepfake and subtle fading or greyscale elements that the human eye may not be able to see.
In the blog post, Burt and Horvitz point out that as time goes by, deepfakes are only going to get better and harder to detect; after all, they’re generated by neural networks that are continuously learning and improving.
Microsoft’s counter-tactic is to come in from the opposite angle, that is, being able to confirm beyond doubt that a video, image, or piece of news is real (I mean, can McDonald’s fries cure baldness? Did a seal slap a kayaker in the face with an octopus? Never has it been so imperative that the world know the truth).
A tool built into Microsoft Azure, the company’s cloud computing service, lets content producers add digital hashes and certificates to their content, and a reader (which can be used as a browser extension) checks the certificates and matches the hashes to indicate the content is authentic.
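The underlying pattern—hash the content, sign the hash, let the reader re-hash and verify—is standard cryptography. Here’s a minimal sketch of the general idea (using the third-party cryptography library; this is not Microsoft’s actual Azure implementation):

```python
# Hash-and-certify, in miniature: the producer hashes the content and signs
# the hash; the reader re-hashes what it received and checks the signature
# against the producer's public key. Any altered byte breaks the match.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

content = b"...video bytes..."

# Producer side: hash the content, then sign the hash.
private_key = Ed25519PrivateKey.generate()
digest = hashlib.sha256(content).digest()
signature = private_key.sign(digest)

# Reader side: re-hash the received bytes and verify the signature.
public_key = private_key.public_key()
received = content                       # swap in tampered bytes to see it fail
try:
    public_key.verify(signature, hashlib.sha256(received).digest())
    print("content authentic")
except InvalidSignature:
    print("content altered or certificate mismatch")
```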
Finally, Microsoft also launched an interactive “Spot the Deepfake” quiz it developed in collaboration with the University of Washington’s Center for an Informed Public, deepfake detection company Sensity, and USA Today. The quiz is intended to help people “learn about synthetic media, develop critical media literacy skills, and gain awareness of the impact of synthetic media on democracy.”
The impact Microsoft’s new tools will have remains to be seen—but hey, we’re glad they’re trying. And they’re not alone; Facebook, Twitter, and YouTube have all taken steps to ban and remove deepfakes from their sites. The AI Foundation’s Reality Defender uses synthetic media detection algorithms to identify fake content. There’s even a coalition of big tech companies teaming up to try to fight election interference.
One thing is for sure: between a global pandemic, widespread protests and riots, mass unemployment, a hobbled economy, and the disinformation that’s remained rife through it all, we’re going to need all the help we can get to make it through not just the election, but the rest of the conga-line-of-catastrophes year that is 2020.
Image Credit: Darius Bashar on Unsplash