Tag Archives: perfect

#435583 Soft Self-Healing Materials for Robots ...

If there’s one thing we know about robots, it’s that they break. They break, like, literally all the time. The software breaks. The hardware breaks. The bits that you think could never, ever, ever possibly break end up breaking just when you need them not to break the most, and then you have to try to explain what happened to your advisor who’s been standing there watching your robot fail and then stay up all night fixing the thing that seriously was not supposed to break.

While most of this is just a fundamental characteristic of robots that can’t be helped, the European Commission is funding a project called SHERO (Self HEaling soft RObotics) to try to solve at least some of these physical breakage problems through the use of structural materials that can autonomously heal themselves over and over again.

SHERO is a three-year, €3 million collaboration between Vrije Universiteit Brussel, the University of Cambridge, École Supérieure de Physique et de Chimie Industrielles de la ville de Paris (ESPCI-Paris), and the Swiss Federal Laboratories for Materials Science and Technology (Empa). As the name SHERO suggests, the goal of the project is to develop soft materials that can completely recover from the kinds of damage that robots are likely to suffer in day-to-day operations, as well as the occasional more extreme accident.

Most materials, especially soft materials, are fixable somehow, whether it’s with super glue or duct tape. But fixing things requires a human to first identify that something is broken and then perform a potentially skill-, labor-, time-, and money-intensive repair. SHERO’s soft materials will, eventually, make this entire process autonomous, allowing robots to identify damage in themselves and initiate healing on their own.

Photos: SHERO Project

The damaged robot finger [top] can operate normally after healing itself.

How the self-healing material works
What these self-healing materials can do is really pretty amazing. The researchers are actually developing two different types—the first one heals itself when there’s an application of heat, either internally or externally, which gives some control over when and how the healing process starts. For example, if the robot is handling stuff that’s dirty, you’d want to get it cleaned up before healing it so that dirt doesn’t become embedded in the material. This could mean that the robot either takes itself to a heating station, or it could activate some kind of embedded heating mechanism to be more self-sufficient.

The second kind of self-healing material is autonomous, in that it will heal itself at room temperature without any additional input, and is probably more suitable for relatively minor scrapes and cracks. Here are some numbers about how well the healing works:

Autonomous self-healing polymers do not require heat. They can heal damage at room temperature. Developing soft robotic systems from autonomous self-healing polymers excludes the need of additional heating devices… The healing however takes some time. The healing efficiency after 3 days, 7 days and 14 days is respectively 62 percent, 91 percent and 97 percent.

This material was used to develop a healable soft pneumatic hand. Relevant large cuts can be healed entirely without the need of external heat stimulus. Depending on the size of the damage and even more on the location of damage, the healing takes only seconds or up to a week. Damage on locations on the actuator that are subjected to very small stresses during actuation was healed instantaneously. Larger damages, like cutting the actuator completely in half, took 7 days to heal. But even this severe damage could be healed completely without the need of any external stimulus.

Applications of self-healing robots
Both of these materials can be mixed together, and their mechanical properties can be customized so that the structure that they’re a part of can be tuned to move in different ways. The researchers also plan on introducing flexible conductive sensors into the material, which will help sense damage as well as providing position feedback for control systems. A lot of development will happen over the next few years, and for more details, we spoke with Bram Vanderborght at Vrije Universiteit in Brussels.

IEEE Spectrum: How easy or difficult or expensive is it to produce these materials? Will they add significant cost to robotic grippers?

Bram Vanderborght: They are definitely more expensive materials, but it’s also a matter of production scale. At the moment, we’ve made a few kilograms of the material (enough to make several demonstrators), and the price has already dropped significantly from when we ordered 100 grams of the material in the first phase of the project. So the cost of the gripper will probably be higher [than a regular gripper], but you won’t need to replace it as often as other grippers that wear out, so it can be an advantage.

Moreover, because of the way the material is 3D printed, the surface is smoother and airtight (so no post-processing is required to make it airtight). The smooth surface also helps avoid contamination in food handling, for example.

In commercial or industrial applications, gradual fatigue seems to be a more common issue than more abrupt trauma like cuts. How well does the self-healing work to improve durability over long periods of time?

We did not test for gradual fatigue over very long times. But both macroscopic and microscopic damage can be healed. So hopefully it can provide an answer here as well.

Image: SHERO Project

After developing a self-healing robot gripper, the researchers plan to use similar materials to build parts that can be used as the skeleton of robots, allowing them to repair themselves on a regular basis.

How much does the self-healing capability restrict the material properties? What are the limits for softness or hardness or smoothness or other characteristics of the material?

Typically, the mechanical properties of networked polymers are much better than those of thermoplastics. Our material is a networked polymer, but one in which the crosslinks are reversible. We can change quite a lot of parameters in the design of the materials, so we can develop very stiff materials (fracture strain of 1.24 percent) and very elastic ones (fracture strain of 450 percent). The big advantage of our material is that we can mix it to get intermediate properties. Moreover, at the interface between materials with different mechanical properties, we have the same chemical bonds, so the interface is perfect. With other materials, you may need to glue them together, which creates local stresses and a weak spot.

When the material heals itself, is it less structurally sound in that spot? Can it heal damage that happens to the same spot over and over again?

In theory we can heal it an infinite number of times. When the wound is not perfectly aligned, of course, that spot will become weaker. Also, temperatures that are too high lead to irreversible bonds, and impurities lead to weak spots.

Besides grippers and skins, what other potential robotics applications would this technology be useful for?

Most self-healing materials available now are used for coatings. What we are developing are structural components, so the mechanical properties of the material need to be good enough for such applications. Perhaps part of the skeleton of the robot can be made from such materials to make it lighter, since it can be designed for regular repair. And under exceptional loads it breaks and can be repaired, like our human body.

[ SHERO Project ]

Posted in Human Robots

#435522 Harvard’s Smart Exo-Shorts Talk to the ...

Exosuits don’t generally scream “fashionable” or “svelte.” Take the mind-controlled robotic exoskeleton that allowed a paraplegic man to kick off the World Cup back in 2014. Is it cool? Hell yeah. Is it practical? Not so much.

Yapping about wearability might seem childish when the technology already helps people with impaired mobility move around dexterously. But the lesson of the ill-fated Google Glass, whose use demanded an awkward, dorky head tilt and conspicuous voice commands, clearly shows that wearable computer assistants can’t just work technologically—they have to look natural and let the user behave as usual. They have to, in a sense, disappear.

To Dr. Jose Pons at the Legs + Walking Ability Lab in Chicago, exosuits need three main selling points to make it in the real world. One, they have to physically interact with their wearer and seamlessly deliver assistance when needed. Two, they should cognitively interact with the host to guide and control the robot at all times. Finally, they need to feel like a second skin—move with the user without adding too much extra mass or reducing mobility.

This week, a US-Korean collaboration delivered the whole shebang in a Lululemon-style skin-hugging package combined with a retro waist pack. The portable exosuit, weighing only 11 pounds, looks like a pair of spandex shorts but can support the wearer’s hip movement when needed. Unlike their predecessors, the shorts are embedded with sensors that let them know when the wearer is walking versus running by analyzing gait.

Switching between the two movement modes may not seem like much, but what naturally comes to our brains doesn’t translate directly to smart exosuits. “Walking and running have fundamentally different biomechanics, which makes developing devices that assist both gaits challenging,” the team said. Their algorithm, computed in the cloud, allows the wearer to easily switch between both, with the shorts providing appropriate hip support that makes the movement experience seamless.

To Pons, who was not involved in the research but wrote a perspective piece, the study is an exciting step towards future exosuits that will eventually disappear under the skin—that is, implanted neural interfaces to control robotic assistance or activate the user’s own muscles.

“It is realistic to think that we will witness, in the next several years…robust human-robot interfaces to command wearable robotics based on…the neural code of movement in humans,” he said.

A “Smart” Exosuit Hack
There are a few ways you can hack a human body to move with an exosuit. One is using implanted electrodes inside the brain or muscles to decipher movement intent. With heavy practice, a neural implant can help paralyzed people walk again or dexterously move external robotic arms. But because the technique requires surgery, it’s not an immediate sell for people who experience low mobility because of aging or low muscle tone.

The other approach is to look to biophysics. Rather than decoding neural signals that control movement, here the idea is to measure gait and other physical positions in space to decipher intent. As you can probably guess, accurately deciphering user intent isn’t easy, especially when the wearable tries to accommodate multiple gaits. But the gains are many: there’s no surgery involved, and the wearable is low in energy consumption.

Double Trouble
The authors decided to tackle an everyday situation. You’re walking to catch the train to work, realize you’re late, and immediately start sprinting.

That seemingly easy conversion hides a complex switch in biomechanics. When you walk, your legs act like an inverted pendulum, with your center of mass vaulting over them in a predictable way. When you run, however, the legs move more like a spring-loaded system, and the joints involved in the motion differ from those in a casual stroll. Engineering an assistive wearable for either gait alone is relatively simple; making one for both is exceedingly hard.

Led by Dr. Conor Walsh at Harvard University, the team started with an intuitive idea: assisted walking and running requires specialized “actuation” profiles tailored to both. When the user is moving in a way that doesn’t require assistance, the wearable needs to be out of the way so that it doesn’t restrict mobility. A quick analysis found that assisting hip extension has the largest impact, because it’s important to both gaits and doesn’t add mass to the lower legs.

Building on that insight, the team made a waist belt connected to two thigh wraps, similar to a climbing harness. Two electrical motors embedded inside the device connect the waist belt to other components through a pulley system to help the hip joints move. The whole contraption weighed about 11 lbs and didn’t obstruct natural movement.

Next, the team programmed two separate support profiles for walking and running. The goal was to reduce the “metabolic cost” of both movements, so that the wearer expends as little energy as possible. To switch between the two programs, they used a cloud-based classification algorithm that measures changes in energy fluctuation to figure out which mode—running or walking—the user is in.
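
Neither the article nor the study quote includes the classifier itself, but the biomechanical cue behind it can be sketched in a few lines. Below is a minimal, illustrative Python version that leans on a textbook fact: during walking the body’s center of mass exchanges potential and kinetic energy like a pendulum, so the two fluctuate out of phase, while during running they fluctuate in phase. The function names, the 70 kg default mass, and the idea of estimating center-of-mass height and speed from the suit’s sensors are assumptions for the sketch, not details of the study’s cloud algorithm.

```python
import numpy as np

def classify_gait(com_height, com_speed, mass=70.0, g=9.81):
    """Toy walking-vs-running classifier (illustrative only, not the study's algorithm).

    com_height : vertical center-of-mass positions (m) over roughly one gait cycle
    com_speed  : forward center-of-mass speeds (m/s) over the same window
    """
    pe = mass * g * np.asarray(com_height)        # potential energy of the body, in joules
    ke = 0.5 * mass * np.asarray(com_speed) ** 2  # kinetic energy of the body, in joules
    # Walking: pendulum-like exchange, so the two signals move out of phase.
    # Running: spring-like bouncing, so they rise and fall together.
    r = np.corrcoef(pe, ke)[0, 1]
    return "running" if r > 0 else "walking"
```

A real controller has to cope with sensor noise, inclines, and mid-stride transitions, which is where the reported near-perfect classification becomes impressive.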

Smart Booster
Initial trials on treadmills were highly positive. Six male volunteers with similar age and build donned the exosuit and either ran or walked on the treadmill at varying inclines. The algorithm performed perfectly at distinguishing between the two gaits in all conditions, even at steep angles.

An outdoor test with eight volunteers also proved the algorithm nearly perfect. Even on uneven terrain, only two steps out of all test trials were misclassified. In an additional trial on mud or snow, the algorithm performed just as well.

“The system allows the wearer to use their preferred gait for each speed,” the team said.

Software excellence translated to performance. Testing found that the exosuit reduced the metabolic cost of walking by over nine percent and of running by four percent. It may not sound like much, but that degree of improvement is meaningful in athletic performance. Putting things into perspective, the team said, the metabolic rate reduction during walking is similar to taking 16 pounds off at the waist.

The Wearable Exosuit Revolution
The study’s lightweight exoshorts are hardly the only players in town. Back in 2017, SRI International’s spin-off, Superflex, engineered an Aura suit to support mobility in the elderly. The Aura used a different mechanism: rather than a pulley system, it incorporated a type of smart material that contracts in a manner similar to human muscles when zapped with electricity.

Embedded with myriad motion sensors, including accelerometers and gyroscopes, Aura got its smarts from mini-computers that measured how fast the wearer was moving and tracked the user’s posture. The data were integrated and processed locally inside hexagon-shaped computing pods near the thighs and upper back. The pods also acted as the control center, sending electrical zaps to give the wearer a boost when needed.

Around the same time, a collaboration between Harvard’s Wyss Institute and ReWalk Robotics introduced a fabric-based wearable robot to assist a wearer’s legs for balance and movement. Meanwhile, a Swiss team coated normal fabric with electroactive material to weave soft, pliable artificial “muscles” that move with the skin.

Although health support is the current goal, the military is obviously interested in similar technologies to enhance soldiers’ physicality. Superflex’s Aura, for example, was originally inspired by technology born from DARPA’s Warrior Web Program, which aimed to reduce a soldier’s mechanical load.

That said, military gear has a long history of trickling down to consumer use. Just as camouflage, cargo pants, and GORE-TEX made their way into the consumer ecosphere, it’s not hard to imagine your local Target eventually stocking intelligent exowear.

Image and Video Credit: Wyss Institute at Harvard University.

Posted in Human Robots

#435520 These Are the Meta-Trends Shaping the ...

Life is pretty different now than it was 20 years ago, or even 10 years ago. It’s sort of exciting, and sort of scary. And hold onto your hat, because it’s going to keep changing—even faster than it already has been.

The good news is, maybe there won’t be too many big surprises, because the future will be shaped by trends that have already been set in motion. According to Singularity University co-founder and XPRIZE founder Peter Diamandis, a lot of these trends are unstoppable—but they’re also pretty predictable.

At SU’s Global Summit, taking place this week in San Francisco, Diamandis outlined some of the meta-trends he believes are key to how we’ll live our lives and do business in the (not too distant) future.

Increasing Global Abundance
Resources are becoming more abundant all over the world, and fewer people are seeing their lives limited by scarcity. “It’s hard for us to realize this as we see crisis news, but what people have access to is more abundant than ever before,” Diamandis said. Products and services are becoming cheaper and thus available to more people, and having more resources then enables people to create more, thus producing even more resources—and so on.

Need evidence? The proportion of the world’s population living in extreme poverty is currently lower than it’s ever been. The average human life expectancy is longer than it’s ever been. The costs of day-to-day needs like food, energy, transportation, and communications are on a downward trend.

Take energy. In most of the world, though its cost is decreasing, it’s still a fairly precious commodity; we turn off our lights and our air conditioners when we don’t need them (ideally, both to save money and to avoid wastefulness). But the cost of solar energy has plummeted, the storage capacity of batteries is improving, and solar technology is steadily getting more efficient. Bids for new solar power plants in the past few years have broken each other’s records for lowest cost per kilowatt hour.

“We’re not far from a penny per kilowatt hour for energy from the sun,” Diamandis said. “And if you’ve got energy, you’ve got water.” Desalination, for one, will be much more widely feasible once the cost of the energy needed for it drops.

Knowledge is perhaps the most crucial resource that’s going from scarce to abundant. All the world’s knowledge is now at the fingertips of anyone who has a mobile phone and an internet connection—and the number of people connected is only going to grow. “Everyone is being connected at gigabit connection speeds, and this will be transformative,” Diamandis said. “We’re heading towards a world where anyone can know anything at any time.”

Increasing Capital Abundance
It’s not just goods, services, and knowledge that are becoming more plentiful. Money is, too—particularly money for business. “There’s more and more capital available to invest in companies,” Diamandis said. As a result, more people are getting the chance to bring their world-changing ideas to life.

Venture capital investments reached a new record of $130 billion in 2018, up from $84 billion in 2017—and that’s just in the US. Globally, VC funding grew 21 percent from 2017 to a total of $207 billion in 2018.

Through crowdfunding, any person in any part of the world can present their idea and ask for funding. That funding can come in the form of a loan, an equity investment, a reward, or an advanced purchase of the proposed product or service. “Crowdfunding means it doesn’t matter where you live, if you have a great idea you can get it funded by people from all over the world,” Diamandis said.

All this is making a difference; the number of unicorns—privately-held startups valued at over $1 billion—currently stands at an astounding 360.

One of the reasons why the world is getting better, Diamandis believes, is because entrepreneurs are trying more crazy ideas—not ideas that are reasonable or predictable or linear, but ideas that seem absurd at first, then eventually end up changing the world.

Everyone and Everything, Connected
As already noted, knowledge is becoming abundant thanks to the proliferation of mobile phones and wireless internet; everyone’s getting connected. In the next decade or sooner, connectivity will reach every person in the world. 5G is being tested and offered for the first time this year, and companies like Google, SpaceX, OneWeb, and Amazon are racing to develop global satellite internet constellations, whether by launching 12,000 satellites, as SpaceX’s Starlink is doing, or by floating giant balloons into the stratosphere like Google’s Project Loon.

“We’re about to reach a period of time in the next four to six years where we’re going from half the world’s people being connected to the whole world being connected,” Diamandis said. “What happens when 4.2 billion new minds come online? They’re all going to want to create, discover, consume, and invent.”

And it doesn’t stop at connecting people. Things are becoming more connected too. “By 2020 there will be over 20 billion connected devices and more than one trillion sensors,” Diamandis said. By 2030, those projections go up to 500 billion devices and 100 trillion sensors. Think about it: there are home devices like refrigerators, TVs, dishwashers, digital assistants, and even toasters. There’s city infrastructure, from stoplights to cameras to public transportation like buses and bike sharing. It’s all getting smart and connected.

Soon we’ll be adding autonomous cars to the mix, and an unimaginable glut of data to go with them. Every turn, every stop, every acceleration will be a data point. Some cars already collect over 25 gigabytes of data per hour, Diamandis said, and car data is projected to generate $750 billion of revenue by 2030.

“You’re going to start asking questions that were never askable before, because the data is now there to be mined,” he said.

Increasing Human Intelligence
Indeed, we’ll have data on everything we could possibly want data on. We’ll also soon have what Diamandis calls just-in-time education, where 5G combined with artificial intelligence and augmented reality will allow you to learn something in the moment you need it. “It’s not going and studying, it’s where your AR glasses show you how to do an emergency surgery, or fix something, or program something,” he said.

We’re also at the beginning of massive investments in research working towards connecting our brains to the cloud. “Right now, everything we think, feel, hear, or learn is confined in our synaptic connections,” Diamandis said. What will it look like when that’s no longer the case? Companies like Kernel, Neuralink, Open Water, Facebook, Google, and IBM are all investing billions of dollars into brain-machine interface research.

Increasing Human Longevity
One of the most important problems we’ll use our newfound intelligence to solve is that of our own health and mortality, making 100 years old the new 60—then eventually, 120 or 150.

“Our bodies were never evolved to live past age 30,” Diamandis said. “You’d go into puberty at age 13 and have a baby, and by the time you were 26 your baby was having a baby.”

Seeing how drastically our lifespans have changed over time makes you wonder what aging even is; is it natural, or is it a disease? Many companies are treating it as one, and using technologies like senolytics, CRISPR, and stem cell therapy to try to cure it. Scaffolds of human organs can now be 3D printed then populated with the recipient’s own stem cells so that their bodies won’t reject the transplant. Companies are testing small-molecule pharmaceuticals that can stop various forms of cancer.

“We don’t truly know what’s going on inside our bodies—but we can,” Diamandis said. “We’re going to be able to track our bodies and find disease at stage zero.”

Chins Up
The world is far from perfect—that’s not hard to see. What’s less obvious but just as true is that we’re living in an amazing time. More people are coming together, and they have more access to information, and that information moves faster, than ever before.

“I don’t think any of us understand how fast the world is changing,” Diamandis said. “Most people are fearful about the future. But we should be excited about the tools we now have to solve the world’s problems.”

Image Credit: spainter_vfx / Shutterstock.com

Posted in Human Robots

#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience will positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to a nub located slightly behind our ears called the inferotemporal (IT) cortex, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” because they fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.

That’s not trivial. The complex neural connections that lead to increasing abstraction of what we see into what we think we see—what we perceive—is a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures are based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity, and married them to 30 new images to “evolve” the next generation of images. After 250 generations, the technique, XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
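
The loop described above is a standard evolutionary search, and a compact sketch makes the mechanics concrete. The population numbers (40 images, top 10 kept, 30 offspring, 250 generations) come from the passage; everything else, including the latent dimension, the blend-and-mutate breeding step, and the `generator` and `firing_rate` callables standing in for the deep generative network and the electrode recording, is an assumption for illustration rather than the actual XDREAM code.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_images(generator, firing_rate, latent_dim=4096,
                  pop_size=40, n_keep=10, n_new=30, generations=250):
    """XDREAM-style sketch: keep the images that drive the recorded neuron hardest,
    breed mutated offspring from them, and repeat for many generations.

    generator(z)     -> image, stand-in for the deep generative network
    firing_rate(img) -> float, stand-in for the neuron's measured response
    """
    population = rng.normal(size=(pop_size, latent_dim))       # initial set of 40 latent codes
    for _ in range(generations):
        scores = np.array([firing_rate(generator(z)) for z in population])
        parents = population[np.argsort(scores)[-n_keep:]]     # top 10 images survive
        children = []
        for _ in range(n_new):                                  # 30 new offspring per generation
            a, b = parents[rng.integers(n_keep, size=2)]        # pick two parents
            children.append((a + b) / 2 + 0.1 * rng.normal(size=latent_dim))  # blend + mutate
        population = np.vstack([parents, children])             # back to a population of 40
    return parents  # latent codes of the final, most stimulating images
```

The key point is that no gradient information from the brain is needed; the neuron’s measured firing rate alone steers the search.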

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.

“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.

“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.

Although IT neurons don’t seem to use a simple letter alphabet, they do seem to rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
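
Driving a modeled neuron into a desired state is, at its core, activation maximization: start from noise and adjust the pixels until the model’s predicted response looks the way you want. Here is a minimal sketch of that idea in PyTorch, where `model` is any differentiable stand-in for the recorded V4 population; the image size, learning rate, and the simple penalty on the other units are assumptions for illustration, not the method published in Science.

```python
import torch

def synthesize_controller_image(model, target_unit, steps=200, lr=0.05):
    """Gradient-ascent sketch of 'driving a neuron into a desired state'.

    model : any differentiable network whose output vector stands in for the
            recorded V4 population (a placeholder, not the network from the paper)
    target_unit : index of the unit we want to push toward maximal activity
    """
    img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        responses = model(img)[0]                 # predicted activity of the population
        target = responses[target_unit]
        others = responses.sum() - target
        loss = -(target - 0.1 * others)           # raise the target, gently suppress the rest
        loss.backward()
        optimizer.step()
    return img.detach()                           # the synthesized "controller" image
```

In the actual experiments, images synthesized this way were then shown to the monkeys to test whether the biological neurons followed the model’s prediction.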

It suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” that we can’t really describe, but that the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team concluded that today’s ANNs already have a degree of understanding and generalization. The results could help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both in our computational models of sensory processing and in our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

Posted in Human Robots

#435043 This Week’s Awesome Stories From ...

NANOTECHNOLOGY
The Microbots Are on Their Way
Kenneth Chang | The New York Times
“Like Frankenstein, Marc Miskin’s robots initially lie motionless. Then their limbs jerk to life. But these robots are the size of a speck of dust. Thousands fit side-by-side on a single silicon wafer similar to those used for computer chips, and, like Frankenstein coming to life, they pull themselves free and start crawling.”

FUTURE
Why the ‘Post-Natural’ Age Could Be Strange and Beautiful
Lauren Holt | BBC
“As long as humans have existed, we have been influencing our planet’s flora and fauna. So, if humanity continues to flourish far into the future, how will nature change? And how might this genetic manipulation affect our own biology and evolutionary trajectory? The short answer: it will be strange, potentially beautiful and like nothing we’re used to.”

3D PRINTING
Watch This Wild 3D-Printed Lung Air Sac Breathe
Amanda Kooser | CNET
“A research team led by bioengineers at the University of Washington and Rice University developed an open-source technique for bioprinting tissues ‘with exquisitely entangled vascular networks similar to the body’s natural passageways for blood, air, lymph and other vital fluids.’”

SENSORS
A New Camera Can Photograph You From 45 Kilometers Away
Emerging Technology from the arXiv | MIT Technology Review
“Conventional images taken through the telescope show nothing other than noise. But the new technique produces images with a spatial resolution of about 60 cm, which resolves building windows.”

BIOTECH
The Search for the Kryptonite That Can Stop CRISPR
Antonio Regalado | MIT Technology Review
“CRISPR weapons? We’ll leave it to your imagination exactly what one could look like. What is safe to say, though, is that DARPA has asked Doudna and others to start looking into prophylactic treatments or even pills you could take to stop gene editing, just the way you can swallow antibiotics if you’ve gotten an anthrax letter in the mail.”

ROBOTICS
The Holy Grail of Robotics: Inside the Quest to Build a Mechanical Human Hand
Luke Dormehl | Digital Trends
“For real-life roboticists, building the perfect robot hand has long been the Holy Grail. It is the hardware yin to the software yang of creating an artificial mind. Seeking out the ultimate challenge, robotics experts gravitated to recreating what is one of the most complicated and beautiful pieces of natural engineering found in the human body.”

Image Credit: Maksim Sansonov / Unsplash

Posted in Human Robots