Tag Archives: innovation

#431906 Low-Cost Soft Robot Muscles Can Lift 200 ...

Jerky mechanical robots are staples of science fiction, but to seamlessly integrate into everyday life they’ll need the precise yet powerful motor control of humans. Now scientists have created a new class of artificial muscles that could soon make that a reality.
The advance is the latest breakthrough in the field of soft robotics. Scientists are increasingly designing robots using soft materials that more closely resemble biological systems, which can be more adaptable and better suited to working in close proximity to humans.
One of the main challenges has been creating soft components that match the power and control of the rigid actuators that drive mechanical robots—things like motors and pistons. Now researchers at the University of Colorado Boulder have built a series of low-cost artificial muscles—as little as 10 cents per device—from soft plastic pouches filled with electrically insulating liquids; when a voltage is applied, the devices contract with the force and speed of mammalian skeletal muscle.

Three different designs of the so-called hydraulically amplified self-healing electrostatic (HASEL) actuators were detailed in two papers in the journals Science and Science Robotics last week. They could carry out a variety of tasks, from gently picking up delicate objects like eggs or raspberries to lifting objects many times their own weight, such as a gallon of water, at rapid repetition rates.
“We draw our inspiration from the astonishing capabilities of biological muscle,” Christoph Keplinger, an assistant professor at CU Boulder and senior author of both papers, said in a press release. “Just like biological muscle, HASEL actuators can reproduce the adaptability of an octopus arm, the speed of a hummingbird and the strength of an elephant.”
The artificial muscles work by applying a voltage to hydrogel electrodes on either side of pouches filled with liquid insulators, which can be as simple as canola oil. This creates an attraction between the two electrodes, pulling them together and displacing the liquid. This causes a change of shape that can push or pull levers, arms or any other articulated component.
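For a rough feel of the physics involved, the squeezing force can be approximated with the textbook electrostatic (Maxwell) pressure on a parallel-plate capacitor. The sketch below is purely illustrative; the voltage, gap, and permittivity values are assumptions for the sake of arithmetic, not numbers from the papers.

```python
# Back-of-the-envelope estimate of the electrostatic (Maxwell) pressure
# squeezing a liquid-filled pouch between two charged electrodes.
# Idealized parallel-plate model with assumed values; illustrative only.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, gap, eps_r):
    """Electrostatic pressure (Pa) for a field E = V/d across a dielectric."""
    e_field = voltage / gap  # V/m
    return 0.5 * EPS0 * eps_r * e_field ** 2

# Hypothetical numbers: ~10 kV across a 1 mm pouch of canola oil (eps_r ~ 3)
print(f"{maxwell_pressure(10e3, 1e-3, 3.0) / 1e3:.2f} kPa")  # ~1.33 kPa
```

Pressures of this order, applied across the pouch, are what displace the liquid and drive the motion.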
The design is essentially a synthesis of two leading approaches to actuating soft robots. Pneumatic and hydraulic actuators that pump fluids around have been popular due to their high forces, easy fabrication and ability to mimic a variety of natural motions. But they tend to be bulky and relatively slow.
Dielectric elastomer actuators apply an electric field across a solid insulating layer to make it flex. These can mimic the responsiveness of biological muscle. But they are not very versatile and can also fail catastrophically, because the high voltages required can cause a bolt of electricity to blast through the insulator, destroying it. The likelihood of this happening increases in line with the size of their electrodes, which makes it hard to scale them up. By combining the two approaches, researchers get the best of both worlds, with the power, versatility and easy fabrication of a fluid-based system and the responsiveness of electrically-powered actuators.
One of the designs holds particular promise for robotics applications, as it behaves a lot like biological muscle. The so-called Peano-HASEL actuators are made up of multiple rectangular pouches connected in series, which allows them to contract linearly, just like real muscle. They can lift more than 200 times their weight, and, being electrically powered, they flex faster than human muscle.
As the name suggests, the HASEL actuators are also self-healing. They are still prone to the same kind of electrical damage as dielectric elastomer actuators, but the liquid insulator is able to immediately self-heal by redistributing itself and regaining its insulating properties.
The muscles can even monitor the amount of strain they’re under to provide the same kind of feedback biological systems would. The muscle’s capacitance—its ability to store an electric charge—changes as the device stretches, which makes it possible to power the arm while simultaneously measuring what position it’s in.
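As a toy illustration of that self-sensing trick, the parallel-plate relation C = ε·A/d can be inverted to estimate the electrode separation from a capacitance reading. A minimal sketch, assuming an idealized geometry and invented values (the papers’ actual calibration is more involved):

```python
# Toy capacitive self-sensing: infer the electrode gap from a measured
# capacitance via the parallel-plate relation C = eps0 * eps_r * A / d.
# Idealized geometry and invented numbers; not the papers' calibration.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gap_from_capacitance(c_measured, area, eps_r):
    """Invert C = eps0*eps_r*A/d to estimate the electrode gap d (m)."""
    return EPS0 * eps_r * area / c_measured

area = 4e-4  # hypothetical 4 cm^2 electrodes, canola-oil dielectric
for c_pF in (30.0, 60.0, 120.0):  # capacitance rises as the pouch flattens
    d = gap_from_capacitance(c_pF * 1e-12, area, eps_r=3.0)
    print(f"C = {c_pF:5.1f} pF  ->  gap = {d * 1e6:5.1f} um")
```

Tracking that capacitance while driving the actuator is what lets a single pair of electrodes both move the muscle and report its position.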
The researchers say this could imbue robots with a similar sense of proprioception or body-awareness to that found in plants and animals. “Self-sensing allows for the development of closed-loop feedback controllers to design highly advanced and precise robots for diverse applications,” Shane Mitchell, a PhD student in Keplinger’s lab and an author on both papers, said in an email.
The researchers say the high voltages required are an ongoing challenge, though they’ve already designed devices in the lab that use a fifth of the voltage of those featured in the recent papers.
In most of their demonstrations, these soft actuators were being used to power rigid arms and levers, suggesting that future robots are likely to combine both rigid and soft components, much like animals do. The potential applications for the technology range from more realistic prosthetics to much more dexterous robots that can work easily alongside humans.
It will take some work before these devices appear in commercial robots. But the combination of high performance with simple, inexpensive fabrication means other researchers are likely to jump in, so innovation could be rapid.
Image Credit: Keplinger Research Group/University of Colorado


#431900 Artificial muscles power up with new ...

Scientists are one step closer to artificial muscles. Orthotics have come a long way since their initial wood and strap designs, yet innovation lapsed when it came to compensating for muscle power—until now. A collaborative research team has designed a wearable robot to support a person's hip joint while walking. The team, led by Minoru Hashimoto, a professor of textile science and technology at Shinshu University in Japan, published the details of their prototype in Smart Materials and Structures, a journal of the Institute of Physics.


#431836 Do Our Brains Use Deep Learning to Make ...

The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already established by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The Blame Game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like children trying to learn a skill the first time, initially the network doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweaks later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
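For readers who want to see the blame game spelled out, here is a minimal sketch: a tiny two-layer network trained by backpropagation on the toy XOR problem. It illustrates the forward-pass, error, backward-pass loop described above and nothing more; it is not any particular production system.

```python
import numpy as np

# Minimal two-layer network trained by backpropagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # forward pass: the signal climbs the "building"
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # error: how far each output is from the ideal
    d_out = (out - y) * out * (1 - out)
    # backward pass: each layer is told its share of the blame
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The key line is the one computing d_h: the error at the output travels backwards through W2.T, which is exactly the step the brain has no obvious way to perform.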
The brain does this too. How? We have no clue.
Biological No-Go
What’s clear, though, is that the brain can’t be running the textbook deep learning solution.
Backprop is a pretty needy function. It requires very specific infrastructure to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop—essentially between neighboring neurons—they require the forward and backward connections to be symmetric. This hardly ever occurs in the brain’s synapses.
More recent algorithms adopt a slightly different strategy, implementing a separate feedback pathway that helps the neurons figure out errors locally. While that’s more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
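One published example of such a workaround is “feedback alignment” (Lillicrap et al., 2016), which sends the error backwards through fixed random weights instead of a symmetric copy of the forward ones. Revisiting the XOR sketch above, the only change is the backward step; again, a toy illustration, not a claim about the brain:

```python
import numpy as np

# Feedback alignment: same XOR network as before, but the error travels
# backwards through a FIXED RANDOM matrix B rather than W2.T, so the
# forward and backward connections no longer need to be symmetric.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
B = rng.normal(0, 1, (1, 8))  # fixed random feedback pathway
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ B) * h * (1 - h)  # the only change from backprop
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically still learns XOR
```

Tricks like this show the error can be delivered without symmetric wiring, but they still lean on machinery the brain may not have.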
What it does have are neurons with intricate structures, unlike the uniform “balls” currently used in deep learning.
Branching Networks
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
[Figure: a multi-compartment neural network model for deep learning. Left: reconstruction of pyramidal neurons from mouse primary visual cortex. Right: simplified pyramidal neuron models. Image Credit: CIFAR]
Curiously, the structure of neurons often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than having uniform neurons, they gave those in middle layers—sandwiched between the input and output—compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
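Here’s a toy numerical sketch of that two-compartment idea: feedforward input drives a “basal” compartment, top-down feedback lands on a separate “apical” compartment, and a local update nudges the feedforward weights toward what the feedback asks for. All names and dimensions are invented for illustration; this is a caricature of the eLife model, not its actual implementation.

```python
import numpy as np

# Two-compartment "pyramidal" units: feedforward input and top-down
# feedback arrive at separate compartments and meet only in the update.
rng = np.random.default_rng(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

x = rng.random(4)                    # sensory input (e.g., pixel features)
top_down = rng.random(2)             # signal from a higher layer
W_basal = rng.normal(0, 1, (4, 3))   # synapses onto the "roots"
W_apical = rng.normal(0, 1, (2, 3))  # synapses onto the "branches"

basal = x @ W_basal                  # local feedforward computation
apical = top_down @ W_apical         # feedback, kept separate ...
rate = sigmoid(basal)                # ... and excluded from the output

target = sigmoid(basal + apical)     # what the feedback "asks for"
W_basal += 0.1 * np.outer(x, (target - rate) * rate * (1 - rate))
```

Because the teaching signal arrives at its own compartment, the unit never needs the error to travel backwards through its feedforward synapses.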
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also be nudging lower processing neurons towards a “better” activity pattern in real-time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they could experimentally test: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com


#431678 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Can A.I. Be Taught to Explain Itself?
Cliff Kuang | New York Times
“Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the ‘black box’ problem—the inability to discern exactly what machines are doing when they’re teaching themselves novel skills—and it has become a central concern in artificial-intelligence research.”
BIOTECH
Semi-Synthetic Life Form Now Fully Armed and Operational
Antonio Regalado | MIT Technology Review
“By this year, the team had devised a more stable bacterium. But it wasn’t enough to endow the germ with a partly alien code—it needed to use that code to make a partly alien protein. That’s what Romesberg’s team, reporting today in the journal Nature, says it has done.”
COMPUTING
4 Strange New Ways to Compute
Samuel K. Moore | IEEE Spectrum
“With Moore’s Law slowing, engineers have been taking a cold hard look at what will keep computing going when it’s gone…What follows includes a taste of both the strange and the potentially impactful.”
INNOVATION
Google X and the Science of Radical Creativity
Derek Thompson | The Atlantic
“But what X is attempting is nonetheless audacious. It is investing in both invention and innovation. Its founders hope to demystify and routinize the entire process of making a technological breakthrough—to nurture each moonshot, from question to idea to discovery to product—and, in so doing, to write an operator’s manual for radical creativity.”
PRIVACY AND SECURITY
Uber Paid Hackers to Delete Stolen Data on 57 Million People
Eric Newcomer | Bloomberg
“Hackers stole the personal data of 57 million customers and drivers from Uber Technologies Inc., a massive breach that the company concealed for more than a year. This week, the ride-hailing firm ousted its chief security officer and one of his deputies for their roles in keeping the hack under wraps, which included a $100,000 payment to the attackers.”
Image Credit: singpentinkhappy / Shutterstock.com


#431559 Drug Discovery AI to Scour a Universe of ...

On a dark night, away from city lights, the stars of the Milky Way can seem uncountable. Yet from any given location no more than 4,500 are visible to the naked eye. Meanwhile, our galaxy has 100–400 billion stars, and there are even more galaxies in the universe.
The numbers of the night sky are humbling. And they give us a deep perspective…on drugs.
Yes, this includes wow-the-stars-are-freaking-amazing-tonight drugs, but also the kinds of drugs that make us well again when we’re sick. The number of possible organic compounds with “drug-like” properties dwarfs the number of stars in the universe by over 30 orders of magnitude.
Next to this multiverse of possibility, the chemical configurations scientists have made into actual medicines are like the smattering of stars you’d glimpse downtown.
But for good reason.
Exploring all that potential drug-space is as humanly impossible as exploring all of physical space, and even if we could, most of what we’d find wouldn’t fit our purposes. Still, the idea that wonder drugs must surely lurk amid the multitudes is too tantalizing to ignore.
Which is why, Alex Zhavoronkov said at Singularity University’s Exponential Medicine in San Diego last week, we should use artificial intelligence to do more of the legwork and speed discovery. This, he said, could be one of the next big medical applications for AI.
Dogs, Diagnosis, and Drugs
Zhavoronkov is CEO of Insilico Medicine and CSO of the Biogerontology Research Foundation. Insilico is one of a number of AI startups aiming to accelerate drug discovery with AI.
In recent years, Zhavoronkov said, the now-famous machine learning technique, deep learning, has made progress on a number of fronts. Algorithms that can teach themselves to play games—like DeepMind’s AlphaGo Zero or Carnegie Mellon’s poker playing AI—are perhaps the most headline-grabbing of the bunch. But pattern recognition was the thing that kicked deep learning into overdrive early on, when machine learning algorithms went from struggling to tell dogs and cats apart to outperforming their peers and then their makers in quick succession.

In medicine, deep learning algorithms trained on databases of medical images can spot life-threatening disease with accuracy equal to or greater than that of human professionals. There’s even speculation that AI, if we learn to trust it, could be invaluable in diagnosing disease. And, as Zhavoronkov noted, with more applications and a longer track record, that trust is coming.
“Tesla is already putting cars on the street,” Zhavoronkov said. “Three-year, four-year-old technology is already carrying passengers from point A to point B, at 100 miles an hour, and one mistake and you’re dead. But people are trusting their lives to this technology.”
“So, why don’t we do it in pharma?”
Trial and Error and Try Again
AI wouldn’t drive the car in pharmaceutical research. It’d be an assistant that, when paired with a chemist or two, could fast-track discovery by screening more possibilities for better candidates.
There’s plenty of room to make things more efficient, according to Zhavoronkov.
Drug discovery is arduous and expensive. Chemists sift tens of thousands of candidate compounds for the most promising to synthesize. Of these, a handful will go on to further research, fewer will make it to human clinical trials, and a fraction of those will be approved.
The whole process can take many years and cost hundreds of millions of dollars.
This is a big data problem if ever there was one, and deep learning thrives on big data. Early applications have shown their worth unearthing subtle patterns in huge training databases. Although drug-makers already use software to sift compounds, such software requires explicit rules written by chemists. AI’s allure is its ability to learn and improve on its own.
“There are two strategies for AI-driven innovation in pharma to ensure you get better molecules and much faster approvals,” Zhavoronkov said. “One is looking for the needle in the haystack, and another one is creating a new needle.”
To find the needle in the haystack, algorithms are trained on large databases of molecules. Then they go looking for molecules with attractive properties. But creating a new needle? That’s a possibility enabled by the generative adversarial networks Zhavoronkov specializes in.
Such algorithms pit two neural networks against each other. One generates meaningful output while the other judges whether this output is true or false, Zhavoronkov said. Together, the networks generate new objects like text, images, or in this case, molecular structures.
“We started employing this particular technology to make deep neural networks imagine new molecules, to make it perfect right from the start. So, to come up with really perfect needles,” Zhavoronkov said. “[You] can essentially go to this [generative adversarial network] and ask it to create molecules that inhibit protein X at concentration Y, with the highest viability, specific characteristics, and minimal side effects.”
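To make the generator-versus-discriminator mechanics concrete, here is a minimal GAN sketch in PyTorch, with random binary vectors standing in for molecular fingerprints. Every dimension, dataset, and name here is invented for illustration; Insilico’s actual models and chemistry are far more sophisticated.

```python
import torch
import torch.nn as nn

FP_DIM, Z_DIM, BATCH = 64, 16, 32

G = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU(),
                  nn.Linear(128, FP_DIM), nn.Sigmoid())  # proposes "needles"
D = nn.Sequential(nn.Linear(FP_DIM, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())       # real or fake?

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

# Stand-in "molecule" dataset: random binary fingerprint vectors.
real_mols = (torch.rand(512, FP_DIM) > 0.7).float()

for step in range(1000):
    real = real_mols[torch.randint(0, 512, (BATCH,))]
    fake = G(torch.randn(BATCH, Z_DIM))

    # Discriminator: call real molecules true, generated ones false.
    loss_d = (bce(D(real), torch.ones(BATCH, 1)) +
              bce(D(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into calling its fakes real.
    loss_g = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In a conditional variant, the desired properties (the “inhibit protein X at concentration Y” part) would be fed to both networks as extra inputs, steering generation toward molecules with those characteristics.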
Zhavoronkov believes AI can find or fabricate more needles from the array of molecular possibilities, freeing human chemists to focus on synthesizing only the most promising. If it works, he hopes we can increase hits, minimize misses, and generally speed the process up.
Proof’s in the Pudding
Insilico isn’t alone on its drug-discovery quest, nor is it a brand new area of interest.
Last year, a Harvard group published a paper on an AI that similarly suggests drug candidates. The software trained on 250,000 drug-like molecules and used its experience to generate new molecules that blended existing drugs and made suggestions based on desired properties.
An MIT Technology Review article on the subject highlighted a few of the challenges such systems may still face. The results returned aren’t always meaningful or easy to synthesize in the lab, and the quality of these results, as always, is only as good as the data the system is trained on.
Stanford chemistry professor and Andreessen Horowitz partner Vijay Pande said that images, speech, and text—three of the areas where deep learning has made quick strides—have better, cleaner data. Chemical data, on the other hand, is still being optimized for deep learning. Also, while there are public databases, much data still lives behind closed doors at private companies.
To overcome the challenges and prove their worth, Zhavoronkov said, his company is very focused on validating the tech. But this year, skepticism in the pharmaceutical industry seems to be easing into interest and investment.
AI drug discovery startup Exscientia inked a deal with Sanofi for $280 million and GlaxoSmithKline for $42 million. Insilico is also partnering with GlaxoSmithKline, and Numerate is working with Takeda Pharmaceutical. Even Google may jump in. According to an article in Nature outlining the field, the firm’s deep learning project, Google Brain, is growing its biosciences team, and industry watchers wouldn’t be surprised to see them target drug discovery.
With AI and the hardware running it advancing rapidly, the greatest potential may yet be ahead. Perhaps, one day, all 10^60 molecules in drug-space will be at our disposal. “You should take all the data you have, build n new models, and search as much of that 10^60 as possible” before every decision you make, Brandon Allgood, CTO at Numerate, told Nature.
Today’s projects need to live up to their promises, of course, but Zhavoronkov believes AI will have a big impact in the coming years, and now’s the time to integrate it. “If you are working for a pharma company, and you’re still thinking, ‘Okay, where is the proof?’ Once there is a proof, and once you can see it to believe it—it’s going to be too late,” he said.
Image Credit: Klavdiya Krinichnaya / Shutterstock.com
