Tag Archives: applications

#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience would positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to the inferotemporal (IT) cortex, a nub located slightly behind our ears, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.

That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is also a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures were based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists in building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity, and married them to 30 new images to “evolve” the next generation of images. After 250 generations, the technique, XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
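
The paper’s full pipeline isn’t spelled out here, but the loop described above (rank images by the neuron’s firing rate, keep the top responders, derive fresh candidates from them, repeat) maps onto a standard genetic algorithm running over a generative network’s latent codes. Below is a minimal Python sketch of that idea, where generator and neuron_response are hypothetical stand-ins for the pretrained image generator and the recorded neural data, not the actual XDREAM implementation.

```python
import numpy as np

# Minimal sketch of an XDREAM-style evolution loop (hypothetical helpers).
# generator(code) stands in for a pretrained generative network that maps a
# latent code to an image; neuron_response(image) stands in for the recorded
# firing rate of the neuron being probed. Neither is specified in the article.

def evolve_preferred_image(generator, neuron_response, pop_size=40, n_keep=10,
                           n_generations=250, sigma=0.3, code_dim=256, seed=0):
    rng = np.random.default_rng(seed)
    codes = rng.normal(size=(pop_size, code_dim))          # initial population
    for _ in range(n_generations):
        fitness = np.array([neuron_response(generator(c)) for c in codes])
        parents = codes[np.argsort(fitness)[-n_keep:]]     # top responders survive
        children = []
        while len(children) < pop_size - n_keep:
            a, b = parents[rng.integers(n_keep, size=2)]   # pick two parents
            children.append((a + b) / 2 + rng.normal(scale=sigma, size=code_dim))
        codes = np.vstack([parents] + children)            # next generation
    best = max(codes, key=lambda c: neuron_response(generator(c)))
    return generator(best)
```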

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
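
In practice, driving neurons into desired states means optimizing a stimulus against the model rather than against the animal: if an ANN layer is a faithful stand-in for a population of V4 neurons, you can synthesize an image by ascending the gradient of a chosen model unit’s activation, then show the result to the monkey. The following PyTorch sketch illustrates that general idea only; the model, target layer, and unit index are assumptions, and the study’s actual objectives are more elaborate.

```python
import torch

# Minimal sketch of model-driven stimulus synthesis: ascend the gradient of a
# chosen unit's activation in a vision model to produce an image predicted to
# drive that unit. Model, layer, and unit are hypothetical placeholders.

def synthesize_controller_image(model, layer, unit, steps=200, lr=0.05):
    image = torch.zeros(1, 3, 224, 224, requires_grad=True)   # start from gray
    captured = {}
    handle = layer.register_forward_hook(lambda m, i, o: captured.update(out=o))
    opt = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(image)                                  # forward pass fills `captured`
        loss = -captured["out"][0, unit].mean()       # maximize the unit's response
        loss.backward()
        opt.step()
        image.data.clamp_(-1, 1)                      # keep pixels in a valid range
    handle.remove()
    return image.detach()
```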

The work suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” which we can’t really describe, but which the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team concluded that today’s ANNs capture a degree of that understanding and generalization. The results could help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processes, as well as our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

Posted in Human Robots

#434827 AI and Robotics Are Transforming ...

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief—how we can prevent disasters in the first place and get help to victims during that first golden hour, when immediate relief can save lives.

Here are the three areas of greatest impact:

AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply

Let’s dive in!

Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.

Partnering with the cities of Los Angeles, San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor site-specific impacts of not only climate change but each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with data on city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and flagging structures in need of reinforcement.

Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.

And as IoT-connected devices and intelligent hardware continue to proliferate, a burgeoning trillion-sensor economy will only amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are just as formidable a tool in humanitarian crises.

One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”

Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.

Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.

One of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable in crowdsourcing witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks—all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future….

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.

As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
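
For a concrete feel of what “non-hierarchical” means here, consider a toy model: each phone links directly to any peer within radio range, and a report simply hops from device to device until it reaches one with a working uplink. The positions, radio range, and flooding scheme in the sketch below are illustrative assumptions, not a real mesh protocol.

```python
import math
from collections import deque

# Toy sketch of non-hierarchical mesh connectivity: every device relays for its
# neighbors, so a report can hop peer-to-peer until it reaches a node with
# backhaul. All values here are illustrative assumptions.

def neighbors(nodes, radio_range):
    """Adjacency list: two devices link if they are within radio range."""
    adj = {n: [] for n in nodes}
    for a, (ax, ay) in nodes.items():
        for b, (bx, by) in nodes.items():
            if a != b and math.hypot(ax - bx, ay - by) <= radio_range:
                adj[a].append(b)
    return adj

def route_report(adj, source, gateways):
    """Breadth-first flood from the reporting device to the nearest gateway."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] in gateways:
            return path                      # hop-by-hop route to backhaul
        for nxt in adj[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                              # partitioned: no gateway reachable

# Example: phones scattered across a district, one with a working uplink.
nodes = {"p1": (0, 0), "p2": (90, 10), "p3": (170, 40), "uplink": (250, 60)}
adj = neighbors(nodes, radio_range=100)
print(route_report(adj, "p1", gateways={"uplink"}))   # ['p1', 'p2', 'p3', 'uplink']
```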

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which armed attacks break out across disjointed urban districts, with each cluster of eyewitnesses and at-risk civilians broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
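
As a back-of-the-envelope illustration of how those pieces could chain together, here is a purely hypothetical sketch: cross-check a crowdsourced report against sensor data, score its urgency, and hand approved requests to a dispatch step. Every field, threshold, and function name below is invented for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch of the imagined verify-score-dispatch pipeline above.
# All names, thresholds, and data fields are hypothetical.

@dataclass
class Report:
    lat: float
    lon: float
    need: str                   # e.g. "insulin", "water", "bandages"
    crowd_confirmations: int
    sensor_corroborated: bool   # e.g. flood gauge or satellite data agrees

def urgency(report: Report) -> float:
    score = min(report.crowd_confirmations, 10) / 10
    if report.sensor_corroborated:
        score += 0.5
    if report.need in {"insulin", "blood", "oxygen"}:
        score += 1.0
    return score

def triage(reports, approve, dispatch, threshold=1.0):
    """Verify, rank, and dispatch the most urgent approved requests first."""
    verified = [r for r in reports
                if r.sensor_corroborated or r.crowd_confirmations >= 3]
    for report in sorted(verified, key=urgency, reverse=True):
        if urgency(report) >= threshold and approve(report):   # human in the loop
            dispatch(report)        # e.g. assign payload, generate flight plan

# Example wiring with stub approval/dispatch callbacks:
triage(
    [Report(29.76, -95.37, "insulin", crowd_confirmations=5, sensor_corroborated=True)],
    approve=lambda r: True,
    dispatch=lambda r: print(f"Dispatching {r.need} to ({r.lat}, {r.lon})"),
)
```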

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Initially designed to perform spectral inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity and range of motion, and even on adding a gripping function with enhanced dexterity.

Cheetah III and future versions are aimed at saving lives in almost any environment.

And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which led to three nuclear meltdowns at the nation’s Fukushima nuclear power plant, a robot successfully examined the reactor’s fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.

As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also prevent population displacement and paralyzing damage to our natural environment before a disaster has the chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.

Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.

And as shown during Hurricane Harvey in Houston, drones can provide scores of predictive intel on everything from future flooding to damage estimates.

Among multiple others, a team led by Dr. Robin Murphy, Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small quadcopters to military-grade unmanned planes. These were critical not only for monitoring levee infrastructure, but also for identifying those left behind by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, the company’s gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or flown autonomously in accordance with real-time disaster zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms poses no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable for humanitarian relief.

Concluding Thoughts
Perhaps one of the most consequential and impactful applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Arcansel / Shutterstock.com

Posted in Human Robots

#434823 The Tangled Web of Turning Spider Silk ...

Spider-Man is one of the most popular superheroes of all time. It’s a bit surprising given that one of the more common phobias is arachnophobia—a debilitating fear of spiders.

Perhaps more fantastical is that young Peter Parker, a brainy high school science nerd, seemingly developed overnight the famous web-shooters and the synthetic spider silk that he uses to swing across the cityscape like Tarzan through the jungle.

That’s because scientists have been trying for decades to replicate spider silk, a material that is five times stronger than steel, among its many superpowers. In recent years, researchers have been untangling the protein-based fiber’s structure down to the molecular level, leading to new insights and new potential for eventual commercial uses.

The applications for such a material seem near endless. There are the more futuristic visions, like enabling robotic “muscles” for human-like movement or ensnaring real-life villains with a Spider-Man-like web. Near-term applications could include biomedical products such as bandages and adhesives, as well as replacement textiles for everything from rope to seat belts to parachutes.

Spinning Synthetic Spider Silk
Randy Lewis has been studying the properties of spider silk and developing methods for producing it synthetically for more than three decades. In the 1990s, his research team was behind cloning the first spider silk gene, as well as the first to identify and sequence the proteins that make up the six different silks that web slingers make. Each has different mechanical properties.

“So our thought process was that you could take that information and begin to understand what made them strong and what makes them stretchy, and why some are very stretchy and some are not stretchy at all, and some are stronger and some are weaker,” explained Lewis, a biology professor at Utah State University and director of the Synthetic Spider Silk Lab, in an interview with Singularity Hub.

Spiders are naturally territorial and cannibalistic, so any intention to farm silk naturally would likely end in an orgy of arachnid violence. Instead, Lewis and company have genetically modified different organisms to produce spider silk synthetically, including inserting a couple of web-making genes into the genetic code of goats. The goats’ milk contains spider silk proteins.

The lab also produces synthetic spider silk through a fermentation process not entirely dissimilar to brewing beer, but using genetically modified bacteria to make the desired spider silk proteins. A similar technique has been used for years to make a key enzyme in cheese production. More recently, companies are using transgenic bacteria to make meat and milk proteins, entirely bypassing animals in the process.

The same fermentation technology is used by a chic startup called Bolt Threads outside of San Francisco that has raised more than $200 million for fashionable fibers made out of synthetic spider silk it calls Microsilk. (The company is also developing a second leather-like material, Mylo, using the underground root structure of mushrooms known as mycelium.)

Lewis’ lab also uses transgenic silkworms to produce a kind of composite material made up of the domesticated insect’s own silk proteins and those of spider silk. “Those have some fairly impressive properties,” Lewis said.

The researchers are even experimenting with genetically modified alfalfa. One of the big advantages there is that once the spider silk protein has been extracted, the remaining protein could be sold as livestock feed. “That would bring the cost of spider silk protein production down significantly,” Lewis said.

Building a Better Web
Producing synthetic spider silk isn’t the problem, according to Lewis, but the ability to do it at scale commercially remains a sticking point.

Another challenge is “weaving” the synthetic spider silk into usable products that can take advantage of the material’s marvelous properties.

“It is possible to make silk proteins synthetically, but it is very hard to assemble the individual proteins into a fiber or other material forms,” said Markus Buehler, head of the Department of Civil and Environmental Engineering at MIT, in an email to Singularity Hub. “The spider has a complex spinning duct in which silk proteins are exposed to physical forces, chemical gradients, the combination of which generates the assembly of molecules that leads to silk fibers.”

Buehler recently co-authored a paper in the journal Science Advances that found dragline spider silk exhibits different properties in response to changes in humidity that could eventually have applications in robotics.

Specifically, spider silk suddenly contracts and twists above a certain level of relative humidity, exerting enough force to “potentially be competitive with other materials being explored as actuators—devices that move to perform some activity such as controlling a valve,” according to a press release.

Studying Spider Silk Up Close
Recent studies at the molecular level are helping scientists learn more about the unique properties of spider silk, which may help researchers develop materials with extraordinary capabilities.

For example, scientists at Arizona State University used magnetic resonance tools and other instruments to image the abdomen of a black widow spider. They produced what they called the first molecular-level model of spider silk protein fiber formation, providing insights on the nanoparticle structure. The research was published last October in Proceedings of the National Academy of Sciences.

A cross section of the abdomen of a black widow (Latrodectus hesperus) spider used in this study at Arizona State University. Image Credit: Samrat Amin.
Also in 2018, a study presented in Nature Communications described a sort of molecular clamp that binds the silk protein building blocks, which are called spidroins. The researchers observed for the first time that the clamp self-assembles in a two-step process, contributing to the extensibility, or stretchiness, of spider silk.

Another team put the spider silk of a brown recluse under an atomic force microscope, discovering that each strand, already 1,000 times thinner than a human hair, is made up of thousands of nanostrands. That helps explain its extraordinary tensile strength, though technique is also a factor, as the brown recluse uses a special looping method to reinforce its silk strands. The study also appeared last year in the journal ACS Macro Letters.

Making Spider Silk Stick
Buehler said his team is now trying to develop better and faster predictive methods to design silk proteins using artificial intelligence.

“These new methods allow us to generate new protein designs that do not naturally exist and which can be explored to optimize certain desirable properties like torsional actuation, strength, bioactivity—for example, tissue engineering—and others,” he said.

Meanwhile, Lewis’ lab has discovered a method that allows it to solubilize spider silk protein in what is essentially a water-based solution, eschewing acids or other toxic compounds that are normally used in the process.

That enables the researchers to develop materials beyond fiber, including adhesives that “are better than an awful lot of the current commercial adhesives,” Lewis said, as well as coatings that could be used to dampen vibrations, for example.

“We’re making gels for various kinds of tissue regeneration, as well as drug delivery, and things like that,” he added. “So we’ve expanded the use profile from something beyond fibers to something that is a much more extensive portfolio of possible kinds of materials.”

And, yes, there are even designs at the Synthetic Spider Silk Lab for developing a Spider-Man web-slinger material. The US Navy is interested in non-destructive ways of disabling an enemy vessel, such as fouling its propeller. The project also includes producing synthetic proteins from the hagfish, an eel-like critter that exudes a gelatinous slime when threatened.

While the potential for spider silk is certainly headline-grabbing, Lewis cautioned that much of the hype is not focused on the unique mechanical properties that could lead to advances in healthcare and other industries.

“We want to see spider silk out there because it’s a unique material, not because it’s got marketing appeal,” he said.

Image Credit: mycteria / Shutterstock.com

Posted in Human Robots

#434818 Watch These Robots Do Tasks You Thought ...

Robots have been masters of manufacturing at speed and precision for decades, but give them a seemingly simple task like stacking shelves, and they quickly get stuck. That’s changing, though, as engineers build systems that can take on the deceptively tricky tasks most humans can do with their eyes closed.

Boston Dynamics is famous for dramatic reveals of robots performing mind-blowing feats that also leave you scratching your head as to what the market is—think the bipedal Atlas doing backflips or Spot the galloping robot dog.

Last week, the company released a video of a robot called Handle that looks like an ostrich on wheels carrying out the seemingly mundane task of stacking boxes in a warehouse.

It might seem like a step backward, but this is exactly the kind of practical task robots have long struggled with. While the speed and precision of industrial robots have seen them take over many functions in modern factories, they’re generally limited to highly prescribed tasks carried out in meticulously controlled environments.

That’s because despite their mechanical sophistication, most are still surprisingly dumb. They can carry out precision welding on a car or rapidly assemble electronics, but only by rigidly following a prescribed set of motions. Moving cardboard boxes around a warehouse might seem simple to a human, but it actually involves a variety of tasks machines still find pretty difficult—perceiving your surroundings, navigating, and interacting with objects in a dynamic environment.

But the release of this video suggests Boston Dynamics thinks these kinds of applications are close to prime time. Last week the company doubled down by announcing the acquisition of start-up Kinema Systems, which builds computer vision systems for robots working in warehouses.

It’s not the only company making strides in this area. On the same day the video went live, Google unveiled a robot arm called TossingBot that can pick random objects from a box and quickly toss them into another container beyond its reach, which could prove very useful for sorting items in a warehouse. The machine can train on new objects in just an hour or two, and can pick and toss up to 500 items an hour with better accuracy than any of the humans who tried the task.

And an apple-picking robot built by Abundant Robotics is currently on New Zealand farms navigating between rows of apple trees using LIDAR and computer vision to single out ripe apples before using a vacuum tube to suck them off the tree.

In most cases, advances in machine learning and computer vision brought about by the recent AI boom are the keys to these rapidly improving capabilities. Robots have historically had to be painstakingly programmed by humans to solve each new task, but deep learning is making it possible for them to quickly train themselves on a variety of perception, navigation, and dexterity tasks.

It’s not been simple, though, and the application of deep learning in robotics has lagged behind other areas. A major limitation is that the process typically requires huge amounts of training data. That’s fine when you’re dealing with image classification, but when that data needs to be generated by real-world robots it can make the approach impractical. Simulations offer the possibility to run this training faster than real time, but it’s proved difficult to translate policies learned in virtual environments into the real world.

Recent years have seen significant progress on these fronts, though, and the increasing integration of modern machine learning with robotics. In October, OpenAI imbued a robotic hand with human-level dexterity by training an algorithm in a simulation using reinforcement learning before transferring it to the real-world device. The key to ensuring the translation went smoothly was injecting random noise into the simulation to mimic some of the unpredictability of the real world.
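
That noise-injection trick is a form of what is often called domain randomization: resample the simulator’s physical parameters every episode so the policy never overfits to a single idealized world and therefore transfers better to the messy real one. Below is a minimal sketch of the general idea, with the simulator, policy, and update rule as hypothetical placeholders rather than OpenAI’s actual system.

```python
import numpy as np

# Minimal sketch of domain randomization, the "inject random noise into the
# simulation" idea described above. The simulator, policy, and RL update are
# hypothetical placeholders; parameter ranges are illustrative.

def randomized_training(simulator, policy, rl_update, n_episodes=10_000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_episodes):
        # Resample physical parameters every episode so the policy never
        # overfits to one idealized simulation.
        params = {
            "friction":     rng.uniform(0.5, 1.5),
            "object_mass":  rng.uniform(0.8, 1.2),
            "motor_delay":  rng.uniform(0.0, 0.03),   # seconds
            "sensor_noise": rng.uniform(0.0, 0.02),
        }
        episode = simulator.run_episode(policy, params)
        rl_update(policy, episode)   # any RL algorithm, e.g. PPO
    return policy
```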

And just a couple of weeks ago, MIT researchers demonstrated a new technique that let a robot arm learn to manipulate new objects with far less training data than is usually required. By getting the algorithm to focus on a few key points on the object necessary for picking it up, the system could learn to pick up a previously unseen object after seeing only a few dozen examples (rather than the hundreds or thousands typically required).
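
One way to picture the keypoint idea: once a learned detector reduces an object to a handful of task-relevant 3D points, the gripper pose can be derived geometrically from those points, which is far less to learn than a full grasp model. The sketch below is a hypothetical illustration of that principle; the keypoint names and the top-down approach are assumptions, not the MIT researchers’ actual system.

```python
import numpy as np

# Hypothetical sketch: derive a parallel-jaw grasp from a few named 3D
# keypoints predicted by a learned detector. Keypoint semantics are assumed.

def grasp_from_keypoints(keypoints):
    """keypoints: dict of named 3D points (meters) from a learned detector."""
    left = np.asarray(keypoints["grip_left"])
    right = np.asarray(keypoints["grip_right"])
    center = (left + right) / 2                  # where the fingers close
    closing_axis = right - left
    closing_axis /= np.linalg.norm(closing_axis)
    approach = np.array([0.0, 0.0, -1.0])        # top-down approach, for simplicity
    width = np.linalg.norm(right - left) + 0.01  # finger opening with 1 cm margin
    return {"position": center, "closing_axis": closing_axis,
            "approach": approach, "width": width}

# A previously unseen mug works as long as the detector finds the same keypoints:
pose = grasp_from_keypoints({"grip_left": [0.40, 0.02, 0.12],
                             "grip_right": [0.40, 0.10, 0.12]})
print(pose["width"])   # ~0.09 m
```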

How quickly these innovations will trickle down to practical applications remains to be seen, but a number of startups as well as logistics behemoth Amazon are developing robots designed to flexibly pick and place the wide variety of items found in your average warehouse.

Whether the economics of using robots to replace humans at these kinds of menial tasks makes sense yet is still unclear. The collapse of collaborative robotics pioneer Rethink Robotics last year suggests there are still plenty of challenges.

But at the same time, the number of robotic warehouses is expected to leap from 4,000 today to 50,000 by 2025. It may not be long until robots are muscling in on tasks we’ve long assumed only humans could do.

Image Credit: Visual Generation / Shutterstock.com

Posted in Human Robots

#434797 This Week’s Awesome Stories From ...

GENE EDITING
Genome Engineers Made More Than 13,000 Genome Edits in a Single Cell
Antonio Regalado | MIT Technology Review
“The group, led by gene technologist George Church, wants to rewrite genomes at a far larger scale than has currently been possible, something it says could ultimately lead to the ‘radical redesign’ of species—even humans.”

ROBOTICS
Inside Google’s Rebooted Robotics Program
Cade Metz | The New York Times
“Google’s new lab is indicative of a broader effort to bring so-called machine learning to robotics. …Many believe that machine learning—not extravagant new devices—will be the key to developing robotics for manufacturing, warehouse automation, transportation and many other tasks.”

VIDEOS
Boston Dynamics Builds the Warehouse Robot of Jeff Bezos’ Dreams
Luke Dormehl | Digital Trends
“…for anyone wondering what the future of warehouse operation is likely to look like, this offers a far more practical glimpse of the years to come than, say, a dancing dog robot. As Boston Dynamics moves toward commercializing its creations for the first time, this could turn out to be a lot closer than you might think.”

TECHNOLOGY
Europe Is Splitting the Internet Into Three
Casey Newton | The Verge
“The internet had previously been divided into two: the open web, which most of the world could access; and the authoritarian web of countries like China, which is parceled out stingily and heavily monitored. As of today, though, the web no longer feels truly worldwide. Instead we now have the American internet, the authoritarian internet, and the European internet. How does the EU Copyright Directive change our understanding of the web?”

VIRTUAL REALITY
No Man’s Sky’s Next Update Will Let You Explore Infinite Space in Virtual Reality
Taylor Hatmaker | TechCrunch
“Assuming the game runs well enough, No Man’s Sky Virtual Reality will be a far cry from gimmicky VR games that lack true depth, offering one of the most expansive—if not the most expansive—VR experiences to date.”

3D PRINTING
3D Metal Printing Tries to Break Into the Manufacturing Mainstream
Mark Anderson | IEEE Spectrum
“It’s been five or so years since 3D printing was at peak hype. Since then, the technology has edged its way into a new class of materials and started to break into more applications. Today, 3D printers are being seriously considered as a means to produce stainless steel 5G smartphones, high-strength alloy gas-turbine blades, and other complex metal parts.”

Image Credit: ale de sun / Shutterstock.com

Posted in Human Robots