
#437182 MIT’s Tiny New Brain Chip Aims for AI ...

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.

In a recent paper in Nature Nanotechnology, a team of MIT scientists say their tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT, said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”

A Brain in Your Pocket
Whereas the computers in our phones and laptops use separate digital components for processing and memory—and therefore need to shuttle information between the two—the MIT chip uses analog components called memristors that process and store information in the same place. This is similar to the way the brain works and makes memristors far more efficient. To date, however, they’ve struggled with reliability and scalability.
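To make that compute-in-memory idea concrete, here is a minimal sketch in Python. The numbers are illustrative, not MIT’s: a crossbar of memristive conductances multiplies a voltage vector simply by obeying Ohm’s and Kirchhoff’s laws, with no separate memory fetch.

```python
import numpy as np

# Illustrative numbers, not MIT's design: a memristor crossbar stores a
# weight matrix as conductances G. Driving the rows with input voltages V
# yields column currents I = G^T V (Ohm's law plus Kirchhoff's current law),
# so the array multiplies where the weights live -- no separate memory fetch.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (the "weights")
V = np.array([0.2, 0.0, 0.1, 0.3])        # input voltages on the four rows

I = G.T @ V  # each column current is a weighted sum computed in place
print(I)     # three output currents, i.e., one analog matrix-vector product
```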

To overcome those reliability and scalability challenges, the MIT team designed a new kind of silicon-based, alloyed memristor. Ions flowing in memristors made from unalloyed materials tend to scatter as the components shrink, so the signal loses fidelity and the resulting computations become less reliable. The team found that an alloy of silver and copper helped stabilize the flow of silver ions between electrodes, allowing them to scale up the number of memristors on the chip without sacrificing functionality.

While MIT’s new chip is promising, there’s likely a ways to go before memristor-based neuromorphic chips go mainstream. Between now and then, engineers like Kim have their work cut out for them to further scale and demonstrate their designs. But if successful, they could make for smarter smartphones and other even smaller devices.

“We would like to develop this technology further to have larger-scale arrays to do image recognition tasks,” Kim said. “And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”

Special Chips for AI
The MIT work is part of a larger trend in computing and machine learning. As progress in classical chips has flagged in recent years, there’s been an increasing focus on more efficient software and specialized chips to continue pushing the pace.

Neuromorphic chips, for example, aren’t new. IBM and Intel are developing their own designs. So far, their chips have been based on groups of standard computing components, such as transistors (as opposed to memristors), arranged to imitate neurons in the brain. These chips are, however, still in the research phase.

Graphics processing units (GPUs)—chips originally developed for graphics-heavy work like video games—are the best practical example of specialized hardware for AI and powered much of the current wave of machine learning in its early years. In the years since, Google, NVIDIA, and others have developed even more specialized chips tailored to machine learning.

The gains from such specialized chips are already being felt.

In a recent cost analysis of machine learning, research and investment firm ARK Invest said cost declines have far outpaced Moore’s Law. In one example, they found the cost to train an image recognition algorithm (ResNet-50) fell from around $1,000 in 2017 to roughly $10 in 2019. The drop in the cost of actually running such an algorithm was even more dramatic: classifying a billion images cost $10,000 in 2017 and just $0.03 in 2019.
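A quick back-of-the-envelope calculation with those figures shows what “far outpaced Moore’s Law” means in practice (the roughly 2x-every-two-years baseline below is an assumption for comparison, not ARK’s number):

```python
# Back-of-the-envelope check on ARK's figures (2017 -> 2019, two years).
train_decline = 1000 / 10        # ~100x cheaper to train ResNet-50
infer_decline = 10_000 / 0.03    # ~333,000x cheaper to classify a billion images
moore = 2 ** (2 / 2)             # assumed Moore's Law baseline: ~2x in two years

print(f"training: ~{train_decline:.0f}x cheaper (~{train_decline ** 0.5:.0f}x per year)")
print(f"inference: ~{infer_decline:,.0f}x cheaper (~{infer_decline ** 0.5:.0f}x per year)")
print(f"Moore's Law over the same two years: ~{moore:.0f}x")
```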

Some of these declines can be traced to better software, but according to ARK, specialized chips have improved performance by nearly 16 times in the last three years.

As neuromorphic chips—and other tailored designs—advance further in the years to come, these trends in cost and performance may continue. Eventually, if all goes to plan, we might all carry a pocket brain that can do the work of today’s best AI.

Image credit: Peng Lin


#437179 Drug-carrying platelets engineered to ...

A team of researchers from the University of California San Diego and the University of Science and Technology Beijing has developed a way to engineer platelets to propel themselves through biofluids as a means of delivering drugs to targeted parts of the body. In their paper published in the journal Science Robotics, the group outlines their method and how well it worked when tested in the lab. In the same issue, Jinjun Shi with Brigham and Women's Hospital has published a Focus piece outlining ongoing research into the development of natural drug delivery systems and the method used in this new effort.


#437139 SILVER2 aquatic robot walks around on ...

A team of Italian researchers from the BioRobotics Institute of Scuola Superiore Sant'Anna and the Stazione Zoologica Anton Dohrn has developed a new and improved version of its Seabed Interaction Legged Vehicle for Exploration and Research (SILVER) with the SILVER2—a robot that can walk around on the seafloor taking video as it goes. In their paper published in the journal Science Robotics, the group describes the robot, its capabilities, and how it might be used in research efforts.


#437120 The New Indiana Jones? AI. Here’s How ...

Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.

All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.

“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”

The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but manually combing through digitized landscapes remains slow, laborious work. Machine learning algorithms offer a way to parse complex data far more quickly.

AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.

In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map the forest floor. The team taught the computer the shape, size, and texture characteristics of the mounds so it could flag potential sites in the resulting digital 3D datasets.

“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
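The article doesn’t publish Davis’s algorithm, but the core idea can be sketched: scan a LiDAR-derived elevation model for patches that bulge above their surroundings, then pass the flagged features to humans for review. The function name, radius, and relief threshold below are made up for illustration; they are not Davis’s parameters.

```python
import numpy as np
from scipy import ndimage

def candidate_mounds(dem, radius_px=15, min_relief=0.5):
    """Flag cells standing above their local neighborhood -- a crude proxy
    for the shape/size criteria described above. `dem` is a 2D array of
    LiDAR-derived ground elevations in meters."""
    local_mean = ndimage.uniform_filter(dem, size=2 * radius_px + 1)
    relief = dem - local_mean                # positive where terrain bulges upward
    labels, n = ndimage.label(relief > min_relief)
    return labels, n                         # candidate features, checked by hand

dem = np.random.rand(200, 200)               # stand-in for a real LiDAR tile
labels, n = candidate_mounds(dem)
print(f"{n} candidate features flagged for manual review")
```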

In Madagascar—where Davis is studying human settlement history across the world’s fourth-largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers over the course of about a year.
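The article doesn’t name the model behind that predictive survey, so the following is one plausible shape for it, not Davis’s actual pipeline: train a classifier on imagery features at known site and non-site locations, then rank unsurveyed cells by predicted site probability. Every name and number here is hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: rows are map cells, columns are spectral bands and
# terrain variables sampled from free satellite imagery; labels mark known
# sites (1) versus surveyed empty ground (0).
rng = np.random.default_rng(1)
X_train = rng.random((500, 6))             # features at already-surveyed cells
y_train = rng.integers(0, 2, 500)          # site / non-site labels
X_unsurveyed = rng.random((10_000, 6))     # the rest of the landscape

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
site_prob = model.predict_proba(X_unsurveyed)[:, 1]
targets = np.argsort(site_prob)[::-1][:100]  # top 100 cells to field-check first
```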

Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change (coastal erosion driven by sea level rise, for example) and from other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.

“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.

Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.

Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers of the Indus Valley civilization, in what is now part of Pakistan and India. Such AI-powered mapping wouldn’t be possible using satellite images alone.

That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. The same analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.

“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.

Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers referred to the use of AI in archaeology, according to the Web of Science, reputedly the world’s largest global citation database. Last year, more than 65 papers were published on the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.

AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”

Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”

In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to restore missing text in ancient Greek inscriptions on damaged stone and ceramic objects.

Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.
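PYTHIA itself is a deep sequence network, but the task framing is easy to show in miniature: predict a missing character, marked with a placeholder, from its surrounding context using statistics gathered from intact text. The toy trigram model below is a stand-in for illustration, not PYTHIA’s architecture.

```python
from collections import Counter, defaultdict

# Toy framing of the task, not PYTHIA itself: predict a character marked "?"
# from its immediate neighbors using trigram counts from intact text. PYTHIA
# does this with a deep sequence network over much longer contexts.
corpus = "the people and the council resolved to praise the general"

trigrams = defaultdict(Counter)
for i in range(1, len(corpus) - 1):
    trigrams[(corpus[i - 1], corpus[i + 1])][corpus[i]] += 1

def restore(damaged):
    chars = list(damaged)
    for i, c in enumerate(chars):
        if c == "?" and 0 < i < len(chars) - 1:
            context = (chars[i - 1], chars[i + 1])
            if trigrams[context]:
                chars[i] = trigrams[context].most_common(1)[0][0]
    return "".join(chars)

print(restore("the ?eople"))  # -> "the people" given this tiny corpus
```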

In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient script, this one inscribed on turtle shells and ox bones. The CNN classified oracle bone morphology in order to piece together fragments of these divination objects, some bearing inscriptions that represent the earliest evidence of China’s recorded history.

“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.
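As a rough sketch of that first step, here is a minimal CNN classifier in Python; the architecture is invented for illustration and is not the study’s network.

```python
import torch
import torch.nn as nn

# Invented architecture, not the authors' network: a small CNN that labels a
# 64x64 grayscale fragment image as ox bone (0) or turtle shell (1), the
# "which material is this?" step that precedes any piecing-together.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # 64x64 input -> 16x16 after two poolings
)

fragment = torch.randn(1, 1, 64, 64)  # stand-in for one scanned fragment
logits = model(fragment)
print(logits.argmax(dim=1))           # 0 = ox bone, 1 = turtle shell
```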

AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.

The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.

Also known as coprolites, paleofeces from humans and dogs are often found in the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.

“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.
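The kind of comparison involved can be caricatured in a few lines. The sketch below scores a sample’s microbial taxon abundances against human and dog reference profiles using made-up numbers; it is deliberately far simpler than the real tool.

```python
import numpy as np

# Deliberately simplified, not CoproID's actual pipeline: score a coprolite's
# microbial taxon abundances against modern human and dog reference profiles.
# Taxa and numbers are invented for illustration.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

human_ref = np.array([0.40, 0.30, 0.20, 0.10])  # relative taxon abundances
dog_ref = np.array([0.10, 0.15, 0.25, 0.50])
sample = np.array([0.35, 0.28, 0.22, 0.15])     # the ancient sample

host = "human" if cosine(sample, human_ref) > cosine(sample, dog_ref) else "dog"
print(host)  # -> "human" for these made-up numbers
```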

…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are still plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster to do the analysis yourself, Davis noted.

“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”

Still, there’s no telling what the future will hold for studying the past using artificial intelligence.

“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”

Image Credit: James Wheeler from Pixabay


#436962 Scientists Engineered Neurons to Make ...

Electricity plays a surprisingly powerful role in our bodies. While most people are aware that it plays a crucial role in carrying signals to and from our nerves, our bodies produce electric fields that can do everything from helping heal wounds to triggering the release of hormones.

Electric fields can influence a host of important cellular behaviors, such as directional migration, proliferation, division, and even differentiation into different cell types. The work of Michael Levin at Tufts University even suggests that electric fields may play a central role in the way our bodies organize themselves.

This has prompted considerable interest in exploiting our body’s receptiveness to electrical stimulation for therapeutic means, but given the diffuse nature of electric fields, a key challenge is finding a way to localize their effects. Conductive polymers have proven a useful tool in this regard thanks to their good electrical properties and biocompatibility, and have been used in everything from neural implants to biosensors.

But now, a team at Stanford University has developed a way to genetically engineer neurons to build the materials into their own cell membranes. The approach could make it possible to target highly specific groups of cells, providing unprecedented control over the body’s response to electrical stimulation.

In a paper in Science, the team explained how they used re-engineered viruses to deliver DNA that hijacks cells’ biosynthesis machinery to create an enzyme that assembles electroactive polymers onto their membranes. This changes the electrical properties of the cells, which the team demonstrated could be used to control their behavior.

They used the approach to modulate neuronal firing in cultures of rat hippocampal neurons, mouse brain slices, and even human cortical spheroids. Most impressively, they showed that they could coax the neurons of living C. elegans worms to produce the polymers in large enough quantities to alter their behavior without impairing the cells’ natural function.

Translating the idea to humans poses major challenges, not least because the viruses used to deliver the genetic changes are still a long way from being approved for clinical use. But the ability to precisely target specific cells using a genetic approach holds enormous promise for bioelectronic medicine, Kevin Otto and Christine Schmidt from the University of Florida say in an accompanying perspective.

Interest is booming in therapies that use electrical stimulation of neural circuits as an alternative to drugs for diseases as varied as arthritis, Alzheimer’s, diabetes, and cardiovascular disease, and hundreds of clinical trials are currently underway.

At present these approaches rely on electrodes that can provide some level of localization, but because different kinds of nerve cells are often packed closely together it’s proven hard to stimulate exactly the right nerves, say Otto and Schmidt. This new approach makes it possible to boost the conductivity of specific cell types, which could make these kinds of interventions dramatically more targeted.

Besides disease-focused bioelectronic interventions, Otto and Schmidt say the approach could prove invaluable for helping to interface advanced prosthetics with patients’ nervous systems by making it possible to excite sensory neurons without accidentally triggering motor neurons, or vice versa.

More speculatively, the approach could one day help create far more efficient bridges between our minds and machines. One of the major challenges for brain-machine interfaces is recording from specific neurons, something a genetically targeted approach could greatly help with.

If the researchers can replicate the ability to build electronic-tissue “composites” in humans, we may be well on our way to the cyborg future predicted by science fiction.

Image Credit: Gerd Altmann from Pixabay
