#437120 The New Indiana Jones? AI. Here’s How ...
Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.
All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.
“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”
The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but manually combing through digitized landscapes is still time-consuming and laborious. Machine learning algorithms offer a way to sift through that complex data far more quickly.
AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.
In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map forest floors. The team taught the computer the shape, size, and texture characteristics of the mounds so it could identify potential sites from the digital 3D datasets that it analyzed.
“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
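Davis’s actual pipeline isn’t published in the article, but the core idea is simple enough to sketch. The hypothetical snippet below takes a LiDAR-derived elevation grid, measures how far each pixel rises above its local surroundings, and keeps blobs whose relief and footprint fall in a plausible “mound” range; the window size and thresholds are invented, and a real workflow would add the shape and texture features Davis describes.
```python
# Illustrative sketch: flag mound-like bumps in a LiDAR-derived elevation model.
# The window size and thresholds are hypothetical, not values from Davis's study.
import numpy as np
from scipy import ndimage

def find_mound_candidates(dem, window=61, min_relief=0.5, min_area=50, max_area=5000):
    """Return bounding slices of features that rise above their local surroundings."""
    local_mean = ndimage.uniform_filter(dem, size=window)  # neighborhood average elevation
    relief = dem - local_mean                              # how far each pixel sticks up
    bumps = relief > min_relief                            # noticeably raised pixels
    labels, n = ndimage.label(bumps)                       # group them into features
    sizes = ndimage.sum(bumps, labels, index=range(1, n + 1))
    slices = ndimage.find_objects(labels)
    return [slc for lab, slc in zip(range(1, n + 1), slices)
            if min_area <= sizes[lab - 1] <= max_area]     # keep mound-sized footprints

# Synthetic test terrain: a gentle slope with one artificial "mound" added.
dem = np.add.outer(np.linspace(0, 2, 400), np.linspace(0, 2, 400))
yy, xx = np.mgrid[0:400, 0:400]
dem += 1.5 * np.exp(-((yy - 200) ** 2 + (xx - 200) ** 2) / (2 * 10 ** 2))
print(len(find_mound_candidates(dem)), "candidate feature(s) flagged for manual checking")
```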
In Madagascar—where Davis is studying human settlement history across the world’s fourth largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers during the course of about a year.
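The Madagascar model’s details aren’t given either, but predictive site modeling in general means training a classifier on environmental variables at known site and non-site locations, then scoring unsurveyed ground. Here’s a toy version using a random forest on made-up covariates; the features, labels, and choice of model are all assumptions for illustration, not the published method.
```python
# Toy predictive site-location model (an assumption-laden sketch, not the real pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend training data: rows are surveyed locations, columns are remotely sensed
# covariates (distance to coast in km, elevation in m, a vegetation index).
n = 200
X = np.column_stack([
    rng.uniform(0, 20, n),    # distance to coast
    rng.uniform(0, 300, n),   # elevation
    rng.uniform(0, 1, n),     # NDVI-like vegetation index
])
# Pretend labels: known sites happen to cluster near the coast at low elevation.
y = ((X[:, 0] < 5) & (X[:, 1] < 100)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Score unsurveyed locations: high probabilities become targets for ground survey.
candidates = np.array([[2.0, 40.0, 0.6], [18.0, 250.0, 0.3]])
print(model.predict_proba(candidates)[:, 1])  # estimated probability of site presence
```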
Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change, such as coastal erosion from sea level rise, or other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.
“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.
Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.
Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers across the former extent of the Indus Valley civilization, in what is now Pakistan and India. Such mapping wouldn’t have been possible using satellite images alone.
That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. Beyond archaeology, the analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.
“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.
Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers referred to the use of AI in archaeology, according to the Web of Science, reputedly the world’s largest citation database. Last year, more than 65 papers were published concerning the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.
AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”
Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”
In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to restore missing text in ancient Greek inscriptions on damaged stone and ceramic objects.
Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.
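PYTHIA itself is a deep sequence model trained on tens of thousands of Greek inscriptions, so the snippet below isn’t it; it’s only a toy that shows the shape of the task: predict the characters hiding behind damaged positions from the surrounding context. The miniature “corpus” and the character-trigram scorer are stand-ins.
```python
# Toy illustration of the restoration task: fill positions marked "?" using the
# surrounding characters. The tiny "corpus" and trigram counts are stand-ins;
# the real PYTHIA is a neural network trained on tens of thousands of inscriptions.
from collections import Counter, defaultdict

corpus = ["ο δημος και η βουλη", "η βουλη και ο δημος", "ο δημος εποιησεν"]

# Count character trigrams to build a crude predictive model of the language.
trigrams = defaultdict(Counter)
for text in corpus:
    padded = f"  {text} "
    for a, b, c in zip(padded, padded[1:], padded[2:]):
        trigrams[a + b][c] += 1

def restore(damaged):
    """Replace each '?' with the character most likely to follow the two before it."""
    out = list(f"  {damaged}")
    for i, ch in enumerate(out):
        if ch == "?":
            counts = trigrams.get("".join(out[i - 2:i]))
            out[i] = counts.most_common(1)[0][0] if counts else "α"  # fallback guess
    return "".join(out).strip()

print(restore("ο δη?ος και η βο?λη"))  # -> "ο δημος και η βουλη"
```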
In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient script, one once inscribed on turtle shells and ox bones. The CNN managed to classify oracle bone morphology in order to piece together fragments of these divination objects, some with inscriptions that represent the earliest evidence of China’s recorded history.
“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.
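The study’s exact architecture isn’t described here, so treat the following as a generic, minimal sketch of the kind of classifier the passage refers to: a small CNN that takes grayscale images of fragments and outputs two class scores, ox bone or tortoise shell. The layer sizes and the 128x128 input are arbitrary choices for the example.
```python
# Generic, minimal CNN for the kind of two-way classification described above
# (ox bone vs. tortoise shell); not the architecture from the published study.
import torch
import torch.nn as nn

class OracleBoneCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # two classes: ox bone, tortoise shell

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a fake batch of grayscale rubbings (batch, channel, height, width).
model = OracleBoneCNN()
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```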
AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.
The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.
Also known as coprolites, paleofeces from humans and dogs are often found in the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.
“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.
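CoproID’s full pipeline also weighs how much host (human or dog) DNA a sample contains, but the microbiome-comparison half of the idea can be sketched in a few lines: score an ancient sample’s gut-taxon profile against modern human and dog reference profiles and pick the closer match. The taxa, abundances, and the Bray-Curtis metric below are illustrative choices, not the tool’s actual implementation.
```python
# Simplified sketch of the microbiome-comparison idea: compare an ancient sample's
# gut-taxon profile to modern human and dog references. Taxa and abundances are
# invented; the real tool also weighs the proportion of host DNA in the sample.
import numpy as np

taxa = ["Bacteroides", "Prevotella", "Fusobacterium", "Lactobacillus"]
reference = {
    "human": np.array([0.40, 0.35, 0.05, 0.20]),
    "dog":   np.array([0.15, 0.10, 0.55, 0.20]),
}
ancient_sample = np.array([0.38, 0.30, 0.10, 0.22])

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity: 0 means identical composition, 1 means nothing shared."""
    return np.abs(a - b).sum() / (a + b).sum()

similarity = {host: 1 - bray_curtis(ancient_sample, profile)
              for host, profile in reference.items()}
print(max(similarity, key=similarity.get), similarity)  # best-matching host and scores
```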
…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are also plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster doing the analysis yourself, Davis noted.
“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”
Still, there’s no telling what the future will hold for studying the past using artificial intelligence.
“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”
Image Credit: James Wheeler from Pixabay
#436911 Scientists Linked Artificial and ...
Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.
Whoa.
We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.
With Moore’s Law dying, we’ve even argued that neuromorphic computing is one path toward a future of extremely powerful, low-energy, artificial neural network-based computing in hardware that could in theory link up better with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.
This month, an international team put all of those ingredients together, turning theory into reality.
The three labs, scattered across Padova, Italy; Zurich, Switzerland; and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.
The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.
That’s not to say cyborg brains are coming soon. The simulation only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.
And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.
The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.
The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.
Because memory and processing occur on the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history will also influence how it behaves in the future, increasing flexibility and adaptability compared to computers. With the rise of deep learning, which loosely mimics neural processing, as the prima donna of AI, the need to reduce power while boosting speed and flexible learning is becoming ever more paramount in the AI community.
Neuromorphic computing was partially born out of this need. Most chips utilize special materials that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation: does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on information to another neuron.
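Stripped of the hardware details, the behavior these chips emulate can be written down in a few lines. The toy simulation below, with entirely arbitrary constants, pairs a leaky integrate-and-fire “neuron” that spikes once enough stimulation accumulates with a “synapse” whose conductance strengthens when spikes flow through it and relaxes when they don’t.
```python
# The behavior being emulated, boiled down: a leaky integrate-and-fire "neuron"
# plus a "synapse" whose conductance drifts with use. All constants are arbitrary;
# real neuromorphic chips implement this kind of dynamics in analog hardware.

def simulate(input_current, threshold=1.0, leak=0.9, conductance=0.5, step=0.05):
    potential, output = 0.0, []
    for current in input_current:
        potential = potential * leak + current       # membrane charges, then leaks
        if potential >= threshold:                   # the neuron fires a spike...
            potential = 0.0                          # ...and resets
            conductance += step * (1 - conductance)  # synapse strengthens with activity
            output.append(conductance)               # drive passed to the next neuron
        else:
            conductance -= step * 0.1 * conductance  # and slowly relaxes while silent
            output.append(0.0)
    return output, conductance

spikes, final_conductance = simulate([0.4] * 20)     # steady input stimulation
print(sum(1 for s in spikes if s > 0), "spikes; final conductance:",
      round(final_conductance, 3))
```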
Although single neuromorphic chips have proven far more efficient and powerful than conventional computer chips at running machine learning algorithms on toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.
That’s what this study did.
A Hybrid Network
Still with me? Let’s talk network.
It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.
Both these properties—easier or harder to activate downstream neurons—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is LTD (long-term depression). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and have ever since been considered the biological basis of how the brain learns and remembers, and implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
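In those classic experiments, the direction of the change depends on how the upstream neuron is stimulated: brief high-frequency bursts induce LTP, while long low-frequency trains induce LTD. A toy rule capturing that frequency dependence might look like this; the 10 Hz cutoff and step sizes are invented for illustration.
```python
# Toy version of the classic result: high-frequency stimulation strengthens the
# connection (LTP), low-frequency stimulation weakens it (LTD). The 10 Hz cutoff
# and the step sizes are invented for illustration.

def update_weight(weight, stim_freq_hz, n_pulses, cutoff_hz=10.0):
    for _ in range(n_pulses):
        if stim_freq_hz > cutoff_hz:
            weight += 0.01 * (1.0 - weight)  # potentiate, saturating toward 1.0
        else:
            weight -= 0.01 * weight          # depress, decaying toward 0.0
    return weight

w = 0.5
print("after a 100 Hz tetanus:", round(update_weight(w, 100, 100), 3))  # weight rises (LTP)
print("after a 1 Hz train:   ", round(update_weight(w, 1, 100), 3))     # weight falls (LTD)
```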
So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.
To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.
Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial return pathway. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control their faraway artificial partner.
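The paper’s actual message format isn’t spelled out in the article, but the “standard internet protocols” part is easy to picture: each spike event gets digitized into a small message, shipped to the remote site, and decoded into stimulation there. The sketch below is a purely hypothetical loopback version using JSON over UDP; the field names, address, and port are invented.
```python
# Hypothetical sketch of "spikes over the internet": digitize spike events and
# ship them with standard protocols. The JSON fields, address, and port are
# invented; the actual setup's message format isn't described in the article.
import json
import socket
import time

ADDRESS = ("127.0.0.1", 9999)  # stand-in for a remote lab's endpoint

def send_spike(sock, source, amplitude):
    event = {"source": source, "t": time.time(), "amplitude": amplitude}
    sock.sendto(json.dumps(event).encode(), ADDRESS)

def receive_spike(sock):
    data, _ = sock.recvfrom(4096)
    return json.loads(data.decode())

# Loopback demo: one socket plays the Zurich "silicon neuron," the other plays the
# Padova rig waiting for a stimulation command.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(ADDRESS)
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

send_spike(sender, source="silicon_neuron_zurich", amplitude=0.8)
print(receive_spike(listener))  # the "biological" side decodes and applies the stimulus

sender.close()
listener.close()
```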
Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.
You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.
Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.
Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.
It’s utterly insane seeing a classic neuroscience learning experiment repeated in a network that integrates artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD have come under fire recently as the de facto brain mechanisms for learning, though so far they remain cemented as neuroscience dogma.
However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path towards understanding the brain, we’re recreating it, in hardware—if just a little.
While the study doesn’t have immediate practical use cases, it does give both the neuromorphic computing and neuroprosthetic fields a boost.
“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”
Image Credit: Gerd Altmann from Pixabay