Tag Archives: modern

#437345 Moore’s Law Lives: Intel Says Chips ...

If you weren’t already convinced the digital world is taking over, you probably are now.

To keep the economy on life support as people stay home to stem the viral tide, we’ve been forced to digitize interactions at scale (for better and worse). Work, school, events, shopping, food, politics. The companies at the center of the digital universe are now powerhouses of the modern era—worth trillions and nearly impossible to avoid in daily life.

Six decades ago, this world didn’t exist.

A humble microchip in the early 1960s would have boasted a handful of transistors. Now, your laptop or smartphone runs on a chip with billions of transistors. As first described by Moore’s Law, this is possible because the number of transistors on a chip doubled with extreme predictability every two years for decades.
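To get a feel for that compounding, here’s a quick back-of-the-envelope sketch in Python; the starting count and dates are illustrative stand-ins, not historical data.

```python
# Moore's Law as compound doubling: the transistor count doubles every
# two years. Starting figure and time span are illustrative only.
transistors = 4  # a "handful" on an early-1960s chip
for year in range(1962, 2022, 2):  # 30 two-year steps
    transistors *= 2  # one doubling per step
print(f"{transistors:,}")  # 4,294,967,296 -- billions, as promised
```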

But now progress is faltering as the size of transistors approaches physical limits, and the money and time it takes to squeeze a few more onto a chip are growing. There’ve been many predictions that Moore’s Law is, finally, ending. But, perhaps also predictably, the company whose founder coined Moore’s Law begs to differ.

In a keynote presentation at this year’s Hot Chips conference, Intel’s chief architect, Raja Koduri, laid out a roadmap to increase transistor density—that is, the number of transistors you can fit on a chip—by a factor of 50.

“We firmly believe there is a lot more transistor density to come,” Koduri said. “The vision will play out over time—maybe a decade or more—but it will play out.”

Why the optimism?

Calling the end of Moore’s Law is a bit of a tradition. As Peter Lee, vice president at Microsoft Research, quipped to The Economist a few years ago, “The number of people predicting the death of Moore’s Law doubles every two years.” To date, prophets of doom have been premature, and though the pace is slowing, the industry continues to dodge death with creative engineering.

Koduri believes the trend will continue this decade and outlined the upcoming chip innovations Intel thinks can drive more gains in computing power.

Keeping It Traditional
First, engineers can further shrink today’s transistors. Fin field effect transistors (or FinFET) first hit the scene in the 2010s and have since pushed chip features past 14 and 10 nanometers (or nodes, as such size checkpoints are called). Koduri said FinFET will again triple chip density before it’s exhausted.

The Next Generation
FinFET will hand the torch off to nanowire transistors (also known as gate-all-around transistors).

Here’s how they’ll work. A transistor is made up of three basic components: the source, where current is introduced; the gate and channel, where current selectively flows; and the drain, where current exits. The gate is like a light switch. It controls how much current flows through the channel. A transistor is “on” when the gate allows current to flow and “off” when no current flows. The smaller transistors get, the harder it is to control that current.

FinFET maintained fine control of current by surrounding the channel with a gate on three sides. Nanowire designs kick that up a notch by surrounding the channel with a gate on four sides (hence, gate-all-around). They’ve been in the works for years and are expected around 2025. Koduri said first-generation nanowire transistors will be followed by stacked nanowire transistors, and together, they’ll quadruple transistor density.

Building Up
Growing transistor density won’t only be about shrinking transistors; it will also mean going 3D.

This is akin to how skyscrapers increase a city’s population density by adding more usable space on the same patch of land. Along those lines, Intel recently launched its Foveros chip design. Instead of laying a chip’s various “neighborhoods” next to each other in a 2D silicon sprawl, they’ve stacked them on top of each other like a layer cake. Chip stacking isn’t entirely new, but it’s advancing and being applied to general purpose CPUs, like the chips in your phone and laptop.

Koduri said 3D chip stacking will quadruple transistor density.
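Taken together, those multipliers roughly account for the 50x target from Koduri’s keynote. A quick sanity check using only the figures quoted above:

```python
# Composing Intel's stated density multipliers from this roadmap.
finfet_gain = 3    # further FinFET scaling triples density
nanowire_gain = 4  # nanowire + stacked nanowire transistors quadruple it
stacking_gain = 4  # 3D chip stacking quadruples it again
print(finfet_gain * nanowire_gain * stacking_gain)  # 48, i.e. roughly 50x
```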

A Self-Fulfilling Prophecy
The technologies Koduri outlined are an evolution of the same general technology in use today. That is, we don’t yet need quantum computing or nanotube transistors to augment or replace silicon chips. Rather, as it has done many times over the years, the chip industry will get creative with the design of its core product to realize gains for another decade.

Last year, veteran chip engineer Jim Keller, who at the time was Intel’s head of silicon engineering but has since left the company, told MIT Technology Review there are more than 100 variables driving Moore’s Law (including 3D architectures and new transistor designs). From the standpoint of pure performance, it’s also about how efficiently software uses all those transistors. Keller suggested that with some clever software tweaks “we could get chips that are a hundred times faster in 10 years.”

But whether Intel’s vision pans out as planned is far from certain.

Intel’s faced challenges recently, taking five years instead of two to move its chips from 14 nanometers to 10 nanometers. After announcing a six-month delay for its 7-nanometer chips, it’s now a year behind schedule and lagging rivals that already offer 7-nanometer chips. This is a key point. Yes, chipmakers continue making progress, but it’s getting harder and more expensive, and timelines are stretching.

The question isn’t whether Intel and its competitors can cram more transistors onto a chip; even Intel rival TSMC agrees that’s clearly possible. It’s how long it will take and at what cost.

That said, demand for more computing power isn’t going anywhere.

Amazon, Microsoft, Alphabet, Apple, and Facebook now make up a whopping 20 percent of the stock market’s total value. By that metric, tech is the most dominant industry in at least 70 years. And new technologies—from artificial intelligence and virtual reality to a proliferation of Internet of Things devices and self-driving cars—will demand better chips.

There’s ample motivation to push computing to its bitter limits and beyond. As is often said, Moore’s Law is a self-fulfilling prophecy, and likely whatever comes after it will be too.

Image credit: Laura Ockel / Unsplash

Posted in Human Robots

#437293 These Scientists Just Completed a 3D ...

Human brain maps are a dime a dozen these days. Maps that detail neurons in a certain region. Maps that draw out functional connections between those cells. Maps that dive deeper into gene expression. Or even meta-maps that combine all of the above.

But have you ever wondered: how well do those maps represent my brain? After all, no two brains are alike. And if we’re ever going to reverse-engineer the brain as a computer simulation—as Europe’s Human Brain Project is trying to do—shouldn’t we ask whose brain they’re hoping to simulate?

Enter a new kind of map: the Julich-Brain, a probabilistic map of human brains that accounts for individual differences using a computational framework. Rather than a static PDF of a brain map, the Julich-Brain atlas is dynamic, continuously changing to incorporate the latest brain-mapping results. So far, the map draws on over 24,000 thinly sliced sections from 23 postmortem brains spanning most of adulthood, all mapped at the cellular level. The atlas can also adapt to progress in mapping technologies to aid brain modeling and simulation, and link to other atlases and alternatives.

In other words, rather than “just another” human brain map, the Julich-Brain atlas is its own neuromapping API—one that could unite previous brain-mapping efforts with more modern methods.

“It is exciting to see how far the combination of brain research and digital technologies has progressed,” said Dr. Katrin Amunts of the Institute of Neuroscience and Medicine at Research Centre Jülich in Germany, who spearheaded the study.

The Old Dogma
The Julich-Brain atlas embraces traditional brain-mapping while also yanking the field into the 21st century.

First, the new atlas includes the brain’s cytoarchitecture, or how brain cells are organized. As brain maps go, these kinds of maps are the oldest and most fundamental. Rather than exploring how neurons talk to each other functionally—which is all the rage these days with connectome maps—cytoarchitecture maps draw out the physical arrangement of neurons.

Like a census, these maps literally capture how neurons are distributed in the brain, what they look like, and how they layer within and between different brain regions.

Because neurons aren’t packed together the same way in different brain regions, this provides a way to parse the brain into areas that can be further studied. When we speak of the brain’s “memory center,” the hippocampus, or its “emotion center,” the amygdala, these distinctions are based on cytoarchitectural maps.

Some may call this type of mapping “boring.” But cytoarchitecture maps form the very basis of any sort of neuroscience understanding. Like hand-drawn maps from early explorers sailing to the western hemisphere, these maps chart the brain’s geographical patterns, from which we try to decipher functional connections. If brain regions are cities, then cytoarchitecture maps lay out the cities themselves, and functional studies trace the trading and other activities flowing along the highways that link them.

You might’ve heard of the most common cytoarchitecture map used today: the Brodmann map from 1909 (yup, that old), which divided the brain into classical regions based on the cells’ morphology and location. The map, while impactful, wasn’t able to account for brain differences between people. More recent brain-mapping technologies have allowed us to dig deeper into neuronal differences and divide the brain into more regions—180 areas in the cortex alone, compared with 43 in the original Brodmann map.

The new study took inspiration from that age-old map and transformed it into a digital ecosystem.

A Living Atlas
Work began on the Julich-Brain atlas in the mid-1990s, with a little help from the crowd.

The preparation of human tissue and its microstructural mapping, analysis, and data processing is incredibly labor-intensive, the authors lamented, making it impossible to do for the whole brain at high resolution in just one lab. To build their “Google Earth” for the brain, the team hooked up with EBRAINS, a shared computing platform set up by the Human Brain Project to promote collaboration between neuroscience labs in the EU.

First, the team acquired MRI scans of 23 postmortem brains, sliced the brains into wafer-thin sections, and scanned and digitized them. They corrected distortions from the chopping using data from the MRI scans and then lined up neurons in consecutive sections—picture putting together a 3D puzzle—to reconstruct the whole brain. Overall, the team had to analyze 24,000 brain sections, which prompted them to build a computational management system for individual brain sections—a win, because they could now track individual donor brains too.

Their method was quite clever. They first mapped their results to a brain template from a single person, called the MNI-Colin27 template. Because the reference brain was extremely detailed, this allowed the team to better figure out the location of brain cells and regions in a particular anatomical space.
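Mechanically, “mapping onto a template” means transforming each donor brain’s coordinates (and resampling its data) into the reference brain’s space. Here’s a bare-bones sketch of the coordinate step, with a made-up affine matrix standing in for the output of a real registration tool:

```python
import numpy as np

# Hypothetical 4x4 affine mapping donor-brain coordinates (in mm) into a
# reference space such as MNI-Colin27. A real matrix would come from an
# image-registration tool; this one is hand-written purely to illustrate.
affine = np.array([
    [1.02,  0.00, 0.00, -1.5],  # slight scaling plus a shift in x
    [0.00,  0.98, 0.03,  2.0],
    [0.00, -0.03, 0.97,  0.5],
    [0.00,  0.00, 0.00,  1.0],
])

point = np.array([10.0, -24.0, 8.0, 1.0])  # homogeneous coordinates
mapped = affine @ point
print(mapped[:3])  # the same anatomical location, now in template space
```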

However, MNI-Colin27 isn’t your brain or mine, or any of the brains the team analyzed. To dilute any of Colin’s potential brain quirks, the team also mapped their dataset onto an “average brain,” dubbed the ICBM2009c (catchy, I know).

This step allowed the team to “standardize” their results with everything else from the Human Connectome Project and the UK Biobank, kind of like adding their Google Maps layer to the existing map. To highlight individual brain differences, the team overlaid their dataset on existing ones, and looked for differences in the cytoarchitecture.

The microscopic architecture of neurons changes between two areas (dotted line), forming the basis of different identifiable brain regions. To account for individual differences, the team also calculated a probability map (right hemisphere). Image credit: Forschungszentrum Juelich / Katrin Amunts
Based on structure alone, the brains were remarkably different and shockingly similar at once. For example, the cortices—the outermost layer of the brain—differed physically across donor brains of different ages and sexes. The region most divergent between people was Broca’s region, which is traditionally linked to speech production. In contrast, parts of the visual cortex were almost identical between the brains.

The Brain-Mapping Future
Rather than relying on the brain’s visible “landmarks,” which can still differ between people, the probabilistic map is far more precise, the authors said.
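Conceptually, a probabilistic map is straightforward to compute once every brain lives in one reference space: for each voxel, take the fraction of donor brains in which that voxel falls inside a given region. A toy numpy sketch, with random placeholder masks rather than Julich data:

```python
import numpy as np

# Stand-in data: binary masks for one region ("is this voxel part of
# Broca's region?") from 23 brains, all registered to a shared space.
rng = np.random.default_rng(0)
masks = rng.random((23, 64, 64, 64)) > 0.7  # random placeholders

# Probabilistic map: per-voxel fraction of brains containing the region.
prob_map = masks.mean(axis=0)

# Voxels where most brains agree form a high-confidence core.
core = prob_map > 0.9
print(prob_map.max(), int(core.sum()))
```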

What’s more, the map could also pool as-yet-unmapped regions of the cortex—about 30 percent or so—into “gap maps,” providing neuroscientists with a better idea of what still needs to be understood.

“New maps are continuously replacing gap maps with progress in mapping while the process is captured and documented … Consequently, the atlas is not static but rather represents a ‘living map,’” the authors said.

Thanks to its structural detail down to individual cells, the atlas can contribute to brain modeling and simulation down the line—especially personalized brain models for neurological disorders such as epilepsy. Researchers can also use the framework for other species, and they can even incorporate new data-crunching processes into the workflow, such as mapping brain regions using artificial intelligence.

Fundamentally, the goal is to build shared resources to better understand the brain. “[These atlases] help us—and more and more researchers worldwide—to better understand the complex organization of the brain and to jointly uncover how things are connected,” the authors said.

Image credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University

Posted in Human Robots

#437236 Why We Need Mass Automation to ...

The scale of goods moving around the planet at any moment is staggering. Raw materials are dug up in one country, spun into parts and pieces in another, and assembled into products in a third. Crossing oceans and continents, they find their way to a local store or direct to your door.

Magically, a roll of toilet paper, power tool, or tube of toothpaste is there just when you need it.

Even more staggering is that this whole system, the global supply chain, works so well that it’s effectively invisible most of the time. Until now, that is. The pandemic has thrown a floodlight on the inner workings of this modern wonder—and it’s exposed massive vulnerabilities.

The e-commerce supply chain is an instructive example. As the world went into lockdown, and everything non-essential went online, demand for digital fulfillment skyrocketed.

Even under “normal” conditions, most e-commerce warehouses were struggling to meet demand. But Covid-19 has further strained the ability to cope with shifting supply, an unprecedented tidal wave of orders, and labor shortages. Local stores are running out of key products. Online grocers and e-commerce platforms are suspending some home deliveries, restricting online purchases of certain items, and limiting new customers. The whole system is being severely tested.

Why? Despite an abundance of 21st century technology, we’re stuck in the 20th century.

Today’s supply chain consists of fleets of ships, trucks, warehouses, and importantly, people scattered around the world. While there are some notable instances of advanced automation, the overwhelming majority of work is still manual, resembling a sort of human-powered bucket brigade, with people wandering around warehouses or standing alongside conveyor belts. Each package of diapers or bottle of detergent ordered by an online customer might be touched dozens of times by warehouse workers before finding its way into a box delivered to a home.

The pandemic has proven the critical need for innovation, driven by surging demand, concerns about the health and safety of workers, and the need for traceability and safety in products and services.

At the 2020 World Economic Forum, there was much discussion about the ongoing societal transformation in which humans and machines work in tandem, automating and augmenting the way we get things done. At the time, pre-pandemic, debate trended toward skepticism and fear of job losses, with some even questioning the ethics and need for these technologies.

Now, we see things differently. To make the global supply chain more resilient to shocks like Covid-19, we must look to technology.

Perfecting the Global Supply Chain: The Massive ‘Matter Router’
Technology has faced and overcome similar challenges in the past.

World War II, for example, drove innovation in techniques for rapid production of many products on a large scale, including penicillin. We went from the availability of one dose of the drug in 1941, to four million sterile packages of the drug every month four years later.

Similarly, today’s companies, big and small, are looking to automation, robotics, and AI to meet the pandemic head on. These technologies are crucial to scaling the infrastructure that will fulfill most of the world’s e-commerce and food distribution needs.

You can think of this new infrastructure as a rapidly evolving “matter router” that will employ increasingly complex robotic systems to move products more freely and efficiently.

Robots powered by specialized AI software, for example, are already learning to adapt to changes in their environment using the most recent advances in industrial robotics and machine learning. When customer orders suddenly shift to dramatically different items, these robots don’t need to stop or be reprogrammed. They can perform new tasks by learning from experience, using low-cost camera systems and deep learning for visual and image recognition.

These more flexible robots can work around the clock, helping make facilities less sensitive to sudden changes in workforce and customer demand and strengthening the supply chain.

Today, e-commerce is roughly 12 percent of retail sales in the US and is expected to rise well beyond 25 percent within the decade, fueled by changes in buying habits. However, analysts have begun to consider whether the current crisis might cause permanent jumps in those numbers, as it has in the past (for instance with the SARS epidemic in China in 2003). Whatever happens, the larger supply chain will benefit from greater, more flexible automation, especially during global crises.

We must create what Hamza Mudassir of the University of Cambridge calls a “resilient ecosystem that links multiple buyers with multiple vendors, across a mesh of supply chains.” This ecosystem must be backed by robust, efficient, and scalable automation that uses robotics, autonomous vehicles, and the Internet of Things to help track the flow of goods through the supply chain.

The good news? We can accomplish this with technologies we have today.

Image credit: Guillaume Bolduc / Unsplash

Posted in Human Robots

#437216 New Report: Tech Could Fuel an Age of ...

With rapid technological progress running headlong into dramatic climate change and widening inequality, most experts agree the coming decade will be tumultuous. But a new report predicts it could actually make or break civilization as we know it.

The idea that humanity is facing a major shake-up this century is not new. The Fourth Industrial Revolution being brought about by technologies like AI, gene editing, robotics, and 3D printing is predicted to cause dramatic social, political, and economic upheaval in the coming decades.

But according to think tank RethinkX, thinking about the coming transition as just another industrial revolution is too simplistic. In a report released last week called Rethinking Humanity, the authors argue that we are about to see a reordering of our relationship with the world as fundamental as when hunter-gatherers came together to build the first civilizations.

At the core of their argument is the fact that since the first large human settlements appeared 10,000 years ago, civilization has been built on the back of our ability to extract resources from nature, be they food, energy, or materials. This has led to a competitive landscape where the governing logic is grow or die, a logic that has driven all civilizations to date.

That could be about to change thanks to emerging technologies that will fundamentally disrupt the five foundational sectors underpinning society: information, energy, food, transportation, and materials. They predict that across all five, costs will fall by 10 times or more, while production processes will become 10 times more efficient and will use 90 percent fewer natural resources with 10 to 100 times less waste.

They say that this transformation has already happened in information, where the internet has dramatically reduced barriers to communication and knowledge. They predict the combination of cheap solar and grid storage will soon see energy costs drop as low as one cent per kilowatt hour, and they envisage widespread adoption of autonomous electric vehicles and the replacement of car ownership with ride-sharing.

The authors laid out their vision for the future of food in another report last year, where they predicted that traditional agriculture would soon be replaced by industrial-scale brewing of single-celled organisms genetically modified to produce all the nutrients we need. In a similar vein, they believe the same processes combined with additive manufacturing and “nanotechnologies” will allow us to build all the materials required for the modern world from the molecule up rather than extracting scarce natural resources.

They believe this could allow us to shift from a system of production based on extraction to one built on creation, as limitless renewable energy makes it possible to build everything we need from scratch and barriers to movement and information disappear. As a result, a lifestyle worthy of the “American Dream” could be available to anyone for as little as $250/month by 2030.

This will require a fundamental reimagining of our societies, though. All great civilizations have eventually hit fundamental limits on their growth, and we are no different, as demonstrated by our growing impact on the environment and the increasing concentration of wealth. Historically, this stage of development has led to a doubling down on old tactics in search of short-term gains, but this invariably leads to the collapse of the civilization.

The authors argue that we’re in a unique position. Because of the technological disruption detailed above, we have the ability to break through the limits on our growth. But only if we change what the authors call our “Organizing System.” They describe this as “the prevailing models of thought, belief systems, myths, values, abstractions, and conceptual frameworks that help explain how the world works and our relationship to it.”

They say that the current hierarchical, centralized system based on nation-states is unfit for the new system of production that is emerging. The cracks are already starting to appear, with problems like disinformation campaigns, fake news, and growing polarization demonstrating how ill-suited our institutions are for dealing with the distributed nature of today’s information systems. And as this same disruption comes to the other foundational sectors the shockwaves could lead to the collapse of civilization as we know it.

Their solution is a conscious shift towards a new way of organizing the world. As emerging technology allows communities to become self-sufficient, flows of physical resources will be replaced by flows of information, and we will require a decentralized but highly networked Organizing System.

The report includes detailed recommendations on how to usher this in. Examples include giving individuals control and ownership of data rights; developing new models for community ownership of energy, information, and transportation networks; and allowing states and cities far greater autonomy on policies like immigration, taxation, education, and public expenditure.

How easy it will be to get people on board with such a shift is another matter. The authors say it may require us to re-examine the foundations of our society, like representative democracy, capitalism, and nation-states. While they acknowledge that these ideas are deeply entrenched, they appear to believe we can reason our way around them.

That seems optimistic. Cultural and societal change can be glacial, and efforts to impose it top-down through reason and logic are rarely successful. The report seems to brush over many of the messy realities of humanity, such as the huge sway that tradition and religion hold over the vast majority of people.

It also doesn’t deal with the uneven distribution of the technology that is supposed to catapult us into this new age. And while the predicted revolutions in transportation, energy, and information do seem inevitable, the idea that in the next decade or two we’ll be able to produce any material we desire using cheap and abundant stock materials seems like a stretch.

Despite the techno-utopianism, though, many of the ideas in the report hold promise for building societies that are better adapted to the disruptive new age we are about to enter.

Image Credit: Futuristic Society/flickr

Posted in Human Robots

#437120 The New Indiana Jones? AI. Here’s How ...

Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.

All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.

“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”

The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but the job of manually mowing their way across digitized landscapes is still time-consuming and laborious. Machine learning algorithms offer a way to parse through complex data far more quickly.

AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.

In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map forest floors. The team taught the computer the shape, size, and texture characteristics of the mounds so it could identify potential sites from the digital 3D datasets that it analyzed.

“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
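As a rough illustration of that kind of detector (not Davis’s actual code), each candidate bump in a LiDAR elevation model could be summarized by a few shape and texture features, with an off-the-shelf classifier trained to score it:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each candidate "bump" in a LiDAR-derived
# elevation model, summarized by hand-picked features.
# Columns: height (m), diameter (m), circularity, surface roughness.
X = np.array([
    [1.8, 22.0, 0.91, 0.12],  # labeled mound
    [2.1, 30.0, 0.88, 0.10],  # labeled mound
    [0.4, 55.0, 0.35, 0.40],  # natural rise
    [0.9, 12.0, 0.52, 0.33],  # tree-fall mound / noise
])
y = np.array([1, 1, 0, 0])    # 1 = archaeological mound, 0 = not

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a fresh candidate; humans then field-check the high scorers,
# just as Davis and colleagues checked their detections by hand.
candidate = np.array([[1.6, 25.0, 0.85, 0.15]])
print(clf.predict_proba(candidate)[0, 1])  # probability it's a mound
```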

In Madagascar—where Davis is studying human settlement history across the world’s fourth largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers during the course of about a year.

Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change, such as coastal erosion from sea level rise, or other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.

“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.

Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.

Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers across the territory of the Indus Valley civilization, in what is now part of modern Pakistan and India. Such AI-powered mapping techniques wouldn’t be possible using satellite images alone.

That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. However, the analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.

“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.

Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers referred to the use of AI in archaeology, according to the Web of Science, reputedly the world’s largest global citation database. Last year, more than 65 papers were published concerning the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.

AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”

Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”

In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to restore missing text in ancient Greek inscriptions on damaged surfaces of objects made of stone or ceramics.

Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.
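PYTHIA itself is a deep neural network, but the task it solves can be sketched with far simpler machinery: learn character statistics from intact text, then rank candidates for each damaged position. A toy bigram version, for illustration only (the corpus is a made-up English stand-in, not ancient Greek):

```python
from collections import Counter, defaultdict

# Stand-in "corpus" of intact text; PYTHIA trains on tens of thousands
# of real Greek inscriptions instead.
corpus = "the council and the people dedicated this to the gods the council decreed"

# Count character bigrams: how often each character follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def restore(text, hole="_"):
    """Fill each damaged position with the likeliest next character."""
    chars = list(text)
    for i, c in enumerate(chars):
        if c == hole and i > 0 and bigrams[chars[i - 1]]:
            chars[i] = bigrams[chars[i - 1]].most_common(1)[0][0]
    return "".join(chars)

print(restore("the co_ncil dedicated this"))  # -> the council dedicated this
```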

In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient tongue once found on turtle shells and ox bones. The CNN managed to classify oracle bone morphology in order to piece together fragments of these divination objects, some with inscriptions that represent the earliest evidence of China’s recorded history.

“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.
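That material-classification step is, at its core, a standard image-classification setup. A skeletal PyTorch sketch with random placeholder data (the layer sizes are illustrative, not the published architecture):

```python
import torch
import torch.nn as nn

# Minimal CNN for binary material classification (ox bone vs. turtle shell).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes
)

# Placeholder batch: 8 grayscale 64x64 fragment images, random labels.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))

# One training step on the placeholder batch.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```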

AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.

The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.

Also known as coprolites, paleofeces from humans and dogs are often found in the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.
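The comparison at the heart of CoproID can be pictured as measuring how close an ancient sample’s microbial profile sits to human and dog reference profiles. A hedged toy version using cosine similarity over a handful of made-up taxon abundances (the real pipeline works on sequenced DNA, not four-number vectors):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two abundance vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up relative abundances of a few gut microbe taxa.
# Columns: Bacteroides, Prevotella, Lactobacillus, Fusobacterium.
human_ref = np.array([0.45, 0.30, 0.20, 0.05])
dog_ref = np.array([0.15, 0.10, 0.25, 0.50])

coprolite = np.array([0.40, 0.28, 0.22, 0.10])  # the ancient sample

scores = {"human": cosine(coprolite, human_ref),
          "dog": cosine(coprolite, dog_ref)}
print(max(scores, key=scores.get), scores)  # closest reference wins
```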

“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.

…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are also plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster doing the analysis yourself, Davis noted.

“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”

Still, there’s no telling what the future will hold for studying the past using artificial intelligence.

“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”

Image Credit: James Wheeler from Pixabay

Posted in Human Robots