
#437120 The New Indiana Jones? AI. Here’s How ...

Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.

All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.

“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”

The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but the job of manually mowing their way across digitized landscapes is still time-consuming and laborious. Machine learning algorithms offer a way to parse through complex data far more quickly.

AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.

In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map forest floors. The team taught the computer the shape, size, and texture characteristics of the mounds so it could identify potential sites from the digital 3D datasets that it analyzed.

“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
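
Davis's actual pipeline isn't reproduced here, but the general recipe (summarize each candidate bump in the elevation model with shape, size, and texture statistics, train a classifier on verified sites, then hand-check whatever the model flags) can be sketched roughly as follows. The feature choices, synthetic data, and random-forest classifier are illustrative assumptions, not the published method.

```python
# Rough sketch of mound detection from a LiDAR-derived elevation model.
# Feature choices and the classifier are illustrative, not the published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def candidate_features(patch):
    """Summarize a small elevation patch: relief, footprint, and surface texture."""
    relief = patch.max() - patch.min()          # how tall the bump is
    area = (patch > patch.mean()).sum()         # rough footprint of raised cells
    roughness = np.std(np.gradient(patch)[0])   # texture of the surface
    return [relief, area, roughness]

# Synthetic stand-ins for labeled LiDAR patches: "mounds" are smooth domes,
# "background" is noisy terrain. Real training data would be hand-verified sites.
def synthetic_patch(is_mound):
    yy, xx = np.mgrid[-1:1:32j, -1:1:32j]
    noise = rng.normal(0, 0.05, (32, 32))
    dome = 0.6 * np.exp(-(xx**2 + yy**2) * 4) if is_mound else 0.0
    return dome + noise

labels = rng.integers(0, 2, 200)
X = np.array([candidate_features(synthetic_patch(bool(y))) for y in labels])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Score a new candidate patch; a high probability flags it for manual checking,
# mirroring the "machine proposes, archaeologist verifies" workflow.
print(clf.predict_proba([candidate_features(synthetic_patch(True))]))
```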

In Madagascar—where Davis is studying human settlement history across the world’s fourth largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers during the course of about a year.

Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change, such as coastal erosion from sea level rise, or other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.

“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.

Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.

Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine-learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers along the Indus Valley civilization in what is now part of modern Pakistan and India. Mapping at that scale wouldn’t have been possible by analyzing the satellite imagery by hand.

That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. However, the analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.

“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.

Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers referred to the use of AI in archaeology, according to the Web of Science, reputedly the world’s largest global citation database. Last year, more than 65 papers were published concerning the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.

AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”

Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”

In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to recreate missing inscriptions in ancient Greek from damaged surfaces of objects made of stone or ceramics.

Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.
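
Pythia itself is a deep neural network trained on tens of thousands of real Greek inscriptions, far beyond a blog snippet. The toy sketch below captures only the core idea: rank candidate fill-ins for a missing span by how plausible they look next to the surviving text. It uses a crude character bigram model and a made-up English corpus.

```python
# Toy illustration of text restoration: rank candidate fill-ins for a gap by how
# plausible they are under a character bigram model. Pythia does this with a deep
# neural network over real Greek corpora; the corpus and candidates here are made up.
from collections import Counter
from itertools import product

corpus = "the people honored the god with a stone altar in the temple"

# Count character bigrams to build a crude language model.
bigrams = Counter(zip(corpus, corpus[1:]))

def score(text):
    """Sum of (smoothed) bigram frequencies; higher means more plausible."""
    return sum(bigrams.get(pair, 0) + 1 for pair in zip(text, text[1:]))

def restore(damaged, gap="?", alphabet="abcdefghijklmnopqrstuvwxyz "):
    """Try every combination of characters in the gap positions and rank them."""
    holes = [i for i, ch in enumerate(damaged) if ch == gap]
    candidates = []
    for combo in product(alphabet, repeat=len(holes)):
        chars = list(damaged)
        for i, ch in zip(holes, combo):
            chars[i] = ch
        candidate = "".join(chars)
        candidates.append((score(candidate), candidate))
    return sorted(candidates, reverse=True)[:3]

# Two missing characters in "sto??": the model prefers restorations that resemble
# sequences it has already seen, such as "stone".
print(restore("the god with a sto?? altar"))
```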

In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient tongue once found on turtle shells and ox bones. The CNN managed to classify oracle bone morphology in order to piece together fragments of these divination objects, some with inscriptions that represent the earliest evidence of China’s recorded history.

“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.

AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.

The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.
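
Under the hood, CoproID works on sequenced ancient DNA, host DNA content, and source-tracking models, so the sketch below is only the comparison idea in miniature: describe a sample by the relative abundances of gut microbes, then ask whether it looks more human-like or dog-like. The taxa and numbers are invented for illustration.

```python
# Toy version of the comparison idea behind CoproID: represent a sample by the
# relative abundance of gut microbes, then ask whether it looks more like modern
# human or modern dog feces. Taxa and numbers are made up; the real tool works on
# sequenced DNA reads and source-tracking models.
import numpy as np

# Vector positions correspond to these (hypothetical) reference taxa.
taxa = ["Bacteroides", "Prevotella", "Fusobacterium", "Lactobacillus"]

references = {
    "human": np.array([0.45, 0.35, 0.05, 0.15]),  # hypothetical human gut profile
    "dog":   np.array([0.20, 0.10, 0.50, 0.20]),  # hypothetical dog gut profile
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(sample):
    """Return similarity of an ancient sample's microbiome to each reference."""
    sample = sample / sample.sum()  # normalize to relative abundances
    return {host: round(cosine(sample, profile), 3) for host, profile in references.items()}

# A coprolite whose microbiome is dominated by Fusobacterium looks dog-like here.
ancient_sample = np.array([0.15, 0.05, 0.55, 0.25])
print(classify(ancient_sample))
```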

Also known as coprolites, paleofeces from humans and dogs are often found in the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.

“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.

…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are also plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster doing the analysis yourself, Davis noted.

“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”

Still, there’s no telling what the future will hold for studying the past using artificial intelligence.

“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”

Image Credit: James Wheeler from Pixabay


#437103 How to Make Sense of Uncertainty in a ...

As the internet churns with information about Covid-19, about the virus that causes the disease, and about what we’re supposed to do to fight it, it can be difficult to see the forest for the trees. What can we realistically expect for the rest of 2020? And how do we even know what’s realistic?

Today, humanity’s primary, ideal goal is to eliminate the virus, SARS-CoV-2, and Covid-19. Our second-choice goal is to control virus transmission. Either way, we have three big aims: to save lives, to return to public life, and to keep the economy functioning.

To hit our second-choice goal—and maybe even our primary goal—countries are pursuing five major public health strategies. Note that many of these strategies cross-fertilize: for example, advances in virus testing and antibody testing will drive data-based prevention efforts.

Five major public health strategies are underway to bring Covid-19 under control and to contain the spread of SARS-CoV-2.

These strategies arise from things we can control based on the things that we know at any given moment. But what about the things we can’t control and don’t yet know?

The biology of the virus and how it interacts with our bodies is what it is, so we should seek to understand it as thoroughly as possible. How long any immunity gained from prior infection lasts—and indeed whether people develop meaningful immunity at all after infection—are open questions urgently in need of greater clarity. Similarly, right now it’s important to focus on understanding rather than making assumptions about environmental factors like seasonality.

But the biggest question on everyone’s lips is, “When?” When will we see therapeutic progress against Covid-19? And when will life get “back to normal”? There are lots of models out there on the internet; which of those models are right? The simple answer is “none of them.” That’s right—it’s almost certain that every model you’ve seen is wrong in at least one detail, if not all of them. But modeling is meant to be a tool for deeper thinking, a way to run mental (and computational) experiments before—and while—taking action. As George E. P. Box famously wrote in 1976, “All models are wrong, but some are useful.”
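
As a concrete (and deliberately oversimplified) example of the kind of model being argued about, here is a minimal SIR (susceptible-infected-recovered) sketch. The transmission and recovery parameters are placeholders rather than Covid-19 estimates; the point is how sensitive the output is to inputs we don't know precisely.

```python
# Minimal SIR epidemic model: a tool for thinking, not a forecast.
# beta (transmission) and gamma (recovery) are placeholders, not Covid-19 estimates.
def sir(population, infected, beta, gamma, days):
    s, i, r = population - infected, infected, 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Peak infections shift dramatically with small parameter changes,
# which is exactly why point predictions from any single model are fragile.
for beta in (0.2, 0.3):
    peak = max(day[1] for day in sir(1_000_000, 10, beta, 0.1, 365))
    print(f"beta={beta}: peak infected ~ {peak:,.0f}")
```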

Here, we’re seeking useful insights, as opposed to exact predictions, which is why we’re pulling back from quantitative details to get at the mindsets that will support agency and hope. To that end, I’ve been putting together timelines that I believe will yield useful expectations for the next year or two—and asking how optimistic I need to be in order to believe a particular timeline.

For a moderately optimistic scenario to be relevant, breakthroughs in science and technology come at paces expected based on previous efforts and assumptions that turn out to be basically correct; accessibility of those breakthroughs increases at a reasonable pace; regulation achieves its desired effects, without major surprises; and compliance with regulations is reasonably high.

In contrast, if I’m being highly optimistic, breakthroughs in science and technology and their accessibility come more quickly than they ever have before; regulation is evidence-based and successful in the first try or two; and compliance with those regulations is high and uniform. If I’m feeling not-so-optimistic, then I anticipate serious setbacks to breakthroughs and accessibility (with the overturning of many important assumptions), repeated failure of regulations to achieve their desired outcomes, and low compliance with those regulations.

The following scenarios outline the things that need to happen in the fight against Covid-19, when I expect to see them, and how confident I feel in those expectations. They focus on North America and Europe because there are data missing about China’s 2019 outbreak and other regions are still early in their outbreaks. Perhaps the most important thing to keep in mind throughout: We know more today than we did yesterday, but we still have much to learn. New knowledge derived from greater study and debate will almost certainly inspire ongoing course corrections.

As you dive into the scenarios below, practice these three mindset shifts. First, defeating Covid-19 will be a marathon, not a sprint. We shouldn’t expect life to look like 2019 for the next year or two—if ever. As Ed Yong wrote recently in The Atlantic, “There won’t be an obvious moment when everything is under control and regular life can safely resume.” Second, remember that you have important things to do for at least a year. And third, we are all in this together. There is no “us” and “them.” We must all be alert, responsive, generous, and strong throughout 2020 and 2021—and willing to throw away our assumptions when scientific evidence invalidates them.

The Middle Way: Moderate Optimism
Let’s start with the case in which I have the most confidence: moderate optimism.

This timeline considers milestones through late 2021, the earliest that I believe vaccines will become available. The “normal” timeline for developing a vaccine for diseases like seasonal flu is 18 months, which leads to my projection that we could potentially have vaccines as soon as 18 months from the first quarter of 2020. While Melinda Gates agrees with that projection, others (including AI) believe that 3 to 5 years is far more realistic, based on past vaccine development and the need to test safety and efficacy in humans. However, repurposing existing vaccines against other diseases—or piggybacking off clever synthetic platforms—could lead to vaccines being available sooner. I tried to balance these considerations for this moderately optimistic scenario. Either way, deploying vaccines at the end of 2021 is probably much later than you may have been led to believe by the hype engine. Again, if you take away only one message from this article, remember that the fight against Covid-19 is a marathon, not a sprint.

Here, I’ve laid out a moderately optimistic scenario as a baseline. Think of these timelines as living guides, as opposed to exact predictions. There are still many unknowns. More or less optimistic views (see below) and new information could shift these timelines forward or back and change the details of the strategies.

Based on current data, I expect that the first wave of Covid-19 cases (where we are now) will continue to subside in many areas, leading governments to ease restrictions in an effort to get people back to work. We’re already seeing movement in that direction, with a variety of benchmarks and changes at state and country levels around the world. But depending on the details of the changes, easing restrictions will probably cause a second wave of sickness (see Germany and Singapore), which should lead governments to reimpose at least some restrictions.

In tandem, therapeutic efforts will be transitioning from emergency treatments to treatments that have been approved based on safety and efficacy data in clinical trials. In a moderately optimistic scenario, assuming clinical trials currently underway yield at least a few positive results, this shift to mostly approved therapies could happen as early as the third or fourth quarter of this year and continue from there. One approval that should come rather quickly is for plasma therapies, in which the blood from people who have recovered from Covid-19 is used as a source of antibodies for people who are currently sick.

Companies around the world are working on both viral and antibody testing, focusing on speed, accuracy, reliability, and wide accessibility. While these tests are currently being run in hospitals and research laboratories, at-home testing is a critical component of the mass testing we’ll need to keep viral spread in check. These are needed to minimize the impact of asymptomatic cases, test the assumption that infection yields resistance to subsequent infection (and whether it lasts), and construct potential immunity passports if this assumption holds. Testing is also needed for contact tracing efforts to prevent further spread and get people back to public life. Finally, it’s crucial to our fundamental understanding of the biology of SARS-CoV-2 and Covid-19.

We need tests that are very reliable, both in the clinic and at home. So, don’t go buying any at-home test kits just yet, even if you find them online. Wait for reliable test kits and deeper understanding of how a test result translates to everyday realities. If we’re moderately optimistic, in-clinic testing will rapidly expand this quarter and/or next, with the possibility of broadly available, high-quality at-home sampling (and perhaps even analysis) thereafter.

Note that testing is not likely to be a “one-and-done” endeavor, as a person’s infection and immunity status change over time. Expect to be testing yourself—and your family—often as we move later into 2020.

Testing data are also going to inform distancing requirements at the country and local levels. In this scenario, restrictions—at some level of stringency—could persist at least through the end of 2020, as most countries are way behind the curve on testing (Iceland is an informative exception). Governments will likely continue to ask citizens to work from home if at all possible; to wear masks or face coverings in public; to employ heightened hygiene and social distancing in workplaces; and to restrict travel and social gatherings. So while it’s likely we’ll be eating in local restaurants again in 2020 in this scenario, at least for a little while, it’s not likely we’ll be heading to big concerts any time soon.

The Extremes: High and Low Optimism
How would high and low levels of optimism change our moderately optimistic timeline? The milestones are the same, but the time required to achieve them is shorter or longer, respectively. Quantifying these shifts is less important than acknowledging and incorporating a range of possibilities into our view. It pays to be aware of our own biases. Here are a few examples of reasonable possibilities that could shift the moderately optimistic timeline.

When vaccines become available
Vaccine repurposing could shorten the time for vaccines to become available; today, many vaccine candidates are in various stages of testing. On the other hand, difficulties in manufacture and distribution, or faster-than-expected mutation of SARS-CoV-2, could slow vaccine development. Given what we know now, I am not strongly concerned about either of these possibilities—drug companies are rapidly expanding their capabilities, and viral mutation isn’t an urgent concern at this time based on sequencing data—but they could happen.

At first, governments will likely supply vaccines to essential workers such as healthcare workers, but it is critical that vaccines become widely available around the world as quickly and as safely as possible. Overall, I suggest a dose of skepticism when reading highly optimistic claims about a vaccine (or multiple vaccines) being available in 2020. Remember, a vaccine is a knockout punch, not a first line of defense for an outbreak.

When testing hits its stride
While I am confident that testing is a critical component of our response to Covid-19, reliability is incredibly important to testing for SARS-CoV-2 and for immunity to the disease, particularly at home. For an individual, a false negative (being told you don’t have antibodies when you really do) could be just as bad as a false positive (being told you do have antibodies when you really don’t). Those errors are compounded when governments are trying to make evidence-based policies for social and physical distancing.
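
Working through the arithmetic makes the stakes concrete. The sketch below applies Bayes' rule to a hypothetical antibody test; the sensitivity, specificity, and prevalence figures are illustrative assumptions, not the characteristics of any real test.

```python
# Why test reliability matters: even a seemingly good antibody test can mislead
# when true prevalence is low. All numbers below are illustrative, not real tests.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(truly have antibodies | test says positive), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.01, 0.05, 0.20):
    ppv = positive_predictive_value(sensitivity=0.95, specificity=0.95, prevalence=prevalence)
    print(f"prevalence {prevalence:.0%}: a positive result is correct {ppv:.0%} of the time")
```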

If you’re highly optimistic, high-quality testing will ramp up quickly as companies and scientists innovate rapidly by cleverly combining multiple test modalities, digital signals, and cutting-edge tech like CRISPR. Pop-up testing labs could also take some pressure off hospitals and clinics.

If things don’t go well, reliability issues could hinder testing, manufacturing bottlenecks could limit availability, and both could hamstring efforts to control spread and ease restrictions. And if it turns out that immunity to Covid-19 isn’t working the way we assumed, then we must revisit our assumptions about our path(s) back to public life, as well as our vaccine-development strategies.

How quickly safe and effective treatments appear
Drug development is known to be long, costly, and fraught with failure. It’s not uncommon to see early hope in a drug dashed later on down the road. With that in mind, the number of treatments currently under investigation is astonishing, as is the speed with which they’re proceeding through testing. Breakthroughs in a therapeutic area—for example in treating the seriously ill or in reducing viral spread after an infection takes hold—could motivate changes in the focus of distancing regulations.

While speed will save lives, we cannot overlook the importance of knowing a treatment’s efficacy (does it work against Covid-19?) and safety (does it make you sick in a different, or worse, way?). Repurposing drugs that have already been tested for other diseases is speeding innovation here, as is artificial intelligence.

Remarkable collaborations among governments and companies, large and small, are driving innovation in therapeutics and devices such as ventilators for treating the sick.

Whether government policies are effective and responsive
Those of us who have experienced lockdown are eager for it to be over. Businesses, economists, and governments are also eager to relieve the terrible pressure that is being exerted on the global economy. However, lifting restrictions will almost certainly lead to a resurgence in sickness.

Here, the future is hard to model because there are many, many factors at play, and at play differently in different places—including the extent to which individuals actually comply with regulations.

Reliable testing—both in the clinic and at home—is crucial to designing and implementing restrictions, monitoring their effectiveness, and updating them; delays in reliable testing could seriously hamper this design cycle. Lack of trust in governments and/or companies could also suppress uptake. That said, systems are already in place for contact tracing in East Asia. Other governments could learn important lessons, but must also earn—and keep—their citizens’ trust.

Expect to see restrictions descend and then lift in response to changes in the number of Covid-19 cases and in the effectiveness of our prevention strategies. Also expect country-specific and perhaps even area-specific responses that differ from each other. The benefit of this approach? Governments around the world are running perhaps hundreds of real-time experiments and design cycles in balancing health and the economy, and we can learn from the results.

A Way Out
As Jeremy Farrar, head of the Wellcome Trust, told Science magazine, “Science is the exit strategy.” Some of our greatest technological assistance is coming from artificial intelligence, digital tools for collaboration, and advances in biotechnology.

Our exit strategy also needs to include empathy and future visioning—because in the midst of this crisis, we are breaking ground for a new, post-Covid future.

What do we want that future to look like? How will the hard choices we make now about data ethics impact the future of surveillance? Will we continue to embrace inclusiveness and mass collaboration? Perhaps most importantly, will we lay the foundation for successfully confronting future challenges? Whether we’re thinking about the next pandemic (and there will be others) or the cascade of catastrophes that climate change is bringing ever closer—it’s important to remember that we all have the power to become agents of that change.

Special thanks to Ola Kowalewski and Jason Dorrier for significant conversations.

Image Credit: Drew Beamer / Unsplash


#436946 Coronavirus May Mean Automation Is ...

We’re in the midst of a public health emergency, and life as we know it has ground to a halt. The places we usually go are closed, the events we were looking forward to are canceled, and some of us have lost our jobs or fear losing them soon.

But although it may not seem like it, there are some silver linings; this crisis is bringing out the worst in some (I’m looking at you, toilet paper hoarders), but the best in many. Italians on lockdown are singing together, Spaniards on lockdown are exercising together, this entrepreneur made a DIY ventilator and put it on YouTube, and volunteers in Italy 3D printed medical valves for virus treatment at a fraction of their usual cost.

Indeed, if you want to feel like there’s still hope for humanity instead of feeling like we’re about to snowball into terribleness as a species, just look at these examples—and I’m sure there are many more out there. There’s plenty of hope and opportunity to be found in this crisis.

Peter Xing, a keynote speaker and writer on emerging technologies and associate director in technology and growth initiatives at KPMG, would agree. Xing believes the coronavirus epidemic is presenting us with ample opportunities for increased automation and remote delivery of goods and services. “The upside right now is the burgeoning platform of the digital transformation ecosystem,” he said.

In a thought-provoking talk at Singularity University’s COVID-19 virtual summit this week, Xing explained how the outbreak is accelerating our transition to a highly-automated society—and painted a picture of what the future may look like.

Confronting Scarcity
You’ve probably seen them by now—the barren shelves at your local grocery store. Whether you were in the paper goods aisle, the frozen food section, or the fresh produce area, it was clear something was amiss; the shelves were empty. One of the most inexplicable items people have been panic-bulk-buying is toilet paper.

Xing described this toilet paper scarcity as a prisoner’s dilemma, pointing out that we have a scarcity problem right now in terms of our mindset, not in terms of actual supply shortages. “It’s a prisoner’s dilemma in that we’re all prisoners in our homes right now, and we can either hoard or not hoard, and the outcomes depend on how we collaborate with each other,” he said. “But it’s not a zero-sum game.”

Xing referenced a CNN article about why toilet paper, of all things, is one of the items people have been panic-buying most (I, too, have been utterly baffled by this phenomenon). But maybe there’d be less panic if we knew more about the production methods and supply chain involved in manufacturing toilet paper. It turns out it’s a highly automated process (you can learn more about it in this documentary by National Geographic) and requires very few people (though it does require about 27,000 trees a day—so stop bulk-buying it! Just stop!).

The supply chain limitation here is in the raw material; we certainly can’t keep cutting down this many trees a day forever. But—somewhat ironically, given the Costco cartloads of TP people have been stuffing into their trunks and backseats—thanks to automation, toilet paper isn’t something stores are going to stop receiving anytime soon.

Automation For All
Now we have a reason to apply this level of automation to, well, pretty much everything.

Though our current situation may force us into using more robots and automated systems sooner than we’d planned, it will end up saving us money and creating opportunity, Xing believes. He cited “fast-casual” restaurants (Chipotle, Panera, etc.) as a prime example.

Currently, people in the US spend much more to eat at home than to eat in fast-casual restaurants if you take into account the cost of the food we’re preparing plus the value of the time we’re spending on cooking, grocery shopping, and cleaning up after meals. According to research from investment management firm ARK Invest, taking all these costs into account makes for about $12 per meal for food cooked at home.
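
ARK's exact methodology isn't spelled out here, but the basic arithmetic is easy to reproduce with placeholder numbers. Every figure in the sketch below is an assumption chosen for illustration, not one of ARK Invest's actual inputs.

```python
# Back-of-the-envelope version of the "true cost of a home-cooked meal" argument.
# Every number here is a placeholder assumption, not ARK Invest's actual inputs.
ingredient_cost = 4.00      # dollars of groceries per serving (assumed)
minutes_spent = 40          # shopping + cooking + cleanup per serving (assumed)
value_of_time = 12.00       # dollars per hour the cook's time is worth (assumed)

true_cost = ingredient_cost + (minutes_spent / 60) * value_of_time
print(f"Estimated all-in cost per home-cooked meal: ${true_cost:.2f}")
# With these assumptions the total lands near the ~$12 figure cited above,
# which is why cheaper automated fast-casual meals start to look competitive.
```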

That’s the same as or more than the cost of grabbing a burrito or a sandwich at the joint around the corner. As more of the repetitive, low-skill tasks involved in preparing fast casual meals are automated, their cost will drop even more, giving us more incentive to forego home cooking. (But, it’s worth noting that these figures don’t take into account that eating at home is, in most cases, better for you since you’re less likely to fill your food with sugar, oil, or various other taste-enhancing but health-destroying ingredients—plus, there are those of us who get a nearly incomparable amount of joy from laboring over then savoring a homemade meal).

Now that we’re not supposed to be touching each other or touching anything anyone else has touched, but we still need to eat, automating food preparation sounds appealing (and maybe necessary). Multiple food delivery services have already implemented a contactless delivery option, where customers can choose to have their food left on their doorstep.

Besides the opportunities for in-restaurant automation, “This is an opportunity for automation to happen at the last mile,” said Xing. Delivery drones, robots, and autonomous trucks and vans could all play a part. In fact, use of delivery drones has ramped up in China since the outbreak.

Speaking of deliveries, service robots have steadily increased in numbers at Amazon; as of late 2019, the company employed around 650,000 humans and 200,000 robots—and costs have gone down as robots have gone up.

ARK Invest’s research predicts automation could add $800 billion to US GDP over the next 5 years and $12 trillion during the next 15 years. On this trajectory, GDP would end up being 40 percent higher with automation than without it.

Automating Ourselves?
This is all well and good, but what do these numbers and percentages mean for the average consumer, worker, or citizen?

“The benefits of automation aren’t being passed on to the average citizen,” said Xing. “They’re going to the shareholders of the companies creating the automation.” This is where policies like universal basic income and universal healthcare come in; in the not-too-distant future, we may see more movement toward measures like these (depending on how the election goes) that spread the benefit of automation out rather than concentrating it in a few wealthy hands.

In the meantime, though, some people are benefiting from automation in ways that maybe weren’t expected. We’re in the midst of what’s probably the biggest remote-work experiment in US history, not to mention remote learning. Tools that let us digitally communicate and collaborate, like Slack, Zoom, Dropbox, and G Suite, are enabling remote work in a way that wouldn’t have been possible 20 or even 10 years ago.

In addition, Xing said, tools like DataRobot and H2O.ai are democratizing artificial intelligence by allowing almost anyone, not just data scientists or computer engineers, to run machine learning algorithms. People are codifying the steps in their own repetitive work processes and having their computers take over tasks for them.

As 3D printing gets cheaper and more accessible, it’s also being more widely adopted, and people are finding more applications (case in point: the Italians mentioned above who figured out how to cheaply print a medical valve for coronavirus treatment).

The Mother of Invention
This movement towards a more automated society has some positives: it will help us stay healthy during times like the present, it will drive down the cost of goods and services, and it will grow our GDP in the long run. But by leaning into automation, will we be enabling a future that keeps us more physically, psychologically, and emotionally distant from each other?

We’re in a crisis, and desperate times call for desperate measures. We’re sheltering in place, practicing social distancing, and trying not to touch each other. And for most of us, this is really unpleasant and difficult. We can’t wait for it to be over.

For better or worse, this pandemic will likely make us pick up the pace on our path to automation, across many sectors and processes. The solutions people implement during this crisis won’t disappear when things go back to normal (and, depending on who you talk to, they may never really do so).

But let’s make sure to remember something. Even once robots are making our food and drones are delivering it, and our computers are doing data entry and email replies on our behalf, and we all have 3D printers to make anything we want at home—we’re still going to be human. And humans like being around each other. We like seeing one another’s faces, hearing one another’s voices, and feeling one another’s touch—in person, not on a screen or in an app.

No amount of automation is going to change that, and beyond lowering costs or increasing GDP, our greatest and most crucial responsibility will always be to take care of each other.

Image Credit: Gritt Zheng on Unsplash


#436911 Scientists Linked Artificial and ...

Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully-functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.

Whoa.

We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.

As Moore’s law is dying, we even said that neuromorphic computing is one path towards the future of extremely powerful, low energy consumption artificial neural network-based computing—in hardware—that could in theory better link up with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.

This month, an international team put all of those ingredients together, turning theory into reality.

The three labs, scattered across Padova, Italy, Zurich, Switzerland, and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.

The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.

That’s not to say cyborg brains are coming soon. The simulation only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.

And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.

The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.

The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.

Because memory and processing occur on the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history will also influence how it behaves in the future, increasing flexibility and adaptability compared to computers. With the rise of deep learning, which loosely mimics neural processing as the prima donna of AI, the need to reduce power while boosting speed and flexible learning is becoming ever more paramount in the AI community.

Neuromorphic computing was partially born out of this need. Most chips utilize special ingredients that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation—does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on the information to another neuron.

Although single neuromorphic chips have proven to be far more efficient and powerful than current computer chips running machine learning algorithms in toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.

That’s what this study did.

A Hybrid Network
Still with me? Let’s talk network.

It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.

Both these properties—easier or harder to activate downstream neurons—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is LTD (long-term depression). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and ever since have been considered the biological basis of how the brain learns and remembers, and implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
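
For readers who want to see the mechanism, the toy rule below captures the flavor of spike-timing-dependent plasticity: a synapse strengthens when the upstream neuron fires just before the downstream one (LTP-like) and weakens when the order is reversed (LTD-like). It's a textbook caricature, not the memristor dynamics used in this study.

```python
# Toy spike-timing rule illustrating LTP ("amping up") and LTD ("down-tuning").
# A textbook-style caricature, not the memristor dynamics used in the study.
import math

def stdp_update(weight, dt_ms, a_plus=0.05, a_minus=0.03, tau_ms=20.0):
    """Adjust a synaptic weight based on spike timing.

    dt_ms = (time of downstream spike) - (time of upstream spike).
    Upstream-before-downstream (dt > 0) strengthens the synapse (LTP);
    the reverse order (dt < 0) weakens it (LTD).
    """
    if dt_ms > 0:
        weight += a_plus * math.exp(-dt_ms / tau_ms)
    else:
        weight -= a_minus * math.exp(dt_ms / tau_ms)
    return max(0.0, min(1.0, weight))  # keep the weight in a plausible range

w = 0.5
print("after causal pairing    :", round(stdp_update(w, +5.0), 3))  # potentiation
print("after anti-causal pairing:", round(stdp_update(w, -5.0), 3))  # depression
```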

So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.

To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.

Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial pathway back. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control its far-out artificial partner.

Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.

You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.

Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.

Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.

It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network with artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD have come under fire recently as the de facto brain mechanism for learning, though so far they remain cemented as neuroscience dogma.

However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path towards understanding the brain, we’re recreating it, in hardware—if just a little.

While the study doesn’t have immediate use cases, practically it does boost both the neuromorphic computing and neuroprosthetic fields.

“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”

Image Credit: Gerd Altmann from Pixabay


#436774 AI Is an Energy-Guzzler. We Need to ...

There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.

Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or for Alexa to understand a voice command.

The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.

For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.

The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.

OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
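
A 3.4-month doubling time compounds startlingly fast, as a couple of lines of arithmetic make plain:

```python
# How fast compute demand grows with a 3.4-month doubling time (the OpenAI figure above).
doubling_time_months = 3.4

for years in (1, 2, 5):
    growth = 2 ** (years * 12 / doubling_time_months)
    print(f"{years} year(s): ~{growth:,.0f}x more compute")
```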

Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.

One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.

To build a computer brain more akin to a human one, the big brains at Graphcore are swapping the precise but time-consuming number-crunching typical of a conventional microprocessor for a chip that’s content to get by on less precise arithmetic.

The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.
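
The trade-off being exploited here (do the arithmetic at lower precision and accept an answer that is nearly, but not exactly, the same) is easy to demonstrate in software. The NumPy sketch below is a generic mixed-precision illustration, not a model of how the IPU works internally.

```python
# The basic trade-off behind lower-precision AI hardware: nearly the same answer,
# far less numerical "effort" per value. Generic NumPy demo; it does not reproduce
# how Graphcore's IPU actually implements its arithmetic.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

exact = a @ b                                                  # 32-bit result
approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

rel_error = np.abs(exact - approx).mean() / np.abs(exact).mean()
print(f"half-precision arithmetic, mean relative error: {rel_error:.4%}")
# For neural-network training, errors at this scale are usually lost in the noise
# of stochastic gradient descent, which is why precision can be traded for speed.
```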

An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”

Graphcore’s hardware architecture also features more built-in memory processing, boosting efficiency because there’s less need to send as much data back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.

The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.

Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that the main advantage of in-memory computing is the lack of any data movement, which is the main bottleneck of conventional digital computers, as well as the parallel processing of data that enables the intimate interactions among various currents and voltages within the memory array.

Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.

It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.

“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.

He’s not the only one.

One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, vice president of Qualcomm’s technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.

One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.

It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantizes data in order to reduce the complexity of a neural network. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
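
A very rough flavor of the idea looks something like the sketch below: each weight becomes a distribution that is sampled at run time, and weights whose distributions hover around zero become candidates for pruning. It's a cartoon of Bayesian deep learning, not Qualcomm's method.

```python
# Cartoon of Bayesian deep learning: each weight is a distribution, not a number.
# Sampling weights exposes uncertainty, and weights dominated by noise can be
# pruned/compressed. An illustration only, not Qualcomm's actual approach.
import numpy as np

rng = np.random.default_rng(0)

# A tiny "layer" with 4 inputs: learned means and standard deviations per weight.
w_mean = np.array([1.2, -0.8, 0.02, 0.01])
w_std = np.array([0.1, 0.1, 0.5, 0.4])

def forward(x, samples=100):
    """Average prediction and spread over sampled weight vectors."""
    ws = rng.normal(w_mean, w_std, size=(samples, len(w_mean)))
    outputs = ws @ x
    return outputs.mean(), outputs.std()

x = np.array([1.0, 2.0, 3.0, 4.0])
print("prediction and uncertainty:", forward(x))

# Weights whose distribution is dominated by noise carry little information
# and are candidates for pruning, shrinking the network and the energy it burns.
keep = np.abs(w_mean) > w_std
print("weights worth keeping:", keep)
```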

A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency by converting deep learning neural networks into what’s called a spiking neural network. The researchers spiked their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values like Bayesian deep learning.

The DSNN actually imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because it disregards unnecessary computations.
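
The event-driven character of spiking networks can be illustrated with a toy leaky integrate-and-fire neuron: it stays silent, and triggers no downstream work, until its input pushes it over a threshold. The sketch below is illustrative only and does not reproduce ORNL's DSNN.

```python
# Toy leaky integrate-and-fire neuron: it stays silent (and costs nothing downstream)
# until its input drives the membrane potential over a threshold, then emits a spike.
# Illustrates the event-driven idea behind spiking networks, not ORNL's DSNN.
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input, leak over time
        if potential >= threshold:
            spikes.append(1)        # spike: downstream work happens only now
            potential = 0.0         # reset after firing
        else:
            spikes.append(0)        # no spike: downstream computation is skipped
    return spikes

# Sparse input produces sparse spikes, which is where the energy savings come from.
print(lif_neuron([0.2, 0.1, 0.9, 0.0, 0.0, 0.6, 0.7, 0.0]))
```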

The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.

Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.

“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.

Image Credit: analogicus from Pixabay
