
#437171 Scientists Tap the World’s Most ...

In The Hitchhiker’s Guide to the Galaxy by Douglas Adams, the haughty supercomputer Deep Thought is asked whether it can find the answer to the ultimate question concerning life, the universe, and everything. It replies that, yes, it can do it, but it’s tricky and it’ll have to think about it. When asked how long it will take it replies, “Seven-and-a-half million years. I told you I’d have to think about it.”

Real-life supercomputers are being asked somewhat less expansive questions but tricky ones nonetheless: how to tackle the Covid-19 pandemic. They’re being used in many facets of responding to the disease, including to predict the spread of the virus, to optimize contact tracing, to allocate resources and provide decisions for physicians, to design vaccines and rapid testing tools, and to understand sneezes. And the answers are needed in a rather shorter time frame than Deep Thought was proposing.

The largest number of Covid-19 supercomputing projects involves designing drugs. It’s likely to take several effective drugs to treat the disease. Supercomputers allow researchers to take a rational approach and aim to selectively muzzle proteins that SARS-CoV-2, the virus that causes Covid-19, needs for its life cycle.

The viral genome encodes proteins needed by the virus to infect humans and to replicate. Among these are the infamous spike protein that sniffs out and penetrates its human cellular target, but there are also enzymes and molecular machines that the virus forces its human subjects to produce for it. Finding drugs that can bind to these proteins and stop them from working is a logical way to go.

The Summit supercomputer at Oak Ridge National Laboratory has a peak performance of 200,000 trillion calculations per second—equivalent to about a million laptops. Image credit: Oak Ridge National Laboratory, U.S. Dept. of Energy, CC BY

I am a molecular biophysicist. My lab, at the Center for Molecular Biophysics at the University of Tennessee and Oak Ridge National Laboratory, uses a supercomputer to discover drugs. We build three-dimensional virtual models of biological molecules like the proteins used by cells and viruses, and simulate how various chemical compounds interact with those proteins. We test thousands of compounds to find the ones that “dock” with a target protein. Those compounds that fit, lock-and-key style, with the protein are potential therapies.
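A minimal sketch of that screening loop, with an invented stand-in for the scoring function (real docking engines such as AutoDock compute physics-based binding scores from 3D structures):

```python
def dock_score(compound, target):
    """Hypothetical stand-in for a docking engine: lower = tighter binding."""
    # In practice this would evaluate geometric and chemical complementarity
    # between the compound's 3D structure and the target's binding pocket.
    return sum((a - b) ** 2 for a, b in zip(compound["features"], target["pocket"]))

def screen(compounds, target, top_n=3):
    """Score every compound and return the best-ranked candidates."""
    ranked = sorted(compounds, key=lambda c: dock_score(c, target))
    return ranked[:top_n]

target = {"pocket": [0.9, 0.1, 0.5]}
library = [
    {"name": f"cmpd-{i}", "features": [i / 10, 0.1, 0.5]} for i in range(10)
]

hits = screen(library, target)
print([h["name"] for h in hits])  # best-fitting compounds first
```

The top-ranked hits would then go on to wet-lab testing, as described above.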

The top-ranked candidates are then tested experimentally to see if they indeed do bind to their targets and, in the case of Covid-19, stop the virus from infecting human cells. The compounds are first tested in cells, then animals, and finally humans. Computational drug discovery with high-performance computing has been important in finding antiviral drugs in the past, such as the anti-HIV drugs that revolutionized AIDS treatment in the 1990s.

World’s Most Powerful Computer
Since the 1990s the power of supercomputers has increased by a factor of a million or so. Summit at Oak Ridge National Laboratory is presently the world’s most powerful supercomputer, and has the combined power of roughly a million laptops. A laptop today has roughly the same power as a supercomputer had 20-30 years ago.

However, in order to gin up speed, supercomputer architectures have become more complicated. They used to consist of single, very powerful chips on which programs would simply run faster. Now they consist of thousands of processors performing massively parallel processing in which many calculations, such as testing the potential of drugs to dock with a pathogen or cell’s proteins, are performed at the same time. Persuading those processors to work together harmoniously is a pain in the neck but means we can quickly try out a lot of chemicals virtually.
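The structure of that parallelism can be sketched in a few lines; here a thread pool and a made-up scoring formula stand in for the supercomputer's nodes and the docking calculation:

```python
from concurrent.futures import ThreadPoolExecutor

def score(compound_id):
    """Hypothetical per-compound docking score (lower = tighter binding)."""
    return (compound_id * 37) % 101  # deterministic stand-in computation

compound_ids = range(1, 10_000)

# Each compound is scored independently, so the work splits cleanly
# across workers -- on a supercomputer, across thousands of nodes at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(score, compound_ids))

best_id, best_score = min(zip(compound_ids, scores), key=lambda pair: pair[1])
print(f"best compound: {best_id} (score {best_score})")
```

Because no compound's score depends on any other's, the job is "embarrassingly parallel": double the processors and you roughly halve the wall-clock time.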

Further, researchers use supercomputers to figure out by simulation the different shapes formed by the target binding sites and then virtually dock compounds to each shape. In my lab, that procedure has produced experimentally validated hits—chemicals that work—for each of 16 protein targets that physician-scientists and biochemists have discovered over the past few years. These targets were selected because finding compounds that dock with them could result in drugs for treating different diseases, including chronic kidney disease, prostate cancer, osteoporosis, diabetes, thrombosis and bacterial infections.

Scientists are using supercomputers to find ways to disable the various proteins—including the infamous spike protein (green protrusions)—produced by SARS-CoV-2, the virus responsible for Covid-19. Image credit: Thomas Splettstoesser scistyle.com, CC BY-ND

Billions of Possibilities
So which chemicals are being tested for Covid-19? A first approach is trying out drugs that already exist for other indications and that we have a pretty good idea are reasonably safe. That’s called “repurposing,” and if it works, regulatory approval will be quick.

But repurposing isn’t necessarily being done in the most rational way. One idea researchers are considering is that drugs that work against protein targets of some other virus, such as the flu, hepatitis or Ebola, will automatically work against Covid-19, even when the SARS-CoV-2 protein targets don’t have the same shape.

The best approach is to check whether repurposed compounds will actually bind to their intended targets. To that end, my lab published a preliminary report of a supercomputer-driven docking study of a repurposing compound database in mid-February. The study ranked 8,000 compounds in order of how well they bind to the viral spike protein. This paper triggered the establishment of a high-performance computing consortium against our viral enemy, announced by President Trump in March. Several of our top-ranked compounds are now in clinical trials.

Our own work has now expanded to about 10 targets on SARS-CoV-2, and we’re also looking at human protein targets for disrupting the virus’s attack on human cells. Top-ranked compounds from our calculations are being tested experimentally for activity against the live virus. Several of these have already been found to be active.

Also, we and others are venturing out into the wild world of new drug discovery for Covid-19—looking for compounds that have never been tried as drugs before. Databases of billions of these compounds exist, all of which could probably be synthesized in principle but most of which have never been made. Billion-compound docking is a tailor-made task for massively parallel supercomputing.

Dawn of the Exascale Era
Work will be helped by the arrival of the next big machine at Oak Ridge, called Frontier, planned for next year. Frontier should be about 10 times more powerful than Summit. Frontier will herald the “exascale” supercomputing era, meaning machines capable of 1,000,000,000,000,000,000 calculations per second.
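The scale is easier to grasp with some back-of-envelope arithmetic; the laptop figure below is a round-number assumption, not a benchmark:

```python
# Back-of-envelope comparison: one second of exascale work versus a laptop.
EXA = 10 ** 18             # exascale machine: calculations per second
LAPTOP = 100 * 10 ** 9     # assumed sustained laptop rate: 100 gigaflops

seconds = EXA / LAPTOP
print(f"{seconds:,.0f} laptop-seconds per exascale-second")
print(f"~{seconds / (86_400 * 365):.1f} laptop-years")
```

Under these assumptions, a single second of exascale computing would keep a laptop busy for roughly four months.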

Although some fear supercomputers will take over the world, for the time being, at least, they are humanity’s servants, which means that they do what we tell them to. Different scientists have different ideas about how to calculate which drugs work best—some prefer artificial intelligence, for example—so there’s quite a lot of arguing going on.

Hopefully, scientists armed with the most powerful computers in the world will, sooner rather than later, find the drugs needed to tackle Covid-19. If they do, then their answers will be of more immediate benefit, if less philosophically tantalizing, than the answer to the ultimate question provided by Deep Thought, which was, maddeningly, simply 42.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: NIH/NIAID

Posted in Human Robots

#437150 AI Is Getting More Creative. But Who ...

Creativity is a trait that makes humans unique from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans’ artistic abilities have some competition—and from a decidedly non-human source.

Over the last couple years there have been some remarkable examples of art produced by deep learning algorithms. They have challenged the notion of an elusive definition of creativity and put into perspective how professionals can use artificial intelligence to enhance their abilities and produce beyond the known boundaries.

But when creativity is the result of code written by a programmer, using a framework built by a software engineer, and drawing on private and public datasets, how do we assign ownership of AI-generated content, and particularly of artwork? McKinsey estimates AI will generate $3.5 to $5.8 trillion in value annually across various sectors.

In 2018, a portrait christened Edmond de Belamy was created by a French art collective called Obvious. The collective used a database of 15,000 portraits painted between the 1300s and the 1900s to train a deep learning algorithm to produce a unique portrait. The painting sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and movies.

The datasets used by these algorithms were different, but behind both there was a programmer who changed the brush strokes or musical notes into lines of code and a data scientist or engineer who fitted and “curated” the datasets to use for the model. There could also have been user-based input, and the output may be biased towards certain styles or unintentionally infringe on similar pieces of art. This shows that there are many collaborators with distinct roles in producing AI-generated content, and it’s important to discuss how they can protect their proprietary interests.

A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and the collaborators involved should assess their ownership, laying out some guiding principles that are “only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded.”

Before looking at how collaborators can protect their interests, it’s useful to understand the basic requirements of copyright law. The artwork in question must be an “original work of authorship fixed in a tangible medium.” Given this principle, the author asked whether it’s possible for AI to exercise creativity, skill, or any other indicator of originality. The answer is still straightforward—no—or at least not yet. Currently, AI’s range of creativity doesn’t exceed the standard used by the US Copyright Office, which states that copyright law protects the “fruits of intellectual labor founded in the creative powers of the mind.”

Due to the current limitations of narrow AI, it must have some form of initial input that helps develop its ability to create. At the moment AI is a tool that can be used to produce creative work in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.

The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don’t need to understand the high-level mathematics, which in this case often takes the form of black-box algorithms whose inner workings can’t be fully analyzed.

Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics where Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”

In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.

First, programmers need to document their process through online code repositories like GitHub or BitBucket.

Second, data engineers should also document and catalog their datasets and the process they used to curate their models, indicating selectivity in their criteria as much as possible to demonstrate their involvement and creativity.

Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to distinguish the data selection process. This could be interpreted as a way of determining whether user-based input has a right to claim the copyright too.

Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.

AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.

Image Credit: Wikimedia Commons


#437120 The New Indiana Jones? AI. Here’s How ...

Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.

All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.

“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”

The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but the job of manually mowing their way across digitized landscapes is still time-consuming and laborious. Machine learning algorithms offer a way to parse through complex data far more quickly.

AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.

In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map forest floors. The team taught the computer the shape, size, and texture characteristics of the mounds so it could identify potential sites from the digital 3D datasets that it analyzed.

“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
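A toy version of that detection step might look like the following; the feature names and thresholds here are invented for illustration, whereas the real system learned its criteria from known mound examples:

```python
def looks_like_mound(feature):
    """Rule-of-thumb filter on shape, size, and texture descriptors."""
    round_enough = feature["circularity"] > 0.7     # mounds are roughly circular
    right_size = 5 <= feature["diameter_m"] <= 100  # meters across
    smooth = feature["roughness"] < 0.3             # earthen, not rocky
    return round_enough and right_size and smooth

# Candidate terrain features extracted from a LiDAR elevation model
# (values invented for illustration).
candidates = [
    {"id": 1, "circularity": 0.9, "diameter_m": 30, "roughness": 0.1},   # mound-like
    {"id": 2, "circularity": 0.4, "diameter_m": 30, "roughness": 0.1},   # too irregular
    {"id": 3, "circularity": 0.8, "diameter_m": 400, "roughness": 0.2},  # too large
]

flagged = [c["id"] for c in candidates if looks_like_mound(c)]
print(flagged)  # features passed to analysts for manual checking
```

The output is a shortlist, not a verdict: as Davis notes, the flagged features still get checked by hand.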

In Madagascar—where Davis is studying human settlement history across the world’s fourth largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers during the course of about a year.

Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change, such as coastal erosion from sea level rise, or other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.

“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.

Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.

Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine-learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers across the region of the Indus Valley civilization, in what is now part of modern Pakistan and India. Such AI-powered mapping techniques wouldn’t be possible using satellite images alone.

That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. However, the analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.

“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.

Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers in the field referred to the use of AI, according to the Web of Science, reputedly the world’s largest global citation database. Last year, more than 65 papers were published concerning the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.

AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”

Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”

In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to recreate missing inscriptions in ancient Greek from damaged surfaces of objects made of stone or ceramics.

Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.
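A drastically simplified stand-in for that restoration task: a character bigram model trained on a tiny English corpus fills a gap with the likeliest character, where PYTHIA uses a deep network trained on tens of thousands of Greek inscriptions:

```python
from collections import Counter, defaultdict

corpus = "the oracle spoke and the people listened to the oracle"

# Count which character most often follows each character.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def restore(damaged, gap="?"):
    """Fill each gap with the likeliest successor of the previous character."""
    out = []
    for ch in damaged:
        if ch == gap and out:
            ch = follows[out[-1]].most_common(1)[0][0]
        out.append(ch)
    return "".join(out)

print(restore("the ora?le"))
```

PYTHIA's advantage over anything this crude is context: it conditions on the whole surrounding inscription, not just the previous character, and proposes ranked hypotheses rather than a single fill.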

In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient tongue once found on turtle shells and ox bones. The CNN managed to classify oracle bone morphology in order to piece together fragments of these divination objects, some with inscriptions that represent the earliest evidence of China’s recorded history.

“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.

AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.

The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.
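The matching step can be sketched as a similarity comparison between abundance profiles; the taxa and counts below are invented for illustration:

```python
import math

def cosine(p, q):
    """Cosine similarity between two abundance profiles (taxon -> count)."""
    taxa = set(p) | set(q)
    dot = sum(p.get(t, 0) * q.get(t, 0) for t in taxa)
    norm = math.sqrt(sum(v * v for v in p.values()))
    norm *= math.sqrt(sum(v * v for v in q.values()))
    return dot / norm

# Modern reference profiles (values invented for illustration).
references = {
    "human": {"Bacteroides": 40, "Prevotella": 30, "Lactobacillus": 5},
    "dog":   {"Fusobacterium": 35, "Bacteroides": 10, "Lactobacillus": 20},
}

# Microbiome profile recovered from an ancient sample.
sample = {"Bacteroides": 38, "Prevotella": 25, "Lactobacillus": 4}

best = max(references, key=lambda host: cosine(sample, references[host]))
print(best)  # most likely source of the sample
```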

Also known as coprolites, paleofeces from humans and dogs are often found at the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.

“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.

…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are also plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster doing the analysis yourself, Davis noted.

“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”

Still, there’s no telling what the future will hold for studying the past using artificial intelligence.

“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”

Image Credit: James Wheeler from Pixabay


#436988 This Week’s Awesome Tech Stories From ...

FUTURE
We Need to Start Modeling Alternative Futures
Andrew Marino | The Verge
“‘I’m going to be the first person to tell you if you gave me all the data in the world and all the computers in the world, at this moment in time I cannot tell you what things are going to look like in three months,’ [says quantitative futurist Amy Webb.] ‘And that’s fine because that tells us we still have some agency. …The good news is if you are willing to lean into uncertainty and to accept the fact that you can’t control everything, but also you are not helpless in whatever comes next.'”

GOVERNANCE
The Dangers of Moving All of Democracy Online
Marion Fourcade and Henry Farrell | Wired
“As we try to protect democracy from coronavirus, we must see technology as a scalpel, not a sledgehammer. …If we’re very lucky, we’ll have restrained, targeted, and temporary measures that will be effective against the pandemic. If we’re not, we’ll create an open-ended, sweeping surveillance system that will undermine democratic freedoms without doing much to stop coronavirus.”

TECHNOLOGY
Why Does It Suddenly Feel Like 1999 on the Internet?
Tanya Basu and Karen Hao | MIT Technology Review
“You see it in the renewed willingness of people to form virtual relationships. …Now casually hanging out with randos (virtually, of course) is cool again. People are joining video calls with people they’ve never met for everything from happy hours to book clubs to late-night flirting. They’re sharing in collective moments of creativity on Google Sheets, looking for new pandemic pen pals, and sending softer, less pointed emails.”

SCIENCE
Covid-19 Changed How the World Does Science, Together
Matt Apuzzo and David D. Kirkpatrick | The New York Times
“While political leaders have locked their borders, scientists have been shattering theirs, creating a global collaboration unlike any in history. Never before, researchers say, have so many experts in so many countries focused simultaneously on a single topic and with such urgency. Nearly all other research has ground to a halt.”

ARTIFICIAL INTELLIGENCE
A Debate Between AI Experts Shows a Battle Over the Technology’s Future
Karen Hao | MIT Technology Review
“The disagreements [the two experts] expressed mirror many of the clashes within the field, highlighting how powerfully the technology has been shaped by a persistent battle of ideas and how little certainty there is about where it’s headed next.”

BIOTECH
Meet the Xenobots, Virtual Creatures Brought to Life
Joshua Sokol | The New York Times
“If the last few decades of progress in artificial intelligence and in molecular biology hooked up, their love child—a class of life unlike anything that has ever lived—might resemble the dark specks doing lazy laps around a petri dish in a laboratory at Tufts University.”

ENVIRONMENT
Rivian Wants to Bring Electric Trucks to the Masses
Jon Gertner | Wired
“The pickup walks a careful line between Detroit traditionalism and EV iconoclasm. Where Tesla’s forthcoming Cybertruck looks like origami on wheels, the R1T, slim and limber, looks more like an F-150 on a gym-and-yoga regimen.”

ENERGY
The Promise and Peril of Nuclear Power
John R. Quain | Gizmodo
“To save us from the coming climate catastrophe, we need an energy hero, boasting limitless power and no greenhouse gas emissions (or nearly none). So it’s time, say some analysts, to resuscitate the nuclear energy industry. Doing so could provide carbon-free energy. But any plan to make nuclear power a big part of the energy mix also comes with serious financial risks as well as questions about if there’s enough time to enlist an army of nuclear power plants in the battle against the climate crisis.”

Image Credit: Jason Rosewell / Unsplash


#436946 Coronavirus May Mean Automation Is ...

We’re in the midst of a public health emergency, and life as we know it has ground to a halt. The places we usually go are closed, the events we were looking forward to are canceled, and some of us have lost our jobs or fear losing them soon.

But although it may not seem like it, there are some silver linings; this crisis is bringing out the worst in some (I’m looking at you, toilet paper hoarders), but the best in many. Italians on lockdown are singing together, Spaniards on lockdown are exercising together, this entrepreneur made a DIY ventilator and put it on YouTube, and volunteers in Italy 3D printed medical valves for virus treatment at a fraction of their usual cost.

Indeed, if you want to feel like there’s still hope for humanity instead of feeling like we’re about to snowball into terribleness as a species, just look at these examples—and I’m sure there are many more out there. There’s plenty of hope and opportunity to be found in this crisis.

Peter Xing, a keynote speaker and writer on emerging technologies and associate director in technology and growth initiatives at KPMG, would agree. Xing believes the coronavirus epidemic is presenting us with ample opportunities for increased automation and remote delivery of goods and services. “The upside right now is the burgeoning platform of the digital transformation ecosystem,” he said.

In a thought-provoking talk at Singularity University’s COVID-19 virtual summit this week, Xing explained how the outbreak is accelerating our transition to a highly-automated society—and painted a picture of what the future may look like.

Confronting Scarcity
You’ve probably seen them by now—the barren shelves at your local grocery store. Whether you were in the paper goods aisle, the frozen food section, or the fresh produce area, it was clear something was amiss; the shelves were empty. One of the most inexplicable items people have been panic-bulk-buying is toilet paper.

Xing described this toilet paper scarcity as a prisoner’s dilemma, pointing out that we have a scarcity problem right now in terms of our mindset, not in terms of actual supply shortages. “It’s a prisoner’s dilemma in that we’re all prisoners in our homes right now, and we can either hoard or not hoard, and the outcomes depend on how we collaborate with each other,” he said. “But it’s not a zero-sum game.”

Xing referenced a CNN article about why toilet paper, of all things, is one of the items people have been panic-buying most (I, too, have been utterly baffled by this phenomenon). But maybe there’d be less panic if we knew more about the production methods and supply chain involved in manufacturing toilet paper. It turns out it’s a highly automated process (you can learn more about it in this documentary by National Geographic) and requires very few people (though it does require about 27,000 trees a day—so stop bulk-buying it! Just stop!).

The supply chain limitation here is in the raw material; we certainly can’t keep cutting down this many trees a day forever. But—somewhat ironically, given the Costco cartloads of TP people have been stuffing into their trunks and backseats—thanks to automation, toilet paper isn’t something stores are going to stop receiving anytime soon.

Automation For All
Now we have a reason to apply this level of automation to, well, pretty much everything.

Though our current situation may force us into using more robots and automated systems sooner than we’d planned, it will end up saving us money and creating opportunity, Xing believes. He cited “fast-casual” restaurants (Chipotle, Panera, etc.) as a prime example.

Currently, people in the US spend much more to eat at home than we do to eat in fast-casual restaurants if you take into account the cost of the food we’re preparing plus the value of the time we’re spending on cooking, grocery shopping, and cleaning up after meals. According to research from investment management firm ARK Invest, taking all these costs into account makes for about $12 per meal for food cooked at home.

That’s the same as or more than the cost of grabbing a burrito or a sandwich at the joint around the corner. As more of the repetitive, low-skill tasks involved in preparing fast casual meals are automated, their cost will drop even more, giving us more incentive to forego home cooking. (But, it’s worth noting that these figures don’t take into account that eating at home is, in most cases, better for you since you’re less likely to fill your food with sugar, oil, or various other taste-enhancing but health-destroying ingredients—plus, there are those of us who get a nearly incomparable amount of joy from laboring over then savoring a homemade meal).
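The arithmetic behind that $12 figure can be reproduced with illustrative numbers (the ingredient cost, time, and hourly value below are assumptions, not ARK's actual inputs):

```python
# Per-serving cost of a home-cooked meal: ingredients plus the value
# of the time spent shopping, cooking, and cleaning up.
ingredients = 4.00      # dollars of groceries per serving (assumed)
minutes_spent = 30      # shopping + cooking + cleanup, per serving (assumed)
hourly_value = 16.00    # assumed value of an hour of one's time

home_cost = ingredients + (minutes_spent / 60) * hourly_value
print(f"home-cooked meal: ${home_cost:.2f} per serving")

fast_casual = 11.00     # assumed price of a burrito or sandwich
print("cheaper to eat out" if fast_casual < home_cost else "cheaper to cook")
```

The comparison is sensitive to the hourly value you assign your own time, which is exactly why automation shifting that balance matters.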

Now that we’re not supposed to be touching each other or touching anything anyone else has touched, but we still need to eat, automating food preparation sounds appealing (and maybe necessary). Multiple food delivery services have already implemented a contactless delivery option, where customers can choose to have their food left on their doorstep.

Besides the opportunities for in-restaurant automation, “This is an opportunity for automation to happen at the last mile,” said Xing. Delivery drones, robots, and autonomous trucks and vans could all play a part. In fact, use of delivery drones has ramped up in China since the outbreak.

Speaking of deliveries, service robots have steadily increased in numbers at Amazon; as of late 2019, the company employed around 650,000 humans and 200,000 robots—and costs have gone down as robots have gone up.

ARK Invest’s research predicts automation could add $800 billion to US GDP over the next 5 years and $12 trillion during the next 15 years. On this trajectory, GDP would end up being 40 percent higher with automation than without it.

Automating Ourselves?
This is all well and good, but what do these numbers and percentages mean for the average consumer, worker, or citizen?

“The benefits of automation aren’t being passed on to the average citizen,” said Xing. “They’re going to the shareholders of the companies creating the automation.” This is where policies like universal basic income and universal healthcare come in; in the not-too-distant future, we may see more movement toward measures like these (depending on how the election goes) that spread the benefit of automation out rather than concentrating it in a few wealthy hands.

In the meantime, though, some people are benefiting from automation in ways that maybe weren’t expected. We’re in the midst of what’s probably the biggest remote-work experiment in US history, not to mention remote learning. Tools that let us digitally communicate and collaborate, like Slack, Zoom, Dropbox, and G Suite, are enabling remote work in a way that wouldn’t have been possible 20 or even 10 years ago.

In addition, Xing said, tools like DataRobot and H2O.ai are democratizing artificial intelligence by allowing almost anyone, not just data scientists or computer engineers, to run machine learning algorithms. People are codifying the steps in their own repetitive work processes and having their computers take over tasks for them.
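A trivial example of that kind of codified chore, using invented file names: planning which monthly folder each report should be filed into (a real script would then call shutil.move):

```python
import re

def plan_moves(filenames):
    """Map each 'report-YYYY-MM-DD.*' file to a YYYY-MM folder."""
    moves = {}
    for name in filenames:
        match = re.match(r"report-(\d{4})-(\d{2})-\d{2}\.", name)
        if match:
            moves[name] = f"{match.group(1)}-{match.group(2)}/"
    return moves

# A pile of files in an inbox folder; anything that isn't a dated
# report is left alone.
inbox = ["report-2020-03-01.csv", "report-2020-04-15.csv", "notes.txt"]
print(plan_moves(inbox))
```

Once a chore like this is written down as code, the computer does it every time, which is the "codifying repetitive work" Xing describes.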

As 3D printing gets cheaper and more accessible, it’s also being more widely adopted, and people are finding more applications (case in point: the Italians mentioned above who figured out how to cheaply print a medical valve for coronavirus treatment).

The Mother of Invention
This movement towards a more automated society has some positives: it will help us stay healthy during times like the present, it will drive down the cost of goods and services, and it will grow our GDP in the long run. But by leaning into automation, will we be enabling a future that keeps us more physically, psychologically, and emotionally distant from each other?

We’re in a crisis, and desperate times call for desperate measures. We’re sheltering in place, practicing social distancing, and trying not to touch each other. And for most of us, this is really unpleasant and difficult. We can’t wait for it to be over.

For better or worse, this pandemic will likely make us pick up the pace on our path to automation, across many sectors and processes. The solutions people implement during this crisis won’t disappear when things go back to normal (and, depending on who you talk to, things may never really do so).

But let’s make sure to remember something. Even once robots are making our food and drones are delivering it, and our computers are doing data entry and email replies on our behalf, and we all have 3D printers to make anything we want at home—we’re still going to be human. And humans like being around each other. We like seeing one another’s faces, hearing one another’s voices, and feeling one another’s touch—in person, not on a screen or in an app.

No amount of automation is going to change that, and beyond lowering costs or increasing GDP, our greatest and most crucial responsibility will always be to take care of each other.

Image Credit: Gritt Zheng on Unsplash
