Tag Archives: death

#432031 Why the Rise of Self-Driving Vehicles ...

It’s been a long time coming. For years Waymo (formerly known as Google Chauffeur) has been diligently developing, driving, testing and refining its fleets of various models of self-driving cars. Now Waymo is going big. The company recently placed an order for several thousand new Chrysler Pacifica minivans and next year plans to launch driverless taxis in a number of US cities.

This deal raises one of the biggest unanswered questions about autonomous vehicles: if fleets of driverless taxis make it cheap and easy for regular people to get around, what’s going to happen to car ownership?

One popular line of thought goes as follows: as autonomous ride-hailing services become ubiquitous, people will no longer need to buy their own cars. This notion has a certain logical appeal. It makes sense to assume that as driverless taxis become widely available, most of us will eagerly sell the family car and use on-demand taxis to get to work, run errands, or pick up the kids. After all, vehicle ownership is pricey and most cars spend the vast majority of their lives parked.

Even experts believe that the commercial availability of autonomous vehicles will cause car sales to drop.

Market research firm KPMG estimates that by 2030, midsize car sales in the US will decline from today’s 5.4 million units a year to a measly 2.1 million units—less than half that number. Another market research firm, RethinkX, offers an even more pessimistic estimate (or optimistic, depending on your opinion of cars), predicting that autonomous vehicles will reduce consumer demand for new vehicles by a whopping 70 percent.

The reality is that the impending death of private vehicle sales is greatly exaggerated. Autonomous taxis will indeed be a beneficial and widely embraced form of urban transportation, yet most people will still prefer to own their own autonomous vehicle. In fact, the total number of autonomous vehicles sold each year is going to increase rather than decrease.

When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.

Several unique characteristics of autonomous vehicles will ensure that people will continue to buy their own cars.

1. Cost: Thanks to simpler electric motors and lighter auto bodies, autonomous vehicles will be cheaper to buy and maintain than today’s human-driven vehicles. Some estimates put the price as low as $10K per vehicle, a stark contrast to today’s average of $30K per vehicle.

2. Personal belongings: Consumers will be able to do much more in their driverless vehicles, including work, play, and rest. This means they will want to keep more personal items in their cars.

3. Frequent upgrades: The average (human-driven) car today is owned for 10 years. As driverless cars become software-driven devices, their price/performance ratio will track Moore’s Law, and their rapid improvement will increase the appeal and frequency of new vehicle purchases.

4. Instant accessibility: In a dense urban setting, a driverless taxi can show up within minutes of being summoned. Not so in rural areas, where people live miles apart. For many, the delay and the “loss of control” over their own mobility will increase the appeal of owning their own vehicle.

5. Diversity of form and function: Autonomous vehicles will be available in a wide variety of sizes and shapes. Consumers will drive demand for custom-made, purpose-built autonomous vehicles whose form is adapted for a particular function.

Let’s explore each of these characteristics in more detail.

Autonomous vehicles will cost less for several reasons. For one, they will be powered by electric motors, which are cheaper to build and maintain than gasoline engines. Removing human drivers will also save consumers money. Autonomous vehicles will be much less likely to have accidents, so they can be built out of lightweight, lower-cost materials and will be cheaper to insure. With the human interface no longer needed, autonomous vehicles won’t be burdened by the manufacturing costs of a complex dashboard, steering wheel, and foot pedals.

While hop-on, hop-off autonomous taxi-based mobility services may be ideal for some of the urban population, several sizeable customer segments will still want to own their own cars.

These include people who live in sparsely populated rural areas and can’t afford to wait extended periods of time for a taxi to appear. Families with children will prefer to own their own driverless cars to house their children’s car seats, favorite toys, and sippy cups. Another loyal car-buying segment will be die-hard gadget hounds who will eagerly buy a sexy upgraded model every year or so, unable to resist the siren song of AI that is three times as safe, or a ride that is twice as smooth.

Finally, consider the allure of robotic diversity.

Commuters will invest in a home office on wheels: a sleek, traveling workspace resembling the first-class suite on an airplane. On the high end of the market, city-dwellers and country-dwellers alike will special-order custom-made autonomous vehicles whose shape and on-board gadgetry are adapted for a particular function or hobby. Privately owned small businesses will buy their own autonomous delivery robots, ranging in size from knee-high, last-mile delivery pods to giant, long-haul shipping vehicles.

As autonomous vehicles near commercial viability, Waymo’s procurement deal with Fiat Chrysler is just the beginning.

The exact value of this future automotive industry has yet to be defined, but research from Intel’s internal autonomous vehicle division estimates this new so-called “passenger economy” could be worth nearly $7 trillion a year. To position themselves to capture a chunk of this potential revenue, companies whose businesses used to lie in previously disparate fields such as robotics, software, chips, and entertainment (to name but a few) have begun to form a bewildering web of what they hope will be symbiotic partnerships. Ride-hailing and chip companies are collaborating with car rental companies, who in turn are befriending giant software firms, who are launching joint projects with hardware companies of all sizes, and so on.

Last year, car companies sold an estimated 80 million new cars worldwide. Over the course of nearly a century, car companies and their partners—global chains of suppliers and service providers—have become masters at mass-producing and maintaining sturdy, cost-effective human-driven vehicles. As autonomous vehicle technology becomes ready for mainstream use, traditional automotive companies are being forced to grapple with the painful realization that they must compete on a new playing field.

The challenge for traditional car-makers won’t be that people no longer want to own cars. Instead, the challenge will be learning to compete in a new and larger transportation industry where consumers will choose their product according to the appeal of its customized body and the quality of its intelligent software.

Melba Kurman and Hod Lipson are the authors of Driverless: Intelligent Cars and the Road Ahead and Fabricated: The New World of 3D Printing.

Image Credit: hfzimages / Shutterstock.com



#431869 When Will We Finally Achieve True ...

The field of artificial intelligence goes back a long way, but many consider it to have been officially born when a group of scientists at Dartmouth College got together for a summer in 1956. Over the preceding few decades, computers had come on in incredible leaps and bounds; they could now perform calculations far faster than humans. Given that progress, optimism was rational. The genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The Dartmouth scientists had a fairly simple idea: intelligence is, after all, just a mathematical process, and the human brain is a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” (This research proposal, by the way, contains one of the earliest uses of the term artificial intelligence.) They had a number of ideas: maybe simulating the human brain’s pattern of neurons could work, and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far there is left to go. This is unlike Moore’s Law, the doubling of processing power roughly every couple of years, which makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
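To see why that forecast is so concrete, it helps to write it down. The two-year doubling period below is the usual rough figure, not a precise constant:

```latex
% Rough statement of Moore's Law: processing power doubles
% about every T years, with T roughly equal to 2.
P(t) \approx P_0 \cdot 2^{\,t/T}, \qquad T \approx 2\ \text{years}
% Example: over a decade, 2^{10/2} = 2^5 = 32 times the starting performance.
```

There is no comparable formula for how much distance remains to human-level AI, which is exactly the problem.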
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That’s enough to convince people you’re working on something that could prove revolutionary very soon (people are less impressed by efforts that will lead to tangible results centuries down the line), but not so soon that you risk being embarrassingly proved wrong. Of the two, Armstrong found more evidence for the second: people were perfectly happy to predict AI arriving after their own deaths (although most didn’t), but there was a clear bias towards “15–20 years from now” in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to those that say AI, or else consciousness, will “emerge” if a network is sufficiently complex, given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know if this is possible, we cannot know how far away we are from reaching this, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. The Turing test, for example, has not yet been passed, in the sense that AI cannot convince people it’s human in conversation; but computers’ calculating ability, and perhaps soon their ability at tasks like pattern recognition and driving cars, far exceeds human levels. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find ever more ways to feed more data into more algorithms, the impact on society of this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those that point out we should be worried about AI for other reasons. Just because we can’t say for sure if human-level AI will arrive this century, or never, it doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors could be correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met in Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com


#431603 What We Can Learn From the Second Life ...

For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are at the more mundane end of the spectrum for an organization that apparently once looked into jetpacks and teleportation.
Google began selling Google Glass, the original smart glasses, in 2013 for $1,500, as prototypes for its acolytes: around 8,000 early adopters. Users could control the glasses with a touchpad or, after activating them by tilting the head back, with voice commands. Audio is relayed—as with several wearable products—via bone conduction, which transmits sound by vibrating the bones of the user’s skull. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, regardless of whether it’s because we’re not used to them, or if it’s a more permanent feature, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with an ominous slogan posted on Google’s website: “Thanks for exploring with us.” Reminding Glass users that they had always been referred to as “explorers”—beta-testers of a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years that Google Glass was dormant, and now roaring back to life and commercially available, the Google Glass relaunch got under way in earnest in July of 2017. The difference here was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest is on to develop a healthcare AI that can provide recommendations in response to natural-language queries. Doctors’ famously untidy handwriting—and the associated death toll—could be avoided if the glasses took dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com


#431243 Does Our Survival Depend on Relentless ...

Malthus had a fever dream in the 1790s. While the world was marveling at the first manifestations of modern science and technology and the industrial revolution that was just beginning, he was concerned. He saw the exponential growth of the human population as a terrible problem for the species—an existential threat. He was afraid the human population would overshoot the availability of resources, and then things would really hit the fan.
“Famine seems to be the last, the most dreadful resource of nature. The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race. The vices of mankind are active and able ministers of depopulation.”
So Malthus wrote in his famous text, An Essay on the Principle of Population.
But Malthus was wrong. Not just in his proposed solution, which was to stop giving aid and food to the poor so that they wouldn’t explode in population. His prediction was also wrong: there was no great, overwhelming famine that caused the population to stay at the levels of the 1790s. Instead, the world population—with a few dips—has continued to grow exponentially ever since. And it’s still growing.
There have concurrently been developments in agriculture and medicine and, in the 20th century, the Green Revolution, in which Norman Borlaug ensured that countries adopted high-yield varieties of crops—the first precursors to modern ideas of genetically engineering food to produce better crops and more growth. The world became able to produce an astonishing amount of food—enough, in the modern era, for ten billion people. It is only because of grave injustice in the way food is distributed that 12 percent of the world goes hungry and starvation persists. But, that aside, we were saved by the majesty of another kind of exponential growth: the population grew, but the ability to produce food grew faster.
In so much of the world around us today, it’s the same old story. Take the exploitation of fossil fuels: here is another exponential race. The exponential growth of our ability to mine coal, extract natural gas, and refine oil from ever more complex hydrocarbons is pitted against our growing appetite. The stock market, too, is built on exponential growth; you cannot provide compound interest unless the economy grows by a certain percentage a year.
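The arithmetic behind that expectation is ordinary compound growth. The 3 percent rate below is purely illustrative, not a claim about any particular economy:

```latex
% An economy growing at a fixed annual rate r compounds exponentially:
A(t) = A_0 \,(1 + r)^{t}
% At r = 0.03 (3% a year), output doubles in roughly
t_{\mathrm{double}} = \frac{\ln 2}{\ln(1.03)} \approx 23.4\ \text{years}
```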

“This relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species.”

When the economy fails to grow exponentially, it’s considered a crisis: a financial catastrophe. This expectation penetrates down to individual investors. In the cryptocurrency markets—hardly immune from bubbles, the bull-and-bear cycle of economics—the traders’ saying is “Buy the hype, sell the news.” Before an announcement is made, the expectation of growth, of a boost—the psychological shift—is almost invariably worth more than whatever the major announcement turns out to be. The idea of growth is baked into the share price, to the extent that even good news can often cause the price to dip when it’s delivered.
In the same way, this relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species. A third of Earth’s soil has been acutely degraded by agriculture; we are on the brink of a topsoil crisis. In less relentless times, we might have tried to solve the problem by letting the fields lie fallow for a few years. But that’s no longer an option: if we do so, people will starve. Instead, we look to a second Green Revolution—genetically modified crops, or hydroponics—to save us.
Climate change is considered by many to be an existential threat. The Intergovernmental Panel on Climate Change has already put their faith in the exponential growth of technology. Many of the scenarios where they can successfully imagine the human race dealing with the climate crisis involve the development and widespread deployment of carbon capture and storage technology. Our hope for the future already has built-in expectations of exponential growth in our technology in this field. Alongside this, to reduce carbon emissions to zero on the timescales we need to, we will surely require new technologies in renewable energy, energy efficiency, and electrification of the transport system.
Without exponential growth in technology continuing, then, we are doomed. Humanity finds itself on a treadmill that’s rapidly accelerating, with the risk of plunging into the abyss if we can’t keep up the pace. Yet this very acceleration could also pose an existential threat. As our global system becomes more interconnected and complex, chaos theory takes over: the economics of a town in Macedonia can influence a US presidential election; critical infrastructure can be brought down by cybercriminals.
New threats, such as biotechnology, nanotechnology, or a generalized artificial intelligence, could put incredible power—power over the entire species—into the hands of a small number of people. We are faced with a paradox: the continued existence of our system depends on the exponential growth of our capacities outpacing the exponential growth of our needs and desires. Yet this very growth will create threats that are unimaginably larger than any humans have faced before in history.

“It is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.”

Neo-Luddites may find satisfaction in rejecting the ill-effects of technology, but they will still live in a society where technology is the lifeblood that keeps the whole system pumping. Now, more than ever, it is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.
If we decide that limitless exponential growth on a finite planet is unsustainable, we need to plan for the transition to a new way of living before our ability to accelerate runs out. If we require new technologies or fields of study to enable this growth to continue, we must focus our efforts on these before anything else. If we want to survive the 21st century without major catastrophe, we don’t have a choice but to understand it. Almost by default, we’re all accelerationists now.
Image Credit: focal point / Shutterstock.com


#430286 Artificial Intelligence Predicts Death ...

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Welsh poet Dylan Thomas’ famous lines are a passionate plea to fight against the inevitability of death. While the sentiment is poetic, the reality is far more prosaic. We are all going to die someday at a time and place that will likely remain a mystery to us until the very end.
Or maybe not.
Researchers are now applying artificial intelligence, particularly machine learning and computer vision, to predict when someone may die. The ultimate goal is not to play the role of Grim Reaper, like in the macabre sci-fi Machine of Death universe, but to treat or even prevent chronic diseases and other illnesses.
The latest research into this application of AI to precision medicine used an off-the-shelf machine-learning platform to analyze 48 chest CT scans. The computer was able to predict which patients would die within five years with 69 percent accuracy. That’s about as good as any human doctor.
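The paper’s actual pipeline isn’t reproduced here, but the general recipe it describes—hand a small cohort’s image-derived measurements and survival labels to an off-the-shelf classifier and cross-validate—can be sketched in a few lines. Everything below (the feature names, the random stand-in data, the choice of logistic regression and scikit-learn) is an illustrative assumption, not the study’s method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: a handful of CT-derived measurements per patient
# (e.g., cardiac size, emphysema score, aortic calcification) and a
# binary "died within five years" label. All values are fabricated.
n_patients = 48
X = rng.normal(size=(n_patients, 3))
y = rng.integers(0, 2, size=n_patients)

# Off-the-shelf classifier with cross-validated accuracy, the kind of
# thing any standard machine-learning platform provides out of the box.
model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Swapping in real CT-derived features, plus demographic columns such as age and sex, is the kind of step Oakden-Rayner describes below when he talks about pushing accuracy toward 75–80 percent.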
The results were published in the Nature journal Scientific Reports by a team led by the University of Adelaide.
In an email interview with Singularity Hub, lead author Dr. Luke Oakden-Rayner, a radiologist and PhD student, says that one of the obvious benefits of using AI in precision medicine is to identify health risks earlier and potentially intervene.
Less obvious, he adds, is the promise of speeding up longevity research.
“Currently, most research into chronic disease and longevity requires long periods of follow-up to detect any difference between patients with and without treatment, because the diseases progress so slowly,” he explains. “If we can quantify the changes earlier, not only can we identify disease while we can intervene more effectively, but we might also be able to detect treatment response much sooner.”
That could lead to faster and cheaper treatments, he adds. “If we could cut a year or two off the time it takes to take a treatment from lab to patient, that could speed up progress in this area substantially.”
AI has a heart
In January, researchers at Imperial College London published results that suggested AI could predict heart failure and death better than a human doctor. The research, published in the journal Radiology, involved creating virtual 3D hearts of about 250 patients that could simulate cardiac function. AI algorithms then went to work to learn what features would serve as the best predictors. The system relied on MRIs, blood tests, and other data for its analyses.
In the end, the machine was faster and better at assessing risk of pulmonary hypertension—about 73 percent versus 60 percent.
The researchers say the technology could be applied to predict outcomes of other heart conditions in the future. “We would like to develop the technology so it can be used in many heart conditions to complement how doctors interpret the results of medical tests,” says study co-author Dr. Tim Dawes in a press release. “The goal is to see if better predictions can guide treatment to help people to live longer.”
AI getting smarter
These sorts of applications of AI to precision medicine are only going to get better as the machines continue to learn, just like any medical school student.
Oakden-Rayner says his team is still building its ideal dataset as it moves forward with its research, but it has already improved predictive accuracy to 75–80 percent by including information such as age and sex.
“I think there is an upper limit on how accurate we can be, because there is always going to be an element of randomness,” he says, replying to how well AI will be able to pinpoint individual human mortality. “But we can be much more precise than we are now, taking more of each individual’s risks and strengths into account. A model combining all of those factors will hopefully account for more than 80 percent of the risk of near-term mortality.”
Others are even more optimistic about how quickly AI will transform this aspect of the medical field.
“Predicting remaining life span for people is actually one of the easiest applications of machine learning,” Dr. Ziad Obermeyer tells STAT News. “It requires a unique set of data where we have electronic records linked to information about when people died. But once we have that for enough people, you can come up with a very accurate predictor of someone’s likelihood of being alive one month out, for instance, or one year out.”
Obermeyer co-authored a paper last year with Dr. Ezekiel Emanuel in the New England Journal of Medicine called “Predicting the Future—Big Data, Machine Learning, and Clinical Medicine.”
AI still has much to learn
Experts like Obermeyer and Oakden-Rayner agree that advances will come swiftly, but there is still much work to be done.
For one thing, there’s plenty of data out there to mine, but it’s still a bit of a mess. For example, the images needed to train machines still need to be processed to make them useful. “Many groups around the world are now spending millions of dollars on this task, because this appears to be the major bottleneck for successful medical AI,” Oakden-Rayner says.
In the interview with STAT News, Obermeyer says data is fragmented across the health system, so linking information and creating comprehensive datasets will take time and money. He also notes that while there is much excitement about the use of AI in precision medicine, there’s been little activity in testing the algorithms in a clinical setting.
“It’s all very well and good to say you’ve got an algorithm that’s good at predicting. Now let’s actually port them over to the real world in a safe and responsible and ethical way and see what happens,” he says in STAT News.
AI is no accident
Preventing a fatal disease is one thing. But preventing fatal accidents with AI?
That’s what US and Indian researchers set out to do when they looked into the disturbing number of deaths that occur when people take selfies. The team identified 127 people who died while posing for a self-taken photo over a two-year period.
Based on a combination of text, images and location, the machine learned to identify a selfie as potentially dangerous or not. Running more than 3,000 annotated selfies collected on Twitter through the software resulted in 73 percent accuracy.
“The combination of image-based and location-based features resulted in the best accuracy,” they reported.
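The researchers’ own feature extraction and model aren’t reproduced here, but the multimodal idea—concatenate text, image, and location features for each annotated selfie and train a standard classifier to label it dangerous or safe—can be sketched roughly as follows. The feature dimensions, random stand-in data, and random-forest choice are all hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_selfies = 3000

# Fabricated stand-ins for the three feature groups combined in the study.
text_feats = rng.normal(size=(n_selfies, 10))      # e.g., caption/hashtag signals
image_feats = rng.normal(size=(n_selfies, 20))     # e.g., scene content (cliff, water, vehicle)
location_feats = rng.normal(size=(n_selfies, 5))   # e.g., proximity to known hazards
X = np.hstack([text_feats, image_feats, location_feats])
y = rng.integers(0, 2, size=n_selfies)             # dangerous (1) vs. not dangerous (0)

# Train on one split, report accuracy on the held-out split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 2))
```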
What’s next? A sort of selfie early warning system. “One of the directions that we are working on is to have the camera give the user information about [whether or not a particular location is] dangerous, with some score attached to it,” says Ponnurangam Kumaraguru, a professor at Indraprastha Institute of Information Technology in Delhi, in a story by Digital Trends.
AI and the future
This discussion raises the question: Do we really want to know when we’re going to die?
According to at least one paper published in Psychological Review earlier this year, the answer is a resounding “no.” Nearly nine out of 10 people in Germany and Spain who were quizzed about whether they would want to know about their future, including their death, said they would prefer to remain ignorant.
Obermeyer sees it differently, at least when it comes to people living with life-threatening illness.
“[O]ne thing that those patients really, really want and aren’t getting from doctors is objective predictions about how long they have to live,” he tells Marketplace public radio. “Doctors are very reluctant to answer those kinds of questions, partly because, you know, you don’t want to be wrong about something so important. But also partly because there’s a sense that patients don’t want to know. And in fact, that turns out not to be true when you actually ask the patients.”
Stock Media provided by photocosma / Pond5
