#432568 Tech Optimists See a Golden ...
Technology evangelists dream about a future where we’re all liberated from the more mundane aspects of our jobs by artificial intelligence. Other futurists go further, imagining AI will enable us to become superhuman, enhancing our intelligence, abandoning our mortal bodies, and uploading ourselves to the cloud.
Paradise is all very well, although your mileage may vary on whether these scenarios are realistic or desirable. The real question is, how do we get there?
Economist John Maynard Keynes notably argued in favor of active intervention when an economic crisis hits, rather than waiting for the markets to settle down to a more healthy equilibrium in the long run. His rebuttal to critics was, “In the long run, we are all dead.” After all, if it takes 50 years of upheaval and economic chaos for things to return to normality, there has been an immense amount of human suffering first.
Similar problems arise with the transition to a world where AI is intimately involved in our lives. In the long term, automation of labor might benefit the human species immensely. But in the short term, it has all kinds of potential pitfalls, especially in exacerbating inequality within societies where AI takes on a larger role. A new report from the Institute for Public Policy Research has deep concerns about the future of work.
Uneven Distribution
While the report doesn’t foresee the same gloom and doom of mass unemployment that other commentators have considered, the concern is that the gains in productivity and economic benefits from AI will be unevenly distributed. In the UK, jobs that account for £290 billion worth of wages in today’s economy could potentially be automated with current technology. But these are disproportionately jobs held by people who are already suffering from social inequality.
Low-wage jobs are five times more likely to be automated than high-wage jobs. A greater proportion of jobs held by women are likely to be automated. The solution that’s often suggested is that people should simply “retrain”; but if no funding or assistance is provided, this burden is too much to bear. You can’t expect people to seamlessly transition from driving taxis to writing self-driving car software without help. As we have already seen, inequality is exacerbated when jobs that don’t require advanced education (even if they require a great deal of technical skill) are the first to go.
No Room for Beginners
Optimists say algorithms won’t replace humans, but will instead liberate us from the dull parts of our jobs. Lawyers used to have to spend hours trawling through case law to find legal precedents; now AI can identify the most relevant documents for them. Doctors no longer need to look through endless scans and perform diagnostic tests; machines can do this, leaving the decision-making to humans. This boosts productivity and provides invaluable tools for workers.
But there are issues with this rosy picture. If humans need to do less work, the economic incentive is for the boss to reduce their hours. Some of these “dull, routine” parts of the job were traditionally how people getting into the field learned the ropes: paralegals used to look through case law, but AI may render them obsolete. Even in the field of journalism, there’s now software that will rewrite press releases for publication, traditionally something close to an entry-level task. If there are no entry-level jobs, or if entry-level now requires years of training, the result is to exacerbate inequality and reduce social mobility.
Automating Our Biases
The adoption of algorithms into employment has already had negative impacts on equality. Cathy O’Neil, a Harvard mathematics PhD, raises these concerns in her excellent book Weapons of Math Destruction. She notes that algorithms designed by humans often encode the biases of their society, whether racial or based on gender and sexuality.
Google’s search engine advertises more executive-level jobs to users it thinks are male. AI programs predict that black offenders are more likely to re-offend than white offenders; they receive correspondingly longer sentences. It needn’t necessarily be that bias has been actively programmed; perhaps the algorithms just learn from historical data, but this means they will perpetuate historical inequalities.
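The point about learning from historical data can be made concrete with a toy sketch (synthetic figures, not drawn from any real system): a screening model that does nothing more than learn past hire rates will faithfully reproduce whatever skew is baked into its training data.

```python
# Toy illustration of bias learned from historical decisions.
# The data below is invented; group "B" was hired less often in the past.
from collections import defaultdict

historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in historical:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predicted_hire_rate(group):
    """A 'model' that simply replays the historical hire rate."""
    hires, total = counts[group]
    return hires / total

print(predicted_hire_rate("A"))  # 0.75 -- historical skew carried forward
print(predicted_hire_rate("B"))  # 0.25
```

No bias was programmed in, and group membership is never weighted explicitly; the disparity emerges purely from the training data, which is exactly the mechanism O’Neil describes.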
Take candidate-screening software HireVue, used by many major corporations to assess new employees. It analyzes the “verbal and non-verbal cues” of candidates, comparing them to employees who historically did well. According to O’Neil, such systems are “using people’s fear and trust of mathematics to prevent them from asking questions.” With no transparency into how the algorithm generates its results, and no consensus over who’s responsible for those results, discrimination can occur automatically, on a massive scale.
Combine this with other demographic trends. In rich countries, people are living longer. An increasing burden will be placed on a shrinking tax base to support that elderly population. A recent study said that due to the accumulation of wealth in older generations, millennials stand to inherit more than any previous generation, but it won’t happen until they’re in their 60s. Meanwhile, those with savings and capital will benefit as the economy shifts: the stock market and GDP will grow, but wages and equality will fall, a situation that favors people who are already wealthy.
Even in the most dramatic AI scenarios, inequality is exacerbated. If someone develops a general intelligence that’s near-human or super-human, and they manage to control and monopolize it, they instantly become immensely wealthy and powerful. If the glorious technological future that Silicon Valley enthusiasts dream about is only going to serve to make the growing gaps wider and strengthen existing unfair power structures, is it something worth striving for?
What Makes a Utopia?
We urgently need to redefine our notion of progress. Philosophers worry about an AI that is misaligned—the things it seeks to maximize are not the things we want maximized. At the same time, we measure the development of our countries by GDP, not the quality of life of workers or the equality of opportunity in the society. Growing wealth with increased inequality is not progress.
Some people will take the position that there are always winners and losers in society, and that any attempt to redress the inequalities of our society will stifle economic growth and leave everyone worse off. Some will see this as an argument for a new economic model, based around universal basic income. Any moves towards this will need to take care that it’s affordable, sustainable, and doesn’t lead towards an entrenched two-tier society.
Walter Scheidel’s book The Great Leveler is a huge survey of inequality across all of human history, from prehistoric cave-dwellers to the 21st century. He argues that only revolutions, wars, and other catastrophes have historically reduced inequality: a perfect example is the Black Death in Europe, which (by reducing the population, and therefore the available labor supply) increased wages and reduced inequality. Meanwhile, our solution to the financial crisis of 2007–8 may have only made the problem worse.
But in a world of nuclear weapons, of biowarfare, of cyberwarfare—a world of unprecedented, complex, distributed threats—the consequences of these “safety valves” could be worse than ever before. Inequality increases the risk of global catastrophe, and global catastrophes could scupper any progress towards the techno-utopia that the utopians dream of. And a society with entrenched inequality is no utopia at all.
Image Credit: OliveTree / Shutterstock.com
#432031 Why the Rise of Self-Driving Vehicles ...
It’s been a long time coming. For years Waymo (formerly known as Google Chauffeur) has been diligently developing, driving, testing, and refining its fleets of various models of self-driving cars. Now Waymo is going big. The company recently placed an order for several thousand new Chrysler Pacifica minivans and next year plans to launch driverless taxis in a number of US cities.
This deal raises one of the biggest unanswered questions about autonomous vehicles: if fleets of driverless taxis make it cheap and easy for regular people to get around, what’s going to happen to car ownership?
One popular line of thought goes as follows: as autonomous ride-hailing services become ubiquitous, people will no longer need to buy their own cars. This notion has a certain logical appeal. It makes sense to assume that as driverless taxis become widely available, most of us will eagerly sell the family car and use on-demand taxis to get to work, run errands, or pick up the kids. After all, vehicle ownership is pricey and most cars spend the vast majority of their lives parked.
Even experts believe commercial availability of autonomous vehicles will cause car sales to drop.
Market research firm KPMG estimates that by 2030, midsize car sales in the US will decline from today’s 5.4 million units sold each year to a measly 2.1 million units, less than half that number. Another market research firm, ReThinkX, offers an even more pessimistic estimate (or optimistic, depending on your opinion of cars), predicting that autonomous vehicles will reduce consumer demand for new vehicles by a whopping 70 percent.
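For a sense of scale, here is a quick back-of-the-envelope check of those two forecasts (all figures come from the estimates above):

```python
# Sanity-check the cited forecasts; every figure is taken from the article.
kpmg_today = 5.4e6   # US midsize car sales per year today
kpmg_2030 = 2.1e6    # KPMG's 2030 estimate

kpmg_drop = 1 - kpmg_2030 / kpmg_today
print(f"KPMG-implied decline: {kpmg_drop:.0%}")  # 61%

rethinkx_drop = 0.70  # ReThinkX: 70% less demand for new vehicles
remaining = kpmg_today * (1 - rethinkx_drop)
print(f"ReThinkX-implied sales: {remaining / 1e6:.2f} million units")  # 1.62 million
```

Put side by side, KPMG’s numbers imply roughly a 61 percent decline in that segment, while applying ReThinkX’s 70 percent figure to the same baseline leaves only about 1.6 million units a year.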
The reality is that reports of the impending death of private vehicle sales are greatly exaggerated. Autonomous taxis will be a beneficial and widely embraced form of urban transportation, yet most people will still prefer to own their own autonomous vehicle. In fact, the total number of autonomous vehicles sold each year is going to increase rather than decrease.
When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.
Several unique characteristics of autonomous vehicles will ensure that people will continue to buy their own cars.
1. Cost: Thanks to simpler electric engines and lighter auto bodies, autonomous vehicles will be cheaper to buy and maintain than today’s human-driven vehicles. Some estimates bring the price to $10K per vehicle, a stark contrast with today’s average of $30K per vehicle.
2. Personal belongings: Consumers will be able to do much more in their driverless vehicles, including work, play, and rest. This means they will want to keep more personal items in their cars.
3. Frequent upgrades: The average (human-driven) car today is owned for 10 years. As driverless cars become software-driven devices, their price/performance ratio will track to Moore’s law. Their rapid improvement will increase the appeal and frequency of new vehicle purchases.
4. Instant accessibility: In a dense urban setting, a driverless taxi is able to show up within minutes of being summoned. But not so in rural areas, where people live miles apart. For many, delay and “loss of control” over their own mobility will increase the appeal of owning their own vehicle.
5. Diversity of form and function: Autonomous vehicles will be available in a wide variety of sizes and shapes. Consumers will drive demand for custom-made, purpose-built autonomous vehicles whose form is adapted for a particular function.
Let’s explore each of these characteristics in more detail.
Autonomous vehicles will cost less for several reasons. For one, they will be powered by electric engines, which are cheaper to construct and maintain than gasoline-powered engines. Removing human drivers will also save consumers money. Autonomous vehicles will be much less likely to have accidents, hence they can be built out of lightweight, lower-cost materials and will be cheaper to insure. With the human interface no longer needed, autonomous vehicles won’t be burdened by the manufacturing costs of a complex dashboard, steering wheel, and foot pedals.
While hop-on, hop-off autonomous taxi-based mobility services may be ideal for some of the urban population, several sizeable customer segments will still want to own their own cars.
These include people who live in sparsely populated rural areas and can’t afford to wait extended periods of time for a taxi to appear. Families with children will prefer to own their own driverless cars to house their children’s car seats and favorite toys and sippy cups. Another loyal car-buying segment will be die-hard gadget-hounds who will eagerly buy a sexy upgraded model every year or so, unable to resist the siren song of AI that is three times as safe, or a ride that is twice as smooth.
Finally, consider the allure of robotic diversity.
Commuters will invest in a home office on wheels, a sleek, traveling workspace resembling the first-class suite on an airplane. On the high end of the market, city-dwellers and country-dwellers alike will special-order custom-made autonomous vehicles whose shape and on-board gadgetry is adapted for a particular function or hobby. Privately-owned small businesses will buy their own autonomous delivery robot that could range in size from a knee-high, last-mile delivery pod, to a giant, long-haul shipping device.
As autonomous vehicles near commercial viability, Waymo’s procurement deal with Fiat Chrysler is just the beginning.
The exact value of this future automotive industry has yet to be defined, but research from Intel’s internal autonomous vehicle division estimates this new so-called “passenger economy” could be worth nearly $7 trillion a year. To position themselves to capture a chunk of this potential revenue, companies whose businesses used to lie in previously disparate fields such as robotics, software, ships, and entertainment (to name but a few) have begun to form a bewildering web of what they hope will be symbiotic partnerships. Car hailing and chip companies are collaborating with car rental companies, who in turn are befriending giant software firms, who are launching joint projects with all sizes of hardware companies, and so on.
Last year, car companies sold an estimated 80 million new cars worldwide. Over the course of nearly a century, car companies and their partners, global chains of suppliers and service providers, have become masters at mass-producing and maintaining sturdy and cost-effective human-driven vehicles. As autonomous vehicle technology becomes ready for mainstream use, traditional automotive companies are being forced to grapple with the painful realization that they must compete in a new playing field.
The challenge for traditional car-makers won’t be that people no longer want to own cars. Instead, the challenge will be learning to compete in a new and larger transportation industry where consumers will choose their product according to the appeal of its customized body and the quality of its intelligent software.
—
Melba Kurman and Hod Lipson are the authors of Driverless: Intelligent Cars and the Road Ahead and Fabricated: the New World of 3D Printing.
Image Credit: hfzimages / Shutterstock.com
#431869 When Will We Finally Achieve True ...
The field of artificial intelligence goes back a long way, but many consider it to have been officially born when a group of scientists at Dartmouth College got together for a summer in 1956. Computers had, over the preceding decades, come on in incredible leaps and bounds; they could now perform calculations far faster than humans. Given that progress, optimism was rational. The genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The scientists had a fairly simple idea: intelligence is, after all, just a mathematical process. The human brain was a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This research proposal, by the way, contains one of the earliest uses of the term artificial intelligence. They had a number of ideas—maybe simulating the human brain’s pattern of neurons could work and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go. This is unlike Moore’s Law, the doubling of processing power roughly every couple of years, which makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
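The contrast is worth spelling out: Moore’s Law can be written down as a one-line model and checked against measurements, while nothing comparable exists for progress toward human-level AI. A minimal sketch, with illustrative numbers only:

```python
# Moore's Law as a concrete, checkable model: capacity doubles roughly
# every two years. Illustrative only; real doubling periods have varied.
def moores_law(start_value, years, doubling_period=2.0):
    """Project exponential growth with a fixed doubling period."""
    return start_value * 2 ** (years / doubling_period)

# A chip with 1 billion transistors, projected 10 years out:
projected = moores_law(1e9, years=10)
print(f"{projected:.1e}")  # 3.2e+10, i.e. 32 billion
```

A prediction of this form can be falsified by simply counting transistors in a few years’ time; a prediction like “human-level AI by 2040” offers no intermediate quantity to measure against.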
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That’s enough to convince people they’re working on something that could prove revolutionary very soon (people are less impressed by efforts that will lead to tangible results centuries down the line), but not enough for you to be embarrassingly proved wrong. Of the two, Armstrong found more evidence for the second one—people were perfectly happy to predict AI after they died, although most didn’t, but there was a clear bias towards “15–20 years from now” in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to those that say AI, or else consciousness, will “emerge” if a network is sufficiently complex, given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know if this is possible, we cannot know how far away we are from reaching this, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. For example, the Turing test has not yet been passed in the sense that AI cannot convince people it’s human in conversation; but of course the calculating ability, and perhaps soon the ability to perform other tasks like pattern recognition and driving cars, far exceed human levels. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find more ways to feed more data into more algorithms, the impact on society from this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those that point out we should be worried about AI for other reasons. Just because we can’t say for sure if human-level AI will arrive this century, or never, it doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors could be correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met in Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com
#431603 What We Can Learn From the Second Life ...
For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
Google Glass, the original smart glasses, went on sale in 2013 for $1,500 as a prototype for Google’s acolytes, around 8,000 early adopters. Users could control the glasses with a touchpad or, after waking them by tilting the head back, with voice commands. Audio relay—as with several wearable products—is via bone conduction, which transmits sound by vibrating the skull bones of the user. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, regardless of whether it’s because we’re not used to them, or if it’s a more permanent feature, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with the ominous slogan “Thanks for exploring with us” posted on Google’s website. Reminding Glass users that they had always been referred to as “explorers”—beta-testers of a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years that Google Glass was dormant, the device is now roaring back to life and commercially available: the relaunch got under way in earnest in July of 2017. The difference this time is a specific audience: workers in factories who need hands-free computing because they must use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com