Tag Archives: internet

#431902 Old dog, new tricks: Sony unleashes ...

As Japan celebrates the year of the dog, electronics giant Sony on Thursday unleashed its new robot canine companion, packed with artificial intelligence and internet connectivity.

Posted in Human Robots

#431899 Darker Still: Black Mirror’s New ...

The key difference between science fiction and fantasy is that science fiction is grounded in scientific plausibility, while fantasy is not. That grounding is what makes Black Mirror both an entertaining and a terrifying work of science fiction. Created by Charlie Brooker, the anthology series tells cautionary tales of emerging technology that could one day be an integral part of our everyday lives.
While watching the often alarming episodes, one can’t help but recognize the eerie similarities to some of the tech tools that are already abundant in our lives today. In fact, many previous Black Mirror predictions are already becoming reality.
The latest season of Black Mirror was arguably darker than ever. This time, Brooker seemed to focus on the ethical implications of one particular area: neurotechnology.
Emerging Neurotechnology
Warning: The remainder of this article may contain spoilers from Season 4 of Black Mirror.
Most of the storylines from season four revolve around neurotechnology and brain-machine interfaces. They are set in a world where people have the power to upload their consciousness onto machines, have fully immersive experiences in virtual reality, merge their minds with other minds, record others’ memories, and even track what others are thinking, feeling, and doing.
How can all this ever be possible? Well, these capabilities are already being developed by pioneers and researchers globally. Early last year, Elon Musk unveiled Neuralink, a company whose goal is to merge the human mind with AI through a neural lace. We’ve already connected two brains via the internet, allowing one brain to communicate with another. Various research teams have been able to develop mechanisms for “reading minds” or reconstructing memories of individuals via devices. The list goes on.
With many of the technologies we see in Black Mirror it’s not a question of if, but when. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to upload our consciousness onto the cloud via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” While other experts continue to challenge Kurzweil on the exact year we’ll accomplish this feat, with the current exponential growth of our technological capabilities, we’re on track to get there eventually.
Ethical Questions
As always, technology is only half the conversation. Equally fascinating are the many ethical and moral questions this topic raises.
For instance, with the increasing convergence of artificial intelligence and virtual reality, we have to ask ourselves if our morality from the physical world transfers equally into the virtual world. The first episode of season four, USS Callister, tells the story of a VR pioneer, Robert Daly, who creates breakthrough AI and VR to satisfy his personal frustrations and sexual urges. He uses the DNA of his coworkers (and their children) to re-create them digitally in his virtual world, to which he escapes to torture them, while their real-world counterparts remain oblivious.
Audiences are left asking themselves: should what happens in the digital world be considered any less “real” than the physical world? How do we know if the individuals in the virtual world (who are ultimately based on algorithms) have true feelings or sentiments? Have they been developed to exhibit characteristics associated with suffering, or can they really feel suffering? Fascinatingly, these questions point to the hard problem of consciousness—the question of if, why, and how a given physical process generates the specific experience it does—which remains a major mystery in neuroscience.
Towards the end of USS Callister, the hostages of Daly’s virtual world attempt to escape through suicide, by committing an act that will delete the code that allows them to exist. This raises yet another mind-boggling ethical question: if we “delete” code that signifies a digital being, should that be considered murder (or suicide, in this case)? Why shouldn’t it? When we murder someone we are, in essence, taking away their capacity to live and to be, without their consent. By unplugging a self-aware AI, wouldn’t we be violating its basic right to live in the same way? Does AI, as code, even have rights?
Brain implants can also have a radical impact on our self-identity and how we define the word “I”. In the episode Black Museum, instead of witnessing just one horror, we get a series of scares in little segments. One of those segments tells the story of a father who attempts to reincarnate the mother of his child by uploading her consciousness into his mind and allowing her to live in his head (essentially giving him multiple personality disorder). In this way, she can experience special moments with their son.
With “no privacy for him, and no agency for her” the good intention slowly goes very wrong. This story raises a critical question: should we be allowed to upload consciousness into limited bodies? Even more, if we are to upload our minds into “the cloud,” at what point do we lose our individuality to become one collective being?
These questions can form the basis of hours of debate, but we’re just getting started. There are no right or wrong answers with many of these moral dilemmas, but we need to start having such discussions.
The Downside of Dystopian Sci-Fi
Like last season’s San Junipero, one episode of the series, Hang the DJ, had an uplifting ending. Yet the overwhelming majority of the stories in Black Mirror continue to focus on the darkest side of human nature, feeding into the pre-existing paranoia of the general public. There is certainly some value in this; it’s important to be aware of the dangers of technology. After all, what better way to explore these dangers before they occur than through speculative fiction?
A big takeaway from every tale told in the series is that the greatest threat to humanity does not come from technology, but from ourselves. Technology itself is not inherently good or evil; it all comes down to how we choose to use it as a society. So for those of you who are techno-paranoid, beware, for it’s not the technology you should fear, but the humans who get their hands on it.
While we can paint negative visions of the future, it is also important to paint positive ones. The kind of visions we set for ourselves have the power to inspire and motivate generations. Many people are inherently pessimistic when thinking about the future, and that pessimism in turn can shape their contributions to humanity.
While utopia may not exist, the future of our species could and should be one of solving global challenges, abundance, prosperity, liberation, and cosmic transcendence. Now that would be a thrilling episode to watch.
Image Credit: Billion Photos / Shutterstock.com

Posted in Human Robots

#431873 Why the World Is Still Getting ...

If you read or watch the news, you’ll likely think the world is falling to pieces. Trends like terrorism, climate change, and a growing population straining the planet’s finite resources can easily lead you to think our world is in crisis.
But there’s another story, a story the news doesn’t often report. This story is backed by data, and it says we’re actually living in the most peaceful, abundant time in history, and things are likely to continue getting better.
The News vs. the Data
The reality that’s often clouded by a constant stream of bad news is that we’re actually seeing a massive drop in poverty and fewer deaths from violent crime and preventable diseases. On top of that, we’re the most educated populace to ever walk the planet.
“Violence has been in decline for thousands of years, and today we may be living in the most peaceful era in the existence of our species.” –Steven Pinker
In the last hundred years, we’ve seen the average human life expectancy nearly double, the global GDP per capita rise exponentially, and childhood mortality drop 10-fold.

That’s pretty good progress! Maybe the world isn’t all gloom and doom. If you’re still not convinced the world is getting better, check out the charts in this article from Vox and on Peter Diamandis’ website for a lot more data.
Abundance for All Is Possible
So now that you know the world isn’t so bad after all, here’s another thing to think about: it can get much better, very soon.
In their book Abundance: The Future Is Better Than You Think, Steven Kotler and Peter Diamandis suggest it may be possible for us to meet and even exceed the basic needs of all the people living on the planet today.
“In the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.”
This means making sure every single person in the world has adequate food, water and shelter, as well as a good education, access to healthcare, and personal freedom.
This might seem unimaginable, especially if you tend to think the world is only getting worse. But given how much progress we’ve already made in the last few hundred years, coupled with the recent explosion of information sharing and new, powerful technologies, abundance for all is not as out of reach as you might believe.
Throughout history, we’ve seen that in the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.
Napoleon III
In Abundance, Diamandis and Kotler tell the story of how aluminum went from being one of the rarest metals on the planet to being one of the most abundant…
In the 1800s, aluminum was more valuable than silver and gold because it was rarer. So when Napoleon III entertained the King of Siam, the king and his guests were honored by being given aluminum utensils, while the rest of the dinner party ate with gold.
But aluminum is not really rare.
In fact, aluminum is the third most abundant element in the Earth’s crust, making up 8.3 percent of its weight. But it wasn’t until chemists Charles Martin Hall and Paul Héroult discovered how to use electrolysis to cheaply separate aluminum from surrounding materials that the element suddenly became abundant.
The problems keeping us from achieving a world where everyone’s basic needs are met may seem like resource problems — when in reality, many are accessibility problems.
The Engine Driving Us Toward Abundance: Exponential Technology
History is full of examples like the aluminum story. The most powerful one of the last few decades is information technology. Think about all the things that computers and the internet made abundant that were previously far less accessible because of cost or availability … Here are just a few examples:

Easy access to the world’s information
Ability to share information freely with anyone and everyone
Free/cheap long-distance communication
Buying and selling goods/services regardless of location

Less than two decades ago, someone who reached a certain level of economic stability could spend somewhere around $10,000 on stereos, cameras, and entertainment systems. Today, we have all that equipment in the palm of our hand.
Now, there is a new generation of technologies heavily dependent on information technology and, therefore, similarly riding the wave of exponential growth. When put to the right use, emerging technologies like artificial intelligence, robotics, digital manufacturing, nanomaterials and digital biology make it possible for us to drastically raise the standard of living for every person on the planet.

These are just some of the innovations which are unlocking currently scarce resources:

IBM’s Watson Health is being trained and used in medical facilities like the Cleveland Clinic to help doctors diagnose disease. In the future, it’s likely we’ll trust AI just as much as, if not more than, humans to diagnose disease, allowing people all over the world to have access to great diagnostic tools regardless of whether there is a well-trained doctor near them.

Solar power is now cheaper than fossil fuels in some parts of the world, and with advances in new materials and storage, the cost may decrease further. This could eventually lead to nearly-free, clean energy for people across the world.

Google’s GNMT network can now translate languages as well as a human, unlocking the ability for people to communicate globally as we never have before.

Self-driving cars are already on the roads of several American cities and will be coming to a road near you in the next couple years. Considering the average American spends nearly two hours driving every day, not having to drive would free up an increasingly scarce resource: time.

The Change-Makers
Today’s innovators can create enormous change because they have these incredible tools—which would have once been available only to big organizations—at their fingertips. And, as a result of our hyper-connected world, there is an unprecedented ability for people across the planet to work together to create solutions to some of our most pressing problems today.
“In today’s hyperlinked world, solving problems anywhere, solves problems everywhere.” –Peter Diamandis and Steven Kotler, Abundance
According to Diamandis and Kotler, there are three groups of people accelerating positive change.

DIY Innovators
In the 1970s and 1980s, the Homebrew Computer Club was a meeting place of “do-it-yourself” computer enthusiasts who shared ideas and spare parts. By the 1990s and 2000s, that little club had become known as an inception point for the personal computer industry — dozens of companies, including Apple Computer, can directly trace their origins back to Homebrew. Since then, we’ve seen the rise of the social entrepreneur, the Maker Movement and the DIY Bio movement, which have similar ambitions to democratize social reform, manufacturing, and biology, the way Homebrew democratized computers. These are the people who look for new opportunities and aren’t afraid to take risks to create something new that will change the status quo.
Techno-Philanthropists
Unlike the robber barons of the 19th and early 20th centuries, today’s “techno-philanthropists” are not just giving away some of their wealth for a new museum; they are using their wealth to solve global problems and investing in social entrepreneurs aiming to do the same. The Bill and Melinda Gates Foundation has given away at least $28 billion, with a strong focus on ending diseases like polio, malaria, and measles for good. Jeff Skoll, after cashing out of eBay with $2 billion in 1998, went on to create the Skoll Foundation, which funds social entrepreneurs across the world. And last year, Mark Zuckerberg and Priscilla Chan pledged to give away 99% of their $46 billion in Facebook stock during their lifetimes.
The Rising Billion
Cisco estimates that by 2020, there will be 4.1 billion people connected to the internet, up from 3 billion in 2015. This number might even be higher, given the efforts of companies like Facebook, Google, Virgin Group, and SpaceX to bring internet access to the world. That’s a billion new people in the next several years who will be connected to the global conversation, looking to learn, create and better their own lives and communities. In his book, The Fortune at the Bottom of the Pyramid, C.K. Prahalad writes that finding co-creative ways to serve this rising market can help lift people out of poverty while creating viable businesses for inventive companies.

The Path to Abundance
Eager to create change, innovators armed with powerful technologies can accomplish incredible feats. Kotler and Diamandis imagine that the path to abundance occurs in three tiers:

Basic Needs (food, water, shelter)
Tools of Growth (energy, education, access to information)
Ideal Health and Freedom

Of course, progress doesn’t always happen in a straight, logical way, but having a framework to visualize the needs is helpful.
Many people don’t believe it’s possible to end the persistent global problems we’re facing. However, looking at history, we can see many examples where technological tools have unlocked resources that previously seemed scarce.
Technological solutions are not always the answer, and we need social change and policy solutions as much as we need technology solutions. But we have seen time and time again, that powerful tools in the hands of innovative, driven change-makers can make the seemingly impossible happen.

You can download the full “Path to Abundance” infographic here. It was created under a CC BY-NC-ND license. If you share, please attribute to Singularity University.
Image Credit: janez volmajer / Shutterstock.com

Posted in Human Robots

#431872 AI Uses Titan Supercomputer to Create ...

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate, in less than a day, neural networks as good as, if not better than, any developed by a human.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. The system is modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
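At its core, an artificial neural network is just layered arithmetic: inputs are multiplied by weights, summed, and passed through a nonlinearity, layer after layer. A toy sketch in pure Python (an illustration of the general idea, not MENNDL's or AutoML's actual code) might look like this:

```python
import math
import random

random.seed(0)

def forward(x, w1, b1, w2, b2):
    """One forward pass through a tiny two-layer network:
    3 inputs -> 4 hidden units (tanh) -> 1 output."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Randomly initialized weights for a 3-input, 4-hidden-unit network.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

print(forward([0.5, -0.2, 0.1], w1, b1, w2, b2))
```

Training means nudging those weights to reduce prediction error; real deep learning systems do this across millions of weights on GPUs, which is where the months of work, and the supercomputers, come in.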
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
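The evolutionary idea behind a system like MENNDL can be sketched in a few lines: treat hyperparameters as genes, score candidate configurations with a fitness function (in reality, a full training run on the target dataset), and let the fittest survive and mutate. Everything below (the two-parameter search space, the stand-in fitness function, the population sizes) is illustrative, not taken from the ORNL work:

```python
import random

random.seed(42)

# Hypothetical search space: number of layers and units per layer.
SPACE = {"layers": range(1, 6), "units": range(8, 257)}

def fitness(cfg):
    # Stand-in for a real training run: pretend the best network has
    # 2 layers of 32 units, and score configs by closeness to that.
    return -abs(cfg["layers"] - 2) - abs(cfg["units"] - 32) / 32

def mutate(cfg):
    # Copy a configuration and randomly resample one hyperparameter.
    child = dict(cfg)
    key = random.choice(list(SPACE))
    child[key] = random.choice(list(SPACE[key]))
    return child

# Evolve a small population, keeping the fittest half each generation.
population = [{k: random.choice(list(v)) for k, v in SPACE.items()}
              for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best)
```

MENNDL's contribution is doing this at scale: each fitness evaluation is a real training job, and Titan's thousands of GPUs let it try hundreds of thousands of candidate networks in a day.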
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a Silicon Valley startup that uses open-source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com

Posted in Human Robots

#431869 When Will We Finally Achieve True ...

The field of artificial intelligence goes back a long way, but many consider it to have been officially born when a group of scientists gathered at Dartmouth College for a summer, back in 1956. Computers had, over the previous few decades, come on in incredible leaps and bounds; they could now perform calculations far faster than humans. Given the incredible progress that had been made, optimism was rational. Genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The scientists had a fairly simple idea: intelligence is, after all, just a mathematical process. The human brain was a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This research proposal, by the way, contains one of the earliest uses of the term artificial intelligence. They had a number of ideas: maybe simulating the human brain’s pattern of neurons could work, and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far there is left to go. This is unlike Moore’s Law, the doubling of processing power roughly every couple of years, which makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
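That concreteness is easy to check with simple arithmetic: doubling every two years means ten doublings over twenty years, a factor of 1,024. A prediction about AI offers nothing comparably computable:

```python
def moores_law_factor(years, doubling_period=2.0):
    """Projected growth factor if capacity doubles every
    `doubling_period` years (the classic Moore's Law cadence)."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(20))  # ten doublings -> 1024.0
print(moores_law_factor(10))  # five doublings -> 32.0
```

The point is not the exact numbers (the doubling period itself has drifted over the decades) but that the claim is quantitative and falsifiable in a way that "human-level AI by 2045" is not.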
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That’s enough to convince people they’re working on something that could prove revolutionary very soon (people are less impressed by efforts that will lead to tangible results centuries down the line), but not so soon that you risk being embarrassingly proved wrong. Of the two, Armstrong found more evidence for the second: people were perfectly happy to predict AI arriving after their deaths (although most didn’t), but there was a clear bias towards “15–20 years from now” in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to the claim that AI, or consciousness, will “emerge” if a network is sufficiently complex and given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this claim is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know if this is possible, we cannot know how far away we are from reaching it, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. For example, the Turing test has not yet been passed in the sense that AI cannot convince people it’s human in conversation; but of course the calculating ability, and perhaps soon the ability to perform other tasks like pattern recognition and driving cars, far exceed human levels. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find more ways to feed more data into more algorithms, the impact on society from this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those who point out we should be worried about AI for other reasons. Just because we can’t say for sure whether human-level AI will arrive this century, or ever, doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors could be correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met in Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com

Posted in Human Robots