Tag Archives: evidence

#431920 If We Could Engineer Animals to Be as ...

Advances in neural implants and genetic engineering suggest that in the not-too-distant future we may be able to boost human intelligence. If that’s true, could we—and should we—bring our animal cousins along for the ride?
Human brain augmentation made headlines last year after several tech firms announced ambitious efforts to build neural implant technology. Duke University neuroscientist Mikhail Lebedev told me in July it could be decades before these devices have applications beyond the strictly medical.
But he said the technology, as well as other pharmacological and genetic engineering approaches, will almost certainly allow us to boost our mental capacities at some point in the next few decades.
Whether this kind of cognitive enhancement is a good idea or not, and how we should regulate it, are matters of heated debate among philosophers, futurists, and bioethicists, but for some it has raised the question of whether we could do the same for animals.
There’s already tantalizing evidence of the idea’s feasibility. As detailed in BBC Future, a group from MIT found that mice that were genetically engineered to express the human FOXP2 gene linked to learning and speech processing picked up maze routes faster. Another group at Wake Forest University studying Alzheimer’s found that neural implants could boost rhesus monkeys’ scores on intelligence tests.
The concept of “animal uplift” is most famously depicted in the Planet of the Apes movie series, whose planet-conquering protagonists are likely to put most people off the idea. But proponents are less pessimistic about the outcomes.
Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.
Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans.
Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.
The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some well below our own and others superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.
Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.
There are also fundamental barriers that may make it difficult to achieve human-level cognitive capabilities in animals, no matter how advanced brain augmentation technology gets. In 2013 Swedish researchers selectively bred small fish called guppies for bigger brains. This made them smarter, but growing the energy-intensive organ meant the guppies developed smaller guts and produced fewer offspring to compensate.
This highlights the fact that uplifting animals may require more than just changes to their brains, possibly a complete rewiring of their physiology that could prove far more technically challenging than human brain augmentation.
Our intelligence is intimately tied to our evolutionary history—our brains are bigger than other animals’; opposable thumbs allow us to use tools; our vocal cords make complex communication possible. No matter how much you augment a cow’s brain, it still couldn’t use a screwdriver or talk to you in English, because it simply doesn’t have the machinery.
Finally, from a purely selfish point of view, even if it does become possible to create a level playing field between us and other animals, it may not be a smart move for humanity. There’s no reason to assume animals would be any more benevolent than we are, having evolved in the same ‘survival of the fittest’ crucible that we have. And given our already endless capacity to divide ourselves along national, religious, or ethnic lines, conflict between species seems inevitable.
We’re already likely to face considerable competition from smart machines in the coming decades if you believe the hype around AI. So maybe adding a few more intelligent species to the mix isn’t the best idea.
Image Credit: Ron Meijer / Shutterstock.com

Posted in Human Robots

#431869 When Will We Finally Achieve True ...

The field of artificial intelligence goes back a long way, but many consider it to have been officially born when a group of scientists at Dartmouth College got together for a summer in 1956. Computers had, over the preceding decades, come on in incredible leaps and bounds; they could now perform calculations far faster than humans. Given that progress, optimism was rational. Genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The scientists had a fairly simple idea: intelligence is, after all, just a mathematical process. The human brain was a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This research proposal, by the way, contains one of the earliest uses of the term artificial intelligence. They had a number of ideas—maybe simulating the human brain’s pattern of neurons could work and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go. This is unlike Moore’s Law. Moore’s Law, the doubling of processing power roughly every couple of years, makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
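The concreteness of Moore’s Law can be made literal. The hypothetical helper below simply evaluates the doubling formula (the two-year period is the usual rough figure, assumed here); it is exactly the kind of checkable projection that AI timelines lack.

```python
# Moore's Law as a concrete, checkable prediction: transistor counts
# doubling roughly every two years (the period is an assumed rough figure).
def moores_law(count_now: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count `years` into the future."""
    return count_now * 2 ** (years / doubling_period)

# e.g. a chip with 1 billion transistors, projected 10 years out:
projected = moores_law(1e9, 10)  # 2**5 = 32x growth
```

There is no analogous formula for "distance remaining to human-level AI," which is the article's point: you can falsify a Moore's Law projection, but not a timeline for something whose remaining difficulty is unknown.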
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That’s enough to convince people they’re working on something that could prove revolutionary very soon (people are less impressed by efforts that will lead to tangible results centuries down the line), but not enough for you to be embarrassingly proved wrong. Of the two, Armstrong found more evidence for the second one—people were perfectly happy to predict AI after they died, although most didn’t, but there was a clear bias towards “15–20 years from now” in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to those who say AI, or else consciousness, will “emerge” if a network is sufficiently complex, given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know if this is possible, we cannot know how far away we are from reaching it, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. For example, the Turing test has not yet been passed in the sense that AI cannot convince people it’s human in conversation; but of course machines’ calculating abilities, and perhaps soon their abilities at tasks like pattern recognition and driving cars, far exceed human levels. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find more ways to feed more data into more algorithms, the impact on society from this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those that point out we should be worried about AI for other reasons. Just because we can’t say for sure if human-level AI will arrive this century, or never, it doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors could be correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met in Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com

Posted in Human Robots

#431836 Do Our Brains Use Deep Learning to Make ...

The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already established by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The blame game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like children trying to learn a skill the first time, initially the network doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweakings later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
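The loop described above—forward pass, error, backpropagated blame—can be sketched in plain NumPy. This is a toy illustration, not any particular production setup: the XOR task, network size, and learning rate are all arbitrary choices for demonstration.

```python
import numpy as np

# A tiny two-layer network learning XOR via backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
losses = []
for step in range(5000):
    # Forward pass: the signal climbs the network layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Error: how far the output is from the ideal output.
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation: assign blame to each weight via the chain rule.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

# After many tweakings, the loss has shrunk: blame was assigned usefully.
print(losses[0], "->", losses[-1])
```

Each iteration is one round of the "hey, this is how far off you were" message the article describes, delivered to every weight in the network.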
The brain does this too. How? We have no clue.
Biological No-Go
What’s clear, though, is that deep learning’s solution doesn’t work in the brain.
Backprop is a pretty needy function. It requires a very specific infrastructure to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop—essentially between neighboring neurons—it requires the forward and backward connections to be symmetric. This hardly ever occurs in the brain’s synapses.
More recent algorithms adopt a slightly different strategy, implementing a separate feedback pathway that helps the neurons to figure out errors locally. While this is more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
What it does have are neurons with intricate structures, unlike the uniform “balls” that are currently applied in deep learning.
Branching Networks
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
This is an illustration of a multi-compartment neural network model for deep learning. Left: Reconstruction of pyramidal neurons from mouse primary visual cortex. Right: Illustration of simplified pyramidal neuron models. Image Credit: CIFAR
Curiously, the structure of neurons often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than having uniform neurons, they gave those in middle layers—sandwiched between the input and output—compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
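The compartment idea can be caricatured in a few lines of code. To be clear, this is not the study's actual model (the eLife paper uses detailed neuron dynamics); it is only a hypothetical sketch of the key trick: feedforward input and top-down error live in segregated compartments and meet only at a designated integration event.

```python
import numpy as np

# Hypothetical sketch of a two-compartment neuron. The basal compartment
# integrates feedforward input; the apical compartment accumulates top-down
# feedback (the error signal). The two are only combined at a designated
# integration step, so the error never contaminates the ongoing forward pass.
class TwoCompartmentNeuron:
    def __init__(self):
        self.basal = 0.0   # driven by sensory / lower-layer input
        self.apical = 0.0  # driven by feedback from higher layers

    def feedforward(self, drive: float) -> float:
        self.basal = float(np.tanh(drive))  # local computation on input
        return self.basal                   # what gets sent downstream

    def feedback(self, error_drive: float) -> None:
        self.apical = error_drive           # stored separately, not mixed in

    def integrate(self) -> float:
        # Integration event: compare the apical (teaching) signal with the
        # basal activity to produce a local learning signal.
        return self.apical - self.basal

n = TwoCompartmentNeuron()
out = n.feedforward(0.5)   # forward pass, unaffected by any feedback
n.feedback(0.9)            # top-down error arrives at the apical "tuft"
delta = n.integrate()      # local "blame" that could drive weight updates
```

The point of the sketch is purely structural: no separate error network is needed, because the neuron's own geometry keeps the two streams apart until they are deliberately compared.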
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also be nudging lower processing neurons towards a “better” activity pattern in real-time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they could experimentally test: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com

Posted in Human Robots

#431414 This Week’s Awesome Stories From ...

QUANTUM COMPUTING
IBM Raises the Bar With a 50-Qubit Quantum Computer
Will Knight | MIT Technology Review
“50 qubits is a significant landmark in progress toward practical quantum computers. Other systems built so far have had limited capabilities and could perform only calculations that could also be done on a conventional supercomputer. A 50-qubit machine can do things that are extremely difficult to simulate without quantum technology.”
ARTIFICIAL INTELLIGENCE
AI Startup Embodied Intelligence Wants Robots to Learn From Humans in Virtual Reality
Evan Ackerman | IEEE Spectrum
“This is a defining problem for robotics right now: Robots can do anything you want, as long as you tell them exactly what that is, every single time… This week, Abbeel and several of his colleagues from UC Berkeley and OpenAI are announcing a new startup (with US $7 million in seed funding) called Embodied Intelligence, which will ‘enable industrial robot arms to perceive and act like humans instead of just strictly following pre-programmed trajectories.’”
TRANSPORTATION
Uber’s Plan to Launch Flying Cars in LA by 2020 Really Could Take Off
Jack Stewart | Wired
“After grabbing an elevator, passengers will tap their phones to pass through a turnstile and access the roof. Presumably they’ve been prescreened, because there’s no airport-style security in evidence. An agent in an orange vest takes a group of four passengers out to the waiting aircraft. There’s a pilot up front, and a small overhead display with the estimated arrival time.”
ROBOTICS
This Robot Swarm Finishes Your Grocery Shopping in Minutes
Jesus Diaz | Fast Company
“At an Ocado warehouse in the English town of Andover, a swarm of 1,000 robots races over a grid the size of a soccer field, filling orders and replacing stock. The new system, which went live earlier this year, can fulfill a 50-item order in under five minutes—something that used to take about two hours at human-only facilities. It’s been so successful that Ocado is now building a new warehouse that’s three times larger in Erith, southeast of London.”
BIOTECH
Meet the Scientists Building a Library of Designer Drugs
Angela Chen | The Verge
“One of the most prominent categories of designer drugs are those intended to mimic marijuana, called synthetic cannabinoids. Marijuana, or cannabis, is widely considered one of the safest drugs, but synthetic cannabinoids are some of the most dangerous synthetic drugs.”
Image Credit: anucha sirivisansuwan / Shutterstock.com

Posted in Human Robots

#431301 Collective Intelligence Is the Root of ...

Many of us intuitively think about intelligence as an individual trait. As a society, we have a tendency to praise individual game-changers for accomplishments that would not be possible without their teams, often tens of thousands of people who work behind the scenes to make extraordinary things happen.
Matt Ridley, best-selling author of multiple books, including The Rational Optimist: How Prosperity Evolves, challenges this view. He argues that human achievement and intelligence are entirely “networking phenomena.” In other words, intelligence is collective and emergent as opposed to individual.
When asked what scientific concept would improve everybody’s cognitive toolkit, Ridley highlights collective intelligence: “It is by putting brains together through the division of labor—through trade and specialization—that human society stumbled upon a way to raise the living standards, carrying capacity, technological virtuosity, and knowledge base of the species.”
Ridley has spent a lifetime exploring human prosperity and the factors that contribute to it. In a conversation with Singularity Hub, he redefined how we perceive intelligence and human progress.
Raya Bidshahri: The common perspective seems to be that competition is what drives innovation and, consequently, human progress. Why do you think collaboration trumps competition when it comes to human progress?
Matt Ridley: There is a tendency to think that competition is an animal instinct that is natural and collaboration is a human instinct we have to learn. I think there is no evidence for that. Both are deeply rooted in us as a species. The evidence from evolutionary biology tells us that collaboration is just as important as competition. Yet, in the end, the Darwinian perspective is quite correct: it’s usually cooperation for the purpose of competition, wherein a given group tries to achieve something more effectively than another group. But the point is that the capacity to cooperate is very deep in our psyche.
RB: You write that “human achievement is entirely a networking phenomenon,” and we need to stop thinking about intelligence as an individual trait, and that instead we should look at what you refer to as collective intelligence. Why is that?
MR: The best way to think about it is that IQ doesn’t matter, because a hundred stupid people who are talking to each other will accomplish more than a hundred intelligent people who aren’t. It’s absolutely vital to see that everything from the manufacturing of a pencil to the manufacturing of a nuclear power station can’t be done by an individual human brain. You can’t possibly hold in your head all the knowledge you need to do these things. For the last 200,000 years we’ve been exchanging and specializing, which enables us to achieve much greater intelligence than we can as individuals.
RB: We often think of achievement and intelligence on individual terms. Why do you think it’s so counter-intuitive for us to think about collective intelligence?
MR: People are surprisingly myopic when it comes to understanding the nature of intelligence. I think it goes back to a pre-human tendency to think in terms of individual stories and actors. For example, we love to read about the famous inventor or scientist who invented or discovered something. We never tell these stories as network stories. We tell them as individual hero stories.

“It’s absolutely vital to see that everything from the manufacturing of a pencil to the manufacturing of a nuclear power station can’t be done by an individual human brain.”

This idea of a brilliant hero who saves the world in the face of every obstacle seems to speak to tribal hunter-gatherer societies, where the alpha male leads and wins. But it doesn’t resonate with how human beings have structured modern society in the last 100,000 years or so. We modern-day humans haven’t internalized a way of thinking that incorporates this definition of distributed and collective intelligence.
RB: One of the books you’re best known for is The Rational Optimist. What does it mean to be a rational optimist?
MR: My optimism is rational because it’s not based on a feeling, it’s based on evidence. If you look at the data on human living standards over the last 200 years and compare it with the way that most people actually perceive our progress during that time, you’ll see an extraordinary gap. On the whole, people seem to think that things are getting worse, but things are actually getting better.
We’ve seen the most astonishing improvements in human living standards: we’ve brought the share of people living in extreme poverty down to 9 percent from about 70 percent when I was born. The human lifespan is expanding by five hours a day, child mortality has gone down by two thirds in half a century, and much more. These feats dwarf the things that are going wrong. Yet most people are quite pessimistic about the future despite the things we’ve achieved in the past.
RB: Where does this idea of collective intelligence fit in rational optimism?
MR: Underlying the idea of rational optimism was understanding what prosperity is, and why it happens to us and not to rabbits or rocks. Why are we the only species in the world that has concepts like a GDP, growth rate, or living standard? My answer is that it comes back to this phenomena of collective intelligence. The reason for a rise in living standards is innovation, and the cause of that innovation is our ability to collaborate.
The grand theme of human history is the exchange of ideas, collaborating through specialization and the division of labor. Throughout history, it’s in places with a lot of open exchange and trade that you get a lot of innovation. And indeed, there are some extraordinary episodes in human history when societies get cut off from exchange, their innovation slows down, and they start moving backwards. One example of this is Tasmania, which was isolated and lost a lot of the technologies it started off with.
RB: Lots of people like to point out that just because the world has been getting better doesn’t guarantee it will continue to do so. How do you respond to that line of argumentation?
MR: There is a quote by Thomas Babington Macaulay from 1830, where he was fed up with the pessimists of the time saying things will only get worse. He says, “On what principle is it that with nothing but improvement behind us, we are to expect nothing but deterioration before us?” And this was back in the 1830s, when in Britain and a few other parts of the world we were only seeing the beginning of the rise of living standards. It’s perverse to argue that because things were getting better in the past, they are now about to get worse.

“I think it’s worth remembering that good news tends to be gradual, and bad news tends to be sudden. Hence, the good stuff is rarely going to make the news.”

Another thing to point out is that people have always said this. Every generation thought they were at the peak looking downhill. If you think about the opportunities technology is about to give us, whether it’s through blockchain, gene editing, or artificial intelligence, there is every reason to believe that 2017 is going to look like a time of absolute misery compared to what our children and grandchildren are going to experience.
RB: There seems to be a fair amount of mayhem in today’s world, and lots of valid problems to pay attention to in the news. What would you say to empower our readers that we will push through it and continue to grow and improve as a species?
MR: I think it’s worth remembering that good news tends to be gradual, and bad news tends to be sudden. Hence, the good stuff is rarely going to make the news. It’s happening in an inexorable way, as a result of ordinary people exchanging, specializing, collaborating, and innovating, and it’s surprisingly hard to stop it.
Even if you look back to the 1940s, at the end of a world war, there was still a lot of innovation happening. In some ways it feels like we are going through a bad period now. I do worry a lot about the anti-enlightenment values that I see spreading in various parts of the world. But then I remind myself that people are working on innovative projects in the background, and these things are going to come through and push us forward.
Image Credit: Sahacha Nilkumhang / Shutterstock.com


Posted in Human Robots