Tag Archives: computers

#432036 The Power to Upgrade Our Own Biology Is ...

Upgrading our biology may sound like science fiction, but attempts to improve humanity actually date back thousands of years. Every day, we enhance ourselves through seemingly mundane activities like exercising and meditating, or by consuming performance-enhancing substances like caffeine or Adderall. However, the tools with which we upgrade our biology are improving at an accelerating rate and becoming increasingly invasive.

In recent decades, we have developed a wide array of powerful methods, such as genetic engineering and brain-machine interfaces, that are redefining our humanity. In the short run, such enhancement technologies have medical applications and may be used to treat many diseases and disabilities. Additionally, in the coming decades, they could allow us to boost our physical abilities or even digitize human consciousness.

What’s New?
Many futurists argue that our devices, such as our smartphones, are already an extension of our cortex and in many ways an abstract form of enhancement. According to philosophers Andy Clark and David Chalmers' theory of the extended mind, we use technology to expand the boundaries of the human mind beyond our skulls.

One can argue that having access to a smartphone enhances one's cognitive capacities and is itself an indirect form of enhancement—an abstract kind of brain-machine interface. Beyond that, wearable devices and computers are already on the market, and athletes, among others, use them to track and boost their progress.

However, these interfaces are becoming less abstract.

Not long ago, Elon Musk announced a new company, Neuralink, with the goal of merging the human mind with AI. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and have even allowed brains to communicate with one another purely through brainwaves.
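To make the decoding step concrete, here is a deliberately simplified sketch of how an interface might classify intended actions from neural activity. The simulated firing rates and the logistic-regression decoder are illustrative assumptions, not the pipeline used in the studies mentioned above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates of 32 neurons recorded while a
# patient imagines one of two actions (0 = "move left", 1 = "move right").
n_trials, n_neurons = 200, 32
labels = rng.integers(0, 2, n_trials)
tuning = rng.normal(0, 1, n_neurons)           # each neuron's preference
rates = rng.normal(5, 1, (n_trials, n_neurons))
rates += np.outer(2 * labels - 1, tuning)      # intention shifts firing rates

decoder = LogisticRegression().fit(rates, labels)

# Decode a new trial: the interface turns neural activity into a command.
new_trial = rng.normal(5, 1, n_neurons) + tuning  # an imagined "move right"
print(decoder.predict(new_trial.reshape(1, -1)))  # -> [1]
```

Real systems replace the toy classifier with far more elaborate signal processing, but the principle is the same: map patterns of neural activity to intended outputs.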

Ethical Challenges of Enhancement
There are many social and ethical implications of such advancements.

One of the most fundamental issues with cognitive and physical enhancement techniques is that they contradict the definition of merit and success that society has relied on for millennia. Many forms of performance-enhancing drugs have long been considered "cheating."

But perhaps we ought to revisit some of our fundamental assumptions as a society.

For example, we like to credit hard work and talent in a fair manner, where "fair" generally implies that individuals have earned their rewards through their own actions. If you are talented and successful, it is considered to be because you chose to work hard and take advantage of the opportunities available to you. But by these standards, how much of our accomplishments can we truly take credit for?

For instance, the genetic lottery can have an enormous impact on an individual’s predisposition and personality, which can in turn affect factors such as motivation, reasoning skills, and other mental abilities. Many people are born with a natural ability or a physique that gives them an advantage in a particular area or predisposes them to learn faster. But is it justified to reward someone for excellence if their genes had a pivotal role in their path to success?

Beyond that, there are already many ways in which we take "shortcuts" to better mental performance. Seemingly mundane activities like drinking coffee, meditating, exercising, or sleeping well can boost one's performance in any given area and are tolerated by society. Even the use of language can have positive physical and psychological effects on the human brain, which can be liberating to the individual and immensely beneficial to society at large. And let's not forget that some of us are born with far better access to literacy and education than others.

Given all these reasons, one could argue that cognitive abilities and talents currently derive more from uncontrollable factors and luck than we like to admit. If anything, technologies like brain-machine interfaces can enhance individual autonomy and give people a choice in how capable they become.

As Karim Jebari points out, if a certain characteristic or trait is required to perform a particular role and an individual lacks this trait, would it be wrong to implement the trait through brain-machine interfaces or genetic engineering? How is this different from any conventional form of learning or acquiring a skill? If anything, this would remove limitations on individuals that result from factors outside their control, such as a biological predisposition (or even traits induced by traumatic experiences) to act or perform in a certain way.

Another major ethical concern is equality. As with any other emerging technology, there are valid concerns that cognitive enhancement tech will benefit only the wealthy, thus exacerbating current inequalities. This is where public policy and regulations can play a pivotal role in the impact of technology on society.

Enhancement technologies can either deepen inequality or help us solve it. Educating and empowering the underprivileged could happen at a much faster rate, accelerating human progress overall. The "normal range" of human capacity and intelligence, however it is defined, could shift dramatically upward.

Many have also raised concerns over the negative applications of government-led biological enhancement, including eugenics-like movements and super-soldiers. Naturally, there are also issues of safety, security, and well-being, especially within the early stages of experimentation with enhancement techniques.

Brain-machine interfaces, for instance, could have implications for autonomy. The interface uses information extracted from the brain to stimulate or modify neural systems in order to accomplish a goal. Adding an artificial intelligence system to the interface could enhance this process—but it also opens the possibility of a third party altering individuals' personalities, emotions, and desires by tampering with the interface.

A Tool For Transcendence
It’s important to discuss these risks, not so that we begin to fear and avoid such technologies, but so that we continue to advance in a way that minimizes harm and allows us to optimize the benefits.

Stephen Hawking notes that “with genetic engineering, we will be able to increase the complexity of our DNA, and improve the human race.” Indeed, the potential advantages of modifying biology are revolutionary. Doctors would gain access to a powerful tool to tackle disease, allowing us to live longer and healthier lives. We might be able to extend our lifespan and tackle aging, perhaps a critical step to becoming a space-faring species. We may begin to modify the brain’s building blocks to become more intelligent and capable of solving grand challenges.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. Human enhancement is bringing us closer to such a world—it could allow us to take control of our evolution and truly shape the future of our species.

Image Credit: GrAl / Shutterstock.com

Posted in Human Robots

#432009 How Swarm Intelligence Is Making Simple ...

As a group, simple creatures following simple rules can display a surprising amount of complexity, efficiency, and even creativity. Known as swarm intelligence, this trait is found throughout nature, but researchers have recently begun using it to transform fields such as robotics, data mining, medicine, and blockchain technology.

Ants, for example, can only perform a limited range of functions, but an ant colony can build bridges, create superhighways of food and information, wage war, and enslave other ant species—all of which are beyond the comprehension of any single ant. Likewise, schools of fish, flocks of birds, beehives, and other species exhibit behavior indicative of planning by a higher intelligence that doesn’t actually exist.

This happens through a process called stigmergy. Simply put, a small change by a group member causes other members to behave differently, leading to a new pattern of behavior.

When an ant finds a food source, it marks the path with pheromones. This attracts other ants to that path, leads them to the food source, and prompts them to mark the same path with more pheromones. Over time, the most efficient route will become the superhighway, as the faster and easier a path is, the more ants will reach the food and the more pheromones will be on the path. Thus, it looks as if a more intelligent being chose the best path, but it emerged from the tiny, simple changes made by individuals.

So what does this mean for humans? Well, a lot. In the past few decades, researchers have developed numerous algorithms and metaheuristics, such as ant colony optimization and particle swarm optimization, and they are rapidly being adopted.
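As a toy illustration of the pheromone feedback loop described above (a sketch of the general idea, not any specific published ant colony optimization algorithm), consider two paths to a food source; shorter paths get reinforced faster and eventually capture nearly all the traffic:

```python
import random

# Two candidate paths to the food source; shorter = faster round trips.
paths = {"short": 1.0, "long": 2.0}      # path -> length (arbitrary units)
pheromone = {"short": 1.0, "long": 1.0}  # start with equal attractiveness
EVAPORATION = 0.05

for step in range(1000):
    # Each ant picks a path with probability proportional to its pheromone.
    total = sum(pheromone.values())
    choice = "short" if random.uniform(0, total) < pheromone["short"] else "long"

    # Deposit pheromone inversely proportional to path length, then evaporate.
    pheromone[choice] += 1.0 / paths[choice]
    for p in pheromone:
        pheromone[p] *= (1 - EVAPORATION)

print(pheromone)  # the short path ends up with nearly all the pheromone
```

No ant compares the paths; the "decision" emerges from reinforcement and evaporation alone.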

Swarm Robotics
A swarm of robots would work on the same principles as an ant colony: each member has a simple set of rules to follow, leading to self-organization and self-sufficiency.
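The classic demonstration of such rules is boids-style flocking: give each robot only a handful of local rules (avoid crowding, match neighbors' headings, drift toward the local center) and coordinated group motion emerges. The sketch below is a generic illustration of that principle, not the specific algorithm used by any project mentioned here:

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, (20, 2))   # 20 robots scattered on a plane
vel = rng.normal(0, 0.1, (20, 2))

for step in range(100):
    for i in range(len(pos)):
        offsets = pos - pos[i]                 # vectors to every other robot
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < 3.0)       # each robot senses only locally
        if not near.any():
            continue
        cohesion = offsets[near].mean(axis=0)           # drift toward neighbors
        alignment = vel[near].mean(axis=0) - vel[i]     # match neighbors' headings
        separation = -offsets[(dist > 0) & (dist < 1.0)].sum(axis=0)  # avoid crowding
        vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    pos += vel   # no leader and no global plan, yet a flock emerges
```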

For example, researchers at the Georgia Robotics and InTelligent Systems (GRITS) lab created a small swarm of simple robots that can spell words and play the piano. The robots cannot communicate directly; based solely on the positions of the surrounding robots, each uses a specially created algorithm to determine the optimal path to complete its task.

This is also immensely useful for drone swarms.

Last February, EHang, an aviation company out of China, flew a swarm of a thousand drones that not only lit the sky with colorful, intricate displays but also demonstrated the ability to improvise and troubleshoot errors entirely autonomously.

Further, just recently, the University of Cambridge and Koç University unveiled their idea for what they call the Energy Neutral Internet of Drones. Amazingly, this drone swarm would take the initiative to share information or energy with other drones that had not received a communication or were running low on energy.

Militaries around the world are investing in swarm technology as well.

Last year, the US Department of Defense announced it had successfully tested a swarm of miniature drones that could carry out complex missions more cheaply and efficiently. They claimed, "The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing."

Some experts estimate at least 30 nations are actively developing drone swarms—and even submersible drones—for military missions, including intelligence gathering, missile defense, precision missile strikes, and enhanced communication.

NASA also plans on deploying swarms of tiny spacecraft for space exploration, and the medical community is looking into using swarms of nanobots for precision delivery of drugs, microsurgery, targeting toxins, and biological sensors.

What If Humans Are the Ants?
The strength of any blockchain comes from the size and diversity of the community supporting it. Cryptocurrencies like Bitcoin, Ethereum, and Litecoin are driven by the people using, investing in, and, most importantly, mining them so their blockchains can function. Without an active community, or swarm, their blockchains wither away.

When viewed from a great height, a blockchain performs eerily like an ant colony in that it will naturally find the most efficient way to move vast amounts of information.

Miners compete with each other to perform the complex calculations necessary to add another block, and the winner is rewarded with the blockchain's native currency and agreed-upon fees. Of course, miners with more powerful computers are more likely to win the reward, which further strengthens their ability to mine and collect even more rewards. Over time, fewer and fewer miners will remain, as the winners become able to shoulder more of the workload more efficiently, in much the same way that ants consolidate their traffic onto superhighways.
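To make the competition concrete, here is a toy proof-of-work loop of the kind miners race to solve; real blockchains use far harder difficulty targets and richer block structures, so treat this purely as a sketch:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # the "winning" miner found this first
        nonce += 1

nonce, digest = mine("block #42: alice pays bob 1 coin")
print(nonce, digest)
# Doubling your hash rate doubles how fast you can search nonces, which is
# why miners with more powerful hardware win disproportionately often.
```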

Further, a company called Unanimous AI has developed algorithms that allow humans to collectively make predictions. So far, the AI algorithms and their human participants have made some astoundingly accurate predictions, such as the first four winning horses of the Kentucky Derby, the Oscar winners, the Stanley Cup winners, and others. The more people involved in the swarm, the greater their predictive power will be.

To be clear, this is not a prediction based on group consensus. Rather, the swarm of humans uses software to input their opinions in real time, thus making micro-changes to the rest of the swarm and the inputs of other members.
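Unanimous AI's actual algorithms are proprietary, so the sketch below is only a hypothetical illustration of the real-time dynamic described above: each participant repeatedly nudges a private preference toward where the group currently leans, and the population converges on a single answer no individual vote dictated.

```python
import numpy as np

rng = np.random.default_rng(7)
options = ["horse A", "horse B", "horse C"]

# Each participant starts with a private preference over the options.
prefs = rng.dirichlet(np.ones(len(options)), size=50)  # 50 participants

for tick in range(200):                 # simulated real-time updates
    group_pull = prefs.mean(axis=0)     # where the swarm currently leans
    # Everyone nudges slightly toward the group while keeping conviction.
    prefs = 0.95 * prefs + 0.05 * group_pull
    prefs /= prefs.sum(axis=1, keepdims=True)

print(options[int(np.argmax(prefs.mean(axis=0)))])  # the swarm's pick
```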

Studies show that swarm intelligence consistently outperforms individuals and crowds working without the algorithms. While this is only the tip of the iceberg, some have suggested swarm intelligence can revolutionize how doctors diagnose a patient or how products are marketed to consumers. It might even be an essential step in truly creating AI.

Swarm intelligence has long been an essential part of many species' success, and it's only a matter of time before humans fully harness its effectiveness as well.

Image Credit: Nature Bird Photography / Shutterstock.com

Posted in Human Robots

#431999 Brain-Like Chips Now Beat the Human ...

Move over, deep learning. Neuromorphic computing—the next big thing in artificial intelligence—is on fire.

Just last week, two studies individually unveiled computer chips modeled after information processing in the human brain.

The first, published in Nature Materials, offers a solution to the unpredictability of artificial synapses—the gaps between neurons across which information is transmitted and stored. The second, published in Science Advances, further amped up the system's computational power, filling synapses with nanoclusters of supermagnetic material to bolster information encoding.

The result? Brain-like hardware systems that compute faster—and more efficiently—than the human brain.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.

Experts are hopeful.

“The field’s full of hype, and it’s nice to see quality work presented in an objective way,” said Dr. Carver Mead, an engineer at the California Institute of Technology in Pasadena not involved in the work.

Software to Hardware
The human brain is the ultimate computational wizard. With roughly 100 billion neurons densely packed into the size of a small football, the brain can deftly handle complex computation at lightning speed using very little energy.

AI experts have taken note. The past few years saw brain-inspired algorithms that can identify faces, mimic voices, and play a variety of games at—and often above—human capability.

But software is only part of the equation. Our current computers, with their transistors and binary digital systems, aren’t equipped to run these powerful algorithms.

That’s where neuromorphic computing comes in. The idea is simple: fabricate a computer chip that mimics the brain at the hardware level. Here, data is both processed and stored within the chip in an analog manner. Each artificial synapse can accumulate and integrate small bits of information from multiple sources and fire only when it reaches a threshold—much like its biological counterpart.
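That accumulate-and-fire behavior is essentially the classic leaky integrate-and-fire model. Here is a minimal software version of the dynamic each artificial synapse-and-neuron pair is meant to reproduce (an illustration of the principle, not of either chip's circuitry):

```python
import numpy as np

rng = np.random.default_rng(3)

LEAK, THRESHOLD = 0.95, 1.0
potential = 0.0
spikes = []

# Small, noisy inputs arriving from three upstream sources each timestep.
for t in range(100):
    inputs = rng.uniform(0, 0.1, size=3)
    potential = LEAK * potential + inputs.sum()  # accumulate and slowly leak
    if potential >= THRESHOLD:
        spikes.append(t)     # fire only once the threshold is crossed...
        potential = 0.0      # ...then reset, like a biological neuron

print(f"fired at timesteps: {spikes}")
```

In a neuromorphic chip this loop is not simulated in software; the physics of the device performs it directly, which is where the speed and energy savings come from.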

Experts believe the speed and efficiency gains will be enormous.

For one, the chips will no longer have to transfer data between the central processing unit (CPU) and storage blocks, which wastes both time and energy. For another, like biological neural networks, neuromorphic devices can support neurons that run millions of streams of parallel computation.

A “Brain-on-a-chip”
Optimism aside, reproducing the biological synapse in hardware form hasn’t been as easy as anticipated.

Neuromorphic chips exist in many forms, but often look like a nanoscale metal sandwich. The “bread” pieces are generally made of conductive plates surrounding a switching medium—a conductive material of sorts that acts like the gap in a biological synapse.

When a voltage is applied, as in the case of data input, ions move within the switching medium, which then creates conductive streams to stimulate the downstream plate. This change in conductivity mimics the way biological neurons change their “weight,” or the strength of connectivity between two adjacent neurons.

But so far, neuromorphic synapses have been rather unpredictable. According to Kim, that's because the switching medium is often composed of material that can't channel ions to exact locations on the downstream plate.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” explains Kim. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects.”

In his new study, Kim and colleagues swapped the jelly-like switching medium for silicon, a material with only a single line of defects that acts like a channel to guide ions.

The chip starts with a thin wafer of silicon etched with a honeycomb-like pattern. On top is a layer of silicon germanium—something often present in transistors—in the same pattern. This creates a funnel-like dislocation, a kind of Grand Canal that perfectly shuttles ions across the artificial synapse.

The researchers then made a neuromorphic chip containing these synapses and shot an electrical zap through them. Incredibly, the synapses' responses varied by only four percent—far more uniform than those of any neuromorphic device made with an amorphous switching medium.

In a computer simulation, the team built a multi-layer artificial neural network using parameters measured from their device. After tens of thousands of training examples, their neural network correctly recognized samples 95 percent of the time, just 2 percent lower than state-of-the-art software algorithms.
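That simulation step is easy to approximate in software: train an ordinary network, then perturb its weights with the measured device-level variability and re-test. The sketch below is a hypothetical stand-in (scikit-learn on the digits dataset, not the team's actual network); only the roughly four percent variation figure comes from the article:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print("ideal weights:   ", net.score(X_test, y_test))

# Mimic device variability: jitter every weight with ~4% multiplicative
# noise, as if each synapse were one of the measured hardware devices.
rng = np.random.default_rng(0)
for W in net.coefs_:
    W *= rng.normal(1.0, 0.04, size=W.shape)
print("4%-noisy weights:", net.score(X_test, y_test))
```

The punchline of such an experiment is that accuracy barely drops, which is why low device-to-device variation matters so much for neuromorphic hardware.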

The upside? The neuromorphic chip requires much less space than the hardware that runs deep learning algorithms. Forget supercomputers—these chips could one day run complex computations right on our handheld devices.

A Magnetic Boost
Meanwhile, in Boulder, Colorado, Dr. Michael Schneider at the National Institute of Standards and Technology also realized that the standard switching medium had to go.

“There must be a better way to do this, because nature has figured out a better way to do this,” he says.

His solution? Nanoclusters of magnetic manganese.

Schneider's chip contained two superconducting electrodes made of niobium, which channel electricity with no resistance. When the researchers applied different magnetic fields to the synapse, they could control the alignment of the manganese "filling."

The switch gave the chip a double boost. For one, by aligning the switching medium, the team could predict the ion flow and boost uniformity. For another, the magnetic manganese itself adds computational power. The chip can now encode data in both the level of electrical input and the direction of the magnetism, without bulking up the synapse.

It seriously worked. Firing one billion times per second, the chips are several orders of magnitude faster than human neurons. Plus, the chips required just one ten-thousandth of the energy used by their biological counterparts, all while synthesizing input from nine different sources in an analog manner.

The Road Ahead
These studies show that we may be nearing a benchmark where artificial synapses match—or even outperform—their human inspiration.

But to Dr. Steven Furber, an expert in neuromorphic computing, we still have a ways to go before the chips hit the mainstream.

Many of the special materials used in these chips require specific temperatures, he says. Magnetic manganese chips, for example, require temperatures around absolute zero to operate, meaning they come with the need for giant cooling tanks filled with liquid helium—obviously not practical for everyday use.

Another issue is scalability. Millions of synapses are necessary before a neuromorphic device can be used to tackle everyday problems such as facial recognition. So far, no deal.

But these problems may in fact be a driving force for the entire field. Intense competition could push teams into exploring different ideas and solutions to similar problems, much like these two studies.

If so, future chips may come in diverse flavors. Similar to our vast array of deep learning algorithms and operating systems, the computer chips of the future may also vary depending on specific requirements and needs.

It is worth developing as many different technological approaches as possible, says Furber, especially as neuroscientists increasingly understand what makes our biological synapses—the ultimate inspiration—so amazingly efficient.

Image Credit: arakio / Shutterstock.com

Posted in Human Robots

#431928 How Fast Is AI Progressing? Stanford’s ...

When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems are not yet solved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.
The index adopts a unique approach, trying to aggregate data across many domains. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: tenfold increases in academic activity since 1996, explosive growth in startups focused on AI, and corresponding venture capital investment. The issue with this metric is that it measures AI hype as much as AI progress. The two might be correlated, but then again, they may not be.
The index also scrapes data from the popular coding website GitHub, which hosts more source code than any other site in the world. It can track the amount of AI-related software people are creating, as well as interest levels in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about the apocalypse and an employment crisis, articles considered "positive" outweigh the "negative" by three to one.
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute that we're in an age of considerable AI hype, but the progress of AI is littered with booms and busts, growth spurts that alternate with AI winters. So the AI Index attempts to track the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, but algorithms still can't answer questions about images very well, which requires combining natural language processing and image recognition.) Speech recognition on phone calls is almost at parity with humans.
In other narrow fields, AIs are still catching up to humans. Machine translation might be good enough that you can usually get the gist of what's being said, but it still scores poorly on the BLEU metric for translation accuracy. The AI Index even keeps track of how well programs can do on the SAT, so if you took it, you can compare your score to an AI's.
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that's simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate the best method of assessing translation or natural language understanding. The Loebner Prize, a simplified question-and-answer Turing test, recently adopted Winograd schema questions, which rely on contextual understanding. AI has more difficulty with these.
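As an example of how simple such metrics can be to compute, here is a minimal BLEU calculation using NLTK (a sketch assuming the nltk package is installed; the sentences are made up for illustration):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the", "cat", "sat", "on", "the", "mat"]  # human translation
candidate = ["the", "cat", "is", "on", "the", "mat"]   # machine output

# Smoothing avoids zero scores when a higher-order n-gram never matches.
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```

BLEU simply counts overlapping n-grams between candidate and reference, which is exactly why it's easy to standardize yet debatable as a measure of real translation quality.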
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even a more complex game like Go. The braver predictors who came up with timelines thought AlphaGo’s success was faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.
The AI Index doesn't attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of Computer Science at the University of Oxford, notes, "The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress." He is concerned about another AI winter, and about overhyped "charlatans and snake-oil salesmen" exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don't need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why else would Amazon, Google, and Facebook be worth so much money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, "It is important to take on the challenge of identifying success measures for AI systems by their impact on people's lives."
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don't know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and by how far we've run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI Index, as an annual collection of relevant information, is a good start.
Image Credit: Photobank gallery / Shutterstock.com

Posted in Human Robots

#431873 Why the World Is Still Getting ...

If you read or watch the news, you’ll likely think the world is falling to pieces. Trends like terrorism, climate change, and a growing population straining the planet’s finite resources can easily lead you to think our world is in crisis.
But there’s another story, a story the news doesn’t often report. This story is backed by data, and it says we’re actually living in the most peaceful, abundant time in history, and things are likely to continue getting better.
The News vs. the Data
The reality that's often clouded by a constant stream of bad news is that we're actually seeing a massive drop in poverty and fewer deaths from violent crime and preventable diseases. On top of that, we're the most educated populace ever to walk the planet.
“Violence has been in decline for thousands of years, and today we may be living in the most peaceful era in the existence of our species.” –Steven Pinker
In the last hundred years, we’ve seen the average human life expectancy nearly double, the global GDP per capita rise exponentially, and childhood mortality drop 10-fold.

That's pretty good progress! Maybe the world isn't all gloom and doom. If you're still not convinced the world is getting better, check out the charts in this article from Vox and on Peter Diamandis' website for a lot more data.
Abundance for All Is Possible
So now that you know the world isn’t so bad after all, here’s another thing to think about: it can get much better, very soon.
In their book Abundance: The Future Is Better Than You Think, Steven Kotler and Peter Diamandis suggest it may be possible for us to meet and even exceed the basic needs of all the people living on the planet today.
“In the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.”
This means making sure every single person in the world has adequate food, water and shelter, as well as a good education, access to healthcare, and personal freedom.
This might seem unimaginable, especially if you tend to think the world is only getting worse. But given how much progress we’ve already made in the last few hundred years, coupled with the recent explosion of information sharing and new, powerful technologies, abundance for all is not as out of reach as you might believe.
Throughout history, we’ve seen that in the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.
Napoleon III
In Abundance, Diamandis and Kotler tell the story of how aluminum went from being one of the rarest metals on the planet to being one of the most abundant…
In the 1800s, aluminum was more valuable than silver and gold because it was rarer. So when Napoleon III entertained the King of Siam, the king and his guests were honored by being given aluminum utensils, while the rest of the dinner party ate with gold.
But aluminum is not really rare.
In fact, aluminum is the third most abundant element in the Earth's crust, making up 8.3% of the crust's weight. But it wasn't until chemists Charles Martin Hall and Paul Héroult discovered how to use electrolysis to cheaply separate aluminum from surrounding materials that the element suddenly became abundant.
The problems keeping us from achieving a world where everyone’s basic needs are met may seem like resource problems — when in reality, many are accessibility problems.
The Engine Driving Us Toward Abundance: Exponential Technology
History is full of examples like the aluminum story. The most powerful one of the last few decades is information technology. Think about all the things that computers and the internet made abundant that were previously far less accessible because of cost or availability … Here are just a few examples:

Easy access to the world’s information
Ability to share information freely with anyone and everyone
Free/cheap long-distance communication
Buying and selling goods/services regardless of location

Less than two decades ago, when someone reached a certain level of economic stability, they could spend somewhere around $10K on stereos, cameras, entertainment systems, and the like. Today, we have all that equipment in the palm of our hand.
Now, there is a new generation of technologies heavily dependent on information technology and, therefore, similarly riding the wave of exponential growth. When put to the right use, emerging technologies like artificial intelligence, robotics, digital manufacturing, nanomaterials, and digital biology make it possible to drastically raise the standard of living for every person on the planet.

These are just some of the innovations which are unlocking currently scarce resources:

IBM's Watson Health is being trained and used in medical facilities like the Cleveland Clinic to help doctors diagnose disease. In the future, it's likely we'll trust AI just as much as, if not more than, humans to diagnose disease, giving people all over the world access to great diagnostic tools regardless of whether there is a well-trained doctor near them.

Solar power is now cheaper than fossil fuels in some parts of the world, and with advances in new materials and storage, the cost may decrease further. This could eventually lead to nearly-free, clean energy for people across the world.

Google's GNMT network can now translate languages as well as a human, unlocking the ability for people to communicate globally as never before.

Self-driving cars are already on the roads of several American cities and will be coming to a road near you in the next couple of years. Considering the average American spends nearly two hours driving every day, not having to drive would free up an increasingly scarce resource: time.

The Change-Makers
Today’s innovators can create enormous change because they have these incredible tools—which would have once been available only to big organizations—at their fingertips. And, as a result of our hyper-connected world, there is an unprecedented ability for people across the planet to work together to create solutions to some of our most pressing problems today.
“In today’s hyperlinked world, solving problems anywhere, solves problems everywhere.” –Peter Diamandis and Steven Kotler, Abundance
According to Diamandis and Kotler, there are three groups of people accelerating positive change.

DIY Innovators: In the 1970s and 1980s, the Homebrew Computer Club was a meeting place for "do-it-yourself" computer enthusiasts who shared ideas and spare parts. By the 1990s and 2000s, that little club had become known as an inception point for the personal computer industry; dozens of companies, including Apple Computer, can trace their origins directly back to Homebrew. Since then, we've seen the rise of the social entrepreneur, the Maker Movement, and the DIY Bio movement, which have similar ambitions to democratize social reform, manufacturing, and biology the way Homebrew democratized computers. These are the people who look for new opportunities and aren't afraid to take risks to create something new that will change the status quo.
Techno-Philanthropists: Unlike the robber barons of the 19th and early 20th centuries, today's "techno-philanthropists" are not just giving away some of their wealth for a new museum; they are using their wealth to solve global problems and investing in social entrepreneurs aiming to do the same. The Bill and Melinda Gates Foundation has given away at least $28 billion, with a strong focus on ending diseases like polio, malaria, and measles for good. Jeff Skoll, after cashing out of eBay with $2 billion in 1998, went on to create the Skoll Foundation, which funds social entrepreneurs across the world. And last year, Mark Zuckerberg and Priscilla Chan pledged to give away 99% of their $46 billion in Facebook stock during their lifetimes.
The Rising Billion: Cisco estimates that by 2020, there will be 4.1 billion people connected to the internet, up from 3 billion in 2015. This number might even be higher, given the efforts of companies like Facebook, Google, Virgin Group, and SpaceX to bring internet access to the world. That's a billion new people in the next several years who will be connected to the global conversation, looking to learn, create, and better their own lives and communities. In his book The Fortune at the Bottom of the Pyramid, C.K. Prahalad writes that finding co-creative ways to serve this rising market can help lift people out of poverty while creating viable businesses for inventive companies.

The Path to Abundance
Eager to create change, innovators armed with powerful technologies can accomplish incredible feats. Kotler and Diamandis imagine that the path to abundance occurs in three tiers:

Basic Needs (food, water, shelter)
Tools of Growth (energy, education, access to information)
Ideal Health and Freedom

Of course, progress doesn’t always happen in a straight, logical way, but having a framework to visualize the needs is helpful.
Many people don’t believe it’s possible to end the persistent global problems we’re facing. However, looking at history, we can see many examples where technological tools have unlocked resources that previously seemed scarce.
Technological solutions are not always the answer, and we need social change and policy solutions as much as we need technology. But we have seen time and time again that powerful tools in the hands of innovative, driven change-makers can make the seemingly impossible happen.

You can download the full “Path to Abundance” infographic here. It was created under a CC BY-NC-ND license. If you share, please attribute to Singularity University.
Image Credit: janez volmajer / Shutterstock.com

Posted in Human Robots