
#431999 Brain-Like Chips Now Beat the Human ...

Move over, deep learning. Neuromorphic computing—the next big thing in artificial intelligence—is on fire.

Just last week, two studies separately unveiled computer chips modeled after information processing in the human brain.

The first, published in Nature Materials, offered a solution to the unpredictability of artificial synapses—the gap between two neurons that transmits and stores information. The second, published in Science Advances, further amped up the system’s computational power, filling synapses with nanoclusters of magnetic material to bolster information encoding.

The result? Brain-like hardware systems that compute faster—and more efficiently—than the human brain.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.

Experts are hopeful.

“The field’s full of hype, and it’s nice to see quality work presented in an objective way,” said Dr. Carver Mead, an engineer at the California Institute of Technology in Pasadena not involved in the work.

Software to Hardware
The human brain is the ultimate computational wizard. With roughly 100 billion neurons densely packed into a space the size of a small football, the brain deftly handles complex computation at lightning speed using very little energy.

AI experts have taken note. The past few years saw brain-inspired algorithms that can identify faces, fake voices, and play a variety of games at—and often above—human capability.

But software is only part of the equation. Our current computers, with their transistors and binary digital systems, aren’t equipped to run these powerful algorithms.

That’s where neuromorphic computing comes in. The idea is simple: fabricate a computer chip that mimics the brain at the hardware level. Here, data is both processed and stored within the chip in an analog manner. Each artificial synapse can accumulate and integrate small bits of information from multiple sources and fire only when it reaches a threshold—much like its biological counterpart.
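To make the analogy concrete, here is a minimal sketch of that accumulate-and-fire behavior in Python. The threshold, leak, and input values are purely illustrative and are not taken from either study.

```python
# Minimal accumulate-and-fire model of an artificial synapse/neuron.
# All numbers here are illustrative, not parameters from the studies.

def accumulate_and_fire(inputs, threshold=1.0, leak=0.05):
    """Integrate incoming signals; emit a spike when a threshold is crossed."""
    potential = 0.0
    spikes = []
    for signal in inputs:
        potential = max(potential + signal - leak, 0.0)  # integrate, then decay
        if potential >= threshold:   # fire only past the threshold...
            spikes.append(1)
            potential = 0.0          # ...then reset, like a biological neuron
        else:
            spikes.append(0)
    return spikes

# Weak inputs from multiple sources add up until the unit finally fires.
print(accumulate_and_fire([0.3, 0.4, 0.2, 0.4, 0.1, 0.9]))  # [0, 0, 0, 1, 0, 0]
```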

Experts believe the speed and efficiency gains will be enormous.

For one, the chips will no longer have to transfer data between the central processing unit (CPU) and storage blocks, which wastes both time and energy. For another, like biological neural networks, neuromorphic devices can support neurons that run millions of streams of parallel computation.

A “Brain-on-a-chip”
Optimism aside, reproducing the biological synapse in hardware form hasn’t been as easy as anticipated.

Neuromorphic chips exist in many forms, but often look like a nanoscale metal sandwich. The “bread” pieces are generally made of conductive plates surrounding a switching medium—a conductive material of sorts that acts like the gap in a biological synapse.

When a voltage is applied, as in the case of data input, ions move within the switching medium, which then creates conductive streams to stimulate the downstream plate. This change in conductivity mimics the way biological neurons change their “weight,” or the strength of connectivity between two adjacent neurons.

But so far, neuromorphic synapses have been rather unpredictable. According to Kim, that’s because the switching medium is often made of a material that can’t channel ions to exact locations on the downstream plate.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” explains Kim. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects.”

In his new study, Kim and colleagues swapped the jelly-like switching medium for silicon, a material with only a single line of defects that acts like a channel to guide ions.

The chip starts with a thin wafer of silicon etched with a honeycomb-like pattern. On top is a layer of silicon germanium—something often present in transistors—in the same pattern. This creates a funnel-like dislocation, a kind of Grand Canal that perfectly shuttles ions across the artificial synapse.

The researchers then made a neuromorphic chip containing these synapses and shot an electrical zap through them. Incredibly, the synapses’ response varied by only four percent—far more uniform than any neuromorphic device made with an amorphous switching medium.

In a computer simulation, the team built a multi-layer artificial neural network using parameters measured from their device. After tens of thousands of training examples, their neural network correctly recognized samples 95 percent of the time, just 2 percent lower than state-of-the-art software algorithms.
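Neither study’s actual simulation code is shown here, but the general idea of testing a network against imperfect analog weight writes can be sketched as follows. The toy task, the random data, and the noise level are stand-ins for illustration only.

```python
# Sketch: folding synaptic write variability into a network evaluation.
# The toy classifier, random data, and noise level are illustrative only;
# this is not the study's actual simulation.
import numpy as np

rng = np.random.default_rng(0)

def noisy_write(weights, variation=0.04):
    """Perturb each weight, mimicking an imperfect analog write to a synapse."""
    return weights * (1 + rng.normal(0.0, variation, size=weights.shape))

# Toy linear classifier on synthetic "data", just to show the mechanics.
X = rng.normal(size=(200, 16))
trained_w = rng.normal(size=(16, 4))          # pretend these are trained weights
labels = np.argmax(X @ trained_w, axis=1)     # ground truth by construction

device_w = noisy_write(trained_w)             # weights as actually stored on-chip

for name, w in [("ideal", trained_w), ("on-device", device_w)]:
    accuracy = np.mean(np.argmax(X @ w, axis=1) == labels)
    print(f"{name} accuracy: {accuracy:.3f}")
```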

The upside? The neuromorphic chip requires much less space than the hardware that runs deep learning algorithms. Forget supercomputers—these chips could one day run complex computations right on our handheld devices.

A Magnetic Boost
Meanwhile, in Boulder, Colorado, Dr. Michael Schneider at the National Institute of Standards and Technology also realized that the standard switching medium had to go.

“There must be a better way to do this, because nature has figured out a better way to do this,” he says.

His solution? Nanoclusters of magnetic manganese.

Schneider’s chip contained two slices of superconducting electrodes made out of niobium, which channel electricity with no resistance. When researchers applied different magnetic fields to the synapse, they could control the alignment of the manganese “filling.”

The switch gave the chip a double boost. For one, by aligning the switching medium, the team could predict the ion flow and boost uniformity. For another, the magnetic manganese itself adds computational power. The chip can now encode data in both the level of electrical input and the direction of the magnetism without bulking up the synapse.

It seriously worked. Firing one billion times per second, the chips ran several orders of magnitude faster than human neurons. Plus, the chips required just one ten-thousandth of the energy used by their biological counterparts, all the while synthesizing input from nine different sources in an analog manner.

The Road Ahead
These studies show that we may be nearing a benchmark where artificial synapses match—or even outperform—their human inspiration.

But to Dr. Steven Furber, an expert in neuromorphic computing, we still have a ways to go before the chips go mainstream.

Many of the special materials used in these chips require specific operating temperatures, he says. Magnetic manganese chips, for example, only work near absolute zero, meaning they come with the need for giant cooling tanks filled with liquid helium—obviously not practical for everyday use.

Another hurdle is scalability. Millions of synapses are necessary before a neuromorphic device can be used to tackle everyday problems such as facial recognition. So far, no deal.

But these problems may in fact be a driving force for the entire field. Intense competition could push teams into exploring different ideas and solutions to similar problems, much like these two studies.

If so, future chips may come in diverse flavors. Similar to our vast array of deep learning algorithms and operating systems, the computer chips of the future may also vary depending on specific requirements and needs.

It is worth developing as many different technological approaches as possible, says Furber, especially as neuroscientists increasingly understand what makes our biological synapses—the ultimate inspiration—so amazingly efficient.

Image Credit: arakio / Shutterstock.com


#431958 The Next Generation of Cameras Might See ...

You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.

The latest camera research is shifting away from increasing the number of megapixels and towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing, where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling of how light travels through the scene or the camera.

This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead we will use light detectors that only a few years ago we would never have considered of any use for imaging. And they will be able to do incredible things, like see through fog, inside the human body, and even behind walls.

Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.

To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
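Here is a toy Python sketch of that procedure using the simplest possible patterns: lighting up one spot at a time and recording the total reflected light with a single detector. Real single-pixel cameras use structured patterns and far fewer measurements, so treat this only as a demonstration of the principle.

```python
# Toy single-pixel imaging: illuminate one spot at a time, record one number
# per illumination, then assemble the image. Scene data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((8, 8))            # unknown reflectance we want to image
n = scene.size

measurements = []
for i in range(n):
    pattern = np.zeros(n)
    pattern[i] = 1.0                  # the projector lights up a single spot
    # The single pixel only ever sees one number: total reflected light.
    measurements.append(pattern @ scene.ravel())

image = np.array(measurements).reshape(scene.shape)
print(np.allclose(image, scene))      # True: the scene has been recovered
```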

Clearly the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.

It is even possible to capture images using light particles that have never interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?

Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information that traditional techniques collect only a small part of.

This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.
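Refocusing after capture can be illustrated with a simple shift-and-add scheme over views of the same scene captured from slightly different positions. This is a toy stand-in for light-field processing, not Lytro’s actual pipeline, and the data below is synthetic.

```python
# Toy "refocus after capture": average sub-aperture views shifted in
# proportion to their viewpoint offsets. Synthetic data, illustrative only.
import numpy as np

def refocus(views, offsets, alpha):
    """Shift each view by alpha times its viewpoint offset, then average."""
    out = np.zeros_like(views[0])
    for view, (dy, dx) in zip(views, offsets):
        out += np.roll(view, (round(alpha * dy), round(alpha * dx)), axis=(0, 1))
    return out / len(views)

# Fake a 3x3 grid of viewpoints: each view of the scene is slightly shifted.
rng = np.random.default_rng(2)
scene = rng.random((32, 32))
offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
views = [np.roll(scene, (dy, dx), axis=(0, 1)) for dy, dx in offsets]

# Choosing alpha = -1 cancels the parallax, snapping this depth into focus;
# other alpha values would bring other (synthetic) depths into focus instead.
sharp = refocus(views, offsets, alpha=-1.0)
print(np.allclose(sharp, scene))      # True for this synthetic depth plane
```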

The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.

Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.

All of these techniques rely on combining images with models that explain how light travels through or around different substances.

Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.

Single photon and quantum imaging technologies are also maturing to the point that they can take pictures at incredibly low light levels and record videos at incredibly fast speeds, reaching a trillion frames per second. This is enough to even capture images of light itself traveling across a scene.

Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.

This article was originally published on The Conversation. Read the original article.

Image Credit: Sylvia Adams / Shutterstock.com


#431925 How the Science of Decision-Making Will ...

Neuroscientist Brie Linkenhoker believes that leaders must be better prepared for future strategic challenges by continually broadening their worldviews.
As the director of Worldview Stanford, Brie and her team produce multimedia content and immersive learning experiences that make academic research and insights accessible and usable by curious leaders. These future-focused topics are designed to help them understand the forces shaping the future.
Worldview Stanford has tackled such interdisciplinary topics as the power of minds, the science of decision-making, environmental risk and resilience, and trust and power in the age of big data.
We spoke with Brie about why understanding our biases is critical to making better decisions, particularly in a time of increasing change and complexity.

Lisa Kay Solomon: What is Worldview Stanford?
Brie Linkenhoker: Leaders and decision makers are trying to navigate this complex hairball of a planet that we live on and that requires keeping up on a lot of diverse topics across multiple fields of study and research. Universities like Stanford are where that new knowledge is being created, but it’s not getting out and used as readily as we would like, so that’s what we’re working on.
Worldview is designed to expand our individual and collective worldviews about important topics impacting our future. Your worldview is not a static thing, it’s constantly changing. We believe it should be informed by lots of different perspectives, different cultures, by knowledge from different domains and disciplines. This is more important now than ever.
At Worldview, we create learning experiences that are an amalgamation of all of those things.
LKS: One of your marquee programs is the Science of Decision Making. Can you tell us about that course and why it’s important?
BL: We tend to think about decision makers as being people in leadership positions, but every person who works in your organization, every member of your family, every member of the community is a decision maker. You have to decide what to buy, who to partner with, what government regulations to anticipate.
You have to think not just about your own decisions, but you have to anticipate how other people make decisions too. So, when we set out to create the Science of Decision Making, we wanted to help people improve their own decisions and be better able to predict, understand, anticipate the decisions of others.

“I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them.”

We realized that the only way to do that was to combine a lot of different perspectives, so we recruited experts from economics, psychology, neuroscience, philosophy, biology, and religion. We also brought in cutting-edge research on artificial intelligence and virtual reality and explored conversations about how technology is changing how we make decisions today and how it might support our decision-making in the future.
There’s no single set of answers. There are as many unanswered questions as there are answered questions.
LKS: One of the other things you explore in this course is the role of biases and heuristics. Can you explain the importance of both in decision-making?
BL: When I was a strategy consultant, executives would ask me, “How do I get rid of the biases in my decision-making or my organization’s decision-making?” And my response would be, “Good luck with that. It isn’t going to happen.”
As human beings we make, probably, thousands of decisions every single day. If we had to be actively thinking about each one of those decisions, we wouldn’t get out of our house in the morning, right?
We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So, we’ve evolved in the human brain a set of what we understand to be heuristics or rules of thumb.
And heuristics are great in, say, 95 percent of situations. It’s in the other five percent, or maybe even one percent, that they’re really not so great. That’s when we have to become aware of them, because in some situations they can become biases.
For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest or where you’re making a decision about a strategic investment or even when you’re deciding who to hire.
Let’s take hiring for a moment.
How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely. That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them.
LKS: We seem to be at a time when the boundaries between different disciplines are starting to blend together. How has the advancement of neuroscience helped us become better leaders? What do you see happening next?
BL: Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book, The Undoing Project, which is the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did in the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics.
In the last 10 to 15 years, neuroeconomics has really taken off. Neuroeconomics is the combination of behavioral economics with neuroscience. In behavioral economics, they use economic games and economic choices that have numbers associated with them and have real-world application.
For example, they ask, “How much would you spend to buy A versus B?” Or, “If I offered you X dollars for this thing that you have, would you take it or would you say no?” So, it’s trying to look at human decision-making in a format that’s easy to understand and quantify within a laboratory setting.
Now you bring neuroscience into that. You can have people doing those same kinds of tasks—making those kinds of semi-real-world decisions—in a brain scanner, and we can now start to understand what’s going on in the brain while people are making decisions. You can ask questions like, “Can I look at the signals in someone’s brain and predict what decision they’re going to make?” That can help us build a model of decision-making.
I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them. That’s very exciting for a neuroscientist.
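As a rough illustration of the decoding idea Linkenhoker describes, the sketch below trains a generic classifier to predict binary “choices” from synthetic stand-in “brain signals.” Real neuroimaging studies use actual scanner data and far more careful validation; everything here is made up for the demonstration.

```python
# Toy "decision decoding": predict a binary choice from brain-signal features.
# Synthetic data stands in for scanner measurements; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_trials, n_features = 300, 20
signals = rng.normal(size=(n_trials, n_features))    # stand-in "brain activity"
hidden_w = rng.normal(size=n_features)               # unknown link to behavior
choices = (signals @ hidden_w + rng.normal(size=n_trials) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    signals, choices, test_size=0.3, random_state=0)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoded held-out choices correctly {decoder.score(X_test, y_test):.0%} of the time")
```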
Image Credit: Black Salmon / Shutterstock.com


#431907 The Future of Cancer Treatment Is ...

In an interview at Singularity University’s Exponential Medicine in San Diego, Richard Wender, chief cancer control officer at the American Cancer Society, discussed how technology has changed cancer care and treatment in recent years.
Just a few years ago, microscopes were the primary tool used in cancer diagnoses, but we’ve come a long way since.
“We still look at a microscope, we still look at what organ the cancer started in,” Wender said. “But increasingly we’re looking at the molecular signature. It’s not just the genomics, and it’s not just the genes. It’s also the cellular environment around that cancer. We’re now targeting our therapies to the mutations that are found in that particular cancer.”
Cancer treatments in the past have been largely reactive, but they don’t need to be. Most cancers arise from genetic mutations, which means treatment can be preventative. This is one reason why newer cancer treatment techniques search for actionable targets in specific genes before the cancer develops.

When asked how artificial intelligence and machine learning technologies are reshaping clinical trials, Wender acknowledged that how clinical trials have been run in the past won’t work moving forward.
“Our traditional ways of learning about cancer were by finding a particular cancer type and conducting a long clinical trial that took a number of years enrolling patients from around the country. That is not how we’re going to learn to treat individual patients in the future.”
Instead, Wender emphasized the need for gathering as much data as possible, and from as many individual patients as possible. This data should encompass clinical, pathological, and molecular data and should be gathered from a patient all the way through their final outcome. “Literally every person becomes a clinical trial of one,” Wender said.
For the best cancer treatment and diagnostics, Wender says the answer is to make the process collaborative by pulling in resources from organizations and companies that are both established and emerging.
It’s no surprise to hear that the best solutions come from pairing uncommon partners to innovate.
Image Credit: jovan vitanovski / Shutterstock.com


#431873 Why the World Is Still Getting ...

If you read or watch the news, you’ll likely think the world is falling to pieces. Trends like terrorism, climate change, and a growing population straining the planet’s finite resources can easily lead you to think our world is in crisis.
But there’s another story, a story the news doesn’t often report. This story is backed by data, and it says we’re actually living in the most peaceful, abundant time in history, and things are likely to continue getting better.
The News vs. the Data
The reality that’s often clouded by a constant stream of bad news is that we’re actually seeing a massive drop in poverty and fewer deaths from violent crime and preventable diseases. On top of that, we’re the most educated populace to ever walk the planet.
“Violence has been in decline for thousands of years, and today we may be living in the most peaceful era in the existence of our species.” –Steven Pinker
In the last hundred years, we’ve seen the average human life expectancy nearly double, the global GDP per capita rise exponentially, and childhood mortality drop 10-fold.

That’s pretty good progress! Maybe the world isn’t all gloom and doom. If you’re still not convinced the world is getting better, check out the charts in this article from Vox and on Peter Diamandis’ website for a lot more data.
Abundance for All Is Possible
So now that you know the world isn’t so bad after all, here’s another thing to think about: it can get much better, very soon.
In their book Abundance: The Future Is Better Than You Think, Steven Kotler and Peter Diamandis suggest it may be possible for us to meet and even exceed the basic needs of all the people living on the planet today.
“In the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.”
This means making sure every single person in the world has adequate food, water and shelter, as well as a good education, access to healthcare, and personal freedom.
This might seem unimaginable, especially if you tend to think the world is only getting worse. But given how much progress we’ve already made in the last few hundred years, coupled with the recent explosion of information sharing and new, powerful technologies, abundance for all is not as out of reach as you might believe.
Throughout history, we’ve seen that in the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.
Napoleon III
In Abundance, Diamandis and Kotler tell the story of how aluminum went from being one of the rarest metals on the planet to being one of the most abundant…
In the 1800s, aluminum was more valuable than silver and gold because it was rarer. So when Napoleon III entertained the King of Siam, the king and his guests were honored by being given aluminum utensils, while the rest of the dinner party ate with gold.
But aluminum is not really rare.
In fact, aluminum is the third most abundant element in the Earth’s crust, making up 8.3% of its weight. But it wasn’t until chemists Charles Martin Hall and Paul Héroult discovered how to use electrolysis to cheaply separate aluminum from surrounding materials that the element became suddenly abundant.
The problems keeping us from achieving a world where everyone’s basic needs are met may seem like resource problems — when in reality, many are accessibility problems.
The Engine Driving Us Toward Abundance: Exponential Technology
History is full of examples like the aluminum story. The most powerful one of the last few decades is information technology. Think about all the things that computers and the internet made abundant that were previously far less accessible because of cost or availability … Here are just a few examples:

Easy access to the world’s information
Ability to share information freely with anyone and everyone
Free/cheap long-distance communication
Buying and selling goods/services regardless of location

Less than two decades ago, someone who had reached a certain level of economic stability could spend somewhere around $10K on stereos, cameras, entertainment systems, and the like. Today, we have all that equipment in the palm of our hand.
Now, there is a new generation of technologies heavily dependent on information technology and, therefore, similarly riding the wave of exponential growth. When put to the right use, emerging technologies like artificial intelligence, robotics, digital manufacturing, nano-materials, and digital biology make it possible for us to drastically raise the standard of living for every person on the planet.

These are just some of the innovations which are unlocking currently scarce resources:

IBM’s Watson Health is being trained and used in medical facilities like the Cleveland Clinic to help doctors diagnose disease. In the future, it’s likely we’ll trust AI just as much as, if not more than, humans to diagnose disease, allowing people all over the world to access great diagnostic tools regardless of whether there is a well-trained doctor near them.

Solar power is now cheaper than fossil fuels in some parts of the world, and with advances in new materials and storage, the cost may decrease further. This could eventually lead to nearly-free, clean energy for people across the world.

Google’s GNMT neural machine translation network can now translate languages as well as a human, unlocking the ability for people to communicate globally as we never have before.

Self-driving cars are already on the roads of several American cities and will be coming to a road near you in the next couple of years. Considering the average American spends nearly two hours driving every day, not having to drive would free up an increasingly scarce resource: time.

The Change-Makers
Today’s innovators can create enormous change because they have these incredible tools—which would have once been available only to big organizations—at their fingertips. And, as a result of our hyper-connected world, there is an unprecedented ability for people across the planet to work together to create solutions to some of our most pressing problems today.
“In today’s hyperlinked world, solving problems anywhere, solves problems everywhere.” –Peter Diamandis and Steven Kotler, Abundance
According to Diamandis and Kotler, there are three groups of people accelerating positive change.

DIY Innovators
In the 1970s and 1980s, the Homebrew Computer Club was a meeting place of “do-it-yourself” computer enthusiasts who shared ideas and spare parts. By the 1990s and 2000s, that little club became known as an inception point for the personal computer industry — dozens of companies, including Apple Computer, can directly trace their origins back to Homebrew. Since then, we’ve seen the rise of the social entrepreneur, the Maker Movement, and the DIY Bio movement, which have similar ambitions to democratize social reform, manufacturing, and biology, the way Homebrew democratized computers. These are the people who look for new opportunities and aren’t afraid to take risks to create something new that will change the status quo.
Techno-Philanthropists
Unlike the robber barons of the 19th and early 20th centuries, today’s “techno-philanthropists” are not just giving away some of their wealth for a new museum, they are using their wealth to solve global problems and investing in social entrepreneurs aiming to do the same. The Bill and Melinda Gates Foundation has given away at least $28 billion, with a strong focus on ending diseases like polio, malaria, and measles for good. Jeff Skoll, after cashing out of eBay with $2 billion in 1998, went on to create the Skoll Foundation, which funds social entrepreneurs across the world. And last year, Mark Zuckerberg and Priscilla Chan pledged to give away 99% of their $46 billion in Facebook stock during their lifetimes.
The Rising Billion
Cisco estimates that by 2020, there will be 4.1 billion people connected to the internet, up from 3 billion in 2015. This number might even be higher, given the efforts of companies like Facebook, Google, Virgin Group, and SpaceX to bring internet access to the world. That’s a billion new people in the next several years who will be connected to the global conversation, looking to learn, create, and better their own lives and communities. In his book, The Fortune at the Bottom of the Pyramid, C.K. Prahalad writes that finding co-creative ways to serve this rising market can help lift people out of poverty while creating viable businesses for inventive companies.

The Path to Abundance
Eager to create change, innovators armed with powerful technologies can accomplish incredible feats. Kotler and Diamandis imagine that the path to abundance occurs in three tiers:

Basic Needs (food, water, shelter)
Tools of Growth (energy, education, access to information)
Ideal Health and Freedom

Of course, progress doesn’t always happen in a straight, logical way, but having a framework to visualize the needs is helpful.
Many people don’t believe it’s possible to end the persistent global problems we’re facing. However, looking at history, we can see many examples where technological tools have unlocked resources that previously seemed scarce.
Technological solutions are not always the answer, and we need social change and policy solutions as much as we need technology solutions. But we have seen time and time again that powerful tools in the hands of innovative, driven change-makers can make the seemingly impossible happen.

You can download the full “Path to Abundance” infographic here. It was created under a CC BY-NC-ND license. If you share, please attribute to Singularity University.
Image Credit: janez volmajer / Shutterstock.com
