Tag Archives: look

#431873 Why the World Is Still Getting ...

If you read or watch the news, you’ll likely think the world is falling to pieces. Trends like terrorism, climate change, and a growing population straining the planet’s finite resources can easily lead you to think our world is in crisis.
But there’s another story, a story the news doesn’t often report. This story is backed by data, and it says we’re actually living in the most peaceful, abundant time in history, and things are likely to continue getting better.
The News vs. the Data
The reality that’s often clouded by a constant stream of bad news is that we’re actually seeing a massive drop in poverty and fewer deaths from violent crime and preventable diseases. On top of that, we’re the most educated populace to ever walk the planet.
“Violence has been in decline for thousands of years, and today we may be living in the most peaceful era in the existence of our species.” –Steven Pinker
In the last hundred years, we’ve seen the average human life expectancy nearly double, the global GDP per capita rise exponentially, and childhood mortality drop 10-fold.

That’s pretty good progress! Maybe the world isn’t all gloom and doom. If you’re still not convinced the world is getting better, check out the charts in this article from Vox and on Peter Diamandis’ website for a lot more data.
Abundance for All Is Possible
So now that you know the world isn’t so bad after all, here’s another thing to think about: it can get much better, very soon.
In their book Abundance: The Future Is Better Than You Think, Steven Kotler and Peter Diamandis suggest it may be possible for us to meet and even exceed the basic needs of all the people living on the planet today.
“In the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.”
This means making sure every single person in the world has adequate food, water and shelter, as well as a good education, access to healthcare, and personal freedom.
This might seem unimaginable, especially if you tend to think the world is only getting worse. But given how much progress we’ve already made in the last few hundred years, coupled with the recent explosion of information sharing and new, powerful technologies, abundance for all is not as out of reach as you might believe.
Throughout history, we’ve seen that in the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.
Napoleon III
In Abundance, Diamandis and Kotler tell the story of how aluminum went from being one of the rarest metals on the planet to being one of the most abundant…
In the 1800s, aluminum was more valuable than silver and gold because it was rarer. So when Napoleon III entertained the King of Siam, the king and his guests were honored by being given aluminum utensils, while the rest of the dinner party ate with gold.
But aluminum is not really rare.
In fact, aluminum is the third most abundant element in the Earth’s crust, making up 8.3% of its weight. But it wasn’t until chemists Charles Martin Hall and Paul Héroult discovered how to use electrolysis to cheaply separate aluminum from surrounding materials that the element suddenly became abundant.
The problems keeping us from achieving a world where everyone’s basic needs are met may seem like resource problems — when in reality, many are accessibility problems.
The Engine Driving Us Toward Abundance: Exponential Technology
History is full of examples like the aluminum story. The most powerful one of the last few decades is information technology. Think about all the things that computers and the internet made abundant that were previously far less accessible because of cost or availability … Here are just a few examples:

Easy access to the world’s information
Ability to share information freely with anyone and everyone
Free/cheap long-distance communication
Buying and selling goods/services regardless of location

Less than two decades ago, someone who had reached a certain level of economic stability might spend around $10,000 on stereos, cameras, entertainment systems, and the like. Today, we have all that equipment in the palm of our hand.
Now, there is a new generation of technologies heavily dependent on information technology and, therefore, similarly riding the wave of exponential growth. When put to the right use, emerging technologies like artificial intelligence, robotics, digital manufacturing, nano-materials, and digital biology make it possible for us to drastically raise the standard of living for every person on the planet.

These are just some of the innovations which are unlocking currently scarce resources:

IBM’s Watson Health is being trained and used in medical facilities like the Cleveland Clinic to help doctors diagnose disease. In the future, it’s likely we’ll trust AI just as much as, if not more than, humans to diagnose disease, allowing people all over the world to access great diagnostic tools regardless of whether there is a well-trained doctor near them.

Solar power is now cheaper than fossil fuels in some parts of the world, and with advances in new materials and storage, the cost may decrease further. This could eventually lead to nearly free, clean energy for people across the world.

Google’s GNMT network can now translate languages nearly as well as a human, unlocking the ability for people to communicate globally as never before.

Self-driving cars are already on the roads of several American cities and will be coming to a road near you in the next couple of years. Considering the average American spends nearly two hours driving every day, not having to drive would free up an increasingly scarce resource: time.

The Change-Makers
Today’s innovators can create enormous change because they have these incredible tools—which would have once been available only to big organizations—at their fingertips. And, as a result of our hyper-connected world, there is an unprecedented ability for people across the planet to work together to create solutions to some of our most pressing problems today.
“In today’s hyperlinked world, solving problems anywhere, solves problems everywhere.” –Peter Diamandis and Steven Kotler, Abundance
According to Diamandis and Kotler, there are three groups of people accelerating positive change.

DIY Innovators
In the 1970s and 1980s, the Homebrew Computer Club was a meeting place of “do-it-yourself” computer enthusiasts who shared ideas and spare parts. By the 1990s and 2000s, that little club became known as an inception point for the personal computer industry — dozens of companies, including Apple Computer, can directly trace their origins back to Homebrew. Since then, we’ve seen the rise of the social entrepreneur, the Maker Movement, and the DIY Bio movement, which have similar ambitions to democratize social reform, manufacturing, and biology the way Homebrew democratized computers. These are the people who look for new opportunities and aren’t afraid to take risks to create something new that will change the status quo.
Techno-Philanthropists
Unlike the robber barons of the 19th and early 20th centuries, today’s “techno-philanthropists” are not just giving away some of their wealth for a new museum; they are using their wealth to solve global problems and investing in social entrepreneurs aiming to do the same. The Bill and Melinda Gates Foundation has given away at least $28 billion, with a strong focus on ending diseases like polio, malaria, and measles for good. Jeff Skoll, after cashing out of eBay with $2 billion in 1998, went on to create the Skoll Foundation, which funds social entrepreneurs across the world. And last year, Mark Zuckerberg and Priscilla Chan pledged to give away 99% of their $46 billion in Facebook stock during their lifetimes.
The Rising Billion
Cisco estimates that by 2020, there will be 4.1 billion people connected to the internet, up from 3 billion in 2015. This number might even be higher, given the efforts of companies like Facebook, Google, Virgin Group, and SpaceX to bring internet access to the world. That’s a billion new people in the next several years who will be connected to the global conversation, looking to learn, create, and better their own lives and communities. In his book The Fortune at the Bottom of the Pyramid, C.K. Prahalad writes that finding co-creative ways to serve this rising market can help lift people out of poverty while creating viable businesses for inventive companies.

The Path to Abundance
Eager to create change, innovators armed with powerful technologies can accomplish incredible feats. Kotler and Diamandis imagine that the path to abundance occurs in three tiers:

Basic Needs (food, water, shelter)
Tools of Growth (energy, education, access to information)
Ideal Health and Freedom

Of course, progress doesn’t always happen in a straight, logical way, but having a framework to visualize the needs is helpful.
Many people don’t believe it’s possible to end the persistent global problems we’re facing. However, looking at history, we can see many examples where technological tools have unlocked resources that previously seemed scarce.
Technological solutions are not always the answer, and we need social change and policy solutions as much as we need technology solutions. But we have seen time and time again, that powerful tools in the hands of innovative, driven change-makers can make the seemingly impossible happen.

You can download the full “Path to Abundance” infographic here. It was created under a CC BY-NC-ND license. If you share, please attribute to Singularity University.
Image Credit: janez volmajer / Shutterstock.com

Posted in Human Robots

#431872 AI Uses Titan Supercomputer to Create ...

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human, in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. Such a system, modeled loosely on the human brain, is known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case, a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL reduced the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
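MENNDL itself isn't public here, but the evolutionary idea Young describes can be sketched in a few lines: keep a population of candidate hyperparameter sets, score each one, and breed mutated copies of the winners. The search space, mutation rule, and toy fitness function below are illustrative stand-ins, not ORNL's actual implementation; in the real system, "fitness" means training each candidate network on its own GPU node and measuring its accuracy.

```python
import random

# Hypothetical search space -- MENNDL's real space covers layer types,
# kernel sizes, and much more; these names are illustrative only.
SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "lr": [0.1, 0.01, 0.001],
}

def random_candidate():
    """Draw one random hyperparameter set from the space."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(cand):
    # Toy stand-in for "train the network and return validation accuracy".
    return cand["layers"] * cand["units"] * (1.0 if cand["lr"] == 0.01 else 0.5)

def evolve(generations=10, pop_size=20, seed=0):
    """Simple generational GA: keep the top half, refill with mutated copies."""
    random.seed(seed)
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            parent = dict(random.choice(survivors))
            gene = random.choice(list(SPACE))  # mutate one hyperparameter
            parent[gene] = random.choice(SPACE[gene])
            children.append(parent)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because survivors are carried over unchanged each generation, the best candidate found so far is never lost, which is why this style of search reliably converges on strong (if not provably optimal) hyperparameters.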
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
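To make "automated feature engineering" concrete, here is a toy sketch of the simplest version of the idea: score each candidate column by how strongly it tracks the target and keep only the best ones. The scoring criterion (absolute Pearson correlation) and the data are purely illustrative; H2O.ai's actual algorithms are proprietary and far more sophisticated, generating new features rather than merely filtering existing ones.

```python
def feature_score(xs, ys):
    # Absolute Pearson correlation as a toy relevance score.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def select_features(columns, target, k=2):
    """Keep the k columns most correlated (positively or negatively) with the target."""
    ranked = sorted(columns, key=lambda name: feature_score(columns[name], target),
                    reverse=True)
    return ranked[:k]

# Toy data: 'signal' and 'inverse' track the target; 'noise' does not.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
columns = {
    "signal": [2.1, 3.9, 6.2, 8.0, 10.1],
    "inverse": [5.0, 4.0, 3.0, 2.0, 1.0],
    "noise": [3.0, 1.0, 4.0, 1.0, 5.0],
}
kept = select_features(columns, target, k=2)
```

On this data the selector keeps "signal" and "inverse" and discards "noise", which is the behavior a virtual data scientist automates at scale.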
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com


#431869 When Will We Finally Achieve True ...

The field of artificial intelligence goes back a long way, but many consider it to have been officially born when a group of scientists at Dartmouth College got together for a summer in 1956. Computers had, over the previous few decades, advanced in incredible leaps and bounds; they could now perform calculations far faster than humans. Given the incredible progress that had been made, optimism was rational. Genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The scientists had a fairly simple idea: intelligence is, after all, just a mathematical process. The human brain was a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This research proposal, by the way, contains one of the earliest uses of the term artificial intelligence. They had a number of ideas—maybe simulating the human brain’s pattern of neurons could work and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go. This is unlike Moore’s Law. Moore’s Law, the doubling of processing power roughly every couple of years, makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
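Moore's Law is concrete precisely because it can be written down as a formula and checked against shipping chips. A back-of-envelope sketch (the starting transistor count and doubling period below are illustrative round numbers, not measurements):

```python
def moores_law(transistors_now, years_ahead, doubling_period=2.0):
    """Project transistor count assuming a doubling every `doubling_period` years."""
    return transistors_now * 2 ** (years_ahead / doubling_period)

# A chip with ~2 billion transistors, projected a decade out at the
# classic two-year doubling: five doublings, so 2e9 * 2**5 = 64 billion.
projected = moores_law(2e9, years_ahead=10)
```

No comparable formula exists for "progress toward human-level AI": there is no agreed-upon quantity to plug in, which is exactly why those predictions resist evaluation.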
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That’s enough to convince people they’re working on something that could prove revolutionary very soon (people are less impressed by efforts that will lead to tangible results centuries down the line), but not enough for you to be embarrassingly proved wrong. Of the two, Armstrong found more evidence for the second one—people were perfectly happy to predict AI after they died, although most didn’t, but there was a clear bias towards “15–20 years from now” in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to those who say AI, or else consciousness, will “emerge” if a network is sufficiently complex and given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know if this is possible, we cannot know how far away we are from reaching this, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. For example, the Turing test has not yet been passed in the sense that AI cannot convince people it’s human in conversation; but of course the calculating ability, and perhaps soon the ability to perform other tasks like pattern recognition and driving cars, far exceed human levels. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find more ways to feed more data into more algorithms, the impact on society from this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those that point out we should be worried about AI for other reasons. Just because we can’t say for sure if human-level AI will arrive this century, or never, it doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors could be correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met in Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com


#431866 The Technologies We’ll Have Our Eyes ...

It’s that time of year again when our team has a little fun and throws on our futurist glasses to look ahead at some of the technologies and trends we’re most anticipating next year.
Whether the implications of a technology are vast or it resonates with one of us personally, here’s the list from some of the Singularity Hub team of what we have our eyes on as we enter the new year.
For a little refresher, these were the technologies our team was fired up about at the start of 2017.
Tweet us the technology you’re excited to watch in 2018 at @SingularityHub.
Cryptocurrency and Blockchain
“Given all the noise Bitcoin is making globally in the media, it is driving droves of Main Street investors to dabble in and learn more about cryptocurrencies. This will continue to raise valuations and drive adoption of blockchain. From Bank of America recently getting a blockchain-based patent approved to the Australian Securities Exchange’s plan to use blockchain, next year is going to be chock-full of these stories. Coindesk even recently spotted a patent filing from Apple involving blockchain. From NEO (‘China’s Ethereum’) to IOTA, Golem, and Qtum, there are a lot of interesting cryptos to follow given the immense number of potential applications. Hang on, it’s going to be a bumpy ride in 2018!”
–Kirk Nankivell, Website Manager
There Is No One Technology to Watch
“Next year may be remembered for advances in gene editing, blockchain, AI—or most likely all these and more. There is no single technology to watch. A number of consequential trends are advancing and converging. This general pace of change is exciting, and it also contributes to spiking anxiety. Technology’s invisible lines of force are extending further and faster into our lives and subtly subverting how we view the world and each other in unanticipated ways. Still, all the near-term messiness and volatility, the little and not-so-little dramas, the hype and disillusion, the controversies and conflict, all that smooths out a bit when you take a deep breath and a step back, and it’s my sincere hope and belief the net result will be more beneficial than harmful.”
–Jason Dorrier, Managing Editor
‘Fake News’ Fighting Technology
“It’s been a wild ride for the media this year with the term ‘fake news’ moving from the public’s periphery into mainstream vocabulary. The spread of ‘fake news’ is often blamed on media outlets, but social media platforms and search engines are often responsible too. (Facebook still won’t identify as a media company—maybe next year?) Yes, technology can contribute to spreading false information, but it can also help stop it. From technologists who are building in-article ‘trust indicator’ features, to artificial intelligence systems that can both spot and shut down fake news early on, I’m hopeful we can create new solutions to this huge problem. One step further: if publishers step up to fix this we might see some faith restored in the media.”
–Alison E. Berman, Digital Producer
Pay-as-You-Go Home Solar Power
“People in rural African communities are increasingly bypassing electrical grids (which aren’t even an option in many cases) and installing pay-as-you-go solar panels on their homes. The companies offering these services are currently not subject to any regulations, though they’re essentially acting as a utility. As demand for power grows, they’ll have to come up with ways to efficiently scale, and to balance the humanitarian and capitalistic aspects of their work. It’s fascinating to think traditional grids may never be necessary in many areas of the continent thanks to this technology.”
–Vanessa Bates Ramirez, Associate Editor
Virtual Personal Assistants
“AI is clearly going to rule our lives, and in many ways it already makes us look like clumsy apes. Alexa, Siri, and Google Assistant are promising first steps toward a world of computers that understand us and relate to us on an emotional level. I crave the day when my Apple Watch coaches me into healthier habits, lets me know about new concerts nearby, speaks to my self-driving Lyft on my behalf, and can help me respond effectively to aggravating emails based on communication patterns. But let’s not brush aside privacy concerns and the implications of handing over our personal data to megacorporations. The scariest thing here is that privacy laws and advertising ethics do not accommodate this level of intrusive data hoarding.”
–Matthew Straub, Director of Digital Engagement (Hub social media)
Solve for Learning: Educational Apps for Children in Conflict Zones
“I am most excited by exponential technology when it is used to help solve a global grand challenge. Educational apps are currently being developed to help solve for learning by increasing accessibility to learning opportunities for children living in conflict zones. Many children in these areas are not receiving an education, with girls being 2.5 times more likely than boys to be out of school. The EduApp4Syria project is developing apps to help children in Syria and Kashmir learn in their native languages. Mobile phones are increasingly available in these areas, and the apps are available offline for children who do not have consistent access to mobile networks. The apps are low-cost, easily accessible, and scalable educational opportunities.”
–Paige Wilcoxson, Director, Curriculum & Learning Design
Image Credit: Triff / Shutterstock.com

Posted in Human Robots

#431859 Digitized to Democratized: These Are the ...

“The Six Ds are a chain reaction of technological progression, a road map of rapid development that always leads to enormous upheaval and opportunity.”
–Peter Diamandis and Steven Kotler, Bold
We live in incredible times. News travels the globe in an instant. Music, movies, games, communication, and knowledge are ever-available on always-connected devices. From biotechnology to artificial intelligence, powerful technologies that were once only available to huge organizations and governments are becoming more accessible and affordable thanks to digitization.
The potential for entrepreneurs to disrupt industries and corporate behemoths to unexpectedly go extinct has never been greater.
A hundred, fifty, or even twenty years ago, disruption meant coming up with a product or service people needed but didn’t have yet, then finding a way to produce it with higher quality and lower costs than your competitors. This entailed hiring hundreds or thousands of employees, securing a large physical space to put them in, and waiting years or even decades for the hard work to pay off and products to come to fruition.

“Technology is disrupting traditional industrial processes, and they’re never going back.”

But thanks to digital technologies developing at exponential rates of change, the landscape of 21st-century business has taken on a dramatically different look and feel.
The structure of organizations is changing. Instead of thousands of employees and large physical plants, modern start-ups are small organizations focused on information technologies. They dematerialize what was once physical and create new products and revenue streams in months, sometimes weeks.
It no longer takes a huge corporation to have a huge impact.
Technology is disrupting traditional industrial processes, and they’re never going back. This disruption is filled with opportunity for forward-thinking entrepreneurs.
The secret to positively impacting the lives of millions of people is understanding and internalizing the growth cycle of digital technologies. This growth cycle takes place in six key steps, which Peter Diamandis calls the Six Ds of Exponentials: digitization, deception, disruption, demonetization, dematerialization, and democratization.
According to Diamandis, cofounder and chairman of Singularity University and founder and executive chairman of XPRIZE, when something is digitized it begins to behave like an information technology.

Newly digitized products develop at an exponential pace instead of a linear one, fooling onlookers at first before going on to disrupt companies and whole industries. Before you know it, something that was once expensive and physical is an app that costs a buck.
Newspapers and CDs are two obvious recent examples. The entertainment and media industries are still dealing with the aftermath of digitization as they attempt to transform and update old practices tailored to a bygone era. But it won’t end with digital media. As more of the economy is digitized—from medicine to manufacturing—industries will hop on an exponential curve and be similarly disrupted.
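The deceptive-then-disruptive shape of exponential growth is easy to see with simple arithmetic. As an illustration only (the numbers are not from the article), here is a short sketch comparing thirty linear steps with thirty doublings:

```python
# Illustrative sketch: why exponential growth looks deceptive early
# and disruptive later. Thirty linear steps vs. thirty doublings.
linear = list(range(1, 31))                      # 1, 2, 3, ..., 30
exponential = [2 ** step for step in range(30)]  # 1, 2, 4, ..., 2**29

# After five steps the two curves are still in the same ballpark...
print(linear[4], exponential[4])    # 5 vs 16

# ...but after thirty steps the gap is roughly seven orders of magnitude.
print(linear[-1], exponential[-1])  # 30 vs 536870912
```

Thirty doublings turn 1 into more than half a billion, while thirty linear steps reach only 30, which is why onlookers tracking the early, flat-looking part of the curve are so often fooled.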
Diamandis’s Six Ds are critical to understanding and planning for this disruption.
The Six Ds of Exponential Organizations are Digitized, Deceptive, Disruptive, Demonetized, Dematerialized, and Democratized.

Diamandis uses the contrasting fates of Kodak and Instagram to illustrate the power of the Six Ds and exponential thinking.
Kodak invented the digital camera in 1975, but didn’t invest heavily in the new technology, instead sticking with what had always worked: traditional cameras and film. In 1996, Kodak had a $28 billion market capitalization with 95,000 employees.
But the company didn’t pay enough attention to how digitization of their core business was changing it; people were no longer taking pictures in the same way and for the same reasons as before.
After a downward spiral, Kodak went bankrupt in 2012. That same year, Facebook acquired Instagram, a digital photo sharing app, which at the time was a startup with 13 employees. The acquisition’s price tag? $1 billion. And Instagram had been founded only 18 months earlier.
The most ironic piece of this story is that Kodak invented the digital camera; they took the first step toward overhauling the photography industry and ushering it into the modern age, but they were unwilling to disrupt their existing business by taking a risk in what was then uncharted territory. So others did it instead.
The same can happen with any technology that’s just getting off the ground. It’s easy to stop pursuing it in the early part of the exponential curve, when development appears to be moving slowly. But failing to follow through only gives someone else the chance to do it instead.
The Six Ds are a road map showing what can happen when an exponential technology is born. Not every phase is easy, but the results give even small teams the power to change the world in a faster and more impactful way than traditional business ever could.
Image Credit: Mohammed Tareq / Shutterstock