
#431238 AI Is Easy to Fool—Why That Needs to ...

Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would be looking for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible for a human to discern but are highly predictive of a particular object.

“An attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human.”

What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
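The mechanics are easy to see in miniature. The sketch below uses a hypothetical linear "classifier" (not any production system): because the model sums tiny pieces of evidence over hundreds of pixels, nudging every pixel by an imperceptible amount in the right direction is enough to overwhelm its decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "image classifier": 784 learned pixel weights; a positive
# score means the model sees a stop sign.
w = rng.normal(size=784)

def classify(image):
    return "stop sign" if w @ image > 0 else "something else"

# An image the model classifies correctly, with a comfortable margin.
x = 0.1 * w / np.linalg.norm(w)

# Adversarial tweak: shift every pixel a tiny 0.02 against the weight
# vector. Each individual change is imperceptible, but they add up.
eps = 0.02
x_adv = x - eps * np.sign(w)

print(classify(x))                     # correctly classified
print(classify(x_adv))                 # fooled
print(float(np.abs(x_adv - x).max()))  # largest per-pixel change: 0.02
```

This is a white-box version, since it reads the model's weights directly; the black-box attacks described below get the same effect using only the model's answers.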
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.

“Bypassing cybersecurity defenses is one of the more worrying and probable near-term applications for this approach.”

Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
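Why a single misspelling can derail a text classifier is visible even in a toy bag-of-words model (the weights here are hand-picked for illustration, not from any real system): an out-of-vocabulary word simply contributes nothing to the score.

```python
# Toy bag-of-words sentiment classifier with hypothetical word weights.
# Words the model has never seen contribute nothing to the score.
weights = {"great": 2.0, "love": 1.5, "terrible": -2.0, "boring": -1.5}

def sentiment(text):
    score = sum(weights.get(word, 0.0) for word in text.lower().split())
    return "positive" if score >= 0 else "negative"

print(sentiment("a boring movie"))   # negative
# One misspelled character knocks "boring" out of the vocabulary and
# flips the verdict, though a human reads both sentences the same way.
print(sentiment("a borring movie"))  # positive
```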
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
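The attack loop can be sketched against a toy stand-in for the detector. The real demonstration used a learned model and genetic algorithms; this greedy version just keeps any appended mutation that lowers the score, never touching the payload itself.

```python
import random

rng = random.Random(0)

# Stand-in for an ML antivirus: a maliciousness score in [0, 1] based on
# how much of the file matches a known-bad byte pattern vs. benign bytes.
BAD, BENIGN = b"\x90\x90", b"MZ"

def av_score(data):
    bad, good = data.count(BAD), data.count(BENIGN)
    return bad / max(1, bad + good)

payload = BAD * 10   # the "functional" malicious part, left untouched
sample = payload     # initially scores 1.0 -- instantly flagged

# Greedy search: append benign-looking bytes and keep any candidate that
# scores no worse, until the detector's threshold is beaten.
while av_score(sample) > 0.3:
    candidate = sample + rng.choice([BENIGN, b"\x00"])
    if av_score(candidate) <= av_score(sample):
        sample = candidate

print(sample.startswith(payload))   # payload intact, so it still "runs"
print(round(av_score(sample), 3))   # now under the detection threshold
```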
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
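A minimal sketch of the poisoning idea, using a one-feature nearest-centroid "spam filter" with entirely hypothetical numbers: mislabeled training points planted in the ham class drag its centroid upward until real spam slips through.

```python
import numpy as np

# One feature (say, links per message); classify by the nearest class mean.
ham = [0.0, 1.0, 1.0, 2.0]
spam = [8.0, 9.0, 10.0, 11.0]

def classify(x, ham_train, spam_train):
    d_ham = abs(x - np.mean(ham_train))
    d_spam = abs(x - np.mean(spam_train))
    return "ham" if d_ham < d_spam else "spam"

suspicious = 7.0
print(classify(suspicious, ham, spam))  # the clean model catches it

# Poisoning: the attacker sneaks spam-like samples into the *ham*
# training data, dragging the ham centroid toward spam territory.
poisoned_ham = ham + [8.0, 9.0, 10.0, 9.0]
print(classify(suspicious, poisoned_ham, spam))  # now slips through
```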
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com


#431155 What It Will Take for Quantum Computers ...

Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome.
This year has seen a surge of interest in quantum computing, driven in part by Google’s announcement that it will demonstrate “quantum supremacy” by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits—the quantum computing equivalent of bits.
As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor’s favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
But these devices hold great promise for solving problems in fields as diverse as cryptography and weather forecasting. One possibility people are particularly excited about is using them to supercharge the machine learning algorithms already transforming the modern world.
The potential is summarized in a recent review paper in the journal Nature written by a group of experts from the emerging field of quantum machine learning.
“Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce,” they write.
“This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.”
Because of the way quantum computers work—taking advantage of strange quantum mechanical effects like entanglement and superposition—algorithms running on them should in principle be able to solve problems much faster than the best known classical algorithms, a phenomenon known as quantum speedup.
Designing these algorithms is tricky work, but the authors of the review note that there has been significant progress in recent years. They highlight multiple quantum algorithms exhibiting quantum speedup that could act as subroutines, or building blocks, for quantum machine learning programs.
We still don’t have the hardware to implement these algorithms, but according to the researchers the challenge is a technical one, and clear paths to overcoming it exist. More challenging, they say, are four fundamental conceptual problems that could limit the applicability of quantum machine learning.
The first two are the input and output problems. Quantum computers, unsurprisingly, deal with quantum data, but the majority of the problems humans want to solve relate to the classical world. Translating significant amounts of classical data into the quantum systems can take so much time it can cancel out the benefits of the faster processing speeds, and the same is true of reading out the solution at the end.
The input problem could be mitigated to some extent by the development of quantum random access memory (qRAM)—the equivalent to RAM in a conventional computer used to provide the machine with quick access to its working memory. A qRAM can be configured to store classical data but allow the quantum computers to access all that information simultaneously as a superposition, which is required for a variety of quantum algorithms. But the authors note this is still a considerable engineering challenge and may not be sustainable for big data problems.
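The scale of the challenge is easy to quantify. Amplitude encoding packs N classical values into the amplitudes of a state over log2(N) qubits, which is exactly why loading the data dominates the cost. This back-of-the-envelope sketch is not tied to any particular qRAM proposal:

```python
import math

import numpy as np

# Eight classical values become the amplitudes of a 3-qubit state.
data = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
n_qubits = int(math.log2(len(data)))
state = data / np.linalg.norm(data)  # amplitudes must be unit-norm

print(n_qubits)                          # 3 qubits for 8 values
print(round(float(np.sum(state**2)), 10))  # 1.0: a valid quantum state

# The compression is dramatic: a billion values fit in about 30 qubits...
print(math.ceil(math.log2(1_000_000_000)))
# ...but each of those billion numbers still has to be loaded into the
# machine somehow, which is the input problem in a nutshell.
```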
Closely related to the input/output problem is the costing problem. At present, the authors say very little is known about how many gates—or operations—a quantum machine learning algorithm will require to solve a given problem when operated on real-world devices. It’s expected that on highly complex problems they will offer considerable improvements over classical computers, but it’s not clear how big problems have to be before this becomes apparent.
Finally, whether or when these advantages kick in may be hard to prove, something the authors call the benchmarking problem. Claiming that a quantum algorithm outperforms any classical machine learning approach requires extensive testing against those other techniques, which may not be feasible.
They suggest that this could be sidestepped by lowering the standards quantum machine learning algorithms are currently held to. This makes sense, as it doesn’t really matter whether an algorithm is intrinsically faster than all possible classical ones, as long as it’s faster than all the existing ones.
Another way of avoiding some of these problems is to apply these techniques directly to quantum data, the actual states generated by quantum systems and processes. The authors say this is probably the most promising near-term application for quantum machine learning and has the added benefit that any insights can be fed back into the design of better hardware.
“This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors,” they conclude.
Image Credit: archy13 / Shutterstock.com


#430855 Why Education Is the Hardest Sector of ...

We’ve all heard the warning cries: automation will disrupt entire industries and put millions of people out of jobs. In fact, up to 45 percent of existing jobs can be automated using current technology.
However, this may not necessarily apply to the education sector. After a detailed analysis of more than 2,000 work activities across more than 800 occupations, a report by McKinsey & Co. states that of all the sectors examined, “…the technical feasibility of automation is lowest in education.”
There is no doubt that technological trends will have a powerful impact on global education, both by improving the overall learning experience and by increasing global access to education. Massive open online courses (MOOCs), chatbot tutors, and AI-powered lesson plans are just a few examples of the digital transformation in global education. But will robots and artificial intelligence ever fully replace teachers?
The Most Difficult Sector to Automate
While various tasks surrounding education—like administration or facilities maintenance—are open to automation, teaching itself is not.
Effective education involves more than just transfer of information from a teacher to a student. Good teaching requires complex social interactions and adaptation to the individual student’s learning needs. An effective teacher is not just responsive to each student’s strengths and weaknesses, but is also empathetic towards the student’s state of mind. It’s about maximizing human potential.
Furthermore, students don’t just rely on effective teachers to teach them the course material, but also as a source of life guidance and career mentorship. Deep and meaningful human interaction is crucial and is something that is very difficult, if not impossible, to automate.
Automating teaching is an example of a task that would require artificial general intelligence (as opposed to narrow or specific intelligence). In other words, it is the kind of task that would require an AI that understands natural human language, empathizes with emotions, and can plan, strategize, and make impactful decisions under unpredictable circumstances.
This would be the kind of machine that can do anything a human can do, and it doesn’t exist—at least, not yet.
We’re Getting There
Let’s not forget how quickly AI is evolving. Just because it’s difficult to fully automate teaching, it doesn’t mean the world’s leading AI experts aren’t trying.
Meet Jill Watson, the teaching assistant from Georgia Institute of Technology. Watson isn’t your average TA. She’s an IBM-powered artificial intelligence that is being implemented in universities around the world. Watson is able to answer students’ questions with 97 percent certainty.
Technologies like this also have applications in grading and providing feedback. Some AI algorithms are being trained and refined to perform automatic essay scoring. One project has achieved a 0.945 correlation with human graders.
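For context, the correlation figure above is a Pearson coefficient between the machine's scores and human graders'. With made-up scores (illustrative only, not the study's data), it is computed like this:

```python
import numpy as np

# Hypothetical essay scores from a human grader and an automated scorer.
human = np.array([4, 2, 5, 3, 4, 1, 5, 2])
machine = np.array([4, 2, 5, 3, 3, 1, 5, 3])

# Pearson correlation: 1.0 would mean perfect agreement.
r = np.corrcoef(human, machine)[0, 1]
print(round(r, 3))  # strong but imperfect agreement, about 0.93 here
```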
All of this will have a remarkable impact on online education as we know it and dramatically increase online student retention rates.

Any student with a smartphone can access a wealth of information and free courses from universities around the world. MOOCs have allowed valuable courses to become available to millions of students. But at the moment, not all participants can receive customized feedback for their work. Currently, this is limited by manpower, but in the future that may not be the case.
What chatbots like Jill Watson allow is the opportunity for hundreds of thousands, if not millions, of students to have their work reviewed and all their questions answered at a minimal cost.
AI algorithms also have a significant role to play in personalization of education. Every student is unique and has a different set of strengths and weaknesses. Data analysis can be used to improve individual student results, assess each student’s strengths and weaknesses, and create mass-customized programs. Algorithms can analyze student data and consequently make flexible programs that adapt to the learner based on real-time feedback. According to the McKinsey Global Institute, all of this data in education could unlock between $900 billion and $1.2 trillion in global economic value.
Beyond Automated Teaching
It’s important to recognize that technological automation alone won’t fix the many issues in our global education system today. Dominated by outdated curricula, standardized tests, and an emphasis on short-term knowledge, many experts are calling for a transformation of how we teach.
It is not enough to simply automate the process. We can have a completely digital learning experience that continues to focus on outdated skills and fails to prepare students for the future. In other words, we must not only be innovative with our automation capabilities, but also with educational content, strategy, and policies.
Are we equipping students with the most important survival skills? Are we inspiring young minds to create a better future? Are we meeting the unique learning needs of each and every student? There’s no point automating and digitizing a system that is already flawed. We need to ensure the system that is being digitized is itself being transformed for the better.
Stock Media provided by davincidig / Pond5


#430668 Why Every Leader Needs to Be Obsessed ...

This article is part of a series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World,” broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today’s post, part five in the series, takes a more detailed look at leaders as technologists. Be sure to check out part two of the series, “How Leaders Dream Boldly to Bring New Futures to Life,” part three of the series, “How All Leaders Can Make the World a Better Place,” and part four of the series, “How Leaders Can Make Innovation Everyone’s Day Job”.
In the 1990s, Tower Records was the place to get new music. Successful and popular, the California chain spread far and wide, and in 1998, they took on $110 million in debt to fund aggressive further expansion. This wasn’t, as it turns out, the best of timing.
The first portable digital music player went on sale the same year. The following year brought Napster, a file sharing service allowing users to freely share music online. By 2000, Napster hosted 20 million users swapping songs. Then in 2001, Apple’s iPod and iTunes arrived, and when the iTunes Music Store opened in 2003, Apple sold over a million songs the first week.
As music was digitized, hard copies began to go out of style, and sales and revenue declined.
Tower first filed for bankruptcy in 2004 and again (for the last time) in 2006. The internet wasn’t the only reason for Tower’s demise. Mismanagement and price competition from electronics retailers like Best Buy also played a part. Still, today, the vast majority of music is purchased or streamed entirely online, and record stores are for the most part a niche market.
The writing was on the wall, but those impacted most had trouble reading it.
Why is it difficult for leaders to see technological change coming and right the ship before it’s too late? Why did Tower go all out on expansion just as the next big thing took the stage?
This is one story of many. Digitization has moved beyond music and entertainment, and now many big retailers operating physical stores are struggling to stay relevant. Meanwhile, the pace of change is accelerating, and new potentially disruptive technologies are on the horizon.
More than ever, leaders need to develop a strong understanding of and perspective on technology. They need to survey new innovations, forecast their pace, gauge the implications, and adopt new tools and strategy to change course as an industry shifts, not after it’s shifted.
Simply, leaders need to adopt the mindset of a technologist. Here’s what that means.
Survey the Landscape
Nurturing curiosity is the first step to understanding technological change. To know how technology might disrupt your industry, you have to know what’s in the pipeline and identify which new inventions are directly or indirectly related to your industry.
Becoming more technologically minded takes discipline and focus as well as unstructured time to explore the non-obvious connections between what is right in front of us and what might be. It requires a commitment to ongoing learning and discovery.
Read outside your industry and comfort zone, not just Fast Company and Wired but also Science and Nature, to expand your horizons. Identify experts with the ability to demystify specific technology areas—many have a solid following on Twitter or a frequently cited blog.
But it isn’t all about reading. Consider going where the change is happening too.
Visit one of the technology hubs around the world or a local university research lab in your own back yard. Or bring the innovation to you by building an internal exploration lab stocked with the latest technologies, creating a technology advisory board, hosting an internal innovation challenge, or a local pitch night where aspiring entrepreneurs can share their newest ideas.
You might even ask the crowd by inviting anyone to suggest what innovation is most likely to disrupt your product, service, or sector. And don’t hesitate to engage younger folks—the digital natives all around you—by asking questions about what technology they are using or excited about. Consider going on a field trip with them to see how they use technology in different aspects of their lives. Invite the seasoned executives on your team to explore long-term “reverse mentoring” with someone who can expose them to the latest technology and teach them to use it.
Whatever your strategy, the goal should be to develop a healthy obsession with technology.
By exploring fresh perspectives outside traditional work environments and then giving ourselves permission to see how these new ideas might influence existing products and strategies, we have a chance to be ready for what we’re not ready for—but is likely right around the corner.
Estimate the Pace of Progress
The next step is forecasting when a technology will mature.
One of the most challenging aspects of the changes underway is that in many technology arenas, we are quickly moving from a linear to an exponential pace. It is hard enough to envision what is needed in an industry buffeted by progress that is changing 10% per year, but what happens when technological progress doubles annually? That is another world altogether.
This kind of change can be deceiving. For example, machine learning and big data are finally reaching critical momentum after more than twenty years of being right around the corner. The advances in applications like speech and image recognition that we’ve seen in recent years dwarf what came before and many believe we’ve just begun to understand the implications.
Even as we begin to embrace disruptive change in one technology arena, far more exciting possibilities unfold when we explore how multiple arenas are converging.
Artificial intelligence and big data are great examples. As Hod Lipson, professor of Mechanical Engineering and Data Science at Columbia University and co-author of Driverless: Intelligent Cars and the Road Ahead, says, “AI is the engine, but big data is the fuel. They need each other.”
This convergence paired with an accelerating pace makes for surprising applications.
To keep his research lab agile and open to new uses of advancing technologies, Lipson routinely asks his PhD students, “How might AI disrupt this industry?” to prompt development of applications across a wide spectrum of sectors from healthcare to agriculture to food delivery.
Explore the Consequences
New technology inevitably gives rise to new ethical, social, and moral questions that we have never faced before. Rather than bury our heads in the sand, as leaders we must explore the full range of potential consequences of whatever is underway or still to come.
We can add AI to kids’ toys, like Mattel’s Hello Barbie or use cutting-edge gene editing technology like CRISPR-Cas9 to select for preferred gene sequences beyond basic health. But just because we can do something doesn’t mean we should.
Take time to listen to skeptics and understand the risks posed by technology.
Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates, and other well-known names in science and technology have expressed concern in the media and via open letters about the risks posed by AI. Microsoft’s CEO, Satya Nadella, has even argued tech companies shouldn’t build artificial intelligence systems that will replace people rather than making them more productive.
Exploring unintended consequences goes beyond having a Plan B for when something goes wrong. It requires broadening our view of what we’re responsible for. Beyond customers, shareholders, and the bottom line, we should understand how our decisions may impact employees, communities, the environment, our broader industry, and even our competitors.
The minor inconvenience of mitigating these risks now is far better than the alternative. Create forums to listen to and value voices outside of the board room and C-Suite. Seek out naysayers, ethicists, community leaders, wise elders, and even neophytes—those who may not share our preconceived notions of right and wrong or our narrow view of our role in the larger world.
The question isn’t: If we build it, will they come? It’s now: If we can build it, should we?
Adopt New Technologies and Shift Course
The last step is hardest. Once you’ve identified a technology (or technologies) as a potential disruptor and understand the implications, you need to figure out how to evolve your organization to make the most of the opportunity. Simply recognizing disruption isn’t enough.
Take today’s struggling brick-and-mortar retail business. Online shopping isn’t new. Amazon isn’t a plucky startup. Both have been changing how we buy stuff for years. And yet many who still own and operate physical stores—perhaps most prominently, Sears—are now on the brink of bankruptcy.
There’s hope though. Netflix began as a DVD delivery service in the 90s, but quickly realized its core business didn’t have staying power. It would have been laughable to stream movies when Netflix was founded. Still, computers and bandwidth were advancing fast. In 2007, the company added streaming to its subscription. Even then it wasn’t a totally compelling product.
But Netflix clearly saw a streaming future would likely end their DVD business.
In recent years, faster connection speeds, a growing content library, and the company’s entrance into original programming have given Netflix streaming the upper hand over DVDs. Since 2011, DVD subscriptions have steadily declined. Yet the company itself is doing fine. Why? It anticipated the shift to streaming and acted on it.
Never Stop Looking for the Next Big Thing
Technology is and will increasingly be a driver of disruption, destabilizing entrenched businesses and entire industries while also creating new markets and value not yet imagined.
When faced with the rapidly accelerating pace of change, many companies still default to old models and established practices. Leading like a technologist requires vigilant understanding of potential sources of disruption—what might make your company’s offering obsolete? The answers may not always be perfectly clear. What’s most important is relentlessly seeking them.
Stock Media provided by MJTierney / Pond5


#430658 Why Every Leader Needs a Healthy ...

This article is part of a series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World,” broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today’s post, part five in the series, takes a more detailed look at leaders as technologists. Be sure to check out part two of the series, “How Leaders Dream Boldly to Bring New Futures to Life,” part three of the series, “How All Leaders Can Make the World a Better Place,” and part four of the series, “How Leaders Can Make Innovation Everyone’s Day Job”.
In the 1990s, Tower Records was the place to get new music. Successful and popular, the California chain spread far and wide, and in 1998, they took on $110 million in debt to fund aggressive further expansion. This wasn’t, as it turns out, the best of timing.
The first portable digital music player went on sale the same year. The following year brought Napster, a file sharing service allowing users to freely share music online. By 2000, Napster hosted 20 million users swapping songs. Then in 2001, Apple’s iPod and iTunes arrived, and when the iTunes Music Store opened in 2003, Apple sold over a million songs the first week.
As music was digitized, hard copies began to go out of style, and sales and revenue declined.
Tower first filed for bankruptcy in 2004 and again (for the last time) in 2006. The internet wasn’t the only reason for Tower’s demise. Mismanagement and price competition from electronics retailers like Best Buy also played a part. Still, today, the vast majority of music is purchased or streamed entirely online, and record stores are for the most part a niche market.
The writing was on the wall, but those impacted most had trouble reading it.
Why is it difficult for leaders to see technological change coming and right the ship before it’s too late? Why did Tower go all out on expansion just as the next big thing took the stage?
This is one story of many. Digitization has moved beyond music and entertainment, and now many big retailers operating physical stores are struggling to stay relevant. Meanwhile, the pace of change is accelerating, and new potentially disruptive technologies are on the horizon.
More than ever, leaders need to develop a strong understanding of and perspective on technology. They need to survey new innovations, forecast their pace, gauge the implications, and adopt new tools and strategy to change course as an industry shifts, not after it’s shifted.
Simply put, leaders need to adopt the mindset of a technologist. Here’s what that means.
Survey the Landscape
Nurturing curiosity is the first step to understanding technological change. To know how technology might disrupt your industry, you have to know what’s in the pipeline and identify which new inventions are directly or indirectly related to your industry.
Becoming more technologically minded takes discipline and focus as well as unstructured time to explore the non-obvious connections between what is right in front of us and what might be. It requires a commitment to ongoing learning and discovery.
To expand your horizons, read outside your industry and comfort zone: not just Fast Company and Wired, but also Science and Nature. Identify experts with the ability to demystify specific technology areas—many have a solid following on Twitter or a frequently cited blog.
But it isn’t all about reading. Consider going where the change is happening too.
Visit one of the technology hubs around the world or a local university research lab in your own backyard. Or bring the innovation to you by building an internal exploration lab stocked with the latest technologies, creating a technology advisory board, hosting an internal innovation challenge, or running a local pitch night where aspiring entrepreneurs can share their newest ideas.
You might even ask the crowd by inviting anyone to suggest what innovation is most likely to disrupt your product, service, or sector. And don’t hesitate to engage younger folks—the digital natives all around you—by asking questions about what technology they are using or excited about. Consider going on a field trip with them to see how they use technology in different aspects of their lives. Invite the seasoned executives on your team to explore long-term “reverse mentoring” with someone who can expose them to the latest technology and teach them to use it.
Whatever your strategy, the goal should be to develop a healthy obsession with technology.
By exploring fresh perspectives outside traditional work environments and then giving ourselves permission to see how these new ideas might influence existing products and strategies, we have a chance to be ready for what we’re not ready for—but is likely right around the corner.
Estimate the Pace of Progress
The next step is forecasting when a technology will mature.
One of the most challenging aspects of the changes underway is that in many technology arenas, we are quickly moving from a linear to an exponential pace. It is hard enough to envision what an industry needs when progress compounds at 10 percent per year, but what happens when technological capability doubles annually? That is another world altogether.
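The gap between those two growth rates is easy to underestimate. A minimal sketch makes it concrete (the starting value of 1 unit and the ten-year horizon are illustrative assumptions, not figures from the article):

```python
def grow(start, rate, years):
    """Compound a capability by `rate` once per year."""
    value = start
    for _ in range(years):
        value *= rate
    return value

# 10% per year for a decade yields roughly a 2.6x improvement...
steady = grow(1.0, 1.10, 10)

# ...while doubling every year for a decade yields a 1024x improvement.
doubling = grow(1.0, 2.0, 10)

print(f"10% yearly for 10 years: {steady:.1f}x")
print(f"Doubling yearly for 10 years: {doubling:.0f}x")
```

After ten years the doubling curve is nearly 400 times further along than the steady one, which is why planning built on linear intuition can fail so abruptly.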
This kind of change can be deceiving. For example, machine learning and big data are finally reaching critical momentum after more than twenty years of being right around the corner. The advances in applications like speech and image recognition that we’ve seen in recent years dwarf what came before and many believe we’ve just begun to understand the implications.
Even as we begin to embrace disruptive change in one technology arena, far more exciting possibilities unfold when we explore how multiple arenas are converging.
Artificial intelligence and big data are great examples. As Hod Lipson, professor of Mechanical Engineering and Data Science at Columbia University and co-author of Driverless: Intelligent Cars and the Road Ahead, says, “AI is the engine, but big data is the fuel. They need each other.”
This convergence paired with an accelerating pace makes for surprising applications.
To keep his research lab agile and open to new uses of advancing technologies, Lipson routinely asks his PhD students, “How might AI disrupt this industry?” to prompt development of applications across a wide spectrum of sectors from healthcare to agriculture to food delivery.
Explore the Consequences
New technology inevitably gives rise to new ethical, social, and moral questions that we have never faced before. Rather than bury our heads in the sand, as leaders we must explore the full range of potential consequences of whatever is underway or still to come.
We can add AI to kids’ toys, like Mattel’s Hello Barbie, or use cutting-edge gene editing technology like CRISPR-Cas9 to select for preferred gene sequences beyond basic health. But just because we can do something doesn’t mean we should.
Take time to listen to skeptics and understand the risks posed by technology.
Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates, and other well-known names in science and technology have expressed concern in the media and via open letters about the risks posed by AI. Microsoft’s CEO, Satya Nadella, has even argued tech companies shouldn’t build artificial intelligence systems that will replace people rather than making them more productive.
Exploring unintended consequences goes beyond having a Plan B for when something goes wrong. It requires broadening our view of what we’re responsible for. Beyond customers, shareholders, and the bottom line, we should understand how our decisions may impact employees, communities, the environment, our broader industry, and even our competitors.
The minor inconvenience of mitigating these risks now is far better than the alternative. Create forums to listen to and value voices outside the boardroom and C-suite. Seek out naysayers, ethicists, community leaders, wise elders, and even neophytes—those who may not share our preconceived notions of right and wrong or our narrow view of our role in the larger world.
The question isn’t: If we build it, will they come? It’s now: If we can build it, should we?
Adopt New Technologies and Shift Course
The last step is hardest. Once you’ve identified a technology (or technologies) as a potential disruptor and understand the implications, you need to figure out how to evolve your organization to make the most of the opportunity. Simply recognizing disruption isn’t enough.
Take today’s struggling brick-and-mortar retail business. Online shopping isn’t new. Amazon isn’t a plucky startup. Both have been changing how we buy stuff for years. And yet many who still own and operate physical stores—perhaps most prominently, Sears—are now on the brink of bankruptcy.
There’s hope though. Netflix began as a DVD delivery service in the 90s, but quickly realized its core business didn’t have staying power. Streaming movies would have been laughable when Netflix was founded. Still, computers and bandwidth were advancing fast. In 2007, the company added streaming to its subscription service. Even then it wasn’t a totally compelling product.
But Netflix clearly saw that a streaming future would likely end its DVD business.
In recent years, faster connection speeds, a growing content library, and the company’s entrance into original programming have given Netflix streaming the upper hand over DVDs. Since 2011, DVD subscriptions have steadily declined. Yet the company itself is doing fine. Why? It anticipated the shift to streaming and acted on it.
Never Stop Looking for the Next Big Thing
Technology is and will increasingly be a driver of disruption, destabilizing entrenched businesses and entire industries while also creating new markets and value not yet imagined.
When faced with the rapidly accelerating pace of change, many companies still default to old models and established practices. Leading like a technologist requires vigilant understanding of potential sources of disruption—what might make your company’s offering obsolete? The answers may not always be perfectly clear. What’s most important is relentlessly seeking them.