Tag Archives: big

#430830 Biocomputers Made From Cells Can Now ...

When it comes to biomolecules, RNA doesn’t get a lot of love.
Maybe you haven’t even heard of this silent workhorse. RNA is the cell’s de facto translator: like a game of telephone, RNA carries DNA’s genetic code to cellular factories called ribosomes. There, the cell makes proteins based on RNA’s message.
But RNA isn’t just a middleman. It controls which proteins are formed. Because proteins whiz around the cell completing all sorts of important processes, you could say that RNA is the gatekeeper: no RNA message, no proteins, no life.
In a new study published in Nature, RNA finally took center stage. By adding bits of genetic material to E. coli bacteria, a team of biohackers at the Wyss Institute hijacked the organism’s RNA messengers so that they spring into action only following certain inputs.
The result? A bacterial biocomputer capable of performing 12-input logic operations—AND, OR, and NOT. Rather than outputting 0s and 1s, these biocircuits produce results based on the presence or absence of proteins and other molecules.
“It’s the greatest number of inputs in a circuit that a cell has been able to process,” says study author Dr. Alexander Green at Arizona State University. “To be able to analyze those signals and make a decision is the big advance here.”
When given a specific set of inputs, the bacteria spit out a protein that made them glow neon green under fluorescent light.
But synthetic biology promises far more than just a party trick—by tinkering with a cell’s RNA repertoire, scientists may one day coax them to photosynthesize, produce expensive drugs on the fly, or diagnose and hunt down rogue tumor cells.
Illustration of an RNA-based ‘ribocomputing’ device that makes logic-based decisions in living cells. The long gate RNA (blue) detects the binding of an input RNA (red). The ribosome (purple/mauve) reads the gate RNA to produce an output protein. Image Credit: Alexander Green / Arizona State University
The software of life
This isn’t the first time scientists have hijacked life’s algorithms to reprogram cells into nanocomputing systems. Previous work has already given the world yeast cells that make anti-malaria drugs from sugar and mammalian cells that perform Boolean logic.
Yet circuits with multiple inputs and outputs remain hard to program. The reason is this: synthetic biologists have traditionally focused on snipping, fusing, or otherwise arranging a cell’s DNA to produce the outcomes they want.
But DNA is two steps removed from proteins, and tinkering with life’s code often leads to unexpected consequences. For one, the cell may not even accept and produce the extra bits of DNA code. For another, the added code, once transformed into proteins, may not behave as expected in the crowded, ever-changing environment of the cell.
What’s more, tinkering with one gene is often not enough to program an entirely new circuit. Scientists often need to amp up or shut down the activity of multiple genes, or multiple biological “modules” each made up of tens or hundreds of genes.
It’s like trying to fit new Lego pieces in a specific order into a room full of Lego constructions. Each new piece has the potential to wander off track and click onto something it’s not supposed to touch.
Getting every moving component to work in sync—as you might have guessed—is a giant headache.
The RNA way
With “ribocomputing,” Green and colleagues set out to tackle a main problem in synthetic biology: predictability.
Named after the “ribo” in ribonucleic acid (RNA), the method grew out of an idea that first struck Green back in 2012.
“The synthetic biological circuits to date have relied heavily on protein-based regulators that are difficult to scale up,” Green wrote at the time. We only have a limited handful of “designable parts” that work well, and these circuits require significant resources to encode and operate, he explained.
RNA, in comparison, is a lot more predictable. Like its more famous sibling DNA, RNA is composed of units that come in four different flavors: A, G, C, and U. Although RNA is only single-stranded, rather than the double helix for which DNA is known, it binds complementary sequences in a very predictable manner: Gs always pair with Cs, and As always with Us.
Because of this predictability, it’s possible to design RNA components that bind together perfectly. In other words, it reduces the chance that added RNA bits might go rogue in an unsuspecting cell.
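To make that predictability concrete, here is a toy Python sketch of the base-pairing rule (purely illustrative; it is not the team’s design software, and the sequences are made up):

```python
# Watson-Crick pairing for RNA: G pairs with C, A pairs with U.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the base-for-base binding partner of an RNA strand."""
    return "".join(PAIR[base] for base in strand)

def binds(a: str, b: str) -> bool:
    """True if strand b perfectly binds strand a.

    Strands pair antiparallel, so b is compared in reverse.
    """
    return len(a) == len(b) and complement(a) == b[::-1]

print(binds("AUGGC", "GCCAU"))  # True: every base finds its partner
print(binds("AUGGC", "GCCAC"))  # False: a single mismatch breaks it
```

Because there is exactly one partner for every base, whether two strands bind is fully determined by their sequences, which is what makes RNA components so designable.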
Normally, once RNA is produced it immediately rushes to the ribosome—the cell’s protein-building factory. Think of it as a constantly “on” system.
However, Green and his team devised a clever mechanism to slow these messages down. Dubbed the “toehold switch,” it works like this: the artificial RNA component is first incorporated into a chain of A, G, C, and U folded into a paperclip-like structure.
This blocks the RNA from accessing the ribosome. Because one RNA strand generally maps to one protein, the switch prevents that protein from ever getting made.
In this way, the switch is set to “off” by default—a “NOT” gate, in Boolean logic.
To activate the switch, the cell needs another component: a “trigger RNA,” which binds to the RNA toehold switch. This flips it on: the RNA grabs onto the ribosome, and bam—proteins.
BioLogic gates
String a few RNA switches together, with the activity of each one relying on the one before, and they form an “AND” gate. Alternatively, if the activity of each switch is independent, that’s an “OR” gate.
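In software terms, each switch is a tiny Boolean function of whether its trigger RNA is present, and the gates fall out of how switches are composed. A minimal sketch of that logic (an abstraction for illustration, not a simulation of the actual molecules):

```python
def toehold_switch(trigger_present: bool) -> bool:
    """Off by default; flips on only when its trigger RNA binds."""
    return trigger_present

def and_gate(*triggers: bool) -> bool:
    # Chained switches: each depends on the one before, so all
    # triggers must be present for the output protein to be made.
    return all(toehold_switch(t) for t in triggers)

def or_gate(*triggers: bool) -> bool:
    # Independent switches: any single trigger is enough.
    return any(toehold_switch(t) for t in triggers)

print(and_gate(True, True, False))  # False: one missing trigger blocks output
print(or_gate(False, True, False))  # True: one trigger suffices
```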
“Basically, the toehold switches performed so well that we wanted to find a way to best exploit them for cellular applications,” says Green. They’re “kind of the equivalent of your first transistors,” he adds.
Once the team optimized the designs for different logic gates, they carefully condensed the switches into “gate RNA” molecules. These gate RNAs contain both codes for proteins and the logic operations needed to kickstart the process—a molecular logic circuit, so to speak.
If you’ve ever played around with an Arduino-controlled electrical circuit, you probably know the easiest way to test its function is with a light bulb.
That’s what the team did here, though with a biological bulb: green fluorescent protein, a fluorescent jellyfish protein not normally present in bacteria that—when produced—makes the microbes glow neon green.
In a series of experiments, Green and his team genetically inserted gate RNAs into bacteria. Then, depending on the type of logical function, they added different combinations of trigger RNAs—the inputs.
When the input RNA matched up with its corresponding gate RNA, it flipped on the switch, causing the cell to light up.

Their most complex circuit contained five AND gates, five OR gates, and two NOTs—a 12-input ribocomputer that functioned exactly as designed.
That’s quite the achievement. “Everything is interacting with everything else and there are a million ways those interactions could flip the switch on accident,” says RNA researcher Dr. Julius Lucks at Northwestern University.
The specificity is thanks to RNA, the authors explain. Because RNA strands bind one another so predictably, we can now design massive libraries of gate and trigger units to mix and match into all types of nano-biocomputers.
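The article doesn’t spell out how the 12-input circuit was wired, but a hypothetical composition in the same spirit shows how such an expression reads (the layout and function below are invented for illustration, not the published design):

```python
# A made-up 12-input expression mixing AND, OR, and NOT, evaluated
# over the presence (True) or absence (False) of each trigger RNA.
def ribocomputer(a, b, c, d, e, f, g, h, i, j, k, l):
    activating = (a and b) or (c and d) or (e and f) or (g and h) or (i and j)
    return activating and not k and not l  # k and l act as NOT inputs

glows = ribocomputer(*([True, True] + [False] * 10))
print("cell glows" if glows else "cell stays dark")  # cell glows
```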
RNA BioNanobots
Although the technology doesn’t have any immediate applications, the team has high hopes.
For the first time, it’s now possible to massively scale up the process of programming new circuits into living cells. We’ve expanded the library of available biocomponents that can be used to reprogram life’s basic code, the authors say.
What’s more, when freeze-dried onto a piece of tissue paper, RNA keeps very well. We could potentially print RNA toehold switches onto paper that respond to viruses or to tumor cells, the authors say, essentially transforming the technology into highly accurate diagnostic platforms.
But Green has even wilder hopes for his RNA-based circuits.
“Because we’re using RNA, a universal molecule of life, we know these interactions can also work in other cells, so our method provides a general strategy that could be ported to other organisms,” he says.
Ultimately, the hope is to program neural network-like capabilities into the body’s other cells.
Imagine cells endowed with circuits capable of performing the kinds of computation the brain does, the authors say.
Perhaps one day, synthetic biology will transform our own cells into fully programmable entities, turning us all into biological cyborgs from the inside. How wild would that be?
Image Credit: Wyss Institute at Harvard University

Posted in Human Robots

#430801 3 Exponentials to Watch | Future of ...

In the third of Singularity University’s Future of Everything YouTube series with Jason Silva, Silva discusses “The Big Three” exponential technologies, which he defines as GNR: genetics, nanotechnology, and robotics.
“If I were to be talking to entrepreneurs, if I was talking to heads of companies, I would tell them, pay attention to exponentials,” Silva says. “Pay attention to disruptive technologies… These are the forces that are upending the world. These are the trillion-dollar industries that are going to emerge out of no place.”

Image Credit: Shutterstock

Posted in Human Robots

#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Aside from industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel: we wanted cool robots; instead, we got 140 characters and Flippy the burger bot. But scientists are making progress in empowering robots to see and respond to their surroundings much as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. What sets their approach apart is that it confines the space in which the robot learns to classify objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in objects no human or machine would recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
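As a rough analogy in code, here is a toy numpy sketch of classifying inside a learned subspace. It illustrates the general subspace idea only; it is not Burchfiel and Konidaris’s algorithm, and the “scans” here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "3D scans": 200 objects, each flattened to 1,000 numbers.
X = rng.normal(size=(200, 1000))
labels = rng.integers(0, 4, size=200)      # four pretend object classes

# Learn the restricted space: top principal directions of the scans.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:10]                             # a 10-dimensional subspace

def project(x):
    """Drop a scan (or a batch of scans) into the learned subspace."""
    return (x - mean) @ basis.T

# Each class is summarized by its mean inside the subspace.
class_means = {c: project(X[labels == c]).mean(axis=0) for c in range(4)}

def classify(scan):
    z = project(scan)
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))

print(classify(X[0]), "vs true label", labels[0])
```

Working in 10 dimensions instead of 1,000 is what makes the classification task tractable, which is the gist of the restricted-space argument.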
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to move from one place to another. The volunteers gave various commands to the robot, ranging from the general (“take the chair to the blue room”) to detailed step-by-step instructions.
The researchers then used the database of instructions to teach their system the kinds of words used at different levels of language. The machine learned not only to follow instructions but to recognize their level of abstraction. That was key to kickstarting its problem-solving abilities so it could tackle the job in the most appropriate way.
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. Conversely, when the system could not identify the specificity of the task, it took the robot 20 or more seconds to plan about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human-level vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash

Posted in Human Robots

#430668 Why Every Leader Needs to Be Obsessed ...

This article is part of a series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World,” broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today’s post, part five in the series, takes a more detailed look at leaders as technologists. Be sure to check out part two of the series, “How Leaders Dream Boldly to Bring New Futures to Life,” part three of the series, “How All Leaders Can Make the World a Better Place,” and part four of the series, “How Leaders Can Make Innovation Everyone’s Day Job”.
In the 1990s, Tower Records was the place to get new music. Successful and popular, the California chain spread far and wide, and in 1998, they took on $110 million in debt to fund aggressive further expansion. This wasn’t, as it turns out, the best of timing.
The first portable digital music player went on sale the same year. The following year brought Napster, a file sharing service allowing users to freely share music online. By 2000, Napster hosted 20 million users swapping songs. Then in 2001, Apple’s iPod and iTunes arrived, and when the iTunes Music Store opened in 2003, Apple sold over a million songs the first week.
As music was digitized, hard copies began to go out of style, and sales and revenue declined.
Tower first filed for bankruptcy in 2004 and again (for the last time) in 2006. The internet wasn’t the only reason for Tower’s demise. Mismanagement and price competition from electronics retailers like Best Buy also played a part. Still, today, the vast majority of music is purchased or streamed entirely online, and record stores are for the most part a niche market.
The writing was on the wall, but those impacted most had trouble reading it.
Why is it difficult for leaders to see technological change coming and right the ship before it’s too late? Why did Tower go all out on expansion just as the next big thing took the stage?
This is one story of many. Digitization has moved beyond music and entertainment, and now many big retailers operating physical stores are struggling to stay relevant. Meanwhile, the pace of change is accelerating, and new potentially disruptive technologies are on the horizon.
More than ever, leaders need to develop a strong understanding of and perspective on technology. They need to survey new innovations, forecast their pace, gauge the implications, and adopt new tools and strategy to change course as an industry shifts, not after it’s shifted.
Simply, leaders need to adopt the mindset of a technologist. Here’s what that means.
Survey the Landscape
Nurturing curiosity is the first step to understanding technological change. To know how technology might disrupt your industry, you have to know what’s in the pipeline and identify which new inventions are directly or indirectly related to your industry.
Becoming more technologically minded takes discipline and focus as well as unstructured time to explore the non-obvious connections between what is right in front of us and what might be. It requires a commitment to ongoing learning and discovery.
Read outside your industry and comfort zone, not just Fast Company and Wired but also Science and Nature, to expand your horizons. Identify experts who can demystify specific technology areas; many have a solid following on Twitter or a frequently cited blog.
But it isn’t all about reading. Consider going where the change is happening too.
Visit one of the technology hubs around the world or a local university research lab in your own back yard. Or bring the innovation to you by building an internal exploration lab stocked with the latest technologies, creating a technology advisory board, hosting an internal innovation challenge, or a local pitch night where aspiring entrepreneurs can share their newest ideas.
You might even ask the crowd by inviting anyone to suggest what innovation is most likely to disrupt your product, service, or sector. And don’t hesitate to engage younger folks—the digital natives all around you—by asking questions about what technology they are using or excited about. Consider going on a field trip with them to see how they use technology in different aspects of their lives. Invite the seasoned executives on your team to explore long-term “reverse mentoring” with someone who can expose them to the latest technology and teach them to use it.
Whatever your strategy, the goal should be to develop a healthy obsession with technology.
By exploring fresh perspectives outside traditional work environments and then giving ourselves permission to see how these new ideas might influence existing products and strategies, we have a chance to be ready for what we’re not ready for—but is likely right around the corner.
Estimate the Pace of Progress
The next step is forecasting when a technology will mature.
One of the most challenging aspects of the changes underway is that in many technology arenas, we are quickly moving from a linear to an exponential pace. It is hard enough to envision what is needed in an industry buffeted by progress that is changing 10% per year, but what happens when technological progress doubles annually? That is another world altogether.
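A quick back-of-the-envelope comparison makes the gap vivid (strictly speaking, 10% compounding is also exponential, just far slower than annual doubling):

```python
# Compare 10% yearly improvement with annual doubling over a decade.
for years in (1, 5, 10):
    steady = 1.10 ** years   # 10% compounded per year
    doubling = 2 ** years    # doubling every year
    print(f"{years:2d} years: {steady:6.2f}x vs {doubling:5d}x")
```

After a decade, the steady improver is at roughly 2.6x while the doubler is at 1,024x, which is why planning horizons built for the first regime fail in the second.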
This kind of change can be deceiving. For example, machine learning and big data are finally reaching critical momentum after more than twenty years of being right around the corner. The advances in applications like speech and image recognition that we’ve seen in recent years dwarf what came before and many believe we’ve just begun to understand the implications.
Even as we begin to embrace disruptive change in one technology arena, far more exciting possibilities unfold when we explore how multiple arenas are converging.
Artificial intelligence and big data are great examples. As Hod Lipson, professor of Mechanical Engineering and Data Science at Columbia University and co-author of Driverless: Intelligent Cars and the Road Ahead, says, “AI is the engine, but big data is the fuel. They need each other.”
This convergence paired with an accelerating pace makes for surprising applications.
To keep his research lab agile and open to new uses of advancing technologies, Lipson routinely asks his PhD students, “How might AI disrupt this industry?” to prompt development of applications across a wide spectrum of sectors from healthcare to agriculture to food delivery.
Explore the Consequences
New technology inevitably gives rise to new ethical, social, and moral questions that we have never faced before. Rather than bury our heads in the sand, as leaders we must explore the full range of potential consequences of whatever is underway or still to come.
We can add AI to kids’ toys, like Mattel’s Hello Barbie, or use cutting-edge gene-editing technology like CRISPR-Cas9 to select for preferred gene sequences beyond basic health. But just because we can do something doesn’t mean we should.
Take time to listen to skeptics and understand the risks posed by technology.
Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates, and other well-known names in science and technology have expressed concern in the media and via open letters about the risks posed by AI. Microsoft’s CEO, Satya Nadella, has even argued tech companies shouldn’t build artificial intelligence systems that will replace people rather than making them more productive.
Exploring unintended consequences goes beyond having a Plan B for when something goes wrong. It requires broadening our view of what we’re responsible for. Beyond customers, shareholders, and the bottom line, we should understand how our decisions may impact employees, communities, the environment, our broader industry, and even our competitors.
The minor inconvenience of mitigating these risks now is far better than the alternative. Create forums to listen to and value voices outside of the board room and C-Suite. Seek out naysayers, ethicists, community leaders, wise elders, and even neophytes—those who may not share our preconceived notions of right and wrong or our narrow view of our role in the larger world.
The question isn’t: If we build it, will they come? It’s now: If we can build it, should we?
Adopt New Technologies and Shift Course
The last step is hardest. Once you’ve identified a technology (or technologies) as a potential disruptor and understand the implications, you need to figure out how to evolve your organization to make the most of the opportunity. Simply recognizing disruption isn’t enough.
Take today’s struggling brick-and-mortar retail business. Online shopping isn’t new. Amazon isn’t a plucky startup. Both have been changing how we buy stuff for years. And yet many who still own and operate physical stores—perhaps most prominently, Sears—are now on the brink of bankruptcy.
There’s hope though. Netflix began as a DVD delivery service in the 90s, but quickly realized its core business didn’t have staying power. It would have been laughable to stream movies when Netflix was founded. Still, computers and bandwidth were advancing fast. In 2007, the company added streaming to its subscription. Even then it wasn’t a totally compelling product.
But Netflix clearly saw that a streaming future would likely end its DVD business.
In recent years, faster connection speeds, a growing content library, and the company’s entrance into original programming have given Netflix streaming the upper hand over DVDs. Since 2011, DVD subscriptions have steadily declined. Yet the company itself is doing fine. Why? It anticipated the shift to streaming and acted on it.
Never Stop Looking for the Next Big Thing
Technology is and will increasingly be a driver of disruption, destabilizing entrenched businesses and entire industries while also creating new markets and value not yet imagined.
When faced with the rapidly accelerating pace of change, many companies still default to old models and established practices. Leading like a technologist requires vigilant understanding of potential sources of disruption—what might make your company’s offering obsolete? The answers may not always be perfectly clear. What’s most important is relentlessly seeking them.
Stock Media provided by MJTierney / Pond5

Posted in Human Robots
