Tag Archives: back

#430830 Biocomputers Made From Cells Can Now ...

When it comes to biomolecules, RNA doesn’t get a lot of love.
Maybe you haven’t even heard of this silent workhorse. RNA is the cell’s de facto translator: like a game of telephone, RNA carries DNA’s genetic code to cellular factories called ribosomes. There, the cell makes proteins based on RNA’s message.
But RNA isn’t just a middleman. It controls which proteins are formed. Because proteins whiz around the cell completing all sorts of important processes, you could say that RNA is the gatekeeper: no RNA message, no proteins, no life.
In a new study published in Nature, RNA finally took center stage. By adding bits of genetic material to E. coli bacteria, a team of biohackers at the Wyss Institute hijacked the organism’s RNA messengers so that they only spring into action following certain inputs.
The result? A bacterial biocomputer capable of performing 12-input logic operations—AND, OR, and NOT. Rather than outputting 0s and 1s, these biocircuits produce results based on the presence or absence of proteins and other molecules.
“It’s the greatest number of inputs in a circuit that a cell has been able to process,” says study author Dr. Alexander Green at Arizona State University. “To be able to analyze those signals and make a decision is the big advance here.”
When given a specific set of inputs, the bacteria spit out a protein that made them glow neon green under blue or ultraviolet light.
But synthetic biology promises far more than just a party trick—by tinkering with a cell’s RNA repertoire, scientists may one day coax them to photosynthesize, produce expensive drugs on the fly, or diagnose and hunt down rogue tumor cells.
Illustration of an RNA-based ‘ribocomputing’ device that makes logic-based decisions in living cells. The long gate RNA (blue) detects the binding of an input RNA (red). The ribosome (purple/mauve) reads the gate RNA to produce an output protein. Image Credit: Alexander Green / Arizona State University
The software of life
This isn’t the first time that scientists have hijacked life’s algorithms to reprogram cells into nanocomputing systems. Previous work has already given the world yeast cells that can make anti-malaria drugs from sugar and mammalian cells that can perform Boolean logic.
Yet circuits with multiple inputs and outputs remain hard to program. The reason is this: synthetic biologists have traditionally focused on snipping, fusing, or otherwise arranging a cell’s DNA to produce the outcomes they want.
But DNA is two steps removed from proteins, and tinkering with life’s code often leads to unexpected consequences. For one, the cell may not even accept and produce the extra bits of DNA code. For another, the added code, when transformed into proteins, may not act accordingly in the crowded and ever-changing environment of the cell.
What’s more, tinkering with one gene is often not enough to program an entirely new circuit. Scientists often need to amp up or shut down the activity of multiple genes, or multiple biological “modules” each made up of tens or hundreds of genes.
It’s like trying to fit new Lego pieces in a specific order into a room full of Lego constructions. Each new piece has the potential to wander off track and click onto something it’s not supposed to touch.
Getting every moving component to work in sync—as you might have guessed—is a giant headache.
The RNA way
With “ribocomputing,” Green and colleagues set off to tackle a main problem in synthetic biology: predictability.
Named after the “R (ribo)” in “RNA,” the method grew out of an idea that first struck Green back in 2012.
“The synthetic biological circuits to date have relied heavily on protein-based regulators that are difficult to scale up,” Green wrote at the time. We only have a limited handful of “designable parts” that work well, and these circuits require significant resources to encode and operate, he explained.
RNA, in comparison, is a lot more predictable. Like its more famous sibling DNA, RNA is composed of units that come in four different flavors: A, G, C, and U. Although RNA is only single-stranded, rather than the double helix for which DNA is known, it binds complementary sequences in a very predictable manner: Gs always match up with Cs, and As always with Us.
Because of this predictability, it’s possible to design RNA components that bind together perfectly. In other words, it reduces the chance that added RNA bits might go rogue in an unsuspecting cell.
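This base-pairing rule is simple enough to capture in a few lines of code. As a rough sketch (the sequences below are made up for illustration), here is how perfectly predictable binding partners can be computed from the G-C and A-U pairing rules alone:

```python
# Each RNA base pairs with exactly one partner: G with C, A with U.
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the sequence that binds this RNA strand end-to-end."""
    return "".join(PAIRS[base] for base in reversed(rna))

def binds(strand_a: str, strand_b: str) -> bool:
    """True if the two strands are perfect binding partners."""
    return strand_b == reverse_complement(strand_a)

trigger = "AUGGC"                    # hypothetical trigger sequence
print(reverse_complement(trigger))   # GCCAU
print(binds(trigger, "GCCAU"))       # True
print(binds(trigger, "AUGGC"))       # False: same strand doesn't pair
```

Because the pairing table is deterministic, a designed RNA component has exactly one intended partner, which is what keeps added RNA bits from going rogue.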
Normally, once RNA is produced it immediately rushes to the ribosome—the cell’s protein-building factory. Think of it as a constantly “on” system.
However, Green and his team found a clever mechanism to slow them down. Dubbed the “toehold switch,” it works like this: the artificial RNA component is first incorporated into a chain of A, G, C, and U folded into a paperclip-like structure.
This blocks the RNA from accessing the ribosome. Because one RNA strand generally maps to one protein, the switch prevents that protein from ever getting made.
In this way, the switch is set to “off” by default—a “NOT” gate, in Boolean logic.
To activate the switch, the cell needs another component: a “trigger RNA,” which binds to the RNA toehold switch. This flips it on: the RNA grabs onto the ribosome, and bam—proteins.
BioLogic gates
String a few RNA switches together, with the activity of each one relying on the one before, and it forms an “AND” gate. Alternatively, if the activity of each switch is independent, that’s an “OR” gate.
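The mapping from switches to gates can be sketched in code. This is an illustrative model, not the paper's actual design rules: each gate is a Boolean function over the presence or absence of trigger RNAs, and the output stands in for whether the cell makes the protein.

```python
# Toy model of ribocomputing gates: inputs are booleans meaning
# "is this trigger RNA present in the cell?"

def and_gate(triggers):
    # Chained switches: every switch must be flipped, in sequence.
    return all(triggers)

def or_gate(triggers):
    # Independent switches: any one trigger produces the protein.
    return any(triggers)

def not_gate(trigger):
    # Default-off logic inverted: output only without the input RNA.
    return not trigger

# A hypothetical 3-input circuit: (A AND B) OR (NOT C) -> make protein?
def circuit(a, b, c):
    return or_gate([and_gate([a, b]), not_gate(c)])

print(circuit(True, True, False))   # True  -> cell glows
print(circuit(False, False, True))  # False -> no fluorescence
```

The paper's 12-input circuit works the same way in principle, just with more gates strung together.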
“Basically, the toehold switches performed so well that we wanted to find a way to best exploit them for cellular applications,” says Green. They’re “kind of the equivalent of your first transistors,” he adds.
Once the team optimized the designs for different logic gates, they carefully condensed the switches into “gate RNA” molecules. These gate RNAs contain both codes for proteins and the logic operations needed to kickstart the process—a molecular logic circuit, so to speak.
If you’ve ever played around with an Arduino-controlled electrical circuit, you probably know the easiest way to test its function is with a light bulb.
That’s what the team did here, though with a biological bulb: green fluorescent protein, a protein not normally present in bacteria that—when produced—makes the microbes glow neon green.
In a series of experiments, Green and his team genetically inserted gate RNAs into bacteria. Then, depending on the type of logical function, they added different combinations of trigger RNAs—the inputs.
When the input RNA matched up with its corresponding gate RNA, it flipped on the switch, causing the cell to light up.

Their most complex circuit contained five AND gates, five OR gates, and two NOTs—a 12-input ribocomputer that functioned exactly as designed.
That’s quite the achievement. “Everything is interacting with everything else and there are a million ways those interactions could flip the switch on accident,” says RNA researcher Dr. Julius Lucks at Northwestern University.
The specificity is thanks to RNA, the authors explain. Because RNAs bind to one another so predictably, we can now design massive libraries of gate and trigger units to mix and match into all types of nano-biocomputers.
RNA BioNanobots
Although the technology doesn’t have any immediate applications, the team has high hopes.
For the first time, it’s now possible to massively scale up the process of programming new circuits into living cells. We’ve expanded the library of available biocomponents that can be used to reprogram life’s basic code, the authors say.
What’s more, when freeze-dried onto a piece of tissue paper, RNA keeps very well. We could potentially print RNA toehold switches onto paper that respond to viruses or to tumor cells, the authors say, essentially transforming the technology into highly accurate diagnostic platforms.
But Green’s hopes are even wilder for his RNA-based circuits.
“Because we’re using RNA, a universal molecule of life, we know these interactions can also work in other cells, so our method provides a general strategy that could be ported to other organisms,” he says.
Ultimately, the hope is to program neural network-like capabilities into the body’s other cells.
Imagine cells endowed with circuits capable of performing the kinds of computation the brain does, the authors say.
Perhaps one day, synthetic biology will transform our own cells into fully programmable entities, turning us all into biological cyborgs from the inside. How wild would that be?
Image Credit: Wyss Institute at Harvard University

Posted in Human Robots

#430814 The Age of Cyborgs Has Arrived

How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there.
In a presentation titled “Biohacking and the Connected Body” at Singularity University Global Summit, Hannes Sjoblad informed the audience that we’re already living in the age of cyborgs. Sjoblad is co-founder of the Sweden-based biohacker network Bionyfiken, a chartered non-profit that unites DIY-biologists, hackers, makers, body modification artists and health and performance devotees to explore human-machine integration.
Sjoblad said the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health. Sjoblad defined biohacking as applying hacker ethic to biological systems. Some biohackers experiment with their biology with the goal of taking the human body’s experience beyond what nature intended.
Smart insulin monitoring systems, pacemakers, bionic eyes, and cochlear implants are all examples of biohacking, according to Sjoblad. He told the audience, “We live in a time where, thanks to technology, we can make the deaf hear, the blind see, and the lame walk.” He is convinced that while biohacking could conceivably end up having Brave New World-like dystopian consequences, it can also be leveraged to improve and enhance our quality of life in multiple ways.
The field where biohacking can make the most positive impact is health. In addition to pacemakers and insulin monitors, several new technologies are being developed with the goal of improving our health and simplifying access to information about our bodies.
Ingestibles are a type of smart pill that use wireless technology to monitor internal reactions to medications, helping doctors determine optimum dosage levels and tailor treatments to different people. Your body doesn’t absorb or process medication exactly as your neighbor’s does, so shouldn’t you each have a treatment that works best with your unique system? Colonoscopies and endoscopies could one day be replaced by miniature pill-shaped video cameras that would collect and transmit images as they travel through the digestive tract.
Singularity University Global Summit is the culmination of the Exponential Conference Series and the definitive place to witness converging exponential technologies and understand how they’ll impact the world.
Security is another area where biohacking could be beneficial. One example Sjoblad gave was personalization of weapons: an invader in your house couldn’t fire your gun because it will have been matched to your fingerprint or synced with your body so that it only responds to you.
Biohacking can also simplify everyday tasks. In an impressive example of walking the walk rather than just talking the talk, Sjoblad had an NFC chip implanted in his hand. The chip contains data from everything he used to have to carry around in his pockets: credit and bank card information, key cards to enter his office building and gym, business cards, and frequent shopper loyalty cards. When he’s in line for a morning coffee or rushing to get to the office on time, he doesn’t have to root around in his pockets or bag to find the right card or key; he just waves his hand in front of a sensor and he’s good to go.
Evolved from radio frequency identification (RFID)—an older, widely deployed technology—NFC chips are activated by a nearby reader chip, and small amounts of data can be transferred back and forth. No internet connection is necessary. Sjoblad sees his NFC implant as a personal key to the Internet of Things, a simple way for him to talk to the smart, connected devices around him.
Sjoblad isn’t the only person who feels a need for connection.

When British science writer Frank Swain realized he was going to go deaf, he decided to hack his hearing to be able to hear Wi-Fi. Swain developed software that tunes into wireless communication fields and uses an inbuilt Wi-Fi sensor to pick up router name, encryption modes and distance from the device. This data is translated into an audio stream where distant signals click or pop, and strong signals sound their network ID in a looped melody. Swain hears it all through an upgraded hearing aid.
Global datastreams can also become sensory experiences. Spanish artist Moon Ribas developed and implanted a chip in her elbow that is connected to the global monitoring system for seismographic sensors; each time there’s an earthquake, she feels it through vibrations in her arm.
You can feel connected to our planet, too: North Sense makes a “standalone artificial sensory organ” that connects to your body and vibrates whenever you’re facing north. It’s a built-in compass; you’ll never get lost again.
Biohacking applications are likely to proliferate in the coming years, some of them more useful than others. But there are serious ethical questions that can’t be ignored during development and use of this technology. To what extent is it wise to tamper with nature, and who gets to decide?
Most of us are probably ok with waiting in line an extra 10 minutes or occasionally having to pull up a maps app on our phone if it means we don’t need to implant computer chips into our forearms. If it’s frightening to think of criminals stealing our wallets, imagine them cutting a chunk of our skin out to have instant access to and control over our personal data. The physical invasiveness and potential for something to go wrong seems to far outweigh the benefits the average person could derive from this technology.
But that may not always be the case. It’s worth noting the miniaturization of technology continues at a quick rate, and the smaller things get, the less invasive (and hopefully more useful) they’ll be. Even today, there are people already sensibly benefiting from biohacking. If you look closely enough, you’ll spot at least a couple cyborgs on your commute tomorrow morning.
Image Credit: Movement Control Laboratory/University of Washington – Deep Dream Generator

Posted in Human Robots

#430668 Why Every Leader Needs to Be Obsessed ...

This article is part of a series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World,” broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today’s post, part five in the series, takes a more detailed look at leaders as technologists. Be sure to check out part two of the series, “How Leaders Dream Boldly to Bring New Futures to Life,” part three of the series, “How All Leaders Can Make the World a Better Place,” and part four of the series, “How Leaders Can Make Innovation Everyone’s Day Job”.
In the 1990s, Tower Records was the place to get new music. Successful and popular, the California chain spread far and wide, and in 1998, they took on $110 million in debt to fund aggressive further expansion. This wasn’t, as it turns out, the best of timing.
The first portable digital music player went on sale the same year. The following year brought Napster, a file sharing service allowing users to freely share music online. By 2000, Napster hosted 20 million users swapping songs. Then in 2001, Apple’s iPod and iTunes arrived, and when the iTunes Music Store opened in 2003, Apple sold over a million songs the first week.
As music was digitized, hard copies began to go out of style, and sales and revenue declined.
Tower first filed for bankruptcy in 2004 and again (for the last time) in 2006. The internet wasn’t the only reason for Tower’s demise. Mismanagement and price competition from electronics retailers like Best Buy also played a part. Still, today, the vast majority of music is purchased or streamed entirely online, and record stores are for the most part a niche market.
The writing was on the wall, but those impacted most had trouble reading it.
Why is it difficult for leaders to see technological change coming and right the ship before it’s too late? Why did Tower go all out on expansion just as the next big thing took the stage?
This is one story of many. Digitization has moved beyond music and entertainment, and now many big retailers operating physical stores are struggling to stay relevant. Meanwhile, the pace of change is accelerating, and new potentially disruptive technologies are on the horizon.
More than ever, leaders need to develop a strong understanding of and perspective on technology. They need to survey new innovations, forecast their pace, gauge the implications, and adopt new tools and strategy to change course as an industry shifts, not after it’s shifted.
Simply, leaders need to adopt the mindset of a technologist. Here’s what that means.
Survey the Landscape
Nurturing curiosity is the first step to understanding technological change. To know how technology might disrupt your industry, you have to know what’s in the pipeline and identify which new inventions are directly or indirectly related to your industry.
Becoming more technologically minded takes discipline and focus as well as unstructured time to explore the non-obvious connections between what is right in front of us and what might be. It requires a commitment to ongoing learning and discovery.
Read outside your industry and comfort zone, not just Fast Company and Wired, but Science and Nature to expand your horizons. Identify experts with the ability to demystify specific technology areas—many have a solid following on Twitter or a frequently cited blog.
But it isn’t all about reading. Consider going where the change is happening too.
Visit one of the technology hubs around the world or a local university research lab in your own back yard. Or bring the innovation to you by building an internal exploration lab stocked with the latest technologies, creating a technology advisory board, hosting an internal innovation challenge, or a local pitch night where aspiring entrepreneurs can share their newest ideas.
You might even ask the crowd by inviting anyone to suggest what innovation is most likely to disrupt your product, service, or sector. And don’t hesitate to engage younger folks—the digital natives all around you—by asking questions about what technology they are using or excited about. Consider going on a field trip with them to see how they use technology in different aspects of their lives. Invite the seasoned executives on your team to explore long-term “reverse mentoring” with someone who can expose them to the latest technology and teach them to use it.
Whatever your strategy, the goal should be to develop a healthy obsession with technology.
By exploring fresh perspectives outside traditional work environments and then giving ourselves permission to see how these new ideas might influence existing products and strategies, we have a chance to be ready for what we’re not ready for—but is likely right around the corner.
Estimate the Pace of Progress
The next step is forecasting when a technology will mature.
One of the most challenging aspects of the changes underway is that in many technology arenas, we are quickly moving from a linear to an exponential pace. It is hard enough to envision what is needed in an industry buffeted by progress that is changing 10% per year, but what happens when technological progress doubles annually? That is another world altogether.
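The gap between those two growth rates is easy to underestimate, so it's worth running the numbers. A quick sketch comparing 10%-per-year progress with an annual doubling over a single decade:

```python
# Compare steady 10%-per-year progress with annual doubling over 10 years.
steady = 1.0     # capability index under 10%/year growth
doubling = 1.0   # capability index under yearly doubling
for year in range(1, 11):
    steady *= 1.10
    doubling *= 2

print(round(steady, 2))  # ~2.59x after a decade
print(int(doubling))     # 1024x after the same decade
```

A linear-minded planner expecting roughly a 2.6x shift instead faces a thousand-fold one, which is why exponential change blindsides incumbents.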
This kind of change can be deceiving. For example, machine learning and big data are finally reaching critical momentum after more than twenty years of being right around the corner. The advances in applications like speech and image recognition that we’ve seen in recent years dwarf what came before and many believe we’ve just begun to understand the implications.
Even as we begin to embrace disruptive change in one technology arena, far more exciting possibilities unfold when we explore how multiple arenas are converging.
Artificial intelligence and big data are great examples. As Hod Lipson, professor of Mechanical Engineering and Data Science at Columbia University and co-author of Driverless: Intelligent Cars and the Road Ahead, says, “AI is the engine, but big data is the fuel. They need each other.”
This convergence paired with an accelerating pace makes for surprising applications.
To keep his research lab agile and open to new uses of advancing technologies, Lipson routinely asks his PhD students, “How might AI disrupt this industry?” to prompt development of applications across a wide spectrum of sectors from healthcare to agriculture to food delivery.
Explore the Consequences
New technology inevitably gives rise to new ethical, social, and moral questions that we have never faced before. Rather than bury our heads in the sand, as leaders we must explore the full range of potential consequences of whatever is underway or still to come.
We can add AI to kids’ toys, like Mattel’s Hello Barbie, or use cutting-edge gene-editing technology like CRISPR-Cas9 to select for preferred gene sequences beyond basic health. But just because we can do something doesn’t mean we should.
Take time to listen to skeptics and understand the risks posed by technology.
Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates, and other well-known names in science and technology have expressed concern in the media and via open letters about the risks posed by AI. Microsoft’s CEO, Satya Nadella, has even argued tech companies shouldn’t build artificial intelligence systems that will replace people rather than making them more productive.
Exploring unintended consequences goes beyond having a Plan B for when something goes wrong. It requires broadening our view of what we’re responsible for. Beyond customers, shareholders, and the bottom line, we should understand how our decisions may impact employees, communities, the environment, our broader industry, and even our competitors.
The minor inconvenience of mitigating these risks now is far better than the alternative. Create forums to listen to and value voices outside of the board room and C-Suite. Seek out naysayers, ethicists, community leaders, wise elders, and even neophytes—those who may not share our preconceived notions of right and wrong or our narrow view of our role in the larger world.
The question isn’t: If we build it, will they come? It’s now: If we can build it, should we?
Adopt New Technologies and Shift Course
The last step is hardest. Once you’ve identified a technology (or technologies) as a potential disruptor and understand the implications, you need to figure out how to evolve your organization to make the most of the opportunity. Simply recognizing disruption isn’t enough.
Take today’s struggling brick-and-mortar retail business. Online shopping isn’t new. Amazon isn’t a plucky startup. Both have been changing how we buy stuff for years. And yet many who still own and operate physical stores—perhaps most prominently, Sears—are now on the brink of bankruptcy.
There’s hope though. Netflix began as a DVD delivery service in the 90s, but quickly realized its core business didn’t have staying power. It would have been laughable to stream movies when Netflix was founded. Still, computers and bandwidth were advancing fast. In 2007, the company added streaming to its subscription. Even then it wasn’t a totally compelling product.
But Netflix clearly saw a streaming future would likely end their DVD business.
In recent years, faster connection speeds, a growing content library, and the company’s entrance into original programming have given Netflix streaming the upper hand over DVDs. Since 2011, DVD subscriptions have steadily declined. Yet the company itself is doing fine. Why? It anticipated the shift to streaming and acted on it.
Never Stop Looking for the Next Big Thing
Technology is and will increasingly be a driver of disruption, destabilizing entrenched businesses and entire industries while also creating new markets and value not yet imagined.
When faced with the rapidly accelerating pace of change, many companies still default to old models and established practices. Leading like a technologist requires vigilant understanding of potential sources of disruption—what might make your company’s offering obsolete? The answers may not always be perfectly clear. What’s most important is relentlessly seeking them.
Stock Media provided by MJTierney / Pond5

Posted in Human Robots

#430658 Why Every Leader Needs a Healthy ...

This article is part of a series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World,” broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today’s post, part five in the series, takes a more detailed look at leaders as technologists. Be sure to check out part two of the series, “How Leaders Dream Boldly to Bring New Futures to Life,” part three of the series, “How All Leaders Can Make the World a Better Place,” and part four of the series, “How Leaders Can Make Innovation Everyone’s Day Job”.
In the 1990s, Tower Records was the place to get new music. Successful and popular, the California chain spread far and wide, and in 1998, they took on $110 million in debt to fund aggressive further expansion. This wasn’t, as it turns out, the best of timing.
The first portable digital music player went on sale the same year. The following year brought Napster, a file sharing service allowing users to freely share music online. By 2000, Napster hosted 20 million users swapping songs. Then in 2001, Apple’s iPod and iTunes arrived, and when the iTunes Music Store opened in 2003, Apple sold over a million songs the first week.
As music was digitized, hard copies began to go out of style, and sales and revenue declined.
Tower first filed for bankruptcy in 2004 and again (for the last time) in 2006. The internet wasn’t the only reason for Tower’s demise. Mismanagement and price competition from electronics retailers like Best Buy also played a part. Still, today, the vast majority of music is purchased or streamed entirely online, and record stores are for the most part a niche market.
The writing was on the wall, but those impacted most had trouble reading it.
Why is it difficult for leaders to see technological change coming and right the ship before it’s too late? Why did Tower go all out on expansion just as the next big thing took the stage?
This is one story of many. Digitization has moved beyond music and entertainment, and now many big retailers operating physical stores are struggling to stay relevant. Meanwhile, the pace of change is accelerating, and new potentially disruptive technologies are on the horizon.
More than ever, leaders need to develop a strong understanding of and perspective on technology. They need to survey new innovations, forecast their pace, gauge the implications, and adopt new tools and strategy to change course as an industry shifts, not after it’s shifted.
Simply, leaders need to adopt the mindset of a technologist. Here’s what that means.
Survey the Landscape
Nurturing curiosity is the first step to understanding technological change. To know how technology might disrupt your industry, you have to know what’s in the pipeline and identify which new inventions are directly or indirectly related to your industry.
Becoming more technologically minded takes discipline and focus as well as unstructured time to explore the non-obvious connections between what is right in front of us and what might be. It requires a commitment to ongoing learning and discovery.
Read outside your industry and comfort zone, not just Fast Company and Wired, but Science and Nature to expand your horizons. Identify experts with the ability to demystify specific technology areas—many have a solid following on Twitter or a frequently cited blog.
But it isn’t all about reading. Consider going where the change is happening too.
Visit one of the technology hubs around the world or a local university research lab in your own back yard. Or bring the innovation to you by building an internal exploration lab stocked with the latest technologies, creating a technology advisory board, hosting an internal innovation challenge, or a local pitch night where aspiring entrepreneurs can share their newest ideas.
You might even ask the crowd by inviting anyone to suggest what innovation is most likely to disrupt your product, service, or sector. And don’t hesitate to engage younger folks—the digital natives all around you—by asking questions about what technology they are using or excited about. Consider going on a field trip with them to see how they use technology in different aspects of their lives. Invite the seasoned executives on your team to explore long-term “reverse mentoring” with someone who can expose them to the latest technology and teach them to use it.
Whatever your strategy, the goal should be to develop a healthy obsession with technology.
By exploring fresh perspectives outside traditional work environments and then giving ourselves permission to see how these new ideas might influence existing products and strategies, we have a chance to be ready for what we’re not ready for—but is likely right around the corner.
Estimate the Pace of Progress
The next step is forecasting when a technology will mature.
One of the most challenging aspects of the changes underway is that in many technology arenas, we are quickly moving from a linear to an exponential pace. It is hard enough to envision what is needed in an industry buffeted by progress that is changing 10% per year, but what happens when technological progress doubles annually? That is another world altogether.
This kind of change can be deceiving. For example, machine learning and big data are finally reaching critical momentum after more than twenty years of being right around the corner. The advances in applications like speech and image recognition that we’ve seen in recent years dwarf what came before and many believe we’ve just begun to understand the implications.
Even as we begin to embrace disruptive change in one technology arena, far more exciting possibilities unfold when we explore how multiple arenas are converging.
Artificial intelligence and big data are great examples. As Hod Lipson, professor of Mechanical Engineering and Data Science at Columbia University and co-author of Driverless: Intelligent Cars and the Road Ahead, says, “AI is the engine, but big data is the fuel. They need each other.”
This convergence paired with an accelerating pace makes for surprising applications.
To keep his research lab agile and open to new uses of advancing technologies, Lipson routinely asks his PhD students, “How might AI disrupt this industry?” to prompt development of applications across a wide spectrum of sectors from healthcare to agriculture to food delivery.
Explore the Consequences
New technology inevitably gives rise to new ethical, social, and moral questions that we have never faced before. Rather than bury our heads in the sand, as leaders we must explore the full range of potential consequences of whatever is underway or still to come.
We can add AI to kids' toys, like Mattel's Hello Barbie, or use cutting-edge gene-editing technology like CRISPR-Cas9 to select for preferred gene sequences beyond basic health. But just because we can do something doesn't mean we should.
Take time to listen to skeptics and understand the risks posed by technology.
Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates, and other well-known names in science and technology have expressed concern in the media and via open letters about the risks posed by AI. Microsoft’s CEO, Satya Nadella, has even argued tech companies shouldn’t build artificial intelligence systems that will replace people rather than making them more productive.
Exploring unintended consequences goes beyond having a Plan B for when something goes wrong. It requires broadening our view of what we’re responsible for. Beyond customers, shareholders, and the bottom line, we should understand how our decisions may impact employees, communities, the environment, our broader industry, and even our competitors.
The minor inconvenience of mitigating these risks now is far better than the alternative. Create forums to listen to and value voices outside of the board room and C-Suite. Seek out naysayers, ethicists, community leaders, wise elders, and even neophytes—those who may not share our preconceived notions of right and wrong or our narrow view of our role in the larger world.
The question isn’t: If we build it, will they come? It’s now: If we can build it, should we?
Adopt New Technologies and Shift Course
The last step is hardest. Once you’ve identified a technology (or technologies) as a potential disruptor and understand the implications, you need to figure out how to evolve your organization to make the most of the opportunity. Simply recognizing disruption isn’t enough.
Take today’s struggling brick-and-mortar retail business. Online shopping isn’t new. Amazon isn’t a plucky startup. Both have been changing how we buy stuff for years. And yet many who still own and operate physical stores—perhaps most prominently, Sears—are now on the brink of bankruptcy.
There's hope, though. Netflix began as a DVD delivery service in the 90s but quickly realized its core business didn't have staying power. Streaming movies would have been laughable when Netflix was founded. Still, computers and bandwidth were advancing fast. In 2007, the company added streaming to its subscription service. Even then, it wasn't a totally compelling product.
But Netflix saw clearly that a streaming future would likely end its DVD business.
In recent years, faster connection speeds, a growing content library, and the company’s entrance into original programming have given Netflix streaming the upper hand over DVDs. Since 2011, DVD subscriptions have steadily declined. Yet the company itself is doing fine. Why? It anticipated the shift to streaming and acted on it.
Never Stop Looking for the Next Big Thing
Technology is and will increasingly be a driver of disruption, destabilizing entrenched businesses and entire industries while also creating new markets and value not yet imagined.
When faced with the rapidly accelerating pace of change, many companies still default to old models and established practices. Leading like a technologist requires vigilant understanding of potential sources of disruption—what might make your company’s offering obsolete? The answers may not always be perfectly clear. What’s most important is relentlessly seeking them.
Stock Media provided by MJTierney / Pond5

Posted in Human Robots

#430579 What These Lifelike Androids Can Teach ...

For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, change the concept of being human if a human-made creation starts telling you what kind of boys 'she' likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.

ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another closely linked concept is the connection between body and sense of self. When his android avatar once gave a presentation in Austria, Ishiguro recalls feeling distinctly as though he were in Austria himself, even feeling the sensation of touch on his own body when people laid their hands on the android. If he was distracted, he felt almost 'sucked' back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both a philosophical and a practical nature. Then add the hypotheticals: why send a human onto the hostile surface of Mars if you could send a remote-controlled android capable of relaying everything it sees, hears, and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently roughly split into two. One is the chat-bot approach that companies like Amazon, Microsoft, Google, and recently Apple, employ using stationary objects like speakers. Androids like ERICA represent another approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to how Japanese as a language—and as a market—is somewhat limited, which has a direct impact on the viability and practicality of 'pure' voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots and have a different cultural and religious view of objects having a 'soul.' It may also mean, however, that Japanese companies and android scientists are a lap ahead of their western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories, ATR all rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by the ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.

Posted in Human Robots