
#435174 Revolt on the Horizon? How Young People ...

As digital technologies facilitate the growth of both new and incumbent organizations, we have started to see the darker sides of the digital economy emerge. In recent years, many unethical business practices have been exposed, including the capture and use of consumers’ data, anticompetitive activities, and covert social experiments.

But what do young people who grew up with the internet think about this development? Our research with 400 digital natives—19- to 24-year-olds—shows that this generation, dubbed “GenTech,” may be the one to turn the digital revolution on its head. Our findings point to a frustration and disillusionment with the way organizations have accumulated real-time information about consumers without their knowledge and often without their explicit consent.

Many from GenTech now understand that their online lives are of commercial value to an array of organizations that use this insight for the targeting and personalization of products, services, and experiences.

This era of accumulation and commercialization of user data through real-time monitoring has been coined “surveillance capitalism” and signifies a new economic system.

Artificial Intelligence
A central pillar of the modern digital economy is our interaction with artificial intelligence (AI) and machine learning algorithms. We found that 47 percent of GenTech do not want AI technology to monitor their lifestyle, purchases, and financial situation in order to recommend particular things for them to buy.

In fact, only 29 percent see this as a positive intervention. Instead, they wish to maintain a sense of autonomy in their decision making and have the opportunity to freely explore new products, services, and experiences.

As individuals living in the digital age, we constantly negotiate with technology to let go of or retain control. This pendulum-like effect reflects the ongoing battle between humans and technology.

My Life, My Data?
Our research also reveals that 54 percent of GenTech are very concerned about the access organizations have to their data, while only 19 percent are not worried. Despite the EU General Data Protection Regulation being introduced in May 2018, this remains a major concern, grounded in a belief that too much of their data is in the possession of a small group of global companies, including Google, Amazon, and Facebook. Some 70 percent felt this way.

In recent weeks, both Facebook and Google have vowed to make privacy a top priority in the way they interact with users. Both companies have faced public outcry for their lack of openness and transparency when it comes to how they collect and store user data. It wasn’t long ago that a hidden microphone was found in one of Google’s home alarm products.

Google now plans to offer auto-deletion of users’ location history data, browsing, and app activity as well as extend its “incognito mode” to Google Maps and search. This will enable users to turn off tracking.

At Facebook, CEO Mark Zuckerberg is keen to reposition the platform as a “privacy focused communications platform” built on principles such as private interactions, encryption, safety, interoperability (communications across Facebook-owned apps and platforms), and secure data storage. This will be a tough turnaround for the company that is fundamentally dependent on turning user data into opportunities for highly individualized advertising.

Privacy and transparency are critically important themes for organizations today, both for those that have “grown up” online as well as the incumbents. While GenTech want organizations to be more transparent and responsible, 64 percent also believe that they cannot do much to keep their data private. Being tracked and monitored online by organizations is seen as part and parcel of being a digital consumer.

Despite these views, there is a growing revolt simmering under the surface. GenTech want to take ownership of their own data. They see this as a valuable commodity, which they should be given the opportunity to trade with organizations. Some 50 percent would willingly share their data with companies if they got something in return, for example a financial incentive.

Rewiring the Power Shift
GenTech are looking to enter into a transactional relationship with organizations. This reflects a significant change in attitudes from perceiving the free access to digital platforms as the “product” in itself (in exchange for user data), to now wishing to use that data to trade for explicit benefits.

This has created an opportunity for companies that seek to empower consumers and give them back control of their data. Several companies now offer consumers the opportunity to sell the data they are comfortable sharing or take part in research that they get paid for. More and more companies are joining this space, including People.io, Killi, and Ocean Protocol.

Sir Tim Berners-Lee, the creator of the World Wide Web, has also been working on a way to shift power from organizations and institutions back to citizens and consumers. The platform, Solid, offers users the opportunity to be in charge of where they store their data and who can access it. It is a form of re-decentralization.

The Solid POD (Personal Online Data storage) is a secure place on a hosted server or the individual’s own server. Users can grant apps access to their POD; a person’s data stays in one place under their control, rather than being held by an app developer or on an organization’s server. We see this as potentially being a way to let people take back control from technology and other companies.
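To make this concrete, here is a minimal sketch in Python of the access pattern described above: data lives in a store the user owns, and an app can only read what the owner has explicitly granted. Every name here is a hypothetical illustration of the idea, not the real Solid API.

# Toy model of a Solid-style personal data store. Hypothetical names,
# not the actual Solid platform or its libraries.

class PersonalDataStore:
    """A user-controlled data store with per-app access grants."""

    def __init__(self, owner):
        self.owner = owner
        self._data = {}     # resource name -> value
        self._grants = {}   # app name -> set of readable resources

    def put(self, resource, value):
        """The owner writes data into their own store."""
        self._data[resource] = value

    def grant(self, app, resource):
        """The owner explicitly allows an app to read one resource."""
        self._grants.setdefault(app, set()).add(resource)

    def revoke(self, app, resource):
        """The owner can withdraw access again at any time."""
        self._grants.get(app, set()).discard(resource)

    def read(self, app, resource):
        """An app sees only what the owner has granted."""
        if resource not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no access to {resource}")
        return self._data[resource]

pod = PersonalDataStore(owner="alice")
pod.put("location-history", ["London", "Paris"])
pod.grant("travel-app", "location-history")
print(pod.read("travel-app", "location-history"))  # allowed
pod.revoke("travel-app", "location-history")       # access withdrawn again

The key design point is that the grant lives with the data, not with the app: revoking access is a single local operation by the owner, rather than a request to every company that holds a copy.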

GenTech have woken up to a reality where a life lived “plugged in” has significant consequences for their individual privacy and are starting to push back, questioning those organizations that have shown limited concern and continue to exercise exploitative practices.

It’s no wonder that we see these signs of revolt. GenTech is the generation with the most to lose. They face a life ahead intertwined with digital technology as part of their personal and private lives. With continued pressure on organizations to become more transparent, the time is now for young people to make their move.

Dr Mike Cooray, Professor of Practice, Hult International Business School and Dr Rikke Duus, Research Associate and Senior Teaching Fellow, UCL

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ser Borakovskyy / Shutterstock.com


#435172 DARPA’s New Project Is Investing ...

When Elon Musk and DARPA both hop aboard the cyborg hype train, you know brain-machine interfaces (BMIs) are about to achieve the impossible.

BMIs, once the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches,” and “memory Band-Aids” are still purely experimental medical treatments for those with nervous system impairments.

With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.

Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically enhanced neurons, and infrared beams for its next-gen BMIs.

It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, similar to the Black Mirror episode Men Against Fire, it could cloud the perception of soldiers, distancing them from the emotional guilt of warfare.

When trickled down to civilian use, these new technologies are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.

Here’s what you need to know.

Radical Upgrades
The four-year N3 program focuses on two main aspects: noninvasive and “minutely” invasive neural interfaces to both read and write into the brain.

Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.

In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.

“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.

The only technologies that will be considered must have a viable path toward eventual use in healthy human subjects.

“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”

The Tools
Right off the bat, the usual tools of the BMI trade, including microelectrodes, MRI, and transcranial magnetic stimulation (TMS), are off the table. These popular technologies rely on surgery or heavy machinery, or require the subject to sit very still—conditions unlikely in the real world.

The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.

Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate through the skull. This lets a skull cap, embedded with light emitters and detectors, pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.

The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate these magnetically tagged neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.

Other projects in N3 are just as ambitious.

The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.

Similarly, a group at Johns Hopkins is also measuring light path changes in the brain to correlate them with regional brain activity to “read” wetware commands.

The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.

The nonprofit Battelle team gets even fancier with its “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.

The nanotransducers can be delivered into the brain through a nasal spray or other non-invasive methods, and magnetically guided towards targeted brain regions. When no longer needed, they can once again be steered out of the brain and into the bloodstream, where the body can excrete them without harm.

Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.

DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy high resolution for devices sitting outside the brain, both in space and time. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally intensive ways to fish out relevant brain signals from background noise, and triangulate them to the appropriate brain region to decipher intent.
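As a rough illustration of the signal-extraction half of that challenge, the Python sketch below recovers a weak oscillatory "neural" rhythm buried in much stronger noise using a crude FFT bandpass filter. All numbers are invented for illustration and bear no relation to any N3 team's actual design.

# Toy example: pulling a weak oscillation out of heavy background noise.
import numpy as np

fs = 1000                               # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)           # one second of samples

signal = 0.1 * np.sin(2 * np.pi * 20 * t)          # weak 20 Hz "neural" rhythm
noise = np.random.normal(scale=1.0, size=t.shape)  # much stronger background
recording = signal + noise

# Crude FFT-based bandpass: keep only the 15-25 Hz components.
spectrum = np.fft.rfft(recording)
freqs = np.fft.rfftfreq(len(recording), d=1 / fs)
spectrum[(freqs < 15) | (freqs > 25)] = 0
recovered = np.fft.irfft(spectrum, n=len(recording))

# The filtered trace correlates far better with the hidden rhythm
# than the raw recording does.
print("raw correlation:     ", np.corrcoef(recording, signal)[0, 1])
print("filtered correlation:", np.corrcoef(recovered, signal)[0, 1])

Real decoding is vastly harder: the signal bands overlap, the noise is structured, and everything must run within the 50-millisecond budget. But the basic move of projecting a noisy recording onto a band of interest is the same.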

In the long run, four years and an average $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of the potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to deny that wireless, non-invasive BMIs would also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.

Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.

“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.

The question is, now that we know what’s in store, how should the rest of us prepare?

Image Credit: With permission from DARPA N3 project.


#435161 Less Like Us: An Alternate Theory of ...

The question of whether an artificial general intelligence will be developed in the future—and, if so, when it might arrive—is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology.

Some futurists point to Moore’s Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a “general intelligence.”

But evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature, either by guided evolution of simple algorithms or wholesale emulation of the human brain.

Both of these ideas are far easier to conceive of than they are to achieve. Emulating even the 302 neurons of the nematode worm’s nervous system is still an extremely difficult engineering challenge, let alone the 86 billion neurons of a human brain.

Leaving aside these caveats, though, many people are worried about artificial general intelligence. Nick Bostrom’s influential book on superintelligence imagines it will be an agent—an intelligence with a specific goal. Once such an agent reaches a human level of intelligence, it will improve itself—increasingly rapidly as it gets smarter—in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.

This “intelligence explosion” could catch humans off guard. If the initial goal is poorly specified or malicious, or if proper safety features are not in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control our own creation. Bostrom gives examples of how a seemingly innocuous goal, such as “Make everyone happy,” could be misinterpreted; perhaps the AI decides to drug humanity into a happy stupor, or convert most of the world into computing infrastructure to pursue its goal.

Drexler and Comprehensive AI Services
These are increasingly familiar concerns for an AI that behaves like an agent, seeking to achieve its goal. There are dissenters to this picture of how artificial general intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularized it.

With respect to AI, Drexler believes our view of an artificial intelligence as a single “agent” that acts to maximize a specific goal is too narrow, almost anthropomorphizing AI. Instead, he proposes a more realistic route towards general intelligence: “Comprehensive AI Services” (CAIS).

What does this mean? Drexler’s argument is that we should look more closely at how machine learning and AI algorithms are actually being developed in the real world. The optimization effort is going into producing algorithms that can provide services and perform tasks like translation, music recommendations, classification, medical diagnoses, and so forth.

AI-driven improvements in technology, argues Drexler, will lead to a proliferation of different algorithms for technology and software improvement, which can automate increasingly complicated tasks. Recursive improvement in this regime is already occurring—take the newer versions of AlphaGo, which can learn to improve themselves by playing against previous versions.

Many Smart Arms, No Smart Brain
Instead of relying on some unforeseen breakthrough, the CAIS model of AI just assumes that specialized, narrow AI will continue to improve at performing each of its tasks, and the range of tasks that machine learning algorithms will be able to perform will become wider. Ultimately, once a sufficient number of tasks have been automated, the services that an AI will provide will be so comprehensive that they will resemble a general intelligence.

One could then imagine a “general” intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialized service algorithm that can perform that task. Rather than acting like a single brain that strives to achieve a particular goal, the central AI would be more like a search engine, looking through the tasks it can perform to find the closest match and calling upon a series of subroutines to achieve the goal.
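A minimal sketch of that dispatcher picture, in Python with made-up services: the "general" layer does nothing clever itself; it simply routes each request to whichever registered narrow service matches best.

# Toy sketch of the CAIS "search engine" idea. The services and their
# keyword tags are made up for illustration.

def translate(text):
    return f"[translation of] {text}"

def recommend_music(profile):
    return f"[playlist for] {profile}"

def diagnose(symptoms):
    return f"[diagnosis for] {symptoms}"

# Registry of specialized services, keyed by descriptive keywords.
SERVICES = {
    ("translate", "language", "text"): translate,
    ("music", "recommend", "playlist"): recommend_music,
    ("medical", "diagnose", "symptoms"): diagnose,
}

def dispatch(request, payload):
    """Route a request to the narrow service whose keywords match best."""
    words = set(request.lower().split())
    best = max(SERVICES, key=lambda tags: len(words & set(tags)))
    if not words & set(best):
        raise LookupError("no registered service matches this request")
    return SERVICES[best](payload)

print(dispatch("please translate this text", "hello world"))
print(dispatch("recommend some music", "fan of late-period jazz"))

Drexler's safety argument falls straight out of this structure: removing a capability is just deleting an entry from the registry, and there is no unified agent with its own goals left over to route around the restriction.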

For Drexler, this is inherently a safety feature. Rather than Bostrom’s single, impenetrable, conscious and superintelligent brain (which we must try to psychoanalyze in advance without really knowing what it will look like), we have a network of capabilities. If you don’t want your system to perform certain tasks, you can simply cut it off from access to those services. There is no superintelligent consciousness to outwit or “trap”: more like an extremely high-level programming language that can respond to complicated commands by calling upon one of the myriad specialized algorithms that have been developed by different groups.

This skirts the complex problem of consciousness and all of the sticky moral quandaries that arise in making minds that might be like ours. After all, if you could simulate a human mind, you could simulate it experiencing unimaginable pain. Black Mirror-esque dystopias, where emulated minds have no rights and are regularly “erased” or forced to labor at dull and repetitive tasks, heave into view.

Drexler argues that, in this world, there is no need to ever build a conscious algorithm. Yet it seems likely that, at some point, humans will attempt to simulate our own brains, if only in the vain attempt to pursue immortality. This model cannot hold forever. Yet its proponents argue that any world in which we could develop general AI would probably also have developed superintelligent capabilities in a huge range of different tasks, such as computer programming, natural language understanding, and so on. In other words, CAIS arrives first.

The Future In Our Hands?
Drexler argues that his model already incorporates many of the ideas from general AI development. In the marketplace, algorithms compete all the time to perform these services: they undergo the same evolutionary pressures that lead to “higher intelligence,” but the behavior that’s considered superior is chosen by humans, and the nature of the “general intelligence” is far more shaped by human decision-making and human programmers. Development in AI services could still be rapid and disruptive.

But in Drexler’s case, the research and development capacity comes from humans and organizations driven by the desire to improve algorithms that are performing individualized and useful tasks, rather than from a conscious AI recursively reprogramming and improving itself.

In other words, this vision does not absolve us of the responsibility of making our AI safe; if anything, it gives us a greater degree of responsibility. As more and more complex “services” are automated, performing what used to be human jobs at superhuman speed, the economic disruption will be severe.

Equally, as machine learning is trusted to carry out more complex decisions, avoiding algorithmic bias becomes crucial. Shaping each of these individual decision-makers—and trying to predict the complex ways they might interact with each other—is no less daunting a task than specifying the goal for a hypothetical, superintelligent, God-like AI. Arguably, the consequences of the “misalignment” of these services algorithms are already multiplying around us.

The CAIS model bridges the gap between real-world AI and machine learning developments, with their real-world safety considerations, and the speculative world of superintelligent agents and the safety considerations involved in controlling their behavior. We should keep our minds open as to what form AI and machine learning will take, and how it will influence our societies—and we must take care to ensure that the systems we create don’t end up forcing us all to live in a world of unintended consequences.

Image Credit: MF Production/Shutterstock.com


#435159 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind Can Now Beat Us at Multiplayer Games Too
Cade Metz | The New York Times
“DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.”

ROBOTICS
Tiny Robots Carry Stem Cells Through a Mouse
Emily Waltz | IEEE Spectrum
“Engineers have built microrobots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper, published [May 29] in Science Robotics, researchers describe propelling a magnetically-controlled, stem-cell-carrying bot through a live mouse.” [Video shows microbots navigating a microfluidic chip. MRI could not be used to image the mouse as the bots navigate magnetically.]

COMPUTING
How a Quantum Computer Could Break 2048-Bit RSA Encryption in 8 Hours
Emerging Technology From the arXiv | MIT Technology Review
“[Two researchers] have found a more efficient way for quantum computers to perform the code-breaking calculations, reducing the resources they require by orders of magnitude. Consequently, these machines are significantly closer to reality than anyone suspected.” [The arXiv is a pre-print server for research that has not yet been peer reviewed.]

AUTOMATION
Lyft Has Completed 55,000 Self Driving Rides in Las Vegas
Christine Fisher | Engadget
“One year ago, Lyft launched its self-driving ride service in Las Vegas. Today, the company announced its 30-vehicle fleet has made 55,000 trips. That makes it the largest commercial program of its kind in the US.”

TRANSPORTATION
Flying Car Startup Alaka’i Bets Hydrogen Can Outdo Batteries
Eric Adams | Wired
“Alaka’i says the final product will be able to fly for up to four hours and cover 400 miles on a single load of fuel, which can be replenished in 10 minutes at a hydrogen fueling station. It has built a functional, full-scale prototype that will make its first flight ‘imminently,’ a spokesperson says.”

ETHICS
The World Economic Forum Wants to Develop Global Rules for AI
Will Knight | MIT Technology Review
“This week, AI experts, politicians, and CEOs will gather to ask an important question: Can the United States, China, or anyone else agree on how artificial intelligence should be used and controlled?”

SPACE
Building a Rocket in a Garage to Take on SpaceX and Blue Origin
Jackson Ryan | CNET
“While billionaire entrepreneurs like SpaceX’s Elon Musk and Blue Origin’s Jeff Bezos push the boundaries of human spaceflight and exploration, a legion of smaller private startups around the world have been developing their own rocket technology to launch lighter payloads into orbit.”

Image Credit: Kevin Crosby / Unsplash


#435145 How Big Companies Can Simultaneously Run ...

We live in the age of entrepreneurs. New startups seem to appear out of nowhere and challenge not only established companies, but entire industries. Where startup unicorns were once mythical creatures, they now seem abundant, increasing not only in number but also in the speed with which they reach the minimum one-billion-dollar valuation that confers the status.

But no matter how well things go for innovative startups, how many new success stories we hear, and how much space they take up in the media, the story that they are the best or only source of innovation isn’t entirely accurate.

Established organizations, or legacy organizations, can be incredibly innovative too. And while innovation is much more difficult in established organizations than in startups, because they have much more complex systems, nobody is more likely to succeed in their innovation efforts than established organizations.

Unlike startups, established organizations have all the resources. They have money, customers, data, suppliers, partners, and infrastructure, which put them in a far better position to transform new ideas into concrete, value-creating, successful offerings than startups.

However, for established organizations, becoming an innovation champion in these times of rapid change requires new rules of engagement.

Many organizations make the mistake of engaging in innovation as if it were a homogeneous thing that should be approached in the same way every time, regardless of its purpose. In my book, Transforming Legacy Organizations, I argue that innovation in established organizations must actually be divided into three different tracks: optimizing, augmenting, and mutating innovation.

All three are important, and to complicate matters further, organizations must execute all three types of innovation at the same time.

Optimizing Innovation
The first track is optimizing innovation. This type of innovation is the majority of what legacy organizations already do today. It is, metaphorically speaking, the extra blade on the razor. A razor manufacturer might launch a new razor that has not just three, but four blades, to ensure an even better, closer, and more comfortable shave. Then one or two years later, they say they are now launching a razor that has not only four, but five blades for an even better, closer, and more comfortable shave. That is optimizing innovation.

Adding extra blades on the razor is where the established player reigns.

No startup with so much as a modicum of sense would even try to beat the established company in this type of innovation. And this continuous optimization, both on the operational and customer-facing sides, is important. In the short term, it pays the rent. But it’s far from enough. There are limits to how many blades a razor needs, and optimizing innovation only improves upon the past.

Augmenting Innovation
Established players must also go beyond optimization and prepare for the future through augmenting innovation.

The digital transformation projects that many organizations are initiating can be characterized as augmenting innovation. In the first instance, it is about upgrading core offerings and processes from analog to digital. Or, if you’re born digital, you’ve probably had to augment the core to become mobile-first. Perhaps you have even entered the next augmentation phase, which involves implementing artificial intelligence. Becoming AI-first, like the Amazons, Microsofts, Baidus, and Googles of the world, requires great technological advancements. And it’s difficult. But technology may, in fact, be a minor part of the task.

The biggest challenge for augmenting innovation is probably culture.

Only legacy organizations that manage to transform their cultures from status quo cultures—cultures with a preference for things as they are—into cultures full of incremental innovators can thrive in constant change.

To create a strong innovation culture, an organization needs to thoroughly understand its immune systems. These are the mechanisms that protect the organization and operate around the clock to keep it healthy and stable, just as the body’s immune system operates to keep the body healthy and stable. But in a rapidly changing world, many of these defense mechanisms are no longer appropriate and risk weakening organizations’ innovation power.

When talking about organizational immune systems, there is a clear tendency to point simply to the individual immune system: people’s unwillingness to change.

But this is too simplistic.

Of course, there is human resistance to change, but the organizational immune system, consisting of a company’s key performance indicators (KPIs), rewards systems, legacy IT infrastructure and processes, and investor and shareholder demands, is far more important. So is the organization’s societal immune system, such as legislative barriers, legacy customers and providers, and economic climate.

Luckily, there are many culture hacks that organizations can apply to strengthen their innovation cultures by upgrading their physical and digital workspaces, transforming their top-down work processes into decentralized, agile ones, and empowering their employees.

Mutating Innovation
Upgrading your core and preparing for the future by augmenting innovation is crucial if you want success in the medium term. But to win in the long run and be as or more successful 20 to 30 years from now, you need to invent the future, and challenge your core, through mutating innovation.

This requires involving radical innovators who have a bold focus on experimenting with that which is not currently understood and for which a business case cannot be prepared.

Here you must also physically move away from the core organization when you initiate and run such initiatives. This is sometimes called “innovation on the edges” because the initiatives will not have a chance at succeeding within the core. It will be too noisy as they challenge what currently exists—precisely what the majority of the organization’s employees are working to optimize or augment.

Forward-looking organizations experiment to mutate their core through “X divisions,” sometimes called skunk works or innovation labs.

Lowe’s Innovation Labs, for instance, worked with startups to build in-store robot assistants and zero-gravity 3D printers to explore the future. Mutating innovation might include pursuing partnerships across all imaginable domains or establishing brand new companies, rather than traditional business units, as we see automakers such as Toyota now doing to build software for autonomous vehicles. Companies might also engage in radical open innovation by sponsoring others’ ingenuity. Through the ANA Avatar XPRIZE competition, Japan’s top airline ANA is exploring a future of travel that does not involve flying people from point A to point B.

Increasing technological opportunities challenge the core of any organization but also create unprecedented potential. No matter what product, service, or experience you create, you can’t rest on your laurels. You have to bring yourself to a position where you have a clear strategy for optimizing, augmenting, and mutating your core and thus transforming your organization.

It’s not an easy job. But, hey, if it were easy, everyone would be doing it. Those who make it, on the other hand, will be the innovation champions of the future.

Image Credit: rock-the-stock / Shutterstock.com

