Tag Archives: world

#433872 Breaking Out of the Corporate Bubble ...

For big companies, success is a blessing and a curse. You don’t get big without doing something (or many things) very right. It might start with an invention or service the world didn’t know it needed. Your product takes off, and growth brings a whole new set of logistical challenges. Delivering consistent quality, hiring the right team, establishing a strong culture, tapping into new markets, satisfying shareholders. The list goes on.

Eventually, however, what made you successful also makes you resistant to change.

You’ve built a machine for one purpose, and it’s running smoothly, but what about retooling that machine to make something new? Not so easy. Leaders of big companies know there is no future for their organizations without change. And yet, they struggle to drive it.

In their new book, Leading Transformation: How to Take Charge of Your Company’s Future, Kyle Nel, Nathan Furr, and Thomas Ramsøy aim to deliver a roadmap for corporate transformation.

The book focuses on practical tools that have worked in big companies to break down behavioral and cognitive biases, envision radical futures, and run experiments. These include using science fiction and narrative to see ahead and adopting better measures of success for new endeavors.

A thread throughout is how to envision a new future and move into that future.

We’re limited by the bubbles in which we spend the most time—the corporate bubble, the startup bubble, the nonprofit bubble. The mutually beneficial convergence of complementary bubbles, then, can be a powerful tool for kickstarting transformation. The views and experiences of one partner can challenge the accepted wisdom of the other; resources can flow into newly co-created visions and projects; and connections can be made that wouldn’t otherwise exist.

The authors call such alliances uncommon partners. In the following excerpt from the book, Made In Space, a startup building 3D printers for space, helps Lowe’s explore an in-store 3D printing system, and Lowe’s helps Made In Space expand its vision and focus.

Uncommon Partners
In a dingy conference room at NASA, five prototypical nerds, smelling of Thai food, laid out the path to printing satellites in space and buildings on distant planets. At the end of their four-day marathon, they emerged with an artifact trail that began with early prototypes for the first 3D printer on the International Space Station and ended in the additive-manufacturing future—a future much bigger than 3D printing.

In the additive-manufacturing future, we will view everything as transient, or capable of being repurposed into new things. Rather than throwing away a soda bottle or a bent nail, we will simply reprocess these things into a new hinge for the fence we are building or a light switch plate for the tool shed. Indeed, we might not even go buy bricks for the tool shed, but instead might print them from impurities pulled from the air and the dirt beneath our feet. Such a process would both capture carbon in the air to make the bricks and avoid all the carbon involved in making and then transporting traditional bricks to your house.

If it all sounds a little too science fiction, think again. Lowe’s has already been honored as a Champion of Change by the US government for its prototype system to recycle plastic (e.g., plastic bags and bottles). The future may be closer than you have imagined. But to get there, Lowe’s didn’t work alone. It had to work with uncommon partners to create the future.

Uncommon partners are the types of organizations you might not normally work with, but which can greatly help you create radical new futures. Increasingly, as new technologies emerge and old industries converge, companies are finding that working independently to create all the necessary capabilities to enter new industries or create new technologies is costly, risky, and even counterproductive. Instead, organizations are finding that they need to collaborate with uncommon partners as an ecosystem to cocreate the future together. Nathan [Furr] and his colleague at INSEAD, Andrew Shipilov, call this arrangement an adaptive ecosystem strategy and have described how companies such as Lowe’s, Samsung, Mastercard, and others are learning to work differently with partners and to work with different kinds of partners to more effectively discover new opportunities. For Lowe’s, an adaptive ecosystem strategy working with uncommon partners forms the foundation of capturing new opportunities and transforming the company. Despite its increased agility, Lowe’s can’t be (and shouldn’t become) an independent additive-manufacturing, robotics-using, exosuit-building, AR-promoting, fill-in-the-blank-what’s-next-ing company in addition to being a home improvement company. Instead, Lowe’s applies an adaptive ecosystem strategy to find the uncommon partners with which it can collaborate in new territory.

To apply the adaptive ecosystem strategy with uncommon partners, start by identifying the technical or operational components required for a particular focus area (e.g., exosuits) and then sort these components into three groups. First, there are the components that are emerging organically without any assistance from the orchestrator—the leader who tries to bring together the adaptive ecosystem. Second, there are the elements that might emerge with encouragement and support. Third are the elements that won’t happen unless you do something about them. In an adaptive ecosystem strategy, you can create regular partnerships for the first two categories—those already emerging or that might emerge—if needed. But you have to create the elements in the final category (those that won’t emerge) either with an uncommon partner or by yourself.

For example, when Lowe’s wanted to explore the additive-manufacturing space, it began a search for an uncommon partner to provide the missing but needed capabilities. Unfortunately, initial discussions with major 3D printing companies proved disappointing. The major manufacturers kept trying to sell Lowe’s 3D printers. But the vision our group had created with science fiction was not for vendors to sell Lowe’s a printer, but for partners to help the company build a system—something that would allow customers to scan, manipulate, print, and eventually recycle additively manufactured objects. Every time we discussed 3D printing systems with these major companies, they responded that they could do it and then tried to sell printers. When Carin Watson, one of the leading lights at Singularity University, introduced us to Made In Space (a company being incubated in Singularity University’s futuristic accelerator), we discovered an uncommon partner that understood what it meant to cocreate a system.

Initially, Made In Space had been focused on simply getting 3D printing to work in space, where you can’t rely on gravity, you can’t send up a technician if the machine breaks, and you can’t release noxious fumes into cramped spacecraft quarters. But after the four days in the conference room going over the science fiction comic for additive manufacturing, Made In Space and Lowe’s emerged with a bigger vision. The company helped lay out an artifact trail that included not only the first printer on the International Space Station but also printing system services in Lowe’s stores.

Of course, the vision for an additive-manufacturing future didn’t end there. It also reshaped Made In Space’s trajectory, encouraging the startup, during those four days in a NASA conference room, to design a bolder future. Today, some of its bold projects include the Archinaut, a system that enables satellites to build themselves while in space, a direction that emerged partly from the science fiction narrative we created around additive manufacturing.

In summary, uncommon partners help you succeed by providing you with the capabilities you shouldn’t be building yourself, as well as with fresh insights. You also help uncommon partners succeed by creating new opportunities from which they can prosper.

Helping Uncommon Partners Prosper
Working most effectively with uncommon partners can require a shift from more familiar outsourcing or partnership relationships. When working with uncommon partners, you are trying to cocreate the future, which entails a great deal more uncertainty. Because you can’t specify outcomes precisely, agreements are typically less formal than in other types of relationships, and they rest on shared vision and trust more than on binding contract clauses. Moreover, your goal isn’t to extract all the value from the relationship. Rather, you need to find a way to share the value.

Ideally, your uncommon partners should be transformed for the better by the work you do. For example, Lowe’s uncommon partner developing the robotics narrative was a small startup called Fellow Robots. Through its work with Lowe’s, Fellow Robots transformed from a small team focused on a narrow application of robotics (which was arguably the wrong problem) to a growing company developing a very different and valuable set of capabilities: putting cutting-edge technology on top of the old legacy systems embedded at the core of most companies. Working with Lowe’s allowed Fellow Robots to discover new opportunities, and today Fellow Robots works with retailers around the world, including BevMo! and Yamada. Ultimately, working with uncommon partners should be transformative for both of you, so focus more on creating a bigger pie than on how you are going to slice up a smaller pie.

The above excerpt appears in the new book Leading Transformation: How to Take Charge of Your Company’s Future by Kyle Nel, Nathan Furr, and Thomas Ramsøy, published by Harvard Business Review Press.

Image Credit: Here / Shutterstock.com


Posted in Human Robots

#433852 How Do We Teach Autonomous Cars To Drive ...

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
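
In outline, the training half of that loop looks something like the minimal sketch below. The file layout, the four-class label set, and the tiny network are all illustrative assumptions rather than the production pipeline; a real system would use a far larger segmentation architecture and far more data.

```python
# Minimal sketch: train a small per-pixel classifier on simulator output.
# Assumed (hypothetical) layout: sim_data/images/*.png rendered scenes with
# matching sim_data/labels/*.png masks, where each label pixel is a class id
# (0 = sky, 1 = tree, 2 = open path, 3 = obstacle). All renders same size.
import glob

import numpy as np
import torch
import torch.nn as nn
from PIL import Image


class SimSegDataset(torch.utils.data.Dataset):
    def __init__(self, image_dir, label_dir):
        self.images = sorted(glob.glob(f"{image_dir}/*.png"))
        self.labels = sorted(glob.glob(f"{label_dir}/*.png"))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        img = np.asarray(Image.open(self.images[i]).convert("RGB"),
                         dtype=np.float32) / 255.0
        lbl = np.asarray(Image.open(self.labels[i]), dtype=np.int64)
        return torch.from_numpy(img).permute(2, 0, 1), torch.from_numpy(lbl)


# A deliberately tiny fully convolutional net; a real system would use
# something like U-Net or DeepLab.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 4, 1),  # 4 output channels = 4 terrain classes
)

loader = torch.utils.data.DataLoader(
    SimSegDataset("sim_data/images", "sim_data/labels"),
    batch_size=4, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # per-pixel classification loss

for epoch in range(10):
    for imgs, lbls in loader:
        opt.zero_grad()
        loss = loss_fn(model(imgs), lbls)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The trained weights would then be transferred to the vehicle and run against its live camera feed, as described above.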

Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
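
The core idea of a virtual sensor can be sketched in a few lines: cast beams from a simulated lidar onto a synthetic terrain surface and record noisy range returns. The terrain function, beam geometry, and noise model below are illustrative stand-ins for the simulator's far richer ones.

```python
# Minimal sketch of a "virtual sensor": a simulated 2D lidar scanning a
# procedurally generated terrain profile. The terrain function, beam
# angles, and noise level are illustrative assumptions only.
import numpy as np


def terrain_height(x):
    # Rolling ground with small undulations, standing in for a landscape.
    return 0.3 * np.sin(0.5 * x) + 0.1 * np.sin(2.3 * x)


def simulate_lidar(sensor_height=1.5, angles_deg=np.arange(-45.0, -5.0, 1.0),
                   max_range=30.0, noise_std=0.02, step=0.01):
    """March each downward-pointing beam forward until it hits the terrain."""
    ranges = []
    for a in np.deg2rad(angles_deg):
        r = 0.0
        while r < max_range:
            x, z = r * np.cos(a), sensor_height + r * np.sin(a)
            if z <= terrain_height(x):  # beam has reached the surface
                break
            r += step
        # Record the (noisy) range return, one value per beam.
        ranges.append(min(r, max_range) + np.random.normal(0.0, noise_std))
    return np.array(ranges)


print(simulate_lidar().round(2))
```

Because the terrain is known exactly, every simulated return comes with a perfect ground-truth label, which is what makes this kind of data so valuable for training.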

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and carries the sensors and computers it needs to navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
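
The exact computation isn't described here, but a simple version of the idea is to merge the returns into a point cloud, bin the points into small ground cells, and use the spread of heights within each cell as a roughness measure: near zero over smooth pavement, much larger over grass or rubble. A minimal sketch, with an assumed 0.25-meter cell size:

```python
# Minimal sketch: estimate ground roughness from a merged lidar point cloud
# by binning returns into small x-y cells and measuring height variation.
# The 0.25 m cell size and the use of standard deviation are assumptions;
# the actual computation on the Halo Project vehicle isn't specified.
import numpy as np


def roughness_map(points, cell=0.25):
    """points: (N, 3) array of x, y, z lidar returns in vehicle coordinates."""
    ij = np.floor(points[:, :2] / cell).astype(int)  # cell index per point
    rough = {}
    for key in map(tuple, np.unique(ij, axis=0)):
        mask = (ij[:, 0] == key[0]) & (ij[:, 1] == key[1])
        rough[key] = points[mask, 2].std()  # height spread within the cell
    return rough  # {(i, j): roughness}; near zero means smooth ground


# Toy example: flat ground plus a patch of tall grass.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 2, 500), rng.uniform(0, 2, 500),
                          rng.normal(0.0, 0.005, 500)])
grass = np.column_stack([rng.uniform(1.5, 2.0, 100), rng.uniform(1.5, 2.0, 100),
                         rng.uniform(0.0, 0.4, 100)])
rough = roughness_map(np.vstack([ground, grass]))
print(f"max cell roughness: {max(rough.values()):.3f} m")  # the grass patch
```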

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University; and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle Systems, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND

Posted in Human Robots

#433807 The How, Why, and Whether of Custom ...

A digital afterlife may soon be within reach, but it might not be for your benefit.

The reams of data we’re generating could soon make it possible to create digital avatars that live on after we die, aimed at comforting our loved ones or sharing our experience with future generations.

That may seem like a disappointing downgrade from the vision promised by the more optimistic futurists, where we upload our consciousness to the cloud and live forever in machines. But it might be a realistic possibility in the not-too-distant future—and the first steps have already been taken.

After her friend died in a car crash, Eugenia Kuyda, co-founder of Russian AI startup Luka, trained a neural network-powered chatbot on their shared message history to mimic him. Journalist and amateur coder James Vlahos took a more involved approach, carrying out extensive interviews with his terminally ill father so that he could create a digital clone of him when he died.
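
Neither of those systems is public, but the simplest version of the idea, retrieval rather than generation, fits in a few lines: index one person's side of a message history and answer new messages with their closest past reply. The sketch below is a toy under that assumption, with invented sample messages; Luka's actual chatbot used neural networks.

```python
# Toy retrieval-based "memorial chatbot": answer a new message with the
# indexed person's most similar past reply. These (prompt, reply) pairs
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [
    ("how was your day", "Long one. Lab ran late again, but good."),
    ("want to grab dinner", "Always. Thai place on 5th?"),
    ("did you see the launch", "Watched it live. Unreal."),
]
prompts = [p for p, _ in history]
vec = TfidfVectorizer().fit(prompts)


def reply(message):
    # Score the new message against every historical prompt and return
    # the reply attached to the closest match.
    sims = cosine_similarity(vec.transform([message]), vec.transform(prompts))
    return history[sims.argmax()][1]


print(reply("free for dinner tonight?"))  # -> "Always. Thai place on 5th?"
```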

For those of us without the time or expertise to build our own artificial intelligence-powered avatar, startup Eternime is offering to take your social media posts and interactions as well as basic personal information to build a copy of you that could then interact with relatives once you’re gone. The service is so far only running a private beta with a handful of people, but with 40,000 on its waiting list, it’s clear there’s a market.

Comforting—Or Creepy?
The whole idea may seem eerily similar to the Black Mirror episode “Be Right Back,” in which a woman pays a company to create a digital copy of her deceased husband and eventually a realistic robot replica. And given the show’s focus on the emotional turmoil she goes through, people might question whether the idea is a sensible one.

But it’s hard to say at this stage whether being able to interact with an approximation of a deceased loved one would help or hinder the grieving process. The fear is that it could make it harder for people to “let go” or “move on,” but others think it could play a useful therapeutic role, reminding people that the dead are not entirely gone and giving the bereaved a novel way to express and come to terms with their feelings.

While at present most envisage these digital resurrections as a way to memorialize loved ones, there are also more ambitious plans to use the technology as a way to preserve expertise and experience. A project at MIT called Augmented Eternity is investigating whether we could use AI to trawl through someone’s digital footprints and extract both their knowledge and elements of their personality.

Project leader Hossein Rahnama says he’s already working with a CEO who wants to leave behind a digital avatar that future executives could consult with after he’s gone. And you wouldn’t necessarily have to wait until you’re dead—experts could create virtual clones of themselves that could dispense advice on demand to far more people. These clones could soon be more than simple chatbots, too. Hollywood has already started spending millions of dollars to create 3D scans of its most bankable stars so that they can keep acting beyond the grave.

It’s easy to see the appeal of the idea; imagine if we could bring back Stephen Hawking or Steve Jobs to share their wisdom with us. And what if we could create a digital brain trust combining the experience and wisdom of all the world’s greatest thinkers, accessible on demand?

But there are still huge hurdles ahead before we could create truly accurate representations of people by simply trawling through their digital remains. The first problem is data. Most people’s digital footprints only started reaching significant proportions in the last decade or so, and cover a relatively small period of their lives. It could take many years before there’s enough data to create more than just a superficial imitation of someone.

And that’s assuming that the data we produce is truly representative of who we are. Carefully crafted Instagram profiles and cautiously worded work emails hardly capture the messy realities of most people’s lives.

Perhaps if the idea is simply to create a bank of someone’s knowledge and expertise, accurately capturing the essence of their character would be less important. But these clones would also be static. Real people continually learn and change, but a digital avatar is a snapshot of someone’s character and opinions at the point they died. An inability to adapt as the world around them changes could put a shelf life on the usefulness of these replicas.

Who’s Calling the (Digital) Shots?
That won’t stop people from trying, though, and it raises a potentially more important question: Who gets to make the calls about our digital afterlife? The subjects, their families, or the companies that hold their data?

In most countries, the law is currently pretty hazy on this topic. Companies like Google and Facebook have processes to let you choose who should take control of your accounts in the event of your death. But if you’ve forgotten to do that, the fate of your virtual remains comes down to a tangle of federal law, local law, and tech company terms of service.

This lack of regulation could create incentives and opportunities for unscrupulous behavior. The voice of a deceased loved one could be a highly persuasive tool for exploitation, and digital replicas of respected experts could be powerful means of pushing a hidden agenda.

That means there’s a pressing need for clear and unambiguous rules. Researchers at Oxford University recently suggested ethical guidelines that would treat our digital remains the same way museums and archaeologists are required to treat mortal remains—with dignity and in the interest of society.

Whether those kinds of guidelines are ever enshrined in law remains to be seen, but ultimately they may decide whether the digital afterlife turns out to be heaven or hell.

Image Credit: frankie’s / Shutterstock.com

Posted in Human Robots

#433803 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
The AI Cold War That Could Doom Us All
Nicholas Thompson | Wired
“At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. …Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?”

LONGEVITY
Finally, the Drug That Keeps You Young
Stephen S. Hall | MIT Technology Review
“The other thing that has changed is that the field of senescence—and the recognition that senescent cells can be such drivers of aging—has finally gained acceptance. Whether those drugs will work in people is still an open question. But the first human trials are under way right now.”

SYNTHETIC BIOLOGY
Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories
Megan Molteni | Wired
“The biotech unicorn is already cranking out an impressive number of microbial biofactories that grow and multiply and burp out fragrances, fertilizers, and soon, psychoactive substances. And they do it at a fraction of the cost of traditional systems. But Kelly is thinking even bigger.”

CYBERNETICS
Thousands of Swedes Are Inserting Microchips Under Their Skin
Maddy Savage | NPR
“Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180. So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.”

ART
AI Art at Christie’s Sells for $432,500
Gabe Cohn | The New York Times
“Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.”

ETHICS
Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From
Karen Hao | MIT Technology Review
“The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.”

TECHNOLOGY
The Rodney Brooks Rules for Predicting a Technology’s Success
Rodney Brooks | IEEE Spectrum
“Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?”

Image Source: spainter_vfx / Shutterstock.com

Posted in Human Robots

#433796 Creepy AI-Created Portrait Fetches ...

It's the arrival of artificial intelligence in the art world.

Posted in Human Robots