Tag Archives: top

#435145 How Big Companies Can Simultaneously Run ...

We live in the age of entrepreneurs. New startups seem to appear out of nowhere and challenge not only established companies, but entire industries. Where startup unicorns were once mythical creatures, they now seem abundant, not only increasing in numbers but also in the speed with which they can gain the minimum one-billion-dollar valuations to achieve this status.

But no matter how well things go for innovative startups, how many new success stories we hear, and how much space they take up in the media, the story that they are the best or only source of innovation isn’t entirely accurate.

Established organizations, or legacy organizations, can be incredibly innovative too. And while innovation is much more difficult in established organizations than in startups because of their far more complex systems, nobody is more likely to succeed in their innovation efforts than established organizations.

Unlike startups, established organizations have all the resources. They have money, customers, data, suppliers, partners, and infrastructure, which put them in a far better position to transform new ideas into concrete, value-creating, successful offerings than startups.

However, for established organizations, becoming an innovation champion in these times of rapid change requires new rules of engagement.

Many organizations commit the mistake of engaging in innovation as if it were a homogeneous thing that should be approached in the same way every time, regardless of its purpose. In my book, Transforming Legacy Organizations, I argue that innovation in established organizations must actually be divided into three different tracks: optimizing, augmenting, and mutating innovation.

All three are important, and to complicate matters further, organizations must execute all three types of innovation at the same time.

Optimizing Innovation
The first track is optimizing innovation. This type of innovation is the majority of what legacy organizations already do today. It is, metaphorically speaking, the extra blade on the razor. A razor manufacturer might launch a new razor that has not just three, but four blades, to ensure an even better, closer, and more comfortable shave. Then one or two years later, they say they are now launching a razor that has not only four, but five blades for an even better, closer, and more comfortable shave. That is optimizing innovation.

Adding extra blades on the razor is where the established player reigns.

No startup with so much as a modicum of sense would even try to beat the established company in this type of innovation. And this continuous optimization, both on the operational and customer facing sides, is important. In the short term. It pays the rent. But it’s far from enough. There are limits to how many blades a razor needs, and optimizing innovation only improves upon the past.

Augmenting Innovation
Established players must also go beyond optimization and prepare for the future through augmenting innovation.

The digital transformation projects that many organizations are initiating can be characterized as augmenting innovation. In the first instance, it is about upgrading core offerings and processes from analog to digital. Or, if you’re born digital, you’ve probably had to augment the core to become mobile-first. Perhaps you have even entered the next augmentation phase, which involves implementing artificial intelligence. Becoming AI-first, like the Amazons, Microsofts, Baidus, and Googles of the world, requires great technological advancements. And it’s difficult. But technology may, in fact, be a minor part of the task.

The biggest challenge for augmenting innovation is probably culture.

Only legacy organizations that manage to transform their cultures from status quo cultures—cultures with a preference for things as they are—into cultures full of incremental innovators can thrive in constant change.

To create a strong innovation culture, an organization needs to thoroughly understand its immune systems. These are the mechanisms that protect the organization and operate around the clock to keep it healthy and stable, just as the body’s immune system operates to keep the body healthy and stable. But in a rapidly changing world, many of these defense mechanisms are no longer appropriate and risk weakening organizations’ innovation power.

When talking about organizational immune systems, there is a clear tendency to simply point to the individual immune system, people’s unwillingness to change.

But this is too simplistic.

Of course, there is human resistance to change, but the organizational immune system, consisting of a company’s key performance indicators (KPIs), rewards systems, legacy IT infrastructure and processes, and investor and shareholder demands, is far more important. So is the organization’s societal immune system, such as legislative barriers, legacy customers and providers, and economic climate.

Luckily, there are many culture hacks that organizations can apply to strengthen their innovation cultures: upgrading their physical and digital workspaces, transforming their top-down work processes into decentralized, agile ones, and empowering their employees.

Mutating Innovation
Upgrading your core and preparing for the future through augmenting innovation is crucial if you want success in the medium term. But to win in the long run and be just as successful, or more so, 20 to 30 years from now, you need to invent the future and challenge your core through mutating innovation.

This requires involving radical innovators who have a bold focus on experimenting with that which is not currently understood and for which a business case cannot be prepared.

Here you must also physically move away from the core organization when you initiate and run such initiatives. This is sometimes called “innovation on the edges” because the initiatives will not have a chance of succeeding within the core. They create too much noise as they challenge what currently exists—precisely what the majority of the organization’s employees are working to optimize or augment.

Forward-looking organizations experiment to mutate their core through “X divisions,” sometimes called skunk works or innovation labs.

Lowe’s Innovation Labs, for instance, worked with startups to build in-store robot assistants and zero-gravity 3D printers to explore the future. Mutating innovation might include pursuing partnerships across all imaginable domains or establishing brand new companies, rather than traditional business units, as we see automakers such as Toyota now doing to build software for autonomous vehicles. Companies might also engage in radical open innovation by sponsoring others’ ingenuity. Japan’s top airline ANA is exploring a future of travel that does not involve flying people from point A to point B via the ANA Avatar XPRIZE competition.

Increasing technological opportunities challenge the core of any organization but also create unprecedented potential. No matter what product, service, or experience you create, you can’t rest on your laurels. You have to bring yourself to a position where you have a clear strategy for optimizing, augmenting, and mutating your core and thus transforming your organization.

It’s not an easy job. But, hey, if it were easy, everyone would be doing it. Those who make it, on the other hand, will be the innovation champions of the future.

Image Credit: rock-the-stock / Shutterstock.com


Posted in Human Robots

#435110 5 Coming Breakthroughs in Energy and ...

The energy and transportation industries are being aggressively disrupted by converging exponential technologies.

In just five days, the sun provides Earth with an energy supply exceeding all proven reserves of oil, coal, and natural gas. Capturing just 1 part in 8,000 of this available solar energy would allow us to meet 100 percent of our energy needs.
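As a rough sanity check on the scale involved, the fraction above can be back-computed from round figures; the ~173,000 TW solar input and ~18 TW human demand below are my own ballpark numbers, not from the article:

```python
# Back-of-envelope check of the "1 part in 8,000" figure.
# Both inputs are rough, commonly cited ballpark values.
solar_input_tw = 173_000   # approximate solar power reaching Earth
human_demand_tw = 18       # approximate total human energy consumption

ratio = solar_input_tw / human_demand_tw
print(f"Sun delivers ~{ratio:,.0f}x our total demand")  # ~9,611x
```

That lands in the same order of magnitude as the 1-in-8,000 claim, which is as much agreement as round numbers allow.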

As we leverage renewable energy supplied by the sun, wind, geothermal sources, and eventually fusion, we are rapidly heading towards a future where 100 percent of our energy needs will be met by clean tech in just 30 years.

During the past 40 years, solar prices have dropped 250-fold. And as these costs plummet, solar panel capacity continues to grow exponentially.
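The claimed 250-fold drop implies a steady average rate of decline, which a few lines of arithmetic recover (assuming a smooth exponential trend, which real price curves only approximate):

```python
# What average yearly price decline produces a 250x drop in 40 years?
# Assumes a smooth exponential decline, purely for illustration.
factor = 250   # total price drop claimed over the period
years = 40

annual_factor = factor ** (1 / years)    # prices ~1.148x cheaper each year
annual_decline = 1 - 1 / annual_factor   # fraction shaved off per year
print(f"~{annual_decline:.1%} average annual price decline")  # ~12.9%
```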

On the heels of energy abundance, we are additionally witnessing a new transportation revolution, which sets the stage for a future of seamlessly efficient travel at lower economic and environmental costs.

Top 5 Transportation Breakthroughs (2019-2024)
Entrepreneur and inventor Ramez Naam is my go-to expert on all things energy and environment. Currently serving as the Energy Co-Chair at Singularity University, Naam is the award-winning author of five books, including the Nexus series of science fiction novels. Having spent 13 years at Microsoft, his software has touched the lives of over a billion people. Naam holds over 20 patents, including several shared with co-inventor Bill Gates.

In the next five years, he forecasts five respective transportation and energy trends, each poised to disrupt major players and birth entirely new business models.

Let’s dive in.

Autonomous cars drive 1 billion miles on US roads. Then 10 billion

Alphabet’s Waymo alone has already reached 10 million miles driven in the US. The 600 Waymo vehicles on public roads drive a total of 25,000 miles each day, and computer simulations provide an additional 25,000 virtual cars driving constantly. Since its launch in December, the Waymo One service has transported over 1,000 pre-vetted riders in the Phoenix area.

With more training miles, the accuracy of these cars continues to improve. GM Cruise has improved its disengagement rate by 321 percent since last year, trailing close behind with only one human intervention per 5,025 miles self-driven.

Autonomous taxis as a service in top 20 US metro areas

Along with its first quarterly earnings released last week, Lyft recently announced that it would expand its Waymo partnership with the upcoming deployment of 10 autonomous vehicles in the Phoenix area. While individuals previously had to partake in Waymo’s “early rider program” prior to trying Waymo One, the Lyft partnership will allow anyone to ride in a self-driving vehicle without a prior NDA.

Strategic partnerships will grow increasingly essential between automakers, self-driving tech companies, and rideshare services. Ford is currently working with Volkswagen, and Nvidia now collaborates with Daimler (Mercedes) and Toyota. Just last week, GM Cruise raised another $1.15 billion at a $19 billion valuation as the company aims to launch a ride-hailing service this year.

“They’re going to come to the Bay Area, Los Angeles, Houston, other cities with relatively good weather,” notes Naam. “In every major city within five years in the US and in some other parts of the world, you’re going to see the ability to hail an autonomous vehicle as a ride.”

Cambrian explosion of vehicle formats

Naam explains, “If you look today at the average ridership of a taxi, a Lyft, or an Uber, it’s about 1.1 passengers plus the driver. So, why do you need a large four-seater vehicle for that?”

Small electric, autonomous pods that seat as few as two people will begin to emerge, satisfying the majority of ride-hailing demands we see today. At the same time, larger communal vehicles will appear, such as Uber Express, that will undercut even the cheapest of transportation methods—buses, trams, and the like. Finally, last-mile scooter transit (or simply short-distance walks) might connect you to communal pick-up locations.

By 2024, an unimaginably diverse range of vehicles will arise to meet every possible need, regardless of distance or destination.

Drone delivery for lightweight packages in at least one US city

Wing, the Alphabet drone delivery startup, recently became the first company to gain approval from the Federal Aviation Administration (FAA) to make deliveries in the US. Having secured approval to deliver to 100 homes in Canberra, Australia, Wing additionally plans to begin delivering goods from local businesses in the suburbs of Virginia.

The current state of drone delivery is best suited for lightweight, urgent-demand payloads like pharmaceuticals, thumb drives, or connectors. And as Amazon continues to decrease its Prime delivery times—now as speedy as a one-day turnaround in many cities—the use of drones will become essential.

Robotic factories drive onshoring of US factories… but without new jobs

The supply chain will continue to shorten and become more agile with the re-onshoring of manufacturing jobs in the US and other countries. Naam reasons that new management and software jobs will drive this shift, as these roles develop the robotics needed to manufacture goods. Equally important, these robotic factories will provide a more humane setting than many current manufacturing practices overseas.

Top 5 Energy Breakthroughs (2019-2024)

First “1 cent per kWh” deals for solar and wind signed

Ten years ago, the lowest price of solar and wind power fell between 10 and 12 cents per kilowatt-hour (kWh), over twice the price of wholesale power from coal or natural gas.

Today, the gap between solar/wind power and fossil fuel-generated electricity is nearly negligible in many parts of the world. In G20 countries, fossil fuel electricity costs between 5 and 17 cents per kWh, while the average cost per kWh of solar power in the US stands at under 10 cents.

Spanish firm Solarpack Corp Technological recently won a bid in Chile for a 120 MW solar power plant supplying energy at 2.91 cents per kWh. This deal will result in an estimated 25 percent drop in energy costs for Chilean businesses by 2021.

Naam indicates, “We will see the first unsubsidized 1.0 cent solar deals in places like Chile, Mexico, the Southwest US, the Middle East, and North Africa, and we’ll see similar prices for wind in places like Mexico, Brazil, and the US Great Plains.”

Solar and wind will reach >15 percent of US electricity, and begin to drive all growth

Just over eight percent of energy in the US comes from solar and wind sources. In total, 17 percent of American energy is derived from renewable sources, while a whopping 63 percent is sourced from fossil fuels, and 17 percent from nuclear.

Last year in the U.K., twice as much energy was generated from wind as from coal. For over a week in May, the U.K. went completely coal-free, using wind and solar to supply 35 percent and 21 percent of power, respectively. While fossil fuels remain the primary electricity source, this week-long experiment highlights the disruptive potential of solar and wind power that major countries like the U.K. are beginning to emphasize.

“Solar and wind are still a relatively small part of the worldwide power mix, only about six percent. Within five years, it’s going to be 15 percent in the US and close to that worldwide,” Naam predicts. “We are nearing the point where we are not building any new fossil fuel power plants.”

It will be cheaper to build new solar/wind/batteries than to run on existing coal

Last October, Northern Indiana utility company NIPSCO announced its transition from a 65 percent coal-powered state to projected coal-free status by 2028. Importantly, this decision was made purely on the basis of financials, with an estimated $4 billion in cost savings for customers. The company has already begun several initiatives in solar, wind, and batteries.

NextEra, the largest power generator in the US, has taken on a similar goal, making a deal last year to purchase roughly seven million solar panels from JinkoSolar over four years. Leading power generators across the globe have vocalized a similar economic case for renewable energy.

ICE car sales have now peaked. All car sales growth will be electric

While electric vehicles (EV) have historically been more expensive for consumers than internal combustion engine-powered (ICE) cars, EVs are cheaper to operate and maintain. The yearly cost of operating an EV in the US is about $485, less than half the $1,117 cost of operating a gas-powered vehicle.

And as battery prices continue to shrink, the upfront costs of EVs will decline until a long-term payoff calculation is no longer required to determine which type of car is the better investment. EVs will become the obvious choice.
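The payoff logic above can be sketched in a few lines. The operating costs are the ones cited above, while the $8,000 upfront premium is a hypothetical figure for illustration, not a number from the article:

```python
# Rough EV-vs-ICE payoff sketch using the operating costs cited above.
ev_annual = 485     # yearly cost of operating an EV in the US
ice_annual = 1117   # yearly cost of operating a gas-powered vehicle
premium = 8000      # ASSUMED extra upfront cost of the EV (illustrative)

annual_savings = ice_annual - ev_annual     # $632 saved per year
breakeven_years = premium / annual_savings  # ~12.7 years at this premium
print(f"Breakeven after ~{breakeven_years:.1f} years")

# As battery prices fall, `premium` shrinks toward zero and the
# breakeven horizon collapses -- the point the paragraph above makes.
```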

Many experts including Naam believe that ICE-powered vehicles peaked worldwide in 2018 and will begin to decline over the next five years, as has already been demonstrated in the past five months. At the same time, EVs are expected to quadruple their market share to 1.6 percent this year.

New storage technologies will displace Li-ion batteries for tomorrow’s most demanding applications

Lithium-ion batteries have dominated the battery market for decades, but Naam anticipates new storage technologies will take hold for different contexts. Flow batteries, which can collect and store solar and wind power at large scales, will supply city grids. California’s Independent System Operator, the nonprofit that maintains the majority of the state’s power grid, has already installed a flow battery system in San Diego.

Solid-state batteries, which use entirely solid electrolytes, will power mobile devices and cars. A growing field of competitors, including Toyota, BMW, Honda, Hyundai, and Nissan, is already working on developing solid-state battery technology. These batteries offer up to six times faster charging, three times the energy density, and eight years of added lifespan compared to lithium-ion batteries.

Final Thoughts
Major advancements in transportation and energy technologies will continue to converge over the next five years. As a case in point, Tesla’s recent announcement of its “robotaxi” fleet exemplifies the growing trend towards joint priority of sustainability and autonomy.

On the connectivity front, 5G and next-generation mobile networks will continue to enable the growth of autonomous fleets, many of which will soon run on renewable energy sources. This growth demands important partnerships between energy storage manufacturers, automakers, self-driving tech companies, and ridesharing services.

In the eco-realm, increasingly obvious economic calculi will catalyze consumer adoption of autonomous electric vehicles. In just five years, Naam predicts that self-driving rideshare services will be cheaper than owning a private vehicle for urban residents. And by the same token, plummeting renewable energy costs will make these fuels far more attractive than fossil fuel-derived electricity.

As universally optimized AI systems cut down on traffic, aggregate time spent in vehicles will plummet, while hours in your (or not your) car can be applied to any number of activities as autonomous systems steer the way. All the while, sharing an electric vehicle will cut down not only on your carbon footprint but on the exorbitant costs swallowed by your previous SUV. How will you spend this extra time and money? What new natural resources will fuel your everyday life?

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: welcomia / Shutterstock.com


#435106 Could Artificial Photosynthesis Help ...

Plants are the planet’s lungs, but they’re struggling to keep up due to rising CO2 emissions and deforestation. Engineers are giving them a helping hand, though, by augmenting their capacity with new technology and creating artificial substitutes to help them clean up our atmosphere.

Imperial College London, one of the UK’s top engineering schools, recently announced that it was teaming up with startup Arborea to build the company’s first outdoor pilot of its BioSolar Leaf cultivation system at the university’s White City campus in West London.

Arborea is developing large solar panel-like structures that house microscopic plants and can be installed on buildings or open land. The plants absorb light and carbon dioxide as they photosynthesize, removing greenhouse gases from the air and producing organic material, which can be processed to extract valuable food additives like omega-3 fatty acids.

The idea of growing algae to produce useful materials isn’t new, but Arborea’s pitch seems to be flexibility and affordability. The more conventional approach is to grow algae in open ponds, which are less efficient and open to contamination, or in photo-bioreactors, which typically require CO2 to be piped in rather than getting it from the air and can be expensive to run.

There’s little detail on how the technology deals with issues like nutrient supply and harvesting or how efficient it is. The company claims it can remove carbon dioxide as fast as 100 trees using the surface area of just a single tree, but there’s no published research to back that up, and it’s hard to compare the surface area of flat panels to that of a complex object like a tree. If you flattened out every inch of a tree’s surface it would cover a surprisingly large area.

Nonetheless, the ability to install these panels directly on buildings could present a promising way to soak up the huge amount of CO2 produced in our cities by transport and industry. And Arborea isn’t the only one trying to give plants a helping hand.

For decades researchers have been working on ways to use light-activated catalysts to split water into oxygen and hydrogen fuel, and more recently there have been efforts to fuse this with additional processes to combine the hydrogen with carbon from CO2 to produce all kinds of useful products.

Most notably, in 2016 Harvard researchers showed that water-splitting catalysts could be augmented with bacteria that combine the resulting hydrogen with CO2 to create biomass, fuel, or other useful products. The approach was more efficient than plants at turning CO2 into fuel and was built using cheap materials, but turning it into a commercially viable technology will take time.

Not everyone is looking to mimic or borrow from biology in their efforts to suck CO2 out of the atmosphere. There’s been a recent glut of investment in startups working on direct-air capture (DAC) technology, which had previously been written off for using too much power and space to be practical. The looming climate change crisis appears to be rewriting some of those assumptions, though.

Most approaches aim to use the concentrated CO2 to produce synthetic fuels or other useful products, creating a revenue stream that could help improve their commercial viability. But we look increasingly likely to surpass the safe greenhouse gas limits, so attention is instead turning to carbon-negative technologies.

That means capturing CO2 from the air and then putting it into long-term storage. One way could be to grow lots of biomass and then bury it, mimicking the process that created fossil fuels in the first place. Or DAC plants could pump the CO2 they produce into deep underground wells.

But the former would take up unreasonably large amounts of land to make a significant dent in emissions, while the latter would require huge amounts of already scant and expensive renewable power. According to a recent analysis, artificial photosynthesis could sidestep these issues because it’s up to five times more efficient than its natural counterpart and could be cheaper than DAC.

Whether the technology will develop quickly enough for it to be deployed at scale and in time to mitigate the worst effects of climate change remains to be seen. Emissions reductions certainly present a more sure-fire way to deal with the problem, but nonetheless, cyborg plants could soon be a common sight in our cities.

Image Credit: GiroScience / Shutterstock.com


#435070 5 Breakthroughs Coming Soon in Augmented ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.

In this third installment of my Convergence Catalyzer series, I’ll be synthesizing key insights from my annual entrepreneurs’ mastermind event, Abundance 360. This five-blog series looks at 3D printing, artificial intelligence, VR/AR, energy and transportation, and blockchain.

Today, let’s dive into virtual and augmented reality.

Today’s most prominent tech giants are leaping onto the VR/AR scene, each driving forward new and upcoming product lines. Think: Microsoft’s HoloLens, Facebook’s Oculus, Amazon’s Sumerian, and Google’s Cardboard (Apple plans to release a headset by 2021).

And as plummeting prices meet exponential advancements in VR/AR hardware, this burgeoning disruptor is on its way out of the early adopters’ market and into the majority of consumers’ homes.

My good friend Philip Rosedale is my go-to expert on AR/VR and one of the foremost creators of today’s most cutting-edge virtual worlds. After creating the virtual civilization Second Life in 2003, now populated by almost 1 million active users, Philip went on to co-found High Fidelity, which explores the future of next-generation shared VR.

In just the next five years, he predicts five emerging trends will take hold, together disrupting major players and birthing new ones.

Let’s dive in…

Top 5 Predictions for VR/AR Breakthroughs (2019-2024)
“If you think you kind of understand what’s going on with that tech today, you probably don’t,” says Philip. “We’re still in the middle of landing the airplane of all these new devices.”

(1) Transition from PC-based to standalone mobile VR devices

Historically, VR devices have relied on PC connections, usually involving wires and clunky hardware that restrict a user’s field of motion. However, as VR enters the dematerialization stage, we are about to witness the rapid rise of a standalone and highly mobile VR experience economy.

Oculus Go, the leading standalone mobile VR device on the market, requires only a mobile app for setup and can be transported anywhere with WiFi.

With a consumer audience in mind, the 32GB headset is priced at $200 and shares an app ecosystem with Samsung’s Gear VR. Google’s Daydream headsets are also mobile VR devices, but they require a docked smartphone rather than the Oculus Go’s built-in screen.

In the AR space, Microsoft’s standalone HoloLens 2 leads the way in providing tetherless experiences.

Freeing headsets from the constraints of heavy hardware will make VR/AR increasingly interactive and transportable, a seamless add-on whenever, wherever. Within a matter of years, it may be as simple as carrying lightweight VR goggles wherever you go and throwing them on at a moment’s notice.

(2) Wide field-of-view AR displays

Microsoft’s HoloLens 2 leads the AR industry in headset comfort and display quality. The most significant issue with their prior version was the limited rectangular field of view (FOV).

By implementing laser technology to create a microelectromechanical systems (MEMS) display, HoloLens 2 can direct images via mirrors into waveguides positioned in front of users’ eyes. Shifting the angles of these mirrors then enlarges the image. Coupled with a resolution of 47 pixels per degree, HoloLens 2 has now doubled its predecessor’s FOV. Microsoft anticipates the release of its headset by the end of this year at a $3,500 price point, first targeting businesses and eventually rolling it out to consumers.

Magic Leap provides a similar FOV but with lower resolution than the HoloLens 2. The Meta 2 boasts an even wider 90-degree FOV, but requires a cable attachment. The race to achieve the natural human 120-degree horizontal FOV continues.

“The technology to expand the field of view is going to make those devices much more usable by giving you bigger than a small box to look through,” Rosedale explains.

(3) Mapping of real world to enable persistent AR ‘mirror worlds’

‘Mirror worlds’ are alternative dimensions of reality that can blanket a physical space. While seated in your office, the floor beneath you could dissolve into a calm lake and each desk into a sailboat. In the classroom, mirror worlds would convert pencils into magic wands and tabletops into touch screens.

Pokémon Go provides an introductory glimpse into the mirror world concept and its massive potential to unite people in real action.

To create these mirror worlds, AR headsets must precisely understand the architecture of the surrounding world. Rosedale predicts the scanning accuracy of devices will improve rapidly over the next five years to make these alternate dimensions possible.

(4) 5G mobile devices reduce latency to imperceptible levels

Verizon has already launched 5G networks in Minneapolis and Chicago, compatible with the Moto Z3. Sprint plans to follow with its own 5G launch in May. Samsung, LG, Huawei, and ZTE have all announced upcoming 5G devices.

“5G is rolling out this year and it’s going to materially affect particularly my work, which is making you feel like you’re talking to somebody else directly face to face,” explains Rosedale. “5G is critical because currently the cell devices impose too much delay, so it doesn’t feel real to talk to somebody face to face on these devices.”

To operate seamlessly from anywhere on the planet, standalone VR/AR devices will require a strong 5G network. Enhancing real-time connectivity in VR/AR will transform the communication methods of tomorrow.

(5) Eye-tracking and facial expressions built in for full natural communication

Companies like Pupil Labs and Tobii provide eye tracking hardware add-ons and software to VR/AR headsets. This technology allows for foveated rendering, which renders a given scene in high resolution only in the fovea region, while the peripheral regions appear in lower resolution, conserving processing power.
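The idea behind foveated rendering can be sketched as a simple mapping from gaze eccentricity to render resolution. The zone boundaries and scale factors below are illustrative only, not taken from any particular headset or SDK:

```python
# Toy sketch of foveated rendering: full resolution near the tracked
# gaze point, progressively coarser resolution farther out.
# Thresholds and scales are ILLUSTRATIVE assumptions.
def render_scale(eccentricity_deg: float) -> float:
    """Fraction of full resolution to render at a given angular
    distance (in degrees) from the gaze point."""
    if eccentricity_deg <= 5:    # fovea: full detail
        return 1.0
    if eccentricity_deg <= 20:   # near periphery: half resolution
        return 0.5
    return 0.25                  # far periphery: quarter resolution

# Most of the eye buffer falls in the periphery, so most pixels are
# shaded at reduced resolution, conserving processing power.
print(render_scale(2), render_scale(12), render_scale(45))  # 1.0 0.5 0.25
```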

As seen in the HoloLens 2, eye tracking can also be used to identify users and customize lens widths to provide a comfortable, personalized experience for each individual.

According to Rosedale, “The fundamental opportunity for both VR and AR is to improve human communication.” He points out that current VR/AR headsets miss many of the subtle yet important aspects of communication. Eye movements and microexpressions provide valuable insight into a user’s emotions and desires.

Coupled with emotion-detecting AI software, such as Affectiva, VR/AR devices might soon convey much more richly textured and expressive interactions between any two people, transcending physical boundaries and even language gaps.

Final Thoughts
As these promising trends begin to transform the market, VR/AR will undoubtedly revolutionize our lives… possibly to the point at which our virtual worlds become just as consequential and enriching as our physical world.

A boon for next-gen education, VR/AR will empower youth and adults alike with holistic learning that incorporates social, emotional, and creative components through visceral experiences, storytelling, and simulation. Traveling to another time, manipulating the insides of a cell, or even designing a new city will become daily phenomena of tomorrow’s classrooms.

In real estate, buyers will increasingly make decisions through virtual tours. Corporate offices might evolve into spaces that only exist in ‘mirror worlds’ or grow virtual duplicates for remote workers.

In healthcare, accuracy of diagnosis will skyrocket, while surgeons gain access to digital aids as they conduct life-saving procedures. Or take manufacturing, wherein training and assembly will become exponentially more efficient as visual cues guide complex tasks.

In a mere decade, VR and AR will unlock limitless applications for new and converging industries. And as virtual worlds converge with AI, 3D printing, computing advancements, and beyond, today’s experience economies will explode in scale and scope. Prepare yourself for the exciting disruption ahead!

Image Credit: Mariia Korneeva / Shutterstock.com

Posted in Human Robots

#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience would positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.
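The memory-replay parallel can be made concrete with a sketch of the experience replay buffer used in deep reinforcement learning: transitions are stored as they happen, then replayed in shuffled minibatches during training, loosely analogous to out-of-order memory replay during sleep. All names below are illustrative, not from any particular library.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past (state, action, reward, next_state) transitions and
    replays random minibatches, echoing how sleep replay is thought to
    strengthen memories out of order. Illustrative sketch only."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fade out

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random replay decorrelates consecutive experiences,
        # which stabilizes learning in deep RL.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for step in range(5):
    buf.add((f"state{step}", "action", 0.0, f"state{step + 1}"))
batch = buf.sample(3)
assert len(batch) == 3
```

The design choice mirrors the biology loosely at best: uniform shuffling is the simplest scheme, whereas the brain (and more advanced RL variants) appears to replay some experiences preferentially.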

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to the inferotemporal (IT) cortex, a nub located slightly behind our ears, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.

That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel Prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early ANN “feed-forward” architectures were based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists in building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institute of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then kept the 10 images that triggered the highest neural activity and combined them with 30 newly generated images to “evolve” the next generation. After 250 generations, the technique, dubbed XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
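The selection loop described above can be sketched as a bare-bones genetic algorithm. The stand-in “neuron” below scores short latent codes against a hidden preferred pattern; the real XDREAM instead scored generative-network images by recorded firing rates, so every specific here (the vector size, mutation scale, and response function) is an illustrative assumption.

```python
import random

def neuron_response(code, target=(0.7, -0.2, 0.4)):
    # Stand-in for a recorded neuron: "fires" most for codes near a
    # hidden preferred pattern. Illustrative, not real spike data.
    return -sum((c - t) ** 2 for c, t in zip(code, target))

def evolve(generations=250, pop_size=40, n_parents=10, sigma=0.1, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the 10 codes that drove the "neuron" hardest...
        population.sort(key=neuron_response, reverse=True)
        parents = population[:n_parents]
        # ...and breed 30 mutated offspring for the next generation.
        children = [
            [c + rng.gauss(0, sigma) for c in rng.choice(parents)]
            for _ in range(pop_size - n_parents)
        ]
        population = parents + children
    return max(population, key=neuron_response)

best = evolve()
assert neuron_response(best) > -0.05  # ends close to the hidden preferred code
```

The loop never needs to know what the neuron prefers; selection pressure alone pulls the population toward it, which is what let XDREAM surface stimuli no experimenter would have thought to draw.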

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

The results suggest that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” that we can’t readily describe, but that the brain uses to turn visual information into perception, the authors said. Because the AI-generated images worked on biological vision, the team concluded that today’s ANNs hold a degree of understanding and generalization. The results could potentially help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processes, as well as our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

Posted in Human Robots