Tag Archives: form

#434701 3 Practical Solutions to Offset ...

In recent years, the media has sounded the alarm about mass job loss to automation and robotics—some studies predict that up to 50 percent of current jobs or tasks could be automated in coming decades. While this topic has received significant attention, much of the press focuses on potential problems without proposing realistic solutions or considering new opportunities.

The economic impacts of AI, robotics, and automation are complex topics that require a more comprehensive perspective to understand. Is universal basic income, for example, the answer? Many believe so, and there are a number of experiments in progress. But it’s only one strategy, and without a sustainable funding source, universal basic income may not be practical.

As automation continues to accelerate, we’ll need a multi-pronged approach to ease the transition. In short, we need to update broad socioeconomic strategies for a new century of rapid progress. How, then, do we plan practical solutions to support these new strategies?

Take history as a rough guide to the future. Looking back, technology revolutions have three themes in common.

First, past revolutions each produced profound benefits to productivity, increasing human welfare. Second, technological innovation and technology diffusion have accelerated over time, each iteration placing more strain on the human ability to adapt. And third, machines have gradually replaced more elements of human work, with human societies adapting by moving into new forms of work—from agriculture to manufacturing to service, for example.

Public and private solutions, therefore, need to be developed to address each of these three components of change. Let’s explore some practical solutions for each in turn.

Figure 1. Technology’s structural impacts in the 21st century. Refer to Appendix I for quantitative charts and technological examples corresponding to the numbers (1-22) in each slice.
Solution 1: Capture New Opportunities Through Aggressive Investment
The rapid emergence of new technology promises a bounty of opportunity for the twenty-first century’s economic winners. This technological arms race is shaping up to be a global affair, and the winners will be determined in part by who is able to build the future economy fastest and most effectively. Both the private and public sectors have a role to play in stimulating growth.

At the country level, several nations have created competitive strategies to promote research and development investments as automation technologies become more mature.

Germany and China have two of the most notable growth strategies. Germany’s Industrie 4.0 plan targets a 50 percent increase in manufacturing productivity via digital initiatives, while halving the resources required. China’s Made in China 2025 national strategy sets ambitious targets and provides subsidies for domestic innovation and production. It also includes building new concept cities, investing in robotics capabilities, and subsidizing high-tech acquisitions abroad to become the leader in certain high-tech industries. For China, specifically, tech innovation is driven partially by a fear that technology will disrupt social structures and government control.

Such opportunities are not limited to existing economic powers. Estonia’s progress after the breakup of the Soviet Union is a good case study in transitioning to a digital economy. The nation rapidly implemented capitalistic reforms and transformed itself into a technology-centric economy in preparation for a massive tech disruption. Internet access was declared a right in 2000, and the country’s classrooms were outfitted for a digital economy, with coding as a core educational requirement starting at kindergarten. Internet broadband speeds in Estonia are among the fastest in the world. Accordingly, the World Bank now ranks Estonia as a high-income country.

Solution 2: Address Increased Rate of Change With More Nimble Education Systems
Education and training are currently not set up for the speed of change in the modern economy. Schools are still based on a one-time education model, in which schooling provides the foundation for a single lifelong career. With content becoming obsolete faster and costs rapidly escalating, this system may be unsustainable in the future. To help workers transition more smoothly from one job to another, we need to make education a more nimble, lifelong endeavor.

Primary and university education may still have a role in training foundational thinking and general education, but it will be necessary to curtail the rising price of tuition and increase accessibility. Massive open online courses (MOOCs) and open-enrollment platforms are early demonstrations of what the future of general education may look like: cheap, effective, and flexible.

Georgia Tech’s online Engineering Master’s program (a fraction of the cost of residential tuition) is an early example in making university education more broadly available. Similarly, nanodegrees or microcredentials provided by online education platforms such as Udacity and Coursera can be used for mid-career adjustments at low cost. AI itself may be deployed to supplement the learning process, with applications such as AI-enhanced tutorials or personalized content recommendations backed by machine learning. Recent developments in neuroscience research could optimize this experience by perfectly tailoring content and delivery to the learner’s brain to maximize retention.

Finally, companies looking for more customized skills may take a larger role in education, providing on-the-job training for specific capabilities. One potential model involves partnering with community colleges to create apprenticeship-style learning, where students work part-time in parallel with their education. Siemens has pioneered such a model in four states and is developing a playbook for other companies to do the same.

Solution 3: Enhance Social Safety Nets to Smooth Automation Impacts
If predicted job losses to automation come to fruition, modernizing existing social safety nets will increasingly become a priority. While the issue of safety nets can become quickly politicized, it is worth noting that each prior technological revolution has come with corresponding changes to the social contract (see below).

The evolving social contract (U.S. examples)
– 1842 | Right to strike
– 1924 | Abolish child labor
– 1935 | Right to unionize
– 1938 | 40-hour work week
– 1962, 1974 | Trade adjustment assistance
– 1964 | Pay discrimination prohibited
– 1970 | Health and safety laws
– 21st century | AI and automation adjustment assistance?

Figure 2. Labor laws have historically adjusted as technology and society progressed

Solutions like universal basic income (a no-strings-attached monthly payout to all citizens) are appealing in concept, but difficult to implement as a first measure in countries such as the US or Japan that already carry high debt. Additionally, universal basic income may create disincentives to stay in the labor force. A similar cautionary tale in program design is Trade Adjustment Assistance (TAA), which was designed to protect industries and workers from import competition shocks from globalization, but is viewed as a missed opportunity due to insufficient coverage.

A near-term solution could come in the form of graduated wage insurance (compensation for those forced to take a lower-paying job), including health insurance subsidies for individuals directly impacted by automation, with incentives to return to the workforce quickly. Another topic to tackle is the geographic mismatch between workers and jobs, which can be addressed by mobility assistance. Lastly, a training stipend can be issued to individuals as a means to upskill.
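The graduated wage-insurance idea can be illustrated with a toy calculation (the 50 percent replacement rate and $10,000 cap below are hypothetical parameters, chosen only for illustration):

```python
def wage_insurance_payout(old_wage, new_wage, replacement_rate=0.5,
                          annual_cap=10_000.0):
    """Pay a fraction of the wage gap when a displaced worker takes a
    lower-paying job, up to a yearly cap. Staying unemployed pays nothing,
    preserving the incentive to return to work quickly."""
    gap = max(0.0, old_wage - new_wage)
    return min(gap * replacement_rate, annual_cap)

# A worker displaced from a $50,000 job who takes a $38,000 job
# receives half of the $12,000 gap: $6,000 for the year.
payout = wage_insurance_payout(50_000, 38_000)
```

The design choice worth noting is that the payout is conditional on re-employment, which distinguishes wage insurance from unemployment benefits or a universal basic income.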

Policymakers can intervene to reverse recent historical trends that have shifted income from labor to capital owners. The balance could be shifted back to labor by placing higher taxes on capital. One example is the recently proposed "robot tax," in which the tax falls on the work itself rather than on the individual executing it. That is, if a self-driving car performs a task formerly done by a human, the rideshare company would still pay the tax as if a human were driving.
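The robot-tax idea reduces to a simple rule: levy the tax on the task, so the bill is identical whether a human or a machine performs it. A toy sketch (the 15 percent rate and wage figures are hypothetical):

```python
def task_tax(hours_worked, hourly_wage_equivalent, tax_rate=0.15):
    """Tax the work itself, independent of who (or what) performs it."""
    return hours_worked * hourly_wage_equivalent * tax_rate

# 100 hours of driving at a $20/hour wage equivalent owes the same
# tax whether the car is human-driven or self-driving.
human_driven = task_tax(100, 20.0)
self_driving = task_tax(100, 20.0)
assert human_driven == self_driving == 300.0
```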

Other solutions may involve the distribution of work. Some countries, such as France and Sweden, have experimented with redistributing working hours. The idea is to cap weekly hours, with the goal of employing more people and spreading work more evenly. So far these programs have had mixed results, with lower unemployment but high costs to taxpayers, but they remain potential models for continued testing.

We cannot stop growth, nor should we. As roles shift in response to this evolution, so should the social contract between stakeholders. Government will continue to play a critical role as a stabilizing "thumb" in the invisible hand of capitalism, regulating and cushioning against extreme volatility, particularly in labor markets.

However, we already see business leaders taking on some of the role traditionally played by government—thinking about measures to remedy risks of climate change or economic proposals to combat unemployment—in part because of greater agility in adapting to change. Cross-disciplinary collaboration and creative solutions from all parties will be critical in crafting the future economy.

Note: The full paper this article is based on is available here.

Image Credit: Dmitry Kalinovsky / Shutterstock.com

Posted in Human Robots

#434616 What Games Are Humans Still Better at ...

Artificial intelligence (AI) systems’ rapid advances are continually crossing items off the list of things humans do better than our computer compatriots.

AI has bested us at board games like chess and Go, and set astronomically high scores in classic computer games like Ms. Pac-Man. More complex games form part of AI’s next frontier.

While a team of AI bots developed by OpenAI, known as the OpenAI Five, ultimately lost to a team of professional players last year, they have since been running rampant against human opponents in Dota 2. Not to be outdone, Google’s DeepMind AI recently took on—and beat—several professional players at StarCraft II.

These victories raise the questions: what games are humans still better at than AI? And for how long?

The Making Of AlphaStar
DeepMind’s results provide a good starting point in a search for answers. The version of its AI for StarCraft II, dubbed AlphaStar, learned to play the game through supervised learning and reinforcement learning.

First, AI agents were trained by analyzing and copying human players, learning basic strategies. The initial agents then played each other in a sort of virtual death match where the strongest agents stayed on. New iterations of the agents were developed and entered the competition. Over time, the agents became better and better at the game, learning new strategies and tactics along the way.

One of the advantages of AI is that it can go through this kind of process at superspeed and quickly develop better agents. DeepMind researchers estimate that the AlphaStar agents went through the equivalent of roughly 200 years of game time in about 14 days.
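The league-style process described above can be sketched in miniature. In this toy version, each neural-network agent is replaced by a single "skill" number and random perturbation stands in for reinforcement learning updates; it illustrates the self-play selection loop only, not DeepMind’s actual method:

```python
import random

def play_match(skill_a, skill_b):
    """Toy match: higher-skill agents win more often."""
    return random.random() < skill_a / (skill_a + skill_b)

def league_training(pop_size=8, generations=50):
    """Round-robin self-play: keep the stronger half each generation,
    refill with slightly perturbed copies of the survivors."""
    # Start from "supervised" agents of modest, similar skill.
    population = [1.0 + random.random() * 0.1 for _ in range(pop_size)]
    for _ in range(generations):
        wins = [0] * pop_size
        for i in range(pop_size):
            for j in range(i + 1, pop_size):
                if play_match(population[i], population[j]):
                    wins[i] += 1
                else:
                    wins[j] += 1
        ranked = sorted(range(pop_size), key=lambda k: wins[k], reverse=True)
        survivors = [population[k] for k in ranked[: pop_size // 2]]
        # New iterations of the agents enter the competition.
        population = survivors + [s * (1 + random.random() * 0.05)
                                  for s in survivors]
    return max(population)
```

Because the loop has no wall-clock constraint, many generations can run in seconds; that speed is the property that let AlphaStar compress the equivalent of roughly 200 years of play into about two weeks.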

Cheating or One Hand Behind the Back?
The AlphaStar AI agents faced off against human professional players in a series of games streamed on YouTube and Twitch. The AIs trounced their human opponents, winning ten games on the trot, before pro player Grzegorz “MaNa” Komincz managed to salvage some pride for humanity by winning the final game. Experts commenting on AlphaStar’s performance used words like “phenomenal” and “superhuman”—which was, to a degree, where things got a bit problematic.

AlphaStar proved particularly skilled at controlling and directing units in battle, known as micromanagement. One reason was that it viewed the whole game map at once, something a human player is not able to do, which made it seemingly able to control units in different areas at the same time. DeepMind researchers said the AIs only focused on a single part of the map at any given time, but interestingly, AlphaStar’s AI agent was limited to a more restricted camera view during the match "MaNa" won.

Potentially offsetting some of this advantage was the fact that AlphaStar was also restricted in certain ways. For example, it was prevented from performing more clicks per minute than a human player would be able to.
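A restriction like the clicks-per-minute cap can be sketched as a simple rate limiter (the 300-actions-per-minute figure below is an illustrative assumption, not DeepMind’s actual limit):

```python
class ActionRateLimiter:
    """Cap an agent's actions per minute by enforcing a minimum
    interval between consecutive actions."""

    def __init__(self, max_apm):
        self.min_interval = 60.0 / max_apm  # seconds between actions
        self.next_allowed = 0.0

    def try_act(self, now):
        """Return True if an action is permitted at time `now` (seconds);
        actions arriving too soon are simply dropped."""
        if now >= self.next_allowed:
            self.next_allowed = now + self.min_interval
            return True
        return False

# At 300 APM, actions closer together than 0.2 seconds are rejected.
limiter = ActionRateLimiter(300)
```

A cap like this forces the agent to compete on decision quality rather than inhuman mechanical speed.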

Where AIs Struggle
Games like StarCraft II and Dota 2 throw a lot of challenges at AIs: complex game theory and strategy, operating with imperfect or incomplete information, multi-variable and long-term planning, real-time decision-making, navigating a large action space, and a multitude of possible decisions at every point in time are just the tip of the iceberg. The AIs’ performance in both games was impressive, but also highlighted some of the areas where they could be said to struggle.

In Dota 2 and StarCraft II, AI bots have seemed more vulnerable in longer games, or when confronted with surprising, unfamiliar strategies. They seem to struggle with complexity over time and with improvising or adapting to quick changes. This could be tied to how AIs learn. Even within the first few hours of performing a task, humans tend to gain a sense of familiarity and skill that takes an AI much longer to acquire. We are also better at transferring skill from one area to another: experience playing Dota 2 can help us become good at StarCraft II relatively quickly. This is not the case for AI, at least not yet.

Dwindling Superiority
While the battle between AI and humans for absolute superiority is still on in Dota 2 and StarCraft II, it looks likely that AI will soon reign supreme. Similar things are happening to other types of games.

In 2017, a team from Carnegie Mellon University pitted its Libratus AI against four professional poker players. After 20 days of No-Limit Texas Hold’em, Libratus was up by $1.7 million. Another likely candidate is the destroyer of family harmony at Christmas: Monopoly.

Poker involves bluffing, while Monopoly involves negotiation: skills you might not think AI would be particularly suited to handle. However, an AI experiment at Facebook showed that AI bots are more than capable of undertaking such tasks. The bots proved skilled negotiators, developing strategies like feigning interest in one item while actually wanting another: in other words, bluffing.

So, what games are we still better at than AI? There is no precise answer, but the list is getting shorter at a rapid pace.

The Aim Of the Game
While AI’s mastery of games might at first glance seem an odd area to focus research on, the belief is that the way AIs learn to master a game is transferable to other areas.

For example, the Libratus poker-playing AI employed strategies that could work in financial trading or political negotiations. The same applies to AlphaStar. As Oriol Vinyals, co-leader of the AlphaStar project, told The Verge:

“First and foremost, the mission at DeepMind is to build an artificial general intelligence. […] To do so, it’s important to benchmark how our agents perform on a wide variety of tasks.”

A 2017 survey of more than 350 AI researchers predicts AI could be a better driver than humans within ten years. By the middle of the century, the same survey suggests, AI will be able to write a best-selling novel, and a few years later it will be better than humans at surgery. By the year 2060, AI may do everything better than us.

Whether you think this is a good or a bad thing, it’s worth noting that AI has an often overlooked ability to help us see things differently. When DeepMind’s AlphaGo beat human Go champion Lee Sedol, the Go community learned from it, too. Lee himself went on a win streak after the match with AlphaGo. The same is now happening within the Dota 2 and StarCraft II communities that are studying the human vs. AI games intensely.

More than anything, AI’s recent gaming triumphs illustrate how quickly artificial intelligence is developing. In 1997, Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study at Princeton and a Go enthusiast, told the New York Times that:

“It may be a hundred years before a computer beats humans at Go—maybe even longer.”

Image Credit: Roman Kosolapov / Shutterstock.com

Posted in Human Robots

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still crippled with poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger once meditated on while using the metaphor of a hammer, seem to disappear into the “hand.” They are designed to amplify a human ability and not get in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify the human output in accordance. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect and understand nonverbal prompts, which enables the device to read the user’s mind and act as an extension of the user’s cognition.

Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are proficient users of technology (you may recall the viral 2016 New York Times article, "Artificial Intelligence’s White Guy Problem"). If you ask this population whether there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate on what Clayton Christensen, in The Innovator’s Dilemma, called the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a much larger market from below.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

The workarounds created are often ingenious, precisely because they arise not from preference but from necessity, which forces disadvantaged users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort device captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue (the most sensitive touch receptor in the body). The user learns how to interpret the patterns felt on their tongue, and in doing so, becomes able to "see" with their tongue.
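The kind of camera-to-tongue mapping BrainPort performs can be sketched as a simple downsampling step (the 20×20 grid size and the plain block averaging here are illustrative assumptions, not Wicab’s actual signal processing):

```python
def frame_to_electrode_grid(frame, grid_h=20, grid_w=20):
    """Downsample a grayscale camera frame (a 2D list of pixel values
    0-255) into a coarse grid of stimulation intensities in [0, 1],
    one value per tongue electrode."""
    h, w = len(frame), len(frame[0])
    grid = []
    for gy in range(grid_h):
        row = []
        for gx in range(grid_w):
            # Average the block of pixels that feeds this electrode.
            y0, y1 = gy * h // grid_h, (gy + 1) * h // grid_h
            x0, x1 = gx * w // grid_w, (gx + 1) * w // grid_w
            block = [frame[y][x] for y in range(y0, y1)
                     for x in range(x0, x1)]
            row.append(sum(block) / len(block) / 255.0)
        grid.append(row)
    return grid
```

A bright object in the camera’s view becomes a patch of stronger stimulation on the tongue, which the user gradually learns to interpret.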

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com

Posted in Human Robots

#434246 How AR and VR Will Shape the Future of ...

How we work and play is about to transform.

After a prolonged technology "winter"—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual reality (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.

Unprecedented new applications in almost every industry are exploding onto the scene.

Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.

The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.

But virtual and augmented reality are taking the future workplace to an entirely new level.

Virtual Reality Case Study: eXp Realty

I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.

Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent.

Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.

With this centralized VR campus, eXp Realty has essentially thrown out overhead costs and entered a lucrative market without the same constraints of brick-and-mortar businesses.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.

But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:

Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.

Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have shown that it’s already cheaper to use ride-sharing services like Uber and Lyft than to own a car in many major cities. And once autonomous "Car-as-a-Service" platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will shift away from the individual, if not disappear entirely.

Augmented Reality: Annotate and Interact with Your Workplace

As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.

Enter a professional world electrified by augmented reality.

Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.

What if we could link that information to specific locations, people, time frames, and even moving objects?

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.

Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.

The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.

Figure 3. Average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com data
Technology continues to change this, taking care of us and doing our work for us. One phrase that describes this is "technological socialism," where it’s technology, not the government, that takes care of us.

Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.

As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?

As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.

Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.

Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.

For reference, Minecraft’s explorable world covers roughly eight times the surface area of planet Earth. And in their free time, my kids would rather build in Minecraft than do almost anything else. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.
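The “eight times Earth” comparison can be checked with a quick back-of-envelope calculation. A minimal sketch, assuming (this figure is not from the article) that Minecraft’s playable map spans roughly 64,000 km per side:

```python
# Back-of-envelope check of the "eight times Earth" claim.
# Assumption: Minecraft's world border sits roughly 30,000 km from the
# origin in each direction, giving a map on the order of 64,000 km per side.

minecraft_side_km = 64_000                    # approximate width of the playable map
minecraft_area_km2 = minecraft_side_km ** 2   # flat map area
earth_area_km2 = 5.1e8                        # Earth's total surface area, ~510M km^2

ratio = minecraft_area_km2 / earth_area_km2
print(f"Minecraft's map is ~{ratio:.1f}x Earth's surface area")  # ~8.0x
```

The exact multiple depends on which world-border figure you assume, but any reasonable estimate lands in the same ballpark.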

And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.

But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, or grew up reading comic books and watching television, we’ve all played in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.

In the past, however, your ability to edit these worlds was limited, especially if a given story came in some form of 2D media. I couldn’t change where Tom Sawyer was going or what Iron Man was doing. But as new software advances underlying VR and AR allow us to interact with characters and gain agency (albeit limited, for now), both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.

Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.

And while Lucy can merely hand virtual cameras to her viewers among other limited tasks, Fable Studio’s founder Edward Saatchi sees this project as just the beginning.

Imagine a virtual character, in augmented or virtual reality, equipped with AI capabilities that allow it not only to participate in a fictional storyline but to interact and converse directly with you across a host of virtual and digitally overlaid environments.

Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.

Already, we’ve seen characters like that of Pokémon brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could bring on a new era of in-home entertainment.

Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.

Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com

Posted in Human Robots

#434194 Educating the Wise Cyborgs of the Future

When we think of wisdom, we often think of ancient philosophers, mystics, or spiritual leaders. Wisdom is associated with the past. Yet some intellectual leaders are challenging us to reconsider wisdom in the context of the technological evolution of the future.

With the rise of exponential technologies like virtual reality, big data, artificial intelligence, and robotics, people are gaining access to increasingly powerful tools. These tools are neither malevolent nor benevolent on their own; human values and decision-making influence how they are used.

In future-themed discussions we often focus on technological progress far more than on intellectual and moral advancements. In reality, the virtuous insights that future humans possess will be even more powerful than their technological tools.

Tom Lombardo and Ray Todd Blackwood are advocating for exactly this. In their interdisciplinary paper “Educating the Wise Cyborg of the Future,” they propose a new definition of wisdom—one that is relevant in the context of the future of humanity.

We Are Already Cyborgs
The core purpose of Lombardo and Blackwood’s paper is to explore revolutionary educational models that will prepare humans, soon-to-be-cyborgs, for the future. The idea of educating such “cyborgs” may sound like science fiction, but if you pay attention to yourself and the world around you, you’ll realize that cyborgs came into being a long time ago.

Techno-philosophers like Jason Silva point out that our tech devices are an abstract form of brain-machine interfaces. We use smartphones to store and retrieve information, perform calculations, and communicate with each other. Our devices are an extension of our minds.

According to philosophers Andy Clark and David Chalmers’ theory of the extended mind, we use this technology to expand the boundaries of our minds. We use tools like machine learning to enhance our cognitive skills, or powerful telescopes to extend our visual reach. In this way, technology has become part of our exoskeleton, allowing us to push beyond our biological limitations.

In other words, you are already a cyborg. You have been all along.

Such an abstract definition of cyborgs is both relevant and thought-provoking. But it won’t stay abstract for much longer. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret the neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate purely through brainwaves. Technologists like Ray Kurzweil believe that by 2030 we will connect the neocortex of our brains to the cloud via nanobots.

Given these trends, humans will become increasingly cyborg-like. Our future schools may not be educating people as we know them today, but rather a new species of human-machine hybrid.

Wisdom-Based Education
Whether you take an abstract or literal definition of a cyborg, we need to completely revamp our educational models. Even if you don’t buy into the scenario where humans integrate powerful brain-machine interfaces into our minds, there is still a desperate need for wisdom-based education to equip current generations to tackle 21st-century issues.

With their emphasis on isolated subjects, standardized assessments, and content knowledge, our current educational models were designed for the industrial era to create masses of efficient factory workers, not to empower critical thinkers, innovators, or wise cyborgs.

Currently, the goal of higher education is to provide students with the degree that society tells them they need, and ostensibly to prepare them for the workforce. In contrast, Lombardo and Blackwood argue that wisdom should be the central goal of higher education, and they elaborate on how we can practically make this happen. Lombardo has developed a comprehensive two-year foundational education program for incoming university students aimed at the development of wisdom.

What does such an educational model look like? Lombardo and Blackwood break wisdom down into individual traits and capacities, each of which can be developed and measured independently or in combination with others. The authors lay out an expansive list of traits that can influence our decision-making as we strive to tackle global challenges and pave a more exciting future. These include big-picture thinking, curiosity, wonder, compassion, self-transcendence, love of learning, optimism, and courage.

As the authors point out, “given the complex and transforming nature of the world we live in, the development of wisdom provides a holistic, perspicacious, and ethically informed foundation for understanding the world, identifying its critical problems and positive opportunities, and constructively addressing its challenges.”

After all, many of the challenges we see in our world today boil down to outdated ways of thinking, be they regressive attitudes, superficial value systems, or egocentric mindsets. The development of wisdom would immunize future societies against such debilitating values; imagine what our world would be like if wisdom were ingrained in all leaders and participating members of society.

The Wise Cyborg
Lombardo and Blackwood invite us to imagine how the wise cyborgs of the future would live their lives. What would happen if the powerful human-machine hybrids of tomorrow were also purpose-driven, compassionate, and ethical?

They would perceive the evolving digital world through a lens of wonder, awe, and curiosity. They would use digital information as a tool for problem-solving and a source of infinite knowledge. They would leverage immersive mediums like virtual reality to enhance creative expression and experimentation. They would continue to adapt and thrive in an unpredictable world of accelerating change.

Our media often depict a dystopian future for our species. It is worth considering a radically positive yet plausible scenario where instead of the machines taking over, we converge with them into wise cyborgs. This is just a glimpse of what is possible if we combine transcendent wisdom with powerful exponential technologies.

Image Credit: Peshkova / Shutterstock.com