Tag Archives: interaction

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still hampered by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger observed in his famous meditation on the hammer, seem to “disappear” into the hand. They are designed to amplify a human ability without getting in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify the human output in accordance. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect the subtle neuromuscular signals produced when a user internally verbalizes words, allowing the device to respond to unspoken prompts and act as an extension of the user’s cognition.

Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are highly proficient with technology (you may recall the viral 2016 New York Times article, “Artificial Intelligence’s White Guy Problem”). If you ask this population whether there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate on what Clayton Christensen, in The Innovator’s Dilemma, called the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a much larger market from the low end.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

These workarounds are often ingenious precisely because they arise not from preference but from necessity, which forces underserved users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue—the most sensitive touch receptor in the body. The user learns to interpret the patterns felt on their tongue, and in doing so, becomes able to “see” with their tongue.

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We cannot design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, nor will we achieve radical inclusion in technology without considering cognitive ergonomics.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com

Posted in Human Robots

#434246 How AR and VR Will Shape the Future of ...

How we work and play is about to transform.

After a prolonged technology “winter”—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.

Unprecedented new applications in almost every industry are exploding onto the scene.

Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.

The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.

But virtual and augmented reality are taking the future workplace to an entirely new level.

Virtual Reality Case Study: eXp Realty

I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.

Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent.

Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.

With this centralized VR campus, eXp Realty has essentially thrown out overhead costs and entered a lucrative market without the same constraints of brick-and-mortar businesses.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about the hours guzzled up in traffic.

As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.

But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:

Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.

Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have revealed that it’s already cheaper to use ride-sharing services like Uber and Lyft than to own a car in many major cities. And once autonomous “Car-as-a-Service” platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will shift away from the individual, if not disappear entirely.
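The arithmetic behind that comparison is easy to sketch. The figures below are purely hypothetical placeholders (not drawn from the analysts’ reports mentioned above), just to show the shape of the calculation:

```python
# Hypothetical annual costs (illustrative only, not real market data):
# owning a car in a big city vs. relying entirely on ride-sharing.
ownership = {
    "car payment": 4_800,
    "insurance": 1_800,
    "fuel": 1_500,
    "parking": 2_400,
    "maintenance": 1_200,
}

rides_per_week = 14      # assume two trips per day
cost_per_ride = 12       # assumed average fare
ride_share = rides_per_week * 52 * cost_per_ride

print(f"owning:       ${sum(ownership.values()):,}/yr")   # $11,700/yr
print(f"ride-sharing: ${ride_share:,}/yr")                # $8,736/yr
```

Under these assumed numbers, heavy ride-share use still undercuts ownership; the conclusion strengthens or flips as the assumptions change, which is exactly why keeping the model explicit is useful.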

Augmented Reality: Annotate and Interact with Your Workplace

As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.

Enter a professional world electrified by augmented reality.

Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.

What if we could link that information to specific locations, people, time frames, and even moving objects?

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine arriving at your building’s concierge desk as your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.

Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.

The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.

[Chart: average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com]
Technology continues to drive this change, increasingly taking care of us and doing our work for us. One phrase that describes this is “technological socialism,” where it’s technology, not the government, that takes care of us.

Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.

As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?

As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.

Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.

Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.

For reference, Minecraft is over eight times the size of planet Earth. And in their free time, my kids would rather build in Minecraft than almost any other activity. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.

And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.

But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, grew up reading comic books or watching television, we’ve all been playing in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.

In the past, however, your ability to edit was limited, especially if a given story came in some form of 2D media. I couldn’t edit where Tom Sawyer was going or change what Iron Man was doing. But as a slew of new software advancements underlying VR and AR allows us to interact with characters and gain agency (albeit limited, for now), both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.

Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.

And while Lucy can merely hand virtual cameras to her viewers among other limited tasks, Fable Studio’s founder Edward Saatchi sees this project as just the beginning.

Imagine a virtual character—in augmented or virtual reality—equipped with AI capabilities, one that can not only participate in a fictional storyline but also interact and converse directly with you in a host of virtual and digitally overlaid environments.

Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.

Already, we’ve seen characters like that of Pokémon brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could bring on a new era of in-home entertainment.

Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.

Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level.

Image Credit: nmedia / Shutterstock.com


#434151 Life-or-Death Algorithms: The Black Box ...

When it comes to applications for machine learning, few fields are more widely hyped than medicine. This is hardly surprising: it’s a huge industry that generates a phenomenal amount of data and revenue, where technological advances can improve or save the lives of millions of people. Hardly a week passes without a study suggesting algorithms will soon be better than experts at detecting pneumonia or Alzheimer’s, or at diagnosing diseases in complex organs ranging from the eye to the heart.

The problems of overcrowded hospitals and overworked medical staff plague public healthcare systems like Britain’s NHS and lead to rising costs for private healthcare systems. Here, again, algorithms offer a tantalizing solution. How many of those doctor’s visits really need to happen? How many could be replaced by an interaction with an intelligent chatbot—especially if it can be combined with portable diagnostic tests, utilizing the latest in biotechnology? That way, unnecessary visits could be reduced, and patients could be diagnosed and referred to specialists more quickly without waiting for an initial consultation.

As ever with artificial intelligence algorithms, the aim is not to replace doctors, but to give them tools to reduce the mundane or repetitive parts of the job. With an AI that can examine thousands of scans in a minute, the “dull drudgery” is left to machines, and the doctors are freed to concentrate on the parts of the job that require more complex, subtle, experience-based judgement of the best treatments and the needs of the patient.

High Stakes
But, as ever with AI algorithms, there are risks involved with relying on them—even for tasks that are considered mundane. The problems of black-box algorithms that make inexplicable decisions are bad enough when you’re trying to understand why that automated hiring chatbot was unimpressed by your job interview performance. In a healthcare context, where the decisions made could mean life or death, the consequences of algorithmic failure could be grave.

A new paper in Science Translational Medicine, by Nicholson Price, explores some of the promises and pitfalls of using these algorithms in the data-rich medical environment.

Neural networks excel at churning through vast quantities of training data and making connections, absorbing the underlying patterns or logic of the system in hidden layers of linear algebra, whether they’re detecting skin cancer from photographs or learning to write pseudo-Shakespearean script. They are terrible, however, at explaining the underlying logic behind the relationships they’ve found: often there is little more than a string of numbers, the statistical “weights” between the layers. They struggle to distinguish between correlation and causation.
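To make the “string of numbers” point concrete, here is a minimal sketch (a toy model trained on synthetic data, not any real medical system): even after a small network learns a task almost perfectly, inspecting it yields nothing but weight matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data: 2 "biomarker" features, 1 binary "diagnosis".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Tiny network: one hidden layer of 4 tanh units, sigmoid output.
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

for _ in range(500):  # plain batch gradient descent on cross-entropy
    h, p = forward(X)
    g_out = (p - y) / len(X)             # gradient at the output logit
    g_hid = g_out @ W2.T * (1 - h**2)    # backprop through tanh
    W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(0)
    W1 -= 0.5 * X.T @ g_hid; b1 -= 0.5 * g_hid.sum(0)

_, p = forward(X)
accuracy = float(np.mean((p > 0.5) == y))
print("accuracy:", accuracy)  # high -- the model clearly "works"
print(W1)  # ...but its only "explanation" is a 2x4 grid of floats
```

Asking this model *why* it flagged a patient gets you `W1` and `W2`: statistical weights with no clinical meaning attached.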

This raises interesting dilemmas for healthcare providers. The dream of big data in medicine is to feed a neural network on “huge troves of health data, finding complex, implicit relationships and making individualized assessments for patients.” What if, inevitably, such an algorithm proves to be unreasonably effective at diagnosing a medical condition or prescribing a treatment, but you have no scientific understanding of how this link actually works?

Too Many Threads to Unravel?
The statistical models that underlie such neural networks often assume that variables are independent of each other, but in a complex, interacting system like the human body, this is not always the case.
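A small simulation shows why that assumption misleads. The two “biomarkers” below are hypothetical, but like many variables in the body they are strongly correlated, and a model that multiplies their marginal probabilities badly underestimates how often both are elevated at once:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two hypothetical biomarkers with correlation ~0.9.
a = rng.normal(size=n)
b = 0.9 * a + np.sqrt(1 - 0.9**2) * rng.normal(size=n)

# How often are BOTH elevated (above 1 standard deviation)?
both = np.mean((a > 1) & (b > 1))               # true joint frequency
independent = np.mean(a > 1) * np.mean(b > 1)   # independence estimate

print(f"true P(both elevated):        {both:.4f}")
print(f"independence-assumption est.: {independent:.4f}")
```

The independence estimate comes out several times too small, because elevated `a` makes elevated `b` far more likely; in an interacting system like the body, this kind of dependence is the rule, not the exception.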

In some ways, this is a familiar concept in medical science—there are many phenomena and links which have been observed for decades but are still poorly understood on a biological level. Paracetamol is one of the most commonly-prescribed painkillers, but there’s still robust debate about how it actually works. Medical practitioners may be keen to deploy whatever tool is most effective, regardless of whether it’s based on a deeper scientific understanding. Fans of the Copenhagen interpretation of quantum mechanics might spin this as “Shut up and medicate!”

But as in that field, there’s a debate to be had about whether this approach risks losing sight of a deeper understanding that will ultimately prove more fruitful—for example, for drug discovery.

Away from the philosophical weeds, there are more practical problems: if you don’t understand how a black-box medical algorithm is operating, how should you approach the issues of clinical trials and regulation?

Price points out that, in the US, the 21st Century Cures Act allows the FDA to regulate any algorithm that analyzes images or that doesn’t allow a provider to review the basis for its conclusions: this could completely exclude “black-box” algorithms of the kind described above from use.

Transparency about how the algorithm functions—the data it looks at, and the thresholds for drawing conclusions or providing medical advice—may be required, but could also conflict with the profit motive and the desire for secrecy in healthcare startups.

One solution might be to screen algorithms that can’t explain themselves, or don’t rely on well-understood medical science, from use before they enter the healthcare market. But this could prevent people from reaping the benefits that they can provide.

Evaluating Algorithms
New healthcare algorithms will be unable to do what physicists did with quantum mechanics and point to a track record of success, because they will not yet have been deployed in the field. And, as Price notes, many algorithms will improve the longer they’re deployed, harvesting and learning from real-world performance data. So how can we choose between the most promising approaches?

Creating a standardized clinical trial and validation system that’s equally valid across algorithms that function in different ways, or use different input or training data, will be a difficult task. Clinical trials that rely on small sample sizes, such as for algorithms that attempt to personalize treatment to individuals, will also prove difficult. With a small sample size and little scientific understanding, it’s hard to tell whether the algorithm succeeded or failed because it’s bad at its job or by chance.
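That chance factor is easy to quantify with a binomial calculation. Sketching it: how often would an algorithm with no skill at all, flipping a coin for each patient, still score 80 percent or better?

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): exact tail sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# In a 10-patient trial, a skill-free coin-flip "classifier" hits
# 80%+ accuracy more than 5% of the time, purely by chance.
print(f"n=10:   {p_at_least(8, 10):.4f}")

# In a 1,000-patient trial, that essentially never happens.
print(f"n=1000: {p_at_least(800, 1000):.2e}")
```

With small samples, an impressive accuracy number simply cannot distinguish a good algorithm from a lucky one; only a larger trial drives the luck probability toward zero.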

Add learning into the mix and the picture gets more complex. “Perhaps more importantly, to the extent that an ideal black-box algorithm is plastic and frequently updated, the clinical trial validation model breaks down further, because the model depends on a static product subject to stable validation.” As Price describes, the current system for testing and validation of medical products needs some adaptation to deal with this new software before it can successfully test and validate the new algorithms.

Striking a Balance
The story in healthcare reflects the AI story in so many other fields, and the complexities involved perhaps illustrate why even an illustrious company like IBM appears to be struggling to turn its famed Watson AI into a viable product in the healthcare space.

A balance must be struck in our rush to exploit big data and the eerie power of neural networks to automate thinking. We must be aware of the biases and flaws of this approach to problem-solving and realize that it is no panacea.

But we also need to embrace these technologies where they can be a useful complement to the skills, insights, and deeper understanding that humans can provide. Much like a neural network, our industries need to train themselves to enhance this cooperation in the future.

Image Credit: Connect world / Shutterstock.com


#433939 The Promise—and Complications—of ...

Every year, for just a few days in a major city, small teams of roboticists get to live the dream: ordering around their own personal robot butlers. In carefully constructed replicas of a restaurant scene or a domestic setting, these robots perform any number of simple algorithmic tasks: “Get the can of beans from the shelf. Greet the visitors to the museum. Help the humans with their shopping. Serve the customers at the restaurant.”

This is RoboCup@Home, the annual tournament where teams of roboticists put their autonomous service robots to the test in practical domestic applications. The tasks seem simple and mundane, but a look at the technology required reveals that they’re really not.

The Robot Butler Contest
Say you want a robot to fetch items in the supermarket. In a crowded, noisy environment, the robot must understand your commands, ask for clarification, and map out and navigate an unfamiliar environment, avoiding obstacles and people as it does so. Then it must recognize the product you requested, perhaps in a cluttered environment, perhaps in an unfamiliar orientation. It has to grasp that product appropriately—recall that there are entire multi-million-dollar competitions just dedicated to developing robots that can grasp a range of objects—and then return it to you.

It’s a job so simple that a child could do it—and so complex that teams of smart roboticists can spend weeks programming and engineering, and still end up struggling to complete simplified versions of this task. Of course, the child has the advantage of millions of years of evolutionary research and development, while the first robots that could even begin these tasks were only developed in the 1970s.

Even bearing this in mind, RoboCup@Home can feel like a place where futurist expectations come crashing into technologist reality. You dream of a smooth-voiced, sardonic JARVIS who’s already made your favorite dinner when you come home late from work; you end up shouting “remember the biscuits” at a baffled, ungainly droid in aisle five.

Caring for the Elderly
Famously, Japan is one of the most robot-enthusiastic nations in the world; this is the country that stunned us all with ASIMO in 2000, and several studies have examined the phenomenon. It’s no surprise, then, that humanoid robotics is being seriously considered as a solution to the crisis of the aging population. The Japanese government, as part of its national robot strategy, has already invested $44 million in their development.

Toyota’s Human Support Robot (HSR-2) is a simple but programmable robot with a single arm; it can be remote-controlled to pick up objects and can monitor patients. The HSR-2 has become the default robot for RoboCup@Home tournaments, at least in tasks that involve manipulating objects.

Alongside this, Toyota is working on exoskeletons to assist people in walking after strokes. It may surprise you to learn that nurses suffer back injuries more than any other occupation, at roughly three times the rate of construction workers, due to the day-to-day work of lifting patients. Toyota has a Care Assist robot/exoskeleton designed to fix precisely this problem by helping care workers with the heavy lifting.

The Home of the Future
The enthusiasm for domestic robotics is easy to understand and, in fact, many startups already sell robots marketed as domestic helpers in some form or another. In general, though, they skirt the immensely complicated task of building a fully capable humanoid robot—a task that even Google’s skunk-works department gave up on, at least until recently.

It’s plain to see why: far more research and development is needed before these domestic robots could be used reliably and at a reasonable price. Consumers with expectations inflated by years of science fiction saturation might find themselves frustrated as the robots fail to perform basic tasks.

Instead, domestic robotics efforts fall into one of two categories. There are robots specialized to perform a domestic task, like iRobot’s Roomba, which stuck to vacuuming and became the most successful domestic robot of all time by far.

The tasks need not necessarily be simple, either: the impressive but expensive automated kitchen uses the world’s most dexterous hands to cook meals, provided it can recognize the ingredients. Other robots focus on human-robot interaction, like Jibo: they essentially package the abilities of a voice assistant like Siri, Cortana, or Alexa to respond to simple questions and perform online tasks in a friendly, dynamic robot exterior.

In this way, the future of domestic automation starts to look a lot more like smart homes than a robot or domestic servant. General robotics is difficult in the same way that general artificial intelligence is difficult; competing with humans, the great all-rounders, is a challenge. Getting superhuman performance at a more specific task, however, is feasible and won’t cost the earth.

Individual startups without the financial might of a Google or an Amazon can develop specialized robots, like Seven Dreamers’ laundry robot, and hope that one day it will form part of a network of autonomous robots that each have a role to play in the household.

Domestic Bliss?
The Smart Home has been a staple of futurist expectations for a long time, to the extent that movies featuring smart homes out of control are already a cliché. But critics of the smart home idea—and of the internet of things more generally—tend to focus on the idea that, more often than not, software just adds an additional layer of things that can break, in exchange for minimal added convenience. A toaster that can short-circuit is bad enough, but a toaster that can refuse to serve you toast because its firmware is updating is something else entirely.

That’s before you even get into the security vulnerabilities, which are all the more important when devices are installed in your home and capable of interacting with the people who live there. A smart watch that lets you keep an eye on your children might sound like something a security-conscious parent would like; a smart watch that can be hacked to track children, listen in on their surroundings, and even fool them into thinking a call is coming from their parents is the stuff of nightmares.

Key to many of these problems is the lack of standardization for security protocols, and even for the products themselves. The idea of dozens of startups each developing a highly specialized piece of robotics to perform a single domestic task sounds great in theory, until you realize the potential hazards and pitfalls of getting dozens of incompatible devices to work together on the same system.

It seems inevitable that there are yet more layers of domestic drudgery that can be automated away, decades after the first generation of time-saving domestic devices like the dishwasher and vacuum cleaner became mainstream. With projected market values into the billions and trillions of dollars, there is no shortage of industry interest in ironing out these kinks. But, for now at least, the answer to the question: “Where’s my robot butler?” is that it is gradually, painstakingly learning how to sort through groceries.

Image Credit: Nonchanon / Shutterstock.com

Posted in Human Robots

#433907 How the Spatial Web Will Fix What’s ...

Converging exponential technologies will transform media, advertising and the retail world. The world we see, through our digitally-enhanced eyes, will multiply and explode with intelligence, personalization, and brilliance.

This is the age of Web 3.0.

Last week, I discussed the what and how of Web 3.0 (also known as the Spatial Web), walking through its architecture and the converging technologies that enable it.

To recap, while Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens—a flat web of sensorily confined information.

During the next two to five years, the convergence of 5G, AI, a trillion sensors, and VR/AR will enable us to both map our physical world into virtual space and superimpose a digital layer onto our physical environments.

Web 3.0 is about to transform everything—from the way we learn and educate, to the way we trade (smart) assets, to our interactions with real and virtual versions of each other.

And while users grow rightly concerned about data privacy and misuse, the Spatial Web’s use of blockchain in its data and governance layer will secure and validate our online identities, protecting everything from your virtual assets to personal files.

In this second installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for a handful of industries:

News & Media Coverage
Smart Advertising
Personalized Retail

Let’s dive in.

Transforming Network News with Web 3.0
News media is big business. In 2016, global news media (including print) generated 168 billion USD in circulation and advertising revenue.

The news we listen to impacts our mindset. Listen to dystopian news on violence, disaster, and evil, and you’ll be more likely to search for a cave to hide in than for the technology to launch your next business.

Today, different news media present starkly different realities of everything from foreign conflict to domestic policy. And outcomes are consequential. What reporters and news corporations decide to show or omit of a given news story plays a tremendous role in shaping the beliefs and resulting values of entire populations and constituencies.

But what if we could have an objective benchmark for today’s news, whereby crowdsourced and sensor-collected evidence allows you to tour the site of journalistic coverage, determining for yourself the most salient aspects of a story?

Enter mesh networks, AI, public ledgers, and virtual reality.

While traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
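The relay logic behind such a mesh can be sketched in a few lines: each node forwards a message to its neighbors unless it has already seen it, so a broadcast reaches every connected device without any central access point. The `MeshNode` class and the four-phone chain topology below are illustrative inventions for this sketch, not any real mesh protocol:

```python
from collections import deque

class MeshNode:
    """A peer in an ad-hoc mesh: it relays any message it has not yet seen."""
    def __init__(self, name):
        self.name = name
        self.peers = []    # directly reachable neighbors
        self.seen = set()  # message IDs already relayed (prevents loops)

    def link(self, other):
        """Create a bidirectional radio link between two nearby devices."""
        self.peers.append(other)
        other.peers.append(self)

def flood(origin, msg_id, payload):
    """Deliver a payload to every reachable node via neighbor-to-neighbor relay."""
    delivered = []
    queue = deque([origin])
    origin.seen.add(msg_id)
    while queue:
        node = queue.popleft()
        delivered.append(node.name)          # node receives the payload here
        for peer in node.peers:
            if msg_id not in peer.seen:      # relay only to peers that haven't seen it
                peer.seen.add(msg_id)
                queue.append(peer)
    return delivered

# Four phones, no central access point: a -- b -- c -- d
a, b, c, d = (MeshNode(n) for n in "abcd")
a.link(b); b.link(c); c.link(d)
print(flood(a, "m1", "360-degree feed chunk"))  # → ['a', 'b', 'c', 'd']
```

Even though `a` and `d` are out of each other’s radio range, the message still arrives, because `b` and `c` relay it; that hop-by-hop forwarding is the non-hierarchical sharing described above.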

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which protests break out across the country, each cluster of activists broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram of the march in real time. Want to see and hear what the NYC-based crowds are advocating for? Throw on some VR goggles and explore the event with full access. Or cue into the southern Texan border to assess for yourself the handling of immigrant entry and border conflicts.

Take a front seat in the Capitol during tomorrow’s Senate hearing, assessing each Senator’s reactions, questions and arguments without a Fox News or CNN filter. Or if you’re short on time, switch on the holographic press conference and host 3D avatars of live-broadcasting politicians in your living room.

We often think of modern media as taking away consumer agency, feeding tailored and often partisan ideology to a complacent audience. But as wireless mesh networks and agnostic sensor data allow for immersive VR-accessible news sites, the average viewer will necessarily become an active participant in her own education about current events.

And with each of us interpreting the news according to our own values, I envision a much less polarized world. A world in which civic engagement, moderately reasoned dialogue, and shared assumptions will allow us to empathize and make compromises.

The future promises an era in which news is verified and balanced; wherein public ledgers, AI, and new web interfaces bring you into the action and respect your intelligence—not manipulate your ignorance.

Web 3.0 Reinventing Advertising
Bringing about the rise of ‘user-owned data’ and self-established permissions, Web 3.0 is poised to completely disrupt digital advertising—a global industry worth over 192 billion USD.

Currently, targeted advertising leverages troves of personal data and online consumer behavior to subtly engage you with products you might not want, or to sell you falsely advertised services that promise results they can’t deliver.

With a new Web 3.0 data and governance layer, however, distributed ledger technologies will require advertisers to engage in more direct interaction with consumers, validating claims and upping transparency.

And with a data layer that allows users to own and authorize third-party use of their data, blockchain also holds extraordinary promise to slash not only data breaches and identity theft, but also covert advertiser bombardment you never authorized.
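What “users own and authorize their data” could mean in practice is easiest to see in miniature. The sketch below, built around an invented `UserDataVault` class (not any real Web 3.0 API), models the core rule: a third party can read a slice of your data only while you hold an active grant for that consumer and scope, and revocation takes effect immediately:

```python
class UserDataVault:
    """Toy model of user-authorized data access: reads succeed only while
    the owner has granted that consumer access to that scope of data."""
    def __init__(self):
        self.data = {}       # scope -> value, held by the user
        self.grants = set()  # (consumer, scope) pairs the user has authorized

    def put(self, scope, value):
        self.data[scope] = value

    def grant(self, consumer, scope):
        self.grants.add((consumer, scope))

    def revoke(self, consumer, scope):
        self.grants.discard((consumer, scope))

    def read(self, consumer, scope):
        """A third party's only path to the data: checked against grants."""
        if (consumer, scope) not in self.grants:
            raise PermissionError(f"{consumer} is not authorized for {scope}")
        return self.data[scope]

vault = UserDataVault()
vault.put("purchase_history", ["fridge", "coffee table"])
vault.grant("acme_ads", "purchase_history")
vault.read("acme_ads", "purchase_history")    # allowed while the grant stands
vault.revoke("acme_ads", "purchase_history")  # further reads now raise PermissionError
```

In the real Spatial Web vision, the grant set would live on a distributed ledger rather than in one user’s process, but the authorization check itself would work the same way.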

Accessing crowdsourced reviews and AI-driven fact-checking, users will be able to validate advertising claims more efficiently and accurately than ever before, potentially rating and filtering out advertisers in the process. And in such a streamlined system of verified claims, sellers will face increased pressure to compete more on product and rely less on marketing.

But perhaps most exciting is the convergence of artificial intelligence and augmented reality.

As Spatial Web networks begin to associate digital information with physical objects and locations, products will begin to “sell themselves.” Each with built-in smart properties, products will become hyper-personalized, communicating information directly to users through Web 3.0 interfaces.

Imagine stepping into a department store in pursuit of a new web-connected fridge. As soon as you enter, your AR goggles register your location and immediately grant you access to a populated register of store products.

As you move closer to a kitchen set that catches your eye, a virtual salesperson—whether by holographic video or avatar—pops into your field of view next to the fridge you’ve been examining and begins introducing you to its various functions and features. You quickly decide you’d rather disable the avatar and read the information as text instead, and your preferences are reset so that appliance properties are listed visually.

After a virtual tour of several other fridges, you decide on the one you want and seamlessly execute a smart contract, carried out by your smart wallet and the fridge. The transaction takes place in seconds, and the fridge’s blockchain-recorded ownership record has been updated.
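The essence of a blockchain-recorded ownership transfer is an append-only log in which each entry commits to the previous one by hash, so history cannot be quietly rewritten. The `OwnershipLedger` class below is a toy illustration of that idea, not a real smart-contract platform:

```python
import hashlib
import json

def make_block(prev_hash, record):
    """Seal a record together with the hash of the previous block."""
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

class OwnershipLedger:
    """Append-only, hash-chained log of who owns an item."""
    def __init__(self):
        self.chain = [make_block("0" * 64, {"item": None, "owner": None})]

    def transfer(self, item, new_owner):
        """Record a transfer; like a smart contract, it only appends."""
        self.chain.append(make_block(self.chain[-1]["hash"],
                                     {"item": item, "owner": new_owner}))

    def owner_of(self, item):
        """Current owner = the most recent record mentioning the item."""
        for block in reversed(self.chain):
            if block["record"]["item"] == item:
                return block["record"]["owner"]
        return None

    def verify(self):
        """Any tampering with an earlier record breaks the hash chain."""
        for prev, cur in zip(self.chain, self.chain[1:]):
            body = json.dumps({"prev": cur["prev"], "record": cur["record"]},
                              sort_keys=True)
            if cur["prev"] != prev["hash"]:
                return False
            if cur["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
        return True

ledger = OwnershipLedger()
ledger.transfer("fridge-042", "store")
ledger.transfer("fridge-042", "you")
print(ledger.owner_of("fridge-042"))  # → you
print(ledger.verify())                # → True
```

If anyone later edits an old record, every subsequent hash stops matching and `verify()` fails, which is what makes the fridge’s ownership record trustworthy without a central registrar.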

Better yet: having just moved into the neighborhood, you head over to a friend’s home for dinner. While catching up in the kitchen, your eyes fixate on the cabinets, which quickly populate your AR glasses with a price point and a selection of colors.

But what if you’d rather not get auto-populated product info in the first place? No problem!

Now empowered with self-sovereign identities, users might be able to turn off advertising preferences entirely, turning on smart recommendations only when they want to buy a given product or need new supplies.

And with user-centric data, consumers might even sell such information to advertisers directly. Now, instead of Facebook or Google profiting off your data, you might earn a passive income by giving advertisers permission to personalize and market their services. Buy more, and your personal data marketplace grows in value. Buy less, and a lower-valued advertising profile causes an ebb in advertiser input.

With user-controlled data, advertisers now work on your terms, putting increased pressure on product iteration and personalizing products for each user.

This brings us to the transformative future of retail.

Personalized Retail: The Power of the Spatial Web
In a future of smart and hyper-personalized products, I might walk through a virtual game space or a digitally reconstructed Target, browsing specific categories of clothing I’ve predetermined prior to entry.

As I pick out my selection, my AI assistant hones its algorithm reflecting new fashion preferences, and personal shoppers—also visiting the store in VR—help me pair different pieces as I go.

Once my personal shopper has finished constructing various outfits, I then sit back and watch a fashion show of countless Peter avatars with style and color variations of my selection, each customizable.

After I’ve made my selection, I might choose to purchase physical versions of three outfits and virtual versions of two others for my digital avatar. Payments are made automatically as I leave the store, including a smart wallet transaction made with the personal shopper at a per-outfit rate (for only the pieces I buy).

Already, several big players have broken into the VR market. Just this year, Walmart announced its foray into the VR space, shipping 17,000 Oculus Go VR headsets to Walmart locations across the US.

And just this past January, Walmart filed two VR shopping-related patents. In a new bid to disrupt a rapidly changing retail market, Walmart now describes a system in which users couple their VR headset with haptic gloves for an immersive in-store experience, whether at 3am in your living room or during a lunch break at the office.

But Walmart is not alone. Big e-commerce players from Amazon to Alibaba are leaping onto the scene with new software buildout to ride the impending headset revolution.

Beyond virtual reality, players like IKEA have even begun using mobile-based augmented reality to map digitally replicated furniture in your physical living room, true to dimension. And this is just the beginning….

As AR headset hardware undergoes breakneck advancements in the next two to five years, we might soon be able to project watches onto our wrists, swapping out colors, styles, brand, and price points.

Or let’s say I need a new coffee table in my office. Pulling up multiple models in AR, I can position each option using advanced hand-tracking technology and customize height and width according to my needs. Once the smart payment is triggered, the manufacturer prints my newly-customized piece and flies it to my doorstep by drone. As soon as I need to assemble the pieces, overlaid digital prompts walk me through each step, and any points of confusion are communicated back to a company database.

Perhaps one of the ripest industries for Spatial Web disruption, retail presents one of the greatest opportunities for profit across virtual apparel, digital malls, AI fashion startups and beyond.

In our next series iteration, I’ll be looking at the tremendous opportunities created by Web 3.0 for the Future of Work and Entertainment.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com