Tag Archives: touch

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces themselves are still hampered by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger once meditated on while using the metaphor of a hammer, seem to disappear into the “hand.” They are designed to amplify a human ability and not get in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system accordingly to amplify human output. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect the subtle neuromuscular signals produced when a user verbalizes words internally, enabling the device to respond to these silent prompts and act as an extension of the user’s cognition.

Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are highly proficient with technology (you may recall the viral 2016 New York Times article, Artificial Intelligence’s White Guy Problem). If you ask this population whether there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate on what Clayton Christensen, in The Innovator’s Dilemma, called the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a new, much larger market from the low end.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

The workarounds they create are often ingenious, precisely because they arise not from preference but from necessity, which has forced disadvantaged users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue—the most sensitive touch receptor in the body. The user learns how to interpret the patterns felt on their tongue, and in doing so, becomes able to “see” with their tongue.
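The core idea behind this kind of sensory substitution can be sketched in a few lines: downsample a camera frame into a coarse grid of stimulation intensities matched to the resolution of the tongue array. BrainPort’s actual signal processing is proprietary; the grid size and quantization levels below are illustrative assumptions only.

```python
# Illustrative sensory-substitution sketch: map a 2D brightness array
# (camera frame, values 0-255) onto a coarse grid of discrete
# stimulation levels. Grid size and level count are assumptions,
# not BrainPort's real parameters.

def frame_to_stimulation(frame, grid=20, levels=8):
    """Return a grid x grid array of stimulation levels (0..levels-1)."""
    rows, cols = len(frame), len(frame[0])
    cell_h, cell_w = rows / grid, cols / grid
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Average the brightness of every pixel falling in this cell.
            y0 = int(gy * cell_h)
            y1 = max(int((gy + 1) * cell_h), y0 + 1)
            x0 = int(gx * cell_w)
            x1 = max(int((gx + 1) * cell_w), x0 + 1)
            pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            mean = sum(pixels) / len(pixels)
            # Quantize to the number of intensities the tongue array supports.
            row.append(min(levels - 1, int(mean / 256 * levels)))
        out.append(row)
    return out
```

The interesting part is not the code but the training loop it implies: the device stays fixed, and the user’s brain gradually learns to decode the spatial patterns as vision.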

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com

Posted in Human Robots

#434246 How AR and VR Will Shape the Future of ...

How we work and play is about to transform.

After a prolonged technology “winter”—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual reality (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.

Unprecedented new applications in almost every industry are exploding onto the scene.

Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.

The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.

But virtual and augmented reality are taking the future workplace to an entirely new level.

Virtual Reality Case Study: eXp Realty

I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.

Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent.

Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.

With this centralized VR campus, eXp Realty has essentially thrown out overhead costs and entered a lucrative market without the same constraints of brick-and-mortar businesses.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.

But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:

Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.

Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have revealed that it’s already cheaper to use ride-sharing services like Uber and Lyft than to own a car in many major cities. And once autonomous “Car-as-a-Service” platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will no longer fall on the individual, if they don’t disappear entirely.

Augmented Reality: Annotate and Interact with Your Workplace

As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.

Enter a professional world electrified by augmented reality.

Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.

What if we could link that information to specific locations, people, time frames, and even moving objects?

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.

Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.

The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.

Chart depicts the average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com data
Technology continues to drive this trend, taking ever greater care of us and doing more of our work for us. One phrase that describes this is “technological socialism,” where it’s technology, not the government, that takes care of us.

Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.

As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?

As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.

Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.

Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.

For reference, Minecraft is over eight times the size of planet Earth. And in their free time, my kids would rather build in Minecraft than almost any other activity. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.

And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.

But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, grew up reading comic books or watching television, we’ve all been playing in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.

In the past, however, your ability to edit was limited, especially if a given story came in some form of 2D media. I couldn’t edit where Tom Sawyer was going or change what Iron Man was doing. But as a slew of new software advancements underlying VR and AR allow us to interact with characters and gain agency (albeit limited, for now), both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.

Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.

And while Lucy can merely hand virtual cameras to her viewers among other limited tasks, Fable Studio’s founder Edward Saatchi sees this project as just the beginning.

Imagine a virtual character—either in augmented or virtual reality—geared with AI capabilities, that now can not only participate in a fictional storyline but interact and dialogue directly with you in a host of virtual and digitally overlayed environments.

Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.

Already, we’ve seen characters like that of Pokémon brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could bring on a new era of in-home entertainment.

Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.

Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com

#433950 How the Spatial Web Will Transform Every ...

What is the future of work? Is our future one of ‘technological socialism’ (where technology is taking care of our needs)? Or is our future workplace completely virtualized, whereby we hang out at home in our PJs while walking about our virtual corporate headquarters?

This blog will look at the future of work during the age of Web 3.0, examining scenarios in which AI, VR, and the spatial web converge to transform every element of our careers, from training to execution to free time.

Three weeks ago, I explored the vast implications of Web 3.0 on news, media, smart advertising, and personalized retail. And to offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.

A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.

But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments.

Suddenly, all our information will be manipulated, stored, understood, and experienced in spatial ways.

In this third installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for:

Professional Training
Delocalized Business and the Virtual Workplace
Smart Permissions and Data Security

Let’s dive in.

Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

In September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into just six months, turning physical mock-ups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

Dramatically reducing time and trouble required for VR-testing pilots, they aim to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these piloted devices contain a suite of actuators to simulate everything from a light touch to higher-pressured contact, all controlled by gaze and finger movements.

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands.

VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to be an electric, autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.

But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.

Rise of the Virtual Workplace and Digital Data Integrity
In addition to enabling an annual $52 billion virtual goods marketplace, the Spatial Web is also giving way to “virtual company headquarters” and completely virtualized companies, where employees can work from home or any place on the planet.

Too good to be true? Check out an incredible publicly listed company called eXp Realty.

Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading.

But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.

And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial.

What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent.

Foregoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

Throw in the Spatial Web’s fundamental blockchain-based data layer, and now cryptographically secured virtual IDs will let you validate colleagues’ identities or any of the virtual avatars we will soon inhabit.

This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.

But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.

Life-like virtual modules are already unlocking countless professional training camps, modifiable in real-time and easily updated.

Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading.

And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.

Welcome to the Spatial Web workplace.

Image Credit: MONOPOLY919 / Shutterstock.com

#433282 The 4 Waves of AI: Who Will Own the ...

Recently, I picked up Kai-Fu Lee’s newest book, AI Superpowers.

Kai-Fu Lee is one of the most plugged-in AI investors on the planet, managing over $2 billion between six funds and over 300 portfolio companies in the US and China.

Drawing from his pioneering work in AI, executive leadership at Microsoft, Apple, and Google (where he served as founding president of Google China), and his founding of VC fund Sinovation Ventures, Lee shares invaluable insights about:

The four factors driving today’s AI ecosystems;
China’s extraordinary inroads in AI implementation;
Where autonomous systems are headed;
How we’ll need to adapt.

With a foothold in both Beijing and Silicon Valley, Lee looks at the power balance between Chinese and US tech behemoths—each turbocharging new applications of deep learning and sweeping up global markets in the process.

In this post, I’ll be discussing Lee’s “Four Waves of AI,” an excellent framework for discussing where AI is today and where it’s going. I’ll also be featuring some of the hottest Chinese tech companies leading the charge, worth watching right now.

I’m super excited that this Tuesday, I’ve scored the opportunity to sit down with Kai-Fu Lee to discuss his book in detail via a webinar.

With Sino-US competition heating up, who will own the future of technology?

Let’s dive in.

The First Wave: Internet AI
In this first stage of AI deployment, we’re dealing primarily with recommendation engines—algorithmic systems that learn from masses of user data to curate online content personalized to each one of us.

Think Amazon’s spot-on product recommendations, or that “Up Next” YouTube video you just have to watch before getting back to work, or Facebook ads that seem to know what you’ll buy before you do.

Powered by the data flowing through our networks, internet AI leverages the fact that users automatically label data as we browse. Clicking versus not clicking; lingering on a web page longer than we did on another; hovering over a Facebook video to see what happens at the end.

These cascades of labeled data build a detailed picture of our personalities, habits, demands, and desires: the perfect recipe for more tailored content to keep us on a given platform.
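To make this concrete, here is a minimal sketch (with invented events and a hypothetical dwell-time threshold) of how a recommendation engine might turn raw browsing behavior into implicit labels and rank content by engagement:

```python
# Hypothetical browsing events; real systems ingest billions of these.
DWELL_THRESHOLD = 30.0  # seconds; treat a long dwell as a positive signal

def label_event(event):
    """Turn a raw browsing event into an implicit binary label."""
    return 1 if event["clicked"] or event["dwell_seconds"] > DWELL_THRESHOLD else 0

def rank_items(events):
    """Rank items by their positive-label rate across all users."""
    stats = {}  # item -> [positives, total]
    for e in events:
        pos, total = stats.setdefault(e["item"], [0, 0])
        stats[e["item"]] = [pos + label_event(e), total + 1]
    return sorted(stats, key=lambda item: stats[item][0] / stats[item][1], reverse=True)

events = [
    {"item": "video_a", "clicked": True,  "dwell_seconds": 120},
    {"item": "video_a", "clicked": False, "dwell_seconds": 5},
    {"item": "video_b", "clicked": False, "dwell_seconds": 2},
]
print(rank_items(events))  # video_a ranks above video_b
```

This is the feedback loop in miniature: every click and lingering pageview becomes a free training label, and the resulting rankings drive more engagement, which generates more labels.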

Currently, Lee estimates that Chinese and American companies stand head-to-head when it comes to deployment of internet AI. But given China’s data advantage, he predicts that Chinese tech giants will have a slight lead (60-40) over their US counterparts in the next five years.

While you’ve most definitely heard of Alibaba and Baidu, you’ve probably never stumbled upon Toutiao.

Starting out as a copycat of America’s wildly popular BuzzFeed, Toutiao reached a valuation of $20 billion by 2017, more than ten times BuzzFeed’s. And with almost 120 million daily active users, Toutiao doesn’t stop at creating viral content.

Equipped with natural-language processing and computer vision, Toutiao’s AI engines survey a vast network of different sites and contributors, rewriting headlines to optimize for user engagement, and processing each user’s online behavior—clicks, comments, engagement time—to curate individualized news feeds for millions of consumers.

And as users grow more engaged with Toutiao’s content, the company’s algorithms get better and better at recommending content, optimizing headlines, and delivering a truly personalized feed.

It’s this kind of positive feedback loop that fuels today’s AI giants surfing the wave of internet AI.

The Second Wave: Business AI
While internet AI takes advantage of the fact that netizens are constantly labeling data via clicks and other engagement metrics, business AI jumps on the data that traditional companies have already labeled in the past.

Think banks issuing loans and recording repayment rates; hospitals archiving diagnoses, imaging data, and subsequent health outcomes; or courts noting conviction history, recidivism, and flight risk.

While we humans make predictions based on obvious root causes (strong features), AI algorithms can process thousands of weakly correlated variables (weak features) that may have much more to do with a given outcome than the usual suspects.

By scouting out hidden correlations that escape our linear cause-and-effect logic, business AI leverages labeled data to train algorithms that outperform even the most veteran of experts.

Apply these data-trained AI engines to banking, insurance, and legal sentencing, and you get minimized default rates, optimized premiums, and plummeting recidivism rates.

While Lee confidently places America in the lead (90-10) for business AI, China’s substantial lag in structured industry data could actually work in its favor going forward.

In industries where Chinese startups can leapfrog over legacy systems, China has a major advantage.

Take Chinese app Smart Finance, for instance.

While Americans embraced credit and debit cards in the 1970s, China was still in the throes of its Cultural Revolution, largely missing the bus on this technology.

Fast forward to 2017, when China’s mobile payment spending outstripped that of the US by a ratio of 50 to 1. Without entrenched credit cards to compete against, mobile payments were an obvious upgrade to China’s cash-heavy economy, embraced by 70 percent of China’s 753 million smartphone users by the end of 2017.

But by leapfrogging over credit cards and into mobile payments, China largely left behind the notion of credit.

And here’s where Smart Finance comes in.

An AI-powered app for microfinance, Smart Finance depends almost exclusively on its algorithms to make millions of microloans. For each potential borrower, the app simply requests access to a portion of the user’s phone data.

On the basis of variables as subtle as your typing speed and battery percentage, Smart Finance can predict with astounding accuracy your likelihood of repaying a $300 loan.
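A toy sketch of the idea, with entirely invented data and features: a simple logistic-regression scorer trained on many weakly predictive signals (here, hypothetical normalized typing speed and battery level) rather than a few strong ones. A real lender’s model would use thousands of features and far more sophisticated algorithms.

```python
import math

def sigmoid(z):
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]
    return w

# Each row: [bias, typing_speed_norm, battery_level_norm] -- hypothetical weak features
X = [[1, 0.9, 0.8], [1, 0.8, 0.9], [1, 0.2, 0.1], [1, 0.1, 0.3]]
y = [1, 1, 0, 0]  # 1 = repaid, 0 = defaulted

w = train(X, y)
prob = sigmoid(sum(wj * xj for wj, xj in zip(w, [1, 0.85, 0.9])))
print(f"Estimated repayment probability: {prob:.2f}")
```

The point of the sketch is the shape of the problem, not the specific features: no single weak feature decides the outcome, but a model that weighs many of them together can separate likely repayers from likely defaulters.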

Such deployments of business AI and internet AI are already revolutionizing our industries and individual lifestyles. But still on the horizon lie two even more monumental waves: perception AI and autonomous AI.

The Third Wave: Perception AI
In this wave, AI gets an upgrade with eyes, ears, and myriad other senses, merging the digital world with our physical environments.

As sensors and smart devices proliferate through our homes and cities, we are on the verge of entering a trillion-sensor economy.

Companies like China’s Xiaomi are putting out millions of IoT-connected devices, and teams of researchers have already begun prototyping smart dust: particulates equipped with solar cells and sensors that can store and communicate troves of data anywhere, anytime.

As Kai-Fu explains, perception AI “will bring the convenience and abundance of the online world into our offline reality.” Sensor-enabled hardware devices will turn everything from hospitals to cars to schools into online-merge-offline (OMO) environments.

Imagine walking into a grocery store, scanning your face to pull up your most common purchases, and then picking up a virtual assistant (VA) shopping cart. Having pre-loaded your data, the cart adjusts your usual grocery list with voice input, reminds you to get your spouse’s favorite wine for an upcoming anniversary, and guides you through a personalized store route.

While we haven’t yet leveraged the full potential of perception AI, China and the US are already making incredible strides. Given China’s hardware advantage, Lee predicts China currently has a 60-40 edge over its American tech counterparts.

Now the go-to city for startups building robots, drones, wearable technology, and IoT infrastructure, Shenzhen has turned into a powerhouse for intelligent hardware, as I discussed last week. Turbocharging output of sensors and electronic parts via thousands of factories, Shenzhen’s skilled engineers can prototype and iterate new products at unprecedented scale and speed.

With the added fuel of Chinese government support and a relaxed Chinese attitude toward data privacy, China’s lead may even reach 80-20 in the next five years.

Jumping on this wave are companies like Xiaomi, which aims to turn bathrooms, kitchens, and living rooms into smart OMO environments. Having invested in 220 companies and incubated 29 startups that produce its products, Xiaomi surpassed 85 million intelligent home devices by the end of 2017, making it the world’s largest network of these connected products.

One KFC restaurant in China has even teamed up with Alipay (Alibaba’s mobile payments platform) to pioneer a ‘pay-with-your-face’ feature. Forget cash, cards, and cell phones, and let OMO do the work.

The Fourth Wave: Autonomous AI
But the most monumental—and unpredictable—wave is the fourth and final: autonomous AI.

Integrating all previous waves, autonomous AI gives machines the ability to sense and respond to the world around them, enabling AI to move and act productively.

While today’s machines can outperform us on repetitive tasks in structured and even unstructured environments (think Boston Dynamics’ humanoid Atlas or oncoming autonomous vehicles), machines with the power to see, hear, touch and optimize data will be a whole new ballgame.

Think: swarms of drones that can selectively spray and harvest entire farms with computer vision and remarkable dexterity, heat-resistant drones that can put out forest fires 100X more efficiently, or Level 5 autonomous vehicles that navigate smart roads and traffic systems all on their own.

While autonomous AI will first involve robots that create direct economic value—automating tasks on a one-to-one replacement basis—these intelligent machines will ultimately revamp entire industries from the ground up.

Kai-Fu Lee currently puts America in a commanding lead of 90-10 in autonomous AI, especially when it comes to self-driving vehicles. But Chinese government efforts are quickly ramping up the competition.

Already in China’s Zhejiang province, highway regulators and government officials have plans to build China’s first intelligent superhighway, outfitted with sensors, road-embedded solar panels and wireless communication between cars, roads and drivers.

Aimed at increasing transit efficiency by up to 30 percent while minimizing fatalities, the project may one day allow autonomous electric vehicles to continuously charge as they drive.

A similar government-fueled project involves Beijing’s new neighbor Xiong’an. Projected to take in over $580 billion in infrastructure spending over the next 20 years, Xiong’an New Area could one day become the world’s first city built around autonomous vehicles.

Baidu is already working with Xiong’an’s local government to build out this AI city with an environmental focus. Possibilities include sensor-geared cement, computer vision-enabled traffic lights, intersections with facial recognition, and parking lots turned into parks.

Lastly, Lee predicts China will almost certainly lead the charge in autonomous drones. Already, Shenzhen is home to premier drone maker DJI—a company I’ll be visiting with 24 top executives later this month as part of my annual China Platinum Trip.

Named “the best company I have ever encountered” by Chris Anderson, DJI owns an estimated 50 percent of the North American drone market, supercharged by Shenzhen’s extraordinary maker movement.

While the long-term Sino-US competitive balance in fourth wave AI remains to be seen, one thing is certain: in a matter of decades, we will witness the rise of AI-embedded cityscapes and autonomous machines that can interact with the real world and help solve today’s most pressing grand challenges.

Join Me
Webinar with Dr. Kai-Fu Lee: Dr. Kai-Fu Lee — one of the world’s most respected experts on AI — and I will discuss his latest book AI Superpowers: China, Silicon Valley, and the New World Order. Artificial Intelligence is reshaping the world as we know it. With U.S.-Sino competition heating up, who will own the future of technology? Register here for the free webinar on September 4th, 2018 from 11:00am–12:30pm PST.

Image Credit: Elena11 / Shutterstock.com

Posted in Human Robots

#432891 This Week’s Awesome Stories From ...

TRANSPORTATION
Elon Musk Presents His Tunnel Vision to the People of LA
Jack Stewart and Aarian Marshall | Wired
“Now, Musk wants to build this new, 2.1-mile tunnel, near LA’s Sepulveda pass. It’s all part of his broader vision of a sprawling network that could take riders from Sherman Oaks in the north to Long Beach Airport in the south, Santa Monica in the west to Dodger Stadium in the east—without all that troublesome traffic.”

ROBOTICS
Feel What This Robot Feels Through Tactile Expressions
Evan Ackerman | IEEE Spectrum
“Guy Hoffman’s Human-Robot Collaboration & Companionship (HRC2) Lab at Cornell University is working on a new robot that’s designed to investigate this concept of textural communication, which really hasn’t been explored in robotics all that much. The robot uses a pneumatically powered elastomer skin that can be dynamically textured with either goosebumps or spikes, which should help it communicate more effectively, especially if what it’s trying to communicate is, ‘Don’t touch me!’”

VIRTUAL REALITY
In Virtual Reality, How Much Body Do You Need?
Steph Yin | The New York Times
“In a paper published Tuesday in Scientific Reports, they showed that animating virtual hands and feet alone is enough to make people feel their sense of body drift toward an invisible avatar. Their work fits into a corpus of research on illusory body ownership, which has challenged understandings of perception and contributed to therapies like treating pain for amputees who experience phantom limb.”

MEDICINE
How Graphene and Gold Could Help Us Test Drugs and Monitor Cancer
Angela Chen | The Verge
“In today’s study, scientists learned to precisely control the amount of electricity graphene generates by changing how much light they shine on the material. When they grew heart cells on the graphene, they could manipulate the cells too, says study co-author Alex Savtchenko, a physicist at the University of California, San Diego. They could make it beat 1.5 times faster, three times faster, 10 times faster, or whatever they needed.”

DISASTER RELIEF
Robotic Noses Could Be the Future of Disaster Rescue—If They Can Outsniff Search Dogs
Eleanor Cummins | Popular Science
“While canine units are a tried and fairly true method for identifying people trapped in the wreckage of a disaster, analytical chemists have for years been working in the lab to create a robotic alternative. A synthetic sniffer, they argue, could potentially prove to be just as or even more reliable than a dog, more resilient in the face of external pressures like heat and humidity, and infinitely more portable.”

Image Credit: Sergey Nivens / Shutterstock.com