Tag Archives: Zero

#437301 The Global Work Crisis: Automation, the ...

The alarm bell rings. You open your eyes, come to your senses, and slide from dream state to consciousness. You hit the snooze button, and eventually crawl out of bed to the start of yet another working day.

This daily narrative is experienced by billions of people all over the world. We work, we eat, we sleep, and we repeat. As our lives pass day by day, the beating drums of the weekly routine take over and years pass until we reach our goal of retirement.

A Crisis of Work
We repeat the routine so that we can pay our bills, set our kids up for success, and provide for our families. And after a while, we start to forget what we would do with our lives if we didn’t have to go back to work.

In the end, we look back at our careers and reflect on what we’ve achieved. It may have been the hundreds of human interactions we’ve had; the thousands of emails read and replied to; the millions of minutes of physical labor—all to keep the global economy ticking along.

According to Gallup’s World Poll, only 15 percent of people worldwide are actually engaged with their jobs. The current state of “work” is not working for most people. In fact, it seems we as a species are trapped by a global work crisis, which condemns people to cast away their time just to get by in their day-to-day lives.

Technologies like artificial intelligence and automation may help relieve the work burdens of millions of people—but to benefit from their impact, we need to start changing our social structures and the way we think about work now.

The Specter of Automation
Automation has been ongoing since the Industrial Revolution. In recent decades it has taken on a more elegant guise, first with physical robots in production plants, and more recently with software automation entering most offices.

The driving goal behind much of this automation has always been productivity and, hence, profits: technology that can act as a multiplier on what a single human can achieve in a day is of huge value to any company. Powered by this strong financial incentive, the quest for automation is growing ever more pervasive.

But if automation accelerates or even continues at its current pace and there aren’t strong social safety nets in place to catch the people who are negatively impacted (such as by losing their jobs), there could be a host of knock-on effects, including more concentrated wealth among a shrinking elite, more strain on government social support, an increase in depression and drug dependence, and even violent social unrest.

It seems as though we are rushing headlong into a major crisis, driven by the engine of accelerating automation. But what if instead of automation challenging our fragile status quo, we view it as the solution that can free us from the shackles of the Work Crisis?

The Way Out
In order to undertake this paradigm shift, we need to consider what society could potentially look like, as well as the problems associated with making this change. In the context of these crises, our primary aim should be a system in which people are not obligated to work to generate the means to survive. This removal of work should not threaten access to food, water, shelter, education, healthcare, energy, or human value. In our current system, work is the gatekeeper to these essentials: one can access them (and even then often in limited form) only if one has a “job” that affords them.

Changing this system is thus a monumental task. This comes with two primary challenges: providing people without jobs with financial security, and ensuring they maintain a sense of their human value and worth. There are several measures that could be implemented to help meet these challenges, each with important steps for society to consider.

Universal basic income (UBI)

UBI is rapidly gaining support. It would make people shareholders in the fruits of automation, distributing those gains more broadly.

UBI trials have been conducted in various countries around the world, including Finland, Kenya, and Spain. The findings have generally been positive for participants’ health and well-being, and have shown no evidence that UBI disincentivizes work, a common concern among the idea’s critics. The most recent popular voice for UBI has been that of former US presidential candidate Andrew Yang, who now runs a non-profit called Humanity Forward.

UBI could also remove wasteful bureaucracy from administering welfare payments (since everyone receives the same amount, there’s no need to screen for false claims), promote the pursuit of projects aligned with people’s skill sets and passions, and help quantify the value of tasks not recognized by economic measures like Gross Domestic Product (GDP). This includes looking after children and the elderly at home.

How a UBI could be initiated, given political will and social backing, and how governments would pay for it have been hotly debated by economists and UBI enthusiasts. Variables like how large the UBI payments should be, whether to implement taxes such as Yang’s proposed value-added tax (VAT), whether to replace existing welfare payments, the impact on inflation, and the impact on “jobs” from people who would otherwise look for work all require additional discussion. However, some have predicted that UBI is inevitable as a result of automation.

Universal healthcare

Another major component of any society is the healthcare of its citizens. A move away from work would further require the implementation of a universal healthcare system to decouple healthcare from jobs. Currently in the US, and indeed many other economies, healthcare is tied to employment.

Universal healthcare systems such as Australia’s Medicare lend weight to the adage “prevention is better than cure”: on a per capita basis, healthcare costs far less in Australia than in the US. Universal coverage also marks a shift in how healthcare is thought about, and a healthier population brings further benefits, including less time and money spent on “sick-care.” Healthy people are more likely, and more able, to achieve their full potential.

Reshape the economy away from work-based value

One of the greatest challenges in a departure from work is for people to find value elsewhere in life. Many people view their identities as being inextricably tied to their jobs, and life without a job is therefore a threat to one’s sense of existence. This presents a shift that must be made at both a societal and personal level.

A person can only seek alternate value in life when afforded the time to do so. To this end, we need to start reducing “work-for-a-living” hours towards zero, which is a trend we are already seeing in Europe. This should not come at the cost of reducing wages pro rata, but rather could be complemented by UBI or additional schemes where people receive dividends for work done by automation. This transition makes even more sense when coupled with the idea of deviating from using GDP as a measure of societal growth, and instead adopting a well-being index based on universal human values like health, community, happiness, and peace.

The crux of this issue is in transitioning away from the view that work gives life meaning and life is about using work to survive, towards a view of living a life that is itself fulfilling and meaningful. This speaks directly to notions from Maslow’s hierarchy of needs, where work largely addresses physiological and safety needs such as shelter, food, and financial well-being. More people should have a chance to grow beyond the most basic needs and engage in self-actualization and transcendence.

The question is largely around what would provide people with a sense of value, and the answers would differ as much as people do: self-mastery, building relationships and contributing to community growth, fostering creativity, and even engaging in the enjoyable aspects of existing jobs could all come into play.

Universal education

With a move towards a society that promotes the values of living a good life, the education system would have to evolve as well. Researchers have long argued for a more nimble education system, but universities and even most online courses currently exist for the dominant purpose of ensuring people are adequately skilled to contribute to the economy. These “job factories” only exacerbate the Work Crisis. In fact, the response often given by educational institutions to the challenge posed by automation is to find new ways of upskilling students, such as ensuring they are all able to code. As alluded to earlier, this is a limited and unimaginative solution to the problem we are facing.

Instead, education should center on helping people acknowledge the current crisis of work and automation, teaching them how to derive value that is decoupled from work, and enabling them to embrace progress as we transition to the new economy.

Disrupting the Status Quo
While we seldom stop to think about it, much of the suffering faced by humanity is brought about by the systemic foe that is the Work Crisis. The way we think about work has brought society far and enabled tremendous developments, but at the same time it has failed many people. Now the status quo is threatened by those very developments as we progress to an era where machines are likely to take over many job functions.

This impending paradigm shift could be a threat to the stability of our fragile system, but only if it is not fully anticipated. If we prepare for it appropriately, it could instead be the key not just to our survival, but to a better future for all.

Image Credit: mostafa meraji from Pixabay

#436946 Coronavirus May Mean Automation Is ...

We’re in the midst of a public health emergency, and life as we know it has ground to a halt. The places we usually go are closed, the events we were looking forward to are canceled, and some of us have lost our jobs or fear losing them soon.

But although it may not seem like it, there are some silver linings; this crisis is bringing out the worst in some (I’m looking at you, toilet paper hoarders), but the best in many. Italians on lockdown are singing together, Spaniards on lockdown are exercising together, this entrepreneur made a DIY ventilator and put it on YouTube, and volunteers in Italy 3D printed medical valves for virus treatment at a fraction of their usual cost.

Indeed, if you want to feel like there’s still hope for humanity instead of feeling like we’re about to snowball into terribleness as a species, just look at these examples—and I’m sure there are many more out there. There’s plenty of hope and opportunity to be found in this crisis.

Peter Xing, a keynote speaker and writer on emerging technologies and associate director in technology and growth initiatives at KPMG, would agree. Xing believes the coronavirus epidemic is presenting us with ample opportunities for increased automation and remote delivery of goods and services. “The upside right now is the burgeoning platform of the digital transformation ecosystem,” he said.

In a thought-provoking talk at Singularity University’s COVID-19 virtual summit this week, Xing explained how the outbreak is accelerating our transition to a highly-automated society—and painted a picture of what the future may look like.

Confronting Scarcity
You’ve probably seen them by now—the barren shelves at your local grocery store. Whether you were in the paper goods aisle, the frozen food section, or the fresh produce area, it was clear something was amiss; the shelves were empty. One of the most inexplicable items people have been panic-bulk-buying is toilet paper.

Xing described this toilet paper scarcity as a prisoner’s dilemma, pointing out that we have a scarcity problem right now in terms of our mindset, not in terms of actual supply shortages. “It’s a prisoner’s dilemma in that we’re all prisoners in our homes right now, and we can either hoard or not hoard, and the outcomes depend on how we collaborate with each other,” he said. “But it’s not a zero-sum game.”

Xing referenced a CNN article about why toilet paper, of all things, is one of the items people have been panic-buying most (I, too, have been utterly baffled by this phenomenon). But maybe there’d be less panic if we knew more about the production methods and supply chain involved in manufacturing toilet paper. It turns out it’s a highly automated process (you can learn more about it in this documentary by National Geographic) and requires very few people (though it does require about 27,000 trees a day—so stop bulk-buying it! Just stop!).

The supply chain limitation here is in the raw material; we certainly can’t keep cutting down this many trees a day forever. But—somewhat ironically, given the Costco cartloads of TP people have been stuffing into their trunks and backseats—thanks to automation, toilet paper isn’t something stores are going to stop receiving anytime soon.

Automation For All
Now we have a reason to apply this level of automation to, well, pretty much everything.

Though our current situation may force us into using more robots and automated systems sooner than we’d planned, it will end up saving us money and creating opportunity, Xing believes. He cited “fast-casual” restaurants (Chipotle, Panera, etc.) as a prime example.

Currently, people in the US spend much more to eat at home than to eat at fast-casual restaurants, once you take into account the cost of the food being prepared plus the value of the time spent on cooking, grocery shopping, and cleaning up after meals. According to research from investment management firm ARK Invest, accounting for all of these costs puts a home-cooked meal at about $12.

That’s the same as or more than the cost of grabbing a burrito or a sandwich at the joint around the corner. As more of the repetitive, low-skill tasks involved in preparing fast casual meals are automated, their cost will drop even more, giving us more incentive to forego home cooking. (But, it’s worth noting that these figures don’t take into account that eating at home is, in most cases, better for you since you’re less likely to fill your food with sugar, oil, or various other taste-enhancing but health-destroying ingredients—plus, there are those of us who get a nearly incomparable amount of joy from laboring over then savoring a homemade meal).

Now that we’re not supposed to be touching each other or touching anything anyone else has touched, but we still need to eat, automating food preparation sounds appealing (and maybe necessary). Multiple food delivery services have already implemented a contactless delivery option, where customers can choose to have their food left on their doorstep.

Besides the opportunities for in-restaurant automation, “This is an opportunity for automation to happen at the last mile,” said Xing. Delivery drones, robots, and autonomous trucks and vans could all play a part. In fact, use of delivery drones has ramped up in China since the outbreak.

Speaking of deliveries, service robots have steadily increased in numbers at Amazon; as of late 2019, the company employed around 650,000 humans and 200,000 robots—and costs have gone down as robots have gone up.

ARK Invest’s research predicts automation could add $800 billion to US GDP over the next 5 years and $12 trillion during the next 15 years. On this trajectory, GDP would end up being 40 percent higher with automation than without it.

Automating Ourselves?
This is all well and good, but what do these numbers and percentages mean for the average consumer, worker, or citizen?

“The benefits of automation aren’t being passed on to the average citizen,” said Xing. “They’re going to the shareholders of the companies creating the automation.” This is where policies like universal basic income and universal healthcare come in; in the not-too-distant future, we may see more movement toward measures like these (depending how the election goes) that spread the benefit of automation out rather than concentrating it in a few wealthy hands.

In the meantime, though, some people are benefiting from automation in ways that maybe weren’t expected. We’re in the midst of what’s probably the biggest remote-work experiment in US history, not to mention remote learning. Tools that let us digitally communicate and collaborate, like Slack, Zoom, Dropbox, and G Suite, are enabling remote work in a way that wouldn’t have been possible 20 or even 10 years ago.

In addition, Xing said, tools like DataRobot and H2O.ai are democratizing artificial intelligence by allowing almost anyone, not just data scientists or computer engineers, to run machine learning algorithms. People are codifying the steps in their own repetitive work processes and having their computers take over tasks for them.
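
To make that last idea concrete, here is a minimal sketch of what “codifying the steps in a repetitive work process” can look like in practice: a short Python script that rolls up a folder of daily CSV reports into a single summary, a chore someone might otherwise do by hand every morning. The folder name, file layout, and column names are hypothetical, invented purely for illustration, and the script is not tied to any of the tools mentioned above.

```python
# A minimal sketch of "codifying a repetitive work step." It assumes a
# hypothetical folder of daily CSV sales reports with "region" and "amount"
# columns; the names are illustrative, not from the article.
import csv
from collections import defaultdict
from pathlib import Path

def summarize_reports(report_dir: str) -> dict:
    """Total the 'amount' column per region across every CSV in a folder."""
    totals = defaultdict(float)
    for csv_path in Path(report_dir).glob("*.csv"):
        with csv_path.open(newline="") as f:
            for row in csv.DictReader(f):
                totals[row["region"]] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    # Previously a person opened each file by hand; now the script rolls
    # them up in seconds and prints a summary ready to paste into an email.
    for region, total in sorted(summarize_reports("daily_reports").items()):
        print(f"{region}: {total:,.2f}")
```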

As 3D printing gets cheaper and more accessible, it’s also being more widely adopted, and people are finding more applications (case in point: the Italians mentioned above who figured out how to cheaply print a medical valve for coronavirus treatment).

The Mother of Invention
This movement towards a more automated society has some positives: it will help us stay healthy during times like the present, it will drive down the cost of goods and services, and it will grow our GDP in the long run. But by leaning into automation, will we be enabling a future that keeps us more physically, psychologically, and emotionally distant from each other?

We’re in a crisis, and desperate times call for desperate measures. We’re sheltering in place, practicing social distancing, and trying not to touch each other. And for most of us, this is really unpleasant and difficult. We can’t wait for it to be over.

For better or worse, this pandemic will likely make us pick up the pace on our path to automation, across many sectors and processes. The solutions people implement during this crisis won’t disappear when things go back to normal (and, depending on whom you talk to, things may never fully go back to how they were).

But let’s make sure to remember something. Even once robots are making our food and drones are delivering it, and our computers are doing data entry and email replies on our behalf, and we all have 3D printers to make anything we want at home—we’re still going to be human. And humans like being around each other. We like seeing one another’s faces, hearing one another’s voices, and feeling one another’s touch—in person, not on a screen or in an app.

No amount of automation is going to change that, and beyond lowering costs or increasing GDP, our greatest and most crucial responsibility will always be to take care of each other.

Image Credit: Gritt Zheng on Unsplash

#436550 Work in the Age of Web 3.0

What is the future of work? Is our future one of ‘technological socialism’ (where technology is taking care of our needs)? Or will tomorrow’s workplace be completely virtualized, allowing us to hang out at home in our PJs while “walking” about our virtual corporate headquarters?

This blog will look at the future of work during the age of Web 3.0, examining scenarios in which artificial intelligence, virtual reality, and the spatial web converge to transform every element of our careers, from training, to execution, to free time.

To offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.

A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.

But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments. Suddenly, all our information will be manipulated, stored, understood and experienced in spatial ways.

In this blog, I’ll be discussing the Spatial Web’s vast implications for:

Professional Training
Delocalized Business & the Virtual Workplace
Smart Permissions & Data Security

Let’s dive in.

Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market. As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

Then in September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training. By mid-2019, Walmart had tracked a 10-15 percent boost in employee confidence as a result of newly implemented VR training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, the FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into roughly six months, turning physical mock-ups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

Dramatically reducing time and trouble required for VR-testing pilots, they aim to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these piloted devices contain a suite of actuators to simulate everything from a light touch to higher-pressured contact, all controlled by gaze and finger movements.

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands. VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to be a mechanic for electric, autonomous vehicles at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.

But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.

Rise of the Virtual Workplace & Digital Data Integrity
In addition to enabling a virtual goods marketplace, the Spatial Web is also giving way to “virtual company headquarters” and completely virtualized companies, where employees can work from home or any place on the planet.

Too good to be true? Check out an incredible publicly listed company called eXp Realty.

Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading. But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.

And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial. What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent. Foregoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.

Delocalize with VR, and you can now hire anyone with Internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

Throw in the Spatial Web’s fundamental blockchain-based data layer, and now cryptographically secured virtual IDs will let you validate colleagues’ identities or any of the virtual avatars we will soon inhabit.

This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.
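
As a rough illustration of the kind of record described above (a simplified sketch, not the Spatial Web’s actual protocol or any particular blockchain), the snippet below has each attendee sign their meeting check-in with a personal key and chains the entries together with hashes, so that both identities and the integrity of the log can be verified afterward. It assumes the third-party Python cryptography package, and the class and field names are invented for this example.

```python
# Illustrative only: signed, hash-chained meeting check-ins.
# Requires the third-party `cryptography` package (pip install cryptography).
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class AttendanceLog:
    def __init__(self):
        self.entries = []

    def check_in(self, name: str, key: Ed25519PrivateKey):
        # Each record points at the hash of the previous one (tamper evidence)
        # and is signed by the attendee's private key (identity).
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"attendee": name, "prev": prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        self.entries.append({
            **record,
            "signature": key.sign(payload).hex(),
            "hash": hashlib.sha256(payload).hexdigest(),
        })

    def verify(self, public_keys: dict) -> bool:
        """Check every signature and that the hash chain is unbroken."""
        prev_hash = "genesis"
        for entry in self.entries:
            record = {"attendee": entry["attendee"], "prev": entry["prev"]}
            payload = json.dumps(record, sort_keys=True).encode()
            if entry["prev"] != prev_hash:
                return False  # an earlier entry was altered or removed
            try:
                public_keys[entry["attendee"]].verify(
                    bytes.fromhex(entry["signature"]), payload
                )
            except (KeyError, InvalidSignature):
                return False  # unknown attendee or forged check-in
            prev_hash = entry["hash"]
        return True

# Usage: two colleagues check in; anyone holding their public keys can audit.
alice_key, bob_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
log = AttendanceLog()
log.check_in("alice", alice_key)
log.check_in("bob", bob_key)
print(log.verify({"alice": alice_key.public_key(), "bob": bob_key.public_key()}))
```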

But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.

Life-like virtual modules are already unlocking countless professional training camps, modifiable in real time and easily updated. Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading. And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.

Welcome to the Spatial Web workplace.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, Ca. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Gerd Altmann from Pixabay

#436252 After AI, Fashion and Shopping Will ...

AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.

What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.

Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.

Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.

Let’s dive in.

A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.

No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.

Instead, you’ve entered your own personal clothing store. Everything is in your exact size…. And I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.

When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.

The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…

Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.

Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy that exact right thing most of the time.

For most of us who don’t, enter the digital assistant.

Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google’s Now, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.

For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, it’s just the next logical step in a world that is auto-magical.

And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.

We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.

And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.

Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.

Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.

According to a recent Zendesk study, good customer service increases the possibility of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.

During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Google Duplex. The first call made a reservation at a restaurant; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.

In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:

(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.

Based on research of over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. Already it’s been integrated in call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant, and also more profitable.

For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.

(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making them one of many helping to pioneer the field of emotionally intelligent computing.

With their technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.

The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machine avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.

For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.

This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and it raises the scale of disruption, upping the FQ—the frictionless quotient—in our frictionless shopping adventure.

Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.

Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”

Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.

Either way, it’s a top-to-bottom transformation of the retail world.

Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.

Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, Ca. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Pexels from Pixabay

#436188 The Blogger Behind “AI ...

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume?

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.”

Shane’s antics have a serious purpose. She aims to illustrate the serious limitations of today’s AI, and to counteract the prevailing narrative that describes AI as well on its way to superintelligence and complete human domination. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.”

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101.

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume.

Janelle Shane on . . .

The un-delicious origin of her blog
“The narrower the problem, the smarter the AI will seem”
Why overestimating AI is dangerous
Giraffing!
Machine and human creativity

The un-delicious origin of her blog

IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI?
Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes–some of which I’ve used in my book. He told an anecdote about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that works… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.
I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.
Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about?
Shane: It was generating cookbook recipes. The neural net came up with ingredients like: “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all.
Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?
Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example: It got the structure of title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other things. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set.

“The narrower the problem, the smarter the AI will seem”

Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game.
Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem.
The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. For example, take an algorithm that can generate images of objects. If the algorithm is restricted to birds, it could do a recognizable bird. If this same algorithm is asked to generate images of any animal, if its task is that broad, the bird it generates becomes an unrecognizable brown feathered smear against a green background.
Spectrum: That sounds… disturbing.
Shane: It’s disturbing in a weird amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you asked it to generate an image of a person eating pizza, you’ll have blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on that same data set, it will say, “Oh yes, that’s a person eating pizza.”

Why overestimating AI is dangerous

Spectrum: Do you see it as your role to puncture the AI hype?
Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn't it understand?” Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding it has. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is.
Spectrum: If people overestimate the abilities of AI, what risk does that pose?
Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.”

That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand.
If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But for a tough problem like: Look at these resumes and decide who’s best for the job. If its task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias.
Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks?
Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is.

Giraffing

Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?
Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns.
Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?
Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed this phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because they’re trained on data sets from online. People tend to say, “Hey look, a giraffe!” And then take a photo and share it. They don’t do that so often when they see an empty field with rocks.
There’s also a chatbot that has a delightful quirk. If you show it some photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: These were questions asked and answered by humans online. People tended not to ask the question “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote. If you ask it how many giraffes are in the picture, it will say two.

Machine and human creativity

Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?
Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data, and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about, what is it doing that’s interesting, what kind of story can I tell around it, and what do I want to show people.

Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over the past three years, are the costume suggestions getting less absurd?
Shane: Yes. Before I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman.
Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested?
Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But my ambition may constrict me to do something more like a list of leg parts.