
#436252 After AI, Fashion and Shopping Will ...

AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.

What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.

Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.

Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.

Let’s dive in.

A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.

No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.

Instead, you’ve entered your own personal clothing store. Everything is in your exact size…. And I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.

When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.

The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…

Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.

Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item in the right fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy exactly the right thing most of the time.

For the rest of us, enter the digital assistant.

Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google Assistant, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.

For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, they’re just the next logical step in a world that is auto-magical.

And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.

We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.

And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.

Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.
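
Neither Alibaba nor Amazon publishes its ranking code, but the core mechanic of a content-based recommender is easy to sketch. Below is a toy Python illustration; the catalog, feature vectors, and user profile are all invented for the example, standing in for the signals the real systems mine from purchases and social media.

```python
import numpy as np

# Hypothetical catalog: each item is a feature vector, e.g.
# (casualness, formality, colorfulness) scores derived from images and text.
catalog = {
    "floral midi dress": np.array([0.9, 0.2, 0.7]),
    "black blazer": np.array([0.1, 0.9, 0.2]),
    "graphic tee": np.array([0.8, 0.1, 0.9]),
}

# A taste profile in the same feature space, assumed to be built
# from the user's past purchases and liked posts.
user_profile = np.array([0.8, 0.3, 0.6])

def cosine(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Recommend by ranking the catalog against the user's profile.
ranked = sorted(catalog, key=lambda item: cosine(user_profile, catalog[item]), reverse=True)
print(ranked)  # most similar item first
```

Cosine ranking is the simplest possible version; production recommenders layer collaborative filtering, image models, and inventory constraints on top.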

Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.

According to a recent Zendesk study, good customer service increases the possibility of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.

During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Duplex. The first call made a restaurant reservation; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.

In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:

(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.

Based on research involving over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. It has already been integrated into call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant, and also more profitable.

For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.
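
Beyond Verbal hasn’t open-sourced its system, so here is only a minimal sketch of the decision layer described above, assuming a hypothetical upstream voice-analytics model that returns a mood and a shopper persona. The labels and rules are illustrative, not Beyond Verbal’s.

```python
from dataclasses import dataclass

@dataclass
class VoiceAnalysis:
    """Hypothetical output of an upstream voice-analytics model."""
    mood: str     # e.g. "frustrated", "excited", "neutral"
    persona: str  # e.g. "early_adopter", "conservative"

def agent_prompt(analysis: VoiceAnalysis) -> str:
    """Translate the model's read of the caller into advice for the agent."""
    if analysis.mood == "frustrated":
        return "De-escalate first; hold off on upsells."
    if analysis.persona == "early_adopter":
        return "Lead with the latest and greatest."
    return "Recommend tried-and-true items."

print(agent_prompt(VoiceAnalysis(mood="excited", persona="early_adopter")))
# -> "Lead with the latest and greatest."
```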

(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making the company one of many pioneering the field of emotionally intelligent computing.

With its technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction, meaning that percentage will continue to improve.
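
Soul Machines hasn’t published its architecture, but a 40 percent self-service rate implies a triage layer of roughly this shape: answer automatically when the model is confident, hand off to a human when it isn’t. The threshold and the `classify` model in this sketch are assumptions for illustration, not the company’s actual implementation.

```python
CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; real systems tune this empirically

def escalate_to_human(query: str) -> str:
    # Hand-offs can also be logged as training data, which is how
    # a system like this keeps improving with every interaction.
    return f"Transferring you to an agent about: {query!r}"

def handle_query(query: str, classify) -> str:
    """Resolve automatically when confident, otherwise escalate.

    `classify` is a hypothetical model returning (answer, confidence).
    """
    answer, confidence = classify(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer  # resolved with no human involved
    return escalate_to_human(query)

# Toy stand-in for the avatar's neural model.
print(handle_query("How do I reset my password?",
                   lambda q: ("Use the reset link on the sign-in page.", 0.93)))
```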

The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machines avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.

For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.

This isn’t just about AI; it’s about AI converging with additional exponential technologies. Add networks and sensors to the story and the scale of the disruption rises, upping the FQ, the frictionless quotient, of our shopping adventure.

Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.

Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”

Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.

Either way, it’s a top-to-bottom transformation of the retail world.

Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn about the implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.

Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Pexels from Pixabay


#436218 An AI Debated Its Own Potential for Good ...

Artificial intelligence is going to overhaul the way we live and work. But will the changes it brings be for the better? As the technology slowly develops (let’s remember that right now, we’re still very much in the narrow AI space and nowhere near an artificial general intelligence), whether it will end up doing us more harm than good is a question at the top of everyone’s mind.

What kind of response might we get if we posed this question to an AI itself?

Last week at the Cambridge Union in England, IBM did just that. Its Project Debater (an AI that narrowly lost a debate to human debating champion Harish Natarajan in February) gave the opening arguments in a debate about the promise and peril of artificial intelligence.

Critical thinking, linking different lines of thought, and anticipating counter-arguments are all valuable debating skills that humans can practice and refine. While these skills are tougher for an AI to get good at since they often require deeper contextual understanding, AI does have a major edge over humans in absorbing and analyzing information. In the February debate, Project Debater used IBM’s cloud computing infrastructure to read hundreds of millions of documents and extract relevant details to construct an argument.

This time around, Debater looked through 1,100 arguments for or against AI. The arguments were submitted to IBM by the public during the week prior to the debate, through a website set up for that purpose. Of the 1,100 submissions, the AI classified 570 as anti-AI, or of the opinion that the technology will bring more harm to humanity than good. 511 arguments were found to be pro-AI, and the rest were irrelevant to the topic at hand.
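
IBM hasn’t published Debater’s classification pipeline, but the triage described here, sorting each submission into pro-AI, anti-AI, or irrelevant, can be sketched in a few lines of Python. The cue-word scorer below is purely a toy stand-in; the real system relies on learned language representations and far richer argument mining.

```python
def classify_stance(argument: str, score_model) -> str:
    """Three-way triage: 'pro', 'anti', or 'irrelevant'.

    `score_model` is a hypothetical function returning
    (relevance, stance), both in [0, 1]; stance > 0.5 leans pro-AI.
    """
    relevance, stance = score_model(argument)
    if relevance < 0.5:
        return "irrelevant"
    return "pro" if stance > 0.5 else "anti"

def toy_model(text: str):
    """Counts a few cue words; stands in for a learned model."""
    text = text.lower()
    pro = sum(w in text for w in ("help", "benefit", "cure", "safer"))
    anti = sum(w in text for w in ("bias", "job loss", "weapon", "surveillance"))
    total = pro + anti
    relevance = min(1.0, total / 2)
    stance = pro / total if total else 0.5
    return relevance, stance

print(classify_stance("AI helps doctors cure disease faster", toy_model))       # pro
print(classify_stance("AI will entrench the bias of its creators", toy_model))  # anti
print(classify_stance("I had pasta for lunch", toy_model))                      # irrelevant
```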

Debater grouped the arguments into five themes; the technology’s ability to take over dangerous or monotonous jobs was a pro-AI theme, and on the flip side was its potential to perpetuate the biases of its creators. “AI companies still have too little expertise on how to properly assess datasets and filter out bias,” the tall black box that houses Project Debater said. “AI will take human bias and will fixate it for generations.”

After Project Debater kicked off the debate by giving opening arguments for both sides, two teams of people took over, elaborating on its points and coming up with their own counter-arguments.

In the end, an audience poll voted in favor of the pro-AI side, but just barely; 51.2 percent of voters felt convinced that AI can help us more than it can hurt us.

The software’s natural language processing was able to identify racist, obscene, or otherwise inappropriate comments and weed them out as irrelevant to the debate. But it also repeated the same arguments multiple times, and misclassified a statement about bias as pro-AI rather than anti-AI.

IBM has been working on Project Debater for over six years, and though it aims to iron out small glitches like these, the system’s goal isn’t to ultimately outwit and defeat humans. On the contrary, the AI is meant to support our decision-making by taking in and processing huge amounts of information in a nuanced way, more quickly than we ever could.

IBM engineer Noam Slonim envisions Project Debater’s tech being used, for example, by a government seeking citizens’ feedback about a new policy. “This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision,” he said.

As for the question of whether AI will do more good or harm, perhaps Sylvie Delacroix put it best. A professor of law and ethics at the University of Birmingham who argued on the pro-AI side of the debate, she pointed out that the impact AI will have depends on the way we design it, saying “AI is only as good as the data it has been fed.”

She’s right; rather than asking what sort of impact AI will have on humanity, we should start by asking what sort of impact we want it to have. The people working on AI—not AIs themselves—are ultimately responsible for how much good or harm will be done.

Image Credit: IBM Project Debater at Cambridge Union Society, photo courtesy of IBM Research


#436209 Video Friday: Robotic Endoscope Travels ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, WA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Kuka has just announced the results of its annual Innovation Award. From an initial batch of 30 applicants, five teams reached the finals (we were part of the judging committee). The five finalists worked for nearly a year on their applications, which they demonstrated this week at the Medica trade show in Düsseldorf, Germany. And the winner of the €20,000 prize is…Team RoboFORCE, led by the STORM Lab in the U.K., which developed a “robotic magnetic flexible endoscope for painless colorectal cancer screening, surveillance, and intervention.”

The system could improve colonoscopy procedures by reducing pain and discomfort as well as other risks such as bleeding and perforation, according to the STORM Lab researchers. It uses a magnetic field to control the endoscope, pulling rather than pushing it through the colon.

The other four finalists also presented some really interesting applications—you can see their videos below.

“Because we were so pleased with the high quality of the submissions, we will have next year’s finals again at the Medica fair, and the challenge will be named ‘Medical Robotics’,” says Rainer Bischoff, vice president for corporate research at Kuka. He adds that the selected teams will again use Kuka’s LBR Med robot arm, which is “already certified for integration into medical products and makes it particularly easy for startups to use a robot as the main component for a particular solution.”

Applications are now open for Kuka’s Innovation Award 2020. You can find more information on how to enter here. The deadline is 5 January 2020.

[ Kuka ]

Oh good, Aibo needs to be fed now.

You know what comes next, right?

[ Aibo ]

Your cat needs this robot.

It's about $200 on Kickstarter.

[ Kickstarter ]

Enjoy this tour of the Skydio offices courtesy of Skydio 2, which runs into not even one single thing.

If any Skydio employees had important piles of papers on their desks, well, they don’t anymore.

[ Skydio ]

Artificial intelligence is everywhere nowadays, but what exactly does it mean? We asked a group of MIT computer science grad students and postdocs how they personally define AI.

“When most people say AI, they actually mean machine learning, which is just pattern recognition.” Yup.

[ MIT ]

Using event-based cameras, this drone control system can track attitude at 1600 degrees per second (!).

[ UZH ]

Introduced at CES 2018, Walker is an intelligent humanoid service robot from UBTECH Robotics. Below are the latest features and technologies used during our latest round of development to make Walker even better.

[ Ubtech ]

Introducing the Alpha Prime by #VelodyneLidar, the most advanced lidar sensor on the market! Alpha Prime delivers an unrivaled combination of field-of-view, range, high-resolution, clarity and operational performance.

Performance looks good, but don’t expect it to be cheap.

[ Velodyne ]

Ghost Robotics’ Spirit 40 will start shipping to researchers in January of next year.

[ Ghost Robotics ]

Unitree is about to ship the first batch of their AlienGo quadrupeds as well:

[ Unitree ]

Mechanical engineering’s Sarah Bergbreiter discusses her work on micro robotics, how they draw inspiration from insects and animals, and how tiny robots can help humans in a variety of fields.

[ CMU ]

Learning contact-rich, robotic manipulation skills is a challenging problem due to the high-dimensionality of the state and action space as well as uncertainty from noisy sensors and inaccurate motor control. To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment. By adopting a similar strategy, robots can also achieve more robust manipulation. In this paper, we enable a robot to autonomously modify its environment and thereby discover how to ease manipulation skill learning. Specifically, we provide the robot with fixtures that it can freely place within the environment. These fixtures provide hard constraints that limit the outcome of robot actions. Thereby, they funnel uncertainty from perception and motor control and scaffold manipulation skill learning.

[ Stanford ]

Since 2016, Verity's drones have completed more than 200,000 flights around the world. Completely autonomous, client-operated and designed for live events, Verity is making the magic real by turning drones into flying lights, characters, and props.

[ Verity ]

To monitor and stop the spread of wildfires, University of Michigan engineers developed UAVs that could find, map and report fires. One day UAVs like this could work with disaster response units, firefighters and other emergency teams to provide real-time accurate information to reduce damage and save lives. For their research, the University of Michigan graduate students won first place at a competition for using a swarm of UAVs to successfully map and report simulated wildfires.

[ University of Michigan ]

Here’s an important issue that I haven’t heard talked about all that much: How first responders should interact with self-driving cars.

“To put the car in manual mode, you must call Waymo.” Huh.

[ Waymo ]

Here’s what Gitai has been up to recently, from a Humanoids 2019 workshop talk.

[ Gitai ]

The latest CMU RI seminar comes from Girish Chowdhary at the University of Illinois at Urbana-Champaign on “Autonomous and Intelligent Robots in Unstructured Field Environments.”

What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms! Teams of small aerial and ground robots could be a potential solution to many of the serious problems that modern agriculture is facing. However, fully autonomous robots that operate without supervision for weeks, months, or an entire growing season are not yet practical. I will discuss my group’s theoretical and practical work towards the underlying challenging problems in robotic systems, autonomy, sensing, and learning. I will begin with our lightweight, compact, and autonomous field robot TerraSentia and the recent successes of this type of undercanopy robot for high-throughput phenotyping with deep learning-based machine vision. I will also discuss how to make a team of autonomous robots learn to coordinate to weed large agricultural farms under partial observability. These direct applications will help me make the case for the type of reinforcement learning and adaptive control that are necessary to usher in the next generation of autonomous field robots that learn to solve complex problems in harsh, changing, and dynamic environments. I will then end with an overview of our new MURI, in which we are working towards developing AI and control that leverages neurodynamics inspired by the Octopus brain.

[ CMU RI ]


#436207 This Week’s Awesome Tech Stories From ...

COMPUTING
A Giant Superfast AI Chip Is Being Used to Find Better Cancer Drugs
Karen Hao | MIT Technology Review
“Thus far, Cerebras’s computer has checked all the boxes. Thanks to its chip size—it is larger than an iPad and has 1.2 trillion transistors for making calculations—it isn’t necessary to hook multiple smaller processors together, which can slow down model training. In testing, it has already shrunk the training time of models from weeks to hours.”

MEDICINE
Humans Put Into Suspended Animation for First Time
Ian Sample | The Guardian
“The process involves rapidly cooling the brain to less than 10C by replacing the patient’s blood with ice-cold saline solution. Typically the solution is pumped directly into the aorta, the main artery that carries blood away from the heart to the rest of the body.”

DRONES
This Transforming Drone Can Be Fired Straight Out of a Cannon
James Vincent | The Verge
“Drones are incredibly useful machines in the air, but getting them up and flying can be tricky, especially in crowded, windy, or emergency scenarios when speed is a factor. But a group of researchers from Caltech university and NASA’s Jet Propulsion Laboratory have come up with an elegant and oh-so-fun solution: fire the damn thing out of a cannon.”

ROBOTICS
Alphabet’s Dream of an ‘Everyday Robot’ Is Just Out of Reach
Tom Simonite | Wired
“Sorting trash was chosen as a convenient challenge to test the project’s approach to creating more capable robots. It’s using artificial intelligence software developed in collaboration with Google to make robots that learn complex tasks through on-the-job experience. The hope is to make robots less reliant on human coding for their skills, and capable of adapting quickly to complex new tasks and environments.”

ENVIRONMENT
The Electric Car Revolution May Take a Lot Longer Than Expected
James Temple | MIT Technology Review
“A new report from the MIT Energy Initiative warns that EVs may never reach the same sticker price so long as they rely on lithium-ion batteries, the energy storage technology that powers most of today’s consumer electronics. In fact, it’s likely to take another decade just to eliminate the difference in the lifetime costs between the vehicle categories, which factors in the higher fuel and maintenance expenses of standard cars and trucks.”

SPACE
How Two Intruders From Interstellar Space Are Upending Astronomy
Alexandra Witze | Nature
“From the tallest peak in Hawaii to a high plateau in the Andes, some of the biggest telescopes on Earth will point towards a faint smudge of light over the next few weeks. …What they’re looking for is a rare visitor that is about to make its closest approach to the Sun. After that, they have just months to grab as much information as they can from the object before it disappears forever into the blackness of space.”

Image Credit: Simone Hutsch / Unsplash


#436204 We’re at IROS 2019 to Bring You ...

The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is taking place in Macau this week, featuring well over a thousand presentations on the newest and most amazing robotics research from around the world. There are also posters, workshops, tutorials, an exhibit hall, and plenty of social events where roboticists have the chance to get a little tipsy and talk about all the really interesting stuff.

As always, our plan is to bring you all of the coolest, weirdest, and most interesting things that we find at the show, and here are just a few of the things we’re looking forward to this week:

Flying robots with wings, tails, and… arms?
Spherical robot turtles
An update on that crazy jet-powered iCub
Agile and tiny robot insects
Metallic self-healing robot bones
How to train robots by messing with them
A weird robot sea urchin

And all that is happening just on Tuesday!

Our IROS coverage will continue beyond this week, so keep checking back for more of the best new robotics from Macau.

[ IROS 2019 ]
