Tag Archives: REPLACE

#436255 Are cyborg employees in our future? ...

Image by 849356 from Pixabay. There’s been a disturbing YouTube video making the rounds recently: it purportedly showed a military-type robot shooting at targets while being intermittently thumped and shoved, only to turn on and shoot one of its human handlers. However, a quick check over at Snopes shows the video is fake. Kudos to …

The post Are cyborg employees in our future? Advancing AI could replace human workers appeared first on TFOT. Continue reading

Posted in Human Robots

#436252 After AI, Fashion and Shopping Will ...

AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.

What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.

Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.

Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.

Let’s dive in.

A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.

No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.

Instead, you’ve entered your own personal clothing store. Everything is in your exact size… and I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.

When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.

The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…

Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.

Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy that exact right thing most of the time.

For most of us who don’t, enter the digital assistant.

Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google’s Assistant, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.

For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, they’re just the next logical step in a world that is auto-magical.

And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.

We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.

And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During Alibaba’s annual Singles’ Day (November 11) shopping festival, the company’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, helping drive a significant portion of the day’s US$25 billion in sales.

Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.

Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.

According to a recent Zendesk study, good customer service increases the possibility of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.

During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Duplex. The first call made a restaurant reservation; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.

In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:

(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.

Based on research involving over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. It has already been integrated into call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant, and also more profitable.

For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.
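Beyond Verbal’s actual models are proprietary, but the routing logic described above is easy to sketch. Below is a minimal, hypothetical illustration in Python; the feature names, threshold, and labels are stand-ins, not the company’s API:

```python
# Hypothetical sketch of voice-based shopper profiling and agent coaching.
# Beyond Verbal's real system derives features from vocal intonation; here
# we assume an upstream analyzer has already produced a feature dict.

def classify_shopper(vocal_features: dict) -> str:
    """Map voice-derived features to a shopper profile (stand-in heuristic)."""
    if vocal_features.get("novelty_seeking", 0.0) > 0.6:
        return "early_adopter"
    return "conservative"

def coach_agent(vocal_features: dict) -> str:
    """Turn the detected profile into a live prompt for the sales agent."""
    profile = classify_shopper(vocal_features)
    if profile == "early_adopter":
        return "Pitch the latest and greatest."
    return "Suggest tried-and-true items."

print(coach_agent({"novelty_seeking": 0.8}))  # -> Pitch the latest and greatest.
```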

(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making it one of many companies helping to pioneer the field of emotionally intelligent computing.

With their technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.

The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machine avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.

For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.

This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and the scale of disruption rises, upping the FQ—the frictionless quotient—of our frictionless shopping adventure.

Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.

Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”

Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.

Either way, it’s a top-to-bottom transformation of the retail world.

Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.

Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Pexels from Pixabay Continue reading

Posted in Human Robots

#436178 Within 10 Years, We’ll Travel by ...

What’s faster than autonomous vehicles and flying cars?

Try Hyperloop, rocket travel, and robotic avatars. Hyperloop is currently working towards 670 mph (1080 kph) passenger pods, capable of zipping us from Los Angeles to downtown Las Vegas in under 30 minutes. Rocket Travel (think SpaceX’s Starship) promises to deliver you almost anywhere on the planet in under an hour. Think New York to Shanghai in 39 minutes.

But wait, it gets even better…

As 5G connectivity, hyper-realistic virtual reality, and next-gen robotics continue their exponential progress, the emergence of “robotic avatars” will all but nullify the concept of distance, replacing human travel with immediate remote telepresence.

Let’s dive in.

Hyperloop One: LA to SF in 35 Minutes
Did you know that Hyperloop was the brainchild of Elon Musk? Just one in a series of transportation innovations from a man determined to leave his mark on the industry.

In 2013, in an attempt to shorten the long commute between Los Angeles and San Francisco, the California state legislature proposed a $68 billion budget allocation for what appeared to be the slowest and most expensive bullet train in history.

Musk was outraged. The cost was too high, the train too sluggish. Teaming up with a group of engineers from Tesla and SpaceX, he published a 58-page concept paper for “The Hyperloop,” a high-speed transportation network that used magnetic levitation to propel passenger pods down vacuum tubes at speeds of up to 670 mph. If successful, it would zip you across California in 35 minutes—just enough time to watch your favorite sitcom.
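The 35-minute figure checks out as simple arithmetic, assuming roughly 380 miles between Los Angeles and San Francisco (the distance is an assumption; timing also ignores acceleration and stops):

```python
# Back-of-the-envelope check of the 35-minute claim.
distance_miles = 380  # assumed LA-to-SF distance
speed_mph = 670       # Hyperloop's top speed per the concept paper
print(f"Travel time: {distance_miles / speed_mph * 60:.0f} minutes")  # ~34 minutes
```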

Shortly after, venture capitalist Shervin Pishevar, with Musk’s blessing, started Hyperloop One with me, Jim Messina (former White House Deputy Chief of Staff for President Obama), and tech entrepreneurs Joe Lonsdale and David Sacks as founding board members. A couple of years later, the Virgin Group invested in the idea, Richard Branson was elected chairman, and Virgin Hyperloop One was born.

“The Hyperloop exists,” says Josh Giegel, co-founder and chief technology officer of Hyperloop One, “because of the rapid acceleration of power electronics, computational modeling, material sciences, and 3D printing.”

Thanks to these convergences, there are now ten major Hyperloop One projects—in various stages of development—spread across the globe. Chicago to DC in 35 minutes. Pune to Mumbai in 25 minutes. According to Giegel, “Hyperloop is targeting certification in 2023. By 2025, the company plans to have multiple projects under construction and running initial passenger testing.”

So think about this timetable: Autonomous car rollouts by 2020. Hyperloop certification and aerial ridesharing by 2023. By 2025, going on vacation might have a totally different meaning. Going to work most definitely will.

But what’s faster than Hyperloop?

Rocket Travel
As if autonomous vehicles, flying cars, and Hyperloop weren’t enough, in September of 2017, speaking at the International Astronautical Congress in Adelaide, Australia, Musk promised that for the price of an economy airline ticket, his rockets will fly you “anywhere on Earth in under an hour.”

Musk wants to use SpaceX’s megarocket, Starship, which was designed to take humans to Mars, for terrestrial passenger delivery. The Starship travels at 17,500 mph. It’s an order of magnitude faster than the supersonic jet Concorde.

Think about what this actually means: New York to Shanghai in 39 minutes. London to Dubai in 29 minutes. Hong Kong to Singapore in 22 minutes.

So how real is the Starship?

“We could probably demonstrate this [technology] in three years,” Musk explained, “but it’s going to take a while to get the safety right. It’s a high bar. Aviation is incredibly safe. You’re safer on an airplane than you are at home.”

That demonstration is proceeding as planned. In September 2017, Musk announced his intentions to retire his current rocket fleet, both the Falcon 9 and Falcon Heavy, and replace them with Starship in the 2020s.

Less than a year later, LA mayor Eric Garcetti tweeted that SpaceX was planning to break ground on an 18-acre rocket production facility near the port of Los Angeles. And April of this year marked an even bigger milestone: the very first test flights of the rocket.

Thus, sometime in the next decade or so, “off to Europe for lunch” may become a standard part of our lexicon.

Avatars
Wait, wait, there’s one more thing.

While the technologies we’ve discussed will decimate the traditional transportation industry, there’s something on the horizon that will disrupt travel itself. What if, to get from A to B, you didn’t have to move your body? What if you could quote Captain Kirk and just say “Beam me up, Scotty”?

Well, shy of the Star Trek transporter, there’s the world of avatars.

An avatar is a second self, typically in one of two forms. The digital version has been around for a couple of decades. It emerged from the video game industry and was popularized by virtual world sites like Second Life and books-turned-blockbusters like Ready Player One.

A VR headset teleports your eyes and ears to another location, while a set of haptic sensors shifts your sense of touch. Suddenly, you’re inside an avatar inside a virtual world. As you move in the real world, your avatar moves in the virtual.

Use this technology to give a lecture and you can do it from the comfort of your living room, skipping the trip to the airport, the cross-country flight, and the ride to the conference center.

Robots are the second form of avatars. Imagine a humanoid robot that you can occupy at will. Maybe, in a city far from home, you’ve rented the bot by the minute—via a different kind of ridesharing company—or maybe you have spare robot avatars located around the country.

Either way, put on VR goggles and a haptic suit, and you can teleport your senses into that robot. This allows you to walk around, shake hands, and take action—all without leaving your home.
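At its core, this kind of telepresence is a fast sensor-to-actuator loop: read the operator’s pose, stream it to the robot, and stream video and haptics back. Here is a minimal sketch of the outbound half, assuming a hypothetical pose source and robot endpoint (neither is a real product API):

```python
# Minimal telepresence loop sketch: stream the operator's head pose to a
# remote robot avatar, which mirrors the motion. The pose source and the
# robot's address are hypothetical stand-ins.
import json
import socket
import time

ROBOT_ADDR = ("robot-avatar.example.com", 9000)  # hypothetical endpoint

def read_headset_pose() -> dict:
    """Stand-in for a VR SDK call: position (meters) plus an orientation
    quaternion in the operator's reference frame."""
    return {"pos": [0.0, 0.0, 1.6], "quat": [0.0, 0.0, 0.0, 1.0]}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    pose = read_headset_pose()
    # As the operator moves, the avatar moves: send the freshest pose and let
    # the robot's controller track it. UDP keeps latency low, and stale
    # packets can simply be dropped on the robot's side.
    sock.sendto(json.dumps(pose).encode(), ROBOT_ADDR)
    time.sleep(1 / 60)  # ~60 Hz, matching a typical headset refresh rate
```

The return channel (camera images and haptic feedback streamed back to the operator) closes the loop in the same way.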

And like the rest of the tech we’ve been talking about, even this future isn’t far away.

In 2018, entrepreneur Dr. Harry Kloor proposed the design of an Avatar XPRIZE to All Nippon Airways (ANA), Japan’s largest airline. ANA then funded this vision to the tune of $10 million to speed the development of robotic avatars. Why? Because ANA knows this is one of the technologies likely to disrupt its own airline industry, and it wants to be ready.

ANA recently announced its “newme” robot that humans can use to virtually explore new places. The colorful robots have Roomba-like wheeled bases and cameras mounted around eye-level, which capture surroundings viewable through VR headsets.

If the robot was stationed in your parents’ home, you could cruise around the rooms and chat with your family at any time of day. After revealing the technology at Tokyo’s Combined Exhibition of Advanced Technologies in October, ANA plans to deploy 1,000 newme robots by 2020.

With robotic avatars like newme, geography, distance, and cost will no longer limit our travel choices. From attractions like the Eiffel Tower or the pyramids of Egypt to unreachable destinations like the moon or the deep sea, we will be able to transcend our own physical limits, explore the world and outer space, and access nearly any experience imaginable.

Final Thoughts
Individual car ownership has enjoyed over a century of ascendancy and dominance.

The first real threat it faced—today’s ride-sharing model—only showed up in the last decade. But that ridesharing model won’t even get ten years to dominate. Already, it’s on the brink of autonomous car displacement, which is on the brink of flying car disruption, which is on the brink of Hyperloop and rockets-to-anywhere decimation. Plus, avatars.

The most important part: All of this change will happen over the next ten years. Welcome to a future of human presence where the only constant is rapid change.

Note: This article—an excerpt from my next book The Future Is Faster Than You Think, co-authored with Steven Kotler, to be released January 28th, 2020—originally appeared on my tech blog at diamandis.com. Read the original article here.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Virgin Hyperloop One Continue reading

Posted in Human Robots

#436065 From Mainframes to PCs: What Robot ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Autonomous robots are coming around slowly. We already have autonomous vacuum cleaners, autonomous lawn mowers, toys that bleep and blink, and (maybe) soon autonomous cars. Yet, generation after generation, we keep waiting for the robots we all know from movies and TV shows. Instead, businesses seem to get farther and farther away from robots that are able to do a large variety of tasks using general-purpose, human-anatomy-inspired hardware.

Although these are the droids we have been looking for, everything that came close, such as Willow Garage’s PR2 or Rethink Robotics’ Baxter, has bitten the dust. Building a robotics company is particularly hard, compounding business risk with technological risk, so the trend has gone from selling robots to selling actual services: mowing your lawn, providing taxi rides, fulfilling retail orders, or picking strawberries by the pound. Unfortunately for fans of R2-D2 and C-3PO, these kinds of business models emphasize specialized, room- or fridge-sized hardware that is optimized for one very specific task but does not contribute to a general-purpose robotic platform.

We have actually seen something very similar in the personal computer (PC) industry. In the 1950s, even though computers could be as big as an entire room and were available only to a select few, the public already had a good idea of what computers would look like. A long list of fictional computers started to populate mainstream entertainment during that time. In a 1962 New York Times article titled “Pocket Computer to Replace Shopping List,” visionary scientist John Mauchly stated that “there is no reason to suppose the average boy or girl cannot be master of a personal computer.”

In 1968, Douglas Engelbart gave us the “mother of all demos,” showing hypertext browsing with a graphical screen and a mouse, among other ideas that became standard only decades later. Now that we have finally seen all of this realized, it might be helpful to examine what actually enabled the computing revolution, to learn where robotics really stands and what we need to do next.

The parallels between computers and robots

In the 1970s, mainframes were about to be replaced by the emerging class of minicomputers: fridge-sized devices that cost less than US $25,000 ($165,000 in 2019 dollars). These computers did not use punch cards but could be programmed in Fortran and BASIC, dramatically expanding the ease with which potential applications could be created. Yet it was still unclear whether minicomputers could ever replace big mainframes in applications that require fast and efficient processing of large amounts of data, let alone enter every living room. This is very similar to the robotics industry right now, where large-scale factory robots (the mainframes) that have existed since the 1960s are seeing competition from a growing industry of collaborative robots (the minicomputers) that can safely work next to humans and can easily be installed and programmed. As in the ’70s, applications for these devices, which reach system prices comparable to that of a luxury car, are quite limited, and it is hard to see how they could ever become a consumer product.

Yet, as in the computer industry, successful architectures are quickly being cloned, driving prices down, and entirely new approaches to constructing and programming robotic arms are sprouting left and right. Arm makers are joined by manufacturers of autonomous carts, robotic grippers, and sensors. These components can be combined, paving the way for standard general-purpose platforms that follow the model of the IBM PC, which built a capable, open architecture relying as much on commodity parts as possible.

General-purpose robotic systems have not been successful for reasons similar to why general-purpose, also known as “personal,” computers took decades to emerge. Mainframes were custom-built for each application, while typewriters got smarter and smarter, leaving little room for general-purpose computers in between. Indeed, given the cost of hardware and the relatively limited abilities of today’s autonomous robots, it is almost always smarter to build a special-purpose machine than to try to make a collaborative mobile manipulator smart.

A current example is e-commerce grocery fulfillment. The current trend is to reserve underutilized parts of a brick-and-mortar store for a micro-fulfillment center that stores goods in little crates with an automated retrieval system and a (human) picker. A number of startups like Alert Innovation, Fabric, Ocado Technology, TakeOff Technologies, and Tompkins Robotics, to name just a few, have recently raised hundreds of millions in venture capital to build mainframe equivalents of robotic fulfillment centers. This is in contrast with a robotic picker, which would drive through the aisles to restock and pick from shelves. Such a robotic store clerk would come much closer to our vision of a general-purpose robot, but it would take many copies of itself, crowding the aisles, to churn out hundreds of orders per hour the way a micro-fulfillment center can. And although such robots might eventually be more efficient, margins in retail are already low, making it unlikely that this industry will produce the technological jump we need to get friendly C-3POs manning the aisles.


Mainframes were also attacked from the bottom. Fascination with the new digital technology led to a hobbyist movement creating microcomputers that were sold via mail order or at RadioShack. Initially, a large number of small businesses were selling tens, at most hundreds, of devices, usually as kits with wooden enclosures. This trend culminated in the “1977 Trinity”: the Apple II, the Commodore PET, and the Tandy TRS-80, complete computers that sold for around $2,500 (TRS) to $5,000 (Apple) in today’s dollars. The main appeal of these computers was their programmability (in BASIC), which would enable consumers to “learn to chart your biorhythms, balance your checking account, or even control your home environment,” according to an original Apple advertisement. Similarly, there exists a myriad of gadgets that explore different aspects of robotics such as mobility, manipulation, and entertainment.

As in the fledgling personal computing industry, the advertised functionality was at best a model of the real deal. A now-famous milestone in entertainment robotics was Sony’s original Aibo, a robotic dog advertised to have many of the properties a real dog has, such as developing its own personality, playing with a toy, and interacting with its owner. Released in 1999, and re-launched in 2018, the platform has a solid following among hobbyists and academics who like its programmability, but probably very few users accept the device as a pet stand-in.

There also exist countless “build-your-own-robotic-arm” kits. One of the more successful examples is the uArm, which sells for around $800 and is advertised to perform pick-and-place, assembly, 3D printing, laser engraving, and many other things that sound like high-value applications. Compelling videos of the robot actually doing these things in a constrained environment have led to two successful crowdfunding campaigns and have established the robot as a successful educational tool.

Finally, there exist platforms that allow hobbyist programmers to explore mobility, building robots that patrol your house, deliver items, or provide their users with telepresence abilities. An example is the Misty II. Much as with the original Apple II, there remains a disconnect between the price of the hardware and the fidelity of the available applications.

For computers, this disconnect began to disappear with the invention of VisiCalc, the first electronic spreadsheet, which spun out of Harvard in 1979 and prompted many people to buy an entire microcomputer just to run the program. VisiCalc was soon joined by WordStar, a word-processing application that sold for close to $2,000 in today’s dollars. WordStar, too, would entice many people to buy the hardware just to use the software. The two programs are early examples of what became known as “killer applications.”

With factory automation being mature, and robots with the price tag of a minicomputer now capable of driving around and autonomously carrying out many manipulation tasks, the robotics industry is somewhere around where the PC industry was between 1973—the release of the Xerox Alto, the first computer with a graphical user interface, mouse, and special software—and 1979, when microcomputers in the under-$5,000 category began to take off.

Killer apps for robots
So what would it take for robotics to continue to advance like computers did? The market itself has already done a good job of distilling what the possible killer apps are. VCs and customers alike push companies that have set out with lofty goals to reduce their offering to a simple value proposition. As a result, companies that started at opposite ends often converge into mirror images of each other, offering very similar autonomous carts, (bin-)picking, palletizing, depalletizing, or sorting solutions. Each of these companies usually serves a single application in a single vertical—for example bin-picking clothes, transporting warehouse goods, or picking strawberries by the pound. They are trying to prove that their specific technology works without spreading themselves too thin.

Very few of these companies have really taken off. One example is Kiva Systems, which became the logistics robotics division of Amazon. Kiva and others are structured around sound value propositions that are grounded in well-known user needs. As these solutions are very specialized, however, it is unlikely that they will result in economies of scale of the same magnitude that early computer users enjoyed when they bought both a spreadsheet and a word processor for their expensive microcomputer. What would make these robotic solutions more interesting is functionality becoming stackable: instead of just being able to do bin picking, palletizing, or transportation with the same hardware, these three skills could be combined to model entire processes.

A skill that startups have so far barely addressed, and that has historically been owned by the mainframe equivalent of robotics, is the assembly of simple mechatronic devices. The ability to assemble mechatronic parts is equivalent to other tasks such as changing a light bulb, changing the batteries in a remote control, or tending machines like a lever-based espresso machine. Mastering these tasks would make the autonomous execution of complete workflows possible using a single machine, eventually leading to an explosion of industrial productivity across all sectors. For example, picking up an item from a bin, arranging it on the robot, moving it elsewhere, and placing it into a shelf or a machine is a process that applies equally to a manufacturing environment, a retail store, or someone’s kitchen.
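To make the idea of stackable skills concrete, here is a minimal Python sketch. The primitives and their parameters are hypothetical stand-ins rather than any vendor’s actual API; the point is only that a handful of generic skills compose into a complete workflow that fits a factory, a store, or a kitchen equally well.

```python
# Hypothetical, hardware-agnostic skill primitives; each returns True on success.
def pick_from_bin(robot: str, item_class: str) -> bool:
    print(f"{robot}: picking a '{item_class}' from the bin")
    return True

def drive_to(robot: str, waypoint: str) -> bool:
    print(f"{robot}: driving to {waypoint}")
    return True

def place_into(robot: str, target: str) -> bool:
    print(f"{robot}: placing the item into the {target}")
    return True

def restock_workflow(robot: str, item_class: str, waypoint: str, target: str) -> bool:
    """Stack three generic skills into one complete process."""
    steps = [
        lambda: pick_from_bin(robot, item_class),
        lambda: drive_to(robot, waypoint),
        lambda: place_into(robot, target),
    ]
    for step in steps:
        if not step():
            return False  # abort; a planner or a human takes over from here
    return True

# The same stack serves a factory cell, a retail shelf, or a kitchen cabinet:
restock_workflow("mobile-manipulator-1", "espresso cup", "kitchen", "cupboard")
```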

Image: Robotic Materials Inc.

Autonomous, vision- and force-based assembly for the Siemens robot learning challenge.

Even though many of the above applications are becoming possible, it is still very hard to get a platform off the ground without added components that provide “killer app” value of their own. Interesting examples are Rethink Robotics and the Robot Operating System (ROS). Rethink Robotics’ Baxter and Sawyer robots pioneered a great user experience (like the 1973 Xerox Alto, really the first PC), but their applications were difficult to extend beyond simple pick-and-place, palletizing, and depalletizing items.

ROS pioneered interprocess communication software adapted to robotic needs (multiple computers, different programming languages) and the idea of software modularity in robotics, but—in the absence of a common hardware platform—it hasn’t yet delivered a single application, e.g., for navigation, path planning, or grasping, that performs beyond research-grade demonstration level and won’t get discarded once developers turn to production systems. At the same time, an increasing number of robotic devices, such as robot arms or 3D perception systems that offer intelligent functionality, provide other ways to wire them together that do not require an intermediary computer, while keeping close control over the real-time aspects of their hardware.
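As a point of reference, this is roughly what ROS’s interprocess communication looks like in practice: a minimal ROS 1 (rospy) node publishing and subscribing on a topic. The node and topic names are illustrative only, not from any particular package.

```python
#!/usr/bin/env python
# Minimal ROS 1 publish/subscribe sketch. Any node on the network (written in
# Python or C++, running on this machine or another) can subscribe to the
# same topic. Node and topic names here are illustrative.
import rospy
from std_msgs.msg import String

def on_command(msg):
    rospy.loginfo("received command: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("gripper_commander")
    pub = rospy.Publisher("/gripper/command", String, queue_size=1)
    rospy.Subscriber("/gripper/command", String, on_command)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="close"))
        rate.sleep()
```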

Image: Robotic Materials Inc.

Robotic Materials’ GPR-1 combines a MiR-100 autonomous cart with a UR-5 collaborative robotic arm, an OnRobot force/torque sensor, and Robotic Materials’ SmartHand to perform out-of-the-box mobile assembly, bin-picking, palletizing, and depalletizing tasks.

At my company, Robotic Materials Inc., we have made strides to identify a few applications such as bin picking and assembly, making them configurable with a single click by combining machine learning and optimization with an intuitive user interface. Here, users can define object classes and how to grasp them using a web browser, which then appear as first-class objects in a robot-specific graphical programming language. We have also done this for assembly, allowing users to stack perception-based picking and force-based assembly primitives by simply dragging and dropping appropriate commands together.

While such an approach might answer the question of a killer app for robots priced in the “minicomputer” range, it is unclear how killer-app-type value can be generated with robots in the less-than-$5,000 category. A possible answer is twofold: First, with low-cost arms, mobility platforms, and entertainment devices continuously improving, a confluence of technology readiness and user innovation, as with the Apple II and VisiCalc, will eventually happen. For example, not much innovation is needed to turn Misty into a home security system, the uArm into a low-cost bin-picking system, or an Aibo-like device into a therapeutic system for the elderly or children with autism.

Second, robots and their components have to become dramatically cheaper. Indeed, computers have seen an exponential reduction in price accompanied by an exponential increase in computational power, thanks in great part to Moore’s Law. This development has helped robotics too, allowing us to reach breakthroughs in mobility and manipulation due to the ability to process massive amounts of image and depth data in real-time, and we can expect it to continue to do so.

Is there a Moore’s Law for robots?
One might ask, however, how similar dynamics might be possible for robots as a whole, including all their motors and gears, and what a “Moore’s Law” would look like for the robotics industry. Here, it helps to remember that the perpetuation of Moore’s Law is not the cause but the result of the PC revolution. Indeed, the first killer apps for bookkeeping, editing, and gaming were so good that they unleashed tremendous consumer demand, beating the benchmark of what was thought to be physically possible over and over again. (I vividly remember 56 kbps being considered the absolute maximum data rate for copper phone lines until DSL appeared.)

That these economies of scale also apply to mechatronics is impressively demonstrated by the car industry. A good example is the 2020 Prius Prime, a highly computerized plug-in hybrid available for one third of the cost of my company’s GPR-1 mobile manipulator while being orders of magnitude more complex, sporting an electric motor, a combustion engine, and a myriad of sensors and computers. It is therefore quite conceivable to produce a mobile manipulator that retails at one tenth of the cost of a modern car, once robots enjoy similar mass-market appeal. And given that these robots would be part of the equation, actively lowering the cost of production, this might happen faster than anything before in the history of industrialization.


There is one more driver that might make robots exponentially more capable: the cloud. Once a general-purpose robot has learned or was programmed with a new skill, it could share it with every other robot. At some point, a grocer who buys a robot could assume that it already knows how to recognize and handle 99 percent of the retail items in the store. Likewise, a manufacturer could assume that the robot can handle and assemble every item available from McMaster-Carr and Misumi. Finally, families could expect a robot to know every kitchen item that Ikea and Pottery Barn sell. This sounds like a labor-intensive problem, but it is probably more manageable than collecting footage for Google’s Street View using cars, tricycles, and snowmobiles, among other vehicles.
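A cloud skill repository could be as simple as a shared catalog keyed by item. Here is a sketch under the assumption of a hypothetical REST service; the endpoint and JSON schema are invented for illustration:

```python
# Sketch of fleet-wide skill sharing via a hypothetical cloud registry.
# The endpoint, schema, and item ID below are invented for illustration.
import json
import urllib.error
import urllib.request

SKILL_REGISTRY = "https://skills.example.com/api/v1"  # hypothetical service

def fetch_grasp_skill(item_id: str):
    """Return a learned grasp description for a catalog item, or None if no
    robot in the fleet has uploaded one yet."""
    try:
        with urllib.request.urlopen(f"{SKILL_REGISTRY}/grasps/{item_id}") as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise

def share_grasp_skill(item_id: str, skill: dict) -> None:
    """Upload a newly learned grasp so every other robot can use it."""
    req = urllib.request.Request(
        f"{SKILL_REGISTRY}/grasps/{item_id}",
        data=json.dumps(skill).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

# A grocer's robot meets an unfamiliar item: ask the fleet before learning
# locally, then share whatever it learns.
# skill = fetch_grasp_skill("mcmaster-91251A540")  # illustrative item ID
```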

Strategies for robot startups
While we are waiting for these two trends—better and better applications and hardware with decreasing cost—to converge, we as a community have to keep exploring what the canonical robotic applications beyond mobility, bin picking, palletizing, depalletizing, and assembly are. We must also continue to solve the fundamental challenges that stand in the way of making these solutions truly general and robust.

For both questions, it might help to look at the strategies that have been critical in the development of the personal computer, which might equally well apply to robotics:

Start with a solution to a problem your customers have. Unfortunately, their problem is almost never that they need your sensor, widget, or piece of code, but something that already costs them money or negatively affects them in some other way. Example: Many more people had a problem calculating their taxes (and wanted to buy VisiCalc) than wanted to write their own solution in BASIC.

Build as little of your own hardware as necessary. Your business model should be stronger than the margin you can make on the hardware. Why take the risk? Example: Why build your own typewriter if you can write the best typewriting application that makes it worth buying a computer just for that?

If your goal is a platform, make sure it comes with a killer application, which alone justifies the platform cost. Example: Microcomputer companies came and went until the “1977 Trinity” intersected with the killer apps spreadsheet and word processors. Corollary: You can also get lucky.

Use an open architecture, which creates an ecosystem where others compete on creating better components and peripherals, while allowing others to integrate your solution into their vertical and stack it with other devices. Example: Both the Apple II and the IBM PC were completely open architectures, enabling many clones, thereby growing the user and developer base.

It’s worthwhile pursuing this. With most business processes already being digitized, general purpose robots will allow us to fill in gaps in mobility and manipulation, increasing productivity at levels only limited by the amount of resources and energy that are available, possibly creating a utopia in which creativity becomes the ultimate currency. Maybe we’ll even get R2-D2.

Nikolaus Correll is an associate professor of computer science at the University of Colorado at Boulder where he works on mobile manipulation and other robotics applications. He’s co-founder and CTO of Robotic Materials Inc., which is supported by the National Science Foundation and the National Institute of Standards and Technology via their Small Business Innovative Research (SBIR) programs. Continue reading

Posted in Human Robots

#435748 Video Friday: This Robot Is Like a ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s been a while since we last spoke to Joe Jones, the inventor of Roomba, about his solar-powered, weed-killing robot, called Tertill, which he was launching as a Kickstarter project. Tertill is now available for purchase (US $300) and is shipping right now.

[ Tertill ]

Usually, we don’t post videos that involve drone use that looks to be either illegal or unsafe. These flights over the protests in Hong Kong are almost certainly both. However, it’s also a unique perspective on the scale of these protests.

[ Team BlackSheep ]

ICYMI: iRobot announced this week that it has acquired Root Robotics.

[ iRobot ]

This Boston Dynamics parody video went viral this week.

The CGI is good but the gratuitous violence—even if it’s against a fake robot—is a bit too much?

This is still our favorite Boston Dynamics parody video:

[ Corridor ]

Biomedical Engineering Department Head Bin He and his team have developed the first-ever successful non-invasive mind-controlled robotic arm to continuously track a computer cursor.

[ CMU ]

Organic chemists, prepare to meet your replacement:

Automated chemical synthesis carries great promises of safety, efficiency and reproducibility for both research and industry laboratories. Current approaches are based on specifically-designed automation systems, which present two major drawbacks: (i) existing apparatus must be modified to be integrated into the automation systems; (ii) such systems are not flexible and would require substantial re-design to handle new reactions or procedures. In this paper, we propose a system based on a robot arm which, by mimicking the motions of human chemists, is able to perform complex chemical reactions without any modifications to the existing setup used by humans. The system is capable of precise liquid handling, mixing, filtering, and is flexible: new skills and procedures could be added with minimum effort. We show that the robot is able to perform a Michael reaction, reaching a yield of 34%, which is comparable to that obtained by a junior chemist (undergraduate student in Chemistry).

[ arXiv ] via [ NTU ]

So yeah, ICRA 2019 was huge and awesome. Here are some brief highlights.

[ Montreal Gazette ]

For about US $5, this drone will deliver raw meat and beer to you if you’re on an uninhabited island in Tokyo Bay.

[ Nikkei ]

The Smart Microsystems Lab at Michigan State University has a new version of their Autonomous Surface Craft. It’s autonomous, open source, and awfully hard to sink.

[ SML ]

As drone shows go, this one is pretty good.

[ CCTV ]

Here’s a remote controlled robot shooting stuff with a very large gun.

[ HDT ]

Over a period of three quarters (September 2018 through May 2019), we’ve had the opportunity to work with five graduating University of Denver students as they brought their idea for a Misty II arm extension to life.

[ Misty Robotics ]

If you wonder how it looks to inspect burners and superheaters of a boiler with an Elios 2, here you are! This inspection was performed by Svenska Elektrod in a peat-fired boiler for Vattenfall in Sweden. Enjoy!

[ Flyability ]

The newest Soft Robotics technology, mGrip mini fingers, made for tight spaces, small packaging, and delicate items, giving limitless opportunities for your applications.

[ Soft Robotics ]

What if legged robots were able to generate dynamic motions in real-time while interacting with a complex environment? Such technology would represent a significant step toward the deployment of legged systems in real-world scenarios. This means being able to replace humans in the execution of dangerous tasks and to collaborate with them in industrial applications.

This workshop aims to bring together researchers from all the relevant communities in legged locomotion such as: numerical optimization, machine learning (ML), model predictive control (MPC) and computational geometry in order to chart the most promising methods to address the above-mentioned scientific challenges.

[ Num Opt Wkshp ]

Army researchers teamed with the U.S. Marine Corps to fly and test 3-D printed quadcopter prototypes at the Marine Corps Air Ground Combat Center in Twentynine Palms, California, recently.

[ CCDC ARL ]

Lex Fridman’s Artificial Intelligence podcast featuring Rosalind Picard.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per Sjöborg speaks with Christian Guttmann, executive director of the Nordic Artificial Intelligence Institute.

Christian Guttmann talks about AI and wanting to understand intelligence well enough to recreate it. Christian has been focusing on AI in healthcare and has recently started to communicate the opportunities and challenges of artificial intelligence to the general public. This is something that the host, Per Sjöborg, is also very passionate about. We also get to hear about the Nordic AI institute and the work it does to inform all parts of society about AI.

[ Robots in Depth ] Continue reading

Posted in Human Robots