
#436470 Retail Robots Are on the Rise—at Every ...

The robots are coming! The robots are coming! On our sidewalks, in our skies, in our every store… Over the next decade, robots will enter the mainstream of retail.

As countless robots work behind the scenes to stock shelves, serve customers, and deliver products to our doorstep, the speed of retail will accelerate.

These changes are already underway. In this blog, we’ll elaborate on how robots are entering the retail ecosystem.

Let’s dive in.

Robot Delivery
On August 3rd, 2016, Domino’s Pizza introduced the Domino’s Robotic Unit, or “DRU” for short. The first home delivery pizza robot, the DRU looks like a cross between R2-D2 and an oversized microwave.

LIDAR and GPS sensors help it navigate, while temperature sensors keep hot food hot and cold food cold. Already, it’s been rolled out in ten countries, including New Zealand, France, and Germany, but its August 2016 debut was critical—as it was the first time we’d seen robotic home delivery.

And it won’t be the last.

A dozen or so different delivery bots are fast entering the market. Starship Technologies, for instance, a startup created by Skype founders Janus Friis and Ahti Heinla, has a general-purpose home delivery robot. Right now, the system is an array of cameras and GPS sensors, but upcoming models will include microphones, speakers, and even the ability—via AI-driven natural language processing—to communicate with customers. Since 2016, Starship has already carried out 50,000 deliveries in over 100 cities across 20 countries.

Along similar lines, Nuro—co-founded by Jiajun Zhu, one of the engineers who helped develop Google’s self-driving car—has a miniature self-driving car of its own. Half the size of a sedan, the Nuro looks like a toaster on wheels, except with a mission. This toaster has been designed to carry cargo—about 12 bags of groceries (version 2.0 will carry 20)—which it’s been doing for select Kroger stores since 2018. Domino’s also partnered with Nuro in 2019.

As these delivery bots take to our streets, others are streaking across the sky.

Back in 2013, Amazon came first, announcing Prime Air—the e-commerce giant’s promise of drone delivery in 30 minutes or less. Almost immediately, companies ranging from 7-Eleven and Walmart to Google and Alibaba jumped on the bandwagon.

While critics remain doubtful, the head of the FAA’s drone integration department recently said that drone deliveries may be “a lot closer than […] the skeptics think. [Companies are] getting ready for full-blown operations. We’re processing their applications. I would like to move as quickly as I can.”

In-Store Robots
While delivery bots start to spare us trips to the store, those who prefer shopping the old-fashioned way—i.e., in person—also have plenty of human-robot interaction in store. In fact, these robotics solutions have been around for a while.

In 2014, SoftBank introduced Pepper, a humanoid robot capable of understanding human emotion. Pepper is cute: 4 feet tall, with a white plastic body, two black eyes, a dark slash of a mouth, and a base shaped like a mermaid’s tail. Across her chest is a touch screen to aid in communication. And there’s been a lot of communication. Pepper’s cuteness is intentional, as it matches her mission: to help humans enjoy life as much as possible.

Over 12,000 Peppers have been sold. She serves ice cream in Japan, greets diners at a Pizza Hut in Singapore, and dances with customers at a Palo Alto electronics store. More importantly, Pepper’s got company.

Walmart uses shelf-stocking robots for inventory control. Best Buy uses a robo-cashier, allowing select locations to operate 24-7. And Lowe’s Home Improvement employs the LoweBot—a giant iPad on wheels—to help customers find the items they need while tracking inventory along the way.

Warehouse Bots
Yet the biggest benefit robots provide might be in-warehouse logistics.

In 2012, when Amazon dished out $775 million for Kiva Systems, few could predict that just 6 years later, 45,000 Kiva robots would be deployed at all of their fulfillment centers, helping process a whopping 306 items per second during the Christmas season.

And many other retailers are following suit.

Order jeans from the Gap, and soon they’ll be sorted, packed, and shipped with the help of a Kindred robot. Remember the old arcade game where you picked up teddy bears with a giant claw? That’s Kindred, only her claw picks up T-shirts, pants, and the like, placing them in designated drop-off zones that resemble tiny mailboxes (for further sorting or shipping).

The big deal here is democratization. Kindred’s robot is cheap and easy to deploy, allowing smaller companies to compete with giants like Amazon.

Final Thoughts
For retailers interested in staying in business, there doesn’t appear to be much choice in the way of robotics.

Under the Raise the Wage Act, the US federal minimum wage would climb gradually to $15 an hour by 2025 (the House of Representatives has already passed the bill), and many consider even that number far too low.

Yet, as human labor costs continue to climb, robots won’t just be coming, they’ll be here, there, and everywhere. It’s going to become increasingly difficult for store owners to justify human workers who call in sick, show up late, and can easily get injured. Robots work 24-7. They never take a day off, never need a bathroom break, health insurance, or parental leave.

Going forward, this spells a growing challenge of technological unemployment (a blog topic I will cover in the coming month). But in retail, robotics usher in tremendous benefits for companies and customers alike.

And while professional re-tooling initiatives and the transition of human capital from retail logistics to a booming experience economy take hold, robotic retail interaction and last-mile delivery will fundamentally transform our relationship with commerce.

This blog comes from The Future is Faster Than You Think—my upcoming book, to be released Jan 28th, 2020. To get an early copy and access up to $800 worth of pre-launch giveaways, sign up here!

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

Image Credit: Image by imjanuary from Pixabay

Posted in Human Robots

#436252 After AI, Fashion and Shopping Will ...

AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.

What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.

Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.

Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.

Let’s dive in.

A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.

No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.

Instead, you’ve entered your own personal clothing store. Everything is in your exact size…. And I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.

When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.

The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…

Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.

Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy that exact right thing most of the time.

For most of us who don’t, enter the digital assistant.

Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google’s Assistant, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.

For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, it’s just the next logical step in a world that is auto-magical.

And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.

We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.

And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.

Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.

Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.

According to a recent Zendesk study, good customer service increases the possibility of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.

During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Google Duplex. The first call made a restaurant reservation; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.

In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:

(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.

Based on research of over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. Already it’s been integrated in call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant, and also more profitable.

For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.

(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making the company one of several pioneering the field of emotionally intelligent computing.

With their technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.

The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machine avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.

For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.

This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and it raises the scale of disruption, upping the FQ—the frictionless quotient—in our frictionless shopping adventure.

Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.

Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”

Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.

Either way, it’s a top-to-bottom transformation of the retail world.

Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.

Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!


This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Pexels from Pixabay


#436215 Help Rescuers Find Missing Persons With ...

There’s a definite sense that robots are destined to become a critical part of search and rescue missions and disaster relief efforts, working alongside humans to help first responders move faster and more efficiently. And we’ve seen all kinds of studies that include the claim “this robot could potentially help with disaster relief,” to varying degrees of plausibility.

But it takes a long time, and a lot of extra effort, for academic research to actually become anything useful—especially for first responders, where there isn’t a lot of financial incentive for further development.

It turns out that if you actually ask first responders what they most need for disaster relief, they’re not necessarily interested in the latest and greatest robotic platform or other futuristic technology. They’re using commercial off-the-shelf drones, often consumer-grade ones, because they’re simple and cheap and great at surveying large areas. The challenge is doing something useful with all of the imagery that these drones collect. Computer vision algorithms could help with that, as long as those algorithms are readily accessible and nearly effortless to use.

The IEEE Robotics and Automation Society and the Center for Robotic-Assisted Search and Rescue (CRASAR) at Texas A&M University have launched a contest to bridge this gap between the kinds of tools that roboticists and computer vision researchers might call “basic” and a system that’s useful to first responders in the field. It’s a simple and straightforward idea, and somewhat surprising that no one had thought of it before now. And if you can develop such a system, it’s worth some cash.

CRASAR does already have a Computer Vision Emergency Response Toolkit (created right after Hurricane Harvey), which includes a few pixel filters and some edge and corner detectors. Through this contest, you can get paid your share of a $3,000 prize pool for adding some other excessively basic tools, including:

Image enhancement through histogram equalization, which can be applied to electro-optical (visible light cameras) and thermal imagery

Color segmentation for a range

Grayscale segmentation for a range in a thermal image
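As Murphy says below, most of this already exists. Here is a minimal sketch of the first and third tools, written in plain NumPy so the logic is visible; the function names and the 8-bit image format are my assumptions, and in OpenCV the same jobs are one-liners (cv2.equalizeHist and cv2.inRange):

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale (or thermal) image.
    Equivalent in spirit to OpenCV's cv2.equalizeHist."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Stretch the CDF so the darkest occupied bin maps to 0 and the
    # brightest to 255, spreading intensities across the full range.
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

def segment_range(img, lo, hi):
    """Binary mask (255 = inside) of pixels within [lo, hi] on every
    channel; covers both color and grayscale/thermal range segmentation,
    the same job cv2.inRange does."""
    img = np.atleast_3d(img)
    inside = np.all((img >= lo) & (img <= hi), axis=-1)
    return (inside * 255).astype(np.uint8)
```

The contest’s real ask is the packaging: wiring functions like these into a tool a responder can point at a folder of drone images without ever touching Python.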

If it seems like this contest is really not that hard, that’s because it isn’t. “The first thing to understand about this contest is that strictly speaking, it’s really not that hard,” says Robin Murphy, director of CRASAR. “This contest isn’t necessarily about coming up with algorithms that are brand new, or even state-of-the-art, but rather algorithms that are functional and reliable and implemented in a way that’s immediately [usable] by inexperienced users in the field.”

Murphy readily admits that some of what needs to be done is not particularly challenging at all, but that’s not the point—the point is to make these functionalities accessible to folks who have better things to do than solve these problems themselves, as Murphy explains.

“A lot of my research is driven by problems that I’ve seen in the field that you’d think somebody would have solved, but apparently not. More than half of this is available in OpenCV, but who’s going to find it, download it, learn Python, that kind of thing? We need to get these tools into an open framework. We’re happy if you take libraries that already exist (just don’t steal code)—not everything needs to be rewritten from scratch. Just use what’s already there. Some of it may seem too simple, because it IS that simple. It already exists and you just need to move some code around.”

If you want to get very slightly more complicated, there’s a second category that involves a little bit of math:

Coders must provide a system that does the following for each nadir image in a set:

Reads the geotag embedded in the .jpg
Overlays a USNG grid for a user-specified interval (e.g., every 50, 100, or 200 meters)
Gives the GPS coordinates of each pixel if a cursor is rolled over the image
Given a set of images with the GPS or USNG coordinate and a bounding box, finds all images in the set that have a pixel intersecting that location
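The geotag itself can be read with an off-the-shelf EXIF library such as Pillow or piexif; the only real math in this category is mapping a pixel back to a ground coordinate. Here is a rough sketch of that step for a nadir image, assuming the center geotag, the ground sample distance (meters per pixel), and the drone’s heading are known; the function name and parameters are illustrative, not part of CRASAR’s toolkit:

```python
import math

def pixel_to_gps(lat, lon, px, py, width, height, gsd, heading_deg=0.0):
    """Approximate GPS coordinate under pixel (px, py) of a nadir image.

    lat, lon    -- geotag of the image center, in degrees
    gsd         -- ground sample distance, meters per pixel (derived
                   elsewhere from drone altitude and camera model)
    heading_deg -- drone yaw; 0 means image 'up' is true north
    """
    # Pixel offset from the image center, in meters on the ground.
    dx = (px - width / 2.0) * gsd   # right of center
    dy = (height / 2.0 - py) * gsd  # above center (image y grows downward)
    # Rotate by heading so the offsets align with east/north axes.
    h = math.radians(heading_deg)
    east = dx * math.cos(h) + dy * math.sin(h)
    north = dy * math.cos(h) - dx * math.sin(h)
    # Convert meters to degrees (small-offset, spherical approximation).
    dlat = north / 111_320.0
    dlon = east / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

The grid-overlay task is the same arithmetic run in reverse: convert each grid line’s ground coordinate into a pixel offset from the center and draw a line every 50, 100, or 200 meters.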

The final category awards prizes to anyone who comes up with anything else that turns out to be useful. Or, more specifically, “entrants can submit any algorithm they believe will be of value.” Whether or not it’s actually of value will be up to a panel of judges that includes both first responders and computer vision experts. More detailed rules can be found here, along with sample datasets that you can use for testing.

The contest deadline is 16 December, so you’ve got about a month to submit an entry. Winners will be announced at the beginning of January.


#436178 Within 10 Years, We’ll Travel by ...

What’s faster than autonomous vehicles and flying cars?

Try Hyperloop, rocket travel, and robotic avatars. Hyperloop is currently working towards 670 mph (1080 kph) passenger pods, capable of zipping us from Los Angeles to downtown Las Vegas in under 30 minutes. Rocket Travel (think SpaceX’s Starship) promises to deliver you almost anywhere on the planet in under an hour. Think New York to Shanghai in 39 minutes.

But wait, it gets even better…

As 5G connectivity, hyper-realistic virtual reality, and next-gen robotics continue their exponential progress, the emergence of “robotic avatars” will all but nullify the concept of distance, replacing human travel with immediate remote telepresence.

Let’s dive in.

Hyperloop One: LA to SF in 35 Minutes
Did you know that Hyperloop was the brainchild of Elon Musk? Just one in a series of transportation innovations from a man determined to leave his mark on the industry.

In 2013, in an attempt to shorten the long commute between Los Angeles and San Francisco, the California state legislature proposed a $68 billion budget allocation for what appeared to be the slowest and most expensive bullet train in history.

Musk was outraged. The cost was too high, the train too sluggish. Teaming up with a group of engineers from Tesla and SpaceX, he published a 58-page concept paper for “The Hyperloop,” a high-speed transportation network that used magnetic levitation to propel passenger pods down vacuum tubes at speeds of up to 670 mph. If successful, it would zip you across California in 35 minutes—just enough time to watch your favorite sitcom.

In 2014, venture capitalist Shervin Pishevar, with Musk’s blessing, started Hyperloop One with me, Jim Messina (former White House Deputy Chief of Staff for President Obama), and tech entrepreneurs Joe Lonsdale and David Sacks as founding board members. A couple of years after that, the Virgin Group invested in the company, Richard Branson was elected chairman, and Virgin Hyperloop One was born.

“The Hyperloop exists,” says Josh Giegel, co-founder and chief technology officer of Hyperloop One, “because of the rapid acceleration of power electronics, computational modeling, material sciences, and 3D printing.”

Thanks to these convergences, there are now ten major Hyperloop One projects—in various stages of development—spread across the globe. Chicago to DC in 35 minutes. Pune to Mumbai in 25 minutes. According to Giegel, “Hyperloop is targeting certification in 2023. By 2025, the company plans to have multiple projects under construction and running initial passenger testing.”

So think about this timetable: Autonomous car rollouts by 2020. Hyperloop certification and aerial ridesharing by 2023. By 2025—going on vacation might have a totally different meaning. Going to work most definitely will.

But what’s faster than Hyperloop?

Rocket Travel
As if autonomous vehicles, flying cars, and Hyperloop weren’t enough, in September of 2017, speaking at the International Astronautical Congress in Adelaide, Australia, Musk promised that for the price of an economy airline ticket, his rockets will fly you “anywhere on Earth in under an hour.”

Musk wants to use SpaceX’s megarocket, Starship, which was designed to take humans to Mars, for terrestrial passenger delivery. The Starship travels at 17,500 mph. It’s an order of magnitude faster than the supersonic jet Concorde.

Think about what this actually means: New York to Shanghai in 39 minutes. London to Dubai in 29 minutes. Hong Kong to Singapore in 22 minutes.

So how real is the Starship?

“We could probably demonstrate this [technology] in three years,” Musk explained, “but it’s going to take a while to get the safety right. It’s a high bar. Aviation is incredibly safe. You’re safer on an airplane than you are at home.”

That demonstration is proceeding as planned. In September 2017, Musk announced his intentions to retire his current rocket fleet, both the Falcon 9 and Falcon Heavy, and replace them with the Starships in the 2020s.

Less than a year later, LA mayor Eric Garcetti tweeted that SpaceX was planning to break ground on an 18-acre rocket production facility near the port of Los Angeles. And April of this year marked an even bigger milestone: the very first test flights of the rocket.

Thus, sometime in the next decade or so, “off to Europe for lunch” may become a standard part of our lexicon.

Avatars
Wait, wait, there’s one more thing.

While the technologies we’ve discussed will decimate the traditional transportation industry, there’s something on the horizon that will disrupt travel itself. What if, to get from A to B, you didn’t have to move your body? What if you could quote Captain Kirk and just say “Beam me up, Scotty”?

Well, shy of the Star Trek transporter, there’s the world of avatars.

An avatar is a second self, typically in one of two forms. The digital version has been around for a couple of decades. It emerged from the video game industry and was popularized by virtual world sites like Second Life and books-turned-blockbusters like Ready Player One.

A VR headset teleports your eyes and ears to another location, while a set of haptic sensors shifts your sense of touch. Suddenly, you’re inside an avatar inside a virtual world. As you move in the real world, your avatar moves in the virtual.

Use this technology to give a lecture and you can do it from the comfort of your living room, skipping the trip to the airport, the cross-country flight, and the ride to the conference center.

Robots are the second form of avatars. Imagine a humanoid robot that you can occupy at will. Maybe, in a city far from home, you’ve rented the bot by the minute—via a different kind of ridesharing company—or maybe you have spare robot avatars located around the country.

Either way, put on VR goggles and a haptic suit, and you can teleport your senses into that robot. This allows you to walk around, shake hands, and take action—all without leaving your home.

And like the rest of the tech we’ve been talking about, even this future isn’t far away.

In 2018, entrepreneur Dr. Harry Kloor recommended to All Nippon Airways (ANA), Japan’s largest airline, the design of an Avatar XPRIZE. ANA then funded this vision to the tune of $10 million to speed the development of robotic avatars. Why? Because ANA knows this is one of the technologies likely to disrupt their own airline industry, and they want to be ready.

ANA recently announced its “newme” robot that humans can use to virtually explore new places. The colorful robots have Roomba-like wheeled bases and cameras mounted around eye-level, which capture surroundings viewable through VR headsets.

If the robot was stationed in your parents’ home, you could cruise around the rooms and chat with your family at any time of day. After revealing the technology at Tokyo’s Combined Exhibition of Advanced Technologies in October, ANA plans to deploy 1,000 newme robots by 2020.

With virtual avatars like newme, geography, distance, and cost will no longer limit our travel choices. From attractions like the Eiffel Tower or the pyramids of Egypt to unreachable destinations like the moon or deep sea, we will be able to transcend our own physical limits, explore the world and outer space, and access nearly any experience imaginable.

Final Thoughts
Individual car ownership has enjoyed over a century of ascendancy and dominance.

The first real threat it faced—today’s ride-sharing model—only showed up in the last decade. But that ridesharing model won’t even get ten years to dominate. Already, it’s on the brink of autonomous car displacement, which is on the brink of flying car disruption, which is on the brink of Hyperloop and rockets-to-anywhere decimation. Plus, avatars.

The most important part: All of this change will happen over the next ten years. Welcome to a future of human presence where the only constant is rapid change.

Note: This article—an excerpt from my next book The Future Is Faster Than You Think, co-authored with Steven Kotler, to be released January 28th, 2020—originally appeared on my tech blog at diamandis.com. Read the original article here.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Virgin Hyperloop One


#436165 Video Friday: DJI’s Mavic Mini Is ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

DJI’s new Mavic Mini looks like a pretty great drone for US $400 ($500 for a combo with more accessories): It’s tiny, flies for 30 minutes, and will do what you need as far as pictures and video (although not a whole lot more).

DJI seems to have put a bunch of effort into making the drone 249 grams, 1 gram under what’s required for FAA registration. That means you save $5 and a few minutes of your time, but that does not mean you don’t have to follow the FAA’s rules and regulations governing drone use.

[ DJI ]

Don’t panic, but Clearpath and HEBI Robotics have armed the Jackal:

After locking eyes across a crowded room at ICRA 2019, Clearpath Robotics and HEBI Robotics basked in that warm and fuzzy feeling that comes with starting a new and exciting relationship. Over a conference hall coffee, they learned that the two companies have many overlapping interests. The most compelling was the realization that customers across a variety of industries are hunting for an elusive true love of their own – a robust but compact robotic platform combined with a long reach manipulator for remote inspection tasks.

After ICRA concluded, Arron Griffiths, Application Engineer at Clearpath, and Matthew Tesch, Software Engineer at HEBI, kept in touch and decided there had been enough magic in the air to warrant further exploration. A couple of months later, Matthew arrived at Clearpath to formally introduce HEBI’s X-Series Arm to Clearpath’s Jackal UGV. It was love.

[ Clearpath ]

Thanks Dave!

I’m really not a fan of the people-carrying drones, but heavy lift cargo drones seem like a more okay idea.

Volocopter, the pioneer in Urban Air Mobility, presented the demonstrator of its VoloDrone. This marks Volocopter’s expansion into the logistics, agriculture, infrastructure, and public services industries. The VoloDrone is an unmanned, fully electric, heavy-lift utility drone capable of carrying a payload of 200 kg (440 lbs) up to 40 km (25 miles). With a standardized payload attachment, VoloDrone can serve a great variety of purposes, from transporting boxes to liquids to equipment and beyond. It can be remotely piloted or flown in automated mode on pre-set routes.

[ Volocopter ]

JAY is a mobile service robot that projects a display on the floor and plays sound with its speaker. By playing sounds and videos, it provides visual and audio entertainment in various places such as exhibition halls, airports, hotels, department stores and more.

[ Rainbow Robotics ]

The DARPA Subterranean Challenge Virtual Tunnel Circuit concluded this week—it was the same idea as the physical challenge that took place in August, just with a lot less IRL dirt.

The awards ceremony and team presentations are in this next video, and we’ll have more on this once we get back from IROS.

[ DARPA SubT ]

NASA is sending a mobile robot to the south pole of the Moon to get a close-up view of the location and concentration of water ice in the region and, for the first time ever, to actually sample the water ice at the same pole where the first woman and next man will land in 2024 under the Artemis program.

About the size of a golf cart, the Volatiles Investigating Polar Exploration Rover, or VIPER, will roam several miles, using its four science instruments — including a 1-meter drill — to sample various soil environments. Planned for delivery in December 2022, VIPER will collect about 100 days of data that will be used to inform development of the first global water resource maps of the Moon.

[ NASA ]

Happy Halloween from HEBI Robotics!

[ HEBI ]

Happy Halloween from Soft Robotics!

[ Soft Robotics ]

Halloween must be really, really confusing for autonomous cars.

[ Waymo ]

Once a year at Halloween, hardworking JPL engineers put their skills to the test in a highly competitive pumpkin carving contest. The result: A pumpkin gently landed on the Moon, its retrorockets smoldering, while across the room a Nemo-inspired pumpkin explored the sub-surface ocean of Jupiter’s moon Europa. Suffice it to say that when the scientists and engineers at NASA’s Jet Propulsion Laboratory compete in a pumpkin-carving contest, the solar system’s the limit. Take a look at some of the masterpieces from 2019.

Now in its ninth year, the contest gives teams only one hour to carve and decorate their pumpkin, though they can prepare non-pumpkin materials – like backgrounds, sound effects, and motorized parts – ahead of time.

[ JPL ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.

[ BPL ]

Misty II is now available to anyone who wants one, and she’s on sale for a mere $2900.

[ Misty ]

We leveraged LiDAR-based SLAM, in conjunction with our specialized relative-localization sensor UVDAR, to perform a decentralized, communication-free swarm flight without the units knowing their absolute locations. The swarming and obstacle avoidance control is based on a modified Boids-like algorithm, while the whole swarm is controlled by directing a selected leader unit.

[ MRS ]
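As a rough illustration of the kind of Boids-like rule with a leader bias described above (this is a generic textbook sketch, not the MRS group's actual controller; all gains, radii, and the leader-coupling term are illustrative assumptions):

```python
import numpy as np

def boids_step(pos, vel, leader_vel, r_sep=1.0, r_nbr=5.0,
               w_sep=1.5, w_align=0.5, w_coh=0.3, w_leader=0.8, dt=0.1):
    """One update of a minimal Boids-like flocking rule with a leader bias.

    pos, vel: (N, 2) arrays of agent positions and velocities.
    leader_vel: (2,) velocity command propagated from the leader unit.
    """
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]                      # vectors to all other agents
        dist = np.linalg.norm(offsets, axis=1)
        nbrs = (dist > 0) & (dist < r_nbr)          # agents within sensing range
        if nbrs.any():
            # Separation: steer away from agents that are too close.
            close = nbrs & (dist < r_sep)
            sep = -offsets[close].sum(axis=0) if close.any() else np.zeros(2)
            # Alignment: match the mean velocity of neighbors.
            align = vel[nbrs].mean(axis=0) - vel[i]
            # Cohesion: steer toward the neighbors' centroid.
            coh = offsets[nbrs].mean(axis=0)
            new_vel[i] += dt * (w_sep * sep + w_align * align + w_coh * coh)
        # Leader bias: every unit drifts toward the leader's commanded velocity,
        # so directing the leader steers the whole swarm.
        new_vel[i] += dt * w_leader * (leader_vel - vel[i])
    return pos + dt * new_vel, new_vel
```

Note that each agent only needs relative offsets and neighbor velocities, which is exactly what a relative-localization sensor like UVDAR provides — no absolute positions required.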

The MallARD robot is an autonomous surface vehicle (ASV) designed for the monitoring and inspection of wet storage facilities, for example spent fuel pools or wet silos. The MallARD is holonomic, uses a LiDAR for localisation, and features a robust trajectory tracking controller.

The University of Manchester’s researcher Dr Keir Groves designed and built the autonomous surface vehicle (ASV) for the challenge; it came in the top three of the second round in November 2017. The MallARD went on to compete in a final third round, where it was deployed by the IAEA in a spent fuel pond at a nuclear power plant in Finland, along with two other entries, and came second overall in November 2018.

[ RNE ]

Thanks Jennifer!
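One thing a holonomic platform like the MallARD buys you is decoupled surge and sway commands, so even a very simple tracking law works as a starting point. The sketch below is a generic saturated proportional controller under assumed gains and speed limits — the MallARD's published controller is considerably more robust than this:

```python
import numpy as np

def track_waypoint(state, target, kp=1.2, v_max=0.5):
    """Proportional position controller for a holonomic surface vehicle.

    Because the vehicle is holonomic, surge and sway can be commanded
    independently: the velocity command is just a saturated proportional
    term on the position error, with no heading coupling.

    state, target: (x, y) positions in the pool frame (meters).
    Returns the (vx, vy) velocity command in m/s.
    """
    err = np.asarray(target, float) - np.asarray(state, float)
    cmd = kp * err
    speed = np.linalg.norm(cmd)
    if speed > v_max:                  # saturate to the vehicle's speed limit
        cmd *= v_max / speed
    return cmd
```

A nonholonomic hull would instead have to couple heading control into the same loop, which is where much of the tracking difficulty usually comes from.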

I sometimes get the sense that in the robotic grasping and manipulation world, suction cups are kinda seen as cheating. But their nature allows you to do some pretty interesting things.

More clever octopus footage please.

[ CMU ]

A Personal, At-Home Teacher For Playful Learning: From academic topics to child-friendly news bulletins, fun facts and more, Miko 2 is packed with relevant and freshly updated content specially designed by educationists and child-specialists. Your little one won’t even realize they’re learning.

As we point out pretty much every time we post a video like this, keep in mind that you’re seeing a heavily edited version of a hypothetical best case scenario for how this robot can function. And things like “creating a relationship that they can then learn how to form with their peers” is almost certainly overselling things. But at $300 (shipping included), this may be a decent robot as long as your expectations are appropriately calibrated.

[ Miko ]

ICRA 2018 plenary talk by Rodney Brooks: “Robots and People: the Research Challenge.”

[ IEEE RAS ]

ICRA-X 2018 talk by Ron Arkin: “Lethal Autonomous Robots and the Plight of the Noncombatant.”

[ IEEE RAS ]

On the most recent episode of the AI Podcast, Lex Fridman interviews Garry Kasparov.

[ AI Podcast ]
