Tag Archives: sarah
iRobot Remembers That Robots Are ...
iRobot has released several new robots over the last few years, including the i7 and s9 vacuums. Both of these models are very fancy and very capable, packed with innovative and useful features that we’ve been impressed by. They’re both also quite expensive—with dirt docks included, you’re looking at US $800 for the i7+, and a whopping $1,100 for the s9+. You can knock a couple hundred bucks off those prices if you don’t want the docks, but still, these vacuums are absolutely luxury items.
If you just want something that’ll do some vacuuming so that you don’t have to, iRobot has recently announced a new Roomba option. The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400. It’s not nearly as smart as the i7 or the s9, but it can navigate (sort of) and make maps (sort of) and do some basic smart home integration. If that sounds like all you need, the i3 could be the robot vacuum for you.
iRobot calls the i3 “stylish,” and it does look pretty neat with that fabric top. Underneath, you get dual rubber primary brushes plus a side brush. There’s limited compatibility with the iRobot Home app and IFTTT, along with Alexa and Google Home. The i3 is also compatible with iRobot’s Clean Base, but that’ll cost you an extra $200, and iRobot refers to this bundle as the i3+.
The reason that the i3 only offers limited compatibility with iRobot’s app is that the i3 is missing the top-mounted camera that you’ll find in more expensive models. Instead, it relies on a downward-looking optical sensor to help it navigate, and it builds up a map as it’s cleaning by keeping track of when it bumps into obstacles and paying attention to internal sensors like a gyro and wheel odometers. The i3 can localize directly on its charging station or Clean Base (which have beacons on them that the robot can see if it’s close enough), which allows it to resume cleaning after emptying its bin or recharging. You’ll get a map of the area that the i3 has cleaned once it’s finished, but that map won’t persist between cleaning sessions, meaning that you can’t do things like set keep-out zones or identify specific rooms for the robot to clean. Many of the more useful features that iRobot’s app offers are based on persistent maps, and this is probably the biggest gap in functionality between the i3 and its more expensive siblings.
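If you’re curious what that style of navigation involves, here’s a minimal dead-reckoning sketch in Python. It’s an illustration only; the simple fusion rule (gyro for rotation, wheel odometry for distance) is our assumption, not iRobot’s actual, unpublished estimator:

```python
import math

def dead_reckon_step(x, y, heading, d_left, d_right, gyro_dtheta):
    """One pose update from wheel odometry plus a gyro (hypothetical
    simplified fusion: gyro supplies rotation, encoders supply distance)."""
    d_center = (d_left + d_right) / 2.0  # average wheel travel, meters
    heading += gyro_dtheta               # trust the gyro for heading change
    x += d_center * math.cos(heading)
    y += d_center * math.sin(heading)
    return x, y, heading

# Every step adds a little error, so the estimate drifts until the robot
# re-localizes against an absolute reference, such as the beacon on its dock.
```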
According to iRobot senior global product manager Sarah Wang, the kind of augmented dead-reckoning-based mapping that the i3 uses actually works really well: “Based on our internal and external testing, the performance is equivalent with our products that have cameras, like the Roomba 960,” she says. To get this level of performance, though, you do have to be careful, Wang adds. “If you kidnap i3, then it will be very confused, because it doesn’t have a reference to know where it is.” “Kidnapping” is a term that’s used often in robotics to refer to a situation in which an autonomous robot gets moved to an unmapped location, and in the context of a home robot, the best example of this is if you decide that you want your robot to vacuum a different room instead, so you pick it up and move it there.
iRobot used to make this easy by giving all of its robots carrying handles, but not anymore, because getting moved around makes things really difficult for any robot trying to keep track of where it is. While robots like the i7 can recover by using their cameras to look for unique features that they recognize, the only permanent, unique landmark that the i3 can reliably identify is the beacon on its dock. What this means is that when it comes to the i3, even more than other Roomba models, the best strategy is to just “let it do its thing,” says iRobot senior principal system engineer Landon Unninayar.
Photo: iRobot
The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400.
If you’re looking to spend a bit less than the $400 starting price of the i3, there are other options to be aware of as well. The Roomba 614, for example, does a totally decent job and costs $250. Its scheduling isn’t very clever, it doesn’t make maps, and it won’t empty itself, but it will absolutely help keep your floors clean as long as you don’t mind being a little bit more hands-on. (And there’s also Neato’s D4, which offers basic persistent maps—and lasers!—for $330.)
The other thing to consider if you’re trying to decide between the i3 and a more expensive Roomba is that without the camera, the i3 likely won’t be able to take advantage of nearly as many of the future improvements that iRobot has said it’s working on. Spending more money on a robot with additional sensors isn’t just buying what it can do now, but also investing in what it may be able to do later on, with its more sophisticated localization and ability to recognize objects. iRobot has promised major app updates every six months, and our guess is that most of the cool new stuff is going to show up in the i7 and s9. So, if your top priority is just cleaner floors, the i3 is a solid choice. But if you want to be part of what iRobot is working on next, the i3 might end up holding you back.
How We Won the DARPA SubT Challenge: ...
This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.
“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.
Team BARCS joins the SubT Virtual Track
The smoke incident happened more than a year after we first learned of the DARPA Subterranean Challenge. DARPA announced SubT early in 2018, and at that time, we were interested in building internal collaborations on multi-agent autonomy problems, and SubT seemed like the perfect opportunity. Though a few of us had backgrounds in robotics, the majority of our team was new to the field. We knew that submitting a proposal as a largely non-traditional robotics team from an organization not known for research in robotics was a risk. However, the Virtual Track gave us the opportunity to focus on autonomy and multi-agent teaming strategies, areas requiring skill in asynchronous computing and sensor data processing that are strengths of our Institute. The prevalence of open source code, small inexpensive platforms, and customizable sensors has given experts in fields other than robotics the opportunity to apply novel approaches to robotics problems. This is precisely what made the Virtual Track of SubT appealing to us, and since starting SubT, autonomy has developed into a significant research thrust for our Institute. Plus, robots are fun!
After many hours of research, discussion, and collaboration, we submitted our proposal early in 2018. And several months later, we found out that we had won a contract and became a funded team (Team BARCS) in the SubT Virtual Track. Now we needed to actually make our strategy work for the first SubT Tunnel Circuit competition, taking place in August of 2019.
Building a team of virtual robots
A natural approach to robotics competitions like SubT is to start with the question of “what can X-type robot do” and then build a team and strategy around individual capabilities. A particular challenge for the SubT Virtual Track is that we can’t design our own systems; instead, we have to choose from a predefined set of simulated robots and sensors that DARPA provides, based on the real robots used by Systems Track teams. Our approach is to look at what a team of robots can do together, determining experimentally what the best team configuration is for each environment. By the final competition, ideally we will be demonstrating the value of combining platforms across multiple Systems Track teams into a single Virtual Track team. Each of the robot configurations in the competition has an associated cost, and team size is constrained by a total cost. This provides another impetus for limiting dependence on complex sensor packages, though our ranging preference is 3D lidar, which is the most expensive sensor!
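As a toy illustration of this constraint (with made-up configuration names and costs, not DARPA’s actual price list), enumerating the team compositions that fit under a budget might look like this:

```python
from itertools import combinations_with_replacement

# Hypothetical config costs in competition credits; DARPA's real list differs.
CONFIGS = {"ugv_3d_lidar": 230, "ugv_camera": 120, "uav_small": 150}

def feasible_teams(budget, max_size=6):
    """Enumerate robot-team compositions whose total cost fits the budget.
    Brute force is fine at this scale; each feasible team would then be
    scored experimentally, environment by environment."""
    teams = []
    for size in range(1, max_size + 1):
        for combo in combinations_with_replacement(CONFIGS, size):
            cost = sum(CONFIGS[c] for c in combo)
            if cost <= budget:
                teams.append((combo, cost))
    return teams
```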
Image: Michigan Tech Research Institute
The teams can rely on realistic physics and sensors but they start off with no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for their simulated robots.
One of the frequent questions we receive about the Virtual Track is whether it’s like a video game. While it may look similar on the surface, everything under the hood in a video game is designed to serve the game narrative and play experience, not to require novel research in AI and autonomy. The purpose of simulations, on the other hand, is to include full physics and sensor models (including noise and errors) to provide a testbed for prototyping and developing solutions to those real-world challenges. We are starting with realistic physics and sensors but no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for our simulated robots.
Though the simulation is more like real life than a video game, it is not real life. Due to occasional software bugs, there are still non-physical events, like the robots falling through an invisible hole in the world or driving through a rock instead of over it or flipping head over heels when driving over a tiny lip between world tiles. These glitches, while sometimes frustrating, still allow the SubT Virtual platform to be realistic enough to support rapid prototyping of controller modules that will transition straightforwardly onto hardware, closing the loop between simulation and real-world robots.
Full autonomy for DARPA-hard scenarios
The Virtual Track requirement that the robotic agents be fully autonomous, rather than have a human supervisor, is a significant distinction between the Systems and Virtual Tracks of SubT. Our solutions must be hardened against software faults caused by things like missing and bad data since our robots can’t turn to us for help. In order for a team of robots to complete this objective reliably with no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to autonomously identify and manage faults and failures anywhere in the control chain.
The communications limitations in subterranean environments (both real and virtual) mean that we need to keep the amount of information shared between robots low, while making the usability of that information for joint decision-making high. This goal has guided much of our design for autonomous navigation and joint search strategy for our team. For example, instead of sharing the full SLAM map of the environment, each agent shares only a simplified graphical representation of the space, along with data about the frontiers it has not yet explored, and can merge this information with the graphs generated by other agents. The merged graph can then be used for planning and navigation without full knowledge of the detailed 3D map.
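As a simplified illustration of the idea (a toy, not our production code), merging two such lightweight graphs might look like the sketch below, with landmark nodes and a proximity test to decide that two robots saw the same junction:

```python
import math

def merge_graphs(nodes_a, edges_a, nodes_b, edges_b, match_radius=1.0):
    """Merge two robots' simplified exploration graphs.
    Nodes are {id: (x, y)} landmark positions, edges are (id, id) pairs,
    and ids are assumed globally unique (e.g., prefixed per robot)."""
    merged_nodes = dict(nodes_a)
    remap = {}
    for nid, pos in nodes_b.items():
        # If an existing node is close enough, treat it as the same place.
        match = next((mid for mid, mpos in merged_nodes.items()
                      if math.dist(pos, mpos) < match_radius), None)
        if match is None:
            merged_nodes[nid] = pos
            remap[nid] = nid
        else:
            remap[nid] = match
    merged_edges = set(edges_a)
    merged_edges.update((remap[u], remap[v]) for u, v in edges_b)
    return merged_nodes, merged_edges
```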
The Virtual Track requires that the robotic agents be fully autonomous. With no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to identify and manage faults and failures anywhere in the control chain.
Since the objective of the SubT program is to advance the state of the art in rapid autonomous exploration and mapping of subterranean environments by robots, our first software design choices focused on the mapping task. The SubT virtual environments are rich enough to provide interesting problems in building so-called costmaps that accurately separate obstructions that are traversable (like ramps) from legitimately impassable obstructions. An extra complication we discovered in the first course, which took place in mining tunnels, was that the angle of the lowest beam of the lidar was parallel to the down ramps in the tunnel environment, so the robots could not “see” the ground (or sometimes even obstructions on the ramp) until they got close enough to the lip of the ramp to receive lidar reflections off the bottom of the ramp. In this case, we not only had to change the costmap to convince the robot that there was safe ground to reach over the lip of the ramp, but also had to change the path planner to get the robot to proceed with caution onto the top of the ramp in case there were previously unseen obstructions on it.
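In toy form, the costmap-building step looks like the sketch below: classify traversability from a 2.5D height grid, and keep cells with no lidar returns at a cautious intermediate cost rather than marking them lethal, in the same spirit as our ramp fix. The thresholds and grid conventions here are illustrative, not our actual parameters:

```python
import numpy as np

def costmap_from_heights(height, cell_size, max_slope=0.35):
    """Toy traversability costmap from a 2.5D height grid (NaN = no data).
    0 = free (flat ground or gentle ramp), 100 = lethal (wall or ledge),
    50 = unknown, to be approached with caution rather than avoided."""
    cost = np.full(height.shape, 50, dtype=np.uint8)   # default: unknown
    gy, gx = np.gradient(np.nan_to_num(height, nan=0.0), cell_size)
    slope = np.hypot(gx, gy)                           # rise over run
    known = ~np.isnan(height)
    # Note: gradients at the known/unknown border are noisy; a real
    # pipeline treats those edge cells more carefully than this toy does.
    cost[known & (slope <= max_slope)] = 0             # traversable
    cost[known & (slope > max_slope)] = 100            # impassable
    return cost
```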
In addition to navigation in the costmaps, the robot must be able to generate its own goals to navigate to. This is what produces exploratory behavior when there is no map to start with. SLAM is used to generate a detailed map of the environment explored by a single robot—the space it has probed with its sensors. From the sensor data, we are able to extract information about the interior space of the environment while looking for holes in the data, to determine things like whether the current tunnel continues or ends, or how many tunnels meet at an intersection. Once we have some understanding of the interior space, we can place navigation goals in that space. These goals naturally update as the robot traverses the tunnel, allowing the entire space to be explored.
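Stripped to its essentials, this goal-generation step is classic frontier detection: find free cells that border unknown space, cluster them, and aim the robot at the clusters. A minimal sketch, assuming a standard occupancy-grid convention and SciPy for the clustering:

```python
import numpy as np
from scipy import ndimage  # assumed available; used only for clustering

FREE, UNKNOWN, OCCUPIED = 0, -1, 100  # common occupancy-grid convention

def frontier_goals(grid):
    """Return one (row, col) goal per connected cluster of frontier cells.
    A frontier cell is free space adjacent to at least one unknown cell:
    a 'hole in the data,' such as a tunnel that keeps going."""
    near_unknown = ndimage.binary_dilation(grid == UNKNOWN)
    frontier = (grid == FREE) & near_unknown
    labels, n = ndimage.label(frontier)
    return [tuple(np.argwhere(labels == k).mean(axis=0))
            for k in range(1, n + 1)]
```

As the robot drives and the grid fills in, the frontier set shrinks and shifts, which is exactly what lets the goals naturally update until the space is fully explored.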
Sending our robots into the virtual unknown
The solutions for the Virtual Track competitions are tested by DARPA in multiple sequestered runs across many environments for each Circuit in the month prior to the Systems Track competition. We must wait until the joint award ceremony at the conclusion of the Systems Track to find out the results, and we are completely in the dark about placings before the awards are announced. It’s nerve-wracking! The worlds used in the Circuit events are also hand-designed, so the features of the worlds we use for development could be combined in ways we have not anticipated—it’s always interesting to see what features were prioritized after the event. We test everything in our controllers well enough to feel confident that we are at least submitting something reasonably stable and broadly capable, and once the solution is in, we can’t really do anything other than “let go” and get back to work on the next phase of development. Maybe it’s somewhat like sending your kid to college: “We did our best to prepare you for this world, little bots. Go do good.”
Image: Michigan Tech Research Institute
The first SubT competition was the Tunnel Circuit, featuring a labyrinthine environment that simulated human-engineered tunnels, including hazards such as vertical shafts and rubble.
The first competition was the Tunnel Circuit, in October 2019. This environment models human-engineered tunnels. Two substantial challenges in this environment were vertical shafts and rubble. Our team accrued 21 points over 15 competition runs in five separate tunnel environments for a second place finish, behind Team Coordinated Robotics.
The next phase of the SubT virtual competition was the Urban Circuit. Much of the difference between our Tunnel and Urban Circuit results came down to thorough testing to identify failure modes, and to implementing checks and data filtering for fault tolerance. For example, in the SLAM nodes run by a single robot, the coordinates of the most recent sensor data are changed multiple times during processing and integration into the current global 3D map of the “visited” environment stored by that robot. If there is lag in IMU or clock data, the observation may be temporarily registered at a default location that is very far from the actual position. Since most of our decision processes for exploration are downstream from SLAM, this can cause faulty or impossible goals to be generated, and the robots then spend inordinate amounts of time trying to drive through walls. We updated our method with a check: if the new map position has jumped a large distance from the prior map position, we throw that data out.
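The check itself is simple; this sketch gives the flavor (the threshold is a placeholder, not our tuned value):

```python
import math

def accept_slam_update(prev_xy, new_xy, max_jump_m=5.0):
    """Reject pose registrations that 'teleport': if a lagged IMU or
    clock stamp places an observation implausibly far from the prior
    position, drop it before it poisons goal generation downstream."""
    return math.dist(prev_xy, new_xy) <= max_jump_m

# Usage: integrate the observation only if accept_slam_update(last, new).
```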
Image: Michigan Tech Research Institute
In open spaces like the rooms in the Urban Circuit, we adjusted our approach to exploration through graph generation to allow the robots to accurately identify viable routes while helping to prevent forays off platform edges.
Our approach to exploration through graph generation based on identification of interior spaces allowed us to thoroughly explore the centers of rooms, although we did have to make some changes from the Tunnel Circuit to achieve that. In the Tunnel Circuit, we used a simplified graph of the environment based on landmarks like intersections. The advantage of this approach is that it is straightforward for two robots to compare how the graphs of the space they explored individually overlap. In open spaces like the rooms in the Urban Circuit, we chose to instead use a more complex, less directly comparable graph structure based on the individual robot’s trajectory. This allowed the robots to accurately identify viable routes between features like subway station platforms and subway tracks, as well as to build up the navigation space for room interiors, while helping to prevent forays off the platform edges. Frontier information is also integrated into the graph, providing a uniform data structure for both goal selection and route planning.
The results are in!
The award ceremony for the Urban Circuit was held concurrently with the Systems Track competition awards this past February in Washington State. We sent a team representative to participate in the Technical Interchange Meeting and present the approach for our team, and the rest of us followed along from our office space on the DARPAtv live stream. While we were confident in our solution, we had also been tracking the online leaderboard and knew our competitors were going to be submitting strong solutions. Since the competition environments are hand-designed, there are always novel challenges that could be presented in these environments as well. We knew we would put up a good fight, but it was very exciting to see BARCS appear in first place!
Any time we implement a new module in our control system, there is a lot of parameter tuning that has to happen to produce reliably good autonomous behavior. In the Urban Circuit, we did not sufficiently test some parameter values in our exploration modules. The effect of this was that the robots only chose to go down small hallways after they explored everything else in their environment, which meant very often they ran out of time and missed a lot of small rooms. This may be the biggest source of lost points for us in the Urban Circuit. One of our major plans going forward from the Urban Circuit is to integrate more sophisticated node selection methods, which can help our robots more intelligently prioritize which frontier nodes to visit. By going through all three Circuit challenges, we will learn how to appropriately add weights to the frontiers based on features of the individual environments. For the Final Challenge, when all three Circuit environments will be combined into large systems, we plan to implement adaptive controllers that will identify their environments and use the appropriate optimized parameters for that environment. In this way, we expect our agents to be able to (for example) prioritize hallways and other small spaces in Urban environments, and perhaps prioritize large openings over small in the Cave environments, if the small openings end up being treacherous overall.
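In outline, that adaptive weighting could look like the sketch below, where each environment type selects its own frontier-scoring parameters. The weight names and values are placeholders for illustration, not tuned results:

```python
import math

# Placeholder per-environment parameters for the planned adaptive
# controllers; the real values would be learned across the Circuits.
ENV_WEIGHTS = {
    "urban": {"distance": 1.0, "narrow_bonus": 10.0},  # visit hallways early
    "cave":  {"distance": 1.0, "narrow_bonus": -5.0},  # avoid risky squeezes
}

def pick_frontier(frontiers, robot_xy, env):
    """Choose the next frontier node; each frontier is a dict like
    {"pos": (x, y), "narrow": bool}. Score = environment-specific
    bonus minus weighted travel distance."""
    w = ENV_WEIGHTS[env]

    def score(f):
        s = -w["distance"] * math.dist(f["pos"], robot_xy)
        if f.get("narrow"):
            s += w["narrow_bonus"]
        return s

    return max(frontiers, key=score)
```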
Next for our team: Cave Circuit
Coming up next for Team BARCS is the Virtual Cave Circuit. We are in the middle of testing our hypothesis that our controller will transition from UGVs to UAVs and developing strategies for refining our solution to handle Cave Circuit environmental hazards. The UAVs have a shorter battery life than the UGVs, so executing a joint exploration strategy will also be a high priority for this event, as will completing our work on graph sharing and merging, which will give our robot teams more sophisticated options for navigation and teamwork. We’re reaching a threshold in development where we can start increasing the “smarts” of the robots, which we anticipate will be critical for the final competition, where all of the challenges of SubT will be combined to push the limits of innovation. The Cave Circuit will also have new environmental challenges to tackle: dynamic features such as rock falls have been added, which will block previously accessible passages in the cave environment. We think our controllers are well-poised to handle this new challenge, and we’re eager to find out if that’s the case.
As of now, the biggest worries for us are time and team composition. The Cave Circuit deadline has been postponed to October 15 due to COVID-19 delays, with the award ceremony in mid-November, but there have also been several very compelling additions to the testbed that we would like to experiment with before submission, including droppable networking ‘breadcrumbs’ and new simulated platforms. There are design trade-offs when balancing general versus specialist approaches to the controllers for these robots—since we are adding UAVs to our team for the first time, there are new decisions that will have to be made. For example, the UAVs can ascend into vertical spaces, but only have a battery life of 20 minutes. The UGVs, by contrast, have a 90-minute battery life. One of our strategies is to do an early return to base with one or more agents to buy down risk on making any artifact reports at all for the run, hedging against our other robots not making it back in time, a lesson learned from the Tunnel Circuit. Should a UAV take on this role, or is it better to have it explore deeper into the environment and instead report its artifacts to a UGV or network node, which comes with its own risks? Testing and experimentation to determine the best options takes time, which is always a worry when preparing for a competition! We also anticipate new competitors and stiffer competition all around.
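The return-to-base trade-off can be sketched as a simple endurance check; the margin and the rule itself are illustrative, not our tuned policy:

```python
def should_return(elapsed_s, battery_life_s, time_home_s, margin=1.25):
    """Trigger an early return to base once the remaining endurance
    barely covers the trip home plus a safety margin, so at least one
    robot's artifact reports are banked for the run."""
    remaining = battery_life_s - elapsed_s
    return remaining <= margin * time_home_s

# Example: a UAV with a 20-minute battery, 16 minutes in, 4 minutes from home:
# should_return(16*60, 20*60, 4*60) -> True, time to head back.
```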
Image: Michigan Tech Research Institute
Team BARCS now has a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021.
Going forward from the Cave Circuit, we will have a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021. What we are most excited about is increasing the level of intelligence of the agents in their teamwork and joint exploration of the environment. Since we will have (hopefully) built up robust approaches to handling each of the specific types of environments in the Tunnel, Urban, and Cave circuits, we will be aiming to push the limits on collaboration and efficiency among the agents in our team. We view this as a central research contribution of the Virtual Track to the Subterranean Challenge because intelligent, adaptive, multi-robot collaboration is an upcoming stage of development for integration of robots into our lives.
The Subterranean Challenge Virtual Track gives us a bridge for transitioning our more abstract research ideas and algorithms relevant to this degree of autonomy and collaboration onto physical systems, and exploring the tangible outcomes of implementing our work in the real world. And the next time there’s an incident in the basement of our building, the robots (and humans) of Team BARCS will be ready to respond.
Richard Chase, Ph.D., P.E., is a research scientist at Michigan Tech Research Institute (MTRI) and has 20 years of experience developing robotics and cyber physical systems in areas from remote sensing to autonomous vehicles. At MTRI, he works on a variety of topics such as swarm autonomy, human-swarm teaming, and autonomous vehicles. His research interests are the intersection of design, robotics, and embedded systems.
Sarah Kitchen is a Ph.D. mathematician working as a research scientist and an AI/Robotics focus area leader at MTRI. Her research interests include intelligent autonomous agents and multi-agent collaborative teams, as well as applications of autonomous robots to sensing systems.
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001118C0124 and is released under Distribution Statement (Approved for Public Release, Distribution Unlimited). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
After AI, Fashion and Shopping Will ...
AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.
What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.
Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.
Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.
Let’s dive in.
A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.
No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.
Instead, you’ve entered your own personal clothing store. Everything is in your exact size…. And I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.
When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.
The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…
Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.
Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy that exact right thing most of the time.
For the rest of us, enter the digital assistant.
Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google’s Now, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.
For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, it’s just the next logical step in a world that is auto-magical.
And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.
We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.
And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.
Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.
Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.
According to a recent Zendesk study, good customer service increases the possibility of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.
During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Duplex. The first call made a restaurant reservation; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.
In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:
(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.
Based on research of over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. Already it’s been integrated in call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant, and also more profitable.
For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.
(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making it one of many companies helping to pioneer the field of emotionally intelligent computing.
With their technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.
The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machines avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.
For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.
This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and it raises the scale of disruption, upping the FQ—the frictionless quotient—in our frictionless shopping adventure.
Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.
Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”
Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.
Either way, it’s a top-to-bottom transformation of the retail world.
Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.
Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: Image by Pexels from Pixabay
Video Friday: Robotic Endoscope Travels ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, WA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Kuka has just announced the results of its annual Innovation Award. From an initial batch of 30 applicants, five teams reached the finals (we were part of the judging committee). The five finalists worked for nearly a year on their applications, which they demonstrated this week at the Medica trade show in Düsseldorf, Germany. And the winner of the €20,000 prize is…Team RoboFORCE, led by the STORM Lab in the U.K., which developed a “robotic magnetic flexible endoscope for painless colorectal cancer screening, surveillance, and intervention.”
The system could improve colonoscopy procedures by reducing pain and discomfort as well as other risks such as bleeding and perforation, according to the STORM Lab researchers. It uses a magnetic field to control the endoscope, pulling rather than pushing it through the colon.
The other four finalists also presented some really interesting applications—you can see their videos below.
“Because we were so pleased with the high quality of the submissions, we will have next year’s finals again at the Medica fair, and the challenge will be named ‘Medical Robotics’,” says Rainer Bischoff, vice president for corporate research at Kuka. He adds that the selected teams will again use Kuka’s LBR Med robot arm, which is “already certified for integration into medical products and makes it particularly easy for startups to use a robot as the main component for a particular solution.”
Applications are now open for Kuka’s Innovation Award 2020. You can find more information on how to enter here. The deadline is 5 January 2020.
[ Kuka ]
Oh good, Aibo needs to be fed now.
You know what comes next, right?
[ Aibo ]
Your cat needs this robot.
It's about $200 on Kickstarter.
[ Kickstarter ]
Enjoy this tour of the Skydio offices courtesy Skydio 2, which runs into not even one single thing.
If any Skydio employees had important piles of papers on their desks, well, they don’t anymore.
[ Skydio ]
Artificial intelligence is everywhere nowadays, but what exactly does it mean? We asked a group of MIT computer science grad students and post-docs how they personally define AI.
“When most people say AI, they actually mean machine learning, which is just pattern recognition.” Yup.
[ MIT ]
Using event-based cameras, this drone control system can track attitude at 1600 degrees per second (!).
[ UZH ]
Introduced at CES 2018, Walker is an intelligent humanoid service robot from UBTECH Robotics. Below are the latest features and technologies used during our latest round of development to make Walker even better.
[ Ubtech ]
Introducing the Alpha Prime by #VelodyneLidar, the most advanced lidar sensor on the market! Alpha Prime delivers an unrivaled combination of field-of-view, range, high-resolution, clarity and operational performance.
Performance looks good, but don’t expect it to be cheap.
[ Velodyne ]
Ghost Robotics’ Spirit 40 will start shipping to researchers in January of next year.
[ Ghost Robotics ]
Unitree is about to ship the first batch of their AlienGo quadrupeds as well:
[ Unitree ]
Mechanical engineering’s Sarah Bergbreiter discusses her work on micro robotics, how they draw inspiration from insects and animals, and how tiny robots can help humans in a variety of fields.
[ CMU ]
Learning contact-rich, robotic manipulation skills is a challenging problem due to the high-dimensionality of the state and action space as well as uncertainty from noisy sensors and inaccurate motor control. To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment. By adopting a similar strategy, robots can also achieve more robust manipulation. In this paper, we enable a robot to autonomously modify its environment and thereby discover how to ease manipulation skill learning. Specifically, we provide the robot with fixtures that it can freely place within the environment. These fixtures provide hard constraints that limit the outcome of robot actions. Thereby, they funnel uncertainty from perception and motor control and scaffold manipulation skill learning.
[ Stanford ]
Since 2016, Verity's drones have completed more than 200,000 flights around the world. Completely autonomous, client-operated and designed for live events, Verity is making the magic real by turning drones into flying lights, characters, and props.
[ Verity ]
To monitor and stop the spread of wildfires, University of Michigan engineers developed UAVs that could find, map and report fires. One day UAVs like this could work with disaster response units, firefighters and other emergency teams to provide real-time accurate information to reduce damage and save lives. For their research, the University of Michigan graduate students won first place at a competition for using a swarm of UAVs to successfully map and report simulated wildfires.
[ University of Michigan ]
Here’s an important issue that I haven’t heard talked about all that much: How first responders should interact with self-driving cars.
“To put the car in manual mode, you must call Waymo.” Huh.
[ Waymo ]
Here’s what Gitai has been up to recently, from a Humanoids 2019 workshop talk.
[ Gitai ]
The latest CMU RI seminar comes from Girish Chowdhary at the University of Illinois at Urbana-Champaign on “Autonomous and Intelligent Robots in Unstructured Field Environments.”
What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms! Teams of small aerial and ground robots could be a potential solution to many of the serious problems that modern agriculture is facing. However, fully autonomous robots that operate without supervision for weeks, months, or entire growing season are not yet practical. I will discuss my group’s theoretical and practical work towards the underlying challenging problems in robotic systems, autonomy, sensing, and learning. I will begin with our lightweight, compact, and autonomous field robot TerraSentia and the recent successes of this type of undercanopy robots for high-throughput phenotyping with deep learning-based machine vision. I will also discuss how to make a team of autonomous robots learn to coordinate to weed large agricultural farms under partial observability. These direct applications will help me make the case for the type of reinforcement learning and adaptive control that are necessary to usher in the next generation of autonomous field robots that learn to solve complex problems in harsh, changing, and dynamic environments. I will then end with an overview of our new MURI, in which we are working towards developing AI and control that leverages neurodynamics inspired by the Octopus brain.
[ CMU RI ]
Video Friday: Roller-Skating Quadruped ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, CA, USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today's videos.
We got a sneak peek of a new version of ANYmal equipped with actuated wheels for feet at the DARPA SubT Challenge, where it did surprisingly well at quickly and (mostly) robustly navigating some very tricky terrain. And when you're not expecting it to travel through a muddy, rocky, and dark tunnel, it looks even more capable:
[ Paper ]
Thanks Marko!
In Langley’s makerspace lab, researchers are developing a series of soft robot actuators to investigate the viability of soft robotics in space exploration and assembly. By design, the actuator has chambers, or air bladders, that expand and compress based on the amount of air in them.
[ NASA ]
I’m not normally a fan of the AdultSize RoboCup soccer competition, but NimbRo had a very impressive season.
I don’t know how it managed to not fall over at 45 seconds, but damn.
[ NimbRo ]
This is more AI than robotics, but that’s okay, because it’s totally cool.
I’m wondering whether the hiders ever tried another possibly effective strategy: trapping the seekers in a locked shelter right at the start.
[ OpenAI ]
We haven’t heard much from Piaggio Fast Forward in a while, but evidently they’ve still got a Gita robot going on, designed to be your personal autonomous caddy for absolutely anything that can fit into something the size of a portable cooler.
Available this fall, I guess?
[ Gita ]
This passively triggered robotic hand is startlingly fast, and seems almost predatory when it grabs stuff, especially once they fit it onto a drone.
[ New Dexterity ]
Thanks Fan!
Autonomous vehicles seem like a recent thing, but CMU has been working on them since the mid-1980s.
CMU was also working on drones back before drones were even really a thing:
[ CMU NavLab ] and [ CMU ]
Welcome to the most complicated and expensive robotic ice cream deployment system ever created.
[ Niska ]
Some impressive dexterity from a robot hand equipped with magnetic gears.
[ Ishikawa Senoo Lab ]
The Buddy Arduino social robot kit is now live on Kickstarter, and you can pledge for one of these little dudes for 49 bucks.
[ Kickstarter ]
Thanks Jenny!
Mobile manipulation robots have high potential to support rescue forces in disaster-response missions. Despite the difficulties imposed by real-world scenarios, robots are promising to perform mission tasks from a safe distance. In the CENTAURO project, we developed a disaster-response system which consists of the highly flexible Centauro robot and suitable control interfaces including an immersive telepresence suit and support-operator controls on different levels of autonomy.
[ CENTAURO ]
Thanks Sven!
Determined robots are the cutest robots.
[ Paper ]
The goal of the Dronument project is to create an aerial platform enabling interior and exterior documentation of heritage sites.
It’s got a base station that helps with localization, but still, flying that close to a chandelier in a UNESCO world heritage site makes me nervous.
[ Dronument ]
Thanks Fan!
Avast ye! No hornswaggling, lick-spittlering, or run-rigging over here – Only serious tech for devs. All hands hoay to check out Misty's capabilities and to build your own skills with plenty of heave ho! ARRRRRRRRGH…
International Talk Like a Pirate Day was yesterday, but I'm sure nobody will look at you funny if you keep at it today too.
[ Misty Robotics ]
This video presents an unobtrusive bimanual teleoperation setup with very low weight, consisting of two Vive visual motion trackers and two Myo surface electromyography bracelets. The video demonstrates complex, dexterous teleoperated bimanual daily-living tasks performed by the torque-controlled humanoid robot TORO.
[ DLR RMC ]
Lex Fridman interviews iRobot’s Colin Angle on the Artificial Intelligence Podcast.
Colin Angle is the CEO and co-founder of iRobot, a robotics company that for 29 years has been creating robots that operate successfully in the real world, not as a demo or on a scale of dozens, but on a scale of thousands and millions. As of this year, iRobot has sold more than 25 million robots to consumers, including the Roomba vacuum cleaning robot, the Braava floor mopping robot, and soon the Terra lawn mowing robot. 25 million robots successfully operating autonomously in people's homes to me is an incredible accomplishment of science, engineering, logistics, and all kinds of entrepreneurial innovation.
[ AI Podcast ]
This week’s CMU RI Seminar comes from CMU’s own Sarah Bergbreiter, on Microsystems-Inspired Robotics.
The ability to manufacture micro-scale sensors and actuators has inspired the robotics community for over 30 years. There have been huge success stories; MEMS inertial sensors have enabled an entire market of low-cost, small UAVs. However, the promise of ant-scale robots has largely failed. Ants can move at high speeds on surfaces from picnic tables to front lawns, but the few legged microrobots that have walked have done so at slow speeds (< 1 body length/sec) on smooth silicon wafers. In addition, the vision of large numbers of microfabricated sensors interacting directly with the environment has suffered in part due to the brittle materials used in micro-fabrication. This talk will present our progress in the design of sensors, mechanisms, and actuators that utilize new microfabrication processes to incorporate materials with widely varying moduli and functionality to achieve more robustness, dynamic range, and complexity in smaller packages.
[ CMU RI ]