#436114 Video Friday: Transferring Human Motion ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
We are very sad to say that MIT professor emeritus Woodie Flowers has passed away. Flowers will be remembered for (among many other things, like co-founding FIRST) the MIT 2.007 course that he began teaching in the mid-1970s, famous for its student competitions.
These competitions got a bunch of well-deserved publicity over the years; here’s one from 1985:
And the 2.007 competitions are still going strong—this year’s theme was Moonshot, and you can watch a replay of the event here.
[ MIT ]
Looks like Aibo is getting wireless integration with Hitachi appliances, which turns out to be pretty cute:
What is this magical box where you push a button and 60 seconds later fluffy pancakes come out?!
[ Aibo ]
LiftTiles are a “modular and reconfigurable room-scale shape display” that can turn your floor and walls into on-demand structures.
[ LiftTiles ]
Ben Katz, a grad student in MIT’s Biomimetics Robotics Lab, has been working on these beautiful desktop-sized Furuta pendulums:
That’s a crowdfunding project I’d pay way too much for.
[ Ben Katz ]
A clever bit of cable manipulation from MIT, using GelSight tactile sensors.
[ Paper ]
A useful display of industrial autonomy on ANYmal from the Oxford Robotics Institute.
This video is of a demonstration for the ORCA Robotics Hub showing the ANYbotics ANYmal robot carrying out industrial inspection using autonomy software from Oxford Robotics Institute.
[ ORCA Hub ] via [ DRS ]
Thanks Maurice!
Meet Katie Hamilton, a software engineer at NASA’s Ames Research Center, who got into robotics because she wanted to help people with daily life. Katie writes code for robots like Astrobee, which assist astronauts with routine tasks on the International Space Station.
[ NASA Astrobee ]
Transferring human motion to a mobile robotic manipulator and ensuring safe physical human-robot interaction are crucial steps towards automating complex manipulation tasks in human-shared environments. In this work we present a robot whole-body teleoperation framework for human motion transfer. We validate our approach through several experiments using the TIAGo robot, showing this could be an easy way for a non-expert to teach a rough manipulation skill to an assistive robot.
[ Paper ]
This is pretty cool looking for an autonomous boat, but we’ll see if they can build a real one by 2020 since at the moment it’s just an average rendering.
[ ProMare ]
I had no idea that asparagus grows like this. But it sure does make it easy for a robot to harvest.
[ Inaho ]
Skip to 2:30 in this Pepper unboxing video to hear the noise it makes when tickled.
[ HIT Lab NZ ]
In this interview, Jean-Paul Laumond discusses his move from mathematics to robotics and his career contributions to the field, especially with regard to motion planning and anthropomorphic motion. Describing his involvement at CNRS and in other robotics projects, such as HILARE, he comments on the distinction in perspective between the robotics approach and a mathematics one.
[ IEEE RAS History ]
Here are a couple of videos from the CMU Robotics Institute archives, showing some of the work that took place over the last few decades.
[ CMU RI ]
In this episode of the Artificial Intelligence Podcast, Lex Fridman speaks with David Ferrucci from IBM about Watson and (you guessed it) artificial intelligence.
David Ferrucci led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy. He is also the Founder, CEO, and Chief Scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This conversation is part of the Artificial Intelligence podcast.
[ AI Podcast ]
This week’s CMU RI Seminar is by Pieter Abbeel from UC Berkeley, on “Deep Learning for Robotics.”
Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what otherwise often ends up being time-consuming, task-specific programming. This talk will describe recent progress in deep reinforcement learning (robots learning through their own trial and error), in apprenticeship learning (robots learning from observing people), and in meta-learning for action (robots learning to learn). This work has led to new robotic capabilities in manipulation, locomotion, and flight, with the same approach underlying advances in each of these domains.
[ CMU RI ]
#436100 Labrador Systems Developing Affordable ...
Developing robots for the home is still a challenge, especially if you want those robots to interact with people and help them do practical, useful things. However, the potential markets for home robots are huge, and one of the most compelling markets is for home robots that can assist humans who need them. Today, Labrador Systems, a startup based in California, is announcing a pre-seed funding round of $2 million (led by SOSV’s hardware accelerator HAX with participation from Amazon’s Alexa Fund and iRobot Ventures, among others) with the goal of expanding development and conducting pilot studies of “a new [assistive robot] platform for supporting home health.”
Labrador was founded two years ago by Mike Dooley and Nikolai Romanov. Both Mike and Nikolai have backgrounds in consumer robotics at Evolution Robotics and iRobot, but as an ’80s gamer, Mike’s bio (or at least the parts of his bio on LinkedIn) caught my attention: From 1995 to 1997, Mike worked at Brøderbund Software, helping to manage play testing for games like Myst and Riven and the Where in the World Is Carmen Sandiego? series. He then spent three years at Lego as the product manager for Mindstorms. After doing some marginally less interesting things, Mike was the VP of product development at Evolution Robotics from 2006 to 2012, where he led the team that developed the Mint floor-sweeping robot. Evolution was acquired by iRobot in 2012, and Mike ended up as the VP of product development over there until 2017, when he co-founded Labrador.
I was pretty much sold at Where in the World Is Carmen Sandiego? (the original version of which I played from a 5.25” floppy on my dad’s Apple IIe)*, but as you can see from all that other stuff, Mike knows what he’s doing in robotics as well.
And according to Labrador’s press release, what they’re doing is this:
Labrador Systems is an early stage technology company developing a new generation of assistive robots to help people live more independently. The company’s core focus is creating affordable solutions that address practical and physical needs at a fraction of the cost of commercial robots. … Labrador’s technology platform offers an affordable solution to improve the quality of care while promoting independence and successful aging.
Labrador’s personal robot, the company’s first offering, will enter pilot studies in 2020.
That’s about as light on detail as a press release gets, but there’s a bit more on Labrador’s website, including:
Our core focus is creating affordable solutions that address practical and physical needs. (we are not a social robot company)
By affordable, we mean products and technologies that will be available at less than 1/10th the cost of commercial robots.
We achieve those low costs by fusing the latest technologies coming out of augmented reality with robotics to move things in the real world.
The only hardware we’ve actually seen from Labrador at this point is a demo that they put together for Amazon’s re:MARS conference, which took place a few months ago, showing a “demonstration project” called Smart Walker:
This isn’t the home assistance robot that Labrador got its funding for, but rather a demonstration of some of their technology. So of course, the question is, what’s Labrador working on, then? It’s still a secret, but Mike Dooley was able to give us a few more details.
IEEE Spectrum: Your website shows a smart walker concept—how is that related to the assistive robot that you’re working on?
Mike Dooley: The smart walker was a request from a major senior living organization to have our robot (which is really good at navigation) guide residents from place to place within their communities. To test the idea with residents, it turned out to be much quicker to take the navigation system from the robot and put it on an existing rollator walker. So when you see the clips of the technology in the smart walker video on our website, that’s actually the robot’s navigation system localizing in real time and path planning in an environment.
“Assistive robot” can cover a huge range of designs and capabilities—can you give us any more detail about your robot, and what it’ll be able to do?
One of the core features of our robot is to help people move things where they have difficulty moving themselves, particularly in the home setting. That may sound trivial, but to someone who has impaired mobility, it can be a major daily challenge and negatively impact their life and health in a number of ways. Some examples we repeatedly hear are people not staying hydrated or taking their medication on time simply because there is a distance between where they are and the items they need. Once we have those base capabilities, i.e. the ability to navigate around a home and move things within it, then the robot becomes a platform for a wider variety of applications.
What made you decide to develop assistive robots, and why are robots a good solution for seniors who want to live independently?
Supporting independent living has been seen as a massive opportunity in robotics for some time, but also as something off in the future. The turning point for me was watching my mother enter that stage in her life and seeing her transition to using a cane, then a walker, and eventually to a wheelchair. That made the problems very real for me. It also made things much clearer about how we could start addressing specific needs with the tools that are becoming available now.
In terms of why robots can be a good solution, the basic answer is the level of need is so overwhelming that even helping with “basic” tasks can make an appreciable difference in the quality of someone’s daily life. It’s also very much about giving individuals a degree of control back over their environment. That applies to seniors as well as others whose world starts getting more complex to manage as their abilities become more impaired.
What are the particular challenges of developing assistive robots, and how are you addressing them? Why do you think there aren’t more robotics startups in this space?
The setting (operating in homes and personal spaces) and the core purpose of the product (aiding a wide variety of individuals) bring a lot of complexity to any capability you want to build into an assistive robot. Our approach is to put as much structure as we can into the system to make it functional, affordable, understandable and reliable.
I think one of the reasons you don’t see more startups in the space is that a lot of roboticists want to skip ahead and do the fancy stuff, such as taking on human-level capabilities around things like manipulation. Those are very interesting research topics, but we think those are also very far away from being practical solutions you can productize for people to use in their homes.
How do you think assistive robots and human caregivers should work together?
The ideal scenario is allowing caregivers to focus more of their time on the high-touch, personal side of care. The robot can offload the more basic support tasks as well as extend the impact of the caregiver for the long hours of the day they can’t be with someone at their home. We see that applying both to paid care providers and to the 40 million unpaid family members and friends that provide assistance.
The robot is really there as a tool, both for individuals in need and the people that help them. What’s promising in the research discussions we’ve had so far is that even when a caregiver is present, giving control back to the individual for simple things can mean a lot in the relationship between them and the caregiver.
What should we look forward to from Labrador in 2020?
Our big goal in 2020 is to start placing the next version of the robot with individuals with different types of needs to let them experience it naturally in their own homes and provide feedback on what they like, what they don’t like, and how we can make it better. We are currently reaching out to companies in the healthcare and home health fields to participate in those studies and test specific applications related to their services. We plan to share more detail about those studies and the robot itself as we get further into 2020.
If you’re an organization (or individual) who wants to possibly try out Labrador’s prototype, the company encourages you to connect with them through their website. And as we learn more about what Labrador is up to, we’ll have updates for you, presumably in 2020.
[ Labrador Systems ]
* I just lost an hour of my life after finding out that you can play Where in the World Is Carmen Sandiego? in your browser for free.
#436094 Agility Robotics Unveils Upgraded Digit ...
Last time we saw Agility Robotics’ Digit biped, it was picking up a box from a Ford delivery van and autonomously dropping it off on a porch, while at the same time managing to not trip over stairs, grass, or small children. As a demo, it was pretty impressive, but of course there’s an enormous gap between making a video of a robot doing a successful autonomous delivery and letting that robot out into the semi-structured world and expecting it to reliably do a good job.
Agility Robotics is aware of this, of course, and over the last six months they’ve been making substantial improvements to Digit to make it more capable and robust. A new video posted today shows what’s new with the latest version of Digit—Digit v2.
We appreciate Agility Robotics forgoing music in the video, which lets us hear exactly what Digit sounds like in operation. The most noticeable changes are in Digit’s feet, torso, and arms, and I was particularly impressed to see Digit reposition the box on the table before grasping it to make sure that it could get a good grip. Otherwise, it’s hard to tell what’s new, so we asked Agility Robotics’ CEO Damion Shelton to get us up to speed.
IEEE Spectrum: Can you summarize the differences between Digit v1 and v2? We’re particularly interested in the new feet.
Damion Shelton: The feet now include a roll degree of freedom, so that Digit can resist lateral forces without needing to side step. This allows Digit v2 to balance on one foot statically, which Digit v1 and Cassie could not do. The larger foot also dramatically decreases load per unit area, for improved performance on very soft surfaces like sand.
The perception stack includes four Intel RealSense cameras used for obstacle detection and pick/place, plus the lidar. In Digit v1, the perception systems were brought up incrementally over time for development purposes. In Digit v2, all perception systems are active from the beginning and tied to a dedicated computer. The perception system is used for a number of additional things beyond manipulation, which we’ll start to show in the next few weeks.
The torso changes are a bit more behind-the-scenes. All of the electronics in it are now fully custom, thermally managed, and environmentally sealed. We’ve also included power and ethernet to a payload bay that can fit either a NUC or Jetson module (or other customer payload).
What exactly are we seeing in the video in terms of Digit’s autonomous capabilities?
At the moment this is a demonstration of shared autonomy. Picking and placing the box is fully autonomous. Balance and footstep placement are fully autonomous, but guidance and obstacle avoidance are under local teleop. It’s no longer a radio controller as in early videos; we’re not ready to reveal our current controller design but it’s a reasonably significant upgrade. This is v2 hardware, so there’s one more full version in development prior to the 2020 launch, which will expand the autonomy envelope significantly.
“This is a demonstration of shared autonomy. Picking and placing the box is fully autonomous. Balance and footstep placement are fully autonomous, but guidance and obstacle avoidance are under local teleop. It’s no longer a radio controller as in early videos; we’re not ready to reveal our current controller design but it’s a reasonably significant upgrade”
—Damion Shelton, Agility Robotics
What are some unique features or capabilities of Digit v2 that might not be obvious from the video?
For those who’ve used Cassie robots, the power-up and power-down ergonomics are a lot more user-friendly. Digit can be disassembled into carry-on-luggage-sized pieces (give or take) in under 5 minutes for easy transport. The battery charges in situ using a normal laptop-style charger.
I’m curious about this “stompy” sort of gait that we see in Digit and many other bipedal robots—are there significant challenges or drawbacks to implementing a more human-like (and presumably quieter) heel-toe gait?
There are no drawbacks other than increased complexity in controls and foot design. With Digit v2, the larger surface area helps with the noise, and v2 has similar or better passive-dynamic performance as compared to Cassie or Digit v1. The foot design is brand new, and new behaviors like heel-toe are an active area of development.
How close is Digit v2 to a system that you’d be comfortable operating commercially?
We’re on track for a 2020 launch for Digit v3. Changes from v2 to v3 are mostly bug-fix in nature, with a few regulatory upgrades like full battery certification. Safety is a major concern for us, and we have launch customers that will be operating Digit in a safe environment, with a phased approach to relaxing operational constraints. Digit operates almost exclusively under force control (as with cobots more generally), but at the moment we’ll err on the side of caution during operation until we have the stats to back up safety and reliability. The legged robot industry has too much potential for us to screw it up by behaving irresponsibly.
It will be a while before Digit (or any other humanoid robot) is operating fully autonomously in crowds of people, but there are so many large market opportunities (think indoor factory/warehouse environments) to address prior to that point that we expect to mature the operational safety side of things well in advance of having saturated the more robot-tolerant markets.
[ Agility Robotics ]
#436079 Video Friday: This Humanoid Robot Will ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Northeast Robotics Colloquium – October 12, 2019 – Philadelphia, Pa., USA
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
What’s better than a robotics paper with “dynamic” in the title? A robotics paper with “highly dynamic” in the title. From Sangbae Kim’s lab at MIT, the latest exploits of Mini Cheetah:
Yes I’d very much like one please. Full paper at the link below.
[ Paper ] via [ MIT ]
A humanoid robot serving you ice cream—on his own ice cream bike: What a delicious vision!
[ Roboy ]
The Roomba “i” series and “s” series vacuums have just gotten an update that lets you set “keep out” zones, which is super useful. Tell your robot where not to go!
I feel bad, that Roomba was probably just hungry 🙁
[ iRobot ]
We wrote about Voliro’s tilt-rotor hexcopter a couple years ago, and now it’s off doing practical things, like spray painting a building pretty much the same color that it was before.
[ Voliro ]
Thanks Mina!
Here’s a clever approach for bin-picking problematic objects, like shiny things: Just grab a whole bunch, and then sort out what you need on a nice robot-friendly table.
It might take a little bit longer, but what do you care, you’re probably off sipping a cocktail with a little umbrella in it on a beach somewhere.
[ Harada Lab ]
A unique combination of the IRB 1200 and YuMi industrial robots that use vision, AI and deep learning to recognize and categorize trash for recycling.
[ ABB ]
Measuring glacial movements in situ is a challenging but necessary task for modeling glaciers and predicting their future evolution. However, installing GPS stations on ice can be dangerous and expensive, if not impossible, in the presence of large crevasses. In this project, the ASL develops UAVs for dropping and recovering lightweight GPS stations over inaccessible glaciers to record the ice flow motion. This video shows the results of the first tests performed at Gorner glacier, Switzerland, in July 2019.
[ EPFL ]
Turns out Tertills actually do a pretty great job fighting weeds.
Plus, they leave all those cute lil’ Tertill tracks.
[ Franklin Robotics ]
The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.
The resulting map is so precise that it looks like we are doing real-time SLAM (simultaneous localization and mapping). In fact, the map is based on dead-reckoning via the InvEKF.
[ GTSAM ] via [ University of Michigan ]
UBTECH has announced an upgraded version of its Meebot, which is 30 percent bigger and comes with more sensors and programmable eyes.
[ UBTECH ]
ABB’s research team will be working with medical staff, scientists, and engineers to develop non-surgical medical robotics systems, including logistics and next-generation automated laboratory technologies. The team will develop robotics solutions that will help eliminate bottlenecks in laboratory work and address the global shortage of skilled medical staff.
[ ABB ]
In this video, Ian and Chris go through Misty’s SDK, discussing the languages we’ve included, the tools that make it easy for you to get started quickly, a quick rundown of how to run the skills you build, plus what’s ahead on the Misty SDK roadmap.
[ Misty Robotics ]
My guess is that this was not one of iRobot’s testing environments for the Roomba.
You know, that’s actually super impressive. And maybe if they threw one of the self-emptying Roombas in there, it would be a viable solution to the entire problem.
[ How Farms Work ]
Part of WeRobotics’ Flying Labs network, Panama Flying Labs is a local knowledge hub catalyzing social good and empowering local experts. Through training and workshops, demonstrations and missions, the Panama Flying Labs team leverages the power of drones, data, and AI to promote entrepreneurship, build local capacity, and confront the pressing social challenges faced by communities in Panama and across Central America.
[ Panama Flying Labs ]
Go on a virtual flythrough of the NIOSH Experimental Mine, one of two courses used in the recent DARPA Subterranean Challenge Tunnel Circuit Event held 15-22 August, 2019. The data used for this partial flythrough tour were collected using 3D LIDAR sensors similar to the sensors commonly used on autonomous mobile robots.
[ SubT ]
Special thanks to PBS, Mark Knobil, Joe Seamans and Stan Brandorff and many others who produced this program in 1991.
It features Reid Simmons (and his 1-year-old son), David Wettergreen, Red Whittaker, Mac Macdonald, Omead Amidi, and other Field Robotics Center alumni building the planetary walker prototype called Ambler. The team gets ready for an important demo for NASA.
[ CMU RI ]
As art and technology merge, roboticist Madeline Gannon explores the frontiers of human-robot interaction across the arts, sciences, and society, and considers what this could mean for the future.
[ Sonar+D ]
#436065 From Mainframes to PCs: What Robot ...
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Autonomous robots are coming around slowly. We already have autonomous vacuum cleaners, autonomous lawn mowers, toys that bleep and blink, and (maybe) soon autonomous cars. Yet, generation after generation, we keep waiting for the robots that we all know from movies and TV shows. Instead, businesses seem to get farther and farther away from the robots that are able to do a large variety of tasks using general-purpose, human-anatomy-inspired hardware.
Although these are the droids we have been looking for, anything that came close, such as Willow Garage’s PR2 or Rethink Robotics’ Baxter, has bitten the dust. Building a robotics company is particularly hard, compounding business risk with technological risk, so the trend has gone from selling robots to selling actual services: mowing your lawn, providing taxi rides, fulfilling retail orders, or picking strawberries by the pound. Unfortunately for fans of R2-D2 and C-3PO, these kinds of business models emphasize specialized, room- or fridge-sized hardware that is optimized for one very specific task but does not contribute to a general-purpose robotic platform.
We have actually seen something very similar in the personal computer (PC) industry. In the 1950s, even though computers could be as big as an entire room and were only available to a select few, the public already had a good idea of what computers would look like. A long list of fictional computers started to populate mainstream entertainment during that time. In a 1962 New York Times article titled “Pocket Computer to Replace Shopping List,” visionary scientist John Mauchly stated that “there is no reason to suppose the average boy or girl cannot be master of a personal computer.”
In 1968, Douglas Engelbart gave us the “mother of all demos,” showing hypertext browsing with a graphical screen and a mouse, along with other ideas that would become standard only decades later. Now that we have finally seen all of this, it might be helpful to examine what actually enabled the computing revolution, to learn where robotics really stands and what we need to do next.
The parallels between computers and robots
In the 1970s, mainframes were about to be replaced by the emerging class of minicomputers, fridge-sized devices that cost less than US $25,000 ($165,000 in 2019 dollars). These computers did not use punch cards, but could be programmed in Fortran and BASIC, dramatically expanding the ease with which potential applications could be created. Yet it was still unclear whether minicomputers could ever replace big mainframes in applications that require fast and efficient processing of large amounts of data, let alone enter every living room. This is very similar to the robotics industry right now, where large-scale factory robots (mainframes) that have existed since the 1960s are seeing competition from a growing industry of collaborative robots that can safely work next to humans and can easily be installed and programmed (minicomputers). As in the ’70s, applications for these devices, which reach system prices comparable to that of a luxury car, are quite limited, and it is hard to see how they could ever become a consumer product.
Yet, as in the computer industry, successful architectures are quickly being cloned, driving prices down, and entirely new approaches to constructing and programming robotic arms are sprouting left and right. Arm makers are joined by manufacturers of autonomous carts, robotic grippers, and sensors. These components can be combined, paving the way for standard general-purpose platforms that follow the model of the IBM PC, which built a capable, open architecture relying as much as possible on commodity parts.
General-purpose robotic systems have not been successful for reasons similar to why general-purpose, also known as “personal,” computers took decades to emerge. Mainframes were custom-built for each application, while typewriters got smarter and smarter, not really leaving room for general-purpose computers in between. Indeed, given the cost of hardware and the relatively limited abilities of today’s autonomous robots, it is almost always smarter to build a special-purpose machine than to try to make a collaborative mobile manipulator smart.
A current example is e-commerce grocery fulfillment. The current trend is to reserve underutilized parts of a brick-and-mortar store for a micro-fulfillment center that stores goods in little crates with an automated retrieval system and a (human) picker. A number of startups like Alert Innovation, Fabric, Ocado Technology, TakeOff Technologies, and Tompkins Robotics, to name just a few, have raised hundreds of millions in venture capital recently to build mainframe equivalents of robotic fulfillment centers. This is in contrast with a robotic picker, which would drive through the aisles to restock and pick from shelves. Such a robotic store clerk would come much closer to our vision of a general-purpose robot, but would require many copies of itself crowding the aisles to churn out hundreds of orders per hour the way a microwarehouse can. Although such pickers might eventually be more efficient, margins in retail are already low, making it unlikely that this industry will produce the technological jump that we need to get friendly C-3POs manning the aisles.
Startups have raised hundreds of millions of venture capital recently to build mainframe equivalents of robotic fulfillment centers. This is in contrast with a robotic picker, which would drive through the aisles to restock and pick from shelves, and would come much closer to our vision of a general purpose robot.
Mainframes were also attacked from the bottom. Fascination with the new digital technology led to a hobbyist movement to create microcomputers that were sold via mail order or at RadioShack. Initially, a large number of small businesses were selling tens, at most hundreds, of devices, usually as a kit and with wooden enclosures. This trend culminated in the “1977 Trinity” in the form of the Apple II, the Commodore PET, and the Tandy TRS-80, complete computers that were sold for prices around $2,500 (TRS) to $5,000 (Apple) in today’s dollars. The main application of these computers was their programmability (in BASIC), which would enable consumers to “learn to chart your biorhythms, balance your checking account, or even control your home environment,” according to an original Apple advertisement. Similarly, there exists a myriad of gadgets that explore different aspects of robotics, such as mobility, manipulation, and entertainment.
As in the fledgling personal computing industry, the advertised functionality was at best a model of the real deal. A now-famous milestone in entertainment robotics was Sony’s original Aibo, a robotic dog advertised to have many of the properties a real dog has, such as developing its own personality, playing with a toy, and interacting with its owner. Released in 1999, and re-launched in 2018, the platform has a solid following among hobbyists and academics who like its programmability, but probably very few users accept the device as a pet stand-in.
There also exist countless “build-your-own-robotic-arm” kits. One of the more successful examples is the uArm, which sells for around $800 and is advertised to perform pick and place, assembly, 3D printing, laser engraving, and many other things that sound like high-value applications. Compelling videos of the robot actually doing these things in a constrained environment have led to two successful crowdfunding campaigns and established the robot as a successful educational tool.
Finally, there exist platforms that allow hobbyist programmers to explore mobility and construct robots that patrol your house, deliver items, or provide their users with telepresence abilities. An example is the Misty II. Much as with the original Apple II, there remains a disconnect between the price of the hardware and the fidelity of the applications that are available.
For computers, this disconnect began to disappear with the invention of VisiCalc, the first electronic spreadsheet software, which spun out of Harvard in 1979 and prompted many people to buy an entire microcomputer just to run the program. VisiCalc was soon joined by WordStar, a word-processing application that sold for close to $2,000 in today’s dollars. WordStar, too, would entice many people to buy the entire hardware just to use the software. The two programs are early examples of what became known as the “killer application.”
With factory automation being mature, and robots with the price tag of a minicomputer being capable of driving around and autonomously carrying out many manipulation tasks, the robotics industry is roughly where the PC industry was between 1973—the release of the Xerox Alto, the first computer with a graphical user interface, mouse, and special software—and 1979—when microcomputers in the under-$5,000 category began to take off.
Killer apps for robots
So what would it take for robotics to continue to advance like computers did? The market itself has already done a good job of distilling what the possible killer apps are. VCs and customers alike push companies that set out with lofty goals to reduce their offering to a simple value proposition. As a result, companies that started at opposite ends often converge to mirror images of each other, offering very similar autonomous carts, (bin) picking, palletizing, depalletizing, or sorting solutions. Each of these companies usually serves a single application in a single vertical—for example bin-picking clothes, transporting warehouse goods, or picking strawberries by the pound. They are trying to prove that their specific technology works without spreading themselves too thin.
Very few of these companies have really taken off. One example is Kiva Systems, which turned into the logistics robotics division of Amazon. Kiva and others are structured around sound value propositions that are grounded in well-known user needs. As these solutions are very specialized, however, it is unlikely that they will result in economies of scale of the magnitude that early computer users enjoyed when they bought both a spreadsheet and a word processor for their expensive microcomputer. What would make these robotic solutions more interesting is stackable functionality: instead of just being able to do bin picking, palletizing, and transportation with the same hardware, these three skills could be combined to model entire processes.
A skill that startups have so far addressed little, and that has historically been owned by the mainframe equivalent of robotics, is the assembly of simple mechatronic devices. The ability to assemble mechatronic parts is equivalent to other tasks such as changing a light bulb, changing the batteries in a remote control, or tending machines like a lever-based espresso machine. These tasks would make the autonomous execution of complete workflows possible using a single machine, eventually leading to an explosion of industrial productivity across all sectors. For example, picking up an item from a bin, arranging it on the robot, moving it elsewhere, and placing it into a shelf or a machine is a process that applies equally to a manufacturing environment, a retail store, or someone’s kitchen.
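To make the idea of stackable skills concrete, here is a minimal Python sketch. Every name in it (Robot, BinPick, MoveTo, Place) is invented for illustration and does not correspond to any vendor’s API; the point is simply that once every skill exposes the same interface, an entire process reduces to a list of primitives.

# A minimal sketch of "stackable" robot skills. All class and method
# names here are hypothetical, invented for illustration only.
from dataclasses import dataclass


class Robot:
    """Stand-in for a mobile manipulator's control interface."""
    def pick(self, object_class: str) -> bool:
        print(f"picking a '{object_class}' from the bin")
        return True

    def navigate(self, location: str) -> bool:
        print(f"driving to {location}")
        return True

    def place(self, target: str) -> bool:
        print(f"placing item on {target}")
        return True


@dataclass
class BinPick:          # perception-based picking
    object_class: str
    def execute(self, robot: Robot) -> bool:
        return robot.pick(self.object_class)


@dataclass
class MoveTo:           # autonomous navigation between stations
    location: str
    def execute(self, robot: Robot) -> bool:
        return robot.navigate(self.location)


@dataclass
class Place:            # placement into a shelf or a machine
    target: str
    def execute(self, robot: Robot) -> bool:
        return robot.place(self.target)


def run_workflow(robot: Robot, skills) -> bool:
    """Execute stacked skills in order; all() stops at the first failure."""
    return all(skill.execute(robot) for skill in skills)


# The bin-to-shelf process from the text, modeled as stacked primitives.
# The same list, with different parameters, could describe a factory,
# a retail store, or someone's kitchen.
restock = [BinPick("cereal_box"), MoveTo("aisle_4"), Place("shelf_2")]
run_workflow(Robot(), restock)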
Image: Robotic Materials Inc. Autonomous, vision- and force-based assembly of the Siemens robot learning challenge.
Even though many of the above applications are becoming possible, it is still very hard to get a platform off the ground without added components that provide “killer app” value of their own. Interesting examples are Rethink Robotics and the Robot Operating System (ROS). Rethink Robotics’ Baxter and Sawyer robots pioneered a great user experience (like the 1973 Xerox Alto, really the first PC), but their applications were difficult to extend beyond simple pick-and-place, palletizing, and depalletizing tasks.
ROS pioneered interprocess communication software that was adapted to robotic needs (multiple computers, different programming languages) and the idea of software modularity in robotics, but—in the absence of a common hardware platform—hasn’t yet delivered a single application, e.g. for navigation, path planning, or grasping, that performs beyond research-grade demonstration level and won’t get discarded once developers turn to production systems. At the same time, an increasing number of robotic devices, such as robot arms or 3D perception systems that offer intelligent functionality, provide other ways to wire them together that do not require an intermediary computer, while keeping close control over the real-time aspects of their hardware.
Image: Robotic Materials Inc. Robotic Materials’ GPR-1 combines a MiR-100 autonomous cart with a UR-5 collaborative robotic arm, an OnRobot force/torque sensor, and Robotic Materials’ SmartHand to perform out-of-the-box mobile assembly, bin picking, palletizing, and depalletizing tasks.
At my company, Robotic Materials Inc., we have made strides to identify a few applications such as bin picking and assembly, making them configurable with a single click by combining machine learning and optimization with an intuitive user interface. Here, users can define object classes and how to grasp them using a web browser, which then appear as first-class objects in a robot-specific graphical programming language. We have also done this for assembly, allowing users to stack perception-based picking and force-based assembly primitives by simply dragging and dropping appropriate commands together.
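As a rough illustration (a hypothetical sketch, not Robotic Materials’ actual format), a user-defined object class and a stacked pick-then-assemble program might reduce to simple declarative data like this:

# Hypothetical representation of a user-defined object class and a
# stacked program; all field names are invented and do not reflect
# any real product's data model.
object_classes = {
    "bearing": {
        "detector": "learned-from-examples",  # perception model the user trained
        "grasp": {"type": "pinch", "approach": "top", "width_mm": 22},
    }
}

# A program is just a stack of primitives: a perception-based pick
# followed by a force-based assembly step.
program = [
    {"primitive": "pick",   "object": "bearing"},
    {"primitive": "insert", "target": "housing_bore",
     "max_force_N": 15.0, "search_pattern": "spiral"},
]

Because each primitive carries its own parameters, a graphical front end can present them as drag-and-drop blocks while the robot executes the resulting list in order.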
While such an approach might answer the question of a killer app for robots priced in the “minicomputer” range, it is unclear how killer-app-type value can be generated with robots in the less-than-$5,000 category. A possible answer is twofold: First, with low-cost arms, mobility platforms, and entertainment devices continuously improving, a confluence of technology readiness and user innovation, as with the Apple II and VisiCalc, will eventually happen. For example, there is not much innovation needed to turn Misty into a home security system, the uArm into a low-cost bin-picking system, or an Aibo-like device into a therapeutic system for the elderly or children with autism.
Second, robots and their components have to become dramatically cheaper. Indeed, computers have seen an exponential reduction in price accompanied by an exponential increase in computational power, thanks in large part to Moore’s Law. This development has helped robotics too, allowing us to reach breakthroughs in mobility and manipulation due to the ability to process massive amounts of image and depth data in real time, and we can expect it to continue to do so.
Is there a Moore’s Law for robots?
One might ask, however, how similar dynamics might be possible for robots as a whole, including all their motors and gears, and what a “Moore’s Law” would look like for the robotics industry. Here, it helps to remember that the perpetuation of Moore’s Law is not the cause of the PC revolution, but its result. Indeed, the first killer apps for bookkeeping, editing, and gaming were so good that they unleashed tremendous consumer demand, beating the benchmark on what was thought to be physically possible over and over again. (I vividly remember 56 kb/s being considered the absolute maximum data rate for copper phone lines until DSL appeared.)
That these economies of scale are also applicable to mechatronics is impressively demonstrated by the car industry. A good example is the 2020 Prius Prime, a highly computerized plug-in hybrid that is available for one third of the cost of my company’s GPR-1 mobile manipulator while being orders of magnitude more complex, sporting an electric motor, a combustion engine, and a myriad of sensors and computers. It is therefore entirely conceivable to produce a mobile manipulator that retails at one tenth of the cost of a modern car, once robotics enjoys similar mass-market appeal. Given that these robots are part of the equation, actively lowering production costs, this might happen faster than at any previous point in the history of industrialization.
It is therefore entirely conceivable to produce a mobile manipulator that retails at one tenth of the cost of a modern car, once robotics enjoys similar mass-market appeal.
There is one more driver that might make robots exponentially more capable: the cloud. Once a general-purpose robot has learned or been programmed with a new skill, it could share it with every other robot. At some point, a grocer who buys a robot could assume that it already knows how to recognize and handle 99 percent of the retail items in the store. Likewise, a manufacturer could assume that the robot can handle and assemble every item available from McMaster-Carr and Misumi. Finally, families could expect a robot to know every kitchen item that Ikea and Pottery Barn sell. This sounds like a labor-intensive problem, but it is probably more manageable than collecting footage for Google’s Street View using cars, tricycles, and snowmobiles, among other vehicles.
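A minimal sketch of how such skill sharing could work, assuming a hypothetical cloud registry (the endpoint, payload format, and function names below are all invented):

# Sketch of a cloud "skill registry": robots publish skills they have
# learned and download skills they lack. Endpoint and payload format
# are hypothetical.
import json
import urllib.request

REGISTRY = "https://skills.example.com/api"  # invented endpoint

def publish_skill(name: str, params: dict) -> None:
    """Upload a learned skill so every other robot can reuse it."""
    payload = json.dumps({"name": name, "params": params}).encode()
    req = urllib.request.Request(
        f"{REGISTRY}/skills", data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def fetch_skill(name: str) -> dict:
    """Download a skill, e.g. how to grasp a specific retail item."""
    with urllib.request.urlopen(f"{REGISTRY}/skills/{name}") as resp:
        return json.loads(resp.read())

# A grocer's newly delivered robot could then look up items it has
# never physically seen before:
# grasp_params = fetch_skill("cereal_box_350g")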
Strategies for robot startups
While we are waiting for these two trends—better and better applications and hardware with decreasing cost—to converge, we as a community have to keep exploring what the canonical robotic applications beyond mobility, bin picking, palletizing, depalletizing, and assembly are. We must also continue to solve the fundamental challenges that stand in the way of making these solutions truly general and robust.
For both questions, it might help to look at the strategies that have been critical in the development of the personal computer, which might equally well apply to robotics:
Start with a solution to a problem your customers have. Unfortunately, their problem is almost never that they need your sensor, widget, or piece of code, but something that already costs them money or negatively affects them in some other way. Example: There were many more people who had a problem calculating their taxes (and wanted to buy VisiCalc) than people who wanted to write their own solution in BASIC.
Build as little of your own hardware as necessary. Your business model should be stronger than the margin you can make on the hardware. Why take the risk? Example: Why build your own typewriter if you can write the best typewriting application that makes it worth buying a computer just for that?
If your goal is a platform, make sure it comes with a killer application, which alone justifies the platform cost. Example: Microcomputer companies came and went until the “1977 Trinity” intersected with the killer apps spreadsheet and word processors. Corollary: You can also get lucky.
Use an open architecture, which creates an ecosystem where others compete on creating better components and peripherals, while allowing others to integrate your solution into their vertical and stack it with other devices. Example: Both the Apple II and the IBM PC were completely open architectures, enabling many clones, thereby growing the user and developer base.
It’s worthwhile pursuing this. With most business processes already digitized, general-purpose robots will allow us to fill in the gaps in mobility and manipulation, increasing productivity at levels limited only by the amount of resources and energy available, possibly creating a utopia in which creativity becomes the ultimate currency. Maybe we’ll even get R2-D2.
Nikolaus Correll is an associate professor of computer science at the University of Colorado at Boulder, where he works on mobile manipulation and other robotics applications. He’s co-founder and CTO of Robotic Materials Inc., which is supported by the National Science Foundation and the National Institute of Standards and Technology via their Small Business Innovative Research (SBIR) programs.