#439908 Why Facebook (Or Meta) Is Making Tactile ...
Facebook, or Meta as it's now calling itself for some reason that I don't entirely understand, is today announcing some new tactile sensing hardware for robots. Or, new-ish, at least—there's a ruggedized and ultra low-cost GelSight-style fingertip sensor, plus a nifty new kind of tactile sensing skin based on suspended magnetic particles and machine learning. It's cool stuff, but why?
Obviously, Facebook Meta cares about AI, because it uses AI to try and do a whole bunch of the things that it's unwilling or unable to devote the time of actual humans to. And to be fair, there are some things that AI may be better at (or at least more efficient at) than humans are. AI is of course much worse than humans at many, many, many things as well, but that debate goes well beyond Facebook Meta and certainly well beyond the scope of this article, which is about tactile sensing for robots. So why does Facebook Meta care even a little bit about making robots better at touching stuff? Yann LeCun, the Chief AI Scientist at Facebook Meta, takes a crack at explaining it:
Before I joined Facebook, I was chatting with Mark Zuckerberg and I asked him, “is there any area related to AI that you think we shouldn't be working on?” And he said, “I can't find any good reason for us to work on robotics.” And so, that was kind of the start of Facebook AI Research—we were not going to work on robotics.
After a few years, it became clear that a lot of interesting progress in AI was happening in the context of robotics, because robotics is the nexus of where people in AI research are trying to get the full loop of perception, reasoning, planning, and action, and getting feedback from the environment. Doing it in the real world is where the problems are concentrated, and you can't play games if you want robots to learn quickly.
It was clear that four or five years ago, there was no business reason to work on robotics, but the business reasons have kind of popped up. Robotics could be used for telepresence, for maintaining data centers more automatically, but the more important aspect of it is making progress towards intelligent agents, the kinds of things that could be used in the metaverse, in augmented reality, and in virtual reality. That's really one of the raisons d'être of a research lab, to foresee the domains that will be important in the future. So that's the motivation.

Well, okay, but none of that seems like a good justification for research into tactile sensing specifically. According to LeCun, though, it's all about putting together the pieces required for some level of fundamental world understanding, a problem that robotic systems are still bad at and that machine learning has so far not been able to tackle:
How to get machines to learn that model of the world that allows them to predict in advance and plan what's going to happen as a consequence of their actions is really the crux of the problem here. And this is something you have to confront if you work on robotics. But it's also something you have to confront if you want to have an intelligent agent acting in a virtual environment that can interact with humans in a natural way. And one of the long-term visions of augmented reality, for example, is virtual agents that basically are with you all the time, living in your augmented reality glasses or your smartphone or your laptop or whatever, helping you in your daily life as a human assistant would do, but also can answer any question you have. And that system will have to have some degree of understanding of how the world works—some degree of common sense—and be smart enough to not be frustrating to talk to. And that is where all of this research leads in the long run, whether the environment is real or virtual.

AI systems (robots included) are very, very dumb in very, very specific ways, quite often the ways in which humans are least understanding and forgiving of. This is such a well-established thing that there's a name for it: Moravec's paradox. Humans are great at subconscious levels of world understanding that we've built up over years and years of experience being, you know, alive. AI systems have none of this, and there isn't necessarily a clear path to getting them there, but one potential approach is to start with the fundamentals in the same way that a shiny new human does and build from there, a process that must necessarily include touch.
The DIGIT touch sensor is based on the GelSight style of sensor, which was first conceptualized at MIT over a decade ago. The basic concept of these kinds of tactile sensors is that they're able to essentially convert a touch problem into a vision problem: an array of LEDs illuminate a squishy finger pad from the back, and when the squishy finger pad pushes against something with texture, that texture squishes through to the other side of the finger pad where it's illuminated from many different angles by the LEDs. A camera up inside of the finger takes video of this, resulting in a very rainbow but very detailed picture of whatever the finger pad is squishing against.
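To make the touch-as-vision idea concrete, here's a minimal Python sketch that flags contact by differencing the in-finger camera's current frame against a reference frame captured with nothing touching the gel. This is an illustration of the principle only: real GelSight-style pipelines go further, using the multi-angle colored LED shading (photometric stereo) to reconstruct surface normals and depth, and none of the names below come from the actual DIGIT software.

```python
# Toy illustration of the touch-as-vision principle behind GelSight-style
# sensors: contact is estimated by comparing the camera's view of the
# illuminated gel against a reference frame with nothing touching it.
# (All names here are hypothetical, not from the DIGIT codebase.)
import numpy as np

def contact_map(reference: np.ndarray, frame: np.ndarray,
                threshold: float = 12.0) -> np.ndarray:
    """Return a boolean mask of pixels where the gel has visibly deformed.

    reference, frame: HxWx3 uint8 RGB images from the in-finger camera.
    threshold: per-pixel color-difference magnitude treated as contact.
    """
    diff = frame.astype(np.float32) - reference.astype(np.float32)
    magnitude = np.linalg.norm(diff, axis=-1)  # color shift at each pixel
    return magnitude > threshold

# Usage with synthetic stand-in images (a real sensor supplies camera frames):
reference = np.full((240, 320, 3), 128, dtype=np.uint8)
frame = reference.copy()
frame[100:140, 150:200] += 40  # fake a pressed region brightened by the LEDs
mask = contact_map(reference, frame)
print(f"contact pixels: {mask.sum()}")  # nonzero where the "texture" pressed in
```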
The DIGIT paper published last year summarizes the differences between this new sensor and previous versions of GelSight:
DIGIT improves over existing GelSight sensors in several ways: by providing a more compact form factor that can be used on multi-finger hands, improving the durability of the elastomer gel, and making design changes that facilitate large-scale, repeatable production of the sensor hardware to facilitate tactile sensing research.
DIGIT is open source, so you can make one on your own, but that's a hassle. The really big news here is that GelSight itself (an MIT spinoff which commercialized the original technology) will be commercially manufacturing DIGIT sensors, providing a standardized and low-cost option for tactile sensing. The bill of materials for each DIGIT sensor is about US $15 if you were to make a thousand of them, so we're expecting that the commercial version won't cost much more than that.
The other hardware announcement is ReSkin, a tactile sensing skin developed in collaboration with Carnegie Mellon. Like DIGIT, the idea is to make an open source, robust, and very low cost system that will allow researchers to focus on developing the software to help robots make sense of touch rather than having to waste time on their own hardware.
ReSkin operates on a fairly simple concept: it's a flexible sheet of 2mm-thick silicone with magnetic particles randomly mixed in. The sheet sits on top of a magnetometer, and whenever the sheet deforms (like if something touches it), the magnetic particles embedded in the sheet get squooshed and the magnetic signal changes, which is picked up by the magnetometer. Crucially, the sheet doesn't have to be directly connected to the magnetometer for this to work. That's key, because it makes the part of the ReSkin sensor that's most likely to get damaged super easy to replace—just peel it off, slap on another one, and you're good to go.
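The machine learning half of ReSkin is what turns those raw flux readings into something useful. The sketch below shows the general shape of that step—training a small neural-network regressor to map magnetometer readings to a contact position and force—on entirely synthetic stand-in data. The sensor count, feature dimensions, and labels are assumptions for illustration, not ReSkin's published setup.

```python
# Rough sketch of the learning step a ReSkin-style sensor relies on: map raw
# magnetometer readings to where (and how hard) the skin was touched.
# The synthetic data below stands in for real labeled presses.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Assumed setup: 5 magnetometers under the skin, 3 flux axes each -> 15 inputs.
# Targets: contact (x, y) position plus normal force -> 3 outputs.
n_samples = 2000
X = rng.normal(size=(n_samples, 15))                 # stand-in flux readings
W = rng.normal(size=(15, 3))
y = X @ W + 0.05 * rng.normal(size=(n_samples, 3))   # fake (x, y, force) labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.3f}")
```

Because the reading-to-touch mapping is learned rather than hand-calibrated, a freshly swapped-in sheet needs new calibration data rather than any hardware changes—part of why the peel-and-replace design is practical.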
I get that touch is an integral part of this humanish world understanding that Facebook Meta is working towards, but for most of us, touch is much more nuanced than raw tactile data collection, because we experience everything that we touch within the world understanding that we've built up through the integration of all of our other senses as well. I asked Roberto Calandra, one of the authors of the paper on DIGIT, what he thought about this:
I believe that we certainly want to have multimodal sensing in the same way that humans do. Humans use cues from touch, cues from vision, and also cues from audio, and we are able to very smartly put these different sensor modalities together. And if I ask you, can you imagine how touching this object is going to feel? You can sort of imagine that. You can also tell me the shape of something that you are touching; you are able to somehow recognize it. So there is very clearly a multisensorial representation that we are learning and using as humans, and it's very likely that this is also going to be very important for embodied agents that we want to develop and deploy.

Calandra also noted that they still have plenty of work to do to get DIGIT closer in form factor and capability to a human finger, which is an aspiration that I often hear from roboticists. But I always wonder: why bother? Like, why constrain robots (which can do all kinds of things that humans cannot) to do things in a human-like way, when we can instead leverage creative sensing and actuation to potentially give them superhuman capabilities? Here's what Calandra thinks:
I don't necessarily believe that a human hand is the way to go. I do believe that the human hand is possibly the golden standard that we should compare against. Can we do at least as well as a human hand? Beyond that, I actually do believe that over the years, the decades, or maybe the centuries, robots will have the possibility of developing superhuman hardware. In the same way that we can put infrared sensors or laser scanners on a robot, why shouldn't we also have mechanical hardware which is superior?
I think there has been a lot of really cool work on soft robotics, for example, on how to build tentacles that can imitate an octopus. So it's a very natural question—if we want to have a robot, why should it have hands and not tentacles? And the answer to this is, it depends on what the purpose is. Do we want robots that can perform the same functions as humans, or do we want robots which are specialized for doing particular tasks? We will see when we get there.

So there you have it—the future of manipulation is 100% sometimes probably tentacles. Continue reading
#439487 SoftBank Stops Making Pepper Robots, ...
Reuters is reporting that SoftBank stopped manufacturing Pepper robots at some point last year due to low demand, and by September will cut about half of the 330 positions at SoftBank Robotics Europe in France. Most of the cuts will be in QA, sales, and service, which hopefully leaves SoftBank Robotics’ research and development group mostly intact. But the cuts reflect poor long-term sales, with SoftBank Robotics Europe having lost over 100 million euros in the past three years, according to French business news site JDN. Speaking with Nikkei, SoftBank said that this doesn’t actually mean a permanent end for Pepper, and that they “plan to resume production if demand recovers.” But things aren’t looking good.
Reuters says that “only” 27,000 Peppers were produced, but that sure seems like a lot of Peppers to me. Perhaps too many—a huge number of Peppers were used by SoftBank itself in its retail stores, and a hundred at once were turned into a cheerleading squad for the SoftBank Hawks baseball team, filling stadium seats left empty by the pandemic. There’s nothing wrong with either of those things, but it’s hard to use them to gauge how successful Pepper has actually been.
I won’t try to argue that Pepper would necessarily have been commercially viable in the long(er) term: it’s a very capable robot in some ways, but not very capable in others. For example, Pepper has arms and hands with individually articulated fingers, but the robot can’t actually do much in the way of useful grasping or manipulation. SoftBank positioned Pepper as a robot that can attract attention and provide useful, socially interactive information in public places. Besides SoftBank’s own stores, Peppers have been used in banks, malls, airports, and other places of that nature. A lot of what Pepper uniquely offered, though, was novelty, which ultimately may not be sustainable for a commercial robot: at some point, the novelty wears off and you’re basically left with a very cool-looking (but expensive) kiosk.
Having said all that, the sheer number of Peppers that SoftBank put out in the world could be one of the most significant impacts that the robot has had. The fact that Pepper was able to successfully operate for long enough, and in enough places, that it even had a chance to stop being novel and instead become normal is an enormous achievement for Pepper specifically, as well as for social robots more broadly. Angelica Lim, who worked with Pepper at SoftBank Robotics Europe for three years before founding the Rosie Lab at SFU, shared some perspective with us on this:
There has never been a robot with the ambition of Pepper. Its mission was huge—be adaptable and robust to different purposes and locations: loud sushi shops, quiet banks, and hospitals that change from hour to hour. Compare that with Alexa, which has a pretty stable and quiet environment—the home. On top of that, the robot needed to respond to different ages, cultures, countries, and languages. The only thing I can think of that comes close is the smartphone, and the expectation for it is much lower compared to the humanoid Pepper. Ten years ago, it was unthinkable that we could leave a robot on “in the wild” for days, weeks, months, and years, and yet Pepper did it thanks to the team at SoftBank Robotics.
Peppers are still being used in education today, from elementary schools and high schools to research labs in North America, Asia and Europe. The next generation will grow up programming these, like they did with the Apple personal computer. I’m confident it’s just the next step to technology that adapts to us as humans rather than the other way around.
Pepper has been an amazing platform for HRI research as well as for STEM education more broadly, and our hope is that Pepper will continue to be impactful in those ways, whether or not any more of these robots are ever made. We also hope that SoftBank does whatever is necessary to make sure that Peppers remain useful and accessible well into the future, in both software and hardware. But perhaps we’re being too pessimistic here—this is certainly not good news, but despite how it looks, we don’t know for sure that it’s catastrophic for Pepper. All we can do is wait and see what happens at SoftBank Robotics Europe over the next six months, and hope that Pepper continues to get the support that it deserves. Continue reading
#439347 Smart elastomers are making the robots ...
Imagine flexible surgical instruments that can twist and turn in all directions like miniature octopus arms, or large, powerful robot tentacles that can work closely and safely with human workers on production lines. A new generation of robotic tools is beginning to be realized thanks to a combination of strong 'muscles' and sensitive 'nerves' created from smart polymeric materials. A research team led by smart materials experts Professor Stefan Seelecke and Junior Professor Gianluca Rizzello at Saarland University is exploring fundamental aspects of this exciting field of soft robotics. Continue reading
#439290 Making virtual assistants sound human ...
There's a scene in the 2008 film “Iron Man” that offers a glimpse of future interactions between humans and artificial intelligence assistants. In it, Tony Stark's virtual assistant J.A.R.V.I.S. responds with sarcasm and humor to Stark's commands. Continue reading