Tag Archives: robots

#439399 An overview of Humanoid tech in 2021

Some of the most advanced humanoid robots we saw in 2021.

Posted in Human Robots

#439869 Short movie about Humanoid Androids

A short journey through the magical world of humanoid robots.

Posted in Human Robots

#440049 Years Later, Alphabet’s Everyday ...

Last week, Google or Alphabet or X or whatever you want to call it announced that its Everyday Robots team has grown enough and made enough progress that it's time for it to become its own thing, now called, you guessed it, “Everyday Robots.” There's a new website of questionable design along with a lot of fluffy descriptions of what Everyday Robots is all about. But fortunately, there are also some new videos and enough details about the engineering and the team's approach that it's worth spending a little bit of time wading through the clutter to see what Everyday Robots has been up to over the last couple of years and what their plans are for the near future.

That close to the arm seems like a really bad place to put an E-Stop, right?
Our headline may sound a little bit snarky, but the headline in Alphabet's own announcement blog post is “everyday robots are (slowly) leaving the lab.” It's less of a dig and more of an acknowledgement that getting mobile manipulators to usefully operate in semi-structured environments has been, and continues to be, a huge challenge. We'll get into the details in a moment, but the high-level news here is that Alphabet appears to have thrown a lot of resources behind this effort while embracing a long time horizon, and that its investment is starting to pay dividends. This is a nice surprise, considering the somewhat haphazard state (at least to outside appearances) of Google's robotics ventures over the years.
The goal of Everyday Robots, according to Astro Teller, who runs Alphabet's moonshot stuff, is to create “a general-purpose learning robot,” which sounds moonshot-y enough I suppose. To be fair, they've got an impressive amount of hardware deployed, says Everyday Robots' Hans Peter Brøndmo:
We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and use the same gripper that grasps cups to open doors.

That's a lot of robots, which is awesome, but I have to question what “autonomously” actually means along with what “a range of useful tasks” actually means. There is really not enough publicly available information for us (or anyone?) to assess what Everyday Robots is doing with its fleet of 100 prototypes, how much manipulator-holding is required, the constraints under which they operate, and whether calling what they do “useful” is appropriate.
If you'd rather not wade through Everyday Robots' weirdly overengineered website, we've extracted the good stuff (the videos, mostly) and reposted them here, along with a little bit of commentary underneath each.
Introducing Everyday Robots

Everyday Robots
0:01 — Is it just me, or does the gearing behind those motions sound kind of, um, unhealthy?
0:25 — A bit of an overstatement about the Nobel Prize for picking a cup up off of a table, I think. Robots are pretty good at perceiving and grasping cups off of tables, because it's such a common task. Like, I get the point, but I just think there are better examples of problems that are currently human-easy and robot-hard.
1:13 — It's not necessarily useful to draw that parallel between computers and smartphones and compare them to robots, because there are certain physical realities (like motors and manipulation requirements) that prevent the kind of scaling to which the narrator refers.
1:35 — This is a red flag for me because we've heard this “it's a platform” thing so many times before and it never, ever works out. But people keep on trying it anyway. It might be effective when constrained to a research environment, but fundamentally, “platform” typically means “getting it to do (commercially?) useful stuff is someone else's problem,” and I'm not sure that's ever been a successful model for robots.
2:10 — Yeah, okay. This robot sounds a lot more normal than the robots at the beginning of the video; what's up with that?
2:30 — I am a big fan of Moravec's Paradox and I wish it would get brought up more when people talk to the public about robots.
The challenge of everyday

Everyday Robots
0:18 — I like the door example, because you can easily imagine how many different ways it can go that would be catastrophic for most robots: different levers or knobs, glass in places, variable weight and resistance, and then, of course, thresholds and other nasty things like that.
1:03 — Yes. It can't be reinforced enough, especially in this context, that computers (and by extension robots) are really bad at understanding things. Recognizing things, yes. Understanding them, not so much.
1:40 — People really like throwing shade at Boston Dynamics, don't they? But this doesn't seem fair to me, especially for a company that Google used to own. What Boston Dynamics is doing is very hard, very impressive, and come on, pretty darn exciting. You can acknowledge that someone else is working on hard and exciting problems while you're working on different hard and exciting problems yourself, and not be a little miffed because what you're doing is, like, less flashy or whatever.
A robot that learns

Everyday Robots
0:26 — Saying that the robot is low cost is meaningless without telling us how much it costs. Seriously: “low cost” for a mobile manipulator like this could easily be (and almost certainly is) several tens of thousands of dollars at the very least.
1:10 — I love the inclusion of things not working. Everyone should do this when presenting a new robot project. Even if your budget is infinity, nobody gets everything right all the time, and we all feel better knowing that others are just as flawed as we are.
1:35 — I'd personally steer clear of using words like “intelligently” when talking about robots trained using reinforcement learning techniques, because most people associate “intelligence” with the kind of fundamental world understanding that robots really do not have.
Training the first task

Everyday Robots
1:20 — As a research task, I can see this being a useful project, but it's important to point out that this is a terrible way of automating the sorting of recyclables from trash. Since all of the trash and recyclables already get collected and (presumably) brought to a few centralized locations, in reality you'd just have your system there, where the robots could be stationary and have some control over their environment and do a much better job much more efficiently.
1:15 — Hopefully they'll talk more about this later, but when thinking about this montage, it's important to ask which of these tasks in the real world you would actually want a mobile manipulator to be doing, and which you would just want automated somehow, because those are very different things.
Building with everyone

Everyday Robots
0:19 — It could be a little premature to be talking about ethics at this point, but on the other hand, there's a reasonable argument to be made that there's no such thing as too early to consider the ethical implications of your robotics research. The latter is probably a better perspective, honestly, and I'm glad they're thinking about it in a serious and proactive way.
1:28 — Robots like these are not going to steal your job. I promise.
2:18 — Robots like these are also not the robots that he's talking about here, but the point he's making is a good one, because in the near- to medium term, robots are going to be most valuable in roles where they can increase human productivity by augmenting what humans can do on their own, rather than replacing humans completely.
3:16 — Again, that platform idea…blarg. The whole “someone has written those applications” thing, uh, who, exactly? And why would they? The difference between smartphones (which have a lucrative app ecosystem) and robots (which do not) is that without any third party apps at all, a smartphone has core functionality useful enough that it justifies its own cost. It's going to be a long time before robots are at that point, and they'll never get there if the software applications are always someone else's problem.

Everyday Robots
I'm a little bit torn on this whole thing. A fleet of 100 mobile manipulators is amazing. Pouring money and people into solving hard robotics problems is also amazing. I'm just not sure that the vision of an “Everyday Robot” that we're being asked to buy into is necessarily a realistic one.
The impression I get from watching all of these videos and reading through the website is that Everyday Robots wants us to believe that it's actually working towards putting general-purpose mobile manipulators into everyday environments in a way where people (outside of the Google campus) will be able to benefit from them. And maybe the company is working towards that exact thing, but is that a practical goal, and does it make sense?
The fundamental research being undertaken seems solid; these are definitely hard problems, and solutions to these problems will help advance the field. (Those advances could be especially significant if these techniques and results are published or otherwise shared with the community.) And if the reason to embody this work in a robotic platform is to help inspire that research, then great, I have no issue with that.
But I'm really hesitant to embrace this vision of generalized in-home mobile manipulators doing useful tasks autonomously in a way that's likely to significantly help anyone who's actually watching Everyday Robots' videos. And maybe this is the whole point of a moonshot vision—to work on something hard that won't pay off for a long time. And again, I have no problem with that. However, if that's the case, Everyday Robots should be careful about how it contextualizes and portrays its efforts (and even its successes), why it's working on a particular set of things, and how outside observers should set their expectations. Over and over, companies have overpromised and underdelivered on helpful and affordable robots. My hope is that Everyday Robots is not in the middle of making the exact same mistake. Continue reading

Posted in Human Robots

#440042 A Q-learning algorithm to generate shots ...

RoboCup, originally named the Robot J-League, is an annual robotics and artificial intelligence (AI) competition organized by the International RoboCup Federation. During RoboCup, robots compete against other robots in soccer tournaments. Continue reading
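The excerpt above doesn't describe the method itself, so as a rough illustration of what “a Q-learning algorithm to generate shots” could look like, here is a minimal, generic tabular Q-learning sketch in Python. The state encoding, action set, and reward used below are made-up placeholders for illustration, not the approach from the actual paper.

```python
import random
from collections import defaultdict

# Generic tabular Q-learning for picking a shot action from a discretized
# game state. States, actions, and rewards here are illustrative stand-ins.
ACTIONS = ["shoot_left", "shoot_center", "shoot_right", "pass"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update: Q <- Q + alpha * (target - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical episode step:
state = ("ball_near_goal", "keeper_right")
action = choose_action(state)
reward = 1.0 if action == "shoot_left" else 0.0  # pretend shooting left scores
update(state, action, reward, ("kickoff", "keeper_center"))
```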

Posted in Human Robots

#439908 Why Facebook (Or Meta) Is Making Tactile ...

Facebook, or Meta as it's now calling itself for some reason that I don't entirely understand, is today announcing some new tactile sensing hardware for robots. Or, new-ish, at least—there's a ruggedized and ultra low-cost GelSight-style fingertip sensor, plus a nifty new kind of tactile sensing skin based on suspended magnetic particles and machine learning. It's cool stuff, but why?
Obviously, Facebook Meta cares about AI, because it uses AI to try and do a whole bunch of the things that it's unwilling or unable to devote the time of actual humans to. And to be fair, there are some things that AI may be better at (or at least more efficient at) than humans are. AI is of course much worse than humans at many, many, many things as well, but that debate goes well beyond Facebook Meta and certainly well beyond the scope of this article, which is about tactile sensing for robots. So why does Facebook Meta care even a little bit about making robots better at touching stuff? Yann LeCun, the Chief AI Scientist at Facebook Meta, takes a crack at explaining it:
Before I joined Facebook, I was chatting with Mark Zuckerberg and I asked him, “is there any area related to AI that you think we shouldn't be working on?” And he said, “I can't find any good reason for us to work on robotics.” And so, that was kind of the start of Facebook AI Research—we were not going to work on robotics.

After a few years, it became clear that a lot of interesting progress in AI was happening in the context of robotics, because robotics is the nexus of where people in AI research are trying to get the full loop of perception, reasoning, planning, and action, and getting feedback from the environment. Doing it in the real world is where the problems are concentrated, and you can't play games if you want robots to learn quickly.

It was clear that four or five years ago, there was no business reason to work on robotics, but the business reasons have kind of popped up. Robotics could be used for telepresence, for maintaining data centers more automatically, but the more important aspect of it is making progress towards intelligent agents, the kinds of things that could be used in the metaverse, in augmented reality, and in virtual reality. That's really one of the raison d'être of a research lab, to foresee the domains that will be important in the future. So that's the motivation.

Well, okay, but none of that seems like a good justification for research into tactile sensing specifically. But according to LeCun, it's all about putting together the pieces required for some level of fundamental world understanding, a problem that robotic systems are still bad at and that machine learning has so far not been able to tackle:
How to get machines to learn that model of the world that allows them to predict in advance and plan what's going to happen as a consequence of their actions is really the crux of the problem here. And this is something you have to confront if you work on robotics. But it's also something you have to confront if you want to have an intelligent agent acting in a virtual environment that can interact with humans in a natural way. And one of the long-term visions of augmented reality, for example, is virtual agents that basically are with you all the time, living in your augmented reality glasses or your smartphone or your laptop or whatever, helping you in your daily life as a human assistant would do, but also can answer any question you have. And that system will have to have some degree of understanding of how the world works—some degree of common sense, and be smart enough to not be frustrating to talk to. And that is where all of this research leads in the long run, whether the environment is real or virtual.

AI systems (robots included) are very very dumb in very very specific ways, quite often the ways in which humans are least understanding and forgiving of. This is such a well-established thing that there's a name for it: Moravec's paradox. Humans are great at subconscious levels of world understanding that we've built up over years and years of experience being, you know, alive. AI systems have none of this, and there isn't necessarily a clear path to getting them there, but one potential approach is to start with the fundamentals in the same way that a shiny new human does and build from there, a process that must necessarily include touch.

The DIGIT touch sensor is based on the GelSight style of sensor, which was first conceptualized at MIT over a decade ago. The basic concept of these kinds of tactile sensors is that they're able to essentially convert a touch problem into a vision problem: an array of LEDs illuminate a squishy finger pad from the back, and when the squishy finger pad pushes against something with texture, that texture squishes through to the other side of the finger pad where it's illuminated from many different angles by the LEDs. A camera up inside of the finger takes video of this, resulting in a very rainbow but very detailed picture of whatever the finger pad is squishing against.
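To make the “touch problem into a vision problem” idea concrete, here's a minimal sketch of how you might process the resulting images. It assumes you've already saved one no-contact reference frame and one contact frame from the sensor's camera; the file names and threshold are placeholders, and this is ordinary OpenCV image differencing rather than anything from the official DIGIT software.

```python
import cv2

# Compare a no-contact reference frame from the sensor's internal camera
# against a frame captured during contact. File names are placeholders.
reference = cv2.imread("digit_no_contact.png")   # finger pad at rest
contact = cv2.imread("digit_pressed.png")        # finger pad pressed on an object

# Pixels that changed between the two frames correspond to gel deformation.
diff = cv2.absdiff(contact, reference)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)

# The contact patch is now just a blob in an image, so ordinary vision tools
# (contours, centroids, areas) describe where and how much the gel deformed.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    print(f"Contact patch: {cv2.contourArea(largest):.0f} px^2 at ({x}, {y})")
else:
    print("No contact detected")
```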

The DIGIT paper published last year summarizes the differences between this new sensor and previous versions of GelSight:

DIGIT improves over existing GelSight sensors in several ways: by providing a more compact form factor that can be used on multi-finger hands, improving the durability of the elastomer gel, and making design changes that facilitate large-scale, repeatable production of the sensor hardware to facilitate tactile sensing research.
DIGIT is open source, so you can make one on your own, but that's a hassle. The really big news here is that GelSight itself (an MIT spinoff which commercialized the original technology) will be commercially manufacturing DIGIT sensors, providing a standardized and low-cost option for tactile sensing. The bill of materials for each DIGIT sensor is about US $15 if you were to make a thousand of them, so we're expecting that the commercial version won't cost much more than that.

The other hardware announcement is ReSkin, a tactile sensing skin developed in collaboration with Carnegie Mellon. Like DIGIT, the idea is to make an open source, robust, and very low cost system that will allow researchers to focus on developing the software to help robots make sense of touch rather than having to waste time on their own hardware.
ReSkin operates on a fairly simple concept: it's a flexible sheet of 2mm thick silicone with magnetic particles randomly mixed in. The sheet sits on top of a magnetometer, and whenever the sheet deforms (like if something touches it), the magnetic particles embedded in the sheet get squooshed and the magnetic signal changes, which is picked up by the magnetometer. For this to work, the sheet doesn't have to be directly connected to said magnetometer. This is key, because it makes the part of the ReSkin sensor that's most likely to get damaged super easy to replace—just peel it off and slap on another one and you're good to go.
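As a rough sketch of the machine-learning half of that concept, the snippet below fits a simple linear (ridge-regression) map from magnetometer readings to a contact estimate. The channel count, synthetic data, and (x, y, force) labels are assumptions for illustration; the real ReSkin pipeline uses an array of magnetometers and its own trained models.

```python
import numpy as np

# Map raw magnetometer readings to a contact estimate (x, y, force).
# Shapes and synthetic data below are assumptions purely for illustration.
rng = np.random.default_rng(0)

n_samples, n_channels = 500, 15   # e.g. 5 magnetometers x 3 axes (Bx, By, Bz)
B = rng.normal(size=(n_samples, n_channels))   # stand-in flux readings
targets = rng.normal(size=(n_samples, 3))      # stand-in (x, y, force) labels

# Ridge regression: closed-form linear map from flux changes to contact labels.
lam = 1e-2
W = np.linalg.solve(B.T @ B + lam * np.eye(n_channels), B.T @ targets)

def predict_contact(reading):
    """Return an estimated (x, y, force) for one 15-channel reading."""
    return reading @ W

new_reading = rng.normal(size=n_channels)
x, y, force = predict_contact(new_reading)
print(f"Estimated contact at ({x:.2f}, {y:.2f}) with force {force:.2f}")
```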

I get that touch is an integral part of this humanish world understanding that Facebook Meta is working towards, but for most of us, touch is much more nuanced than just tactile data collection, because we experience everything that we touch within the world understanding that we've built up through integration of all of our other senses as well. I asked Roberto Calandra, one of the authors of the paper on DIGIT, what he thought about this:
I believe that we certainly want to have multimodal sensing in the same way that humans do. Humans use cues from touch, cues from vision, and also cues from audio, and we are able to very smartly put these different sensor modalities together. And if I tell you, can you imagine how touching this object is going to feel for you, you can sort of imagine that. You can also tell me the shape of something that you are touching, you are able to somehow recognize it. So there is very clearly a multisensorial representation that we are learning and using as humans, and it's very likely that this is also going to be very important for embodied agents that we want to develop and deploy.

Calandra also noted that they still have plenty of work to do to get DIGIT closer in form factor and capability to a human finger, which is an aspiration that I often hear from roboticists. But I always wonder: why bother? Like, why constrain robots (which can do all kinds of things that humans cannot) to do things in a human-like way, when we can instead leverage creative sensing and actuation to potentially give them superhuman capabilities? Here's what Calandra thinks:
I don't necessarily believe that a human hand is the way to go. I do believe that the human hand is possibly the golden standard that we should compare against. Can we do at least as good as a human hand? Beyond that, I actually do believe that over the years, the decades, or maybe the centuries, robots will have the possibility of developing superhuman hardware, in the same way that we can put infrared sensors or laser scanners on a robot, why shouldn't we also have mechanical hardware which is superior?
I think there has been a lot of really cool work on soft robotics for example, on how to build tentacles that can imitate an octopus. So it's a very natural question—if we want to have a robot, why should it have hands and not tentacles? And the answer to this is, it depends on what the purpose is. Do we want robots that can perform the same functions of humans, or do we want robots which are specialized for doing particular tasks? We will see when we get there.

So there you have it—the future of manipulation is 100% sometimes probably tentacles. Continue reading

Posted in Human Robots