Years Later, Alphabet’s Everyday ...
Last week, Google or Alphabet or X or whatever you want to call it announced that its Everyday Robots team has grown enough and made enough progress that it's time for it to become its own thing, now called, you guessed it, “Everyday Robots.” There's a new website of questionable design along with a lot of fluffy descriptions of what Everyday Robots is all about. But fortunately, there are also some new videos and enough details about the engineering and the team's approach that it's worth spending a little bit of time wading through the clutter to see what Everyday Robots has been up to over the last couple of years and what their plans are for the near future.
That close to the arm seems like a really bad place to put an E-Stop, right?
Our headline may sound a little bit snarky, but the headline in Alphabet's own announcement blog post is “everyday robots are (slowly) leaving the lab.” It's less of a dig and more of an acknowledgement that getting mobile manipulators to usefully operate in semi-structured environments has been, and continues to be, a huge challenge. We'll get into the details in a moment, but the high-level news here is that Alphabet appears to have thrown a lot of resources behind this effort while embracing a long time horizon, and that its investment is starting to pay dividends. This is a nice surprise, considering the somewhat haphazard state (at least to outside appearances) of Google's robotics ventures over the years.
The goal of Everyday Robots, according to Astro Teller, who runs Alphabet's moonshot stuff, is to create “a general-purpose learning robot,” which sounds moonshot-y enough I suppose. To be fair, they've got an impressive amount of hardware deployed, says Everyday Robots' Hans Peter Brøndmo:
We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and use the same gripper that grasps cups to open doors.

That's a lot of robots, which is awesome, but I have to question what “autonomously” actually means, along with what “a range of useful tasks” actually means. There is really not enough publicly available information for us (or anyone?) to assess what Everyday Robots is doing with its fleet of 100 prototypes, how much manipulator-holding is required, the constraints under which they operate, and whether calling what they do “useful” is appropriate.
If you'd rather not wade through Everyday Robots' weirdly overengineered website, we've extracted the good stuff (the videos, mostly) and reposted them here, along with a little bit of commentary underneath each.
Introducing Everyday Robots
Everyday Robots
0:01 — Is it just me, or does the gearing behind those motions sound kind of, um, unhealthy?
0:25 — A bit of an overstatement about the Nobel Prize for picking a cup up off of a table, I think. Robots are pretty good at perceiving and grasping cups off of tables, because it's such a common task. Like, I get the point, but I just think there are better examples of problems that are currently human-easy and robot-hard.
1:13 — It's not necessarily useful to draw that parallel between computers and smartphones and compare them to robots, because there are certain physical realities (like motors and manipulation requirements) that prevent the kind of scaling to which the narrator refers.
1:35 — This is a red flag for me because we've heard this “it's a platform” thing so many times before and it never, ever works out. But people keep on trying it anyway. It might be effective when constrained to a research environment, but fundamentally, “platform” typically means “getting it to do (commercially?) useful stuff is someone else's problem,” and I'm not sure that's ever been a successful model for robots.
2:10 — Yeah, okay. This robot sounds a lot more normal than the robots at the beginning of the video; what's up with that?
2:30 — I am a big fan of Moravec's Paradox (the observation that things humans find effortless, like perception and dexterity, are very hard for machines, while things humans find hard, like arithmetic, are easy), and I wish it would get brought up more when people talk to the public about robots.
The challenge of everyday
Everyday Robots
0:18 — I like the door example, because you can easily imagine how many different ways it could go wrong for most robots: different levers or knobs, glass in places, variable weight and resistance, and then, of course, thresholds and other nasty things like that.
1:03 — Yes. It can't be reinforced enough, especially in this context, that computers (and by extension robots) are really bad at understanding things. Recognizing things, yes. Understanding them, not so much.
1:40 — People really like throwing shade at Boston Dynamics, don't they? But this doesn't seem fair to me, especially for a company that Google used to own. What Boston Dynamics is doing is very hard, very impressive, and come on, pretty darn exciting. You can acknowledge that someone else is working on hard and exciting problems while you're working on different hard and exciting problems yourself, and not be a little miffed because what you're doing is, like, less flashy or whatever.
A robot that learns
Everyday Robots
0:26 — Saying that the robot is low cost is meaningless without telling us how much it costs. Seriously: “low cost” for a mobile manipulator like this could easily be (and almost certainly is) several tens of thousands of dollars at the very least.
1:10 — I love the inclusion of things not working. Everyone should do this when presenting a new robot project. Even if your budget is infinity, nobody gets everything right all the time, and we all feel better knowing that others are just as flawed as we are.
1:35 — I'd personally steer clear of using words like “intelligently” when talking about robots trained using reinforcement learning techniques, because most people associate “intelligence” with the kind of fundamental world understanding that robots really do not have.
Training the first task
Everyday Robots
1:15 — Hopefully they'll talk more about this later, but when thinking about this montage, it's important to ask which of these tasks you would actually want a mobile manipulator to be doing in the real world, and which you would just want automated somehow, because those are very different things.
1:20 — As a research task, I can see this being a useful project, but it's important to point out that this is a terrible way of automating the sorting of recyclables from trash. Since all of the trash and recyclables already get collected and (presumably) brought to a few centralized locations, in reality you'd just put your system there, where the robots could be stationary, have some control over their environment, and do a much better job much more efficiently.
Building with everyone
Everyday Robots
0:19 — It could be a little premature to be talking about ethics at this point, but on the other hand, there's a reasonable argument to be made that there's no such thing as too early to consider the ethical implications of your robotics research. The latter is probably a better perspective, honestly, and I'm glad they're thinking about it in a serious and proactive way.
1:28 — Robots like these are not going to steal your job. I promise.
2:18 — Robots like these are also not the robots that he's talking about here, but the point he's making is a good one, because in the near to medium term, robots are going to be most valuable in roles where they can increase human productivity by augmenting what humans can do on their own, rather than replacing humans completely.
3:16 — Again, that platform idea…blarg. The whole “someone has written those applications” thing, uh, who, exactly? And why would they? The difference between smartphones (which have a lucrative app ecosystem) and robots (which do not) is that without any third party apps at all, a smartphone has core functionality useful enough that it justifies its own cost. It's going to be a long time before robots are at that point, and they'll never get there if the software applications are always someone else's problem.
Everyday Robots
I'm a little bit torn on this whole thing. A fleet of 100 mobile manipulators is amazing. Pouring money and people into solving hard robotics problems is also amazing. I'm just not sure that the vision of an “Everyday Robot” that we're being asked to buy into is necessarily a realistic one.
The impression I get from watching all of these videos and reading through the website is that Everyday Robots wants us to believe that it's actually working towards putting general-purpose mobile manipulators into everyday environments in a way where people (outside of the Google campus) will be able to benefit from them. And maybe the company is working towards that exact thing, but is that a practical goal, and does it make sense?
The fundamental research being undertaken seems solid; these are definitely hard problems, and solutions to these problems will help advance the field. (Those advances could be especially significant if these techniques and results are published or otherwise shared with the community.) And if the reason to embody this work in a robotic platform is to help inspire that research, then great, I have no issue with that.
But I'm really hesitant to embrace this vision of generalized in-home mobile manipulators doing useful tasks autonomously in a way that's likely to significantly help anyone who's actually watching Everyday Robots' videos. And maybe this is the whole point of a moonshot vision: to work on something hard that won't pay off for a long time. And again, I have no problem with that. However, if that's the case, Everyday Robots should be careful about how it contextualizes and portrays its efforts (and even its successes), why it's working on a particular set of things, and how outside observers should set their expectations. Over and over, companies have overpromised and underdelivered on helpful and affordable robots. My hope is that Everyday Robots is not in the middle of making the exact same mistake.
There’s a ‘New’ Nirvana Song Out, ...
One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, neural networks like OpenAI’s GPT-3 and Russian designer Nikolay Ironov have been able to create content indistinguishable from human-made work.
Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.
The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.
The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.
Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to thirty Nirvana tracks were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface; rather than recorded audio, the format stores a song as coded events representing musical parameters like pitch and tempo. Components of each song, like the vocal melody or the rhythm guitar part, were fed in one at a time.
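To make that input format concrete, here’s a minimal sketch of pulling note and tempo events out of a MIDI file using the third-party Python library mido. The file name is hypothetical, and this isn’t the project’s actual ingestion code; it’s just an illustration of the kind of symbolic data the network sees instead of audio.

```python
# A minimal sketch of extracting MIDI events, assuming the third-party
# `mido` library (pip install mido). The file name is hypothetical.
import mido

mid = mido.MidiFile("drowned_in_the_sun.mid")

for i, track in enumerate(mid.tracks):
    print(f"Track {i}: {track.name}")
    for msg in track:
        if msg.type == "set_tempo":
            # Tempo is stored as microseconds per beat; convert to BPM.
            print(f"  tempo: {mido.tempo2bpm(msg.tempo):.1f} BPM")
        elif msg.type == "note_on" and msg.velocity > 0:
            # msg.note is the MIDI pitch number (60 = middle C);
            # msg.time is the delta time, in ticks, since the previous event.
            print(f"  pitch={msg.note}  delta_ticks={msg.time}")
```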
The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
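And to make the “predict what comes next” step concrete, here’s a toy next-note sampler in PyTorch. To be clear, this is not Magenta’s actual architecture; it’s a minimal, hypothetical sketch of the general idea: a sequence model assigns probabilities to the next pitch, and generation means sampling a note, appending it, and repeating.

```python
# A toy next-note predictor. This is not Magenta's actual model, just
# the core idea: learn patterns over pitch sequences, then, given a few
# seed notes, repeatedly sample a plausible next note.
import torch
import torch.nn as nn

NUM_PITCHES = 128  # the MIDI pitch range, 0-127

class NextNoteModel(nn.Module):
    def __init__(self, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(NUM_PITCHES, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, NUM_PITCHES)

    def forward(self, pitches):
        # pitches: (batch, seq_len) tensor of integer pitch IDs
        hidden, _ = self.lstm(self.embed(pitches))
        return self.head(hidden)  # logits over the next pitch at each step

@torch.no_grad()
def continue_melody(model, seed, length=32, temperature=1.0):
    """Sample `length` new pitches, one at a time, after the seed notes."""
    melody = list(seed)
    for _ in range(length):
        x = torch.tensor([melody], dtype=torch.long)
        logits = model(x)[0, -1] / temperature
        probs = torch.softmax(logits, dim=-1)
        melody.append(torch.multinomial(probs, num_samples=1).item())
    return melody

model = NextNoteModel()  # untrained here; the real system was trained on Nirvana MIDIs
print(continue_melody(model, seed=[64, 64, 65, 67]))  # a hypothetical 4-note seed
```

Magenta’s real models are considerably richer (event vocabularies that capture timing and dynamics, for instance), but the generate-by-repeated-prediction loop is the same.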
To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.
Eric Hogan, a singer for a Nirvana tribute band who the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”
Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27.
The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives were cut short by severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, as suffering does tend to create art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, the rate of suicide attempts among music industry workers is more than double that of the general population.
How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”
Image Credit: Edward Xu on Unsplash