Tag Archives: organization
#439073 There’s a ‘New’ Nirvana Song Out, ...
One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, neural networks like OpenAI’s GPT-3 and Nikolay Ironov, the AI designer persona built by the Russian design firm Art. Lebedev Studio, have been able to create content indistinguishable from human-made work.
Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.
The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.
The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.
Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to thirty Nirvana tracks were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface; rather than audio, the format encodes a song as data representing musical parameters like pitch and tempo. Components of each song, like the vocal melody or rhythm guitar part, were fed in one at a time.
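For a sense of what that MIDI data looks like, here’s a minimal sketch using the open-source pretty_midi library (an assumption for illustration; it is not the project’s actual tooling, and the filename is hypothetical):

```python
# A minimal sketch of inspecting the musical parameters stored in a
# MIDI file, using the pretty_midi library (pip install pretty_midi).
# "nirvana_track.mid" is a hypothetical filename, not a project asset.
import pretty_midi

midi = pretty_midi.PrettyMIDI("nirvana_track.mid")
print(f"Estimated tempo: {midi.estimate_tempo():.1f} BPM")

# Each instrument track holds notes with pitch and timing information.
for instrument in midi.instruments:
    name = pretty_midi.program_to_instrument_name(instrument.program)
    print(f"{name}: {len(instrument.notes)} notes")
    for note in instrument.notes[:5]:  # show the first few notes
        print(f"  pitch={note.pitch} start={note.start:.2f}s end={note.end:.2f}s")
```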
The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
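To make that idea concrete, here’s a toy sketch of “find patterns, then predict what comes next”: a first-order Markov chain over MIDI pitch numbers. Magenta’s models are far more sophisticated, and the training sequences below are invented:

```python
# Toy illustration of learning note-to-note patterns and continuing a
# melody from a few seed notes. This is a first-order Markov chain,
# not Magenta's architecture; the training data here is made up.
import random
from collections import defaultdict

def train(sequences):
    """Record which pitch tends to follow which."""
    transitions = defaultdict(list)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            transitions[current].append(nxt)
    return transitions

def continue_melody(transitions, seed, length=16):
    """Given a few notes to start from, sample a plausible continuation."""
    melody = list(seed)
    for _ in range(length):
        followers = transitions.get(melody[-1])
        if not followers:
            break  # no pattern learned for this note
        melody.append(random.choice(followers))
    return melody

songs = [[64, 67, 69, 67, 64, 62, 64], [64, 62, 60, 62, 64, 64, 64]]
print(continue_melody(train(songs), seed=[64, 67]))
```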
To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.
Eric Hogan, a singer for a Nirvana tribute band whom the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”
Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27, the grim coincidence that gives the “27 Club” its name.
The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives were cut short by severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, because suffering does tend to produce art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, the rate of suicide attempts among music industry workers is more than double that of the general population.
How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”
Image Credit: Edward Xu on Unsplash
#438762 When Robots Enter the World, Who Is ...
Over the last half decade or so, the commercialization of autonomous robots that can operate outside of structured environments has dramatically increased. But this relatively new transition of robotic technologies from research projects to commercial products comes with its share of challenges, many of which relate to the rapidly increasing visibility that these robots have in society.
Whether it's because of their appearance of agency, or because of their history in popular culture, robots frequently inspire people’s imagination. Sometimes this is a good thing, like when it leads to innovative new use cases. And sometimes this is a bad thing, like when it leads to use cases that could be classified as irresponsible or unethical. Can the people selling robots do anything about the latter? And even if they can, should they?
Roboticists understand that robots, fundamentally, are tools. We build them, we program them, and even the autonomous ones are just following the instructions that we’ve coded into them. However, that same appearance of agency that makes robots so compelling means that it may not be clear to people without much experience with or exposure to real robots that a robot itself isn’t inherently good or bad—rather, as a tool, a robot is a reflection of its designers and users.
This can put robotics companies into a difficult position. When they sell a robot to someone, that person can, hypothetically, use the robot in any way they want. Of course, this is the case with every tool, but it’s the autonomous aspect that makes robots unique. I would argue that autonomy brings with it an implied association between a robot and its maker, or in this case, the company that develops and sells it. I’m not saying that this association is necessarily a reasonable one, but I think that it exists, even if that robot has been sold to someone else who has assumed full control over everything it does.
“All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon”
—Robert Playter, Boston Dynamics
Robotics companies are certainly aware of this, because many of them are very careful about who they sell their robots to, and very explicit about what they want their robots to be doing. But once a robot is out in the wild, as it were, how far should that responsibility extend? And realistically, how far can it extend? Should robotics companies be held accountable for what their robots do in the world, or should we accept that once a robot is sold to someone else, responsibility is transferred as well? And what can be done if a robot is being used in an irresponsible or unethical way that could have a negative impact on the robotics community?
For perspective on this, we contacted folks from three different robotics companies, each of which has experience selling distinctive mobile robots to commercial end users. We asked them the same five questions about the responsibility that robotics companies have regarding the robots that they sell, and here’s what they had to say:
Do you have any restrictions on what people can do with your robots? If so, what are they, and if not, why not?
Péter Fankhauser, CEO, ANYbotics:
We work closely with our customers to make sure that our solution provides the right approach for their problem. This way, the target use case is clear from the beginning, and we do not work with customers interested in using our robot ANYmal outside the intended target applications. Specifically, we strictly exclude any military or weaponized uses; since the foundation of ANYbotics, it has been close to our heart to make human work easier, safer, and more enjoyable.
Robert Playter, CEO, Boston Dynamics:
Yes, we have restrictions on what people can do with our robots, which are outlined in our Terms and Conditions of Sale. All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon. Spot, just like any product, must be used in compliance with the law.
Ryan Gariepy, CTO, Clearpath Robotics:
We do have strict restrictions and know-your-customer (KYC) processes, which are based primarily on Canadian export control regulations. They depend on the type of equipment sold as well as where it is going. More generally, we also will not sell or support a robot if we know that it will create an uncontrolled safety hazard, or if we have reason to believe that the buyer is unqualified to use the product. And, as always, we do not support using our products for the development of fully autonomous weapons systems.
More broadly, if you sell someone a robot, why should they be restricted in what they can do with it?
Péter Fankhauser, ANYbotics: We see the robot less as a simple object and more as an artificial workforce. To us, this implies that usage is closely coupled to the transfer of the robot, and that both the customer and the provider agree on what the robot is expected to do. This approach is supported by what we hear from our customers, who are increasingly interested in paying for the robots as a service or per use.
Robert Playter, Boston Dynamics: We’re offering a product for sale. We’re going to do the best we can to stop bad actors from using our technology for harm, but we don’t have the control to regulate every use. That said, we believe that our business will be best served if our technology is used for peaceful purposes—to work alongside people as trusted assistants and remove them from harm’s way. We do not want to see our technology used to cause harm or promote violence. Our restrictions are similar to those of other manufacturers or technology companies that take steps to reduce or eliminate the violent or unlawful use of their products.
Ryan Gariepy, Clearpath Robotics: Assuming the organization doing the restricting is a private organization, and the robot and its software are sold rather than leased or “managed,” there aren’t strong legal reasons to restrict use. That being said, the manufacturer likewise has no obligation to continue supporting that specific robot or customer going forward. However, given that we are only at the very edge of how robots will reshape a great deal of society, it is in the best interest of both the manufacturer and the user to be honest with each other about their respective goals. Right now, you’re not only investing in the initial purchase and relationship, you’re investing in the promise of how you can help each other succeed in the future.
“If a robot is being used in a way that is irresponsible due to safety: intervene! If it’s unethical: speak up!”
—Péter Fankhauser, ANYbotics
What can you realistically do to make sure that people who buy your robots use them in the ways that you intend?
Péter Fankhauser, ANYbotics: We maintain a close collaboration with our customers to ensure their success with our solution. So far, we have refrained from technical solutions to block unintended use.
Robert Playter, Boston Dynamics: We vet our customers to make sure that their desired applications are things that Spot can support, and are in alignment with our Terms and Conditions of Sale. We’ve turned away customers whose applications aren’t a good match with our technology. If customers misuse our technology, we’re clear in our Terms of Sale that their violations may void our warranty and prevent their robots from being updated, serviced, repaired, or replaced. We may also repossess robots that are leased rather than purchased. Finally, we will refuse future sales to customers that violate our Terms of Sale.
Ryan Gariepy, Clearpath Robotics: We typically work with our clients ahead of the purchase to make sure their expectations match reality, in particular on aspects like safety, supervisory requirements, and usability. It’s far worse to sell a robot that’ll sit on a shelf, or worse, cause harm, than to not sell a robot at all, so we prefer to reduce the risk of this situation in advance of receiving an order or shipping a robot.
How do you evaluate the merit of edge cases, for example if someone wants to use your robot in research or art that may push the boundaries of what you personally think is responsible or ethical?
Péter Fankhauser, ANYbotics: It’s about dialog: understanding each other and figuring out alternatives that work for all involved parties. The earlier you can have this dialog, the better.
Robert Playter, Boston Dynamics: There’s a clear line between exploring robots in research and art, and using the robot for violent or illegal purposes.
Ryan Gariepy, Clearpath Robotics: We have sold thousands of robots to hundreds of clients, and I do not recall the last situation that was not covered by a combination of export control and a general evaluation of the client's goals and expectations. I'm sure this will change as robots continue to drop in price and increase in flexibility and usability.
“You're not only investing in the initial purchase and relationship, you're investing in the promise of how you can help each other succeed in the future.”
—Ryan Gariepy, Clearpath Robotics
What should roboticists do if we see a robot being used in a way that we feel is unethical or irresponsible?
Péter Fankhauser, ANYbotics: If it’s irresponsible due to safety: intervene! If it’s unethical: speak up!
Robert Playter, Boston Dynamics: We want robots to be beneficial for humanity, which includes the notion of not causing harm. As an industry, we think robots will achieve long-term commercial viability only if people see robots as helpful, beneficial tools without worrying if they’re going to cause harm.
Ryan Gariepy, Clearpath Robotics: On a one-off basis, they should speak to a combination of the user, the supplier or suppliers, the media, and, if safety is an immediate concern, regulatory or government agencies. If the situation in question risks becoming commonplace and is not being taken seriously, they should speak up more generally in appropriate forums—conferences, industry groups, standards bodies, and the like.
As more and more robots with different capabilities become commercially available, these issues are likely to come up more frequently. The three companies we talked to certainly don’t represent every viewpoint, and we did reach out to other companies that declined to comment. But I would think (I would hope?) that everyone in the robotics community can agree that robots should be used in ways that make people’s lives better. What “better” means in the context of art, research, and even military robots may not always be easy to define, and inevitably there will be disagreement about what is ethical and responsible, and what isn’t.
We’ll keep on talking about it, though, and do our best to help the robotics community continue growing and evolving in a positive way. Let us know what you think in the comments.
#438014 Meet Blueswarm, a Smart School of ...
Anyone who’s seen an undersea nature documentary has marveled at the complex choreography that schooling fish display, a darting, synchronized ballet with a cast of thousands.
Those instinctive movements have inspired researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering to build a school of robotic fish. The results could improve the performance and dependability of not just underwater robots, but other machines that require decentralized locomotion and organization, such as self-driving cars and robots for space exploration.
The fish collective called Blueswarm was created by a team led by Radhika Nagpal, whose lab is a pioneer in self-organizing systems. The oddly adorable robots can sync their movements like biological fish, taking cues from their plastic-bodied neighbors with no external controls required. Nagpal told IEEE Spectrum that this marks a milestone, demonstrating complex 3D behaviors with implicit coordination in underwater robots.
“Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs,” Nagpal said. “This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”
The research is published in Science Robotics, with Florian Berlinger as first author. Berlinger said the “Bluebot” robots integrate a trio of blue LED lights, a lithium-polymer battery, a pair of cameras, a Raspberry Pi computer, and four controllable fins within a 3D-printed hull. The fish-eye cameras detect the LEDs of fellow swimmers and apply a custom algorithm to calculate distance, direction, and heading.
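As a rough sketch of how two LEDs and a camera can yield a distance estimate, the standard pinhole-camera relationship works from the LEDs’ apparent separation in the image; the spacing and focal length below are illustrative assumptions, not Blueswarm’s published calibration:

```python
# Sketch: estimating the distance to a neighbor from the apparent
# separation of its two LEDs, via the pinhole camera model.
# led_spacing_m and focal_length_px are illustrative guesses.

def distance_from_leds(pixel_separation, led_spacing_m=0.05,
                       focal_length_px=800.0):
    """The farther the neighbor, the closer together its LEDs appear."""
    return focal_length_px * led_spacing_m / pixel_separation

# LEDs that appear 40 pixels apart imply a neighbor about 1 meter away.
print(f"{distance_from_leds(40):.2f} m")
```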
Based on that simple production and detection of LED light, the team showed that Blueswarm could self-organize into behaviors including aggregation, dispersal, and circle formation—basically, synchronized clockwise swimming. The researchers also simulated a successful search mission, an autonomous Finding Nemo. Using their dispersion algorithm, the robot school spread out until one robot could detect a red light in the tank. Its blue LEDs then flashed, triggering the aggregation algorithm to gather the school around it. Such a robot swarm might prove valuable in search-and-rescue missions at sea, covering miles of open water and reporting back to its mates.
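Here’s a minimal sketch of what dispersal and aggregation rules of this flavor might look like, assuming each robot already has rough estimates of its neighbors’ positions (the real robots derive these visually, and the update rules here are simplified guesses):

```python
# Simplified sketch of two of the behaviors described above: dispersal
# (move away from the nearest neighbor) and aggregation (move toward a
# flashing neighbor). Positions are 3D vectors in meters.
import numpy as np

def dispersal_step(my_pos, neighbor_positions, step=0.1):
    """Move directly away from the nearest detected neighbor."""
    nearest = min(neighbor_positions, key=lambda p: np.linalg.norm(p - my_pos))
    away = my_pos - nearest
    return my_pos + step * away / np.linalg.norm(away)

def aggregation_step(my_pos, target_pos, step=0.1):
    """Move toward the robot whose LEDs flashed after finding the target."""
    toward = target_pos - my_pos
    return my_pos + step * toward / np.linalg.norm(toward)

me = np.array([0.0, 0.0, 0.5])
others = [np.array([0.2, 0.1, 0.5]), np.array([1.0, 1.0, 0.4])]
print(dispersal_step(me, others))
```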
“Each Bluebot implicitly reacts to its neighbors’ positions,” Berlinger said. The fish—RoboCod, perhaps?—also integrate a Wi-Fi module that allows new behaviors to be uploaded remotely. The lab’s previous efforts include a 1,000-strong army of “Kilobots” and a robotic construction crew inspired by termites. Both projects operated in two-dimensional space, but a 3D environment like air or water poses a tougher challenge for sensing and movement.
In nature, Berlinger notes, there’s no scaly CEO to direct the school’s movements. Nor do fish communicate their intentions. Instead, so-called “implicit coordination” guides the school’s collective behavior, with individual members executing high-speed moves based on what they see their neighbors doing. That decentralized, autonomous organization has long fascinated scientists, including in robotics.
“In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system with a high degree of autonomy and flexibility underwater, where things like GPS and Wi-Fi are not accessible,” Berlinger said.
Berlinger adds that the research could one day translate to anything that requires decentralized robots, from self-driving cars and Amazon warehouse vehicles to the exploration of faraway planets, where high latency makes it impossible to transmit commands quickly. Today’s semi-autonomous cars face their own technical hurdles in reliably sensing and responding to their complex environments, including when foul weather obscures onboard sensors or road markers, or when they can’t fix their position via GPS. An entire subset of autonomous-car research involves vehicle-to-vehicle (V2V) communications that could give cars a hive mind to guide individual or collective decisions—avoiding snarled traffic, driving safely in tight convoys, or taking group evasive action during a crash that’s beyond their sensory range.
“Once we have millions of cars on the road, there can’t be one computer orchestrating all the traffic, making decisions that work for all the cars,” Berlinger said.
The miniature robots could also work long hours in places inaccessible to humans and divers, or even to large tethered robots. Nagpal said the synthetic swimmers could monitor and collect data on reefs or underwater infrastructure 24/7, and work their way into tiny places without disturbing fragile equipment or ecosystems.
“If we could be as good as fish in that environment, we could collect information and be non-invasive, in cluttered environments where everything is an obstacle,” Nagpal said.
#437709 iRobot Announces Major Software Update, ...
Since the release of the very first Roomba in 2002, iRobot’s long-term goal has been to deliver cleaner floors in a way that’s effortless and invisible. Which sounds pretty great, right? And arguably, iRobot has managed to do exactly this, with its most recent generation of robot vacuums that make their own maps and empty their own dustbins. For those of us who trust our robots, this is awesome, but iRobot has gradually been realizing that many Roomba users either don’t want this level of autonomy, or aren’t ready for it.
Today, iRobot is announcing a major new update to its app that represents a significant shift of its overall approach to home robot autonomy. Humans are being brought back into the loop through software that tries to learn when, where, and how you clean so that your Roomba can adapt itself to your life rather than the other way around.
To understand why this is such a shift for iRobot, let’s take a very brief look back at how the Roomba interface has evolved over the last couple of decades. The first generation of Roomba had three buttons that allowed (or required) the user to select whether the room being vacuumed was small, medium, or large. iRobot ditched that system one generation later, replacing the room-size buttons with a single “clean” button. Programmable scheduling meant that users no longer needed to push any buttons at all, and with Roombas able to find their way back to their docking stations, all you needed to do was empty the dustbin. And with the most recent few generations (the S and i series), the dustbin emptying is also done for you, reducing direct interaction with the robot to once a month or less.
Image: iRobot
iRobot CEO Colin Angle believes that working toward more intelligent human-robot collaboration is “the brave new frontier” of AI. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” he says. “But thinking that autonomy was the destination was where I was just completely wrong.”
Where the top-end Roombas are now reflects a goal that iRobot has been working toward since 2002: with autonomy, scheduling, and the clean base to empty the bin, you can set up your Roomba to vacuum when you’re not home, giving you cleaner floors every single day without ever being aware that the Roomba is hard at work while you’re out. It’s not just hands-off, it’s brain-off. No noise, no fuss, just things being cleaner thanks to the efforts of a robot that does its best to be invisible to you. Personally, I’ve been completely sold on this idea for home robots, and iRobot CEO Colin Angle was as well.
“I probably told you that the perfect Roomba is the Roomba that you never see, you never touch, you just come home every day and it’s done the right thing,” Angle told us. “But customers don’t want that—they want to be able to control what the robot does. We started to hear this a couple of years ago, and it took a while before it sunk in, but it made sense.”
How? Angle compares it to having a human come into your house to clean, without your being able to tell them where or when to do the job. Maybe after a while you’ll build up the amount of trust necessary for that to work, but in the short term it would likely be frustrating. And people get frustrated with their Roombas for exactly this reason. “The desire to have more control over what the robot does kept coming up, and for me, it required a pretty big shift in my view of what intelligence we were trying to build. Autonomy is not intelligence. We need to do something more.”
That something more, Angle says, is a partnership as opposed to autonomy. It’s an acknowledgement that not everyone has the same level of trust in robots as the people who build them. It’s an understanding that people want to feel in control of their homes: they’ve set things up the way they want, they’ve been cleaning the way they want, and a robot shouldn’t just come in and do its own thing.
“Until the robot proves that it knows enough about your home and about the way that you want your home cleaned,” Angle says, “you can’t move forward.” He adds that this is one of those things that seem obvious in retrospect, but even if they’d wanted to address the issue before, they didn’t have the technology to solve the problem. Now they do. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” Angle says. “But thinking that autonomy was the destination was where I was just completely wrong.”
The previous iteration of the iRobot app (and Roombas themselves) are built around one big fat CLEAN button. The new approach instead tries to figure out in much more detail where the robot should clean, and when, using a mixture of autonomous technology and interaction with the user.
Where to Clean
Knowing where to clean depends on your Roomba having a detailed and accurate map of its environment. For several generations now, Roombas have used visual simultaneous localization and mapping (VSLAM) to build persistent maps of your home. Those maps have been used to tell the Roomba to clean specific rooms, but that’s about it. With the new update, Roombas with cameras will be able to recognize some objects and features in your home, including chairs, tables, couches, and even countertops. The robots will use these features to identify where messes tend to happen so that they can focus on those areas—like around the dining room table or along the front of the couch.
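As a toy example of how a recognized object might become a focus area, here’s a sketch that pads an object’s estimated footprint with a cleaning margin; the function, coordinates, and margin are hypothetical, not iRobot’s actual logic:

```python
# Hypothetical sketch: turn a recognized object's footprint into a
# padded "focus zone" where messes tend to accumulate. All names and
# numbers are illustrative; iRobot's real logic is not public.

def focus_zone(x_m, y_m, width_m, depth_m, margin_m=0.5):
    """Return (x_min, y_min, x_max, y_max) of a padded cleaning zone."""
    return (x_m - margin_m, y_m - margin_m,
            x_m + width_m + margin_m, y_m + depth_m + margin_m)

# A 1.8 m x 0.9 m dining table at (2.0, 1.0), padded by 0.5 m all around.
print(focus_zone(2.0, 1.0, 1.8, 0.9))
```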
We should take a minute here to clarify how the Roomba is using its camera. The original (primary?) purpose of the camera was for VSLAM, where the robot would take photos of your home, downsample them into QR-code-like patterns of light and dark, and then use those (with the assistance of other sensors) to navigate. Now the camera is also being used to take pictures of other stuff around your house to make that map more useful.
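To give a flavor of what a “QR-code-like pattern of light and dark” might be, here’s a sketch of a generic coarse image fingerprint in the spirit of a perceptual hash; iRobot’s actual VSLAM feature extraction is proprietary and almost certainly different:

```python
# Illustration only: reduce an image to a coarse grid of light/dark
# cells, loosely analogous to the patterns described above. This is a
# generic perceptual-hash-style trick, not iRobot's VSLAM pipeline.
import numpy as np
from PIL import Image

def light_dark_fingerprint(path, size=8):
    gray = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(gray, dtype=float)
    return pixels > pixels.mean()  # True where lighter than average

# "living_room.jpg" is a hypothetical filename.
for row in light_dark_fingerprint("living_room.jpg"):
    print("".join("#" if cell else "." for cell in row))
```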
Photo: iRobot
The robots will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table.
This is done through machine learning using a library of images of common household objects from a floor perspective that iRobot had to develop from scratch. Angle clarified for us that this is all done via a neural net that runs on the robot, and that “no recognizable images are ever stored on the robot or kept, and no images ever leave the robot.” Worst case, if all the data iRobot has about your home gets somehow stolen, the hacker would only know that (for example) your dining room has a table in it and the approximate size and location of that table, because the map iRobot has of your place only stores symbolic representations rather than images.
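A symbolic map along those lines might look something like the sketch below; the structure is a guess for illustration, since iRobot’s internal format hasn’t been published:

```python
# A guess at a purely symbolic room map: object labels with rough
# positions and sizes, and no stored imagery. Illustrative only.
from dataclasses import dataclass

@dataclass
class MapObject:
    label: str      # e.g., "dining table"
    x_m: float      # approximate position in room coordinates
    y_m: float
    width_m: float  # approximate footprint
    depth_m: float

dining_room = [
    MapObject("dining table", x_m=2.1, y_m=1.4, width_m=1.8, depth_m=0.9),
    MapObject("chair", x_m=1.5, y_m=1.0, width_m=0.45, depth_m=0.45),
]
# Even if this data leaked, it reveals rough layout, never photos.
print(dining_room[0])
```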
Another useful new feature is intended to help manage the “evil Roomba places” (as Angle puts it) that cause Roombas to get stuck in every home. If a place is evil enough that the Roomba has to call you for help because it gave up completely, the Roomba will now remember it and suggest that you either make some changes or let it stop cleaning there, which seems reasonable.
When to Clean
It turns out that the primary cause of mission failure for Roombas is not that they get stuck or that they run out of battery—it’s user cancellation, usually because the robot is getting in the way or being noisy when you don’t want it to be. “If you kill a Roomba’s job because it annoys you,” points out Angle, “how is that robot being a good partner? I think it’s an epic fail.” Of course, it’s not the robot’s fault, because Roombas only clean when we tell them to, which Angle says is part of the problem. “People actually aren’t very good at making their own schedules—they tend to oversimplify, and not think through what their schedules are actually about, which leads to lots of [figurative] Roomba death.”
To help you figure out when the robot should actually be cleaning, the new app will look for patterns in when you ask the robot to clean, and then recommend a schedule based on those patterns. That might mean the robot cleans different areas at different times every day of the week. The app will also make event-based scheduling recommendations, integrated with other smart home devices. Would you prefer the Roomba to clean every time you leave the house? The app can integrate with your security system (or garage door, or any number of other things) and take care of that for you.
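A toy version of that pattern-finding might look like the sketch below, which counts when cleaning jobs have historically been started and recommends the most common weekly slots; the app’s real logic hasn’t been disclosed:

```python
# Sketch: recommend a schedule from the (weekday, hour) slots in which
# the user most often started a cleaning job. Illustrative data only.
from collections import Counter
from datetime import datetime

def recommend_schedule(start_times, top_n=3):
    """start_times: datetimes at which the user hit 'clean'."""
    slots = Counter((t.strftime("%A"), t.hour) for t in start_times)
    return [f"{day} around {hour}:00"
            for (day, hour), _ in slots.most_common(top_n)]

# Hypothetical history: mostly Monday evenings, one Saturday morning.
history = [datetime(2021, 3, d, 18, 30) for d in (1, 8, 15)]
history.append(datetime(2021, 3, 6, 10, 0))
print(recommend_schedule(history))  # ['Monday around 18:00', ...]
```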
More generally, Roomba will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table. The app will also, to some extent, pay attention to the environment and season. It might suggest increasing your vacuuming frequency if pollen counts are especially high, or if it’s pet shedding season and you have a dog. Unfortunately, Roomba isn’t (yet?) capable of recognizing dogs on its own, so the app has to cheat a little bit by asking you some basic questions.
A Smarter App
Image: iRobot
The app update, which should be available starting today, is free. The scheduling and recommendations will work on every Roomba model, although for object recognition and anything related to mapping, you’ll need one of the more recent and fancier models with a camera. Future app updates will happen on a more aggressive schedule. Major app releases should happen every six months, with incremental updates happening even more frequently than that.
Angle also told us that, overall, this change in direction represents a substantial shift in resources for iRobot; the company has pivoted two-thirds of its engineering organization to focus on software-based collaborative intelligence rather than hardware. “It’s not like we’re done doing hardware,” Angle assured us. “But we do think about hardware differently. We view our robots as platforms that have longer life cycles, and each platform will be able to support multiple generations of software. We’ve kind of decoupled robot intelligence from hardware, and that’s a change.”
Angle believes that working toward more intelligent collaboration between humans and robots is “the brave new frontier of artificial intelligence. I expect it to be the frontier for a reasonable amount of time to come,” he adds. “We have a lot of work to do to create the type of easy-to-use experience that consumer robots need.”