Tag Archives: Designer
#439081 Classify This Robot-Woven Sneaker With ...
For athletes trying to run fast, the right shoe can be essential to achieving peak performance. For athletes trying to run as fast as humanly possible, a shoe can also become a work of individually customized engineering.
This is why Adidas has married 3D printing with robotic automation in a mass-market footwear project it calls Futurecraft.Strung, expected to be available for purchase as soon as later this year. Using a customized, 3D-printed sole, a Futurecraft.Strung manufacturing robot can place some 2,000 threads from up to 10 different sneaker yarns in one upper section of the shoe.
Skylar Tibbits, founder and co-director of the Self-Assembly Lab and associate professor in MIT's Department of Architecture, says that because of its small scale, footwear has been an area of focus for 3D printing and additive manufacturing, which involves adding material bit by bit.
“There are really interesting complex geometry problems,” he says. “It’s pretty well suited.”
Photo: Adidas
Beginning with a 3D-printed sole, Adidas robots weave together some 2,000 threads from up to 10 different sneaker yarns to make the upper of one Futurecraft.Strung shoe—expected on the market later this year or sometime in 2022.
Adidas began working on the Futurecraft.Strung project in 2016. Then two years later, Adidas Futurecraft, the company’s innovation incubator, began collaborating with digital design studio Kram/Weisshaar. In less than a year the team built the software and hardware for the upper part of the shoe, called Strung uppers.
“Most 3D printing in the footwear space has been focused on the midsole or outsole, like the bottom of the shoe,” Tibbits explains. But now, he says, Adidas is bringing robotics and a threaded design to the upper part of the shoe. The company bases its Futurecraft.Strung design on high-resolution scans of how runners’ feet move as they travel.
This more flexible design can benefit athletes in multiple sports, according to an Adidas blog post. The process uses motion capture of an athlete’s foot, along with feedback from the athlete, to tailor the design to that athlete’s particular gait. Adidas customizes the weaving of the shoe’s “fabric” (really more like an elaborate woven string figure, a cat’s cradle shaped to fit the foot) to achieve a close and comfortable fit, the company says.
What Adidas calls its “4D sole” combines 3D printing with materials that can change their shape and properties over time. In fact, Tibbits coined the term 4D printing to describe this process in 2013. The company takes customized data from the Adidas Athlete Intelligent Engine to make the shoe, according to Kram/Weisshaar’s website.
Photo: Adidas
Closeup of the weaving process behind a Futurecraft.Strung shoe
“With Strung for the first time, we can program single threads in any direction, where each thread has a different property or strength,” Fionn Corcoran-Tadd, an innovation designer at Adidas’ Futurecraft lab, said in a company video. Each thread serves a purpose, the video noted. “This is like customized string art for your feet,” Tibbits says.
Although the robotics technology the company uses has been around for many years, what Adidas’s robotic weavers can achieve with thread is a matter of elaborate geometry. “It’s more just like a really elegant way to build up material combining robotics and the fibers and yarns into these intricate and complex patterns,” he says.
Robots can, of course, create patterns with more precision than someone winding thread by hand, and they can change the yarn and color of the fabric pattern rapidly and reliably. Adidas says it can make a single upper in 45 minutes and a pair of sneakers in 1 hour and 30 minutes, and it plans to cut that time down to minutes in the months ahead.
An Adidas spokesperson says sneakers incorporating the Futurecraft.Strung upper are still prototypes, but the company plans to bring a Strung shoe to market in late 2021 or 2022. However, Adidas Futurecraft sneakers with a 3D-printed midsole are already available.
Adidas plans to continue gathering data from athletes to customize the uppers of sneakers. “We’re building up a library of knowledge and it will get more interesting as we aggregate data of testing and from different athletes and sports,” the Adidas Futurecraft team writes in a blog post. “The more we understand about how data can become design code, the more we can take that and apply it to new Strung textiles. It’s a continuous evolution.”
#439073 There’s a ‘New’ Nirvana Song Out, ...
One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, AI systems like OpenAI’s GPT-3 and the Russian AI designer Nikolay Ironov have been able to create content indistinguishable from human-made work.
Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.
The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.
The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.
Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to thirty tracks of music were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface; the format encodes a song’s musical parameters, such as pitch and tempo, as data rather than audio. Components of each song, like the vocal melody or rhythm guitar, were fed in one at a time.
The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
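To make that prediction step concrete, here is a toy sketch in Python—emphatically not Magenta or the Lost Tapes pipeline, and the MIDI file name is just a placeholder. It reads note pitches from a MIDI file with the pretty_midi library, counts which pitch tends to follow which, and then samples a continuation from a seed note. Magenta’s recurrent networks are far more sophisticated, but the “learn the patterns, then predict what comes next” idea is the same.

```python
# Toy illustration of "learn patterns, then predict what comes next."
# This is NOT Magenta or the Lost Tapes pipeline; it fits a first-order
# Markov model over note pitches extracted from a MIDI file.
from collections import Counter, defaultdict
import random

import pretty_midi  # pip install pretty_midi


def pitch_sequence(midi_path):
    """Return the pitch sequence of the first non-drum instrument."""
    midi = pretty_midi.PrettyMIDI(midi_path)
    track = next(inst for inst in midi.instruments if not inst.is_drum)
    return [note.pitch for note in sorted(track.notes, key=lambda n: n.start)]


def fit_transitions(pitches):
    """Count how often each pitch is followed by each other pitch."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(pitches, pitches[1:]):
        transitions[current][nxt] += 1
    return transitions


def continue_melody(seed_pitch, transitions, length=16):
    """Sample a continuation by repeatedly predicting the next pitch."""
    melody, current = [seed_pitch], seed_pitch
    for _ in range(length):
        counts = transitions.get(current)
        if not counts:  # unseen pitch: nothing learned, stop early
            break
        choices, weights = zip(*counts.items())
        current = random.choices(choices, weights=weights)[0]
        melody.append(current)
    return melody


pitches = pitch_sequence("some_track.mid")  # placeholder file name
model = fit_transitions(pitches)
print(continue_melody(pitches[0], model))
```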
To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.
Eric Hogan, a singer for a Nirvana tribute band whom the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”
Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27.
The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives were cut short by severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, as suffering does tend to create art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, the rate of suicide attempts among music industry workers is more than double that of the general population.
How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”
Image Credit: Edward Xu on Unsplash
#437905 New Deep Learning Method Helps Robots ...
One of the biggest things standing in the way of the robot revolution is robots’ inability to adapt. That may be about to change, though, thanks to a new approach that blends pre-learned skills on the fly to tackle new challenges.
Put a robot in a tightly-controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curve ball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.
The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.
Rapid advances in AI have provided a potential workaround by letting robots learn how to carry out tasks instead of relying on hand-coded instructions. A particularly promising approach is deep reinforcement learning, where the robot interacts with its environment through a process of trial-and-error and is rewarded for carrying out the correct actions. Over many repetitions it can use this feedback to learn how to accomplish the task at hand.
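As a bare-bones illustration of that trial-and-error loop, the sketch below runs tabular Q-learning on a toy one-dimensional corridor. Real robot learning uses deep neural networks and vastly richer environments, so treat this only as a picture of the reward-driven update, not of the systems discussed here.

```python
# Minimal sketch of reinforcement learning's trial-and-error loop:
# tabular Q-learning on a 1-D corridor whose goal is the rightmost cell.
import random

N_STATES, GOAL = 6, 5        # cells 0..5; reward only at cell 5
ACTIONS = [-1, +1]           # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Nudge the value estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy marches right toward the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```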
But the approach requires huge amounts of data to solve even simple tasks. And most of the things we would want a robot to do are actually made up of many smaller tasks—for instance, delivering a parcel involves learning how to pick an object up, how to walk, how to navigate, and how to pass an object to someone else, among other things.
Training all these sub-tasks simultaneously is hugely complex and far beyond the capabilities of most current AI systems, so many experiments so far have focused on narrow skills. Some have tried to train AI on multiple skills separately and then use an overarching system to flip between these expert sub-systems, but these approaches still can’t adapt to completely new challenges.
Building off this research, though, scientists have now created a new AI system that can blend together expert sub-systems, each specialized for a specific task. In a paper in Science Robotics, they explain how this allows a four-legged robot to improvise new skills and adapt to unfamiliar challenges in real time.
The technique, dubbed multi-expert learning architecture (MELA), relies on a two-stage training approach. First the researchers used a computer simulation to train two neural networks to carry out two separate tasks: trotting and recovering from a fall.
They then used the models these two networks learned as seeds for eight other neural networks specialized for more specific motor skills, like rolling over or turning left or right. The eight “expert networks” were trained simultaneously along with a “gating network,” which learns how to combine these experts to solve challenges.
Because the gating network synthesizes the expert networks rather than switching them on sequentially, MELA is able to come up with blends of different experts that allow it to tackle problems none could solve alone.
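A minimal sketch of that gating idea—a standard mixture-of-experts in PyTorch, with invented layer sizes, not the authors’ actual MELA implementation—might look like this:

```python
# Sketch of the gating idea behind MELA: a gating network outputs blending
# weights over several expert networks, and the motor command is their
# weighted combination. Sizes and architecture here are made up.
import torch
import torch.nn as nn


class BlendedExperts(nn.Module):
    def __init__(self, obs_dim=48, action_dim=12, n_experts=8, hidden=64):
        super().__init__()
        # Each expert maps the robot's proprioceptive state to joint targets.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                          nn.Linear(hidden, action_dim))
            for _ in range(n_experts)
        )
        # The gating network produces one weight per expert for each state.
        self.gate = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, n_experts))

    def forward(self, obs):
        weights = torch.softmax(self.gate(obs), dim=-1)            # (batch, n_experts)
        actions = torch.stack([e(obs) for e in self.experts], 1)   # (batch, n_experts, action_dim)
        # Blend experts continuously instead of switching between them.
        return (weights.unsqueeze(-1) * actions).sum(dim=1)


policy = BlendedExperts()
print(policy(torch.randn(1, 48)).shape)  # torch.Size([1, 12])
```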
The authors liken the approach to training people in how to play soccer. You start out by getting them to do drills on individual skills like dribbling, passing, or shooting. Once they’ve mastered those, they can then intelligently combine them to deal with more dynamic situations in a real game.
After training the algorithm in simulation, the researchers uploaded it to a four-legged robot and subjected it to a battery of tests, both indoors and outdoors. The robot was able to adapt quickly to tricky surfaces like gravel or pebbles, and could quickly recover from being repeatedly pushed over before continuing on its way.
There’s still some way to go before the approach could be adapted for real-world commercially useful robots. For a start, MELA currently isn’t able to integrate visual perception or a sense of touch; it simply relies on feedback from the robot’s joints to tell it what’s going on around it. The more tasks you ask the robot to master, the more complex and time-consuming the training will get.
Nonetheless, the new approach points towards a promising way to make multi-skilled robots become more than the sum of their parts. As much fun as it is, it seems like laughing at compilations of clumsy robots may soon be a thing of the past.
Image Credit: Yang et al., Science Robotics
#437630 How Toyota Research Envisions the Future ...
Yesterday, the Toyota Research Institute (TRI) showed off some of the projects that it’s been working on recently, including a ceiling-mounted robot that could one day help us with household chores. That system is just one example of how TRI envisions the future of robotics and artificial intelligence. As TRI CEO Gill Pratt told us, the company is focusing on robotics and AI technology for “amplifying, rather than replacing, human beings.” In other words, Toyota wants to develop robots not for convenience or to do our jobs for us, but rather to allow people to continue to live and work independently even as we age.
To better understand Toyota’s vision of robotics 15 to 20 years from now, it’s worth watching the 20-minute video below, which depicts various scenarios “where the application of robotic capabilities is enabling members of an aging society to live full and independent lives in spite of the challenges that getting older brings.” It’s a long video, but it helps explain TRI’s perspective on how robots will collaborate with humans in our daily lives over the next couple of decades.
Those are some interesting conceptual telepresence-controlled bipeds they’ve got running around in that video, right?
For more details, we sent TRI some questions on how it plans to go from concepts like the ones shown in the video to real products that can be deployed in human environments. Below are answers from TRI CEO Gill Pratt, who is also chief scientist for Toyota Motor Corp.; Steffi Paepcke, senior UX designer at TRI; and Max Bajracharya, VP of robotics at TRI.
IEEE Spectrum: TRI seems to have a more explicit focus on eventual commercialization than most of the robotics research that we cover. At what point does TRI start to think about things like reliability and cost?
Photo: TRI
Toyota is exploring robots capable of manipulating dishes in a sink and a dishwasher, performing experiments and simulations to make sure that the robots can handle a wide range of conditions.
Gill Pratt: It’s a really interesting question, because the normal way to think about this would be to say, well, both reliability and cost are product development tasks. But actually, we need to think about it at the earliest possible stage with research as well. The hardware that we use in the laboratory for doing experiments, we don’t worry about cost there, or not nearly as much as you’d worry about for a product. However, in terms of what research we do, we very much have to think about, is it possible (if the research is successful) for it to end up in a product that has a reasonable cost. Because if a customer can’t afford what we come up with, maybe it has some academic value but it’s not actually going to make a difference in their quality of life in the real world. So we think about cost very much from the beginning.
The same is true with reliability. Right now, we’re working very hard to make our control techniques robust to wide variations in the environment. For instance, in work that Russ Tedrake is doing with manipulating dishes in a sink and a dishwasher, both in physical testing and in simulation, we’re doing thousands and now millions of different experiments to make sure that we can handle the edge cases and it works over a very wide range of conditions.
A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the mis-impression that it worked all of the time. Some researchers have been very good about showing the blooper reel too, to show that some of the time, robots don’t work.
In the spirit of sharing things that didn’t work, can you tell us a bit about some of the robots that TRI has had under development that didn’t make it into the demo yesterday because they were abandoned along the way?
Steffi Paepcke: We’re really looking at how we can connect people; it can be hard to stay in touch and see our loved ones as much as we would like to. There have been a few prototypes that we’ve worked on that had to be put on the shelf, at least for the time being. We were exploring how to use light so that people could be ambiently aware of one another across distances. I was very excited about that—the internal name was “glowing orb.” For a variety of reasons, it didn’t work out, but it was really fascinating to investigate different modalities for keeping in touch.
Another prototype we worked on—we found through our research that grocery shopping is obviously an important part of life, and for a lot of older adults, it’s not necessarily the right answer to always have groceries delivered. Getting up and getting out of the house keeps you physically active, and a lot of people prefer to continue doing it themselves. But it can be challenging, especially if you’re purchasing heavy items that you need to transport. We had a prototype that assisted with grocery shopping, but when we pivoted our focus to Japan, we found that the inside of a Japanese home really needs to stay inside, and the outside needs to stay outside, so a robot that traverses both domains is probably not the right fit for a Japanese audience, and those were some really valuable lessons for us.
Photo: TRI
Toyota recently demonstrated a gantry robot that would hang from the ceiling to perform tasks like wiping surfaces and clearing clutter.
I love that TRI is exploring things like the gantry robot both in terms of near-term research and as part of its long-term vision, but is a robot like this actually worth pursuing? Or more generally, what’s the right way to compromise between making an environment robot friendly, and asking humans to make changes to their homes?
Max Bajracharya: We think a lot about the problems that we’re trying to address in a holistic way. We don’t want to just give people a robot, and assume that they’re not going to change anything about their lifestyle. We have a lot of evidence from people who use automated vacuum cleaners that people will adapt to the tools you give them, and they’ll change their lifestyle. So we want to think about what is that trade between changing the environment, and giving people robotic assistance and tools.
We certainly think that there are ways to make the gantry system plausible. The one you saw today is obviously a prototype and does require significant infrastructure. If we’re going to retrofit a home, that isn’t going to be the way to do it. But we still feel like we’re very much in the prototype phase, where we’re trying to understand whether this is worth it to be able to bypass navigation challenges, and coming up with the pros and cons of the gantry system. We’re evaluating whether we think this is the right approach to solving the problem.
To what extent do you think humans should be either directly or indirectly in the loop with home and service robots?
Bajracharya: Our goal is to amplify people, so achieving this is going to require robots to be in a loop with people in some form. One thing we have learned is that using people in a slow loop with robots, such as teaching them or helping them when they make mistakes, gives a robot an important advantage over one that has to do everything perfectly 100 percent of the time. In unstructured human environments, robots are going to encounter corner cases, and are going to need to learn to adapt. People will likely play an important role in helping the robots learn.
#437265 This Russian Firm’s Star Designer Is ...
Imagine discovering a new artist or designer—whether visual art, fashion, music, or even writing—and becoming a big fan of her work. You follow her on social media, eagerly anticipate new releases, and chat about her talent with your friends. It’s not long before you want to know more about this creative, inspiring person, so you start doing some research. It’s strange, but there doesn’t seem to be any information about the artist’s past online; you can’t find out where she went to school or who her mentors were.
After some more digging, you find out something totally unexpected: your beloved artist is actually not a person at all—she’s an AI.
Would you be amused? Annoyed? Baffled? Impressed? Probably some combination of all these. If you wanted to ask someone who’s had this experience, you could talk to clients of the biggest multidisciplinary design company in Russia, Art.Lebedev Studio (I know, the period confused me at first too). The studio passed off an AI designer as human for more than a year, and no one caught on.
They gave the AI a human-sounding name—Nikolay Ironov—and it participated in more than 20 different projects that included designing brand logos and building brand identities. According to the studio’s website, several of the logos the AI made attracted “considerable public interest, media attention, and discussion in online communities” due to their unique style.
So how did an AI learn to create such buzz-worthy designs? It was trained on hand-drawn vector images, each associated with one or more themes. To start a new design, someone enters a few words describing the client, such as what kind of goods or services they offer. The AI uses those words to find associated images and generate various starter designs, which then go through another series of algorithms that “touch them up.” A human designer then selects the best options to present to the client.
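That description maps onto a simple retrieve-then-refine pipeline. The sketch below is purely illustrative: the asset tags, helper functions, and “touch-up” step are hypothetical stand-ins, since the studio hasn’t published how Nikolay Ironov actually works.

```python
# Purely illustrative retrieve-then-refine flow; the asset library, tags,
# and "touch-up" step are hypothetical placeholders, not Art.Lebedev's system.
import random

# Hand-drawn vector assets, each tagged with one or more themes.
ASSET_LIBRARY = {
    "coffee_bean.svg": {"coffee", "food", "morning"},
    "gear.svg": {"engineering", "industry"},
    "leaf.svg": {"organic", "food", "eco"},
    "circuit.svg": {"technology", "engineering"},
}


def retrieve_assets(brief_keywords):
    """Find assets whose tags overlap with the client's brief."""
    keywords = set(brief_keywords)
    return [name for name, tags in ASSET_LIBRARY.items() if tags & keywords]


def generate_starter_designs(assets, n_designs=5):
    """Combine retrieved assets into rough starter compositions."""
    return [{"assets": random.sample(assets, k=min(2, len(assets))),
             "palette": random.choice(["mono", "duotone", "bright"])}
            for _ in range(n_designs)]


def touch_up(design):
    """Stand-in for the post-processing algorithms that refine each draft."""
    design["refined"] = True
    return design


brief = ["coffee", "eco"]  # a few words describing the client
drafts = [touch_up(d) for d in generate_starter_designs(retrieve_assets(brief))]
print(drafts)  # a human designer picks the best options to show the client
```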
“These systems combined together provide users with the experience of instantly converting a client’s text brief into a corporate identity design pack archive. Within seconds,” said Sergey Kulinkovich, the studio’s art director. He added that clients liked Nikolay Ironov’s work before finding out he was an AI (and liked the media attention their brands got after Ironov’s identity was revealed even more).
Ironov joins a growing group of AI “artists” that are starting to raise questions about the nature of art and creativity. Where do creative ideas come from? What makes a work of art truly great? And when more than one person is involved in making art, who should own the copyright?
Art.Lebedev is far from the first design studio to employ artificial intelligence; Mailchimp is using AI to let businesses design multi-channel marketing campaigns without human designers, and Adobe is marketing its new Sensei product as an AI design assistant.
While art made by algorithms can be unique and impressive, there’s one caveat that’s important to keep in mind when we worry about human creativity being rendered obsolete. Here’s the thing: AIs still depend on people not only to program them, but to feed them a set of training data on which their intelligence and output are based. Depending on the size and nature of an AI’s input data, its output will look quite different from that of a similar system, and a big part of the difference will be due to the people who created and trained the AIs.
Admittedly, Nikolay Ironov does outshine his human counterparts in a handful of ways; as the studio’s website points out, he can handle real commercial tasks effectively, he doesn’t sleep, get sick, or have “crippling creative blocks,” and he can complete tasks in a matter of seconds.
Given these superhuman capabilities, then, why even keep human designers on staff? As detailed above, it will be a while before creative firms really need to consider this question on a large scale; for now, it still takes a hard-working creative human to make a fast-producing creative AI.
Image Credit: Art.Lebedev