Tag Archives: setting

#435775 Jaco Is a Low-Power Robot Arm That Hooks ...

We usually think of robots as taking the place of humans in various tasks, but robots of all kinds can also enhance human capabilities. This may be especially true for people with disabilities. And while the Cybathlon competition showed what's possible when cutting-edge research robotics is paired with expert humans, that competition isn't necessarily reflective of the kind of robotics available to most people today.

Kinova Robotics' Jaco arm is an assistive robotic arm designed to be mounted on an electric wheelchair. With six degrees of freedom plus a three-fingered gripper, the lightweight carbon fiber arm is frequently used in research because it's rugged and versatile. But from the start, Kinova created it to add autonomy to the lives of people with mobility constraints.

Earlier this year, Kinova shared the story of Mary Nelson, an 11-year-old girl with spinal muscular atrophy, who uses her Jaco arm to show her horse in competition. Spinal muscular atrophy is a neuromuscular disorder that impairs voluntary muscle movement, including muscles that help with respiration, and Mary depends on a power chair for mobility.

We wanted to learn more about how Kinova designs its Jaco arm, and what that means for folks like Mary, so we spoke with both Kinova and Mary's parents to find out how much of a difference a robot arm can make.

IEEE Spectrum: How did Mary interact with the world before having her arm, and what was involved in the decision to try a robot arm in general? And why then Kinova's arm specifically?

Ryan Nelson: Mary interacts with the world much like you and I do, she just uses different tools to do so. For example, she is 100 percent independent using her computer, iPad, and phone, and she prefers to use a mouse. However, she cannot move a standard mouse, so she connects her wheelchair to each device with Bluetooth to move the mouse pointer/cursor using her wheelchair joystick.

For years, we had a Manfrotto magic arm and super clamp attached to her wheelchair and she used that much like the robotic arm. We could put a baseball bat, paint brush, toys, etc. in the super clamp so that Mary could hold the object and interact as physically able children do. Mary has always wanted to be more independent, so we knew the robotic arm was something she must try. We had seen videos of the Kinova arm on YouTube and on their website, so we reached out to them to get a trial.

Can you tell us about the Jaco arm, and how the process of designing an assistive robot arm is different from the process of designing a conventional robot arm?

Nathaniel Swenson, Director of U.S. Operations — Assistive Technologies at Kinova: Jaco is our flagship robotic arm. Inspired by and named for our CEO's uncle, Jacques “Jaco” Forest, it was designed as assistive technology with power wheelchair users in mind.

The primary differences between Jaco and our other robots, such as the new Gen3, which was designed to meet the needs of academic and industry research teams, are speed and power consumption. Robots such as the Gen3 can move faster and draw slightly more power because they aren't limited by the battery size of power wheelchairs. Depending on the use case, they might not interact directly with a human being in the research setting, so they can safely move more quickly. Jaco, by contrast, is designed to move at safe speeds, make direct contact with the end user, and draw very little power directly from the wheelchair.

The most important consideration in the design process of an assistive robot is the safety of the end user. Jaco users operate their robots through their existing drive controls to assist them in daily activities such as eating, drinking, and opening doors, and they don't have to worry about the robot draining their chair's batteries throughout the day. The elegant design that results from meeting the needs of our power chair users has benefited subsequent products, such as the Gen3, as well: Kinova's robots are lightweight, extremely efficient in their power consumption, and safe for direct human-robot interaction. This is not true of conventional industrial robots.

What was the learning process like for Mary? Does she feel like she's mastered the arm, or is it a continuous learning process?

Ryan Nelson: The learning process was super quick for Mary. However, she amazes us every day with the new things that she can do with the arm. Literally within minutes of installing the arm on her chair, Mary had it figured out and was shaking hands with the Kinova rep. The control of the arm is super intuitive and the Kinova reps say that SMA (Spinal Muscular Atrophy) children are perfect users because they are so smart—they pick it up right away. Mary has learned to do many fine motor tasks with the arm, from picking up small objects like a pencil or a ruler, to adjusting her glasses on her face, to doing science experiments.

Photo: The Nelson Family

Mary uses a headset microphone to amplify her voice, and she will use the arm and finger to adjust the microphone in front of her mouth after she is done eating (also a task she mastered quickly with the arm). Additionally, Mary will use the arm to reach down and adjust her feet or legs by grabbing them and moving them to a more comfortable position. These are all things she never really asked us to do for her; they were things she needed, and she just did them on her own, with the help of the arm.

What is the most common feedback that you get from new users of the arm? How about from experienced users who have been using the arm for a while?

Nathaniel Swenson: New users always tell us how excited they are to see what they can accomplish with their new Jaco. From day one, they are able to do things that they have longed to do without assistance from a caregiver: take a drink of water or coffee, scratch an itch, push the button to open an “accessible” door or elevator, or even feed their baby with a bottle.

The most common feedback I hear from experienced users is that Jaco has changed their life. Our experienced users like Mary are rock stars: everywhere they go, people get excited to see what they'll do next. The difference between a new user and an experienced user could be as little as two weeks. People who operate power wheelchairs every day are already expert drivers and we just add a new “gear” to their chair: robot mode. It's fun to see how quickly new users master the intuitive Jaco control modes.

What changes would you like to see in the next generation of Jaco arm?

Ryan Nelson: Titanium fingers! Make it lift heavier objects, hold heavier items like a baseball bat, machine gun, flame thrower, etc., and Mary literally said this last night: “I wish the arm moved fast enough to play the piano.”

Nathaniel Swenson: I love the idea of titanium fingers! Jaco's fingers are made from a flexible polymer and designed to avoid harm. This allows the fingers to bend or dislocate, rather than break, but it also means they are not as durable as a material like titanium. Increased payload (the ability to manipulate heavier objects) requires increased power consumption. We've struck a careful balance between providing enough strength to accomplish most medically necessary Activities of Daily Living and making efficient use of the power chair's batteries.

We take Isaac Asimov's Laws of Robotics pretty seriously. When we start to combine machine guns, flame throwers, and artificial intelligence with robots, I get very nervous!

I wish the arm moved fast enough to play the piano, too! I am also a musician and I share Mary's dream of an assistive robot that would enable her to make music. In the meantime, while we work on that, please enjoy this beautiful violin piece by Manami Ito and her one-of-a-kind violin prosthesis:

To what extent could more autonomy for the arm be helpful for users? What would be involved in implementing that?

Nathaniel Swenson: Artificial intelligence, machine learning, and deep learning will introduce greater autonomy in future iterations of assistive robots. This will enable them to perform more complex tasks that aren't currently possible, and enable them to accomplish routine tasks more quickly and with less input than the current manual control requires.

For assistive robots, implementing greater autonomy involves a focus on end-user safety and improvements in the robot's awareness of its environment. Autonomous robots that work in close proximity with humans need vision: they must be able to see to avoid collisions, and haptic feedback tells the robot how much force it is exerting on objects. All of these technologies exist, but the largest obstacle to bringing them to the assistive technology market is proving to the health insurance companies who will fund them that they are both safe and medically necessary.
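Swenson doesn't spell out the control software, but the safety constraints he describes (low speeds, gentle contact, limited power) suggest roughly this kind of force-aware grasp loop. Here is a minimal, purely illustrative Python sketch; the class names, thresholds, and sensor calls are assumptions, not Kinova's actual API:

```python
MAX_GRIP_FORCE_N = 5.0   # gentle limit: the arm touches people and fragile objects
MAX_CLOSE_DEG = 60.0     # finger travel limit in this toy model
STEP_DEG = 2.0           # small increment per control tick

class Finger:
    """Stand-in for one actuated finger with a haptic (force) sensor."""
    def __init__(self):
        self.angle_deg = 0.0

    def read_force_newtons(self) -> float:
        # A real implementation would query the force sensor here.
        return 0.0

    def close_by(self, degrees: float) -> None:
        self.angle_deg = min(self.angle_deg + degrees, MAX_CLOSE_DEG)

def gentle_grasp(fingers):
    """Close each finger until it feels contact force or hits its travel limit."""
    active = set(range(len(fingers)))
    while active:
        for i in list(active):
            finger = fingers[i]
            if (finger.read_force_newtons() >= MAX_GRIP_FORCE_N
                    or finger.angle_deg >= MAX_CLOSE_DEG):
                active.remove(i)  # contact (or limit) reached: stop this finger
            else:
                finger.close_by(STEP_DEG)

gentle_grasp([Finger() for _ in range(3)])  # Jaco's gripper has three fingers
```

The force cap encodes the same trade-off Swenson describes: enough strength for daily activities, never enough to hurt the user or drain the chair.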


#435742 This ‘Useless’ Social Robot ...

The recent high-profile failures of some home social robots (and the companies behind them) have made it even more challenging than before to develop robots in that space. And it was challenging enough to begin with—making a robot that can autonomously interact with random humans in their homes over a long period of time for a price that people can afford is extraordinarily difficult. However, the massive amount of initial interest in robots like Jibo, Kuri, Vector, and Buddy proves that people do want these things, or at least think they do, and while that’s the case, there’s incentive for other companies to give social home robots a try.

One of those companies is Zoetic, founded in 2017 by Mita Yun and Jitu Das, both ex-Googlers. Their robot, Kiki, is more or less exactly what you’d expect from a social home robot: It’s cute, white, roundish, has big eyes, promises that it will be your “robot sidekick,” and is not cheap: It’s on Kickstarter for $800. Kiki is among what appears to be a sort of tentative second wave of social home robots, where designers have (presumably) had a chance to take everything that they learned from the social home robot pioneers and use it to make things better this time around.

Kiki’s Kickstarter video is, again, more or less exactly what you’d expect from a social home robot crowdfunding campaign:

We won’t get into all of the details on Kiki in this article (the Kickstarter page has tons of information), but a few distinguishing features:

Each Kiki will develop its own personality over time through its daily interactions with its owner, other people, and other Kikis.
Interacting with Kiki is more abstract than with most robots—it can understand some specific words and phrases, and will occasionally say a few words itself, but otherwise it’s mostly listening to your tone of voice and responding with sounds rather than speech.
Kiki doesn’t move on its own, but it can operate for up to two hours away from its charging dock.
Depending on how you treat Kiki, it can get depressed or neurotic. It also needs to be fed, which you can do by drawing different kinds of food in the app.
Everything Kiki does runs on-board the robot. It has Wi-Fi connectivity for updates, but doesn’t rely on the cloud for anything in real-time, meaning that your data stays on the robot and that the robot will continue to function even if its remote service shuts down.

It’s hard to say whether features like these are unique enough to help Kiki be successful where other social home robots haven’t been, so we spoke with Zoetic co-founder Mita Yun and asked her why she believes that Kiki is going to be the social home robot that makes it.

IEEE Spectrum: What’s your background?

Mita Yun: I was an only child growing up, and so I always wanted something like Doraemon or Totoro. Something that when you come home it’s there to greet you, not just because it’s programmed to do that but because it’s actually actively happy to see you, and only you. I was so interested in this that I went to study robotics at CMU and then after I graduated I joined Google and worked there for five years. I tended to go for the more risky and more fun projects, but they always got cancelled—the first project I joined was called Android at Home, and then I joined Google Glass, and then I joined a team called Robots for Kids. That project was building educational robots, and then I just realized that when we’re adding technology to something, to a product, we’re actually taking the life away somehow, and the kids were more connected with stuffed animals compared to the educational robots we were building. That project was also cancelled, and in 2017, I left with a coworker of mine (Jitu Das) to bring this dream into reality. And now we’re building Kiki.

“Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless”
—Mita Yun, Zoetic

You started working on Kiki in 2017, when things were already getting challenging for Jibo—why did you decide to start developing a social home robot at that point?

I thought Jibo was great. It had a special magical way of moving, and it was such a new idea that you could have this robot with embodiment and it can actually be your assistant. The problem with Jibo, in my opinion, was that it took too long to fulfill the orders. It took them three to four years to actually manufacture, because it was a very complex piece of hardware, and then during that period of time Alexa and Google Home came out, and they started selling these voice systems for $30 and then you have Jibo for $800. Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless.

Can you elaborate on “completely useless?”

I feel like people are initially connected with robots because they remind them of a character, and it’s the closest we can get to a character other than an organic one, like an animal. When we have a robot roaming around in a mall, even if it looks really ugly, even if it doesn’t have eyes, people still take selfies with it. Why? Because they think it’s a character. And humans are just hardwired to love characters and love stories. With Kiki, we just wanted to build a character that’s alive; we don’t want to have a character do anything super useful.

I understand why other robotics companies are adding Alexa integration to their robots, and I think that’s great. But the dream I had, and the understanding I have about robotics technology, is that for a consumer robot especially, it is very very difficult for the robot to justify its price through usefulness. And then there’s also research showing that the more useless something is, the easier it is to have an emotional connection, so that’s why we want to keep Kiki very useless.

What kind of character are you creating with Kiki?

The whole design principle around Kiki is we want to make it a very vulnerable character. In terms of its status at home, it’s not going to have higher or equal status to the owner, but slightly lower status than the human, and it’s vulnerable and needs you to take care of it in order to grow up into a good personality robot.

We don’t let Kiki speak full English sentences, because whenever it does that, people are going to think it’s at least as intelligent as a baby, which is impossible for robots at this point. And we also don’t let it move around, because when you have it move around, people are going to think “I’m going to call Kiki’s name, and then Kiki will come to me.” But that is actually very difficult to build. And then also we don’t have any voice integration, so it doesn’t tell you about the stock market price and so on.

Photo: Zoetic

Kiki is designed to be “vulnerable,” and it needs you to take care of it so it can “grow up into a good personality robot,” according to its creators.

That sounds similar to what Mayfield did with Kuri, emphasizing an emotional connection rather than specific functionality.

It is very similar, but one of the key differences from Kuri, I think, is that Kuri started with a Kobuki base, and then it’s wrapped into a cute shell, and they added sounds. So Kuri started with utility in mind—navigation is an important part of Kuri, so they started with that challenge. For Kiki, we started with the eyes. The entire thing started with the character itself.

How will you be able to convince your customers to spend $800 on a robot that you’ve described as “useless” in some ways?

Because it’s useless, it’s actually easier to convince people, because it provides you with an emotional connection. I think Kiki is not a utility-driven product, so the adoption cycle is different. For a functional product, it’s very easy to pick up, because you can justify it by saying “I’m going to pay this much and then my life can become this much more efficient.” But it’s also very easy to be replaced and forgotten. For an emotion-driven product, it’s slower to pick up, but once people actually pick it up, they’re going to be hooked—they become connected with it, and they’re willing to invest more into taking care of the robot so it will grow up to be smarter.

Maintaining value over time has been another challenge for social home robots. How will you make sure that people don’t get bored with Kiki after a few weeks?

Of course Kiki has limits in what it can do. We can combine the eyes, the facial expression, the motors, and lights and sounds, but is it going to be constantly entertaining? So we think of it this way: imagine a human were actually puppeteering Kiki—could Kiki stay interesting if a human were puppeteering it and interacting with the owner? So I think what makes a robot interesting is not just its physical expressions, but the layer in between: how the robot conveys its intentions and emotions.

For example, if you come into the room and then Kiki decides it will turn the other direction, ignore you, and then you feel like, huh, why did the robot do that to me? Did I do something wrong? And then maybe you will come up to it and you will try to figure out why it did that. So, even though Kiki can only express in four different dimensions, it can still make things very interesting, and then when its strategies change, it makes it feel like a new experience.

There’s also an explore and exploit process going on. Kiki wants to make you smile, and it will try different things. It could try to chase its tail, and if you smile, Kiki learns that this works and will exploit it. But maybe after doing it three times, you no longer find it funny, because you’re bored of it, and then Kiki will observe your reactions and be motivated to explore a new strategy.
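Yun is describing a classic explore/exploit strategy from reinforcement learning. As a purely illustrative sketch (Kiki's implementation isn't public; the behavior names and smile-based reward below are invented), an epsilon-greedy selector with decaying value estimates behaves just the way she describes: a trick that stops earning smiles loses value, and exploration resumes.

```python
import random

BEHAVIORS = ["chase_tail", "blink_lights", "chirp", "wiggle_ears"]

class BehaviorSelector:
    def __init__(self, epsilon=0.2, decay=0.9):
        self.epsilon = epsilon                      # how often to try something random
        self.decay = decay                          # older rewards count for less
        self.value = {b: 0.0 for b in BEHAVIORS}    # current estimate per behavior

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(BEHAVIORS)         # explore
        return max(self.value, key=self.value.get)  # exploit the current favorite

    def update(self, behavior: str, person_smiled: bool) -> None:
        reward = 1.0 if person_smiled else -0.5
        # Decay pulls stale estimates down, so a joke that has stopped
        # landing loses value and the robot goes back to exploring.
        for b in BEHAVIORS:
            self.value[b] *= self.decay
        self.value[behavior] += reward

selector = BehaviorSelector()
action = selector.choose()
selector.update(action, person_smiled=True)
```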

Photo: Zoetic

Kiki’s creators are hoping that, with an emotionally engaging robot, it will be easier for people to get attached to it and willing to spend time taking care of it.

A particular risk with crowdfunding a robot like this is setting expectations unreasonably high. The emphasis on personality and emotional engagement with Kiki seems like it may be very difficult for the robot to live up to in practice.

I think we invested more than most robotics companies into really building out Kiki’s personality, because that is the single most important thing to us. For Jibo a lot of the focus was in the assistant, and for Kuri, it’s more in the movement. For Kiki, it’s very much in the personality.

I feel like when most people talk about personality, they’re mainly talking about expression. With Kiki, it’s not just in the expression itself, not just in the voice or the eyes or the output layer, it’s in the layer in between—when Kiki receives input, how will it make decisions about what to do? We actually don’t think the personality of Kiki is categorizable, which is why I feel like Kiki has a deeper implementation of how personalities should work. And you’re right, Kiki doesn’t really understand why you’re feeling a certain way, it just reads your facial expressions. It’s maybe not your best friend, but maybe closer to your little guinea pig robot.

Photo: Zoetic

The team behind Kiki paid particular attention to its eyes, and designed the robot to always face the person that it is interacting with.

Is that where you’d put Kiki on the scale of human to pet?

Kiki is definitely not human, we want to keep it very far away from human. And it’s also not a dog or cat. When we were designing Kiki, we took inspiration from mammals because humans are deeply connected to mammals since we’re mammals ourselves. And specifically we’re connected to predator animals. With prey animals, their eyes are usually on the sides of their heads, because they need to see different angles. A predator animal needs to hunt, they need to focus. Cats and dogs are predator animals. So with Kiki, that’s why we made sure the eyes are on one side of the face and the head can actuate independently from the body and the body can turn so it’s always facing the person that it’s paying attention to.

I feel like Kiki probably does more than a plant. It does more than a fish, because a fish doesn’t look you in the eyes. It’s not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.

What have you found so far when running user studies with Kiki?

When we were first designing Kiki we went through a whole series of prototypes. One of the earlier prototypes of Kiki looked like a CRT, like a very old monitor, and when we were testing that with people they didn’t even want to touch it. Kiki’s design inspiration actually came from an airplane, with a very angular, futuristic look, but based on user feedback we made it more round and more friendly to the touch. The lights were another feature request from the users, which adds another layer of expressivity to Kiki, and they wanted to see multiple Kikis working together with different personalities. Users also wanted different looks for Kiki, to make it look like a deer or a unicorn, for example, and we actually did take that into consideration because it doesn’t look like any particular mammal. In the future, you’ll be able to have different ears to make it look like completely different animals.

There has been a lot of user feedback that we didn’t implement—I believe we should observe the users’ reactions and feedback but not listen to their advice. The users shouldn’t be our product designers, because if you test Kiki with 10 users, eight of them will tell you they want Alexa in it. But we’re never going to add Alexa integration to Kiki because that’s not what it’s meant to do.

While it’s far too early to tell whether Kiki will be a long-term success, the Kickstarter campaign is currently over 95 percent funded with 8 days to go, and 34 robots are still available for a May 2020 delivery.

[ Kickstarter ]


#435669 Watch World Champion Soccer Robots Take ...

RoboCup 2019 took place earlier this month down in Sydney, Australia. While there are many different events including RoboCup@Home, RoboCup Rescue, and a bunch of different soccer leagues, one of the most compelling events is middle-size league (MSL), where mobile robots each about the size of a fire hydrant play soccer using a regular size FIFA soccer ball. The robots are fully autonomous, making their own decisions in real time about when to dribble, pass, and shoot.

The long-term goal of RoboCup is this:

By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.

While the robots are certainly not there yet, they're definitely getting closer.

Even if you’re not a particular fan of soccer, it’s impressive to watch the robots coordinate with each other, setting up multiple passes and changing tactics on the fly in response to the movements of the other team. And the ability of these robots to shoot accurately is world-class (like, human world-class), as they’re seemingly able to put the ball in whatever corner of the goal they choose with split-second timing.

The final match was between Tech United from Eindhoven University of Technology in the Netherlands (whose robots are called TURTLE), and Team Water from Beijing Information Science & Technology University. Without spoiling it, I can tell you that the game was tied within just the last few seconds, meaning that it had to go to overtime. You can watch the entire match on YouTube, or a 5-minute commentated highlight video here:

It’s become a bit of a tradition to have the winning MSL robots play a team of what looks to be inexperienced adult humans wearing long pants and dress shoes.

The fact that the robots managed to score even once is pretty awesome, and it also looks like the robots are playing very conservatively (more so than the humans) so as not to accidentally injure any of us fragile meatbags with our spindly little legs. I get that RoboCup wants its first team of robots that can beat a human World Cup winning team to be humanoids, but at the moment, the MSL robots are where all the skill is.

To get calibrated on the state of the art for humanoid soccer robots, here’s the adult-size final, Team NimbRo from the University of Bonn in Germany versus Team Sweaty from Offenburg University in Germany:

Yup, still a lot of falling over.

There’s lots more RoboCup on YouTube: Some channels to find more matches include the official RoboCup 2019 channel, and Tech United Eindhoven’s channel, which has both live English commentary and some highlight videos.

[ RoboCup 2019 ]


#435656 Will AI Be Fashion Forward—or a ...

The narrative that accompanies most stories about artificial intelligence these days is how machines will disrupt any number of industries, from healthcare to transportation. It makes sense. After all, technology already drives many of the innovations in these sectors of the economy.

But sneakers and the red carpet? The definitively low-tech fashion industry would seem to be one of the last to turn over its creative direction to data scientists and machine learning algorithms.

However, big brands, e-commerce giants, and numerous startups are betting that AI can ingest data and spit out Chanel. Maybe it’s not surprising, given that fashion is partly about buzz and trends—and there’s nothing more buzzy and trendy in the world of tech today than AI.

In its annual survey of the $3 trillion fashion industry, consulting firm McKinsey predicted that while AI didn’t hit a “critical mass” in 2018, it would increasingly influence the business of everything from design to manufacturing.

“Fashion as an industry really has been so slow to understand its potential roles interwoven with technology. And, to be perfectly honest, the technology doesn’t take fashion seriously.” This comment comes from Zowie Broach, head of fashion at London’s Royal College of Art, who as a self-described “old fashioned” designer has embraced the disruptive nature of technology—with some caveats.

Co-founder in the late 1990s of the avant-garde fashion label Boudicca, Broach has always seen tech as a tool for designers, even setting up a website for the company circa 1998, way before an online presence became, well, fashionable.

Broach told Singularity Hub that while she is generally optimistic about the future of technology in fashion—the designer has avidly been consuming old sci-fi novels over the last few years—there are still a lot of difficult questions to answer about the interface of algorithms, art, and apparel.

For instance, can AI do what the great designers of the past have done? Fashion was “about designing, it was about a narrative, it was about meaning, it was about expression,” according to Broach.

AI that designs products based on data gleaned from human behavior can potentially tap into the Pavlovian response in consumers in order to make money, Broach noted. But is that channeling creativity, or just digitally dabbling in basic human brain chemistry?

She is concerned about people retaining control of the process, whether we’re talking about their data or their designs. But being empowered with the insights machines could provide into, for example, the geographical nuances of fashion between Dubai, Moscow, and Toronto is thrilling.

“What is it that we want the future to be from a fashion, an identity, and design perspective?” she asked.

Off on the Right Foot
Silicon Valley and some of the biggest brands in the industry offer a few answers about where AI and fashion are headed (though not at the sort of depths that address Broach’s broader questions of aesthetics and ethics).

Take what is arguably the biggest brand in fashion, at least by market cap but probably not by the measure of appearances on Oscar night: Nike. The $100 billion shoe company just gobbled up an AI startup called Celect to bolster its data analytics and optimize its inventory. In other words, Nike hopes it will be able to figure out what’s hot and what’s not in a particular location to stock its stores more efficiently.

The company is going even further with Nike Fit, a foot-scanning platform using a smartphone camera that applies AI techniques from fields like computer vision and machine learning to find the best fit for each person’s foot. The algorithms then identify and recommend the appropriately sized and shaped shoe in different styles.

No doubt the next step will be to 3D print personalized and on-demand sneakers at any store.
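Strip away the computer vision, and the final step of a pipeline like Nike Fit's is a matching problem: compare the scanned foot's measurements against a size table and return the closest fit. The sketch below is a toy illustration only; the measurements, size table, and weighting are invented, and Nike's actual models are proprietary.

```python
SIZE_TABLE = {  # US size -> (interior length mm, interior width mm); invented values
    8.0: (260.0, 98.0),
    8.5: (264.0, 99.0),
    9.0: (268.0, 100.0),
    9.5: (272.0, 101.0),
}

def recommend_size(foot_length_mm: float, foot_width_mm: float) -> float:
    """Return the size whose interior dimensions best match the scanned foot."""
    def misfit(size: float) -> float:
        length, width = SIZE_TABLE[size]
        # Weight length mismatches more heavily than width mismatches
        # (an assumption about comfort, not Nike data).
        return 2.0 * abs(length - foot_length_mm) + abs(width - foot_width_mm)
    return min(SIZE_TABLE, key=misfit)

print(recommend_size(265.0, 100.5))  # -> 8.5
```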

San Francisco-based startup ThirdLove is trying to bring a similar approach to bra sizes. Its 20-member data team, Fortune reported, has developed the Fit Finder quiz that uses machine learning algorithms to help pick just the right garment for every body type.

Data scientists are also a big part of the team at Stitch Fix, a former San Francisco startup that went public in 2017 and today sports a market cap of more than $2 billion. The online “personal styling” company uses hundreds of algorithms to not only make recommendations to customers, but to help design new styles and even manage the subscription-based supply chain.

Future of Fashion
E-commerce giant Amazon has thrown its own considerable resources into developing AI applications for retail fashion—with mixed results.

One notable attempt involved a “styling assistant” that came with the company’s Echo Look camera and helped people catalog and manage their wardrobes, even helping pick out each day’s attire. The company more recently revisited the direct consumer side of AI with an app called StyleSnap, which matches clothes and accessories uploaded to the site with the retailer’s vast inventory and recommends similar styles.

Behind the curtains, Amazon is going even further. A team of researchers in Israel has developed algorithms that can deduce whether a particular look is stylish based on a few labeled images. Another group at the company’s San Francisco research center was working on tech that could generate new designs of items based on images of a particular style the algorithms were trained on.

“I will say that the accumulation of many new technologies across the industry could manifest in a highly specialized style assistant, far better than the examples we’ve seen today. However, the most likely thing is that the least sexy of the machine learning work will become the most impactful, and the public may never hear about it.”

That prediction is from an online interview with Leanne Luce, a fashion technology blogger and product manager at Google who recently wrote a book called, succinctly enough, Artificial Intelligence and Fashion.

Data Meets Design
Academics are also sticking their beakers into AI and fashion. Researchers at the University of California, San Diego, and Adobe Research have previously demonstrated that neural networks, a type of AI designed to mimic some aspects of the human brain, can be trained to generate (i.e., design) new product images to match a buyer’s preference, much like the team at Amazon.

Meanwhile, scientists at Hong Kong Polytechnic University are working with China’s answer to Amazon, Alibaba, on developing a FashionAI Dataset to help machines better understand fashion. The effort will focus on how algorithms approach certain building blocks of design, what are called “key points” such as neckline and waistline, and “fashion attributes” like collar types and skirt styles.
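The article doesn't reproduce the dataset's schema, but an annotated record built around those two building blocks (key points and attributes) might look something like the sketch below; the field names and coordinates are invented for illustration.

```python
# One hypothetical annotated garment record, illustrating the "key points"
# and "fashion attributes" structure described above. Not the real schema.
annotation = {
    "image": "dress_00142.jpg",
    "key_points": {                    # (x, y) pixel landmarks on the garment
        "neckline_left": (212, 148),
        "neckline_right": (298, 151),
        "waistline_left": (188, 402),
        "waistline_right": (320, 399),
    },
    "attributes": {                    # categorical design attributes
        "collar_type": "round",
        "skirt_style": "a-line",
    },
}

def has_landmark(record: dict, name: str) -> bool:
    """Check whether a given key point was annotated for this image."""
    return name in record["key_points"]

print(has_landmark(annotation, "waistline_left"))  # True
```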

The man largely behind the university’s research team is Calvin Wong, a professor and associate head of Hong Kong Polytechnic University’s Institute of Textiles and Clothing. His group has also developed an “intelligent fabric defect detection system” called WiseEye for quality control, reducing the chance of producing substandard fabric by 90 percent.

Wong and company also recently inked an agreement with RCA to establish an AI-powered design laboratory, though the details of that venture have yet to be worked out, according to Broach.

One hope is that such collaborations will not just get at the technological challenges of using machines in creative endeavors like fashion, but will also address the more personal relationships humans have with their machines.

“I think who we are, and how we use AI in fashion, as our identity, is not a superficial skin. It’s very, very important for how we define our future,” Broach said.

Image Credit: Inspirationfeed / Unsplash


#435619 Video Friday: Watch This Robot Dog ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, CA, USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Team PLUTO (University of Pennsylvania, Ghost Robotics, and Exyn Technologies) put together this video giving us a robot’s-eye-view (or whatever they happen to be using for eyes) of the DARPA Subterranean Challenge tunnel circuits.

[ PLUTO ]

Zhifeng Huang has been improving his jet-stepping humanoid robot, which features new hardware and the ability to take larger and more complex steps.

This video reports the latest progress of an ongoing project utilizing a ducted-fan propulsion system to improve a humanoid robot’s ability to step over large ditches. The landing point of the robot’s swing foot can be not only forward but also to the side. While keeping quasi-static balance, the robot was able to step over a ditch 450 mm wide (up to 97 percent of the robot’s leg length) in 3D stepping.

[ Paper ]

Thanks Zhifeng!

These underactuated hands from Matei Ciocarlie’s lab at Columbia are magically able to reconfigure themselves to grasp different object types with just one or two motors.

[ Paper ] via [ ROAM Lab ]

This is one reason we should pursue not “autonomous cars” but “fully autonomous cars” that never require humans to take over. We can’t be trusted.

During our early days as the Google self-driving car project, we invited some employees to test our vehicles on their commutes and weekend trips. What we were testing at the time was similar to the highway driver assist features that are now available on cars today, where the car takes over the boring parts of the driving, but if something outside its ability occurs, the driver has to take over immediately.

What we saw was that our testers put too much trust in that technology. They were doing things like texting, applying makeup, and even falling asleep that made it clear they would not be ready to take over driving if the vehicle asked them to. This is why we believe that nothing short of full autonomy will do.

[ Waymo ]

Buddy is a DIY and fetchingly minimalist social robot (of sorts) that will be coming to Kickstarter this month.

We have created a new Arduino kit. His name is Buddy. He is a DIY social robot to serve as a replacement for Jibo, Cozmo, or any of the other bots that are no longer available. Fully 3D printed and supported, he adds much more to our series of Arduino STEM robotics kits.

Buddy is able to look around and map his surroundings and react to changes within them. He can be surprised and he will always have a unique reaction to changes. The kit can be built very easily in less than an hour. It is even robust enough to take the abuse that kids can give it in a classroom.

[ Littlebots ]

The android Mindar, based on the Buddhist deity of mercy, preaches sermons at Kodaiji temple in Kyoto, and its human colleagues predict that with artificial intelligence it could one day acquire unlimited wisdom. Developed at a cost of almost $1 million (¥106 million) in a joint project between the Zen temple and robotics professor Hiroshi Ishiguro, the robot teaches about compassion and the dangers of desire, anger and ego.

[ Japan Times ]

I’m not sure whether it’s the sound or what, but this thing scares me for some reason.

[ BIRL ]

This gripper uses magnets as a sort of adjustable spring for dynamic stiffness control, which seems pretty clever.

[ Buffalo ]

What a package of medicine sees while being flown by drone from a hospital to a remote clinic in the Dominican Republic. The drone flew 11 km horizontally and 800 meters vertically, and I can’t even imagine what it would take to make that drive.

[ WeRobotics ]

My first ride in a fully autonomous car was at Stanford in 2009. I vividly remember getting in the back seat of a descendant of Junior, and watching the steering wheel turn by itself as the car executed a perfect parking maneuver. Ten years later, it’s still fun to watch other people have that experience.

[ Waymo ]

Flirtey, the pioneer of the commercial drone delivery industry, has unveiled the much-anticipated first video of its next-generation delivery drone, the Flirtey Eagle. The aircraft designer and manufacturer also unveiled the Flirtey Portal, a sophisticated takeoff and landing platform that enables scalable store-to-door operations; and an autonomous software platform that enables drones to deliver safely to homes.

[ Flirtey ]

EPFL scientists are developing new approaches for improved control of robotic hands – in particular for amputees – that combine individual finger control and automation for improved grasping and manipulation. This interdisciplinary proof-of-concept between neuroengineering and robotics was successfully tested on three amputees and seven healthy subjects.

[ EPFL ]

This video is a few years old, but we’ll take any excuse to watch the majestic sage-grouse be majestic in all their majesticness.

[ UC Davis ]

I like the idea of a game of soccer (or, football to you weirdos in the rest of the world) where the ball has a mind of its own.

[ Sphero ]

Looks like the whole delivery glider idea is really taking off! Or, you know, not taking off.

Weird that they didn’t show the landing, because it sure looked like it was going to plow into the side of the hill at full speed.

[ Yates ] via [ sUAS News ]

This video is from a 2018 paper, but it’s not like we ever get tired of seeing quadrupeds do stuff, right?

[ MIT ]

Founder and Head of Product, Ian Bernstein, and Head of Engineering, Morgan Bell, have been involved in the Misty project for years and they have learned a thing or two about building robots. Hear how and why Misty evolved into a robot development platform, learn what some of the earliest prototypes did (and why they didn’t work for what we envision), and take a deep dive into the technology decisions that form the Misty II platform.

[ Misty Robotics ]

Lex Fridman interviews Vijay Kumar on the Artificial Intelligence Podcast.

[ AI Podcast ]

This week’s CMU RI Seminar is from Ross Knepper at Cornell, on Formalizing Teamwork in Human-Robot Interaction.

Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

[ CMU RI ]
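To make the talk's "small enumerable set of passing outcomes" concrete: in the simplest reading, each pair of crossing agents resolves to passing on the left or on the right, so the joint outcome space stays tiny and can be searched exhaustively. The sketch below is a toy illustration of that enumeration only; it is not Knepper's braid-group formalism or code, and the comfort model is invented.

```python
from itertools import product

agents = ["robot", "pedestrian_A", "pedestrian_B"]
pairs = [(a, b) for i, a in enumerate(agents) for b in agents[i + 1:]]

# Each crossing pair independently resolves to one of two "strands."
outcomes = list(product(["left", "right"], repeat=len(pairs)))
print(f"{len(pairs)} crossing pairs -> {len(outcomes)} joint outcomes")  # 3 -> 8

def comfort(outcome) -> float:
    # Stand-in cost model: prefer everyone passing on the right, the way
    # a robot might deliberately bias group consensus toward a shared norm.
    return sum(1.0 for side in outcome if side == "right")

best = max(outcomes, key=comfort)
print(dict(zip(pairs, best)))  # the passing side chosen for each pair
```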

In this week’s episode of Robots in Depth, Per speaks with Julien Bourgeois about Claytronics, a project from Carnegie Mellon and Intel to develop “programmable matter.”

Julien started out as a computer scientist. He was always interested in robotics privately but then had the opportunity to get into micro robots when his lab was merged into the FEMTO-ST Institute. He later worked with Seth Copen Goldstein at Carnegie Mellon on the Claytronics project.

Julien shows an enlarged mock-up of the small robots that make up programmable matter, catoms, and speaks about how they are designed. Currently he is working on a unit that is one centimeter in diameter and he shows us the very small CPU that goes into that model.

[ Robots in Depth ]
