
#435748 Video Friday: This Robot Is Like a ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s been a while since we last spoke to Joe Jones, the inventor of Roomba, about his solar-powered, weed-killing robot, called Tertill, which he was launching as a Kickstarter project. Tertill is now available for purchase (US $300) and is shipping right now.

[ Tertill ]

Usually, we don’t post videos that involve drone use that looks to be either illegal or unsafe. These flights over the protests in Hong Kong are almost certainly both. However, it’s also a unique perspective on the scale of these protests.

[ Team BlackSheep ]

ICYMI: iRobot announced this week that it has acquired Root Robotics.

[ iRobot ]

This Boston Dynamics parody video went viral this week.

The CGI is good but the gratuitous violence—even if it’s against a fake robot—is a bit too much?

This is still our favorite Boston Dynamics parody video:

[ Corridor ]

Biomedical Engineering Department Head Bin He and his team have developed the first-ever successful non-invasive mind-controlled robotic arm to continuously track a computer cursor.

[ CMU ]

Organic chemists, prepare to meet your replacement:

Automated chemical synthesis carries great promises of safety, efficiency and reproducibility for both research and industry laboratories. Current approaches are based on specifically-designed automation systems, which present two major drawbacks: (i) existing apparatus must be modified to be integrated into the automation systems; (ii) such systems are not flexible and would require substantial re-design to handle new reactions or procedures. In this paper, we propose a system based on a robot arm which, by mimicking the motions of human chemists, is able to perform complex chemical reactions without any modifications to the existing setup used by humans. The system is capable of precise liquid handling, mixing, filtering, and is flexible: new skills and procedures could be added with minimum effort. We show that the robot is able to perform a Michael reaction, reaching a yield of 34%, which is comparable to that obtained by a junior chemist (undergraduate student in Chemistry).

[ arXiv ] via [ NTU ]

So yeah, ICRA 2019 was huge and awesome. Here are some brief highlights.

[ Montreal Gazette ]

For about US $5, this drone will deliver raw meat and beer to you if you live on an uninhabited island in Tokyo Bay.

[ Nikkei ]

The Smart Microsystems Lab at Michigan State University has a new version of their Autonomous Surface Craft. It’s autonomous, open source, and awfully hard to sink.

[ SML ]

As drone shows go, this one is pretty good.

[ CCTV ]

Here’s a remote controlled robot shooting stuff with a very large gun.

[ HDT ]

Over a period of three quarters (September 2018 through May 2019), we’ve had the opportunity to work with five graduating University of Denver students as they brought their idea for a Misty II arm extension to life.

[ Misty Robotics ]

If you wonder how it looks to inspect burners and superheaters of a boiler with an Elios 2, here you are! This inspection was performed by Svenska Elektrod in a peat-fired boiler for Vattenfall in Sweden. Enjoy!

[ Flyability ]

The newest Soft Robotics technology, mGrip mini fingers, made for tight spaces, small packaging, and delicate items, giving limitless opportunities for your applications.

[ Soft Robotics ]

What if legged robots were able to generate dynamic motions in real-time while interacting with a complex environment? Such technology would represent a significant step toward the deployment of legged systems in real-world scenarios. This means being able to replace humans in the execution of dangerous tasks and to collaborate with them in industrial applications.

This workshop aims to bring together researchers from all the relevant communities in legged locomotion such as: numerical optimization, machine learning (ML), model predictive control (MPC) and computational geometry in order to chart the most promising methods to address the above-mentioned scientific challenges.

[ Num Opt Wkshp ]

Army researchers teamed with the U.S. Marine Corps to fly and test 3-D printed quadcopter prototypes at the Marine Corps Air Ground Combat Center in Twentynine Palms, California, recently.

[ CCDC ARL ]

Lex Fridman’s Artificial Intelligence podcast featuring Rosalind Picard.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per speaks with Christian Guttmann, executive director of the Nordic Artificial Intelligence Institute.

Christian Guttmann talks about AI and wanting to understand intelligence enough to recreate it. Christian has been focusing on AI in healthcare and has recently started to communicate the opportunities and challenges in artificial intelligence to the general public. This is something that the host, Per Sjöborg, is also very passionate about. We also get to hear about the Nordic AI institute and the work it does to inform all parts of society about AI.

[ Robots in Depth ]

Posted in Human Robots

#435742 This ‘Useless’ Social Robot ...

The recent high-profile failures of some home social robots (and the companies behind them) have made it even more challenging than it was before to develop robots in that space. And it was challenging enough to begin with—making a robot that can autonomously interact with random humans in their homes over a long period of time for a price that people can afford is extraordinarily difficult. However, the massive amount of initial interest in robots like Jibo, Kuri, Vector, and Buddy proves that people do want these things, or at least think they do, and while that’s the case, there’s incentive for other companies to give social home robots a try.

One of those companies is Zoetic, founded in 2017 by Mita Yun and Jitu Das, both ex-Googlers. Their robot, Kiki, is more or less exactly what you’d expect from a social home robot: It’s cute, white, roundish, has big eyes, promises that it will be your “robot sidekick,” and is not cheap: It’s on Kickstarter for $800. Kiki is among what appears to be a sort of tentative second wave of social home robots, where designers have (presumably) had a chance to take everything that they learned from the social home robot pioneers and use it to make things better this time around.

Kiki’s Kickstarter video is, again, more or less exactly what you’d expect from a social home robot crowdfunding campaign:

We won’t get into all of the details on Kiki in this article (the Kickstarter page has tons of information), but a few distinguishing features:

Each Kiki will develop its own personality over time through its daily interactions with its owner, other people, and other Kikis.
Interacting with Kiki is more abstract than with most robots—it can understand some specific words and phrases, and will occasionally use a specific word or two itself, but otherwise it’s mostly listening to your tone of voice and responding with sounds rather than speech.
Kiki doesn’t move on its own, but it can operate for up to two hours away from its charging dock.
Depending on how you treat Kiki, it can get depressed or neurotic. It also needs to be fed, which you can do by drawing different kinds of food in the app.
Everything Kiki does runs on-board the robot. It has Wi-Fi connectivity for updates, but doesn’t rely on the cloud for anything in real-time, meaning that your data stays on the robot and that the robot will continue to function even if its remote service shuts down.

It’s hard to say whether features like these are unique enough to help Kiki be successful where other social home robots haven’t been, so we spoke with Zoetic co-founder Mita Yun and asked her why she believes that Kiki is going to be the social home robot that makes it.

IEEE Spectrum: What’s your background?

Mita Yun: I was an only child growing up, and so I always wanted something like Doraemon or Totoro. Something that when you come home it’s there to greet you, not just because it’s programmed to do that but because it’s actually actively happy to see you, and only you. I was so interested in this that I went to study robotics at CMU and then after I graduated I joined Google and worked there for five years. I tended to go for the more risky and more fun projects, but they always got cancelled—the first project I joined was called Android at Home, and then I joined Google Glass, and then I joined a team called Robots for Kids. That project was building educational robots, and then I just realized that when we’re adding technology to something, to a product, we’re actually taking the life away somehow, and the kids were more connected with stuffed animals compared to the educational robots we were building. That project was also cancelled, and in 2017, I left with a coworker of mine (Jitu Das) to bring this dream into reality. And now we’re building Kiki.

“Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless”
—Mita Yun, Zoetic

You started working on Kiki in 2017, when things were already getting challenging for Jibo—why did you decide to start developing a social home robot at that point?

I thought Jibo was great. It had a special magical way of moving, and it was such a new idea that you could have this robot with embodiment and it can actually be your assistant. The problem with Jibo, in my opinion, was that it took too long to fulfill the orders. It took them three to four years to actually manufacture, because it was a very complex piece of hardware, and then during that period of time Alexa and Google Home came out, and they started selling these voice systems for $30 and then you have Jibo for $800. Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless.

Can you elaborate on “completely useless?”

I feel like people are initially connected with robots because they remind them of a character. And it’s the closest we can get to a character other than an organic character like an animal. So we’re connected to a character like when we have a robot in a mall that’s roaming around, even if it looks really ugly, like if it doesn’t have eyes, people still take selfies with it. Why? Because they think it’s a character. And humans are just hardwired to love characters and love stories. With Kiki, we just wanted to build a character that’s alive, we don’t want to have a character do anything super useful.

I understand why other robotics companies are adding Alexa integration to their robots, and I think that’s great. But the dream I had, and the understanding I have about robotics technology, is that for a consumer robot especially, it is very very difficult for the robot to justify its price through usefulness. And then there’s also research showing that the more useless something is, the easier it is to have an emotional connection, so that’s why we want to keep Kiki very useless.

What kind of character are you creating with Kiki?

The whole design principle around Kiki is we want to make it a very vulnerable character. In terms of its status at home, it’s not going to be higher or equal status as the owner, but slightly lower status than the human, and it’s vulnerable and needs you to take care of it in order to grow up into a good personality robot.

We don’t let Kiki speak full English sentences, because whenever it does that, people are going to think it’s at least as intelligent as a baby, which is impossible for robots at this point. And we also don’t let it move around, because when you have it move around, people are going to think “I’m going to call Kiki’s name, and then Kiki will come to me.” But that is actually very difficult to build. And then also we don’t have any voice integration so it doesn’t tell you about the stock market price and so on.

Photo: Zoetic

Kiki is designed to be “vulnerable,” and it needs you to take care of it so it can “grow up into a good personality robot,” according to its creators.

That sounds similar to what Mayfield did with Kuri, emphasizing an emotional connection rather than specific functionality.

It is very similar, but one of the key differences from Kuri, I think, is that Kuri started with a Kobuki base, and then it’s wrapped into a cute shell, and they added sounds. So Kuri started with utility in mind—navigation is an important part of Kuri, so they started with that challenge. For Kiki, we started with the eyes. The entire thing started with the character itself.

How will you be able to convince your customers to spend $800 on a robot that you’ve described as “useless” in some ways?

Because it’s useless, it’s actually easier to convince people, because it provides you with an emotional connection. I think Kiki is not a utility-driven product, so the adoption cycle is different. For a functional product, it’s very easy to pick up, because you can justify it by saying “I’m going to pay this much and then my life can become this much more efficient.” But it’s also very easy to be replaced and forgotten. For an emotional-driven product, it’s slower to pick up, but once people actually pick it up, they’re going to be hooked—they become connected with it, and they’re willing to invest more into taking care of the robot so it will grow up to be smarter.

Maintaining value over time has been another challenge for social home robots. How will you make sure that people don’t get bored with Kiki after a few weeks?

Of course Kiki has limits in what it can do. We can combine the eyes, the facial expression, the motors, and lights and sounds, but is it going to be constantly entertaining? So we think of this as, imagine if a human is actually puppeteering Kiki—can Kiki stay interesting if a human is puppeteering it and interacting with the owner? So I think what makes a robot interesting is not just in the physical expressions, but the part in between that and the robot conveying its intentions and emotions.

For example, if you come into the room and then Kiki decides it will turn the other direction, ignore you, and then you feel like, huh, why did the robot do that to me? Did I do something wrong? And then maybe you will come up to it and you will try to figure out why it did that. So, even though Kiki can only express in four different dimensions, it can still make things very interesting, and then when its strategies change, it makes it feel like a new experience.

There’s also an explore and exploit process going on. Kiki wants to make you smile, and it will try different things. It could try to chase its tail, and if you smile, Kiki learns that this works and will exploit it. But maybe after doing it three times, you no longer find it funny, because you’re bored of it, and then Kiki will observe your reactions and be motivated to explore a new strategy.
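The explore-and-exploit loop Yun describes maps onto a classic multi-armed bandit. Here’s a minimal epsilon-greedy sketch of the idea—class and trick names are hypothetical, not Zoetic’s actual implementation:

```python
import random

class ExploreExploit:
    """Epsilon-greedy selection over a set of 'tricks'.

    Each trick's value estimate is updated from observed rewards
    (e.g., 1 if the owner smiled, 0 otherwise). With probability
    epsilon the agent explores a random trick; otherwise it
    exploits the trick with the best running estimate.
    """

    def __init__(self, tricks, epsilon=0.2):
        self.epsilon = epsilon
        self.values = {t: 0.0 for t in tricks}
        self.counts = {t: 0 for t in tricks}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.values))   # explore a random trick
        return max(self.values, key=self.values.get)  # exploit the best one

    def update(self, trick, reward):
        # Incremental mean keeps a running estimate of each trick's payoff.
        self.counts[trick] += 1
        self.values[trick] += (reward - self.values[trick]) / self.counts[trick]

agent = ExploreExploit(["chase_tail", "blink", "chirp"])
trick = agent.choose()
agent.update(trick, reward=1)  # the owner smiled
```

The boredom effect Yun mentions would correspond to decaying a trick’s value each time it’s repeated, which pushes the agent back toward exploration.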

Photo: Zoetic

Kiki’s creators are hoping that, with an emotionally engaging robot, it will be easier for people to get attached to it and willing to spend time taking care of it.

A particular risk with crowdfunding a robot like this is setting expectations unreasonably high. The emphasis on personality and emotional engagement with Kiki seems like it may be very difficult for the robot to live up to in practice.

I think we invested more than most robotics companies into really building out Kiki’s personality, because that is the single most important thing to us. For Jibo a lot of the focus was in the assistant, and for Kuri, it’s more in the movement. For Kiki, it’s very much in the personality.

I feel like when most people talk about personality, they’re mainly talking about expression. With Kiki, it’s not just in the expression itself, not just in the voice or the eyes or the output layer, it’s in the layer in between—when Kiki receives input, how will it make decisions about what to do? We actually don’t think the personality of Kiki is categorizable, which is why I feel like Kiki has a deeper implementation of how personalities should work. And you’re right, Kiki doesn’t really understand why you’re feeling a certain way, it just reads your facial expressions. It’s maybe not your best friend, but maybe closer to your little guinea pig robot.

Photo: Zoetic

The team behind Kiki paid particular attention to its eyes, and designed the robot to always face the person that it is interacting with.

Is that where you’d put Kiki on the scale of human to pet?

Kiki is definitely not human, we want to keep it very far away from human. And it’s also not a dog or cat. When we were designing Kiki, we took inspiration from mammals because humans are deeply connected to mammals since we’re mammals ourselves. And specifically we’re connected to predator animals. With prey animals, their eyes are usually on the sides of their heads, because they need to see different angles. A predator animal needs to hunt, they need to focus. Cats and dogs are predator animals. So with Kiki, that’s why we made sure the eyes are on one side of the face and the head can actuate independently from the body and the body can turn so it’s always facing the person that it’s paying attention to.

I feel like Kiki probably does more than a plant. It does more than a fish, because a fish doesn’t look you in the eyes. It’s not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.

What have you found so far when running user studies with Kiki?

When we were first designing Kiki we went through a whole series of prototypes. One of the earlier prototypes of Kiki looked like a CRT, like a very old monitor, and when we were testing that with people they didn’t even want to touch it. Kiki’s design inspiration actually came from an airplane, with a very angular, futuristic look, but based on user feedback we made it more round and more friendly to the touch. The lights were another feature request from the users, which adds another layer of expressivity to Kiki, and they wanted to see multiple Kikis working together with different personalities. Users also wanted different looks for Kiki, to make it look like a deer or a unicorn, for example, and we actually did take that into consideration because it doesn’t look like any particular mammal. In the future, you’ll be able to have different ears to make it look like completely different animals.

There has been a lot of user feedback that we didn’t implement—I believe we should observe the users’ reactions and feedback but not listen to their advice. The users shouldn’t be our product designers, because if you test Kiki with 10 users, eight of them will tell you they want Alexa in it. But we’re never going to add Alexa integration to Kiki because that’s not what it’s meant to do.

While it’s far too early to tell whether Kiki will be a long-term success, the Kickstarter campaign is currently over 95 percent funded with 8 days to go, and 34 robots are still available for a May 2020 delivery.

[ Kickstarter ]


#435731 Video Friday: NASA Is Sending This ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, UK
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, PA, USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The big news today is that NASA is sending a robot to Saturn’s moon Titan. A flying robot. The Dragonfly mission will launch in 2026 and arrive in 2034, but you knew that already, because last January, we posted a detailed article about the concept from the Applied Physics Lab at Johns Hopkins University. And now it’s not a concept anymore, yay!

Again, you can read all the details, plus an interview, in our 2018 article.

[ NASA ]

A robotic gripping arm that uses engineered bacteria to “taste” for a specific chemical has been developed by engineers at the University of California, Davis, and Carnegie Mellon University. The gripper is a proof-of-concept for biologically-based soft robotics.

The new device uses a biosensing module based on E. coli bacteria engineered to respond to the chemical IPTG by producing a fluorescent protein. The bacterial cells reside in wells with a flexible, porous membrane that allows chemicals to enter but keeps the cells inside. This biosensing module is built into the surface of a flexible gripper on a robotic arm, so the gripper can “taste” the environment through its fingers.

When IPTG crosses the membrane into the chamber, the cells fluoresce and electronic circuits inside the module detect the light. The electrical signal travels to the gripper’s control unit, which can decide whether to pick something up or release it.
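The sense-and-decide chain described here (fluorescence, then photodetection, then a grip decision) amounts to a threshold test. A hedged sketch of that logic—the function name and fold-change threshold are my own assumptions, not the UC Davis design:

```python
def grip_decision(light_reading, baseline, threshold=2.0):
    """Decide whether to release based on biosensor fluorescence.

    light_reading: photodetector output near the E. coli wells
    baseline:      reading with no target chemical (IPTG) present
    threshold:     fold-change over baseline that counts as detection
    """
    detected = light_reading >= baseline * threshold
    return "release" if detected else "hold"
```

The interesting engineering is upstream of this line of code: keeping the cells alive in the membrane wells and getting a clean optical signal out of them.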

[ UC Davis ]

The Toyota Research Institute (TRI) is taking on the hard problems in manipulation research toward making human-assist robots reliable and robust. Dr. Russ Tedrake, TRI Vice President of Robotics Research, explains how we are exploring the challenges and addressing the reliability gap by using a robot loading dishes in a dishwasher as an example task.

[ TRI ]

The Tactile Telerobot is the world’s first haptic telerobotic system that transmits realistic touch feedback to an operator located anywhere in the world. It is the product of joint collaboration between Shadow Robot Company, HaptX, and SynTouch. All Nippon Airways funded the project’s initial research and development.

What’s really unique about this is the HaptX tactile feedback system, which is something we’ve been following for several years now. It’s one of the most magical tech experiences I’ve ever had, and you can read about it here and here.

[ HaptX ]

Thanks Andrew!

I love how snake robots can emulate some of the fanciest moves of real snakes, and then also do bonkers things that real snakes never do.

[ Matsuno Lab ]

Here are a couple interesting videos from the Human-Robot Interaction Lab at Tufts.

A robot is instructed to perform an action and cannot do it due to lack of sensors. But when another robot is placed nearby, it can execute the instruction by tacitly tapping into the other robot’s mind and using that robot’s sensors for its own actions. Yes, it’s automatic, and yes, it’s the BORG!

Two Nao robots are instructed to perform a dance and are able to do it right after instruction. Moreover, they can switch roles immediately, and even a third different PR2 robot can perform the dance right away, demonstrating the ability of our DIARC architecture to learn quickly and share the knowledge with any type of robot running the architecture.

Compared to Nao, PR2 just sounds… depressed.

[ HRI Lab ]

This work explores the problem of robot tool construction – creating tools from parts available in the environment. We advance the state-of-the-art in robotic tool construction by introducing an approach that enables the robot to construct a wider range of tools with greater computational efficiency. Specifically, given an action that the robot wishes to accomplish and a set of building parts available to the robot, our approach reasons about the shape of the parts and potential ways of attaching them, generating a ranking of part combinations that the robot then uses to construct and test the target tool. We validate our approach on the construction of five tools using a physical 7-DOF robot arm.
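The ranking step in the abstract can be sketched as scoring every ordered pair of parts and then building and testing candidates best-first. The scoring functions below are placeholders, not the paper’s actual geometric reasoning:

```python
from itertools import permutations

def rank_tool_candidates(parts, shape_score, attach_score):
    """Rank ordered part pairs (action part, grasp part) for tool construction.

    shape_score(a, b):  how well the pair's geometry suits the target action
    attach_score(a, b): how plausibly part a can be fastened to part b
    Returns pairs sorted best-first; the robot would construct and
    physically test them in this order until one works.
    """
    candidates = [
        (shape_score(a, b) * attach_score(a, b), a, b)
        for a, b in permutations(parts, 2)
    ]
    return [(a, b) for score, a, b in sorted(candidates, reverse=True)]
```

Multiplying the two scores means a pair is only ranked highly if it is both geometrically suitable and attachable—either score near zero vetoes the candidate.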

[ RAIL Lab ] via [ RSS ]

We like Magazino’s approach to warehouse picking: constrain the problem to something you can reliably solve, like shoeboxes.

Magazino has announced a new pricing model for their robots. You pay 55k euros for the robot itself, and then after that, all you pay to keep the robot working is 6 cents per pick, so the robot is only costing you money for the work that it actually does.
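Those two numbers make the economics easy to sanity-check: the effective cost per pick is the amortized 55,000 € plus the 6-cent fee. A quick calculation (the pick volume is an illustrative assumption, not a Magazino figure):

```python
def cost_per_pick(total_picks, base_eur=55_000, per_pick_eur=0.06):
    """Effective all-in cost per pick, amortizing the purchase price."""
    return (base_eur + per_pick_eur * total_picks) / total_picks

# At 1,000,000 picks: (55,000 + 60,000) / 1,000,000 = 0.115 EUR per pick,
# i.e. the hardware adds about 5.5 cents on top of the 6-cent fee.
```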

[ Magazino ]

Thanks Florin!

Human-robot collaboration is happening in factories worldwide, yet few smaller businesses are using it, due to high costs or the difficulty of customization. Elephant Robotics, a new player from Shenzhen, the Silicon Valley of Asia, has set its sights on helping smaller businesses gain access to smart robotics. They created the Catbot (a collaborative robotic arm) that will offer high efficiency and flexibility to various industries.

The Catbot is set to help with everything from education projects, photography, and massage to being a personal barista or co-playing a table game. The customization possibilities are endless. To increase its flexibility of use, the Catbot is extremely easy to program, from high-precision tasks up to hefty ground-covering projects.

[ Elephant Robotics ]

Thanks Johnson!

Dronistics, an EPFL spin-off, has been testing out their enclosed delivery drone in the Dominican Republic through a partnership with WeRobotics.

[ WeRobotics ]

QTrobot is an expressive humanoid robot designed to help children with autism spectrum disorder and children with special educational needs learn new skills. QTrobot uses simple and exaggerated facial expressions, combined with interactive games and stories, to help children improve their emotional skills. QTrobot helps children learn about and better understand emotions, and teaches them strategies to handle their emotions more effectively.

[ LuxAI ]

Here’s a typical day in the life of a Tertill solar-powered autonomous weed-destroying robot.

$300, now shipping from Franklin Robotics.

[ Tertill ]

PAL Robotics is excited to announce a new TIAGo with two arms, TIAGo++! After carefully listening to the robotics community needs, we used TIAGo’s modularity to integrate two 7-DoF arms to our mobile manipulator. TIAGo++ can help you swiftly accomplish your research goals, opening endless possibilities in mobile manipulation.

[ PAL Robotics ]

Thanks Jack!

You’ve definitely already met the Cobalt security robot, but Toyota AI Ventures just threw a pile of money at them and would therefore like you to experience this re-introduction:

[ Cobalt Robotics ] via [ Toyota AI ]

ROSIE is a mobile manipulator kit from HEBI Robotics. And if you don’t like ROSIE, the modular nature of HEBI’s hardware means that you can take her apart and make something more interesting.

[ HEBI Robotics ]

Learn about Kawasaki Robotics’ second addition to their line of duAro dual-arm collaborative robots, duAro2. This model offers an extended vertical reach (550 mm) and an increased payload capacity (3 kg/arm).

[ Kawasaki Robotics ]

Drone Delivery Canada has partnered with Peel Region Paramedics to pilot its proprietary drone delivery platform to enable rapid first responder technology via drone with the goal to reduce response time and potentially save lives.

[ Drone Delivery Canada ]

In this week’s episode of Robots in Depth, Per speaks with Harri Ketamo, from Headai.

Harri Ketamo talks about AI and how he aims to mimic human decision making with algorithms. Harri has done a lot of AI for computer games to create opponents that are entertaining to play against. It is easy to develop a very bad or a very good opponent, but designing an opponent that behaves like a human, is entertaining to play against, and that you can beat is quite hard. He talks about how AI in computer games is a very important storytelling tool and an important part of making a game entertaining to play.

This work led him into other parts of the AI field. Harri thinks that we sometimes have a problem separating what is real from the type of storytelling he knows from gaming AI. He calls for critical analysis of AI and says that data has to be used to verify AI decisions and results.

[ Robots in Depth ]

Thanks Per!


#435722 Stochastic Robots Use Randomness to ...

The idea behind swarm robots is to replace discrete, expensive, breakable uni-tasking components with a whole bunch of much simpler, cheaper, and replaceable robots that can work together to do the same sorts of tasks. Unfortunately, all of those swarm robots end up needing their own computing and communications and stuff if you want to get them to do what you want them to do.

A different approach to swarm robotics is to use a swarm of much cheaper robots that are far less intelligent. In fact, they may not have to be intelligent at all, if you can rely on their physical characteristics to drive them instead. These swarms are “stochastic,” meaning that their motions are randomly determined, but if you’re clever and careful, you can still get them to do specific things.

Georgia Tech has developed some little swarm robots called “smarticles” that can’t really do much at all on their own, but once you put them together into a jumble, their randomness can actually accomplish something.

Honestly, calling these particle robots “smart” might be giving them a bit too much credit, because they’re actually kind of dumb and strictly speaking not capable of all that much on their own. A single smarticle weighs 35 grams, and consists of some little 3D-printed flappy bits attached to servos, plus an Arduino Pro Mini, a battery, and a light or sound sensor. When its little flappy bits are activated, each smarticle can move slightly, but a single one mostly just moves around in a square and then will gradually drift in a mostly random direction over time.

It gets more interesting when you throw a whole bunch of smarticles into a constrained area. A small collection of five or 10 smarticles constrained together form a “supersmarticle,” but besides being in close proximity to one another, the smarticles within the supersmarticle aren’t communicating or anything like that. As far as each smarticle is concerned, they’re independent, but weirdly, a jumble of them can work together without working together.

“These are very rudimentary robots whose behavior is dominated by mechanics and the laws of physics,” said Dan Goldman, a Dunn Family Professor in the School of Physics at the Georgia Institute of Technology.

The researchers noticed that if one small robot stopped moving, perhaps because its battery died, the group of smarticles would begin moving in the direction of that stalled robot. Graduate student Ross Warkentin learned he could control the movement by adding photo sensors to the robots that halt the arm flapping when a strong beam of light hits one of them.

“If you angle the flashlight just right, you can highlight the robot you want to be inactive, and that causes the ring to lurch toward or away from it, even though no robots are programmed to move toward the light,” Goldman said. “That allowed steering of the ensemble in a very rudimentary, stochastic way.”

It turns out that it’s possible to model this behavior, and control a supersmarticle with enough fidelity to steer it through a maze. And while these particular smarticles aren’t all that small, strictly speaking, the idea is to develop techniques that will work when robots are scaled way way down to the point where you can't physically fit useful computing in there at all.
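You can get a feel for the steering effect with a toy model: active agents take random steps while the stalled robot stays put, and the mechanical interaction that pulls the ring toward it is stood in for by a small phenomenological bias. This is a cartoon of the dynamics, not the paper’s actual model:

```python
import random

def simulate_supersmarticle(steps=1000, n=5, stalled=0, bias=0.02, seed=1):
    """Toy 1-D model of ensemble drift toward a stalled smarticle.

    The real effect is purely mechanical; here the stalled robot's
    influence is modeled as a small bias added to each active
    robot's otherwise random step, pulling it toward the stalled one.
    """
    random.seed(seed)
    pos = [0.0] * n
    pos[stalled] = -1.0  # the stalled robot sits off to one side
    for _ in range(steps):
        for i in range(n):
            if i == stalled:
                continue  # inactive: dead battery, or a lit photosensor
            step = random.uniform(-0.1, 0.1)          # stochastic wiggle
            drift = bias if pos[stalled] > pos[i] else -bias
            pos[i] += step + drift
    return sum(pos) / n  # ensemble centroid

# After enough steps, the centroid ends up near the stalled robot's side.
```

The point of the cartoon matches the experiment: no individual is programmed to seek the stalled robot, but the ensemble reliably drifts toward it anyway.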

The researchers are also working on some other concepts, like these:

Image: Science Robotics

The Georgia Tech researchers envision stochastic robot swarms that don’t have a perfectly defined shape or delineation but are capable of self-propulsion, relying on the ensemble-level behaviors that lead to collective locomotion. In such a robot, the researchers say, groups of largely generic agents may be able to achieve complex goals, as observed in biological collectives.

Er, yeah. I’m…not sure I really want there to be a bipedal humanoid robot built out of a bunch of tiny robots. Like, that seems creepy somehow, you know? I’m totally okay with slugs, but let’s not get crazy.

“A robot made of robots: Emergent transport and control of a smarticle ensemble,” by William Savoie, Thomas A. Berrueta, Zachary Jackson, Ana Pervan, Ross Warkentin, Shengkai Li, Todd D. Murphey, Kurt Wiesenfeld, and Daniel I. Goldman from the Georgia Institute of Technology, appears in the current issue of Science Robotics.

Posted in Human Robots

#435716 Watch This Drone Explode Into Maple Seed ...

As useful as conventional fixed-wing and quadrotor drones have become, they still tend to be relatively complicated, expensive machines that you really want to be able to use more than once. When a one-way trip is all that you have in mind, you want something simple, reliable, and cheap, and we’ve seen a bunch of different designs for drone gliders that more or less fulfill those criteria.

For an even simpler gliding design, you want to minimize both airframe mass and control surfaces, and the maple tree provides some inspiration in the form of the samara, those distinctive seed pods that whirl to the ground in the fall. A samara is essentially just an unbalanced wing that spins, and while the natural ones don’t steer, adding an actuated flap to the robotic version and moving it at just the right time results in enough controllability to aim for a specific point on the ground.

Roboticists at the Singapore University of Technology and Design (SUTD) have been experimenting with samara-inspired drones, and in a new paper in IEEE Robotics and Automation Letters they explore what happens if you attach five of the drones together and then separate them in midair.

Image: Singapore University of Technology and Design

The drone with all five wings attached (top left), and details of the individual wings: (a) smaller 44.9-gram wing for semi-indoor testing; (b) larger 83.4-gram wing able to carry a Pixracer, GPS, and magnetometer for directional control experiments.

Fundamentally, a samara design acts as a decelerator for an aerial payload. You can think of it like a parachute: It makes sure that whatever you toss out of an airplane gets to the ground intact rather than just smashing itself to bits on impact. Steering is possible, but you don’t get a lot of stability or precision control. The RA-L paper describes one solution to this, which is to collaboratively use five drones at once in a configuration that looks a bit like a helicopter rotor.

And once the multi-drone is right where you want it, the five individual samara drones can split off all at once, heading out on their own missions. It’s quite a sight:

The concept features a collaborative autorotation in the initial stage of drop whereby several wings are attached to each other to form a rotor hub. The combined form achieves higher rotational energy and a collaborative control strategy is possible. Once closer to the ground, they can exit the collaborative form and continue to descend to unique destinations. A section of each wing forms a flap and a small actuator changes its pitch cyclically. Since all wing-flaps can actuate simultaneously in collaborative mode, better maneuverability is possible, hence higher resistance against environmental conditions. The vertical and horizontal speeds can be controlled to a certain extent, allowing it to navigate towards a target location and land softly.
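The “actuator changes its pitch cyclically” part of the abstract is the key to steering a spinning wing: deflect the flap once per revolution, only while the wing passes through a particular azimuth, and the asymmetric lift nudges the descent path in the commanded direction. Here is a minimal sketch of such a once-per-revolution flap schedule; the function, window width, and cosine taper are assumptions for illustration, not the controller from the SUTD paper.

```python
import math

def cyclic_flap_command(azimuth_deg, target_bearing_deg,
                        max_deflect_deg=20.0, window_deg=60.0):
    """Once-per-revolution flap schedule for an autorotating wing (a
    sketch, not the paper's controller): deflect the flap only while the
    wing's azimuth passes through a window centered on the commanded
    bearing, so each revolution nudges the descent in that direction."""
    # Smallest signed angle between current azimuth and commanded bearing.
    err = (azimuth_deg - target_bearing_deg + 180.0) % 360.0 - 180.0
    if abs(err) <= window_deg / 2.0:
        # Cosine taper: full deflection at the center of the window,
        # falling smoothly to zero at its edges.
        return max_deflect_deg * math.cos(math.pi * err / window_deg)
    return 0.0

# One revolution of the wing, commanded to drift toward a 90-degree bearing:
# the flap fires only in the 60-120 degree azimuth window.
deflections = [cyclic_flap_command(a, 90.0) for a in range(0, 360, 10)]
```

In collaborative mode the same logic would run on all five wings at once, which is presumably why the combined rotor gets better maneuverability: five flaps firing in phase produce a larger net lateral force than one.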

The samara autorotating wing drones themselves could conceivably carry small payloads like sensors or emergency medical supplies, with the small-scale versions in the video able to handle an extra 30 grams of payload. While they might not have as much capacity as a traditional fixed-wing glider, they have the advantage of being able to descend vertically, and they can perform better than a parachute thanks to their ability to steer. The researchers plan to improve the design of their little drones, with the goal of increasing the rotation speed and improving the control performance of both the individual drones and the multi-wing collaborative version.

“Dynamics and Control of a Collaborative and Separating Descent of Samara Autorotating Wings,” by Shane Kyi Hla Win, Luke Soe Thura Win, Danial Sufiyan, Gim Song Soh, and Shaohui Foong from Singapore University of Technology and Design, appears in the current issue of IEEE Robotics and Automation Letters.
[ SUTD ]


Posted in Human Robots