Tag Archives: animals
#435752 T-RHex Is a Hexapod Robot With ...
In Aaron Johnson’s “Robot Design & Experimentation” class at CMU, teams of students have a semester to design and build an experimental robotic system based on a theme. For spring 2019, that theme was “Bioinspired Robotics,” which is definitely one of our favorite kinds of robotics—animals can do all kinds of crazy things, and it’s always a lot of fun watching robots try to match them. They almost never succeed, of course, but even basic imitation can lead to robots with some unique capabilities.
One of the projects from this year’s course, from Team ScienceParrot, is a new version of RHex called T-RHex (pronounced T-Rex, like the dinosaur). T-RHex comes with a tail, but more importantly, it has tiny tapered toes, which help it grip onto rough surfaces like bricks, wood, and concrete. It’s able to climb its way up very steep slopes, and hang from them, relying on its toes to keep itself from falling off.
T-RHex’s toes are called microspines, and we’ve seen them in all kinds of robots. The most famous of these is probably JPL’s LEMUR IIB (which wins on sheer microspine volume), although the concept goes back at least 15 years to Stanford’s SpinyBot. Robots that use microspines to climb tend to be fairly methodical at it, since the microspines have to be engaged and disengaged with care, limiting their non-climbing mobility.
T-RHex manages to perform many of the same sorts of climbing and hanging maneuvers without losing RHex’s ability for quick, efficient wheel-leg (wheg) locomotion.
If you look closely at T-RHex walking in the video, you’ll notice that in its normal forward gait, it’s sort of walking on its ankles, rather than its toes. This means that the microspines aren’t engaged most of the time, so that the robot can use its regular wheg motion to get around. To engage the microspines, the robot moves its whegs backwards, meaning that its tail is arguably coming out of its head. But since all of T-RHex’s capability is mechanical in nature and it has no active sensors, it doesn’t really need a head, so that’s fine.
The highest climbable slope that T-RHex could manage was 55 degrees, meaning that it can’t, yet, conquer vertical walls. The researchers were most surprised by the robot’s ability to cling to surfaces, where it was perfectly happy to hang out on a slope of 135 degrees, which is a 45 degree overhang (!). I have no idea how it would ever reach that kind of position on its own, but it’s nice to know that if it ever does, its spines will keep doing their job.
Photo: CMU
T-RHex uses laser-cut acrylic legs, with the microspines embedded into 3D-printed toes. The tail is needed to prevent the robot from tipping backward.
For more details about the project, we spoke with Team ScienceParrot member (and CMU PhD student) Catherine Pavlov via email.
IEEE Spectrum: We’re used to seeing RHex with compliant, springy legs—how do the new legs affect T-RHex’s mobility?
Catherine Pavlov: There’s some compliance in the legs, though not as much as RHex—this is driven by the use of acrylic, which was chosen for budget/manufacturing reasons. Matching the compliance of RHex with acrylic would have made the tines too weak (since often only a few hold the load of the robot during climbing). It definitely means you can’t use energy storage in the legs the way RHex does, for example when pronking. T-RHex is probably more limited by motor speed in terms of mobility though. We were using some borrowed Dynamixels that didn’t allow for good positioning at high speeds.
How did you design the climbing gait? Why not use the middle legs, and why is the tail necessary?
The gait was a lot of hand-tuning and trial-and-error. We wanted a left/right symmetric gait to enable load sharing among more spines and prevent out-of-plane twisting of the legs. When using all three pairs, you have to have very accurate angular positioning or one leg pair gets pushed off the wall. Since two legs should be able to support the full weight of the robot, using the middle legs was hurting more than it was helping, with the middle legs sometimes pushing the rear ones off of the wall.
The tail is needed to prevent the robot from tipping backward and “sitting” on the wall. During static testing we saw the robot tip backward, disengaging the front legs, at around 35 degrees incline. The tail allows us to load the front legs, even when they’re at a shallow angle to the surface. The climbing gait we designed uses the tail to allow the rear legs to fully recirculate without the robot tipping backward.
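For a rough sense of why the tail matters, here is a back-of-the-envelope tip-back check. This is our own illustration, not the team's analysis, and the geometry numbers are invented to roughly match the ~35 degree figure above:

```python
import math

def tip_back_angle_deg(d_com: float, h_com: float, tail: float = 0.0) -> float:
    """Incline angle at which the front legs unload, from a static moment balance.

    About the rearmost wall contact:
      restoring moment = m*g*cos(theta) * (d_com + tail)
      tip-back moment  = m*g*sin(theta) * h_com
    The robot pitches back once tan(theta) > (d_com + tail) / h_com.
    """
    return math.degrees(math.atan2(d_com + tail, h_com))

# Invented geometry (meters): CoM 7 cm up-slope of the rear contact, 10 cm off
# the surface; the tail tip reaches 12 cm below the rear contact.
print(round(tip_back_angle_deg(0.07, 0.10), 1))        # ~35.0 deg without the tail
print(round(tip_back_angle_deg(0.07, 0.10, 0.12), 1))  # ~62.2 deg with the tail
```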
Photo: CMU
Team ScienceParrot with T-RHex.
What prevents T-RHex from climbing even steeper surfaces?
There are a few limiting factors. One is that the tines of the legs break pretty easily. I think we also need a lighter platform to get fully vertical—we're going to look at MiniRHex for future work. We're also not convinced our gait is the best it can be; we can probably get marginal improvements with more tuning, which might be enough.
Can the microspines assist with more dynamic maneuvers?
Dynamic climbing maneuvers? I think that would only be possible on surfaces with very good surface adhesion and very good surface strength, but it’s certainly theoretically possible. The current instance of T-RHex would definitely break if you tried to wall jump though.
What are you working on next?
Our main target is exploring the space of materials for leg fabrication, such as fiberglass, PLA, urethanes, and maybe metallic glass. We think there’s a lot of room for improvement in the leg material and geometry. We’d also like to see MiniRHex equipped with microspines, which will require legs about half the scale of what we built for T-RHex. Longer-term improvements would be the addition of sensors e.g. for wall detection, and a reliable floor-to-wall transition and dynamic gait transitions.
[ T-RHex ]
#435742 This ‘Useless’ Social Robot ...
The recent high-profile failures of some home social robots (and the companies behind them) have made it even more challenging than it was before to develop robots in that space. And it was challenging enough to begin with—making a robot that can autonomously interact with random humans in their homes over a long period of time for a price that people can afford is extraordinarily difficult. However, the massive amount of initial interest in robots like Jibo, Kuri, Vector, and Buddy proves that people do want these things, or at least think they do, and while that's the case, there's incentive for other companies to give social home robots a try.
One of those companies is Zoetic, founded in 2017 by Mita Yun and Jitu Das, both ex-Googlers. Their robot, Kiki, is more or less exactly what you'd expect from a social home robot: It's cute, white, roundish, has big eyes, promises that it will be your "robot sidekick," and is not cheap: It's on Kickstarter for $800. Kiki is among what appears to be a sort of tentative second wave of social home robots, where designers have (presumably) had a chance to take everything that they learned from the social home robot pioneers and use it to make things better this time around.
Kiki’s Kickstarter video is, again, more or less exactly what you’d expect from a social home robot crowdfunding campaign:
We won’t get into all of the details on Kiki in this article (the Kickstarter page has tons of information), but a few distinguishing features:
Each Kiki will develop its own personality over time through its daily interactions with its owner, other people, and other Kikis.
Interacting with Kiki is more abstract than with most robots—it can understand some specific words and phrases, and will occasionally use a specific word or two, but otherwise it's mostly listening to your tone of voice and responding with sounds rather than speech.
Kiki doesn’t move on its own, but it can operate for up to two hours away from its charging dock.
Depending on how you treat Kiki, it can get depressed or neurotic. It also needs to be fed, which you can do by drawing different kinds of food in the app.
Everything Kiki does runs on-board the robot. It has Wi-Fi connectivity for updates, but doesn’t rely on the cloud for anything in real-time, meaning that your data stays on the robot and that the robot will continue to function even if its remote service shuts down.
It’s hard to say whether features like these are unique enough to help Kiki be successful where other social home robots haven’t been, so we spoke with Zoetic co-founder Mita Yun and asked her why she believes that Kiki is going to be the social home robot that makes it.
IEEE Spectrum: What’s your background?
Mita Yun: I was an only child growing up, and so I always wanted something like Doraemon or Totoro. Something that when you come home it’s there to greet you, not just because it’s programmed to do that but because it’s actually actively happy to see you, and only you. I was so interested in this that I went to study robotics at CMU and then after I graduated I joined Google and worked there for five years. I tended to go for the more risky and more fun projects, but they always got cancelled—the first project I joined was called Android at Home, and then I joined Google Glass, and then I joined a team called Robots for Kids. That project was building educational robots, and then I just realized that when we’re adding technology to something, to a product, we’re actually taking the life away somehow, and the kids were more connected with stuffed animals compared to the educational robots we were building. That project was also cancelled, and in 2017, I left with a coworker of mine (Jitu Das) to bring this dream into reality. And now we’re building Kiki.
“Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless”
—Mita Yun, Zoetic
You started working on Kiki in 2017, when things were already getting challenging for Jibo—why did you decide to start developing a social home robot at that point?
I thought Jibo was great. It had a special magical way of moving, and it was such a new idea that you could have this robot with embodiment and it can actually be your assistant. The problem with Jibo, in my opinion, was that it took too long to fulfill the orders. It took them three to four years to actually manufacture, because it was a very complex piece of hardware, and then during that period of time Alexa and Google Home came out, and they started selling these voice systems for $30 and then you have Jibo for $800. Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless.
Can you elaborate on “completely useless?”
I feel like people are initially connected with robots because they remind them of a character. And it’s the closest we can get to a character other than an organic character like an animal. So we’re connected to a character like when we have a robot in a mall that’s roaming around, even if it looks really ugly, like if it doesn’t have eyes, people still take selfies with it. Why? Because they think it’s a character. And humans are just hardwired to love characters and love stories. With Kiki, we just wanted to build a character that’s alive, we don’t want to have a character do anything super useful.
I understand why other robotics companies are adding Alexa integration to their robots, and I think that’s great. But the dream I had, and the understanding I have about robotics technology, is that for a consumer robot especially, it is very very difficult for the robot to justify its price through usefulness. And then there’s also research showing that the more useless something is, the easier it is to have an emotional connection, so that’s why we want to keep Kiki very useless.
What kind of character are you creating with Kiki?
The whole design principle around Kiki is we want to make it a very vulnerable character. In terms of its status at home, it's not going to have higher or equal status to the owner, but slightly lower status than the human, and it's vulnerable and needs you to take care of it in order to grow up into a good personality robot.
We don’t let Kiki speak full English sentences, because whenever it does that, people are going to think it’s at least as intelligent as a baby, which is impossible for robots at this point. And we also don’t let it move around, because when you have it move around, people are going to think “I’m going to call Kiki’s name, and then Kiki is will come to me.” But that is actually very difficult to build. And then also we don’t have any voice integration so it doesn’t tell you about the stock market price and so on.
Photo: Zoetic
Kiki is designed to be “vulnerable,” and it needs you to take care of it so it can “grow up into a good personality robot,” according to its creators.
That sounds similar to what Mayfield did with Kuri, emphasizing an emotional connection rather than specific functionality.
It is very similar, but one of the key differences from Kuri, I think, is that Kuri started with a Kobuki base, and then it’s wrapped into a cute shell, and they added sounds. So Kuri started with utility in mind—navigation is an important part of Kuri, so they started with that challenge. For Kiki, we started with the eyes. The entire thing started with the character itself.
How will you be able to convince your customers to spend $800 on a robot that you’ve described as “useless” in some ways?
Because it's useless, it's actually easier to convince people, because it provides you with an emotional connection. I think Kiki is not a utility-driven product, so the adoption cycle is different. For a functional product, it's very easy to pick up, because you can justify it by saying "I'm going to pay this much and then my life can become this much more efficient." But it's also very easy to be replaced and forgotten. For an emotion-driven product, it's slower to pick up, but once people actually pick it up, they're going to be hooked—they get connected with it, and they're willing to invest more into taking care of the robot so it will grow up to be smarter.
Maintaining value over time has been another challenge for social home robots. How will you make sure that people don’t get bored with Kiki after a few weeks?
Of course Kiki has limits in what it can do. We can combine the eyes, the facial expression, the motors, and lights and sounds, but is it going to be constantly entertaining? So we think of this as, imagine if a human is actually puppeteering Kiki—can Kiki stay interesting if a human is puppeteering it and interacting with the owner? So I think what makes a robot interesting is not just in the physical expressions, but the part in between that and the robot conveying its intentions and emotions.
For example, if you come into the room and then Kiki decides it will turn the other direction, ignore you, and then you feel like, huh, why did the robot do that to me? Did I do something wrong? And then maybe you will come up to it and you will try to figure out why it did that. So, even though Kiki can only express in four different dimensions, it can still make things very interesting, and then when its strategies change, it makes it feel like a new experience.
There’s also an explore and exploit process going on. Kiki wants to make you smile, and it will try different things. It could try to chase its tail, and if you smile, Kiki learns that this works and will exploit it. But maybe after doing it three times, you no longer find it funny, because you’re bored of it, and then Kiki will observe your reactions and be motivated to explore a new strategy.
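The explore/exploit behavior Yun describes maps naturally onto a simple multi-armed bandit whose reward estimates fade when the owner stops reacting. Here is a minimal sketch of that idea; it's our illustration rather than anything from Zoetic's software, and every name and number in it is made up:

```python
import random

antics = ["chase_tail", "purr", "wiggle_ears", "play_sound"]  # hypothetical behaviors
value = {a: 0.5 for a in antics}   # running estimate of "this makes the owner smile"
EPSILON, LEARN_RATE = 0.2, 0.3     # exploration rate and update step size

def choose_antic() -> str:
    """Mostly exploit the best-known antic, occasionally try something new."""
    if random.random() < EPSILON:
        return random.choice(antics)
    return max(antics, key=value.__getitem__)

def update(antic: str, owner_smiled: bool) -> None:
    """Nudge the antic's value toward the observed reaction, so stale favorites fade."""
    reward = 1.0 if owner_smiled else 0.0
    value[antic] += LEARN_RATE * (reward - value[antic])

# Toy loop: tail-chasing works at first, then the owner gets bored of it.
for smiled in [True, True, True, False, False, False]:
    antic = choose_antic()
    update(antic, smiled if antic == "chase_tail" else False)
    print(antic, round(value[antic], 2))
```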
Photo: Zoetic
Kiki’s creators are hoping that, with an emotionally engaging robot, it will be easier for people to get attached to it and willing to spend time taking care of it.
A particular risk with crowdfunding a robot like this is setting expectations unreasonably high. The emphasis on personality and emotional engagement with Kiki seems like it may be very difficult for the robot to live up to in practice.
I think we invested more than most robotics companies into really building out Kiki’s personality, because that is the single most important thing to us. For Jibo a lot of the focus was in the assistant, and for Kuri, it’s more in the movement. For Kiki, it’s very much in the personality.
I feel like when most people talk about personality, they’re mainly talking about expression. With Kiki, it’s not just in the expression itself, not just in the voice or the eyes or the output layer, it’s in the layer in between—when Kiki receives input, how will it make decisions about what to do? We actually don’t think the personality of Kiki is categorizable, which is why I feel like Kiki has a deeper implementation of how personalities should work. And you’re right, Kiki doesn’t really understand why you’re feeling a certain way, it just reads your facial expressions. It’s maybe not your best friend, but maybe closer to your little guinea pig robot.
Photo: Zoetic
The team behind Kiki paid particular attention to its eyes, and designed the robot to always face the person that it is interacting with.
Is that where you’d put Kiki on the scale of human to pet?
Kiki is definitely not human, we want to keep it very far away from human. And it’s also not a dog or cat. When we were designing Kiki, we took inspiration from mammals because humans are deeply connected to mammals since we’re mammals ourselves. And specifically we’re connected to predator animals. With prey animals, their eyes are usually on the sides of their heads, because they need to see different angles. A predator animal needs to hunt, they need to focus. Cats and dogs are predator animals. So with Kiki, that’s why we made sure the eyes are on one side of the face and the head can actuate independently from the body and the body can turn so it’s always facing the person that it’s paying attention to.
I feel like Kiki probably does more than a plant. It does more than a fish, because a fish doesn't look you in the eyes. It's not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.
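The "always facing the person it's paying attention to" behavior described above is essentially bearing tracking split across two joints: a quick, limited-range head yaw plus a slower body turn that takes up the rest. A minimal sketch of one way to do it (ours; the joint limit and function names are assumptions, not Kiki's actual software):

```python
HEAD_YAW_LIMIT = 40.0  # degrees the head can turn relative to the body (assumed)

def face_person(body_yaw: float, person_bearing: float) -> tuple[float, float]:
    """Return (new_body_yaw, head_yaw) so the eyes end up pointing at the person."""
    # Shortest signed angle from where the body points to where the person is.
    error = (person_bearing - body_yaw + 180.0) % 360.0 - 180.0
    head_yaw = max(-HEAD_YAW_LIMIT, min(HEAD_YAW_LIMIT, error))  # head moves first
    new_body_yaw = body_yaw + (error - head_yaw)                 # body turns the rest
    return new_body_yaw, head_yaw

print(face_person(0.0, 25.0))    # (0.0, 25.0): a small offset is handled by the head alone
print(face_person(0.0, 120.0))   # (80.0, 40.0): the head saturates, so the body rotates
```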
What have you found so far when running user studies with Kiki?
When we were first designing Kiki we went through a whole series of prototypes. One of the earlier prototypes of Kiki looked like a CRT, like a very old monitor, and when we were testing that with people they didn’t even want to touch it. Kiki’s design inspiration actually came from an airplane, with a very angular, futuristic look, but based on user feedback we made it more round and more friendly to the touch. The lights were another feature request from the users, which adds another layer of expressivity to Kiki, and they wanted to see multiple Kikis working together with different personalities. Users also wanted different looks for Kiki, to make it look like a deer or a unicorn, for example, and we actually did take that into consideration because it doesn’t look like any particular mammal. In the future, you’ll be able to have different ears to make it look like completely different animals.
There has been a lot of user feedback that we didn't implement—I believe we should observe the users' reactions and feedback but not listen to their advice. The users shouldn't be our product designers, because if you test Kiki with 10 users, eight of them will tell you they want Alexa in it. But we're never going to add Alexa integration to Kiki because that's not what it's meant to do.
While it’s far too early to tell whether Kiki will be a long-term success, the Kickstarter campaign is currently over 95 percent funded with 8 days to go, and 34 robots are still available for a May 2020 delivery.
[ Kickstarter ]
#435733 Robot Squid and Robot Scallop Showcase ...
Most underwater robots use one of two ways of getting around. Way one is with propellers, and way two is with fins. But animals have shown us that there are many more kinds of underwater locomotion, potentially offering unique benefits to robots. We’ll take a look at two papers from ICRA this year that showed bioinspired underwater robots moving in creative new ways: A jet-powered squid robot that can leap out of the water, plus a robotic scallop that moves just like the real thing.
Image: Beihang University
Prototype of the squid robot in (a) open and (b) folded states. The soft fins and arms are controlled by pneumatic actuators.
This "squid-like aquatic-aerial vehicle" from Beihang University in China is modeled after flying squids. Real squids, in addition to being tasty, propel themselves using water jets, and these jets are powerful enough that some squids can not only jump out of the water, but actually achieve controlled flight for a brief period by continuing to jet while in the air. The flight phase is extended by using the fins and arms as wings to generate a little bit of lift. Real squids use this multimodal propulsion to escape predators, and it's also much faster—a squid can double its normal swimming speed while in the air, flying at up to 50 body lengths per second.
The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.
The jumping that you see at the end of the video is preliminary work; we’re told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There’s also plenty of work to do with using the fins for dynamic control, which the researchers say will “reveal the superiority of the natural flying squid movement.”
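For a sense of scale, here is a quick drag-free ballistic estimate (ours, ignoring the lift and in-air jetting the real vehicle uses) of the launch speed needed for a 10-to-20 meter hop:

```python
import math

G = 9.81  # m/s^2

def launch_speed(range_m: float, angle_deg: float = 45.0) -> float:
    """Launch speed for a drag-free ballistic hop of the given range: R = v^2*sin(2a)/g."""
    return math.sqrt(range_m * G / math.sin(math.radians(2 * angle_deg)))

for r in (10, 20):
    print(f"{r} m hop: ~{launch_speed(r):.1f} m/s at launch")
# roughly 9.9 m/s for 10 m and 14.0 m/s for 20 m, before any losses to drag
```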
“Design and Experiments of a Squid-like Aquatic-aerial Vehicle With Soft Morphing Fins and Arms,” by Taogang Hou, Xingbang Yang, Haohong Su, Buhui Jiang, Lingkun Chen, Tianmiao Wang, and Jianhong Liang from Beihang University in China, was presented at ICRA 2019 in Montreal.
Image: EPFL
The EPFL researchers studied the morphology and function of a real scallop (a) to design their robot scallop (b), which consists of two shells connected at a hinge and enclosed by a flexible elastic membrane. The robot and animal both swim by rapidly, cyclically opening and closing their shells to generate water jets for propulsion. When the robot shells open, water is drawn into the body through rear openings near the hinge. When the shells close rapidly, the water is forced out, propelling the robot forward (c).
RoboScallop, a “bivalve inspired swimming robot,” comes from EPFL’s Reconfigurable Robotics Laboratory, headed by Jamie Paik. Real scallops, in addition to being tasty, propel themselves by opening and closing their shells to generate jets of water out of their backsides. By repetitively opening their shells slowly and then closing quickly, scallops can generate forward thrust in a way that’s completely internal to their bodies. Relative to things like fins or spinning propellers, a scallop is simple and robust, especially as you scale down or start looking at large swarms of robots. The EPFL researchers describe their robotic scallop as representing “a unique combination of robust to hazards or sustained use, safe in delicate environments, and simple by design.”
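The reason the open-slow, close-fast cycle produces net thrust is that the momentum carried by the water scales with jet velocity, so the fast exhaust stroke out-pulls the slow intake stroke. Here is a toy per-clap estimate, our sketch with invented numbers rather than the paper's model:

```python
RHO = 1000.0                    # kg/m^3, water (roughly)
VOLUME = 20e-6                  # m^3 of water exchanged per clap (invented)
AREA = 2.0e-4                   # m^2 effective jet orifice area (invented)
T_OPEN, T_CLOSE = 0.30, 0.10    # s: open slowly, close quickly (~2.5 Hz clap)

v_in = VOLUME / (AREA * T_OPEN)    # slow intake velocity during opening
v_out = VOLUME / (AREA * T_CLOSE)  # fast exhaust velocity during closing

impulse = RHO * VOLUME * (v_out - v_in)     # net momentum imparted per clap, N*s
mean_thrust = impulse / (T_OPEN + T_CLOSE)  # averaged over the whole cycle, N
print(f"~{impulse*1e3:.0f} mN*s per clap, ~{mean_thrust*1e3:.0f} mN mean thrust")
```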
And here’s how the real thing looks:
As you can see from the video, RoboScallop is safe to handle even while it's operating, although a gentle nibbling is possible if you get too handsy with it. Since the robot sucks water in and then jets it out immediately, the design is resistant to fouling, which can be a significant problem in marine environments. The RoboScallop prototype weighs 65 grams, and tops out at a brisk 16 centimeters per second, while clapping (that's the actual technical term) at just over 2.5 Hz. While RoboScallop doesn't yet steer, real scallops can change direction by jetting out more water on one side than the other, and RoboScallop should be able to do this as well. The researchers also suggest that RoboScallop itself could even double as a gripper, which, as far as I know, is not something that real scallops can do.
“RoboScallop: A Bivalve-Inspired Swimming Robot,” by Matthew A. Robertson, Filip Efremov, and Jamie Paik, was presented at ICRA 2019 in Montreal.
#435648 Surprisingly Speedy Soft Robot Survives ...
Soft robots are getting more and more popular for some very good reasons. Their relative simplicity is one. Their relative low cost is another. And for their simplicity and low cost, they’re generally able to perform very impressively, leveraging the unique features inherent to their design and construction to move themselves and interact with their environment. The other significant reason why soft robots are so appealing is that they’re durable. Without the constraints of rigid parts, they can withstand the sort of abuse that would make any roboticist cringe.
In the current issue of Science Robotics, a group of researchers from Tsinghua University in China and University of California, Berkeley, present a new kind of soft robot that’s both higher performance and much more robust than just about anything we’ve seen before. The deceptively simple robot looks like a bent strip of paper, but it’s able to move at 20 body lengths per second and survive being stomped on by a human wearing tennis shoes. Take that, cockroaches.
This prototype robot measures just 3 centimeters by 1.5 centimeters. It takes a scanning electron microscope to actually see what the robot is made of—a thermoplastic layer is sandwiched by palladium-gold electrodes, bonded with adhesive silicone to a structural plastic at the bottom. When an AC voltage (as low as 8 volts but typically about 60 volts) is run through the electrodes, the thermoplastic extends and contracts, causing the robot's back to flex and the little "foot" to shuffle. A complete step cycle takes just 50 milliseconds, yielding a 20 hertz gait. And technically, the robot "runs," since it does have a brief aerial phase.
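Unpacking the numbers quoted so far (our arithmetic; note that the 20 body lengths per second top speed and the 50 millisecond step cycle are reported separately, so they need not describe the same operating point):

```python
body_length_cm = 3.0        # reported body length
top_speed_bl_s = 20         # reported top speed, in body lengths per second
step_cycle_s = 0.050        # reported complete step cycle

print(body_length_cm * top_speed_bl_s, "cm/s top speed")   # 60.0 cm/s
print(1.0 / step_cycle_s, "Hz gait from a 50 ms cycle")    # 20.0 Hz
```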
Image: Science Robotics
Photos from a high-speed camera show the robot’s gait (A to D) as it contracts and expands its body.
To put the robot's top speed of 20 body lengths per second in perspective, have a look at this nifty chart, which shows the relative running speeds of some animals and robots versus body mass:
Image: Science Robotics
This chart shows the relative running speeds of some mammals (purple area), arthropods (orange area), and soft robots (blue area) versus body mass. For both mammals and arthropods, relative speeds show a strong negative scaling law with respect to body mass: speeds increase as body masses decrease. However, for soft robots, the relationship appears to be the opposite: speeds decrease as body mass decreases. For the little soft robots created by the researchers from Tsinghua University and UC Berkeley (red stars), the scaling law is similar to that of living animals: Higher speed was attained as the body mass decreased.
If you were wondering, like we were, just what that number 39 is on that chart (top left corner), it’s a species of tiny mite that was discovered underneath a rock in California in 1916. The mite is just under 1 mm in size, but it can run at 0.8 kilometer per hour, which is 322 body lengths per second, making it by far (like, by a factor of two at least) the fastest land animal on Earth relative to size. If a human was to run that fast relative to our size, we’d be traveling at a little bit over 2,000 kilometers per hour. It’s not a coincidence that pretty much everything in the upper left of the chart is an insect—speed scales favorably with decreasing mass, since actuators have a proportionally larger effect.
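The arithmetic behind those mite numbers is easy to reproduce (the exact body length isn't given, so roughly 0.7 mm is our assumption):

```python
mite_speed_m_s = 0.8 / 3.6            # 0.8 km/h converted to m/s
mite_length_m = 0.0007                # "just under 1 mm"; ~0.7 mm assumed
bl_per_s = mite_speed_m_s / mite_length_m
print(round(bl_per_s), "body lengths per second")                        # ~317

human_height_m = 1.8
print(round(bl_per_s * human_height_m * 3.6), "km/h human-equivalent")   # ~2057
```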
Other notable robots on the chart with impressive speed to mass ratios are number 27, which is this magnetically driven quadruped robot from UMD, and number 86, UC Berkeley’s X2-VelociRoACH.
Anyway, back to this robot. Some other cool things about it:
You can step on it, squishing it flat with a load about 1 million times its own body weight, and it’ll keep on crawling, albeit only half as fast.
Even climbing a slope of 15 degrees, it can still manage to move at 1 body length per second.
It carries peanuts! With a payload of six times its own weight, it moves a sixth as fast, but still, it’s not like you need your peanuts delivered all that quickly anyway, do you?
Image: Science Robotics
The researchers also put together a prototype with two legs instead of one, which was able to demonstrate a potentially faster galloping gait by spending more time in the air. They suggest that robots like these could be used for “environmental exploration, structural inspection, information reconnaissance, and disaster relief,” which are the sorts of things that you suggest that your robot could be used for when you really have no idea what it could be used for. But this work is certainly impressive, with speed and robustness that are largely unmatched by other soft robots. An untethered version seems possible due to the relatively low voltages required to drive the robot, and if they can put some peanut-sized sensors on there as well, practical applications might actually be forthcoming sometime soon.
“Insect-scale Fast Moving and Ultrarobust Soft Robot,” by Yichuan Wu, Justin K. Yim, Jiaming Liang, Zhichun Shao, Mingjing Qi, Junwen Zhong, Zihao Luo, Xiaojun Yan, Min Zhang, Xiaohao Wang, Ronald S. Fearing, Robert J. Full, and Liwei Lin from Tsinghua University and UC Berkeley, is published in Science Robotics.
#435591 Video Friday: This Robotic Thread Could ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
Eight engineering students from ETH Zurich are working on a year-long focus project to develop a multimodal robot called Dipper, which can fly, swim, dive underwater, and manage that difficult air-water transition:
The robot uses one motor to selectively drive either a propeller or a marine screw depending on whether it’s in flight or not. We’re told that getting the robot to autonomously do the water to air transition is still a work in progress, but that within a few weeks things should be much smoother.
[ Dipper ]
Thanks Simon!
Giving a jellyfish a hug without stressing them out is exactly as hard as you think, but Harvard’s robot will make sure that all jellyfish get the emotional (and physical) support that they need.
The gripper's six "fingers" are composed of thin, flat strips of silicone with a hollow channel inside bonded to a layer of flexible but stiffer polymer nanofibers. The fingers are attached to a rectangular, 3D-printed plastic "palm" and, when their channels are filled with water, curl in the direction of the nanofiber-coated side. Each finger exerts an extremely low amount of pressure — about 0.0455 kPa, or less than one-tenth of the pressure of a human's eyelid on their eye. By contrast, current state-of-the-art soft marine grippers, which are used to capture delicate but more robust animals than jellyfish, exert about 1 kPa.
The gripper was successfully able to trap each jellyfish against the palm of the device, and the jellyfish were unable to break free from the fingers’ grasp until the gripper was depressurized. The jellyfish showed no signs of stress or other adverse effects after being released, and the fingers were able to open and close roughly 100 times before showing signs of wear and tear.
[ Harvard ]
MIT engineers have developed a magnetically steerable, thread-like robot that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain. In the future, this robotic thread may be paired with existing endovascular technologies, enabling doctors to remotely guide the robot through a patient’s brain vessels to quickly treat blockages and lesions, such as those that occur in aneurysms and stroke.
[ MIT ]
See NASA’s next Mars rover quite literally coming together inside a clean room at the Jet Propulsion Laboratory. This behind-the-scenes look at what goes into building and preparing a rover for Mars, including extensive tests in simulated space environments, was captured from March to July 2019. The rover is expected to launch to the Red Planet in summer 2020 and touch down in February 2021.
The Mars 2020 rover doesn’t have a name yet, but you can give it one! As long as you’re not too old! Which you probably are!
[ Mars 2020 ]
I desperately wish that we could watch this next video at normal speed, not just slowed down, but it’s quite impressive anyway.
Here’s one more video from the Namiki Lab showing some high speed tracking with a pair of very enthusiastic robotic cameras:
[ Namiki Lab ]
Normally, tedious modeling of mechanics, electronics, and information science is required to understand how insects’ or robots’ moving parts coordinate smoothly to take them places. But in a new study, biomechanics researchers at the Georgia Institute of Technology boiled down the sprints of cockroaches to handy principles and equations they then used to make a test robot amble about better.
[ Georgia Tech ]
More magical obstacle-dodging footage from Skydio’s still secret new drone.
We’ve been hard at work extending the capabilities of our upcoming drone, giving you ways to get the control you want without the stress of crashing. The result is you can fly in ways, and get shots, that would simply be impossible any other way. How about flying through obstacles at full speed, backwards?
[ Skydio ]
This is a cute demo with Misty:
[ Misty Robotics ]
We’ve seen pieces of hardware like this before, but always made out of hard materials—a soft version is certainly something new.
Utilizing vacuum power and soft material actuators, we have developed a soft reconfigurable surface (SRS) with multi-modal control and performance capabilities. The SRS is comprised of a square grid array of linear vacuum-powered soft pneumatic actuators (linear V-SPAs), built into plug-and-play modules which enable the arrangement, consolidation, and control of many DoF.
[ RRL ]
The EksoVest is not really a robot, but it’ll make you a cyborg! With super strength!
“This is NOT intended to give you super strength but instead give you super endurance and reduce fatigue so that you have more energy and less soreness at the end of your shift.”
Drat!
[ EksoVest ]
We have created a solution for parents, grandparents, and their children who are living separated. This is an amazing tool to stay connected from a distance through the intimacy that comes through interactive play with a child. For parents who travel for work, deployed military, and families spread across the country, the Cushybot One is much more than a toy; it is the opportunity for maintaining a deep connection with your young child from a distance.
Hmm.
I think the concept here is great, but it’s going to be a serious challenge to successfully commercialize.
[ Indiegogo ]
What happens when you equip RVR with a parachute and send it off a cliff? Watch this episode of RVR Launchpad to find out – then go Behind the Build to see how we (eventually) accomplished this high-flying feat.
[ Sphero ]
These omnidirectional crawler robots aren’t new, but that doesn’t keep them from being fun to watch.
[ NEDO ] via [ Impress ]
We’ll finish up the week with a couple of past ICRA and IROS keynote talks—one by Gill Pratt on The Reliability Challenges of Autonomous Driving, and the other from Peter Hart, on Making Shakey.
[ IEEE RAS ]