#435640 Video Friday: This Wearable Robotic Tail ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:
Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.
The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.
[ Lakshmi Nair ]
Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.
This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.
[ IIT ]
Thanks Victor!
You used to have a tail, you know—as an embryo, about a month into your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!
The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.
[ Paper ] via [ Gizmodo ]
The noises in this video are fantastic.
[ ESA ]
Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.
[ MIT CSAIL ]
Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…
[ Skydio ]
The only thing more fun than watching robots is watching people react to robots.
[ SEER ]
There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.
[ Stanford ]
#autonomousicecreamtricycle
In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:
Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?) I need one.
[ Roboy ]
By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st century robotic device that can provide light-touch assistance in walking to the aged and others with impaired mobility.
The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.
[ ROAR Lab ]
During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.
[ DARPA ]
I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.
[ Ghost Robotics ]
If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.
[ AP ]
As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation.
The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.
[ Paper ]
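The iterative residual tuning loop TuneNet describes can be sketched on a toy 1-D problem. To be clear, everything below is illustrative: the real method uses a learned network to predict the parameter residual, which this sketch replaces with an analytic Newton step on a made-up bouncing-ball restitution model.

```python
# Toy sketch of TuneNet-style iterative residual tuning.
# The learned residual network is stood in for by an analytic
# estimator (a Newton step); names and the physics model are
# illustrative, not from the paper.

DROP_HEIGHT = 1.0  # meters

def simulate(restitution):
    # Height of the first bounce: h = e^2 * drop height.
    return restitution ** 2 * DROP_HEIGHT

def predict_residual(obs_target, obs_sim, theta):
    # Stand-in for the trained network: estimate the parameter
    # difference from the gap between the two observations.
    return (obs_target - obs_sim) / (2.0 * theta * DROP_HEIGHT)

def tune(obs_target, theta=0.3, iters=10):
    # Iteratively nudge the simulator's parameter toward the target,
    # using a single observation from the target system per update.
    for _ in range(iters):
        theta += predict_residual(obs_target, simulate(theta), theta)
    return theta

true_restitution = 0.82
estimated = tune(simulate(true_restitution))
```

Each iteration compares one observation from the "real" system against the current simulation and applies the predicted residual, which mirrors the sample-efficient loop the abstract describes.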
Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.
[ GITAI ]
Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.
[ MSL ]
Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:
And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:
All of the ROSCon FR talks are available on Vimeo.
[ ROSCon FR ]
#435634 Robot Made of Clay Can Sculpt Its Own ...
We’re very familiar with a wide variety of transforming robots—whether for submarines or drones, transformation is a way of making a single robot adaptable to different environments or tasks. Usually, these robots are restricted to a discrete number of configurations—perhaps two or three different forms—because of the constraints imposed by the rigid structures that robots are typically made of.
Soft robotics has the potential to change all this, with robots that don’t have fixed forms but instead can transform themselves into whatever shape will enable them to do what they need to do. At ICRA in Montreal earlier this year, researchers from Yale University demonstrated a creative approach toward a transforming robot powered by string and air, with a body made primarily out of clay.
Photo: Evan Ackerman
The robot is actuated by two different kinds of “skin,” one layered on top of another. There’s a locomotion skin, made of a pattern of pneumatic bladders that can roll the robot forward or backward when the bladders are inflated sequentially. On top of that is the morphing skin, which is cable-driven, and can sculpt the underlying material into a variety of shapes, including spheres, cylinders, and dumbbells. The robot itself consists of both of those skins wrapped around a chunk of clay, with the actuators driven by offboard power and control. Here it is in action:
The Yale researchers have been experimenting with morphing robots that use foams and tensegrity structures for their bodies, but that stuff provides a “restoring force,” springing back into its original shape once the actuation stops. Clay is different because it holds whatever shape it’s formed into, making the robot more energy efficient. And if the dumbbell shape stops being useful, the morphing layer can just squeeze it back into a cylinder or a sphere.
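The sequential bladder inflation behind the locomotion skin can be sketched as a simple firing-order generator. The names and structure here are my illustration of the idea, not the authors' controller: firing bladders one after another around the circumference rolls the robot, and reversing the order reverses the roll.

```python
# Illustrative sketch of a sequential-inflation firing order for the
# locomotion skin; not the actual control code from the Yale paper.

def inflation_sequence(n_bladders, direction="forward", cycles=1):
    # Forward rolling fires bladders 0, 1, 2, ... in order around the
    # circumference; backward rolling fires them in reverse order.
    if direction == "forward":
        order = list(range(n_bladders))
    else:
        order = list(range(n_bladders - 1, -1, -1))
    return order * cycles

# Two full forward rolling cycles of a hypothetical four-bladder skin:
sequence = inflation_sequence(4, "forward", cycles=2)
```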
While this robot, and the sample transformation shown in the video, are relatively simplistic, the researchers suggest some ways in which a more complex version could be used in the future:
Photo: IEEE Xplore
This robot’s morphing skin sculpts its clay body into different shapes.
Applications where morphing and locomotion might serve as complementary functions are abundant. For the example skins presented in this work, a search-and-rescue operation could use the clay as a medium to hold a payload such as sensors or transmitters. More broadly, applications include resource-limited conditions where supply chains for materiel are sparse. For example, the morphing sequence shown in Fig. 4 [above] could be used to transform from a rolling sphere to a pseudo-jointed robotic arm. With such a morphing system, it would be possible to robotically morph matter into different forms to perform different functions.
Read this article for free on IEEE Xplore until 5 September 2019
Morphing Robots Using Robotic Skins That Sculpt Clay, by Dylan S. Shah, Michelle C. Yuen, Liana G. Tilton, Ellen J. Yang, and Rebecca Kramer-Bottiglio from Yale University, was presented at ICRA 2019 in Montreal.
[ Yale Faboratory ]
#435619 Video Friday: Watch This Robot Dog ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, CA, USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
Team PLUTO (University of Pennsylvania, Ghost Robotics, and Exyn Technologies) put together this video giving us a robot’s-eye-view (or whatever they happen to be using for eyes) of the DARPA Subterranean Challenge tunnel circuits.
[ PLUTO ]
Zhifeng Huang has been improving his jet-stepping humanoid robot, which features new hardware and the ability to take larger and more complex steps.
This video reports the latest progress of an ongoing project that uses a ducted-fan propulsion system to improve a humanoid robot’s ability to step over large ditches. The landing point of the robot’s swing foot can be not only in front of the robot but also to the side. While maintaining quasi-static balance, the robot was able to step over a ditch 450 mm wide (up to 97 percent of the robot’s leg length) in 3D stepping.
[ Paper ]
Thanks Zhifeng!
These underactuated hands from Matei Ciocarlie’s lab at Columbia are magically able to reconfigure themselves to grasp different object types with just one or two motors.
[ Paper ] via [ ROAM Lab ]
This is one reason we should pursue not “autonomous cars” but “fully autonomous cars” that never require humans to take over. We can’t be trusted.
During our early days as the Google self-driving car project, we invited some employees to test our vehicles on their commutes and weekend trips. What we were testing at the time was similar to the highway driver assist features that are now available on cars today, where the car takes over the boring parts of the driving, but if something outside its ability occurs, the driver has to take over immediately.
What we saw was that our testers put too much trust in that technology. They were doing things like texting, applying makeup, and even falling asleep that made it clear they would not be ready to take over driving if the vehicle asked them to. This is why we believe that nothing short of full autonomy will do.
[ Waymo ]
Buddy is a DIY and fetchingly minimalist social robot (of sorts) that will be coming to Kickstarter this month.
We have created a new Arduino kit. His name is Buddy. He is a DIY social robot to serve as a replacement for Jibo, Cozmo, or any of the other bots that are no longer available. Fully 3D printed and supported, he adds much more to our series of Arduino STEM robotics kits.
Buddy is able to look around and map his surroundings and react to changes within them. He can be surprised and he will always have a unique reaction to changes. The kit can be built very easily in less than an hour. It is even robust enough to take the abuse that kids can give it in a classroom.
[ Littlebots ]
The android Mindar, based on the Buddhist deity of mercy, preaches sermons at Kodaiji temple in Kyoto, and its human colleagues predict that with artificial intelligence it could one day acquire unlimited wisdom. Developed at a cost of almost $1 million (¥106 million) in a joint project between the Zen temple and robotics professor Hiroshi Ishiguro, the robot teaches about compassion and the dangers of desire, anger and ego.
[ Japan Times ]
I’m not sure whether it’s the sound or what, but this thing scares me for some reason.
[ BIRL ]
This gripper uses magnets as a sort of adjustable spring for dynamic stiffness control, which seems pretty clever.
[ Buffalo ]
What a package of medicine sees while being flown by drone from a hospital to a remote clinic in the Dominican Republic. The drone flew 11 km horizontally and 800 meters vertically, and I can’t even imagine what it would take to make that drive.
[ WeRobotics ]
My first ride in a fully autonomous car was at Stanford in 2009. I vividly remember getting in the back seat of a descendant of Junior, and watching the steering wheel turn by itself as the car executed a perfect parking maneuver. Ten years later, it’s still fun to watch other people have that experience.
[ Waymo ]
Flirtey, the pioneer of the commercial drone delivery industry, has unveiled the much-anticipated first video of its next-generation delivery drone, the Flirtey Eagle. The aircraft designer and manufacturer also unveiled the Flirtey Portal, a sophisticated takeoff and landing platform that enables scalable store-to-door operations, and an autonomous software platform that enables drones to deliver safely to homes.
[ Flirtey ]
EPFL scientists are developing new approaches for improved control of robotic hands – in particular for amputees – that combine individual finger control and automation for improved grasping and manipulation. This interdisciplinary proof of concept between neuroengineering and robotics was successfully tested on three amputees and seven healthy subjects.
[ EPFL ]
This video is a few years old, but we’ll take any excuse to watch the majestic sage-grouse be majestic in all their majesticness.
[ UC Davis ]
I like the idea of a game of soccer (or, football to you weirdos in the rest of the world) where the ball has a mind of its own.
[ Sphero ]
Looks like the whole delivery glider idea is really taking off! Or, you know, not taking off.
Weird that they didn’t show the landing, because it sure looked like it was going to plow into the side of the hill at full speed.
[ Yates ] via [ sUAS News ]
This video is from a 2018 paper, but it’s not like we ever get tired of seeing quadrupeds do stuff, right?
[ MIT ]
Founder and Head of Product, Ian Bernstein, and Head of Engineering, Morgan Bell, have been involved in the Misty project for years and they have learned a thing or two about building robots. Hear how and why Misty evolved into a robot development platform, learn what some of the earliest prototypes did (and why they didn’t work for what we envision), and take a deep dive into the technology decisions that form the Misty II platform.
[ Misty Robotics ]
Lex Fridman interviews Vijay Kumar on the Artificial Intelligence Podcast.
[ AI Podcast ]
This week’s CMU RI Seminar is from Ross Knepper at Cornell, on Formalizing Teamwork in Human-Robot Interaction.
Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.
[ CMU RI ]
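The "small enumerable set of passing outcomes" in that abstract has a simple combinatorial core, which can be sketched as follows. This is only my illustration of the intuition, not Knepper's braid-group implementation: each unordered pair of agents must resolve a passing side, so n agents yield 2^(n choose 2) joint outcomes for the robot to reason over.

```python
# Illustrative enumeration of pairwise passing outcomes; the talk's
# actual representation uses braid groups, which this sketch only
# approximates combinatorially.
from itertools import combinations, product

def passing_outcomes(agents):
    # Each unordered pair of agents independently passes on the
    # "left" or the "right" of one another.
    pairs = list(combinations(agents, 2))
    for sides in product(("left", "right"), repeat=len(pairs)):
        yield dict(zip(pairs, sides))

# A robot navigating among two pedestrians: 3 pairs, 8 joint outcomes.
outcomes = list(passing_outcomes(["robot", "ped_A", "ped_B"]))
```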
In this week’s episode of Robots in Depth, Per speaks with Julien Bourgeois about Claytronics, a project from Carnegie Mellon and Intel to develop “programmable matter.”
Julien started out as a computer scientist. He was always interested in robotics privately but then had the opportunity to get into micro robots when his lab was merged into the FEMTO-ST Institute. He later worked with Seth Copen Goldstein at Carnegie Mellon on the Claytronics project.
Julien shows an enlarged mock-up of the small robots that make up programmable matter, catoms, and speaks about how they are designed. Currently he is working on a unit that is one centimeter in diameter and he shows us the very small CPU that goes into that model.
[ Robots in Depth ]