#437824 Video Friday: These Giant Robots Are ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.
“Who doesn’t love giant robots?”
Luma is a towering 8-metre snail which transforms spaces with its otherworldly presence. Another piece, Triffid, stands at 6 metres, and its flexible end sweeps high over audiences’ heads like an enchanted plant. The movement of the creatures is inspired by the flexible, wiggling and contorting motions of the animal kingdom and is designed to provoke instinctive reactions and emotions from the people who meet them. Air Giants is a new creative robotic studio founded in 2020. They are based in Bristol, UK, and comprise a small team of artists, roboticists and software engineers. The studio is passionate about creating emotionally effective motion at a scale which is thought-provoking and transporting, as well as expanding the notion of what large robots can be used for.
Here’s a behind the scenes and more on how the creatures work.
[ Air Giants ]
Thanks Emma!
If the idea of a very expensive sensor payload being submerged in a lake makes you as uncomfortable as it makes me, this is not the video for you.
[ ANYbotics ]
As the pandemic continues and the measures taken in response to this health crisis grow increasingly stringent, with working from home promoted and encouraged by many companies, Pepper will allow you to keep in touch with your relatives or even your colleagues.
[ Softbank ]
Fairly impressive footwork from Tencent Robotics.
Although LittleDog was doing that a decade ago:
[ Tencent ]
It's been long enough since I've been able to go out for boba tea that a robotic boba tea kiosk seems like a reasonable thing to get for my living room.
[ Bobacino ] via [ Gizmodo ]
Road construction and maintenance is challenging and dangerous work. Pioneer Industrial Systems has spent over twenty years designing custom robotic systems for industrial manufacturers around the world. These robotic systems greatly improve safety and increase efficiency. Now they’re taking that expertise on the road, with the Robotic Maintenance Vehicle. This base unit can be mounted on a truck or trailer, and utilizes various modules to perform a variety of road maintenance tasks.
[ Pioneer ]
Extend Robotics’ arm uses cloud-based teleoperation software, featuring human-like dexterity and intelligence, with applications in healthcare, utilities, and energy.
[ Extend Robotics ]
ARC, short for “AI, Robot, Cloud,” includes the latest algorithms and high precision data required for human-robot coexistence. Now with ultra-low latency networks, many robots can simultaneously become smarter, just by connecting to ARC. “ARC Eye” serves as the eyes for all robots, accurately determining the current location and route even indoors where there is no GPS access. “ARC Brain” is the computing system shared simultaneously by all robots, which plans and processes movement, localization, and task performance for the robot.
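Naver Labs hasn’t published a public client API, but the thin-robot/fat-cloud split described above is easy to picture in code. In this sketch, every endpoint, payload, and function name is hypothetical; it only illustrates the pattern of a robot outsourcing localization (ARC Eye) and planning (ARC Brain) to shared cloud services.

```python
# Hypothetical sketch of an ARC-style thin-robot/cloud split.
# Endpoint names, payloads, and responses are illustrative only;
# Naver Labs has not published a public client interface.
import requests

ARC_EYE_URL = "https://arc.example.com/eye/localize"  # hypothetical
ARC_BRAIN_URL = "https://arc.example.com/brain/plan"  # hypothetical

def localize(camera_jpeg: bytes) -> dict:
    """Send an image to the cloud visual-localization service (ARC Eye)
    and get back an estimated pose, even indoors without GPS."""
    resp = requests.post(ARC_EYE_URL, files={"image": camera_jpeg}, timeout=0.2)
    resp.raise_for_status()
    return resp.json()  # e.g. {"x": ..., "y": ..., "floor": ..., "heading": ...}

def plan_next_move(pose: dict, goal: dict) -> dict:
    """Ask the shared cloud planner (ARC Brain) for the next motion command,
    so the robot itself needs almost no onboard compute."""
    resp = requests.post(ARC_BRAIN_URL, json={"pose": pose, "goal": goal}, timeout=0.2)
    resp.raise_for_status()
    return resp.json()  # e.g. {"v": 0.4, "omega": 0.1}
```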
[ Naver Labs ]
How can we re-imagine urban infrastructures with cutting-edge technologies? Listen to this webinar from Ger Baron, Amsterdam’s CTO, and Senseable City Lab’s researchers, on how MIT and Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute) are reimagining Amsterdam’s canals with the first fleet of autonomous boats.
[ MIT ]
Join Guy Burroughes in this webinar recording to hear about Spot, the robot dog created by Boston Dynamics, and how RACE plan to use it in nuclear decommissioning and beyond.
[ UKAEA ]
This GRASP on Robotics seminar comes from Marco Pavone at Stanford University, “On Safe and Efficient Human-robot interactions via Multimodal Intent Modeling and Reachability-based Safety Assurance.”
In this talk I will present a decision-making and control stack for human-robot interactions by using autonomous driving as a motivating example. Specifically, I will first discuss a data-driven approach for learning multimodal interaction dynamics between robot-driven and human-driven vehicles based on recent advances in deep generative modeling. Then, I will discuss how to incorporate such a learned interaction model into a real-time, interaction-aware decision-making framework. The framework is designed to be minimally interventional; in particular, by leveraging backward reachability analysis, it ensures safety even when other cars defy the robot's expectations without unduly sacrificing performance. I will present recent results from experiments on a full-scale steer-by-wire platform, validating the framework and providing practical insights. I will conclude the talk by providing an overview of related efforts from my group on infusing safety assurances in robot autonomy stacks equipped with learning-based components, with an emphasis on adding structure within robot learning via control-theoretical and formal methods.
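Pavone’s framework pairs a learned multimodal interaction model with a reachability-based supervisor; the sketch below covers only the second, “minimally interventional” half, in a toy 1-D braking scenario of my own construction (not from the talk). The idea: pass the nominal command through untouched unless the next state would leave the set of states from which full braking can still avoid a collision.

```python
# Toy 1-D "minimally interventional" safety filter, inspired by (but far
# simpler than) the reachability-based supervisor described in the talk.
# A robot car approaches a stopped obstacle; the safe set is every state
# from which braking at A_MAX still stops in time.

DT = 0.1      # control period [s]
A_MAX = 6.0   # maximum braking deceleration [m/s^2]

def is_safe(gap: float, speed: float) -> bool:
    """Complement of the backward reachable collision set under full braking."""
    return gap > speed * speed / (2.0 * A_MAX)

def safety_filter(gap: float, speed: float, a_nominal: float) -> float:
    """Pass the nominal command through unless the next state would leave
    the safe set; only then intervene with full braking."""
    next_speed = max(0.0, speed + a_nominal * DT)
    next_gap = gap - speed * DT - 0.5 * a_nominal * DT * DT
    if is_safe(next_gap, next_speed):
        return a_nominal   # minimally interventional: leave the planner alone
    return -A_MAX          # override: hard brake

print(safety_filter(gap=20.0, speed=10.0, a_nominal=0.0))  # 0.0: no intervention
print(safety_filter(gap=9.0, speed=10.0, a_nominal=0.0))   # -6.0: hard brake
```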
[ UPenn ]
Autonomous Systems Failures: Who is Legally and Morally Responsible? Sponsored by Northwestern University’s Law and Technology Initiative and AI@NU, the event was moderated by Dan Linna and included Northwestern Engineering's Todd Murphey, University of Washington Law Professor Ryan Calo, and Google Senior Research Scientist Madeleine Clare Elish.
[ Northwestern ]
#437807 Why We Need Robot Sloths
An inherent characteristic of a robot (I would argue) is embodied motion. We tend to focus on motion rather a lot with robots, and the most dynamic robots get the most attention. This isn’t to say that highly dynamic robots don’t deserve our attention, but there are other robotic philosophies that, while perhaps less visually exciting, are equally valuable under the right circumstances. Magnus Egerstedt, a robotics professor at Georgia Tech, was inspired by some sloths he met in Costa Rica to explore the idea of “slowness as a design paradigm” through an arboreal robot called SlothBot.
Since the robot moves so slowly, why use a robot at all? It may be very energy-efficient, but it’s definitely not more energy efficient than a static sensing system that’s just bolted to a tree or whatever. The robot moves, of course, but it’s also going to be much more expensive (and likely much less reliable) than a handful of static sensors that could cover a similar area. The problem with static sensors, though, is that they’re constrained by power availability, and in environments like under a dense tree canopy, you’re not going to be able to augment their lifetime with solar panels. If your goal is a long-duration study of a small area (over weeks or months or more), SlothBot is uniquely useful in this context because it can crawl out from beneath a tree to find some sun to recharge itself, sunbathe for a while, and then crawl right back again to resume collecting data.
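To make that tradeoff concrete, here’s a back-of-the-envelope comparison with numbers I’ve made up for illustration (they are not from the SlothBot team): a static node under a canopy is hard-capped by its battery, while a robot whose occasional recharge trip costs less energy than the sun returns can, in principle, run indefinitely.

```python
# Illustrative energy budget; every number here is an assumption of mine,
# not a SlothBot specification.

BATTERY_WH = 50.0      # assumed battery capacity [Wh]
SENSE_POWER_W = 0.05   # assumed average sensing/logging draw [W]

# Static sensor under the canopy: lifetime = battery / draw, and that's it.
static_days = BATTERY_WH / SENSE_POWER_W / 24.0
print(f"static node dies after ~{static_days:.0f} days")  # ~42 days

# SlothBot: one crawl into the sun costs energy but harvests more back.
CRAWL_COST_WH = 2.0    # assumed energy per trip to a sunny spot [Wh]
SOLAR_GAIN_WH = 20.0   # assumed harvest per sunbathing session [Wh]
assert SOLAR_GAIN_WH > CRAWL_COST_WH  # net-positive trips -> no battery cap
```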
SlothBot is such an interesting concept that we had to check in with Egerstedt with a few more questions.
IEEE Spectrum: Tell us what you find so amazing about sloths!
Magnus Egerstedt: Apart from being kind of cute, the amazing thing about sloths is that they have carved out a successful ecological niche for themselves where being slow is not only acceptable but actually beneficial. Despite their pretty extreme low-energy lifestyle, they exhibit a number of interesting and sometimes outright strange behaviors. And, behaviors having to do with territoriality, foraging, or mating look rather different when you are that slow.
Are you leveraging the slothiness of the design for this robot somehow?
Sadly, the sloth design serves no technical purpose. But we are also viewing the SlothBot as an outreach platform to get kids excited about robotics and/or conservation biology. And having the robot look like a sloth certainly cannot hurt.
“Slowness is ideal for use cases that require a long-term, persistent presence in an environment, like for monitoring tasks. I can imagine slow robots being out on farm fields for entire growing cycles, or suspended on the ocean floor keeping track of pollutants or temperature variations.”
—Magnus Egerstedt, Georgia Tech
Can you talk more about slowness as a design paradigm?
The SlothBot is part of a broader design philosophy that I have started calling “Robot Ecology.” In ecology, the connections between individuals and their environments/habitats play a central role. And the same should hold true in robotics. The robot design must be understood in the environmental context in which it is to be deployed. And, if your task is to be present in a slowly varying environment over a long time scale, being slow seems like the right way to go. Slowness is ideal for use cases that require a long-term, persistent presence in an environment, like for monitoring tasks, where the environment itself is slowly varying. I can imagine slow robots being out on farm fields for entire growing cycles, or suspended on the ocean floor keeping track of pollutants or temperature variations.
How do sloths inspire SlothBot’s functionality?
Its motions are governed by what we call survival constraints. These constraints ensure that the SlothBot is always able to get to a sunny spot to recharge. The actual performance objective that we have given to the robot is to minimize energy consumption, i.e., to simply do nothing subject to the survival constraints. The majority of the time, the robot simply sits there under the trees, measuring various things, seemingly doing absolutely nothing and being rather sloth-like. Whenever the SlothBot does move, it does not move according to some fixed schedule. Instead, it moves because it has to in order to “survive.”
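Egerstedt’s group has formalized this kind of behavior elsewhere using control barrier functions; the snippet below is only a cartoon of the decision rule he describes, with thresholds I invented for illustration. The default action is to do nothing, and motion is triggered only when the survival constraint is about to bind.

```python
# Cartoon of "minimize energy, subject to survival constraints."
# The real SlothBot work uses control barrier functions; the threshold
# rule and all numbers here are my own illustration.

ENERGY_RESERVE = 0.30  # assumed: never let the post-trip margin drop below 30%

def sloth_controller(battery_frac: float, cost_to_sun_frac: float) -> str:
    """Sit still by default; move only when survival demands it."""
    margin = battery_frac - cost_to_sun_frac  # charge left after reaching sun
    if margin <= ENERGY_RESERVE:
        return "crawl_to_sun"   # moves because it has to, in order to "survive"
    return "sit_and_sense"      # sloth mode: measure things, spend ~nothing

print(sloth_controller(battery_frac=0.60, cost_to_sun_frac=0.20))  # sit_and_sense
print(sloth_controller(battery_frac=0.45, cost_to_sun_frac=0.20))  # crawl_to_sun
```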
How would you like to improve SlothBot?
I have a few directions I would like to take the SlothBot. One is to make the sensor suites richer to make sure that it can become a versatile and useful science instrument. Another direction involves miniaturization – I would love to see a bunch of small SlothBots “living” among the trees somewhere in a rainforest for years, providing real-time data as to what is happening to the ecosystem.
#437758 Remotely Operated Robot Takes Straight ...
Roboticists love hard problems. Challenges like the DRC and SubT have helped (and are still helping) to catalyze major advances in robotics, but not all hard problems require a massive amount of DARPA funding—sometimes, a hard problem can just be something very specific that’s really hard for a robot to do, especially relative to the ease with which a moderately trained human might be able to do it. Catching a ball. Putting a peg in a hole. Or using a straight razor to shave someone’s face without Sweeney Todd-izing them.
This particular roboticist who sees straight-razor face shaving as a hard problem that robots should be solving is John Peter Whitney, who we first met back at IROS 2014 in Chicago when (working at Disney Research) he introduced an elegant fluidic actuator system. These actuators use tubes containing a fluid (like air or water) to transmit forces from a primary robot to a secondary robot in a very efficient way that also allows for either compliance or very high fidelity force feedback, depending on the compressibility of the fluid.
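Whitney’s papers carry the full model, but the core intuition is that the sealed fluid column behaves like a spring coupling the two robots, with stiffness set by the fluid’s bulk modulus: k ≈ βA²/V. The first-order sketch below uses illustrative dimensions of my own choosing, and shows why a water-filled line gives crisp force feedback while an air-filled one gives compliance.

```python
# First-order model of a fluid transmission: the trapped fluid acts as a
# spring between input and output pistons, with k = B * A^2 / V (B = bulk
# modulus, A = piston area, V = fluid volume). Dimensions are illustrative,
# not taken from Whitney's hardware.

def line_stiffness(bulk_modulus_pa: float, piston_area_m2: float,
                   fluid_volume_m3: float) -> float:
    """Effective spring stiffness [N/m] of the sealed fluid column."""
    return bulk_modulus_pa * piston_area_m2**2 / fluid_volume_m3

A = 3e-4  # assumed piston area (~2 cm diameter) [m^2]
V = 3e-5  # assumed fluid volume in the line [m^3]

k_water = line_stiffness(2.2e9, A, V)  # water: B ~ 2.2 GPa
k_air   = line_stiffness(1.4e5, A, V)  # air at 1 atm: B ~ gamma * P ~ 140 kPa

dx = 1e-3  # 1 mm mismatch between the two pistons
print(f"water line: {k_water * dx:.0f} N -> stiff, high-fidelity force feedback")
print(f"air line:   {k_air * dx:.2f} N -> soft, naturally compliant")
```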
Photo: John Peter Whitney/Northeastern University
Barber meets robot: Boston-based barber Jesse Cabbage [top, right] observes the machine created by roboticist John Peter Whitney. Before testing the robot on Whitney’s face, they used his arm for a quick practice [bottom].
Whitney is now at Northeastern University, in Boston, and he recently gave a talk at the RSS workshop on “Reacting to Contact,” where he suggested that straight razor shaving would be an interesting and valuable problem for robotics to work toward, due to its difficulty and requirement for an extremely high level of both performance and reliability.
Now, a straight razor is sort of like a safety razor, except with the safety part removed, which in fact does make it significantly less safe for humans, much less robots. Also not ideal for those worried about safety is that as part of the process the razor ends up in distressingly close proximity to things like the artery that is busily delivering your brain’s entire supply of blood, which is very close to the top of the list of things that most people want to keep blades very far away from. But that didn’t stop Whitney from putting his whiskers where his mouth is and letting his robotic system mediate the ministrations of a professional barber. It’s not an autonomous robotic straight-razor shave (because Whitney is not totally crazy), but it’s a step in that direction, and requires that the hardware Whitney developed be dead reliable.
Perhaps that was a poor choice of words. But rest assured that Whitney lived long enough to answer our questions afterward. Here’s the video; it’s part of a longer talk, but it should start in the right spot, at about 23:30.
If Whitney looked a little bit nervous to you, that’s because he was. “This was the first time I’d ever been shaved by someone (something?!) else with a straight razor,” he told us, and while having a professional barber at the helm was some comfort, “the lack of feeling and control on my part was somewhat unsettling.” Whitney says that the barber, Jesse Cabbage of Dentes Barbershop in Somerville, Mass., was surprised by how well he could feel the tactile sensations being transmitted from the razor. “That’s one of the reasons we decided to make this video,” Whitney says. “I can’t show someone how something feels, so the next best thing is to show a delicate task that either from experience or intuition makes it clear to the viewer that the system must have these properties—otherwise the task wouldn’t be possible.”
And as for when Whitney might be comfortable getting shaved by a robotic system without a human in the loop? It’s going to take a lot of work, as most other hard problems in robotics do. “There are two parts to this,” he explains. “One is fault-tolerance of the components themselves (software, electronics, etc.) and the second is the quality of the perception and planning algorithms.”
He offers a comparison to self-driving cars, in which similar (or greater) risks are incurred: “To learn how to perceive, interpret, and adapt, we need a very high-fidelity model of the problem, or a wealth of data and experience, or both,” he says. “But in the case of shaving we are greatly lacking in both!” He continues with the analogy: “I think there is a natural progression—the community started with autonomous driving of toy cars on closed courses and worked up to real cars carrying human passengers; in robotic manipulation we are beginning to move out of the ‘toy car’ stage and so I think it’s good to target high-consequence hard problems to help drive progress.”
Of course, the ultimate goal here is much more general than the creation of a dedicated straight razor shaving robot; it’s a challenge that includes a host of sub-goals that will benefit robotics more generally. This particular hardware system Whitney is developing is actually a testbed for exploring MRI-compatible remote needle biopsy, and he and his students are collaborating with Brigham and Women’s Hospital in Boston on adapting this technology to prostate biopsy and ablation procedures. They’re also exploring how delicate touch can be used as a way to map an environment and localize within it, especially where using vision may not be a good option. “These traits and behaviors are especially interesting for applications where we must interact with delicate and uncertain environments,” says Whitney. “Medical robots, assistive and rehabilitation robots and exoskeletons, and shared-autonomy teleoperation for delicate tasks.”
A paper with more details on this robotic system, “Series Elastic Force Control for Soft Robotic Fluid Actuators,” is available on arXiv.
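The paper’s title names the standard trick: put a known compliance in series between actuator and load, measure its deflection to estimate force, and close a loop on that estimate. The sketch below is the generic, textbook form of a series elastic force loop with made-up stiffness and gains, not a reproduction of the paper’s controller.

```python
# Generic series-elastic force control: force is inferred from spring
# deflection (F = k * (x_motor - x_load)) and a PI loop servos the motor
# to track a force command. Stiffness and gains are assumptions, not
# values from the paper.

class SeriesElasticForceLoop:
    def __init__(self, k_spring=500.0, kp=0.02, ki=0.1, dt=0.001):
        self.k, self.kp, self.ki, self.dt = k_spring, kp, ki, dt
        self.integral = 0.0

    def step(self, f_desired: float, x_motor: float, x_load: float) -> float:
        """One control tick; returns the next motor position command."""
        f_measured = self.k * (x_motor - x_load)  # deflection -> force "sensor"
        error = f_desired - f_measured
        self.integral += error * self.dt
        # PI on force error, expressed as a motor position increment.
        return x_motor + self.kp * error + self.ki * self.integral

loop = SeriesElasticForceLoop()
x_cmd = loop.step(f_desired=2.0, x_motor=0.0, x_load=0.0)  # begin pushing toward 2 N
```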