#437733 Video Friday: MIT Media Lab Developing ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
Very impressive local obstacle avoidance at a fairly high speed on a small drone, both indoors and outdoors.
[ FAST Lab ]
Matt Carney writes:
My PhD at MIT Media Lab has been the design and build of a next generation powered prosthesis. The bionic ankle, named TF8, was designed to provide biologically equivalent power and range of motion for plantarflexion-dorsiflexion. This video shows the process of going from a blank sheet of paper to people walking on it. Shown are three different people wearing the robot. About a dozen people have since been able to test the hardware.
[ MIT ]
Thanks Matt!
Exciting changes are coming to the iRobot® Home App. Get ready for new personalized experiences, improved features, and an easy-to-use interface. The update is rolling out over the next few weeks!
[ iRobot ]
MOFLIN is an AI Pet created from a totally new concept. It possesses emotional capabilities that evolve like living animals. With its warm soft fur, cute sounds, and adorable movement, you’d want to love it forever. We took a nature inspired approach and developed a unique algorithm that allows MOFLIN to learn and grow by constantly using its interactions to determine patterns and evaluate its surroundings from its sensors. MOFLIN will choose from an infinite number of mobile and sound pattern combinations to respond and express its feelings. To put it in simple terms, it’s like you’re interacting with a living pet.
You lost me at “it’s like you’re interacting with a living pet.”
[ Kickstarter ] via [ Gizmodo ]
This video is only robotics-adjacent, but it has applications for robotic insects. With a high-speed tracking system, we can now follow insects as they jump and fly, and watch how clumsy (but effective) they are at it.
[ Paper ]
Thanks Sawyer!
Suzumori Endo Lab, Tokyo Tech has developed self-excited pneumatic actuators that can be integrally molded by a 3D printer. These actuators use the “automatic flow path switching mechanism” we have devised.
[ Suzumori Endo Lab ]
Quadrupeds are getting so much better at deciding where to step rather than just stepping where they like and trying not to fall over.
[ RSL ]
Omnidirectional micro aerial vehicles are a growing field of research, with demonstrated advantages for aerial interaction and uninhibited observation. While systems with complete pose omnidirectionality and high hover efficiency have been developed independently, a robust system that combines the two has not been demonstrated to date. This paper presents the design and optimal control of a novel omnidirectional vehicle that can exert a wrench in any orientation while maintaining efficient flight configurations.
[ ASL ]
The latest in smooth humanoid walking from Dr. Guero.
[ YouTube ]
Will robots replace humans one day? When it comes to space exploration, robots are our precursors, gathering data to prepare humans for deep space. ESA robotics engineer Martin Azkarate discusses some of the upcoming missions involving robots and the unique science they will perform in this episode of Meet the Experts.
[ ESA ]
The Multi-robot Systems Group at FEE-CTU in Prague is working on an autonomous drone that detects fires and then shoots an extinguisher capsule at them.
[ MRS ]
This experiment with HEAP (Hydraulic Excavator for Autonomous Purposes) demonstrates our latest research in on-site and mobile digital fabrication with found materials. The embankment prototype in natural granular material was achieved using state of the art design and construction processes in mapping, modelling, planning and control. The entire process of building the embankment was fully autonomous. An operator was only present in the cabin for safety purposes.
[ RSL ]
The Simulation, Systems Optimization and Robotics Group (SIM) of Technische Universität Darmstadt’s Department of Computer Science conducts research on cooperating autonomous mobile robots, biologically inspired robots and numerical optimization and control methods.
[ SIM ]
Starting January 1, 2021, your drone platform of choice may be severely limited by the European Union’s new drone regulations. In this short video, senseFly’s Brock Ryder explains what that means for drone programs and operators and where senseFly drones fit in the EU’s new regulatory framework.
[ SenseFly ]
Nearly every company across every industry is looking for new ways to minimize human contact, cut costs and address the labor crunch in repetitive and dangerous jobs. WSJ explores why many are looking to robots as the solution for all three.
[ WSJ ]
You’ll need to prepare yourself emotionally for this video on “Examining Users’ Attitude Towards Robot Punishment.”
[ ACM ]
In this episode of the AI Podcast, Lex interviews Russ Tedrake (MIT and TRI) about biped locomotion, the DRC, home robots, and more.
[ AI Podcast ]
#437695 Video Friday: Even Robots Know That You ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
Other Than Human – September 3-10, 2020 – Stockholm, Sweden
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.
From the Robotics and Perception Group at UZH comes Flightmare, a simulation environment for drones that combines a slick rendering engine with a robust physics engine that can run as fast as your system can handle.
Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc.
[ Flightmare ]
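If you're wondering what that parallel reinforcement learning interface might look like in practice, here's a rough, gym-style sketch. The class and method names below are ours for illustration only, not Flightmare's actual API, and the dynamics are a toy stand-in:

```python
# Hypothetical sketch of a vectorized quadrotor environment, illustrating
# the kind of parallel, headless RL interface Flightmare describes.
# These names are illustrative assumptions, not Flightmare's real API.
import numpy as np

class ParallelQuadrotorEnv:
    """Steps N toy quadrotor simulations in lockstep; rendering stays optional."""

    def __init__(self, num_envs=100, dt=0.01):
        self.num_envs = num_envs
        self.dt = dt
        self.state = np.zeros((num_envs, 6))  # per drone: [x, y, z, vx, vy, vz]

    def reset(self):
        self.state[:] = 0.0
        return self.state.copy()

    def step(self, actions):
        # actions: (num_envs, 3) commanded accelerations. The dynamics update
        # runs with no rendering in the loop, mirroring the decoupled design.
        self.state[:, :3] += self.state[:, 3:] * self.dt
        self.state[:, 3:] += actions * self.dt
        rewards = -np.linalg.norm(self.state[:, :3], axis=1)  # hover-at-origin
        return self.state.copy(), rewards

env = ParallelQuadrotorEnv(num_envs=100)
obs = env.reset()
obs, rewards = env.step(np.random.uniform(-1.0, 1.0, size=(100, 3)))
```

The point of the decoupling is exactly what the abstract says: the physics loop above never waits on a renderer, so you can batch hundreds of drones for learning and only attach the Unity engine when you need pixels.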
Quadruped robots yelling at people to maintain social distancing is really starting to become a thing, for better or worse.
We introduce a fully autonomous surveillance robot based on a quadruped platform that can promote social distancing in complex urban environments. Specifically, to achieve autonomy, we mount multiple cameras and a 3D LiDAR on the legged robot. The robot then uses an onboard real-time social distancing detection system to track nearby pedestrian groups. Next, the robot uses a crowd-aware navigation algorithm to move freely in highly dynamic scenarios. The robot finally uses a crowd-aware routing algorithm to effectively promote social distancing by using human-friendly verbal cues to send suggestions to overcrowded pedestrians.
[ Project ]
Thanks Fan!
The Personal Robotics Group at Oregon State University is looking at UV germicidal irradiation for surface disinfection with a Fetch Manipulator Robot.
Fetch Robot disinfecting dance party woo!
[ Oregon State ]
How could you not take a mask from this robot?
[ Reachy ]
This work presents the design, development and autonomous navigation of the alpha-version of our Resilient Micro Flyer, a new type of collision-tolerant small aerial robot tailored to traversing and searching within highly confined environments including manhole-sized tubes. The robot is particularly lightweight and agile, while it implements a rigid collision-tolerant design which renders it resilient during forcible interaction with the environment. Furthermore, the design of the system is enhanced through passive flaps ensuring smoother and more compliant collision which was identified to be especially useful in very confined settings.
[ ARL ]
Pepper can make maps and autonomously navigate, which is interesting, but not as interesting as its posture when it's wandering around.
Dat backing into the charging dock tho.
[ Pepper ]
RatChair is a strategy for displacing big objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.
This is from 2015, why isn't all of my furniture autonomous yet?!
[ KAIST ]
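The core trick is simple enough to sketch: estimate each vibration pattern's average displacement from random trials, then greedily chain the patterns that best point at the goal. A toy version of the idea (ours, not the KAIST code, and with a made-up tolerance):

```python
# Illustrative sketch of RatChair's learn-then-plan idea: measure how each
# vibration pattern displaces the object on average, then greedily chain
# patterns whose displacement best aligns with the remaining error.
import numpy as np

def learn_displacements(patterns, trial_fn, trials=20):
    """Average (dx, dy) displacement per vibration pattern from repeated trials."""
    return {p: np.mean([trial_fn(p) for _ in range(trials)], axis=0)
            for p in patterns}

def plan_sequence(displacements, start, goal, max_steps=500):
    pos, seq = np.asarray(start, dtype=float), []
    for _ in range(max_steps):
        error = np.asarray(goal) - pos
        if np.linalg.norm(error) < 0.05:  # 5 cm tolerance (our assumption)
            break
        # Greedily pick the pattern whose mean displacement best aligns
        # with the remaining error vector.
        best = max(displacements, key=lambda p: float(np.dot(displacements[p], error)))
        seq.append(best)
        pos = pos + displacements[best]  # slowly but surely
    return seq

# Toy usage, with a fake noisy trial function standing in for real pose tracking:
rng = np.random.default_rng(0)
fake_trial = lambda p: np.asarray(p) * 0.01 + rng.normal(0.0, 0.002, 2)
disp = learn_displacements([(1, 0), (0, 1), (-1, 0), (0, -1)], fake_trial)
print(plan_sequence(disp, start=(0.0, 0.0), goal=(0.5, 0.3)))
```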
The new SeaDrone Pro is designed to be the underwater equivalent of a quadrotor. This video is a rendering, but we've been assured that it does actually exist.
[ SeaDrone ]
Thanks Eduardo!
Porous Loops is a lightweight composite facade panel that shows the potential of 3D printing of mineral foams for building-scale applications.
[ ETH ]
Thanks Fan!
Here's an interesting idea for a robotic gripper: it's what appears to be a snap bracelet coupled to a pneumatic actuator that allows the snap bracelet to be reset.
[ Georgia Tech ]
Graze is developing a commercial robotic lawnmower. They're also doing a sort of crowdfunded investment thing, which probably explains the painfully overproduced nature of the following video:
A couple things about this: the hard part, which the video skips over almost entirely, is the mapping, localization, and understanding where to mow and where not to mow. The pitch deck seems to suggest that this is mostly done through computer vision, a thing that's perhaps easy to do under controlled ideal conditions, but difficult to apply to a world full of lawns that are all different. The commercial aspect is interesting because golf courses are likely as standardized as you can get, but the emphasis here on how much money they can make without really addressing any of the technical stuff makes me raise an eyebrow or two.
[ Graze ]
The record & playback X-series arm demo allows the user to record the arm's movements while motors are torqued off. Then, the user may torque the motor's on and watch the movements they just made playback!
[ Interbotix ]
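The concept fits in a dozen lines: with the motors torqued off, you log joint positions at a fixed rate; then you torque back on and replay them. A hedged sketch, with `arm` standing in for a hypothetical joint-level driver rather than the actual Interbotix Python API:

```python
# Minimal record-and-playback sketch. `arm` and its methods are hypothetical
# placeholders for a joint-level driver, not the real Interbotix interface.
import time

def record(arm, duration_s=10.0, rate_hz=50):
    arm.torque_off()  # joints go limp so a person can move the arm by hand
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append(arm.read_joint_positions())
        time.sleep(1.0 / rate_hz)
    return samples

def playback(arm, samples, rate_hz=50):
    arm.torque_on()  # motors hold and track commanded positions again
    for joints in samples:
        arm.command_joint_positions(joints)
        time.sleep(1.0 / rate_hz)
```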
Shadow Robot has a new teleop system for its hand. I'm guessing that it's even trickier to use than it looks.
[ Shadow Robot ]
Quanser Interactive Labs is a collection of virtual hardware-based laboratory activities that supplement traditional or online courses. Just as when working with physical systems in the lab, students work with virtual twins of Quanser's most popular plants, develop their mathematical models, implement and simulate the dynamic behavior of these systems, design controllers, and validate them on high-fidelity 3D real-time virtual models. The virtual systems not only look like the real ones, they also behave like them, and can be manipulated, measured, and controlled like real devices. And finally, when students go to the lab, they can deploy their virtually-validated designs on actual physical equipment.
[ Quanser ]
This video shows robot-assisted heart surgery. It's amazing to watch if you haven't seen this sort of thing before, but be aware that there is a lot of blood.
This video demonstrates a fascinating case of robotic left atrial myxoma excision, narrated by Joel Dunning, Middlesbrough, UK. The Robotic platform provides superior visualisation and enhanced dexterity, through keyhole incisions. Robotic surgery is an integral part of our Minimally Invasive Cardiothoracic Surgery Program.
[ Tristan D. Yan ]
Thanks Fan!
In this talk, we present our work on learning control policies directly in simulation that are deployed onto real drones without any fine tuning. The presentation covers autonomous drone racing, drone acrobatics, and uncertainty estimation in deep networks.
[ RPG ]
#437683 iRobot Remembers That Robots Are ...
iRobot has released several new robots over the last few years, including the i7 and s9 vacuums. Both of these models are very fancy and very capable, packed with innovative and useful features that we’ve been impressed by. They’re both also quite expensive—with dirt docks included, you’re looking at US $800 for the i7+, and a whopping $1,100 for the s9+. You can knock a couple hundred bucks off of those prices if you don’t want the docks, but still, these vacuums are absolutely luxury items.
If you just want something that’ll do some vacuuming so that you don’t have to, iRobot has recently announced a new Roomba option. The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400. It’s not nearly as smart as the i7 or the s9, but it can navigate (sort of) and make maps (sort of) and do some basic smart home integration. If that sounds like all you need, the i3 could be the robot vacuum for you.
iRobot calls the i3 “stylish,” and it does look pretty neat with that fabric top. Underneath, you get dual rubber primary brushes plus a side brush. There’s limited compatibility with the iRobot Home app and IFTTT, along with Alexa and Google Home. The i3 is also compatible with iRobot’s Clean Base, but that’ll cost you an extra $200, and iRobot refers to this bundle as the i3+.
The reason that the i3 only offers limited compatibility with iRobot’s app is that the i3 is missing the top-mounted camera that you’ll find in more expensive models. Instead, it relies on a downward-looking optical sensor to help it navigate, and it builds up a map as it’s cleaning by keeping track of when it bumps into obstacles and paying attention to internal sensors like a gyro and wheel odometers. The i3 can localize directly on its charging station or Clean Base (which have beacons on them that the robot can see if it’s close enough), which allows it to resume cleaning after emptying its bin or recharging. You’ll get a map of the area that the i3 has cleaned once it’s finished, but that map won’t persist between cleaning sessions, meaning that you can’t do things like set keep-out zones or identify specific rooms for the robot to clean. Many of the more useful features that iRobot’s app offers are based on persistent maps, and this is probably the biggest gap in functionality between the i3 and its more expensive siblings.
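That gyro-plus-odometry approach is classic dead reckoning, and the core update is small enough to sketch. Here's the generic textbook version (not iRobot's actual code), which also makes it clear why drift accumulates and why the dock beacon matters:

```python
# Toy dead-reckoning pose update: integrate wheel travel for distance and
# the gyro for heading. Generic textbook formulation, not iRobot's code;
# small errors in each step accumulate, so an external landmark (like the
# dock beacon) is needed to re-anchor the estimate.
import math

def dead_reckon(x, y, theta, d_left, d_right, gyro_dtheta):
    """Advance pose using wheel travel (meters) and gyro heading change (radians)."""
    d_center = (d_left + d_right) / 2.0
    theta += gyro_dtheta  # trust the gyro for heading; wheels slip on carpet
    x += d_center * math.cos(theta)
    y += d_center * math.sin(theta)
    return x, y, theta
```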
According to iRobot senior global product manager Sarah Wang, the kind of augmented dead-reckoning-based mapping that the i3 uses actually works really well: “Based on our internal and external testing, the performance is equivalent with our products that have cameras, like the Roomba 960,” she says. To get this level of performance, though, you do have to be careful, Wang adds. “If you kidnap i3, then it will be very confused, because it doesn’t have a reference to know where it is.” “Kidnapping” is a term that’s used often in robotics to refer to a situation in which an autonomous robot gets moved to an unmapped location, and in the context of a home robot, the best example of this is if you decide that you want your robot to vacuum a different room instead, so you pick it up and move it there.
iRobot used to make this easy by giving all of its robots carrying handles, but not anymore, because getting moved around makes things really difficult for any robot trying to keep track of where it is. While robots like the i7 can recover using their cameras to look for unique features that they recognize, the only permanent, unique landmark that the i3 can for sure identify is the beacon on its dock. What this means is that when it comes to the i3, even more than other Roomba models, the best strategy is to just “let it do its thing,” says iRobot senior principal system engineer Landon Unninayar.
Photo: iRobot
The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400.
If you’re looking to spend a bit less than the $400 starting price of the i3, there are other options to be aware of as well. The Roomba 614, for example, does a totally decent job and costs $250. Its scheduling isn’t very clever, it doesn’t make maps, and it won’t empty itself, but it will absolutely help keep your floors clean as long as you don’t mind being a little bit more hands-on. (And there’s also Neato’s D4, which offers basic persistent maps—and lasers!—for $330.)
The other thing to consider if you’re trying to decide between the i3 and a more expensive Roomba is that without the camera, the i3 likely won’t be able to take advantage of nearly as many of the future improvements that iRobot has said it’s working on. Spending more money on a robot with additional sensors isn’t just buying what it can do now, but also investing in what it may be able to do later on, with its more sophisticated localization and ability to recognize objects. iRobot has promised major app updates every six months, and our guess is that most of the cool new stuff is going to show up in the i7 and s9. So, if your top priority is just cleaner floors, the i3 is a solid choice. But if you want a part of what iRobot is working on next, the i3 might end up holding you back.
#437667 17 Teams to Take Part in DARPA’s ...
Among all of the other in-person events that have been totally wrecked by COVID-19 is the Cave Circuit of the DARPA Subterranean Challenge. DARPA has already hosted the in-person events for the Tunnel and Urban SubT circuits (see our previous coverage here), and the plan had always been for a trio of events representing three uniquely different underground environments in advance of the SubT Finals, which will somehow combine everything into one bonkers course.
While the SubT Urban Circuit event snuck in just under the lockdown wire in late February, DARPA made the difficult (but prudent) decision to cancel the in-person Cave Circuit event. What this means is that there will be no Systems Track Cave competition, which is a serious disappointment—we were very much looking forward to watching teams of robots navigating through an entirely unpredictable natural environment with a lot of verticality. Fortunately, DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment that’s as dynamic and detailed as DARPA can make it.
From DARPA’s press releases:
DARPA’s Subterranean (SubT) Challenge will host its Cave Circuit Virtual Competition, which focuses on innovative solutions to map, navigate, and search complex, simulated cave environments, on November 17. Qualified teams have until Oct. 15 to develop and submit software-based solutions for the Cave Circuit via the SubT Virtual Portal, where their technologies will face unknown cave environments in the cloud-based SubT Simulator. Until then, teams can refine their roster of selected virtual robot models, choose sensor payloads, and continue to test autonomy approaches to maximize their score.
The Cave Circuit also introduces new simulation capabilities, including digital twins of Systems Competition robots to choose from, marsupial-style platforms combining air and ground robots, and breadcrumb nodes that can be dropped by robots to serve as communications relays. Each robot configuration has an associated cost, measured in SubT Credits – an in-simulation currency – based on performance characteristics such as speed, mobility, sensing, and battery life.
Each team’s simulated robots must navigate realistic caves, with features including natural terrain and dynamic rock falls, while they search for and locate various artifacts on the course within five meters of accuracy to score points during a 60-minute timed run. A correct report is worth one point. Each course contains 20 artifacts, which means each team has the potential for a maximum score of 20 points. Teams can leverage numerous practice worlds and even build their own worlds using the cave tiles found in the SubT Tech Repo to perfect their approach before they submit one official solution for scoring. The DARPA team will then evaluate the solution on a set of hidden competition scenarios.
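The scoring rule above is simple enough to write down. Here's a minimal sketch under the stated rules (one point per correctly typed report within five meters, each artifact scoring at most once); any report-budget or timing details beyond that are omitted:

```python
# Sketch of the stated SubT scoring rule: a report counts if the artifact
# type matches and the reported position is within 5 m of ground truth.
import math

def score_run(reports, ground_truth, tolerance_m=5.0):
    """reports and ground_truth are lists of (artifact_type, (x, y, z)) tuples."""
    remaining = list(ground_truth)
    points = 0
    for rtype, rpos in reports:
        for artifact in remaining:
            gtype, gpos = artifact
            if rtype == gtype and math.dist(rpos, gpos) <= tolerance_m:
                points += 1
                remaining.remove(artifact)  # each artifact scores at most once
                break
    return points  # 20 artifacts per course, so 20 points maximum
```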
Of the 17 qualified teams (you can see all of them here), there are a handful that we’ll quickly point out. Team BARCS, from Michigan Tech, was the winner of the SubT Virtual Urban Circuit, meaning that they may be the team to beat on Cave as well, although the course is likely to be unique enough that things will get interesting. Some Systems Track teams to watch include Coordinated Robotics, CTU-CRAS-NORLAB, MARBLE, NUS SEDS, and Robotika, and there are also a handful of brand new teams as well.
Now, just because there’s no dedicated Cave Circuit for the Systems Track teams, it doesn’t mean that there won’t be a Cave component (perhaps even a significant one) in the final event, which as far as we know is still scheduled to happen in fall of next year. We’ve heard that many of the Systems Track teams have been testing out their robots in caves anyway, and as the virtual event gets closer, we’ll be doing a sort of Virtual Systems Track series that highlights how different teams are doing mock Cave Circuits in caves they’ve found for themselves.
For more, we checked in with DARPA SubT program manager Dr. Timothy H. Chung.
IEEE Spectrum: Was it a difficult decision to cancel the Systems Track for Cave?
Tim Chung: The decision to go virtual only was heart wrenching, because I think DARPA’s role is to offer up opportunities that may be unimaginable for some of our competitors, like opening up a cave-type site for this competition. We crawled and climbed through a number of these sites, and I share the sense of disappointment that both our team and the competitors have that we won’t be able to share all the advances that have been made since the Urban Circuit. But what we’ve been able to do is pour a lot of our energy and the insights that we got from crawling around in those caves into what’s going to be a really great opportunity on the Virtual Competition side. And whether it’s a global pandemic, or just lack of access to physical sites like caves, virtual environments are an opportunity that we want to develop.
“The simulator offers us a chance to look at where things could be … it really allows for us to find where some of those limits are in the technology based only on our imagination.”
—Timothy H. Chung, DARPA
What kind of new features will be included in the Virtual Cave Circuit for this competition?
I’m really excited about these particular features because we’re seeing an opportunity for increased synergy between the physical and the virtual. The first I’d say is that we scanned some of the Systems Track robots using photogrammetry and combined that with some additional models that we got from the systems competitors themselves to turn their systems robots into virtual models. We often talk about the sim to real transfer and how successful we can get a simulation to transfer over to the physical world, but now we’ve taken something from the physical world and made it virtual. We’ve validated the controllers as well as the kinematics of the robots, we’ve iterated with the systems competitors themselves, and now we have these 13 robots (air and ground) in the SubT Tech Repo that now all virtual competitors can take advantage of.
We also have additional robot capability. Those comms bread crumbs are common among many of the competitors, so we’ve adopted that in the virtual world, and now you have comms relay nodes that are baked in to the SubT Simulator—you can have either six or twelve comms nodes that you can drop from a variety of our ground robot platforms. We have the marsupial deployment capability now, so now we have parent ground robots that can be mixed and matched with different child drones to become marsupial pairs.
And this is something I’ve been planning for for a while: we now have the ability to trigger things like rock falls. They still don’t quite look like Indiana Jones with the boulder coming down the corridor, but this comes really close. In addition to it just being an interesting and realistic consideration, we get to really dynamically test and stress the robots’ ability to navigate and recognize that something has changed in the environment and respond to it.
Image: DARPA
DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment.
No simulation is perfect, so can you talk to us about what kinds of things aren’t being simulated right now? Where does the simulator not match up to reality?
I think that question is foundational to any conversation about simulation. I’ll give you a couple of examples:
We have the ability to represent wholesale damage to a robot, but it’s not at the actuator or component level. So there’s not a reliability model, although I think that would be really interesting to incorporate so that you could do assessments on things like mean time to failure. But if a robot falls off a ledge, it can be disabled by virtue of being too damaged to continue.
With communications, and this is one that’s near and dear not only to my heart but also to all of those that have lived through developing communication systems and robotic systems, we’ve gone through and conducted RF surveys of underground environments to get a better handle on what propagation effects are. There’s a lot of research that has gone into this, and trying to carry through some of that realism, we do have path loss models for RF communications baked into the SubT Simulator. For example, when you drop a bread crumb node, it’s using a path loss model so that it can represent the degradation of signal as you go farther into a cave. Now, we’re not modeling it at the Maxwell equations level, which I think would be awesome, but we’re not quite there yet.
We do have things like battery depletion, sensor degradation to the extent that simulators can degrade sensor inputs, and things like that. It’s just amazing how close we can get in some places, and how far away we still are in others, and I think showing where the limits are of how far you can get with simulation is all part and parcel of why SubT Challenge wants to have both System and Virtual tracks. Simulation can be an accelerant, but it’s not going to be the panacea for development and innovation, and I think all the competitors are cognizant of those limitations.
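As an aside, the log-distance path loss model Chung describes is compact enough to show. Here's a generic version; the SubT Simulator's actual parameters aren't given in this interview, so the numbers below are placeholders:

```python
# Generic log-distance path loss model (standard RF textbook form; the SubT
# Simulator's actual parameters are assumptions here, not published values).
import math

def path_loss_db(distance_m, ref_loss_db=40.0, exponent=3.0, ref_dist_m=1.0):
    """Signal attenuation in dB at distance_m from the transmitter.

    An exponent near 2 is free space; tunnel and cave fits often land
    higher (or occasionally lower, in waveguide-like passages).
    """
    return ref_loss_db + 10.0 * exponent * math.log10(distance_m / ref_dist_m)

# Link budget check: can a -90 dBm receiver hear a 10 dBm breadcrumb at 50 m?
rx_dbm = 10.0 - path_loss_db(50.0)  # about -81 dBm here, so yes, with margin
```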
One of the most amazing things about the SubT Virtual Track is that all of the robots operate fully autonomously, without the human(s) in the loop that the System Track teams have when they compete. Why make the Virtual Track even more challenging in that way?
I think it’s one of the defining, delineating attributes of the Virtual Track. Our continued vision for the simulation side is that the simulator offers us a chance to look at where things could be, and allows for us to explore things like larger scales, or increased complexity, or types of environments that we can’t physically gain access to—it really allows for us to find where some of those limits are in the technology based only on our imagination, and this is one of the intrinsic values of simulation.
But I think finding a way to incorporate human input, or more generally human factors like teleoperation interfaces and the in-situ stress that you might not be able to recreate in the context of a virtual competition provided a good reason for us to delineate the two competitions, with the Virtual Competition really being about the role of fully autonomous or self-sufficient systems going off and doing their solution without human guidance, while also acknowledging that the real world has conditions that would not necessarily be represented by a fully simulated version. Having said that, I think cognitive engineering still has an incredibly important role to play in human robot interaction.
What do we have to look forward to during the Virtual Competition Showcase?
We have a number of additional features and capabilities that we’ve baked into the simulator that will allow for us to derive some additional insights into our competition runs. Those insights might involve things like the performance of one or more robots in a given scenario, or the impact of the environment on different types of robots, and what I can tease is that this will be an opportunity for us to showcase both the technology and also the excitement of the robots competing in the virtual environment. I’m trying not to give too many spoilers, but we’ll have an opportunity to really get into the details.
Check back as we get closer to the 17 November event for more on the DARPA SubT Challenge.