Tag Archives: environment

#437747 High Performance Ornithopter Drone Is ...

The vast majority of drones are rotary-wing systems (like quadrotors), and for good reason: They’re cheap, they’re easy, they scale up and down well, and we’re getting quite good at controlling them, even in very challenging environments. For most applications, though, drones lose out to birds and their flapping wings in almost every way—flapping wings are very efficient, enable astonishing agility, and are much safer, able to make compliant contact with surfaces rather than shredding them like a rotor system does. But flapping wings have their challenges too: Making flapping-wing robots is so much more difficult than just duct taping spinning motors to a frame that, with a few exceptions, we haven’t seen nearly as much improvement as we have in more conventional drones.

In Science Robotics last week, a group of roboticists from Singapore, Australia, China, and Taiwan described a new design for a flapping-wing robot that offers enough thrust and control authority to make stable transitions between aggressive flight modes—like flipping and diving—while also being able to efficiently glide and gently land. While still more complex than a quadrotor in both hardware and software, this ornithopter’s advantages might make it worthwhile.

One reason that making a flapping-wing robot is difficult is that the wings have to move back and forth at high speed while the electric motors driving them spin continuously, so the rotary motion must be converted into reciprocation. This requires a relatively complex transmission system, which (if you don’t do it carefully) leads to weight penalties and a significant loss of efficiency. One particular challenge is that the reciprocating mass of the wings tends to cause the entire robot to flex back and forth, which alternately binds and disengages elements in the transmission system.

The researchers’ new ornithopter design mitigates the flexing problem using hinges and bearings in pairs. Elastic elements also help improve efficiency, and in fact, one of the most surprising findings of the paper is that the flapping-wing ornithopter is more efficient than it would be with a rotary, propeller-based propulsion system. Its thrust exceeds its 26-gram weight by 40 percent, which is where much of the aerobatic capability comes from.
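
To put that 40 percent margin in perspective, here is a quick back-of-the-envelope check using only the figures quoted above (a sketch, not numbers from the paper):

```python
# Rough thrust-to-weight arithmetic from the article's figures:
# 26-gram robot, thrust 40 percent greater than its weight.
G = 9.81                    # gravitational acceleration, m/s^2

mass_kg = 0.026             # 26 grams
weight_N = mass_kg * G      # ~0.255 N
thrust_N = 1.4 * weight_N   # 40% more thrust than weight -> ~0.357 N

# Spare vertical acceleration available beyond hovering:
spare_accel = (thrust_N - weight_N) / mass_kg   # ~3.9 m/s^2, about 0.4 g

print(f"thrust = {thrust_N:.3f} N, spare acceleration = {spare_accel:.1f} m/s^2")
```

That spare 0.4 g or so of vertical acceleration is what buys the climb rate and the margin for the aggressive maneuvers described below.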


It’s not just thrust that’s a challenge for ornithopters: Control is much more complex as well. Like birds, ornithopters have tails, but unlike birds, they have to rely almost entirely on tail control authority, lacking a bird’s fine control over wing movements. To make an acrobatic level of control possible, the tail control surfaces on this ornithopter are huge—the tail plane area is 35 percent of the wing area. The wings can also provide some assistance in specific circumstances, as by combining tail control inputs with a deliberate stall of the wings to allow the ornithopter to execute rapid flips.

With the ability to take off, hover, glide, land softly, maneuver acrobatically, fly quietly, and interact with its environment in a way that’s not (immediately) catastrophic, flapping-wing drones easily offer enough advantages to keep them interesting. Now that ornithopters have been shown to be even more efficient than rotorcraft, the researchers plan to focus on autonomy, with the goal of moving their robot toward real-world usefulness.

“Efficient flapping wing drone arrests high-speed flight using post-stall soaring,” by Yao-Wei Chin, Jia Ming Kok, Yong-Qiang Zhu, Woei-Leong Chan, Javaan S. Chahl, Boo Cheong Khoo, and Gih-Keong Lau from Nanyang Technological University in Singapore, National University of Singapore, Defence Science and Technology Group in Canberra, Australia, Qingdao University of Technology in Shandong, China, University of South Australia in Mawson Lakes, and National Chiao Tung University in Hsinchu, Taiwan, was published in Science Robotics.

Posted in Human Robots

#437709 iRobot Announces Major Software Update, ...

Since the release of the very first Roomba in 2002, iRobot’s long-term goal has been to deliver cleaner floors in a way that’s effortless and invisible. Which sounds pretty great, right? And arguably, iRobot has managed to do exactly this, with its most recent generation of robot vacuums that make their own maps and empty their own dustbins. For those of us who trust our robots, this is awesome, but iRobot has gradually been realizing that many Roomba users either don’t want this level of autonomy, or aren’t ready for it.

Today, iRobot is announcing a major new update to its app that represents a significant shift of its overall approach to home robot autonomy. Humans are being brought back into the loop through software that tries to learn when, where, and how you clean so that your Roomba can adapt itself to your life rather than the other way around.

To understand why this is such a shift for iRobot, let’s take a very brief look back at how the Roomba interface has evolved over the last couple of decades. The first generation of Roomba had three buttons that allowed (or required) the user to select whether the room being vacuumed was small, medium, or large. iRobot ditched that system one generation later, replacing the room-size buttons with a single “clean” button. Programmable scheduling meant that users no longer needed to push any buttons at all, and with Roombas able to find their way back to their docking stations, all you needed to do was empty the dustbin. And with the most recent few generations (the S and i series), the dustbin emptying is also done for you, reducing direct interaction with the robot to once a month or less.

Image: iRobot

iRobot CEO Colin Angle believes that working toward more intelligent human-robot collaboration is “the brave new frontier” of AI. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” he says. “But thinking that autonomy was the destination was where I was just completely wrong.”

The point that the top-end Roombas are at now reflects a goal that iRobot has been working toward since 2002: With autonomy, scheduling, and the clean base to empty the bin, you can set up your Roomba to vacuum when you’re not home, giving you cleaner floors every single day without you even being aware that the Roomba is hard at work while you’re out. It’s not just hands-off, it’s brain-off. No noise, no fuss, just things being cleaner thanks to the efforts of a robot that does its best to be invisible to you. Personally, I’ve been completely sold on this idea for home robots, and iRobot CEO Colin Angle was as well.

“I probably told you that the perfect Roomba is the Roomba that you never see, you never touch, you just come home every day and it’s done the right thing,” Angle told us. “But customers don’t want that—they want to be able to control what the robot does. We started to hear this a couple years ago, and it took a while before it sunk in, but it made sense.”

How? Angle compares it to having a human come into your house to clean without your being able to tell them where or when to do the job. Maybe after a while you’d build up the amount of trust necessary for that to work, but in the short term, it would likely be frustrating. And people get frustrated with their Roombas for exactly this reason. “The desire to have more control over what the robot does kept coming up, and for me, it required a pretty big shift in my view of what intelligence we were trying to build. Autonomy is not intelligence. We need to do something more.”

That something more, Angle says, is a partnership as opposed to autonomy. It’s an acknowledgement that not everyone has the same level of trust in robots as the people who build them. It’s an understanding that people want to have a feeling of control over their homes, that they have set up the way that they want, and that they’ve been cleaning the way that they want, and a robot shouldn’t just come in and do its own thing.


“Until the robot proves that it knows enough about your home and about the way that you want your home cleaned,” Angle says, “you can’t move forward.” He adds that this is one of those things that seem obvious in retrospect, but even if they’d wanted to address the issue before, they didn’t have the technology to solve the problem. Now they do. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” Angle says. “But thinking that autonomy was the destination was where I was just completely wrong.”

The previous iteration of the iRobot app (and the Roombas themselves) was built around one big fat CLEAN button. The new approach instead tries to figure out in much more detail where the robot should clean, and when, using a mixture of autonomous technology and interaction with the user.

Where to Clean
Knowing where to clean depends on your Roomba having a detailed and accurate map of its environment. For several generations now, Roombas have used visual simultaneous localization and mapping (VSLAM) to build persistent maps of your home. These maps have been used to tell the Roomba to clean in specific rooms, but that’s about it. With the new update, Roombas with cameras will be able to recognize some objects and features in your home, including chairs, tables, couches, and even countertops. The robots will use these features to identify where messes tend to happen so that they can focus on those areas—like around the dining room table or along the front of the couch.

We should take a minute here to clarify how the Roomba is using its camera. The original (primary?) purpose of the camera was for VSLAM, where the robot would take photos of your home, downsample them into QR-code-like patterns of light and dark, and then use those (with the assistance of other sensors) to navigate. Now the camera is also being used to take pictures of other stuff around your house to make that map more useful.
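
As a rough illustration of the idea (a minimal sketch of generic grid-based visual fingerprinting, not iRobot’s actual pipeline), a camera frame can be reduced to a coarse pattern of light and dark cells, and localization can then compare these compact fingerprints instead of raw photos:

```python
# Minimal sketch: reduce a grayscale frame to a QR-code-like binary grid.
# This illustrates the concept described above, not iRobot's implementation.
import numpy as np

def frame_to_pattern(gray: np.ndarray, grid: int = 16) -> np.ndarray:
    """Downsample a grayscale frame to a grid x grid pattern of 0s and 1s."""
    h, w = gray.shape
    cropped = gray[: h - h % grid, : w - w % grid]          # trim to a multiple
    blocks = cropped.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8)        # 1 = brighter cell

def pattern_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two stored patterns (smaller = more similar)."""
    return int(np.count_nonzero(a != b))

frame = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(float)
pattern = frame_to_pattern(frame)   # 16x16 array, cheap to store and match
```

The appeal of a representation like this is that it’s compact to store and fast to match, and it isn’t a recognizable photo of your home.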

Photo: iRobot

The robots will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table.

This is done through machine learning using a library of images of common household objects from a floor perspective that iRobot had to develop from scratch. Angle clarified for us that this is all done via a neural net that runs on the robot, and that “no recognizable images are ever stored on the robot or kept, and no images ever leave the robot.” Worst case, if all the data iRobot has about your home gets somehow stolen, the hacker would only know that (for example) your dining room has a table in it and the approximate size and location of that table, because the map iRobot has of your place only stores symbolic representations rather than images.
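
For illustration, a “symbolic” map entry of the kind described might look something like this (a hypothetical sketch; the field names are ours, not iRobot’s):

```python
# Hypothetical symbolic map entry: labels and geometry only, never images.
from dataclasses import dataclass

@dataclass
class MapObject:
    label: str      # e.g. "dining table", output by the on-robot neural net
    x_m: float      # approximate position in the map frame, meters
    y_m: float
    width_m: float  # approximate footprint, meters
    depth_m: float

# Everything a (hypothetical) attacker could learn about your dining room:
table = MapObject(label="dining table", x_m=3.2, y_m=1.4, width_m=1.6, depth_m=0.9)
```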

Another useful new feature is intended to help manage the “evil Roomba places” (as Angle puts it) that every home has: the spots that reliably cause Roombas to get stuck. If a place is evil enough that the Roomba has to call you for help because it gave up completely, the robot will now remember it, and suggest either that you make some changes or that it stop cleaning there, which seems reasonable.

When to Clean
It turns out that the primary cause of mission failure for Roombas is not that they get stuck or that they run out of battery—it’s user cancellation, usually because the robot is getting in the way or being noisy when you don’t want it to be. “If you kill a Roomba’s job because it annoys you,” points out Angle, “how is that robot being a good partner? I think it’s an epic fail.” Of course, it’s not the robot’s fault, because Roombas only clean when we tell them to, which Angle says is part of the problem. “People actually aren’t very good at making their own schedules—they tend to oversimplify, and not think through what their schedules are actually about, which leads to lots of [figurative] Roomba death.”

To help you figure out when the robot should actually be cleaning, the new app will look for patterns in when you ask the robot to clean, and then recommend a schedule based on those patterns. That might mean the robot cleans different areas at different times every day of the week. The app will also make event-based scheduling recommendations, integrated with other smart home devices. Would you prefer the Roomba to clean every time you leave the house? The app can integrate with your security system (or garage door, or any number of other things) and take care of that for you.
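
A minimal sketch of what that pattern-finding could look like (our assumption about the approach, not iRobot’s actual code): tally the weekday-and-hour slots in which the user has started cleanings, and suggest the slots that recur.

```python
# Sketch: recommend a schedule from past user-initiated cleaning times.
from collections import Counter
from datetime import datetime

def recommend_schedule(start_times: list[datetime], min_count: int = 3):
    """Return (weekday, hour) slots the user has chosen at least min_count times."""
    slots = Counter((t.weekday(), t.hour) for t in start_times)
    return [slot for slot, n in slots.most_common() if n >= min_count]

history = [datetime(2020, 8, 31, 9), datetime(2020, 9, 7, 9), datetime(2020, 9, 14, 9)]
print(recommend_schedule(history))   # [(0, 9)]: Mondays at 9 a.m.
```

An event-based trigger is then just a hook on top of this: instead of a fixed slot, the cleaning is started by a “house is empty” signal from the security system or garage door.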

More generally, Roomba will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table. The app will also, to some extent, pay attention to the environment and season. It might suggest increasing your vacuuming frequency if pollen counts are especially high, or if it’s pet shedding season and you have a dog. Unfortunately, Roomba isn’t (yet?) capable of recognizing dogs on its own, so the app has to cheat a little bit by asking you some basic questions.

A Smarter App

Image: iRobot


The app update, which should be available starting today, is free. The scheduling and recommendations will work on every Roomba model, although for object recognition and anything related to mapping, you’ll need one of the more recent and fancier models with a camera. Future app updates will arrive on a more aggressive schedule: major releases should happen every six months, with incremental updates even more frequently than that.

Angle also told us that, overall, this change in direction represents a substantial shift in resources for iRobot, and the company has pivoted two-thirds of its engineering organization to focus on software-based collaborative intelligence rather than hardware. “It’s not like we’re done doing hardware,” Angle assured us. “But we do think about hardware differently. We view our robots as platforms that have longer life cycles, and each platform will be able to support multiple generations of software. We’ve kind of decoupled robot intelligence from hardware, and that’s a change.”

Angle believes that working toward more intelligent collaboration between humans and robots is “the brave new frontier of artificial intelligence. I expect it to be the frontier for a reasonable amount of time to come,” he adds. “We have a lot of work to do to create the type of easy-to-use experience that consumer robots need.”

Posted in Human Robots

#437695 Video Friday: Even Robots Know That You ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
Other Than Human – September 3-10, 2020 – Stockholm, Sweden
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

From the Robotics and Perception Group at UZH comes Flightmare, a simulation environment for drones that combines a slick rendering engine with a robust physics engine that can run as fast as your system can handle.

Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc.
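
To make feature (ii) concrete, here is a hypothetical sketch of what a batched, RL-oriented simulation interface looks like in general (illustrative names only; consult the Flightmare documentation for its real API). The key idea is that observations and actions are arrays with one row per simulated quadrotor:

```python
# Illustrative batched-environment sketch (not Flightmare's actual API).
import numpy as np

class ParallelQuadSim:
    """Stand-in for a simulator stepping N quadrotors in one call."""
    def __init__(self, n_envs: int, obs_dim: int = 18, act_dim: int = 4):
        self.n_envs, self.obs_dim, self.act_dim = n_envs, obs_dim, act_dim
        self.rng = np.random.default_rng(0)

    def reset(self) -> np.ndarray:
        return np.zeros((self.n_envs, self.obs_dim))

    def step(self, actions: np.ndarray):
        assert actions.shape == (self.n_envs, self.act_dim)
        obs = self.rng.normal(size=(self.n_envs, self.obs_dim))  # placeholder dynamics
        reward = np.zeros(self.n_envs)
        done = np.zeros(self.n_envs, dtype=bool)
        return obs, reward, done

env = ParallelQuadSim(n_envs=200)               # hundreds of drones per step
obs = env.reset()
actions = np.zeros((env.n_envs, env.act_dim))   # a trained policy would go here
obs, reward, done = env.step(actions)
```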

[ Flightmare ]

Quadruped robots yelling at people to maintain social distancing is really starting to become a thing, for better or worse.

We introduce a fully autonomous surveillance robot based on a quadruped platform that can promote social distancing in complex urban environments. Specifically, to achieve autonomy, we mount multiple cameras and a 3D LiDAR on the legged robot. The robot then uses an onboard real-time social distancing detection system to track nearby pedestrian groups. Next, the robot uses a crowd-aware navigation algorithm to move freely in highly dynamic scenarios. The robot finally uses a crowd-aware routing algorithm to effectively promote social distancing by using human-friendly verbal cues to send suggestions to overcrowded pedestrians.

[ Project ]

Thanks Fan!

The Personal Robotics Group at Oregon State University is looking at UV germicidal irradiation for surface disinfection with a Fetch Manipulator Robot.

Fetch Robot disinfecting dance party woo!

[ Oregon State ]

How could you not take a mask from this robot?

[ Reachy ]

This work presents the design, development, and autonomous navigation of the alpha version of our Resilient Micro Flyer, a new type of collision-tolerant small aerial robot tailored to traversing and searching within highly confined environments, including manhole-sized tubes. The robot is particularly lightweight and agile, while implementing a rigid collision-tolerant design which renders it resilient during forcible interaction with the environment. Furthermore, the design of the system is enhanced through passive flaps that ensure smoother and more compliant collisions, which was identified to be especially useful in very confined settings.

[ ARL ]

Pepper can make maps and autonomously navigate, which is interesting, but not as interesting as its posture when it's wandering around.

Dat backing into the charging dock tho.

[ Pepper ]

RatChair is a strategy for displacing big objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.
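
A toy version of that loop might look like the following (a sketch of the general idea under our own assumptions, not the actual RatChair implementation): estimate each vibration pattern’s average displacement from random trials, then greedily chain whichever pattern brings the object closest to the goal.

```python
# Toy RatChair-style planner: greedy sequencing of learned vibration effects.
import numpy as np

rng = np.random.default_rng(0)
n_patterns = 8
# Mean planar displacement (dx, dy) per burst, as learned from random trials:
effects = rng.normal(scale=0.02, size=(n_patterns, 2))   # meters per burst

def plan(pose: np.ndarray, goal: np.ndarray, max_steps: int = 200) -> list[int]:
    """Greedily pick the burst pattern that moves the object closest to the goal."""
    sequence = []
    for _ in range(max_steps):
        best = int(np.argmin(np.linalg.norm(pose + effects - goal, axis=1)))
        pose = pose + effects[best]
        sequence.append(best)
        if np.linalg.norm(goal - pose) < 0.01:            # within 1 cm
            break
    return sequence

sequence = plan(pose=np.array([0.0, 0.0]), goal=np.array([0.5, 0.3]))
```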

This is from 2015, why isn't all of my furniture autonomous yet?!

[ KAIST ]

The new SeaDrone Pro is designed to be the underwater equivalent of a quadrotor. This video is a rendering, but we've been assured that it does actually exist.

[ SeaDrone ]

Thanks Eduardo!

Porous Loops is a lightweight composite facade panel that shows the potential of 3D printing of mineral foams for building-scale applications.

[ ETH ]

Thanks Fan!

Here's an interesting idea for a robotic gripper: what appears to be a snap bracelet coupled to a pneumatic actuator that allows the snap bracelet to be reset.

[ Georgia Tech ]

Graze is developing a commercial robotic lawnmower. They're also doing a sort of crowdfunded investment thing, which probably explains the painfully overproduced nature of the following video:

A couple of things about this: The hard part, which the video skips over almost entirely, is the mapping, localization, and understanding of where to mow and where not to mow. The pitch deck seems to suggest that this is mostly done through computer vision, a thing that's perhaps easy to do under controlled, ideal conditions, but difficult to apply to a world full of lawns that are all different. The commercial aspect is interesting because golf courses are likely as standardized as you can get, but the emphasis here on how much money they can make, without really addressing any of the technical stuff, makes me raise an eyebrow or two.

[ Graze ]

The record & playback X-series arm demo allows the user to record the arm's movements while the motors are torqued off. Then, the user may torque the motors on and watch the movements they just made play back!
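
The concept is simple enough to sketch (using a hypothetical `arm` interface here, not the actual Interbotix API): sample the joint positions at a fixed rate while the motors are off, then torque the motors on and re-command the samples at the same rate.

```python
# Conceptual record-and-playback sketch; `arm` is a hypothetical interface.
import time

def record(arm, duration_s: float = 10.0, rate_hz: int = 50) -> list:
    arm.torque_off()                        # motors limp; user moves arm by hand
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append(arm.get_joint_positions())
        time.sleep(1.0 / rate_hz)
    return samples

def playback(arm, samples: list, rate_hz: int = 50) -> None:
    arm.torque_on()                         # motors active again
    for q in samples:
        arm.set_joint_positions(q)          # replay at the recorded rate
        time.sleep(1.0 / rate_hz)
```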

[ Interbotix ]

Shadow Robot has a new teleop system for its hand. I'm guessing that it's even trickier to use than it looks.

[ Shadow Robot ]

Quanser Interactive Labs is a collection of virtual, hardware-based laboratory activities that supplement traditional or online courses. Just as when working with physical systems in the lab, students work with virtual twins of Quanser's most popular plants, develop their mathematical models, implement and simulate the dynamic behavior of these systems, design controllers, and validate them on high-fidelity 3D real-time virtual models. The virtual systems not only look like the real ones, they also behave like them, and can be manipulated, measured, and controlled like real devices. And finally, when students go to the lab, they can deploy their virtually validated designs on actual physical equipment.

[ Quanser ]

This video shows robot-assisted heart surgery. It's amazing to watch if you haven't seen this sort of thing before, but be aware that there is a lot of blood.

This video demonstrates a fascinating case of robotic left atrial myxoma excision, narrated by Joel Dunning, Middlesbrough, UK. The robotic platform provides superior visualisation and enhanced dexterity through keyhole incisions. Robotic surgery is an integral part of our Minimally Invasive Cardiothoracic Surgery Program.

[ Tristan D. Yan ]

Thanks Fan!

In this talk, we present our work on learning control policies directly in simulation that are deployed onto real drones without any fine tuning. The presentation covers autonomous drone racing, drone acrobatics, and uncertainty estimation in deep networks.

[ RPG ]

Posted in Human Robots

#437689 GITAI Sending Autonomous Robot to Space ...

We’ve been keeping a close watch on GITAI since early last year—what caught our interest initially is the history of the company, which includes a bunch of folks who started in the JSK Lab at the University of Tokyo, won the DARPA Robotics Challenge Trials as SCHAFT, got swallowed by Google, narrowly avoided being swallowed by SoftBank, and are now designing robots that can work in space.

The GITAI YouTube channel has kept us more or less up to date on their progress so far, and GITAI has recently announced the next step in this effort: the deployment of one of their robots on board the International Space Station in 2021.

Photo: GITAI

GITAI’s S1 is a task-specific 8-degree-of-freedom arm with an integrated sensing and computing system and a 1-meter reach.

GITAI has been working on a variety of robots for space operations, the most sophisticated of which is a humanoid torso called G1, which is controlled through an immersive telepresence system. What will be launching into space next year is a more task-specific system called the S1, an 8-degree-of-freedom arm with an integrated sensing and computing system that can be wall-mounted and has a 1-meter reach.

The S1 will be living on board a commercially funded, pressurized airlock-extension module called Bishop, developed by NanoRacks. Mounted on the inside of the Bishop module, the S1 will have access to a task board and a small assembly area, where it will demonstrate common crew intra-vehicular activity, or IVA—tasks like flipping switches, turning knobs, and managing cables. It’ll also do some in-space assembly, or ISA, attaching panels to create a solar array.

Here’s a demonstration of some task board activities, conducted on Earth in a mockup of Bishop:

GITAI says that “all operations conducted by the S1 GITAI robotic arm will be autonomous, followed by some teleoperations from Nanoracks’ in-house mission control.” This is interesting, because from what we’ve seen until now, GITAI has had a heavy emphasis on telepresence, with a human in the loop to get stuff done. As GITAI’s founder and CEO Sho Nakanose commented to us a year ago, “Telepresence robots have far better performance and can be made practical much quicker than autonomous robots, so first we are working on making telepresence robots practical.”

So what’s changed? “GITAI has been concentrating on teleoperations to demonstrate the dexterity of our robot, but now it’s time to show our capabilities to do the same this time with autonomy,” Nakanose told us last week. “In an environment with minimum communication latency, it would be preferable to operate a robot more with teleoperations to enhance the capability of the robot, since with the current technology level of AI, what a robot can do autonomously is very limited. However, in an environment where the latency becomes noticeable, it would become more efficient to have a mixture of autonomy and teleoperations depending on the application. Eventually, in an ideal world, a robot will operate almost fully autonomously with minimum human cognizance.”

“In an environment where the latency becomes noticeable, it would become more efficient to have a mixture of autonomy and teleoperations depending on the application. Eventually, in an ideal world, a robot will operate almost fully autonomously with minimum human cognizance.”
—Sho Nakanose, GITAI founder and CEO

Nakanose says that this mission will help GITAI to “acquire the skills, know-how, and experience necessary to prepare a robot to be ISS compatible, prov[ing] the maturity of our technology in the microgravity environment.” Success would mean conducting both IVA and ISA experiments as planned (autonomous and teleop for IVA, fully autonomous for ISA), which would be pretty awesome, but we’re told that GITAI has already received a research and development order for space robots from a private space company, and Nakanose expects that “by the mid-2020s, we will be able to show GITAI's robots working in space on an actual mission.”

NanoRacks is scheduled to launch the Bishop module on SpaceX CRS-21 in November. The S1 will be launched separately in 2021, and a NASA astronaut will install the robot and then leave it alone to let it start demonstrating how work in space can be made both safer and cheaper once the humans have gotten out of the way.

Posted in Human Robots

#437671 Video Friday: Researchers 3D Print ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The Giant Gundam in Yokohama is actually way cooler than I thought it was going to be.

[ Gundam Factory ] via [ YouTube ]

A new 3D-printing method will make it easier to manufacture and control the shape of soft robots, artificial muscles and wearable devices. Researchers at UC San Diego show that by controlling the printing temperature of liquid crystal elastomer, or LCE, they can control the material’s degree of stiffness and ability to contract—also known as degree of actuation. What’s more, they are able to change the stiffness of different areas in the same material by exposing it to heat.

[ UCSD ]

Thanks Ioana!

This is the first successful reactive stepping test on our new torque-controlled biped robot, named Bolt. The robot has 3 active degrees of freedom per leg and one passive joint in the ankle. Since there is no active joint in the ankle, the robot relies only on step location and timing adaptation to stabilize its motion. Not only can the robot perform stepping without active ankles, but it is also capable of rejecting external disturbances, as we show in this video.

[ ODRI ]

The curling robot “Curly” is the first AI-based robot to demonstrate competitive curling skills in a real icy environment, with its high uncertainties. Scientists from seven different Korean research institutions, including Prof. Klaus-Robert Müller, head of the machine-learning group at TU Berlin and guest professor at Korea University, have developed the AI-based curling robot.

[ TU Berlin ]

MoonRanger, a small robotic rover being developed by Carnegie Mellon University and its spinoff Astrobotic, has completed its preliminary design review in preparation for a 2022 mission to search for signs of water at the moon’s south pole. Red Whittaker explains why the new MoonRanger Lunar Explorer design is innovative and different from prior planetary rovers.

[ CMU ]

Cobalt’s security robot can now navigate unmodified elevators, which is an impressive feat.

Also, EXTERMINATE!

[ Cobalt ]

OrionStar, the robotics company invested in by Cheetah Mobile, announced the Robotic Coffee Master. Incorporating 3,000 hours of AI learning, 30,000 hours of robotic arm testing and machine vision training, the Robotic Coffee Master can perform complex brewing techniques, such as curves and spirals, with millimeter-level stability and accuracy (reset error ≤ 0.1mm).

[ Cheetah Mobile ]

DARPA OFFensive Swarm-Enabled Tactics (OFFSET) researchers recently tested swarms of autonomous air and ground vehicles at the Leschi Town Combined Arms Collective Training Facility (CACTF), located at Joint Base Lewis-McChord (JBLM) in Washington. The Leschi Town field experiment is the fourth of six planned experiments for the OFFSET program, which seeks to develop large-scale teams of collaborative autonomous systems capable of supporting ground forces operating in urban environments.

[ DARPA ]

Here are some highlights from Team Explorer’s SubT Urban competition back in February.

[ Team Explorer ]

Researchers with the Skoltech Intelligent Space Robotics Laboratory have developed a system that allows easy interaction with a micro-quadcopter with LEDs that can be used for light-painting. The researchers used a 92x92x29 mm Crazyflie 2.0 quadrotor that weighs just 27 grams, equipped with a light reflector and an array of controllable RGB LEDs. The control system consists of a glove equipped with an inertial measurement unit (IMU; an electronic device that tracks the movement of a user’s hand), and a base station that runs a machine learning algorithm.

[ Skoltech ]

“DeKonBot” is the prototype of a cleaning and disinfection robot for potentially contaminated surfaces in buildings such as door handles, light switches or elevator buttons. While other cleaning robots often spray the cleaning agents over a large area, DeKonBot autonomously identifies the surface to be cleaned.

[ Fraunhofer IPA ]

On Oct. 20, the OSIRIS-REx mission will perform the first attempt of its Touch-And-Go (TAG) sample collection event. Not only will the spacecraft navigate to the surface using innovative navigation techniques, but it could also collect the largest sample since the Apollo missions.

[ NASA ]

With all the robotics research that seems to happen in places where snow is more of an occasional novelty or annoyance, it’s good to see NORLAB taking things more seriously.

[ NORLAB ]

Telexistence’s Model-T robot works very slowly, but very safely, restocking shelves.

[ Telexistence ] via [ YouTube ]

Roboy 3.0 will be unveiled next month!

[ Roboy ]

KUKA ready2_educate is your training cell for hands-on education in robotics. It is especially aimed at schools, universities and company training facilities. The training cell is a complete starter package and your perfect partner for entry into robotics.

[ KUKA ]

A UPenn GRASP Lab Special Seminar on Data Driven Perception for Autonomy, presented by Dapo Afolabi from UC Berkeley.

Perception systems form a crucial part of autonomous and artificial intelligence systems since they convert data about the relationship between an autonomous system and its environment into meaningful information. Perception systems can be difficult to build since they may involve modeling complex physical systems or other autonomous agents. In such scenarios, data driven models may be used to augment physics based models for perception. In this talk, I will present work making use of data driven models for perception tasks, highlighting the benefit of such approaches for autonomous systems.

[ GRASP Lab ]

A Maryland Robotics Center Special Robotics Seminar on Underwater Autonomy, presented by Ioannis Rekleitis from the University of South Carolina.

This talk presents an overview of algorithmic problems related to marine robotics, with a particular focus on increasing the autonomy of robotic systems in challenging environments. I will talk about vision-based state estimation and mapping of underwater caves. An application of monitoring coral reefs is going to be discussed. I will also talk about several vehicles used at the University of South Carolina such as drifters, underwater, and surface vehicles. In addition, a short overview of the current projects will be discussed. The work that I will present has a strong algorithmic flavour, while it is validated in real hardware. Experimental results from several testing campaigns will be presented.

[ MRC ]

This week’s CMU RI Seminar comes from Scott Niekum at UT Austin, on Scaling Probabilistically Safe Learning to Robotics.

Before learning robots can be deployed in the real world, it is critical that probabilistic guarantees can be made about the safety and performance of such systems. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable reward inference in the absence of models; (3) efficient off-policy policy evaluation. The proposed algorithms offer a blend of safety and practicality, making a significant step towards safe robot learning with modest amounts of real-world data.

[ CMU RI ]

Posted in Human Robots