Tag Archives: german

#439053 Bipedal Robots Are Learning To Move With ...

Most humans are bipeds, but even the best of us are really only bipeds until things get tricky. While our legs may be our primary mobility system, there are lots of situations in which we leverage our arms as well, either passively to keep balance or actively when we put out a hand to steady ourselves on a nearby object. Yet as unstable as bipedal robots tend to be, getting them to use anything besides their legs for mobility has been a challenge in both hardware and software, which is a significant limitation in highly unstructured environments.

Roboticists from TUM in Germany (with support from the German Research Foundation) have recently given their humanoid robot LOLA some major upgrades to make this kind of multi-contact locomotion possible. While it’s still in the early stages, it’s already some of the most human-like bipedal locomotion we’ve seen.

It’s certainly possible for bipedal robots to walk over challenging terrain without using their limbs for support, but I’m sure you can think of plenty of times when using your arms to assist with your own bipedal mobility was a requirement. It’s not necessarily a requirement because your leg strength, coordination, or sense of balance is bad. It’s just that sometimes you find yourself walking across something highly unstable, or in a situation where the consequences of a stumble are exceptionally high. And it may not even matter how much sensing you do beforehand or how careful your footstep planning is: there are limits to how much you can know about your environment in advance, and that can result in a really bad time. This is why multi-contact locomotion, whether it’s planned in advance or not, is a useful skill for humans, and should be for robots, too.

As the video notes (and props for being explicit up front about it), this isn’t yet fully autonomous behavior, with foot positions and arm contact points set by hand in advance. But it’s not much of a stretch to see how everything could be done autonomously, since one of the really hard parts (using multiple contact points to dynamically balance a moving robot) is being done onboard and in real time.

Getting LOLA to be able to do this required a major overhaul in hardware as well as software. And Philipp Seiwald, who works with LOLA at TUM, was able to tell us more about it.

IEEE Spectrum: Can you summarize the changes to LOLA’s hardware that are required for multi-contact locomotion?

Philipp Seiwald: The original version of LOLA was designed for fast biped walking. Although it had two arms, they were not meant to make contact with the environment but rather to compensate for the dynamic effects of the feet during fast walking. Also, the torso had a relatively simple design that was fine for its original purpose; however, it was not conceived to withstand the high loads coming through the hands during multi-contact maneuvers. Thus, we redesigned LOLA's complete upper body from scratch. Starting from the pelvis, we increased the strength and stiffness of the torso, using the finite element method to optimize critical parts for maximum strength at minimum weight. Moreover, we added degrees of freedom to the arms to increase the hands' reachable workspace. The kinematic topology of the arms, i.e., the arrangement of joints and link lengths, was obtained from an optimization that takes typical multi-contact scenarios into account.

Why is this an important problem for bipedal humanoid robots?

Maintaining balance during locomotion can be considered the primary goal of legged robots. Naturally, this task is more challenging for bipeds when compared to robots with four or even more legs. Although current high-end prototypes show impressive progress, humanoid robots still do not have the robustness and versatility they need for most real-world applications. With our research, we try to contribute to this field and help to push the limits further. Recently, we showed our latest work on walking over uneven terrain without multi-contact support. Although the robustness is already high, there still exist scenarios, such as walking on loose objects, where the robot's stabilization fails when using only foot contacts. The use of additional hand-environment support during this (comparatively) fast walking allows a further significant increase in robustness, i.e., the robot's capability to compensate for disturbances, modeling errors, or inaccurate sensor input. Besides stabilization on uneven terrain, multi-contact locomotion also enables more complex motions, e.g., stepping over a tall obstacle or toe-only contacts, as shown in our latest multi-contact video.

How can LOLA decide whether a surface is suitable for multi-contact locomotion?

LOLA’s visual perception system is currently being developed by our project partners from the Chair for Computer Aided Medical Procedures & Augmented Reality at TUM. This system relies on a novel semantic Simultaneous Localization and Mapping (SLAM) pipeline that can robustly extract the scene's semantic components (such as the floor, walls, and the objects within them) by merging multiple observations from different viewpoints and inferring the underlying scene graph from them. This provides a reliable estimate of which parts of the scene can be used to support locomotion, based on the assumption that certain structural elements, such as walls, are fixed, while chairs, for example, are not.

Also, the team plans to develop a dataset with annotations that further describe object attributes (such as surface roughness or softness), which will be used to master multi-contact locomotion in even more complex scenes. As of today, the vision and navigation system is not yet finished; thus, in our latest video, we used pre-defined footholds and contact points for the hands. However, within our collaboration, we are working towards a fully integrated and autonomous system.
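In spirit, the support-selection step Seiwald describes amounts to filtering the scene graph by semantic class. Here is a toy sketch of that idea; the class lists and node format are invented for illustration and are not the project's actual taxonomy:

```python
# Toy filter over a semantic scene graph: which parts of the scene may
# the hands push against? Class lists are illustrative guesses only.
FIXED_CLASSES = {"wall", "floor", "railing", "pillar"}  # assumed load-bearing
MOVABLE_CLASSES = {"chair", "box", "stool"}             # assumed unreliable

def support_candidates(scene_graph):
    """Return nodes usable as hand-support contacts: fixed structural
    elements that are within reach of the arms."""
    return [node for node in scene_graph
            if node["class"] in FIXED_CLASSES and node["reachable"]]

scene = [
    {"class": "wall", "reachable": True},    # accepted
    {"class": "chair", "reachable": True},   # movable: rejected
    {"class": "wall", "reachable": False},   # out of reach: rejected
]
```

The real pipeline would of course attach geometry, pose uncertainty, and (eventually) the planned roughness and softness annotations to each node, but the fixed-versus-movable split is the core assumption the interview mentions.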

Is LOLA capable of both proactive and reactive multi-contact locomotion?

The software framework of LOLA has a hierarchical structure. On the highest level, the vision system generates an environment model and estimates the 6D-pose of the robot in the scene. The walking pattern generator then uses this information to plan a dynamically feasible future motion that will lead LOLA to a target position defined by the user. On a lower level, the stabilization module modifies this plan to compensate for model errors or any kind of disturbance and keep overall balance. So our approach currently focuses on proactive multi-contact locomotion. However, we also plan to work on a more reactive behavior such that additional hand support can also be triggered by an unexpected disturbance instead of being planned in advance.
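The layering Seiwald describes can be pictured as a simple top-to-bottom control loop. This is an illustrative sketch only, with invented names and a toy one-dimensional "state"; LOLA's real stack plans full 6D poses and whole-body contact forces:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    footsteps: list       # planned contact points (feet, and hands if used)
    com_trajectory: list  # desired center-of-mass positions

def vision_layer(sensor_data):
    """Highest level: build an environment model and estimate the
    robot's pose in the scene."""
    return {"pose": sensor_data["pose"], "terrain": sensor_data["map"]}

def pattern_generator(env, goal):
    """Middle level: plan a dynamically feasible motion that leads
    the robot to the user-defined target."""
    return Plan(footsteps=[env["pose"], goal], com_trajectory=[goal])

def stabilizer(plan, correction):
    """Lowest level: modify the plan online to compensate for model
    errors and disturbances while keeping balance."""
    plan.com_trajectory = [c + correction for c in plan.com_trajectory]
    return plan

def control_step(sensor_data, goal, correction=0.0):
    env = vision_layer(sensor_data)
    plan = pattern_generator(env, goal)
    return stabilizer(plan, correction)
```

The key point the sketch captures is that the plan is produced proactively at the top and only adjusted, not replanned, by the stabilization layer at the bottom, which is why reactive hand support would require new machinery.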

What are some examples of unique capabilities that you are working towards with LOLA?

One of the main goals for the research with LOLA remains fast, autonomous, and robust locomotion on complex, uneven terrain. We aim to reach a walking speed similar to humans. Currently, LOLA can do multi-contact locomotion and cross uneven terrain at a speed of 1.8 km/h, which is comparably fast for a biped robot but still slow for a human. On flat ground, LOLA's high-end hardware allows it to walk at a relatively high maximum speed of 3.38 km/h.

Fully autonomous multi-contact locomotion for a life-sized humanoid robot is a tough task. As algorithms get more complex, computation time increases, which often results in offline motion planning methods. For LOLA, we restrict ourselves to gaited multi-contact locomotion, which means that we try to preserve the core characteristics of bipedal gait and use the arms only for assistance. This allows us to use simplified models of the robot, which lead to very efficient algorithms that run in real time, fully onboard.

A long-term scientific goal with LOLA is to understand the essential components and control policies of human walking. LOLA's leg kinematics is relatively similar to that of the human body. Together with scientists from kinesiology, we try to identify similarities and differences between observed human walking and LOLA’s “engineered” walking gait. We hope this research leads, on the one hand, to new ideas for the control of bipeds, and on the other hand, shows via experiments on bipeds whether biomechanical models of human gait are correctly understood. For a comparison of control policies on uneven terrain, LOLA must be able to walk at comparable speeds, which also motivates our research on fast and robust walking.

While it makes sense why the researchers are using LOLA’s arms primarily to assist with a conventional biped gait, looking ahead a bit it’s interesting to think about how robots that we typically consider to be bipeds could potentially leverage their limbs for mobility in decidedly non-human ways.

We’re used to legged robots having one particular morphology, I guess because associating them with humans or dogs or whatever is just a comfortable way of thinking about them. But there’s no particular reason why a robot with four limbs has to choose between being a quadruped and being a biped with arms; it could be some hybrid of the two, depending on what its task is. The research being done with LOLA could be a step in that direction, and maybe a hand on the wall in that direction, too. Continue reading

Posted in Human Robots

#437882 Video Friday: MIT Mini-Cheetah Robots ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICCR 2020 – December 26-29, 2020 – [Online Conference]
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
Let us know if you have suggestions for next week, and enjoy today's videos.

What a lovely Christmas video from Norlab.

[ Norlab ]

Thanks Francois!

MIT Mini-Cheetahs are looking for a new home. Our new cheetah cubs, born at NAVER LABS, are for the MIT Mini-Cheetah workshop. MIT professor Sangbae Kim and his research team are supporting joint research by distributing Mini-Cheetahs to researchers all around the world.

[ NAVER Labs ]

For several years, NVIDIA’s research teams have been working to leverage GPU technology to accelerate reinforcement learning (RL). As a result of this promising research, NVIDIA is pleased to announce a preview release of Isaac Gym – NVIDIA’s physics simulation environment for reinforcement learning research. RL-based training is now more accessible as tasks that once required thousands of CPU cores can now instead be trained using a single GPU.

[ NVIDIA ]

At SINTEF in Norway, they're working on ways of using robots to keep tabs on giant floating cages of tasty fish:

One of the tricky things about operating robots in an environment like this is localization, so SINTEF is working on a solution that uses beacons:

While that video shows a lot of simulation (because otherwise there are tons of fish in the way), we're told that the autonomous navigation has been successfully demonstrated with an ROV in “a full scale fish farm with up to 200,000 salmon swimming around the robot.”

[ SINTEF ]

Thanks Eleni!

We’ve been getting ready for the snow in the most BG way possible. Wishing all of you a happy and healthy holiday season.

[ Berkshire Grey ]

ANYbotics doesn’t care what time of the year it is, so Happy Easter!

And here's a little bit about why ANYmal C looks the way it does.

[ ANYbotics ]

Robert “Buz” Chmielewski is using two modular prosthetic limbs developed by APL to feed himself dessert. Smart software puts his utensils in roughly the right spot, and then Buz uses his brain signals to cut the food with knife and fork. Once he is done cutting, the software then brings the food near his mouth, where he again uses brain signals to bring the food the last several inches to his mouth so that he can eat it.

[ JHUAPL ]

Introducing VESPER: a new military-grade small drone that is designed, sourced and built in the United States. Vesper offers a 50-minute flight time, with speeds up to 45 mph (72 kph) and a total flight range of 25 miles (45 km). The magnetic snap-together architecture enables extremely fast transitions: the battery, props and rotor set can each be swapped in <5 seconds.

[ Vantage Robotics ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale Faboratory ]

Get a preview of the cave environments that are being used to inspire the Final Event competition course of the DARPA Subterranean Challenge. In the Final Event, teams will deploy their robots to rapidly map, navigate, and search in competition courses that combine elements of man-made tunnel systems, urban underground, and natural cave networks!

The reason to pay attention to this particular video is that it gives us some idea of what DARPA means when they say "cave."

[ SubT ]

The MQ-25 takes another step toward unmanned aerial refueling for the U.S. Navy. The MQ-25 test asset has flown for the first time with an aerial refueling pod containing the hose and basket that will make it an aerial refueler.

[ Boeing ]

We present a unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain. We utilize on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans using a reinforcement learning (RL) policy trained in simulation over a wide range of procedurally generated terrains.

[ DRS ]

The video shows the results of the German research project RoPHa. Within the project, the partners developed technologies for two application scenarios with the service robot Care-O-bot 4 in order to support people in need of help when eating.

[ RoPHa Project ]

Thanks Jenny!

This looks like it would be fun, if you are a crazy person.

[ Team BlackSheep ]

Robot accuracy is the limiting factor in many industrial applications. Manufacturers often only specify the pose repeatability values of their robotic systems. Fraunhofer IPA has set up a testing environment for automated measurement of the accuracy performance criteria of industrial robots. Following the procedures defined in the ISO 9283 standard allows generating reliable and repeatable results, which can form the basis for targeted measures to increase a robotic system’s accuracy.

[ Fraunhofer ]

Thanks Jenny!

The IEEE Women in Engineering – Robotics and Automation Society (WIE-RAS) hosted an online panel on best practices for teaching robotics. The diverse panel boasts experts in robotics education from a variety of disciplines, institutions, and areas of expertise.

[ IEEE RAS ]

Northwestern researchers have developed a first-of-its-kind soft, aquatic robot that is powered by light and rotating magnetic fields. These life-like robotic materials could someday be used as "smart" microscopic systems for production of fuels and drugs, environmental cleanup or transformative medical procedures.

[ Northwestern ]

Tech United Eindhoven's soccer robots now have eight wheels instead of four wheels, making them twelve times better, if my math is right.

[ TU Eindhoven ] Continue reading

Posted in Human Robots

#436184 Why People Demanded Privacy to Confide ...

This is part four of a six-part series on the history of natural language processing.

Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot [PDF].

While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”

He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.

Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.

To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the type of conversational style used by a Rogerian psychotherapist. The program would take something the user said and reflect it back with a question, as can be seen in the example below. (Notice how the program takes keywords like “boyfriend” and “depressed” and re-uses them.)

Human: Men are all alike.

Computer: IN WHAT WAY?

Human: They're always bugging us about something or other.

Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE?

Human: Well, my boyfriend made me come here.

Computer: YOUR BOYFRIEND MADE YOU COME HERE

Human: He says I'm depressed much of the time.

Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED

Human: It's true. I am unhappy.

Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
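The keyword-and-reflection trick is simple enough to sketch in a few lines. The rules and pronoun table below are invented for illustration; Weizenbaum's actual script (the "DOCTOR" script written in MAD-SLIP) was far richer, with ranked keywords and decomposition and reassembly rules:

```python
# Pronoun swaps let the program mirror a statement back at the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "i'm": "you're", "mine": "yours"}

RULES = [  # (keyword, response template), scanned in order
    ("depressed", "I am sorry to hear you are depressed"),
    ("always", "can you think of a specific example?"),
    ("my", "your {0}"),  # echo the clause after "my", pronouns swapped
]

def reflect(words):
    """Swap first-person words for second-person ones."""
    return [REFLECTIONS.get(w.lower(), w) for w in words]

def respond(sentence):
    """Scan for the first matching keyword and fill its template with
    the reflected remainder of the user's sentence."""
    words = sentence.rstrip(".!?").split()
    lowered = [w.lower() for w in words]
    for keyword, template in RULES:
        if keyword in lowered:
            tail = words[lowered.index(keyword) + 1:]
            return template.format(" ".join(reflect(tail))).upper()
    return "IN WHAT WAY?"  # content-free default when nothing matches
```

Feeding the transcript's lines through `respond` reproduces replies in the style shown above: the program never models what a boyfriend or depression *is*, it only shuffles the user's own words back at them.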

To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.

During their brief interactions with Eliza, many users began forming emotional attachments to the algorithm.

Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.

For Weizenbaum, this experiment with Eliza made him question an idea that Alan Turing had proposed in 1950 about machine intelligence. In his paper, entitled “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, one could assume it was intelligent—an idea that became the basis of the famous Turing Test.

But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.

The simulation of intelligence, rather than intelligence itself, was enough to fool people.

In 1976, he published Computer Power and Human Reason: From Judgment to Calculation [PDF], which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.

In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.

Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.

This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”

You can also check out our prior series on the untold history of AI. Continue reading

Posted in Human Robots

#435757 Robotic Animal Agility

An off-shore wind power platform, somewhere in the North Sea, on a freezing cold night, with howling winds and waves crashing against the impressive structure. An imperturbable ANYmal is quietly conducting its inspection.

ANYmal, a medium sized dog-like quadruped robot, walks down the stairs, lifts a “paw” to open doors or to call the elevator and trots along corridors. Darkness is no problem: it knows the place perfectly, having 3D-mapped it. Its laser sensors keep it informed about its precise path, location and potential obstacles. It conducts its inspection across several rooms. Its cameras zoom in on counters, recording the measurements displayed. Its thermal sensors record the temperature of machines and equipment and its ultrasound microphone checks for potential gas leaks. The robot also inspects lever positions as well as the correct positioning of regulatory fire extinguishers. As the electronic buzz of its engines resumes, it carries on working tirelessly.

After a little over two hours of inspection, the robot returns to its docking station for recharging. It will soon head back out to conduct its next solitary patrol. ANYmal played alongside Mulder and Scully in the “X-Files” TV series*, but it is in no way a Hollywood robot. It genuinely exists and surveillance missions are part of its very near future.

Off-shore oil platforms were the first test fields and will probably be the first actual application of ANYmal. ©ANYbotics

This quadruped robot was designed by ANYbotics, a spinoff of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Made of carbon fibre and aluminium, it weighs about thirty kilos. It is fully ruggedised, water- and dust-proof (IP-67). A kevlar belly protects its main body, carrying its powerful brain, batteries, network device, power management system and navigational systems.

ANYmal was designed for all types of terrain, including rubble, sand or snow. It has been field tested on industrial sites and is at ease with new obstacles to overcome (and it can even get up after a fall). Depending on its mission, its batteries last 2 to 4 hours.

On its jointed legs, protected by rubber pads, it can walk (at the speed of human steps), trot, climb, curl upon itself to crawl, carry a load or even jump and dance. It is the need to move on all surfaces that has driven its designers to choose a quadruped. “Biped robots are not easy to stabilise, especially on irregular terrain” explains Dr Péter Fankhauser, co-founder and chief business development officer of ANYbotics. “Wheeled or tracked robots can carry heavy loads, but they are bulky and less agile. Flying drones are highly mobile, but cannot carry load, handle objects or operate in bad weather conditions. We believe that quadrupeds combine the optimal characteristics, both in terms of mobility and versatility.”

The source of inspiration for the team behind the project, the Robotic Systems Lab of ETH Zurich, is a champion of agility on rugged terrain: the mountain goat. “We are of course still a long way off,” says Fankhauser. “However, it remains our objective in the longer term.”

The first prototype, ALoF, was designed back in 2009. It was still rather slow, very rigid, and clumsy, more of a proof of concept than a robot ready for application. In 2012, StarlETH, fitted with spring joints, could hop, jump, and climb. It was with this robot that the team started participating, in 2014, in ARGOS, a full-scale challenge launched by the Total oil group. The idea was to present a robot capable of inspecting an off-shore drilling station autonomously.

Up against dozens of competitors, the ETH Zurich team was the only one to enter the competition with a quadrupedal robot. They didn’t win, but the multiple field tests were growing ever more convincing, especially because, during the challenge, the team designed new joints with elastic actuators made in-house. These joints, inspired by tendons and muscles, are compact, sealed, and include their own custom control electronics. They can regulate joint torque, position, and impedance directly. Thanks to this innovation, the team could enter the same competition with a new version of its robot, ANYmal, fitted with three joints on each leg.

The ARGOS experience confirmed the relevance of the selected means of locomotion. “Our robot is lighter, takes up less space on site, and is less noisy,” says Fankhauser. “It also overcomes bigger obstacles than larger wheeled or tracked robots!” As ANYmal generated public interest and its transformation into a genuine product seemed more than possible, the startup ANYbotics was launched in 2016. It sold not only its robot, but also its revolutionary joints, called ANYdrive.

Today, ANYmal is not yet ready for sale to companies. However, ANYbotics has a growing number of partnerships with several industries, testing the robot for a few days or several weeks, for all types of tasks. Last October, for example, ANYmal navigated its way through the dark sewage system of the city of Zurich in order to test its capacity to help workers in similar difficult, repetitive and even dangerous tasks.

Why such an early interest among companies? “Because many companies want to integrate robots into their maintenance tasks” answers Fankhauser. “With ANYmal, they can actually evaluate its feasibility and plan their strategy. Eventually, both the architecture and the equipment of buildings could be rethought to be adapted to these maintenance robots”.

ANYmal requires ruggedised, sealed and extremely reliable interconnection solutions, such as LEMO. ©ANYbotics

Through field demonstrations and testing, ANYbotics can gather masses of information (up to 50,000 measurements are recorded every second during each test!). “It helps us to shape the product.” In due time, the startup will be ready to deliver a commercial product that really caters to companies’ needs.

Inspection and surveillance tasks on industrial sites are not the only applications considered. The startup is also thinking of agricultural inspections – with its onboard sensors, ANYmal is capable of mapping its environment, measuring biomass and even taking soil samples. In the longer term, it could also be used for search and rescue operations. By the way, the robot can already be switched to “remote control” mode at any time and can be easily tele-operated. It is also capable of live audio and video transmission.

The transition from the prototype to the marketed product stage will involve a number of further developments. These include increasing ANYmal’s agility and speed, extending its capacity to map large-scale environments, improving safety, security, user handling and integrating the system with the customer’s data management software. It will also be necessary to enhance the robot’s reliability “so that it can work for days, weeks, or even months without human supervision.” All required certifications will have to be obtained. The locomotion system, which had triggered the whole business, is only one of a number of considerations of ANYbotics.

Designed for extreme environments, for ANYmal smoke is not a problem and it can walk in the snow, through rubble or in water. ©ANYbotics

The startup is not all alone. In fact, it has sold ANYmal robots to a dozen major universities that use them to develop their know-how in robotics. The startup has also founded ANYmal Research, a community whose members include the Toyota Research Institute, the German Aerospace Center, and the computer company Nvidia. Members have full access to ANYmal’s control software, simulations, and documentation. Sharing has boosted both software and hardware ideas and developments (built on ROS, the open-source Robot Operating System), in particular payload variations that provide for expandability and scalability. For instance, one of the universities uses a robotic arm that enables ANYmal to grasp or handle objects and open doors.

Among possible applications, ANYbotics mentions entertainment. It is not only about playing in more films or TV series, but rather about participating in various attractions (trade shows, museums, etc.). “ANYmal is so novel that it attracts a great amount of interest” confirms Fankhauser with a smile. “Whenever we present it somewhere, people gather around.”

Videos of these events show a fascinated and sometimes slightly fearful audience, when ANYmal gets too close to them. Is it fear of the “bad robot”? “This fear exists indeed and we are happy to be able to use ANYmal also to promote public awareness towards robotics and robots.” Reminiscent of a young dog, ANYmal is truly adapted for the purpose.

However, Péter Fankhauser softens the image of humans and sophisticated robots living together. “In the coming years, robots will continue to work in the background, like they have for a long time in factories. Then, they will be used in public places in a selective and targeted way, for instance for dangerous missions. We will need to wait another ten years before animal-like robots such as ANYmal share our everyday lives!”

At the Consumer Electronics Show (CES) in Las Vegas in January, Continental, the German automotive manufacturing company, used robots to demonstrate a last-mile delivery. It showed ANYmal getting out of an autonomous vehicle with a parcel, climbing onto the front porch, lifting a paw to ring the doorbell, depositing the parcel before getting back into the vehicle. This futuristic image seems very close indeed.

*X-Files, season 11, episode 7, aired in February 2018 Continue reading

Posted in Human Robots

#435750 Video Friday: Amazon CEO Jeff Bezos ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
Let us know if you have suggestions for next week, and enjoy today’s videos.

Last week at the re:MARS conference, Amazon CEO and aspiring supervillain Jeff Bezos tried out this pair of dexterous robotic hands, which he described as “weirdly natural” to operate. The system combines Shadow Robot’s anthropomorphic robot hands with SynTouch’s biomimetic tactile sensors and HaptX’s haptic feedback gloves.

After playing with the robot, Bezos let out his trademark evil laugh.

[ Shadow Robot ]

The RoboMaster S1 is DJI’s advanced new educational robot that opens the door to limitless learning and entertainment. Develop programming skills, get familiar with AI technology, and enjoy thrilling FPV driving with games and competition. From young learners to tech enthusiasts, get ready to discover endless possibilities with the RoboMaster S1.

[ DJI ]

It’s very impressive to see DLR’s humanoid robot Toro dynamically balancing, even while being handed heavy objects, pushing things, and using multi-contact techniques to kick a fire extinguisher for some reason.

The paper is in RA-L, and you can find it at the link below.

[ RA-L ] via [ DLR ]

Thanks Maximo!

Is it just me, or does the Suzumori Endo Robotics Laboratory’s Super Dragon arm somehow just keep getting longer?

Suzumori Endo Lab, Tokyo Tech developed a 10 m-long articulated manipulator for investigation inside the primary containment vessel of the Fukushima Daiichi Nuclear Power Plants. We employed a coupled tendon-driven mechanism and a gravity compensation mechanism using synthetic fiber ropes to design a lightweight and slender articulated manipulator. This work was published in IEEE Robotics and Automation Letters and Transactions of the JSME.
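The gravity compensation the lab describes is about canceling the static torque that each joint must otherwise bear along a long, heavy arm. As a rough back-of-the-envelope sketch (not the lab's actual mechanism, and assuming link masses lumped at each link's midpoint), here is how those static torques stack up along a planar serial arm:

```python
import math

def gravity_torques(masses, lengths, angles, g=9.81):
    """Static gravity torque at each joint of a planar serial arm.

    masses[i]  - mass of link i (kg), lumped at the link midpoint
    lengths[i] - length of link i (m)
    angles[i]  - absolute angle of link i from horizontal (rad)
    Returns the joint torques (N*m) that a compensation mechanism
    (counterweights, springs, or tensioned ropes) must supply so the
    actuators don't have to hold the arm's own weight.
    """
    n = len(masses)
    # horizontal position of each joint along the chain
    joint_x = [0.0]
    for length, angle in zip(lengths, angles):
        joint_x.append(joint_x[-1] + length * math.cos(angle))
    # horizontal position of each link's center of mass
    com_x = [joint_x[i] + 0.5 * lengths[i] * math.cos(angles[i])
             for i in range(n)]
    # torque at joint i: weight of every outboard link times its moment arm
    return [sum(masses[j] * g * (com_x[j] - joint_x[i])
                for j in range(i, n))
            for i in range(n)]

# Two 1 kg, 1 m links held horizontal: the base joint carries both weights.
print(gravity_torques([1.0, 1.0], [1.0, 1.0], [0.0, 0.0]))
```

The takeaway is why a 10 m arm is so hard: the base joint's torque grows with the moment arm of every outboard link, which is what makes routing the compensation through lightweight synthetic ropes (rather than heavier motors at each joint) attractive.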

[ Suzumori Endo Lab ]

From what I can make out thanks to Google Translate, this cute little robot duck (developed by Nissan) helps minimize weeds in rice fields by stirring up the water.

[ Nippon.com ]

Confidence in your robot is when you can just casually throw it off a balcony 15 meters up.

[ SUTD ]

You had me at “we’re going to completely submerge this apple in chocolate syrup.”

[ Soft Robotics Inc ]

In the mid-2020s, the European Space Agency is planning to send a robotic sample return mission to the Moon. It’s called Heracles, after the noted snake-strangler of Greek mythology.

[ ESA ]

Rethink Robotics is still around; they’re just much more German than before. And Sawyer is still hard at work stealing jobs from humans.

[ Rethink Robotics ]

The reason to watch this new video of the Ghost Robotics Vision 60 quadruped is the 3 seconds’ worth of barrel roll about 40 seconds in.

[ Ghost Robotics ]

This is a relatively low-altitude drop for Squishy Robotics’ tensegrity scout, but it’s still cool to watch a robot that’s resilient enough to fall and just not worry about it.

[ Squishy Robotics ]

We control here the Apptronik DRACO bipedal robot for unsupported dynamic locomotion. DRACO consists of a 10 DoF lower body with liquid cooled viscoelastic actuators to reduce weight, increase payload, and achieve fast dynamic walking. Control and walking algorithms are designed by UT HCRL Laboratory.

I think all robot videos should be required to start with two “oops” clips followed by a “for real now” clip.

[ Apptronik ]

SAKE’s EZGripper manages to pick up a wrench, and also pick up a raspberry without turning it into instajam.

[ SAKE Robotics ]

And now: the robotic long-tongued piggy, courtesy Sony Toio.

[ Toio ]

In this video, the ornithopter developed within the ERC Advanced Grant GRIFFIN project performs its first flight. This project aims to develop a flapping-wing system with manipulation and human interaction capabilities.

A flapping-wing system with manipulation and human interaction capabilities, you say? I would like to subscribe to your newsletter.

[ GRVC ]

KITECH’s robotic hands and arms can manipulate, among other things, five boxes of Elmos. I’m not sure about the conversion of Elmos to Snuffleupaguses, although it turns out that one Snuffleupagus is exactly 1,000 pounds.

[ Ji-Hun Bae ]

The Australian Centre for Field Robotics (ACFR) has been working on agricultural robots for almost a decade, and this video sums up a bunch of the stuff that they’ve been doing, even if it’s more amusing than practical at times.

[ ACFR ]

ROS 2 is great for multi-robot coordination, like when you need your bubble level to stay really, really level.

[ Acutronic Robotics ]

We don’t hear iRobot CEO Colin Angle give a lot of talks, so this recent one (from Amazon’s re:MARS conference) is definitely worth a listen, especially considering how much innovation we’ve seen from iRobot recently.

Colin Angle, founder and CEO of iRobot, unveiled a series of breakthrough innovations in home robots from iRobot. For the first time on stage, he discussed and demonstrated what it takes to build a truly intelligent system of robots that work together to accomplish more within the home – and enable that home, and the devices within it, to work together as one.

[ iRobot ]

In the latest episode of Robots in Depth, Per speaks with Federico Pecora from the Center for Applied Autonomous Sensor Systems at Örebro University in Sweden.

Federico talks about working on AI and service robotics. In this area he has worked on planning, with a particular focus on how a robot decides which goal it should work on. To make robots as useful and user-friendly as possible, he works on inferring the goal from the robot’s environment so that the user does not have to tell the robot everything.

Federico has also worked with AI robotics planning in industry to optimize results. Managing the relative importance of tasks is another challenging area there. In this context, he works on automating not only a single robot for its goal, but an entire fleet of robots for their collective goal. We get to hear about how these techniques are being used in warehouse operations, in mines and in agriculture.

[ Robots in Depth ]
