Tag Archives: 2020

#437590 Why We Need a Robot Registry


I have a confession to make: A robot haunts my nightmares. For me, Boston Dynamics’ Spot robot is 32.5 kilograms (71.7 pounds) of pure terror. It can climb stairs. It can open doors. Seeing it in a video cannot prepare you for the moment you cross paths on a trade-show floor. Now that companies can buy a Spot robot for US $74,500, you might encounter Spot anywhere.

Spot robots now patrol public parks in Singapore to enforce social distancing during the pandemic. They meet with COVID-19 patients at Boston’s Brigham and Women’s Hospital so that doctors can conduct remote consultations. Imagine coming across Spot while walking in the park or returning to your car in a parking garage. Wouldn’t you want to know why this hunk of metal is there and who’s operating it? Or at least whom to call to report a malfunction?

Robots are becoming more prominent in daily life, which is why I think governments need to create national registries of robots. Such a registry would let citizens and law enforcement look up the owner of any roaming robot, as well as learn that robot’s purpose. It’s not a far-fetched idea: The U.S. Federal Aviation Administration already has a registry for drones.

Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots.
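
As a concrete (and entirely hypothetical) sketch of what a single registry entry could contain, here is the proposal reduced to a small Python data structure; every field name and value below is illustrative, not part of any existing system.

```python
from dataclasses import dataclass

# Hypothetical schema: the fields mirror the information proposed above
# (make and model, purpose, operator contact, visible marker), nothing more.
@dataclass
class RobotRegistration:
    marker_id: str   # identifier printed on the robot's body, like a license plate
    make: str
    model: str
    purpose: str
    operator: str
    contact: str     # whom to call if the robot breaks down or causes problems

# A toy in-memory registry, keyed by the visible marker.
registry = {
    "SPOT-0417": RobotRegistration(
        marker_id="SPOT-0417",
        make="Boston Dynamics",
        model="Spot",
        purpose="Park patrol: social-distancing announcements",
        operator="Example Parks Department",
        contact="+1-555-0100",
    )
}

# A passerby or police officer reads the marker off the robot and looks it up.
print(registry["SPOT-0417"].contact)
```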

There are some smaller-scale registries today. San Jose’s Department of Transportation (SJDOT), for example, is working with Kiwibot, a delivery robot manufacturer, to get real-time data from the robots as they roam the city’s streets. The Kiwibots report their location to SJDOT using the open-source Mobility Data Specification, which was originally developed by Los Angeles to track Bird scooters.
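
To give a sense of what that kind of reporting involves, here is a minimal Python sketch of a robot pushing a location report to a city endpoint. It only approximates the flavor of an MDS-style telemetry message (device ID, timestamp, GPS fix); the endpoint URL and exact field names are assumptions, not the real Mobility Data Specification schema.

```python
import json
import time
import urllib.request

# Illustrative only: field names approximate an MDS-style telemetry report,
# and the endpoint URL is a placeholder.
def report_location(device_id: str, lat: float, lng: float,
                    endpoint: str = "https://mds.example.city.gov/telemetry"):
    payload = {
        "device_id": device_id,
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "gps": {"lat": lat, "lng": lng},
    }
    request = urllib.request.Request(
        endpoint,
        data=json.dumps({"telemetry": [payload]}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(request)  # the city agency ingests the report

# A sidewalk robot might call this every few seconds while roaming:
# report_location("kiwibot-0042", 37.3382, -121.8863)
```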

Real-time location reporting makes sense for Kiwibots and Spots wandering the streets, but it’s probably overkill for bots confined to cleaning floors or patrolling parking lots. That said, any robots that come in contact with the general public should clearly provide basic credentials and a way to hold their operators accountable. Given that many robots use cameras, people may also be interested in looking up who’s collecting and using that data.

I started thinking about robot registries after Spot became available in June for anyone to purchase. The idea gained specificity after I listened to Andra Keay, founder and managing director at Silicon Valley Robotics, discuss her five rules of ethical robotics at an Arm event in October. I had already been thinking that we needed some way to track robots, but her suggestion to tie robot license plates to a formal registry made me realize that people also need a way to clearly identify individual robots.

Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.

This article appears in the December 2020 print issue as “Who’s Behind That Robot?” Continue reading

Posted in Human Robots

#437583 Video Friday: Attack of the Hexapod ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-November 25, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Happy Halloween from HEBI Robotics!

Thanks Hardik!

[ HEBI Robotics ]

Happy Halloween from Berkshire Grey!

[ Berkshire Grey ]

These are some preliminary results of our lab’s new work on using reinforcement learning to train neural networks to imitate common bipedal gait behaviors, without using any motion capture data or reference trajectories. Our method is described in an upcoming submission to ICRA 2021. Work by Jonah Siekmann and Yesh Godse.

[ OSU DRL ]

The northern goshawk is a fast, powerful raptor that flies effortlessly through forests. This bird was the design inspiration for the next-generation drone developed by scientists at EPFL’s Laboratory of Intelligent Systems, led by Dario Floreano. They carefully studied the shape of the bird’s wings and tail and its flight behavior, and used that information to develop a drone with similar characteristics.

The engineers had already designed a bird-inspired drone with morphing wings back in 2016. In a step forward, their new model can adjust the shape of its wing and tail thanks to its artificial feathers. Flying this new type of drone isn’t easy, due to the large number of wing and tail configurations possible. To take full advantage of the drone’s flight capabilities, Floreano’s team plans to incorporate artificial intelligence into the drone’s flight system so that it can fly semi-automatically. The team’s research has been published in Science Robotics.

[ EPFL ]

Oopsie.

[ Roborace ]

We’ve covered MIT’s Roboats in the past, but now they’re big enough to keep a couple of people afloat.

Self-driving boats have been able to transport small items for years, but adding human passengers has felt somewhat intangible due to the current size of the vessels. Roboat II is the “half-scale” boat in the growing body of work, and joins the previously developed quarter-scale Roboat, which is 1 meter long. The third installment, which is under construction in Amsterdam and is considered to be “full scale,” is 4 meters long and aims to carry anywhere from four to six passengers.

[ MIT ]

With a training technique commonly used to teach dogs to sit and stay, Johns Hopkins University computer scientists showed a robot how to teach itself several new tricks, including stacking blocks. With the method, the robot, named Spot, was able to learn in days what typically takes a month.

[ JHU ]

Exyn, a pioneer in autonomous aerial robot systems for complex, GPS-denied industrial environments, today announced the first dog, Kody, to successfully fly a drone at Number 9 Coal Mine, in Lansford, PA. Selected to carry out this mission was the new autonomous aerial robot, the ExynAero.

Yes, this is obviously a publicity stunt, and Kody is only flying the drone in the sense that he’s pushing the launch button and then taking a nap. But that’s also the point—drone autonomy doesn’t get much fuller than this, despite the challenge of the environment.

[ Exyn ]

In this video, object instance segmentation and shape completion are combined with classical regrasp planning to perform pick-and-place of novel objects. It is demonstrated with a UR5 arm, a Robotiq 85 parallel-jaw gripper, and a Structure depth sensor on three rearrangement tasks: bin packing (minimizing the height of the packing), placing bottles onto coasters, and arranging blocks from tallest to shortest (according to the longest edge). The system also accounts for uncertainty in the segmentation/completion by avoiding grasping or placing on parts of the object where perceptual uncertainty is predicted to be high.
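
The uncertainty-avoidance idea reduces to a simple rule: throw out candidate grasps whose contact points land on regions where the perception pipeline predicts high uncertainty. The Python sketch below illustrates that rule; the threshold, data layout, and grasp names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def filter_grasps(grasp_contact_points, point_uncertainty, max_uncertainty=0.2):
    """Keep only grasps whose contact points all have low predicted uncertainty."""
    keep = []
    for grasp_id, contacts in grasp_contact_points.items():
        if all(point_uncertainty[i] <= max_uncertainty for i in contacts):
            keep.append(grasp_id)
    return keep

# Toy example: per-point uncertainty as it might come from a shape-completion network.
uncertainty = np.array([0.05, 0.1, 0.6, 0.08])
grasps = {"top_pinch": [0, 1], "side_pinch": [2, 3]}
print(filter_grasps(grasps, uncertainty))  # -> ['top_pinch']
```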

[ Paper ] via [ Northeastern ]

Thanks Marcus!

U can’t touch this!

[ University of Tokyo ]

We introduce a way to enable more natural interaction between humans and robots through Mixed Reality, by using a shared coordinate system. Azure Spatial Anchors, which already supports colocalizing multiple HoloLens and smartphone devices in the same space, has now been extended to support robots equipped with cameras. This allows humans and robots sharing the same space to interact naturally: humans can see the plan and intention of the robot, while the robot can interpret commands given from the person’s perspective. We hope that this can be a building block in the future of humans and robots being collaborators and coworkers.

[ Microsoft ]

Some very high jumps from the skinniest quadruped ever.

[ ODRI ]

In this video we present recent efforts to make our humanoid robot LOLA ready for multi-contact locomotion, i.e. additional hand-environment support for extra stabilization during walking.

[ TUM ]

Classic bike moves from Dr. Guero.

[ Dr. Guero ]

For a robotics company, iRobot is OLD.

[ iRobot ]

The Canadian Space Agency presents Juno, a preliminary version of a rover that could one day be sent to the Moon or Mars. Juno can navigate autonomously or be operated remotely. The Lunar Exploration Analogue Deployment (LEAD) consisted of replicating scenarios of a lunar sample return mission.

[ CSA ]

How exactly does the Waymo Driver handle a cat cutting across its driving path? Jonathan N., a Product Manager on our Perception team, breaks it all down for us.

Now do kangaroos.

[ Waymo ]

Jibo is hard at work at MIT playing games with kids.

Children’s creativity plummets as they enter elementary school. Social interactions with peers and playful environments have been shown to foster creativity in children. Digital pedagogical tools often lack the creativity benefits of co-located social interaction with peers. In this work, we leverage a social embodied robot as a playful peer and designed Escape!Bot, a game involving child-robot co-play, where the robot is a social agent that scaffolds for creativity during gameplay.

[ Paper ]

It’s nice when convenience stores are convenient even for the folks who have to do the restocking.

Who’s moving the crates around, though?

[ Telexistence ]

Hi, fans! Join ROS World 2020, opening November 12th, and see footage of ROBOTIS’ ROS platform robots 🙂

[ ROS World 2020 ]

ML/RL methods are often viewed as a magical black box, and while that’s not true, learned policies are nonetheless a valuable tool that can work in conjunction with the underlying physics of the robot. In this video, Agility CTO Jonathan Hurst – wearing his professor hat at Oregon State University – presents some recent student work on using learned policies as a control method for highly dynamic legged robots.

[ Agility Robotics ]

Here’s an ICRA Legged Robots workshop talk from Marco Hutter at ETH Zürich, on Autonomy for ANYmal.

Recent advances in legged robots and their locomotion skills have led to systems that are skilled and mature enough for real-world deployment. In particular, quadrupedal robots have reached a level of mobility to navigate complex environments, which enables them to take over inspection or surveillance jobs in places like offshore industrial plants, in underground areas, or on construction sites. In this talk, I will present our research work with the quadruped ANYmal and explain some of the underlying technologies for locomotion control, environment perception, and mission autonomy. I will show how these robots can learn and plan complex maneuvers, how they can navigate through unknown environments, and how they are able to conduct surveillance, inspection, or exploration scenarios.

[ RSL ] Continue reading

Posted in Human Robots

#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area where I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to get its animatronic figures, which currently feature scripted behaviors, to perform in an interactive manner with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, which is what happens when you try to create “the illusion of life,” and that, Disney explicitly says, is exactly what it’s trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system chooses from a variety of high-level gaze behavior states, including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.
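
As a rough illustration of how the curiosity scores and thresholds might drive these high-level states, here is a minimal Python sketch. The threshold values, the motion-only curiosity score, and the familiarity check are assumptions for illustration; the paper does not publish its actual parameters.

```python
from dataclasses import dataclass
from typing import List

# Illustrative thresholds; the paper does not publish its actual values.
GLANCE_THRESHOLD = 0.3
ENGAGE_THRESHOLD = 0.7

@dataclass
class Person:
    person_id: int
    motion: float    # observed motion, normalized to [0, 1]
    familiar: bool   # whether the character "knows" this person

def curiosity_score(person: Person) -> float:
    """Simplified attention engine: curiosity is driven purely by motion."""
    return person.motion

def select_gaze_state(people: List[Person]) -> str:
    """Choose a high-level state based on the most 'curious' person in view."""
    if not people:
        return "Read"  # default state: keep reading the book
    target = max(people, key=curiosity_score)
    score = curiosity_score(target)
    if score >= ENGAGE_THRESHOLD:
        # Gaze at the person of interest with both eyes and head;
        # familiar people get a friendly acknowledgment instead.
        return "Acknowledge" if target.familiar else "Engage"
    if score >= GLANCE_THRESHOLD:
        return "Glance"  # brief look, then back to reading
    return "Read"

# Example: a fast-moving stranger and a slow-moving acquaintance in view.
people = [Person(1, motion=0.8, familiar=False), Person(2, motion=0.2, familiar=True)]
print(select_gaze_state(people))  # -> "Engage"
```

In the actual system the transitions also depend on the current state (Glance, for instance, is entered from Read or Engage), and the lower-level behaviors described next run continuously underneath; the sketch collapses all of that into a single stateless decision.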

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.

Continue reading

Posted in Human Robots

#437571 Video Friday: Snugglebot Is What We All ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-November 25, 2020 – [Online]
Robotica 2020 – November 10-14, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

Snugglebot is what we all need right now.

[ Snugglebot ]

In his video message on his prayer intention for November, Pope Francis urges that progress in robotics and artificial intelligence (AI) be oriented “towards respecting the dignity of the person and of Creation”.

[ Vatican News ]

KaPOW!

Apparently it's supposed to do that—the disruptor flies off backwards to reduce recoil on the robot, and has its own parachute to keep it from going too far.

[ Ghost Robotics ]

Animals have many muscles, receptors, and neurons that compose feedback loops. In this study, we designed artificial muscles, receptors, and neurons without any microprocessors or software-based controllers. We imitated the reflexive rule observed in walking experiments on cats; as a result, the Pneumatic Brainless Robot II produced an emergent running motion (a leg trajectory and a gait pattern) through the interaction between the body, the ground, and the artificial reflexes. We envision that the simple reflex circuit we discovered will be a candidate for a minimal model for describing the principles of animal locomotion.

Find the paper, “Brainless Running: A Quasi-quadruped Robot with Decentralized Spinal Reflexes by Solely Mechanical Devices,” on IROS On-Demand.

[ IROS ]

Thanks Yoichi!

I have no idea what these guys are saying, but they're talking about robots that serve chocolate!

The experience world of the Zotter Schokoladen Manufaktur, run by managing director Josef Zotter, counts more than 270,000 visitors annually. Since March 2019, this world of chocolate in Bergl near Riegersburg in Austria has been enriched by a new attraction: the world’s first chocolate and praline robot from KUKA delights young and old alike and serves up chocolate and pralines to guests according to their personal taste.

[ Zotter ]

This paper proposes a systematic solution that uses an unmanned aerial vehicle (UAV) to aggressively and safely track an agile target. The solution properly handles the challenging situations where the intent of the target and the dense environments are unknown to the UAV. The proposed solution is integrated into an onboard quadrotor system. We fully test the system in challenging real-world tracking missions. Moreover, benchmark comparisons validate that the proposed method surpasses the cutting-edge methods on time efficiency and tracking effectiveness.

[ FAST Lab ]

Southwest Research Institute developed a cable management system for collaborative robotics, or “cobots.” Dress packs used on cobots can create problems when cables are too tight (e-stops) or loose (tangling). SwRI developed ADDRESS, or the Adaptive DRESing System, to provide smarter cobot dress packs that address e-stops and tangling.

[ SWRI ]

A quick demonstration of the acoustic contact sensor in the RBO Hand 2. An embedded microphone records the sound inside of the pneumatic finger. Depending on which part of the finger makes contact, the sound is a little bit different. We create a sensor that recognizes these small changes and predicts the contact location from the sound. The visualization on the left shows the recorded sound (top) and which of the nine contact classes the sensor is currently predicting (bottom).
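
For readers curious how "predict the contact location from the sound" might work in practice, here is a minimal Python sketch, assuming a crude spectral feature and an off-the-shelf nearest-neighbor classifier. It is illustrative only and is not the RBO group's actual feature extraction or model; the training data below is random placeholder noise.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sound_features(waveform: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Crude spectral signature: magnitude spectrum averaged into n_bins."""
    spectrum = np.abs(np.fft.rfft(waveform))
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

# Placeholder training set: recordings labeled with one of nine contact classes.
rng = np.random.default_rng(0)
train_waveforms = rng.normal(size=(90, 2048))   # stand-in for real microphone clips
train_labels = np.repeat(np.arange(9), 10)      # contact locations 0..8

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit([sound_features(w) for w in train_waveforms], train_labels)

new_contact_sound = rng.normal(size=2048)
print("predicted contact class:", clf.predict([sound_features(new_contact_sound)])[0])
```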

[ TU Berlin ]

The MAVLab won the prize for the “most innovative design” in the IMAV 2018 indoor competition, in which drones had to fly through windows and gates and follow a predetermined flight path. The prize was awarded for the demonstration of a fully autonomous version of the “DelFly Nimble”, a tailless flapping-wing drone.

In order to fly by itself, the DelFly Nimble was equipped with a single, small camera and a small processor allowing onboard vision processing and control. The jury of international experts in the field praised the agility and autonomous flight capabilities of the DelFly Nimble.

[ MAVLab ]

A reactive walking controller for the Open Dynamic Robot Initiative's skinny quadruped.

[ ODRI ]

Mobile service robots are already able to recognize people and objects while navigating autonomously through their operating environments. But what is the ideal position of the robot to interact with a user? To solve this problem, Fraunhofer IPA developed an approach that connects navigation, 3D environment modeling, and person detection to find the optimal goal pose for HRI.

[ Fraunhofer ]

Yaskawa has been in robotics for a very, very long time.

[ Yaskawa ]

Black in Robotics IROS launch event, featuring Carlotta Berry.

[ Black in Robotics ]

What is AI? I have no idea! But these folks have some opinions.

[ MIT ]

Aerial-based Observations of Volcanic Emissions (ABOVE) is an international collaborative project that is changing the way we sample volcanic gas emissions. Harnessing recent advances in drone technology, unoccupied aerial systems (UAS) in the ABOVE fleet are able to acquire aerial measurements of volcanic gases directly from within previously inaccessible volcanic plumes. In May 2019, a team of 30 researchers undertook an ambitious field deployment to two volcanoes – Tavurvur (Rabaul) and Manam in Papua New Guinea – both amongst the most prodigious emitters of sulphur dioxide on Earth, and yet lacking any measurements of how much carbon they emit to the atmosphere.

[ ABOVE ]

A talk from IHMC's Robert Griffin for ICCAS 2020, including a few updates on their Nadia humanoid.

[ IHMC ] Continue reading

Posted in Human Robots

#437562 Video Friday: Aquanaut Robot Takes to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-November 25, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

To prepare the Perseverance rover for its date with Mars, NASA’s Mars 2020 mission team conducted a wide array of tests to help ensure a successful entry, descent and landing at the Red Planet. From parachute verification in the world’s largest wind tunnel, to hazard avoidance practice in Death Valley, California, to wheel drop testing at NASA’s Jet Propulsion Laboratory and much more, every system was put through its paces to get ready for the big day. The Perseverance rover is scheduled to land on Mars on February 18, 2021.

[ JPL ]

Awesome to see Aquanaut—the “underwater transformer” we wrote about last year—take to the ocean!

Also their new website has SHARKS on it.

[ HMI ]

Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant's trunk to grasp, pick up and release objects without breaking them.

[ UNSW ]

Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception.

[ Paper ]

Developed by NAVER LABS, with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand, the “BLT Gripper,” that can switch between various grasping methods.

[ NAVER Labs ]

In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:

[ RobotStart ]

DJI's new Mini 2 drone is here with a commercial so hip it makes my teeth scream.

[ DJI ]

Using simple materials, such as plastic struts and cardboard rolls, the first prototype of the RBO Hand 3 is already capable of grasping a large range of different objects thanks to its opposable thumb.

The RBO Hand 3 performs an edge grasp before handing-over the object to a person. The hand actively exploits constraints in the environment (the tabletop) for grasping the object. Thanks to its compliance, this interaction is safe and robust.

[ TU Berlin ]

Flyability's Elios 2 helped researchers inspect Reactor Five at the Chernobyl nuclear disaster site in order to determine whether any uranium was present. Prior to this mission, Reactor Five had not been investigated since the disaster in April of 1986.

[ Flyability ]

Thanks Zacc!

SOTO 2 is here! Together with our development partners from the industry, we have greatly enhanced the SOTO prototype over the last two years. With the new version of the robot, Industry 4.0 will become a great deal more real: SOTO brings materials to the assembly line, just-in-time and completely autonomously.

[ Magazino ]

A drone that can fly sustainably for long distances over land and water, and can land almost anywhere, will be able to serve a wide range of applications. There are already drones that fly using ‘green’ hydrogen, but they either fly very slowly or cannot land vertically. That’s why researchers at TU Delft, together with the Royal Netherlands Navy and the Netherlands Coastguard, developed a hydrogen-powered drone that is capable of vertical take-off and landing whilst also being able to fly horizontally efficiently for several hours, much like regular aircraft. The drone uses a combination of hydrogen and batteries as its power source.

[ MAVLab ]

The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC funded facility to support UK academia and industry to deliver ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments.

[ NNUF ]

At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]

What I find interesting about DJI's enormous new agricultural drone is that it's got a spinning obstacle detecting sensor that's a radar, not a lidar.

Also worth noting is that it seems to detect the telephone pole, but not the support wire that you can see in the video feed, although the visualization does make it seem like it can spot the power lines above.

[ DJI ]

Josh Pieper has spent the last year building his own quadruped, and you can see what he’s been up to in just 12 minutes.

[ mjbots ]

Thanks Josh!

Dr. Ryan Eustice, TRI Senior Vice President of Automated Driving, delivers a keynote speech — “The Road to Vehicle Automation, a Toyota Guardian Approach” — to SPIE's Future Sensing Technologies 2020. During the presentation, Eustice provides his perspective on the current state of automated driving, describes TRI's Guardian approach — which amplifies human drivers, rather than replacing them — and summarizes TRI's recent developments in core AD capabilities.

[ TRI ]

Two excellent talks this week from UPenn GRASP Lab, from Ruzena Bajcsy and Vijay Kumar.

A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy as a Roboticist and Founder of the GRASP Lab.

In this talk I will describe the role of the White House Office of Science and Technology Policy in supporting science and technology research and education, and the lessons I learned while serving in the office. I will also identify a few opportunities at the intersection of technology and policy and broad societal challenges.

[ UPenn ]

The IROS 2020 “Perception, Learning, and Control for Autonomous Agile Vehicles” workshop is all online—here's the intro, but you can click through for a playlist that includes videos of the entire program, and slides are available as well.

[ NYU ] Continue reading

Posted in Human Robots