
#435731 Video Friday: NASA Is Sending This ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, UK
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, PA, USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The big news today is that NASA is sending a robot to Saturn’s moon Titan. A flying robot. The Dragonfly mission will launch in 2026 and arrive in 2034, but you knew that already, because last January, we posted a detailed article about the concept from the Applied Physics Lab at Johns Hopkins University. And now it’s not a concept anymore, yay!

Again, you can read all the details, plus an interview, in our 2018 article.

[ NASA ]

A robotic gripping arm that uses engineered bacteria to “taste” for a specific chemical has been developed by engineers at the University of California, Davis, and Carnegie Mellon University. The gripper is a proof-of-concept for biologically-based soft robotics.

The new device uses a biosensing module based on E. coli bacteria engineered to respond to the chemical IPTG by producing a fluorescent protein. The bacterial cells reside in wells with a flexible, porous membrane that allows chemicals to enter but keeps the cells inside. This biosensing module is built into the surface of a flexible gripper on a robotic arm, so the gripper can “taste” the environment through its fingers.

When IPTG crosses the membrane into the chamber, the cells fluoresce and electronic circuits inside the module detect the light. The electrical signal travels to the gripper’s control unit, which can decide whether to pick something up or release it.
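If you're curious what that sense-then-decide loop might boil down to in software, here's a minimal sketch. The function names and the threshold are hypothetical, not taken from the UC Davis design; the idea is just: read the photodetector, compare against a threshold, act.

```python
# Illustrative decision loop for the biosensing gripper. Names and the
# threshold are hypothetical, not from the actual UC Davis/CMU device.

FLUORESCENCE_THRESHOLD = 0.7  # normalized photodetector reading

def read_photodetector() -> float:
    """Stand-in for the light sensor inside the biosensing module."""
    return 0.85  # placeholder reading for demonstration

def decide(reading: float) -> str:
    # The engineered E. coli fluoresce when IPTG is present, so a bright
    # reading means the target chemical was detected; whether that triggers
    # "pick up" or "release" is up to the application.
    if reading >= FLUORESCENCE_THRESHOLD:
        return "chemical detected"
    return "no chemical detected"

print(decide(read_photodetector()))
```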

[ UC Davis ]

The Toyota Research Institute (TRI) is taking on the hard problems in manipulation research toward making human-assist robots reliable and robust. Dr. Russ Tedrake, TRI Vice President of Robotics Research, explains how we are exploring the challenges and addressing the reliability gap by using a robot loading dishes in a dishwasher as an example task.

[ TRI ]

The Tactile Telerobot is the world’s first haptic telerobotic system that transmits realistic touch feedback to an operator located anywhere in the world. It is the product of joint collaboration between Shadow Robot Company, HaptX, and SynTouch. All Nippon Airways funded the project’s initial research and development.

What’s really unique about this is the HaptX tactile feedback system, which is something we’ve been following for several years now. It’s one of the most magical tech experiences I’ve ever had, and you can read about it here and here.

[ HaptX ]

Thanks Andrew!

I love how snake robots can emulate some of the fanciest moves of real snakes, and then also do bonkers things that real snakes never do.

[ Matsuno Lab ]

Here are a couple interesting videos from the Human-Robot Interaction Lab at Tufts.

A robot is instructed to perform an action and cannot do it due to lack of sensors. But when another robot is placed nearby, it can execute the instruction by tacitly tapping into the other robot’s mind and using that robot’s sensors for its own actions. Yes, it’s automatic, and yes, it’s the BORG!

Two Nao robots are instructed to perform a dance and are able to do it right after instruction. Moreover, they can switch roles immediately, and even a third different PR2 robot can perform the dance right away, demonstrating the ability of our DIARC architecture to learn quickly and share the knowledge with any type of robot running the architecture.

Compared to Nao, PR2 just sounds… depressed.

[ HRI Lab ]

This work explores the problem of robot tool construction – creating tools from parts available in the environment. We advance the state-of-the-art in robotic tool construction by introducing an approach that enables the robot to construct a wider range of tools with greater computational efficiency. Specifically, given an action that the robot wishes to accomplish and a set of building parts available to the robot, our approach reasons about the shape of the parts and potential ways of attaching them, generating a ranking of part combinations that the robot then uses to construct and test the target tool. We validate our approach on the construction of five tools using a physical 7-DOF robot arm.
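The core idea of ranking part combinations before physically testing them is easy to sketch. Here's a toy version, with made-up scores standing in for the paper's actual shape and attachment reasoning:

```python
# Toy version of ranking part combinations before physically testing them.
# The scores below are placeholders standing in for the paper's actual
# shape and attachment reasoning.

shape_score = {("rake_head", "hook"): 0.9, ("block", "hook"): 0.3}
attach_score = {("rake_head", "stick"): 0.8, ("block", "stick"): 0.6}

def rank_tool_candidates(action, head_parts, handle_parts):
    """Rank (head, handle) pairs for building a tool for `action`."""
    candidates = []
    for head in head_parts:
        for handle in handle_parts:
            score = (shape_score.get((head, action), 0.0)
                     * attach_score.get((head, handle), 0.0))
            candidates.append((score, head, handle))
    # The robot would then construct and test candidates best-first.
    return sorted(candidates, reverse=True)

print(rank_tool_candidates("hook", ["rake_head", "block"], ["stick"]))
```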

[ RAIL Lab ] via [ RSS ]

We like Magazino’s approach to warehouse picking: constrain the problem to something you can reliably solve, like shoeboxes.

Magazino has announced a new pricing model for their robots. You pay 55k euros for the robot itself, and then after that, all you pay to keep the robot working is 6 cents per pick, so the robot is only costing you money for the work that it actually does.
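The arithmetic is easy to play with. A quick sketch, with the prices straight from the announcement and the usage numbers being my own assumption:

```python
# Magazino's stated pricing: 55,000 euros up front, then 6 euro cents per pick.

def total_cost_eur(picks: int) -> float:
    return 55_000 + 0.06 * picks

# A robot averaging 1,000 picks per day for a year (my assumption, not theirs):
print(total_cost_eur(1_000 * 365))  # 55,000 + 21,900 = 76,900.0 euros
```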

[ Magazino ]

Thanks Florin!

Human-robot collaboration is happening in factories worldwide, yet few smaller businesses use it, due to high costs and the difficulty of customization. Elephant Robotics, a new player from Shenzhen, the Silicon Valley of Asia, has set its sights on helping smaller businesses gain access to smart robotics. They created the Catbot (a collaborative robotic arm) that offers high efficiency and flexibility to various industries.

The Catbot is set to help with everything from education projects, photography, and massage to serving as a personal barista or co-playing a table game. The customizations are endless. To increase its flexibility of use, the Catbot is extremely easy to program, from high-precision tasks up to larger, more demanding projects.

[ Elephant Robotics ]

Thanks Johnson!

Dronistics, an EPFL spin-off, has been testing out their enclosed delivery drone in the Dominican Republic through a partnership with WeRobotics.

[ WeRobotics ]

QTrobot is an expressive humanoid robot designed to help children with autism spectrum disorder and children with special educational needs learn new skills. QTrobot uses simple and exaggerated facial expressions, combined with interactive games and stories, to help children improve their emotional skills. QTrobot helps children learn about and better understand emotions, and teaches them strategies to handle their emotions more effectively.

[ LuxAI ]

Here’s a typical day in the life of a Tertill solar-powered autonomous weed-destroying robot.

$300, now shipping from Franklin Robotics.

[ Tertill ]

PAL Robotics is excited to announce a new TIAGo with two arms, TIAGo++! After carefully listening to the robotics community’s needs, we used TIAGo’s modularity to integrate two 7-DoF arms into our mobile manipulator. TIAGo++ can help you swiftly accomplish your research goals, opening endless possibilities in mobile manipulation.

[ PAL Robotics ]

Thanks Jack!

You’ve definitely already met the Cobalt security robot, but Toyota AI Ventures just threw a pile of money at them and would therefore like you to experience this re-introduction:

[ Cobalt Robotics ] via [ Toyota AI ]

ROSIE is a mobile manipulator kit from HEBI Robotics. And if you don’t like ROSIE, the modular nature of HEBI’s hardware means that you can take her apart and make something more interesting.

[ HEBI Robotics ]

Learn about Kawasaki Robotics’ second addition to their line of duAro dual-arm collaborative robots, duAro2. This model offers an extended vertical reach (550 mm) and an increased payload capacity (3 kg/arm).

[ Kawasaki Robotics ]

Drone Delivery Canada has partnered with Peel Region Paramedics to pilot its proprietary drone delivery platform for rapid first response, with the goal of reducing response times and potentially saving lives.

[ Drone Delivery Canada ]

In this week’s episode of Robots in Depth, Per speaks with Harri Ketamo, from Headai.

Harri Ketamo talks about AI and how he aims to mimic human decision making with algorithms. Harri has done a lot of AI work for computer games, creating opponents that are entertaining to play against. It is easy to develop a very bad or a very good opponent, but designing an opponent that behaves like a human, is entertaining to play against, and that you can still beat is quite hard. He talks about how AI in computer games is a very important storytelling tool and an important part of making a game entertaining to play.

This work led him into other parts of the AI field. Harri thinks that we sometimes have a problem separating what is real from the kind of storytelling he knows from gaming AI. He calls for critical analysis of AI and says that data has to be used to verify AI decisions and results.

[ Robots in Depth ]

Thanks Per!


#435707 AI Agents Startle Researchers With ...

After 25 million games, the AI agents playing hide-and-seek with each other had mastered four basic game strategies. The researchers expected that part.

After a total of 380 million games, the AI players developed strategies that the researchers didn’t know were possible in the game environment—which the researchers had themselves created. That was the part that surprised the team at OpenAI, a research company based in San Francisco.

The AI players learned everything via a machine learning technique known as reinforcement learning. In this learning method, AI agents start out by taking random actions. Sometimes those random actions produce desired results, which earn them rewards. Via trial-and-error on a massive scale, they can learn sophisticated strategies.
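For readers who want to see the bones of that trial-and-error loop, here's a minimal tabular Q-learning sketch. To be clear, OpenAI's hide-and-seek agents use large-scale policy optimization rather than this toy method; it just illustrates the random-actions-plus-rewards recipe described above.

```python
import random

# Minimal tabular Q-learning sketch of the trial-and-error loop described
# above (not OpenAI's actual training method).

Q = {}  # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.99, 0.2  # learning rate, discount, exploration

def choose_action(state, actions):
    if random.random() < epsilon:  # explore: take a random action
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit

def update(state, action, reward, next_state, actions):
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    # Nudge the estimate toward the reward actually received.
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```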

In the context of games, this process can be abetted by having the AI play against another version of itself, ensuring that the opponents will be evenly matched. It also locks the AI into a process of one-upmanship, where any new strategy that emerges forces the opponent to search for a countermeasure. Over time, this “self-play” amounted to what the researchers call an “auto-curriculum.”
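Schematically, the self-play loop looks like this. The Policy class here, with its copy, play_match, and learn methods, is hypothetical; none of this is OpenAI's actual code.

```python
# Schematic self-play loop, assuming a hypothetical Policy class with
# copy(), play_match(), and learn() methods. Every improvement becomes
# the next opponent, which is what produces the "auto-curriculum".

def self_play(policy, num_games):
    for _ in range(num_games):
        opponent = policy.copy()           # play against a copy of yourself
        trajectory, result = policy.play_match(opponent)
        policy.learn(trajectory, result)   # any new trick the winner finds
                                           # becomes the problem the next
                                           # copy must learn to counter
    return policy
```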

According to OpenAI researcher Igor Mordatch, this experiment shows that self-play “is enough for the agents to learn surprising behaviors on their own—it’s like children playing with each other.”

Reinforcement learning is a hot field of AI research right now. OpenAI’s researchers used the technique when they trained a team of bots to play the video game Dota 2, which squashed a world-champion human team last April. The Alphabet subsidiary DeepMind has used it to triumph in the ancient board game Go and the video game StarCraft.

Aniruddha Kembhavi, a researcher at the Allen Institute for Artificial Intelligence (AI2) in Seattle, says games such as hide-and-seek offer a good way for AI agents to learn “foundational skills.” He worked on a team that taught their AllenAI to play Pictionary with humans, viewing the gameplay as a way for the AI to work on common sense reasoning and communication. “We are, however, quite far away from being able to translate these preliminary findings in highly simplified environments into the real world,” says Kembhavi.

Illustration: OpenAI

AI agents construct a fort during a hide-and-seek game developed by OpenAI.

In OpenAI’s game of hide-and-seek, both the hiders and the seekers received a reward only if they won the game, leaving the AI players to develop their own strategies. Within a simple 3D environment containing walls, blocks, and ramps, the players first learned to run around and chase each other (strategy 1). The hiders next learned to move the blocks around to build forts (2), and then the seekers learned to move the ramps (3), enabling them to jump inside the forts. Then the hiders learned to move all the ramps into their forts before the seekers could use them (4).

The two strategies that surprised the researchers came next. First the seekers learned that they could jump onto a box and “surf” it over to a fort (5), allowing them to jump in—a maneuver that the researchers hadn’t realized was physically possible in the game environment. So as a final countermeasure, the hiders learned to lock all the boxes into place (6) so they weren’t available for use as surfboards.

Illustration: OpenAI

An AI agent uses a nearby box to surf its way into a competitor’s fort.

In this circumstance, having AI agents behave in an unexpected way wasn’t a problem: They found different paths to their rewards, but didn’t cause any trouble. However, you can imagine situations in which the outcome would be rather serious. Robots acting in the real world could do real damage. And then there’s Nick Bostrom’s famous example of a paper clip factory run by an AI, whose goal is to make as many paper clips as possible. As Bostrom told IEEE Spectrum back in 2014, the AI might realize that “human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.”

Bowen Baker, another member of the OpenAI research team, notes that it’s hard to predict all the ways an AI agent will act inside an environment—even a simple one. “Building these environments is hard,” he says. “The agents will come up with these unexpected behaviors, which will be a safety problem down the road when you put them in more complex environments.”

AI researcher Katja Hofmann at Microsoft Research Cambridge, in England, has seen a lot of gameplay by AI agents: She started a competition that uses Minecraft as the playing field. She says the emergent behavior seen in this game, and in prior experiments by other researchers, shows that games can be useful for studies of safe and responsible AI.

“I find demonstrations like this, in games and game-like settings, a great way to explore the capabilities and limitations of existing approaches in a safe environment,” says Hofmann. “Results like these will help us develop a better understanding on how to validate and debug reinforcement learning systems–a crucial step on the path towards real-world applications.”

Baker says there’s also a hopeful takeaway from the surprises in the hide-and-seek experiment. “If you put these agents into a rich enough environment they will find strategies that we never knew were possible,” he says. “Maybe they can solve problems that we can’t imagine solutions to.”


#435683 How High Fives Help Us Get in Touch With ...

The human sense of touch is so naturally ingrained in our everyday lives that we often don’t notice its presence. Even so, touch is a crucial sensing ability that helps people to understand the world and connect with others. As the market for robots grows, and as robots become more ingrained into our environments, people will expect robots to participate in a wide variety of social touch interactions. At Oregon State University’s Collaborative Robotics and Intelligent Systems (CoRIS) Institute, I research how to equip everyday robots with better social-physical interaction skills—from playful high-fives to challenging physical therapy routines.

Some commercial robots already possess certain physical interaction skills. For example, the videoconferencing feature of mobile telepresence robots can keep far-away family members connected with one another. These robots can also roam distant spaces and bump into people, chairs, and other remote objects. And my Roomba occasionally tickles my toes before turning to vacuum a different area of the room. As a human being, I naturally interpret this (and other Roomba behaviors) as social, even if they were not intended as such. At the same time, for both of these systems, social perceptions of the robots’ physical interaction behaviors are not well understood, and these social touch-like interactions cannot be controlled in nuanced ways.

Before joining CoRIS early this year, I was a postdoc at the University of Southern California’s Interaction Lab, and prior to that, I completed my doctoral work at the GRASP Laboratory’s Haptics Group at the University of Pennsylvania. My dissertation focused on improving the general understanding of how robot control and planning strategies influence perceptions of social touch interactions. As part of that research, I conducted a study of human-robot hand-to-hand contact, focusing on an interaction somewhere between a high five and a hand-clapping game. I decided to study this particular interaction because people often high five, and they will likely expect robots in everyday spaces to high five as well!


The implications of motion and planning on the social touch experience in these interactions are also crucial—think about a disappointingly wimpy (or triumphantly amazing) high five that you’ve experienced in the past. This great or terrible high-fiving experience could be fleeting, but it could also influence who you interact with, who you’re friends with, and even how you perceive the character or personalities of those around you. This type of perception, judgement, and response could extend to personal robots, too!

An investigation like this requires a mixture of more traditional robotics research (e.g., understanding how to move and control a robot arm, developing models of the desired robot motion) along with techniques from design and psychology (e.g., performing interviews with research participants, using best practices from experimental methods in perception). Enabling robots with social touch abilities also comes with many challenges, and even skilled humans can have trouble anticipating what another person is about to do. Think about trying to make satisfying hand contact during a high five—you might know the classic adage “watch the elbow,” but if you’re like me, even this may not always work.

I conducted a research study involving eight different types of human-robot hand contact, with different combinations of the following: interactions with a facially reactive or non-reactive robot, a physically reactive or non-reactive planning strategy, and a lower or higher robot arm stiffness. My robotic system could become facially reactive by changing its facial expression in response to hand contact, or physically reactive by updating its plan of where to move next after sensing hand contact. The stiffness of the robot could be adjusted by changing a variable that controlled how quickly the robot’s motors tried to pull its arm to the desired position. I knew from previous research that fine differences in touch interactions can have a big impact on perceived robot character. For example, if a robot grips an object too tightly or for too long while handing an object to a person, it might be perceived as greedy, possessive, or perhaps even Sméagol-like. A robot that lets go too soon might appear careless or sloppy.
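That stiffness variable is essentially the proportional gain of a position controller. Here's a sketch of the idea; the gains and numbers are illustrative, not the study's actual values.

```python
# Sketch of the stiffness knob described above: in a proportional-derivative
# (PD) position controller, the gain k_p sets how hard the motor pulls the
# joint toward its target. Values here are illustrative, not from the study.

def joint_torque(q_target, q_actual, qd_actual, k_p, k_d=1.0):
    """PD control law: a stiffer arm is just a larger k_p."""
    return k_p * (q_target - q_actual) - k_d * qd_actual

# Same 0.2 rad position error, two different stiffness settings:
print(joint_torque(1.0, 0.8, 0.0, k_p=10.0))   # compliant: 2.0
print(joint_torque(1.0, 0.8, 0.0, k_p=100.0))  # stiff: 20.0
```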

In the example cases of robot grip, it’s clear that understanding people’s perceptions of robot characteristics and personality can help roboticists choose the right robot design based on the proposed operating environment of the robot. I likewise wanted to learn how the facial expressions, physical reactions, and stiffness of a hand-clapping robot would influence human perceptions of robot pleasantness, energeticness, dominance, and safety. Understanding this relationship can help roboticists to equip robots with personalities appropriate for the task at hand. For example, a robot assisting people in a grocery store may need to be designed with a high level of pleasantness and only moderate energy, while a maximally effective robot for comedy roast battles may need high degrees of energy and dominance above all else.

After many a late night at the GRASP Lab clapping hands with a big red robot, I was ready to conduct the study. Twenty participants visited the lab to clap hands with our Baxter Research Robot and help me begin to understand how characteristics of this humanoid robot’s social touch influenced its pleasantness, energeticness, dominance, and apparent safety. Baxter interacted with participants using a custom 3D-printed hand that was inlaid with silicone inserts.

The study showed that a facially reactive robot seemed more pleasant and energetic. A physically reactive robot seemed less pleasant, energetic, and dominant for this particular study design and interaction. I thought contact with a stiffer robot would seem harder (and therefore more dominant and less safe), but counter to my expectations, a stiffer-armed robot seemed safer and less dominant to participants. This may be because the stiffer robot was more precise in following its pre-programmed trajectory, therefore seeming more predictable and less free-spirited.

Safety ratings of the robot were generally high, and several participants commented positively on the robot’s facial expressions. Some participants attributed inventive (and non-existent) intelligences to the robot—I used neither computer vision nor the Baxter robot’s cameras in this study, but more than one participant complimented me on how well the robot tracked their hand position. While interacting with the robot, participants displayed happy facial expressions more than any other analyzed type of expression.

Photo: Naomi Fitter

Participants were asked to clap hands with Baxter and describe how they perceived the robot in terms of its pleasantness, energeticness, dominance, and apparent safety.

Circling back to the idea of how people might interpret even rudimentary and practical robot behaviors as social, these results show that this type of social perception isn’t just true for my lovable (but sometimes dopey) Roomba, but also for collaborative industrial robots, and generally, any robot capable of physical human-robot interaction. In designing the motion of Baxter, the adjustment of a single number in the equation that controls joint stiffness can flip the robot from seeming safe and docile to brash and commanding. These implications are sometimes predictable, but often unexpected.

The results of this particular study give us a partial guide to manipulating the emotional experience of robot users by adjusting aspects of robot control and planning, but future work is needed to fully understand the design space of social touch. Will materials play a major role? How about personalized machine learning? Do results generalize over all robot arms, or even a specialized subset like collaborative industrial robot arms? I’m planning to continue answering these questions, and when I finally solve human-robot social touch, I’ll high five all my robots to celebrate.

Naomi Fitter is an assistant professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University, where her Social Haptics, Assistive Robotics, and Embodiment (SHARE) research group aims to equip robots with the ability to engage and empower people in interactions from playful high-fives to challenging physical therapy routines. She completed her doctoral work in the GRASP Laboratory’s Haptics Group and was a postdoctoral scholar in the University of Southern California’s Interaction Lab from 2017 to 2018. Naomi’s not-so-secret pastime is performing stand-up and improv comedy.


#435681 Video Friday: This NASA Robot Uses ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Let us know if you have suggestions for next week, and enjoy today’s videos.

Robots can land on the Moon and drive on Mars, but what about the places they can’t reach? Designed by engineers at NASA’s Jet Propulsion Laboratory in Pasadena, California, a four-limbed robot named LEMUR (Limbed Excursion Mechanical Utility Robot) can scale rock walls, gripping with hundreds of tiny fishhooks in each of its 16 fingers and using artificial intelligence to find its way around obstacles. In its last field test in Death Valley, California, in early 2019, LEMUR chose a route up a cliff, scanning the rock for ancient fossils from the sea that once filled the area.

The LEMUR project has since concluded, but it helped lead to a new generation of walking, climbing and crawling robots. In future missions to Mars or icy moons, robots with AI and climbing technology derived from LEMUR could discover similar signs of life. Those robots are being developed now, honing technology that may one day be part of future missions to distant worlds.

[ NASA ]

This video demonstrates the autonomous footstep planning developed by IHMC. Robots in this video are the Atlas humanoid robot (DRC version) and the NASA Valkyrie. The operator specifies a goal location in the world, which is modeled as planar regions using the robot’s perception sensors. The planner then automatically computes the necessary steps to reach the goal using a Weighted A* algorithm. The algorithm does not reject footholds that have only partial support; instead, it modifies them after the plan is found to try to increase that support area.

Currently, narrow terrain has a success rate of about 50%, rough terrain is about 90%, whereas flat ground is near 100%. We plan on increasing planner speed and the ability to plan through mazes and to unseen goals by including a body-path planner as the first step. Control, Perception, and Planning algorithms by IHMC Robotics.
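If Weighted A* is new to you, here's a compact generic version. Note that IHMC plans over candidate footholds on planar regions rather than grid cells; this sketch only shows the weighting trick, where inflating the heuristic (w > 1) trades strict optimality for planning speed.

```python
import heapq
import itertools

# Compact generic Weighted A*. Multiplying the heuristic by w > 1 finds
# plans faster at the cost of strict optimality.

def weighted_astar(start, goal, neighbors, cost, heuristic, w=2.0):
    tie = itertools.count()  # tiebreaker so the heap never compares nodes
    frontier = [(w * heuristic(start, goal), next(tie), 0.0, start, None)]
    parents, g_score = {}, {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in parents:
            continue  # already expanded via a cheaper path
        parents[node] = parent
        if node == goal:  # walk back through parents to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in neighbors(node):
            new_g = g + cost(node, nxt)
            if new_g < g_score.get(nxt, float("inf")):
                g_score[nxt] = new_g
                f = new_g + w * heuristic(nxt, goal)
                heapq.heappush(frontier, (f, next(tie), new_g, nxt, node))
    return None  # no plan found
```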

[ IHMC ]

I’ve never really been able to get into watching people play poker, but throw an AI from CMU and Facebook into a game of no-limit Texas hold’em with five humans, and I’m there.

[ Facebook ]

In this video, Cassie Blue is navigating autonomously. Right now, her world is very small, the Wavefield at the University of Michigan, where she is told to turn left at intersections. You’re right, that is not a lot of independence, but it’s a first step away from a human and an RC controller!

Using a RealSense RGBD Camera, an IMU, and our version of an InEKF with contact factors, Cassie Blue is building a 3D semantic map in real time that identifies sidewalks, grass, poles, bicycles, and buildings. From the semantic map, occupancy and cost maps are built with the sidewalk identified as walk-able area and everything else considered as an obstacle. A planner then sets a goal to stay approximately 50 cm to the right of the sidewalk’s left edge and plans a path around obstacles and corners using D*. The path is translated into way-points that are achieved via Cassie Blue’s gait controller.
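The semantic-map-to-cost-map step is the part that's easy to sketch. Here's an illustrative version; the labels and costs are mine, not the Michigan team's code.

```python
import numpy as np

# Illustrative semantic-map -> cost-map step: sidewalk cells become cheap
# to traverse and everything else becomes an obstacle.

SIDEWALK, GRASS, POLE, BUILDING = 0, 1, 2, 3

def semantic_to_costmap(semantic_map: np.ndarray) -> np.ndarray:
    costmap = np.full(semantic_map.shape, np.inf)  # obstacles by default
    costmap[semantic_map == SIDEWALK] = 1.0        # walkable
    return costmap

# A planner like D* then searches this map for a path that stays roughly
# 50 cm to the right of the sidewalk's left edge.
demo = np.array([[GRASS, SIDEWALK, SIDEWALK],
                 [POLE,  SIDEWALK, BUILDING]])
print(semantic_to_costmap(demo))
```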

[ University of Michigan ]

Thanks Jesse!

Dave from HEBI Robotics wrote in to share some new actuators that are designed to get all kinds of dirty: “The R-Series takes HEBI’s X-Series to the next level, providing a sealed robotics solution for rugged, industrial applications and laying the groundwork for industrial users to address challenges that are not well met by traditional robotics. To prove it, we shot some video right in the Allegheny River here in Pittsburgh. Not a bad way to spend an afternoon :-)”

The R-Series Actuator is a full-featured robotic component as opposed to a simple servo motor. The output rotates continuously, requires no calibration or homing on boot-up, and contains a thru-bore for easy daisy-chaining of wiring. Modular in nature, R-Series Actuators can be used in everything from wheeled robots to collaborative robotic arms. They are sealed to IP67 and designed with a lightweight form factor for challenging field applications, and they’re packed with sensors that enable simultaneous control of position, velocity, and torque.
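To make "simultaneous control of position, velocity, and torque" concrete, here's a hypothetical command structure for a daisy-chained group of modular actuators. This is illustrative only; HEBI ships its own APIs, and none of these names come from them.

```python
from dataclasses import dataclass

# Hypothetical command structure for a daisy-chained group of modular
# actuators. The point is that one command can carry position, velocity,
# and torque targets at the same time.

@dataclass
class ActuatorCommand:
    position: float  # rad
    velocity: float  # rad/s
    torque: float    # N*m (feedforward)

def send_group_command(actuator_ids, commands):
    """Stand-in for one bus write addressing every module in the chain."""
    for aid, cmd in zip(actuator_ids, commands):
        print(f"actuator {aid}: pos={cmd.position:.2f} rad, "
              f"vel={cmd.velocity:.2f} rad/s, tau={cmd.torque:.2f} N*m")

send_group_command([1, 2], [ActuatorCommand(0.5, 0.0, 0.2),
                            ActuatorCommand(1.0, 0.1, 0.0)])
```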

[ HEBI Robotics ]

Thanks Dave!

If your robot hands out karate chops on purpose, that’s great. If it hands out karate chops accidentally, maybe you should fix that.

COVR is short for “being safe around collaborative and versatile robots in shared spaces”. Our mission is to significantly reduce the complexity in safety certifying cobots. Increasing safety for collaborative robots enables new innovative applications, thus increasing production and job creation for companies utilizing the technology. Whether you’re an established company seeking to deploy cobots or an innovative startup with a prototype of a cobot related product, COVR will help you analyze, test and validate the safety for that application.

[ COVR ]

Thanks Anna!

EPFL startup Flybotix has developed a novel drone with just two propellers and an advanced stabilization system that allow it to fly for twice as long as conventional models. That fact, together with its small size, makes it perfect for inspecting hard-to-reach parts of industrial facilities such as ducts.

[ Flybotix ]

SpaceBok is a quadruped robot designed and built by a Swiss student team from ETH Zurich and ZHAW Zurich, currently being tested using Automation and Robotics Laboratories (ARL) facilities at our technical centre in the Netherlands. The robot is being used to investigate the potential of ‘dynamic walking’ and jumping to get around in low gravity environments.

SpaceBok could potentially go up to 2 m high in lunar gravity, although such a height poses new challenges. Once it comes off the ground, the legged robot needs to stabilise itself to come down again safely – like a mini-spacecraft. So, like a spacecraft, SpaceBok uses a reaction wheel to control its orientation.
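The reaction wheel trick is straightforward conservation of angular momentum: spin the wheel one way, and the body rotates the other. A back-of-the-envelope sketch with made-up inertias (not SpaceBok's actual specs):

```python
# With no external torque, angular momentum is conserved:
#   I_body * w_body + I_wheel * w_wheel = 0  (starting from rest)
# so spinning the wheel one way rotates the body the other way.

I_BODY = 0.5    # kg*m^2, robot body about the control axis (assumed)
I_WHEEL = 0.01  # kg*m^2, reaction wheel (assumed)

def body_rate(wheel_rate: float) -> float:
    """Body angular rate induced by the wheel, from momentum conservation."""
    return -I_WHEEL * wheel_rate / I_BODY

print(body_rate(100.0))  # wheel at +100 rad/s -> body at -2 rad/s
```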

[ ESA ]

A new video from GITAI showing progress on their immersive telepresence robot for space.

[ GITAI ]

Tech United’s HERO robot (a Toyota HSR) competed in the RoboCup@Home competition, and it had a couple of garbage-related hiccups.

[ Tech United ]

Even small drones are getting better at autonomous obstacle avoidance in cluttered environments at useful speeds, as this work from the HKUST Aerial Robotics Group shows.

[ HKUST ]

DelFly Nimbles now come in swarms.

[ DelFly Nimble ]

This is a very short video, but it’s a fairly impressive look at a Baxter robot collaboratively helping someone put a shirt on, a useful task for folks with disabilities.

[ Shibata Lab ]

ANYmal can inspect the concrete in sewers for deterioration by sliding its feet along the ground.

[ ETH Zurich ]

HUG is a haptic user interface for teleoperating advanced robotic systems such as the humanoid robot Justin or the assistive robotic system EDAN. With its lightweight robot arms, HUG can measure human movements and simultaneously display forces from the distant environment. In addition to such teleoperation applications, HUG serves as a research platform for virtual assembly simulations, rehabilitation, and training.

[ DLR ]

This video about “image understanding” from CMU in 1979 (!) is amazing, and even though it’s long, you won’t regret watching until 3:30. Or maybe you will.

[ ARGOS (pdf) ]

Will Burrard-Lucas’ BeetleCam turned 10 this month, and in this video, he recounts the history of his little robotic camera.

[ BeetleCam ]

In this week’s episode of Robots in Depth, Per speaks with Gabriel Skantze from Furhat Robotics.

Gabriel Skantze is co-founder and Chief Scientist at Furhat Robotics and Professor in speech technology at KTH with a specialization in conversational systems. He has a background in research into how humans use spoken communication to interact.

In this interview, Gabriel talks about how the social robot revolution makes it necessary to communicate with humans in a human way, through speech and facial expressions. This is necessary as we expand both the number of people who interact with robots and the types of interaction. Gabriel gives us more insight into the many challenges of implementing spoken communication for cobots, where robots and humans work closely together. They need to communicate about the world, the objects in it, and how to handle them. We also get to hear how having an embodied system using the Furhat robot head helps the interaction between humans and the system.

[ Robots in Depth ]


#435669 Watch World Champion Soccer Robots Take ...

RoboCup 2019 took place earlier this month down in Sydney, Australia. While there are many different events including RoboCup@Home, RoboCup Rescue, and a bunch of different soccer leagues, one of the most compelling events is middle-size league (MSL), where mobile robots each about the size of a fire hydrant play soccer using a regular size FIFA soccer ball. The robots are fully autonomous, making their own decisions in real time about when to dribble, pass, and shoot.

The long-term goal of RoboCup is this:

By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.

While the robots are certainly not there yet, they're definitely getting closer.

Even if you’re not a particular fan of soccer, it’s impressive to watch the robots coordinate with each other, setting up multiple passes and changing tactics on the fly in response to the movements of the other team. And the ability of these robots to shoot accurately is world-class (like, human world-class), as they’re seemingly able to put the ball in whatever corner of the goal they choose with split-second timing.

The final match was between Tech United from Eindhoven University of Technology in the Netherlands (whose robots are called TURTLE), and Team Water from Beijing Information Science & Technology University. Without spoiling it, I can tell you that the game was tied within just the last few seconds, meaning that it had to go to overtime. You can watch the entire match on YouTube, or a 5-minute commentated highlight video here:

It’s become a bit of a tradition to have the winning MSL robots play a team of what looks to be inexperienced adult humans wearing long pants and dress shoes.

The fact that the robots managed to score even once is pretty awesome, and it also looks like the robots are playing very conservatively (more so than the humans) so as not to accidentally injure any of us fragile meatbags with our spindly little legs. I get that RoboCup wants its first team of robots that can beat a human World Cup winning team to be humanoids, but at the moment, the MSL robots are where all the skill is.

To get calibrated on the state of the art for humanoid soccer robots, here’s the adult-size final, Team Nimbro from the University of Bonn in Germany versus Team Sweaty from Offenburg University in Germany:

Yup, still a lot of falling over.

There’s lots more RoboCup on YouTube: Some channels to find more matches include the official RoboCup 2019 channel, and Tech United Eindhoven’s channel, which has both live English commentary and some highlight videos.

[ RoboCup 2019 ]
