Tag Archives: think

#435791 To Fly Solo, Racing Drones Have a Need ...

Drone racing’s ultimate vision of quadcopters weaving nimbly through obstacle courses has attracted far less excitement and investment than self-driving cars aimed at reshaping ground transportation. But the U.S. military and defense industry are betting on autonomous drone racing as the next frontier for developing AI so that it can handle high-speed navigation within tight spaces without human intervention.

The autonomous drone challenge requires split-second decision-making in six degrees of freedom, rather than the mere two degrees of freedom of a car confined to the road plane. One research team developing the AI necessary for controlling autonomous racing drones is the Robotics and Perception Group at the University of Zurich in Switzerland. In late May, the Swiss researchers were among nine teams revealed to be competing in the two-year AlphaPilot open innovation challenge sponsored by U.S. aerospace company Lockheed Martin. The winning team will walk away with up to $2.25 million for beating other autonomous racing drones and a professional human drone pilot in head-to-head competitions.
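
To make the degrees-of-freedom comparison concrete, here is a minimal sketch (illustrative field names only, not drawn from any AlphaPilot team's code) of the state a planner has to reason about in each case:

```python
# A toy comparison of planning state: a road vehicle versus a racing drone.
from dataclasses import dataclass

@dataclass
class CarState:
    # A car is effectively confined to the road plane.
    x: float        # position in the plane [m]
    y: float
    heading: float  # yaw [rad]
    speed: float    # forward speed [m/s]

@dataclass
class QuadrotorState:
    # A racing drone translates and rotates freely in 3D: six degrees of
    # freedom, each with its own rate, all re-planned at split-second speed.
    x: float; y: float; z: float            # position [m]
    roll: float; pitch: float; yaw: float   # orientation [rad]
    vx: float; vy: float; vz: float         # linear velocity [m/s]
    wx: float; wy: float; wz: float         # angular velocity [rad/s]
```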

“I think it is important to first point out that having an autonomous drone to finish a racing track at high speeds or even beating a human pilot does not imply that we can have autonomous drones [capable of] navigating in real-world, complex, unstructured, unknown environments such as disaster zones, collapsed buildings, caves, tunnels or narrow pipes, forests, military scenarios, and so on,” says Davide Scaramuzza, a professor of robotics and perception at the University of Zurich and ETH Zurich. “However, the robust and computationally efficient state estimation algorithms, control, and planning algorithms developed for autonomous drone racing would represent a starting point.”

The nine teams that made the cut—from a pool of 424 AlphaPilot applicants—will compete in four 2019 racing events organized under the Drone Racing League’s Artificial Intelligence Robotic Racing Circuit, says Keith Lynn, program manager for AlphaPilot at Lockheed Martin. To ensure an apples-to-apples comparison of each team’s AI secret sauce, each AlphaPilot team will upload its AI code into identical, specially built drones that have the NVIDIA Xavier GPU at the core of the onboard computing hardware.

“Lockheed Martin is offering mentorship to the nine AlphaPilot teams to support their AI tech development and innovations,” says Lynn. The company “will be hosting a week-long Developers Summit at MIT in July, dedicated to workshopping and improving AlphaPilot teams’ code,” he added. He notes that each team will retain the intellectual property rights to its AI code.

The AlphaPilot challenge takes inspiration from older autonomous drone racing events hosted by academic researchers, Scaramuzza says. He credits Hyungpil Moon, a professor of robotics and mechanical engineering at Sungkyunkwan University in South Korea, for having organized the annual autonomous drone racing competition at the International Conference on Intelligent Robots and Systems since 2016.

It’s no easy task to create and train AI that can perform high-speed flight through complex environments using visual navigation alone. One big challenge comes from how drones maneuver: they can accelerate hard, take sharp turns, fly sideways, zig-zag, and even perform back flips. That means camera images can suddenly appear tilted or even upside down in mid-flight. Motion blur may occur when a drone flies very close to structures at high speed, as camera pixels collect light from multiple directions. Both cameras and vision software can also struggle to compensate for sudden changes between light and dark parts of an environment.

To lend AI a helping hand, Scaramuzza’s group recently published a drone racing dataset that includes realistic training data taken from a drone flown by a professional pilot in both indoor and outdoor spaces. The data, which includes complicated aerial maneuvers such as back flips, flight sequences that cover hundreds of meters, and flight speeds of up to 83 kilometers per hour, was presented at the 2019 IEEE International Conference on Robotics and Automation.

The drone racing dataset also includes data captured by the group’s special bioinspired event cameras that can detect changes in motion on a per-pixel basis within microseconds. By comparison, ordinary cameras need milliseconds (each millisecond being 1,000 microseconds) to compare motion changes in each image frame. The event cameras have already proven capable of helping drones nimbly dodge soccer balls thrown at them by the Swiss lab’s researchers.
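
A toy model helps make the event-camera idea concrete. The sketch below (a single pixel with an arbitrary 0.2 contrast threshold, purely illustrative) emits an asynchronous timestamped event whenever log-brightness moves one threshold step away from its last reference level, which is the principle these sensors implement per pixel in hardware:

```python
def brightness_to_events(log_intensity, timestamps_us, threshold=0.2):
    """Toy single-pixel event camera: emit (timestamp, +/-1) each time
    log-brightness crosses one `threshold` step from the reference level.
    Real event cameras do this in analog circuitry at every pixel, with
    microsecond-resolution timestamps, instead of outputting full frames."""
    events = []
    reference = log_intensity[0]
    for t, value in zip(timestamps_us[1:], log_intensity[1:]):
        while value - reference >= threshold:    # brightness stepped up
            reference += threshold
            events.append((t, +1))
        while reference - value >= threshold:    # brightness stepped down
            reference -= threshold
            events.append((t, -1))
    return events

# A 30-fps frame camera would compress this ramp into a single frame;
# the event stream reports each change the moment it happens.
print(brightness_to_events([0.0, 0.25, 0.55, 0.40], [0, 700, 1400, 2100]))
```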

The Swiss group’s work on the racing drone dataset received funding in part from the U.S. Defense Advanced Research Projects Agency (DARPA), which acts as the U.S. military’s special R&D arm for more futuristic projects. Specifically, the funding came from DARPA’s Fast Lightweight Autonomy program that envisions small autonomous drones capable of flying at high speeds through cluttered environments without GPS guidance or communication with human pilots.

Such speedy drones could serve as military scouts checking out dangerous buildings or alleys. They could also someday help search-and-rescue teams find people trapped in semi-collapsed buildings or lost in the woods. Being able to fly at high speed without crashing into things also makes a drone more efficient at all sorts of tasks by making the most of limited battery life, Scaramuzza says. After all, most drone battery life gets used up by the need to hover in flight and doesn’t get drained much by flying faster.
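
That last claim is easy to sanity-check with back-of-the-envelope arithmetic: if total power draw stays roughly flat whether the drone hovers or cruises, the energy spent per kilometer falls in proportion to speed. A minimal sketch with invented numbers (the 180 W hover figure is illustrative, not a measured spec):

```python
def energy_per_km(power_watts, speed_kmh):
    """Watt-hours consumed per kilometer at a given speed, assuming the
    (hover-dominated) power draw is roughly independent of speed."""
    return power_watts / speed_kmh  # W * (h/km) = Wh/km

for speed in (10, 40, 80):  # km/h
    print(f"{speed:3d} km/h -> {energy_per_km(180, speed):5.2f} Wh/km")
# 10 km/h -> 18.00 Wh/km; 80 km/h -> 2.25 Wh/km: same battery, far more range.
```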

Even if AI manages to conquer the drone racing obstacle courses, that would be only the end of the beginning of the technology’s development. What would still be required? Scaramuzza specifically singled out the need to handle low-visibility conditions (smoke, dust, fog, rain, snow, fire, hail) as some of the biggest challenges for vision-based algorithms and AI in complex real-life environments.

“I think we should develop and release datasets containing smoke, dust, fog, rain, fire, etc. if we want to allow using autonomous robots to complement human rescuers in saving people’s lives after an earthquake or natural disaster in the future,” Scaramuzza says. Continue reading

Posted in Human Robots

#435775 Jaco Is a Low-Power Robot Arm That Hooks ...

We usually think of robots as taking the place of humans in various tasks, but robots of all kinds can also enhance human capabilities. This may be especially true for people with disabilities. And while the Cybathlon competition showed what's possible when cutting-edge research robotics is paired with expert humans, that competition isn't necessarily reflective of the kind of robotics available to most people today.

Kinova Robotics' Jaco arm is an assistive robotic arm designed to be mounted on an electric wheelchair. With six degrees of freedom plus a three-fingered gripper, the lightweight carbon fiber arm is frequently used in research because it's rugged and versatile. But from the start, Kinova created it to add autonomy to the lives of people with mobility constraints.

Earlier this year, Kinova shared the story of Mary Nelson, an 11-year-old girl with spinal muscular atrophy, who uses her Jaco arm to show her horse in competition. Spinal muscular atrophy is a neuromuscular disorder that impairs voluntary muscle movement, including muscles that help with respiration, and Mary depends on a power chair for mobility.

We wanted to learn more about how Kinova designs its Jaco arm, and what that means for folks like Mary, so we spoke with both Kinova and Mary's parents to find out how much of a difference a robot arm can make.

IEEE Spectrum: How did Mary interact with the world before having her arm, and what was involved in the decision to try a robot arm in general? And why Kinova's arm specifically?

Ryan Nelson: Mary interacts with the world much like you and I do; she just uses different tools to do so. For example, she is 100 percent independent using her computer, iPad, and phone, and she prefers to use a mouse. However, she cannot move a standard mouse, so she connects her wheelchair to each device with Bluetooth to move the mouse pointer/cursor using her wheelchair joystick.

For years, we had a Manfrotto magic arm and super clamp attached to her wheelchair and she used that much like the robotic arm. We could put a baseball bat, paint brush, toys, etc. in the super clamp so that Mary could hold the object and interact as physically able children do. Mary has always wanted to be more independent, so we knew the robotic arm was something she must try. We had seen videos of the Kinova arm on YouTube and on their website, so we reached out to them to get a trial.

Can you tell us about the Jaco arm, and how the process of designing an assistive robot arm is different from the process of designing a conventional robot arm?

Nathaniel Swenson, Director of U.S. Operations — Assistive Technologies at Kinova: Jaco is our flagship robotic arm. Named for its inspiration, our CEO's uncle Jacques “Jaco” Forest, it was designed as assistive technology with power wheelchair users in mind.

The primary differences between Jaco and our other robots, such as the new Gen3, which was designed to meet the needs of academic and industry research teams, are speed and power consumption. Other robots such as the Gen3 can move faster and draw slightly more power because they aren't limited by the battery size of power wheelchairs. Depending on the use case, they might not interact directly with a human being in the research setting and can safely move more quickly. Jaco is designed to move at safe speeds and make direct contact with the end user and draw very little power directly from their wheelchair.

The most important consideration in the design process of an assistive robot is the safety of the end user. Jaco users operate their robots through their existing drive controls to assist them in daily activities such as eating, drinking, and opening doors, and they don't have to worry about the robot draining their chair's batteries throughout the day. The elegant design that results from meeting the needs of our power chair users has benefited subsequent iterations, [of products] such as the Gen3, as well: Kinova's robots are lightweight, extremely efficient in their power consumption, and safe for direct human-robot interaction. This is not true of conventional industrial robots.

What was the learning process like for Mary? Does she feel like she's mastered the arm, or is it a continuous learning process?

Ryan Nelson: The learning process was super quick for Mary. However, she amazes us every day with the new things that she can do with the arm. Literally within minutes of installing the arm on her chair, Mary had it figured out and was shaking hands with the Kinova rep. The control of the arm is super intuitive and the Kinova reps say that SMA (Spinal Muscular Atrophy) children are perfect users because they are so smart—they pick it up right away. Mary has learned to do many fine motor tasks with the arm, from picking up small objects like a pencil or a ruler, to adjusting her glasses on her face, to doing science experiments.

Photo: The Nelson Family

Mary uses a headset microphone to amplify her voice, and she will use the arm and finger to adjust the microphone in front of her mouth after she is done eating (also a task she mastered quickly with the arm). Additionally, Mary will use the arm to reach down and adjust her feet or leg by grabbing them with the arm and moving them to a more comfortable position. All of these examples are things she never really asked us to do; they are things she needed and just did on her own, with the help of the arm.

What is the most common feedback that you get from new users of the arm? How about from experienced users who have been using the arm for a while?

Nathaniel Swenson: New users always tell us how excited they are to see what they can accomplish with their new Jaco. From day one, they are able to do things that they have longed to do without assistance from a caregiver: take a drink of water or coffee, scratch an itch, push the button to open an “accessible” door or elevator, or even feed their baby with a bottle.

The most common feedback I hear from experienced users is that Jaco has changed their life. Our experienced users like Mary are rock stars: everywhere they go, people get excited to see what they'll do next. The difference between a new user and an experienced user could be as little as two weeks. People who operate power wheelchairs every day are already expert drivers and we just add a new “gear” to their chair: robot mode. It's fun to see how quickly new users master the intuitive Jaco control modes.

What changes would you like to see in the next generation of Jaco arm?

Ryan Nelson: Titanium fingers! Make it lift heavier objects, hold heavier items like a baseball bat, machine gun, flame thrower, etc., and Mary literally said this last night: “I wish the arm moved fast enough to play the piano.”

Nathaniel Swenson: I love the idea of titanium fingers! Jaco's fingers are made from a flexible polymer and designed to avoid harm. This allows the fingers to bend or dislocate, rather than break, but it also means they are not as durable as a material like titanium. Increased payload (the ability to manipulate heavier objects) requires increased power consumption. We've struck a careful balance between providing enough strength to accomplish most medically necessary Activities of Daily Living and making efficient use of the power chair's batteries.

We take Isaac Asimov's Laws of Robotics pretty seriously. When we start to combine machine guns, flame throwers, and artificial intelligence with robots, I get very nervous!

I wish the arm moved fast enough to play the piano, too! I am also a musician and I share Mary's dream of an assistive robot that would enable her to make music. In the meantime, while we work on that, please enjoy this beautiful violin piece by Manami Ito and her one-of-a-kind violin prosthesis:

To what extent could more autonomy for the arm be helpful for users? What would be involved in implementing that?

Nathaniel Swenson: Artificial intelligence, machine learning, and deep learning will introduce greater autonomy in future iterations of assistive robots. This will enable them to perform more complex tasks that aren't currently possible, and enable them to accomplish routine tasks more quickly and with less input than the current manual control requires.

For assistive robots, implementing greater autonomy involves a focus on end-user safety and improvements in the robot's awareness of its environment. Autonomous robots that work in close proximity to humans need vision: they must be able to see in order to avoid collisions, and they need haptic feedback to sense how much force they are exerting on objects. All of these technologies exist, but the largest obstacle to bringing them to the assistive technology market is proving to the health insurance companies who will fund them that they are both safe and medically necessary. Continue reading

Posted in Human Robots

#435769 The Ultimate Optimization Problem: How ...

Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.

As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.

The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)
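
For readers who want that computer-science sense pinned down, here is a minimal sketch (a toy quadratic cost, nothing to do with any Microsoft system) of writing an objective function and letting an off-the-shelf optimizer find the parameters that minimize it:

```python
from scipy.optimize import minimize

def objective(x):
    # Toy cost surface with its minimum at parameters (3, -1).
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

result = minimize(objective, x0=[0.0, 0.0])
print(result.x)  # ~[3., -1.]: the parameter values that optimize the objective
```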

Photo: Microsoft

Lucas Joppa

AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.

In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.

“Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.

“I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?”

Such questions are increasingly urgent, as climate change has already begun reshaping our planet and our societies. Global sea and air surface temperatures have already risen by an average of 1 degree Celsius above preindustrial levels, according to the Intergovernmental Panel on Climate Change.

Today, people all around the world participated in a “climate strike,” with young people leading the charge and demanding a global transition to renewable energy. On Monday, world leaders will gather in New York for the United Nations Climate Action Summit, where they’re expected to present plans to limit warming to 1.5 degrees Celsius.

Joppa says such summit discussions should aim for a truly holistic solution.

“We talk about how to solve climate change. There’s a higher-order question for society: What climate do we want? What output from nature do we want and desire? If we could agree on those things, we could put systems in place for optimizing our environment accordingly. Instead we have this scattered approach, where we try for local optimization. But the sum of local optimizations is never a global optimization.”
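
Joppa’s point about local versus global optimization is easy to demonstrate with a toy example. In the sketch below (invented scores, purely illustrative), each of two neighboring plots picks the land use with the best standalone score, yet the jointly optimized answer beats the sum of those local choices because of a coupling term no per-plot optimizer can see:

```python
from itertools import product

LOCAL_SCORE = {"farm": 5, "forest": 3}  # standalone value of each land use

def global_score(a, b):
    # Two adjacent forest plots form connected habitat: a benefit that only
    # exists at the global level and is invisible to each plot on its own.
    bonus = 5 if (a, b) == ("forest", "forest") else 0
    return LOCAL_SCORE[a] + LOCAL_SCORE[b] + bonus

greedy = max(LOCAL_SCORE, key=LOCAL_SCORE.get)       # each plot decides alone
print("sum of local optima:", global_score(greedy, greedy))          # -> 10
best = max(product(LOCAL_SCORE, repeat=2), key=lambda p: global_score(*p))
print("global optimum:", best, "->", global_score(*best))            # -> 11
```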

There’s increasing interest in using artificial intelligence to tackle global environmental problems. New sensing technologies enable scientists to collect unprecedented amounts of data about the planet and its denizens, and AI tools are becoming vital for interpreting all that data.

The 2018 report “Harnessing AI for the Earth,” produced by the World Economic Forum and the consulting company PwC, discusses ways that AI can be used to address six of the world’s most pressing environmental challenges: climate change, biodiversity, healthy oceans, water security, clean air, and disaster resilience.

Many of the proposed applications involve better monitoring of human and natural systems, as well as modeling applications that would enable better predictions and more efficient use of natural resources.

Joppa says that AI for Earth is taking a two-pronged approach, funding efforts to collect and interpret vast amounts of data alongside efforts that use that data to help humans make better decisions. And that’s where the global optimization engine would really come in handy.

“For any location on earth, you should be able to go and ask: What’s there, how much is there, and how is it changing? And more importantly: What should be there?

“On land, the data is really only interesting for the first few hundred feet. Whereas in the ocean, the depth dimension is really important.

“We need a planet with sensors, with roving agents, with remote sensing. Otherwise our decisions aren’t going to be any good.”

AI for Earth isn’t going to create such an online portal within five years, Joppa stresses. But he hopes the projects that he’s funding will contribute to making such a portal possible—eventually.

“We’re asking ourselves: What are the fundamental missing layers in the tech stack that would allow people to build a global optimization engine? Some of them are clear, some are still opaque to me.

“By the end of five years, I’d like to have identified these missing layers, and have at least one example of each of the components.”

Some of the projects that AI for Earth has funded seem to fit that desire. Examples include SilviaTerra, which used satellite imagery and AI to create a map of the 92 billion trees in forested areas across the United States. There’s also OceanMind, a non-profit that detects illegal fishing and helps marine authorities enforce compliance. Platforms like Wildbook and iNaturalist enable citizen scientists to upload pictures of animals and plants, aiding conservation efforts and research on biodiversity. And FarmBeats aims to enable data-driven agriculture with low-cost sensors, drones, and cloud services.

It’s not impossible to imagine putting such services together into an optimization engine that knows everything about the land, the water, and the creatures who live on planet Earth. Then we’ll just have to tell that engine what we want to do about it.

Editor’s note: This story is published in cooperation with more than 250 media organizations and independent journalists that have focused their coverage on climate change ahead of the UN Climate Action Summit. IEEE Spectrum’s participation in the Covering Climate Now partnership builds on our past reporting about this global issue. Continue reading

Posted in Human Robots

#435752 T-RHex Is a Hexapod Robot With ...

In Aaron Johnson’s “Robot Design & Experimentation” class at CMU, teams of students have a semester to design and build an experimental robotic system based on a theme. For spring 2019, that theme was “Bioinspired Robotics,” which is definitely one of our favorite kinds of robotics—animals can do all kinds of crazy things, and it’s always a lot of fun watching robots try to match them. They almost never succeed, of course, but even basic imitation can lead to robots with some unique capabilities.

One of the projects from this year’s course, from Team ScienceParrot, is a new version of RHex called T-RHex (pronounced T-Rex, like the dinosaur). T-RHex comes with a tail, but more importantly, it has tiny tapered toes, which help it grip onto rough surfaces like bricks, wood, and concrete. It’s able to climb its way up very steep slopes, and hang from them, relying on its toes to keep itself from falling off.

T-RHex’s toes are called microspines, and we’ve seen them in all kinds of robots. The most famous of these is probably JPL’s LEMUR IIB (which wins on sheer microspine volume), although the concept goes back at least 15 years to Stanford’s SpinyBot. Robots that use microspines to climb tend to be fairly methodical at it, since the microspines have to be engaged and disengaged with care, limiting their non-climbing mobility.

T-RHex manages to perform many of the same sorts of climbing and hanging maneuvers without losing RHex’s ability for quick, efficient wheel-leg (wheg) locomotion.

If you look closely at T-RHex walking in the video, you’ll notice that in its normal forward gait, it’s sort of walking on its ankles, rather than its toes. This means that the microspines aren’t engaged most of the time, so that the robot can use its regular wheg motion to get around. To engage the microspines, the robot moves its whegs backwards, meaning that its tail is arguably coming out of its head. But since all of T-RHex’s capability is mechanical in nature and it has no active sensors, it doesn’t really need a head, so that’s fine.

The highest climbable slope that T-RHex could manage was 55 degrees, meaning that it can't yet conquer vertical walls. The researchers were most surprised by the robot's ability to cling to surfaces: it was perfectly happy to hang out on a slope of 135 degrees, which is a 45-degree overhang (!). I have no idea how it would ever reach that kind of position on its own, but it's nice to know that if it ever does, its spines will keep doing their job.

Photo: CMU

T-RHex uses laser-cut acrylic legs, with the microspines embedded into 3D-printed toes. The tail is needed to prevent the robot from tipping backward.

For more details about the project, we spoke with Team ScienceParrot member (and CMU PhD student) Catherine Pavlov via email.

IEEE Spectrum: We’re used to seeing RHex with compliant, springy legs—how do the new legs affect T-RHex’s mobility?

Catherine Pavlov: There’s some compliance in the legs, though not as much as RHex—this is driven by the use of acrylic, which was chosen for budget/manufacturing reasons. Matching the compliance of RHex with acrylic would have made the tines too weak (since often only a few hold the load of the robot during climbing). It definitely means you can’t use energy storage in the legs the way RHex does, for example when pronking. T-RHex is probably more limited by motor speed in terms of mobility though. We were using some borrowed Dynamixels that didn’t allow for good positioning at high speeds.

How did you design the climbing gait? Why not use the middle legs, and why is the tail necessary?

The gait was a lot of hand-tuning and trial-and-error. We wanted a left/right symmetric gait to enable load sharing among more spines and prevent out-of-plane twisting of the legs. When using all three pairs, you have to have very accurate angular positioning or one leg pair gets pushed off the wall. Since two leg pairs should be able to hold the full weight of the robot, using the middle legs was hurting more than it was helping, with the middle legs sometimes pushing the rear ones off of the wall.

The tail is needed to prevent the robot from tipping backward and “sitting” on the wall. During static testing we saw the robot tip backward, disengaging the front legs, at around 35 degrees incline. The tail allows us to load the front legs, even when they’re at a shallow angle to the surface. The climbing gait we designed uses the tail to allow the rear legs to fully recirculate without the robot tipping backward.
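
That 35-degree figure is consistent with basic statics: the robot tips backward once the incline rotates the gravity vector through its center of mass behind the rearmost leg contact. A minimal sketch with hypothetical dimensions (T-RHex's actual geometry isn't given here; the numbers are chosen only to land near the observed angle):

```python
import math

def tip_back_angle_deg(com_to_rear_contact_m, com_height_m):
    """Incline angle at which a statically parked robot tips backward:
    where the line of gravity through the center of mass passes behind
    the rear contact point. Pure statics; no dynamics or spine forces."""
    return math.degrees(math.atan2(com_to_rear_contact_m, com_height_m))

# Hypothetical CoM 7 cm ahead of the rear contact and 10 cm off the ground:
print(tip_back_angle_deg(0.07, 0.10))  # ~35 degrees, near the observed tip point
```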

Photo: CMU

Team ScienceParrot with T-RHex.

What prevents T-RHex from climbing even steeper surfaces?

There are a few limiting factors. One is that the tines of the legs break pretty easily. I think we also need a lighter platform to get fully vertical—we're going to look at MiniRHex for future work. We're also not convinced our gait is the best it can be; we can probably get marginal improvements with more tuning, which might be enough.

Can the microspines assist with more dynamic maneuvers?

Dynamic climbing maneuvers? I think that would only be possible on surfaces with very good surface adhesion and very good surface strength, but it’s certainly theoretically possible. The current instance of T-RHex would definitely break if you tried to wall jump though.

What are you working on next?

Our main target is exploring the space of materials for leg fabrication, such as fiberglass, PLA, urethanes, and maybe metallic glass. We think there's a lot of room for improvement in the leg material and geometry. We'd also like to see MiniRHex equipped with microspines, which will require legs about half the scale of what we built for T-RHex. Longer-term improvements would be the addition of sensors (e.g., for wall detection), a reliable floor-to-wall transition, and dynamic gait transitions.

[ T-RHex ] Continue reading

Posted in Human Robots

#435750 Video Friday: Amazon CEO Jeff Bezos ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
Let us know if you have suggestions for next week, and enjoy today’s videos.

Last week at the re:MARS conference, Amazon CEO and aspiring supervillain Jeff Bezos tried out this pair of dexterous robotic hands, which he described as “weirdly natural” to operate. The system combines Shadow Robot’s anthropomorphic robot hands with SynTouch’s biomimetic tactile sensors and HaptX’s haptic feedback gloves.

After playing with the robot, Bezos let out his trademark evil laugh.

[ Shadow Robot ]

The RoboMaster S1 is DJI’s advanced new educational robot that opens the door to limitless learning and entertainment. Develop programming skills, get familiar with AI technology, and enjoy thrilling FPV driving with games and competition. From young learners to tech enthusiasts, get ready to discover endless possibilities with the RoboMaster S1.

[ DJI ]

It’s very impressive to see DLR’s humanoid robot Toro dynamically balancing, even while being handed heavy objects, pushing things, and using multi-contact techniques to kick a fire extinguisher for some reason.

The paper is in RA-L, and you can find it at the link below.

[ RA-L ] via [ DLR ]

Thanks Maximo!

Is it just me, or does the Suzumori Endo Robotics Laboratory’s Super Dragon arm somehow just keep getting longer?

Suzumori Endo Lab, Tokyo Tech developed a 10 m-long articulated manipulator for investigation inside the primary containment vessel of the Fukushima Daiichi Nuclear Power Plants. We employed a coupled tendon-driven mechanism and a gravity compensation mechanism using synthetic fiber ropes to design a lightweight and slender articulated manipulator. This work was published in IEEE Robotics and Automation Letters and Transactions of the JSME.

[ Suzumori Endo Lab ]

From what I can make out thanks to Google Translate, this cute little robot duck (developed by Nissan) helps minimize weeds in rice fields by stirring up the water.

[ Nippon.com ]

Confidence in your robot is when you can just casually throw it off of a balcony 15 meters up.

[ SUTD ]

You had me at “we’re going to completely submerge this apple in chocolate syrup.”

[ Soft Robotics Inc ]

In the mid-2020s, the European Space Agency is planning on sending a robotic sample return mission to the Moon. It’s called Heracles, after the noted snake-strangler of Greek mythology.

[ ESA ]

Rethink Robotics is still around; they're just much more German than before. And Sawyer is still hard at work stealing jobs from humans.

[ Rethink Robotics ]

The reason to watch this new video of the Ghost Robotics Vision 60 quadruped is the 3 seconds' worth of barrel roll about 40 seconds in.

[ Ghost Robotics ]

This is a relatively low-altitude drop for Squishy Robotics’ tensegrity scout, but it's still cool to watch a robot that's resilient enough to fall and just not worry about it.

[ Squishy Robotics ]

We control here the Apptronik DRACO bipedal robot for unsupported dynamic locomotion. DRACO consists of a 10 DoF lower body with liquid cooled viscoelastic actuators to reduce weight, increase payload, and achieve fast dynamic walking. Control and walking algorithms are designed by UT HCRL Laboratory.

I think all robot videos should be required to start with two “oops” clips followed by a “for real now” clip.

[ Apptronik ]

SAKE’s EZGripper manages to pick up a wrench, and also pick up a raspberry without turning it into instajam.

[ SAKE Robotics ]

And now: the robotic long-tongued piggy, courtesy Sony Toio.

[ Toio ]

In this video, the ornithopter developed inside the ERC Advanced Grant GRIFFIN project performs its first flight. This project aims to develop a flapping wing system with manipulation and human interaction capabilities.

A flapping-wing system with manipulation and human interaction capabilities, you say? I would like to subscribe to your newsletter.

[ GRVC ]

KITECH’s robotic hands and arms can manipulate, among other things, five boxes of Elmos. I’m not sure about the conversion of Elmos to Snuffleupaguses, although it turns out that one Snuffleupagus is exactly 1,000 pounds.

[ Ji-Hun Bae ]

The Australian Centre for Field Robotics (ACFR) has been working on agricultural robots for almost a decade, and this video sums up a bunch of the stuff that they’ve been doing, even if it’s more amusing than practical at times.

[ ACFR ]

ROS 2 is great for multi-robot coordination, like when you need your bubble level to stay really, really level.

[ Acutronic Robotics ]

We don’t hear iRobot CEO Colin Angle give a lot of talks, so this recent one (from Amazon’s re:MARS conference) is definitely worth a listen, especially considering how much innovation we’ve seen from iRobot recently.

Colin Angle, founder and CEO of iRobot, has unveiled a series of breakthrough innovations in home robots from iRobot. For the first time on stage, he will discuss and demonstrate what it takes to build a truly intelligent system of robots that work together to accomplish more within the home – and enable that home, and the devices within it, to work together as one.

[ iRobot ]

In the latest episode of Robots in Depth, Per speaks with Federico Pecora from the Center for Applied Autonomous Sensor Systems at Örebro University in Sweden.

Federico talks about working on AI and service robotics. In this area he has worked on planning, especially focusing on why a particular goal is the one that the robot should work on. To make robots as useful and user friendly as possible, he works on inferring the goal from the robot’s environment so that the user does not have to tell the robot everything.

Federico has also worked with AI robotics planning in industry to optimize results. Managing the relative importance of tasks is another challenging area there. In this context, he works on automating not only a single robot for its goal, but an entire fleet of robots for their collective goal. We get to hear about how these techniques are being used in warehouse operations, in mines and in agriculture.

[ Robots in Depth ] Continue reading

Posted in Human Robots