Tag Archives: use

#435681 Video Friday: This NASA Robot Uses ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Let us know if you have suggestions for next week, and enjoy today’s videos.

Robots can land on the Moon and drive on Mars, but what about the places they can’t reach? Designed by engineers at NASA’s Jet Propulsion Laboratory in Pasadena, California, a four-limbed robot named LEMUR (Limbed Excursion Mechanical Utility Robot) can scale rock walls, gripping with hundreds of tiny fishhooks in each of its 16 fingers and using artificial intelligence to find its way around obstacles. In its last field test in Death Valley, California, in early 2019, LEMUR chose a route up a cliff, scanning the rock for ancient fossils from the sea that once filled the area.

The LEMUR project has since concluded, but it helped lead to a new generation of walking, climbing and crawling robots. In future missions to Mars or icy moons, robots with AI and climbing technology derived from LEMUR could discover similar signs of life. Those robots are being developed now, honing technology that may one day be part of future missions to distant worlds.

[ NASA ]

This video demonstrates the autonomous footstep planning developed by IHMC. Robots in this video are the Atlas humanoid robot (DRC version) and the NASA Valkyrie. The operator specifies a goal location in the world, which is modeled as planar regions using the robot’s perception sensors. The planner then automatically computes the necessary steps to reach the goal using a Weighted A* algorithm. The algorithm does not reject footholds that have only partial support; instead, it modifies them after the plan is found to try to increase the support area.

Currently, narrow terrain has a success rate of about 50%, rough terrain about 90%, and flat ground near 100%. We plan to increase planner speed and to add the ability to plan through mazes and to unseen goals by including a body-path planner as the first step. Control, Perception, and Planning algorithms by IHMC Robotics.
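For readers unfamiliar with weighted A*, here is a minimal, generic sketch of the idea the footstep planner is built on: inflating the heuristic by a factor w > 1 makes the search reach a goal faster at the cost of strict optimality. The grid, cost, and heuristic below are stand-ins for illustration, not IHMC’s actual foothold model.

```python
import heapq

def weighted_a_star(start, goal, neighbors, cost, heuristic, w=2.0):
    """Weighted A*: inflating the heuristic by w > 1 trades optimality for speed."""
    open_set = [(w * heuristic(start, goal), 0.0, start)]
    came_from, g = {}, {start: 0.0}
    while open_set:
        _, g_cur, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return list(reversed(path))
        for nxt in neighbors(cur):
            g_new = g_cur + cost(cur, nxt)
            if g_new < g.get(nxt, float("inf")):
                g[nxt] = g_new
                came_from[nxt] = cur
                heapq.heappush(open_set, (g_new + w * heuristic(nxt, goal), g_new, nxt))
    return None  # no step sequence found

# Toy usage on a 10x10 grid of candidate foot placements
if __name__ == "__main__":
    def neighbors(p):
        x, y = p
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0) and 0 <= x + dx < 10 and 0 <= y + dy < 10]
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    print(weighted_a_star((0, 0), (9, 9), neighbors, dist, dist))
```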

[ IHMC ]

I’ve never really been able to get into watching people play poker, but throw an AI from CMU and Facebook into a game of no-limit Texas hold’em with five humans, and I’m there.

[ Facebook ]

In this video, Cassie Blue is navigating autonomously. Right now, her world is very small, the Wavefield at the University of Michigan, where she is told to turn left at intersections. You’re right, that is not a lot of independence, but it’s a first step away from a human and an RC controller!

Using a RealSense RGB-D camera, an IMU, and our version of an InEKF with contact factors, Cassie Blue is building a 3D semantic map in real time that identifies sidewalks, grass, poles, bicycles, and buildings. From the semantic map, occupancy and cost maps are built, with the sidewalk identified as walkable area and everything else treated as an obstacle. A planner then sets a goal to stay approximately 50 cm to the right of the sidewalk’s left edge and plans a path around obstacles and corners using D*. The path is translated into waypoints that are achieved via Cassie Blue’s gait controller.
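As a rough illustration of the semantic-map-to-cost-map step described above (not the Michigan team’s code), here is a minimal sketch in Python. The label set, costs, and the goal-offset helper are assumptions for illustration; the D* search itself is omitted.

```python
import numpy as np

# Hypothetical label IDs; the real system distinguishes sidewalks,
# grass, poles, bicycles, and buildings.
SIDEWALK, GRASS, POLE, BICYCLE, BUILDING = range(5)

def semantic_to_cost(semantic_grid, sidewalk_cost=1.0, obstacle_cost=np.inf):
    """Turn a 2D grid of semantic labels into a planning cost map:
    sidewalk cells are cheap to traverse, everything else is an obstacle."""
    cost = np.full(semantic_grid.shape, obstacle_cost, dtype=float)
    cost[semantic_grid == SIDEWALK] = sidewalk_cost
    return cost

def offset_goal(left_edge_xy, heading_xy, offset_m=0.5):
    """Place a goal roughly 50 cm to the right of the sidewalk's left edge,
    given an edge point and a unit heading vector along the edge."""
    right_normal = np.array([heading_xy[1], -heading_xy[0]])  # heading rotated -90 degrees
    return np.asarray(left_edge_xy, dtype=float) + offset_m * right_normal

# Toy usage: a 3x3 patch with a sidewalk column and a goal offset from its edge
labels = np.array([[GRASS, SIDEWALK, GRASS]] * 3)
print(semantic_to_cost(labels))
print(offset_goal(left_edge_xy=(1.0, 0.0), heading_xy=(0.0, 1.0)))
```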

[ University of Michigan ]

Thanks Jesse!

Dave from HEBI Robotics wrote in to share some new actuators that are designed to get all kinds of dirty: “The R-Series takes HEBI’s X-Series to the next level, providing a sealed robotics solution for rugged, industrial applications and laying the groundwork for industrial users to address challenges that are not well met by traditional robotics. To prove it, we shot some video right in the Allegheny River here in Pittsburgh. Not a bad way to spend an afternoon :-)”

The R-Series Actuator is a full-featured robotic component as opposed to a simple servo motor. The output rotates continuously, requires no calibration or homing on boot-up, and contains a thru-bore for easy daisy-chaining of wiring. Modular in nature, R-Series Actuators can be used in everything from wheeled robots to collaborative robotic arms. They are sealed to IP67 and designed with a lightweight form factor for challenging field applications, and they’re packed with sensors that enable simultaneous control of position, velocity, and torque.
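To make “simultaneous control of position, velocity, and torque” concrete, here is a minimal sketch of how such a command is commonly blended at the motor driver: a PD term tracking the position and velocity targets plus a torque feedforward. This is a generic illustration under assumed gains and a made-up command structure, not HEBI’s actual API.

```python
from dataclasses import dataclass

@dataclass
class ActuatorCommand:
    """A command carrying position, velocity, and torque targets at once
    (hypothetical structure for illustration; not HEBI's actual API)."""
    position: float  # rad
    velocity: float  # rad/s
    torque: float    # N*m feedforward

def blended_torque(cmd, fb_position, fb_velocity, kp=20.0, kd=0.5):
    """One common way the three targets combine on a motor driver:
    PD tracking of position/velocity plus the torque feedforward term."""
    return (kp * (cmd.position - fb_position)
            + kd * (cmd.velocity - fb_velocity)
            + cmd.torque)

# Example: hold 1.0 rad while compensating an assumed 0.3 N*m gravity load
cmd = ActuatorCommand(position=1.0, velocity=0.0, torque=0.3)
print(blended_torque(cmd, fb_position=0.9, fb_velocity=0.05))
```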

[ HEBI Robotics ]

Thanks Dave!

If your robot hands out karate chops on purpose, that’s great. If it hands out karate chops accidentally, maybe you should fix that.

COVR is short for “being safe around collaborative and versatile robots in shared spaces”. Our mission is to significantly reduce the complexity of safety-certifying cobots. Increasing safety for collaborative robots enables new innovative applications, thus increasing production and job creation for companies utilizing the technology. Whether you’re an established company seeking to deploy cobots or an innovative startup with a prototype of a cobot-related product, COVR will help you analyze, test, and validate the safety of that application.

[ COVR ]

Thanks Anna!

EPFL startup Flybotix has developed a novel drone with just two propellers and an advanced stabilization system that allow it to fly for twice as long as conventional models. That fact, together with its small size, makes it perfect for inspecting hard-to-reach parts of industrial facilities such as ducts.

[ Flybotix ]

SpaceBok is a quadruped robot designed and built by a Swiss student team from ETH Zurich and ZHAW Zurich, currently being tested using Automation and Robotics Laboratories (ARL) facilities at our technical centre in the Netherlands. The robot is being used to investigate the potential of ‘dynamic walking’ and jumping to get around in low gravity environments.

SpaceBok could potentially jump up to 2 m high in lunar gravity, although such a height poses new challenges. Once it comes off the ground, the legged robot needs to stabilise itself to come down again safely. So, like a mini-spacecraft, SpaceBok uses a reaction wheel to control its orientation.

[ ESA ]

A new video from GITAI showing progress on their immersive telepresence robot for space.

[ GITAI ]

Tech United’s HERO robot (a Toyota HSR) competed in the RoboCup@Home competition, and it had a couple of garbage-related hiccups.

[ Tech United ]

Even small drones are getting better at autonomous obstacle avoidance in cluttered environments at useful speeds, as this work from the HKUST Aerial Robotics Group shows.

[ HKUST ]

DelFly Nimbles now come in swarms.

[ DelFly Nimble ]

This is a very short video, but it’s a fairly impressive look at a Baxter robot collaboratively helping someone put a shirt on, a useful task for folks with disabilities.

[ Shibata Lab ]

ANYmal can inspect the concrete in sewers for deterioration by sliding its feet along the ground.

[ ETH Zurich ]

HUG is a haptic user interface for teleoperating advanced robotic systems such as the humanoid robot Justin or the assistive robotic system EDAN. With its lightweight robot arms, HUG can measure human movements and simultaneously display forces from the distant environment. In addition to such teleoperation applications, HUG serves as a research platform for virtual assembly simulations, rehabilitation, and training.

[ DLR ]

This video about “image understanding” from CMU in 1979 (!) is amazing, and even though it’s long, you won’t regret watching until 3:30. Or maybe you will.

[ ARGOS (pdf) ]

Will Burrard-Lucas’ BeetleCam turned 10 this month, and in this video, he recounts the history of his little robotic camera.

[ BeetleCam ]

In this week’s episode of Robots in Depth, Per speaks with Gabriel Skantze from Furhat Robotics.

Gabriel Skantze is co-founder and Chief Scientist at Furhat Robotics and Professor in speech technology at KTH with a specialization in conversational systems. He has a background in research into how humans use spoken communication to interact.

In this interview, Gabriel talks about how the social robot revolution makes it necessary to communicate with humans in a human way, through speech and facial expressions. This is necessary as we expand both the number of people that interact with robots and the types of interaction. Gabriel gives us more insight into the many challenges of implementing spoken communication for cobots, where robots and humans work closely together. They need to communicate about the world, the objects in it, and how to handle them. We also get to hear how having an embodied system using the Furhat robot head helps the interaction between humans and the system.

[ Robots in Depth ]

Posted in Human Robots

#435656 Will AI Be Fashion Forward—or a ...

The narrative that often accompanies most stories about artificial intelligence these days is how machines will disrupt any number of industries, from healthcare to transportation. It makes sense. After all, technology already drives many of the innovations in these sectors of the economy.

But sneakers and the red carpet? The decidedly low-tech fashion industry would seem to be one of the last to turn over its creative direction to data scientists and machine learning algorithms.

However, big brands, e-commerce giants, and numerous startups are betting that AI can ingest data and spit out Chanel. Maybe it’s not surprising, given that fashion is partly about buzz and trends—and there’s nothing more buzzy and trendy in the world of tech today than AI.

In its annual survey of the $3 trillion fashion industry, consulting firm McKinsey predicted that while AI didn’t hit a “critical mass” in 2018, it would increasingly influence the business of everything from design to manufacturing.

“Fashion as an industry really has been so slow to understand its potential roles interwoven with technology. And, to be perfectly honest, the technology doesn’t take fashion seriously.” This comment comes from Zowie Broach, head of fashion at London’s Royal College of Art, who as a self-described “old fashioned” designer has embraced the disruptive nature of technology—with some caveats.

Co-founder in the late 1990s of the avant-garde fashion label Boudicca, Broach has always seen tech as a tool for designers, even setting up a website for the company circa 1998, way before an online presence became, well, fashionable.

Broach told Singularity Hub that while she is generally optimistic about the future of technology in fashion—the designer has avidly been consuming old sci-fi novels over the last few years—there are still a lot of difficult questions to answer about the interface of algorithms, art, and apparel.

For instance, can AI do what the great designers of the past have done? Fashion was “about designing, it was about a narrative, it was about meaning, it was about expression,” according to Broach.

AI that designs products based on data gleaned from human behavior can potentially tap into the Pavlovian response in consumers in order to make money, Broach noted. But is that channeling creativity, or just digitally dabbling in basic human brain chemistry?

She is concerned about people retaining control of the process, whether we’re talking about their data or their designs. But she also finds it thrilling to imagine being empowered with the insights machines could provide into, for example, the geographical nuances of fashion between Dubai, Moscow, and Toronto.

“What is it that we want the future to be from a fashion, an identity, and design perspective?” she asked.

Off on the Right Foot
Silicon Valley and some of the biggest brands in the industry offer a few answers about where AI and fashion are headed (though not at the sort of depths that address Broach’s broader questions of aesthetics and ethics).

Take what is arguably the biggest brand in fashion, at least by market cap but probably not by the measure of appearances on Oscar night: Nike. The $100 billion shoe company just gobbled up an AI startup called Celect to bolster its data analytics and optimize its inventory. In other words, Nike hopes it will be able to figure out what’s hot and what’s not in a particular location to stock its stores more efficiently.

The company is going even further with Nike Fit, a foot-scanning platform using a smartphone camera that applies AI techniques from fields like computer vision and machine learning to find the best fit for each person’s foot. The algorithms then identify and recommend the appropriately sized and shaped shoe in different styles.

No doubt the next step will be to 3D print personalized and on-demand sneakers at any store.

San Francisco-based startup ThirdLove is trying to bring a similar approach to bra sizes. Its 20-member data team, Fortune reported, has developed the Fit Finder quiz that uses machine learning algorithms to help pick just the right garment for every body type.

Data scientists are also a big part of the team at Stitch Fix, a San Francisco startup that went public in 2017 and today sports a market cap of more than $2 billion. The online “personal styling” company uses hundreds of algorithms not only to make recommendations to customers, but also to help design new styles and even manage the subscription-based supply chain.

Future of Fashion
E-commerce giant Amazon has thrown its own considerable resources into developing AI applications for retail fashion—with mixed results.

One notable attempt involved a “styling assistant” that came with the company’s Echo Look camera, which helped people catalog and manage their wardrobes, even helping pick out each day’s attire. The company more recently revisited the direct-to-consumer side of AI with an app called StyleSnap, which matches clothes and accessories uploaded to the site with the retailer’s vast inventory and recommends similar styles.

Behind the curtains, Amazon is going even further. A team of researchers in Israel has developed algorithms that can deduce whether a particular look is stylish based on a few labeled images. Another group at the company’s San Francisco research center was working on tech that could generate new designs of items based on images of a particular style the algorithms were trained on.

“I will say that the accumulation of many new technologies across the industry could manifest in a highly specialized style assistant, far better than the examples we’ve seen today. However, the most likely thing is that the least sexy of the machine learning work will become the most impactful, and the public may never hear about it.”

That prediction is from an online interview with Leanne Luce, a fashion technology blogger and product manager at Google who recently wrote a book called, succinctly enough, Artificial Intelligence and Fashion.

Data Meets Design
Academics are also sticking their beakers into AI and fashion. Researchers at the University of California, San Diego, and Adobe Research have previously demonstrated that neural networks, a type of AI designed to mimic some aspects of the human brain, can be trained to generate (i.e., design) new product images to match a buyer’s preference, much like the team at Amazon.

Meanwhile, scientists at Hong Kong Polytechnic University are working with China’s answer to Amazon, Alibaba, on developing a FashionAI Dataset to help machines better understand fashion. The effort will focus on how algorithms approach certain building blocks of design, what are called “key points” such as neckline and waistline, and “fashion attributes” like collar types and skirt styles.
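For a sense of what “key points” and “fashion attributes” look like as data, here is an illustrative sketch of a single annotation record. The field names and values are assumptions for illustration, not the actual FashionAI schema.

```python
from dataclasses import dataclass, field

@dataclass
class GarmentAnnotation:
    """Illustrative record for a FashionAI-style dataset entry
    (hypothetical fields; not the dataset's actual schema)."""
    image_id: str
    # "Key points": named landmarks with pixel coordinates and a visibility flag
    keypoints: dict = field(default_factory=dict)
    # "Fashion attributes": categorical design properties
    attributes: dict = field(default_factory=dict)

example = GarmentAnnotation(
    image_id="dress_000123.jpg",
    keypoints={"neckline_left": (143, 88, 1), "waistline_center": (160, 240, 1)},
    attributes={"collar_type": "mandarin", "skirt_style": "a-line"},
)
print(example)
```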

The man largely behind the university’s research team is Calvin Wong, a professor and associate head of Hong Kong Polytechnic University’s Institute of Textiles and Clothing. His group has also developed an “intelligent fabric defect detection system” called WiseEye for quality control, reducing the chance of producing substandard fabric by 90 percent.

Wong and company also recently inked an agreement with RCA to establish an AI-powered design laboratory, though the details of that venture have yet to be worked out, according to Broach.

One hope is that such collaborations will not just get at the technological challenges of using machines in creative endeavors like fashion, but will also address the more personal relationships humans have with their machines.

“I think who we are, and how we use AI in fashion, as our identity, is not a superficial skin. It’s very, very important for how we define our future,” Broach said.

Image Credit: Inspirationfeed / Unsplash

Posted in Human Robots

#435634 Robot Made of Clay Can Sculpt Its Own ...

We’re very familiar with a wide variety of transforming robots—whether for submarines or drones, transformation is a way of making a single robot adaptable to different environments or tasks. Usually, these robots are restricted to a discrete number of configurations—perhaps two or three different forms—because of the constraints imposed by the rigid structures that robots are typically made of.

Soft robotics has the potential to change all this, with robots that don’t have fixed forms but instead can transform themselves into whatever shape will enable them to do what they need to do. At ICRA in Montreal earlier this year, researchers from Yale University demonstrated a creative approach toward a transforming robot powered by string and air, with a body made primarily out of clay.

Photo: Evan Ackerman

The robot is actuated by two different kinds of “skin,” one layered on top of another. There’s a locomotion skin, made of a pattern of pneumatic bladders that can roll the robot forward or backward when the bladders are inflated sequentially. On top of that is the morphing skin, which is cable-driven, and can sculpt the underlying material into a variety of shapes, including spheres, cylinders, and dumbbells. The robot itself consists of both of those skins wrapped around a chunk of clay, with the actuators driven by offboard power and control. Here it is in action:

The Yale researchers have been experimenting with morphing robots that use foams and tensegrity structures for their bodies, but that stuff provides a “restoring force,” springing back into its original shape once the actuation stops. Clay is different because it holds whatever shape it’s formed into, making the robot more energy efficient. And if the dumbbell shape stops being useful, the morphing layer can just squeeze it back into a cylinder or a sphere.
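As a rough sketch of how sequentially inflating pneumatic bladders can produce rolling locomotion (not the Yale controller itself), the snippet below cycles through hypothetical valve-control callbacks in order; reversing the order rolls the body the other way.

```python
import time

def roll(bladders, inflate, deflate, forward=True, dwell_s=0.5, cycles=3):
    """Inflate bladders one at a time around the body so it tips onto the
    next face and rolls; `inflate`/`deflate` are hypothetical valve callbacks."""
    order = list(bladders) if forward else list(reversed(bladders))
    for _ in range(cycles):
        for b in order:
            inflate(b)
            time.sleep(dwell_s)  # let the body tip over the inflated bladder
            deflate(b)

# Toy usage with print-based stand-ins for the pneumatic valves
roll([0, 1, 2, 3],
     inflate=lambda b: print(f"inflate bladder {b}"),
     deflate=lambda b: print(f"deflate bladder {b}"),
     dwell_s=0.01, cycles=1)
```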

While this robot, and the sample transformation shown in the video, are relatively simplistic, the researchers suggest some ways in which a more complex version could be used in the future:

Photo: IEEE Xplore

This robot’s morphing skin sculpts its clay body into different shapes.

Applications where morphing and locomotion might serve as complementary functions are abundant. For the example skins presented in this work, a search-and-rescue operation could use the clay as a medium to hold a payload such as sensors or transmitters. More broadly, applications include resource-limited conditions where supply chains for materiel are sparse. For example, the morphing sequence shown in Fig. 4 [above] could be used to transform from a rolling sphere to a pseudo-jointed robotic arm. With such a morphing system, it would be possible to robotically morph matter into different forms to perform different functions.

Read this article for free on IEEE Xplore until 5 September 2019

Morphing Robots Using Robotic Skins That Sculpt Clay, by Dylan S. Shah, Michelle C. Yuen, Liana G. Tilton, Ellen J. Yang, and Rebecca Kramer-Bottiglio from Yale University, was presented at ICRA 2019 in Montreal.

[ Yale Faboratory ]


Posted in Human Robots

#435628 Soft Exosuit Makes Walking and Running ...

Researchers at Harvard’s Wyss Institute have been testing a flexible, lightweight exosuit that can improve your metabolic efficiency by 4 to 10 percent while walking and running. This is very important because, according to a press release from Harvard, the suit can help you be faster and more efficient, whether you’re “walking at a leisurely pace,” or “running for your life.” Great!

Making humans better at running for their lives is something that we don’t put nearly enough research effort into, I think. The problem may not come up very often, but when it does, it’s super important (because, bears). So, sign me up for anything that we can do to make our desperate flights faster or more efficient—especially if it’s a lightweight, wearable exosuit that’s soft, flexible, and comfortable to wear.

This is the same sort of exosuit that was part of a DARPA program that we wrote about a few years ago, which was designed to make it easier for soldiers to carry heavy loads for long distances.

Photos: Wyss Institute at Harvard University

The system uses two waist-mounted electrical motors connected with cables to thigh straps that run down around your butt. The motors pull on the cables at the same time that your muscles actuate, helping them out and reducing the amount of work that your muscles put in without decreasing the amount of force they exert on your legs. The entire suit (batteries included) weighs 5 kilograms (11 pounds).

In order for the cables to actuate at the right time, the suit tracks your gait with two inertial measurement units (IMUs) on the thighs and one on the waist, and then adjusts its actuation profile accordingly. It works well, too, with measurable increases in performance:

We show that a portable exosuit that assists hip extension can reduce the metabolic rate of treadmill walking at 1.5 meters per second by 9.3 percent and that of running at 2.5 meters per second by 4.0 percent compared with locomotion without the exosuit. These reduction magnitudes are comparable to the effects of taking off 7.4 and 5.7 kilograms during walking and running, respectively, and are in a range that has shown meaningful athletic performance changes.
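To make the timing idea concrete, here is a minimal sketch of gait-phase-based assistance: estimate the phase of the stride from a detected gait event (the IMUs’ job in the real suit) and apply cable tension only during a hip-extension window. The window bounds and peak force are illustrative assumptions, not the Wyss controller’s actual actuation profile.

```python
import math

def gait_phase(t, last_heel_strike, stride_period):
    """Estimate gait phase in [0, 1) as the fraction of the stride elapsed
    since the last detected heel strike, assuming a steady stride period."""
    return ((t - last_heel_strike) % stride_period) / stride_period

def cable_tension(phase, peak_n=300.0, onset=0.85, offset=0.15):
    """Toy assistance profile: a half-sine pulse of cable tension late in
    swing and through early stance (hip extension), zero elsewhere."""
    if not (phase >= onset or phase < offset):
        return 0.0
    progress = (phase - onset) % 1.0     # distance into the window, wrapping at phase = 0
    window_len = (offset - onset) % 1.0  # total window length (0.30 of the stride here)
    return peak_n * math.sin(math.pi * progress / window_len)

print(round(cable_tension(0.95), 1), cable_tension(0.5))  # assisting vs. idle phase
```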

By increasing your efficiency, you can think of the suit as being able to make you walk or run faster, or farther, or carry a heavier load, all while spending the same amount of energy (or less), which could be just enough to outrun the bear that’s chasing you. Plus, it doesn’t appear to be uncomfortable to wear, and doesn’t require the user to do anything differently, which means that (unlike most robotics things) it’s maybe actually somewhat practical for real-world use—whether you’re indoors or outdoors, or walking or running, or being chased by a bear or not.

Sadly, I have no idea when you might be able to buy one of these things. But the researchers are looking for ways to make the suit even easier to use, while also reducing the weight and making the efficiency increase more pronounced. Harvard’s Conor Walsh says they’re “excited to continue to apply it to a range of applications, including assisting those with gait impairments, industry workers at risk of injury performing physically strenuous tasks, or recreational weekend warriors.” As a weekend warrior who is not entirely sure whether he can outrun a bear, I’m excited for this.

Reducing the metabolic rate of walking and running with a versatile, portable exosuit, by Jinsoo Kim, Giuk Lee, Roman Heimgartner, Dheepak Arumukhom Revi, Nikos Karavas, Danielle Nathanson, Ignacio Galiana, Asa Eckert-Erdheim, Patrick Murphy, David Perry, Nicolas Menard, Dabin Kim Choe, Philippe Malcolm, and Conor J. Walsh from the Wyss Institute for Biologically Inspired Engineering at Harvard University, appears in the current issue of Science.

Posted in Human Robots

#435621 ANYbotics Introduces Sleek New ANYmal C ...

Quadrupedal robots are making significant advances lately, and just in the past few months we’ve seen Boston Dynamics’ Spot hauling a truck, IIT’s HyQReal pulling a plane, MIT’s MiniCheetah doing backflips, Unitree Robotics’ Laikago towing a van, and Ghost Robotics’ Vision 60 exploring a mine. Robot makers are betting that their four-legged machines will prove useful in a variety of applications in construction, security, delivery, and even at home.

ANYbotics has been working on such applications for years, testing out their ANYmal robot in places where humans typically don’t want to go (like offshore platforms) as well as places where humans really don’t want to go (like sewers), and they have a better idea than most companies what can make quadruped robots successful.

This week, ANYbotics is announcing a completely new quadruped platform, ANYmal C, a major upgrade from the really quite research-y ANYmal B. The new quadruped has been optimized for ruggedness and reliability in industrial environments, with a streamlined body painted a color that lets you know it means business.

ANYmal C’s physical specs are pretty impressive for a production quadruped. It can move at 1 meter per second, manage 20-degree slopes and 45-degree stairs, cross 25-centimeter gaps, and squeeze through passages just 60 centimeters wide. It’s packed with cameras and 3D sensors, including a lidar for 3D mapping and simultaneous localization and mapping (SLAM). All these sensors (along with the vast volume of gait research that’s been done with ANYmal) make this one of the most reliably autonomous quadrupeds out there, with real-time motion planning and obstacle avoidance.

Image: ANYbotics

ANYmal can autonomously attach itself to a cone-shaped docking station to recharge.

ANYmal C is also one of the ruggedest legged robots in existence. The 50-kilogram robot is IP67 rated, meaning that it’s completely impervious to dust and can withstand being submerged in a meter of water for an hour. If it’s submerged for longer than that, you’re absolutely doing something wrong. The robot will run for over 2 hours on battery power, and if that’s not enough endurance, don’t worry, because ANYmal can autonomously impale itself on a weird cone-shaped docking station to recharge.

Photo: ANYbotics

ANYmal C’s sensor payload includes cameras and a lidar for 3D mapping and SLAM.

As far as what ANYmal C is designed to actually do, it’s mostly remote inspection tasks where you need to move around through a relatively complex environment, but where for whatever reason you’d be better off not sending a human. ANYmal C has a sensor payload that gives it lots of visual options, like thermal imaging, and with the ability to handle a 10-kilogram payload, the robot can be adapted to many different environments.

Over the next few months, we’re hoping to see more examples of ANYmal C being deployed to do useful stuff in real-world environments, but for now, we do have a bit more detail from ANYbotics CTO Christian Gehring.

IEEE Spectrum: Can you tell us about the development process for ANYmal C?

Christian Gehring: We tested the previous generation of ANYmal (B) in a broad range of environments over the last few years and gained a lot of insights. Based on our learnings, it became clear that we would have to re-design the robot to meet the requirements of industrial customers in terms of safety, quality, reliability, and lifetime. There were different prototype stages both for the new drives and for single robot assemblies. Apart from electrical tests, we thoroughly tested the thermal control and ingress protection of various subsystems like the depth cameras and actuators.

What can ANYmal C do that the previous version of ANYmal can’t?

ANYmal C was redesigned with a focus on performance increase regarding actuation (new drives), computational power (new hexacore Intel i7 PCs), locomotion and navigation skills, and autonomy (new depth cameras). The new robot additionally features a docking system for autonomous recharging and an inspection payload as an option. The design of ANYmal C is far more integrated than its predecessor, which increases both performance and reliability.

How much of ANYmal C’s development and design was driven by your experience with commercial or industry customers?

Tests (such as the offshore installation with TenneT) and discussions with industry customers were important to get the necessary design input in terms of performance, safety, quality, reliability, and lifetime. Most customers ask for very similar inspection tasks that can be performed with our standard inspection payload and the required software packages. Some are looking for a robot that can also solve some simple manipulation tasks like pushing a button. Overall, most use cases customers have in mind are realistic and achievable, but some are really tough for the robot, like climbing 50° stairs in hot environments of 50°C.

Can you describe how much autonomy you expect ANYmal C to have in industrial or commercial operations?

ANYmal C is primarily developed to perform autonomous routine inspections in industrial environments. This autonomy especially adds value for operations that are difficult to access, as human operation is extremely costly. The robot can naturally also be operated via a remote control and we are working on long-distance remote operation as well.

Do you expect that researchers will be interested in ANYmal C? What research applications could it be useful for?

ANYmal C has been designed to also address the needs of the research community. The robot comes with two powerful hexacore Intel i7 computers and can additionally be equipped with an NVIDIA Jetson Xavier graphics card for learning-based applications. Payload interfaces enable users to easily install and test new sensors. By joining our established ANYmal Research community, researchers get access to simulation tools and software APIs, which boosts their research in various areas like control, machine learning, and navigation.

[ ANYmal C ]

Posted in Human Robots