#435733 Robot Squid and Robot Scallop Showcase ...

Most underwater robots use one of two ways of getting around. Way one is with propellers, and way two is with fins. But animals have shown us that there are many more kinds of underwater locomotion, potentially offering unique benefits to robots. We’ll take a look at two papers from ICRA this year that showed bioinspired underwater robots moving in creative new ways: A jet-powered squid robot that can leap out of the water, plus a robotic scallop that moves just like the real thing.

Image: Beihang University

Prototype of the squid robot in (a) open and (b) folded states. The soft fins and arms are controlled by pneumatic actuators.

This “squid-like aquatic-aerial vehicle” from Beihang University in China is modeled after flying squids. Real squids, in addition to being tasty, propel themselves using water jets, and these jets are powerful enough that some squids can not only jump out of the water, but actually achieve controlled flight for a brief period by continuing to jet while in the air. The flight phase is extended through the use of fins and arms as wings to generate a little bit of lift. Real squids use this multimodal propulsion to escape predators, and it’s also much faster—a squid can double its normal swimming speed while in the air, flying at up to 50 body lengths per second.

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we’re told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There’s also plenty of work to do with using the fins for dynamic control, which the researchers say will “reveal the superiority of the natural flying squid movement.”
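
For a rough sense of what those 10-to-20-meter jumps demand of the jet, here is a drag-free ballistic estimate. The arithmetic and the 45-degree launch angle are my own back-of-envelope assumptions, not figures from the paper, and any lift from the fins and arms is ignored:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed_for_range(range_m, launch_angle_deg=45.0):
    """Launch speed needed to cover range_m on a drag-free ballistic arc.

    Uses R = v^2 * sin(2*theta) / g, so v = sqrt(R * g / sin(2*theta)).
    """
    theta = math.radians(launch_angle_deg)
    return math.sqrt(range_m * G / math.sin(2 * theta))

for jump_range in (10, 20):
    print(f"{jump_range} m jump -> about {launch_speed_for_range(jump_range):.1f} m/s at launch")
# Roughly 10 m/s for a 10-meter jump and 14 m/s for a 20-meter jump,
# before accounting for air drag or any lift from the fins and arms.
```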

“Design and Experiments of a Squid-like Aquatic-aerial Vehicle With Soft Morphing Fins and Arms,” by Taogang Hou, Xingbang Yang, Haohong Su, Buhui Jiang, Lingkun Chen, Tianmiao Wang, and Jianhong Liang from Beihang University in China, was presented at ICRA 2019 in Montreal.

Image: EPFL

The EPFL researchers studied the morphology and function of a real scallop (a) to design their robot scallop (b), which consists of two shells connected at a hinge and enclosed by a flexible elastic membrane. The robot and animal both swim by rapidly and cyclically opening and closing their shells to generate water jets for propulsion. When the robot shells open, water is drawn into the body through rear openings near the hinge. When the shells close rapidly, the water is forced out, propelling the robot forward (c).

RoboScallop, a “bivalve inspired swimming robot,” comes from EPFL’s Reconfigurable Robotics Laboratory, headed by Jamie Paik. Real scallops, in addition to being tasty, propel themselves by opening and closing their shells to generate jets of water out of their backsides. By repetitively opening their shells slowly and then closing quickly, scallops can generate forward thrust in a way that’s completely internal to their bodies. Relative to things like fins or spinning propellers, a scallop is simple and robust, especially as you scale down or start looking at large swarms of robots. The EPFL researchers describe their robotic scallop as representing “a unique combination of robust to hazards or sustained use, safe in delicate environments, and simple by design.”
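
That slow-open, fast-close asymmetry is the whole trick: the same volume of water passes in and out each cycle, but because jet momentum grows with flow speed, expelling the water quickly carries away more rearward momentum than drawing it in slowly costs, leaving a net forward impulse. Here is a minimal sketch of that reasoning; the volume, orifice area, and timing below are illustrative assumptions on my part, not numbers from the EPFL paper:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3

def net_impulse_per_clap(volume_m3, orifice_area_m2, t_open_s, t_close_s):
    """Net forward impulse per clap for a scallop-style jetter.

    The same volume of water is drawn in slowly (over t_open_s) and expelled
    quickly (over t_close_s) through an orifice of the given area, so the jet
    speed is volume / (area * time) and the exchanged momentum is
    rho * volume * speed.
    """
    p_out = RHO_WATER * volume_m3**2 / (orifice_area_m2 * t_close_s)  # momentum expelled rearward
    p_in = RHO_WATER * volume_m3**2 / (orifice_area_m2 * t_open_s)    # momentum of the slower intake
    return p_out - p_in  # positive means net forward impulse per clap

# Illustrative numbers: 10 mL per clap, a 1 cm^2 opening,
# 0.3 s to open slowly and 0.1 s to snap shut.
impulse = net_impulse_per_clap(10e-6, 1e-4, t_open_s=0.3, t_close_s=0.1)
print(f"net impulse per clap ~ {impulse * 1000:.1f} mN*s")  # about 6.7 mN*s
```

At the robot’s clapping rate of roughly 2.5 Hz, those invented numbers would average out to on the order of 17 millinewtons of thrust; the actual RoboScallop figures are in the paper.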

And here’s how the real thing looks:

As you can see from the video, RoboScallop is safe to handle even while it’s operating, although a gentle nibbling is possible if you get too handsy with it. Since the robot sucks water in and then jets it out immediately, the design is resistant to fouling, which can be a significant problem in marine environments. The RoboScallop prototype weighs 65 grams, and tops out at a brisk 16 centimeters per second, while clapping (that’s the actual technical term) at just over 2.5 Hz. While RoboScallop doesn’t yet steer, real scallops can change direction by jetting out more water on one side than the other, and RoboScallop should be able to do this as well. The researchers also suggest that RoboScallop itself could even double as a gripper, which, as far as I know, is not something that real scallops can do.

“RoboScallop: A Bivalve-Inspired Swimming Robot,” by Matthew A. Robertson, Filip Efremov, and Jamie Paik, was presented at ICRA 2019 in Montreal.

#435714 Universal Robots Introduces Its ...

Universal Robots, already the dominant force in collaborative robots, is flexing its muscles in an effort to further expand its reach in the cobots market. The Danish company is introducing today the UR16e, its strongest robotic arm yet, with a payload capability of 16 kilograms (35.3 lbs), reach of 900 millimeters, and repeatability of +/- 0.05 mm.

Universal says the new “heavy duty payload cobot” will allow customers to automate a broader range of processes, including packaging and palletizing, nut and screw driving, and high-payload and CNC machine tending.

In early 2015, Universal introduced the UR3, its smallest robot, which joined the UR5 and the flagship UR10, offering payload capabilities of 3, 5, and 10 kg, respectively. Now the company is going in the other direction, announcing a bigger, stronger arm.

“With Universal joining its competitors in extending the reach and payload capacity of its cobots, a new standard of capability is forming,” Rian Whitton, a senior analyst at ABI Research, in London, tweeted.

Like its predecessors, the UR16e is part of Universal’s e-Series platform, which features 6 degrees of freedom and force/torque sensing on the tool flange. The UR family of cobots has stood out from the competition by being versatile in a variety of applications and, most important, easy to deploy and program. Universal didn’t release the UR16e’s price, saying only that it is about 10 percent higher than that of the UR10e, which is about $50,000, depending on the configuration.

Jürgen von Hollen, president of Universal Robots, says the company decided to launch the UR16e after studying the market and talking to customers about their needs. “What came out of that process is we understood payload was a true barrier for a lot of customers,” he tells IEEE Spectrum. The 16 kg payload will be particularly useful for applications that require mounting specialized tools on the arm to perform tasks like screw driving and machine tending, he explains. Customers that could benefit from such applications include manufacturing, material handling, and automotive companies.

“We’ve added the payload, and that will open up that market for us,” von Hollen says.

The difference between Universal and Rethink

Universal has grown by leaps and bounds since its founding in 2008. By 2015, it had sold more than 5,000 robots; that number was close to 40,000 as of last year. During the same period, revenue more than doubled from about $100 million to $234 million. At a time when a string of robot makers have shuttered, including most notably Rethink Robotics, a cobots pioneer and Universal’s biggest rival, Universal finds itself in an enviable position, having amassed a commanding market share, estimated at between 50 and 60 percent.

About Rethink, von Hollen says the Boston-based company was a “good competitor,” helping disseminate the advantages and possibilities of cobots. “When Rethink basically ended it was more of a negative than a positive, from my perspective,” he says. In his view, a major difference between the two companies is that Rethink focused on delivering full-fledged applications to customers, whereas Universal focused on delivering a product to the market and letting the system integrators and sales partners deploy the robots to the customer base.

“We’ve always been very focused on delivering the product, whereas I think Rethink was much more focused on applications, very early on, and they added a level of complexity to their company that made it become very de-focused,” he says.

The collaborative robots market: massive growth

And yet, despite its success, Universal is still tiny when you compare it to the giants of industrial automation, which include companies like ABB, Fanuc, Yaskawa, and Kuka, with revenue in the billions of dollars. Although some of these companies have added cobots to their product portfolios—ABB’s YuMi, for example—that market represents a drop in the bucket when you consider global robot sales: The size of the cobots market was estimated at $700 million in 2018, whereas the global market for industrial robot systems (including software, peripherals, and system engineering) is close to $50 billion.

Von Hollen notes that cobots are expected to go through an impressive growth curve—nearly 50 percent year after year until 2025, when sales will reach between $9 billion and $12 billion. If Universal can maintain its dominance and capture a big slice of that market, it’ll add up to a nice sum. To get there, Universal is not alone: It is backed by U.S. electronics testing equipment maker Teradyne, which acquired Universal in 2015 for $285 million.
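
As a quick sanity check on that forecast (my own arithmetic, not ABI Research’s model), compounding the $700 million figure cited above for 2018 at roughly 50 percent a year does land near the top of that range by 2025:

```python
market_usd = 0.7e9    # estimated cobot market in 2018 (~$700 million)
annual_growth = 0.5   # roughly 50 percent year-over-year growth

for year in range(2019, 2026):
    market_usd *= 1 + annual_growth
    print(f"{year}: ${market_usd / 1e9:.1f} billion")
# The 2025 line comes out near $12 billion, the upper end of the
# $9 billion to $12 billion forecast.
```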

“The amount of resources we invest year over year matches the growth we had on sales,” von Hollen says. Universal currently has more than 650 employees, most based at its headquarters in Odense, Denmark, and the rest scattered in 27 offices in 18 countries. “No other company [in the cobots segment] is so focused on one product.”

[ Universal Robots ]

#435712 U.S. Energy Department is First Customer ...

Argonne National Laboratory and Lawrence Livermore National Laboratory will be among the first organizations to install AI computers made from the largest silicon chip ever built. Last month, Cerebras Systems unveiled a 46,225-square-millimeter chip with 1.2 trillion transistors designed to speed the training of neural networks. Today, such training is often done in large data centers using GPU-based servers. Cerebras plans to begin selling computers based on the notebook-size chip in the fourth quarter of this year.

“The opportunity to incorporate the largest and fastest AI chip ever—the Cerebras WSE—into our advanced computing infrastructure will enable us to dramatically accelerate our deep learning research in science, engineering, and health,” Rick Stevens, head of computing at Argonne National Laboratory, said in a press release. “It will allow us to invent and test more algorithms, to more rapidly explore ideas, and to more quickly identify opportunities for scientific progress.”

Argonne and Lawrence Livermore are the first DOE entities to participate in what is expected to be a multi-year, multi-lab partnership. Cerebras plans to expand to other laboratories in the coming months.

Cerebras computers will be integrated into existing supercomputers at the two DOE labs to act as AI accelerators for those machines. In 2021, Argonne plans to become home to the United States’ first exascale computer, named Aurora; it will be capable of more than 1 billion billion calculations per second. Intel and Cray are the leaders on that $500 million project. The national laboratory is already home to Mira, the 24th-most powerful supercomputer in the world, and Theta, the 28th-most powerful. Lawrence Livermore is also on track to achieve exascale with El Capitan, a $600-million, 1.5-exaflop machine set to go live in late 2022. The lab is also home to the number-two-ranked Sierra supercomputer and the number-10-ranked Lassen.

The U.S. Energy Department established the Artificial Intelligence and Technology Office earlier this month to better take advantage of AI for solving the kinds of problems the U.S. national laboratories tackle.

#435687 Humanoid Robots Teach Coping Skills to ...

Photo: Rob Felt

IEEE Senior Member Ayanna Howard with one of the interactive androids that help children with autism improve their social and emotional engagement.

Children with autism spectrum disorder can have a difficult time expressing their emotions and can be highly sensitive to sound, sight, and touch. That sometimes restricts their participation in everyday activities, leaving them socially isolated. Occupational therapists can help them cope better, but the time they’re able to spend is limited and the sessions tend to be expensive.

Roboticist Ayanna Howard, an IEEE senior member, has been using interactive androids to guide children with autism on ways to socially and emotionally engage with others—as a supplement to therapy. Howard is chair of the School of Interactive Computing and director of the Human-Automation Systems Lab at Georgia Tech. She helped found Zyrobotics, a Georgia Tech VentureLab startup that is working on AI and robotics technologies to engage children with special needs. Last year Forbes named Howard, Zyrobotics’ chief technology officer, one of the Top 50 U.S. Women in Tech.

In a recent study, Howard and other researchers explored how robots might help children navigate sensory experiences. The experiment involved 18 participants between the ages of 4 and 12; five had autism, and the rest were meeting typical developmental milestones. Two humanoid robots were programmed to express boredom, excitement, nervousness, and 17 other emotional states. As children explored stations set up for hearing, seeing, smelling, tasting, and touching, the robots modeled what the socially acceptable responses should be.

“If a child’s expression is one of happiness or joy, the robot will have a corresponding response of encouragement,” Howard says. “If there are aspects of frustration or sadness, the robot will provide input to try again.” The study suggested that many children with autism exhibit stronger levels of engagement when the robots interact with them at such sensory stations.

It is one of many robotics projects Howard has tackled. She has designed robots for researching glaciers, and she is working on assistive robots for the home, as well as an exoskeleton that can help children who have motor disabilities.

Howard spoke about her work during the Ethics in AI: Impacts of (Anti?) Social Robotics panel session held in May at the IEEE Vision, Innovation, and Challenges Summit in San Diego. You can watch the session on IEEE.tv.

The next IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony will be held on 15 May 2020 at the JW Marriott Parq Vancouver hotel, in Vancouver.

In this interview with The Institute, Howard talks about how she got involved with assistive technologies, the need for a more diverse workforce, and ways IEEE has benefited her career.

FOCUS ON ACCESSIBILITY
Howard was inspired to work on technology that can improve accessibility in 2008 while teaching high school students at a summer camp devoted to science, technology, engineering, and math.

“A young lady with a visual impairment attended camp. The robot programming tools being used at the camp weren’t accessible to her,” Howard says. “As an engineer, I want to fix problems when I see them, so we ended up designing tools to enable access to programming tools that could be used in STEM education.

“That was my starting motivation, and this theme of accessibility has expanded to become a main focus of my research. One of the things about this world of accessibility is that when you start interacting with kids and parents, you discover another world out there of assistive technologies and how robotics can be used for good in education as well as therapy.”

DIVERSITY OF THOUGHT
The Institute asked Howard why it’s important to have a more diverse STEM workforce and what could be done to increase the number of women and others from underrepresented groups.

“The makeup of the current engineering workforce isn’t necessarily representative of the world, which is composed of different races, cultures, ages, disabilities, and socio-economic backgrounds,” Howard says. “We’re creating products used by people around the globe, so we have to ensure they’re being designed for a diverse population. As IEEE members, we also need to engage with people who aren’t engineers, and we don’t do that enough.”

Educational institutions are doing a better job of increasing diversity in areas such as gender, she says, adding that more work is needed because the enrollment numbers still aren’t representative of the population and the gains don’t necessarily carry through after graduation.

“There has been an increase in the number of underrepresented minorities and females going into engineering and computer science,” she says, “but data has shown that their numbers are not sustained in the workforce.”

ROLE MODEL
Because there are more underrepresented groups on today’s college campuses that can form a community, the lack of engineering role models—although a concern on campuses—is more extreme for preuniversity students, Howard says.

“Depending on where you go to school, you may not know what an engineer does or even consider engineering as an option,” she says, “so there’s still a big disconnect there.”

Howard has been involved for many years in math- and science-mentoring programs for at-risk high school girls. She tells them to find what they’re passionate about and combine it with math and science to create something. She also advises them not to let anyone tell them that they can’t.

Howard’s father is an engineer. She says he neither encouraged nor discouraged her from becoming one, but when she broke something, he would show her how to fix it and talk her through the process. Along the way, he taught her a logical way of thinking she says all engineers have.

“When I would try to explain something, he would quiz me and tell me to ‘think more logically,’” she says.

Howard earned a bachelor’s degree in engineering from Brown University, in Providence, R.I. She then received a master’s degree and a doctorate in electrical engineering from the University of Southern California. Before joining the faculty of Georgia Tech in 2005, she worked at NASA’s Jet Propulsion Laboratory at the California Institute of Technology for more than a decade as a senior robotics researcher and deputy manager in the Office of the Chief Scientist.

ACTIVE VOLUNTEER
Howard’s father was also an IEEE member, but that’s not why she joined the organization. She says she signed up when she was a student because, “that was something that you just did. Plus, my student membership fee was subsidized.”

She kept the membership as a grad student because of the discounted rates members receive on conferences.

Those conferences have had an impact on her career. “They allow you to understand what the state of the art is,” she says. “Back then you received a printed conference proceeding and reading through it was brutal, but by attending it in person, you got a 15-minute snippet about the research.”

Howard is an active volunteer with the IEEE Robotics and Automation and the IEEE Systems, Man, and Cybernetics societies, holding many positions and serving on several committees. She is also featured in the IEEE Impact Creators campaign. These members were selected because they inspire others to innovate for a better tomorrow.

“I value IEEE for its community,” she says. “One of the nice things about IEEE is that it’s international.”

#435683 How High Fives Help Us Get in Touch With ...

The human sense of touch is so naturally ingrained in our everyday lives that we often don’t notice its presence. Even so, touch is a crucial sensing ability that helps people to understand the world and connect with others. As the market for robots grows, and as robots become more ingrained into our environments, people will expect robots to participate in a wide variety of social touch interactions. At Oregon State University’s Collaborative Robotics and Intelligent Systems (CoRIS) Institute, I research how to equip everyday robots with better social-physical interaction skills—from playful high-fives to challenging physical therapy routines.

Some commercial robots already possess certain physical interaction skills. For example, the videoconferencing feature of mobile telepresence robots can keep far-away family members connected with one another. These robots can also roam distant spaces and bump into people, chairs, and other remote objects. And my Roomba occasionally tickles my toes before turning to vacuum a different area of the room. As a human being, I naturally interpret this (and other Roomba behaviors) as social, even if they were not intended as such. At the same time, for both of these systems, social perceptions of the robots’ physical interaction behaviors are not well understood, and these social touch-like interactions cannot be controlled in nuanced ways.

Before joining CoRIS early this year, I was a postdoc at the University of Southern California’s Interaction Lab, and prior to that, I completed my doctoral work at the GRASP Laboratory’s Haptics Group at the University of Pennsylvania. My dissertation focused on improving the general understanding of how robot control and planning strategies influence perceptions of social touch interactions. As part of that research, I conducted a study of human-robot hand-to-hand contact, focusing on an interaction somewhere between a high five and a hand-clapping game. I decided to study this particular interaction because people often high five, and they will likely expect robots in everyday spaces to high five as well!

The implications of motion and planning for the social touch experience in these interactions are also crucial—think about a disappointingly wimpy (or triumphantly amazing) high five that you’ve experienced in the past. This great or terrible high-fiving experience could be fleeting, but it could also influence who you interact with, who you’re friends with, and even how you perceive the character or personalities of those around you. This type of perception, judgment, and response could extend to personal robots, too!

An investigation like this requires a mixture of more traditional robotics research (e.g., understanding how to move and control a robot arm, developing models of the desired robot motion) along with techniques from design and psychology (e.g., performing interviews with research participants, using best practices from experimental methods in perception). Enabling robots with social touch abilities also comes with many challenges, and even skilled humans can have trouble anticipating what another person is about to do. Think about trying to make satisfying hand contact during a high five—you might know the classic adage “watch the elbow,” but if you’re like me, even this may not always work.

I conducted a research study involving eight different types of human-robot hand contact, with different combinations of the following: interactions with a facially reactive or non-reactive robot, a physically reactive or non-reactive planning strategy, and a lower or higher robot arm stiffness. My robotic system could become facially reactive by changing its facial expression in response to hand contact, or physically reactive by updating its plan of where to move next after sensing hand contact. The stiffness of the robot could be adjusted by changing a variable that controlled how quickly the robot’s motors tried to pull its arm to the desired position. I knew from previous research that fine differences in touch interactions can have a big impact on perceived robot character. For example, if a robot grips an object too tightly or for too long while handing an object to a person, it might be perceived as greedy, possessive, or perhaps even Sméagol-like. A robot that lets go too soon might appear careless or sloppy.

In the example cases of robot grip, it’s clear that understanding people’s perceptions of robot characteristics and personality can help roboticists choose the right robot design based on the proposed operating environment of the robot. I likewise wanted to learn how the facial expressions, physical reactions, and stiffness of a hand-clapping robot would influence human perceptions of robot pleasantness, energeticness, dominance, and safety. Understanding this relationship can help roboticists to equip robots with personalities appropriate for the task at hand. For example, a robot assisting people in a grocery store may need to be designed with a high level of pleasantness and only moderate energy, while a maximally effective robot for comedy roast battles may need high degrees of energy and dominance above all else.
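
To make that design space concrete, here is a minimal sketch of how the eight experimental conditions from the hand-clapping study, and the single stiffness knob they include, might be laid out in code. The gain values and the simple proportional-control form are my own illustrative assumptions, not the actual Baxter controller:

```python
from itertools import product

# The three binary factors crossed in the study: 2 x 2 x 2 = 8 conditions.
FACIAL = ("non-reactive face", "reactive face")        # change expression on contact?
PHYSICAL = ("fixed trajectory", "replan on contact")   # update the motion plan after contact?
STIFFNESS = ("low arm stiffness", "high arm stiffness")

conditions = list(product(FACIAL, PHYSICAL, STIFFNESS))
print(len(conditions), "conditions")  # -> 8

# "Stiffness" here acts like a proportional gain pulling each joint toward its
# commanded position; a larger gain tracks the planned trajectory more tightly,
# which participants in the study read as more predictable. Values are invented.
def corrective_torque(q_desired, q_actual, stiffness_gain):
    return stiffness_gain * (q_desired - q_actual)

print(corrective_torque(0.5, 0.3, stiffness_gain=20.0))  # stiff arm: strong pull back toward the plan
print(corrective_torque(0.5, 0.3, stiffness_gain=5.0))   # soft arm: gentler correction
```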

After many a late night at the GRASP Lab clapping hands with a big red robot, I was ready to conduct the study. Twenty participants visited the lab to clap hands with our Baxter Research Robot and help me begin to understand how characteristics of this humanoid robot’s social touch influenced its pleasantness, energeticness, dominance, and apparent safety. Baxter interacted with participants using a custom 3D-printed hand that was inlaid with silicone inserts.

The study showed that a facially reactive robot seemed more pleasant and energetic. A physically reactive robot seemed less pleasant, energetic, and dominant for this particular study design and interaction. I thought contact with a stiffer robot would seem harder (and therefore more dominant and less safe), but counter to my expectations, a stiffer-armed robot seemed safer and less dominant to participants. This may be because the stiffer robot was more precise in following its pre-programmed trajectory, therefore seeming more predictable and less free-spirited.

Safety ratings of the robot were generally high, and several participants commented positively on the robot’s facial expressions. Some participants attributed inventive (and non-existent) intelligences to the robot—I used neither computer vision nor the Baxter robot’s cameras in this study, but more than one participant complimented me on how well the robot tracked their hand position. While interacting with the robot, participants displayed happy facial expressions more than any other analyzed type of expression.

Photo: Naomi Fitter

Participants were asked to clap hands with Baxter and describe how they perceived the robot in terms of its pleasantness, energeticness, dominance, and apparent safety.

Circling back to the idea of how people might interpret even rudimentary and practical robot behaviors as social, these results show that this type of social perception isn’t just true for my lovable (but sometimes dopey) Roomba, but also for collaborative industrial robots, and generally, any robot capable of physical human-robot interaction. In designing the motion of Baxter, the adjustment of a single number in the equation that controls joint stiffness can flip the robot from seeming safe and docile to brash and commanding. These implications are sometimes predictable, but often unexpected.

The results of this particular study give us a partial guide to manipulating the emotional experience of robot users by adjusting aspects of robot control and planning, but future work is needed to fully understand the design space of social touch. Will materials play a major role? How about personalized machine learning? Do results generalize over all robot arms, or even a specialized subset like collaborative industrial robot arms? I’m planning to continue answering these questions, and when I finally solve human-robot social touch, I’ll high five all my robots to celebrate.

Naomi Fitter is an assistant professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University, where her Social Haptics, Assistive Robotics, and Embodiment (SHARE) research group aims to equip robots with the ability to engage and empower people in interactions from playful high-fives to challenging physical therapy routines. She completed her doctoral work in the GRASP Laboratory’s Haptics Group and was a postdoctoral scholar in the University of Southern California’s Interaction Lab from 2017 to 2018. Naomi’s not-so-secret pastime is performing stand-up and improv comedy.
