Mobile dexterous robots: a key element ...
Kinova robotic arms, from left to right: Gen2, Gen3 lite, Gen3
Multiple companies turned to Kinova® robotic arms to create mobile platforms with manipulation capabilities that tackle many aspects of the health crisis. Adding a dexterous manipulator to a mobile platform opens the door to applications such as patient care, disinfection, and cleaning, all critical to the fight against the virus.
When the pandemic hit at the beginning of 2020, it quickly became clear that the human resources available to address all the fronts in the fight against the virus would be stretched thin, especially since those very people are at risk of falling ill themselves. Mobile robots with manipulation capabilities were quickly identified as a way to alleviate this problem: they free skilled people from menial tasks and enable remote or automated work that keeps exposure to the virus to a minimum.
Multiple companies turned to Kinova robotic arms for an off-the-shelf manipulation solution suitable for mobile platforms. Kinova's history in the assistive market is at the core of its technology: assistive products such as the wheelchair-mounted Jaco® robot were designed from the start to be extremely safe, user-friendly, ultra-lightweight, and power-efficient, and that experience has carried over into more recent products. These features do not come at the expense of performance; in fact, Kinova robots boast some of the highest payload-to-weight ratios in the industry. Robots like these are therefore well suited to mobile platforms and to integration into products meant to be used in non-industrial settings.
One of the companies that successfully made such an integration is Diligent Robotics, which developed a patient care robot called Moxi by integrating a Kinova Gen2 robot into a mobile platform powered by cloud-based software and artificial intelligence. Moxi is designed to help clinical staff with menial tasks that do not involve patients, such as fetching supplies, delivering samples, and distributing equipment, freeing skilled staff like nurses to perform more valuable work. Its rounded design and friendly face make interactions with it feel more natural for both the public and hospital staff who may not otherwise be used to interacting with robots. In the current pandemic, one can easily see how a robot such as Moxi can alleviate the workload of healthcare workers and quickly deliver a return on investment for healthcare institutions.
Another menial task that became surprisingly important in the context of the health crisis is cleaning. Before the crisis, Peanut Robotics, a California startup that raised $2 million in 2019, was already developing a mobile platform carrying a Kinova Gen3 to clean commercial spaces such as restaurants, offices, hotels, and even airports. By coupling the 7-degree-of-freedom robot to a vertical rail, their system can reach even the most inconvenient places. Rather than relying on specialized end-effectors, they use the flexibility of the robot's gripper to grab tools similar to those a human would use, making it possible to clean an entire room with a single system, including spraying disinfectant, scrubbing, and wiping, all autonomously. In the current context, where more surfaces need more frequent cleaning and contact with objects carries a higher risk of infection, we will surely see this kind of robot more and more often.
However, not all environments are suitable for such deep cleaning. Common areas in malls or airports, for example, are simply too large and possibly too crowded for such operations. These are the kinds of cases that A&K Robotics is tackling with its Autonomous Mobile Robotic UV Disinfector (Amrud), a project selected for funding by Canada's Advanced Manufacturing Supercluster. The company combined its expertise in navigation and mobile platforms with the capabilities of a Kinova Gen3 lite robot. The compact and extremely light (less than 6 kg) arm rides on the platform wielding a UV light source to disinfect surfaces, and its 6 degrees of freedom provide more than enough flexibility to wave the light source over even the most complex surfaces. A&K made the news several times in 2020 by deploying its solution to disinfect floors and high-touch surfaces. When the project started back in 2017 it did not get much traction, but recent needs have clearly brought it much-deserved attention.
As the pandemic settles in, an ever-increasing number of applications for robots is being found. Whether in traditionally non-industrialized sectors looking to become more resilient to staff shortages or as a consequence of the democratization of working from home, robots are becoming more commonplace than ever. Kinova, with its wide range of robots, is there to help developers and integrators accomplish their tasks and contribute to the growing role of collaborative robots in our daily lives.
Visible Touch: How Cameras Can Help ...
The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs.
To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.
A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.”
Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to that approach is that, even to achieve coarse spatial resolution, many sensors are needed in a small area.
However, working with non-rigid, inflatable robots, Hoffman and his co-researchers instead installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of view.
“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.
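To make that pipeline concrete, here is a minimal sketch of what a ShadowSense-style inference loop might look like: frames from the internal USB camera are preprocessed and fed to a trained classifier that outputs one of the gesture labels. This is illustrative only; the weights file, gesture label names, backbone, and preprocessing choices are assumptions, not details from the paper.

```python
# Hypothetical ShadowSense-style inference loop (a sketch, not the authors' code).
import cv2
import torch
import torchvision.transforms as T
from torchvision import models

# The article lists five gestures; the sixth ("no gesture") is an assumption.
GESTURES = ["one-handed touch", "two-handed touch", "pointing",
            "hugging", "punching", "no gesture"]

# Hypothetical classifier: a ResNet-18 with a six-way head, trained on
# shadow images (see the transfer-learning sketch further below).
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, len(GESTURES))
model.load_state_dict(torch.load("shadow_classifier.pt"))  # assumed weights file
model.eval()

preprocess = T.Compose([
    T.ToTensor(),                  # HWC uint8 image -> CHW float tensor in [0, 1]
    T.Resize((224, 224)),          # match the backbone's expected input size
    T.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)          # USB camera mounted inside the robot body
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # OpenCV delivers BGR
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))  # add batch dimension
    print(GESTURES[logits.argmax(dim=1).item()])
cap.release()
```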
This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device.
The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users could cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Even though the camera is then prevented from capturing a high-resolution image of the user or the surrounding environment, with the right kind of training datasets the robot can continue to monitor some kinds of non-tactile activity.
In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.
As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
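As a rough illustration of that last-layers retraining, the sketch below freezes an ImageNet-pretrained ResNet-18 and trains only a new six-way head on a folder of shadow images. The backbone, dataset layout, and hyperparameters are assumptions; the paper's actual architecture and training setup may differ.

```python
# Transfer-learning sketch in the spirit Hoffman describes: freeze a
# pretrained backbone and retrain only the final layer on shadow images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                  # keep pretrained features fixed

model.fc = nn.Linear(model.fc.in_features, 6)    # new head: six gesture classes

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumed layout: shadow_dataset/<gesture_name>/*.png
train_set = datasets.ImageFolder("shadow_dataset", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # only the head trains
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because only the small final layer is updated, a dataset of a few thousand labeled shadow images can suffice, which is precisely Hoffman's point about not needing millions of examples.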