Tag Archives: arm
#439929 GITAI’s Autonomous Robot Arm Finds ...
Late last year, Japanese robotics startup GITAI sent their S1 robotic arm up to the International Space Station as part of a commercial airlock extension module to test out some useful space-based autonomy. Everything moves pretty slowly on the ISS, so it wasn't until last month that NASA astronauts installed the S1 arm and GITAI was able to put the system through its paces—or rather, sit in comfy chairs on Earth and watch the arm do most of its tasks by itself, because that's the dream, right?
The good news is that everything went well, and the arm did everything GITAI was hoping it would do. So what's next for commercial autonomous robotics in space? GITAI's CEO tells us what they're working on.
In this technology demonstration, the GITAI S1 autonomous space robot was installed inside the ISS Nanoracks Bishop Airlock and succeeded in executing two tasks: assembling structures and panels for In-Space Assembly (ISA), and operating switches & cables for Intra-Vehicular Activity (IVA).
One of the advantages of working in space is that it's a highly structured environment. Microgravity can be somewhat unpredictable, but you have a very good idea of the characteristics of objects (and even of lighting) because everything that's up there is excessively well defined. So, stuff like using a two-finger gripper for relatively high precision tasks is totally possible, because the variation that the system has to deal with is low. Of course, things can always go wrong, so GITAI also tested teleop procedures from Houston to make sure that having humans in the loop was also an effective way of completing tasks.
Since full autonomy is vastly more difficult than almost full autonomy, occasional teleop is probably going to be critical for space robots of all kinds. We spoke with GITAI CEO Sho Nakanose to learn more about their approach.
IEEE Spectrum: What do you think is the right amount of autonomy for robots working inside of the ISS?
Sho Nakanose: We believe that a combination of 95% autonomous control and 5% remote judgment and remote operation is the most efficient way to work. In this ISS demonstration, all of the work was performed with 99% autonomous control and 1% remote decision making. In actual operations on the ISS, however, irregular tasks will occur that cannot be handled by autonomous control, and those should be handled by remote control from the ground, which is why we expect the final ratio of about 5% remote judgment and remote control to be the most efficient.
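Nakanose's 95/5 split is essentially a shared-autonomy dispatch pattern: run every task autonomously, and escalate only the anomalies the onboard system cannot resolve to a remote human operator. The short Python sketch below illustrates that pattern in the abstract; the function names, the anomaly model, and the 95 percent success rate are hypothetical placeholders, not GITAI's actual flight software.

```python
import random  # stand-in for a real onboard anomaly detector
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    SUCCESS = auto()
    ANOMALY = auto()

@dataclass
class Task:
    name: str

def run_autonomously(task: Task) -> Outcome:
    """Placeholder for the onboard autonomy stack (perception + planning).

    Here an arbitrary 95% success rate stands in for whatever fraction of
    tasks the real system completes without human help.
    """
    return Outcome.SUCCESS if random.random() < 0.95 else Outcome.ANOMALY

def escalate_to_ground(task: Task) -> None:
    """Placeholder for handing control to a teleoperator on the ground."""
    print(f"[teleop] operator completes {task.name!r} by remote control")

def execute(tasks: list[Task]) -> None:
    for task in tasks:
        if run_autonomously(task) is Outcome.SUCCESS:
            print(f"[auto]   {task.name!r} completed autonomously")
        else:
            escalate_to_ground(task)  # the ~5% remote-judgment path

if __name__ == "__main__":
    execute([Task("flip switch"), Task("route cable"), Task("assemble panel")])
```

The design point is that the autonomy stack never blocks on a human for routine work; the operator is a fallback path, which is what keeps the remote workload down near that 5 percent target.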
GITAI will apply the general-purpose autonomous space robotics technology, know-how, and experience acquired through this tech demo to develop extra-vehicular robotics (EVR) that can execute docking, repair, and maintenance tasks for On-Orbit Servicing (OOS) or conduct various activities for lunar exploration and lunar base construction. -Sho Nakanose
I'm sure you did many tests with the system on the ground before sending it to the ISS. How was operating the robot on the ISS different from the testing you had done on Earth?
The biggest difference between experiments on the ground and on the ISS is the microgravity environment, but that was not especially difficult to cope with. The ISS, however, is an environment we had never operated in before, so we ran into a variety of unexpected situations that were extremely difficult to deal with; for example, a communication breakdown occurred after a failed thruster firing experiment on the Russian module. We were able to solve all of these problems because the development team had carefully prepared for irregularities in advance.
It looked like the robot was performing many tasks using equipment designed for humans. Do you think it would be better to design things like screws and control panels to make them easier for robots to see and operate?
Yes, I think so. Unlike the ISS, which was built in the past, the lunar-orbiting space station Gateway and the lunar base that will be built in the future are expected to have humans and robots working cooperatively. It is therefore necessary to design and implement interfaces that are easy for both humans and robots to use. In 2019, GITAI received an order from JAXA to develop guidelines for such human- and robot-friendly interfaces on the ISS and Gateway.
What are you working on next?
We are planning to conduct an on-orbit extra-vehicular demonstration in 2023 and a lunar demonstration in 2025. We are also working on space robot development projects for several customers from whom we have already received orders.
#439691 Researchers develop bionic arm that ...
Cleveland Clinic researchers have engineered a first-of-its-kind bionic arm for patients with upper-limb amputations that allows wearers to think, behave and function like a person without an amputation, according to new findings published in Science Robotics.
#439243 Scientists Added a Sense of Touch to a ...
Most people probably underestimate how much our sense of touch helps us navigate the world around us. New research has made that crystal clear: a robotic arm with the ability to feel halved the time its user needed to complete tasks.
In recent years, rapid advances in both robotics and neural interfaces have brought the dream of bionic limbs (like the one sported by Luke Skywalker in the Star Wars movies) within touching distance. In 2019, researchers even unveiled a robotic prosthetic arm with a sense of touch that the user could control with their thoughts alone.
But so far, these devices have typically relied on connecting to nerves and muscles in the patient’s residual upper arm. That has meant the devices don’t work for those who have been paralyzed or whose injuries have caused too much damage to those tissues.
That may be about to change, though. For the first time, researchers have allowed a patient to control a robotic arm using a direct connection to their brain while simultaneously receiving sensory information from the device. And by closing the loop, the patient was able to complete tasks in half the time compared to controlling the arm without any feedback.
“The control is so intuitive that I’m basically just thinking about things as if I were moving my own arm,” patient Nathan Copeland, who has been working with researchers at the University of Pittsburgh for six years, told NPR.
The results, reported in Science, build on previous work from the same team that showed they could use implants in Copeland’s somatosensory cortex to trigger sensations localized to regions of his hand, despite him having lost feeling and control thanks to a spinal cord injury.
The 28-year-old had also previously controlled an external robotic arm using a neural interface wired up to his motor cortex, but in the latest experiment the researchers combined the two strands of research, with impressive results.
In a series of tasks designed to test dexterity, including moving objects of different shapes and sizes and pouring from one cup to another, Copeland was able to reduce the time he took to complete these tasks from a median of 20 seconds to just 10, and his performance was often equivalent to that of an able-bodied person.
The sensory information that Copeland receives from the arm is still fairly rudimentary. Sensors measure torque in the joints at the base of the robotic fingers, which is then translated into electrical signals and transmitted to the brain. He reported that the feedback didn’t feel natural, but more like pressure or a gentle tingling.
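To make that pipeline concrete, here is a minimal sketch of how per-finger torque readings might be mapped to stimulation commands. The linear mapping, the torque and amplitude ranges, and every name below are illustrative assumptions, not the Pittsburgh team's actual implementation.

```python
# Hypothetical sketch: convert fingertip-joint torque readings into
# stimulation pulse amplitudes. All ranges and names are assumptions.

TORQUE_MAX_NM = 0.5                    # assumed saturation torque per finger joint
AMP_MIN_UA, AMP_MAX_UA = 10.0, 80.0    # assumed stimulation amplitude range (microamps)

def torque_to_amplitude(torque_nm: float) -> float:
    """Linearly map a joint torque onto the amplitude range, clamped at both ends."""
    t = min(max(torque_nm, 0.0), TORQUE_MAX_NM) / TORQUE_MAX_NM
    return AMP_MIN_UA + t * (AMP_MAX_UA - AMP_MIN_UA)

def feedback_frame(joint_torques_nm: dict[str, float]) -> dict[str, float]:
    """One update cycle: map each finger's measured torque to an amplitude."""
    return {finger: torque_to_amplitude(tq) for finger, tq in joint_torques_nm.items()}

# Example: light contact on the index finger, firmer grasp force on the thumb.
print(feedback_frame({"index": 0.05, "thumb": 0.30}))
```

However the real encoding works, the essential property is the same as in this toy version: harder contact produces stronger stimulation in real time, which is the "pressure or gentle tingling" Copeland describes.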
But that’s still a lot more information than can be gleaned from simply watching the hand’s motions, which is all he had to go on before. And the approach required almost no training, unlike other popular approaches based on sensory substitution, which stimulate a patch of skin or provide visual or audio cues that the patient has to learn to associate with tactile sensations.
“We still have a long way to go in terms of making the sensations more realistic and bringing this technology to people’s homes, but the closer we can get to recreating the normal inputs to the brain, the better off we will be,” Robert Gaunt, a co-author of the paper, said in a press release.
“When even limited and imperfect sensation is restored, the person’s performance improved in a pretty significant way.”
An external robotic arm is still a long way from a properly integrated prosthetic, and it will likely require significant work to squeeze all the required technology into a more portable package. But Bolu Ajiboye, a neural engineer from Case Western Reserve University, told Wired that providing realistic sensory signals directly to the brain, and in particular ones that are relayed in real time, is a significant advance.
In a related perspective in Science, Aldo Faisal of Imperial College London said that the integration of a sense of touch may not only boost the performance of prosthetics, but also give patients a greater sense of ownership over their replacement limbs.
The breakthrough, he added, also opens up a host of interesting lines of scientific inquiry, including whether similar approaches could help advance robotics or be used to augment human perception with non-biological sensors.
Image Credit: RAEng_Publications from Pixabay