Tag Archives: back
#437758 Remotely Operated Robot Takes Straight ...
Roboticists love hard problems. Challenges like the DRC and SubT have helped (and are still helping) to catalyze major advances in robotics, but not all hard problems require a massive amount of DARPA funding—sometimes, a hard problem can just be something very specific that’s really hard for a robot to do, especially relative to the ease with which a moderately trained human might be able to do it. Catching a ball. Putting a peg in a hole. Or using a straight razor to shave someone’s face without Sweeney Todd-izing them.
This particular roboticist who sees straight-razor face shaving as a hard problem that robots should be solving is John Peter Whitney, who we first met back at IROS 2014 in Chicago when (working at Disney Research) he introduced an elegant fluidic actuator system. These actuators use tubes containing a fluid (like air or water) to transmit forces from a primary robot to a secondary robot in a very efficient way that also allows for either compliance or very high fidelity force feedback, depending on the compressibility of the fluid.
Photo: John Peter Whitney/Northeastern University
Barber meets robot: Boston-based barber Jesse Cabbage [top, right] observes the machine created by roboticist John Peter Whitney. Before testing the robot on Whitney’s face, they used his arm for a quick practice [bottom].
Whitney is now at Northeastern University, in Boston, and he recently gave a talk at the RSS workshop on “Reacting to Contact,” where he suggested that straight razor shaving would be an interesting and valuable problem for robotics to work toward, due to its difficulty and requirement for an extremely high level of both performance and reliability.
Now, a straight razor is sort of like a safety razor, except with the safety part removed, which in fact does make it significantly less safe for humans, much less robots. Also not ideal for those worried about safety is that as part of the process the razor ends up in distressingly close proximity to things like the artery that is busily delivering your brain’s entire supply of blood, which is very close to the top of the list of things that most people want to keep blades very far away from. But that didn’t stop Whitney from putting his whiskers where his mouth is and letting his robotic system mediate the ministrations of a professional barber. It’s not an autonomous robotic straight-razor shave (because Whitney is not totally crazy), but it’s a step in that direction, and requires that the hardware Whitney developed be dead reliable.
Perhaps that was a poor choice of words. But rest assured that Whitney lived long enough to answer our questions afterward. Here’s the video; it’s part of a longer talk, but it should start in the right spot, at about 23:30.
If Whitney looked a little bit nervous to you, that’s because he was. “This was the first time I’d ever been shaved by someone (something?!) else with a straight razor,” he told us, and while having a professional barber at the helm was some comfort, “the lack of feeling and control on my part was somewhat unsettling.” Whitney says that the barber, Jesse Cabbage of Dentes Barbershop in Somerville, Mass., was surprised by how well he could feel the tactile sensations being transmitted from the razor. “That’s one of the reasons we decided to make this video,” Whitney says. “I can’t show someone how something feels, so the next best thing is to show a delicate task that either from experience or intuition makes it clear to the viewer that the system must have these properties—otherwise the task wouldn’t be possible.”
And as for when Whitney might be comfortable getting shaved by a robotic system without a human in the loop? It’s going to take a lot of work, as do most other hard problems in robotics. “There are two parts to this,” he explains. “One is fault-tolerance of the components themselves (software, electronics, etc.) and the second is the quality of the perception and planning algorithms.”
He offers a comparison to self-driving cars, in which similar (or greater) risks are incurred: “To learn how to perceive, interpret, and adapt, we need a very high-fidelity model of the problem, or a wealth of data and experience, or both,” he says. “But in the case of shaving we are greatly lacking in both!” He continues with the analogy: “I think there is a natural progression—the community started with autonomous driving of toy cars on closed courses and worked up to real cars carrying human passengers; in robotic manipulation we are beginning to move out of the ‘toy car’ stage and so I think it’s good to target high-consequence hard problems to help drive progress.”
Of course, the ultimate goal here is much more general than the creation of a dedicated straight razor shaving robot; it’s a challenge that includes a host of sub-goals that will benefit robotics more generally. This particular hardware system Whitney is developing is actually a testbed for exploring MRI-compatible remote needle biopsy, and he and his students are collaborating with Brigham and Women’s Hospital in Boston on adapting this technology to prostate biopsy and ablation procedures. They’re also exploring how delicate touch can be used as a way to map an environment and localize within it, especially where using vision may not be a good option. “These traits and behaviors are especially interesting for applications where we must interact with delicate and uncertain environments,” says Whitney. “Medical robots, assistive and rehabilitation robots and exoskeletons, and shared-autonomy teleoperation for delicate tasks.”
A paper with more details on this robotic system, “Series Elastic Force Control for Soft Robotic Fluid Actuators,” is available on arXiv.
#437751 Startup and Academics Find Path to ...
Engineers have been chasing a form of AI that could drastically lower the energy required to do typical AI things like recognize words and images. This analog form of machine learning does one of the key mathematical operations of neural networks using the physics of a circuit instead of digital logic. But one of the main things limiting this approach is that deep learning’s training algorithm, back propagation, has to be done by GPUs or other separate digital systems.
Now University of Montreal AI expert Yoshua Bengio, his student Benjamin Scellier, and colleagues at startup Rain Neuromorphics have come up with a way for analog AIs to train themselves. That method, called equilibrium propagation, could lead to continuously learning, low-power analog systems of a far greater computational ability than most in the industry now consider possible, according to Rain CTO Jack Kendall.
Analog circuits could save power in neural networks in part because they can efficiently perform a key calculation, called multiply and accumulate. That calculation multiplies values from inputs according to various weights, and then it sums all those values up. Two fundamental laws of electrical engineering can basically do that, too. Ohm’s Law multiplies voltage and conductance to give current, and Kirchhoff’s Current Law sums the currents entering a point. By storing a neural network’s weights in resistive memory devices, such as memristors, multiply-and-accumulate can happen completely in analog, potentially reducing power consumption by orders of magnitude.
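The two circuit laws can be made concrete in a few lines of ordinary arithmetic. This sketch is purely illustrative, not a model of any particular chip (for one thing, it allows signed conductances, which a real crossbar would implement with differential pairs of devices): each weight is stored as a conductance, each input is applied as a voltage, Ohm’s law does the multiplications, and Kirchhoff’s current law does the sum.

```python
# Illustrative sketch of an analog multiply-and-accumulate "column."
# Weights are conductances G (siemens); inputs are voltages V (volts).

inputs_v = [0.2, -0.5, 1.0]    # input activations, encoded as voltages
weights_g = [0.8, 0.3, -0.1]   # weights, encoded as conductances
                               # (signed values are a simplification)

# Ohm's law: each resistive device multiplies its input voltage
# by its conductance, producing a current I = G * V.
currents = [g * v for g, v in zip(weights_g, inputs_v)]

# Kirchhoff's current law: currents flowing into the shared output
# node simply add, completing the weighted sum.
output_i = sum(currents)

print(output_i)  # ≈ -0.09 amps: the dot product of weights and inputs
```

The point of the hardware version is that neither operation costs any digital logic at all: the multiplication and the summation are side effects of how current flows through the crossbar.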
The reason analog AI systems can’t train themselves today has a lot to do with the variability of their components. Just like real neurons, those in analog neural networks don’t all behave exactly alike. To do back propagation with analog components, you must build two separate circuit pathways: one going forward to come up with an answer (called inferencing), the other going backward to do the learning so that the answer becomes more accurate. But because of the variability of analog components, the pathways don't match up.
“You end up accumulating error as you go backwards through the network,” says Bengio. To compensate, a network would need lots of power-hungry analog-to-digital and digital-to-analog circuits, defeating the point of going analog.
Equilibrium propagation allows learning and inferencing to happen on the same network, partly by adjusting the behavior of the network as a whole. “What [equilibrium propagation] allows us to do is to say how we should modify each of these devices so that the overall circuit performs the right thing,” he says. “We turn the physical computation that is happening in the analog devices directly to our advantage.”
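The two-phase idea behind equilibrium propagation can be sketched on a deliberately tiny toy problem. The snippet below is not Rain’s or Bengio’s implementation; it is a minimal illustration of the published rule on a made-up scalar “circuit” whose energy function and settling dynamics were chosen for readability. The network first settles freely, is then gently “nudged” toward the target, and the weight gradient is read off from how the equilibrium moves between the two phases.

```python
# Toy equilibrium propagation on the simplest possible "network":
# a single state s with energy E(s) = 0.5*s**2 - w*s*x, whose free
# equilibrium is s = w*x, and a cost C(s) = 0.5*(s - y)**2.
# All names and constants here are illustrative choices.

def settle(w, x, y, beta, steps=200, lr=0.1):
    """Relax s by gradient descent on the total energy E + beta*C,
    standing in for a physical circuit settling to equilibrium."""
    s = 0.0
    for _ in range(steps):
        dF_ds = (s - w * x) + beta * (s - y)  # dE/ds + beta*dC/ds
        s -= lr * dF_ds
    return s

def eqprop_grad(w, x, y, beta=0.01):
    """Two-phase gradient estimate: compare dE/dw = -s*x at the
    free (beta = 0) and nudged (beta > 0) equilibria."""
    s_free = settle(w, x, y, beta=0.0)
    s_nudged = settle(w, x, y, beta=beta)
    return ((-s_nudged * x) - (-s_free * x)) / beta

# Train w so that the free-phase output s = w*x hits the target y.
w, x, y = 0.0, 2.0, 1.0
for _ in range(50):
    w -= 0.1 * eqprop_grad(w, x, y)

print(round(w * x, 3))  # free-phase output converges toward 1.0
```

The key property, visible even in this scalar case, is that learning only ever asks the circuit to do what it already does during inferencing: settle to equilibrium. There is no separate backward pathway for component mismatch to corrupt.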
Right now, equilibrium propagation is only working in simulation. But Rain plans to have a hardware proof-of-principle in late 2021, according to CEO and cofounder Gordon Wilson. “We are really trying to fundamentally reimagine the hardware computational substrate for artificial intelligence, find the right clues from the brain, and use those to inform the design of this,” he says. The result could be what they call end-to-end analog AI systems capable of running sophisticated robots or even playing a role in data centers. Both of those are currently considered beyond the capabilities of analog AI, which is now focused only on adding inferencing abilities to sensors and other low-power “edge” devices, while leaving the learning to GPUs.
#437747 High Performance Ornithopter Drone Is ...
The vast majority of drones are rotary-wing systems (like quadrotors), and for good reason: They’re cheap, they’re easy, they scale up and down well, and we’re getting quite good at controlling them, even in very challenging environments. For most applications, though, drones lose out to birds and their flapping wings in almost every way—flapping wings are very efficient, enable astonishing agility, and are much safer, able to make compliant contact with surfaces rather than shredding them like a rotor system does. But flapping wings have their challenges too: Making flapping-wing robots is so much more difficult than just duct taping spinning motors to a frame that, with a few exceptions, we haven’t seen nearly as much improvement as we have in more conventional drones.
In Science Robotics last week, a group of roboticists from Singapore, Australia, China, and Taiwan described a new design for a flapping-wing robot that offers enough thrust and control authority to make stable transitions between aggressive flight modes—like flipping and diving—while also being able to efficiently glide and gently land. While still more complex than a quadrotor in both hardware and software, this ornithopter’s advantages might make it worthwhile.
One reason that making a flapping-wing robot is difficult is that the wings have to reciprocate at high speed while the electric motors driving them spin continuously. Bridging those two motions requires a relatively complex transmission system, which (if not designed carefully) leads to weight penalties and a significant loss of efficiency. One particular challenge is that the reciprocating mass of the wings tends to cause the entire robot to flex back and forth, which alternately binds and disengages elements in the transmission system.
The researchers’ new ornithopter design mitigates the flexing problem using hinges and bearings in pairs. Elastic elements also help improve efficiency, and the ornithopter is in fact more efficient with its flapping wings than it would be with a rotary propeller-based propulsion system. Its thrust exceeds the weight of its 26-gram body by 40 percent, which is where much of the aerobatic capability comes from. And one of the most surprising findings of this paper was that flapping-wing robots can actually be more efficient than propeller-based aircraft.
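As a back-of-envelope check on those numbers (our arithmetic, not the paper’s), a thrust-to-weight ratio of 1.4 on a 26-gram vehicle works out as follows:

```python
# Quick sanity check: what a thrust-to-weight ratio of 1.4 means
# for a 26-gram ornithopter.
g0 = 9.81                        # standard gravity, m/s^2
mass_kg = 0.026                  # 26 g vehicle
thrust_n = 1.4 * mass_kg * g0    # thrust exceeds weight by 40 percent
surplus_gf = 0.4 * 26            # surplus lift, in gram-force terms

print(round(thrust_n, 3), surplus_gf)  # ≈ 0.357 N of thrust, ≈ 10.4 gf spare
```

That spare 40 percent of thrust is the budget the robot spends on climbing, flipping, and recovering from aggressive maneuvers.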
It’s not just thrust that’s a challenge for ornithopters: Control is much more complex as well. Like birds, ornithopters have tails, but unlike birds, they have to rely almost entirely on tail control authority, not having that bird-level of control over fine wing movements. To make an acrobatic level of control possible, the tail control surfaces on this ornithopter are huge—the tail plane area is 35 percent of the wing area. The wings can also provide some assistance in specific circumstances, as by combining tail control inputs with a deliberate stall of the wings to allow the ornithopter to execute rapid flips.
With the ability to take off, hover, glide, land softly, maneuver acrobatically, fly quietly, and interact with its environment in a way that’s not (immediately) catastrophic, flapping-wing drones easily offer enough advantages to keep them interesting. Now that ornithopters have been shown to be even more efficient than rotorcraft, the researchers plan to focus on autonomy with the goal of moving their robot toward real-world usefulness.
“Efficient flapping wing drone arrests high-speed flight using post-stall soaring,” by Yao-Wei Chin, Jia Ming Kok, Yong-Qiang Zhu, Woei-Leong Chan, Javaan S. Chahl, Boo Cheong Khoo, and Gih-Keong Lau from Nanyang Technological University in Singapore, National University of Singapore, Defence Science and Technology Group in Canberra, Australia, Qingdao University of Technology in Shandong, China, University of South Australia in Mawson Lakes, and National Chiao Tung University in Hsinchu, Taiwan, was published in Science Robotics.