#438613 Video Friday: Digit Takes a Hike
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.
It's winter in Oregon, so everything is damp, all the time. No problem for Digit!
Also the case for summer in Oregon.
[ Agility Robotics ]
While other organisms form collective flocks, schools, or swarms for such purposes as mating, predation, and protection, the Lumbriculus variegatus worms are unusual in their ability to braid themselves together to accomplish tasks that unconnected individuals cannot. A new study reported by researchers at the Georgia Institute of Technology describes how the worms self-organize to act as entangled “active matter,” creating surprising collective behaviors whose principles have been applied to help blobs of simple robots evolve their own locomotion.
No, this doesn't squick me out at all, why would it.
[ Georgia Tech ]
A few years ago, we wrote about Zhifeng Huang's jet-foot-equipped bipedal robot, and he's been continuing to work on it to the point where it can now step over gaps that are an absolutely astonishing 147% of its leg length.
[ Paper ]
Thanks Zhifeng!
The Inception Drive is a novel, ultra-compact design for an Infinitely Variable Transmission (IVT) that uses nested pulleys to adjust the gear ratio between input and output shafts. This video shows the first proof-of-concept prototype for a “Fully Balanced” design, where the spinning masses within the drive are completely balanced to reduce vibration, thereby allowing the drive to operate more efficiently and at higher speeds than achievable on an unbalanced design.
As shown in this video, the Inception Drive can change both the speed and direction of rotation of the output shaft while keeping the direction and speed of the input shaft constant. This ability to adjust speed and direction within such a compact package makes the Inception Drive a compelling choice for machine designers in a wide variety of fields, including robotics, automotive, and renewable-energy generation.
[ SRI ]
Robots with kinematic loops are known to have superior mechanical performance. However, these loops make their modeling and control challenging, which has prevented more widespread use. In this paper, we describe a versatile Inverse Kinematics (IK) formulation for the retargeting of expressive motions onto mechanical systems with loops.
[ Disney Research ]
Watch Engineered Arts put together one of its Mesmer robots in a not at all uncanny way.
[ Engineered Arts ]
There's been a bunch of interesting research into vision-based tactile sensing recently; here's some from Van Ho at JAIST:
[ Paper ]
Thanks Van!
This is really more of an automated system than a robot, but these little levitating pucks are very very slick.
ACOPOS 6D is based on the principle of magnetic levitation: Shuttles with integrated permanent magnets float over the surface of electromagnetic motor segments. The modular motor segments are 240 x 240 millimeters in size and can be arranged freely in any shape. A variety of shuttle sizes carry payloads of 0.6 to 14 kilograms and reach speeds of up to 2 meters per second. They can move freely in two-dimensional space, rotate and tilt along three axes, and offer precise control over the height of levitation. Altogether, that gives them six degrees of freedom in motion control.
[ ACOPOS ]
Navigation and motion control of a robot to a destination are tasks that have historically been performed with the assumption that contact with the environment is harmful. This makes sense for rigid-bodied robots where obstacle collisions are fundamentally dangerous. However, because many soft robots have bodies that are low-inertia and compliant, obstacle contact is inherently safe. We find that a planner that takes into account and capitalizes on environmental contact produces paths that are more robust to uncertainty than a planner that avoids all obstacle contact.
[ CHARM Lab ]
The quadrotor experts at UZH have been really cranking it up recently.
Aerodynamic forces render accurate high-speed trajectory tracking with quadrotors extremely challenging. These complex aerodynamic effects become a significant disturbance at high speeds, introducing large positional tracking errors, and are extremely difficult to model. To fly at high speeds, feedback control must be able to account for these aerodynamic effects in real time. This necessitates a modelling procedure that is both accurate and efficient to evaluate. Therefore, we present an approach to model aerodynamic effects using Gaussian Processes, which we incorporate into a Model Predictive Controller to achieve efficient and precise real-time feedback control, leading to up to 70% reduction in trajectory tracking error at high speeds. We verify our method by extensive comparison to a state-of-the-art linear drag model in synthetic and real-world experiments at speeds of up to 14 m/s and accelerations beyond 4 g.
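The core idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the UZH authors' code) of fitting a Gaussian Process to the residual acceleration that a nominal quadrotor model fails to explain, then querying it for a correction inside the controller; the data, kernel, and length scale are made up for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical logged data: body-frame velocities (m/s) and the
# acceleration residual (measured minus nominal-model prediction).
velocities = np.random.uniform(-14.0, 14.0, size=(200, 3))
residuals = -0.05 * velocities * np.abs(velocities)  # stand-in drag-like data

# Fit one GP to the residual dynamics (per-axis GPs are also common).
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=5.0),
    alpha=1e-3,          # assumed observation noise level
    normalize_y=True,
)
gp.fit(velocities, residuals)

# Inside the control loop: predict the aerodynamic disturbance at a
# planned velocity and add it to the nominal dynamics model.
v_query = np.array([[10.0, 0.0, 1.5]])
drag_correction = gp.predict(v_query)
```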
[ Paper ]
I have not heard much from Harvest Automation over the last couple years and their website was last updated in 2016, but I guess they're selling robots in France, so that's good?
[ Harvest Automation ]
Last year, Clearpath Robotics introduced a ROS package for Spot which enables robotics developers to leverage ROS capabilities out-of-the-box. Here at OTTO Motors, we thought it would be a compelling test case to see just how easy it would be to integrate Spot into our test fleet of OTTO materials handling robots.
[ OTTO Motors ]
Video showcasing recent robotics activities at PRISMA Lab, coordinated by Prof. Bruno Siciliano, at Università di Napoli Federico II.
[ PRISMA Lab ]
Thanks Fan!
State estimation framework developed by team CoSTAR for the DARPA Subterranean Challenge, where the team placed second in the Tunnel circuit and first in the Urban circuit.
[ Paper ]
Highlights from the 2020 ROS Industrial conference.
[ ROS Industrial ]
Thanks Thilo!
Not robotics, but entertaining anyway. From the CHI 1995 Technical Video Program, “The Tablet Newspaper: a Vision for the Future.”
[ CHI 1995 ]
This week's GRASP on Robotics seminar comes from Allison Okamura at Stanford, on “Wearable Haptic Devices for Ubiquitous Communication.”
Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. We explore the design of a wide array of haptic feedback mechanisms, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality, human-machine communication, and human-human communication.
[ UPenn ]
#438074 A new bio-inspired joint model to design ...
Recent advances in the field of robotics have enabled the fabrication of increasingly sophisticated robotic limbs and exoskeletons. Robotic exoskeletons are essentially wearable 'shells' made of different robotic parts. Exoskeletons can improve the strength, capabilities and stability of users, helping them to tackle heavy physical tasks with less effort or aiding their rehabilitation after accidents.
#437978 How Mirroring the Architecture of the ...
While AI can carry out some impressive feats when trained on millions of data points, the human brain can often learn from a tiny number of examples. New research shows that borrowing architectural principles from the brain can help AI get closer to our visual prowess.
The prevailing wisdom in deep learning research is that the more data you throw at an algorithm, the better it will learn. And in the era of Big Data, that’s easier than ever, particularly for the large data-centric tech companies carrying out a lot of the cutting-edge AI research.
Today’s largest deep learning models, like OpenAI’s GPT-3 and Google’s BERT, are trained on billions of data points, and even more modest models require large amounts of data. Collecting these datasets and investing the computational resources to crunch through them is a major bottleneck, particularly for less well-resourced academic labs.
It also means today’s AI is far less flexible than natural intelligence. While a human only needs to see a handful of examples of an animal, a tool, or some other category of object to be able to pick it out again, most AI systems need to be trained on many examples of an object in order to be able to recognize it.
There is an active sub-discipline of AI research aimed at what is known as “one-shot” or “few-shot” learning, where algorithms are designed to be able to learn from very few examples. But these approaches are still largely experimental, and they can’t come close to matching the fastest learner we know—the human brain.
This prompted a pair of neuroscientists to see if they could design an AI that could learn from few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explained that the approach significantly boosts AI’s ability to learn new visual concepts from few examples.
“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” Maximilian Riesenhuber, from Georgetown University Medical Center, said in a press release. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
Several decades of neuroscience research suggest that the brain’s ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on little data. When it comes to visual understanding, this can rely on similarities of shape, structure, or color, but the brain can also leverage abstract visual concepts thought to be encoded in a brain region called the anterior temporal lobe (ATL).
“It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter,” said paper co-author Joshua Rule, from the University of California Berkeley.
The researchers decided to try to recreate this capability by using similar high-level concepts learned by an AI to help it quickly learn previously unseen categories of images.
Deep learning algorithms work by getting layers of artificial neurons to learn increasingly complex features of an image or other data type, which are then used to categorize new data. For instance, early layers will look for simple features like edges, while later ones might look for more complex ones like noses, faces, or even more high-level characteristics.
First, they trained the AI on 2.5 million images across 2,000 different categories from the popular ImageNet dataset. They then extracted features from various layers of the network, including the very last layer before the output layer. They refer to these as “conceptual features” because they are the highest-level features learned, and the most similar to the abstract concepts that might be encoded in the ATL.
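As a concrete illustration of that extraction step, here is a hedged sketch in PyTorch; the paper used its own network and training setup, so the ResNet backbone, weights, and shapes below are stand-ins, not the authors' configuration:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# An ImageNet-pretrained backbone stands in for the paper's trained network.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()  # drop the output head, exposing the last
                             # hidden layer: the "conceptual features"
backbone.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)  # a batch of preprocessed images
    features = backbone(images)           # shape: (8, 2048)
```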
They then used these different sets of features to train the AI to learn new concepts based on 2, 4, 8, 16, 32, 64, and 128 examples. They found that the AI trained on the conceptual features performed much better than those trained on lower-level features when examples were scarce, though the gap shrank as they were fed more training examples.
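A rough sketch of that evaluation loop, with hypothetical arrays standing in for the extracted features and novel categories, and a simple logistic-regression classifier as an assumed stand-in for the paper's learner:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical conceptual features and labels for unseen categories,
# extracted as in the previous sketch.
X = np.random.randn(2000, 2048)
y = np.random.randint(0, 10, size=2000)

for k in [2, 4, 8, 16, 32, 64, 128]:
    # Take k training examples per category; hold out the rest for testing.
    train_idx = np.concatenate([np.where(y == c)[0][:k] for c in np.unique(y)])
    test_idx = np.setdiff1d(np.arange(len(y)), train_idx)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    print(k, clf.score(X[test_idx], y[test_idx]))
```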
While the researchers admit the challenge they set their AI was relatively simple and only covers one aspect of the complex process of visual reasoning, they said that using a biologically plausible approach to solving the few-shot problem opens up promising new avenues in both neuroscience and AI.
“Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber said.
As the researchers note, the human visual system is still the gold standard when it comes to understanding the world around us. Borrowing from its design principles might turn out to be a profitable direction for future research.
Image Credit: Gerd Altmann from Pixabay