#438012 Video Friday: These Robots Have Made 1 ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
Let us know if you have suggestions for next week, and enjoy today's videos.
We're proud to announce Starship Delivery Robots have now completed 1,000,000 autonomous deliveries around the world. We were unsure where the one millionth delivery was going to take place, as there are around 15-20 service areas open globally, all with robots doing deliveries every minute. In the end it took place in Bowling Green, Ohio, to a student named Annika Keeton, a freshman studying pre-health biology at BGSU. Annika is now part of Starship’s history!
[ Starship ]
I adore this little DIY walking robot: with modular feet and little dials that let you easily adjust the walking parameters, it's an affordable kit that's way more nuanced than most.
It's called Bakiwi, and it costs €95. A squee cover made from feathers or fur is an extra €17. Here's a more serious look at what it can do:
[ Bakiwi ]
Thanks Oswald!
Savva Morozov, an AeroAstro junior, works on autonomous navigation for the MIT mini cheetah robot and reflects on the value of a crowded Infinite Corridor.
[ MIT ]
The world's most advanced haptic feedback gloves just got a huge upgrade! HaptX Gloves DK2 achieves a level of realism that other haptic devices can't match. Whether you’re training your workforce, designing a new product, or controlling robots from a distance, HaptX Gloves make it feel real.
They're the only gloves with true-contact haptics, using patented technology that displaces your skin the same way a real object would, with 133 points of tactile feedback per hand for full palm and fingertip coverage. HaptX Gloves DK2 feature the industry's most powerful force feedback, roughly twice the strength of other force feedback gloves. They're also the most accurate motion tracking gloves, with 30 tracked degrees of freedom, sub-millimeter precision, no perceivable latency, and no occlusion.
[ HaptX ]
Yardroid is an outdoor robot “guided by computer vision and artificial intelligence” that seems like it can do almost everything.
That's a lot of autonomous capability, but so far we've only seen the video, so it's best not to get too excited until we know more about how it works.
[ Yardroid ]
Thanks Dan!
Since Pepper can't spread COVID (as far as we know), it had a busy year.
I somehow missed seeing that chimpanzee magic show, but here it is:
[ Simon Pierro ] via [ SoftBank Robotics ]
In spite of the pandemic, Professor Hod Lipson’s Robotics Studio persevered and even thrived, learning to work on global teams, to develop protocols for sharing blueprints and code, and to test, evaluate, and refine their designs remotely. Equipped with a 3D printer and a kit of electronics prototyping equipment, our students engineered bipedal robots that were conceptualized, fabricated, programmed, and endlessly iterated around the globe in bedrooms, kitchens, backyards, and any other makeshift laboratory you can imagine.
[ Hod Lipson ]
Thanks Fan!
We all know how much quadrupeds love ice!
[ Ghost Robotics ]
We took advantage of the last storm to put the Warthog in the snow at Université Laval. Enjoy!
[ Norlab ]
They've got a long way to go, but autonomous indoor firefighting drones seem like a fantastic idea.
[ CTU ]
Individual manipulators are limited by their total vertical load capacity, which places a fundamental limit on the weight of loads that a single manipulator can move. Cooperative manipulation with two arms has the potential to increase the net weight capacity of the overall system. However, it is critical that proper load sharing takes place between the two arms. In this work, we outline a method that utilizes mechanical intelligence in the form of a whiffletree.
And your word of the day is whiffletree, which is “a mechanism to distribute force evenly through linkages.”
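To make the load-sharing idea concrete, here's a toy statics calculation (my own illustration, not from the DART Lab work): for an ideal rigid whiffletree bar, where the payload hangs along the bar sets each arm's share of the total load.

```python
def whiffletree_split(total_load, dist_to_arm1, dist_to_arm2):
    """Static load split for an ideal (massless, rigid) whiffletree bar.

    The payload hangs from a pivot located dist_to_arm1 from arm 1's
    attachment point and dist_to_arm2 from arm 2's. Balancing moments
    about the pivot gives each arm's share of the total load.
    """
    span = dist_to_arm1 + dist_to_arm2
    f1 = total_load * dist_to_arm2 / span  # the arm nearer the pivot carries more
    f2 = total_load * dist_to_arm1 / span
    return f1, f2

# Centered pivot: a 40 kg payload splits evenly between the arms.
print(whiffletree_split(40.0, 0.5, 0.5))  # (20.0, 20.0)
# Off-center pivot: the nearer arm shoulders proportionally more.
print(whiffletree_split(40.0, 0.2, 0.8))  # (32.0, 8.0)
```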
[ DART Lab ]
Thanks Raymond!
Some highlights of robotics projects at FZI in 2020, all using ROS.
[ FZI ]
Thanks Fan!
iRobot CEO Colin Angle threatens my job by sharing some cool robots.
[ iRobot ]
A fascinating new talk from Henry Evans on robotic caregivers.
[ HRL ]
The ANA Avatar XPRIZE semifinals selection submission from Team AVATRINA. The setting is a mock clinic, with the patient sitting in a wheelchair and a nurse having completed an initial intake. The avatar, controlled by an operator playing the role of the doctor, enters the room. A rolling tray table with medical supplies (stethoscope, pulse oximeter, digital thermometer, oxygen mask, oxygen tube) is by the patient’s side. The demo shows head tracking, stereo vision, fine manipulation, bimanual manipulation, safe impedance control, and navigation.
[ Team AVATRINA ]
This five-year-old talk from Mikell Taylor, who wrote for us a while back and is now at Amazon Robotics, is entitled “Nobody Cares About Your Robot.” For better or worse, it really doesn't sound like it was written five years ago.
Robotics for the consumer market – Mikell Taylor from Scott Handsaker on Vimeo.
[ Mikell Taylor ]
Fall River Community Media presents this wonderful guy talking about his love of antique robot toys.
If you enjoy this kind of slow media, Fall River also has weekly Hot Dogs Cool Cats adoption profiles that are super relaxing to watch.
[ YouTube ]
#437964 How Explainable Artificial Intelligence ...
The field of artificial intelligence has created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at a superhuman level.
However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.
Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.
Learning From Experience
One field of AI, called reinforcement learning, studies how computers can learn from their own experiences. In reinforcement learning, an AI explores the world, receiving positive or negative feedback based on its actions.
This approach has led to algorithms that have independently learned to play chess at a superhuman level and prove mathematical theorems without any human guidance. In my work as an AI researcher, I use reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik’s Cube.
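To make the feedback loop concrete, here is a minimal tabular Q-learning sketch on a toy corridor world (a generic illustration of reinforcement learning, not the training code behind any of the systems above): the agent improves its value estimates purely from reward signals.

```python
import random

# Toy corridor: states 0..4, start at 0, reward only at the far end.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise act greedily on current estimates.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda m: q[(s, m)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0  # the only feedback the agent gets
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(s_next, m)] for m in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy steps right from every state: [1, 1, 1, 1].
print([max(ACTIONS, key=lambda m: q[(s, m)]) for s in range(N_STATES - 1)])
```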
Through reinforcement learning, AIs are independently learning to solve problems that even humans struggle to figure out. This has got me and many other researchers thinking less about what AI can learn and more about what humans can learn from AI. A computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.
Peering Into the Black Box
Unfortunately, the minds of superhuman AIs are currently out of reach for us humans. AIs make terrible teachers and are what we in the computer science world call “black boxes.”
An AI simply spits out solutions without giving reasons for them. Computer scientists have been trying for decades to open this black box, and recent research has shown that many AI algorithms actually do think in ways that are similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify the animal.
The effort to open up the black box is called explainable AI. My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube.
The Rubik’s Cube is basically a pathfinding problem: Find a path from point A—a scrambled Rubik’s Cube—to point B—a solved Rubik’s Cube. Other pathfinding problems include navigation, theorem proving and chemical synthesis.
My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions.
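To see what pathfinding means here in code terms, consider a generic best-first (A*-style) search skeleton. This is a simplified sketch, not the lab's actual solver; `neighbors` and `heuristic` are assumed placeholders for a real cube model.

```python
import heapq
from itertools import count

def solve(start, is_goal, neighbors, heuristic):
    """Search for a move sequence from a scrambled state to a solved one.

    neighbors(state) yields (move, next_state) pairs; heuristic(state)
    estimates the number of moves remaining. Both are placeholders here.
    """
    tie = count()  # breaks priority ties so states themselves are never compared
    frontier = [(heuristic(start), next(tie), 0, start, [])]
    seen = {start}
    while frontier:
        _, _, cost, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path  # the move sequence is the "path from point A to point B"
        for move, nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                priority = cost + 1 + heuristic(nxt)  # cost so far + estimated cost to go
                heapq.heappush(frontier, (priority, next(tie), cost + 1, nxt, path + [move]))
    return None  # no solution found (shouldn't happen for a legal scramble)
```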
Solutions to the Rubik’s Cube can be broken down into a few generalized steps: the first step, for example, could be to form a cross, while the second step could be to put the corner pieces in place. While the Rubik’s Cube itself has over 10^19 possible combinations, a generalized step-by-step guide is very easy to remember and is applicable in many different scenarios.
Approaching a problem by breaking it down into steps is often the default manner in which people explain things to one another. The Rubik’s Cube naturally fits into this step-by-step framework, which gives us the opportunity to open the black box of our algorithm more easily. Creating AI algorithms that have this ability could allow people to collaborate with AI and break down a wide variety of complex problems into easy-to-understand steps.
A step-by-step refinement approach can make it easier for humans to understand why AIs do the things they do. (Image credit: Forest Agostinelli, CC BY-ND)
Collaboration Leads to Innovation
Our process starts with using one’s own intuition to define a step-by-step plan thought to potentially solve a complex problem. The algorithm then looks at each individual step and gives feedback about which steps are possible, which are impossible and ways the plan could be improved. The human then refines the initial plan using the advice from the AI, and the process repeats until the problem is solved. The hope is that the person and the AI will eventually converge to a kind of mutual understanding.
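In schematic form, the loop might look like this (my own pseudocode for the process just described; `propose_plan`, `critique`, and `refine` are hypothetical stand-ins for the human's intuition and the AI's feedback):

```python
def collaborate(problem, human, ai, max_rounds=10):
    """Schematic of the human-AI plan-refinement loop described above."""
    plan = human.propose_plan(problem)  # intuition produces a step-by-step draft
    for _ in range(max_rounds):
        feedback = ai.critique(plan)  # which steps are possible, which are not
        if feedback.all_steps_possible and feedback.reaches_goal:
            return plan  # a working plan a person can understand
        plan = human.refine(plan, feedback)  # fold the AI's advice into the next draft
    return plan  # best draft so far; the loop can continue from here
```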
Currently, our algorithm is able to consider a human plan for solving the Rubik’s Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik’s Cube that a person can understand. Our team’s next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik’s Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.
People are intuitive in a way unmatched by any AI, but machines far exceed us in computational power and algorithmic rigor. This back-and-forth between human and machine draws on the strengths of both. I believe this type of collaboration will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions, and innovations that may otherwise have been out of reach.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Serg Antonov / Unsplash
#437935 Start the New Year Right: By Watching ...
I don’t need to tell you that 2020 was a tough year. There was almost nothing good about it, and we saw it off with a “good riddance” and hopes for a better 2021. But robotics company Boston Dynamics took a different approach to closing out the year: when all else fails, why not dance?
The company released a video last week that I dare you to watch without laughing—or at the very least, cracking a pretty big smile. Because, well, dancing robots are funny. And it’s not just one dancing robot, it’s four of them: two humanoid Atlas bots, one four-legged Spot, and one Handle, a bot-on-wheels built for materials handling.
The robots’ killer moves look almost too smooth and coordinated to be real, leading many to speculate that the video was computer-generated. But if you can trust Elon Musk, there’s no CGI here.
This is not CGI https://t.co/VOivE97vPR
— Elon Musk (@elonmusk) December 29, 2020
Boston Dynamics has gone through a lot of changes in the last ten years: it was acquired by Google in 2013, then sold to Japanese conglomerate SoftBank in 2017, before being acquired again by Hyundai just a few weeks ago for $1.1 billion. But this isn't the first time they've taught a robot to dance and made a video for all the world to enjoy; Spot tore up the floor to “Uptown Funk” back in 2018.
Four-legged Spot went commercial in June, with a hefty price tag of $74,500, and was put to some innovative pandemic-related uses, including remotely measuring patients’ vital signs and reminding people to social distance.
Hyundai plans to implement its newly-acquired robotics prowess for everything from service and logistics robots to autonomous driving and smart factories.
They’ll have their work cut out for them. Besides being hilarious, kind of heartwarming, and kind of creepy all at once, the robots’ new routine is pretty impressive from an engineering standpoint. Compare it to a 2016 video of Atlas trying to pick up a box (I know it’s a machine with no feelings, but it’s hard not to feel a little bit bad for it, isn’t it?), and it’s clear Boston Dynamics’ technology has made huge strides. It wouldn’t be surprising if, in two years’ time, we see a video of a flash mob of robots whose routine includes partner dancing and backflips (which, admittedly, Atlas can already do).
In the meantime, though, this one is pretty entertaining—and not a bad note on which to start the new year.
Image Credit: Boston Dynamics