Robot Gift Guide 2019
Welcome to the eighth edition of IEEE Spectrum’s Robot Gift Guide!
This year we’re featuring 15 robotic products that we think will make fantastic holiday gifts. As always, we tried to include a broad range of robot types and prices, focusing mostly on items released this year. (A reminder: While we provide links to places where you can buy these items, we’re not endorsing any in particular, and a little bit of research may result in better deals.)
If you need even more robot gift ideas, take a look at our past guides: 2018, 2017, 2016, 2015, 2014, 2013, and 2012. Some of those robots are still great choices and might be way cheaper now than when we first posted about them. And if you have suggestions that you’d like to share, post a comment below to help the rest of us find the perfect robot gift.
Skydio 2
Image: Skydio
What makes robots so compelling is their autonomy, and the Skydio 2 is one of the most autonomous robots we’ve ever seen. It uses an array of cameras to map its environment and avoid obstacles in real time, making flight safe and effortless and enabling the kinds of shots that would be impossible otherwise. Seriously, this thing is magical, and it’s amazing that you can actually buy one.
$1,000
Skydio
UBTECH Jimu MeeBot 2
Image: UBTECH
The Jimu MeeBot 2.0 from UBTECH is a STEM education robot designed to be easy to build and program. It includes six servo motors, a color sensor, and LED lights. An app for iPhone or iPad provides step-by-step 3D instructions, and helps you code different behaviors for the robot. It’s available exclusively from Apple.
$130
Apple
iRobot Roomba s9+
Image: iRobot
We know that $1,400 is a crazy amount of money to spend on a robot vacuum, but the Roomba s9+ is a crazy robot vacuum. As if all of its sensors and mapping intelligence weren’t enough, it empties itself, which means that you can have your floors vacuumed every single day for a month and you don’t even have to think about it. This is what home robots are supposed to be.
$1,400
iRobot
PFF Gita
Photo: Piaggio Fast Forward
Nobody likes carrying things, which is why Gita is perfect for everyone with an extra $3,000 lying around. Developed by Piaggio Fast Forward, this autonomous robot will follow you around with a cargo hold full of your most important stuff, and do it in a way guaranteed to attract as much attention as possible.
$3,250
Gita
DJI Mavic Mini
Photo: DJI
It’s tiny, it’s cheap, and it takes good pictures—what more could you ask for from a drone? And for $400, this is an excellent drone to get if you’re on a budget and comfortable with manual flight. Keep in mind that while the Mavic Mini is small enough that you don’t need to register it with the FAA, you do still need to follow all the same rules and regulations.
$400
DJI
LEGO Star Wars Droid Commander
Image: LEGO
Designed for kids ages 8+, this LEGO set includes more than 1,000 pieces, enough to build three different droids: R2-D2, Gonk Droid, and Mouse Droid. Using a Bluetooth-controlled robotic brick called Move Hub, which connects to the LEGO BOOST Star Wars app, kids can change how the robots behave and solve challenges, learning basic robotics and coding skills.
$200
LEGO
Sony Aibo
Photo: Sony
Robot pets don’t get much more sophisticated (or expensive) than Sony’s Aibo. Strictly speaking, it’s one of the most complex consumer robots you can buy, and Sony continues to add to Aibo’s software. Recent new features include user programmability, and the ability to “feed” it.
$2,900 (free aibone and paw pads until 12/29/2019)
Sony
Neato Botvac D4 Connected
Photo: Neato
The Neato Botvac D4 may not have all of the features of its fancier and more expensive siblings, but it does have the features that you probably care the most about: The ability to make maps of its environment for intelligent cleaning (using lasers!), along with user-defined no-go lines that keep it where you want it. And it cleans quite well, too.
$350 (on sale; regularly $530)
Neato Robotics
Cubelets Curiosity Set
Photo: Modular Robotics
Cubelets are magnetic blocks that you can snap together to make an endless variety of robots with no programming and no wires. The newest set, called Curiosity, is designed for kids ages 4+ and comes with 10 robotic cubes. These include light and distance sensors, motors, and a Bluetooth module, which connects the robot constructions to the Cubelets app.
$250
Modular Robotics
Tertill
Photo: Franklin Robotics
Tertill does one simple job: It weeds your garden. It’s waterproof, dirtproof, solar-powered, and fully autonomous, meaning that you can leave it out in your garden all summer and just enjoy eating your plants rather than taking care of them.
$350
Tertill
iRobot Root
Photo: iRobot
Root was originally developed at Harvard University as a tool to help kids progressively learn to code. iRobot has taken over Root and now supports the curriculum, which starts before kids can even read and should keep them busy for years afterward.
$200
iRobot
LOVOT
Image: Lovot
Let’s be honest: Nobody is really quite sure what LOVOT is. We can all agree that it’s kinda cute, though. And kinda weird. But cute. Created by Japanese robotics startup Groove X, LOVOT does have a whole bunch of tech packed into its bizarre little body and it will do its best to get you to love it.
$2,750 (¥300,000)
LOVOT
Sphero RVR
Photo: Sphero
RVR is a rugged, versatile, easy-to-program mobile robot. It’s a development platform designed to be a bridge between educational robots like Sphero and more sophisticated and expensive systems like Misty. It’s mostly affordable, very expandable, and comes from a company with a lot of experience making robots.
$250
Sphero
“How to Train Your Robot”
Image: Lawrence Hall of Science
Aimed at 4th and 5th graders, “How to Train Your Robot,” written by Blooma Goldberg, Ken Goldberg, and Ashley Chase, and illustrated by Dave Clegg, is a perfect introduction to robotics for kids who want to get started with designing and building robots. But the book isn’t just for beginners: It’s also a fun, inspiring read for kids who are already into robotics and want to go further—it even introduces concepts like computer simulations and deep learning. You can download a free digital copy or request hardcopies here.
Free
UC Berkeley
MIT Mini Cheetah
Photo: MIT
Yes, Boston Dynamics’ Spot, now available for lease, is probably the world’s most famous quadruped, but MIT is starting to pump out Mini Cheetahs en masse for researchers. While we’re not exactly sure how you’d manage to get one of these things short of stealing one directly from MIT, a Mini Cheetah is our fantasy robotics gift this year. Mini Cheetah looks like a ton of fun—it’s portable, highly dynamic, super rugged, and easy to control. We want one!
Price N/A
MIT Biomimetic Robotics Lab
For more tech gift ideas, see also IEEE Spectrum’s annual Gift Guide.
Bipedal Robot Cassie Cal Learns to ...
There’s no particular reason why knowing how to juggle would be a useful skill for a robot. Despite this, robots are frequently taught how to juggle things. Blind robots can juggle, humanoid robots can juggle, and even drones can juggle. Why? Because juggling is hard, man! You have to think about a bunch of different things at once, and also do a bunch of different things at once, which this particular human at least finds to be overly stressful. While juggling may not stress robots out, it does require carefully coordinated sensing and computing and actuation, which means that it’s as good a task as any (and a more entertaining task than most) for testing the capabilities of your system.
UC Berkeley’s Cassie Cal robot, which consists of two legs and what could be called a torso if you were feeling charitable, has just learned to juggle by bouncing a ball on what would be her head if she had one of those. The idea is that if Cassie can juggle while balancing at the same time, she’ll be better able to do other things that require dynamic multitasking, too. And if that doesn’t work out, she’ll still be able to join the circus.
Cassie’s juggling is assisted by an external motion capture system that tracks the location of the ball, but otherwise everything is autonomous. Cassie is able to juggle the ball by leaning forwards and backwards, left and right, and moving up and down. She does this while maintaining her own balance, which is the whole point of this research—successfully executing two dynamic behaviors that may sometimes be at odds with one another. The end goal here is not to make a better juggling robot, but rather to explore dynamic multitasking, a skill that robots will need in order to be successful in human environments.
This work is from the Hybrid Robotics Lab at UC Berkeley, led by Koushil Sreenath, and is being done by Katherine Poggensee, Albert Li, Daniel Sotsaikich, Bike Zhang, and Prasanth Kotaru.
For a bit more detail, we spoke with Albert Li via email.
Image: UC Berkeley
UC Berkeley’s Cassie Cal getting ready to juggle.
IEEE Spectrum: What would be involved in getting Cassie to juggle without relying on motion capture?
Albert Li: Our motivation for starting off with motion capture was to first address the control challenge of juggling on a biped without worrying about implementing the perception. We actually do have a ball detector working on a camera, which would mean we wouldn’t have to rely on the motion capture system. However, we need to mount the camera in a way that provides the best upward field of view, and we also have to develop a reliable estimator. The estimator is particularly important because when the ball gets close enough to the camera, we actually can’t track the ball and have to assume our dynamic models describe its motion accurately enough until it bounces back up.
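Neither the interview nor the paper includes code, but the fallback Li describes—trusting a dynamic model of the ball whenever the camera can’t see it—can be illustrated with a minimal sketch. Everything below (the function names, the constant-gravity model, the simple blending gain) is a hypothetical illustration, not the Hybrid Robotics Lab’s implementation.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2), z is up

def predict_ballistic(pos, vel, dt):
    """Propagate the ball state with a drag-free constant-gravity model."""
    return pos + vel * dt + 0.5 * G * dt**2, vel + G * dt

def estimator_step(pos, vel, measurement, dt, alpha=0.5):
    """One estimator update: always predict with the model, and correct with
    the camera (or mocap) measurement only when the ball is actually tracked."""
    pos, vel = predict_ballistic(pos, vel, dt)
    if measurement is not None:          # ball visible: blend in the measurement
        innovation = measurement - pos
        pos = pos + alpha * innovation
        vel = vel + (alpha / dt) * innovation
    # measurement is None when the ball is too close to the camera to track;
    # in that case we simply trust the ballistic prediction until it reappears.
    return pos, vel
```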
What keeps Cassie from juggling indefinitely?
There are a few factors that affect how long Cassie can sustain a juggle. While in simulation the paddle exhibits homogeneous properties like its stiffness and damping, in reality every surface has anisotropic contact properties. So, there are parts of the paddle which may be better for juggling than others (and importantly, react differently than modeled). These differences in contact are also exacerbated due to how the paddle is cantilevered when mounted on Cassie. When the ball hits these areas, it leads to a larger than expected error in a juggle. Due to the small size of the paddle, the ball may then just hit the paddle’s edge and end the juggling run. Over a very long run, this is a likely occurrence. Additionally, some large juggling errors could cause Cassie’s feet to slip slightly, which ends up changing the stable standing position over time. Since this version of the controller assumes Cassie is stationary, this change in position eventually leads to poor juggles and failure.
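The “homogeneous stiffness and damping” Li mentions is the kind of paddle model you could write in a few lines. The sketch below is a generic spring-damper contact force with made-up constants, purely to illustrate the uniform behavior the simulation assumes and the real, anisotropic paddle violates; it is not the model used in the paper.

```python
def paddle_normal_force(penetration, penetration_rate, k=2000.0, b=15.0):
    """Spring-damper contact model with uniform (homogeneous) properties.

    penetration:      how far the ball compresses the paddle surface (m, >= 0)
    penetration_rate: time derivative of the penetration (m/s)
    k, b:             stiffness and damping, assumed identical everywhere on
                      the paddle (placeholder values, not measured ones)
    """
    if penetration <= 0.0:
        return 0.0                                            # no contact
    return max(k * penetration + b * penetration_rate, 0.0)   # contact only pushes
```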
Would Cassie be able to juggle while walking (or hovershoe-ing)?
Walking (and hovershoe-ing) while juggling is a far more challenging problem and is certainly a goal for future research. Some of these challenges include getting the paddle to precise poses to juggle the ball while also moving to avoid any destabilizing effects of stepping incorrectly. The number of juggles per step of walking could also vary and make the mathematics of the problem more challenging. The controller goal is also more involved. While the current goal of the juggling controller is to juggle the ball to a static apex position, with a walking juggling controller, we may instead want to hit the ball forwards and also walk forwards to bounce it, juggle the ball along a particular path, etc. Solving such challenges would be the main thrusts of the follow-up research.
Can you give an example of a practical task that would be made possible by using a controller like this?
Studying juggling means studying contact behavior and leveraging our models of it to achieve a known objective. Juggling could also be used to study predictable post-contact flight behavior. Consider the scenario where a robot is attempting to make a catch, but fails, lets the ball bounce off its hand, and then recovers the catch. This behavior could also be intentional: It is often easier to first execute a bounce to direct the target and then perform a subsequent action. For example, volleyball players could in principle directly hit a spiked ball back, but almost always bump the ball back up and then return it.
Even beyond this motivating example, the kinds of models we employ to get juggling working are more generally applicable to any task that involves contact, which could include tasks besides bouncing, like sliding and rolling. For example, clearing space on a desk by pushing objects to the side may be preferable to individually manipulating each and every object on it.
You mention collaborative juggling or juggling multiple balls—is that something you’ve tried yet? Can you talk a bit more about what you’re working on next?
We haven’t yet started working on collaborative or multi-ball juggling, but that’s also a goal for future work. Juggling multiple balls statically is probably the most reasonable next goal, but presents additional challenges. For instance, you have to encode a notion of juggling urgency (if the second ball isn’t hit hard enough, you have less time to get the first ball up before you get back to the second one).
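The “urgency” Li describes is essentially projectile arithmetic: a ball hit straight up at speed v comes back down after roughly 2v/g seconds, so a softer hit on ball two leaves less time to service ball one. A toy calculation (with made-up launch speeds, not values from the experiments) makes that concrete:

```python
G = 9.81  # m/s^2

def time_aloft(launch_speed):
    """Time for a ball hit straight up at launch_speed (m/s) to return to the
    paddle height, ignoring drag: t = 2 * v / g."""
    return 2.0 * launch_speed / G

# Hit ball two firmly and the robot has ~0.8 s to deal with ball one;
# hit it half as hard and the budget shrinks to ~0.4 s.
print(time_aloft(4.0))  # ~0.82 s
print(time_aloft(2.0))  # ~0.41 s
```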
On the other hand, collaborative human-robot juggling requires a more advanced decision-making framework. To get robust multi-agent juggling, the robot will need to employ some sort of probabilistic model of the expected human behavior (are they likely to move somewhere? Are they trying to catch the ball high or low? Is it safe to hit the ball back?). In general, developing such human models is difficult since humans are fairly unpredictable and often don’t exhibit rational behavior. This will be a focus of future work.
[ Hybrid Robotics Lab ]
This MIT Robot Wants to Use Your ...
MIT researchers have demonstrated a new kind of teleoperation system that allows a two-legged robot to “borrow” a human operator’s physical skills to move with greater agility. The system works a bit like those haptic suits from the Spielberg movie “Ready Player One.” But while the suits in the film were used to connect humans to their VR avatars, the MIT suit connects the operator to a real robot.
The robot is called Little HERMES, and it’s currently just a pair of little legs, about a third the size of an average adult. It can step and jump in place or walk a short distance while supported by a gantry. While that in itself is not very impressive, the researchers say their approach could help bring capable disaster robots closer to reality. They explain that, despite recent advances, building fully autonomous robots with motor and decision-making skills comparable to those of humans remains a challenge. That’s where a more advanced teleoperation system could help.
The researchers, João Ramos, now an assistant professor at the University of Illinois at Urbana-Champaign, and Sangbae Kim, director of MIT’s Biomimetic Robotics Lab, describe the project in this week’s issue of Science Robotics. In the paper, they argue that existing teleoperation systems often can’t effectively match the operator’s motions to those of a robot. In addition, conventional systems provide no physical feedback to the human teleoperator about what the robot is doing. Their new approach addresses these two limitations, and to see how it would work in practice, they built Little HERMES.
Image: Science Robotics
The main components of MIT’s bipedal robot Little HERMES: (A) Custom actuators designed to withstand impact and capable of producing high torque. (B) Lightweight limbs with low inertia and fast leg swing. (C) Impact-robust and lightweight foot sensors with a three-axis contact force sensor. (D) Ruggedized IMU to estimate the robot’s torso posture, angular rate, and linear acceleration. (E) Real-time computer sbRIO 9606 from National Instruments for robot control. (F) Two three-cell lithium-polymer batteries in series. (G) Rigid and lightweight frame to minimize the robot’s mass.
Early this year, the MIT researchers wrote an in-depth article for IEEE Spectrum about the project, which includes Little HERMES and also its big brother, HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System). In that article, they describe the two main components of the system:
[…] We are building a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot’s motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator’s innate motor skills and split-second reflexes to keep its footing.
You could say we’re putting a human brain inside the machine.
Image: Science Robotics
The human-machine interface built by the MIT researchers for controlling Little HERMES is different from conventional ones in that it relies on the operator’s reflexes to improve the robot’s stability. The researchers call it the balance-feedback interface, or BFI. The main modules of the BFI include: (A) Custom interface attachments for the torso and feet designed to capture human motion data at high speed (1 kHz). (B) Two underactuated modules to track the position and orientation of the torso and apply forces to the operator. (C) Each actuation module has three DoFs, one of which is a push/pull rod actuated by a DC brushless motor. (D) A series of linkages with passive joints connected to the operator’s feet to track their spatial translation. (E) Real-time controller cRIO 9082 from National Instruments to close the BFI control loop. (F) Force plate to estimate the operator’s center-of-pressure position and measure the shear and normal components of the operator’s net contact force.
Here’s more footage of the experiments, showing Little HERMES stepping and jumping in place, walking a few steps forward and backward, and balancing. Watch until the end to see a compilation of unsuccessful stepping experiments. Poor Little HERMES!
In the new Science Robotics paper, the MIT researchers explain how they solved one of the key challenges in making their teleoperation system effective:
The challenge of this strategy lies in properly mapping human body motion to the machine while simultaneously informing the operator how closely the robot is reproducing the movement. Therefore, we propose a solution for this bilateral feedback policy to control a bipedal robot to take steps, jump, and walk in synchrony with a human operator. Such dynamic synchronization was achieved by (i) scaling the core components of human locomotion data to robot proportions in real time and (ii) applying feedback forces to the operator that are proportional to the relative velocity between human and robot.
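The two ingredients quoted above—scaling human locomotion data to robot proportions and feeding back a force proportional to the human-robot velocity mismatch—can be sketched in a few lines. The gain, the choice of height ratio as the scaling factor, and the sign convention below are all assumptions made for illustration; the paper defines the actual mapping and the BFI’s force law in detail.

```python
import numpy as np

def human_to_robot_reference(human_com_pos, human_com_vel, human_height, robot_height):
    """(i) Scale the operator's center-of-mass motion to robot proportions."""
    scale = robot_height / human_height          # crude geometric scaling (assumption)
    return human_com_pos * scale, human_com_vel * scale

def bfi_feedback_force(human_com_vel, robot_com_vel, human_height, robot_height, gain=60.0):
    """(ii) Force applied to the operator's torso, proportional to the relative
    velocity between the scaled human reference and the robot (gain is a placeholder)."""
    _, ref_vel = human_to_robot_reference(np.zeros(2), human_com_vel,
                                          human_height, robot_height)
    return gain * (robot_com_vel - ref_vel)

# Example: the operator lunges forward at 1.0 m/s but the robot only manages 0.2 m/s.
# The force comes out negative (backward), so the operator feels the robot lagging
# and instinctively slows down, closing the loop described in the article.
force = bfi_feedback_force(np.array([1.0, 0.0]), np.array([0.2, 0.0]),
                           human_height=1.7, robot_height=0.6)
print(force)  # roughly [-9.2, 0.0] with these placeholder numbers
```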
Little HERMES is now taking its first steps, quite literally, but the researchers say they hope to use robotic legs with a similar design as part of a more advanced humanoid. One possibility they’ve envisioned is a fast-moving quadruped robot that could run through various kinds of terrain and then transform into a bipedal robot that would use its hands to perform dexterous manipulations. This could involve merging some of the robots the MIT researchers have built in their lab, possibly creating hybrids between Cheetah and HERMES, or Mini Cheetah and Little HERMES. We can’t wait to see what the resulting robots will look like.
[ Science Robotics ]