Video Friday: Kiki Is a New Social Robot ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
The DARPA Subterranean Challenge tunnel circuit takes place in just a few weeks, and we’ll be there!
[ DARPA SubT ]
In this time-lapse video, the robotic arm on NASA’s Mars 2020 rover handily maneuvers an 88-pound (40-kilogram) sensor-laden turret as it moves from a deployed to a stowed configuration.
If you haven’t read our interview with Matt Robinson, now would be a great time, since he’s one of the folks at JPL who designed this arm.
[ Mars 2020 ]
Kiki is a small, white, stationary social robot with an evolving personality that promises to be your friend. It costs $800 and is currently on Kickstarter.
The Kickstarter page is filled with the same type of overpromising that we’ve seen with other (now very dead) social robots: Kiki is “conscious,” “understands your feelings,” and “loves you back.” Oof. That said, we’re happy to see more startups trying to succeed in this space, which is certainly one of the toughest in consumer electronics, and hopefully they’ve been learning from the recent string of failures. And we have to say Kiki is a cute robot. Its overall design, especially the body mechanics and expressive face, looks neat. And kudos to the team—the company was founded by two ex-Googlers, Mita Yun and Jitu Das—for including the “unedited prototype videos,” which help counterbalance the hype.
Another thing that Kiki has going for it is that everything runs on the robot itself. This simplifies privacy and means that the robot won’t partially die on you if the company behind it goes under, but it also limits how clever the robot will be able to be. The Kickstarter campaign is already over a third funded, so… we’ll see.
[ Kickstarter ]
When your UAV isn’t enough UAV, you put a UAV on your UAV.
[ CanberraUAV ]
ABB’s YuMi is testing ATMs because a human trying to do this task would go broke almost immediately.
[ ABB ]
DJI has a fancy new FPV system that features easy setup, digital HD streaming at up to 120 FPS, and <30ms latency.
If it looks expensive, that’s because it costs $930 with the remote included.
[ DJI ]
Honeybee Robotics has recently developed a regolith excavation and rock cleaning system for NASA JPL’s PUFFER rovers. This system, called POCCET (PUFFER-Oriented Compact Cleaning and Excavation Tool), uses compressed gas to perform all excavation and cleaning tasks. Weighing less than 300 grams with potential for further mass reduction, POCCET can be used not just on the Moon, but on other Solar System bodies such as asteroids, comets, and even Mars.
[ Honeybee Robotics ]
DJI’s 2019 RoboMaster tournament, which takes place this month in Shenzhen, looks like it’ll be fun to watch, with plenty of action and rules that are easy to understand.
[ RoboMaster ]
Robots and baked goods are an automatic Video Friday inclusion.
Wow, I want a cupcake right now.
[ Soft Robotics ]
The ICRA 2019 Best Paper Award went to Michelle A. Lee at Stanford, for “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks.”
The ICRA video is here, and you can find the paper at the link below.
[ Paper ] via [ RoboHub ]
Cobalt Robotics put out a bunch of marketing-y videos this week, but this one is reasonably interesting, even if you’re familiar with what they’re doing over there.
[ Cobalt Robotics ]
RightHand Robotics launched RightPick2 with a gala event which looked like fun as long as you were really, really into robots.
[ RightHand Robotics ]
Thanks Jeff!
This video presents a framework for whole-body control applied to the assistive robotic system EDAN. We show how the proposed method can be used for a task like opening, passing through, and closing a door. We also demonstrate the efficiency of whole-body coordination when controlling the end-effector with respect to a fixed reference, and show how easily the system can be manually maneuvered by direct interaction with the end-effector, without the need for an extra input device.
[ DLR ]
You’ll probably need to turn on auto-translated subtitles for most of this, but it’s worth it for the adorable little single-seat robotic car designed to help people get around airports.
[ ZMP ]
In this week’s episode of Robots in Depth, Per speaks with Gonzalo Rey from Moog about their fancy 3D printed integrated hydraulic actuators.
Gonzalo talks about how Moog got started with hydraulic control, taking part in the space program and early robotics development. He shares how Moog’s technology is used in fly-by-wire systems in aircraft and in flow control in deep space probes. They have even reached Mars.
[ Robots in Depth ]
New Double 3 Robot Makes Telepresence ...
Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One of the things that we found out back then was that it takes a lot of practice to remotely drive the robot around. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving their new robot a level of intelligence that promises to make telepresence more accessible for everyone.
Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.
It’s those new depth sensors that really make Double 3 special. The D430 modules each use a pair of stereo cameras with a pattern projector to generate 1280 x 720 depth data at ranges of 0.2 to 10 meters. The Double 3 robot uses all of this high-quality depth data to locate obstacles, but at this point, it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented-reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there while avoiding obstacles (including dynamic obstacles) and related mishaps along the way.
This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That makes it that much easier to encourage people to utilize telepresence for the first time. The way the system is implemented through augmented reality is particularly impressive, I think. It looks like it’s intuitive enough for an inexperienced user without being restrictive, and is a clever way of mitigating even significant amounts of lag.
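To make the click-to-drive idea concrete, here is a minimal Python sketch of the pipeline described above: back-project depth pixels through a pinhole camera model, keep the ones that land on the floor plane (those become the augmented-reality grid of dots), and turn a clicked pixel into a 2D goal. The intrinsics, camera height, and thresholds are all made-up illustrative values; none of this is Double Robotics' actual code.

```python
import numpy as np

# Hypothetical pinhole intrinsics for a 1280 x 720 depth camera.
FX, FY, CX, CY = 615.0, 615.0, 640.0, 360.0
FLOOR_TOLERANCE_M = 0.05  # accept points within 5 cm of the floor plane

def deproject(u, v, depth_m):
    """Back-project pixel (u, v) with depth (meters) into camera-frame XYZ."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def drivable_dots(depth, camera_height_m=1.0, stride=40):
    """Return sparse (u, v) pixels whose 3D points lie on the floor plane.

    Assumes a forward-looking camera with +y pointing down, so floor points
    sit roughly camera_height_m below the camera. A UI could draw these as
    the augmented-reality 'grid of dots'.
    """
    dots = []
    h, w = depth.shape
    for v in range(0, h, stride):
        for u in range(0, w, stride):
            z = depth[v, u]
            if not (0.2 <= z <= 10.0):  # the D430's stated working range
                continue
            if abs(deproject(u, v, z)[1] - camera_height_m) < FLOOR_TOLERANCE_M:
                dots.append((u, v))
    return dots

def click_to_goal(u, v, depth):
    """Turn a click on the video feed into (lateral, forward) meters."""
    p = deproject(u, v, depth[v, u])
    return p[0], p[2]

# Exercise the functions on a synthetic constant-depth frame.
depth = np.full((720, 1280), 3.0, dtype=np.float32)
print(len(drivable_dots(depth)), "dots;", "goal:", click_to_goal(640, 560, depth))
```

A real system would fit the floor plane from the point cloud rather than assume a fixed camera height, and would plan a path around the non-floor points, but the data flow from depth pixels to clickable goals is the same.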
Otherwise, Double 3’s mobility system is exactly the same as the one featured on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.
For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.
IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?
Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said “What would be the best way of communicating this information to the user?” And the right way to do it ended up being to draw the graphics directly onto the scene. It’s really awesome—we have a full, real-time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.
How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?
We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.
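As a rough illustration of that layered approach (assumed logic, not Double's actual firmware), the safety check might require every modality to agree that the path is clear before the base moves, so a sonar ping off a glass wall can veto motion even when the cameras see nothing:

```python
SAFETY_RADIUS_M = 0.35  # illustrative stopping distance

def safe_to_proceed(depth_clearance_m, ultrasonic_ranges_m):
    """True only if both the depth cameras and all sonars report clear space."""
    camera_clear = depth_clearance_m > SAFETY_RADIUS_M
    sonar_clear = all(r > SAFETY_RADIUS_M for r in ultrasonic_ranges_m)
    return camera_clear and sonar_clear

# Glass wall: the cameras see "far" but one sonar pings at 0.3 m -> stop.
print(safe_to_proceed(5.0, [1.2, 0.9, 0.30, 2.0, 1.5]))  # False
```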
It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?
We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.
Do you expect developers to be excited for this new mapping capability?
We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.
We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.
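The adapters haven't shipped yet, so here is a purely hypothetical sketch of what one might look like: `double_sdk` and its `on_depth` hook are invented placeholders, while the rospy and sensor_msgs side is standard ROS 1.

```python
import rospy
from sensor_msgs.msg import Image

def run_depth_adapter():
    """Republish raw depth frames from the robot's SDK as a ROS topic."""
    rospy.init_node("double3_depth_adapter")
    pub = rospy.Publisher("/double3/depth", Image, queue_size=1)

    def on_depth_frame(frame_bytes, width, height):
        # Wrap a raw 16-bit depth frame in a sensor_msgs/Image message.
        msg = Image()
        msg.header.stamp = rospy.Time.now()
        msg.height, msg.width = height, width
        msg.encoding = "16UC1"   # one 16-bit depth channel per pixel
        msg.step = width * 2     # bytes per row
        msg.data = frame_bytes
        pub.publish(msg)

    # double_sdk.on_depth(on_depth_frame)  # placeholder hook, not a real API
    rospy.spin()
```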
Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?
When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.
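To put numbers on "consistent, low latency," a quick standard-library probe like the one below measures round-trip time and jitter from a home connection; the echo host and port are placeholders you would point at your own server.

```python
import socket
import statistics
import time

def rtt_samples(host="echo.example.com", port=7, count=20):
    """UDP round-trip times in milliseconds; timed-out probes become None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    samples = []
    for _ in range(count):
        t0 = time.monotonic()
        sock.sendto(b"ping", (host, port))
        try:
            sock.recvfrom(64)
            samples.append((time.monotonic() - t0) * 1000.0)
        except socket.timeout:
            samples.append(None)
        time.sleep(0.05)
    return samples

received = [s for s in rtt_samples() if s is not None]
if len(received) >= 2:
    print(f"median {statistics.median(received):.1f} ms, "
          f"jitter (stdev) {statistics.stdev(received):.1f} ms, "
          f"loss {1 - len(received) / 20:.0%}")
```

High variance (jitter) or bursts of loss, rather than raw bandwidth, are what make a telepresence call feel laggy, which is exactly the Netflix-saturation failure mode described above.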
Do you think 5G will make a significant difference to telepresence robots?
We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.
DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.
When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.
[ Double Robotics ]
CookieBot Is a Humanoid Robot Armed With ...
FZI's CookieBot will do its best to put you into a sugar coma.