New Double 3 Robot Makes Telepresence ...
Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One thing we learned back then was that it takes a lot of practice to drive the robot around remotely. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving the new robot a level of intelligence that promises to make telepresence more accessible for everyone.
Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.
It’s those new depth sensors that really make Double 3 special. Each D430 module uses a pair of stereo cameras with a pattern projector to generate 1280 x 720 depth data at ranges from 0.2 to 10 meters. Double 3 uses all of this high-quality depth data to locate obstacles, but at this point, it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there while avoiding obstacles (including dynamic obstacles) and related mishaps along the way.
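To give a sense of the raw data Double 3 is working with, here is a minimal sketch of pulling a 1280 x 720 depth frame from a RealSense depth module using Intel’s pyrealsense2 Python bindings. This is not Double Robotics’ code, and the half-meter obstacle check at the end is purely illustrative.

```python
import numpy as np
import pyrealsense2 as rs

# Stream depth at the 1280 x 720 resolution mentioned above.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    if depth:
        # Raw frame as a 720 x 1280 array of uint16 depth values.
        raw = np.asanyarray(depth.get_data())
        # Range in meters at the center pixel.
        center_dist = depth.get_distance(640, 360)
        print(raw.shape, center_dist)
        # Illustrative threshold only, not Double's actual obstacle logic.
        if 0.0 < center_dist < 0.5:
            print("Something within half a meter of the camera")
finally:
    pipeline.stop()
```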
This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That should make it much easier to get people to try telepresence for the first time. The way the system is implemented through augmented reality is particularly impressive, I think. It looks like it’s intuitive enough for an inexperienced user without being restrictive, and is a clever way of mitigating even significant amounts of lag.
Otherwise, Double 3’s mobility system is exactly the same as the one featured on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.
For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.
IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?
Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said “What would be the best way of communicating this information to the user?” And the right way to do it ended up being to draw the graphics directly onto the scene. It’s really awesome—we have a full, real-time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.
How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?
We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.
It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?
We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.
Do you expect developers to be excited for this new mapping capability?
We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.
We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.
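Those ROS adapters don’t exist yet, so topic names are anyone’s guess, but once they do, driving the base from a script could look something like this minimal rospy sketch. The /double3/cmd_vel topic is a hypothetical placeholder, not a documented part of Double’s API.

```python
import rospy
from geometry_msgs.msg import Twist

# Hypothetical topic name; the real ROS adapter may expose something different.
CMD_VEL_TOPIC = '/double3/cmd_vel'

rospy.init_node('double3_drive_sketch')
pub = rospy.Publisher(CMD_VEL_TOPIC, Twist, queue_size=1)
rate = rospy.Rate(10)  # publish velocity commands at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2   # creep forward at 0.2 m/s
cmd.angular.z = 0.0  # no rotation

start = rospy.Time.now()
while not rospy.is_shutdown() and rospy.Time.now() - start < rospy.Duration(3.0):
    pub.publish(cmd)
    rate.sleep()

pub.publish(Twist())  # send zero velocity to stop the robot
```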
Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?
When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.
Do you think 5G will make a significant difference to telepresence robots?
We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.
DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.
When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.
[ Double Robotics ]
How an AI Startup Designed a Drug ...
Discovering a new drug can take decades, billions of dollars, and untold man hours from some of the smartest people on the planet. Now a startup says it’s taken a significant step towards speeding the process up using AI.
The typical drug discovery process involves carrying out physical tests on enormous libraries of molecules, and even with the help of robotics it’s an arduous process. The idea of sidestepping this by using computers to virtually screen for promising candidates has been around for decades. But progress has been underwhelming, and it’s still not a major part of commercial pipelines.
Recent advances in deep learning, however, have reignited hopes for the field, and major pharma companies have started tying up with AI-powered drug discovery startups. And now Insilico Medicine has used AI to design a molecule that effectively targets a protein involved in fibrosis—the formation of excess fibrous tissue—in mice in just 46 days.
The platform the company has developed combines two of the hottest sub-fields of AI: generative adversarial networks, or GANs, which power deepfakes, and reinforcement learning, which is at the heart of the most impressive game-playing AI advances of recent years.
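Insilico’s actual architecture is laid out in the paper, but to give a flavor of the reinforcement learning half of this kind of pipeline, here is a toy REINFORCE-style sketch in PyTorch: a tiny recurrent generator samples SMILES-like token strings and gets nudged toward whatever a scoring function rewards. The vocabulary, network, and reward below are placeholders, not the company’s model.

```python
import torch
import torch.nn as nn

# Toy SMILES-ish vocabulary; a real system would use a proper tokenizer.
VOCAB = ['C', 'N', 'O', '=', '(', ')', '<eos>']

class TinyGenerator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), hidden)
        self.cell = nn.GRUCell(hidden, hidden)
        self.head = nn.Linear(hidden, len(VOCAB))

    def sample(self, max_len=20):
        h = torch.zeros(1, self.cell.hidden_size)
        tok = torch.zeros(1, dtype=torch.long)  # start from token 0 ('C')
        tokens, log_probs = [], []
        for _ in range(max_len):
            h = self.cell(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.head(h))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens.append(VOCAB[tok.item()])
            if tokens[-1] == '<eos>':
                break
        return tokens, torch.stack(log_probs).sum()

def reward(tokens):
    # Placeholder: a real pipeline would score predicted activity against
    # the target protein, plus drug-likeness, novelty, and so on.
    return float(tokens.count('N'))

gen = TinyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for _ in range(200):
    tokens, log_prob = gen.sample()
    loss = -reward(tokens) * log_prob  # REINFORCE: reward-weighted log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()
```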
In a paper in Nature Biotechnology, the company’s researchers describe how they trained their model on all the molecules already known to target this protein as well as many other active molecules from various datasets. The model was then used to generate 30,000 candidate molecules.
Unlike most previous efforts, they went a step further and selected the most promising molecules for testing in the lab. The 30,000 candidates were whittled down to just 6 using more conventional drug discovery approaches and were then synthesized in the lab. They were put through increasingly stringent tests, but the leading candidate was found to be effective at targeting the desired protein and behaved as one would hope a drug would.
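As a rough illustration of what that kind of computational triage can look like (these thresholds are not Insilico’s actual criteria), here is a short sketch using the open-source RDKit library to discard invalid or obviously un-drug-like candidates before anything goes near a lab.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def triage(candidate_smiles, max_mw=500.0, min_qed=0.5):
    """Keep only parseable, reasonably drug-like molecules.

    The molecular-weight and QED cutoffs are illustrative placeholders,
    not the filters used in the Nature Biotechnology paper.
    """
    shortlist = []
    for smi in candidate_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue  # unparseable SMILES string
        if Descriptors.MolWt(mol) > max_mw:
            continue  # too heavy for a typical oral drug
        if QED.qed(mol) < min_qed:
            continue  # low quantitative estimate of drug-likeness
        shortlist.append(smi)
    return shortlist

# Example usage with aspirin, caffeine, and a malformed string.
print(triage(['CC(=O)Oc1ccccc1C(=O)O', 'Cn1cnc2c1c(=O)n(C)c(=O)n2C', 'not_a_smiles']))
```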
The authors are clear that the results are just a proof-of-concept, which company CEO Alex Zhavoronkov told Wired stemmed from a challenge set by a pharma partner to design a drug as quickly as possible. But they say they were able to carry out the process faster than traditional methods for a fraction of the cost.
There are some caveats. For a start, the protein being targeted is already very well known and multiple effective drugs exist for it. That gave the company a wealth of data to train their model on, something that isn’t the case for many of the diseases where we urgently need new drugs.
The company’s platform also only targets the very initial stages of the drug discovery process. The authors concede in their paper that the molecules would still take considerable optimization in the lab before they’d be true contenders for clinical trials.
“And that is where you will start to begin to commence to spend the vast piles of money that you will eventually go through in trying to get a drug to market,” writes Derek Lowe in his blog In The Pipeline. The part of the discovery process that the platform tackles represents a tiny fraction of the total cost of drug development, he says.
Nonetheless, the research is a definite advance for virtual screening technology and an important marker of the potential of AI for designing new medicines. Zhavoronkov also told Wired that this research was done more than a year ago, and they’ve since adapted the platform to go after harder drug targets with less data.
And with big pharma companies desperate to slash their ballooning development costs and find treatments for a host of intractable diseases, they can use all the help they can get.
Image Credit: freestocks.org / Unsplash