Tag Archives: cameras
#439632 Intel Will Keep Selling RealSense Stereo ...
On Tuesday, CRN reported that Intel will be shutting down its RealSense division, which creates 3D vision systems used extensively in robotics. We confirmed the news with Intel directly on Wednesday, and Intel provided us with the following statement:
We are winding down our RealSense business and transitioning our computer vision talent, technology and products to focus on advancing innovative technologies that better support our core businesses and IDM 2.0 strategy. We will continue to meet our commitments to our current customers and are working with our employees and customers to ensure a smooth transition.

However, after speaking with some of our industry sources to try and get a better sense of what happened, we learned that what's actually going on might be more nuanced. And as it turns out, it is: Intel will continue to provide RealSense stereo cameras to people who want them for now, although long term, things don't look good.
Intel's “RealSense business” encompasses a variety of different products. There's stereo depth, which includes the D415, D435, and D455 camera systems—these are what roboticists often use for 3D sensing. There's also lidar in the form of the L515 and associated software products, as well as biometric identification, which uses the F455 depth sensor, and a series of tracking and coded light cameras.
Intel has just confirmed with us that everything but the stereo cameras has been end of life'd. Here's the statement:
Intel has decided to wind down the RealSense business and is announcing the EOL of LiDAR, Facial Authentication and Tracking product lines this month. Intel will continue to provide select Stereo products to its current distribution customers.

Hmm. The very careful wording here suggests some things to me, none of them good. The “RealSense business” is still being wound down, and while Intel will “continue to provide” RealSense cameras to customers, my interpretation is that they're still mostly doing what they said in their first release, which is moving their focus and talent elsewhere. So, no more development of new RealSense products, no more community engagement, and probably a minimal amount of support. If you want to buy a RealSense camera from a distributor, great, go ahead and do that, but I wouldn't look for much else. Also, “continue to provide” doesn't necessarily mean “continue to manufacture.” It could be that Intel has a big pile of cameras that they need to get rid of, and that once they're gone, that'll be the end of RealSense.
CRN managed to speak with Intel CEO Pat Gelsinger on Tuesday, and Gelsinger had this to add about the RealSense business:
“Hey, there's some good assets that we can harvest, but it doesn't fit one of those six business units that I've laid out.”
Oof.
We've asked Intel for additional detail, and we'll update this post if we hear anything more.
Sadly, many in the robotics community seemed unsurprised at the initial news about RealSense shutting down, which I guess makes sense, seeing as robotics has been burned in this way before—namely, with Microsoft's decision to discontinue the Kinect sensor (among other examples). What seemed different with RealSense was the extent to which Intel appeared to be interested in engaging with the robotics community and promoting RealSense to roboticists in a way that Microsoft never did with Kinect.
But even though it turns out that RealSense is still (technically) available, these statements over the last few days have created the feeling of a big company with other priorities, a company for whom robotics is a small enough market that it just doesn't really matter. I don't know if this is the reality over at Intel, but it's how things feel right now. My guess is that even roboticists who have been very happy with Intel will begin looking for alternatives.
The best and worst thing about RealSense could be that it's been just so darn ideal for robotics. Intel had the resources to make sensors with excellent performance and sell them for relatively cheap, and they've done exactly that. But in doing so, they've made it more difficult for alternative hardware to get a good foothold in the market, because for most people, RealSense is just the simple and affordable answer to stereo depth sensing. Maybe now, the other folks working on similar sensors (and there are a lot of companies doing very cool stuff) will be able to get a little more traction from researchers and companies who have abruptly been made aware of the need to diversify.
Even though it may not now be strictly necessary, within the next few weeks, we hope to take a look at other stereo depth sensing options for research and commercial robotics to get a better sense of what's out there.
#439110 Robotic Exoskeletons Could One Day Walk ...
Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.
Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.
One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to change the way you move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.
Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.
Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.
Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
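To make that pipeline concrete, here is a minimal sketch, in PyTorch rather than the ExoNet team's own code, of how a pretrained convolutional network could be fine-tuned to label wearable-camera frames by walking environment. The backbone, folder layout, class names, and hyperparameters are illustrative assumptions, not details from the ExoNet project.

```python
# A minimal sketch (not the ExoNet team's code): fine-tuning a pretrained CNN
# to classify wearable-camera frames by walking environment.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed folder layout: exonet_frames/train/<class_name>/*.jpg, with
# hypothetical class names such as "level_ground", "stairs_up", "stairs_down".
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("exonet_frames/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"

# A lightweight backbone keeps compute and memory low, in line with the
# onboard, real-time constraints the researchers mention.
model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features,
                                 len(train_set.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a short run; real training would use many more epochs
    correct, total = 0, 0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: training accuracy {correct / total:.2%}")
```

In practice, accuracy figures like the 73 percent reported here would come from held-out scenes, since the hard part is generalizing across the many surfaces and objects a wearable camera sees.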
According to Laschowski, a potential limitation of their work is their reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.
In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”
In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software while keeping its computational and memory requirements low, which is important for onboard, real-time operation on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movements.
Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities.
“The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.”
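As a rough illustration of that division of authority, here is a small, hypothetical Python sketch of a mode-selection step in which the vision classifier proposes a locomotion mode but an explicit user command always wins. The mode names, confidence threshold, and interface are invented for this example and are not taken from the ExoNet project.

```python
# Hypothetical sketch: choose the exoskeleton's next locomotion mode from a
# vision classifier's prediction, with the user's manual command taking priority.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Mode(Enum):
    LEVEL_WALK = auto()
    STAIR_ASCENT = auto()
    STAIR_DESCENT = auto()
    SIT = auto()


@dataclass
class Prediction:
    mode: Mode          # environment class proposed by the vision model
    confidence: float   # softmax probability of that class


def select_mode(prediction: Prediction,
                user_override: Optional[Mode],
                current_mode: Mode,
                min_confidence: float = 0.9) -> Mode:
    """Pick the next locomotion mode.

    An explicit user command always overrides the classifier. Otherwise, only
    switch modes automatically when the classifier is confident; a
    low-confidence prediction keeps the current mode.
    """
    if user_override is not None:
        return user_override
    if prediction.confidence >= min_confidence:
        return prediction.mode
    return current_mode


# A confident "stairs ahead" prediction switches modes...
print(select_mode(Prediction(Mode.STAIR_ASCENT, 0.95), None, Mode.LEVEL_WALK))
# ...but the user's manual command takes priority regardless.
print(select_mode(Prediction(Mode.STAIR_ASCENT, 0.95), Mode.SIT, Mode.LEVEL_WALK))
```

Checking the override first, and falling back to the current mode when the classifier is unsure, mirrors the safety-first framing Laschowski describes.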
#438807 Visible Touch: How Cameras Can Help ...
The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs.
To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.
A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.”
Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to that approach has been that, even to achieve coarse spatial resolution, many sensors are needed in a small area.
However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of vision.
“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.
This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device.
The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users would be able to cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Thus, even though the camera is prevented from capturing a high-resolution image of the user or their surrounding environment, with the right kind of training datasets the robot can continue to monitor some kinds of non-tactile activities.
In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.
As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
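As a rough sketch of that transfer-learning recipe, the following PyTorch example freezes a network pretrained on general images and retrains only a new final layer on a small set of shadow images. The ResNet-18 backbone, the gesture folder names, and the data layout are assumptions for illustration; the ShadowSense authors' actual architecture and training setup may differ.

```python
# Transfer-learning sketch: freeze a pretrained backbone and retrain only the
# final layer on a small, task-specific dataset of shadow images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: shadow_images/train/<gesture_name>/*.jpg, one folder per
# gesture (the paper reports six, e.g. one- or two-handed touch, pointing,
# hugging, punching; the exact folder names here are hypothetical).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("shadow_images/train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train head only
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

Because only the final layer is trained, a comparatively small dataset of shadow images can be enough, which is exactly the point Hoffman makes above.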