Tag Archives: healthcare
#435601 New Double 3 Robot Makes Telepresence ...
Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One of the things that we found out back then was that it takes a lot of practice to remotely drive the robot around. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving their new robot a level of intelligence that promises to make telepresence more accessible for everyone.
Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.
It’s those new depth sensors that really make Double 3 special. Each D430 module uses a pair of stereo cameras with a pattern projector to generate 1280 x 720 depth data at ranges between 0.2 and 10 meters. The Double 3 robot uses all of this high-quality depth data to locate obstacles, but at this point, it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there while avoiding obstacles (including dynamic obstacles) and related mishaps along the way.
This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That makes it that much easier to encourage people to utilize telepresence for the first time. The way the system is implemented through augmented reality is particularly impressive, I think. It looks like it’s intuitive enough for an inexperienced user without being restrictive, and is a clever way of mitigating even significant amounts of lag.
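The core idea behind that drivable-area grid can be sketched in a few lines: deproject each depth pixel to a 3D point, then keep only the grid cells whose points lie near the floor plane. This is a minimal illustration of the general technique, not Double Robotics’ actual implementation; the camera intrinsics, mounting height, and thresholds below are all assumed values for the sketch.

```python
# Hypothetical sketch of turning one depth frame into a drivable-area grid:
# deproject depth pixels to 3D, keep points near the floor plane, and bin
# them into grid cells. All parameters here are illustrative assumptions.
import numpy as np

def drivable_grid(depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                  cam_height=1.2, floor_tol=0.05, cell=0.25, extent=5.0):
    """Return a boolean grid of floor cells visible in one depth frame (meters)."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m                          # forward distance from the camera
    x = (us - cx) * z / fx               # lateral offset (right positive)
    y = (vs - cy) * z / fy               # vertical offset below the optical axis
    valid = z > 0
    # A point counts as "floor" if it sits roughly cam_height below the camera.
    on_floor = valid & (np.abs(y - cam_height) < floor_tol)
    n = int(extent / cell)
    grid = np.zeros((n, 2 * n), dtype=bool)   # rows: forward, cols: lateral
    gz = (z[on_floor] / cell).astype(int)
    gx = (x[on_floor] / cell).astype(int) + n
    keep = (gz >= 0) & (gz < n) & (gx >= 0) & (gx < 2 * n)
    grid[gz[keep], gx[keep]] = True
    return grid

# Usage: feed a 480x640 depth frame in meters; True cells are clear floor,
# which a UI could render as the clickable grid of dots.
```

A real system would fuse frames over time and estimate the floor plane rather than assume it, but the projection-and-binning step is the heart of the idea.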
Otherwise, Double 3’s mobility system is exactly the same as the one featured on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.
For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.
IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?
Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said “What would be the best way of communicating this information to the user?” And the right way to do it ended up drawing the graphics directly onto the scene. It’s really awesome—we have a full, real time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.
How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?
We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.
It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?
We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.
Do you expect developers to be excited for this new mapping capability?
We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.
We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.
Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?
When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.
Do you think 5G will make a significant difference to telepresence robots?
We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.
DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.
When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.
[ Double Robotics ]
#435098 Coming of Age in the Age of AI: The ...
The first generation to grow up entirely in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to ride in self-driving cars, as well as the first whose healthcare and education could be increasingly turned over to artificially intelligent machines.
Futurists, demographers, and marketers have yet to agree on the specifics of what defines the next wave of humanity to follow Generation Z. That hasn’t stopped some, like Australian futurist Mark McCrindle, from coining the term Generation Alpha, denoting a sort of reboot of society in a fully-realized digital age.
“In the past, the individual had no power, really,” McCrindle told Business Insider. “Now, the individual has great control of their lives through being able to leverage this world. Technology, in a sense, transformed the expectations of our interactions.”
No doubt technology may impart Marvel superhero-like powers to Generation Alpha that even tech-savvy Millennials never envisioned over cups of chai latte. But the powers of machine learning, computer vision, and other disciplines under the broad category of artificial intelligence will shape this yet unformed generation more definitively than any before it.
What will it be like to come of age in the Age of AI?
The AI Doctor Will See You Now
Perhaps no other industry is adopting and using AI as much as healthcare. The term “artificial intelligence” appears in nearly 90,000 publications from biomedical literature and research on the PubMed database.
AI is already transforming healthcare and longevity research. Machines are helping to design drugs faster and detect disease earlier. And AI may soon influence not only how we diagnose and treat illness in children, but perhaps how we choose which children will be born in the first place.
A study published earlier this month in NPJ Digital Medicine by scientists from Weill Cornell Medicine used 12,000 photos of human embryos taken five days after fertilization to train an AI algorithm on how to tell which in vitro fertilized embryo had the best chance of a successful pregnancy based on its quality.
Investigators assigned each embryo a grade based on various aspects of its appearance. A statistical analysis then correlated that grade with the probability of success. The algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97 percent accuracy.
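The general approach—train a classifier on labeled embryo images, then use it to grade new ones—can be illustrated with a toy example. The real Stork system trained a deep convolutional network on 12,000 photos; the tiny nearest-centroid classifier and synthetic 8x8 “images” below are stand-ins purely to show the train-then-classify workflow.

```python
# Toy illustration of the Stork-style workflow: learn from labeled example
# images, then classify new ones. The "images" and the link between quality
# and brightness are synthetic assumptions, not anything from the study.
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, brightness):
    # Hypothetical stand-in data: quality correlates with mean brightness.
    return rng.normal(loc=brightness, scale=0.1, size=(n, 8, 8))

def train_centroids(images, labels):
    # Compute one mean feature vector (centroid) per class label.
    X = images.reshape(len(images), -1)
    return {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(centroids, image):
    # Assign the label of the nearest class centroid.
    x = image.reshape(-1)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

good = make_images(100, 0.8)   # label 1: good quality
poor = make_images(100, 0.2)   # label 0: poor quality
X = np.concatenate([good, poor])
y = np.array([1] * 100 + [0] * 100)
model = train_centroids(X, y)
print(classify(model, make_images(1, 0.8)[0]))   # prints 1 on this toy data
```

A deep network replaces the hand-built centroid step with learned features, but the train-on-graded-examples, classify-new-images loop is the same.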
“Our algorithm will help embryologists maximize the chances that their patients will have a single healthy pregnancy,” said Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, in a press release. “The IVF procedure will remain the same, but we’ll be able to improve outcomes by harnessing the power of artificial intelligence.”
Other medical researchers see potential in applying AI to detect possible developmental issues in newborns. Scientists in Europe, working with a Finnish AI startup that creates seizure monitoring technology, have developed a technique for detecting movement patterns that might indicate conditions like cerebral palsy.
Published last month in the journal Acta Paediatrica, the study relied on an algorithm to extract the movements of a newborn, turning them into a simplified “stick figure” that medical experts could use to more easily detect clinically relevant data.
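The “stick figure” reduction can be sketched simply: once a pose-estimation model has located the infant’s joints in a frame, the frame collapses to a small set of limb segments. The joint names and skeleton connections below are illustrative assumptions, not the paper’s actual skeleton model.

```python
# Illustrative sketch of reducing detected joint keypoints to a stick figure.
# The skeleton layout here is a hypothetical simplification for demonstration.
SKELETON = [("head", "chest"), ("chest", "l_hand"), ("chest", "r_hand"),
            ("chest", "hip"), ("hip", "l_foot"), ("hip", "r_foot")]

def stick_figure(keypoints):
    """Map {joint: (x, y)} to drawable line segments, skipping missing joints."""
    segments = []
    for a, b in SKELETON:
        if a in keypoints and b in keypoints:
            segments.append((keypoints[a], keypoints[b]))
    return segments

frame = {"head": (0.5, 0.1), "chest": (0.5, 0.3), "hip": (0.5, 0.5),
         "l_hand": (0.3, 0.4), "r_hand": (0.7, 0.4),
         "l_foot": (0.4, 0.8), "r_foot": (0.6, 0.8)}
print(len(stick_figure(frame)))   # prints 6: all limb segments found
```

Tracking those segments across frames is what yields the movement patterns the clinicians then inspect.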
The researchers are continuing to improve the datasets, including using 3D video recordings, and are now developing an AI-based method for determining if a child’s motor maturity aligns with its true age. Meanwhile, a study published in February in Nature Medicine discussed the potential of using AI to diagnose pediatric disease.
AI Gets Classy
After being weaned on algorithms, Generation Alpha will hit the books—about machine learning.
China is famously trying to win the proverbial AI arms race by spending billions on new technologies, with one Chinese city alone pledging nearly $16 billion to build a smart economy based on artificial intelligence.
To reach dominance by its stated target of 2030, Chinese cities are also incorporating AI education into their school curricula. Last year, China published its first high school textbook on AI, according to the South China Morning Post. More than 40 schools are participating in a pilot program that involves SenseTime, one of the country’s biggest AI companies.
In the US, where it seems every child has access to their own AI assistant, researchers are just beginning to understand how the ubiquity of intelligent machines will influence the ways children learn and interact with their highly digitized environments.
Sandra Chang-Kredl, associate professor in the department of education at Concordia University, told The Globe and Mail that AI could have detrimental effects on children’s creativity and emotional connectedness.
Similar concerns inspired Stefania Druga, a member of the Personal Robots group at the MIT Media Lab (and former Education Teaching Fellow at SU), to study interactions between children and artificial intelligence devices in order to encourage positive interactions.
Toward that goal, Druga created Cognimates, a platform that enables children to program and customize their own smart devices such as Alexa or even a smart, functional robot. The kids can also use Cognimates to train their own AI models or even build a machine learning version of Rock Paper Scissors that gets better over time.
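The learning Rock Paper Scissors idea is a nice, self-contained example of a model improving with experience: track the opponent’s past moves and play the counter to their most frequent choice. Cognimates itself is a block-based platform, so this short Python sketch is only an illustration of the underlying idea, not its implementation.

```python
# Sketch of a Rock Paper Scissors player that gets better over time by
# counting the opponent's past moves and countering the most frequent one.
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class RPSLearner:
    def __init__(self):
        self.history = Counter()          # tallies of opponent moves seen

    def observe(self, opponent_move):
        self.history[opponent_move] += 1  # learn from each finished round

    def move(self):
        if not self.history:
            return "rock"                 # arbitrary opening move
        predicted = self.history.most_common(1)[0][0]
        return BEATS[predicted]           # play the counter to the prediction

bot = RPSLearner()
for m in ["scissors", "scissors", "rock", "scissors"]:
    bot.observe(m)
print(bot.move())   # opponent favors scissors, so the bot plays rock
```

Even this single frequency count captures the lesson Druga is after: the program’s behavior is shaped by data, and more rounds mean better predictions.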
“I believe it’s important to also introduce young people to the concepts of AI and machine learning through hands-on projects so they can make more informed and critical use of these technologies,” Druga wrote in a Medium blog post.
Druga is also the founder of Hackidemia, an international organization that sponsors workshops and labs around the world to introduce kids to emerging technologies at an early age.
“I think we are in an arms race in education with the advancement of technology, and we need to start thinking about AI literacy before patterns of behaviors for children and their families settle in place,” she wrote.
AI Goes Back to School
It also turns out that AI has as much to learn from kids. More and more researchers are interested in understanding how children grasp basic concepts that still elude the most advanced machine minds.
For example, developmental psychologist Alison Gopnik has written and lectured extensively about how studying the minds of children can provide computer scientists clues on how to improve machine learning techniques.
In an interview with Vox, she described how, while DeepMind’s AlphaZero was trained to be a chess master, it struggles with even the simplest changes in the rules, such as allowing the bishop to move horizontally instead of diagonally.
“A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game,” she noted. “Flexibility and generalization are something that even human one-year-olds can do but that the best machine learning systems have a much harder time with.”
Last year, the federal defense agency DARPA announced a new program aimed at improving AI by teaching it “common sense.” One of the chief strategies is to develop systems for “teaching machines through experience, mimicking the way babies grow to understand the world.”
Such an approach is also the basis of a new AI program at MIT called the MIT Quest for Intelligence.
The research leverages cognitive science to understand human intelligence, according to an article on the project in MIT Technology Review, such as exploring how young children visualize the world using their own innate 3D models.
“Children’s play is really serious business,” said Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and is head of the new program. “They’re experiments. And that’s what makes humans the smartest learners in the known universe.”
In a world increasingly driven by smart technologies, it’s good to know the next generation will be able to keep up.
Image Credit: phoelixDE / Shutterstock.com
#435046 The Challenge of Abundance: Boredom, ...
As technology continues to progress, the possibility of an abundant future seems more likely. Artificial intelligence is expected to drive down the cost of labor, infrastructure, and transport. Alternative energy systems are reducing the cost of a wide variety of goods. Poverty rates are falling around the world as more people are able to make a living, and resources that were once inaccessible to millions are becoming widely available.
But such a life presents fuel for the most common complaint against abundance: if robots take all the jobs, basic income provides us livable welfare for doing nothing, and healthcare is a guarantee free of charge, then what is the point of our lives? What would motivate us to work and excel if there are no real risks or rewards? If everything is simply given to us, how would we feel like we’ve ever earned anything?
Time has proven that humans inherently yearn to overcome challenges—in fact, this very desire likely exists as the root of most technological innovation. And the idea that struggling makes us stronger isn’t just anecdotal, it’s scientifically validated.
For instance, kids who use anti-bacterial soaps and sanitizers too often tend to develop weaker immune systems, causing them to get sick more frequently and more severely. People who work out purposely suffer micro-tears in their muscles so that, after a few days of healing, those muscles come back stronger. And when patients visit a psychologist to handle a fear that is derailing their lives, one of the most common treatments is exposure therapy: a gradual increase in exposure to the source of the fear, so that the patient grows stronger and braver each time, able to take on an incrementally more potent manifestation of their fears.
Different Kinds of Struggle
It’s not hard to understand why people might fear an abundant future as a terribly mundane one. But there is one crucial mistake made in this assumption, and it was well summarized by Indian mystic and author Sadhguru, who said during a recent talk at Google:
Stomach empty, only one problem. Stomach full—one hundred problems; because what we refer to as human really begins only after survival is taken care of.
This idea is backed up by Maslow’s hierarchy of needs, which was first presented in his 1943 paper “A Theory of Human Motivation.” Maslow shows the steps required to build to higher and higher levels of the human experience. Not surprisingly, the first two levels deal with physiological needs and the need for safety—in other words, with the body. You need to have food, water, and sleep, or you die. After that, you need to be protected from threats, from the elements, from dangerous people, and from disease and pain.
Maslow’s Hierarchy of Needs. Photo by Wikimedia User:Factoryjoe / CC BY-SA 3.0
The beauty of these first two levels is that they’re clear-cut problems with clear-cut solutions: if you’re hungry, then you eat; if you’re thirsty, then you drink; if you’re tired, then you sleep.
But what about the next tiers of the hierarchy? What of love and belonging, of self-esteem and self-actualization? If we’re lonely, can we just summon up an authentic friend or lover? If we feel neglected by society, can we demand it validate us? If we feel discouraged and disappointed in ourselves, can we simply dial up some confidence and self-esteem?
Of course not, and that’s because these psychological needs are nebulous; they don’t contain clear problems with clear solutions. They involve the external world and other people, and are complicated by the infinite flavors of nuance and compromise that are required to navigate human relationships and personal meaning.
These psychological difficulties are where we grow our personalities, outlooks, and beliefs. The truly defining characteristics of a person are dictated not by the physical situations they were forced into—like birth, socioeconomic class, or physical ailment—but instead by the things they choose. So a future of abundance helps to free us from the physical limitations so that we can truly commit to a life of purpose and meaning, rather than just feel like survival is our purpose.
The Greatest Challenge
And that’s the plot twist. This challenge to come to grips with our own individuality and freedom could actually be the greatest challenge our species has ever faced. Can you imagine waking up every day with infinite possibility? Every choice you make says no to the rest of reality, and so every decision carries with it truly life-defining purpose and meaning. That sounds overwhelming. And that’s probably because in our current socio-economic systems, it is.
Studies have shown that people in wealthier nations tend to experience more anxiety and depression. Ron Kessler, professor of health care policy at Harvard and World Health Organization (WHO) researcher, summarized his findings of global mental health by saying, “When you’re literally trying to survive, who has time for depression? Americans, on the other hand, many of whom lead relatively comfortable lives, blow other nations away in the depression factor, leading some to suggest that depression is a ‘luxury disorder.’”
This might explain why America ranks among the most depressed and anxious countries on the planet. We surpassed our survival needs, and instead became depressed because our jobs and relationships don’t fulfill our expectations for the next three levels of Maslow’s hierarchy (belonging, esteem, and self-actualization).
But a future of abundance would mean we’d have to deal with these levels. This is the challenge for the future; this is what keeps things from being mundane.
As a society, we would be forced to come to grips with our emotional intelligence, to reckon with philosophy rather than simply contemplate it. Nearly every person you meet will be passionately on their own customized life journey, not following a routine simply because of financial limitations. Such a world seems far more vibrant and interesting than one where most wander sleep-deprived and numb while attempting to survive the rat race.
We can already see the forceful hand of this paradigm shift as self-driving cars become ubiquitous. For example, consider the famous psychological and philosophical “trolley problem.” In this thought experiment, a person sees a trolley car heading towards five people on the train tracks; they see a lever that will allow them to switch the trolley car to a track that instead only has one person on it. Do you switch the lever and have a hand in killing one person, or do you let fate continue and kill five people instead?
For the longest time, this was just an interesting quandary to consider. But now, massive corporations have to have an answer, so they can program their self-driving cars with the ability to choose between hitting a kid who runs into the road or swerving into an oncoming car carrying a family of five. When companies need philosophers to make business decisions, it’s a good sign of what’s to come.
Luckily, it’s possible this forceful reckoning with philosophy and our own consciousness may be exactly what humanity needs. Perhaps our great failure as a species has been a result of advanced cognition still trapped in the first two levels of Maslow’s hierarchy due to a long history of scarcity.
As suggested in the opening scenes in 2001: A Space Odyssey, our ape-like proclivity for violence has long stayed the same while the technology we fight with and live amongst has progressed. So while well-off Americans may have comfortable lives, they still know they live in a system where there is no safety net, where a single tragic failure could still mean hunger and homelessness. And because of this, that evolutionarily hard-wired neurotic part of our brain that fears for our survival has never been able to fully relax, and so that anxiety and depression that come with too much freedom but not enough security stays ever present.
Not only might this shift in consciousness help liberate humanity, but it may be vital if we’re to survive our future creations as well. Whatever values we hold dear as a species are the ones we will imbue into the sentient robots we create. If machine learning is going to take its guidance from humanity, we need to level up humanity’s emotional maturity.
While the physical struggles of the future may indeed fall by the wayside amidst abundance, it’s unlikely to become a mundane world; instead, it will become a vibrant culture where each individual is striving against the most important struggle that affects all of us: the challenge to find inner peace, to find fulfillment, to build meaningful relationships, and ultimately, the challenge to find ourselves.
Image Credit: goffkein.pro / Shutterstock.com