Tag Archives: robots
#429785 Expert discusses the future of ...
Science and technology are essential tools for innovation, and to reap their full potential, we also need to articulate and solve the many aspects of today's global issues that are rooted in the political, cultural, and economic realities of the human world. With that mission in mind, MIT's School of Humanities, Arts, and Social Sciences has launched The Human Factor—an ongoing series of stories and interviews that highlight research on the human dimensions of global challenges. Contributors to this series also share ideas for cultivating the multidisciplinary collaborations needed to solve the major civilizational issues of our time.
#429781 Mobile Health Takes on a New Challenge: ...
Technology’s role in modern healthcare is growing. Artificial intelligence is being used in mental health care, and smartphone add-ons can do things we never would have imagined ten years ago, like diagnosing STDs and imaging our eyes.
Now, mobile health is venturing into new and similarly amazing territory: a smartphone app that uses artificial intelligence and an add-on device to diagnose cervical cancer.
World Health Organization statistics show that 87 percent of cervical cancer deaths occur in developing nations, and there’s a shocking differential in mortality rates between the developed and developing world: the disease kills only about two of every 100,000 women in Western Europe or Australia, but more than 27 of every 100,000 in Eastern Africa. This differential is mostly due to the unequal access women in different parts of the world have to high-quality healthcare.
Philanthropic innovation company Global Good is out to change that. They want to use mobile technology to bring quality care and diagnostics to parts of the world that lack doctors and medical infrastructure.
From old to new
Traditional cervical cancer screening works like this: a gynecologist does a Pap smear to collect a sample of cervical cells. The sample is sent to an offsite lab, where it joins a queue of thousands of other samples waiting to be analyzed. Results are sent back to the patient’s clinic, and if there’s a problem, the patient must schedule a follow-up visit for further analysis or treatment. During the follow-up visit, the doctor does a colposcopy, which involves examining the cervix with a microscope and taking a biopsy of any abnormal tissue.
That’s a lot of hassle and expense, even for women who have access to all of the above-mentioned resources. What about women in areas where mail service is irregular, or where gynecologists are few and far between?
With Global Good’s diagnostic tool, health workers will use an attachment called an enhanced visual assessment (EVA) scope. It clips onto a smartphone and turns the phone into a sort of colposcope, using an app to take a picture of a woman’s cervix and then analyze and store that picture. Women found to have cancerous or pre-cancerous symptoms can then be treated on site.
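To make that flow concrete, here is a minimal sketch of how a screen-and-treat app along these lines might route a classifier's output to a clinical action. The category names, the ScreeningResult record, and the classify_cervical_image stub are illustrative assumptions, not Global Good's actual software.

```python
# Illustrative sketch only: capture -> classify -> store/flag, standing in for the
# kind of screen-and-treat flow described above. Not Global Good's actual app.
from dataclasses import dataclass

# Hypothetical diagnostic categories; findings in this set trigger on-site treatment.
TREAT_ON_SITE = {"precancerous_lesion", "suspected_cancer"}

@dataclass
class ScreeningResult:
    patient_id: str
    category: str
    treat_on_site: bool

def classify_cervical_image(image_path: str) -> str:
    """Stub standing in for the trained deep learning model."""
    return "healthy"  # a real model would return one of the diagnostic categories

def screen_patient(patient_id: str, image_path: str) -> ScreeningResult:
    category = classify_cervical_image(image_path)
    # In a real app the image and result would also be stored for later review.
    return ScreeningResult(patient_id, category, category in TREAT_ON_SITE)

print(screen_patient("patient-001", "eva_capture_0001.jpg"))
```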
Whereas prior mobile health solutions might still rely on a person to review the information gathered by the device, Global Good aims to pair their hardware with cutting-edge software. Using the latest in deep learning, they’re building a program that will teach itself to recognize and diagnose the disease.
How it works
The app’s creators partnered with the US National Cancer Institute to get access to 100,000 high-quality, annotated, anonymized cervical images. The images are tagged with categories such as healthy tissue, benign inflammation, precancerous lesions, or suspected cancer.
They’re then fed into a deep learning system, where the software learns to differentiate between categories, progressively improving its ability to recognize symptoms and make accurate diagnoses.
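As a rough picture of what that training step can look like, here is a sketch of fine-tuning a pretrained image classifier on the four categories above. The folder layout, the ResNet-18 backbone, and the hyperparameters are assumptions made for the example, not the team's actual pipeline.

```python
# Illustrative sketch: fine-tune a pretrained CNN to sort cervical images into the
# four categories described above. Paths, model choice, and hyperparameters are
# assumptions, not Global Good's actual training setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

CATEGORIES = ["healthy", "benign_inflammation", "precancerous_lesion", "suspected_cancer"]

# Hypothetical layout on disk: train_images/<category>/*.jpg
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("train_images", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final layer so it
# predicts one of the four diagnostic categories.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CATEGORIES))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```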
Researchers are training the software with the Cancer Institute’s images before integrating images taken with the EVA scope, which will be more complex due to variations in focus, lighting, and alignment. They’ll then monitor the app’s progress by comparing the diagnoses it makes to diagnoses made by medical experts and lab tests (the traditional way).
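That monitoring step amounts to measuring how often the app's category agrees with the reference diagnosis and spotting where it tends to go wrong, along the lines of the toy comparison below (the labels here are invented placeholders, not study data).

```python
# Toy comparison of app predictions against expert/lab reference diagnoses.
# The labels are made-up placeholders for illustration only.
from collections import Counter

reference = ["healthy", "precancerous_lesion", "healthy", "suspected_cancer"]
predicted = ["healthy", "precancerous_lesion", "benign_inflammation", "suspected_cancer"]

agreement = sum(r == p for r, p in zip(reference, predicted)) / len(reference)
confusions = Counter((r, p) for r, p in zip(reference, predicted) if r != p)

print(f"Agreement with reference diagnoses: {agreement:.0%}")
for (ref, pred), count in confusions.items():
    print(f"{count} image(s) labeled {ref} were predicted as {pred}")
```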
Global Good plans to begin field trials of the EVA scope and its accompanying app in Ethiopia this year.
A better (healthcare) future
Their work is part of a growing effort to pair a widely available technology, the smartphone, with artificial intelligence to make previously expensive, complex healthcare processes simpler, cheaper, and accessible to anyone. Just last month, the XPRIZE Foundation awarded $2.5 million for a smartphone add-on kit that diagnoses twelve different illnesses and measures vital signs.
In an ideal future, two big things (among others) will change in healthcare: it will shift from being reactive to being proactive, and the gap in quality of care between rich and poor will narrow significantly. Personal health monitoring and point-of-care diagnostics can help with both of those goals.
Technologies like the EVA scope will continue to be applied and adapted to more and more health conditions, and they’ll become cheaper and better in the process. Our ideal healthcare future isn’t here yet, but we’re clearly taking steps in that direction.
Image Credit: Pond5
#429777 This Week’s Awesome Stories From ...
TRANSPORTATION
Have Scientists Discovered the Cure for Potholes?
Angela Chen | The Verge
"Self-healing asphalt has been tested on 12 different roads in the Netherlands, and one of these has been functioning and open to the public since 2010. All are still in perfect condition, but Schlangen notes that even normal asphalt roads are fine for about seven to 10 years and that it’s in upcoming years that we’ll really start to see the difference. He estimates that the overall cost of the material would be 25 percent more expensive than normal asphalt, but it could double the life of the road."
ROBOTICS
The Little Robot That Taught the Big Robot a Thing or Two
Matt Simon | WIRED
"New research out today from the MIT Computer Science and Artificial Intelligence Laboratory takes a big step toward making such seamless transfers of knowledge a reality. It all begins with a little robot named Optimus and its friend, the famous 6-foot-tall humanoid Atlas."
CONNECTIVITY
A Cheap, Simple Way to Make Anything a Touch Pad
Rachel Metz | MIT Technology Review
"Researchers at Carnegie Mellon University say they’ve come up with a way to make many kinds of devices responsive to touch just by spraying them with conductive paint, adding electrodes, and computing where you press on them…Called Electrick, it can be used with materials like plastic, Jell-O, and silicone, and it could make touch tracking a lot cheaper, too, since it relies on already available paint and parts, Zhang says."
3D PRINTING
A New 3D Printing Technology Uses Electricity to Create Stronger Objects for Manufacturing
Brian Heater | TechCrunch
"FuseBox’s thrust is simultaneously dead simple and entirely complex, but at the most elementary level, it utilizes heat and electricity to increase the temperature of the material before and after each level is deposited. This serves to strengthen the body of the printed product where it’s traditionally weakest during the FDM (fused deposition modeling) print – the same layer-by-layer technology employed by MakerBot and the majority of desktop 3D printers."
SPACE
What Is America's Secret Space Shuttle For?
Marina Koren | The Atlantic
"The news that the military had a space shuttle quietly orbiting Earth for more than 700 days came as a surprise to some. Why didn’t we know about this thing, the reaction seemed to go. The reaction illustrated the distinct line between the country’s civilian and military activities in space, and how much the general public knows about each."
Image source: Shutterstock
#429771 Second Year of Robotic Art Contest
Press Release by: Robotart.org
Robots Have Learned to Paint in Second Year of Robotic Art Contest
Seattle, Wash. – April 19, 2017 – It was just announced that Google has developed AI that can sketch images. It should therefore come as no surprise that dozens of robots from around the world are now also painting with a brush, and many of them are quite skilled.
The Robot Art 2017 competition (http://robotart.org) returns for a second year with over 39 painting robots, more than twice the number of participants it had in its inaugural year. In addition to more robots, there is more artwork: more than 200 paintings have been submitted. As for the quality of the artwork, the event’s sponsor and organizer, Andrew Conru, sums it up best:
“The quality of the paintings for many of the teams has reached levels that are comparable with human artists. Many of this year’s entries are expressive, layered, and complex.”
The creativity of the teams and robots was evident not only in the artwork they produced, but also in how they went about making it. Of the 39 painting robots, no two teams took the exact same approach. The Manibus Team captured the movements of a ballerina and painted them to canvas. HEARTalion built a robot that paints based on emotional interactions with humans. share your inner unicorn used brainwaves to control a mark-making mobile robot. Other teams built custom robots that capitalized on their innate lack of precision to make abstract work, such as Anguis, a soft snake robot that slithers around its canvas. Still other robots were built to collaborate with their artistic creators, such as Sander Idzerda’s and Christian H. Seidler’s entries.
Robot Painter. Photo Credit: Robotart.org
Two returning entries, notable for their skilled approach to representational painting in last year’s contest, have gone abstract. e-David submitted multiple abstract self-portraits, not of a human, but of the robot itself. Each of its works was a collaboration between an artist and the machine in which most of the decisions were actually made by e-David as it continually watched and optimized its own progress on the canvas with multiple external cameras. CloudPainter also submitted multiple abstract portraits. Its subjects were taken from photoshoots performed by the robot itself. For several of CloudPainter’s paintings, the only artistic decision made by a human artist was to schedule the photoshoot. The robot then used artificial intelligence and deep learning to make all other “artistic” decisions, including taking the photos, composing an original abstract composition from its favorite shot, and then executing each brushstroke until it calculated it had done the best it could to render that composition.
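For a sense of what continually watching and optimizing its own progress can mean in code, here is a toy version of that paint-look-adjust loop. It is only a sketch, not e-David's or CloudPainter's actual system: the canvas and the target composition are tiny grayscale grids, and each "brushstroke" darkens whichever cell differs most from the target.

```python
# Toy paint-look-adjust loop: repeatedly compare the canvas to a target composition
# and apply the stroke that most reduces the difference. A sketch only, not the
# contestants' actual systems.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))   # stand-in for the robot's abstract composition
canvas = np.ones((8, 8))      # blank white canvas (1.0 = white, 0.0 = black)

for stroke in range(200):
    error = canvas - target                      # positive where canvas is too light
    y, x = np.unravel_index(np.argmax(error), error.shape)
    if error[y, x] < 0.05:                       # stop once no stroke helps much
        break
    canvas[y, x] = max(canvas[y, x] - 0.25, target[y, x])  # one darkening stroke

print(f"Stopped after {stroke} strokes; mean remaining error {np.abs(canvas - target).mean():.3f}")
```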
Robot Painter. Photo Credit: Robotart.org
The Robot Art 2017 competition runs from now until May 15th, when more than $100,000 in awards will be given to the top painting robots. Winners will be determined by a combination of public voting, the scores of professional judges (working artists, critics, and technologists), and how well each team met the spirit of the competition: to create something beautiful using a physical brush and robotics. The public can see the artwork and vote on their favorite robotic paintings at https://robotart.org/artworks/.
#429769 What Is Intelligence? 20 Years After ...
Twenty years ago, IBM computer Deep Blue beat the world's greatest chess player in a first for machines. How far has artificial intelligence come since then?