
#431427 Why the Best Healthcare Hacks Are the ...

Technology has the potential to solve some of our most intractable healthcare problems. In fact, it’s already doing so, with inventions bringing us closer to a medical Tricorder, progress toward 3D-printed organs, and AIs that can perform point-of-care diagnosis.
No doubt these applications of cutting-edge tech will continue to move the needle on progress in medicine, diagnosis, and treatment. But what if some of the healthcare hacks we need most aren’t high-tech at all?
According to Dr. Darshak Sanghavi, this is exactly the case. In a talk at Singularity University’s Exponential Medicine last week, Sanghavi told the audience, “We often think in extremely complex ways, but I think a lot of the improvements in health at scale can be done in an analog way.”
Sanghavi is the chief medical officer and senior vice president of translation at OptumLabs, and was previously director of preventive and population health at the Center for Medicare and Medicaid Innovation, where he oversaw the development of large pilot programs aimed at improving healthcare costs and quality.
“How can we improve health at scale, not for only a small number of people, but for entire populations?” Sanghavi asked. With programs that benefit a small group of people, he explained, what tends to happen is that the average health of a population improves, but the disparities across the group worsen.
“My mantra became, ‘The denominator is everybody,’” he said. He shared details of some low-tech but crucial fixes he believes could vastly benefit the US healthcare system.
1. Regulatory Hacking
Healthcare regulations are ultimately what drive many aspects of patient care, for better or worse. Worse because the mind-boggling complexity of regulations (exhibit A: the Affordable Care Act is reportedly about 20,000 pages long) can make it hard for people to get the care they need at a cost they can afford, but better because, as Sanghavi explained, tweaking these regulations in the right way can result in across-the-board improvements in a given population’s health.
An adjustment to Medicare hospitalization rules makes for a relevant example. The rule was updated so that if a patient was re-admitted within 30 days of discharge, the hospital had to pay a penalty. The result was hospitals taking more care to ensure patients were released not only in good health, but also with a solid understanding of what they had to do to take care of themselves going forward. “Here, arguably the writing of a few lines of regulatory code resulted in a remarkable decrease in 30-day re-admissions, and the savings of several billion dollars,” Sanghavi said.
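Sanghavi’s “regulatory code” is policy, not software, but the rule’s logic really is just a few lines. Here is a toy sketch in Python; only the 30-day window comes from the talk, while the function and field names are invented for illustration:

```python
from datetime import date
from typing import Optional

READMISSION_WINDOW_DAYS = 30  # the window specified in the Medicare rule

def owes_penalty(discharged: date, readmitted: Optional[date]) -> bool:
    """Hypothetical check: was the patient re-admitted within the penalty window?"""
    if readmitted is None:
        return False  # no readmission, no penalty
    return (readmitted - discharged).days <= READMISSION_WINDOW_DAYS

# Discharged June 1, re-admitted June 20: within 30 days, so the penalty applies.
print(owes_penalty(date(2017, 6, 1), date(2017, 6, 20)))  # True
```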
2. Long-Term Focus
It’s easy to focus on healthcare hacks that have immediate, visible results—but what about fixes whose benefits take years to manifest? How can we motivate hospitals, regulators, and doctors to take action when they know they won’t see changes anytime soon?
“I call this the reality TV problem,” Sanghavi said. “Reality shows don’t really care about who’s the most talented recording artist—they care about getting the most viewers. That is exactly how we think about health care.”
Sanghavi’s team wanted to address this problem for heart attacks. They found they could reliably determine someone’s 10-year risk of having a heart attack based on a simple risk profile. Rather than monitoring patients’ cholesterol, blood pressure, weight, and other individual factors, the team took the average 10-year risk across entire provider panels, then made providers responsible for lowering that average across their patient populations.
“Every percentage point you lower that risk, by hook or by crook, you get some people to stop smoking, you get some people on cholesterol medication. It’s patient-centered decision-making, and the provider then makes money. This is the world’s first predictive analytic model, at scale, that’s actually being paid for at scale,” he said.
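The panel-level accounting is easy to make concrete. In the minimal sketch below, the individual 10-year risk scores would come from a validated risk model like the one Sanghavi describes; here they are simply made-up numbers:

```python
def panel_average_risk(risk_scores):
    """Average 10-year heart attack risk across a provider's entire panel."""
    return sum(risk_scores) / len(risk_scores)

# The provider is judged on the panel average, not on any single patient,
# so any mix of interventions that lowers the average counts.
baseline = panel_average_risk([0.22, 0.10, 0.35, 0.18])  # 0.2125
after = panel_average_risk([0.19, 0.10, 0.30, 0.16])     # 0.1875
print(f"Average risk fell {100 * (baseline - after):.1f} percentage points")
```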
3. Aligned Incentives
If hospitals are held accountable for the health of the communities they’re based in, those hospitals need to have the right incentives to follow through. “Hospitals have to spend money on community benefit, but linking that benefit to a meaningful population health metric can catalyze significant improvements,” Sanghavi said.
Darshak Sanghavi speaking at Singularity University’s 2017 Exponential Medicine Summit in San Diego, CA.
He used smoking cessation as an example. His team designed a program where hospitals were given a score (determined by the Centers for Disease Control and Prevention) based on the smoking rate in the counties where they’re located, then given monetary incentives to improve their score. Improving their score, in turn, resulted in better health for their communities, which meant fewer patients to treat for smoking-related health problems.
4. Social Determinants of Health
Social determinants of health include factors like housing, income, family, and food security. The answer to getting people to pay attention to these factors at scale, and creating aligned incentives, Sanghavi said, is “Very simple. We just have to measure it to start with, and measure it universally.”
His team was behind a $157 million pilot program called Accountable Health Communities that went live this year. The program requires that all Medicare and Medicaid beneficiaries be screened for various social determinants of health. With all that data being collected, analysts can pinpoint local trends, then target funds at the underlying problem, whether the need is job training, drug treatment, or nutritional education. “You’re then free to invest the dollars where they’re needed…this is how we can improve health at scale, with very simple changes in the incentive structures that are created,” he said.
5. ‘Securitizing’ Public Health
Sanghavi’s final point tied back to his discussion of aligning incentives. As misguided as it may seem, the reality is that financial incentives can make a huge difference in healthcare outcomes, from both a patient and a provider perspective.
Sanghavi’s team did an experiment in which they created outcome benchmarks for three major health problems that exist across geographically diverse areas: smoking, adolescent pregnancy, and binge drinking. The team proposed measuring the baseline of these issues and then creating what they called a social impact bond. If communities were able to lower the frequency of these conditions by a given percentage within a stated period of time, they’d get paid for it.
“What that did was essentially say, ‘you have a buyer for this outcome if you can achieve it,’” Sanghavi said. “And you can try to get there in any way you like.” The program is currently in CMS clearance.
AI and Robots Not Required
Using robots to perform surgery and artificial intelligence to diagnose disease will undoubtedly benefit doctors and patients around the US and the world. But Sanghavi’s talk made it clear that our healthcare system needs much more than this, and that improving population health on a large scale is really a low-tech project—one involving more regulatory and financial innovation than technological innovation.
“The things that get measured are the things that get changed,” he said. “If we choose the right outcomes to predict long-term benefit, and we pay for those outcomes, that’s the way to make progress.”
Image Credit: Wonderful Nature / Shutterstock.com


#431377 The Farms of the Future Will Be ...

Swarms of drones buzz overhead, while robotic vehicles crawl across the landscape. Orbiting satellites snap high-resolution images of the scene far below. Not one human being can be seen in the pre-dawn glow spreading across the land.
This isn’t some post-apocalyptic vision of the future à la The Terminator. This is a snapshot of the farm of the future. Every phase of the operation—from seed to harvest—may someday be automated, without the need to ever get one’s fingernails dirty.
In fact, it’s science fiction already being engineered into reality. Today, robots empowered with artificial intelligence can zap weeds with preternatural precision, while autonomous tractors move with tireless efficiency across the farmland. Satellites can assess crop health from outer space, providing gobs of data to help produce the sort of business intelligence once accessible only to Fortune 500 companies.
“Precision agriculture is on the brink of a new phase of development involving smart machines that can operate by themselves, which will allow production agriculture to become significantly more efficient. Precision agriculture is becoming robotic agriculture,” said professor Simon Blackmore last year during a conference in Asia on the latest developments in robotic agriculture. Blackmore is head of engineering at Harper Adams University and head of the National Centre for Precision Farming in the UK.
It’s Blackmore’s university that recently showcased what may someday be possible. The project, dubbed Hands Free Hectare and led by researchers from Harper Adams and private industry, farmed one hectare (about 2.5 acres) of spring barley without one person ever setting foot in the field.
The team re-purposed, re-wired, and roboticized farm equipment ranging from a Japanese tractor to a 25-year-old combine. Drones served as scouts to survey the operation and collect samples to help the team monitor the progress of the barley. At the end of the season, the robo-farmers harvested about 4.5 tons of barley at a price tag of £200,000.

“This project aimed to prove that there’s no technological reason why a field can’t be farmed without humans working the land directly now, and we’ve done that,” said Martin Abell, mechatronics researcher for Precision Decisions, which partnered with Harper Adams, in a press release.
I, Robot Farmer
The Harper Adams experiment is the latest example of how machines are disrupting the agricultural industry. Around the same time that the Hands Free Hectare combine was harvesting barley, Deere & Company announced it would acquire a startup called Blue River Technology for a reported $305 million.
Blue River has developed a “see-and-spray” system that combines computer vision and artificial intelligence to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it can eliminate 90 percent of the chemicals used in conventional agriculture.
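Blue River hasn’t published its internals, but the decision loop the article describes is easy to sketch at a high level. Everything below, from the `classify` model to the nozzle interface, is a hypothetical stand-in:

```python
def see_and_spray(plant_regions, classify, nozzles):
    """Hypothetical per-plant treatment loop for a see-and-spray rig.

    classify(image) returns "crop" or "weed"; nozzles target individual
    plants, so chemicals go only where the vision model says they belong.
    """
    for region in plant_regions:
        label = classify(region.image)
        if label == "crop":
            nozzles.spray(region.position, chemical="fertilizer")
        elif label == "weed":
            nozzles.spray(region.position, chemical="herbicide")
        # anything the model can't identify is left unsprayed
```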
It’s not just farmland that’s getting a helping hand from robots. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards.
“Traditional robots were designed to perform very specific tasks over and over again. But the robots that will be used in food and agricultural applications will have to be much more flexible than what we’ve seen in automotive manufacturing plants in order to deal with natural variation in food products or the outdoor environment,” Dan Harburg, an associate at venture capital firm Anterra Capital who previously worked at a Massachusetts-based startup making a robotic arm capable of grabbing fruit, told AgFunder News.
“This means ag-focused robotics startups have to design systems from the ground up, which can take time and money, and their robots have to be able to complete multiple tasks to avoid sitting on the shelf for a significant portion of the year,” he noted.
Eyes in the Sky
It will take more than an army of robotic tractors to grow a successful crop. The farm of the future will also rely on drones, satellites, and other airborne instruments to provide data about the crops on the ground.
Companies like Descartes Labs, for instance, employ machine learning to analyze satellite imagery and forecast soy and corn yields. The Los Alamos, New Mexico, startup collects five terabytes of data every day from multiple satellite constellations, including those operated by NASA and the European Space Agency. Combined with weather readings and other real-time inputs, Descartes Labs can predict cornfield yields with 99 percent accuracy. Its AI platform can even assess crop health from infrared readings.
The US agency DARPA recently granted Descartes Labs $1.5 million to monitor and analyze wheat yields in the Middle East and Africa. The idea is that accurate forecasts may help identify regions at risk of crop failure, which could lead to famine and political unrest. Another company called TellusLabs out of Somerville, Massachusetts also employs machine learning algorithms to predict corn and soy yields with similar accuracy from satellite imagery.
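None of these companies publish their pipelines, but the overall shape, supervised regression from per-field satellite features to reported yields, can be sketched with standard tools. The feature names below are illustrative, not anyone’s actual inputs:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative per-field features: mean NDVI (greenness), rainfall (mm),
# and growing degree days; targets are historical yields in bushels/acre.
X_train = np.array([[0.71, 480.0, 2900.0],
                    [0.55, 300.0, 2600.0],
                    [0.80, 520.0, 3100.0]])
y_train = np.array([172.0, 118.0, 195.0])

model = GradientBoostingRegressor().fit(X_train, y_train)
print(model.predict([[0.68, 450.0, 2850.0]]))  # estimated yield for a new field
```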
Farmers don’t have to reach orbit to get insights on their cropland. A startup in Oakland, Ceres Imaging, produces high-resolution imagery from multispectral cameras flown over fields aboard small planes. The snapshots capture the landscape at different wavelengths, revealing problems like water stress and providing estimates of chlorophyll and nitrogen levels. The geo-tagged images mean farmers can easily locate areas that need to be addressed.
Growing From the Inside
Even the best intelligence—whether from drones, satellites, or machine learning algorithms—will be challenged to predict the unpredictable issues posed by climate change. That’s one reason more and more companies are betting the farm on what’s called controlled environment agriculture. Today, that doesn’t just mean fancy greenhouses, but everything from warehouse-sized, automated vertical farms to grow rooms run by robots, located not in the emptiness of Kansas or Nebraska but smack dab in the middle of the main streets of America.
Proponents of these new concepts argue these high-tech indoor farms can produce much higher yields while drastically reducing water usage and synthetic inputs like fertilizer and herbicides.
Iron Ox, out of San Francisco, is developing one-acre urban greenhouses that will be operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops.
Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce. Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.
“What I can envision is locating a larger scale indoor farm in the economically disadvantaged food desert, in order to stimulate a broader economic impact that could create jobs and generate income for that area,” said Dr. Gary Stutte, an expert in space agriculture and controlled environment agriculture, in an interview with AgFunder News. “The indoor agriculture model is adaptable to becoming an engine for economic growth and food security in both rural and urban food deserts.”
Still, the model is not without its own challenges and criticisms. Most of what these farms can produce falls into the “leafy greens” category and often comes with a premium price, which seems antithetical to the proposed mission of creating oases in the food deserts of cities. And while water usage may be minimized, the electricity required to power the operation, especially the LEDs (which played a huge part in revolutionizing indoor agriculture), is not cheap.
Still, all of these advances, from robo farmers to automated greenhouses, may need to be part of a future where nearly 10 billion people will inhabit the planet by 2050. An oft-quoted statistic from the Food and Agriculture Organization of the United Nations says the world must boost food production by 70 percent to meet the needs of the population. Technology may not save the world, but it will help feed it.
Image Credit: Valentin Valkov / Shutterstock.com


#431371 Amazon Is Quietly Building the Robots of ...

Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics by a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Yet Clarke also left us a very famous quote: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? True sci-fi robots won’t get here right away, but the pieces are coming together, and the company best positioned to develop them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly making progress. Notably, Amazon has more than just a dream; it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for? Are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developers of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions towards a general-purpose, multi-functional robot. They built an impressive device—PR2—and programmed the operating system, ROS, that is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many workers at the company went into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve difficult problems of full autonomy for the robot, or manipulating its environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? First off, the company just announced that its profits are up by 30 percent, and it is well known for its constantly moving Day One philosophy, under which a great deal of those profits are reinvested back into the business. But lots of companies have ambition.
One thing Amazon has that few other corporations have, as well as big financial resources, is viable stepping stones for developing the technologies needed for this sort of robotics to become a reality. They already employ 100,000 robots: these are of the “pragmatic, boring, useful” kind we’ve profiled, which move shelves around in warehouses. These robots are allowing Amazon to develop localization and mapping software for robots that can autonomously navigate the simple warehouse environment.
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robot hands to machine learning algorithms and having them grasp thousands of objects. Although results were promising, the 10 to 20 percent failure rate for grasps is too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
Another area where humanoid robotics, especially bipedal locomotion (walking), has been seriously suggested is the last-mile delivery problem. Amazon has already shown a willingness to get creative in this department with its notorious drone delivery service. But it’s all very well to have a self-driving car or van deliver packages to people’s doors; who actually puts the package on the doorstep? It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like CASSIE, developed at Oregon State, may one day be used to deliver parcels.
Again, no one stands to profit more from cracking this technology than Amazon. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through its website. Unlike companies like Google, Amazon has an easy way to make a direct profit from IoT devices, which could fuel funding.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment, and react accordingly. The way humans learn about our environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras, which is an ideal stepping stone task to train an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between today’s robots, which mindlessly assemble cars, and our sci-fi vision of robots that can intelligently serve us is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics, and with the unique stepping stones it has along the way, Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com


#431238 AI Is Easy to Fool—Why That Needs to ...

Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would be looking for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible for a human to discern but are highly predictive of a particular object.
What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
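The article doesn’t name a specific technique, but the canonical white-box version of this attack is the fast gradient sign method (FGSM) from the adversarial examples literature: nudge every pixel a tiny step in whichever direction increases the model’s loss. A minimal sketch, assuming a PyTorch image classifier:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Fast gradient sign method: a small, near-imperceptible perturbation
    chosen to push the classifier away from the correct label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                                  # gradient of loss w.r.t. pixels
    perturbed = image + epsilon * image.grad.sign()  # tiny step per pixel
    return perturbed.clamp(0.0, 1.0).detach()        # keep pixels in valid range
```

Note that this needs the model’s gradients, which is exactly the kind of access to the algorithm’s inner workings described next.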
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
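The 2016 result used a substitute-model technique more sophisticated than anything shown here, but the query-only setting is easy to illustrate with a deliberately simple greedy random search, keeping any small perturbation that raises the model’s score for the target class:

```python
import numpy as np

def black_box_attack(query, image, target_class, steps=1000, step_size=0.005):
    """query(image) -> class probabilities; the model's internals are unknown.
    Greedy random search: keep perturbations that make the model more wrong."""
    adv = image.copy()
    best = query(adv)[target_class]
    for _ in range(steps):
        candidate = np.clip(adv + step_size * np.random.randn(*adv.shape), 0, 1)
        score = query(candidate)[target_class]
        if score > best:  # keep only changes that fool the model further
            adv, best = candidate, score
    return adv
```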
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically-colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.
Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
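The firm hasn’t released its tooling, but the loop it describes is a standard genetic algorithm run against a black-box score. A generic sketch follows; the `mutate` and `score` functions, which would wrap the code-tweaking logic and the antivirus verdict, are stand-ins, and a real attack would also have to verify each variant still functions:

```python
import random

def evolve(seed, mutate, score, population_size=50, generations=100):
    """Generic genetic algorithm: repeatedly mutate candidates and keep
    the ones the black-box scorer rates least suspicious."""
    population = [seed]
    for _ in range(generations):
        # Refill the population by mutating random survivors.
        while len(population) < population_size:
            population.append(mutate(random.choice(population)))
        # Lower score = less suspicious to the defender's model.
        population.sort(key=score)
        population = population[:population_size // 5]  # keep the best fifth
    return population[0]
```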
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
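Label-flipping is one of the simplest forms of data poisoning, and it makes the idea concrete: the attacker never touches the deployed model, only the examples it will be retrained on. A toy sketch with hypothetical labels:

```python
import random

def poison_labels(dataset, flip_fraction=0.05,
                  from_label="malware", to_label="benign"):
    """Flip a small fraction of labels so a model retrained on this data
    learns to wave through samples it should have flagged."""
    poisoned = []
    for features, label in dataset:
        if label == from_label and random.random() < flip_fraction:
            label = to_label  # this example now teaches the wrong lesson
        poisoned.append((features, label))
    return poisoned
```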
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com


#431189 Researchers Develop New Tech to Predict ...

It is one of the top 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose Alzheimer’s disease in its earliest stages, while some of the latest research says technologies like artificial intelligence can detect dementia years before the first symptoms occur.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says professor Francis L Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
Blood Diamond
Martin is the principal investigator on a project that has developed a technique to analyze blood samples to diagnose Alzheimer’s disease and distinguish between other forms of dementia.
The researchers used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
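Sensitivity and specificity, the two numbers Martin mentions, are the standard measures of a diagnostic test. The sketch below shows only their textbook definitions, not the team’s validated algorithm:

```python
def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: 1 = disease, 0 = healthy, one entry per blood sample."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # fraction of true cases the test catches
    specificity = tn / (tn + fp)  # fraction of healthy samples it clears
    return sensitivity, specificity
```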
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Crystal Ball
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia using a single amyloid PET scan of the brain of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
One of the key goals of the project is to improve the research in Alzheimer’s disease by ensuring those patients with the highest probability to develop dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working to improve science’s predictive powers when it comes to detecting dementia early. One startup out of San Francisco, Darmiyan, claims its proprietary software can pick up signals of Alzheimer’s disease up to 15 years before onset.
Darmiyan didn’t respond to a request for comment for this article. Venture Beat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims to have a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to Venture Beat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told Venture Beat.
Conclusions
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 have been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated to be $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If interested in seeing this medical research progress, you can help by signing up on the Brain Health Registry to improve the quality of clinical trials.
Image Credit: rudall30 / Shutterstock.com
