
#431189 Researchers Develop New Tech to Predict ...

It is one of the 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose Alzheimer’s disease in its earliest stages, and some of the latest research suggests that technologies like artificial intelligence can detect dementia years before the first symptoms appear.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says Professor Francis L. Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
Blood Diamond
Martin is the principal investigator on a project that has developed a technique to analyze blood samples to diagnose Alzheimer’s disease and distinguish between other forms of dementia.
The researchers used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
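As a rough illustration of the kind of pipeline Martin describes, the sketch below trains a classifier on spectral fingerprints and reads sensitivity and specificity off a confusion matrix. This is a minimal sketch using synthetic data and an off-the-shelf model; the data layout and classifier are assumptions for illustration, not the team’s actual algorithm or validation procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical data: one row per blood sample, one column per point in the
# spectral "fingerprint" region; labels mark Alzheimer's (1) vs. control (0).
rng = np.random.default_rng(0)
spectra = rng.normal(size=(550, 450))   # ~550 samples, as in the study
labels = rng.integers(0, 2, size=550)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Sensitivity and specificity, the two figures Martin cites, come straight
# from the confusion matrix on held-out samples.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"sensitivity: {tp / (tp + fn):.2f}")
print(f"specificity: {tn / (tn + fp):.2f}")
```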
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Crystal Ball
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia from a single amyloid PET scan of the brains of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
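To make that setup concrete, here is a hedged sketch of the prediction task: a classifier maps features from a baseline amyloid PET scan to a progressed-to-dementia label over a 24-month window, with cross-validated accuracy as the headline number. The feature layout, cohort size, and model are illustrative assumptions, not McGill’s actual algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical cohort: regional amyloid-uptake features per patient, plus a
# label for whether the patient progressed to dementia within 24 months.
rng = np.random.default_rng(1)
pet_features = rng.normal(size=(270, 60))   # 270 patients, 60 brain regions
progressed = rng.integers(0, 2, size=270)

# Cross-validated accuracy is the kind of figure behind the reported 84.5%.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, pet_features, progressed, cv=10)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```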
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
One of the key goals of the project is to improve Alzheimer’s research by ensuring that the patients most likely to develop dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working to improve science’s predictive powers when it comes to detecting dementia early. One San Francisco startup, Darmiyan, claims its proprietary software can pick up signals of Alzheimer’s disease up to 15 years before onset.
Darmiyan didn’t respond to a request for comment for this article. VentureBeat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to VentureBeat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told VentureBeat.
Conclusions
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 has been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated at $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If you’re interested in seeing this medical research progress, you can help by signing up with the Brain Health Registry to improve the quality of clinical trials.
Image Credit: rudall30 / Shutterstock.com


#431154 The Future of Technology – Robotics in ...

Introduction: Now that our technological level has progressed as far as it has, the greatest amount of work is being put into the field of robotics as it directly pertains to home automation and the improvement of technology that already exists in a household. Robotics is seeing a lot of change, since its technology and …


#430854 Get a Live Look Inside Singularity ...

Singularity University’s (SU) second annual Global Summit begins today in San Francisco, and the Singularity Hub team will be there to give you a live look inside the event, along with exclusive speaker interviews and articles on great talks.
Whereas SU’s other summits each focus on a specific field or industry, Global Summit is a broad look at emerging technologies and how they can help solve the world’s biggest challenges.
Talks will cover the latest in artificial intelligence, the brain and technology, augmented and virtual reality, space exploration, the future of work, the future of learning, and more.
We’re bringing three full days of live Facebook programming, streaming on Singularity Hub’s Facebook page, complete with 30+ speaker interviews, tours of the EXPO innovation hall, and tech demos. You can also livestream main stage talks at Singularity University’s Facebook page.
Interviews include Peter Diamandis, cofounder and chairman of Singularity University; Sylvia Earle, National Geographic explorer-in-residence; Esther Wojcicki, founder of the Palo Alto High Media Arts Center; Bob Richards, founder and CEO of Moon Express; Matt Oehrlein, cofounder of MegaBots; and Craig Newmark, founder of Craigslist and the Craig Newmark Foundation.
Pascal Finette, SU vice president of startup solutions, and Alison Berman, SU staff writer and digital producer, will host the show, and Lisa Kay Solomon, SU chair of transformational practices, will put on a special daily segment on exponential leadership with thought leaders.
Make sure you don’t miss anything by ‘liking’ the Singularity Hub and Singularity University Facebook pages and turning on notifications from both so you know when we go live. And to get a taste of what’s in store, check out the selection of stories below from last year’s event.
Are We at the Edge of a Second Sexual Revolution? by Vanessa Bates Ramirez
“Brace yourself, because according to serial entrepreneur Martin Varsavsky, all our existing beliefs about procreation are about to be shattered again…According to Varsavsky, the second sexual revolution will decouple procreation from sex, because sex will no longer be the best way to make babies.”
VR Pioneer Chris Milk: Virtual Reality Will Mirror Life Like Nothing Else Before, by Jason Ganz
“Milk is already a legend in the VR community…But [he] is just getting started. His company Within has plans to help shape the language we use for virtual reality storytelling. Because let’s be clear, VR storytelling is still very much in its infancy. This fact makes it even crazier there are already VR films out there that can inspire and captivate on such a profound level. And we’re only going up from here.”
7 Key Factors Driving the Artificial Intelligence Revolution, by David Hill
“Jacobstein calmly and optimistically assures that this revolution isn’t going to disrupt humans completely, but usher in a future in which there’s a symbiosis between human and machine intelligence. He highlighted 7 factors driving this revolution.”
Are There Other Intelligent Civilizations Out There? Two Views on the Fermi Paradox, by Alison Berman
“Cliché or not, when I stare up at the sky, I still wonder if we’re alone in the galaxy. Could there be another technologically advanced civilization out there? During a panel discussion on space exploration at Singularity University’s Global Summit, Jill Tarter, the Bernard M. Oliver chair at the SETI Institute, was asked to explain the Fermi paradox and her position on it. Her answer was pretty brilliant.”
Engineering Will Soon Be ‘More Parenting Than Programming’, by Sveta McShane
“In generative design, the user states desired goals and constraints and allows the computer to generate entire designs, iterations and solution sets based on those constraints. It is, in fact, a lot like parents setting boundaries for their children’s activities. The user basically says, ‘Yes, it’s ok to do this, but it’s not ok to do that.’ The resulting solutions are ones you might never have thought of on your own.”
Biohacking Will Let You Connect Your Body to Anything You Want, by Vanessa Bates Ramirez
“How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there…[Hannes] Sjoblad said that the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health.”
Peter Diamandis: We’ll Radically Extend Our Lives With New Technologies, by Jason Dorrier
“[Diamandis] said humans aren’t the longest-lived animals. Other species have multi-hundred-year lifespans. Last year, a study “dating” Greenland sharks found they can live roughly 400 years. Though the technique isn’t perfectly precise, they estimated one shark to be about 392. Its approximate birthday was 1624…Diamandis said he asked himself: If these animals can live centuries—why can’t I?”


#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel: we wanted cool robots; instead we got 140 characters and Flippy the burger bot. But scientists are making progress in empowering robots to see and respond to their surroundings much as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. How their approach differs is that they confine the space in which the robot learns to classify the objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
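Here is a minimal sketch of that idea, assuming voxelized objects and a PCA-style projection; the paper’s actual representation and learning method may differ. Each shape is projected into a small number of directions that capture the variation among plausible household objects, and new views are classified by proximity in that subspace.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: objects as flattened 20x20x20 occupancy grids,
# each labeled with a category (chair, table, toilet, ...). The study used
# ~4,000 objects; this toy version uses 1,000 random grids.
rng = np.random.default_rng(2)
voxels = (rng.random(size=(1000, 8000)) < 0.1).astype(float)
categories = rng.integers(0, 10, size=1000)

# Find the restricted space: a few dozen directions stand in for "all
# possible Lego builds," greatly simplifying classification.
subspace = PCA(n_components=50).fit(voxels)
projected = subspace.transform(voxels)

# Classify a new, possibly partial view by proximity in the subspace.
classifier = KNeighborsClassifier(n_neighbors=5).fit(projected, categories)
new_view = (rng.random(size=(1, 8000)) < 0.1).astype(float)
print(classifier.predict(subspace.transform(new_view)))
```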
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned to not only follow instructions but to recognize the level of abstraction. That was key to kickstart its problem-solving abilities to tackle the job in the most appropriate way.
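As a toy illustration of that first step, the sketch below trains a text classifier to tag a command’s abstraction level, which a planner could then use to choose its granularity. The tiny dataset and labels are invented for illustration; the Brown system’s actual model is more sophisticated.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented examples of the two ends of the abstraction spectrum described above.
commands = [
    "take the chair to the blue room",
    "move the chair into the blue room",
    "go north, pick up the chair, then go north again",
    "turn left, move forward two squares, put down the chair",
]
levels = ["high", "high", "low", "low"]

# Bag-of-words + naive Bayes: enough to show the shape of the problem.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(commands, levels)

# An abstract goal should route to a high-level planner; step-by-step
# commands should route to a low-level one.
print(model.predict(["carry the chair to the red room"]))
```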
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. By contrast, when the robot could not identify the task’s level of specificity, it took 20 or more seconds to plan about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash


#430652 The Jobs AI Will Take Over First

11th July 2017: The robotic revolution is set to cause the biggest transformation in the world’s workforce since the industrial revolution. In fact, research suggests that over 30% of jobs in Britain are under threat from breakthroughs in artificial intelligence (AI) technology.

With pioneering advances in technology, many jobs that weren’t considered ripe for automation suddenly are. RS Components has used PwC data to reveal how many jobs per sector are at risk of being taken by robots by 2030, a mere 13 years away. Did you think you were exempt from the robot revolution?

The three sectors most exposed to the threat of robots are Transport and Storage, Manufacturing, and Wholesale and Retail, with 56%, 46%, and 44% risk of automation respectively. The PwC report states that the differentiating factor in automation risk is education: those with a GCSE-level education or lower face a 46% risk, whilst those with undergraduate degrees or higher face a 12% risk. Jobs that are repetitive, physical, and require minimal training are the most likely to be automated by machines.

The manufacturing industry has the third-highest automation likelihood at 46.6%, behind Transportation and Storage (56.4%) and Water, Sewage and Waste Management (62.6%). Although manufacturing is only third by likelihood, it has the second-largest number of jobs at risk of being taken by robots: an astonishing 1.22 million jobs are at risk in the near future. Repetitive manual labour and routine tasks can be taught to fixed machines and mimicked easily, saving employers both time and money.
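The jobs-at-risk figures above are essentially sector employment multiplied by automation probability. Here is a quick back-of-the-envelope check; the employment numbers below are inferred from the article’s own figures (1.22 million ÷ 46.6% implies roughly 2.6 million manufacturing jobs) or invented for illustration, not taken from the PwC report.

```python
# name: (approximate jobs in sector, automation probability)
sectors = {
    "Manufacturing": (2_620_000, 0.466),               # implies ~1.22M at risk
    "Transportation and Storage": (1_500_000, 0.564),  # illustrative figure
}
for name, (jobs, probability) in sectors.items():
    print(f"{name}: ~{jobs * probability / 1e6:.2f} million jobs at risk")
```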

The three sectors least at risk are Education, Health and Social Work, and Agriculture, Forestry and Fishing, with 9%, 17%, and 19% risk of automation respectively. Work in these fields is non-repetitive and relies on qualities that cannot easily be taught or replicated with AI and robotics.

These are not the only fields where the introduction of AI will have an impact on employment prospects; Administrative and Support Services, Accommodation and Food Services, Finance and Insurance, Construction, Real Estate, Public Administration and Defence, and Arts and Entertainment are not out of the woods either.

The future is not all doom and gloom. Automation is set to boost productivity, enabling workers to focus on higher-value, more rewarding jobs while leaving repetitive and uncomplicated ones to the robots. Sectors that are harder to automate are also expected to grow as running costs fall, and wealth and spending should rise as AI takes on more of the work. There are also some things AI simply cannot learn, so those jobs will remain safe.

In some sectors half of the jobs could be taken by a fully automated system. Is your job next?

