Tag Archives: better
#431362 Does Regulating Artificial Intelligence ...
Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity—or perhaps exterminating us. These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft cofounder Bill Gates and Facebook’s Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic.
As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I’ve seen how beneficial it can be. I’ve developed AI software that lets robots working in teams make individual decisions as part of collective efforts to explore and solve problems. Researchers are already subject to existing rules, regulations and laws designed to protect public safety. Imposing further limitations risks reducing the potential for innovation with AI systems.
How is AI regulated now?
While the term “artificial intelligence” may conjure fantastical images of human-like robots, most people have encountered AI before. It helps us find similar products while shopping, offers movie and TV recommendations, and helps us search for websites. It grades student writing, provides personalized tutoring, and even recognizes objects carried through airport scanners.
In each case, the AI makes things easier for humans. For example, the AI software I developed could be used to plan and execute a search of a field for a plant or animal as part of a science experiment. But even as the AI frees people from doing this work, it is still basing its actions on human decisions and goals about where to search and what to look for.
In areas like these and many others, AI has the potential to do far more good than harm—if used properly. But I don’t believe additional regulations are currently needed. There are already laws on the books of nations, states, and towns governing civil and criminal liabilities for harmful actions. Our drones, for example, must obey FAA regulations, while the self-driving car AI must obey regular traffic laws to operate on public roadways.
Existing laws also cover what happens if a robot injures or kills a person, even if the injury is accidental and the robot’s programmer or operator isn’t criminally responsible. While lawmakers and regulators may need to refine responsibility for AI systems’ actions as technology advances, creating regulations beyond those that already exist could prohibit or slow the development of capabilities that would be overwhelmingly beneficial.
Potential risks from artificial intelligence
It may seem reasonable to worry about researchers developing very advanced artificial intelligence systems that can operate entirely outside human control. A common thought experiment deals with a self-driving car forced to make a decision about whether to run over a child who just stepped into the road or veer off into a guardrail, injuring the car’s occupants and perhaps even those in another vehicle.
Musk and Hawking, among others, worry that a hyper-capable AI system, no longer limited to a single set of tasks like controlling a self-driving car, might decide it doesn’t need humans anymore. It might even look at human stewardship of the planet, the interpersonal conflicts, theft, fraud, and frequent wars, and decide that the world would be better without people.
Science fiction author Isaac Asimov tried to address this potential by proposing three laws limiting robot decision-making: Robots cannot injure humans or allow them “to come to harm.” They must also obey humans—unless this would harm humans—and protect themselves, as long as this doesn’t harm humans or ignore an order.
But Asimov himself knew the three laws were not enough. And they don’t reflect the complexity of human values. What constitutes “harm” is an example: Should a robot protect humanity from suffering related to overpopulation, or should it protect individuals’ freedoms to make personal reproductive decisions?
We humans have already wrestled with these questions in our own, non-artificial intelligences. Researchers have proposed restrictions on human freedoms, including reducing reproduction, to control people’s behavior, population growth, and environmental damage. In general, society has decided against using those methods, even if their goals seem reasonable. Similarly, rather than regulating what AI systems can and can’t do, in my view it would be better to teach them human ethics and values—like parents do with human children.
Artificial intelligence benefits
People already benefit from AI every day—but this is just the beginning. AI-controlled robots could assist law enforcement in responding to gunmen. Police currently must focus on preventing officers from being injured, but robots could step into harm’s way instead, potentially changing the outcomes of cases like the recent shooting of an armed college student at Georgia Tech and of an unarmed high school student in Austin.
Intelligent robots can help humans in other ways, too. They can perform repetitive tasks, like processing sensor data, where human boredom may cause mistakes. They can limit human exposure to dangerous materials and dangerous situations, such as when decontaminating a nuclear reactor or working in areas humans can’t go. In general, AI robots can give humans more time to pursue whatever they define as happiness by freeing them from having to do other work.
Achieving most of these benefits will require a lot more research and development. Regulations that make it more expensive to develop AIs or prevent certain uses may delay or forestall those efforts. This is particularly true for small businesses and individuals—key drivers of new technologies—who are not as well equipped to deal with regulation compliance as larger companies. In fact, the biggest beneficiary of AI regulation may be large companies that are used to dealing with it, because startups will have a harder time competing in a regulated environment.
The need for innovation
Humanity faced a similar set of issues in the early days of the internet. But the United States actively avoided regulating the internet to avoid stunting its early growth. Musk’s PayPal and numerous other businesses helped build the modern online world while subject only to regular human-scale rules, like those preventing theft and fraud.
Artificial intelligence systems have the potential to change how humans do just about everything. Scientists, engineers, programmers, and entrepreneurs need time to develop the technologies—and deliver their benefits. Their work should be free from concern that some AIs might be banned, and from the delays and costs associated with new AI-specific regulations.
This article was originally published on The Conversation. Read the original article.
Image Credit: Tatiana Shepeleva / Shutterstock.com
#431243 Does Our Survival Depend on Relentless ...
Malthus had a fever dream in the 1790s. While the world was marveling at the first manifestations of modern science and technology and the industrial revolution that was just beginning, he was concerned. He saw the exponential growth in the human population as a terrible problem for the species—an existential threat. He was afraid the human population would overshoot the availability of resources, and then things would really hit the fan.
“Famine seems to be the last, the most dreadful resource of nature. The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race. The vices of mankind are active and able ministers of depopulation.”
So Malthus wrote in his famous text, An Essay on the Principle of Population.
But Malthus was wrong. Not just in his proposed solution, which was to stop giving aid and food to the poor so that they wouldn’t explode in population. His prediction was also wrong: there was no great, overwhelming famine that caused the population to stay at the levels of the 1790s. Instead, the world population—with a few dips—has continued to grow exponentially ever since. And it’s still growing.
There have concurrently been developments in agriculture and medicine and, in the 20th century, the Green Revolution, in which Norman Borlaug ensured that countries adopted high-yield varieties of crops—the first precursors to modern ideas of genetically engineering food to produce better crops and more growth. The world became able to produce an astonishing amount of food—enough, in the modern era, for ten billion people. Only the grave injustice in how that food is distributed means that 12 percent of the world still goes hungry and starvation persists. But, aside from that, we were saved by the majesty of another kind of exponential growth: the population grew, but the ability to produce food grew faster.
In so much of the world around us today, there’s the same old story. Take exploitation of fossil fuels: here, there is another exponential race. The exponential growth of our ability to mine coal, extract natural gas, refine oil from ever more complex hydrocarbons: this is pitted against our growing appetite. The stock market is built on exponential growth; you cannot provide compound interest unless the economy grows by a certain percentage a year.
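That market arithmetic is easy to sketch. Below is a minimal illustration of compound growth; the 3 and 7 percent rates are assumed figures chosen for the example, not numbers from this article:

```python
import math

# Compound growth: an economy (or population) growing at a steady rate r
# multiplies by (1 + r) ** years -- the same exponential curve Malthus
# feared in population and that markets now demand of themselves.
def years_to_double(annual_rate: float) -> float:
    """How long steady exponential growth takes to double the starting value."""
    return math.log(2) / math.log(1 + annual_rate)

for rate in (0.03, 0.07):  # assumed growth rates, for illustration only
    print(f"At {rate:.0%} per year, the economy doubles in "
          f"{years_to_double(rate):.1f} years")
```

The point of the exercise: at any constant percentage rate, the doubling time is fixed, so the absolute growth demanded each year keeps getting larger.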
“This relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species.”
When the economy fails to grow exponentially, it’s considered a crisis: a financial catastrophe. This expectation penetrates down to individual investors. In the cryptocurrency markets—hardly immune from bubbles, the bull-and-bear cycle of economics—the traders’ saying is “Buy the hype, sell the news.” Before an announcement is made, the expectation of growth, of a boost—the psychological shift—is almost invariably worth more than whatever the major announcement turns out to be. The idea of growth is baked into the share price, to the extent that even good news can often cause the price to dip when it’s delivered.
In the same way, this relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species. A third of Earth’s soil has been acutely degraded by agriculture; we are teetering on the brink of a topsoil crisis. In less relentless times, we might have tried to solve the problem by letting fields lie fallow for a few years. But that’s no longer an option: if we do, people will starve. Instead, we look to a second Green Revolution—genetically modified crops, or hydroponics—to save us.
Climate change is considered by many to be an existential threat. The Intergovernmental Panel on Climate Change has already put their faith in the exponential growth of technology. Many of the scenarios where they can successfully imagine the human race dealing with the climate crisis involve the development and widespread deployment of carbon capture and storage technology. Our hope for the future already has built-in expectations of exponential growth in our technology in this field. Alongside this, to reduce carbon emissions to zero on the timescales we need to, we will surely require new technologies in renewable energy, energy efficiency, and electrification of the transport system.
Without exponential growth in technology continuing, then, we are doomed. Humanity finds itself on a treadmill that’s rapidly accelerating, with the risk of plunging into the abyss if we can’t keep up the pace. Yet this very acceleration could also pose an existential threat. As our global system becomes more interconnected and complex, chaos theory takes over: the economics of a town in Macedonia can influence a US presidential election; critical infrastructure can be brought down by cybercriminals.
New threats, such as biotechnology, nanotechnology, or a generalized artificial intelligence, could put incredible power—power over the entire species—into the hands of a small number of people. We are faced with a paradox: the continued existence of our system depends on the exponential growth of our capacities outpacing the exponential growth of our needs and desires. Yet this very growth will create threats that are unimaginably larger than any humans have faced before in history.
“It is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.”
Neo-Luddites may find satisfaction in rejecting the ill-effects of technology, but they will still live in a society where technology is the lifeblood that keeps the whole system pumping. Now, more than ever, it is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.
If we decide that limitless exponential growth on a finite planet is unsustainable, we need to plan for the transition to a new way of living before our ability to accelerate runs out. If we require new technologies or fields of study to enable this growth to continue, we must focus our efforts on these before anything else. If we want to survive the 21st century without major catastrophe, we don’t have a choice but to understand it. Almost by default, we’re all accelerationists now.
Image Credit: focal point / Shutterstock.com
#431189 Researchers Develop New Tech to Predict ...
It is one of the top 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose Alzheimer’s disease in its earliest stages, while some of the latest research says technologies like artificial intelligence can detect dementia years before the first symptoms occur.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says Professor Francis L. Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
Blood Diamond
Martin is the principal investigator on a project that has developed a technique to analyze blood samples to diagnose Alzheimer’s disease and distinguish between other forms of dementia.
The researchers used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
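For readers unfamiliar with the terms, sensitivity and specificity are the two standard measures of a diagnostic test: how often it catches the disease, and how often it correctly clears the healthy. A minimal sketch of how both fall out of a validation run follows; the labels below are invented for illustration and are not data from the study:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true positive rate) and specificity (true
    negative rate) from binary labels: 1 = disease present, 0 = healthy."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented validation labels, purely for illustration.
truth      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
prediction = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, prediction)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

A clinically useful screen needs both numbers to be high: a test that flags everyone is perfectly sensitive but useless, and vice versa.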
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Crystal Ball
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia using a single amyloid PET scan of the brains of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
One of the key goals of the project is to improve research in Alzheimer’s disease by ensuring that the patients with the highest probability of developing dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working toward improving science’s predictive powers when it comes to detecting dementia early. One San Francisco startup, Darmiyan, claims its proprietary software can pick up signals of Alzheimer’s disease up to 15 years before onset.
Darmiyan didn’t respond to a request for comment for this article. VentureBeat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims to have a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to VentureBeat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told VentureBeat.
Conclusions
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 has been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated at $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If you’re interested in seeing this medical research progress, you can help by signing up with the Brain Health Registry to improve the quality of clinical trials.
Image Credit: rudall30 / Shutterstock.com