
#434759 To Be Ethical, AI Must Become ...

As over-hyped as artificial intelligence is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing. AI can now write realistic-sounding text, give a debating champ a run for his money, diagnose illnesses, and generate fake human faces, among other feats.

After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that, more often than not, even the creators don’t know exactly why the systems arrive at those conclusions or outcomes. There’s no easy way to trace a machine learning system’s rationale, so to speak. The further we let AI go down this opaque path, the more likely we are to end up somewhere we don’t want to be—and may not be able to come back from.

In a panel at the South by Southwest interactive festival last week titled “Ethics and AI: How to plan for the unpredictable,” experts in the field shared their thoughts on building more transparent, explainable, and accountable AI systems.

Not New, but Different
Ryan Welsh, founder and director of explainable AI startup Kyndi, pointed out that having knowledge-based systems perform advanced tasks isn’t new; he cited logistics, scheduling, and tax software as examples. What’s new is the learning component, our inability to trace how that learning occurs, and the ethical implications that could result.

“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad society discussion about ethics in those scenarios.”

Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a machine learning network, as well as the weights of those units and how they map to specific data and outputs.
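To make the idea of finding high-influence units concrete, here is a minimal sketch of one common interpretability technique (gradient-based input attribution) on a toy PyTorch model. The model, features, and numbers are purely illustrative and are not drawn from Kyndi or anything the panelists described.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained network; any differentiable model would do.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(1, 10, requires_grad=True)  # one input example with 10 features
output = model(x)
score = output[0, output.argmax()]          # score of the predicted class

# Gradient of that score with respect to the input: large magnitudes flag
# the input features with the most influence on this particular output.
score.backward()
influence = x.grad.abs().squeeze()
print(influence.topk(3).indices)            # the three most influential feature indices
```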

Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, OpenAI’s new text generator would be able to generate citations in its text that reference academic (and human-created) papers or studies.

Explainability itself is the highest and final bar and refers to a system’s ability to explain itself in natural language to the average user by being able to say, “I generated this output because x, y, z.”

“Humans are unique in our ability and our desire to ask why,” said Josh Marcuse, executive director of the Defense Innovation Board, which advises Department of Defense senior leaders on innovation. “The reason we want explanations from people is so we can understand their belief system and see if we agree with it and want to continue to work with them.”

Similarly, we need to have the ability to interrogate AIs.

Two Types of Thinking
Welsh explained that one big barrier standing in the way of explainability is the tension between the deep learning community and the symbolic AI community, which see themselves as two different paradigms and historically haven’t collaborated much.

Symbolic or classical AI focuses on concepts and rules, while deep learning is centered around perceptions. In human thought this is the difference between, for example, deciding to pass a soccer ball to a teammate who is open (you make the decision because conceptually you know that only open players can receive passes), and registering that the ball is at your feet when someone else passes it to you (you’re taking in information without making a decision about it).

“Symbolic AI has abstractions and representation based on logic that’s more humanly comprehensible,” Welsh said. To truly mimic human thinking, AI needs to be able to both perceive information and conceptualize it. An example of perception (deep learning) in an AI is recognizing numbers within an image, while conceptualization (symbolic learning) would give those numbers a hierarchical order and extract rules from that hierarchy (4 is greater than 3, and 5 is greater than 4, therefore 5 is also greater than 3).
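As a rough illustration of how those two layers could fit together, here is a minimal sketch in Python. The perception step is stubbed out (a real system would use a trained digit recognizer), and the symbolic step derives “5 is greater than 3” from the adjacent orderings by transitivity; the function names and structure are illustrative rather than taken from any particular system.

```python
def perceive_digits(image):
    """Perception layer (stub): a deep learning model would extract digits from pixels."""
    return [4, 3, 5]

def greater_than(a, b, facts):
    """Symbolic layer: a stored fact, or transitivity (a > c and c > b implies a > b)."""
    if (a, b) in facts:
        return True
    return any((a, c) in facts and greater_than(c, b, facts) for c in range(10))

digits = sorted(set(perceive_digits(image=None)))
facts = {(larger, smaller) for smaller, larger in zip(digits, digits[1:])}  # 4 > 3, 5 > 4
print(greater_than(5, 3, facts))  # True, derived by chaining 5 > 4 and 4 > 3
```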

Explainability comes in when the system can say, “I saw a, b, and c, and based on that decided x, y, or z.” DeepMind and others have recently published papers emphasizing the need to fuse the two paradigms together.

Implications Across Industries
One of the most prominent fields where AI ethics will come into play, and where the transparency and accountability of AI systems will be crucial, is defense. Marcuse said, “We’re accountable beings, and we’re responsible for the choices we make. Bringing in tech or AI to a battlefield doesn’t strip away that meaning and accountability.”

In fact, he added, rather than worrying about how AI might degrade human values, people should be asking how the tech could be used to help us make better moral choices.

It’s also important not to conflate AI with autonomy—a worst-case scenario that springs to mind is an intelligent destructive machine on a rampage. But in fact, Marcuse said, in the defense space, “We have autonomous systems today that don’t rely on AI, and most of the AI systems we’re contemplating won’t be autonomous.”

The US Department of Defense released its 2018 artificial intelligence strategy last month. It includes developing a robust and transparent set of principles for defense AI, investing in research and development for AI that’s reliable and secure, continuing to fund research in explainability, advocating for a global set of military AI guidelines, and finding ways to use AI to reduce the risk of civilian casualties and other collateral damage.

Though these were designed with defense-specific aims in mind, Marcuse said, their implications extend across industries. “The defense community thinks of their problems as being unique, that no one deals with the stakes and complexity we deal with. That’s just wrong,” he said. Making high-stakes decisions with technology is widespread; safety-critical systems are key to aviation, medicine, and self-driving cars, to name a few.

Marcuse believes the Department of Defense can invest in AI safety in a way that has far-reaching benefits. “We all depend on technology to keep us alive and safe, and no one wants machines to harm us,” he said.

A Creation Superior to Its Creator
That said, we’ve come to expect technology to meet our needs in just the way we want, all the time—servers must never be down, GPS had better not take us on a longer route, Google must always produce the answer we’re looking for.

With AI, though, our expectations of perfection may be less reasonable.

“Right now we’re holding machines to superhuman standards,” Marcuse said. “We expect them to be perfect and infallible.” Take self-driving cars. They’re conceived of, built by, and programmed by people, and people as a whole generally aren’t great drivers—just look at traffic accident death rates to confirm that. But the few times self-driving cars have had fatal accidents, there’s been an ensuing uproar and backlash against the industry, as well as talk of implementing more restrictive regulations.

This can be extrapolated to ethics more generally. We as humans have the ability to explain our decisions, but many of us aren’t very good at doing so. As Marcuse put it, “People are emotional, they confabulate, they lie, they’re full of unconscious motivations. They don’t pass the explainability test.”

Why, then, should explainability be the standard for AI?

Even if humans aren’t good at explaining our choices, at least we can try, and we can answer questions that probe at our decision-making process. A deep learning system can’t do this yet, so working towards being able to identify which input data the systems are triggering on to make decisions—even if the decisions and the process aren’t perfect—is the direction we need to head.

Image Credit: a-image / Shutterstock.com


#430286 Artificial Intelligence Predicts Death ...

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Welsh poet Dylan Thomas’ famous lines are a passionate plea to fight against the inevitability of death. While the sentiment is poetic, the reality is far more prosaic. We are all going to die someday at a time and place that will likely remain a mystery to us until the very end.
Or maybe not.
Researchers are now applying artificial intelligence, particularly machine learning and computer vision, to predict when someone may die. The ultimate goal is not to play the role of Grim Reaper, like in the macabre sci-fi Machine of Death universe, but to treat or even prevent chronic diseases and other illnesses.
The latest research into this application of AI to precision medicine used an off-the-shelf machine-learning platform to analyze 48 chest CT scans. The computer was able to predict which patients would die within five years with 69 percent accuracy. That’s about as good as any human doctor.
The results were published in the Nature journal Scientific Reports by a team led by the University of Adelaide.
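As a point of reference for readers unfamiliar with this kind of workflow, here is a minimal sketch of a cross-validated classifier in scikit-learn. The random stand-in features and labels are purely illustrative; this does not reproduce the study’s actual pipeline or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 32))     # 48 scans, 32 image-derived features (stand-in data)
y = rng.integers(0, 2, size=48)   # 1 = died within five years, 0 = survived (stand-in labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy on the small cohort
print(f"mean accuracy: {scores.mean():.2f}")
```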
In an email interview with Singularity Hub, lead author Dr. Luke Oakden-Rayner, a radiologist and PhD student, says that one of the obvious benefits of using AI in precision medicine is to identify health risks earlier and potentially intervene.
Less obvious, he adds, is the promise of speeding up longevity research.
“Currently, most research into chronic disease and longevity requires long periods of follow-up to detect any difference between patients with and without treatment, because the diseases progress so slowly,” he explains. “If we can quantify the changes earlier, not only can we identify disease while we can intervene more effectively, but we might also be able to detect treatment response much sooner.”
That could lead to faster and cheaper treatments, he adds. “If we could cut a year or two off the time it takes to take a treatment from lab to patient, that could speed up progress in this area substantially.”
AI has a heart
In January, researchers at Imperial College London published results that suggested AI could predict heart failure and death better than a human doctor. The research, published in the journal Radiology, involved creating virtual 3D hearts of about 250 patients that could simulate cardiac function. AI algorithms then went to work to learn what features would serve as the best predictors. The system relied on MRIs, blood tests, and other data for its analyses.
In the end, the machine was faster and better at assessing the risk of death in patients with pulmonary hypertension—about 73 percent accuracy versus 60 percent for human doctors.
The researchers say the technology could be applied to predict outcomes of other heart conditions in the future. “We would like to develop the technology so it can be used in many heart conditions to complement how doctors interpret the results of medical tests,” says study co-author Dr. Tim Dawes in a press release. “The goal is to see if better predictions can guide treatment to help people to live longer.”
AI getting smarter
These sorts of applications of AI to precision medicine are only going to get better as the machines continue to learn, just like any medical school student.
Oakden-Rayner says his team is still building its ideal dataset as it moves forward with its research, but has already improved predictive accuracy to 75 to 80 percent by including information such as age and sex.
“I think there is an upper limit on how accurate we can be, because there is always going to be an element of randomness,” he says, replying to how well AI will be able to pinpoint individual human mortality. “But we can be much more precise than we are now, taking more of each individual’s risks and strengths into account. A model combining all of those factors will hopefully account for more than 80 percent of the risk of near-term mortality.”
Others are even more optimistic about how quickly AI will transform this aspect of the medical field.
“Predicting remaining life span for people is actually one of the easiest applications of machine learning,” Dr. Ziad Obermeyer tells STAT News. “It requires a unique set of data where we have electronic records linked to information about when people died. But once we have that for enough people, you can come up with a very accurate predictor of someone’s likelihood of being alive one month out, for instance, or one year out.”
Obermeyer co-authored a paper last year with Dr. Ezekiel Emanuel in the New England Journal of Medicine called “Predicting the Future—Big Data, Machine Learning, and Clinical Medicine.”
AI still has much to learn
Experts like Obermeyer and Oakden-Rayner agree that advances will come swiftly, but there is still much work to be done.
For one thing, there’s plenty of data out there to mine, but it’s still a bit of a mess. For example, the images needed to train machines still need to be processed to make them useful. “Many groups around the world are now spending millions of dollars on this task, because this appears to be the major bottleneck for successful medical AI,” Oakden-Rayner says.
In the interview with STAT News, Obermeyer says data is fragmented across the health system, so linking information and creating comprehensive datasets will take time and money. He also notes that while there is much excitement about the use of AI in precision medicine, there’s been little activity in testing the algorithms in a clinical setting.
“It’s all very well and good to say you’ve got an algorithm that’s good at predicting. Now let’s actually port them over to the real world in a safe and responsible and ethical way and see what happens,” he says in STAT News.
AI is no accident
Preventing a fatal disease is one thing. But preventing fatal accidents with AI?
That’s what US and Indian researchers set out to do when they looked into the disturbing number of deaths that occur while people take selfies. The team identified 127 people who died while posing for a self-taken photo over a two-year period.
Based on a combination of text, images and location, the machine learned to identify a selfie as potentially dangerous or not. Running more than 3,000 annotated selfies collected on Twitter through the software resulted in 73 percent accuracy.
“The combination of image-based and location-based features resulted in the best accuracy,” they reported.
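For illustration, here is a minimal sketch of that kind of feature combination, assuming precomputed text, image, and location descriptors (the dimensions, labels, and model choice are hypothetical, not the authors’ code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 3000                              # roughly the size of the annotated Twitter set
text_feats = rng.random((n, 50))      # e.g. hashtag/keyword indicators (stand-ins)
image_feats = rng.random((n, 128))    # e.g. CNN image descriptors (stand-ins)
location_feats = rng.random((n, 10))  # e.g. proximity to water, cliffs, or tracks (stand-ins)
labels = rng.integers(0, 2, size=n)   # 1 = dangerous selfie, 0 = safe (stand-in labels)

X = np.hstack([text_feats, image_feats, location_feats])  # simple feature concatenation
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```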
What’s next? A sort of selfie early warning system. “One of the directions that we are working on is to have the camera give the user information about [whether or not a particular location is] dangerous, with some score attached to it,” says Ponnurangam Kumaraguru, a professor at Indraprastha Institute of Information Technology in Delhi, in a story by Digital Trends.
AI and the future
This discussion raises the question: Do we really want to know when we’re going to die?
According to at least one paper published in Psychological Review earlier this year, the answer is a resounding “no.” Nearly nine out of 10 people in Germany and Spain who were quizzed about whether they would want to know about their future, including death, said they would prefer to remain ignorant.
Obermeyer sees it differently, at least when it comes to people living with life-threatening illness.
“[O]ne thing that those patients really, really want and aren’t getting from doctors is objective predictions about how long they have to live,” he tells Marketplace public radio. “Doctors are very reluctant to answer those kinds of questions, partly because, you know, you don’t want to be wrong about something so important. But also partly because there’s a sense that patients don’t want to know. And in fact, that turns out not to be true when you actually ask the patients.”
Stock Media provided by photocosma / Pond5
