
#431301 Collective Intelligence Is the Root of ...

Many of us intuitively think of intelligence as an individual trait. As a society, we have a tendency to praise individual game-changers for accomplishments that would not be possible without their teams, often tens of thousands of people who work behind the scenes to make extraordinary things happen.
Matt Ridley, best-selling author of multiple books, including The Rational Optimist: How Prosperity Evolves, challenges this view. He argues that human achievement and intelligence are entirely “networking phenomena.” In other words, intelligence is collective and emergent as opposed to individual.
When asked what scientific concept would improve everybody’s cognitive toolkit, Ridley highlights collective intelligence: “It is by putting brains together through the division of labor—through trade and specialization—that human society stumbled upon a way to raise the living standards, carrying capacity, technological virtuosity, and knowledge base of the species.”
Ridley has spent a lifetime exploring human prosperity and the factors that contribute to it. In a conversation with Singularity Hub, he redefined how we perceive intelligence and human progress.
Raya Bidshahri: The common perspective seems to be that competition is what drives innovation and, consequently, human progress. Why do you think collaboration trumps competition when it comes to human progress?
Matt Ridley: There is a tendency to think that competition is an animal instinct that is natural and collaboration is a human instinct we have to learn. I think there is no evidence for that. Both are deeply rooted in us as a species. The evidence from evolutionary biology tells us that collaboration is just as important as competition. Yet, in the end, the Darwinian perspective is quite correct: it’s usually cooperation for the purpose of competition, wherein a given group tries to achieve something more effectively than another group. But the point is that the capacity to cooperate is very deep in our psyche.
RB: You write that “human achievement is entirely a networking phenomenon,” and we need to stop thinking about intelligence as an individual trait, and that instead we should look at what you refer to as collective intelligence. Why is that?
MR: The best way to think about it is that IQ doesn’t matter, because a hundred stupid people who are talking to each other will accomplish more than a hundred intelligent people who aren’t. It’s absolutely vital to see that everything from the manufacturing of a pencil to the manufacturing of a nuclear power station can’t be done by an individual human brain. You can’t possibly hold in your head all the knowledge you need to do these things. For the last 200,000 years we’ve been exchanging and specializing, which enables us to achieve much greater intelligence than we can as individuals.
RB: We often think of achievement and intelligence on individual terms. Why do you think it’s so counter-intuitive for us to think about collective intelligence?
MR: People are surprisingly myopic about the nature of intelligence. I think it goes back to a pre-human tendency to think in terms of individual stories and actors. For example, we love to read about the famous inventor or scientist who invented or discovered something. We never tell these stories as network stories. We tell them as individual hero stories.

“It’s absolutely vital to see that everything from the manufacturing of a pencil to the manufacturing of a nuclear power station can’t be done by an individual human brain.”

This idea of a brilliant hero who saves the world in the face of every obstacle seems to speak to tribal hunter-gatherer societies, where the alpha male leads and wins. But it doesn’t resonate with how human beings have structured modern society in the last 100,000 years or so. We modern-day humans haven’t internalized a way of thinking that incorporates this definition of distributed and collective intelligence.
RB: One of the books you’re best known for is The Rational Optimist. What does it mean to be a rational optimist?
MR: My optimism is rational because it’s not based on a feeling, it’s based on evidence. If you look at the data on human living standards over the last 200 years and compare it with the way that most people actually perceive our progress during that time, you’ll see an extraordinary gap. On the whole, people seem to think that things are getting worse, but things are actually getting better.
We’ve seen the most astonishing improvements in human living standards: we’ve brought the share of people living in extreme poverty down to 9 percent from about 70 percent when I was born. Average human life expectancy is increasing by five hours a day, child mortality has fallen by two thirds in half a century, and much more. These feats dwarf the things that are going wrong. Yet most people are quite pessimistic about the future despite the things we’ve achieved in the past.
RB: Where does this idea of collective intelligence fit in rational optimism?
MR: Underlying the idea of rational optimism was understanding what prosperity is, and why it happens to us and not to rabbits or rocks. Why are we the only species in the world that has concepts like GDP, growth rates, or living standards? My answer is that it comes back to this phenomenon of collective intelligence. The reason for a rise in living standards is innovation, and the cause of that innovation is our ability to collaborate.
The grand theme of human history is exchange of ideas, collaborating through specialization and the division of labor. Throughout history, it’s in places with a lot of open exchange and trade that you get a lot of innovation. And indeed, there are some extraordinary episodes in human history when societies get cut off from exchange: their innovation slows down and they start moving backwards. One example of this is Tasmania, which was isolated and lost a lot of the technologies it started off with.
RB: Lots of people like to point out that just because the world has been getting better doesn’t guarantee it will continue to do so. How do you respond to that line of argumentation?
MR: There is a quote by Thomas Babington Macaulay from 1830, where he was fed up with the pessimists of the time saying things will only get worse. He says, “On what principle is it that with nothing but improvement behind us, we are to expect nothing but deterioration before us?” And this was back in the 1830s, when in Britain and a few other parts of the world, we were only seeing the beginning of the rise of living standards. It’s perverse to argue that because things were getting better in the past, now they are about to get worse.

“I think it’s worth remembering that good news tends to be gradual, and bad news tends to be sudden. Hence, the good stuff is rarely going to make the news.”

Another thing to point out is that people have always said this. Every generation thought they were at the peak looking downhill. If you think about the opportunities technology is about to give us, whether it’s through blockchain, gene editing, or artificial intelligence, there is every reason to believe that 2017 is going to look like a time of absolute misery compared to what our children and grandchildren are going to experience.
RB: There seems to be a fair amount of mayhem in today’s world, and lots of valid problems to pay attention to in the news. What would you say to reassure our readers that we will push through it and continue to grow and improve as a species?
MR: I think it’s worth remembering that good news tends to be gradual, and bad news tends to be sudden. Hence, the good stuff is rarely going to make the news. It’s happening in an inexorable way, as a result of ordinary people exchanging, specializing, collaborating, and innovating, and it’s surprisingly hard to stop it.
Even if you look back to the 1940s, at the end of a world war, there was still a lot of innovation happening. In some ways it feels like we are going through a bad period now. I do worry a lot about the anti-enlightenment values that I see spreading in various parts of the world. But then I remind myself that people are working on innovative projects in the background, and these things are going to come through and push us forward.
Image Credit: Sahacha Nilkumhang / Shutterstock.com


Posted in Human Robots

#430743 Teaching Machines to Understand, and ...

We humans are swamped with text. It’s not just news and other timely information: Regular people are drowning in legal documents. The problem is so bad we mostly ignore it. Every time a person uses a store’s loyalty rewards card or connects to an online service, his or her activities are governed by the equivalent of hundreds of pages of legalese. Most people pay no attention to these massive documents, often labeled “terms of service,” “user agreement,” or “privacy policy.”
These are just part of a much wider societal problem of information overload. There is so much data stored—exabytes of it, more than the equivalent of all the words ever spoken in human history—that it’s humanly impossible to read and interpret everything. Often, we narrow down our pool of information by choosing particular topics or issues to pay attention to. But it’s important to actually know the meaning and contents of the legal documents that govern how our data is stored and who can see it.
As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it in terms regular people can understand.
Can computers understand text?
Computers store data as 0s and 1s—data that cannot be directly understood by humans. They interpret these data as instructions for displaying text, sound, images, or videos that are meaningful to people. But can computers actually understand language, not just display the words but grasp their meaning?
One way to find out is to ask computers to summarize their knowledge in ways that people can understand and find useful. It would be best if AI systems could process text quickly enough to help people make decisions as they are needed—for example, when you’re signing up for a new online service and are asked to agree with the site’s privacy policy.
What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts. Companies could use this capability, too, to analyze contracts or other lengthy documents.
To do this sort of work, we need to combine a range of AI technologies, including machine learning algorithms that take in large amounts of data and independently identify connections among them; knowledge representation techniques to express and interpret facts and rules about the world; speech recognition systems to convert spoken language to text; and human language comprehension programs that process the text and its context to determine what the user is telling the system to do.
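As a rough illustration of the information-extraction idea (this is a hypothetical sketch, not the researchers’ actual system, which combines trained models rather than keyword rules), a first pass over a policy sentence might classify it by its modal verb:

```python
import re

# Hypothetical sketch: label a policy sentence as a right, obligation,
# or prohibition based on its modal verb. Prohibition patterns are
# checked first so "may not" is not mistaken for the right "may".
PATTERNS = [
    ("prohibition", re.compile(r"\b(may not|must not|will not|shall not)\b", re.I)),
    ("obligation",  re.compile(r"\b(must|shall|will|agree to)\b", re.I)),
    ("right",       re.compile(r"\b(may|can|are free to)\b", re.I)),
]

def classify(sentence: str) -> str:
    for label, pattern in PATTERNS:
        if pattern.search(sentence):
            return label
    return "other"

print(classify("You may not reverse engineer the service."))   # prohibition
print(classify("You must provide accurate account details."))  # obligation
print(classify("You may close your account at any time."))     # right
```

A real system would replace these hand-written patterns with machine-learned classifiers, but the ordering trick—testing the most specific patterns first—mirrors a genuine difficulty in legal text, where a single word like “may” can signal either permission or, negated, prohibition.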
Examining privacy policies
A modern internet-enabled life more or less requires trusting for-profit companies with private information (like physical and email addresses, credit card numbers, and bank account details) and personal data (photos and videos, email messages, and location information).
These companies’ cloud-based systems typically keep multiple copies of users’ data as part of backup plans to prevent service outages. That means there are more potential targets—each data center must be securely protected both physically and electronically. Of course, internet companies recognize customers’ concerns and employ security teams to protect users’ data. But the specific and detailed legal obligations they undertake to do that are found in their impenetrable privacy policies. No regular human—and perhaps even no single attorney—can truly understand them.
In our study, we ask computers to summarize the terms and conditions regular users say they agree to when they click “Accept” or “Agree” buttons for online services. We downloaded the publicly available privacy policies of various internet companies, including Amazon AWS, Facebook, Google, HP, Oracle, PayPal, Salesforce, Snapchat, Twitter, and WhatsApp.
Summarizing meaning
Our software examines the text and uses information extraction techniques to identify key information specifying the legal rights, obligations and prohibitions identified in the document. It also uses linguistic analysis to identify whether each rule applies to the service provider, the user or a third-party entity, such as advertisers and marketing companies. Then it presents that information in clear, direct, human-readable statements.
For example, our system identified one aspect of Amazon’s privacy policy as telling a user, “You can choose not to provide certain information, but then you might not be able to take advantage of many of our features.” Another aspect of that policy was described as “We may also collect technical information to help us identify your device for fraud prevention and diagnostic purposes.”
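The step of deciding whom a rule applies to can also be sketched with a simple heuristic (again hypothetical; the study’s actual linguistic analysis is far more sophisticated than inspecting the first word):

```python
# Hypothetical sketch: guess which party a privacy-policy rule applies
# to from its grammatical subject. Policies conventionally use "we" for
# the service provider and "you" for the user.
FIRST_PERSON = {"we", "us", "our"}
SECOND_PERSON = {"you", "your", "user", "users"}

def rule_party(sentence: str) -> str:
    first_word = sentence.split()[0].lower().strip(",.")
    if first_word in FIRST_PERSON:
        return "service provider"
    if first_word in SECOND_PERSON:
        return "user"
    return "third party or unclassified"

print(rule_party("We may also collect technical information."))         # service provider
print(rule_party("You can choose not to provide certain information."))  # user
```

The hard cases are exactly the ones this toy version misses: passive constructions (“Information may be shared with advertisers”) and named third parties, which is where dependency parsing and entity recognition earn their keep.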

We also found, with the help of the summarizing system, that privacy policies often include rules for third parties—companies that aren’t the service provider or the user—that people might not even know are involved in data storage and retrieval.
The largest number of rules in privacy policies—43 percent—apply to the company providing the service. Just under a quarter of the rules—24 percent—create obligations for users and customers. The rest of the rules govern behavior by third-party services or corporate partners, or could not be categorized by our system.

The next time you click the “I Agree” button, be aware that you may be agreeing to share your data with other, hidden companies that will be analyzing it.
We are continuing to improve our ability to succinctly and accurately summarize complex privacy policy documents in ways that people can understand and use to assess the risks associated with using a service.

This article was originally published on The Conversation.


#428802 New AI Mental Health Tools Beat Human ...

About 20 percent of youth in the United States live with a mental health condition, according to the National Institute of Mental Health. That’s the bad news. The good news is that mental health professionals have smarter tools than ever before, with artificial intelligence-related technology coming to the forefront to help diagnose patients, often with much greater accuracy than humans. A new study published in the journal Suicide and Life-Threatening Behavior, for example, showed that…
