Tag Archives: videos

#430743 Teaching Machines to Understand, and ...

We humans are swamped with text. It’s not just news and other timely information: Regular people are drowning in legal documents. The problem is so bad we mostly ignore it. Every time a person uses a store’s loyalty rewards card or connects to an online service, his or her activities are governed by the equivalent of hundreds of pages of legalese. Most people pay no attention to these massive documents, often labeled “terms of service,” “user agreement,” or “privacy policy.”
These are just part of a much wider societal problem of information overload. There is so much data stored—exabytes of it, roughly as much as all the words people have ever spoken in human history—that it’s humanly impossible to read and interpret everything. Often, we narrow down our pool of information by choosing particular topics or issues to pay attention to. But it’s important to actually know the meaning and contents of the legal documents that govern how our data is stored and who can see it.
As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it in terms regular people can understand.
Can computers understand text?
Computers store data as 0s and 1s—data that cannot be directly understood by humans. They interpret these data as instructions for displaying text, sound, images, or videos that are meaningful to people. But can computers actually understand language, not only displaying the words but also grasping their meaning?
One way to find out is to ask computers to summarize their knowledge in ways that people can understand and find useful. It would be best if AI systems could process text quickly enough to help people make decisions as they are needed—for example, when you’re signing up for a new online service and are asked to agree with the site’s privacy policy.
What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts. Companies could use this capability, too, to analyze contracts or other lengthy documents.
To do this sort of work, we need to combine a range of AI technologies, including machine learning algorithms that take in large amounts of data and independently identify connections among them; knowledge representation techniques to express and interpret facts and rules about the world; speech recognition systems to convert spoken language to text; and human language comprehension programs that process the text and its context to determine what the user is telling the system to do.
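As a rough illustration of how such components might fit together, here is a minimal Python sketch of a hypothetical assistant that chains speech recognition, language understanding, and policy analysis. Every name and interface in it is an assumption made for the sketch; it is not the authors’ system or any particular library.

```python
from typing import Protocol

# Illustrative interfaces only: these names and signatures are assumptions
# for the sketch, not a real library or the authors' actual architecture.

class SpeechRecognizer(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class LanguageUnderstander(Protocol):
    def parse_request(self, utterance: str) -> dict: ...

class PolicyAnalyzer(Protocol):
    def extract_rules(self, policy_text: str) -> list: ...


def answer_policy_question(audio: bytes, policy_text: str,
                           asr: SpeechRecognizer,
                           nlu: LanguageUnderstander,
                           analyzer: PolicyAnalyzer) -> list:
    """Chain the components: spoken question -> text -> topic -> matching policy rules."""
    utterance = asr.transcribe(audio)            # e.g. "is my email address shared?"
    request = nlu.parse_request(utterance)       # e.g. {"topic": "email address"}
    rules = analyzer.extract_rules(policy_text)  # structured rules pulled from the policy
    topic = request.get("topic", "").lower()
    return [rule for rule in rules
            if topic and topic in str(rule.get("text", "")).lower()]
```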
Examining privacy policies
A modern internet-enabled life more or less requires trusting for-profit companies with private information (like physical and email addresses, credit card numbers and bank account details) and personal data (photos and videos, email messages and location information).
These companies’ cloud-based systems typically keep multiple copies of users’ data as part of backup plans to prevent service outages. That means there are more potential targets—each data center must be securely protected both physically and electronically. Of course, internet companies recognize customers’ concerns and employ security teams to protect users’ data. But the specific and detailed legal obligations they undertake to do that are found in their impenetrable privacy policies. No regular human—and perhaps even no single attorney—can truly understand them.
In our study, we ask computers to summarize the terms and conditions regular users say they agree to when they click “Accept” or “Agree” buttons for online services. We downloaded the publicly available privacy policies of various internet companies, including Amazon AWS, Facebook, Google, HP, Oracle, PayPal, Salesforce, Snapchat, Twitter, and WhatsApp.
Summarizing meaning
Our software examines the text and uses information extraction techniques to identify key information specifying the legal rights, obligations and prohibitions identified in the document. It also uses linguistic analysis to identify whether each rule applies to the service provider, the user or a third-party entity, such as advertisers and marketing companies. Then it presents that information in clear, direct, human-readable statements.
For example, our system identified one aspect of Amazon’s privacy policy as telling a user, “You can choose not to provide certain information, but then you might not be able to take advantage of many of our features.” Another aspect of that policy was described as “We may also collect technical information to help us identify your device for fraud prevention and diagnostic purposes.”
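To give a very rough sense of the kind of rule extraction and party classification described above, here is a minimal Python sketch that labels policy sentences using crude keyword and pronoun cues. It is a toy stand-in, not the system described in the article: the cue lists, labels, and demo sentences (echoing the Amazon excerpts above) are illustrative assumptions, and a real system relies on far richer information extraction and linguistic analysis.

```python
import re

# Toy cue lists; a real system uses proper information extraction and
# linguistic analysis rather than keyword matching.
MODALITY_CUES = {
    "prohibition": ["may not", "must not", "shall not", "will not", "cannot"],
    "obligation":  ["must", "shall", "is required to", "are required to"],
    "right":       ["may", "can", "is permitted to", "are permitted to"],
}

PARTY_CUES = {
    "third party": ["third party", "third parties", "advertisers", "partners"],
    "provider":    ["we", "us", "our"],
    "user":        ["you", "your"],
}


def earliest_label(text: str, cue_map: dict) -> str:
    """Return the label whose cue appears earliest in the sentence
    (a crude approximation of the grammatical subject)."""
    best_label, best_pos = "unclassified", len(text) + 1
    for label, cues in cue_map.items():
        for cue in cues:
            match = re.search(r"\b" + re.escape(cue) + r"\b", text)
            if match and match.start() < best_pos:
                best_label, best_pos = label, match.start()
    return best_label


def first_label(text: str, cue_map: dict) -> str:
    """Return the first label (in dict order, most specific cues first)
    with any cue present in the sentence."""
    for label, cues in cue_map.items():
        if any(re.search(r"\b" + re.escape(cue) + r"\b", text) for cue in cues):
            return label
    return "unclassified"


def classify_sentence(sentence: str) -> dict:
    """Label one policy sentence with a rule type and the party it concerns."""
    text = sentence.lower()
    return {
        "sentence": sentence,
        "rule_type": first_label(text, MODALITY_CUES),
        "party": earliest_label(text, PARTY_CUES),
    }


if __name__ == "__main__":
    examples = [
        "You can choose not to provide certain information, but then you "
        "might not be able to take advantage of many of our features.",
        "We may also collect technical information to help us identify your "
        "device for fraud prevention and diagnostic purposes.",
    ]
    for result in map(classify_sentence, examples):
        print(f"{result['party']:>11} | {result['rule_type']:>11} | {result['sentence']}")
```

Run on the two example sentences, this sketch labels the first as a user right and the second as a provider right, which hints at how structured statements like those above could be produced at scale.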

We also found, with the help of the summarizing system, that privacy policies often include rules for third parties—companies that aren’t the service provider or the user—that people might not even know are involved in data storage and retrieval.
The largest number of rules in privacy policies—43 percent—apply to the company providing the service. Just under a quarter of the rules—24 percent—create obligations for users and customers. The rest of the rules govern behavior by third-party services or corporate partners, or could not be categorized by our system.

The next time you click the “I Agree” button, be aware that you may be agreeing to share your data with other hidden companies that will be analyzing it.
We are continuing to improve our ability to succinctly and accurately summarize complex privacy policy documents in ways that people can understand and use to assess the risks associated with using a service.

This article was originally published on The Conversation. Read the original article.

Posted in Human Robots

#430686 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind’s AI Is Teaching Itself Parkour, and the Results Are Adorable
James Vincent | The Verge
“The research explores how reinforcement learning (or RL) can be used to teach a computer to navigate unfamiliar and complex environments. It’s the sort of fundamental AI research that we’re now testing in virtual worlds, but that will one day help program robots that can navigate the stairs in your house.”
VIRTUAL REALITY
Now You Can Broadcast Facebook Live Videos From Virtual Reality
Daniel Terdiman | Fast Company
“The idea is fairly simple. Spaces allows up to four people—each of whom must have an Oculus Rift VR headset—to hang out together in VR. Together, they can talk, chat, draw, create new objects, watch 360-degree videos, share photos, and much more. And now, they can live-broadcast everything they do in Spaces, much the same way that any Facebook user can produce live video of real life and share it with the world.”
ROBOTICS
I Watched Two Robots Chat Together on Stage at a Tech Event
Jon Russell | TechCrunch
“The robots in question are Sophia and Han, and they belong to Hanson Robotics, a Hong Kong-based company that is developing and deploying artificial intelligence in humanoids. The duo took to the stage at Rise in Hong Kong with Hanson Robotics’ Chief Scientist Ben Goertzel directing the banter. The conversation, which was partially scripted, wasn’t as slick as the human-to-human panels at the show, but it was certainly a sight to behold for the packed audience.”
BIOTECH
Scientists Used CRISPR to Put a GIF Inside a Living Organism’s DNA
Emily Mullin | MIT Technology Review
“They delivered the GIF into the living bacteria in the form of five frames: images of a galloping horse and rider, taken by English photographer Eadweard Muybridge…The researchers were then able to retrieve the data by sequencing the bacterial DNA. They reconstructed the movie with 90 percent accuracy by reading the pixel nucleotide code.”
DIGITAL MEDIA
AI Creates Fake Obama
Charles Q. Choi | IEEE Spectrum
“In the new study, the neural net learned what mouth shapes were linked to various sounds. The researchers took audio clips and dubbed them over the original sound files of a video. They next took mouth shapes that matched the new audio clips and grafted and blended them onto the video. Essentially, the researchers synthesized videos where Obama lip-synched words he said up to decades beforehand.”
Stock Media provided by adam121 / Pond5

Posted in Human Robots

#430667 Welcome to a More Discoverable ...

This week we’ve rolled out our first major round of improvements to Singularity Hub since our ground-up redesign last December. If we did it right, you’ll find it much easier to discover the technological goodies you come here for, as well as other Singularity University offerings you might be interested in.
The first and most major change is in the way Hub’s navigation is structured.
The previous categories in our header (Tech, Future, Health, Science) have been replaced by a single page, Topics, which profiles the most popular tech topics across our site. The featured topics in this menu will be updated regularly based on article performance, so you can keep up with what’s trending in AI, biotech, neuroscience, robotics, or whatever is making the biggest splash most recently.
Rolling our hottest topic category tags into one header dropdown allowed us to create greater focus on some of our newest and best offerings.
Our header now prominently features In Focus, which includes articles on how leaders can make the most of today’s accelerating pace of change by learning to think like futurists, innovators, technologists, and humanitarians. We’ve always been technological optimists, and we want to make it easy for leaders to find the stories that help make hopeful problem-solvers of us all.
We’ve added a section for Experts, which features leaders in the Singularity University community and showcases their thought leadership including interviews and books. In Events, we highlight Singularity University’s global library of local happenings and summits.
Lastly, we’re excited that our growing original video efforts—from our Ray Kurzweil series to our weekly tech news roundup posts—now live under a central Videos section on Hub. This also gives us a place to highlight our favorite video posts from around the web, including the sci-fi shorts we love so much.
Cruising through the rest of Hub, particularly our homepage, you’ll find a much greater variety of content options, including new stories, top stories, event coverage, and videos. In short, it’s everything a homepage should be. On posts, we’ve tried to keep things as clean as possible, and we put a lot of hours into streamlining our content tagging structure, making it much easier for you to click through category tags into other stories you might like.

Here’s what @singularityhub looked like 2 years ago, 2 weeks ago, & today. Check it out: https://t.co/7cmlTJwc7d pic.twitter.com/jDayIEIFNv
— Singularity Hub (@singularityhub) July 13, 2017

You’ll also see greater visibility into Singularity University events, along with clearer ways to keep up with Hub and SU both, from simple email newsletter signups to callouts for the SingularityU Hub iOS app and events like SU’s Experts on Air series.
We hope you enjoy the ever-evolving, ever-improving Singularity Hub, and we’d love to hear your feedback. Feel free to tweet us, and let us know your thoughts. You can also pitch us or email us. And as always, thank you for your support.

Posted in Human Robots

#430588 Video Friday: DARPA’s LUKE Arm, ...

Your weekly selection of awesome robot videos

Posted in Human Robots

#428494 Video Friday: Robot Dance Contest, 500 ...

Your weekly selection of awesome robot videos

Posted in Human Robots