Tag Archives: Doomsday

#433303 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Artificial Intelligence Is Now a Pentagon Priority. Will Silicon Valley Help?
Cade Metz | The New York Times
“The consultants and planners who try to forecast threats think AI could be the next technological game changer in warfare. The Chinese government has raised the stakes with its own national strategy. Academic and commercial organizations in China have been open about working closely with the military on AI projects.”

BLOCKCHAIN
The World’s Oldest Blockchain Has Been Hiding in the New York Times Since 1995
Daniel Oberhaus | Motherboard
“Instead of posting customer hashes to a public digital ledger, Surety creates a unique hash value of all the new seals added to the database each week and publishes this hash value in the New York Times. The hash is placed in a small ad in the Times classified section under the heading ‘Notices & Lost and Found’ and has appeared once a week since 1995.”
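For readers curious about the mechanism, here is a minimal sketch of that kind of linked timestamping in Python. The SHA-256 choice, the field names, and the weekly batching are illustrative assumptions, not Surety's actual implementation.

```python
import hashlib

def weekly_hash(previous_published_hash: str, new_seals: list[str]) -> str:
    """Chain last week's published hash together with this week's document seals.

    Publishing the result somewhere widely archived (such as a newspaper ad)
    makes it practically impossible to back-date or alter earlier seals
    without the discrepancy becoming detectable.
    """
    h = hashlib.sha256()
    h.update(previous_published_hash.encode())
    for seal in sorted(new_seals):  # deterministic ordering
        h.update(hashlib.sha256(seal.encode()).digest())
    return h.hexdigest()

# Each week's published value chains back to the one before it.
week_1 = weekly_hash("genesis", ["seal-for-doc-a", "seal-for-doc-b"])
week_2 = weekly_hash(week_1, ["seal-for-doc-c"])
print(week_2)
```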

FUTURE OF WORK
Y Combinator Learns Basic Income Is Not So Basic After All
Nitasha Tiku | Wired
“In January 2016, technology incubator Y Combinator announced plans to fund a long-term study on giving people a guaranteed monthly income, in part to offset fears about jobs being destroyed by automation. …Now, nearly three years later, YC Research, the incubator’s nonprofit arm, says it plans to begin the study next year, after a pilot project in Oakland took much longer than expected.”

ROBOTICS
Robotics-as-a-Service Is on the Way and inVia Robotics Is Leading the Charge
Jonathan Shieber | TechCrunch
“The team at inVia Robotics didn’t start out looking to build a business that would create a new kind of model for selling robotics to the masses, but that may be exactly what they’ve done.”

FUTURE
How to Survive Doomsday
Michael Hahn and Daniel Wolf Savin | Nautilus
“Let’s be optimistic and assume that we manage to avoid a self-inflicted nuclear holocaust, an extinction-sized asteroid, or deadly irradiation from a nearby supernova. That leaves about 6 billion years until the sun turns into a red giant, swelling to the orbit of Earth and melting our planet. Sounds like a lot of time. But don’t get too relaxed. Doomsday is coming a lot sooner than that.”

SPACE
NASA’s New Space Taxis
Mark Harris | Air & Space
“With the first launch in its Commercial Crew Program, NASA is trying something new: opening space exploration to private corporations and astronauts. The 21st century space race begins not as a contest between global superpowers but as a competition between companies.”

Image Credit: Jeremy Thomas / Unsplash

#432311 Everyone Is Talking About AI—But Do ...

In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, “AI” has become a hot buzzword. But does it even exist yet?

At the World Economic Forum, Dr. Kai-Fu Lee, a Taiwanese venture capitalist and the founding president of Google China, remarked, “I think it’s tempting for every entrepreneur to package his or her company as an AI company, and it’s tempting for every VC to want to say ‘I’m an AI investor.’” He then observed that some of these AI bubbles could burst by the end of 2018, referring specifically to “the startups that made up a story that isn’t fulfillable, and fooled VCs into investing because they don’t know better.”

However, Dr. Lee firmly believes AI will continue to progress and will take many jobs away from workers. So, what is the difference between legitimate AI, with all of its pros and cons, and a made-up story?

If you parse through just a few stories that are allegedly about AI, you’ll quickly discover significant variation in how people define it, with a blurred line between emulated intelligence and machine learning applications.

I spoke to experts in the field of AI to try to find consensus, but the very question opens up more questions. For instance, when is it important to be accurate to a term’s original definition, and when does that commitment to accuracy amount to the splitting of hairs? It isn’t obvious, and hype is oftentimes the enemy of nuance. Additionally, there is now a vested interest in that hype—$12 billion, to be precise.

This conversation is also relevant because world-renowned thought leaders have been publicly debating the dangers posed by AI. Facebook CEO Mark Zuckerberg suggested that naysayers who attempt to “drum up these doomsday scenarios” are being negative and irresponsible. On Twitter, business magnate and OpenAI co-founder Elon Musk countered that Zuckerberg’s understanding of the subject is limited. In February, Musk had a similar exchange with Harvard professor Steven Pinker, tweeting that Pinker doesn’t understand the difference between functional/narrow AI and general AI.

Given the fears surrounding this technology, it’s important for the public to clearly understand the distinctions between different levels of AI so that they can realistically assess the potential threats and benefits.

As Smart As a Human?
Erik Cambria, an expert in the field of natural language processing, told me, “Nobody is doing AI today and everybody is saying that they do AI because it’s a cool and sexy buzzword. It was the same with ‘big data’ a few years ago.”

Cambria mentioned that AI, as a term, originally referenced the emulation of human intelligence. “And there is nothing today that is even barely as intelligent as the most stupid human being on Earth. So, in a strict sense, no one is doing AI yet, for the simple fact that we don’t know how the human brain works,” he said.

He added that the term “AI” is often used in reference to powerful tools for data classification. These tools are impressive, but they operate on an entirely different plane from human cognition. Additionally, Cambria has noticed people claiming that neural networks are part of the new wave of AI. This is bizarre to him because that technology already existed fifty years ago.

What has changed is that technologists no longer need to perform feature extraction by hand, and they now have access to far greater computing power. These advancements are welcome, but it is perhaps dishonest to suggest that machines have emulated the intricacies of our cognitive processes.
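As a rough illustration of that shift, here is a hedged sketch (using scikit-learn purely as an example toolkit) contrasting a classifier fed two hand-picked features with a small neural network that is handed the raw pixels and left to find its own representation.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Older workflow: a human decides which features matter (here, two crude
# summaries of each 8x8 digit image) and hands them to a simple model.
def hand_crafted(images):
    return np.column_stack([images.mean(axis=1), (images > 8).sum(axis=1)])

simple = LogisticRegression(max_iter=1000).fit(hand_crafted(X_train), y_train)

# Newer workflow: a neural network receives the raw pixels and learns its
# own internal features during training.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("hand-crafted features:", simple.score(hand_crafted(X_test), y_test))
print("learned features:     ", net.score(X_test, y_test))
```

Neither model is "intelligent" in Cambria's sense; the second simply automates a step that used to require human judgment.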

“Companies are just looking at tricks to create a behavior that looks like intelligence but that is not real intelligence, it’s just a mirror of intelligence. These are expert systems that are maybe very good in a specific domain, but very stupid in other domains,” he said.

This mimicry of intelligence has inspired the public imagination. Domain-specific systems have delivered value in a wide range of industries. But those benefits have not lifted the cloud of confusion.

Assisted, Augmented, or Autonomous
When it comes to matters of scientific integrity, the issue of accurate definitions isn’t a peripheral matter. In a 1974 commencement address at the California Institute of Technology, Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” In that same speech, Feynman also said, “You should not fool the layman when you’re talking as a scientist.” He opined that scientists should bend over backwards to show how they could be wrong. “If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.”

In the case of AI, this might mean that professional scientists have an obligation to clearly state that they are developing extremely powerful, controversial, profitable, and even dangerous tools, which do not constitute intelligence in any familiar or comprehensive sense.

The term “AI” may have become overhyped and confused, but there are already some efforts underway to provide clarity. A recent PwC report drew a distinction between “assisted intelligence,” “augmented intelligence,” and “autonomous intelligence.” Assisted intelligence is demonstrated by the GPS navigation programs prevalent in cars today. Augmented intelligence “enables people and organizations to do things they couldn’t otherwise do.” And autonomous intelligence “establishes machines that act on their own,” such as autonomous vehicles.

Roman Yampolskiy is an AI safety researcher who wrote the book “Artificial Superintelligence: A Futuristic Approach.” I asked him whether the broad and differing meanings might present difficulties for legislators attempting to regulate AI.

Yampolskiy explained, “Intelligence (artificial or natural) comes on a continuum and so do potential problems with such technology. We typically refer to AI which one day will have the full spectrum of human capabilities as artificial general intelligence (AGI) to avoid some confusion. Beyond that point it becomes superintelligence. What we have today and what is frequently used in business is narrow AI. Regulating anything is hard, technology is no exception. The problem is not with terminology but with complexity of such systems even at the current level.”

When asked if people should fear AI systems, Dr. Yampolskiy commented, “Since capability comes on a continuum, so do problems associated with each level of capability.” He mentioned that accidents are already reported with AI-enabled products, and as the technology advances further, the impact could spread beyond privacy concerns or technological unemployment. These concerns about the real-world effects of AI will likely take precedence over dictionary-minded quibbles. However, the issue is also about honesty versus deception.

Is This Buzzword All Buzzed Out?
Finally, I directed my questions towards a company that is actively marketing an “AI Virtual Assistant.” Carl Landers, the CMO at Conversica, acknowledged that there are a multitude of explanations for what AI is and isn’t.

He said, “My definition of AI is technology innovation that helps solve a business problem. I’m really not interested in talking about the theoretical ‘can we get machines to think like humans?’ It’s a nice conversation, but I’m trying to solve a practical business problem.”

I asked him if AI is a buzzword that inspires publicity and attracts clients. According to Landers, this was certainly true three years ago, but those effects have already started to wane. Many companies now claim to have AI in their products, so it’s less of a differentiator. However, there is still a specific intention behind the word. Landers hopes to convey that previously impossible things are now possible. “There’s something new here that you haven’t seen before, that you haven’t heard of before,” he said.

According to Brian Decker, founder of Encom Lab, machine learning algorithms only work to satisfy their preexisting programming, not out of an interior drive for better understanding. He therefore views the debate over what counts as AI as an entirely semantic argument.

Decker stated, “A marketing exec will claim a photodiode controlled porch light has AI because it ‘knows when it is dark outside,’ while a good hardware engineer will point out that not one bit in a register in the entire history of computing has ever changed unless directed to do so according to the logic of preexisting programming.”
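Decker's porch light can be written out in a couple of lines. The sensor reading and cutoff below are made-up values; the point is that the advertised "intelligence" is a fixed threshold rule chosen by a human engineer.

```python
DARKNESS_THRESHOLD_LUX = 40.0  # arbitrary illustrative cutoff

def porch_light_on(photodiode_reading_lux: float) -> bool:
    """The entire 'AI': a hard-coded comparison, executed exactly as programmed."""
    return photodiode_reading_lux < DARKNESS_THRESHOLD_LUX

print(porch_light_on(12.0))   # True: the light "knows" it is dark outside
print(porch_light_on(300.0))  # False
```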

Although it’s important for everyone to be on the same page regarding specifics and underlying meaning, AI-powered products are already powering past these debates by creating immediate value for humans. And ultimately, humans care more about value than they do about semantic distinctions. In an interview with Quartz, Kai-Fu Lee revealed that algorithmic trading systems have already given him an 8X return over his private banking investments. “I don’t trade with humans anymore,” he said.

Image Credit: vrender / Shutterstock.com

#431839 The Hidden Human Workforce Powering ...

The tech industry touts its ability to automate tasks and remove slow and expensive humans from the equation. But in the background, a lot of the legwork of training machine learning systems, solving problems software can't handle, and cleaning up its mistakes is still done by people.
This was highlighted recently when Expensify, which promises to automatically scan photos of receipts to extract data for expense reports, was criticized for sending customers’ personally identifiable receipts to workers on Amazon’s Mechanical Turk (MTurk) crowdsourcing platform.
The company uses text analysis software to read the receipts, but if the automated system falls down then the images are passed to a human for review. While entrusting this job to random workers on MTurk was maybe not so wise—and the company quickly stopped after the furor—the incident brought to light that this kind of human safety net behind AI-powered services is actually very common.
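The pattern described here is easy to express in code. Below is a hedged sketch of a confidence-threshold fallback; the function names, threshold, and human-review step are hypothetical stand-ins, not Expensify's actual pipeline.

```python
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.90  # illustrative cutoff for trusting the automated reading

@dataclass
class ExtractionResult:
    amount: Optional[float]
    confidence: float

def automated_extract(receipt_image: bytes) -> ExtractionResult:
    # Placeholder for the OCR / text-analysis step.
    return ExtractionResult(amount=42.17, confidence=0.55)

def send_to_human_review(receipt_image: bytes) -> float:
    # Placeholder for routing the image to a human worker,
    # e.g., an internal reviewer or a crowdsourcing queue.
    return 42.17

def extract_amount(receipt_image: bytes) -> float:
    """Trust the automated system when it is confident; otherwise fall back to a person."""
    result = automated_extract(receipt_image)
    if result.amount is not None and result.confidence >= CONFIDENCE_THRESHOLD:
        return result.amount
    return send_to_human_review(receipt_image)

print(extract_amount(b"fake image bytes"))
```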
As Wired notes, similar services like Ibotta and Receipt Hog that collect receipt information for marketing purposes also use crowdsourced workers. In a similar vein, while most users might assume their Facebook newsfeed is governed by faceless algorithms, the company has been ramping up the number of human moderators it employs to catch objectionable content that slips through the net, as has YouTube. Twitter also has thousands of human overseers.
Humans aren't always witting contributors, either. The old text-based reCAPTCHA challenges Google once used to distinguish humans from machines were simultaneously helping the company digitize books by getting humans to interpret hard-to-read text.
“Every product that uses AI also uses people,” Jeffrey Bigham, a crowdsourcing expert at Carnegie Mellon University, told Wired. “I wouldn’t even say it’s a backstop so much as a core part of the process.”
Some companies are not shy about their use of crowdsourced workers. Startup Eloquent Labs wants to insert them between customer service chatbots and the human agents who step in when the machines fail. Often the AI isn't sure what a particular message means, and an MTurk worker can step in and classify it faster and more cheaply than a full service agent.
Fashion retailer Gilt provides “pre-emptive shipping,” which uses data analytics to predict what people will buy to get products to them faster. The company uses MTurk workers to provide subjective critiques of clothing that feed into their models.
MTurk isn't the only player. Companies like CloudFactory and CrowdFlower provide crowdsourced human manpower tailored to particular niches, and some companies prefer to maintain their own communities of workers. Unbabel uses an army of 50,000 humans to check and edit the translations its artificial intelligence system produces for customers.
Most of the time these human workers aren’t just filling in the gaps, they’re also helping to train the machine learning component of these companies’ services by providing new examples of how to solve problems. Other times humans aren’t used “in-the-loop” with AI systems, but to prepare data sets they can learn from by labeling images, text, or audio.
It’s even possible to use crowdsourced workers to carry out tasks typically tackled by machine learning, such as large-scale image analysis and forecasting.
Zooniverse gets citizen scientists to classify images of distant galaxies or videos of animals to help academics analyze large data sets too complex for computers. Almanis creates forecasts on everything from economics to politics with impressive accuracy by giving those who sign up to the website incentives for backing the correct answer to a question. Researchers have used MTurkers to power a chatbot, and there's even a toolkit called TurKit for building algorithms that direct this human intelligence.
So what does this prominent role for humans in AI services mean? Firstly, it suggests that many tools people assume are powered by AI may in fact be relying on humans. This has obvious privacy implications, as the Expensify story highlighted, but should also raise concerns about whether customers are really getting what they pay for.
One example of this is IBM's Watson for Oncology, which is marketed as a data-driven AI system for providing cancer treatment recommendations. But an investigation by STAT highlighted that it's actually largely driven by recommendations from a handful of (admittedly highly skilled) doctors at Memorial Sloan Kettering Cancer Center in New York.
Secondly, humans intervening in AI-run processes also suggests AI is still largely helpless without us, which is somewhat comforting to know among all the doomsday predictions of AI destroying jobs. At the same time, though, much of this crowdsourced work is monotonous, poorly paid, and isolating.
As machines trained by human workers get better at all kinds of tasks, this kind of piecemeal work filling in the increasingly small gaps in their capabilities may get more common. While tech companies often talk about AI augmenting human intelligence, for many it may actually end up being the other way around.
Image Credit: kentoh / Shutterstock.com

#428838 Terminator and SKYNET are realities ...

Russian humanoid robot F.E.D.O.R. (Final Experimental Demonstration Object Research) is already here and – apart from destroying mankind and taking over the world – it can shake your hand and save you from the rubble (if it chooses), and apparently …
