Tag Archives: systems
#431872 AI Uses Titan Supercomputer to Create ...
You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human, and in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. Such a system, loosely modeled on the human brain, is known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers had access to only 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case, a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL reduced the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to identify the optimal hyperparameters—the key variables—for tackling a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
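To make the idea concrete, here is a minimal, hypothetical sketch of evolutionary hyperparameter search: generate many candidate network configurations, score them, and mutate the best to form the next generation. It is a toy illustration of the general approach behind tools like MENNDL, not ORNL’s actual code; the search space, the fitness function, and the population settings are invented for the example.

```python
import random

# Hypothetical search space for the example; a real system would search over
# many more architectural choices.
SEARCH_SPACE = {
    "layers": [2, 3, 4, 5],
    "units": [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_candidate():
    """Draw one random hyperparameter configuration."""
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def mutate(candidate):
    """Copy a candidate and change one hyperparameter at random."""
    child = dict(candidate)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(fitness, population_size=20, generations=10):
    """fitness(candidate) -> validation score, higher is better.
    In a system like MENNDL each evaluation trains a network, ideally on its own GPU."""
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: max(1, population_size // 4)]  # keep the fittest quarter
        children = [mutate(random.choice(parents))
                    for _ in range(population_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```

In this sketch the expensive step is the fitness call; spreading those evaluations across thousands of GPUs is what lets a supercomputer test hundreds of thousands of candidate networks in a day.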
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a Silicon Valley startup that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver its customers the data science equivalent of a chess grandmaster (the company counts several such grandmasters among its employees and advisors). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify which features should be included in the computer model to make the most of the data, based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Much as with the human brain, it’s not always possible to understand how a machine, despite being designed by humans, arrives at its solutions. This lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
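As a rough illustration of the kind of derived feature Candel describes, the sketch below builds a zip-code-level churn rate as a candidate feature. This is not H2O’s Driverless AI code; the column names and data are invented for the example.

```python
import pandas as pd

def add_zip_churn_rate(df: pd.DataFrame) -> pd.DataFrame:
    """Add each customer's zip-code-level churn rate as a candidate feature.
    In practice this would be computed out-of-fold to avoid target leakage."""
    rate_by_zip = df.groupby("zip_code")["churned"].transform("mean")
    return df.assign(zip_churn_rate=rate_by_zip)

# Invented example data for illustration only.
customers = pd.DataFrame({
    "zip_code": ["94301", "94301", "10001", "10001"],
    "churned":  [1, 0, 0, 0],
})
print(add_zip_churn_rate(customers))
```

A system of the sort described would generate and test many such candidate features automatically, keeping only those that measurably improve the model and reporting each one back to the user in plain language.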
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating some of the work of designing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as rooting out the biases inherent in the data machine learning systems are trained on today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com
#431839 The Hidden Human Workforce Powering ...
The tech industry touts its ability to automate tasks and remove slow and expensive humans from the equation. But in the background, a lot of the legwork training machine learning systems, solving problems software can’t, and cleaning up its mistakes is still done by people.
This was highlighted recently when Expensify, which promises to automatically scan photos of receipts to extract data for expense reports, was criticized for sending customers’ personally identifiable receipts to workers on Amazon’s Mechanical Turk (MTurk) crowdsourcing platform.
The company uses text analysis software to read the receipts, but if the automated system falls down then the images are passed to a human for review. While entrusting this job to random workers on MTurk was maybe not so wise—and the company quickly stopped after the furor—the incident brought to light that this kind of human safety net behind AI-powered services is actually very common.
As Wired notes, similar services like Ibotta and Receipt Hog that collect receipt information for marketing purposes also use crowdsourced workers. In a similar vein, while most users might assume their Facebook newsfeed is governed by faceless algorithms, the company has been ramping up the number of human moderators it employs to catch objectionable content that slips through the net, as has YouTube. Twitter also has thousands of human overseers.
Humans aren’t always witting contributors either. The old text-based reCAPTCHA challenges Google used to distinguish humans from machines were actually doing double duty, helping the company digitize books by getting humans to interpret hard-to-read text.
“Every product that uses AI also uses people,” Jeffrey Bigham, a crowdsourcing expert at Carnegie Mellon University, told Wired. “I wouldn’t even say it’s a backstop so much as a core part of the process.”
Some companies are not shy about their use of crowdsourced workers. Startup Eloquent Labs wants to insert them between customer service chatbots and the human agents who step in when the machines fail. Often the AI has a rough idea of what a particular request means, and an MTurk worker can step in and classify it faster and more cheaply than a full service agent could.
Fashion retailer Gilt provides “pre-emptive shipping,” which uses data analytics to predict what people will buy to get products to them faster. The company uses MTurk workers to provide subjective critiques of clothing that feed into their models.
MTurk isn’t the only player. Companies like CloudFactory and CrowdFlower provide crowdsourced human manpower tailored to particular niches, and some companies prefer to maintain their own communities of workers. Unbabel uses an army of 50,000 humans to check and edit the translations its artificial intelligence system produces for customers.
Most of the time these human workers aren’t just filling in the gaps, they’re also helping to train the machine learning component of these companies’ services by providing new examples of how to solve problems. Other times humans aren’t used “in-the-loop” with AI systems, but to prepare data sets they can learn from by labeling images, text, or audio.
It’s even possible to use crowdsourced workers to carry out tasks typically tackled by machine learning, such as large-scale image analysis and forecasting.
Zooniverse gets citizen scientists to classify images of distant galaxies or videos of animals to help academics analyze large data sets too complex for computers. Almanis creates forecasts on everything from economics to politics with impressive accuracy by giving those who sign up to the website incentives for backing the correct answer to a question. Researchers have used MTurkers to power a chatbot, and there’s even a toolkit for building algorithms to control this human intelligence called TurKit.
So what does this prominent role for humans in AI services mean? Firstly, it suggests that many tools people assume are powered by AI may in fact be relying on humans. This has obvious privacy implications, as the Expensify story highlighted, but should also raise concerns about whether customers are really getting what they pay for.
One example of this is IBM’s Watson for Oncology, which is marketed as a data-driven AI system for providing cancer treatment recommendations. But an investigation by STAT highlighted that it’s actually largely driven by recommendations from a handful of (admittedly highly skilled) doctors at Memorial Sloan Kettering Cancer Center in New York.
Secondly, humans intervening in AI-run processes also suggests AI is still largely helpless without us, which is somewhat comforting to know among all the doomsday predictions of AI destroying jobs. At the same time, though, much of this crowdsourced work is monotonous, poorly paid, and isolating.
As machines trained by human workers get better at all kinds of tasks, this kind of piecemeal work filling in the increasingly small gaps in their capabilities may get more common. While tech companies often talk about AI augmenting human intelligence, for many it may actually end up being the other way around.
Image Credit: kentoh / Shutterstock.com
#431828 This Self-Driving AI Is Learning to ...
I don’t have to open the doors of AImotive’s white 2015 Prius to see that it’s not your average car. This particular Prius has been christened El Capitan, the name written below the rear doors, and two small cameras are mounted on top of the car. Bundles of wire snake out from them, as well as from the two additional cameras on the car’s hood and trunk.
Inside is where things really get interesting, though. The trunk holds a computer the size of a microwave, and a large monitor covers the passenger glove compartment and dashboard. The center console has three switches labeled “Allowed,” “Error,” and “Active.”
Budapest-based AImotive is working to provide scalable self-driving technology alongside big players like Waymo and Uber in the autonomous vehicle world. On a highway test ride with CEO Laszlo Kishonti near the company’s office in Mountain View, California, I got a glimpse of just how complex that world is.
Camera-Based Feedback System
AImotive’s approach to autonomous driving is a little different from that of some of the best-known systems. For starters, they’re using cameras, not lidar, as primary sensors. “The traffic system is visual and the cost of cameras is low,” Kishonti said. “A lidar can recognize when there are people near the car, but a camera can differentiate between, say, an elderly person and a child. Lidar’s resolution isn’t high enough to recognize the subtle differences of urban driving.”
Image Credit: AImotive
The company’s aiDrive software feeds data from the camera sensors into algorithms for hierarchical decision-making, grouped under four concurrent activities: recognition, location, motion, and control.
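To show how such a staged pipeline might fit together, here is a schematic sketch loosely following the four activities named above. It is an illustration under my own assumptions, not AImotive’s aiDrive code; every function name and value is invented, and the real system runs these stages concurrently on every frame rather than sequentially.

```python
def recognize(frame):
    """Detect lane markings, vehicles, pedestrians, and signs in a camera frame."""
    return {"lanes": [], "objects": []}  # placeholder detections

def locate(detections):
    """Estimate where the car sits within its lane and relative to detected objects."""
    return {"lane_offset_m": 0.0, "obstacles": detections["objects"]}

def plan_motion(state):
    """Choose a target speed and lane position given the estimated state."""
    return {"target_speed_mps": 29.0, "target_offset_m": 0.0}

def control(plan, state):
    """Turn the motion plan into steering and throttle commands."""
    steering = -0.5 * (state["lane_offset_m"] - plan["target_offset_m"])
    return {"steering": steering, "throttle": 0.2}

def step(frame):
    detections = recognize(frame)
    state = locate(detections)
    plan = plan_motion(state)
    return control(plan, state)
```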
Kishonti pointed out that lidar has already gotten more cost-efficient, and will only continue to do so.
“Ten years ago, lidar was best because there wasn’t enough processing power to do all the calculations by AI. But the cost of running AI is decreasing,” he said. “In our approach, computer vision and AI processing are key, and for safety, we’ll have fallback sensors like radar or lidar.”
aiDrive currently runs on Nvidia chips, which Kishonti noted were originally designed for graphics and are power-hungry rather than efficient for this kind of work. “We’re planning to substitute lower-cost, lower-energy chips in the next six months,” he said.
Testing in Virtual Reality
Waymo recently announced its fleet has now driven four million miles autonomously. That’s a lot of miles, and hard to compete with. But AImotive isn’t trying to compete, at least not by logging more real-life test miles. Instead, the company is doing 90 percent of its testing in virtual reality. “This is what truly differentiates us from competitors,” Kishonti said.
He outlined the three main benefits of VR testing: it can simulate scenarios too dangerous for the real world (such as hitting something), too costly (not every company has Waymo’s funds to run hundreds of cars on real roads), or too time-consuming (like waiting for rain, snow, or other weather conditions to occur naturally and repeatedly).
“Real-world traffic testing is very skewed towards the boring miles,” he said. “What we want to do is test all the cases that are hard to solve.”
On a screen that looked not unlike multiple games of Mario Kart, he showed me the simulator. Cartoon cars cruised down winding streets, outfitted with all the real-world surroundings: people, trees, signs, other cars. As I watched, a furry kangaroo suddenly hopped across one screen. “Volvo had an issue in Australia,” Kishonti explained. “A kangaroo’s movement is different than other animals since it hops instead of running.” Talk about cases that are hard to solve.
AImotive is currently testing around 1,000 simulated scenarios every night, with a steadily rising curve of successful tests. These scenarios are broken down into features, and the car’s behavior around those features is fed into a neural network. As the algorithms learn more features, the level of complexity the vehicles can handle goes up.
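A hypothetical sketch of what such a nightly scenario-regression loop could look like is below. The scenario and simulator interfaces are invented for illustration; this is not AImotive’s testing code.

```python
def run_nightly(scenarios, simulate):
    """simulate(scenario) -> (passed: bool, log: dict describing the car's behavior)."""
    results, new_training_cases = [], []
    for scenario in scenarios:
        passed, log = simulate(scenario)
        results.append((scenario["name"], scenario["features"], passed))
        if not passed:
            # Failed cases (like the bridge-plus-truck shadow described later
            # in the ride-along) become new examples for the next round of
            # training and testing.
            new_training_cases.append(log)
    pass_rate = sum(1 for _, _, p in results if p) / len(results)
    return pass_rate, new_training_cases
```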
On the Road
After Kishonti and his colleagues filled me in on the details of their product, it was time to test it out. A safety driver sat in the driver’s seat, a computer operator in the passenger seat, and Kishonti and I in back. The driver maintained full control of the car until we merged onto the highway. Then he flicked the “Allowed” switch, his copilot pressed the “Active” switch, and he took his hands off the wheel.
What happened next, you ask?
A few things. El Capitan was going exactly the speed limit—65 miles per hour—which meant all the other cars were passing us. When a car merged in front of us or cut us off, El Cap braked accordingly (if a little abruptly). The monitor displayed the feed from each of the car’s cameras, plus multiple data fields and a simulation where a blue line marked the center of the lane, measured by the cameras tracking the lane markings on either side.
I noticed El Cap wobbling out of our lane a bit, but it wasn’t until two things happened in a row that I felt a little nervous: first we went under a bridge, then a truck pulled up next to us, both bridge and truck casting a complete shadow over our car. At that point El Cap lost it, and we swerved haphazardly to the right, narrowly missing the truck’s rear wheels. The safety driver grabbed the steering wheel and took back control of the car.
What happened, Kishonti explained, was that the shadows made it hard for the car’s cameras to see the lane markings. This was a new scenario the algorithm hadn’t previously encountered. If we’d only gone under a bridge or only been next to the truck for a second, El Cap may not have had so much trouble, but the two events happening in a row really threw the car for a loop—almost literally.
“This is a new scenario we’ll add to our testing,” Kishonti said. He added that another way for the algorithm to handle this type of scenario, rather than basing its speed and positioning on the lane markings, is to mimic nearby cars. “The human eye would see that other cars are still moving at the same speed, even if it can’t see details of the road,” he said.
After another brief—and thankfully uneventful—hands-off cruise down the highway, the safety driver took over, exited the highway, and drove us back to the office.
Driving into the Future
I climbed out of the car feeling amazed not only that self-driving cars are possible, but that driving is possible at all. I squint when driving into a tunnel, swerve to avoid hitting a stray squirrel, and brake gradually at stop signs—all without consciously thinking to do so. On top of learning to steer, brake, and accelerate, self-driving software has to incorporate our brains’ and bodies’ unconscious (but crucial) reactions, like our pupils dilating to let in more light so we can see in a tunnel.
Despite all the progress of machine learning, artificial intelligence, and computing power, I have a wholly renewed appreciation for the thing that’s been in charge of driving up till now: the human brain.
Kishonti seemed to feel similarly. “I don’t think autonomous vehicles in the near future will be better than the best drivers,” he said. “But they’ll be better than the average driver. What we want to achieve is safe, good-quality driving for everyone, with scalability.”
AImotive is currently working with American tech firms and with car and truck manufacturers in Europe, China, and Japan.
Image Credit: Alex Oakenman / Shutterstock.com
#431599 8 Ways AI Will Transform Our Cities by ...
How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.
The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, technical fellow and a managing director at Microsoft Research.
Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, composed of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame its report around the impact AI will have on the average American city. Here’s how the panel thinks it will affect eight key domains of city life in the next fifteen years.
1. Transportation
The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.
Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition towards similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live further from home, which could combine with reduced need for parking to drastically change the face of modern cities.
Mountains of data from growing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.
Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public’s first experience with physically embodied AI systems and will strongly influence public perception.
2. Home and Service Robots
Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots’ on-board computing capacity.
Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft’s Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.
But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.
3. Healthcare
AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.
If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.
At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.
Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.
4. Education
The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.
AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.
Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.
5. Low-Resource Communities
In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.
Investment in these areas is lacking, though, so how quickly these capabilities will appear is uncertain. There are fears that nominally value-neutral machine learning could inadvertently discriminate by correlating outcomes with race or gender, or with surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.
6. Public Safety and Security
By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.
Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.
7. Employment and Workplace
The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.
AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it’s hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.
These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.
8. Entertainment
Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics, and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and those systems will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.
Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.
But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. Democratizing the production of high-quality entertainment in this way will make it nearly impossible to predict how humanity’s highly fluid tastes in entertainment will develop.
Image Credit: Asgord / Shutterstock.com