#431839 The Hidden Human Workforce Powering ...
The tech industry touts its ability to automate tasks and remove slow and expensive humans from the equation. But in the background, much of the legwork of training machine learning systems, solving problems software can’t, and cleaning up its mistakes is still done by people.
This was highlighted recently when Expensify, which promises to automatically scan photos of receipts to extract data for expense reports, was criticized for sending customers’ personally identifiable receipts to workers on Amazon’s Mechanical Turk (MTurk) crowdsourcing platform.
The company uses text analysis software to read the receipts, but if the automated system falls down then the images are passed to a human for review. While entrusting this job to random workers on MTurk was maybe not so wise—and the company quickly stopped after the furor—the incident brought to light that this kind of human safety net behind AI-powered services is actually very common.
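That fallback is easy to picture in code. The sketch below is a generic, hypothetical illustration of the pattern rather than Expensify’s actual pipeline; ocr_extract and send_to_human_review are invented placeholders for whatever text-analysis engine and crowdsourcing queue a given service uses.

```python
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff for trusting the automated result


@dataclass
class ReceiptData:
    merchant: Optional[str]
    total: Optional[float]
    confidence: float  # how sure the extraction engine is about its own output


def ocr_extract(image_bytes: bytes) -> ReceiptData:
    """Placeholder for the automated text-analysis step."""
    raise NotImplementedError


def send_to_human_review(image_bytes: bytes) -> ReceiptData:
    """Placeholder for routing the image to a human worker (e.g., a crowdsourcing queue)."""
    raise NotImplementedError


def process_receipt(image_bytes: bytes) -> ReceiptData:
    """Try the automated system first; fall back to a person when it isn't confident."""
    result = ocr_extract(image_bytes)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result
    # The hidden workforce: a human quietly fills the gap the software can't.
    return send_to_human_review(image_bytes)
```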
As Wired notes, similar services like Ibotta and Receipt Hog that collect receipt information for marketing purposes also use crowdsourced workers. In a similar vein, while most users might assume their Facebook newsfeed is governed by faceless algorithms, the company has been ramping up the number of human moderators it employs to catch objectionable content that slips through the net, as has YouTube. Twitter also has thousands of human overseers.
Humans aren’t always witting contributors either. The old text-based reCAPTCHA problems Google used to use to distinguish humans from machines were actually simultaneously helping the company digitize books by getting humans to interpret hard-to-read text.
“Every product that uses AI also uses people,” Jeffrey Bigham, a crowdsourcing expert at Carnegie Mellon University, told Wired. “I wouldn’t even say it’s a backstop so much as a core part of the process.”
Some companies are not shy about their use of crowdsourced workers. Startup Eloquent Labs wants to insert them between customer service chatbots and the human agents who step in when the machines fail. Often the AI is fairly sure what a particular message means, and an MTurk worker can confirm the classification faster and more cheaply than a full service agent could.
Fashion retailer Gilt provides “pre-emptive shipping,” which uses data analytics to predict what people will buy to get products to them faster. The company uses MTurk workers to provide subjective critiques of clothing that feed into their models.
MTurk isn’t the only player. Companies like CloudFactory and CrowdFlower provide crowdsourced human manpower tailored to particular niches, and some companies prefer to maintain their own communities of workers. Unbabel uses an army of 50,000 humans to check and edit the translations its artificial intelligence system produces for customers.
Most of the time these human workers aren’t just filling in the gaps; they’re also helping to train the machine learning component of these companies’ services by providing new examples of how to solve problems. Other times humans aren’t used “in the loop” with AI systems, but to prepare the data sets those systems learn from by labeling images, text, or audio.
It’s even possible to use crowdsourced workers to carry out tasks typically tackled by machine learning, such as large-scale image analysis and forecasting.
Zooniverse gets citizen scientists to classify images of distant galaxies or videos of animals to help academics analyze large data sets too complex for computers. Almanis creates forecasts on everything from economics to politics with impressive accuracy by giving those who sign up to the website incentives for backing the correct answer to a question. Researchers have used MTurkers to power a chatbot, and there’s even a toolkit for building algorithms to control this human intelligence called TurKit.
So what does this prominent role for humans in AI services mean? Firstly, it suggests that many tools people assume are powered by AI may in fact be relying on humans. This has obvious privacy implications, as the Expensify story highlighted, but should also raise concerns about whether customers are really getting what they pay for.
One example of this is IBM’s Watson for Oncology, which is marketed as a data-driven AI system for providing cancer treatment recommendations. But an investigation by STAT highlighted that it’s actually largely driven by recommendations from a handful of (admittedly highly skilled) doctors at Memorial Sloan Kettering Cancer Center in New York.
Secondly, the fact that humans still have to intervene in AI-run processes suggests AI remains largely helpless without us, which is somewhat comforting amid all the doomsday predictions of AI destroying jobs. At the same time, though, much of this crowdsourced work is monotonous, poorly paid, and isolating.
As machines trained by human workers get better at all kinds of tasks, this kind of piecemeal work filling in the increasingly small gaps in their capabilities may get more common. While tech companies often talk about AI augmenting human intelligence, for many it may actually end up being the other way around.
Image Credit: kentoh / Shutterstock.com
#431836 Do Our Brains Use Deep Learning to Make ...
The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already proven by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The blame game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like children trying to learn a skill the first time, initially the network doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweakings later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
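For the curious, here is what that loop looks like in practice: a minimal, textbook-style sketch of backpropagation for a tiny two-layer network in plain NumPy. It is a generic illustration, not the code behind any particular system described here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: 4 inputs -> 8 hidden units -> 1 output
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: learn whether the sum of the inputs is positive
X = rng.normal(size=(256, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

lr = 0.5
for step in range(2000):
    # Forward pass: the signal climbs up the network
    h = sigmoid(X @ W1)       # hidden activations
    y_hat = sigmoid(h @ W2)   # the network's guess

    # Error: how far the output is from the ideal
    delta_out = y_hat - y     # output-layer error (sigmoid + cross-entropy)

    # Backpropagation: each layer is told how far off it was.
    # Note W2.T: the error travels backwards through the SAME weights used
    # on the way forward, which is exactly the symmetry the brain seems to lack.
    delta_hidden = (delta_out @ W2.T) * h * (1 - h)

    # Weight updates: nudge each connection toward the ideal
    W2 -= lr * h.T @ delta_out / len(X)
    W1 -= lr * X.T @ delta_hidden / len(X)
```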
The brain does this too. How? We have no clue.
Biological No-Go
What’s clear, though, is that the deep learning solution, backprop as machines implement it, doesn’t work in the brain.
Backprop is a pretty needy function. It requires a very specific infrastructure to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop, passing error essentially between pairs of neurons, this still requires the forward and backward connections to be symmetric. That hardly ever occurs in the brain’s synapses.
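To make the symmetry requirement concrete, here is the standard textbook rule (not anything specific to the new paper) for the error a layer receives during backprop, where z is the layer’s weighted input and f′ is the derivative of its activation function. The forward weight matrix appears again, transposed, on the backward pass:

```latex
% Standard backprop credit assignment for layer l:
% the error from layer l+1 is carried backwards through the transpose of the
% very same weight matrix W^{(l+1)} that carried activity forwards.
\delta^{(l)} = \bigl(W^{(l+1)}\bigr)^{\top}\,\delta^{(l+1)} \odot f'\bigl(z^{(l)}\bigr)
```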
More recent algorithms adopt a slightly different strategy: they implement a separate feedback pathway that helps the neurons figure out errors locally. While that’s more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
What it does have are neurons with intricate structures, unlike the uniform “balls” that are currently applied in deep learning.
Branching Networks
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
An illustration of the multi-compartment neural network model for deep learning. Left: reconstruction of pyramidal neurons from mouse primary visual cortex. Right: simplified pyramidal neuron models. Image Credit: CIFAR
Curiously, the structure of neurons often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than having uniform neurons, they gave those in middle layers—sandwiched between the input and output—compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
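One loose way to picture the idea in code is to give each hidden unit two compartments: one that integrates feedforward input and one that quietly accumulates top-down feedback, with a local weight update driven by the mismatch between them. The sketch below is a deliberately simplified caricature with invented names and none of the eLife paper’s actual dynamics; it only illustrates how a single cell can keep the two signals separate.

```python
import numpy as np

class TwoCompartmentUnit:
    """Caricature of a pyramidal-cell-like unit with segregated compartments."""

    def __init__(self, n_inputs, n_feedback, lr=0.01, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W_basal = rng.normal(scale=0.1, size=n_inputs)     # feedforward ("roots")
        self.W_apical = rng.normal(scale=0.1, size=n_feedback)  # top-down feedback ("branches")
        self.lr = lr

    def forward(self, x):
        # Basal compartment: drive from the layer below determines the unit's output.
        self.x = x
        self.basal = self.W_basal @ x
        self.output = np.tanh(self.basal)
        return self.output

    def integrate_feedback(self, feedback):
        # Apical compartment: the top-down signal is kept separate from the feedforward drive.
        self.apical = self.W_apical @ feedback

    def local_update(self):
        # Plasticity driven by the mismatch between what the unit did (its output)
        # and what the feedback suggests it should have done (the apical signal).
        # No globally backpropagated gradient is needed, only quantities at the cell.
        mismatch = self.apical - self.output
        self.W_basal += self.lr * mismatch * (1 - self.output**2) * self.x
```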
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also be nudging lower processing neurons towards a “better” activity pattern in real-time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they could experimentally test: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com
#431828 This Self-Driving AI Is Learning to ...
I don’t have to open the doors of AImotive’s white 2015 Prius to see that it’s not your average car. This particular Prius has been christened El Capitan, the name written below the rear doors, and two small cameras are mounted on top of the car. Bundles of wire snake out from them, as well as from the two additional cameras on the car’s hood and trunk.
Inside is where things really get interesting, though. The trunk holds a computer the size of a microwave, and a large monitor covers the passenger glove compartment and dashboard. The center console has three switches labeled “Allowed,” “Error,” and “Active.”
Budapest-based AImotive is working to provide scalable self-driving technology alongside big players like Waymo and Uber in the autonomous vehicle world. On a highway test ride with CEO Laszlo Kishonti near the company’s office in Mountain View, California, I got a glimpse of just how complex that world is.
Camera-Based Feedback System
AImotive’s approach to autonomous driving is a little different from that of some of the best-known systems. For starters, they’re using cameras, not lidar, as primary sensors. “The traffic system is visual and the cost of cameras is low,” Kishonti said. “A lidar can recognize when there are people near the car, but a camera can differentiate between, say, an elderly person and a child. Lidar’s resolution isn’t high enough to recognize the subtle differences of urban driving.”
Image Credit: AImotive
The company’s aiDrive software uses data from the camera sensors to feed information to its algorithms for hierarchical decision-making, grouped under four concurrent activities: recognition, location, motion, and control.
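As a rough, hypothetical picture of how such a hierarchy might be wired together (not AImotive’s code, whose internals aren’t public), imagine each camera frame passing through four stages that successively fill in a shared picture of the world. The stage functions below are empty placeholders, and the stages are shown sequentially for simplicity even though the article describes them as concurrent.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Frame:
    """One timestep of raw camera data (placeholder)."""
    images: List[bytes] = field(default_factory=list)


@dataclass
class WorldState:
    """What the hierarchy has concluded so far about the world and the car."""
    detections: list = field(default_factory=list)        # recognition: objects, lanes, signs
    pose: tuple = (0.0, 0.0, 0.0)                          # location: where the car is
    predicted_paths: list = field(default_factory=list)   # motion: where things are heading
    commands: dict = field(default_factory=dict)           # control: steering, throttle, brake


def recognition(frame: Frame, state: WorldState) -> WorldState:
    # Placeholder: run detectors on the camera images and fill state.detections.
    return state

def location(frame: Frame, state: WorldState) -> WorldState:
    # Placeholder: estimate the car's position relative to the detected scene.
    return state

def motion(frame: Frame, state: WorldState) -> WorldState:
    # Placeholder: predict how the car and surrounding objects will move.
    return state

def control(frame: Frame, state: WorldState) -> WorldState:
    # Placeholder: turn the predictions into steering/throttle/brake commands.
    return state


def process_frame(frame: Frame) -> WorldState:
    """One pass through the four-stage hierarchy described in the article."""
    state = WorldState()
    for stage in (recognition, location, motion, control):
        state = stage(frame, state)
    return state
```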
Kishonti pointed out that lidar has already gotten more cost-efficient, and will only continue to do so.
“Ten years ago, lidar was best because there wasn’t enough processing power to do all the calculations by AI. But the cost of running AI is decreasing,” he said. “In our approach, computer vision and AI processing are key, and for safety, we’ll have fallback sensors like radar or lidar.”
aiDrive currently runs on Nvidia chips, which Kishonti noted were originally designed for graphics, and are not terribly efficient given how power-hungry they are. “We’re planning to substitute lower-cost, lower-energy chips in the next six months,” he said.
Testing in Virtual Reality
Waymo recently announced its fleet has now driven four million miles autonomously. That’s a lot of miles, and hard to compete with. But AImotive isn’t trying to compete, at least not by logging more real-life test miles. Instead, the company is doing 90 percent of its testing in virtual reality. “This is what truly differentiates us from competitors,” Kishonti said.
He outlined the three main benefits of VR testing: it can simulate scenarios too dangerous for the real world (such as hitting something), too costly (not every company has Waymo’s funds to run hundreds of cars on real roads), or too time-consuming (like waiting for rain, snow, or other weather conditions to occur naturally and repeatedly).
“Real-world traffic testing is very skewed towards the boring miles,” he said. “What we want to do is test all the cases that are hard to solve.”
On a screen that looked not unlike multiple games of Mario Kart, he showed me the simulator. Cartoon cars cruised down winding streets, outfitted with all the real-world surroundings: people, trees, signs, other cars. As I watched, a furry kangaroo suddenly hopped across one screen. “Volvo had an issue in Australia,” Kishonti explained. “A kangaroo’s movement is different than other animals since it hops instead of running.” Talk about cases that are hard to solve.
AImotive is currently testing around 1,000 simulated scenarios every night, with a steadily rising curve of successful tests. These scenarios are broken down into features, and the car’s behavior around those features is fed into a neural network. As the algorithms learn more features, the level of complexity the vehicles can handle goes up.
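A nightly batch of simulated scenarios is, at heart, a regression-testing loop. The sketch below is a generic illustration of that idea with made-up names, not AImotive’s test harness: each scenario is run in the simulator, the outcome is logged as a training example, and the pass rate is tracked over time.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Scenario:
    name: str             # e.g. "kangaroo_crossing_at_dusk" (hypothetical)
    features: List[str]   # traits the scenario exercises (shadows, merging truck, ...)


def run_in_simulator(scenario: Scenario) -> bool:
    """Placeholder: play the scenario in VR and report whether the car handled it safely."""
    raise NotImplementedError


def nightly_regression(scenarios: List[Scenario],
                       record_example: Callable[[Scenario, bool], None]) -> float:
    """Run every scenario, log each outcome as a training example, return the pass rate."""
    passed = 0
    for scenario in scenarios:
        ok = run_in_simulator(scenario)
        record_example(scenario, ok)   # behavior around these features feeds the network
        passed += ok
    return passed / len(scenarios)
```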
On the Road
After Kishonti and his colleagues filled me in on the details of their product, it was time to test it out. A safety driver sat in the driver’s seat, a computer operator in the passenger seat, and Kishonti and I in back. The driver maintained full control of the car until we merged onto the highway. Then he flicked the “Allowed” switch, his copilot pressed the “Active” switch, and he took his hands off the wheel.
What happened next, you ask?
A few things. El Capitan was going exactly the speed limit—65 miles per hour—which meant all the other cars were passing us. When a car merged in front of us or cut us off, El Cap braked accordingly (if a little abruptly). The monitor displayed the feed from each of the car’s cameras, plus multiple data fields and a simulation where a blue line marked the center of the lane, measured by the cameras tracking the lane markings on either side.
I noticed El Cap wobbling out of our lane a bit, but it wasn’t until two things happened in a row that I felt a little nervous: first we went under a bridge, then a truck pulled up next to us, both bridge and truck casting a complete shadow over our car. At that point El Cap lost it, and we swerved haphazardly to the right, narrowly missing the truck’s rear wheels. The safety driver grabbed the steering wheel and took back control of the car.
What happened, Kishonti explained, was that the shadows made it hard for the car’s cameras to see the lane markings. This was a new scenario the algorithm hadn’t previously encountered. If we’d only gone under a bridge or only been next to the truck for a second, El Cap may not have had so much trouble, but the two events happening in a row really threw the car for a loop—almost literally.
“This is a new scenario we’ll add to our testing,” Kishonti said. He added that another way for the algorithm to handle this type of scenario, rather than basing its speed and positioning on the lane markings, is to mimic nearby cars. “The human eye would see that other cars are still moving at the same speed, even if it can’t see details of the road,” he said.
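The fallback Kishonti describes, trusting the lane markings when they are visible and leaning on surrounding traffic when they are not, boils down to a few lines of decision logic. This is a hypothetical sketch of that logic only, not the aiDrive controller.

```python
from typing import Optional, Sequence

LANE_CONFIDENCE_THRESHOLD = 0.6  # hypothetical: below this, lane markings aren't trusted


def choose_lateral_target(lane_center: Optional[float],
                          lane_confidence: float,
                          nearby_car_offsets: Sequence[float]) -> Optional[float]:
    """Pick a lateral offset (meters from current position) for the steering controller."""
    if lane_center is not None and lane_confidence >= LANE_CONFIDENCE_THRESHOLD:
        # Normal case: follow the center of the lane as measured from the markings.
        return lane_center
    if nearby_car_offsets:
        # Degraded case (shadows, missing paint): mimic the flow of surrounding traffic,
        # which is still visible even when the road surface detail is not.
        return sum(nearby_car_offsets) / len(nearby_car_offsets)
    # No usable cue: hold position and let a higher-level safety layer hand back control.
    return None
```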
After another brief—and thankfully uneventful—hands-off cruise down the highway, the safety driver took over, exited the highway, and drove us back to the office.
Driving into the Future
I climbed out of the car feeling amazed not only that self-driving cars are possible, but that driving is possible at all. I squint when driving into a tunnel, swerve to avoid hitting a stray squirrel, and brake gradually at stop signs—all without consciously thinking to do so. On top of learning to steer, brake, and accelerate, self-driving software has to incorporate our brains’ and bodies’ unconscious (but crucial) reactions, like our pupils dilating to let in more light so we can see in a tunnel.
Despite all the progress of machine learning, artificial intelligence, and computing power, I have a wholly renewed appreciation for the thing that’s been in charge of driving up till now: the human brain.
Kishonti seemed to feel similarly. “I don’t think autonomous vehicles in the near future will be better than the best drivers,” he said. “But they’ll be better than the average driver. What we want to achieve is safe, good-quality driving for everyone, with scalability.”
AImotive is currently working with American tech firms and with car and truck manufacturers in Europe, China, and Japan.
Image Credit: Alex Oakenman / Shutterstock.com
#431733 Why Humanoid Robots Are Still So Hard to ...
Picture a robot. In all likelihood, you just pictured a sleek metallic or chrome-white humanoid. Yet the vast majority of robots in the world around us are nothing like this; instead, they’re specialized for specific tasks. Our cultural conception of what robots are dates back to the coining of the term robot in Karel Čapek’s Czech play Rossum’s Universal Robots, which originally envisioned them as essentially synthetic humans.
The vision of a humanoid robot is tantalizing. There are constant efforts to create something that looks like the robots of science fiction. Recently, an old competitor in this field returned with a new model: Toyota has released what they call the T-HR3. As humanoid robots go, it appears to be pretty dexterous and have a decent grip, with a number of degrees of freedom making the movements pleasantly human.
This humanoid robot operates mostly via a remote-controlled system that allows the user to control the robot’s limbs by exerting different amounts of pressure on a framework. A VR headset completes the picture, allowing the user to control the robot’s body and teleoperate the machine. There’s no word on a price tag, but one imagines a machine with a control system this complicated won’t exactly be on your Christmas list, unless you’re a billionaire.
Toyota is no stranger to robotics. They released a series of “Partner Robots” that had a bizarre affinity for instrument-playing but weren’t often seen doing much else. Given that they didn’t seem to have much capability beyond the automaton that Leonardo da Vinci made hundreds of years ago, they promptly vanished. If, as the name suggests, the T-HR3 is a sequel to these robots, which came out shortly after ASIMO back in 2003, it’s substantially better.
Slightly less humanoid (and perhaps the more useful for it), Toyota’s HSR-2 is a robot base on wheels with a simple mechanical arm. It brings to mind earlier machines produced by dream-factory startup Willow Garage like the PR-2. The idea of an affordable robot that could simply move around on wheels and pick up and fetch objects, and didn’t harbor too-lofty ambitions to do anything else, was quite successful.
So much so that when RoboCup, the international robotics competition, looked for a platform for its robot-butler competition @Home, it chose the HSR-2 for its ability to handle objects. The HSR-2 has been deployed in trial runs to care for the elderly and injured, but has yet to be widely adopted for these purposes five years after its initial release. It’s telling that arguably the most successful multi-purpose humanoid robot isn’t really humanoid at all, and it’s curious that Toyota now seems to want to return to a more humanoid model a decade after it gave up on the project.
What’s unclear, as is often the case with humanoid robots, is what, precisely, the T-HR3 is actually for. The teleoperation gets around the complex problem of control by simply having the machine controlled remotely by a human. That human then handles all the sensory perception, decision-making, planning, and manipulation; essentially, the hardest problems in robotics.
The T-HR3 may not have much autonomy, but sacrificing autonomy drastically cuts down the robot’s potential uses. Since it can’t act alone, you need a convincing scenario in which a teleoperated humanoid robot, one that is less precise and vastly more expensive than simply hiring a person, is the right tool for the job. Perhaps someday more autonomy will be developed for the robot, and the master maneuvering system that allows humans to control it will only be used in emergencies, to take over if the robot gets stuck.
Toyota’s press release says it is “a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.” In reality, it’s difficult to see such a robot being affordable or even that useful in the home or in medical facilities (unless it’s substantially stronger than humans). Equally, it certainly doesn’t seem robust enough to be deployed in disaster zones or outer space. These tasks have been mooted for robots for a very long time and few have proved up to the challenge.
Toyota’s third generation humanoid robot, the T-HR3. Image Credit: Toyota
Instead, the robot seems designed to work alongside humans. Its design, standing 1.5 meters tall, weighing 75 kilograms, and possessing 32 degrees of freedom in its body, suggests it is built to closely mimic a person, rather than a robot like ATLAS which is robust enough that you can imagine it being useful in a war zone. In this case, it might be closer to the model of the collaborative robots or co-bots developed by Rethink Robotics, whose tons of safety features, including force-sensitive feedback for the user, reduce the risk of terrible PR surrounding killer robots.
Instead the emphasis is on graceful precision engineering: in the promo video, the robot can be seen balancing on one leg before showing off a few poised, yoga-like poses. This perhaps suggests that an application in elderly care, which Toyota has ventured into before and which was the stated aim of their simple HSR-2, might be more likely than deployment to a disaster zone.
The reason humanoid robots remain so elusive and so tempting is probably because of a simple cognitive mistake. We make two bad assumptions. First, we assume that if you build a humanoid robot, give its joints enough flexibility, throw in a little AI and perhaps some pre-programmed behaviors, then presto, it will be able to do everything humans can. When you see a robot that moves well and looks humanoid, it seems like the hardest part is done; surely this robot could do anything. The reality is never so simple.
We also make the reverse assumption: we assume that when we are finally replaced, it will be by perfect replicas of our own bodies and brains that can fulfill all the functions we used to fulfill. Perhaps, in reality, the future of robots and AI is more like its present: piecemeal, with specialized algorithms and specialized machines gradually learning to outperform humans at every conceivable task without ever looking convincingly human.
It may well be that the T-HR3 is angling towards this concept of machine learning as a platform for future research. Rather than trying to program an omni-capable robot out of the box, it will gradually learn from its human controllers. In this way, you could see the platform being used to explore the limits of what humans can teach robots to do simply by having them mimic sequences of our bodies’ motion, in the same way the exploitation of neural networks is testing the limits of training algorithms on data. No one machine will be able to perform everything a human can, but collectively, they will vastly outperform us at anything you’d want one to do.
So when you see a new android like Toyota’s, feel free to marvel at its technical abilities and indulge in the speculation about whether it’s a PR gimmick or a revolutionary step forward along the road to human replacement. Just remember that, human-level bots or not, we’re already strolling down that road.
Image Credit: Toyota