Tag Archives: think
#437816 As Algorithms Take Over More of the ...
Algorithms play an increasingly prominent part in our lives, governing everything from the news we see to the products we buy. As they proliferate, experts say, we need to make sure they don’t collude against us in damaging ways.
Fears of malevolent artificial intelligence plotting humanity’s downfall are a staple of science fiction. But there are plenty of nearer-term situations in which relatively dumb algorithms could do serious harm unintentionally, particularly when they are interlocked in complex networks of relationships.
In the economic sphere a high proportion of decision-making is already being offloaded to machines, and there have been warning signs of where that could lead if we’re not careful. The 2010 “Flash Crash,” in which algorithmic traders helped wipe nearly $1 trillion off the stock market in minutes, is a textbook example, and widespread use of automated trading software has been blamed for the increasing fragility of markets.
But another important place where algorithms could undermine our economic system is in price-setting. Competitive markets are essential for the smooth functioning of the capitalist system that underpins Western society, which is why countries like the US have strict anti-trust laws that prevent companies from creating monopolies or colluding to build cartels that artificially inflate prices.
These regulations were built for an era when pricing decisions could always be traced back to a human, though. As self-adapting pricing algorithms increasingly decide the value of products and commodities, those laws are starting to look unfit for purpose, say the authors of a paper in Science.
Using algorithms to quickly adjust prices in a dynamic market is not a new idea—airlines have been using them for decades—but previously these algorithms operated based on rules that were hard-coded into them by programmers.
Today the pricing algorithms that underpin many marketplaces, especially online ones, rely on machine learning instead. After being set an overarching goal like maximizing profit, they develop their own strategies based on experience of the market, often with little human oversight. The most advanced also use forms of AI whose workings would remain opaque even if humans wanted to peer inside.
In addition, the public nature of online markets means that competitors’ prices are available in real time. It’s well-documented that major retailers like Amazon and Walmart are engaged in a never-ending bot war, using automated software to constantly snoop on their rivals’ pricing and inventory.
This combination of factors sets the stage perfectly for AI-powered pricing algorithms to adopt collusive pricing strategies, say the authors. If given free rein to develop their own strategies, multiple pricing algorithms with real-time access to each other’s prices could quickly learn that cooperating is the best way to maximize profits.
The authors note that researchers have already found evidence that pricing algorithms will spontaneously develop collusive strategies in computer-simulated markets, and a recent study found evidence suggesting that pricing algorithms may already be colluding in Germany’s retail gasoline market. And that’s a problem, because today’s anti-trust laws are ill-suited to prosecuting this kind of behavior.
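To make the dynamic concrete, here is a minimal sketch of the kind of computer-simulated market such studies use: two independent Q-learning agents repeatedly set prices and learn only from their own profits. Everything below (the demand model, the price grid, the learning parameters) is an illustrative assumption, not the setup of any particular study.

```python
import random

# Toy duopoly: each agent picks a price from a small grid every round.
# The demand model and all parameters are illustrative assumptions only.
PRICES = [1, 2, 3, 4, 5]   # discrete price grid (arbitrary units)
COST = 1                   # marginal cost, assumed equal for both firms

def profits(p1, p2):
    """Cheaper firm captures the whole market; ties split it evenly."""
    demand = 10 - (p1 + p2) / 2    # total demand falls as prices rise
    if p1 < p2:
        return (p1 - COST) * demand, 0.0
    if p2 < p1:
        return 0.0, (p2 - COST) * demand
    return (p1 - COST) * demand / 2, (p2 - COST) * demand / 2

# One Q-table per agent: state = rival's last price, action = own next price.
q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1
last = [random.choice(PRICES), random.choice(PRICES)]

for _ in range(200_000):
    acts = []
    for i in range(2):
        state = last[1 - i]                 # observe the rival's last price
        if random.random() < epsilon:       # explore occasionally
            acts.append(random.choice(PRICES))
        else:                               # otherwise exploit learned values
            acts.append(max(q[i][state], key=q[i][state].get))
    rewards = profits(acts[0], acts[1])
    for i in range(2):
        state, nxt = last[1 - i], acts[1 - i]
        best_next = max(q[i][nxt].values())
        td_target = rewards[i] + gamma * best_next
        q[i][state][acts[i]] += alpha * (td_target - q[i][state][acts[i]])
    last = acts

# With payoffs like these, the agents can settle on prices well above
# the competitive level, with no communication between them.
print("final prices:", last)
```

The striking part, which mirrors the simulation results the authors cite, is that nothing in the code tells the agents to cooperate; elevated prices can emerge purely from each agent maximizing its own reward.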
Collusion among humans typically involves companies communicating with each other to agree on a strategy that pushes prices above the true market value. They then develop rules for maintaining this markup in a dynamic market, backed by the threat of retaliatory pricing to spark a price war if another cartel member tries to undercut the agreed strategy.
Because of the complexity of working out whether specific pricing strategies or prices are the result of collusion, prosecutions have instead relied on communication between companies to establish guilt. That’s a problem because algorithms don’t need to communicate to collude, and as a result there are few legal mechanisms to prosecute this kind of collusion.
That means legal scholars, computer scientists, economists, and policymakers must come together to find new ways to uncover, prohibit, and prosecute the collusive rules that underpin this behavior, say the authors. Key to this will be auditing and testing pricing algorithms, looking for things like retaliatory pricing, price matching, and aggressive responses to price drops but not price rises.
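As a sketch of what the simplest such audit might look like, the probe below feeds a black-box pricing function a rival price drop and an equal-sized rise, then flags the asymmetric response the authors describe. The interface, threshold, and example bot are all hypothetical.

```python
def audit_asymmetry(price_bot, baseline_rival_price, delta=1.0, tolerance=0.1):
    """Probe a black-box pricing function for asymmetric responses.

    `price_bot(rival_price)` is assumed to return the algorithm's own price.
    One red flag suggested by the authors is reacting aggressively to a
    rival's price *drop* while ignoring an equal-sized *rise*.
    """
    base = price_bot(baseline_rival_price)
    reaction_to_drop = base - price_bot(baseline_rival_price - delta)
    reaction_to_rise = price_bot(baseline_rival_price + delta) - base
    asymmetry = reaction_to_drop - reaction_to_rise
    return {
        "reaction_to_drop": reaction_to_drop,
        "reaction_to_rise": reaction_to_rise,
        "flagged": asymmetry > tolerance,
    }

# Example: a hypothetical bot that matches price cuts but never follows rises.
suspicious_bot = lambda rival: min(rival, 10.0)
print(audit_asymmetry(suspicious_bot, baseline_rival_price=10.0))
```

Real audits would of course need to probe far richer dynamics, such as multi-round retaliation, but the principle is the same: test behavior, not intent.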
Once collusive pricing rules are uncovered, computer scientists need to come up with ways to constrain algorithms from adopting them without sacrificing their clear efficiency benefits. It could also be helpful to make preventing this kind of collusive behavior the responsibility of the companies that deploy the algorithms, with stiff penalties for those who don’t keep them in check.
One problem, though, is that algorithms may evolve strategies that humans would never think of, which could make spotting this behavior tricky. Imbuing courts with the technical knowledge and capacity to investigate this kind of evidence will also prove difficult, but getting to grips with these problems is an even more pressing challenge than it might seem at first.
While anti-competitive pricing algorithms could wreak havoc, there are plenty of other arenas where collusive AI could have even more insidious effects, from military applications to healthcare and insurance. Developing the capacity to predict and prevent AI scheming against us will likely be crucial going forward.
Image Credit: Pexels from Pixabay
#437809 Q&A: The Masterminds Behind ...
Illustration: iStockphoto
Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research project was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Alphabet, and later by the big automakers.
The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisition spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Alphabet’s Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.
Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.
Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.
This interview has been condensed and edited for clarity.
IEEE Spectrum: How does AI handle the various parts of the self-driving problem?
Photo: Toyota
Gill Pratt
Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.
By far the most problematic is prediction. It’s not the prediction of other automated cars that’s hard, because if all cars were automated, this problem would be much simpler. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.
Spectrum: Can you offset the weakness in prediction with stupendous perception?
Photo: Toyota Research Institute
Wolfram Burgard
Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because the sensors look at the world from slightly different positions—that’s called parallax—so sometimes you don’t know which range estimate a given pixel belongs to. That can complicate the decision as to whether you’re seeing a person painted onto the side of a truck or an actual person.
With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.
Spectrum: When do deep learning’s limitations become apparent?
Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, hundreds of thousands or millions of times.
Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.
“I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur?”
—Gill Pratt, Toyota Research Institute
For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.
You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.
Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?
Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.
Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?
Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.
Our separation of the components of a self-driving AI eases the development and even the learning of those systems. Some companies even think about using deep learning to do the job fully, from end to end, with no structure at all—basically, directly mapping perceptions to actions.
Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.
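For readers who want to picture the block decomposition Pratt describes, here is a schematic sketch; every type and function name below is hypothetical, and real stacks are vastly more complex.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical interfaces, purely to illustrate the block decomposition
# discussed above; none of this is Toyota's actual architecture.

@dataclass
class Track:
    position: Tuple[float, float]   # (x, y) in meters, ego frame
    velocity: Tuple[float, float]   # (vx, vy) in m/s
    kind: str                       # e.g. "pedestrian", "vehicle"

def perceive(camera_frame, lidar_scan) -> List[Track]:
    """Perception block: typically learned (deep networks)."""
    ...

def predict(tracks: List[Track], horizon_s: float) -> List[List[Tuple[float, float]]]:
    """Prediction block: forecast what each agent, especially each
    human, will do over the horizon. The hard part, per Pratt."""
    ...

def plan(predicted_paths, route) -> dict:
    """Planning block: often closer to hand-coded rules and optimization."""
    ...

def drive_step(camera_frame, lidar_scan, route) -> dict:
    """The modular pipeline: each block is testable and inspectable."""
    tracks = perceive(camera_frame, lidar_scan)
    futures = predict(tracks, horizon_s=5.0)
    return plan(futures, route)    # e.g. {"steering": ..., "throttle": ...}

# The end-to-end alternative collapses all three blocks into one opaque
# mapping: controls = end_to_end_network(camera_frame, lidar_scan)
```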
Spectrum: So, what’s next—what new technique is in the offing?
Pratt: If I knew the answer, we’d do it. [Laughter]
Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?
Pratt: That means putting constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slowly enough that it can always stop without risking a rear-end collision, that makes the problem much easier. Street trolleys still operate in mixed traffic in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is that that’s where we’ll see Level 4 autonomy in cities. It’s going to be at the lower speeds.
“We are now in the age of deep learning, and we don’t know what will come after.”
—Wolfram Burgard, Toyota Research Institute
That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional piece of debris that suddenly appears in the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.
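One way to picture an operational design domain is as an explicit set of engagement constraints. The sketch below is a hypothetical encoding; the fields and limits are illustrative, not any vendor's specification.

```python
from dataclasses import dataclass

# Hypothetical encoding of an operational design domain (ODD) as a set
# of constraints; the fields and limits are illustrative only.

@dataclass
class OperationalDesignDomain:
    geofence: set              # region identifiers where automation is allowed
    max_speed_mph: float       # e.g. a low-speed urban service
    allowed_weather: set       # conditions the stack is validated for
    max_traffic_density: float # fraction of road capacity in use

    def permits(self, region, speed_mph, weather, traffic_density) -> bool:
        """Automation engages only when every constraint is satisfied."""
        return (
            region in self.geofence
            and speed_mph <= self.max_speed_mph
            and weather in self.allowed_weather
            and traffic_density <= self.max_traffic_density
        )

urban_low_speed = OperationalDesignDomain(
    geofence={"downtown_core"},
    max_speed_mph=25,
    allowed_weather={"clear", "overcast"},
    max_traffic_density=0.5,
)
print(urban_low_speed.permits("downtown_core", 20, "clear", 0.3))  # True
print(urban_low_speed.permits("downtown_core", 20, "snow", 0.3))   # False
```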
Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?
Pratt: It has to do with the times. During the era when cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on: the OSHA [Occupational Safety and Health Administration] rules, the UL code for electrical appliances, all the building standards, medicine.
Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.
Photo: Toyota
Toyota is using this Platform 4 automated driving test vehicle, based on the Lexus LS, to develop Level 4 self-driving capabilities for its “Chauffeur” project.
Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!
Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?
These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?
Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?
Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.
Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps, with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.
And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all in factors of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on the order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.
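Pratt's powers-of-ten estimate is easy to redo. The sketch below reproduces the multiplication and adds one input not in the interview: a human-driver fatality rate on the order of one per 100 million miles, roughly in line with commonly cited US figures.

```python
# Pratt's figure, as quoted above (order-of-magnitude):
fleet_miles_per_year = 10**10    # fleet miles driven per year

# Assumed input, not from the interview: human drivers cause roughly
# one fatality per 100 million vehicle miles (commonly cited US figure).
human_fatality_rate = 1 / 10**8  # fatalities per mile

# At merely human-level safety, the automated fleet would still produce:
expected_fatalities = fleet_miles_per_year * human_fatality_rate
print(expected_fatalities)       # 100.0 fatalities per year

# To cut that to ~1 per year, the system would have to be ~100x safer
# than a human driver, which is why "incredibly good" takes a long time.
print(expected_fatalities / 1.0)
```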
Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.
Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.
#437791 Is the Pandemic Spurring a Robot ...
“Are robots really destined to take over restaurant kitchens?” This was the headline of an article published by Eater four years ago. One of the experts interviewed was Siddhartha Srinivasa, at the time a professor at the Robotics Institute at Carnegie Mellon University and currently director of Robotics and AI for Amazon. He said, “I’d love to make robots unsexy. It’s weird to say this, but when something becomes unsexy, it means that it works so well that you don’t have to think about it. You don’t stare at your dishwasher as it washes your dishes in fascination, because you know it’s gonna work every time… I want to get robots to that stage of reliability.”
Have we managed to get there over the last four years? Are robots unsexy yet? And how has the pandemic changed the trajectory of automation across industries?
The Covid Effect
The pandemic has had a massive economic impact all over the world, and one of the problems many companies have faced has been keeping their businesses running without putting employees at risk of infection. Many organizations are seeking to remain operational in the short term by automating tasks that would otherwise be carried out by humans. According to Digital Trends, since the start of the pandemic we have seen a significant increase in automation efforts in manufacturing, meat packing, grocery stores, and more. In a June survey, 44 percent of corporate financial officers said they were considering more automation in response to the coronavirus.
MIT economist David Autor described the economic crisis brought on by the Covid-19 pandemic as “an event that forces automation.” The disruption, he added, has forced automation in sectors and activities facing a shortage of workers but no reduction in demand. That hasn’t happened in hospitality, where demand has practically disappeared, but it has in agriculture and distribution. The latter is being transformed by the rapid growth of e-commerce, with more efficient and automated warehouses that can provide better service.
China Leads the Way
China is currently in a unique position to lead the world’s automation economy. Although the country boasts a huge workforce, labor costs have increased tenfold over the past 20 years. As the world’s factory, China has a strong incentive to automate its manufacturing sector, which is already a solid leader in high-quality products. China is the largest and fastest-growing market in the world for industrial robotics, with sales up 21 percent to $5.4 billion in 2019, representing one third of global sales. As a result, Chinese companies are developing a significant advantage in learning to work with their metallic colleagues.
The reasons behind this Asian dominance are evident: the population has a greater capacity and need for tech adoption. A large percentage of the population will soon be of retirement age, without an equivalent younger demographic to replace it, leading to a pressing need to adopt automation in the short term.
China is well ahead of other countries in restaurant automation. As reported in Bloomberg, in early 2020 UBS Group AG conducted a survey of over 13,000 consumers in different countries and found that 64 percent of Chinese participants had ordered meals through their phones at least once a week, compared to a mere 17 percent in the US. As digital ordering gains ground, robot waiters and chefs are likely not far behind. The West harbors a mistrust towards non-humans that the East does not.
The Robot Evolution
The pandemic was a perfect excuse for robots to replace us. But despite the hype around this idea, robots have mostly disappointed during the pandemic.
Some 66 different kinds of “social” robots have been piloted in hospitals, health centers, airports, office buildings, and other public and private spaces in response to the pandemic, according to a study from researchers at Pompeu Fabra University in Barcelona, Spain. Their survey looked at 195 robot deployments across 35 countries and territories, including China, the US, Thailand, and Hong Kong.
If the “robot revolution” means a movement in which automation, robotics, and artificial intelligence proliferate through the value chains of various industries, bringing a paradigm shift in how we produce, consume, and distribute products—it hasn’t happened yet.
But there’s a more nuanced answer: rather than a revolution, we’re seeing an incremental robot evolution. It’s a trend that will likely accelerate over the next five years, particularly when 5G takes center stage and robotics as a field leaves behind imitation and evolves independently.
Automation Anxiety
Why haven’t we welcomed the long-promised robotic takeover yet? Despite progress in AI and the increased adoption of industrial robots, consumer-facing robotic products are nowhere near as ubiquitous as popular culture predicted decades ago. As Amara’s Law puts it: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” It seems we are living through the Gartner hype cycle.
People have a complicated relationship with robots, torn between admiring them, fearing them, rejecting them, and even boycotting them, as has happened in the automobile industry.
Retail robot in a Walmart store. Credit: Bossa Nova Robotics
Walmart terminated its contract with Bossa Nova and withdrew its 1,000 inventory robots from its stores because the company was concerned about how shoppers were reacting to seeing the six-foot robots in the aisles.
With roadblocks like this, will the World Economic Forum’s prediction that almost half of all tasks will be carried out by machines by 2025 come to pass?
At the rate we’re going, it seems unlikely, even with the boost in automation caused by the pandemic. Robotics will continue to advance its capabilities, and will take over more human jobs as it does so, but it’s unlikely we’ll hit a dramatic inflection point that could be described as a “revolution.” Instead, the robot evolution will happen the way most societal change does: incrementally, with time for people to adapt both practically and psychologically.
For now though, robots are still pretty sexy.
Image Credit: charles taylor / Shutterstock.com