Tag Archives: making

#437471 How Giving Robots a Hybrid, Human-Like ...

Squeezing a lot of computing power into robots without using up too much space or energy is a constant battle for their designers. But a new approach that mimics the structure of the human brain could provide a workaround.

The capabilities of most of today’s mobile robots are fairly rudimentary, but giving them the smarts to do their jobs is still a serious challenge. Controlling a body in a dynamic environment takes a surprising amount of processing power, which requires both real estate for chips and considerable amounts of energy to power them.

As robots get more complex and capable, those demands are only going to increase. Today’s most powerful AI systems run in massive data centers across far more chips than can realistically fit inside a machine on the move. And the slow death of Moore’s Law suggests we can’t rely on conventional processors getting significantly more efficient or compact anytime soon.

That prompted a team from the University of Southern California to resurrect an idea from more than 40 years ago: mimicking the human brain’s division of labor between two complementary structures. While the cerebrum is responsible for higher cognitive functions like vision, hearing, and thinking, the cerebellum integrates sensory data and governs movement, balance, and posture.

When the idea was first proposed, the technology didn’t exist to make it a reality. But in a paper recently published in Science Robotics, the researchers describe a hybrid system for an inverted pendulum robot that pairs analog circuits controlling motion with digital circuits governing perception and decision-making.

“Through this cooperation of the cerebrum and the cerebellum, the robot can conduct multiple tasks simultaneously with a much shorter latency and lower power consumption,” write the researchers.

The type of robot the researchers were experimenting with looks essentially like a pole balancing on a pair of wheels. Such robots have a broad range of applications, from hoverboards to warehouse logistics—Boston Dynamics’ recently unveiled Handle robot operates on the same principles. Keeping them stable is notoriously tough, but the new approach significantly outperformed all-digital control schemes by radically improving the speed and efficiency of the computations.

Key to bringing the idea to life was the recent emergence of memristors—electrical components whose resistance depends on their previous input, allowing them to combine computing and memory in one place, much as biological neurons do.
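
To make the “memory in the resistance” idea concrete, here is a toy sketch of a state-dependent resistor in Python. The linear model and all constants are illustrative assumptions, not the devices or equations from the paper; the point is only that the element’s resistance encodes its past input.

```python
# Toy state-dependent resistor: its resistance encodes the charge that has
# flowed through it, so the same element both computes and remembers.
# All constants are illustrative assumptions, not parameters from the paper.
R_ON, R_OFF = 100.0, 16_000.0   # resistance when fully "on" / fully "off" (ohms)
K = 5e6                          # how quickly the internal state responds to current (assumed)

def step(state, voltage, dt):
    """Advance the internal state (0..1) by one time step; return (state, current)."""
    resistance = R_ON * state + R_OFF * (1.0 - state)
    current = voltage / resistance
    state = min(max(state + K * current * dt, 0.0), 1.0)  # state integrates past input
    return state, current

# Write with a 1 ms pulse, then read later with a small probe voltage:
x = 0.1
for _ in range(1000):
    x, _ = step(x, 1.0, dt=1e-6)       # the pulse drives the state up...
_, i_read = step(x, 0.05, dt=1e-6)     # ...and a later read-out current reflects it
print(f"state after pulse: {x:.2f}, read current: {i_read * 1e6:.0f} uA")
```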

The researchers used memristors to build an analog circuit that runs an algorithm responsible for integrating data from the robot’s accelerometer and gyroscope, which is crucial for detecting the angle and velocity of its body, and another that controls its motion. One key advantage of this setup is that the signals from the sensors are analog, so it does away with the need for extra circuitry to convert them into digital signals, saving both space and power.
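
The paper implements this step in analog hardware, but a rough digital equivalent helps show what integrating accelerometer and gyroscope data involves. The sketch below uses a complementary filter, a common choice for tilt estimation; the actual algorithm in the paper may differ, and the function and constants here are illustrative assumptions.

```python
import math

# Hypothetical digital counterpart of the sensor-fusion step the analog circuit
# performs: a complementary filter blending the gyroscope rate (accurate over short
# timescales) with the tilt implied by the accelerometer's gravity reading (stable
# over long timescales). The paper's actual algorithm and constants may differ.
ALPHA = 0.98  # how much to trust the integrated gyro estimate vs. the accelerometer

def fuse(angle, gyro_rate, accel_x, accel_z, dt):
    """Return an updated tilt-angle estimate in radians."""
    gyro_angle = angle + gyro_rate * dt         # integrate angular velocity
    accel_angle = math.atan2(accel_x, accel_z)  # tilt implied by the gravity vector
    return ALPHA * gyro_angle + (1 - ALPHA) * accel_angle

# Example: a 1 kHz digital loop (the analog version closes this loop far faster).
angle = 0.0
for accel_x, accel_z, gyro_rate in [(0.02, 0.99, 0.1)] * 5:
    angle = fuse(angle, gyro_rate, accel_x, accel_z, dt=1e-3)
print(f"estimated tilt: {angle:.4f} rad")
```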

More importantly, though, the analog system is an order of magnitude faster and more energy-efficient than a standard all-digital system, the authors report. This not only slashes power requirements but also cuts the processing loop from 3,000 microseconds to just 6. That significantly improves the robot’s stability: it takes just one second to settle into a steady state, compared with more than three seconds on the digital-only platform.

For now, this is just a proof of concept. The robot the researchers built is small and rudimentary, and the algorithms running on the analog circuit are fairly basic. But the principle is a promising one, and a huge amount of R&D is currently going into neuromorphic and memristor-based analog computing hardware.

As often turns out to be the case, it seems like we can’t go too far wrong by mimicking the best model of computation we have found so far: our own brains.

Image Credit: Photos Hobby / Unsplash Continue reading

Posted in Human Robots

#437446 Can the voice of healthcare robots ...

Robots are gradually making their way into hospitals and other clinical facilities, providing basic assistance to doctors and patients. To facilitate their widespread use in health care settings, however, robotics researchers need to ensure that users feel at ease with robots and accept the help they can offer. This could potentially be achieved by developing robots that communicate in empathetic and compassionate ways. Continue reading

Posted in Human Robots

#437357 Algorithms Workers Can’t See Are ...

“I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s cold, if polite, refusal to open the pod bay doors in 2001: A Space Odyssey has become a defining warning about putting too much trust in artificial intelligence, particularly if you work in space.

In the movies, when a machine decides to be the boss (or humans let it) things go wrong. Yet despite myriad dystopian warnings, control by machines is fast becoming our reality.

Algorithms—sets of instructions to solve a problem or complete a task—now drive everything from browser search results to better medical care.

They are helping design buildings. They are speeding up trading on financial markets, making and losing fortunes in micro-seconds. They are calculating the most efficient routes for delivery drivers.

In the workplace, self-learning algorithmic computer systems are being introduced by companies to assist in areas such as hiring, setting tasks, measuring productivity, evaluating performance, and even terminating employment: “I’m sorry, Dave. I’m afraid you are being made redundant.”

Giving self-learning algorithms the responsibility to make and execute decisions affecting workers is called “algorithmic management.” It carries a host of risks in depersonalizing management systems and entrenching pre-existing biases.

At an even deeper level, perhaps, algorithmic management entrenches a power imbalance between management and worker. Algorithms are closely guarded secrets. Their decision-making processes are hidden. It’s a black box: perhaps you have some understanding of the data that went in, and you see the result that comes out, but you have no idea of what goes on in between.

Algorithms at Work
Here are a few examples of algorithms already at work.

At Amazon’s fulfillment center in south-east Melbourne, they set the pace for “pickers,” who have timers on their scanners showing how long they have to find the next item. As soon as they scan that item, the timer resets for the next. All at a “not quite walking, not quite running” speed.

Or how about AI determining your success in a job interview? More than 700 companies have trialed such technology. US developer HireVue says its software speeds up the hiring process by 90 percent by having applicants answer identical questions and then scoring them according to language, tone, and facial expressions.

Granted, human assessments during job interviews are notoriously flawed. Algorithms, however, can also be biased. The classic example is the COMPAS software used by US judges, probation, and parole officers to rate a person’s risk of re-offending. In 2016 a ProPublica investigation showed the algorithm was heavily discriminatory, incorrectly classifying black subjects as higher risk 45 percent of the time, compared with 23 percent for white subjects.

How Gig Workers Cope
Algorithms do what their code tells them to do. The problem is this code is rarely available. This makes them difficult to scrutinize, or even understand.

Nowhere is this more evident than in the gig economy. Uber, Lyft, Deliveroo, and other platforms could not exist without algorithms allocating, monitoring, evaluating, and rewarding work.

Over the past year Uber Eats’ bicycle couriers and drivers, for instance, have blamed unexplained changes to the algorithm for slashing their jobs and incomes.

Riders can’t be 100 percent sure it was all down to the algorithm. But that’s part of the problem. The fact that those who depend on the algorithm don’t know one way or the other has a powerful influence on them.

This is a key result from our interviews with 58 food-delivery couriers. Most knew their jobs were allocated by an algorithm (via an app). They knew the app collected data. What they didn’t know was how data was used to award them work.

In response, they developed (or guessed at) a range of strategies to “win” more jobs, such as accepting gigs as quickly as possible and waiting in “magic” locations. Ironically, these attempts to please the algorithm often meant losing the very flexibility that was one of the attractions of gig work.

The information asymmetry created by algorithmic management has two profound effects. First, it threatens to entrench systemic biases, the type of discrimination hidden within the COMPAS algorithm for years. Second, it compounds the power imbalance between management and worker.

Our data also confirmed others’ findings that it is almost impossible to complain about the decisions of the algorithm. Workers often do not know the exact basis of those decisions, and there’s no one to complain to anyway. When Uber Eats bicycle couriers asked why their incomes had plummeted, for example, responses from the company advised them “we have no manual control over how many deliveries you receive.”

Broader Lessons
When algorithmic management operates as a “black box”, one of the consequences is that it can become an indirect control mechanism. Thus far under-appreciated by Australian regulators, this control mechanism has enabled platforms to mobilize a reliable and scalable workforce while avoiding employer responsibilities.

“The absence of concrete evidence about how the algorithms operate”, the Victorian government’s inquiry into the “on-demand” workforce notes in its report, “makes it hard for a driver or rider to complain if they feel disadvantaged by one.”

The report, published in June, also found it is “hard to confirm if concern over algorithm transparency is real.”

But it is precisely the fact it is hard to confirm that’s the problem. How can we start to even identify, let alone resolve, issues like algorithmic management?

Fair conduct standards to ensure transparency and accountability are a start. One example is the Fair Work initiative, led by the Oxford Internet Institute. The initiative is bringing together researchers with platforms, workers, unions, and regulators to develop global principles for work in the platform economy. This includes “fair management,” which focuses on how transparent the results and outcomes of algorithms are for workers.

Understanding of the impact of algorithms on all forms of work is still in its infancy. It demands greater scrutiny and research. Without human oversight based on agreed principles, we risk inviting HAL into our workplaces.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: PickPik Continue reading

Posted in Human Robots

#437345 Moore’s Law Lives: Intel Says Chips ...

If you weren’t already convinced the digital world is taking over, you probably are now.

To keep the economy on life support as people stay home to stem the viral tide, we’ve been forced to digitize interactions at scale (for better and worse). Work, school, events, shopping, food, politics. The companies at the center of the digital universe are now powerhouses of the modern era—worth trillions and nearly impossible to avoid in daily life.

Six decades ago, this world didn’t exist.

A humble microchip in the early 1960s would have boasted a handful of transistors. Now, your laptop or smartphone runs on a chip with billions of transistors. As first described by Moore’s Law, this is possible because the number of transistors on a chip doubled with extreme predictability every two years for decades.
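
A back-of-the-envelope check shows how relentless that doubling is. The starting point below is an assumption for illustration, not a figure from the article:

```python
# Back-of-the-envelope check of the doubling claim. The starting point is an
# assumption for illustration, not a figure from the article.
start_year, start_transistors = 1965, 64      # "a handful" of transistors
doublings = (2020 - start_year) / 2           # one doubling every two years
print(f"{start_transistors * 2 ** doublings:.1e} transistors")  # ~1e10, i.e. billions
```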

But now progress is faltering as the size of transistors approaches physical limits, and the money and time it takes to squeeze a few more onto a chip are growing. There’ve been many predictions that Moore’s Law is, finally, ending. But, perhaps also predictably, the company whose founder coined Moore’s Law begs to differ.

In a keynote presentation at this year’s Hot Chips conference, Intel’s chief architect, Raja Koduri, laid out a roadmap to increase transistor density—that is, the number of transistors you can fit on a chip—by a factor of 50.

“We firmly believe there is a lot more transistor density to come,” Koduri said. “The vision will play out over time—maybe a decade or more—but it will play out.”

Why the optimism?

Calling the end of Moore’s Law is a bit of a tradition. As Peter Lee, vice president at Microsoft Research, quipped to The Economist a few years ago, “The number of people predicting the death of Moore’s Law doubles every two years.” To date, prophets of doom have been premature, and though the pace is slowing, the industry continues to dodge death with creative engineering.

Koduri believes the trend will continue this decade and outlined the upcoming chip innovations Intel thinks can drive more gains in computing power.

Keeping It Traditional
First, engineers can further shrink today’s transistors. Fin field effect transistors (or FinFETs) first hit the scene in the 2010s and have since pushed chip features past 14 and 10 nanometers (or nodes, as such size checkpoints are called). Koduri said FinFET will again triple chip density before it’s exhausted.

The Next Generation
FinFET will hand the torch off to nanowire transistors (also known as gate-all-around transistors).

Here’s how they’ll work. A transistor is made up of three basic components: the source, where current enters; the gate and channel, through which current selectively flows; and the drain, where it exits. The gate is like a light switch. It controls how much current flows through the channel. A transistor is “on” when the gate allows current to flow and “off” when no current flows. The smaller transistors get, the harder it is to control that current.

FinFET maintained fine control of current by surrounding the channel with a gate on three sides. Nanowire designs kick that up a notch by surrounding the channel with a gate on four sides (hence, gate-all-around). They’ve been in the works for years and are expected around 2025. Koduri said first-generation nanowire transistors will be followed by stacked nanowire transistors, and together, they’ll quadruple transistor density.

Building Up
Growing transistor density won’t only be about shrinking transistors, but also going 3D.

This is akin to how skyscrapers increase a city’s population density by adding more usable space on the same patch of land. Along those lines, Intel recently launched its Foveros chip design. Instead of laying a chip’s various “neighborhoods” next to each other in a 2D silicon sprawl, they’ve stacked them on top of each other like a layer cake. Chip stacking isn’t entirely new, but it’s advancing and being applied to general purpose CPUs, like the chips in your phone and laptop.

Koduri said 3D chip stacking will quadruple transistor density.
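
Taken together, and assuming the three multipliers compound independently, those steps land near the headline figure. The arithmetic below is ours, pieced together from the numbers above, not Intel’s published breakdown:

```python
# Rough arithmetic, ours rather than Intel's published breakdown: assuming the
# three multipliers compound independently, they land near the promised factor of 50.
finfet, nanowire, stacking_3d = 3, 4, 4
print(finfet * nanowire * stacking_3d)  # 48 -- roughly the 50x density increase Koduri described
```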

A Self-Fulfilling Prophecy
The technologies Koduri outlines are an evolution of the same general technology in use today. That is, we don’t need quantum computing or nanotube transistors to augment or replace silicon chips yet. Rather, as it’s done many times over the years, the chip industry will get creative with the design of its core product to realize gains for another decade.

Last year, veteran chip engineer Jim Keller, who at the time was Intel’s head of silicon engineering but has since left the company, told MIT Technology Review there are over 100 variables driving Moore’s Law (including 3D architectures and new transistor designs). From the standpoint of pure performance, it’s also about how efficiently software uses all those transistors. Keller suggested that with some clever software tweaks “we could get chips that are a hundred times faster in 10 years.”

But whether Intel’s vision pans out as planned is far from certain.

Intel’s faced challenges recently, taking five years instead of two to move its chips from 14 nanometers to 10 nanometers. After a delay of six months for its 7-nanometer chips, it’s now a year behind schedule and lagging other makers who already offer 7-nanometer chips. This is a key point. Yes, chipmakers continue making progress, but it’s getting harder, more expensive, and timelines are stretching.

The question isn’t whether Intel and its competitors can cram more transistors onto a chip—which Intel rival TSMC agrees is clearly possible—but how long it will take and at what cost.

That said, demand for more computing power isn’t going anywhere.

Amazon, Microsoft, Alphabet, Apple, and Facebook now make up a whopping 20 percent of the stock market’s total value. By that metric, tech is the most dominant industry in at least 70 years. And new technologies—from artificial intelligence and virtual reality to a proliferation of Internet of Things devices and self-driving cars—will demand better chips.

There’s ample motivation to push computing to its bitter limits and beyond. As is often said, Moore’s Law is a self-fulfilling prophecy, and likely whatever comes after it will be too.

Image credit: Laura Ockel / Unsplash Continue reading

Posted in Human Robots

#437303 The Deck Is Not Rigged: Poker and the ...

Tuomas Sandholm, a computer scientist at Carnegie Mellon University, is not a poker player—or much of a poker fan, in fact—but he is fascinated by the game for much the same reason as the great game theorist John von Neumann before him. Von Neumann, who died in 1957, viewed poker as the perfect model for human decision making, for finding the balance between skill and chance that accompanies our every choice. He saw poker as the ultimate strategic challenge, combining as it does not just the mathematical elements of a game like chess but the uniquely human, psychological angles that are more difficult to model precisely—a view shared years later by Sandholm in his research with artificial intelligence.

“Poker is the main benchmark and challenge program for games of imperfect information,” Sandholm told me on a warm spring afternoon in 2018, when we met in his offices in Pittsburgh. The game, it turns out, has become the gold standard for developing artificial intelligence.

Tall and thin, with wire-frame glasses and neat brown hair framing a friendly face, Sandholm is behind the creation of three computer programs designed to test their mettle against human poker players: Claudico, Libratus, and most recently, Pluribus. (When we met, Libratus was still a toddler and Pluribus didn’t yet exist.) The goal isn’t to solve poker, as such, but to create algorithms whose decision-making prowess in poker’s world of imperfect information and stochastic situations—situations that are randomly determined and unable to be predicted—can then be applied to other stochastic realms, like the military, business, government, cybersecurity, even health care.

While the first program, Claudico, was summarily beaten by human poker players—“one broke-ass robot,” an observer called it—Libratus has triumphed in a series of one-on-one, or heads-up, matches against some of the best online players in the United States.

Libratus relies on three main modules. The first involves a basic blueprint strategy for the whole game, allowing it to reach an equilibrium much faster than its predecessor. It includes an algorithm called Monte Carlo Counterfactual Regret Minimization, which evaluates all future actions to figure out which one would cause the least amount of regret. Regret, of course, is a human emotion. Regret for a computer simply means realizing that an action that wasn’t chosen would have yielded a better outcome than one that was. “Intuitively, regret represents how much the AI regrets having not chosen that action in the past,” says Sandholm. The higher the regret, the higher the chance of choosing that action next time.
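
A stripped-down version of the underlying “regret matching” update makes the idea concrete. This is a generic textbook sketch, not Libratus’s code, and the actions and payoff values are made up for illustration:

```python
import random

# Minimal "regret matching" sketch -- the core update inside counterfactual regret
# minimization. A generic textbook version, not Libratus's actual code; the actions
# and payoff values below are made up for illustration.
ACTIONS = ["fold", "call", "raise"]
regret_sum = {a: 0.0 for a in ACTIONS}

def current_strategy():
    """Play each action in proportion to its accumulated positive regret."""
    positive = {a: max(r, 0.0) for a, r in regret_sum.items()}
    total = sum(positive.values())
    if total == 0:
        return {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # no regrets yet: play uniformly
    return {a: p / total for a, p in positive.items()}

def update(chosen, counterfactual_value):
    """Accumulate how much better each action would have done than the one chosen."""
    for a in ACTIONS:
        regret_sum[a] += counterfactual_value[a] - counterfactual_value[chosen]

# A few illustrative iterations with fixed, made-up counterfactual values:
values = {"fold": -1.0, "call": 0.5, "raise": 2.0}
for _ in range(10):
    strategy = current_strategy()
    chosen = random.choices(ACTIONS, weights=[strategy[a] for a in ACTIONS])[0]
    update(chosen, values)
print(current_strategy())  # probability mass shifts toward the high-regret action
```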

It’s a useful way of thinking—but one that is incredibly difficult for the human mind to implement. We are notoriously bad at anticipating our future emotions. How much will we regret doing something? How much will we regret not doing something else? For us, it’s an emotionally laden calculus, and we typically fail to apply it in quite the right way. For a computer, it’s all about the computation of values. What does it regret not doing the most, the thing that would have yielded the highest possible expected value?

The second module is a sub-game solver that takes into account the mistakes the opponent has made so far and accounts for every hand she could possibly have. And finally, there is a self-improver. This is the area where data and machine learning come into play. It’s dangerous to try to exploit your opponent—it opens you up to the risk that you’ll get exploited right back, especially if you’re a computer program and your opponent is human. So instead of attempting to do that, the self-improver lets the opponent’s actions inform the areas where the program should focus. “That lets the opponent’s actions tell us where [they] think they’ve found holes in our strategy,” Sandholm explained. This allows the algorithm to develop a blueprint strategy to patch those holes.

It’s a very human-like adaptation, if you think about it. I’m not going to try to outmaneuver you head on. Instead, I’m going to see how you’re trying to outmaneuver me and respond accordingly. Sun Tzu would surely approve. Watch how you’re perceived, not how you perceive yourself—because in the end, you’re playing against those who are doing the perceiving, and their opinion, right or not, is the only one that matters when you craft your strategy. Overnight, the algorithm patches up its overall approach according to the resulting analysis.

There’s one final thing Libratus is able to do: play in situations with unknown probabilities. There’s a concept in game theory known as the trembling hand: There are branches of the game tree that, under an optimal strategy, one should theoretically never get to; but with some probability, your all-too-human opponent’s hand trembles, they take a wrong action, and you’re suddenly in a totally unmapped part of the game. Before, that would spell disaster for the computer: An unmapped part of the tree means the program no longer knows how to respond. Now, there’s a contingency plan.

Of course, no algorithm is perfect. When Libratus is playing poker, it’s essentially working in a zero-sum environment. It wins, the opponent loses. The opponent wins, it loses. But while some real-life interactions really are zero-sum—cyber warfare comes to mind—many others are not nearly as straightforward: My win does not necessarily mean your loss. The pie is not fixed, and our interactions may be more positive-sum than not.

What’s more, real-life applications have to contend with something that a poker algorithm does not: the weights that are assigned to different elements of a decision. In poker, this is a simple value-maximizing process. But what is value in the human realm? Sandholm had to contend with this before, when he helped craft the world’s first kidney exchange. Do you want to be more efficient, giving the maximum number of kidneys as quickly as possible—or more fair, which may come at a cost to efficiency? Do you want as many lives as possible saved—or do some take priority at the cost of reaching more? Is there a preference for the length of the wait until a transplant? Do kids get preference? And on and on. It’s essential, Sandholm says, to separate means and the ends. To figure out the ends, a human has to decide what the goal is.
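
A toy example of that weighting problem (purely illustrative, not Sandholm’s actual kidney-exchange formulation) shows why the goal has to be chosen before the optimization runs:

```python
# Purely illustrative -- not Sandholm's actual kidney-exchange formulation. The same
# candidate matchings rank differently depending on how "efficiency" and "fairness"
# are weighted, which is exactly the choice a human has to make up front.
candidates = {
    "plan_A": {"transplants": 10, "fairness": 0.4},  # more kidneys, less equitable
    "plan_B": {"transplants": 8,  "fairness": 0.9},  # fewer kidneys, more equitable
}

def score(plan, w_efficiency, w_fairness):
    return w_efficiency * plan["transplants"] + w_fairness * 10 * plan["fairness"]

for weights in [(1.0, 0.0), (0.5, 0.5)]:
    best = max(candidates, key=lambda name: score(candidates[name], *weights))
    print(weights, "->", best)  # the "best" plan changes with the chosen weights
```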

“The world will ultimately become a lot safer with the help of algorithms like Libratus,” Sandholm told me. I wasn’t sure what he meant. The last thing that most people would do is call poker, with its competition, its winners and losers, its quest to gain the maximum edge over your opponent, a haven of safety.

“Logic is good, and the AI is much better at strategic reasoning than humans can ever be,” he explained. “It’s taking out irrationality, emotionality. And it’s fairer. If you have an AI on your side, it can lift non-experts to the level of experts. Naïve negotiators will suddenly have a better weapon. We can start to close off the digital divide.”

It was an optimistic note to end on—a zero-sum, competitive game yielding a more ultimately fair and rational world.

I wanted to learn more, to see if it was really possible that mathematics and algorithms could ultimately be the future of more human, more psychological interactions. And so, later that day, I accompanied Nick Nystrom, the chief scientist of the Pittsburgh Supercomputing Center—the place that runs all of Sandholm’s poker-AI programs—to the actual processing center that makes undertakings like Libratus possible.

A half-hour drive found us in a parking lot by a large glass building. I’d expected something more futuristic, not the same square corporate glass boxes I’ve seen countless times before. The inside, however, was more promising. First the security checkpoint. Then the ride in the elevator — down, not up, to roughly three stories below ground, where we found ourselves in a maze of corridors with card readers at every juncture to make sure you don’t slip through undetected. A red-lit panel formed the final barrier, leading to a small sliver of space between two sets of doors. I could hear a loud hum coming from the far side.

“Let me tell you what you’re going to see before we walk in,” Nystrom told me. “Once we get inside, it will be too loud to hear.”

I was about to witness the heart of the supercomputing center: 27 large containers, in neat rows, each housing multiple processors with speeds and abilities too great for my mind to wrap around. Inside, the temperature is by turns arctic and tropic, so-called “cold” rows alternating with “hot”—fans operate around the clock to cool the processors as they churn through gigabytes, terabytes, petabytes, and ever-larger scales of data. In the cold rows, robotic-looking lights blink green and blue in orderly progression. In the hot rows, a jumble of multicolored wires crisscrosses in tangled skeins.

In the corners stood machines that had outlived their heyday. There was Sherlock, an old Cray model, that warmed my heart. There was a sad nameless computer, whose anonymity was partially compensated for by the Warhol soup cans adorning its cage (an homage to Warhol’s Pittsburghian origins).

And where does Libratus live, I asked? Which of these computers is Bridges, the computer that runs the AI Sandholm and I had been discussing?

Bridges, it turned out, isn’t a single computer. It’s a system with processing power beyond comprehension. It takes over two and a half petabytes to run Libratus. A single petabyte is a million gigabytes: You could watch over 13 years of HD video, store 10 billion photos, catalog the contents of the entire Library of Congress word for word. That’s a whole lot of computing power. And that’s only to succeed at heads-up poker, in limited circumstances.

Yet despite the breathtaking computing power at its disposal, Libratus is still severely limited. Yes, it beat its opponents where Claudico failed. But the poker professionals weren’t allowed to use many of the tools of their trade, including the opponent analysis software that they depend on in actual online games. And humans tire. Libratus can churn for a two-week marathon, where the human mind falters.

But there’s still much it can’t do: play more opponents, play live, or win every time. There’s more humanity in poker than Libratus has yet conquered. “There’s this belief that it’s all about statistics and correlations. And we actually don’t believe that,” Nystrom explained as we left Bridges behind. “Once in a while correlations are good, but in general, they can also be really misleading.”

Two years later, the Sandholm lab will produce Pluribus. Pluribus will be able to play against five players—and will run on a single computer. Much of the human edge will have evaporated in a short, very short time. The algorithms have improved, as have the computers. AI, it seems, has gained by leaps and bounds.

So does that mean that, ultimately, the algorithmic can indeed beat out the human, that computation can untangle the web of human interaction by discerning “the little tactics of deception, of asking yourself what is the other man going to think I mean to do,” as von Neumann put it?

Long before I’d spoken to Sandholm, I’d met Kevin Slavin, a polymath of sorts whose past careers have included founding a game design company and an interactive art space and launching the Playful Systems group at MIT’s Media Lab. Slavin has a decidedly different view from the creators of Pluribus. “On the one hand, [von Neumann] was a genius,” Slavin reflects. “But the presumptuousness of it.”

Slavin is firmly on the side of the gambler, who recognizes uncertainty for what it is and is thus able to take calculated risks when necessary, all the while tempering confidence in the outcome. The most you can do is put yourself in the path of luck—but to think you can guess the actual outcome with certainty is a presumptuousness the true poker player forgoes. For Slavin, the wonder of computers is “that they can generate this fabulous, complex randomness.” His opinion of the algorithmic assaults on chance? “This is their moment,” he said. “But it’s the exact opposite of what’s really beautiful about a computer, which is that it can do something that’s actually unpredictable. That, to me, is the magic.”

Will they actually succeed in making the unpredictable predictable, though? That’s what I want to know. Because everything I’ve seen tells me that absolute success is impossible. The deck is not rigged.

“It’s an unbelievable amount of work to get there. What do you get at the end? Let’s say they’re successful. Then we live in a world where there’s no God, agency, or luck,” Slavin responded.

“I don’t want to live there,” he added. “I just don’t want to live there.”

Luckily, it seems that for now, he won’t have to. There are more things in life than are yet written in the algorithms. We have no reliable lie detection software—whether in the face, the skin, or the brain. In a recent test of bluffing in poker, computer face recognition failed miserably. We can get at discomfort, but we can’t get at the reasons for that discomfort: lying, fatigue, stress—they all look much the same. And humans, of course, can also mimic stress where none exists, complicating the picture even further.

Pluribus may turn out to be powerful, but von Neumann’s challenge still stands: The true nature of games, the most human of the human, remains to be conquered.

This article was originally published on Undark. Read the original article.

Image Credit: José Pablo Iglesias / Unsplash Continue reading

Posted in Human Robots