#438769 Will Robots Make Good Friends? ...

In the 2012 film Robot and Frank, the protagonist, a retired cat burglar named Frank, is suffering the early symptoms of dementia. Concerned and guilty, his son buys him a “home robot” that can talk, do household chores like cooking and cleaning, and remind Frank to take his medicine. It’s a robot the likes of which we’re getting closer to building in the real world.

The film follows Frank, who is initially appalled by the idea of living with a robot, as he gradually begins to see the robot as both functionally useful and socially companionable. The film ends with a clear bond between man and machine, such that Frank is protective of the robot when the pair of them run into trouble.

This is, of course, a fictional story, but it challenges us to explore different kinds of human-to-robot bonds. My recent research on human-robot relationships examines this topic in detail, looking beyond sex robots and robot love affairs to examine that most profound and meaningful of relationships: friendship.

My colleague and I identified some potential risks, like the abandonment of human friends for robotic ones, but we also found several scenarios where robotic companionship can constructively augment people’s lives, leading to friendships that are directly comparable to human-to-human relationships.

Philosophy of Friendship
The robotics philosopher John Danaher sets a very high bar for what friendship means. His starting point is the “true” friendship first described by the Greek philosopher Aristotle, which saw an ideal friendship as premised on mutual good will, admiration, and shared values. In these terms, friendship is about a partnership of equals.

Building a robot that can satisfy Aristotle’s criteria is a substantial technical challenge and is some considerable way off, as Danaher himself admits. Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, base their behavior on a library of pre-prepared responses: a humanoid chatbot, rather than a conversational equal. Anyone who’s had a testing back-and-forth with Alexa or Siri will know AI still has some way to go in this regard.

Aristotle also talked about other forms of “imperfect” friendship, such as “utilitarian” and “pleasure” friendships, which are considered inferior to true friendship because they don’t require symmetrical bonding and are often to one party’s unequal benefit. This form of friendship sets a very low bar which some robots, like “sexbots” and robotic pets, clearly already meet.

Artificial Amigos
For some, relating to robots is just a natural extension of relating to other things in our world, like people, pets, and possessions. Psychologists have even observed how people respond naturally and socially towards media artefacts like computers and televisions. Humanoid robots, you’d have thought, are more personable than your home PC.

However, the field of “robot ethics” is far from unanimous on whether we can—or should—develop any form of friendship with robots. For an influential group of UK researchers who charted a set of “ethical principles of robotics,” human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is dishonest and should be treated with caution, if not alarm. For these researchers, wasting emotional energy on entities that can only simulate emotions will always be less rewarding than forming human-to-human bonds.

But people are already developing bonds with basic robots, like vacuum-cleaning and lawn-trimming machines that can be bought for less than the price of a dishwasher. A surprisingly large number of people give these robots pet names—something they don’t do with their dishwashers. Some even take their cleaning robots on holiday.

Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo robot dogs that were dismantled for spare parts, and the squad of US troops who gave a 21-gun salute and awarded medals to a bomb-disposal robot named “Boomer” after it was destroyed in action.

These stories, and the psychological evidence we have so far, make clear that we can extend emotional connections to things that are very different to us, even when we know they are manufactured and pre-programmed. But do those connections constitute a friendship comparable to that shared between humans?

True Friendship?
A colleague and I recently reviewed the extensive literature on human-to-human relationships to try to understand how, and if, the concepts we found could apply to bonds we might form with robots. We found evidence that many coveted human-to-human friendships do not in fact live up to Aristotle’s ideal.

We noted a wide range of human-to-human relationships, from relatives and lovers to parents, carers, service providers, and the intense (but unfortunately one-way) relationships we maintain with our celebrity heroes. Few of these relationships could be described as completely equal and, crucially, they are all destined to evolve over time.

All this means that expecting robots to form Aristotelian bonds with us is to set a standard even human relationships fail to live up to. We also observed forms of social connectedness that are rewarding and satisfying and yet are far from the ideal friendship outlined by the Greek philosopher.

We know that social interaction is rewarding in its own right, and something that, as social mammals, humans have a strong need for. It seems probable that relationships with robots could help to address the deep-seated urge we all feel for social connection—like providing physical comfort, emotional support, and enjoyable social exchanges—currently provided by other humans.

Our paper also discussed some potential risks. These arise particularly in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot—in a care setting, for instance.

These are important concerns, but they’re possibilities and not inevitabilities. In the literature we reviewed we actually found evidence of the opposite effect: robots acting to scaffold social interactions with others, acting as ice-breakers in groups, and helping people to improve their social skills or to boost their self-esteem.

It appears likely that, as time progresses, many of us will simply follow Frank’s path towards acceptance: scoffing at first, before settling into the idea that robots can make surprisingly good companions. Our research suggests that’s already happening—though perhaps not in a way of which Aristotle would have approved.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Andy Kelly on Unsplash

Posted in Human Robots

#438014 Meet Blueswarm, a Smart School of ...

Anyone who’s seen an undersea nature documentary has marveled at the complex choreography that schooling fish display, a darting, synchronized ballet with a cast of thousands.

Those instinctive movements have inspired researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering. The results could improve the performance and dependability of not just underwater robots, but other machines that require decentralized locomotion and organization, such as self-driving cars and robots for space exploration.

The fish collective called Blueswarm was created by a team led by Radhika Nagpal, whose lab is a pioneer in self-organizing systems. The oddly adorable robots can sync their movements like biological fish, taking cues from their plastic-bodied neighbors with no external controls required. Nagpal told IEEE Spectrum that this marks a milestone, demonstrating complex 3D behaviors with implicit coordination in underwater robots.

“Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs,” Nagpal said. “This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”

The research is published in Science Robotics, with Florian Berlinger as first author. Berlinger said the “Bluebot” robots integrate a trio of blue LED lights, a lithium-polymer battery, a pair of cameras, a Raspberry Pi computer, and four controllable fins within a 3D-printed hull. The fish-eye cameras detect the LEDs of their fellow swimmers, and apply a custom algorithm to calculate distance, direction, and heading.

Based on that simple production and detection of LED light, the team proved that Blueswarm could self-organize behaviors, including aggregation, dispersal and circle formation—basically, swimming in a clockwise synchronization. Researchers also simulated a successful search mission, an autonomous Finding Nemo. Using their dispersion algorithm, the robot school spread out until one could detect a red light in the tank. Its blue LEDs then flashed, triggering the aggregation algorithm to gather the school around it. Such a robot swarm might prove valuable in search-and-rescue missions at sea, covering miles of open water and reporting back to its mates.
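
The dispersal and aggregation behaviors can be sketched with simple neighbor-relative rules. The toy 2D simulation below is illustrative only: the real Bluebots sense neighbors visually and swim in 3D, and the `step` function, speed constant, and centroid rule are my assumptions, not the team's published controller.

```python
import math

def step(positions, mode, speed=0.05):
    """One synchronous update of a leaderless toy swarm.

    Each agent reacts only to what it can "see": the centroid of the
    other agents. mode="aggregate" moves toward it; mode="disperse"
    moves away. No leader, no global plan -- implicit coordination.
    """
    updated = []
    for i, (x, y) in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        cx = sum(p[0] for p in others) / len(others)
        cy = sum(p[1] for p in others) / len(others)
        dx, dy = cx - x, cy - y
        dist = math.hypot(dx, dy)
        if dist < 1e-9:
            updated.append((x, y))  # already at the neighbors' centroid
            continue
        move = min(speed, dist)     # don't overshoot the centroid
        sign = 1.0 if mode == "aggregate" else -1.0
        updated.append((x + sign * move * dx / dist,
                        y + sign * move * dy / dist))
    return updated

swarm = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
for _ in range(100):
    swarm = step(swarm, "aggregate")  # the swarm contracts toward its center
```

Switching `mode` to `"disperse"` makes the same rule spread the agents apart, which is the spirit of the search demo: disperse until one robot sees the target, then flash and aggregate.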

“Each Bluebot implicitly reacts to its neighbors’ positions,” Berlinger said. The fish—RoboCod, perhaps?—also integrate a Wi-Fi module to allow uploading new behaviors remotely. The lab’s previous efforts include a 1,000-strong army of “Kilobots,” and a robotic construction crew inspired by termites. Both projects operated in two-dimensional space. But a 3D environment like air or water posed a tougher challenge for sensing and movement.

In nature, Berlinger notes, there’s no scaly CEO to direct the school’s movements. Nor do fish communicate their intentions. Instead, so-called “implicit coordination” guides the school’s collective behavior, with individual members executing high-speed moves based on what they see their neighbors doing. That decentralized, autonomous organization has long fascinated scientists, including in robotics.

“In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system with a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible.”

Berlinger adds the research could one day translate to anything that requires decentralized robots, from self-driving cars and Amazon warehouse vehicles to exploration of faraway planets, where poor latency makes it impossible to transmit commands quickly. Today’s semi-autonomous cars face their own technical hurdles in reliably sensing and responding to their complex environments, including when foul weather obscures onboard sensors or road markers, or when they can’t fix position via GPS. An entire subset of autonomous-car research involves vehicle-to-vehicle (V2V) communications that could give cars a hive mind to guide individual or collective decisions— avoiding snarled traffic, driving safely in tight convoys, or taking group evasive action during a crash that’s beyond their sensory range.

“Once we have millions of cars on the road, there can’t be one computer orchestrating all the traffic, making decisions that work for all the cars,” Berlinger said.

The miniature robots could also work long hours in places that are inaccessible to humans and divers, or even large tethered robots. Nagpal said the synthetic swimmers could monitor and collect data on reefs or underwater infrastructure 24/7, and work into tiny places without disturbing fragile equipment or ecosystems.

“If we could be as good as fish in that environment, we could collect information and be non-invasive, in cluttered environments where everything is an obstacle,” Nagpal said.

#437758 Remotely Operated Robot Takes Straight ...

Roboticists love hard problems. Challenges like the DRC and SubT have helped (and are still helping) to catalyze major advances in robotics, but not all hard problems require a massive amount of DARPA funding—sometimes, a hard problem can just be something very specific that’s really hard for a robot to do, especially relative to the ease with which a moderately trained human might be able to do it. Catching a ball. Putting a peg in a hole. Or using a straight razor to shave someone’s face without Sweeney Todd-izing them.

This particular roboticist who sees straight-razor face shaving as a hard problem that robots should be solving is John Peter Whitney, who we first met back at IROS 2014 in Chicago when (working at Disney Research) he introduced an elegant fluidic actuator system. These actuators use tubes containing a fluid (like air or water) to transmit forces from a primary robot to a secondary robot in a very efficient way that also allows for either compliance or very high fidelity force feedback, depending on the compressibility of the fluid.
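
The compliance-versus-fidelity tradeoff can be illustrated with a toy one-dimensional model: treat the fluid column as a spring coupling the primary (operator) and secondary (remote) stages, with the spring stiffness standing in for the fluid's incompressibility. Every constant and the damping model below are invented for illustration; this is a sketch of the general idea, not Whitney's actual dynamics.

```python
def simulate(k_fluid, steps=2000, dt=1e-3):
    """Toy 1-D model of fluid force transmission between two stages.

    The fluid is a spring of stiffness k_fluid joining primary and
    secondary: a nearly incompressible fluid (water) means high k and
    high-fidelity force feedback; a compressible fluid (air) means low k
    and a softer, compliant feel. Returns the reaction force at the
    operator's hand after the secondary has pressed into a remote wall.
    """
    m = 0.1                  # secondary stage mass (kg), assumed
    c = 20.0                 # viscous damping on the secondary (N*s/m)
    wall, k_wall = 0.05, 5000.0  # obstacle position (m) and stiffness (N/m)
    x_primary = 0.0
    x_secondary, v = 0.0, 0.0
    felt = 0.0
    for _ in range(steps):
        x_primary += 0.1 * dt                        # operator pushes at 0.1 m/s
        f_fluid = k_fluid * (x_primary - x_secondary)
        f_wall = -k_wall * max(x_secondary - wall, 0.0)
        v += (f_fluid + f_wall - c * v) * dt / m     # semi-implicit Euler
        x_secondary += v * dt
        felt = f_fluid                               # force reflected to the hand
    return felt

soft = simulate(k_fluid=200.0)      # air-like coupling: wall feels mushy
stiff = simulate(k_fluid=20000.0)   # water-like coupling: wall feels sharp
```

With the stiff (water-like) coupling, far more of the wall's reaction force reaches the operator for the same hand motion, which is the property that let Whitney's barber feel the razor through the tubes.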

Photo: John Peter Whitney/Northeastern University

Barber meets robot: Boston-based barber Jesse Cabbage [top, right] observes the machine created by roboticist John Peter Whitney. Before testing the robot on Whitney’s face, they used his arm for a quick practice [bottom].

Whitney is now at Northeastern University, in Boston, and he recently gave a talk at the RSS workshop on “Reacting to Contact,” where he suggested that straight razor shaving would be an interesting and valuable problem for robotics to work toward, due to its difficulty and requirement for an extremely high level of both performance and reliability.

Now, a straight razor is sort of like a safety razor, except with the safety part removed, which in fact does make it significantly less safe for humans, much less robots. Also not ideal for those worried about safety is that as part of the process the razor ends up in distressingly close proximity to things like the artery that is busily delivering your brain’s entire supply of blood, which is very close to the top of the list of things that most people want to keep blades very far away from. But that didn’t stop Whitney from putting his whiskers where his mouth is and letting his robotic system mediate the ministrations of a professional barber. It’s not an autonomous robotic straight-razor shave (because Whitney is not totally crazy), but it’s a step in that direction, and requires that the hardware Whitney developed be dead reliable.

Perhaps that was a poor choice of words. But, rest assured that Whitney lived long enough to answer our questions after. Here’s the video; it’s part of a longer talk, but it should start in the right spot, at about 23:30.

If Whitney looked a little bit nervous to you, that’s because he was. “This was the first time I’d ever been shaved by someone (something?!) else with a straight razor,” he told us, and while having a professional barber at the helm was some comfort, “the lack of feeling and control on my part was somewhat unsettling.” Whitney says that the barber, Jesse Cabbage of Dentes Barbershop in Somerville, Mass., was surprised by how well he could feel the tactile sensations being transmitted from the razor. “That’s one of the reasons we decided to make this video,” Whitney says. “I can’t show someone how something feels, so the next best thing is to show a delicate task that either from experience or intuition makes it clear to the viewer that the system must have these properties—otherwise the task wouldn’t be possible.”

And as for when Whitney might be comfortable getting shaved by a robotic system without a human in the loop? It’s going to take a lot of work, as do most other hard problems in robotics. “There are two parts to this,” he explains. “One is fault-tolerance of the components themselves (software, electronics, etc.) and the second is the quality of the perception and planning algorithms.”

He offers a comparison to self-driving cars, in which similar (or greater) risks are incurred: “To learn how to perceive, interpret, and adapt, we need a very high-fidelity model of the problem, or a wealth of data and experience, or both” he says. “But in the case of shaving we are greatly lacking in both!” He continues with the analogy: “I think there is a natural progression—the community started with autonomous driving of toy cars on closed courses and worked up to real cars carrying human passengers; in robotic manipulation we are beginning to move out of the ‘toy car’ stage and so I think it’s good to target high-consequence hard problems to help drive progress.”

Of course, the ultimate goal here is much more general than the creation of a dedicated straight razor shaving robot; it’s a challenge that includes a host of sub-goals that will benefit robotics more generally. This particular hardware system Whitney is developing is actually a testbed for exploring MRI-compatible remote needle biopsy, and he and his students are collaborating with Brigham and Women’s Hospital in Boston on adapting this technology to prostate biopsy and ablation procedures. They’re also exploring how delicate touch can be used as a way to map an environment and localize within it, especially where using vision may not be a good option. “These traits and behaviors are especially interesting for applications where we must interact with delicate and uncertain environments,” says Whitney. “Medical robots, assistive and rehabilitation robots and exoskeletons, and shared-autonomy teleoperation for delicate tasks.”
A paper with more details on this robotic system, “Series Elastic Force Control for Soft Robotic Fluid Actuators,” is available on arXiv.

#437749 Video Friday: NASA Launches Its Most ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Virtual Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

Yesterday was a big day for what was quite possibly the most expensive robot on Earth up until it wasn’t on Earth anymore.

Perseverance and the Ingenuity helicopter are expected to arrive on Mars early next year.

[ JPL ]

ICYMI, our most popular post this week featured Northeastern University roboticist John Peter Whitney literally putting his neck on the line for science! He was testing a remotely operated straight razor shaving robotic system powered by fluidic actuators. The cutting-edge (sorry!) device transmits forces from a primary stage, operated by a barber, to a secondary stage, with the razor attached.

[ John Peter Whitney ]

Together with Boston Dynamics, Ford is introducing a pilot program into our Van Dyke Transmission Plant. Say hello to Fluffy the Robot Dog, who creates fast and accurate 3D scans that help Ford engineers when we’re retooling our plants.

Not shown in the video: “At times, Fluffy sits on its robotic haunches and rides on the back of a small, round Autonomous Mobile Robot, known informally as Scouter. Scouter glides smoothly up and down the aisles of the plant, allowing Fluffy to conserve battery power until it’s time to get to work. Scouter can autonomously navigate facilities while scanning and capturing 3-D point clouds to generate a CAD model of the facility. If an area is too tight for Scouter, Fluffy comes to the rescue.”

[ Ford ]

There is a thing that happens at 0:28 in this video that I have questions about.

[ Ghost Robotics ]

Pepper is far more polite about touching than most humans.

[ Paper ]

We don’t usually post pure simulation videos unless they give us something to get really, really excited about. So here’s a pure simulation video.

[ Hybrid Robotics ]

University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.

[ DRSL ]

HMI is making beastly electric arms work underwater, even if they’re not stapled to a robotic submarine.

[ HMI ]

Here’s some interesting work in progress from MIT’s Biomimetics Robotics Lab. The limb is acting as a “virtual magnet” using a bimodal force and direction sensor.

Thanks Peter!

[ MIT Biomimetics Lab ]

This is adorable but as a former rabbit custodian I can assure you that approximately 3 seconds after this video ended, all of the wires on that robot were chewed to bits.

[ Lingkang Zhang ]

During the ARCHE 2020 integration week, TNO and the ETH Robotic Systems Lab (RSL) collaborated to integrate their research and development processes using the Articulated Locomotion and MAnipulation (ALMA) robot. Alongside the software integration, we tested the software to confirm proper implementation and captured visual and auditory data for future development. This all resulted in multiple demos showing the capabilities of the teleoperation framework on the ALMA robot.

[ RSL ]

When we talk about practical applications of quadrupedal robots with wheeled feet, we don’t usually think about them on this scale, although we should.

[ RSL ]

Juan wrote in to share a DIY quadruped that he’s been working on, named CHAMP.

Juan says that the demo robot can be built in less than US $1000 with easily accessible parts. “I hope that my project can provide a more accessible platform for students, researchers, and enthusiasts who are interested to learn more about quadrupedal robot development and its underlying technology.”

[ CHAMP ]

Thanks Juan!

Here’s a New Zealand TV report about a study on robot abuse from Christoph Bartneck at the University of Canterbury.

[ Paper ]

Our Robotics Studio is a hands-on class exposing students to practical aspects of the design, fabrication, and programming of physical robotic systems. So what happens when the class goes virtual due to COVID-19? Things get physical — all @ home.

[ Columbia ]

A few videos from the Supernumerary Robotic Devices Workshop, held online earlier this month.

“Handheld Robots: Bridging the Gap between Fully External and Wearable Robots,” presented by Walterio Mayol-Cuevas, University of Bristol.

“Playing the Piano with 11 Fingers: The Neurobehavioural Constraints of Human Robot Augmentation,” presented by Aldo Faisal, Imperial College London.

[ Workshop ]

#437303 The Deck Is Not Rigged: Poker and the ...

Tuomas Sandholm, a computer scientist at Carnegie Mellon University, is not a poker player—or much of a poker fan, in fact—but he is fascinated by the game for much the same reason as the great game theorist John von Neumann before him. Von Neumann, who died in 1957, viewed poker as the perfect model for human decision making, for finding the balance between skill and chance that accompanies our every choice. He saw poker as the ultimate strategic challenge, combining as it does not just the mathematical elements of a game like chess but the uniquely human, psychological angles that are more difficult to model precisely—a view shared years later by Sandholm in his research with artificial intelligence.

“Poker is the main benchmark and challenge program for games of imperfect information,” Sandholm told me on a warm spring afternoon in 2018, when we met in his offices in Pittsburgh. The game, it turns out, has become the gold standard for developing artificial intelligence.

Tall and thin, with wire-frame glasses and neat brown hair framing a friendly face, Sandholm is behind the creation of three computer programs designed to test their mettle against human poker players: Claudico, Libratus, and most recently, Pluribus. (When we met, Libratus was still a toddler and Pluribus didn’t yet exist.) The goal isn’t to solve poker, as such, but to create algorithms whose decision-making prowess in poker’s world of imperfect information and stochastic situations—situations that are randomly determined and unable to be predicted—can then be applied to other stochastic realms, like the military, business, government, cybersecurity, even health care.

While the first program, Claudico, was summarily beaten by human poker players—“one broke-ass robot,” an observer called it—Libratus has triumphed in a series of one-on-one, or heads-up, matches against some of the best online players in the United States.

Libratus relies on three main modules. The first involves a basic blueprint strategy for the whole game, allowing it to reach a much faster equilibrium than its predecessor. It includes an algorithm called Monte Carlo Counterfactual Regret Minimization, which evaluates all future actions to figure out which one would cause the least amount of regret. Regret, of course, is a human emotion. Regret for a computer simply means realizing that an action that wasn’t chosen would have yielded a better outcome than one that was. “Intuitively, regret represents how much the AI regrets having not chosen that action in the past,” says Sandholm. The higher the regret, the higher the chance of choosing that action next time.
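
The regret-to-strategy loop can be seen in miniature with plain regret matching on matching pennies. This is a deliberately tiny sketch of the regret-matching idea, not the Monte Carlo CFR algorithm Libratus actually runs over poker's vast game tree:

```python
import random

def regret_matching(regrets):
    """Higher cumulative regret for an action -> higher chance of choosing it next."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total == 0:
        return [1.0 / len(regrets)] * len(regrets)  # no positive regret: play uniform
    return [p / total for p in positives]

def train(payoff, iterations=50000, seed=1):
    """Self-play regret matching on a 2-action zero-sum game (row player's payoffs)."""
    rng = random.Random(seed)
    regrets = [[0.0, 0.0], [0.0, 0.0]]
    strategy_sums = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(iterations):
        s0 = regret_matching(regrets[0])
        s1 = regret_matching(regrets[1])
        a0 = 0 if rng.random() < s0[0] else 1
        a1 = 0 if rng.random() < s1[0] else 1
        u0 = [payoff[a][a1] for a in (0, 1)]   # row's utility for each action
        u1 = [-payoff[a0][a] for a in (0, 1)]  # zero-sum: column's utility
        for a in (0, 1):
            # "How much do I regret not having chosen action a?"
            regrets[0][a] += u0[a] - u0[a0]
            regrets[1][a] += u1[a] - u1[a1]
            strategy_sums[0][a] += s0[a]
            strategy_sums[1][a] += s1[a]
    # The *average* strategy is what converges toward equilibrium play.
    return [[s / iterations for s in sums] for sums in strategy_sums]

# Matching pennies: row wins on a match. The only equilibrium is 50/50 for both.
avg = train([[1, -1], [-1, 1]])
```

Each player's per-iteration strategies wander, but the running average settles near the 50/50 equilibrium, which is the basic mechanism behind building a blueprint strategy from accumulated regret.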

It’s a useful way of thinking—but one that is incredibly difficult for the human mind to implement. We are notoriously bad at anticipating our future emotions. How much will we regret doing something? How much will we regret not doing something else? For us, it’s an emotionally laden calculus, and we typically fail to apply it in quite the right way. For a computer, it’s all about the computation of values. What does it regret not doing the most, the thing that would have yielded the highest possible expected value?

The second module is a sub-game solver that takes into account the mistakes the opponent has made so far and accounts for every hand she could possibly have. And finally, there is a self-improver. This is the area where data and machine learning come into play. It’s dangerous to try to exploit your opponent—it opens you up to the risk that you’ll get exploited right back, especially if you’re a computer program and your opponent is human. So instead of attempting to do that, the self-improver lets the opponent’s actions inform the areas where the program should focus. “That lets the opponent’s actions tell us where [they] think they’ve found holes in our strategy,” Sandholm explained. This allows the algorithm to develop a blueprint strategy to patch those holes.

It’s a very human-like adaptation, if you think about it. I’m not going to try to outmaneuver you head on. Instead, I’m going to see how you’re trying to outmaneuver me and respond accordingly. Sun Tzu would surely approve. Watch how you’re perceived, not how you perceive yourself—because in the end, you’re playing against those who are doing the perceiving, and their opinion, right or not, is the only one that matters when you craft your strategy. Overnight, the algorithm patches up its overall approach according to the resulting analysis.

There’s one final thing Libratus is able to do: play in situations with unknown probabilities. There’s a concept in game theory known as the trembling hand: There are branches of the game tree that, under an optimal strategy, one should theoretically never get to; but with some probability, your all-too-human opponent’s hand trembles, they take a wrong action, and you’re suddenly in a totally unmapped part of the game. Before, that would spell disaster for the computer: An unmapped part of the tree means the program no longer knows how to respond. Now, there’s a contingency plan.

Of course, no algorithm is perfect. When Libratus is playing poker, it’s essentially working in a zero-sum environment. It wins, the opponent loses. The opponent wins, it loses. But while some real-life interactions really are zero-sum—cyber warfare comes to mind—many others are not nearly as straightforward: My win does not necessarily mean your loss. The pie is not fixed, and our interactions may be more positive-sum than not.

What’s more, real-life applications have to contend with something that a poker algorithm does not: the weights that are assigned to different elements of a decision. In poker, this is a simple value-maximizing process. But what is value in the human realm? Sandholm had to contend with this before, when he helped craft the world’s first kidney exchange. Do you want to be more efficient, giving the maximum number of kidneys as quickly as possible—or more fair, which may come at a cost to efficiency? Do you want as many lives as possible saved—or do some take priority at the cost of reaching more? Is there a preference for the length of the wait until a transplant? Do kids get preference? And on and on. It’s essential, Sandholm says, to separate means and the ends. To figure out the ends, a human has to decide what the goal is.

“The world will ultimately become a lot safer with the help of algorithms like Libratus,” Sandholm told me. I wasn’t sure what he meant. The last thing that most people would do is call poker, with its competition, its winners and losers, its quest to gain the maximum edge over your opponent, a haven of safety.

“Logic is good, and the AI is much better at strategic reasoning than humans can ever be,” he explained. “It’s taking out irrationality, emotionality. And it’s fairer. If you have an AI on your side, it can lift non-experts to the level of experts. Naïve negotiators will suddenly have a better weapon. We can start to close off the digital divide.”

It was an optimistic note to end on—a zero-sum, competitive game yielding a more ultimately fair and rational world.

I wanted to learn more, to see if it was really possible that mathematics and algorithms could ultimately be the future of more human, more psychological interactions. And so, later that day, I accompanied Nick Nystrom, the chief scientist of the Pittsburgh Supercomputing Center—the place that runs all of Sandholm’s poker-AI programs—to the actual processing center that makes undertakings like Libratus possible.

A half-hour drive found us in a parking lot by a large glass building. I’d expected something more futuristic, not the same corporate glass boxes I’ve seen countless times before. The inside, however, was more promising. First the security checkpoint. Then the ride in the elevator—down, not up—to roughly three stories below ground, where we found ourselves in a maze of corridors with card readers at every juncture to make sure you don’t slip through undetected. A red-lit panel formed the final barrier, leading to a small sliver of space between two sets of doors. I could hear a loud hum coming from the far side.

“Let me tell you what you’re going to see before we walk in,” Nystrom told me. “Once we get inside, it will be too loud to hear.”

I was about to witness the heart of the supercomputing center: 27 large containers, in neat rows, each housing multiple processors with speeds and abilities too great for my mind to wrap around. Inside, the temperature is by turns arctic and tropic, so-called “cold” rows alternating with “hot”—fans operate around the clock to cool the processors as they churn through gigabytes, terabytes, petabytes, and ever-greater scales of data. In the cold rows, robotic-looking lights blink green and blue in orderly progression. In the hot rows, a jumble of multicolored wires crisscrosses in tangled skeins.

In the corners stood machines that had outlived their heyday. There was Sherlock, an old Cray model, that warmed my heart. There was a sad nameless computer, whose anonymity was partially compensated for by the Warhol soup cans adorning its cage (an homage to Warhol’s Pittsburghian origins).

And where, I asked, does Libratus live? Which of these computers is Bridges, the computer that runs the AI Sandholm and I had been discussing?

Bridges, it turned out, isn’t a single computer. It’s a system with processing power beyond comprehension. It takes over two and a half petabytes to run Libratus. A single petabyte is a million gigabytes: You could watch over 13 years of HD video, store 10 billion photos, catalog the contents of the entire Library of Congress word for word. That’s a whole lot of data. And that’s only to succeed at heads-up poker, in limited circumstances.
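As a back-of-envelope check on that petabyte comparison, here is a minimal sketch; the HD bitrate is our assumption, not a figure from the article:

```python
# Sanity-check the claim that one petabyte holds "over 13 years" of HD video.
PETABYTE_GB = 1_000_000   # 1 petabyte = one million gigabytes
GB_PER_HOUR_HD = 8.5      # assumed HD bitrate (~19 Mbit/s); actual rates vary

hours_of_video = PETABYTE_GB / GB_PER_HOUR_HD
years_of_video = hours_of_video / (24 * 365)

print(f"~{years_of_video:.1f} years of continuous HD video per petabyte")
```

At that assumed bitrate the figure comes out to roughly 13 years of round-the-clock viewing, consistent with the article’s claim; a lower bitrate would stretch it further.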

Yet despite the breathtaking computing power at its disposal, Libratus is still severely limited. Yes, it beat its opponents where Claudico failed. But the poker professionals weren’t allowed to use many of the tools of their trade, including the opponent analysis software that they depend on in actual online games. And humans tire. Libratus can churn for a two-week marathon, where the human mind falters.

But there’s still much it can’t do: play more opponents, play live, or win every time. There’s more humanity in poker than Libratus has yet conquered. “There’s this belief that it’s all about statistics and correlations. And we actually don’t believe that,” Nystrom explained as we left Bridges behind. “Once in a while correlations are good, but in general, they can also be really misleading.”

Two years later, the Sandholm lab will produce Pluribus. Pluribus will be able to play against five players—and will run on a single computer. Much of the human edge will have evaporated in a very short time. The algorithms have improved, as have the computers. AI, it seems, has gained by leaps and bounds.

So does that mean that, ultimately, the algorithmic can indeed beat out the human, that computation can untangle the web of human interaction by discerning “the little tactics of deception, of asking yourself what is the other man going to think I mean to do,” as von Neumann put it?

Long before I’d spoken to Sandholm, I’d met Kevin Slavin, a polymath of sorts whose past careers have included founding a game design company and an interactive art space and launching the Playful Systems group at MIT’s Media Lab. Slavin has a decidedly different view from the creators of Pluribus. “On the one hand, [von Neumann] was a genius,” Slavin reflects. “But the presumptuousness of it.”

Slavin is firmly on the side of the gambler, who recognizes uncertainty for what it is and thus is able to take calculated risks when necessary, all the while tempering confidence in the outcome. The most you can do is put yourself in the path of luck—but to think you can guess the actual outcome with certainty is a presumptuousness the true poker player forgoes. For Slavin, the wonder of computers is “that they can generate this fabulous, complex randomness.” His opinion of the algorithmic assaults on chance? “This is their moment,” he said. “But it’s the exact opposite of what’s really beautiful about a computer, which is that it can do something that’s actually unpredictable. That, to me, is the magic.”

Will they actually succeed in making the unpredictable predictable, though? That’s what I want to know. Because everything I’ve seen tells me that absolute success is impossible. The deck is not rigged.

“It’s an unbelievable amount of work to get there. What do you get at the end? Let’s say they’re successful. Then we live in a world where there’s no God, agency, or luck,” Slavin responded.

“I don’t want to live there,” he added. “I just don’t want to live there.”

Luckily, it seems that for now, he won’t have to. There are more things in life than are yet written in the algorithms. We have no reliable lie detection software—whether in the face, the skin, or the brain. In a recent test of bluffing in poker, computer face recognition failed miserably. We can get at discomfort, but we can’t get at the reasons for that discomfort: lying, fatigue, stress—they all look much the same. And humans, of course, can also mimic stress where none exists, complicating the picture even further.

Pluribus may turn out to be powerful, but von Neumann’s challenge still stands: The true nature of games, the most human of the human, remains to be conquered.

This article was originally published on Undark. Read the original article.

Image Credit: José Pablo Iglesias / Unsplash