Tag Archives: fighting

#436484 If Machines Want to Make Art, Will ...

Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?

Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.

But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.

We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.

But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.

Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.

Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but, despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even if using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music, or from pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.

Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.

The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Rene Böhmer / Unsplash

Posted in Human Robots

#436079 Video Friday: This Humanoid Robot Will ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Northeast Robotics Colloquium – October 12, 2019 – Philadelphia, Pa., USA
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

What’s better than a robotics paper with “dynamic” in the title? A robotics paper with “highly dynamic” in the title. From Sangbae Kim’s lab at MIT, the latest exploits of Mini Cheetah:

Yes I’d very much like one please. Full paper at the link below.

[ Paper ] via [ MIT ]

A humanoid robot serving you ice cream—on his own ice cream bike: What a delicious vision!

[ Roboy ]

The Roomba “i” series and “s” series vacuums have just gotten an update that lets you set “keep out” zones, which is super useful. Tell your robot where not to go!

I feel bad, that Roomba was probably just hungry 🙁

[ iRobot ]

We wrote about Voliro’s tilt-rotor hexcopter a couple years ago, and now it’s off doing practical things, like spray painting a building pretty much the same color that it was before.

[ Voliro ]

Thanks Mina!

Here’s a clever approach for bin-picking problematic objects, like shiny things: Just grab a whole bunch, and then sort out what you need on a nice robot-friendly table.

It might take a little bit longer, but what do you care, you’re probably off sipping a cocktail with a little umbrella in it on a beach somewhere.

[ Harada Lab ]

A unique combination of the IRB 1200 and YuMi industrial robots that use vision, AI and deep learning to recognize and categorize trash for recycling.

[ ABB ]

Measuring glacial movements in-situ is a challenging but necessary task to model glaciers and predict their future evolution. However, installing GPS stations on ice can be dangerous and expensive, if not impossible in the presence of large crevasses. In this project, the ASL develops UAVs for dropping and recovering lightweight GPS stations over inaccessible glaciers to record the ice flow motion. This video shows the results of first tests performed at Gorner glacier, Switzerland, in July 2019.

[ EPFL ]

Turns out Tertills actually do a pretty great job fighting weeds.

Plus, they leave all those cute lil’ Tertill tracks.

[ Franklin Robotics ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.

The resulting map is so precise that it looks like we are doing real-time SLAM (simultaneous localization and mapping). In fact, the map is based on dead-reckoning via the InvEKF.
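For readers curious what dead reckoning means in practice, here is a minimal planar sketch of the idea: propagating a pose purely from proprioceptive velocity estimates, with no external position fixes. It is only an illustration, not the Michigan team's code; the InvEKF used on Cassie fuses IMU and leg kinematics and tracks uncertainty on a matrix Lie group, none of which appears here, and every number below is made up.

```python
import numpy as np

def dead_reckon_step(pose, v_body, yaw_rate, dt):
    """Propagate a planar pose (x, y, yaw) from a body-frame forward velocity and yaw rate."""
    x, y, yaw = pose
    x += v_body * np.cos(yaw) * dt
    y += v_body * np.sin(yaw) * dt
    yaw += yaw_rate * dt
    return np.array([x, y, yaw])

# Toy trajectory: constant forward speed with a gentle turn, integrated at 100 Hz.
pose = np.zeros(3)
for _ in range(1000):
    pose = dead_reckon_step(pose, v_body=0.5, yaw_rate=0.1, dt=0.01)
print(pose)  # accumulated pose estimate; drift grows over time without external corrections
```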

[ GTSAM ] via [ University of Michigan ]

UBTECH has announced an upgraded version of its Meebot, which is 30 percent bigger and comes with more sensors and programmable eyes.

[ UBTECH ]

ABB’s research team will be working with medical staff, scientists, and engineers to develop non-surgical medical robotics systems, including logistics and next-generation automated laboratory technologies. The team will develop robotics solutions that will help eliminate bottlenecks in laboratory work and address the global shortage of skilled medical staff.

[ ABB ]

In this video, Ian and Chris go through Misty’s SDK, discussing the languages we’ve included, the tools that make it easy for you to get started quickly, a quick rundown of how to run the skills you build, plus what’s ahead on the Misty SDK roadmap.

[ Misty Robotics ]

My guess is that this was not one of iRobot’s testing environments for the Roomba.

You know, that’s actually super impressive. And maybe if they threw one of the self-emptying Roombas in there, it would be a viable solution to the entire problem.

[ How Farms Work ]

Part of WeRobotics’ Flying Labs network, Panama Flying Labs is a local knowledge hub catalyzing social good and empowering local experts. Through training and workshops, demonstrations and missions, the Panama Flying Labs team leverages the power of drones, data, and AI to promote entrepreneurship, build local capacity, and confront the pressing social challenges faced by communities in Panama and across Central America.

[ Panama Flying Labs ]

Go on a virtual flythrough of the NIOSH Experimental Mine, one of two courses used in the recent DARPA Subterranean Challenge Tunnel Circuit Event held 15-22 August, 2019. The data used for this partial flythrough tour were collected using 3D LIDAR sensors similar to the sensors commonly used on autonomous mobile robots.

[ SubT ]

Special thanks to PBS, Mark Knobil, Joe Seamans, Stan Brandorff, and the many others who produced this program in 1991.

It features Reid Simmons (and his 1-year-old son), David Wettergreen, Red Whittaker, Mac Macdonald, Omead Amidi, and other Field Robotics Center alumni building the planetary walker prototype called Ambler. The team gets ready for an important demo for NASA.

[ CMU RI ]

As art and technology merge, roboticist Madeline Gannon explores the frontiers of human-robot interaction across the arts, sciences and society, and explores what this could mean for the future.

[ Sonar+D ]

Posted in Human Robots

#435828 Video Friday: Boston Dynamics’ ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, Calif., USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

You’ve almost certainly seen the new Spot and Atlas videos from Boston Dynamics, if for no other reason than we posted about Spot’s commercial availability earlier this week. But what, are we supposed to NOT include them in Video Friday anyway? Psh! Here you go:

[ Boston Dynamics ]

Eight deadly-looking robots. One Giant Nut trophy. Tonight is the BattleBots season finale, airing on Discovery, 8 p.m. ET, or check your local channels.

[ BattleBots ]

Thanks Trey!

Speaking of battling robots… Having giant robots fight each other is one of those things that sounds really great in theory, but doesn’t work out so well in reality. And sadly, MegaBots is having to deal with reality, which means putting their giant fighting robot up on eBay.

As of Friday afternoon, the current bid is just over $100,000 with a week to go.

[ MegaBots ]

Michigan Engineering has figured out the secret formula to getting 150,000 views on YouTube: drone plus nail gun.

[ Michigan Engineering ]

Michael Burke from the University of Edinburgh writes:

We’ve been learning to scoop grapefruit segments using a PR2, by “feeling” the difference between peel and pulp. We use joint torque measurements to predict the probability that the knife is in the peel or pulp, and use this to apply feedback control to a nominal cutting trajectory learned from human demonstration, so that we remain in a position of maximum uncertainty about which medium we’re cutting. This means we slice along the boundary between the two mediums. It works pretty well!
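As a rough illustration of that idea, here is a minimal sketch (not the authors' actual implementation): a simple classifier maps joint-torque features to the probability that the knife is in peel, and a proportional correction nudges the cutting depth so that this probability stays near 0.5, the point of maximum uncertainty, which corresponds to riding the peel/pulp boundary. All parameters, the torque model, and the trajectory are hypothetical placeholders.

```python
import numpy as np

def peel_probability(torque, w, b):
    """Logistic model mapping joint-torque features to P(knife is in peel).
    In practice w and b would be fit from labeled cutting data; here they are placeholders."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, torque) + b)))

def boundary_offset(p_peel, offset, gain=0.002):
    """Proportional correction toward the point of maximum classifier uncertainty (P = 0.5).
    Convention: positive offset moves the knife toward the centre of the fruit, so confident
    'peel' readings push inward and confident 'pulp' readings push outward."""
    return offset + gain * (p_peel - 0.5)

# Toy control loop over a nominal trajectory learned from demonstration (placeholder waypoints).
nominal_trajectory = [np.array([0.30, 0.00, 0.05 + 0.001 * k]) for k in range(50)]
w, b = np.array([0.8, -0.3, 0.5, 0.1, 0.0, 0.2, 0.4]), -1.0  # hypothetical classifier parameters
offset = 0.0
for waypoint in nominal_trajectory:
    torque = np.random.randn(7)            # stand-in for measured joint torques on a 7-DoF arm
    p = peel_probability(torque, w, b)
    offset = boundary_offset(p, offset)
    command = waypoint + np.array([0.0, offset, 0.0])  # adjust the cut depth along one axis
    # send `command` to the robot's Cartesian controller here
```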

[ Paper ] via [ Robust Autonomy and Decisions Group ]

Thanks Michael!

Hey look, it’s Jan with eight EMYS robot heads. Hi, Jan! Hi, EMYSes!

[ EMYS ]

We’re putting the KRAKEN Arm through its paces, demonstrating that it can unfold from an Express Rack locker on the International Space Station and access neighboring lockers in NASA’s FabLab system to enable transfer of materials and parts between manufacturing, inspection, and storage stations. The KRAKEN arm will be able to change between multiple ’end effector’ tools such as grippers and inspection sensors – those are in development so they’re not shown in this video.

[ Tethers Unlimited ]

UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is offering healthcare services to children at Praktijk Intraverte Multidisciplinary Institution in the Netherlands.

The institution is using Alpha Mini in counseling children on their behavior. Alpha Mini can move and talk to children and offers games and activities to stimulate and interact with them. By talking to, helping, and motivating children, Alpha Mini helps them become more flexible in society.

[ UBTECH ]

Some impressive work here from Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar at Google Brain, training a dexterous multi-fingered hand to do that thing with two balls that I’m really bad at.

Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills — and that too, on a 24-DoF anthropomorphic hand in the real world, using just 2-4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects.
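To make the control side of this concrete, here is a heavily simplified sketch of sampling-based model-predictive control with a learned dynamics model. It is not the authors' PDDM code: PDDM refines its sampling distribution with reward-weighted, temporally correlated updates and uses an ensemble of neural-network models, whereas this toy uses plain random shooting and a stand-in dynamics function. Every function, dimension, and number below is illustrative.

```python
import numpy as np

def learned_dynamics(state, action):
    """Stand-in for a learned neural-network model f(s, a) -> s'.
    In a PDDM-style pipeline this would be an ensemble of networks trained on real robot data."""
    return state + 0.1 * np.tanh(action)  # toy dynamics, purely for illustration

def reward(state):
    """Hypothetical task reward: keep the state close to a target value of 1."""
    return -np.sum((state - 1.0) ** 2)

def mpc_random_shooting(state, horizon=10, n_candidates=256, action_dim=2, rng=None):
    """One planning step: sample candidate action sequences, roll them through the learned
    model, score the imagined trajectories, and return the first action of the best one."""
    if rng is None:
        rng = np.random.default_rng(0)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        s = state.copy()
        for a in seq:
            s = learned_dynamics(s, a)
            returns[i] += reward(s)
    return candidates[np.argmax(returns)][0]

# Closed loop: replan at every step from the latest state (on a real robot, the measured state).
state = np.zeros(2)
for t in range(20):
    action = mpc_random_shooting(state)
    state = learned_dynamics(state, action)
```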

[ PDDM ]

Thanks Vikash!

CMU’s Ballbot has a deceptively light touch that’s ideal for leading people around.

A paper on this has been submitted to IROS 2019.

[ CMU ]

The Autonomous Robots Lab at the University of Nevada is sharing some of the work they’ve done on path planning and exploration for aerial robots during the DARPA SubT Challenge.

[ Autonomous Robots Lab ]

More proof that anything can be a drone if you staple some motors to it. Even 32 feet of styrofoam insulation.

[ YouTube ]

Whatever you think of military drones, we can all agree that they look cool.

[ Boeing ]

I appreciate the fact that iCub has eyelids, I really do, but sometimes, it ends up looking kinda sleepy in research videos.

[ EPFL LASA ]

Video shows autonomous flight of a lightweight aerial vehicle outdoors and indoors on the campus of Carnegie Mellon University. The vehicle is equipped with limited onboard sensing from a front-facing camera and a proximity sensor. The aerial autonomy is enabled by utilizing a 3D prior map built in Step 1.

[ CMU ]

The Stanford Space Robotics Facility allows researchers to test innovative guidance and navigation algorithms on a realistic frictionless, underactuated system.

[ Stanford ASL ]

In this video, Ian and CP discuss Misty’s many capabilities including robust locomotion, obstacle avoidance, 3D mapping/SLAM, face detection and recognition, sound localization, hardware extensibility, photo and video capture, and programmable personality. They also talk about some of the skills he’s built using these capabilities (and others) and how those skills can be expanded upon by you.

[ Misty Robotics ]

This week’s CMU RI Seminar comes from Aaron Parness at Caltech and NASA JPL, on “Robotic Grippers for Planetary Applications.”

The previous generation of NASA missions to the outer solar system discovered salt water oceans on Europa and Enceladus, each with more liquid water than Earth – compelling targets to look for extraterrestrial life. Closer to home, JAXA and NASA have imaged sky-light entrances to lava tube caves on the Moon more than 100 m in diameter and ESA has characterized the incredibly varied and complex terrain of Comet 67P. While JPL has successfully landed and operated four rovers on the surface of Mars using a 6-wheeled rocker-bogie architecture, future missions will require new mobility architectures for these extreme environments. Unfortunately, the highest value science targets often lie in the terrain that is hardest to access. This talk will explore robotic grippers that enable missions to these extreme terrains through their ability to grip a wide variety of surfaces (shapes, sizes, and geotechnical properties). To prepare for use in space where repair or replacement is not possible, we field-test these grippers and robots in analog extreme terrain on Earth. Many of these systems are enabled by advances in autonomy. The talk will present a rapid overview of my work and a detailed case study of an underactuated rock gripper for deflecting asteroids.

[ CMU ]

Rod Brooks gives some of the best robotics talks ever. He gave this one earlier this week at UC Berkeley, on “Steps Toward Super Intelligence and the Search for a New Path.”

[ UC Berkeley ]

Posted in Human Robots

#435626 Video Friday: Watch Robots Make a Crepe ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. Every week, we also post a calendar of upcoming robotics events; here's what we have so far (send us your events!):

Robotronica – August 18, 2019 – Brisbane, Australia
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today's videos.

Team CoSTAR (JPL, MIT, Caltech, KAIST, LTU) has one of the more diverse teams of robots that we’ve seen:

[ Team CoSTAR ]

A team from Carnegie Mellon University and Oregon State University is sending ground and aerial autonomous robots into a Pittsburgh-area mine to prepare for this month’s DARPA Subterranean Challenge.

“Look at that fire extinguisher, what a beauty!” Expect to hear a lot more of that kind of weirdness during SubT.

[ CMU ]

Unitree Robotics is starting to batch-manufacture Laikago Pro quadrupeds, and if you buy four of them, they can carry you around in a chair!

I’m also really liking these videos from companies that are like, “We have a whole bunch of robot dogs now—what weird stuff can we do with them?”

[ Unitree Robotics ]

Why take a handful of pills every day for all the stuff that's wrong with you, when you could take one custom pill instead? Because custom pills are time-consuming to make, that’s why. But robots don’t care!

Multiply Labs’ factory is designed to operate in parallel. All the filling robots and all the quality-control robots are operating at the same time. The robotic arm, meanwhile, shuttles dozens of trays up and down the production floor, making sure that each capsule is filled with the right drugs. The manufacturing cell shown in this article can produce 10,000 personalized capsules in an 8-hour shift. A single cell occupies just 128 square feet (12 square meters) on the production floor. This means that a regular production facility (~10,000 square feet, or 929 square meters) can house 78 cells, for an overall output of 780,000 capsules per shift. This exceeds the output of most traditional manufacturers—while producing unique personalized capsules!
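The throughput claim follows directly from the cell footprint; a quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Sanity check of the quoted throughput figures.
capsules_per_cell_per_shift = 10_000
cell_footprint_sqft = 128
facility_sqft = 10_000

cells_per_facility = facility_sqft // cell_footprint_sqft               # 78 cells
capsules_per_shift = cells_per_facility * capsules_per_cell_per_shift   # 780,000 capsules

print(cells_per_facility, capsules_per_shift)  # 78 780000
```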

[ Multiply Labs ]

Thanks Fred!

If you’re getting tired of all those annoying drones that sound like giant bees, just have a listen to this turbine-powered one:

[ Malloy Aeronautics ]

In retrospect, it’s kind of amazing that nobody has bothered to put a functional robotic dog head on a quadruped robot before this, right?

Equipped with sensors, high-tech radar imaging, cameras and a directional microphone, this 100-pound (45-kilogram) super-robot is still a “puppy-in-training.” Just like a regular dog, he responds to commands such as “sit,” “stand,” and “lie down.” Eventually, he will be able to understand and respond to hand signals, detect different colors, comprehend many languages, coordinate his efforts with drones, distinguish human faces, and even recognize other dogs.

As an information scout, Astro’s key missions will include detecting guns, explosives, and gun residue to assist police, the military, and security personnel. This robodog’s talents won’t end there: he can also be programmed to assist as a service dog for the visually impaired or to provide medical diagnostic monitoring. The MPCR team is also training Astro to serve as a first responder for search-and-rescue missions such as hurricane reconnaissance, as well as military maneuvers.

[ FAU ]

And now this amazing video, “The Coke Thief,” from ICRA 2005 (!):

[ Paper ]

CYBATHLON Series events put the focus on one or two of the six disciplines and are organized in cooperation with international universities and partners. The CYBATHLON Arm and Leg Prosthesis Series took place in Karlsruhe, Germany, from 16 to 18 May and was organized in cooperation with the Karlsruhe Institute of Technology (KIT) and the trade fair REHAB Karlsruhe.

The CYBATHLON Wheelchair Series took place in Kawasaki, Japan on 5 May 2019 and was organized in cooperation with the CYBATHLON Wheelchair Series Japan Organizing Committee and supported by the Swiss Embassy.

[ Cybathlon ]

Rainbow crepe robot!

There’s also this other robot, which I assume does something besides what's in the video, because otherwise it appears to be a massively overengineered way of shaping cooked rice into a chubby triangle.

[ PC Watch ]

The Weaponized Plastic Fighting League at Fetch Robotics has had another season of shardation, deintegration, explodification, and other -tions. Here are a couple fan favorite match videos:

[ Fetch Robotics ]

This video is in German, but it’s worth watching for the three seconds of extremely satisfying footage showing a robot twisting dough into pretzels.

[ Festo ]

Putting brains into farming equipment is a no-brainer, since it’s a semi-structured environment that's generally clear of wayward humans driving other vehicles.

[ Lovol ]

Thanks Fan!

Watch some robots assemble suspiciously Lego-like (but definitely not actually Lego) minifigs.

[ DevLinks ]

The Robotics Innovation Facility (RIFBristol) helps businesses, entrepreneurs, researchers and public sector bodies to embrace the concept of ‘Industry 4.0'. From training your staff in robotics, and demonstrating how automation can improve your manufacturing processes, to prototyping and validating your new innovations—we can provide the support you need.

[ RIF ]

Ryan Gariepy from Clearpath Robotics (and a bunch of other stuff) gave a talk at ICRA with the title of “Move Fast and (Don’t) Break Things: Commercializing Robotics at the Speed of Venture Capital,” which is more interesting when you know that this year’s theme was “Notable Failures.”

[ Clearpath Robotics ]

In this week’s episode of Robots in Depth, Per interviews Michael Nielsen, a computer vision researcher at the Danish Technological Institute.

Michael worked with a fusion of sensors such as stereo vision, thermography, radar, lidar, and high-frame-rate cameras, merging multiple images for high dynamic range. All of this was needed to navigate the tricky situations in a farm field, where you must drive close to, or even within, what is grown. Multibaseline cameras were also used to provide range detection over a wide range of distances.

We also learn how he expanded his work into sorting recycling, a very challenging problem, and hear about the issues faced when using time-of-flight and sheet-of-light cameras. He then shares some good results using stereo vision, especially combined with blue-light random-dot projectors.

[ Robots in Depth ]

Posted in Human Robots

#435196 Avatar Love? New ‘Black Mirror’ ...

This week, the widely anticipated fifth season of the dystopian series Black Mirror was released on Netflix. The storylines this season are less focused on far-out scenarios and more closely aligned with current issues. With only three episodes, this season raises more questions than it answers, often leaving audiences bewildered.

The episode Smithereens explores our society’s crippling addiction to social media platforms and the monopoly they hold over our data. In Rachel, Jack and Ashley Too, we see the disruptive impact of technologies on the music and entertainment industry, and the price of fame for artists in the digital world. Like most Black Mirror episodes, these explore the sometimes disturbing implications of tech advancements on humanity.

But once again, in the midst of all the doom and gloom, the creators of the series leave us with a glimmer of hope. Aligned with Pride month, the episode Striking Vipers explores the impact of virtual reality on love, relationships, and sexual fluidity.

*The review contains a few spoilers.*

Striking Vipers
The first episode of the season, Striking Vipers, may be one of the most thought-provoking episodes in Black Mirror history. Reminiscent of the earlier episodes San Junipero and Hang the DJ, it explores the potential for technology to transform human intimacy.

The episode tells the story of two old friends, Danny and Karl, whose friendship is reignited in an unconventional way. Karl unexpectedly appears at Danny’s 38th birthday and reintroduces him to the VR version of a game they used to play years before. In the game Striking Vipers X, each of the players is represented by an avatar of their choice in an uncanny digital reality. Following old tradition, Karl chooses to become the female fighter, Roxanne, and Danny takes on the role of the male fighter, Lance. The state-of-the-art VR headsets appear to use an advanced form of brain-machine interface to allow each player to be fully immersed in the virtual world, emulating all physical sensations.

To their surprise (and confusion), Danny and Karl find themselves transitioning from fist-fighting to kissing. Over the course of many games, they continue to explore a sexual and romantic relationship in the virtual world, leaving them confused and distant in the real world. The virtual and physical realities begin to blur, and so do the identities of the players with their avatars. Danny, who is married (in a heterosexual relationship) and is a father, begins to carry guilt and confusion in the real world. They both wonder if there would be any spark between them in real life.

The brain-machine interface (BMI) depicted in the episode is still science fiction, but that hasn’t stopped innovators from pushing the technology forward. Experts today are designing more intricate BMI systems while programming better algorithms to interpret the neural signals they capture. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate with one another purely through brainwaves.

The convergence of BMIs with virtual reality and artificial intelligence could make the experience of such immersive digital realities possible. Virtual reality, too, is decreasing exponentially in cost and increasing in quality.

The narrative provides meaningful commentary on another tech area—gaming. It highlights video games not necessarily as addictive distractions, but rather as a platform for connecting with others in a deeper way. This is already very relevant. Video games like Final Fantasy are often a tool for meaningful digital connections for their players.

The Implications of Virtual Reality on Love and Relationships
The narrative of Striking Vipers raises many novel questions about the implications of immersive technologies for relationships: Could the virtual world allow us a safe space to explore suppressed desires? Can virtual avatars make it easier for us to show affection to those we care about? Can a sexual or romantic encounter in the digital world be considered infidelity?

Above all, the episode explores the therapeutic possibilities of such technologies. While many fears about virtual reality had been raised in previous seasons of Black Mirror, this episode was focused on its potential. This includes the potential of immersive technology to be a source of liberation, meaningful connections, and self-exploration, as well as a tool for realizing our true identities and desires.

Once again, this is aligned with emerging trends in VR. We are seeing the rise of social VR applications and platforms that allow you to hang out with your friends and family as avatars in a shared virtual space. The technology is allowing animated experiences, such as Coco VR, to become increasingly social and interactive. Considering that meaningful social interaction can alleviate depression and anxiety, such applications could contribute to well-being.

Techno-philosopher and National Geographic host Jason Silva points out that immersive media technologies can be “engines of empathy.” VR allows us to enter virtual spaces that mimic someone else’s state of mind, allowing us to empathize with the way they view the world. Silva said, “Imagine the intimacy that becomes possible when people meet and they say, ‘Hey, do you want to come visit my world? Do you want to see what it’s like to be inside my head?’”

What is most fascinating about Striking Vipers is that it explores how we may redefine love with virtual reality; we are introduced to love between virtual avatars. While this kind of love may seem confusing to audiences, it may be one of the complex implications of virtual reality on human relationships.

In many ways, the title Black Mirror couldn’t be more appropriate, as each episode serves as a mirror to the most disturbing aspects of our psyches as they get amplified through technology. However, what we see in uplifting and thought-provoking plots like Striking Vipers, San Junipero, and Hang The DJ is that technology could also amplify the most positive aspects of our humanity. This includes our powerful capacity to love.

Image Credit: Arsgera / Shutterstock.com

Posted in Human Robots