Tag Archives: Would
#436484 If Machines Want to Make Art, Will ...
Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?
Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.
But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.
We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.
But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.
Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.
Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even if using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music, or from pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.
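To make the analogy concrete, here is a minimal sketch, in Python, of what "formalizing the idea as an algorithm" can look like: the generative rules (the composition) stay fixed, while a seed plays the role of the contextual conditions. The function name, the event fields, and the toy rules are our own illustrative assumptions, not any particular artist's system; the point is only that two runs with different seeds are two materializations of one and the same work.

```python
import random

def generative_score(seed, n_events=12):
    """One materialization of a fixed generative 'score' (illustrative only).

    The rules below are the unchanging 'composition'; only the seed varies,
    so every run is a different instance of the same work.
    """
    rng = random.Random(seed)
    events = []
    for step in range(n_events):
        events.append({
            "time": step,                                 # fixed temporal grid
            "pitch": rng.choice(range(60, 72)),           # chance-chosen MIDI pitch
            "loudness": round(rng.uniform(0.1, 1.0), 2),  # chance dynamics
        })
    return events

# Two "performances" of the same piece: same algorithm (same authorship),
# different contextual conditions (different seeds), different results.
print(generative_score(seed=1812)[:3])
print(generative_score(seed=1951)[:3])
```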
Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.
The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.
This article was originally published at Aeon and has been republished under Creative Commons.
Image Credit: Rene Böhmer / Unsplash
#436470 Retail Robots Are on the Rise—at Every ...
The robots are coming! The robots are coming! On our sidewalks, in our skies, in our stores… Over the next decade, robots will enter the mainstream of retail.
As countless robots work behind the scenes to stock shelves, serve customers, and deliver products to our doorstep, the speed of retail will accelerate.
These changes are already underway. In this blog, we’ll elaborate on how robots are entering the retail ecosystem.
Let’s dive in.
Robot Delivery
On August 3rd, 2016, Domino’s Pizza introduced the Domino’s Robotic Unit, or “DRU” for short. The first home delivery pizza robot, the DRU looks like a cross between R2-D2 and an oversized microwave.
LIDAR and GPS sensors help it navigate, while temperature sensors keep hot food hot and cold food cold. Already, it’s been rolled out in ten countries, including New Zealand, France, and Germany, but its August 2016 debut was critical—as it was the first time we’d seen robotic home delivery.
And it won’t be the last.
A dozen or so different delivery bots are fast entering the market. Starship Technologies, for instance, a startup created by Skype founders Janus Friis and Ahti Heinla, has a general-purpose home delivery robot. Right now, the system is an array of cameras and GPS sensors, but upcoming models will include microphones, speakers, and even the ability—via AI-driven natural language processing—to communicate with customers. Since 2016, Starship has already carried out 50,000 deliveries in over 100 cities across 20 countries.
Along similar lines, Nuro—co-founded by Jiajun Zhu, one of the engineers who helped develop Google’s self-driving car—has a miniature self-driving car of its own. Half the size of a sedan, the Nuro looks like a toaster on wheels, except with a mission. This toaster has been designed to carry cargo—about 12 bags of groceries (version 2.0 will carry 20)—which it’s been doing for select Kroger stores since 2018. Domino’s also partnered with Nuro in 2019.
As these delivery bots take to our streets, others are streaking across the sky.
Back in 2013, Amazon came first, announcing Prime Air—the e-commerce giant’s promise of drone delivery in 30 minutes or less. Almost immediately, companies ranging from 7-Eleven and Walmart to Google and Alibaba jumped on the bandwagon.
While critics remain doubtful, the head of the FAA’s drone integration department recently said that drone deliveries may be “a lot closer than […] the skeptics think. [Companies are] getting ready for full-blown operations. We’re processing their applications. I would like to move as quickly as I can.”
In-Store Robots
While delivery bots start to spare us trips to the store, those who prefer shopping the old-fashioned way—i.e., in person—also have plenty of human-robot interaction in store. In fact, these robotics solutions have been around for a while.
In 2014, SoftBank introduced Pepper, a humanoid robot capable of understanding human emotion. Pepper is cute: 4 feet tall, with a white plastic body, two black eyes, a dark slash of a mouth, and a base shaped like a mermaid’s tail. Across her chest is a touch screen to aid in communication. And there’s been a lot of communication. Pepper’s cuteness is intentional, as it matches her mission: to help humans enjoy life as much as possible.
Over 12,000 Peppers have been sold. She serves ice cream in Japan, greets diners at a Pizza Hut in Singapore, and dances with customers at a Palo Alto electronics store. More importantly, Pepper’s got company.
Walmart uses shelf-stocking robots for inventory control. Best Buy uses a robo-cashier, allowing select locations to operate 24-7. And Lowe’s Home Improvement employs the LoweBot—a giant iPad on wheels—to help customers find the items they need while tracking inventory along the way.
Warehouse Bots
Yet the biggest benefit robots provide might be in-warehouse logistics.
In 2012, when Amazon dished out $775 million for Kiva Systems, few could predict that just six years later, 45,000 Kiva robots would be deployed across its fulfillment centers, helping process a whopping 306 items per second during the Christmas season.
And many other retailers are following suit.
Order jeans from the Gap, and soon they’ll be sorted, packed, and shipped with the help of a Kindred robot. Remember the old arcade game where you picked up teddy bears with a giant claw? That’s Kindred, only her claw picks up T-shirts, pants, and the like, placing them in designated drop-off zones that resemble tiny mailboxes (for further sorting or shipping).
The big deal here is democratization. Kindred’s robot is cheap and easy to deploy, allowing smaller companies to compete with giants like Amazon.
Final Thoughts
For retailers interested in staying in business, there doesn’t appear to be much choice in the way of robotics.
By 2025, the US minimum wage is projected to reach $15 an hour (the House of Representatives has already passed a bill that would phase in the increase between now and then), and many consider that number far too low.
Yet, as human labor costs continue to climb, robots won’t just be coming; they’ll be here, there, and everywhere. It’s going to become increasingly difficult for store owners to justify human workers who call in sick, show up late, and can easily get injured. Robots work 24-7. They never take a day off, never need a bathroom break, health insurance, or parental leave.
Going forward, this spells a growing challenge of technological unemployment (a blog topic I will cover in the coming month). But in retail, robotics usher in tremendous benefits for companies and customers alike.
And while professional re-tooling initiatives and the transition of human capital from retail logistics to a booming experience economy take hold, robotic retail interaction and last-mile delivery will fundamentally transform our relationship with commerce.
This blog comes from The Future is Faster Than You Think—my upcoming book, to be released Jan 28th, 2020. To get an early copy and access up to $800 worth of pre-launch giveaways, sign up here!
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)
Image Credit: imjanuary from Pixabay
#436462 Robotic Exoskeletons, Like This One, Are ...
When you imagine an exoskeleton, chances are it might look a bit like the Guardian XO from Sarcos Robotics. The XO is literally a robot you wear (or maybe, it wears you). The suit’s powered limbs sense your movements and match their position to yours with little latency to give you effortless superstrength and endurance—lifting 200 pounds will feel like 10.
A vision of robots and humankind working together in harmony. Now, isn’t that nice?
Of course, there isn’t anything terribly novel about an exoskeleton. We’ve seen plenty of concepts and demonstrations in the last decade. These include light exoskeletons tailored to industrial settings—some of which are being tested out by the likes of Honda—and healthcare exoskeletons that support the elderly or folks with disabilities.
Full-body powered robotic exoskeletons are a bit rarer, which makes the Sarcos suit pretty cool to look at. But like all things in robotics, practicality matters as much as vision. It’s worth asking: Will anyone buy and use the thing? Is it more than a concept video?
Sarcos thinks so, and they’re excited about it. “If you were to ask the question, what does 30 years and $300 million look like,” Sarcos CEO Ben Wolff told IEEE Spectrum, “you’re going to see it downstairs.”
The XO appears to check a few key boxes. For one, it’s user friendly. According to Sarcos, it only takes a few minutes for the uninitiated to strap in and get up to speed. Feeling comfortable doing work with the suit takes a few hours. This is thanks to a high degree of sensor-based automation that allows the robot to seamlessly match its user’s movements.
The XO can also operate for more than a few minutes. It has two hours of battery life, and with spares on hand, it can go all day. The batteries are hot-swappable, meaning you can replace a drained battery with a new one without shutting the system down.
The suit is aimed at manufacturing, where workers are regularly moving heavy stuff around. Additionally, Wolff told CNET, the suit could see military use. But that doesn’t mean Avatar-style combat. The XO, Wolff said, is primarily about logistics (lifting and moving heavy loads) and isn’t designed to be armored, so it won’t likely see the front lines.
The system will set customers back $100,000 a year to rent, which sounds like a lot, but for industrial or military purposes, the six-figure rental may not deter would-be customers if the suit proves itself a useful bit of equipment. (And it’s reasonable to imagine the price coming down as the technology becomes more commonplace and competitors arrive.)
Sarcos got into exoskeletons a couple of decades ago and was originally funded by the military (like many robotics endeavors). Videos hit YouTube as long ago as 2008, but after announcing earlier this year that it was taking orders for the XO, Sarcos says it will deliver the first alpha units in January, a notable milestone.
Broadly, robotics has advanced a lot in recent years. YouTube sensations like Boston Dynamics have regularly earned millions of views (and inevitably, headlines stoking robot fear). They went from tethered treadmill sessions to untethered backflips off boxes. While today’s robots really are vastly superior to their ancestors, they’ve struggled to prove themselves useful. A counterpoint to flashy YouTube videos, the DARPA Robotics Challenge gave birth to another meme altogether. Robots falling over. Often and awkwardly.
This year marks some of the first commercial fruits of a few decades’ research. Boston Dynamics started offering its robot dog, Spot, to select customers in 2019. Whether this proves to be a headline-worthy flash in the pan or something sustainable remains to be seen. But between fully autonomous robots and exoskeletons like the XO, the latter will likely prove easier to make practical for a variety of uses.
Whereas autonomous robots require highly advanced automation to navigate uncertain and ever-changing conditions—automation which, at the moment, remains largely elusive (though the likes of Google are pairing the latest AI with robots to tackle the problem)—an exoskeleton mainly requires physical automation. The really hard bits, like navigating and recognizing and interacting with objects, are outsourced to its human operator.
As it turns out, for today’s robots the best AI is still us. We may yet get chipper automatons like Rosie the Robot, but until then, for complicated applications, we’ll strap into our mechs for their strength and endurance, and they’ll wear us for our brains.
Image Credit: Sarcos Robotics