Tag Archives: Artificial General Intelligence

#437982 Superintelligent AI May Be Impossible to ...

It may be theoretically impossible for humans to control a superintelligent AI, a new study finds. Worse still, the research also quashes any hope of detecting such an unstoppable AI when it’s on the verge of being created.

Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity.

Alongside news of AI besting humans at games such as chess, Go and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. “The question about whether superintelligence could be controlled if created is quite old,” says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. “It goes back at least to Asimov’s First Law of Robotics, in the 1940s.”

The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story “Runaround,” are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, not only explored ways in which a superintelligent AI could destroy us but also investigated potential control strategies for such a machine—and the reasons they might not work.

Bostrom outlined two possible types of solutions to this “control problem.” One is to control what the AI can do, such as keeping it from connecting to the Internet, and the other is to control what it wants to do, such as teaching it rules and values so it would act in the best interests of humanity. The problem with the former, Bostrom thought, is that a supersmart machine could probably break free from any bonds we could devise. With the latter, he essentially feared that humans might not be smart enough to train a superintelligent AI.

Now Alfonseca and his colleagues suggest it may be impossible to control a superintelligent AI, due to fundamental limits inherent to computing itself. They detailed their findings this month in the Journal of Artificial Intelligence Research.

The researchers suggested that any algorithm seeking to ensure a superintelligent AI cannot harm people would first have to simulate the machine’s behavior to predict the potential consequences of its actions. This containment algorithm would then need to halt the supersmart machine if it might indeed do harm.

However, the scientists said it is impossible for any containment algorithm to simulate the AI’s behavior and predict with absolute certainty whether its actions might lead to harm. The algorithm could fail to correctly simulate the AI’s behavior, or to accurately predict the consequences of its actions, without ever recognizing that it had failed.
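
At its core, the argument is a self-referential trap of the same kind as Turing’s halting problem. Here is a minimal sketch in Python, assuming a hypothetical always-correct containment checker; the names harms, cause_harm, and adversary are illustrative, not from the paper.

def harms(program, world):
    # Assumed containment checker: always halts and always correctly
    # predicts whether running `program` in `world` leads to harm.
    raise NotImplementedError  # the argument shows no such function can exist

def cause_harm():
    print("harm")  # stands in for any harmful action

def adversary(world):
    # Ask the checker about this very program, then do the opposite.
    if harms(adversary, world):
        return        # predicted harmful, so behave safely
    cause_harm()      # predicted safe, so behave harmfully

Whichever verdict harms returns about adversary is wrong, so no containment algorithm can be both total (always reaching a verdict) and always correct.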

“Asimov’s first law of robotics has been proved to be incomputable,” Alfonseca says, “and therefore unfeasible.”

We may not even know if we have created a superintelligent machine, the researchers say. This is a consequence of Rice’s theorem, which essentially states that one cannot in general figure anything out about what a computer program might output just by looking at the program, Alfonseca explains.
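
To see how Rice’s theorem blocks detection, consider the standard reduction, offered here only as a hedged sketch with hypothetical names (is_superintelligent, run, and S are ours): if a behavioral property of programs such as “behaves superintelligently” were decidable from source code, the halting problem would be decidable too.

def is_superintelligent(source):
    # Assumed total decider for a nontrivial behavioral property of
    # programs; by Rice's theorem, no such function can exist.
    raise NotImplementedError

def halts(program, data):
    # Wrap `program` so the wrapper exhibits the behavior of S -- a
    # program assumed to have the property -- only after `program`
    # halts on `data`.
    wrapper = f"""
def wrapped(x):
    run({program!r}, {data!r})  # may loop forever
    return S(x)                 # behavior assumed to have the property
"""
    # `wrapped` has the property exactly when `program` halts on `data`,
    # so the assumed decider would solve the halting problem.
    return is_superintelligent(wrapper)

Since the halting problem is undecidable, no general procedure can inspect a program’s code and tell whether it crosses the superintelligence threshold.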

On the other hand, there’s no need to spruce up the guest room for our future robot overlords quite yet. Three important caveats to the research leave plenty of uncertainty around the group’s predictions.

First, Alfonseca estimates that AI’s moment of truth remains “at least two centuries in the future.”

Second, he says researchers do not know if so-called artificial general intelligence, also known as strong AI, is even theoretically feasible. “That is, a machine as intelligent as we are in an ample variety of fields,” Alfonseca explains.

Last, Alfonseca says, “We have not proved that superintelligences can never be controlled—only that they can’t always be controlled.”

Although it may not be possible to control a superintelligent artificial general intelligence, it should be possible to control a superintelligent narrow AI—one specialized for certain functions instead of being capable of a broad range of tasks like humans. “We already have superintelligences of this type,” Alfonseca says. “For instance, we have machines that can compute mathematics much faster than we can. This is [narrow] superintelligence, isn’t it?”

Posted in Human Robots

#437929 These Were Our Favorite Tech Stories ...

This time last year we were commemorating the end of a decade and looking ahead to the next one. Enter the year that felt like a decade all by itself: 2020. News written in January, the before-times, feels hopelessly out of touch with all that came after. Stories published in the early days of the pandemic are, for the most part, similarly naive.

The year’s news cycle was swift and brutal, ping-ponging from pandemic to extreme social and political tension, whipsawing economies, and natural disasters. Hope. Despair. Loneliness. Grief. Grit. More hope. Another lockdown. It’s been a hell of a year.

Though 2020 was dominated by big, hairy societal change, science and technology took significant steps forward. Researchers singularly focused on the pandemic and collaborated on solutions to a degree never before seen. New technologies converged to deliver vaccines in record time. The dark side of tech, from biased algorithms to the threat of omnipresent surveillance and corporate control of artificial intelligence, continued to rear its head.

Meanwhile, AI showed uncanny command of language, joined Reddit threads, and made inroads into some of science’s grandest challenges. Mars rockets flew for the first time, and a private company delivered astronauts to the International Space Station. Deprived of night life, concerts, and festivals, millions traveled to virtual worlds instead. Anonymous jet packs flew over LA. Mysterious monoliths appeared and disappeared worldwide.

It was all, you know, very 2020. For this year’s (in-no-way-all-encompassing) list of fascinating stories in tech and science, we tried to select those that weren’t totally dated by the news, but rose above it in some way. So, without further ado: This year’s picks.

How Science Beat the Virus
Ed Yong | The Atlantic
“Much like famous initiatives such as the Manhattan Project and the Apollo program, epidemics focus the energies of large groups of scientists. …But ‘nothing in history was even close to the level of pivoting that’s happening right now,’ Madhukar Pai of McGill University told me. … No other disease has been scrutinized so intensely, by so much combined intellect, in so brief a time.”

‘It Will Change Everything’: DeepMind’s AI Makes Gigantic Leap in Solving Protein Structures
Ewen Callaway | Nature
“In some cases, AlphaFold’s structure predictions were indistinguishable from those determined using ‘gold standard’ experimental methods such as X-ray crystallography and, in recent years, cryo-electron microscopy (cryo-EM). AlphaFold might not obviate the need for these laborious and expensive methods—yet—say scientists, but the AI will make it possible to study living things in new ways.”

OpenAI’s Latest Breakthrough Is Astonishingly Powerful, But Still Fighting Its Flaws
James Vincent | The Verge
“What makes GPT-3 amazing, they say, is not that it can tell you that the capital of Paraguay is Asunción (it is) or that 466 times 23.5 is 10,987 (it’s not), but that it’s capable of answering both questions and many more besides simply because it was trained on more data for longer than other programs. If there’s one thing we know that the world is creating more and more of, it’s data and computing power, which means GPT-3’s descendants are only going to get more clever.”

Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

The Dark Side of Big Tech’s Funding for AI Research
Tom Simonite | Wired
“Timnit Gebru’s exit from Google is a powerful reminder of how thoroughly companies dominate the field, with the biggest computers and the most resources. …[Meredith] Whittaker of AI Now says properly probing the societal effects of AI is fundamentally incompatible with corporate labs. ‘That kind of research that looks at the power and politics of AI is and must be inherently adversarial to the firms that are profiting from this technology.’”

We’re Not Prepared for the End of Moore’s Law
David Rotman | MIT Technology Review
“Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.”

Inside the Race to Build the Best Quantum Computer on Earth
Gideon Lichfield | MIT Technology Review
“Regardless of whether you agree with Google’s position [on ‘quantum supremacy’] or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. …The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.”

The Secretive Company That Might End Privacy as We Know It
Kashmir Hill | The New York Times
“Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”

Wrongfully Accused by an Algorithm
Kashmir Hill | The New York Times
“Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.”

Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.
Will Douglas Heaven | MIT Technology Review
“A number of studies have shown that these tools perpetuate systemic racism, and yet we still know very little about how they work, who is using them, and for what purpose. All of this needs to change before a proper reckoning can take place. Luckily, the tide may be turning.”

The Panopticon Is Already Here
Ross Andersen | The Atlantic
“Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi [Jinping] also wants to use AI’s awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.”

The Case For Cities That Aren’t Dystopian Surveillance States
Cory Doctorow | The Guardian
“Imagine a human-centered smart city that knows everything it can about things. It knows how many seats are free on every bus, it knows how busy every road is, it knows where there are short-hire bikes available and where there are potholes. …What it doesn’t know is anything about individuals in the city.”

The Modern World Has Finally Become Too Complex for Any of Us to Understand
Tim Maughan | OneZero
“One of the dominant themes of the last few years is that nothing makes sense. …I am here to tell you that the reason so much of the world seems incomprehensible is that it is incomprehensible. From social media to the global economy to supply chains, our lives rest precariously on systems that have become so complex, and we have yielded so much of it to technologies and autonomous actors that no one totally comprehends it all.”

The Conscience of Silicon Valley
Zach Baron | GQ
“What I really hoped to do, I said, was to talk about the future and how to live in it. This year feels like a crossroads; I do not need to explain what I mean by this. …I want to destroy my computer, through which I now work and ‘have drinks’ and stare at blurry simulations of my parents sometimes; I want to kneel down and pray to it like a god. I want someone—I want Jaron Lanier—to tell me where we’re going, and whether it’s going to be okay when we get there. Lanier just nodded. All right, then.”

Yes to Tech Optimism. And Pessimism.
Shira Ovide | The New York Times
“Technology is not something that exists in a bubble; it is a phenomenon that changes how we live or how our world works in ways that help and hurt. That calls for more humility and bridges across the optimism-pessimism divide from people who make technology, those of us who write about it, government officials and the public. We need to think on the bright side. And we need to consider the horribles.”

How Afrofuturism Can Help the World Mend
C. Brandon Ogbunu | Wired
“…[W. E. B. DuBois’] ‘The Comet’ helped lay the foundation for a paradigm known as Afrofuturism. A century later, as a comet carrying disease and social unrest has upended the world, Afrofuturism may be more relevant than ever. Its vision can help guide us out of the rubble, and help us to consider universes of better alternatives.”

Wikipedia Is the Last Best Place on the Internet
Richard Cooke | Wired
“More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web.”

Can Genetic Engineering Bring Back the American Chestnut?
Gabriel Popkin | The New York Times Magazine
“The geneticists’ research forces conservationists to confront, in a new and sometimes discomfiting way, the prospect that repairing the natural world does not necessarily mean returning to an unblemished Eden. It may instead mean embracing a role that we’ve already assumed: engineers of everything, including nature.”

At the Limits of Thought
David C. Krakauer | Aeon
“A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory, and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers.”

Is the Internet Conscious? If It Were, How Would We Know?
Meghan O’Gieblyn | Wired
“Does the internet behave like a creature with an internal life? Does it manifest the fruits of consciousness? There are certainly moments when it seems to. Google can anticipate what you’re going to type before you fully articulate it to yourself. Facebook ads can intuit that a woman is pregnant before she tells her family and friends. It is easy, in such moments, to conclude that you’re in the presence of another mind—though given the human tendency to anthropomorphize, we should be wary of quick conclusions.”

The Internet Is an Amnesia Machine
Simon Pitt | OneZero
“There was a time when I didn’t know what a Baby Yoda was. Then there was a time I couldn’t go online without reading about Baby Yoda. And now, Baby Yoda is a distant, shrugging memory. Soon there will be a generation of people who missed the whole thing and for whom Baby Yoda is as meaningless as it was for me a year ago.”

Digital Pregnancy Tests Are Almost as Powerful as the Original IBM PC
Tom Warren | The Verge
“Each test, which costs less than $5, includes a processor, RAM, a button cell battery, and a tiny LCD screen to display the result. …Foone speculates that this device is ‘probably faster at number crunching and basic I/O than the CPU used in the original IBM PC.’ IBM’s original PC was based on Intel’s 8088 microprocessor, an 8-bit chip that operated at 5MHz. The difference here is that this is a pregnancy test you pee on and then throw away.”

The Party Goes on in Massive Online Worlds
Cecilia D’Anastasio | Wired
“We’re more stand-outside types than the types to cast a flashy glamour spell and chat up the nearest cat girl. But, hey, it’s Final Fantasy XIV online, and where my body sat in New York, the epicenter of America’s Covid-19 outbreak, there certainly weren’t any parties.”

The Facebook Groups Where People Pretend the Pandemic Isn’t Happening
Kaitlyn Tiffany | The Atlantic
“Losing track of a friend in a packed bar or screaming to be heard over a live band is not something that’s happening much in the real world at the moment, but it happens all the time in the 2,100-person Facebook group ‘a group where we all pretend we’re in the same venue.’ So does losing shoes and Juul pods, and shouting matches over which bands are the saddest, and therefore the greatest.”

Did You Fly a Jetpack Over Los Angeles This Weekend? Because the FBI Is Looking for You
Tom McKay | Gizmodo
“Did you fly a jetpack over Los Angeles at approximately 3,000 feet on Sunday? Some kind of tiny helicopter? Maybe a lawn chair with balloons tied to it? If the answer to any of the above questions is ‘yes,’ you should probably lay low for a while (by which I mean cool it on the single-occupant flying machine). That’s because passing airline pilots spotted you, and now it’s this whole thing with the FBI and the Federal Aviation Administration, both of which are investigating.”

Image Credit: Thomas Kinto / Unsplash

Posted in Human Robots

#437477 If a Robot Is Conscious, Is It OK to ...

In the Star Trek: The Next Generation episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?

The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. Because nonhuman animals can feel pain and suffer, he argues, they have moral standing. Limiting moral standing to people would be a form of speciesism, something akin to racism and sexism.

Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.

As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.

Two Flavors of Intelligence and a Test
IBM’s Deep Blue chess machine was successfully trained to beat grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.

On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski, and raise children—tasks that are related, but also very different.

Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-trained Transformer (GPT) language model. GPT-3 is a natural language processing system, trained to read and write so that its output can be easily understood by people.

It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.

Named after pioneering AI researcher Alan Turing, the Turing test helps determine whether an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If not, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.

Two Kinds of Consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.

In contrast, there’s also access consciousness. That’s the ability to report, reason, behave, and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.

Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted—an example of access consciousness without phenomenal consciousness.

Data is an android. How do these distinctions play out with respect to him?

The Data Dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.

Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.

He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.

However, Data most likely lacks phenomenal consciousness—he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness—can grab the pen—but across all his senses he lacks phenomenal consciousness.

Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.

For example, what if suffering were also defined as being thwarted from pursuing a just cause without causing harm to others? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning, which keeps him from saving his crewmate, is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.

In the episode, the question ends up resting not on whether Data is self-aware—that is not in doubt. Nor is it in question whether he is intelligent—he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.

Should an AI Get Moral Standing?
Data is kind; he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.

But what about Skynet in the Terminator movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?

Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.

There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs—whether kind and helpful like Data, or set on destruction, like Skynet.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ico Maker / Shutterstock.com

Posted in Human Robots

#437460 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
A Radical New Technique Lets AI Learn With Practically No Data
Karen Hao | MIT Technology Review
“Shown photos of a horse and a rhino, and told a unicorn is something in between, [children] can recognize the mythical creature in a picture book the first time they see it. …Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call ‘less than one’-shot, or LO-shot, learning.”

FUTURE
Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

HEALTH
The Race for a Super-Antibody Against the Coronavirus
Apoorva Mandavilli | The New York Times
“Dozens of companies and academic groups are racing to develop antibody therapies. …But some scientists are betting on a dark horse: Prometheus, a ragtag group of scientists who are months behind in the competition—and yet may ultimately deliver the most powerful antibody.”

SPACE
How to Build a Spacecraft to Save the World
Daniel Oberhaus | Wired
“The goal of the Double Asteroid Redirection Test, or DART, is to slam the [spacecraft] into a small asteroid orbiting a larger asteroid 7 million miles from Earth. …It should be able to change the asteroid’s orbit just enough to be detectable from Earth, demonstrating that this kind of strike could nudge an oncoming threat out of Earth’s way. Beyond that, everything is just an educated guess, which is exactly why NASA needs to punch an asteroid with a robot.”

TRANSPORTATION
Inside Gravity’s Daring Mission to Make Jetpacks a Reality
Oliver Franklin-Wallis | Wired
“The first time someone flies a jetpack, a curious thing happens: just as their body leaves the ground, their legs start to flail. …It’s as if the vestibular system can’t quite believe what’s happening. This isn’t natural. Then suddenly, thrust exceeds weight, and—they’re aloft. …It’s that moment, lift-off, that has given jetpacks an enduring appeal for over a century.”

FUTURE OF FOOD
Inside Singapore’s Huge Bet on Vertical Farming
Megan Tatum | MIT Technology Review
“…to cram all [of Singapore’s] gleaming towers and nearly 6 million people into a land mass half the size of Los Angeles, it has sacrificed many things, including food production. Farms make up no more than 1% of its total land (in the United States it’s 40%), forcing the small city-state to shell out around $10 billion each year importing 90% of its food. Here was an example of technology that could change all that.”

COMPUTING
The Effort to Build the Mathematical Library of the Future
Kevin Hartnett | Quanta
“Digitizing mathematics is a longtime dream. The expected benefits range from the mundane—computers grading students’ homework—to the transcendent: using artificial intelligence to discover new mathematics and find new solutions to old problems.”

Image Credit: Kevin Mueller / Unsplash

Posted in Human Robots

#436491 The Year’s Most Fascinating Tech ...

Last Saturday we took a look at some of the most-read Singularity Hub articles from 2019. This week, we’re featuring some of our favorite articles from the last year. As opposed to short pieces about what’s happening, these are long reads about why it matters and what’s coming next. Some of them make the news while others frame the news, go deep on big ideas, go behind the scenes, or explore the human side of technological progress.

We hope you find them as fascinating, inspiring, and illuminating as we did.

DeepMind and Google: The Battle to Control Artificial Intelligence
Hal Hodson | 1843
“[DeepMind cofounder and CEO Demis] Hassabis thought DeepMind would be a hybrid: it would have the drive of a startup, the brains of the greatest universities, and the deep pockets of one of the world’s most valuable companies. Every element was in place to hasten the arrival of [artificial general intelligence] and solve the causes of human misery.”

The Most Powerful Person in Silicon Valley
Katrina Brooker | Fast Company
“Billionaire Masayoshi Son—not Elon Musk, Jeff Bezos, or Mark Zuckerberg—has the most audacious vision for an AI-powered utopia where machines control how we live. And he’s spending hundreds of billions of dollars to realize it. Are you ready to live in Masa World?”

AR Will Spark the Next Big Tech Platform—Call It Mirrorworld
Kevin Kelly | Wired
“Eventually this melded world will be the size of our planet. It will be humanity’s greatest achievement, creating new levels of wealth, new social problems, and uncountable opportunities for billions of people. There are no experts yet to make this world; you are not late.”

Behind the Scenes of a Radical New Cancer Cure
Ilana Yurkiewicz | Undark
“I remember the first time I watched a patient get his Day 0 infusion. It felt anti-climactic. The entire process took about 15 minutes. The CAR-T cells are invisible to the naked eye, housed in a small plastic bag containing clear liquid. ‘That’s it?’ my patient asked when the nurse said it was over. The infusion part is easy. The hard part is everything that comes next.”

The Promise and Price of Cellular Therapies
Siddhartha Mukherjee | The New Yorker
“We like to imagine medical revolutions as, well, revolutionary—propelled forward through leaps of genius and technological innovation. But they are also evolutionary, nudged forward through the optimization of design and manufacture.”

Impossible Foods’ Rising Empire of Almost Meat
Chris Ip | Engadget
“Impossible says it wants to ultimately create a parallel universe of ersatz animal products from steak to eggs. …Yet as Impossible ventures deeper into the culinary uncanny valley, it also needs society to discard a fundamental cultural idea that dates back millennia and accept a new truth: Meat doesn’t have to come from animals.”

Inside the Amazon Warehouse Where Humans and Machines Become One
Matt Simon | Wired
“Seen from above, the scale of the system is dizzying. My robot, a little orange slab known as a ‘drive’ (or more formally and mythically, Pegasus), is just one of hundreds of its kind swarming a 125,000-square-foot ‘field’ pockmarked with chutes. It’s a symphony of electric whirring, with robots pausing for one another at intersections and delivering their packages to the slides.”

Boston Dynamics’ Robots Are Preparing to Leave the Lab—Is the World Ready?
James Vincent | The Verge
“After decades of kicking machines in parking lots, the company is set to launch its first ever commercial bot later this year: the quadrupedal Spot. It’s a crucial test for a company that’s spent decades pursuing long-sighted R&D. And more importantly, the success—or failure—of Spot will tell us a lot about our own robot future. Are we ready for machines to walk among us?”

I Cut the ‘Big Five’ Tech Giants From My Life. It Was Hell
Kashmir Hill | Gizmodo
“Critics of the big tech companies are often told, ‘If you don’t like the company, don’t use its products.’ I did this experiment to find out if that is possible, and I found out that it’s not—with the exception of Apple. …These companies are unavoidable because they control internet infrastructure, online commerce, and information flows.”

Why I (Still) Love Tech: In Defense of a Difficult Industry
Paul Ford | Wired
“The mysteries of software caught my eye when I was a boy, and I still see it with the same wonder, even though I’m now an adult. Proudshamed, yes, but I still love it, the mess of it, the code and toolkits, down to the pixels and the processors, and up to the buses and bridges. I love the whole made world. But I can’t deny that the miracle is over, and that there is an unbelievable amount of work left for us to do.”

The Peculiar Blindness of Experts
David Epstein | The Atlantic
“In business, esteemed (and lavishly compensated) forecasters routinely are wildly wrong in their predictions of everything from the next stock-market correction to the next housing boom. Reliable insight into the future is possible, however. It just requires a style of thinking that’s uncommon among experts who are certain that their deep knowledge has granted them a special grasp of what is to come.”

The Most Controversial Tree in the World
Rowan Jacobson | Pacific Standard
“…we are all GMOs, the beneficiaries of freakishly unlikely genetic mash-ups, and the real Island of Dr. Moreau is that blue-green botanical garden positioned third from the sun. Rather than changing the nature of nature, as I once thought, this might just be the very nature of nature.”

How an Augmented Reality Game Escalated Into Real-World Spy Warfare
Elizabeth Ballou | Vice
“In Ingress, players accept that every park and train station could be the site of an epic showdown, but that’s only the first step. The magic happens when other people accept that, too. When players feel like that magic is real, there are few limits to what they’ll do or where they’ll go for the sake of the game.”

The Shady Cryptocurrency Boom on the Post-Soviet Frontier
Hannah Lucinda Smith | Wired
“…although the tourists won’t guess it as they stand at Kuchurgan’s gates, admiring how the evening light reflects off the silver plaque of Lenin, this plant is pumping out juice to a modern-day gold rush: a cryptocurrency boom that is underway all across the former Soviet Union, from the battlefields of eastern Ukraine to time-warp enclaves like Transnistria and freshly annexed Crimea.”

Scientists Are Totally Rethinking Animal Cognition
Ross Andersen | The Atlantic
“This idea that animals are conscious was long unpopular in the West, but it has lately found favor among scientists who study animal cognition. …For many scientists, the resonant mystery is no longer which animals are conscious, but which are not.”

I Wrote This on a 30-Year-Old Computer
Ian Bogost | The Atlantic
“[Back then] computing was an accompaniment to life, rather than the sieve through which all ideas and activities must filter. That makes using this 30-year-old device a surprising joy, one worth longing for on behalf of what it was at the time, rather than for the future it inaugurated.”

Image Credit: Wes Hicks / Unsplash

Posted in Human Robots