Tag Archives: signs

#439726 Rule of the Robots: Warning Signs

A few years ago, Martin Ford published a book called Architects of Intelligence, in which he interviewed 23 of the most experienced AI and robotics researchers in the world. Those interviews are just as fascinating to read now as they were in 2018, but Ford's since had some extra time to chew on them, in the context of several years of somewhat disconcertingly rapid AI progress (and hype), coupled with the economic upheaval caused by the pandemic.

In his new book, Rule of the Robots: How Artificial Intelligence Will Transform Everything, Ford takes a markedly well-informed but still generally optimistic look at where AI is taking us as a society. It's not all good, and there are still a lot of unknowns, but Ford has a perspective that's both balanced and nuanced, and I can promise you that the book is well worth a read.

The following excerpt is a section entitled “Warning Signs,” from the chapter “Deep Learning and the Future of Artificial Intelligence.”

—Evan Ackerman

The 2010s were arguably the most exciting and consequential decade in the history of artificial intelligence. Though there have certainly been conceptual improvements in the algorithms used in AI, the primary driver of all this progress has simply been deploying more expansive deep neural networks on ever faster computer hardware where they can hoover up greater and greater quantities of training data. This “scaling” strategy has been explicit since the 2012 ImageNet competition that set off the deep learning revolution. In November of that year, a front-page New York Times article was instrumental in bringing awareness of deep learning technology to the broader public sphere. The article, written by reporter John Markoff, ends with a quote from Geoff Hinton: “The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There's no looking back now.”

There is increasing evidence, however, that this primary engine of progress is beginning to sputter out. According to one analysis by the research organization OpenAI, the computational resources required for cutting-edge AI projects are “increasing exponentially” and doubling about every 3.4 months.
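To put that doubling time in perspective, here is a quick back-of-the-envelope sketch (an added illustration, not from the book; the only input is OpenAI's reported 3.4-month figure). Doubling every 3.4 months compounds to a bit more than a tenfold increase each year, which squares with the cost trajectory Jerome Pesenti describes below.

```python
# Illustrative arithmetic only: the single input is OpenAI's reported 3.4-month doubling time.
DOUBLING_TIME_MONTHS = 3.4

def growth_factor(months: float, doubling_time: float = DOUBLING_TIME_MONTHS) -> float:
    """Compound growth factor over `months`, given a fixed doubling time."""
    return 2 ** (months / doubling_time)

print(f"Growth over one year:   {growth_factor(12):.1f}x")   # ~11.5x, roughly a tenfold annual increase
print(f"Growth over five years: {growth_factor(60):,.0f}x")  # about 205,000x
```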

In a December 2019 Wired magazine interview, Jerome Pesenti, Facebook's Vice President of AI, suggested that even for a company with pockets as deep as Facebook's, this would be financially unsustainable:

When you scale deep learning, it tends to behave better and to be able to solve a broader task in a better way. So, there's an advantage to scaling. But clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost [is] going up 10-fold. Right now, an experiment might be in seven figures, but it's not going to go to nine or ten figures, it's not possible, nobody can afford that.

Pesenti goes on to offer a stark warning about the potential for scaling to continue to be the primary driver of progress: “At some point we're going to hit the wall. In many ways we already have.” Beyond the financial limits of scaling to ever larger neural networks, there are also important environmental considerations. A 2019 analysis by researchers at the University of Massachusetts, Amherst, found that training a very large deep learning system could potentially emit as much carbon dioxide as five cars over their full operational lifetimes.

Even if the financial and environmental impact challenges can be overcome—perhaps through the development of vastly more efficient hardware or software—scaling as a strategy simply may not be sufficient to produce sustained progress. Ever-increasing investments in computation have produced systems with extraordinary proficiency in narrow domains, but it is becoming increasingly clear that deep neural networks are subject to reliability limitations that may make the technology unsuitable for many mission-critical applications unless important conceptual breakthroughs are made. One of the most notable demonstrations of the technology's weaknesses came when a group of researchers at Vicarious, a small company focused on building dexterous robots, performed an analysis of the neural network used in DeepMind's DQN, the system that had learned to dominate Atari video games. One test was performed on Breakout, a game in which the player has to manipulate a paddle to intercept a fast-moving ball. When the paddle was shifted just a few pixels higher on the screen—a change that might not even be noticed by a human player—the system's previously superhuman performance immediately took a nose dive. DeepMind's software had no ability to adapt to even this small alteration. The only way to get back to top-level performance would have been to start from scratch and completely retrain the system with data based on the new screen configuration.

What this tells us is that while DeepMind's powerful neural networks do instantiate a representation of the Breakout screen, this representation remains firmly anchored to raw pixels even at the higher levels of abstraction deep in the network. There is clearly no emergent understanding of the paddle as an actual object that can be moved. In other words, there is nothing close to a human-like comprehension of the material objects that the pixels on the screen represent or the physics that govern their movement. It's just pixels all the way down. While some AI researchers may continue to believe that a more comprehensive understanding might eventually emerge if only there were more layers of artificial neurons, running on faster hardware and consuming still more data, I think this is very unlikely. More fundamental innovations will be required before we begin to see machines with a more human-like conception of the world.

This general type of problem, in which an AI system is inflexible and unable to adapt to even small unexpected changes in its input data, is referred to, among researchers, as “brittleness.” A brittle AI application may not be a huge problem if it results in a warehouse robot occasionally packing the wrong item into a box. In other applications, however, the same technical shortfall can be catastrophic. This explains, for example, why progress toward fully autonomous self-driving cars has not lived up to some of the more exuberant early predictions.
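Neither DeepMind's nor Vicarious's code appears here, but the general shape of a brittleness probe is easy to sketch. The snippet below is a hypothetical illustration: it nudges an Atari-style frame up by a few pixels and counts how often a policy's chosen action flips. The policy shown is just a stand-in (a fixed random linear map over pixels); to probe a real agent you would swap in the trained network and genuine game frames.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4                  # Breakout-style action set: NOOP, FIRE, RIGHT, LEFT
FRAME_SHAPE = (84, 84)         # a common preprocessed Atari frame size

# Stand-in for a trained network: a fixed random linear map from pixels to action scores.
# Hypothetical placeholder; replace with the real model to probe an actual agent.
W = rng.normal(size=(N_ACTIONS, FRAME_SHAPE[0] * FRAME_SHAPE[1]))

def policy(frame: np.ndarray) -> int:
    """Return the highest-scoring action for a single frame."""
    return int(np.argmax(W @ frame.ravel()))

def shift_up(frame: np.ndarray, pixels: int) -> np.ndarray:
    """Shift the frame contents up by a few pixels, filling the bottom with background."""
    shifted = np.zeros_like(frame)
    shifted[:-pixels, :] = frame[pixels:, :]
    return shifted

# Probe: how often does the chosen action flip when every frame is nudged up by 3 pixels?
frames = rng.random((500, *FRAME_SHAPE))   # stand-in observations; use real game frames in practice
changed = sum(policy(f) != policy(shift_up(f, 3)) for f in frames)
print(f"Action changed on {changed} of {len(frames)} frames after a 3-pixel shift")
```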

As these limitations came into focus toward the end of the decade, there was a gnawing fear that the field had once again gotten over its skis and that the hype cycle had driven expectations to unrealistic levels. In the tech media and on social media, one of the most terrifying phrases in the field of artificial intelligence—“AI winter”—was making a reappearance. In a January 2020 interview with the BBC, Yoshua Bengio said that “AI's abilities were somewhat overhyped . . . by certain companies with an interest in doing so.”

My own view is that if another AI winter indeed looms, it's likely to be a mild one. Though the concerns about slowing progress are well founded, it remains true that over the past few years AI has been deeply integrated into the infrastructure and business models of the largest technology companies. These companies have seen significant returns on their massive investments in computing resources and AI talent, and they now view artificial intelligence as absolutely critical to their ability to compete in the marketplace. Likewise, nearly every technology startup is now, to some degree, investing in AI, and companies large and small in other industries are beginning to deploy the technology. This successful integration into the commercial sphere is vastly more significant than anything that existed in prior AI winters, and as a result the field benefits from an army of advocates throughout the corporate world and has a general momentum that will act to moderate any downturn.

There's also a sense in which the fall of scalability as the primary driver of progress may have a bright side. When there is a widespread belief that simply throwing more computing resources at a problem will produce important advances, there is significantly less incentive to invest in the much more difficult work of true innovation. This was arguably the case, for example, with Moore's Law. When there was near absolute confidence that computer speeds would double roughly every two years, the semiconductor industry tended to focus on cranking out ever faster versions of the same microprocessor designs from companies like Intel and Motorola. In recent years, the acceleration in raw computer speeds has become less reliable, and our traditional definition of Moore's Law is approaching its end game as the dimensions of the circuits imprinted on chips shrink to nearly atomic size. This has forced engineers to engage in more “out of the box” thinking, resulting in innovations such as software designed for massively parallel computing and entirely new chip architectures—many of which are optimized for the complex calculations required by deep neural networks. I think we can expect the same sort of idea explosion to happen in deep learning, and artificial intelligence more broadly, as the crutch of simply scaling to larger neural networks becomes a less viable path to progress.

Excerpted from “Rule of the Robots: How Artificial Intelligence Will Transform Everything.” Copyright 2021 Basic Books. Available from Basic Books, an imprint of Hachette Book Group, Inc. Continue reading

Posted in Human Robots

#439077 How Scientists Grew Human Muscles in Pig ...

The little pigs bouncing around the lab looked exceedingly normal. Yet their adorable exterior hid a remarkable secret: each piglet carried two different sets of genes. For now, both sets came from their own species. But one day, one of those sets may be human.

The piglets are chimeras—creatures with intermingled sets of genes, as if multiple entities were seamlessly mashed together. Named after the Greek lion-goat-serpent monsters, chimeras may hold the key to an endless supply of human organs and tissues for transplant. The crux is growing these human parts in another animal—one close enough in size and function to our own.

Last week, a team from the University of Minnesota unveiled two mind-bending chimeras. One was a group of joyous little piglets, each propelled by muscles grown from another pig's cells. The other was a set of pig embryos, transplanted into surrogate pigs, that developed human muscles for more than 20 days.

The study, led by Drs. Mary and Daniel Garry at the University of Minnesota, had a therapeutic point: engineering a brilliant way to replace muscle loss, especially for the muscles around our skeletons that allow us to move and navigate the world. Trauma and injury, such as from firearm wounds or car crashes, can damage muscle tissue beyond the point of repair. Unfortunately, muscles are also stubborn in that donor tissue from cadavers doesn’t usually “take” at the injury site. For now, there are no effective treatments for severe muscle death, called volumetric muscle loss.

The new human-pig hybrids are designed to tackle this problem. Muscle wasting aside, the study also points to a clever “hack” that increases the amount of human tissue inside a growing pig embryo.

If further improved, the technology could “provide an unlimited supply of organs for transplantation,” said Dr. Mary Garry to Inverse. What’s more, because the human tissue can be sourced from patients themselves, the risk of rejection by the immune system is relatively low—even when grown inside a pig.

“The shortage of organs for heart transplantation, vascular grafting, and skeletal muscle is staggering,” said Garry. Human-animal chimeras could have a “seismic impact” that transforms organ transplantation and helps solve the organ shortage crisis.

That is, if society accepts the idea of a semi-humanoid pig.

Wait…But How?
The new study took a page from previous chimera recipes.

The main ingredients and steps go like this: first, you need an embryo that lacks the ability to develop a tissue or organ. This leaves an “empty slot” of sorts that you can fill with another set of genes—pig, human, or even monkey.

Second, you need to fine-tune the recipe so that the embryos “take” the new genes, incorporating them into their bodies as if they were their own. Third, the new genes activate to instruct the growing embryo to make the necessary tissue or organs without harming the overall animal. Finally, the foreign genes need to stay put, without cells migrating to another body part—say, the brain.

Not exactly straightforward, eh? The piglets are technological wonders that mix cutting-edge gene editing with cloning technologies.

The team went for two chimeras: one with two sets of pig genes, the other with a pig and human mix. Both started with a pig embryo that can’t make its own skeletal muscles (those are the muscles surrounding your bones). Using CRISPR, the gene-editing Swiss Army Knife, they snipped out three genes that are absolutely necessary for those muscles to develop. Like hitting a bullseye with three arrows simultaneously, it’s already a technological feat.

Here’s the really clever part: the muscles around your bones have a slightly different genetic makeup than the ones that line your blood vessels or the ones that pump your heart. While the resulting pig embryos had severe muscle deformities as they developed, their hearts beat as normal. This means the gene editing cut only impacted skeletal muscles.

Then came step two: replacing the missing genes. Using a microneedle, the team injected a fertilized and slightly developed pig egg—called a blastomere—into the embryo. If left on its natural course, a blastomere eventually develops into another embryo. This step “smashes” the two sets of genes together, with the newcomer filling the muscle void. The hybrid embryo was then placed into a surrogate, and roughly four months later, chimeric piglets were born.

Equipped with foreign DNA, the little guys nevertheless seemed totally normal, nosing around the lab and running everywhere without obvious clumsy stumbles. Under the microscope, their “xenomorph” muscles were indistinguishable from run-of-the-mill average muscle tissue—no signs of damage or inflammation, and as stretchy and tough as muscles usually are. What’s more, the foreign DNA seemed to have only developed into muscles, even though they were prevalent across the body. Extensive fishing experiments found no trace of the injected set of genes inside blood vessels or the brain.

A Better Human-Pig Hybrid
Confident in their recipe, the team next repeated the experiment with human cells, with a twist. Instead of using controversial human embryonic stem cells, which are derived from early-stage human embryos, they relied on induced pluripotent stem cells (iPSCs). These are skin cells that have been reprogrammed back into a stem cell state.

Unlike previous attempts at making human chimeras, the team then scoured the genetic landscape of how pig and human embryos develop to find any genetic “brakes” that could derail the process. One gene, TP53, stood out, and it was promptly eliminated with CRISPR.

This approach provides a way for future studies to similarly increase the efficiency of interspecies chimeras, the team said.

The human-pig embryos were then carefully grown inside surrogate pigs for less than a month, and extensively analyzed. By day 20, the hybrids had already grown detectable human skeletal muscle. Similar to the pig-pig chimeras, the team didn’t detect any signs that the human genes had sprouted cells that would eventually become neurons or other non-muscle cells.

For now, human-animal chimeras are not allowed to grow to term, in part to stem the theoretical possibility of engineering humanoid hybrid animals (shudder). However, the possibility of a sentient human-pig chimera is something the team specifically addressed. Through multiple experiments, they found no trace of human genes in the embryos’ brain stem cells 20 and 27 days into development. Similarly, human donor genes were absent in cells that would become the hybrid embryos’ reproductive cells.

Despite bioethical quandaries and legal restrictions, human-animal chimeras have taken off, both as a source of insight into human brain development and a well of personalized organs and tissues for transplant. In 2019, Japan lifted its ban on developing human brain cells inside animal embryos, as well as the term limit—to global controversy. There’s also the question of animal welfare, given that hybrid clones will essentially become involuntary organ donors.

As the debates rage on, scientists are nevertheless pushing the limits of human-animal chimeras, while treading as carefully as possible.

“Our data…support the feasibility of the generation of these interspecies chimeras, which will serve as a model for translational research or, one day, as a source for xenotransplantation,” the team said.

Image Credit: Christopher Carson on Unsplash Continue reading

Posted in Human Robots

#437946 Video Friday: These Robots Are Ready for ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online]
RoboSoft 2021 – April 12-16, 2021 – [Online]
Let us know if you have suggestions for next week, and enjoy today’s videos.

Is it too late to say, “Happy Holidays”? Yes! Is it too late for a post packed with holiday robot videos? Never!

The Autonomous Systems Lab at ETH Zurich wishes everyone a Merry Christmas and a Happy 2021!

Now you know the best-kept secret in robotics: the ETH Zurich Autonomous Systems Lab is a shack in the woods. With an elevator.

[ ASL ]

We have had to do things differently this year, and the holiday season is no exception. But through it all, we still found ways to be together. From all of us at NATO, Happy Holidays. After training in the snow and mountains of Iceland, an EOD team returns to base. Passing signs reminding them to ‘Keep your distance’ due to COVID-19, they return to their office a little dejected, unsure how they can safely enjoy the holidays. But the EOD robot saves the day and finds a unique way to spread the holiday cheer – socially distanced, of course.

[ EATA ]

Season's Greetings from Voliro!

[ Voliro ]

Thanks Daniel!

Even if you don't have a robot at home, you can still make Halodi Robotics's gingerbread cookies the old fashioned way.

[ Halodi Robotics ]

Thanks Jesper!

We wish you all a Merry Christmas in this very different 2020. This year has truly changed the world and our way of living. We, Energy Robotics, like to say thank you to all our customers, partners, supporters, friends and family.

An Aibo ERS-7? Sweet!

[ Energy Robotics ]

Thanks Stefan!

The nickname for this drone should be “The Grinch.”

As it turns out, in real life taking samples of trees to determine how healthy they are is best done from the top.

[ DeLeaves ]

Thanks Alexis!

ETH Zurich would like to wish you happy holidays and a successful 2021 full of energy and health!

[ ETH Zurich ]

The QBrobotics Team wishes you all a Merry Christmas and a Happy New Year!

[ QBrobotics ]

The Extend Robotics avatar twin got so excited opening a Christmas gift that it used both arms in coordination, showing off its dexterity and speed.

[ Extend Robotics ]

HEBI Robotics wishes everyone a great holiday season! Onto 2021!

[ HEBI Robotics ]

Christmas at the Mobile Robots Lab at Poznan Polytechnic.

[ Poznan ]

SWarm Holiday Wishes from the Hauert Lab!

[ Hauert Lab ]

Brubotics-VUB SMART and SHERO team wishes you a Merry Christmas and Happy 2021!

[ SMART ]

Success is all about teamwork! Thank you for supporting PAL Robotics. This festive season enjoy and stay safe!

[ PAL Robotics ]

Our robots wish you Happy Holidays! Starring world's first robot slackliner (Leonardo)!

[ Caltech ]

Happy Holidays and a Prosperous New Year from ZenRobotics!

[ ZenRobotics ]

Our Highly Dexterous Manipulation System (HDMS) dual-arm robot is ringing in the new year with good cheer!

[ RE2 Robotics ]

Happy Holidays 2020 from NAO!

[ SoftBank Robotics ]

Happy Holidays from DENSO Robotics!

[ DENSO ] Continue reading

Posted in Human Robots

#437935 Start the New Year Right: By Watching ...

I don’t need to tell you that 2020 was a tough year. There was almost nothing good about it, and we saw it off with a “good riddance” and hopes for a better 2021. But robotics company Boston Dynamics took a different approach to closing out the year: when all else fails, why not dance?

The company released a video last week that I dare you to watch without laughing—or at the very least, cracking a pretty big smile. Because, well, dancing robots are funny. And it’s not just one dancing robot, it’s four of them: two humanoid Atlas bots, one four-legged Spot, and one Handle, a bot-on-wheels built for materials handling.

The robots’ killer moves look almost too smooth and coordinated to be real, leading many to speculate that the video was computer-generated. But if you can trust Elon Musk, there’s no CGI here.

This is not CGI https://t.co/VOivE97vPR

— Elon Musk (@elonmusk) December 29, 2020

Boston Dynamics has gone through a lot of changes in the last ten years; it was acquired by Google in 2013, then sold to Japanese conglomerate SoftBank in 2017 before being acquired again by Hyundai just a few weeks ago for $1.1 billion. But this isn’t the first time the company has taught a robot to dance and made a video for all the world to enjoy; Spot tore up the floor to “Uptown Funk” back in 2018.

Four-legged Spot went commercial in June, with a hefty price tag of $74,500, and was put to some innovative pandemic-related uses, including remotely measuring patients’ vital signs and reminding people to social distance.

Hyundai plans to apply its newly acquired robotics prowess to everything from service and logistics robots to autonomous driving and smart factories.

They’ll have their work cut out for them. Besides being hilarious, kind of heartwarming, and kind of creepy all at once, the robots’ new routine is pretty impressive from an engineering standpoint. Compare it to a 2016 video of Atlas trying to pick up a box (I know it’s a machine with no feelings, but it’s hard not to feel a little bit bad for it, isn’t it?), and it’s clear Boston Dynamics’ technology has made huge strides. It wouldn’t be surprising if, in two years’ time, we see a video of a flash mob of robots whose routine includes partner dancing and back flips (which, admittedly, Atlas can already do).

In the meantime, though, this one is pretty entertaining—and not a bad note on which to start the new year.

Image Credit: Boston Dynamics Continue reading

Posted in Human Robots

#437816 As Algorithms Take Over More of the ...

Algorithms play an increasingly prominent part in our lives, governing everything from the news we see to the products we buy. As they proliferate, experts say, we need to make sure they don’t collude against us in damaging ways.

Fears of malevolent artificial intelligence plotting humanity’s downfall are a staple of science fiction. But there are plenty of nearer-term situations in which relatively dumb algorithms could do serious harm unintentionally, particularly when they are interlocked in complex networks of relationships.

In the economic sphere a high proportion of decision-making is already being offloaded to machines, and there have been warning signs of where that could lead if we’re not careful. The 2010 “Flash Crash,” where algorithmic traders helped wipe nearly $1 trillion off the stock market in minutes, is a textbook example, and widespread use of automated trading software has been blamed for the increasing fragility of markets.

But another important place where algorithms could undermine our economic system is in price-setting. Competitive markets are essential for the smooth functioning of the capitalist system that underpins Western society, which is why countries like the US have strict anti-trust laws that prevent companies from creating monopolies or colluding to build cartels that artificially inflate prices.

These regulations were built for an era when pricing decisions could always be traced back to a human, though. As self-adapting pricing algorithms increasingly decide the value of products and commodities, those laws are starting to look unfit for purpose, say the authors of a paper in Science.

Using algorithms to quickly adjust prices in a dynamic market is not a new idea—airlines have been using them for decades—but previously these algorithms operated based on rules that were hard-coded into them by programmers.

Today the pricing algorithms that underpin many marketplaces, especially online ones, rely on machine learning instead. After being set an overarching goal like maximizing profit, they develop their own strategies based on experience of the market, often with little human oversight. The most advanced also use forms of AI whose workings are opaque even if humans wanted to peer inside.

In addition, the public nature of online markets means that competitors’ prices are available in real time. It’s well-documented that major retailers like Amazon and Walmart are engaged in a never-ending bot war, using automated software to constantly snoop on their rivals’ pricing and inventory.

This combination of factors sets the stage perfectly for AI-powered pricing algorithms to adopt collusive pricing strategies, say the authors. If given free rein to develop their own strategies, multiple pricing algorithms with real-time access to each other’s prices could quickly learn that cooperating with each other is the best way to maximize profits.
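To get a feel for how this can play out in simulation, here is a minimal sketch in the spirit of the academic studies mentioned below (it is an added illustration, not the model from the Science paper; the price grid, demand curve, and learning parameters are all invented). Two independent Q-learning agents repeatedly set prices in a toy duopoly, each observing only the previous round's prices and its own profit. Neither is told to cooperate, yet in runs like this the pair frequently drifts toward prices well above marginal cost.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy duopoly with a small price grid and a simple logit demand split (illustrative numbers only).
PRICES = np.linspace(1.0, 2.0, 6)   # candidate prices; 1.0 equals marginal cost
COST = 1.0
N = len(PRICES)

def profits(i: int, j: int) -> np.ndarray:
    """Per-round profit for both sellers when seller 0 charges PRICES[i] and seller 1 charges PRICES[j]."""
    p = np.array([PRICES[i], PRICES[j]])
    attraction = np.exp(-4.0 * p)        # the cheaper seller attracts more demand
    outside = np.exp(-4.0 * 2.2)         # some buyers walk away if both prices are high
    q = attraction / (attraction.sum() + outside)
    return (p - COST) * q

# One independent Q-learner per seller; the state is simply last round's pair of prices.
Q = [np.zeros((N, N, N)) for _ in range(2)]
alpha, gamma, eps = 0.1, 0.95, 0.05
state = (0, 0)

for _ in range(200_000):
    acts = [
        int(rng.integers(N)) if rng.random() < eps else int(np.argmax(Q[k][state]))
        for k in range(2)
    ]
    reward = profits(acts[0], acts[1])
    next_state = (acts[0], acts[1])
    for k in range(2):
        s0, s1 = state
        old = Q[k][s0, s1, acts[k]]
        Q[k][s0, s1, acts[k]] = old + alpha * (reward[k] + gamma * Q[k][next_state].max() - old)
    state = next_state

greedy_prices = [PRICES[int(np.argmax(Q[k][state]))] for k in range(2)]
print("Prices the trained agents charge:", greedy_prices)   # often well above the cost of 1.0
```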

The authors note that researchers have already found evidence that pricing algorithms will spontaneously develop collusive strategies in computer-simulated markets, and a recent study found evidence that suggests pricing algorithms may be colluding in Germany’s retail gasoline market. And that’s a problem, because today’s anti-trust laws are ill-suited to prosecuting this kind of behavior.

Collusion among humans typically involves companies communicating with each other to agree on a strategy that pushes prices above the true market value. They then develop rules to determine how they maintain this markup in a dynamic market that also incorporates the threat of retaliatory pricing to spark a price war if another cartel member tries to undercut the agreed pricing strategy.

Because of the complexity of working out whether specific pricing strategies or prices are the result of collusion, prosecutions have instead relied on communication between companies to establish guilt. That’s a problem because algorithms don’t need to communicate to collude, and as a result there are few legal mechanisms to prosecute this kind of collusion.

That means legal scholars, computer scientists, economists, and policymakers must come together to find new ways to uncover, prohibit, and prosecute the collusive rules that underpin this behavior, say the authors. Key to this will be auditing and testing pricing algorithms, looking for things like retaliatory pricing, price matching, and aggressive responses to price drops but not price rises.
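That auditing idea can be made concrete with a simple black-box harness. The sketch below is hypothetical (the function names and the toy pricer are invented for illustration): it replays the same market history to a pricing algorithm twice, once with a rival price cut and once with an equal rival price rise, and flags the asymmetric reaction pattern described above.

```python
from typing import Callable, List, Tuple

History = List[Tuple[float, float]]   # rounds of (own_price, rival_price)

def audit_asymmetry(price_fn: Callable[[History], float], history: History, delta: float = 0.10) -> dict:
    """Compare a black-box pricer's reaction to a rival price cut vs. an equal rival price rise."""
    own, rival = history[-1]
    baseline = price_fn(history + [(own, rival)])
    cut_resp = price_fn(history + [(own, rival - delta)])
    rise_resp = price_fn(history + [(own, rival + delta)])
    return {
        "reaction_to_cut": cut_resp - baseline,
        "reaction_to_rise": rise_resp - baseline,
        # Flag pricers that chase cuts far more aggressively than rises.
        "asymmetric": abs(cut_resp - baseline) > 2 * abs(rise_resp - baseline) + 1e-9,
    }

# Toy pricer for demonstration only: it matches rival cuts but ignores rival rises.
def suspicious_pricer(history: History) -> float:
    own, rival = history[-1]
    return min(own, rival) if rival < own else own

print(audit_asymmetry(suspicious_pricer, [(10.0, 10.0)] * 5))
```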

Once collusive pricing rules are uncovered, computer scientists need to come up with ways to constrain algorithms from adopting them without sacrificing their clear efficiency benefits. It could also be helpful to make preventing this kind of collusive behavior the responsibility of the companies deploying them, with stiff penalties for those who don’t keep their algorithms in check.

One problem, though, is that algorithms may evolve strategies that humans would never think of, which could make spotting this behavior tricky. Imbuing courts with the technical knowledge and capacity to investigate this kind of evidence will also prove difficult, but getting to grips with these problems is an even more pressing challenge than it might seem at first.

While anti-competitive pricing algorithms could wreak havoc, there are plenty of other arenas where collusive AI could have even more insidious effects, from military applications to healthcare and insurance. Developing the capacity to predict and prevent AI scheming against us will likely be crucial going forward.

Image Credit: Pexels from Pixabay Continue reading

Posted in Human Robots