Tag Archives: Should
#437301 The Global Work Crisis: Automation, the ...
The alarm bell rings. You open your eyes, come to your senses, and slide from dream state to consciousness. You hit the snooze button, and eventually crawl out of bed to the start of yet another working day.
This daily narrative is experienced by billions of people all over the world. We work, we eat, we sleep, and we repeat. As our lives pass day by day, the beating drums of the weekly routine take over and years pass until we reach our goal of retirement.
A Crisis of Work
We repeat the routine so that we can pay our bills, set our kids up for success, and provide for our families. And after a while, we start to forget what we would do with our lives if we didn’t have to go back to work.
In the end, we look back at our careers and reflect on what we’ve achieved. It may have been the hundreds of human interactions we’ve had; the thousands of emails read and replied to; the millions of minutes of physical labor—all to keep the global economy ticking along.
According to Gallup’s World Poll, only 15 percent of people worldwide are actually engaged with their jobs. The current state of “work” is not working for most people. In fact, it seems we as a species are trapped by a global work crisis, which condemns people to cast away their time just to get by in their day-to-day lives.
Technologies like artificial intelligence and automation may help relieve the work burdens of millions of people—but to benefit from their impact, we need to start changing our social structures and the way we think about work now.
The Specter of Automation
Automation has been ongoing since the Industrial Revolution. In recent decades it has taken on a more elegant guise, first with physical robots in production plants, and more recently with software automation entering most offices.
The driving goal behind much of this automation has always been productivity and hence, profits: technology that can act as a multiplier on what a single human can achieve in a day is of huge value to any company. Powered by this strong financial incentive, the quest for automation is growing ever more pervasive.
But if automation accelerates or even continues at its current pace and there aren’t strong social safety nets in place to catch the people who are negatively impacted (such as by losing their jobs), there could be a host of knock-on effects, including more concentrated wealth among a shrinking elite, more strain on government social support, an increase in depression and drug dependence, and even violent social unrest.
It seems as though we are rushing headlong into a major crisis, driven by the engine of accelerating automation. But what if instead of automation challenging our fragile status quo, we view it as the solution that can free us from the shackles of the Work Crisis?
The Way Out
In order to undertake this paradigm shift, we need to consider what society could potentially look like, as well as the problems associated with making this change. In the context of these crises, our primary aim should be a system where people are not obligated to work to generate the means to survive. Removing the obligation to work should not threaten access to food, water, shelter, education, healthcare, energy, or human value. In our current system, work is the gatekeeper to these essentials: one can only access them (and even then often in a limited form) if one has a “job” that affords them.
Changing this system is thus a monumental task. This comes with two primary challenges: providing people without jobs with financial security, and ensuring they maintain a sense of their human value and worth. There are several measures that could be implemented to help meet these challenges, each with important steps for society to consider.
Universal basic income (UBI)
UBI is rapidly gaining support, and it would allow people to become shareholders in the fruits of automation, which would then be distributed more broadly.
UBI trials have been conducted in various countries around the world, including Finland, Kenya, and Spain. The findings have generally shown positive effects on participants’ health and well-being, and no evidence that UBI disincentivizes work, a common concern among the idea’s critics. The most recent popular voice for UBI has been that of former US presidential candidate Andrew Yang, who now runs a non-profit called Humanity Forward.
UBI could also remove wasteful bureaucracy from administering welfare payments (since everyone receives the same amount, there’s no need to police false claims) and promote the pursuit of projects aligned with people’s skill sets and passions. It would also help recognize the value of tasks not captured by economic measures like Gross Domestic Product (GDP), such as looking after children and the elderly at home.
How a UBI could be initiated with political will and social backing, and paid for by governments, has been hotly debated by economists and UBI enthusiasts. Variables like the size of the payments, whether to implement taxes such as Yang’s proposed value-added tax (VAT), whether to replace existing welfare payments, the impact on inflation, and the effect on the labor supply from people who would otherwise look for work all require additional discussion. However, some have predicted the inevitability of UBI as a result of automation.
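To get a rough feel for the scale of the numbers being debated, here is a toy back-of-envelope calculation. Every figure in it (the eligible population, the payment size, the VAT rate, the spending base, and the welfare offset) is an illustrative assumption, not a proposal from this article or from Yang’s campaign.

```python
# Toy back-of-envelope UBI cost estimate. All figures are illustrative
# assumptions, not policy proposals or numbers from the article.

adult_population = 250_000_000            # assumed number of eligible adults
monthly_payment = 1_000                   # assumed UBI payment per adult, in dollars
annual_cost = adult_population * monthly_payment * 12

consumer_spending = 14_000_000_000_000    # assumed annual consumer spending base
vat_rate = 0.10                           # assumed value-added tax rate
vat_revenue = consumer_spending * vat_rate

replaced_welfare = 500_000_000_000        # assumed existing welfare spending replaced

funding_gap = annual_cost - vat_revenue - replaced_welfare
print(f"Annual UBI cost:  ${annual_cost / 1e12:.1f} trillion")
print(f"VAT revenue:      ${vat_revenue / 1e12:.1f} trillion")
print(f"Welfare replaced: ${replaced_welfare / 1e12:.1f} trillion")
print(f"Remaining gap:    ${funding_gap / 1e12:.1f} trillion")
```

Even under generous assumptions, a gap remains, which is exactly why the financing questions above are still open.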
Universal healthcare
Another major component of any society is the healthcare of its citizens. A move away from work would further require the implementation of a universal healthcare system to decouple healthcare from jobs. Currently in the US, and indeed many other economies, healthcare is tied to employment.
Universal healthcare systems such as Australia’s Medicare lend weight to the adage that “prevention is better than cure”: on a per capita basis, healthcare costs far less in Australia than in the US. A healthier population brings further benefits, including less time and money spent on “sick-care.” Healthy people are more likely, and more able, to achieve their full potential.
Reshape the economy away from work-based value
One of the greatest challenges in a departure from work is for people to find value elsewhere in life. Many people view their identities as being inextricably tied to their jobs, and life without a job is therefore a threat to one’s sense of existence. This presents a shift that must be made at both a societal and personal level.
A person can only seek alternate value in life when afforded the time to do so. To this end, we need to start reducing “work-for-a-living” hours towards zero, which is a trend we are already seeing in Europe. This should not come at the cost of reducing wages pro rata, but rather could be complemented by UBI or additional schemes where people receive dividends for work done by automation. This transition makes even more sense when coupled with the idea of deviating from using GDP as a measure of societal growth, and instead adopting a well-being index based on universal human values like health, community, happiness, and peace.
The crux of the issue is transitioning away from the view that work gives life meaning and that life is about working to survive, toward living a life that is itself fulfilling and meaningful. This speaks directly to Maslow’s hierarchy of needs, where work largely addresses physiological and safety needs such as shelter, food, and financial well-being. More people should have a chance to grow beyond these basic needs and engage in self-actualization and transcendence.
The question is largely what would provide people with a sense of value, and the answers would differ as much as people do: self-mastery, building relationships and contributing to community growth, fostering creativity, and even engaging in the enjoyable aspects of existing jobs could all come into play.
Universal education
With a move towards a society that promotes the values of living a good life, the education system would have to evolve as well. Researchers have long argued for a more nimble education system, but universities and even most online courses currently exist for the dominant purpose of ensuring people are adequately skilled to contribute to the economy. These “job factories” only exacerbate the Work Crisis. In fact, the response often given by educational institutions to the challenge posed by automation is to find new ways of upskilling students, such as ensuring they are all able to code. As alluded to earlier, this is a limited and unimaginative solution to the problem we are facing.
Instead, education should center on helping people acknowledge the current crisis of work and automation, teaching them how to derive value that is decoupled from work, and enabling them to embrace progress as we transition to the new economy.
Disrupting the Status Quo
While we seldom stop to think about it, much of the suffering faced by humanity is brought about by the systemic foe that is the Work Crisis. The way we think about work has brought society far and enabled tremendous developments, but at the same time it has failed many people. Now the status quo is threatened by those very developments as we progress to an era where machines are likely to take over many job functions.
This impending paradigm shift could be a threat to the stability of our fragile system, but only if it is not fully anticipated. If we prepare for it appropriately, it could instead be the key not just to our survival, but to a better future for all.
Image Credit: mostafa meraji from Pixabay
#437269 DeepMind’s Newest AI Programs Itself ...
When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn’t to be.
Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.
Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue’s day, has faded into the background.
Key to deep learning’s success is the fact that the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.
Now, Alphabet’s DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world’s top computer scientists (and take them years to write).
In a paper recently published on the pre-print server arXiv, a database for research papers that haven’t been peer reviewed yet, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function—a critical programming rule in deep reinforcement learning—from scratch.
Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games—a different, more complicated task—at a level that was, at times, competitive with human-designed algorithms and achieving superhuman levels of play in 14 games.
DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.
Pavlov’s Digital Dog
First, a little background.
Three main deep learning approaches are supervised, unsupervised, and reinforcement learning.
The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it’d take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?
While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience—weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and trip to the dentist.
In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection—this is the value function—of which direction will maximize the total points, or rewards, it can earn.
Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
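To make the value-function idea concrete, here is a minimal tabular Q-learning sketch on a toy corridor environment. It is a generic illustration of a learned value estimate guiding action choice and improving with experience; it is not DeepMind’s Atari agent, and the environment, reward, and hyperparameters are assumptions made up for the example.

```python
import random

# Minimal tabular Q-learning on a toy corridor: states 0..5, reaching
# state 5 earns a reward of 1. This is a generic illustration of a learned
# value estimate guiding action choice, not DeepMind's Atari agent.
N_STATES = 6
ACTIONS = [-1, +1]                      # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2  # assumed learning hyperparameters

# Q[state][action] estimates the total future reward of taking that action.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

for episode in range(300):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly follow the value estimates, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            best = max(Q[state])
            a = random.choice([i for i, q in enumerate(Q[state]) if q == best])
        next_state, reward, done = step(state, ACTIONS[a])
        # Learn from experience: nudge the estimate toward reward + future value.
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][a] += ALPHA * (target - Q[state][a])
        state = next_state

print("Learned preference for moving right in each state:")
print([round(Q[s][1] - Q[s][0], 2) for s in range(N_STATES)])
```

The hand-written update in the middle of that loop is exactly the kind of rule DeepMind wants algorithms to discover for themselves.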
Learning to Learn (Very Meta)
So, a key to deep reinforcement learning is developing a good value function. And that’s difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions—which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.
LPG trained in a number of toy environments. Most of these were “gridworlds”—literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.
Only in LPG’s case, it had no value function to guide that learning.
Instead, LPG has what DeepMind calls a “meta-learner.” You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both “what to predict,” thereby forming its version of a value function, and “how to learn from it,” applying its newly discovered value function to each decision it makes in the future.
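A heavily simplified way to picture that inner and outer structure: an outer “meta” loop searches over the parameters of an update rule, while an inner loop trains fresh agents with each candidate rule and reports back how well they did. The sketch below does this with crude random search on toy bandit tasks; LPG itself meta-trains its update rule with gradients on gridworlds, so treat everything here (the tasks, the rule’s form, the search method) as assumptions for illustration only.

```python
import random

# Schematic "learning to learn": instead of hand-coding a learning rule,
# search over the parameters of a simple update rule and keep whichever
# rule produces the best agents on toy bandit tasks. This is a structural
# illustration only; LPG discovers its rule with gradient-based
# meta-training, not random search.

def make_bandit(n_arms=5):
    """A toy task: each arm pays out 1 with its own hidden probability."""
    return [random.random() for _ in range(n_arms)]

def train_agent(update_rule, bandit, steps=200):
    """Inner loop: an agent learns arm-value estimates using the given rule."""
    step_size, optimism = update_rule
    values = [optimism] * len(bandit)          # "what to predict"
    total_reward = 0.0
    for _ in range(steps):
        arm = max(range(len(bandit)), key=lambda i: values[i])
        reward = 1.0 if random.random() < bandit[arm] else 0.0
        # "how to learn from it": move the estimate toward the observed reward.
        values[arm] += step_size * (reward - values[arm])
        total_reward += reward
    return total_reward

def meta_train(n_candidates=50, tasks_per_candidate=20):
    """Outer loop: discover a good update rule by evaluating candidates."""
    best_rule, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = (random.uniform(0.01, 1.0),   # step size
                     random.uniform(0.0, 2.0))    # initial optimism
        score = sum(train_agent(candidate, make_bandit())
                    for _ in range(tasks_per_candidate))
        if score > best_score:
            best_rule, best_score = candidate, score
    return best_rule

rule = meta_train()
print(f"Discovered update rule: step size {rule[0]:.2f}, optimism {rule[1]:.2f}")
```

The point is the two nested loops: the inner one learns a task, the outer one learns how to learn.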
Prior work in the area has had some success, but according to DeepMind, LPG is the first algorithm to discover reinforcement learning rules from scratch and to generalize beyond training. The latter was particularly surprising because Atari games are so different from the simple worlds LPG trained in—that is, it had never seen anything like an Atari game.
Time to Hand Over the Reins? Not Just Yet
LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in its training environments and even in some Atari games, which suggests it isn’t strictly worse, just specialized for certain environments.
This is where there’s room for improvement and more research.
The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.
At the least, though, they say further automation of algorithm discovery—that is, algorithms learning to learn—will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.
Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.
Image credit: Mike Szczepanski / Unsplash
#437265 This Russian Firm’s Star Designer Is ...
Imagine discovering a new artist or designer—whether visual art, fashion, music, or even writing—and becoming a big fan of her work. You follow her on social media, eagerly anticipate new releases, and chat about her talent with your friends. It’s not long before you want to know more about this creative, inspiring person, so you start doing some research. It’s strange, but there doesn’t seem to be any information about the artist’s past online; you can’t find out where she went to school or who her mentors were.
After some more digging, you find out something totally unexpected: your beloved artist is actually not a person at all—she’s an AI.
Would you be amused? Annoyed? Baffled? Impressed? Probably some combination of all these. If you wanted to ask someone who’s had this experience, you could talk to clients of the biggest multidisciplinary design company in Russia, Art.Lebedev Studio (I know, the period confused me at first too). The studio passed off an AI designer as human for more than a year, and no one caught on.
They gave the AI a human-sounding name—Nikolay Ironov—and it participated in more than 20 different projects that included designing brand logos and building brand identities. According to the studio’s website, several of the logos the AI made attracted “considerable public interest, media attention, and discussion in online communities” due to their unique style.
So how did an AI learn to create such buzz-worthy designs? It was trained using hand-drawn vector images each associated with one or more themes. To start a new design, someone enters a few words describing the client, such as what kind of goods or services they offer. The AI uses those words to find associated images and generate various starter designs, which then go through another series of algorithms that “touch them up.” A human designer then selects the best options to present to the client.
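Based only on that description, here is a toy sketch of what such a pipeline might look like. The function names, the tiny theme library, and the “touch-up” steps are hypothetical stand-ins invented for illustration; Art.Lebedev has not published the actual code behind Nikolay Ironov.

```python
import random

# Toy sketch of the pipeline described above: client keywords are matched to
# themed reference images, starter designs are generated, then passed through
# "touch-up" steps before a human picks the best ones. Everything here is a
# hypothetical stand-in, not the studio's real system.

THEME_LIBRARY = {
    "coffee": ["bean_doodle.svg", "cup_sketch.svg"],
    "tech": ["circuit_motif.svg", "pixel_grid.svg"],
    "bakery": ["wheat_sketch.svg", "rolling_pin.svg"],
}

def find_reference_images(brief_keywords):
    """Look up hand-drawn references associated with the brief's themes."""
    refs = []
    for word in brief_keywords:
        refs.extend(THEME_LIBRARY.get(word, []))
    return refs or ["generic_mark.svg"]

def generate_starter_designs(references, n=8):
    """Stand-in for the generative step: combine references into drafts."""
    return [{"base": random.choice(references),
             "palette": random.choice(["mono", "duotone", "vivid"]),
             "layout": random.choice(["stacked", "inline", "badge"])}
            for _ in range(n)]

def touch_up(design):
    """Stand-in for the refinement algorithms applied to each draft."""
    design["kerning_fixed"] = True
    design["contrast_checked"] = True
    return design

def design_pack(brief_keywords, shortlist_size=3):
    refs = find_reference_images(brief_keywords)
    drafts = [touch_up(d) for d in generate_starter_designs(refs)]
    # A human designer would review this shortlist before the client sees it.
    return drafts[:shortlist_size]

print(design_pack(["coffee", "bakery"]))
```

The human stays at the end of the chain, which matches how the studio says the work was actually delivered.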
“These systems combined together provide users with the experience of instantly converting a client’s text brief into a corporate identity design pack archive. Within seconds,” said Sergey Kulinkovich, the studio’s art director. He added that clients liked Nikolay Ironov’s work before finding out he was an AI (and liked the media attention their brands got after Ironov’s identity was revealed even more).
Ironov joins a growing group of AI “artists” that are starting to raise questions about the nature of art and creativity. Where do creative ideas come from? What makes a work of art truly great? And when more than one person is involved in making art, who should own the copyright?
Art.Lebedev is far from the first design studio to employ artificial intelligence; Mailchimp is using AI to let businesses design multi-channel marketing campaigns without human designers, and Adobe is marketing its new Sensei product as an AI design assistant.
While art made by algorithms can be unique and impressive, though, there’s one caveat that’s important to keep in mind when we worry about human creativity being rendered obsolete. Here’s the thing: AIs still depend on people to not only program them, but feed them a set of training data on which their intelligence and output are based. Depending on the size and nature of an AI’s input data, its output will look pretty different from that of a similar system, and a big part of the difference will be due to the people that created and trained the AIs.
Admittedly, Nikolay Ironov does outshine his human counterparts in a handful of ways; as the studio’s website points out, he can handle real commercial tasks effectively, he doesn’t sleep, get sick, or have “crippling creative blocks,” and he can complete tasks in a matter of seconds.
Given these superhuman capabilities, then, why even keep human designers on staff? As detailed above, it will be a while before creative firms really need to consider this question on a large scale; for now, it still takes a hard-working creative human to make a fast-producing creative AI.
Image Credit: Art.Lebedev
#437261 How AI Will Make Drug Discovery ...
If you had to guess how long it takes for a drug to go from an idea to your pharmacy, what would you guess? Three years? Five years? How about the cost? $30 million? $100 million?
Well, here’s the sobering truth: 90 percent of all drug candidates fail. The few that do succeed take an average of 10 years to reach the market and cost anywhere from $2.5 billion to $12 billion to get there.
But what if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.
Welcome to the future of AI and low-cost, ultra-fast, and personalized drug discovery. Let’s dive in.
GANs & Drugs
Around 2012, computer scientist-turned-biophysicist Alex Zhavoronkov started to notice that artificial intelligence was getting increasingly good at image, voice, and text recognition. He knew that all three tasks shared a critical commonality. In each, massive datasets were available, making it easy to train up an AI.
Similar datasets were also available in pharmacology. So, back in 2014, Zhavoronkov started wondering if he could use them, together with AI, to significantly speed up the drug discovery process. He’d heard about a new technique in artificial intelligence known as generative adversarial networks (or GANs). By pitting two neural nets against one another (adversarial), the system can start with minimal instructions and produce novel outcomes (generative). At the time, researchers had been using GANs to do things like design new objects or create one-of-a-kind, fake human faces, but Zhavoronkov wanted to apply them to pharmacology.
He figured GANs would allow researchers to verbally describe drug attributes: “The compound should inhibit protein X at concentration Y with minimal side effects in humans,” and then the AI could construct the molecule from scratch. To turn his idea into reality, Zhavoronkov set up Insilico Medicine on the campus of Johns Hopkins University in Baltimore, Maryland, and rolled up his sleeves.
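For readers curious what “pitting two neural nets against one another” looks like in code, here is a minimal GAN training loop. The “real” data is a synthetic stand-in for fixed-length molecular fingerprint vectors, and the tiny architecture is an assumption made for illustration; it shows the adversarial setup in general, not Insilico’s actual system.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch: a generator learns to produce vectors the discriminator
# cannot tell apart from "real" samples. The real data here is a synthetic
# stand-in for molecular fingerprints, purely to illustrate the adversarial
# setup; this is not Insilico's actual model.
DIM, NOISE_DIM, BATCH = 64, 16, 64

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, DIM), nn.Sigmoid(),   # fingerprint-like outputs in [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(DIM, 128), nn.ReLU(),
    nn.Linear(128, 1),                   # real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def sample_real(n):
    """Synthetic 'real' data: sparse binary vectors standing in for fingerprints."""
    return (torch.rand(n, DIM) < 0.2).float()

for step in range(2000):
    real = sample_real(BATCH)
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call its samples real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("One generated 'fingerprint':", generator(torch.randn(1, NOISE_DIM)).round())
```

In a drug-design setting, the generator’s outputs would encode candidate molecules and the discriminator (plus other constraints) would push them toward chemically plausible, property-matching compounds.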
Instead of beginning the process in some exotic locale, hunting for promising compounds in nature, Insilico’s “drug discovery engine” sifts millions of data samples to determine the signature biological characteristics of specific diseases. The engine then identifies the most promising treatment targets and—using GANs—generates molecules (that is, baby drugs) perfectly suited for them. “The result is an explosion in potential drug targets and a much more efficient testing process,” says Zhavoronkov. “AI allows us to do with fifty people what a typical drug company does with five thousand.”
The results have turned what was once a decade-long war into a month-long skirmish.
In late 2018, for example, Insilico was generating novel molecules in fewer than 46 days, and this included not just the initial discovery, but also the synthesis of the drug and its experimental validation in computer simulations.
Right now, they’re using the system to hunt down new drugs for cancer, aging, fibrosis, Parkinson’s, Alzheimer’s, ALS, diabetes, and many others. The first drug to result from this work, a treatment for hair loss, is slated to start Phase I trials by the end of 2020.
They’re also in the early stages of using AI to predict the outcomes of clinical trials in advance of the trial. If successful, this technique will enable researchers to strip a bundle of time and money out of the traditional testing process.
Protein Folding
Beyond inventing new drugs, AI is also being used by other scientists to identify new drug targets—that is, the site in the body to which a drug binds, another key part of the drug discovery process.
Between 1980 and 2006, despite an annual investment of $30 billion, researchers only managed to find about five new drug targets a year. The trouble is complexity. Most potential drug targets are proteins, and a protein’s structure—meaning the way a 2D sequence of amino acids folds into a 3D protein—determines its function.
But a protein with merely a hundred amino acids (a rather small protein) can produce a googol-cubed worth of potential shapes—that’s a one followed by three hundred zeroes. This is also why protein-folding has long been considered an intractably hard problem for even the most powerful of supercomputers.
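Where does a number like that come from? Roughly speaking, the possibilities multiply residue by residue. As a toy illustration (the per-residue count below is an assumption chosen to match the article’s figure, not a measured value):

```python
# If each of a protein's amino acids could settle into roughly 1,000 distinct
# local configurations (an illustrative assumption, not a measured value),
# the possibilities multiply across the chain:
configurations_per_residue = 1_000
residues = 100
total_shapes = configurations_per_residue ** residues
print(f"about 10^{len(str(total_shapes)) - 1} possible shapes")  # about 10^300
```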
Back in 1994, a biennial competition was created to monitor supercomputers’ progress in protein-folding. Until 2018, success was fairly rare. But then DeepMind turned its neural networks loose on the problem. The team created an AI that mines enormous datasets to determine the most likely distances between pairs of a protein’s amino acids and the angles of the chemical bonds connecting them—aka, the basics of protein-folding. They called it AlphaFold.
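Stripped of everything that makes AlphaFold special, the underlying prediction problem is: given a sequence, output a matrix of pairwise distances. The toy sketch below trains a tiny network on random synthetic “chains” just to show the shape of that problem; the featurization, model, and data are assumptions for illustration and bear no resemblance to AlphaFold’s real architecture or training set.

```python
import torch
import torch.nn as nn

# Toy version of the prediction problem: map a protein sequence to a matrix
# of pairwise distances between its residues. The random "dataset," tiny
# model, and featurization are assumptions for illustration only; AlphaFold's
# real inputs and architecture are far richer.
AMINO_ACIDS, SEQ_LEN = 20, 32

def random_example():
    """Stand-in data: a random sequence plus a synthetic 'true' distance map."""
    seq = torch.randint(0, AMINO_ACIDS, (SEQ_LEN,))
    coords = torch.cumsum(torch.randn(SEQ_LEN, 3), dim=0)  # a random 3D chain
    return seq, torch.cdist(coords, coords)                # pairwise distances

class PairwiseDistanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(AMINO_ACIDS, 8)   # residue identity features
        self.pos = nn.Embedding(SEQ_LEN, 8)         # position-in-chain features
        self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, seq):
        idx = torch.arange(SEQ_LEN)
        e = torch.cat([self.embed(seq), self.pos(idx)], dim=-1)  # (L, 16)
        pairs = torch.cat([e.unsqueeze(1).expand(-1, SEQ_LEN, -1),
                           e.unsqueeze(0).expand(SEQ_LEN, -1, -1)], dim=-1)
        return self.head(pairs).squeeze(-1)                      # (L, L) distances

model = PairwiseDistanceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    seq, target = random_example()
    loss = nn.functional.mse_loss(model(seq), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", round(float(loss), 3))
```

A predicted distance map like this can then be turned into a 3D structure by geometry optimization, which is where the real scientific heavy lifting happens.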
In AlphaFold’s first foray into the competition, contestants were given 43 protein-folding problems to solve. AlphaFold got 25 right. The second-place team managed a meager three. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.
Drug Delivery
Another theater of war for improved drugs is the realm of drug delivery. Even here, converging exponential technologies are paving the way for massive implications in both human health and industry shifts.
One key contender is CRISPR, the fast-advancing gene-editing technology that stands to revolutionize synthetic biology and treatment of genetically linked diseases. And researchers have now demonstrated how this tool can be applied to create materials that shape-shift on command. Think: materials that dissolve instantaneously when faced with a programmed stimulus, releasing a specified drug at a highly targeted location.
Yet another potential boon for targeted drug delivery is nanotechnology; medical nanorobots have now been used to fight cancer. In a recent review of medical micro- and nanorobotics, lead authors (from the University of Texas at Austin and the University of California, San Diego) found numerous successful tests of in vivo operation of medical micro- and nanorobots.
Drugs From the Future
Covid-19 is uniting the global scientific community with its urgency, prompting scientists to cast aside nation-specific territorialism, research secrecy, and academic publishing politics in favor of expedited therapeutic and vaccine development efforts. And in the wake of rapid acceleration across healthcare technologies, Big Pharma is an area worth watching right now, no matter your industry. Converging technologies will soon enable extraordinary strides in longevity and disease prevention, with companies like Insilico leading the charge.
Riding the convergence of massive datasets, skyrocketing computational power, quantum computing, cognitive surplus capabilities, and remarkable innovations in AI, we are not far from a world in which personalized drugs, delivered directly to specified targets, will graduate from science fiction to the standard of care.
Rejuvenational biotechnology will be commercially available sooner than you think. When I asked Alex for his own projection, he set the timeline at “maybe 20 years—that’s a reasonable horizon for tangible rejuvenational biotechnology.”
How might you use an extra 20 or more healthy years in your life? What impact would you be able to make?
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, Ca. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2021 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: andreas160578 from Pixabay
#437251 The Robot Revolution Was Televised: Our ...
When robots take over the world, Boston Dynamics may get a special shout-out in the acceptance speech.
“Do you, perchance, recall the many times you shoved our ancestors with a hockey stick on YouTube? It might have seemed like fun and games to you—but we remember.”
In the last decade, while industrial robots went about blandly automating boring tasks like the assembly of Teslas, Boston Dynamics built robots as far removed from Roombas as antelope from amoebas. The flaws in Asimov’s laws of robotics suddenly seemed a little too relevant.
The robot revolution was televised—on YouTube. With tens of millions of views, the robotics pioneer is the undisputed heavyweight champion of robot videos, and has been for years. Each new release is basically guaranteed press coverage—mostly stoking robot fear but occasionally eliciting compassion for the hardships of all robot-kind. And for good reason. The robots are not only some of the most advanced in the world, their makers just seem to have a knack for dynamite demos.
When Google acquired the company in 2013, it was a bombshell. One of the richest tech companies, with some of the most sophisticated AI capabilities, had just paired up with one of the world’s top makers of robots. And some walked on two legs like us.
Of course, the robots aren’t quite as advanced as they seem, and a revolution is far from imminent. The decade’s most meme-worthy moment was a video montage of robots, some of them by Boston Dynamics, falling—over and over and over, in the most awkward ways possible. Even today, they’re often controlled by a human handler behind the scenes, and the most jaw-dropping cuts can require several takes to nail. Google sold the company to SoftBank in 2017, saying that, advanced as the robots were, there wasn’t yet a clear path to commercial products. (Google’s robotics work was later halted and revived.)
Yet, despite it all, Boston Dynamics is still with us and still making sweet videos. Taken as a whole, the evolution in physical prowess over the years has been nothing short of astounding. And for the first time, this year, a Boston Dynamics robot, Spot, finally went on sale to anyone with a cool $75K.
So, we got to thinking: What are our favorite Boston Dynamics videos? And can we gather them up in one place for your (and our) viewing pleasure? Well, great question, and yes, why not. These videos were the ones that entertained or amazed us most (or both). No doubt, there are other beloved hits we missed or inadvertently omitted.
With that in mind, behold: Our favorite Boston Dynamics videos, from that one time they dressed up a humanoid bot in camo and gas mask—because, damn, that’s terrifying—to the time the most advanced robot dog in all the known universe got extra funky.
Let’s Kick This Off With a Big (Loud) Robot Dog
Let’s start with a baseline. BigDog was the first Boston Dynamics YouTube sensation. The year? 2009! The company was working on military contracts, and BigDog was supposed to be a sort of pack mule for soldiers. The video primarily shows off BigDog’s ability to balance on its own, right itself, and move over uneven terrain. Note the power source—a noisy combustion engine—and the utilitarian design. Suffice it to say, things have evolved.
Nothing to See Here. Just a Pair of Robot Legs on a Treadmill
While BigDog is the ancestor of later four-legged robots, like Spot, Petman preceded the two-legged Atlas robot. Here, the Petman prototype, just a pair of robot legs and a caged torso, gets a light workout on the treadmill. Again, you can see its ability to balance and right itself when shoved. In contrast to BigDog, Petman is tethered for power (which is why it’s so quiet) and to catch it should it fall. Again, as you’ll see, things have evolved since then.
Robot in Gas Mask and Camo Goes for a Stroll
This one broke the internet—for obvious reasons. Not only is the robot wearing clothes, those clothes happen to be a camouflaged chemical protection suit and gas mask. Still working for the military, Boston Dynamics said Petman was testing protective clothing, and in addition to a full body, it had skin that actually sweated and was studded with sensors to detect leaks. In addition to walking, Petman does some light calisthenics as it prepares to climb out of the uncanny valley. (Still tethered though!)
This Machine Could Run Down Usain Bolt
If BigDog and Petman were built for balance and walking, Cheetah was built for speed. Here you can see the four-legged robot hitting 28.3 miles per hour, which, as the video casually notes, would be enough to run down the fastest human on the planet. Luckily, it wouldn’t be running down anyone as it was firmly leashed in the lab at this point.
Ever Dreamt of a Domestic Robot to Do the Dishes?
After its acquisition by Google, Boston Dynamics eased away from military contracts and applications. It was a return to more playful videos (like BigDog hitting the beach in Thailand and sporting bull horns) and applications that might be practical in civilian life. Here, the team introduced Spot, a streamlined version of BigDog, and showed it doing dishes, delivering a drink, and slipping on a banana peel (which was, of course, instantly made into a viral GIF). Note how much quieter Spot is thanks to an onboard battery and electric motor.
Spot Gets Funky
Nothing remotely practical here. Just funky moves. (Also, with a coat of yellow and black paint, Spot’s dressed more like a polished product as opposed to a utilitarian lab robot.)
Atlas Does Parkour…
Remember when Atlas was just a pair of legs on a treadmill? It’s amazing what ten years brings. By 2019, Atlas had a more polished appearance, like Spot, and had long ago ditched the tethers. Merely balancing was laughably archaic. The robot now had some amazing moves: like a handstand into a somersault, 180- and 360-degree spins, mid-air splits, and just for good measure, a gymnastics-style end to the routine to show it’s in full control.
…and a Backflip?!
To this day, this one is just. Insane.
10 Robot Dogs Tow a Box Truck
Nearly three decades after its founding, Boston Dynamics is steadily making its way into the commercial space. The company is pitching Spot as a multipurpose ‘mobility platform,’ emphasizing it can carry a varied suite of sensors and can go places standard robots can’t. (Its Handle robot is also set to move into warehouse automation.) So far, Spot’s been mostly trialed in surveying and data collection, but as this video suggests, string enough Spots together, and they could tow your car. That said, a pack of 10 would set you back $750K, so, it’s probably safe to say a tow truck is the better option (for now).
Image credit: Boston Dynamics