
#437929 These Were Our Favorite Tech Stories ...

This time last year we were commemorating the end of a decade and looking ahead to the next one. Enter the year that felt like a decade all by itself: 2020. News written in January, the before-times, feels hopelessly out of touch with all that came after. Stories published in the early days of the pandemic are, for the most part, similarly naive.

The year’s news cycle was swift and brutal, ping-ponging from pandemic to extreme social and political tension, whipsawing economies, and natural disasters. Hope. Despair. Loneliness. Grief. Grit. More hope. Another lockdown. It’s been a hell of a year.

Though 2020 was dominated by big, hairy societal change, science and technology took significant steps forward. Researchers singularly focused on the pandemic and collaborated on solutions to a degree never before seen. New technologies converged to deliver vaccines in record time. The dark side of tech, from biased algorithms to the threat of omnipresent surveillance and corporate control of artificial intelligence, continued to rear its head.

Meanwhile, AI showed uncanny command of language, joined Reddit threads, and made inroads into some of science’s grandest challenges. Mars rockets flew for the first time, and a private company delivered astronauts to the International Space Station. Deprived of night life, concerts, and festivals, millions traveled to virtual worlds instead. Anonymous jet packs flew over LA. Mysterious monoliths appeared and disappeared worldwide.

It was all, you know, very 2020. For this year’s (in-no-way-all-encompassing) list of fascinating stories in tech and science, we tried to select those that weren’t totally dated by the news, but rose above it in some way. So, without further ado: This year’s picks.

How Science Beat the Virus
Ed Yong | The Atlantic
“Much like famous initiatives such as the Manhattan Project and the Apollo program, epidemics focus the energies of large groups of scientists. …But ‘nothing in history was even close to the level of pivoting that’s happening right now,’ Madhukar Pai of McGill University told me. … No other disease has been scrutinized so intensely, by so much combined intellect, in so brief a time.”

‘It Will Change Everything’: DeepMind’s AI Makes Gigantic Leap in Solving Protein Structures
Ewen Callaway | Nature
“In some cases, AlphaFold’s structure predictions were indistinguishable from those determined using ‘gold standard’ experimental methods such as X-ray crystallography and, in recent years, cryo-electron microscopy (cryo-EM). AlphaFold might not obviate the need for these laborious and expensive methods—yet—say scientists, but the AI will make it possible to study living things in new ways.”

OpenAI’s Latest Breakthrough Is Astonishingly Powerful, But Still Fighting Its Flaws
James Vincent | The Verge
“What makes GPT-3 amazing, they say, is not that it can tell you that the capital of Paraguay is Asunción (it is) or that 466 times 23.5 is 10,987 (it’s not), but that it’s capable of answering both questions and many more beside simply because it was trained on more data for longer than other programs. If there’s one thing we know that the world is creating more and more of, it’s data and computing power, which means GPT-3’s descendants are only going to get more clever.”

Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

The Dark Side of Big Tech’s Funding for AI Research
Tom Simonite | Wired
“Timnit Gebru’s exit from Google is a powerful reminder of how thoroughly companies dominate the field, with the biggest computers and the most resources. …[Meredith] Whittaker of AI Now says properly probing the societal effects of AI is fundamentally incompatible with corporate labs. ‘That kind of research that looks at the power and politics of AI is and must be inherently adversarial to the firms that are profiting from this technology.’”

We’re Not Prepared for the End of Moore’s Law
David Rotman | MIT Technology Review
“Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.”

Inside the Race to Build the Best Quantum Computer on Earth
Gideon Lichfield | MIT Technology Review
“Regardless of whether you agree with Google’s position [on ‘quantum supremacy’] or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. …The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.”

The Secretive Company That Might End Privacy as We Know It
Kashmir Hill | The New York Times
“Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”

Wrongfully Accused by an Algorithm
Kashmir Hill | The New York Times
“Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.”

Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.
Will Douglas Heaven | MIT Technology Review
“A number of studies have shown that these tools perpetuate systemic racism, and yet we still know very little about how they work, who is using them, and for what purpose. All of this needs to change before a proper reckoning can take place. Luckily, the tide may be turning.”

The Panopticon Is Already Here
Ross Andersen | The Atlantic
“Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi [Jinping] also wants to use AI’s awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.”

The Case For Cities That Aren’t Dystopian Surveillance States
Cory Doctorow | The Guardian
“Imagine a human-centered smart city that knows everything it can about things. It knows how many seats are free on every bus, it knows how busy every road is, it knows where there are short-hire bikes available and where there are potholes. …What it doesn’t know is anything about individuals in the city.”

The Modern World Has Finally Become Too Complex for Any of Us to Understand
Tim Maughan | OneZero
“One of the dominant themes of the last few years is that nothing makes sense. …I am here to tell you that the reason so much of the world seems incomprehensible is that it is incomprehensible. From social media to the global economy to supply chains, our lives rest precariously on systems that have become so complex, and we have yielded so much of it to technologies and autonomous actors that no one totally comprehends it all.”

The Conscience of Silicon Valley
Zach Baron | GQ
“What I really hoped to do, I said, was to talk about the future and how to live in it. This year feels like a crossroads; I do not need to explain what I mean by this. …I want to destroy my computer, through which I now work and ‘have drinks’ and stare at blurry simulations of my parents sometimes; I want to kneel down and pray to it like a god. I want someone—I want Jaron Lanier—to tell me where we’re going, and whether it’s going to be okay when we get there. Lanier just nodded. All right, then.”

Yes to Tech Optimism. And Pessimism.
Shira Ovide | The New York Times
“Technology is not something that exists in a bubble; it is a phenomenon that changes how we live or how our world works in ways that help and hurt. That calls for more humility and bridges across the optimism-pessimism divide from people who make technology, those of us who write about it, government officials and the public. We need to think on the bright side. And we need to consider the horribles.”

How Afrofuturism Can Help the World Mend
C. Brandon Ogbunu | Wired
“…[W. E. B. DuBois’] ‘The Comet’ helped lay the foundation for a paradigm known as Afrofuturism. A century later, as a comet carrying disease and social unrest has upended the world, Afrofuturism may be more relevant than ever. Its vision can help guide us out of the rubble, and help us to consider universes of better alternatives.”

Wikipedia Is the Last Best Place on the Internet
Richard Cooke | Wired
“More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web.”

Can Genetic Engineering Bring Back the American Chestnut?
Gabriel Popkin | The New York Times Magazine
“The geneticists’ research forces conservationists to confront, in a new and sometimes discomfiting way, the prospect that repairing the natural world does not necessarily mean returning to an unblemished Eden. It may instead mean embracing a role that we’ve already assumed: engineers of everything, including nature.”

At the Limits of Thought
David C. Krakauer | Aeon
“A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory, and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers.”

Is the Internet Conscious? If It Were, How Would We Know?
Meghan O’Gieblyn | Wired
“Does the internet behave like a creature with an internal life? Does it manifest the fruits of consciousness? There are certainly moments when it seems to. Google can anticipate what you’re going to type before you fully articulate it to yourself. Facebook ads can intuit that a woman is pregnant before she tells her family and friends. It is easy, in such moments, to conclude that you’re in the presence of another mind—though given the human tendency to anthropomorphize, we should be wary of quick conclusions.”

The Internet Is an Amnesia Machine
Simon Pitt | OneZero
“There was a time when I didn’t know what a Baby Yoda was. Then there was a time I couldn’t go online without reading about Baby Yoda. And now, Baby Yoda is a distant, shrugging memory. Soon there will be a generation of people who missed the whole thing and for whom Baby Yoda is as meaningless as it was for me a year ago.”

Digital Pregnancy Tests Are Almost as Powerful as the Original IBM PC
Tom Warren | The Verge
“Each test, which costs less than $5, includes a processor, RAM, a button cell battery, and a tiny LCD screen to display the result. …Foone speculates that this device is ‘probably faster at number crunching and basic I/O than the CPU used in the original IBM PC.’ IBM’s original PC was based on Intel’s 8088 microprocessor, an 8-bit chip that operated at 5Mhz. The difference here is that this is a pregnancy test you pee on and then throw away.”

The Party Goes on in Massive Online Worlds
Cecilia D’Anastasio | Wired
“We’re more stand-outside types than the types to cast a flashy glamour spell and chat up the nearest cat girl. But, hey, it’s Final Fantasy XIV online, and where my body sat in New York, the epicenter of America’s Covid-19 outbreak, there certainly weren’t any parties.”

The Facebook Groups Where People Pretend the Pandemic Isn’t Happening
Kaitlyn Tiffany | The Atlantic
“Losing track of a friend in a packed bar or screaming to be heard over a live band is not something that’s happening much in the real world at the moment, but it happens all the time in the 2,100-person Facebook group ‘a group where we all pretend we’re in the same venue.’ So does losing shoes and Juul pods, and shouting matches over which bands are the saddest, and therefore the greatest.”

Did You Fly a Jetpack Over Los Angeles This Weekend? Because the FBI Is Looking for You
Tom McKay | Gizmodo
“Did you fly a jetpack over Los Angeles at approximately 3,000 feet on Sunday? Some kind of tiny helicopter? Maybe a lawn chair with balloons tied to it? If the answer to any of the above questions is ‘yes,’ you should probably lay low for a while (by which I mean cool it on the single-occupant flying machine). That’s because passing airline pilots spotted you, and now it’s this whole thing with the FBI and the Federal Aviation Administration, both of which are investigating.”

Image Credit: Thomas Kinto / Unsplash

Posted in Human Robots

#437924 How a Software Map of the Entire Planet ...

“3D map data is the scaffolding of the 21st century.”

–Edward Miller, Founder, Scape Technologies, UK

Covered in cameras, sensors, and a distinctly spaceship-looking laser system, Google’s autonomous vehicles were easy to spot when they first hit public roads in 2015. The key hardware ingredient is a spinning laser fixed to the roof, called lidar, which gives the car a pair of eyes to see the world. Lidar works by sending out beams of light and measuring the time it takes them to bounce off objects and return to the source. By timing the light’s journey, these depth-sensing systems construct fully 3D maps of their surroundings.
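The time-of-flight principle behind lidar reduces to simple arithmetic: the pulse travels to the object and back, so the depth is half the round-trip time multiplied by the speed of light. A minimal sketch (illustrative only, not any vendor’s API):

```python
# Time-of-flight depth sensing, as lidar uses it: a pulse's round-trip
# time yields the one-way distance to the reflecting surface.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Depth to a surface, given the light pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return detected 100 nanoseconds after emission puts the surface
# about 15 meters away.
depth_m = distance_from_round_trip(100e-9)
print(f"{depth_m:.2f} m")  # ≈ 14.99 m
```

Sweeping such pulses across millions of directions per second is what turns individual depth readings into the dense 3D point clouds described below.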

3D maps like these are essentially software copies of the real world. They will be crucial to the development of a wide range of emerging technologies including autonomous driving, drone delivery, robotics, and a fast-approaching future filled with augmented reality.

Like other rapidly improving technologies, lidar is moving quickly through its development cycle. What was once an expensive system on the roof of a well-funded research project is becoming cheaper, more capable, and readily available to consumers. Lidar is already available to early-adopting owners of the iPhone 12 Pro and may eventually come standard on most mobile devices.

Consumer lidar represents the inevitable shift from wealthy tech companies generating our world’s map data to a more scalable, crowd-sourced approach. To build the repository behind its Street View product, Google reportedly spent $1–2 billion sending cars across continents photographing every street. Compare that to a live-mapping service like Waze, which uses crowd-sourced data from its millions of users to report accurate, real-time traffic conditions. The two maps serve different functions, but the contrast is clear: one is static, expensive, and unchanging, while the other is dynamic, real-time, and constructed by users themselves.

Soon millions of people may be scanning everything from bedrooms to neighborhoods, resulting in 3D maps of significant quality. An online search for lidar room scans demonstrates just how richly textured these three-dimensional maps are compared to anything we’ve had before. With lidar and other depth-sensing systems, we now have the tools to create exact software copies of everywhere and everything on earth.

At some point, likely aided by crowdsourcing initiatives, these maps will become living, breathing, real-time representations of the world. Some refer to this idea as a “digital twin” of the planet. In a feature cover story, Kevin Kelly, the cofounder of Wired magazine, calls this concept the “mirrorworld,” a one-to-one software map of everything.

So why is that such a big deal? Take augmented reality as an example.

Of all the emerging industries dependent on such a map, none are more invested in seeing this concept emerge than those within the AR landscape. Apple, for example, is not-so-secretly developing a pair of AR glasses, which they hope will deliver a mainstream turning point for the technology.

For Apple’s AR devices to work as anticipated, they will require virtual maps of the world, a concept AR insiders call the “AR cloud,” which is synonymous with the “mirrorworld” concept. These maps will be two things. First, they will be a tool that creators use to place AR content in very specific locations; like a world canvas to paint on. Second, they will help AR devices both locate and understand the world around them so they can render content in a believable way.
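The first role, placing content at specific locations, can be pictured as a database of anchors, each pairing a precise world pose with the content creators pin to it. The sketch below is purely illustrative; the class and field names are hypothetical and not any vendor’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of an AR cloud anchor: a precise world pose plus
# the content a creator has pinned there. All names are illustrative.

@dataclass
class ARAnchor:
    latitude: float    # degrees
    longitude: float   # degrees
    altitude_m: float  # meters above sea level
    heading_deg: float # which way the content faces
    content_id: str    # what to render at this pose

# A creator "paints" a virtual store-hours panel onto a storefront.
sign = ARAnchor(
    latitude=40.7128,
    longitude=-74.0060,
    altitude_m=10.5,
    heading_deg=90.0,
    content_id="store-hours-panel",
)
```

In practice an anchor would also carry visual features of the surrounding scene, which is what lets a headset relocalize against the map precisely enough to render the panel on the right storefront.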

Imagine walking down a street and wanting to check the trading hours of a local business. Instead of pulling out your phone for a tedious online search, you conduct the equivalent of a visual Google search simply by gazing at the store. Though this is a trivial example, the AR cloud represents an entirely non-trivial new way of organizing the world’s information. Access to knowledge can shift away from the faraway monitors in our pockets to its relevant real-world location.

Ultimately this describes a blurring of physical and digital infrastructure. Our public and private spaces will thus be composed equally of both.

No example demonstrates this idea better than Pokémon Go. The game is straightforward enough; users capture virtual characters scattered around the real world. Today, the game relies on traditional GPS technology to place its characters, but GPS is accurate only to within a few meters of a location. For a car navigating on a highway or locating Pikachus in the world, that level of precision is sufficient. For drone deliveries, driverless cars, or placing a Pikachu in a specific location, say on a tree branch in a park, GPS isn’t accurate enough. As astonishing as it may seem, many experimental AR cloud concepts, even entirely mapped cities, are location specific down to the centimeter.
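To put that precision gap in numbers: one degree of latitude spans roughly 111 km, so the coordinate precision a map needs can be estimated with back-of-envelope arithmetic (an illustrative sketch, using the approximate meters-per-degree figure):

```python
# Back-of-envelope: how precise must a latitude coordinate be
# to hit a target positional accuracy?
# One degree of latitude spans roughly 111,320 meters.

METERS_PER_DEGREE_LAT = 111_320.0

def degrees_for_accuracy(target_m: float) -> float:
    """Coordinate precision (in degrees) needed for a target accuracy."""
    return target_m / METERS_PER_DEGREE_LAT

# Consumer GPS: accurate to a few meters.
gps_deg = degrees_for_accuracy(3.0)    # ~2.7e-5 degrees
# Centimeter-level AR cloud maps:
ar_deg = degrees_for_accuracy(0.01)    # ~9.0e-8 degrees

print(gps_deg / ar_deg)  # 300.0 — AR maps need ~300x tighter coordinates
```

That two-to-three-orders-of-magnitude gap is why centimeter-scale AR clouds rely on visual relocalization against 3D maps rather than on GPS alone.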

Niantic, the $4 billion publisher behind Pokémon Go, is aggressively developing a crowd-sourced approach to building better AR cloud maps by encouraging its users to scan the world for it. Its recent acquisition of 6D.ai, a mapping software company founded by Victor Prisacariu based on his work at the University of Oxford’s Active Vision Lab, indicates Niantic’s ambition to compete with the tech giants in this space.

With 6D.ai’s technology, Niantic is developing the in-house ability to generate their own 3D maps while gaining better semantic understanding of the world. By going beyond just knowing there’s a temporary collection of orange cones in a certain location, for example, the game may one day understand the meaning behind this; that a temporary construction zone means no Pokémon should spawn here to avoid drawing players to this location.

Niantic is not the only company working on this. Many of the big tech firms you would expect have entire teams focused on map data. Facebook, for example, recently acquired the UK-based Scape Technologies, a computer vision startup mapping entire cities with centimeter precision.

As our digital maps of the world improve, expect a relentless and justified discussion of privacy concerns as well. How will society react to the idea of a real-time 3D map of their bedroom living on a Facebook or Amazon server? Those horrified by the use of facial recognition AI being used in public spaces are unlikely to find comfort in the idea of a machine-readable world subject to infinite monitoring.

The ability to build high-precision maps of the world could reshape the way we engage with our planet and promises to be one of the biggest technology developments of the next decade. While these maps may stay hidden as behind-the-scenes infrastructure powering much flashier technologies that capture the world’s attention, they will soon prop up large portions of our technological future.

Keep that in mind when a car with no driver is sharing your road.

Image credit: sergio souza / Pexels


#437918 Video Friday: These Robots Wish You ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICCR 2020 – December 26-29, 2020 – [Online]
HRI 2021 – March 8-11, 2021 – [Online]
RoboSoft 2021 – April 12-16, 2021 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

Look who’s baaaack: Jibo! After being sold (twice?), this pioneering social home robot (it was first announced back in 2014!) now belongs to NTT Disruption, which was described to us as the “disruptive company of NTT Group.” We are all for disruption, so this looks like a great new home for Jibo.

[ NTT Disruption ]

Thanks Ana!

FZI's Christmas party was a bit of a challenge this year; good thing robots are totally competent to have a party on their own.

[ FZI ]

Thanks Arne!

Do you have a lonely dog that just wants a friend to watch cat videos on YouTube with? The Danish Technological Institute has a gift idea for you.

[ DTI ]

Thanks Samuel!

Once upon a time, not so far away, there was an elf who received a very special gift. Watch this heartwarming story. Happy Holidays from the Robotiq family to yours!

Of course, these elves are not now unemployed; they've instead moved over to toy design full time!

[ Robotiq ]

An elegant Christmas video from the Dynamic Systems Lab; make sure to watch through to the very end for a little extra cheer.

[ Dynamic Systems Lab ]

Thanks Angela!

Usually I complain when robotics companies make holiday videos without any real robots in them, but this is pretty darn cute from Yaskawa this year.

[ Yaskawa ]

Here's our little Christmas gift to the fans of strange dynamic behavior. The gyro will follow any given shape as soon as the tip touches its edge and the rotation is fast enough. The friction between tip and shape generates a tangential force, creating a moment such that the gyroscopic reaction pushes the tip towards the shape. The resulting normal force produces a moment that guides the tip along the shape's edge.

[ TUM ]

Happy Holidays from Fanuc!

Okay but why does there have to be an assembly line elf just to put in those little cranks?

[ Fanuc ]

Astrobotic's cute little CubeRover is at NASA busy not getting stuck in places.

[ Astrobotic ]

Team CoSTAR is sharing more of their work on subterranean robotic exploration.

[ CoSTAR ]

Skydio Autonomy Enterprise Foundation (AEF) is a new software product that delivers advanced AI-powered capabilities to assist the pilot during tactical situational awareness scenarios and detailed industrial asset inspections. Designed for professionals, it offers an enterprise-caliber flight experience through the new Skydio Enterprise application.

[ Skydio ]

GITAI's S1 autonomous robot will conduct two experiments: IVA (Intra-Vehicular Activity) tasks such as switch and cable operations, and assembly of structures and panels to demonstrate its capability for ISA (In-Space Assembly) tasks. This video was recorded in the Nanoracks Bishop Airlock mock-up facility @GITAI Tokyo office.

[ GITAI ]

It's no Atlas, but this is some impressive dynamic balancing from iCub.

[ IIT ]

The Campaign to Stop Killer Robots and I don't agree on a lot of things, and I don't agree with a lot of the assumptions made in this video, either. But, here you go!

[ CSKR ]

I don't know much about this robot, but I love it.

[ Columbia ]

Most cable-suspended robots have a very well defined workspace, but you can increase that workspace by swinging them around. Wheee!

[ Laval ]

How you know your robot's got some skill: “to evaluate the performance in climbing over the step, we compared the RL result to the results of 12 students who attempted to find the best planning. The RL outperformed all the group, in terms of effort and time, both in continuous (joystick) and partition planning.”

[ Zarrouk Lab ]

In the Spring 2021 semester, mechanical engineering students taking MIT class 2.007, Design and Manufacturing I, will be able to participate in the class’ iconic final robot competition from the comfort of their own home. Whether they take the class virtually or semi-virtually, students will be sent a massive kit of tools and materials to build their own unique robot along with a “Home Alone” inspired game board for the final global competition.

[ MIT ]

Well, this thing is still around!

[ Moley Robotics ]

Manuel Ahumada wrote in to share this robotic Baby Yoda that he put together with a little bit of help from Intel's OpenBot software.

[ YouTube ]

Thanks Manuel!

Here's what Zoox has been working on for the past half-decade.

[ Zoox ]


#437912 “Boston Dynamics Will Continue to ...

Last week’s announcement that Hyundai acquired Boston Dynamics from SoftBank left us with a lot of questions. We attempted to answer many of those questions ourselves, which is typically bad practice, but sometimes it’s the only option when news like that breaks.

Fortunately, yesterday we were able to speak with Michael Patrick Perry, vice president of business development at Boston Dynamics, who candidly answered our questions about Boston Dynamics’ new relationship with Hyundai and what the near future has in store.

IEEE Spectrum: Boston Dynamics is worth 1.1 billion dollars! Can you put that valuation into context for us?

Michael Patrick Perry: Since 2018, we’ve shifted to becoming a commercial organization. And that’s included a number of things, like taking our existing technology and bringing it to market for the first time. We’ve gone from zero to 400 Spot robots deployed, building out an ecosystem of software developers, sensor providers, and integrators. With that scale of deployment and looking at the pipeline of opportunities that we have lined up over the next year, I think people have started to believe that this isn’t just a one-off novelty—that there’s actual value that Spot is able to create. Secondly, with some of our efforts in the logistics market, we’re getting really strong signals both with our Pick product and also with some early discussions around Handle’s deployment in warehouses, which we think are going to be transformational for that industry.

So, the thing that’s really exciting is that two years ago, we were talking about this vision, and people said, “Wow, that sounds really cool, let’s see how you do.” And now we have the validation from the market saying both that this is actually useful, and that we’re able to execute. And that’s where I think we’re starting to see belief in the long-term viability of Boston Dynamics, not just as a cutting-edge research shop, but also as a business.

Photo: Boston Dynamics

Boston Dynamics says it has deployed 400 Spot robots, building out an “ecosystem of software developers, sensor providers, and integrators.”

How would you describe Hyundai’s overall vision for the future of robotics, and how do they want Boston Dynamics to fit into that vision?

In the immediate term, Hyundai’s focus is to continue our existing trajectories, with Spot, Handle, and Atlas. They believe in the work that we’ve done so far, and we think that combining with a partner that understands many of the industries we’re targeting, whether it’s manufacturing, construction, or logistics, can help us improve our products. And obviously as we start thinking about producing these robots at scale, Hyundai’s expertise in manufacturing is going to be really helpful for us.

Looking down the line, both Boston Dynamics and Hyundai believe in the value of smart mobility, and they’ve made a number of plays in that space. Whether it’s urban air mobility or autonomous driving, they’ve been really thinking about connecting the digital and the physical world through moving systems, whether that’s a car, a vertical takeoff and landing multi-rotor vehicle, or a robot. We are well positioned to take on the robotics side of that while also connecting to some of these other autonomous services.

Can you tell us anything about the kind of robotics that the Hyundai Motor Group has going on right now?

So they’re working on a lot of really interesting stuff—exactly how that connects, you know, it’s early days, and we don’t have anything explicitly to share. But they’ve got a smart and talented robotics team that’s working in a variety of directions that shares overlap with us. Obviously, a lot of things related to autonomous driving share some DNA with the work that we’re doing in autonomy for Spot and Handle, so it’s pretty exciting to see.

What are you most excited about here? How do you think this deal will benefit Boston Dynamics?

I think there are a number of things. One is that they have an expertise in hardware, in a way that’s unique. They understand and appreciate the complexity of creating large complex robotic systems. So I think there’s some shared understanding of what it takes to create a great hardware product. And then also they have the resources to help us actually build those products with them together—they have manufacturing resources and things like that.

“Robotics isn’t a short term game. We’ve scaled pretty rapidly but if you start looking at what the full potential of a company like Boston Dynamics is, it’s going to take years to realize, and I think Hyundai is committed to that long-term vision”

Another thing that’s exciting is that Hyundai has some pretty visionary bets for autonomous driving and unmanned aerial systems, and all of that fits very neatly into the connected vision of robotics that we were talking about before. Robotics isn’t a short term game. We’ve scaled pretty rapidly for a robotics company in terms of the scale of robots we’ve able to deploy in the field, but if you start looking at what the full potential of a company like Boston Dynamics is, it’s going to take years to realize, and I think Hyundai is committed to that long-term vision.

And when you’ve been talking with Hyundai, what are they most excited about?

I think they’re really excited about our existing products and our technology. Looking at some of the things that Spot, Pick, and Handle are able to do now, there are applications that many of Hyundai’s customers could benefit from in terms of mobility, remote sensing, and material handling. Looking down the line, Hyundai is also very interested in smart city technology, and mobile robotics is going to be a core piece of that.

We tend to focus on Spot and Handle and Atlas in terms of platform capabilities, but can you talk a bit about some of the component-level technology that’s unique to Boston Dynamics, and that could be of interest to Hyundai?

Creating very power-dense actuator design is something that we’ve been successful at for several years, starting back with BigDog and LS3. And Handle has some hydraulic actuators and valves that are pretty unique in terms of their design and capability. Fundamentally, we have a systems engineering approach that brings together both hardware and software internally. You’ll often see different groups that specialize in something, like great mechanical or electrical engineering groups, or great controls teams, but what I think makes Boston Dynamics so special is that we’re able to put everything on the table at once to create a system that’s incredibly capable. And that’s why with something like Spot, we’re able to produce it at scale, while also making it flexible enough for all the different applications that the robot is being used for right now.

It’s hard to talk specifics right now, but there are obviously other disciplines within mechanical engineering or electrical engineering or controls for robots or autonomous systems where some of our technology could be applied.

Photo: Boston Dynamics

Boston Dynamics is in the process of commercializing Handle, iterating on its design and planning to get box-moving robots on-site with customers in the next year or two.

While Boston Dynamics was part of Google, and then SoftBank, it seems like there’s been an effort to maintain independence. Is it going to be different with Hyundai? Will there be more direct integration or collaboration?

Obviously it’s early days, but right now, we have support to continue executing against all the plans that we have. That includes all the commercialization of Spot, as well as things for Atlas, which is really going to be pushing the capability of our team to expand into new areas. That’s going to be our immediate focus, and we don’t see anything that’s going to pull us away from that core focus in the near term.

As it stands right now, Boston Dynamics will continue to be Boston Dynamics under this new ownership.

How much of what you do at Boston Dynamics right now would you characterize as fundamental robotics research, and how much is commercialization? And how do you see that changing over the next couple of years?

We have been expanding our commercial team, but we certainly keep a lot of the core capabilities of fundamental robotics research. Some of it is very visible, like the new behavior development for Atlas where we’re pushing the limits of perception and path planning. But a lot of the stuff that we’re working on is a little bit under the hood, things that are less obvious—terrain handling, intervention handling, how to make safe faults, for example. Initially when Spot started slipping on things, it would flail around trying to get back up. We’ve had to figure out the right balance between the robot struggling to stand, and when it should decide to just lock its limbs and fall over because it’s safer to do that.

I’d say the other big thrust for us is manipulation. Our gripper for Spot is coming out early next year, and that’s going to unlock a new set of capabilities for us. We have years and years of locomotion experience, but the ability to manipulate is a space that’s still relatively new to us. So we’ve been ramping up a lot of work over the last several years trying to get to an early but still valuable iteration of the technology, and we’ll continue pushing on that as we start learning what’s most useful to our customers.

“I’d say the other big thrust for us is manipulation. Our gripper for Spot is coming out early next year, and that’s going to unlock a new set of capabilities for us. We have years and years of locomotion experience, but the ability to manipulate is a space that’s still relatively new to us”

Looking back, Spot as a commercial robot has a history that goes back to robots like LS3 and BigDog, which were very ambitious projects funded by agencies like DARPA without much in the way of commercial expectations. Do you think these very early stage, very expensive, very technical projects are still things that Boston Dynamics can take on?

Yes—I would point to a lot of the things we do with Atlas as an example of that. While we don’t have immediate plans to commercialize Atlas, we can point to technologies that come out of Atlas that have enabled some of our commercial efforts over time. There’s not necessarily a clear roadmap of how every piece of Atlas research is going to feed over into a commercial product; it’s more like, this is a really hard fundamental robotics challenge, so let’s tackle it and learn things that we can then benefit from across the company.

And fundamentally, our team loves doing cool stuff with robots, and you’ll continue seeing that in the months to come.

Photo: Boston Dynamics

Spot’s arm with gripper is coming out early next year, and Boston Dynamics says that’s going to “unlock a new set of capabilities for us.”

What would it take to commercialize Atlas? And are you getting closer with Handle?

We’re in the process of commercializing Handle. We’re at a relatively early stage, but we have a plan to get the first versions for box moving on-site with customers in the next year or two. Last year, we did some on-site deployments as proof-of-concept trials, and using the feedback from that, we did a new design pass on the robot, and we’re looking at increasing our manufacturing capability. That’s all in progress.

For Atlas, it’s like the Formula 1 of robots—you’re not going to take a Formula 1 car and try to make it less capable so that you can drive it on the road. We’re still trying to see what are some applications that would necessitate an energy and computationally intensive humanoid robot as opposed to something that’s more inherently stable. Trying to understand that application space is something that we’re interested in, and then down the line, we could look at creating new morphologies to help address specific applications. In many ways, Handle is the first version of that, where we said, “Atlas is good at moving boxes but it’s very complicated and expensive, so let’s create a simpler and smaller design that can achieve some of the same things.”

The press release mentioned a mobile robot for warehouses that will be introduced next year—is that Handle?

Yes, that’s the work that we’re doing on Handle.

As we start thinking about a whole robotic solution for the warehouse, we have to look beyond a high power, low footprint, dynamic platform like Handle and also consider things that are a little less exciting on video. We need a vision system that can look at a messy stack of boxes and figure out how to pick them up, we need an interface between a robot and an order building system—things where people might question why Boston Dynamics is focusing on them because it doesn’t fit in with our crazy backflipping robots, but it’s really incumbent on us to create that full end-to-end solution.

Are you confident that under Hyundai’s ownership, Boston Dynamics will be able to continue taking the risks required to remain on the cutting edge of robotics?

I think we will continue to push the envelope of what robots are capable of, and I think in the near term, you’ll be able to see that realized in our products and the research that we’re pushing forward with. 2021 is going to be a great year for us.

Posted in Human Robots

#437905 New Deep Learning Method Helps Robots ...

One of the biggest things standing in the way of the robot revolution is robots’ inability to adapt. That may be about to change, though, thanks to a new approach that blends pre-learned skills on the fly to tackle new challenges.

Put a robot in a tightly-controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curve ball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.

The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.

Rapid advances in AI have provided a potential workaround by letting robots learn how to carry out tasks instead of relying on hand-coded instructions. A particularly promising approach is deep reinforcement learning, where the robot interacts with its environment through a process of trial-and-error and is rewarded for carrying out the correct actions. Over many repetitions it can use this feedback to learn how to accomplish the task at hand.
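The trial-and-error loop described above can be sketched with tabular Q-learning on a toy problem. Everything here is a hypothetical illustration—the environment (a five-state corridor with a reward at one end), the learning parameters, and the function names are all stand-ins for the far larger-scale learning a real robot would do:

```python
import random

# Toy "corridor" environment: states 0..4, goal at state 4.
# Actions: 0 = left, 1 = right. Reward 1.0 only on reaching the goal.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # action-value table
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Trial and error: mostly exploit what was learned,
            # occasionally explore a random action.
            if rng.random() < EPSILON:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Reward feedback nudges the value estimate for this action.
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q_table = train()
```

After many repetitions, “move right” scores higher than “move left” in every non-goal state—the agent has learned the task purely from reward feedback, with no hand-coded instructions.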

But the approach requires huge amounts of data to solve even simple tasks. And most of the things we would want a robot to do actually consist of many smaller tasks—for instance, delivering a parcel involves learning how to pick an object up, how to walk, how to navigate, and how to pass an object to someone else, among other things.

Training all these sub-tasks simultaneously is hugely complex and far beyond the capabilities of most current AI systems, so experiments to date have largely focused on narrow skills. Some have tried to train AI on multiple skills separately and then use an overarching system to flip between these expert sub-systems, but these approaches still can’t adapt to completely new challenges.

Building off this research, though, scientists have now created a new AI system that can blend together expert sub-systems specialized for a specific task. In a paper in Science Robotics, they explain how this allows a four-legged robot to improvise new skills and adapt to unfamiliar challenges in real time.

The technique, dubbed multi-expert learning architecture (MELA), relies on a two-stage training approach. First the researchers used a computer simulation to train two neural networks to carry out two separate tasks: trotting and recovering from a fall.

They then used the models these two networks learned as seeds for eight other neural networks specialized for more specific motor skills, like rolling over or turning left or right. The eight “expert networks” were trained simultaneously along with a “gating network,” which learns how to combine these experts to solve challenges.

Because the gating network synthesizes the expert networks rather than switching them on sequentially, MELA is able to come up with blends of different experts that allow it to tackle problems none could solve alone.
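The blending idea can be illustrated with a minimal sketch: a gating function produces softmax weights that mix several expert action vectors into a single command. The “experts” and gate logits below are hypothetical stand-ins, not the paper’s trained networks:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of gate logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def blend(expert_outputs, gate_logits):
    """Weighted sum of expert action vectors using softmax gate weights."""
    weights = softmax(gate_logits)
    n_dims = len(expert_outputs[0])
    return [
        sum(w * expert[d] for w, expert in zip(weights, expert_outputs))
        for d in range(n_dims)
    ]

# Two toy "experts" for a 2-D action: one trots forward, one turns left.
trot = [1.0, 0.0]
turn_left = [0.0, 1.0]

# Equal gate logits blend the behaviors rather than picking one,
# yielding an action neither expert would emit alone.
action = blend([trot, turn_left], [0.0, 0.0])
# action == [0.5, 0.5]
```

Because the output is a continuous mixture rather than a hard switch, intermediate behaviors—a turning trot, say—fall out naturally, which is the intuition behind blending experts instead of toggling between them.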

The authors liken the approach to training people in how to play soccer. You start out by getting them to do drills on individual skills like dribbling, passing, or shooting. Once they’ve mastered those, they can then intelligently combine them to deal with more dynamic situations in a real game.

After training the algorithm in simulation, the researchers uploaded it to a four-legged robot and subjected it to a battery of tests, both indoors and outdoors. The robot was able to adapt quickly to tricky surfaces like gravel or pebbles, and could quickly recover from being repeatedly pushed over before continuing on its way.

There’s still some way to go before the approach could be adapted for real-world commercially useful robots. For a start, MELA currently isn’t able to integrate visual perception or a sense of touch; it simply relies on feedback from the robot’s joints to tell it what’s going on around it. The more tasks you ask the robot to master, the more complex and time-consuming the training will get.

Nonetheless, the new approach points towards a promising way to make multi-skilled robots become more than the sum of their parts. As much fun as it is, it seems like laughing at compilations of clumsy robots may soon be a thing of the past.

Image Credit: Yang et al., Science Robotics

Posted in Human Robots