
#437596 IROS Robotics Conference Is Online Now ...

The 2020 International Conference on Intelligent Robots and Systems (IROS) was originally going to be held in Las Vegas this week. Like ICRA last spring, IROS has transitioned to a completely online conference, which is wonderful news: Now everyone everywhere can participate in IROS without having to spend a dime on travel.

IROS officially opened yesterday, and the best news is that registration is entirely free! We’ll take a quick look at what IROS has on offer this year, which includes some stuff that’s brand new to IROS.

Registration for IROS is super easy, and did we mention that it’s free? To register, just go here and fill out a quick and easy form. You don’t even have to be an IEEE Member or anything like that, although in our unbiased opinion, an IEEE membership is well worth it. Once you get the confirmation email, go to https://www.iros2020.org/ondemand/, put in the email address you used to register, and that’s it, you’ve got IROS!

Here are some highlights:

Plenaries and Keynotes
Without the normal space and time constraints, you won’t have to pick and choose between any of the three plenaries or 10 keynotes. Some of them are fancier than others, but we’re used to that sort of thing by now. It’s worth noting that all three plenaries (and three of the 10 keynotes) are given by extraordinarily talented women, which is excellent to see.

Technical Tracks
There are over 1,400 technical talks, divided up into 12 categories of 20 sessions each. Note that each of the 12 categories that you see on the main page can be scrolled through to show all 20 of the sessions; if there’s a bright red arrow pointing left or right you can scroll, and if the arrow is transparent, you’ve reached the end.

On the session page, you’ll see an autoplaying advertisement (that you can mute but not stop), below which each talk has a preview slide, a link to a ~15 minute presentation video, and another link to a PDF of the paper. No supplementary videos are available, which is a bit disappointing. While you can leave a comment on the video, there’s no way of interacting with the author(s) directly through the IROS site, so you’ll have to check the paper for an email address if you want to ask a question.

Award Finalists
IROS has thoughtfully grouped all of the paper award finalists together into nine sessions. These are some truly outstanding papers, and the sessions are worth watching even if you’re not interested in the specific subject matter.

Workshops and Tutorials
This stuff is a little more affected by the asynchronous, on-demand format, and some of the workshops and tutorials have already taken place. But IROS has done a good job of collecting videos of everything and making them easy to access, and the dedicated websites for the workshops and tutorials themselves sometimes have more detailed info. If you’re having trouble finding the workshops and tutorials section, try the “Entrance” drop-down menu up at the top.

IROS Original Series
In place of social events and lab tours, IROS this year has come up with the “IROS Original Series,” which “hosts unique content that would be difficult to see at in-person events.” Right now, there are some interviews with a diverse group of interesting roboticists, and hopefully more will show up later on.

Enjoy!
Everything on the IROS On-Demand site should be available for at least the next month, so there’s no need to try to watch a thousand presentations over three days (which is what we normally have to do). Relax, and enjoy browsing all the options. Additional content will be made available over the next several weeks, so check back often to see what’s new.

[ IROS 2020 ]


#437543 This Is How We’ll Engineer Artificial ...

Take a Jeopardy! guess: this body part was once referred to as the “consummation of all perfection as an instrument.”

Answer: “What is the human hand?”

Our hands are insanely complex feats of evolutionary engineering. Densely packed sensors provide an intricate, ultra-sensitive sense of touch. Dozens of joints synergize to give us remarkable dexterity. And a “sixth sense” awareness of where our hands are in space connects them to the mind, making it possible to open a door, pick up a mug, and pour coffee in total darkness based solely on what they feel.

So why can’t robots do the same?

In a new article in Science, Dr. Subramanian Sundaram of Boston University and Harvard argues that it’s high time to rethink robotic touch. Scientists have long dreamed of artificially engineering robotic hands with the same dexterity and feedback that we have. Now, after decades, we’re on the cusp of a breakthrough thanks to two major advances. One, we better understand how touch works in humans. Two, we have the mega computational powerhouse called machine learning to recapitulate biology in silicon.

Robotic hands with a sense of touch—and the AI brains to match—could overhaul our idea of robots. Rather than charming, if somewhat clumsy, novelties, robots equipped with human-like hands would be far more capable of routine tasks—making food, folding laundry—and specialized missions like surgery or rescue. But machines aren’t the only ones to gain. For humans, robotic prosthetic hands equipped with accurate, sensitive, high-resolution artificial touch are the next giant breakthrough in seamlessly linking a biological brain to a mechanical hand.

Here’s what Sundaram laid out to get us to that future.

How Does Touch Work, Anyway?
Let me start with some bad news: reverse engineering the human hand is really hard. It’s jam-packed with over 17,000 sensors tuned to mechanical forces alone, not to mention sensors for temperature and pain. These force “receptors” rely on physical distortions—bending, stretching, curling—to signal to the brain.

The good news? We now have a far clearer picture of how biological touch works. Imagine a coin pressed into your palm. The sensors embedded in the skin, called mechanoreceptors, capture that pressure and “translate” it into electrical signals. These signals pulse through the nerves of your hand to the spine, and eventually make their way to the brain, where they get interpreted as “touch.”

At least, that’s the simple version, and it’s too vague to be particularly useful for recapitulating touch. To get there, we need to zoom in.

The cells on your hand that collect touch signals, called tactile “first order” neurons (insert Star Wars joke here), are like upside-down trees. Intricate branches extend from their bodies, buried deep in the skin, across a vast area of the hand. Each neuron manages its own little domain, called a “receptive field,” although some fields overlap. Like governors, these neurons oversee a semi-dedicated region, so any signal they transfer to the higher-ups—the spinal cord and brain—is actually integrated from multiple sensors across a large distance.

It gets more intricate. The skin itself is a living entity that can regulate its own mechanical senses through hydration. Sweat, for example, softens the skin, which changes how it interacts with surrounding objects. Ever tried putting a glove onto a sweaty hand? It’s far more of a struggle than with a dry one, and it feels different.

In a way, the hand’s tactile neurons play a game of Morse code. Through different frequencies of electrical beeps, they transfer information about an object’s size, texture, weight, and other properties, while also asking the brain for feedback to better control the object.
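As a toy illustration of this frequency-based signaling, here is a minimal sketch of rate coding: a pressure value becomes a spike train whose firing rate scales with the applied force. This is not anyone’s published model of a mechanoreceptor, just an illustration, and every number in it is invented.

```python
import numpy as np

def rate_code(pressure, max_rate_hz=200.0, duration_s=1.0, dt=0.001, seed=0):
    """Encode a normalized pressure (0..1) as a Poisson spike train.

    The firing rate scales linearly with pressure, loosely mimicking how a
    mechanoreceptor's spike frequency tracks stimulus intensity.
    """
    rng = np.random.default_rng(seed)
    rate_hz = np.clip(pressure, 0.0, 1.0) * max_rate_hz  # spikes per second
    n_bins = int(duration_s / dt)
    # A spike occurs in each 1 ms bin with probability rate * dt.
    return rng.random(n_bins) < rate_hz * dt

light_touch = rate_code(0.1)  # ~20 spikes over one second
firm_grip = rate_code(0.9)    # ~180 spikes over one second
print(light_touch.sum(), firm_grip.sum())
```

The brain, in effect, decodes these rates (and their timing) back into force, texture, and shape.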

Biology to Machine
Reworking all of our hands’ greatest features into machines is absolutely daunting. But robots have a leg up—they’re not restricted to biological hardware. Earlier this year, for example, a team from Columbia engineered a “feeling” robotic finger using overlapping light emitters and sensors in a way loosely similar to receptive fields. Distortions in light were then analyzed with deep learning and translated into contact location and force.
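The recipe underneath (use a neural network to regress contact location and force from raw sensor readings) is easy to sketch. Here is a minimal, hypothetical version in PyTorch. To be clear, this is not the Columbia team’s code: the sensor count, network size, and random stand-in data are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical setup: 32 photodiode readings per sample; each label is
# (contact_x, contact_y, force). A real system would train on measured
# calibration data, not the random stand-ins used here.
n_sensors, n_samples = 32, 4096
readings = torch.rand(n_samples, n_sensors)
labels = torch.rand(n_samples, 3)

model = nn.Sequential(
    nn.Linear(n_sensors, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),  # -> (x, y, force)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    loss = loss_fn(model(readings), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, a single frame of sensor readings maps to a touch estimate.
x, y, force = model(torch.rand(1, n_sensors))[0]
```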

Although a radical departure from our own electrical-based system, the Columbia team’s attempt was clearly based on human biology. They’re not alone. “Substantial progress is being made in the creation of soft, stretchable electronic skins,” said Sundaram. Many of these can sense forces or pressure, although they’re still limited for now.

What’s promising, however, is the “exciting progress in using visual data,” said Sundaram. Computer vision has gained enormously from ubiquitous cameras and large datasets, making it possible to train powerful but data-hungry algorithms such as deep convolutional neural networks (CNNs).

By piggybacking on their success, we can essentially add “eyes” to robotic hands, a superpower we humans can only imagine. Even better, CNNs and other classes of algorithms can be readily adapted to process tactile data. Together, a robotic hand could use its eyes to scan an object, plan its grasp, and use touch for feedback to adjust its grip. Maybe we’ll finally have a robot that easily rescues the phone sadly dropped into a composting toilet. Or something much grander to benefit humanity.
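To make the tactile half concrete, here is a hedged sketch of that idea: treat a grid of pressure sensors as a one-channel image and feed it to a small CNN, the same family of algorithms that dominates computer vision. The 16×16 grid, the eight object classes, and the random input frame are all invented for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical tactile "image": a 16x16 grid of pressure readings,
# classified into one of 8 grasped-object categories.
tactile_frame = torch.rand(1, 1, 16, 16)  # batch, channel, height, width

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 16x16 -> 8x8
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 8),        # logits over 8 object classes
)

logits = model(tactile_frame)
predicted_class = logits.argmax(dim=1)
```

The same trick generalizes: anything you can lay out as a spatial map of sensor values is fair game for vision-style networks.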

That said, relying too heavily on vision could also be a downfall. Take a robot that scans a wide area of rubble for signs of life during a disaster response. If touch relies on sight, the robot would have to keep a continuous line of sight in a complex and dynamic setting—something computer vision doesn’t yet handle well.

A Neuromorphic Way Forward
Too Debbie Downer? I got your back! It’s hard to overstate the challenges, but what’s clear is that emerging machine learning tools can tackle data processing challenges. For vision, it’s distilling complex images into “actionable control policies,” said Sundaram. For touch, it’s easy to imagine the same. Couple the two together, and that’s a robotic super-hand in the making.

Going forward, argues Sundaram, we need to closely adhere to how the hand and brain process touch. Hijacking our biological “touch machinery” has already proved useful. In 2019, one team used a nerve-machine interface to let amputees control a robotic arm—the DEKA LUKE arm—and sense what the limb and attached hand were feeling. Pressure on the LUKE arm and hand activated an implanted neural interface, which zapped the remaining nerves in a way the brain processes as touch. When the system processed pressure data in a way similar to biological tactile neurons, the person was better able to identify different objects with their eyes closed.

“Neuromorphic tactile hardware (and software) advances will strongly influence the future of bionic prostheses—a compelling application of robotic hands,” said Sundaram, adding that the next step is to increase the density of sensors.

Two additional themes made the list for progressing toward a cyborg future. One is longevity: sensors on a robot need to reliably produce large quantities of high-quality data over long stretches of time—something that sounds mundane but is a real practical limitation.

The other is going all-in-one. Rather than just a pressure sensor, we need something that captures the myriad touch sensations, from feather-light contact to a heavy punch, and from vibration to temperature. A tree-like architecture similar to that of our hands would help organize, integrate, and process the data collected from all those sensors.

Just a decade ago, mind-controlled robotics were considered a blue sky, stretch-goal neurotechnological fantasy. We now have a chance to “close the loop,” from thought to movement to touch and back to thought, and make some badass robots along the way.

Image Credit: PublicDomainPictures from Pixabay


#437460 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
A Radical New Technique Lets AI Learn With Practically No Data
Karen Hao | MIT Technology Review
“Shown photos of a horse and a rhino, and told a unicorn is something in between, [children] can recognize the mythical creature in a picture book the first time they see it. …Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call ‘less than one’-shot, or LO-shot, learning.”

FUTURE
Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

HEALTH
The Race for a Super-Antibody Against the Coronavirus
Apoorva Mandavilli | The New York Times
“Dozens of companies and academic groups are racing to develop antibody therapies. …But some scientists are betting on a dark horse: Prometheus, a ragtag group of scientists who are months behind in the competition—and yet may ultimately deliver the most powerful antibody.”

SPACE
How to Build a Spacecraft to Save the World
Daniel Oberhaus | Wired
“The goal of the Double Asteroid Redirection Test, or DART, is to slam the [spacecraft] into a small asteroid orbiting a larger asteroid 7 million miles from Earth. …It should be able to change the asteroid’s orbit just enough to be detectable from Earth, demonstrating that this kind of strike could nudge an oncoming threat out of Earth’s way. Beyond that, everything is just an educated guess, which is exactly why NASA needs to punch an asteroid with a robot.”

TRANSPORTATION
Inside Gravity’s Daring Mission to Make Jetpacks a Reality
Oliver Franklin-Wallis | Wired
“The first time someone flies a jetpack, a curious thing happens: just as their body leaves the ground, their legs start to flail. …It’s as if the vestibular system can’t quite believe what’s happening. This isn’t natural. Then suddenly, thrust exceeds weight, and—they’re aloft. …It’s that moment, lift-off, that has given jetpacks an enduring appeal for over a century.”

FUTURE OF FOOD
Inside Singapore’s Huge Bet on Vertical Farming
Megan Tatum | MIT Technology Review
“…to cram all [of Singapore’s] gleaming towers and nearly 6 million people into a land mass half the size of Los Angeles, it has sacrificed many things, including food production. Farms make up no more than 1% of its total land (in the United States it’s 40%), forcing the small city-state to shell out around $10 billion each year importing 90% of its food. Here was an example of technology that could change all that.”

COMPUTING
The Effort to Build the Mathematical Library of the Future
Kevin Hartnett | Quanta
“Digitizing mathematics is a longtime dream. The expected benefits range from the mundane—computers grading students’ homework—to the transcendent: using artificial intelligence to discover new mathematics and find new solutions to old problems.”

Image credit: Kevin Mueller / Unsplash


#437182 MIT’s Tiny New Brain Chip Aims for AI ...

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.

In a recent paper in Nature Nanotechnology, a team of MIT scientists say their tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s Shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT, said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”

A Brain in Your Pocket
Whereas the computers in our phones and laptops use separate digital components for processing and memory—and therefore need to shuttle information between the two—the MIT chip uses analog components called memristors that process and store information in the same place. This is similar to the way the brain works and makes memristors far more efficient. To date, however, they’ve struggled with reliability and scalability.
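The physics behind that efficiency is worth a quick sketch. A crossbar of memristors computes a matrix-vector product in place: input voltages drive the rows, the programmed conductances act as stored weights, and (by Ohm’s law plus Kirchhoff’s current law) the current summed on each column wire is the dot product of the voltages and that column’s conductances. Below is a minimal numerical illustration of the principle; the array size and values are arbitrary, not MIT’s design.

```python
import numpy as np

# A 4x3 crossbar: each crosspoint's conductance G[i, j] (in siemens)
# is a stored "weight," programmed once and then used in place.
G = np.array([
    [1.0, 0.5, 0.2],
    [0.3, 1.2, 0.7],
    [0.9, 0.1, 0.4],
    [0.2, 0.8, 1.1],
]) * 1e-3

V = np.array([0.1, 0.2, 0.0, 0.3])  # input voltages on the 4 rows

# Ohm's law at each crosspoint plus Kirchhoff's current law on each
# column wire gives I_j = sum_i V_i * G[i, j]: a matrix-vector
# multiply performed by the physics of the array itself.
I = V @ G
print(I)  # column currents, i.e., the outputs of one analog step
```

No data shuttles between a memory bank and a processor; the multiply-accumulate happens where the weights live, which is exactly the efficiency argument for in-memory computing.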

To overcome those reliability and scalability challenges, the MIT team designed a new kind of silicon-based, alloyed memristor. Ions flowing in memristors made from unalloyed materials tend to scatter as the components get smaller, meaning the signal loses fidelity and the resulting computations are less reliable. The team found that an alloy of silver and copper helped stabilize the flow of silver ions between electrodes, allowing them to scale up the number of memristors on the chip without sacrificing functionality.

While MIT’s new chip is promising, there’s likely a ways to go before memristor-based neuromorphic chips go mainstream. Between now and then, engineers like Kim have their work cut out for them to further scale and demonstrate their designs. But if successful, they could make for smarter smartphones and other even smaller devices.

“We would like to develop this technology further to have larger-scale arrays to do image recognition tasks,” Kim said. “And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”

Special Chips for AI
The MIT work is part of a larger trend in computing and machine learning. As progress in classical chips has flagged in recent years, there’s been an increasing focus on more efficient software and specialized chips to continue pushing the pace.

Neuromorphic chips, for example, aren’t new. IBM and Intel are developing their own designs. So far, their chips have been based on groups of standard computing components, such as transistors (as opposed to memristors), arranged to imitate neurons in the brain. These chips are, however, still in the research phase.

Graphics processing units (GPUs)—chips originally developed for graphics-heavy work like video games—are the best-known practical example of specialized hardware for AI, and they powered much of the early progress in this generation of machine learning. In the years since, Google, NVIDIA, and others have developed even more specialized chips that cater specifically to machine learning.

The gains from such specialized chips are already being felt.

In a recent cost analysis of machine learning, research and investment firm ARK Invest said cost declines have far outpaced Moore’s Law. In a particular example, they found the cost to train an image recognition algorithm (ResNet-50) went from around $1,000 in 2017 to roughly $10 in 2019. The fall in cost to actually run such an algorithm was even more dramatic. It took $10,000 to classify a billion images in 2017 and just $0.03 in 2019.
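Those figures imply startling annualized rates. As a quick back-of-the-envelope check, using only the numbers quoted above, training costs fell roughly 10x per year and inference costs several hundred times per year, against the roughly 1.4x-per-year pace of Moore’s law.

```python
# Back-of-the-envelope check of the ARK figures quoted above.
train_2017, train_2019 = 1_000.0, 10.0   # $ to train ResNet-50
infer_2017, infer_2019 = 10_000.0, 0.03  # $ to classify a billion images
years = 2

train_annual = (train_2017 / train_2019) ** (1 / years)  # ~10x per year
infer_annual = (infer_2017 / infer_2019) ** (1 / years)  # ~577x per year
moore_annual = 2 ** (1 / 2)  # doubling every ~2 years -> ~1.41x per year

print(f"training cost fell ~{train_annual:.0f}x per year")
print(f"inference cost fell ~{infer_annual:.0f}x per year")
print(f"Moore's law pace: ~{moore_annual:.2f}x per year")
```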

Some of these declines can be traced to better software, but according to ARK, specialized chips have improved performance by nearly 16 times in the last three years.

As neuromorphic chips—and other tailored designs—advance further in the years to come, these trends in cost and performance may continue. Eventually, if all goes to plan, we might all carry a pocket brain that can do the work of today’s best AI.

Image credit: Peng Lin


#437145 3 Major Materials Science ...

Few recognize the vast implications of materials science.

To build today’s smartphone in the 1980s, it would have cost about $110 million, required nearly 200 kilowatts of power (versus the roughly 2 kilowatt-hours of energy a phone sips per year today), and the device would have been 14 meters tall, according to Applied Materials CTO Omkaram Nalamasu.

That’s the power of materials advances. Materials science has democratized smartphones, bringing the technology to the pockets of over 3.5 billion people. But far beyond devices and circuitry, materials science stands at the center of innumerable breakthroughs across energy, future cities, transit, and medicine. And on the front lines of the Covid-19 fight, materials scientists are forging ahead with biomaterials, nanotechnology, and other materials research to accelerate a solution.

As the name suggests, materials science is the branch devoted to the discovery and development of new materials. It’s an outgrowth of both physics and chemistry, using the periodic table as its grocery store and the laws of physics as its cookbook.

And today, we are in the middle of a materials science revolution. In this article, we’ll unpack the most important materials advancements happening now.

Let’s dive in.

The Materials Genome Initiative
In June 2011 at Carnegie Mellon University, President Obama announced the Materials Genome Initiative, a nationwide effort to use open-source methods and AI to double the pace of innovation in materials science. Obama felt this acceleration was critical to the US’s global competitiveness, and that it held the key to solving significant challenges in clean energy, national security, and human welfare. And it worked.

By using AI to map the hundreds of millions of different possible combinations of elements—hydrogen, boron, lithium, carbon, etc.—the initiative created an enormous database that allows scientists to play a kind of improv jazz with the periodic table.
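To get a feel for the size of that search space, consider just the raw combinatorics of choosing which elements go into a compound, before even picking stoichiometries or crystal structures. A quick sketch:

```python
from math import comb

# Roughly 100 practically usable elements in the periodic table.
# Counting only which elements appear in a compound (ignoring their
# ratios and crystal structures), the space explodes combinatorially.
n_elements = 100
for k in range(2, 7):
    print(f"{k}-element systems: {comb(n_elements, k):,}")

# 2-element systems: 4,950
# 3-element systems: 161,700
# ...
# 6-element systems: 1,192,052,400
```

Layer in composition ratios and processing conditions, and the space grows far beyond what any lab could brute-force, which is exactly why AI-guided search matters.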

This new map of the physical world lets scientists combine elements faster than ever before and is helping them create all sorts of novel materials. And an array of new fabrication tools is further amplifying this process, allowing us to work at altogether new scales and sizes, including the atomic scale, where we’re now building materials one atom at a time.

Biggest Materials Science Breakthroughs
These tools have helped create the metamaterials used in carbon fiber composites for lighter-weight vehicles, advanced alloys for more durable jet engines, and biomaterials to replace human joints. We’re also seeing breakthroughs in energy storage and quantum computing. In robotics, new materials are helping us create the artificial muscles needed for humanoid, soft robots—think Westworld in your world.

Let’s unpack some of the leading materials science breakthroughs of the past decade.

(1) Lithium-ion batteries

The lithium-ion battery, which today powers everything from our smartphones to our autonomous cars, was first proposed in the 1970s. It couldn’t make it to market until the 1990s, and didn’t begin to reach maturity until the past few years.

An exponential technology, these batteries have been dropping in price for three decades, plummeting 90 percent between 1990 and 2010, and 80 percent since. Concurrently, they’ve seen an eleven-fold increase in capacity.
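Those two declines compound. A quick sanity check on the combined drop, using only the percentages quoted above:

```python
# Compounding the two quoted price declines.
remaining_2010 = 1.0 - 0.90                    # 10% of the 1990 price left by 2010
remaining_now = remaining_2010 * (1.0 - 0.80)  # 20% of that remains today

print(f"total decline since 1990: {1.0 - remaining_now:.0%}")  # 98%
```

In other words, a battery that cost $100 per unit of capacity in 1990 costs about $2 today, while packing roughly eleven times the capacity.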

But producing enough of them to meet demand has been an ongoing problem. Tesla has stepped up to the challenge: one of the company’s Gigafactories in Nevada churns out 20 gigawatt-hours of energy storage per year, marking the first time we’ve seen lithium-ion batteries produced at scale.

Musk predicts 100 Gigafactories could meet the energy storage needs of the entire globe. Other companies are moving quickly to integrate this technology as well: Renault is building home energy storage systems based on its Zoe batteries, 500 of BMW’s i3 battery packs are being integrated into the UK’s national energy grid, and Toyota, Nissan, and Audi have all announced pilot projects.

Lithium-ion batteries will continue to play a major role in renewable energy storage, helping bring down solar and wind energy prices to compete with those of coal and gasoline.

(2) Graphene

Derived from the same graphite found in everyday pencils, graphene is a sheet of carbon just one atom thick. It is nearly weightless, but 200 times stronger than steel. Conducting electricity and dissipating heat faster than any other known substance, this super-material has transformative applications.

Graphene enables sensors, high-performance transistors, and even gel that helps neurons communicate in the spinal cord. Many flexible device screens, drug delivery systems, 3D printers, solar panels, and protective fabric use graphene.

As manufacturing costs decrease, this material has the power to accelerate advancements of all kinds.

(3) Perovskite

Right now, the “conversion efficiency” of the average solar panel—a measure of how much captured sunlight can be turned into electricity—hovers around 16 percent, at a cost of roughly $3 per watt.

Perovskite, a light-sensitive crystal and one of our newest materials, has the potential to get that up to 66 percent, roughly double the theoretical maximum of silicon panels.

Perovskite’s ingredients are widely available and inexpensive to combine. What do all these factors add up to? Affordable solar energy for everyone.
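For intuition on why efficiency matters: a panel’s output scales directly with its efficiency at a given irradiance (standard test conditions assume about 1,000 watts per square meter), so quadrupling efficiency quarters the panel area, racking, and land needed for the same power. A quick sketch using the figures above:

```python
# Output per square meter at standard test irradiance (~1000 W/m^2).
irradiance_w_per_m2 = 1000.0

for name, efficiency in [("today's average panel", 0.16),
                         ("perovskite's potential", 0.66)]:
    print(f"{name}: {irradiance_w_per_m2 * efficiency:.0f} W per square meter")

# today's average panel: 160 W per square meter
# perovskite's potential: 660 W per square meter
```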

Materials of the Nano-World
Nanotechnology is the outer edge of materials science, the point where matter manipulation gets nano-small: a nanometer is a million times smaller than an ant, 8,000 times smaller than a red blood cell, and 2.5 times smaller than the width of a strand of DNA.

Nanobots are machines that can be directed to produce more of themselves, or more of whatever else you’d like. And because this takes place at an atomic scale, these nanobots can pull apart any kind of material—soil, water, air—atom by atom, and use these now raw materials to construct just about anything.

Progress has been surprisingly swift in the nano-world, with a bevy of nano-products now on the market. Never want to fold clothes again? Nanoscale additives to fabrics help them resist wrinkling and staining. Don’t do windows? Not a problem! Nano-films make windows self-cleaning, anti-reflective, and capable of conducting electricity. Want to add solar to your house? We’ve got nano-coatings that capture the sun’s energy.

Nanomaterials make lighter automobiles, airplanes, baseball bats, helmets, bicycles, luggage, power tools—the list goes on. Researchers at Harvard built a nanoscale 3D printer capable of producing miniature batteries less than one millimeter wide. And if you don’t like those bulky VR goggles, researchers are now using nanotech to create smart contact lenses with a resolution six times greater than that of today’s smartphones.

And even more is coming. Right now, in medicine, drug delivery nanobots are proving especially useful in fighting cancer. Computing is a stranger story, as a bioengineer at Harvard recently stored 700 terabytes of data in a single gram of DNA.

On the environmental front, scientists can take carbon dioxide from the atmosphere and convert it into super-strong carbon nanofibers for use in manufacturing. If we can do this at scale—powered by solar—a system one-tenth the size of the Sahara Desert could reduce CO2 in the atmosphere to pre-industrial levels in about a decade.

The applications are endless. And coming fast. Over the next decade, the impact of the very, very small is about to get very, very large.

Final Thoughts
With the help of artificial intelligence and quantum computing over the next decade, the discovery of new materials will accelerate exponentially.

And with these new discoveries, customized materials will grow commonplace. Future knee implants will be personalized to meet the exact needs of each body, both in terms of structure and composition.

Though invisible to the naked eye, nanoscale materials will integrate into our everyday lives, seamlessly improving medicine, energy, smartphones, and more.

Ultimately, the path to demonetization and democratization of advanced technologies starts with redesigning materials—the invisible enabler and catalyst. Our future depends on the materials we create.

(Note: This article is an excerpt from The Future Is Faster Than You Think—my new book, just released on January 28th! To get your own copy, click here!)

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 over the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Anand Kumar from Pixabay
