Tag Archives: iron

#439349 The Four Stages of Intelligent Matter ...

Imagine clothing that can warm or cool you, depending on how you’re feeling. Or artificial skin that responds to touch and temperature and wicks away moisture automatically. Or cyborg hands controlled with DNA motors that can adjust based on signals from the outside world.

Welcome to the era of intelligent matter—an unconventional AI computing idea woven directly into the fabric of synthetic materials. Powered by brain-based computing, these materials could form the skins of soft robots or assemble into microswarms of drug-delivering nanobots, all while conserving power as they learn and adapt.

Sound like sci-fi? It gets weirder. The key to intelligent matter, said Dr. W.H.P. Pernice at the University of Münster and colleagues, is a “brain” distributed across the material’s “body”—an architecture far more alien than that of our own minds.

Picture a heated blanket. Rather than being run by a single controller, it has computing circuits sprinkled all over. This computing network can then tap into a type of brain-like processing called “neuromorphic computing.” This technological fairy dust transforms a boring blanket into one that learns what temperature you like at different times of day, then predicts your preferences as a new season rolls around.

Oh yeah, and if made from nano-sized building blocks, it could also reshuffle its internal structure to store your info with a built-in memory.

“The long-term goal is de-centralized neuromorphic computing,” said Pernice. Taking inspiration from nature, we can then begin to engineer matter that’s powered by brain-like hardware, running AI across the entire material.

In other words: Iron Man’s Endgame nanosuit? Here we come.

Why Intelligent Matter?
From rockets that could send us to Mars to a plain cotton T-shirt, we’ve done a pretty good job using materials we either developed or harvested. But that’s all they are—passive matter.

In contrast, nature is rich with intelligent matter. Take human skin. It’s waterproof, only selectively allows some molecules in, and protects us from pressure, friction, and most bacteria and viruses. It can also heal itself after a scratch or rip, and it senses outside temperature to cool us down when it gets too hot.

While our skin doesn’t “think” in the traditional sense, it can shuttle information to the brain in a blink. Then the magic happens. With over 100 billion neurons, the brain can run massively parallel computations in its circuits, while consuming only about 20 watts—not too different from the 13” MacBook Pro I’m currently typing on. Why can’t a material do the same?

The problem is that our current computing architecture struggles to support brain-like computing because of energy costs and time lags.

Enter neuromorphic computing. The idea is to borrow the brain’s ability to process data in parallel with minimal energy. To get there, scientists are redesigning computer chips from the ground up. For example, instead of today’s chips, which divorce computing modules from memory modules, these chips process information and store it at the same location. It might seem weird, but it’s what our brains do when learning and storing new information. This arrangement slashes the need for wires between memory and computation modules, essentially teleporting information rather than sending it down a traffic-jammed cable.

The end result is massively parallel computing at a very low energy cost.
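
To make that compute-in-memory idea a little more concrete, here is a minimal Python sketch (my own illustration, with an invented layer size, not any particular chip’s design) of a crossbar-style layer in which the same array serves as both the memory and the computation:

```python
import numpy as np

# Hypothetical illustration: a crossbar-style "compute-in-memory" layer.
# The weight matrix plays both roles at once: it *is* the memory
# (values stored at each crosspoint) and it *is* the computation
# (applying inputs performs the multiply-accumulate where the weights live).

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 128))   # stored "in" the array itself

def crossbar_forward(inputs: np.ndarray) -> np.ndarray:
    """Multiply-accumulate performed where the weights are stored.

    In a physical crossbar, inputs are voltages, weights are conductances,
    and the output currents sum automatically along each column.
    """
    return inputs @ weights

x = rng.normal(size=256)
y = crossbar_forward(x)                 # no separate fetch of weights from DRAM
print(y.shape)                          # (128,)
```

In a conventional chip, the weights would first have to be fetched from a separate memory before the multiply could happen; here the stored array does both jobs at once.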

The Road to Intelligent Matter
In Pernice and his colleagues’ opinion, there are four stages that can get us to intelligent matter.

The first is structural—basically your run-of-the-mill matter that can be complex but can’t change its properties. Think 3D printed frames of a lung or other organs. Intricate, but not adaptable.

Next is responsive matter. This can shift its makeup in response to the environment. Similar to an octopus changing its skin color to hide from predators, these materials can change their shape, color, or stiffness. One example is a 3D printed sunflower embedded with sensors that blossoms or closes depending on heat, force, and light. Another is responsive soft materials that can plug into biological systems, such as an artificial muscle made of silicone that repeatedly stretches and lifts over 13 pounds when heated. While it’s a neat trick, this kind of matter doesn’t adapt and can only follow its pre-programmed fate.

Higher up the intelligence food chain are adaptive materials. These have a built-in network to process information, temporarily store it, and adjust behavior from that feedback. One example is micro-swarms of tiny robots that move in a coordinated way, similar to schools of fish or flocks of birds. But because their behavior is also pre-programmed, they can’t learn from or remember their environment.

Finally, there’s intelligent material, which can learn and memorize.

“[It] is able to interact with its environment, learn from the input it receives, and self-regulates its action,” the team wrote.

It starts with four components. The first is a sensor, which captures information from both the outside world and the material’s internal state—think of a temperature sensor in your skin. Next is an actuator, basically something that changes a property of the material. For example, it might make your skin sweat more as the temperature goes up. The third is a memory unit that can store information long-term and save it as knowledge for the future. The last is a network—Bluetooth, wireless, or whatnot—that connects the components, much like the nerves of our own nervous system.

“The close interplay between all four functional elements is essential for processing information, which is generated during the entire process of interaction between matter and the environment, to enable learning,” the team said.
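
As a loose, purely hypothetical sketch of how those four elements might interplay in software (invented class, parameters, and behavior, not anything from the paper), consider a sweat-regulating skin patch:

```python
from collections import deque

# Toy sketch: sensor reads the environment, memory keeps a short history,
# an actuator changes the material's state, and a "network" step ties them together.

class IntelligentPatch:
    def __init__(self, target_temp: float = 33.0):
        self.memory = deque(maxlen=100)   # rudimentary built-in memory
        self.sweat_rate = 0.0             # actuator state
        self.target_temp = target_temp

    def sense(self, skin_temp: float) -> float:
        self.memory.append(skin_temp)     # store what was sensed
        return skin_temp

    def actuate(self, error: float) -> None:
        # Simple proportional response: sweat more as temperature rises.
        self.sweat_rate = max(0.0, 0.1 * error)

    def network_step(self, skin_temp: float) -> float:
        # The "nervous system": route sensed data through memory to the actuator.
        error = self.sense(skin_temp) - self.target_temp
        self.actuate(error)
        return self.sweat_rate

patch = IntelligentPatch()
for temp in (32.5, 34.0, 36.5):
    print(temp, patch.network_step(temp))
```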

How?
Here’s where neuromorphic computing comes in.

“Living organisms, in particular, can be considered as unconventional computing systems,” the authors said. “Programmable and highly interconnected networks are particularly well suited to carrying out these tasks,” and brain-inspired neuromorphic hardware aims to provide exactly such networks.

The brain runs on neurons and synapses—the junctions that connect individual neurons into networks. Scientists have tapped into a wide variety of materials to engineer artificial versions of these components and wire them into networks. Google’s tensor processing unit and IBM’s TrueNorth are famous examples of chips built for brain-inspired workloads; by keeping computation and memory close together, they are especially efficient at running AI algorithms.
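
For a taste of what one of those artificial neurons looks like, here is a toy leaky integrate-and-fire model in Python (parameters invented for illustration); chips like TrueNorth bake similar spiking dynamics directly into silicon:

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: one of the simplest "artificial
# components of the brain" used in neuromorphic designs. Parameters invented.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    v = 0.0
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)      # leak toward rest, integrate input
        if v >= v_thresh:             # fire when the threshold is crossed...
            spikes.append(1)
            v = v_reset               # ...then reset the membrane potential
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(1)
print(simulate_lif(rng.uniform(0.0, 0.15, size=20)))
```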

But the next step, said the authors, is to distribute these mini brains inside a material while adding sensors and actuators, essentially forming a circuit that mimics the entire human nervous system. For the matter to respond quickly, we may need to tap into other technologies.

One idea is to use light. Chips that run optical neural networks can compute and shuttle data at the speed of light. Another is to build materials that can reflect on their own decisions, with neural networks that listen and learn. Add to that matter that can physically change its form based on input—like water freezing into ice—and we may have a library of intelligent matter that could transform multiple industries, especially autonomous nanobots and life-like prosthetics.

“A wide variety of technological applications of intelligent matter can be foreseen,” the authors said.

Image Credit: ktsdesign / Shutterstock.com

Posted in Human Robots

#439062 Xenobots 2.0: These Living Robots ...

The line between animals and machines was already getting blurry after a team of scientists and roboticists unveiled the first living robots last year. Now the same team has released version 2.0 of their so-called xenobots, and they’re faster, stronger, and more capable than ever.

In January 2020, researchers from Tufts University and the University of Vermont laid out a method for building tiny biological machines out of the eggs of the African clawed frog Xenopus laevis. Dubbed xenobots after their animal forebear, they could move independently, push objects, and even team up to create swarms.

Remarkably, building them involved no genetic engineering. Instead, the team used an evolutionary algorithm running on a supercomputer to test out thousands of potential designs made up of different configurations of cells.

Once they’d found some promising candidates that could solve the tasks they were interested in, they used microsurgical tools to build real-world versions out of living cells. The most promising design was built by splicing together heart muscle cells (which could contract to propel the xenobots) and skin cells (which provided rigid support).

Impressive as that might sound, having to build each individual xenobot by hand is obviously tedious. But now the team has devised a new approach that works from the bottom up by getting the xenobots to self-assemble their bodies from single cells. Not only is the approach more scalable; the new xenobots are also faster, live longer, and even have a rudimentary memory.

In a paper in Science Robotics, the researchers describe how they took stem cells from frog embryos and allowed them to grow into clumps of several thousand cells called spheroids. After a few days, the stem cells had turned into skin cells covered in small hair-like projections called cilia, which wriggle back and forth.

Normally, these structures are used to spread mucus around on the frog’s skin. But when divorced from their normal context they took on a function more similar to that seen in microorganisms, which use cilia to move about by acting like tiny paddles.

“We are witnessing the remarkable plasticity of cellular collectives, which build a rudimentary new ‘body’ that is quite distinct from their default—in this case, a frog—despite having a completely normal genome,” corresponding author Michael Levin from Tufts University said in a press release.

“We see that cells can re-purpose their genetically encoded hardware, like cilia, for new functions such as locomotion. It is amazing that cells can spontaneously take on new roles and create new body plans and behaviors without long periods of evolutionary selection for those features,” he said.

Not only were the new xenobots faster and longer-lived, they were also much better at tasks like working together as a swarm to gather piles of iron oxide particles. And while the form and function of the xenobots was achieved without any genetic engineering, in an extra experiment the team injected them with RNA that caused them to produce a fluorescent protein that changes color when exposed to a particular color of light.

This allowed the xenobots to record whether they had come into contact with a specific light source while traveling about. The researchers say this is a proof of principle that the xenobots can be imbued with a molecular memory, and future work could allow them to record multiple stimuli and potentially even react to them.

What exactly these xenobots could eventually be used for is still speculative, but they have features that make them a promising alternative to non-organic robots. For a start, robots made of stem cells are completely biodegradable and also have their own power source in the form of “yolk platelets” found in all amphibian embryos. They are also able to self-heal in as little as five minutes if cut, and can take advantage of cells’ ability to process all kinds of chemicals.

That suggests they could have applications in everything from therapeutics to environmental engineering. But the researchers also hope to use them to better understand the processes that allow individual cells to combine and work together to create a larger organism, and how these processes might be harnessed and guided for regenerative medicine.

As these animal-machine hybrids advance, they are sure to raise ethical concerns and question marks over the potential risks. But it looks like the future of robotics could be a lot more wet and squishy than we imagined.

Image Credit: Doug Blackiston/Tufts University

Posted in Human Robots

#436573 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World
Karen Hao | MIT Technology Review
“The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism. …Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.”

ROBOTICS
3D Printed Four-Legged Robot Is Ready to Take on Spot—at a Lower Price
Luke Dormehl | Digital Trends
“[Ghost Robotics and Origin] have teamed up to develop a new line of robots, called the Spirit Series, which offer impressively capable four-legged robots, but which can be printed using additive manufacturing at a fraction of the cost and speed of traditional manufacturing approaches.”

PRIVACY
The Studs on This Punk Bracelet Are Actually Microphone-Jamming Ultrasonic Speakers
Andrew Liszewski | Gizmodo
“You can prevent facial recognition cameras from identifying you by wearing face paint, masks, or sometimes just a pair of oversized sunglasses. Keeping conversations private from an ever-growing number of microphone-equipped devices isn’t quite as easy, but researchers have created what could be the first wearable that actually helps increase your privacy.”

TRANSPORTATION
Iron Man Dreams Are Closer to Becoming a Reality Thanks to This New Jetman Dubai Video
Julia Alexander | The Verge
“Tony Stark may have destroyed his Iron Man suits in Iron Man 3 (only to bring out a whole new line in Avengers: Age of Ultron), but Jetman Dubai’s Iron Man-like dreams of autonomous human flight are realer than ever. A new video published by the company shows pilot Vince Reffet using a jet-powered, carbon-fiber suit to launch off the ground and fly 6,000 feet in the air.”

TECHNOLOGY
Wikipedia Is the Last Best Place on the Internet
Richard Cooke | Wired
“More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web.”

SCIENCE
The Very Large Array Will Search for Evidence of Extraterrestrial Life
Georgina Torbet | Digital Trends
“To begin the project, an interface will be added to the NRAO’s Very Large Array (VLA) in New Mexico to search for events or structures which could indicate the presence of life, such as laser beams, structures built around stars, indications of constructed satellites, or atmospheric chemicals produced by industry.”

SCIENCE FICTION
The Terrible Truth About Star Trek’s Transporters
Cassidy Ward | SyFy Wire
“The fact that you are scanned, deconstructed, and rebuilt almost immediately thereafter only creates the illusion of continuity. In reality, you are killed and then something exactly like you is born, elsewhere. There’s a whole philosophical debate about whether this really matters. If the person constructed on the other end is identical to you, down to the atomic level, is there any measurable difference from it being actually you?”

Image Credit: Samuel Giacomelli / Unsplash

Posted in Human Robots

#436218 An AI Debated Its Own Potential for Good ...

Artificial intelligence is going to overhaul the way we live and work. But will the changes it brings be for the better? As the technology slowly develops (let’s remember that right now, we’re still very much in the narrow AI space and nowhere near an artificial general intelligence), whether it will end up doing us more harm than good is a question at the top of everyone’s mind.

What kind of response might we get if we posed this question to an AI itself?

Last week at the Cambridge Union in England, IBM did just that. Its Project Debater (an AI that narrowly lost a debate to human debating champion Harish Natarajan in February) gave the opening arguments in a debate about the promise and peril of artificial intelligence.

Critical thinking, linking different lines of thought, and anticipating counter-arguments are all valuable debating skills that humans can practice and refine. While these skills are tougher for an AI to get good at since they often require deeper contextual understanding, AI does have a major edge over humans in absorbing and analyzing information. In the February debate, Project Debater used IBM’s cloud computing infrastructure to read hundreds of millions of documents and extract relevant details to construct an argument.

This time around, Debater looked through 1,100 arguments for or against AI. The arguments were submitted to IBM by the public during the week prior to the debate, through a website set up for that purpose. Of the 1,100 submissions, the AI classified 570 as anti-AI, or of the opinion that the technology will bring more harm to humanity than good. 511 arguments were found to be pro-AI, and the rest were irrelevant to the topic at hand.
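
IBM hasn’t spelled out Debater’s pipeline here, but as a rough, entirely hypothetical illustration of the underlying task (stance classification), a toy version in scikit-learn might look something like this; the training texts and labels below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy illustration only: classify short argument texts as pro- or anti-AI.
train_texts = [
    "AI can take over dangerous and monotonous jobs",
    "Automation frees people for more creative work",
    "AI will entrench the biases of its creators",
    "Opaque algorithms make decisions no one can audit",
]
train_labels = ["pro", "pro", "anti", "anti"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["machine learning could perpetuate discrimination"]))
```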

Debater grouped the arguments into five themes; the technology’s ability to take over dangerous or monotonous jobs was a pro-AI theme, and on the flip side was its potential to perpetuate the biases of its creators. “AI companies still have too little expertise on how to properly assess datasets and filter out bias,” the tall black box that houses Project Debater said. “AI will take human bias and will fixate it for generations.”

After Project Debater kicked off the debate by giving opening arguments for both sides, two teams of people took over, elaborating on its points and coming up with their own counter-arguments.

In the end, an audience poll voted in favor of the pro-AI side, but just barely; 51.2 percent of voters felt convinced that AI can help us more than it can hurt us.

The software’s natural language processing was able to identify racist, obscene, or otherwise inappropriate comments and weed them out as irrelevant to the debate. But it also repeated the same arguments multiple times, and it misclassified a statement about bias as pro-AI rather than anti-AI.

IBM has been working on Project Debater for over six years, and though it aims to iron out small glitches like these, the system’s goal isn’t to ultimately outwit and defeat humans. On the contrary, the AI is meant to support our decision-making by taking in and processing huge amounts of information in a nuanced way, more quickly than we ever could.

IBM engineer Noam Slonim envisions Project Debater’s tech being used, for example, by a government seeking citizens’ feedback about a new policy. “This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision,” he said.

As for the question of whether AI will do more good or harm, perhaps Sylvie Delacroix put it best. A professor of law and ethics at the University of Birmingham who argued on the pro-AI side of the debate, she pointed out that the impact AI will have depends on the way we design it, saying “AI is only as good as the data it has been fed.”

She’s right; rather than asking what sort of impact AI will have on humanity, we should start by asking what sort of impact we want it to have. The people working on AI—not AIs themselves—are ultimately responsible for how much good or harm will be done.

Image Credit: IBM Project Debater at Cambridge Union Society, photo courtesy of IBM Research

Posted in Human Robots

#436119 How 3D Printing, Vertical Farming, and ...

Food. What we eat, and how we grow it, will be fundamentally transformed in the next decade.

Already, indoor farming is projected to be a US$40.25 billion industry by 2022, with a compound annual growth rate of 9.65 percent. Meanwhile, the food 3D printing industry is expected to grow at an even higher rate, averaging 50 percent annual growth.
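
For a sense of what those growth rates imply, here’s a quick back-of-the-envelope compound-growth calculation (the five-year horizon is illustrative, not a figure from the market reports):

```python
# Back-of-the-envelope: what a compound annual growth rate (CAGR) implies.
# The horizon below is illustrative, not taken from the article's sources.

def growth_multiplier(cagr: float, years: int) -> float:
    """How many times larger something becomes after `years` of growth at `cagr`."""
    return (1 + cagr) ** years

print(round(growth_multiplier(0.0965, 5), 2))  # ~1.59x over five years at 9.65%
print(round(growth_multiplier(0.50, 5), 2))    # ~7.59x over five years at 50%
```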

And converging exponential technologies—from materials science to AI-driven digital agriculture—are not slowing down. Today’s breakthroughs will soon allow our planet to boost its food production by nearly 70 percent, using a fraction of the real estate and resources, to feed 9 billion by mid-century.

What you consume, how it was grown, and how it will end up in your stomach will all ride the wave of converging exponentials, revolutionizing the most basic of human needs.

Printing Food
3D printing has already had a profound impact on the manufacturing sector. We are now able to print in hundreds of different materials, making anything from toys to houses to organs. However, we are finally seeing the emergence of 3D printers that can print food itself.

Redefine Meat, an Israeli startup, wants to tackle industrial meat production using 3D printers that can generate meat, no animals required. The printer takes in fat, water, and three different plant protein sources, using these ingredients to print a meat fiber matrix with trapped fat and water, thus mimicking the texture and flavor of real meat.

Slated for release in 2020 at a cost of $100,000, their machines are rapidly demonetizing and will begin by targeting clients in industrial-scale meat production.

Anrich3D aims to take this process a step further, 3D printing meals that are customized to your medical records, health data from your smart wearables, and patterns detected by your sleep trackers. The company plans to use multiple extruders for multi-material printing, allowing it to dispense each ingredient precisely for nutritionally optimized meals. Currently in an R&D phase at the Nanyang Technological University in Singapore, the company hopes to have its first taste tests in 2020.
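
To give a flavor of what “nutritionally optimized” dispensing could involve, here is a hypothetical sketch (invented ingredients and targets, not Anrich3D’s actual method) that solves for non-negative ingredient amounts to hit a set of macro targets:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical sketch: choose how much of each ingredient to extrude so the
# printed meal hits target macros. Ingredient data and targets are invented.
#                 protein  carbs  fat   (grams per gram of ingredient)
ingredients = np.array([
    [0.80, 0.05, 0.10],   # pea protein isolate
    [0.02, 0.75, 0.01],   # rice flour
    [0.00, 0.00, 0.99],   # canola oil
]).T                       # shape: (3 nutrients, 3 ingredients)

targets = np.array([40.0, 60.0, 20.0])   # grams of protein, carbs, fat wanted

amounts, residual = nnls(ingredients, targets)   # non-negative grams of each
print(np.round(amounts, 1), round(residual, 2))
```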

These are only a few of the many 3D food printing startups springing into existence. The benefits from such innovations are boundless.

Not only will food 3D printing grant consumers control over the ingredients and mixtures they consume, but it is already beginning to enable new innovations in flavor itself, democratizing far healthier meal options in newly customizable cuisine categories.

Vertical Farming
Vertical farming, whereby food is grown in vertical stacks (in skyscrapers and buildings rather than outside in fields), marks a classic case of converging exponential technologies. Over just the past decade, the technology has surged from a handful of early-stage pilots to a full-grown industry.

Today, the average American meal travels 1,500-2,500 miles to get to your plate. As summed up by Worldwatch Institute researcher Brian Halweil, “We are spending far more energy to get food to the table than the energy we get from eating the food.” Additionally, the longer foods are out of the soil, the less nutritious they become, losing on average 45 percent of their nutrition before being consumed.

Yet beyond cutting down on time and transportation losses, vertical farming eliminates a whole host of issues in food production. Relying on hydroponics and aeroponics, vertical farms allow us to grow crops with 90 percent less water than traditional agriculture—which is critical for our increasingly thirsty planet.

Currently, the largest player around is Bay Area-based Plenty Inc. With over $200 million in funding from SoftBank, Plenty is taking a smart tech approach to indoor agriculture. Plants grow on 20-foot-high towers, monitored by tens of thousands of cameras and sensors, optimized by big data and machine learning.

This allows the company to pack 40 plants in the space previously occupied by 1. The process also produces yields 350 times greater than outdoor farmland, using less than 1 percent as much water.

And rather than bespoke veggies for the wealthy few, Plenty’s processes allow it to knock 20-35 percent off the costs of traditional grocery stores. To date, Plenty has its home base in South San Francisco, a 100,000 square-foot farm in Kent, Washington, an indoor farm in the United Arab Emirates, and recently started construction on over 300 farms in China.

Another major player is New Jersey-based Aerofarms, which can now grow two million pounds of leafy greens without sunlight or soil.

To do this, Aerofarms leverages AI-controlled LEDs to provide optimized wavelengths of light for each plant. Using aeroponics, the company delivers nutrients by misting them directly onto the plants’ roots—no soil required. Rather, plants are suspended in a growth mesh fabric made from recycled water bottles. And here too, sensors, cameras, and machine learning govern the entire process.

While 50-80 percent of the cost of vertical farming is human labor, autonomous robotics promises to solve that problem. Enter contenders like Iron Ox, a firm that has developed the Angus robot, capable of moving around plant-growing containers.

The writing is on the wall, and traditional agriculture is fast being turned on its head.

Materials Science
In an era where materials science, nanotechnology, and biotechnology are rapidly becoming the same field of study, key advances are enabling us to create healthier, more nutritious, more efficient, and longer-lasting food.

For starters, we are now able to boost the photosynthetic abilities of plants. Using novel techniques to improve a micro-step in the photosynthesis process chain, researchers at UCLA were able to boost tobacco crop yield by 14-20 percent. Meanwhile, the RIPE Project, backed by Bill Gates and run out of the University of Illinois, has matched and improved those numbers.

And to top things off, researchers at the University of Essex were even able to improve tobacco yield by 27-47 percent by increasing the levels of a protein involved in photorespiration.

In yet another win for food-related materials science, Santa Barbara-based Apeel Sciences is further tackling the vexing challenge of food waste. Now approaching commercialization, Apeel uses lipids and glycerolipids found in the peels, seeds, and pulps of all fruits and vegetables to create “cutin”—the fatty substance that composes the skin of fruits and prevents them from rapidly spoiling by trapping moisture.

By then spraying fruits with this generated substance, Apeel can preserve foods 60 percent longer using an odorless, tasteless, colorless organic substance.

And stores across the US are already using this method. By leveraging our advancing knowledge of plants and chemistry, materials science is allowing us to produce more food with far longer-lasting freshness and more nutritious value than ever before.

Convergence
With advances in 3D printing, vertical farming, and materials sciences, we can now make food smarter, more productive, and far more resilient.

By the end of the next decade, you should be able to 3D print a fusion cuisine dish from the comfort of your home, using ingredients harvested from vertical farms, with nutritional value optimized by AI and materials science. However, even this picture doesn’t account for all the rapid changes underway in the food industry.

Join me next week for Part 2 of the Future of Food for a discussion on how food production will be transformed, quite literally, from the bottom up.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Vanessa Bates Ramirez

Posted in Human Robots