Tag Archives: viral

#434854 New Lifelike Biomaterial Self-Reproduces ...

Life demands flux.

Every living organism is constantly changing: cells divide and die, proteins build and disintegrate, DNA breaks and heals. Life demands metabolism—the simultaneous builder and destroyer of living materials—to continuously upgrade our bodies. That’s how we heal and grow, how we propagate and survive.

What if we could endow cold, static, lifeless robots with the gift of metabolism?

In a study published this month in Science Robotics, an international team developed a DNA-based method that gives raw biomaterials an artificial metabolism. Dubbed DASH—DNA-based assembly and synthesis of hierarchical materials—the method automatically generates “slime”-like nanobots that dynamically move and navigate their environments.

Like humans, the artificial lifelike material used external energy to constantly change the nanobots’ bodies in pre-programmed ways, recycling their DNA-based parts as both waste and raw material for further use. Some “grew” into the shape of molecular double helices; others “wrote” the DNA letters inside microchips.

The artificial life forms were also rather “competitive”—in quotes, because these molecular machines are not conscious. Yet when pitted against each other, two DASH bots automatically raced forward, crawling in typical slime-mold fashion at a scale easily seen under the microscope—and, in some iterations, with the naked eye.

“Fundamentally, we may be able to change how we create and use the materials with lifelike characteristics. Typically materials and objects we create in general are basically static… one day, we may be able to ‘grow’ objects like houses and maintain their forms and functions autonomously,” said study author Dr. Shogo Hamada to Singularity Hub.

“This is a great study that combines the versatility of DNA nanotechnology with the dynamics of living materials,” said Dr. Job Boekhoven at the Technical University of Munich, who was not involved in the work.

Dissipative Assembly
The study builds on previous ideas on how to make molecular Lego blocks that essentially assemble—and destroy—themselves.

Although the inspiration came from biological metabolism, scientists have long hoped to cut their reliance on nature. At its core, metabolism is just a bunch of well-coordinated chemical reactions, programmed by eons of evolution. So why build artificial lifelike materials still tethered by evolution when we can use chemistry to engineer completely new forms of artificial life?

Back in 2015, for example, a team led by Boekhoven described a way to mimic how our cells build their internal “structural beams,” aptly called the cytoskeleton. The key here, unlike many processes in nature, isn’t balance or equilibrium; rather, the team engineered an extremely unstable system that automatically builds—and sustains—assemblies from molecular building blocks when given an external source of chemical energy.

Sound familiar? The team basically built molecular devices that “die” without “food.” Thanks to the laws of thermodynamics, that energy eventually dissipates, and the shapes automatically begin to break down, completing an artificial “circle of life.”

The new study took the system one step further: rather than just mimicking synthesis, they completed the circle by coupling the building process with dissipative assembly.

Here, the “assembling units themselves are also autonomously created from scratch,” said Hamada.

DNA Nanobots
The process of building DNA nanobots starts on a microfluidic chip.

Decades of research have allowed researchers to optimize DNA assembly outside the body. With the help of catalysts, which help “bind” individual molecules together, the team found that they could easily alter the shape of the self-assembling DNA bots—which formed fiber-like shapes—by changing the structure of the microfluidic chambers.

Computer simulations played a role here too: through both digital simulations and observations under the microscope, the team was able to identify a few critical rules that helped them predict how their molecules self-assemble while navigating a maze of blocking “pillars” and channels carved onto the microchips.

This “enabled a general design strategy for the DASH patterns,” they said.

In particular, the whirling motion of the fluids as they coursed through—and bumped into—ridges in the chips seems to help the DNA molecules “entangle into networks,” the team explained.

These insights helped the team further develop the “destroying” part of metabolism. Similar to linking molecules into DNA chains, their destruction also relies on enzymes.

Once the team pumped both “generation” and “degeneration” enzymes into the microchips, along with raw building blocks, the process was completely autonomous. The simultaneous processes were so lifelike that the team borrowed a formalism common in robotics—the finite-state machine—to describe the behavior of their DNA nanobots from growth to eventual decay.
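For readers unfamiliar with the term, a finite-state machine simply maps a system onto a handful of discrete states and the events that move it between them. Below is a minimal sketch of what such a description of a DASH bot’s life cycle might look like; the state names, events, and transitions are invented for illustration and are not the definitions used in the study.

```python
from enum import Enum, auto

class State(Enum):
    SEED = auto()      # raw building blocks injected
    GROWING = auto()   # generation enzymes dominate
    DECAYING = auto()  # degeneration enzymes dominate
    DEGRADED = auto()  # structure fully recycled

# Illustrative transition table: (current state, event) -> next state.
# States and events are placeholders, not the study's definitions.
TRANSITIONS = {
    (State.SEED, "generation"): State.GROWING,
    (State.GROWING, "fuel_exhausted"): State.DECAYING,
    (State.GROWING, "degeneration"): State.DECAYING,
    (State.DECAYING, "refuel"): State.GROWING,
    (State.DECAYING, "fully_recycled"): State.DEGRADED,
}

def step(state: State, event: str) -> State:
    """Advance the automaton; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = State.SEED
for event in ["generation", "degeneration", "fully_recycled"]:
    state = step(state, event)
    print(event, "->", state.name)
```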

“The result is a synthetic structure with features associated with life. These behaviors include locomotion, self-regeneration, and spatiotemporal regulation,” said Boekhoven.

Molecular Slime Molds
Just watching lifelike molecules grow in place—a sort of molecular “running man” dance—wasn’t enough.

In their next experiments, the team took inspiration from slugs to program undulating movements into their DNA bots. Here, “movement” is actually a sort of illusion: the machines “moved” because their front ends kept regenerating while their back ends degenerated. In essence, the molecular slime was built by linking multiple individual “DNA robot-like” units together: each unit receives a delayed “decay” signal from the head of the slime, in a way that allows the whole artificial “organism” to crawl forward against the stream of fluid flow.
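To make the mechanism concrete, here is a toy one-dimensional simulation of that illusion: nothing slides as a rigid body, yet the occupied span drifts forward because material is added at the head and removed, after a delay, at the tail. The unit counts and delay are made up for illustration and are not taken from the paper.

```python
# Toy 1D sketch of the "moving front" illusion: nothing slides as a rigid
# body, yet the occupied span drifts forward because material is added at
# the head while the tail decays after a delay. All numbers are invented.

def simulate(steps: int = 10, delay: int = 3) -> None:
    tail, head = 0, 5          # the slime occupies positions [tail, head)
    for t in range(steps):
        head += 1              # generation at the front edge
        if t >= delay:
            tail += 1          # delayed degeneration at the rear edge
        center = (head + tail) / 2
        print(f"t={t:2d}  span=[{tail:2d}, {head:2d})  length={head - tail}  center={center:.1f}")

simulate()
```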

Here’s the fun part: the team eventually engineered two molecular slime bots and pitted them against each other, Mario Kart-style. In these experiments, the faster moving bot alters the state of its competitor to promote “decay.” This slows down the competitor, allowing the dominant DNA nanoslug to win in a race.

Of course, the end goal isn’t molecular podracing. Rather, the DNA-based bots could easily amplify a given DNA or RNA sequence, making them efficient nano-diagnosticians for viral and other infections.

The lifelike material can basically generate patterns that doctors can directly ‘see’ with their eyes, which makes DNA or RNA molecules from bacteria and viruses extremely easy to detect, the team said.

In the short run, “the detection device with this self-generating material could be applied to many places and help people on site, from farmers to clinics, by providing an easy and accurate way to detect pathogens,” explained Hamada.

A Futuristic Iron Man Nanosuit?
I’m letting my nerd flag fly here. In Avengers: Infinity War, the scientist-engineer-philanthropist-playboy Tony Stark unveiled a nanosuit that grew to his contours when needed and automatically healed when damaged.

DASH may one day realize that vision. For now, the team isn’t focused on using the technology for regenerating armor—rather, the dynamic materials could create new protein assemblies or chemical pathways inside living organisms, for example. The team also envisions adding simple sensing and computing mechanisms into the material, which can then easily be thought of as a robot.

Unlike synthetic biology, the goal isn’t to create artificial life. Rather, the team hopes to give lifelike properties to otherwise static materials.

“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before,” said lead author Dr. Dan Luo.

“Ultimately, our material may allow the construction of self-reproducing machines… artificial metabolism is an important step toward the creation of ‘artificial’ biological systems with dynamic, lifelike capabilities,” added Hamada. “It could open a new frontier in robotics.”

Image Credit: A timelapse image of DASH, by Jeff Tyson at Cornell University.

Posted in Human Robots

#434673 The World’s Most Valuable AI ...

It recognizes our faces. It knows the videos we might like. And it can even, perhaps, recommend the best course of action to take to maximize our personal health.

Artificial intelligence and its subset of disciplines—such as machine learning, natural language processing, and computer vision—are seemingly becoming integrated into our daily lives whether we like it or not. What was once sci-fi is now ubiquitous research and development in company and university labs around the world.

Similarly, the startups working on many of these AI technologies have seen their proverbial stock rise. More than 30 of these companies are now valued at over a billion dollars, according to data research firm CB Insights, which itself employs algorithms to provide insights into the tech business world.

Private companies with a billion-dollar valuation were so uncommon not that long ago that they were dubbed unicorns. Now there are 325 of these once-rare creatures, with a combined valuation north of a trillion dollars, as CB Insights maintains a running count of this exclusive Unicorn Club.

The subset of AI startups accounts for about 10 percent of the total membership, growing rapidly from zero to 32 in just four years. Last year, an unprecedented 17 AI startups broke the billion-dollar barrier, with 2018 also a record year for venture capital invested in private US AI companies, at $9.3 billion, CB Insights reported.

What exactly is all this money funding?

AI Keeps an Eye Out for You
Let’s start with the bad news first.

Facial recognition is probably one of the most ubiquitous applications of AI today. It’s actually a decades-old technology often credited to a man named Woodrow Bledsoe, who used an instrument called a RAND tablet that could semi-autonomously match faces from a database. That was in the 1960s.

Today, most of us are familiar with facial recognition as a way to unlock our smartphones. But the technology has gained notoriety as a surveillance tool of law enforcement, particularly in China.

It’s no secret that the facial recognition algorithms developed by several of the AI unicorns from China—SenseTime, CloudWalk, and Face++ (also known as Megvii)—are used to monitor the country’s 1.3 billion citizens. Police there are even equipped with AI-powered eyeglasses for such purposes.

A fourth billion-dollar Chinese startup, Yitu Technologies, also produces a platform for facial recognition in the security realm, and develops AI systems in healthcare on top of that. For example, its CARE.AI™ Intelligent 4D Imaging System for Chest CT can reputedly identify in real time a variety of lesions for the possible early detection of cancer.

The AI Doctor Is In
As Peter Diamandis recently noted, AI is rapidly augmenting healthcare and longevity. He mentioned another AI unicorn from China in this regard—iCarbonX, which plans to use machines to develop personalized health plans for every individual.

A couple of AI unicorns on the hardware side of healthcare are OrCam Technologies and Butterfly. The former, an Israeli company, has developed a wearable device for the vision impaired called MyEye that attaches to one’s eyeglasses. The device can identify people and products, as well as read text, conveying the information through discreet audio.

Butterfly Network, out of Connecticut, aims to shake up the medical imaging market with a handheld ultrasound machine that works with a smartphone.

“Orcam and Butterfly are amazing examples of how machine learning can be integrated into solutions that provide a step-function improvement over state of the art in ultra-competitive markets,” noted Andrew Byrnes, investment director at Comet Labs, a venture capital firm focused on AI and robotics, in an email exchange with Singularity Hub.

AI in the Driver’s Seat
Comet Labs’ portfolio includes two AI unicorns, Megvii and Pony.ai.

The latter is one of three billion-dollar startups developing the AI technology behind self-driving cars, with the other two being Momenta.ai and Zoox.

Founded in 2016 near San Francisco (with another headquarters in China), Pony.ai debuted its latest self-driving system, called PonyAlpha, last year. The platform uses multiple sensors (LiDAR, cameras, and radar) to navigate its environment, but its “sensor fusion technology” makes things simple by choosing the most reliable sensor data for any given driving scenario.
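Pony.ai has not published how its “sensor fusion technology” works internally, but the general idea of ranking sensors by expected reliability for the current driving scenario can be sketched roughly as follows; the sensors, scenarios, and scores here are invented placeholders, not the company’s actual logic.

```python
# Hypothetical sketch of scenario-aware sensor selection. The reliability
# scores are invented; a real stack would estimate them from sensor health,
# weather, lighting, and cross-sensor consistency checks.

RELIABILITY = {
    "clear_day":  {"camera": 0.9, "lidar": 0.8, "radar": 0.6},
    "heavy_rain": {"camera": 0.4, "lidar": 0.5, "radar": 0.8},
    "night":      {"camera": 0.3, "lidar": 0.8, "radar": 0.7},
}

def pick_primary_sensor(scenario: str) -> str:
    """Return the sensor judged most reliable for the given scenario."""
    scores = RELIABILITY[scenario]
    return max(scores, key=scores.get)

for scenario in RELIABILITY:
    print(scenario, "->", pick_primary_sensor(scenario))
```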

Zoox is another San Francisco area startup founded a couple of years earlier. In late 2018, it got the green light from the state of California to be the first autonomous vehicle company to transport a passenger as part of a pilot program. Meanwhile, China-based Momenta.ai is testing level four autonomy for its self-driving system. Autonomous driving levels are ranked zero to five, with level five being equal to a human behind the wheel.

The hype around autonomous driving is currently in overdrive, and Byrnes thinks regulatory roadblocks will keep most self-driving cars in idle for the foreseeable future. The exception, he said, is China, which is adopting a “systems” approach to autonomy for passenger transport.

“If [autonomous mobility] solves bigger problems like traffic that can elicit government backing, then that has the potential to go big fast,” he said. “This is why we believe Pony.ai will be a winner in the space.”

AI in the Back Office
An AI-powered technology that perhaps only fans of the cult classic Office Space might appreciate has suddenly taken the business world by storm—robotic process automation (RPA).

RPA companies take the mundane back office work, such as filling out invoices or processing insurance claims, and turn it over to bots. The intelligent part comes into play because these bots can tackle unstructured data, such as text in an email or even video and pictures, in order to accomplish an increasing variety of tasks.
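As a loose illustration of what “tackling unstructured data” can mean at its simplest, the snippet below pulls an invoice number and an amount out of free-form email text; commercial RPA platforms layer OCR, language models, and workflow engines on top of this kind of extraction. The field names and patterns are hypothetical, not any vendor’s API.

```python
import re

# Minimal illustration of turning unstructured text into structured fields.
# The field names and patterns are hypothetical, not any vendor's API.
EMAIL = "Hi team, please process invoice INV-20391 for $1,248.50 by Friday."

def extract_invoice(text: str) -> dict:
    invoice = re.search(r"INV-\d+", text)
    amount = re.search(r"\$[\d,]+(?:\.\d{2})?", text)
    return {
        "invoice_id": invoice.group() if invoice else None,
        "amount": amount.group() if amount else None,
    }

print(extract_invoice(EMAIL))
# {'invoice_id': 'INV-20391', 'amount': '$1,248.50'}
```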

Both Automation Anywhere and UiPath are older companies, founded in 2003 and 2005, respectively. Since 2017 alone, however, the two have raised a combined total of nearly $1 billion in disclosed capital.

Cybersecurity Embraces AI
Cybersecurity is another industry where AI is driving investment into startups. Sporting imposing names like CrowdStrike, Darktrace, and Tanium, these cybersecurity companies employ different machine-learning techniques to protect computers and other IT assets beyond the latest software update or virus scan.

Darktrace, for instance, takes its inspiration from the human immune system. Its algorithms can purportedly “learn” the unique pattern of each device and user on a network, detecting emerging problems before things spin out of control.
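Darktrace’s models are proprietary, but the “learn each device’s normal, then flag deviations” idea can be approximated with something as simple as a per-device statistical baseline; the traffic figures and three-sigma threshold below are invented for illustration.

```python
from statistics import mean, stdev

# Toy per-device baseline: flag traffic that strays far from what the
# device normally does. Figures and the 3-sigma threshold are invented.
baseline_mb_per_hour = [12, 15, 11, 14, 13, 12, 16, 14]

def is_anomalous(observed_mb: float, history: list, sigmas: float = 3.0) -> bool:
    mu, sd = mean(history), stdev(history)
    return abs(observed_mb - mu) > sigmas * sd

print(is_anomalous(13, baseline_mb_per_hour))   # typical hour -> False
print(is_anomalous(250, baseline_mb_per_hour))  # sudden spike -> True
```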

All three companies are used by major corporations and governments around the world. CrowdStrike itself made headlines a few years ago when it linked the hacking of the Democratic National Committee email servers to the Russian government.

Looking Forward
I could go on and introduce you to the world’s most valuable startup, a Chinese company called ByteDance, valued at $75 billion for its news curation platform and an app for creating 15-second viral videos. But that’s probably not where VC firms like Comet Labs are generally putting their money.

Byrnes sees real value in startups that are taking “data-driven approaches to problems specific to unique industries.” Take the example of Chicago-based unicorn Uptake Technologies, which analyzes incoming data from machines, from wind turbines to tractors, to predict problems before they occur with the machinery. A not-yet unicorn called PingThings in the Comet Labs portfolio does similar predictive analytics for the energy utilities sector.
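Neither company publishes its models, but the flavor of this kind of predictive analytics—watch a sensor stream and raise an alert before a hard failure—can be sketched with a simple rolling-average check; the vibration readings and threshold below are invented.

```python
# Illustrative predictive-maintenance check: alert when a rolling average of
# a sensor reading trends above a failure threshold. Values are invented.

def rolling_average(values, window=3):
    return [sum(values[i - window:i]) / window for i in range(window, len(values) + 1)]

vibration_mm_s = [2.1, 2.0, 2.3, 2.6, 3.1, 3.9, 4.8]  # e.g., a turbine bearing
ALERT_THRESHOLD = 3.5

for t, avg in enumerate(rolling_average(vibration_mm_s), start=3):
    if avg > ALERT_THRESHOLD:
        print(f"t={t}: rolling average {avg:.2f} mm/s exceeds {ALERT_THRESHOLD}; schedule maintenance")
        break
```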

“One question we like asking is, ‘What does the state of the art look like in your industry in three to five years?’” Byrnes said. “We ask that a lot, then we go out and find the technology-focused teams building those things.”

Image Credit: Andrey Suslov / Shutterstock.com

Posted in Human Robots

#434611 This Week’s Awesome Stories From ...

AUTOMATION
The Rise of the Robot Reporter
Jaclyn Peiser | The New York Times
“In addition to covering company earnings for Bloomberg, robot reporters have been prolific producers of articles on minor league baseball for The Associated Press, high school football for The Washington Post and earthquakes for The Los Angeles Times.”

ROBOTICS
Penny-Sized Ionocraft Flies With No Moving Parts
Evan Ackerman | IEEE Spectrum
“Electrohydrodynamic (EHD) thrusters, sometimes called ion thrusters, use a high strength electric field to generate a plasma of ionized air. …Magical, right? No moving parts, completely silent, and it flies!”

ARTIFICIAL INTELLIGENCE
Making New Drugs With a Dose of Artificial Intelligence
Cade Metz | The New York Times
“…DeepMind won the [protein folding] competition by a sizable margin—it improved the prediction accuracy nearly twice as much as experts expected from the contest winner. DeepMind’s victory showed how the future of biochemical research will increasingly be driven by machines and the people who oversee those machines.”

COMPUTING
Nano-Switches Made Out of Graphene Could Make Our Devices Even Smaller
Emerging Technology From the arXiv | MIT Technology Review
“For the first time, physicists have built reliable, efficient graphene nanomachines that can be fabricated on silicon chips. They could lead to even greater miniaturization.”

BIOTECH
The Problem With Big DNA
Sarah Zhang | The Atlantic
“It took researchers days to search through thousands of genome sequences. Now it takes just a few seconds. …As sequencing becomes more common, the number of publicly available bacterial and viral genomes has doubled. At the rate this work is going, within a few years multiple millions of searchable pathogen genomes will be available—a library of DNA and disease, spread the world over.”

CRYPTOCURRENCY
Fire (and Lots of It): Berkeley Researcher on the Only Way to Fix Cryptocurrency
Dan Goodin | Ars Technica
“Weaver said, there’s no basis for the promises that cryptocurrencies’ decentralized structure and blockchain basis will fundamentally transform commerce or economics. That means the sky-high valuations spawned by those false promises are completely unjustified. …To support that conclusion, Weaver recited an oft-repeated list of supposed benefits of cryptocurrencies and explained why, after closer scrutiny, he believed them to be myths.”

Image Credit: Katya Havok / Shutterstock.com

Posted in Human Robots

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea parallels how human-machine interfaces (HMIs) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still hampered by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger observed in his famous meditation on the hammer, seem to “disappear” into the hand. They are designed to amplify a human ability and not get in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify the human output in accordance. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that process and present data the same way humans process real-world information: through fully sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to pick up the subtle neuromuscular signals of silent, internal speech, which enables the device to respond to unspoken prompts and act as an extension of the user’s cognition.

Another notable example is the BrainGate neural device, developed by a research consortium that includes Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are prolific users of technology (you may recall the viral 2016 New York Times article “Artificial Intelligence’s White Guy Problem”). If you ask this population whether there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate on what Clayton Christensen described in The Innovator’s Dilemma as the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a new and often much larger market from below.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

The workarounds they create are often ingenious, precisely because they are born not of preference but of necessity, forcing disadvantaged users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort captures environmental information with a wearable video camera and converts this data into gentle electrical stimulation sequences sent to a device resting on the user’s tongue—one of the most touch-sensitive areas of the body. The user learns to interpret the patterns felt on the tongue and, in doing so, becomes able to “see” with it.
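The article describes BrainPort’s pipeline only at a high level—video frame in, pattern of tongue stimulation out. A crude sketch of that kind of sensory substitution might downsample a grayscale frame onto a small electrode grid and scale brightness to stimulation intensity; the grid size and scaling below are assumptions for illustration, not Wicab’s specification.

```python
# Crude sensory-substitution sketch: downsample a grayscale frame onto a
# small "electrode grid" and map brightness to a stimulation intensity in
# [0, 1]. Grid size and scaling are assumptions, not Wicab's specification.

def frame_to_grid(frame, grid=4):
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid, w // grid
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(round(sum(block) / len(block) / 255, 2))  # mean brightness
        out.append(row)
    return out

# An 8x8 synthetic frame with a bright object in the upper-left corner.
frame = [[200 if x < 4 and y < 4 else 20 for x in range(8)] for y in range(8)]
for row in frame_to_grid(frame):
    print(row)
```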

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com

Posted in Human Robots

#433803 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
The AI Cold War That Could Doom Us All
Nicholas Thompson | Wired
“At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. …Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?”

LONGEVITY
Finally, the Drug That Keeps You Young
Stephen S. Hall | MIT Technology Review
“The other thing that has changed is that the field of senescence—and the recognition that senescent cells can be such drivers of aging—has finally gained acceptance. Whether those drugs will work in people is still an open question. But the first human trials are under way right now.”

SYNTHETIC BIOLOGY
Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories
Megan Molteni | Wired
“The biotech unicorn is already cranking out an impressive number of microbial biofactories that grow and multiply and burp out fragrances, fertilizers, and soon, psychoactive substances. And they do it at a fraction of the cost of traditional systems. But Kelly is thinking even bigger.”

CYBERNETICS
Thousands of Swedes Are Inserting Microchips Under Their Skin
Maddy Savage | NPR
“Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180. So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.”

ART
AI Art at Christie’s Sells for $432,500
Gabe Cohn | The New York Times
“Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.”

ETHICS
Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From
Karen Hao | MIT Technology Review
“The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.”

TECHNOLOGY
The Rodney Brooks Rules for Predicting a Technology’s Success
Rodney Brooks | IEEE Spectrum
“Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?”

Image Source: spainter_vfx / Shutterstock.com

Posted in Human Robots