Tag Archives: sound

#434854 New Lifelike Biomaterial Self-Reproduces ...

Life demands flux.

Every living organism is constantly changing: cells divide and die, proteins build and disintegrate, DNA breaks and heals. Life demands metabolism—the simultaneous builder and destroyer of living materials—to continuously upgrade our bodies. That’s how we heal and grow, how we propagate and survive.

What if we could endow cold, static, lifeless robots with the gift of metabolism?

In a study published this month in Science Robotics, an international team developed a DNA-based method that gives raw biomaterials an artificial metabolism. Dubbed DASH—DNA-based assembly and synthesis of hierarchical materials—the method automatically generates “slime”-like nanobots that dynamically move and navigate their environments.

Like humans, the artificial lifelike material used external energy to constantly change the nanobots’ bodies in pre-programmed ways, recycling their DNA-based parts as both waste and raw material for further use. Some “grew” into the shape of molecular double helices; others “wrote” the DNA letters inside microchips.

The artificial life forms were also rather “competitive”—in quotes, because these molecular machines are not conscious. Yet when pitted against each other, two DASH bots automatically raced forward, crawling in typical slime-mold fashion at a scale easily seen under the microscope—and, in some iterations, with the naked eye.

“Fundamentally, we may be able to change how we create and use the materials with lifelike characteristics. Typically materials and objects we create in general are basically static… one day, we may be able to ‘grow’ objects like houses and maintain their forms and functions autonomously,” study author Dr. Shogo Hamada told Singularity Hub.

“This is a great study that combines the versatility of DNA nanotechnology with the dynamics of living materials,” said Dr. Job Boekhoven at the Technical University of Munich, who was not involved in the work.

Dissipative Assembly
The study builds on previous ideas on how to make molecular Lego blocks that essentially assemble—and destroy—themselves.

Although the inspiration came from biological metabolism, scientists have long hoped to cut their reliance on nature. At its core, metabolism is just a bunch of well-coordinated chemical reactions, programmed by eons of evolution. So why build artificial lifelike materials still tethered by evolution when we can use chemistry to engineer completely new forms of artificial life?

Back in 2015, for example, a team led by Boekhoven described a way to mimic how our cells build their internal “structural beams,” aptly called the cytoskeleton. The key here, unlike many processes in nature, isn’t balance or equilibrium; rather, the team engineered an extremely unstable system that automatically builds—and sustains—assemblies from molecular building blocks when given an external source of chemical energy.

Sound familiar? The team basically built molecular devices that “die” without “food.” Thanks to the laws of thermodynamics (hey ya, entropy!), that energy eventually dissipates, and the shapes automatically begin to break down, completing an artificial “circle of life.”

The new study took the system one step further: rather than just mimicking synthesis, they completed the circle by coupling the building process with dissipative assembly.

Here, the “assembling units themselves are also autonomously created from scratch,” said Hamada.

DNA Nanobots
The process of building DNA nanobots starts on a microfluidic chip.

Decades of work have allowed researchers to optimize DNA assembly outside the body. Aided by catalysts that “bind” individual molecules together, the team found they could easily alter the shape of the self-assembling DNA bots—which formed fiber-like shapes—by changing the structure of the microfluidic chambers.

Computer simulations played a role here too: through both digital simulations and observations under the microscope, the team was able to identify a few critical rules that helped them predict how their molecules self-assemble while navigating a maze of blocking “pillars” and channels carved onto the microchips.

This “enabled a general design strategy for the DASH patterns,” they said.

In particular, the whirling motion of the fluids as they coursed through—and bumped into—ridges in the chips seems to help the DNA molecules “entangle into networks,” the team explained.

These insights helped the team further develop the “destroying” part of metabolism. Similar to linking molecules into DNA chains, their destruction also relies on enzymes.

Once the team pumped both “generation” and “degeneration” enzymes into the microchips, along with raw building blocks, the process was completely autonomous. The simultaneous processes were so lifelike that the team used finite-state automata, a formalism commonly used in robotics, to measure the behavior of their DNA nanobots from growth to eventual decay.
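The growth-to-decay behavior described this way can be pictured as a small state machine. A minimal sketch in Python, where the states (“seed,” “growing,” “decaying,” “decayed”) and the two enzyme-driven events are hypothetical labels chosen for illustration, not the paper’s actual formalism:

```python
# Illustrative finite-state model of a DASH bot's lifecycle.
# States and transition events are hypothetical, chosen only to show how
# a finite-state description can track growth-to-decay behavior.

TRANSITIONS = {
    ("seed", "generation"): "growing",
    ("growing", "generation"): "growing",
    ("growing", "degeneration"): "decaying",
    ("decaying", "degeneration"): "decayed",
}

def run_lifecycle(events, state="seed"):
    """Advance through states; unknown (state, event) pairs leave the state unchanged."""
    history = [state]
    for event in events:
        state = TRANSITIONS.get((state, event), state)
        history.append(state)
    return history

# A bot that grows twice, then decays to completion.
print(run_lifecycle(["generation", "generation", "degeneration", "degeneration"]))
# -> ['seed', 'growing', 'growing', 'decaying', 'decayed']
```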

“The result is a synthetic structure with features associated with life. These behaviors include locomotion, self-regeneration, and spatiotemporal regulation,” said Boekhoven.

Molecular Slime Molds
Just witnessing lifelike molecules grow in place, like a molecular version of the “running man” dance move, wasn’t enough.

In their next experiments, the team took inspiration from slugs to program undulating movements into their DNA bots. Here, “movement” is actually a sort of illusion: the machines “moved” because their front ends kept regenerating while their back ends degenerated. In essence, the molecular slime was built by linking multiple individual “DNA robot-like” units together: each unit receives a delayed “decay” signal from the head of the slime, allowing the whole artificial “organism” to crawl forward against the stream of fluid flow.
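The regenerate-in-front, degenerate-in-back trick can be sketched numerically. In this toy model (the cell indexing and the fixed decay delay are assumptions for illustration), no piece of material ever moves, yet the occupied span drifts forward:

```python
# Minimal sketch of the "movement is an illusion" mechanism: a strip of
# material whose front keeps regenerating while the back degenerates after
# a delay. No cell ever moves, yet the strip's midpoint advances.

def crawl(front, back, steps, decay_delay=2):
    """Track (back, front) edge positions over growth/decay cycles."""
    positions = []
    for t in range(steps):
        front += 1                # "generation" extends the head
        if t >= decay_delay:
            back += 1             # delayed "degeneration" erodes the tail
        positions.append((back, front))
    return positions

track = crawl(front=0, back=0, steps=6)
midpoints = [(b + f) / 2 for b, f in track]
print(track)      # the occupied span shifts forward over time
print(midpoints)  # strictly increasing: apparent forward locomotion
```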

Here’s the fun part: the team eventually engineered two molecular slime bots and pitted them against each other, Mario Kart-style. In these experiments, the faster-moving bot alters the state of its competitor to promote “decay.” This slows down the competitor, allowing the dominant DNA nanoslug to win the race.

Of course, the end goal isn’t molecular podracing. Rather, the DNA-based bots could easily amplify a given DNA or RNA sequence, making them efficient nano-diagnosticians for viral and other infections.

The lifelike material can basically generate patterns that doctors can directly ‘see’ with their eyes, which makes DNA or RNA molecules from bacteria and viruses extremely easy to detect, the team said.

In the short run, “the detection device with this self-generating material could be applied to many places and help people on site, from farmers to clinics, by providing an easy and accurate way to detect pathogens,” explained Hamada.

A Futuristic Iron Man Nanosuit?
I’m letting my nerd flag fly here. In Avengers: Infinity War, the scientist-engineer-philanthropist-playboy Tony Stark unveiled a nanosuit that grew to his contours when needed and automatically healed when damaged.

DASH may one day realize that vision. For now, the team isn’t focused on using the technology for regenerating armor—rather, the dynamic materials could create new protein assemblies or chemical pathways inside living organisms, for example. The team also envisions adding simple sensing and computing mechanisms into the material, which can then easily be thought of as a robot.

Unlike synthetic biology, the goal isn’t to create artificial life. Rather, the team hopes to give lifelike properties to otherwise static materials.

“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before,” said lead author Dr. Dan Luo.

“Ultimately, our material may allow the construction of self-reproducing machines… artificial metabolism is an important step toward the creation of ‘artificial’ biological systems with dynamic, lifelike capabilities,” added Hamada. “It could open a new frontier in robotics.”

Image Credit: A timelapse image of DASH, by Jeff Tyson at Cornell University.

Posted in Human Robots

#434767 7 Non-Obvious Trends Shaping the Future

When you think of trends that might be shaping the future, the first things that come to mind probably have something to do with technology: Robots taking over jobs. Artificial intelligence advancing and proliferating. 5G making everything faster, connected cities making everything easier, data making everything more targeted.

Technology is undoubtedly changing the way we live, and will continue to do so—probably at an accelerating rate—in the near and far future. But there are other trends impacting the course of our lives and societies, too. They’re less obvious, and some have nothing to do with technology.

For the past nine years, entrepreneur and author Rohit Bhargava has read hundreds of articles across all types of publications, tagged and categorized them by topic, funneled frequent topics into broader trends, analyzed those trends, narrowed them down to the most significant ones, and published a book about them as part of his ‘Non-Obvious’ series. He defines a trend as “a unique curated observation of the accelerating present.”

In an encore session at South by Southwest last week (his initial talk couldn’t fit hundreds of people who wanted to attend, so a re-do was scheduled), Bhargava shared details of his creative process, why it’s hard to think non-obviously, the most important trends of this year, and how to make sure they don’t get the best of you.

Thinking Differently
“Non-obvious thinking is seeing the world in a way other people don’t see it,” Bhargava said. “The secret is curating your ideas.” Curation collects ideas and presents them in a meaningful way; museum curators, for example, decide which works of art to include in an exhibit and how to present them.

For his own curation process, Bhargava uses what he calls the haystack method. Rather than searching for a needle in a haystack, he gathers ‘hay’ (ideas and stories) then uses them to locate and define a ‘needle’ (a trend). “If you spend enough time gathering information, you can put the needle into the middle of the haystack,” he said.

A big part of gathering information is looking for it in places you wouldn’t normally think to look. In his case, that means that on top of reading what everyone else reads—the New York Times, the Washington Post, the Economist—he also buys publications like Modern Farmer, Teen Vogue, and Ink magazine. “It’s like stepping into someone else’s world who’s not like me,” he said. “That’s impossible to do online because everything is personalized.”

Three common barriers make non-obvious thinking hard.

The first is unquestioned assumptions, which are facts or habits we think will never change. When James Dyson first invented the bagless vacuum, he wanted to sell the license to it, but no one believed people would want to spend more money up front on a vacuum and then never have to buy bags again. The success of Dyson’s business today shows how mistaken that assumption—that people wouldn’t adapt to a product that, at the end of the day, was far more sensible—turned out to be. “Making the wrong basic assumptions can doom you,” Bhargava said.

The second barrier to thinking differently is constant disruption. “Everything is changing as industries blend together,” Bhargava said. “The speed of change makes everyone want everything, all the time, and people expect the impossible.” We’ve come to expect every alternative to be presented to us in every moment, but in many cases this doesn’t serve us well; we’re surrounded by noise and have trouble discerning what’s valuable and authentic.

This ties into the third barrier, which Bhargava calls the believability crisis. “Constant sensationalism makes people skeptical about everything,” he said. With the advent of fake news and technology like deepfakes, we’re in a post-truth, post-fact era, and are in a constant battle to discern what’s real from what’s not.

2019 Trends
Bhargava’s efforts to see past these barriers and curate information yielded 15 trends he believes are currently shaping the future. He shared seven of them, along with thoughts on how to stay ahead of the curve.

Retro Trust
We tend to trust things we have a history with. “People like nostalgic experiences,” Bhargava said. With tech moving as fast as it is, old things are quickly getting replaced by shinier, newer, often more complex things. But not everyone’s jumping on board—and some who’ve been on board are choosing to jump off in favor of what worked for them in the past.

“We’re turning back to vinyl records and film cameras, deliberately downgrading to phones that only text and call,” Bhargava said. In a period of too much change too fast, people are craving familiarity and dependability. To capitalize on that sentiment, entrepreneurs should seek out opportunities for collaboration—how can you build a product that’s new, but feels reliable and familiar?

Muddled Masculinity
Women have taken on more leadership roles, advanced in the workplace, now own more homes than men, and graduate from college at higher rates. That’s all great for us ladies—but not so great for men or, perhaps more generally, for the concept of masculinity.

“Female empowerment is causing confusion about what it means to be a man today,” Bhargava said. “Men don’t know what to do—should they say something? Would that make them an asshole? Should they keep quiet? Would that make them an asshole?”

By encouraging the non-conforming, we can help take some weight off the traditional gender roles, and their corresponding divisions and pressures.

Innovation Envy
Innovation has become an overused word, thrown onto ideas and actions that aren’t really innovative at all. “We innovate by looking at someone else and doing the same,” Bhargava said. If an employee brings a radical idea to someone in a leadership role, in many companies the leadership will ask for a case study before implementing it—but if it’s already been done, it’s not innovative. “With most innovation what ends up happening is not spectacular failure, but irrelevance,” Bhargava said.

He suggests that rather than being on the defensive, companies should play offense with innovation, and when it doesn’t work “fail as if no one’s watching” (often, no one will be).

Artificial Influence
Thanks to social media and other technologies, there are a growing number of fabricated things that, despite not being real, influence how we think. “15 percent of all Twitter accounts may be fake, and there are 60 million fake Facebook accounts,” Bhargava said. There are virtual influencers and even virtual performers.

“Don’t hide the artificial ingredients,” Bhargava advised. “Some people are going to pretend it’s all real. We have to be ethical.” The creators of fabrications meant to influence the way people think, or the products they buy, or the decisions they make, should make it crystal-clear that there aren’t living, breathing people behind the avatars.

Enterprise Empathy
Another reaction to the fast pace of change these days—and the fast pace of life, for that matter—is that empathy is regaining value and even becoming a driver of innovation. Companies are searching for ways to give people a sense of reassurance. The Tesco grocery brand in the UK has a “relaxed lane” for those who don’t want to feel rushed as they check out. Starbucks opened a “signing store” in Washington DC, and most of its regular customers have learned some sign language.

“Use empathy as a principle to help yourself stand out,” Bhargava said. Besides being a good business strategy, “made with empathy” will ideally promote, well, more empathy, a quality there’s often a shortage of.

Robot Renaissance
From automating factory jobs to flipping burgers to cleaning our floors, robots have firmly taken their place in our day-to-day lives—and they’re not going away anytime soon. “There are more situations with robots than ever before,” Bhargava said. “They’re exploring underwater. They’re concierges at hotels.”

The robot revolution feels intimidating. But Bhargava suggests embracing robots with more curiosity than concern. While they may replace some tasks we don’t want replaced, they’ll also be hugely helpful in multiple contexts, from elderly care to dangerous manual tasks.

Back Storytelling
Similar to retro trust and enterprise empathy, organizations have started to tell their brand’s story to gain customer loyalty. “Stories give us meaning, and meaning is what we need in order to be able to put the pieces together,” Bhargava said. “Stories give us a way of understanding the world.”

Finding the story behind your business, brand, or even yourself, and sharing it openly, can help you connect with people, be they customers, coworkers, or friends.

Tech’s Ripple Effects
While it may not overtly sound like it, most of the trends Bhargava identified for 2019 are tied to technology, and are in fact a sort of backlash against it. Tech has made us question who to trust, how to innovate, what’s real and what’s fake, how to make the best decisions, and even what it is that makes us human.

By being aware of these trends, sharing them, and having conversations about them, we’ll help shape the way tech continues to be built, and thus the way it impacts us down the road.

Image Credit: Rohit Bhargava by Brian Smale


#434685 How Tech Will Let You Learn Anything, ...

Today, over 77 percent of Americans own a smartphone with access to the world’s information and near-limitless learning resources.

Yet nearly 36 million adults in the US are constrained by low literacy skills, excluding them from professional opportunities, prospects of upward mobility, and full engagement with their children’s education.

And beyond its direct impact, low literacy rates affect us all. Improving literacy among adults is predicted to save $230 billion in national healthcare costs and could result in US labor productivity increases of up to 2.5 percent.

Across the board, exponential technologies are making demonetized learning tools, digital training platforms, and literacy solutions more accessible than ever before.

With rising automation and major paradigm shifts underway in the job market, these tools not only promise to make today’s workforce more versatile, but could play an invaluable role in breaking the poverty cycles often associated with low literacy.

Just three years ago, the Barbara Bush Foundation for Family Literacy and the Dollar General Literacy Foundation joined forces to tackle this intractable problem, launching a $7 million Adult Literacy XPRIZE.

Challenging teams to develop smartphone apps that significantly increase literacy skills among adult learners in just 12 months, the competition brought five prize teams to the fore, each targeting multiple demographics across the nation.

Now, after four years of research, prototyping, testing, and evaluation, XPRIZE has just this week announced two grand prize winners: Learning Upgrade and People ForWords.

In this blog, I’ll be exploring the nuts and bolts of our two winning teams and how exponential technologies are beginning to address rapidly shifting workforce demands.

We’ll discuss:

Meeting 100 percent adult literacy rates
Retooling today’s workforce for tomorrow’s job market
Granting the gift of lifelong learning

Let’s dive in.

Adult Literacy XPRIZE
Emphasizing the importance of accessible mediums and scalability, the Adult Literacy XPRIZE called for teams to create mobile solutions that lower the barrier to entry, encourage persistence, develop relevant learning content, and can scale nationally.

Outperforming the competition in aggregate across two key demographic groups—native English speakers and English language learners—teams Learning Upgrade and People ForWords together claimed the prize.

To win, both organizations successfully generated the greatest gains between a pre- and post-test, administered one year apart to learners in a 12-month field test across Los Angeles, Dallas, and Philadelphia.

Prize money in hand, Learning Upgrade and People ForWords are now scaling up their solutions, each targeting a key demographic in America’s pursuit of adult literacy.

Based in San Diego, Learning Upgrade has developed an Android and iOS app that helps students learn English and math through video, songs, and gamification. Offering a total of 21 courses from kindergarten through adult education, Learning Upgrade touts a growing platform of over 900 lessons spanning English, reading, math, and even GED prep.

To further personalize each student’s learning, Learning Upgrade measures time-on-task and builds out formative performance assessments, granting teachers a quantified, real-time view of each student’s progress across both lessons and criteria.

Specialized in English reading skills, Dallas-based People ForWords offers a similarly delocalized model with its mobile game “Codex: Lost Words of Atlantis.” Based on an archaeological adventure storyline, the app features an immersive virtual environment.

Set in the Atlantis Library (now with a 3D rendering underway), Codex takes its students through narrative-peppered lessons covering everything from letter-sound practice to vocabulary reinforcement in a hidden object game.

But while both mobile apps have recruited initial piloting populations, the key to success is scale.

Using a similar incentive prize competition structure to drive recruitment, the second phase of the XPRIZE is a $1 million Barbara Bush Foundation Adult Literacy XPRIZE Communities Competition. For 15 months, the competition will challenge organizations, communities, and individuals alike to onboard adult learners onto both prize-winning platforms and fellow finalist team apps, AmritaCREATE and Cell-Ed.

Each awarded $125,000 for participation in the Communities Competition, AmritaCREATE and Cell-Ed bring yet other nuanced advantages to the table.

While AmritaCREATE curates culturally appropriate e-content relevant to given life skills, Cell-Ed takes a learn-on-the-go approach, offering micro-lessons, on-demand essential skills training, and individualized coaching on any mobile device, no internet required.

Although all these cases target slightly different demographics and problem niches, they converge on common themes: mobility, efficiency, life-skill relevance, personalized learning, and practicality.

And what better to scale these benefits than AI and immersive virtual environments?

In the case of education’s growing mobility, 5G and the explosion of connectivity speeds will continue to drive a learn-anytime-anywhere education model, whereby adult users learn on the fly, untethered to web access or rigid time strictures.

As I’ve explored in a previous blog on AI-crowd collaboration, we might also see the rise of AI learning consultants responsible for processing data on how you learn.

Quantifying and analyzing your interaction with course modules, where you get stuck, where you thrive, and what tools cause you ease or frustration, each user’s AI trainer might then issue personalized recommendations based on crowd feedback.

Adding a human touch, each app’s hired teaching consultants would thereby be freed to track many more students’ progress at once, vetting AI-generated tips and adjustments, and offering life coaching along the way.

Lastly, virtual learning environments—and, one day, immersive VR—will facilitate both speed and retention, two of the most critical constraints as learners age.

As I often reference, we generally remember only 10 percent of what we see, 20 percent of what we hear, and 30 percent of what we read—but over a staggering 90 percent of what we do or experience.

By introducing gamification, immersive testing activities, and visually rich sensory environments, adult literacy platforms have a winning chance at scalability, retention, and user persistence.

Exponential Tools: Training and Retooling a Dynamic Workforce
Beyond literacy, however, virtual and augmented reality have already begun disrupting the professional training market.

As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

Then in September of last year, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could concentrate a typical six-year aircraft design process into the course of six months, turning physical mockups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

Dramatically reducing time and trouble required for VR-testing pilots, they aim to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these piloted devices contain a suite of actuators to simulate everything from a light touch to higher-pressured contact, all controlled by gaze and finger movements.

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgently, virtual reality will offer an immediate solution to today’s constant industry turnover and large-scale re-education demands.

VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to become an electric, autonomous vehicle mechanic at age 44? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to try their hand at a new industry.

Learn Anything, Anytime, at Any Age
As VR and artificial intelligence converge with demonetized mobile connectivity, we are finally witnessing an era in which no one will be left behind.

Whether in pursuit of fundamental life skills, professional training, linguistic competence, or specialized retooling, users of all ages, career paths, income brackets, and goals are now encouraged to be students, no longer condemned to stagnancy.

Traditional constraints need no longer prevent non-native speakers from gaining an equal foothold, or specialists from pivoting into new professions, or low-income parents from staking new career paths.

As exponential technologies drive democratized access, bolstering initiatives such as the Barbara Bush Foundation Adult Literacy XPRIZE are blazing the trail to make education a scalable priority for all.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Iulia Ghimisli / Shutterstock.com


#434643 Sensors and Machine Learning Are Giving ...

According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts—you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.

This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.

Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.

Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.

Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?
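For a rigid arm, the easy half of this dilemma can be made concrete: a handful of joint-angle sensors plus known link lengths recover the body map directly through forward kinematics. A sketch for a two-link planar arm, with made-up link lengths:

```python
# Forward kinematics for a rigid 2-link planar arm: two joint-angle sensor
# readings and the (assumed) link lengths fully determine the fingertip
# position. Soft robots have no such small, fixed set of joint variables.
import math

def fingertip(theta1, theta2, l1=1.0, l2=0.8):
    """Fingertip (x, y) from shoulder and elbow angles, in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Upper arm straight up, elbow bent 90 degrees back toward the +x axis.
x, y = fingertip(math.pi / 2, -math.pi / 2)
print(round(x, 3), round(y, 3))  # -> 0.8 1.0
```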

New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the constant adaptation of sensors in humans and animals, rather than relying on feedback from a limited number of positions.

The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly-distributed sensors: as the finger moves around, it’s observed by a motion capture system. After training the robot’s neural network, it can associate the feedback from the sensors with the position of the finger detected in the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes that its soft body can take, and translate them into the language of these soft sensors.
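The train-then-discard-the-mocap idea can be reduced to a toy example. Here a single synthetic sensor stands in for the randomly distributed array, and a one-weight least-squares fit stands in for the neural network; only the concept carries over, and all the numbers are made up:

```python
# Toy version of the training loop: pair readings from a soft sensor with
# motion-capture positions, fit a model, then predict position from the
# sensor alone. The real system uses a neural network over many randomly
# placed sensors; this one-sensor, one-dimensional fit only illustrates
# the idea that the mocap system can be discarded after training.
import random

random.seed(0)

# Synthetic data: "mocap" position depends (noisily) on the sensor signal.
sensor = [random.uniform(0.0, 1.0) for _ in range(200)]
position = [2.5 * s + random.gauss(0.0, 0.05) for s in sensor]

# Least-squares fit of position ~ w * sensor (closed form for one weight).
w = sum(s * p for s, p in zip(sensor, position)) / sum(s * s for s in sensor)

# With the model trained, position is predicted from the sensor alone.
print(round(w, 2))        # lands near the true coefficient, 2.5
predicted = w * 0.4       # estimated position for a new sensor reading
```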

“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”

The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.

In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.

Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.

Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.

They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then executes a series of a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in the first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.

Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it's ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-and-place and handwriting tasks. Rather than laboriously training for each individual task, which would limit its abilities to a narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
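As a rough illustration (not the authors' implementation), the toy below gives a two-link arm the same three-stage treatment: babble with random motions while recording the outcomes, fit a self-model (here a brute-force search over link lengths, standing in for the deep network), then reuse that model to plan a reach toward an arbitrary target with no task-specific training. The arm geometry, target, and search grids are all invented for illustration.

```python
import math
import random

random.seed(1)

# Hypothetical two-link planar arm; the true link lengths are hidden
# from the learner, which only sees (joint angles -> hand position).
TRUE_L = (1.0, 0.7)

def observe(a1, a2, L=TRUE_L):
    """Forward kinematics: where the hand ends up for given joint angles."""
    x = L[0] * math.cos(a1) + L[1] * math.cos(a1 + a2)
    y = L[0] * math.sin(a1) + L[1] * math.sin(a1 + a2)
    return x, y

# 1. Motor babbling: execute random motions, record what happened.
data = []
for _ in range(300):
    a1 = random.uniform(-math.pi, math.pi)
    a2 = random.uniform(-math.pi, math.pi)
    data.append(((a1, a2), observe(a1, a2)))

# 2. Self-modeling: find the link lengths that best explain the recordings.
def fit_error(L):
    err = 0.0
    for (a1, a2), (x, y) in data:
        px, py = observe(a1, a2, L)
        err += (px - x) ** 2 + (py - y) ** 2
    return err

candidates = [(i / 10, j / 10) for i in range(1, 21) for j in range(1, 21)]
model = min(candidates, key=fit_error)

# 3. Task-agnostic reuse: plan joint angles for a new target using only
#    the learned self-model -- no per-task training required.
target = (1.2, 0.5)

def plan_error(angles):
    px, py = observe(angles[0], angles[1], model)
    return (px - target[0]) ** 2 + (py - target[1]) ** 2

plan = min(((i / 10, j / 10) for i in range(-31, 32) for j in range(-31, 32)),
           key=plan_error)
```

Step 3 is the point of the paper: once the self-model exists, reaching a new target is just a search over the model, and a damaged arm can simply repeat steps 1 and 2 to rebuild it.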

Damage Control
In a further experiment, the researchers replaced part of the arm with a "deformed" component, intended to simulate what might happen if the robot were damaged. The robot was able to detect that something was wrong and "reconfigure" itself, reconstructing its self-model by going through the training exercises once again; it then performed the same tasks with only a small reduction in accuracy.

Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.

But before they can get out and shape the world, as these studies show, they will need to understand themselves.

Image Credit: jumbojan / Shutterstock.com

Posted in Human Robots

#434492 Black Mirror’s ‘Bandersnatch’ ...

When was the last time you watched a movie where you could control the plot?

Bandersnatch is the first interactive film in the sci-fi anthology series Black Mirror. Written by series creator Charlie Brooker and directed by David Slade, the film tells the story of young programmer Stefan Butler, who is adapting a fantasy choose-your-own-adventure novel called Bandersnatch into a video game. Throughout the film, viewers are given the power to influence Butler's decisions, leading to diverging plots with different endings.

Like many Black Mirror episodes, this film is mind-bending, dark, and thought-provoking. In addition to reinventing cinema as we know it, it is a fascinating rumination on free will, parallel realities, and emerging technologies.

Pick Your Own Adventure
With a non-linear script, Bandersnatch is a viewing experience like no other. Throughout the film, viewers are given the option of making a decision for the protagonist. In these instances, they have 10 seconds to choose before a default decision is made for them. For example, early in the plot, Butler is given the choice of accepting or rejecting Tuckersoft's offer to develop a video game, and the viewer gets to decide what he does. The decision then shapes the plot accordingly.
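Mechanically, this is a branching graph with a timeout fallback at each decision node. The sketch below is a hypothetical miniature (the node names and endings are invented, and the real film has many more branches): `None` stands for a viewer who lets the 10-second timer run out, so the node's default choice fires.

```python
# Hypothetical story graph: each decision node offers choices and names
# a default taken if the viewer lets the timer expire.
story = {
    "offer":  {"prompt": "Accept Tuckersoft's offer?",
               "choices": {"accept": "office", "refuse": "home"},
               "default": "accept"},
    "office": {"ending": "Butler develops the game at Tuckersoft."},
    "home":   {"ending": "Butler works on the game alone."},
}

def play(decisions):
    """Walk the graph; None means the timer ran out, so take the default."""
    node, log = "offer", []
    while "ending" not in story[node]:
        pick = decisions.pop(0) or story[node]["default"]
        log.append(pick)
        node = story[node]["choices"][pick]
    return story[node]["ending"], log

# A viewer who never chooses: every node's default applies.
ending, log = play([None])
```

Because every branch terminates in a node with an "ending" key, every possible sequence of choices (or non-choices) resolves to some conclusion, which is why fans could exhaustively map the film as a flowchart.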

The video game Butler is developing involves moving through a graphical maze of corridors while avoiding a creature called the Pax, and at times making choices through an on-screen instruction (sound familiar?). In other words, it’s a pick-your-own-adventure video game in a pick-your-own-adventure movie.

Many viewers have ended up spending hours exploring all the different branches of the narrative (though the average viewing is 90 minutes). One user on Reddit has mapped out an entire flowchart, showing how all the different decisions (and pseudo-decisions) lead to various endings.

However, over time, Butler starts to question his own free will. It's almost as if he's beginning to realize that the audience is controlling him. In one branch of the narrative, he is confronted by this reality when the audience indicates to him that he is being controlled through a Netflix show: "I am watching you on Netflix. I make all the decisions for you." Butler, as you can imagine, is horrified by this message.

But Butler isn’t the only one who has an illusion of choice. We, the seemingly powerful viewers, also appear to operate under the illusion of choice. Despite there being five main endings to the film, they are all more or less the same.

The Science Behind Bandersnatch
The premise of Bandersnatch isn't based on fantasy, but hard science. Free will has long been a widely debated issue in neuroscience, with reputable scientists and studies suggesting that the whole concept may be an illusion.

In the 1980s, a psychologist named Benjamin Libet conducted a series of experiments studying voluntary decision making in humans. He found that the brain activity initiating an action, such as moving your wrist, preceded the conscious awareness of the intention to act.

Psychologist Malcolm Gladwell theorizes that while we like to believe we spend a lot of time thinking about our decisions, our mental processes actually work rapidly, automatically, and often subconsciously, from relatively little information. In addition to this, thinking and making decisions are usually a byproduct of several different brain systems, such as the hippocampus, amygdala, and prefrontal cortex, working together. You are more conscious of some information processes in the brain than others.

As neuroscientist and philosopher Sam Harris points out in his book Free Will, “You did not pick your parents or the time and place of your birth. You didn’t choose your gender or most of your life experiences. You had no control whatsoever over your genome or the development of your brain. And now your brain is making choices on the basis of preferences and beliefs that have been hammered into it over a lifetime.” Like Butler, we may believe we are operating under full agency of our abilities, but we are at the mercy of many internal and external factors that influence our decisions.

Beyond free will, Bandersnatch also taps into the theory of parallel universes, a facet of the cosmological theory of the multiverse. On this view, there are parallel universes beyond our own in which every choice you make plays out differently. For instance, if today you had the option of having cereal or eggs for breakfast, and you chose eggs, in a parallel universe you chose cereal. Human history and our lives may have taken different paths in these parallel universes.

The Future of Cinema
In the future, the viewing experience will no longer be a passive one. Bandersnatch is just a glimpse into how technology is revolutionizing film as we know it and making it a more interactive and personalized experience. All the different scenarios and branches of the plot were scripted and filmed, but in the future, they may be generated or adapted in real time by artificial intelligence.

Virtual reality may allow us to play an even more active role by making us participants or characters in the film. Data from your viewing history and preferences may be used to create a unique version of the plot that is optimized for your viewing experience.

Let's also not underestimate the social purpose of advancing film and entertainment. Science fiction gives us the ability to create simulations of the future. Different narratives can allow us to explore how powerful technologies combined with human behavior can result in positive or negative scenarios. Perhaps in the future, science fiction will explore the implications of technologies and observe human decision making in novel contexts, via AI-powered films in the virtual world.

Image Credit: andrey_l / Shutterstock.com
