Tag Archives: energy

#436207 This Week’s Awesome Tech Stories From ...

COMPUTING
A Giant Superfast AI Chip Is Being Used to Find Better Cancer Drugs
Karen Hao | MIT Technology Review
“Thus far, Cerebras’s computer has checked all the boxes. Thanks to its chip size—it is larger than an iPad and has 1.2 trillion transistors for making calculations—it isn’t necessary to hook multiple smaller processors together, which can slow down model training. In testing, it has already shrunk the training time of models from weeks to hours.”

MEDICINE
Humans Put Into Suspended Animation for First Time
Ian Sample | The Guardian
“The process involves rapidly cooling the brain to less than 10C by replacing the patient’s blood with ice-cold saline solution. Typically the solution is pumped directly into the aorta, the main artery that carries blood away from the heart to the rest of the body.”

DRONES
This Transforming Drone Can Be Fired Straight Out of a Cannon
James Vincent | The Verge
“Drones are incredibly useful machines in the air, but getting them up and flying can be tricky, especially in crowded, windy, or emergency scenarios when speed is a factor. But a group of researchers from Caltech university and NASA’s Jet Propulsion Laboratory have come up with an elegant and oh-so-fun solution: fire the damn thing out of a cannon.”

ROBOTICS
Alphabet’s Dream of an ‘Everyday Robot’ Is Just Out of Reach
Tom Simonite | Wired
“Sorting trash was chosen as a convenient challenge to test the project’s approach to creating more capable robots. It’s using artificial intelligence software developed in collaboration with Google to make robots that learn complex tasks through on-the-job experience. The hope is to make robots less reliant on human coding for their skills, and capable of adapting quickly to complex new tasks and environments.”

ENVIRONMENT
The Electric Car Revolution May Take a Lot Longer Than Expected
James Temple | MIT Technology Review
“A new report from the MIT Energy Initiative warns that EVs may never reach the same sticker price so long as they rely on lithium-ion batteries, the energy storage technology that powers most of today’s consumer electronics. In fact, it’s likely to take another decade just to eliminate the difference in the lifetime costs between the vehicle categories, which factors in the higher fuel and maintenance expenses of standard cars and trucks.”

SPACE
How Two Intruders From Interstellar Space Are Upending Astronomy
Alexandra Witze | Nature
“From the tallest peak in Hawaii to a high plateau in the Andes, some of the biggest telescopes on Earth will point towards a faint smudge of light over the next few weeks. …What they’re looking for is a rare visitor that is about to make its closest approach to the Sun. After that, they have just months to grab as much information as they can from the object before it disappears forever into the blackness of space.”

Image Credit: Simone Hutsch / Unsplash

Posted in Human Robots

#436190 What Is the Uncanny Valley?

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seems a bit off or unsettling, though you can’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This void of strangeness is known as the uncanny valley.

Uncanny Valley: Definition and History
The uncanny valley is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. The term describes Mori’s observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.

Image: Masahiro Mori

The uncanny valley as depicted in Masahiro Mori’s original graph: As a robot’s human likeness [horizontal axis] increases, our affinity towards the robot [vertical axis] increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.
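To get a feel for the shape Mori sketched, here is a small, purely illustrative Python snippet that plots a hand-tuned stand-in for his qualitative curve; it is not a fit to any data, just a way to visualize a rise in affinity, a plunge in a narrow band of near-human likeness, and a recovery for a healthy human.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hand-tuned stand-in for Mori's qualitative curve (illustrative only):
# affinity climbs with human likeness, plunges in a narrow band of
# near-human likeness (the uncanny valley), then recovers.
likeness = np.linspace(0.0, 1.0, 500)   # 0 = clearly a machine, 1 = healthy human
affinity = 2.0 * likeness - 3.0 * np.exp(-((likeness - 0.85) ** 2) / 0.003)

plt.plot(likeness, affinity)
plt.axvspan(0.75, 0.95, alpha=0.15, label="uncanny valley (illustrative)")
plt.xlabel("human likeness")
plt.ylabel("affinity")
plt.legend()
plt.title("Illustrative uncanny valley curve (not Mori's data)")
plt.show()
```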

In his seminal essay for Japanese journal Energy, Mori wrote:

I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.

Later in the essay, Mori describes the uncanny valley by using an example—the first prosthetic hands:

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

In an interview with IEEE Spectrum, Mori explained how he came up with the idea for the uncanny valley:

“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”

Uncanny Valley Examples
To better illustrate how the uncanny valley works, here are some examples of the phenomenon. Prepare to be freaked out.

1. Telenoid

Photo: Hiroshi Ishiguro/Osaka University/ATR

Taking the top spot in the “creepiest” rankings of IEEE Spectrum’s Robots Guide, Telenoid is a robotic communication device designed by Japanese roboticist Hiroshi Ishiguro. Its bald head, lifeless face, and lack of limbs make it seem more alien than human.

2. Diego-san

Photo: Andrew Oh/Javier Movellan/Calit2

Engineers and roboticists at the University of California San Diego’s Machine Perception Lab developed this robot baby to help parents better communicate with their infants. At 1.2 meters (4 feet) tall and weighing 30 kilograms (66 pounds), Diego-san is a big baby—bigger than an average 1-year-old child.

“Even though the facial expression is sophisticated and intuitive in this infant robot, I still perceive a false smile when I’m expecting the baby to appear happy,” says Angela Tinwell, a senior lecturer at the University of Bolton in the U.K. and author of The Uncanny Valley in Games and Animation. “This, along with a lack of detail in the eyes and forehead, can make the baby appear vacant and creepy, so I would want to avoid those ‘dead eyes’ rather than interacting with Diego-san.”

3. Geminoid HI

Photo: Osaka University/ATR/Kokoro

Another one of Ishiguro’s creations, Geminoid HI is his android replica. He even took hair from his own scalp to put onto his robot twin. Ishiguro says he created Geminoid HI to better understand what it means to be human.

4. Sophia

Photo: Mikhail Tereshchenko/TASS/Getty Images

Designed by David Hanson of Hanson Robotics, Sophia is one of the most famous humanoid robots. Like Soul Machines’ AVA, Sophia displays a range of emotional expressions and is equipped with natural language processing capabilities.

5. Anthropomorphized felines

The uncanny valley doesn’t only happen with robots that adopt a human form. The 2019 live-action versions of the animated film The Lion King and the musical Cats brought the uncanny valley to the forefront of pop culture. To some fans, the photorealistic computer animations of talking lions and singing cats that mimic human movements were just creepy.

Are you feeling that eerie sensation yet?

Uncanny Valley: Science or Pseudoscience?
Despite our continued fascination with the uncanny valley, its validity as a scientific concept is highly debated. The uncanny valley wasn’t actually proposed as a scientific concept, yet has often been criticized in that light.

Mori himself said in his IEEE Spectrum interview that he didn’t explore the concept from a rigorous scientific perspective but as more of a guideline for robot designers:

Pointing out the existence of the uncanny valley was more of a piece of advice from me to people who design robots rather than a scientific statement.

Karl MacDorman, an associate professor of human-computer interaction at Indiana University who has long studied the uncanny valley, interprets the classic graph not as expressing Mori’s theory but as a heuristic for learning the concept and organizing observations.

“I believe his theory is instead expressed by his examples, which show that a mismatch in the human likeness of appearance and touch or appearance and motion can elicit a feeling of eeriness,” MacDorman says. “In my own experiments, I have consistently reproduced this effect within and across sense modalities. For example, a mismatch in the human realism of the features of a face heightens eeriness; a robot with a human voice or a human with a robotic voice is eerie.”

How to Avoid the Uncanny Valley
Unless you intend to create creepy characters or evoke a feeling of unease, you can follow certain design principles to avoid the uncanny valley. “The effect can be reduced by not creating robots or computer-animated characters that combine features on different sides of a boundary—for example, human and nonhuman, living and nonliving, or real and artificial,” MacDorman says.

To make a robot or avatar more realistic and move it beyond the valley, Tinwell says to ensure that a character’s facial expressions match its emotive tones of speech, and that its body movements are responsive and reflect its hypothetical emotional state. Special attention must also be paid to facial elements such as the forehead, eyes, and mouth, which depict the complexities of emotion and thought. “The mouth must be modeled and animated correctly so the character doesn’t appear aggressive or portray a ‘false smile’ when they should be genuinely happy,” she says.

For Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, the goal is not to avoid the uncanny valley, but to avoid bad character animations or behaviors, stressing the importance of matching the appearance of a robot with its ability. “We’re trained to spot even the slightest divergence from ‘normal’ human movements or behavior,” he says. “Hence, we often fail in creating highly realistic, humanlike characters.”

But he warns that the uncanny valley appears to be more of an uncanny cliff. “We find the likability to increase and then crash once robots become humanlike,” he says. “But we have never observed them ever coming out of the valley. You fall off and that’s it.”

Posted in Human Robots

#436186 Video Friday: Invasion of the Mini ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

There will be a Mini-Cheetah Workshop (sponsored by Naver Labs) a year from now at IROS 2020 in Las Vegas. Mini-Cheetahs for everyone!

That’s just a rendering, of course, but this isn’t:

[ MCW ]

I was like 95 percent sure that the Urban Circuit of the DARPA SubT Challenge was going to be in something very subway station-y. Oops!

In the Subterranean (SubT) Challenge, teams deploy autonomous ground and aerial systems to attempt to map, identify, and report artifacts along competition courses in underground environments. The artifacts represent items a first responder or service member may encounter in unknown underground sites. This video provides a preview of the Urban Circuit event location. The Urban Circuit is scheduled for February 18-27, 2020, at Satsop Business Park west of Olympia, Washington.

[ SubT ]

Researchers at SEAS and the Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can crash into walls, fall onto the floor, and collide with other RoboBees without being damaged. It is the first microrobot powered by soft actuators to achieve controlled flight.

To solve the problem of power density, the researchers built upon the electrically-driven soft actuators developed in the lab of David Clarke, the Extended Tarr Family Professor of Materials. These soft actuators are made using dielectric elastomers, soft materials with good insulating properties, that deform when an electric field is applied. By improving the electrode conductivity, the researchers were able to operate the actuator at 500 Hertz, on par with the rigid actuators used previously in similar robots.

Next, the researchers aim to increase the efficiency of the soft-powered robot, which still lags far behind more traditional flying robots.

[ Harvard ]

We present a system for fast and robust handovers with a robot character, together with a user study investigating the effect of robot speed and reaction time on perceived interaction quality. The system can match and exceed human speeds and confirms that users prefer human-level timing.

In a 3×3 user study, we vary the speed of the robot and add variable sensorimotor delays. We evaluate the social perception of the robot using the Robot Social Attribute Scale (RoSAS). Inclusion of a small delay, mimicking the delay of the human sensorimotor system, leads to an improvement in perceived qualities over both no delay and long delay conditions. Specifically, with no delay the robot is perceived as more discomforting and with a long delay, it is perceived as less warm.

[ Disney Research ]

When cars are autonomous, they’re not going to be able to pump themselves full of gas. Or, more likely, electrons. Kuka has the solution.

[ Kuka ]

This looks like fun, right?

[ Robocoaster ]

NASA is leading the way in the use of On-orbit Servicing, Assembly, and Manufacturing to enable large, persistent, upgradable, and maintainable spacecraft. This video was developed by the Advanced Concepts Lab (ACL) at NASA Langley Research Center.

[ NASA ]

The noisiest workshop at Humanoids last month (by far) was Musical Interactions With Humanoids, the end result of which was this:

[ Workshop ]

IROS is an IEEE event, and in furthering the IEEE mission to benefit humanity through technological innovation, IROS is doing a great job. But don’t take it from us – we are joined by IEEE President-Elect Professor Toshio Fukuda to find out a bit more about the impact events like IROS can have, as well as examine some of the issues around intelligent robotics and systems – from privacy to transparency of the systems at play.

[ IROS ]

Speaking of IROS, we hope you’ve been enjoying our coverage. We have already featured Harvard’s strange sea-urchin-inspired robot and a Japanese quadruped that can climb vertical ladders, with more stories to come over the next several weeks.

In the meantime, enjoy these 10 videos from the conference (as usual, we’re including the title, authors, and abstract for each—if you’d like more details about any of these projects, let us know and we’ll find out more for you).

“A Passive Closing, Tendon Driven, Adaptive Robot Hand for Ultra-Fast, Aerial Grasping and Perching,” by Andrew McLaren, Zak Fitzgerald, Geng Gao, and Minas Liarokapis from the University of Auckland, New Zealand.

Current grasping methods for aerial vehicles are slow, inaccurate and they cannot adapt to any target object. Thus, they do not allow for on-the-fly, ultra-fast grasping. In this paper, we present a passive closing, adaptive robot hand design that offers ultra-fast, aerial grasping for a wide range of everyday objects. We investigate alternative uses of structural compliance for the development of simple, adaptive robot grippers and hands and we propose an appropriate quick release mechanism that facilitates an instantaneous grasping execution. The quick release mechanism is triggered by a simple distance sensor. The proposed hand utilizes only two actuators to control multiple degrees of freedom over three fingers and it retains the superior grasping capabilities of adaptive grasping mechanisms, even under significant object pose or other environmental uncertainties. The hand achieves a grasping time of 96 ms, a maximum grasping force of 56 N and it is able to secure objects of various shapes at high speeds. The proposed hand can serve as the end-effector of grasping capable Unmanned Aerial Vehicle (UAV) platforms and it can offer perching capabilities, facilitating autonomous docking.

“Unstructured Terrain Navigation and Topographic Mapping With a Low-Cost Mobile Cuboid Robot,” by Andrew S. Morgan, Robert L. Baines, Hayley McClintock, and Brian Scassellati from Yale University, USA.

Current robotic terrain mapping techniques require expensive sensor suites to construct an environmental representation. In this work, we present a cube-shaped robot that can roll through unstructured terrain and construct a detailed topographic map of the surface that it traverses in real time with low computational and monetary expense. Our approach devolves many of the complexities of locomotion and mapping to passive mechanical features. Namely, rolling movement is achieved by sequentially inflating latex bladders that are located on four sides of the robot to destabilize and tip it. Sensing is achieved via arrays of fine plastic pins that passively conform to the geometry of underlying terrain, retracting into the cube. We developed a topography by shade algorithm to process images of the displaced pins to reconstruct terrain contours and elevation. We experimentally validated the efficacy of the proposed robot through object mapping and terrain locomotion tasks.
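The abstract does not spell out the “topography by shade” algorithm, so the following is only a rough sketch of the general idea, under the assumption that a pin pushed farther into the cube photographs at a different intensity, so per-pin image brightness can be rescaled into relative elevation; the array values and pin length below are made up.

```python
import numpy as np

def elevation_from_pin_image(intensity, pin_length_mm=20.0):
    """Toy reconstruction: map per-pin image intensity to relative elevation.

    Assumes `intensity` is a 2D array with one value per pin, already
    segmented from the camera image, and that intensity varies
    monotonically with how far each pin has been pushed into the cube.
    This is an illustrative stand-in, not the authors' algorithm.
    """
    lo, hi = intensity.min(), intensity.max()
    normalized = (intensity - lo) / (hi - lo + 1e-9)   # 0..1 displacement proxy
    return normalized * pin_length_mm                  # relative elevation in mm

# Example: a 4x4 grid of pin intensities (hypothetical values)
pins = np.array([[30,  40,  60,  90],
                 [35,  55,  80, 120],
                 [50,  75, 110, 160],
                 [70, 100, 150, 200]], dtype=float)
print(elevation_from_pin_image(pins).round(1))
```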

“Toward a Ballbot for Physically Leading People: A Human-Centered Approach,” by Zhongyu Li and Ralph Hollis from Carnegie Mellon University, USA.

This work presents a new human-centered method for indoor service robots to provide people with physical assistance and active guidance while traveling through congested and narrow spaces. As most previous work is robot-centered, this paper develops an end-to-end framework which includes a feedback path of the measured human positions. The framework combines a planning algorithm and a human-robot interaction module to guide the led person to a specified planned position. The approach is deployed on a person-size dynamically stable mobile robot, the CMU ballbot. Trials were conducted where the ballbot physically led a blindfolded person to safely navigate in a cluttered environment.
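The abstract only names the feedback path of measured human positions; as a minimal, hypothetical illustration of that idea (not the CMU ballbot controller), the snippet below is a proportional rule that slows the robot when the led person falls farther behind than a desired gap.

```python
def guidance_speed(robot_pos, human_pos, nominal_speed=0.8,
                   desired_gap=0.6, gain=1.5):
    """Toy 1-D guidance rule (not the CMU ballbot controller).

    robot_pos, human_pos: positions along the planned path [m].
    Returns a commanded forward speed [m/s] that shrinks when the
    person lags farther behind the robot than `desired_gap`.
    """
    gap = robot_pos - human_pos                 # how far the person trails
    error = gap - desired_gap                   # positive => person lagging
    speed = nominal_speed - gain * error        # slow down if lagging
    return max(0.0, min(speed, nominal_speed))  # clamp to [0, nominal]

# Person keeping pace vs. falling behind (hypothetical numbers)
print(guidance_speed(robot_pos=5.0, human_pos=4.4))  # gap at target -> full speed
print(guidance_speed(robot_pos=5.0, human_pos=3.9))  # lagging -> slower
```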

“Achievement of Online Agile Manipulation Task for Aerial Transformable Multilink Robot,” by Fan Shi, Moju Zhao, Tomoki Anzai, Keita Ito, Xiangyu Chen, Kei Okada, and Masayuki Inaba from the University of Tokyo, Japan.

Transformable aerial robots are favorable in aerial manipulation tasks for their flexible ability to change configuration during the flight. By assuming robot keeping in the mild motion, the previous researches sacrifice aerial agility to simplify the complex non-linear system into a single rigid body with a linear controller. In this paper, we present a framework towards agile swing motion for the transformable multi-links aerial robot. We introduce a computational-efficient non-linear model predictive controller and joints motion primitive framework to achieve agile transforming motions and validate with a novel robot named HYRURS-X. Finally, we implement our framework under a table tennis task to validate the online and agile performance.

“Small-Scale Compliant Dual Arm With Tail for Winged Aerial Robots,” by Alejandro Suarez, Manuel Perez, Guillermo Heredia, and Anibal Ollero from the University of Seville, Spain.

Winged aerial robots represent an evolution of aerial manipulation robots, replacing the multirotor vehicles by fixed or flapping wing platforms. The development of this morphology is motivated in terms of efficiency, endurance and safety in some inspection operations where multirotor platforms may not be suitable. This paper presents a first prototype of compliant dual arm as preliminary step towards the realization of a winged aerial robot capable of perching and manipulating with the wings folded. The dual arm provides 6 DOF (degrees of freedom) for end effector positioning in a human-like kinematic configuration, with a reach of 25 cm (half-scale w.r.t. the human arm), and 0.2 kg weight. The prototype is built with micro metal gear motors, measuring the joint angles and the deflection with small potentiometers. The paper covers the design, electronics, modeling and control of the arms. Experimental results in test-bench validate the developed prototype and its functionalities, including joint position and torque control, bimanual grasping, the dynamic equilibrium with the tail, and the generation of 3D maps with laser sensors attached at the arms.

“A Novel Small-Scale Turtle-inspired Amphibious Spherical Robot,” by Huiming Xing, Shuxiang Guo, Liwei Shi, Xihuan Hou, Yu Liu, Huikang Liu, Yao Hu, Debin Xia, and Zan Li from Beijing Institute of Technology, China.

This paper describes a novel small-scale turtle-inspired Amphibious Spherical Robot (ASRobot) to accomplish exploration tasks in the restricted environment, such as amphibious areas and narrow underwater cave. A Legged, Multi-Vectored Water-Jet Composite Propulsion Mechanism (LMVWCPM) is designed with four legs, one of which contains three connecting rod parts, one water-jet thruster and three joints driven by digital servos. Using this mechanism, the robot is able to walk like amphibious turtles on various terrains and swim flexibly in submarine environment. A simplified kinematic model is established to analyze crawling gaits. With simulation of the crawling gait, the driving torques of different joints contributed to the choice of servos and the size of links of legs. Then we also modeled the robot in water and proposed several underwater locomotion. In order to assess the performance of the proposed robot, a series of experiments were carried out in the lab pool and on flat ground using the prototype robot. Experiments results verified the effectiveness of LMVWCPM and the amphibious control approaches.

“Advanced Autonomy on a Low-Cost Educational Drone Platform,” by Luke Eller, Theo Guerin, Baichuan Huang, Garrett Warren, Sophie Yang, Josh Roy, and Stefanie Tellex from Brown University, USA.

PiDrone is a quadrotor platform created to accompany an introductory robotics course. Students build an autonomous flying robot from scratch and learn to program it through assignments and projects. Existing educational robots do not have significant autonomous capabilities, such as high-level planning and mapping. We present a hardware and software framework for an autonomous aerial robot, in which all software for autonomy can run onboard the drone, implemented in Python. We present an Unscented Kalman Filter (UKF) for accurate state estimation. Next, we present an implementation of Monte Carlo (MC) Localization and Fast-SLAM for Simultaneous Localization and Mapping (SLAM). The performance of UKF, localization, and SLAM is tested and compared to ground truth, provided by a motion-capture system. Our evaluation demonstrates that our autonomous educational framework runs quickly and accurately on a Raspberry Pi in Python, making it ideal for use in educational settings.
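Since the PiDrone stack centers on a UKF, a textbook sketch of its core ingredient, the unscented transform, may help readers unfamiliar with the filter: sigma points carry a mean and covariance through a nonlinear function. This is generic illustrative code, not the PiDrone implementation; the two-dimensional state and the example motion model are assumptions.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through nonlinear f using sigma points.

    Core building block of the UKF predict/update steps; this is a
    textbook sketch, not the PiDrone implementation.
    """
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root

    # 2n + 1 sigma points
    sigmas = [mean] + [mean + S[:, i] for i in range(n)] \
                    + [mean - S[:, i] for i in range(n)]

    # weights for recombining the transformed sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diffs = ys - y_mean
    y_cov = (wc[:, None] * diffs).T @ diffs
    return y_mean, y_cov

# Example: push a 2-D state through a mildly nonlinear motion model
f = lambda x: np.array([x[0] + 0.1 * np.cos(x[1]), x[1] + 0.05])
mu, P = unscented_transform(np.array([0.0, 0.3]), np.eye(2) * 0.01, f)
print(mu, P, sep="\n")
```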

“FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality,” by Winter Guerra, Ezra Tal, Varun Murali, Gilhyun Ryou and Sertac Karaman from the Massachusetts Institute of Technology, USA.

FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in flight in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight. While a vehicle is in flight in the FlightGoggles virtual reality environment, exteroceptive sensors are rendered synthetically in real time while all complex dynamics are generated organically through natural interactions of the vehicle. The FlightGoggles framework allows for researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. FlightGoggles is distributed as open-source software along with the photorealistic graphics assets for several simulation environments, under the MIT license at http://flightgoggles.mit.edu.

“An Autonomous Quadrotor System for Robust High-Speed Flight Through Cluttered Environments Without GPS,” by Marc Rigter, Benjamin Morrell, Robert G. Reid, Gene B. Merewether, Theodore Tzanetos, Vinay Rajur, KC Wong, and Larry H. Matthies from University of Sydney, Australia; NASA Jet Propulsion Laboratory, California Institute of Technology, USA; and Georgia Institute of Technology, USA.

Robust autonomous flight without GPS is key to many emerging drone applications, such as delivery, search and rescue, and warehouse inspection. These and other applications require accurate trajectory tracking through cluttered static environments, where GPS can be unreliable, while high-speed, agile flight can increase efficiency. We describe the hardware and software of a quadrotor system that meets these requirements with onboard processing: a custom 300 mm wide quadrotor that uses two wide-field-of-view cameras for visual-inertial motion tracking and relocalization to a prior map. Collision-free trajectories are planned offline and tracked online with a custom tracking controller. This controller includes compensation for drag and variability in propeller performance, enabling accurate trajectory tracking, even at high speeds where aerodynamic effects are significant. We describe a system identification approach that identifies quadrotor-specific parameters via maximum likelihood estimation from flight data. Results from flight experiments are presented, which 1) validate the system identification method, 2) show that our controller with aerodynamic compensation reduces tracking error by more than 50% in both horizontal flights at up to 8.5 m/s and vertical flights at up to 3.1 m/s compared to the state-of-the-art, and 3) demonstrate our system tracking complex, aggressive, trajectories.
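The authors identify quadrotor parameters by maximum likelihood from flight data; as a toy version of that idea (not their model or code), the snippet below fits a single linear drag coefficient from synthetic velocity and acceleration samples. With i.i.d. Gaussian noise the maximum-likelihood estimate reduces to ordinary least squares; all numbers are made up.

```python
import numpy as np

# Toy model: measured "drag acceleration" a_drag = -k * v + noise.
# Under i.i.d. Gaussian noise the maximum-likelihood estimate of k
# is the ordinary least-squares fit. Synthetic data, illustrative only.
rng = np.random.default_rng(0)
k_true = 0.35                                   # 1/s, made-up drag coefficient
v = rng.uniform(0.0, 8.5, size=200)             # airspeed samples [m/s]
a_drag = -k_true * v + rng.normal(0.0, 0.05, v.size)

# Least squares: minimize ||a_drag + k * v||^2  =>  k = -(v . a) / (v . v)
k_hat = -(v @ a_drag) / (v @ v)
print(f"estimated k = {k_hat:.3f} (true {k_true})")
```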

“Morphing Structure for Changing Hydrodynamic Characteristics of a Soft Underwater Walking Robot,” by Michael Ishida, Dylan Drotman, Benjamin Shih, Mark Hermes, Mitul Luhar, and Michael T. Tolley from the University of California, San Diego (UCSD) and University of Southern California, USA.

Existing platforms for underwater exploration and inspection are often limited to traversing open water and must expend large amounts of energy to maintain a position in flow for long periods of time. Many benthic animals overcome these limitations using legged locomotion and have different hydrodynamic profiles dictated by different body morphologies. This work presents an underwater legged robot with soft legs and a soft inflatable morphing body that can change shape to influence its hydrodynamic characteristics. Flow over the morphing body separates behind the trailing edge of the inflated shape, so whether the protrusion is at the front, center, or back of the robot influences the amount of drag and lift. When the legged robot (2.87 N underwater weight) needs to remain stationary in flow, an asymmetrically inflated body resists sliding by reducing lift on the body by 40% (from 0.52 N to 0.31 N) at the highest flow rate tested while only increasing drag by 5.5% (from 1.75 N to 1.85 N). When the legged robot needs to walk with flow, a large inflated body is pushed along by the flow, causing the robot to walk 16% faster than it would with an uninflated body. The body shape significantly affects the ability of the robot to walk against flow as it is able to walk against 0.09 m/s flow with the uninflated body, but is pushed backwards with a large inflated body. We demonstrate that the robot can detect changes in flow velocity with a commercial force sensor and respond by morphing into a hydrodynamically preferable shape.

Posted in Human Robots

#436126 Quantum Computing Gets a Boost From AI ...

Illustration: Greg Mably

Anyone of a certain age who has even a passing interest in computers will remember the remarkable breakthrough that IBM made in 1997 when its Deep Blue chess-playing computer defeated Garry Kasparov, then the world chess champion. Computer scientists passed another such milestone in March 2016, when DeepMind (a subsidiary of Alphabet, Google’s parent company) announced that its AlphaGo program had defeated world-champion player Lee Sedol in the game of Go, a board game that had vexed AI researchers for decades. Recently, DeepMind’s algorithms have also bested human players in the computer games StarCraft II and Quake III Arena.

Some believe that the cognitive capacities of machines will overtake those of human beings in many spheres within a few decades. Others are more cautious and point out that our inability to understand the source of our own cognitive powers presents a daunting hurdle. How can we make thinking machines if we don’t fully understand our own thought processes?

Citizen science, which enlists masses of people to tackle research problems, holds promise here, in no small part because it can be used effectively to explore the boundary between human and artificial intelligence.

Some citizen-science projects ask the public to collect data from their surroundings (as eButterfly does for butterflies) or to monitor delicate ecosystems (as Eye on the Reef does for Australia’s Great Barrier Reef). Other projects rely on online platforms on which people help to categorize obscure phenomena in the night sky (Zooniverse) or add to the understanding of the structure of proteins (Foldit). Typically, people can contribute to such projects without any prior knowledge of the subject. Their fundamental cognitive skills, like the ability to quickly recognize patterns, are sufficient.

In order to design and develop video games that can allow citizen scientists to tackle scientific problems in a variety of fields, professor and group leader Jacob Sherson founded ScienceAtHome (SAH), at Aarhus University, in Denmark. The group began by considering topics in quantum physics, but today SAH hosts games covering other areas of physics, math, psychology, cognitive science, and behavioral economics. We at SAH search for innovative solutions to real research challenges while providing insight into how people think, both alone and when working in groups.

We believe that the design of new AI algorithms would benefit greatly from a better understanding of how people solve problems. This surmise has led us to establish the Center for Hybrid Intelligence within SAH, which tries to combine human and artificial intelligence, taking advantage of the particular strengths of each. The center’s focus is on the gamification of scientific research problems and the development of interfaces that allow people to understand and work together with AI.

Our first game, Quantum Moves, was inspired by our group’s research into quantum computers. Such computers can in principle solve certain problems that would take a classical computer billions of years. Quantum computers could challenge current cryptographic protocols, aid in the design of new materials, and give insight into natural processes that require an exact solution of the equations of quantum mechanics—something normal computers are inherently bad at doing.

One candidate system for building such a computer would capture individual atoms by “freezing” them, as it were, in the interference pattern produced when a laser beam is reflected back on itself. The captured atoms can thus be organized like eggs in a carton, forming a periodic crystal of atoms and light. Using these atoms to perform quantum calculations requires that we use tightly focused laser beams, called optical tweezers, to transport the atoms from site to site in the light crystal. This is a tricky business because individual atoms do not behave like particles; instead, they resemble a wavelike liquid governed by the laws of quantum mechanics.

In Quantum Moves, a player manipulates a touch screen or mouse to move a simulated laser tweezer and pick up a trapped atom, represented by a liquidlike substance in a bowl. Then the player must bring the atom back to the tweezer’s initial position while trying to minimize the sloshing of the liquid. Such sloshing would increase the energy of the atom and ultimately introduce errors into the operations of the quantum computer. Therefore, at the end of a move, the liquid should be at a complete standstill.

To understand how people and computers might approach such a task differently, you need to know something about how computerized optimization algorithms work. The countless ways of moving a glass of water without spilling may be regarded as constituting a “solution landscape.” One solution is represented by a single point in that landscape, and the height of that point represents the quality of the solution—how smoothly and quickly the glass of water was moved. This landscape might resemble a mountain range, where the top of each mountain represents a local optimum and where the challenge is to find the highest peak in the range—the global optimum.

Illustration: Greg Mably

Researchers must compromise between searching the landscape for taller mountains (“exploration”) and climbing to the top of the nearest mountain (“exploitation”). Making such a trade-off may seem easy when exploring an actual physical landscape: Merely hike around a bit to get at least the general lay of the land before surveying in greater detail what seems to be the tallest peak. But because each possible way of changing the solution defines a new dimension, a realistic problem can have thousands of dimensions. It is computationally intractable to completely map out such a higher-dimensional landscape. We call this the curse of high dimensionality, and it plagues many optimization problems.

Although algorithms are wonderfully efficient at crawling to the top of a given mountain, finding good ways of searching through the broader landscape poses quite a challenge, one that is at the forefront of AI research into such control problems. The conventional approach is to come up with clever ways of reducing the search space, either through insights generated by researchers or with machine-learning algorithms trained on large data sets.
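To make the exploration-versus-exploitation trade-off concrete, here is a small self-contained sketch (not SAH's code) on a toy one-dimensional landscape: pure hill climbing from a single start settles on the nearest peak, while the same climber launched from many random starts keeps the best peak it finds.

```python
import numpy as np

rng = np.random.default_rng(1)

def landscape(x):
    """Toy 1-D 'mountain range': several local peaks of different heights."""
    return np.sin(x) + 0.6 * np.sin(3 * x) + 0.1 * x

def hill_climb(x, step=0.05, iters=500):
    """Exploitation: greedily climb toward the nearest peak."""
    for _ in range(iters):
        for cand in (x + step, x - step):
            if landscape(cand) > landscape(x):
                x = cand
    return x

# Pure exploitation from one arbitrary start: finds only the local peak.
x_local = hill_climb(0.0)

# Exploration + exploitation: many random starts, keep the best peak found.
starts = rng.uniform(0.0, 10.0, size=20)
x_best = max((hill_climb(s) for s in starts), key=landscape)

print(f"single start : x={x_local:.2f}, height={landscape(x_local):.2f}")
print(f"20 restarts  : x={x_best:.2f}, height={landscape(x_best):.2f}")
```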

At SAH, we attacked certain quantum-optimization problems by turning them into a game. Our goal was not to show that people can beat computers in this arena but rather to understand the process of generating insights into such problems. We addressed two core questions: whether allowing players to explore the infinite space of possibilities will help them find good solutions and whether we can learn something by studying their behavior.

Today, more than 250,000 people have played Quantum Moves, and to our surprise, they did in fact search the space of possible moves differently from the algorithm we had put to the task. Specifically, we found that although players could not solve the optimization problem on their own, they were good at searching the broad landscape. The computer algorithms could then take those rough ideas and refine them.

Perhaps even more interesting was our discovery that players had two distinct ways of solving the problem, each with a clear physical interpretation. One set of players started by placing the tweezer close to the atom while keeping a barrier between the atom trap and the tweezer. In classical physics, a barrier is an impenetrable obstacle, but because the atom liquid is a quantum-mechanical object, it can tunnel through the barrier into the tweezer, after which the player simply moved the tweezer to the target area. Another set of players moved the tweezer directly into the atom trap, picked up the atom liquid, and brought it back. We called these two strategies the “tunneling” and “shoveling” strategies, respectively.

Such clear strategies are extremely valuable because they are very difficult to obtain directly from an optimization algorithm. Involving humans in the optimization loop can thus help us gain insight into the underlying physical phenomena that are at play, knowledge that may then be transferred to other types of problems.

Quantum Moves raised several obvious issues. First, because generating an exceptional solution required further computer-based optimization, players were unable to get immediate feedback to help them improve their scores, and this often left them feeling frustrated. Second, we had tested this approach on only one scientific challenge with a clear classical analogue, that of the sloshing liquid. We wanted to know whether such gamification could be applied more generally, to a variety of scientific challenges that do not offer such immediately applicable visual analogies.

We address these two concerns in Quantum Moves 2. Here, the player first generates a number of candidate solutions by playing the original game. Then the player chooses which solutions to optimize using a built-in algorithm. As the algorithm improves a player’s solution, it modifies the solution path—the movement of the tweezer—to represent the optimized solution. Guided by this feedback, players can then improve their strategy, come up with a new solution, and iteratively feed it back into this process. This gameplay provides high-level heuristics and adds human intuition to the algorithm. The person and the machine work in tandem—a step toward true hybrid intelligence.
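As a conceptual sketch of that loop (not the actual Quantum Moves 2 engine or its physics), the snippet below treats a player-drawn tweezer path as a seed, polishes it with a simple keep-if-better random-perturbation optimizer against a stand-in "sloshing" cost, and returns the refined path that a player would then edit and resubmit.

```python
import numpy as np

rng = np.random.default_rng(2)

def sloshing_cost(path):
    """Stand-in cost: penalize jerky motion and missing the target position 1.0.

    A real Quantum Moves score comes from simulating the underlying quantum
    dynamics; this proxy only illustrates the feedback loop.
    """
    smoothness = np.sum(np.diff(path, n=2) ** 2)
    return smoothness + (path[-1] - 1.0) ** 2

def refine(path, rounds=2000, scale=0.02):
    """Local polish: keep random perturbations that lower the cost."""
    best, best_cost = path.copy(), sloshing_cost(path)
    for _ in range(rounds):
        trial = best + rng.normal(0.0, scale, size=best.shape)
        trial[0], trial[-1] = path[0], path[-1]        # keep endpoints fixed
        c = sloshing_cost(trial)
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost

# "Player" seed: a rough, kinked path from 0 to the target position 1.0
seed = np.concatenate([np.linspace(0, 0.9, 10), np.linspace(0.9, 1.0, 10)])
polished, cost = refine(seed)
print(f"seed cost {sloshing_cost(seed):.4f} -> polished cost {cost:.4f}")
# The polished path would now be shown to the player, who edits it and
# resubmits it, closing the human/machine iteration described above.
```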

In parallel with the development of Quantum Moves 2, we also studied how people collaboratively solve complex problems. To that end, we opened our atomic physics laboratory to the general public—virtually. We let people from around the world dictate the experiments we would run to see if they would find ways to improve the results we were getting. What results? That’s a little tricky to explain, so we need to pause for a moment and provide a little background on the relevant physics.

One of the essential steps in building the quantum computer along the lines described above is to create the coldest state of matter in the universe, known as a Bose-Einstein condensate. Here millions of atoms oscillate in synchrony to form a wavelike substance, one of the largest purely quantum phenomena known. To create this ultracool state of matter, researchers typically use a combination of laser light and magnetic fields. There is no familiar physical analogy between such a strange state of matter and the phenomena of everyday life.

The result we were seeking in our lab was to create as much of this enigmatic substance as was possible given the equipment available. The sequence of steps to accomplish that was unknown. We hoped that gamification could help to solve this problem, even though it had no classical analogy to present to game players.

Images: ScienceAtHome

Fun and Games: The Quantum Moves game evolved over time, from a relatively crude early version [top] to its current form [second from top] and then a major revision, Quantum Moves 2 [third from top].
Skill Lab: Science Detective games [bottom] test players’ cognitive skills.

In October 2016, we released a game that, for two weeks, guided how we created Bose-Einstein condensates in our laboratory. By manipulating simple curves in the game interface, players generated experimental sequences for us to use in producing these condensates—and they did so without needing to know anything about the underlying physics. A player would generate such a solution, and a few minutes later we would run the sequence in our laboratory. The number of ultracold atoms in the resulting Bose-Einstein condensate was measured and fed back to the player as a score. Players could then decide either to try to improve their previous solution or to copy and modify other players’ solutions. About 600 people from all over the world participated, submitting 7,577 solutions in total. Many of them yielded bigger condensates than we had previously produced in the lab.

So this exercise succeeded in achieving our primary goal, but it also allowed us to learn something about human behavior. We learned, for example, that players behave differently based on where they sit on the leaderboard. High-performing players make small changes to their successful solutions (exploitation), while poorly performing players are willing to make more dramatic changes (exploration). As a collective, the players nicely balance exploration and exploitation. How they do so provides valuable inspiration to researchers trying to understand human problem solving in social science as well as to those designing new AI algorithms.

How could mere amateurs outperform experienced experimental physicists? The players certainly weren’t better at physics than the experts—but they could do better because of the way in which the problem was posed. By turning the research challenge into a game, we gave players the chance to explore solutions that had previously required complex programming to study. Indeed, even expert experimentalists improved their solutions dramatically by using this interface.

Insight into why that’s possible can probably be found in the words of the late economics Nobel laureate Herbert A. Simon: “Solving a problem simply means representing it so as to make the solution transparent.” Apparently, that’s what our games can do with their novel user interfaces. We believe that such interfaces might be a key to using human creativity to solve other complex research problems.

Eventually, we’d like to get a better understanding of why this kind of gamification works as well as it does. A first step would be to collect more data on what the players do while they are playing. But even with massive amounts of data, detecting the subtle patterns underlying human intuition is an overwhelming challenge. To advance, we need a deeper insight into the cognition of the individual players.

As a step toward this goal, ScienceAtHome created Skill Lab: Science Detective, a suite of minigames exploring visuospatial reasoning, response inhibition, reaction times, and other basic cognitive skills. Then we compare players’ performance in the games with how well these same people did on established psychological tests of those abilities. The point is to allow players to assess their own cognitive strengths and weaknesses while donating their data for further public research.

In the fall of 2018 we launched a prototype of this large-scale profiling in collaboration with the Danish Broadcasting Corp. Since then more than 20,000 people have participated, and in part because of the publicity granted by the public-service channel, participation has been very evenly distributed across ages and by gender. Such broad appeal is rare in social science, where the test population is typically drawn from a very narrow demographic, such as college students.

Never before has such a large academic experiment in human cognition been conducted. We expect to gain new insights into many things, among them how combinations of cognitive abilities sharpen or decline with age, what characteristics may be used to prescreen for mental illnesses, and how to optimize the building of teams in our work lives.

And so what started as a fun exercise in the weird world of quantum mechanics has now become an exercise in understanding the nuances of what makes us human. While we still seek to understand atoms, we can now aspire to understand people’s minds as well.

This article appears in the November 2019 print issue as “A Man-Machine Mind Meld for Quantum Computing.”

About the Authors
Ottó Elíasson, Carrie Weidner, Janet Rafner, and Shaeema Zaman Ahmed work with the ScienceAtHome project at Aarhus University in Denmark.

Posted in Human Robots

#436119 How 3D Printing, Vertical Farming, and ...

Food. What we eat, and how we grow it, will be fundamentally transformed in the next decade.

Already, indoor farming is projected to be a US$40.25 billion industry by 2022, with a compound annual growth rate of 9.65 percent. Meanwhile, the food 3D printing industry is expected to grow at an even higher rate, averaging 50 percent annual growth.

And converging exponential technologies—from materials science to AI-driven digital agriculture—are not slowing down. Today’s breakthroughs will soon allow our planet to boost its food production by nearly 70 percent, using a fraction of the real estate and resources, to feed 9 billion by mid-century.

What you consume, how it was grown, and how it will end up in your stomach will all ride the wave of converging exponentials, revolutionizing the most basic of human needs.

Printing Food
3D printing has already had a profound impact on the manufacturing sector. We are now able to print in hundreds of different materials, making anything from toys to houses to organs. However, we are finally seeing the emergence of 3D printers that can print food itself.

Redefine Meat, an Israeli startup, wants to tackle industrial meat production using 3D printers that can generate meat, no animals required. The printer takes in fat, water, and three different plant protein sources, using these ingredients to print a meat fiber matrix with trapped fat and water, thus mimicking the texture and flavor of real meat.

Slated for release in 2020 at a cost of $100,000, their machines are rapidly demonetizing and will begin by targeting clients in industrial-scale meat production.

Anrich3D aims to take this process a step further, 3D printing meals that are customized to your medical records, health data from your smart wearables, and patterns detected by your sleep trackers. The company plans to use multiple extruders for multi-material printing, allowing it to dispense each ingredient precisely for nutritionally optimized meals. Currently in an R&D phase at the Nanyang Technological University in Singapore, the company hopes to have its first taste tests in 2020.

These are only a few of the many 3D food printing startups springing into existence. The benefits from such innovations are boundless.

Not only will food 3D printing grant consumers control over the ingredients and mixtures they consume, but it is already beginning to enable new innovations in flavor itself, democratizing far healthier meal options in newly customizable cuisine categories.

Vertical Farming
Vertical farming, whereby food is grown in vertical stacks (in skyscrapers and buildings rather than outside in fields), marks a classic case of converging exponential technologies. Over just the past decade, the technology has surged from a handful of early-stage pilots to a full-grown industry.

Today, the average American meal travels 1,500-2,500 miles to get to your plate. As summed up by Worldwatch Institute researcher Brian Halweil, “We are spending far more energy to get food to the table than the energy we get from eating the food.” Additionally, the longer foods are out of the soil, the less nutritious they become, losing on average 45 percent of their nutrition before being consumed.

Yet beyond cutting down on time and transportation losses, vertical farming eliminates a whole host of issues in food production. Relying on hydroponics and aeroponics, vertical farms allow us to grow crops with 90 percent less water than traditional agriculture—which is critical for our increasingly thirsty planet.

Currently, the largest player around is Bay Area-based Plenty Inc. With over $200 million in funding from Softbank, Plenty is taking a smart tech approach to indoor agriculture. Plants grow on 20-foot-high towers, monitored by tens of thousands of cameras and sensors, optimized by big data and machine learning.

This allows the company to pack 40 plants in the space previously occupied by 1. The process also produces yields 350 times greater than outdoor farmland, using less than 1 percent as much water.

And rather than bespoke veggies for the wealthy few, Plenty’s processes allow them to knock 20-35 percent off the costs of traditional grocery stores. To date, Plenty has their home base in South San Francisco, a 100,000 square-foot farm in Kent, Washington, an indoor farm in the United Arab Emirates, and recently started construction on over 300 farms in China.

Another major player is New Jersey-based Aerofarms, which can now grow two million pounds of leafy greens without sunlight or soil.

To do this, Aerofarms leverages AI-controlled LEDs to provide optimized wavelengths of light for each plant. Using aeroponics, the company delivers nutrients by misting them directly onto the plants’ roots—no soil required. Rather, plants are suspended in a growth mesh fabric made from recycled water bottles. And here too, sensors, cameras, and machine learning govern the entire process.

While 50-80 percent of the cost of vertical farming is human labor, autonomous robotics promises to solve that problem. Enter contenders like Iron Ox, a firm that has developed the Angus robot, capable of moving around plant-growing containers.

The writing is on the wall, and traditional agriculture is fast being turned on its head.

Materials Science
In an era where materials science, nanotechnology, and biotechnology are rapidly becoming the same field of study, key advances are enabling us to create healthier, more nutritious, more efficient, and longer-lasting food.

For starters, we are now able to boost the photosynthetic abilities of plants. Using novel techniques to improve a micro-step in the photosynthesis process chain, researchers at UCLA were able to boost tobacco crop yield by 14-20 percent. Meanwhile, the RIPE Project, backed by Bill Gates and run out of the University of Illinois, has matched and improved those numbers.

And to top things off, the University of Essex was even able to improve tobacco yield by 27-47 percent by increasing the levels of protein involved in photorespiration.

In yet another win for food-related materials science, Santa Barbara-based Apeel Sciences is further tackling the vexing challenge of food waste. Now approaching commercialization, Apeel uses lipids and glycerolipids found in the peels, seeds, and pulps of all fruits and vegetables to create “cutin”—the fatty substance that composes the skin of fruits and prevents them from rapidly spoiling by trapping moisture.

By then spraying fruits with this generated substance, Apeel can preserve foods 60 percent longer using an odorless, tasteless, colorless organic substance.

And stores across the US are already using this method. By leveraging our advancing knowledge of plants and chemistry, materials science is allowing us to produce more food with far longer-lasting freshness and more nutritious value than ever before.

Convergence
With advances in 3D printing, vertical farming, and materials sciences, we can now make food smarter, more productive, and far more resilient.

By the end of the next decade, you should be able to 3D print a fusion cuisine dish from the comfort of your home, using ingredients harvested from vertical farms, with nutritional value optimized by AI and materials science. However, even this picture doesn’t account for all the rapid changes underway in the food industry.

Join me next week for Part 2 of the Future of Food for a discussion on how food production will be transformed, quite literally, from the bottom up.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Vanessa Bates Ramirez

Posted in Human Robots