
#436165 Video Friday: DJI’s Mavic Mini Is ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

DJI’s new Mavic Mini looks like a pretty great drone for US $400 ($500 for a combo with more accessories): It’s tiny, flies for 30 minutes, and will do what you need as far as pictures and video (although not a whole lot more).

DJI seems to have put a bunch of effort into making the drone 249 grams, 1 gram under the 250-gram threshold at which FAA registration is required. That saves you $5 and a few minutes of your time, but it does not exempt you from the FAA’s rules and regulations governing drone use.

[ DJI ]

Don’t panic, but Clearpath and HEBI Robotics have armed the Jackal:

After locking eyes across a crowded room at ICRA 2019, Clearpath Robotics and HEBI Robotics basked in that warm and fuzzy feeling that comes with starting a new and exciting relationship. Over a conference hall coffee, they learned that the two companies have many overlapping interests. The most compelling was the realization that customers across a variety of industries are hunting for an elusive true love of their own – a robust but compact robotic platform combined with a long reach manipulator for remote inspection tasks.

After ICRA concluded, Arron Griffiths, Application Engineer at Clearpath, and Matthew Tesch, Software Engineer at HEBI, kept in touch and decided there had been enough magic in the air to warrant further exploration. A couple of months later, Matthew arrived at Clearpath to formally introduce HEBI’s X-Series Arm to Clearpath’s Jackal UGV. It was love.

[ Clearpath ]

Thanks Dave!

I’m really not a fan of the people-carrying drones, but heavy lift cargo drones seem like a more okay idea.

Volocopter, the pioneer in Urban Air Mobility, presented the demonstrator of its VoloDrone. This marks Volocopter’s expansion into the logistics, agriculture, infrastructure, and public services industries. The VoloDrone is an unmanned, fully electric, heavy-lift utility drone capable of carrying a payload of 200 kg (440 lbs) up to 40 km (25 miles). With a standardized payload attachment, the VoloDrone can serve a great variety of purposes, from transporting boxes, to liquids, to equipment and beyond. It can be remotely piloted or flown in automated mode on pre-set routes.

[ Volocopter ]

JAY is a mobile service robot that projects a display on the floor and plays sound with its speaker. By playing sounds and videos, it provides visual and audio entertainment in various places such as exhibition halls, airports, hotels, department stores and more.

[ Rainbow Robotics ]

The DARPA Subterranean Challenge Virtual Tunnel Circuit concluded this week—it was the same idea as the physical challenge that took place in August, just with a lot less IRL dirt.

The awards ceremony and team presentations are in this next video, and we’ll have more on this once we get back from IROS.

[ DARPA SubT ]

NASA is sending a mobile robot to the south pole of the Moon to get a close-up view of the location and concentration of water ice in the region and, for the first time ever, to actually sample the water ice at the same pole where the first woman and the next man will land in 2024 under the Artemis program.

About the size of a golf cart, the Volatiles Investigating Polar Exploration Rover, or VIPER, will roam several miles, using its four science instruments — including a 1-meter drill — to sample various soil environments. Planned for delivery in December 2022, VIPER will collect about 100 days of data that will be used to inform development of the first global water resource maps of the Moon.

[ NASA ]

Happy Halloween from HEBI Robotics!

[ HEBI ]

Happy Halloween from Soft Robotics!

[ Soft Robotics ]

Halloween must be really, really confusing for autonomous cars.

[ Waymo ]

Once a year at Halloween, hardworking JPL engineers put their skills to the test in a highly competitive pumpkin-carving contest. The result: a pumpkin gently landed on the Moon, its retrorockets smoldering, while across the room a Nemo-inspired pumpkin explored the subsurface ocean of Jupiter’s moon Europa. Suffice to say that when the scientists and engineers at NASA’s Jet Propulsion Laboratory compete in a pumpkin-carving contest, the solar system’s the limit. Take a look at some of the masterpieces from 2019.

Now in its ninth year, the contest gives teams only one hour to carve and decorate their pumpkin, though they can prepare non-pumpkin materials – like backgrounds, sound effects, and motorized parts – ahead of time.

[ JPL ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR, and an RGB-D camera. The whole online process runs in real time on a Jetson Xavier and a laptop with an i7 processor.

[ BPL ]

Misty II is now available to anyone who wants one, and she’s on sale for a mere $2900.

[ Misty ]

We leveraged LiDAR-based SLAM, in conjunction with our specialized relative localization sensor UVDAR, to perform a decentralized, communication-free swarm flight without the units knowing their absolute locations. The swarming and obstacle avoidance control is based on a modified Boids-like algorithm, while the whole swarm is controlled by directing a selected leader unit.

[ MRS ]
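The “Boids-like algorithm” mentioned above is a classic flocking recipe: each unit steers using only information about nearby units, through separation, alignment, and cohesion terms. Below is a minimal Python sketch of that idea; the gains, neighbor radius, and update scheme are illustrative assumptions, not the MRS implementation (which adds obstacle avoidance and leader following).

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, r_neigh=2.0,
               w_sep=1.5, w_ali=1.0, w_coh=0.8, v_max=1.0):
    """One Boids update. pos, vel: (N, 2) arrays of positions and velocities."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < r_neigh)           # neighbors only, not self
        if not mask.any():
            continue
        sep = -np.sum(offsets[mask] / dist[mask, None] ** 2, axis=0)  # avoid crowding
        ali = vel[mask].mean(axis=0) - vel[i]                         # match headings
        coh = pos[mask].mean(axis=0) - pos[i]                         # seek local center
        new_vel[i] += dt * (w_sep * sep + w_ali * ali + w_coh * coh)
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:                              # clamp to a maximum speed
            new_vel[i] *= v_max / speed
    return pos + dt * new_vel, new_vel

# A few hundred steps from random initial conditions produce flocking behavior.
rng = np.random.default_rng(0)
pos, vel = rng.uniform(-3, 3, (10, 2)), rng.uniform(-1, 1, (10, 2))
for _ in range(300):
    pos, vel = boids_step(pos, vel)
```

In the real system, UVDAR’s job is to supply the relative positions that the `pos - pos[i]` line gets for free in simulation.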

The MallARD robot is an autonomous surface vehicle (ASV) designed for the monitoring and inspection of wet storage facilities, such as spent fuel pools or wet silos. The MallARD is holonomic, uses a LiDAR for localisation, and features a robust trajectory tracking controller.

The University of Manchester researcher Dr. Keir Groves designed and built the ASV for the challenge; it placed in the top three in the second round, in November 2017. The MallARD went on to compete in the final, third round, where it was deployed by the IAEA in a spent fuel pond at a nuclear power plant in Finland, along with two other entries. The MallARD came second overall in November 2018.

[ RNE ]

Thanks Jennifer!

I sometimes get the sense that in the robotic grasping and manipulation world, suction cups are seen as cheating. But their nature allows you to do some pretty interesting things.

More clever octopus footage please.

[ CMU ]

A Personal, At-Home Teacher For Playful Learning: From academic topics to child-friendly news bulletins, fun facts and more, Miko 2 is packed with relevant and freshly updated content specially designed by educationists and child-specialists. Your little one won’t even realize they’re learning.

As we point out pretty much every time we post a video like this, keep in mind that you’re seeing a heavily edited version of a hypothetical best-case scenario for how this robot can function. And claims like “creating a relationship that they can then learn how to form with their peers” are almost certainly overselling things. But at $300 (shipping included), this may be a decent robot as long as your expectations are appropriately calibrated.

[ Miko ]

ICRA 2018 plenary talk by Rodney Brooks: “Robots and People: the Research Challenge.”

[ IEEE RAS ]

ICRA-X 2018 talk by Ron Arkin: “Lethal Autonomous Robots and the Plight of the Noncombatant.”

[ IEEE RAS ]

On the most recent episode of the AI Podcast, Lex Fridman interviews Garry Kasparov.

[ AI Podcast ]


#436149 Blue Frog Robotics Answers (Some of) Our ...

In September of 2015, Buddy the social home robot closed its Indiegogo crowdfunding campaign more than 600 percent over its funding goal. A thousand people pledged for a robot originally scheduled to be delivered in December of 2016. But nearly three years later, the future of Buddy is still unclear. Last May, Blue Frog Robotics asked for forgiveness from its backers and announced the launch of an “equity crowdfunding campaign” to try to raise the additional funding necessary to deliver the robot in April of 2020.

By the time the crowdfunding campaign launched in August, the delivery date had slipped again, to September 2020, even as Blue Frog attempted to draw investors by estimating that sales of Buddy would “increase from 2000 robots in 2020 to 20,000 in 2023.” Blue Frog’s most recent communication with backers, in September, mentions a new CTO and a North American office, but does little to reassure backers of Buddy that they’ll ever be receiving their robot.

Backers of the robot are understandably concerned about the future of Buddy, so we sent a series of questions to the founder and CEO of Blue Frog Robotics, Rodolphe Hasselvander.

We’ve edited this interview slightly for clarity, but we should also note that Hasselvander was unable to provide answers to every question. In particular, we asked for some basic information about Blue Frog’s near-term financial plans, on which the entire future of Buddy seems to depend. We’ve left those questions in the interview anyway, along with Hasselvander’s response.

1. At this point, how much additional funding is necessary to deliver Buddy to backers?
2. Assuming funding is successful, when can backers expect to receive Buddy?
3. What happens if the fundraising goal is not met?
4. You estimate that sales of Buddy will increase 10x over three years. What is this estimate based on?

Rodolphe Hasselvander: Regarding questions 1-4, unfortunately, as we are fundraising under Regulation D, we do not comment on prospects, customer data, sales forecasts, or figures. Please refer to our press release here for information about the fundraising.

5. Do you feel that you are currently being transparent enough about this process to satisfy backers?
6. Buddy’s launch date has moved from April 2020 to September 2020 over the last four months. Why should backers remain confident about Buddy’s schedule?

Since the last newsletter, we haven’t changed our communication: the backers will be the first to receive their Buddy, and we plan an official launch in September 2020.

7. What is the goal of My Buddy World?

At Blue Frog, we think that matching a great product with a big market can only happen through continual experimentation, iteration and incorporation of customer feedback. That’s why we created the forum My Buddy World. It has been designed for our Buddy Community to join us, discuss the world’s first emotional robot, and create with us. The objective is to deepen our conversation with Buddy’s fans and users, stay agile in testing our hypothesis and validate our product-market fit. We trust the value of collaboration. Behind Buddy, there is a team of roboticists, engineers, and programmers that are eager to know more about our consumers’ needs and are excited to work with them to create the perfect human/robot experience.

8. How is the current version of Buddy different from the 2015 version that backers pledged for during the successful crowdfunding campaign, in both hardware and software?

We have completely revised some parts of Buddy, as well as replaced and/or added more accurate and reliable components, to ensure we fully satisfy our customers’ requirements for a mature and high-quality robot from day one. We sourced more innovative components to make sure that Buddy has the most up-to-date technologies, such as four microphones, a high-definition thermal matrix, a 3D camera, an 8-megapixel RGB camera, time-of-flight sensors, and touch sensors.
If you want more info, we just posted an article about what Buddy is here.

9. Will the version of Buddy that ships to backers in 2020 do everything that was shown in the original crowdfunding video?

Concerning the capabilities of Buddy shown in the video published on YouTube, I confirm that Buddy will be able to do everything you can see: autonomous patrol and home security, telepresence, mathematics applications, interactive stories for children, IoT/smart-home management, face recognition, alarm clock, reminders, message/photo sharing, music, hands-free calls, people following, and games like hide-and-seek (and more). In addition, everyone will be able to create their own apps thanks to the “BuddyLab” application.

10. What makes you confident that Buddy will be successful when Jibo, Kuri, and other social robots have not?

Consumer robotics is a new market. Some people think it is a tough one. But we, at Blue Frog Robotics, believe it is a path of learning, understanding, and finding new ways to serve consumers. Here are the five key factors that will make Buddy successful.

1) A market-fit robot

Blue Frog Robotics is a consumer-centric company. We know that a successful business model and a compelling market fit for Buddy must come from solving consumers’ frustrations and problems in a way that’s new and exciting. We started from there.

By leveraging existing research and syndicated consumer data sets to understand our customers’ needs and aspirations, we learned that creating a robot is not about the best tech innovation and features, but always about how well technology becomes a service to one’s basic human needs and assets: convenience, connection, security, fun, self-improvement, and time. To answer these consumers’ needs and wants, we designed an all-in-one robot with four vital capabilities: intelligence, emotionality, mobility, and customization.

With his multi-purpose brain, he addresses a broad range of needs in modern-day life, from securing homes to carrying out his owners’ daily activities, from helping people with disabilities to educating children, from entertaining to just becoming a robot friend.

Buddy is a disruptive, innovative robot that is about to transform the way we live, learn, utilize information, play, and even care about our health.

2) Endless possibilities

One of the major advantages of Buddy is his adaptability. Beyond being adorable, playful, talkative, and able to accompany anyone in their daily life at home, whether you are comfortable with technology or not, he offers, via his platform, applications to engage his owners in a wide range of activities. From fitness to cooking, from health monitoring to education, from games to meditation, the combination of intelligence, sensors, mobility, and a multi-touch panel opens endless possibilities for consumers and organizations to adapt their Buddy to their own needs.

3) An affordable price

Buddy will be the first robot combining smart, social, and mobile capabilities and a developed platform with a personality to enter the U.S. market at an affordable price.

Our competitors are social or assistant robots, but rarely both. Competitors differentiate themselves by features (mobile or non-mobile), by shape (humanoid or not), by skills (social versus smart), by target domain (entertainment, retail assistance, eldercare, or education for children), and by price. Regarding our six competitors: Moorebot, Elli-Q, and Olly are not mobile; Lynx and Nao are in the toy category; Pepper is above $10,000 and targets the B2B market; and finally, Temi can’t be considered an emotional robot.

Buddy remains highly differentiated as an all-in-one, best-in-class experience, covering his owners’ needs for social interaction and assistance at each stage of their life, at an affordable price.

The price range of Buddy will be between US $1700 and $2000.

4) A winning business model

Buddy’s great business model combines hardware, software, and services, and provides game-changing convenience for consumers, organizations, and developers.

Buddy offers a multi-sided value proposition focused on three vertical markets: direct consumers, corporations (healthcare, education, hospitality), and developers. The model creates engagement and sustained usage and produces stable and diverse cash flow.

5) A passion for people and technology

From day one, we have always believed in the power of our dream: to bring the services and the fun of an emotional robot into every house, every hospital, and every care home. Each day, we refuse to think that we are stuck or limited; we work hard to make Buddy a reality that will help people all over the world and make them smile.

While we certainly appreciate Hasselvander’s consistent optimism and obvious enthusiasm, we’re obligated to point out that some of our most important questions were not directly answered. We haven’t learned anything that makes us all that much more confident that Blue Frog will be able to successfully deliver Buddy this time. Hasselvander also didn’t address our specific question about whether he feels Blue Frog’s communication strategy with backers has been adequate, which is particularly relevant considering that over the four months between the last two newsletters, Buddy’s launch date slipped by five months.

At this point, all we can do is hope that the strategy Blue Frog has chosen will be successful. We’ll let you know as soon as we learn more.

[ Buddy ]


#436146 Video Friday: Kuka’s Robutt Is a ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Kuka’s “robutt” can, according to the company, simulate “thousands of butts in the pursuit of durability and comfort.” Two of the robots are used at a Ford development center in Germany to evaluate new car seats. The tests are quite exhaustive, consisting of around 25,000 simulated sitting motions for each new seat design. Or as Kuka puts it, “Pleasing all the butts on the planet is serious business.”

[ Kuka ]

Here’s a clever idea: 3D printing manipulators, and then using the 3D printer head to move those manipulators around and do stuff with them:

[ Paper ]

Two former soldiers performed a series of tests to see if the ONYX Exoskeleton gave them extra strength and endurance in difficult environments.

So when can I rent one of these to help me move furniture?

[ Lockheed ]

One of the defining characteristics of legged robots in general (and humanoid robots in particular) is the ability to walk on various types of terrain. In this video, we show our humanoid robot TORO walking dynamically over uneven (grass outside the lab), rough (large gravel), and compliant terrain (a soft gym mattress). The robot can maintain its balance even when the ground shifts rapidly underfoot, such as when walking over gravel. This behavior showcases how torque control can quickly adapt the contact forces, compared with position-control methods.

An in-depth discussion of the current implementation is presented in the paper “Dynamic Walking on Compliant and Uneven Terrain using DCM and Passivity-based Whole-body Control”.

[ DLR RMC ]

Tsuki is a ROS-enabled quadruped designed and built by Lingkang Zhang. It’s completely position controlled, with no contact sensors on the feet, or even an IMU.

It can even do flips!

[ Tsuki ]

Thanks Lingkang!

TRI CEO Dr. Gill Pratt presents TRI’s contributions to Toyota’s New “LQ” Concept Vehicle, which includes onboard artificial intelligence agent “Yui” and LQ’s automated driving technology.

[ TRI ]

Hooman Hedayati wrote in to share some work (presented at HRI this year) on using augmented reality to make drone teleoperation more intuitive. Get a virtual drone to do what you want first, and then the real drone will follow.

[ Paper ]

Thanks Hooman!

You can now order a Sphero RVR for $250. It’s very much not spherical, but it does other stuff, so we’ll give it a pass.

[ Sphero ]

The AI Gamer Q56 robot is an expert at whatever this game is, using AI plus actual physical control manipulation. Watch until the end!

[ Bandai Namco ]

We present a swarm of autonomous flying robots for the exploration of unknown environments. The tiny robots do not make maps of their environment, but deal with obstacles on the fly. In robotics, the algorithms for navigating like this are called “bug algorithms”. The navigation of the robots involves them first flying away from the base station and later finding their way back with the help of a wireless beacon.

[ MAVLab ]
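For a sense of what a “bug algorithm” does, here is a toy Python sketch on an occupancy grid: step greedily toward the goal, and when blocked, slide along the obstacle instead. Real bug algorithms add boundary-following memory and termination guarantees; the grid encoding and move order here are simplified assumptions, not MAVLab’s code.

```python
def sign(a):
    return (a > 0) - (a < 0)

def bug0(grid, start, goal, max_steps=10_000):
    """Toy Bug-0-style navigation; grid[y][x] == 1 marks an obstacle."""
    (x, y), path = start, [start]
    for _ in range(max_steps):
        if (x, y) == goal:
            return path
        step = (sign(goal[0] - x), sign(goal[1] - y))       # direction of the goal
        moves = [step, (step[0], 0), (0, step[1]),          # prefer heading to goal,
                 (-step[1], step[0]), (step[1], -step[0])]  # else slide around obstacle
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if (dx, dy) != (0, 0) and 0 <= ny < len(grid) \
                    and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                x, y = nx, ny
                path.append((x, y))
                break
        else:
            return None   # boxed in
    return None           # step budget exhausted

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(bug0(grid, (0, 0), (3, 2)))   # routes around the wall to the goal
```

The appeal for tiny drones is exactly what the video description says: no map is ever built, so the memory footprint stays near zero.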

Okay, Soft Robotics, you have successfully and disgustingly convinced us that vacuum grippers should never be used for food handling. Yuck!

[ Soft Robotics ]

Beyond the asteroid belt are “fossils of planet formation” known as the Trojan asteroids. These primitive bodies share Jupiter’s orbit in two vast swarms, and may hold clues to the formation and evolution of our solar system. Now, NASA is preparing to explore the Trojan asteroids for the first time. A mission called Lucy will launch in 2021 and visit seven asteroids over the course of twelve years – one in the main belt and six in Jupiter’s Trojan swarms.

[ NASA ]

I’m not all that impressed by this concept car from Lexus, except that it includes some kind of super-thin autonomous luggage-carrying drone.

The LF-30 Electrified also carries the ‘Lexus Airporter’ drone-technology support vehicle. Using autonomous control, the Lexus Airporter is capable of such tasks as independently transporting baggage from a household doorstep to the vehicle’s luggage area.

[ Lexus ]

The Vision 60 legged robot manages unstructured terrain without vision or force sensors in its legs, using only high-transparency actuators and 2-kHz algorithmic stability control: four limbs and 12 motors driven by nothing more than a velocity command.

[ Ghost Robotics ]

Tech United Eindhoven is looking good for RoboCup@Home 2020.

[ Tech United ]

Penn engineers participated in the Subterranean (SubT) Challenge hosted by DARPA, the Defense Advanced Research Projects Agency. The goal of this Challenge is for teams to develop automated systems that can work in underground environments so they could be deployed after natural disasters or on dangerous search-and-rescue missions.

[ Team PLUTO ]

It’s BeetleCam vs. white rhinos in Kenya, and the white rhinos don’t seem to mind at all.

[ Will Burrard-Lucas ]


#436140 Let’s Build Robots That Are as Smart ...

Illustration: Nicholas Little

Let’s face it: Robots are dumb. At best they are idiot savants, capable of doing one thing really well. In general, even those robots require specialized environments in which to do their one thing really well. This is why autonomous cars or robots for home health care are so difficult to build. They’ll need to react to an uncountable number of situations, and they’ll need a generalized understanding of the world in order to navigate them all.

Babies as young as two months already understand that an unsupported object will fall, while five-month-old babies know that materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack these understandings, and that hinders them when they must navigate the world without a prescribed task and prescribed movements.

But we could see robots with a generalized understanding of the world (and the processing power required to wield it) thanks to the video-game industry. Researchers are bringing physics engines—the software that provides real-time physical interactions in complex video-game worlds—to robotics. The goal is to develop robots’ understanding so that they can learn about the world the same way babies do.

Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company building robots that could operate on Mars. SE4 plans to avoid the problems of latency caused by distance from Earth to Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.

Wilson says that his company uses simple physics engines such as PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s primary processor, which will often be focused on a more complicated task.

Wilson’s firm occasionally still turns to a traditional graphics engine, such as Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who is now an employee at Google.
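Bullet is scriptable through its Python binding, pybullet, and the kind of “baby physics” query the article describes—will this unsupported object fall?—takes only a few lines. The sketch below is a minimal illustration of the pattern under stated assumptions (stock pybullet_data assets, default 240-Hz timestep), not SE4’s actual pipeline.

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                         # headless mode, as on a coprocessor
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")                    # the ground
# A small cube floating 0.5 m up with nothing underneath it.
cube = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 0.5])

start_z = p.getBasePositionAndOrientation(cube)[0][2]
for _ in range(240):                        # simulate ~1 second at 240 Hz
    p.stepSimulation()
end_z = p.getBasePositionAndOrientation(cube)[0][2]

# The verdict a two-month-old already knows: unsupported objects fall.
print(f"dropped {start_z - end_z:.2f} m -> falls: {start_z - end_z > 0.01}")
p.disconnect()
```

Because the simulation runs headless and faster than real time, queries like this are cheap enough to offload to a coprocessor, which is the division of labor Wilson describes.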

Bullet is a popular physics-engine option, but it isn’t the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well-placed to handle the computing demands required by robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to build kitchen robots, fully articulated robot hands and more, all equipped with Nvidia’s tech.

When I visited the lab, I watched a robot arm move boxes of food from counters to cabinets. That’s fairly straightforward, but that same robot arm could avoid my body if I got in its way, and it could adapt if I moved a box of food or dropped it onto the floor.

The robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers versus something more durable like an aluminum can of tomato soup.

Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.

This article appears in the November 2019 print issue as “Robots as Smart as Babies.”


#436126 Quantum Computing Gets a Boost From AI ...

Illustration: Greg Mably

Anyone of a certain age who has even a passing interest in computers will remember the remarkable breakthrough that IBM made in 1997 when its Deep Blue chess-playing computer defeated Garry Kasparov, then the world chess champion. Computer scientists passed another such milestone in March 2016, when DeepMind (a subsidiary of Alphabet, Google’s parent company) announced that its AlphaGo program had defeated world-champion player Lee Sedol in the game of Go, a board game that had vexed AI researchers for decades. Recently, DeepMind’s algorithms have also bested human players in the computer games StarCraft II and Quake III Arena.

Some believe that the cognitive capacities of machines will overtake those of human beings in many spheres within a few decades. Others are more cautious and point out that our inability to understand the source of our own cognitive powers presents a daunting hurdle. How can we make thinking machines if we don’t fully understand our own thought processes?

Citizen science, which enlists masses of people to tackle research problems, holds promise here, in no small part because it can be used effectively to explore the boundary between human and artificial intelligence.

Some citizen-science projects ask the public to collect data from their surroundings (as eButterfly does for butterflies) or to monitor delicate ecosystems (as Eye on the Reef does for Australia’s Great Barrier Reef). Other projects rely on online platforms on which people help to categorize obscure phenomena in the night sky (Zooniverse) or add to the understanding of the structure of proteins (Foldit). Typically, people can contribute to such projects without any prior knowledge of the subject. Their fundamental cognitive skills, like the ability to quickly recognize patterns, are sufficient.

In order to design and develop video games that can allow citizen scientists to tackle scientific problems in a variety of fields, professor and group leader Jacob Sherson founded ScienceAtHome (SAH), at Aarhus University, in Denmark. The group began by considering topics in quantum physics, but today SAH hosts games covering other areas of physics, math, psychology, cognitive science, and behavioral economics. We at SAH search for innovative solutions to real research challenges while providing insight into how people think, both alone and when working in groups.


We believe that the design of new AI algorithms would benefit greatly from a better understanding of how people solve problems. This surmise has led us to establish the Center for Hybrid Intelligence within SAH, which tries to combine human and artificial intelligence, taking advantage of the particular strengths of each. The center’s focus is on the gamification of scientific research problems and the development of interfaces that allow people to understand and work together with AI.

Our first game, Quantum Moves, was inspired by our group’s research into quantum computers. Such computers can in principle solve certain problems that would take a classical computer billions of years. Quantum computers could challenge current cryptographic protocols, aid in the design of new materials, and give insight into natural processes that require an exact solution of the equations of quantum mechanics—something normal computers are inherently bad at doing.

One candidate system for building such a computer would capture individual atoms by “freezing” them, as it were, in the interference pattern produced when a laser beam is reflected back on itself. The captured atoms can thus be organized like eggs in a carton, forming a periodic crystal of atoms and light. Using these atoms to perform quantum calculations requires that we use tightly focused laser beams, called optical tweezers, to transport the atoms from site to site in the light crystal. This is a tricky business because individual atoms do not behave like particles; instead, they resemble a wavelike liquid governed by the laws of quantum mechanics.

In Quantum Moves, a player manipulates a touch screen or mouse to move a simulated laser tweezer and pick up a trapped atom, represented by a liquidlike substance in a bowl. Then the player must bring the atom back to the tweezer’s initial position while trying to minimize the sloshing of the liquid. Such sloshing would increase the energy of the atom and ultimately introduce errors into the operations of the quantum computer. Therefore, at the end of a move, the liquid should be at a complete standstill.

To understand how people and computers might approach such a task differently, you need to know something about how computerized optimization algorithms work. The countless ways of moving a glass of water without spilling may be regarded as constituting a “solution landscape.” One solution is represented by a single point in that landscape, and the height of that point represents the quality of the solution—how smoothly and quickly the glass of water was moved. This landscape might resemble a mountain range, where the top of each mountain represents a local optimum and where the challenge is to find the highest peak in the range—the global optimum.

Illustration: Greg Mably

Researchers must compromise between searching the landscape for taller mountains (“exploration”) and climbing to the top of the nearest mountain (“exploitation”). Making such a trade-off may seem easy when exploring an actual physical landscape: Merely hike around a bit to get at least the general lay of the land before surveying in greater detail what seems to be the tallest peak. But because each possible way of changing the solution defines a new dimension, a realistic problem can have thousands of dimensions. It is computationally intractable to completely map out such a higher-dimensional landscape. We call this the curse of high dimensionality, and it plagues many optimization problems.

Although algorithms are wonderfully efficient at crawling to the top of a given mountain, finding good ways of searching through the broader landscape poses quite a challenge, one that is at the forefront of AI research into such control problems. The conventional approach is to come up with clever ways of reducing the search space, either through insights generated by researchers or with machine-learning algorithms trained on large data sets.
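To make the exploration-versus-exploitation trade-off concrete, here is a minimal random-restart hill climber in Python: each climb exploits the nearest peak, while the random restarts explore the wider landscape. The one-dimensional toy objective and all parameters are invented for illustration.

```python
import random
from math import sin

def quality(x):
    """Toy 1-D solution landscape: several local peaks, one global optimum."""
    return sin(5 * x) + 0.5 * sin(13 * x) - 0.1 * (x - 2.0) ** 2

def climb(x, step=0.01, max_iters=5000):
    """Exploitation: walk uphill from x until no neighboring point is better."""
    for _ in range(max_iters):
        best = max((x - step, x, x + step), key=quality)
        if best == x:
            break
        x = best
    return x

def search(restarts=30, lo=-5.0, hi=5.0):
    """Exploration: launch the climber from random points across the landscape."""
    peaks = [climb(random.uniform(lo, hi)) for _ in range(restarts)]
    return max(peaks, key=quality)

best_x = search()
print(f"best x = {best_x:.3f}, quality = {quality(best_x):.3f}")
```

With thousands of dimensions instead of one, random restarts stop being a workable exploration strategy, which is exactly the gap the Quantum Moves players turned out to help fill.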

At SAH, we attacked certain quantum-optimization problems by turning them into a game. Our goal was not to show that people can beat computers in this arena but rather to understand the process of generating insights into such problems. We addressed two core questions: whether allowing players to explore the infinite space of possibilities will help them find good solutions and whether we can learn something by studying their behavior.

Today, more than 250,000 people have played Quantum Moves, and to our surprise, they did in fact search the space of possible moves differently from the algorithm we had put to the task. Specifically, we found that although players could not solve the optimization problem on their own, they were good at searching the broad landscape. The computer algorithms could then take those rough ideas and refine them.


Perhaps even more interesting was our discovery that players had two distinct ways of solving the problem, each with a clear physical interpretation. One set of players started by placing the tweezer close to the atom while keeping a barrier between the atom trap and the tweezer. In classical physics, a barrier is an impenetrable obstacle, but because the atom liquid is a quantum-mechanical object, it can tunnel through the barrier into the tweezer, after which the player simply moved the tweezer to the target area. Another set of players moved the tweezer directly into the atom trap, picked up the atom liquid, and brought it back. We called these two strategies the “tunneling” and “shoveling” strategies, respectively.

Such clear strategies are extremely valuable because they are very difficult to obtain directly from an optimization algorithm. Involving humans in the optimization loop can thus help us gain insight into the underlying physical phenomena that are at play, knowledge that may then be transferred to other types of problems.

Quantum Moves raised several obvious issues. First, because generating an exceptional solution required further computer-based optimization, players were unable to get immediate feedback to help them improve their scores, and this often left them feeling frustrated. Second, we had tested this approach on only one scientific challenge with a clear classical analogue, that of the sloshing liquid. We wanted to know whether such gamification could be applied more generally, to a variety of scientific challenges that do not offer such immediately applicable visual analogies.

We address these two concerns in Quantum Moves 2. Here, the player first generates a number of candidate solutions by playing the original game. Then the player chooses which solutions to optimize using a built-in algorithm. As the algorithm improves a player’s solution, it modifies the solution path—the movement of the tweezer—to represent the optimized solution. Guided by this feedback, players can then improve their strategy, come up with a new solution, and iteratively feed it back into this process. This gameplay provides high-level heuristics and adds human intuition to the algorithm. The person and the machine work in tandem—a step toward true hybrid intelligence.
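In code, that division of labor might look like the sketch below: rough “human” candidate paths seed a standard local optimizer (SciPy’s Nelder-Mead here, standing in for the game’s built-in algorithm), which polishes each one. The smoothness-based cost and the candidate paths are invented placeholders, not the actual Quantum Moves objective.

```python
import numpy as np
from scipy.optimize import minimize

TARGET = 1.0   # hypothetical final tweezer position

def cost(path):
    """Stand-in for 'sloshing': penalize jerky motion and missing the target."""
    smoothness = np.sum(np.diff(path, n=2) ** 2)   # penalize acceleration
    terminal = (path[-1] - TARGET) ** 2            # end at the target position
    return smoothness + 10.0 * terminal

# Rough candidate paths a player might draw: go straight, or wait and then move.
candidates = [
    np.linspace(0.0, 1.0, 20),
    np.concatenate([np.zeros(10), np.linspace(0.0, 1.0, 10)]),
]

# Machine refinement: local optimization seeded by each human-generated solution.
results = [minimize(cost, x0, method="Nelder-Mead") for x0 in candidates]
best = min(results, key=lambda r: r.fun)
print(f"best refined cost: {best.fun:.4f}")
```

The refined path can then be shown back to the player, who tweaks it and resubmits, closing the human-machine loop the authors describe.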

In parallel with the development of Quantum Moves 2, we also studied how people collaboratively solve complex problems. To that end, we opened our atomic physics laboratory to the general public—virtually. We let people from around the world dictate the experiments we would run to see if they would find ways to improve the results we were getting. What results? That’s a little tricky to explain, so we need to pause for a moment and provide a little background on the relevant physics.

One of the essential steps in building the quantum computer along the lines described above is to create the coldest state of matter in the universe, known as a Bose-Einstein condensate. Here millions of atoms oscillate in synchrony to form a wavelike substance, one of the largest purely quantum phenomena known. To create this ultracool state of matter, researchers typically use a combination of laser light and magnetic fields. There is no familiar physical analogy between such a strange state of matter and the phenomena of everyday life.

The result we were seeking in our lab was to create as much of this enigmatic substance as was possible given the equipment available. The sequence of steps to accomplish that was unknown. We hoped that gamification could help to solve this problem, even though it had no classical analogy to present to game players.

Images: ScienceAtHome

Fun and Games: The Quantum Moves game evolved over time, from a relatively crude early version [top] to its current form [second from top] and then a major revision, Quantum Moves 2 [third from top]. Skill Lab: Science Detective games [bottom] test players’ cognitive skills.

In October 2016, we released a game that, for two weeks, guided how we created Bose-Einstein condensates in our laboratory. By manipulating simple curves in the game interface, players generated experimental sequences for us to use in producing these condensates—and they did so without needing to know anything about the underlying physics. A player would generate such a solution, and a few minutes later we would run the sequence in our laboratory. The number of ultracold atoms in the resulting Bose-Einstein condensate was measured and fed back to the player as a score. Players could then decide either to try to improve their previous solution or to copy and modify other players’ solutions. About 600 people from all over the world participated, submitting 7,577 solutions in total. Many of them yielded bigger condensates than we had previously produced in the lab.

So this exercise succeeded in achieving our primary goal, but it also allowed us to learn something about human behavior. We learned, for example, that players behave differently based on where they sit on the leaderboard. High-performing players make small changes to their successful solutions (exploitation), while poorly performing players are willing to make more dramatic changes (exploration). As a collective, the players nicely balance exploration and exploitation. How they do so provides valuable inspiration to researchers trying to understand human problem solving in social science as well as to those designing new AI algorithms.

How could mere amateurs outperform experienced experimental physicists? The players certainly weren’t better at physics than the experts—but they could do better because of the way in which the problem was posed. By turning the research challenge into a game, we gave players the chance to explore solutions that had previously required complex programming to study. Indeed, even expert experimentalists improved their solutions dramatically by using this interface.

Insight into why that’s possible can probably be found in the words of the late economics Nobel laureate Herbert A. Simon: “Solving a problem simply means representing it so as to make the solution transparent [PDF].” Apparently, that’s what our games can do with their novel user interfaces. We believe that such interfaces might be a key to using human creativity to solve other complex research problems.

Eventually, we’d like to get a better understanding of why this kind of gamification works as well as it does. A first step would be to collect more data on what the players do while they are playing. But even with massive amounts of data, detecting the subtle patterns underlying human intuition is an overwhelming challenge. To advance, we need a deeper insight into the cognition of the individual players.

As a step forward toward this goal, ScienceAtHome created Skill Lab: Science Detective, a suite of minigames exploring visuospatial reasoning, response inhibition, reaction times, and other basic cognitive skills. Then we compare players’ performance in the games with how well these same people did on established psychological tests of those abilities. The point is to allow players to assess their own cognitive strengths and weaknesses while donating their data for further public research.

In the fall of 2018 we launched a prototype of this large-scale profiling in collaboration with the Danish Broadcasting Corp. Since then more than 20,000 people have participated, and in part because of the publicity granted by the public-service channel, participation has been very evenly distributed across ages and by gender. Such broad appeal is rare in social science, where the test population is typically drawn from a very narrow demographic, such as college students.

Never before has such a large academic experiment in human cognition been conducted. We expect to gain new insights into many things, among them how combinations of cognitive abilities sharpen or decline with age, what characteristics may be used to prescreen for mental illnesses, and how to optimize the building of teams in our work lives.

And so what started as a fun exercise in the weird world of quantum mechanics has now become an exercise in understanding the nuances of what makes us human. While we still seek to understand atoms, we can now aspire to understand people’s minds as well.

This article appears in the November 2019 print issue as “A Man-Machine Mind Meld for Quantum Computing.”

About the Authors
Ottó Elíasson, Carrie Weidner, Janet Rafner, and Shaeema Zaman Ahmed work with the ScienceAtHome project at Aarhus University in Denmark.
