#436911 Scientists Linked Artificial and ...
Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully-functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.
Whoa.
We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.
As Moore’s Law dies, we’ve even argued that neuromorphic computing is one path toward a future of extremely powerful, low-energy, artificial neural network-based computing—in hardware—that could, in theory, better link up with the brain. Because the chips “speak” the brain’s language, they could in principle become neuroprosthetic hubs far more advanced and “natural” than anything currently possible.
This month, an international team put all of those ingredients together, turning theory into reality.
The three labs, scattered across Padova, Italy, Zurich, Switzerland, and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.
The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.
That’s not to say cyborg brains are coming soon. The experiment recreated only a small circuit that mimics excitatory transmission in the hippocampus—a region critical for memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.
And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.
The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.
The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.
Because memory and processing occur at the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history also influences how it behaves in the future, increasing flexibility and adaptability compared to computers. With the rise of deep learning, which loosely mimics neural processing and has become the prima donna of AI, the need to reduce power consumption while boosting speed and flexible learning has become ever more paramount in the AI community.
Neuromorphic computing was partially born out of this need. Most chips utilize special ingredients that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation—does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on the information to another neuron.
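To make that idea concrete, here is a minimal Python sketch, a toy model rather than any actual chip design: a leaky integrate-and-fire neuron fed through a single adaptive synapse whose weight strengthens with use and slowly decays, loosely mimicking how a memristive element might change its resistance under stimulation. All parameter values are arbitrary.

```python
import numpy as np

def simulate(n_steps=2000, dt=1e-3, input_rate=150.0, seed=0):
    """Toy leaky integrate-and-fire neuron driven through one adaptive synapse."""
    rng = np.random.default_rng(seed)
    v = 0.0           # membrane potential (arbitrary units); resting value is 0
    v_thresh = 1.0    # firing threshold
    tau = 0.02        # membrane time constant, in seconds
    g = 0.3           # synaptic weight ("conductance"), which adapts with use
    out_spikes = []
    for step in range(n_steps):
        if rng.random() < input_rate * dt:   # Poisson presynaptic spike train
            v += g                           # injected charge scales with the weight
            g = min(1.0, g + 0.02)           # crude use-dependent strengthening
        v -= v * dt / tau                    # leak back toward rest
        g = max(0.05, g - 0.01 * dt)         # slow passive decay of the weight
        if v >= v_thresh:                    # threshold crossing: spike, then reset
            out_spikes.append(step * dt)
            v = 0.0
    return out_spikes, g

spikes, final_g = simulate()
print(f"{len(spikes)} output spikes; final synaptic weight {final_g:.2f}")
```

Real neuromorphic chips implement dynamics like these directly in analog or digital circuitry rather than in software, which is where the speed and energy savings come from.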
Although single neuromorphic chips have proven far more efficient and powerful than conventional computer chips running machine learning algorithms on toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.
That’s what this study did.
A Hybrid Network
Still with me? Let’s talk network.
It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.
Both these properties—making downstream neurons easier or harder to activate—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is long-term depression (LTD). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and have since been considered the biological basis of how the brain learns and remembers; they’ve also been implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
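For readers who want the gist in code, here is a minimal sketch of a textbook spike-timing-dependent plasticity (STDP) rule, one common way LTP and LTD are modeled: a presynaptic spike arriving just before a postsynaptic spike strengthens the synapse, while the reverse ordering weakens it. The constants are arbitrary, and the real hippocampal protocols (and the ones used in this study) are more involved.

```python
import numpy as np

def stdp_update(w, dt_spike, a_plus=0.05, a_minus=0.055, tau=0.02):
    """Textbook STDP rule. dt_spike = t_post - t_pre, in seconds.
    Pre-before-post (dt_spike > 0) potentiates (LTP); post-before-pre depresses (LTD)."""
    if dt_spike > 0:
        w += a_plus * np.exp(-dt_spike / tau)
    else:
        w -= a_minus * np.exp(dt_spike / tau)
    return float(np.clip(w, 0.0, 1.0))

w = 0.5
for _ in range(20):                  # repeated pre-then-post pairings, 10 ms apart
    w = stdp_update(w, +0.010)
print(f"after pre-before-post pairing (LTP-like): w = {w:.2f}")

for _ in range(20):                  # repeated post-then-pre pairings
    w = stdp_update(w, -0.010)
print(f"after post-before-pre pairing (LTD-like): w = {w:.2f}")
```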
So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.
To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.
Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial return pathway. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control their faraway artificial partner.
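The authors only say that the labs communicated using standard internet protocols; the exact transport and message format aren’t described here. Purely as an illustration, the hypothetical sketch below serializes digitized spike events as JSON and ships them over UDP from the recording side to the computer driving the remote artificial synapse. The address, port, and field names are invented for the example.

```python
# Hypothetical sketch of shipping digitized spike events between labs over UDP.
# The study only says "standard internet protocols"; the endpoint, port, and
# message format below are illustrative assumptions, not the actual setup.
import json
import socket
import time

LAB_ADDRESS = ("127.0.0.1", 9999)   # placeholder for the remote lab's endpoint

def send_spike(sock, neuron_id, amplitude_uv):
    """Serialize one detected spike and send it to the remote lab."""
    event = {"neuron": neuron_id, "t": time.time(), "amp_uv": amplitude_uv}
    sock.sendto(json.dumps(event).encode(), LAB_ADDRESS)

def receive_spikes(sock):
    """Yield spike events as they arrive, to drive the local artificial synapse."""
    sock.bind(LAB_ADDRESS)
    while True:
        payload, _ = sock.recvfrom(1024)
        yield json.loads(payload)

# Sender side (e.g., the computer reading the biological neuron's electrode):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_spike(sock, neuron_id="rat-1", amplitude_uv=85.0)
```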
Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.
You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.
Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.
Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.
It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network containing artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD have recently come under fire as the de facto brain mechanisms for learning, though for now they remain cemented as neuroscience dogma.
However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path toward understanding the brain, we’re recreating it, in hardware—if just a little.
While the study doesn’t have immediate use cases, it does give a practical boost to both the neuromorphic computing and neuroprosthetic fields.
“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”
Image Credit: Gerd Altmann from Pixabay
#436470 Retail Robots Are on the Rise—at Every ...
The robots are coming! The robots are coming! On our sidewalks, in our skies, in our every store… Over the next decade, robots will enter the mainstream of retail.
As countless robots work behind the scenes to stock shelves, serve customers, and deliver products to our doorstep, the speed of retail will accelerate.
These changes are already underway. In this blog, we’ll elaborate on how robots are entering the retail ecosystem.
Let’s dive in.
Robot Delivery
On August 3rd, 2016, Domino’s Pizza introduced the Domino’s Robotic Unit, or “DRU” for short. The first home delivery pizza robot, the DRU looks like a cross between R2-D2 and an oversized microwave.
LIDAR and GPS sensors help it navigate, while temperature sensors keep hot food hot and cold food cold. Already, it’s been rolled out in ten countries, including New Zealand, France, and Germany, but its August 2016 debut was critical—as it was the first time we’d seen robotic home delivery.
And it won’t be the last.
A dozen or so different delivery bots are fast entering the market. Starship Technologies, for instance, a startup created by Skype founders Janus Friis and Ahti Heinla, has a general-purpose home delivery robot. Right now, the system is an array of cameras and GPS sensors, but upcoming models will include microphones, speakers, and even the ability—via AI-driven natural language processing—to communicate with customers. Since 2016, Starship has already carried out 50,000 deliveries in over 100 cities across 20 countries.
Along similar lines, Nuro—co-founded by Jiajun Zhu, one of the engineers who helped develop Google’s self-driving car—has a miniature self-driving car of its own. Half the size of a sedan, the Nuro looks like a toaster on wheels, except with a mission. This toaster has been designed to carry cargo—about 12 bags of groceries (version 2.0 will carry 20)—which it’s been doing for select Kroger stores since 2018. Domino’s also partnered with Nuro in 2019.
As these delivery bots take to our streets, others are streaking across the sky.
Back in 2016, Amazon came first, announcing Prime Air—the e-commerce giant’s promise of drone delivery in 30 minutes or less. Almost immediately, companies ranging from 7-Eleven and Walmart to Google and Alibaba jumped on the bandwagon.
While critics remain doubtful, the head of the FAA’s drone integration department recently said that drone deliveries may be “a lot closer than […] the skeptics think. [Companies are] getting ready for full-blown operations. We’re processing their applications. I would like to move as quickly as I can.”
In-Store Robots
While delivery bots start to spare us trips to the store, those who prefer shopping the old-fashioned way—i.e., in person—also have plenty of human-robot interaction in store. In fact, these robotics solutions have been around for a while.
In 2014, SoftBank introduced Pepper, a humanoid robot capable of understanding human emotion. Pepper is cute: 4 feet tall, with a white plastic body, two black eyes, a dark slash of a mouth, and a base shaped like a mermaid’s tail. Across her chest is a touch screen to aid in communication. And there’s been a lot of communication. Pepper’s cuteness is intentional, as it matches its mission: to help humans enjoy life as much as possible.
Over 12,000 Peppers have been sold. She serves ice cream in Japan, greets diners at a Pizza Hut in Singapore, and dances with customers at a Palo Alto electronics store. More importantly, Pepper’s got company.
Walmart uses shelf-stocking robots for inventory control. Best Buy uses a robo-cashier, allowing select locations to operate 24-7. And Lowe’s Home Improvement employs the LoweBot—a giant iPad on wheels—to help customers find the items they need while tracking inventory along the way.
Warehouse Bots
Yet the biggest benefit robots provide might come from in-warehouse logistics.
In 2012, when Amazon dished out $775 million for Kiva Systems, few could predict that just 6 years later, 45,000 Kiva robots would be deployed at all of its fulfillment centers, helping process a whopping 306 items per second during the Christmas season.
And many other retailers are following suit.
Order jeans from the Gap, and soon they’ll be sorted, packed, and shipped with the help of a Kindred robot. Remember the old arcade game where you picked up teddy bears with a giant claw? That’s Kindred, only her claw picks up T-shirts, pants, and the like, placing them in designated drop-off zones that resemble tiny mailboxes (for further sorting or shipping).
The big deal here is democratization. Kindred’s robot is cheap and easy to deploy, allowing smaller companies to compete with giants like Amazon.
Final Thoughts
For retailers interested in staying in business, there doesn’t appear to be much choice in the way of robotics.
The US federal minimum wage is projected to reach $15 an hour by 2025 (the House of Representatives has already passed the bill, with the wage hike unfolding gradually between now and then), and many consider that number far too low.
Yet, as human labor costs continue to climb, robots won’t just be coming, they’ll be here, there, and everywhere. It’s going to become increasingly difficult for store owners to justify human workers who call in sick, show up late, and can easily get injured. Robots work 24-7. They never take a day off, never need a bathroom break, health insurance, or parental leave.
Going forward, this spells a growing challenge of technological unemployment (a blog topic I will cover in the coming month). But in retail, robotics usher in tremendous benefits for companies and customers alike.
And while professional re-tooling initiatives and the transition of human capital from retail logistics to a booming experience economy take hold, robotic retail interaction and last-mile delivery will fundamentally transform our relationship with commerce.
This blog comes from The Future is Faster Than You Think—my upcoming book, to be released Jan 28th, 2020. To get an early copy and access up to $800 worth of pre-launch giveaways, sign up here!
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)
Image Credit: Image by imjanuary from Pixabay
#436126 Quantum Computing Gets a Boost From AI ...
Illustration: Greg Mably
Anyone of a certain age who has even a passing interest in computers will remember the remarkable breakthrough that IBM made in 1997 when its Deep Blue chess-playing computer defeated Garry Kasparov, then the world chess champion. Computer scientists passed another such milestone in March 2016, when DeepMind (a subsidiary of Alphabet, Google’s parent company) announced that its AlphaGo program had defeated world-champion player Lee Sedol in the game of Go, a board game that had vexed AI researchers for decades. Recently, DeepMind’s algorithms have also bested human players in the computer games StarCraft II and Quake III Arena.
Some believe that the cognitive capacities of machines will overtake those of human beings in many spheres within a few decades. Others are more cautious and point out that our inability to understand the source of our own cognitive powers presents a daunting hurdle. How can we make thinking machines if we don’t fully understand our own thought processes?
Citizen science, which enlists masses of people to tackle research problems, holds promise here, in no small part because it can be used effectively to explore the boundary between human and artificial intelligence.
Some citizen-science projects ask the public to collect data from their surroundings (as eButterfly does for butterflies) or to monitor delicate ecosystems (as Eye on the Reef does for Australia’s Great Barrier Reef). Other projects rely on online platforms on which people help to categorize obscure phenomena in the night sky (Zooniverse) or add to the understanding of the structure of proteins (Foldit). Typically, people can contribute to such projects without any prior knowledge of the subject. Their fundamental cognitive skills, like the ability to quickly recognize patterns, are sufficient.
In order to design and develop video games that can allow citizen scientists to tackle scientific problems in a variety of fields, professor and group leader Jacob Sherson founded ScienceAtHome (SAH), at Aarhus University, in Denmark. The group began by considering topics in quantum physics, but today SAH hosts games covering other areas of physics, math, psychology, cognitive science, and behavioral economics. We at SAH search for innovative solutions to real research challenges while providing insight into how people think, both alone and when working in groups.
We believe that the design of new AI algorithms would benefit greatly from a better understanding of how people solve problems. This surmise has led us to establish the Center for Hybrid Intelligence within SAH, which tries to combine human and artificial intelligence, taking advantage of the particular strengths of each. The center’s focus is on the gamification of scientific research problems and the development of interfaces that allow people to understand and work together with AI.
Our first game, Quantum Moves, was inspired by our group’s research into quantum computers. Such computers can in principle solve certain problems that would take a classical computer billions of years. Quantum computers could challenge current cryptographic protocols, aid in the design of new materials, and give insight into natural processes that require an exact solution of the equations of quantum mechanics—something normal computers are inherently bad at doing.
One candidate system for building such a computer would capture individual atoms by “freezing” them, as it were, in the interference pattern produced when a laser beam is reflected back on itself. The captured atoms can thus be organized like eggs in a carton, forming a periodic crystal of atoms and light. Using these atoms to perform quantum calculations requires that we use tightly focused laser beams, called optical tweezers, to transport the atoms from site to site in the light crystal. This is a tricky business because individual atoms do not behave like particles; instead, they resemble a wavelike liquid governed by the laws of quantum mechanics.
In Quantum Moves, a player manipulates a touch screen or mouse to move a simulated laser tweezer and pick up a trapped atom, represented by a liquidlike substance in a bowl. Then the player must bring the atom back to the tweezer’s initial position while trying to minimize the sloshing of the liquid. Such sloshing would increase the energy of the atom and ultimately introduce errors into the operations of the quantum computer. Therefore, at the end of a move, the liquid should be at a complete standstill.
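The real game scores players by simulating the quantum mechanics of the trapped atom, but a classical stand-in conveys what a “score” means here. In the toy sketch below, a particle sits in a moving harmonic trap, and the cost of a tweezer trajectory is the sloshing energy left over when the trap stops: a jerky move leaves a lot of residual motion, a smooth one very little. The trap frequency, time step, and example paths are arbitrary choices, not values from the game.

```python
import numpy as np

def sloshing_cost(trap_path, dt=1e-3, omega=2 * np.pi * 5.0):
    """Classical stand-in for the game's score: drag a particle in a harmonic
    trap along `trap_path` and return the leftover "sloshing" energy at the end.
    A perfect move leaves the particle at rest at the trap's final position."""
    x, v = trap_path[0], 0.0                   # particle starts at rest in the trap
    for trap_pos in trap_path:
        v += -omega**2 * (x - trap_pos) * dt   # restoring force toward trap center
        x += v * dt
    return 0.5 * v**2 + 0.5 * omega**2 * (x - trap_path[-1]) ** 2

t = np.linspace(0.0, 1.0, 1000)
jerky = np.where(t < 0.5, 0.0, 1.0)        # slam the trap across instantly
smooth = 0.5 * (1 - np.cos(np.pi * t))     # ease the trap in and out
print(f"jerky move cost:  {sloshing_cost(jerky):.3f}")
print(f"smooth move cost: {sloshing_cost(smooth):.3f}")
```

In the actual game, the analogous quantity is computed from the quantum state of the atom rather than from a classical particle.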
To understand how people and computers might approach such a task differently, you need to know something about how computerized optimization algorithms work. The countless ways of moving a glass of water without spilling may be regarded as constituting a “solution landscape.” One solution is represented by a single point in that landscape, and the height of that point represents the quality of the solution—how smoothly and quickly the glass of water was moved. This landscape might resemble a mountain range, where the top of each mountain represents a local optimum and where the challenge is to find the highest peak in the range—the global optimum.
Illustration: Greg Mably
Researchers must compromise between searching the landscape for taller mountains (“exploration”) and climbing to the top of the nearest mountain (“exploitation”). Making such a trade-off may seem easy when exploring an actual physical landscape: Merely hike around a bit to get at least the general lay of the land before surveying in greater detail what seems to be the tallest peak. But because each possible way of changing the solution defines a new dimension, a realistic problem can have thousands of dimensions. It is computationally intractable to completely map out such a higher-dimensional landscape. We call this the curse of high dimensionality, and it plagues many optimization problems.
Although algorithms are wonderfully efficient at crawling to the top of a given mountain, finding good ways of searching through the broader landscape poses quite a challenge, one that is at the forefront of AI research into such control problems. The conventional approach is to come up with clever ways of reducing the search space, either through insights generated by researchers or with machine-learning algorithms trained on large data sets.
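A tiny example makes the exploration-versus-exploitation trade-off concrete. In the sketch below, on a made-up one-dimensional landscape rather than a real quantum control problem, greedy hill climbing from a single starting point gets stuck on a nearby local peak, while scattering many starting points across the landscape (exploration) before climbing each one (exploitation) finds a taller peak.

```python
import numpy as np

rng = np.random.default_rng(1)

def quality(x):
    """A made-up 1-D 'solution landscape' with several peaks of different heights."""
    return np.sin(x) + 0.1 * x

def hill_climb(x, step=0.1, iters=200):
    """Exploitation: greedily climb the nearest peak with small random moves."""
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        if quality(candidate) > quality(x):
            x = candidate
    return x

# Pure exploitation from one starting point gets stuck on a nearby peak...
local = hill_climb(0.0)
# ...while adding exploration (many scattered starting points) finds a taller one.
starts = rng.uniform(0, 10, size=20)
best = max((hill_climb(s) for s in starts), key=quality)
print(f"single climb:     x = {local:.2f}, quality = {quality(local):.2f}")
print(f"with exploration: x = {best:.2f}, quality = {quality(best):.2f}")
```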
At SAH, we attacked certain quantum-optimization problems by turning them into a game. Our goal was not to show that people can beat computers in this arena but rather to understand the process of generating insights into such problems. We addressed two core questions: whether allowing players to explore the infinite space of possibilities will help them find good solutions and whether we can learn something by studying their behavior.
Today, more than 250,000 people have played Quantum Moves, and to our surprise, they did in fact search the space of possible moves differently from the algorithm we had put to the task. Specifically, we found that although players could not solve the optimization problem on their own, they were good at searching the broad landscape. The computer algorithms could then take those rough ideas and refine them.
Perhaps even more interesting was our discovery that players had two distinct ways of solving the problem, each with a clear physical interpretation. One set of players started by placing the tweezer close to the atom while keeping a barrier between the atom trap and the tweezer. In classical physics, a barrier is an impenetrable obstacle, but because the atom liquid is a quantum-mechanical object, it can tunnel through the barrier into the tweezer, after which the player simply moved the tweezer to the target area. Another set of players moved the tweezer directly into the atom trap, picked up the atom liquid, and brought it back. We called these two strategies the “tunneling” and “shoveling” strategies, respectively.
Such clear strategies are extremely valuable because they are very difficult to obtain directly from an optimization algorithm. Involving humans in the optimization loop can thus help us gain insight into the underlying physical phenomena that are at play, knowledge that may then be transferred to other types of problems.
Quantum Moves raised several obvious issues. First, because generating an exceptional solution required further computer-based optimization, players were unable to get immediate feedback to help them improve their scores, and this often left them feeling frustrated. Second, we had tested this approach on only one scientific challenge with a clear classical analogue, that of the sloshing liquid. We wanted to know whether such gamification could be applied more generally, to a variety of scientific challenges that do not offer such immediately applicable visual analogies.
We address these two concerns in Quantum Moves 2. Here, the player first generates a number of candidate solutions by playing the original game. Then the player chooses which solutions to optimize using a built-in algorithm. As the algorithm improves a player’s solution, it modifies the solution path—the movement of the tweezer—to represent the optimized solution. Guided by this feedback, players can then improve their strategy, come up with a new solution, and iteratively feed it back into this process. This gameplay provides high-level heuristics and adds human intuition to the algorithm. The person and the machine work in tandem—a step toward true hybrid intelligence.
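The game’s built-in optimizer isn’t specified in this article, so purely as a sketch of the workflow, the snippet below takes a rough “player-drawn” trap trajectory, pins its endpoints, and hands the interior points to a generic local optimizer (SciPy’s L-BFGS-B) to polish, reusing the same classical sloshing stand-in as the earlier toy example. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def residual_sloshing(path, dt=1e-3, omega=2 * np.pi * 5.0):
    """Same classical stand-in as before: leftover energy after dragging a
    particle in a harmonic trap along `path` (trap positions over time)."""
    x, v = path[0], 0.0
    for p in path:
        v += -omega**2 * (x - p) * dt
        x += v * dt
    return 0.5 * v**2 + 0.5 * omega**2 * (x - path[-1]) ** 2

def refine(player_path):
    """The 'built-in algorithm' step, sketched with a generic local optimizer:
    polish the interior of the player's path while pinning its start and end."""
    start, end = player_path[0], player_path[-1]

    def cost(interior):
        return residual_sloshing(np.concatenate(([start], interior, [end])))

    result = minimize(cost, player_path[1:-1], method="L-BFGS-B")
    return np.concatenate(([start], result.x, [end]))

t = np.linspace(0.0, 1.0, 60)
player_guess = t.copy()        # a rough, straight-line ramp a player might draw
refined = refine(player_guess)
print(f"player score (lower is better):  {residual_sloshing(player_guess):.4f}")
print(f"refined score (lower is better): {residual_sloshing(refined):.4f}")
```

In Quantum Moves 2 the loop then continues: the polished path is shown back to the player, who uses it as inspiration for a better starting guess.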
In parallel with the development of Quantum Moves 2, we also studied how people collaboratively solve complex problems. To that end, we opened our atomic physics laboratory to the general public—virtually. We let people from around the world dictate the experiments we would run to see if they would find ways to improve the results we were getting. What results? That’s a little tricky to explain, so we need to pause for a moment and provide a little background on the relevant physics.
One of the essential steps in building the quantum computer along the lines described above is to create the coldest state of matter in the universe, known as a Bose-Einstein condensate. Here millions of atoms oscillate in synchrony to form a wavelike substance, one of the largest purely quantum phenomena known. To create this ultracool state of matter, researchers typically use a combination of laser light and magnetic fields. There is no familiar physical analogy between such a strange state of matter and the phenomena of everyday life.
The result we were seeking in our lab was to create as much of this enigmatic substance as was possible given the equipment available. The sequence of steps to accomplish that was unknown. We hoped that gamification could help to solve this problem, even though it had no classical analogy to present to game players.
Images: ScienceAtHome. Fun and Games: The Quantum Moves game evolved over time, from a relatively crude early version [top] to its current form [second from top] and then a major revision, Quantum Moves 2 [third from top]. Skill Lab: Science Detective games [bottom] test players’ cognitive skills.
In October 2016, we released a game that, for two weeks, guided how we created Bose-Einstein condensates in our laboratory. By manipulating simple curves in the game interface, players generated experimental sequences for us to use in producing these condensates—and they did so without needing to know anything about the underlying physics. A player would generate such a solution, and a few minutes later we would run the sequence in our laboratory. The number of ultracold atoms in the resulting Bose-Einstein condensate was measured and fed back to the player as a score. Players could then decide either to try to improve their previous solution or to copy and modify other players’ solutions. About 600 people from all over the world participated, submitting 7,577 solutions in total. Many of them yielded bigger condensates than we had previously produced in the lab.
So this exercise succeeded in achieving our primary goal, but it also allowed us to learn something about human behavior. We learned, for example, that players behave differently based on where they sit on the leaderboard. High-performing players make small changes to their successful solutions (exploitation), while poorly performing players are willing to make more dramatic changes (exploration). As a collective, the players nicely balance exploration and exploitation. How they do so provides valuable inspiration to researchers trying to understand human problem solving in social science as well as to those designing new AI algorithms.
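That leaderboard-dependent behavior can be caricatured in a few lines of code. In the hypothetical simulation below, each “player” holds a candidate control sequence; high-ranked players make small tweaks to their own solution (exploitation), while low-ranked players copy the current leader and modify it heavily (exploration). The scoring function, population size, and step sizes are all invented; the point is only that the mixture of strategies steadily improves the collective’s best solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    """Made-up stand-in for 'number of ultracold atoms' produced by a solution x."""
    return float(np.exp(-np.sum((x - 0.7) ** 2)))   # the ideal solution is all 0.7s

# Each "player" holds a candidate experimental sequence (here, 5 control values).
players = rng.uniform(0, 1, size=(30, 5))

for _ in range(50):                                 # 50 rounds of submissions
    scores = np.array([score(p) for p in players])
    ranks = scores.argsort().argsort()              # 0 = worst, 29 = best
    leader = players[scores.argmax()].copy()
    for i in range(len(players)):
        if ranks[i] >= 21:                          # top of the leaderboard: small tweaks
            candidate = players[i] + rng.normal(0, 0.02, size=5)
        else:                                       # the rest: copy the leader, change a lot
            candidate = leader + rng.normal(0, 0.3, size=5)
        if score(candidate) > scores[i]:            # keep a change only if it helps
            players[i] = candidate

print(f"best score after 50 rounds: {max(score(p) for p in players):.3f}")
```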
How could mere amateurs outperform experienced experimental physicists? The players certainly weren’t better at physics than the experts—but they could do better because of the way in which the problem was posed. By turning the research challenge into a game, we gave players the chance to explore solutions that had previously required complex programming to study. Indeed, even expert experimentalists improved their solutions dramatically by using this interface.
Insight into why that’s possible can probably be found in the words of the late economics Nobel laureate Herbert A. Simon: “Solving a problem simply means representing it so as to make the solution transparent.” Apparently, that’s what our games can do with their novel user interfaces. We believe that such interfaces might be a key to using human creativity to solve other complex research problems.
Eventually, we’d like to get a better understanding of why this kind of gamification works as well as it does. A first step would be to collect more data on what the players do while they are playing. But even with massive amounts of data, detecting the subtle patterns underlying human intuition is an overwhelming challenge. To advance, we need a deeper insight into the cognition of the individual players.
As a step forward toward this goal, ScienceAtHome created Skill Lab: Science Detective, a suite of minigames exploring visuospatial reasoning, response inhibition, reaction times, and other basic cognitive skills. Then we compare players’ performance in the games with how well these same people did on established psychological tests of those abilities. The point is to allow players to assess their own cognitive strengths and weaknesses while donating their data for further public research.
In the fall of 2018 we launched a prototype of this large-scale profiling in collaboration with the Danish Broadcasting Corp. Since then more than 20,000 people have participated, and in part because of the publicity granted by the public-service channel, participation has been very evenly distributed across ages and by gender. Such broad appeal is rare in social science, where the test population is typically drawn from a very narrow demographic, such as college students.
Never before has such a large academic experiment in human cognition been conducted. We expect to gain new insights into many things, among them how combinations of cognitive abilities sharpen or decline with age, what characteristics may be used to prescreen for mental illnesses, and how to optimize the building of teams in our work lives.
And so what started as a fun exercise in the weird world of quantum mechanics has now become an exercise in understanding the nuances of what makes us human. While we still seek to understand atoms, we can now aspire to understand people’s minds as well.
This article appears in the November 2019 print issue as “A Man-Machine Mind Meld for Quantum Computing.”
About the Authors
Ottó Elíasson, Carrie Weidner, Janet Rafner, and Shaeema Zaman Ahmed work with the ScienceAtHome project at Aarhus University in Denmark.