#432190 In the Future, There Will Be No Limit to ...

New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.

While they represent wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: artificial intelligence played a key role in each discovery.

One of the more recent and famous examples came out of NASA at the end of 2017, when the US space agency announced an eighth planet discovered in the Kepler-90 system. Scientists had trained a neural network—a computing system loosely modeled on the human brain—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.

The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.
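
For readers curious what that kind of training setup looks like, here is a minimal sketch of a binary light-curve classifier in Python. It follows the shape of the task described above, but the architecture, the 201-bin input size, and the random stand-in data are all illustrative assumptions; the actual study used its own convolutional network trained on real Kepler signals.

```python
# Minimal sketch of a planet-vs-false-positive classifier, assuming a
# phase-folded, binned light curve as input. All sizes and data here
# are stand-ins, not the study's actual pipeline.
import numpy as np
from tensorflow import keras

N_BINS = 201  # bins in the folded light curve (assumed size)

model = keras.Sequential([
    keras.Input(shape=(N_BINS, 1)),
    keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    keras.layers.MaxPooling1D(pool_size=3),
    keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    keras.layers.GlobalMaxPooling1D(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # P(real planet)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stand-in for ~15,000 vetted signals labeled 1 (planet) or 0 (false positive).
X = np.random.randn(15000, N_BINS, 1).astype("float32")
y = np.random.randint(0, 2, size=15000)
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.1)
```

Once trained, the same model can be run over weaker, unvetted signals and its output treated as a ranked list of candidates for human follow-up.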

The machine detected Kepler-90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through the nearly imperceptible dip in brightness that occurs when a planet passes in front of its star. It also found a sixth Earth-sized planet in the Kepler-80 system.

AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.

The three advances are much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.

AI systems can remember vast numbers of facts without error and extract information effortlessly from millions of scientific papers, not to mention exhibit flawless logical reasoning and near-optimal probabilistic reasoning, King says.

AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.

That’s partly why a team of glaciologists turned to machine learning to untangle the factors that determine how heat from Earth’s interior influences the ice sheet that blankets Greenland.

Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.

The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.

“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at the University of Kansas, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.

“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”
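
As a rough illustration of this kind of many-variable regression, the sketch below fits a gradient-boosted tree ensemble, a common choice for tabular geoscience data (though not necessarily the study's exact model), to stand-in data. The feature names are placeholders for a few of the 22 variables.

```python
# Illustrative sketch: predict geothermal heat flux from several
# geologic variables at once. The features, data, and model choice are
# placeholders, not the study's actual inputs or method.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000  # grid cells with a known heat flux value (stand-in)

X = pd.DataFrame({
    "bedrock_topography_m": rng.normal(0, 500, n),
    "crustal_thickness_km": rng.normal(35, 5, n),
    "magnetic_anomaly_nT":  rng.normal(0, 100, n),
    "dist_to_ridge_km":     rng.uniform(0, 2000, n),
    "dist_to_volcano_km":   rng.uniform(0, 2000, n),
})
y = rng.normal(60, 15, n)  # heat flux in mW/m^2 (stand-in)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Feature importances hint at which variables drive the prediction,
# echoing the finding that no single parameter determines heat flux.
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```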

The research was published last month in Geophysical Research Letters.

Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short- and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.

Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.

“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.

His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against strains of the malaria parasite that have developed resistance to other common therapies. The research was published in the journal Scientific Reports.

Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.

King argues that machines are better suited to unraveling the complexities of biological systems, since even the most “simple” organisms host thousands of genes, proteins, and small molecules that interact in complicated ways.

“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.

Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.

For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have developed a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are ripples in spacetime that emanate from massive, high-energy cosmic events, such as the explosion of a star in a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.

Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational-wave observatory composed of two enormous laser interferometers located nearly 2,000 miles apart in Washington State and Louisiana. The research was published in Physics Letters B.
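
Deep Filtering's core idea is a convolutional network that reads raw detector strain directly: one network flags candidate signals, and a second estimates source parameters such as the masses. Below is a toy sketch of the detection half in Python; the window length and layer sizes are assumptions, not the published configuration.

```python
# Toy sketch of CNN-based gravitational-wave detection: read a short
# window of (whitened) strain, output signal-vs-noise logits. Sizes
# are illustrative assumptions.
import torch
import torch.nn as nn

WINDOW = 8192  # samples per strain window (assumed)

class StrainClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)  # logits: [noise, signal]

    def forward(self, x):  # x: (batch, 1, WINDOW)
        return self.head(self.features(x).squeeze(-1))

net = StrainClassifier()
print(net(torch.randn(4, 1, WINDOW)).shape)  # torch.Size([4, 2])
```

The speed advantage comes at inference time: a forward pass through a small network is cheap enough to keep up with the detector's data stream, unlike exhaustive matched filtering against large template banks.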

In a more down-to-earth example, scientists published a paper last month in Science Advances describing a neural network called ConvNetQuake that detects and locates minor earthquakes from ground-motion recordings called seismograms.

ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new approach is particularly useful for monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking operations that inject wastewater deep underground.
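
The detection scheme reduces to sliding a fixed-length window along a continuous seismogram and classifying each window as noise or an event (ConvNetQuake also assigns events to one of several geographic clusters). The sketch below shows that scan with a trivial energy-threshold stand-in in place of the trained network; the window length, stride, and threshold are arbitrary assumptions.

```python
# Sliding-window event scan over a continuous seismogram. The
# classify() function is a stand-in for the trained CNN; a real system
# would return a geographic cluster label, not a threshold hit.
import numpy as np

RATE = 100           # samples per second (typical for seismic data)
WINDOW = 10 * RATE   # 10-second windows (assumed)
STRIDE = WINDOW      # non-overlapping windows, for simplicity

def classify(window: np.ndarray) -> int:
    """Stand-in for the trained network: 0 = noise, 1 = event."""
    return 1 if np.mean(window ** 2) > 4.0 else 0

seismogram = np.random.randn(3600 * RATE)                  # 1 hour of noise
seismogram[180_000:180_500] += 10 * np.random.randn(500)   # injected "event"

detections = [
    (start / RATE, classify(seismogram[start:start + WINDOW]))
    for start in range(0, len(seismogram) - WINDOW + 1, STRIDE)
]
print([t for t, label in detections if label != 0])  # expect ~[1800.0]
```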

King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.

“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”

Image Credit: Romaset / Shutterstock.com

#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of lifelike humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives one of these robots the slightest bit of thought will realize it is essentially a chatbot, with all the limitations that implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” The sensation will be familiar to anyone who has conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30+ motors suggests millions of possible configurations; researchers have identified 62 expressions in Sophia that they consider “human-like,” although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the comparatively simplistic chat engine the robots use, although it’s the chat engine that inflates expectations, via scripted responses to a few pre-agreed questions in an interview.

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—the strange blend of unease and revulsion we feel when something appears almost, but not quite, human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly: since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling, sidestepping the challenge entirely was the only way to avoid the valley. The task is probably made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human, even when they’re designed to interact with us. Consider the popularity of Aibo, Jibo, and other robots that don’t try to resemble humans. But something that resembles a human without being quite right is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he videotape and log the physical movements people made, but some of his robots are based on replicas of real people; the Repliee series began with a “replicant” of his daughter, which involved making a rubber replica—a silicone cast—of her entire body. Later experiments focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge we face. That is, if you want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com

#431995 The 10 Grand Challenges Facing Robotics ...

Robotics research has been making great strides in recent years, but there are still many hurdles to the machines becoming a ubiquitous presence in our lives. The journal Science Robotics has now identified 10 grand challenges the field will have to grapple with to make that a reality.

Editors conducted an online survey on unsolved challenges in robotics and assembled an expert panel of roboticists to shortlist the 30 most important topics, which were then grouped into 10 grand challenges that could have major impact in the next 5 to 10 years. Here’s what they came up with.

1. New Materials and Fabrication Schemes
Roboticists are beginning to move beyond motors, gears, and sensors by experimenting with things like artificial muscles, soft robotics, and new fabrication methods that combine multiple functions in one material. But most of these advances have been “one-off” demonstrations, which are not easy to combine.

Multi-functional materials merging things like sensing, movement, energy harvesting, or energy storage could allow more efficient robot designs. But combining these various properties in a single machine will require new approaches that blend micro-scale and large-scale fabrication techniques. Another promising direction is materials that can change over time to adapt or heal, but this requires much more research.

2. Bioinspired and Bio-Hybrid Robots
Nature has already solved many of the problems roboticists are trying to tackle, so many are turning to biology for inspiration or even incorporating living systems into their robots. But there are still major bottlenecks in reproducing the mechanical performance of muscle and the ability of biological systems to power themselves.

There has been great progress in artificial muscles, but their robustness, efficiency, and energy and power density need to be improved. Embedding living cells into robots can overcome challenges of powering small robots, as well as exploit biological features like self-healing and embedded sensing, though how to integrate these components is still a major challenge. And while a growing “robo-zoo” is helping tease out nature’s secrets, more work needs to be done on how animals transition between capabilities like flying and swimming to build multimodal platforms.

3. Power and Energy
Energy storage is a major bottleneck for mobile robotics. Rising demand from drones, electric vehicles, and renewable energy is driving progress in battery technology, but the fundamental challenges have remained largely unchanged for years.

That means that in parallel to battery development, there need to be efforts to minimize robots’ power utilization and give them access to new sources of energy. Enabling them to harvest energy from their environment and transmitting power to them wirelessly are two promising approaches worthy of investigation.

4. Robot Swarms
Swarms of simple robots that assemble into different configurations to tackle various tasks can be a cheaper, more flexible alternative to large, task-specific robots. Smaller, cheaper, more powerful hardware that lets simple robots sense their environment and communicate is combining with AI that can model the kind of behavior seen in nature’s flocks.

But there needs to be more work on the most efficient forms of control at different scales—small swarms can be controlled centrally, but larger ones need to be more decentralized. They also need to be made robust and adaptable to the changing conditions of the real world and resilient to deliberate or accidental damage. There also needs to be more work on swarms of non-homogeneous robots with complementary capabilities.
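
To make “decentralized” concrete: each robot acts only on what it can sense of its nearby neighbors, with no central controller. Here is a minimal consensus-style sketch in Python, an illustration of the principle rather than any particular swarm platform; all parameters are arbitrary.

```python
# Minimal decentralized swarm rule: every robot repeatedly steps toward
# the average position of the robots within its sensing radius. No
# robot knows the global state; aggregation emerges from local rules.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=(50, 2))  # 50 robots on a plane
RADIUS = 3.0  # sensing/communication range (arbitrary)
GAIN = 0.1    # step size toward the local average (arbitrary)

for _ in range(200):
    dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    neighbors = dists < RADIUS  # boolean adjacency (includes self)
    local_mean = (neighbors[:, :, None] * pos[None, :, :]).sum(axis=1) \
                 / neighbors.sum(axis=1, keepdims=True)
    pos += GAIN * (local_mean - pos)

print(pos.std(axis=0))  # the spread shrinks as the swarm aggregates
```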

5. Navigation and Exploration
A key use case for robots is exploring places where humans cannot go, such as the deep sea, space, or disaster zones. That means they need to become adept at exploring and navigating unmapped, often highly disordered and hostile environments.

The major challenges include creating systems that can adapt, learn, and recover from navigation failures and are able to make and recognize new discoveries. This will require high levels of autonomy that allow the robots to monitor and reconfigure themselves while being able to build a picture of the world from multiple data sources of varying reliability and accuracy.

6. AI for Robotics
Deep learning has revolutionized machines’ ability to recognize patterns, but that needs to be combined with model-based reasoning to create adaptable robots that can learn on the fly.

Key to this will be creating AI that’s aware of its own limitations and can learn how to learn new things. It will also be important to create systems that are able to learn quickly from limited data rather than the millions of examples used in deep learning. Further advances in our understanding of human intelligence will be essential to solving these problems.

7. Brain-Computer Interfaces
BCIs will enable seamless control of advanced robotic prosthetics but could also prove a faster, more natural way to communicate instructions to robots or simply help them understand human mental states.

Most current approaches to measuring brain activity are expensive and cumbersome, though, so work on compact, low-power, and wireless devices will be important. They also tend to involve extended training, calibration, and adaptation due to the imprecise nature of reading brain activity. And it remains to be seen if they will outperform simpler techniques like eye tracking or reading muscle signals.

8. Social Interaction
If robots are to enter human environments, they will need to learn to deal with humans. But this will be difficult, as we have very few concrete models of human behavior and we are prone to underestimate the complexity of what comes naturally to us.

Social robots will need to perceive minute social cues like facial expression or intonation, understand the cultural and social context they are operating in, and model the mental states of the people they interact with so they can tailor their behavior, both in the short term and as they develop long-standing relationships.

9. Medical Robotics
Medicine is one of the areas where robots could have significant impact in the near future. Devices that augment a surgeon’s capabilities are already in regular use, but the challenge will be to increase the autonomy of these systems in such a high-stakes environment.

Autonomous robot assistants will need to be able to recognize human anatomy in a variety of contexts and be able to use situational awareness and spoken commands to understand what’s required of them. In surgery, autonomous robots could perform the routine steps of a procedure, giving way to the surgeon for more complicated patient-specific bits.

Micro-robots that operate inside the human body also hold promise, but there are still many roadblocks to their adoption, including effective delivery systems, tracking and control methods, and crucially, finding therapies where they improve on current approaches.

10. Robot Ethics and Security
As the preceding challenges are overcome and robots are increasingly integrated into our lives, this progress will create new ethical conundrums. Most importantly, we may become over-reliant on robots.

That could lead to humans losing certain skills and capabilities, making us unable to take the reins in the case of failures. We may end up delegating tasks that should, for ethical reasons, have some human supervision, and allow people to pass the buck to autonomous systems in the case of failure. It could also reduce self-determination, as human behaviors change to accommodate the routines and restrictions required for robots and AI to work effectively.

Image Credit: Zenzen / Shutterstock.com

#431939 This Awesome Robot Is the Size of a ...

They say size isn’t everything, but when it comes to delta robots it seems like it’s pretty important.

The speed and precision of these machines see them employed in delicate pick-and-place tasks in all kinds of factories, as well as in controlling 3D printer heads. But Harvard researchers have found that scaling them down to the millimeter scale makes them even faster and more precise, opening up applications in everything from microsurgery to manipulating tiny objects like circuit board components or even living cells.

Unlike the industrial robots you’re probably more familiar with, delta robots consist of three individually controlled arms supporting a single platform. Different combinations of arm movements can position the platform along three axes, and a variety of tools can be attached to it.

The benefit of this design is that, unlike in a typical robotic arm, all the motors are housed at the base rather than at the joints, which reduces mechanical complexity but also—importantly—the weight of the arms. That means they can move and accelerate faster and with greater precision.
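
A pleasant side effect of this geometry is that the motor angle for each arm can be computed from the desired platform position in closed form. The sketch below walks through the standard delta inverse-kinematics solution in Python; the dimensions are made up, and real designs differ in sign conventions and which elbow configuration they use.

```python
# Closed-form delta-robot inverse kinematics: given a platform target
# (x, y, z), solve each motor angle from the rod-length constraint,
# which reduces to A*cos(t) + B*sin(t) = C per arm. Dimensions are
# assumed for illustration.
import math

R, r = 0.10, 0.05  # base and platform joint radii in meters (assumed)
L, l = 0.20, 0.40  # upper-arm and connecting-rod lengths (assumed)

def arm_angle(x, y, z, phi):
    """Motor angle in radians for the arm mounted at azimuth phi."""
    # Rotate the target into this arm's vertical plane.
    xp = math.cos(phi) * x + math.sin(phi) * y
    yp = -math.sin(phi) * x + math.cos(phi) * y
    a = xp + r - R
    A = -2 * a * L                            # cos coefficient
    B = 2 * z * L                             # sin coefficient
    C = l**2 - L**2 - a**2 - yp**2 - z**2
    mag = math.hypot(A, B)
    if abs(C) > mag:
        raise ValueError("target out of reach")
    # A*cos(t) + B*sin(t) = C  =>  t = atan2(B, A) +/- acos(C / mag);
    # the "+" branch picks one elbow configuration.
    return math.atan2(B, A) + math.acos(C / mag)

def inverse_kinematics(x, y, z):
    return [arm_angle(x, y, z, i * 2 * math.pi / 3) for i in range(3)]

print([round(t, 3) for t in inverse_kinematics(0.0, 0.0, -0.4)])
```

Because each angle is a closed-form expression rather than an iterative solve, the controller can recompute all three at whatever rate the actuators demand, which is part of why delta platforms can be driven so fast.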

It’s been known for a while that the physics of these robots means the smaller you can make them, the more pronounced these advantages become, but scientists had struggled to build them at scales below tens of centimeters.

In a recent paper in the journal Science Robotics, the researchers describe how they used an origami-inspired micro-fabrication approach that relies on folding flat sheets of composite materials to create a robot measuring just 15 millimeters by 15 millimeters by 20 millimeters.

The robot, dubbed “milliDelta,” features joints that rely on a flexible polymer core to bend—a simplified version of the more complicated joints found in large-scale delta robots. The machine is powered by three piezoelectric actuators, which flex when a voltage is applied, and can perform movements at frequencies 15 to 20 times higher than those of current delta robots, with precision down to roughly 5 micrometers.

One potential use for the device is to cancel out surgeons’ hand tremors as they carry out delicate microsurgery procedures, such as operations on the eye’s retina. The researchers investigated this application in their paper: they had volunteers hold a toothpick and measured the movement of its tip to map natural hand tremor. They then fed this data to the milliDelta, which was able to track the tremor and move in opposition, canceling it out.
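
What makes this plausible is that physiological hand tremor is concentrated in a narrow frequency band, roughly 8 to 12 Hz, well separated from deliberate motion. A toy version of the idea in Python: band-pass the measured tip motion around that band and command the platform to move in anti-phase. The band, filter, and sample rate below are assumptions for illustration, not the paper's pipeline.

```python
# Toy tremor cancellation: isolate the ~8-12 Hz tremor band from the
# measured tip motion and command the opposite displacement. Offline
# demo only; a real-time controller would need a causal, predictive
# filter instead of zero-phase filtfilt.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sensor sample rate in Hz (assumed)
t = np.arange(0, 2, 1 / FS)

# Simulated toothpick-tip motion: slow voluntary drift plus 10 Hz tremor.
motion = 0.5 * np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.sin(2 * np.pi * 10 * t)

b, a = butter(2, [8, 12], btype="bandpass", fs=FS)
tremor = filtfilt(b, a, motion)  # estimated tremor component
command = -tremor                # anti-phase platform displacement
residual = motion + command      # motion the tool tip would actually see

print(f"tremor band RMS before: {tremor.std():.4f}, "
      f"after: {filtfilt(b, a, residual).std():.4f}")
```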

In an email to Singularity Hub, the researchers said that adding the robot to the end of a surgical tool could make it possible to stabilize needles or scalpels, though this would require some design optimization. For a start, the base would have to be redesigned to fit on a surgical tool, and sensors would have to be added to the robot to allow it to measure tremors in real time.

Another promising application for the device would be placing components on circuit boards at very high speeds, which could prove valuable in electronics manufacturing. The researchers even think the device’s precision means it could be used for manipulating living cells in research and clinical laboratories.

The researchers also said it would be feasible to integrate the device onto microrobots to give them similarly impressive manipulation capabilities, though that would require considerable work to overcome control and sensing challenges.

Image Credit: Wyss Institute / Harvard

#431920 If We Could Engineer Animals to Be as ...

Advances in neural implants and genetic engineering suggest that in the not-too-distant future we may be able to boost human intelligence. If that’s true, could we—and should we—bring our animal cousins along for the ride?

Human brain augmentation made headlines last year after several tech firms announced ambitious efforts to build neural implant technology. Duke University neuroscientist Mikhail Lebedev told me in July it could be decades before these devices have applications beyond the strictly medical.

But he said the technology, as well as other pharmacological and genetic engineering approaches, will almost certainly allow us to boost our mental capacities at some point in the next few decades.

Whether this kind of cognitive enhancement is a good idea or not, and how we should regulate it, are matters of heated debate among philosophers, futurists, and bioethicists, but for some it has raised the question of whether we could do the same for animals.

There’s already tantalizing evidence of the idea’s feasibility. As detailed in BBC Future, a group from MIT found that mice genetically engineered to express the human FOXP2 gene, which is linked to learning and speech processing, picked up maze routes faster. Another group at Wake Forest University studying Alzheimer’s found that neural implants could boost rhesus monkeys’ scores on intelligence tests.

The concept of “animal uplift” is most famously depicted in the Planet of the Apes movie series, whose planet-conquering protagonists are likely to put most people off the idea. But proponents are less pessimistic about the outcomes.

Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.

Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans.

Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.

The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.

In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some well below our own and others superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.

Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.

There are also fundamental barriers that may make it difficult to achieve human-level cognitive capabilities in animals, no matter how advanced brain augmentation technology gets. In 2013, Swedish researchers selectively bred small fish called guppies for bigger brains. This made them smarter, but growing the energy-intensive organ meant the guppies developed smaller guts and produced fewer offspring to compensate.

This highlights the fact that uplifting animals may require more than just changes to their brains, possibly a complete rewiring of their physiology that could prove far more technically challenging than human brain augmentation.

Our intelligence is intimately tied to our evolutionary history—our brains are bigger than other animals’; opposable thumbs allow us to use tools; our vocal cords make complex communication possible. No matter how much you augment a cow’s brain, it still couldn’t use a screwdriver or talk to you in English, because it simply doesn’t have the machinery.

Finally, from a purely selfish point of view, even if it does become possible to create a level playing field between us and other animals, it may not be a smart move for humanity. There’s no reason to assume animals would be any more benevolent than we are, having evolved in the same “survival of the fittest” crucible that we have. And given our already endless capacity to divide ourselves along national, religious, or ethnic lines, conflict between species seems inevitable.

We’re already likely to face considerable competition from smart machines in the coming decades, if you believe the hype around AI. So maybe adding a few more intelligent species to the mix isn’t the best idea.
Image Credit: Ron Meijer / Shutterstock.com