Tag Archives: security

#433278 Outdated Evolution: Updating Our ...

What happens when evolution shapes an animal for tribes of 150 primitive individuals living in a chaotic jungle, and then suddenly that animal finds itself living with millions of others in an engineered metropolis, their pockets all bulging with devices of godlike power?

The result, it seems, is a modern era of tension where archaic forms of governance struggle to keep up with the technological advances of their citizenry, where governmental policies act like constraining bottlenecks rather than spearheads of progress.

Simply put, our governments have failed to adapt to disruptive technologies. And if we are to regain our stability moving forward into a future of even greater disruption, it’s imperative that we understand the issues that got us into this situation and what kind of solutions we can engineer to overcome our governmental weaknesses.

Hierarchy vs. Technological Decentralization
Many of the greatest issues our governments face today come from humanity’s biologically-hardwired desire for centralized hierarchies. This innate proclivity toward building and navigating systems of status and rank was an evolutionary gift handed down to us by our ape ancestors, among whom each member of a community carried a mental map of the social hierarchy. Their nervous systems behaved differently depending on their rank in this hierarchy, influencing their interactions in a way that ensured only the most competent ape would rise to the top to gain access to the best food and mates.

As humanity emerged and discovered the power of language, we continued this practice by ensuring that those at the top of the hierarchies, those with the greatest education and access to information, were the dominant decision-makers for our communities.

However, this kind of structured chain of power is only necessary when we’re operating under conditions of scarcity. And resources, including information, are no longer scarce.

It’s estimated that more than two-thirds of adults in the world now own a smartphone, giving the average citizen the same access to the world’s information as the leaders of our governments. And with global poverty falling from 35.5 percent to 10.9 percent over the last 25 years, our younger generations are growing up seeing automation and abundance as a likely default, where innovations like solar energy, lab-grown meat, and 3D printing are expected to become commonplace.

It’s awareness of this paradigm shift that has empowered the recent rise of decentralization. As information and access to resources become ubiquitous, there is noticeably less need for our inefficient and bureaucratic hierarchies.

For example, if blockchain can prove its feasibility for large-scale systems, it can be used to update and upgrade numerous applications to a decentralized model, including currency and voting. Such innovations would lower the risk of failing banks collapsing the economy like they did in 2008, as well as prevent corrupt politicians from using gerrymandering and long queues at polling stations to deter voter participation.
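To make the “public ledger” idea concrete, here is a minimal sketch of how chained hashes make a shared record tamper-evident. It is only an illustration—real blockchains add consensus, digital signatures, and peer-to-peer replication—and the ballot strings are invented for the example:

```python
import hashlib, json, time

def make_block(records, prev_hash):
    """Bundle records with the previous block's hash so history can't be silently edited."""
    block = {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash; any edit to an earlier block breaks the links that follow."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["ballot: precinct 7, candidate A"], chain[-1]["hash"]))
chain.append(make_block(["ballot: precinct 7, candidate B"], chain[-1]["hash"]))

print(verify(chain))                                          # True
chain[1]["records"][0] = "ballot: precinct 7, candidate B"    # tamper with an old record
print(verify(chain))                                          # False: the edit is detectable
```

The point is simply that once a record is woven into the chain, quietly rewriting history becomes detectable by anyone holding a copy of the ledger.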

Of course, technology isn’t a magic wand that should be implemented carelessly. Facebook’s “move fast and break things” approach may well have broken American democracy in 2016, as social media played on some of the worst tendencies humanity brings to an election: fear and hostility.

But if decentralized technology, like blockchain’s public ledgers, can continue to spread a sense of security and transparency throughout society, perhaps we can begin to quiet the paranoia and hyper-vigilance our brains evolved while coping with life as apes in dangerous jungles. By decentralizing our power structures, we take away the channels our outdated biological behaviors might use to enact social dominance and manipulation.

The peace of mind this creates helps to reestablish trust in our communities and in our governments. And with trust in the government increased, it’s likely we’ll see our next issue corrected.

From Business and Law to Science and Technology
A study found that 59 percent of US presidents, 68 percent of vice presidents, and 78 percent of secretaries of state were lawyers by education and occupation. That’s more than half of the most powerful positions in the American government held by people trained in a field dedicated to convincing others (judges) that their perspective is true, even when they lack evidence.

And so the scientific method became less important than semantics to our leaders.

Similarly, of the 535 members of the American Congress, only 24 hold a PhD, and only two of those are in a STEM field. So far, it’s not getting better: Trump is the first president since WWII not to name a science advisor.

But if we can use technologies like blockchain to increase transparency, efficiency, and trust in the government, then the upcoming generations who understand decentralization, abundance, and exponential technologies might feel inspired enough to run for government positions. This helps solve that common problem where the smartest and most altruistic people tend to avoid government positions because they don’t want to play the semantic and deceitful game of politics.

By changing this narrative, our governments can begin to fill with techno-progressive individuals who actually understand the technologies that are rapidly reshaping our reality. And this influx of expertise is going to be crucial as our governments are forced to restructure and create new policies to accommodate the incoming disruption.

Clearing Regulations to Begin Safe Experimentation
As exponential technologies become more ubiquitous, we’re likely going to see young kids and garage tinkerers creating powerful AIs and altering genetics thanks to tools like CRISPR and free virtual reality tutorials.

This easy access to such powerful technology means unexpected and rapid progress can occur almost overnight, quickly overwhelming our government’s regulatory systems.

Uber and Airbnb are two of the best examples of our government’s inability to keep up with such technology, both companies achieving market dominance before regulators were even able to consider how to handle them. And even when governments rule against them, they often continue to operate because people simply choose to keep using the apps.

Luckily, this kind of disruption hasn’t yet posed a major existential threat. But this will change when we see companies begin developing cyborg body parts, brain-computer interfaces, nanobot health injectors, and at-home genetic engineering kits.

For this reason, it’s crucial that we have experts who understand how to update our regulations to be as flexible as is necessary to ensure we don’t create black market conditions like we’ve done with drugs. It’s better to have safe and monitored experimentation, rather than forcing individuals into seedy communities using unsafe products.

Survival of the Most Adaptable
If we hope to be an animal that survives our changing environment, we have to adapt. We cannot cling to the behaviors and systems formed thousands of years ago. We must instead acknowledge that we now exist in an ecosystem of disruptive technology, and we must evolve and update our governments if they’re going to be capable of navigating these transformative impacts.

Image Credit: mmatee / Shutterstock.com

Posted in Human Robots

#432249 New Malicious AI Report Outlines Biggest ...

Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Watch the synthetic footage of Barack Obama generated from his speeches, or listen to Lyrebird’s voice impersonations. You could easily, today or in the very near future, create a forgery that might be indistinguishable from the real thing. What would that do to politics?

Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to counteract threats, is already undermined by a lack of agreement on the facts. Once you can’t believe the evidence of your senses anymore, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.

How to solve the problem? Some have suggested that media websites like Facebook or Twitter should carry software that probes every video to see if it’s a deep fake or not and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.

The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?

The point is, in the same way that you don’t need human-level, general AI or humanoid robotics to create systems that can cause disruption in the world of work, you also don’t need a general intelligence to threaten security and wreak havoc on society. Andrew Ng, AI researcher, says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” There are clearly risks that arise even from the simple algorithms we have today.

The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.

Some of the concerns the report explores are enhancements to familiar threats.

Automated hacking can get better and smarter, with algorithms that adapt to changing security protocols. “Phishing emails,” where people are scammed by impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a great volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets by impersonating family members, but can be labor intensive. If AI algorithms enable every phishing scam to become sharper in this way, more people are going to get scammed.

Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.

These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A rather scary application is adversarial examples. Machine learning algorithms are often used for image recognition. But it’s possible, if you know a little about how the algorithm is structured, to construct precisely the right pattern of noise to add to an image and fool the machine. The two images can be almost completely indistinguishable to the human eye, yet by adding some cleverly calculated noise, hackers can fool the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing out examples on stickers.
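To see how little it takes, here is a minimal sketch of the fast-gradient-sign idea using a toy linear classifier in plain NumPy. This is not the deep network from the OpenAI panda/gibbon example; for a real network, the gradient of the loss with respect to the input plays the role that the weight vector plays here:

```python
import numpy as np

# Toy linear "image classifier": score = w . x, class 1 if the score is positive.
rng = np.random.default_rng(0)
w = rng.normal(size=784)                 # stand-in for a trained model's weights
x = rng.uniform(0, 1, size=784)          # stand-in for a flattened 28x28 image

def predict(img):
    return int(w @ img > 0)

# Fast-gradient-sign-style attack: nudge every pixel a tiny amount in the
# direction that pushes the model's score toward the opposite class.
eps = 0.1                                # per-pixel perturbation budget
push = -1 if predict(x) == 1 else 1      # which way to push the score
x_adv = np.clip(x + eps * push * np.sign(w), 0, 1)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))      # typically flipped
print("largest pixel change:  ", np.abs(x_adv - x).max())
```

Each pixel moves by at most 0.1—far too little for a person to notice—yet the tiny changes all push the score the same way, so the model’s decision typically flips.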

Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.

As OpenAI freely admits, worrying about whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”

There are ways to defend against these attacks.

Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.
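As a rough illustration of what that arms race looks like in code, here is a toy adversarial-training loop on a logistic-regression “model.” The data, perturbation budget, and learning rate are all invented, and a real defense would involve a deep network and stronger attacks:

```python
import numpy as np

rng = np.random.default_rng(1)

def fgsm(X, y, w, eps):
    """Perturb inputs in the direction that increases the logistic loss."""
    p = 1 / (1 + np.exp(-(X @ w)))
    grad_x = (p - y)[:, None] * w            # d(loss)/d(input) for logistic regression
    return X + eps * np.sign(grad_x)

# Toy data: two Gaussian blobs in 20 dimensions.
n, d = 500, 20
X = np.vstack([rng.normal(-1, 1, (n, d)), rng.normal(1, 1, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w = np.zeros(d)
lr, eps = 0.1, 0.2
for _ in range(200):
    # Adversarial training: fit on clean AND adversarially perturbed copies.
    X_adv = fgsm(X, y, w, eps)
    X_all, y_all = np.vstack([X, X_adv]), np.concatenate([y, y])
    p = 1 / (1 + np.exp(-(X_all @ w)))
    w -= lr * (X_all.T @ (p - y_all)) / len(y_all)

# A robustly trained model should still classify perturbed points correctly.
acc_clean = ((X @ w > 0) == y).mean()
acc_adv = ((fgsm(X, y, w, eps) @ w > 0) == y).mean()
print(f"clean accuracy: {acc_clean:.2f}, adversarial accuracy: {acc_adv:.2f}")
```

The catch, as the report notes, is that this only hardens the model against the specific kind of perturbation you trained on; attackers are free to look for the next blind spot.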

Just look at the Meltdown and Spectre vulnerabilities, which existed in processors for more than 20 years before being discovered and could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.

This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.

As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.

Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.

Image Credit: lolloj / Shutterstock.com

Posted in Human Robots

#432036 The Power to Upgrade Our Own Biology Is ...

Upgrading our biology may sound like science fiction, but attempts to improve humanity actually date back thousands of years. Every day, we enhance ourselves through seemingly mundane activities such as exercising, meditating, or consuming performance-enhancing drugs, such as caffeine or Adderall. However, the tools with which we upgrade our biology are improving at an accelerating rate and becoming increasingly invasive.

In recent decades, we have developed a wide array of powerful methods, such as genetic engineering and brain-machine interfaces, that are redefining our humanity. In the short run, such enhancement technologies have medical applications and may be used to treat many diseases and disabilities. Additionally, in the coming decades, they could allow us to boost our physical abilities or even digitize human consciousness.

What’s New?
Many futurists argue that our devices, such as our smartphones, are already an extension of our cortex and in many ways an abstract form of enhancement. According to philosophers Andy Clark and David Chalmers’ theory of extended mind, we use technology to expand the boundaries of the human mind beyond our skulls.

One can argue that having access to a smartphone enhances one’s cognitive capacities and is an indirect form of enhancement in its own right—an abstract kind of brain-machine interface. Beyond that, wearable devices and computers are already on the market, and people such as athletes use them to track and boost their progress.

However, these interfaces are becoming less abstract.

Not long ago, Elon Musk announced a new company, Neuralink, with the goal of merging the human mind with AI. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing brains to communicate with one another purely through brainwaves.

Ethical Challenges of Enhancement
There are many social and ethical implications of such advancements.

One of the most fundamental issues with cognitive and physical enhancement techniques is that they contradict the very definition of merit and success that society has relied on for millennia. Many forms of performance-enhancing drugs have been considered “cheating” for the longest time.

But perhaps we ought to revisit some of our fundamental assumptions as a society.

For example, we like to credit hard work and talent in a fair manner, where “fair” generally implies that individuals have acted in ways that earned them their rewards. If you are talented and successful, it is considered to be because you chose to work hard and take advantage of the opportunities available to you. But by these standards, how much of our accomplishments can we truly be credited for?

For instance, the genetic lottery can have an enormous impact on an individual’s predisposition and personality, which can in turn affect factors such as motivation, reasoning skills, and other mental abilities. Many people are born with a natural ability or a physique that gives them an advantage in a particular area or predisposes them to learn faster. But is it justified to reward someone for excellence if their genes had a pivotal role in their path to success?

Beyond that, there are already many ways in which we take “shortcuts” to better mental performance. Seemingly mundane activities like drinking coffee, meditating, exercising, or sleeping well can boost one’s performance in any given area and are tolerated by society. Even the use of language can have positive physical and psychological effects on the human brain, which can be liberating to the individual and immensely beneficial to society at large. And let’s not forget that some of us are born with far more access to literacy and education than others.

Given all these reasons, one could argue that cognitive abilities and talents are currently derived more from uncontrollable factors and luck than we like to admit. If anything, technologies like brain-machine interfaces can enhance individual autonomy and give people a choice over how capable they become.

As Karim Jebari points out (pdf), if a certain characteristic or trait is required to perform a particular role and an individual lacks this trait, would it be wrong to implement the trait through brain-machine interfaces or genetic engineering? How is this different from any conventional form of learning or acquiring a skill? If anything, this would be removing limitations on individuals that result from factors outside their control, such as biological predisposition (or even traits induced from traumatic experiences) to act or perform in a certain way.

Another major ethical concern is equality. As with any other emerging technology, there are valid concerns that cognitive enhancement tech will benefit only the wealthy, thus exacerbating current inequalities. This is where public policy and regulations can play a pivotal role in the impact of technology on society.

Enhancement technologies can either contribute to inequality or allow us to solve it. Educating and empowering the under-privileged could happen at a much more rapid rate, helping the overall rate of human progress accelerate. The “normal range” for human capacity and intelligence, however it is defined, could shift dramatically upward.

Many have also raised concerns over the negative applications of government-led biological enhancement, including eugenics-like movements and super-soldiers. Naturally, there are also issues of safety, security, and well-being, especially within the early stages of experimentation with enhancement techniques.

Brain-machine interfaces, for instance, could have implications for autonomy. The interface involves using information extracted from the brain to stimulate or modify systems in order to accomplish a goal. This part of the process can be enhanced by adding an artificial intelligence layer to the interface—which opens up the possibility of a third party manipulating individuals’ personalities, emotions, and desires through it.

A Tool For Transcendence
It’s important to discuss these risks, not so that we begin to fear and avoid such technologies, but so that we continue to advance in a way that minimizes harm and allows us to optimize the benefits.

Stephen Hawking notes that “with genetic engineering, we will be able to increase the complexity of our DNA, and improve the human race.” Indeed, the potential advantages of modifying biology are revolutionary. Doctors would gain access to a powerful tool to tackle disease, allowing us to live longer and healthier lives. We might be able to extend our lifespan and tackle aging, perhaps a critical step to becoming a space-faring species. We may begin to modify the brain’s building blocks to become more intelligent and capable of solving grand challenges.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. Human enhancement is bringing us closer to such a world—it could allow us to take control of our evolution and truly shape the future of our species.

Image Credit: GrAl / Shutterstock.com

Posted in Human Robots

#432027 We Read This 800-Page Report on the ...

The longevity field is bustling but still fragmented, and the “silver tsunami” is coming.

That is the takeaway of The Science of Longevity, the behemoth first volume of a four-part series offering a bird’s-eye view of the longevity industry in 2017. The report, a joint production of the Biogerontology Research Foundation, Deep Knowledge Life Science, Aging Analytics Agency, and Longevity.International, synthesizes the growing array of academic and industry ventures related to aging, healthspan, and everything in between.

This is huge, not only in scale but also in ambition. The report, totally worth a read here, will be followed by four additional volumes in 2018, covering topics ranging from the business side of longevity ventures to financial systems to potential tensions between life extension and religion.

And that’s just the first step. The team hopes to publish updated versions of the report annually, giving scientists, investors, and regulatory agencies an easy way to keep their finger on the longevity pulse.

“In 2018, ‘aging’ remains an unnamed adversary in an undeclared war. For all intents and purposes it is mere abstraction in the eyes of regulatory authorities worldwide,” the authors write.

That needs to change.

People often arrive at the field of aging from disparate areas with wildly diverse opinions and strengths. The report compiles these individual efforts at cracking aging into a systematic resource—a “periodic table” for longevity that clearly lays out emerging trends and promising interventions.

The ultimate goal? A global framework serving as a road map to guide the burgeoning industry. With such a framework in hand, academics and industry alike are finally poised to petition for the kind of large-scale investment and regulatory change needed to tackle aging with a unified front.

Infographic depicting many of the key research hubs and non-profits within the field of geroscience.
Image Credit: Longevity.International
The Aging Globe
The global population is rapidly aging. And our medical and social systems aren’t ready to handle this oncoming “silver tsunami.”

Take the medical field. Many age-related diseases such as Alzheimer’s lack effective treatment options. Others, including high blood pressure, stroke, lung or heart problems, require continuous medication and monitoring, placing enormous strain on medical resources.

What’s more, because disease risk rises exponentially with age, medical care for the elderly becomes a game of whack-a-mole: curing any individual disease such as cancer only increases healthy lifespan by two to three years before another one hits.

That’s why in recent years there’s been increasing support for turning the focus to the root of the problem: aging. Rather than tackling individual diseases, geroscience aims to add healthy years to our lifespan—extending “healthspan,” so to speak.

Despite this relative consensus, the field still faces a roadblock. The US FDA does not yet recognize aging as a bona fide disease. Without such a designation, scientists are banned from testing potential interventions for aging in clinical trials (that said, many have used alternate measures such as age-related biomarkers or Alzheimer’s symptoms as a proxy).

Luckily, the FDA’s stance is set to change. The promising anti-aging drug metformin, for example, is already in clinical trials, examining its effect on a variety of age-related symptoms and diseases. This report, and others to follow, may help push progress along.

“It is critical for investors, policymakers, scientists, NGOs, and influential entities to prioritize the amelioration of the geriatric world scenario and recognize aging as a critical matter of global economic security,” the authors say.

Biomedical Gerontology
The causes of aging are complex, stubborn, and not all clear.

But the report lays out two main streams of intervention with already promising results.

The first is to understand the root causes of aging and stop them before damage accumulates. It’s like meddling with cogs and other inner workings of a clock to slow it down, the authors say.

The report lays out several treatments to keep an eye on.

Geroprotective drugs are a big one. Often repurposed from drugs already on the market, these traditional small-molecule drugs target a wide variety of metabolic pathways that play a role in aging. Think antioxidants, anti-inflammatories, and drugs that mimic caloric restriction, a proven way to extend healthspan in animal models.

More exciting are the emerging technologies. One is nanotechnology. Nanoparticles of carbon, “bucky-balls,” for example, have already been shown to fight viral infections and dangerous ion particles, as well as stimulate the immune system and extend lifespan in mice (though others question the validity of the results).

Blood is another promising, if surprising, fountain of youth: recent studies found that molecules in the blood of the young rejuvenate the heart, brain, and muscles of aged rodents, though many of these findings have yet to be replicated.

Rejuvenation Biotechnology
The second approach is repair and maintenance.

Rather than meddling with the inner clockwork, here we wind the hands of the clock back. The main example? Stem cell therapy.

This type of approach would especially benefit the brain, which harbors small, scattered numbers of stem cells that deplete with age. For neurodegenerative diseases like Alzheimer’s, in which neurons progressively die off, stem cell therapy could in theory replace those lost cells and mend those broken circuits.

Once a blue-sky idea, the discovery of induced pluripotent stem cells (iPSCs), where scientists can turn skin and other mature cells back into a stem-like state, hugely propelled the field into near reality. But to date, stem cells haven’t been widely adopted in clinics.

It’s “a toolkit of highly innovative, highly invasive technologies with clinical trials still a great many years off,” the authors say.

But there is a silver lining. The boom in 3D tissue printing offers an alternative approach to stem cells in replacing aging organs. Recent investment from the Methuselah Foundation and other institutions suggests interest remains high, even though the technology is still a ways from mainstream use.

A Disruptive Future
“We are finally beginning to see an industry emerge from mankind’s attempts to make sense of the biological chaos,” the authors conclude.

Looking through the trends, they identified several technologies rapidly gaining steam.

One is artificial intelligence, which is already used to bolster drug discovery. Machine learning may also help identify new longevity genes or bring personalized medicine to the clinic based on a patient’s records or biomarkers.

Another is senolytics, a class of drugs that kill off “zombie cells.” Over 10 prospective candidates are already in the pipeline, with some expected to enter the market in less than a decade, the authors say.

Finally, there’s the big gun—gene therapy. The treatment, unlike others mentioned, can directly target the root of any pathology. With a snip (or a swap), genetic tools can turn off damaging genes or switch on ones that promote a youthful profile. It is the most preventative technology at our disposal.

There have already been some success stories in animal models. Rodents given a gene-therapy boost in telomerase activity—which lengthens the protective caps on the ends of DNA strands—live healthier for longer.

“Although it is the prospect farthest from widespread implementation, it may ultimately prove the most influential,” the authors say.

Ultimately, can we stop the silver tsunami before it strikes?

Perhaps not, the authors say. But we do have defenses: the technologies outlined in the report, though still immature, could one day stop the oncoming tidal wave in its tracks.

Now we just have to bring them out of the lab and into the real world. To push the transition along, the team launched Longevity.International, an online meeting ground that unites various stakeholders in the industry.

By providing scientists, entrepreneurs, investors, and policy-makers a platform for learning and discussion, the authors say, we may finally generate enough drive to implement our defenses against aging. The war has begun.

Read the report in full here, and watch out for others coming soon here. The second part of the report profiles 650 (!!!) longevity-focused research hubs, non-profits, scientists, conferences, and literature. It’s an enormously helpful resource—totally worth keeping in your back pocket for future reference.

Image Credit: Worraket / Shutterstock.com

Posted in Human Robots

#431995 The 10 Grand Challenges Facing Robotics ...

Robotics research has been making great strides in recent years, but there are still many hurdles to the machines becoming a ubiquitous presence in our lives. The journal Science Robotics has now identified 10 grand challenges the field will have to grapple with to make that a reality.

Editors conducted an online survey on unsolved challenges in robotics and assembled an expert panel of roboticists to shortlist the 30 most important topics, which were then grouped into 10 grand challenges that could have major impact in the next 5 to 10 years. Here’s what they came up with.

1. New Materials and Fabrication Schemes
Roboticists are beginning to move beyond motors, gears, and sensors by experimenting with things like artificial muscles, soft robotics, and new fabrication methods that combine multiple functions in one material. But most of these advances have been “one-off” demonstrations, which are not easy to combine.

Multi-functional materials merging things like sensing, movement, energy harvesting, or energy storage could allow more efficient robot designs. But combining these various properties in a single machine will require new approaches that blend micro-scale and large-scale fabrication techniques. Another promising direction is materials that can change over time to adapt or heal, but this requires much more research.

2. Bioinspired and Bio-Hybrid Robots
Nature has already solved many of the problems roboticists are trying to tackle, so many are turning to biology for inspiration or even incorporating living systems into their robots. But there are still major bottlenecks in reproducing the mechanical performance of muscle and the ability of biological systems to power themselves.

There has been great progress in artificial muscles, but their robustness, efficiency, and energy and power density need to be improved. Embedding living cells into robots can overcome challenges of powering small robots, as well as exploit biological features like self-healing and embedded sensing, though how to integrate these components is still a major challenge. And while a growing “robo-zoo” is helping tease out nature’s secrets, more work needs to be done on how animals transition between capabilities like flying and swimming, so that roboticists can build multimodal platforms.

3. Power and Energy
Energy storage is a major bottleneck for mobile robotics. Rising demand from drones, electric vehicles, and renewable energy is driving progress in battery technology, but the fundamental challenges have remained largely unchanged for years.

That means that in parallel to battery development, there need to be efforts to minimize robots’ power utilization and give them access to new sources of energy. Enabling them to harvest energy from their environment and transmitting power to them wirelessly are two promising approaches worthy of investigation.

4. Robot Swarms
Swarms of simple robots that assemble into different configurations to tackle various tasks can be a cheaper, more flexible alternative to large, task-specific robots. Smaller, cheaper, more powerful hardware that lets simple robots sense their environment and communicate is combining with AI that can model the kind of behavior seen in nature’s flocks.

But there needs to be more work on the most efficient forms of control at different scales—small swarms can be controlled centrally, but larger ones need to be more decentralized. They also need to be made robust and adaptable to the changing conditions of the real world and resilient to deliberate or accidental damage. There also needs to be more work on swarms of non-homogeneous robots with complementary capabilities.
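As a rough sketch of what decentralized, flock-like control can look like, here is a toy Reynolds-style “boids” simulation in which each robot steers using only the neighbors it can sense locally; the gains, sensing radius, and arena size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
pos = rng.uniform(0, 100, (n, 2))        # robot positions in a 100 x 100 arena
vel = rng.normal(0, 1, (n, 2))           # robot velocities

def step(pos, vel, radius=20.0, dt=0.1):
    """One update: each robot reacts only to neighbors within its sensing range."""
    new_vel = vel.copy()
    for i in range(n):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist > 0) & (dist < radius)
        if nbrs.any():
            cohesion   = pos[nbrs].mean(axis=0) - pos[i]     # steer toward local center
            alignment  = vel[nbrs].mean(axis=0) - vel[i]     # match neighbors' heading
            separation = (pos[i] - pos[nbrs]).mean(axis=0)   # avoid crowding
            new_vel[i] += 0.01 * cohesion + 0.1 * alignment + 0.02 * separation
    return pos + dt * new_vel, new_vel

for _ in range(1000):
    pos, vel = step(pos, vel)

# With no central controller, headings tend to align as local interactions
# propagate information through the swarm (assuming the group stays connected).
headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print("heading agreement (1.0 = perfectly aligned):", np.linalg.norm(headings.mean(axis=0)))
```

The same local-rules pattern is what makes large swarms attractive: no single robot needs a global view, so the group degrades gracefully when individual members fail.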

5. Navigation and Exploration
A key use case for robots is exploring places where humans cannot go, such as the deep sea, space, or disaster zones. That means they need to become adept at exploring and navigating unmapped, often highly disordered and hostile environments.

The major challenges include creating systems that can adapt, learn, and recover from navigation failures and are able to make and recognize new discoveries. This will require high levels of autonomy that allow the robots to monitor and reconfigure themselves while being able to build a picture of the world from multiple data sources of varying reliability and accuracy.

6. AI for Robotics
Deep learning has revolutionized machines’ ability to recognize patterns, but that needs to be combined with model-based reasoning to create adaptable robots that can learn on the fly.

Key to this will be creating AI that’s aware of its own limitations and can learn how to learn new things. It will also be important to create systems that are able to learn quickly from limited data rather than the millions of examples used in deep learning. Further advances in our understanding of human intelligence will be essential to solving these problems.

7. Brain-Computer Interfaces
BCIs will enable seamless control of advanced robotic prosthetics but could also prove a faster, more natural way to communicate instructions to robots or simply help them understand human mental states.

Most current approaches to measuring brain activity are expensive and cumbersome, though, so work on compact, low-power, and wireless devices will be important. They also tend to involve extended training, calibration, and adaptation due to the imprecise nature of reading brain activity. And it remains to be seen if they will outperform simpler techniques like eye tracking or reading muscle signals.

8. Social Interaction
If robots are to enter human environments, they will need to learn to deal with humans. But this will be difficult, as we have very few concrete models of human behavior and we are prone to underestimate the complexity of what comes naturally to us.

Social robots will need to be able to perceive minute social cues like facial expression or intonation, understand the cultural and social context they are operating in, and model the mental states of people they interact with to tailor their dealings with them, both in the short term and as they develop long-standing relationships with them.

9. Medical Robotics
Medicine is one of the areas where robots could have significant impact in the near future. Devices that augment a surgeon’s capabilities are already in regular use, but the challenge will be to increase the autonomy of these systems in such a high-stakes environment.

Autonomous robot assistants will need to be able to recognize human anatomy in a variety of contexts and be able to use situational awareness and spoken commands to understand what’s required of them. In surgery, autonomous robots could perform the routine steps of a procedure, giving way to the surgeon for more complicated patient-specific bits.

Micro-robots that operate inside the human body also hold promise, but there are still many roadblocks to their adoption, including effective delivery systems, tracking and control methods, and crucially, finding therapies where they improve on current approaches.

10. Robot Ethics and Security
As the preceding challenges are overcome and robots are increasingly integrated into our lives, this progress will create new ethical conundrums. Most importantly, we may become over-reliant on robots.

That could lead to humans losing certain skills and capabilities, making us unable to take the reins in the case of failures. We may end up delegating tasks that should, for ethical reasons, have some human supervision, and allow people to pass the buck to autonomous systems in the case of failure. It could also reduce self-determination, as human behaviors change to accommodate the routines and restrictions required for robots and AI to work effectively.

Image Credit: Zenzen / Shutterstock.com

Posted in Human Robots