Tag Archives: Bonds

#438769 Will Robots Make Good Friends? ...

In the 2012 film Robot and Frank, the protagonist, a retired cat burglar named Frank, is suffering the early symptoms of dementia. Concerned and guilty, his son buys him a “home robot” that can talk, do household chores like cooking and cleaning, and remind Frank to take his medicine. It’s a robot the likes of which we’re getting closer to building in the real world.

The film follows Frank, who is initially appalled by the idea of living with a robot, as he gradually begins to see the robot as both functionally useful and socially companionable. The film ends with a clear bond between man and machine, such that Frank is protective of the robot when the pair of them run into trouble.

This is, of course, a fictional story, but it challenges us to explore different kinds of human-to-robot bonds. My recent research on human-robot relationships examines this topic in detail, looking beyond sex robots and robot love affairs to examine that most profound and meaningful of relationships: friendship.

My colleague and I identified some potential risks, like the abandonment of human friends for robotic ones, but we also found several scenarios where robotic companionship can constructively augment people’s lives, leading to friendships that are directly comparable to human-to-human relationships.

Philosophy of Friendship
The robotics philosopher John Danaher sets a very high bar for what friendship means. His starting point is the “true” friendship first described by the Greek philosopher Aristotle, which saw an ideal friendship as premised on mutual good will, admiration, and shared values. In these terms, friendship is about a partnership of equals.

Building a robot that can satisfy Aristotle’s criteria is a substantial technical challenge and is some considerable way off, as Danaher himself admits. Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, base their behavior on a library of pre-prepared responses: a humanoid chatbot, rather than a conversational equal. Anyone who’s had a testing back-and-forth with Alexa or Siri will know AI still has some way to go in this regard.
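
To make concrete what “a library of pre-prepared responses” means, here is a deliberately crude sketch of a canned-response chatbot. Everything in it—the keywords, the replies—is invented for illustration; this is not how Sophia is actually implemented, but it captures the gap between pattern-matched replies and genuine conversation.

```python
# Illustrative sketch only: a "chatbot" that picks from a small library of
# pre-prepared responses by keyword matching. All rules and replies here
# are made up for the example.

CANNED_RESPONSES = {
    "hello": "Hello! It's wonderful to meet you.",
    "feel": "I experience the world differently than you do.",
    "friend": "I would love to be your friend.",
}
FALLBACK = "That's interesting. Tell me more."

def reply(utterance: str) -> str:
    """Return the first canned response whose keyword appears in the input."""
    lowered = utterance.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in lowered:
            return response
    return FALLBACK

print(reply("Hello there"))               # keyword "hello" matches
print(reply("What is quantum gravity?"))  # no keyword -> generic fallback
```

However large the response library grows, the system is still retrieving, not reasoning—which is why a testing exchange with any such assistant soon runs aground.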

Aristotle also talked about other forms of “imperfect” friendship, such as “utilitarian” and “pleasure” friendships, which are considered inferior to true friendship because they don’t require symmetrical bonding and are often to one party’s unequal benefit. This form of friendship sets a very low bar, which some robots, like “sexbots” and robotic pets, clearly already meet.

Artificial Amigos
For some, relating to robots is just a natural extension of relating to other things in our world, like people, pets, and possessions. Psychologists have even observed how people respond naturally and socially towards media artefacts like computers and televisions. Humanoid robots, you’d have thought, are more personable than your home PC.

However, the field of “robot ethics” is far from unanimous on whether we can—or should—develop any form of friendship with robots. For an influential group of UK researchers who charted a set of “ethical principles of robotics,” human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is dishonest and should be treated with caution, if not alarm. For these researchers, wasting emotional energy on entities that can only simulate emotions will always be less rewarding than forming human-to-human bonds.

But people are already developing bonds with basic robots, like vacuum-cleaning and lawn-trimming machines that can be bought for less than the price of a dishwasher. A surprisingly large number of people give these robots pet names—something they don’t do with their dishwashers. Some even take their cleaning robots on holiday.

Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo robot dogs that were dismantled for spare parts, and the squad of US troops who fired a 21-gun salute and awarded medals to a bomb-disposal robot named “Boomer” after it was destroyed in action.

These stories, and the psychological evidence we have so far, make clear that we can extend emotional connections to things that are very different to us, even when we know they are manufactured and pre-programmed. But do those connections constitute a friendship comparable to that shared between humans?

True Friendship?
A colleague and I recently reviewed the extensive literature on human-to-human relationships to try to understand how, and if, the concepts we found could apply to bonds we might form with robots. We found evidence that many coveted human-to-human friendships do not in fact live up to Aristotle’s ideal.

We noted a wide range of human-to-human relationships, from relatives and lovers to parents, carers, service providers, and the intense (but unfortunately one-way) relationships we maintain with our celebrity heroes. Few of these relationships could be described as completely equal and, crucially, they are all destined to evolve over time.

All this means that expecting robots to form Aristotelian bonds with us is to set a standard even human relationships fail to live up to. We also observed forms of social connectedness that are rewarding and satisfying and yet are far from the ideal friendship outlined by the Greek philosopher.

We know that social interaction is rewarding in its own right, and something that, as social mammals, humans have a strong need for. It seems probable that relationships with robots could help to address the deep-seated urge we all feel for social connection—providing physical comfort, emotional support, and enjoyable social exchanges of the kind currently provided by other humans.

Our paper also discussed some potential risks. These arise particularly in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot—in a care setting, for instance.

These are important concerns, but they’re possibilities and not inevitabilities. In the literature we reviewed we actually found evidence of the opposite effect: robots acting to scaffold social interactions with others, acting as ice-breakers in groups, and helping people to improve their social skills or to boost their self-esteem.

It appears likely that, as time progresses, many of us will simply follow Frank’s path towards acceptance: scoffing at first, before settling into the idea that robots can make surprisingly good companions. Our research suggests that’s already happening—though perhaps not in a way of which Aristotle would have approved.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Andy Kelly on Unsplash

Posted in Human Robots

#437982 Superintelligent AI May Be Impossible to ...

It may be theoretically impossible for humans to control a superintelligent AI, a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it’s on the verge of being created.

Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity.

Alongside news of AI besting humans at games such as chess, Go and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. “The question about whether superintelligence could be controlled if created is quite old,” says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. “It goes back at least to Asimov’s First Law of Robotics, in the 1940s.”

The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story “Runaround,” are as follows:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, not only explored ways in which a superintelligent AI could destroy us but also investigated potential control strategies for such a machine—and the reasons they might not work.

Bostrom outlined two possible types of solutions of this “control problem.” One is to control what the AI can do, such as keeping it from connecting to the Internet, and the other is to control what it wants to do, such as teaching it rules and values so it would act in the best interests of humanity. The problem with the former is that Bostrom thought a supersmart machine could probably break free from any bonds we could make. With the latter, he essentially feared that humans might not be smart enough to train a superintelligent AI.

Now Alfonseca and his colleagues suggest it may be impossible to control a superintelligent AI, due to fundamental limits inherent to computing itself. They detailed their findings this month in the Journal of Artificial Intelligence Research.

The researchers suggested that any algorithm seeking to ensure a superintelligent AI cannot harm people would first have to simulate the machine’s behavior to predict the potential consequences of its actions. This containment algorithm would then need to halt the supersmart machine if it might indeed do harm.

However, the scientists said it was impossible for any containment algorithm to simulate the AI’s behavior and predict with absolute certainty whether its actions might lead to harm. The algorithm could fail to correctly simulate the AI’s behavior or to accurately predict the consequences of its actions, without ever recognizing that it had failed.

“Asimov’s first law of robotics has been proved to be incomputable,” Alfonseca says, “and therefore unfeasible.”
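
The incomputability claim echoes Turing’s classic halting-problem argument: no total procedure can predict, for every program, whether that program halts, because a program can always be built to do the opposite of whatever the predictor says. A toy sketch of that diagonalization (my illustration, not the paper’s formal proof):

```python
def diagonal(claims_halts):
    """Build a program that contradicts any claimed halting decider.

    `claims_halts(prog)` is any total function that claims to predict
    whether `prog` halts. The returned program does the opposite of that
    prediction, so the decider is necessarily wrong about it.
    """
    def d():
        if claims_halts(d):
            while True:      # decider said "halts" -> loop forever
                pass
        return "halted"      # decider said "loops" -> halt immediately
    return d

# A decider that claims every program loops is wrong about its diagonal:
pessimist = lambda prog: False
d = diagonal(pessimist)
assert pessimist(d) is False   # prediction: d never halts...
assert d() == "halted"         # ...but d halts. Contradiction.
```

Any candidate containment check plays the role of `claims_halts` here: a sufficiently capable AI is, in effect, a program that can condition its behavior on the check’s verdict.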

We may not even know if we have created a superintelligent machine, the researchers say. This is a consequence of Rice’s theorem, which essentially states that one cannot in general figure anything out about what a computer program might output just by looking at the program, Alfonseca explains.

On the other hand, there’s no need to spruce up the guest room for our future robot overlords quite yet. Three important caveats to the research leave plenty of uncertainty around the group’s conclusions.

First, Alfonseca estimates that AI’s moment of truth remains “at least two centuries in the future.”

Second, he says researchers do not know if so-called artificial general intelligence, also known as strong AI, is theoretically even feasible. “That is, a machine as intelligent as we are in an ample variety of fields,” Alfonseca explains.

Last, Alfonseca says, “We have not proved that superintelligences can never be controlled—only that they can’t always be controlled.”

Although it may not be possible to control a superintelligent artificial general intelligence, it should be possible to control a superintelligent narrow AI—one specialized for certain functions instead of being capable of a broad range of tasks like humans. “We already have superintelligences of this type,” Alfonseca says. “For instance, we have machines that can compute mathematics much faster than we can. This is [narrow] superintelligence, isn’t it?”


#437261 How AI Will Make Drug Discovery ...

If you had to guess how long it takes for a drug to go from an idea to your pharmacy, what would you guess? Three years? Five years? How about the cost? $30 million? $100 million?

Well, here’s the sobering truth: 90 percent of all drug possibilities fail. The few that do succeed take an average of 10 years to reach the market and cost anywhere from $2.5 billion to $12 billion to get there.

But what if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

Welcome to the future of AI and low-cost, ultra-fast, and personalized drug discovery. Let’s dive in.

GANs & Drugs
Around 2012, computer scientist-turned-biophysicist Alex Zhavoronkov started to notice that artificial intelligence was getting increasingly good at image, voice, and text recognition. He knew that all three tasks shared a critical commonality. In each, massive datasets were available, making it easy to train up an AI.

But similar datasets were present in pharmacology. So, back in 2014, Zhavoronkov started wondering if he could use these datasets and AI to significantly speed up the drug discovery process. He’d heard about a new technique in artificial intelligence known as generative adversarial networks (or GANs). By pitting two neural nets against one another (adversarial), the system can start with minimal instructions and produce novel outcomes (generative). At the time, researchers had been using GANs to do things like design new objects or create one-of-a-kind, fake human faces, but Zhavoronkov wanted to apply them to pharmacology.

He figured GANs would allow researchers to verbally describe drug attributes: “The compound should inhibit protein X at concentration Y with minimal side effects in humans,” and then the AI could construct the molecule from scratch. To turn his idea into reality, Zhavoronkov set up Insilico Medicine on the campus of Johns Hopkins University in Baltimore, Maryland, and rolled up his sleeves.

Instead of beginning their process in some exotic locale, Insilico’s “drug discovery engine” sifts millions of data samples to determine the signature biological characteristics of specific diseases. The engine then identifies the most promising treatment targets and—using GANs—generates molecules (that is, baby drugs) perfectly suited for them. “The result is an explosion in potential drug targets and a much more efficient testing process,” says Zhavoronkov. “AI allows us to do with fifty people what a typical drug company does with five thousand.”

The results have turned what was once a decade-long war into a month-long skirmish.

In late 2018, for example, Insilico was generating novel molecules in fewer than 46 days, and this included not just the initial discovery, but also the synthesis of the drug and its experimental validation in computer simulations.

Right now, they’re using the system to hunt down new drugs for cancer, aging, fibrosis, Parkinson’s, Alzheimer’s, ALS, diabetes, and many others. The first drug to result from this work, a treatment for hair loss, is slated to start Phase I trials by the end of 2020.

They’re also in the early stages of using AI to predict the outcomes of clinical trials in advance of the trial. If successful, this technique will enable researchers to strip a bundle of time and money out of the traditional testing process.

Protein Folding
Beyond inventing new drugs, AI is also being used by other scientists to identify new drug targets—the sites in the body to which a drug binds, and another key part of the drug discovery process.

Between 1980 and 2006, despite an annual investment of $30 billion, researchers only managed to find about five new drug targets a year. The trouble is complexity. Most potential drug targets are proteins, and a protein’s structure—meaning the way its linear sequence of amino acids folds into a 3D protein—determines its function.

But a protein with merely a hundred amino acids (a rather small protein) can produce a googol-cubed worth of potential shapes—that’s a one followed by three hundred zeroes. This is also why protein-folding has long been considered an intractably hard problem for even the most powerful of supercomputers.
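
The arithmetic is simple enough to check. The per-residue count below is an assumption chosen to reproduce the article’s total; published Levinthal-style estimates vary widely with the assumptions made:

```python
# Back-of-envelope Levinthal-style count. The per-residue figure is an
# assumed value (~10^3 local conformations per amino acid) picked to
# match the googol-cubed total quoted above; real estimates differ.
residues = 100
conformations_per_residue = 1000

total = conformations_per_residue ** residues
assert total == 10 ** 300          # a one followed by three hundred zeroes
print(f"~10^{len(str(total)) - 1} candidate shapes")
```

Exhaustively checking shapes at even a billion per second would take unimaginably longer than the age of the universe—hence the need for learned shortcuts.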

Back in 1994, to monitor progress in computational protein-folding, a biennial competition was created. Until 2018, success was fairly rare. But then DeepMind turned its neural networks loose on the problem. It created an AI that mines enormous datasets to determine the most likely distances between pairs of a protein’s amino acids and the angles of their chemical bonds—aka, the basics of protein-folding. They called it AlphaFold.
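
The object AlphaFold predicts—pairwise distances between a protein’s amino acids—is just a symmetric matrix. The sketch below computes such a matrix in the easy direction, from made-up 3D coordinates, to show the kind of quantity the network must infer from sequence alone:

```python
import numpy as np

# Toy 3D coordinates for a 4-residue chain (invented for illustration;
# ~3.8 angstroms is a typical spacing between adjacent alpha-carbons).
coords = np.array([
    [0.0, 0.0, 0.0],
    [3.8, 0.0, 0.0],
    [3.8, 3.8, 0.0],
    [0.0, 3.8, 0.0],
])

# Pairwise distance matrix: computed here from known coordinates,
# i.e., the easy direction. AlphaFold's hard task is the inverse:
# predicting these distances from the amino acid sequence alone.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

print(np.round(dist, 1))
```

A predicted distance matrix like this constrains the fold tightly enough that a 3D structure can be reconstructed from it.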

In AlphaFold’s first foray into the competition, contestants were given 43 protein-folding problems to solve. AlphaFold got 25 right. The second-place team managed a meager three. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.

Drug Delivery
Another theater of war for improved drugs is the realm of drug delivery. Even here, converging exponential technologies are paving the way for major advances in human health and shifts across the industry.

One key contender is CRISPR, the fast-advancing gene-editing technology that stands to revolutionize synthetic biology and treatment of genetically linked diseases. And researchers have now demonstrated how this tool can be applied to create materials that shape-shift on command. Think: materials that dissolve instantaneously when faced with a programmed stimulus, releasing a specified drug at a highly targeted location.

Yet another potential boon for targeted drug delivery is nanotechnology, whereby medical nanorobots have now been used to fight cancer. In a recent review of medical micro- and nanorobotics, the lead authors (from the University of Texas at Austin and the University of California, San Diego) found numerous successful tests of in vivo operation of medical micro- and nanorobots.

Drugs From the Future
Covid-19 is uniting the global scientific community with its urgency, prompting scientists to cast aside nation-specific territorialism, research secrecy, and academic publishing politics in favor of expedited therapeutic and vaccine development efforts. And in the wake of rapid acceleration across healthcare technologies, Big Pharma is an area worth watching right now, no matter your industry. Converging technologies will soon enable extraordinary strides in longevity and disease prevention, with companies like Insilico leading the charge.

Riding the convergence of massive datasets, skyrocketing computational power, quantum computing, cognitive surplus capabilities, and remarkable innovations in AI, we are not far from a world in which personalized drugs, delivered directly to specified targets, will graduate from science fiction to the standard of care.

Rejuvenational biotechnology will be commercially available sooner than you think. When I asked Alex for his own projection, he set the timeline at “maybe 20 years—that’s a reasonable horizon for tangible rejuvenational biotechnology.”

How might you use an extra 20 or more healthy years in your life? What impact would you be able to make?

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: andreas160578 from Pixabay
