Tag Archives: work

#434559 Can AI Tell the Difference Between a ...

Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.

Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms can also be targeted by “adversarial examples,” in which adding an imperceptible amount of noise to an image causes an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing such images to trick advanced software, but if a self-driving car can be fooled by a few stickers, it might not be so fun for the passengers.
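To make the idea concrete, here is a toy sketch of how an adversarial perturbation works, using a hypothetical linear classifier rather than the deep networks in question: a tiny per-pixel change, computed from the model’s own weights, is enough to flip its prediction.

```python
import numpy as np

# Toy adversarial example against a linear classifier (illustrative only):
# nudge every "pixel" slightly in the direction that most changes the score.
rng = np.random.default_rng(0)
w = rng.normal(size=64)       # classifier weights
x = rng.normal(size=64)       # a flattened input "image"

def predict(v):
    return int(w @ v > 0)     # two-class decision

# The gradient of the score w @ v with respect to v is just w, so stepping
# each pixel by eps * sign(w), against the current label, crosses the
# decision boundary with the smallest possible per-pixel change.
margin = abs(w @ x)
eps = 1.05 * margin / np.sum(np.abs(w))          # just enough to flip
x_adv = x - np.sign(w @ x) * eps * np.sign(w)

print(predict(x) != predict(x_adv))                        # True: label flips
print(float(np.max(np.abs(x_adv - x))) <= eps + 1e-12)     # True: tiny change
```

Deep networks are not linear, but the same logic applies layer by layer, which is why carefully placed stickers can have such outsized effects.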

These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. Inspecting a trained network itself mostly yields a mass of statistical weights associating certain groups of points with certain objects, and these can be very difficult to interpret.

Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.

They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.

In the first experiment, a neural network was trained to sort images into one of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, and only the outline of the object remained. On the original images, the trained neural net recognized objects well, assigning more than 90% probability to the correct classification; on silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as only the 209th most likely answer by the neural network, even though the overall shapes were an exact match.
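The “209th most likely” figure comes from ranking the network’s output probabilities. A minimal sketch of that computation, with made-up probabilities rather than the study’s data:

```python
import numpy as np

# Rank of the correct label in a model's output distribution: sort the
# class probabilities from most to least likely and find the true class.
def rank_of_true_label(probs, true_idx):
    order = np.argsort(probs)[::-1]          # indices, most likely first
    return int(np.where(order == true_idx)[0][0]) + 1

# With 1,000 classes, an average rank near 209 means the correct answer
# was buried far down the list despite a perfect shape match.
probs = np.array([0.10, 0.60, 0.05, 0.25])   # toy 4-class example
print(rank_of_true_label(probs, true_idx=1))  # 1: the top prediction
print(rank_of_true_label(probs, true_idx=2))  # 4: ranked last
```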

A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.

If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.

The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.
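A toy demonstration of why scrambling is kinder to texture-sensitive models than to shape-sensitive ones (an illustration under simple assumptions, not the study’s actual stimuli): shuffling an image’s patches rearranges the global layout while leaving local statistics, such as the pixel-intensity histogram, untouched.

```python
import numpy as np

# Cut an 8x8 "image" into 2x2 patches, shuffle them, and reassemble.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8))

patches = [img[r:r+2, c:c+2] for r in range(0, 8, 2) for c in range(0, 8, 2)]
order = rng.permutation(len(patches))
rows = [np.hstack([patches[i] for i in order[k:k+4]]) for k in range(0, 16, 4)]
scrambled = np.vstack(rows)

# The global arrangement changes, but every local patch -- and hence any
# statistic computed from pixel values alone -- survives intact.
print(scrambled.shape)
print(np.array_equal(np.sort(img, axis=None),
                     np.sort(scrambled, axis=None)))   # True: same pixels
```

A model that leans on local texture statistics barely notices the shuffle; a shape-based observer, like a human, is left with visual rubble.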

Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.

Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.

But, as yet, it takes something closer to experimental psychology to probe how neural networks and artificial intelligence algorithms perceive the world. The tests used on the neural network resemble how scientists might try to understand the senses of an animal or the developing brain of a young child more than how they would analyze a piece of software.

By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.

Image Credit: Irvan Pratama / Shutterstock.com

Posted in Human Robots

#434544 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind Beats Pros at Starcraft in Another Triumph for Bots
Tom Simonite | Wired
“DeepMind’s feat is the most complex yet in a long train of contests in which computers have beaten top humans at games. Checkers fell in 1994, chess in 1997, and DeepMind’s earlier bot AlphaGo became the first to beat a champion at the board game Go in 2016. The StarCraft bot is the most powerful AI game player yet; it may also be the least unexpected.”

GENETICS
Complete Axolotl Genome Could Pave the Way Toward Human Tissue Regeneration
George Dvorsky | Gizmodo
“Now that researchers have a near-complete axolotl genome—the new assembly still requires a bit of fine-tuning (more on that in a bit)—they, along with others, can now go about the work of identifying the genes responsible for axolotl tissue regeneration.”

FUTURE
We Analyzed 16,625 Papers to Figure Out Where AI Is Headed Next
Karen Hao | MIT Technology Review
“…though deep learning has singlehandedly thrust AI into the public eye, it represents just a small blip in the history of humanity’s quest to replicate our own intelligence. It’s been at the forefront of that effort for less than 10 years. When you zoom out on the whole history of the field, it’s easy to realize that it could soon be on its way out.”

COMPUTING
Apple’s Finger-Controller Patent Is a Glimpse at Mixed Reality’s Future
Mark Sullivan | Fast Company
“[Apple’s] engineers are now looking past the phone touchscreen toward mixed reality, where the company’s next great UX will very likely be built. A recent patent application gives some tantalizing clues as to how Apple’s people are thinking about aspects of that challenge.”

GOVERNANCE
How Do You Govern Machines That Can Learn? Policymakers Are Trying to Figure That Out
Steve Lohr | The New York Times
“Regulation is coming. That’s a good thing. Rules of competition and behavior are the foundation of healthy, growing markets. That was the consensus of the policymakers at MIT. But they also agreed that artificial intelligence raises some fresh policy challenges.”

Image Credit: Victoria Shapiro / Shutterstock.com


#434532 How Microrobots Will Fix Our Roads and ...

Swarms of microrobots will scuttle along beneath our roads and pavements, finding and fixing leaky pipes and faulty cables. Thanks to their efforts, we could be spared road work that runs to billions of dollars each year, not to mention frustrating traffic delays.

That is, if a new project sponsored by the U.K. government is a success. Recent developments in the space seem to point towards a bright future for microrobots.

Microrobots Saving Billions
Each year, around 1.5 million road excavations take place across the U.K. Many are due to leaky pipes and faulty cables that necessitate excavation of road surfaces in order to fix them. The resulting repairs, alongside disruptions to traffic and businesses, are estimated to cost a whopping £6.3 billion ($8 billion).

A consortium of scientists, led by University of Sheffield Professor Kirill Horoshenkov, is planning to use microrobots to eliminate most of these costs. The group has received a £7.2 million ($9.2 million) grant to develop and build its bots.

According to Horoshenkov, the microrobots will come in two versions. One is an inspection bot, which will navigate along underground infrastructure and examine its condition via sonar. The inspectors will be complemented by worker bots capable of carrying out repairs with cement and adhesives or cleaning out blockages with a high-powered jet. The inspector bots will be around one centimeter long and possibly autonomous, while the worker bots will be slightly larger and steered via remote control.

If successful, it is believed the bots could potentially save the U.K. economy around £5 billion ($6.4 billion) a year.

The U.K. government has set aside a further £19 million ($24 million) for research into robots for hazardous environments, such as nuclear decommissioning, drones for oil pipeline monitoring, and artificial intelligence software to detect the need for repairs on satellites in orbit.

The Lowest-Hanging Fruit
Microrobots like the ones now under development in the U.K. have many potential advantages and use cases. Thanks to their small size they can navigate tight spaces, for example in search and rescue operations, and robot swarm technology would allow them to collaborate to perform many different functions, including in construction projects.

To date, the number of microrobots in use is relatively limited, but that could be about to change: bots are closing in on other kinds of inspection jobs, which may be some of the lowest-hanging fruit.

Engineering firm Rolls-Royce (not the car company, but the one that builds aircraft engines) is looking to use microrobots to inspect some of the up to 25,000 individual parts that make up an engine. The microrobots are modeled on the cockroach, and Rolls-Royce believes they could save engineers time on the maintenance checks that can take over a month per engine.

Even Smaller Successes
Going further down in scale, recent years have seen a string of successes for nanobots. For example, a team of researchers at the Femto-ST Institute has used nanobots to build what is likely the world’s smallest house (if this isn’t a category at Guinness, someone needs to get on the phone with them), which stands a ‘towering’ 0.015 millimeters tall.

One of the areas where nanobots have shown great promise is in medicine. Several studies have shown how the minute bots are capable of delivering drugs directly into dense biological tissue, which can otherwise be highly challenging to target directly. Such delivery systems have a great potential for improving the treatment of a wide range of ailments and illnesses, including cancer.

There’s no question that the ecosystem of microrobots and nanobots is evolving. While still in their early days, the above successes point to a near-future boom in the bots we may soon refer to as our ‘littlest everyday helpers.’

Image Credit: 5nikolas5 / Shutterstock.com


#434508 The Top Biotech and Medicine Advances to ...

2018 was bonkers for science.

From a woman who gave birth using a transplanted uterus, to the infamous CRISPR baby scandal, to forensics adopting consumer-based genealogy test kits to track down criminals, last year was a factory churning out scientific “whoa” stories with consequences for years to come.

With CRISPR still in the headlines, Britain ready to bid Europe au revoir, and multiple scientific endeavors taking off, 2019 is shaping up to be just as tumultuous.

Here are the science and health stories that may blow up in the new year. But first, a caveat: predicting the future is tough. Forecasting is the lovechild of statistics and (a good deal of) intuition, and entire disciplines have been dedicated to the endeavor. But January is the perfect time to gaze into the crystal ball for wisps of insight into the year to come. Last year we predicted the widespread approval of gene therapy products—for the most part, we nailed it. This year we’re hedging our bets with multiple predictions.

Gene Drives Used in the Wild
The concept of gene drives scares many, for good reason. Gene drives are a step up in severity (and consequences) from CRISPR and other gene-editing tools. Even with germline editing, in which the sperm, egg, or embryos are altered, gene editing affects just one genetic line—one family—at least at the beginning, before they reproduce with the general population.

Gene drives, on the other hand, have the power to wipe out entire species.

In a nutshell, they’re little bits of DNA code that help a gene transfer from parent to child with almost 100 percent probability. The “half of your DNA comes from dad, the other half comes from mom” dogma? Gene drives smash that to bits.

In other words, the only time one would consider using a gene drive is to change the genetic makeup of an entire population. It sounds like the plot of a supervillain movie, but scientists have been toying around with the idea of deploying the technology—first in mosquitoes, then (potentially) in rodents.
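The difference is easy to see in a toy population-genetics model (a deterministic sketch under random-mating assumptions, not a field-accurate simulation): with Mendelian inheritance a rare allele stays rare, while a drive allele sweeps toward fixation within a few generations.

```python
def next_freq(p, t):
    """Allele frequency in the next generation under random mating.

    Homozygous carriers always transmit the allele; heterozygotes transmit
    it with probability t: 0.5 for ordinary Mendelian inheritance, 1.0 for
    a perfect gene drive that converts the partner chromosome.
    """
    return p**2 + 2 * p * (1 - p) * t

def simulate(t, generations=10, p0=0.05):
    p = p0
    for _ in range(generations):
        p = next_freq(p, t)
    return p

print(round(simulate(0.5), 3))   # 0.05: a small release goes nowhere
print(round(simulate(1.0), 3))   # 1.0: the drive takes over the population
```

Real releases face messier dynamics (fitness costs, resistance alleles, migration), which is exactly why controlled field trials matter.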

By releasing just a handful of mutant mosquitoes that carry gene drives for infertility, for example, scientists could potentially wipe out entire populations that carry infectious scourges like malaria, dengue, or Zika. The technology is so potent—and dangerous—that the US Defense Advanced Research Projects Agency is shelling out $65 million to suss out how to deploy, control, counter, or even reverse the effects of tampering with ecology.

Last year, the U.N. gave a cautious go-ahead for the technology to be deployed in the wild on a limited basis. Now, the first release of a genetically modified mosquito is set for testing in Burkina Faso in Africa—the first-ever field experiment involving gene drives.

The experiment will only release mosquitoes of the Anopheles genus, the main culprits in transferring disease. As a first step, over 10,000 male mosquitoes are set for release into the wild. These males are genetically sterile but carry no gene drive, and will help scientists examine how modified mosquitoes survive and disperse, in preparation for deploying gene-drive-carrying mosquitoes.

Hot on the project’s heels, the nonprofit consortium Target Malaria, backed by the Bill and Melinda Gates Foundation, is engineering a gene drive called Mosq that will spread infertility across the population or kill all female insects. Their attempt to hack the rules of inheritance—and save millions of lives in the process—is slated for 2024.

A Universal Flu Vaccine
People often brush off the flu as a mere annoyance, but the infection kills hundreds of thousands of people each year, according to the CDC’s statistical estimates.

The flu virus is actually as difficult a nemesis as HIV—it mutates at an extremely rapid rate, making effective vaccines almost impossible to engineer in time. Scientists currently use data to forecast the strains most likely to explode into an epidemic and urge the public to vaccinate against those strains. That’s partly why, on average, flu vaccines have a success rate of only roughly 50 percent—not much better than a coin toss.

Tired of relying on educated guesses, scientists have been chipping away at a universal flu vaccine that targets all strains—perhaps even those we haven’t yet identified. Often referred to as the “holy grail” in epidemiology, these vaccines try to alert our immune systems to parts of a flu virus that are least variable from strain to strain.

Last November, a first universal flu vaccine, developed by BiondVax, entered Phase 3 clinical trials, which means it has already been shown safe and effective in small trials and is now being tested in a broader population. The vaccine doesn’t rely on dead viruses, a common technique. Rather, it uses a small chain of amino acids—the chemical components that make up proteins—to stimulate the immune system into high alert.

With the government pouring $160 million into the research and several other universal candidates entering clinical trials, universal flu vaccines may finally experience a breakthrough this year.

In-Body Gene Editing Shows Further Promise
CRISPR and other gene-editing tools headed the news last year, including both downers suggesting many of us may already be immune to the technology and hopeful news of its readiness for treating inherited muscle-wasting diseases.

But what wasn’t widely broadcast were the in-body gene-editing experiments that have been rolling out with gusto. Last September, Sangamo Therapeutics in Richmond, California revealed that it had injected gene-editing enzymes into a patient in an effort to correct a genetic deficit that prevents him from breaking down complex sugars.

The effort is markedly different from the better-known CAR-T therapy, which extracts cells from the body for genetic engineering before returning them to the host. Sangamo’s treatment instead directly injects viruses carrying the edited genes into the body. So far the procedure looks to be safe, though at the time of reporting it was too early to determine its effectiveness.

This year the company hopes to finally answer whether it really worked.

If successful, it means that devastating genetic disorders could potentially be treated with just a few injections. With a gamut of new and more precise CRISPR and other gene-editing tools in the works, the list of treatable inherited diseases is likely to grow. And with the CRISPR baby scandal potentially dampening efforts at germline editing via regulations, in-body gene editing will likely receive more attention if Sangamo’s results return positive.

Neuralink and Other Brain-Machine Interfaces
Neuralink is the stuff of sci fi: tiny particles implanted in the brain could link up your biological wetware with silicon hardware and the internet.

But that’s exactly what Elon Musk’s company, founded in 2016, seeks to develop: brain-machine interfaces that could tinker with your neural circuits in an effort to treat diseases or even enhance your abilities.

Last November, Musk broke his silence on the secretive company, suggesting that he may announce something “interesting” in a few months, that’s “better than anyone thinks is possible.”

Musk’s aspiration for achieving symbiosis with artificial intelligence isn’t the driving force for all brain-machine interfaces (BMIs). In the clinics, the main push is to rehabilitate patients—those who suffer from paralysis, memory loss, or other nerve damage.

2019 may be the year that BMIs and neuromodulators cut the cord in the clinics. These devices may finally work autonomously within a malfunctioning brain, applying electrical stimulation only when necessary to reduce side effects without requiring external monitoring. Or they could allow scientists to control brains with light without needing bulky optical fibers.
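The “cut the cord” idea is essentially closed-loop control: the device stimulates in response to what it measures itself, not on an external schedule. A deliberately simplified sketch, with hypothetical thresholds and signal values rather than any real device protocol:

```python
# Closed-loop neuromodulation in miniature: scan a stream of neural-activity
# readings and trigger stimulation only while the signal looks abnormal,
# instead of stimulating continuously.
def stimulation_schedule(readings, threshold=0.8):
    """Return one on/off stimulation decision per sample."""
    return [sample > threshold for sample in readings]

readings = [0.2, 0.5, 0.9, 0.95, 0.4, 0.85]
decisions = stimulation_schedule(readings)
print(decisions)      # [False, False, True, True, False, True]
print(sum(decisions)) # 3: stimulation fires only when needed
```

Real implants add filtering, safety limits, and learned detectors, but the principle is the same: act only when the measured signal demands it, which is what reduces side effects.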

Cutting the cord is just the first step to fine-tuning neurological treatments—or enhancements—to the tune of your own brain, and 2019 will keep on bringing the music.

Image Credit: angellodeco / Shutterstock.com


#434492 Black Mirror’s ‘Bandersnatch’ ...

When was the last time you watched a movie where you could control the plot?

Bandersnatch is the first interactive film in the sci fi anthology series Black Mirror. Written by series creator Charlie Brooker and directed by David Slade, the film tells the story of young programmer Stefan Butler, who is adapting a fantasy choose-your-own-adventure novel called Bandersnatch into a video game. Throughout the film, viewers are given the power to influence Butler’s decisions, leading to diverging plots with different endings.

Like many Black Mirror episodes, this film is mind-bending, dark, and thought-provoking. In addition to innovating cinema as we know it, it is a fascinating rumination on free will, parallel realities, and emerging technologies.

Pick Your Own Adventure
With a non-linear script, Bandersnatch is a viewing experience like no other. Throughout the film, viewers are given the option of making decisions for the protagonist. In these instances, they have 10 seconds to make a decision before a default choice is made. For example, early in the plot, Butler is given the choice of accepting or rejecting Tuckersoft’s offer to develop a video game, and the viewer gets to decide what he does. The decision then shapes the plot accordingly.

The video game Butler is developing involves moving through a graphical maze of corridors while avoiding a creature called the Pax, and at times making choices through an on-screen instruction (sound familiar?). In other words, it’s a pick-your-own-adventure video game in a pick-your-own-adventure movie.

Many viewers have ended up spending hours exploring all the different branches of the narrative (though the average viewing is 90 minutes). One user on Reddit has mapped out an entire flowchart, showing how all the different decisions (and pseudo-decisions) lead to various endings.
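Structurally, a narrative like this is just a directed graph with a default edge at every choice point. A minimal sketch, with hypothetical node names and endings rather than the film’s real decision tree:

```python
# Encode each scene as a node; the first listed choice doubles as the
# default taken when the viewer's 10-second timer expires.
story = {
    "offer":  {"prompt": "Accept Tuckersoft's offer?",
               "choices": {"accept": "crunch", "refuse": "solo"}},
    "crunch": {"ending": "The game ships, but broken."},
    "solo":   {"ending": "Stefan develops the game at home."},
}

def play(decisions=None, start="offer"):
    """Walk the graph, falling back to the default (first) choice."""
    decisions = decisions or {}
    node = start
    while "ending" not in story[node]:
        choices = story[node]["choices"]
        pick = decisions.get(node, next(iter(choices)))  # timeout default
        node = choices[pick]
    return story[node]["ending"]

print(play())                       # the all-defaults path
print(play({"offer": "refuse"}))    # a viewer-steered branch
```

Fan-made flowcharts of the film are, in effect, reverse-engineered versions of a graph like this one.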

However, over time, Butler starts to question his own free will. It’s almost as if he’s beginning to realize that the audience is controlling him. In one branch of the narrative, he is confronted by this reality when the audience indicates to him that he is being controlled in a Netflix show: “I am watching you on Netflix. I make all the decisions for you”. Butler, as you can imagine, is horrified by this message.

But Butler isn’t the only one who has an illusion of choice. We, the seemingly powerful viewers, also appear to operate under the illusion of choice. Despite there being five main endings to the film, they are all more or less the same.

The Science Behind Bandersnatch
The premise of Bandersnatch isn’t based on fantasy, but hard science. Free will has always been a widely-debated issue in neuroscience, with reputable scientists and studies demonstrating that the whole concept may be an illusion.

In the 1970s, a psychologist named Benjamin Libet conducted a series of experiments that studied voluntary decision making in humans. He found that the brain activity initiating an action, such as moving your wrist, preceded the conscious awareness of the action.

Author Malcolm Gladwell theorizes that while we like to believe we spend a lot of time thinking about our decisions, our mental processes actually work rapidly, automatically, and often subconsciously, from relatively little information. In addition, thinking and making decisions are usually the product of several different brain systems—such as the hippocampus, amygdala, and prefrontal cortex—working together. You are more conscious of some information processes in the brain than others.

As neuroscientist and philosopher Sam Harris points out in his book Free Will, “You did not pick your parents or the time and place of your birth. You didn’t choose your gender or most of your life experiences. You had no control whatsoever over your genome or the development of your brain. And now your brain is making choices on the basis of preferences and beliefs that have been hammered into it over a lifetime.” Like Butler, we may believe we are operating under full agency of our abilities, but we are at the mercy of many internal and external factors that influence our decisions.

Beyond free will, Bandersnatch also taps into the theory of parallel universes, a facet of the multiverse theory in physics: the idea that there are universes other than our own in which every choice you could have made plays out in an alternate reality. If today you had the option of cereal or eggs for breakfast and chose eggs, in a parallel universe you chose cereal. Human history, and our lives, may have taken different paths in these parallel universes.

The Future of Cinema
In the future, the viewing experience will no longer be a passive one. Bandersnatch is just a glimpse into how technology is revolutionizing film as we know it and making it a more interactive and personalized experience. All the different scenarios and branches of the plot were scripted and filmed in advance, but in the future they may be adapted in real time via artificial intelligence.

Virtual reality may allow us to play an even more active role by making us participants or characters in the film. Data from your history of preferences may be used to create a unique version of the plot, optimized for your viewing experience.

Let’s also not underestimate the social purpose of advancing film and entertainment. Science fiction gives us the ability to create simulations of the future. Different narratives can allow us to explore how powerful technologies combined with human behavior can result in positive or negative scenarios. Perhaps in the future, science fiction will explore implications of technologies and observe human decision making in novel contexts, via AI-powered films in the virtual world.

Image Credit: andrey_l / Shutterstock.com

