Tag Archives: tools

#435224 Can AI Save the Internet from Fake News?

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

[Embedded Instagram post: “Imagine this…” (2019), a deepfake of Mark Zuckerberg by artist Bill Posters (@bill_posters_uk), created with VDR technology by Canny AI (@cannyai) and shared on June 7, 2019.]

Scientists are scrambling for ways to combat deepfakes, even as others continue to refine the underlying techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed using a neural network to embed a type of digital watermark that makes manipulated photos and videos easier to spot.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, for example by converting black-and-white films to color, though the researchers concede the technology could also be used to develop deepfakes.

Words Matter with Fake News
While the current spotlight is on combating video and image manipulation, prolonged trench warfare over fake news is being waged by academia, nonprofits, and the tech industry.

This isn’t “fake news” in the sense of a knee-jerk label slapped on fact-based reporting that happens to be unflattering to its subject. Rather, fake news here means deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
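
To make the idea concrete, here is a minimal sketch of a linguistic-cue classifier in the spirit of the U-M approach—not the team’s actual code. The handful of features (average sentence length, a tiny hyperbole word list, exclamation-mark density) and the two-example training set are placeholders for illustration only.

```python
# Toy illustration of a linguistic-cue classifier in the spirit of the U-M approach.
# The feature set and the two-example "dataset" below are placeholders, not the paper's data.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def features(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [
        len(words) / max(len(sentences), 1),                      # average sentence length
        sum(w in HYPERBOLE for w in words) / max(len(words), 1),  # hyperbole rate
        text.count("!") / max(len(text), 1),                      # exclamation density
    ]

# Placeholder training examples (label 1 = fake, 0 = real).
texts = [
    "Extraordinary! Shocking cure discovered, doctors overwhelmed!",
    "The city council approved the budget after a two-hour session.",
]
labels = [1, 0]

X = np.array([features(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([features("Unbelievable! This overwhelming trick shocks experts!")]))
```

A real system would use far richer features and thousands of labeled articles; the point is simply that surface-level language statistics carry useful signal.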

AI Versus AI
While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can distinguish neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in at 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.

It performed almost as well against fake news created by a powerful new text-generation system called GPT-2 built by OpenAI, a nonprofit research lab founded by Elon Musk, classifying 96.1 percent of the machine-written articles.
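
Grover’s own code isn’t reproduced here, but the intuition that a generator is a natural detector of machine-written text can be illustrated crudely by scoring an article’s perplexity under a public language model: text the model finds suspiciously predictable gets flagged. This sketch assumes the Hugging Face transformers library and the small released GPT-2; the cutoff is an arbitrary placeholder, not a calibrated threshold.

```python
# Crude illustration of "a generator is most familiar with its own peculiarities":
# score text by how predictable it looks to GPT-2. The threshold is an arbitrary placeholder.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy per token
    return torch.exp(loss).item()

text = "The quick brown fox jumps over the lazy dog."
score = perplexity(text)
print(score, "possibly machine-generated" if score < 20 else "looks more human-written")
```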

OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, where the user types in a short prompt and GPT-2 bangs out a short story or poem based on that snippet of text.
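
The hosted demo itself isn’t shown here, but the same prompt-and-continue behavior can be approximated with the scaled-down GPT-2 weights OpenAI did release—again via the transformers library, with an arbitrary prompt and illustrative sampling settings.

```python
# Prompt the publicly released small GPT-2 and let it continue the text,
# roughly what the scaled-down web demo does. Sampling settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "In a shocking finding, scientists discovered",
    max_length=60,
    do_sample=True,
    top_k=50,
)
print(out[0]["generated_text"])
```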

No Silver AI Bullet
While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is building detectors from deep learning, natural language processing, and other techniques. He explained that Logically’s models analyze information using a three-pronged approach:

Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
Content: Does the article contain any of the hundreds of known indicators typically found in misinformation?

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”
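
Logically hasn’t published its models, so purely as an illustration of how separate signals might be combined with a human layer in the pipeline, here is a toy weighted ensemble; the scorers, weights, and thresholds are all invented placeholders.

```python
# Toy ensemble in the spirit of the three-pronged approach Williams describes.
# The scorers, weights, and thresholds are invented placeholders, not Logically's models.
def publisher_score(article):   # 1 = long record of credible journalism, 0 = unknown outlet
    return article.get("publisher_trust", 0.5)

def network_score(article):     # 1 = spreading like typical misinformation
    return article.get("virality_anomaly", 0.0)

def content_score(article):     # fraction of known misinformation indicators present
    return article.get("content_flags", 0) / 100.0

def assess(article, weights=(0.4, 0.3, 0.3)):
    risk = (weights[0] * (1 - publisher_score(article))
            + weights[1] * network_score(article)
            + weights[2] * content_score(article))
    # No single algorithm decides: borderline cases go to a human fact-checker.
    if risk > 0.7:
        return "likely unreliable"
    if risk > 0.4:
        return "send to human review"
    return "no flags"

print(assess({"publisher_trust": 0.2, "virality_anomaly": 0.8, "content_flags": 35}))
```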

The company released a consumer app in India back in February, just before that country’s election cycle, which proved a “great testing ground” for refining its technology ahead of the next release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

Image Credit: Dennis Lytyagin / Shutterstock.com


#435172 DARPA’s New Project Is Investing ...

When Elon Musk and DARPA both hop aboard the cyborg hypetrain, you know brain-machine interfaces (BMIs) are about to achieve the impossible.

BMIs, long the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches,” and “memory Band-Aids” are still purely experimental medical treatments for those with nervous system impairments.

With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.

Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically-enhanced neurons, and infrared beams for their next-gen BMIs.

It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, similar to the Black Mirror episode Men Against Fire, it could cloud the perception of soldiers, distancing them from the emotional guilt of warfare.

If these new technologies trickle down to civilian use, they are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.

Here’s what you need to know.

Radical Upgrades
The four-year N3 program focuses on two main aspects: noninvasive and “minutely” invasive neural interfaces that can both read from and write to the brain.

Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.

In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.

“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.

The only technologies that will be considered must have a viable path toward eventual use in healthy human subjects.

“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”

The Tools
Right off the bat, the usual tools of the BMI trade—including microelectrodes, MRI, and transcranial magnetic stimulation (TMS)—are off the table. These popular technologies rely on surgery or heavy machinery, or require the subject to sit very still—conditions unlikely in the real world.

The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.

Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate the skull. This lets a skull cap, embedded with light emitters and detectors, pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.

The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate magnetic super-neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.
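
MOANA’s actual signal chain isn’t described in enough detail here to reproduce; purely as a generic illustration of the read side—teasing a weak, targeted optical signal out of a much larger scattered-light background—here is a toy bandpass-filter sketch using SciPy, in which the 10 Hz “signal,” the noise level, and the sample rate are all made up.

```python
# Generic illustration only: separating a weak periodic signal from broadband background,
# loosely analogous to picking targeted fluorescence out of scattered light.
# The 10 Hz "neural" signal, noise level, and sample rate are made up for the demo.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                    # sample rate, Hz (arbitrary)
t = np.arange(0, 2, 1 / fs)
signal = 0.1 * np.sin(2 * np.pi * 10 * t)    # weak 10 Hz signal of interest
background = np.random.normal(0, 1, t.size)  # scattered-light / sensor noise

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
recovered = filtfilt(b, a, signal + background)
print("raw SNR ~", np.std(signal) / np.std(background))
print("filtered trace std:", np.std(recovered))
```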

Other projects in N3 are just as ambitious.

The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.

Similarly, a group at Johns Hopkins is also measuring light path changes in the brain to correlate them with regional brain activity to “read” wetware commands.

The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.

The nonprofit Battelle team gets even fancier with its “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.

The nanotransducers can be delivered into the brain through a nasal spray or other noninvasive methods, then magnetically guided toward targeted brain regions. When no longer needed, they can be steered back out of the brain and into the bloodstream, where the body can excrete them without harm.

Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.

DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy high resolution for devices sitting outside the brain, both in space and time. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally-intensive ways to fish out relevant brain signals from background noise, and triangulate them to the appropriate brain region to decipher intent.

In the long run, four years and an average of $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of the potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to argue against the prospect that wireless, non-invasive BMIs will also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.

Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.

“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.

The question is, now that we know what’s in store, how should the rest of us prepare?

Image Credit: With permission from DARPA N3 project.


#435080 12 Ways Big Tech Can Take Big Action on ...

Bill Gates and Mark Zuckerberg have invested $1 billion in Breakthrough Energy to fund next-generation solutions to tackle climate change. But there is a huge risk that any successful innovation won’t reach the market until the world approaches 2030, at the earliest.

We now know that reducing the risk of dangerous climate change means halving global greenhouse gas emissions by that date—in just 11 years. Perhaps Gates, Zuckerberg, and the other tech giants should invest just as much in innovations in how their own platforms—search, social media, eCommerce—can support the societal behavior changes needed to drive down emissions.

After all, the tech giants influence the decisions of four billion consumers every day. It is time for a social contract between tech and society.

Recently, my collaborator Johan Falk and I published a report during the World Economic Forum in Davos outlining 12 ways the tech sector can contribute to supporting societal goals to stabilize Earth’s climate.

Become genuine climate guardians

Tech giants go to great lengths to show how serious they are about reducing their emissions. But I smell cognitive dissonance. Google and Microsoft are working in partnership with oil companies to develop AI tools to help maximize oil recovery. This is not the behavior of companies working flat-out to stabilize Earth’s climate. Indeed, few major tech firms have visions that treat a stable and resilient planet as a goal worth pursuing—even though AI alone has the potential to slash greenhouse gas emissions by four percent by 2030, equivalent to the combined emissions of Australia, Canada, and Japan.

We are now developing a playbook, which we plan to publish later this year at the UN climate summit, about making it as simple as possible for a CEO to become a climate guardian.

Hey Alexa, do you care about the stability of Earth’s climate?

Increasingly, consumers are delegating their decisions to narrow artificial intelligence like Alexa and Siri. Welcome to a world of zero-click purchases.

Should algorithms and information architecture be designed to nudge consumer behavior towards low-carbon choices, for example by making these options the default? We think so. People don’t mind being nudged; in fact, they welcome efforts to make their lives better. For instance, if I want to lose weight, I know I will need all the help I can get. Let’s ‘nudge for good’ and experiment with supporting societal goals.

Use social media for good

Facebook’s goal is to bring the world closer together. With 2.2 billion users on the platform, CEO Mark Zuckerberg can reasonably claim this goal is possible. But social media has changed the flow of information in the world, creating a lucrative industry around a toxic brown-cloud of confusion and anger, with frankly terrifying implications for democracy. This has been linked to the rise of nationalism and populism, and to the election of leaders who shun international cooperation, dismiss scientific knowledge, and reverse climate action at a moment when we need it more than ever.

Social media tools need re-engineering to help people make sense of the world, support democratic processes, and build communities around societal goals. Make this your mission.

Design for a future on Earth

Almost everything is designed with computer software, from buildings to mobile phones to consumer packaging. It is time to make zero-carbon design the new default and design products for sharing, re-use and disassembly.

The future is circular

Halving emissions in a decade will require all companies to adopt circular business models to reduce material use. Some tech companies are leading the charge. Apple has committed to becoming 100 percent circular as soon as possible. Great.

While big tech companies strive to be market leaders here, many other companies lack essential knowledge. Tech companies can support rapid adoption in different economic sectors, not least because they have the know-how to scale innovations exponentially. It makes business sense. If economies of scale drive the price of recycled steel and aluminium down, everyone wins.

Reward low-carbon consumption

eCommerce platforms can create incentives for low-carbon consumption. The world’s largest experiment in greening consumer behavior is Ant Forest, set up by Chinese fintech giant Ant Financial.

An estimated 300 million customers—similar to the population of the United States—gain points for making low-carbon choices such as walking to work, using public transport, or paying bills online. Virtual points are eventually converted into real trees. Sure, big questions remain about its true influence on emissions, but this is a space for rapid experimentation for big impact.

Make information more useful

Science is our tool for defining reality. Scientific consensus is how we attain reliable knowledge. Even after the information revolution, reliable knowledge about the world remains fragmented and unstructured. Build the next generation of search engines to genuinely make the world’s knowledge useful for supporting societal goals.

We need to put these tools towards supporting shared world views of the state of the planet based on the best science. New AI tools being developed by startups like Iris.ai can help see through the fog. From Alexa to Google Home and Siri, the future is “Voice”, but who chooses the information source? The highest bidder? Again, the implications for climate are huge.

Create new standards for digital advertising and marketing

Half of global ad revenue will soon be online, and largely going to a small handful of companies. How about creating a novel ethical standard on what is advertised and where? Companies could consider promoting sustainable choices and healthy lifestyles and limiting advertising of high-emissions products such as cheap flights.

We are what we eat

It is no secret that tech is about to disrupt grocery. The supermarkets of the future will be built on personal consumer data. With about two billion people either obese or overweight, revolutions in choice architecture could support positive diet choices, reduce meat consumption, halve food waste and, into the bargain, slash greenhouse gas emissions.

The future of transport is not cars, it’s data

The 2020s look set to be the biggest disruption of the automobile industry since Henry Ford unveiled the Model T. Two seismic shifts are on their way.

First, electric cars now compete favorably with petrol engines on range. Growth will reach an inflection point within a year or two once prices reach parity. The death of the internal combustion engine in Europe and Asia is assured with end dates announced by China, India, France, the UK, and most of Scandinavia. Dates range from 2025 (Norway) to 2040 (UK and China).

Tech giants can accelerate the demise. Uber recently announced a passenger surcharge to help London drivers save around $1,500 a year towards the cost of an electric car.

Second, driverless cars can shift the transport economic model from ownership to service and ride sharing. A complete shift away from privately-owned vehicles is around the corner, with large implications for emissions.

Clean-energy living and working

Most buildings are barely used and are inefficiently heated and cooled. Digitization can slash this waste and its corresponding emissions through measurement, monitoring, and new business models for using office space. While just a few unicorns are currently in this space, the potential is enormous. Buildings are one of the five biggest sources of emissions, yet they have the potential to become clean energy producers in a distributed energy network.

Creating liveable cities

More cities are setting ambitious climate targets to halve emissions in a decade or even less. Tech companies can support this transition by driving demand for low-carbon services for their workforces and offices, but also by providing tools to help monitor emissions and act to reduce them. Google, for example, is collecting travel and other data from across cities to estimate emissions in real time. This is possible through technologies like artificial intelligence and the internet of things. But beware of smart cities that turn out to be not so smart. Efficiencies can reduce resilience when cities face crises.

It’s a Start
Of course, it will take more than tech to solve the climate crisis. But tech is a wildcard. The actions of the current tech giants and their acolytes could serve to destabilize the climate further or bring it under control.

We need a new social contract between tech companies and society to achieve societal goals. The alternative is unthinkable. Without drastic action now, climate chaos threatens to engulf us all. As this future approaches, regulators will be forced to take ever more draconian action to rein in the problem. Acting now will reduce that risk.

Note: A version of this article was originally published by the World Economic Forum.

Image Credit: Bruce Rolff / Shutterstock.com


#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience will positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When the images were shown directly to monkeys, the team found that the machine-generated artworks could reliably activate the predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriads of images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to a nub located slightly behind our ears, the IT cortex, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow extract visual information into the “gist” of things.

That’s not trivial. How complex neural connections lead to increasing abstraction of what we see into what we think we see—what we perceive—is a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel Prize-winning studies heavily influenced the birth of ANNs and deep learning. Much of the structure of earlier “feed-forward” ANN models is based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institute of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity, and married them to 30 new images to “evolve” the next generation of images. After 250 generations, the technique, XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
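
The XDREAM code itself isn’t shown in the article, but the select-and-breed loop it describes can be sketched as follows. The population sizes (40 images, top 10 kept, 30 new per generation, 250 generations) follow the article; the “generator” and the “neuron response” below are stand-ins for the deep generative network and the recorded IT neuron.

```python
# Sketch of the evolve-to-activate loop described for XDREAM: keep the latent codes whose
# generated images drive the neuron hardest, then breed and mutate them. The "generator" and
# the "neuron response" are stand-ins; the real system used a deep generative network and a
# recorded monkey IT neuron.
import numpy as np

rng = np.random.default_rng(0)
LATENT = 64

def generate_image(z):                 # stand-in for a deep generative network
    return np.tanh(z)

def neuron_response(img):              # stand-in for a recorded neuron's firing rate
    preferred = np.sin(np.arange(LATENT))
    return float(img @ preferred)

population = rng.normal(size=(40, LATENT))           # 40 images per generation
for generation in range(250):                        # 250 generations, as in the study
    scores = np.array([neuron_response(generate_image(z)) for z in population])
    top10 = population[np.argsort(scores)[-10:]]     # keep the 10 strongest activators
    parents = top10[rng.integers(0, 10, size=(30, 2))]
    children = parents.mean(axis=1) + 0.1 * rng.normal(size=(30, LATENT))  # crossover + mutation
    population = np.vstack([top10, children])        # 10 survivors + 30 new images

print("best response after evolution:", scores.max())
```

Because selection depends only on the measured response, the same loop works whether the “neuron” is a model unit or a live recording.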

[Figure: the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.]
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

[Figure: natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.]
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

[Figure: an image evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.]
Although IT neurons don’t seem to use a simple letter alphabet, they do rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

It suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” which we can’t really describe, but which the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team concluded that today’s ANNs have a degree of understanding and generalization. The results could potentially help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.
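
The MIT team’s exact method isn’t reproduced here; as a rough illustration of the general recipe—synthesizing an input that drives a chosen unit in an ANN model of vision—here is a plain gradient-ascent sketch on the pixels of an image fed to a pretrained network (AlexNet, with an arbitrary layer and channel). It drives a model unit, not a real V4 neuron.

```python
# Sketch of synthesizing an image to drive a chosen unit in an ANN vision model
# (gradient ascent on the input). Layer, unit index, and step size are arbitrary;
# this optimizes a model unit, not a recorded V4 neuron.
import torch
from torchvision.models import alexnet

model = alexnet(weights="DEFAULT").eval()
target_layer = model.features[:8]           # truncate the network at an intermediate layer
unit = 13                                   # arbitrary channel to "control"

img = (0.1 * torch.randn(1, 3, 224, 224)).requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    activation = target_layer(img)[0, unit].mean()   # mean activation of one channel
    (-activation).backward()                         # ascend the activation
    opt.step()

print("final mean activation:", activation.item())
```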

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processes, as well as our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com


#435023 Inflatable Robot Astronauts and How to ...

The typical cultural image of a robot—as a steel, chrome, humanoid bucket of bolts—is often far from the reality of cutting-edge robotics research. There are difficulties, both social and technological, in realizing the image of a robot from science fiction—let alone one that can actually help around the house. Often, spending a great deal to produce a humanoid robot that can perform dozens of tasks quite badly makes less sense than producing some other design optimized for a specific situation.

A team of scientists from Brigham Young University has received funding from NASA to investigate an inflatable robot called, improbably, King Louie. The robot was developed by Pneubotics, who have a long track record in the world of soft robotics.

In space, weight is at a premium. The world watched in awe and amusement when Commander Chris Hadfield sang “Space Oddity” from the International Space Station—but launching that guitar into space likely cost around $100,000. A good price for launching payload into outer space is on the order of $10,000 per pound ($22,000/kg).

For that price, it would cost a cool $1.7 million to launch Boston Dynamics’ famous ATLAS robot to the International Space Station, and its bulk would be inconvenient in the cramped living quarters available. By contrast, an inflatable robot like King Louie is substantially lighter and can simply be deflated and folded away when not in use. The robot can be manufactured from cheap, lightweight, and flexible materials, and minor damage is easy to repair.
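
Using the article’s own figure of roughly $22,000 per kilogram, the comparison is simple arithmetic; the ~80 kg mass used for ATLAS below is an approximate published spec, and the inflatable robot’s mass is a guess for illustration.

```python
# Back-of-envelope launch cost from the article's ~$22,000/kg figure.
# ATLAS's ~80 kg is an approximate published spec; the inflatable robot's
# few kilograms is a guess for illustration.
COST_PER_KG = 22_000          # USD, the article's figure

def launch_cost(mass_kg):
    return mass_kg * COST_PER_KG

print(f"ATLAS (~80 kg):     ${launch_cost(80):,.0f}")
print(f"Inflatable (~5 kg): ${launch_cost(5):,.0f}")
```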

Inflatable Robots Under Pressure
The concept of inflatable robots is not new: indeed, earlier prototypes of King Louie were exhibited back in 2013 at Google I/O’s After Hours, flailing away at each other in a boxing ring. Sparks might fly in fights between traditional robots, but the aim here was to demonstrate that the robots are passively safe: the soft, inflatable figures won’t accidentally smash delicate items when moving around.

Health and safety regulations form part of the reason why robots don’t work alongside humans more often, but soft robots would be far safer to use in healthcare or around children (whose first instinct, according to BYU’s promotional video, is either to hug or punch King Louie). It’s also much harder to have nightmarish fantasies about robotic domination with these friendlier softbots: Terminator would’ve been a much shorter franchise if Skynet’s droids were inflatable.

Robotic exoskeletons are increasingly used for physical rehabilitation therapies, as well as for industrial purposes. As countries like Japan seek to care for their aging populations with robots and alleviate the burden on nurses, who suffer from some of the highest rates of back injuries of any profession, soft robots will become increasingly attractive for use in healthcare.

Precision and Proprioception
The main issue is one of control. Rigid, metallic robots may be more expensive and more dangerous, but the simple fact of their rigidity makes it easier to map out and control the precise motions of each of the robot’s limbs, digits, and actuators. Individual motors attached to these rigid robots can allow for a great many degrees of freedom—individual directions in which parts of the robot can move—and precision control.

For example, ATLAS has 28 degrees of freedom, while Shadow’s dexterous robot hand alone has 20. This is much harder to do with an inflatable robot, for precisely the same reasons that make it safer. Without hard and rigid bones, other methods of control must be used.

In the case of King Louie, the robot is made up of many expandable air chambers. An air-compressor changes the pressure levels in these air chambers, allowing them to expand and contract. This harks back to some of the earliest pneumatic automata. Pairs of chambers act antagonistically, like muscles, such that when one chamber “tenses,” another relaxes—allowing King Louie to have, for example, four degrees of freedom in each of its arms.
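
King Louie’s control code isn’t public here; as a toy picture of antagonistic actuation, the sketch below models a single joint whose angle tracks the pressure difference between two opposing chambers, with a proportional controller shifting pressure between them to reach a target angle. Every constant is made up.

```python
# Toy model of one antagonistic joint: two opposing air chambers, and the joint angle
# follows their pressure difference. A proportional controller steers that difference
# toward a target angle. All constants are made up for illustration.
STIFFNESS = 0.02      # degrees of joint deflection per kPa of pressure difference (made up)
KP = 5.0              # proportional gain (made up)

def step(angle, p_flex, p_extend, target, dt=0.01):
    error = target - angle
    delta = KP * error * dt                  # shift pressure between the two chambers
    p_flex += delta
    p_extend -= delta
    angle = STIFFNESS * (p_flex - p_extend)  # joint angle tracks the pressure difference
    return angle, p_flex, p_extend

angle, p_flex, p_extend = 0.0, 100.0, 100.0   # start balanced at 100 kPa per chamber
for _ in range(5000):
    angle, p_flex, p_extend = step(angle, p_flex, p_extend, target=30.0)
print(f"joint angle: {angle:.1f} degrees")
```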

The robot is also surprisingly strong. Professor Killpack, who works on the project at BYU, estimates that its payload capacity is comparable to that of other humanoid robots on the market, like Rethink Robotics’ Baxter (RIP).

Proprioception, that sixth sense that allows us to map out and control our own bodies and muscles in fine detail, is being enhanced for a wider range of soft, flexible robots with the use of machine learning algorithms connected to input from a whole host of sensors on the robot’s body.

Part of the reason this is so complicated with soft, flexible robots is that the shape and “map” of the robot’s body can change; that’s the whole point. But this means that every time King Louie is inflated, its body is a slightly different shape; when it becomes deformed, for example due to picking up objects, the shape changes again, and the complex ways in which the fabric can twist and bend are far more difficult to model and sense than the behavior of the rigid metal of King Louie’s hard counterparts. When you’re looking for precision, seemingly small changes can be the difference between successfully holding an object and dropping it.

Learning to Move
Researchers at BYU are therefore spending a great deal of time on how to control the softbot well enough to make it comparably useful. One method involves the commercial tracking technology used in the Vive VR system: by moving the game controller, which provides constant feedback to the robot’s arm, you can control its position. Since the tracking software provides an estimate of the robot’s joint angles and continues to provide feedback until the arm is correctly aligned, this type of feedback method is likely to work regardless of small changes to the robot’s shape.

The other technologies the researchers are looking into for their softbot include arrays of flexible, tactile sensors to place on the softbot’s skin, along with ways of minimizing the complex cross-talk between these arrays to get coherent information about the robot’s environment. As with some of the new proprioception research, the project is looking into neural networks as a means of modeling the complicated dynamics—the motion and response to forces—of the softbot. This method relies on large amounts of observational data, mapping how the robot is inflated and how it moves, rather than explicitly understanding and solving the equations that govern its motion—which hopefully means the methods can work even as the robot changes.
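
None of the BYU models appear in this article; as a minimal sketch of the data-driven idea—learning how the robot moves from observations rather than solving its equations of motion—the example below fits a small neural network to map (current state, chamber pressures) to the next state, with synthetic data standing in for real motion-capture and pressure logs.

```python
# Minimal sketch of data-driven dynamics: learn (state, chamber pressures) -> next state
# from observations instead of solving the physics. The "observations" here are synthetic
# stand-ins for real motion-capture and pressure logs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def true_dynamics(state, pressures):          # unknown to the learner; stands in for reality
    return 0.9 * state + 0.05 * np.tanh(pressures[:, :4] - pressures[:, 4:])

states = rng.normal(size=(5000, 4))                 # e.g. 4 joint angles
pressures = rng.uniform(80, 120, size=(5000, 8))    # 8 chamber pressures (4 antagonistic pairs)
next_states = true_dynamics(states, pressures)

X = np.hstack([states, pressures])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, next_states)
print("prediction error:", np.mean((model.predict(X) - next_states) ** 2))
```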

There’s still a long way to go before soft and inflatable robots can be controlled sufficiently well to perform all the tasks they might be used for. Ultimately, no one robotic design is likely to be perfect for any situation.

Nevertheless, research like this gives us hope that one day, inflatable robots could be useful tools, or even companions, at which point the advertising slogans write themselves: Don’t let them down, and they won’t let you down!

Image Credit: Brigham Young University.
