Tag Archives: manipulation

#431238 AI Is Easy to Fool—Why That Needs to ...

Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the high-level features a human would look for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible for a human to discern but are highly predictive of a particular object.

“An attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human.”

What this means is that by identifying these patterns and overlaying them on a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without the alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
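A minimal sketch makes the mechanics concrete. The toy linear classifier and its weights below are invented for illustration, a stand-in for a real neural network (which would require computing the gradient of its loss instead), but the principle is the one used in practice, often called the fast gradient sign method: nudge every pixel by an imperceptibly small amount in the direction that most changes the output.

```python
import random

def predict(weights, x):
    """Toy linear 'classifier': a positive score means the object is detected."""
    return sum(w * xi for w, xi in zip(weights, x))

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def fgsm_perturb(weights, x, epsilon):
    """Nudge every 'pixel' by at most epsilon in the direction that lowers
    the score. For a linear model, the gradient of the score with respect
    to each input is simply its weight."""
    return [xi - epsilon * sign(w) for w, xi in zip(weights, x)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(100)]
x = [0.1 * sign(w) for w in weights]   # an input the model scores highly

x_adv = fgsm_perturb(weights, x, epsilon=0.2)
max_change = max(abs(a - b) for a, b in zip(x, x_adv))

print(predict(weights, x) > 0)      # True: original classified as the object
print(predict(weights, x_adv) > 0)  # False: tiny per-pixel nudges flip it
print(max_change <= 0.2)            # True: no pixel moved by more than 0.2
```

Note that this sketch assumes the attacker can see the model's weights, which is exactly the limitation the black-box work described below removed.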
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
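Here is a heavily simplified sketch of that substitute-model idea. Everything in it is a stand-in: the “black box” is a hidden linear rule rather than a deep network, and the attacker’s local copy is a simple perceptron trained on nothing but the target’s yes/no answers. An attack crafted against the local copy then transfers to the target.

```python
import random

random.seed(1)
SECRET = [random.uniform(-1, 1) for _ in range(20)]  # hidden from the attacker

def target_label(x):
    """The black box: the attacker sees only a yes/no decision."""
    return sum(w * xi for w, xi in zip(SECRET, x)) > 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def train_substitute(n_queries=2000, lr=0.1):
    """Perceptron trained purely on the target's answers to our queries."""
    sub = [0.0] * len(SECRET)
    for _ in range(n_queries):
        x = [random.uniform(-1, 1) for _ in range(len(sub))]
        y = 1.0 if target_label(x) else -1.0
        if sign(sum(w * xi for w, xi in zip(sub, x))) != y:
            sub = [w + lr * y * xi for w, xi in zip(sub, x)]  # update on mistakes
    return sub

sub = train_substitute()

# Pick any input the target currently accepts as positive...
x = [random.uniform(-1, 1) for _ in range(len(SECRET))]
while not target_label(x):
    x = [random.uniform(-1, 1) for _ in range(len(SECRET))]

# ...then push it across the *substitute's* decision boundary, escalating
# the perturbation budget until the real target flips too.
x_adv, epsilon = x, 0.0
while target_label(x_adv) and epsilon < 5.0:
    epsilon += 0.25
    x_adv = [xi - epsilon * sign(w) for w, xi in zip(sub, x)]

print(target_label(x))      # original input accepted
print(target_label(x_adv))  # transferred attack fools the target
```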
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically-colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.

“Bypassing cybersecurity defenses is one of the more worrying and probable near-term applications for this approach.”

Chinese researchers showed that adding specific words to a sentence, or simply misspelling a word, can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
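A toy version of the text attack is easy to write down. The keyword list and weights below are invented (real systems learn their features rather than using a hand-written dictionary), but they show how a substitution a human barely notices can delete exactly the evidence the model relies on.

```python
# Hypothetical keyword weights, a stand-in for a learned text classifier.
TOPIC_WORDS = {"malware": 2.0, "virus": 2.0, "infected": 1.5, "attack": 1.0}
FLAG_THRESHOLD = 2.0

def topic_score(text):
    """Crude bag-of-words score for how 'security-related' a text looks."""
    return sum(TOPIC_WORDS.get(word, 0.0) for word in text.lower().split())

original = "the attachment contains malware that infected the server"
doctored = "the attachment contains ma1ware that infectad the server"

print(topic_score(original) >= FLAG_THRESHOLD)  # True: flagged
print(topic_score(doctored) >= FLAG_THRESHOLD)  # False: two typos evade it
```

A human reads both sentences the same way; the model sees nothing suspicious in the second.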
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
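In spirit, that search loop looks something like the following sketch, in which the “detector” is an invented linear scorer and the “malware” is reduced to a vector of feature bits: mutate, query the score, keep the lowest-scoring variants, and never touch the features the payload needs in order to function.

```python
import random

random.seed(2)
N = 30
DETECTOR_W = [random.uniform(0, 1) for _ in range(N)]  # hidden from attacker
THRESHOLD = 6.0
REQUIRED = {0, 1, 2}   # features the payload must keep to stay functional

def detector_score(x):
    """Black-box score from the detector; >= THRESHOLD means 'flagged'."""
    return sum(w * xi for w, xi in zip(DETECTOR_W, x))

def mutate(x, rate=0.1):
    """Randomly flip feature bits, never touching the required ones."""
    return [1 - xi if (i not in REQUIRED and random.random() < rate) else xi
            for i, xi in enumerate(x)]

sample = [1] * N                         # initial sample: gets flagged
population = [sample]
for generation in range(200):
    children = [mutate(random.choice(population)) for _ in range(40)]
    population = sorted(population + children, key=detector_score)[:20]
    if detector_score(population[0]) < THRESHOLD:
        break

best = population[0]
print(detector_score(sample) >= THRESHOLD)   # True: original is flagged
print(detector_score(best) < THRESHOLD)      # True: evolved variant evades
print(all(best[i] == 1 for i in REQUIRED))   # True: function preserved
```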
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
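A stripped-down numerical example shows how such recalibration can backfire. The hypothetical detector below learns its alert threshold from the scores of “benign” training samples, so an attacker who can slip a few mislabeled high scorers into the next training round inflates that threshold until real malware slides under it. All numbers are invented.

```python
def learn_cutoff(scores):
    """Flag anything scoring more than three standard deviations above
    the mean of the 'benign' training scores."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean + 3 * var ** 0.5

benign_scores = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7]   # invented
malware_score = 4.0

clean_cutoff = learn_cutoff(benign_scores)
print(malware_score > clean_cutoff)     # True: detector flags the malware

# Poison: high-scoring samples mislabeled as benign sneak into retraining.
poisoned_scores = benign_scores + [3.5, 3.8, 3.9, 3.6]
poisoned_cutoff = learn_cutoff(poisoned_scores)
print(malware_score > poisoned_cutoff)  # False: inflated threshold misses it
```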
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com

Posted in Human Robots

#431203 Could We Build a Blade Runner-Style ...

The new Blade Runner sequel will return us to a world where sophisticated androids made with organic body parts can match the strength and emotions of their human creators. As someone who builds biologically inspired robots, I’m interested in whether our own technology will ever come close to matching the “replicants” of Blade Runner 2049.
The reality is that we’re a very long way from building robots with human-like abilities. But advances in so-called soft robotics show a promising way forward for technology that could be a new basis for the androids of the future.
From a scientific point of view, the real challenge is replicating the complexity of the human body. Each one of us is made up of trillions of cells, and we have no idea how to build a machine of such complexity, let alone one indistinguishable from a human. The most complex machines today, for example the world’s largest airliner, the Airbus A380, are composed of millions of parts. To match the complexity of a human, we would need to scale that up about a million times.
There are currently three different ways that engineering is making the border between humans and robots more ambiguous. Unfortunately, these approaches are only starting points and are not yet even close to the world of Blade Runner.
There are human-like robots built from scratch by assembling artificial sensors, motors, and computers to resemble the human body and motion. However, extending today’s human-like robots would not bring Blade Runner-style androids closer to humans, because every artificial component, such as a sensor or motor, is still hopelessly primitive compared to its biological counterpart.
There is also cyborg technology, where the human body is enhanced with machines such as robotic limbs and wearable and implantable devices. This technology is similarly very far away from matching our own body parts.
Finally, there is the technology of genetic manipulation, where an organism’s genetic code is altered to modify that organism’s body. Although we have been able to identify and manipulate individual genes, we still have a limited understanding of how an entire human emerges from genetic code. As such, we don’t know the degree to which we can actually program code to design everything we wish.
Soft robotics: a way forward?
But we might be able to move robotics closer to the world of Blade Runner by pursuing other technologies and, in particular, by turning to nature for inspiration. The field of soft robotics is a good example. In the last decade or so, robotics researchers have been making considerable efforts to make robots soft, deformable, squishable, and flexible.
This technology is inspired by the fact that 90% of the human body is made from soft substances such as skin, hair, and tissue. This is because most of the fundamental functions in our body rely on soft parts that can change shape, from the heart and lungs pumping fluid around the body to the lenses of our eyes deforming to focus light. Cells even change shape to trigger division, self-healing and, ultimately, the evolution of the body.
The softness of our bodies is the origin of all their functionality needed to stay alive. So being able to build soft machines would at least bring us a step closer to the robotic world of Blade Runner. Some of the recent technological advances include artificial hearts made out of soft functional materials that are pumping fluid through deformation. Similarly, soft, wearable gloves can help make hand grasping stronger. And “epidermal electronics” has enabled us to tattoo electronic circuits onto our biological skins.
Softness is the keyword that brings humans and technology closer together. Sensors, motors, and computers can suddenly be integrated into human bodies once they become soft, and the border between us and external devices becomes ambiguous, just as soft contact lenses become part of our eyes.
Nevertheless, the hardest challenge is making the individual parts of a soft robot body physically adaptable: capable of self-healing, growing, and differentiating. In biological systems, after all, every part of an organism is itself alive, which is what makes our bodies so adaptable and evolvable; replicating that property could make machines truly indistinguishable from ourselves.
It is impossible to predict when the robotic world of Blade Runner might arrive, and if it does, it will probably be very far in the future. But as long as the desire to build machines indistinguishable from humans is there, the current trends of robotic revolution could make it possible to achieve that dream.
This article was originally published on The Conversation. Read the original article.
Image Credit: Dariush M / Shutterstock.com


#431078 This Year’s Awesome Robot Stories From ...

Each week we scour the web for great articles and fascinating advances across our core topics, from AI to biotech and the brain. But robots have a special place in our hearts. This week, we took a look back at 2017 so far and unearthed a few favorite robots for your reading and viewing pleasure.
Tarzan the Swinging Robot Could Be the Future of Farming
Mariella Moon | Engadget
“Tarzan will be able to swing over crops using its 3D-printed claws and parallel guy-wires stretched over fields. It will then take measurements and pictures of each plant with its built-in camera while suspended…While it may take some time to achieve that goal, the researchers plan to start testing the robot soon.”

Grasping Robots Compete to Rule Amazon’s Warehouses
Tom Simonite | Wired
“Robots able to help with so-called picking tasks would boost Amazon’s efficiency—and make it much less reliant on human workers. It’s why the company has invited a motley crew of mechanical arms, grippers, suction cups—and their human handlers—to Nagoya, Japan, this week to show off their manipulation skills.”

Robots Learn to Speak Body Language
Alyssa Pagano | IEEE Spectrum
“One notable feature of the OpenPose system is that it can track not only a person’s head, torso, and limbs but also individual fingers. To do that, the researchers used CMU’s Panoptic Studio, a dome lined with 500 cameras, where they captured body poses at a variety of angles and then used those images to build a data set.”

I Watched Two Robots Chat Together on Stage at a Tech Event
Jon Russell | TechCrunch
“The robots in question are Sophia and Han, and they belong to Hanson Robotics, a Hong Kong-based company that is developing and deploying artificial intelligence in humanoids. The duo took to the stage at Rise in Hong Kong with Hanson Robotics’ Chief Scientist Ben Goertzel directing the banter. The conversation, which was partially scripted, wasn’t as slick as the human-to-human panels at the show, but it was certainly a sight to behold for the packed audience.”

How This Japanese Robotics Master Is Building Better, More Human Androids
Harry McCracken | Fast Company
“On the tech side, making a robot look and behave like a person involves everything from electronics to the silicone Ishiguro’s team uses to simulate skin. ‘We have a technology to precisely control pneumatic actuators,’ he says, noting, as an example of what they need to re-create, that ‘the human shoulder has four degrees of freedom.’”
Stock Media provided by Besjunior / Pond5


#430640 RE2 Robotics Receives Air Force Funding ...

PITTSBURGH, PA – June 21, 2017 – RE2 Robotics announced today that the Company was selected by the Air Force to develop a drop-in robotic system to rapidly convert a variety of traditionally manned aircraft to robotically piloted, autonomous aircraft under the Small Business Innovation Research (SBIR) program. This robotic system, named “Common Aircraft Retrofit for Novel Autonomous Control” (CARNAC), will operate the aircraft similarly to a human pilot and will not require any modifications to the aircraft.
Automation and autonomy have broad value to the Department of Defense with the potential to enhance system performance of existing platforms, reduce costs, and enable new missions and capabilities, especially with reduced human exposure to dangerous or life-threatening situations. The CARNAC project leverages existing aviation assets and advances in vehicle automation technologies to develop a cutting-edge drop-in robotic flight system.
During the program, RE2 Robotics will demonstrate system architecture feasibility, humanoid-like robotic manipulation capabilities, vision-based flight-status recognition, and cognitive architecture-based decision making.
“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”
###
About RE2 Robotics
RE2 Robotics develops mobile robotic technologies that enable robot users to remotely interact with their world from a safe distance — whether on the ground, in the air, or underwater. RE2 creates interoperable robotic manipulator arms with human-like performance, intuitive human-robot interfaces, and advanced autonomy software for mobile robotics. For more information, please visit www.resquared.com or call 412.681.6382.
Media Contact: RE2 Public Relations, pr@resquared.com, 412.681.6382.
The post RE2 Robotics Receives Air Force Funding to Develop Robotic Pilot appeared first on Roboticmagazine.
