Tag Archives: car

#431362 Does Regulating Artificial Intelligence ...

Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity—or perhaps exterminating us. These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft founder Bill Gates and Facebook’s Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic.
As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I’ve seen how beneficial it can be. I’ve developed AI software that lets robots working in teams make individual decisions as part of collective efforts to explore and solve problems. Researchers are already subject to existing rules, regulations and laws designed to protect public safety. Imposing further limitations risks reducing the potential for innovation with AI systems.
How is AI regulated now?
While the term “artificial intelligence” may conjure fantastical images of human-like robots, most people have encountered AI before. It helps us find similar products while shopping, offers movie and TV recommendations, and helps us search for websites. It grades student writing, provides personalized tutoring, and even recognizes objects carried through airport scanners.
In each case, the AI makes things easier for humans. For example, the AI software I developed could be used to plan and execute a search of a field for a plant or animal as part of a science experiment. But even as the AI frees people from doing this work, it is still basing its actions on human decisions and goals about where to search and what to look for.
In areas like these and many others, AI has the potential to do far more good than harm—if used properly. But I don’t believe additional regulations are currently needed. There are already laws on the books of nations, states, and towns governing civil and criminal liabilities for harmful actions. Our drones, for example, must obey FAA regulations, while the self-driving car AI must obey regular traffic laws to operate on public roadways.
Existing laws also cover what happens if a robot injures or kills a person, even if the injury is accidental and the robot’s programmer or operator isn’t criminally responsible. While lawmakers and regulators may need to refine responsibility for AI systems’ actions as technology advances, creating regulations beyond those that already exist could prohibit or slow the development of capabilities that would be overwhelmingly beneficial.
Potential risks from artificial intelligence
It may seem reasonable to worry about researchers developing very advanced artificial intelligence systems that can operate entirely outside human control. A common thought experiment deals with a self-driving car forced to make a decision about whether to run over a child who just stepped into the road or veer off into a guardrail, injuring the car’s occupants and perhaps even those in another vehicle.
Musk and Hawking, among others, worry that a hyper-capable AI system, no longer limited to a single set of tasks like controlling a self-driving car, might decide it doesn’t need humans anymore. It might even look at human stewardship of the planet, the interpersonal conflicts, theft, fraud, and frequent wars, and decide that the world would be better without people.
Science fiction author Isaac Asimov tried to address this potential by proposing three laws limiting robot decision-making: Robots cannot injure humans or allow them “to come to harm.” They must also obey humans—unless this would harm humans—and protect themselves, as long as this doesn’t harm humans or ignore an order.
But Asimov himself knew the three laws were not enough. And they don’t reflect the complexity of human values. What constitutes “harm” is an example: Should a robot protect humanity from suffering related to overpopulation, or should it protect individuals’ freedoms to make personal reproductive decisions?
We humans have already wrestled with these questions in our own, non-artificial intelligences. Researchers have proposed restrictions on human freedoms, including reducing reproduction, to control people’s behavior, population growth, and environmental damage. In general, society has decided against using those methods, even if their goals seem reasonable. Similarly, rather than regulating what AI systems can and can’t do, in my view it would be better to teach them human ethics and values—like parents do with human children.
Artificial intelligence benefits
People already benefit from AI every day—but this is just the beginning. AI-controlled robots could assist law enforcement in responding to human gunmen. Current police efforts must focus on preventing officers from being injured, but robots could step into harm’s way, potentially changing the outcomes of cases like the recent shooting of an armed college student at Georgia Tech and an unarmed high school student in Austin.
Intelligent robots can help humans in other ways, too. They can perform repetitive tasks, like processing sensor data, where human boredom may cause mistakes. They can limit human exposure to dangerous materials and dangerous situations, such as decontaminating a nuclear reactor or working in areas humans can’t go. In general, AI robots can provide humans with more time to pursue whatever they define as happiness by freeing them from having to do other work.
Achieving most of these benefits will require a lot more research and development. Regulations that make it more expensive to develop AIs or prevent certain uses may delay or forestall those efforts. This is particularly true for small businesses and individuals—key drivers of new technologies—who are not as well equipped to deal with regulation compliance as larger companies. In fact, the biggest beneficiary of AI regulation may be large companies that are used to dealing with it, because startups will have a harder time competing in a regulated environment.
The need for innovation
Humanity faced a similar set of issues in the early days of the internet. The United States deliberately chose not to regulate the nascent internet, so as not to stunt its early growth. Musk’s PayPal and numerous other businesses helped build the modern online world while subject only to regular human-scale rules, like those preventing theft and fraud.
Artificial intelligence systems have the potential to change how humans do just about everything. Scientists, engineers, programmers, and entrepreneurs need time to develop the technologies—and deliver their benefits. Their work should be free from concern that some AIs might be banned, and from the delays and costs associated with new AI-specific regulations.
This article was originally published on The Conversation. Read the original article.
Image Credit: Tatiana Shepeleva / Shutterstock.com

Posted in Human Robots

#431238 AI Is Easy to Fool—Why That Needs to ...

Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would be looking for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible to discern for a human, but highly accurate at predicting a particular object.

What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
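To make that concrete, here is a minimal sketch of one standard white-box technique from the research literature, the fast gradient sign method (FGSM). It is an illustration only, not the specific method used in the work described here; the `model`, `image`, and `label` inputs are assumed to be supplied by the reader (any differentiable PyTorch classifier, an input tensor scaled to [0, 1], and its true class id).

```python
# A minimal FGSM sketch: nudge every pixel slightly in the direction that
# increases the classifier's loss. The resulting image looks unchanged to a
# human but is often misclassified by the model.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed by at most `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # tiny, structured noise
    return adversarial.clamp(0, 1).detach()

# Usage (with a pretrained classifier):
#   adversarial = fgsm_perturb(model, image, label)
#   model(adversarial) will often predict a different class, even though
#   `adversarial` is visually indistinguishable from `image`.
```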
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
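The black-box setting can be illustrated with a toy query-only loop like the one below. The `classify` function is a hypothetical stand-in for the target system, returning its confidence in the image's true class; real black-box attacks are far more sample-efficient, but the principle of probing, observing, and keeping what works is the same.

```python
# A toy "black box" attack: no access to gradients or model internals, just
# repeated queries. Small random perturbations are kept whenever they erode
# the classifier's confidence in the true class.
import numpy as np

def black_box_attack(classify, image, steps=1000, step_size=0.005, budget=0.03):
    rng = np.random.default_rng(0)
    adversarial = image.copy()
    best_score = classify(adversarial)
    for _ in range(steps):
        # Propose a small random perturbation, kept within the overall budget.
        candidate = adversarial + step_size * rng.choice([-1.0, 1.0], size=image.shape)
        candidate = np.clip(candidate, image - budget, image + budget)
        candidate = np.clip(candidate, 0.0, 1.0)
        score = classify(candidate)
        if score < best_score:   # keep changes that lower the true-class confidence
            adversarial, best_score = candidate, score
    return adversarial
```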
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that doctored images, printed out and then photographed with a smartphone, successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically-colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.

Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
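In spirit, that loop looks something like the sketch below. The `detector_score` and `apply_mods` functions are placeholders for the anti-malware model's maliciousness score and for functionality-preserving file transformations; neither is from the actual demonstration, which was far more sophisticated.

```python
# A heavily simplified sketch of a genetic-algorithm evasion loop. Each
# individual is a list of (hypothetical) functionality-preserving tweaks
# applied to the original sample; fitness is the detector's score.
import random

MODS = ["append_padding", "rename_section", "add_benign_import", "pack_region"]

def evolve_evasive_sample(base_sample, detector_score, apply_mods,
                          population=20, generations=50, threshold=0.5):
    pool = [[random.choice(MODS)] for _ in range(population)]
    for _ in range(generations):
        # Rank candidates by how malicious the detector thinks they look.
        scored = sorted(pool, key=lambda mods: detector_score(apply_mods(base_sample, mods)))
        best = scored[0]
        if detector_score(apply_mods(base_sample, best)) < threshold:
            return apply_mods(base_sample, best)   # variant now slips past the detector
        # Keep the fittest half, then recombine and mutate to refill the pool.
        survivors = scored[: population // 2]
        children = []
        while len(survivors) + len(children) < population:
            a, b = random.sample(survivors, 2)
            child = a[: len(a) // 2] + b[len(b) // 2 :]
            if random.random() < 0.3:
                child = child + [random.choice(MODS)]
            children.append(child)
        pool = survivors + children
    return None  # no evasive variant found within the budget
```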
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
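A toy experiment shows why poisoning works: flip the labels on a fraction of the training data and watch the trained model degrade. The scikit-learn setup below is purely illustrative, using a synthetic dataset rather than any real anti-malware system.

```python
# Illustration of label-flipping "data poisoning" on a synthetic dataset:
# the more training labels an attacker corrupts, the worse the model gets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_fraction):
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = np.random.default_rng(0).choice(len(y_poisoned), n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip the labels of the poisoned points
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.1, 0.3):
    print(f"{fraction:.0%} poisoned -> test accuracy {accuracy_with_poisoning(fraction):.3f}")
```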
How likely these approaches are to be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com

Posted in Human Robots

#431165 Intel Jumps Into Brain-Like Computing ...

The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain’s hardware could be the next stage in the evolution of computing.
On Monday the company unveiled an experimental “neuromorphic” chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain’s network of neurons and the connections between them called synapses.
While neural networks—the in vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs.
The main benefit of mimicking the architecture of the brain on a physical chip, say neuromorphic computing’s proponents, is energy efficiency—the human brain runs on roughly 20 watts. The “neurons” in neuromorphic chips act as both processor and memory, which removes the need to shuttle data back and forth between separate units, as traditional chips must. Each neuron also only needs to be powered while it’s firing.
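A rough software analogy helps here. The sketch below simulates a single leaky integrate-and-fire neuron, the kind of event-driven unit neuromorphic chips implement in silicon. It is an illustrative model only, not Intel's actual neuron design: the neuron accumulates incoming spikes, leaks charge over time, and does "work" only when its potential crosses a threshold.

```python
# Toy simulation of a leaky integrate-and-fire (LIF) neuron: integrate
# incoming spikes, leak, and fire only when the threshold is crossed.
import numpy as np

def simulate_lif(input_spikes, leak=0.9, threshold=1.0, weight=0.4):
    potential = 0.0
    output_spikes = []
    for spike_in in input_spikes:
        potential = leak * potential + weight * spike_in   # integrate and leak
        if potential >= threshold:
            output_spikes.append(1)                        # fire, then reset
            potential = 0.0
        else:
            output_spikes.append(0)                        # silent: no firing, no work
    return output_spikes

rng = np.random.default_rng(0)
inputs = (rng.random(50) < 0.3).astype(int)    # sparse, event-driven input spikes
print(simulate_lif(inputs))
```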

At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature’s efficiency could allow AI to be run directly on devices like smartphones, cars, and robots.
This is exactly the kind of application Michael Mayberry, managing director of Intel’s research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving.
There’s still a long way to go before that happens though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won’t be built until November.
Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims the chip will be capable of continuous learning.
Intel’s newly launched self-learning neuromorphic chip.
Normally deep learning works by training a neural network on giant datasets to create a model that can then be applied to new data. The Loihi chip will combine training and inference on the same chip, which will allow it to learn on the fly, constantly updating its models and adapting to changing circumstances without having to be deliberately re-trained.
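The software analogue of this is incremental, or online, learning, where a model is updated from each new batch of data rather than retrained from scratch. The scikit-learn sketch below illustrates the idea; it is only an analogy and says nothing about Loihi's actual on-chip learning rules.

```python
# Online learning sketch: the model absorbs each new batch of observations
# in place, adapting as the underlying pattern drifts, with no full retrain.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])
rng = np.random.default_rng(0)

for step in range(100):
    # A stream of new observations whose underlying pattern drifts over time.
    X_batch = rng.normal(size=(32, 10))
    drift = step / 100.0
    y_batch = (X_batch[:, 0] + drift * X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)   # incremental update
```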
A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it’s commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems.
So far neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications.
Lee Gomes gives an excellent summary of the hurdles facing neuromorphic computing in IEEE Spectrum. One is that deep learning can run on very simple, low-precision hardware that can be optimized to use very little power, which suggests complicated new architectures may struggle to find purchase.
It’s also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes that Loihi wouldn’t work well with some deep learning models.
Finally, there’s considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market, and companies like Google and Microsoft are developing their own in-house solutions.
Intel, for its part, isn’t putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess.
What new approaches will get us there are hard to predict, but it’s entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.
Image Credit: Intel

Posted in Human Robots

#431023 Finish Him! MegaBots’ Giant Robot Duel ...

It began two years ago when MegaBots co-founders Matt Oehrlein and Gui Cavalcanti donned American flags as capes and challenged Suidobashi Heavy Industries to a giant robot duel in a YouTube video that immediately went viral.
The battle proposed: MegaBots’ 15-foot-tall, 12,000-pound MK2 robot vs. Suidobashi’s 9,000-pound robot, KURATAS. Oehrlein and Cavalcanti first discovered the KURATAS robot in a listing on Amazon with a million-dollar price tag.
In an equally flamboyant response video, Suidobashi CEO and founder Kogoro Kurata accepted the challenge. (Yes, he named his robot after himself.) Both parties planned to take a year to prepare their robots for combat.
In the end, it took twice that long. Nonetheless, the battle is going down this September at an undisclosed location in Japan.
Oehrlein shared more about the much-anticipated showdown during our interview at Singularity University’s Global Summit.

Two years on from the initial video, MegaBots has now completed the combat-capable MK3 robot, named Eagle Prime. This new 12-ton, 16-foot-tall robot is powered by a 430-horsepower Corvette engine and requires two human pilots.
It’s also the robot they recently shipped to Japan to take on KURATAS.

Building Eagle Prime has been no small feat. Its arms and legs each weigh as much as a car, so assembling the robot takes forklifts, cranes, and a lot of caution. Fortress One, MegaBots’ headquarters in Hayward, California, is where the magic happens.
In terms of “weaponry,” Eagle Prime features a giant pneumatic cannon that shoots huge paint cannonballs. Oehrlein warns, “They can shatter all the windows in a car. It’s very powerful.” A logging grapple, which looks like a giant claw and exerts 3,000 pounds of steel-crushing force, has also been added to the robot.

“It’s a combination of range combat, using the paint balls to maybe blind cameras on the other robot or take out sensitive electronics, and then closing in with the claw and trying to disable their systems at close range,” Oehrlein explains.
Safety systems include a cockpit roll cage for the two pilots, five-point safety seatbelt harnesses, neck restraints, helmets, and flame retardant suits.
Co-founder Matt Oehrlein inside the cockpit of MegaBots’ Eagle Prime giant robot.
Oehrlein and Cavalcanti have also spent considerable time inside Eagle Prime practicing battlefield tactics and maneuvering the robot through obstacle courses.
Suidobashi’s robot is a bit shorter and lighter, but also a little faster, so the battle dynamics should be interesting.
You may be thinking, “Why giant dueling robots?”
MegaBots’ grand vision is a full-blown international sports league of giant fighting robots on the scale of Formula One racing. Picture a nostalgic evening sipping a beer (or three) and watching Pacific Rim- and Power Rangers-inspired robots battle—only in real life.
Eagle Prime is, in good humor, a proudly patriotic robot.
“Japan is known as a robotic powerhouse,” says Oehrlein. “I think there’s something interesting about the slightly overconfident American trying to get a foothold in the robotics space and doing it by building a bigger, louder, heavier robot, in true American fashion.”
For safety reasons, no fans will be admitted to the fight itself. The battle will be posted afterward on MegaBots’ YouTube channel and Facebook page.
We’ll soon find out whether this becomes another American underdog story.
In the meantime, I give my loyalty to MegaBots, and in the words of Mortal Kombat, say, “Finish him!”

Image Credit: MegaBots

Posted in Human Robots
