#431733 Why Humanoid Robots Are Still So Hard to ...
Picture a robot. In all likelihood, you just pictured a sleek metallic or chrome-white humanoid. Yet the vast majority of robots in the world around us are nothing like this; instead, they’re specialized for specific tasks. Our cultural conception of what robots are dates back to the coining of the term “robot” in the Czech play Rossum’s Universal Robots, which originally envisioned them as essentially synthetic humans.
The vision of a humanoid robot is tantalizing. There are constant efforts to create something that looks like the robots of science fiction. Recently, an old competitor in this field returned with a new model: Toyota has released what they call the T-HR3. As humanoid robots go, it appears to be pretty dexterous and to have a decent grip, with a number of degrees of freedom that make its movements pleasantly human.
This humanoid robot operates mostly via a remote-control system that lets the user direct the robot’s limbs by exerting different amounts of pressure on a framework. A VR headset completes the picture, letting the operator see from the robot’s perspective while teleoperating the machine. There’s no word on a price tag, but one imagines a machine with a control system this complicated won’t exactly be on your Christmas list, unless you’re a billionaire.
Toyota is no stranger to robotics. They released a series of “Partner Robots” that had a bizarre affinity for instrument-playing but weren’t often seen doing much else. Given that they didn’t seem to have much capability beyond the automaton that Leonardo da Vinci made hundreds of years ago, they promptly vanished. If, as the name suggests, the T-HR3 is a sequel to these robots, which came out shortly after ASIMO back in 2003, it’s substantially better.
Slightly less humanoid (and perhaps the more useful for it), Toyota’s HSR-2 is a wheeled robot base with a simple mechanical arm. It brings to mind earlier machines like the PR2, produced by dream-factory startup Willow Garage. The idea of an affordable robot that could simply move around on wheels, pick up and fetch objects, and didn’t harbor too-lofty ambitions to do anything else was quite successful.
So much so that when RoboCup, the international robotics competition, needed a platform for its robot-butler competition @Home, it chose the HSR-2 for its ability to handle objects. The HSR-2 has been deployed in trial runs to care for the elderly and injured, but has yet to be widely adopted for these purposes five years after its initial release. It’s telling that arguably the most successful multi-purpose humanoid robot isn’t really humanoid at all—and it’s curious that Toyota now seems to want to return to a more humanoid model a decade after giving up on the project.
What’s unclear, as is often the case with humanoid robots, is what, precisely, the T-HR3 is actually for. The teleoperation gets around the complex problem of control by simply having the machine controlled remotely by a human. That human then handles all the sensory perception, decision-making, planning, and manipulation; essentially, the hardest problems in robotics.
The T-HR3 doesn’t offer a great deal of autonomy, and that’s the trade-off: by sacrificing autonomy, you drastically cut down on the robot’s possible uses. Since it can’t act alone, you need a convincing scenario that calls for a teleoperated humanoid robot that’s less precise and vastly more expensive than simply getting a person to do the same job. Perhaps someday more autonomy will be developed for the robot, and the master maneuvering system that allows humans to control it will be reserved for emergencies, taking over only if the robot gets stuck.
Toyota’s press release says it is “a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.” In reality, it’s difficult to see such a robot being affordable or even that useful in the home or in medical facilities (unless it’s substantially stronger than humans). Equally, it certainly doesn’t seem robust enough to be deployed in disaster zones or outer space. These tasks have been mooted for robots for a very long time and few have proved up to the challenge.
Toyota’s third-generation humanoid robot, the T-HR3. Image Credit: Toyota
Instead, the robot seems designed to work alongside humans. Its design, standing 1.5 meters tall, weighing 75 kilograms, and possessing 32 degrees of freedom in its body, suggests it is built to closely mimic a person rather than a machine like Atlas, which is robust enough that you can imagine it being useful in a war zone. In this case, it might be closer to the model of the collaborative robots, or co-bots, developed by Rethink Robotics, whose many safety features, including force-sensitive feedback for the user, reduce the risk of the terrible PR that surrounds killer robots.
Rather, the emphasis is on graceful precision engineering: in the promo video, the robot can be seen balancing on one leg before showing off a few poised, yoga-like poses. This perhaps suggests that an application in elderly care, which Toyota has ventured into before and which was the stated aim of the simpler HSR-2, is more likely than deployment to a disaster zone.
The reason humanoid robots remain so elusive and so tempting probably comes down to a simple cognitive mistake. We make two bad assumptions. First, we assume that if you build a humanoid robot, give its joints enough flexibility, throw in a little AI and perhaps some pre-programmed behaviors, then presto, it will be able to do everything humans can. When you see a robot that moves well and looks humanoid, it seems like the hardest part is done; surely this robot could do anything. The reality is never so simple.
We also make the reverse assumption: we assume that when we are finally replaced, it will be by perfect replicas of our own bodies and brains that can fulfill all the functions we used to fulfill. Perhaps, in reality, the future of robots and AI is more like their present: piecemeal, with specialized algorithms and specialized machines gradually learning to outperform humans at every conceivable task without ever looking convincingly human.
It may well be that the T-HR3 is angling toward this concept, with machine learning as a platform for future research. Rather than being programmed as an omni-capable robot out of the box, it would gradually learn from its human controllers. In this way, the platform could be used to explore the limits of what humans can teach robots to do simply by having them mimic our bodies’ sequences of motion, much as neural networks are testing the limits of what algorithms can learn from data. No one machine will be able to perform everything a human can, but collectively, they will vastly outperform us at anything you’d want one to do.
So when you see a new android like Toyota’s, feel free to marvel at its technical abilities and indulge in the speculation about whether it’s a PR gimmick or a revolutionary step forward along the road to human replacement. Just remember that, human-level bots or not, we’re already strolling down that road.
Image Credit: Toyota
#431653 9 Robot Animals Built From Nature’s ...
Millions of years of evolution have allowed animals to develop some elegant and highly efficient solutions to problems like locomotion, flight, and dexterity. As Boston Dynamics unveils its latest mechanical animals, here’s a rundown of nine recent robots that borrow their designs from nature, and why.
SpotMini – Boston Dynamics
Starting with BigDog in 2005, the US company has built a whole stable of four-legged robots in recent years. Their first product was designed to be a robotic packhorse for soldiers that borrowed the quadrupedal locomotion of animals to travel over terrain too rough for conventional vehicles.
The US Army ultimately rejected the robot for being too noisy, according to the Guardian, but since then the company has scaled down its design, first to the Spot, then a first edition of the SpotMini that came out last year.
The latter came with a robotic arm where its head should be and was touted as a domestic helper, but a sleeker second edition without the arm was released earlier this month. There’s little detail on what the new robot is designed for, but the more polished design suggests a more consumer-focused purpose.
OctopusGripper – Festo
Festo has released a long line of animal-inspired machines over the years, from a mechanical kangaroo to robotic butterflies. Its latest creation isn’t a full animal—instead it’s a gripper based on an octopus tentacle that can be attached to the end of a robotic arm.
The pneumatically-powered device is made of soft silicone and features two rows of suction cups on its inner edge. By applying compressed air, the tentacle can wrap around a wide variety of differently shaped objects, just like its natural counterpart, and a vacuum can be applied to the larger suction cups to grip the object securely. Because it’s soft, it holds promise for robots required to operate safely in collaboration with humans.
CRAM – University of California, Berkeley
Cockroaches are renowned for their hardiness and ability to disappear down cracks that seem far too small for them. Researchers at UC Berkeley decided these capabilities could be useful for search and rescue missions and so set about experimenting on the insects to find out their secrets.
They found the bugs can squeeze into gaps a fifth of their normal standing height by splaying their legs out to the side without significantly slowing themselves down. So they built a palm-sized robot with a jointed plastic shell that could do the same to squeeze into crevices half its normal height.
Snake Robot – Carnegie Mellon University
Search and rescue missions are a common theme for animal-inspired robots, but the snake robot built by CMU researchers is one of the first to be tested in a real disaster.
A team of roboticists from the university helped Mexican Red Cross workers search collapsed buildings for survivors after the 7.1-magnitude earthquake that struck Mexico City in September. The snake design provides a small diameter and the ability to move in almost any direction, which makes the robot ideal for accessing tight spaces, though the team was unable to locate any survivors.
The snake currently features a camera on the front, but researchers told IEEE Spectrum that the experience helped them realize they should also add a microphone to listen for people trapped under the rubble.
Bio-Hybrid Stingray – Harvard University
Taking more than just inspiration from the animal kingdom, a group from Harvard built a robotic stingray out of silicone and rat heart muscle cells.
The robot uses the same synchronized undulations along the edge of its fins to propel itself as a ray does. But while a ray has two sets of muscles to pull the fins up and down, the new device has only one that pulls them down, with a springy gold skeleton that pulls them back up again. The cells are also genetically modified to be activated by flashes of light.
The project’s leader eventually hopes to engineer a human heart, and both his stingray and an earlier jellyfish bio-robot are primarily aimed at better understanding how that organ works.
Bat Bot – Caltech
Most recent advances in drone technology have come from quadcopters, but Caltech engineers think rigid devices with rapidly spinning propellers are probably not ideal for use in close quarters with humans.
That’s why they turned to soft-winged bats for inspiration. Mimicking them is no easy feat, though: bats use more than 40 joints with each flap of their wings, so the team had to pare the design down to nine joints to keep the robot from becoming too bulky. The simplified bat can’t ascend yet, but its onboard computer and sensors let it autonomously carry out glides, turns, and dives.
Salto – UC Berkeley
While even the most advanced robots tend to plod around, tree-dwelling animals have the ability to spring from branch to branch to clear obstacles and climb quickly. This could prove invaluable for search and rescue robots by allowing them to quickly traverse disordered rubble.
UC Berkeley engineers turned to the Senegal bush baby for inspiration after determining it scored highest in “vertical jumping agility”—a combination of how high and how frequently an animal can jump. They recreated its ability to sink into a super-low crouch that stores energy in its tendons, producing a robot that can carry out parkour-style double jumps off walls to quickly gain height.
Pleurobot – École Polytechnique Fédérale de Lausanne
Normally robots are masters of air, land, or sea, but the robotic salamander built by researchers at EPFL can both walk and swim.
Its designers used X-ray videos to carefully study how the amphibians move before using this to build a true-to-life robotic version using 3D printed bones, motorized joints, and a synthetic nervous system made up of electronic circuitry.
The robot’s low center of mass and segmented legs make it great at navigating rough terrain without losing balance, and the ability to swim adds versatility. The researchers also hope it will help paleontologists gain a better understanding of the movements of the first tetrapods to transition from water to land, of which salamanders are the best living analog.
Eelume – Eelume
A snakelike body isn’t only useful on land—eels are living proof it’s an efficient way to travel underwater, too. Norwegian robotics company Eelume has borrowed these principles to build a robot capable of sub-sea inspection, maintenance, and repair.
The modular design allows operators to put together their own favored configuration of joints and payloads such as sensors and tools. And while an early version of the robot used the same method of locomotion as an eel, the latest version undergoing sea trials has added a variety of thrusters for greater speeds and more maneuverability.
Image Credit: Boston Dynamics / YouTube
#431559 Drug Discovery AI to Scour a Universe of ...
On a dark night, away from city lights, the stars of the Milky Way can seem uncountable. Yet from any given location no more than 4,500 are visible to the naked eye. Meanwhile, our galaxy has 100–400 billion stars, and it is just one of hundreds of billions of galaxies in the universe.
The numbers of the night sky are humbling. And they give us a deep perspective…on drugs.
Yes, this includes wow-the-stars-are-freaking-amazing-tonight drugs, but also the kinds of drugs that make us well again when we’re sick. The number of possible organic compounds with “drug-like” properties dwarfs the number of stars in the universe by over 30 orders of magnitude.
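To put rough numbers on that claim (a back-of-the-envelope sketch built on commonly cited estimates, not figures from the article): chemists often peg drug-like chemical space at around 10^60 molecules, while the observable universe holds perhaps 10^22 to 10^24 stars.

```python
import math

molecules = 1e60  # commonly cited estimate of drug-like chemical space
stars = 1e24      # high-end estimate of stars in the observable universe

# How many orders of magnitude separate the two?
print(round(math.log10(molecules / stars)))  # 36, comfortably "over 30"
```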
Next to this multiverse of possibility, the chemical configurations scientists have made into actual medicines are like the smattering of stars you’d glimpse downtown.
But for good reason.
Exploring all that potential drug-space is as humanly impossible as exploring all of physical space, and even if we could, most of what we’d find wouldn’t fit our purposes. Still, the idea that wonder drugs must surely lurk amid the multitudes is too tantalizing to ignore.
Which is why, Alex Zhavoronkov said at Singularity University’s Exponential Medicine in San Diego last week, we should use artificial intelligence to do more of the legwork and speed discovery. This, he said, could be one of the next big medical applications for AI.
Dogs, Diagnosis, and Drugs
Zhavoronkov is CEO of Insilico Medicine and CSO of the Biogerontology Research Foundation. Insilico is one of a number of AI startups aiming to accelerate drug discovery with AI.
In recent years, Zhavoronkov said, the now-famous machine learning technique, deep learning, has made progress on a number of fronts. Algorithms that can teach themselves to play games—like DeepMind’s AlphaGo Zero or Carnegie Mellon’s poker playing AI—are perhaps the most headline-grabbing of the bunch. But pattern recognition was the thing that kicked deep learning into overdrive early on, when machine learning algorithms went from struggling to tell dogs and cats apart to outperforming their peers and then their makers in quick succession.
In medicine, deep learning algorithms trained on databases of medical images can spot life-threatening disease with accuracy equal to or greater than that of human professionals. There’s even speculation that AI, if we learn to trust it, could be invaluable in diagnosing disease. And, as Zhavoronkov noted, with more applications and a longer track record, that trust is coming.
“Tesla is already putting cars on the street,” Zhavoronkov said. “Three-year, four-year-old technology is already carrying passengers from point A to point B, at 100 miles an hour, and one mistake and you’re dead. But people are trusting their lives to this technology.”
“So, why don’t we do it in pharma?”
Trial and Error and Try Again
AI wouldn’t drive the car in pharmaceutical research. It’d be an assistant that, when paired with a chemist or two, could fast-track discovery by screening more possibilities for better candidates.
There’s plenty of room to make things more efficient, according to Zhavoronkov.
Drug discovery is arduous and expensive. Chemists sift tens of thousands of candidate compounds for the most promising to synthesize. Of these, a handful will go on to further research, fewer will make it to human clinical trials, and a fraction of those will be approved.
The whole process can take many years and cost hundreds of millions of dollars.
This is a big data problem if ever there was one, and deep learning thrives on big data. Early applications have shown their worth unearthing subtle patterns in huge training databases. Although drug-makers already use software to sift compounds, such software requires explicit rules written by chemists. AI’s allure is its ability to learn and improve on its own.
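The rule-based approach is easy to picture. Lipinski’s rule of five, a classic chemist-written filter for drug-likeness, boils down to a few hard-coded thresholds (a simplified sketch; real screening software layers many more rules on top):

```python
def passes_rule_of_five(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: a hand-written drug-likeness filter."""
    return (mol_weight <= 500 and logp <= 5
            and h_donors <= 5 and h_acceptors <= 10)

# Aspirin: MW ~180, logP ~1.2, 1 hydrogen-bond donor, 4 acceptors
print(passes_rule_of_five(180.2, 1.2, 1, 4))  # True
```

A learned model, by contrast, would infer its own version of such thresholds, and subtler patterns besides, directly from examples.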
“There are two strategies for AI-driven innovation in pharma to ensure you get better molecules and much faster approvals,” Zhavoronkov said. “One is looking for the needle in the haystack, and another one is creating a new needle.”
To find the needle in the haystack, algorithms are trained on large databases of molecules. Then they go looking for molecules with attractive properties. But creating a new needle? That’s a possibility enabled by the generative adversarial networks Zhavoronkov specializes in.
Such algorithms pit two neural networks against each other. One generates meaningful output while the other judges whether this output is true or false, Zhavoronkov said. Together, the networks generate new objects like text, images, or in this case, molecular structures.
“We started employing this particular technology to make deep neural networks imagine new molecules, to make it perfect right from the start. So, to come up with really perfect needles,” Zhavoronkov said. “[You] can essentially go to this [generative adversarial network] and ask it to create molecules that inhibit protein X at concentration Y, with the highest viability, specific characteristics, and minimal side effects.”
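For the curious, here’s the shape of that adversarial setup in code: a minimal, illustrative PyTorch sketch pitting a generator against a discriminator on a toy one-dimensional distribution, standing in for the molecular representations Insilico actually uses (the architectures and numbers are assumptions for demonstration, not the company’s models).

```python
# Minimal GAN sketch: a generator learns to mimic a toy "real" data
# distribution while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise vectors to fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0  # toy "real" data: N(4, 1.25)
    fake = G(torch.randn(64, 8))

    # Train the discriminator to separate real from generated samples.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The generator's samples should drift toward the real mean (~4.0).
print(G(torch.randn(1000, 8)).mean().item())
```

Swap the one-dimensional samples for encoded molecular structures and condition the generator on desired properties, and you have the general shape of the “new needle” approach.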
Zhavoronkov believes AI can find or fabricate more needles from the array of molecular possibilities, freeing human chemists to focus on synthesizing only the most promising. If it works, he hopes we can increase hits, minimize misses, and generally speed the process up.
Proof’s in the Pudding
Insilico isn’t alone in its drug-discovery quest, nor is the field a brand-new area of interest.
Last year, a Harvard group published a paper on an AI that similarly suggests drug candidates. The software trained on 250,000 drug-like molecules and used its experience to generate new molecules that blended existing drugs and made suggestions based on desired properties.
An MIT Technology Review article on the subject highlighted a few of the challenges such systems may still face. The results returned aren’t always meaningful or easy to synthesize in the lab, and the quality of these results, as always, is only as good as the data dined upon.
Vijay Pande, a Stanford chemistry professor and Andreessen Horowitz partner, said that images, speech, and text—three of the areas where deep learning has made quick strides—have better, cleaner data. Chemical data, on the other hand, is still being optimized for deep learning. Also, while there are public databases, much data still lives behind closed doors at private companies.
To overcome the challenges and prove their worth, Zhavoronkov said, his company is very focused on validating the tech. But this year, skepticism in the pharmaceutical industry seems to be easing into interest and investment.
AI drug discovery startup Exscientia inked a deal with Sanofi for $280 million and GlaxoSmithKline for $42 million. Insilico is also partnering with GlaxoSmithKline, and Numerate is working with Takeda Pharmaceutical. Even Google may jump in. According to an article in Nature outlining the field, the firm’s deep learning project, Google Brain, is growing its biosciences team, and industry watchers wouldn’t be surprised to see them target drug discovery.
With AI and the hardware running it advancing rapidly, the greatest potential may yet be ahead. Perhaps, one day, all 10^60 molecules in drug-space will be at our disposal. “You should take all the data you have, build n new models, and search as much of that 10^60 as possible” before every decision you make, Brandon Allgood, CTO at Numerate, told Nature.
Today’s projects need to live up to their promises, of course, but Zhavoronkov believes AI will have a big impact in the coming years, and now’s the time to integrate it. “If you are working for a pharma company, and you’re still thinking, ‘Okay, where is the proof?’ Once there is a proof, and once you can see it to believe it—it’s going to be too late,” he said.
Image Credit: Klavdiya Krinichnaya / Shutterstock.com
#431543 China Is an Entrepreneurial Hotbed That ...
Last week, Eric Schmidt, chairman of Alphabet, predicted that China will rapidly overtake the US in artificial intelligence…in as little as five years.
Last month, China announced plans to open a $10 billion quantum computing research center in 2020.
Bottom line, China is aggressively investing in exponential technologies, pursuing a bold goal of becoming the global AI superpower by 2030.
Based on what I’ve observed of China’s entrepreneurial scene, I believe they have a real shot at hitting that goal.
As I described in a previous tech blog, I recently traveled to China with a group of my Abundance 360 members, where I was hosted by my friend Kai-Fu Lee, the founder, chairman, and CEO of Sinovation Ventures.
On one of our first nights, Kai-Fu invited us to a special dinner at Da Dong Roast, which specializes in Peking duck, where we shared an 18-course meal.
The meal was amazing, and Kai-Fu’s dinner conversation gave us priceless insights into Chinese entrepreneurs.
Three topics opened my eyes. Here’s the wisdom I’d like to share with you.
1. The Entrepreneurial Culture in China
Chinese entrepreneurship has exploded onto the scene and changed significantly over the past 10 years.
In my opinion, one significant way that Chinese entrepreneurs vary from their American counterparts is in work ethic. The mantra I found in the startups I visited in Beijing and Shanghai was “9-9-6”—meaning the employees only needed to work from 9 am to 9 pm, 6 days a week.
Another concept Kai-Fu shared over dinner was the almost ‘dictatorial’ leadership of the founder/CEO. In China, it’s not uncommon for the founder/CEO to own the majority of the company, or at least 30–40 percent. It’s also the case that what the CEO says is gospel. Period, no debate. There is no minority or dissenting opinion. When the CEO says “march,” the company asks, “which way?”
When Kai-Fu started Sinovation (his $1 billion+ venture fund), there were few active angel investors. Today, China has a rich ecosystem of angel, venture capital, and government-funded innovation parks.
As venture capital in China has evolved, so too has the mindset of the entrepreneur.
Kai-Fu recalled an early investment he made in which, after an unfortunate streak, the entrepreneur came to him, almost in tears, apologizing for losing his money and promising to earn it back for him in another way. Kai-Fu comforted the entrepreneur and said there was no such need.
Only a few years later, the situation was vastly different. An entrepreneur going through a similar unfortunate streak came to Kai-Fu and told him he had only $2 million left of his initial $12 million investment. He said he saw no value in returning the money and instead would use the last $2 million as a final push to see if the company could succeed. He then promised that if he failed, he would remember what Kai-Fu had done for him and perhaps give Sinovation an opportunity to invest in his next company.
2. Chinese Companies Are No Longer Just ‘Copycats’
During dinner, Kai-Fu lamented that 10 years ago, it would be fair to call Chinese companies copycats of American companies. Five years ago, the claim would be controversial. Today, however, Kai-Fu is clear that claim is entirely false.
While smart Chinese startups will still look at what American companies are doing and build on trends, today it’s becoming wise business practice for American tech giants to analyze Chinese companies. Many of the new features in Facebook’s Messenger, for instance, seem to closely mirror Tencent’s WeChat.
Interestingly, tight government controls in China have actually spurred innovation. Take TV, for example, a highly regulated industry. Because of this regulation, most entertainment in China is consumed on the internet or by phone, with game shows, reality shows, and more centered entirely online.
Kai-Fu told us about one of his investments in a company that helps create Chinese singing sensations. They take girls in from a young age, school them, and regardless of talent, help build their presence and brand as singers. Once ready, these singers are pushed across all the available platforms, and superstars are born. The company recognizes its role in this superstar status, though, which is why it takes a 50 percent cut of all earnings.
This company is just one example of how Chinese entrepreneurs take advantage of China’s unique position, market, and culture.
3. China’s Artificial Intelligence Play
Kai-Fu wrapped up his talk with a brief introduction to the expansive AI industry in China. I previously discussed Face++, a Sinovation investment, which is creating radically efficient facial recognition technology. Face++ is light-years ahead of anyone else globally at recognizing faces in live video. However, Face++ is just one of the incredible advances in AI coming out of China.
Baidu, one of China’s most valuable tech companies, started out as just a search company. However, they now run one of the country’s leading self-driving car programs.
Baidu’s goal is to create a software suite, running atop existing hardware, that will not only control all self-driving aspects of a vehicle but also provide additional services such as HD mapping.
Another interesting application came from another of Sinovation’s investments, Smart Finance Group (SFG). Although most payments in China are mobile (through WeChat or Alipay), only around 20 percent of the population has a credit history, which makes it very difficult for individuals to get a loan.
SFG’s mobile application takes in user data (as much as the user allows) and uses an AI agent to build a financial profile that can support an instant loan offer. The loan, typically approved within minutes, can be deposited directly into the user’s WeChat or Alipay account. Unlike American loan companies, SFG avoids default and long-term debt by providing only one-month loans at 10 percent interest: borrow $200, and you pay back $220 the following month.
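That flat, fixed-term structure makes the repayment arithmetic trivial. A hypothetical sketch (the function name and terms here are illustrative, not SFG’s actual pricing logic):

```python
def one_month_repayment(principal, flat_rate=0.10):
    """Flat one-month loan: repay the principal plus a fixed 10% fee."""
    return round(principal * (1 + flat_rate), 2)

print(one_month_repayment(200))  # 220.0: borrow $200, repay $220 next month
```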
Artificial intelligence is exploding in China, and Kai-Fu believes it will touch every single industry.
The only constant is change, and the rate of change is constantly increasing.
In the next 10 years, we’ll see tremendous changes on the geopolitical front and the global entrepreneurial scene caused by technological empowerment.
China is an entrepreneurial hotbed that cannot be ignored. I’m monitoring it closely. Are you?
Image Credit: anekoho / Shutterstock.com
#431392 What AI Can Now Do Is Remarkable—But ...
Major websites all over the world use a system called CAPTCHA to verify that someone is indeed a human and not a bot when entering data or signing into an account. CAPTCHA stands for the “Completely Automated Public Turing test to tell Computers and Humans Apart.” The squiggly letters and numbers, often posted against photographs or textured backgrounds, have been a good way to foil hackers. They are annoying but effective.
The days of CAPTCHA as a viable line of defense may, however, be numbered.
Researchers at Vicarious, a Californian artificial intelligence firm funded by Amazon founder Jeff Bezos and Facebook’s Mark Zuckerberg, have just published a paper documenting how they were able to defeat CAPTCHA using new artificial intelligence techniques. Whereas today’s most advanced artificial intelligence (AI) technologies use neural networks that require massive amounts of data to learn from, sometimes millions of examples, the researchers said their system needed just five training steps to crack Google’s reCAPTCHA technology. With this, they achieved a 67 percent success rate per character—reasonably close to the human accuracy rate of 87 percent. In answering PayPal and Yahoo CAPTCHAs, the system achieved an accuracy rate of greater than 50 percent.
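A per-character rate also understates how hard whole CAPTCHAs are: if errors are roughly independent (a simplifying assumption for illustration, not a figure from the paper), the success rate for a full string falls off exponentially with its length.

```python
# Chance of reading an entire n-character CAPTCHA correctly at a
# 67% per-character success rate, assuming independent errors.
per_char = 0.67
for n in (4, 6, 8):
    print(n, round(per_char ** n, 3))  # 4: 0.202, 6: 0.09, 8: 0.041
```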
The CAPTCHA breakthrough came hard on the heels of another major milestone from Google’s DeepMind team, the people who built the world’s best Go-playing system. DeepMind built a new artificial-intelligence system called AlphaGo Zero that taught itself to play the game at a world-beating level with minimal training data, mainly using trial and error—in a fashion similar to how humans learn.
Both playing Go and deciphering CAPTCHAs are clear examples of what we call narrow AI, which is different from artificial general intelligence (AGI)—the stuff of science fiction. Remember R2-D2 of Star Wars, Ava from Ex Machina, and Samantha from Her? They could do many things and learned everything they needed on their own.
Narrow AI technologies are systems that can only perform one specific type of task. For example, if you asked AlphaGo Zero to learn to play Monopoly, it could not, even though that is a far less sophisticated game than Go. If you asked the CAPTCHA cracker to learn to understand a spoken phrase, it would not even know where to start.
To date, though, even narrow AI has been difficult to build and perfect. To perform very elementary tasks such as determining whether an image is of a cat or a dog, the system requires the development of a model that details exactly what is being analyzed and massive amounts of data with labeled examples of both. The examples are used to train the AI systems, which are modeled on the neural networks in the brain, in which the connections between layers of neurons are adjusted based on what is observed. To put it simply, you tell an AI system exactly what to learn, and the more data you give it, the more accurate it becomes.
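A toy sketch of that supervised recipe (synthetic data standing in for labeled cat-and-dog images) shows the pattern: labeled examples go in, a classifier comes out, and accuracy tends to climb as the training set grows.

```python
# Supervised learning in miniature: more labeled data generally means
# better accuracy, at least on this synthetic task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

for n in (100, 1_000, 10_000):
    X, y = make_classification(n_samples=n, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(n, round(model.score(X_te, y_te), 3))
```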
The methods that Vicarious and Google used were different; they allowed the systems to learn on their own, albeit in a narrow field. By making their own assumptions about what the training model should be and trying different permutations until they got the right results, they were able to teach themselves how to read the letters in a CAPTCHA or to play a game.
This blurs the line between narrow AI and AGI and has broader implications in robotics and virtually any other field in which machine learning in complex environments may be relevant.
Beyond visual recognition, the Vicarious breakthrough and AlphaGo Zero success are encouraging scientists to think about how AIs can learn to do things from scratch. And this brings us one step closer to coexisting with classes of AIs and robots that can learn to perform new tasks that are slight variants on their previous tasks—and ultimately the AGI of science fiction.
So R2-D2 may be here sooner than we expected.
This article was originally published by The Washington Post.
Image Credit: Zapp2Photo / Shutterstock.com