#431872 AI Uses Titan Supercomputer to Create ...
You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), software that writes algorithms to do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human—in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. The system is modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach than the one taken by the AI platforms of the Googles and Facebooks of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case, a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL reduced the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to identify the optimal hyperparameters—the key variables—for tackling a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
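MENNDL's actual code isn't public, but the evolutionary approach it takes can be sketched in miniature. The search space, mutation scheme, and scoring function below are all hypothetical stand-ins: in practice, "score" would mean training each candidate network on real data, which is exactly why thousands of GPUs help.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical hyperparameter search space; MENNDL's real space is
# far larger and includes network topology.
SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "filters": [16, 32, 64, 128],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(cand):
    # Change one hyperparameter at random.
    child = dict(cand)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def score(cand):
    # Placeholder fitness: in a real system, train the network and
    # return its validation accuracy. Here we pretend 8 layers and
    # 64 filters happen to be ideal for our imaginary dataset.
    return -abs(cand["layers"] - 8) - abs(cand["filters"] - 64) / 64

def evolve(generations=20, population=16, keep=4):
    pop = [random_candidate() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[:keep]  # elitism: the fittest survive unchanged
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(population - keep)]
    return max(pop, key=score)

best = evolve()
```

The design choice worth noting is that each generation is embarrassingly parallel: every candidate can be trained on its own GPU, which is what lets a machine like Titan evaluate half a million networks in a day.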
A Virtual Data Scientist
That’s not dissimilar to the approach of H2O.ai, a Silicon Valley startup that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com
#431862 Want Self-Healing Robots and Tires? ...
We all have scars, and each one tells a story. Tales of tomfoolery, tales of haphazardness, or in my case, tales of stupidity.
Whether the cause of your scar was a push-bike accident, a lack of concentration while cutting onions, or simply the byproduct of an active lifestyle, the experience was likely extremely painful and distressing. Not to mention the long and vexatious recovery period, stretching out for weeks and months after the actual event!
Cast your minds back to that time. How you longed for instant relief from your discomfort! How you longed to have your capabilities restored in an instant!
Well, materials that can heal themselves in an instant may not be far from becoming a reality—and a family of them known as elastomers holds the key.
“Elastomer” is essentially a big, fancy word for rubber. However, elastomers have one unique property—they are capable of returning to their original form after being vigorously stretched and deformed.
This unique property of elastomers has caught the eye of many scientists around the world, particularly those working in the field of robotics. The reason? Elastomer can be encouraged to return to its original shape, in many cases by simply applying heat. The implication of this is the quick and cost-effective repair of “wounds”—cuts, tears, and punctures to the soft, elastomer-based appendages of a robot’s exoskeleton.
Researchers from Vrije Universiteit Brussel in Belgium have been toying with the technique, and with remarkable success. The team built a robotic hand with fingers made of a type of elastomer. They found that cuts and punctures were indeed able to repair themselves simply by applying heat to the affected area.
How long does the healing process take? In this instance, about a day. Now that’s a lot shorter than the weeks and months of recovery time we typically need for a flesh wound, during which we are unable to write, play the guitar, or do the dishes. If you consider the latter to be a bad thing…
However, it’s not the first time scientists have played around with elastomers and examined their self-healing properties. Another team of scientists, headed up by Cheng-Hui Li and Chao Wang, discovered another type of elastomer that exhibited autonomous self-healing properties. Just to help you picture this stuff, the material closely resembles animal muscle—strong, flexible, and elastic. With autogenetic restorative powers to boot.
Advancements in the world of self-healing elastomers, or rubbers, may also affect the lives of everyday motorists. Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a self-healing rubber material that could be used to make tires that repair their own punctures.
This time the mechanism of self-healing doesn’t involve heat. Rather, it is related to a physical phenomenon associated with the rubber’s unique structure. Normally, when a large enough stress is applied to a typical rubber, there is catastrophic failure at the focal point of that stress. The self-healing rubber the researchers created, on the other hand, distributes that same stress evenly over a network of “crazes”—which are like cracks connected by strands of fiber.
Here’s the interesting part. Not only does this unique physical characteristic of the rubber prevent catastrophic failure, it facilitates self-repair. According to Harvard researchers, when the stress is released, the material snaps back to its original form and the crazes heal.
This wonder material could be used in any number of rubber-based products.
Professor Jinrong Wu, of Sichuan University, China, and co-author of the study, happened to single out tires: “Imagine that we could use this material as one of the components to make a rubber tire… If you have a cut through the tire, this tire wouldn’t have to be replaced right away. Instead, it would self-heal while driving, enough to give you leeway to avoid dramatic damage,” said Wu.
So where to from here? Well, self-healing elastomers could have a number of different applications. According to an article published by Quartz, the material could be used on artificial limbs. Perhaps it will provide some measure of structural integrity without looking like a tattered mess after years of regular use.
Or perhaps a sort of elastomer-based hybrid skin is on the horizon. A skin in which wounds heal instantly. And recovery time, unlike your regular old human skin of yesteryear, is significantly slashed. Furthermore, this future skin might eliminate those little reminders we call scars.
For those with poor judgment skills, this spells an end to disquieting reminders of our own stupidity.
Image Credit: Vrije Universiteit Brussel / Prof. Dr. ir. Bram Vanderborght
#431592 Reactive Content Will Get to Know You ...
The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their sitters. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.
For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no understatement to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
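The "arousal parameter" idea can be made concrete with a small sketch. Everything here is a hypothetical illustration, not Future Lighthouse's actual system: the resting heart rate baseline, the 50 bpm scaling window, and the back-off rule are all assumed values chosen for clarity.

```python
RESTING_HR = 70  # assumed per-user calibration baseline, in bpm

def arousal(heart_rate_bpm, resting=RESTING_HR):
    """Map a heart-rate reading to a 0..1 arousal estimate."""
    excess = max(0.0, heart_rate_bpm - resting)
    return min(1.0, excess / 50.0)  # 50 bpm over resting -> fully aroused

def scene_intensity(heart_rate_bpm, chosen_fear_level):
    """Modulate a horror scene's intensity (0..1).

    chosen_fear_level is the level the viewer picked before the
    experience; when measured arousal already exceeds it, the
    algorithm dials the scene back rather than escalating.
    """
    a = arousal(heart_rate_bpm)
    overshoot = max(0.0, a - chosen_fear_level)
    return max(0.0, chosen_fear_level - 0.5 * overshoot)

calm = scene_intensity(72, 0.8)        # calm viewer: full chosen intensity
frightened = scene_intensity(130, 0.8)  # racing heart: intensity reduced
```

This is the "passive interactivity" point in code form: the viewer never presses anything, yet the narrative parameters respond to them continuously.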
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formulas of different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines, and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The inception of the concept into mainstream culture dates to the classic Choose Your Own Adventure book series that launched in the late 70s, which in its literary form had great success. However, filmic takes on the theme have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button,” “vote on X, Y, Z”) the narrative—and immersion within that narrative—is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology proved that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the heartbeat of the user. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com
#431543 China Is an Entrepreneurial Hotbed That ...
Last week, Eric Schmidt, chairman of Alphabet, predicted that China will rapidly overtake the US in artificial intelligence…in as little as five years.
Last month, China announced plans to open a $10 billion quantum computing research center in 2020.
Bottom line, China is aggressively investing in exponential technologies, pursuing a bold goal of becoming the global AI superpower by 2030.
Based on what I’ve observed from China’s entrepreneurial scene, I believe they have a real shot of hitting that goal.
As I described in a previous tech blog, I recently traveled to China with a group of my Abundance 360 members, where I was hosted by my friend Kai-Fu Lee, the founder, chairman, and CEO of Sinovation Ventures.
On one of our first nights, Kai-Fu invited us to a special dinner at Da Dong Roast, which specializes in Peking duck, where we shared an 18-course meal.
The meal was amazing, and Kai-Fu’s dinner conversation provided us priceless insights on Chinese entrepreneurs.
Three topics opened my eyes. Here’s the wisdom I’d like to share with you.
1. The Entrepreneurial Culture in China
Chinese entrepreneurship has exploded onto the scene and changed significantly over the past 10 years.
In my opinion, one significant way that Chinese entrepreneurs vary from their American counterparts is in work ethic. The mantra I found in the startups I visited in Beijing and Shanghai was “9-9-6”—meaning the employees only needed to work from 9 am to 9 pm, 6 days a week.
Another concept Kai-Fu shared over dinner was the almost ‘dictatorial’ leadership of the founder/CEO. In China, it’s not uncommon for the founder/CEO to own the majority of the company, or at least 30–40 percent. It’s also the case that what the CEO says is gospel. Period, no debate. There is no minority or dissenting opinion. When the CEO says “march,” the company asks, “which way?”
When Kai-Fu started Sinovation (his $1 billion+ venture fund), there were few active angel investors. Today, China has a rich ecosystem of angel, venture capital, and government-funded innovation parks.
As venture capital in China has evolved, so too has the mindset of the entrepreneur.
Kai-Fu recalled an early investment he made in which, after an unfortunate streak, the entrepreneur came to him, almost in tears, apologizing for losing his money and promising he would earn it back for him in another way. Kai-Fu comforted the entrepreneur and said there was no such need.
Only a few years later, the situation was vastly different. An entrepreneur going through a similar unfortunate streak came to Kai-Fu and told him he had only $2 million left of his initial $12 million investment. He said he saw no value in returning the money and instead would take the last $2 million and use it as a final push to see if the company could succeed. He then promised that if he failed, he would remember what Kai-Fu did for him and, as such, possibly give Sinovation an opportunity to invest in his next company.
2. Chinese Companies Are No Longer Just ‘Copycats’
During dinner, Kai-Fu lamented that 10 years ago, it would be fair to call Chinese companies copycats of American companies. Five years ago, the claim would be controversial. Today, however, Kai-Fu is clear that claim is entirely false.
While smart Chinese startups will still look at what American companies are doing and build on trends, today it’s becoming a wise business practice for American tech giants to analyze Chinese companies. If you look at many new features of Facebook’s Messenger, it seems to very closely mirror Tencent’s WeChat.
Interestingly, tight government controls in China have actually spurred innovation. Take TV, for example, a highly regulated industry. Because of this regulation, most entertainment in China is consumed on the internet or by phone. Game shows, reality shows, and more will be entirely centered online.
Kai-Fu told us about one of his investments in a company that helps create Chinese singing sensations. They take girls in from a young age, school them, and regardless of talent, help build their presence and brand as singers. Once ready, these singers are pushed across all the available platforms, and superstars are born. The company recognizes its role in this superstar status, though, which is why it takes a 50 percent cut of all earnings.
This company is just one example of how Chinese entrepreneurs take advantage of China’s unique position, market, and culture.
3. China’s Artificial Intelligence Play
Kai-Fu wrapped up his talk with a brief introduction to the expansive AI industry in China. I previously discussed Face++, a Sinovation investment that is creating radically efficient facial recognition technology. Face++ is light years ahead of anyone else globally at recognizing faces in live video. However, Face++ is just one of the incredible advances in AI coming out of China.
Baidu, one of China’s most valuable tech companies, started out as just a search company. However, they now run one of the country’s leading self-driving car programs.
Baidu’s goal is to create a software suite atop existing hardware that will control all self-driving aspects of a vehicle but also be able to provide additional services such as HD mapping and more.
Another interesting application came from another of Sinovation’s investments, Smart Finance Group (SFG). While most payments in China are mobile (through WeChat or Alipay), only about 20 percent of the population has a credit history. This makes it very difficult for individuals in China to acquire a loan.
SFG’s mobile application takes in user data (as much as the user allows) and, based on the information provided, uses an AI agent to create a financial profile with the power to offer an instant loan. The loan, typically approved in minutes, can be deposited directly into the user’s WeChat or Alipay account. Unlike American loan companies, SFG avoids default and long-term debt by only providing one-month loans at 10 percent interest. Borrow $200, and you pay back $220 the following month.
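The arithmetic of that one-month structure is simple, and worth making explicit; the annualized figure below is my own back-of-the-envelope calculation, not a number from SFG.

```python
def repayment(principal, monthly_rate=0.10):
    """Total owed after one month at a flat monthly rate."""
    return principal * (1 + monthly_rate)

# Borrow $200, repay $220 a month later.
owed = repayment(200)

# Compounded over a year, a flat 10 percent per month would be
# (1.10 ** 12) - 1, roughly 2.14 -- about a 214 percent effective
# annual rate. That steepness is presumably why the product is
# capped at a single month rather than rolled over.
effective_annual = (1 + 0.10) ** 12 - 1
```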
Artificial intelligence is exploding in China, and Kai-Fu believes it will touch every single industry.
The only constant is change, and the rate of change is constantly increasing.
In the next 10 years, we’ll see tremendous changes on the geopolitical front and the global entrepreneurial scene caused by technological empowerment.
China is an entrepreneurial hotbed that cannot be ignored. I’m monitoring it closely. Are you?
Image Credit: anekoho / Shutterstock.com
#431371 Amazon Is Quietly Building the Robots of ...
Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics because of a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Yet Arthur C. Clarke has a very famous quote: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? True sci-fi robots won’t get here right away, but the pieces are coming together, and the company best placed to develop them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly progressing. Notably, Amazon has more than just a dream; it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for, are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow for new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developers of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions towards a general-purpose, multi-functional robot. They built an impressive device—PR2—and programmed the operating system, ROS, that is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many workers at the company went into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve difficult problems of full autonomy for the robot, or manipulating its environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? First off, they just announced that their profits are up by 30 percent, and yet the company is well-known for their constantly-moving Day One philosophy where a great deal of the profits are reinvested back into the business. But lots of companies have ambition.
One thing Amazon has that few other corporations have, as well as big financial resources, is viable stepping stones for developing the technologies needed for this sort of robotics to become a reality. They already employ 100,000 robots: these are of the “pragmatic, boring, useful” kind that we’ve profiled, which move around the shelves in warehouses. These robots are allowing Amazon to develop localization and mapping software for robots that can autonomously navigate in the simple warehouse environment.
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robot hands to machine learning algorithms and having them grasp thousands of objects. Although results were promising, the 10 to 20 percent failure rate for grasps is too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
Another area where humanoid robotics—especially bipedal locomotion, or walking—has been seriously suggested is the last-mile delivery problem. Amazon has shown willingness to be creative in this department with its notorious drone delivery service. But it’s all very well to have your self-driving car or van deliver packages to people’s doors; who puts the package on the doorstep? It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like CASSIE, developed by Oregon State, may one day be used to deliver parcels.
Again: no one more than Amazon stands to profit from cracking this technology. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through its website. Unlike companies like Google, Amazon has an easy way to make a direct profit from IoT devices, which could fuel funding.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment, and react accordingly. The way humans learn about our environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras, which is an ideal stepping stone task to train an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between today’s machines and our sci-fi vision of robots that can intelligently serve us, rather than mindlessly assemble cars, is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics—and with the unique stepping stones it has along the way—Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com