Tag Archives: going

#431872 AI Uses Titan Supercomputer to Create ...

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate, in less than a day, neural networks as good as if not better than any developed by a human.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. Such a system, modeled after the human brain, is known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach than the one taken by the Googles and Facebooks of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to identify the optimal hyperparameters—the key variables—for tackling a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
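To make the idea concrete, here is a minimal sketch of an evolutionary search over network hyperparameters, in the general spirit of what MENNDL automates at vastly larger scale; the search space, scoring function, and numbers below are purely illustrative assumptions, not ORNL’s actual code.

```python
import random

# Hypothetical search space: these hyperparameter names and ranges are
# illustrative only, not MENNDL's actual configuration.
SEARCH_SPACE = {
    "layers": [2, 3, 4, 5, 6],
    "units": [16, 32, 64, 128, 256],
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
}

def random_config():
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def mutate(config):
    """Copy a configuration and re-randomize one hyperparameter."""
    child = dict(config)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evaluate(config):
    """Stand-in for training a small network and returning validation
    accuracy. A real system would train on the target dataset here."""
    # Toy score that prefers mid-sized networks; replace with real training.
    return 1.0 / (1 + abs(config["layers"] - 4)) + random.random() * 0.1

def evolve(generations=20, population_size=16, survivors=4):
    population = [random_config() for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[:survivors]
        # Refill the population with mutated copies of the best configs.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(population_size - survivors)]
    return max(population, key=evaluate)

if __name__ == "__main__":
    print("Best configuration found:", evolve())
```

The point of the sketch is simply that the search itself is mechanical; what a supercomputer like Titan adds is the ability to evaluate hundreds of thousands of candidates in parallel.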
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data science equivalent of a chess grandmaster to its customers (the company claims several such grandmasters on its staff and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify which features should be included in the computer model to make the most of the data, based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
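As a rough illustration of the kind of automatic feature engineering Candel describes, here is a minimal sketch that derives a per-zip-code churn rate as a candidate feature; the column names and data are hypothetical, and this is not H2O.ai’s Driverless AI code.

```python
from collections import defaultdict

# Hypothetical customer records; the fields are illustrative only.
customers = [
    {"id": 1, "zip": "94105", "churned": 1},
    {"id": 2, "zip": "94105", "churned": 0},
    {"id": 3, "zip": "10001", "churned": 1},
    {"id": 4, "zip": "10001", "churned": 1},
]

def add_zip_churn_rate(rows):
    """Derive a new candidate feature: the historical churn rate of each
    customer's zip code. An automated system would generate and score many
    such aggregate features, keeping only those that improve the model."""
    totals = defaultdict(lambda: [0, 0])  # zip -> [churned_count, total_count]
    for row in rows:
        totals[row["zip"]][0] += row["churned"]
        totals[row["zip"]][1] += 1
    for row in rows:
        churned, count = totals[row["zip"]]
        row["zip_churn_rate"] = churned / count
    return rows

for row in add_zip_churn_rate(customers):
    print(row)
```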
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com


#431869 When Will We Finally Achieve True ...

The field of artificial intelligence goes back a long way, but many consider it to have been officially born in the summer of 1956, when a group of scientists gathered at Dartmouth College. Over the preceding decades, computers had come on in incredible leaps and bounds; they could now perform calculations far faster than humans. Given that progress, optimism was rational. The genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The Dartmouth scientists had a fairly simple idea: intelligence is, after all, just a mathematical process, and the human brain is a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This research proposal, by the way, contains one of the earliest uses of the term artificial intelligence. They had a number of ideas—maybe simulating the human brain’s pattern of neurons could work and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go. This is unlike Moore’s Law. Moore’s Law, the doubling of processing power roughly every couple of years, makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That’s enough to convince people they’re working on something that could prove revolutionary very soon (people are less impressed by efforts that will lead to tangible results centuries down the line), but far enough away that you won’t be embarrassingly proved wrong anytime soon. Of the two, Armstrong found more evidence for the second: people were perfectly happy to predict AI arriving after their deaths, although most didn’t, and there was a clear bias towards “15–20 years from now” in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to those who say AI, or even consciousness, will “emerge” if a network is sufficiently complex and given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know if this is possible, we cannot know how far away we are from reaching it, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. For example, the Turing test has not yet been passed, in the sense that AI still cannot convince people it’s human in conversation. But AI’s calculating ability already far exceeds human levels, and its ability to perform other tasks like pattern recognition and driving cars may soon do the same. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find more ways to feed more data into more algorithms, the impact on society from this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those that point out we should be worried about AI for other reasons. Just because we can’t say for sure if human-level AI will arrive this century, or never, it doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors could be correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met in Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com


#431866 The Technologies We’ll Have Our Eyes ...

It’s that time of year again when our team has a little fun and throws on our futurist glasses to look ahead at some of the technologies and trends we’re most anticipating next year.
Whether the implications of a technology are vast or it resonates with one of us personally, here’s the list from some of the Singularity Hub team of what we have our eyes on as we enter the new year.
For a little refresher, these were the technologies our team was fired up about at the start of 2017.
Tweet us the technology you’re excited to watch in 2018 at @SingularityHub.
Cryptocurrency and Blockchain
“Given all the noise Bitcoin is making globally in the media, it is driving droves of main street investors to dabble in and learn more about cryptocurrencies. This will continue to raise valuations and drive adoption of blockchain. From Bank of America recently getting a blockchain-based patent approved to the Australian Securities Exchange’s plan to use blockchain, next year is going to be chock-full of these stories. Coindesk even recently spotted a patent filing from Apple involving blockchain. From NEO (‘China’s Ethereum’) to IOTA, Golem, and Qtum, there are a lot of interesting cryptos to follow given the immense number of potential applications. Hang on, it’s going to be a bumpy ride in 2018!”
–Kirk Nankivell, Website Manager
There Is No One Technology to Watch
“Next year may be remembered for advances in gene editing, blockchain, AI—or most likely all these and more. There is no single technology to watch. A number of consequential trends are advancing and converging. This general pace of change is exciting, and it also contributes to spiking anxiety. Technology’s invisible lines of force are extending further and faster into our lives and subtly subverting how we view the world and each other in unanticipated ways. Still, all the near-term messiness and volatility, the little and not-so-little dramas, the hype and disillusion, the controversies and conflict, all that smooths out a bit when you take a deep breath and a step back, and it’s my sincere hope and belief the net result will be more beneficial than harmful.”
–Jason Dorrier, Managing Editor
‘Fake News’ Fighting Technology
“It’s been a wild ride for the media this year with the term ‘fake news’ moving from the public’s periphery into mainstream vocabulary. The spread of ‘fake news’ is often blamed on media outlets, but social media platforms and search engines are often responsible too. (Facebook still won’t identify as a media company—maybe next year?) Yes, technology can contribute to spreading false information, but it can also help stop it. From technologists who are building in-article ‘trust indicator’ features, to artificial intelligence systems that can both spot and shut down fake news early on, I’m hopeful we can create new solutions to this huge problem. One step further: if publishers step up to fix this we might see some faith restored in the media.”
–Alison E. Berman, Digital Producer
Pay-as-You-Go Home Solar Power
“People in rural African communities are increasingly bypassing electrical grids (which aren’t even an option in many cases) and installing pay-as-you-go solar panels on their homes. The companies offering these services are currently not subject to any regulations, though they’re essentially acting as a utility. As demand for power grows, they’ll have to come up with ways to efficiently scale, and to balance the humanitarian and capitalistic aspects of their work. It’s fascinating to think traditional grids may never be necessary in many areas of the continent thanks to this technology.”
–Vanessa Bates Ramirez, Associate Editor
Virtual Personal Assistants
“AI is clearly going to rule our lives, and in many ways it already makes us look like clumsy apes. Alexa, Siri, and Google Assistant are promising first steps toward a world of computers that understand us and relate to us on an emotional level. I crave the day when my Apple Watch coaches me into healthier habits, lets me know about new concerts nearby, speaks to my self-driving Lyft on my behalf, and can help me respond effectively to aggravating emails based on communication patterns. But let’s not brush aside privacy concerns and the implications of handing over our personal data to megacorporations. The scariest thing here is that privacy laws and advertising ethics do not accommodate this level of intrusive data hoarding.”
–Matthew Straub, Director of Digital Engagement (Hub social media)
Solve for Learning: Educational Apps for Children in Conflict Zones
“I am most excited by exponential technology when it is used to help solve a global grand challenge. Educational apps are currently being developed to help solve for learning by increasing accessibility to learning opportunities for children living in conflict zones. Many children in these areas are not receiving an education, with girls being 2.5 times more likely than boys to be out of school. The EduApp4Syria project is developing apps to help children in Syria and Kashmir learn in their native languages. Mobile phones are increasingly available in these areas, and the apps are available offline for children who do not have consistent access to mobile networks. The apps are low-cost, easily accessible, and scalable educational opportunities.”
–Paige Wilcoxson, Director, Curriculum & Learning Design
Image Credit: Triff / Shutterstock.com


#431859 Digitized to Democratized: These Are the ...

“The Six Ds are a chain reaction of technological progression, a road map of rapid development that always leads to enormous upheaval and opportunity.”
–Peter Diamandis and Steven Kotler, Bold
We live in incredible times. News travels the globe in an instant. Music, movies, games, communication, and knowledge are ever-available on always-connected devices. From biotechnology to artificial intelligence, powerful technologies that were once only available to huge organizations and governments are becoming more accessible and affordable thanks to digitization.
The potential for entrepreneurs to disrupt industries and corporate behemoths to unexpectedly go extinct has never been greater.
One hundred or fifty or even twenty years ago, disruption meant coming up with a product or service people needed but didn’t have yet, then finding a way to produce it with higher quality and lower costs than your competitors. This entailed hiring hundreds or thousands of employees, having a large physical space to put them in, and waiting years or even decades for hard work to pay off and products to come to fruition.

But thanks to digital technologies developing at exponential rates of change, the landscape of 21st-century business has taken on a dramatically different look and feel.
The structure of organizations is changing. Instead of thousands of employees and large physical plants, modern start-ups are small organizations focused on information technologies. They dematerialize what was once physical and create new products and revenue streams in months, sometimes weeks.
It no longer takes a huge corporation to have a huge impact.
Technology is disrupting traditional industrial processes, and they’re never going back. This disruption is filled with opportunity for forward-thinking entrepreneurs.
The secret to positively impacting the lives of millions of people is understanding and internalizing the growth cycle of digital technologies. This growth cycle takes place in six key steps, which Peter Diamandis calls the Six Ds of Exponentials: digitization, deception, disruption, demonetization, dematerialization, and democratization.
According to Diamandis, cofounder and chairman of Singularity University and founder and executive chairman of XPRIZE, when something is digitized it begins to behave like an information technology.

Newly digitized products develop at an exponential pace instead of a linear one, fooling onlookers at first before going on to disrupt companies and whole industries. Before you know it, something that was once expensive and physical is an app that costs a buck.
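A quick back-of-the-envelope comparison shows why the early part of that curve fools onlookers: thirty linear steps take you to 30, while thirty doublings take you past a billion.

```python
# Thirty steps of linear growth versus thirty doublings.
linear, exponential = 0, 1
for step in range(30):
    linear += 1        # one step at a time
    exponential *= 2   # doubling each step
print(linear, exponential)  # 30 vs 1073741824
```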
Newspapers and CDs are two obvious recent examples. The entertainment and media industries are still dealing with the aftermath of digitization as they attempt to transform and update old practices tailored to a bygone era. But it won’t end with digital media. As more of the economy is digitized—from medicine to manufacturing—industries will hop on an exponential curve and be similarly disrupted.
Diamandis’s 6 Ds are critical to understanding and planning for this disruption.
The 6 Ds of Exponential Organizations are Digitized, Deceptive, Disruptive, Demonetized, Dematerialized, and Democratized.

Diamandis uses the contrasting fates of Kodak and Instagram to illustrate the power of the six Ds and exponential thinking.
Kodak invented the digital camera in 1975, but didn’t invest heavily in the new technology, instead sticking with what had always worked: traditional cameras and film. In 1996, Kodak had a $28 billion market capitalization with 95,000 employees.
But the company didn’t pay enough attention to how digitization of their core business was changing it; people were no longer taking pictures in the same way and for the same reasons as before.
After a downward spiral, Kodak went bankrupt in 2012. That same year, Facebook acquired Instagram, a digital photo sharing app, which at the time was a startup with 13 employees. The acquisition’s price tag? $1 billion. And Instagram had been founded only 18 months earlier.
The most ironic piece of this story is that Kodak invented the digital camera; they took the first step toward overhauling the photography industry and ushering it into the modern age, but they were unwilling to disrupt their existing business by taking a risk in what was then uncharted territory. So others did it instead.
The same can happen with any technology that’s just getting off the ground. It’s easy to stop pursuing it in the early part of the exponential curve, when development appears to be moving slowly. But failing to follow through only gives someone else the chance to do it instead.
The Six Ds are a road map showing what can happen when an exponential technology is born. Not every phase is easy, but the results give even small teams the power to change the world in a faster and more impactful way than traditional business ever could.
Image Credit: Mohammed Tareq / Shutterstock


#431828 This Self-Driving AI Is Learning to ...

I don’t have to open the doors of AImotive’s white 2015 Prius to see that it’s not your average car. This particular Prius has been christened El Capitan, the name written below the rear doors, and two small cameras are mounted on top of the car. Bundles of wire snake out from them, as well as from the two additional cameras on the car’s hood and trunk.
Inside is where things really get interesting, though. The trunk holds a computer the size of a microwave, and a large monitor covers the passenger glove compartment and dashboard. The center console has three switches labeled “Allowed,” “Error,” and “Active.”
Budapest-based AImotive is working to provide scalable self-driving technology alongside big players like Waymo and Uber in the autonomous vehicle world. On a highway test ride with CEO Laszlo Kishonti near the company’s office in Mountain View, California, I got a glimpse of just how complex that world is.
Camera-Based Feedback System
AImotive’s approach to autonomous driving is a little different from that of some of the best-known systems. For starters, they’re using cameras, not lidar, as primary sensors. “The traffic system is visual and the cost of cameras is low,” Kishonti said. “A lidar can recognize when there are people near the car, but a camera can differentiate between, say, an elderly person and a child. Lidar’s resolution isn’t high enough to recognize the subtle differences of urban driving.”
The company’s aiDrive software uses data from the camera sensors to feed information to its algorithms for hierarchical decision-making, grouped under four concurrent activities: recognition, location, motion, and control.
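As an illustration of that hierarchy, here is a minimal sketch of a camera-driven pipeline in which each stage consumes the previous stage’s output; the stage names follow the article’s description, but every type, function, and number below is a hypothetical stand-in rather than AImotive’s actual API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    label: str        # e.g. "pedestrian", "vehicle", "lane_marking"
    distance_m: float

@dataclass
class Command:
    steering_deg: float
    throttle: float

def recognize(camera_frames: List[bytes]) -> List[DetectedObject]:
    """Run vision models on raw camera frames (stubbed here)."""
    return [DetectedObject("lane_marking", 5.0), DetectedObject("vehicle", 30.0)]

def locate(objects: List[DetectedObject]) -> float:
    """Estimate the car's offset from the lane center, in meters."""
    markings = [o for o in objects if o.label == "lane_marking"]
    return 0.0 if markings else 0.5  # crude placeholder estimate

def plan_motion(lane_offset_m: float) -> float:
    """Pick a steering correction, in degrees, that re-centers the car."""
    return -2.0 * lane_offset_m  # simple proportional correction

def control(target_steering_deg: float) -> Command:
    """Turn the planned motion into clamped low-level actuator commands."""
    return Command(steering_deg=max(-30.0, min(30.0, target_steering_deg)),
                   throttle=0.2)

def drive_step(camera_frames: List[bytes]) -> Command:
    objects = recognize(camera_frames)
    offset = locate(objects)
    steering = plan_motion(offset)
    return control(steering)

if __name__ == "__main__":
    print(drive_step([b"front_frame", b"rear_frame"]))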
Kishonti pointed out that lidar has already gotten more cost-efficient, and will only continue to do so.
“Ten years ago, lidar was best because there wasn’t enough processing power to do all the calculations by AI. But the cost of running AI is decreasing,” he said. “In our approach, computer vision and AI processing are key, and for safety, we’ll have fallback sensors like radar or lidar.”
aiDrive currently runs on Nvidia chips, which Kishonti noted were originally designed for graphics, and are not terribly efficient given how power-hungry they are. “We’re planning to substitute lower-cost, lower-energy chips in the next six months,” he said.
Testing in Virtual Reality
Waymo recently announced its fleet has now driven four million miles autonomously. That’s a lot of miles, and hard to compete with. But AImotive isn’t trying to compete, at least not by logging more real-life test miles. Instead, the company is doing 90 percent of its testing in virtual reality. “This is what truly differentiates us from competitors,” Kishonti said.
He outlined the three main benefits of VR testing: it can simulate scenarios too dangerous for the real world (such as hitting something), too costly (not every company has Waymo’s funds to run hundreds of cars on real roads), or too time-consuming (like waiting for rain, snow, or other weather conditions to occur naturally and repeatedly).
“Real-world traffic testing is very skewed towards the boring miles,” he said. “What we want to do is test all the cases that are hard to solve.”
On a screen that looked not unlike multiple games of Mario Kart, he showed me the simulator. Cartoon cars cruised down winding streets, outfitted with all the real-world surroundings: people, trees, signs, other cars. As I watched, a furry kangaroo suddenly hopped across one screen. “Volvo had an issue in Australia,” Kishonti explained. “A kangaroo’s movement is different than other animals since it hops instead of running.” Talk about cases that are hard to solve.
AImotive is currently testing around 1,000 simulated scenarios every night, with a steadily rising curve of successful tests. These scenarios are broken down into features, and the car’s behavior around those features is fed into a neural network. As the algorithms learn more features, the level of complexity the vehicles can handle goes up.
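To make that workflow concrete, here is a minimal sketch of a nightly scenario-regression loop: run every scenario many times, track the pass rate, and queue the failures for retraining. The scenario names and the stubbed simulator are hypothetical, not AImotive’s actual tooling.

```python
import random

# Hypothetical scenario catalog; names are illustrative only.
SCENARIOS = [
    "cut_in_on_highway",
    "shadow_under_bridge",
    "kangaroo_crossing",
    "heavy_rain_lane_markings",
]

def run_simulation(scenario: str, model_version: str) -> bool:
    """Stand-in for running one VR scenario against the driving stack and
    reporting whether the car handled it safely."""
    return random.random() > 0.2  # placeholder pass rate

def nightly_regression(model_version: str, runs_per_scenario: int = 250):
    failures = []
    total = passed = 0
    for scenario in SCENARIOS:
        for _ in range(runs_per_scenario):
            total += 1
            if run_simulation(scenario, model_version):
                passed += 1
            else:
                failures.append(scenario)  # queue for triage and retraining
    print(f"{model_version}: {passed}/{total} scenario runs passed")
    return failures

if __name__ == "__main__":
    hard_cases = nightly_regression("nightly-build")
```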
On the Road
After Kishonti and his colleagues filled me in on the details of their product, it was time to test it out. A safety driver sat in the driver’s seat, a computer operator in the passenger seat, and Kishonti and I in back. The driver maintained full control of the car until we merged onto the highway. Then he flicked the “Allowed” switch, his copilot pressed the “Active” switch, and he took his hands off the wheel.
What happened next, you ask?
A few things. El Capitan was going exactly the speed limit—65 miles per hour—which meant all the other cars were passing us. When a car merged in front of us or cut us off, El Cap braked accordingly (if a little abruptly). The monitor displayed the feed from each of the car’s cameras, plus multiple data fields and a simulation where a blue line marked the center of the lane, measured by the cameras tracking the lane markings on either side.
I noticed El Cap wobbling out of our lane a bit, but it wasn’t until two things happened in a row that I felt a little nervous: first we went under a bridge, then a truck pulled up next to us, both bridge and truck casting a complete shadow over our car. At that point El Cap lost it, and we swerved haphazardly to the right, narrowly missing the truck’s rear wheels. The safety driver grabbed the steering wheel and took back control of the car.
What happened, Kishonti explained, was that the shadows made it hard for the car’s cameras to see the lane markings. This was a new scenario the algorithm hadn’t previously encountered. If we’d only gone under a bridge or only been next to the truck for a second, El Cap may not have had so much trouble, but the two events happening in a row really threw the car for a loop—almost literally.
“This is a new scenario we’ll add to our testing,” Kishonti said. He added that another way for the algorithm to handle this type of scenario, rather than basing its speed and positioning on the lane markings, is to mimic nearby cars. “The human eye would see that other cars are still moving at the same speed, even if it can’t see details of the road,” he said.
After another brief—and thankfully uneventful—hands-off cruise down the highway, the safety driver took over, exited the highway, and drove us back to the office.
Driving into the Future
I climbed out of the car feeling amazed not only that self-driving cars are possible, but that driving is possible at all. I squint when driving into a tunnel, swerve to avoid hitting a stray squirrel, and brake gradually at stop signs—all without consciously thinking to do so. On top of learning to steer, brake, and accelerate, self-driving software has to incorporate our brains’ and bodies’ unconscious (but crucial) reactions, like our pupils dilating to let in more light so we can see in a tunnel.
Despite all the progress of machine learning, artificial intelligence, and computing power, I have a wholly renewed appreciation for the thing that’s been in charge of driving up till now: the human brain.
Kishonti seemed to feel similarly. “I don’t think autonomous vehicles in the near future will be better than the best drivers,” he said. “But they’ll be better than the average driver. What we want to achieve is safe, good-quality driving for everyone, with scalability.”
AImotive is currently working with American tech firms and with car and truck manufacturers in Europe, China, and Japan.
Image Credit: Alex Oakenman / Shutterstock.com
