Tag Archives: google
#432031 Why the Rise of Self-Driving Vehicles ...
It’s been a long time coming. For years Waymo (formerly known as Google Chauffeur) has been diligently developing, driving, testing and refining its fleets of various models of self-driving cars. Now Waymo is going big. The company recently placed an order for several thousand new Chrysler Pacifica minivans and next year plans to launch driverless taxis in a number of US cities.
This deal raises one of the biggest unanswered questions about autonomous vehicles: if fleets of driverless taxis make it cheap and easy for regular people to get around, what’s going to happen to car ownership?
One popular line of thought goes as follows: as autonomous ride-hailing services become ubiquitous, people will no longer need to buy their own cars. This notion has a certain logical appeal. It makes sense to assume that as driverless taxis become widely available, most of us will eagerly sell the family car and use on-demand taxis to get to work, run errands, or pick up the kids. After all, vehicle ownership is pricey and most cars spend the vast majority of their lives parked.
Even experts believe commercial availability of autonomous vehicles will cause car sales to drop.
Market research firm KPMG estimates that by 2030, midsize car sales in the US will decline from today’s 5.4 million units sold each year to a measly 2.1 million units, less than half that number. Another research firm, RethinkX, offers an even more pessimistic estimate (or optimistic, depending on your opinion of cars), predicting that autonomous vehicles will reduce consumer demand for new vehicles by a whopping 70 percent.
The reality is that reports of the impending death of private vehicle sales are greatly exaggerated. Autonomous taxis will indeed be a beneficial and widely embraced form of urban transportation, but the opposite of mass car abandonment will happen: most people will still prefer to own their own autonomous vehicle. In fact, the total number of autonomous vehicles sold each year is going to increase rather than decrease.
When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.
Several unique characteristics of autonomous vehicles will ensure that people will continue to buy their own cars.
1. Cost: Thanks to simpler electric engines and lighter auto bodies, autonomous vehicles will be cheaper to buy and maintain than today’s human-driven vehicles. Some estimates put the price at around $10K per vehicle, a stark contrast with today’s average of $30K per vehicle.
2. Personal belongings: Consumers will be able to do much more in their driverless vehicles, including work, play, and rest. This means they will want to keep more personal items in their cars.
3. Frequent upgrades: The average (human-driven) car today is owned for 10 years. As driverless cars become software-driven devices, their price/performance ratio will track Moore’s Law. Their rapid improvement will increase the appeal and frequency of new vehicle purchases.
4. Instant accessibility: In a dense urban setting, a driverless taxi is able to show up within minutes of being summoned. But not so in rural areas, where people live miles apart. For many, delay and “loss of control” over their own mobility will increase the appeal of owning their own vehicle.
5. Diversity of form and function: Autonomous vehicles will be available in a wide variety of sizes and shapes. Consumers will drive demand for custom-made, purpose-built autonomous vehicles whose form is adapted for a particular function.
Let’s explore each of these characteristics in more detail.
Autonomous vehicles will cost less for several reasons. For one, they will be powered by electric engines, which are cheaper to construct and maintain than gasoline-powered engines. Removing human drivers will also save consumers money. Autonomous vehicles will be much less likely to have accidents, hence they can be built out of lightweight, lower-cost materials and will be cheaper to insure. With the human interface no longer needed, autonomous vehicles won’t be burdened by the manufacturing costs of a complex dashboard, steering wheel, and foot pedals.
While hop-on, hop-off autonomous taxi-based mobility services may be ideal for some of the urban population, several sizeable customer segments will still want to own their own cars.
These include people who live in sparsely populated rural areas who can’t afford to wait extended periods of time for a taxi to appear. Families with children will prefer to own their own driverless cars to house their children’s car seats, favorite toys, and sippy cups. Another loyal car-buying segment will be die-hard gadget hounds who will eagerly buy a sexy upgraded model every year or so, unable to resist the siren song of AI that is three times as safe, or a ride that is twice as smooth.
Finally, consider the allure of robotic diversity.
Commuters will invest in a home office on wheels, a sleek, traveling workspace resembling the first-class suite on an airplane. On the high end of the market, city-dwellers and country-dwellers alike will special-order custom-made autonomous vehicles whose shape and on-board gadgetry is adapted for a particular function or hobby. Privately-owned small businesses will buy their own autonomous delivery robot that could range in size from a knee-high, last-mile delivery pod, to a giant, long-haul shipping device.
As autonomous vehicles near commercial viability, Waymo’s procurement deal with Fiat Chrysler is just the beginning.
The exact value of this future automotive industry has yet to be defined, but research from Intel’s internal autonomous vehicle division estimates this new so-called “passenger economy” could be worth nearly $7 trillion a year. To position themselves to capture a chunk of this potential revenue, companies whose businesses used to lie in previously disparate fields such as robotics, software, ships, and entertainment (to name but a few) have begun to form a bewildering web of what they hope will be symbiotic partnerships. Car hailing and chip companies are collaborating with car rental companies, who in turn are befriending giant software firms, who are launching joint projects with all sizes of hardware companies, and so on.
Last year, car companies sold an estimated 80 million new cars worldwide. Over the course of nearly a century, car companies and their partners, global chains of suppliers and service providers, have become masters at mass-producing and maintaining sturdy and cost-effective human-driven vehicles. As autonomous vehicle technology becomes ready for mainstream use, traditional automotive companies are being forced to grapple with the painful realization that they must compete in a new playing field.
The challenge for traditional car-makers won’t be that people no longer want to own cars. Instead, the challenge will be learning to compete in a new and larger transportation industry where consumers will choose their product according to the appeal of its customized body and the quality of its intelligent software.
—
Melba Kurman and Hod Lipson are the authors of Driverless: Intelligent Cars and the Road Ahead and Fabricated: the New World of 3D Printing.
Image Credit: hfzimages / Shutterstock.com
#431928 How Fast Is AI Progressing? Stanford’s ...
When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems are still not solved. The AI Index, from Stanford, is an attempt to measure how much progress has actually been made in artificial intelligence.
The index adopts a unique approach, aggregating data across many different domains. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: a tenfold increase in academic activity since 1996, explosive growth in startups focused on AI, and corresponding venture capital investment. The issue with these metrics is that they measure AI hype as much as AI progress. The two might be correlated, but then again, they may not be.
The index also scrapes data from the popular coding website GitHub, which hosts more source code than any other site in the world. This makes it possible to track the amount of AI-related software people are creating, as well as interest levels in popular machine learning packages like TensorFlow and Keras (a minimal sketch of this kind of metric follows below). The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about the apocalypse and an employment crisis, articles considered “positive” outweigh the “negative” by three to one.
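For a sense of how simple such an activity metric can be, here is a minimal sketch that pulls current star counts for a few popular machine learning repositories from the public GitHub REST API, using stars as a rough proxy for developer interest. The repository list and the choice of stars as a metric are illustrative assumptions, not the AI Index’s actual methodology.

```python
import requests

# Illustrative proxy for interest in popular ML packages: GitHub star counts.
# The repo list and the star metric are assumptions, not the AI Index's method.
REPOS = ["tensorflow/tensorflow", "keras-team/keras", "scikit-learn/scikit-learn"]

def star_count(repo):
    """Return the current stargazer count for a GitHub repository."""
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()
    return resp.json()["stargazers_count"]

if __name__ == "__main__":
    for repo in REPOS:
        print(f"{repo}: {star_count(repo):,} stars")
```

Tracked over time rather than as a snapshot, numbers like these give a crude but easily reproducible measure of developer activity.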
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute the fact that we’re in an age of considerable AI hype, but the history of AI is littered with booms and busts in hype, growth spurts that alternate with AI winters. So the AI Index also attempts to track the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, though systems that combine natural language processing and image recognition still can’t answer questions about images very well.) Speech recognition on phone calls is almost at parity with human transcribers.
In other narrow fields, AIs are still catching up to humans. Translation might be good enough that you can usually get the gist of what’s being said, but still scores poorly on the BLEU metric for translation accuracy. The AI index even keeps track of how well the programs can do on the SAT test, so if you took it, you can compare your score to an AI’s.
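BLEU, mentioned above, is a good example of a simple, reproducible metric: it scores n-gram overlap between a candidate translation and one or more references. Below is a minimal sketch using NLTK’s sentence-level implementation; the sentences are invented, and real benchmarks report corpus-level BLEU over thousands of sentence pairs.

```python
# Minimal BLEU example using NLTK (pip install nltk). The sentences are invented;
# real translation benchmarks compute corpus-level BLEU over large test sets.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "is", "sitting", "on", "the", "mat"]]
candidate = ["the", "cat", "sat", "on", "the", "mat"]

# Smoothing avoids a zero score when some higher-order n-gram never matches.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```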
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate the best method of assessing translation or natural language understanding. The Loebner Prize, a simplified question-and-answer Turing Test, recently adopted Winograd schema-style questions, which rely on contextual understanding, and AI has more difficulty with these.
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even a more complex game like Go. The braver predictors who came up with timelines thought AlphaGo’s success was faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.
“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”
Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.
The AI index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of Computer Science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and about overhyped “charlatans and snake-oil salesmen” exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI index, as an annual collection of relevant information, is a good start.
Image Credit: Photobank gallery / Shutterstock.com
#431873 Why the World Is Still Getting ...
If you read or watch the news, you’ll likely think the world is falling to pieces. Trends like terrorism, climate change, and a growing population straining the planet’s finite resources can easily lead you to think our world is in crisis.
But there’s another story, a story the news doesn’t often report. This story is backed by data, and it says we’re actually living in the most peaceful, abundant time in history, and things are likely to continue getting better.
The News vs. the Data
The reality that’s often clouded by a constant stream of bad news is that we’re actually seeing a massive drop in poverty and fewer deaths from violent crime and preventable diseases. On top of that, we’re the most educated populace to ever walk the planet.
“Violence has been in decline for thousands of years, and today we may be living in the most peaceful era in the existence of our species.” –Steven Pinker
In the last hundred years, we’ve seen the average human life expectancy nearly double, the global GDP per capita rise exponentially, and childhood mortality drop 10-fold.
That’s pretty good progress! Maybe the world isn’t all gloom and doom. If you’re still not convinced the world is getting better, check out the charts in this article from Vox and on Peter Diamandis’ website for a lot more data.
Abundance for All Is Possible
So now that you know the world isn’t so bad after all, here’s another thing to think about: it can get much better, very soon.
In their book Abundance: The Future Is Better Than You Think, Steven Kotler and Peter Diamandis suggest it may be possible for us to meet and even exceed the basic needs of all the people living on the planet today.
“In the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.”
This means making sure every single person in the world has adequate food, water and shelter, as well as a good education, access to healthcare, and personal freedom.
This might seem unimaginable, especially if you tend to think the world is only getting worse. But given how much progress we’ve already made in the last few hundred years, coupled with the recent explosion of information sharing and new, powerful technologies, abundance for all is not as out of reach as you might believe.
Throughout history, we’ve seen that in the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.
Napoleon III
In Abundance, Diamandis and Kotler tell the story of how aluminum went from being one of the rarest metals on the planet to being one of the most abundant…
In the 1800s, aluminum was more valuable than silver and gold because it was rarer. So when Napoleon III entertained the King of Siam, the king and his guests were honored by being given aluminum utensils, while the rest of the dinner party ate with gold.
But aluminum is not really rare.
In fact, aluminum is the third most abundant element in the Earth’s crust, making up 8.3% of its weight. But it wasn’t until chemists Charles Martin Hall and Paul Héroult discovered how to use electrolysis to cheaply separate aluminum from surrounding materials that the element suddenly became abundant.
The problems keeping us from achieving a world where everyone’s basic needs are met may seem like resource problems — when in reality, many are accessibility problems.
The Engine Driving Us Toward Abundance: Exponential Technology
History is full of examples like the aluminum story. The most powerful one of the last few decades is information technology. Think about all the things that computers and the internet made abundant that were previously far less accessible because of cost or availability … Here are just a few examples:
Easy access to the world’s information
Ability to share information freely with anyone and everyone
Free/cheap long-distance communication
Buying and selling goods/services regardless of location
Less than two decades ago, someone who reached a certain level of economic stability might spend around $10K on stereos, cameras, entertainment systems, and the like. Today, we carry all that equipment in the palm of our hand.
Now, there is a new generation of technologies heavily dependent on information technology and, therefore, similarly riding the wave of exponential growth. When put to the right use, emerging technologies like artificial intelligence, robotics, digital manufacturing, nanomaterials, and digital biology make it possible for us to drastically raise the standard of living for every person on the planet.
These are just some of the innovations which are unlocking currently scarce resources:
IBM’s Watson Health is being trained and used in medical facilities like the Cleveland Clinic to help doctors diagnose disease. In the future, it’s likely we’ll trust AI just as much, if not more than humans to diagnose disease, allowing people all over the world to have access to great diagnostic tools regardless of whether there is a well-trained doctor near them.
Solar power is now cheaper than fossil fuels in some parts of the world, and with advances in new materials and storage, the cost may decrease further. This could eventually lead to nearly-free, clean energy for people across the world.
Google’s GNMT (Google Neural Machine Translation) system can now translate languages nearly as well as a human, unlocking the ability for people to communicate globally as we never have before.
Self-driving cars are already on the roads of several American cities and will be coming to a road near you in the next couple years. Considering the average American spends nearly two hours driving every day, not having to drive would free up an increasingly scarce resource: time.
The Change-Makers
Today’s innovators can create enormous change because they have these incredible tools—which would have once been available only to big organizations—at their fingertips. And, as a result of our hyper-connected world, there is an unprecedented ability for people across the planet to work together to create solutions to some of our most pressing problems today.
“In today’s hyperlinked world, solving problems anywhere, solves problems everywhere.” –Peter Diamandis and Steven Kotler, Abundance
According to Diamandis and Kotler, there are three groups of people accelerating positive change.
DIY Innovators
In the 1970s and 1980s, the Homebrew Computer Club was a meeting place for “do-it-yourself” computer enthusiasts who shared ideas and spare parts. By the 1990s and 2000s, that little club had become known as an inception point for the personal computer industry — dozens of companies, including Apple Computer, can directly trace their origins back to Homebrew. Since then, we’ve seen the rise of the social entrepreneur, the Maker Movement, and the DIY Bio movement, which have similar ambitions to democratize social reform, manufacturing, and biology, the way Homebrew democratized computers. These are the people who look for new opportunities and aren’t afraid to take risks to create something new that will change the status quo.
Techno-Philanthropists
Unlike the robber barons of the 19th and early 20th centuries, today’s “techno-philanthropists” are not just giving away some of their wealth for a new museum; they are using their wealth to solve global problems and investing in social entrepreneurs aiming to do the same. The Bill and Melinda Gates Foundation has given away at least $28 billion, with a strong focus on ending diseases like polio, malaria, and measles for good. Jeff Skoll, after cashing out of eBay with $2 billion in 1998, went on to create the Skoll Foundation, which funds social entrepreneurs across the world. And last year, Mark Zuckerberg and Priscilla Chan pledged to give away 99% of their $46 billion in Facebook stock during their lifetimes.
The Rising Billion
Cisco estimates that by 2020, there will be 4.1 billion people connected to the internet, up from 3 billion in 2015. This number might even be higher, given the efforts of companies like Facebook, Google, Virgin Group, and SpaceX to bring internet access to the world. That’s a billion new people in the next several years who will be connected to the global conversation, looking to learn, create, and better their own lives and communities. In his book The Fortune at the Bottom of the Pyramid, C.K. Prahalad writes that finding co-creative ways to serve this rising market can help lift people out of poverty while creating viable businesses for inventive companies.
The Path to Abundance
Eager to create change, innovators armed with powerful technologies can accomplish incredible feats. Kotler and Diamandis imagine that the path to abundance occurs in three tiers:
Basic Needs (food, water, shelter)
Tools of Growth (energy, education, access to information)
Ideal Health and Freedom
Of course, progress doesn’t always happen in a straight, logical way, but having a framework to visualize the needs is helpful.
Many people don’t believe it’s possible to end the persistent global problems we’re facing. However, looking at history, we can see many examples where technological tools have unlocked resources that previously seemed scarce.
Technological solutions are not always the answer, and we need social change and policy solutions as much as we need technology solutions. But we have seen time and time again, that powerful tools in the hands of innovative, driven change-makers can make the seemingly impossible happen.
You can download the full “Path to Abundance” infographic here. It was created under a CC BY-NC-ND license. If you share, please attribute to Singularity University.
Image Credit: janez volmajer / Shutterstock.com
#431872 AI Uses Titan Supercomputer to Create ...
You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good if not better than any developed by a human in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers had access to only 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
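MENNDL itself runs across thousands of GPUs, but the underlying evolutionary idea (mutate candidate hyperparameters, keep whichever settings score best on held-out data) can be sketched in a few lines. The toy example below tunes a small scikit-learn network on a bundled digits dataset; it is purely illustrative and is not MENNDL’s actual algorithm.

```python
# Toy evolutionary hyperparameter search, loosely in the spirit of MENNDL.
# Purely illustrative: real systems evaluate thousands of candidates in parallel.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0)

def evaluate(params):
    """Train a small network with the given hyperparameters; return test accuracy."""
    model = MLPClassifier(hidden_layer_sizes=(params["units"],),
                          learning_rate_init=params["lr"],
                          max_iter=300, random_state=0)
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)

def mutate(params):
    """Randomly perturb one hyperparameter to produce a child candidate."""
    child = dict(params)
    if random.random() < 0.5:
        child["units"] = max(4, params["units"] + random.choice([-16, 16]))
    else:
        child["lr"] = min(0.1, max(1e-4, params["lr"] * random.choice([0.5, 2.0])))
    return child

best = {"units": 32, "lr": 0.001}
best_score = evaluate(best)
for generation in range(10):
    child = mutate(best)
    score = evaluate(child)
    if score > best_score:  # keep the fitter candidate
        best, best_score = child, score
    print(f"gen {generation}: best accuracy {best_score:.3f} with {best}")
```

On a real scientific dataset the search space would also include network depth, layer types, and convolutional filter sizes, with evaluations distributed across many GPUs.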
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a Silicon Valley startup that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data science equivalent of a chess grandmaster to its customers (the company claims several such grandmasters among its employees and advisors). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify which features should be included in the computer model to make the most of the data, based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
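As a rough illustration of the zip-code example mentioned above, one common automatic feature engineering trick is target encoding: replace a high-cardinality categorical column with the historical churn rate observed for each category, smoothed toward the global rate so that rarely seen categories don’t get extreme values. The column names and data below are invented, and this is not Driverless AI’s actual pipeline.

```python
# Illustrative target encoding of a zip-code column for a churn model.
# Column names and data are invented; this is not Driverless AI's actual pipeline.
import pandas as pd

df = pd.DataFrame({
    "zip_code": ["10001", "10001", "94105", "94105", "94105", "60601"],
    "churned":  [1,        0,       1,       1,       0,       0],
})

# Blend each zip code's churn rate with the global rate ("smoothing") so that
# zip codes seen only a few times don't produce extreme encoded values.
global_rate = df["churned"].mean()
stats = df.groupby("zip_code")["churned"].agg(["mean", "count"])
smoothing = 2.0
stats["encoded"] = (stats["count"] * stats["mean"] + smoothing * global_rate) / (
    stats["count"] + smoothing)

df["zip_churn_rate"] = df["zip_code"].map(stats["encoded"])
print(df)
```

A derived feature like this is exactly the kind of thing such a system can generate, test for statistical significance, and explain back to the user in plain language.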
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com