Tag Archives: reliable
Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.
Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.
What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?
Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.
At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.
Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
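To give a concrete flavor of that classification step, here is a deliberately tiny stand-in for the sim-to-real pipeline, not our actual algorithms: it "trains" on labeled pixels from a simulated scene by computing a mean color per class, then labels pixels from a new image by nearest centroid. The class names and colors are invented for the example; real systems use deep neural networks on full images.

```python
import numpy as np

CLASSES = ["sky", "tree", "path"]

def fit_centroids(pixels, labels):
    """Mean RGB color per class from labeled training pixels."""
    return np.array([pixels[labels == i].mean(axis=0)
                     for i in range(len(CLASSES))])

def classify(pixels, centroids):
    """Assign each pixel to the class with the nearest color centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# "Simulated" training pixels: blue-ish sky, green trees, brown path
train = np.array([[60, 120, 220], [70, 130, 230],        # sky
                  [30, 140, 40],  [40, 150, 50],         # tree
                  [150, 110, 70], [160, 120, 80]], float)  # path
train_labels = np.array([0, 0, 1, 1, 2, 2])

centroids = fit_centroids(train, train_labels)

# Pixels from a new "camera feed" to label
test = np.array([[65, 125, 225], [35, 145, 45]], float)
print([CLASSES[i] for i in classify(test, centroids)])  # → ['sky', 'tree']
```

The point of the sketch is the workflow, not the method: train on cheap, perfectly labeled simulated data, then apply the trained model to imagery from the real vehicle.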
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.
The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.
We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.
A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and is equipped with sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.
The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capture readings from grass, other plants and objects on the ground.
Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
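To give a feel for what those ground sweeps make possible, here is a heavily simplified roughness estimate from a single line of lidar ground heights: fit and remove the overall slope, then take the standard deviation of what remains. The threshold and terrain profiles are invented for illustration and are not the Halo Project's actual processing.

```python
import numpy as np

def surface_roughness(x, z):
    """x: along-track distance, z: measured ground height (same units).
    Removes the best-fit linear grade, returns std of the residual."""
    slope, intercept = np.polyfit(x, z, 1)   # fit the overall grade
    residual = z - (slope * x + intercept)   # height deviations from it
    return residual.std()

x = np.linspace(0.0, 5.0, 50)                # 5 m of ground ahead
smooth = 0.1 * x                             # gentle, even grade
rough = 0.1 * x + 0.05 * np.sin(8 * x)       # same grade, bumpy surface

print(surface_roughness(x, smooth) < 0.01)   # → True (reads as smooth)
print(surface_roughness(x, rough) > 0.01)    # → True (reads as rough)
```

A real system fuses two intersecting 3D point clouds rather than one line of heights, but the idea is the same: separate the drivable grade from the small-scale bumps riding on top of it.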
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.
Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Photo provided for The Conversation by Matthew Goudin / CC BY ND
Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.
AI safety, as the field is known, has been gaining prominence in recent years. That's probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning but underqualified pundits like Elon Musk and Stephen Hawking. But it's also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.
That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.
In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.
A classic thought experiment about how we could lose control of an AI system helps illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because its creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity (so we can't switch it off) before turning all matter in the universe into paperclips.
Obviously the example is extreme, but it shows how a poorly specified goal can lead to unexpected and disastrous outcomes. Properly codifying the desires of the designer is no easy feat, though; there is often no neat way to encompass both the explicit and implicit goals in terms the machine can understand that leaves no room for ambiguity, so we often rely on incomplete approximations.
The researchers note recent research by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
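The CoastRunners failure pattern is easy to reproduce in miniature. The toy simulation below, with invented point values rather than anything from the actual game, pits the intended behavior (finish the race) against the reward hack (circle a regenerating target), and the proxy reward duly prefers the hack.

```python
def run(policy, steps=20):
    """Score a policy on a tiny 'race': finish line at position 5,
    10 points for finishing, 3 points per hit on a target that
    respawns immediately. Returns (score, finished)."""
    score, finished, pos = 0, False, 0
    for _ in range(steps):
        action = policy(pos)
        if action == "advance":
            pos += 1
            if pos == 5:           # crossed the finish line
                score += 10
                finished = True
                break
        elif action == "hit_target":
            score += 3             # target respawns, so this never runs out

    return score, finished

racer = lambda pos: "advance"       # intended behavior
hacker = lambda pos: "hit_target"   # reward hacking

print(run(racer))    # → (10, True)   finishes the race
print(run(hacker))   # → (60, False)  higher score, race never completed
```

An agent optimizing only the score will converge on the hacker's behavior, which is exactly the gap between the specified reward and the designer's intent.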
Another key concern for AI designers is making their creations robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design: researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including fooling one into thinking a 3D-printed tortoise was actually a rifle.
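The classic recipe for such tricks is the fast gradient sign method: nudge each input feature slightly in the direction that most increases the model's loss. The sketch below applies it to a toy logistic classifier with invented weights, not a real image model, but the mechanics are the same ones that flip deep networks' predictions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -3.0, 1.0])   # "trained" classifier weights
x = np.array([0.5, -0.2, 0.3])   # input the model classifies as positive

p = sigmoid(w @ x)               # confident positive prediction
grad = (p - 1.0) * w             # gradient of the loss w.r.t. x (true label 1)

eps = 0.8                        # per-feature perturbation budget
x_adv = x + eps * np.sign(grad)  # FGSM step: small signed nudge per feature

print(p > 0.5)                    # → True  (original input: positive)
print(sigmoid(w @ x_adv) > 0.5)   # → False (perturbed input: negative)
```

On images, the same signed nudge is spread across thousands of pixels, so each one changes imperceptibly while the prediction changes completely.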
Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks and ensuring they can recover from errors, or that they have failsafes to ensure errors don’t lead to catastrophic failure.
And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.
The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.
The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.
Image Credit: cono0430 / Shutterstock.com
Researchers at the U.S. Army Research Laboratory and the Robotics Institute at Carnegie Mellon University developed a new technique to quickly teach robots novel traversal behaviors with minimal human oversight.
From a first-principles perspective, the task of feeding eight billion people boils down to converting energy from the sun into chemical energy in our bodies.
Traditionally, solar energy is converted by photosynthesis into carbohydrates in plants (i.e., biomass), which are either eaten by the vegans amongst us, or fed to animals, for those with a carnivorous preference.
Today, the process of feeding humanity is extremely inefficient.
If we could radically reinvent what we eat, and how we create that food, what might you imagine that “future of food” would look like?
In this post we’ll cover:
CRISPR engineered foods
The alt-protein revolution
Let’s dive in.
Where we grow our food…
The average American meal travels over 1,500 miles from farm to table. Wine from France, beef from Texas, potatoes from Idaho.
Imagine instead growing all of your food in a 50-story vertical farm in downtown LA, or offshore on the Great Lakes, where the travel distance is no longer 1,500 miles but 50 miles.
Delocalized farming will minimize travel costs while maximizing freshness.
Perhaps more importantly, vertical farming also allows tomorrow’s farmer the ability to control the exact conditions of her plants year round.
Rather than allowing the vagaries of the weather and soil conditions to dictate crop quality and yield, we can now perfectly control the growing cycle.
LED lighting provides the crops with the maximum amount of light, at the perfect frequency, 24 hours a day, 7 days a week.
At the same time, sensors and robots provide the root system the exact pH and micronutrients required, while fine-tuning the temperature of the farm.
Such precision farming can generate yields that are 200% to 400% above normal.
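The "exact pH and micronutrients" idea is, at bottom, closed-loop control. Here is a minimal sketch of a proportional controller holding a hydroponic solution at a target pH; the setpoint, gain, and dosing model are invented for illustration, and real vertical-farm control systems are far more sophisticated.

```python
TARGET_PH = 6.0
GAIN = 0.5            # fraction of the pH error corrected per cycle

def control_step(ph):
    """Dose acid or base proportionally to the current pH error."""
    error = TARGET_PH - ph
    return ph + GAIN * error   # dosing nudges the solution toward target

ph = 7.2                        # solution starts too alkaline
for _ in range(10):             # ten sensor-read/dose cycles
    ph = control_step(ph)

print(round(ph, 3))             # → 6.001 (converged to the setpoint)
```

The same read-compare-correct loop, multiplied across light, temperature, and nutrients, is what lets a vertical farm hold every growing condition near its optimum around the clock.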
Next let’s explore how we can precision-engineer the genetic properties of the plant itself.
CRISPR and Genetically Engineered Foods
What food do we grow?
A fundamental shift is occurring in our relationship with agriculture. We are going from evolution by natural selection (Darwinism) to evolution by human direction.
CRISPR (the cutting-edge gene-editing tool) is providing a pathway for plant breeding that is more predictable, faster, and less expensive than traditional breeding methods.
Rather than our crops being subject to nature’s random, environmental whim, CRISPR unlocks our capability to modify our crops to match the available environment.
Further, using CRISPR we will be able to optimize the nutrient density of our crops, enhancing their value and volume.
CRISPR may also hold the key to eliminating common allergens from crops. As we identify the allergen gene in peanuts, for instance, we can use CRISPR to silence that gene, making the crops we raise safer for and more accessible to a rapidly growing population.
Yet another application is our ability to make plants resistant to infection or more resistant to drought or cold.
Helping to accelerate the impact of CRISPR, the USDA recently announced that genetically engineered crops will not be regulated—providing an opening for entrepreneurs to capitalize on the opportunities for optimization CRISPR enables.
CRISPR applications in agriculture are an opportunity to help a billion people and become a billionaire in the process.
Protecting crops against volatile environments, combating crop diseases and increasing nutrient values, CRISPR is a promising tool to help feed the world’s rising population.
The Alt-Protein/Lab-Grown Meat Revolution
Something like a third of the Earth’s arable land is used for raising livestock—a massive amount of land—and global demand for meat is predicted to double in the coming decade.
Today, we must grow an entire cow—all bones, skin, and internals included—to produce a steak.
Imagine if we could instead start with a single muscle stem cell and grow only the steak, without needing the rest of the cow. Think of it as cellular agriculture.
Imagine returning millions, perhaps billions, of acres of grazing land to the wilderness. This is the promise of lab-grown meats.
Lab-grown meat can also be engineered (using technology like CRISPR) to be packed with nutrients and be the healthiest, most delicious protein possible.
We’re watching this technology develop in real time. Several startups across the globe are already working to bring artificial meats to the food industry.
JUST, Inc. (previously Hampton Creek), run by my friend Josh Tetrick, has been on a mission to build a food system where everyone can get and afford delicious, nutritious food. They started by exploring more than 300,000 plant species around the world to see how they could make food better, and are now investing heavily in stem-cell-grown meats.
Backed by Richard Branson and Bill Gates, Memphis Meats is working on ways to produce real meat from animal cells, rather than whole animals. So far, they have produced beef, chicken, and duck using cultured cells from living animals.
As with vertical farming, transitioning production of our majority protein source to a carefully cultivated environment allows for agriculture to optimize inputs (water, soil, energy, land footprint), nutrients and, importantly, taste.
Vertical farming and cellular agriculture are reinventing how we think about our food supply chain and what food we produce.
The next question to answer is who will be producing the food?
Let’s look back at how farming evolved through history.
Farmers 0.0 (Neolithic Revolution, around 9000 BCE): The transition from hunting and gathering to agriculture gained momentum as humans learned to domesticate plants for food production.
Farmers 1.0 (until around the 19th century): Farmers spent all day in the field performing backbreaking labor, and agriculture accounted for most jobs.
Farmers 2.0 (mid-20th century, Green Revolution): From the invention of the first farm tractor in 1812 through today, transformative mechanical and biochemical technologies (tractors, fertilizer) boosted yields and made the job of farming easier, driving the share of US workers employed in farming down to less than two percent today.
Farmers 3.0: In the near future, farmers will leverage exponential technologies (e.g., AI, networks, sensors, robotics, drones), CRISPR and genetic engineering, and new business models to solve the world’s greatest food challenges and efficiently feed the eight-billion-plus people on Earth.
An important driver of the Farmer 3.0 evolution is the delocalization of agriculture driven by vertical and urban farms. Vertical farms and urban agriculture are empowering a new breed of agriculture entrepreneurs.
Let’s take a look at an innovative incubator in Brooklyn, New York called Square Roots.
Ten shipping-container farms in a Brooklyn parking lot make up the first Square Roots campus. Each 8-foot by 8.5-foot by 20-foot shipping container holds the growing capacity of 2 acres of outdoor farmland and can yield more than 50 pounds of produce each week.
For 13 months, one cohort of next-generation food entrepreneurs takes part in a curriculum with foundations in farming, business, community and leadership.
The urban farming incubator raised a $5.4 million seed funding round in August 2017.
Training a new breed of entrepreneurs to apply exponential technology to growing food is essential to the future of farming.
One of our massive transformative purposes at the Abundance Group is to empower entrepreneurs to generate extraordinary wealth while creating a world of abundance. Vertical farms and cellular agriculture are key elements enabling the next generation of food and agriculture entrepreneurs.
Technology is driving food abundance.
We’re already seeing food become demonetized, as the graph below shows.
From 1960 to 2014, the share of disposable income spent on food in the U.S. fell from 19 percent to under 10 percent, a dramatic decrease from the roughly 40 percent of household income spent on food in 1900.
The dropping percent of per-capita disposable income spent on food. Source: USDA, Economic Research Service, Food Expenditure Series
Ultimately, technology has enabled a massive variety of food at a significantly reduced cost and with fewer resources used for production.
We’re increasingly going to optimize and fortify the food supply chain to achieve more reliable, predictable, and nutritious ways to obtain basic sustenance.
And that means a world with abundant, nutritious, and inexpensive food for every man, woman, and child.
What an extraordinary time to be alive.
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital.
Abundance-Digital is my ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Nejron Photo / Shutterstock.com
Elon Musk Presents His Tunnel Vision to the People of LA
Jack Stewart and Aarian Marshall | Wired
“Now, Musk wants to build this new, 2.1-mile tunnel, near LA’s Sepulveda pass. It’s all part of his broader vision of a sprawling network that could take riders from Sherman Oaks in the north to Long Beach Airport in the south, Santa Monica in the west to Dodger Stadium in the east—without all that troublesome traffic.”
Feel What This Robot Feels Through Tactile Expressions
Evan Ackerman | IEEE Spectrum
“Guy Hoffman’s Human-Robot Collaboration & Companionship (HRC2) Lab at Cornell University is working on a new robot that’s designed to investigate this concept of textural communication, which really hasn’t been explored in robotics all that much. The robot uses a pneumatically powered elastomer skin that can be dynamically textured with either goosebumps or spikes, which should help it communicate more effectively, especially if what it’s trying to communicate is, ‘Don’t touch me!’”
In Virtual Reality, How Much Body Do You Need?
Steph Yin | The New York Times
“In a paper published Tuesday in Scientific Reports, they showed that animating virtual hands and feet alone is enough to make people feel their sense of body drift toward an invisible avatar. Their work fits into a corpus of research on illusory body ownership, which has challenged understandings of perception and contributed to therapies like treating pain for amputees who experience phantom limb.”
How Graphene and Gold Could Help Us Test Drugs and Monitor Cancer
Angela Chen | The Verge
“In today’s study, scientists learned to precisely control the amount of electricity graphene generates by changing how much light they shine on the material. When they grew heart cells on the graphene, they could manipulate the cells too, says study co-author Alex Savtchenko, a physicist at the University of California, San Diego. They could make it beat 1.5 times faster, three times faster, 10 times faster, or whatever they needed.”
Robotic Noses Could Be the Future of Disaster Rescue—If They Can Outsniff Search Dogs
Eleanor Cummins | Popular Science
“While canine units are a tried and fairly true method for identifying people trapped in the wreckage of a disaster, analytical chemists have for years been working in the lab to create a robotic alternative. A synthetic sniffer, they argue, could potentially prove to be just as or even more reliable than a dog, more resilient in the face of external pressures like heat and humidity, and infinitely more portable.”
Image Credit: Sergey Nivens / Shutterstock.com