Tag Archives: cars
#433911 Thanksgiving Food for Thought: The Tech ...
With the Thanksgiving holiday upon us, it’s a great time to reflect on the future of food. Over the last few years, we have seen a dramatic rise in exponential technologies transforming the food industry from seed to plate. Food is important in many ways—too little or too much of it can kill us, and it is often at the heart of family, culture, our daily routines, and our biggest celebrations. The agriculture and food industries are also two of the world’s biggest employers. Let’s take a look at what’s in store for the future.
Robotic Farms
Over the last few years, we have seen a number of new companies emerge in the robotic farming industry. This includes new types of farming equipment used in arable fields, as well as indoor robotic vertical farms. In November 2017, Hands Free Hectare became the first in the world to remotely grow an arable crop. They used autonomous tractors to sow and spray crops, small rovers to take soil samples, drones to monitor crop growth, and an unmanned combine harvester to collect the crops. Since then, they’ve also grown and harvested a field of winter wheat, and have been adding additional technologies and capabilities to their arsenal of robotic farming equipment.
Indoor vertical farming is also rapidly expanding. As Engadget reported in October 2018, a number of startups are now growing crops like leafy greens, tomatoes, flowers, and herbs. These farms can grow food in urban areas, reducing transport, water, and fertilizer costs, and often don’t need pesticides since they are indoors. Iron Ox, which is using robots to grow plants with navigation technology used by self-driving cars, can grow 30 times more food per acre of land using 90 percent less water than traditional farmers. Vertical farming company Plenty was recently funded by SoftBank’s Vision Fund, Jeff Bezos, and others to build 300 vertical farms in China.
These startups are not only succeeding in wealthy countries. Hello Tractor, an “uberized” tractor service, has worked with 250,000 smallholder farms in Africa, creating both food security and tech-infused agriculture jobs. The World Food Programme’s Innovation Accelerator (an impact partner of Singularity University) works with hundreds of startups aimed at creating zero hunger. One project is focused on supporting refugees in developing “food computers” in refugee camps—computerized devices that grow food while also adjusting to the conditions around them. As exponential trends drive down the costs of robotics, sensors, software, and energy, we should see robotic farming scaling around the world and becoming the main way farming takes place.
Cultured Meat
Exponential technologies are not only revolutionizing how we grow vegetables and grains, but also how we generate protein and meat. The new cultured meat industry is rapidly expanding, led by startups such as Memphis Meats, Mosa Meats, JUST Meat, Inc. and Finless Foods, and backed by heavyweight investors including DFJ, Bill Gates, Richard Branson, Cargill, and Tyson Foods.
Cultured meat is grown in a bioreactor using cells from an animal, a scaffold, and a culture. The process is humane and, potentially, scientists can make the meat healthier by adding vitamins, removing fat, or customizing it to an individual’s diet and health concerns. Another benefit is that cultured meats, if grown at scale, would dramatically reduce environmental destruction, pollution, and climate change caused by the livestock and fishing industries. Similar to vertical farms, cultured meat is produced using technology and can be grown anywhere, on-demand and in a decentralized way.
Similar to robotic farming equipment, bioreactors will also follow exponential trends, rapidly falling in cost. In fact, the first cultured meat hamburger (created by Singularity University faculty member Mark Post of Mosa Meats in 2013) cost $350,000. In 2018, Fast Company reported the cost was down to about $11 per burger, and the Israeli startup Future Meat Technologies predicted it will produce beef at about $2 per pound in 2020, which would be competitive with existing prices. For those with turkey on their mind, New Harvest (one of the leading think tanks and research centers for the cultured meat and cellular agriculture industry) has funded efforts to grow a nugget of cultured turkey meat.
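For a rough sense of how quickly those costs are falling, here is a back-of-the-envelope calculation using only the per-burger figures cited above (treating the two prices as directly comparable, which glosses over differences in scale and what each number includes):

```python
# Rough estimate of the annual cost decline implied by the figures above.
# Assumes the $350,000 (2013) and $11 (2018) per-burger figures are directly
# comparable, which ignores differences in scale and what each number includes.

cost_2013 = 350_000  # first cultured beef burger, 2013 (USD)
cost_2018 = 11       # reported per-burger cost, 2018 (USD)
years = 2018 - 2013

annual_decline = 1 - (cost_2018 / cost_2013) ** (1 / years)
print(f"Implied average cost decline: {annual_decline:.0%} per year")
# -> roughly 87% per year, i.e. costs fell by nearly an order of magnitude annually
```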
One outstanding question is whether cultured meat is safe to eat and how it will interact with the overall food supply chain. In the US, regulators like the Food and Drug Administration (FDA) and the US Department of Agriculture (USDA) are working out their roles in this process, with the FDA overseeing cell collection and growth and the USDA overseeing production and labeling.
Food Processing
Tech companies are also making great headway in streamlining food processing. Norwegian company Tomra Foods was an early leader in using image recognition, sensors, artificial intelligence, and analytics to more efficiently sort food based on shape; fat, protein, and moisture composition; and other food safety and quality indicators. Its technologies have improved food yield by 5-10 percent, which is significant given that the company holds 25 percent of its market.
These advances are not limited to large food companies, either. In 2016, Google reported how a small family farm in Japan built a world-class cucumber-sorting device using its open-source machine learning tool TensorFlow. SU startup Impact Vision uses hyperspectral imaging to analyze food quality, which increases revenues and reduces food waste and product recalls from contamination.
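Google didn’t publish the farm’s full system, but a minimal sketch of the general approach, training a small image classifier with TensorFlow’s Keras API on photos sorted into per-grade folders, might look like the following. The folder layout, image size, and network are illustrative assumptions, not details of the actual cucumber sorter.

```python
# Minimal sketch: train a small CNN to sort produce photos into quality grades.
# Assumes a directory of labeled images, e.g. cucumbers/grade_a, cucumbers/grade_b, ...
import tensorflow as tf

IMG_SIZE = (96, 96)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cucumbers/", image_size=IMG_SIZE, batch_size=32)

num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes),  # one output per quality grade
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```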
These examples point to a question many have on their minds: will we live in a future where a few large companies use advanced technologies to grow the majority of the food on the planet, or will the falling costs of these technologies allow family farms, startups, and smaller players to take part in creating a decentralized system? The future could go either way, but it is important for smaller companies to take advantage of the most cutting-edge technology in order to stay competitive.
Food Purchasing and Delivery
In the last year, we have also seen a number of new developments in technology improving access to food. Amazon is opening Amazon Go grocery stores in Seattle, San Francisco, and Chicago, where customers use an app that lets them pick up their products and pay without going through cashier lines. Sam’s Club is not far behind, with an app that also lets customers purchase goods in-store.
The market for food delivery is also growing. In 2017, Morgan Stanley estimated that the online food delivery market from restaurants could grow to $32 billion by 2021, from $12 billion in 2017. Companies like Zume are pioneering robot-powered pizza making and delivery. In addition to using robotics to create affordable, high-end gourmet pizzas in its shop, Zume has a pizza delivery truck that can assemble and cook pizzas while driving. Its system uses predictive analytics on past customer data to prepare pizzas for certain neighborhoods before the orders even come in. In early November 2018, the Wall Street Journal reported that Zume was valued at up to $2.25 billion.
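Zume hasn’t published its forecasting models, but the basic idea of preparing food ahead of demand can be shown with a toy example: average historical order counts per neighborhood and hour, then flag the time slots worth pre-baking for. All of the numbers, place names, and the threshold below are invented for illustration.

```python
# Toy illustration of demand-driven pre-preparation: average historical order
# counts per (neighborhood, hour) and flag slots worth pre-baking for.
# All figures and place names are made up for the example.
from collections import defaultdict
from statistics import mean

# past orders as (neighborhood, hour_of_day, order_count) tuples
past_orders = [
    ("downtown", 18, 42), ("downtown", 18, 38), ("downtown", 18, 45),
    ("campus", 21, 30), ("campus", 21, 27), ("suburb", 18, 6),
]

history = defaultdict(list)
for neighborhood, hour, count in past_orders:
    history[(neighborhood, hour)].append(count)

PREBAKE_THRESHOLD = 20  # pre-assemble pizzas when expected demand exceeds this

for (neighborhood, hour), counts in sorted(history.items()):
    expected = mean(counts)
    if expected > PREBAKE_THRESHOLD:
        print(f"{neighborhood} @ {hour}:00 -> pre-bake ~{expected:.0f} pizzas")
```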
Looking Ahead
While each of these developments is promising on its own, it’s also important to note that because all these technologies are in some way digitized and connected to the internet, the various food tech players can collaborate. In theory, self-driving restaurant delivery vehicles could share data on what they are selling with automated farm equipment, facilitating coordination of future crops. There is a tremendous opportunity to improve efficiency, lower costs, and create an abundance of healthy, sustainable food for all.
On the other hand, these technologies are also deeply disruptive. According to the Food and Agriculture Organization of the United Nations, in 2010 about one billion people, or a third of the world’s workforce, worked in farming and agriculture. We need to ensure these farmers are linked to new job opportunities, and facilitate collaboration between existing farming companies and technologists, so that the industries can continue to grow and lead rather than be displaced.
Just as importantly, each of us might think about how these changes in the food industry might impact our own ways of life and culture. Thanksgiving celebrates community and sharing of food during a time of scarcity. Technology will help create an abundance of food and less need for communities to depend on one another. What are the ways that you will create community, sharing, and culture in this new world?
Image Credit: nikkytok / Shutterstock.com
#433901 The SpiNNaker Supercomputer, Modeled ...
We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design.
The brain is the most complex machine in the known universe, but that complexity comes primarily from its architecture rather than the individual components that make it up. Its highly interconnected structure means that relatively simple messages exchanged between billions of individual neurons add up to carry out highly complex computations.
That’s the paradigm that has inspired the “Spiking Neural Network Architecture” (SpiNNaker) supercomputer at the University of Manchester in the UK. The project is the brainchild of Steve Furber, the designer of the original ARM processor. After a decade of development, a million-core version of the machine, which will eventually be able to simulate up to a billion neurons, was switched on earlier this month.
The idea of splitting computation into very small chunks and spreading them over many processors is already the leading approach to supercomputing. But even the most parallel systems require a lot of communication, and messages may have to pack in a lot of information, such as the task that needs to be completed or the data that needs to be processed.
In contrast, messages in the brain consist of simple electrochemical impulses, or spikes, passed between neurons, with information encoded primarily in the timing or rate of those spikes (which is more important is a topic of debate among neuroscientists). Each neuron is connected to thousands of others via synapses, and complex computation relies on how spikes cascade through these highly-connected networks.
The SpiNNaker machine attempts to replicate this using a model called Address Event Representation. Each of the million cores can simulate roughly a million synapses, so depending on the model, a core might handle 1,000 neurons with 1,000 connections each, or 100 neurons with 10,000 connections each. Information is encoded in the timing of spikes and the identity of the neuron sending them. When a neuron is activated, it broadcasts a tiny packet of data that contains its address, and spike timing is conveyed implicitly.
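As a toy illustration of the idea (not SpiNNaker’s actual packet format or routing tables), a spike can be modeled as nothing more than the sending neuron’s address, with each receiver keeping a table of which of its neurons that address connects to and with what weight:

```python
# Toy address-event representation: a spike is nothing but the address of the
# neuron that fired; receivers look up their own synapse tables to react.
# Connectivity, weights, and threshold are invented for illustration.

# synapse table: presynaptic neuron address -> list of (target address, weight)
synapses = {
    0: [(2, 0.6), (3, 0.5)],
    1: [(3, 0.7)],
}

potentials = {2: 0.0, 3: 0.0}  # membrane potential of each target neuron
THRESHOLD = 1.0

def deliver_spike(source_address):
    """Handle an incoming spike packet, which carries only the sender's address."""
    fired = []
    for target, weight in synapses.get(source_address, []):
        potentials[target] += weight
        if potentials[target] >= THRESHOLD:
            potentials[target] = 0.0   # reset after firing
            fired.append(target)       # this neuron now broadcasts its own address
    return fired

# Two spikes arriving close together push neuron 3 over threshold.
print(deliver_spike(0))  # -> []
print(deliver_spike(1))  # -> [3]
```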
By modeling their machine on the architecture of the brain, the researchers hope to be able to simulate more biological neurons in real time than any other machine on the planet. The project is funded by the European Human Brain Project, a ten-year science mega-project aimed at bringing together neuroscientists and computer scientists to understand the brain, and researchers will be able to apply for time on the machine to run their simulations.
Importantly, it’s possible to implement various different neuronal models on the machine. The operation of neurons involves a variety of complex biological processes, and it’s still unclear whether this complexity is an artefact of evolution or central to the brain’s ability to process information. The ability to simulate up to a billion simple neurons or millions of more complex ones on the same machine should help to slowly tease out the answer.
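As an example of what a simple neuronal model looks like in practice, the leaky integrate-and-fire neuron, one of the most widely used, can be written in a few lines. The parameters below are generic textbook-style values, not anything specific to SpiNNaker:

```python
# Leaky integrate-and-fire neuron: the membrane potential decays toward rest,
# integrates incoming current, and emits a spike when it crosses a threshold.
# Parameter values are generic illustrative choices.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # leak toward rest, plus injected current
        v += (-(v - v_rest) + current) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step)  # record spike time
            v = v_reset               # reset after firing
    return spike_times

# A constant drive strong enough to produce a regular spike train.
print(simulate_lif([1.5] * 200))
```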
Even at a billion neurons, that still only represents about one percent of the human brain, so the machine will be limited to investigating isolated networks of neurons. But the previous 500,000-core machine has already been used to run useful simulations of the basal ganglia—an area affected in Parkinson’s disease—and an outer layer of the brain that processes sensory information.
The full-scale supercomputer will make it possible to study even larger networks previously out of reach, which could lead to breakthroughs in our understanding of both the healthy and unhealthy functioning of the brain.
And while neurological simulation is the main goal for the machine, it could also provide a useful research tool for roboticists. Previous research has already shown a small board of SpiNNaker chips can be used to control a simple wheeled robot, but Furber thinks the SpiNNaker supercomputer could also be used to run large-scale networks that can process sensory input and generate motor output in real time and at low power.
That low power operation is of particular promise for robotics. The brain is dramatically more power-efficient than conventional supercomputers, and by borrowing from its principles SpiNNaker has managed to capture some of that efficiency. That could be important for running mobile robotic platforms that need to carry their own juice around.
This ability to run complex neural networks at low power has been one of the main commercial drivers for so-called neuromorphic computing devices that are physically modeled on the brain, such as IBM’s TrueNorth chip and Intel’s Loihi. The hope is that complex artificial intelligence applications normally run in massive data centers could be run on edge devices like smartphones, cars, and robots.
But these devices, including SpiNNaker, operate very differently from the leading AI approaches, and it’s not clear how easy it would be to transfer between the two. The need to adopt an entirely new programming paradigm is likely to limit widespread adoption, and the lack of commercial traction for the aforementioned devices seems to back that up.
At the same time, though, this new paradigm could potentially lead to dramatic breakthroughs in massively parallel computing. SpiNNaker overturns many of the foundational principles of how supercomputers work, which makes it much more flexible and error-tolerant.
For now, the machine is likely to be firmly focused on accelerating our understanding of how the brain works. But its designers also hope those findings could in turn point the way to more efficient and powerful approaches to computing.
Image Credit: Adrian Grosu / Shutterstock.com
#433852 How Do We Teach Autonomous Cars To Drive ...
Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.
Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.
What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?
Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.
At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.
Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
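The article doesn’t specify the network architecture, but the labeling task described (assigning every pixel of a camera image to classes like tree, sky, open path, or obstacle) is typically framed as semantic segmentation. Below is a minimal sketch of such a model using TensorFlow’s Keras API; the class list, image size, and layer choices are illustrative assumptions rather than details of the authors’ system.

```python
# Minimal sketch: a tiny fully convolutional network that labels every pixel of
# an off-road camera image as tree, sky, open path, or obstacle.
# Class list, input size, and architecture are illustrative assumptions.
import tensorflow as tf

CLASSES = ["tree", "sky", "open_path", "obstacle"]
H, W = 128, 128

inputs = tf.keras.Input(shape=(H, W, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)                      # downsample
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D()(x)                      # back to full resolution
outputs = tf.keras.layers.Conv2D(len(CLASSES), 1, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training would pair simulator images with per-pixel label maps, e.g.:
# model.fit(simulated_images, simulated_label_maps, epochs=20)
```

In practice the network would be much deeper, and the per-pixel labels would come from the simulator’s ground-truth scene information rather than hand annotation.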
Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.
The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
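To give a feel for how a virtual sensor produces data, here is a deliberately simplified lidar model: rays are marched out from the sensor over a synthetic terrain height function, and the distance at which each ray meets the ground is recorded. The terrain shape, sensor height, and beam angles are invented for illustration and are far simpler than what the actual simulator does.

```python
# Greatly simplified virtual lidar: march rays from the sensor over a synthetic
# terrain height function and record the range at which each ray meets the ground.
# Terrain shape, sensor height, and beam angles are invented for illustration.
import math

def terrain_height(x):
    """Synthetic rolling terrain profile (meters)."""
    return 0.3 * math.sin(0.5 * x) + 0.1 * math.sin(2.3 * x)

def lidar_scan(sensor_height=1.5, angles_deg=range(-45, -9, 5), max_range=30.0):
    ranges = []
    for angle in angles_deg:
        theta = math.radians(angle)      # negative = pointing downward
        r = 0.0
        while r < max_range:
            r += 0.05                    # step along the beam
            x = r * math.cos(theta)
            z = sensor_height + r * math.sin(theta)
            if z <= terrain_height(x):   # beam has hit the ground
                ranges.append(round(r, 2))
                break
        else:
            ranges.append(None)          # no return within range
    return ranges

print(lidar_scan())
```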
Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.
We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.
A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and is equipped with sensors and computers that let it navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.
The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
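One simple way to turn lidar returns like these into a ground-roughness estimate is to bin the points into small grid cells and measure how much the heights vary within each cell. The sketch below is an illustrative approach with made-up numbers, not the Halo Project’s actual processing pipeline.

```python
# Sketch: estimate ground roughness from lidar points by binning (x, y, z)
# returns into small grid cells and taking the spread of heights in each cell.
# Cell size and the standard-deviation measure are illustrative choices.
from collections import defaultdict
from statistics import pstdev

CELL = 0.5  # grid cell size in meters

def roughness_map(points):
    """points: iterable of (x, y, z) lidar returns in vehicle-local meters."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // CELL), int(y // CELL))].append(z)
    # Roughness per cell: standard deviation of heights (0 for a single return).
    return {cell: (pstdev(zs) if len(zs) > 1 else 0.0)
            for cell, zs in cells.items()}

# Tiny synthetic example: a smooth patch and a rutted one.
points = [(0.1, 0.1, 0.02), (0.2, 0.3, 0.03), (0.3, 0.2, 0.02),
          (1.1, 0.1, 0.05), (1.2, 0.3, 0.35), (1.3, 0.2, -0.20)]
for cell, sigma in sorted(roughness_map(points).items()):
    print(cell, f"roughness ~ {sigma:.2f} m")
```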
We’ve seen some exciting early results from our research. For example, preliminary results show that machine learning algorithms trained in simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.
Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Photo provided for The Conversation by Matthew Goudin / CC BY ND