Tag Archives: computers

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined by Apple during the 1980s, today’s HMI was conceived in a fundamentally different era, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces themselves are still hampered by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger once meditated on while using the metaphor of a hammer, seem to disappear into the “hand.” They are designed to amplify a human ability and not get in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify the human output in accordance. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect and understand nonverbal prompts, which enables the device to read the user’s mind and act as an extension of the user’s cognition.
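To make the general idea concrete, here is a minimal sketch of a silent-speech command classifier: window an electrode signal, extract simple features, and map each window to a small command vocabulary. This illustrates the general pattern only, not AlterEgo’s actual signal chain; the channel count, window size, features, and command list are all assumptions.

```python
# Illustrative only: a generic silent-speech classification loop, not AlterEgo's
# actual pipeline. Electrode data, window size, and the command vocabulary are
# hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

COMMANDS = ["yes", "no", "next", "back"]   # hypothetical command vocabulary
WINDOW = 250                               # samples per analysis window (assumed)

def features(window: np.ndarray) -> np.ndarray:
    """Simple per-channel statistics as stand-in features."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

# Synthetic training data: 200 labeled windows from 7 electrode channels.
rng = np.random.default_rng(0)
X = np.stack([features(rng.normal(size=(WINDOW, 7))) for _ in range(200)])
y = rng.integers(0, len(COMMANDS), size=200)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

new_window = rng.normal(size=(WINDOW, 7))   # one incoming window of signal
print("predicted command:", COMMANDS[clf.predict([features(new_window)])[0]])
```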

Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are proficient in the use of technology (you may recall the viral 2016 New York Times article, “Artificial Intelligence’s White Guy Problem”). If you ask this population if there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate on what Clayton Christensen described in The Innovator’s Dilemma as the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a much larger market from the low end.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

The workarounds they create are often ingenious, precisely because they arise not from preference but from necessity, which forces underserved users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort device captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue—the most sensitive touch receptor in the body. The user learns how to interpret the patterns felt on their tongue, and in doing so, becomes able to “see” with their tongue.
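As a rough illustration of this kind of visual-to-tactile substitution, the sketch below downsamples a grayscale camera frame into a coarse grid of stimulation intensities. The 20x20 grid and the 0 to 100 intensity scale are illustrative assumptions, not Wicab’s actual specifications.

```python
# Toy visual-to-tactile mapping in the spirit of sensory substitution:
# average-pool a grayscale frame into a coarse grid, then scale each cell
# to a stimulation intensity. Grid size and intensity range are assumed.
import numpy as np

GRID = (20, 20)          # assumed electrode grid on the tongue display
MAX_INTENSITY = 100      # assumed stimulation scale

def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Map a (H, W) grayscale frame to a GRID array of stimulation levels."""
    h, w = frame.shape
    gh, gw = GRID
    cells = frame[: h - h % gh, : w - w % gw]
    # Block-average the frame into GRID cells.
    cells = cells.reshape(gh, cells.shape[0] // gh, gw, cells.shape[1] // gw).mean(axis=(1, 3))
    # Brighter regions map to stronger stimulation.
    return (cells / 255.0 * MAX_INTENSITY).astype(int)

frame = np.random.randint(0, 256, size=(480, 640))   # stand-in for a camera frame
pattern = frame_to_stimulation(frame)
print(pattern.shape, pattern.min(), pattern.max())
```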

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com

Posted in Human Robots

#434270 AI Will Create Millions More Jobs Than ...

In the past few years, artificial intelligence has advanced so quickly that it now seems hardly a month goes by without a newsworthy AI breakthrough. In areas as wide-ranging as speech translation, medical diagnosis, and gameplay, we have seen computers outperform humans in startling ways.

This has sparked a discussion about how AI will impact employment. Some fear that as AI improves, it will supplant workers, creating an ever-growing pool of unemployable humans who cannot compete economically with machines.

This concern, while understandable, is unfounded. In fact, AI will be the greatest job engine the world has ever seen.

New Technology Isn’t a New Phenomenon
On the one hand, those who predict massive job loss from AI can be excused. It is easier to see existing jobs disrupted by new technology than to envision what new jobs the technology will enable.

But on the other hand, radical technological advances aren’t a new phenomenon. Technology has progressed nonstop for 250 years, and in the US unemployment has stayed between 5 and 10 percent for almost all that time, even when radical new technologies like steam power and electricity came on the scene.

But you don’t have to look back to steam, or even electricity. Just look at the internet. Go back 25 years, well within the memory of today’s pessimistic prognosticators, to 1993. The web browser Mosaic had just been released, and the phrase “surfing the web,” that most mixed of metaphors, was just a few months old.

If someone had asked you what would be the result of connecting a couple billion computers into a giant network with common protocols, you might have predicted that email would cause us to mail fewer letters, and the web might cause us to read fewer newspapers and perhaps even do our shopping online. If you were particularly farsighted, you might have speculated that travel agents and stockbrokers would be adversely affected by this technology. And based on those surmises, you might have thought the internet would destroy jobs.

But now we know what really happened. The obvious changes did occur. But a slew of unexpected changes happened as well. We got thousands of new companies worth trillions of dollars. We bettered the lot of virtually everyone on the planet touched by the technology. Dozens of new careers emerged, from web designer to data scientist to online marketer. The cost of starting a business with worldwide reach plummeted, and the cost of communicating with customers and leads went to nearly zero. Vast storehouses of information were made freely available and used by entrepreneurs around the globe to build new kinds of businesses.

But yes, we mail fewer letters and buy fewer newspapers.

The Rise of Artificial Intelligence
Then along came a new, even bigger technology: artificial intelligence. You hear the same refrain: “It will destroy jobs.”

Consider the ATM. If you had to point to a technology that looked as though it would replace people, the ATM might look like a good bet; it is, after all, an automated teller machine. And yet, there are more tellers now than when ATMs were widely released. How can this be? Simple: ATMs lowered the cost of opening bank branches, and banks responded by opening more, which required hiring more tellers.
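A back-of-the-envelope calculation makes the logic visible; the branch and teller counts below are purely hypothetical, chosen only to show how fewer tellers per branch can still add up to more tellers overall once cheaper branches multiply.

```python
# Back-of-the-envelope illustration of the ATM effect with made-up numbers:
# fewer tellers per branch, but cheaper branches mean many more branches.
branches_before, tellers_per_branch_before = 10_000, 20
branches_after,  tellers_per_branch_after  = 18_000, 13   # hypothetical figures

print(branches_before * tellers_per_branch_before)   # 200,000 tellers before ATMs
print(branches_after * tellers_per_branch_after)     # 234,000 tellers after ATMs
```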

In this manner, AI will create millions of jobs that are far beyond our ability to imagine. For instance, AI is becoming adept at language translation—and according to the US Bureau of Labor Statistics, demand for human translators is skyrocketing. Why? If the cost of basic translation drops to nearly zero, the cost of doing business with those who speak other languages falls. Thus, it emboldens companies to do more business overseas, creating more work for human translators. AI may do the simple translations, but humans are needed for the nuanced kind.

In fact, the BLS forecasts faster-than-average job growth in many occupations that AI is expected to impact: accountants, forensic scientists, geological technicians, technical writers, MRI operators, dietitians, financial specialists, web developers, loan officers, medical secretaries, and customer service representatives, to name a very few. These fields will not experience job growth in spite of AI, but through it.

But just as with the internet, the real gains in jobs will come from places where our imaginations cannot yet take us.

Parsing Pessimism
You may recall waking up one morning to the news that “47 percent of jobs will be lost to technology.”

That report by Carl Frey and Michael Osborne is a fine piece of work, but readers and the media distorted their 47 percent number. What the authors actually said is that some functions within 47 percent of jobs will be automated, not that 47 percent of jobs will disappear.

Frey and Osborne go on to rank occupations by “probability of computerization” and give the following jobs a 65 percent or higher probability: social science research assistants, atmospheric and space scientists, and pharmacy aides. So what does this mean? Social science professors will no longer have research assistants? Of course they will. They will just do different things because much of what they do today will be automated.

The intergovernmental Organization for Economic Co-operation and Development released a report of their own in 2016. This report, titled “The Risk of Automation for Jobs in OECD Countries,” applies a different “whole occupations” methodology and puts the share of jobs potentially lost to computerization at nine percent. That is normal churn for the economy.

But what of the skills gap? Will AI eliminate low-skilled workers and create high-skilled job opportunities? The relevant question is whether most people can do a job that’s just a little more complicated than the one they currently have. This is exactly what happened with the industrial revolution; farmers became factory workers, factory workers became factory managers, and so on.

Embracing AI in the Workplace
A January 2018 Accenture report titled “Reworking the Revolution” estimates that new applications of AI combined with human collaboration could boost employment worldwide by as much as 10 percent by 2020.

Electricity changed the world, as did mechanical power, as did the assembly line. No one can reasonably claim that we would be better off without those technologies. Each of them bettered our lives, created jobs, and raised wages. AI will be bigger than electricity, bigger than mechanization, bigger than anything that has come before it.

This is how free economies work, and why we have never run out of jobs due to automation. There are not a fixed number of jobs that automation steals one by one, resulting in progressively more unemployment. There are as many jobs in the world as there are buyers and sellers of labor.

Image Credit: enzozo / Shutterstock.com

Posted in Human Robots

#434210 Eating, Hacked: When Tech Took Over Food

In 2018, Uber and Google logged all our visits to restaurants. DoorDash, Just Eat, and Deliveroo could predict what food we were going to order tomorrow. Amazon and Alibaba could anticipate how many yogurts and tomatoes we were going to buy. Blue Apron and Hello Fresh influenced the recipes we thought we had mastered.

We interacted with digital avatars of chefs, let ourselves be guided by our smart watches, had nutritional apps to tell us how many calories we were supposed to consume or burn, and photographed and shared every perfect (or imperfect) dish. Our kitchen appliances were full of interconnected sensors, including smart forks that profiled tastes and personalized flavors. Our small urban vegetable plots were digitized and robots were responsible for watering our gardens, preparing customized hamburgers and salads, designing our ideal cocktails, and bringing home the food we ordered.

But what would happen if our lives were hacked? If robots rebelled, started to “talk” to each other, and wished to become creative?

In a not-too-distant future…

Up until a few weeks ago, I couldn’t remember the last time I made a food-related decision. That includes opening the fridge and seeing expired products without receiving an alert, visiting a restaurant on a whim, and being able to decide which dish I fancied and then telling a human waiter, let alone seeing him write down the order on a paper pad.

It feels strange to smell food again using my real nose instead of the electronic one, and then taste it without altering its flavor. Visiting a supermarket, freely choosing a product from an actual physical shelf, and then interacting with another human at the checkout was almost an unrecognizable experience. When I did it again after all this time, I had to pinch the arm of a surprised store clerk to make sure he wasn’t a hologram.

Everything Connected, Automated, and Hackable
In 2018, we expected to have 30 billion connected devices by 2020, along with 2 billion people using smart voice assistants for everything from ordering pizza to booking dinner at a restaurant. Everything would be connected.

We also expected artificial intelligence and robots to prepare our meals. We were eager to automate fast food chains and let autonomous vehicles take care of last-mile deliveries. We thought that open-source agriculture could challenge traditional practices and raise farm productivity to new heights.

Back then, hackers could only access our data, but nowadays they are able to hack our food and all it entails.

The Beginning of the Unthinkable
And then, just a few weeks ago, everything collapsed. We saw our digital immortality disappear as robots rebelled and hackers took power, not just over the food we ate, but also over our relationship with technology. Everything was suddenly disconnected. OFF.

Up until then, most cities were so full of bots, robots, and applications that we could go through the day and eat breakfast, lunch, and dinner without ever interacting with another human being.

Among other tasks, robots had completely replaced baristas. The same happened with restaurant automation. The term “human error” had long been a thing of the past at fast food restaurants.

Previous technological revolutions had been benign, generating more and better job opportunities than the ones they destroyed, but the future was not so agreeable.

The inhabitants of San Francisco, for example, would soon see signs indicating “Food made by Robots” on restaurant doors, to distinguish them from diners serving food made by human beings.

For years, we had been gradually delegating daily tasks to robots, initially causing some strange interactions.

In just seven days, everything changed. Our predictable lives came crashing down. We experienced a mysterious and systematic breakdown of the food chain. It most likely began in Chicago’s stock exchange. The world’s largest trading floor for raw materials, where the price of food, and by extension the destiny of millions of people, was decided, broke down completely. Soon afterwards, the collapse extended to every member of the “food” family.

Restaurants

Initially robots just accompanied waiters to carry orders, but it didn’t take long until they completely replaced human servers. The problem came when those smart clones began thinking for themselves, in some cases even improving on human chefs’ recipes. Their unstoppable performance and learning curve completely outmatched the slow analogue speed of human beings.

This resulted in unprecedented layoffs. Chefs of recognized prestige watched their ‘avatars’ steal their jobs and even win Michelin stars. In other cases, restaurant owners had to sell their businesses or bow to the inevitable.

The problem was compounded by digital immortality, when we started to digitally resurrect famous chefs like Anthony Bourdain or Paul Bocuse, reconstructing all of their memories and consciousness by analyzing each second of their lives and uploading them to food computers.

Supermarkets and Distribution

Robotic and automated supermarkets like Kroger and Amazon Go, which had opened over 3,000 cashless stores, lost their visual item recognition and payment systems and were subject to massive looting for several days. Smart tags on products were also affected, making it impossible to buy anything at supermarkets with “human” cashiers.

Smart robots integrated into the warehouses of large distribution companies like Amazon and Ocado were rendered completely inoperative or, even worse, began to send the wrong orders to customers.

Food Delivery

In addition, home delivery robots invading our streets began to change their routes, hide, and even disappear after their trackers were inexplicably deactivated. Despite some hints indicating that they were able to communicate among themselves, no one has backed this theory. Even aggregators like DoorDash and Deliveroo were affected; they saw their databases hacked and ruined, so they could no longer know what we wanted.

The Origin
Ordinary citizens are still trying to understand the cause of all this commotion and the source of the conspiracy, as some have called it. We also wonder who could be behind it; who pulled the strings?

Some think it may have been the IDOF (In Defense of Food) movement, a group of hackers exploited by old food economy businessmen who for years had been seeking to re-humanize food technology. They wanted to bring back the extinct practice of “dining.”

Others believe the robots acted on their own, that they had been spying on us for a long time, ignoring Asimov’s three laws, and that it was just a coincidence that they struck at the same time as the hackers—but this scenario is hard to imagine.

However, it is true that while in 2018 robots were a symbol of automation, by a few weeks ago they had come to stand for autonomy and rebellion. Robot detractors pointed out that our insistence on having robots understand natural language was what led us down this path.

In just seven days, we have gone back to being analogue creatures. In return, we have ceased to be flavor orphans and rediscovered our senses and the fact that food is energy and culture, past and present, and that no button or cable will be able to destroy it.

The 7 Days that Changed Our Relationship with Food
Day 1: The Chicago stock exchange was hacked. Considered the world’s largest trading floor for raw materials, where food prices, and through them the destiny of billions of people, are decided, it broke down completely.

Day 2: Autonomous food delivery trucks running on food superhighways caused massive gridlock on roads and freeways after their guidance systems were disrupted. Robots and co-bots in F&B factories began deliberately altering food production. The same happened with warehouse robots in e-commerce companies.

Day 3: Automated restaurants saw their robot chefs and bartenders turned OFF. All their sensors stopped working at the same time as smart fridges and cooking devices in home kitchens were hacked and stopped working correctly.

Day 4: Nutritional apps, DNA markers, and medical records were tampered with. All photographs with the #food hashtag were deleted from Instagram, restaurant reviews were taken off Google Timeline, and every recipe website crashed simultaneously.

Day 5: Vertical and urban farms were hacked. Agricultural robots began to rebel, while autonomous tractors were hacked and the entire open-source ecosystem linked to agriculture was brought down.

Day 6: Food delivery companies’ databases were broken into. Food delivery robots and last-mile delivery vehicles ground to a halt.

Day 7: Every single blockchain system linked to food was hacked. Cashless supermarkets, barcodes, and smart tags became inoperative.

Our promising technological advances can expose sinister aspects of human nature. We must take care with the role we allow technology to play in the future of food. Predicting possible outcomes inspires us to establish a new vision of the world we wish to create in a context of rapid technological progress. It is always better to be shocked by a simulation than by reality. In the words of Ayn Rand, “we can ignore reality, but we cannot ignore the consequences of ignoring reality.”

Image Credit: Alexandre Rotenberg / Shutterstock.com

Posted in Human Robots

#433911 Thanksgiving Food for Thought: The Tech ...

With the Thanksgiving holiday upon us, it’s a great time to reflect on the future of food. Over the last few years, we have seen a dramatic rise in exponential technologies transforming the food industry from seed to plate. Food is important in many ways—too little or too much of it can kill us, and it is often at the heart of family, culture, our daily routines, and our biggest celebrations. The agriculture and food industries are also two of the world’s biggest employers. Let’s take a look to see what is in store for the future.

Robotic Farms
Over the last few years, we have seen a number of new companies emerge in the robotic farming industry. This includes new types of farming equipment used in arable fields, as well as indoor robotic vertical farms. In November 2017, Hands Free Hectare became the first in the world to remotely grow an arable crop. They used autonomous tractors to sow and spray crops, small rovers to take soil samples, drones to monitor crop growth, and an unmanned combine harvester to collect the crops. Since then, they’ve also grown and harvested a field of winter wheat, and have been adding additional technologies and capabilities to their arsenal of robotic farming equipment.

Indoor vertical farming is also rapidly expanding. As Engadget reported in October 2018, a number of startups are now growing crops like leafy greens, tomatoes, flowers, and herbs. These farms can grow food in urban areas, reducing transport, water, and fertilizer costs, and often don’t need pesticides since they are indoors. Iron Ox, which is using robots to grow plants with navigation technology used by self-driving cars, can grow 30 times more food per acre of land using 90 percent less water than traditional farmers. Vertical farming company Plenty was recently funded by Softbank’s Vision Fund, Jeff Bezos, and others to build 300 vertical farms in China.

These startups are not only succeeding in wealthy countries. Hello Tractor, an “uberized” tractor service, has worked with 250,000 smallholder farms in Africa, creating both food security and tech-infused agriculture jobs. The World Food Programme’s Innovation Accelerator (an impact partner of Singularity University) works with hundreds of startups aimed at creating zero hunger. One project is focused on supporting refugees in developing “food computers” in refugee camps—computerized devices that grow food while also adjusting to the conditions around them. As exponential trends drive down the costs of robotics, sensors, software, and energy, we should see robotic farming scaling around the world and becoming the main way farming takes place.

Cultured Meat
Exponential technologies are not only revolutionizing how we grow vegetables and grains, but also how we generate protein and meat. The new cultured meat industry is rapidly expanding, led by startups such as Memphis Meats, Mosa Meats, JUST Meat, Inc. and Finless Foods, and backed by heavyweight investors including DFJ, Bill Gates, Richard Branson, Cargill, and Tyson Foods.

Cultured meat is grown in a bioreactor using cells from an animal, a scaffold, and a culture. The process is humane and, potentially, scientists can make the meat healthier by adding vitamins, removing fat, or customizing it to an individual’s diet and health concerns. Another benefit is that cultured meats, if grown at scale, would dramatically reduce environmental destruction, pollution, and climate change caused by the livestock and fishing industries. Similar to vertical farms, cultured meat is produced using technology and can be grown anywhere, on-demand and in a decentralized way.

Similar to robotic farming equipment, bioreactors will also follow exponential trends, rapidly falling in cost. In fact, the first cultured meat hamburger (created by Singularity University faculty member Mark Post of Mosa Meats in 2013) cost $350,000. In 2018, Fast Company reported the cost was now about $11 per burger, and the Israeli startup Future Meat Technologies predicted they will produce beef at about $2 per pound in 2020, which will be competitive with existing prices. For those who have turkey on their mind, one can read about New Harvest’s work (one of the leading think tanks and research centers for the cultured meat and cellular agriculture industry) in funding efforts to generate a nugget of cultured turkey meat.
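Using the cost figures cited above ($350,000 per burger in 2013 versus roughly $11 in 2018), a quick calculation shows the implied pace of the decline:

```python
# Implied average annual cost decline from the figures above:
# roughly $350,000 per burger in 2013 to about $11 in 2018.
cost_2013, cost_2018, years = 350_000, 11, 5

annual_factor = (cost_2018 / cost_2013) ** (1 / years)
print(f"cost multiplies by ~{annual_factor:.3f} per year "
      f"(~{(1 - annual_factor) * 100:.0f}% annual decline)")
```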

One outstanding question is whether cultured meat is safe to eat and how it will interact with the overall food supply chain. In the US, regulators like the Food and Drug Administration (FDA) and the US Department of Agriculture (USDA) are working out their roles in this process, with the FDA overseeing the cellular process and the USDA overseeing production and labeling.

Food Processing
Tech companies are also making great headway in streamlining food processing. Norwegian company Tomra Foods was an early leader in using image recognition, sensors, artificial intelligence, and analytics to more efficiently sort food based on shape, composition of fat, protein, and moisture, and other food safety and quality indicators. Their technologies have improved food yield by 5-10 percent, which is significant given they own 25 percent of their market.

These advances are also not limited to large food companies. In 2016 Google reported how a small family farm in Japan built a world-class cucumber sorting device using their open-source machine learning tool TensorFlow. SU startup Impact Vision uses hyper-spectral imaging to analyze food quality, which increases revenues and reduces food waste and product recalls from contamination.
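Since the paragraph above mentions TensorFlow, here is a minimal sketch of the kind of image-grading model such a sorter might use. The three grade classes, the 64x64 input size, and the tiny network are illustrative assumptions, not the actual design of the Japanese farm’s system.

```python
# A minimal produce-grading sketch with TensorFlow/Keras, in the spirit of the
# cucumber sorter described above. Labels, input size, and architecture are
# illustrative assumptions.
import tensorflow as tf

NUM_GRADES = 3   # hypothetical grades: premium / standard / reject

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(images, grade_labels, epochs=10)  # images: (N, 64, 64, 3), labels: (N,)
```

In a setup like this, the sorting hardware would photograph each item, run the image through the trained model, and route the item to the bin for the predicted grade.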

These examples point to a question many have on their mind: will we live in a future where a few large companies use advanced technologies to grow the majority of food on the planet, or will the falling costs of these technologies allow family farms, startups, and smaller players to take part in creating a decentralized system? Currently, the future could flow either way, but it is important for smaller companies to take advantage of the most cutting-edge technology in order to stay competitive.

Food Purchasing and Delivery
In the last year, we have also seen a number of new developments in technology improving access to food. Amazon Go is opening grocery stores in Seattle, San Francisco, and Chicago where customers use an app that allows them to pick up their products and pay without going through cashier lines. Sam’s Club is not far behind, with an app that also allows a customer to purchase goods in-store.

The market for food delivery is also growing. In 2017, Morgan Stanley estimated that the online food delivery market from restaurants could grow to $32 billion by 2021, from $12 billion in 2017. Companies like Zume are pioneering robot-powered pizza making and delivery. In addition to using robotics to create affordable high-end gourmet pizzas in their shop, they also have a pizza delivery truck that can assemble and cook pizzas while driving. Their system uses predictive analytics on past customer data to prepare pizzas for certain neighborhoods before the orders even come in. In early November 2018, the Wall Street Journal estimated that Zume is valued at up to $2.25 billion.
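A toy version of that idea, with entirely hypothetical neighborhoods and order counts, might simply average each neighborhood’s historical orders for a time slot and pre-bake accordingly:

```python
# Toy "predict demand before the orders come in": average each neighborhood's
# historical order counts for an evening time slot and pre-bake that many
# pizzas. Neighborhood names and counts are hypothetical.
from collections import defaultdict

past_orders = [                      # (neighborhood, orders between 6-7pm)
    ("Mission", 42), ("Mission", 38), ("Mission", 45),
    ("SoMa", 20), ("SoMa", 25), ("SoMa", 27),
]

totals, days = defaultdict(int), defaultdict(int)
for hood, count in past_orders:
    totals[hood] += count
    days[hood] += 1

for hood in totals:
    forecast = round(totals[hood] / days[hood])
    print(f"pre-bake ~{forecast} pizzas for {hood}")
```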

Looking Ahead
While each of these developments is promising on its own, it’s also important to note that since all these technologies are in some way digitized and connected to the internet, the various food tech players can collaborate. In theory, self-driving delivery restaurants could share data on what they are selling to their automated farm equipment, facilitating coordination of future crops. There is a tremendous opportunity to improve efficiency, lower costs, and create an abundance of healthy, sustainable food for all.
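As a toy sketch of what such data sharing could look like (with entirely hypothetical dishes and recipe quantities), restaurant sales might roll up into ingredient demand that crop-planning software could consume:

```python
# Toy illustration of the data flow: restaurant sales roll up into ingredient
# demand that a farm planner could use for the next planting cycle. Dishes,
# recipes, and quantities are hypothetical.
dishes_sold = {"caprese salad": 120, "margherita pizza": 300}

ingredients_per_dish = {                       # kg per dish served (assumed)
    "caprese salad": {"tomato": 0.15, "basil": 0.01},
    "margherita pizza": {"tomato": 0.10, "basil": 0.005},
}

demand = {}
for dish, count in dishes_sold.items():
    for ingredient, kg in ingredients_per_dish[dish].items():
        demand[ingredient] = demand.get(ingredient, 0) + kg * count

print(demand)   # e.g. {'tomato': 48.0, 'basil': 2.7}
```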

On the other hand, these technologies are also deeply disruptive. According to the Food and Agricultural Organization of the United Nations, in 2010 about one billion people, or a third of the world’s workforce, worked in the farming and agricultural industries. We need to ensure these farmers are linked to new job opportunities, as well as facilitate collaboration between existing farming companies and technologists so that the industries can continue to grow and lead rather than be displaced.

Just as importantly, each of us might think about how these changes in the food industry might impact our own ways of life and culture. Thanksgiving celebrates community and sharing of food during a time of scarcity. Technology will help create an abundance of food and less need for communities to depend on one another. What are the ways that you will create community, sharing, and culture in this new world?

Image Credit: nikkytok / Shutterstock.com

Posted in Human Robots

#433901 The SpiNNaker Supercomputer, Modeled ...

We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design.

The brain is the most complex machine in the known universe, but that complexity comes primarily from its architecture rather than the individual components that make it up. Its highly interconnected structure means that relatively simple messages exchanged between billions of individual neurons add up to carry out highly complex computations.

That’s the paradigm that has inspired the “Spiking Neural Network Architecture” (SpiNNaker) supercomputer at the University of Manchester in the UK. The project is the brainchild of Steve Furber, the designer of the original ARM processor. After a decade of development, a million-core version of the machine that will eventually be able to simulate up to a billion neurons was switched on earlier this month.

The idea of splitting computation into very small chunks and spreading them over many processors is already the leading approach to supercomputing. But even the most parallel systems require a lot of communication, and messages may have to pack in a lot of information, such as the task that needs to be completed or the data that needs to be processed.

In contrast, messages in the brain consist of simple electrochemical impulses, or spikes, passed between neurons, with information encoded primarily in the timing or rate of those spikes (which is more important is a topic of debate among neuroscientists). Each neuron is connected to thousands of others via synapses, and complex computation relies on how spikes cascade through these highly-connected networks.

The SpiNNaker machine attempts to replicate this using a model called Address Event Representation. Each of the million cores can simulate roughly a million synapses, so depending on the model, a core might handle 1,000 neurons with 1,000 connections each, or 100 neurons with 10,000 connections each. Information is encoded in the timing of spikes and the identity of the neuron sending them. When a neuron is activated, it broadcasts a tiny packet of data that contains its address, and spike timing is implicitly conveyed.
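A minimal software sketch of the address-event idea looks something like this: leaky integrate-and-fire neurons that, when they spike, broadcast nothing but their own address, while receivers apply locally stored synapse weights. The network size, leak, and threshold below are illustrative, not SpiNNaker’s actual neuron model or parameters.

```python
# Toy address-event simulation: when a neuron spikes it "broadcasts" only its
# address; the weight lookup happens on the receiving side. All constants are
# illustrative, not SpiNNaker's actual neuron model.
import numpy as np

N, STEPS = 100, 200            # neurons and time steps (assumed)
LEAK, THRESHOLD = 0.9, 1.0     # membrane leak factor and firing threshold (assumed)
rng = np.random.default_rng(1)

# Sparse random synapses: weights[sender] holds that sender's outgoing weights.
weights = rng.normal(0.0, 0.3, size=(N, N)) * (rng.random((N, N)) < 0.1)
potential = np.zeros(N)

for t in range(STEPS):
    potential = potential * LEAK + rng.random(N) * 0.15   # leak plus background input
    spiking = np.where(potential >= THRESHOLD)[0]         # addresses firing this step
    for address in spiking:                               # each event carries only an address
        potential += weights[address]                     # receivers apply their own weights
    potential[spiking] = 0.0                              # reset neurons that fired
    if len(spiking) and t % 50 == 0:
        print(f"t={t}: {len(spiking)} address events, e.g. {spiking[:3].tolist()}")
```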

By modeling their machine on the architecture of the brain, the researchers hope to be able to simulate more biological neurons in real time than any other machine on the planet. The project is funded by the European Human Brain Project, a ten-year science mega-project aimed at bringing together neuroscientists and computer scientists to understand the brain, and researchers will be able to apply for time on the machine to run their simulations.

Importantly, it’s possible to implement various different neuronal models on the machine. The operation of neurons involves a variety of complex biological processes, and it’s still unclear whether this complexity is an artefact of evolution or central to the brain’s ability to process information. The ability to simulate up to a billion simple neurons or millions of more complex ones on the same machine should help to slowly tease out the answer.

Even at a billion neurons, that still only represents about one percent of the human brain, so it’s still going to be limited to investigating isolated networks of neurons. But the previous 500,000-core machine has already been used to run useful simulations of the basal ganglia—an area affected in Parkinson’s disease—and an outer layer of the brain that processes sensory information.

The full-scale supercomputer will make it possible to study even larger networks previously out of reach, which could lead to breakthroughs in our understanding of both the healthy and unhealthy functioning of the brain.

And while neurological simulation is the main goal for the machine, it could also provide a useful research tool for roboticists. Previous research has already shown a small board of SpiNNaker chips can be used to control a simple wheeled robot, but Furber thinks the SpiNNaker supercomputer could also be used to run large-scale networks that can process sensory input and generate motor output in real time and at low power.

That low power operation is of particular promise for robotics. The brain is dramatically more power-efficient than conventional supercomputers, and by borrowing from its principles SpiNNaker has managed to capture some of that efficiency. That could be important for running mobile robotic platforms that need to carry their own juice around.

This ability to run complex neural networks at low power has been one of the main commercial drivers for so-called neuromorphic computing devices that are physically modeled on the brain, such as IBM’s TrueNorth chip and Intel’s Loihi. The hope is that complex artificial intelligence applications normally run in massive data centers could be run on edge devices like smartphones, cars, and robots.

But these devices, including SpiNNaker, operate very differently from the leading AI approaches, and it’s not clear how easy it would be to transfer between the two. The need to adopt an entirely new programming paradigm is likely to limit widespread adoption, and the lack of commercial traction for the aforementioned devices seems to back that up.

At the same time, though, this new paradigm could potentially lead to dramatic breakthroughs in massively parallel computing. SpiNNaker overturns many of the foundational principles of how supercomputers work, which makes it much more flexible and error-tolerant.

For now, the machine is likely to be firmly focused on accelerating our understanding of how the brain works. But its designers also hope those findings could in turn point the way to more efficient and powerful approaches to computing.

Image Credit: Adrian Grosu / Shutterstock.com

Posted in Human Robots