Tag Archives: end

#431603 What We Can Learn From the Second Life ...

For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
Google began selling Google Glass, the original smart glasses, in 2013 for $1,500, offering prototypes to around 8,000 early adopters. Users could control the glasses with a touchpad or with voice commands activated by tilting the head back. Audio relay—as with several wearable products—is via bone conduction, which transmits sound by vibrating the bones of the user’s skull. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, regardless of whether it’s because we’re not used to them, or if it’s a more permanent feature, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly pulled from sale in 2015, with an ominous farewell posted on Google’s website: “Thanks for exploring with us.” Reminding Glass users that they had always been referred to as “explorers”—beta testers, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years the consumer version lay dormant, and now commercially available, Google Glass relaunched in earnest in July 2017 as the Enterprise Edition. The difference this time is the specific audience: factory workers who need hands-free computing because their hands are busy with the job itself.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine surgeons receiving real-time information during operations, or being augmented by an artificial intelligence that offers additional diagnostic information or questions in response to a patient’s symptoms. The quest is on to develop a healthcare AI that can provide recommendations in response to natural-language queries. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
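The loop that Lens-style annotation implies is simple to sketch: grab a camera frame, run it through an image-recognition model, and relay the result to the wearer as audio. The sketch below is purely illustrative; the `ImageLabeler`, `translate_text`, `speak`, and `camera` helpers are hypothetical stand-ins, not Google APIs.

```python
# Minimal sketch of a "real-time annotation" loop of the kind described above.
# ImageLabeler, translate_text, speak, and the camera object are hypothetical
# placeholders for an image-recognition model, a translation service, a
# bone-conduction audio channel, and a head-mounted camera.
import time

class ImageLabeler:
    """Stand-in for a neural network that returns (label, confidence) pairs."""
    def label(self, frame):
        return [("storefront sign", 0.92), ("text: 'Boulangerie'", 0.88)]

def translate_text(text, target="en"):
    """Stand-in for a translation call; returns the text unchanged here."""
    return text

def speak(message):
    """Stand-in for audio relayed to the wearer via bone conduction."""
    print(f"[audio] {message}")

def annotate_world(camera, labeler, interval=1.0):
    """Grab frames, label them, and narrate confident results to the user."""
    while True:
        frame = camera.capture()
        for label, confidence in labeler.label(frame):
            if confidence > 0.8:          # only surface confident results
                speak(translate_text(label))
        time.sleep(interval)              # roughly one pass per second
```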
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com Continue reading

Posted in Human Robots

#431589 Experiments in robotics could help ...

Amazon's launch in Australia today is likely to put fresh scrutiny on the speed of Australia Post's own deliveries. To that end, Australia Post is trialling the use of robots to deliver parcels more promptly. Continue reading

Posted in Human Robots

#431412 3 Dangerous Ideas From Ray Kurzweil

Recently, I interviewed my friend Ray Kurzweil at the Googleplex for a 90-minute webinar on disruptive and dangerous ideas, a prelude to my fireside chat with Ray at Abundance 360 this January.

Ray is my friend and cofounder and chancellor of Singularity University. He is also an XPRIZE trustee, a director of engineering at Google, and one of the best predictors of our exponential future.
It’s my pleasure to share with you three compelling ideas that came from our conversation.
1. The nation-state will soon be irrelevant.
Historically, we humans don’t like change. We like waking up in the morning and knowing that the world is the same as the night before.
That’s one reason why government institutions exist: to stabilize society.
But how will this change in 20 or 30 years? What role will stabilizing institutions play in a world of continuous, accelerating change?
“Institutions stick around, but they change their role in our lives,” Ray explained. “They already have. The nation-state is not as profound as it was. Religion used to direct every aspect of your life, minute to minute. It’s still important in some ways, but it’s much less important, much less pervasive. [It] plays a much smaller role in most people’s lives than it did, and the same is true for governments.”
Ray continues: “We are fantastically interconnected already. Nation-states are not islands anymore. So we’re already much more of a global community. The generation growing up today really feels like world citizens much more than ever before, because they’re talking to people all over the world, and it’s not a novelty.”
I’ve previously shared my belief that national borders have become extremely porous, with ideas, people, capital, and technology rapidly flowing between nations. In decades past, your cultural identity was tied to your birthplace. In the decades ahead, your identity will be more a function of many other external factors. If you love space, you’ll be connected with fellow space cadets around the globe more than you’ll be tied to someone born next door.
2. We’ll hit longevity escape velocity before we realize we’ve hit it.
Ray and I share a passion for extending the healthy human lifespan.
I frequently discuss Ray’s concept of “longevity escape velocity”—the point at which, for every year that you’re alive, science is able to extend your life for more than a year.
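A toy calculation makes the threshold concrete: if more than one year of remaining life expectancy is added per calendar year lived, remaining years stop shrinking. The gain-per-year figures below are invented purely for illustration.

```python
# Toy illustration of "longevity escape velocity": if research adds more than one
# year of remaining life expectancy per calendar year lived, remaining years never
# run out. The gain-per-year numbers are invented for illustration only.

def remaining_years(initial_expectancy, gain_per_year, horizon=30):
    remaining = initial_expectancy
    trajectory = []
    for _ in range(horizon):
        remaining -= 1                # one calendar year passes
        remaining += gain_per_year    # science adds this much expectancy that year
        trajectory.append(round(remaining, 1))
    return trajectory

print(remaining_years(20, 0.8)[:5])   # below escape velocity: 19.8, 19.6, ... (shrinking)
print(remaining_years(20, 1.2)[:5])   # above escape velocity: 20.2, 20.4, ... (growing)
```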
Scientists are continually extending the human lifespan, helping us cure heart disease, cancer, and eventually, neurodegenerative disease. This will keep accelerating as technology improves.
During my discussion with Ray, I asked him when he expects we’ll reach “escape velocity…”
His answer? “I predict it’s likely just another 10 to 12 years before the general public will hit longevity escape velocity.”
“At that point, biotechnology is going to have taken over medicine,” Ray added. “The next decade is going to be a profound revolution.”
From there, Ray predicts that nanorobots will “basically finish the job of the immune system,” with the ability to seek and destroy cancerous cells and repair damaged organs.
As we head into this sci-fi-like future, your most important job for the next 15 years is to stay alive. “Wear your seatbelt until we get the self-driving cars going,” Ray jokes.
The implications for society will be profound. While the scarcity-minded in government will react by saying, “Social Security will be destroyed,” the more abundance-minded will realize that extending a person’s productive earning lifespan from 65 to 75 or 85 years would be a massive boon to GDP.
3. Technology will help us define and actualize human freedoms.
The third dangerous idea from my conversation with Ray is about how technology will enhance our humanity, not detract from it.
You may have heard critics complain that technology is making us less human and increasingly disconnected.
Ray and I share a slightly different viewpoint: that technology enables us to tap into the very essence of what it means to be human.
“I don’t think humans even have to be biological,” explained Ray. “I think humans are the species that changes who we are.”
Ray argues that this began when humans developed the earliest technologies—fire and stone tools. These tools gave people new capabilities and became extensions of our physical bodies.
At its base level, technology is the means by which we change our environment and change ourselves. This will continue, even as the technologies themselves evolve.
“People say, ‘Well, do I really want to become part machine?’ You’re not even going to notice it,” Ray says, “because it’s going to be a sensible thing to do at each point.”
Today, we take medicine to fight disease and maintain good health and would likely consider it irresponsible if someone refused to take a proven, life-saving medicine.
In the future, this will still happen—except the medicine might contain nanobots that target disease, or that also improve your memory so you can recall things more easily.
And because this new medicine works so well for so many, public perception will change. Eventually, it will become the norm… as ubiquitous as penicillin and ibuprofen are today.
In this way, ingesting nanorobots, uploading your brain to the cloud, and using devices like smart contact lenses can help humans become, well, better at being human.
Ray sums it up: “We are the species that changes who we are to become smarter and more profound, more beautiful, more creative, more musical, funnier, sexier.”
Speaking of sexuality and beauty, Ray also sees technology expanding these concepts. “In virtual reality, you can be someone else. Right now, actually changing your gender in real reality is a pretty significant, profound process, but you could do it in virtual reality much more easily and you can be someone else. A couple could become each other and discover their relationship from the other’s perspective.”
In the 2030s, when Ray predicts sensor-laden nanorobots will be able to go inside the nervous system, virtual or augmented reality will become exceptionally realistic, enabling us to “be someone else and have other kinds of experiences.”
Why Dangerous Ideas Matter
Why is it so important to discuss dangerous ideas?
I often say that the day before something is a breakthrough, it’s a crazy idea.
By consuming and considering a steady diet of “crazy ideas,” you train yourself to think bigger and bolder, a critical requirement for making an impact.
As humans, we are linear and scarcity-minded.
As entrepreneurs, we must think exponentially and abundantly.
At the end of the day, the formula for a true breakthrough is equal to “having a crazy idea” you believe in, plus the passion to pursue that idea against all naysayers and obstacles.
Image Credit: Tithi Luadthong / Shutterstock.com Continue reading

Posted in Human Robots

#431385 Here’s How to Get to Conscious ...

“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Contrary to what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, a young programmer tasked with evaluating machine consciousness. Rather, it’s his target Ava, a breathtaking humanoid AI with a seemingly childlike naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance on games such as DOTA and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
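To make “multi-layered” concrete, here is a bare-bones forward pass through a small stack of layers in NumPy. The layer sizes and random weights are arbitrary; real networks like AlphaGo’s are vastly larger and trained on data rather than randomly initialized.

```python
# A bare-bones multi-layer ("deep") network forward pass in NumPy.
# Layer sizes and weights are arbitrary; this only illustrates the idea of
# information passing through stacked layers of simple units.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 4]            # input, two hidden layers, output

weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)         # ReLU non-linearity between layers
    return x @ weights[-1]               # final layer left linear

output = forward(rng.normal(size=64))
print(output.shape)                      # (4,)
```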
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if it moves toward the right response (or result), give a reward; otherwise ask it to try again.
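The dog-training analogy maps directly onto a reward-driven update rule. Below is a minimal sketch of an epsilon-greedy learner with made-up reward probabilities; it illustrates the idea of learning from reward, not any production reinforcement-learning system.

```python
# Minimal reward-driven learning loop: try an action, get a reward if it was the
# "right response," and nudge the value estimate for that action accordingly.
# The reward probabilities are made up for illustration.
import random

reward_prob = {"sit": 0.9, "bark": 0.2, "roll": 0.5}   # hidden "environment"
values = {action: 0.0 for action in reward_prob}       # learned estimates
epsilon, alpha = 0.1, 0.05                              # exploration rate, learning rate

random.seed(0)
for _ in range(5000):
    if random.random() < epsilon:                       # occasionally explore
        action = random.choice(list(values))
    else:                                               # otherwise exploit the best guess
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    values[action] += alpha * (reward - values[action]) # move estimate toward observed reward

print(values)   # "sit" ends up with the highest estimated value
```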
In this sense, translating the architecture of human consciousness to machines seems like an obvious route to artificial consciousness. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, the author of Artificial Intelligence: A Modern Approach in a 2015 interview with Science.
Multilayered consciousness
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions:” C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in a sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
Like The Hunger Games, what we call “conscious” is whatever representation, at one point in time, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and are kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
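That description translates into a simple architectural sketch: encapsulated modules post signals into a shared workspace, the highest-priority signal wins access, and the winner is broadcast to every module. The module names and priorities below are invented to mirror the low-fuel-light example, not drawn from any real vehicle software.

```python
# Sketch of a C1-style "global workspace": encapsulated modules post signals,
# the strongest signal wins the workspace, and the winner is broadcast to all
# modules. Names and priorities are invented to mirror the fuel-light example.

class Workspace:
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def step(self):
        signals = [s for m in self.modules for s in m.emit()]
        if not signals:
            return
        winner = max(signals, key=lambda s: s["priority"])   # competition for access
        for module in self.modules:                          # global broadcast
            module.receive(winner)

class FuelSensor:
    def emit(self):
        return [{"source": "fuel", "content": "tank low", "priority": 0.9}]
    def receive(self, signal):
        pass

class Navigator:
    def emit(self):
        return []
    def receive(self, signal):
        if signal["content"] == "tank low":
            print("Navigator: rerouting to nearest gas station")

ws = Workspace()
ws.register(FuelSensor())
ws.register(Navigator())
ws.step()   # -> Navigator: rerouting to nearest gas station
```

The point of the sketch is the broadcast step: once the fuel signal wins the workspace, every other module can act on it, rather than the information staying trapped in the sensor.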
Meta-cognition
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition:” reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
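In code, the simplest version of that idea is a decision procedure that returns an answer together with an estimated probability of being correct, and flags low-confidence answers for further checking. The “classifier” below is a hand-written stand-in for illustration only.

```python
# Sketch of C2-style self-monitoring: the system reports not just a decision
# but a confidence in that decision, and flags low-confidence cases for review.
# The "classifier" is a hand-written stand-in, not a real system.

def classify_with_confidence(fuel_level):
    """Decide whether to refuel, and estimate how sure we are."""
    if fuel_level < 0.1:
        return "refuel now", 0.95
    if fuel_level < 0.3:
        return "refuel soon", 0.6
    return "keep driving", 0.9

def act(fuel_level, confidence_threshold=0.7):
    decision, confidence = classify_with_confidence(fuel_level)
    if confidence < confidence_threshold:       # meta-cognitive check
        return f"uncertain about '{decision}', gathering more information"
    return decision

print(act(0.05))   # refuel now
print(act(0.2))    # uncertain about 'refuel soon', gathering more information
```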
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available, and because it is monitored in a way that amounts to the machine looking at itself, the car would care about the low-gas light and behave as humans do: lowering fuel consumption and finding a gas station.
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few cases where a self-monitoring system has been implemented—either within the structure of the algorithm or as a separate network—the AI has generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
Towards conscious machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions similar to humans.
Thanks to C1, it would be able to use the information it has flexibly, and because of C2 it would know the limits of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not merely appear so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Image Credit: agsandrew / Shutterstock.com Continue reading

Posted in Human Robots

#431377 The Farms of the Future Will Be ...

Swarms of drones buzz overhead, while robotic vehicles crawl across the landscape. Orbiting satellites snap high-resolution images of the scene far below. Not one human being can be seen in the pre-dawn glow spreading across the land.
This isn’t some post-apocalyptic vision of the future à la The Terminator. This is a snapshot of the farm of the future. Every phase of the operation—from seed to harvest—may someday be automated, without the need to ever get one’s fingernails dirty.
In fact, it’s science fiction already being engineered into reality. Today, robots empowered with artificial intelligence can zap weeds with preternatural precision, while autonomous tractors move with tireless efficiency across the farmland. Satellites can assess crop health from outer space, providing gobs of data to help produce the sort of business intelligence once accessible only to Fortune 500 companies.
“Precision agriculture is on the brink of a new phase of development involving smart machines that can operate by themselves, which will allow production agriculture to become significantly more efficient. Precision agriculture is becoming robotic agriculture,” said professor Simon Blackmore last year during a conference in Asia on the latest developments in robotic agriculture. Blackmore is head of engineering at Harper Adams University and head of the National Centre for Precision Farming in the UK.
It’s Blackmore’s university that recently showcased what may someday be possible. The project, dubbed Hands Free Hectare and led by researchers from Harper Adams and private industry, farmed one hectare (about 2.5 acres) of spring barley without one person ever setting foot in the field.
The team repurposed, rewired, and roboticized farm equipment ranging from a Japanese tractor to a 25-year-old combine. Drones served as scouts, surveying the operation and collecting samples to help the team monitor the progress of the barley. At the end of the season, the robo farmers harvested about 4.5 tons of barley, with the project carrying a price tag of £200,000.

“This project aimed to prove that there’s no technological reason why a field can’t be farmed without humans working the land directly now, and we’ve done that,” said Martin Abell, mechatronics researcher for Precision Decisions, which partnered with Harper Adams, in a press release.
I, Robot Farmer
The Harper Adams experiment is the latest example of how machines are disrupting the agricultural industry. Around the same time that the Hands Free Hectare combine was harvesting barley, Deere & Company announced it would acquire a startup called Blue River Technology for a reported $305 million.
Blue River has developed a “see-and-spray” system that combines computer vision and artificial intelligence to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it can eliminate 90 percent of the chemicals used in conventional agriculture.
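The decision loop behind such a system is easy to sketch: classify each detected plant, then route it to the fertilizer or herbicide nozzle. The classifier and nozzle functions below are hypothetical placeholders, not Blue River’s actual interfaces.

```python
# Sketch of a "see-and-spray" decision loop: classify each plant seen by the
# camera and choose a nozzle accordingly. The classifier and nozzle functions
# are hypothetical placeholders for illustration.

def classify_plant(image_patch):
    """Stand-in for a computer-vision model returning 'crop' or 'weed'."""
    return "weed" if image_patch.get("leaf_shape") == "jagged" else "crop"

def spray(plant, nozzle):
    print(f"{nozzle} applied to {plant['id']}")

def see_and_spray(detected_plants):
    for plant in detected_plants:
        if classify_plant(plant) == "crop":
            spray(plant, "fertilizer")      # feed the crop
        else:
            spray(plant, "herbicide")       # hit the weed, and nothing else

see_and_spray([
    {"id": "plant-1", "leaf_shape": "round"},
    {"id": "plant-2", "leaf_shape": "jagged"},
])
```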
It’s not just farmland that’s getting a helping hand from robots. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards.
“Traditional robots were designed to perform very specific tasks over and over again. But the robots that will be used in food and agricultural applications will have to be much more flexible than what we’ve seen in automotive manufacturing plants in order to deal with natural variation in food products or the outdoor environment,” Dan Harburg, an associate at venture capital firm Anterra Capital who previously worked at a Massachusetts-based startup making a robotic arm capable of grabbing fruit, told AgFunder News.
“This means ag-focused robotics startups have to design systems from the ground up, which can take time and money, and their robots have to be able to complete multiple tasks to avoid sitting on the shelf for a significant portion of the year,” he noted.
Eyes in the Sky
It will take more than an army of robotic tractors to grow a successful crop. The farm of the future will rely on drones, satellites, and other airborne instruments to provide data about the crops on the ground.
Companies like Descartes Labs, for instance, employ machine learning to analyze satellite imagery to forecast soy and corn yields. The Los Alamos, New Mexico startup collects five terabytes of data every day from multiple satellite constellations, including NASA and the European Space Agency. Combined with weather readings and other real-time inputs, Descartes Labs can predict cornfield yields with 99 percent accuracy. Its AI platform can even assess crop health from infrared readings.
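A hedged sketch of that kind of model: per-field statistics derived from satellite bands plus weather readings as features, historical yields as the target. The data below is synthetic and the random-forest model is an assumption for illustration, not Descartes Labs’ actual pipeline.

```python
# Sketch of a yield-forecasting model: per-field features from satellite bands
# plus weather readings, regressed against historical yields. The data is
# synthetic and the random-forest choice is an assumption for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_fields = 500
# columns: mean NIR reflectance, mean red reflectance, rainfall (mm), avg temp (C)
X = rng.uniform([0.2, 0.05, 200, 15], [0.6, 0.2, 800, 30], size=(n_fields, 4))
# synthetic "true" yield: greener fields and more rain do better, plus noise
y = 6.0 * X[:, 0] - 4.0 * X[:, 1] + 0.004 * X[:, 2] + rng.normal(0, 0.3, n_fields)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("held-out R^2:", round(model.score(X[400:], y[400:]), 2))
```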
The US agency DARPA recently granted Descartes Labs $1.5 million to monitor and analyze wheat yields in the Middle East and Africa. The idea is that accurate forecasts may help identify regions at risk of crop failure, which could lead to famine and political unrest. Another company called TellusLabs out of Somerville, Massachusetts also employs machine learning algorithms to predict corn and soy yields with similar accuracy from satellite imagery.
Farmers don’t have to reach orbit to get insights on their cropland. A startup in Oakland, Ceres Imaging, produces high-resolution imagery from multispectral cameras flown over fields aboard small planes. The snapshots capture the landscape at different wavelengths, revealing problems like water stress and providing estimates of chlorophyll and nitrogen levels. Because the images are geo-tagged, farmers can easily locate the areas that need attention.
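One standard way to turn multispectral snapshots into plant-health insight is a vegetation index such as NDVI, computed per pixel from the near-infrared and red bands. The tiny arrays below stand in for real imagery, and NDVI is used here as a generic example rather than Ceres Imaging’s specific method.

```python
# NDVI (normalized difference vegetation index), a standard per-pixel measure of
# plant vigor computed from near-infrared and red reflectance. The tiny arrays
# stand in for real imagery; healthy vegetation pushes NDVI toward 1.
import numpy as np

nir = np.array([[0.55, 0.60], [0.30, 0.20]])   # near-infrared band
red = np.array([[0.08, 0.07], [0.15, 0.18]])   # red band

ndvi = (nir - red) / (nir + red + 1e-9)        # small epsilon avoids division by zero
print(np.round(ndvi, 2))
# high values (top row) suggest vigorous canopy; low values (bottom row) flag stress
```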
Growing From the Inside
Even the best intelligence—whether from drones, satellites, or machine learning algorithms—will be challenged to predict the unpredictable issues posed by climate change. That’s one reason more and more companies are betting the farm on what’s called controlled environment agriculture. Today, that doesn’t just mean fancy greenhouses, but everything from warehouse-sized, automated vertical farms to grow rooms run by robots, located not in the emptiness of Kansas or Nebraska but smack dab in the middle of the main streets of America.
Proponents of these new concepts argue these high-tech indoor farms can produce much higher yields while drastically reducing water usage and synthetic inputs like fertilizer and herbicides.
Iron Ox, out of San Francisco, is developing one-acre urban greenhouses that will be operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops.
Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce. Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.
“What I can envision is locating a larger scale indoor farm in the economically disadvantaged food desert, in order to stimulate a broader economic impact that could create jobs and generate income for that area,” said Dr. Gary Stutte, an expert in space agriculture and controlled environment agriculture, in an interview with AgFunder News. “The indoor agriculture model is adaptable to becoming an engine for economic growth and food security in both rural and urban food deserts.”
Still, the model is not without its own challenges and criticisms. Most of what these farms can produce falls into the “leafy greens” category and often comes with a premium price, which seems antithetical to the proposed mission of creating oases in the food deserts of cities. And while water usage may be minimized, the electricity required to power the operation, especially the LEDs (which played a huge part in revolutionizing indoor agriculture), is not cheap.
Still, all of these advances, from robo farmers to automated greenhouses, may need to be part of a future where nearly 10 billion people will inhabit the planet by 2050. An oft-quoted statistic from the Food and Agriculture Organization of the United Nations says the world must boost food production by 70 percent to meet the needs of the population. Technology may not save the world, but it will help feed it.
Image Credit: Valentin Valkov / Shutterstock.com Continue reading

Posted in Human Robots