Tag Archives: energy

#433911 Thanksgiving Food for Thought: The Tech ...

With the Thanksgiving holiday upon us, it’s a great time to reflect on the future of food. Over the last few years, we have seen a dramatic rise in exponential technologies transforming the food industry from seed to plate. Food is important in many ways—too little or too much of it can kill us, and it is often at the heart of family, culture, our daily routines, and our biggest celebrations. The agriculture and food industries are also two of the world’s biggest employers. Let’s take a look at what’s in store for the future.

Robotic Farms
Over the last few years, we have seen a number of new companies emerge in the robotic farming industry. This includes new types of farming equipment used in arable fields, as well as indoor robotic vertical farms. In November 2017, Hands Free Hectare became the first in the world to remotely grow an arable crop. They used autonomous tractors to sow and spray crops, small rovers to take soil samples, drones to monitor crop growth, and an unmanned combine harvester to collect the crops. Since then, they’ve also grown and harvested a field of winter wheat, and have been adding additional technologies and capabilities to their arsenal of robotic farming equipment.

Indoor vertical farming is also rapidly expanding. As Engadget reported in October 2018, a number of startups are now growing crops like leafy greens, tomatoes, flowers, and herbs. These farms can grow food in urban areas, reducing transport, water, and fertilizer costs, and often don’t need pesticides since they are indoors. Iron Ox, which is using robots to grow plants with navigation technology used by self-driving cars, can grow 30 times more food per acre of land using 90 percent less water than traditional farms. Vertical farming company Plenty was recently funded by Softbank’s Vision Fund, Jeff Bezos, and others to build 300 vertical farms in China.

These startups are not only succeeding in wealthy countries. Hello Tractor, an “uberized” tractor, has worked with 250,000 smallholder farms in Africa, creating both food security and tech-infused agriculture jobs. The World Food Program’s Innovation Accelerator (an impact partner of Singularity University) works with hundreds of startups aimed at creating zero hunger. One project is focused on supporting refugees in developing “food computers” in refugee camps—computerized devices that grow food while also adjusting to the conditions around them. As exponential trends drive down the costs of robotics, sensors, software, and energy, we should see robotic farming scaling around the world and becoming the main way farming takes place.

Cultured Meat
Exponential technologies are not only revolutionizing how we grow vegetables and grains, but also how we generate protein and meat. The new cultured meat industry is rapidly expanding, led by startups such as Memphis Meats, Mosa Meats, JUST Meat, Inc. and Finless Foods, and backed by heavyweight investors including DFJ, Bill Gates, Richard Branson, Cargill, and Tyson Foods.

Cultured meat is grown in a bioreactor using cells from an animal, a scaffold, and a culture. The process is humane and, potentially, scientists can make the meat healthier by adding vitamins, removing fat, or customizing it to an individual’s diet and health concerns. Another benefit is that cultured meats, if grown at scale, would dramatically reduce environmental destruction, pollution, and climate change caused by the livestock and fishing industries. Similar to vertical farms, cultured meat is produced using technology and can be grown anywhere, on-demand and in a decentralized way.

Similar to robotic farming equipment, bioreactors will also follow exponential trends, rapidly falling in cost. In fact, the first cultured meat hamburger (created by Singularity University faculty member Mark Post of Mosa Meats in 2013) cost $350,000. In 2018, Fast Company reported the cost was down to about $11 per burger, and the Israeli startup Future Meat Technologies predicted it would produce beef at about $2 per pound in 2020, which would be competitive with existing prices. For those who have turkey on their mind, New Harvest (one of the leading think tanks and research centers for the cultured meat and cellular agriculture industry) has funded efforts to generate a nugget of cultured turkey meat.

One outstanding question is whether cultured meat is safe to eat and how it will interact with the overall food supply chain. In the US, regulators like the Food and Drug Administration (FDA) and the US Department of Agriculture (USDA) are working out their roles in this process, with the FDA overseeing the cellular process and the USDA overseeing production and labeling.

Food Processing
Tech companies are also making great headway in streamlining food processing. Norwegian company Tomra Foods was an early leader in using image recognition, sensors, artificial intelligence, and analytics to more efficiently sort food based on shape, composition of fat, protein, and moisture, and other food safety and quality indicators. Their technologies have improved food yield by 5-10 percent, which is significant given they hold about 25 percent of their market.

These advances are also not limited to large food companies. In 2016 Google reported how a small family farm in Japan built a world-class cucumber sorting device using their open-source machine learning tool TensorFlow. SU startup Impact Vision uses hyper-spectral imaging to analyze food quality, which increases revenues and reduces food waste and product recalls from contamination.

These examples point to a question many have on their mind: will we live in a future where a few large companies use advanced technologies to grow the majority of food on the planet, or will the falling costs of these technologies allow family farms, startups, and smaller players to take part in creating a decentralized system? Currently, the future could flow either way, but it is important for smaller companies to take advantage of the most cutting-edge technology in order to stay competitive.

Food Purchasing and Delivery
In the last year, we have also seen a number of new developments in technology improving access to food. Amazon is opening cashierless Amazon Go grocery stores in Seattle, San Francisco, and Chicago, where an app allows customers to pick up their products and pay without going through checkout lines. Sam’s Club is not far behind, with an app that also allows customers to purchase goods in-store.

The market for food delivery is also growing. In 2017, Morgan Stanley estimated that the online food delivery market from restaurants could grow from $12 billion in 2017 to $32 billion by 2021. Companies like Zume are pioneering robot-powered pizza making and delivery. In addition to using robotics to create affordable high-end gourmet pizzas in their shop, they also have a pizza delivery truck that can assemble and cook pizzas while driving. Their system uses predictive analytics on past customer data to prepare pizzas for certain neighborhoods before the orders even come in. In early November 2018, the Wall Street Journal estimated that Zume is valued at up to $2.25 billion.

Looking Ahead
While each of these developments is promising on its own, it’s also important to note that since all these technologies are in some way digitized and connected to the internet, the various food tech players can collaborate. In theory, self-driving delivery restaurants could share sales data with automated farm equipment, facilitating coordination of future crops. There is a tremendous opportunity to improve efficiency, lower costs, and create an abundance of healthy, sustainable food for all.

On the other hand, these technologies are also deeply disruptive. According to the Food and Agricultural Organization of the United Nations, in 2010 about one billion people, or a third of the world’s workforce, worked in the farming and agricultural industries. We need to ensure these farmers are linked to new job opportunities, as well as facilitate collaboration between existing farming companies and technologists so that the industries can continue to grow and lead rather than be displaced.

Just as importantly, each of us might think about how these changes in the food industry might impact our own ways of life and culture. Thanksgiving celebrates community and sharing of food during a time of scarcity. Technology will help create an abundance of food and less need for communities to depend on one another. What are the ways that you will create community, sharing, and culture in this new world?

Image Credit: nikkytok / Shutterstock.com

Posted in Human Robots

#433884 Designer Babies, and Their Babies: How ...

As if stand-alone technologies weren’t advancing fast enough, we’re in an age where we must study the intersection points of these technologies. How is what’s happening in robotics influenced by what’s happening in 3D printing? What could be made possible by applying the latest advances in quantum computing to nanotechnology?

Along these lines, one crucial tech intersection is that of artificial intelligence and genomics. Each field is seeing constant progress, but Jamie Metzl believes it’s their convergence that will really push us into uncharted territory, beyond even what we’ve imagined in science fiction. “There’s going to be this push and pull, this competition between the reality of our biology with its built-in limitations and the scope of our aspirations,” he said.

Metzl is a senior fellow at the Atlantic Council and author of the upcoming book Hacking Darwin: Genetic Engineering and the Future of Humanity. At Singularity University’s Exponential Medicine conference last week, he shared his insights on genomics and AI, and where their convergence could take us.

Life As We Know It
Metzl explained how genomics as a field evolved slowly—and then quickly. In 1953, James Watson and Francis Crick identified the double helix structure of DNA, and realized that the order of the base pairs held a treasure trove of genetic information. There was such a thing as a book of life, and we’d found it.

In 2003, when the Human Genome Project was completed (after 13 years and $2.7 billion), we learned the order of the genome’s 3 billion base pairs, and the location of specific genes on our chromosomes. Not only did a book of life exist, we figured out how to read it.

Jamie Metzl at Exponential Medicine
Fifteen years after that, it’s 2018 and precision gene editing in plants, animals, and humans is changing everything, and quickly pushing us into an entirely new frontier. Forget reading the book of life—we’re now learning how to write it.

“Readable, writable, and hackable, what’s clear is that human beings are recognizing that we are another form of information technology, and just like our IT has entered this exponential curve of discovery, we will have that with ourselves,” Metzl said. “And it’s intersecting with the AI revolution.”

Learning About Life Meets Machine Learning
In 2016, DeepMind’s AlphaGo program outsmarted the world’s top Go player. In 2017 AlphaGo Zero was created: unlike AlphaGo, AlphaGo Zero wasn’t trained using previous human games of Go, but was simply given the rules of Go—and in four days it defeated the AlphaGo program.

Our own biology is, of course, vastly more complex than the game of Go, and that, Metzl said, is our starting point. “The system of our own biology that we are trying to understand is massively, but very importantly not infinitely, complex,” he added.

Getting a standardized set of rules for our biology—and, eventually, maybe even outsmarting our biology—will require genomic data. Lots of it.

Multiple countries are already starting to produce this data. The UK’s National Health Service recently announced a plan to sequence the genomes of five million Britons over the next five years. In the US, the All of Us Research Program will sequence a million Americans. China is the most aggressive in sequencing its population, with a goal of sequencing half of all newborns by 2020.

“We’re going to get these massive pools of sequenced genomic data,” Metzl said. “The real gold will come from comparing people’s sequenced genomes to their electronic health records, and ultimately their life records.” Getting people comfortable with allowing open access to their data will be another matter; Metzl mentioned that Luna DNA and others have strategies to help people get comfortable with consenting to the use of their private information. But this is where China’s lack of privacy protection could end up being a significant advantage.

To compare genotypes and phenotypes at scale—first millions, then hundreds of millions, then eventually billions, Metzl said—we’re going to need AI and big data analytic tools, and algorithms far beyond what we have now. These tools will let us move from precision medicine to predictive medicine, knowing precisely when and where different diseases are going to occur and shutting them down before they start.
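At its most basic, comparing genotypes to phenotypes means testing whether a genetic variant is more common among people with a given condition than among people without it. The Python sketch below is a toy illustration of that core comparison, using entirely synthetic data and invented risk numbers: it simulates one biallelic variant and one disease label, then computes a Pearson chi-square statistic on the resulting 2×2 allele-count table.

```python
import random

random.seed(0)

# Synthetic toy data: one biallelic variant, coded as minor-allele count (0, 1, 2).
# The frequencies and risks below are invented purely for illustration.
def simulate_person(allele_freq=0.3, base_risk=0.1, risk_per_allele=0.15):
    genotype = sum(random.random() < allele_freq for _ in range(2))
    disease = random.random() < base_risk + risk_per_allele * genotype
    return genotype, disease

people = [simulate_person() for _ in range(5000)]

# 2x2 allele-count table: rows = case/control, columns = risk allele / other allele.
case_risk = sum(g for g, d in people if d)
case_other = sum(2 - g for g, d in people if d)
ctrl_risk = sum(g for g, d in people if not d)
ctrl_other = sum(2 - g for g, d in people if not d)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

stat = chi_square_2x2(case_risk, case_other, ctrl_risk, ctrl_other)
print(f"chi-square statistic: {stat:.1f}")  # large values suggest association
```

Real genome-wide studies run millions of such tests against health records, with corrections for multiple comparisons and population structure; this sketch shows only the core comparison.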

But, Metzl said, “As we unlock the genetics of ourselves, it’s not going to be about just healthcare. It’s ultimately going to be about who and what we are as humans. It’s going to be about identity.”

Designer Babies, and Their Babies
In Metzl’s mind, the most serious application of our genomic knowledge will be in embryo selection.

Currently, in-vitro fertilization (IVF) procedures can extract around 15 eggs, fertilize them, then do pre-implantation genetic testing; right now, what’s knowable is limited to single-gene mutation diseases and simple traits like hair color and eye color. “As we get to the millions and then billions of people with sequences, we’ll have information about how these genetics work, and we’re going to be able to make much more informed choices,” Metzl said.

Imagine going to a fertility clinic in 2023. You give a skin graft or a blood sample, and using in-vitro gametogenesis (IVG)—infertility be damned—your skin or blood cells are induced to become eggs or sperm, which are then combined to create embryos. The dozens or hundreds of embryos created from artificial gametes each have a few cells extracted from them, and these cells are sequenced. The sequences will tell you the likelihood of specific traits and disease states were that embryo to be implanted and taken to full term. “With really anything that has a genetic foundation, we’ll be able to predict with increasing levels of accuracy how that potential child will be realized as a human being,” Metzl said.

This, he added, could lead to some wild and frightening possibilities: if you have 1,000 eggs and you pick one based on its optimal genetic sequence, you could then mate your embryo with somebody else who has done the same thing in a different genetic line. “Your five-day-old embryo and their five-day-old embryo could have a child using the same IVG process,” Metzl said. “Then that child could have a child with another five-day-old embryo from another genetic line, and you could go on and on down the line.”

Sounds insane, right? But wait, there’s more: as Jason Pontin reported earlier this year in Wired, “Gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable.”

From Crazy to Commonplace?
It’s a slippery slope from gene editing and embryo-mating to a dystopian race to build the most perfect humans possible. If somebody’s investing so much time and energy in selecting their embryo, Metzl asked, how will they think about the mating choices of their children? IVG could quickly leave the realm of healthcare and enter that of evolution.

“We all need to be part of an inclusive, integrated, global dialogue on the future of our species,” Metzl said. “Healthcare professionals are essential nodes in this.” Not least among this dialogue should be the question of access to tech like IVG; are there steps we can take to keep it from becoming a tool for a wealthy minority, and thereby perpetuating inequality and further polarizing societies?

As Pontin points out, at its inception 40 years ago IVF also sparked fear, confusion, and resistance—and now it’s as normal and common as could be, with millions of healthy babies conceived using the technology.

The disruption that genomics, AI, and IVG will bring to reproduction could follow a similar story cycle—if we’re smart about it. As Metzl put it, “This must be regulated, because it is life.”

Image Credit: hywards / Shutterstock.com


#433785 DeepMind’s Eerie Reimagination of the ...

If a recent project from Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.

But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.

From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics and an intern at DeepMind, along with DeepMind researchers Jeff Donahue and Karen Simonyan.

They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed images of dogs. The second AI then tells the first one how far off its efforts were. The first one uses this information to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
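The back-and-forth described above can be sketched in a few dozen lines. The following toy example is only an illustration of the adversarial loop, not DeepMind’s code, and every number in it is invented: a one-line generator learns to produce samples resembling draws from a normal distribution with mean 3, while a logistic-regression discriminator tries to tell real samples from fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data the generator should learn to imitate: samples from N(3, 1).
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator: g(z) = a*z + b, fed with standard normal noise z.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), outputs P(x is real).
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(4000):
    z = rng.normal(size=batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step (non-saturating loss): push D(fake) toward 1.
    d_fake = sigmoid(w * fake + c)
    dx = -(1 - d_fake) * w          # gradient of generator loss w.r.t. each fake sample
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

print(f"generated mean ~ {b:.2f} (real mean is 3.0)")  # should drift toward 3.0
```

This toy only converges approximately; real GANs replace the two linear models with deep networks and rely on careful optimizers and stabilization tricks, of which BigGAN’s huge batches are one example.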

The team was able to draw on Google’s vast vaults of computational power to create images of a quality and life-like nature beyond almost anything seen before. In part, this was achieved by feeding the GAN far more images at each training step than is usually the case. According to IFLScience, the standard is to feed about 64 images into a GAN at each step. In this case, the research team fed about 2,000 images per step into the system, leading to it being nicknamed BigGAN.

Their results showed that feeding the system with more images and using masses of raw computer power markedly increased the GAN’s precision and ability to create life-like renditions of the subjects it was trained to reproduce.

“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.

The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512×512-pixel images. Each experiment took between 24 and 48 hours to run.

That kind of computing power needs a lot of electricity. As artist and Innovator-In-Residence at the Library of Congress Jer Thorp tongue-in-cheek put it on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”

Thorp added that a back-of-the-envelope calculation showed that the computations to produce the images would require about 27,000 square feet of solar panels to have adequate power.
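Thorp’s back-of-the-envelope figure can be roughly reconstructed in a few lines. Every input below is an assumption chosen for illustration (per-TPU power draw, panel output per square foot, average capacity factor), not a reported number:

```python
# All inputs are rough assumptions for illustration, not measured figures.
NUM_TPUS = 512
WATTS_PER_TPU = 200          # assumed average draw per TPU chip (W)
PANEL_WATTS_PER_SQFT = 15    # assumed peak output per square foot of panel (W)
CAPACITY_FACTOR = 0.25       # assumed fraction of peak a panel averages over a day

draw_watts = NUM_TPUS * WATTS_PER_TPU                    # continuous draw, ~102 kW
avg_panel_watts = PANEL_WATTS_PER_SQFT * CAPACITY_FACTOR # average output per sq ft
area_sqft = draw_watts / avg_panel_watts

print(f"continuous draw: {draw_watts / 1000:.0f} kW")
print(f"panel area needed: {area_sqft:,.0f} sq ft")
```

With these particular assumptions the result lands near Thorp’s ~27,000 square feet, but changing any input shifts it proportionally; the point is the order of magnitude, not the precision.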

BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’

However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.

Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.

The Future Use Cases
GANs were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.

One possible use is image correction—making pixelated images clearer. Not only would this help your future holiday snaps, but it could also be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.

For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.

For now, it seems that while not all GANs require the computational and electrical power of the BigGAN, there is still some way to reach these potential use cases. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue as technology progresses.

Image Credit: Ondrej Prosicky/Shutterstock


#433668 A Decade of Commercial Space ...

In many industries, a decade is barely enough time to cause dramatic change unless something disruptive comes along—a new technology, business model, or service design. The space industry has recently been enjoying all three.

But 10 years ago, none of those innovations were guaranteed. In fact, on Sept. 28, 2008, an entire company watched and hoped as their flagship product attempted a final launch after three failures. With cash running low, this was the last shot. Over 21,000 kilograms of kerosene and liquid oxygen ignited and powered the two-stage rocket off the launchpad.

This first official picture of the Soviet satellite Sputnik I was issued in Moscow Oct. 9, 1957. The satellite measured 1 foot, 11 inches and weighed 184 pounds. The Space Age began as the Soviet Union launched Sputnik, the first man-made satellite, into orbit, on Oct. 4, 1957.AP Photo/TASS
When that Falcon 1 rocket successfully reached orbit and the company secured a subsequent contract with NASA, SpaceX had survived its ‘startup dip’. That milestone, the first privately developed liquid-fueled rocket to reach orbit, ignited a new space industry that is changing our world, on this planet and beyond. What has happened in the intervening years, and what does it mean going forward?

While scientists are busy developing new technologies that address the countless technical problems of space, there is another segment of researchers, including myself, studying the business angle and the operations issues facing this new industry. In a recent paper, my colleague Christopher Tang and I investigate the questions firms need to answer in order to create a sustainable space industry and make it possible for humans to establish extraterrestrial bases, mine asteroids and extend space travel—all while governments play an increasingly smaller role in funding space enterprises. We believe these business solutions may hold the less-glamorous key to unlocking the galaxy.

The New Global Space Industry
When the Soviet Union launched their Sputnik program, putting a satellite in orbit in 1957, they kicked off a race to space fueled by international competition and Cold War fears. The Soviet Union and the United States played the primary roles, stringing together a series of “firsts” for the record books. The first chapter of the space race culminated with Neil Armstrong and Buzz Aldrin’s historic Apollo 11 moon landing, which required massive public investment on the order of US$25.4 billion, almost $200 billion in today’s dollars.

Competition characterized this early portion of space history. Eventually, that evolved into collaboration, with the International Space Station being a stellar example, as governments worked toward shared goals. Now, we’ve entered a new phase—openness—with private, commercial companies leading the way.

The industry for spacecraft and satellite launches is becoming more commercialized, due, in part, to shrinking government budgets. According to a report from the investment firm Space Angels, a record 120 venture capital firms invested over $3.9 billion in private space enterprises last year. The space industry is also becoming global, no longer dominated by the Cold War rivals, the United States and USSR.

In 2018 to date, there have been 72 orbital launches, an average of two per week, from launch pads in China, Russia, India, Japan, French Guiana, New Zealand, and the US.

The uptick in orbital launches of actual rockets as well as spacecraft launches, which include satellites and probes launched from space, coincides with this openness over the past decade.

More governments, firms and even amateurs engage in various spacecraft launches than ever before. With more entities involved, innovation has flourished. As Roberson notes in Digital Trends, “Private, commercial spaceflight. Even lunar exploration, mining, and colonization—it’s suddenly all on the table, making the race for space today more vital than it has felt in years.”

Worldwide launches into space. Orbital launches include manned and unmanned spaceships launched into orbital flight from Earth. Spacecraft launches include all vehicles such as spaceships, satellites and probes launched from Earth or space. Wooten, J. and C. Tang (2018) Operations in space, Decision Sciences; Space Launch Report (Kyle 2017); Spacecraft Encyclopedia (Lafleur 2017), CC BY-ND

One can see this vitality plainly in the news. On Sept. 21, Japan announced that two of its unmanned rovers, dubbed Minerva-II-1, had landed on a small, distant asteroid. For perspective, the scale of this landing is similar to hitting a 6-centimeter target from 20,000 kilometers away. And earlier this year, people around the world watched in awe as SpaceX’s Falcon Heavy rocket successfully launched and, more impressively, returned its two boosters to a landing pad in a synchronized ballet of epic proportions.

Challenges and Opportunities
Amidst the growth of capital, firms, and knowledge, both researchers and practitioners must figure out how entities should manage their daily operations, organize their supply chain, and develop sustainable operations in space. This is complicated by the hurdles space poses: distance, gravity, inhospitable environments, and information scarcity.

One of the greatest challenges involves actually getting the things people want in space, into space. Manufacturing everything on Earth and then launching it with rockets is expensive and restrictive. A company called Made In Space is taking a different approach by maintaining an additive manufacturing facility on the International Space Station and 3D printing right in space. Tools, spare parts, and medical devices for the crew can all be created on demand. The benefits include more flexibility and better inventory management on the space station. In addition, certain products can be produced better in space than on Earth, such as pure optical fiber.

How should companies determine the value of manufacturing in space? Where should capacity be built and how should it be scaled up? The figure below breaks up the origin and destination of goods between Earth and space and arranges products into quadrants. Humans have mastered the lower left quadrant, made on Earth—for use on Earth. Moving clockwise from there, each quadrant introduces new challenges, for which we have less and less expertise.

A framework of Earth-space operations. Wooten, J. and C. Tang (2018) Operations in Space, Decision Sciences, CC BY-ND
I first became interested in this particular problem as I listened to a panel of robotics experts discuss building a colony on Mars (in our third quadrant). You can’t build the structures on Earth and easily send them to Mars, so you must manufacture there. But putting human builders in that extreme environment is equally problematic. Essentially, an entirely new mode of production using robots and automation in an advance envoy may be required.

Resources in Space
You might wonder where one gets the materials for manufacturing in space, but there is actually an abundance of resources: metals for manufacturing can be found within asteroids, water for rocket fuel is frozen as ice on planets and moons, and rare elements like helium-3 for energy are embedded in the crust of the moon. If we brought that particular isotope back to Earth, it could, in principle, fuel fusion reactors and reduce our dependence on fossil fuels.

As demonstrated by the recent Minerva-II-1 asteroid landing, people are acquiring the technical know-how to locate and navigate to these materials. But extraction and transport are open questions.

How do these cases change the economics in the space industry? Already, companies like Planetary Resources, Moon Express, Deep Space Industries, and Asterank are organizing to address these opportunities. And scholars are beginning to outline how to navigate questions of property rights, exploitation and partnerships.

Threats From Space Junk
A computer-generated image of objects in Earth orbit that are currently being tracked. Approximately 95 percent of the objects in this illustration are orbital debris – not functional satellites. The dots represent the current location of each item. The orbital debris dots are scaled according to the image size of the graphic to optimize their visibility and are not scaled to Earth. NASA
The movie “Gravity” opens with a Russian satellite exploding, which sets off a chain reaction of destruction thanks to debris hitting a space shuttle, the Hubble telescope, and part of the International Space Station. The sequence, while not perfectly plausible as written, depicts a very real phenomenon. In fact, in 2013, a Russian satellite disintegrated when it was hit with fragments from a Chinese satellite that exploded in 2007. Known as the Kessler syndrome, the danger from the 500,000-plus pieces of space debris has already gotten some attention in public policy circles. How should one prevent, reduce, or mitigate this risk? Quantifying the environmental impact of the space industry and addressing sustainable operations is still to come.

NASA scientist Mark Matney is seen through a fist-sized hole in a 3-inch thick piece of aluminum at Johnson Space Center’s orbital debris program lab. The hole was created by a thumb-size piece of material hitting the metal at very high speed simulating possible damage from space junk. AP Photo/Pat Sullivan
What’s Next?
It’s true that space is becoming just another place to do business. There are companies that will handle the logistics of getting your destined-for-space module on board a rocket; there are companies that will fly those rockets to the International Space Station; and there are others that can make a replacement part once there.

What comes next? In one sense, it’s anybody’s guess, but all signs point to this new industry forging ahead. A new breakthrough could alter the speed, but the course seems set: exploring farther from home, whether that’s the moon, asteroids, or Mars. It’s hard to believe that 10 years ago, SpaceX had yet to pull off a successful launch. Today, a vibrant private sector consists of scores of companies working on everything from commercial spacecraft and rocket propulsion to space mining and food production. The next step is to solidify business practices and mature the industry.

Standing in a large hall at the University of Pittsburgh as part of the White House Frontiers Conference, I see the future. Wrapped around my head are state-of-the-art virtual reality goggles. I’m looking at the surface of Mars. Every detail is immediate and crisp. This is not just a video game or an aimless exercise. The scientific community has poured resources into such efforts because exploration is preceded by information. And who knows, maybe 10 years from now, someone will be standing on the actual surface of Mars.

Image Credit: SpaceX

Joel Wooten, Assistant Professor of Management Science, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted in Human Robots

#433655 First-Ever Grad Program in Space Mining ...

Maybe they could call it the School of Space Rock: A new program being offered at the Colorado School of Mines (CSM) will educate post-graduate students on the nuts and bolts of extracting and using valuable materials such as rare metals and frozen water from space rocks like asteroids or the moon.

Officially called Space Resources, the graduate-level program is reputedly the first of its kind in the world to offer a course in the emerging field of space mining. Heading the program is Angel Abbud-Madrid, director of the Center for Space Resources at Mines, a well-known engineering school located in Golden, Colorado, where Molson Coors taps Rocky Mountain spring water for its earthly brews.

The first semester for the new discipline began last month. While Abbud-Madrid didn’t immediately respond to an interview request, Singularity Hub did talk to Chris Lewicki, president and CEO of Planetary Resources, a space mining company whose founders include Peter Diamandis, Singularity University co-founder.

A former NASA engineer who worked on multiple Mars missions, Lewicki says the Space Resources program at CSM, with its multidisciplinary focus on science, economics, and policy, will help students be light years ahead of their peers in the nascent field of space mining.

“I think it’s very significant that they’ve started this program,” he said. “Having students with that kind of background exposure just allows them to be productive on day one instead of having to kind of fill in a lot of things for them.”

Who would be attracted to apply for such a program? There are many professionals who could be served by a post-baccalaureate certificate, master’s degree, or even Ph.D. in Space Resources, according to Lewicki. Certainly aerospace engineers and planetary scientists would be among the faces in the classroom.

“I think it’s [also] people who have an interest in what I would call maybe space robotics,” he said. Lewicki is referring not only to the classic example of robotic arms like the Canadarm2, which lends a hand to astronauts aboard the International Space Station, but other types of autonomous platforms.

One example might be Planetary Resources’ own Arkyd-6, a small, autonomous CubeSat launched earlier this year to test technologies that might be used for deep-space exploration of resources. The proof-of-concept was as much a test of the technology—such as the first space-based use of a mid-wave infrared imager to detect water resources—as of the company’s ability to operate in space on a shoestring budget.

“We really proved that doing one of these billion-dollar science missions to deep space can be done for a lot less if you have a very focused goal, and if you kind of cut a lot of corners and then put some commercial approaches into those things,” Lewicki said.

A Trillion-Dollar Industry
Why space mining? There are at least a trillion reasons.

Astrophysicist Neil deGrasse Tyson famously said that the first trillionaire will be the “person who exploits the natural resources on asteroids.” That’s because asteroids—rocky remnants from the formation of our solar system more than four billion years ago—harbor precious metals, ranging from platinum and gold to iron and nickel.

For instance, one future target of exploration by NASA—an asteroid dubbed 16 Psyche, orbiting the sun in the asteroid belt between Mars and Jupiter—is worth an estimated $10,000 quadrillion. It’s a number so mind-bogglingly big that it would crash the global economy if someone ever figured out how to tow the asteroid back to Earth without literally crashing it into the planet.

Living Off the Land
Space mining isn’t just about getting rich. Many argue that humanity’s ability to extract resources in space, especially water that can be refined into rocket fuel, will be a key technology to extend our reach beyond near-Earth space.

The presence of frozen water around the frigid polar regions of the moon, for example, represents an invaluable source of power for future deep-space missions. Splitting H2O into its component elements of hydrogen and oxygen would provide a nearly inexhaustible source of rocket fuel. Today, it costs $10,000 to put a pound of payload in Earth orbit, according to NASA.
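The chemistry behind that claim is simple stoichiometry: electrolysis splits water (2 H2O → 2 H2 + O2) into hydrogen fuel and oxygen oxidizer in fixed mass proportions set by the molar masses. A back-of-the-envelope sketch (the yield figures are illustrative, not mission numbers):

```python
# Splitting water ice into rocket propellant: 2 H2O -> 2 H2 + O2.
# Mass fractions follow directly from molar masses.
M_H, M_O = 1.008, 15.999            # g/mol, hydrogen and oxygen
m_water = 2 * M_H + M_O             # ~18.015 g/mol for H2O

frac_h2 = (2 * M_H) / m_water       # hydrogen (fuel) mass fraction, ~11%
frac_o2 = M_O / m_water             # oxygen (oxidizer) mass fraction, ~89%

# Per kilogram of lunar ice processed:
print(f"~{frac_h2 * 1000:.0f} g H2 and ~{frac_o2 * 1000:.0f} g O2 per kg of ice")
```

Conveniently, that roughly 1:8 hydrogen-to-oxygen mass split is close to the mixture liquid-fueled engines like those on the Saturn V upper stages actually burned.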

Until more advanced rocket technology is developed, the moon looks to be the best bet for serving as the launching pad to Mars and beyond.

Moon Versus Asteroid
However, Lewicki notes that despite the moon’s proximity and our more intimate familiarity with its pockmarked surface, that doesn’t mean a lunar mission to extract resources is any easier than a multi-year journey to a fast-moving asteroid.

For one thing, fighting gravity to and from the moon is no easy feat, as the moon has a far stronger gravitational field than an asteroid. Another challenge is that the frozen water is located in permanently shadowed lunar craters, meaning space miners can’t rely on solar-powered equipment and will need some other energy source.
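Just how lopsided is that gravity comparison? Escape velocity, v = sqrt(2GM/r), makes it concrete. A minimal sketch, using published approximate figures for the moon and the near-Earth asteroid Bennu (the Bennu values are rough estimates used for illustration):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Speed needed to leave a body's gravity well: sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v_moon = escape_velocity(7.342e22, 1.737e6)   # the moon: ~2.4 km/s
v_bennu = escape_velocity(7.33e10, 245.0)     # asteroid Bennu: ~0.2 m/s

print(f"Moon: {v_moon:.0f} m/s, Bennu: {v_bennu:.2f} m/s")
```

A cargo of mined water has to be accelerated to kilometers per second to leave the moon, but could practically be nudged off a small asteroid at walking pace, which is part of the case for asteroid mining despite the longer trip.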

And then there’s the fact that moon craters might just be the coldest places in the solar system. NASA’s Lunar Reconnaissance Orbiter found temperatures plummeted as low as 26 Kelvin, or more than minus 400 degrees Fahrenheit. In comparison, the coldest temperatures on Earth have been recorded near the South Pole in Antarctica—about minus 148 degrees F.

“We don’t operate machines in that kind of thermal environment,” Lewicki said of the extreme temperatures detected in the permanent dark regions of the moon. “Antarctica would be a balmy desert island compared to a lunar polar crater.”

Of course, no one knows quite what awaits us in the asteroid belt. Answers may soon be forthcoming. Last week, the Japan Aerospace Exploration Agency landed two small, hopping rovers on an asteroid called Ryugu. Meanwhile, NASA hopes to retrieve a sample from the near-Earth asteroid Bennu when its OSIRIS-REx mission makes contact at the end of this year.

No Bucks, No Buck Rogers
Visionaries like Elon Musk and Jeff Bezos talk about colonies on Mars, with millions of people living and working in space. The reality is that there’s probably a reason Buck Rogers was set in the 25th century: It’s going to take a lot of money and a lot of time to realize those sci-fi visions.

Or, as Lewicki put it: “No bucks, no Buck Rogers.”

The cost of operating in outer space can be prohibitive. Planetary Resources itself is grappling with raising additional funding, with reports this year about layoffs and even a possible auction of company assets.

Still, Lewicki is confident that despite the economic and technical challenges, humanity will someday realize even the boldest dreams: skyscrapers on the moon and interplanetary trips to Mars, feats that will dwarf today’s engineering marvels.

“What we’re doing is going to be very hard, very painful, and almost certainly worth it,” he said. “Who would have thought that there would be a job for a space miner that you could go to school for, even just five or ten years ago? Things move quickly.”

Image Credit: M-SUR / Shutterstock.com

Posted in Human Robots