
#433668 A Decade of Commercial Space ...

In many industries, a decade is barely enough time to cause dramatic change unless something disruptive comes along—a new technology, business model, or service design. The space industry has recently been enjoying all three.

But 10 years ago, none of those innovations were guaranteed. In fact, on Sept. 28, 2008, an entire company watched and hoped as its flagship product attempted a final launch after three failures. With cash running low, this was the last shot. Over 21,000 kilograms of kerosene and liquid oxygen ignited and powered the two-stage rocket off the launchpad.

This first official picture of the Soviet satellite Sputnik I was issued in Moscow on Oct. 9, 1957. The satellite measured 1 foot, 11 inches and weighed 184 pounds. The Space Age began as the Soviet Union launched Sputnik, the first man-made satellite, into orbit on Oct. 4, 1957. AP Photo/TASS
When that Falcon 1 rocket successfully reached orbit and the company secured a subsequent contract with NASA, SpaceX had survived its ‘startup dip’. That milestone, the first privately developed liquid-fueled rocket to reach orbit, ignited a new space industry that is changing our world, on this planet and beyond. What has happened in the intervening years, and what does it mean going forward?

While scientists are busy developing new technologies that address the countless technical problems of space, there is another segment of researchers, including myself, studying the business angle and the operations issues facing this new industry. In a recent paper, my colleague Christopher Tang and I investigate the questions firms need to answer in order to create a sustainable space industry and make it possible for humans to establish extraterrestrial bases, mine asteroids and extend space travel—all while governments play an increasingly smaller role in funding space enterprises. We believe these business solutions may hold the less-glamorous key to unlocking the galaxy.

The New Global Space Industry
When the Soviet Union launched its Sputnik program, putting a satellite in orbit in 1957, it kicked off a race to space fueled by international competition and Cold War fears. The Soviet Union and the United States played the primary roles, stringing together a series of “firsts” for the record books. The first chapter of the space race culminated with Neil Armstrong and Buzz Aldrin’s historic Apollo 11 moon landing, which required massive public investment, on the order of US$25.4 billion, almost $200 billion in today’s dollars.
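That "almost $200 billion" figure is easy to sanity-check. The sketch below assumes a ballpark price-level multiplier of about 7.8x from early-1970s dollars to today; the multiplier is an assumption for illustration, not a figure from the article.

```python
# Sanity check of the Apollo program's cost in today's dollars.
# The ~7.8x multiplier (early-1970s dollars to today) is an assumed
# ballpark CPI factor, not a figure from the article.
APOLLO_COST_NOMINAL = 25.4e9  # US$, as reported at the time
CPI_MULTIPLIER = 7.8          # assumed price-level factor

cost_today = APOLLO_COST_NOMINAL * CPI_MULTIPLIER
print(f"~${cost_today / 1e9:.0f} billion in today's dollars")  # ~$198 billion
```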

Competition characterized this early portion of space history. Eventually, that evolved into collaboration, with the International Space Station being a stellar example, as governments worked toward shared goals. Now, we’ve entered a new phase—openness—with private, commercial companies leading the way.

The industry for spacecraft and satellite launches is becoming more commercialized, due, in part, to shrinking government budgets. According to a report from the investment firm Space Angels, a record 120 venture capital firms invested over $3.9 billion in private space enterprises last year. The space industry is also becoming global, no longer dominated by the Cold War rivals, the United States and USSR.

In 2018 to date, there have been 72 orbital launches, an average of two per week, from launch pads in China, Russia, India, Japan, French Guiana, New Zealand, and the US.

The uptick in orbital launches of actual rockets, as well as in spacecraft launches, which include satellites and probes deployed from space, coincides with this new openness over the past decade.

More governments, firms and even amateurs engage in various spacecraft launches than ever before. With more entities involved, innovation has flourished. As Roberson notes in Digital Trends, “Private, commercial spaceflight. Even lunar exploration, mining, and colonization—it’s suddenly all on the table, making the race for space today more vital than it has felt in years.”

Worldwide launches into space. Orbital launches include manned and unmanned spaceships launched into orbital flight from Earth. Spacecraft launches include all vehicles such as spaceships, satellites and probes launched from Earth or space. Wooten, J. and C. Tang (2018) Operations in space, Decision Sciences; Space Launch Report (Kyle 2017); Spacecraft Encyclopedia (Lafleur 2017), CC BY-ND

One can see this vitality plainly in the news. On Sept. 21, Japan announced that two of its unmanned rovers, dubbed Minerva-II-1, had landed on a small, distant asteroid. For perspective, the scale of this landing is similar to hitting a 6-centimeter target from 20,000 kilometers away. And earlier this year, people around the world watched in awe as SpaceX’s Falcon Heavy rocket successfully launched and, more impressively, returned its two boosters to a landing pad in a synchronized ballet of epic proportions.
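That scale analogy checks out dimensionally. The quick sketch below assumes ballpark figures for the asteroid Ryugu's size (~900 m) and its distance from Earth at the time (~300 million km); neither number appears in the article.

```python
# Dimensional sanity check of the landing-scale analogy.
# The asteroid size (~900 m) and Earth distance (~3.0e8 km) are assumed
# ballpark figures for Ryugu; they are not stated in the article.
TARGET_M = 0.06       # the 6-centimeter target in the analogy
RANGE_M = 2.0e7       # 20,000 km, in meters

ASTEROID_M = 900.0    # assumed asteroid diameter, in meters
DISTANCE_M = 3.0e11   # assumed Earth-asteroid distance, in meters

analogy_ratio = TARGET_M / RANGE_M
actual_ratio = ASTEROID_M / DISTANCE_M
print(f"analogy: {analogy_ratio:.1e}, actual: {actual_ratio:.1e}")  # both ~3.0e-9
```

Both ratios land at about three parts per billion, so the comparison holds up.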

Challenges and Opportunities
Amidst the growth of capital, firms, and knowledge, both researchers and practitioners must figure out how entities should manage their daily operations, organize their supply chain, and develop sustainable operations in space. This is complicated by the hurdles space poses: distance, gravity, inhospitable environments, and information scarcity.

One of the greatest challenges involves actually getting the things people want in space, into space. Manufacturing everything on Earth and then launching it with rockets is expensive and restrictive. A company called Made In Space is taking a different approach by maintaining an additive manufacturing facility on the International Space Station and 3D printing right in space. Tools, spare parts, and medical devices for the crew can all be created on demand. The benefits include more flexibility and better inventory management on the space station. In addition, certain products can be produced better in space than on Earth, such as pure optical fiber.

How should companies determine the value of manufacturing in space? Where should capacity be built and how should it be scaled up? The figure below breaks up the origin and destination of goods between Earth and space and arranges products into quadrants. Humans have mastered the lower left quadrant, made on Earth—for use on Earth. Moving clockwise from there, each quadrant introduces new challenges, for which we have less and less expertise.

A framework of Earth-space operations. Wooten, J. and C. Tang (2018) Operations in Space, Decision Sciences, CC BY-ND
I first became interested in this particular problem as I listened to a panel of robotics experts discuss building a colony on Mars (in our third quadrant). You can’t build the structures on Earth and easily send them to Mars, so you must manufacture there. But putting human builders in that extreme environment is equally problematic. Essentially, an entirely new mode of production using robots and automation in an advance envoy may be required.

Resources in Space
You might wonder where one gets the materials for manufacturing in space, but there is actually an abundance of resources: Metals for manufacturing can be found within asteroids, water for rocket fuel is frozen as ice on planets and moons, and rare elements like helium-3 for energy are embedded in the crust of the moon. If we could bring that particular isotope back to Earth in quantity, and master the fusion technology to use it, it could reduce our dependence on fossil fuels.

As demonstrated by the recent Minerva-II-1 asteroid landing, people are acquiring the technical know-how to locate and navigate to these materials. But extraction and transport are open questions.

How do these resources change the economics of the space industry? Already, companies like Planetary Resources, Moon Express, Deep Space Industries, and Asterank are organizing to address these opportunities. And scholars are beginning to outline how to navigate questions of property rights, exploitation and partnerships.

Threats From Space Junk
A computer-generated image of objects in Earth orbit that are currently being tracked. Approximately 95 percent of the objects in this illustration are orbital debris – not functional satellites. The dots represent the current location of each item. The orbital debris dots are scaled according to the image size of the graphic to optimize their visibility and are not scaled to Earth. NASA
The movie “Gravity” opens with a Russian satellite exploding, which sets off a chain reaction of destruction thanks to debris hitting a space shuttle, the Hubble telescope, and part of the International Space Station. The sequence, while not perfectly plausible as written, depicts a very real phenomenon. In fact, in 2013, a Russian satellite disintegrated when it was hit by fragments from a Chinese satellite that was destroyed in 2007. Known as the Kessler syndrome, the danger from the 500,000-plus pieces of space debris has already gotten some attention in public policy circles. How should one prevent, reduce or mitigate this risk? Work to quantify the environmental impact of the space industry and to address sustainable operations is still to come.

NASA scientist Mark Matney is seen through a fist-sized hole in a 3-inch thick piece of aluminum at Johnson Space Center’s orbital debris program lab. The hole was created by a thumb-size piece of material hitting the metal at very high speed simulating possible damage from space junk. AP Photo/Pat Sullivan
What’s Next?
It’s true that space is becoming just another place to do business. There are companies that will handle the logistics of getting your destined-for-space module on board a rocket; there are companies that will fly those rockets to the International Space Station; and there are others that can make a replacement part once there.

What comes next? In one sense, it’s anybody’s guess, but all signs point to this new industry forging ahead. A new breakthrough could alter the speed, but the course seems set: exploring farther away from home, whether that’s the moon, asteroids, or Mars. It’s hard to believe that 10 years ago, SpaceX had yet to reach orbit. Today, a vibrant private sector consists of scores of companies working on everything from commercial spacecraft and rocket propulsion to space mining and food production. The next step is working to solidify the business practices and mature the industry.

Standing in a large hall at the University of Pittsburgh as part of the White House Frontiers Conference, I see the future. Wrapped around my head are state-of-the-art virtual reality goggles. I’m looking at the surface of Mars. Every detail is immediate and crisp. This is not just a video game or an aimless exercise. The scientific community has poured resources into such efforts because exploration is preceded by information. And who knows, maybe 10 years from now, someone will be standing on the actual surface of Mars.

Image Credit: SpaceX

Joel Wooten, Assistant Professor of Management Science, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted in Human Robots

#433278 Outdated Evolution: Updating Our ...

What happens when evolution shapes an animal for tribes of 150 primitive individuals living in a chaotic jungle, and then suddenly that animal finds itself living with millions of others in an engineered metropolis, their pockets all bulging with devices of godlike power?

The result, it seems, is a modern era of tension where archaic forms of governance struggle to keep up with the technological advances of their citizenry, where governmental policies act like constraining bottlenecks rather than spearheads of progress.

Simply put, our governments have failed to adapt to disruptive technologies. And if we are to regain our stability moving forward into a future of even greater disruption, it’s imperative that we understand the issues that got us into this situation and what kind of solutions we can engineer to overcome our governmental weaknesses.

Hierarchy vs. Technological Decentralization
Many of the greatest issues our governments face today come from humanity’s biologically-hardwired desire for centralized hierarchies. This innate proclivity toward building and navigating systems of status and rank was an evolutionary gift handed down to us by our ape ancestors, among whom each member of a community had a mental map of their social hierarchy. Their nervous systems behaved differently depending on their rank in this hierarchy, influencing their interactions in a way that ensured only the most competent ape would rise to the top to gain access to the best food and mates.

As humanity emerged and discovered the power of language, we continued this practice by ensuring that those at the top of the hierarchies, those with the greatest education and access to information, were the dominant decision-makers for our communities.

However, this kind of structured chain of power is only necessary if we’re operating in conditions of scarcity. But resources, including information, are no longer scarce.

It’s estimated that more than two-thirds of adults in the world now own a smartphone, giving the average citizen the same access to the world’s information as the leaders of our governments. And with global poverty falling from 35.5 percent to 10.9 percent over the last 25 years, our younger generations are growing up seeing automation and abundance as a likely default, where innovations like solar energy, lab-grown meat, and 3D printing are expected to become commonplace.

It’s awareness of this paradigm shift that has empowered the recent rise of decentralization. As information and access to resources become ubiquitous, there is noticeably less need for our inefficient and bureaucratic hierarchies.

For example, if blockchain can prove its feasibility for large-scale systems, it can be used to update and upgrade numerous applications to a decentralized model, including currency and voting. Such innovations would lower the risk of failing banks collapsing the economy like they did in 2008, as well as prevent corrupt politicians from using gerrymandering and long queues at polling stations to deter voter participation.

Of course, technology isn’t a magic wand that should be implemented carelessly. Facebook’s “move fast and break things” approach may well have broken American democracy in 2016, as social media played on some of humanity’s worst tendencies during an election: fear and hostility.

But if decentralized technology, like blockchain’s public ledgers, can continue to spread a sense of security and transparency throughout society, perhaps we can begin to quiet that paranoia and hyper-vigilance our brains evolved to cope with living as apes in dangerous jungles. By decentralizing our power structures, we take away the channels our outdated biological behaviors might use to enact social dominance and manipulation.

The peace of mind this creates helps to reestablish trust in our communities and in our governments. And with trust in the government increased, it’s likely we’ll see our next issue corrected.

From Business and Law to Science and Technology
A study found that 59 percent of US presidents, 68 percent of vice presidents, and 78 percent of secretaries of state were lawyers by education and occupation. That means more than one out of every two people in the most powerful positions in the American government were trained in a field dedicated to convincing other people (judges) that their perspective is true, even when evidence is lacking.

And so the scientific method became less important than semantics to our leaders.

Similarly, of the 535 members of the American Congress, only 24 hold a PhD, and only two of those are in a STEM field. And so far, it’s not getting better: Trump is the first president since WWII not to name a science advisor.

But if we can use technologies like blockchain to increase transparency, efficiency, and trust in the government, then the upcoming generations who understand decentralization, abundance, and exponential technologies might feel inspired enough to run for government positions. This helps solve that common problem where the smartest and most altruistic people tend to avoid government positions because they don’t want to play the semantic and deceitful game of politics.

By changing this narrative, our governments can begin to fill with techno-progressive individuals who actually understand the technologies that are rapidly reshaping our reality. And this influence of expertise is going to be crucial as our governments are forced to restructure and create new policies to accommodate the incoming disruption.

Clearing Regulations to Begin Safe Experimentation
As exponential technologies become more ubiquitous, we’re likely going to see young kids and garage tinkerers creating powerful AIs and altering genetics thanks to tools like CRISPR and free virtual reality tutorials.

This easy accessibility to such powerful technology means unexpected and rapid progress can occur almost overnight, quickly overwhelming our government’s regulatory systems.

Uber and Airbnb are two of the best examples of our governments’ inability to keep up with such technology, both companies achieving market dominance before regulators were even able to consider how to handle them. And even when a government rules against them, they often continue to operate, because people simply choose to keep using the apps.

Luckily, this kind of disruption hasn’t yet posed a major existential threat. But this will change when we see companies begin developing cyborg body parts, brain-computer interfaces, nanobot health injectors, and at-home genetic engineering kits.

For this reason, it’s crucial that we have experts who understand how to update our regulations to be as flexible as is necessary to ensure we don’t create black market conditions like we’ve done with drugs. It’s better to have safe and monitored experimentation, rather than forcing individuals into seedy communities using unsafe products.

Survival of the Most Adaptable
If we hope to be an animal that survives our changing environment, we have to adapt. We cannot cling to the behaviors and systems formed thousands of years ago. We must instead acknowledge that we now exist in an ecosystem of disruptive technology, and we must evolve and update our governments if they’re going to be capable of navigating these transformative impacts.

Image Credit: mmatee / Shutterstock.com


#432893 These 4 Tech Trends Are Driving Us ...

From a first-principles perspective, the task of feeding eight billion people boils down to converting energy from the sun into chemical energy in our bodies.

Traditionally, solar energy is converted by photosynthesis into carbohydrates in plants (i.e., biomass), which are either eaten by the vegans amongst us, or fed to animals, for those with a carnivorous preference.

Today, the process of feeding humanity is extremely inefficient.

If we could radically reinvent what we eat, and how we create that food, what might you imagine that “future of food” would look like?

In this post we’ll cover:

Vertical farms
CRISPR engineered foods
The alt-protein revolution
Farmer 3.0

Let’s dive in.

Vertical Farming
Where we grow our food…

The average American meal travels over 1,500 miles from farm to table. Wine from France, beef from Texas, potatoes from Idaho.

Imagine instead growing all of your food in a 50-story tall vertical farm in downtown LA or off-shore on the Great Lakes where the travel distance is no longer 1,500 miles but 50 miles.

Delocalized farming will minimize travel costs at the same time that it maximizes freshness.

Perhaps more importantly, vertical farming also allows tomorrow’s farmer the ability to control the exact conditions of her plants year round.

Rather than allowing the vagaries of the weather and soil conditions to dictate crop quality and yield, we can now perfectly control the growing cycle.

LED lighting provides the crops with the maximum amount of light, at the perfect frequency, 24 hours a day, 7 days a week.

At the same time, sensors and robots provide the root system the exact pH and micronutrients required, while fine-tuning the temperature of the farm.

Such precision farming can generate yields that are 200% to 400% above normal.

Next let’s explore how we can precision-engineer the genetic properties of the plant itself.

CRISPR and Genetically Engineered Foods
What food do we grow?

A fundamental shift is occurring in our relationship with agriculture. We are going from evolution by natural selection (Darwinism) to evolution by human direction.

CRISPR (the cutting edge gene editing tool) is providing a pathway for plant breeding that is more predictable, faster and less expensive than traditional breeding methods.

Rather than our crops being subject to nature’s random, environmental whim, CRISPR unlocks our capability to modify our crops to match the available environment.

Further, using CRISPR we will be able to optimize the nutrient density of our crops, enhancing their value and volume.

CRISPR may also hold the key to eliminating common allergens from crops. As we identify the allergen gene in peanuts, for instance, we can use CRISPR to silence that gene, making the crops we raise safer for and more accessible to a rapidly growing population.

Yet another application is our ability to make plants resistant to infection or more resistant to drought or cold.

Helping to accelerate the impact of CRISPR, the USDA recently announced that genetically engineered crops will not be regulated—providing an opening for entrepreneurs to capitalize on the opportunities for optimization CRISPR enables.

CRISPR applications in agriculture are an opportunity to help a billion people and become a billionaire in the process.

Protecting crops against volatile environments, combating crop diseases and increasing nutrient values, CRISPR is a promising tool to help feed the world’s rising population.

The Alt-Protein/Lab-Grown Meat Revolution
Something like a third of the Earth’s arable land is used for raising livestock—a massive amount of land—and global demand for meat is predicted to double in the coming decade.

Today, we must grow an entire cow—all bones, skin, and internals included—to produce a steak.

Imagine if we could instead start with a single muscle stem cell and only grow the steak, without needing the rest of the cow? Think of it as cellular agriculture.

Imagine returning millions, perhaps billions, of acres of grazing land back to the wilderness? This is the promise of lab-grown meats.

Lab-grown meat can also be engineered (using technology like CRISPR) to be packed with nutrients and be the healthiest, most delicious protein possible.

We’re watching this technology develop in real time. Several startups across the globe are already working to bring artificial meats to the food industry.

JUST, Inc. (previously Hampton Creek) run by my friend Josh Tetrick, has been on a mission to build a food system where everyone can get and afford delicious, nutritious food. They started by exploring 300,000+ species of plants all around the world to see how they can make food better and now are investing heavily in stem-cell-grown meats.

Backed by Richard Branson and Bill Gates, Memphis Meats is working on ways to produce real meat from animal cells, rather than whole animals. So far, they have produced beef, chicken, and duck using cultured cells from living animals.

As with vertical farming, transitioning production of our majority protein source to a carefully cultivated environment allows for agriculture to optimize inputs (water, soil, energy, land footprint), nutrients and, importantly, taste.

Farmer 3.0
Vertical farming and cellular agriculture are reinventing how we think about our food supply chain and what food we produce.

The next question to answer is who will be producing the food?

Let’s look back at how farming evolved through history.

Farmers 0.0 (Neolithic Revolution, around 9000 BCE): The hunter-gatherer to agriculture transition gains momentum, and humans cultivated the ability to domesticate plants for food production.

Farmers 1.0 (until around the 19th century): Farmers spent all day in the field performing backbreaking labor, and agriculture accounted for most jobs.

Farmers 2.0 (mid-20th century, Green Revolution): From the invention of the first farm tractor in 1812 through today, transformative mechanical and biochemical technologies (tractors, synthetic fertilizer) boosted yields and made the job of farming easier, driving the share of US jobs in farming down to less than two percent today.

Farmers 3.0: In the near future, farmers will leverage exponential technologies (e.g., AI, networks, sensors, robotics, drones), CRISPR and genetic engineering, and new business models to solve the world’s greatest food challenges and efficiently feed the eight-billion-plus people on Earth.

An important driver of the Farmer 3.0 evolution is the delocalization of agriculture driven by vertical and urban farms. Vertical farms and urban agriculture are empowering a new breed of agriculture entrepreneurs.

Let’s take a look at an innovative incubator in Brooklyn, New York called Square Roots.

Ten farm-in-a-shipping-container units in a Brooklyn parking lot make up the first Square Roots campus. Each 8-foot by 8.5-foot by 20-foot shipping container grows the equivalent of 2 acres of produce and can yield more than 50 pounds of produce each week.
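A back-of-the-envelope check shows what that land-use claim implies. Only the container's floor dimensions (given above) and the standard square-feet-per-acre conversion are used; the "2 acres" equivalence is the article's own figure.

```python
# Land-use ratio implied by the Square Roots container claim.
SQFT_PER_ACRE = 43_560                       # standard conversion
container_footprint_sqft = 8 * 20            # 8 ft x 20 ft floor
field_equivalent_sqft = 2 * SQFT_PER_ACRE    # the article's claimed equivalent

ratio = field_equivalent_sqft / container_footprint_sqft
print(f"one container stands in for roughly {ratio:.0f}x its own footprint")
```

If the claim holds, a single container substitutes for well over 500 times its own ground footprint, which is the whole economic argument for stacking farms vertically in cities.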

For 13 months, one cohort of next-generation food entrepreneurs takes part in a curriculum with foundations in farming, business, community and leadership.

The urban farming incubator raised a $5.4 million seed funding round in August 2017.

Training a new breed of entrepreneurs to apply exponential technology to growing food is essential to the future of farming.

One of our massive transformative purposes at the Abundance Group is to empower entrepreneurs to generate extraordinary wealth while creating a world of abundance. Vertical farms and cellular agriculture are key elements enabling the next generation of food and agriculture entrepreneurs.

Conclusion
Technology is driving food abundance.

We’re already seeing food become demonetized, as the graph below shows.

From 1960 to 2014, the share of income spent on food in the US fell from 19 percent to under 10 percent of total disposable income, a dramatic decrease from the roughly 40 percent of household income spent on food in 1900.

The dropping percent of per-capita disposable income spent on food. Source: USDA, Economic Research Service, Food Expenditure Series
Ultimately, technology has enabled a massive variety of food at a significantly reduced cost and with fewer resources used for production.

We’re increasingly going to optimize and fortify the food supply chain to achieve more reliable, predictable, and nutritious ways to obtain basic sustenance.

And that means a world with abundant, nutritious, and inexpensive food for every man, woman, and child.

What an extraordinary time to be alive.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital.

Abundance-Digital is my ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Nejron Photo / Shutterstock.com


#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots that were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would allow for simple commands like “Yes” or “No,” replacing button presses with a limited ability to recognize a few spoken words. Using them was often a far more frustrating experience than attempting to use a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”

[Video: Google Duplex scheduling a hair salon appointment]

[Video: Google Duplex calling a restaurant]

Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. Soundhound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.

Yet Google, who surely remembers from Glass the risk of premature debuts for technology, especially the kind that ask you to rethink how you interact with or trust in software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and their assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to Techcrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots

#432262 How We Can ‘Robot-Proof’ Education ...

Like millions of other individuals in the workforce, you’re probably wondering if you will one day be replaced by a machine. If you’re a student, you’re probably wondering if your chosen profession will even exist by the time you’ve graduated. From driving to legal research, there isn’t much that technology hasn’t already automated (or begun to automate). Many of us will need to adapt to this disruption in the workforce.

But it’s not enough for students and workers to adapt, become lifelong learners, and re-skill themselves. We also need to see innovation and initiative at an institutional and governmental level. According to research by The Economist, almost half of all jobs could be automated by computers within the next two decades, and no government in the world is prepared for it.

While many see the current trend in automation as a terrifying threat, others see it as an opportunity. In Robot-Proof: Higher Education in the Age of Artificial Intelligence, Northeastern University president Joseph Aoun proposes educating students in a way that will allow them to do the things that machines can’t. He calls for a new paradigm that teaches young minds “to invent, to create, and to discover,” meeting the needs of our world that robots simply can’t fill. The result is a much-needed framework for “robot-proofing” education.

Literacies and Core Cognitive Capacities of the Future
Aoun lays out a framework for a new discipline, humanics, which identifies the capacities and literacies essential to emerging education systems. At its core, the framework emphasizes our uniquely human abilities and strengths.

The three key literacies include data literacy (being able to manage and analyze big data), technological literacy (being able to understand exponential technologies and conduct computational thinking), and human literacy (being able to communicate and evaluate social, ethical, and existential impact).

Beyond the literacies, at the heart of Aoun’s framework are four cognitive capacities that are crucial to develop in our students if they are to be resistant to automation: critical thinking, systems thinking, entrepreneurship, and cultural agility.

“These capacities are mindsets rather than bodies of knowledge—mental architecture rather than mental furniture,” he writes. “Going forward, people will still need to know specific bodies of knowledge to be effective in the workplace, but that alone will not be enough when intelligent machines are doing much of the heavy lifting of information. To succeed, tomorrow’s employees will have to demonstrate a higher order of thought.”

Like many other experts in education, Joseph Aoun emphasizes the importance of critical thinking. This matters not just for taking a skeptical approach to information, but also for logically breaking a claim or problem down into multiple layers of analysis. We spend so much time teaching students how to answer questions that we often neglect to teach them how to ask questions. Asking questions—and asking good ones—is a foundation of critical thinking. Before you can solve a problem, you must be able to critically analyze and question what is causing it. This is why critical thinking and problem solving are coupled together.

The second capacity, systems thinking, involves being able to think holistically about a problem. The most creative problem-solvers and thinkers are able to take a multidisciplinary perspective and connect the dots between many different fields. According to Aoun, it “involves seeing across areas that machines might be able to comprehend individually but that they cannot analyze in an integrated way, as a whole.” It represents the absolute opposite of how most traditional curricula are structured, with their emphasis on isolated subjects and content knowledge.

Among the most difficult-to-automate tasks or professions is entrepreneurship.

In fact, some have gone so far as to claim that in the future, everyone will be an entrepreneur. Yet traditionally, initiative has been something students show in spite of or in addition to their schoolwork. For most students, developing a sense of initiative and entrepreneurial skills has often been part of their extracurricular activities. It needs to be at the core of our curricula, not a supplement to it. At its core, teaching entrepreneurship is about teaching our youth to solve complex problems with resilience, to become global leaders, and to solve grand challenges facing our species.

Finally, with an increasingly globalized world, there is a need for more workers with cultural agility, the ability to work and collaborate across different cultural contexts and norms.

One of the major trends today is the rise of the contingent workforce. We are seeing an increasing percentage of full-time employees working via the cloud. Multinational corporations have teams of employees collaborating at different offices across the planet. Collaboration across online networks requires a skillset of its own. As education expert Tony Wagner points out, within these digital contexts, leadership is no longer about commanding with top-down authority, but rather about leading by influence.

An Emphasis on Creativity
The framework also puts an emphasis on experiential or project-based learning, wherein the heart of the student experience is not lectures or exams but solving real-life problems and learning by doing, creating, and executing. Unsurprisingly, humans continue to outdo machines when it comes to innovating and pushing intellectual, imaginative, and creative boundaries, making jobs involving these skills the hardest to automate.

In fact, technological trends are giving rise to what many thought leaders refer to as the imagination economy. This is defined as “an economy where intuitive and creative thinking create economic value, after logical and rational thinking have been outsourced to other economies.” Consequently, we need to develop our students’ creative abilities to ensure their success against machines.

In its simplest form, creativity represents the ability to imagine radical ideas and then go about executing them in reality.

In many ways, we are already living in our creative imaginations. Consider this: every invention or human construct—whether it be the spaceship, an architectural wonder, or a device like an iPhone—once existed as a mere idea, imagined in someone’s mind. The world we have designed and built around us is an extension of our imaginations and is only possible because of our creativity. Creativity has played a powerful role in human progress—now imagine what the outcomes would be if we tapped into every young mind’s creative potential.

The Need for a Radical Overhaul
What is clear from the recommendations of Aoun and many other leading thinkers in this space is that an effective 21st-century education system is radically different from the traditional systems we currently have in place. There is a dramatic contrast between these future-oriented frameworks and our traditional, industrial-era, cookie-cutter education systems.

It’s time for a change, and incremental changes or subtle improvements are no longer enough. What we need to see are more moonshots and disruption in the education sector. In a world of exponential growth and accelerating change, it is never too soon for a much-needed dramatic overhaul.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots