#434837 In Defense of Black Box AI

Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern and many are trying to fix it, but is it really always necessary to know what’s going on inside these “black boxes”?

In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.

Edd Gent: What’s your experience with black box algorithms?

Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to academia about six years ago and part of what I wanted to do in making this career change was to refresh and revitalize my computer science side.

I realized that computer science had changed completely. It used to be about algorithms and making codes run fast, but now it’s about data and artificial intelligence. There are the interpretable methods like random forest algorithms, where we can tell how the machine is making its decisions. And then there are the black box methods, like convolutional neural networks.

Once in a while we can find some information about their inner workings, but most of the time we have to accept their answers and kind of probe around the edges to figure out the space in which we can use them and how reliable and accurate they are.

EG: What made you feel like you had to mount a defense of these black box algorithms?

EH: When I started talking with my colleagues, I found that the black box nature of many of these algorithms was a real problem for them. I could understand that because we’re scientists, we always want to know why and how.

It got me thinking as a bit of a contrarian, “Are black boxes all bad? Must we reject them?” Surely not, because human thought processes are fairly black box. We often rely on human thought processes that the thinker can’t necessarily explain.

It’s looking like we’re going to be stuck with these methods for a while, because they’re really helpful. They do amazing things. And so there’s a very pragmatic realization that these are the best methods we’ve got for some really important problems, and right now we’re not seeing interpretable alternatives. We’re going to have to use them, so we’d better figure out how.

EG: In what situations do you think we should be using black box algorithms?

EH: I came up with three rules. The simplest rule is: when the cost of a bad decision is small and the value of a good decision is high, it’s worth it. The example I gave in the paper is targeted advertising. If you send an ad no one wants, it doesn’t cost a lot. If you’re the receiver, it doesn’t cost much to get rid of it.

There are cases where the cost is high, and that’s when we choose the black box if it’s the best option for the job. Things get a little trickier here because we have to ask, “What are the costs of bad decisions, and do we really have them fully characterized?” We also have to be very careful, knowing that our systems may have biases, they may have limitations in where you can apply them, and they may be breakable.

But at the same time, there are certainly domains where we’re going to test these systems so extensively that we know their performance in virtually every situation. And if their performance is better than the other methods, we need to do it. Self-driving vehicles are a significant example—it’s almost certain they’re going to have to use black box methods, and that they’re going to end up being better drivers than humans.

The third rule is the more fun one for me as a scientist, and that’s the case where the black box really enlightens us as to a new way to look at something. We have trained a black box to recognize the fracture energy of breaking a piece of metal from a picture of the broken surface. It did a really good job, even though humans can’t do this, and we don’t know why.

What the computer seems to be seeing is noise. There’s a signal in that noise, and finding it is very difficult, but if we do we may find something significant to the fracture process, and that would be an awesome scientific discovery.

EG: Do you think there’s been too much emphasis on interpretability?

EH: I think the interpretability problem is a fundamental, fascinating computer science grand challenge and there are significant issues where we need to have an interpretable model. But how I would frame it is not that there’s too much emphasis on interpretability, but rather that there’s too much dismissiveness of uninterpretable models.

I think that some of the current social and political issues surrounding some very bad black box outcomes have convinced people that all machine learning and AI should be interpretable because that will somehow solve those problems.

Asking humans to explain their rationale has not eliminated bias, or stereotyping, or bad decision-making in humans. Relying too much on interpretability perhaps puts the responsibility in the wrong place for getting better results. I can make a better black box without knowing exactly in what way the first one was bad.

EG: Looking further into the future, do you think there will be situations where humans will have to rely on black box algorithms to solve problems we can’t get our heads around?

EH: I do think so, and it’s not as much of a stretch as we think it is. For example, humans don’t design the circuit maps of computer chips anymore. We haven’t for years. It’s not a black box algorithm that designs those circuits, but we’ve long since given up trying to understand a particular computer chip’s design.

With the billions of circuits in every computer chip, the human mind can’t encompass it, either in scope or just the pure time that it would take to trace every circuit. There are going to be cases where we want a system so complex that only the patience of computers, and their ability to work in very high-dimensional spaces, will be able to handle it.

So we can continue to argue about interpretability, but we need to acknowledge that we’re going to need to use black boxes. And this is our opportunity to do our due diligence to understand how to use them responsibly, ethically, and with benefits rather than harm. And that’s going to be a social conversation as well as a scientific one.

*Responses have been edited for length and style

Image Credit: Chingraph / Shutterstock.com

Posted in Human Robots

#434818 Watch These Robots Do Tasks You Thought ...

Robots have been masters of manufacturing at speed and precision for decades, but give them a seemingly simple task like stacking shelves, and they quickly get stuck. That’s changing, though, as engineers build systems that can take on the deceptively tricky tasks most humans can do with their eyes closed.

Boston Dynamics is famous for dramatic reveals of robots performing mind-blowing feats that also leave you scratching your head as to what the market is—think the bipedal Atlas doing backflips or Spot the galloping robot dog.

Last week, the company released a video of a robot called Handle that looks like an ostrich on wheels carrying out the seemingly mundane task of stacking boxes in a warehouse.

It might seem like a step backward, but this is exactly the kind of practical task robots have long struggled with. While the speed and precision of industrial robots has seen them take over many functions in modern factories, they’re generally limited to highly prescribed tasks carried out in meticulously controlled environments.

That’s because despite their mechanical sophistication, most are still surprisingly dumb. They can carry out precision welding on a car or rapidly assemble electronics, but only by rigidly following a prescribed set of motions. Moving cardboard boxes around a warehouse might seem simple to a human, but it actually involves a variety of tasks machines still find pretty difficult—perceiving your surroundings, navigating, and interacting with objects in a dynamic environment.

But the release of this video suggests Boston Dynamics thinks these kinds of applications are close to prime time. Last week the company doubled down by announcing the acquisition of start-up Kinema Systems, which builds computer vision systems for robots working in warehouses.

It’s not the only company making strides in this area. On the same day the video went live, Google unveiled a robot arm called TossingBot that can pick random objects from a box and quickly toss them into another container beyond its reach, which could prove very useful for sorting items in a warehouse. The machine can train on new objects in just an hour or two, and can pick and toss up to 500 items an hour with better accuracy than any of the humans who tried the task.

And an apple-picking robot built by Abundant Robotics is currently on New Zealand farms navigating between rows of apple trees using LIDAR and computer vision to single out ripe apples before using a vacuum tube to suck them off the tree.

In most cases, advances in machine learning and computer vision brought about by the recent AI boom are the keys to these rapidly improving capabilities. Robots have historically had to be painstakingly programmed by humans to solve each new task, but deep learning is making it possible for them to quickly train themselves on a variety of perception, navigation, and dexterity tasks.

It’s not been simple, though, and the application of deep learning in robotics has lagged behind other areas. A major limitation is that the process typically requires huge amounts of training data. That’s fine when you’re dealing with image classification, but when that data needs to be generated by real-world robots it can make the approach impractical. Simulations offer the possibility to run this training faster than real time, but it’s proved difficult to translate policies learned in virtual environments into the real world.

Recent years have seen significant progress on these fronts, though, and the increasing integration of modern machine learning with robotics. In October, OpenAI imbued a robotic hand with human-level dexterity by training an algorithm in a simulation using reinforcement learning before transferring it to the real-world device. The key to ensuring the translation went smoothly was injecting random noise into the simulation to mimic some of the unpredictability of the real world.
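The “random noise” technique described above is generally known as domain randomization: every training episode samples slightly different simulation parameters, so the learned policy cannot overfit to any single simulated world. Here is a minimal, purely illustrative sketch of the idea; the parameter names, values, and noise range are hypothetical, not OpenAI’s actual setup:

```python
import random

# Hypothetical nominal simulation parameters; real systems randomize
# many more physical and visual properties (masses, friction, lighting).
NOMINAL = {"friction": 1.0, "object_mass": 0.2, "motor_gain": 1.0}

def randomized_params(noise=0.2):
    """Sample a fresh simulation configuration for one training episode."""
    return {name: value * random.uniform(1 - noise, 1 + noise)
            for name, value in NOMINAL.items()}

# Every episode sees a slightly different "world", so the policy must
# become robust to variation, including the variation of the real world.
episode_params = [randomized_params() for _ in range(3)]
```

The design intuition is that the real world looks like just one more sample from the randomized distribution, which is why a policy trained this way can transfer without ever seeing real data.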

And just a couple of weeks ago, MIT researchers demonstrated a new technique that let a robot arm learn to manipulate new objects with far less training data than is usually required. By getting the algorithm to focus on a few key points on the object necessary for picking it up, the system could learn to pick up a previously unseen object after seeing only a few dozen examples (rather than the hundreds or thousands typically required).

How quickly these innovations will trickle down to practical applications remains to be seen, but a number of startups as well as logistics behemoth Amazon are developing robots designed to flexibly pick and place the wide variety of items found in your average warehouse.

Whether the economics of using robots to replace humans at these kinds of menial tasks makes sense yet is still unclear. The collapse of collaborative robotics pioneer Rethink Robotics last year suggests there are still plenty of challenges.

But at the same time, the number of robotic warehouses is expected to leap from 4,000 today to 50,000 by 2025. It may not be long until robots are muscling in on tasks we’ve long assumed only humans could do.

Image Credit: Visual Generation / Shutterstock.com

#434786 AI Performed Like a Human on a Gestalt ...

Dr. Been Kim wants to rip open the black box of deep learning.

A senior researcher at Google Brain, Kim specializes in a sort of AI psychology. Like cognitive psychologists before her, she develops various ways to probe the alien minds of artificial neural networks (ANNs), digging into their gory details to better understand the models and their responses to inputs.

The more interpretable ANNs are, the reasoning goes, the easier it is to reveal potential flaws in their reasoning. And if we understand when or why our systems choke, we’ll know when not to use them—a foundation for building responsible AI.

There are already several ways to tap into ANN reasoning, but Kim’s inspiration for unraveling the AI black box came from an entirely different field: cognitive psychology. The field aims to discover fundamental rules of how the human mind—essentially also a tantalizing black box—operates, Kim wrote with her colleagues.

In a new paper uploaded to the pre-publication server arXiv, the team described a way to essentially perform a human cognitive test on ANNs. The test probes how we automatically complete gaps in what we see so that we perceive entire objects—for example, perceiving a circle from a bunch of loose dots arranged along a clock face. Psychologists dub this the “law of completion,” a highly influential idea that led to explanations of how our minds generalize data into concepts.

Because deep neural networks in machine vision loosely mimic the structure and connections of the visual cortex, the authors naturally asked: do ANNs also exhibit the law of completion? And what does that tell us about how an AI thinks?

Enter the Germans
The law of completion is part of a series of ideas from Gestalt psychology. Back in the 1920s, long before the advent of modern neuroscience, a group of German experimental psychologists asked: in this chaotic, flashy, unpredictable world, how do we piece together input in a way that leads to meaningful perceptions?

The result is a group of principles known together as the Gestalt effect: that the mind self-organizes to form a global whole. In the more famous words of Gestalt psychologist Kurt Koffka, our perception forms a whole that’s “something else than the sum of its parts.” Not greater than; just different.

Although the theory has its critics, subsequent studies in humans and animals suggest that the law of completion happens on both the cognitive and neuroanatomical level.

Take a look at the drawing below. You immediately “see” a shape that’s actually the negative: a triangle or a square (A and B). Or you further perceive a 3D ball (C), or a snake-like squiggle (D). Your mind fills in blank spots, so that the final perception is more than just the black shapes you’re explicitly given.

Image Credit: Wikimedia Commons contributors, the free media repository.
Neuroscientists now think that the effect comes from how our visual system processes information. Arranged in multiple layers and columns, lower-level neurons—those first to wrangle the data—tend to extract simpler features such as lines or angles. In Gestalt speak, they “see” the parts.

Then, layer by layer, perception becomes more abstract, until higher levels of the visual system directly interpret faces or objects—or things that don’t really exist. That is, the “whole” emerges.

The Experiment Setup
Inspired by these classical experiments, Kim and team developed a protocol to test the Gestalt effect on two feed-forward ANNs: one simple, the other the far more complex “Inception V3,” widely used in the machine vision community.

The main idea is similar to the triangle drawings above. First, the team generated three datasets: one shows complete, ordinary triangles. The second, the “illusory” set, shows triangles with the edges removed but the corners intact. Thanks to the Gestalt effect, to us humans these generally still look like triangles. The third set also shows only incomplete triangle corners, but here the corners are randomly rotated so that we can no longer imagine a line connecting them—hence, no more triangle.

To generate a dataset large enough to tease out small effects, the authors changed the background color, image rotation, and other aspects of the dataset. In all, they produced nearly 1,000 images to test their ANNs on.

“At a high level, we compare an ANN’s activation similarities between the three sets of stimuli,” the authors explained. The process has two steps: first, train the AI on complete triangles. Second, test it on all three datasets. If the network’s response to the illusory set is more similar to its response to the complete triangles than to the randomly rotated set, that suggests a sort of Gestalt closure effect in the network.
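The comparison at the heart of that second step can be pictured as measuring similarity between activation vectors. The toy sketch below uses cosine similarity and made-up activation values purely to illustrate the shape of the test; the paper’s actual similarity measure and network internals are more involved:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two activation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm

# Hypothetical layer activations for one stimulus from each dataset.
complete = [0.9, 0.1, 0.8, 0.3]   # ordinary triangle
illusory = [0.8, 0.2, 0.7, 0.3]   # corners only, triangle still implied
rotated  = [0.1, 0.9, 0.2, 0.8]   # corners randomly rotated, no triangle

# A closure-like effect: the illusory stimuli activate the network more
# like complete triangles than the rotated corners do.
closure_effect = (cosine_similarity(complete, illusory) >
                  cosine_similarity(complete, rotated))
```

With these illustrative numbers, `closure_effect` comes out true, mirroring the qualitative result the team reports for networks trained on natural images.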

Machine Gestalt
Right off the bat, the team got their answer: yes, ANNs do seem to exhibit the law of closure.

When trained on natural images, the networks classified the illusory set as triangles more readily than did networks with randomized connection weights or networks trained on white noise.

When the team dug into the “why,” things got more interesting. The ability to complete an image correlated with the network’s ability to generalize.

Humans subconsciously do this constantly: anything with a handle made out of ceramic, regardless of shape, could easily be a mug. ANNs still struggle to grasp common features—clues that immediately tell us “hey, that’s a mug!” But when they do, it sometimes allows the networks to better generalize.

“What we observe here is that a network that is able to generalize exhibits…more of the closure effect [emphasis theirs], hinting that the closure effect reflects something beyond simply learning features,” the team wrote.

What’s more, remarkably similar to the visual cortex, “higher” levels of the ANNs showed more of the closure effect than lower layers, and—perhaps unsurprisingly—the more layers a network had, the more it exhibited the closure effect.

As the networks learned, their ability to map out objects from fragments also improved. When the team messed around with the brightness and contrast of the images, the AI still learned to see the forest for the trees.

“Our findings suggest that neural networks trained with natural images do exhibit closure,” the team concluded.

AI Psychology
That’s not to say that ANNs recapitulate the human brain. As Google’s Deep Dream, an effort to coax AIs into spilling what they’re perceiving, clearly demonstrates, machine vision sees some truly weird stuff.

On the other hand, because they’re modeled after the human visual cortex, perhaps it’s not all that surprising that these networks also exhibit higher-level properties inherent to how we process information.

But to Kim and her colleagues, that’s exactly the point.

“The field of psychology has developed useful tools and insights to study human brains—tools that we may be able to borrow to analyze artificial neural networks,” they wrote.

By tweaking these tools to better analyze machine minds, the authors were able to gain insight on how similarly or differently they see the world from us. And that’s the crux: the point isn’t to say that ANNs perceive the world sort of, kind of, maybe similar to humans. It’s to tap into a wealth of cognitive psychology tools, established over decades using human minds, to probe that of ANNs.

“The work here is just one step along a much longer path,” the authors conclude.

“Understanding where humans and neural networks differ will be helpful for research on interpretability by enlightening the fundamental differences between the two interesting species.”

Image Credit: Popova Alena / Shutterstock.com

#434772 Traditional Higher Education Is Losing ...

Should you go to graduate school? If so, why? If not, what are your alternatives? Millions of young adults across the globe—and their parents and mentors—find themselves asking these questions every year.

Earlier this month, I explored how exponential technologies are rising to meet the needs of the rapidly changing workforce.

In this blog, I’ll dive into a highly effective way to build the business acumen and skills needed to make the most significant impact in these exponential times.

To start, let’s dive into the value of graduate school versus apprenticeship—especially during this time of extraordinarily rapid growth, and the micro-diversification of careers.

The True Value of an MBA
Not all graduate schools are created equal.

For complex technical trades like medicine, engineering, and law, formal graduate-level training provides a critical foundation for safe, ethical practice (until these trades are fully augmented by artificial intelligence and automation…).

For the purposes of today’s blog, let’s focus on the value of a Master in Business Administration (MBA) degree, compared to acquiring your business acumen through various forms of apprenticeship.

The Waning of Business Degrees
Ironically, business schools are facing a tough business problem. The rapid rate of technological change, a booming job market, and the digitization of education are chipping away at the traditional graduate-level business program.

The data speaks for itself.

The Decline of Graduate School Admissions
Enrollment in two-year, full-time MBA programs in the US fell by more than one-third from 2010 to 2016.

While in previous years, top business schools (e.g. Stanford, Harvard, and Wharton) were safe from the decrease in applications, this year, they also felt the waning interest in MBA programs.

Harvard Business School: 4.5 percent decrease in applications, the school’s biggest drop since 2005.
Wharton: 6.7 percent decrease in applications.
Stanford Graduate School: 4.6 percent decrease in applications.

Another signal of change began unfolding over the past week. You may have read news headlines about an emerging college admissions scam, which implicates highly selective US universities, sports coaches, parents, and students in a conspiracy to game the undergraduate admissions process.

Already, students are filing multibillion-dollar civil lawsuits arguing that the scheme has devalued their degrees or denied them a fair admissions opportunity.

MBA Graduates in the Workforce
To meet today’s business needs, startups and massive companies alike are increasingly hiring technologists, developers, and engineers in place of the MBA graduates they may have preferentially hired in the past.

While 85 percent of US employers expect to hire MBA graduates this year (a decrease from 91 percent in 2017), 52 percent of employers worldwide expect to hire graduates with a master’s in data analytics (an increase from 35 percent last year).

We’re also seeing the waning of MBA degree holders at the CEO level.

For decades, an MBA was the hallmark of upward mobility towards the C-suite of top companies.

But as exponential technologies permeate not only products but every part of the supply chain—from manufacturing and shipping to sales, marketing and customer service—that trend is changing by necessity.

Looking at the Harvard Business Review’s Top 100 CEOs in 2018 list, more CEOs on the list held engineering degrees than MBAs (34 held engineering degrees, while 32 held MBAs).

There’s much more to leading innovative companies than an advanced business degree.

How Are Schools Responding?
With disruption to the advanced business education system already here, some business schools are applying notes from their own innovation classes to brace for change.

Over the past half-decade, we’ve seen schools with smaller MBA programs shut their doors in favor of advanced degrees with more specialization. This directly responds to market demand for skills in data science, supply chain, and manufacturing.

Some degrees resemble the precise skills training of technical trades. Others are very much in line with the apprenticeship models we’ll explore next.

Regardless, this new specialization strategy is working and attracting more new students. Over the past decade (2006 to 2016), enrollment in specialized graduate business programs doubled.

Higher education is also seeing a preference shift toward for-profit trade schools, like coding boot camps. This shift is one of several forces pushing universities to adopt skill-specific advanced degrees.

But some schools are slow to adapt, raising the question: how and when will these legacy programs be disrupted? A survey of over 170 business school deans around the world showed that many programs are operating at a loss.

But if these schools are world-class business institutions, as advertised, why do they keep the doors open even while they lose money? The surveyed deans revealed an important insight: they keep the degree program open because of the program’s prestige.

Why Go to Business School?
Shorthand Credibility, Cognitive Biases, and Prestige
Regardless of what knowledge a person takes away from graduate school, attending one of the world’s most rigorous and elite programs gives grads external validation.

With over 55 percent of MBA applicants applying to just 6 percent of graduate business schools, we have a clear cognitive bias toward the perceived elite status of certain universities.

To the outside world, thanks to the power of cognitive biases, an advanced degree is credibility shorthand for your capabilities.

Simply passing through a top school’s filtration system signals that you have a certain level of ability and merit.

And startup success statistics tend to back up that perceived enhanced capability. Let’s take, for example, universities with the most startup unicorn founders (see the figure below).

When you consider the 320+ unicorn startups around the world today, these numbers become even more impressive. Stanford’s 18 unicorn companies account for over 5 percent of global unicorns, and Harvard is responsible for producing just under 5 percent.

Combined, just these two universities (out of over 5,000 in the US, and thousands more around the world) account for 1 in 10 of the billion-dollar private companies in the world.

By the numbers, the prestigious reputation of these elite business programs has a firm basis in current innovation success.

While prestige may be inherent to the degree earned by graduates from these business programs, the credibility boost from holding one of these degrees is not a guaranteed path to success in the business world.

For example, you might expect that the Harvard School of Business or Stanford Graduate School of Business would come out on top when tallying up the alma maters of Fortune 500 CEOs.

It turns out that the University of Wisconsin-Madison leads the business school pack with 14 CEOs to Harvard’s 12. Beyond prestige, the success these elite business programs see translates directly into cultivating unmatched networks and relationships.

Relationships
Graduate schools—particularly at the upper echelon—are excellent at attracting sharp students.

At an elite business school, if you meet just five to ten people with extraordinary skill sets, personalities, ideas, or networks, then you have returned your $200,000 education investment.

It’s no coincidence that some 40 percent of Silicon Valley venture capitalists are alumni of either Harvard or Stanford.

From future investors to advisors, friends, and potential business partners, relationships are critical to an entrepreneur’s success.

Apprenticeships
As we saw above, graduate business degree programs are melting away in the current wave of exponential change.

With US student debt now surpassing $1.5 trillion, there must be a more impactful alternative to graduate school for those starting their careers.

When I think about the most important skills I use today as an entrepreneur, writer, and strategic thinker, they didn’t come from my decade of graduate school at Harvard or MIT… they came from my experiences building real technologies and companies, and working with mentors.

Apprenticeship comes in a variety of forms; here, I’ll cover three top-of-mind approaches:

Real-world business acumen via startup accelerators
A direct apprenticeship model
The 6 D’s of mentorship

Startup Accelerators and Business Practicum
Let’s contrast the shrinking interest in MBA programs with applications to a relatively new model of business education: startup accelerators.

Startup accelerators are short-term (typically three to six months), cohort-based programs focusing on providing startup founders with the resources (capital, mentorship, relationships, and education) needed to refine their entrepreneurial acumen.

While graduate business programs have been condensing, startup accelerators are alive, well, and expanding rapidly.

In the 10 years from 2005 (when Paul Graham founded Y Combinator) through 2015, the number of startup accelerators in the US increased more than tenfold.

The increase in startup accelerator activity hints at a larger trend: our best and brightest business minds are opting to invest their time and efforts in obtaining hands-on experience, creating tangible value for themselves and others, rather than diving into the theory often taught in business school classrooms.

The “Strike Force” Model
The Strike Force is my elite team of young entrepreneurs who work directly with me across all of my companies, travel by my side, sit in on every meeting with me, and help build businesses that change the world.

Previous Strike Force members have gone on to launch successful companies, including Bold Capital Partners, my $250 million venture capital firm.

Strike Force is an apprenticeship for the next generation of exponential entrepreneurs.

To paraphrase my good friend Tony Robbins: If you want to short-circuit the video game, find someone who’s been there and done that and is now doing something you want to one day do.

Every year, over 500,000 apprentices in the US follow this precise template. These apprentices are learning a craft they wish to master, under the mentorship of experts (skilled metal workers, bricklayers, medical technicians, electricians, and more) who have already achieved the desired result.

What if we more readily applied this model to young adults with aspirations of creating massive value through the vehicles of entrepreneurship and innovation?

For the established entrepreneur: How can you bring young entrepreneurs into your organization to create more value for your company, while also passing on your ethos and lessons learned to the next generation?

For the young, driven millennial: How can you find your mentor and convince him or her to take you on as an apprentice? What value can you create for this person in exchange for their guidance and investment in your professional development?

The 6 D’s of Mentorship
In my last blog on education, I shared how mobile device and internet penetration will transform adult literacy and basic education. Mobile phones and connectivity already create extraordinary value for entrepreneurs and young professionals looking to take their business acumen and skill set to the next level.

For all of human history up until the last decade or so, if you wanted to learn from the best and brightest in business, leadership, or strategy, you either had to hunt down a dated book they had written at the local library or bookstore, or be lucky enough to meet that person for a live conversation.

Now you can access the mentorship of just about any thought leader on the planet, at any time, for free.

Thanks to the power of the internet, mentorship has digitized, demonetized, dematerialized, and democratized.

What do you want to learn about?

Investing? Leadership? Technology? Marketing? Project management?

You can access a near-infinite stream of cutting-edge tools, tactics, and lessons from thousands of top performers from nearly every field—instantaneously, and for free.

For example, every one of Warren Buffett’s letters to his Berkshire Hathaway investors over the past 40 years is available for free on a device that fits in your pocket.

The rise of audio—particularly podcasts and audiobooks—is another underestimated driving force away from traditional graduate business programs and toward apprenticeships.

Over 28 million podcast episodes are available for free. Once you identify the strong signals in the noise, you’re still left with thousands of hours of long-form podcast conversation from which to learn valuable lessons.

Whenever and wherever you want, you can learn from the world’s best. In the future, mentorship and apprenticeship will only become more personalized. Imagine accessing a high-fidelity, AI-powered avatar of Bill Gates, Richard Branson, or Arthur C. Clarke (one of my early mentors) to help guide you through your career.

Virtual mentorship and coaching are powerful education forces that are here to stay.

Bringing It All Together
The education system is rapidly changing. Traditional master’s programs for business are ebbing away in the tides of exponential technologies. Apprenticeship models are reemerging as an effective way to train tomorrow’s leaders.

In a future blog, I’ll revisit the concept of apprenticeships and other effective business school alternatives.

If you are a young, ambitious entrepreneur (or the parent of one), remember that you live in the most abundant time ever in human history to refine your craft.

Right now, you have access to world-class mentorship and cutting-edge best-practices—literally in the palm of your hand. What will you do with this extraordinary power?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: fongbeerredhot / Shutterstock.com


#434767 7 Non-Obvious Trends Shaping the Future

When you think of trends that might be shaping the future, the first things that come to mind probably have something to do with technology: Robots taking over jobs. Artificial intelligence advancing and proliferating. 5G making everything faster, connected cities making everything easier, data making everything more targeted.

Technology is undoubtedly changing the way we live, and will continue to do so—probably at an accelerating rate—in the near and far future. But there are other trends impacting the course of our lives and societies, too. They’re less obvious, and some have nothing to do with technology.

For the past nine years, entrepreneur and author Rohit Bhargava has read hundreds of articles across all types of publications, tagged and categorized them by topic, funneled frequent topics into broader trends, analyzed those trends, narrowed them down to the most significant ones, and published a book about them as part of his ‘Non-Obvious’ series. He defines a trend as “a unique curated observation of the accelerating present.”

In an encore session at South by Southwest last week (scheduled after hundreds of people who wanted to attend his initial talk couldn't fit into the room), Bhargava shared details of his creative process, why it's hard to think non-obviously, the most important trends of this year, and how to make sure they don't get the best of you.

Thinking Differently
“Non-obvious thinking is seeing the world in a way other people don’t see it,” Bhargava said. “The secret is curating your ideas.” Curation collects ideas and presents them in a meaningful way; museum curators, for example, decide which works of art to include in an exhibit and how to present them.

For his own curation process, Bhargava uses what he calls the haystack method. Rather than searching for a needle in a haystack, he gathers ‘hay’ (ideas and stories) then uses them to locate and define a ‘needle’ (a trend). “If you spend enough time gathering information, you can put the needle into the middle of the haystack,” he said.

A big part of gathering information is looking for it in places you wouldn’t normally think to look. In his case, that means that on top of reading what everyone else reads—the New York Times, the Washington Post, the Economist—he also buys publications like Modern Farmer, Teen Vogue, and Ink magazine. “It’s like stepping into someone else’s world who’s not like me,” he said. “That’s impossible to do online because everything is personalized.”

Three common barriers make non-obvious thinking hard.

The first is unquestioned assumptions, which are facts or habits we think will never change. When James Dyson first invented the bagless vacuum, he wanted to sell the license to it, but no one believed people would want to spend more money up front on a vacuum in exchange for never having to buy bags again. The success of Dyson's business today shows how mistaken that assumption—that people wouldn't adapt to a product that was, in the end, far more sensible—turned out to be. "Making the wrong basic assumptions can doom you," Bhargava said.

The second barrier to thinking differently is constant disruption. “Everything is changing as industries blend together,” Bhargava said. “The speed of change makes everyone want everything, all the time, and people expect the impossible.” We’ve come to expect every alternative to be presented to us in every moment, but in many cases this doesn’t serve us well; we’re surrounded by noise and have trouble discerning what’s valuable and authentic.

This ties into the third barrier, which Bhargava calls the believability crisis. “Constant sensationalism makes people skeptical about everything,” he said. With the advent of fake news and technology like deepfakes, we’re in a post-truth, post-fact era, and are in a constant battle to discern what’s real from what’s not.

2019 Trends
Bhargava’s efforts to see past these barriers and curate information yielded 15 trends he believes are currently shaping the future. He shared seven of them, along with thoughts on how to stay ahead of the curve.

Retro Trust
We tend to trust things we have a history with. “People like nostalgic experiences,” Bhargava said. With tech moving as fast as it is, old things are quickly getting replaced by shinier, newer, often more complex things. But not everyone’s jumping on board—and some who’ve been on board are choosing to jump off in favor of what worked for them in the past.

“We’re turning back to vinyl records and film cameras, deliberately downgrading to phones that only text and call,” Bhargava said. In a period of too much change too fast, people are craving familiarity and dependability. To capitalize on that sentiment, entrepreneurs should seek out opportunities for collaboration—how can you build a product that’s new, but feels reliable and familiar?

Muddled Masculinity
Women are increasingly taking on leadership roles and advancing in the workplace; they now own more homes than men and graduate from college at higher rates. That's all great for us ladies—but not so great for men or, perhaps more generally, for the concept of masculinity.

“Female empowerment is causing confusion about what it means to be a man today,” Bhargava said. “Men don’t know what to do—should they say something? Would that make them an asshole? Should they keep quiet? Would that make them an asshole?”

By encouraging non-conformity, we can help ease the pressure of traditional gender roles and the divisions that come with them.

Innovation Envy
Innovation has become an overused word, to the point that it's applied to ideas and actions that aren't really innovative at all. "We innovate by looking at someone else and doing the same," Bhargava said. If an employee brings a radical idea to someone in a leadership role, in many companies the leadership will ask for a case study before implementing it—but if it's already been done, it's not innovative. "With most innovation what ends up happening is not spectacular failure, but irrelevance," Bhargava said.

He suggests that rather than being on the defensive, companies should play offense with innovation, and when it doesn’t work “fail as if no one’s watching” (often, no one will be).

Artificial Influence
Thanks to social media and other technologies, there are a growing number of fabricated things that, despite not being real, influence how we think. “15 percent of all Twitter accounts may be fake, and there are 60 million fake Facebook accounts,” Bhargava said. There are virtual influencers and even virtual performers.

“Don’t hide the artificial ingredients,” Bhargava advised. “Some people are going to pretend it’s all real. We have to be ethical.” The creators of fabrications meant to influence the way people think, or the products they buy, or the decisions they make, should make it crystal-clear that there aren’t living, breathing people behind the avatars.

Enterprise Empathy
Another reaction to the fast pace of change these days—and the fast pace of life, for that matter—is that empathy is regaining value and even becoming a driver of innovation. Companies are searching for ways to give people a sense of reassurance. The Tesco grocery brand in the UK has a “relaxed lane” for those who don’t want to feel rushed as they check out. Starbucks opened a “signing store” in Washington DC, and most of its regular customers have learned some sign language.

“Use empathy as a principle to help yourself stand out,” Bhargava said. Besides being a good business strategy, “made with empathy” will ideally promote, well, more empathy, a quality there’s often a shortage of.

Robot Renaissance
From automating factory jobs to flipping burgers to cleaning our floors, robots have firmly taken their place in our day-to-day lives—and they’re not going away anytime soon. “There are more situations with robots than ever before,” Bhargava said. “They’re exploring underwater. They’re concierges at hotels.”

The robot revolution feels intimidating. But Bhargava suggests embracing robots with more curiosity than concern. While they may replace some tasks we don’t want replaced, they’ll also be hugely helpful in multiple contexts, from elderly care to dangerous manual tasks.

Back-storytelling
Similar to retro trust and enterprise empathy, organizations have started to tell their brand’s story to gain customer loyalty. “Stories give us meaning, and meaning is what we need in order to be able to put the pieces together,” Bhargava said. “Stories give us a way of understanding the world.”

Finding the story behind your business, brand, or even yourself, and sharing it openly, can help you connect with people, be they customers, coworkers, or friends.

Tech’s Ripple Effects
While it may not overtly sound like it, most of the trends Bhargava identified for 2019 are tied to technology, and are in fact a sort of backlash against it. Tech has made us question who to trust, how to innovate, what’s real and what’s fake, how to make the best decisions, and even what it is that makes us human.

By being aware of these trends, sharing them, and having conversations about them, we’ll help shape the way tech continues to be built, and thus the way it impacts us down the road.

Image Credit: Rohit Bhargava by Brian Smale
