Tag Archives: buy

#432271 Your Shopping Experience Is on the Verge ...

Exponential technologies (AI, VR, 3D printing, and networks) are radically reshaping traditional retail.

E-commerce giants (Amazon, Walmart, Alibaba) are digitizing the retail industry, riding the exponential growth of computation.

Many brick-and-mortar stores have already gone bankrupt, or migrated their operations online.

Massive change is occurring in this arena.

For those “real-life stores” that survive, an evolution is taking place from a product-centric mentality to an experience-based business model by leveraging AI, VR/AR, and 3D printing.

Let’s dive in.

E-Commerce Trends
Last year, 3.8 billion people were connected online. By 2024, thanks to 5G and stratospheric and space-based satellites, we will grow to 8 billion people online, each with megabit to gigabit connection speeds.

These 4.2 billion new digital consumers will begin buying things online, a potential bonanza for the e-commerce world.

At the same time, entrepreneurs seeking to service these four-billion-plus new consumers can now skip the costly steps of procuring retail space and hiring sales clerks.

Today, thanks to global connectivity, contract production, and turnkey pack-and-ship logistics, an entrepreneur can go from an idea to building and scaling a multimillion-dollar business from anywhere in the world in record time.

And while e-commerce sales have been exploding (growing from $34 billion in Q1 2009 to $115 billion in Q3 2017), e-commerce only accounted for about 10 percent of total retail sales in 2017.

In 2016, global online sales totaled $1.8 trillion. Remarkably, this $1.8 trillion was spent by only 1.5 billion people — a mere 20 percent of Earth’s global population that year.

There’s plenty more room for digital disruption.

AI and the Retail Experience
For the business owner, AI will demonetize e-commerce operations with automated customer service, ultra-accurate supply chain modeling, marketing content generation, and advertising.

In the case of customer service, imagine an AI that is trained by every customer interaction, learns how to answer any consumer question perfectly, and offers feedback to product designers and company owners as a result.

Facebook’s handover protocol allows live customer service representatives and language-learning bots to work within the same Facebook Messenger conversation.

Taking it one step further, imagine an AI that is empathetic to a consumer’s frustration, that can take any amount of abuse and come back with a smile every time. As one example, meet Ava. “Ava is a virtual customer service agent, to bring a whole new level of personalization and brand experience to that customer experience on a day-to-day basis,” says Greg Cross, chief business officer of Ava’s creator, the New Zealand company Soul Machines.

Predictive modeling and machine learning are also optimizing product ordering and the supply chain process. For example, Skubana, a platform for online sellers, leverages data analytics to provide entrepreneurs constant product performance feedback and maintain optimal warehouse stock levels.
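To make the stock-level idea concrete, here is a minimal Python sketch of the classic reorder-point heuristic that platforms like Skubana automate at much larger scale. The sales figures, lead time, and service level below are hypothetical, and this illustrates the general technique rather than any vendor's actual algorithm.

```python
import statistics

def reorder_point(daily_sales, lead_time_days, service_z=1.65):
    """Reorder point = expected demand over the supplier lead time
    plus a safety stock sized to demand variability.
    A service_z of about 1.65 targets roughly a 95% service level."""
    mean_daily = statistics.mean(daily_sales)
    stdev_daily = statistics.stdev(daily_sales)
    lead_time_demand = mean_daily * lead_time_days
    safety_stock = service_z * stdev_daily * (lead_time_days ** 0.5)
    return lead_time_demand + safety_stock

# Hypothetical daily unit sales for one SKU over the last two weeks
recent_sales = [12, 9, 14, 11, 10, 15, 13, 12, 9, 16, 11, 10, 14, 12]
threshold = reorder_point(recent_sales, lead_time_days=7)
print(f"Reorder when on-hand stock falls below {threshold:.0f} units")
```

In practice, the machine learning enters by replacing the simple historical average with a demand forecast, but the decision logic of when to restock looks much like this.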

Blockchain is set to follow suit in the retail space. ShipChain and Ambrosus plan to introduce transparency and trust into shipping and production, further reducing costs for entrepreneurs and consumers.

Meanwhile, for consumers, personal shopping assistants are shifting the psychology of the standard shopping experience.

Amazon’s Alexa marks an important user interface moment in this regard.

Alexa is still in her infancy, handling voice search and vocal controls for smart homes. Already, Amazon’s Alexa users spend more on Amazon.com, on average, than standard Amazon Prime customers: roughly $1,700 versus $1,400.

As I’ve discussed in previous posts, the future combination of virtual reality shopping, coupled with a personalized, AI-enabled fashion advisor will make finding, selecting, and ordering products fast and painless for consumers.

But let’s take it one step further.

Imagine a future in which your personal AI shopper knows your desires better than you do. Possible? I think so. After all, our future AIs will follow us, watch us, and observe our interactions — including how long we glance at objects, our facial expressions, and much more.

In this future, shopping might be as easy as saying, “Buy me a new outfit for Saturday night’s dinner party,” followed by a surprise-and-delight moment in which the outfit that arrives is perfect.

In this future world of AI-enabled shopping, one of the most disruptive implications is that advertising is now dead.

In a world where an AI is buying my stuff, and I’m no longer in the decision loop, why would a big brand ever waste money on a Super Bowl advertisement?

The dematerialization, demonetization, and democratization of personalized shopping has only just begun.

The In-Store Experience: Experiential Retailing
In 2017, over 6,700 brick-and-mortar retail stores closed their doors, surpassing the previous record for store closures set in 2008 during the financial crisis. Yet business is still booming.

As shoppers seek the convenience of online shopping, brick-and-mortar stores are tapping into the power of the experience economy.

Rather than focusing on the practicality of the products they buy, consumers are seeking out the experience of going shopping.

The Internet of Things, artificial intelligence, and computation are exponentially improving the in-person consumer experience.

As AI dominates curated online shopping, AI and data analytics tools are also empowering real-life store owners to optimize staffing, marketing strategies, customer relationship management, and inventory logistics.

In the short term, retail store locations will serve as the next big user interface for production 3D printing (custom 3D-printed clothes at the Ministry of Supply), virtual and augmented reality (DIY skills clinics), and the Internet of Things (checkout-less shopping).

In the long term, we’ll see how our desire for enhanced productivity and seamless consumption balances with our preference for enjoyable real-life consumer experiences — all of which will be driven by exponential technologies.

One thing is certain: the everyday shopping experience is on the verge of a major transformation.

Implications
The convergence of exponential technologies has already revamped how and where we shop, how we use our time, and how much we pay.

Twenty years ago, Amazon showed us how the web could offer each of us the long tail of available reading material, and since then, the world of e-commerce has exploded.

And yet we still haven’t experienced the cost savings coming our way from drone delivery, the Internet of Things, tokenized ecosystems, the impact of truly powerful AI, or even the other major applications for 3D printing and AR/VR.

Perhaps nothing will be more transformed than today’s $20 trillion retail sector.

Hold on, stay tuned, and get your AI-enabled cryptocurrency ready.

Join Me
Abundance Digital Online Community: I’ve created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital.

Abundance Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com

Posted in Human Robots

#432236 Why Hasn’t AI Mastered Language ...

In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations. Consequently, these ventures haven’t achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advancements in natural language processing and sentiment analysis. AI still has difficulty processing requests in a single language, let alone handling the additional complications of translation. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted machines were able to beat the best human Go player. This happened faster than anyone anticipated because of the game’s very clear rules and limited set of moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.”

Google’s technology is now able to consider the entirety of a sentence, as opposed to merely translating individual words. Still, the glitches linger. I asked Dr. Jorge Majfud, Associate Professor of Spanish, Latin American Literature, and International Studies at Jacksonville University, to explain why consistently accurate language translation has thus far eluded AI.

He replied, “The problem is that considering the ‘entire’ sentence is still not enough. The same way the meaning of a word depends on the rest of the sentence (more in English than in Spanish), the meaning of a sentence depends on the rest of the paragraph and the rest of the text, as the meaning of a text depends on a larger context called culture, speaker intentions, etc.”

He noted that sarcasm and irony only make sense within this widened context. Similarly, idioms can be problematic for automated translations.

“Google translation is a good tool if you use it as a tool, that is, not to substitute human learning or understanding,” he said, before offering examples of mistranslations that could occur.

“Months ago, I went to buy a drill at Home Depot and I read a sign under a machine: ‘Saw machine.’ Right below it, the Spanish translation: ‘La máquina vió,’ which means, ‘The machine did see it.’ Saw, not as a noun but as a verb in the preterit form,” he explained.

Dr. Majfud warned, “We should be aware of the fragility of their ‘interpretation.’ Because to translate is basically to interpret, not just an idea but a feeling. Human feelings and ideas that only humans can understand—and sometimes not even we, humans, understand other humans.”

He noted that cultures, gender, and even age can pose barriers to this understanding and also contended that an over-reliance on technology is leading to our cultural and political decline. Dr. Majfud mentioned that Argentinean writer Julio Cortázar used to refer to dictionaries as “cemeteries.” He suggested that automatic translators could be called “zombies.”

Erik Cambria is an academic AI researcher and assistant professor at Nanyang Technological University in Singapore. He mostly focuses on natural language processing, which is at the core of AI-powered language translation. Like Dr. Majfud, he sees the complexity and associated risks. “There are so many things that we unconsciously do when we read a piece of text,” he told me. Reading comprehension requires multiple interrelated tasks, which haven’t been accounted for in past attempts to automate translation.

Cambria continued, “The biggest issue with machine translation today is that we tend to go from the syntactic form of a sentence in the input language to the syntactic form of that sentence in the target language. That’s not what we humans do. We first decode the meaning of the sentence in the input language and then we encode that meaning into the target language.”

Additionally, there are cultural risks involved with these translations. Dr. Ramesh Srinivasan, Director of UCLA’s Digital Cultures Lab, said that new technological tools sometimes reflect underlying biases.

“There tend to be two parameters that shape how we design ‘intelligent systems.’ One is the values and you might say biases of those that create the systems. And the second is the world if you will that they learn from,” he told me. “If you build AI systems that reflect the biases of their creators and of the world more largely, you get some, occasionally, spectacular failures.”

Dr. Srinivasan said translation tools should be transparent about their capabilities and limitations. He said, “You know, the idea that a single system can take languages that I believe are very diverse semantically and syntactically from one another and claim to unite them or universalize them, or essentially make them sort of a singular entity, it’s a misnomer, right?”

Mary Cochran, co-founder of Launching Labs Marketing, sees the commercial upside. She mentioned that listings in online marketplaces such as Amazon could potentially be auto-translated and optimized for buyers in other countries.

She said, “I believe that we’re just at the tip of the iceberg, so to speak, with what AI can do with marketing. And with better translation, and more globalization around the world, AI can’t help but lead to exploding markets.”

Image Credit: igor kisselev / Shutterstock.com

Posted in Human Robots

#432031 Why the Rise of Self-Driving Vehicles ...

It’s been a long time coming. For years Waymo (formerly known as Google Chauffeur) has been diligently developing, driving, testing and refining its fleets of various models of self-driving cars. Now Waymo is going big. The company recently placed an order for several thousand new Chrysler Pacifica minivans and next year plans to launch driverless taxis in a number of US cities.

This deal raises one of the biggest unanswered questions about autonomous vehicles: if fleets of driverless taxis make it cheap and easy for regular people to get around, what’s going to happen to car ownership?

One popular line of thought goes as follows: as autonomous ride-hailing services become ubiquitous, people will no longer need to buy their own cars. This notion has a certain logical appeal. It makes sense to assume that as driverless taxis become widely available, most of us will eagerly sell the family car and use on-demand taxis to get to work, run errands, or pick up the kids. After all, vehicle ownership is pricey and most cars spend the vast majority of their lives parked.

Even experts believe commercial availability of autonomous vehicles will cause car sales to drop.

Market research firm KPMG estimates that by 2030, midsize car sales in the US will decline from today’s 5.4 million units sold each year to a measly 2.1 million units, less than half that number. Another market research firm, ReThinkX, offers an even more pessimistic estimate (or optimistic, depending on your opinion of cars), predicting that autonomous vehicles will reduce consumer demand for new vehicles by a whopping 70 percent.

The reality is that the impending death of private vehicle sales is greatly exaggerated. Autonomous taxis will be a beneficial and widely embraced form of urban transportation, yet we will witness the opposite: most people will still prefer to own their own autonomous vehicle. In fact, the number of autonomous vehicles sold each year is going to increase rather than decrease.

When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.

Several unique characteristics of autonomous vehicles will ensure that people will continue to buy their own cars.

1. Cost: Thanks to simpler electric motors and lighter auto bodies, autonomous vehicles will be cheaper to buy and maintain than today’s human-driven vehicles. Some estimates put the price at $10K per vehicle, a stark contrast with today’s average of $30K per vehicle.

2. Personal belongings: Consumers will be able to do much more in their driverless vehicles, including work, play, and rest. This means they will want to keep more personal items in their cars.

3. Frequent upgrades: The average (human-driven) car today is owned for 10 years. As driverless cars become software-driven devices, their price/performance ratio will track to Moore’s law. Their rapid improvement will increase the appeal and frequency of new vehicle purchases.

4. Instant accessibility: In a dense urban setting, a driverless taxi is able to show up within minutes of being summoned. But not so in rural areas, where people live miles apart. For many, delay and “loss of control” over their own mobility will increase the appeal of owning their own vehicle.

5. Diversity of form and function: Autonomous vehicles will be available in a wide variety of sizes and shapes. Consumers will drive demand for custom-made, purpose-built autonomous vehicles whose form is adapted for a particular function.

Let’s explore each of these characteristics in more detail.

Autonomous vehicles will cost less for several reasons. For one, they will be powered by electric motors, which are cheaper to build and maintain than gasoline engines. Removing human drivers will also save consumers money. Autonomous vehicles will be much less likely to have accidents, hence they can be built out of lightweight, lower-cost materials and will be cheaper to insure. With the human interface no longer needed, autonomous vehicles won’t be burdened by the manufacturing costs of a complex dashboard, steering wheel, and foot pedals.

While hop-on, hop-off autonomous taxi-based mobility services may be ideal for some of the urban population, several sizeable customer segments will still want to own their own cars.

These include people who live in sparsely populated rural areas who can’t afford to wait extended periods of time for a taxi to appear. Families with children will prefer to own their own driverless cars to house their children’s car seats and favorite toys and sippy cups. Another loyal car-buying segment will be die-hard gadget-hounds who will eagerly buy a sexy upgraded model every year or so, unable to resist the siren song of AI that is three times as safe, or a ride that is twice as smooth.

Finally, consider the allure of robotic diversity.

Commuters will invest in a home office on wheels, a sleek, traveling workspace resembling the first-class suite on an airplane. On the high end of the market, city-dwellers and country-dwellers alike will special-order custom-made autonomous vehicles whose shape and on-board gadgetry are adapted for a particular function or hobby. Privately-owned small businesses will buy their own autonomous delivery robot, which could range in size from a knee-high, last-mile delivery pod to a giant, long-haul shipping device.

As autonomous vehicles near commercial viability, Waymo’s procurement deal with Fiat Chrysler is just the beginning.

The exact value of this future automotive industry has yet to be defined, but research from Intel’s internal autonomous vehicle division estimates this new so-called “passenger economy” could be worth nearly $7 trillion a year. To position themselves to capture a chunk of this potential revenue, companies whose businesses used to lie in previously disparate fields such as robotics, software, ships, and entertainment (to name but a few) have begun to form a bewildering web of what they hope will be symbiotic partnerships. Car hailing and chip companies are collaborating with car rental companies, who in turn are befriending giant software firms, who are launching joint projects with all sizes of hardware companies, and so on.

Last year, car companies sold an estimated 80 million new cars worldwide. Over the course of nearly a century, car companies and their partners, global chains of suppliers and service providers, have become masters at mass-producing and maintaining sturdy and cost-effective human-driven vehicles. As autonomous vehicle technology becomes ready for mainstream use, traditional automotive companies are being forced to grapple with the painful realization that they must compete in a new playing field.

The challenge for traditional car-makers won’t be that people no longer want to own cars. Instead, the challenge will be learning to compete in a new and larger transportation industry where consumers will choose their product according to the appeal of its customized body and the quality of its intelligent software.

Melba Kurman and Hod Lipson are the authors of Driverless: Intelligent Cars and the Road Ahead and Fabricated: the New World of 3D Printing.

Image Credit: hfzimages / Shutterstock.com


Posted in Human Robots

#431958 The Next Generation of Cameras Might See ...

You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.

The latest camera research is shifting away from increasing the number of megapixels towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling how light travels through the scene or the camera.

This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead we will use light detectors that only a few years ago we would never have considered any use for imaging. And they will be able to do incredible things, like see through fog, inside the human body and even behind walls.

Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.

To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
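To make this concrete, here is a toy NumPy sketch of single-pixel imaging: random binary illumination patterns are “projected” onto a small synthetic scene, a single detector records one total-intensity value per pattern, and a least-squares solve recovers the image. Real systems use structured patterns (Hadamard patterns, for instance) and compressive sensing to get away with far fewer measurements; the scene, pattern count, and recovery method here are simplified assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # toy scene is n x n pixels
scene = rng.random((n, n))              # stand-in for the object being imaged

# Each row of A is one illumination pattern, flattened into a vector.
num_patterns = 2 * n * n                # oversample for a stable solve
A = rng.integers(0, 2, size=(num_patterns, n * n)).astype(float)

# The single pixel records just one number per pattern:
# the total light reflected back from the scene under that pattern.
measurements = A @ scene.ravel()

# Recover the image by solving the linear system in a least-squares sense.
recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
recovered = recovered.reshape(n, n)

print("max reconstruction error:", np.abs(recovered - scene).max())
```

The point is simply that an image emerges from a pile of one-number measurements plus computation, which is exactly the shift away from conventional imaging described above.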

Clearly the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns in order to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.

It is even possible to capture images from light particles that have never even interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?

Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information about a scene, of which traditional techniques collect only a small part.

This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.
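The refocus-after-capture trick boils down to a shift-and-sum operation over the recorded 4D light field: each sub-aperture view is shifted in proportion to its position in the lens aperture and the views are averaged, which brings a chosen depth into focus. The following is a simplified sketch of that idea on synthetic data (hypothetical array sizes and shift values), not Lytro’s actual processing pipeline.

```python
import numpy as np

def refocus(light_field, shift_per_aperture):
    """Shift-and-sum refocusing of a 4D light field L[u, v, y, x].
    shift_per_aperture (in pixels) selects which depth ends up sharp."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * shift_per_aperture))
            dx = int(round((v - V // 2) * shift_per_aperture))
            # np.roll stands in for proper sub-pixel interpolation
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Hypothetical 5 x 5 grid of 64 x 64 sub-aperture views
lf = np.random.default_rng(1).random((5, 5, 64, 64))
near_plane = refocus(lf, shift_per_aperture=1.0)
far_plane = refocus(lf, shift_per_aperture=-1.0)
```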

The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.

Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.

All of these techniques rely on combining images with models that explain how light travels through or around different substances.

Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.

Single photon and quantum imaging technologies are also maturing to the point that they can take pictures with incredibly low light levels and videos with incredibly fast speeds reaching a trillion frames per second. This is fast enough to capture images of light itself traveling across a scene.

Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.

This article was originally published on The Conversation. Read the original article.

Image Credit: Sylvia Adams / Shutterstock.com

Posted in Human Robots

#431925 How the Science of Decision-Making Will ...

Neuroscientist Brie Linkenhoker believes that leaders must be better prepared for future strategic challenges by continually broadening their worldviews.
As the director of Worldview Stanford, Brie and her team produce multimedia content and immersive learning experiences to make academic research and insights accessible and usable by curious leaders. These future-focused topics are designed to help them understand the forces shaping the future.
Worldview Stanford has tackled such interdisciplinary topics as the power of minds, the science of decision-making, environmental risk and resilience, and trust and power in the age of big data.
We spoke with Brie about why understanding our biases is critical to making better decisions, particularly in a time of increasing change and complexity.

Lisa Kay Solomon: What is Worldview Stanford?
Brie Linkenhoker: Leaders and decision makers are trying to navigate this complex hairball of a planet that we live on and that requires keeping up on a lot of diverse topics across multiple fields of study and research. Universities like Stanford are where that new knowledge is being created, but it’s not getting out and used as readily as we would like, so that’s what we’re working on.
Worldview is designed to expand our individual and collective worldviews about important topics impacting our future. Your worldview is not a static thing; it’s constantly changing. We believe it should be informed by lots of different perspectives, different cultures, and knowledge from different domains and disciplines. This is more important now than ever.
At Worldview, we create learning experiences that are an amalgamation of all of those things.
LKS: One of your marquee programs is the Science of Decision Making. Can you tell us about that course and why it’s important?
BL: We tend to think about decision makers as being people in leadership positions, but every person who works in your organization, every member of your family, every member of the community is a decision maker. You have to decide what to buy, who to partner with, what government regulations to anticipate.
You have to think not just about your own decisions, but you have to anticipate how other people make decisions too. So, when we set out to create the Science of Decision Making, we wanted to help people improve their own decisions and be better able to predict, understand, anticipate the decisions of others.

“I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them.”

We realized that the only way to do that was to combine a lot of different perspectives, so we recruited experts from economics, psychology, neuroscience, philosophy, biology, and religion. We also brought in cutting-edge research on artificial intelligence and virtual reality and explored conversations about how technology is changing how we make decisions today and how it might support our decision-making in the future.
There’s no single set of answers. There are as many unanswered questions as there are answered questions.
LKS: One of the other things you explore in this course is the role of biases and heuristics. Can you explain the importance of both in decision-making?
BL: When I was a strategy consultant, executives would ask me, “How do I get rid of the biases in my decision-making or my organization’s decision-making?” And my response would be, “Good luck with that. It isn’t going to happen.”
As human beings we make, probably, thousands of decisions every single day. If we had to be actively thinking about each one of those decisions, we wouldn’t get out of our house in the morning, right?
We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So, we’ve evolved in the human brain a set of what we understand to be heuristics or rules of thumb.
And heuristics are great in, say, 95 percent of situations. It’s that five percent, or maybe even one percent, that they’re really not so great. That’s when we have to become aware of them because in some situations they can become biases.
For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest or where you’re making a decision about a strategic investment or even when you’re deciding who to hire.
Let’s take hiring for a moment.
How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely. That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them.
LKS: We seem to be at a time when the boundaries between different disciplines are starting to blend together. How has the advancement of neuroscience helped us become better leaders? What do you see happening next?
BL: Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book, The Undoing Project, which is the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did on the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics.
In the last 10 to 15 years, neuroeconomics has really taken off. Neuroeconomics is the combination of behavioral economics with neuroscience. In behavioral economics, they use economic games and economic choices that have numbers associated with them and have real-world application.
For example, they ask, “How much would you spend to buy A versus B?” Or, “If I offered you X dollars for this thing that you have, would you take it or would you say no?” So, it’s trying to look at human decision-making in a format that’s easy to understand and quantify within a laboratory setting.
Now you bring neuroscience into that. You can have people doing those same kinds of tasks—making those kinds of semi-real-world decisions—in a brain scanner, and we can now start to understand what’s going on in the brain while people are making decisions. You can ask questions like, “Can I look at the signals in someone’s brain and predict what decision they’re going to make?” That can help us build a model of decision-making.
I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them. That’s very exciting for a neuroscientist.
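As a toy illustration of what “predicting a decision from brain signals” can mean in practice, the sketch below fits a logistic-regression classifier to simulated activity patterns labeled with the choice made on each trial. The data are fabricated stand-ins; real neuroeconomics studies use carefully preprocessed fMRI or EEG recordings and far richer models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Simulated data: 200 trials x 50 voxel/sensor features, with a weak
# linear signal separating "accept" (1) from "reject" (0) choices.
n_trials, n_features = 200, 50
choices = rng.integers(0, 2, size=n_trials)
pattern = rng.normal(size=n_features)
brain_activity = np.outer(choices - 0.5, pattern) + rng.normal(
    scale=2.0, size=(n_trials, n_features)
)

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, brain_activity, choices, cv=5).mean()
print(f"Cross-validated decoding accuracy (chance = 0.5): {accuracy:.2f}")
```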
Image Credit: Black Salmon / Shutterstock.com

Posted in Human Robots