#437293 These Scientists Just Completed a 3D ...
Human brain maps are a dime a dozen these days. Maps that detail neurons in a certain region. Maps that draw out functional connections between those cells. Maps that dive deeper into gene expression. Or even meta-maps that combine all of the above.
But have you ever wondered: how well do those maps represent my brain? After all, no two brains are alike. And if we’re ever going to reverse-engineer the brain as a computer simulation—as Europe’s Human Brain Project is trying to do—shouldn’t we ask whose brain they’re hoping to simulate?
Enter a new kind of map: the Julich-Brain, a probabilistic map of human brains that accounts for individual differences using a computational framework. Rather than a static PDF of a brain map, the Julich-Brain atlas is dynamic, continuously changing to incorporate the latest brain-mapping results. So far it holds cellular-level data from over 24,000 thinly sliced sections of 23 postmortem brains spanning most of adulthood. The atlas can also adapt to advances in mapping technology, aid brain modeling and simulation, and link to other atlases and resources.
In other words, rather than “just another” human brain map, the Julich-Brain atlas is its own neuromapping API—one that could unite previous brain-mapping efforts with more modern methods.
“It is exciting to see how far the combination of brain research and digital technologies has progressed,” said Dr. Katrin Amunts of the Institute of Neuroscience and Medicine at Research Centre Jülich in Germany, who spearheaded the study.
The Old Dogma
The Julich-Brain atlas embraces traditional brain-mapping while also yanking the field into the 21st century.
First, the new atlas includes the brain’s cytoarchitecture, or how brain cells are organized. As brain maps go, these kinds of maps are the oldest and most fundamental. Rather than exploring how neurons talk to each other functionally—which is all the rage these days with connectome maps—cytoarchitecture maps draw out the physical arrangement of neurons.
Like a census, these maps literally capture how neurons are distributed in the brain, what they look like, and how they layer within and between different brain regions.
Because neurons aren’t packed together the same way in different brain regions, this provides a way to parse the brain into areas that can be further studied. When we talk about the brain’s “memory center,” the hippocampus, or its “emotion center,” the amygdala, those distinctions are based on cytoarchitectural maps.
Some may call this type of mapping “boring.” But cytoarchitecture maps form the very basis of any neuroscientific understanding. Like the hand-drawn maps of early explorers sailing to the western hemisphere, they chart the brain’s geography, the groundwork from which we try to decipher functional connections. If brain regions are cities, cytoarchitecture maps lay out the cities themselves; functional maps then trace the trade and traffic flowing along the highways that link them.
You might’ve heard of the most common cytoarchitecture map used today: the Brodmann map from 1909 (yup, that old), which divided the brain into classical regions based on the cells’ morphology and location. The map, while impactful, wasn’t able to account for brain differences between people. More recent brain-mapping technologies have allowed us to dig deeper into neuronal differences and divide the brain into more regions—180 areas in the cortex alone, compared with 43 in the original Brodmann map.
The new study took inspiration from that age-old map and transformed it into a digital ecosystem.
A Living Atlas
Work began on the Julich-Brain atlas in the mid-1990s, with a little help from the crowd.
The preparation of human tissue and its microstructural mapping, analysis, and data processing is incredibly labor-intensive, the authors lamented, making it impossible to do for the whole brain at high resolution in just one lab. To build their “Google Earth” for the brain, the team hooked up with EBRAINS, a shared computing platform set up by the Human Brain Project to promote collaboration between neuroscience labs in the EU.
First, the team acquired MRI scans of 23 postmortem brains, sliced the brains into wafer-thin sections, and scanned and digitized them. They corrected distortions from the chopping using data from the MRI scans and then lined up neurons in consecutive sections—picture putting together a 3D puzzle—to reconstruct the whole brain. Overall, the team had to analyze 24,000 brain sections, which prompted them to build a computational management system for individual brain sections—a win, because they could now track individual donor brains too.
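For readers curious about the mechanics, here is a minimal sketch of the slice-alignment step described above: register each digitized section to its neighbor, then stack the results into a volume. This illustrates the general idea only, not the team’s actual pipeline (which also uses the MRI scans to correct distortions); the function name `reconstruct_volume` and the `sections` input are assumptions for illustration.

```python
# Illustrative only: align consecutive 2D section images and stack them into a
# 3D volume. The real Julich-Brain workflow is far more elaborate (nonlinear
# corrections guided by the donor's MRI, cell-level analysis, etc.).
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def reconstruct_volume(sections):
    """sections: list of 2D grayscale numpy arrays, ordered front to back."""
    aligned = [sections[0].astype(float)]
    for current in sections[1:]:
        current = current.astype(float)
        # Estimate the translation that best registers this slice to the previous one.
        offset, _, _ = phase_cross_correlation(aligned[-1], current)
        aligned.append(nd_shift(current, offset))
    # Stack the aligned slices into a (depth, height, width) volume.
    return np.stack(aligned, axis=0)
```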
Their method was quite clever. They first mapped their results to a brain template from a single person, called the MNI-Colin27 template. Because the reference brain was extremely detailed, this allowed the team to better figure out the location of brain cells and regions in a particular anatomical space.
However, MNI-Colin27’s brain isn’t your brain or mine—or any of the brains the team analyzed. To dilute any of Colin’s potential brain quirks, the team also mapped their dataset onto an “average brain,” dubbed the ICBM2009c (catchy, I know).
This step allowed the team to “standardize” their results with everything else from the Human Connectome Project and the UK Biobank, kind of like adding their Google Maps layer to the existing map. To highlight individual brain differences, the team overlaid their dataset on existing ones, and looked for differences in the cytoarchitecture.
The microscopic architecture of neurons changes between two areas (dotted line), forming the basis for telling brain regions apart. To account for individual differences, the team also calculated a probability map (right hemisphere). Image credit: Forschungszentrum Juelich / Katrin Amunts
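To make the probability-map idea concrete, here is a toy sketch: once every donor brain has been registered to a common reference space, the probability that a voxel belongs to a given region is simply the fraction of donors in which it was labeled that way. The names below (`probability_map`, `region_masks`) are illustrative assumptions, not the published code.

```python
import numpy as np

def probability_map(region_masks):
    """region_masks: list of binary 3D arrays (one per donor, all registered to
    the same reference space), with 1 where that donor's tissue was assigned
    to the region of interest."""
    stacked = np.stack([np.asarray(m, dtype=float) for m in region_masks], axis=0)
    # For each voxel: the fraction of donors labeled as this region, a value in [0, 1].
    return stacked.mean(axis=0)

# Cortical territory where no region reaches a meaningful probability in any donor
# is the kind of area that gets grouped into the "gap maps" discussed below.
```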
Based on structure alone, the brains were remarkably different and shockingly similar at the same time. For example, the cortices—the outermost layer of the brain—differed physically across donor brains of different ages and sexes. The region most divergent between people was Broca’s region, which is traditionally linked to speech production. In contrast, parts of the visual cortex were almost identical from brain to brain.
The Brain-Mapping Future
Because it doesn’t rely on the brain’s visible “landmarks,” which still differ between people, the probabilistic map is far more precise, the authors said.
What’s more, the map pools the yet-unmapped regions of the cortex—roughly 30 percent of it—into “gap maps,” giving neuroscientists a clearer picture of what still needs to be charted.
“New maps are continuously replacing gap maps with progress in mapping while the process is captured and documented … Consequently, the atlas is not static but rather represents a ‘living map,’” the authors said.
Thanks to an architecture that is structurally sound down to individual cells, the atlas can contribute to brain modeling and simulation down the line—especially personalized brain models for neurological disorders such as epilepsy. Researchers can also apply the framework to other species, and even fold new data-processing methods into the workflow, such as mapping brain regions with artificial intelligence.
Fundamentally, the goal is to build shared resources to better understand the brain. “[These atlases] help us—and more and more researchers worldwide—to better understand the complex organization of the brain and to jointly uncover how things are connected,” the authors said.
Image credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University
#437276 Cars Will Soon Be Able to Sense and ...
Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours—you just don’t feel like your best self.
Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.
Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.
What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?
Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.
Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.
Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data; “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
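As a rough illustration of that training loop—and emphatically not any vendor’s actual system—here is a minimal Python sketch: feature vectors standing in for measurements of a face or voice are paired with emotion labels, and an off-the-shelf classifier learns the mapping. All data here are random placeholders.

```python
# Toy supervised-learning sketch: random placeholder features and labels stand in
# for real facial/vocal measurements annotated as happy, sad, or angry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(600, 20))    # stand-in feature vectors (e.g., landmark distances, pitch stats)
y = rng.integers(0, 3, size=600)  # stand-in labels: 0 = happy, 1 = sad, 2 = angry

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~chance (~0.33) on random data
```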
Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.
But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).
Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?
Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.
And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it; private, secluded, soundproof.
Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by distraction or drowsiness, then, seems a bit like a bait and switch.
A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.
Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.
Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.
Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.
Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.
In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.
Image Credit: Free-Photos from Pixabay
#436470 Retail Robots Are on the Rise—at Every ...
The robots are coming! The robots are coming! On our sidewalks, in our skies, in our every store… Over the next decade, robots will enter the mainstream of retail.
As countless robots work behind the scenes to stock shelves, serve customers, and deliver products to our doorstep, the speed of retail will accelerate.
These changes are already underway. In this blog, we’ll elaborate on how robots are entering the retail ecosystem.
Let’s dive in.
Robot Delivery
On August 3rd, 2016, Domino’s Pizza introduced the Domino’s Robotic Unit, or “DRU” for short. The first home delivery pizza robot, the DRU looks like a cross between R2-D2 and an oversized microwave.
LIDAR and GPS sensors help it navigate, while temperature sensors keep hot food hot and cold food cold. Already, it’s been rolled out in ten countries, including New Zealand, France, and Germany, but its August 2016 debut was critical—as it was the first time we’d seen robotic home delivery.
And it won’t be the last.
A dozen or so different delivery bots are fast entering the market. Starship Technologies, for instance, a startup created by Skype founders Janus Friis and Ahti Heinla, has a general-purpose home delivery robot. Right now, the system is an array of cameras and GPS sensors, but upcoming models will include microphones, speakers, and even the ability—via AI-driven natural language processing—to communicate with customers. Since 2016, Starship has already carried out 50,000 deliveries in over 100 cities across 20 countries.
Along similar lines, Nuro—co-founded by Jiajun Zhu, one of the engineers who helped develop Google’s self-driving car—has a miniature self-driving car of its own. Half the size of a sedan, the Nuro looks like a toaster on wheels, except with a mission. This toaster has been designed to carry cargo—about 12 bags of groceries (version 2.0 will carry 20)—which it’s been doing for select Kroger stores since 2018. Domino’s also partnered with Nuro in 2019.
As these delivery bots take to our streets, others are streaking across the sky.
Amazon came first, announcing Prime Air—the e-commerce giant’s promise of drone delivery in 30 minutes or less—back in 2013, and completing its first customer drone delivery in 2016. Almost immediately, companies ranging from 7-Eleven and Walmart to Google and Alibaba jumped on the bandwagon.
While critics remain doubtful, the head of the FAA’s drone integration department recently said that drone deliveries may be “a lot closer than […] the skeptics think. [Companies are] getting ready for full-blown operations. We’re processing their applications. I would like to move as quickly as I can.”
In-Store Robots
While delivery bots start to spare us trips to the store, those who prefer shopping the old-fashioned way—i.e., in person—also have plenty of human-robot interaction in store. In fact, these robotics solutions have been around for a while.
In 2014, SoftBank introduced Pepper, a humanoid robot designed to understand human emotion. Pepper is cute: 4 feet tall, with a white plastic body, two black eyes, a dark slash of a mouth, and a base shaped like a mermaid’s tail. Across her chest is a touch screen to aid in communication. And there’s been a lot of communication. Pepper’s cuteness is intentional, as it matches her mission: to help humans enjoy life as much as possible.
Over 12,000 Peppers have been sold. She serves ice cream in Japan, greets diners at a Pizza Hut in Singapore, and dances with customers at a Palo Alto electronics store. More importantly, Pepper’s got company.
Walmart uses shelf-stocking robots for inventory control. Best Buy uses a robo-cashier, allowing select locations to operate 24-7. And Lowe’s Home Improvement employs the LoweBot—a giant iPad on wheels—to help customers find the items they need while tracking inventory along the way.
Warehouse Bots
Yet the biggest benefit robots provide might be in-warehouse logistics.
In 2012, when Amazon dished out $775 million for Kiva Systems, few could have predicted that just six years later, 45,000 Kiva robots would be deployed at its fulfillment centers, helping process a whopping 306 items per second during the Christmas season.
And many other retailers are following suit.
Order jeans from the Gap, and soon they’ll be sorted, packed, and shipped with the help of a Kindred robot. Remember the old arcade game where you picked up teddy bears with a giant claw? That’s Kindred, only her claw picks up T-shirts, pants, and the like, placing them in designated drop-off zones that resemble tiny mailboxes (for further sorting or shipping).
The big deal here is democratization. Kindred’s robot is cheap and easy to deploy, allowing smaller companies to compete with giants like Amazon.
Final Thoughts
For retailers interested in staying in business, there doesn’t appear to be much choice in the way of robotics.
Under a bill the House of Representatives has already passed, the federal minimum wage would rise gradually to $15 an hour by 2025, and many consider that number far too low.
Yet, as human labor costs continue to climb, robots won’t just be coming, they’ll be here, there, and everywhere. It’s going to become increasingly difficult for store owners to justify human workers who call in sick, show up late, and can easily get injured. Robots work 24-7. They never take a day off, never need a bathroom break, health insurance, or parental leave.
Going forward, this spells a growing challenge of technological unemployment (a blog topic I will cover in the coming month). But in retail, robotics usher in tremendous benefits for companies and customers alike.
And while professional re-tooling initiatives and the transition of human capital from retail logistics to a booming experience economy take hold, robotic retail interaction and last-mile delivery will fundamentally transform our relationship with commerce.
This blog comes from The Future is Faster Than You Think—my upcoming book, to be released Jan 28th, 2020. To get an early copy and access up to $800 worth of pre-launch giveaways, sign up here!
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 over the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)
Image Credit: imjanuary from Pixabay