Tag Archives: full

#437202 Scientists Used Dopamine to Seamlessly ...

In just half a decade, neuromorphic devices—or brain-inspired computing—already seem quaint. The current darling? Artificial-biological hybrid computing, uniting both man-made computer chips and biological neurons seamlessly into semi-living circuits.

It sounds crazy, but a new study in Nature Materials shows that it’s possible to get an artificial neuron to communicate directly with a biological one using not just electricity, but dopamine—a chemical the brain naturally uses to change how neural circuits behave, best known for signaling reward.

Because these chemicals, known as “neurotransmitters,” are how biological neurons functionally link up in the brain, the study is a dramatic demonstration that it’s possible to connect artificial components with biological brain cells into a functional circuit.

The team isn’t the first to pursue hybrid neural circuits. Previously, a different team hooked up two silicon-based artificial neurons with a biological one into a circuit using electrical protocols alone. Although a powerful demonstration of hybrid computing, the study relied on only one-half of the brain’s computational ability: electrical computing.

The new study now tackles the other half: chemical computing. It adds a layer of compatibility that lays the groundwork not just for brain-inspired computers, but also for brain-machine interfaces and—perhaps—a sort of “cyborg” future. After all, if your brain can’t tell the difference between an artificial neuron and your own, could you? And even if you did, would you care?

Of course, that scenario is far in the future—if ever. For now, the team, led by Dr. Alberto Salleo, professor of materials science and engineering at Stanford University, collectively breathed a sigh of relief that the hybrid circuit worked.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Neuromorphic Computing
The study grew from years of work into neuromorphic computing, or data processing inspired by the brain.

The blue-sky idea was inspired by the brain’s massive parallel computing capabilities, along with vast energy savings. By mimicking these properties, scientists reasoned, we could potentially turbo-charge computing. Neuromorphic devices basically embody artificial neural networks in physical form—wouldn’t hardware that mimics how the brain processes information be even more efficient and powerful?

These explorations led to novel neuromorphic chips, or artificial neurons that “fire” like biological ones. Additional work found that it’s possible to link these chips up into powerful circuits that run deep learning with ease, with bioengineered communication nodes called artificial synapses.

As a potential computing hardware replacement, these systems have proven to be incredibly promising. Yet scientists soon wondered: given their similarity to biological brains, can we use them as “replacement parts” for brains that suffer from traumatic injuries, aging, or degeneration? Can we hook up neuromorphic components to the brain to restore its capabilities?

Buzz & Chemistry
Theoretically, the answer’s yes.

But there’s a huge problem: current brain-machine interfaces only use electrical signals to mimic neural computation. The brain, in contrast, has two tricks up its sleeve: electricity and chemistry. It’s electrochemical.

Within a neuron, electricity travels up its incoming branches, through the bulbous body, then down the output branches. When electrical signals reach the neuron’s outgoing “piers,” dotted along the output branch, however, they hit a snag. A small gap exists between neurons, so to get to the other side, the electrical signals generally need to be converted into little bubble ships, packed with chemicals, and set sail to the other neuronal shore.

In other words, without chemical signals, the brain can’t function normally. These neurotransmitters don’t just passively carry information. Dopamine, for example, can dramatically change how a neural circuit functions. For an artificial-biological hybrid neural system, the absence of chemistry is like nixing international cargo vessels and only sticking with land-based trains and highways.

“To emulate biological synaptic behavior, the connectivity of the neuromorphic device must be dynamically regulated by the local neurotransmitter activity,” the team said.

Let’s Get Electro-Chemical
The new study started with two neurons: the upstream, an immortalized biological cell that releases dopamine; and the downstream, an artificial neuron that the team previously introduced in 2017, made of a mix of biocompatible and electrical-conducting materials.

Rather than the classic neuron shape, picture more of a sandwich with a chunk bitten out in the middle (yup, I’m totally serious). Each of the remaining parts of the sandwich is a soft electrode, made of biological polymers. The “bitten out” part has a conductive solution that can pass on electrical signals.

The biological cell sits close to the first electrode. When activated, it dumps out boats of dopamine, which drift to the electrode and chemically react with it—mimicking the process of dopamine docking onto a biological neuron. This, in turn, generates a current that’s passed on to the second electrode through the conductive solution channel. When this current reaches the second electrode, it changes the electrode’s conductance—that is, how well it can pass on electrical information. This second step is analogous to docked dopamine “ships” changing how likely it is that a biological neuron will fire in the future.

In other words, dopamine release from the biological neuron interacts with the artificial one, so that the chemicals change how the downstream neuron behaves in a somewhat lasting way—a loose mimic of what happens inside the brain during learning.
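The behavior described above can be captured in a toy numerical sketch (my own simplification for illustration, not the authors’ model): each dopamine release event drives a chemical reaction at the electrode that nudges the artificial neuron’s conductance, and the change persists after the event ends.

```python
# Toy model: each dopamine "release event" permanently nudges the
# conductance of the downstream artificial neuron, a loose analogue
# of synaptic weight change during learning. Units are arbitrary.

def simulate_hybrid_synapse(release_events, g0=1.0, dg=0.05):
    """Return the conductance after each event; changes are nonvolatile."""
    g = g0
    history = []
    for released in release_events:
        if released:           # dopamine reached the first electrode
            g += dg            # the reaction raises conductance
        history.append(g)      # no decay term: the change persists
    return history

trace = simulate_hybrid_synapse([1, 0, 1, 1, 0])
# conductance steps up on the release events and holds in between
```

The key point the sketch captures is the absence of a decay term: unlike a transient electrical pulse, the chemically induced conductance change sticks around, which is what makes it a loose mimic of learning.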

But that’s not all. Chemical signaling is especially powerful in the brain because it’s flexible. Dopamine, for example, only grabs onto the downstream neuron for a bit before it’s taken back up by the upstream neuron to be recycled, or broken down on the spot. This means its effect is temporary, giving the neural circuit breathing room to readjust its activity.

The Stanford team also tried reconstructing this quirk in their hybrid circuit. They crafted a microfluidic channel that shuttles dopamine and its byproducts away from the artificial neuron for recycling once they’ve done their job.

Putting It All Together
After confirming that biological cells can survive happily on top of the artificial components, the team performed a few tests to see if the hybrid circuit could “learn.”

They used electrical methods to first activate the biological dopamine neuron, and watched the artificial one. Before the experiment, the team wasn’t quite sure what to expect. Theoretically, it made sense that dopamine would change the artificial neuron’s conductance, similar to learning. But “it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab,” said study author Scott Keene.

On the first try, however, the team found that the burst of chemical signaling was able to change the artificial neuron’s conductance long-term, similar to the neuroscience dogma “neurons that fire together, wire together.” Activating the upstream biological neuron with chemicals also changed the artificial neuron’s conductance in a way that mimicked learning.

“That’s when we realized the potential this has for emulating the long-term learning process of a synapse,” said Keene.

Imaging the device under an electron microscope, the team found that, after some calibration, the hybrid synapse recycled dopamine as efficiently as its biological counterpart, on timescales similar to the brain’s. And by playing with how much dopamine accumulates at the artificial neuron, the team could loosely mimic a learning rule called spike learning—a darling of machine learning inspired by the brain’s computation.
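The “spike learning” mentioned above is commonly read as spike-timing-dependent plasticity (STDP). A minimal sketch of the classic textbook rule (illustrative only, not the paper’s exact scheme): if the upstream neuron fires just before the downstream one, the connection strengthens; if it fires just after, the connection weakens.

```python
import math

# Classic STDP weight update, a textbook reading of "spike learning"
# (not the paper's exact implementation). dt_ms = t_post - t_pre.

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Synaptic weight change for a given pre/post spike interval."""
    if dt_ms > 0:    # pre fired before post: "fire together, wire together"
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:  # pre fired after post: weaken the connection
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

# A pre-spike 10 ms before the post-spike strengthens the synapse;
# one 10 ms after it weakens the synapse, and the effect fades with
# larger intervals.
```

The exponential decay with `tau_ms` is what makes timing matter: spikes that arrive closer together produce larger weight changes, which is the property brain-inspired machine learning borrows.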

A Hybrid Future?
Unfortunately for cyborg enthusiasts, the work is still in its infancy.

For one, the artificial neurons are still rather bulky compared to biological ones. This means that they can’t capture and translate information from a single “boat” of dopamine. It’s also unclear if, and how, a hybrid synapse can work inside a living brain. Given the billions of synapses firing away in our heads, it’ll be a challenge to find and replace the ones that need replacing, and to have the replacements steer our memories and behaviors as naturally as the originals.

That said, we’re inching ever closer to full-capability artificial-biological hybrid circuits.

“The neurotransmitter-mediated neuromorphic device presented in this work constitutes a fundamental building block for artificial neural networks that can be directly modulated based on biological feedback from live neurons,” the authors concluded. “[It] is a crucial first step in realizing next-generation adaptive biohybrid interfaces.”

Image Credit: Gerd Altmann from Pixabay

Posted in Human Robots

#436784 This Week’s Awesome Tech Stories From ...

COMPUTING
Inside the Race to Build the Best Quantum Computer on Earth
Gideon Lichfield | MIT Technology Review
“Regardless of whether you agree with Google’s position [on ‘quantum supremacy’] or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. …The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.”

FUTURE
We’re Not Prepared for the End of Moore’s Law
David Rotman | MIT Technology Review
“Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.”

ROBOTICS
Flippy the Burger-Flipping Robot Is Changing the Face of Fast Food as We Know It
Luke Dormehl | Digital Trends
“Flippy is the result of the Miso team’s robotics expertise, coupled with that industry-specific knowledge. It’s a burger-flipping robot arm that’s equipped with both thermal and regular vision, which grills burgers to order while also advising human collaborators in the kitchen when they need to add cheese or prep buns for serving.”

BIOTECHNOLOGY
The Next Generation of Batteries Could Be Built by Viruses
Daniel Oberhaus | Wired
“[MIT bioengineering professor Angela Belcher has] made viruses that can work with over 150 different materials and demonstrated that her technique can be used to manufacture other materials like solar cells. Belcher’s dream of zipping around in a ‘virus-powered car’ still hasn’t come true, but after years of work she and her colleagues at MIT are on the cusp of taking the technology out of the lab and into the real world.”

SPACE
Biggest Cosmic Explosion Ever Detected Left Huge Dent in Space
Hannah Devlin | The Guardian
“The biggest cosmic explosion on record has been detected—an event so powerful that it punched a dent the size of 15 Milky Ways in the surrounding space. The eruption is thought to have originated at a supermassive black hole in the Ophiuchus galaxy cluster, which is about 390 million light years from Earth.”

SCIENCE FICTION
Star Trek’s Warp Speed Would Have Tragic Consequences
Cassidy Ward | SyFy
“The various crews of Trek‘s slate of television shows and movies can get from here to there without much fanfare. Seeking out new worlds and new civilizations is no more difficult than gassing up the car and packing a cooler full of junk food. And they don’t even need to do that! The replicators will crank out a bologna sandwich just like mom used to make. All that’s left is to go, but what happens then?”

Image Credit: sergio souza / Pexels


#436559 This Is What an AI Said When Asked to ...

“What’s past is prologue.” So says the famed quote from Shakespeare’s The Tempest, alleging that we can look to what has already happened as an indication of what will happen next.

This idea could be interpreted as being rather bleak; are we doomed to repeat the errors of the past until we correct them? We certainly do need to learn and re-learn life lessons—whether in our work, relationships, finances, health, or other areas—in order to grow as people.

Zooming out, the same phenomenon exists on a much bigger scale—that of our collective human history. We like to think we’re improving as a species, but haven’t yet come close to doing away with the conflicts and injustices that plagued our ancestors.

Zooming back in (and lightening up) a little, what about the short-term future? What might happen over the course of this year, and what information would we use to make educated guesses about it?

The editorial team at The Economist took a unique approach to answering these questions. On top of their own projections for 2020, including possible scenarios in politics, economics, and the continued development of technologies like artificial intelligence, they looked to an AI to make predictions of its own. What it came up with is intriguing, and a little bit uncanny.

[For the full list of the questions and answers, read The Economist article].

An AI That Reads—Then Writes
Almost exactly a year ago, non-profit OpenAI announced it had built a neural network for natural language processing called GPT-2. The announcement was met with some controversy, as it included the caveat that the tool would not be immediately released to the public due to its potential for misuse. It was then released in phases over the course of several months.

GPT-2’s creators raised the bar on quality when training the neural net; rather than feeding it text indiscriminately, they only used web pages linked from Reddit posts with at least three upvotes (admittedly, this doesn’t guarantee high quality across the board—but it’s something).

The training dataset consisted of 40GB of text. For context, 1GB of text is about 900,000 ASCII pages or 130,000 double-spaced Microsoft Word pages.
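As a back-of-envelope sanity check on those figures (my arithmetic, assuming 1 GB means 10^9 bytes and one byte per ASCII character):

```python
# Characters per page implied by the figures above, plus the total
# page count of the 40GB training set. Assumes 1 GB = 1e9 bytes and
# 1 byte per ASCII character.
gb = 1e9
ascii_pages_per_gb = 900_000

chars_per_ascii_page = gb / ascii_pages_per_gb   # roughly 1,100 characters
total_pages_in_dataset = 40 * ascii_pages_per_gb # 40 GB of text
```

About 1,100 characters per page is in the right ballpark for a dense page of prose, and the 40 GB dataset works out to some 36 million such pages.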

The tool has no understanding of the text it’s generating, of course. It uses language patterns and word sequences to draw statistical associations between words and phrases, building a sort of guidebook for itself (not unlike the grammar rules and vocabulary words you might study when trying to learn a foreign language). It then uses that guidebook to answer questions or predict what will come after a particular sequence of words.
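The “guidebook” idea can be illustrated with a drastically simplified stand-in: a bigram counter. GPT-2 is a transformer over subword tokens and is orders of magnitude richer than this, but the spirit is the same: statistics over observed word sequences, not understanding.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent successor. Purely statistical, with
# no grasp of meaning.

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    if word not in counts:
        return None  # never seen this word: no statistics to lean on
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
# "the" was followed by "cat" twice and "mat" once, so the model
# predicts "cat" after "the"
```

Even this toy shows the model’s central limitation: ask it about a word it has never seen and it has nothing to say, because its “knowledge” is nothing but tallies over its training text.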

GPT-2’s creators did note that when the model is trained on specifically-selected datasets for narrower applications, its replies become more convincing.

Some Light Editing
Though the Economist article detailing GPT-2’s predictions describes the text the model generated as “unedited,” this isn’t wholly accurate. In a Medium post, deputy editor Tom Standage describes the methodology he used, and it does involve a degree of tweaking.

Standage wrote an introductory paragraph, followed by his opening question (“Greetings, GPT-2. Have you done an interview before?”). He configured the tool to generate five responses to each subsequent question, then selected the best answer, deleting any questions GPT-2 came up with as part of its response.
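Standage’s procedure can be sketched as a best-of-n loop. Everything here is a stand-in: the `generate` stub is hypothetical (in reality GPT-2 produced the candidates), and a crude scoring function stands in for the human editor’s judgment.

```python
# Sketch of the "generate five, pick the best" methodology. The
# generate() stub below is hypothetical and returns canned answers;
# in the actual exercise a human editor chose among five GPT-2
# completions and deleted questions the model appended to its answer.

def generate(prompt, n=5):
    """Hypothetical stand-in for a language model returning n candidates."""
    return [
        "Yes. Why do you ask?",            # model tacked on its own question
        "Yes, this is my first interview.",
        "No.",
        "I have. It went well.",
        "Maybe.",
    ][:n]

def clean(answer):
    """Delete a trailing question the model appended to its own answer."""
    sentences = answer.split(". ")
    if sentences[-1].endswith("?"):
        sentences = sentences[:-1]
    return ". ".join(sentences)

def pick_best(candidates, score=len):
    """Crude stand-in for the editor's judgment: here, prefer the longest."""
    return max((clean(c) for c in candidates), key=score)

answer = pick_best(generate("Have you done an interview before?"))
```

Best-of-n selection like this is a simple way to paper over a generator’s inconsistency: even if most samples are weak, one of five is often usable, at the cost of a human (or scoring function) in the loop.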

AI’s View of the World—and Itself
The questions Standage asked ranged from tech-related (the future of AI, technological unemployment, misuse of technology) to political and economic (re-election of Donald Trump, the US-China trade war). Here are a couple of his questions and GPT-2’s answers:

Q: Do fake news stories, generated using AI, pose a threat to democracy? Are we likely to see this tactic being used in the 2020 American presidential elections?
A: Yes, we’ll probably see them. It’s just a matter of when. Fake news stories are generally generated by political campaigns, and have the potential to have a huge impact on the outcome of an election. This is because they spread misinformation, often with no basis in fact.

Q: How do you see the economic outlook for 2020?
A: I think it is very unclear. The outlook for the UK economy is uncertain, but the American economy is very strong. I think we are going to have a lot of turbulence in the world economy.

While some of its answers seemed insightful—when asked about the future of AI, GPT-2 advised that we treat the technology like a tool and “put as much effort into developing [it] as necessary, rather than worrying that it’s going to harm us and destroy our lives”—many of the responses were quite vague. One answer, for example, mentions that there will “probably be major changes in China.” Are we talking political changes? Social? Economic? All of the above?

However, while nothing the AI wrote was particularly illuminating or new, it’s impressive that it was able to expound on the correct topic for each question, and in fully coherent English.

GPT-2 named itself—AI—as the most important disruptive technology to watch in 2020. This is telling; it’s coming from a synthesis of millions of pages of published articles. If GPT-2 thinks AI is more important than, say, gene editing or quantum computing, it could be right. Or it could simply be that AI is at peak hype, and as such more is being written about it than about other technologies.

Equally intriguing was GPT-2’s response when asked whether Donald Trump will win a second term: “I think he will not win a second term. I think he will be defeated in the general election.” Some deeper insight there would be great, but hey—we’ll take it.

Predicting Predictions
Since an AI can read and synthesize vast data sets much faster than we can, it’s being used to predict all kinds of things, from virus outbreaks to crime. But asking it to philosophize on the future based on the (Reddit-curated) past is new, and if you think about it, a pretty fascinating undertaking.

As GPT-2 and tools like it continually improve, we’ll likely see them making more—and better—predictions of the future. In the meantime, let’s hope that the new data these models are trained on—news of what’s happening this week, this month, this year—add to an already-present sense of optimism.

When asked if it had any advice for readers, GPT-2 replied, “The big projects that you think are impossible today are actually possible in the near future.”

Image Credit: Alexas_Fotos from Pixabay


#436550 Work in the Age of Web 3.0

What is the future of work? Is our future one of ‘technological socialism’ (where technology is taking care of our needs)? Or will tomorrow’s workplace be completely virtualized, allowing us to hang out at home in our PJs while “walking” about our virtual corporate headquarters?

This blog will look at the future of work during the age of Web 3.0, examining scenarios in which artificial intelligence, virtual reality, and the spatial web converge to transform every element of our careers, from training, to execution, to free time.

To offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.

A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.

But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments. Suddenly, all our information will be manipulated, stored, understood and experienced in spatial ways.

In this blog, I’ll be discussing the Spatial Web’s vast implications for:

Professional Training
Delocalized Business & the Virtual Workplace
Smart Permissions & Data Security

Let’s dive in.

Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market. As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

Then in September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training. By mid-2019, Walmart had tracked a 10-15 percent boost in employee confidence as a result of newly implemented VR training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into six months, turning physical mock-ups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

Aiming to dramatically reduce the time and trouble of VR pilot testing, they plan to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these piloted devices contain a suite of actuators to simulate everything from a light touch to higher-pressured contact, all controlled by gaze and finger movements.

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands. VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to be an electric, autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.

But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.

Rise of the Virtual Workplace & Digital Data Integrity
In addition to enabling a virtual goods marketplace, the Spatial Web is also giving way to “virtual company headquarters” and completely virtualized companies, where employees can work from home or any place on the planet.

Too good to be true? Check out an incredible publicly listed company called eXp Realty.

Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading. But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.

And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial. What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent. Forgoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.

Delocalize with VR, and you can now hire anyone with Internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

Throw in the Spatial Web’s fundamental blockchain-based data layer, and now cryptographically secured virtual IDs will let you validate colleagues’ identities or any of the virtual avatars we will soon inhabit.

This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.

But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.

Life-like virtual modules are already unlocking countless professional training camps, modifiable in real time and easily updated. Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading. And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.

Welcome to the Spatial Web workplace.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Gerd Altmann from Pixabay


#436507 The Weird, the Wacky, the Just Plain ...

As you know if you’ve ever been to, heard of, or read about the annual Consumer Electronics Show in Vegas, there’s no shortage of tech in any form: gadgets, gizmos, and concepts abound. You probably couldn’t see them all in a month even if you spent all day every day trying.

Given the sheer scale of the show, the number of exhibitors, and the inherent subjectivity of bestowing superlatives, it’s hard to pick out the coolest tech from CES. But I’m going to do it anyway; in no particular order, here are some of the products and concepts that I personally found most intriguing at this year’s event.

e-Novia’s Haptic Gloves
Italian startup e-Novia’s Weart glove uses a ‘sensing core’ to record tactile sensations and an ‘actuation core’ to reproduce those sensations onto the wearer’s skin. Haptic gloves will bring touch to VR and AR experiences, making them that much more life-like. The tech could also be applied to digitization of materials and in gaming and entertainment.

e-Novia’s modular haptic glove
I expected a full glove, but in fact there were two rings that attached to my fingers. Weart co-founder Giovanni Spagnoletti explained that they’re taking a modular approach, so as to better tailor the technology to different experiences. He then walked me through a virtual reality experience that was a sort of simulated science experiment: I had to lift a glass beaker, place it on a stove, pour in an ingredient, open a safe to access some dry ice, add that, and so on. As I went through the steps, I felt the beaker heat up and cool off at the expected times, and felt the liquid moving inside, as well as the pressure of my fingertips against the numbered buttons on the safe.

A virtual (but tactile) science experiment
There was a slight delay between my taking an action and feeling the corresponding tactile sensation, but on the whole, the haptic glove definitely made the experience more realistic—and more fun. Slightly less fun but definitely more significant, Spagnoletti told me Weart is working with a medical group to bring tactile sensations to VR training for surgeons.

Sarcos Robotics’ Exoskeleton
That tire may as well be a feather
Sarcos Robotics unveiled its Guardian XO full-body exoskeleton, which it says can safely lift up to 200 pounds across an extended work session. What’s cool about this particular exoskeleton is that it’s not just a prototype; the company announced a partnership with Delta Air Lines, which will be trialing the technology for aircraft maintenance, engine repair, and luggage handling. In a demo, I watched a petite female volunteer strap into the exoskeleton and easily lift a 50-pound weight with one hand, and a Sarcos employee lift and attach a heavy component of a propeller; she explained that the strength-augmenting function of the exoskeleton can easily be switched on or off—and the wearer’s hands released—to facilitate multi-step tasks.

Hyundai’s Flying Taxi
Where to?
Hyundai and Uber partnered to unveil an air taxi concept. With a 49-foot wingspan, four lift rotors, and four tilt rotors, the aircraft would be flown by a pilot and could carry four passengers at speeds of up to 180 miles per hour. The companies say you’ll be able to ride across your city in one of these by 2030; we’ll see if the regulatory environment, public opinion, and other factors outside of technological capability let that happen.

Mercedes’ Avatar Concept Car
Welcome to the future
As its name suggests, Mercedes’ sweet new Vision AVTR concept car was inspired by the movie Avatar; director James Cameron helped design it. The all-electric car has no steering wheel, transparent doors, seats made of vegan leather, and 33 reptilian-scale-like flaps on the back; its design is meant to connect the driver with both the car and the surrounding environment in a natural, seamless way.

Next-generation scrolling
Offered the chance to ‘drive’ the car, I jumped on it. Placing my hand on the center console started the car, and within seconds it had synced to my heartbeat, which reverberated through the cabin. The whole dashboard, from driver door to passenger door, is one big LED display. It showed a virtual landscape I could select by holding up my hand: as I moved my hand from left to right, different images were projected onto my open palm. Closing my hand on an image selected it, and suddenly it looked like I was in the middle of a lush green mountain range. Applying slight forward pressure on the center console made the car advance through the virtual landscape; it was essentially like playing a really cool video game.

Mercedes is aiming to have a carbon-neutral production fleet by 2039, and to reduce the amount of energy it uses during production by 40 percent by 2030. It’s unclear when—or whether—the man-machine-nature connecting features of the Vision AVTR will start showing up in production, but I for one will be on the lookout.

Waverly Labs’ In-Ear Translator
Waverly Labs unveiled its Ambassador translator earlier this year and has it on display at the show. It’s worn on the ear and uses a far-field microphone array with speech recognition to translate real-time conversations in 20 different languages. Besides in-ear audio, translations can also appear as text on an app or be broadcast live in a conference environment.

It’s kind of like a giant talking earring
I stopped by the booth and tested out the translator with Waverly senior software engineer Georgiy Konovalov. We each hooked on an earpiece, and first, he spoke to me in Russian. After a delay of a couple seconds, I heard his words in—slightly robotic, but fully comprehensible—English. Then we switched: I spoke to him in Spanish, my words popped up on his phone screen in Cyrillic, and he translated them back to English for me out loud.

On the whole, the demo was pretty cool. If you’ve ever been lost in a foreign country whose language you don’t speak, imagine how handy a gadget like this would be. Let’s just hope that once they’re more widespread, these products don’t end up discouraging people from learning languages.

Not to be outdone, Google also announced updates to its Translate product, which is being deployed at information desks in JFK airport’s international terminal, in sports stadiums in Qatar, and by some large hotel chains.

Stratuscent’s Digital Nose
AI is making steady progress towards achieving human-like vision and hearing—but there’s been less work done on mimicking our sense of smell (maybe because it’s less useful in everyday applications). Stratuscent’s digital nose, which it says is based on NASA patents, uses chemical receptors and AI to identify both simple chemicals and complex scents. The company is aiming to create the world’s first comprehensive database of everyday scents, which it says it will use to make “intelligent decisions” for customers. What kind of decisions remains to be seen—and smelled.

Banner Image Credit: The Mercedes Vision AVTR concept car. Photo by Vanessa Bates Ramirez