
#439183 This Week’s Awesome Tech Stories From ...

ROBOTICS
The Robot Surgeon Will See You Now
Cade Metz | The New York Times
“Real scalpels, artificial intelligence—what could go wrong? …The [Berkeley] project is a part of a much wider effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones and warehouse robots, researchers are working to automate surgical robots too. These methods are still a long way from everyday use, but progress is accelerating.”

FUTURE
This Tech Was Science Fiction 20 Years Ago. Now It’s Reality
Luke Dormehl | Digital Trends
“A couple of decades ago, kids were reading Harry Potter books, Pixar movies were all the rage, and Microsoft’s Xbox and Sony’s PlayStation were battling it out for video game supremacy. That doesn’t sound all that different from 2021. But technology has come a long way in that time. Not only is today’s tech far more powerful than it was 20 years ago, but a lot of the gadgets we thought of as science fiction have become part of our lives.”

LONGEVITY
How Long Can We Live?
Ferris Jabr | The New York Times Magazine
“As the global population approaches eight billion, and science discovers increasingly promising ways to slow or reverse aging in the lab, the question of human longevity’s potential limits is more urgent than ever. When their work is examined closely, it’s clear that longevity scientists hold a wide range of nuanced perspectives on the future of humanity.”

3D PRINTING
Forget Digging for Fossils. This Museum Printed a Full T-Rex Skeleton Instead
Luke Dormehl | Digital Trends
“For a team of researchers at the Naturalis Biodiversity Center in Leiden, the Netherlands, copying a T. rex took some state-of-the-art laser scanning technology, a giant 3D printer, a just-as-sizable postage bill, almost 45 million square millimeters of acrylic paint, and a group of experts wishing to push the boundaries of additive manufacturing.”

HEALTH
One Vaccine to Rule Them All
James Hamblin | The Atlantic
“‘A universal SARS-CoV-2 vaccine is step one,’ [Anthony] Fauci said. Step two would be a universal coronavirus vaccine, capable of protecting us not only from SARS-CoV-2 in all its forms, but also from the inevitable emergence of new and different coronaviruses that might cause future pandemics. The race to create such a vaccine may prove one of the great feats of a generation.”

TECHNOLOGY
These Materials Could Make Science Fiction a Reality
John Markoff | The New York Times
“Imagine operating a computer by moving your hands in the air as Tony Stark does in Iron Man. Or using a smartphone to magnify an object as does the device that Harrison Ford’s character uses in Blade Runner. …These advances and a host of others on the horizon could happen because of metamaterials, making it possible to control beams of light with the same ease that computer chips control electricity.”

DRONES
Wingcopter Debuts a Triple-Drop Drone to Create ‘Logistical Highways in the Sky’
Aria Alamalhodaei | TechCrunch
“The Wingcopter 198, which was revealed Tuesday, is capable of making three separate deliveries per flight, the company said. Wingcopter has couched this multi-stop capability as a critical feature that will allow it to grow a cost-efficient—and hopefully profitable—drone-delivery-as-a-service business.”

SPACE
The Asteroid Impact Simulation Has Ended in Disaster
George Dvorsky | Gizmodo
“An international exercise to simulate an asteroid striking Earth has come to an end. With just six days to go before a fictitious impact, things don’t look good for a 185-mile-wide region between Prague and Munich. …This may sound like a grim role-playing game, but it’s very serious business. Led by NASA’s Jet Propulsion Laboratory’s Center for Near Earth Object Studies, the asteroid impact simulation is meant to prepare scientists, planners, and key decision makers for the real thing, should it ever occur.”

Image Credit: mitsal dian / Unsplash

#439157 This Week’s Awesome Tech Stories From ...

COMPUTING
Now for AI’s Latest Trick: Writing Computer Code
Will Knight | Wired
“It can take years to learn how to write computer code well. SourceAI, a Paris startup, thinks programming shouldn’t be such a big deal. The company is fine-tuning a tool that uses artificial intelligence to write code based on a short text description of what the code should do. Tell the company’s tool to ‘multiply two numbers given by a user,’ for example, and it will whip up a dozen or so lines in Python to do just that.”
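For a sense of scale, the program for a prompt like that needs only a few lines. The article doesn't reproduce SourceAI's actual output, so the following Python is a hypothetical rendering of what such a tool might emit:

```python
# Hypothetical output for the prompt "multiply two numbers given by a user."
# SourceAI's real generated code isn't shown in the article.

def multiply(a: float, b: float) -> float:
    """Return the product of two numbers."""
    return a * b

if __name__ == "__main__":
    x = float(input("Enter the first number: "))
    y = float(input("Enter the second number: "))
    print(f"{x} x {y} = {multiply(x, y)}")
```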

SPACE
NASA’s Perseverance Rover Just Turned CO2 Into Oxygen
Morgan McFall-Johnsen | Business Insider
“That’s good news for the prospect of sending human explorers to Mars. Oxygen takes up a lot of room on a spacecraft, and it’s unlikely that astronauts will be able to bring enough with them to Mars. So they’ll need to produce their own oxygen from the Martian atmosphere, both for breathing and for fueling rockets to return to Earth.”
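MOXIE, the rover instrument behind the demo, splits CO2 by solid oxide electrolysis (2 CO2 -> 2 CO + O2), so stoichiometry caps how much oxygen each kilogram of atmosphere can yield. A quick back-of-the-envelope calculation:

```python
# Stoichiometry of MOXIE-style CO2 electrolysis: 2 CO2 -> 2 CO + O2
M_CO2, M_O2 = 44.01, 32.00  # molar masses in g/mol

def o2_from_co2(grams_co2: float) -> float:
    """Grams of O2 obtainable from a given mass of CO2 (2 mol CO2 -> 1 mol O2)."""
    return grams_co2 / (2 * M_CO2) * M_O2

# Each kilogram of Martian CO2 fed through the process yields at most ~364 g of O2.
print(f"{o2_from_co2(1000):.0f} g of O2 per kg of CO2")
```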

ARTIFICIAL INTELLIGENCE
Latest Neural Nets Solve World’s Hardest Equations Faster Than Ever Before
Anil Ananthaswamy | Quanta
“…researchers have built new kinds of artificial neural networks that can approximate solutions to partial differential equations orders of magnitude faster than traditional PDE solvers. And once trained, the new neural nets can solve not just a single PDE but an entire family of them without retraining.”
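The article doesn't include code, but the key trick in one of the approaches it covers, the Fourier neural operator, is to learn multipliers applied to a function's Fourier modes. The toy PyTorch sketch below learns the solution operator of the 1D heat equation from data; it uses a single linear spectral layer (enough here because the heat equation is linear), whereas real implementations stack several such layers with pointwise nonlinearities:

```python
import torch
import torch.nn as nn

class SpectralLayer1d(nn.Module):
    """Learn a complex multiplier for the lowest `modes` Fourier coefficients."""
    def __init__(self, modes):
        super().__init__()
        self.modes = modes
        self.w_re = nn.Parameter(0.1 * torch.randn(modes))
        self.w_im = nn.Parameter(0.1 * torch.randn(modes))

    def forward(self, u):                       # u: (batch, n) real samples
        u_hat = torch.fft.rfft(u)               # (batch, n // 2 + 1), complex
        out = torch.zeros_like(u_hat)
        weight = torch.complex(self.w_re, self.w_im)
        out[:, :self.modes] = u_hat[:, :self.modes] * weight
        return torch.fft.irfft(out, n=u.shape[-1])

n, modes, nu, t_final = 128, 16, 0.05, 1.0
k = torch.arange(n // 2 + 1).float()
decay = torch.exp(-nu * k**2 * t_final)         # exact heat-equation transfer function

def make_batch(batch=256):
    # Random smooth initial conditions built from the lowest Fourier modes.
    coeffs = torch.zeros(batch, n // 2 + 1, dtype=torch.cfloat)
    coeffs[:, 1:modes] = torch.randn(batch, modes - 1, dtype=torch.cfloat)
    u0 = torch.fft.irfft(coeffs, n=n)           # u(x, 0)
    uT = torch.fft.irfft(coeffs * decay, n=n)   # u(x, t_final), solved exactly
    return u0, uT

model = SpectralLayer1d(modes)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(500):
    u0, uT = make_batch()
    loss = ((model(u0) - uT) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final MSE: {loss.item():.2e}")
```

Once trained, the layer maps any new initial condition to its solution in a single forward pass; the Quanta piece describes extensions that generalize across whole families of related equations.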

SPACE
NASA’s Bold Bet on Starship for the Moon May Change Spaceflight Forever
Eric Berger | Ars Technica
“Until now, the plans NASA had contemplated for human exploration in deep space all had echoes of the Apollo program. …By betting on Starship, which entails a host of development risks, NASA is taking a chance on what would be a much brighter future. One in which not a handful of astronauts go to the Moon or Mars, but dozens and then hundreds. In this sense, Starship represents a radical departure for NASA and human exploration.”

AUTOMATION
Who Will Win the Self-Driving Race? Here Are Eight Possibilities
Timothy B. Lee | Ars Technica
“…predicting what the next couple of years will bring is a challenge. So rather than offering a single prediction, here are eight: I’ve broken down the future into eight possible scenarios, each with a rough probability. …A decade from now, we’ll be able to look back and say which companies or approaches were on the right track. For now, we can only guess.”

TECHNOLOGY
Europe’s Proposed Limits on AI Would Have Global Consequences
Will Knight | Wired
“The rules are the most significant international effort to regulate AI to date, covering facial recognition, autonomous driving, and the algorithms that drive online advertising, automated hiring, and credit scoring. The proposed rules could help shape global norms and regulations around a promising but contentious technology.”

SCIENCE
What Do You Call a Bunch of Black Holes: A Crush? A Scream?
Dennis Overbye | The New York Times
“[Astrophysicist Jocelyn Kelly Holley-Bockelmann] was trying to run a Zoom meeting of the [Laser Interferometer Space Antenna] recently ‘when one of the members said his daughter was wondering what you call a collective of black holes—and then the meeting fell apart, with everyone trying to one-up one another,’ she said in an email. ‘Each time I saw a suggestion, I had to stop and giggle like a loon, which egged us all on more.’”

ENVIRONMENT
Stopping Plastic in Rivers From Reaching the Ocean With New Tech From the Ocean Cleanup Project
Stephen Beacham | CNET
“First announced by Ocean Cleanup founder and CEO Boyan Slat in 2019, the Interceptors are moored to river beds and use the currents to snag debris floating on the surface. Then they direct the trash onto a conveyor belt that shuttles it into six large onboard dumpsters. The Interceptors run completely autonomously day and night, getting power from solar panels.”

FUTURE
Hackers Used to Be Humans. Soon, AIs Will Hack Humanity
Bruce Schneier | Wired
“Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long. As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope.”

Image Credit: NASA (Image of Martian sand dunes taken by NASA’s Curiosity rover)

#439132 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
15 Graphs You Need to See to Understand AI in 2021
Charles Q. Choi | IEEE Spectrum
“If you haven’t had time to read the AI Index Report for 2021, which clocks in at 222 pages, don’t worry—we’ve got you covered. The massive document, produced by the Stanford Institute for Human-Centered Artificial Intelligence, is packed full of data and graphs, and we’ve plucked out 15 that provide a snapshot of the current state of AI.”

FUTURE
Geoffrey Hinton Has a Hunch About What’s Next for Artificial Intelligence
Siobhan Roberts | MIT Technology Review
“Back in November, the computer scientist and cognitive psychologist Geoffrey Hinton had a hunch. After a half-century’s worth of attempts—some wildly successful—he’d arrived at another promising insight into how the brain works and how to replicate its circuitry in a computer.”

ROBOTICS
Robotic Exoskeletons Could One Day Walk by Themselves
Charles Q. Choi | IEEE Spectrum
“Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user’s current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.”

TECHNOLOGY
Microsoft Buys AI Speech Tech Company Nuance for $19.7 Billion
James Vincent | The Verge
“The $19.7 billion acquisition of Nuance is Microsoft’s second-largest behind its purchase of LinkedIn in 2016 for $26 billion. It comes at a time when speech tech is improving rapidly, thanks to the deep learning boom in AI, and there are simultaneously more opportunities for its use.”

ENVIRONMENT
Google’s New 3D Time-Lapse Feature Shows How Humans Are Affecting the Planet
Sam Rutherford | Gizmodo
“Described by Google Earth director Rebecca Moore as the biggest update to Google Earth since 2017, Timelapse in Google Earth combines more than 24 million satellite photos, two petabytes of data, and 2 million hours of CPU processing time to create a 4.4-terapixel interactive view showing how the Earth has changed from 1984 to 2020.”

GENETICS
The Genetic Mistakes That Could Shape Our Species
Zaria Gorvett | BBC
“New technologies may have already introduced genetic errors to the human gene pool. How long will they last? And how could they affect us? …According to [Stanford’s Hank] Greely, who has written a book about the implications of He [Jiankui]’s project, the answer depends on what the edits do and how they’re inherited.”

SPACE
The Era of Reusability in Space Has Begun
Eric Berger | Ars Technica
“As [Earth orbit] becomes more cluttered [due to falling launch costs], the responsible thing is to more actively refuel, recycle, and dispose of satellites. Northrop Grumman has made meaningful progress toward such a future of satellite servicing. As a result, reusability is now moving into space.”

COMPUTING
100 Million More IoT Devices Are Exposed—and They Won’t Be the Last
Lily Hay Newman | Wired
“Over the last few years, researchers have found a shocking number of vulnerabilities in seemingly basic code that underpins how devices communicate with the internet. Now a new set of nine such vulnerabilities are exposing an estimated 100 million devices worldwide, including an array of internet-of-things products and IT management servers.”

Image Credit: Naitian (Tony) Wang / Unsplash

#439070 Are Digital Humans the Next Step in ...

In the fictional worlds of film and TV, artificial intelligence has been depicted as so advanced that it is indistinguishable from humans. But what if we’re actually getting closer to a world where AI is capable of thinking and feeling?

Tech company UneeQ is embarking on that journey with its “digital humans.” These avatars act as visual interfaces for customer service chatbots, virtual assistants, and other applications. UneeQ’s digital humans appear lifelike not only in terms of language and tone of voice, but also because of facial movements: raised eyebrows, a tilt of the head, a smile, even a wink. They transform a transaction into an interaction: creepy yet astonishing, human, but not quite.

What lies beneath UneeQ’s digital humans? Their 3D faces are modeled on actual human features. Speech recognition enables the avatar to understand what a person is saying, and natural language processing is used to craft a response. Before the avatar utters a word, specific emotions and facial expressions are encoded within the response.
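In outline, that pipeline (recognize speech, craft a reply, tag it with an emotion and a facial expression before rendering) might look like the following sketch. All names here are hypothetical stand-ins, not UneeQ's actual API:

```python
from dataclasses import dataclass

@dataclass
class AvatarResponse:
    text: str         # what the avatar will say
    emotion: str      # tone encoded into the reply, e.g. "warm", "apologetic"
    expression: str   # facial cue rendered alongside speech, e.g. "smile"

def respond(transcript: str) -> AvatarResponse:
    # `transcript` stands in for the output of the speech recognition stage;
    # this function plays the role of the NLP step that crafts the reply.
    if "refund" in transcript.lower():
        return AvatarResponse(
            text="I'm sorry about that. Let me start a refund for you.",
            emotion="apologetic",
            expression="raised_eyebrows",
        )
    return AvatarResponse(
        text="Happy to help! What can I do for you today?",
        emotion="warm",
        expression="smile",
    )

print(respond("I'd like a refund for my order"))
```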

UneeQ may be part of a larger trend towards humanizing computing. ObEN’s digital avatars serve as virtual identities for celebrities, influencers, gaming characters, and other entities in the media and entertainment industry. Meanwhile, Soul Machines is taking a more biological approach, with a “digital brain” that simulates aspects of the human brain to modulate the emotions “felt” and “expressed” by its “digital people.” Amelia is employing a similar methodology in building its “digital employees.” It emulates parts of the brain involved with memory to respond to queries and, with each interaction, learns to deliver more engaging and personalized experiences.

Shiwali Mohan, an AI systems scientist at the Palo Alto Research Center, is skeptical of these digital beings. “They’re humanlike in their looks and the way they sound, but that in itself is not being human,” she says. “Being human is also how you think, how you approach problems, and how you break them down; and that takes a lot of algorithmic design. Designing for human-level intelligence is a different endeavor than designing graphics that behave like humans. If you think about the problems we’re trying to design these avatars for, we might not need something that looks like a human—it may not even be the right solution path.”

And even if these avatars appear near-human, they still evoke an uncanny valley feeling. “If something looks like a human, we have high expectations of them, but they might behave in ways that clash with how humans instinctively expect other humans to react. These differences give rise to the uncanny valley feeling,” says Mohan.

Yet the demand is there, with Amelia seeing high adoption of its digital employees across the financial, health care, and retail sectors. “We find that banks and insurance companies, which are so risk-averse, are leading the adoption of such disruptive technologies because they understand that the risk of non-adoption is much greater than the risk of early adoption,” says Chetan Dube, Amelia’s CEO. “Unless they innovate their business models and make them much more efficient digitally, they might be left behind.” Dube adds that the COVID-19 pandemic has accelerated adoption of digital employees in health care and retail as well.

Amelia, Soul Machines, and UneeQ are taking their digital beings a step further, enabling organizations to create avatars themselves using low-code or no-code platforms: Digital Employee Builder for Amelia, Creator for UneeQ, and Digital DNA Studio for Soul Machines. Unreal Engine, a game engine developed by Epic Games, is doing the same with MetaHuman Creator, a tool that allows anyone to create photorealistic digital humans. “The biggest motivation for Digital Employee Builder is to democratize AI,” Dube says.

Mohan is cautious about this approach. “AI has problems with bias creeping in from data sets and into the way it speaks. The AI community is still trying to figure out how to measure and counter that bias,” she says. “[Companies] have to have an AI expert on board that can recommend the right things to build for.”

Despite being wary of the technology, Mohan supports the purpose behind these virtual beings and is optimistic about where they’re headed. “We do need these tools that support humans in different kinds of things. I think the vision is the pro, and I’m behind that vision,” she says. “As we develop more sophisticated AI technology, we would then have to implement novel ways of interacting with that technology. Hopefully, all of that is designed to support humans in their goals.”

#439042 How Scientists Used Ultrasound to Read ...

Thanks to neural implants, mind reading is no longer science fiction.

As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!

Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through the brain’s protective membrane and into the tissue beneath. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.

Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.

In monkeys, the technology could reliably predict their eye movement and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If adopted by humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.

“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.

To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.

Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.
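The geometry behind that is simple: an echo that returns after a round-trip time t, in tissue where sound travels at speed c, marks a boundary at depth c·t/2 (halved because the pulse travels out and back). A quick illustration:

```python
# Depth of a reflecting boundary from the round-trip echo delay.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a standard average for soft tissue

def echo_depth(round_trip_seconds: float) -> float:
    """Boundary depth in meters: speed times time, halved for the return trip."""
    return SPEED_OF_SOUND_TISSUE * round_trip_seconds / 2

# An echo arriving 65 microseconds after the pulse implies a boundary ~5 cm deep.
print(f"{echo_depth(65e-6) * 100:.1f} cm")
```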

Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.
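One consequence of reading a vascular proxy is that the measured signal is a smoothed, delayed copy of the underlying neural activity, which is also why, as noted later in this piece, it lags electrical recordings by a couple of seconds. The toy numpy illustration below uses a gamma-shaped response function, a convention borrowed from fMRI analysis; the Doppler signal in functional ultrasound differs in detail:

```python
import numpy as np

dt = 0.1                                   # seconds per sample
t = np.arange(0, 20, dt)

# A brief burst of neural activity from 2 s to 3 s.
neural = ((t >= 2) & (t < 3)).astype(float)

# Gamma-shaped response: blood flow ramps up and decays over several seconds.
h = t**2 * np.exp(-t / 0.9)
h /= h.sum()

# The measured vascular signal is the neural signal smeared by that response.
vascular = np.convolve(neural, h)[: len(t)]

peak_delay = t[np.argmax(vascular)] - 2.0
print(f"vascular signal peaks ~{peak_delay:.1f} s after activity onset")
```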

“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.

There are plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?

One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.

The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.

The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, a part of the brain’s “motor” circuitry that plans movement. If right now you’re thinking about scrolling down this page, that’s the brain region already activated, before your fingers actually perform the movement.

Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either left or right, then flashed. The monkeys’ task was to flick their eyes to the most recent dot. It’s something that seems easy for us, but requires sophisticated brain computation.

The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.

Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech

As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
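Stripped down, the decoding problem is a classification task: reduce each trial’s ultrasound frames to a feature vector, then predict left versus right. The article doesn’t specify the study’s algorithm, so the scikit-learn sketch below is a generic stand-in (PCA for dimensionality reduction plus a linear discriminant) run on simulated data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Simulated stand-in data: 200 trials of flattened ultrasound activity maps.
n_trials, n_voxels = 200, 500
labels = rng.integers(0, 2, n_trials)        # 0 = left, 1 = right
pattern = rng.normal(size=n_voxels)          # class-dependent spatial signature
trials = np.outer(labels - 0.5, pattern) + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Dimensionality reduction plus a linear classifier, one prediction per trial.
decoder = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
accuracy = cross_val_score(decoder, trials, labels, cv=5).mean()
print(f"cross-validated single-trial accuracy: {accuracy:.0%}")
```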

That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.

Even more impressive is the resolution. Sound waves seem omnipresent, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—roughly the width of 10 neurons.

A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, don’t worry. The new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. However, the brain itself is spared. This means that, compared to electrodes, ultrasound could do less damage and potentially keep reading the mind far longer than anything currently possible.

There are downsides. Focused ultrasound is far younger than electrode-based neural implants, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech also requires a wire linking the device to a computer, which is off-putting to many people and will limit widespread adoption. Add to that the inherent drawback of reading blood flow rather than electrical activity: the signal lags behind electrical recordings by roughly two seconds.

All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.

“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.

Image Credit: Free-Photos / Pixabay
