
#439400 A Neuron’s Sense of Timing Encodes ...

We like to think of brains as computers: physical systems that process inputs and spit out outputs. But, obviously, what’s between your ears bears little resemblance to your laptop.

Computer scientists know the intimate details of how computers store and process information because they design and build them. But neuroscientists didn’t build brains, which makes the brain a bit like a piece of alien technology they’ve found and are trying to reverse engineer.

At this point, researchers have catalogued the components fairly well. We know the brain is a vast and intricate network of cells called neurons that communicate by way of electrical and chemical signals. What’s harder to figure out is how this network makes sense of the world.

To do that, scientists try to tie behavior to activity in the brain by listening to the chatter of its neurons firing. If neurons in a region get rowdy when a person is eating chocolate, well, those cells might be processing taste or directing chewing. This method has mostly focused on the frequency at which neurons fire—that is, how often they fire in a given period of time.

But frequency alone is an imprecise measure. For years, research in rats has suggested that the timing of neurons’ firing relative to their peers—during spatial navigation in particular—may also encode information. This process, in which the timing of some neurons grows increasingly out of step with their neighbors, is called “phase precession.”

It wasn’t known if phase precession was widespread in mammals, but recent studies have found it in bats and marmosets. And now, a new study has shown that it happens in humans too, strengthening the case that phase precession may occur across species.

The new study also found evidence of phase precession outside of spatial tasks, lending some weight to the idea that it may be a more general learning mechanism throughout the brain.

The paper was published in the journal Cell last month by a Columbia University team of researchers led by neuroscientist and biomedical engineer Josh Jacobs.

The researchers say more studies are needed to flesh out the role of phase precession in the brain, and whether or how it contributes to learning is still uncertain.

But to Salman Qasim, a post-doctoral fellow on Jacobs’ team and lead author of the paper, the patterns are tantalizing. “[Phase precession is] so prominent and prevalent in the rodent brain that it makes you want to assume it’s a generalizable mechanism,” he told Quanta Magazine this month.

Rat Brains to Human Brains
Though phase precession in rats has been studied for decades, it’s taken longer to unearth it in humans for a couple of reasons. For one, it’s more challenging to study in humans at the level of individual neurons because it requires placing electrodes deep in the brain. Also, our patterns of brain activity are subtler and more complex, making them harder to untangle.

To solve the first challenge, the team analyzed decade-old recordings of neural chatter from 13 patients with drug-resistant epilepsy. As part of their treatment, the patients had electrodes implanted to map the storms of activity during a seizure.

In one test, they navigated a two-dimensional virtual world—like a simple video game—on a laptop. Their brain activity was recorded as they were instructed to drive and drop off “passengers” at six stores around the perimeter of a rectangular track.

The team combed through this activity for hints of phase precession.

Active regions of the brain tend to fire together at a steady rate. These rhythms, called brain waves, are like a metronome or internal clock. Phase precession occurs when individual neurons fall out of step with the prevailing brain waves nearby. In spatial navigation, as in this study, a particular type of neuron, called a “place cell,” fires earlier and earlier compared to its peers as the subject approaches and passes through a region. Its early firing eventually links up with the late firing of the next place cell in the chain, strengthening the synapse between the two and encoding a path through space.
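The timing relationship is easier to see with numbers. Here is a toy sketch (not the study’s model, and the frequencies are invented for illustration) of why a cell that oscillates slightly faster than the surrounding theta rhythm fires at an earlier theta phase on every cycle:

```python
import numpy as np

theta_freq = 8.0   # hippocampal theta rhythm (Hz); illustrative value
cell_freq = 8.8    # the cell's own rhythm runs slightly faster

# Suppose the cell spikes once per cycle of its own faster oscillation
# while the animal crosses the place field (one second, say).
spike_times = np.arange(0, 1.0, 1.0 / cell_freq)

# Phase of the slower theta wave at each spike, in radians.
theta_phase = (2 * np.pi * theta_freq * spike_times) % (2 * np.pi)

# Because the cell outpaces theta, every spike lands earlier in the
# theta cycle than the last, by a fixed amount: phase precession.
advance_per_spike = 2 * np.pi * (1 - theta_freq / cell_freq)
```

With these made-up numbers each spike arrives about 0.57 radians (roughly a tenth of a cycle) earlier than the previous one, so over the crossing the firing sweeps through most of a theta cycle.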

In rats, theta waves in the hippocampus, which is a region associated with navigation, are strong and clear, making precession easier to pick out. In humans, they’re weaker and more variable. So the team used a clever statistical analysis to widen the observed wave frequencies into a range. And that’s when the phase precession clearly stood out.
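One standard way to quantify precession, sketched below purely for illustration (the paper’s actual pipeline is more involved), is circular-linear regression: pick the slope relating position to firing phase that makes the residual phases cluster most tightly.

```python
import numpy as np

def circular_linear_slope(pos, phase, slopes=np.linspace(-2, 2, 2001)):
    """Estimate the slope (cycles per unit position) relating a linear
    variable to a circular one by choosing the slope whose residual
    phases cluster most tightly (largest mean resultant length)."""
    pos, phase = np.asarray(pos), np.asarray(phase)
    resultant = [np.abs(np.mean(np.exp(1j * (phase - 2 * np.pi * a * pos))))
                 for a in slopes]
    return slopes[int(np.argmax(resultant))]

# Synthetic precessing cell: phase falls by one full theta cycle as the
# animal crosses the field, plus a little noise.
rng = np.random.default_rng(seed=0)
pos = rng.uniform(0.0, 1.0, size=200)
phase = (-2 * np.pi * pos + 0.2 * rng.standard_normal(200)) % (2 * np.pi)

slope = circular_linear_slope(pos, phase)  # expect roughly -1
```

A negative slope is the signature of precession: firing phase advances (gets earlier) as position increases.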

This result lined up with prior navigation studies in rats. But the team went a step further.

In another part of the brain, the frontal cortex, they found phase precession in neurons not involved in navigation. The timing of these cells fell out of step with their neighbors as the subject achieved the goal of dropping passengers off at one of the stores. This indicated phase precession may also encode the sequence of steps leading up to a goal.

The findings, therefore, extend the occurrence of phase precession to humans and to new tasks and regions in the brain. The researchers say this suggests the phenomenon may be a general mechanism that encodes experiences over time. Indeed, other research—some very recent and not yet peer-reviewed—validates this idea, tying it to the processing of sounds, smells, and series of images.

And, the cherry on top, the process compresses experience to the length of a single brain wave. That is, an experience that takes seconds—say, a rat moving through several locations in the real world—is compressed to the fraction of a second it takes the associated neurons to fire in sequence.

In theory, this could help explain how we learn so fast from so few examples, something artificial intelligence algorithms struggle to do.

As enticing as the research is, however, both the team involved in the study and other researchers say it’s still too early to draw definitive conclusions. There are other theories for how humans learn so quickly, and it’s possible phase precession is an artifact of the way the brain functions as opposed to a driver of its information processing.

That said, the results justify more serious investigation.

“Anyone who looks at brain activity as much as we do knows that it’s often a chaotic, stochastic mess,” Qasim told Wired last month. “So when you see some order emerge in that chaos, you want to ascribe to it some sort of functional purpose.”

Only time will tell if that order is a fundamental neural algorithm or something else.

Image Credit: Daniele Franchi / Unsplash

Posted in Human Robots

#439395 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
Need to Fit Billions of Transistors on Your Chip? Let AI Do It
Will Knight | Wired
“Google, Nvidia, and others are training algorithms in the dark arts of designing semiconductors—some of which will be used to run artificial intelligence programs. …This should help companies draw up more powerful and efficient blueprints in much less time.”

DIGITAL MEDIA
AI Voice Actors Sound More Human Than Ever—and They’re Ready to Hire
Karen Hao | MIT Technology Review
“A new wave of startups are using deep learning to build synthetic voice actors for digital assistants, video-game characters, and corporate videos. …Companies can now license these voices to say whatever they need. They simply feed some text into the voice engine, and out will spool a crisp audio clip of a natural-sounding performance.”

AUGMENTED REALITY
5 Years After Pokémon Go, It’s Time for the Metaverse
Steven Levy | Wired
“That’s right, it’s now the fifth anniversary of Pokémon Go and the craze that marked its launch. That phenomenon was not only a milestone for the company behind the game, a Google offshoot called Niantic, but for the digital world in general. Pokémon Go was the first wildly popular implementation of augmented reality, a budding technology at the time, and it gave us a preview of what techno-pundits now believe is the next big thing.”

SPACE
Startups Aim Beyond Earth
Erin Woo | The New York Times
“Investors are putting more money than ever into space technology. Space start-ups raised over $7 billion in 2020, double the amount from just two years earlier, according to the space analytics firm BryceTech. …The boom, many executives, analysts and investors say, is fueled partly by advancements that have made it affordable for private companies—not just nations—to develop space technology and launch products into space.”

ENVIRONMENT
New Fabric Passively Cools Whatever It’s Covering—Including You
John Timmer | Ars Technica
“Without using energy, [passive cooling] materials take heat from whatever they’re covering and radiate it out to space. Most of these efforts have focused on building materials, with the goal of creating roofs that can keep buildings a few degrees cooler than the surrounding air. But now a team based in China has taken the same principles and applied them to fabric, creating a vest that keeps its users about 3º C cooler than they would be otherwise.”

SCIENCE
NASA Is Supporting the Search for Alien Megastructures
Daniel Oberhaus | Supercluster
“For the first time in history, America’s space agency is officially sponsoring a search for alien megastructures. ‘I’m encouraged that we’ve got NASA funding to support this,’ says [UC Berkeley’s Steve] Croft. ‘We’re using a NASA mission to fulfill a stated NASA objective—the search for life in the universe. But we’re doing it through a technosignature search that is not very expensive for NASA compared to some of their biosignature work.’”

ROBOTICS
Boston Dynamics, BTS, and Ballet: The Next Act for Robotics
Sydney Skybetter | Wired
“Even though Boston Dynamics’ dancing robots are currently relegated to the realm of branded spectacle, I am consistently impressed by the company’s choreographic strides. In artists’ hands, these machines are becoming eminently capable of expression through performance. Boston Dynamics is a company that takes dance seriously, and, per its blog post, uses choreography as ‘a form of highly accelerated lifecycle testing for the hardware.’”

INNOVATION
The Rise of ‘ARPA-Everything’ and What It Means for Science
Jeff Tollefson | Nature
“Enamored with the innovation that DARPA fostered in the United States, governments around the world, including in Europe and Japan, have attempted to duplicate the agency within their own borders. …Scientists who have studied the DARPA model say it works if applied properly, and to the right, ‘ARPA-able’ problems. But replicating DARPA’s recipe isn’t easy.”

AUTOMATION
No Driver? No Problem—This Is the Indy Autonomous Challenge
Gregory Leporati | Ars Technica
“The upcoming competition is, in many ways, the spiritual successor of the DARPA Grand Challenge, a robotics race from the early 2000s. …’If you could bring back the excitement of the DARPA Grand Challenge,’ [ESN’s president and CEO] Paul Mitchell continued, ‘and apply it to a really challenging edge use case, like high-speed racing, then that can leap the industry from where it is to where it needs to be to help us realize our autonomous future.’”

Image Credit: Henry & Co. / Unsplash


#439380 Autonomous excavators ready for around ...

Researchers from Baidu Research Robotics and Auto-Driving Lab (RAL) and the University of Maryland, College Park, have introduced an autonomous excavator system (AES) that can perform material loading tasks for a long duration without any human intervention while offering performance closely equivalent to that of an experienced human operator.


#439366 Why Robots Can’t Be Counted On to Find ...

On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.

It seems like robots should be ready to help with something like this. But they aren’t.

JOE RAEDLE/GETTY IMAGES

A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.

The picture above shows what the site of the collapse in Florida looks like. It’s highly unstructured and would pose a challenge for most legged robots to traverse, though a tracked robot could likely manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it’s neither necessary nor practical to duplicate that functionality with a robot, especially when time is critical.

What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.

To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.

According to Murphy, using robots to explore rubble of collapsed buildings is, for the moment, not possible in any kind of way that could be realistically used on a disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through rubble, and the environment isn’t friendly to very small robots either, since there’s frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication or localization is often impossible, so tethers are required, which solves the comms and power problems but can easily get caught or tangled on obstacles.

Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.

And even if you can build a robot that can successfully burrow its way through rubble, there’s the question of what value it’s able to provide once it gets where it needs to be. Robotic sensing systems are in general not designed for extreme close quarters, and visual sensors like cameras can rapidly get damaged or get so much dirt on them that they become useless. Murphy explains that ideally, a rubble-exploring robot would be able to do more than just locate victims, but would also be able to use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim’s organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it’s like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”

Besides these technical challenges, the other huge part to all of this is that any system that you’d hope to use in the context of rescuing people must be fully mature. It’s obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn’t be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it’s safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we’re not likely to see it used in an active rescue.

Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”

The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:

The hose is covered by inclined cilia. Motors with eccentric masses installed in the cable excite vibration, causing an up-and-down motion of the cable. The tips of the cilia stick to the floor when the cable moves down, propelling the body forward. When the cable moves up, the tips slip against the floor, so the body does not move back. Repeating this process lets the cable slowly move through the narrow spaces of rubble piles.
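The gait amounts to a vibration-driven ratchet. A deliberately crude sketch of the idea (all numbers are invented for illustration, not taken from the Active Scope Camera’s specs):

```python
def crawl_distance(n_cycles, advance_per_down=0.5, slip_per_up=0.05):
    """Net travel (mm) of a cilia-ratchet crawler after n_cycles
    vibration cycles: the body advances while the cilia tips grip
    during the down-stroke and loses a little ground when they slip
    during the up-stroke."""
    position = 0.0
    for _ in range(n_cycles):
        position += advance_per_down  # tips stick, body moves forward
        position -= slip_per_up       # tips slip, slight backsliding
    return position

# With these made-up numbers, a 100 Hz vibration nets roughly 45 mm
# of forward travel per second.
per_second = crawl_distance(100)
```

The key property is asymmetry: as long as the grip-phase advance exceeds the slip-phase loss, every vibration cycle produces net forward motion, with no external drive train at all.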

“It’s quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”

Photo: Center for Robot-Assisted Search and Rescue

Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.

The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”

Robots should be able to help out in situations like the one unfolding right now in Florida, and we should be spending more time and effort on research in that direction that could potentially save lives. We’re close, but as with so many aspects of practical robotics, it feels like we’ve been close for years. There are systems out there with a lot of potential; they just need the support necessary to cross the gap from research project to a practical, useful system that can be deployed when needed.


#439357 How the Financial Industry Can Apply AI ...

iStockphoto

THE INSTITUTE Artificial intelligence is transforming the financial services industry. The technology is being used to determine creditworthiness, identify money laundering, and detect fraud.

AI also is helping to personalize services and recommend new offerings by developing a better understanding of customers. Chatbots and other AI assistants have made it easier for clients to get answers to their questions, 24/7.

Although confidence in financial institutions is high, according to the Banking Exchange, that’s not the case with AI. Many people have raised concerns about bias, discrimination, privacy, surveillance, and transparency.

Regulations are starting to take shape to address such concerns. In April the European Commission released the first legal framework to govern use of the technology, as reported in IEEE Spectrum. The proposed European regulations cover a variety of AI applications including credit checks, chatbots, and social credit scoring, which assesses an individual’s creditworthiness based on behavior. The U.S. Federal Trade Commission in April said it expects AI to be used truthfully, fairly, and equitably when it comes to decisions about credit, insurance, and other services.

To ensure the financial industry is addressing such issues, IEEE recently launched a free guide, “Trusted Data and Artificial Intelligence Systems (AIS) for Financial Services.” The authors of the nearly 100-page playbook want to ensure that those involved in developing the technologies are not neglecting human well-being and ethical considerations.

More than 50 representatives from major banks, credit unions, pension funds, and legal and compliance groups in Canada, the United Kingdom, and the United States provided input, as did AI experts from academia and technology companies.

“We are in the business of trust. A primary goal of financial services organizations is to use client and member data to generate new products and services that deliver value,” says Sami Ahmed, a member of the IEEE industry executive steering committee that oversaw the playbook’s creation.

Ahmed is senior vice president of data and advanced analytics of OMERS, Ontario’s municipal government employees’ pension fund and one of the largest institutional investors in Canada.

“Best-in-class guidance assembled from industry experts in IEEE’s finance playbook,” he says, “addresses emerging risks such as bias, fairness, explainability, and privacy in our data and algorithms to inform smarter business decisions and uphold that trust.”

The playbook includes a road map to help organizations develop their systems. To provide a theoretical framework, the document incorporates IEEE’s “Ethically Aligned Design” report, the IEEE 7000 series of AI standards and projects, and the Ethics Certification Program for Autonomous and Intelligent Systems.

“Design looks completely different when a product has already been developed or is in prototype form,” says John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “The primary message of ethically aligned design is to use the methodology outlined in the document to address these issues at the outset.”

Havens adds that although IEEE isn’t well known by financial services regulatory bodies, it does have a lot of credibility in harnessing the technical community and creating consensus-based material.

“That is why IEEE is the right place to publish this playbook, which sets the groundwork for standards development in the future,” he says.

IEEE Member Pavel Abdur-Rahman, chair of the IEEE industry executive steering committee, says the document was necessary to accomplish three things. One was to “verticalize the discussion within financial services for a very industry-specific capability-building dialog. Another was to involve industry participants in the cocreation of this playbook, not only to curate best practices but also to develop and drive adoption of the IEEE standards into their organizations.” Lastly, he says, “it’s the first step toward creating recommended practices for MLOps [machine-learning operations], data cooperatives, and data products and marketplaces.”

Abdur-Rahman is the head of trusted data and AI at IBM Canada.

ROAD MAP AND RESOURCES
The playbook has two sections, a road map for how to build trusted AI systems and resources from experts.

The road map helps organizations identify where they are in the process of adopting responsible ethically aligned design: early, developing, advanced, or mature stage. This section also outlines 20 ways that trusted data and AI can provide value to operating units within a financial organization. Called use cases, the examples include cybersecurity, loan and deposit pricing, improving operational efficiency, and talent acquisition. Graphs are used to break down potential ethical concerns for each use case.

The key resources section includes best practices, educational videos, guidelines, and reports on codes of conduct, ethical challenges, building bots responsibly, and other topics. Among the groups contributing resources are the European Commission, IBM, the IEEE Standards Association, Microsoft, and the World Economic Forum. Also included is a report on the impact the coronavirus pandemic has had on the financial services industry in Canada. Supplemental information includes a list of 84 documents on ethical guidelines.

“We are at a critical junction of industrial-scale AI adoption and acceleration,” says Amy Shi-Nash, a member of the steering committee and the global head of analytics and data science for HSBC. “This IEEE finance playbook is a milestone achievement and provides a much-needed practical road map for organizations globally to develop their trusted data and ethical AI systems.”

To get an evaluation of the readiness of your organization’s AI system, you can anonymously take a 20-minute survey.

IEEE membership offers a wide range of benefits and opportunities for those who share a common interest in technology. If you are not already a member, consider joining IEEE and becoming part of a worldwide network of more than 400,000 students and professionals.
