Tag Archives: summit

#436437 Why AI Will Be the Best Tool for ...

Dmitry Kaminskiy speaks as though he were trying to unload everything he knows about the science and economics of longevity—from senolytics research that seeks to stop aging cells from spewing inflammatory proteins and other molecules to the trillion-dollar life extension industry that he and his colleagues are trying to foster—in one sitting.

At the heart of the discussion with Singularity Hub is the idea that artificial intelligence will be the engine that drives breakthroughs in how we approach healthcare and healthy aging—a concept with little traction even just five years ago.

“At that time, it was considered too futuristic that artificial intelligence and data science … might be more accurate compared to any hypothesis of human doctors,” said Kaminskiy, co-founder and managing partner at Deep Knowledge Ventures, an investment firm that is betting big on AI and longevity.

How times have changed. Artificial intelligence in healthcare is attracting more investments and deals than just about any sector of the economy, according to data research firm CB Insights. In the most recent third quarter, AI healthcare startups raised nearly $1.6 billion, buoyed by a $550 million mega-round from London-based Babylon Health, which uses AI to collect data from patients, analyze the information, find comparable matches, then make recommendations.

Even without the big bump from Babylon Health, AI healthcare startups raised more than $1 billion last quarter, including two companies focused on longevity therapeutics: Juvenescence and Insilico Medicine.

The latter has risen to prominence for its novel use of reinforcement learning and generative adversarial networks (GANs) to accelerate the drug discovery process. Insilico Medicine recently published a seminal paper that demonstrated how such an AI system could generate a drug candidate in just 46 days. Co-founder and CEO Alex Zhavoronkov said he believes there is no greater goal in healthcare today—or, really, any venture—than extending the healthy years of the human lifespan.

“I don’t think that there is anything more important than that,” he told Singularity Hub, explaining that an unhealthy society is detrimental to a healthy economy. “I think that it’s very, very important to extend healthy, productive lifespan just to fix the economy.”

An Aging Crisis
The surge of interest in longevity is coming at a time when life expectancy in the US is actually dropping, despite the fact that we spend more money on healthcare than any other nation.

A new paper in the Journal of the American Medical Association found that after six decades of gains, life expectancy for Americans has decreased since 2014, particularly among young and middle-aged adults. While some of the causes are societal, such as drug overdoses and suicide, others are health-related.

While average life expectancy in the US is 78, Kaminskiy noted that healthy life expectancy is about ten years less.

To Zhavoronkov’s point about the economy (a topic of great interest to Kaminskiy as well), the US spent $1.1 trillion on chronic diseases in 2016, according to a report from the Milken Institute, with diabetes, cardiovascular conditions, and Alzheimer’s among the costliest conditions for the healthcare system. When the indirect costs of lost economic productivity are included, the total price tag of chronic diseases in the US is $3.7 trillion, nearly 20 percent of GDP.

“So this is the major negative feedback on the national economy and creating a lot of negative social [and] financial issues,” Kaminskiy said.

Investing in Longevity
That has convinced Kaminskiy that an economy focused on extending healthy human lifespans—including the financial instruments and institutions required to support a long-lived population—is the best way forward.

He has co-authored a book on the topic with Margaretta Colangelo, another managing partner at Deep Knowledge Ventures, which has launched a specialized investment fund, Longevity.Capital, focused on the longevity industry. Kaminskiy estimates that there are now about 20 such investment funds dedicated to funding life extension companies.

In November at the inaugural AI for Longevity Summit in London, he and his collaborators also introduced the Longevity AI Consortium, an academic-industry initiative at King’s College London. Eventually, the research center will include an AI Longevity Accelerator program to serve as a bridge between startups and UK investors.

Deep Knowledge Ventures has committed about £7 million ($9 million) over the next three years to the accelerator program, as well as to establishing similar consortiums in other regions of the world, according to Franco Cortese, a partner at Longevity.Capital and director of the Aging Analytics Agency, which has produced a series of reports on longevity.

A Cure for What Ages You
One of the most recent is an overview of Biomarkers for Longevity. A biomarker, in the case of longevity, is a measurable component of health that can indicate a disease state or a more general decline in health associated with aging. Examples range from something as simple as BMI as an indicator of obesity, which is associated with a number of chronic diseases, to sophisticated measurements of telomeres, the protective ends of chromosomes that shorten as we age.

While some researchers are working on moonshot therapies to reverse or slow aging—with a few even arguing we could extend human life on the order of centuries—Kaminskiy said he believes understanding biomarkers of aging could make more radical interventions unnecessary.

In this vision of healthcare, people would be able to monitor their health 24-7, with sensors attuned to various biomarkers that could indicate the onset of everything from the flu to diabetes. AI would be instrumental not just in ingesting the billions of data points required to develop such a system, but also in determining what therapies, treatments, or micro-doses of a drug or supplement would be required to maintain homeostasis.

“Consider it like Tesla with many, many detectors, analyzing the behavior of the car in real time, and a cloud computing system monitoring those signals in real time with high frequency,” Kaminskiy explained. “So the same shall be applied for humans.”

And only sophisticated algorithms, Kaminskiy argued, can make longevity healthcare work at mass scale while remaining tailored to the individual. Precision medicine becomes preventive medicine. Healthcare truly becomes a system to support health rather than a way to fight disease.
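The kind of always-on biomarker monitoring described here can be sketched as a simple anomaly detector over a sensor stream. This is a toy illustration only: the window size, threshold, and heart-rate values below are invented, and a real system would fuse many biomarkers with learned models rather than a rolling z-score.

```python
from collections import deque
import statistics

def make_monitor(window=50, threshold=3.0):
    """Flag readings that deviate sharply from a rolling baseline."""
    history = deque(maxlen=window)

    def check(reading):
        if len(history) >= 10:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # guard against zero spread
            alert = abs((reading - mean) / stdev) > threshold
        else:
            alert = False  # not enough baseline data yet
        history.append(reading)
        return alert

    return check

# Simulate a resting heart-rate stream with one sudden anomaly at the end.
monitor = make_monitor()
stream = [62, 63, 61, 64, 62, 63, 62, 61, 63, 62, 64, 95]
alerts = [monitor(x) for x in stream]
```

Only the final, out-of-pattern reading trips the alert; the steady baseline does not.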

Image Credit: Photo by h heyerlein on Unsplash

Posted in Human Robots

#435791 To Fly Solo, Racing Drones Have a Need ...

Drone racing’s ultimate vision of quadcopters weaving nimbly through obstacle courses has attracted far less excitement and investment than self-driving cars aimed at reshaping ground transportation. But the U.S. military and defense industry are betting on autonomous drone racing as the next frontier for developing AI so that it can handle high-speed navigation within tight spaces without human intervention.

The autonomous drone challenge requires split-second decision-making with six degrees of freedom instead of a car’s mere two degrees of road freedom. One research team developing the AI necessary for controlling autonomous racing drones is the Robotics and Perception Group at the University of Zurich in Switzerland. In late May, the Swiss researchers were among nine teams revealed to be competing in the two-year AlphaPilot open innovation challenge sponsored by U.S. aerospace company Lockheed Martin. The winning team will walk away with up to $2.25 million for beating other autonomous racing drones and a professional human drone pilot in head-to-head competitions.

“I think it is important to first point out that having an autonomous drone to finish a racing track at high speeds or even beating a human pilot does not imply that we can have autonomous drones [capable of] navigating in real-world, complex, unstructured, unknown environments such as disaster zones, collapsed buildings, caves, tunnels or narrow pipes, forests, military scenarios, and so on,” says Davide Scaramuzza, a professor of robotics and perception at the University of Zurich and ETH Zurich. “However, the robust and computationally efficient state estimation algorithms, control, and planning algorithms developed for autonomous drone racing would represent a starting point.”

The nine teams that made the cut—from a pool of 424 AlphaPilot applicants—will compete in four 2019 racing events organized under the Drone Racing League’s Artificial Intelligence Robotic Racing Circuit, says Keith Lynn, program manager for AlphaPilot at Lockheed Martin. To ensure an apples-to-apples comparison of each team’s AI secret sauce, each AlphaPilot team will upload its AI code into identical, specially built drones that have the NVIDIA Xavier GPU at the core of the onboard computing hardware.

“Lockheed Martin is offering mentorship to the nine AlphaPilot teams to support their AI tech development and innovations,” says Lynn. The company “will be hosting a week-long Developers Summit at MIT in July, dedicated to workshopping and improving AlphaPilot teams’ code,” he added. He notes that each team will retain the intellectual property rights to its AI code.

The AlphaPilot challenge takes inspiration from older autonomous drone racing events hosted by academic researchers, Scaramuzza says. He credits Hyungpil Moon, a professor of robotics and mechanical engineering at Sungkyunkwan University in South Korea, for having organized the annual autonomous drone racing competition at the International Conference on Intelligent Robots and Systems since 2016.

It’s no easy task to create and train AI that can perform high-speed flight through complex environments by relying on visual navigation. One big challenge comes from how drones can accelerate rapidly, take sharp turns, fly sideways, fly zig-zag patterns, and even perform back flips. That means camera images can suddenly appear tilted or even upside down during drone flight. Motion blur may occur when a drone flies very close to structures at high speeds and camera pixels collect light from multiple directions. Both cameras and visual software can also struggle to compensate for sudden changes between light and dark parts of an environment.

To lend AI a helping hand, Scaramuzza’s group recently published a drone racing dataset that includes realistic training data taken from a drone flown by a professional pilot in both indoor and outdoor spaces. The data, which includes complicated aerial maneuvers such as back flips, flight sequences that cover hundreds of meters, and flight speeds of up to 83 kilometers per hour, was presented at the 2019 IEEE International Conference on Robotics and Automation.

The drone racing dataset also includes data captured by the group’s special bioinspired event cameras that can detect changes in motion on a per-pixel basis within microseconds. By comparison, ordinary cameras need milliseconds (each millisecond being 1,000 microseconds) to compare motion changes in each image frame. The event cameras have already proven capable of helping drones nimbly dodge soccer balls thrown at them by the Swiss lab’s researchers.
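The per-pixel change detection that event cameras perform asynchronously in hardware can be approximated in software by differencing successive frames. The sketch below is illustrative only: the threshold, the tiny 2×2 “frames,” and the event tuple format are invented for the example, and a real event camera does not operate on whole frames at all.

```python
def frame_diff_events(prev_frame, frame, timestamp, threshold=10):
    """Emit (x, y, t, polarity) events for pixels whose brightness changed enough."""
    events = []
    for y, (row_prev, row) in enumerate(zip(prev_frame, frame)):
        for x, (a, b) in enumerate(zip(row_prev, row)):
            if abs(b - a) >= threshold:
                polarity = 1 if b > a else -1  # brighter = +1, darker = -1
                events.append((x, y, timestamp, polarity))
    return events

# Two 2x2 grayscale "frames": one pixel gets brighter, one darker.
prev = [[100, 100], [100, 100]]
curr = [[100, 160], [40, 100]]
events = frame_diff_events(prev, curr, timestamp=0.000125)
```

The output stream contains only the two changed pixels, which is why event data can be processed on microsecond timescales instead of waiting for full frames.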

The Swiss group’s work on the racing drone dataset received funding in part from the U.S. Defense Advanced Research Projects Agency (DARPA), which acts as the U.S. military’s special R&D arm for more futuristic projects. Specifically, the funding came from DARPA’s Fast Lightweight Autonomy program that envisions small autonomous drones capable of flying at high speeds through cluttered environments without GPS guidance or communication with human pilots.

Such speedy drones could serve as military scouts checking out dangerous buildings or alleys. They could also someday help search-and-rescue teams find people trapped in semi-collapsed buildings or lost in the woods. Being able to fly at high speed without crashing into things also makes a drone more efficient at all sorts of tasks by making the most of limited battery life, Scaramuzza says. After all, most drone battery life gets used up by the need to hover in flight and doesn’t get drained much by flying faster.

Even if AI manages to conquer the drone racing obstacle courses, that would be the end of the beginning of the technology’s development. What would still be required? Scaramuzza specifically singled out the need to handle low-visibility conditions involving smoke, dust, fog, rain, snow, fire, and hail as one of the biggest challenges for vision-based algorithms and AI in complex real-life environments.

“I think we should develop and release datasets containing smoke, dust, fog, rain, fire, etc. if we want to allow using autonomous robots to complement human rescuers in saving people’s lives after an earthquake or natural disaster in the future,” Scaramuzza says.


#435769 The Ultimate Optimization Problem: How ...

Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.

As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.

The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)
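Joppa’s notion of an objective function can be made concrete with a toy optimization. Everything in this sketch is invented for illustration: the land-use utility model, the weights, and the diminishing-returns assumption are hypothetical, not anything AI for Earth actually computes.

```python
def objective(farm_fraction, food_weight=0.6, biodiversity_weight=0.4):
    """A made-up utility score for splitting land between farming and wilderness.

    Square roots model diminishing returns: the first hectare of farmland
    (or wilderness) is worth more than the last.
    """
    food = farm_fraction ** 0.5
    biodiversity = (1 - farm_fraction) ** 0.5
    return food_weight * food + biodiversity_weight * biodiversity

# Grid-search the single parameter we control to maximize the objective.
best = max((f / 1000 for f in range(1001)), key=objective)
```

Once an objective is written down, maximizing it is mechanical; as Joppa notes, the hard part is society agreeing on the weights in the first place.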

Photo: Microsoft

Lucas Joppa

AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.

In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.

Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.

I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?

Such questions are increasingly urgent, as climate change has already begun reshaping our planet and our societies. Global sea and air surface temperatures have already risen by an average of 1 degree Celsius above preindustrial levels, according to the Intergovernmental Panel on Climate Change.

Today, people all around the world participated in a “climate strike,” with young people leading the charge and demanding a global transition to renewable energy. On Monday, world leaders will gather in New York for the United Nations Climate Action Summit, where they’re expected to present plans to limit warming to 1.5 degrees Celsius.

Joppa says such summit discussions should aim for a truly holistic solution.

We talk about how to solve climate change. There’s a higher-order question for society: What climate do we want? What output from nature do we want and desire? If we could agree on those things, we could put systems in place for optimizing our environment accordingly. Instead we have this scattered approach, where we try for local optimization. But the sum of local optimizations is never a global optimization.
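Joppa’s point that summed local optimizations miss the global optimum shows up even in textbook problems. Here is a hedged toy example using coin change rather than anything climate-related: a greedy strategy that always grabs the locally best (largest) coin uses more coins than the global optimum.

```python
from itertools import combinations_with_replacement

def greedy_change(amount, coins=(4, 3, 1)):
    """Make change by always taking the largest coin that fits (local optimization)."""
    used = []
    for c in coins:
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def optimal_change(amount, coins=(4, 3, 1)):
    """Exhaustively search for the fewest coins (global optimization)."""
    for n in range(1, amount + 1):
        for combo in combinations_with_replacement(coins, n):
            if sum(combo) == amount:
                return list(combo)

greedy = greedy_change(6)    # locally best choices: 4, then 1, then 1
optimal = optimal_change(6)  # globally best: two 3s
```

Each greedy step is individually optimal, yet the sequence of them is not, which is the structure of Joppa’s argument about piecemeal environmental fixes.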

There’s increasing interest in using artificial intelligence to tackle global environmental problems. New sensing technologies enable scientists to collect unprecedented amounts of data about the planet and its denizens, and AI tools are becoming vital for interpreting all that data.

The 2018 report “Harnessing AI for the Earth,” produced by the World Economic Forum and the consulting company PwC, discusses ways that AI can be used to address six of the world’s most pressing environmental challenges: climate change, biodiversity, healthy oceans, water security, clean air, and disaster resilience.

Many of the proposed applications involve better monitoring of human and natural systems, as well as modeling applications that would enable better predictions and more efficient use of natural resources.

Joppa says that AI for Earth is taking a two-pronged approach, funding efforts to collect and interpret vast amounts of data alongside efforts that use that data to help humans make better decisions. And that’s where the global optimization engine would really come in handy.

For any location on earth, you should be able to go and ask: What’s there, how much is there, and how is it changing? And more importantly: What should be there?

On land, the data is really only interesting for the first few hundred feet. Whereas in the ocean, the depth dimension is really important.

We need a planet with sensors, with roving agents, with remote sensing. Otherwise our decisions aren’t going to be any good.

AI for Earth isn’t going to create such an online portal within five years, Joppa stresses. But he hopes the projects that he’s funding will contribute to making such a portal possible—eventually.

We’re asking ourselves: What are the fundamental missing layers in the tech stack that would allow people to build a global optimization engine? Some of them are clear, some are still opaque to me.

By the end of five years, I’d like to have identified these missing layers, and have at least one example of each of the components.

Some of the projects that AI for Earth has funded seem to fit that desire. Examples include SilviaTerra, which used satellite imagery and AI to create a map of the 92 billion trees in forested areas across the United States. There’s also OceanMind, a non-profit that detects illegal fishing and helps marine authorities enforce compliance. Platforms like Wildbook and iNaturalist enable citizen scientists to upload pictures of animals and plants, aiding conservation efforts and research on biodiversity. And FarmBeats aims to enable data-driven agriculture with low-cost sensors, drones, and cloud services.

It’s not impossible to imagine putting such services together into an optimization engine that knows everything about the land, the water, and the creatures who live on planet Earth. Then we’ll just have to tell that engine what we want to do about it.

Editor’s note: This story is published in cooperation with more than 250 media organizations and independent journalists that have focused their coverage on climate change ahead of the UN Climate Action Summit. IEEE Spectrum’s participation in the Covering Climate Now partnership builds on our past reporting about this global issue.


#435687 Humanoid Robots Teach Coping Skills to ...

Photo: Rob Felt

IEEE Senior Member Ayanna Howard with one of the interactive androids that help children with autism improve their social and emotional engagement.

THE INSTITUTE: Children with autism spectrum disorder can have a difficult time expressing their emotions and can be highly sensitive to sound, sight, and touch. That sometimes restricts their participation in everyday activities, leaving them socially isolated. Occupational therapists can help them cope better, but the time they’re able to spend is limited and the sessions tend to be expensive.

Roboticist Ayanna Howard, an IEEE senior member, has been using interactive androids to guide children with autism on ways to socially and emotionally engage with others—as a supplement to therapy. Howard is chair of the School of Interactive Computing and director of the Human-Automation Systems Lab at Georgia Tech. She helped found Zyrobotics, a Georgia Tech VentureLab startup that is working on AI and robotics technologies to engage children with special needs. Last year Forbes named Howard, Zyrobotics’ chief technology officer, one of the Top 50 U.S. Women in Tech.

In a recent study, Howard and other researchers explored how robots might help children navigate sensory experiences. The experiment involved 18 participants between the ages of 4 and 12; five had autism, and the rest were meeting typical developmental milestones. Two humanoid robots were programmed to express boredom, excitement, nervousness, and 17 other emotional states. As children explored stations set up for hearing, seeing, smelling, tasting, and touching, the robots modeled what the socially acceptable responses should be.

“If a child’s expression is one of happiness or joy, the robot will have a corresponding response of encouragement,” Howard says. “If there are aspects of frustration or sadness, the robot will provide input to try again.” The study suggested that many children with autism exhibit stronger levels of engagement when the robots interact with them at such sensory stations.

It is one of many robotics projects Howard has tackled. She has designed robots for researching glaciers, and she is working on assistive robots for the home, as well as an exoskeleton that can help children who have motor disabilities.

Howard spoke about her work during the Ethics in AI: Impacts of (Anti?) Social Robotics panel session held in May at the IEEE Vision, Innovation, and Challenges Summit in San Diego. You can watch the session on IEEE.tv.

The next IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony will be held on 15 May 2020 at the JW Marriott Parq Vancouver hotel, in Vancouver.

In this interview with The Institute, Howard talks about how she got involved with assistive technologies, the need for a more diverse workforce, and ways IEEE has benefited her career.

FOCUS ON ACCESSIBILITY
Howard was inspired to work on technology that can improve accessibility in 2008 while teaching high school students at a summer camp devoted to science, technology, engineering, and math.

“A young lady with a visual impairment attended camp. The robot programming tools being used at the camp weren’t accessible to her,” Howard says. “As an engineer, I want to fix problems when I see them, so we ended up designing tools to enable access to programming tools that could be used in STEM education.

“That was my starting motivation, and this theme of accessibility has expanded to become a main focus of my research. One of the things about this world of accessibility is that when you start interacting with kids and parents, you discover another world out there of assistive technologies and how robotics can be used for good in education as well as therapy.”

DIVERSITY OF THOUGHT
The Institute asked Howard why it’s important to have a more diverse STEM workforce and what could be done to increase the number of women and others from underrepresented groups.

“The makeup of the current engineering workforce isn’t necessarily representative of the world, which is composed of different races, cultures, ages, disabilities, and socio-economic backgrounds,” Howard says. “We’re creating products used by people around the globe, so we have to ensure they’re being designed for a diverse population. As IEEE members, we also need to engage with people who aren’t engineers, and we don’t do that enough.”

Educational institutions are doing a better job of increasing diversity in areas such as gender, she says, adding that more work is needed because the enrollment numbers still aren’t representative of the population and the gains don’t necessarily carry through after graduation.

“There has been an increase in the number of underrepresented minorities and females going into engineering and computer science,” she says, “but data has shown that their numbers are not sustained in the workforce.”

ROLE MODEL
Because today’s college campuses have more underrepresented groups that can form a community, the lack of engineering role models, while still a concern on campuses, is more acute for preuniversity students, Howard says.

“Depending on where you go to school, you may not know what an engineer does or even consider engineering as an option,” she says, “so there’s still a big disconnect there.”

Howard has been involved for many years in math- and science-mentoring programs for at-risk high school girls. She tells them to find what they’re passionate about and combine it with math and science to create something. She also advises them not to let anyone tell them that they can’t.

Howard’s father is an engineer. She says he never encouraged or discouraged her to become one, but when she broke something, he would show her how to fix it and talk her through the process. Along the way, he taught her a logical way of thinking she says all engineers have.

“When I would try to explain something, he would quiz me and tell me to ‘think more logically,’” she says.

Howard earned a bachelor’s degree in engineering from Brown University, in Providence, R.I. She then received a master’s degree and a doctorate in electrical engineering from the University of Southern California. Before joining the faculty of Georgia Tech in 2005, she worked at NASA’s Jet Propulsion Laboratory at the California Institute of Technology for more than a decade as a senior robotics researcher and deputy manager in the Office of the Chief Scientist.

ACTIVE VOLUNTEER
Howard’s father was also an IEEE member, but that’s not why she joined the organization. She says she signed up when she was a student because, “that was something that you just did. Plus, my student membership fee was subsidized.”

She kept the membership as a grad student because of the discounted rates members receive on conferences.

Those conferences have had an impact on her career. “They allow you to understand what the state of the art is,” she says. “Back then you received a printed conference proceeding and reading through it was brutal, but by attending it in person, you got a 15-minute snippet about the research.”

Howard is an active volunteer with the IEEE Robotics and Automation and the IEEE Systems, Man, and Cybernetics societies, holding many positions and serving on several committees. She is also featured in the IEEE Impact Creators campaign. These members were selected because they inspire others to innovate for a better tomorrow.

“I value IEEE for its community,” she says. “One of the nice things about IEEE is that it’s international.”


#435676 Intel’s Neuromorphic System Hits 8 ...

At the DARPA Electronics Resurgence Initiative Summit today in Detroit, Intel plans to unveil an 8-million-neuron neuromorphic system comprising 64 Loihi research chips—codenamed Pohoiki Beach. Loihi chips are built with an architecture that more closely matches the way the brain works than do chips designed to do deep learning or other forms of AI. For the set of problems that such “spiking neural networks” are particularly good at, Loihi is about 1,000 times as fast as a CPU and 10,000 times as energy efficient. The new 64-Loihi system represents the equivalent of 8 million neurons, but that’s just a step to a 768-chip, 100-million-neuron system that the company plans for the end of 2019.
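The “spiking” behavior that distinguishes chips like Loihi from deep-learning accelerators can be illustrated with a textbook leaky integrate-and-fire neuron. This is a deliberately simplified model with invented parameters, not Intel’s actual neuron implementation: the neuron accumulates input, leaks charge over time, and fires a discrete spike only when it crosses a threshold.

```python
def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: integrate input, leak, spike and reset at threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # emit a spike...
            v = 0.0               # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady input drives periodic spiking; when input stops, the potential decays silently.
spikes = simulate_lif([0.4] * 10 + [0.0] * 5)
```

Because information is carried in sparse, discrete spikes rather than dense activations, hardware built around this model can stay idle between events, which is one source of the efficiency gains Davies describes.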

Intel and its research partners are just beginning to test what massive neural systems like Pohoiki Beach can do, but so far the evidence points to even greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.

“We’re quickly accumulating results and data that there are definite benefits… mostly in the domain of efficiency. Virtually every one that we benchmark…we find significant gains in this architecture,” he says.

Going from a single Loihi to 64 of them is more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning,” says Davies. “The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step.”

Photo: Tim Herman/Intel Corporation

One of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips, shown here interfaced to an Intel Arria 10 FPGA development kit. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips.

Finding algorithms that run well on an 8-million-neuron system and optimizing those algorithms in software is a considerable effort, he says. Still, the payoff could be huge. Neural networks that are more brain-like, such as those Loihi runs, could be immune to some of artificial intelligence’s—for lack of a better word—dumbness.

For example, today’s neural networks suffer from something called catastrophic forgetting. If you tried to teach a trained neural network to recognize something new—a new road sign, say—by simply exposing the network to the new input, it would disrupt the network so badly that it would become terrible at recognizing anything. To avoid this, you have to completely retrain the network from the ground up. (DARPA’s Lifelong Learning, or L2M, program is dedicated to solving this problem.)

(Here’s my favorite analogy: Say you coached a basketball team, and you raised the net by 30 centimeters while nobody was looking. The players would miss a bunch at first, but they’d figure things out quickly. If those players were like today’s neural networks, you’d have to pull them off the court and teach them the entire game over again—dribbling, passing, everything.)
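Catastrophic forgetting is easy to reproduce in miniature. In this hedged sketch, a one-feature logistic classifier learns a toy “task A,” is then fine-tuned on “task B” alone, and loses task A; all data and hyperparameters are invented for illustration.

```python
import math

def train(data, w=0.0, b=0.0, lr=0.5, epochs=200):
    """Plain stochastic gradient descent on a 1D logistic classifier."""
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def accuracy(data, w, b):
    return sum((w * x + b > 0) == (y == 1) for x, y in data) / len(data)

task_a = [(-2, 0), (-1, 0), (1, 1), (2, 1)]   # decision boundary near x = 0
task_b = [(3, 0), (4, 0), (6, 1), (7, 1)]     # decision boundary near x = 5

w, b = train(task_a)
acc_before = accuracy(task_a, w, b)           # task A fully learned

w, b = train(task_b, w, b)                    # fine-tune on task B data only
acc_after = accuracy(task_a, w, b)            # task A performance collapses
```

Fitting task B drags the boundary to x ≈ 5, so the positive task A examples near x = 1 and x = 2 are now misclassified: learning the new thing destroyed the old, exactly the failure the raised-basketball-net analogy describes.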

Loihi can run networks that might be immune to catastrophic forgetting, meaning it learns a bit more like a human. In fact, there’s evidence, through a research collaboration with Thomas Cleland’s group at Cornell University, that Loihi can achieve what’s called one-shot learning: learning a new feature after being exposed to it only once. The Cornell group showed this by abstracting a model of the olfactory system so that it would run on Loihi. When exposed to a new virtual scent, the system not only avoided catastrophically forgetting everything else it had smelled, it also learned to recognize the new scent from that single exposure.

Loihi might also be able to run feature-extraction algorithms that are immune to the kinds of adversarial attacks that befuddle today’s image recognition systems. Traditional neural networks don’t really understand the features they’re extracting from an image in the way our brains do. “They can be fooled with simplistic attacks like changing individual pixels or adding a screen of noise that wouldn’t fool a human in any way,” Davies explains. But the sparse-coding algorithms Loihi can run work more like the human visual system and so wouldn’t fall for such shenanigans. (Disturbingly, humans are not completely immune to such attacks.)

Photo: Tim Herman/Intel Corporation

A close-up shot of Loihi, Intel’s neuromorphic research chip. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of 64 of these Loihi chips.

Researchers have also been using Loihi to improve real-time control for robotic systems. For example, last week at the Telluride Neuromorphic Cognition Engineering Workshop—an event Davies called “summer camp for neuromorphics nerds”—researchers were hard at work using a Loihi-based system to control a foosball table. “It strikes people as crazy,” he says. “But it’s a nice illustration of neuromorphic technology. It’s fast, requires quick response, quick planning, and anticipation. These are what neuromorphic chips are good at.”
