Tag Archives: work
#437964 How Explainable Artificial Intelligence ...
The field of artificial intelligence has created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at a superhuman level.
However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.
Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.
Learning From Experience
One field of AI, called reinforcement learning, studies how computers can learn from their own experiences. In reinforcement learning, an AI explores the world, receiving positive or negative feedback based on its actions.
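To make that feedback loop concrete, here is a minimal sketch in Python of an agent learning by trial and error. The toy corridor environment and every parameter value are purely illustrative, not taken from any system mentioned in this article.

```python
# A minimal sketch of the reinforcement-learning loop described above:
# an agent acts, receives positive or negative feedback, and updates its
# behavior. The "corridor" world and parameters are illustrative only.
import random

N_STATES = 5          # positions 0..4; reaching position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01  # feedback
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned values point the agent toward the goal.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```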
This approach has led to algorithms that have independently learned to play chess at a superhuman level and prove mathematical theorems without any human guidance. In my work as an AI researcher, I use reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik’s Cube.
Through reinforcement learning, AIs are independently learning to solve problems that even humans struggle to figure out. This has got me and many other researchers thinking less about what AI can learn and more about what humans can learn from AI. A computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.
Peering Into the Black Box
Unfortunately, the minds of superhuman AIs are currently out of reach to us humans. AIs make terrible teachers and are what we in the computer science world call “black boxes.”
AI simply spits out solutions without giving reasons for them. Computer scientists have been trying for decades to open this black box, and recent research has shown that many AI algorithms actually do think in ways that are similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify the animal.
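As a rough, generic illustration of what peeking inside the black box can look like, the sketch below reads out an intermediate feature map of an off-the-shelf image classifier using PyTorch. It is not the method used in the research described here, and the layer choice is arbitrary.

```python
# A generic sketch of inspecting an image classifier's internals by reading
# out an intermediate feature map with a forward hook. With pretrained
# weights, mid-level layers of networks like this tend to respond to parts
# such as eyes, ears, and fur textures.
import torch
from torchvision import models

model = models.resnet18(weights=None)  # load pretrained weights to see meaningful features
model.eval()

activations = {}

def grab(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer3.register_forward_hook(grab("layer3"))  # a mid-level block

image = torch.randn(1, 3, 224, 224)  # stand-in for a real preprocessed photo
with torch.no_grad():
    model(image)

print(activations["layer3"].shape)  # torch.Size([1, 256, 14, 14])
```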
The effort to open up the black box is called explainable AI. My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube.
The Rubik’s Cube is basically a pathfinding problem: Find a path from point A—a scrambled Rubik’s Cube—to point B—a solved Rubik’s Cube. Other pathfinding problems include navigation, theorem proving and chemical synthesis.
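To illustrate the pathfinding framing, the sketch below searches for a sequence of moves from a scrambled state (point A) to a solved state (point B) on a tiny stand-in puzzle. A real Rubik's Cube solver needs a far smarter, learned search strategy, but the shape of the problem is the same.

```python
# A minimal sketch of a puzzle as a pathfinding problem: find a sequence of
# moves from a scrambled state to the solved state. The "swap adjacent
# letters" puzzle stands in for the Rubik's Cube; plain breadth-first search
# stands in for the much more sophisticated search used in practice.
from collections import deque

def moves(state):
    """All states reachable by swapping two adjacent letters."""
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield f"swap {i},{i+1}", "".join(s)

def find_path(start, goal):
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for move, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [move]))
    return None

print(find_path("BCA", "ABC"))  # e.g. ['swap 1,2', 'swap 0,1']
```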
My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions.
Solutions to the Rubik’s Cube can be broken down into a few generalized steps—the first step, for example, could be to form a cross while the second step could be to put the corner pieces in place. While the Rubik’s Cube itself has over 10 to the 19th power possible combinations, a generalized step-by-step guide is very easy to remember and is applicable in many different scenarios.
Approaching a problem by breaking it down into steps is often the default manner in which people explain things to one another. The Rubik’s Cube naturally fits into this step-by-step framework, which gives us the opportunity to open the black box of our algorithm more easily. Creating AI algorithms that have this ability could allow people to collaborate with AI and break down a wide variety of complex problems into easy-to-understand steps.
A step-by-step refinement approach can make it easier for humans to understand why AIs do the things they do. Forest Agostinelli, CC BY-ND
Collaboration Leads to Innovation
Our process starts with using one’s own intuition to define a step-by-step plan thought to potentially solve a complex problem. The algorithm then looks at each individual step and gives feedback about which steps are possible, which are impossible and ways the plan could be improved. The human then refines the initial plan using the advice from the AI, and the process repeats until the problem is solved. The hope is that the person and the AI will eventually converge to a kind of mutual understanding.
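In skeleton form, one round of that collaboration might look like the sketch below. The feedback functions are hypothetical stand-ins reduced to a toy lookup table, included only to show the shape of the back-and-forth, not how the actual algorithm works.

```python
# A hedged sketch of the plan-refinement loop described above. `check_step`
# and `suggest_alternative` are hypothetical stand-ins that consult a toy
# table of which high-level steps are known to be achievable.
ACHIEVABLE = {"form a cross", "place corner pieces", "orient last layer"}

def check_step(step):
    """Return feedback on a single step of the human's plan."""
    return "possible" if step in ACHIEVABLE else "impossible"

def suggest_alternative(step):
    """Offer a replacement for a step the solver cannot achieve."""
    return "orient last layer"

def refine(plan):
    """One round of AI feedback: keep workable steps, swap out the rest."""
    revised = []
    for step in plan:
        if check_step(step) == "possible":
            revised.append(step)
        else:
            revised.append(suggest_alternative(step))
    return revised

plan = ["form a cross", "place corner pieces", "solve everything at once"]
for round_number in range(3):   # human and AI iterate until the plan settles
    new_plan = refine(plan)
    if new_plan == plan:
        break
    plan = new_plan
print(plan)
```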
Currently, our algorithm is able to consider a human plan for solving the Rubik’s Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik’s Cube that a person can understand. Our team’s next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik’s Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.
People are intuitive in a way unmatched by any AI, but machines are far better in their computational power and algorithmic rigor. This back and forth between man and machine draws on the strengths of both. I believe this type of collaboration will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions and innovations that may otherwise have been out of reach.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Serg Antonov / Unsplash
#437957 Meet Assembloids, Mini Human Brains With ...
It’s not often that a twitching, snowman-shaped blob of 3D human tissue makes someone’s day.
But when Dr. Sergiu Pasca at Stanford University witnessed the tiny movement, he knew his lab had achieved something special. You see, the blob had been assembled from three lab-grown chunks of human tissue: a mini-brain, mini-spinal cord, and mini-muscle. Each individual component, churned to eerie humanoid perfection inside bubbling incubators, is already a work of scientific genius. But Pasca took the extra step, marinating the three components together inside a soup of nutrients.
The result was a bizarre, Lego-like human tissue that replicates the basic circuits behind how we decide to move. Without external prompting, when churned together like ice cream, the three ingredients physically linked up into a fully functional circuit. The 3D mini-brain, through the information highway formed by the artificial spinal cord, was able to make the lab-grown muscle twitch on demand.
In other words, if you think isolated mini-brains—known formally as brain organoids—floating in a jar are creepy, upgrade your nightmares. The next big thing in probing the brain is assembloids: free-floating brain circuits that combine brain tissue with an external output.
The end goal isn’t to freak people out. Rather, it’s to recapitulate our nervous system, from input to output, inside the controlled environment of a Petri dish. An autonomous, living brain-spinal cord-muscle entity is an invaluable model for figuring out how our own brains direct the intricate muscle movements that allow us to stay upright, walk, or type on a keyboard.
It’s a step toward more dexterous brain-machine interfaces, and a model for understanding when brain-muscle connections fail—as in devastating conditions like Lou Gehrig’s disease or Parkinson’s, where people slowly lose muscle control due to the gradual death of neurons that control muscle function. Assembloids are a sort of “mini-me,” a workaround for testing potential treatments on a simple “replica” of a person rather than directly on a human.
From Organoids to Assembloids
The miniature snippet of the human nervous system has been a long time in the making.
It all started in 2013, when Dr. Madeline Lancaster, then a postdoc at the Institute of Molecular Biotechnology in Vienna, grew a shockingly intricate 3D replica of human brain tissue inside a whirling incubator. Revolutionarily different from standard cell cultures, which grind up brain tissue and regrow it as a flat network of cells, Lancaster’s 3D brain organoids were incredibly sophisticated in their recapitulation of the human brain during development. Subsequent studies further solidified their similarity to the developing brain of a fetus—not just in terms of neuron types, but also their connections and structure.
With the finding that these mini-brains sparked with electrical activity, bioethicists increasingly raised red flags that the blobs of human brain tissue—no larger than a pea—could harbor the potential to develop a sense of awareness if further matured and given external input and output.
Despite these concerns, brain organoids became an instant hit. Because they’re made of human tissue—often taken from actual human patients and converted into stem-cell-like states—organoids harbor the same genetic makeup as their donors. This makes it possible to study perplexing conditions such as autism, schizophrenia, or other brain disorders in a dish. What’s more, because they’re grown in the lab, it’s possible to genetically edit the mini-brains to test potential genetic culprits in the search for a cure.
Yet mini-brains had an Achilles’ heel: not all were made the same. Rather, depending on the region of the brain that was reverse engineered, the cells had to be persuaded by different cocktails of chemical soups and maintained in isolation. It was a stark contrast to our own developing brains, where regions are connected through highways of neural networks and work in tandem.
Pasca faced the problem head-on. Betting on the brain’s self-assembling capacity, his team hypothesized that it might be possible to grow different mini-brains, each reflecting a different brain region, and have them fuse together into a synchronized band of neuron circuits to process information. Last year, his idea paid off.
In one mind-blowing study, his team grew two separate portions of the brain into blobs, one representing the cortex, the other a deeper part of the brain known to control reward and movement, called the striatum. Shockingly, when put together, the two blobs of human brain tissue fused into a functional couple, automatically establishing neural highways that resulted in one of the most sophisticated recapitulations of a human brain. Pasca crowned this tissue engineering crème-de-la-crème “assembloids,” a portmanteau of “assemble” and “organoids.”
“We have demonstrated that regionalized brain spheroids can be put together to form fused structures called brain assembloids,” said Pasca at the time. “[They] can then be used to investigate developmental processes that were previously inaccessible.”
And if that’s possible for wiring up a lab-grown brain, why wouldn’t it work for larger neural circuits?
Assembloids, Assemble
The new study is the fruition of that idea.
The team started with human skin cells, scraped off of eight healthy people, and transformed them into a stem-cell-like state called induced pluripotent stem cells (iPSCs). These cells have long been touted as the breakthrough for personalized medical treatment, because each reflects the genetic makeup of its original host.
Using two separate cocktails, the team then generated mini-brains and mini-spinal cords from these iPSCs. The two components were placed together “in close proximity” for three days inside a lab incubator, gently floating around each other in an intricate dance. To the team’s surprise, when they looked under the microscope using tracers that glow in the dark, they saw highways of branches extending from one organoid to the other like arms in a tight embrace. When stimulated with electricity, the links fired up, suggesting that the connections weren’t just for show—they were capable of transmitting information.
“We made the parts,” said Pasca, “but they knew how to put themselves together.”
Then came the ménage à trois. Once the mini-brain and spinal cord formed their double-decker ice cream scoop, the team overlaid them onto a layer of muscle cells—cultured separately into a human-like muscular structure. The end result was a somewhat bizarre and silly-looking snowman, made of three oddly-shaped spherical balls.
Yet against all odds, the brain-spinal cord assembly reached out to the lab-grown muscle. Using a variety of tools, including measuring muscle contraction, the team found that this utterly Frankenstein-like snowman was able to make the muscle component contract—in a way similar to how our muscles twitch when needed.
“Skeletal muscle doesn’t usually contract on its own,” said Pasca. “Seeing that first twitch in a lab dish immediately after cortical stimulation is something that’s not soon forgotten.”
When tested for longevity, the contraption lasted for up to 10 weeks without any sort of breakdown. Far from a one-shot wonder, the isolated circuit worked even better the longer each component was connected.
Pasca isn’t the first to give mini-brains an output channel. Last year, the queen of brain organoids, Lancaster, chopped up mature mini-brains into slices, which were then linked to muscle tissue through a cultured spinal cord. Assembloids are a step up, showing that it’s possible to automatically sew multiple nerve-linked structures together, such as brain and muscle, sans slicing.
The question is what happens when these assembloids become more sophisticated, edging ever closer to the inherent wiring that powers our movements. Pasca’s study targets outputs, but what about inputs? Can we wire input channels, such as retinal cells, to mini-brains that have a rudimentary visual cortex to process those examples? Learning, after all, depends on examples of our world, which are processed inside computational circuits and delivered as outputs—potentially, muscle contractions.
To be clear, few would argue that today’s mini-brains are capable of any sort of consciousness or awareness. But as mini-brains get increasingly more sophisticated, at what point can we consider them a sort of AI, capable of computation or even something that mimics thought? We don’t yet have an answer—but the debates are on.
Image Credit: christitzeimaging.com / Shutterstock.com
#437935 Start the New Year Right: By Watching ...
I don’t need to tell you that 2020 was a tough year. There was almost nothing good about it, and we saw it off with a “good riddance” and hopes for a better 2021. But robotics company Boston Dynamics took a different approach to closing out the year: when all else fails, why not dance?
The company released a video last week that I dare you to watch without laughing—or at the very least, cracking a pretty big smile. Because, well, dancing robots are funny. And it’s not just one dancing robot, it’s four of them: two humanoid Atlas bots, one four-legged Spot, and one Handle, a bot-on-wheels built for materials handling.
The robots’ killer moves look almost too smooth and coordinated to be real, leading many to speculate that the video was computer-generated. But if you can trust Elon Musk, there’s no CGI here.
This is not CGI https://t.co/VOivE97vPR
— Elon Musk (@elonmusk) December 29, 2020
Boston Dynamics has gone through a lot of changes over the last ten years; it was acquired by Google in 2013, then sold to Japanese conglomerate SoftBank in 2017 before being acquired again by Hyundai just a few weeks ago for $1.1 billion. But this isn’t the first time they’ve taught a robot to dance and made a video for all the world to enjoy; Spot tore up the floor to “Uptown Funk” back in 2018.
Four-legged Spot went commercial in June, with a hefty price tag of $74,500, and was put to some innovative pandemic-related uses, including remotely measuring patients’ vital signs and reminding people to social distance.
Hyundai plans to implement its newly-acquired robotics prowess for everything from service and logistics robots to autonomous driving and smart factories.
They’ll have their work cut out for them. Besides being hilarious, kind of heartwarming, and kind of creepy all at once, the robots’ new routine is pretty impressive from an engineering standpoint. Compare it to a 2016 video of Atlas trying to pick up a box (I know it’s a machine with no feelings, but it’s hard not to feel a little bit bad for it, isn’t it?), and it’s clear Boston Dynamics’ technology has made huge strides. It wouldn’t be surprising if, in two years’ time, we see a video of a flash mob of robots whose routine includes partner dancing and back flips (which, admittedly, Atlas can already do).
In the meantime, though, this one is pretty entertaining—and not a bad note on which to start the new year.
Image Credit: Boston Dynamics