AlphaFold Proves That AI Can Crack ...
Any successful implementation of artificial intelligence hinges on asking the right questions in the right way. That’s what the British AI company DeepMind (a subsidiary of Alphabet) accomplished when it used its neural network to tackle one of biology’s grand challenges, the protein-folding problem. Its neural net, known as AlphaFold, was able to predict the 3D structures of proteins based on their amino acid sequences with unprecedented accuracy.
AlphaFold’s predictions at the 14th Critical Assessment of protein Structure Prediction (CASP14) were accurate to within an atom’s width for most of the proteins. The competition consisted of blindly predicting the structure of proteins that have only recently been experimentally determined—with some still awaiting determination.
Called the building blocks of life, proteins consist of 20 different amino acids in various combinations and sequences. A protein's biological function is tied to its 3D structure. Therefore, knowledge of the final folded shape is essential to understanding how a specific protein works—how it interacts with other biomolecules, how it may be controlled or modified, and so on. “Being able to predict structure from sequence is the first real step towards protein design,” says Janet M. Thornton, director emeritus of the European Bioinformatics Institute. It also has enormous benefits in understanding disease-causing pathogens. For instance, at the moment the structures of only about 18 of the 26 proteins in the SARS-CoV-2 virus are known.
Predicting a protein’s 3D structure is a computational nightmare. In 1969 Cyrus Levinthal estimated that there are 10^300 possible conformational combinations for a single protein, which would take longer than the age of the known universe to evaluate by brute force calculation. AlphaFold can do it in a few days.
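To get a feel for the scale Levinthal described, here is a back-of-the-envelope sketch in Python. The 10^300 figure comes from the estimate above; the brute-force sampling rate is an illustrative assumption, not a measured number:

```python
# Back-of-the-envelope illustration of Levinthal's paradox.
# Assumes ~10^300 candidate conformations and a (wildly optimistic)
# hypothetical sampler testing 10^15 conformations per second.

conformations = 10**300
rate_per_second = 10**15            # assumed brute-force speed
seconds_needed = conformations // rate_per_second

age_of_universe_s = int(4.35e17)    # ~13.8 billion years in seconds
multiples = seconds_needed // age_of_universe_s

# Report orders of magnitude rather than unreadable 280-digit numbers.
print(f"Brute force would need ~10^{len(str(seconds_needed)) - 1} seconds,")
print(f"roughly 10^{len(str(multiples)) - 1} times the age of the universe.")
```

Even granting an absurdly fast sampler, exhaustive search is hopeless, which is why a learned predictor finishing in days is remarkable.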
As scientific breakthroughs go, AlphaFold’s discovery is right up there with the likes of James Watson and Francis Crick’s DNA double-helix model, or, more recently, Jennifer Doudna and Emmanuelle Charpentier’s CRISPR-Cas9 genome editing technique.
How did a team that just a few years ago was teaching an AI to master a 3,000-year-old game end up training one to answer a question plaguing biologists for five decades? That, says Briana Brownell, data scientist and founder of the AI company PureStrategy, is the beauty of artificial intelligence: The same kind of algorithm can be used for very different things.
“Whenever you have a problem that you want to solve with AI,” she says, “you need to figure out how to get the right data into the model—and then the right sort of output that you can translate back into the real world.”
DeepMind’s success, she says, wasn’t so much a function of picking the right neural nets but rather “how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] actually answer the question.”
AlphaFold showed promise in 2018, when DeepMind entered a previous iteration of its AI at CASP13, achieving the highest accuracy among all participants. The team had trained its neural network to model target shapes from scratch, without using previously solved proteins as templates.
For 2020, the team deployed new deep-learning architectures, using an attention-based model trained end-to-end. Attention in a deep learning network refers to a component that manages and quantifies the interdependence between the input and output elements, as well as among the input elements themselves.
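The attention idea can be illustrated with a minimal scaled dot-product sketch in NumPy. This is a generic attention mechanism for intuition only, not DeepMind's actual AlphaFold architecture:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic attention: weight each value by how strongly its key
    matches each query, with the match scores normalized by a softmax."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # pairwise query-key affinity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                          # weighted mix of values

# Toy example: 3 sequence positions, 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per input position
```

The key property for protein modeling is that every position can draw information from every other position in one step, regardless of how far apart they sit in the sequence.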
The system was trained on public datasets of the approximately 170,000 known experimental protein structures in addition to databases with protein sequences of unknown structures.
“If you look at the difference between their entry two years ago and this one, the structure of the AI system was different,” says Brownell. “This time, they’ve figured out how to translate the real world into data … [and] created an output that could be translated back into the real world.”
Like any AI system, AlphaFold may need to contend with biases in its training data. For instance, Brownell says, AlphaFold uses available information about protein structures that have been measured in other ways. But many proteins still have unknown 3D structures, so, she says, a bias could conceivably creep in toward the kinds of proteins for which we have more structural data.
Thornton says it’s difficult to predict how long it will take for AlphaFold’s breakthrough to translate into real-world applications.
“We only have experimental structures for about 10 per cent of the 20,000 proteins [in] the human body,” she says. “A powerful AI model could unveil the structures of the other 90 per cent.”
Apart from increasing our understanding of human biology and health, she adds, “it is the first real step toward… building proteins that fulfill a specific function. From protein therapeutics to biofuels or enzymes that eat plastic, the possibilities are endless.”
Video Friday: Japan’s Gundam Robot ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today’s videos.
Another BIG step for Japan’s Gundam project.
[ Gundam Factory ]
We present an interactive design system that allows users to create sculpting styles and fabricate clay models using a standard 6-axis robot arm. Given a general mesh as input, the user iteratively selects sub-areas of the mesh through decomposition and embeds the design expression into an initial set of toolpaths by modifying key parameters that affect the visual appearance of the sculpted surface finish. We demonstrate the versatility of our approach by designing and fabricating different sculpting styles over a wide range of clay models.
[ Disney Research ]
China’s Chang’e-5 completed the drilling, sampling and sealing of lunar soil at 04:53 BJT on Wednesday, marking the first automatic sampling on the Moon, the China National Space Administration (CNSA) announced Wednesday.
[ CCTV ]
Red Hat’s been putting together an excellent documentary on Willow Garage and ROS, and all five parts have just been released. We posted Part 1 a little while ago, so here’s Part 2 and Part 3.
Parts 4 and 5 are at the link below!
[ Red Hat ]
Congratulations to ANYbotics on a well-deserved raise!
ANYbotics has origins in the Robotic Systems Lab at ETH Zurich, and ANYmal’s heritage can be traced back at least as far as StarlETH, which we first met at ICRA 2013.
[ ANYbotics ]
Most conventional robots work with 0.05–0.1 mm accuracy. Such accuracy requires high-end components like low-backlash gears, high-resolution encoders, complicated CNC parts, powerful motor drives, and so on. In combination, these add up to an expensive solution that is either unaffordable or unnecessary for many applications. Apicoo Robotics aims to provide customers with solutions at a much lower cost and with higher stability.
[ Apicoo Robotics ]
The Skydio 2 is an incredible drone that can take incredible footage fully autonomously, but it definitely helps if you do incredible things in incredible places.
[ Skydio ]
Jueying is China's first quadruped robot built for industrial applications and scenarios. It can work alongside (or replace) humans to reach any place a person can reach, with superior environmental adaptability, excellent dynamic balance, and precise environmental perception. By carrying functional modules for different application scenarios in its safe payload area, the robot's mobility can be combined with commercial functional modules, providing solutions for smart factories, smart parks, exhibitions, and public safety.
[ DeepRobotics ]
We have developed a semi-autonomous quadruped robot, called LASER-D (Legged-Agile-Smart-Efficient Robot for Disinfection), for performing disinfection in cluttered environments. The robot is equipped with a spray-based disinfection system and leverages its body motion to control the spray action without the need for an extra stabilization mechanism. The system includes image processing to verify disinfected regions with high accuracy, allowing the robot to carry out effective disinfection tasks while safely traversing cluttered environments, climbing stairs and slopes, and navigating slippery surfaces.
[ USC Viterbi ]
We propose the “multi-vision hand”, in which a number of small high-speed cameras are mounted on the hand of a common 7-degrees-of-freedom robot arm. We also propose visual-servoing control using a multi-vision system that combines the multi-vision hand with external fixed high-speed cameras. The target task was ball catching, which requires high-speed operation. In the proposed catching control, the catch position of the ball, estimated by the external fixed high-speed cameras, is corrected by the multi-vision hand in real time.
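The correction step can be pictured as a simple blend of the two camera systems' estimates. This is a generic sketch with made-up positions and gain, not the Namiki Laboratory's actual controller:

```python
def corrected_catch_position(external_estimate, hand_estimate, gain=0.6):
    """Blend the catch point predicted by the fixed external cameras
    with the close-range estimate from the hand-mounted cameras.
    A higher gain trusts the hand cameras more as the ball nears."""
    return [e + gain * (h - e) for e, h in zip(external_estimate, hand_estimate)]

# External cameras predict the ball arrives at (0.40, 0.10, 0.55) m;
# the hand cameras, seeing the ball up close, estimate (0.42, 0.08, 0.56) m.
catch = corrected_catch_position([0.40, 0.10, 0.55], [0.42, 0.08, 0.56])
print(catch)
```

In a real system the gain would vary with distance and the update would run at the cameras' full frame rate.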
More details available through IROS on-demand.
[ Namiki Laboratory ]
Shunichi Kurumaya wrote in to share his work on PneuFinger, a pneumatically actuated compliant robotic gripping system.
[ Nakamura Lab ]
Thanks Shunichi!
Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent, e.g., “Go to the large green bowl.” The training process, then, interrelates the different modalities to encode the correlations between language, perception, and motion. The resulting language-conditioned visuomotor policies can be conditioned at run time on new human commands and instructions, which allows for more fine-grained control over the trained policies while also reducing situational ambiguity.
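The conditioning idea can be sketched in a few lines: fuse a perception feature with a language feature and map the result to a motor command. All names, sizes, and the random "encoders" below are hypothetical stand-ins, not the ASU system's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoders: in the real system these would be learned
# jointly; here they are fixed random projections for illustration.
W_obs = rng.normal(size=(12, 16))    # camera observation -> feature
W_lang = rng.normal(size=(8, 16))    # language embedding -> feature
W_policy = rng.normal(size=(32, 7))  # fused features -> 7-DoF action

def language_conditioned_policy(obs, instruction_embedding):
    """Fuse a perception feature with a language feature and map
    the result to one motor command per control step."""
    obs_feat = np.tanh(obs @ W_obs)
    lang_feat = np.tanh(instruction_embedding @ W_lang)
    fused = np.concatenate([obs_feat, lang_feat])
    return fused @ W_policy

obs = rng.normal(size=12)    # stand-in for an image embedding
instr = rng.normal(size=8)   # stand-in for "go to the large green bowl"
action = language_conditioned_policy(obs, instr)
print(action.shape)  # (7,)
```

Changing only the instruction embedding changes the action, which is what lets one trained policy be steered by new commands at run time.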
[ ASU ]
Thanks Heni!
Gita is on sale for the holidays for only $2,000.
[ Gita ]
This video introduces a computational approach for routing thin artificial muscle actuators through hyperelastic soft robots in order to achieve a desired deformation behavior. Provided with a robot design and a set of example deformations, we continuously co-optimize the routing of actuators and their actuation to approximate the example deformations as closely as possible.
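The basic shape of such a co-optimization is alternating improvement of two coupled variables. A toy sketch with a stand-in objective (a real system would evaluate a hyperelastic simulation, not this one-line surrogate):

```python
import random

random.seed(0)

def deformation_error(routing, actuation):
    """Toy stand-in objective: squared distance of a 'simulated'
    deformation from a target value of 1.5."""
    return (routing * actuation - 1.5) ** 2

# Alternate between perturbing the routing and the actuation,
# keeping a change only when it reduces the deformation error.
routing, actuation = 1.0, 1.0
for _ in range(200):
    for which in range(2):
        step = random.gauss(0, 0.1)
        cand_r = routing + step if which == 0 else routing
        cand_a = actuation + step if which == 1 else actuation
        if deformation_error(cand_r, cand_a) < deformation_error(routing, actuation):
            routing, actuation = cand_r, cand_a

print(routing * actuation)  # converges toward the target 1.5
```

The alternating accept-if-better loop is the crudest possible co-optimizer; the point is only that routing and actuation are improved jointly rather than fixed one at a time.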
[ Disney Research ]
Researchers and mountain rescuers in Switzerland are making huge progress in the field of autonomous drones as the technology becomes more in-demand for global search-and-rescue operations.
[ SWI ]
This short clip of the Ghost Robotics V60 features an interesting, if awkward looking, righting behavior at the end.
[ Ghost Robotics ]
Europe’s Rosalind Franklin ExoMars rover has a younger ‘sibling’, ExoMy. The blueprints and software for this mini-version of the full-size Mars explorer are available for free so that anyone can 3D print, assemble and program their own ExoMy.
[ ESA ]
The holiday season is here, and with the added impact of Covid-19, consumer demand is at an all-time high. Berkshire Grey is the partner that today’s leading organizations turn to when it comes to fulfillment automation.
[ Berkshire Grey ]
Until very recently, the vast majority of studies and reports on the use of cargo drones for public health were almost exclusively focused on the technology. The driving interest was in the range these drones could travel, how much they could carry, and how they worked. Little to no attention was paid to the human side of these projects. Community perception, community engagement, consent, and stakeholder feedback were rarely if ever addressed. This webinar presents the findings from a very recent study that finally sheds some light on the human side of drone delivery projects.
[ WeRobotics ]