Tag Archives: knowledge
#439081 Classify This Robot-Woven Sneaker With ...
For athletes trying to run fast, the right shoe can be essential to achieving peak performance. For athletes trying to run as fast as humanly possible, a runner’s shoe can also become a work of individually customized engineering.
This is why Adidas has married 3D printing with robotic automation in a mass-market footwear project it calls Futurecraft.Strung, expected to be available for purchase as soon as later this year. Using a customized, 3D-printed sole, a Futurecraft.Strung manufacturing robot can place some 2,000 threads from up to 10 different sneaker yarns in one upper section of the shoe.
Skylar Tibbits, founder and co-director of the Self-Assembly Lab and associate professor in MIT's Department of Architecture, says that because of its small scale, footwear has been an area of focus for 3D printing and additive manufacturing, which involves adding material bit by bit.
“There are really interesting complex geometry problems,” he says. “It’s pretty well suited.”
Photo: Adidas
Beginning with a 3D-printed sole, Adidas robots weave together some 2,000 threads from up to 10 different sneaker yarns to make one Futurecraft.Strung shoe—expected on the marketplace later this year or sometime in 2022.
Adidas began working on the Futurecraft.Strung project in 2016. Then two years later, Adidas Futurecraft, the company’s innovation incubator, began collaborating with digital design studio Kram/Weisshaar. In less than a year the team built the software and hardware for the upper part of the shoe, called Strung uppers.
“Most 3D printing in the footwear space has been focused on the midsole or outsole, like the bottom of the shoe,” Tibbits explains. But now, he says, Adidas is bringing robotics and a threaded design to the upper part of the shoe. The company bases its Futurecraft.Strung design on high-resolution scans of how runners’ feet move as they travel.
This more flexible design can benefit athletes in multiple sports, according to an Adidas blog post. It will be able to use motion capture of an athlete’s foot, along with feedback from the athlete, to tailor the design to that athlete’s gait. Adidas customizes the weaving of the shoe’s “fabric” (really more like an elaborate woven string figure, a cat’s cradle to fit the foot) to achieve a close and comfortable fit, the company says.
What Adidas calls its “4D sole” combines 3D printing with materials that can change their shape and properties over time. In fact, Tibbits coined the term 4D printing to describe this process in 2013. The company takes customized data from the Adidas Athlete Intelligent Engine to make the shoe, according to Kram/Weisshaar’s website.
Photo: Adidas
Closeup of the weaving process behind a Futurecraft.Strung shoe
“With Strung for the first time, we can program single threads in any direction, where each thread has a different property or strength,” Fionn Corcoran-Tadd, an innovation designer at Adidas’ Futurecraft lab, said in a company video. Each thread serves a purpose, the video noted. “This is like customized string art for your feet,” Tibbits says.
Although the robotics technology the company uses has been around for many years, what Adidas’s robotic weavers can achieve with thread is a matter of elaborate geometry. “It’s more just like a really elegant way to build up material combining robotics and the fibers and yarns into these intricate and complex patterns,” he says.
Robots can of course create patterns with more precision than a person winding thread by hand, and they can rapidly and reliably change the yarn and color of the fabric pattern. Adidas says it can make a single upper in 45 minutes and a pair of sneakers in 1 hour and 30 minutes, and it plans to reduce this time to minutes in the months ahead.
An Adidas spokesperson says sneakers incorporating the Futurecraft.Strung uppers design are a prototype, but the company plans to bring a Strung shoe to market in late 2021 or 2022. However, Adidas Futurecraft sneakers are currently available with a 3D-printed midsole.
Adidas plans to continue gathering data from athletes to customize the uppers of sneakers. “We’re building up a library of knowledge and it will get more interesting as we aggregate data of testing and from different athletes and sports,” the Adidas Futurecraft team writes in a blog post. “The more we understand about how data can become design code, the more we can take that and apply it to new Strung textiles. It’s a continuous evolution.” Continue reading
#437978 How Mirroring the Architecture of the ...
While AI can carry out some impressive feats when trained on millions of data points, the human brain can often learn from a tiny number of examples. New research shows that borrowing architectural principles from the brain can help AI get closer to our visual prowess.
The prevailing wisdom in deep learning research is that the more data you throw at an algorithm, the better it will learn. And in the era of Big Data, that’s easier than ever, particularly for the large data-centric tech companies carrying out a lot of the cutting-edge AI research.
Today’s largest deep learning models, like OpenAI’s GPT-3 and Google’s BERT, are trained on billions of data points, and even more modest models require large amounts of data. Collecting these datasets and investing the computational resources to crunch through them is a major bottleneck, particularly for less well-resourced academic labs.
It also means today’s AI is far less flexible than natural intelligence. While a human only needs to see a handful of examples of an animal, a tool, or some other category of object to be able to pick it out again, most AI need to be trained on many examples of an object in order to be able to recognize it.
There is an active sub-discipline of AI research aimed at what is known as “one-shot” or “few-shot” learning, where algorithms are designed to be able to learn from very few examples. But these approaches are still largely experimental, and they can’t come close to matching the fastest learner we know—the human brain.
This prompted a pair of neuroscientists to see if they could design an AI that could learn from few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explained that the approach significantly boosts AI’s ability to learn new visual concepts from few examples.
“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” Maximilian Riesenhuber, from Georgetown University Medical Center, said in a press release. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
Several decades of neuroscience research suggest that the brain’s ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on little data. When it comes to visual understanding, this can rely on similarities of shape, structure, or color, but the brain can also leverage abstract visual concepts thought to be encoded in a brain region called the anterior temporal lobe (ATL).
“It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter,” said paper co-author Joshua Rule, from the University of California Berkeley.
The researchers decided to try to recreate this capability by using similar high-level concepts learned by an AI to help it quickly learn previously unseen categories of images.
Deep learning algorithms work by getting layers of artificial neurons to learn increasingly complex features of an image or other data type, which are then used to categorize new data. For instance, early layers will look for simple features like edges, while later ones might look for more complex ones like noses, faces, or even more high-level characteristics.
First they trained the AI on 2.5 million images across 2,000 different categories from the popular ImageNet dataset. They then extracted features from various layers of the network, including the very last layer before the output layer. They refer to these as “conceptual features” because they are the highest-level features learned, and most similar to the abstract concepts that might be encoded in the ATL.
They then used these different sets of features to train the AI to learn new concepts based on 2, 4, 8, 16, 32, 64, and 128 examples. They found that the AI that used the conceptual features yielded much better performance than ones trained using lower-level features on lower numbers of examples, but the gap shrank as they were fed more training examples.
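The few-shot setup described above can be sketched with a toy stand-in: synthetic "conceptual feature" vectors in place of features extracted from an ImageNet-trained network, and a simple nearest-centroid classifier fit on just a few examples per class. The dimensions, noise levels, and classifier here are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(n_per_class, n_classes=3, dim=64, noise=0.5):
    # Synthetic stand-ins for "conceptual features": each class clusters
    # around a random prototype vector in feature space.
    protos = rng.normal(size=(n_classes, dim))
    X = np.vstack([protos[c] + noise * rng.normal(size=(n_per_class, dim))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y, protos

def nearest_centroid_fit(X, y):
    # Few-shot learning in miniature: average the handful of examples per
    # class to form one centroid per class in the high-level feature space.
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def nearest_centroid_predict(centroids, X):
    # Assign each test point to its closest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# "Train" on just 4 examples per class, then test on 100 fresh per class.
Xtr, ytr, protos = make_features(4)
centroids = nearest_centroid_fit(Xtr, ytr)
Xte = np.vstack([protos[c] + 0.5 * rng.normal(size=(100, 64)) for c in range(3)])
yte = np.repeat(np.arange(3), 100)
acc = (nearest_centroid_predict(centroids, Xte) == yte).mean()
print(f"few-shot accuracy from 4 examples per class: {acc:.2f}")
```

The point of the sketch is the one the paper makes: when the feature space already encodes the right high-level structure, even a trivial classifier can learn a new category from a handful of examples.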
While the researchers admit the challenge they set their AI was relatively simple and only covers one aspect of the complex process of visual reasoning, they said that using a biologically plausible approach to solving the few-shot problem opens up promising new avenues in both neuroscience and AI.
“Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber said.
As the researchers note, the human visual system is still the gold standard when it comes to understanding the world around us. Borrowing from its design principles might turn out to be a profitable direction for future research.
Image Credit: Gerd Altmann from Pixabay Continue reading
#437924 How a Software Map of the Entire Planet ...
“3D map data is the scaffolding of the 21st century.”
–Edward Miller, Founder, Scape Technologies, UK
Covered in cameras, sensors, and a distinctly spaceship-looking laser system, Google’s autonomous vehicles were easy to spot when they first hit public roads in 2015. The key hardware ingredient is a spinning laser fixed to the roof, called lidar, which provides the car with a pair of eyes to see the world. Lidar works by sending out beams of light and measuring the time it takes them to bounce off objects and return to the source. By timing the light’s journey, these depth-sensing systems construct fully 3D maps of their surroundings.
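The time-of-flight arithmetic behind lidar ranging is simple enough to sketch (a minimal illustration of the principle, not any vendor's actual pipeline):

```python
# Time-of-flight ranging, the principle behind lidar: distance is half the
# round-trip time multiplied by the speed of light (half, because the pulse
# travels to the object and back).
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0

# A return pulse arriving 200 nanoseconds after emission implies an object
# roughly 30 meters away.
d = tof_distance(200e-9)
print(f"{d:.1f} m")
```

The nanosecond scale of these timings is why lidar units need fast, precise electronics, and part of why early automotive units were so expensive.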
3D maps like these are essentially software copies of the real world. They will be crucial to the development of a wide range of emerging technologies including autonomous driving, drone delivery, robotics, and a fast-approaching future filled with augmented reality.
Like other rapidly improving technologies, lidar is moving quickly through its development cycle. What was an expensive technology on the roof of a well-funded research project is now becoming cheaper, more capable, and readily available to consumers. Lidar may eventually come standard on most mobile devices; it is already available to early-adopting owners of the iPhone 12 Pro.
Consumer lidar represents the inevitable shift from wealthy tech companies generating our world’s map data to a more scalable, crowd-sourced approach. To develop the repository for its Street View Maps product, Google reportedly spent $1-2 billion sending cars across continents photographing every street. Compare that to a live-mapping service like Waze, which uses crowd-sourced data from its millions of users to generate accurate, real-time traffic conditions. Though these maps serve different functions, the contrast is telling: one is a static, expensive, unchanging map of the world, while the other is dynamic, real-time, and constructed by the users themselves.
Soon millions of people may be scanning everything from bedrooms to neighborhoods, resulting in 3D maps of significant quality. An online search for lidar room scans demonstrates just how richly textured these three-dimensional maps are compared to anything we’ve had before. With lidar and other depth-sensing systems, we now have the tools to create exact software copies of everywhere and everything on earth.
At some point, likely aided by crowdsourcing initiatives, these maps will become living, breathing, real-time representations of the world. Some refer to this idea as a “digital twin” of the planet. In a feature cover story, Kevin Kelly, the cofounder of Wired magazine, calls this concept the “mirrorworld,” a one-to-one software map of everything.
So why is that such a big deal? Take augmented reality as an example.
Of all the emerging industries dependent on such a map, none are more invested in seeing this concept emerge than those within the AR landscape. Apple, for example, is not-so-secretly developing a pair of AR glasses, which they hope will deliver a mainstream turning point for the technology.
For Apple’s AR devices to work as anticipated, they will require virtual maps of the world, a concept AR insiders call the “AR cloud,” which is synonymous with the “mirrorworld” concept. These maps will be two things. First, they will be a tool that creators use to place AR content in very specific locations; like a world canvas to paint on. Second, they will help AR devices both locate and understand the world around them so they can render content in a believable way.
Imagine walking down a street wanting to check the trading hours of a local business. Instead of pulling out your phone to do a tedious search online, you conduct the equivalent of a visual Google search simply by gazing at the store. Though a trivial example, the AR cloud represents an entirely non-trivial new way of organizing the world’s information. Access to knowledge can be shifted away from the screens in our pockets to its relevant real-world location.
Ultimately this describes a blurring of physical and digital infrastructure. Our public and private spaces will thus be composed equally of both.
No example demonstrates this idea better than Pokémon Go. The game is straightforward enough; users capture virtual characters scattered around the real world. Today, the game relies on traditional GPS technology to place its characters, but GPS is accurate only to within a few meters of a location. For a car navigating on a highway or locating Pikachus in the world, that level of precision is sufficient. For drone deliveries, driverless cars, or placing a Pikachu in a specific location, say on a tree branch in a park, GPS isn’t accurate enough. As astonishing as it may seem, many experimental AR cloud concepts, even entirely mapped cities, are location specific down to the centimeter.
Niantic, the $4 billion publisher behind Pokémon Go, is aggressively working on developing a crowd-sourced approach to building better AR Cloud maps by encouraging their users to scan the world for them. Their recent acquisition of 6D.ai, a mapping software company developed by the University of Oxford’s Victor Prisacariu through his work at Oxford’s Active Vision Lab, indicates Niantic’s ambition to compete with the tech giants in this space.
With 6D.ai’s technology, Niantic is developing the in-house ability to generate their own 3D maps while gaining better semantic understanding of the world. By going beyond just knowing there’s a temporary collection of orange cones in a certain location, for example, the game may one day understand the meaning behind this; that a temporary construction zone means no Pokémon should spawn here to avoid drawing players to this location.
Niantic is not the only company working on this. Many of the big tech firms you would expect have entire teams focused on map data. Facebook, for example, recently acquired the UK-based Scape Technologies, a computer vision startup mapping entire cities with centimeter precision.
As our digital maps of the world improve, expect a relentless and justified discussion of privacy concerns as well. How will society react to the idea of a real-time 3D map of their bedroom living on a Facebook or Amazon server? Those horrified by the use of facial recognition AI being used in public spaces are unlikely to find comfort in the idea of a machine-readable world subject to infinite monitoring.
The ability to build high-precision maps of the world could reshape the way we engage with our planet and promises to be one of the biggest technology developments of the next decade. While these maps may stay hidden as behind-the-scenes infrastructure powering much flashier technologies that capture the world’s attention, they will soon prop up large portions of our technological future.
Keep that in mind when a car with no driver is sharing your road.
Image credit: sergio souza / Pexels Continue reading
#437872 AlphaFold Proves That AI Can Crack ...
Any successful implementation of artificial intelligence hinges on asking the right questions in the right way. That’s what the British AI company DeepMind (a subsidiary of Alphabet) accomplished when it used its neural network to tackle one of biology’s grand challenges, the protein-folding problem. Its neural net, known as AlphaFold, was able to predict the 3D structures of proteins based on their amino acid sequences with unprecedented accuracy.
AlphaFold’s predictions at the 14th Critical Assessment of protein Structure Prediction (CASP14) were accurate to within an atom’s width for most of the proteins. The competition consisted of blindly predicting the structure of proteins that have only recently been experimentally determined—with some still awaiting determination.
Called the building blocks of life, proteins consist of 20 different amino acids in various combinations and sequences. A protein's biological function is tied to its 3D structure. Therefore, knowledge of the final folded shape is essential to understanding how a specific protein works—such as how it interacts with other biomolecules, how it may be controlled or modified, and so on. “Being able to predict structure from sequence is the first real step towards protein design,” says Janet M. Thornton, director emeritus of the European Bioinformatics Institute. It also has enormous benefits in understanding disease-causing pathogens. For instance, at the moment only about 18 of the 26 proteins in the SARS-CoV-2 virus are known.
Predicting a protein’s 3D structure is a computational nightmare. In 1969 Cyrus Levinthal estimated that there are 10^300 possible conformational combinations for a single protein, which would take longer than the age of the known universe to evaluate by brute force calculation. AlphaFold can do it in a few days.
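The scale of Levinthal's estimate is easy to reproduce as back-of-envelope arithmetic. The specific numbers below (10 conformations per residue, a 300-residue chain, femtosecond sampling) are illustrative assumptions chosen to land on the 10^300 figure, not Levinthal's exact parameters:

```python
import math

# Back-of-envelope Levinthal-style estimate: assume roughly 10 possible
# conformations per residue for a 300-residue protein chain.
conformations_per_residue = 10
residues = 300
total = conformations_per_residue ** residues  # 10**300 conformations

# Even sampling one conformation per femtosecond (1e15 per second), an
# exhaustive search would dwarf the age of the universe (~4.35e17 seconds).
samples_per_second = 1e15
seconds_needed = total / samples_per_second
years_needed = seconds_needed / 3.154e7  # seconds per year

print(f"~1e{math.log10(total):.0f} conformations, "
      f"~1e{math.log10(years_needed):.0f} years to enumerate")
```

The paradox, of course, is that real proteins fold in milliseconds to seconds, which is exactly why folding is treated as a search guided by physics rather than brute-force enumeration.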
As scientific breakthroughs go, AlphaFold’s discovery is right up there with the likes of James Watson and Francis Crick’s DNA double-helix model, or, more recently, Jennifer Doudna and Emmanuelle Charpentier’s CRISPR-Cas9 genome editing technique.
How did a team that just a few years ago was teaching an AI to master a 3,000-year-old game end up training one to answer a question plaguing biologists for five decades? That, says Briana Brownell, data scientist and founder of the AI company PureStrategy, is the beauty of artificial intelligence: The same kind of algorithm can be used for very different things.
“Whenever you have a problem that you want to solve with AI,” she says, “you need to figure out how to get the right data into the model—and then the right sort of output that you can translate back into the real world.”
DeepMind’s success, she says, wasn’t so much a function of picking the right neural nets but rather “how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] actually answer the question.”
AlphaFold showed promise in 2018, when DeepMind introduced a previous iteration of the AI at CASP13, achieving the highest accuracy among all participants. The team had trained its system to model target shapes from scratch, without using previously solved proteins as templates.
For 2020 they deployed new deep learning architectures into the AI, using an attention-based model that was trained end-to-end. Attention in a deep learning network refers to a component that manages and quantifies the interdependence between the input and output elements, as well as between the input elements themselves.
The system was trained on public datasets of the approximately 170,000 known experimental protein structures in addition to databases with protein sequences of unknown structures.
“If you look at the difference between their entry two years ago and this one, the structure of the AI system was different,” says Brownell. “This time, they’ve figured out how to translate the real world into data … [and] created an output that could be translated back into the real world.”
Like any AI system, AlphaFold may need to contend with biases in the training data. For instance, Brownell says, AlphaFold is using available information about protein structure that has been measured in other ways. However, there are also many proteins with as yet unknown 3D structures. Therefore, she says, a bias could conceivably creep in toward those kinds of proteins that we have more structural data for.
Thornton says it’s difficult to predict how long it will take for AlphaFold’s breakthrough to translate into real-world applications.
“We only have experimental structures for about 10 per cent of the 20,000 proteins [in] the human body,” she says. “A powerful AI model could unveil the structures of the other 90 per cent.”
Apart from increasing our understanding of human biology and health, she adds, “it is the first real step toward… building proteins that fulfill a specific function. From protein therapeutics to biofuels or enzymes that eat plastic, the possibilities are endless.” Continue reading