Tag Archives: chemical
#434643 Sensors and Machine Learning Are Giving ...
According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.
This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.
Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure pads. Robots with chemical sensors could far outperform us in smell and taste, but building in proprioception, the robot's sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.
Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.
Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?
New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from human proprioception. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This random placement mirrors the way biological sensors shift and adapt in humans and animals, rather than relying on feedback from a limited number of fixed positions.
The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly distributed sensors: as the finger moves around, it is observed by a motion-capture system. After training, the robot's neural network can associate the feedback from the sensors with the position of the finger reported by the motion-capture system, at which point the motion capture can be discarded. The robot observes its own motions to understand the shapes its soft body can take, translating them into the language of these soft sensors.
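For intuition, here is a minimal sketch of that training setup in Python with PyTorch. The sensor count, network shape, and data dimensions are illustrative assumptions, not the paper's actual architecture; the point is simply that a small network learns to predict the motion-capture position from raw sensor signals, after which the motion-capture "teacher" is no longer needed.

```python
import torch
import torch.nn as nn

# Illustrative dimensions, not the paper's actual setup:
N_SENSORS = 12   # randomly placed soft strain sensors
N_OUTPUTS = 3    # fingertip (x, y, z) from motion capture

# A small feed-forward network maps raw sensor signals to position.
model = nn.Sequential(
    nn.Linear(N_SENSORS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_OUTPUTS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(sensor_readings, mocap_positions, epochs=100):
    """sensor_readings: (samples, N_SENSORS) tensor recorded while the
    finger moves; mocap_positions: (samples, N_OUTPUTS) ground truth."""
    for _ in range(epochs):
        optimizer.zero_grad()
        pred = model(sensor_readings)
        loss = loss_fn(pred, mocap_positions)  # supervised by motion capture
        loss.backward()
        optimizer.step()

# After training, the motion-capture system can be removed:
# position = model(live_sensor_readings)
```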
“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”
The use of machine learning lets the roboticists build a reliable model of this complex, non-linear system of motions for the actuators, something difficult to achieve by directly calculating the expected motion of the soft robot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.
In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.
Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.
Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.
They do this using deep learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then performs a thousand random trajectories, recording the motion of its arm as it goes. Kwiatkowski and Lipson compare this to a baby in its first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.
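A minimal sketch of that "motor babbling" phase might look like the following, assuming a hypothetical robot interface (`move_to`, `end_effector_position`) and an arbitrary joint count; this illustrates the general idea, not the authors' actual code.

```python
import numpy as np
import torch
import torch.nn as nn

N_JOINTS = 4  # illustrative joint count, not the actual arm's

def collect_random_trajectories(robot, n=1000):
    """'Motor babbling': command random joint angles and record where
    the end effector actually ends up. `robot` is a hypothetical
    hardware interface."""
    commands, outcomes = [], []
    for _ in range(n):
        q = np.random.uniform(-np.pi, np.pi, N_JOINTS)
        robot.move_to(q)                       # hypothetical API
        commands.append(q)
        outcomes.append(robot.end_effector_position())
    return (torch.tensor(np.array(commands), dtype=torch.float32),
            torch.tensor(np.array(outcomes), dtype=torch.float32))

# The self-model: predict the resulting end-effector position (x, y, z)
# directly from the joint command, with no physics built in. It is
# trained by regressing predictions onto the recorded outcomes
# (mean-squared error), as in the proprioception sketch above.
self_model = nn.Sequential(
    nn.Linear(N_JOINTS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
```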
Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it's ready for the next stage. Using that deep-learning model, the researchers then ask the robot to devise strategies for simple pick-and-place and handwriting tasks. Rather than laboriously training itself for each individual task, limiting its abilities to a narrow set of circumstances, the robot can now strategize how to use its arm across a much wider range of situations, with no additional task-specific training.
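One generic way to turn such a self-model into task strategies is to search over candidate motions inside the model instead of on the hardware. The random-shooting planner below is a simple illustration of that idea, not the specific method from the paper.

```python
import math
import torch

def plan_reach(self_model, target_xyz, n_joints=4, n_candidates=5000):
    """Random-shooting planner: sample candidate joint commands, use the
    learned self-model to predict where each would put the end effector,
    and return the command predicted to land closest to the target."""
    candidates = torch.empty(n_candidates, n_joints).uniform_(-math.pi, math.pi)
    with torch.no_grad():
        predicted = self_model(candidates)             # (n_candidates, 3)
        errors = (predicted - target_xyz).norm(dim=1)  # distance to target
    return candidates[errors.argmin()]                 # best joint command
```

Because the search happens entirely in the learned model, the same self-model can be reused for many different targets and tasks without retraining on the physical arm.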
Damage Control
In a further experiment, the researchers replaced part of the arm with a "deformed" component, intended to simulate what might happen if the robot was damaged. The robot detected that something was wrong and "reconfigured" itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.
Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.
But before they can get out and shape the world, as these studies show, they will need to understand themselves.
Image Credit: jumbojan / Shutterstock.com
#433532 Not your ordinary “pet” ...
Another scary robot from Boston Dynamics, in conjunction with the US Department of Defense… PETMAN! Here it is testing hazmat suits. Better than a real human being subjected to hazardous chemicals, I guess…
#432893 These 4 Tech Trends Are Driving Us ...
From a first-principles perspective, the task of feeding eight billion people boils down to converting energy from the sun into chemical energy in our bodies.
Traditionally, solar energy is converted by photosynthesis into carbohydrates in plants (i.e., biomass), which are either eaten by the vegans amongst us, or fed to animals, for those with a carnivorous preference.
Today, the process of feeding humanity is extremely inefficient.
If we could radically reinvent what we eat, and how we create that food, what might you imagine that “future of food” would look like?
In this post we’ll cover:
Vertical farms
CRISPR engineered foods
The alt-protein revolution
Farmer 3.0
Let’s dive in.
Vertical Farming
Where we grow our food…
The average American meal travels over 1,500 miles from farm to table. Wine from France, beef from Texas, potatoes from Idaho.
Imagine instead growing all of your food in a 50-story vertical farm in downtown LA, or offshore on the Great Lakes, where the travel distance shrinks from 1,500 miles to 50.
Delocalized farming will minimize travel costs while maximizing freshness.
Perhaps more importantly, vertical farming also gives tomorrow's farmer the ability to control the exact growing conditions of her plants year round.
Rather than allowing the vagaries of the weather and soil conditions to dictate crop quality and yield, we can now perfectly control the growing cycle.
LED lighting provides the crops with the maximum amount of light, at the perfect frequency, 24 hours a day, 7 days a week.
At the same time, sensors and robots provide the root system with the exact pH and micronutrients required, while fine-tuning the temperature of the farm.
Such precision farming can generate yields that are 200% to 400% above normal.
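As a toy illustration of what that sensor-and-actuator loop might look like in software, here is a simple bang-bang pH controller in Python. The setpoint, dosing amounts, and hardware objects (`ph_sensor`, `dosing_pump`) are hypothetical stand-ins, not any particular farm's system.

```python
import time

PH_TARGET = 6.0      # illustrative setpoint for leafy greens
PH_TOLERANCE = 0.2   # acceptable band around the setpoint

def nutrient_control_loop(ph_sensor, dosing_pump, interval_s=60):
    """Toy bang-bang controller: sample the reservoir's pH and dose
    acid or base to steer it back into the target band. `ph_sensor`
    and `dosing_pump` are hypothetical hardware interfaces."""
    while True:
        ph = ph_sensor.read()
        if ph > PH_TARGET + PH_TOLERANCE:
            dosing_pump.dose("acid", ml=5)   # pH too high: lower it
        elif ph < PH_TARGET - PH_TOLERANCE:
            dosing_pump.dose("base", ml=5)   # pH too low: raise it
        time.sleep(interval_s)               # sample once per minute
```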
Next let’s explore how we can precision-engineer the genetic properties of the plant itself.
CRISPR and Genetically Engineered Foods
What food do we grow?
A fundamental shift is occurring in our relationship with agriculture. We are going from evolution by natural selection (Darwinism) to evolution by human direction.
CRISPR (the cutting-edge gene-editing tool) is providing a pathway for plant breeding that is more predictable, faster, and less expensive than traditional breeding methods.
Rather than our crops being subject to nature's random environmental whims, CRISPR unlocks our capability to modify crops to match the available environment.
Further, using CRISPR we will be able to optimize the nutrient density of our crops, enhancing their value and volume.
CRISPR may also hold the key to eliminating common allergens from crops. As we identify the allergen gene in peanuts, for instance, we can use CRISPR to silence that gene, making the crops we raise safer for and more accessible to a rapidly growing population.
Yet another application is our ability to make plants more resistant to infection, drought, or cold.
Helping to accelerate the impact of CRISPR, the USDA recently announced that it will not regulate gene-edited crops whose modifications could have been achieved through traditional breeding, providing an opening for entrepreneurs to capitalize on the opportunities for optimization CRISPR enables.
CRISPR applications in agriculture are an opportunity to help a billion people and become a billionaire in the process.
Protecting crops against volatile environments, combating crop diseases and increasing nutrient values, CRISPR is a promising tool to help feed the world’s rising population.
The Alt-Protein/Lab-Grown Meat Revolution
Something like a third of the Earth’s arable land is used for raising livestock—a massive amount of land—and global demand for meat is predicted to double in the coming decade.
Today, we must grow an entire cow—all bones, skin, and internals included—to produce a steak.
Imagine if we could instead start with a single muscle stem cell and only grow the steak, without needing the rest of the cow? Think of it as cellular agriculture.
Imagine returning millions, perhaps billions, of acres of grazing land back to the wilderness? This is the promise of lab-grown meats.
Lab-grown meat can also be engineered (using technology like CRISPR) to be packed with nutrients and be the healthiest, most delicious protein possible.
We’re watching this technology develop in real time. Several startups across the globe are already working to bring artificial meats to the food industry.
JUST, Inc. (previously Hampton Creek), run by my friend Josh Tetrick, has been on a mission to build a food system where everyone can get and afford delicious, nutritious food. They started by exploring more than 300,000 species of plants around the world to see how they could make food better, and are now investing heavily in stem-cell-grown meats.
Backed by Richard Branson and Bill Gates, Memphis Meats is working on ways to produce real meat from animal cells, rather than whole animals. So far, they have produced beef, chicken, and duck using cultured cells from living animals.
As with vertical farming, transitioning production of our primary protein sources to a carefully controlled environment allows agriculture to optimize inputs (water, soil, energy, land footprint), nutrients and, importantly, taste.
Farmer 3.0
Vertical farming and cellular agriculture are reinventing how we think about our food supply chain and what food we produce.
The next question to answer is who will be producing the food?
Let’s look back at how farming evolved through history.
Farmers 0.0 (Neolithic Revolution, around 9000 BCE): The transition from hunting and gathering to agriculture gained momentum as humans learned to domesticate plants for food production.
Farmers 1.0 (until around the 19th century): Farmers spent all day in the field performing backbreaking labor, and agriculture accounted for most jobs.
Farmers 2.0 (19th century through the mid-20th-century Green Revolution): From the invention of the first farm tractor in 1812 onward, transformative mechanical and biochemical technologies such as synthetic fertilizer boosted yields and made the job of farming easier, driving the share of US jobs in agriculture down to less than two percent today.
Farmers 3.0: In the near future, farmers will leverage exponential technologies (e.g., AI, networks, sensors, robotics, drones), CRISPR and genetic engineering, and new business models to solve the world’s greatest food challenges and efficiently feed the eight-billion-plus people on Earth.
An important driver of the Farmer 3.0 evolution is the delocalization of agriculture: vertical farms and urban agriculture are empowering a new breed of agricultural entrepreneurs.
Let’s take a look at an innovative incubator in Brooklyn, New York called Square Roots.
Ten farm-in-a-shipping-container units in a Brooklyn parking lot make up the first Square Roots campus. Each 8-foot by 8.5-foot by 20-foot shipping container grows the equivalent of two acres of outdoor farmland and can yield more than 50 pounds of produce each week.
For 13 months, one cohort of next-generation food entrepreneurs takes part in a curriculum with foundations in farming, business, community and leadership.
The urban farming incubator raised a $5.4 million seed funding round in August 2017.
Training a new breed of entrepreneurs to apply exponential technology to growing food is essential to the future of farming.
One of our massive transformative purposes at the Abundance Group is to empower entrepreneurs to generate extraordinary wealth while creating a world of abundance. Vertical farms and cellular agriculture are key elements enabling the next generation of food and agriculture entrepreneurs.
Conclusion
Technology is driving food abundance.
We’re already seeing food become demonetized, as the graph below shows.
From 1960 to 2014, the share of disposable income spent on food in the US fell from 19 percent to under 10 percent, a dramatic decrease from the roughly 40 percent of household income spent on food in 1900.
[Figure: The dropping percent of per-capita disposable income spent on food. Source: USDA, Economic Research Service, Food Expenditure Series]
Ultimately, technology has enabled a massive variety of food at a significantly reduced cost and with fewer resources used for production.
We’re increasingly going to optimize and fortify the food supply chain to achieve more reliable, predictable, and nutritious ways to obtain basic sustenance.
And that means a world with abundant, nutritious, and inexpensive food for every man, woman, and child.
What an extraordinary time to be alive.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital.
Abundance-Digital is my ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Nejron Photo / Shutterstock.com