The World’s Most Valuable AI ...
It recognizes our faces. It knows the videos we might like. And it can even, perhaps, recommend the best course of action to take to maximize our personal health.
Artificial intelligence and its subset of disciplines—such as machine learning, natural language processing, and computer vision—are seemingly becoming integrated into our daily lives whether we like it or not. What was once sci-fi is now ubiquitous research and development in company and university labs around the world.
Similarly, the startups working on many of these AI technologies have seen their proverbial stock rise. More than 30 of these companies are now valued at over a billion dollars, according to data research firm CB Insights, which itself employs algorithms to provide insights into the tech business world.
Private companies with a billion-dollar valuation were so uncommon not that long ago that they were dubbed unicorns. Now there are 325 of these once-rare creatures, with a combined valuation north of a trillion dollars, according to the running count CB Insights keeps of this exclusive Unicorn Club.
The subset of AI startups accounts for about 10 percent of the total membership, having grown from zero to 32 in just four years. Last year, an unprecedented 17 AI startups broke the billion-dollar barrier, and 2018 was also a record year for venture capital investment in private US AI companies, at $9.3 billion, CB Insights reported.
What exactly is all this money funding?
AI Keeps an Eye Out for You
Let’s start with the bad news first.
Facial recognition is probably one of the most ubiquitous applications of AI today. It’s actually a decades-old technology often credited to a man named Woodrow Bledsoe, who used an instrument called a RAND tablet that could semi-autonomously match faces from a database. That was in the 1960s.
Today, most of us are familiar with facial recognition as a way to unlock our smartphones. But the technology has gained notoriety as a surveillance tool of law enforcement, particularly in China.
It’s no secret that the facial recognition algorithms developed by several of the AI unicorns from China—SenseTime, CloudWalk, and Face++ (also known as Megvii)—are used to monitor the country’s 1.3 billion citizens. Police there are even equipped with AI-powered eyeglasses for such purposes.
A fourth billion-dollar Chinese startup, Yitu Technologies, also produces a platform for facial recognition in the security realm, and develops AI systems in healthcare on top of that. For example, its CARE.AI™ Intelligent 4D Imaging System for Chest CT can reputedly identify in real time a variety of lesions for the possible early detection of cancer.
The AI Doctor Is In
As Peter Diamandis recently noted, AI is rapidly augmenting healthcare and longevity. He mentioned another AI unicorn from China in this regard—iCarbonX, which plans to use machines to develop personalized health plans for every individual.
A couple of AI unicorns on the hardware side of healthcare are OrCam Technologies and Butterfly. The former, an Israeli company, has developed a wearable device for the vision impaired called MyEye that attaches to one’s eyeglasses. The device can identify people and products, as well as read text, conveying the information through discreet audio.
Butterfly Network, out of Connecticut, has completely upended the healthcare market with a handheld ultrasound machine that works with a smartphone.
“Orcam and Butterfly are amazing examples of how machine learning can be integrated into solutions that provide a step-function improvement over state of the art in ultra-competitive markets,” noted Andrew Byrnes, investment director at Comet Labs, a venture capital firm focused on AI and robotics, in an email exchange with Singularity Hub.
AI in the Driver’s Seat
Comet Labs’ portfolio includes two AI unicorns, Megvii and Pony.ai.
The latter is one of three billion-dollar startups developing the AI technology behind self-driving cars, with the other two being Momenta.ai and Zoox.
Founded in 2016 near San Francisco (with another headquarters in China), Pony.ai debuted its latest self-driving system, called PonyAlpha, last year. The platform uses multiple sensors (LiDAR, cameras, and radar) to navigate its environment, but its “sensor fusion technology” makes things simple by choosing the most reliable sensor data for any given driving scenario.
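Pony.ai hasn’t published the internals of PonyAlpha, but the general idea of preferring whichever sensor is most trustworthy under the current conditions can be sketched in a few lines. Everything below, from the sensor names to the weather penalties and confidence numbers, is an illustrative assumption rather than Pony.ai’s actual method.

```python
# A minimal, hypothetical sketch of "pick the most reliable sensor for the scenario."
# Sensor names, confidence values, and weather penalties are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "lidar", "camera", or "radar"
    distance_m: float  # estimated distance to the nearest obstacle
    confidence: float  # nominal reliability between 0.0 and 1.0

def adjusted_confidence(det: Detection, is_dark: bool, is_raining: bool) -> float:
    """Down-weight sensors in conditions assumed to hurt them."""
    conf = det.confidence
    if det.sensor == "camera" and is_dark:
        conf *= 0.5   # cameras struggle in low light
    if det.sensor == "lidar" and is_raining:
        conf *= 0.7   # rain scatters lidar returns
    return conf

def fuse(detections: list[Detection], is_dark: bool, is_raining: bool) -> float:
    """Return the obstacle distance reported by the most trustworthy sensor right now."""
    best = max(detections, key=lambda d: adjusted_confidence(d, is_dark, is_raining))
    return best.distance_m

# Example: at night, the lidar reading wins over the penalized camera reading.
print(fuse(
    [Detection("lidar", 42.1, 0.9), Detection("camera", 44.0, 0.8), Detection("radar", 41.5, 0.6)],
    is_dark=True, is_raining=False,
))
```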
Zoox is another San Francisco area startup founded a couple of years earlier. In late 2018, it got the green light from the state of California to be the first autonomous vehicle company to transport a passenger as part of a pilot program. Meanwhile, China-based Momenta.ai is testing level four autonomy for its self-driving system. Autonomous driving levels are ranked from zero to five, with level five meaning full autonomy: the vehicle can handle any driving situation a human could, with no one needed behind the wheel.
The hype around autonomous driving is currently in overdrive, and Byrnes thinks regulatory roadblocks will keep most self-driving cars in idle for the foreseeable future. The exception, he said, is China, which is adopting a “systems” approach to autonomy for passenger transport.
“If [autonomous mobility] solves bigger problems like traffic that can elicit government backing, then that has the potential to go big fast,” he said. “This is why we believe Pony.ai will be a winner in the space.”
AI in the Back Office
An AI-powered technology that perhaps only fans of the cult classic Office Space might appreciate has suddenly taken the business world by storm—robotic process automation (RPA).
RPA companies take the mundane back office work, such as filling out invoices or processing insurance claims, and turn it over to bots. The intelligent part comes into play because these bots can tackle unstructured data, such as text in an email or even video and pictures, in order to accomplish an increasing variety of tasks.
Both Automation Anywhere and UiPath are older companies, founded in 2003 and 2005, respectively. However, since just 2017, they have raised a combined total of nearly $1 billion in disclosed capital.
Cybersecurity Embraces AI
Cybersecurity is another industry where AI is driving investment into startups. Sporting imposing names like CrowdStrike, Darktrace, and Tanium, these cybersecurity companies employ different machine-learning techniques to protect computers and other IT assets beyond the latest software update or virus scan.
Darktrace, for instance, takes its inspiration from the human immune system. Its algorithms can purportedly “learn” the unique pattern of each device and user on a network, detecting emerging problems before things spin out of control.
All three companies are used by major corporations and governments around the world. CrowdStrike itself made headlines a few years ago when it linked the hacking of the Democratic National Committee email servers to the Russian government.
Looking Forward
I could go on, and introduce you to the world’s most valuable startup, a Chinese company called Bytedance that is valued at $75 billion for news curation and an app to create 15-second viral videos. But that’s probably not where VC firms like Comet Labs are generally putting their money.
Byrnes sees real value in startups that are taking “data-driven approaches to problems specific to unique industries.” Take Chicago-based unicorn Uptake Technologies, which analyzes incoming data from machinery, from wind turbines to tractors, to predict problems before they occur. A not-yet unicorn called PingThings in the Comet Labs portfolio does similar predictive analytics for the energy utilities sector.
“One question we like asking is, ‘What does the state of the art look like in your industry in three to five years?’” Byrnes said. “We ask that a lot, then we go out and find the technology-focused teams building those things.”
Image Credit: Andrey Suslov / Shutterstock.com
Sensors and Machine Learning Are Giving ...
According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.
This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.
Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.
Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.
Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: build a vast, complex array of sensors for every degree of freedom in the robot’s movement, or settle for limited proprioception.
New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the scattered sensory receptors in human and animal bodies, rather than relying on feedback from a limited number of fixed positions.
The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly-distributed sensors: as the finger moves around, it’s observed by a motion capture system. Once the robot’s neural network has been trained, it can associate the feedback from the sensors with the position of the finger detected by the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes that its soft body can take, and to translate them into the language of these soft sensors.
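The paper’s pipeline uses real soft-sensor hardware and a motion-capture rig, but the core training loop can be illustrated with synthetic stand-in data. The following sketch is an assumption-laden illustration (invented signals, scikit-learn’s MLPRegressor) rather than the authors’ implementation.

```python
# A minimal sketch of the training idea: learn to map soft-sensor signals to fingertip
# position, using motion-capture labels only at training time. The number of sensors,
# the synthetic data, and the choice of model are all assumptions for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: 12 randomly placed sensors, 2,000 recorded finger poses.
n_samples, n_sensors = 2000, 12
sensor_readings = rng.normal(size=(n_samples, n_sensors))      # soft-sensor signals
mixing = rng.normal(size=(n_sensors, 3))
mocap_positions = np.tanh(sensor_readings) @ mixing            # "ground truth" (x, y, z)

X_train, X_test, y_train, y_test = train_test_split(
    sensor_readings, mocap_positions, test_size=0.2, random_state=0)

# Train the network against the motion-capture labels...
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# ...then discard the motion-capture system: position now comes from the sensors alone.
predicted = model.predict(X_test)
print("mean position error:", np.mean(np.linalg.norm(predicted - y_test, axis=1)))
```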
“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”
The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.
In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.
Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.
Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.
They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then performs a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in the first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.
Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it’s ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-and-place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
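To make that concrete, here is a hypothetical toy version of the idea, not Kwiatkowski and Lipson’s code: a simulated two-joint arm “babbles” random motions, fits a neural-network self-model to what it observes, and then uses that model to reach targets it was never explicitly trained on. The arm geometry, network size, and random-search planner are all assumptions.

```python
# Toy self-modeling sketch: motor babbling -> learned self-model -> task planning.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.8  # assumed link lengths of the toy arm

def true_forward_kinematics(angles: np.ndarray) -> np.ndarray:
    """The real (unknown-to-the-robot) physics: joint angles -> fingertip (x, y)."""
    a, b = angles[..., 0], angles[..., 1]
    x = L1 * np.cos(a) + L2 * np.cos(a + b)
    y = L1 * np.sin(a) + L2 * np.sin(a + b)
    return np.stack([x, y], axis=-1)

rng = np.random.default_rng(0)

# 1. Motor babbling: a thousand random joint configurations and the observed fingertip positions.
angles = rng.uniform(-np.pi, np.pi, size=(1000, 2))
positions = true_forward_kinematics(angles)

# 2. Learn a self-model from the babbling data alone.
self_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
self_model.fit(angles, positions)

# 3. Plan with the self-model: search candidate joint angles for the one whose
#    *predicted* fingertip lands closest to a target, with no new training.
def reach(target_xy: np.ndarray, n_candidates: int = 5000) -> np.ndarray:
    candidates = rng.uniform(-np.pi, np.pi, size=(n_candidates, 2))
    predicted = self_model.predict(candidates)
    return candidates[np.argmin(np.linalg.norm(predicted - target_xy, axis=1))]

best_angles = reach(np.array([1.2, 0.5]))
print("actually reached:", true_forward_kinematics(best_angles))
```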
Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot were damaged. The robot detected that something was up and “reconfigured” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.
Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.
But before they can get out and shape the world, as these studies show, they will need to understand themselves.
Image Credit: jumbojan / Shutterstock.com
How to Feed Global Demand for ...
“You really can’t justify tuna in Chicago as a source of sustenance.” That’s according to Dr. Sylvia Earle, a National Geographic Society Explorer who was the first female chief scientist at NOAA. She came to the Good Food Institute’s Good Food Conference to deliver a call to action around global food security, agriculture, environmental protection, and the future of consumer choice.
It seems like all options should be on the table to feed an exploding population threatened by climate change. But Dr. Earle, who is faculty at Singularity University, drew a sharp distinction between seafood for sustenance and seafood as a choice. “There is this widespread claim that we must take large numbers of wildlife from the sea in order to have food security.”
A few minutes later, Dr. Earle directly addressed those of us in the audience. “We know the value of a dead fish,” she said. That’s market price. “But what is the value of a live fish in the ocean?”
That’s when my mind blew open. What is the value—or put another way, the cost—of using the ocean as a major source of protein for humans? How do you put a number on that? Are we talking about dollars and cents, or about something far larger?
Dr. Liz Specht of the Good Food Institute drew the audience’s attention to a strange imbalance. Currently, about half of the world’s yearly seafood supply comes from aquaculture, which means the other half is wild caught. It’s hard to imagine half of your meat coming directly from the forests and the plains, isn’t it? And yet half of the world’s seafood comes from direct harvesting of the oceans, by way of massive overfishing, a terrible toll from bycatch, a widespread lack of regulation and enforcement, and even human rights violations such as slavery.
The search for solutions is on, from both within the fishing industry and from external agencies such as governments and philanthropists. Could there be another way?
Makers of plant-based seafood and clean seafood think they know how to feed the global demand for seafood without harming the ocean. These companies are part of a larger movement harnessing technology to reduce our reliance on wild and domesticated animals—and all the environmental, economic, and ethical issues that come with it.
Producers of plant-based seafood (20 or so currently) are working to capture the taste, texture, and nutrition of conventional seafood without the limitations of geography or the health of a local marine population. Like with plant-based meat, makers of plant-based seafood are harnessing food science and advances in chemistry, biology, and engineering to make great food. The industry’s strategy? Start with what the consumer wants, and then figure out how to achieve that great taste through technology.
So how does plant-based seafood taste? Pretty good, as it turns out. (The biggest benefit of a food-oriented conference is that your mouth is always full!)
I sampled “tuna” salad made from Good Catch Food’s fish-free tuna, which is sourced from legumes; the texture was nearly indistinguishable from that of flaked albacore tuna, and there was no lingering fishy taste to overpower my next bite. In a blind taste test, I probably wouldn’t have known that I was eating a plant-based seafood alternative. Next I reached for Ocean Hugger Food’s Ahimi, a tomato-based alternative to raw tuna. I adore Hawaiian poke, so I was pleasantly surprised when my Ahimi-based poke captured the bite of ahi tuna. It wasn’t quite as delightfully fatty as raw tuna, but with wild tuna populations struggling to recover from a 97% decline in numbers from 40 years ago, Ahimi is a giant stride in the right direction.
These plant-based alternatives aren’t the only game in town, however.
The clean meat industry, which has also been called “cultured meat” or “cellular agriculture,” isn’t seeking to lure consumers away from animal protein. Instead, cells are sampled from live animals and grown in bioreactors—meaning that no animal is slaughtered to produce real meat.
Clean seafood is poised to piggyback off platforms developed for clean meat; growing fish cells in the lab should rely on the same processes as growing meat cells. I know of four companies currently focusing on seafood (Finless Foods, Wild Type, BlueNalu, and Seafuture Sustainable Biotech), and a few more are likely to emerge from stealth mode soon.
Importantly, there’s likely not much difference between growing clean seafood from the top or the bottom of the food chain. Tuna, for example, are top predators that must grow for at least 10 years before they’re suitable as food. Each year, a tuna consumes thousands of pounds of other fish, shellfish, and plankton. That “long tail of groceries,” said Dr. Earle, “is a pretty expensive choice.” Excitingly, clean tuna would “level the trophic playing field,” as Dr. Specht pointed out.
All this is only the beginning of what might be possible.
Combining synthetic biology with clean meat and seafood means that future products could be personalized for individual taste preferences or health needs, by reprogramming the DNA of the cells in the lab. Industries such as bioremediation and biofuels likely have a lot to teach us about sourcing new ingredients and flavors from algae and marine plants. By harnessing rapid advances in automation, robotics, sensors, machine vision, and other big-data analytics, the manufacturing and supply chains for clean seafood could be remarkably safe and robust. Clean seafood would be just that: clean, without pathogens, parasites, or the plastic threatening to fill our oceans, meaning that you could enjoy it raw.
What about price? Dr. Mark Post, a pioneer in clean meat who is also faculty at Singularity University, estimated that 80% of clean-meat production costs come from the expensive medium in which cells are grown—and some ingredients in the medium are themselves sourced from animals, which misses the point of clean meat. Plus, to grow a whole cut of food, like a fish fillet, the cells need to be coaxed into a complex 3D structure with various cell types like muscle cells and fat cells. These two technical challenges must be solved before clean meat and seafood give consumers the experience they want, at the price they want.
In this respect clean seafood has an unusual edge. Most of what we know about growing animal cells in the lab comes from the research and biomedical industries (from tissue engineering, for example)—but growing cells to replace an organ has different constraints than growing cells for food. The link between clean seafood and biomedicine is less direct, empowering innovators to throw out dogma and find novel reagents, protocols, and equipment to grow seafood that captures the tastes, textures, smells, and overall experience of dining by the ocean.
Asked to predict when we’ll be seeing clean seafood in the grocery store, Lou Cooperhouse, the CEO of BlueNalu, explained that the challenges aren’t only in the lab: marketing, sales, distribution, and communication with consumers are all critical. As Niya Gupta, the founder of Fork & Goode, said, “The question isn’t ‘can we do it’, but ‘can we sell it’?”
The good news is that the clean meat and seafood industry is highly collaborative; there are at least two dozen companies in the space, and they’re all talking to each other. “This is an ecosystem,” said Dr. Uma Valeti, the co-founder of Memphis Meats. “We’re not competing with each other.” It will likely be at least a decade before science, business, and regulation enable clean meat and seafood to routinely appear on restaurant menus, let alone market shelves.
Until then, think carefully about your food choices. Meditate on Dr. Earle’s question: “What is the real cost of that piece of halibut?” Or chew on this from Dr. Ricardo San Martin, of the Sutardja Center at the University of California, Berkeley: “Food is a system of meanings, not an object.” What are you saying when you choose your food, about your priorities and your values and how you want the future to look? Do you think about animal welfare? Most ethical regulations don’t extend to marine life, and if you don’t think that ocean creatures feel pain, consider the lobster.
Seafood is largely an acquired taste, since most of us don’t live near the water. Imagine a future in which children grow up loving the taste of delicious seafood but without hurting a living animal, the ocean, or the global environment.
Do more than imagine. As Dr. Earle urged us, “Convince the public at large that this is a really cool idea.”
Widely available: Gardein, Sophie’s Kitchen, Quorn, Vegetarian Plus, Heritage, Loma Linda, The Vegetarian Butcher
Medium availability: Ahimi (Ocean Hugger), Cedar Lake, SoFine Foods, Akua, Hungry Planet, Heritage Health Food, May Wah
Emerging: New Wave Foods, To-funa Fish, Seamore, Good Catch, Odontella, Terramino Foods, VBites
Table based on Figure 5 of the report “An Ocean of Opportunity: Plant-based and clean seafood for sustainable oceans without sacrifice,” from The Good Food Institute.
Image Credit: Tono Balaguer / Shutterstock.com