The New AI Tech Turning Heads in Video ...
A new technique using artificial intelligence to manipulate video content gives new meaning to the expression “talking head.”
An international team of researchers showcased the latest advancement in synthesizing facial expressions—including mouth, eyes, eyebrows, and even head position—in video at this month’s 2018 SIGGRAPH, a conference on innovations in computer graphics, animation, virtual reality, and other forms of digital wizardry.
The project is called Deep Video Portraits. It relies on a type of AI called generative adversarial networks (GANs) to modify a “target” actor based on the facial and head movement of a “source” actor. As the name implies, GANs pit two opposing neural networks against one another to create a realistic talking head, right down to the sneer or raised eyebrow.
In this case, the adversaries are actually working together: One neural network generates content, while the other rejects or approves each effort. The back-and-forth interplay between the two eventually produces a realistic result that can easily fool the human eye, including reproducing a static scene behind the head as it bobs back and forth.
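The generator-versus-discriminator loop described above can be sketched with a toy example. This is a minimal, illustrative numpy version, nothing like the deep convolutional networks used in Deep Video Portraits: a two-parameter generator learns to mimic samples from a fixed Gaussian, while a logistic-regression discriminator tries to tell real samples from fakes. All names and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: samples from a fixed Gaussian the generator must mimic.
    return rng.normal(4.0, 1.25, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: fake = a * z + b for unit noise z. Discriminator: sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.01

for step in range(2000):
    z = rng.normal(size=64)
    fake = a * z + b
    real = real_samples(64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + c)
        grad = p - label  # gradient of binary cross-entropy w.r.t. the logit
        w -= lr * np.mean(grad * x)
        c -= lr * np.mean(grad)

    # Generator step: adjust (a, b) so the discriminator labels fakes as real.
    z = rng.normal(size=64)
    fake = a * z + b
    p = sigmoid(w * fake + c)
    g = (p - 1.0) * w  # chain rule through the generator loss -log(D(fake))
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

print(f"generator offset b = {b:.2f}")  # should drift toward the real mean of 4
```

The same back-and-forth, scaled up to deep networks over video frames, is what lets the generated head eventually fool both the discriminator and the human eye.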
The researchers say the technique can be used by the film industry for a variety of purposes, from editing facial expressions of actors for matching dubbed voices to repositioning an actor’s head in post-production. AI can not only produce highly realistic results, but much quicker ones compared to the manual processes used today, according to the researchers. You can read the full paper of their work here.
“Deep Video Portraits shows how such a visual effect could be created with less effort in the future,” said Christian Richardt, from the University of Bath’s motion capture research center CAMERA, in a press release. “With our approach, even the positioning of an actor’s head and their facial expression could be easily edited to change camera angles or subtly change the framing of a scene to tell the story better.”
AI Tech Different From So-Called “Deepfakes”
The work is far from the first to employ AI to manipulate video and audio. At last year’s SIGGRAPH conference, researchers from the University of Washington showcased their work using algorithms that inserted audio recordings from a person in one instance into a separate video of the same person in a different context.
In this case, they “faked” a video using a speech from former President Barack Obama addressing a mass shooting incident during his presidency. The AI-doctored video injects the audio into an unrelated video of the president while also blending the facial and mouth movements, doing a pretty credible job of lip-syncing.
A previous paper by many of the same scientists on the Deep Video Portraits project detailed how they were first able to manipulate a video in real time of a talking head (in this case, actor and former California governor Arnold Schwarzenegger). The Face2Face system pulled off this bit of digital trickery using a depth-sensing camera that tracked the facial expressions of an Asian female source actor.
A less sophisticated method of swapping faces using a machine learning software dubbed FakeApp emerged earlier this year. Predictably, the tech—requiring numerous photos of the source actor in order to train the neural network—was used for more juvenile pursuits, such as injecting a person’s face onto a porn star.
The application gave rise to the term “deepfakes,” which is now used somewhat ubiquitously to describe all such instances of AI-manipulated video—much to the chagrin of some of the researchers involved in more legitimate uses.
Fighting AI-Created Video Forgeries
However, the researchers are keenly aware that their work—intended for benign uses such as in the film industry or even to correct gaze and head positions for more natural interactions through video teleconferencing—could be used for nefarious purposes. Fake news is the most obvious concern.
“With ever-improving video editing technology, we must also start being more critical about the video content we consume every day, especially if there is no proof of origin,” said Michael Zollhöfer, a visiting assistant professor at Stanford University and member of the Deep Video Portraits team, in the press release.
Toward that end, the research team is training the same adversarial neural networks to spot video forgeries. They also strongly recommend that developers clearly watermark videos edited by AI or other means and denote exactly which elements of the scene were modified.
To catch less ethical users, the US Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), is supporting a program called Media Forensics. This latest DARPA challenge enlists researchers to develop technologies to automatically assess the integrity of an image or video, as part of an end-to-end media forensics platform.
The DARPA official in charge of the program, Matthew Turek, told MIT Technology Review that so far the program has “discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations.” In one reported example, researchers have targeted the eyes, which rarely blink in “deepfakes” like those created by FakeApp because the AI is trained on still pictures. That method would seem less effective at spotting the sort of forgeries created by Deep Video Portraits, which appears to flawlessly match the entire facial and head movements between the source and target actors.
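The blink cue described above is commonly implemented with an “eye aspect ratio” (EAR) computed from eye landmarks: the ratio collapses when the eyelids close. Here is a rough sketch; the landmark coordinates are fabricated for the demo (a real detector such as dlib would supply them per frame), and the 0.2 threshold is a common rule of thumb rather than a standard.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, dlib-style ordering."""
    d = math.dist
    vertical = d(eye[1], eye[5]) + d(eye[2], eye[4])  # eyelid-to-eyelid
    horizontal = d(eye[0], eye[3])                    # corner-to-corner
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """A blink is a run of at least `min_frames` frames below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Fabricated landmarks: a wide-open eye and a nearly closed one.
open_eye = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
closed_eye = [(0, 0), (2, 0.5), (4, 0.5), (6, 0), (4, -0.5), (2, -0.5)]

# 43 frames with a single 3-frame blink in the middle.
ears = ([eye_aspect_ratio(open_eye)] * 20
        + [eye_aspect_ratio(closed_eye)] * 3
        + [eye_aspect_ratio(open_eye)] * 20)
print(count_blinks(ears))  # → 1
```

A clip whose subject never blinks over thousands of frames would be flagged as suspicious; a forgery that reproduces natural blinking, as Deep Video Portraits apparently can, sails past this particular check.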
“We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip,” Zollhöfer said. “This will lead to ever-better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes.”
Image Credit: Tancha / Shutterstock.com
Google’s Duplex Raises the Question: ...
By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.
Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots that were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would allow for simple commands like “Yes” or “No,” replacing button presses with a limited ability to recognize a few words. Using them was often a far more frustrating experience than attempting to use a menu; there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”
[Embedded demo: Google Duplex scheduling a hair salon appointment]
[Embedded demo: Google Duplex calling a restaurant]
Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.
You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.
Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. Soundhound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).
The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
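A quick back-of-the-envelope calculation shows how fast this blows up; the utterance count below is an illustrative round number, not a measurement.

```python
# Even a modest vocabulary of distinct user utterances per turn makes the
# number of possible conversation paths grow exponentially with depth.
utterances_per_turn = 1000

for depth in range(1, 4):
    paths = utterances_per_turn ** depth
    print(f"{depth} turn(s) deep: {paths:,} possible paths")
# Three turns deep already yields a billion paths, far too many to script
# by hand, which is why hand-built chatbots stay shallow.
```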
Yet Google, which surely remembers from Glass the risk of premature debuts for technology, especially the kind that asks you to rethink how you interact with or trust in software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.
It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and their assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.
Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.
A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.
Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.
“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”
From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.
In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.
Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.
Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.
As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?
Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.
Image Credit: Besjunior / Shutterstock.com
10 Amazing Things You Can Learn From ...
Hardly a day goes by without a research study or article published talking sh*t—or more precisely, talking about the gut microbiome. When it comes to cutting-edge innovations in medicine, all signs point to the microbiome. Maybe we should have listened to Hippocrates: “All disease begins in the gut.”
Your microbiome is mostly located in your gut and contains trillions of little guys and gals called microbes. If you want to optimize your health, biohack your body, make progress against chronic disease, or know which foods are right for you—almost all of this information can be found in your microbiome.
My company, Viome, offers technology to measure your microscopic organisms and their behavior at a molecular level. Think of it as the Instagram of your inner world. A snapshot of what’s happening inside your body. New research about the microbiome is changing our understanding of who we are as humans and how the human body functions.
It turns out the microbiome may be mission control for your body and mind. Your healthy microbiome is part best friend, part power converter, part engine, and part pharmacist. At Viome, we’re working to analyze these microbial functions and recommend a list of personalized food and supplements to keep these internal complex machines in a finely tuned balance.
We now have more information than ever before about what your microbiome is doing, and it’s going to help you and the rest of the world do a whole lot better. The new insights emerging from microbiome research are changing our perception of what keeps us healthy and what makes us sick. This new understanding of the microbiome activities may put an end to conflicting food advice and make fad diets a thing of the past.
What are these new insights showing us? The information is nothing short of mind-blowing. The value of your poop just got an upgrade.
Here are some of the amazing things we’ve learned from our work at Viome.
1. Was Popeye wrong? Why “health food” isn’t necessarily healthy.
Each week a new fad diet is released, discussed, and followed, with the newest “research” anointing the superfood everyone should now eat. But too often, a fad diet is just a regurgitation of what worked for one person and shouldn’t be followed by everyone else.
For example, we’ve been told to eat our greens and that greens and nuts are “anti-inflammatory,” but this is actually not always true. Spinach, bran, rhubarb, beets, nuts, and nut butters all contain oxalate. We now know that oxalate-containing food can be harmful, unless you have the microbes present that can metabolize it into a non-harmful substance.
Thirty percent of Viome customers do not have the microbes to metabolize oxalates properly. In other words, “healthy foods” like spinach are actually not healthy for these people.
Looks like not everyone should follow Popeye’s food plan.
2. Aren’t foods containing “antioxidants” always good for everyone?
Just like oxalates, polyphenols in foods are usually considered very healthy, but unless you have microbes that utilize specific polyphenols, you may not get the full benefit from them. One example is a substance found in these foods called ellagic acid. We can detect if your microbiome is metabolizing ellagic acid and converting it into urolithin A. It is only the urolithin A that has anti-inflammatory and antioxidant effects. Without the microbes to do this conversion, you will not benefit from the ellagic acid in foods.
Examples: Walnuts, raspberries, pomegranate, blackberries, pecans, and cranberries all contain ellagic acid.
We have analyzed tens of thousands of people, and only about 50% of the people actually benefit from eating more foods containing ellagic acid.
3. You’re probably eating too much protein (and it may be causing inflammation).
When you think high-protein diet, you think paleo, keto, and high-performance diets.
Protein is considered good for you. It helps build muscle and provide energy—but if you eat too much, it can cause inflammation and decrease longevity.
We can analyze the activity of your microbiome to determine if you are eating too much protein that feeds protein-fermenting bacteria like Alistipes putredinis and Tannerella forsythia, and if these organisms are producing harmful substances such as ammonia, hydrogen sulfide, p-cresol, or putrescine. These substances can damage your gut lining and lead to things like leaky gut.
4. Something’s fishy. Are “healthy foods” causing heart disease?
Choline in certain foods can get converted by bacteria into a substance called trimethylamine (TMA) that is associated with heart disease when it gets absorbed into your body and converted to TMAO. However, TMA conversion doesn’t happen in individuals without these types of bacteria in their microbiome.
We can see the TMA production pathways and many of the gammaproteobacteria that do this conversion.
What foods contain choline? Liver, salmon, chickpeas, split peas, eggs, navy beans, peanuts, and many others.
Before you decide to go full-on pescatarian or paleo, you may want to check if your microbiome is producing TMA with that salmon or steak.
5. Hold up, Iron Man. We can see inflammation from too much iron.
Minerals like iron in your food can, in certain inflammatory microbial environments, promote growth of pathogens like Escherichia, Shigella, and Salmonella.
Maybe it wasn’t just that raw chicken that gave you food poisoning, but your toxic microbiome that made you sick.
On the other hand, when you don’t have enough iron, you could become anemic, leading to weakness and shortness of breath.
So, just like Iron Man, it’s about finding your balance so that you can fly.
6. Are you anxious or stressed? Your poop will tell you.
Our gut and brain are connected via the vagus nerve. A large majority of neurotransmitters are either produced or consumed by our microbiome. In fact, some 90% of all serotonin (a feel-good neurotransmitter) is produced by your gut microbiome and not by your brain.
When you have a toxic microbiome that’s producing a large amount of toxins like hydrogen sulfide, the lining of your gut starts to deteriorate into what’s known as leaky gut. Think of leaky gut as your gut not having healthy borders or boundaries. And when this happens, all kinds of disease can emerge. When the barrier of the gut breaks down, it starts a chain reaction causing low-grade chronic inflammation—which has been identified as a potential source of depression and higher levels of anxiety, in addition to many other chronic diseases.
We’re not saying you shouldn’t meditate, but if you want to get the most out of your meditation and really reduce your stress levels, make sure you are eating the right food that promotes a healthy microbiome.
7. Your microbiome is better than Red Bull.
If you want more energy, get your microbiome back into balance.
No, you don’t need three pots of coffee to keep you going; you just need a balanced microbiome.
Your microbiome is responsible for calorie extraction, or creating energy, through pathways such as the tricarboxylic acid (TCA) cycle. Our bodies depend on the energy that our microbiome produces.
How much energy we get from our food is dependent on how efficient our microbiome is at converting the food into energy. High-performing microbiomes are excellent at converting food into energy. This is great when you are an athlete and need the extra energy, but if you don’t use up the energy it may be the source of some of those unwanted pounds.
If the microbes can’t or won’t metabolize the glucose (sugar) that you eat, it will be stored as fat. If the microbes are extracting too many calories from your food, or producing lipopolysaccharides (LPS) that cause metabolic endotoxemia, activating toll-like receptors and driving insulin resistance, you may end up storing what you eat as fat.
Think of your microbiome as Doc Brown’s DeLorean from Back to the Future: it can take pretty much anything and turn it into fuel if it’s strong and resilient enough.
8. We can see your joint pain in your poop.
Got joint pain? Your microbiome can tell you why.
Lipopolysaccharide (LPS) is a key pro-inflammatory molecule made by some of your microbes. If your microbes are making too much LPS, it can wreak havoc on your immune system by putting it into overdrive. When your immune system goes on the warpath there is often collateral damage to your joints and other body parts.
Perhaps balancing your microbiome is a better solution than reaching for the glucosamine. Think of your microbiome as the top general of your immune army. It puts your immune system through basic training and determines when it goes to war.
Ideally, your immune system wins the quick battle and gets some rest, but sometimes if your microbiome keeps it on constant high alert it becomes a long, drawn-out war resulting in chronic inflammation and chronic diseases.
Are you really “getting older,” or is your microbiome just making you “feel” older because it keeps sending warnings to your immune system, ultimately leading to chronic pain?
Before you throw in the towel on your favorite activities, check your microbiome. And, if you have anything with “itis” in it, it’s possible that when you balance your microbiome the inflammation from your “itis” will be reduced.
9. Your gut is doing the talking for your mouth.
When you have low stomach acid, bacteria from your mouth can make their way down to your GI tract.
Stomach acid is there to protect you from the bacteria in your mouth and the parasites and fungi in your food. If you don’t have enough of it, the bacteria in your mouth will invade your gut. This invasion is associated with, and a risk factor for, autoimmune disease and inflammation in the gut.
We are learning that low stomach acid is perhaps one of the major causes of chronic disease. This stomach acid is essential to kill mouth bacteria and help us digest our food.
What kinds of things cause low stomach acid? Stress and antacids like Nexium, Zantac, and Prilosec.
10. Carbs can be protein precursors.
Rejoice! Perhaps carbs aren’t as bad as we thought (as long as your microbiome is up to the task). We can see if some of the starches you eat can be made into amino acids by the microbiome.
Our microbiome makes 20% of our branched-chain amino acids (BCAAs) for us, and it will adapt to make these vital BCAAs for us in almost any way it can.
Essentially, your microbiome is hooking up carbons and hydrogens into different formulations of BCAAs, depending on what you feed it. The microbiome is excellent at adapting and pivoting based on the food you feed it and the environment that it’s in.
So, good news: Carbs are protein precursors, as long as you have the right microbiome.
Stop Talking Sh*t Now
Your microbiome is a world-class entrepreneur that can take low-grade sources of food and turn them into valuable and usable energy.
You have a best friend and confidant within you that is working wonders to make sure you have energy and that all of your needs are met.
And, just like a best friend, if you take great care of your microbiome, it will take great care of you.
Given the research emerging daily about the microbiome and its importance on your quality of life, prioritizing the health of your microbiome is essential.
When you have a healthy microbiome, you’ll have a healthy life.
It’s now clear that some of the greatest insights for your health will come from your poop.
It’s time to stop talking sh*t and get your sh*t together. Your life may depend on it.
Viome can help you identify what your microbiome is actually doing. The combination of Viome’s metatranscriptomic technology and cutting-edge artificial intelligence is paving a brand new path forward for microbiome health.
Image Credit: WhiteDragon / Shutterstock.com
Watch This Lifelike Robot Fish Swim ...
Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.
Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.
Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.
To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.
SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.
It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
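The buoyancy trick described above is Archimedes’ principle at work: at fixed mass, compressing the air chamber shrinks the displaced volume, so the upward buoyant force drops and the robot sinks. Here is a rough sketch with made-up numbers, not SoFi’s actual mass or volume.

```python
RHO_WATER = 1025.0  # kg/m^3, approximate density of seawater
G = 9.81            # m/s^2, gravitational acceleration

def net_vertical_force(mass_kg, displaced_volume_m3):
    """Buoyant force minus weight: positive floats upward, negative sinks."""
    return RHO_WATER * displaced_volume_m3 * G - mass_kg * G

mass = 1.6  # kg, held constant: compressing air changes volume, not mass
for volume in (0.00160, 0.00156, 0.00152):
    force = net_vertical_force(mass, volume)
    print(f"displaced volume {volume * 1e6:.0f} cm^3 -> net force {force:+.2f} N")
```

Small changes in displaced volume flip the sign of the net force, which is all the robot needs to trim itself up or down.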
“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”
The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.
It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?
Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.
It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
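The two-chamber tail mechanism above can be reduced to a toy model: the tail bends toward whichever chamber holds more water, so driving the pump sinusoidally yields a fish-like side-to-side oscillation. The linear fill-to-angle mapping and the 30-degree maximum deflection are invented for illustration, not taken from SoFi’s design.

```python
import math

MAX_ANGLE_DEG = 30.0  # assumed maximum tail deflection

def tail_angle(left_fill, right_fill):
    """Fills in [0, 1]; positive angle bends toward the left chamber."""
    return MAX_ANGLE_DEG * (left_fill - right_fill)

# Drive the pump sinusoidally: water moved into one chamber leaves the other.
for t in (0.0, 0.25, 0.5, 0.75):
    left = 0.5 + 0.5 * math.sin(2 * math.pi * t)
    right = 1.0 - left
    print(f"t={t:.2f}s  tail angle {tail_angle(left, right):+.1f} deg")
```

Varying the pump’s drive frequency and amplitude in a model like this is the same lever the hydraulic system uses to produce different tail movements and swimming speeds.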
Besides looking neat, SoFi needs to look lifelike so it can blend in with marine life rather than scare real fish away, letting it get close enough to observe them.
“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.
Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.
They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.
“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”
The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.
Image Credit: MIT CSAIL