Tag Archives: fighting

#433655 First-Ever Grad Program in Space Mining ...

Maybe they could call it the School of Space Rock: A new program being offered at the Colorado School of Mines (CSM) will educate post-graduate students on the nuts and bolts of extracting and using valuable materials such as rare metals and frozen water from space rocks like asteroids or the moon.

Officially called Space Resources, the graduate-level program is reputedly the first of its kind in the world to offer a course in the emerging field of space mining. Heading the program is Angel Abbud-Madrid, director of the Center for Space Resources at Mines, a well-known engineering school located in Golden, Colorado, where Molson Coors taps Rocky Mountain spring water for its earthly brews.

The first semester for the new discipline began last month. While Abbud-Madrid didn’t immediately respond to an interview request, Singularity Hub did talk to Chris Lewicki, president and CEO of Planetary Resources, a space mining company whose founders include Peter Diamandis, Singularity University co-founder.

A former NASA engineer who worked on multiple Mars missions, Lewicki says the Space Resources program at CSM, with its multidisciplinary focus on science, economics, and policy, will help students be light years ahead of their peers in the nascent field of space mining.

“I think it’s very significant that they’ve started this program,” he said. “Having students with that kind of background exposure just allows them to be productive on day one instead of having to kind of fill in a lot of things for them.”

Who would be attracted to apply for such a program? There are many professionals who could be served by a post-baccalaureate certificate, master’s degree, or even Ph.D. in Space Resources, according to Lewicki. Certainly aerospace engineers and planetary scientists would be among the faces in the classroom.

“I think it’s [also] people who have an interest in what I would call maybe space robotics,” he said. Lewicki is referring not only to the classic example of robotic arms like the Canadarm2, which lends a hand to astronauts aboard the International Space Station, but other types of autonomous platforms.

One example might be Planetary Resources’ own Arkyd-6, a small, autonomous satellite called a CubeSat launched earlier this year to test different technologies that might be used for deep-space exploration of resources. The proof-of-concept was as much a test of the technology—such as the first space-based use of a mid-wave infrared imager to detect water resources—as it was of the ability to work in space on a shoestring budget.

“We really proved that doing one of these billion-dollar science missions to deep space can be done for a lot less if you have a very focused goal, and if you kind of cut a lot of corners and then put some commercial approaches into those things,” Lewicki said.

A Trillion-Dollar Industry
Why space mining? There are at least a trillion reasons.

Astrophysicist Neil deGrasse Tyson famously said that the first trillionaire will be the “person who exploits the natural resources on asteroids.” That’s because asteroids—rocky remnants from the formation of our solar system more than four billion years ago—harbor precious metals, ranging from platinum and gold to iron and nickel.

For instance, one future target of exploration by NASA—an asteroid dubbed 16 Psyche, orbiting the sun in the asteroid belt between Mars and Jupiter—is worth an estimated $10,000 quadrillion. It’s a number so mind-bogglingly big that it would crash the global economy, if someone ever figured out how to tow it back to Earth without literally crashing it into the planet.

Living Off the Land
Space mining isn’t just about getting rich. Many argue that humanity’s ability to extract resources in space, especially water that can be refined into rocket fuel, will be a key technology to extend our reach beyond near-Earth space.

The presence of frozen water around the frigid polar regions of the moon, for example, represents an invaluable resource for powering future deep-space missions. Splitting H2O into its component elements of hydrogen and oxygen would provide a nearly inexhaustible source of rocket fuel. Today, it costs $10,000 to put a pound of payload in Earth orbit, according to NASA.
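The split between hydrogen and oxygen follows directly from water’s stoichiometry. A rough sketch (the molar masses are standard values; the function name and the one-ton figure are just for illustration):

```python
# Rough stoichiometry of water electrolysis: 2 H2O -> 2 H2 + O2.
# Mass fractions follow from the molar masses involved.
M_H2O = 18.015   # g/mol
M_H2 = 2.016     # g/mol
M_O2 = 31.998    # g/mol

def propellant_from_water(water_kg):
    """Split a mass of water into hydrogen and oxygen by mass."""
    moles = water_kg * 1000.0 / M_H2O        # mol of H2O
    h2_kg = moles * M_H2 / 1000.0            # 1 mol H2 per mol H2O
    o2_kg = moles * (M_O2 / 2.0) / 1000.0    # 1/2 mol O2 per mol H2O
    return h2_kg, o2_kg

h2, o2 = propellant_from_water(1000.0)       # one metric ton of lunar ice
print(f"H2: {h2:.1f} kg, O2: {o2:.1f} kg")
```

Roughly one part hydrogen to eight parts oxygen by mass, which is in the neighborhood of the oxidizer-to-fuel ratios hydrogen-oxygen rocket engines actually burn.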

Until more advanced rocket technology is developed, the moon looks to be the best bet for serving as the launching pad to Mars and beyond.

Moon Versus Asteroid
However, Lewicki notes that the moon’s proximity and our more intimate familiarity with its pockmarked surface don’t necessarily make a lunar resource-extraction mission any easier than a multi-year journey to a fast-moving asteroid.

For one thing, fighting gravity to and from the moon is no easy feat, as the moon has a significantly stronger gravitational field than an asteroid. Another challenge is that the frozen water sits in permanently shadowed lunar craters, meaning space miners can’t rely on solar-powered equipment and would instead need some sort of external energy source.

And then there’s the fact that moon craters might just be the coldest places in the solar system. NASA’s Lunar Reconnaissance Orbiter found temperatures plummeted as low as 26 Kelvin, or more than minus 400 degrees Fahrenheit. In comparison, the coldest temperatures recorded on Earth, on the high Antarctic plateau, are around minus 144 degrees F.
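Those Kelvin and Fahrenheit figures are related by the standard temperature conversion, which is easy to check:

```python
def kelvin_to_fahrenheit(k):
    """Standard conversion: degrees F = K * 9/5 - 459.67."""
    return k * 9.0 / 5.0 - 459.67

# LRO's coldest lunar-crater reading, converted.
lunar_crater_f = kelvin_to_fahrenheit(26.0)
print(f"{lunar_crater_f:.1f} degrees F")   # just under -413 F

# Sanity check against a familiar anchor: water freezes at 273.15 K.
print(f"{kelvin_to_fahrenheit(273.15):.1f} degrees F")
```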

“We don’t operate machines in that kind of thermal environment,” Lewicki said of the extreme temperatures detected in the permanent dark regions of the moon. “Antarctica would be a balmy desert island compared to a lunar polar crater.”

Of course, no one knows quite what awaits us in the asteroid belt. Answers may soon be forthcoming. Last week, the Japan Aerospace Exploration Agency landed two small, hopping rovers on an asteroid called Ryugu. Meanwhile, NASA’s OSIRIS-REx spacecraft is due to arrive at the near-Earth asteroid Bennu at the end of this year, with the goal of eventually retrieving a sample.

No Bucks, No Buck Rogers
Visionaries like Elon Musk and Jeff Bezos talk about colonies on Mars, with millions of people living and working in space. The reality is that there’s probably a reason Buck Rogers was set in the 25th century: It’s going to take a lot of money and a lot of time to realize those sci-fi visions.

Or, as Lewicki put it: “No bucks, no Buck Rogers.”

The cost of operating in outer space can be prohibitive. Planetary Resources itself is grappling with raising additional funding, with reports this year about layoffs and even a possible auction of company assets.

Still, Lewicki is confident that despite the economic and technical challenges, humanity will someday outdo even its boldest dreamers, building skyscrapers on the moon and making interplanetary trips to Mars, feats that will dwarf today’s engineering marvels.

“What we’re doing is going to be very hard, very painful, and almost certainly worth it,” he said. “Who would have thought that there would be a job for a space miner that you could go to school for, even just five or ten years ago? Things move quickly.”

Image Credit: M-SUR / Shutterstock.com

Posted in Human Robots

#433288 The New AI Tech Turning Heads in Video ...

A new technique using artificial intelligence to manipulate video content gives new meaning to the expression “talking head.”

An international team of researchers showcased the latest advancement in synthesizing facial expressions—including mouth, eyes, eyebrows, and even head position—in video at this month’s 2018 SIGGRAPH, a conference on innovations in computer graphics, animation, virtual reality, and other forms of digital wizardry.

The project is called Deep Video Portraits. It relies on a type of AI called generative adversarial networks (GANs) to modify a “target” actor based on the facial and head movement of a “source” actor. As the name implies, GANs pit two opposing neural networks against one another to create a realistic talking head, right down to the sneer or raised eyebrow.

In this case, the adversaries are actually working together: One neural network generates content, while the other rejects or approves each effort. The back-and-forth interplay between the two eventually produces a realistic result that can easily fool the human eye, including reproducing a static scene behind the head as it bobs back and forth.
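That generate-then-judge loop can be sketched in miniature. The snippet below illustrates only the two standard GAN loss terms (binary cross-entropy for the discriminator, the “fool the discriminator” objective for the generator); it is not the architecture used in Deep Video Portraits, and the function names and probability values are invented for the example:

```python
import math

def bce(prediction, label):
    """Binary cross-entropy for a single probability in (0, 1)."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(prediction + eps)
             + (1.0 - label) * math.log(1.0 - prediction + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real inputs scored 1 and fakes scored 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # The generator wants the discriminator to score its fake as real.
    return bce(d_fake, 1.0)

# A confident discriminator (real -> 0.9, fake -> 0.1) has low loss,
# while the generator's loss stays high until its fakes fool D.
print(discriminator_loss(0.9, 0.1))
print(generator_loss(0.1))
```

Training alternates gradient steps on these two losses; the equilibrium the networks settle into is what produces output realistic enough to fool the eye.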

The researchers say the technique can be used by the film industry for a variety of purposes, from editing actors’ facial expressions to match dubbed voices to repositioning an actor’s head in post-production. AI can not only produce highly realistic results but also deliver them much faster than the manual processes used today, according to the researchers. You can read the full paper of their work here.

“Deep Video Portraits shows how such a visual effect could be created with less effort in the future,” said Christian Richardt, from the University of Bath’s motion capture research center CAMERA, in a press release. “With our approach, even the positioning of an actor’s head and their facial expression could be easily edited to change camera angles or subtly change the framing of a scene to tell the story better.”

AI Tech Different Than So-Called “Deepfakes”
The work is far from the first to employ AI to manipulate video and audio. At last year’s SIGGRAPH conference, researchers from the University of Washington showcased their work using algorithms that inserted audio recordings from a person in one instance into a separate video of the same person in a different context.

In this case, they “faked” a video using a speech from former President Barack Obama addressing a mass shooting incident during his presidency. The AI-doctored video injects the audio into an unrelated video of the president while also blending the facial and mouth movements, doing a pretty credible job of lip-synching.

A previous paper by many of the same scientists on the Deep Video Portraits project detailed how they were first able to manipulate a video in real time of a talking head (in this case, actor and former California governor Arnold Schwarzenegger). The Face2Face system pulled off this bit of digital trickery using a depth-sensing camera that tracked the facial expressions of an Asian female source actor.

A less sophisticated method of swapping faces, using machine learning software dubbed FakeApp, emerged earlier this year. Predictably, the tech, which requires numerous photos of the source actor in order to train the neural network, was used for more juvenile pursuits, such as superimposing a person’s face onto a porn performer’s body.

The application gave rise to the term “deepfakes,” which is now used somewhat ubiquitously to describe all such instances of AI-manipulated video—much to the chagrin of some of the researchers involved in more legitimate uses.

Fighting AI-Created Video Forgeries
However, the researchers are keenly aware that their work—intended for benign uses such as in the film industry or even to correct gaze and head positions for more natural interactions through video teleconferencing—could be used for nefarious purposes. Fake news is the most obvious concern.

“With ever-improving video editing technology, we must also start being more critical about the video content we consume every day, especially if there is no proof of origin,” said Michael Zollhöfer, a visiting assistant professor at Stanford University and member of the Deep Video Portraits team, in the press release.

Toward that end, the research team is training the same adversarial neural networks to spot video forgeries. They also strongly recommend that developers clearly watermark videos that are edited, through AI or otherwise, and clearly denote which parts of the scene were modified.

To catch less ethical users, the US Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), is supporting a program called Media Forensics. This latest DARPA challenge enlists researchers to develop technologies to automatically assess the integrity of an image or video, as part of an end-to-end media forensics platform.

The DARPA official in charge of the program, Matthew Turek, told MIT Technology Review that so far the program has “discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations.” In one reported example, researchers have targeted eyes, which rarely blink in “deepfakes” like those created by FakeApp because the AI is trained on still pictures. That method would seem less effective at spotting the sort of forgeries created by Deep Video Portraits, which appears to flawlessly match the entire facial and head movements between the source and target actors.
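The blink-counting idea reduces to a simple heuristic once a per-frame eye-openness signal (often an “eye aspect ratio,” or EAR) has been extracted from the footage. The sketch below is illustrative only: the threshold, minimum run length, and sample values are assumptions, not parameters from the DARPA program:

```python
def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a series of per-frame eye-aspect-ratio values.

    A blink is a run of at least `min_frames` consecutive frames where
    the EAR drops below `threshold` (eyes closed). Both values here are
    illustrative, not tuned.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at end of clip
        blinks += 1
    return blinks

# A natural clip shows periodic blinks; a fake trained only on
# still photos may show none at all.
natural = [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.1, 0.1, 0.3]
suspect = [0.3] * 9
print(count_blinks(natural), count_blinks(suspect))
```

A forensic tool would then flag clips whose blink rate falls far below human norms, which is exactly the cue Deep Video Portraits-style forgeries would not trip, since they transfer real blinks from the source actor.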

“We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip,” Zollhöfer said. “This will lead to ever-better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes.”

Image Credit: Tancha / Shutterstock.com

Posted in Human Robots

#431866 The Technologies We’ll Have Our Eyes ...

It’s that time of year again when our team has a little fun and throws on our futurist glasses to look ahead at some of the technologies and trends we’re most anticipating next year.
Whether the implications of a technology are vast or it resonates with one of us personally, here’s the list from some of the Singularity Hub team of what we have our eyes on as we enter the new year.
For a little refresher, these were the technologies our team was fired up about at the start of 2017.
Tweet us the technology you’re excited to watch in 2018 at @SingularityHub.
Cryptocurrency and Blockchain
“Given all the noise Bitcoin is making globally in the media, it is driving droves of main street investors to dabble in and learn more about cryptocurrencies. This will continue to raise valuations and drive adoption of blockchain. From Bank of America recently getting a blockchain-based patent approved to the Australian Securities Exchange’s plan to use blockchain, next year is going to be chock-full of these stories. Coindesk even recently spotted a patent filing from Apple involving blockchain. From NEO (‘China’s Ethereum’) to IOTA, Golem, and Qtum, there are a lot of interesting cryptos to follow given the immense number of potential applications. Hang on, it’s going to be a bumpy ride in 2018!”
–Kirk Nankivell, Website Manager
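For readers new to the underlying idea, a blockchain is at heart a hash-linked list: each block commits to the hash of the block before it, so editing any piece of history invalidates everything after it. A toy sketch in Python (the record format and function names are invented for illustration; real chains add consensus, signatures, and proof-of-work on top):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (including its link to the previous block)."""
    payload = json.dumps(block, sort_keys=True).encode()  # deterministic bytes
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Link each record to the hash of the block before it."""
    chain, prev = [], "0" * 64  # genesis pointer
    for i, data in enumerate(records):
        block = {"index": i, "data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    """Recompute every link; an edit to an early block breaks all later ones."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob:5", "bob->carol:2"])
assert is_valid(chain)
chain[0]["data"] = "alice->bob:500"   # tamper with history...
assert not is_valid(chain)            # ...and the chain no longer verifies
```

That tamper-evidence, distributed across many independent verifiers, is what the patents and exchange projects mentioned above are building on.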
There Is No One Technology to Watch
“Next year may be remembered for advances in gene editing, blockchain, AI—or most likely all these and more. There is no single technology to watch. A number of consequential trends are advancing and converging. This general pace of change is exciting, and it also contributes to spiking anxiety. Technology’s invisible lines of force are extending further and faster into our lives and subtly subverting how we view the world and each other in unanticipated ways. Still, all the near-term messiness and volatility, the little and not-so-little dramas, the hype and disillusion, the controversies and conflict, all that smooths out a bit when you take a deep breath and a step back, and it’s my sincere hope and belief the net result will be more beneficial than harmful.”
–Jason Dorrier, Managing Editor
‘Fake News’ Fighting Technology
“It’s been a wild ride for the media this year, with the term ‘fake news’ moving from the public’s periphery into mainstream vocabulary. The spread of ‘fake news’ is often blamed on media outlets, but social media platforms and search engines are often responsible too. (Facebook still won’t identify as a media company—maybe next year?) Yes, technology can contribute to spreading false information, but it can also help stop it. From technologists who are building in-article ‘trust indicator’ features, to artificial intelligence systems that can both spot and shut down fake news early on, I’m hopeful we can create new solutions to this huge problem. One step further: if publishers step up to fix this we might see some faith restored in the media.”
–Alison E. Berman, Digital Producer
Pay-as-You-Go Home Solar Power
“People in rural African communities are increasingly bypassing electrical grids (which aren’t even an option in many cases) and installing pay-as-you-go solar panels on their homes. The companies offering these services are currently not subject to any regulations, though they’re essentially acting as a utility. As demand for power grows, they’ll have to come up with ways to efficiently scale, and to balance the humanitarian and capitalistic aspects of their work. It’s fascinating to think traditional grids may never be necessary in many areas of the continent thanks to this technology.”
–Vanessa Bates Ramirez, Associate Editor
Virtual Personal Assistants
“AI is clearly going to rule our lives, and in many ways it already makes us look like clumsy apes. Alexa, Siri, and Google Assistant are promising first steps toward a world of computers that understand us and relate to us on an emotional level. I crave the day when my Apple Watch coaches me into healthier habits, lets me know about new concerts nearby, speaks to my self-driving Lyft on my behalf, and can help me respond effectively to aggravating emails based on communication patterns. But let’s not brush aside privacy concerns and the implications of handing over our personal data to megacorporations. The scariest thing here is that privacy laws and advertising ethics do not accommodate this level of intrusive data hoarding.”
–Matthew Straub, Director of Digital Engagement (Hub social media)
Solve for Learning: Educational Apps for Children in Conflict Zones
“I am most excited by exponential technology when it is used to help solve a global grand challenge. Educational apps are currently being developed to help solve for learning by increasing accessibility to learning opportunities for children living in conflict zones. Many children in these areas are not receiving an education, with girls being 2.5 times more likely than boys to be out of school. The EduApp4Syria project is developing apps to help children in Syria and Kashmir learn in their native languages. Mobile phones are increasingly available in these areas, and the apps are available offline for children who do not have consistent access to mobile networks. The apps are low-cost, easily accessible, and scalable educational opportunities.”
–Paige Wilcoxson, Director, Curriculum & Learning Design
Image Credit: Triff / Shutterstock.com

Posted in Human Robots

#431023 Finish Him! MegaBots’ Giant Robot Duel ...

It began two years ago when MegaBots co-founders Matt Oehrlein and Gui Cavalcanti donned American flags as capes and challenged Suidobashi Heavy Industries to a giant robot duel in a YouTube video that immediately went viral.
The battle proposed: MegaBots’ 15-foot-tall, 12,000-pound MK2 robot vs. Suidobashi’s 9,000-pound robot, KURATAS. Oehrlein and Cavalcanti first discovered the KURATAS robot in a listing on Amazon with a million-dollar price tag.
In an equally flamboyant response video, Suidobashi CEO and founder Kogoro Kurata accepted the challenge. (Yes, he named his robot after himself.) Both parties planned to take a year to prepare their robots for combat.
In the end, it took twice that long. Nonetheless, the battle is going down this September in an undisclosed location in Japan.
Oehrlein shared more about the much-anticipated showdown during our interview at Singularity University’s Global Summit.

Two years since the initial video, MegaBots has now completed the combat-capable MK3 robot, named Eagle Prime. This new 12-ton, 16-foot-tall robot is powered by a 430-horsepower Corvette engine and requires two human pilots.
It’s also the robot they recently shipped to Japan to take on KURATAS.

Building Eagle Prime has been no small feat. With arms and legs that each weigh as much as a car, assembling the robot takes forklifts, cranes, and a lot of caution. Fortress One, MegaBots’ headquarters in Hayward, California, is where the magic happens.
In terms of “weaponry,” Eagle Prime features a giant pneumatic cannon that shoots huge paint cannonballs. Oehrlein warns, “They can shatter all the windows in a car. It’s very powerful.” A logging grapple, which looks like a giant claw and exerts 3,000 pounds of steel-crushing force, has also been added to the robot.

“It’s a combination of range combat, using the paint balls to maybe blind cameras on the other robot or take out sensitive electronics, and then closing in with the claw and trying to disable their systems at close range,” Oehrlein explains.
Safety systems include a cockpit roll cage for the two pilots, five-point safety seatbelt harnesses, neck restraints, helmets, and flame retardant suits.
Co-founder Matt Oehrlein inside the cockpit of MegaBots’ Eagle Prime giant robot.
Oehrlein and Cavalcanti have also spent considerable time inside Eagle Prime practicing battlefield tactics and maneuvering the robot through obstacle courses.
Suidobashi’s robot is a bit shorter and lighter, but also a little faster, so the battle dynamics should be interesting.
You may be thinking, “Why giant dueling robots?”
MegaBots’ grand vision is a full-blown international sports league of giant fighting robots on the scale of Formula One racing. Picture a nostalgic evening sipping a beer (or three) and watching Pacific Rim- and Power Rangers-inspired robots battle—only in real life.
Eagle Prime is, in good humor, a proudly patriotic robot.
“Japan is known as a robotic powerhouse,” says Oehrlein, “I think there’s something interesting about the slightly overconfident American trying to get a foothold in the robotics space and doing it by building a bigger, louder, heavier robot, in true American fashion.”
For safety reasons, no fans will be admitted to the fight itself. The battle will be posted afterward on MegaBots’ YouTube channel and Facebook page.
We’ll soon find out whether this becomes another American underdog story.
In the meantime, I give my loyalty to MegaBots, and in the words of Mortal Kombat, say, “Finish him!”

Image Credit: MegaBots

Posted in Human Robots
