#436550 Work in the Age of Web 3.0

What is the future of work? Is our future one of ‘technological socialism’ (where technology takes care of our needs)? Or will tomorrow’s workplace be completely virtualized, allowing us to hang out at home in our PJs while “walking” about our virtual corporate headquarters?

This blog will look at the future of work during the age of Web 3.0, examining scenarios in which artificial intelligence, virtual reality, and the spatial web converge to transform every element of our careers, from training, to execution, to free time.

To offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.

A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.

But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments. Suddenly, all our information will be manipulated, stored, understood and experienced in spatial ways.

In this blog, I’ll be discussing the Spatial Web’s vast implications for:

Professional Training
Delocalized Business & the Virtual Workplace
Smart Permissions & Data Security

Let’s dive in.

Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market. As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

Then in September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, Neighborhood Market, and Discount Store with VR-based employee training. By mid-2019, Walmart had tracked a 10-15 percent boost in employee confidence as a result of newly implemented VR training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, the FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into just six months, turning physical mock-ups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

Dramatically reducing the time and trouble of VR pilot testing, they aim to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these fingertip devices contain a suite of actuators that simulate everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.
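To make that mapping concrete, here is a minimal sketch of how a simulated contact force might drive a fingertip actuator. The force range, 8-bit drive resolution, and function names are illustrative assumptions, not Go Touch VR’s actual API.

```python
def actuator_command(contact_force_newtons, max_force=5.0, levels=255):
    """Map a simulated contact force onto a discrete actuator drive level.
    The 0-5 N range and 8-bit resolution are illustrative assumptions."""
    force = max(0.0, min(contact_force_newtons, max_force))  # clamp to range
    return round(levels * force / max_force)

# A light graze versus firmly gripping a virtual throttle lever:
print(actuator_command(0.2))  # low drive level: barely-there touch
print(actuator_command(4.5))  # high drive level: firm, stiff contact
```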

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands. VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to be an electric, autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to lifelong learning, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.

But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.

Rise of the Virtual Workplace & Digital Data Integrity
In addition to enabling a virtual goods marketplace, the Spatial Web is also giving rise to “virtual company headquarters” and completely virtualized companies, where employees can work from home or any place on the planet.

Too good to be true? Check out an incredible publicly listed company called eXp Realty.

Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading. But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.

And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial. What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent. Foregoing any physical location in favor of a centralized VR campus, eXp Realty has shed nearly all overhead and entered a lucrative market with minimal upfront costs.

Delocalize with VR, and you can now hire anyone with Internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

Throw in the Spatial Web’s fundamental blockchain-based data layer, and cryptographically secured virtual IDs will let you validate the identities of colleagues, or of any of the virtual avatars we will soon inhabit.

This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.
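As a toy illustration of what an incorruptible spatial log might look like, here is a hash-chained record in Python. The field names and structure are my own stand-ins; the Spatial Web’s actual data layer would be a distributed ledger, not a single in-memory list.

```python
import hashlib
import json
import time

class SpatialLog:
    """Toy append-only log: each entry hashes the one before it,
    so tampering with any past record breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, room_id, attendee_ids, summary):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": time.time(),
            "room": room_id,
            "attendees": sorted(attendee_ids),  # who was present
            "summary": summary,                 # e.g., AI-generated minutes
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Recompute every hash; any edit to a past entry is detected."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = SpatialLog()
log.append("hq-conf-room-3", ["alice", "bob"], "Q3 contract terms agreed")
assert log.verify()
```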

But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge desk as your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.
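Here is a minimal sketch of such a smart transaction as a simple escrow state machine, with invented names and amounts; a real version would run as a smart contract on the Spatial Web’s blockchain layer rather than inside one process.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    balance: float

class RoomSwapOffer:
    """Toy escrow: the fee is held when the offer is made and released
    to the room's current holder only if they accept."""

    def __init__(self, buyer: Account, holder: Account, fee: float):
        self.buyer, self.holder, self.fee = buyer, holder, fee
        buyer.balance -= fee            # escrow the fee up front
        self.settled = False

    def accept(self):
        if not self.settled:
            self.holder.balance += self.fee   # payment delivered automatically
            self.settled = True

    def decline(self):
        if not self.settled:
            self.buyer.balance += self.fee    # refund if the offer is rejected
            self.settled = True

us = Account("our-firm", 500.0)
them = Account("their-marketing-team", 0.0)
offer = RoomSwapOffer(us, them, fee=50.0)
offer.accept()                          # room handed over, payment cleared
assert us.balance == 450.0 and them.balance == 50.0
```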

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.

Life-like virtual modules are already unlocking countless professional training camps, modifiable in real time and easily updated. Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading. And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.

Welcome to the Spatial Web workplace.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Gerd Altmann from Pixabay

#436462 Robotic Exoskeletons, Like This One, Are ...

When you imagine an exoskeleton, chances are it might look a bit like the Guardian XO from Sarcos Robotics. The XO is literally a robot you wear (or maybe, it wears you). The suit’s powered limbs sense your movements and match their position to yours with little latency to give you effortless superstrength and endurance—lifting 200 pounds will feel like 10.

A vision of robots and humankind working together in harmony. Now, isn’t that nice?

Of course, there isn’t anything terribly novel about an exoskeleton. We’ve seen plenty of concepts and demonstrations in the last decade. These include light exoskeletons tailored to industrial settings—some of which are being tested out by the likes of Honda—and healthcare exoskeletons that support the elderly or folks with disabilities.

Full-body powered robotic exoskeletons are a bit rarer, which makes the Sarcos suit pretty cool to look at. But like all things in robotics, practicality matters as much as vision. It’s worth asking: Will anyone buy and use the thing? Is it more than a concept video?

Sarcos thinks so, and they’re excited about it. “If you were to ask the question, what does 30 years and $300 million look like,” Sarcos CEO, Ben Wolff, told IEEE Spectrum, “you’re going to see it downstairs.”

The XO appears to check a few key boxes. For one, it’s user friendly. According to Sarcos, it only takes a few minutes for the uninitiated to strap in and get up to speed. Feeling comfortable doing work with the suit takes a few hours. This is thanks to a high degree of sensor-based automation that allows the robot to seamlessly match its user’s movements.
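As a heavily simplified sketch of that control idea, the toy loop below drives each powered joint toward the wearer’s measured joint angle with a proportional-derivative controller. The gains, the 1 kHz rate, and the unit-inertia joint model are invented for illustration, not Sarcos’s actual controller.

```python
import math

class JointFollower:
    """Toy PD loop: drive the suit's joint toward the wearer's
    measured joint angle so the assistance feels transparent."""

    def __init__(self, kp=40.0, kd=2.5):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def torque(self, wearer_angle, suit_angle, dt):
        error = wearer_angle - suit_angle
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * d_error

# A fast control loop keeps the lag imperceptible to the wearer.
follower = JointFollower()
suit_angle, velocity, dt = 0.0, 0.0, 0.001
for step in range(1000):
    wearer_angle = 0.3 * math.sin(step * dt * 2 * math.pi)  # wearer moves
    tau = follower.torque(wearer_angle, suit_angle, dt)
    velocity += tau * dt       # crude unit-inertia joint model
    suit_angle += velocity * dt
```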

The XO can also operate for more than a few minutes. It has two hours of battery life, and with spares on hand, it can go all day. The batteries are hot-swappable, meaning you can replace a drained battery with a new one without shutting the system down.

The suit is aimed at manufacturing, where workers are regularly moving heavy stuff around. Additionally, Wolff told CNET, the suit could see military use. But that doesn’t mean Avatar-style combat. The XO, Wolff said, is primarily about logistics (lifting and moving heavy loads) and isn’t designed to be armored, so it won’t likely see the front lines.

The system will set customers back $100,000 a year to rent, which sounds like a lot, but for industrial or military purposes, the six-figure rental may not deter would-be customers if the suit proves itself a useful bit of equipment. (And it’s reasonable to imagine the price coming down as the technology becomes more commonplace and competitors arrive.)

Sarcos got into exoskeletons a couple decades ago and was originally funded by the military (like many robotics endeavors). Videos hit YouTube as long ago as 2008, but after announcing the company was taking orders for the XO earlier this year, Sarcos says they’ll deliver the first alpha units in January, which is a notable milestone.

Broadly, robotics has advanced a lot in recent years. YouTube sensations like Boston Dynamics have regularly earned millions of views (and inevitably, headlines stoking robot fear). They went from tethered treadmill sessions to untethered backflips off boxes. While today’s robots really are vastly superior to their ancestors, they’ve struggled to prove themselves useful. A counterpoint to flashy YouTube videos, the DARPA Robotics Challenge gave birth to another meme altogether. Robots falling over. Often and awkwardly.

This year marks some of the first commercial fruits of a few decades’ research. Boston Dynamics began offering its robot dog, Spot, to select customers in 2019. Whether this proves to be a headline-worthy flash in the pan or something sustainable remains to be seen. But between robots with more autonomy and exoskeletons like the XO, the exoskeleton variety will likely be easier to make practical for a variety of uses.

Whereas autonomous robots require highly advanced automation to navigate uncertain and ever-changing conditions—automation which, at the moment, remains largely elusive (though the likes of Google are pairing the latest AI with robots to tackle the problem)—an exoskeleton mainly requires physical automation. The really hard bits, like navigating and recognizing and interacting with objects, are outsourced to its human operator.

As it turns out, for today’s robots the best AI is still us. We may yet get chipper automatons like Rosie the Robot, but until then, for complicated applications, we’ll strap into our mechs for their strength and endurance, and they’ll wear us for our brains.

Image Credit: Sarcos Robotics

#436209 Video Friday: Robotic Endoscope Travels ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, WA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Kuka has just announced the results of its annual Innovation Award. From an initial batch of 30 applicants, five teams reached the finals (we were part of the judging committee). The five finalists worked for nearly a year on their applications, which they demonstrated this week at the Medica trade show in Düsseldorf, Germany. And the winner of the €20,000 prize is…Team RoboFORCE, led by the STORM Lab in the U.K., which developed a “robotic magnetic flexible endoscope for painless colorectal cancer screening, surveillance, and intervention.”

The system could improve colonoscopy procedures by reducing pain and discomfort as well as other risks such as bleeding and perforation, according to the STORM Lab researchers. It uses a magnetic field to control the endoscope, pulling rather than pushing it through the colon.

The other four finalists also presented some really interesting applications—you can see their videos below.

“Because we were so pleased with the high quality of the submissions, we will have next year’s finals again at the Medica fair, and the challenge will be named ‘Medical Robotics’,” says Rainer Bischoff, vice president for corporate research at Kuka. He adds that the selected teams will again use Kuka’s LBR Med robot arm, which is “already certified for integration into medical products and makes it particularly easy for startups to use a robot as the main component for a particular solution.”

Applications are now open for Kuka’s Innovation Award 2020. You can find more information on how to enter here. The deadline is 5 January 2020.

[ Kuka ]

Oh good, Aibo needs to be fed now.

You know what comes next, right?

[ Aibo ]

Your cat needs this robot.

It's about $200 on Kickstarter.

[ Kickstarter ]

Enjoy this tour of the Skydio offices courtesy Skydio 2, which runs into not even one single thing.

If any Skydio employees had important piles of papers on their desks, well, they don’t anymore.

[ Skydio ]

Artificial intelligence is everywhere nowadays, but what exactly does it mean? We asked a group of MIT computer science grad students and postdocs how they personally define AI.

“When most people say AI, they actually mean machine learning, which is just pattern recognition.” Yup.

[ MIT ]

Using event-based cameras, this drone control system can track attitude at 1600 degrees per second (!).

[ UZH ]

Introduced at CES 2018, Walker is an intelligent humanoid service robot from UBTECH Robotics. Below are the latest features and technologies used during our latest round of development to make Walker even better.

[ Ubtech ]

Introducing the Alpha Prime by #VelodyneLidar, the most advanced lidar sensor on the market! Alpha Prime delivers an unrivaled combination of field-of-view, range, high-resolution, clarity and operational performance.

Performance looks good, but don’t expect it to be cheap.

[ Velodyne ]

Ghost Robotics’ Spirit 40 will start shipping to researchers in January of next year.

[ Ghost Robotics ]

Unitree is about to ship the first batch of their AlienGo quadrupeds as well:

[ Unitree ]

Mechanical engineering’s Sarah Bergbreiter discusses her work on micro robotics, how they draw inspiration from insects and animals, and how tiny robots can help humans in a variety of fields.

[ CMU ]

Learning contact-rich, robotic manipulation skills is a challenging problem due to the high-dimensionality of the state and action space as well as uncertainty from noisy sensors and inaccurate motor control. To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment. By adopting a similar strategy, robots can also achieve more robust manipulation. In this paper, we enable a robot to autonomously modify its environment and thereby discover how to ease manipulation skill learning. Specifically, we provide the robot with fixtures that it can freely place within the environment. These fixtures provide hard constraints that limit the outcome of robot actions. Thereby, they funnel uncertainty from perception and motor control and scaffold manipulation skill learning.

[ Stanford ]

Since 2016, Verity's drones have completed more than 200,000 flights around the world. Completely autonomous, client-operated and designed for live events, Verity is making the magic real by turning drones into flying lights, characters, and props.

[ Verity ]

To monitor and stop the spread of wildfires, University of Michigan engineers developed UAVs that could find, map and report fires. One day UAVs like this could work with disaster response units, firefighters and other emergency teams to provide real-time accurate information to reduce damage and save lives. For their research, the University of Michigan graduate students won first place at a competition for using a swarm of UAVs to successfully map and report simulated wildfires.

[ University of Michigan ]

Here’s an important issue that I haven’t heard talked about all that much: How first responders should interact with self-driving cars.

“To put the car in manual mode, you must call Waymo.” Huh.

[ Waymo ]

Here’s what Gitai has been up to recently, from a Humanoids 2019 workshop talk.

[ Gitai ]

The latest CMU RI seminar comes from Girish Chowdhary at the University of Illinois at Urbana-Champaign on “Autonomous and Intelligent Robots in Unstructured Field Environments.”

What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms! Teams of small aerial and ground robots could be a potential solution to many of the serious problems that modern agriculture is facing. However, fully autonomous robots that operate without supervision for weeks, months, or entire growing season are not yet practical. I will discuss my group’s theoretical and practical work towards the underlying challenging problems in robotic systems, autonomy, sensing, and learning. I will begin with our lightweight, compact, and autonomous field robot TerraSentia and the recent successes of this type of undercanopy robots for high-throughput phenotyping with deep learning-based machine vision. I will also discuss how to make a team of autonomous robots learn to coordinate to weed large agricultural farms under partial observability. These direct applications will help me make the case for the type of reinforcement learning and adaptive control that are necessary to usher in the next generation of autonomous field robots that learn to solve complex problems in harsh, changing, and dynamic environments. I will then end with an overview of our new MURI, in which we are working towards developing AI and control that leverages neurodynamics inspired by the Octopus brain.

[ CMU RI ]

#435748 Video Friday: This Robot Is Like a ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s been a while since we last spoke to Joe Jones, the inventor of Roomba, about his solar-powered, weed-killing robot, called Tertill, which he was launching as a Kickstarter project. Tertill is now available for purchase (US $300) and is shipping right now.

[ Tertill ]

Usually, we don’t post videos that involve drone use that looks to be either illegal or unsafe. These flights over the protests in Hong Kong are almost certainly both. However, it’s also a unique perspective on the scale of these protests.

[ Team BlackSheep ]

ICYMI: iRobot announced this week that it has acquired Root Robotics.

[ iRobot ]

This Boston Dynamics parody video went viral this week.

The CGI is good but the gratuitous violence—even if it’s against a fake robot—is a bit too much?

This is still our favorite Boston Dynamics parody video:

[ Corridor ]

Biomedical Engineering Department Head Bin He and his team have developed the first-ever successful non-invasive mind-controlled robotic arm to continuously track a computer cursor.

[ CMU ]

Organic chemists, prepare to meet your replacement:

Automated chemical synthesis carries great promises of safety, efficiency and reproducibility for both research and industry laboratories. Current approaches are based on specifically-designed automation systems, which present two major drawbacks: (i) existing apparatus must be modified to be integrated into the automation systems; (ii) such systems are not flexible and would require substantial re-design to handle new reactions or procedures. In this paper, we propose a system based on a robot arm which, by mimicking the motions of human chemists, is able to perform complex chemical reactions without any modifications to the existing setup used by humans. The system is capable of precise liquid handling, mixing, filtering, and is flexible: new skills and procedures could be added with minimum effort. We show that the robot is able to perform a Michael reaction, reaching a yield of 34%, which is comparable to that obtained by a junior chemist (undergraduate student in Chemistry).

[ arXiv ] via [ NTU ]

So yeah, ICRA 2019 was huge and awesome. Here are some brief highlights.

[ Montreal Gazette ]

For about US $5, this drone will deliver raw meat and beer to you if you live on an uninhabited island in Tokyo Bay.

[ Nikkei ]

The Smart Microsystems Lab at Michigan State University has a new version of their Autonomous Surface Craft. It’s autonomous, open source, and awfully hard to sink.

[ SML ]

As drone shows go, this one is pretty good.

[ CCTV ]

Here’s a remote controlled robot shooting stuff with a very large gun.

[ HDT ]

Over a period of three quarters (September 2018 through May 2019), we’ve had the opportunity to work with five graduating University of Denver students as they brought their idea for a Misty II arm extension to life.

[ Misty Robotics ]

If you wonder how it looks to inspect burners and superheaters of a boiler with an Elios 2, here you are! This inspection was performed by Svenska Elektrod in a peat-fired boiler for Vattenfall in Sweden. Enjoy!

[ Flyability ]

The newest Soft Robotics technology, mGrip mini fingers, made for tight spaces, small packaging, and delicate items, giving limitless opportunities for your applications.

[ Soft Robotics ]

What if legged robots were able to generate dynamic motions in real-time while interacting with a complex environment? Such technology would represent a significant step toward the deployment of legged systems in real-world scenarios. This means being able to replace humans in the execution of dangerous tasks and to collaborate with them in industrial applications.

This workshop aims to bring together researchers from all the relevant communities in legged locomotion such as: numerical optimization, machine learning (ML), model predictive control (MPC) and computational geometry in order to chart the most promising methods to address the above-mentioned scientific challenges.

[ Num Opt Wkshp ]

Army researchers teamed with the U.S. Marine Corps to fly and test 3-D printed quadcopter prototypes at the Marine Corps Air Ground Combat Center in 29 Palms, California, recently.

[ CCDC ARL ]

Lex Fridman’s Artificial Intelligence podcast featuring Rosalind Picard.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per speaks with Christian Guttmann, executive director of the Nordic Artificial Intelligence Institute.

Christian Guttmann talks about AI and wanting to understand intelligence enough to recreate it. Christian has been focusing on AI in healthcare and has recently started communicating the opportunities and challenges of artificial intelligence to the general public. This is something that the host, Per Sjöborg, is also very passionate about. We also get to hear about the Nordic AI institute and the work it does to inform all parts of society about AI.

[ Robots in Depth ]

#435224 Can AI Save the Internet from Fake News?

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

(The doctored video, “Imagine this…” (2019) by artist Bill Posters, was shared on Instagram on June 7, 2019, and made with VDR technology by Canny AI.)

Scientists are scrambling for solutions on how to combat deepfakes, while at the same time others are continuing to refine the techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.
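The NYU system learns a watermark that survives normal camera processing; purely as a cartoon of the verify-at-capture idea, here is a keyed-digest sketch. Unlike a learned watermark, it breaks on any re-encoding, and the key handling is entirely invented.

```python
import hashlib
import hmac

CAMERA_KEY = b"per-device-secret"   # hypothetical key provisioned in the camera

def sign_at_capture(pixel_bytes: bytes) -> str:
    """At capture time, the camera attaches a keyed digest of the pixels."""
    return hmac.new(CAMERA_KEY, pixel_bytes, hashlib.sha256).hexdigest()

def is_untampered(pixel_bytes: bytes, tag: str) -> bool:
    """Anyone holding the key can later check whether the pixels were altered."""
    return hmac.compare_digest(sign_at_capture(pixel_bytes), tag)

original = b"\x10\x20\x30\x40"          # stand-in for raw sensor data
tag = sign_at_capture(original)
print(is_untampered(original, tag))              # True
print(is_untampered(original + b"\x00", tag))    # False: pixels were edited
```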

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black and white films to color, though it also conceded that the technology could be used to develop deepfakes.

Words Matter with Fake News
While the current spotlight is on how to combat video and image manipulation, a prolonged trench warfare on fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t the “fake news” label some use as a knee-jerk reaction to fact-based reporting that happens to be less than flattering to its subject. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
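As a minimal sketch of this feature-based approach, the toy classifier below counts a few of the cues Mihalcea describes (hyperbole rate, sentence length, punctuation) and fits an off-the-shelf model. The word list, feature set, and four-example “dataset” are placeholders, not the U-M team’s actual system.

```python
import re
from sklearn.linear_model import LogisticRegression

HYPE = {"overwhelming", "extraordinary", "shocking", "unbelievable"}

def features(text):
    words = re.findall(r"[a-z']+", text.lower())
    sentences = max(1, len(re.findall(r"[.!?]", text)))
    return [
        sum(w in HYPE for w in words) / max(1, len(words)),  # hyperbole rate
        len(words) / sentences,                              # avg sentence length
        text.count("!") / sentences,                         # punctuation cue
    ]

# Placeholder training data; a real system would use labeled news corpora.
texts = [
    ("Shocking! Overwhelming evidence reveals an unbelievable scandal!", 1),
    ("Extraordinary cure works instantly! Doctors shocked!", 1),
    ("The committee published its quarterly report on Tuesday.", 0),
    ("Officials confirmed the budget figures after a routine audit.", 0),
]
X = [features(t) for t, _ in texts]
y = [label for _, label in texts]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("Unbelievable! This shocking trick is overwhelming!")]))
```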

AI Versus AI
While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously-generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.

It performed almost as well against fake news created by a powerful new text-generation system called GPT-2, built by OpenAI, a nonprofit research lab co-founded by Elon Musk, classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it initially released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, where the user types in a short prompt and GPT-2 bangs out a short story or poem based on the snippet of text.
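To illustrate the “strong generators are strong detectors” intuition in miniature, the sketch below trains a tiny bigram model and scores text by how predictable the model finds it; text that echoes the generator’s own habits scores as suspiciously predictable. This is a cartoon of the idea, not Grover’s architecture.

```python
from collections import Counter, defaultdict
import math

def train_bigram(corpus_tokens):
    counts = defaultdict(Counter)
    for a, b in zip(corpus_tokens, corpus_tokens[1:]):
        counts[a][b] += 1
    return counts

def avg_log_prob(tokens, counts, vocab_size, alpha=1.0):
    """Average per-token log-probability under the bigram model
    (add-alpha smoothing). Higher = more predictable to this model."""
    total = 0.0
    for a, b in zip(tokens, tokens[1:]):
        total += math.log(
            (counts[a][b] + alpha) / (sum(counts[a].values()) + alpha * vocab_size)
        )
    return total / max(1, len(tokens) - 1)

corpus = "the market will grow and the market will recover".split()
counts = train_bigram(corpus)
vocab = len(set(corpus))

machine_like = "the market will grow".split()    # echoes the model's habits
human_like = "regulators shrugged at tulip futures".split()
print(avg_log_prob(machine_like, counts, vocab) >
      avg_log_prob(human_like, counts, vocab))   # True: machine text is "too predictable"
```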

No Silver AI Bullet
While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is developing different detectors using elements of deep learning and natural language processing, among others. He explained that the Logically models analyze information based on a three-pronged approach (a toy sketch of combining such signals follows the list):

Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
Content: The AI scans articles for hundreds of known indicators typically found in misinformation.
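Here is a minimal sketch of how such signals might be combined into a single triage decision, with a band reserved for human review. The weights, thresholds, and field names are invented for illustration, not Logically’s actual pipeline.

```python
def credibility_score(publisher_score, network_score, content_score,
                      weights=(0.4, 0.3, 0.3)):
    """Combine the three prongs (each scored 0 = suspect, 1 = clean)
    into one weighted credibility estimate."""
    signals = (publisher_score, network_score, content_score)
    return sum(w * s for w, s in zip(weights, signals))

def triage(article_signals, low=0.35, high=0.75):
    score = credibility_score(*article_signals)
    if score >= high:
        return "likely reliable"
    if score <= low:
        return "likely misinformation"
    return "send to human fact-checker"   # the human layer in the pipeline

print(triage((0.9, 0.8, 0.7)))  # likely reliable
print(triage((0.2, 0.4, 0.5)))  # send to human fact-checker
```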

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”

The company released a consumer app in India back in February, just before that country’s election cycle, which proved a “great testing ground” to refine its technology for the next app release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

Image Credit: Dennis Lytyagin / Shutterstock.com
