
#435224 Can AI Save the Internet from Fake News?

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

The video, titled ‘Imagine this…’ (2019), was created by artist Bill Posters using VDR technology from Canny AI and shared on Instagram on June 7, 2019.

Scientists are scrambling for solutions on how to combat deepfakes, while at the same time others are continuing to refine the techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.
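The NYU system trains a neural network into the camera’s processing pipeline, and its details go well beyond a blog post. But the underlying idea of a fragile watermark, one that any tampering destroys, can be sketched in a few lines of Python. Everything below (the pixel list, the 64-bit mark, the SHA-256 checksum) is an invented toy, not the NYU design:

```python
import hashlib

def _content_hash_bits(pixels, n_bits):
    # Hash only the upper 7 bits of each pixel, so writing the watermark
    # into the least-significant bits does not disturb the hash itself.
    content = bytes(p & 0xFE for p in pixels)
    digest = hashlib.sha256(content).digest()
    bits = []
    for byte in digest:
        for i in range(8):
            bits.append((byte >> i) & 1)
            if len(bits) == n_bits:
                return bits
    return bits

def embed_watermark(pixels, n_bits=64):
    """Write a content checksum into the LSBs of the first pixels."""
    bits = _content_hash_bits(pixels, n_bits)
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked

def verify_watermark(pixels, n_bits=64):
    """Recompute the checksum; any edit to the content breaks the match."""
    expected = _content_hash_bits(pixels, n_bits)
    embedded = [pixels[i] & 1 for i in range(n_bits)]
    return expected == embedded
```

Because the checksum is derived from the image content itself, editing even a single pixel’s upper bits invalidates the embedded mark.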

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black and white films to color, though it also conceded that the technology could be used to develop deepfakes.

Words Matter with Fake News
While the current spotlight is on how to combat video and image manipulation, a prolonged trench warfare on fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t the fake news that some invoke as a knee-jerk reaction to fact-based reporting that happens to be unflattering to its subject. Rather, fake news is deliberately created misinformation spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
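As a rough illustration of this cue-based approach, one could hand-code a few of the signals Mihalcea describes: hyperbolic word choice, sentence length, and emotive punctuation. To be clear, the U-M algorithm was learned from labeled data; the word list, thresholds, and scoring below are invented for the sketch:

```python
HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def linguistic_features(text):
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "hyperbole_rate": sum(w in HYPERBOLE for w in words) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "exclamations": text.count("!"),
    }

def fake_news_score(text):
    """Crude 0-1 suspicion score built from three linguistic cues."""
    f = linguistic_features(text)
    score = 0.0
    if f["hyperbole_rate"] > 0.02:
        score += 1  # hyped word choice
    if f["avg_sentence_len"] < 12:
        score += 1  # simpler, shorter sentences
    if f["exclamations"] > 0:
        score += 1  # emotive punctuation
    return score / 3
```

A breathless, exclamation-heavy snippet scores near 1, while dry procedural prose scores near 0, which is the intuition behind the real model’s features.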

AI Versus AI
While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously-generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.
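Grover itself is a large neural network, but the “strong generators are strong detectors” principle can be illustrated with a deliberately tiny stand-in: a generator that flags text it finds suspiciously predictable under its own statistics. The bigram model, smoothing constant, and threshold below are all invented:

```python
import math
from collections import Counter, defaultdict

class BigramModel:
    """Toy stand-in for a neural text generator."""
    def __init__(self, corpus):
        self.counts = defaultdict(Counter)
        words = corpus.split()
        for a, b in zip(words, words[1:]):
            self.counts[a][b] += 1

    def avg_log_prob(self, text):
        words = text.split()
        total, n = 0.0, 0
        for a, b in zip(words, words[1:]):
            ctx = self.counts[a]
            p = (ctx[b] + 1) / (sum(ctx.values()) + 1000)  # add-one smoothing
            total += math.log(p)
            n += 1
        return total / max(n, 1)

def looks_machine_generated(model, text, threshold=-4.0):
    # Text the generator itself finds highly predictable is suspect.
    return model.avg_log_prob(text) > threshold
```

A model trained on its own characteristic phrasing assigns that phrasing high probability, while genuinely human text registers as far more surprising.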

It performed almost as well against fake news created by GPT-2, a powerful new text-generation system built by OpenAI, a nonprofit research lab co-founded by Elon Musk, correctly classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, where the user types in a short prompt and GPT-2 bangs out a short story or poem based on the snippet of text.

No Silver AI Bullet
While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is developing detectors using elements of deep learning and natural language processing, among other techniques. He explained that Logically’s models analyze information based on a three-pronged approach.

Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
Content: The AI scans articles for hundreds of known indicators typically found in misinformation.

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”
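Logically has not published its models, but the three-pronged scoring Williams describes, with a human reviewer in the loop for borderline cases, might look roughly like this. The weights and thresholds are invented for illustration:

```python
def triage(publisher_score, network_score, content_score,
           flag_threshold=0.7, review_band=(0.4, 0.7)):
    """Blend the three signals; route borderline cases to a human."""
    # Each score is a 0-1 suspicion rating; the weights are invented.
    combined = 0.3 * publisher_score + 0.3 * network_score + 0.4 * content_score
    if combined >= flag_threshold:
        return "flag_as_misinformation"
    low, high = review_band
    if low <= combined < high:
        return "send_to_human_reviewer"  # the human layer in the pipeline
    return "pass"
```

The point of the middle band is exactly Williams’ caveat: the algorithms give “relatively decent indications,” and the ambiguous cases go to people.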

The company released a consumer app in India in February, just before that country’s election cycle, which proved a “great testing ground” to refine its technology for the next app release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

Image Credit: Dennis Lytyagin / Shutterstock.com

Posted in Human Robots

#435213 Robot traps ball without coding

Dr. Kee-hoon Kim's team at the Center for Intelligent & Interactive Robotics of the Korea Institute of Science and Technology (KIST) developed a way of teaching “impedance-controlled robots” through human demonstrations using surface electromyograms (sEMG) of muscles, and succeeded in teaching a robot to trap a dropped ball like a soccer player. A surface electromyogram is an electric signal produced during muscle activation that can be picked up on the surface of the skin.


#435167 A Closer Look at the Robots Helping Us ...

Buck Rogers had Twiki. Luke Skywalker palled around with C-3PO and R2-D2. And astronauts aboard the International Space Station (ISS) now have their own robotic companions in space—Astrobee.

A pair of the cube-shaped robots were launched to the ISS during an April re-supply mission and are currently being commissioned for use on the space station. The free-flying space robots, dubbed Bumble and Honey, are the latest generation of robotic machines to join the human crew on the ISS.

Exploration of the solar system and beyond will require autonomous machines that can assist humans with numerous tasks—or go where we cannot. NASA has said repeatedly that robots will be instrumental in future space missions to the moon, Mars, and even to the icy moon Europa.

The Astrobee robots will specifically test robotic capabilities in zero gravity, replacing the SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellite) robots that have been on the ISS for more than a decade to test various technologies ranging from communications to navigation.

The 18-sided SPHERES robots, each about the size of a volleyball or an oversized Dungeons and Dragons die, use CO2-based cold-gas thrusters for movement and a series of ultrasonic beacons for orientation. The cube-shaped Astrobee robots, on the other hand, propel themselves autonomously around the interior of the ISS using electric fans and navigate with six cameras.

The modular design of the Astrobee robots means they are highly plug-and-play, capable of being reconfigured with different hardware modules. The robots’ software is also open-source, encouraging scientists and programmers to develop and test new algorithms and features.

And, yes, the Astrobee robots will be busy as bees once they are fully commissioned this fall, with experiments planned to begin next year. Scientists hope to learn more about how robots can assist space crews and perform caretaking duties on spacecraft.

Robots Working Together
The Astrobee robots are expected to be joined by a familiar “face” on the ISS later this year—the humanoid robot Robonaut.

Robonaut, also known as R2, was the first US-built robot on the ISS. It joined the crew back in 2011 without legs, which were added in 2014. However, the installation never entirely worked, as R2 experienced power failures that eventually led to its return to Earth last year to fix the problem. If all goes as planned, the space station’s first humanoid robot will return to the ISS to lend a hand to the astronauts and the new robotic arrivals.

In particular, NASA is interested in how the two different robotic platforms can complement each other, with an eye toward outfitting the agency’s proposed lunar orbital space station with various robots that can supplement a human crew.

“We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations,” Astrobee technical lead Trey Smith of the NASA Intelligent Robotics Group told IEEE Spectrum. “And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.”

While the focus on R2 has been to test its capabilities in zero gravity and to use it for mundane or dangerous tasks in space, the technology enabling the humanoid robot has proven to be equally useful on Earth.

For example, R2 has amazing dexterity for a robot, with sensors, actuators, and tendons comparable to the nerves, muscles, and tendons in a human hand. Based on that design, engineers are working on a robotic glove that can help factory workers, for instance, do their jobs better while reducing the risk of repetitive injuries. R2 has also inspired development of a robotic exoskeleton for both astronauts in space and paraplegics on Earth.

Working Hard on Soft Robotics
While innovative and technologically sophisticated, Astrobee and Robonaut are typical robots in that neither one would do well in a limbo contest. In other words, most robots are limited in their flexibility and agility based on current hardware and materials.

A subfield of robotics known as soft robotics involves developing robots with highly pliant materials that mimic biological organisms in how they move. Scientists at NASA’s Langley Research Center are investigating how soft robots could help with future space exploration.

Specifically, the researchers are looking at a series of properties to understand how actuators—components responsible for moving a robotic part, such as Robonaut’s hand—can be built and used in space.

The team first 3D prints a mold and then pours a flexible material like silicone into it. The resulting actuator moves as the air bladders, or chambers, inside it expand and contract using nothing but air.

Some of the first applications of soft robotics sound more tool-like than R2-D2-like. For example, two soft robots could connect to produce a temporary shelter for astronauts on the moon or serve as an impromptu wind shield during one of Mars’ infamous dust storms.

The idea is to use soft robots in situations that are “dangerous, dirty, or dull,” according to Jack Fitzpatrick, a NASA intern working on the soft robotics project at Langley.

Working on Mars
Of course, space robots aren’t only designed to assist humans. In many instances, they are the only option to explore even relatively close celestial bodies like Mars. Four American-made robotic rovers have been used to investigate the fourth planet from the sun since 1997.

Opportunity is perhaps the most famous, covering more than 28 miles of terrain across Mars over 15 years. A dust storm knocked it out of commission last year, with NASA officially ending the mission in February.

However, the biggest and baddest of the Mars rovers, Curiosity, is still crawling across the Martian surface, sending back valuable data since 2012. The car-size robot carries 17 cameras, a laser to vaporize rocks for study, and a drill to collect samples. It is on the hunt for signs of biological life.

The next year or two could see a virtual traffic jam of robots to Mars. NASA’s Mars 2020 Rover is next in line to visit the Red Planet, sporting scientific gadgets like an X-ray fluorescence spectrometer for chemical analyses and ground-penetrating radar to see below the Martian surface.

This diagram shows the instrument payload for the Mars 2020 mission. Image Credit: NASA.
Meanwhile, the Europeans have teamed with the Russians on a rover called Rosalind Franklin, named after a famed British chemist, that will drill down into the Martian ground for evidence of past or present life as soon as 2021.

The Chinese are also preparing to begin searching for life on Mars using robots as soon as next year, as part of the country’s Mars Global Remote Sensing Orbiter and Small Rover program. The mission is scheduled to be the first in a series of launches that would culminate with bringing samples back from Mars to Earth.

Perhaps there is no more famous utterance in the universe of science fiction as “to boldly go where no one has gone before.” However, the fact is that human exploration of the solar system and beyond will only be possible with robots of different sizes, shapes, and sophistication.

Image Credit: NASA.


#434843 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
OpenAI’s Dota 2 AI Steamrolls World Champion e-Sports Team With Back-to-Back Victories
Nick Statt | The Verge
“…[OpenAI cofounder and CEO, Sam Altman] tells me there probably does not exist a video game out there right now that a system like OpenAI Five can’t eventually master at a level beyond human capability. For the broader AI industry, mastering video games may soon become passé, simple table stakes required to prove your system can learn fast and act in a way required to tackle tougher, real-world tasks with more meaningful benefits.”

ROBOTICS
Boston Dynamics Debuts the Production Version of SpotMini
Brian Heater, Catherine Shu | TechCrunch
“SpotMini is the first commercial robot Boston Dynamics is set to release, but as we learned earlier, it certainly won’t be the last. The company is looking to its wheeled Handle robot in an effort to push into the logistics space. It’s a super-hot category for robotics right now. Notably, Amazon recently acquired Colorado-based start up Canvas to add to its own arm of fulfillment center robots.”

NEUROSCIENCE
Scientists Restore Some Brain Cell Functions in Pigs Four Hours After Death
Joel Achenbach | The Washington Post
“The ethicists say this research can blur the line between life and death, and could complicate the protocols for organ donation, which rely on a clear determination of when a person is dead and beyond resuscitation.”

BIOTECH
How Scientists 3D Printed a Tiny Heart From Human Cells
Yasmin Saplakoglu | Live Science
“Though the heart is much smaller than a human’s (it’s only the size of a rabbit’s), and there’s still a long way to go until it functions like a normal heart, the proof-of-concept experiment could eventually lead to personalized organs or tissues that could be used in the human body…”

SPACE
The Next Clash of Silicon Valley Titans Will Take Place in Space
Luke Dormehl | Digital Trends
“With bold plans that call for thousands of new satellites being put into orbit and astronomical costs, it’s going to be fascinating to observe the next phase of the tech platform battle being fought not on our desktops or mobile devices in our pockets, but outside of Earth’s atmosphere.”

FUTURE HISTORY
The Images That Could Help Rebuild Notre-Dame Cathedral
Alexis C. Madrigal | The Atlantic
“…in 2010, [Andrew] Tallon, an art professor at Vassar, took a Leica ScanStation C10 to Notre-Dame and, with the assistance of Columbia’s Paul Blaer, began to painstakingly scan every piece of the structure, inside and out. …Over five days, they positioned the scanner again and again—50 times in all—to create an unmatched record of the reality of one of the world’s most awe-inspiring buildings, represented as a series of points in space.”

AUGMENTED REALITY
Mapping Our World in 3D Will Let Us Paint Streets With Augmented Reality
Charlotte Jee | MIT Technology Review
“Scape wants to use its location services to become the underlying infrastructure upon which driverless cars, robotics, and augmented-reality services sit. ‘Our end goal is a one-to-one map of the world covering everything,’ says Miller. ‘Our ambition is to be as invisible as GPS is today.’”

Image Credit: VAlex / Shutterstock.com


#434827 AI and Robotics Are Transforming ...

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief—how we can prevent disasters in the first place and get help to victims during that first golden hour, when immediate relief can save lives.

Here are the three areas of greatest impact:

AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply

Let’s dive in!

Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern applies AI to analytical disaster assessment and calculated damage estimates.

Partnering with Los Angeles, San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor the site-specific impacts not only of climate change but of each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with records of city infrastructure and former disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and identifying structures in need of reinforcement.

Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.

And as IoT-connected devices and intelligent hardware continue to boom, a blooming trillion-sensor economy will only serve to amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are proving just as formidable in humanitarian crises.

One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”

Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.

Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.

One of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable in the crowdsourcing of witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks—all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future….

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.

As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
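A minimal sketch of that non-hierarchical relaying, assuming each phone simply forwards a broadcast to every neighbor within radio range (the device names and topology are hypothetical):

```python
from collections import deque

def flood(peers, origin):
    """Relay a broadcast through a mesh: every device forwards to its
    direct radio neighbors, with no central access point involved."""
    delivered = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in peers.get(node, []):
            if neighbor not in delivered:
                delivered.add(neighbor)
                queue.append(neighbor)
    return delivered

# Hypothetical phones within radio range of one another.
mesh = {
    "phone_a": ["phone_b"],
    "phone_b": ["phone_a", "phone_c", "phone_d"],
    "phone_c": ["phone_b"],
    "phone_d": ["phone_b", "phone_e"],
    "phone_e": ["phone_d"],
}
```

Even though phone_a and phone_e are never in direct range, a broadcast from either one reaches the whole mesh by hopping device to device.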

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which armed attacks break out across disjointed urban districts. Each cluster of eyewitnesses and at-risk civilians broadcasts an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
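That hypothetical pipeline (verify crowdsourced reports against sensor data, rank by urgency, then dispatch) maps naturally onto a priority queue. All of the report fields, sensor checks, and locations below are invented:

```python
import heapq

def verify(report, sensor_alerts):
    # Cross-reference a crowdsourced report against sensor data; here we
    # simply require a matching sensor alert at the reported location.
    return report["location"] in sensor_alerts

def dispatch_queue(reports, sensor_alerts):
    """Return verified supply requests in dispatch order, most urgent first."""
    queue = []
    for r in reports:
        if verify(r, sensor_alerts):
            # heapq is a min-heap, so negate urgency to pop highest first.
            heapq.heappush(queue, (-r["urgency"], r["location"], r["supply"]))
    order = []
    while queue:
        _, location, supply = heapq.heappop(queue)
        order.append((location, supply))
    return order
```

Unverified reports never enter the queue, and the most urgent verified request is always the next drone payload assigned.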

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Initially designed to perform spectral inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity, range of motion, and even a gripping function with enhanced dexterity.

Cheetah III and future versions are aimed at saving lives in almost any environment.

And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which led to three meltdowns at the nation’s Fukushima nuclear power plant, a robot successfully examined the reactor’s fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.

As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also help avoid population displacement and paralyzing damage to our natural environment before disaster has the chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long-term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.

Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.
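The researchers’ scheme is far more sophisticated, but its core idea, relaying data over the fewest hops from an isolated device to a base station, can be sketched as a breadth-first search over hypothetical radio links:

```python
from collections import deque

def relay_route(links, source, base_station):
    """Breadth-first search for the fewest-hop relay chain from a
    survivor's device, through drone relays, to a base station."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == base_station:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]  # source first, base station last
        for nxt in links.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # no relay chain reaches the base station

# Hypothetical radio links between a phone, relay drones, and a base.
links = {
    "survivor_phone": ["drone_1"],
    "drone_1": ["survivor_phone", "drone_2", "drone_3"],
    "drone_2": ["drone_1", "base_station"],
    "drone_3": ["drone_1"],
    "base_station": ["drone_2"],
}
```

Each drone in the returned chain acts as one “hop” in the improvised broadband link between the survivor and rescuers.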

And as shown during Houston’s Hurricane Harvey, drones can provide scores of predictive intel on everything from future flooding to damage estimates.

Among multiple others, a team led by Dr. Robin Murphy, a Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small quadcopters to military-grade unmanned planes. These missions were critical not only for monitoring levee infrastructure but also for identifying those left behind by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, the company’s gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or self-flown in accordance with real-time disaster-zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens up abundant opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable in humanitarian relief.

Concluding Thoughts
Perhaps one of the most consequential and impactful applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Arcansel / Shutterstock.com
