#439543 How Robots Helped Out After the Surfside ...

Editor's Note: Along with Robin Murphy, the authors of this article include David Merrick, Justin Adams, Jarrett Broder, Austin Bush, Laura Hart, and Rayne Hawkins. This team is with Florida State University's Disaster Incident Response Team, which was in Surfside for 24 days at the request of Florida US&R Task Force 1 (Miami-Dade Fire Rescue Department).

On June 24, 2021, at 1:25AM, portions of the 12-story Champlain Towers South condominium in Surfside, Florida collapsed, killing 98 people and injuring 11, making it the third-deadliest building collapse in US history. The life-saving and mitigation response phase, during which responders from local, state, and federal agencies searched for survivors, spanned June 24 to July 7, 2021. This article summarizes what is known about the use of robots at Champlain Towers South and offers insights into challenges for unmanned systems.

Small unmanned aerial systems (drones) were used immediately upon arrival by the Miami-Dade Fire Rescue (MDFR) Department to survey the roughly 2.68-acre affected area. Drones, such as the DJI Mavic 2 Enterprise Dual with a spotlight payload and thermal imaging, flew in the dark to determine the scope of the collapse and search for survivors. Regional and state emergency management drone teams were requested later that day to supplement the effort of flying day and night for tactical life-saving operations and to add flights for strategic operations to support managing the overall response.

View of a Phantom 4 Pro in use for mapping the collapse on July 2, 2021. Two other drones were also in the airspace conducting other missions but are not visible. Photo: Robin R. Murphy
The teams brought at least 9 models of rotorcraft drones, including the DJI Mavic 2 Enterprise Dual, DJI Mavic 2 Enterprise Advanced, DJI Mavic 2 Zoom, DJI Mavic Mini, DJI Phantom 4 Pro, DJI Matrice 210, Autel Dragonfish, and Autel EVO II Pro, plus a tethered Fotokite drone. The picture above shows a DJI Phantom 4 Pro in use, with one of the multiple cranes on the site visible. The number of flights for tactical operations was not recorded, but drones were flown for 304 missions for strategic operations alone, making the Surfside collapse the largest and longest recorded use of drones for a disaster, exceeding the records set by Hurricane Harvey (112 missions) and Hurricane Florence (260).

Unmanned ground bomb squad robots were reportedly used on at least two occasions in the standing portion of the structure during the response: once to investigate and document the garage, and once on July 9 to hold a repeater for a drone flying in the standing portion of the garage. Details about the ground robots are not yet available, and there may have been more missions, though not on the same order of magnitude as the drone use. Bomb squad robots tend to be too large for use in areas other than the standing portions of the collapse.

This article concentrates on the use of the drones for tactical and strategic operations, as the authors were directly involved in those operations, and offers a preliminary analysis of the lessons learned. The full details of the response will not be available for many months due to the nature of an active investigation into the causes of the collapse and the privacy of the victims and their families.
Drone Use for Tactical Operations
Tactical operations were carried out primarily by MDFR, with other drone teams supporting as needed to meet the workload. Drones were first used by the MDFR drone team, which arrived within minutes of the collapse as part of the escalating calls. The drone effort started with night operations for direct life-saving and mitigation activities. Small DJI Mavic 2 Enterprise Dual drones with thermal camera and spotlight payloads were used for general situation awareness, helping responders understand the extent of the collapse beyond what could be seen from the street. The built-in thermal imager lacked the resolution to show detail, since much of the material was at the same temperature and heat emissions appeared fuzzy. The spotlight with the standard visible-light camera was more effective, though the field of view was constricted. The drones were also used to look for survivors or trapped victims, to help identify safety hazards to responders, and to provide task force leaders with overwatch of the responders. During daylight, DJI Mavic 2 Zoom drones were added because of their higher-resolution zoom cameras. When fires started in the rubble, drones streaming video to bucket-truck operators helped optimize the placement of water. Drones were also used to locate civilians entering the restricted area or flying their own drones to take pictures.

As the response evolved, the use of drones expanded to missions where the drones would fly in close proximity to structures and objects, fly indoors, and physically interact with the environment. For example, drones were used to read license plates to help identify residents, search for pets, and document belongings inside parts of the standing structure for families. In a novel use of drones for physical interaction, MDFR squads flew drones to attempt to find and pick up items of immense value to survivors in the standing portion of the structure. Before the demolition of the standing portion of the tower, MDFR used a drone to remove an American flag that had been placed on the structure during the initial search.

Drone Use for Strategic Operations

An orthomosaic of the collapse constructed from imagery collected by a drone on July 1, 2021.
Strategic operations were carried out by the Disaster Incident Response Team (DIRT) from the Florida State University Center for Disaster Risk Policy. The DIRT team is a state of Florida asset and was requested by Florida Task Force 1 when it was activated later on June 24. FSU supported tactical operations but was solely responsible for collecting and processing imagery for use in managing the response. This data consisted primarily of orthomosaic maps (a single high-resolution image of the collapse, created by stitching together individual high-resolution images, as in the image above) and digital elevation maps (created using structure from motion, below).

Digital elevation map constructed from imagery collected by a drone on June 27, 2021. Photo: Robin R. Murphy
These maps were collected every two to four hours during daylight, with FSU flying an average of 15.75 missions per day for the first two weeks of the response. The latest orthomosaic maps were downloaded at the start of a shift by the tactical responders for use as base maps on their mobile devices. In addition, a 3D reconstruction of the state of the collapse on July 4 was flown the afternoon before the standing portion was demolished, shown below.

GeoCam 3D reconstruction of the collapse on July 4, 2021. Photo: Robin R. Murphy
The mapping functions are notable because they require specialized software for data collection and post-processing, and because the speed of post-processing relied on wireless connectivity. In order to stitch and fuse images without gaps or major misalignments, dedicated software packages generate flight paths and autonomously fly the drone and trigger image capture so that coverage of the collapse and overlap between images are sufficient.
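As an illustration of what such flight-planning packages compute, the sketch below derives the image footprint, flight-line spacing, and shutter-trigger spacing from altitude and the requested overlap, then lays out a lawnmower pattern. It is a minimal, self-contained Python example; the function names are our own, and the camera numbers in the usage note are merely representative of a Phantom 4 Pro-class sensor, not taken from any particular software package.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    sensor_width_mm: float   # physical sensor width
    focal_mm: float          # lens focal length
    image_width_px: int
    image_height_px: int

def frange(start: float, stop: float, step: float):
    """Yield start, start+step, ... up to and including stop."""
    while start <= stop:
        yield start
        start += step

def ground_footprint_m(cam: Camera, altitude_m: float):
    """Ground area (width, height) in meters covered by one nadir image."""
    gsd = (cam.sensor_width_mm * altitude_m) / (cam.focal_mm * cam.image_width_px)  # meters/pixel
    return gsd * cam.image_width_px, gsd * cam.image_height_px

def grid_waypoints(area_w_m: float, area_h_m: float, cam: Camera, altitude_m: float,
                   front_overlap: float = 0.75, side_overlap: float = 0.65):
    """Lawnmower-pattern trigger points giving the requested image overlap."""
    foot_w, foot_h = ground_footprint_m(cam, altitude_m)
    line_spacing = foot_w * (1 - side_overlap)   # distance between flight lines
    shot_spacing = foot_h * (1 - front_overlap)  # distance between shutter triggers
    waypoints, line = [], 0
    for x in frange(0.0, area_w_m, line_spacing):
        ys = list(frange(0.0, area_h_m, shot_spacing))
        if line % 2:                             # alternate direction each line
            ys.reverse()
        waypoints += [(x, y) for y in ys]
        line += 1
    return waypoints
```

At 100 m altitude with these overlaps, a Phantom 4 Pro-class camera covers roughly a 150 m by 100 m footprint per image, so flight lines end up about 52 m apart with a trigger every 25 m; higher overlap means denser lines and more images to stitch, but fewer gaps and misalignments in the resulting orthomosaic.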

Coordination of Drones on Site
The aerial assets were loosely coordinated through social media. All drone teams and Federal Aviation Administration (FAA) officials shared a WhatsApp group chat managed by MDFR. WhatsApp offered ease of use, compatibility with everyone's smartphones and mobile devices, and ease of adding pilots. Ease of adding pilots was important because many were not from MDFR and thus would not be in any personnel-oriented coordination system. The pilots did not have physical meetings or briefings as a whole, though the tactical and strategic operations teams did share a common space (nicknamed “Drone Zone”) while the National Institute of Standards and Technology teams worked from a separate staging location. If a pilot was approved by the MDFR drone captain, who served as the “air boss,” they were invited to the WhatsApp group chat and could then begin flying immediately without physically meeting the other pilots.

The teams flew concurrently and independently without rigid, pre-specified altitude or area restrictions. A team would post which area of the collapse they were taking off to fly, and at what altitude, and then post again when they landed. The easiest solution was for the pilots to be aware of each other's drones and adjust their missions, pause, or temporarily defer flights accordingly. If a pilot forgot to post, someone would send a teasing chat eliciting a rapid apology.
Incursions by civilian manned and unmanned aircraft into the restricted airspace did occur. If FAA observers or other pilots saw a drone flying that was not accounted for in the chat (e.g., five drones were visible over the area but only four were posted), or if a drone pilot saw a drone in an unexpected area, they would post a query asking if someone had forgotten to post or update a flight. If the drone remained unaccounted for, the FAA would assume that a civilian drone had violated the temporary flight restrictions and search the surrounding area for the offending pilot.
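The reconciliation described above, comparing drones observed aloft against flights posted in the chat, amounts to a simple set comparison. The sketch below is a hypothetical illustration; the record format and function name are our own, not from any system actually used at Surfside.

```python
def airspace_check(posted_flights, observed_count):
    """Flag a possible airspace incursion by comparing how many drones
    are observed over the site against how many pilots have posted as
    airborne (takeoff posted, no landing posted) in the group chat."""
    airborne = {f["pilot"] for f in posted_flights if f.get("landed_at") is None}
    unaccounted = observed_count - len(airborne)
    if unaccounted > 0:
        return f"query: {unaccounted} drone(s) visible but not posted"
    return "all drones accounted for"
```

With one flight posted as airborne and two drones visible, the check prompts the kind of query that, at Surfside, usually elicited either a quick apology from a forgetful pilot or an FAA search for an intruding civilian operator.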
Preliminary Lessons Learned
While the drone data and performance are still being analyzed, some lessons learned have emerged that may be of value to the robotics, AI, and engineering communities.
Tactical and strategic operations during the response phase favored small, inexpensive, easy-to-carry platforms with cameras supporting coarse structure from motion rather than larger, more expensive lidar systems. The added accuracy of lidar was not needed for those missions, though the greater accuracy and resolution of such systems were valuable for the forensic structural analysis. For tactical and strategic operations, the benefits of lidar were not worth the capital costs and logistical burden. Indeed, general-purpose consumer/prosumer drones that could fly day or night, indoors and outdoors, and for both mapping and first-person-view missions were highly preferred over specialized drones. The reliability of a drone was another major factor in choosing a specific model to field, again favoring consumer/prosumer drones, which typically have hundreds of thousands more hours of flight time than specialized or novel drones. Tethered drones offer some advantages for overwatch, but many tactical missions require a great deal of mobility, and strategic mapping necessitates flying directly over the entire area being mapped.

While small, inexpensive general-purpose drones offered many advantages, they could be further improved for flying at night and indoors. A wider area of lighting would be helpful. So would 360-degree (spherical) obstacle-avoidance coverage for working indoors or at low altitudes, in close proximity to irregular work envelopes, and near people, especially at night. Systems such as the Flyability ELIOS 2 are designed to fly in narrow and highly cluttered indoor areas, but no models were available for the immediate response. Drone camera systems need to be able to look straight up to inspect the underside of structures or ceilings. Mechanisms for determining the accurate GPS location of a pixel in an image, not just the GPS location of the drone, are becoming increasingly desirable.
Other technologies could benefit the enterprise but face challenges. Computer vision/machine learning (CV/ML) for searching for victims in rubble is often mentioned as a possible goal, but a search for victims who are not on the surface of the collapse is not visually directed. The portions of victims that are not covered by rubble are usually camouflaged with gray dust, so searches tend to favor canines using scent. Another challenge for CV/ML methods is the lack of access to training data. Privacy and ethical concerns pose barriers to the research community gaining access to imagery with victims in the rubble, and simulations may not have sufficient fidelity.
The collapse illustrates how informatics, human-computer interaction, and human-robot interaction research can contribute to the effective use of robots during a disaster, and it shows that a response does not follow a strictly centralized, hierarchical command structure and that the agencies and members of the response are not known in advance. Proposed systems must be flexible, robust, and easy to use. Furthermore, it is not clear that responders will accept a totally new software app versus making do with a general-purpose app such as WhatsApp that the majority routinely use for other purposes.
The biggest lesson learned is that robots are helpful and warrant more investment, particularly as many US states are proposing, over cybersecurity concerns, to end purchases of the very drone models that proved so effective. There remains much work to be done by researchers, manufacturers, and emergency management to make these critical technologies more useful for extreme environments. Our current work focuses on creating open-source datasets and documentation and conducting a more thorough analysis to accelerate the process.

Value of Drones
The pervasive use of the drones indicates their implicit value in responding to, and documenting, the disaster. It is difficult to quantify the impact of drones, similar to the difficulty of quantifying the impact of a fire truck on firefighting or of mobile devices in general. Simply put, drones would not have been used beyond a few flights if they were not valuable.
The impact of the drones on tactical operations was immediate: upon arrival, MDFR flew drones to assess the extent of the collapse. Lighting on fire trucks primarily illuminated the street side of the standing portion of the building, while the drones, unrestricted by streets or debris, quickly expanded situation awareness of the disaster. The drones were used to optimize the placement of water to suppress the fires in the debris. The impact of the use of drones for other tactical activities is harder to quantify, but the frequent flights and pilots remaining on stand-by 24/7 indicate their value.
The impact of the drones on strategic operations was also considerable. The data collected by the drones and then processed into 2D maps and 3D models became a critical part of the US&R operations as well as one part of the nascent investigation into why the building failed. During initial operations, DIRT provided 2D maps to the US&R teams four times per day. These maps became the base layers for the mobile apps used on the pile to mark the locations of human remains, structural members of the building, personal effects, or other identifiable information. Updated orthophotos were critical to the accuracy of these reports. The apps running on mobile devices suffered from GPS accuracy issues, often with errors as high as ten meters. By having base imagery that was only hours old, mobile app users were able to 'drag the pin' to a more accurate report location on the pile, simply by visualizing where they were standing relative to fresh UAS imagery. Without this capability, none of the GPS field data would have been of use to US&R teams or to investigators looking at why the structural collapse occurred. In addition to serving as a base layer on mobile applications, the updated map imagery was used in all tactical, operational, and strategic dashboards by the individual US&R teams as well as the FEMA US&R Incident Support Team (IST) on site to assist in the management of the incident.
Aside from the 2D maps and orthophotos, 3D models were created from the drone data and used by structural experts to plan operations, including identifying areas with high probabilities of finding survivors or victims. Three-dimensional data created through post-processing also supported the demand for up-to-date volumetric estimates – how much material was being removed from the pile, and how much remained. These metrics provided clear indications of progress throughout the operations.
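A volumetric estimate of this kind reduces to differencing two co-registered digital elevation maps and summing the per-cell elevation drop. The toy Python version below is our own illustration of the arithmetic, not the software used on site:

```python
def volume_removed_m3(dem_before, dem_after, cell_size_m):
    """Estimate debris volume removed between two surveys by
    differencing per-cell elevations from successive digital
    elevation maps (same grid, same datum), counting only cells
    where the surface dropped."""
    cell_area = cell_size_m ** 2
    removed = 0.0
    for row_before, row_after in zip(dem_before, dem_after):
        for z_before, z_after in zip(row_before, row_after):
            drop = z_before - z_after  # positive where material was removed
            if drop > 0:
                removed += drop * cell_area
    return removed
```

In practice the two elevation maps must share the same grid, datum, and registration; any misalignment between surveys translates directly into volume error, which is one reason the frequent, consistent mapping flights mattered.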
Acknowledgments
Portions of this work were supported by NSF grants IIS-1945105 and CMMI-2140451. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
The authors express their sincere condolences to the families of the victims.

Posted in Human Robots

#439499 Why Robots Can’t Be Counted On to Find ...

On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.

It seems like robots should be ready to help with something like this. But they aren’t.

A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.
JOE RAEDLE/GETTY IMAGES

The picture above shows what the site of the collapse in Florida looks like. It’s highly unstructured, and would pose a challenge for most legged robots to traverse, although you could see a tracked robot being able to manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it’s not necessary or practical to duplicate that functionality with a robot, especially when time is critical.

What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.

To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.

According to Murphy, using robots to explore rubble of collapsed buildings is, for the moment, not possible in any kind of way that could be realistically used on a disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through rubble, and the environment isn’t friendly to very small robots either, since there’s frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication or localization is often impossible, so tethers are required, which solves the comms and power problems but can easily get caught or tangled on obstacles.

Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.

And even if you can build a robot that can successfully burrow its way through rubble, there’s the question of what value it’s able to provide once it gets where it needs to be. Robotic sensing systems are in general not designed for extreme close quarters, and visual sensors like cameras can rapidly get damaged or get so much dirt on them that they become useless. Murphy explains that ideally, a rubble-exploring robot would be able to do more than just locate victims, but would also be able to use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim’s organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it’s like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”

Besides these technical challenges, the other huge part to all of this is that any system that you’d hope to use in the context of rescuing people must be fully mature. It’s obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn’t be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it’s safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we’re not likely to see it deployed in an active rescue.

Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”

The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:

The hose is covered by inclined cilia. Motors with eccentric mass are installed in the cable and excite vibration and cause an up-and-down motion of the cable. The tips of the cilia stick on the floor when the cable moves down and propel the body. Meanwhile, the tips slip against the floor, and the body does not move back when it moves up. A repetition of this process showed that the cable can slowly move in a narrow space of rubble piles.
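The rectified motion the excerpt describes (cilia grip on the down-stroke, slip on the up-stroke) can be captured in a toy one-dimensional model. All parameter values below are illustrative, not taken from the Active Scope Camera publications:

```python
import math

def cilia_crawl_m(cycles: int, vib_amplitude_m: float = 0.002,
                  cilia_angle_deg: float = 30.0) -> float:
    """Toy model of cilia-drive locomotion: on each vibration cycle the
    inclined cilia grip the floor on the down-stroke and push the body
    forward; on the up-stroke they slip, so no ground is lost."""
    # Forward step per cycle: vertical excursion projected through
    # the cilia inclination angle.
    step = vib_amplitude_m * math.tan(math.radians(cilia_angle_deg))
    position = 0.0
    for _ in range(cycles):
        position += step  # down-stroke advances; up-stroke holds
    return position
```

Because each cycle contributes one small fixed step, speed scales with vibration frequency rather than stroke size, which is why the real robot crawled slowly but could still work its way tens of feet into a rubble void.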

“It's quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”

Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.
Photo: Center for Robot-Assisted Search and Rescue

The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”

Robots should be able to help out in the situation happening right now in Florida, and we should be spending more time and effort on research in that direction that could potentially be saving lives. We’re close, but as with so many aspects of practical robotics, it feels like we’ve been close for years. There are systems out there with a lot of potential; they just need the help necessary to cross the gap from research project to a practical, useful system that can be deployed when needed.


#439491 This Week’s Awesome Tech Stories From ...

ROBOTICS
This Robot Made a 100,000-Domino ‘Super Mario Bros.’ Mural in 24 Hours
I. Bonifacic | Engadget
“Created by YouTuber and former NASA engineer Mark Rober, the Dominator is the result of more than five years of work. …[Rober’s team] programmed more than 14,000 lines of code, and outfitted it with components like omnidirectional wheels and 3D-printed funnels to create what Rober says is a ‘friendly robot that’s super good at only one thing: setting up a butt-ton of dominos really, really fast.'”

INNOVATION
I’m Sorry Dave, I’m Afraid I Invented That: Australian Court Finds AI Systems Can Be Recognized Under Patent Law
Josh Taylor | The Guardian
“[A] Federal court judge says allowing artificial intelligence systems, as well as humans, to be inventors is ‘consistent with promoting innovation.’ ‘In my view, an inventor as recognized under the act can be an artificial intelligence system or device,’ [justice Jonathan Beach] said. …[He] said there needed to be a consideration beyond the mere dictionary definition of ‘inventor’ as being a human.”

TRANSPORTATION
Watch Joby Aviation’s Electric Air Taxi Complete a 150-Mile Flight
Andrew J. Hawkins | The Verge
“Granted, it doesn’t sound too impressive, but it’s actually among the longest flights ever performed by an electric aircraft. It’s equivalent to a trip between Seattle and Vancouver, or Los Angeles to San Diego—the type of regional trip conducted hundreds of times a day by regional partners to major airlines. Swapping out those polluting airplanes for a zero-emissions aircraft like Joby’s could be a major step toward the reduction of CO2 emissions.”

COMPUTING
Intel’s Ambitious Plan to Regain Chipmaking Leadership
Will Knight | Wired
“The technology changes laid out in the road map include a new transistor design and a new way of delivering power to a chip, both slated to debut in 2024 under the name Intel20A, which references the sub-nanometer angstrom scale. …Intel is also developing new ways of stacking the components in a chip together that promise improved performance and design flexibility.”

SCIENCE
Eternal Change for No Energy: A Time Crystal Finally Made Real
Natalie Wolchover | Quanta
“…researchers at Google in collaboration with physicists at Stanford, Princeton and other universities say that they have used Google’s quantum computer to demonstrate a genuine ‘time crystal’ for the first time. A novel phase of matter that physicists have strived to realize for many years, a time crystal is an object whose parts move in a regular, repeating cycle, sustaining this constant change without burning any energy.”

ARTIFICIAL INTELLIGENCE
Fast, Efficient Neural Networks Copy Dragonfly Brains
Frances Chance | IEEE Spectrum
“By harnessing the speed, simplicity, and efficiency of the dragonfly nervous system, we aim to design computers that perform these functions faster and at a fraction of the power that conventional systems consume.”

DIGITAL CURRENCY
Why Does the Federal Reserve Need a Digital Currency?
John Detrixhe | Quartz
“As physical cash goes away, so do the properties that make it special. …Is it time for America’s central bank to create a digital dollar? Congress is asking. And in hearings on July 27, a group of experts laid out their reasons why the Federal Reserve needs to go digital. Here are a few of their arguments.”

Image Credit: Scott Webb / Unsplash


#439431 What Will Education be Like in the ...

Image by Mediamodifier from Pixabay
The field of education is changing under the influence of technology and artificial intelligence. Old educational methods are already being transformed today and may lose their relevance in the future. In a few years, your teacher may be a computer program. And in the distant future, education will be like …

The post What Will Education be Like in the Future? appeared first on TFOT.


#439420 This Week’s Awesome Tech Stories From ...

COMPUTING
Tapping Into the Brain to Help a Paralyzed Man Speak
Pam Belluck | The New York Times
“He has not been able to speak since 2003, when he was paralyzed at age 20 by a severe stroke after a terrible car crash. Now, in a scientific milestone, researchers have tapped into the speech areas of his brain—allowing him to produce comprehensible words and sentences simply by trying to say them. When the man, known by his nickname, Pancho, tries to speak, electrodes implanted in his brain transmit signals to a computer that displays his intended words on the screen.”

TRANSPORTATION
This Tiny, $6,800 Car Runs on Solar Power
Adele Peters | Fast Company
“The Squad, a new urban car from an Amsterdam-based startup, is barely bigger than a bicycle: Parked sideways, up to four of the vehicles can fit in a standard parking spot. The electric two-seater’s tiny size is one reason that it doesn’t use much energy—and in a typical day of city driving, it can run entirely on power from a solar panel on its own roof. A swappable battery provides extra power when needed.”

3D PRINTING
World’s First 3D-Printed Stainless Steel Bridge Spans a Dutch Canal
Adam Williams | New Atlas
“MX3D has finally realized its ambitious plan to install what’s described as the world’s first 3D-printed steel bridge over a canal in Amsterdam. The Queen of the Netherlands has officially opened the bridge to the public and, as well as an eye-catching design, it features hidden sensors that are collecting data on its structural integrity, crowd behavior, and more.”

ARTIFICIAL INTELLIGENCE
The Computer Scientist Training AI to Think With Analogies
John Pavlus | Quanta
“Melanie Mitchell has worked on digital minds for decades. She says they’ll never truly be like ours until they can make analogies. … ‘Today’s state-of-the-art neural networks are very good at certain tasks,’ she said, ‘but they’re very bad at taking what they’ve learned in one kind of situation and transferring it to another’—the essence of analogy.”

SPACE
Here’s Why Richard Branson’s Flight Matters—and Yes, It Really Matters
Eric Berger | Ars Technica
“During the last 50 years, the vast majority of human flights into space—more than 95 percent—have been undertaken by government astronauts on government-designed and -funded vehicles. Starting with Branson and going forward, it seems likely that 95 percent of human spaceflights over the next half century, if not more, will take place on privately built vehicles by private citizens. It’s a moment that has been a long time coming.”

ENVIRONMENT
Sweeping ‘Green Deal’ Promises to Revamp EU Economy, Slash Carbon Pollution
Tim De Chant | Ars Technica
“The…proposal would cut carbon pollution 55 percent below 1990 levels by leaning heavily on renewable energy and electric vehicles while also introducing a border carbon adjustment on imports and taxing aviation and maritime fuels. Together, the reforms signal the beginning of the end of fossil fuels in the EU. ‘The fossil fuel economy has reached its limits,’ said European Commission President Ursula von der Leyen.”

FUTURE
Why I’m a Proud Solutionist
Jason Crawford | MIT Technology Review
“Debates about technology and progress are often framed in terms of ‘optimism’ vs. ‘pessimism.’ …It’s tempting to choose sides. …But this represents a false choice. History provides us with powerful examples of people who were brutally honest in identifying a crisis but were equally active in seeking solutions.”

ETHICS
As Use of AI Spreads, Congress Looks to Rein It In
Tom Simonite | Wired
“There’s bipartisan agreement in Washington that the US government should do more to support development of artificial intelligence technology. At the same time, parts of the US government are working to place limits on algorithms to prevent discrimination, injustice, or waste. The White House, lawmakers from both parties, and federal agencies including the Department of Defense and the National Institute for Standards and Technology are all working on bills or projects to constrain potential downsides of AI.”

Image Credit: Maximalfocus / Unsplash
