#436403 Why Your 5G Phone Connection Could Mean ...
Will getting full bars on your 5G connection mean getting caught out by sudden weather changes?
The question may strike you as hypothetical, nonsensical even, but it is at the core of ongoing disputes between meteorologists and telecommunications companies. Everyone else, including you and me, is caught in the middle, wanting both 5G’s faster connection speeds and precise information about our increasingly unpredictable weather. So why can’t we have both?
Perhaps we can, but because of the way 5G networks function, it may take some special technology—specifically, artificial intelligence.
The Bandwidth Worries
Around the world, the first 5G networks are already being rolled out. The networks use a variety of frequencies to transmit data to and from devices at speeds up to 100 times faster than existing 4G networks.
One of the frequency bands used lies between 24.25 and 24.45 gigahertz (GHz). In a recent FCC auction, telecommunications companies paid a combined $2 billion for the 5G usage rights for this spectrum in the US.
However, meteorologists are concerned that transmissions near the lower end of that range can interfere with their ability to accurately measure water vapor in the atmosphere. Wired reported that Neil Jacobs, acting chief of the National Oceanic and Atmospheric Administration (NOAA), told the US House Subcommittee on the Environment that 5G interference could substantially cut the amount of weather data satellites can gather. As a result, forecast accuracy could drop by as much as 30 percent.
Among the consequences could be less time to prepare for hurricanes, and it may become harder to predict storms’ paths. Due to the interconnectedness of weather patterns, measurement issues in one location can affect other areas too. Lack of accurate atmospheric data from the US could, for example, lead to less accurate forecasts for weather patterns over Europe.
The Numbers Game
Water vapor emits a faint signal at 23.8 GHz. Weather satellites measure these signals, and the data is used to gauge atmospheric humidity levels. Meteorologists have expressed concern that 5G transmissions in the same range could disturb those readings: it would be nigh on impossible to tell whether a detected signal comes from water vapor or from an errant 5G transmitter.
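The ambiguity can be made concrete with a toy model. The sketch below uses made-up numbers and a hypothetical linear retrieval, not real radiometry; it only illustrates why a sensor that sees total power at 23.8 GHz cannot separate genuine water-vapor emission from 5G leakage.

```python
# Toy model: a radiometer channel at 23.8 GHz sees the sum of the genuine
# water-vapor emission and any 5G leakage in the same band. The numbers and
# the linear "retrieval" are illustrative assumptions, not real radiometry.
def retrieve_humidity(measured_signal, gain=2.0):
    # Hypothetical linear retrieval: humidity estimate proportional to signal.
    return gain * measured_signal

true_vapor_signal = 1.0   # arbitrary units
leakage = 0.3             # 5G power leaking into the channel

clean = retrieve_humidity(true_vapor_signal)
contaminated = retrieve_humidity(true_vapor_signal + leakage)

# The retrieval has no way to separate the two contributions, so the
# contaminated humidity estimate is simply biased high.
print(clean, contaminated)
```

Because both contributions arrive as one measurement, no amount of downstream processing of a single reading can undo the bias, which is why proposals focus on buffers at the source or on statistical clean-up across many readings.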
Furthermore, 5G disturbances in other frequency bands could make forecasting even more difficult. Rain and snow emit signals at frequencies around 36-37 GHz, the 50.2-50.4 GHz band is used to measure atmospheric temperatures, and 86-92 GHz is used to observe clouds and ice. All of these bands are under consideration for international 5G use. Some have warned that the wider consequences could set weather forecasting back to the 1980s.
Telecommunications companies and industry groups have argued back, saying that weather sensors aren’t as susceptible to interference as meteorologists fear, and that 5G devices and signals will produce much less interference with weather forecasting than organizations like NOAA predict. Since very little scientific research has been carried out to examine the claims of either party, we seem stuck in a ‘wait and see’ situation.
To offset some of the possible effects, the two groups have tried to reach a consensus on a noise buffer between the 5G transmissions and water-vapor signals. It could be likened to limiting the noise from busy roads or loud sound systems to avoid bothering neighboring buildings.
The World Meteorological Organization was looking to establish a -55 decibel-watt (dBW) buffer. In Europe, regulators have settled on a -42 dBW buffer for 5G base stations. For comparison, the US Federal Communications Commission has advocated a -20 dBW buffer, which would in practice allow more than 150 times as much noise as the European proposal.
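The scale of that gap is easy to verify, since decibel-watts are a logarithmic power unit: every 10 dB corresponds to a tenfold change in power. A quick calculation confirms the "more than 150 times" figure:

```python
# The buffers are quoted in decibel-watts (dBW), a logarithmic power scale.
# Converting the gap between the FCC and European proposals back to a linear
# ratio shows why -20 dBW admits far more noise than -42 dBW.
def dbw_to_watts(dbw):
    return 10 ** (dbw / 10)

fcc = dbw_to_watts(-20)      # US FCC proposal
europe = dbw_to_watts(-42)   # European proposal

ratio = fcc / europe         # 10 ** (22 / 10)
print(round(ratio))          # ~158, i.e. "more than 150 times more noise"
```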
How AI Could Help
Much of the conversation about 5G’s possible influence on future weather predictions is centered around mobile phones. However, the phones are far from the only systems that will be receiving and transmitting signals on 5G. Self-driving cars and the Internet of Things are two other technologies that could soon be heavily reliant on faster wireless signals.
Densely populated areas are likely to be the biggest emitters of 5G signals, which has led to a suggestion to gather water-vapor data only over oceans.
Another option is to develop artificial intelligence (AI) approaches to clean or process weather data. AI is playing an increasing role in weather forecasting. For example, in 2016 IBM bought The Weather Company for $2 billion. The goal was to combine the two companies’ models and data in IBM’s Watson to create more accurate forecasts. AI would also be able to predict increases or drops in business revenues due to weather changes. Monsanto has also been investing in AI for forecasting, in this case to provide agriculturally-related weather predictions.
Smartphones may also provide a piece of the weather forecasting puzzle. Studies have shown how data from thousands of smartphones can help increase the accuracy of storm predictions, as well as estimates of storm intensity.
“Weather stations cost a lot of money,” Cliff Mass, an atmospheric scientist at the University of Washington in Seattle, told Inside Science, adding, “If there are already 20 million smartphones, you might as well take advantage of the observation system that’s already in place.”
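Crowdsourced sensing of this kind usually hinges on robust aggregation of many noisy readings. As a minimal sketch, assuming each phone reports a barometric pressure reading tagged with a map grid cell (the data format and the median filter here are illustrative choices, not the method used in the studies above):

```python
from collections import defaultdict
from statistics import median

# Hedged sketch of crowdsourced weather sensing: collapse noisy barometric
# readings from many phones into one robust value per map grid cell.
def aggregate_pressure(readings):
    """readings: list of (grid_cell, pressure_hPa) tuples from phones."""
    by_cell = defaultdict(list)
    for cell, hpa in readings:
        by_cell[cell].append(hpa)
    # The median damps outliers from phones indoors, in elevators, etc.
    return {cell: median(vals) for cell, vals in by_cell.items()}

readings = [("A1", 1012.8), ("A1", 1013.1), ("A1", 998.0),  # 998.0: outlier
            ("B2", 1009.5), ("B2", 1009.9)]
print(aggregate_pressure(readings))
```

The median is a deliberate design choice here: with 20 million phones, a noticeable fraction of readings will be junk, and a robust statistic keeps them from dragging the cell estimate around.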
Smartphones may not be the solution when it comes to finding new ways of gathering the atmospheric data on water vapor that 5G could disrupt. But they do go to show that some technologies open new doors, while at the same time, others shut them.
Image Credit: Free-Photos from Pixabay
#436123 A Path Towards Reasonable Autonomous ...
Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. It’s collaborations like this that could be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we’re excited to be able to share this paper (originally posted on Georgia Tech’s Mobile Robot Lab website) with you in its entirety.
Autonomous Weapon Systems: A Roadmapping Exercise
Over the past several years, there has been growing awareness and discussion surrounding the possibility of future lethal autonomous weapon systems that could fundamentally alter humanity’s relationship with violence in war. Lethal autonomous weapons present a host of legal, ethical, moral, and strategic challenges. At the same time, artificial intelligence (AI) technology could be used in ways that improve compliance with the laws of war and reduce non-combatant harm. Since 2014, states have come together annually at the United Nations to discuss lethal autonomous weapons systems [1]. Additionally, a growing number of individuals and non-governmental organizations have become active in discussions surrounding autonomous weapons, contributing to a rapidly expanding intellectual field working to better understand these issues. While a wide range of regulatory options have been proposed for dealing with the challenge of lethal autonomous weapons, ranging from a preemptive, legally binding international treaty to reinforcing compliance with existing laws of war, there is as yet no international consensus on a way forward.
The lack of an international policy consensus, whether codified in a formal document or otherwise, poses real risks. States could fall victim to a security dilemma in which they deploy untested or unsafe weapons that pose risks to civilians or international stability. Widespread proliferation could enable illicit uses by terrorists, criminals, or rogue states. Alternatively, a lack of guidance on which uses of autonomy are acceptable could stifle valuable research that could reduce the risk of non-combatant harm.
International debate thus far has predominantly centered around whether or not states should adopt a preemptive, legally-binding treaty that would ban lethal autonomous weapons before they can be built. Some of the authors of this document have called for such a treaty and would heartily support it, if states were to adopt it. Other authors of this document have argued that an overly expansive treaty would foreclose the possibility of using AI to mitigate civilian harm. Options for international action are not binary, however, and there are a range of policy options that states should consider between adopting a comprehensive treaty and doing nothing.
The purpose of this paper is to explore the possibility of a middle road. If a roadmap could garner sufficient stakeholder support to have significant beneficial impact, then what elements could it contain? The exercise whose results are presented below was not to identify recommendations that the authors each prefer individually (the authors hold a broad spectrum of views), but instead to identify those components of a roadmap that the authors are all willing to entertain [2]. We, the authors, invite policymakers to consider these components as they weigh possible actions to address concerns surrounding autonomous weapons [3].
Summary of Issues Surrounding Autonomous Weapons
There are a variety of issues that autonomous weapons raise, which might lend themselves to different approaches. A non-exhaustive list of issues includes:
The potential for beneficial uses of AI and autonomy that could improve precision and reliability in the use of force and reduce non-combatant harm.
Uncertainty about the path of future technology and the likelihood of autonomous weapons being used in compliance with the laws of war, or international humanitarian law (IHL), in different settings and on various timelines.
A desire for some degree of human involvement in the use of force. This has been expressed repeatedly in UN discussions on lethal autonomous weapon systems in different ways.
Particular risks surrounding lethal autonomous weapons specifically targeting personnel as opposed to vehicles or materiel.
Risks regarding international stability.
Risk of proliferation to terrorists, criminals, or rogue states.
Risk that autonomous systems that have been verified to be acceptable can be made unacceptable through software changes.
The potential for autonomous weapons to be used as scalable weapons enabling a small number of individuals to inflict very large-scale casualties at low cost, either intentionally or accidentally.
Summary of Components
Adopt a time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems [4]. Such a moratorium could include exceptions for certain classes of weapons.
Define guiding principles for human involvement in the use of force.
Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.
Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states.
Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL compliance in the use of future weapons.
Component 1:
States should consider adopting a five-year, renewable moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems. Anti-personnel lethal autonomous weapon systems are defined as weapons systems that, once activated, can select and engage dismounted human targets without further intervention by a human operator, possibly excluding systems such as:
Fixed-point defensive systems with human supervisory control to defend human-occupied bases or installations
Limited, proportional, automated counter-fire systems that return fire in order to provide immediate, local defense of humans
Time-limited pursuit deterrent munitions or systems
Autonomous weapon systems above a specified explosive weight limit that select as targets hand-held weapons, such as rifles, machine guns, anti-tank weapons, or man-portable air defense systems, provided there is adequate protection for non-combatants and IHL compliance is ensured [5]
The moratorium would not apply to:
Anti-vehicle or anti-materiel weapons
Non-lethal anti-personnel weapons
Research on ways of improving autonomous weapon technology to reduce non-combatant harm in future anti-personnel lethal autonomous weapon systems
Weapons that find, track, and engage specific individuals whom a human has decided should be engaged within a limited predetermined period of time and geographic region
Motivation:
This moratorium would pause development and deployment of anti-personnel lethal autonomous weapons systems to allow states to better understand the systemic risks of their use and to perform research that improves their safety, understandability, and effectiveness. Particular objectives could be to:
ensure that, prior to deployment, anti-personnel lethal autonomous weapons can be used in ways that match or outperform humans in their compliance with IHL (other conditions may also apply before deployment is acceptable);
lay the groundwork for a potentially legally binding diplomatic instrument; and
decrease the geopolitical pressure on countries to deploy anti-personnel lethal autonomous weapons before they are reliable and well-understood.
Compliance Verification:
As part of a moratorium, states could consider various approaches to compliance verification. Potential approaches include:
Developing an industry cooperation regime analogous to that mandated under the Chemical Weapons Convention, whereby manufacturers must know their customers and report suspicious purchases of significant quantities of items such as fixed-wing drones, quadcopters, and other weaponizable robots.
Encouraging states to declare inventories of autonomous weapons for the purposes of transparency and confidence-building.
Facilitating scientific exchanges and military-to-military contacts to increase trust, transparency, and mutual understanding on topics such as compliance verification and safe operation of autonomous systems.
Designing control systems to require operator identity authentication and unalterable records of operation; enabling post-hoc compliance checks in case of plausible evidence of non-compliant autonomous weapon attacks.
Relating the quantity of weapons to corresponding capacities for human-in-the-loop operation of those weapons.
Designing weapons with air-gapped firing authorization circuits that are connected to the remote human operator but not to the on-board automated control system.
More generally, avoiding weapon designs that enable conversion from compliant to non-compliant categories or missions solely by software updates.
Designing weapons with formal proofs of relevant properties—e.g., the property that the weapon is unable to initiate an attack without human authorization. Proofs can, in principle, be provided using cryptographic techniques that allow the proofs to be checked by a third party without revealing any details of the underlying software.
Facilitating access to (non-classified) AI resources (software, data, methods for ensuring safe operation) for all states that remain in compliance and participate in transparency activities.
Component 2:
Define and universalize guiding principles for human involvement in the use of force.
Humans, not machines, are legal and moral agents in military operations.
It is a human responsibility to ensure that any attack, including one involving autonomous weapons, complies with the laws of war.
Humans responsible for initiating an attack must have sufficient understanding of the weapons, the targets, the environment and the context for use to determine whether that particular attack is lawful.
The attack must be bounded in space, time, target class, and means of attack in order for the determination about the lawfulness of that attack to be meaningful.
Militaries must invest in training, education, doctrine, policies, system design, and human-machine interfaces to ensure that humans remain responsible for attacks.
Component 3:
Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.
Specific potential measures include:
Developing safe rules for autonomous system behavior when in proximity to adversarial forces to avoid unintentional escalation or signaling. Examples include:
No-first-fire policy, so that autonomous weapons do not initiate hostilities without explicit human authorization.
A human must always be responsible for providing the mission for an autonomous system.
Taking steps to clearly distinguish exercises, patrols, reconnaissance, or other peacetime military operations from attacks in order to limit the possibility of reactions from adversary autonomous systems, such as autonomous air or coastal defenses.
Developing resilient communications links to ensure recallability of autonomous systems. Additionally, militaries should refrain from jamming others’ ability to recall their autonomous systems in order to afford the possibility of human correction in the event of unauthorized behavior.
Component 4:
Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states:
Targeted multilateral controls to prevent large-scale sale and transfer of weaponizable robots and related military-specific components for illicit use.
Employ measures to render weaponizable robots less harmful (e.g., geofencing; hard-wired kill switch; onboard control systems largely implemented in unalterable, non-reprogrammable hardware such as application-specific integrated circuits).
Component 5:
Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL-compliance in the use of future weapons, including:
Strategies to promote human moral engagement in decisions about the use of force
Risk assessment for autonomous weapon systems, including the potential for large-scale effects, geopolitical destabilization, accidental escalation, increased instability due to uncertainty about the relative military balance of power, and lowering thresholds to initiating conflict and for violence within conflict
Methodologies for ensuring the reliability and security of autonomous weapon systems
New techniques for verification, validation, explainability, characterization of failure conditions, and behavioral specifications.
About the Authors (in alphabetical order)
Ronald Arkin directs the Mobile Robot Laboratory at Georgia Tech.
Leslie Kaelbling is co-director of the Learning and Intelligent Systems Group at MIT.
Stuart Russell is a professor of computer science and engineering at UC Berkeley.
Dorsa Sadigh is an assistant professor of computer science and of electrical engineering at Stanford.
Paul Scharre directs the Technology and National Security Program at the Center for a New American Security (CNAS).
Bart Selman is a professor of computer science at Cornell.
Toby Walsh is a professor of artificial intelligence at the University of New South Wales (UNSW) Sydney.
The authors would like to thank Max Tegmark for organizing the three-day meeting from which this document was produced.
[1] Autonomous Weapons System (AWS): A weapon system that, once activated, can select and engage targets without further intervention by a human operator.
[2] There is no implication that some authors would not personally support stronger recommendations.
[3] For ease of use, this working paper will frequently shorten “autonomous weapon system” to “autonomous weapon.” The terms should be treated as synonymous, with the understanding that “weapon” refers to the entire system: sensor, decision-making element, and munition.
[4] Anti-personnel lethal autonomous weapon system: A weapon system that, once activated, can select and engage dismounted human targets with lethal force and without further intervention by a human operator.
[5] The authors are not unanimous about this item because of concerns about ease of repurposing for mass-casualty missions targeting unarmed humans. The purpose of the lower limit on explosive payload weight would be to minimize the risk of such repurposing. There is precedent for using explosive weight limit as a mechanism of delineating between anti-personnel and anti-materiel weapons, such as the 1868 St. Petersburg Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight.
#435575 How an AI Startup Designed a Drug ...
Discovering a new drug can take decades, billions of dollars, and untold man hours from some of the smartest people on the planet. Now a startup says it’s taken a significant step towards speeding the process up using AI.
The typical drug discovery process involves carrying out physical tests on enormous libraries of molecules, and even with the help of robotics it’s an arduous process. The idea of sidestepping this by using computers to virtually screen for promising candidates has been around for decades. But progress has been underwhelming, and it’s still not a major part of commercial pipelines.
Recent advances in deep learning, however, have reignited hopes for the field, and major pharma companies have started tying up with AI-powered drug discovery startups. And now Insilico Medicine has used AI to design a molecule that effectively targets a protein involved in fibrosis—the formation of excess fibrous tissue—in mice in just 46 days.
The platform the company has developed combines two of the hottest sub-fields of AI: generative adversarial networks (GANs), which power deepfakes, and reinforcement learning, which is at the heart of the most impressive game-playing AI advances of recent years.
In a paper in Nature Biotechnology, the company’s researchers describe how they trained their model on all the molecules already known to target this protein as well as many other active molecules from various datasets. The model was then used to generate 30,000 candidate molecules.
Unlike most previous efforts, they went a step further and selected the most promising molecules for testing in the lab. The 30,000 candidates were whittled down to just six using more conventional drug discovery approaches and were then synthesized in the lab. They were put through increasingly stringent tests, and the leading candidate was found to be effective at targeting the desired protein and behaved as one would hope a drug would.
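The generate-then-filter funnel described above can be caricatured in a few lines. Everything here is a stand-in: Insilico’s real pipeline uses GAN- and reinforcement-learning-based generators and chemistry-aware scoring, whereas this sketch just ranks dummy candidates with a random score to show the shape of the workflow.

```python
import random

# Illustrative funnel: a generative model proposes many candidates, a scoring
# function ranks them, and only a handful go to the lab. All names and the
# random score are hypothetical placeholders, not Insilico's actual methods.
random.seed(0)

def generate_candidates(n):
    # Stand-in for a GAN/RL generator producing candidate molecules.
    return [f"molecule_{i}" for i in range(n)]

def score(molecule):
    # Stand-in for docking scores, ADMET filters, medicinal-chemistry review.
    return random.random()

candidates = generate_candidates(30_000)
shortlist = sorted(candidates, key=score, reverse=True)[:6]
print(len(shortlist))  # 6 molecules go on to synthesis and testing
```

The point of the shape is that computation is cheap relative to synthesis: it costs little to generate and rank 30,000 virtual candidates, so the expensive wet-lab work is reserved for the handful that survive every filter.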
The authors are clear that the results are just a proof-of-concept, which company CEO Alex Zhavoronkov told Wired stemmed from a challenge set by a pharma partner to design a drug as quickly as possible. But they say they were able to carry out the process faster than traditional methods for a fraction of the cost.
There are some caveats. For a start, the protein being targeted is already very well known and multiple effective drugs exist for it. That gave the company a wealth of data to train their model on, something that isn’t the case for many of the diseases where we urgently need new drugs.
The company’s platform also only targets the very initial stages of the drug discovery process. The authors concede in their paper that the molecules would still take considerable optimization in the lab before they’d be true contenders for clinical trials.
“And that is where you will start to begin to commence to spend the vast piles of money that you will eventually go through in trying to get a drug to market,” writes Derek Lowe in his blog In The Pipeline. The part of the discovery process that the platform tackles represents a tiny fraction of the total cost of drug development, he says.
Nonetheless, the research is a definite advance for virtual screening technology and an important marker of the potential of AI for designing new medicines. Zhavoronkov also told Wired that this research was done more than a year ago, and they’ve since adapted the platform to go after harder drug targets with less data.
And with big pharma companies desperate to slash their ballooning development costs and find treatments for a host of intractable diseases, they can use all the help they can get.
Image Credit: freestocks.org / Unsplash