Tag Archives: fiction

#437230 How Drones and Aerial Vehicles Could ...

Drones, personal flying vehicles, and air taxis may be part of our everyday lives in the very near future. Drones and air taxis will create new means of mobility and new transport routes. Drones will be used for surveillance and delivery, and in the construction sector as it moves towards automation.

The introduction of these aerial craft into cities will require the built environment to change dramatically. Drones and other new aerial vehicles will require landing pads, charging points, and drone ports. They could usher in new styles of building, and lead to more sustainable design.

My research explores the impact of aerial vehicles on urban design, mapping out possible future trajectories.

An Aerial Age
Already, civilian drones vary widely in size and complexity. They can carry a range of equipment, from high-resolution cameras, delivery mechanisms, and thermal imaging technology to speakers and scanners. In the public sector, drones are used in disaster response and by fire services to tackle fires that could endanger firefighters.

During the coronavirus pandemic, drones have been used by the police to enforce lockdown. Drones normally used in agriculture have sprayed disinfectant over cities. In the UK, drone delivery trials are taking place to carry medical items to the Isle of Wight.

Alongside drones, our future cities could also be populated by vertical takeoff and landing craft (VTOL), used as private vehicles and air taxis.

These vehicles are familiar to sci-fi fans. The late Syd Mead’s illustrations of the Spinner VTOL craft in the film Blade Runner captured the popular imagination, and the screens for the Spinners in Blade Runner 2049 created by Territory Studio provided a careful design fiction of the experience of piloting these types of vehicle.

Now, though, these flying vehicles are a reality. A number of companies are developing electric multi-rotor eVTOL craft, and a whole new motorsport is being established around them.

These aircraft have the potential to change our cities. However, they need to be tested extensively in urban airspace. A study conducted by Airbus found that public concerns about VTOL use focused on the safety of those on the ground and noise emissions.

New Cities
The widespread adoption of drones and VTOL will lead to new architecture and infrastructure. Existing buildings will require adaptations: landing pads, solar photovoltaic panels for energy efficiency, charging points for delivery drones, and landscaping to mitigate noise emissions.

A number of companies are already trialing drone delivery services. Existing buildings will need to be adapted to accommodate these new networks, and new design principles will have to be implemented in future ones.

The architect Saúl Ajuria Fernández has developed a design for a delivery drone port hub. This drone port acts like a beehive where drones recharge and collect parcels for distribution. Architectural firm Humphreys & Partners’ Pier 2, a design for a modular apartment building of the future, includes a cantilevered drone port for delivery services.

The Norman Foster Foundation has designed a drone port for delivery of medical supplies and other items for rural communities in Rwanda. The structure is also intended to function as a space for the public to congregate, as well as to receive training in robotics.

Drones may also help the urban environment become more sustainable. Researchers at the University of Stuttgart have developed a re-configurable architectural roof canopy system deployed by drones. By adjusting to follow the direction of the sun, the canopy provides shade and reduces reliance on ventilation systems.

Demand for air taxis and personal flying vehicles will develop where other transport systems fail. The Airbus research found that of the cities surveyed, the highest demand for VTOL craft was in Los Angeles and Mexico City, urban areas notorious for traffic congestion and pollution. To accommodate these aerial vehicles, urban space will need to transform to include landing pads, airport-like infrastructure, and recharging points.

Furthermore, this whole logistics system in lower airspace (below 500 feet), or what I term “hover space,” will need an urban traffic management system. One example of how this hover space could work can be seen in Drone Aviary, a speculative project from design studio Superflux, in which a number of drones with different functions move around an urban area in a network, following different paths at varying heights.
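As a toy illustration of how such a “hover space” traffic system might partition lower airspace, the sketch below assigns each drone function its own altitude band below 500 feet and checks requested flights against it. The band names and limits are invented for this example; they are not drawn from the Superflux project or from any real traffic-management standard.

```python
# Toy "hover space" manager: each drone function gets an altitude band
# below 500 ft. Bands and limits are hypothetical, for illustration only.

ALTITUDE_BANDS_FT = {
    "surveillance": (50, 150),
    "delivery": (150, 300),
    "air_taxi": (300, 500),
}

def assign_band(function: str) -> tuple[int, int]:
    """Return the (floor, ceiling) altitude band for a drone function."""
    try:
        return ALTITUDE_BANDS_FT[function]
    except KeyError:
        raise ValueError(f"No band defined for function: {function}")

def flight_allowed(function: str, altitude_ft: float) -> bool:
    """Check whether a requested cruise altitude falls inside its band."""
    floor, ceiling = assign_band(function)
    return floor <= altitude_ft <= ceiling

print(flight_allowed("delivery", 200))   # delivery drone inside its band
print(flight_allowed("air_taxi", 100))   # air taxi below its band
```

Layering by function is only one possible scheme; a real system would also need dynamic deconfliction between vehicles sharing a band.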

We are at a critical period in urban history, faced with climate breakdown and pandemic. Drones and aerial vehicles can be part of a profound rethink of the urban environment.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA

Posted in Human Robots

#436962 Scientists Engineered Neurons to Make ...

Electricity plays a surprisingly powerful role in our bodies. Most people know that it carries signals to and from our nerves, but our bodies also produce electric fields that can do everything from helping heal wounds to triggering the release of hormones.

Electric fields can influence a host of important cellular behaviors, like directional migration, proliferation, division, and even differentiation into different cell types. The work of Michael Levin at Tufts University even suggests that electrical fields may play a crucial role in the way our bodies organize themselves.

This has prompted considerable interest in exploiting our body’s receptiveness to electrical stimulation for therapeutic ends, but given the diffuse nature of electric fields, a key challenge is finding a way to localize their effects. Conductive polymers have proven a useful tool in this regard thanks to their good electrical properties and biocompatibility, and they have been used in everything from neural implants to biosensors.

But now, a team at Stanford University has developed a way to genetically engineer neurons to build the materials into their own cell membranes. The approach could make it possible to target highly specific groups of cells, providing unprecedented control over the body’s response to electrical stimulation.

In a paper in Science, the team explained how they used re-engineered viruses to deliver DNA that hijacks cells’ biosynthesis machinery to create an enzyme that assembles electroactive polymers onto their membranes. This changes the electrical properties of the cells, which the team demonstrated could be used to control their behavior.

They used the approach to modulate neuronal firing in cultures of rat hippocampal neurons, mouse brain slices, and even human cortical spheroids. Most impressively, they showed that they could coax the neurons of living C. elegans worms to produce the polymers in large enough quantities to alter their behavior without impairing the cells’ natural function.

Translating the idea to humans poses major challenges, not least because the viruses used to deliver the genetic changes are still a long way from being approved for clinical use. But the ability to precisely target specific cells using a genetic approach holds enormous promise for bioelectronic medicine, Kevin Otto and Christine Schmidt from the University of Florida say in an accompanying perspective.

Interest is booming in therapies that use electrical stimulation of neural circuits as an alternative to drugs for diseases as varied as arthritis, Alzheimer’s, diabetes, and cardiovascular disease, and hundreds of clinical trials are currently underway.

At present these approaches rely on electrodes that can provide some level of localization, but because different kinds of nerve cells are often packed closely together it’s proven hard to stimulate exactly the right nerves, say Otto and Schmidt. This new approach makes it possible to boost the conductivity of specific cell types, which could make these kinds of interventions dramatically more targeted.

Besides disease-focused bioelectronic interventions, Otto and Schmidt say the approach could prove invaluable for helping to interface advanced prosthetics with patients’ nervous systems by making it possible to excite sensory neurons without accidentally triggering motor neurons, or vice versa.

More speculatively, the approach could one day help create far more efficient bridges between our minds and machines. One of the major challenges for brain-machine interfaces is recording from specific neurons, something that a genetically targeted approach might be able to help greatly with.

If the researchers can replicate the ability to build electronic-tissue “composites” in humans, we may be well on our way to the cyborg future predicted by science fiction.

Image Credit: Gerd Altmann from Pixabay

Posted in Human Robots

#436784 This Week’s Awesome Tech Stories From ...

COMPUTING
Inside the Race to Build the Best Quantum Computer on Earth
Gideon Lichfield | MIT Technology Review
“Regardless of whether you agree with Google’s position [on ‘quantum supremacy’] or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. …The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.”

FUTURE
We’re Not Prepared for the End of Moore’s Law
David Rotman | MIT Technology Review
“Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.”

ROBOTICS
Flippy the Burger-Flipping Robot Is Changing the Face of Fast Food as We Know It
Luke Dormehl | Digital Trends
“Flippy is the result of the Miso team’s robotics expertise, coupled with that industry-specific knowledge. It’s a burger-flipping robot arm that’s equipped with both thermal and regular vision, which grills burgers to order while also advising human collaborators in the kitchen when they need to add cheese or prep buns for serving.”

BIOTECHNOLOGY
The Next Generation of Batteries Could Be Built by Viruses
Daniel Oberhaus | Wired
“[MIT bioengineering professor Angela Belcher has] made viruses that can work with over 150 different materials and demonstrated that her technique can be used to manufacture other materials like solar cells. Belcher’s dream of zipping around in a ‘virus-powered car’ still hasn’t come true, but after years of work she and her colleagues at MIT are on the cusp of taking the technology out of the lab and into the real world.”

SPACE
Biggest Cosmic Explosion Ever Detected Left Huge Dent in Space
Hannah Devlin | The Guardian
“The biggest cosmic explosion on record has been detected—an event so powerful that it punched a dent the size of 15 Milky Ways in the surrounding space. The eruption is thought to have originated at a supermassive black hole in the Ophiuchus galaxy cluster, which is about 390 million light years from Earth.”

SCIENCE FICTION
Star Trek’s Warp Speed Would Have Tragic Consequences
Cassidy Ward | SyFy
“The various crews of Trek’s slate of television shows and movies can get from here to there without much fanfare. Seeking out new worlds and new civilizations is no more difficult than gassing up the car and packing a cooler full of junk food. And they don’t even need to do that! The replicators will crank out a bologna sandwich just like mom used to make. All that’s left is to go, but what happens then?”

Image Credit: sergio souza / Pexels

Posted in Human Robots

#436573 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World
Karen Hao | MIT Technology Review
“The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism. …Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.”

ROBOTICS
3D Printed Four-Legged Robot Is Ready to Take on Spot—at a Lower Price
Luke Dormehl | Digital Trends
“[Ghost Robotics and Origin] have teamed up to develop a new line of robots, called the Spirit Series, which offer impressively capable four-legged robots, but which can be printed using additive manufacturing at a fraction of the cost and speed of traditional manufacturing approaches.”

PRIVACY
The Studs on This Punk Bracelet Are Actually Microphone-Jamming Ultrasonic Speakers
Andrew Liszewski | Gizmodo
“You can prevent facial recognition cameras from identifying you by wearing face paint, masks, or sometimes just a pair of oversized sunglasses. Keeping conversations private from an ever-growing number of microphone-equipped devices isn’t quite as easy, but researchers have created what could be the first wearable that actually helps increase your privacy.”

TRANSPORTATION
Iron Man Dreams Are Closer to Becoming a Reality Thanks to This New Jetman Dubai Video
Julia Alexander | The Verge
“Tony Stark may have destroyed his Iron Man suits in Iron Man 3 (only to bring out a whole new line in Avengers: Age of Ultron), but Jetman Dubai’s Iron Man-like dreams of autonomous human flight are realer than ever. A new video published by the company shows pilot Vince Reffet using a jet-powered, carbon-fiber suit to launch off the ground and fly 6,000 feet in the air.”

TECHNOLOGY
Wikipedia Is the Last Best Place on the Internet
Richard Cooke | Wired
“More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web.”

SCIENCE
The Very Large Array Will Search for Evidence of Extraterrestrial Life
Georgina Torbet | Digital Trends
“To begin the project, an interface will be added to the NRAO’s Very Large Array (VLA) in New Mexico to search for events or structures which could indicate the presence of life, such as laser beams, structures built around stars, indications of constructed satellites, or atmospheric chemicals produced by industry.”

SCIENCE FICTION
The Terrible Truth About Star Trek’s Transporters
Cassidy Ward | SyFy Wire
“The fact that you are scanned, deconstructed, and rebuilt almost immediately thereafter only creates the illusion of continuity. In reality, you are killed and then something exactly like you is born, elsewhere. There’s a whole philosophical debate about whether this really matters. If the person constructed on the other end is identical to you, down to the atomic level, is there any measurable difference from it being actually you?”

Image Credit: Samuel Giacomelli / Unsplash

Posted in Human Robots

#436484 If Machines Want to Make Art, Will ...

Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: When does the machine become the author of an artwork? And how can we form an understanding of the art that it makes?

Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.

But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.

We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.

But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.

Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.

Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even able to predict them, but despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even one using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music, or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.
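The point about variations can be made concrete with a minimal sketch: a single fixed procedure, run with different seeds, produces distinct surface materializations of the same underlying work, just as each performance of a Cage score differs while remaining the same piece. The scale and the procedure here are invented for illustration.

```python
import random

SCALE = [0, 2, 4, 5, 7, 9, 11]  # a major scale, as pitch classes

def materialize(seed: int, length: int = 8) -> list[int]:
    """One materialization of the 'work': a pitch sequence produced by
    a fixed procedure whose surface details depend only on the seed."""
    rng = random.Random(seed)  # seeded generator: reproducible output
    return [rng.choice(SCALE) for _ in range(length)]

# Different seeds give different surface results for the same piece...
print(materialize(1))
print(materialize(2))
# ...while the same seed reproduces the same instance exactly.
print(materialize(1) == materialize(1))  # True
```

The authorship, in this framing, belongs to whoever fixed the procedure, not to the seed or the generator that varied its surface.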

Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.

The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Rene Böhmer / Unsplash

Posted in Human Robots