Tag Archives: computers

#433852 How Do We Teach Autonomous Cars To Drive ...

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
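As a toy illustration of that train-in-simulation, classify-on-camera loop, the sketch below fits a nearest-centroid pixel classifier on synthetic "simulator" pixels and then labels unseen "camera" pixels. The class names, nominal colors, and noise levels are invented for illustration and are not taken from the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

CLASSES = ["sky", "tree", "open_path"]
# Rough RGB centers for each class in a simulated scene (assumed values).
TRUE_COLORS = {"sky": (140, 180, 235), "tree": (40, 90, 35), "open_path": (150, 130, 100)}

def sample_pixels(cls, n):
    """Draw n noisy RGB pixels around the class's nominal color."""
    base = np.array(TRUE_COLORS[cls], dtype=float)
    return base + rng.normal(0, 10, size=(n, 3))

# The "simulator" produces labeled training pixels.
train_X = np.vstack([sample_pixels(c, 200) for c in CLASSES])
train_y = np.repeat(np.arange(len(CLASSES)), 200)

# Fit: one color centroid per class.
centroids = np.stack([train_X[train_y == k].mean(axis=0) for k in range(len(CLASSES))])

def classify(pixels):
    """Assign each pixel to the nearest class centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# "Vehicle camera" pixels, unseen during training.
test_X = np.vstack([sample_pixels(c, 50) for c in CLASSES])
accuracy = (classify(test_X) == np.repeat(np.arange(3), 50)).mean()
print(f"pixel-label accuracy: {accuracy:.2f}")
```

A real system would use a deep segmentation network rather than color centroids, but the workflow is the same: train on labeled simulated data, then evaluate on imagery the model has never seen.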

Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
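One simple way such ground returns can be turned into a roughness number is to detrend a line of height samples and take the standard deviation of the residuals. The data and the metric below are assumptions for illustration, not the Halo Project's actual processing.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)                      # along-track distance, meters
smooth = 0.02 * x                                # gentle slope, no surface texture
rough = 0.02 * x + rng.normal(0, 0.05, x.size)   # gravel-like texture on the same slope

def roughness(x, z):
    """Std of lidar heights after subtracting a fitted line (the ground slope)."""
    slope, intercept = np.polyfit(x, z, 1)
    return float(np.std(z - (slope * x + intercept)))

print(roughness(x, smooth) < roughness(x, rough))  # the textured surface scores rougher
```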

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND

Posted in Human Robots

#433280 This Week’s Awesome Stories From ...

TECHNOLOGY
Google Turns 20: How an Internet Search Engine Reshaped the World
Editorial Staff | The Verge
“No technology company is arguably more responsible for shaping the modern internet, and modern life, than Google. The company that started as a novel search engine now manages eight products with more than 1 billion users each.”

FUTURE
Why Technology Favors Tyranny
Yuval Noah Harari | The Atlantic
“It is undoubtable…that the technological revolutions now gathering momentum will in the next few decades confront humankind with the hardest trials it has yet encountered.”

ARTIFICIAL INTELLIGENCE
AI Can Recognize Images, But Can It Understand This Headline?
Gregory Barber | Wired
“In 2012, artificial intelligence researchers revealed a big improvement in computers’ ability to recognize images by feeding a neural network millions of labeled images from a database called ImageNet. …In other arenas of AI research, like understanding language, similar models have proved elusive. But recent research from fast.ai, OpenAI, and the Allen Institute for AI suggests a potential breakthrough, with more robust language models that can help researchers tackle a range of unsolved problems.”

COMPUTING
Quantum Computing Is Almost Ready for Business, Startup Says
Sean Captain | Fast Company
“Rigetti is now inviting customers to apply for free access to these systems, toward the goal of developing a real-world application that achieves quantum advantage. As an extra incentive, the first to make it wins a $1 million prize.”

SCIENCE FICTION
How Realistic Are Sci-Fi Spaceships? An Expert Ranks Your Favorites
Chris Taylor | Mashable
“For all the villainous Borg’s supposed efficiency, their vast six-sided planet-threatening vessel is a massive waste of space. The Death Star may cost an estimated $852 quadrillion in steel alone, but that figure would be far higher if it employed any other shape. That’s no moon—it’s a highly efficient use of surface area.”

Image Credit: Tithi Luadthong / Shutterstock.com

Posted in Human Robots

#432882 Why the Discovery of Room-Temperature ...

Superconductors are among the most bizarre and exciting materials yet discovered. Counterintuitive quantum-mechanical effects mean that, below a critical temperature, they have zero electrical resistance. This property alone is more than enough to spark the imagination.

A current that could flow forever without losing any energy means transmission of power with virtually no losses in the cables. When renewable energy sources start to dominate the grid and high-voltage transmission across continents becomes important to overcome intermittency, lossless cables will result in substantial savings.
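The savings follow directly from the resistive-loss formula, P_loss = I²R. The figures below are illustrative assumptions, not measurements of any real line.

```python
# Illustrative numbers (assumed): 1 GW delivered at 800 kV HVDC over a
# 1,000 km line with 0.01 ohm/km of conductor resistance.
P_delivered = 1e9      # watts
V = 800e3              # volts
R_per_km = 0.01        # ohms per km (assumed)
length_km = 1000

I = P_delivered / V    # line current, amps
R = R_per_km * length_km

P_loss = I ** 2 * R
loss_fraction = P_loss / P_delivered
print(f"conventional line loss: {loss_fraction:.1%}")  # a couple of percent

# A superconducting cable below its critical temperature has R = 0,
# so the same current flows with no resistive loss at all.
P_loss_superconducting = I ** 2 * 0
```

Even a loss of a few percent, compounded across continental-scale grids, represents an enormous amount of wasted energy, which is why zero resistance matters.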

What’s more, a superconducting wire carrying a current that never, ever diminishes would act as a perfect store of electrical energy. Unlike batteries, which degrade over time, if the resistance is truly zero, you could return to the superconductor in a billion years and find that same old current flowing through it. Energy could be captured and stored indefinitely!

With no resistance, a huge current could be passed through the superconducting wire and, in turn, produce magnetic fields of incredible power.

You could use them to levitate trains and produce astonishing accelerations, thereby revolutionizing the transport system. You could use them in power plants—replacing conventional methods which spin turbines in magnetic fields to generate electricity—and in quantum computers as the two-level system required for a “qubit,” in which the zeros and ones are replaced by current flowing clockwise or counterclockwise in a superconductor.

Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic; superconductors can certainly seem like magical devices. So, why aren’t they busy remaking the world? There’s a problem—that critical temperature.

For all known materials, it's hundreds of degrees below freezing. Superconductors also have a critical magnetic field; beyond a certain field strength, they cease to work. The two properties are linked: materials with an intrinsically high critical temperature often also sustain the largest magnetic fields when cooled well below that temperature.

This has meant that superconductor applications so far have been limited to situations where you can afford to cool the components of your system to close to absolute zero: in particle accelerators and experimental nuclear fusion reactors, for example.

But even as some aspects of superconductor technology become mature in limited applications, the search for higher temperature superconductors moves on. Many physicists still believe a room-temperature superconductor could exist. Such a discovery would unleash amazing new technologies.

The Quest for Room-Temperature Superconductors
After Heike Kamerlingh Onnes discovered superconductivity by accident while attempting to prove Lord Kelvin’s theory that resistance would increase with decreasing temperature, theorists scrambled to explain the new property in the hope that understanding it might allow for room-temperature superconductors to be synthesized.

They came up with the BCS theory, which explained some of the properties of superconductors. It also predicted that the dream of technologists, a room-temperature superconductor, could not exist; the maximum temperature for superconductivity according to BCS theory was just 30 K.

Then, in the 1980s, the field changed again with the discovery of unconventional, or high-temperature, superconductivity. “High temperature” is still very cold: the highest temperature for superconductivity achieved was -70°C for hydrogen sulphide at extremely high pressures. For normal pressures, -140°C is near the upper limit. Unfortunately, high-temperature superconductors—which require relatively cheap liquid nitrogen, rather than liquid helium, to cool—are mostly brittle ceramics, which are expensive to form into wires and have limited application.
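Putting the temperatures above on a single scale (kelvin) shows how far past the old BCS ceiling these records already are:

```python
def c_to_k(celsius):
    """Convert degrees Celsius to kelvin."""
    return celsius + 273.15

BCS_LIMIT_K = 30  # the maximum once predicted by BCS theory

records = {
    "hydrogen sulphide (extreme pressure)": c_to_k(-70),
    "cuprates (normal pressure)": c_to_k(-140),
}
for name, kelvin in records.items():
    print(f"{name}: {kelvin:.0f} K, {kelvin - BCS_LIMIT_K:.0f} K above the BCS limit")
```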

Given the limitations of high-temperature superconductors, researchers continue to believe there’s a better option awaiting discovery—an incredible new material that checks boxes like superconductivity approaching room temperature, affordability, and practicality.

Tantalizing Clues
Without a detailed theoretical understanding of how this phenomenon occurs—although incremental progress happens all the time—scientists can occasionally feel like they’re taking educated guesses at materials that might be likely candidates. It’s a little like trying to guess a phone number, but with the periodic table of elements instead of digits.

Yet the prospect remains, in the words of one researcher, tantalizing. A Nobel Prize and potentially changing the world of energy and electricity is not bad for a day’s work.

Some research focuses on cuprates, complex crystals that contain layers of copper and oxygen atoms. Cuprates doped with various other elements—exotic compounds such as mercury barium calcium copper oxide—are among the best superconductors known today.

Research also continues into some anomalous but unexplained reports that graphite soaked in water can act as a room-temperature superconductor, but there’s no indication that this could be used for technological applications yet.

In early 2017, as part of the ongoing effort to explore the most extreme and exotic forms of matter we can create on Earth, researchers managed to compress hydrogen into a metal.

The pressure required to do this was greater than that at the core of the Earth and thousands of times higher than that at the bottom of the ocean. Some researchers in the field of condensed-matter physics doubt that metallic hydrogen was produced at all.

It’s considered possible that metallic hydrogen could be a room-temperature superconductor. But getting the samples to stick around long enough for detailed testing has proved tricky, with the diamonds containing the metallic hydrogen suffering a “catastrophic failure” under the pressure.

Superconductivity—or behavior that strongly resembles it—was also observed in yttrium barium copper oxide (YBCO) at room temperature in 2014. The only catch was that this electron transport lasted for a tiny fraction of a second and required the material to be bombarded with pulsed lasers.

Not very practical, you might say, but tantalizing nonetheless.

Other new materials display enticing properties too. The 2016 Nobel Prize in Physics was awarded for the theoretical work that characterizes topological insulators—materials that exhibit similarly strange quantum behaviors. They can be considered perfect insulators for the bulk of the material but extraordinarily good conductors in a thin layer on the surface.

Microsoft is betting on topological insulators as the key component in its attempt at a quantum computer. They've also been considered potentially important components in miniaturized circuitry.

A number of remarkable electronic transport properties have also been observed in new, “2D” structures—like graphene, these are materials synthesized to be as thick as a single atom or molecule. And research continues into how we can utilize the superconductors we’ve already discovered; for example, some teams are trying to develop insulating material that prevents superconducting HVDC cable from overheating.

Room-temperature superconductivity remains as elusive and exciting as it has been for over a century. It is unclear whether a room-temperature superconductor can exist, but the discovery of high-temperature superconductors is a promising indicator that unconventional and highly useful quantum effects may be discovered in completely unexpected materials.

Perhaps in the future—through artificial intelligence simulations or the serendipitous discoveries of a 21st century Kamerlingh Onnes—this little piece of magic could move into the realm of reality.

Image Credit: ktsdesign / Shutterstock.com

Posted in Human Robots

#432549 Your Next Pilot Could Be Drone Software

Would you get on a plane that didn’t have a human pilot in the cockpit? Half of air travelers surveyed in 2017 said they would not, even if the ticket was cheaper. Modern pilots do such a good job that almost any air accident is big news, such as the Southwest engine disintegration on April 17.

But stories of pilot drunkenness, rants, fights and distraction, however rare, are reminders that pilots are only human. Not every plane can be flown by a disaster-averting pilot, like Southwest Capt. Tammie Jo Shults or Capt. Chesley “Sully” Sullenberger. But software could change that, equipping every plane with an extremely experienced guidance system that is always learning more.

In fact, on many flights, autopilot systems already control the plane for basically all of the flight. And software handles the most harrowing landings—those in which visibility is so poor that the pilot cannot even see the runway. But human pilots are still on hand as backups.

A new generation of software pilots, developed for self-flying vehicles, or drones, will soon have logged more flying hours than all humans have—ever. By combining their enormous amounts of flight data and experience, drone-control software applications are poised to quickly become the world’s most experienced pilots.

Drones That Fly Themselves
Drones come in many forms, from tiny quad-rotor copter toys to missile-firing winged planes, or even 7-ton aircraft that can stay aloft for 34 hours at a stretch.

When drones were first introduced, they were flown remotely by human operators. However, this merely substitutes a pilot on the ground for one aloft. And it requires significant communications bandwidth between the drone and control center, to carry real-time video from the drone and to transmit the operator’s commands.

Many newer drones no longer need pilots; some drones for hobbyists and photographers can now fly themselves along human-defined routes, leaving the human free to sightsee—or control the camera to get the best view.

University researchers, businesses, and military agencies are now testing larger and more capable drones that will operate autonomously. Swarms of drones can fly without needing tens or hundreds of humans to control them. And they can perform coordinated maneuvers that human controllers could never handle.

Could humans control these 1,218 drones all together?

Whether flying in swarms or alone, the software that controls these drones is rapidly gaining flight experience.

Importance of Pilot Experience
Experience is the main qualification for pilots. Even a person who wants to fly a small plane for personal and noncommercial use needs 40 hours of flying instruction before getting a private pilot’s license. Commercial airline pilots must have at least 1,000 hours before even serving as a co-pilot.

On-the-ground training and in-flight experience prepare pilots for unusual and emergency scenarios, ideally to help save lives in situations like the “Miracle on the Hudson.” But many pilots are less experienced than “Sully” Sullenberger, who saved his planeload of people with quick and creative thinking. With software, though, every plane can have on board a pilot with as much experience—if not more. A popular software pilot system, in use in many aircraft at once, could gain more flight time each day than a single human might accumulate in a year.
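The scale of that claim is easy to check with back-of-envelope arithmetic; the fleet size and utilization figures below are assumptions chosen only to show the order of magnitude.

```python
# Assumed figures for a fleet all running the same pilot software.
fleet_size = 5000              # aircraft (assumed)
hours_per_aircraft_day = 10    # average daily utilization per aircraft (assumed)

software_hours_per_day = fleet_size * hours_per_aircraft_day

# A busy human airline pilot's annual flight time (assumed, near the
# FAA's 1,000-hour annual limit for airline pilots).
human_hours_per_year = 900

print(software_hours_per_day)  # total fleet hours logged in a single day
print(software_hours_per_day / human_hours_per_year)  # "pilot-years" gained per day
```

Under these assumptions the shared software logs roughly 55 pilot-years of experience every day, which is the sense in which it could quickly become the world's most experienced pilot.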

As someone who studies technology policy as well as the use of artificial intelligence for drones, cars, robots, and other uses, I don’t lightly suggest handing over the controls for those additional tasks. But giving software pilots more control would maximize computers’ advantages over humans in training, testing, and reliability.

Training and Testing Software Pilots
Unlike people, computers will follow sets of instructions in software the same way every time. That lets developers create instructions, test reactions, and refine aircraft responses. Testing could make it far less likely, for example, that a computer would mistake the planet Venus for an oncoming jet and throw the plane into a steep dive to avoid it.

The most significant advantage is scale: Rather than teaching thousands of individual pilots new skills, updating thousands of aircraft would require only downloading updated software.

These systems would also need to be thoroughly tested—in both real-life situations and in simulations—to handle a wide range of aviation situations and to withstand cyberattacks. But once they’re working well, software pilots are not susceptible to distraction, disorientation, fatigue, or other human impairments that can create problems or cause errors even in common situations.

Rapid Response and Adaptation
Already, aircraft regulators are concerned that human pilots are forgetting how to fly on their own and may have trouble taking over from an autopilot in an emergency.

In the “Miracle on the Hudson” event, for example, a key factor in what happened was how long it took for the human pilots to figure out what had happened—that the plane had flown through a flock of birds, which had damaged both engines—and how to respond. Rather than the approximately one minute it took the humans, a computer could have assessed the situation in seconds, potentially saving enough time that the plane could have landed on a runway instead of a river.

Aircraft damage can pose another particularly difficult challenge for human pilots: it can change how the controls affect the plane's flight. In cases where damage renders a plane uncontrollable, the result is often tragedy. A sufficiently advanced automated system could make minute changes to the aircraft's steering and use its sensors to quickly evaluate the effects of those movements—essentially learning how to fly all over again with a damaged plane.
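A deliberately tiny sketch of that idea (probe, re-estimate control effectiveness, then command using the new model), with a one-dimensional stand-in for the aircraft and invented numbers:

```python
# After damage, the "aircraft" responds to elevator deflection with an
# effectiveness gain the controller no longer knows.
true_gain = -0.4  # post-damage control effectiveness (hidden from the controller)

def plant(deflection):
    """Sensor-observed pitch-rate response to a control deflection."""
    return true_gain * deflection

# Probe with a small deflection and re-estimate the gain from the response.
probe = 0.05
estimated_gain = plant(probe) / probe

# Use the re-learned gain to command a desired pitch rate.
target_rate = 1.0
command = target_rate / estimated_gain
achieved = plant(command)
print(round(achieved, 6))  # close to the 1.0 target: control recovered
```

Real adaptive flight control involves many coupled axes, noisy sensors, and safety constraints on the probing inputs, but the probe-and-re-identify loop is the core of the technique.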

Boosting Public Confidence
The biggest barrier to fully automated flight is psychological, not technical. Many people may not want to trust their lives to computer systems. But they might come around when reassured that the software pilot has tens, hundreds, or thousands more hours of flight experience than any human pilot.

Other autonomous technologies, too, are progressing despite public concerns. Regulators and lawmakers are allowing self-driving cars on the roads in many states. But more than half of Americans don’t want to ride in one, largely because they don’t trust the technology. And only 17 percent of travelers around the world are willing to board a plane without a pilot. However, as more people experience self-driving cars on the road and have drones deliver them packages, it is likely that software pilots will gain in acceptance.

The airline industry will certainly be pushing people to trust the new systems: Automating pilots could save tens of billions of dollars a year. And the current pilot shortage means software pilots may be the key to having any airline service to smaller destinations.

Both Boeing and Airbus have made significant investments in automated flight technology, which would remove or reduce the need for human pilots. Boeing has actually bought a drone manufacturer and is looking to add software pilot capabilities to the next generation of its passenger aircraft. (Other tests have tried to retrofit existing aircraft with robotic pilots.)

One way to help regular passengers become comfortable with software pilots—while also helping to both train and test the systems—could be to introduce them as co-pilots working alongside human pilots. Planes would be operated by software from gate to gate, with the pilots instructed to touch the controls only if the system fails. Eventually pilots could be removed from the aircraft altogether, just like they eventually were from the driverless trains that we routinely ride in airports around the world.

This article was originally published on The Conversation. Read the original article.

Image Credit: Skycolors / Shutterstock.com

Posted in Human Robots

#432512 How Will Merging Minds and Machines ...

One of the most exciting and frightening outcomes of technological advancement is the potential to merge our minds with machines. If achieved, this would profoundly boost our cognitive capabilities. More importantly, however, it could be a revolution in human identity, emotion, spirituality, and self-awareness.

Brain-machine interface technology is already being developed by pioneers and researchers around the globe. It’s still early and today’s tech is fairly rudimentary, but it’s a fast-moving field, and some believe it will advance faster than generally expected. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to connect our brains to the internet via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” Even if the advances are less dramatic, however, they’ll have significant implications.

How might this technology affect human consciousness? What are its implications for our sentience, self-awareness, and subjective experience of the illusion of self?

Consciousness can be hard to define, but a holistic definition often encompasses many of our most fundamental capacities, such as wakefulness, self-awareness, meta-cognition, and sense of agency. Beyond that, consciousness represents a spectrum of awareness, as seen across various species of animals. Even humans experience different levels of existential awareness.

From psychedelics to meditation, there are many tools we already use to alter and heighten our conscious experience, both temporarily and permanently. These tools have been said to contribute to a richer life, with the potential to bring experiences of beauty, love, inner peace, and transcendence. Relatively non-invasive, these tools show us what a seemingly minor imbalance of neurochemistry and conscious internal effort can do to the subjective experience of being human.

Taking this into account, what implications might emerging brain-machine interface technologies have on the “self”?

The Tools for Self-Transcendence
At the basic level, we are currently seeing the rise of “consciousness hackers” using techniques like non-invasive brain stimulation through EEG, nutrition, virtual reality, and ecstatic experiences to create environments for heightened consciousness and self-awareness. In Stealing Fire, Steven Kotler and Jamie Wheal explore this trillion-dollar altered-states economy and how innovators and thought leaders are “harnessing rare and controversial states of consciousness to solve critical challenges and outperform the competition.” Beyond enhanced productivity, these altered states expose our inner potential and give us a glimpse of a greater state of being.

Expanding consciousness through brain augmentation and implants could one day be just as accessible. Researchers are working on an array of neurotechnologies, ranging from simple, non-invasive tools like electrode-based EEGs to invasive implants and techniques like optogenetics, where neurons are genetically reprogrammed to respond to pulses of light. We've already connected two brains via the internet, allowing the two to communicate, and future-focused startups are researching the possibilities too. With an eye toward advanced brain-machine interfaces, last year Elon Musk unveiled Neuralink, a company whose ultimate goal is to merge the human mind with AI through a "neural lace."

Many technologists predict we will one day merge with and, more speculatively, upload our minds onto machines. Neuroscientist Kenneth Hayworth writes in Skeptic magazine, “All of today’s neuroscience models are fundamentally computational by nature, supporting the theoretical possibility of mind-uploading.” This might include connecting with other minds using digital networks or even uploading minds onto quantum computers, which can be in multiple states of computation at a given time.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. With advancements in genetic engineering, we are indeed seeing evolution become an increasingly conscious process with an accelerated pace. This could one day apply to the evolution of our consciousness as well; we would be using our consciousness to expand our consciousness.

What Will It Feel Like?
We may be able to come up with predictions of the impact of these technologies on society, but we can only wonder what they will feel like subjectively.

It’s hard to imagine, for example, what our stream of consciousness will feel like when we can process thoughts and feelings 1,000 times faster, or how artificially intelligent brain implants will impact our capacity to love and hate. What will the illusion of “I” feel like when our consciousness is directly plugged into the internet? Overall, what impact will the process of merging with technology have on the subjective experience of being human?

The Evolution of Consciousness
In The Future Evolution of Consciousness, Thomas Lombardo points out, “We are a journey rather than a destination—a chapter in the evolutionary saga rather than a culmination. Just as probable, there will also be a diversification of species and types of conscious minds. It is also very likely that new psychological capacities, incomprehensible to us, will emerge as well.”

Humans are notorious for fearing the unknown. For any individual who has never experienced an altered state, be it spiritual or psychedelic-induced, it is difficult to comprehend the subjective experience of that state. It is why many refer to their first altered-state experience as “waking up,” wherein they didn’t even realize they were asleep.

Similarly, exponential neurotechnology represents the potential of a higher state of consciousness and a range of experiences that are unimaginable to our current default state.

Our capacity to think and feel is set by the boundaries of our biological brains. To transform and expand these boundaries is to transform and expand the first-hand experience of consciousness. Emerging neurotechnology may end up providing the awakening our species needs.

Image Credit: Peshkova / Shutterstock.com

Posted in Human Robots