Tag Archives: open

#433852 How Do We Teach Autonomous Cars To Drive ...

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
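To make the "classify what it sees" step concrete, here is a deliberately simplified sketch of per-pixel labeling. The real pipeline uses trained neural networks; this nearest-reference-color stand-in (with made-up reference colors) only shows the shape of the task: an image goes in, a label per pixel comes out.

```python
import numpy as np

# Toy per-pixel labeling sketch. The actual system uses trained neural
# networks; this nearest-reference-color classifier (reference colors
# are invented for illustration) just shows the input/output contract.

CLASSES = {
    "sky":      np.array([120, 180, 240]),
    "tree":     np.array([ 30, 100,  40]),
    "path":     np.array([150, 130, 100]),
    "obstacle": np.array([ 90,  90,  90]),
}
NAMES = list(CLASSES)
REFS = np.stack([CLASSES[n] for n in NAMES]).astype(float)

def label_pixels(image):
    """image: (H, W, 3) array -> (H, W) array of class indices."""
    flat = image.reshape(-1, 1, 3).astype(float)
    dists = np.linalg.norm(flat - REFS[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(image.shape[:2])

# A 2x2 "image": sky-blue, leaf-green, dirt-brown, and gray pixels.
img = np.array([[[125, 185, 235], [ 35,  95,  45]],
                [[145, 125, 105], [ 88,  92,  91]]], dtype=np.uint8)
labels = label_pixels(img)
print([[NAMES[i] for i in row] for row in labels])
# → [['sky', 'tree'], ['path', 'obstacle']]
```

A trained segmentation network replaces the reference-color table with millions of learned parameters, but the contract is the same: camera frame in, a label for every pixel out.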

Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
We’ve seen some exciting early results from our research. For example, preliminary experiments suggest that machine learning algorithms trained in simulated environments can carry over to the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND

Posted in Human Robots

#433803 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
The AI Cold War That Could Doom Us All
Nicholas Thompson | Wired
“At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. …Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?”

LONGEVITY
Finally, the Drug That Keeps You Young
Stephen S. Hall | MIT Technology Review
“The other thing that has changed is that the field of senescence—and the recognition that senescent cells can be such drivers of aging—has finally gained acceptance. Whether those drugs will work in people is still an open question. But the first human trials are under way right now.”

SYNTHETIC BIOLOGY
Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories
Megan Molteni | Wired
“The biotech unicorn is already cranking out an impressive number of microbial biofactories that grow and multiply and burp out fragrances, fertilizers, and soon, psychoactive substances. And they do it at a fraction of the cost of traditional systems. But Kelly is thinking even bigger.”

CYBERNETICS
Thousands of Swedes Are Inserting Microchips Under Their Skin
Maddy Savage | NPR
“Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180. So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.”

ART
AI Art at Christie’s Sells for $432,500
Gabe Cohn | The New York Times
“Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.”

ETHICS
Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From
Karen Hao | MIT Technology Review
“The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.”

TECHNOLOGY
The Rodney Brooks Rules for Predicting a Technology’s Success
Rodney Brooks | IEEE Spectrum
“Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?”

Image Source: spainter_vfx / Shutterstock.com


#433785 DeepMind’s Eerie Reimagination of the ...

If a recent project from Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.

But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.

From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics who worked on it as a DeepMind intern, and DeepMind researchers Jeff Donahue and Karen Simonyan.

They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed images of dogs. The second AI then tells the first one how far off its efforts were. The first one uses this information to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
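The adversarial loop described above can be sketched in a few lines. This toy version is nothing like BigGAN—the "images" are single numbers drawn from a normal distribution, the generator is an affine map, and the discriminator is logistic regression, with a small weight decay added as a standard stabilizer (an assumption of this sketch, not part of the paper)—but the back-and-forth structure is the same:

```python
import numpy as np

# Toy 1-D GAN: real samples come from N(4, 1), the generator maps noise
# through fake = a*z + b, and the discriminator is logistic regression.
# The two take alternating gradient steps, each trying to outdo the other.

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0      # generator parameters: fake = a * z + b
w, c = 0.1, 0.0      # discriminator parameters: d(x) = sigmoid(w*x + c)
lr, decay = 0.05, 0.1  # decay regularizes the discriminator (stabilizer)

for step in range(3000):
    real = rng.normal(4.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake) + decay * w)
    c -= lr * (np.mean(dr - 1) + np.mean(df) + decay * c)

    # Generator step: push d(fake) toward 1 (non-saturating loss),
    # back-propagating through fake = a * z + b.
    df = sigmoid(w * fake + c)
    a -= lr * np.mean((df - 1) * w * z)
    b -= lr * np.mean((df - 1) * w)

print(f"generator now centers its fakes near {b:.1f} (real data: 4.0)")
```

In BigGAN both players are deep convolutional networks and the samples are images, but the alternating-update structure is the part that carries over.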

The team was able to draw on Google’s vast vaults of computational power to create images of a quality and realism beyond almost anything seen before. In part, this was achieved by feeding the GAN more images than is usually the case. According to IFLScience, the standard is to feed about 64 images per subject into the GAN. In this case, the research team fed about 2,000 images per subject into the system, leading to it being nicknamed BigGAN.

Their results showed that feeding the system with more images and using masses of raw computer power markedly increased the GAN’s precision and ability to create life-like renditions of the subjects it was trained to reproduce.

“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.

The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512-by-512-pixel images. Each experiment took between 24 and 48 hours to run.

That kind of computing power needs a lot of electricity. As artist and Innovator-In-Residence at the Library of Congress Jer Thorp put it tongue-in-cheek on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”

Thorp added that, by a back-of-the-envelope calculation, the computations that produced the images would require about 27,000 square feet of solar panels to power.
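Thorp’s arithmetic is easy to reproduce in spirit. Every figure below is an assumption chosen for illustration—per-TPU power draw, panel output, and capacity factor are guesses, not reported values—but the result lands in the same order of magnitude as his estimate:

```python
# Back-of-the-envelope sketch of the solar-panel estimate. Every figure
# here is an assumption for illustration, not a reported measurement.
tpus = 512
watts_per_tpu = 250          # assumed draw per TPU, including overhead
hours = 48                   # upper end of the reported 24-48 hour range

energy_kwh = tpus * watts_per_tpu * hours / 1000
print(f"energy: {energy_kwh:.0f} kWh")   # → 6144 kWh

panel_watts_per_sqft = 15    # assumed peak output of a rooftop panel
capacity_factor = 0.2        # the sun doesn't shine around the clock

sqft = (tpus * watts_per_tpu) / (panel_watts_per_sqft * capacity_factor)
print(f"panel area to sustain the load: {sqft:,.0f} sq ft")
```

Swap in different assumed figures and the area shifts proportionally, which is the nature of back-of-the-envelope estimates.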

BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’

However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.

Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.

The Future Use Cases
GANs were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.

One possible use is image correction—making pixelated images clearer. Not only could this help your future holiday snaps, it could also be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.

For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.

For now, it seems that while not all GANs require the computational and electrical power of BigGAN, there is still some way to go before these use cases become practical. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue as technology progresses.

Image Credit: Ondrej Prosicky/Shutterstock


#433748 Could Tech Make Government As We Know It ...

Governments are one of the last strongholds of an undigitized, linear sector of humanity, and they are falling behind fast. Apart from their struggle to keep up with private sector digitization, federal governments are in a crisis of trust.

At almost a 60-year low, only 18 percent of Americans reported that they could trust their government “always” or “most of the time” in a recent Pew survey. And the US is not alone. The Edelman Trust Barometer revealed last year that 41 percent of the world population distrust their nations’ governments.

In many cases, the private sector—particularly tech—is driving greater progress in regulation-targeted issues like climate change than state leaders. And as decentralized systems, digital disruption, and private sector leadership take the world by storm, traditional forms of government are beginning to fear irrelevance. However, the fight for exponential governance is not a lost battle.

Early visionaries like Estonia and the UAE are leading the way in digital governance, empowered by a host of converging technologies.

In this article, we will cover three key trends:

Digital governance divorced from land
AI-driven service delivery and regulation
Blockchain-enforced transparency

Let’s dive in.

Governments Going Digital
States and their governments have forever been tied to physical territories, and public services are often delivered through brick-and-mortar institutions. Yet public sector infrastructure and services will soon be hosted on servers, detached from land and physical form.

Enter e-Estonia. Perhaps the least expected on a list of innovative nations, this former Soviet Republic-turned digital society is ushering in an age of technological statecraft.

Hosting every digitizable government function in the cloud, Estonia could run its government almost entirely on a server. Since the 1990s, Estonia’s government has covered the nation with ultra-high-speed data connectivity, laying down tremendous amounts of fiber optic cable. By 2007, citizens could vote from their living rooms.

With digitized law, Estonia signs policies into effect using cryptographically secure digital signatures, and every stage of the legislative process is available to citizens online.

Citizens’ healthcare registry is run on the blockchain, allowing patients to own and access their own health data from anywhere in the world—X-rays, digital prescriptions, medical case notes—all the while tracking who has access.

Today, most banks have closed their offices, as 99 percent of banking transactions occur online (and 67 percent of citizens regularly use cryptographically secured e-IDs). By 2020, e-tax will be entirely automated through Estonia’s new e-Tax and Customs Board portal, allowing companies and the tax authority to exchange data automatically. Meanwhile, i-Voting, civil courts, land registries, banking, taxes, and countless other e-facilities let citizens access almost any government service online with an electronic ID and personal PIN.

But perhaps Estonia’s most revolutionary breakthrough is its recently introduced e-residency. Estonia issues electronic IDs to people anywhere in the world and now counts over 30,000 e-residents. While e-residency doesn’t grant territorial rights, over 5,000 e-residents have already established companies within Estonia’s jurisdiction.

After registering companies online, entrepreneurs pay automated taxes—calculated in minutes and transmitted to the Estonian government with unprecedented ease.

The implications of e-residency and digital governance are huge. As with any software, open-source code for digital governance could be copied perfectly at almost zero cost, lowering the barrier to entry for any group or movement seeking statehood.

We may soon see the rise of competitive governing ecosystems, each testing new infrastructure and public e-services to compete with mainstream governments for taxpaying citizens.

And what better to accelerate digital governance than AI?

Legal Compliance Through AI
Just last year, the UAE became the first nation to appoint a State Minister for AI (actually a friend of mine, H.E. Omar Al Olama), aiming to digitize government services and halve annual costs. Among multiple sector initiatives, the UAE hopes to deploy robotic cops by 2030.

Meanwhile, the U.K. now has a Select Committee on Artificial Intelligence, and just last month, world leaders convened at the World Government Summit to discuss guidelines for AI’s global regulation.

As AI infuses government services, emerging applications have caught my eye:

Smart Borders and Checkpoints

With biometrics and facial recognition, traditional checkpoints will soon be a thing of the past. Cubic Transportation Systems—the company behind London’s ticketless public transit—is currently developing facial recognition for automated transport barriers. Digital security company Gemalto predicts that biometric systems will soon cross-reference individual faces with passport databases at security checkpoints, and China has already begun testing this at scale: Alibaba affiliate Ant Financial’s “Smile to Pay” feature lets users authenticate digital payments with their faces, while nationally overseen facial recognition systems let passengers board planes, employees enter office spaces, and students access university halls. With biometric surveillance at national borders, supply chains and international travelers could be tracked automatically and granted or denied access based on cross-referenced databases.

Policing and Security

Leveraging predictive analytics, China is also working to integrate security footage into a national surveillance and data-sharing system. By merging citizen data in its “Police Cloud”—everything from criminal and medical records to transaction data, travel records and social media—it may soon be able to spot suspects and predict crime in advance. But China is not alone. During London’s Notting Hill Carnival this year, the Metropolitan Police used facial recognition cross-referenced with crime data to pre-identify and track likely offenders.

Smart Courts

AI may soon reach legal trials as well. UCL computer scientists have developed software capable of predicting courtroom outcomes from data patterns with unprecedented accuracy. To assess flight risk, the National Bureau of Economic Research now uses an algorithm leveraging data from hundreds of thousands of NYC cases to recommend whether defendants should be granted bail. But while AI allows for streamlined governance, the public sector’s power to misuse our data is a valid concern, and issues with bias stemming from historical data remain. As ever more information is generated about our every move, how do we keep governments accountable?

Enter the blockchain.

Transparent Governance and Accountability
Without doubt, alongside AI, government’s greatest disruptor is the newly-minted blockchain. Relying on a decentralized web of nodes, blockchain can securely verify transactions, signatures, and other information. This makes it essentially impossible for hackers, companies, officials, or even governments to falsify information on the blockchain.
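The tamper-evidence property is simple enough to demonstrate with a minimal hash chain. This sketch is not a real blockchain—there is no network of nodes, no consensus, and no signatures—but it shows why rewriting one record breaks everything after it:

```python
import hashlib
import json

# Minimal hash-chain sketch (illustrative, not a production blockchain):
# each block stores the hash of its predecessor, so altering any past
# record invalidates every hash that follows it.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def verify(chain):
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
for record in ["permit issued", "tax filed", "contract signed"]:
    append_block(chain, record)

print(verify(chain))                    # → True: the chain is intact
chain[0]["record"] = "permit revoked"   # an official rewrites history...
print(verify(chain))                    # → False: later hashes disagree
```

Real blockchains layer distributed consensus and digital signatures on top of this structure, which is what removes the need to trust any single record-keeper.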

As you’d expect, many government elites are therefore slow to adopt the technology, fearing enforced accountability. But blockchain’s benefits to government may be too great to ignore.

First, blockchain will be a boon for regulatory compliance.

As transactions on a blockchain are irreversible and transparent, uploaded sensor data can’t be corrupted. This means middlemen have no way of falsifying information to shirk regulation, and governments eliminate the need to enforce charges after the fact.

Apply this to carbon pricing, for instance, and emission sensors could fluidly log carbon credits onto a carbon credit blockchain, such as that developed by Ecosphere+. As carbon values are added to the price of everyday products or to corporations’ automated taxes, compliance and transparency would soon be digitally embedded.

Blockchain could also bolster government efforts in cybersecurity. As supercities and nation-states build IoT-connected traffic systems, surveillance networks, and sensor-tracked supply chain management, blockchain is critical in protecting connected devices from cyberattack.

But blockchain will inevitably hold governments accountable as well. By automating and tracking high-risk transactions, blockchain may soon eliminate fraud in cash transfers, public contracts and aid funds. Already, the UN World Food Program has piloted blockchain to manage cash-based transfers and aid flows to Syrian refugees in Jordan.

Blockchain-enabled “smart contracts” could automate exchange of real assets according to publicly visible, pre-programmed conditions, disrupting the $9.5 trillion market of public-sector contracts and public investment projects.

Eliminating leakages and increasing transparency, a distributed ledger has the potential to save trillions.

Future Implications
It is truly difficult to experiment with new forms of government. It’s not like there are new countries waiting to be discovered where we can begin fresh. And with entrenched bureaucracies and dominant industrial players, changing an existing nation’s form of government is extremely difficult and usually only happens during times of crisis or outright revolution.

Perhaps we will develop new forms of government in the virtual world (a topic for a future blog), or perhaps seasteading will allow us to physically build new island nations. And ultimately, as we move off Earth to Mars and space colonies, we will have yet another chance to start fresh.

But, without question, 90 percent or more of today’s political processes hark back to a day before technology, and it shows in terms of speed and efficiency.

Ultimately, there will be a shift to digital governments enabled with blockchain’s transparency, and we will redefine the relationship between citizens and the public sector.

One day I hope i-Voting will allow anyone, anywhere, to participate in policy, and cloud-based governments will start to compete on e-services. As four billion new minds come online over the next several years, people may soon have the opportunity to choose their preferred government and citizenship digitally, independent of birthplace.

In 50 years, what will our governments look like? Will we have an interplanetary order, or a multitude of publicly-run ecosystems? Will cyber-ocracies rule our physical worlds with machine intelligence, or will blockchains allow for hive mind-like democracy?

The possibilities are endless, and only we can shape them.

Join Me
Abundance-Digital Online Community: I’ve created a digital community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: ArtisticPhoto / Shutterstock.com


#433725 This Week’s Awesome Stories From ...

ROBOTICS
The Demise of Rethink Robotics Shows How Hard It Is to Make Machines Truly Smart
Will Knight | MIT Technology Review
“There’s growing interest in using recent advances in AI to make industrial robots a lot smarter and more useful. …But look carefully and you’ll see that these technologies are at a very early stage, and that deploying them commercially could prove extremely challenging. The demise of Rethink doesn’t mean industrial robotics isn’t flourishing, or that AI-driven advances won’t come about. But it shows just how hard doing real innovation in robotics can be.”

SCIENCE
The Human Cell Atlas Is Biologists’ Latest Grand Project
Megan Molteni | Wired
“Dubbed the Human Cell Atlas, the project intends to catalog all of the estimated 37 trillion cells that make up a human body. …By decoding the genes active in single cells, pegging different cell types to a specific address in the body, and tracing the molecular circuits between them, participating researchers plan to create a more comprehensive map of human biology than has ever existed before.”

TRANSPORTATION
US Will Rewrite Safety Rules to Permit Fully Driverless Cars on Public Roads
Andrew J. Hawkins | The Verge
“Under current US safety rules, a motor vehicle must have traditional controls, like a steering wheel, mirrors, and foot pedals, before it is allowed to operate on public roads. But that could all change under a new plan released on Thursday by the Department of Transportation that’s intended to open the floodgates for fully driverless cars.”

ARTIFICIAL INTELLIGENCE
When an AI Goes Full Jack Kerouac
Brian Merchant | The Atlantic
“By the end of the four-day trip, receipts emblazoned with artificially intelligent prose would cover the floor of the car. …it is a hallucinatory, oddly illuminating account of a bot’s life on the interstate; the Electric Kool-Aid Acid Test meets Google Street View, narrated by Siri.”

FUTURE OF FOOD
New Autonomous Farm Wants to Produce Food Without Human Workers
Erin Winick | MIT Technology Review
“As the firm’s cofounder Brandon Alexander puts it: ‘We are a farm and will always be a farm.’ But it’s no ordinary farm. For starters, the company’s 15 human employees share their work space with robots who quietly go about the business of tending rows and rows of leafy greens.”

Image Credit: Kotenko Olaksandr / Shutterstock.com
