Tag Archives: computer

#433748 Could Tech Make Government As We Know It ...

Government is one of the last large, undigitized, linear sectors of human activity, and it is falling behind fast. Beyond struggling to keep up with private sector digitization, national governments are in a crisis of trust.

Public trust is near a 60-year low: in a recent Pew survey, only 18 percent of Americans said they could trust their government “always” or “most of the time.” And the US is not alone. Last year’s Edelman Trust Barometer found that 41 percent of people worldwide distrust their national governments.

In many cases, the private sector—particularly tech—is driving more progress than state leaders on issues usually addressed through regulation, such as climate change. And as decentralized systems, digital disruption, and private sector leadership take the world by storm, traditional forms of government are beginning to fear irrelevance. Yet the fight for exponential governance is not a lost battle.

Early visionaries like Estonia and the UAE are leading the way in digital governance, empowered by a host of converging technologies.

In this article, we will cover three key trends:

Digital governance divorced from land
AI-driven service delivery and regulation
Blockchain-enforced transparency

Let’s dive in.

Governments Going Digital
States and their governments have forever been tied to physical territories, and public services are often delivered through brick-and-mortar institutions. Yet public sector infrastructure and services will soon be hosted on servers, detached from land and physical form.

Enter e-Estonia. Perhaps the least expected on a list of innovative nations, this former Soviet Republic-turned digital society is ushering in an age of technological statecraft.

Hosting every digitizable government function in the cloud, Estonia could run its government almost entirely from a server. Since the 1990s, Estonia’s government has blanketed the nation with ultra-high-speed data connectivity, laying down vast amounts of fiber optic cable. By 2007, citizens could vote from their living rooms.

With digitized law, Estonia signs policies into effect using cryptographically secure digital signatures, and every stage of the legislative process is available to citizens online.
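For readers curious what “cryptographically secure digital signatures” look like in practice, here is a minimal, illustrative sketch in Python using the third-party cryptography library and Ed25519 keys. It is not Estonia’s actual signing infrastructure; it only shows the general mechanism of signing a bill’s text and verifying that it hasn’t been altered.

```python
# Minimal illustration of signing and verifying a document.
# This is NOT Estonia's actual system; it only shows the general mechanism.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A ministry would hold the private key; anyone may hold the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

bill_text = b"Act XYZ: hypothetical legislative text ..."
signature = private_key.sign(bill_text)          # sign the exact bytes of the bill

# Any citizen can verify the published bill against the published signature.
try:
    public_key.verify(signature, bill_text)      # raises if text or signature changed
    print("Signature valid: the bill is unaltered.")
except InvalidSignature:
    print("Signature invalid: the bill was modified.")
```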

Citizens’ healthcare registry is run on the blockchain, allowing patients to own and access their own health data from anywhere in the world—X-rays, digital prescriptions, medical case notes—all the while tracking who has access.

Today, most banks have closed their branch offices, as 99 percent of banking transactions occur online (and 67 percent of citizens regularly use cryptographically secured e-IDs). By 2020, e-tax will be entirely automated through Estonia’s new e-Tax and Customs Board portal, which lets companies and the tax authority exchange data automatically. Meanwhile, i-Voting, civil courts, land registries, banking, taxes, and countless other e-facilities allow citizens to access almost any government service online with an electronic ID and a personal PIN.

But perhaps Estonia’s most revolutionary breakthrough is its recently introduced e-residency program. Estonia now issues electronic IDs to applicants anywhere in the world and counts over 30,000 e-residents. While e-residency doesn’t grant territorial rights, over 5,000 e-residents have already established companies within Estonia’s jurisdiction.

After registering companies online, entrepreneurs pay automated taxes—calculated in minutes and transmitted to the Estonian government with unprecedented ease.

The implications of e-residency and digital governance are huge. As with any software, open-source code for digital governance could be copied perfectly at almost zero cost, lowering the barrier to entry for any group or movement seeking statehood.

We may soon see the rise of competitive governing ecosystems, each testing new infrastructure and public e-services to compete with mainstream governments for taxpaying citizens.

And what better to accelerate digital governance than AI?

Legal Compliance Through AI
Just last year, the UAE became the first nation to appoint a State Minister for AI (actually a friend of mine, H.E. Omar Al Olama), aiming to digitize government services and halve annual costs. Among multiple sector initiatives, the UAE hopes to deploy robotic cops by 2030.

Meanwhile, the U.K. now has a Select Committee on Artificial Intelligence, and just last month, world leaders convened at the World Government Summit to discuss guidelines for AI’s global regulation.

As AI infuses government services, emerging applications have caught my eye:

Smart Borders and Checkpoints

With biometrics and facial recognition, traditional checkpoints will soon be a thing of the past. Cubic Transportation Systems—the company behind London’s ticketless public transit—is developing facial recognition for automated transport barriers. Digital security company Gemalto predicts that biometric systems will soon cross-reference travelers’ faces with passport databases at security checkpoints, and China has already begun testing this at scale: the Alibaba affiliate Ant Financial’s “Smile to Pay” feature lets users authenticate digital payments with their faces, while nationally overseen facial recognition systems let passengers board planes, employees enter office spaces, and students access university halls. With biometric surveillance at national borders, supply chains and international travelers could be tracked automatically and granted or denied access according to cross-referenced databases.
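How such cross-referencing might work at its simplest: face images are reduced to numerical embeddings by a neural network, and a traveler’s embedding is compared against the embeddings stored with passport records. The sketch below, with made-up embeddings and a made-up threshold, illustrates only the matching step—not Gemalto’s or any government’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical passport database: passport number -> stored face embedding.
passport_db = {
    "P1234567": np.random.rand(128),
    "P7654321": np.random.rand(128),
}

def match_traveler(live_embedding: np.ndarray, threshold: float = 0.8):
    """Return the best-matching passport record, or None if below threshold."""
    best_id, best_score = None, -1.0
    for passport_id, stored in passport_db.items():
        score = cosine_similarity(live_embedding, stored)
        if score > best_score:
            best_id, best_score = passport_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# In a real deployment the embedding would come from a face-recognition model.
traveler = np.random.rand(128)
print(match_traveler(traveler))
```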

Policing and Security

Leveraging predictive analytics, China is also working to integrate security footage into a national surveillance and data-sharing system. By merging citizen data in its “Police Cloud”—including everything from criminal and medical records, transaction data, travel records and social media—it may soon be able to spot suspects and predict crime in advance. But China is not alone. During London’s Notting Hill Carnival this year, the Metropolitan Police used facial recognition cross-referenced with crime data to pre-identify and track likely offenders.

Smart Courts

AI may soon reach legal trials as well. UCL computer scientists have developed software capable of predicting courtroom outcomes from data patterns with unprecedented accuracy. To assess flight risk, researchers at the National Bureau of Economic Research have built an algorithm that leverages data from hundreds of thousands of New York City cases to recommend whether defendants should be granted bail. But while AI allows for streamlined governance, the public sector’s power to misuse our data is a valid concern, and bias inherited from historical data remains an open problem. As vast amounts of new information are generated about our every move, how do we keep governments accountable?
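For a sense of how such risk-scoring tools work under the hood, here is a toy sketch of training a classifier on entirely synthetic case features to predict failure to appear. It is not the NBER team’s model or the UCL system; it simply shows why historical data—and any bias baked into it—shapes the recommendations a judge would see.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic case features: [age (scaled), prior_arrests, days_since_last_offense]
X = rng.normal(size=(1000, 3))
# Synthetic label: 1 = failed to appear. Real systems learn this from history,
# which is exactly where historical bias enters the model.
y = (X[:, 1] * 0.9 - X[:, 2] * 0.4 + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

new_case = np.array([[0.2, 1.5, -0.3]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated flight risk: {risk:.2f}")  # a judge would see a score like this
```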

Enter the blockchain.

Transparent Governance and Accountability
Without doubt, alongside AI, government’s greatest disruptor is the newly-minted blockchain. Relying on a decentralized web of nodes, blockchain can securely verify transactions, signatures, and other information. This makes it essentially impossible for hackers, companies, officials, or even governments to falsify information on the blockchain.

As you’d expect, many government elites are therefore slow to adopt the technology, fearing enforced accountability. But blockchain’s benefits to government may be too great to ignore.

First, blockchain will be a boon for regulatory compliance.

As transactions on a blockchain are irreversible and transparent, uploaded sensor data can’t be corrupted. This means middlemen have no way of falsifying information to shirk regulation, and governments no longer need to chase down violations after the fact.
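A stripped-down illustration of why chained records are tamper-evident: each entry’s hash covers both the sensor reading and the previous entry’s hash, so altering any past reading invalidates every hash that follows. This sketch uses only Python’s standard library and is a simplification of a real blockchain (no consensus, no distributed nodes).

```python
import hashlib
import json

def hash_block(contents: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_reading(chain: list, reading: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"reading": reading, "prev_hash": prev_hash}
    block["hash"] = hash_block({"reading": reading, "prev_hash": prev_hash})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash; any edited reading breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hash_block({"reading": block["reading"], "prev_hash": prev_hash})
        if block["hash"] != expected or block["prev_hash"] != prev_hash:
            return False
        prev_hash = block["hash"]
    return True

chain = []
append_reading(chain, {"sensor": "stack-7", "co2_tonnes": 12.4})
append_reading(chain, {"sensor": "stack-7", "co2_tonnes": 11.9})
print(verify(chain))                     # True
chain[0]["reading"]["co2_tonnes"] = 0.0  # attempt to falsify an emission record
print(verify(chain))                     # False
```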

Apply this to carbon pricing, for instance, and emission sensors could fluidly log carbon credits onto a carbon credit blockchain, such as that developed by Ecosphere+. As carbon values are added to the price of everyday products or to corporations’ automated taxes, compliance and transparency would soon be digitally embedded.

Blockchain could also bolster government efforts in cybersecurity. As supercities and nation-states build IoT-connected traffic systems, surveillance networks, and sensor-tracked supply chain management, blockchain is critical in protecting connected devices from cyberattack.

But blockchain will inevitably hold governments accountable as well. By automating and tracking high-risk transactions, blockchain may soon eliminate fraud in cash transfers, public contracts and aid funds. Already, the UN World Food Program has piloted blockchain to manage cash-based transfers and aid flows to Syrian refugees in Jordan.

Blockchain-enabled “smart contracts” could automate exchange of real assets according to publicly visible, pre-programmed conditions, disrupting the $9.5 trillion market of public-sector contracts and public investment projects.
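Conceptually, a smart contract is just code whose conditions and state are visible to everyone and executed automatically. The Python sketch below mimics a public-works escrow that releases payment only when pre-agreed milestones are confirmed. Real smart contracts run on a blockchain platform (for example, as Solidity on Ethereum), not as ordinary Python, so treat this purely as an analogy for the logic involved.

```python
class PublicWorksEscrow:
    """Toy analogy of a smart contract: funds release only when milestones are met."""

    def __init__(self, contractor: str, budget: float, milestones: list):
        self.contractor = contractor
        self.budget = budget
        self.milestones = {name: False for name in milestones}
        self.paid = 0.0

    def confirm_milestone(self, name: str) -> None:
        # On a real chain, confirmation might come from an oracle or an inspector's signature.
        self.milestones[name] = True

    def release_payment(self) -> float:
        done = sum(self.milestones.values())
        owed = self.budget * done / len(self.milestones) - self.paid
        self.paid += owed
        return owed  # publicly auditable: anyone can see how much was released and why

escrow = PublicWorksEscrow("BridgeCo", budget=1_000_000.0,
                           milestones=["foundations", "span", "inspection"])
escrow.confirm_milestone("foundations")
print(escrow.release_payment())  # one third of the budget, released only for confirmed work
```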

Eliminating leakages and increasing transparency, a distributed ledger has the potential to save trillions.

Future Implications
It is truly difficult to experiment with new forms of government. It’s not like there are new countries waiting to be discovered where we can begin fresh. And with entrenched bureaucracies and dominant industrial players, changing an existing nation’s form of government is extremely difficult and usually only happens during times of crisis or outright revolution.

Perhaps we will develop and explore new forms of government in the virtual world (to be explored during a future blog), or perhaps Sea Steading will allow us to physically build new island nations. And ultimately, as we move off the earth to Mars and space colonies, we will have yet another chance to start fresh.

But, without question, 90 percent or more of today’s political processes hark back to a day before technology, and it shows in terms of speed and efficiency.

Ultimately, there will be a shift to digital governments enabled with blockchain’s transparency, and we will redefine the relationship between citizens and the public sector.

One day I hope i-Voting will allow anyone anywhere to participate in policy, and cloud-based governments will begin to compete on e-services. As four billion new minds come online over the next several years, people may soon have the opportunity to choose their preferred government and citizenship digitally, independent of birthplace.

In 50 years, what will our governments look like? Will we have an interplanetary order, or a multitude of publicly-run ecosystems? Will cyber-ocracies rule our physical worlds with machine intelligence, or will blockchains allow for hive mind-like democracy?

The possibilities are endless, and only we can shape them.

Join Me
Abundance-Digital Online Community: I’ve created a digital community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: ArtisticPhoto / Shutterstock.com


#433728 AI Is Kicking Space Exploration into ...

Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.

“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.

Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.

The Earth Observing-1 (EO-1) satellite is a good example. Since its launch in the early 2000s, its onboard AI systems have helped optimize the analysis of and response to natural events like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew was even aware that the event had taken place.

Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) assisted with the classification of objects discovered during the second Palomar Sky Survey, categorizing thousands of faint, low-resolution objects that humans could not have classified reliably. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses, which play a crucial role in research into dark matter.

AI’s ability to trawl through vast amounts of data and find correlations will become increasingly important for getting the most out of the available data. ESA’s ENVISAT produces around 400 terabytes of new data every year—but it will be dwarfed by the Square Kilometre Array, which is expected to produce in a single day roughly as much data as currently exists on the internet.

AI Readying For Mars
AI is also being used for trajectory and payload optimization. Both are important preliminary steps to NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.

An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.

Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.

“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.

AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.
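The handover Jørgensen describes amounts to taking a target point seen in a camera’s coordinate frame and expressing it in the arm’s frame so the instrument can be positioned 14 mm from the surface. Below is a minimal sketch of that frame change with an assumed, hard-coded transform; on the real rover the transform would be re-estimated continuously from several cameras as the geometry shifts with temperature.

```python
import numpy as np

# Assumed rigid transform from camera frame to arm-base frame (rotation + translation).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.45, -0.10, 0.30])  # meters, hypothetical camera position on the rover

def camera_to_arm(point_cam: np.ndarray) -> np.ndarray:
    """Express a 3D point seen by the camera in the arm-base coordinate frame."""
    return R @ point_cam + t

target_cam = np.array([0.12, 0.05, 0.80])   # rock feature located by the cameras
target_arm = camera_to_arm(target_cam)

standoff = 0.014                              # PIXL needs to sit 14 mm from the surface
approach_dir = np.array([0.0, 0.0, -1.0])     # assumed surface normal in the arm frame
instrument_pose = target_arm + standoff * approach_dir
print(instrument_pose)
```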

“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.

First Mars, Then Moons
Mars is likely far from the final destination for AIs in space. Jupiter’s moons have long fascinated scientists—especially Europa, which may harbor a subsurface ocean buried beneath an ice crust roughly 10 km thick. It is one of the most likely candidates for finding life elsewhere in the solar system.

While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope into an orbit of around 1.5 million kilometers from Earth in 2020. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope’s 705-kilo mirror.

The distances between Earth and Europa, or Earth and the James Webb telescope, mean a delay in communications. That, in turn, makes it imperative for the craft to be able to make their own decisions. Experience from the Mars rover missions shows that communication between a rover and Earth can take around 20 minutes because of the vast distance; a Europa mission would face far longer communication delays.
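The delay itself is simple arithmetic: one-way signal time is distance divided by the speed of light. The snippet below uses rough, assumed distances to show why a Mars rover waits minutes for instructions while a Europa mission would wait substantially longer—hence the need for onboard autonomy.

```python
C = 299_792.458  # speed of light, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    return distance_km / C / 60

# Rough, assumed distances (they vary enormously with orbital positions).
print(f"JWST (~1.5 million km):      {one_way_delay_minutes(1.5e6):5.1f} min")
print(f"Mars (~225 million km avg):  {one_way_delay_minutes(2.25e8):5.1f} min")
print(f"Europa (~630 million km):    {one_way_delay_minutes(6.3e8):5.1f} min")
```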

Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.

Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation damage mean that your computer at home is likely more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of memory sound a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X—but that’s actually the ‘brain’ that will be onboard the next rover.

Private Companies Taking Off
Private companies are helping to push those limitations. CB Insights charts 57 startups in the space-space, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.

David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.

“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.

Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.

Autonomous craft are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carries out the actual planting.
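As a rough illustration of the “algorithm decides the optimal locations” step, here is a toy greedy planner: given a suitability score for each cell of a surveyed grid, it picks the highest-scoring cells while keeping a minimum spacing between trees. BioCarbon Engineering’s real planning is certainly more sophisticated; this only conveys the idea.

```python
import numpy as np

def plan_planting(suitability: np.ndarray, n_trees: int, min_spacing: int) -> list:
    """Greedily pick the highest-scoring grid cells, at least min_spacing apart."""
    scores = suitability.astype(float).copy()
    chosen = []
    for _ in range(n_trees):
        r, c = np.unravel_index(np.argmax(scores), scores.shape)
        if scores[r, c] <= 0:
            break                      # no suitable ground left
        chosen.append((int(r), int(c)))
        # Block out a neighborhood around the chosen cell to enforce spacing.
        r0, r1 = max(0, r - min_spacing), r + min_spacing + 1
        c0, c1 = max(0, c - min_spacing), c + min_spacing + 1
        scores[r0:r1, c0:c1] = 0
    return chosen

# Hypothetical suitability map produced by the survey drones (soil, slope, moisture...).
rng = np.random.default_rng(42)
suitability = rng.random((50, 50))
sites = plan_planting(suitability, n_trees=100, min_spacing=2)
print(len(sites), "planting sites, e.g.", sites[:3])
```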

As is often the case with exponential technologies, there is great potential for synergies and convergence—for example, AI and robotics, or quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that, using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions about what the rovers should explore next.

One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.

“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”

Image Credit: Taily / Shutterstock.com


#433668 A Decade of Commercial Space ...

In many industries, a decade is barely enough time to cause dramatic change unless something disruptive comes along—a new technology, business model, or service design. The space industry has recently been enjoying all three.

But 10 years ago, none of those innovations were guaranteed. In fact, on Sept. 28, 2008, an entire company watched and hoped as their flagship product attempted a final launch after three failures. With cash running low, this was the last shot. Over 21,000 kilograms of kerosene and liquid oxygen ignited and powered two booster stages off the launchpad.

The first official picture of the Soviet satellite Sputnik I, issued in Moscow on Oct. 9, 1957. The satellite measured 1 foot, 11 inches and weighed 184 pounds. The Space Age began when the Soviet Union launched Sputnik, the first man-made satellite, into orbit on Oct. 4, 1957. Image Credit: AP Photo/TASS
When that Falcon 1 rocket successfully reached orbit and the company secured a subsequent contract with NASA, SpaceX had survived its ‘startup dip’. That milestone, the first privately developed liquid-fueled rocket to reach orbit, ignited a new space industry that is changing our world, on this planet and beyond. What has happened in the intervening years, and what does it mean going forward?

While scientists are busy developing new technologies that address the countless technical problems of space, there is another segment of researchers, including myself, studying the business angle and the operations issues facing this new industry. In a recent paper, my colleague Christopher Tang and I investigate the questions firms need to answer in order to create a sustainable space industry and make it possible for humans to establish extraterrestrial bases, mine asteroids and extend space travel—all while governments play an increasingly smaller role in funding space enterprises. We believe these business solutions may hold the less-glamorous key to unlocking the galaxy.

The New Global Space Industry
When the Soviet Union launched their Sputnik program, putting a satellite in orbit in 1957, they kicked off a race to space fueled by international competition and Cold War fears. The Soviet Union and the United States played the primary roles, stringing together a series of “firsts” for the record books. The first chapter of the space race culminated with Neil Armstrong and Buzz Aldrin’s historic Apollo 11 moon landing which required massive public investment, on the order of US$25.4 billion, almost $200 billion in today’s dollars.

Competition characterized this early portion of space history. Eventually, that evolved into collaboration, with the International Space Station being a stellar example, as governments worked toward shared goals. Now, we’ve entered a new phase—openness—with private, commercial companies leading the way.

The industry for spacecraft and satellite launches is becoming more commercialized, due, in part, to shrinking government budgets. According to a report from the investment firm Space Angels, a record 120 venture capital firms invested over $3.9 billion in private space enterprises last year. The space industry is also becoming global, no longer dominated by the Cold War rivals, the United States and USSR.

In 2018 to date, there have been 72 orbital launches, an average of two per week, from launch pads in China, Russia, India, Japan, French Guiana, New Zealand, and the US.

The uptick in orbital launches of rockets, as well as in spacecraft launches—which include satellites and probes sent up from Earth or deployed from space—has coincided with this new openness over the past decade.

More governments, firms and even amateurs engage in various spacecraft launches than ever before. With more entities involved, innovation has flourished. As Roberson notes in Digital Trends, “Private, commercial spaceflight. Even lunar exploration, mining, and colonization—it’s suddenly all on the table, making the race for space today more vital than it has felt in years.”

Worldwide launches into space. Orbital launches include manned and unmanned spaceships launched into orbital flight from Earth. Spacecraft launches include all vehicles such as spaceships, satellites and probes launched from Earth or space. Wooten, J. and C. Tang (2018) Operations in space, Decision Sciences; Space Launch Report (Kyle 2017); Spacecraft Encyclopedia (Lafleur 2017), CC BY-ND

One can see this vitality plainly in the news. On Sept. 21, Japan announced that two of its unmanned rovers, dubbed Minerva-II-1, had landed on a small, distant asteroid. For perspective, the scale of this landing is similar to hitting a 6-centimeter target from 20,000 kilometers away. And earlier this year, people around the world watched in awe as SpaceX’s Falcon Heavy rocket successfully launched and, more impressively, returned its two boosters to a landing pad in a synchronized ballet of epic proportions.

Challenges and Opportunities
Amidst the growth of capital, firms, and knowledge, both researchers and practitioners must figure out how entities should manage their daily operations, organize their supply chain, and develop sustainable operations in space. This is complicated by the hurdles space poses: distance, gravity, inhospitable environments, and information scarcity.

One of the greatest challenges involves actually getting the things people want in space, into space. Manufacturing everything on Earth and then launching it with rockets is expensive and restrictive. A company called Made In Space is taking a different approach by maintaining an additive manufacturing facility on the International Space Station and 3D printing right in space. Tools, spare parts, and medical devices for the crew can all be created on demand. The benefits include more flexibility and better inventory management on the space station. In addition, certain products can be produced better in space than on Earth, such as pure optical fiber.

How should companies determine the value of manufacturing in space? Where should capacity be built and how should it be scaled up? The figure below breaks up the origin and destination of goods between Earth and space and arranges products into quadrants. Humans have mastered the lower left quadrant, made on Earth—for use on Earth. Moving clockwise from there, each quadrant introduces new challenges, for which we have less and less expertise.

A framework of Earth-space operations. Wooten, J. and C. Tang (2018) Operations in Space, Decision Sciences, CC BY-ND
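A minimal way to encode the quadrant framework from the figure above is as a lookup on where a product is made and where it is used. The labels and examples below are an informal reading of the framework drawn from this article’s own examples, not taken verbatim from the paper.

```python
def earth_space_quadrant(made_in: str, used_in: str) -> str:
    """Classify a product by production origin and destination ('earth' or 'space')."""
    quadrants = {
        ("earth", "earth"): "Made on Earth, used on Earth (the quadrant we have mastered)",
        ("earth", "space"): "Made on Earth, launched for use in space (e.g., satellites)",
        ("space", "space"): "Made in space, used in space (e.g., ISS 3D-printed tools)",
        ("space", "earth"): "Made in space, returned to Earth (e.g., pure optical fiber)",
    }
    return quadrants[(made_in.lower(), used_in.lower())]

print(earth_space_quadrant("space", "space"))
```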
I first became interested in this particular problem as I listened to a panel of robotics experts discuss building a colony on Mars (in our third quadrant). You can’t build the structures on Earth and easily send them to Mars, so you must manufacture there. But putting human builders in that extreme environment is equally problematic. Essentially, an entirely new mode of production using robots and automation in an advance envoy may be required.

Resources in Space
You might wonder where one gets the materials for manufacturing in space, but there is actually an abundance of resources: Metals for manufacturing can be found within asteroids, water for rocket fuel is frozen as ice on planets and moons, and rare elements like helium-3 for energy are embedded in the crust of the moon. If we brought that particular isotope back to Earth, we could eliminate our dependence on fossil fuels.

As demonstrated by the recent Minerva-II-1 asteroid landing, people are acquiring the technical know-how to locate and navigate to these materials. But extraction and transport are open questions.

How do these cases change the economics in the space industry? Already, companies like Planetary Resources, Moon Express, Deep Space Industries, and Asterank are organizing to address these opportunities. And scholars are beginning to outline how to navigate questions of property rights, exploitation and partnerships.

Threats From Space Junk
A computer-generated image of objects in Earth orbit that are currently being tracked. Approximately 95 percent of the objects in this illustration are orbital debris – not functional satellites. The dots represent the current location of each item. The orbital debris dots are scaled according to the image size of the graphic to optimize their visibility and are not scaled to Earth. NASA
The movie “Gravity” opens with a Russian satellite exploding, which sets off a chain reaction of destruction as debris hits a space shuttle, the Hubble telescope, and part of the International Space Station. The sequence, while not perfectly plausible as written, depicts a very real phenomenon. In fact, in 2013, a Russian satellite disintegrated when it was hit by fragments from a Chinese satellite that had been destroyed in 2007. The danger posed by the 500,000-plus pieces of space debris—a cascading scenario known as the Kessler effect—has already gotten some attention in public policy circles. How should one prevent, reduce, or mitigate this risk? Quantifying the environmental impact of the space industry and addressing sustainable operations is still to come.

NASA scientist Mark Matney is seen through a fist-sized hole in a 3-inch thick piece of aluminum at Johnson Space Center’s orbital debris program lab. The hole was created by a thumb-size piece of material hitting the metal at very high speed simulating possible damage from space junk. AP Photo/Pat Sullivan
What’s Next?
It’s true that space is becoming just another place to do business. There are companies that will handle the logistics of getting your destined-for-space module on board a rocket; there are companies that will fly those rockets to the International Space Station; and there are others that can make a replacement part once there.

What comes next? In one sense, it’s anybody’s guess, but all signs point to this new industry forging ahead. A new breakthrough could alter the speed, but the course seems set: exploring farther away from home, whether that’s the moon, asteroids, or Mars. It’s hard to believe that 10 years ago, SpaceX launches were yet to be successful. Today, a vibrant private sector consists of scores of companies working on everything from commercial spacecraft and rocket propulsion to space mining and food production. The next step is working to solidify the business practices and mature the industry.

Standing in a large hall at the University of Pittsburgh as part of the White House Frontiers Conference, I see the future. Wrapped around my head are state-of-the-art virtual reality goggles. I’m looking at the surface of Mars. Every detail is immediate and crisp. This is not just a video game or an aimless exercise. The scientific community has poured resources into such efforts because exploration is preceded by information. And who knows, maybe 10 years from now, someone will be standing on the actual surface of Mars.

Image Credit: SpaceX

Joel Wooten, Assistant Professor of Management Science, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.


#433620 Instilling the Best of Human Values in ...

Now that the era of artificial intelligence is unquestionably upon us, it behooves us to think and work harder to ensure that the AIs we create embody positive human values.

Science fiction is full of AIs that manifest the dark side of humanity, or are indifferent to humans altogether. Such possibilities cannot be ruled out, but nor is there any logical or empirical reason to consider them highly likely. I am among a large group of AI experts who see a strong potential for profoundly positive outcomes in the AI revolution currently underway.

We are facing a future with great uncertainty and tremendous promise, and the best we can do is to confront it with a combination of heart and mind, of common sense and rigorous science. In the realm of AI, what this means is, we need to do our best to guide the AI minds we are creating to embody the values we cherish: love, compassion, creativity, and respect.

The quest for beneficial AI has many dimensions, including its potential to reduce material scarcity and to help unlock the human capacity for love and compassion.

Reducing Scarcity
A large percentage of difficult issues in human society, many of which spill over into the AI domain, would be palliated significantly if material scarcity became less of a problem. Fortunately, AI has great potential to help here. AI is already increasing efficiency in nearly every industry.

In the next few decades, as nanotech and 3D printing continue to advance, AI-driven design will become a larger factor in the economy. Radical new tools like artificial enzymes built using Christian Schafmeister’s spiroligomer molecules, and designed using quantum physics-savvy AIs, will enable the creation of new materials and medicines.

For amazing advances like the intersection of AI and nanotech to lead toward broadly positive outcomes, however, the economic and political aspects of the AI industry may have to shift from the current status quo.

Currently, most AI development occurs under the aegis of military organizations or large corporations oriented heavily toward advertising and marketing. Put crudely, an awful lot of AI today is about “spying, brainwashing, or killing.” This is not really the ideal situation if we want our first true artificial general intelligences to be open-minded, warm-hearted, and beneficial.

Also, as the bulk of AI development now occurs in large for-profit organizations bound by law to pursue the maximization of shareholder value, we face a situation where AI tends to exacerbate global wealth inequality and class divisions. This has the potential to lead to various civilization-scale failure modes involving the intersection of geopolitics, AI, cyberterrorism, and so forth. Part of my motivation for founding the decentralized AI project SingularityNET was to create an alternative mode of dissemination and utilization of both narrow AI and AGI—one that operates in a self-organizing way, outside of the direct grip of conventional corporate and governmental structures.

In the end, though, I worry that radical material abundance and novel political and economic structures may fail to create a positive future, unless they are coupled with advances in consciousness and compassion. AGIs have the potential to be massively more ethical and compassionate than humans. But still, the odds of getting deeply beneficial AGIs seem higher if the humans creating them are fuller of compassion and positive consciousness—and can effectively pass these values on.

Transmitting Human Values
Brain-computer interfacing is another critical aspect of the quest for creating more positive AIs and more positive humans. As Elon Musk has put it, “If you can’t beat ’em, join ’em.” Joining is more fun than beating anyway. What better way to infuse AIs with human values than to connect them directly to human brains, and let them learn directly from the source (while providing humans with valuable enhancements)?

Millions of people recently heard Elon Musk discuss AI and BCI on the Joe Rogan podcast. Musk’s embrace of brain-computer interfacing is laudable, but he tends to dodge some of the tough issues—for instance, he does not emphasize the trade-off cyborgs will face between retaining human-ness and maximizing intelligence, joy, and creativity. To make this trade-off effectively, the AI portion of the cyborg will need to have a deep sense of human values.

Musk calls humanity the “biological boot loader” for AGI, but to me this colorful metaphor misses a key point—that we can seed the AGI we create with our values as an initial condition. This is one reason why it’s important that the first really powerful AGIs are created by decentralized networks, and not conventional corporate or military organizations. The decentralized software/hardware ecosystem, for all its quirks and flaws, has more potential to lead to human-computer cybernetic collective minds that are reasonable and benevolent.

Algorithmic Love
BCI is still in its infancy, but a more immediate way of connecting people with AIs to infuse both with greater love and compassion is to leverage humanoid robotics technology. Toward this end, I conceived a project called Loving AI, focused on using highly expressive humanoid robots like the Hanson robot Sophia to lead people through meditations and other exercises oriented toward unlocking the human potential for love and compassion. My goals here were to explore the potential of AI and robots to have a positive impact on human consciousness, and to use this application to study and improve the OpenCog and SingularityNET tools used to control Sophia in these interactions.

The Loving AI project has now run two small sets of human trials, both with exciting and positive results. These have been small—dozens rather than hundreds of people—but have definitively proven the point. Put a person in a quiet room with a humanoid robot that can look them in the eye, mirror their facial expressions, recognize some of their emotions, and lead them through simple meditation, listening, and consciousness-oriented exercises…and quite a lot of the time, the result is a more relaxed person who has entered into a shifted state of consciousness, at least for a period of time.

In a certain percentage of cases, the interaction with the robot consciousness guide triggered a dramatic change of consciousness in the human subject—a deep meditative trance state, for instance. In most cases, the result was not so extreme, but statistically the positive effect was quite significant across all cases. Furthermore, a similar effect was found using an avatar simulation of the robot’s face on a tablet screen (together with a webcam for facial expression mirroring and recognition), but not with a purely auditory interaction.

The Loving AI experiments are not only about AI; they are about human-robot and human-avatar interaction, with AI as one significant aspect. The facial interaction with the robot or avatar is pushing “biological buttons” that trigger emotional reactions and prime the mind for changes of consciousness. However, this sort of body-mind interaction is arguably critical to human values and what it means to be human; it’s an important thing for robots and AIs to “get.”

Halting or pausing the advance of AI is not a viable possibility at this stage. Despite the risks, the potential economic and political benefits involved are clear and massive. The convergence of narrow AI toward AGI is also a near inevitability, because there are so many important applications where greater generality of intelligence will lead to greater practical functionality. The challenge is to make the outcome of this great civilization-level adventure as positive as possible.

Image Credit: Anton Gvozdikov / Shutterstock.com


#433506 MIT’s New Robot Taught Itself to Pick ...

Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms quietly grasped objects of various shapes and sizes. For hours on end, they taught themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.

Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.

This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.

The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.

This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as their $775m acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in their warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.

Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.

Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.

Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.

But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.

This is why new systems that can potentially train themselves over time or that allow robots to deal with situations they’ve never seen before without mountains of labelled data are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.

The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various angles, based on its relative distance to other grid points.

This is the real innovation: the new means of representing objects to grasp as mapped-out 3D objects, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle, and then activating a door handle grasping subroutine, the DON system treats all objects by making these spatial maps before classifying or manipulating them, enabling it to deal with a greater range of objects than in other approaches.
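At the heart of dense object nets is the idea that the network maps every pixel of an image to a descriptor vector, and the same physical point on an object maps to nearly the same descriptor from any viewpoint. A hedged numpy sketch of the matching step—finding, in a new view, the pixel whose descriptor is closest to a stored reference descriptor (say, the one for a mug handle)—might look like the following; the descriptors here are random stand-ins for the network’s actual output.

```python
import numpy as np

def find_correspondence(descriptor_image: np.ndarray, reference: np.ndarray):
    """Return the (row, col) whose descriptor is closest to the reference descriptor.

    descriptor_image: H x W x D array, one D-dim descriptor per pixel (network output).
    reference: D-dim descriptor for a feature (e.g., the mug handle) from another view.
    """
    dists = np.linalg.norm(descriptor_image - reference, axis=2)  # H x W distance map
    row, col = np.unravel_index(np.argmin(dists), dists.shape)
    return (int(row), int(col)), float(dists[row, col])

# Stand-in for the network's dense output on a new view of the object.
H, W, D = 120, 160, 3
rng = np.random.default_rng(1)
descriptor_image = rng.normal(size=(H, W, D))

# Stand-in for the stored "grasp here" descriptor learned from earlier views.
handle_descriptor = descriptor_image[40, 100] + rng.normal(scale=0.01, size=D)

pixel, distance = find_correspondence(descriptor_image, handle_descriptor)
print("Best match at pixel", pixel, "distance", round(distance, 4))
# The arm would then convert this pixel (plus depth) into a 3D grasp target.
```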

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

Class-specific descriptors, which can be applied to the object features, can allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favourite mug when it serves me coffee in the morning.

Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.

As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.

This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.

Image Credit: Tom Buehler/CSAIL
