Tag Archives: self

#433803 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
The AI Cold War That Could Doom Us All
Nicholas Thompson | Wired
“At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. …Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?”

LONGEVITY
Finally, the Drug That Keeps You Young
Stephen S. Hall | MIT Technology Review
“The other thing that has changed is that the field of senescence—and the recognition that senescent cells can be such drivers of aging—has finally gained acceptance. Whether those drugs will work in people is still an open question. But the first human trials are under way right now.”

SYNTHETIC BIOLOGY
Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories
Megan Molteni | Wired
“The biotech unicorn is already cranking out an impressive number of microbial biofactories that grow and multiply and burp out fragrances, fertilizers, and soon, psychoactive substances. And they do it at a fraction of the cost of traditional systems. But Kelly is thinking even bigger.”

CYBERNETICS
Thousands of Swedes Are Inserting Microchips Under Their Skin
Maddy Savage | NPR
“Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180. So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.”

ART
AI Art at Christie’s Sells for $432,500
Gabe Cohn | The New York Times
“Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.”

ETHICS
Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From
Karen Hao | MIT Technology Review
“The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.”

TECHNOLOGY
The Rodney Brooks Rules for Predicting a Technology’s Success
Rodney Brooks | IEEE Spectrum
“Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?”

Image Source: spainter_vfx / Shutterstock.com

Posted in Human Robots

#433739 No Safety Driver Here—Volvo’s New ...

Each time there’s a headline about driverless trucking technology, another piece is taken out of the old equation. First, an Uber/Otto truck’s safety driver went hands-off once the truck reached the highway (and said truck successfully delivered its valuable cargo of 50,000 beers). Then, Starsky Robotics announced its trucks would start making autonomous deliveries without a human in the vehicle at all.

Now, Volvo has taken the tech one step further. Its new trucks not only won’t have safety drivers, they won’t even have the option of putting safety drivers behind the wheel, because there is no wheel—and no cab, either.

Vera, as the technology’s been dubbed, was unveiled in September, and consists of a sort of flat-Tesla-like electric car with a standard trailer hookup. The vehicles are connected to a cloud service, which also connects them to each other and to a control center. The control center monitors the trucks’ positioning (they’re designed to locate their position to within centimeters), battery charge, load content, service requirements, and other variables. The driveline and battery pack used in the cars are the same as those Volvo uses in its existing electric trucks.

You won’t see these cruising down an interstate highway, though, or even down a local highway. Vera trucks are designed to be used on short, repetitive routes contained within limited areas—think shipping ports, industrial parks, or logistics hubs. They’re limited to slower speeds than normal cars or trucks, and will be able to operate 24/7. “We will see much higher delivery precision, as well as improved flexibility and productivity,” said Mikael Karlsson, VP of Autonomous Solutions at Volvo Trucks. “Today’s operations are often designed according to standard daytime work hours, but a solution like Vera opens up the possibility of continuous round-the-clock operation and a more optimal flow. This in turn can minimize stock piles and increase overall productivity.”

The trucks are sort of like bigger versions of Amazon’s Kiva robots, which scoot around the aisles of warehouses and fulfillment centers moving pallets between shelves and fetching goods to be shipped.

Pairing trucks like Vera with robots like Kiva makes for a fascinating future landscape of logistics and transport; cargo will be moved from docks to warehouses by a large, flat robot-on-wheels, then distributed throughout that warehouse by smaller, flat robots-on-wheels. To see the automated process through to the end point, even smaller flat robots-on-wheels will deliver people's goods right to their front doors.

Sounds like a lot of robots and not a lot of humans, right? Anticipating its technology's implications for the ongoing debate over technological unemployment, Volvo has already made statements about its intention to continue to employ humans alongside the driverless trucks. “I foresee that there will be an increased level of automation where it makes sense, such as for repetitive tasks. This in turn will drive prosperity and increase the need for truck drivers in other applications,” said Karlsson.

The end-to-end automation concept has already been put into practice in Caofeidian, a northern Chinese city that houses the world’s first fully autonomous harbor, aiming to be operational by the end of this year. Besides replacing human-driven trucks with autonomous ones (made by Chinese startup TuSimple), the port is using automated cranes and a coordinating central control system.

Besides Uber/Otto, Tesla, and Daimler, which are all working on driverless trucks with a more conventional design (meaning they still have a cab and look like you’d expect a truck to look), Volvo also has competition from a company called Einride. The Swedish startup’s electric, cabless T/Pod looks a lot like Vera, but has some fundamental differences. Rather than being tailored to short distances and high capacity, Einride’s trucks are meant for medium distance and capacity, like moving goods from a distribution center to a series of local stores.

Vera trucks are currently still in the development phase. But since their intended use is quite specific and limited (Karlsson noted “Vera is not intended to be a solution for everyone, everywhere”), the technology could likely be rolled out faster than its more general-use counterparts. Having cabless electric trucks take over short routes in closed environments would be one more baby step along the road to a driverless future—and a testament to the fact that self-driving technology will move into our lives and our jobs incrementally, ostensibly giving us the time we’ll need to adapt and adjust.

Image Credit: Volvo Trucks

Posted in Human Robots

#433696 3 Big Ways Tech Is Disrupting Global ...

Disruptive business models are often powered by alternative financing. In Part 1 of this series, I discussed how mobile is redefining money and banking and shared some of the dramatic transformations in the global remittance infrastructure.

In this article, we’ll discuss:

Peer-to-peer lending
AI financial advisors and robo traders
Seamless transactions

Let’s dive right back in…

Decentralized Lending = Democratized Access to Finances
Peer-to-peer (P2P) lending is an age-old practice, traditionally with high risk and extreme locality. Now, the P2P funding model is being digitized and delocalized, bringing lending online and across borders.

Zopa, the first official crowdlending platform, launched in the United Kingdom in 2005. Since then, the consumer crowdlending platform has facilitated over 3 billion euros (~$3.5 billion USD) in loans.

Person-to-business crowdlending took off, again in the U.K., with Funding Circle in 2010; the platform has now loaned over 5 billion euros (~$5.8 billion USD) of capital to small businesses around the world.

Consumer crowdlending reached the US in 2006, with platforms like Prosper and Lending Club. The US crowdlending industry has since boomed to $21 billion across 515,000 loans.

Let’s take a step back… to a time before banks, when lending took place between trusted neighbors in small villages across the globe. Lending started as peer-to-peer transactions.

As villages turned into towns, towns turned into cities, and cities turned into sprawling metropolises, neighborly trust and the ability to communicate across urban landscapes broke down. That’s where banks and other financial institutions came into play—to add trust back into the lending equation.

With crowdlending, we are evidently returning to this pre-centralized-banking model of loans, and moving away from cumbersome intermediaries and their high fees, regulations, and extra complexity.

Fueled by the permeation of the internet, P2P lending took on a new form as ‘crowdlending’ in the early 2000s. Now, as blockchain and artificial intelligence arrive on the digital scene, P2P lending platforms are being overhauled with transparency, accountability, reliability, and immutability.

Artificial Intelligence Micro Lending & Credit Scores
We are beginning to augment our quantitative decision-making with neural networks that process borrowers’ financial data to determine their financial ‘fate’ (or, as some call it, their credit score). Companies like Smart Finance Group (backed by Kai-Fu Lee and Sinovation Ventures) are using artificial intelligence to minimize default rates for tens of millions of microloans.

Smart Finance is fueled by users’ personal data, particularly smartphone data and usage behavior. Users are required to give Smart Finance access to their smartphone data, so that Smart Finance’s artificial intelligence engine can generate a credit score from the personal information.

The benefits of this AI-powered lending platform do not stop at increased loan payback rates; there’s a massive speed increase as well. Smart Finance loans are frequently approved in under eight seconds. As we’ve seen with other artificial intelligence disruptions, data is the new gold.
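The mechanics described above—mapping behavioral signals to a default probability, then to a score—can be illustrated with a toy model. Everything here (the feature names, the weights, the score range) is hypothetical, not Smart Finance's actual system:

```python
import math

# Hypothetical behavioral features extracted from smartphone usage.
# Weights are illustrative only, not taken from any real lending model.
WEIGHTS = {
    "on_time_bill_ratio": 2.5,   # fraction of bills paid on time
    "avg_battery_level": 0.8,    # proxy for planning behavior
    "app_install_churn": -1.2,   # frequent install/uninstall cycles
}
BIAS = -0.5

def default_probability(features):
    """Logistic model: squash a weighted sum into a 0-1 default probability."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(z))  # higher z -> lower default risk

def credit_score(features, lo=300, hi=850):
    """Map the predicted repayment probability onto a familiar score range."""
    p_repay = 1.0 - default_probability(features)
    return round(lo + (hi - lo) * p_repay)

borrower = {"on_time_bill_ratio": 0.95, "avg_battery_level": 0.6,
            "app_install_churn": 0.1}
print(credit_score(borrower))
```

A production system would learn the weights from millions of repayment histories rather than hard-coding them, but the pipeline—features in, probability out, score derived—is the same shape.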

Digitizing access to P2P loans paves the way for billions of people currently without access to banking to leapfrog the centralized banking system, just as Africa bypassed landline phones and went straight to mobile. Leapfrogging centralized banking and the credit system is exactly what Smart Finance has done for hundreds of millions of people in China.

Blockchain-Backed Crowdlending
As artificial intelligence mines even the most mundane mobile browsing data to assign credit scores, blockchain technologies—particularly immutable ledgers and smart contracts—are disrupting the archaic banking system, building additional trust and transparency on top of current P2P lending models.

Immutable ledgers provide the necessary transparency for accurate credit and loan defaulting history. Smart contracts executed on these immutable ledgers bring the critical ability to digitally replace cumbersome, expensive third parties (like banks), allowing individual borrowers or businesses to directly connect with willing lenders.
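The role a smart contract plays in that picture can be sketched as a small state machine: lender funds, borrower repays, and the contract settles itself with no bank in the middle. This is a plain-Python illustration of the escrow logic such a contract might encode on-chain, not code for any real platform:

```python
class LoanContract:
    """Toy escrow: lender funds, borrower repays, history is append-only."""

    def __init__(self, lender, borrower, principal, interest):
        self.parties = (lender, borrower)
        self.principal = principal
        self.owed = principal + interest
        self.state = "CREATED"
        self.ledger = []  # stand-in for an immutable ledger: append-only history

    def _log(self, event):
        self.ledger.append((self.state, event))

    def fund(self):
        assert self.state == "CREATED", "can only fund a new contract"
        self.state = "FUNDED"  # principal released to borrower
        self._log(f"lender sent {self.principal} to borrower")

    def repay(self, amount):
        assert self.state == "FUNDED", "nothing to repay"
        self.owed -= amount
        self._log(f"borrower repaid {amount}")
        if self.owed <= 0:
            self.state = "SETTLED"  # contract closes itself; no bank involved

loan = LoanContract("alice", "bob", principal=100, interest=10)
loan.fund()
loan.repay(110)
print(loan.state)  # prints "SETTLED"
```

On a real chain the state transitions would be enforced by the network and the ledger would be tamper-proof; the point here is only that the intermediary's job reduces to a few lines of verifiable rules.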

Two of the leading blockchain platforms for P2P lending are ETHLend and SALT Lending.

ETHLend is an Ethereum-based decentralized application aiming to bring transparency and trust to P2P lending through Ethereum network smart contracts.

Secure Automated Lending Technology (SALT) allows cryptocurrency asset holders to use their digital assets as collateral for cash loans, without the need to liquidate their holdings, giving rise to a digital-asset-backed lending market.

While blockchain poses a threat to many of the large, centralized banking institutions, some are taking advantage of the new technology to optimize their internal lending, credit scoring, and collateral operations.

In March 2018, ING and Credit Suisse successfully exchanged 25 million euros using HQLA-X, a blockchain-based collateral lending platform.

HQLA-X runs on the R3 Corda blockchain, a platform designed specifically to help heritage financial and commerce institutions migrate away from their inefficient legacy financial infrastructure.

Blockchain and tokenization are going through their own fintech and regulation shakeup right now. In a future blog, I’ll discuss the various efforts to more readily assure smart contracts, and the disruptive business model of security tokens and the US Securities and Exchange Commission.

Parallels to the Global Abundance of Capital
The abundance of capital being created by the advent of P2P loans closely relates to the unprecedented global abundance of capital.

Initial coin offerings (ICOs) and crowdfunding are taking a strong stand in disrupting the $164 billion venture capital market. The total amount invested in ICOs has risen from $6.6 billion in 2017 to $7.15 billion USD in the first half of 2018. Crowdfunding helped projects raise more than $34 billion in 2017, with experts projecting that global crowdfunding investments will reach $300 billion by 2025.

In the last year alone, using ICOs, over a dozen projects have raised hundreds of millions of dollars in mere hours. Take Filecoin, for example, which raised $257 million in only 30 days; its first $135 million was raised in the first hour. Similarly, the Dragon Coin project (which itself is revolutionizing remittance in high-stakes casinos around the world) raised $320 million in its 30-day public ICO.

Some Important Takeaways…

Technology-backed fundraising and financial services are disrupting the world’s largest financial institutions. Anyone, anywhere, at any time will be able to access the capital they need to pursue their idea.

The speed at which we can go from “I’ve got an idea” to “I run a billion-dollar company” is moving faster than ever.

Following Ray Kurzweil’s Law of Accelerating Returns, the rapid decrease in time to access capital is intimately linked to (and greatly dependent on) a financial infrastructure (technology, institutions, platforms, and policies) that can adapt and evolve just as rapidly.

This new abundance of capital requires financial decision-making with ever-higher market prediction precision. That’s exactly where artificial intelligence is already playing a massive role.

Artificial Intelligence, Robo Traders, and Financial Advisors
On May 6, 2010, the Dow Jones Industrial Average suddenly collapsed by 998.5 points (roughly 9 percent, briefly erasing about $1 trillion in market value). The crash lasted over 35 minutes and is now known as the ‘Flash Crash’. While no one knows the specific reason for this 2010 stock market anomaly, experts widely agree that the Flash Crash had to do with algorithmic trading.

With the ability to have instant, trillion-dollar market impacts, algorithmic trading and artificial intelligence are undoubtedly ingrained in how financial markets operate.

In 2017, CNBC.com estimated that 90 percent of daily stock trading volume is executed by machine algorithms, with only 10 percent carried out directly by humans.

Artificial intelligence and financial management algorithms are not only available to top Wall Street players.

Robo-advisor financial management apps, like Wealthfront and Betterment, are rapidly permeating the global market. Wealthfront currently has $9.5 billion in assets under management, and Betterment has $10 billion.

AI-powered financial agents are already helping financial institutions protect your money and fight fraud. A prime application for machine learning is detecting anomalies in your spending and transaction habits and flagging potentially fraudulent transactions.
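The anomaly-flagging idea is simple to sketch: score each new transaction against a customer's spending history and flag outliers. Here is a minimal z-score version (real fraud systems use far richer features—merchant, location, timing—and far more sophisticated models):

```python
import statistics

def flag_anomalies(history, new_transactions, threshold=3.0):
    """Flag transactions more than `threshold` standard deviations
    above the customer's historical mean spend."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    flagged = []
    for amount in new_transactions:
        z = (amount - mean) / stdev
        if z > threshold:
            flagged.append(amount)
    return flagged

# A customer who usually spends $8-$15 per transaction:
history = [12.5, 8.0, 15.2, 9.9, 11.3, 14.1, 10.8, 13.0]
print(flag_anomalies(history, [11.0, 950.0, 14.5]))  # → [950.0]
```

The $950 charge sits hundreds of standard deviations above this customer's norm, so it gets flagged for review while ordinary purchases pass through untouched.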

As artificial intelligence continues to exponentially increase in power and capabilities, increasingly powerful trading and financial management bots will come online, finding massive new and previously lost streams of wealth.

How else are artificial intelligence and automation transforming finance?

Disruptive Remittance and Seamless Transactions
When was the last time you paid in cash at a toll booth? How about for a taxi ride?

E-ZPass, the electronic tolling system used extensively on the East Coast, has done wonders to reduce traffic congestion and increase traffic flow.

Driving down I-95 on the East Coast of the United States, drivers rarely notice their financial transaction with the state’s tolling agencies. The transactions are seamless.

The Uber app enables me to travel without my wallet. I can forget about payment on my trip, freeing up mental bandwidth and time for higher-priority tasks. The entire process is digitized and, by extension, automated and integrated into Uber’s platform. (Note: this incredible convenience often causes me to accidentally walk out of taxi cabs without paying!)

In January 2018, we saw the first cutting-edge, AI-powered Amazon Go store open in Seattle, Washington. The store marked a new era in remittance and transactions: gone are the days of carrying credit cards and cash, and gone are the cash registers. Now, on the heels of these early ‘beta tests’, Amazon is considering opening as many as 3,000 of these cashierless stores by 2023.

Amazon Go stores use AI algorithms that watch video feeds from advanced cameras throughout the store to identify who picks up groceries, exactly what products they select, and how much to charge that person when they walk out. It’s a grab-and-go experience.

Let’s extrapolate the notion of seamless, integrated payment systems from Amazon Go and Uber’s removal of post-ride payment to the rest of our day-to-day experience.

Imagine this near future:

As you near the front door of your home, your AI assistant summons a self-driving Uber that takes you to the Hyperloop station (after all, you work in L.A. but live in San Francisco).

At the station, you board your pod, without noticing that your ticket purchase was settled via a wireless payment checkpoint.

After work, you stop at the Amazon Go and pick up dinner. Your virtual AI assistant passes your Amazon account information to the store’s payment checkpoint, and the store’s cameras and sensors track you and your cart, charging you auto-magically.

At home, unbeknownst to you, your AI has already restocked your fridge and pantry with whatever items you failed to pick up at the Amazon Go.

Once we remove the actively transacting aspect of finance, what else becomes possible?

Top Conclusions
Extraordinary transformations are happening in the finance world. We’ve only scratched the surface of the fintech revolution. All of these transformative financial technologies require high-fidelity assurance, robust insurance, and a mechanism for storing value.

I’ll dive into each of these other facets of financial services in future articles.

For now, thanks to global communication networks coming online via 5G, Alphabet’s Loon, SpaceX’s Starlink, and OneWeb, nearly all 8 billion people on Earth will be online by 2024.

Once connected, these new minds, entrepreneurs, and customers need access to money and financial services to meaningfully participate in the world economy.

By connecting lenders and borrowers around the globe, decentralized lending drives down global interest rates, increases global financial market participation, and extends economic opportunity to the billions of people who are about to come online.

We’re living in the most abundant time in human history, and fintech is just getting started.

Join Me
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Novikov Aleksey / Shutterstock.com

Posted in Human Robots

#433620 Instilling the Best of Human Values in ...

Now that the era of artificial intelligence is unquestionably upon us, it behooves us to think and work harder to ensure that the AIs we create embody positive human values.

Science fiction is full of AIs that manifest the dark side of humanity, or are indifferent to humans altogether. Such possibilities cannot be ruled out, but nor is there any logical or empirical reason to consider them highly likely. I am among a large group of AI experts who see a strong potential for profoundly positive outcomes in the AI revolution currently underway.

We are facing a future with great uncertainty and tremendous promise, and the best we can do is to confront it with a combination of heart and mind, of common sense and rigorous science. In the realm of AI, what this means is, we need to do our best to guide the AI minds we are creating to embody the values we cherish: love, compassion, creativity, and respect.

The quest for beneficial AI has many dimensions, including its potential to reduce material scarcity and to help unlock the human capacity for love and compassion.

Reducing Scarcity
A large percentage of difficult issues in human society, many of which spill over into the AI domain, would be palliated significantly if material scarcity became less of a problem. Fortunately, AI has great potential to help here. AI is already increasing efficiency in nearly every industry.

In the next few decades, as nanotech and 3D printing continue to advance, AI-driven design will become a larger factor in the economy. Radical new tools like artificial enzymes built using Christian Schafmeister’s spiroligomer molecules, and designed using quantum physics-savvy AIs, will enable the creation of new materials and medicines.

For amazing advances like the intersection of AI and nanotech to lead toward broadly positive outcomes, however, the economic and political aspects of the AI industry may have to shift from the current status quo.

Currently, most AI development occurs under the aegis of military organizations or large corporations oriented heavily toward advertising and marketing. Put crudely, an awful lot of AI today is about “spying, brainwashing, or killing.” This is not really the ideal situation if we want our first true artificial general intelligences to be open-minded, warm-hearted, and beneficial.

Also, as the bulk of AI development now occurs in large for-profit organizations bound by law to pursue the maximization of shareholder value, we face a situation where AI tends to exacerbate global wealth inequality and class divisions. This has the potential to lead to various civilization-scale failure modes involving the intersection of geopolitics, AI, cyberterrorism, and so forth. Part of my motivation for founding the decentralized AI project SingularityNET was to create an alternative mode of dissemination and utilization of both narrow AI and AGI—one that operates in a self-organizing way, outside of the direct grip of conventional corporate and governmental structures.

In the end, though, I worry that radical material abundance and novel political and economic structures may fail to create a positive future, unless they are coupled with advances in consciousness and compassion. AGIs have the potential to be massively more ethical and compassionate than humans. But still, the odds of getting deeply beneficial AGIs seem higher if the humans creating them are fuller of compassion and positive consciousness—and can effectively pass these values on.

Transmitting Human Values
Brain-computer interfacing is another critical aspect of the quest for creating more positive AIs and more positive humans. As Elon Musk has put it, “If you can’t beat ’em, join ’em.” Joining is more fun than beating anyway. What better way to infuse AIs with human values than to connect them directly to human brains, and let them learn directly from the source (while providing humans with valuable enhancements)?

Millions of people recently heard Elon Musk discuss AI and BCI on the Joe Rogan podcast. Musk’s embrace of brain-computer interfacing is laudable, but he tends to dodge some of the tough issues—for instance, he does not emphasize the trade-off cyborgs will face between retaining human-ness and maximizing intelligence, joy, and creativity. To make this trade-off effectively, the AI portion of the cyborg will need to have a deep sense of human values.

Musk calls humanity the “biological boot loader” for AGI, but to me this colorful metaphor misses a key point—that we can seed the AGI we create with our values as an initial condition. This is one reason why it’s important that the first really powerful AGIs are created by decentralized networks, and not conventional corporate or military organizations. The decentralized software/hardware ecosystem, for all its quirks and flaws, has more potential to lead to human-computer cybernetic collective minds that are reasonable and benevolent.

Algorithmic Love
BCI is still in its infancy, but a more immediate way of connecting people with AIs to infuse both with greater love and compassion is to leverage humanoid robotics technology. Toward this end, I conceived a project called Loving AI, focused on using highly expressive humanoid robots like the Hanson robot Sophia to lead people through meditations and other exercises oriented toward unlocking the human potential for love and compassion. My goals here were to explore the potential of AI and robots to have a positive impact on human consciousness, and to use this application to study and improve the OpenCog and SingularityNET tools used to control Sophia in these interactions.

The Loving AI project has now run two small sets of human trials, both with exciting and positive results. These have been small—dozens rather than hundreds of people—but have definitively proven the point. Put a person in a quiet room with a humanoid robot that can look them in the eye, mirror their facial expressions, recognize some of their emotions, and lead them through simple meditation, listening, and consciousness-oriented exercises…and quite a lot of the time, the result is a more relaxed person who has entered into a shifted state of consciousness, at least for a period of time.

In a certain percentage of cases, the interaction with the robot consciousness guide triggered a dramatic change of consciousness in the human subject—a deep meditative trance state, for instance. In most cases, the result was not so extreme, but statistically the positive effect was quite significant across all cases. Furthermore, a similar effect was found using an avatar simulation of the robot’s face on a tablet screen (together with a webcam for facial expression mirroring and recognition), but not with a purely auditory interaction.

The Loving AI experiments are not only about AI; they are about human-robot and human-avatar interaction, with AI as one significant aspect. The facial interaction with the robot or avatar is pushing “biological buttons” that trigger emotional reactions and prime the mind for changes of consciousness. However, this sort of body-mind interaction is arguably critical to human values and what it means to be human; it’s an important thing for robots and AIs to “get.”

Halting or pausing the advance of AI is not a viable possibility at this stage. Despite the risks, the potential economic and political benefits involved are clear and massive. The convergence of narrow AI toward AGI is also a near inevitability, because there are so many important applications where greater generality of intelligence will lead to greater practical functionality. The challenge is to make the outcome of this great civilization-level adventure as positive as possible.

Image Credit: Anton Gvozdikov / Shutterstock.com

Posted in Human Robots

#433506 MIT’s New Robot Taught Itself to Pick ...

Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours on end quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.

Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.

This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.

The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.

This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as Amazon’s $775 million acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in its warehouses. The lucrative dream of a fully automated delivery system is missing this crucial ability.

Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.

Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.

Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.

But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.

This is why new systems that can potentially train themselves over time, or that allow robots to deal with situations they’ve never seen before without mountains of labeled data, are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.

The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.

This is the real innovation: the new means of representing objects to grasp as mapped-out 3D objects, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle, and then activating a door handle grasping subroutine, the DON system treats all objects by making these spatial maps before classifying or manipulating them, enabling it to deal with a greater range of objects than in other approaches.
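The core lookup this enables—finding the same feature point across views by comparing descriptors—can be sketched with numpy. This is a simplified stand-in for what a trained DON produces: here the per-pixel "descriptors" are random vectors rather than learned embeddings, which is enough to demonstrate the matching mechanics but nothing more:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend a trained network mapped each pixel of two views of the same
# object to a D-dimensional descriptor. View B is view A plus small
# noise, standing in for the same object seen from a slightly
# different angle.
H, W, D = 32, 32, 16
desc_view_a = rng.normal(size=(H, W, D))
desc_view_b = desc_view_a + 0.01 * rng.normal(size=(H, W, D))

def find_match(query_pixel, descriptors_a, descriptors_b):
    """Locate in view B the pixel whose descriptor is nearest to the
    descriptor of `query_pixel` in view A (e.g. the mug-handle point)."""
    target = descriptors_a[query_pixel]                       # shape (D,)
    dists = np.linalg.norm(descriptors_b - target, axis=-1)   # shape (H, W)
    return np.unravel_index(np.argmin(dists), dists.shape)

handle_pixel = (10, 20)  # a feature annotated once, e.g. "the handle"
match = find_match(handle_pixel, desc_view_a, desc_view_b)
print(match == handle_pixel)  # → True
```

Because the descriptors are consistent across views, annotating a feature once (the handle, the shoe tongue) is enough to find it again in any new orientation—no per-orientation retraining, no grasping subroutine per object class.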

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

Class-specific descriptors, which can be applied to the object features, can allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favorite mug when it serves me coffee in the morning.

Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.

As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.

This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.

Image Credit: Tom Buehler/CSAIL

Posted in Human Robots