Tag Archives: training

#435199 The Rise of AI Art—and What It Means ...

Artificially intelligent systems are slowly taking over tasks previously done by humans, and many processes involving repetitive, simple movements have already been fully automated. For now, though, humans remain superior at abstract and creative tasks.

However, it seems like even when it comes to creativity, we’re now being challenged by our own creations.

In the last few years, we’ve seen the emergence of hundreds of “AI artists.” These complex algorithms are creating unique (and sometimes eerie) works of art. They’re generating stunning visuals, profound poetry, transcendent music, and even realistic movie scripts. The works of these AI artists are raising questions about the nature of art and the role of human creativity in future societies.

Here are a few works of art created by non-human entities.

Unsecured Futures
by Ai-Da

Ai-Da Robot with Painting. Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.
Earlier this month, we saw the announcement of Ai-Da, considered the first ultra-realistic drawing robot artist. Her mechanical abilities, combined with AI-based algorithms, allow her to draw, paint, and even sculpt. She can draw people using her artificial eye and a pencil in her hand. Ai-Da’s artwork and first solo exhibition, Unsecured Futures, will be showcased at Oxford University in July.

Ai-Da Cartesian Painting. Image Credit: Ai-Da Artworks. Published with permission from Midas Public Relations.
Obviously, Ai-Da has no true consciousness, thoughts, or feelings. Despite that, the (human) organizers of the exhibition believe that Ai-Da serves as a basis for crucial conversations about the ethics of emerging technologies. The exhibition is intended as a stimulus for engaging with critical questions about what kind of future we ought to create through such technologies.

The exhibition’s creators wrote, “Humans are confident in their position as the most powerful species on the planet, but how far do we actually want to take this power? To a Brave New World (Nightmare)? And if we use new technologies to enhance the power of the few, we had better start safeguarding the future of the many.”

Google’s PoemPortraits
Our transcendence adorns,
That society of the stars seem to be the secret.

The two lines of poetry above aren’t like any poetry you’ve come across before. They were generated by an algorithm trained, via deep learning, on 20 million words of 19th-century poetry.

Google’s latest art project, named PoemPortraits, takes a word you suggest and generates a unique poem (once again, a collaboration of man and machine). You can even add a selfie to the final “PoemPortrait.” Artist Es Devlin, the project’s creator, explains that the AI “doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model. As a result, the algorithm generates original phrases emulating the style of what it’s been trained on.”
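To make Devlin’s description concrete, here is a toy sketch of the general idea of building a statistical model from training text and sampling original phrases from it. The real PoemPortraits system uses a deep neural network trained on 20 million words; this word-level Markov chain is only the simplest statistical analogue, and the tiny corpus is made up for illustration.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the 19th-century poetry training data.
corpus = ("our transcendence adorns the stars and the secret of "
          "the stars adorns our society").split()

# Count which words follow which: a minimal statistical model of style.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(seed, length=8):
    """Sample an original phrase by repeatedly picking an observed next word."""
    words = [seed]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("our"))  # e.g. "our transcendence adorns the stars and ..."
```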

The generated poetry can sometimes be profound, and sometimes completely meaningless. But what makes the PoemPortraits project even more interesting is that it’s collaborative. All of the generated lines of poetry are combined into a continuously growing collective poem, which you can view after your lines are generated. In many ways, the final collective poem is a collaboration of people from around the world working with algorithms.

Faceless Portraits Transcending Time
AICAN + Ahmed Elgammal

Image Credit: AICAN + Ahmed Elgammal | Faceless Portrait #2 (2019) | Artsy.
In March of this year, an AI artist called AICAN and its creator Ahmed Elgammal took over a New York gallery. The exhibition, at HG Contemporary, showed two series of canvas works portraying harrowing, dream-like faceless portraits.

The exhibition was not simply credited to a machine, but rather attributed to the collaboration between a human and machine. Ahmed Elgammal is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University. He considers AICAN to not only be an autonomous AI artist, but also a collaborator for artistic endeavors.

How did AICAN create these eerie faceless portraits? The system was shown 100,000 photos of Western art spanning more than five centuries, allowing it to learn the aesthetics of art via machine learning. It then drew on this historical knowledge, combined with a mandate to create something new, to produce an artwork without human intervention.

Genesis
by AIVA Technologies

Listen to the score above. While you do, reflect on the fact that it was generated by an AI.

AIVA is an AI that composes soundtrack music for movies, commercials, games, and trailers. Its creative works span a wide range of emotions and moods. The scores it generates are indistinguishable from those created by the most talented human composers.

The AIVA music engine allows users to generate original scores in multiple ways. One is to upload an existing human-made score and use it as a temp track on which to base the composition process. Another involves using preset algorithms to compose music in predefined styles, ranging from classical to Middle Eastern.

Currently, the platform is promoted as an opportunity for filmmakers and producers. But in the future, perhaps every individual will have personalized music generated for them based on their interests, tastes, and evolving moods. We already have algorithms on streaming websites recommending novel music to us based on our interests and history. Soon, algorithms may be used to generate music and other works of art that are tailored to impact our unique psyches.

The Future of Art: Pushing Our Creative Limitations
These works of art are just a glimpse into the breadth of creative works being generated by algorithms and machines. Many of us will rightly fear these developments. We have to ask ourselves what our role will be in an era where machines can perform what we consider complex, abstract, creative tasks. The implications for the future of work, education, and human societies are profound.

At the same time, some of these works demonstrate that AI artists may not necessarily represent a threat to human artists, but rather an opportunity for us to push our creative boundaries. The most exciting artistic creations involve collaborations between humans and machines.

We have always used our technological scaffolding to push ourselves beyond our biological limitations. We use the telescope to extend our line of sight, planes to fly, and smartphones to connect with others. Our machines are not always working against us, but rather working as an extension of our minds. Similarly, we could use our machines to expand on our creativity and push the boundaries of art.

Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.

Posted in Human Robots

#435127 Teaching AI the Concept of ‘Similar, ...

As a human, you instinctively know that a leopard is closer to a cat than to a motorbike, but the way we train most AIs leaves them oblivious to these kinds of relations. Building the concept of similarity into our algorithms could make them far more capable, writes the author of a new paper in Science Robotics.

Convolutional neural networks have revolutionized the field of computer vision to the point that machines are now outperforming humans on some of the most challenging visual tasks. But the way we train them to analyze images is very different from the way humans learn, says Atsuto Maki, an associate professor at KTH Royal Institute of Technology.

“Imagine that you are two years old and being quizzed on what you see in a photo of a leopard,” he writes. “You might answer ‘a cat’ and your parents might say, ‘yeah, not quite but similar’.”

In contrast, the way we train neural networks rarely gives that kind of partial credit. They are typically trained to have very high confidence in the correct label and to consider all incorrect labels, whether “cat” or “motorbike,” equally wrong. That’s a mistake, says Maki, because ignoring the fact that something can be “less wrong” means you’re not exploiting all of the information in the training data.

Even when models are trained this way, there will be small differences in the probabilities assigned to incorrect labels that can tell you a lot about how well the model can generalize what it has learned to unseen data.

If you show a model a picture of a leopard and it gives “cat” a probability of five percent and “motorbike” one percent, that suggests it has picked up on the fact that a cat is closer to a leopard than a motorbike is. In contrast, if the figures are the other way around, it means the model hasn’t learned the broad features that make cats and leopards similar, knowledge that could prove helpful when analyzing new data.
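Here is a tiny numerical sketch of that diagnostic, with made-up softmax outputs for a leopard photo (the class order and the numbers are illustrative, not from the paper):

```python
import numpy as np

classes = ["leopard", "cat", "motorbike"]

# Both hypothetical models classify the leopard correctly, but only
# model A's "wrong" probabilities reflect that a cat is more
# leopard-like than a motorbike.
model_a = np.array([0.94, 0.05, 0.01])  # similarity learned
model_b = np.array([0.94, 0.01, 0.05])  # similarity missed

for name, probs in [("A", model_a), ("B", model_b)]:
    order = np.argsort(probs)[::-1]   # classes, most to least likely
    runner_up = classes[order[1]]
    print(f"model {name}: runner-up is '{runner_up}'")
# model A: runner-up is 'cat'       -> learned the broad cat-like features
# model B: runner-up is 'motorbike' -> hasn't
```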

If we could boost this ability to identify similarities between classes we should be able to create more flexible models better able to generalize, says Maki. And recent research has demonstrated how variations of an approach called regularization might help us achieve that goal.

Neural networks are prone to a problem called “overfitting,” which refers to a tendency to pay too much attention to tiny details and noise specific to their training set. When that happens, models will perform excellently on their training data but poorly when applied to unseen test data without these particular quirks.

Regularization is used to circumvent this problem, typically by reducing the network’s capacity to learn all this unnecessary information and thereby boosting its ability to generalize to new data. Techniques vary, but generally involve modifying the network’s structure or the strength of the weights between artificial neurons.
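As a concrete reference point, this is what two of those classic techniques look like in a modern deep learning framework. It’s a generic PyTorch sketch, not code from any of the papers discussed; the layer sizes and coefficients are arbitrary.

```python
import torch
import torch.nn as nn

# Dropout modifies the network's structure during training by randomly
# silencing units; weight decay penalizes large connection weights.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # structural regularization
    nn.Linear(256, 10),
)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,
    weight_decay=1e-4,   # L2 penalty on the weights
)
```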

More recently, though, researchers have suggested new regularization approaches that work by encouraging a broader spread of probabilities across all classes. This essentially helps them capture more of the class similarities, says Maki, and therefore boosts their ability to generalize.

One such approach was devised in 2017 by Google Brain researchers, led by deep learning pioneer Geoffrey Hinton. They introduced a penalty to their training process that directly punished overconfident predictions in the model’s outputs, along with a technique called label smoothing that prevents the largest probability from becoming much larger than all the others. This meant the probabilities were lower for correct labels and higher for incorrect ones, which was found to boost the performance of models on tasks ranging from image classification to speech recognition.
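Label smoothing itself is simple enough to sketch in a few lines. This is one common formulation (the correct class gets probability 1 - eps and the remainder is spread over the other classes), not the exact Google Brain code, and it omits their separate confidence-penalty term:

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against smoothed targets: the correct class gets
    probability 1 - eps, and eps is spread over the remaining classes,
    so the largest target probability never dwarfs the others."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / (n_classes - 1))
    smooth.scatter_(-1, target.unsqueeze(-1), 1.0 - eps)  # target: int64 labels
    return -(smooth * log_probs).sum(dim=-1).mean()

logits = torch.randn(4, 10)             # a batch of 4 examples, 10 classes
target = torch.randint(0, 10, (4,))
loss = smoothed_cross_entropy(logits, target)
```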

Another came from Maki himself in 2017 and achieves the same goal, but by suppressing high values in the model’s feature vector—the mathematical construct that describes all of an object’s important characteristics. This has a knock-on effect on the spread of output probabilities and also helped boost performance on various image classification tasks.
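One plausible reading of “suppressing high values in the model’s feature vector” is an extra penalty on the penultimate-layer activations, which indirectly flattens the spread of output probabilities. The sketch below is that hedged interpretation, not the paper’s exact formulation:

```python
import torch

def loss_with_feature_suppression(task_loss, features, lam=1e-3):
    """Add an L2 penalty on the feature vector (the activations feeding
    the final classifier). Illustrative interpretation only; `lam` and
    the exact penalty form are assumptions, not taken from the paper."""
    return task_loss + lam * features.pow(2).sum(dim=-1).mean()
```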

While it’s still early days for the approach, the fact that humans are able to exploit these kinds of similarities to learn more efficiently suggests that models that incorporate them hold promise. Maki points out that it could be particularly useful in applications such as robotic grasping, where distinguishing various similar objects is important.

Image Credit: Marianna Kalashnyk / Shutterstock.com

Posted in Human Robots

#435110 5 Coming Breakthroughs in Energy and ...

The energy and transportation industries are being aggressively disrupted by converging exponential technologies.

In just five days, the sun provides Earth with an energy supply exceeding all proven reserves of oil, coal, and natural gas. Capturing just 1 part in 8,000 of this available solar energy would allow us to meet 100 percent of our energy needs.
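As a rough order-of-magnitude check on that 1-in-8,000 figure (using commonly cited round numbers that are not from this article: solar power intercepted by Earth on the order of 150,000 TW, and average human primary power demand around 19 TW):

```latex
\[
  \frac{P_{\text{solar}}}{P_{\text{demand}}}
  \approx \frac{1.5 \times 10^{5}\,\mathrm{TW}}{1.9 \times 10^{1}\,\mathrm{TW}}
  \approx 8 \times 10^{3}
\]
```

So capturing roughly one part in 8,000 of incoming sunlight would indeed cover today’s demand.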

As we leverage renewable energy supplied by the sun, wind, geothermal sources, and eventually fusion, we are rapidly heading towards a future where 100 percent of our energy needs will be met by clean tech in just 30 years.

During the past 40 years, solar prices have dropped 250-fold. And as these costs plummet, solar panel capacity continues to grow exponentially.

On the heels of energy abundance, we are additionally witnessing a new transportation revolution, which sets the stage for a future of seamlessly efficient travel at lower economic and environmental costs.

Top 5 Transportation Breakthroughs (2019-2024)
Entrepreneur and inventor Ramez Naam is my go-to expert on all things energy and environment. Currently serving as the Energy Co-Chair at Singularity University, Naam is the award-winning author of five books, including the Nexus series of science fiction novels. Having spent 13 years at Microsoft, his software has touched the lives of over a billion people. Naam holds over 20 patents, including several shared with co-inventor Bill Gates.

In the next five years, he forecasts five transportation trends and five energy trends, each poised to disrupt major players and birth entirely new business models.

Let’s dive in.

Autonomous cars drive 1 billion miles on US roads. Then 10 billion

Alphabet’s Waymo alone has already reached 10 million miles driven in the US. The 600 Waymo vehicles on public roads drive a total of 25,000 miles each day, and computer simulations provide an additional 25,000 virtual cars driving constantly. Since its launch in December, the Waymo One service has transported over 1,000 pre-vetted riders in the Phoenix area.

With more training miles, the accuracy of these cars continues to improve. GM Cruise has improved its disengagement rate by 321 percent since last year, trailing close behind with only one human intervention per 5,025 miles self-driven.

Autonomous taxis as a service in top 20 US metro areas

Along with its first quarterly earnings released last week, Lyft recently announced that it would expand its Waymo partnership with the upcoming deployment of 10 autonomous vehicles in the Phoenix area. While individuals previously had to partake in Waymo’s “early rider program” prior to trying Waymo One, the Lyft partnership will allow anyone to ride in a self-driving vehicle without a prior NDA.

Strategic partnerships will grow increasingly essential between automakers, self-driving tech companies, and rideshare services. Ford is currently working with Volkswagen, and Nvidia now collaborates with Daimler (Mercedes) and Toyota. Just last week, GM Cruise raised another $1.15 billion at a $19 billion valuation as the company aims to launch a ride-hailing service this year.

“They’re going to come to the Bay Area, Los Angeles, Houston, other cities with relatively good weather,” notes Naam. “In every major city within five years in the US and in some other parts of the world, you’re going to see the ability to hail an autonomous vehicle as a ride.”

Cambrian explosion of vehicle formats

Naam explains, “If you look today at the average ridership of a taxi, a Lyft, or an Uber, it’s about 1.1 passengers plus the driver. So, why do you need a large four-seater vehicle for that?”

Small electric, autonomous pods that seat as few as two people will begin to emerge, satisfying the majority of ride-hailing demands we see today. At the same time, larger communal vehicles, such as Uber Express, will appear and undercut even the cheapest transportation methods: buses, trams, and the like. Finally, last-mile scooter transit (or simply short-distance walks) might connect you to communal pick-up locations.

By 2024, an unimaginably diverse range of vehicles will arise to meet every possible need, regardless of distance or destination.

Drone delivery for lightweight packages in at least one US city

Wing, the Alphabet drone delivery startup, recently became the first company to gain approval from the Federal Aviation Administration (FAA) to make deliveries in the US. Having secured approval to deliver to 100 homes in Canberra, Australia, Wing additionally plans to begin delivering goods from local businesses in the suburbs of Virginia.

The current state of drone delivery is best suited for lightweight, urgent-demand payloads like pharmaceuticals, thumb drives, or connectors. And as Amazon continues to decrease its Prime delivery times—now as speedy as a one-day turnaround in many cities—the use of drones will become essential.

Robotic factories drive onshoring of US factories… but without new jobs

The supply chain will continue to shorten and become more agile as manufacturing jobs are re-onshored to the US and other countries. Naam reasons that new management and software jobs will drive this shift, as these roles develop the robotics needed to manufacture goods. Equally important, these robotic factories will provide a more humane setting than many current manufacturing operations overseas.

Top 5 Energy Breakthroughs (2019-2024)

First “1 cent per kWh” deals for solar and wind signed

Ten years ago, the lowest price of solar and wind power fell between 10 and 12 cents per kilowatt-hour (kWh), over twice the price of wholesale power from coal or natural gas.

Today, the gap between solar/wind power and fossil fuel-generated electricity is nearly negligible in many parts of the world. In G20 countries, fossil fuel electricity costs between 5 and 17 cents per kWh, while the average cost per kWh of solar power in the US stands at under 10 cents.

Spanish firm Solarpack Corp Tecnológica recently won a bid in Chile for a 120 MW solar power plant supplying energy at 2.91 cents per kWh. This deal will result in an estimated 25 percent drop in energy costs for Chilean businesses by 2021.

Naam indicates, “We will see the first unsubsidized 1.0 cent solar deals in places like Chile, Mexico, the Southwest US, the Middle East, and North Africa, and we’ll see similar prices for wind in places like Mexico, Brazil, and the US Great Plains.”

Solar and wind will reach >15 percent of US electricity, and begin to drive all growth

Just over eight percent of US electricity comes from solar and wind. In total, 17 percent of American electricity is derived from renewable sources, while a whopping 63 percent comes from fossil fuels and 17 percent from nuclear.

Last year in the U.K., twice as much electricity was generated from wind as from coal. For over a week in May, the U.K. went completely coal-free, using wind and solar to supply 35 percent and 21 percent of power, respectively. While fossil fuels remain the primary electricity source, this coal-free week highlights the disruptive potential of solar and wind power, which major countries like the U.K. are beginning to emphasize.

“Solar and wind are still a relatively small part of the worldwide power mix, only about six percent. Within five years, it’s going to be 15 percent in the US and close to that worldwide,” Naam predicts. “We are nearing the point where we are not building any new fossil fuel power plants.”

It will be cheaper to build new solar/wind/batteries than to run on existing coal

Last October, Northern Indiana utility company NIPSCO announced its transition from 65 percent coal-powered generation to projected coal-free status by 2028. Importantly, this decision was made purely on financial grounds, with an estimated $4 billion in cost savings for customers. The company has already begun several initiatives in solar, wind, and batteries.

NextEra, the largest power generator in the US, has taken on a similar goal, making a deal last year to purchase roughly seven million solar panels from JinkoSolar over four years. Leading power generators across the globe have vocalized a similar economic case for renewable energy.

ICE car sales have now peaked. All car sales growth will be electric

While electric vehicles (EV) have historically been more expensive for consumers than internal combustion engine-powered (ICE) cars, EVs are cheaper to operate and maintain. The yearly cost of operating an EV in the US is about $485, less than half the $1,117 cost of operating a gas-powered vehicle.

And as battery prices continue to shrink, the upfront costs of EVs will decline until a long-term payoff calculation is no longer required to determine which type of car is the better investment. EVs will become the obvious choice.

Many experts including Naam believe that ICE-powered vehicles peaked worldwide in 2018 and will begin to decline over the next five years, as has already been demonstrated in the past five months. At the same time, EVs are expected to quadruple their market share to 1.6 percent this year.

New storage technologies will displace Li-ion batteries for tomorrow’s most demanding applications

Lithium-ion batteries have dominated the battery market for decades, but Naam anticipates new storage technologies will take hold in different contexts. Flow batteries, which can collect and store solar and wind power at large scales, will supply city grids. California’s Independent System Operator, the nonprofit that maintains the majority of the state’s power grid, recently installed a flow battery system in San Diego.

Solid-state batteries, which use entirely solid electrolytes, will supply mobile devices and cars. A growing field of competitors, including Toyota, BMW, Honda, Hyundai, and Nissan, is already working on solid-state battery technology. Compared to lithium-ion batteries, these batteries offer up to six times faster charging, three times the energy density, and eight years of added lifespan.

Final Thoughts
Major advancements in transportation and energy technologies will continue to converge over the next five years. As a case in point, Tesla’s recent announcement of its “robotaxi” fleet exemplifies the growing trend toward jointly prioritizing sustainability and autonomy.

On the connectivity front, 5G and next-generation mobile networks will continue to enable the growth of autonomous fleets, many of which will soon run on renewable energy sources. This growth demands important partnerships between energy storage manufacturers, automakers, self-driving tech companies, and ridesharing services.

In the eco-realm, increasingly obvious economic calculi will catalyze consumer adoption of autonomous electric vehicles. In just five years, Naam predicts that self-driving rideshare services will be cheaper than owning a private vehicle for urban residents. And by the same token, plummeting renewable energy costs will make these fuels far more attractive than fossil fuel-derived electricity.

As universally optimized AI systems cut down on traffic, aggregate time spent in vehicles will plummet, while hours in your (or someone else’s) car can be devoted to any number of activities as autonomous systems steer the way. All the while, sharing an electric vehicle will cut down not only on your carbon footprint but also on the exorbitant costs swallowed by your previous SUV. How will you spend this extra time and money? What new natural resources will fuel your everyday life?

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: welcomia / Shutterstock.com

Posted in Human Robots

#435070 5 Breakthroughs Coming Soon in Augmented ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.

In this third installment of my Convergence Catalyzer series, I’ll be synthesizing key insights from my annual entrepreneurs’ mastermind event, Abundance 360. This five-blog series looks at 3D printing, artificial intelligence, VR/AR, energy and transportation, and blockchain.

Today, let’s dive into virtual and augmented reality.

Today’s most prominent tech giants are leaping onto the VR/AR scene, each driving forward new and upcoming product lines. Think: Microsoft’s HoloLens, Facebook’s Oculus, Amazon’s Sumerian, and Google’s Cardboard (Apple plans to release a headset by 2021).

And as plummeting prices meet exponential advancements in VR/AR hardware, this burgeoning disruptor is on its way out of the early adopters’ market and into the majority of consumers’ homes.

My good friend Philip Rosedale is my go-to expert on AR/VR and one of the foremost creators of today’s most cutting-edge virtual worlds. After creating the virtual civilization Second Life in 2003, now populated by almost 1 million active users, Philip went on to co-found High Fidelity, which explores the future of next-generation shared VR.

In just the next five years, he predicts five emerging trends will take hold, together disrupting major players and birthing new ones.

Let’s dive in…

Top 5 Predictions for VR/AR Breakthroughs (2019-2024)
“If you think you kind of understand what’s going on with that tech today, you probably don’t,” says Philip. “We’re still in the middle of landing the airplane of all these new devices.”

(1) Transition from PC-based to standalone mobile VR devices

Historically, VR devices have relied on PC connections, usually involving wires and clunky hardware that restrict a user’s field of motion. However, as VR enters the dematerialization stage, we are about to witness the rapid rise of a standalone and highly mobile VR experience economy.

Oculus Go, the leading standalone mobile VR device on the market, requires only a mobile app for setup and can be transported anywhere with WiFi.

With a consumer audience in mind, the 32GB headset is priced at $200 and shares an app ecosystem with Samsung’s Gear VR. While Google’s Daydream headsets also free users from a PC, they require a docked smartphone rather than the built-in screen of the Oculus Go.

In the AR space, Microsoft’s HoloLens 2 leads the way in providing tetherless experiences.

Freeing headsets from the constraints of heavy hardware will make VR/AR increasingly interactive and transportable, a seamless add-on whenever, wherever. Within a matter of years, it may be as simple as carrying lightweight VR goggles wherever you go and throwing them on at a moment’s notice.

(2) Wide field-of-view AR displays

Microsoft’s HoloLens 2 leads the AR industry in headset comfort and display quality. The most significant issue with its prior version was the limited rectangular field of view (FOV).

By using lasers to drive a microelectromechanical systems (MEMS) display, however, the HoloLens 2 can position waveguides in front of users’ eyes, directed by mirrors. The image can then be enlarged by shifting the angles of those mirrors. Coupled with a resolution of 47 pixels per degree, the HoloLens 2 has doubled its predecessor’s FOV. Microsoft anticipates releasing the headset by the end of this year at a $3,500 price point, first targeting businesses and eventually rolling it out to consumers.

Magic Leap provides a similar FOV but with lower resolution than the HoloLens 2. The Meta 2 boasts an even wider 90-degree FOV, but requires a cable attachment. The race to achieve the natural human 120-degree horizontal FOV continues.

“The technology to expand the field of view is going to make those devices much more usable by giving you something bigger than a small box to look through,” Rosedale explains.

(3) Mapping of real world to enable persistent AR ‘mirror worlds’

‘Mirror worlds’ are alternative dimensions of reality that can blanket a physical space. While seated in your office, the floor beneath you could dissolve into a calm lake and each desk into a sailboat. In the classroom, mirror worlds would convert pencils into magic wands and tabletops into touch screens.

Pokémon Go provides an introductory glimpse into the mirror world concept and its massive potential to unite people in real action.

To create these mirror worlds, AR headsets must precisely understand the architecture of the surrounding world. Rosedale predicts the scanning accuracy of devices will improve rapidly over the next five years to make these alternate dimensions possible.

(4) 5G mobile devices reduce latency to imperceptible levels

Verizon has already launched 5G networks in Minneapolis and Chicago, compatible with the Moto Z3. Sprint plans to follow with its own 5G launch in May. Samsung, LG, Huawei, and ZTE have all announced upcoming 5G devices.

“5G is rolling out this year and it’s going to materially affect particularly my work, which is making you feel like you’re talking to somebody else directly face to face,” explains Rosedale. “5G is critical because currently the cell devices impose too much delay, so it doesn’t feel real to talk to somebody face to face on these devices.”

To operate seamlessly from anywhere on the planet, standalone VR/AR devices will require a strong 5G network. Enhancing real-time connectivity in VR/AR will transform the communication methods of tomorrow.

(5) Eye-tracking and facial expressions built in for full natural communication

Companies like Pupil Labs and Tobii provide eye tracking hardware add-ons and software to VR/AR headsets. This technology allows for foveated rendering, which renders a given scene in high resolution only in the fovea region, while the peripheral regions appear in lower resolution, conserving processing power.
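The core trick is easy to sketch: render at full resolution only near the gaze point and progressively coarser toward the periphery. In the minimal sketch below, the thresholds and falloff values are illustrative assumptions, not figures from any particular headset:

```python
import math

def resolution_scale(pixel_xy, gaze_xy, fovea_radius=100, falloff=400):
    """Return the fraction of full resolution to use at this pixel,
    given the current gaze point (all distances in screen pixels)."""
    dist = math.dist(pixel_xy, gaze_xy)
    if dist <= fovea_radius:
        return 1.0   # fovea region: full resolution
    # Linear falloff toward a coarse floor in the far periphery.
    return max(0.25, 1.0 - (dist - fovea_radius) / falloff)

print(resolution_scale((960, 540), (1000, 500)))  # near gaze -> 1.0
print(resolution_scale((100, 100), (1000, 500)))  # periphery -> 0.25
```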

As seen in the HoloLens 2, eye tracking can also be used to identify users and customize lens widths to provide a comfortable, personalized experience for each individual.

According to Rosedale, “The fundamental opportunity for both VR and AR is to improve human communication.” He points out that current VR/AR headsets miss many of the subtle yet important aspects of communication. Eye movements and microexpressions provide valuable insight into a user’s emotions and desires.

Coupled with emotion-detecting AI software, such as Affectiva, VR/AR devices might soon convey much more richly textured and expressive interactions between any two people, transcending physical boundaries and even language gaps.

Final Thoughts
As these promising trends begin to transform the market, VR/AR will undoubtedly revolutionize our lives… possibly to the point at which our virtual worlds become just as consequential and enriching as our physical world.

A boon for next-gen education, VR/AR will empower youth and adults alike with holistic learning that incorporates social, emotional, and creative components through visceral experiences, storytelling, and simulation. Traveling to another time, manipulating the insides of a cell, or even designing a new city will become daily phenomena of tomorrow’s classrooms.

In real estate, buyers will increasingly make decisions through virtual tours. Corporate offices might evolve into spaces that only exist in ‘mirror worlds’ or grow virtual duplicates for remote workers.

In healthcare, accuracy of diagnosis will skyrocket, while surgeons gain access to digital aids as they conduct life-saving procedures. Or take manufacturing, wherein training and assembly will become exponentially more efficient as visual cues guide complex tasks.

In the mere matter of a decade, VR and AR will unlock limitless applications for new and converging industries. And as virtual worlds converge with AI, 3D printing, computing advancements and beyond, today’s experience economies will explode in scale and scope. Prepare yourself for the exciting disruption ahead!

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements, and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Mariia Korneeva / Shutterstock.com

Posted in Human Robots

#434818 Watch These Robots Do Tasks You Thought ...

Robots have been masters of manufacturing at speed and precision for decades, but give them a seemingly simple task like stacking shelves, and they quickly get stuck. That’s changing, though, as engineers build systems that can take on the deceptively tricky tasks most humans can do with their eyes closed.

Boston Dynamics is famous for dramatic reveals of robots performing mind-blowing feats that also leave you scratching your head as to what the market is—think the bipedal Atlas doing backflips or Spot the galloping robot dog.

Last week, the company released a video of a robot called Handle that looks like an ostrich on wheels carrying out the seemingly mundane task of stacking boxes in a warehouse.

It might seem like a step backward, but this is exactly the kind of practical task robots have long struggled with. While the speed and precision of industrial robots have seen them take over many functions in modern factories, they’re generally limited to highly prescribed tasks carried out in meticulously controlled environments.

That’s because despite their mechanical sophistication, most are still surprisingly dumb. They can carry out precision welding on a car or rapidly assemble electronics, but only by rigidly following a prescribed set of motions. Moving cardboard boxes around a warehouse might seem simple to a human, but it actually involves a variety of tasks machines still find pretty difficult—perceiving your surroundings, navigating, and interacting with objects in a dynamic environment.

But the release of this video suggests Boston Dynamics thinks these kinds of applications are close to prime time. Last week the company doubled down by announcing the acquisition of start-up Kinema Systems, which builds computer vision systems for robots working in warehouses.

It’s not the only company making strides in this area. On the same day the video went live, Google unveiled a robot arm called TossingBot that can pick random objects from a box and quickly toss them into another container beyond its reach, which could prove very useful for sorting items in a warehouse. The machine can train on new objects in just an hour or two, and can pick and toss up to 500 items an hour with better accuracy than any of the humans who tried the task.

And an apple-picking robot built by Abundant Robotics is currently on New Zealand farms navigating between rows of apple trees using LIDAR and computer vision to single out ripe apples before using a vacuum tube to suck them off the tree.

In most cases, advances in machine learning and computer vision brought about by the recent AI boom are the keys to these rapidly improving capabilities. Robots have historically had to be painstakingly programmed by humans to solve each new task, but deep learning is making it possible for them to quickly train themselves on a variety of perception, navigation, and dexterity tasks.

It’s not been simple, though, and the application of deep learning in robotics has lagged behind other areas. A major limitation is that the process typically requires huge amounts of training data. That’s fine when you’re dealing with image classification, but when that data needs to be generated by real-world robots it can make the approach impractical. Simulations offer the possibility to run this training faster than real time, but it’s proved difficult to translate policies learned in virtual environments into the real world.

Recent years have seen significant progress on these fronts, though, and the increasing integration of modern machine learning with robotics. In October, OpenAI imbued a robotic hand with human-level dexterity by training an algorithm in a simulation using reinforcement learning before transferring it to the real-world device. The key to ensuring the translation went smoothly was injecting random noise into the simulation to mimic some of the unpredictability of the real world.
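Here’s a minimal sketch of that randomization idea, often called domain randomization: each training episode runs under randomly perturbed physics and sensing, so the policy can’t overfit to one exact simulator. The parameter names and ranges below are invented for illustration, and `make_sim` is a hypothetical constructor, not OpenAI’s actual code.

```python
import random

def randomized_sim_params():
    """Draw a fresh set of perturbed simulator parameters per episode."""
    return {
        "object_mass": random.uniform(0.05, 0.5),                # kg
        "surface_friction": random.uniform(0.5, 1.5),
        "motor_gain": random.gauss(1.0, 0.1),
        "camera_offset_m": [random.gauss(0.0, 0.003) for _ in range(3)],
    }

for episode in range(3):
    params = randomized_sim_params()
    # env = make_sim(**params)  # hypothetical: build the perturbed simulator
    print(f"episode {episode}: {params}")
```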

And just a couple of weeks ago, MIT researchers demonstrated a new technique that let a robot arm learn to manipulate new objects with far less training data than is usually required. By getting the algorithm to focus on a few key points on the object necessary for picking it up, the system could learn to pick up a previously unseen object after seeing only a few dozen examples (rather than the hundreds or thousands typically required).

How quickly these innovations will trickle down to practical applications remains to be seen, but a number of startups as well as logistics behemoth Amazon are developing robots designed to flexibly pick and place the wide variety of items found in your average warehouse.

Whether the economics of using robots to replace humans at these kinds of menial tasks makes sense yet is still unclear. The collapse of collaborative robotics pioneer Rethink Robotics last year suggests there are still plenty of challenges.

But at the same time, the number of robotic warehouses is expected to leap from 4,000 today to 50,000 by 2025. It may not be long until robots are muscling in on tasks we’ve long assumed only humans could do.

Image Credit: Visual Generation / Shutterstock.com

Posted in Human Robots