Tag Archives: inspired

#431186 The Coming Creativity Explosion Belongs ...

Does creativity make human intelligence special?
It may appear so at first glance. Though machines can calculate, analyze, and even perceive, creativity may seem far out of reach. Perhaps this is because we find it mysterious, even in ourselves. How can the output of a machine be anything more than that which is determined by its programmers?
Increasingly, however, artificial intelligence is moving into creativity’s hallowed domain, from art to industry. And though much is already possible, the future is sure to bring ever more creative machines.
What Is Machine Creativity?
The winning paintings from the 2017 Robot Art Competition are strikingly reminiscent of those showcased each spring at university exhibitions for graduating art students. Like the works produced by skilled artists, the compositions dreamed up by the competition’s robotic painters are aesthetically ambitious. One robot-made painting features a man’s bearded face gazing intently out from the canvas, his eyes locking with the viewer’s. Another abstract painting, “inspired” by data from EEG signals, visually depicts the human emotion of misery with jagged, gloomy stripes of black and purple.
Robotic art is just one example of machine creativity, a rapidly growing sub-field that sits somewhere between the study of artificial intelligence and human psychology.
More broadly, a creative machine is software (sometimes encased in a robotic body) that synthesizes inputs to generate new and valuable ideas, solutions to complex scientific problems, or original works of art. In a process similar to that followed by a human artist or scientist, a creative machine begins by framing a problem. Next, it specifies the requirements a solution should satisfy before generating “answers” in the form of original designs, patterns, or some other kind of output.
Although the notion of machine creativity sounds a bit like science fiction, the basic concept is one that has been slowly developing for decades.
Nearly 50 years ago, while still a high school student, inventor and futurist Ray Kurzweil created software that could analyze the patterns in musical compositions and then compose new melodies in a similar style. Aaron, one of the world’s most famous painting robots, has been hard at work since the 1970s.
For decades, industrial designers have used automated, algorithm-driven processes to design computer chips and machine parts whose layout or form is optimized for a particular function or environment. Emily Howell, a computer program created by David Cope, writes original works in the style of classical composers, some of which have been performed by human orchestras for live audiences.
What’s different about today’s new and emerging generation of robotic artists, scientists, composers, authors, and product designers is their ubiquity and power.

“The recent explosion of artificial creativity has been enabled by the rapid maturation of the same exponential technologies that have already re-drawn our daily lives.”

I’ve already mentioned the rapidly advancing fields of robotic art and music. In the realm of scientific research, so-called “robotic scientists” such as Eureqa, Adam, and Eve develop new scientific hypotheses; their “insights” have contributed to breakthroughs cited by hundreds of academic research papers. In the medical industry, creative machines are hard at work designing chemical compounds for new pharmaceuticals. And after reading over seven million words of 20th-century English poetry, a neural network developed by researcher Jack Hopkins learned to write passable poetry in a number of different styles and meters.
The recent explosion of artificial creativity has been enabled by the rapid maturation of the same exponential technologies that have already re-drawn our daily lives, including faster processors, ubiquitous sensors and wireless networks, and better algorithms.
As they continue to improve, creative machines—like humans—will take on a broad range of creative activities, from everyday problem solving (sometimes known as “Little C” creativity) to producing once-in-a-century masterpieces (“Big C” creativity). A creative machine’s outputs could range from the design of a cast for a marble sculpture to a schematic blueprint for a clever new gadget for opening bottles of wine.
In the coming decades, by automating the process of solving complex problems, creative machines will transform our world once again, serving as a versatile source of on-demand talent.
In the battle to recruit a workforce that can solve complex problems, creative machines will put small businesses on equal footing with large corporations. Art and music lovers will enjoy fresh creative works that reinterpret the styles of age-old disciplines. People with health conditions will benefit from individualized medical treatments, and low-income people will receive top-notch legal advice, to name but a few potentially beneficial applications.
How Can We Make Creative Machines, Unless We Understand Our Own Creativity?
One of the most intriguing—yet unsettling—aspects of watching a robotic arm skillfully apply oil paint is that we humans still do not understand our own creative process. Over the centuries, different civilizations have devised a variety of models to explain creativity.
The ancient Greeks believed that poets drew inspiration from a transcendent realm parallel to the material world where ideas could take root and flourish. In the Middle Ages, philosophers and poets attributed our peculiarly human ability to “make something of nothing” to an external source, namely divine inspiration. Modern academic study of human creativity has generated vast reams of scholarship, but despite the value of these insights, the human imagination remains a great mystery, second only to that of consciousness.
Today, the rise of machine creativity demonstrates (once again) that we do not have to fully understand a biological process in order to emulate it with advanced technology.
Jet planes fly higher and faster than birds by using the forward thrust of an engine rather than flapping wings. Submarines propel themselves underwater without fins or a tail. Deep learning neural networks identify objects in randomly selected photographs with superhuman accuracy. Similarly, using fairly straightforward software architectures, creative software (sometimes paired with a robotic body) can paint, write, hypothesize, or design with impressive originality, skill, and boldness.
At the heart of machine creativity is simple iteration. No matter what sort of output they produce, creative machines fall into one of three categories depending on their internal architecture.
Briefly, the first group consists of programs that use traditional rule-based, or symbolic, AI; the second uses evolutionary algorithms; and the third uses deep learning, a form of machine learning that has already revolutionized voice and facial recognition software.
1) Symbolic creative machines are the oldest artificial artists and musicians. In this approach—also known as “good old-fashioned AI” (GOFAI) or symbolic AI—the human programmer plays a key role by writing a set of step-by-step instructions to guide the computer through a task. Although symbolic AI is limited in its ability to adapt to environmental changes, a robotic artist programmed this way can still create an impressively wide variety of outputs (see the first code sketch after this list).
2) Evolutionary algorithms (EAs) have been in use for several decades and remain powerful tools for design. In this approach, potential solutions “compete” in a software simulator in a Darwinian process reminiscent of biological evolution. The human programmer specifies a “fitness criterion” used to score and rank the solutions the software generates. The software creates a first generation of random solutions (typically poor in quality), scores them, and selects the top 50%—those deemed the best “fit.” It then recombines the “winning” solutions to create the next generation and repeats this cycle for thousands (and sometimes millions) of generations (the second sketch below walks through one such loop).
3) Generative deep learning (DL) neural networks are the newest of the three architectures; because DL is data-hungry and resource-intensive, it has only recently become practical. First, a human programmer “trains” a DL neural network to recognize a particular feature in a dataset—for example, an image of a dog in a stream of digital images. Next, the standard “feed-forward” process is reversed, and the network begins to generate the feature—eventually producing new and sometimes original images of (or poetry about) dogs. Generative DL networks have tremendous and largely unexplored creative potential and can produce a broad range of original outputs, from paintings to music to poetry (the third sketch below shows the reversal in miniature).
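To make these categories concrete, here is a toy illustration of the symbolic approach in Python. Every aesthetic decision—the scales, the rhythms, the rule that a melody must resolve to its root note—is an explicit instruction authored by the programmer; the names and musical choices are invented for this sketch, not taken from any actual robotic artist.

```python
import random

# Hand-written aesthetic rules: the defining trait of a symbolic (GOFAI)
# system is that every creative choice below was authored by a human.
SCALES = {
    "bright": ["C", "D", "E", "G", "A"],  # major pentatonic
    "gloomy": ["A", "C", "D", "E", "G"],  # minor pentatonic
}
RHYTHMS = {
    "calm": [1.0, 1.0, 2.0],
    "urgent": [0.5, 0.5, 0.5, 0.5, 1.0],
}

def compose(mood, energy, bars=4):
    """Follow the rules step by step, emitting (pitch, duration) pairs."""
    scale, rhythm = SCALES[mood], RHYTHMS[energy]
    melody = [(random.choice(scale), beat)
              for _ in range(bars) for beat in rhythm]
    melody.append((scale[0], 2.0))  # final rule: resolve to the root note
    return melody

print(compose("gloomy", "urgent"))
```

Even with randomness in the note choices, the output space is bounded by the rules—both the strength and the limitation described above.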
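The evolutionary loop is nearly as compact. This sketch evolves a string toward a target phrase—a stand-in for any design problem—using exactly the cycle just described: generate a random first generation, score it against a human-specified fitness criterion, keep the top 50%, and recombine the winners. The target phrase, population size, and mutation rate are arbitrary choices for illustration.

```python
import random
import string

TARGET = "CREATIVE MACHINE"           # stand-in for a real design specification
GENES = string.ascii_uppercase + " "

def fitness(candidate):
    # The human-specified "fitness criterion": count of matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

def recombine(mom, dad):
    # Uniform crossover between two "winning" parents, plus occasional mutation.
    child = [random.choice(pair) for pair in zip(mom, dad)]
    if random.random() < 0.5:
        child[random.randrange(len(child))] = random.choice(GENES)
    return "".join(child)

# Generation 0: purely random solutions, which are typically poor in quality.
population = ["".join(random.choices(GENES, k=len(TARGET))) for _ in range(200)]
for generation in range(10_000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[: len(population) // 2]   # keep the fittest 50%
    population = [recombine(*random.sample(survivors, 2))
                  for _ in range(len(survivors) * 2)]

print(f"solved in generation {generation}: {population[0]}")
```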
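Finally, the “reversal” behind generative networks can be shown in miniature with nothing but NumPy. A real system would first train the recognizer on data; to keep this sketch self-contained, the weights are random, so only the mechanics carry over: gradient ascent runs on the input rather than on the weights, so the network generates instead of classifying.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy one-hidden-layer "recognizer." In a real generative system these
# weights would be learned from data; random weights keep the sketch short.
W1 = rng.normal(size=(64, 16)) / 8.0  # 64 input "pixels" -> 16 hidden units
W2 = rng.normal(size=16) / 4.0        # hidden units -> scalar "dog-ness" score

def score_and_input_gradient(x):
    """Forward pass, plus the gradient of the score with respect to the input."""
    h = np.tanh(W1.T @ x)
    score = W2 @ h
    grad = W1 @ (W2 * (1.0 - h ** 2))  # chain rule back through tanh to x
    return score, grad

# Reverse the usual flow: start from noise and nudge the input uphill so the
# recognizer's score rises. The "image" is being generated, not classified.
x = rng.normal(size=64) * 0.01
for _ in range(200):
    _, grad = score_and_input_gradient(x)
    x += 0.1 * grad                    # gradient ascent on the input
print(f"final score: {score_and_input_gradient(x)[0]:.2f}")
```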
The Coming Explosion of Machine Creativity
In the near future, as Moore’s Law continues its work, we will see sophisticated combinations of these three basic architectures. Since the 1950s, artificial intelligence has steadily mastered one human ability after another, and in doing so has driven down the cost of calculation, analysis, and, most recently, perception. When creative software becomes as inexpensive and ubiquitous as analytical software is today, humans will no longer be the only intelligent beings capable of creative work.
This is why I have to bite my tongue when I hear the well-intended (but shortsighted) advice frequently dispensed to young people that they should pursue work that demands creativity to help them “AI-proof” their futures.
Instead, students should gain skills to harness the power of creative machines.
There are two skills in which humans excel that will enable us to remain useful in a world of ever-advancing artificial intelligence. One, the ability to frame and define a complex problem so that it can be handed off to a creative machine to solve. And two, the ability to communicate the value of both the framework and the proposed solution to the other humans involved.
What will happen to people when creative machines begin to capably tread on intellectual ground that was once considered the sole domain of the human mind, and before that, the product of divine inspiration? While machines engaging in Big C creativity—e.g., oil painting and composing new symphonies—tend to garner controversy and make the headlines, I suspect the real world-changing application of machine creativity will be in the realm of everyday problem solving, or Little C. The mainstream emergence of powerful problem-solving tools will help people create abundance where there was once scarcity.
Image Credit: adike / Shutterstock.com

Posted in Human Robots

#431165 Intel Jumps Into Brain-Like Computing ...

The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain’s hardware could be the next stage in the evolution of computing.
On Monday the company unveiled an experimental “neuromorphic” chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain’s network of neurons and the connections between them called synapses.
While neural networks—the in-vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs.
The main benefit of mimicking the architecture of the brain on a physical chip, say neuromorphic computing’s proponents, is energy efficiency—the human brain runs on roughly 20 watts. The “neurons” in neuromorphic chips act as both processor and memory, removing the need to shuttle data back and forth between separate units, as conventional chips must. Each neuron also needs power only while it’s firing.
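The computational unit behind that efficiency claim is the spiking neuron. Below is a minimal leaky integrate-and-fire simulation in Python—a standard textbook model with illustrative constants, not Loihi’s actual circuit—showing why an idle neuron costs nothing: no input, no spikes, no work.

```python
import numpy as np

# Illustrative constants; real neuromorphic chips tune these per application.
DT, TAU = 1.0, 20.0           # time step and membrane time constant (ms)
V_THRESH, V_RESET = 1.0, 0.0  # firing threshold and post-spike reset voltage

def simulate(input_current):
    """Return the time steps at which the neuron fires."""
    v, spikes = 0.0, []
    for t, current in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating input.
        v += DT * (current - v) / TAU
        if v >= V_THRESH:     # energy is spent only at these discrete events
            spikes.append(t)
            v = V_RESET
    return spikes

# Forty silent steps, then a sustained stimulus strong enough to cause firing.
stimulus = np.concatenate([np.zeros(40), np.full(120, 1.5)])
print(simulate(stimulus))     # empty during silence, periodic spikes after
```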

At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature’s efficiency could allow AI to be run directly on devices like smartphones, cars, and robots.
This is exactly the kind of application Michael Mayberry, managing director of Intel’s research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving.
There’s still a long way to go before that happens though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won’t be built until November.
Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims that the chip will be capable of continuous learning.
Intel’s newly launched self-learning neuromorphic chip.
Normally, deep learning works by training a neural network on giant datasets to create a model that is then applied to new data. Loihi will combine training and inference on the same chip, allowing it to learn on the fly—constantly updating its models and adapting to changing circumstances without having to be deliberately retrained.
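Set the exotic hardware aside and “learning on the fly” is the familiar idea of online learning: update the model a little with every observation instead of retraining on a batch. Here is a minimal sketch using a plain linear model—nothing like Loihi’s spiking circuitry, but it shows the same contrast with batch training:

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)  # model weights, updated continuously rather than in batches

def observe(x, y, lr=0.05):
    """One on-the-fly update: inference and learning on the same sample."""
    global w
    error = w @ x - y    # inference step: predict, then measure the miss
    w -= lr * error * x  # training step: a single stochastic-gradient update

# A simulated data stream whose underlying rule drifts halfway through.
true_w = np.array([1.0, -2.0, 0.5])
for t in range(2000):
    if t == 1000:
        true_w = np.array([-1.0, 1.0, 2.0])  # the world changes...
    x = rng.normal(size=3)
    observe(x, true_w @ x)                   # ...and the model tracks it

print(np.round(w, 2))  # ends near the new rule, with no explicit retraining
```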
A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it’s commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems.
So far neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications.
Lee Gomes neatly summarizes the hurdles facing neuromorphic computing in IEEE Spectrum. One is that deep learning can run on very simple, low-precision hardware optimized to use very little power, which suggests complicated new architectures may struggle to find purchase.
It’s also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes that Loihi wouldn’t work well with some deep learning models.
Finally, there’s considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market and companies like Google and Microsoft are developing their own in-house solutions.
Intel, for its part, isn’t putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess.
What new approaches will get us there are hard to predict, but it’s entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.
Image Credit: Intel

Posted in Human Robots

#431015 Finish Him! MegaBots’ Giant Robot Duel ...

It began two years ago when MegaBots co-founders Matt Oehrlein and Gui Cavalcanti donned American flags as capes and challenged Suidobashi Heavy Industries to a giant robot duel in a YouTube video that immediately went viral.
The battle proposed: MegaBots’ 15-foot-tall, 12,000-pound MK2 robot vs. Suidobashi’s 9,000-pound robot, KURATAS. Oehrlein and Cavalcanti first discovered KURATAS in a listing on Amazon with a million-dollar price tag.
In an equally flamboyant response video, Suidobashi CEO and founder Kogoro Kurata accepted the challenge. (Yes, he named his robot after himself.) Both parties planned to take a year to prepare their robots for combat.
In the end, it took twice the amount of time. Nonetheless, the battle is going down this September in an undisclosed location in Japan.
Oehrlein shared more about the much-anticipated showdown during our interview at Singularity University’s Global Summit.

Two years since the initial video, MegaBots has now completed the combat-capable MK3 robot, named Eagle Prime. This new 12-ton, 16-foot-tall robot is powered by a 430-horsepower Corvette engine and requires two human pilots.
It’s also the robot they recently shipped to Japan to take on KURATAS.

Building Eagle Prime has been no small feat. With arms and legs that each weigh as much as a car, assembling the robot takes forklifts, cranes, and a lot of caution. Fortress One, MegaBots’ headquarters in Hayward, California, is where the magic happens.
In terms of “weaponry,” Eagle Prime features a giant pneumatic cannon that shoots huge paint cannonballs. Oehrlein warns, “They can shatter all the windows in a car. It’s very powerful.” A logging grapple, which looks like a giant claw and exerts 3,000 pounds of steel-crushing force, has also been added to the robot.
“It’s a combination of range combat, using the paint balls to maybe blind cameras on the other robot or take out sensitive electronics, and then closing in with the claw and trying to disable their systems at close range,” Oehrlein explains.
Safety systems include a cockpit roll cage for the two pilots, five-point safety seatbelt harnesses, neck restraints, helmets, and flame retardant suits.
Co-founder Matt Oehrlein inside the cockpit of MegaBots’ Eagle Prime giant robot.
Oehrlein and Cavalcanti have also spent considerable time inside Eagle Prime practicing battlefield tactics and maneuvering the robot through obstacle courses.
Suidobashi’s robot is a bit shorter and lighter, but also a little faster, so the battle dynamics should be interesting.
You may be thinking, “Why giant dueling robots?”
MegaBots’ grand vision is a full-blown international sports league of giant fighting robots on the scale of Formula One racing. Picture a nostalgic evening sipping a beer (or three) and watching Pacific Rim- and Power Rangers-inspired robots battle—only in real life.
Eagle Prime is, in good humor, a proudly patriotic robot.
“Japan is known as a robotic powerhouse,” says Oehrlein. “I think there’s something interesting about the slightly overconfident American trying to get a foothold in the robotics space and doing it by building a bigger, louder, heavier robot, in true American fashion.”
For safety reasons, no fans will be admitted during the fight. The battle will be posted afterward on MegaBots’ YouTube channel and Facebook page.
We’ll soon find out whether this becomes another American underdog story.
In the meantime, I give my loyalty to MegaBots, and in the words of Mortal Kombat, say, “Finish him!”

Image Credit: MegaBots

Posted in Human Robots

#430955 This Inspiring Teenager Wants to Save ...

It’s not every day you meet a high school student who’s been building functional robots since age 10. Then again, Mihir Garimella is definitely not your average teenager.
When I sat down to interview him recently at Singularity University’s Global Summit, that much was clear.
Mihir’s curiosity about robotics began at age two, when his parents brought home a pet dog—well, a robotic dog. A few years passed with this robotic companion by his side, and Mihir became fascinated with how software and hardware could bring inanimate objects to “life.”
When he was 10, Mihir built a robotic violin tuner called Robo-Mozart to help him address a teacher’s complaints about his always-out-of-tune violin. The robot analyzes the sound of the violin, determines which strings are out of tune, and then uses motors to turn the tuning pegs.
Robo-Mozart and other earlier projects helped Mihir realize he could use robotics to solve real problems. Fast-forward to age 14 and Flybot, a tiny, low-cost emergency response drone that won Mihir top honors in his age category at the 2015 Google Science Fair.

The small drone is propelled by four rotors and is designed to mimic how fruit flies can speedily see and react to surrounding threats. It’s a design idea that hit Mihir when he and his family returned home after a long vacation to discover they had left bananas on their kitchen counter. The house was filled with fruit flies.
After many failed attempts to swat the flies, Mihir started wondering how these tiny creatures with small brains and horrible vision were such masterful escape artists. He began digging through research papers on fruit flies and came to an interesting conclusion.
Since fruit flies can’t see a lot of detail, they compensate by processing visual information very fast—ten times faster than people do.
“That’s what enables them to escape so effectively,” says Mihir.
Escaping a threat for a fruit fly could mean quickly avoiding a fatal swat from a human hand. Applied to a search-and-response drone, the scenario shifts—picture a drone instantaneously detecting and avoiding a falling ceiling while searching for survivors inside a collapsing building.

Now, at 17, Mihir is still pushing Flybot forward. He’s developing software to enable the drone to operate autonomously and hopes it will be able to navigate environments such as a burning building or a structure hit by an earthquake. The drone is also equipped with intelligent sensors to collect spatial data it will use to maneuver around obstacles and detect things like a trapped person or the location of a gas leak.
For everyone concerned about robots eating jobs, Flybot is a perfect example of how technology can aid existing jobs.
Flybot could substitute for a first responder entering a dangerous situation or help a firefighter make a quicker rescue by showing where victims are trapped. With its small and fast design, the drone could also presumably carry out an initial search-and-rescue sweep in just a few minutes.
Mihir is committed to commercializing the product and keeping it within a $250–$500 price range, which is a fraction of the cost of many current emergency response drones. He hopes the low cost will allow the technology to be used in developing countries.
Next month, Mihir starts his freshman year at Stanford, where he plans to keep up his research and create a company to continue work on the drone.
When I asked Mihir what fuels him, he said, “Curiosity is a great skill for inventors. It lets you find inspiration in a lot of places that you may not look. If I had started by trying to build an escape algorithm for these drones, I wouldn’t know where to start. But looking at fruit flies and getting inspired by them, it gave me a really good place to look for inspiration.”
It’s a bit mind-boggling how much Mihir has accomplished by age 17, but I suspect he’s just getting started.
Image Credit: Google Science Fair via YouTube

Posted in Human Robots