#431203 Could We Build a Blade Runner-Style ...
The new Blade Runner sequel will return us to a world where sophisticated androids made with organic body parts can match the strength and emotions of their human creators. As someone who builds biologically inspired robots, I’m interested in whether our own technology will ever come close to matching the “replicants” of Blade Runner 2049.
The reality is that we’re a very long way from building robots with human-like abilities. But advances in so-called soft robotics show a promising way forward for technology that could be a new basis for the androids of the future.
From a scientific point of view, the real challenge is replicating the complexity of the human body. Each one of us is made up of trillions of cells, and we have no idea how to build a machine of such complexity, let alone one indistinguishable from a human. The most complex machines today, such as the world’s largest airliner, the Airbus A380, are composed of millions of parts. To match the complexity of a human, we would need to scale that up about a million times.
There are currently three ways in which engineering is blurring the border between humans and robots. Unfortunately, these approaches are only starting points, and none of them comes close to the world of Blade Runner.
First, there are human-like robots built from scratch: artificial sensors, motors, and computers assembled to resemble the human body and its motion. However, extending today’s humanoid robots will not bring Blade Runner-style androids much closer, because every artificial component, from sensors to motors, is still hopelessly primitive compared to its biological counterpart.
Second, there is cyborg technology, where the human body is enhanced with machines such as robotic limbs and wearable or implantable devices. This technology is similarly far from matching our own body parts.
Finally, there is genetic manipulation, where an organism’s genetic code is altered to modify its body. Although we can identify and manipulate individual genes, we still have a limited understanding of how an entire human emerges from genetic code, so we don’t know to what degree we could actually program it to design everything we wish.
Soft robotics: a way forward?
But we might be able to move robotics closer to the world of Blade Runner by pursuing other technologies and, in particular, by turning to nature for inspiration. The field of soft robotics is a good example. In the last decade or so, robotics researchers have been making considerable efforts to make robots soft, deformable, squishable, and flexible.
This technology is inspired by the fact that about 90% of the human body is made from soft substances such as skin, hair, and tissue. Most of the body’s fundamental functions rely on soft parts that can change shape: the heart and lungs pump fluids around the body by deforming, and the lenses of our eyes change shape to focus. Cells even change shape to trigger division, self-healing and, ultimately, the evolution of the body.
The softness of our bodies underpins all the functions that keep us alive, so being able to build soft machines would bring us at least a step closer to the robotic world of Blade Runner. Recent advances include artificial hearts made of soft functional materials that pump fluid by deforming, soft wearable gloves that strengthen the wearer’s grasp, and “epidermal electronics” that let us tattoo electronic circuits onto our skin.
Softness is the keyword that brings humans and technology closer together. Once sensors, motors, and computers become soft, they can suddenly be integrated into the human body, and the border between us and external devices turns ambiguous, just as soft contact lenses become part of our eyes.
Nevertheless, the hardest challenge is making the individual parts of a soft robot body physically adaptable: able to self-heal, grow, and differentiate. After all, in biological systems every part of an organism is itself alive, which is what makes our bodies so adaptable and evolvable; machines with that property would be truly indistinguishable from ourselves.
It is impossible to predict when the robotic world of Blade Runner might arrive, and if it does, it will probably be far in the future. But as long as the desire to build machines indistinguishable from humans persists, current trends in robotics could one day make that dream achievable.
This article was originally published on The Conversation. Read the original article.
Image Credit: Dariush M / Shutterstock.com
#431186 The Coming Creativity Explosion Belongs ...
Does creativity make human intelligence special?
It may appear so at first glance. Though machines can calculate, analyze, and even perceive, creativity may seem far out of reach. Perhaps this is because we find it mysterious, even in ourselves. How can the output of a machine be anything more than that which is determined by its programmers?
Increasingly, however, artificial intelligence is moving into creativity’s hallowed domain, from art to industry. And though much is already possible, the future is sure to bring ever more creative machines.
What Is Machine Creativity?
Robotic art is just one example of machine creativity, a rapidly growing sub-field that sits somewhere between the study of artificial intelligence and human psychology.
The winning paintings from the 2017 Robot Art Competition are strikingly reminiscent of those showcased each spring at university exhibitions for graduating art students. Like the works produced by skilled artists, the compositions dreamed up by the competition’s robotic painters are aesthetically ambitious. One robot-made painting features a man’s bearded face gazing intently out from the canvas, his eyes locking with the viewer’s. Another abstract painting, “inspired” by data from EEG signals, visually depicts the human emotion of misery with jagged, gloomy stripes of black and purple.
More broadly, a creative machine is software (sometimes encased in a robotic body) that synthesizes inputs to generate new and valuable ideas, solutions to complex scientific problems, or original works of art. In a process similar to that followed by a human artist or scientist, a creative machine begins its work by framing a problem. Next, its software specifies the requirements the solution should have before generating “answers” in the form of original designs, patterns, or some other form of output.
Although the notion of machine creativity sounds a bit like science fiction, the basic concept is one that has been slowly developing for decades.
Nearly 50 years ago, while still a high school student, inventor and futurist Ray Kurzweil created software that could analyze the patterns in musical compositions and then compose new melodies in a similar style. AARON, one of the world’s most famous painting robots, has been hard at work since the 1970s.
Industrial designers have for decades used automated, algorithm-driven processes to design computer chips and machine parts whose layout or form is optimized for a particular function or environment. Emily Howell, a computer program created by David Cope, writes original works in the style of classical composers, some of which have been performed by human orchestras for live audiences.
What’s different about today’s new and emerging generation of robotic artists, scientists, composers, authors, and product designers is their ubiquity and power.
I’ve already mentioned the rapidly advancing fields of robotic art and music. In the realm of scientific research, so-called “robotic scientists” such as Eureqa and Adam and Eve develop new scientific hypotheses; their “insights” have contributed to breakthroughs that are cited by hundreds of academic research papers. In the medical industry, creative machines are hard at work creating chemical compounds for new pharmaceuticals. After it read over seven million words of 20th century English poetry, a neural network developed by researcher Jack Hopkins learned to write passable poetry in a number of different styles and meters.
The recent explosion of artificial creativity has been enabled by the rapid maturation of the same exponential technologies that have already re-drawn our daily lives, including faster processors, ubiquitous sensors and wireless networks, and better algorithms.
As they continue to improve, creative machines—like humans—will take on a broad spectrum of creative activities, from everyday problem solving (sometimes known as “Little C” creativity) to producing once-in-a-century masterpieces (“Big C” creativity). A creative machine’s outputs could be anything from a cast design for a marble sculpture to a schematic blueprint for a clever new gadget for opening bottles of wine.
In the coming decades, by automating the process of solving complex problems, creative machines will again transform our world. Creative machines will serve as a versatile source of on-demand talent.
In the battle to recruit a workforce that can solve complex problems, creative machines will put small businesses on equal footing with large corporations. Art and music lovers will enjoy fresh creative works that re-interpret the style of ancient disciplines. People with health conditions will benefit from individualized medical treatments, and low-income people will receive top-notch legal advice, to name but a few potentially beneficial applications.
How Can We Make Creative Machines, Unless We Understand Our Own Creativity?
One of the most intriguing—yet unsettling—aspects of watching robotic arms skillfully paint in oils is that we humans still do not understand our own creative process. Over the centuries, different civilizations have devised a variety of models to explain creativity.
The ancient Greeks believed that poets drew inspiration from a transcendent realm parallel to the material world where ideas could take root and flourish. In the Middle Ages, philosophers and poets attributed our peculiarly human ability to “make something of nothing” to an external source, namely divine inspiration. Modern academic study of human creativity has generated vast reams of scholarship, but despite the value of these insights, the human imagination remains a great mystery, second only to that of consciousness.
Today, the rise of machine creativity demonstrates (once again) that we do not have to fully understand a biological process in order to emulate it with advanced technology.
Past experience has shown that jet planes fly higher and faster than birds by using the forward thrust of an engine rather than flapping wings. Submarines propel themselves underwater without fins or a tail. Deep learning neural networks identify objects in randomly selected photographs with superhuman accuracy. Similarly, using fairly straightforward software architectures, creative software (sometimes paired with a robotic body) can paint, write, hypothesize, or design with impressive originality, skill, and boldness.
At the heart of machine creativity is simple iteration. No matter what sort of output they produce, creative machines fall into one of three categories depending on their internal architecture.
Briefly, the first group consists of software programs that use traditional rule-based, or symbolic, AI; the second uses evolutionary algorithms; and the third uses a form of machine learning called deep learning, which has already revolutionized voice and facial recognition software.
1) Symbolic creative machines are the oldest artificial artists and musicians. In this approach—also known as “good old-fashioned AI” (GOFAI), or symbolic AI—the human programmer plays a key role by writing step-by-step instructions that guide the computer through a task. Although symbolic AI is limited in its ability to adapt to environmental changes, a robotic artist programmed this way can still create an impressively wide variety of outputs.
2) Evolutionary algorithms (EA) have been in use for several decades and remain powerful tools for design. In this approach, potential solutions “compete” in a software simulator in a Darwinian process reminiscent of biological evolution. The human programmer specifies a “fitness criterion” that is used to score and rank the solutions generated by the software. The software then generates a “first generation” population of random solutions (which are typically poor in quality), scores this first generation, and selects the top 50% (those random solutions deemed the best “fit”). The software then recombines the “winning” solutions to create the next generation and repeats this process for thousands (and sometimes millions) of generations (a minimal sketch of this loop follows the list below).
3) Generative deep learning (DL) neural networks are the newest of the three architectures; DL is data-hungry and resource-intensive. First, a human programmer “trains” a DL neural network to recognize a particular feature in a dataset, for example, an image of a dog in a stream of digital images. Next, the standard “feed forward” process is reversed, and the network begins to generate the feature, eventually producing new and sometimes original images of (or poetry about) dogs (see the second sketch below). Generative DL networks have tremendous and largely unexplored creative potential and can produce a broad range of original outputs, from paintings to music to poetry.
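To make the evolutionary approach concrete, here is a minimal sketch in Python. It evolves a string toward a fixed target standing in for a real design goal; the target, population size, and mutation rate are illustrative choices of ours, not parameters of any real design system.

```python
import random

TARGET = "creative machine"          # stand-in for a real fitness criterion
POP_SIZE = 100                       # illustrative population size
MUTATION_RATE = 0.05
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Score a candidate by how many characters match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Randomly perturb some characters, mimicking genetic mutation.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

def crossover(a, b):
    # Recombine two "winning" solutions at a random split point.
    point = random.randrange(len(TARGET))
    return a[:point] + b[point:]

# First generation: random (and typically poor) solutions.
population = [
    "".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)
]

generation = 0
while max(fitness(c) for c in population) < len(TARGET):
    # Score the generation and keep the top 50%.
    survivors = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
    # Breed the next generation from the survivors.
    population = [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(POP_SIZE)
    ]
    generation += 1

print(f"Reached the target design in {generation} generations.")
```

Real EA-driven design tools score candidate chip layouts or machine parts in physics simulators rather than comparing strings, but the score-select-recombine loop is the same.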
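The “reverse the feed-forward pass” idea can likewise be sketched in a few lines. The toy network below has random weights standing in for a trained recognizer, and gradient ascent nudges a noise input toward whatever the network scores highly—the same principle behind systems like DeepDream, though real generative networks are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random network standing in for a trained recognizer:
# 64-dimensional input -> 32 hidden units -> 1 "dog-ness" score.
W1 = rng.normal(size=(32, 64))
w2 = rng.normal(size=32)

def score(x):
    # Forward pass: how strongly the network "recognizes" the feature.
    return w2 @ np.tanh(W1 @ x)

def score_gradient(x):
    # Backward pass: how to change the *input* to raise the score.
    h = np.tanh(W1 @ x)
    return W1.T @ ((1.0 - h**2) * w2)

# Start from noise and run gradient ascent on the input,
# shaping it into something the network scores highly.
x = rng.normal(size=64)
for _ in range(200):
    x += 0.01 * score_gradient(x)

print(f"score of random input:    {score(rng.normal(size=64)):.2f}")
print(f"score of generated input: {score(x):.2f}")
```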
The Coming Explosion of Machine Creativity
In the near future, as Moore’s Law continues its work, we will see sophisticated combinations of these three basic architectures. Since the 1950s, artificial intelligence has steadily mastered one human ability after another, and in the process of doing so, has reduced the cost of calculation, analysis, and most recently, perception. When creative software becomes as inexpensive and ubiquitous as analytical software is today, humans will no longer be the only intelligent beings capable of creative work.
This is why I have to bite my tongue when I hear the well-intended (but shortsighted) advice frequently dispensed to young people that they should pursue work that demands creativity to help them “AI-proof” their futures.
Instead, students should gain skills to harness the power of creative machines.
There are two skills in which humans excel that will enable us to remain useful in a world of ever-advancing artificial intelligence. One, the ability to frame and define a complex problem so that it can be handed off to a creative machine to solve. And two, the ability to communicate the value of both the framework and the proposed solution to the other humans involved.
What will happen to people when creative machines begin to capably tread on intellectual ground that was once considered the sole domain of the human mind, and before that, the product of divine inspiration? While machines engaging in Big C creativity—e.g., oil painting and composing new symphonies—tend to garner controversy and make the headlines, I suspect the real world-changing application of machine creativity will be in the realm of everyday problem solving, or Little C. The mainstream emergence of powerful problem-solving tools will help people create abundance where there was once scarcity.
Image Credit: adike / Shutterstock.com
#431175 Servosila introduces Mobile Robots ...
Servosila introduces a new member of its “Engineer” family of robots: a UGV called “Radio Engineer”. This new variant of the well-known backpack-transportable robot features a Software Defined Radio (SDR) payload module integrated into the robotic vehicle.
“Several of our key customers had asked us to enable Electronic Warfare (EW) or Cognitive Radio applications in our robots,” says a company spokesman. “By integrating a Software Defined Radio (SDR) module into our robotic platforms, we cater to both requirements. Radio spectrum analysis, radio signal detection, jamming, and radio relay are important features for EOD robots such as ours. Servosila continues to serve its customers by pushing the boundaries of what their Servosila robots can do. Our partners in the research world and academia will also benefit greatly from the new functionality, which gives them more means of achieving their research goals.”
Photo Credit: Servosila – www.servosila.com
Coupling a programmable mobile robot with a software-defined radio creates a powerful platform for developing innovative applications that mix mobility and artificial intelligence with modern radio technologies. The new robotic radio applications include localized frequency hopping pattern analysis, OFDM waveform recognition, outdoor signal triangulation, cognitive mesh networking, automatic area search for radio emitters, passive or active mobile robotic radars, mobile base stations, mobile radio scanners, and many others.
The robot’s rotating head, which has mounts for external antennae, acts as a pan-and-tilt device, enabling various scanning and tracking applications. The neck of the robotic head is equipped with a pair of highly accurate Servosila-made servos with a pointing precision of 3.0 angular minutes (0.05 degrees). This means the robot can point its antennae with unprecedented accuracy.
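For a rough sense of what such a pan-and-tilt head computes, here is a short Python sketch. The function name, the axis convention, and the quantization step are our own illustrative choices, not Servosila’s API.

```python
import math

ARCMIN = math.radians(1.0 / 60.0)   # one angular minute in radians

def pan_tilt_toward(x, y, z):
    """Pan/tilt angles (radians) that point the antennae at direction (x, y, z).

    Convention (ours, for illustration): x forward, y left, z up.
    """
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt

def quantize(angle, resolution=3.0 * ARCMIN):
    # Snap a commanded angle to the servos' stated pointing precision.
    return round(angle / resolution) * resolution

# Point at an emitter 100 m ahead, 20 m to the left, 5 m up.
pan, tilt = pan_tilt_toward(100.0, 20.0, 5.0)
print(f"pan  = {math.degrees(quantize(pan)):.3f} deg")
print(f"tilt = {math.degrees(quantize(tilt)):.3f} deg")
```

At 100 m range, a 3.0 arcminute step corresponds to roughly 9 cm of lateral displacement, which gives a feel for the tracking resolution on offer.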
Researchers and academics can benefit from the platform’s support for GnuRadio, an open source software framework for developing SDR applications. An on-board Intel i7 computer, capable of executing OpenCL code, is internally connected to the SDR payload module. This makes it possible to execute most existing GnuRadio applications directly on the robot’s on-board computer. Other sensors of the robot, such as a GPS sensor, an IMU, or a thermal vision camera, contribute to sensor fusion algorithms.
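GnuRadio flowgraphs are typically written as small Python programs, so an application destined for the robot’s on-board computer might be sketched as follows. This is a minimal, self-contained example: we substitute a simulated tone for the SDR payload module’s hardware source block, whose actual driver is not documented in the press release.

```python
from gnuradio import gr, blocks, analog

class SpectrumCapture(gr.top_block):
    """Capture one second of complex baseband samples to a file.

    On the actual robot, the simulated source below would be swapped
    for the SDR payload module's hardware source block (an assumption
    on our part; the real driver block is not named in the release).
    """

    def __init__(self):
        gr.top_block.__init__(self, "spectrum_capture")
        samp_rate = 1_000_000  # 1 MS/s, an illustrative rate

        # Simulated emitter: a 100 kHz complex tone.
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100_000, 1.0)
        # Stop after one second's worth of samples.
        head = blocks.head(gr.sizeof_gr_complex, samp_rate)
        # Write raw samples for offline analysis (e.g., emitter search).
        sink = blocks.file_sink(gr.sizeof_gr_complex, "capture.c64")

        self.connect(src, head, sink)

if __name__ == "__main__":
    SpectrumCapture().run()
```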
Since Servosila “Engineer” mobile robots are primarily designed for outdoor use, the SDR module is fully enclosed within the robot’s hardened body, which protects it from dust, rain, snow, and impacts with obstacles while the robot is on the move. The robot and its SDR payload module are both powered by the on-board battery, making the entire robotic radio platform independent of external power supplies.
Servosila plans to start shipping the SDR-equipped robots to international customers in October 2017.
Web: https://www.servosila.com
YouTube: https://www.youtube.com/user/servosila/videos
About the Company
Servosila is a robotics technology company that designs, produces and markets a range of mobile robots, robotic arms, servo drives, harmonic reduction gears, robotic control systems as well as software packages that make the robots intelligent. Servosila provides consulting, training and operations support services to various customers around the world. The company markets its products and services directly or through a network of partners who provide tailored and localized services that meet specific procurement, support or operational needs.
Press Release above is by: Servosila
The post Servosila introduces Mobile Robots equipped with Software Defined Radio (SDR) payloads appeared first on Roboticmagazine.
#431165 Intel Jumps Into Brain-Like Computing ...
The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain’s hardware could be the next stage in the evolution of computing.
On Monday the company unveiled an experimental “neuromorphic” chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain’s network of neurons and the connections between them called synapses.
While neural networks—the in vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs.
The main benefit of mimicking the architecture of the brain on a physical chip, say neuromorphic computing’s proponents, is energy efficiency—the human brain runs on roughly 20 watts. The “neurons” in neuromorphic chips act as both processor and memory, removing the need to shuttle data back and forth between separate units, as traditional chips must. Each neuron also needs power only while it’s firing.
At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature’s efficiency could allow AI to be run directly on devices like smartphones, cars, and robots.
This is exactly the kind of application Michael Mayberry, managing director of Intel’s research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving.
There’s still a long way to go before that happens though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won’t be built until November.
Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims the chip will be capable of continuous learning.
Intel’s newly launched self-learning neuromorphic chip.
Normally, deep learning works by training a neural network on giant datasets to create a model that is then applied to new data. Loihi will instead combine training and inference on the same chip, allowing it to learn on the fly, constantly updating its models and adapting to changing circumstances without having to be deliberately retrained.
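Intel has not published Loihi’s learning rules in detail, but the flavor of event-driven, on-chip learning can be sketched with a leaky integrate-and-fire neuron and a simple spike-timing-flavored weight update. All constants and the specific update rule below are illustrative assumptions, not Loihi’s actual design.

```python
import numpy as np

rng = np.random.default_rng(42)

N_INPUTS = 8
DT = 1.0          # timestep (ms), illustrative
TAU = 20.0        # membrane time constant (ms), illustrative
THRESHOLD = 1.0   # spike threshold
LEARN_RATE = 0.01

weights = rng.uniform(0.0, 0.2, N_INPUTS)   # synaptic weights
trace = np.zeros(N_INPUTS)                  # recent presynaptic activity
v = 0.0                                     # membrane potential

for t in range(1000):
    # Event-driven input: each input line spikes sparsely at random.
    spikes_in = rng.random(N_INPUTS) < 0.05

    # Leaky integrate-and-fire dynamics: integrate weighted spikes, leak otherwise.
    v += DT / TAU * (-v) + weights @ spikes_in

    # A decaying eligibility trace remembers which inputs fired recently.
    trace = 0.9 * trace + spikes_in

    if v >= THRESHOLD:
        v = 0.0  # reset after the output spike
        # Local, STDP-flavored update: strengthen synapses whose inputs
        # preceded the output spike. Learning happens inline with
        # inference -- no separate training phase.
        weights += LEARN_RATE * trace
        weights = np.clip(weights, 0.0, 1.0)

print("learned weights:", np.round(weights, 3))
```

The point of the sketch is the locality: each weight update uses only information available at that synapse and that neuron, which is what lets learning run continuously on the chip itself.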
A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it’s commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems.
So far neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications.
Lee Gomes summarizes the hurdles facing neuromorphic computing excellently in IEEE Spectrum. One is that deep learning can run on very simple, low-precision hardware that can be optimized to use very little power, which suggests complicated new architectures may struggle to find purchase.
It’s also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes that Loihi won’t work well with some deep learning models.
Finally, there’s considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market and companies like Google and Microsoft are developing their own in-house solutions.
Intel, for its part, isn’t putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess.
What new approaches will get us there are hard to predict, but it’s entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.
Image Credit: Intel