Tag Archives: program
#431371 Amazon Is Quietly Building the Robots of ...
Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics because of a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Clarke, after all, is responsible for a very famous maxim: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? True sci-fi robots won’t get here right away, but the pieces are coming together, and the company best placed to assemble them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly making progress. Notably, Amazon has more than just a dream: it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for, are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow for new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developer of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions towards a general-purpose, multi-functional robot. They built an impressive device, the PR2, and wrote the Robot Operating System (ROS), which is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many workers at the company went into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve difficult problems of full autonomy for the robot, or manipulating its environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? For a start, the company just announced that its profits are up by 30 percent, and it is well known for its Day One philosophy, under which a great deal of those profits are reinvested back into the business. But lots of companies have ambition.
One thing Amazon has that few other corporations have, as well as big financial resources, is viable stepping stones for developing the technologies needed for this sort of robotics to become a reality. They already employ 100,000 robots: these are of the “pragmatic, boring, useful” kind that we’ve profiled, which move shelves around in warehouses. These machines are allowing Amazon to develop the localization and mapping software needed for robots to navigate autonomously in the relatively simple warehouse environment.
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robot hands to machine learning algorithms and having them grasp thousands of objects. Although results were promising, the 10 to 20 percent failure rate for grasps is too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
Another area where humanoid robotics, and especially bipedal locomotion (walking), has been seriously suggested is the last-mile delivery problem. Amazon has shown willingness to be creative in this department with its notorious drone delivery service. The trouble is that it’s all very well to have your self-driving car or van deliver packages to people’s homes, but who carries the package to the doorstep? It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like Cassie, developed at Oregon State, may one day be used to deliver parcels.
Again, no one stands to profit more from cracking this technology than Amazon. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through its website. Unlike companies like Google, Amazon has an easy way to make a direct profit from IoT devices, which could fuel funding.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment, and react accordingly. The way humans learn about our environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras, which is an ideal stepping stone task to train an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between today’s machines that mindlessly assemble cars and our sci-fi vision of robots that can intelligently serve us is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics, and with the unique stepping stones they have along the way, Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com
#431203 Could We Build a Blade Runner-Style ...
The new Blade Runner sequel will return us to a world where sophisticated androids made with organic body parts can match the strength and emotions of their human creators. As someone who builds biologically inspired robots, I’m interested in whether our own technology will ever come close to matching the “replicants” of Blade Runner 2049.
The reality is that we’re a very long way from building robots with human-like abilities. But advances in so-called soft robotics show a promising way forward for technology that could be a new basis for the androids of the future.
From a scientific point of view, the real challenge is replicating the complexity of the human body. Each one of us is made up of trillions of cells, and we have no idea how to build such a complex machine in a way that is indistinguishable from a human. The most complex machines today, for example the world’s largest airliner, the Airbus A380, are composed of millions of parts. To match the complexity of a human, we would need to scale that up by roughly a million times.
There are currently three different ways that engineering is making the border between humans and robots more ambiguous. Unfortunately, these approaches are only starting points and are not yet even close to the world of Blade Runner.
First, there are human-like robots built from scratch by assembling artificial sensors, motors, and computers to resemble the human body and its motion. However, extending today’s human-like robots would not bring Blade Runner-style androids closer to humans, because every artificial component, from sensors to motors, is still hopelessly primitive compared to its biological counterpart.
There is also cyborg technology, where the human body is enhanced with machines such as robotic limbs and wearable and implantable devices. This technology is similarly very far away from matching our own body parts.
Finally, there is the technology of genetic manipulation, where an organism’s genetic code is altered to modify that organism’s body. Although we have been able to identify and manipulate individual genes, we still have a limited understanding of how an entire human emerges from genetic code. As such, we don’t know the degree to which we can actually program code to design everything we wish.
Soft robotics: a way forward?
But we might be able to move robotics closer to the world of Blade Runner by pursuing other technologies and, in particular, by turning to nature for inspiration. The field of soft robotics is a good example. In the last decade or so, robotics researchers have been making considerable efforts to make robots soft, deformable, squishable, and flexible.
This technology is inspired by the fact that 90% of the human body is made from soft substances such as skin, hair, and tissue. Most of the fundamental functions in our body rely on soft parts that can change shape, from the heart and lungs pumping fluid around our body to the lenses of our eyes changing shape to focus. Cells even change shape to trigger division, self-healing and, ultimately, the evolution of the body.
The softness of our bodies is the origin of all the functionality we need to stay alive. So being able to build soft machines would at least bring us a step closer to the robotic world of Blade Runner. Recent technological advances include artificial hearts made of soft functional materials that pump fluid by deforming. Similarly, soft wearable gloves can help make hand grasping stronger. And “epidermal electronics” now let us tattoo electronic circuits onto our skin.
Softness is the keyword that brings humans and technologies closer together. Once sensors, motors, and computers become soft, they can suddenly be integrated into human bodies, and the border between us and external devices becomes ambiguous, just as soft contact lenses became part of our eyes.
Nevertheless, the hardest challenge is how to make the individual parts of a soft robot body physically adaptable: able to self-heal, grow, and differentiate. After all, in biological systems every part of a living organism is itself alive, which is what makes our bodies so adaptable and capable of evolving; machines with that property could become truly indistinguishable from ourselves.
It is impossible to predict when the robotic world of Blade Runner might arrive, and if it does, it will probably be very far in the future. But as long as the desire to build machines indistinguishable from humans is there, the current trends of robotic revolution could make it possible to achieve that dream.
This article was originally published on The Conversation. Read the original article.
Image Credit: Dariush M / Shutterstock.com
#431186 The Coming Creativity Explosion Belongs ...
Does creativity make human intelligence special?
It may appear so at first glance. Though machines can calculate, analyze, and even perceive, creativity may seem far out of reach. Perhaps this is because we find it mysterious, even in ourselves. How can the output of a machine be anything more than that which is determined by its programmers?
Increasingly, however, artificial intelligence is moving into creativity’s hallowed domain, from art to industry. And though much is already possible, the future is sure to bring ever more creative machines.
What Is Machine Creativity?
Robotic art is just one example of machine creativity, a rapidly growing sub-field that sits somewhere between the study of artificial intelligence and human psychology.
The winning paintings from the 2017 Robot Art Competition are strikingly reminiscent of those showcased each spring at university exhibitions for graduating art students. Like the works produced by skilled artists, the compositions dreamed up by the competition’s robotic painters are aesthetically ambitious. One robot-made painting features a man’s bearded face gazing intently out from the canvas, his eyes locking with the viewer’s. Another abstract painting, “inspired” by data from EEG signals, visually depicts the human emotion of misery with jagged, gloomy stripes of black and purple.
More broadly, a creative machine is software (sometimes encased in a robotic body) that synthesizes inputs to generate new and valuable ideas, solutions to complex scientific problems, or original works of art. In a process similar to that followed by a human artist or scientist, a creative machine begins its work by framing a problem. Next, its software specifies the requirements the solution should have before generating “answers” in the form of original designs, patterns, or some other form of output.
Although the notion of machine creativity sounds a bit like science fiction, the basic concept is one that has been slowly developing for decades.
Nearly 50 years ago, while still a high school student, inventor and futurist Ray Kurzweil created software that could analyze the patterns in musical compositions and then compose new melodies in a similar style. Aaron, one of the world’s most famous painting robots, has been hard at work since the 1970s.
Industrial designers have used an automated, algorithm-driven process for decades to design computer chips (or machine parts) whose layout (or form) is optimized for a particular function or environment. Emily Howell, a computer program created by David Cope, writes original works in the style of classical composers, some of which have been performed by human orchestras to live audiences.
What’s different about today’s new and emerging generation of robotic artists, scientists, composers, authors, and product designers is their ubiquity and power.
I’ve already mentioned the rapidly advancing fields of robotic art and music. In the realm of scientific research, so-called “robotic scientists” such as Eureqa and Adam and Eve develop new scientific hypotheses; their “insights” have contributed to breakthroughs that are cited by hundreds of academic research papers. In the medical industry, creative machines are hard at work creating chemical compounds for new pharmaceuticals. After it read over seven million words of 20th century English poetry, a neural network developed by researcher Jack Hopkins learned to write passable poetry in a number of different styles and meters.
The recent explosion of artificial creativity has been enabled by the rapid maturation of the same exponential technologies that have already re-drawn our daily lives, including faster processors, ubiquitous sensors and wireless networks, and better algorithms.
As they continue to improve, creative machines—like humans—will perform a broad range of creative activities, ranging from everyday problem solving (sometimes known as “Little C” creativity) to producing once-in-a-century masterpieces (“Big C” creativity). A creative machine’s outputs could range from a design for a cast for a marble sculpture to a schematic blueprint for a clever new gadget for opening bottles of wine.
In the coming decades, by automating the process of solving complex problems, creative machines will again transform our world. Creative machines will serve as a versatile source of on-demand talent.
In the battle to recruit a workforce that can solve complex problems, creative machines will put small businesses on equal footing with large corporations. Art and music lovers will enjoy fresh creative works that re-interpret the style of ancient disciplines. People with a health condition will benefit from individualized medical treatments, and low-income people will receive top-notch legal advice, to name but a few potentially beneficial applications.
How Can We Make Creative Machines, Unless We Understand Our Own Creativity?
One of the most intriguing—yet unsettling—aspects of watching robotic arms skillfully paint in oils is that we humans still do not understand our own creative process. Over the centuries, several different civilizations have devised a variety of models to explain creativity.
The ancient Greeks believed that poets drew inspiration from a transcendent realm parallel to the material world where ideas could take root and flourish. In the Middle Ages, philosophers and poets attributed our peculiarly human ability to “make something of nothing” to an external source, namely divine inspiration. Modern academic study of human creativity has generated vast reams of scholarship, but despite the value of these insights, the human imagination remains a great mystery, second only to that of consciousness.
Today, the rise of machine creativity demonstrates (once again) that we do not have to fully understand a biological process in order to emulate it with advanced technology.
Past experience has shown that jet planes can fly higher and faster than birds by using the forward thrust of an engine rather than flapping wings. Submarines propel themselves forward underwater without fins or a tail. Deep learning neural networks identify objects in randomly selected photographs with super-human accuracy. Similarly, using a fairly straightforward software architecture, creative software (sometimes paired with a robotic body) can paint, write, hypothesize, or design with impressive originality, skill, and boldness.
At the heart of machine creativity is simple iteration. No matter what sort of output they produce, creative machines fall into one of three categories depending on their internal architecture.
Briefly, the first group consists of software programs that use traditional rule-based, or symbolic, AI; the second group uses evolutionary algorithms; and the third group uses a form of machine learning called deep learning, which has already revolutionized voice and facial recognition software. A minimal code sketch of each approach follows the list below.
1) Symbolic creative machines are the oldest artificial artists and musicians. In this approach—also known as “good old-fashioned AI” (GOFAI), or symbolic AI—the human programmer plays a key role by writing a set of step-by-step instructions to guide the computer through a task. Despite the fact that symbolic AI is limited in its ability to adapt to environmental changes, it’s still possible for a robotic artist programmed this way to create an impressively wide variety of different outputs.
2) Evolutionary algorithms (EA) have been in use for several decades and remain powerful tools for design. In this approach, potential solutions “compete” in a software simulator in a Darwinian process reminiscent of biological evolution. The human programmer specifies a “fitness criterion” that will be used to score and rank the solutions generated by the software. The software then generates a “first generation” population of random solutions (which typically are pretty poor in quality), scores this first generation of solutions, and selects the top 50% (those random solutions deemed to be the best “fit”). The software then takes another pass and recombines the “winning” solutions to create the next generation and repeats this process for thousands (and sometimes millions) of generations.
3) Generative deep learning (DL) neural networks represent the newest of the three architectures, since DL is data-hungry and computationally demanding and has only recently become practical. First, a human programmer “trains” a DL neural network to recognize a particular feature in a dataset, for example, an image of a dog in a stream of digital images. Next, the standard “feed forward” process is reversed and the DL neural network begins to generate the feature, eventually producing new and sometimes original images of (or poetry about) dogs. Generative DL networks have tremendous and largely unexplored creative potential and are able to produce a broad range of original outputs, from paintings to music to poetry.
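To make the three architectures more concrete, here are minimal sketches in Python. They are illustrative toys, not the code behind any of the systems mentioned above. This one shows the rule-based (GOFAI) idea: every compositional decision is hand-written by the programmer, and the program’s output varies only within those rules.

```python
import random

# Hand-written "rules of composition": the programmer encodes the domain
# knowledge directly instead of learning it from data.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]   # stay inside the C-major scale
ALLOWED_STEPS = [-2, -1, 1, 2]                # rule: move only by small steps
CADENCE = ["D", "C"]                          # rule: always end on a simple resolution

def compose_phrase(length=8, seed=None):
    """Generate a short melodic phrase by following the explicit rules above."""
    rng = random.Random(seed)
    idx = SCALE.index("C")                    # rule: start on the tonic
    notes = [SCALE[idx]]
    while len(notes) < length - len(CADENCE):
        idx += rng.choice(ALLOWED_STEPS)
        idx = max(0, min(len(SCALE) - 1, idx))  # clamp so we never leave the scale
        notes.append(SCALE[idx])
    return notes + CADENCE

if __name__ == "__main__":
    print(compose_phrase(seed=7))             # prints an eight-note phrase
```

Because all of the knowledge lives in the hard-coded constants, a symbolic system like this cannot adapt when the task changes, which is exactly the limitation noted above.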
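The evolutionary-algorithm loop can be sketched just as briefly. The target string and character-matching score below are stand-ins for whatever “fitness criterion” a human designer might specify; the generate, score, keep-the-top-50%, recombine cycle is the part that mirrors the description above.

```python
import random

TARGET = "HELLO WORLD"                        # stand-in for a human-specified goal
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """The human-chosen fitness criterion: how many characters match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def random_solution():
    return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

def recombine(parent_a, parent_b, mutation_rate=0.02):
    """Cross over two parents character by character, with occasional mutation."""
    child = [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]
    child = [random.choice(ALPHABET) if random.random() < mutation_rate else c
             for c in child]
    return "".join(child)

def evolve(pop_size=100, generations=1000):
    population = [random_solution() for _ in range(pop_size)]     # first generation
    for generation in range(generations):
        population.sort(key=fitness, reverse=True)                # score and rank
        if population[0] == TARGET:
            return generation, population[0]
        survivors = population[: pop_size // 2]                   # keep the top 50%
        children = [recombine(random.choice(survivors), random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children                         # next generation
    return generations, max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())   # typically reaches the target within a few hundred generations
```

Because the best solutions survive each round, quality ratchets upward even though every individual ingredient is random.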
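The “reversed feed-forward” idea is close to a technique known as activation maximization: take a network trained to recognize a class, then adjust an input image by gradient ascent until the network scores it highly for that class. The sketch below is an assumption-laden toy, not the method used by any system named above: it assumes PyTorch with torchvision 0.13 or later (so the pretrained ResNet-18 weights can be downloaded), and ImageNet class 207 (golden retriever) is chosen purely for illustration.

```python
import torch
from torchvision import models

# Load a classifier trained to *recognize* ImageNet classes, then run the
# process in reverse: nudge the pixels of an image so that the network's
# score for one class keeps rising. The "creative" output is the image itself.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)                   # freeze the recognizer

target_class = 207                            # assumed ImageNet index, for illustration
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]     # how strongly the network "sees" the class
    (-score).backward()                       # gradient ascent on the class score
    optimizer.step()
    image.data.clamp_(0.0, 1.0)               # keep pixel values in a valid range

print("final class score:", float(score))
```

A more careful version would also normalize the input the way the classifier expects and add regularization to keep the image natural-looking; this stripped-down loop only shows the generative direction of travel.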
The Coming Explosion of Machine Creativity
In the near future, as Moore’s Law continues its work, we will see sophisticated combinations of these three basic architectures. Since the 1950s, artificial intelligence has steadily mastered one human ability after another, and in the process has reduced the cost of calculation, analysis, and most recently, perception. When creative software becomes as inexpensive and ubiquitous as analytical software is today, humans will no longer be the only intelligent beings capable of creative work.
This is why I have to bite my tongue when I hear the well-intended (but shortsighted) advice frequently dispensed to young people that they should pursue work that demands creativity to help them “AI-proof” their futures.
Instead, students should gain skills to harness the power of creative machines.
There are two skills in which humans excel that will enable us to remain useful in a world of ever-advancing artificial intelligence. One, the ability to frame and define a complex problem so that it can be handed off to a creative machine to solve. And two, the ability to communicate the value of both the framework and the proposed solution to the other humans involved.
What will happen to people when creative machines begin to capably tread on intellectual ground that was once considered the sole domain of the human mind, and before that, the product of divine inspiration? While machines engaging in Big C creativity—e.g., oil painting and composing new symphonies—tend to garner controversy and make the headlines, I suspect the real world-changing application of machine creativity will be in the realm of everyday problem solving, or Little C. The mainstream emergence of powerful problem-solving tools will help people create abundance where there was once scarcity.
Image Credit: adike / Shutterstock.com
#431181 Workspace Sentry collaborative robotics ...
PRINCETON, NJ, September 13, 2017 – ST Robotics announces the availability of its Workspace Sentry collaborative robotics safety system, specifically designed to meet the International Organization for Standardization (ISO)/Technical Specification (TS) 15066 on collaborative operation. The new ISO/TS 15066, a game changer for the robotics industry, provides guidelines for the design and implementation of a collaborative workspace that reduces risks to people.
The ST Robotics Workspace Sentry robot and area safety system is based on small modules that send infrared beams across the workspace. If a user puts a hand (or any other object) into the workspace, the robot halts, decelerating at a programmable emergency rate. Each module has three beams at different angles, and the distance each beam reaches is adjustable. Two or more modules can be daisy-chained to cover a wider area.
Photo Credit: ST Robotics – www.robot.md
“A robot that is tuned to stop on impact may not be safe. Robots where the trip torque can be set at low thresholds are too slow for any practical industrial application. The best system is where the work area has proximity detectors so the robot stops before impact and that is the approach ST Robotics has taken,” states President and CEO of ST Robotics David Sands.
ST Robotics, widely known for ‘robotics within reach’, has offices in Princeton, New Jersey and Cambridge, England, as well as in Asia. One of the first manufacturers of bench-top robot arms, ST Robotics has been providing the lowest-priced, easy-to-program boxed robots for the past 30 years. ST’s robots are utilized the world over by companies and institutions such as Lockheed-Martin, Motorola, Honeywell, MIT, NASA, Pfizer, Sony and NXP. The numerous applications for ST’s robots benefit the manufacturing, nuclear, pharmaceutical, laboratory and semiconductor industries.
For additional information on ST Robotics, contact:
sales1@strobotics.com
(609) 584 7522
www.strobotics.com
For press inquiries, contact:
Joanne Pransky
World’s First Robotic Psychiatrist®
drjoanne@robot.md
(650) ROBOT-MD
The post Workspace Sentry collaborative robotics safety system appeared first on Roboticmagazine.