Tag Archives: robotics
#429448 Discover the Most Advanced Industrial ...
Machine learning, automated vehicles, additive manufacturing and robotics—all popular news headlines, and all technologies that are changing the way the US and the world make, ship and consume goods. New technologies are developing at an exponentially increasing pace, and organizations are scrambling to stay ahead of them.
At the center of this change lie the companies creating the products of tomorrow.
Whether it’s self-driving commercial trucks or 3D-printed rocket engines, the opportunities for financial success and human progress are greater than ever. Looking to the future, manufacturing will come to include never-before-seen approaches to making things, drawing on methods such as deep learning, biology and human-robot collaboration.
That’s where Singularity University’s Exponential Manufacturing summit comes in.
Last year’s event showed how artificial intelligence is changing research and development, how robots are moving beyond the factory floor to take on new roles, how fundamental shifts in energy markets and supply chains are being brought about by exponential technologies, how additive manufacturing is nearing an atomic level of precision, and how to make sure your organization stays ahead of these technologies to win business and improve the world.
Hosted in Boston, Massachusetts, May 17-19, Exponential Manufacturing is a meetup of 600+ of the world’s most forward-thinking manufacturing leaders, investors and entrepreneurs. These are the people who design and engineer products, control supply chains, bring together high-functioning teams and head industry-leading organizations. Speakers at the event will dive into the topics of deep learning, robotics and cobotics, digital biology, additive manufacturing, nanotechnology and smart energy, among others.
Alongside emcee Will Weisman, Deloitte’s John Hagel will discuss how to innovate in a large organization. Ray Kurzweil will share his predictions for an exponential future. Neil Jacobstein will focus on the limitless possibilities of machine learning. Jay Rogers will share his learnings from the world of rapid prototyping. Hacker entrepreneur Pablos Holman will offer his perspective on what’s truly possible in today’s world. These innovators will be joined by John Werner (Meta), Valerie Buckingham (Carbon), Andre Wegner (Authentise), Deborah Wince-Smith (Council on Competitiveness), Raymond McCauley (Singularity University), Ramez Naam (Singularity University), Vladimir Bulović (MIT), and many others.
Now, more than ever, there is a critical need for companies to take new risks and invest in education simply to stay ahead of emerging technologies. At last year’s Exponential Manufacturing, Ray Kurzweil predicted, “In 2029, AIs will have human levels of language and will be able to pass a valid Turing test. They’ll be indistinguishable from human.” At the same event, Neil Jacobstein said, “It’s not just better, faster, cheaper—it’s different.”
There’s little doubt we’re entering a new era of global business, and the manufacturing industry will help lead the charge. Learn more about our Exponential Manufacturing summit, and join us in Boston this May. As a special thanks for being a Singularity Hub reader, use the code SUHUB2017 during the application process to save up to 15% on current pricing.
Banner Image Credit: Shutterstock
#429447 Art in the Age of AI: How Tech Is ...
Technology has long been considered a resource-liberating mechanism, granting us better access to resources like information, food and energy. Yet what is often overlooked is the revolutionary impact technology can have on our ability to create art.
Many artists are reacting to a world of accelerating change and rapid digitization through their work. Emerging artistic mediums like 3D printing, virtual reality and artificial intelligence are providing artists with unprecedented forms of self-expression. Many are also embracing the rise of intelligent machines and leveraging the man-machine symbiosis to create increasingly powerful works of art. In fact, advances in robotics and AI are challenging the very definition of what it means to be an artist: creating art is no longer exclusive to human beings.
Revolutionary forms of self-expression
Artists’ styles and identities have always been influenced by the eras they live in. Today, technology is pushing the boundaries of creativity and sensory experience.
Some artists are using digital tools to engage their viewers in the artistic experience. Described as a “new artistic language,” Chris Milk’s “The Treachery of Sanctuary” is a stunning example of digital art. The installation uses projections of the participants’ own bodies to explore the creative process through digital birds, letting participants interact with the work and undergo a captivating experience. Without participants, the work of art is incomplete.
Artist Eyal Gever is notable for writing algorithms that simulate “epic events” and then 3D printing the results with the world’s largest 3D printer. Gever believes that by using code and 3D printing, he can bring to life sculptures of explosions or waterfalls that would be impractical to produce by hand.
In fact, Gever is collaborating with NASA and Made In Space on his latest project, #Laugh, to create a visualization of human laughter. This groundbreaking installation will be 3D printed on the International Space Station to become the first piece of artwork ever to be produced in space.
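Gever’s actual pipeline is proprietary, but the general approach of generating printable geometry from code can be illustrated in a few lines. The sketch below samples a mathematical “frozen wave” surface and writes its triangles to an ASCII STL file (a common 3D-printing input format), using only Python’s standard library; the surface function and dimensions are arbitrary illustrative choices.

```python
# Toy illustration (not Gever's actual tooling): turn a surface defined in
# code into a 3D-printable ASCII STL mesh.
import math

def ripple(x, y):
    """Height of a frozen 'wave' at (x, y)."""
    r = math.hypot(x - 5.0, y - 5.0)
    return math.sin(r) + 1.5

def write_stl(path, height, n=40, size=10.0):
    step = size / n
    with open(path, "w") as out:
        out.write("solid ripple\n")
        for i in range(n):
            for j in range(n):
                x0, y0 = i * step, j * step
                x1, y1 = x0 + step, y0 + step
                corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
                # Two triangles per grid cell. Slicers recompute normals,
                # so writing a zero normal is acceptable in practice.
                for tri in (corners[:3], [corners[0], corners[2], corners[3]]):
                    out.write("  facet normal 0 0 0\n    outer loop\n")
                    for x, y in tri:
                        out.write(f"      vertex {x:.4f} {y:.4f} {height(x, y):.4f}\n")
                    out.write("    endloop\n  endfacet\n")
        out.write("endsolid ripple\n")

write_stl("ripple.stl", ripple)
```

A real print would also need side walls and a base to make the mesh watertight, but the sketch shows the essential code-to-geometry step.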
Clearly, artists are no longer limited by traditional tools like paint, stencils or sculpture—they can push their expressive urges further and create increasingly immersive experiences.
And what could possibly be a more immersive experience than virtual reality?
Matteo Zamagni’s “Nature Abstraction” takes viewers on a virtual meditation-like experience through vast, never-ending geometric and fractal patterns. Zamagni says he wants to “show the audience something that is normally invisible to our perceptions, but may be visible otherwise.”
Virtual reality could allow artists themselves to create art in a virtual space. Google Tilt Brush is a program that allows users, regardless of artistic background or experience, to create works of art in a three-dimensional virtual space. Described as “a new perspective in painting,” the Tilt Brush interface allows endless possibilities of artistic production.
As exponentially growing tools like 3D printing and virtual reality become faster, cheaper and more accessible, we will see more renowned and amateur artists turn to them to create, express and capture their imaginations.
Re-defining the artist
Creativity and artistic expression have been considered features exclusive to human intelligence. One of the biggest criticisms of intelligent machines is that they lack the ability to “imagine” and “think” beyond their programming. Soon that may no longer be the case.
Experts are attempting to program intelligent machines that create works of art.
In June 2016, Google launched Magenta, an open-source research project that explores the use of machine learning to create different forms of art, including music and visuals. Magenta will create interfaces and platforms that allow artists with no coding or AI experience to use these tools to create their own work.
Beyond this, Magenta could potentially lead to machines that produce works of art on their own, without the influence of human artists. In many similar projects, researchers are using deep learning techniques to allow AI to create music inspired by the works of Johann Sebastian Bach, compose in the style of the Beatles or write mournful poetry. Their creations are uncanny.
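Magenta’s own models are deep sequence networks, but the core idea behind generative music, learning which notes tend to follow which and then sampling new sequences, can be shown with a deliberately simplified stand-in. The sketch below trains a toy Markov chain on two hard-coded phrases of MIDI note numbers and samples a new melody; it is not Magenta’s API, just an illustration of the principle.

```python
# Toy stand-in for the idea behind generative-music research like Magenta:
# learn which notes tend to follow which, then sample a new melody. Real
# systems use deep sequence models; this is a Markov chain on MIDI notes.
import random
from collections import defaultdict

def train_transitions(melodies):
    """Count how often each note follows another across training melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def sample_melody(counts, start, length=16):
    """Sample a melody note by note, weighting each next note by how often
    it followed the current one in training."""
    note, melody = start, [start]
    for _ in range(length - 1):
        followers = counts.get(note)
        if not followers:
            break
        notes, weights = zip(*followers.items())
        note = random.choices(notes, weights=weights)[0]
        melody.append(note)
    return melody

# Two hard-coded C-major phrases (MIDI note numbers) as a toy corpus.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 64, 60, 62, 64, 62, 60]]
print(sample_melody(train_transitions(corpus), start=60))
```

Replacing the transition table with a recurrent network trained on a large corpus is, in essence, what projects like Magenta do.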
As tech re-defines art, new questions will come up. Who, for example, is the real artist: the programmers who coded the AI or the AI itself? Are the visual productions of Google’s AI a work of self-expression or a coincidental byproduct of complex algorithms? Can a machine really “express” itself if it isn’t conscious? Can robot artists, such as Paul the Robot, be considered creative without the capacity to truly imagine and reflect on their creative output?
But perhaps none of these questions matter. Maybe what matters is not the artist but the viewer. As British-Indian sculptor Anish Kapoor says, “The work itself has a complete circle of meaning and counterpoint. And without your involvement as a viewer, there is no story.”
Disrupting the art world
Technology has not only resulted in more accessible tools for the production of art but has also accelerated the process by which art is funded, marketed and distributed.
In the age of the internet and an increasingly connected world, the impact of an artist is no longer bound by the physical limitations of a gallery. Access to art, and to the production and distribution tools required to leave your artistic mark, is no longer exclusive to the elite or the exceptionally talented. With powerful platforms like social media and crowdfunding campaigns, today’s artists can market their innovative work to the world at low cost.
At the end of the day, giving form to our imagination is an innately human act. All of us have a yearning to express ourselves, whether through words, visuals or music. As new mediums of self-expression become more accessible to all of us, the creative possibilities are infinite.
Image Credit: Artifact Productions/Chris Milk/YouTube
#429386 “Twendy-One”, the Dexterous ...
Twendy-One, a project of Waseda University in Tokyo, is an updated humanoid robot with highly dexterous hands, able to pick up almost any object.
#429389 Visual Processing System
The article below is by our reader Kyle Stuart, who introduces his work:
—————————————————————————————————————
The world’s only optical artificial intelligence system
The visual processing system (VPS) is an artificial intelligence system which operates in a very similar way to the human eye and brain.
Like the human eye and brain, the VPS consists of two key parts: the image sensor and the image processing unit (IPU). These drive the machine outputs, whatever they might be, which can be thought of as the machine’s equivalent of the human body, though that body may have wings and motors, wheels, or just be a PC program or app.
Image Credit: Kyle Stuart. The basic layout of a VPS, depicting the two key components, the image sensor and IPU, along with the output control unit, fixed to a motherboard. The output control unit comprises the robotics circuits used to regulate your machine’s outputs.
The VPS receives optical data from the image sensor, which sends that data to the IPU for processing. The key component of the VPS, the optical capacitor, acts like a transistor in computer chips, processing the optical data and triggering the machine outputs. On sight of your dropped keys, for example, the IPU might trigger a humanoid helper robot’s outputs, its appendages and vocal system, to call out, walk over to your dropped keys, pick them up, and hand them to you.
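The VPS itself is a patented hardware design, so the following is only a software caricature of the sense-process-act loop described above, with hypothetical stand-in classes; nothing here models the optical capacitor itself.

```python
# Software caricature of the sense-process-act loop (hypothetical classes;
# the real VPS is a hardware design built around an optical capacitor).

class ImageSensor:
    """Stand-in camera: yields pre-labeled 'frames' instead of pixels."""
    def __init__(self, frames):
        self._frames = iter(frames)

    def read_frame(self):
        return next(self._frames, None)

class IPU:
    """Stand-in image processing unit: maps a frame to a detected object.
    A real IPU would run recognition on raw optical data."""
    def process(self, frame):
        return frame  # toy version: frames already carry their label

class OutputControlUnit:
    """Stand-in for the robotics circuits driving the machine's outputs."""
    def trigger(self, action):
        print(f"output: {action}")

def vps_loop(sensor, ipu, outputs):
    while (frame := sensor.read_frame()) is not None:
        if ipu.process(frame) == "dropped keys":
            outputs.trigger("call out, walk over, retrieve keys")

vps_loop(ImageSensor(["empty floor", "dropped keys"]), IPU(), OutputControlUnit())
```

The same loop structure applies whatever the output control unit drives, whether appendages, wheels or a PC program.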
This is just one example of how a VPS can be produced for a robotics or automation system. It can be produced for any type of robotics system, to perform any function you can see, in any form you can imagine. You could literally build a VPS to throw mud at grizzly bears with three legs, only when there is a unicorn in view, using a tennis racket attached to a pogo stick; or a drone which monitors traffic, reports incidents, can communicate with stranded motorists, and lands when it notices traffic dying down.
Because the VPS can be produced for such a wide range of systems, it is available to anyone with an idea for a robot as a patent portfolio for licensed manufacture. Should you have an idea for a robot or automation system, you can license the VPS patent portfolio and produce that robot for the market.
www.photo-tranzistor.biz
info@photo-tranzistor.biz
Please call inventor Kyle Stuart in Australia on +61 497 551 391 should you wish to speak to someone.
The post Visual Processing System appeared first on Roboticmagazine.
#429385 Robots Learning To Pick Things Up As ...
Robots Learning To Pick Things Up As Babies Do
Carnegie Mellon Unleashing Lots of Robots to Push, Poke, Grab Objects
Babies learn about their world by pushing and poking objects, putting them in their mouths and throwing them. Carnegie Mellon University scientists are taking a similar approach to teach robots how to recognize and grasp objects around them.
Manipulation remains a major challenge for robots and has become a bottleneck for many applications. But researchers at CMU’s Robotics Institute have shown that by allowing robots to spend hundreds of hours poking, grabbing and otherwise physically interacting with a variety of objects, those robots can teach themselves how to pick up objects.
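The CMU work trains a deep convolutional network on images of real grasp attempts; the runnable toy below, with an invented one-dimensional “physics,” shows just the self-supervision loop: act randomly, label your own outcomes, and fit a success predictor.

```python
# Toy simulation of the self-supervised idea (the real CMU system trains a
# deep network on images from physical robots; everything below is a
# simplified stand-in): try random grasps, label each one by whether it
# succeeded, and fit a model predicting success from grasp features.
import random
import numpy as np
from sklearn.linear_model import LogisticRegression

def simulated_grasp(obj_angle, grasp_angle):
    """Toy physics: a grasp tends to succeed when the gripper is roughly
    perpendicular to the object's long axis."""
    return abs(np.cos(grasp_angle - obj_angle)) < 0.3 and random.random() > 0.1

X, y = [], []
for _ in range(2000):  # "hundreds of hours" of poking, in miniature
    obj_angle = random.uniform(0, np.pi)    # what perception would estimate
    grasp_angle = random.uniform(0, np.pi)  # random exploratory action
    # One hand-crafted feature stands in for the features a CNN would learn.
    X.append([abs(np.cos(grasp_angle - obj_angle))])
    y.append(int(simulated_grasp(obj_angle, grasp_angle)))

model = LogisticRegression().fit(np.array(X), np.array(y))

# Use the learned model: for a seen object, pick the angle most likely to work.
obj = 1.0
candidates = np.linspace(0, np.pi, 64)
scores = model.predict_proba(
    np.abs(np.cos(candidates - obj)).reshape(-1, 1))[:, 1]
print("best grasp angle:", candidates[int(np.argmax(scores))])
```

Querying the learned predictor to choose the most promising grasp is what closes the perception-action loop the researchers describe.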
In their latest findings, presented last fall at the European Conference on Computer Vision, they showed that robots gained a deeper visual understanding of objects when they were able to manipulate them.
The researchers, led by Abhinav Gupta, assistant professor of robotics, are now scaling up this approach, with help from a three-year, $1.5 million “focused research award” from Google.
“We will use dozens of different robots, including one- and two-armed robots and even drones, to learn about the world and actions that can be performed in the world,” Gupta said. “The cost of robots has come down significantly in recent years, enabling us to unleash lots of robots to collect an unprecedented amount of data on physical interactions.”
Gupta said the shortcomings of previous approaches to robot manipulation were apparent during the Defense Advanced Research Projects Agency’s Robotics Challenge in 2015. Some of the world’s most advanced robots, designed to respond to natural or manmade emergencies, had difficulty with tasks such as opening doors or unplugging and re-plugging an electrical cable.
“Our robots still cannot understand what they see and their action and manipulation capabilities pale in comparison to those of a two-year-old,” Gupta said.
For decades, visual perception and robotic control have been studied separately. Visual perception developed with little consideration of physical interaction, and most manipulation and planning frameworks can’t cope with perception failures. Gupta predicts that allowing a robot to explore perception and action simultaneously, like a baby, can help overcome these failures.
“Psychological studies have shown that if people can’t affect what they see, their visual understanding of that scene is limited,” said Lerrel Pinto, a Ph.D. student in robotics in Gupta’s research group. “Interaction with the real world exposes a lot of visual dynamics.”
Robots are slow learners, however, requiring hundreds of hours of interaction to learn how to pick up objects. And because robots have previously been expensive and often unreliable, researchers relying on this data-driven approach have long suffered from “data starvation.”
Scaling up the learning process will help address this data shortage. Pinto said much of the work by the CMU group has been done using a two-armed Baxter robot with a simple, two-fingered manipulator. Using more and different robots, including those with more sophisticated hands, will enrich manipulation databases.
Meanwhile, the success of this research approach has inspired other research groups in academia, as well as Google with its own array of robots, to adopt it and help expand these databases even further.
“If you can get the data faster, you can try a lot more things — different software frameworks, different algorithms,” Pinto said. And once one robot learns something, it can be shared with all robots.
In addition to Gupta and Pinto, the research team for the Google-funded project includes Martial Hebert, director of the Robotics Institute; Deva Ramanan, associate professor of robotics; and Ruslan Salakhutdinov, associate professor of machine learning and director of artificial intelligence research at Apple. The Office of Naval Research and the National Science Foundation also sponsor this research.
About Carnegie Mellon University: Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.
5000 Forbes Ave.
Pittsburgh, PA 15213
412-268-2900
Fax: 412-268-6929
Contact: Byron Spice
412-268-9068
bspice@cs.cmu.edu
The post Robots Learning To Pick Things Up As Babies Do appeared first on Roboticmagazine.