Tag Archives: learning
#433506 MIT’s New Robot Taught Itself to Pick ...
Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours on end quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold each item appropriately—mimicking the way a baby gradually learns to use its hands.
Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.
This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.
The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.
This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as their $775m acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in their warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.
Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.
Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.
Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.
But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.
This is why new systems that can potentially train themselves over time or that allow robots to deal with situations they’ve never seen before without mountains of labeled data are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.
The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.
This is the real innovation: a new way of representing objects to be grasped, mapping them out in 3D with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle and then activating a door-handle-grasping subroutine, the DON system builds these spatial maps for every object before classifying or manipulating it, enabling it to deal with a greater range of objects than other approaches can.
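To make the descriptor idea concrete, here is a minimal Python sketch (not the researchers’ code) of how a dense-descriptor representation could support cross-view correspondence: every pixel gets a descriptor vector, and the same physical point on an object should receive a nearly identical descriptor from any viewing angle. The array shapes, function names, and random placeholder data below are illustrative assumptions, not the DON implementation.

```python
import numpy as np

def find_correspondence(desc_a, pixel_a, desc_b):
    """Given dense descriptor images from two views (H x W x D arrays),
    return the pixel in view B whose descriptor is closest to the
    descriptor at pixel_a in view A."""
    target = desc_a[pixel_a[0], pixel_a[1]]              # D-dimensional vector
    distances = np.linalg.norm(desc_b - target, axis=2)  # H x W distance map
    v, u = np.unravel_index(np.argmin(distances), distances.shape)
    return (v, u), distances[v, u]

# Usage with random stand-ins for the network's descriptor output:
desc_view1 = np.random.rand(480, 640, 3)
desc_view2 = np.random.rand(480, 640, 3)
match, dist = find_correspondence(desc_view1, (120, 200), desc_view2)
```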
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
Class-specific descriptors, applied to object features, allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking out my favorite mug when it serves me coffee in the morning.
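As a rough illustration of the difference, the sketch below assumes two stored reference descriptors: a class-specific one for “any mug handle” and an object-specific one for “my mug.” Preferring the instance reference when it matches closely enough is one simple way such descriptors might drive grasp selection; the threshold, names, and matching rule are assumptions for illustration, not the published method.

```python
import numpy as np

def pick_grasp_pixel(desc_image, my_mug_ref, handle_ref, threshold=0.5):
    """Prefer the object-specific reference ("my mug"); otherwise fall back
    to the class-specific one ("any mug handle"). desc_image is H x W x D."""
    for reference in (my_mug_ref, handle_ref):
        distances = np.linalg.norm(desc_image - reference, axis=2)
        v, u = np.unravel_index(np.argmin(distances), distances.shape)
        if distances[v, u] < threshold:   # close enough to count as a match
            return (v, u)
    return None                           # nothing recognizable in this view
```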
Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.
As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.
This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.
Image Credit: Tom Buehler/CSAIL
#433386 What We Have to Gain From Making ...
The borders between the real world and the digital world keep crumbling, and the latter’s importance in both our personal and professional lives keeps growing. Some describe the melding of virtual and real worlds as part of the fourth industrial revolution. Said revolution’s full impact on us as individuals, our companies, communities, and societies is still unknown.
Greg Cross, chief business officer of New Zealand-based AI company Soul Machines, thinks one inescapable consequence of these crumbling borders is people spending more and more time interacting with technology. In a presentation at Singularity University’s Global Summit in San Francisco last month, Cross unveiled Soul Machines’ latest work and shared his views on the current state of human-like AI and where the technology may go in the near future.
Humanizing Technology Interaction
Cross started by introducing Rachel, one of Soul Machines’ “emotionally responsive digital humans.” The company has built 15 different digital humans of various sexes, groups, and ethnicities. Rachel, along with her “sisters” and “brothers,” has a virtual nervous system based on neural networks and biological models of different pathways in the human brain. The system is controlled by virtual neurotransmitters and hormones akin to dopamine, serotonin, and oxytocin, which influence learning and behavior.
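Soul Machines has not published the details of this architecture, so the following is only a toy sketch of the general idea: simulated “neurotransmitter” levels shift with the emotional tone of an interaction and then modulate the agent’s behavior. The class name, update rules, and numbers are assumptions made purely for illustration.

```python
import random

class VirtualNervousSystem:
    """Toy model: internal levels rise and fall with the interaction and
    shape the agent's expressiveness and willingness to explore."""

    def __init__(self):
        self.dopamine = 0.5    # reward / motivation signal
        self.serotonin = 0.5   # mood stability
        self.oxytocin = 0.5    # social bonding and trust

    def observe(self, user_emotion):
        """Nudge internal levels toward the interaction's emotional tone."""
        if user_emotion == "happy":
            self.dopamine = min(1.0, self.dopamine + 0.1)
            self.oxytocin = min(1.0, self.oxytocin + 0.05)
        elif user_emotion == "angry":
            self.serotonin = max(0.0, self.serotonin - 0.1)

    def respond(self):
        """Derive behavior parameters from the current internal state."""
        warmth = 0.5 * (self.oxytocin + self.serotonin)
        curiosity = self.dopamine
        return {"smile_intensity": warmth,
                "try_new_topic": random.random() < curiosity}

# vns = VirtualNervousSystem(); vns.observe("happy"); print(vns.respond())
```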
As a result, each digital human can have its own unique set of “feelings” and responses to interactions. People interact with them via visual and audio sensors, and the machines respond in real time.
“Over the last 20 or 30 years, the way we think about machines and the way we interact with machines has changed,” Cross said. “We’ve always had this view that they should actually be more human-like.”
The realism of the digital humans’ graphic representations comes thanks to the work of Soul Machines’ other co-founder, Dr. Mark Sagar, who has won two Academy Awards for his computer graphics work on movies including James Cameron’s Avatar.
Cross pointed out, for example, that rather than being unrealistically flawless and clear, Rachel’s skin has blemishes and sun spots, just like real human skin would.
The Next Human-Machine Frontier
When people interact with each other face to face, emotional and intellectual engagement both heavily influence the interaction. What would it look like for machines to bring those same emotional and intellectual capacities to our interactions with them, and how would this type of interaction affect the way we use, relate to, and feel about AI?
Cross and his colleagues believe that humanizing artificial intelligence will make the technology more useful to humanity, and prompt people to use AI in more beneficial ways.
“What we think is a very important view as we move forward is that these machines can be more helpful to us. They can be more useful to us. They can be more interesting to us if they’re actually more like us,” Cross said.
It is an approach that seems to resonate with companies and organizations. In the UK, for example, NatWest Bank is testing out Cora as a digital employee to help answer customer queries. In Germany, Daimler Financial Services plans to employ Sarah as something “similar to a personal concierge” for its customers. According to Cross, Daimler is looking at other ways it could deploy digital humans across the organization, from digital service people and digital salespeople to, perhaps one day, digital chauffeurs.
Soul Machines’ latest creation is Will, a digital teacher that can interact with children through a desktop, tablet, or mobile device and help them learn about renewable energy. Cross sees other social uses for digital humans, including potentially serving as doctors to rural communities.
Our Digital Friends—and Twins
Soul Machines is not alone in its quest to humanize technology. It is a direction many technology companies, including the likes of Amazon, also seem to be pursuing. Amazon is working on building a home robot that, according to Bloomberg, “could be a sort of mobile Alexa.”
Finding a more human form for technology seems like a particularly pervasive pursuit in Japan, not just when it comes to its many, many robots but also with virtual assistants like Gatebox.
The Japanese approach was perhaps best summed up by famous android researcher Dr. Hiroshi Ishiguro, who I interviewed last year: “The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction.”
During Cross’s presentation, Rob Nail, CEO and associate founder of Singularity University, joined him on the stage, extending an invitation to Rachel to be SU’s first fully digital faculty member. Rachel accepted, and though she’s the only digital faculty member right now, she predicted this won’t be the case for long.
“In 10 years, all of you will have digital versions of yourself, just like me, to take on specific tasks and make your life a whole lot easier,” she said. “This is great news for me. I’ll have millions of digital friends.”
Image Credit: Soul Machines
#433301 ‘Happiness Tech’ Is On the Rise. Is ...
We often get so fixated on technological progress that we forget it’s merely one component of the entirety of human progress. Technological advancement does not necessarily correlate with increases in human mental well-being.
While cleaner energy, access to education, and higher employment rates can improve quality of life, they do not guarantee happiness and inner peace. Amid what appears to be an increasing abundance of resources and ongoing human progress, we are experiencing a mental health epidemic, with high anxiety and depression rates. This is especially true in the developed world, where we have access to luxuries our ancestors couldn’t even dream of—all the world’s information contained in a device we hold in the palm of our hands, for example.
But as you may have realized through your own experience, technology can make us feel worse instead of better. Social media can become a tool for comparison and a source of debilitating status anxiety. Increased access to goods and services, along with the rise of consumerism, can lead people to choose “stuff” over true sources of meaning and get trapped on a hedonic treadmill of materialism. Tools like artificial intelligence and big data could lead to violation of our privacy and autonomy. The digital world can take us away from the beauty of the present moment.
Understanding Happiness
How we use technology can significantly impact our happiness. In this context, “happiness” refers to a general sense of well-being, gratitude, and inner peace. Even with such a simple definition, it is a state of mind many people will admit they lack.
Eastern philosophies have told us for thousands of years that the problem of human suffering begins with our thoughts and perceptions of the circumstances we are in, as opposed to beginning with the circumstances themselves. As Derren Brown brilliantly points out in Happy: Why More or Less Everything Is Absolutely Fine, “The problem with the modern conception of happiness is that it is seen as some kind of commodity. There is this fantasy that simply by believing in yourself and setting goals you can have anything. But that simply isn’t how life works. The ancients had a much better view of it. They offered an approach of not trying to control things you can’t control, and of lessening your desires and your expectations so you achieve a harmony between what you desire and what you have.”
A core part of becoming happier involves rewiring our minds to adjust our expectations, practice gratitude, escape negative narratives, and live in the present moment.
But can technology help us do that?
Applications for Mental Well-Being
Many doers are asking themselves how they can leverage digital tools to contribute to human happiness.
Meditation and mindfulness are examples of practices we can use to escape the often overwhelming burden of our thoughts and ground our minds into the present. They have become increasingly democratized with the rise of meditation mobile apps, such as Headspace, Gaia, and Calm, that allow millions of people globally to use their phones to learn from experts at a very low cost.
These companies have also partnered with hospitals, airlines, athletic teams, and others that could benefit from increased access to mindfulness and meditation. The popularity of these apps continues to rise as more people recognize their value. This combination of mass technology and ancient wisdom could help transform the collective consciousness.
Sometimes merely reflecting on the sources of joy in our lives and practicing gratitude can contribute to better well-being. Apps such as Happier encourage users to reflect upon and share pleasant moments from their daily lives. Such exercises are based on the understanding that being happy is a “skill” one can build through practice and through scientifically-proven activities, such as writing down a nice thought and sharing your positivity with the world. Many other tools, such as Track Your Happiness and Happstr, allow users to track their happiness, which often serves as a valuable source of data for researchers.
There is also a growing body of knowledge that tells us we can achieve happiness by helping others. This “helper’s high” is a result of our brains producing endorphins after having a positive impact on the lives of others. In many shapes and forms, technology has made it easier than ever to help other people, no matter where they are located. From charitable donations to the rise of social impact organizations, there is an abundance of projects that leverage technology to positively impact individual lives. Platforms like GoVolunteer connect nonprofits with individuals with a variety of skill sets who are looking to gift their abilities to those in need. Kiva enables crowdfunded microloans that can change lives. These are just a handful of examples of a much wider positive paradigm shift.
The Future of Technology for Well-Being
There is no denying that increasingly powerful and immersive technology can be used to better or worsen the human condition. Today’s leaders will not only have to focus on their ability to use technology to solve a problem or generate greater revenue; they will have to ask themselves if their tech solutions are beneficial or detrimental to human well-being. They will also have to remember that more powerful technology does not always translate to happier users. It is also crucial that future generations be equipped with the values required to use increasingly powerful tools responsibly and ethically.
In the Education 2030 report, the Millennium Project envisions a world wherein portable intelligent devices combined with integrated systems for lifelong learning contribute to better well-being. In this vision, “continuous evaluation of individual learning processes designed to prevent people from growing unstable and/or becoming mentally ill, along with programs aimed at eliminating prejudice and hate, could bring about a more beautiful, loving world.”
There is exciting potential for technology to be leveraged to contribute to human happiness at a massive scale. Yet technology shouldn’t consume every aspect of our lives, since a life worth living is often about balance. Sometimes, even if just for a few moments, what would make us happier is disconnecting from technology altogether.
Image Credit: 13_Phunkod / Shutterstock.com
#433292 Video Friday: TORO Humanoid Robot ...
Your weekly selection of awesome robot videos