#434701 3 Practical Solutions to Offset ...
In recent years, the media has sounded the alarm about mass job loss to automation and robotics—some studies predict that up to 50 percent of current jobs or tasks could be automated in coming decades. While this topic has received significant attention, much of the press focuses on potential problems without proposing realistic solutions or considering new opportunities.
The economic impacts of AI, robotics, and automation are complex topics that require a more comprehensive perspective to understand. Is universal basic income, for example, the answer? Many believe so, and there are a number of experiments in progress. But it’s only one strategy, and without a sustainable funding source, universal basic income may not be practical.
As automation continues to accelerate, we’ll need a multi-pronged approach to ease the transition. In short, we need to update broad socioeconomic strategies for a new century of rapid progress. How, then, do we plan practical solutions to support these new strategies?
Take history as a rough guide to the future. Looking back, technology revolutions have three themes in common.
First, past revolutions each produced profound benefits to productivity, increasing human welfare. Second, technological innovation and technology diffusion have accelerated over time, each iteration placing more strain on the human ability to adapt. And third, machines have gradually replaced more elements of human work, with human societies adapting by moving into new forms of work—from agriculture to manufacturing to service, for example.
Public and private solutions, therefore, need to be developed to address each of these three components of change. Let’s explore some practical solutions for each in turn.
Figure 1. Technology’s structural impacts in the 21st century. Refer to Appendix I for quantitative charts and technological examples corresponding to the numbers (1-22) in each slice.
Solution 1: Capture New Opportunities Through Aggressive Investment
The rapid emergence of new technology promises a bounty of opportunity for the twenty-first century’s economic winners. This technological arms race is shaping up to be a global affair, and the winners will be determined in part by who is able to build the future economy fastest and most effectively. Both the private and public sectors have a role to play in stimulating growth.
At the country level, several nations have created competitive strategies to promote research and development investments as automation technologies become more mature.
Germany and China have two of the most notable growth strategies. Germany’s Industrie 4.0 plan targets a 50 percent increase in manufacturing productivity via digital initiatives, while halving the resources required. China’s Made in China 2025 national strategy sets ambitious targets and provides subsidies for domestic innovation and production. It also includes building new concept cities, investing in robotics capabilities, and subsidizing high-tech acquisitions abroad to become the leader in certain high-tech industries. For China, specifically, tech innovation is driven partially by a fear that technology will disrupt social structures and government control.
Such opportunities are not limited to existing economic powers. Estonia’s progress after the breakup of the Soviet Union is a good case study in transitioning to a digital economy. The nation rapidly implemented capitalistic reforms and transformed itself into a technology-centric economy in preparation for a massive tech disruption. Internet access was declared a right in 2000, and the country’s classrooms were outfitted for a digital economy, with coding as a core educational requirement starting at kindergarten. Internet broadband speeds in Estonia are among the fastest in the world. Accordingly, the World Bank now ranks Estonia as a high-income country.
Solution 2: Address Increased Rate of Change With More Nimble Education Systems
Education and training are currently not set up for the speed of change in the modern economy. Schools are still based on a one-time education model, in which early schooling provides the foundation for a single lifelong career. With content becoming obsolete faster and costs rapidly escalating, this system may be unsustainable in the future. To help workers more smoothly transition from one job into another, we need to make education a more nimble, lifelong endeavor.
Primary and university education may still have a role in training foundational thinking and general education, but it will be necessary to curtail the rising price of tuition and increase accessibility. Massive open online courses (MOOCs) and open-enrollment platforms are early demonstrations of what the future of general education may look like: cheap, effective, and flexible.
Georgia Tech’s online Engineering Master’s program (a fraction of the cost of residential tuition) is an early example of making university education more broadly available. Similarly, nanodegrees or microcredentials provided by online education platforms such as Udacity and Coursera can be used for mid-career adjustments at low cost. AI itself may be deployed to supplement the learning process, with applications such as AI-enhanced tutorials or personalized content recommendations backed by machine learning. Recent developments in neuroscience research could further optimize this experience by tailoring content and delivery to the learner’s brain to maximize retention.
Finally, companies looking for more customized skills may take a larger role in education, providing on-the-job training for specific capabilities. One potential model involves partnering with community colleges to create apprenticeship-style learning, where students work part-time in parallel with their education. Siemens has pioneered such a model in four states and is developing a playbook for other companies to do the same.
Solution 3: Enhance Social Safety Nets to Smooth Automation Impacts
If predicted job losses to automation come to fruition, modernizing existing social safety nets will increasingly become a priority. While the issue of safety nets can become quickly politicized, it is worth noting that each prior technological revolution has come with corresponding changes to the social contract (see below).
The evolving social contract (U.S. examples)
– 1842 | Right to strike
– 1924 | Abolish child labor
– 1935 | Right to unionize
– 1938 | 40-hour work week
– 1962, 1974 | Trade adjustment assistance
– 1964 | Pay discrimination prohibited
– 1970 | Health and safety laws
– 21st century | AI and automation adjustment assistance?
Figure 2. Labor laws have historically adjusted as technology and society progressed
Solutions like universal basic income (a no-strings-attached monthly payout to all citizens) are appealing in concept, but difficult to implement as a first measure in countries such as the US or Japan that already carry high debt. Additionally, universal basic income may create disincentives to stay in the labor force. A similar cautionary tale in program design is Trade Adjustment Assistance (TAA), which was designed to protect industries and workers from import competition shocks from globalization but is viewed as a missed opportunity due to insufficient coverage.
A near-term solution could come in the form of graduated wage insurance (compensation for those forced to take a lower-paying job), including health insurance subsidies for individuals directly impacted by automation, with incentives to return to the workforce quickly. Another challenge is the geographic mismatch between workers and jobs, which can be addressed with mobility assistance. Lastly, a training stipend can be issued to individuals as a means to upskill.
Policymakers can also intervene to reverse recent historical trends that have shifted income from labor to capital owners. The balance could be shifted back toward labor by placing higher taxes on capital—an example is the recently proposed “robot tax,” where the tax would fall on the work performed rather than on the individual performing it. That is, if a self-driving car performs a task formerly done by a human, the rideshare company would still pay the tax as if a human were driving.
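To make the mechanics concrete, here is a toy sketch of taxing the task rather than the worker. The rate, wage figure, and function below are invented purely for illustration and reflect no actual proposal:

```python
# Toy illustration of a "robot tax": the levy attaches to the task performed,
# not to who (or what) performs it. All rates and figures are invented.
TASK_TAX_RATE = 0.15  # hypothetical payroll-equivalent rate on labor value

def task_tax(hours: float, human_equivalent_wage: float) -> float:
    """Tax owed on a task, whether a human or a machine performed it."""
    return hours * human_equivalent_wage * TASK_TAX_RATE

# A 30-minute rideshare trip is taxed the same with a human driver
# or a self-driving car standing in for one.
print(task_tax(hours=0.5, human_equivalent_wage=20.0))  # 1.5 either way
```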
Other solutions may involve the distribution of work. Some countries, such as France and Sweden, have experimented with redistributing working hours. The idea is to cap weekly hours so that more people are employed and work is more evenly spread. So far these programs have had mixed results, with lower unemployment but high costs to taxpayers, yet they remain models worth continued testing.
We cannot stop growth, nor should we. But as roles shift in response to this evolution, so should the social contract between the stakeholders. Government will continue to play a critical role as a stabilizing “thumb” in the invisible hand of capitalism, regulating and cushioning against extreme volatility, particularly in labor markets.
However, we already see business leaders taking on some of the role traditionally played by government—thinking about measures to remedy risks of climate change or economic proposals to combat unemployment—in part because of greater agility in adapting to change. Cross-disciplinary collaboration and creative solutions from all parties will be critical in crafting the future economy.
Note: The full paper this article is based on is available here.
Image Credit: Dmitry Kalinovsky / Shutterstock.com
#434685 How Tech Will Let You Learn Anything, ...
Today, over 77 percent of Americans own a smartphone with access to the world’s information and near-limitless learning resources.
Yet nearly 36 million adults in the US are constrained by low literacy skills, excluding them from professional opportunities, prospects of upward mobility, and full engagement with their children’s education.
And beyond its direct impact, low literacy rates affect us all. Improving literacy among adults is predicted to save $230 billion in national healthcare costs and could result in US labor productivity increases of up to 2.5 percent.
Across the board, exponential technologies are making demonetized learning tools, digital training platforms, and literacy solutions more accessible than ever before.
With rising automation and major paradigm shifts underway in the job market, these tools not only promise to make today’s workforce more versatile, but could play an invaluable role in breaking the poverty cycles often associated with low literacy.
Just three years ago, the Barbara Bush Foundation for Family Literacy and the Dollar General Literacy Foundation joined forces to tackle this intractable problem, launching a $7 million Adult Literacy XPRIZE.
Challenging teams to develop smartphone apps that significantly increase literacy skills among adult learners in just 12 months, the competition brought five prize teams to the fore, each targeting multiple demographics across the nation.
Now, after four years of research, prototyping, testing, and evaluation, XPRIZE has just this week announced two grand prize winners: Learning Upgrade and People ForWords.
In this blog, I’ll be exploring the nuts and bolts of our two winning teams and how exponential technologies are beginning to address rapidly shifting workforce demands.
We’ll discuss:
Meeting 100 percent adult literacy rates
Retooling today’s workforce for tomorrow’s job market
Granting the gift of lifelong learning
Let’s dive in.
Adult Literacy XPRIZE
Emphasizing the importance of accessible mediums and scalability, the Adult Literacy XPRIZE called for teams to create mobile solutions that lower the barrier to entry, encourage persistence, develop relevant learning content, and can scale nationally.
Outperforming the competition in two key demographic groups in aggregate—native English speakers and English language learners—teams Learning Upgrade and People ForWords together claimed the prize.
To win, both organizations successfully generated the greatest gains between a pre- and post-test, administered one year apart to learners in a 12-month field test across Los Angeles, Dallas, and Philadelphia.
Prize money in hand, Learning Upgrade and People ForWords are now scaling up their solutions, each targeting a key demographic in America’s pursuit of adult literacy.
Based in San Diego, Learning Upgrade has developed an Android and iOS app that helps students learn English and math through video, songs, and gamification. Offering a total of 21 courses from kindergarten through adult education, Learning Upgrade touts a growing platform of over 900 lessons spanning English, reading, math, and even GED prep.
To further personalize each student’s learning, Learning Upgrade measures time-on-task and builds out formative performance assessments, granting teachers a quantified, real-time view of each student’s progress across both lessons and criteria.
Specialized in English reading skills, Dallas-based People ForWords offers a similarly delocalized model with its mobile game “Codex: Lost Words of Atlantis.” Based on an archaeological adventure storyline, the app features an immersive virtual environment.
Set in the Atlantis Library (now with a 3D rendering underway), Codex takes its students through narrative-peppered lessons covering everything from letter-sound practice to vocabulary reinforcement in a hidden object game.
But while both mobile apps have recruited initial piloting populations, the key to success is scale.
The second phase of the XPRIZE, the $1 million Barbara Bush Foundation Adult Literacy XPRIZE Communities Competition, uses a similar incentive prize structure to drive recruitment. For 15 months, the competition will challenge organizations, communities, and individuals alike to onboard adult learners onto both prize-winning platforms, as well as the apps of fellow finalist teams AmritaCREATE and Cell-Ed.
Each awarded $125,000 for participation in the Communities Competition, AmritaCREATE and Cell-Ed bring yet other nuanced advantages to the table.
While AmritaCREATE curates culturally appropriate e-content relevant to given life skills, Cell-Ed takes a learn-on-the-go approach, offering micro-lessons, on-demand essential skills training, and individualized coaching on any mobile device, no internet required.
Although all these cases target slightly different demographics and problem niches, they converge upon common phenomena: mobility, efficiency, life skill relevance, personalized learning, and practicability.
And what better way to scale these benefits than AI and immersive virtual environments?
In the case of education’s growing mobility, 5G and the explosion of connectivity speeds will continue to drive a learn-anytime-anywhere education model, whereby adult users learn on the fly, untethered to web access or rigid time strictures.
As I’ve explored in a previous blog on AI-crowd collaboration, we might also see the rise of AI learning consultants responsible for processing data on how you learn.
Quantifying and analyzing your interaction with course modules, where you get stuck, where you thrive, and what tools cause you ease or frustration, each user’s AI trainer might then issue personalized recommendations based on crowd feedback.
Adding a human touch, each app’s hired teaching consultants would thereby be freed to track many more students’ progress at once, vetting AI-generated tips and adjustments, and offering life coaching along the way.
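As a rough illustration of what such an AI trainer might do under the hood, here is a minimal sketch in Python. The module names, thresholds, and the ModuleStats structure are hypothetical assumptions, not any platform’s actual API:

```python
# Hypothetical sketch of an AI learning consultant: flag modules where a
# learner lags the crowd baseline and route them to extra help. All names,
# fields, and thresholds here are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class ModuleStats:
    module_id: str
    attempts: int           # how many times this learner has tried the module
    avg_time_sec: float     # learner's average time per attempt
    crowd_pass_sec: float   # median time-to-pass across all learners

def recommend_interventions(history: list[ModuleStats]) -> list[str]:
    """Return IDs of modules where the learner appears to be stuck."""
    flagged = []
    for m in history:
        # Heuristic: stuck if far slower than the crowd's median pass time,
        # or if the module has taken many repeat attempts.
        if m.avg_time_sec > 2 * m.crowd_pass_sec or m.attempts > 3:
            flagged.append(m.module_id)
    return flagged

history = [
    ModuleStats("phonics-07", attempts=5, avg_time_sec=420.0, crowd_pass_sec=150.0),
    ModuleStats("vocab-12", attempts=1, avg_time_sec=95.0, crowd_pass_sec=110.0),
]
print(recommend_interventions(history))  # ['phonics-07'] -> surface to a human coach
```

A production system would presumably learn these thresholds from aggregate learner outcomes rather than hard-coding them.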
Lastly, virtual learning environments—and, one day, immersive VR—will facilitate both speed and retention, two of the most critical constraints as learners age.
As I often reference, people generally remember only 10 percent of what they see, 20 percent of what they hear, and 30 percent of what they read, but over a staggering 90 percent of what they do or experience.
By introducing gamification, immersive testing activities, and visually rich sensory environments, adult literacy platforms have a winning chance at scalability, retention, and user persistence.
Exponential Tools: Training and Retooling a Dynamic Workforce
Beyond literacy, however, virtual and augmented reality have already begun disrupting the professional training market.
As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.
Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.
Then in September of last year, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training.
In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could concentrate a typical six-year aircraft design process into the course of six months, turning physical mockups into CAD-designed virtual replicas.
But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real time.
And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.
Dramatically reducing the time and trouble required for VR pilot testing, they aim to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these piloted devices contain a suite of actuators to simulate everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.
When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.
Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.
But perhaps most urgently, virtual reality will offer an immediate solution to today’s constant industry turnover and large-scale re-education demands.
VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.
Want to become an electric, autonomous vehicle mechanic at age 44? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.
Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.
As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to try their hand at a new industry.
Learn Anything, Anytime, at Any Age
As VR and artificial intelligence converge with demonetized mobile connectivity, we are finally witnessing an era in which no one will be left behind.
Whether in pursuit of fundamental life skills, professional training, linguistic competence, or specialized retooling, users of all ages, career paths, income brackets, and goals are now encouraged to be students, no longer condemned to stagnancy.
Traditional constraints need no longer prevent non-native speakers from gaining an equal foothold, or specialists from pivoting into new professions, or low-income parents from staking new career paths.
As exponential technologies drive democratized access, bolstering initiatives such as the Barbara Bush Foundation Adult Literacy XPRIZE are blazing the trail to make education a scalable priority for all.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Iulia Ghimisli / Shutterstock.com
#434648 The Pediatric AI That Outperformed ...
Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.
Artificial intelligence has taken another step toward becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11th as a letter in Nature Medicine, has demonstrated a natural language processing AI capable of outperforming rookie pediatricians in diagnosing common childhood ailments.
The massive study examined the electronic health records (EHR) from nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.
The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.
Less Like a Computer, More Like a Person
To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs not unlike how a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of placing the information learned from the EHRs into categories to improve performance.
Like traditionally-trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.
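To make the hierarchical idea concrete, here is a minimal two-stage classifier sketch in Python, assuming scikit-learn. The toy notes, organ groups, and diagnosis labels are invented placeholders; the study’s actual MLC was built on NLP-extracted features from Chinese-language EHRs, not this pipeline:

```python
# Illustrative two-stage "organ group -> diagnosis" classifier, mirroring the
# hierarchy described above. Notes, labels, and models are toy placeholders,
# not the study's actual NLP pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "three days of cough, wheeze, and low-grade fever",
    "runny nose, sneezing, and sore throat",
    "vomiting after meals and poor appetite",
    "abdominal pain with watery diarrhea since yesterday",
]
organ_labels = ["respiratory", "respiratory", "gastrointestinal", "gastrointestinal"]
dx_labels = ["bronchitis", "upper respiratory infection",
             "acute gastritis", "acute gastroenteritis"]

# Stage 1: route each note to a major organ group.
router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(notes, organ_labels)

# Stage 2: a separate diagnosis classifier per organ group, trained on its subset.
specialists = {}
for group in set(organ_labels):
    idx = [i for i, g in enumerate(organ_labels) if g == group]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit([notes[i] for i in idx], [dx_labels[i] for i in idx])
    specialists[group] = clf

def diagnose(note: str) -> str:
    group = router.predict([note])[0]             # coarse: organ system
    return specialists[group].predict([note])[0]  # fine: diagnosis within the group

print(diagnose("persistent cough and wheezing"))
```

The design choice mirrors the hierarchy the authors describe: a mistake at the coarse routing stage bounds what the fine-grained specialists can recover, which is why the organ-level split matters.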
When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against an even playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.
Helping, Not Replacing
While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.
That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.
Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.
Closer to Us, But Still Dependent on Us
No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.
In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.
Data, and More Data
That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with their relatively relaxed approach to privacy, they are likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet with their massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.
In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.
Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today, Doctor AI performs as well as a human colleague with more than 10 years of experience. Within a year or so, it may take humans twice as much experience to stay competitive. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.
Image Credit: Nadia Snopek / Shutterstock.com
#434643 Sensors and Machine Learning Are Giving ...
According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.
This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.
Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.
Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.
Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?
New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the constant adaptation of sensors in humans and animals, rather than relying on feedback from a limited number of positions.
The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. A machine-learning algorithm interprets the signals from the randomly distributed sensors: as the finger moves around, it is observed by a motion-capture system. Once the robot’s neural network is trained, it can associate the feedback from the sensors with the position of the finger detected by the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes its soft body can take, and translates them into the language of these soft sensors.
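A minimal sketch of that training loop, assuming PyTorch, with synthetic stand-in data; the sensor count, network size, and logged readings below are illustrative, not the paper’s setup:

```python
# Sketch: learn a mapping from randomly placed soft-sensor signals to the
# fingertip pose recorded by motion capture. Once trained, the network alone
# provides proprioception and the mocap rig can be discarded.
# Sensor count, network size, and data are synthetic placeholders.
import torch
import torch.nn as nn

N_SENSORS, POSE_DIM, N_SAMPLES = 12, 3, 2048

# Stand-in for logged training data: sensor readings paired with mocap poses.
sensor_log = torch.randn(N_SAMPLES, N_SENSORS)
mocap_log = torch.randn(N_SAMPLES, POSE_DIM)

model = nn.Sequential(
    nn.Linear(N_SENSORS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, POSE_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):  # fit pose = f(sensor signals)
    optimizer.zero_grad()
    loss = loss_fn(model(sensor_log), mocap_log)
    loss.backward()
    optimizer.step()

# Deployment: estimate the finger's pose from its sensors alone.
with torch.no_grad():
    estimated_pose = model(torch.randn(1, N_SENSORS))
```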
“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”
The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.
In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.
Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.
Training a robot arm on every individual task takes time and reduces the robot’s adaptability: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system with a “strong sense of self”: a model of its own size, shape, and motions.
They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then repeats a series of a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in the first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.
Again, once the robot has trained itself to interpret these signals and build a robust model of its own body, it’s ready for the next stage. Using that deep-learning model, the researchers then ask the robot to devise strategies to accomplish simple pick-and-place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
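Under heavily simplified assumptions, the loop looks something like the sketch below: motor babbling yields data, a learned forward self-model stands in for the unknown physics, and any reaching task becomes a search over that model. This toy two-joint planar arm and random-search planner are illustrative stand-ins, not Kwiatkowski and Lipson’s actual system:

```python
# Sketch of task-agnostic self-modeling on a toy 2-joint planar arm:
# 1) motor babbling -> (joint angles, end-effector position) pairs,
# 2) fit a forward self-model,
# 3) plan any reaching task by searching the model, with no task-specific training.
# The arm, model, and random search are simplifications for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.8  # link lengths: the "physics" the robot starts out not knowing

def true_arm(thetas):
    """End-effector position of the real arm (the physical system)."""
    t1, t2 = thetas[..., 0], thetas[..., 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

# 1) Random motor babbling: a thousand random joint configurations.
rng = np.random.default_rng(0)
thetas = rng.uniform(-np.pi, np.pi, size=(1000, 2))
positions = true_arm(thetas)

# 2) Fit the self-model: joint angles -> predicted end-effector position.
self_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
self_model.fit(thetas, positions)

# 3) Task-agnostic planning: pick the candidate command whose *predicted*
#    outcome lands closest to the goal -- no new training for this task.
goal = np.array([1.2, 0.5])
candidates = rng.uniform(-np.pi, np.pi, size=(5000, 2))
errors = np.linalg.norm(self_model.predict(candidates) - goal, axis=1)
best = candidates[errors.argmin()]
print("command:", best, "actual landing:", true_arm(best))
```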
Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot were damaged. The robot detected that something was wrong and “reconfigured” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.
Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.
But before they can get out and shape the world, as these studies show, they will need to understand themselves.
Image Credit: jumbojan / Shutterstock.com