
#431671 The Doctor in the Machine: How AI Is ...

Artificial intelligence has received its fair share of hype recently. However, it’s hype that’s well-founded: IDC predicts worldwide spending on AI and cognitive computing will reach a whopping $46 billion (with a “b”) by 2020, and all the tech giants are jumping on board faster than you can say “ROI.” But what is AI, exactly?
According to Hilary Mason, AI today is being misused as a sort of catch-all term to describe “any system that uses data to do anything.” But it’s so much more than that. A truly artificially intelligent system is one that learns on its own, one that’s capable of crunching copious amounts of data in order to create associations and intelligently mimic actual human behavior.
It’s what powers the technology anticipating our next online purchase (Amazon), or the virtual assistant that deciphers our voice commands with incredible accuracy (Siri), or even the hipster-friendly recommendation engine that helps you discover new music before your friends do (Pandora). But AI is moving past these consumer-pleasing “nice-to-haves” and getting down to serious business: saving our butts.
Much in the same way robotics entered manufacturing, AI is making its mark in healthcare by automating mundane, repetitive tasks. This is especially true in the case of detecting cancer. By leveraging the power of deep learning, algorithms can now be trained to distinguish between sets of pixels in an image that represent cancer versus sets that don’t—not unlike how Facebook’s image recognition software tags pictures of our friends without us having to type in their names first. This software can then scour millions of medical images (MRIs, CT scans, etc.) in a single day to detect anomalies on a scale that humans just aren’t capable of. That’s huge.
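At its core, that training process is just iterative parameter fitting. Here is a deliberately tiny sketch of the idea—a one-feature logistic classifier on synthetic “patch intensity” data. Everything here (the data, the single feature, the thresholds) is invented for illustration and bears no resemblance to a production medical-imaging model.

```python
import math
import random

# Toy stand-in for "cancer vs. not" image patches: each patch is
# reduced to one feature, its mean pixel intensity (0.0-1.0).
# Labels: 1 = anomaly, 0 = healthy. Purely synthetic data.
random.seed(0)
data = [(random.uniform(0.6, 1.0), 1) for _ in range(200)] + \
       [(random.uniform(0.0, 0.5), 0) for _ in range(200)]

w, b = 0.0, 0.0   # model parameters
lr = 0.5          # learning rate

def predict(x):
    """Probability the patch is an anomaly (sigmoid of a linear score)."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Stochastic gradient descent on the logistic loss: the model "learns"
# the intensity threshold separating the two classes.
for _ in range(2000):
    for x, y in data:
        p = predict(x)
        w -= lr * (p - y) * x
        b -= lr * (p - y)

print(predict(0.9) > 0.5)  # bright patch -> flagged as anomaly
print(predict(0.1) > 0.5)  # dark patch  -> not flagged
```

Real systems use deep convolutional networks over millions of labeled images rather than one hand-picked feature, but the learning loop—predict, measure error, nudge parameters—is the same shape.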
As if that wasn’t enough, these algorithms are constantly learning and evolving, getting better at making these associations with each new data set that gets fed to them. Radiology, dermatology, and pathology will experience a giant upheaval as tech giants and startups alike jump in to bring these deep learning algorithms to a hospital near you.
In fact, some already are: the FDA recently gave its seal of approval for an AI-powered medical imaging platform that helps doctors analyze and diagnose heart anomalies. This is the first time the FDA has approved a machine learning application for use in a clinical setting.
But how efficient is AI compared to humans, really? Well, aside from the obvious fact that software programs don’t get bored or distracted or have to check Facebook every twenty minutes, AI is exponentially better than us at analyzing data.
Take, for example, IBM’s Watson. Watson analyzed genomic data from both tumor cells and healthy cells and was ultimately able to glean actionable insights in a mere 10 minutes. Compare that to the 160 hours it would have taken a human to analyze that same data. Diagnoses aside, AI is also being leveraged in pharmaceuticals to aid in the very time-consuming grunt work of discovering new drugs, and all the big players are getting involved.
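The back-of-the-envelope speedup implied by those two figures is easy to check:

```python
# Watson: ~10 minutes; human analyst: ~160 hours for the same genomic data.
watson_minutes = 10
human_minutes = 160 * 60  # 160 hours expressed in minutes

speedup = human_minutes / watson_minutes
print(speedup)  # 960.0 -> roughly a thousandfold faster
```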
But AI is far from being just a behind-the-scenes player. Gartner recently predicted that by 2025, 50 percent of the population will rely on AI-powered “virtual personal health assistants” for their routine primary care needs. What this means is that consumer-facing voice and chat-operated “assistants” (think Siri or Cortana) would, in effect, serve as a central hub of interaction for all our connected health devices and the algorithms crunching all our real-time biometric data. These assistants would keep us apprised of our current state of well-being, acting as a sort of digital facilitator for our personal health objectives and an always-on health alert system that would notify us when we actually need to see a physician.
Slowly, and thanks to the tsunami of data and advancements in self-learning algorithms, healthcare is transitioning from a reactive model to more of a preventative model—and it’s completely upending the way care is delivered. Whether Elon Musk’s dystopian outlook on AI holds any weight or not is yet to be determined. But one thing’s certain: for the time being, artificial intelligence is saving our lives.
Image Credit: Jolygon / Shutterstock.com

Posted in Human Robots

#431377 The Farms of the Future Will Be ...

Swarms of drones buzz overhead, while robotic vehicles crawl across the landscape. Orbiting satellites snap high-resolution images of the scene far below. Not one human being can be seen in the pre-dawn glow spreading across the land.
This isn’t some post-apocalyptic vision of the future à la The Terminator. This is a snapshot of the farm of the future. Every phase of the operation—from seed to harvest—may someday be automated, without the need to ever get one’s fingernails dirty.
In fact, it’s science fiction already being engineered into reality. Today, robots empowered with artificial intelligence can zap weeds with preternatural precision, while autonomous tractors move with tireless efficiency across the farmland. Satellites can assess crop health from outer space, providing gobs of data to help produce the sort of business intelligence once accessible only to Fortune 500 companies.
“Precision agriculture is on the brink of a new phase of development involving smart machines that can operate by themselves, which will allow production agriculture to become significantly more efficient. Precision agriculture is becoming robotic agriculture,” said professor Simon Blackmore last year during a conference in Asia on the latest developments in robotic agriculture. Blackmore is head of engineering at Harper Adams University and head of the National Centre for Precision Farming in the UK.
It’s Blackmore’s university that recently showcased what may someday be possible. The project, dubbed Hands Free Hectare and led by researchers from Harper Adams and private industry, farmed one hectare (about 2.5 acres) of spring barley without one person ever setting foot in the field.
The team repurposed, rewired, and roboticized farm equipment ranging from a Japanese tractor to a 25-year-old combine. Drones served as scouts to survey the operation and collect samples to help the team monitor the progress of the barley. At the end of the season, the robo-farmers harvested about 4.5 tons of barley at a price tag of £200,000.

“This project aimed to prove that there’s no technological reason why a field can’t be farmed without humans working the land directly now, and we’ve done that,” said Martin Abell, mechatronics researcher for Precision Decisions, which partnered with Harper Adams, in a press release.
I, Robot Farmer
The Harper Adams experiment is the latest example of how machines are disrupting the agricultural industry. Around the same time that the Hands Free Hectare combine was harvesting barley, Deere & Company announced it would acquire a startup called Blue River Technology for a reported $305 million.
Blue River has developed a “see-and-spray” system that combines computer vision and artificial intelligence to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it can eliminate 90 percent of the chemicals used in conventional agriculture.
It’s not just farmland that’s getting a helping hand from robots. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards.
“Traditional robots were designed to perform very specific tasks over and over again. But the robots that will be used in food and agricultural applications will have to be much more flexible than what we’ve seen in automotive manufacturing plants in order to deal with natural variation in food products or the outdoor environment,” Dan Harburg, an associate at venture capital firm Anterra Capital who previously worked at a Massachusetts-based startup making a robotic arm capable of grabbing fruit, told AgFunder News.
“This means ag-focused robotics startups have to design systems from the ground up, which can take time and money, and their robots have to be able to complete multiple tasks to avoid sitting on the shelf for a significant portion of the year,” he noted.
Eyes in the Sky
It will take more than an army of robotic tractors to grow a successful crop. The farm of the future will rely on drones, satellites, and other airborne instruments to provide data about the crops on the ground.
Companies like Descartes Labs, for instance, employ machine learning to analyze satellite imagery and forecast soy and corn yields. The Los Alamos, New Mexico startup collects five terabytes of data every day from multiple satellite constellations, including those operated by NASA and the European Space Agency. Combined with weather readings and other real-time inputs, Descartes Labs can predict cornfield yields with 99 percent accuracy. Its AI platform can even assess crop health from infrared readings.
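Stripped to its essence, yield forecasting of this kind is regression: map image-derived features to historical yields, then extrapolate. Below is a toy least-squares sketch with invented numbers—not Descartes Labs’ actual model, features, or data.

```python
# Toy version of yield forecasting from satellite data: fit a line
# relating a vegetation-index-style feature (roughly, how "green" a
# field looks in certain wavelengths) to observed yield.
fields = [
    # (vegetation index, observed yield in bushels/acre) -- made up
    (0.55, 140), (0.60, 152), (0.68, 168), (0.72, 178), (0.80, 195),
]

n = len(fields)
mean_x = sum(x for x, _ in fields) / n
mean_y = sum(y for _, y in fields) / n

# Ordinary least squares: slope and intercept of the best-fit line.
slope = sum((x - mean_x) * (y - mean_y) for x, y in fields) / \
        sum((x - mean_x) ** 2 for x, _ in fields)
intercept = mean_y - slope * mean_x

def forecast(veg_index):
    """Predicted yield for a field with the given vegetation index."""
    return slope * veg_index + intercept

print(round(forecast(0.75)))  # predicted yield for a fairly green field
```

The real pipelines fuse many such features (plus weather and soil data) into far richer models, but the principle—learn the feature-to-yield mapping from history—is the same.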
The US agency DARPA recently granted Descartes Labs $1.5 million to monitor and analyze wheat yields in the Middle East and Africa. The idea is that accurate forecasts may help identify regions at risk of crop failure, which could lead to famine and political unrest. Another company called TellusLabs out of Somerville, Massachusetts also employs machine learning algorithms to predict corn and soy yields with similar accuracy from satellite imagery.
Farmers don’t have to reach orbit to get insights on their cropland. A startup in Oakland, Ceres Imaging, produces high-resolution imagery from multispectral cameras flown across fields aboard small planes. The snapshots capture the landscape at different wavelengths, revealing problems like water stress and providing estimates of chlorophyll and nitrogen levels. The geo-tagged images mean farmers can easily locate areas that need to be addressed.
Growing From the Inside
Even the best intelligence—whether from drones, satellites, or machine learning algorithms—will be challenged to predict the unpredictable issues posed by climate change. That’s one reason more and more companies are betting the farm on what’s called controlled environment agriculture. Today, that doesn’t just mean fancy greenhouses, but everything from warehouse-sized, automated vertical farms to grow rooms run by robots, located not in the emptiness of Kansas or Nebraska but smack dab in the middle of the main streets of America.
Proponents of these new concepts argue these high-tech indoor farms can produce much higher yields while drastically reducing water usage and synthetic inputs like fertilizer and herbicides.
Iron Ox, out of San Francisco, is developing one-acre urban greenhouses that will be operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops.
Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce. Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.
“What I can envision is locating a larger scale indoor farm in the economically disadvantaged food desert, in order to stimulate a broader economic impact that could create jobs and generate income for that area,” said Dr. Gary Stutte, an expert in space agriculture and controlled environment agriculture, in an interview with AgFunder News. “The indoor agriculture model is adaptable to becoming an engine for economic growth and food security in both rural and urban food deserts.”
Still, the model is not without its own challenges and criticisms. Most of what these farms can produce falls into the “leafy greens” category and often comes with a premium price, which seems antithetical to the proposed mission of creating oases in the food deserts of cities. And while water usage may be minimized, the electricity required to power the operation, especially the LEDs (which played a huge part in revolutionizing indoor agriculture), is not cheap.
Still, all of these advances, from robo farmers to automated greenhouses, may need to be part of a future where nearly 10 billion people will inhabit the planet by 2050. An oft-quoted statistic from the Food and Agriculture Organization of the United Nations says the world must boost food production by 70 percent to meet the needs of the population. Technology may not save the world, but it will help feed it.
Image Credit: Valentin Valkov / Shutterstock.com


#431343 How Technology Is Driving Us Toward Peak ...

At some point in the future—and in some ways we are already seeing this—the amount of physical stuff moving around the world will peak and begin to decline. By “stuff,” I am referring to liquid fuels, coal, containers on ships, food, raw materials, products, etc.
New technologies are moving us toward “production-at-the-point-of-consumption” of energy, food, and products with reduced reliance on a global supply chain.
The trade of physical stuff has been central to globalization as we’ve known it. So, this declining movement of stuff may signal we are approaching “peak globalization.”
To be clear, even as the movement of stuff may slow, if not decline, the movement of people, information, data, and ideas around the world is growing exponentially and is likely to continue doing so for the foreseeable future.
Peak globalization may provide a pathway to preserving the best of globalization and global interconnectedness, enhancing economic and environmental sustainability, and empowering individuals and communities to strengthen democracy.
At the same time, some of the most troublesome aspects of globalization may be eased, including massive financial transfers to energy producers and loss of jobs to manufacturing platforms like China. This shift could bring relief to the “losers” of globalization and ease populist, nationalist political pressures that are roiling the developed countries.
That is quite a claim, I realize. But let me explain the vision.
New Technologies and Businesses: Digital, Democratized, Decentralized
The key factors moving us toward peak globalization and making it economically viable are new technologies and innovative businesses and business models allowing for “production-at-the-point-of-consumption” of energy, food, and products.
Exponential technologies are enabling these trends by sharply reducing the “cost of entry” for creating businesses. Driven by Moore’s Law, powerful technologies have become available to almost anyone, anywhere.
Beginning with the microchip, which has had a 100-billion-fold improvement in 40 years—10,000 times faster and 10 million times cheaper—the marginal cost of producing almost everything that can be digitized has fallen toward zero.
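The two factors quoted there multiply out to the headline number, as a quick check shows:

```python
speed_gain = 10_000        # "10,000 times faster"
cost_gain = 10_000_000     # "10 million times cheaper"

total_improvement = speed_gain * cost_gain
print(total_improvement == 100_000_000_000)  # 100 billion, as quoted
```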
A hard copy of a book, for example, will always entail the cost of materials, printing, shipping, etc., even if the marginal cost falls as more copies are produced. But the marginal cost of a second digital copy, such as an e-book, streaming video, or song, is nearly zero, as it is simply a digital file sent over the Internet, the world’s largest copy machine.* Books are just one product; there are hundreds of thousands of dollars’ worth of once-physical, separate products jammed into our devices at little to no cost.
A smartphone alone provides half the human population with access to artificial intelligence—from Siri, search, and translation to cloud computing—geolocation, free global video calls, digital photography and free uploads to social network sites, free access to global knowledge, a million apps for a huge variety of purposes, and many other capabilities that were unavailable to most people only a few years ago.
As powerful as dematerialization and demonetization are for private individuals, they’re having a stronger effect on businesses. A small team can access expensive, advanced tools that before were only available to the biggest organizations. Foundational digital platforms, such as the internet and GPS, and the platforms built on top of them by the likes of Google, Apple, Amazon, and others provide the connectivity and services democratizing business tools and driving the next generation of new startups.

“As these trends gain steam in coming decades, they’ll bleed into and fundamentally transform global supply chains.”

An AI startup, for example, doesn’t need its own server farm to train its software and provide service to customers. The team can rent computing power from Amazon Web Services. This platform model enables small teams to do big things on the cheap. And it isn’t just in software. Similar trends are happening in hardware too. Makers can 3D print or mill industrial grade prototypes of physical stuff in a garage or local maker space and send or sell designs to anyone with a laptop and 3D printer via online platforms.
These are early examples of trends that are likely to gain steam in coming decades, and as they do, they’ll bleed into and fundamentally transform global supply chains.
The old model is a series of large, connected bits of centralized infrastructure. It makes sense to mine, farm, or manufacture in bulk when the conditions, resources, machines, and expertise to do so exist in particular places and are specialized and expensive. The new model, however, enables smaller-scale production that is local and decentralized.
To see this more clearly, let’s take a look at the technological trends at work in the three biggest contributors to the global trade of physical stuff—products, energy, and food.
Products
3D printing (additive manufacturing) allows for distributed manufacturing near the point of consumption, eliminating or reducing supply chains and factory production lines.
This is possible because product designs are no longer made manifest in assembly line parts like molds or specialized mechanical tools. Rather, designs are digital and can be called up at will to guide printers. Every time a 3D printer prints, it can print a different item, so no assembly line needs to be set up for every different product. 3D printers can also print an entire finished product in one piece or reduce the number of parts of larger products, such as engines. This further lessens the need for assembly.
Because each item can be customized and printed on demand, there is no cost benefit from scaling production. No inventories. No shipping items across oceans. No carbon emissions transporting not only the final product but also all the parts in that product shipped from suppliers to manufacturer. Moreover, 3D printing builds items layer by layer with almost no waste, unlike “subtractive manufacturing” in which an item is carved out of a piece of metal, and much or even most of the material can be waste.
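The economics described above can be caricatured in a few lines: mass production amortizes a large fixed setup cost over volume, while on-demand printing has a roughly flat per-unit cost. Every number below is invented purely to show the structure of the trade-off, not to reflect real manufacturing costs.

```python
# Toy cost model contrasting assembly-line and 3D-printed production.
def assembly_line_unit_cost(units, setup=500_000.0, per_unit=2.0, shipping=1.5):
    """Fixed setup (molds, tooling, line) amortized over the production run."""
    return setup / units + per_unit + shipping

def printed_unit_cost(per_unit=6.0):
    """No molds, no inventory, no ocean shipping: cost is flat whether
    you print one item or ten thousand."""
    return per_unit

for units in (100, 10_000, 1_000_000):
    a, p = assembly_line_unit_cost(units), printed_unit_cost()
    print(units, "units:", "print locally" if p < a else "mass-produce")
```

Under these hypothetical figures, printing wins at small and medium volumes while the assembly line only pulls ahead at very large runs—which is exactly why on-demand, point-of-consumption production changes the calculus for low-volume, customized goods first.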
Finally, 3D printing is also highly scalable, from inexpensive 3D printers (several hundred dollars) for home and school use to increasingly capable and expensive printers for industrial production. There are also 3D printers being developed for printing buildings, including houses and office buildings, and other infrastructure.
The technology for finished products is only now getting underway, and there are still challenges to overcome, such as speed, quality, and range of materials. But as methods and materials advance, it will likely creep into more manufactured goods.
Ultimately, 3D printing will be a general purpose technology that involves many different types of printers and materials—such as plastics, metals, and even human cells—to produce a huge range of items, from human tissue and potentially human organs to household items and a range of industrial items for planes, trains, and automobiles.
Energy
Renewable energy production is located at or relatively near the source of consumption.
Although electricity generated by solar, wind, geothermal, and other renewable sources can of course be transmitted over longer distances, it is mostly generated and consumed locally or regionally. It is not transported around the world in tankers, ships, and pipelines like petroleum, coal, and natural gas.
Moreover, the fuel itself is free—forever. There is no global price on sun or wind. The people relying on solar and wind power need not worry about price volatility and potential disruption of fuel supplies as a result of political, market, or natural causes.
Renewables have their problems, of course, including intermittency and storage, and currently they work best as a complement to other sources, especially natural gas power plants, which, unlike coal plants, can be turned on or off and modulated like a gas stove, and produce half the carbon emissions of coal.
Within the next decade or so, it is likely the intermittency and storage problems will be solved or greatly mitigated. In addition, unlike coal and natural gas power plants, solar is scalable, from solar panels on individual homes or even cars and other devices, to large-scale solar farms. Solar can be connected with microgrids and even allow for autonomous electricity generation by homes, commercial buildings, and communities.
It may be several decades before fossil fuel power plants can be phased out, but the development cost of renewables has been falling exponentially and, in places, is beginning to compete with coal and gas. Solar especially is expected to continue to increase in efficiency and decline in cost.
Given these trends in cost and efficiency, renewables should become obviously cheaper over time—if the fuel is free for solar and has to be continually purchased for coal and gas, at some point the former is cheaper than the latter. Renewables are already cheaper if externalities such as carbon emissions and environmental degradation involved in obtaining and transporting the fuel are included.
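The “free fuel” argument reduces to comparing cumulative costs over time. Here is a toy break-even sketch; all figures are hypothetical placeholders, not real plant economics.

```python
# Toy break-even comparison: expensive-to-build plant with free fuel
# vs. cheap-to-build plant with recurring fuel purchases.
solar_capex = 1000.0        # up-front cost per kW (falling over time in reality)
coal_capex = 600.0          # cheaper to build...
coal_fuel_per_year = 60.0   # ...but fuel must be bought every year
solar_fuel_per_year = 0.0   # sunlight is free

def cumulative_cost(capex, fuel_per_year, years):
    """Total spend on the plant after the given number of years."""
    return capex + fuel_per_year * years

# Find the first year when total solar spend drops below total coal spend.
breakeven = next(
    y for y in range(1, 100)
    if cumulative_cost(solar_capex, solar_fuel_per_year, y)
    < cumulative_cost(coal_capex, coal_fuel_per_year, y)
)
print(breakeven)  # first year solar is cheaper overall
```

With these made-up numbers the crossover lands in year seven; the article’s point is that as renewable capital costs keep falling, that crossover keeps moving earlier.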
Food
Food can be increasingly produced near the point of consumption with vertical farms and eventually with printed food and even printed or cultured meat.
These sources bring production of food very near the consumer, so transportation costs, which can be a significant portion of the cost of food to consumers, are greatly reduced. The use of land and water is reduced by 95% or more, and energy use is cut by nearly 50%. In addition, fertilizers and pesticides are not required, and crops can be grown 365 days a year whatever the weather and in more climates and latitudes than is possible today.
While it may not be practical to grow grains, corn, and other such crops in vertical farms, many vegetables and fruits can flourish in such facilities. In addition, cultured or printed meat is being developed—the big challenge is scaling up and reducing cost—that is based on cells from real animals without slaughtering the animals themselves.
There are currently some 70 billion animals being raised for food around the world, and livestock alone accounts for about 15% of global emissions. Moreover, livestock places huge demands on land, water, and energy. Like vertical farms, cultured or printed meat could be produced with no more land use than a brewery and with far less water and energy.
A More Democratic Economy Goes Bottom Up
This is a very brief introduction to the technologies that can bring “production-at-the-point-of-consumption” of products, energy, and food to cities and regions.
What does this future look like? Here’s a simplified example.
Imagine a universal manufacturing facility with hundreds of 3D printers printing tens of thousands of different products on demand for the local community—rather than assembly lines in China making tens of thousands of the same product that have to be shipped all over the world since no local market can absorb all of the same product.
Nearby, a vertical farm and cultured meat facility produce much of tomorrow night’s dinner. These facilities would be powered by local or regional wind and solar. Depending on need and quality, some infrastructure and machinery, like solar panels and 3D printers, would live in these facilities and some in homes and businesses.
The facilities could be owned by a large global corporation—but still locally produce goods—or they could be franchised or even owned and operated independently by the local population. Upkeep and management at each would provide jobs for communities nearby. Eventually, not only would global trade of parts and products diminish, but even required supplies of raw materials and feedstock would decline, since there would be less waste in production and many materials would be recycled once acquired.

“Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.”

This model suggests a shift toward a “bottom up” economy that is more democratic, locally controlled, and likely to generate more local jobs.
The global trends in democratization of technology make the vision technologically plausible. Much of this technology already exists and is improving and scaling while exponentially decreasing in cost to become available to almost anyone, anywhere.
This includes not only access to key technologies, but also to education through digital platforms available globally. Online courses are available for free, ranging from advanced physics, math, and engineering to skills training in 3D printing, solar installations, and building vertical farms. Social media platforms can enable local and global collaboration and sharing of knowledge and best practices.
These new communities of producers can be the foundation for new forms of democratic governance as they recognize and “capitalize” on the reality that control of the means of production can translate to political power. More jobs and local control could weaken populist, anti-globalization political forces as people recognize they could benefit from the positive aspects of globalization and international cooperation and connectedness while diminishing the impact of globalization’s downsides.
There are powerful vested interests that stand to lose in such a global structural shift. But this vision builds on trends that are already underway and are gaining momentum. Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.
This article was originally posted on Open Democracy (CC BY-NC 4.0). The version above was edited with the author for length and includes additions. Read the original article on Open Democracy.
* See Jeremy Rifkin, The Zero Marginal Cost Society, (New York: Palgrave Macmillan, 2014), Part II, pp. 69-154.
Image Credit: Sergey Nivens / Shutterstock.com


#431155 What It Will Take for Quantum Computers ...

Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome.
This year has seen a surge of interest in quantum computing, driven in part by Google’s announcement that it will demonstrate “quantum supremacy” by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits—the quantum computing equivalent of bits.
As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor’s favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
But these devices hold great promise for solving problems in fields as diverse as cryptography and weather forecasting. One application people are particularly excited about is whether they could be used to supercharge the machine learning algorithms already transforming the modern world.
The potential is summarized in a recent review paper in the journal Nature written by a group of experts from the emerging field of quantum machine learning.
“Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce,” they write.
“This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.”
Because of the way quantum computers work—taking advantage of strange quantum mechanical effects like entanglement and superposition—algorithms running on them should in principle be able to solve problems much faster than the best known classical algorithms, a phenomenon known as quantum speedup.
Designing these algorithms is tricky work, but the authors of the review note that there has been significant progress in recent years. They highlight multiple quantum algorithms exhibiting quantum speedup that could act as subroutines, or building blocks, for quantum machine learning programs.
We still don’t have the hardware to implement these algorithms, but according to the researchers the challenge is a technical one, and clear paths to overcoming it exist. More challenging, they say, are four fundamental conceptual problems that could limit the applicability of quantum machine learning.
The first two are the input and output problems. Quantum computers, unsurprisingly, deal with quantum data, but the majority of the problems humans want to solve relate to the classical world. Translating significant amounts of classical data into the quantum systems can take so much time it can cancel out the benefits of the faster processing speeds, and the same is true of reading out the solution at the end.
The input problem could be mitigated to some extent by the development of quantum random access memory (qRAM)—the equivalent to RAM in a conventional computer used to provide the machine with quick access to its working memory. A qRAM can be configured to store classical data but allow the quantum computers to access all that information simultaneously as a superposition, which is required for a variety of quantum algorithms. But the authors note this is still a considerable engineering challenge and may not be sustainable for big data problems.
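The appeal of qRAM comes from amplitude encoding: 2^n classical numbers can be stored as the amplitudes of just n qubits. The classical bookkeeping behind that claim can be sketched in a few lines; this simulates only the arithmetic of building a valid state vector, not quantum hardware or an actual qRAM design.

```python
import math

# Amplitude encoding: 2**n classical values become the amplitudes of
# n qubits. A valid quantum state needs its squared amplitudes
# (the measurement probabilities) to sum to 1, so we normalize.
data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]  # 8 classical values

norm = math.sqrt(sum(x * x for x in data))
amplitudes = [x / norm for x in data]  # a legal quantum state vector

qubits_needed = int(math.log2(len(data)))
print(qubits_needed)                                   # 3 qubits hold 8 values
print(abs(sum(a * a for a in amplitudes) - 1) < 1e-9)  # probabilities sum to 1
```

The exponential compression is what makes the superposition access valuable—and loading those amplitudes efficiently is precisely the engineering challenge the authors flag.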
Closely related to the input/output problem is the costing problem. At present, the authors say very little is known about how many gates—or operations—a quantum machine learning algorithm will require to solve a given problem when operated on real-world devices. It’s expected that on highly complex problems they will offer considerable improvements over classical computers, but it’s not clear how big problems have to be before this becomes apparent.
Finally, whether or when these advantages kick in may be hard to prove, something the authors call the benchmarking problem. Claiming that a quantum algorithm can outperform any classical machine learning approach requires extensive testing against those other techniques, which may not be feasible.
They suggest that this could be sidestepped by lowering the standards quantum machine learning algorithms are currently held to. This makes sense, as it doesn’t really matter whether an algorithm is intrinsically faster than all possible classical ones, as long as it’s faster than all the existing ones.
Another way of avoiding some of these problems is to apply these techniques directly to quantum data, the actual states generated by quantum systems and processes. The authors say this is probably the most promising near-term application for quantum machine learning and has the added benefit that any insights can be fed back into the design of better hardware.
“This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors,” they conclude.
Image Credit: archy13 / Shutterstock.com


#430649 Robotherapy for children with autism

A new robotherapy for children with autism could reduce the need for patient supervision by therapists.
05.07.2017
Autism treatments and therapies routinely make headlines. With robot-enhanced therapies on the rise, however, the mental stress and physical toll these procedures take on therapists is often overlooked: few realize the workload borne by those working with autistic patients.
It is against this backdrop that researchers from the Vrije Universiteit Brussel are pioneering a new technology to aid behavioural therapy, with a very deliberate goal: using robots to boost the basic social learning skills of children with ASD while making the therapists' job substantially easier.
A study just published in PALADYN – Journal of Behavioural Robotics examines the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy.
The growing deployment of robot-assisted therapies in recent decades means children with Autism Spectrum Disorder (ASD) can develop and nurture social behaviour and cognitive skills. Learning skills that carry over into real life is the first and foremost goal of all autism therapies, including Robot-Assisted Therapy (RAT), with effectiveness always a key concern. This time round, however, the scientists have set themselves the additional mission of taking the load off the human therapists by letting parts of the intervention be handled by supervised yet autonomous robots.
The researchers developed a complete system of robot-enhanced therapy (RET) for children with ASD. The therapy works by teaching behaviours during repeated sessions of interactive games. Since individuals with ASD tend to be more responsive to feedback coming from an interaction with technology, robots are often used for this therapy. In this approach, the social robot acts as a mediator and typically remains remote-controlled by a human operator. This technique, called Wizard of Oz, requires the robot to be operated by an additional person, and the robot does not record the child's performance during the therapy. To reduce operator workload, the authors introduced a system with a supervised autonomous robot, which is able to infer the psychological disposition of the child and use it to select actions appropriate to the current state of the interaction.
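The supervised-autonomy loop described above can be sketched in a few lines: the robot proposes an action from the child's estimated disposition, and the human therapist retains veto power. This is a hypothetical illustration—the state variables, thresholds, and action names below are invented for clarity and are not taken from the DREAM system.

```python
from dataclasses import dataclass

@dataclass
class ChildState:
    engagement: float   # 0.0 (withdrawn) .. 1.0 (fully engaged) — assumed scale
    performance: float  # success rate on the current interactive game

def propose_action(state: ChildState) -> str:
    """Map the estimated psychological disposition to a next robot action."""
    if state.engagement < 0.3:
        return "re-engage"        # e.g. wave, call the child's name
    if state.performance > 0.8:
        return "advance-task"     # move to a harder turn-taking round
    return "repeat-with-prompt"   # repeat the step with extra feedback

def supervised_step(state: ChildState, therapist_approves) -> str:
    """Execute the proposed action only if the human supervisor approves;
    otherwise fall back to a safe idle behaviour."""
    action = propose_action(state)
    return action if therapist_approves(action) else "idle"

print(supervised_step(ChildState(0.9, 0.9), lambda a: True))   # advance-task
print(supervised_step(ChildState(0.2, 0.5), lambda a: False))  # idle
```

The design point is that autonomy is bounded: the robot handles moment-to-moment action selection, while the therapist's approval callback keeps a human in the loop, which is what distinguishes this setup from the fully remote-controlled Wizard of Oz technique.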
Robots with supervised autonomy can substantially benefit behavioural therapy for children with ASD—diminishing the therapist's workload on the one hand, and achieving more objective measurements of therapy outcomes on the other. Yet, complex as it is, this therapy requires a multidisciplinary approach, as RET shows mixed effectiveness on the primary tasks—turn-taking, joint attention, and imitation—compared to Standard Human Treatment (SHT).
The results are likely to prompt further development of robot-assisted therapy with increasing robot autonomy. Among the many outstanding conceptual and technical issues still to be tackled, it is the ethical questions that pose one of the major challenges as far as the potential and maximum degree of robot autonomy is concerned.
The article is fully available in open access to read, download and share on De Gruyter Online.
The research was conducted as part of the DREAM (Development of Robot-Enhanced therapy for children with Autism spectrum disorders) project.
DOI: 10.1515/pjbr-2017-0002
Image credit: P.G. Esteban
About the Journal: PALADYN – Journal of Behavioural Robotics is a fully peer-reviewed, electronic-only journal that publishes original, high-quality research on topics broadly related to neuronally and psychologically inspired robots and other behaving autonomous systems.
About De Gruyter Open: De Gruyter Open is a leading publisher of Open Access academic content. Publishing in all major disciplines, De Gruyter Open is home to more than 500 scholarly journals and over 100 books. The company is part of the De Gruyter Group (www.degruyter.com) and a member of the Association of Learned and Professional Society Publishers (ALPSP). De Gruyter Open’s book and journal programs have been endorsed by the international research community and some of the world’s top scientists, including Nobel laureates. The company’s mission is to make the very best in academic content freely available to scholars and lay readers alike.
The post Robotherapy for children with autism appeared first on Roboticmagazine.
