Tag Archives: Safety

#434759 To Be Ethical, AI Must Become ...

As over-hyped as artificial intelligence is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing. AI can now write realistic-sounding text, give a debating champ a run for their money, diagnose illnesses, and generate fake human faces—among other feats.

After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that, more often than not, even the creators don’t know exactly why their systems arrive at those conclusions or outcomes. There’s no easy way to trace a machine learning system’s rationale, so to speak. The further we let AI go down this opaque path, the more likely we are to end up somewhere we don’t want to be—and may not be able to come back from.

In a panel at the South by Southwest interactive festival last week titled “Ethics and AI: How to plan for the unpredictable,” experts in the field shared their thoughts on building more transparent, explainable, and accountable AI systems.

Not New, but Different
Ryan Welsh, founder and director of explainable AI startup Kyndi, pointed out that having knowledge-based systems perform advanced tasks isn’t new; he cited logistics, scheduling, and tax software as examples. What’s new is the learning component, our inability to trace how that learning occurs, and the ethical implications that could result.

“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad society discussion about ethics in those scenarios.”

Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a machine learning network, as well as the weights of those units and how they map to specific data and outputs.
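To make those terms concrete, here is a minimal sketch in Python—with invented feature names, weights, and inputs—of the question transparency and interpretability ask: which units most influenced this output, and how do their weights map back to the data? A linear model makes the mapping trivial to read off; deep networks require attribution techniques such as saliency maps, but the goal is the same.

```python
import numpy as np

# A minimal, hypothetical illustration of transparency/interpretability:
# in a linear model, each input's contribution to the output can be read
# directly off the learned weights. Deep networks need attribution methods
# (saliency maps, SHAP, etc.), but the question asked is the same.
feature_names = ["income", "debt_ratio", "years_employed"]  # invented
weights = np.array([0.8, -1.5, 0.4])   # "learned" weights (assumed values)
x = np.array([0.6, 0.9, 0.3])          # one applicant's normalized inputs

contributions = weights * x            # each feature's share of the score
score = contributions.sum()

# Rank features by their influence on this particular output
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda pair: -abs(pair[1])):
    print(f"{name}: {c:+.3f}")
print(f"total score: {score:+.3f}")
```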

Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, OpenAI’s new text generator would be able to generate citations in its text that reference academic (and human-created) papers or studies.

Explainability itself is the highest and final bar and refers to a system’s ability to explain itself in natural language to the average user by being able to say, “I generated this output because x, y, z.”

“Humans are unique in our ability and our desire to ask why,” said Josh Marcuse, executive director of the Defense Innovation Board, which advises Department of Defense senior leaders on innovation. “The reason we want explanations from people is so we can understand their belief system and see if we agree with it and want to continue to work with them.”

Similarly, we need to have the ability to interrogate AIs.

Two Types of Thinking
Welsh explained that one big barrier standing in the way of explainability is the tension between the deep learning community and the symbolic AI community, which see themselves as two different paradigms and historically haven’t collaborated much.

Symbolic or classical AI focuses on concepts and rules, while deep learning is centered around perceptions. In human thought this is the difference between, for example, deciding to pass a soccer ball to a teammate who is open (you make the decision because conceptually you know that only open players can receive passes), and registering that the ball is at your feet when someone else passes it to you (you’re taking in information without making a decision about it).

“Symbolic AI has abstractions and representation based on logic that’s more humanly comprehensible,” Welsh said. To truly mimic human thinking, AI needs to be able to both perceive information and conceptualize it. An example of perception (deep learning) in an AI is recognizing numbers within an image, while conceptualization (symbolic learning) would give those numbers a hierarchical order and extract rules from the hierarchy (4 is greater than 3, and 5 is greater than 4, therefore 5 is also greater than 3).
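The symbolic half of that example fits in a few lines of code. In this toy Python sketch (the facts and rule are stand-ins), a rule engine starts from the “perceived” facts 4 > 3 and 5 > 4 and derives 5 > 3 by repeatedly applying transitivity:

```python
# Toy sketch of symbolic reasoning: derive new ordering facts from
# "perceived" ones by repeatedly applying a transitivity rule
# (a > b and b > c implies a > c). Facts and rule are stand-ins.
facts = {(4, 3), (5, 4)}  # e.g., produced by a perception module

def transitive_closure(facts):
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for b2, c in list(closure):
                if b == b2 and (a, c) not in closure:
                    closure.add((a, c))  # new fact derived by the rule
                    changed = True
    return closure

derived = transitive_closure(facts) - facts
print(derived)  # {(5, 3)}: 5 > 3 follows from 5 > 4 and 4 > 3
```

Every derived fact traces back to the named facts and the named rule that produced it—the kind of audit trail a pure deep learning system can’t offer.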

Explainability comes in when the system can say, “I saw a, b, and c, and based on that decided x, y, or z.” DeepMind and others have recently published papers emphasizing the need to fuse the two paradigms together.

Implications Across Industries
One of the most prominent fields where AI ethics will come into play, and where the transparency and accountability of AI systems will be crucial, is defense. Marcuse said, “We’re accountable beings, and we’re responsible for the choices we make. Bringing in tech or AI to a battlefield doesn’t strip away that meaning and accountability.”

In fact, he added, rather than worrying about how AI might degrade human values, people should be asking how the tech could be used to help us make better moral choices.

It’s also important not to conflate AI with autonomy—a worst-case scenario that springs to mind is an intelligent destructive machine on a rampage. But in fact, Marcuse said, in the defense space, “We have autonomous systems today that don’t rely on AI, and most of the AI systems we’re contemplating won’t be autonomous.”

The US Department of Defense released its 2018 artificial intelligence strategy last month. It includes developing a robust and transparent set of principles for defense AI, investing in research and development for AI that’s reliable and secure, continuing to fund research in explainability, advocating for a global set of military AI guidelines, and finding ways to use AI to reduce the risk of civilian casualties and other collateral damage.

Though these were designed with defense-specific aims in mind, Marcuse said, their implications extend across industries. “The defense community thinks of their problems as being unique, that no one deals with the stakes and complexity we deal with. That’s just wrong,” he said. Making high-stakes decisions with technology is widespread; safety-critical systems are key to aviation, medicine, and self-driving cars, to name a few.

Marcuse believes the Department of Defense can invest in AI safety in a way that has far-reaching benefits. “We all depend on technology to keep us alive and safe, and no one wants machines to harm us,” he said.

A Creation Superior to Its Creator
That said, we’ve come to expect technology to meet our needs in just the way we want, all the time—servers must never be down, GPS had better not take us on a longer route, Google must always produce the answer we’re looking for.

With AI, though, our expectations of perfection may be less reasonable.

“Right now we’re holding machines to superhuman standards,” Marcuse said. “We expect them to be perfect and infallible.” Take self-driving cars. They’re conceived, built, and programmed by people, and people as a whole generally aren’t great drivers—just look at traffic accident death rates to confirm that. But the few times self-driving cars have had fatal accidents, there’s been an ensuing uproar and backlash against the industry, as well as talk of implementing more restrictive regulations.

This can be extrapolated to ethics more generally. We as humans have the ability to explain our decisions, but many of us aren’t very good at doing so. As Marcuse put it, “People are emotional, they confabulate, they lie, they’re full of unconscious motivations. They don’t pass the explainability test.”

Why, then, should explainability be the standard for AI?

Even if humans aren’t good at explaining our choices, at least we can try, and we can answer questions that probe at our decision-making process. A deep learning system can’t do this yet, so working towards being able to identify which input data the systems are triggering on to make decisions—even if the decisions and the process aren’t perfect—is the direction we need to head.

Image Credit: a-image / Shutterstock.com


#434753 Top Takeaways From The Economist ...

Over the past few years, the word ‘innovation’ has degenerated into something of a buzzword. In fact, according to Vijay Vaitheeswaran, US business editor at The Economist, it’s one of the most abused words in the English language.

The word is over-used precisely because we’re living in a great age of invention. But the pace at which those inventions are changing our lives is faster than ever—and more than a little unsettling.

So what strategies do companies need to adopt to make sure technology leads to growth that’s not only profitable, but positive? How can business and government best collaborate? Can policymakers regulate the market without suppressing innovation? Which technologies will impact us most, and how soon?

At The Economist Innovation Summit in Chicago last week, entrepreneurs, thought leaders, policymakers, and academics shared their insights on the current state of exponential technologies, and the steps companies and individuals should be taking to ensure a tech-positive future. Here’s their expert take on the tech and trends shaping the future.

Blockchain
There’s been a lot of hype around blockchain; apparently it can be used for everything from distributing aid to refugees to voting. However, it’s too often conflated with cryptocurrencies like Bitcoin, and we haven’t heard of many use cases. Where does the technology currently stand?

Julie Sweet, chief executive of Accenture North America, emphasized that the technology is still in its infancy. “Everything we see today are pilots,” she said. The most promising of these pilots are taking place across three different areas: supply chain, identity, and financial services.

When you buy something from outside the US, Sweet explained, it goes through about 80 different parties. 70 percent of the relevant data is replicated and is prone to error, with paper-based documents often to blame. Blockchain is providing a secure way to eliminate paper in supply chains, upping accuracy and cutting costs in the process.
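The panel didn’t get into mechanics, but the property that makes a shared ledger attractive for supply chains is easy to sketch: each record commits to the hash of the one before it, so quietly altering an earlier entry breaks every later link. A toy Python version, with invented parties and fields:

```python
import hashlib, json

def add_record(chain, payload):
    """Append a record that commits to the hash of the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any edited record breaks the links after it."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev_hash": rec["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

chain = []
add_record(chain, {"step": "harvested", "party": "Farm A"})  # hypothetical
add_record(chain, {"step": "shipped", "party": "Carrier B"})
print(verify(chain))                     # True
chain[0]["payload"]["party"] = "Farm X"  # tamper with history
print(verify(chain))                     # False
```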

One of the most prominent use cases in the US is Walmart—the company has mandated that all suppliers in its leafy greens segment be on a blockchain, and its food safety has improved as a result.

Beth Devin, head of Citi Ventures’ innovation network, added, “Blockchain is an infrastructure technology. It can be leveraged in a lot of ways. There’s so much opportunity to create new types of assets and securities that aren’t accessible to people today. But there’s a lot to figure out around governance.”

Open Source Technology
Are the days of proprietary technology numbered? More and more companies and individuals are making their source code publicly available, and its benefits are thus more widespread than ever before. But what are the limitations and challenges of open source tech, and where might it go in the near future?

Bob Lord, senior VP of cognitive applications at IBM, is a believer. “Open-sourcing technology helps innovation occur, and it’s a fundamental basis for creating great technology solutions for the world,” he said. However, the biggest challenge for open source right now is that companies are taking out more than they’re contributing back to the open-source world. Lord pointed out that IBM has a rule about how many lines of code employees take out relative to how many lines they put in.

Another challenge area is open governance; blockchain by its very nature should be transparent and decentralized, with multiple parties making decisions and being held accountable. “We have to embrace open governance at the same time that we’re contributing,” Lord said. He advocated for a hybrid-cloud environment where people can access public and private data and bring it together.

Augmented and Virtual Reality
Augmented and virtual reality aren’t just for fun and games anymore, and they’ll be even less so in the near future. According to Pearly Chen, vice president at HTC, they’ll also go from being two different things to being one and the same. “AR overlays digital information on top of the real world, and VR transports you to a different world,” she said. “In the near future we will not need to delineate between these two activities; AR and VR will come together naturally, and will change everything we do as we know it today.”

For that to happen, we’ll need a more ergonomically friendly device than we have today for interacting with this technology. “Whenever we use tech today, we’re multitasking,” said product designer and futurist Jody Medich. “When you’re using GPS, you’re trying to navigate in the real world and also manage this screen. Constant task-switching is killing our brain’s ability to think.” Augmented and virtual reality, she believes, will allow us to adapt technology to match our brain’s functionality.

This all sounds like a lot of fun for uses like gaming and entertainment, but what about practical applications? “Ultimately what we care about is how this technology will improve lives,” Chen said.

A few ways that could happen? Extended reality will be used to simulate hazardous real-life scenarios, reduce the time and resources needed to bring a product to market, train healthcare professionals (such as surgeons), or provide therapies for patients—not to mention education. “Think about the possibilities for children to learn about history, science, or math in ways they can’t today,” Chen said.

Quantum Computing
If there’s one technology that’s truly baffling, it’s quantum computing. Qubits, entanglement, quantum states—it’s hard to wrap our heads around these concepts, but they hold great promise. Where is the tech right now?

Mandy Birch, head of engineering strategy at Rigetti Computing, thinks quantum development is starting slowly but will accelerate quickly. “We’re at the innovation stage right now, trying to match this capability to useful applications,” she said. “Can we solve problems cheaper, better, and faster than classical computers can do?” She believes quantum’s first breakthrough will happen in two to five years, and that its highest potential is in applications like routing, supply chain, and risk optimization, followed by quantum chemistry (for materials science and medicine) and machine learning.

David Awschalom, director of the Chicago Quantum Exchange and senior scientist at Argonne National Laboratory, believes quantum communication and quantum sensing will become a reality in three to seven years. “We’ll use states of matter to encrypt information in ways that are completely secure,” he said. A quantum voting system, currently being prototyped, is one application.

Who should be driving quantum tech development? The panelists emphasized that no one entity will get very far alone. “Advancing quantum tech will require collaboration not only between business, academia, and government, but between nations,” said Linda Sapochak, division director of materials research at the National Science Foundation. She added that this doesn’t just go for the technology itself—setting up the infrastructure for quantum will be a big challenge as well.

Space
Space has always been the final frontier, and it still is—but it’s not quite as far-removed from our daily lives now as it was when Neil Armstrong walked on the moon in 1969.

The space industry has always been funded by governments and private defense contractors. But in 2009, SpaceX launched its first commercial satellite, and in the years since it has drastically cut the cost of spaceflight. More importantly, the company published its pricing, bringing transparency to a market that hadn’t seen it before.

Entrepreneurs around the world started putting together business plans, and there are now over 400 privately-funded space companies, many with consumer applications.

Chad Anderson, CEO of Space Angels and managing partner of Space Capital, pointed out that the technology floating around in space was, until recently, archaic. “A few NASA engineers saw they had more computing power in their phone than there was in satellites,” he said. “So they thought, ‘why don’t we just fly an iPhone?’” They did—and it worked.

Now companies have networks of satellites monitoring the whole planet, producing a huge amount of data that’s valuable for countless applications like agriculture, shipping, and observation. “A lot of people underestimate space,” Anderson said. “It’s already enabling our modern global marketplace.”

Next up in the space realm, he predicts, are mining and tourism.

Artificial Intelligence and the Future of Work
From the US to Europe to Asia, alarms are sounding about AI taking our jobs. What will be left for humans to do once machines can do everything—and do it better?

These fears may be unfounded, though, and are certainly exaggerated. It’s undeniable that AI and automation are changing the employment landscape (not to mention the way companies do business and the way we live our lives), but if we build these tools the right way, they’ll bring more good than harm, and more productivity than obsolescence.

Accenture’s Julie Sweet emphasized that AI alone is not what’s disrupting business and employment. Rather, it’s what she called the “triple A”: automation, analytics, and artificial intelligence. But even this fear-inducing trifecta of terms doesn’t spell doom, for workers or for companies. Accenture has automated 40,000 jobs—and hasn’t fired anyone in the process. Instead, they’ve trained and up-skilled people. The most important drivers to scale this, Sweet said, are a commitment by companies and government support (such as tax credits).

Imbuing AI with the best of human values will also be critical to its impact on our future. Tracy Frey, Google Cloud AI’s director of strategy, cited the company’s set of seven AI principles. “What’s important is the governance process that’s put in place to support those principles,” she said. “You can’t make macro decisions when you have technology that can be applied in many different ways.”

High Risks, High Stakes
This year, Vaitheeswaran said, 50 percent of the world’s population will have internet access (he added that he’s disappointed that percentage isn’t higher given the proliferation of smartphones). As technology becomes more widely available to people around the world and its influence grows even more, what are the biggest risks we should be monitoring and controlling?

Information integrity—being able to tell what’s real from what’s fake—is a crucial one. “We’re increasingly operating in siloed realities,” said Renee DiResta, director of research at New Knowledge and head of policy at Data for Democracy. “Inadvertent algorithmic amplification on social media elevates certain perspectives—what does that do to us as a society?”

Algorithms have also already been proven to perpetuate the biases of the people who create them—and those people are often wealthy, white, and male. Ensuring that technology doesn’t propagate unfair bias will be crucial to its ability to serve a diverse population, and to keep societies from becoming further polarized and inequitable. The polarization of experience that results from pronounced inequalities within countries, Vaitheeswaran pointed out, can end up undermining democracy.

We’ll also need to walk the line between privacy and utility very carefully. As Dan Wagner, founder of Civis Analytics, put it, “We want to ensure privacy as much as possible, but open access to information helps us achieve important social good.” Medicine in the US has been hampered by privacy laws; if, for example, we had more data about biomarkers around cancer, we could provide more accurate predictions and ultimately better healthcare.

But going the Chinese way—a total lack of privacy—is likely not the answer, either. “We have to be very careful about the way we bake rights and freedom into our technology,” said Alex Gladstein, chief strategy officer at Human Rights Foundation.

Technology’s risks are clearly as fraught as its potential is promising. As Gary Shapiro, chief executive of the Consumer Technology Association, put it, “Everything we’ve talked about today is simply a tool, and can be used for good or bad.”

The decisions we’re making now, at every level—from the engineers writing algorithms, to the legislators writing laws, to the teenagers writing clever Instagram captions—will determine where on the spectrum we end up.

Image Credit: Rudy Balasko / Shutterstock.com


#434701 3 Practical Solutions to Offset ...

In recent years, the media has sounded the alarm about mass job loss to automation and robotics—some studies predict that up to 50 percent of current jobs or tasks could be automated in coming decades. While this topic has received significant attention, much of the press focuses on potential problems without proposing realistic solutions or considering new opportunities.

The economic impacts of AI, robotics, and automation are complex topics that require a more comprehensive perspective to understand. Is universal basic income, for example, the answer? Many believe so, and there are a number of experiments in progress. But it’s only one strategy, and without a sustainable funding source, universal basic income may not be practical.

As automation continues to accelerate, we’ll need a multi-pronged approach to ease the transition. In short, we need to update broad socioeconomic strategies for a new century of rapid progress. How, then, do we plan practical solutions to support these new strategies?

Take history as a rough guide to the future. Looking back, technology revolutions have three themes in common.

First, past revolutions each produced profound benefits to productivity, increasing human welfare. Second, technological innovation and technology diffusion have accelerated over time, each iteration placing more strain on the human ability to adapt. And third, machines have gradually replaced more elements of human work, with human societies adapting by moving into new forms of work—from agriculture to manufacturing to service, for example.

Public and private solutions, therefore, need to be developed to address each of these three components of change. Let’s explore some practical solutions for each in turn.

Figure 1. Technology’s structural impacts in the 21st century. Refer to Appendix I for quantitative charts and technological examples corresponding to the numbers (1-22) in each slice.
Solution 1: Capture New Opportunities Through Aggressive Investment
The rapid emergence of new technology promises a bounty of opportunity for the twenty-first century’s economic winners. This technological arms race is shaping up to be a global affair, and the winners will be determined in part by who is able to build the future economy fastest and most effectively. Both the private and public sectors have a role to play in stimulating growth.

At the country level, several nations have created competitive strategies to promote research and development investments as automation technologies become more mature.

Germany and China have two of the most notable growth strategies. Germany’s Industrie 4.0 plan targets a 50 percent increase in manufacturing productivity via digital initiatives, while halving the resources required. China’s Made in China 2025 national strategy sets ambitious targets and provides subsidies for domestic innovation and production. It also includes building new concept cities, investing in robotics capabilities, and subsidizing high-tech acquisitions abroad to become the leader in certain high-tech industries. For China, specifically, tech innovation is driven partially by a fear that technology will disrupt social structures and government control.

Such opportunities are not limited to existing economic powers. Estonia’s progress after the breakup of the Soviet Union is a good case study in transitioning to a digital economy. The nation rapidly implemented capitalistic reforms and transformed itself into a technology-centric economy in preparation for a massive tech disruption. Internet access was declared a right in 2000, and the country’s classrooms were outfitted for a digital economy, with coding as a core educational requirement starting at kindergarten. Internet broadband speeds in Estonia are among the fastest in the world. Accordingly, the World Bank now ranks Estonia as a high-income country.

Solution 2: Address Increased Rate of Change With More Nimble Education Systems
Education and training are currently not set up for the speed of change in the modern economy. Schools are still based on a one-time education model, in which formal schooling provides the foundation for a single lifelong career. With content becoming obsolete faster and costs rapidly escalating, this system may be unsustainable in the future. To help workers more smoothly transition from one job into another, for example, we need to make education a more nimble, lifelong endeavor.

Primary and university education may still have a role in training foundational thinking and general education, but it will be necessary to curtail the rising price of tuition and increase accessibility. Massive open online courses (MOOCs) and open-enrollment platforms are early demonstrations of what the future of general education may look like: cheap, effective, and flexible.

Georgia Tech’s online Engineering Master’s program (a fraction of the cost of residential tuition) is an early example in making university education more broadly available. Similarly, nanodegrees or microcredentials provided by online education platforms such as Udacity and Coursera can be used for mid-career adjustments at low cost. AI itself may be deployed to supplement the learning process, with applications such as AI-enhanced tutorials or personalized content recommendations backed by machine learning. Recent developments in neuroscience research could optimize this experience by perfectly tailoring content and delivery to the learner’s brain to maximize retention.

Finally, companies looking for more customized skills may take a larger role in education, providing on-the-job training for specific capabilities. One potential model involves partnering with community colleges to create apprenticeship-style learning, where students work part-time in parallel with their education. Siemens has pioneered such a model in four states and is developing a playbook for other companies to do the same.

Solution 3: Enhance Social Safety Nets to Smooth Automation Impacts
If predicted job losses to automation come to fruition, modernizing existing social safety nets will increasingly become a priority. While the issue of safety nets can become quickly politicized, it is worth noting that each prior technological revolution has come with corresponding changes to the social contract (see below).

The evolving social contract (U.S. examples)
– 1842 | Right to strike
– 1924 | Abolish child labor
– 1935 | Right to unionize
– 1938 | 40-hour work week
– 1962, 1974 | Trade adjustment assistance
– 1964 | Pay discrimination prohibited
– 1970 | Health and safety laws
– 21st century | AI and automation adjustment assistance?

Figure 2. Labor laws have historically adjusted as technology and society progressed

Solutions like universal basic income (a no-strings-attached monthly payout to all citizens) are appealing in concept, but somewhat difficult to implement as a first measure in countries such as the US or Japan that already have high debt. Additionally, universal basic income may create disincentives to stay in the labor force. A similar cautionary tale in program design is the Trade Adjustment Assistance (TAA) program, which was designed to protect industries and workers from import competition shocks from globalization, but is viewed as a missed opportunity due to insufficient coverage.

A near-term solution could come in the form of graduated wage insurance (compensation for those forced to take a lower-paying job), including health insurance subsidies to individuals directly impacted by automation, with incentives to return to the workforce quickly. Another topic to tackle is geographic mismatch between workers and jobs, which can be addressed by mobility assistance. Lastly, a training stipend can be issued to individuals as means to upskill.

Policymakers can intervene to reverse recent historical trends that have shifted incomes from labor to capital owners. The balance could be shifted back to labor by placing higher taxes on capital—an example is the recently proposed “robot tax” where the taxation would be on the work rather than the individual executing it. That is, if a self-driving car performs the task that formerly was done by a human, the rideshare company will still pay the tax as if a human was driving.
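A toy sketch of that “tax the work” logic (rates and fares invented for illustration):

```python
# Toy sketch of the "tax the work, not the worker" idea: a hypothetical
# per-ride levy applies whether a human or an autonomous system drove.
TASK_TAX_RATE = 0.05  # invented 5 percent levy on each ride's fare

def ride_tax(fare_dollars: float, driver_type: str) -> float:
    # driver_type ("human" or "autonomous") deliberately has no effect:
    # the tax attaches to the task performed, not to who performed it.
    return fare_dollars * TASK_TAX_RATE

print(ride_tax(20.0, "human"))       # 1.0
print(ride_tax(20.0, "autonomous"))  # 1.0 -- automation doesn't erase the tax base
```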

Other solutions may involve distribution of work. Some countries, such as France and Sweden, have experimented with redistributing working hours. The idea is to cap weekly hours, with the goal of having more people employed and work more evenly spread. So far these programs have had mixed results, with lower unemployment but high costs to taxpayers, but are potential models that can continue to be tested.

We cannot stop growth, nor should we. With the roles in response to this evolution shifting, so should the social contract between the stakeholders. Government will continue to play a critical role as a stabilizing “thumb” in the invisible hand of capitalism, regulating and cushioning against extreme volatility, particularly in labor markets.

However, we already see business leaders taking on some of the role traditionally played by government—thinking about measures to remedy risks of climate change or economic proposals to combat unemployment—in part because of greater agility in adapting to change. Cross-disciplinary collaboration and creative solutions from all parties will be critical in crafting the future economy.

Note: The full paper this article is based on is available here.

Image Credit: Dmitry Kalinovsky / Shutterstock.com


#434648 The Pediatric AI That Outperformed ...

Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.

Artificial intelligence has taken another step towards becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11 as a letter in Nature Medicine, has demonstrated a natural-language processing AI capable of outperforming rookie pediatricians in diagnosing common childhood ailments.

The massive study examined the electronic health records (EHR) from nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.

The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.

Less Like a Computer, More Like a Person
To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs not unlike how a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of placing the information learned from the EHRs into categories to improve performance.

Like traditionally-trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
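The study’s actual pipeline is far more elaborate, but the hierarchical strategy itself can be sketched with off-the-shelf tools: one text classifier routes a note to an organ system, and a per-system classifier then picks the diagnosis. Everything below—the notes, labels, and the choice of TF-IDF plus logistic regression—is a stand-in, not the paper’s method:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in notes and labels; the real study learned from ~101.6M data points.
notes = ["cough and wheezing for three days",
         "sore throat and runny nose with mild fever",
         "vomiting and watery diarrhea since yesterday",
         "abdominal pain after meals"]
organs = ["respiratory", "respiratory", "gastrointestinal", "gastrointestinal"]
diagnoses = ["lower respiratory infection", "upper respiratory infection",
             "acute gastroenteritis", "functional dyspepsia"]

# Stage 1: route each note to an organ system.
organ_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
organ_clf.fit(notes, organs)

# Stage 2: a separate diagnosis classifier per organ system.
dx_clfs = {}
for group in set(organs):
    idx = [i for i, g in enumerate(organs) if g == group]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit([notes[i] for i in idx], [diagnoses[i] for i in idx])
    dx_clfs[group] = clf

new_note = ["barking cough and chest congestion"]
group = organ_clf.predict(new_note)[0]
print(group, "->", dx_clfs[group].predict(new_note)[0])
```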

Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.

When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against an even playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.

Helping, Not Replacing
While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.

That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.

Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.

Closer to Us, But Still Dependent on Us
No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.

In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.

Data, and More Data
That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with their relatively relaxed approach to privacy, they are likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet with their massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.

In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.

Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today Doctor AI is performing as well as a human colleague with more than 10 years of experience. By next year or so, it may take twice as long for humans to be competitive. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.

Image Credit: Nadia Snopek / Shutterstock.com


#434569 From Parkour to Surgery, Here Are the ...

The robot revolution may not be here quite yet, but our mechanical cousins have made some serious strides. And now some of the leading experts in the field have provided a rundown of what they see as the 10 most exciting recent developments.

Compiled by the editors of the journal Science Robotics, the list includes some of the most impressive original research and innovative commercial products to make a splash in 2018, as well as a couple from 2017 that really changed the game.

1. Boston Dynamics’ Atlas doing parkour

It seems like barely a few months go by without Boston Dynamics rewriting the book on what a robot can and can’t do. Last year they really outdid themselves when they got their Atlas humanoid robot to do parkour, leaping over logs and jumping between wooden crates.

Atlas’s creators have admitted that the videos we see are cherry-picked from multiple attempts, many of which don’t go so well. But they say they’re meant to be inspirational and aspirational rather than an accurate picture of where robotics is today. And combined with the company’s dog-like Spot robot, they are certainly pushing boundaries.

2. Intuitive Surgical’s da Vinci SP platform
Robotic surgery isn’t new, but the technology is improving rapidly. Market leader Intuitive’s da Vinci surgical robot was first cleared by the FDA in 2000, but since then it’s come a long way, with the company now producing three separate systems.

The latest addition is the da Vinci SP (single port) system, which is able to insert three instruments into the body through a single 2.5 cm cannula (tube), bringing a whole new meaning to minimally invasive surgery. The system was granted FDA clearance for urological procedures last year, and the company has now started shipping it to customers.

3. Soft robot that navigates through growth

Roboticists have long borrowed principles from the animal kingdom, but a new robot design that mimics the way plant tendrils and fungal mycelia move by growing at the tip has really broken the mold on robot navigation.

The editors point out that this is the perfect example of bio-inspired design; the researchers didn’t simply copy nature, they took a general principle and expanded on it. The tube-like robot unfolds from the front as pneumatic pressure is applied, but unlike a plant, it can grow at the speed of an animal walking and can navigate using visual feedback from a camera.

4. 3D printed liquid crystal elastomers for soft robotics
Soft robotics is one of the fastest-growing sub-disciplines in the field, but powering these devices without rigid motors or pumps is an ongoing challenge. A variety of shape-shifting materials have been proposed as potential artificial muscles, including liquid crystal elastomeric actuators.

Harvard engineers have now demonstrated that these materials can be 3D printed using a special ink that allows the designer to easily program in all kinds of unusual shape-shifting abilities. What’s more, their technique produces actuators capable of lifting significantly more weight than previous approaches.

5. Muscle-mimetic, self-healing, and hydraulically amplified actuators
In another effort to find a way to power soft robots, last year researchers at the University of Colorado Boulder designed a series of super low-cost artificial muscles that can lift 200 times their own weight and even heal themselves.

The devices rely on pouches filled with a liquid that makes them contract with the force and speed of mammalian skeletal muscles when a voltage is applied. The most promising for robotics applications is the so-called Peano-HASEL, which features multiple rectangular pouches connected in series that contract linearly, just like real muscle.

6. Self-assembled nanoscale robot from DNA

While you may think of robots as hulking metallic machines, a substantial number of scientists are working on making nanoscale robots out of DNA. And last year German researchers built the first remote-controlled DNA robotic arm.

They created a length of tightly-bound DNA molecules to act as the arm and attached it to a DNA base plate via a flexible joint. Because DNA carries a charge, they were able to get the arm to swivel around like the hand of a clock by applying a voltage and switch direction by reversing that voltage. The hope is that this arm could eventually be used to build materials piece by piece at the nanoscale.

7. DelFly nimble bioinspired robotic flapper

Robotics doesn’t only borrow from biology—sometimes it gives back to it, too. And a new flapping-wing robot designed by Dutch engineers that mimics the humble fruit fly has done just that, by revealing how the animals that inspired it carry out predator-dodging maneuvers.

The lab has been building flapping robots for years, but this time they ditched the airplane-like tail used to control previous incarnations. Instead, they used insect-inspired adjustments to the motions of its twin pairs of flapping wings to hover, pitch, and roll with the agility of a fruit fly. That has provided a useful platform for investigating insect flight dynamics, as well as more practical applications.

8. Soft exosuit wearable robot

Exoskeletons could prevent workplace injuries, help people walk again, and even boost soldiers’ endurance. Strapping on bulky equipment isn’t ideal, though, so researchers at Harvard are working on a soft exoskeleton that combines specially-designed textiles, sensors, and lightweight actuators.

And last year the team made an important breakthrough by combining their novel exoskeleton with a machine-learning algorithm that automatically tunes the device to the user’s particular walking style. Using physiological data, it is able to adjust when and where the device needs to deliver a boost to the user’s natural movements to improve walking efficiency.

9. Universal Robots (UR) e-Series Cobots
Robots in factories are nothing new. The enormous mechanical arms you see in car factories normally have to be kept in cages to prevent them from accidentally crushing people. In recent years there’s been growing interest in “co-bots,” collaborative robots designed to work side-by-side with their human colleagues and even learn from them.

Earlier this year saw the demise of Rethink Robotics, the pioneer of the approach. But the simple single-arm devices made by Danish firm Universal Robots are becoming ubiquitous in workshops and warehouses around the world, accounting for about half of global co-bot sales. Last year the company released its latest e-Series, with enhanced safety features and force/torque sensing.

10. Sony’s aibo
After a nearly 20-year hiatus, Sony’s robotic dog aibo is back, and it’s had some serious upgrades. As well as a revamp to its appearance, the new robotic pet takes advantage of advances in AI, with improved environmental and command awareness and the ability to develop a unique character based on interactions with its owner.

The editors note that this new context awareness marks the device out as a significant evolution in social robots, which many hope could aid in childhood learning or provide companionship for the elderly.

Image Credit: DelFly Nimble / CC BY-SA 4.0
