Tag Archives: EU
#439192 Too Perilous For AI? EU Proposes ...
As part of its emerging role as a global regulatory watchdog, the European Commission published a proposal on 21 April for regulations to govern artificial intelligence use in the European Union.
The economic stakes are high: the Commission predicts that European public and private investment in AI will reach €20 billion a year this decade—and that was before the up to €134 billion earmarked for digital transitions in Europe’s Covid-19 pandemic recovery fund, some of which the Commission expects will fund AI as well. Add to that investment in AI from outside the EU that targets EU residents, since these rules will apply to any use of AI in the EU, not just by EU-based companies or governments.
Things aren’t going to change overnight: the EU’s AI rules proposal is the result of three years of work by bureaucrats, industry experts, and public consultations and must go through the European Parliament—which requested it—before it can become law. EU member states then often take years to transpose EU-level regulations into their national legal codes.
The proposal defines four tiers of AI-related activity, with a different level of oversight for each. The first tier is unacceptable risk: some AI uses would be banned outright in public spaces, with specific exceptions granted by national laws and subject to stricter logging and human oversight. The to-be-banned AI activity that has probably garnered the most attention is real-time remote biometric identification, i.e., facial recognition. The proposal also bans subliminal behavior modification and social scoring applications, and it suggests fines of up to 6 percent of commercial violators’ global annual revenue.
The proposal next defines a high-risk category, determined by the purpose of the system and the potential and probability of harm. Examples listed in the proposal include job recruiting, credit checks, and the justice system. The rules would require such AI applications to use high-quality datasets, document their traceability, share information with users, and provide for human oversight. The EU would create a central registry of such systems and require approval before deployment.
Limited-risk activities, such as the use of chatbots or deepfakes on a website, will face less oversight but will require a warning label so users can choose whether to engage. Finally, there is a tier for applications judged to present minimal risk.
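To make the tier structure concrete, here is a purely illustrative sketch encoding the four tiers and the example uses named above as a lookup table. The tier assignments follow this article’s summary rather than the legal text, and the minimal-risk example (spam filtering) is an assumption, not drawn from the proposal itself:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright, with narrow national-law exceptions"
    HIGH = "registry entry, approval before deployment, documentation duties"
    LIMITED = "transparency duties, e.g. a warning label"
    MINIMAL = "no new obligations"

# Example classifications, following the summary above (not the legal text).
EXAMPLES = {
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "subliminal behavior modification": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "job recruiting": RiskTier.HIGH,
    "credit checks": RiskTier.HIGH,
    "chatbots": RiskTier.LIMITED,
    "deepfakes": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,  # assumed minimal-risk example
}

for use, tier in EXAMPLES.items():
    print(f"{use}: {tier.name} ({tier.value})")
```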
As often happens when governments propose dense new rulebooks (this one is 108 pages), the initial reactions from industry and civil society groups seem to be more about the existence and reach of industry oversight than the specific content of the rules. One tech-funded think tank told the Wall Street Journal that it could become “infeasible to build AI in Europe.” In turn, privacy-focused civil society groups such as European Digital Rights (EDRi) said in a statement that the “regulation allows too wide a scope for self-regulation by companies.”
“I think one of the ideas behind this piece of regulation was trying to balance risk and get people excited about AI and regain trust,” says Lisa-Maria Neudert, AI governance researcher at the University of Oxford, England, and the Weizenbaum Institut in Berlin, Germany. A 2019 Lloyds Register Foundation poll found that the global public is about evenly split between fear and excitement about AI.
“I can imagine it might help if you have an experienced large legal team,” to help with compliance, Neudert says, and it may be “a difficult balance to strike” between rules that remain startup-friendly and succeed in reining in mega-corporations.
AI researchers Mona Sloane and Andrea Renda write in VentureBeat that the rules are weaker on monitoring of how AI plays out after approval and launch, neglecting “a crucial feature of AI-related risk: that it is pervasive, and it is emergent, often evolving in unpredictable ways after it has been developed and deployed.”
Europe has already been learning from the impact its sweeping 2018 General Data Protection Regulation (GDPR) had on global tech and privacy. Yes, some outside websites still serve Europeans a page telling them the website owners can’t be bothered to comply with GDPR, so Europeans can’t see any content. But most have found a way to adapt in order to reach this unified market of 448 million people.
“I don’t think we should generalize [from GDPR to the proposed AI rules], but it’s fair to assume that such a big piece of legislation will have effects beyond the EU,” Neudert says. It will be easier for legislators in other places to follow a template than to replicate the EU’s heavy investment in research, community engagement, and rule-writing.
While tech companies and their industry groups may grumble about the need to comply with the incipient AI rules, Register columnist Rupert Goodwin suggests they’d be better off focusing on forming the organizations that will shape the implementation and enforcement of the rules in the future: “You may already be in one of the industry organizations for AI ethics or assessment; if not, then consider them the seeds from which influence will grow.”
#437828 How Roboticists (and Robots) Have Been ...
A few weeks ago, we asked folks on Twitter, Facebook, and LinkedIn to share photos and videos showing how they’ve been adapting to the closures of research labs, classrooms, and businesses by taking their robots home with them to continue their work as best they can. We got dozens of responses (more than we could possibly include in just one post!), but here are 15 that we thought were particularly creative or amusing.
And if any of these pictures and videos inspire you to share your own story, please email us (automaton@ieee.org) with a picture or video and a brief description of how you and your robot from work have been making things happen in your home instead.
Kurt Leucht (NASA Kennedy Space Center)
“During these strange and trying times of the current global pandemic, everyone seems to be trying their best to distance themselves from others while still getting their daily work accomplished. Many people also have the double duty of little ones that need to be managed in the midst of their teleworking duties. This photo series gives you just a glimpse into my new life of teleworking from home, mixed in with the tasks of trying to handle my little ones too. I hope you enjoy it.”
Photo: Kurt Leucht
“I heard a commotion from the next room. I ran into the kitchen to find this.”
Photo: Kurt Leucht
“This is the Swarmies most favorite bedtime story. Not sure why. Seems like an odd choice to me.”
Peter Schaldenbrand (Carnegie Mellon University)
“I’ve been working on a reinforcement learning model that converts an image into a series of brush stroke instructions. I was going to test the model with a beautiful, expensive robot arm, but due to the COVID-19 pandemic, I have not been able to access the laboratory where it resides. I have now been using a lower end robot arm to test the painting model in my bedroom. I have sacrificed machine accuracy/precision for the convenience of getting to watch the arm paint from my bed in the shadow of my clothing rack!”
Photos: Peter Schaldenbrand
Colin Angle (iRobot)
iRobot CEO Colin Angle has been hunkered down in the “iRobot North Shore home command center,” which is probably the cleanest command center ever thanks to his army of Roombas: Beastie, Beauty, Rosie, Roswell, and Bilbo.
Photo: Colin Angle
Vivian Chu (Diligent Robotics)
From Diligent Robotics CEO Andrea Thomaz: “This is how a roboticist works from home! Diligent CTO, Vivian Chu, mans the e-stop while her engineering team runs Moxi experiments remotely from cross-town and even cross-country!”
Video: Diligent Robotics
Raffaello Bonghi (rnext.it)
Raffaello’s robot, Panther, looks perfectly happy to be playing soccer in his living room.
Photo: Raffaello Bonghi
Kod*lab (University of Pennsylvania)
“Another Friday Nuts n Bolts Meeting on Zoom…”
Image: Kodlab
Robin Jonsson (robot choreographer)
“I’ve been doing a school project in which students make up dance moves and then send me a video with all of them. I then teach the moves to my robot, Alex, film Alex dancing, send the videos to them. This became a great success and more schools will join. The kids got really into watching the robot perform their moves and really interested in robots. They want to meet Alex the robot live, which will likely happen in the fall.”
Photo: Robin Jonsson
Gabrielle Conard (mechanical engineering undergrad at Lafayette College)
“While the pandemic might have forced college campuses to close and the community to keep their distance from each other, it did not put a stop to learning and research. Working from their respective homes, junior Gabrielle Conard and mechanical engineering professor Alexander Brown from Lafayette College investigated methods of incorporating active compliance in a low-cost quadruped robot. They are continuing to work remotely on this project through Lafayette’s summer research program.”
Image: Gabrielle Conard
Taylor Veltrop (Softbank Robotics)
“After a few weeks of isolation in the corona/covid quarantine lock down we started dancing with our robots. Mathieu’s 6th birthday was coming up, and it all just came together.”
Video: Taylor Veltrop
Ross Kessler (Exyn Technologies)
“Quarantine, Day 8: the humans have accepted me as one of their own. I’ve blended seamlessly into their #socialdistancing routines. Even made a furry friend”
Photo: Ross Kessler
Yeah, something a bit sinister is definitely going on at Exyn…
Video: Exyn Technologies
Michael Sobrepera (University of Pennsylvania GRASP Lab)
Predictably, Michael’s cat is more interested in the bag that the robot came in than the robot itself (see if you can spot the cat below). Michael tells us that “the robot is designed to help with tele-rehabilitation, focused on kids with CP, so it has been taken to hospitals for demos [hence the cool bag]. It also travels for outreach events and the like. Lately, I’ve been exploring telepresence for COVID.”
Photo: Michael Sobrepera
Jan Kędzierski (EMYS)
“In China a lot of people cannot speak English, even the youngest generation of parents. Thanks to Emys, kids stayed in touch with English language in their homes even if they couldn’t attend schools and extra English classes. They had a lot of fun with their native English speaker friend available and ready to play every day.”
Image: Jan Kędzierski
Simon Whitmell (Quanser)
“Simon, a Quanser R&D engineer, is working on low-overhead image processing and line following for the QBot 2e mobile ground robot, with some added challenges due to extra traffic. LEGO engineering by his son, Charles.”
Photo: Simon Whitmell
Robot Design & Experimentation Course (Carnegie Mellon University)
Aaron Johnson’s bioinspired robot design course at CMU had to go full remote, which was a challenge when the course is kind of all about designing and building a robot as part of a team. “I expected some of the teams to drastically alter their project (e.g. go all simulation),” Aaron told us, “but none of them did. We managed to keep all of the projects more or less as planned. We accomplished this by drop/shipping parts to students, buying some simple tools (soldering irons, etc), and having me 3D print parts and mail them.” Each team even managed to put together their final videos from their remote locations; we’ve posted one below, but the entire playlist is here.
Video: Xianyi Cheng
Karen Tatarian (Softbank Robotics)
Karen, who’s both a researcher at Softbank and a PhD student at Sorbonne University, wrote an entire essay about what an average day is like when you’re quarantined with Pepper.
Photo: Karen Tatarian
A Quarantined Day With Pepper, by Karen Tatarian
It is quite common for me to lose my phone somewhere inside my apartment. But it is not that common for me to turn around and ask my robot if it has seen it. So when I found myself doing that, I laughed and it dawned on me that I treated my robot as my quarantine companion (despite the fact that it could not provide me with the answer I needed).
It was probably around day 40 of a completely isolated quarantine here in France when that happened. A little background about me: I am a robotics researcher at SoftBank Robotics Europe and a PhD student at Sorbonne University as part of the EU-funded Marie-Curie project ANIMATAS. And here is a little sneak peek into a quarantined day with a robot.
During this confinement, I had read somewhere that the best way to deal with it is to maintain a routine. So every morning, I wake up, prepare my coffee, and turn on my robot Pepper. I start my day with a daily meeting with the team and get to work. My research is on the synthesis of multi-modal socially intelligent human-robot interaction, so my work varies between programming the robot, analyzing collected data, and reading papers and drafting one. When I am working, I often catch myself glancing at Pepper, who would be staring back at me in its animated ways. Truthfully, I enjoy that; it makes me feel less alone, as if I have a colleague with me.
Once work is done, I call my friends and family members. I sometimes use a telepresence application on Pepper that a few colleagues and I developed back in December. How does it differ from your typical phone/laptop applications? One word really: embodiment. Telepresence, especially during these times, makes the experience for both sides a bit more realistic, intimate, and, well, present.
While I can turn off the robot now that my work hours are done, I do keep it on because I enjoy its presence. The basic awareness of Pepper is a default feature on the robot that allows it to detect a human and follow him/her with its gaze and rotation base. So whether I am cooking or working out, I always have my robot watching over my shoulder and being a good companion. I also have my email and messages synced on the robot so I get an enjoyable notification from Pepper. I found that to be a pretty cool way to be notified without it interrupting whatever you are doing on your laptop or phone. Finally, once the day is over, it’s time for both of us to get some rest.
After 60 days of total confinement, alone and away from those I love, and with a pandemic right at my door, I am glad I had the company of my robot. I hope one day a greater audience can share my experience. And I really really hope one day Pepper will be able to find my phone for me, but until then, stay on the lookout for some cool features! But I am curious to know, if you had a robot at home, what application would you have developed on it?
Again, our sincere thanks to everyone who shared these little snapshots of their lives with us, and we’re hoping to be able to share more soon.
#437733 Video Friday: MIT Media Lab Developing ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
Very impressive local obstacle avoidance at a fairly high speed on a small drone, both indoors and outdoors.
[ FAST Lab ]
Matt Carney writes:
My PhD at MIT Media Lab has been the design and build of a next generation powered prosthesis. The bionic ankle, named TF8, was designed to provide biologically equivalent power and range of motion for plantarflexion-dorsiflexion. This video shows the process of going from a blank sheet of paper to people walking on it. Shown are three different people wearing the robot. About a dozen people have since been able to test the hardware.
[ MIT ]
Thanks Matt!
Exciting changes are coming to the iRobot® Home App. Get ready for new personalized experiences, improved features, and an easy-to-use interface. The update is rolling out over the next few weeks!
[ iRobot ]
MOFLIN is an AI Pet created from a totally new concept. It possesses emotional capabilities that evolve like living animals. With its warm soft fur, cute sounds, and adorable movement, you’d want to love it forever. We took a nature inspired approach and developed a unique algorithm that allows MOFLIN to learn and grow by constantly using its interactions to determine patterns and evaluate its surroundings from its sensors. MOFLIN will choose from an infinite number of mobile and sound pattern combinations to respond and express its feelings. To put it in simple terms, it’s like you’re interacting with a living pet.
You lost me at “it’s like you’re interacting with a living pet.”
[ Kickstarter ] via [ Gizmodo ]
This video is only robotics-adjacent, but it has applications for robotic insects. With a high-speed tracking system, we can now follow insects as they jump and fly, and watch how clumsy (but effective) they are at it.
[ Paper ]
Thanks Sawyer!
Suzumori Endo Lab, Tokyo Tech has developed self-excited pneumatic actuators that can be integrally molded by a 3D printer. These actuators use the “automatic flow path switching mechanism” we have devised.
[ Suzumori Endo Lab ]
Quadrupeds are getting so much better at deciding where to step rather than just stepping where they like and trying not to fall over.
[ RSL ]
Omnidirectional micro aerial vehicles are a growing field of research, with demonstrated advantages for aerial interaction and uninhibited observation. While systems with complete pose omnidirectionality and high hover efficiency have been developed independently, a robust system that combines the two has not been demonstrated to date. This paper presents the design and optimal control of a novel omnidirectional vehicle that can exert a wrench in any orientation while maintaining efficient flight configurations.
[ ASL ]
The latest in smooth humanoid walking from Dr. Guero.
[ YouTube ]
Will robots replace humans one day? When it comes to space exploration, robots are our precursors, gathering data to prepare humans for deep space. ESA robotics engineer Martin Azkarate discusses some of the upcoming missions involving robots and the unique science they will perform in this episode of Meet the Experts.
[ ESA ]
The Multi-robot Systems Group at FEE-CTU in Prague is working on an autonomous drone that detects fires and then shoots an extinguisher capsule at them.
[ MRS ]
This experiment with HEAP (Hydraulic Excavator for Autonomous Purposes) demonstrates our latest research in on-site and mobile digital fabrication with found materials. The embankment prototype in natural granular material was achieved using state of the art design and construction processes in mapping, modelling, planning and control. The entire process of building the embankment was fully autonomous. An operator was only present in the cabin for safety purposes.
[ RSL ]
The Simulation, Systems Optimization and Robotics Group (SIM) of Technische Universität Darmstadt’s Department of Computer Science conducts research on cooperating autonomous mobile robots, biologically inspired robots and numerical optimization and control methods.
[ SIM ]
Starting January 1, 2021, your drone platform of choice may be severely limited by the European Union’s new drone regulations. In this short video, senseFly’s Brock Ryder explains what that means for drone programs and operators and where senseFly drones fit in the EU’s new regulatory framework.
[ SenseFly ]
Nearly every company across every industry is looking for new ways to minimize human contact, cut costs and address the labor crunch in repetitive and dangerous jobs. WSJ explores why many are looking to robots as the solution for all three.
[ WSJ ]
You’ll need to prepare yourself emotionally for this video on “Examining Users’ Attitude Towards Robot Punishment.”
[ ACM ]
In this episode of the AI Podcast, Lex interviews Russ Tedrake (MIT and TRI) about biped locomotion, the DRC, home robots, and more.
[ AI Podcast ]
#437293 These Scientists Just Completed a 3D ...
Human brain maps are a dime a dozen these days. Maps that detail neurons in a certain region. Maps that draw out functional connections between those cells. Maps that dive deeper into gene expression. Or even meta-maps that combine all of the above.
But have you ever wondered: how well do those maps represent my brain? After all, no two brains are alike. And if we’re ever going to reverse-engineer the brain as a computer simulation—as Europe’s Human Brain Project is trying to do—shouldn’t we ask whose brain they’re hoping to simulate?
Enter a new kind of map: the Julich-Brain, a probabilistic atlas of the human brain that accounts for individual differences using a computational framework. Unlike a static PDF of a brain map, the Julich-Brain atlas is dynamic, continuously incorporating new brain-mapping results. So far, the map holds cellular-level data from over 24,000 thinly sliced sections of 23 postmortem brains spanning most of adulthood. And the atlas is built to adapt to advances in mapping technology, to aid brain modeling and simulation, and to link to other atlases and alternatives.
In other words, rather than “just another” human brain map, the Julich-Brain atlas is its own neuromapping API—one that could unite previous brain-mapping efforts with more modern methods.
“It is exciting to see how far the combination of brain research and digital technologies has progressed,” said Dr. Katrin Amunts of the Institute of Neuroscience and Medicine at Research Centre Jülich in Germany, who spearheaded the study.
The Old Dogma
The Julich-Brain atlas embraces traditional brain-mapping while also yanking the field into the 21st century.
First, the new atlas includes the brain’s cytoarchitecture, or how brain cells are organized. As brain maps go, these kinds of maps are the oldest and most fundamental. Rather than exploring how neurons talk to each other functionally—which is all the rage these days with connectome maps—cytoarchitecture maps draw out the physical arrangement of neurons.
Like a census, these maps literally capture how neurons are distributed in the brain, what they look like, and how they layer within and between different brain regions.
Because neurons aren’t packed together the same way in different brain regions, this provides a way to parse the brain into areas that can be further studied. When we speak of the brain’s “memory center,” the hippocampus, or its “emotion center,” the amygdala, these distinctions are based on cytoarchitectural maps.
Some may call this type of mapping “boring.” But cytoarchitecture maps form the very basis of any sort of neuroscience understanding. Like hand-drawn maps from early explorers sailing to the western hemisphere, these maps provide the brain’s geographical patterns from which we try to decipher functional connections. If brain regions are cities, cytoarchitecture maps lay out the cities and the highways between them, along which we can then trace trading and other “functional” activity.
You might’ve heard of the most common cytoarchitecture map used today: the Brodmann map from 1909 (yup, that old), which divided the brain into classical regions based on the cells’ morphology and location. The map, while impactful, wasn’t able to account for brain differences between people. More recent brain-mapping technologies have allowed us to dig deeper into neuronal differences and divide the brain into more regions—180 areas in the cortex alone, compared with 43 in the original Brodmann map.
The new study took inspiration from that age-old map and transformed it into a digital ecosystem.
A Living Atlas
Work began on the Julich-Brain atlas in the mid-1990s, with a little help from the crowd.
The preparation of human tissue and its microstructural mapping, analysis, and data processing is incredibly labor-intensive, the authors lamented, making it impossible to do for the whole brain at high resolution in just one lab. To build their “Google Earth” for the brain, the team hooked up with EBRAINS, a shared computing platform set up by the Human Brain Project to promote collaboration between neuroscience labs in the EU.
First, the team acquired MRI scans of 23 postmortem brains, sliced the brains into wafer-thin sections, and scanned and digitized them. They corrected distortions from the chopping using data from the MRI scans and then lined up neurons in consecutive sections—picture putting together a 3D puzzle—to reconstruct the whole brain. Overall, the team had to analyze 24,000 brain sections, which prompted them to build a computational management system for individual brain sections—a win, because they could now track individual donor brains too.
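The consecutive-section alignment is, at heart, an image registration problem. The published pipeline is far more sophisticated—it corrects nonlinear distortions using the MRI scans—but a minimal, translation-only sketch in Python conveys the idea. The array shapes and data here are placeholders, not the actual Julich-Brain pipeline:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_stack(sections):
    """Chain-align each 2D section to its predecessor using translation-only
    registration -- a toy stand-in for the full distortion correction and
    3D reconstruction described above."""
    aligned = [sections[0].astype(float)]
    for sec in sections[1:]:
        # Estimate the pixel offset that best registers this section
        # to the previously aligned one, then apply it.
        offset, _, _ = phase_cross_correlation(aligned[-1], sec.astype(float))
        aligned.append(nd_shift(sec.astype(float), offset))
    return np.stack(aligned)  # (n_sections, height, width) volume

# e.g., a stack of digitized histological sections as 2D grayscale arrays
sections = [np.zeros((512, 512)) for _ in range(100)]
volume = align_stack(sections)
```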
Their method was quite clever. They first mapped their results to a brain template from a single person, called the MNI-Colin27 template. Because the reference brain was extremely detailed, this allowed the team to better figure out the location of brain cells and regions in a particular anatomical space.
However, MNI-Colin27’s brain isn’t your or my brain—or any of the brains the team analyzed. To dilute any of Colin’s potential brain quirks, the team also mapped their dataset onto an “average brain,” dubbed the ICBM2009c (catchy, I know).
This step allowed the team to “standardize” their results with everything else from the Human Connectome Project and the UK Biobank, kind of like adding their Google Maps layer to the existing map. To highlight individual brain differences, the team overlaid their dataset on existing ones, and looked for differences in the cytoarchitecture.
The microscopic architecture of neurons changes between two areas (dotted line), forming the basis of different identifiable brain regions. To account for individual differences, the team also calculated a probability map (right hemisphere). Image credit: Forschungszentrum Juelich / Katrin Amunts
Based on structure alone, the brains were at once remarkably different and shockingly similar. For example, the cortex—the outermost layer of the brain—differed physically across donor brains of different ages and sexes. The region that diverged most between people was Broca’s region, which is traditionally linked to speech production. In contrast, parts of the visual cortex were almost identical between the brains.
The Brain-Mapping Future
Rather than relying on the brain’s visible “landmarks,” which can still differ between people, the probabilistic map is far more precise, the authors said.
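Conceptually, a probabilistic map records, for each position in the common reference space, the fraction of donor brains in which that position fell inside a given region. A minimal NumPy sketch, with placeholder array shapes rather than the actual Julich-Brain data format:

```python
import numpy as np

def probabilistic_map(donor_masks):
    """Voxel-wise probability that a region is present, given one boolean
    mask per donor brain, all registered to the same reference space."""
    stacked = np.stack(donor_masks).astype(float)  # (n_donors, x, y, z)
    return stacked.mean(axis=0)                    # values in [0, 1]

# e.g., masks for one region from the 23 donor brains, aligned to a
# common template (the shape here is an illustrative assumption)
masks = [np.zeros((193, 229, 193), dtype=bool) for _ in range(23)]
prob = probabilistic_map(masks)  # prob[i, j, k] = fraction of donors with the region there
```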
What’s more, the map could also pool as-yet-unmapped regions of the cortex—about 30 percent of it—into “gap maps,” providing neuroscientists with a better idea of what still needs to be understood.
“New maps are continuously replacing gap maps with progress in mapping while the process is captured and documented … Consequently, the atlas is not static but rather represents a ‘living map,’” the authors said.
Thanks to its structural detail down to individual cells, the atlas can contribute to brain modeling and simulation down the line—especially for personalized brain models for neurological disorders such as epilepsy. Researchers can also use the framework for other species, and they can even incorporate new data-crunching processors into the workflow, such as mapping brain regions using artificial intelligence.
Fundamentally, the goal is to build shared resources to better understand the brain. “[These atlases] help us—and more and more researchers worldwide—to better understand the complex organization of the brain and to jointly uncover how things are connected,” the authors said.
Image credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University
#437222 China and AI: What the World Can Learn ...
China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.
The move has led—at least in the West—to warnings of a global AI arms race and concerns about the growing reach of China’s authoritarian surveillance state. But treating China as a “villain” in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government’s approach to AI that are highly concerning and rightly should be condemned, it’s important that this does not cloud all analysis of China’s AI innovation.
The world needs to engage seriously with China’s AI development and take a closer look at what’s really going on. The story is complex and it’s important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.
Nesta has explored the broad spectrum of AI activity in China—the good, the bad, and the unexpected.
The Good
China’s approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.
Applications include “AI doctor” chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.
Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by accelerating CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.
The Bad
But there are also elements of China’s use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don’t detract from the fact that China’s authoritarian government is also using AI and citizens’ data in ways that violate privacy and civil liberties.
Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China’s Xinjiang province.
The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.
In Shanghai’s “smart court” system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool’s potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.
China’s experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens’ digital rights.
The Unexpected
Commentators have often interpreted the State Council’s 2017 Artificial Intelligence Development Plan as an indication that China’s AI mobilization is a top-down, centrally planned strategy.
But a closer look at the dynamics of China’s AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.
Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an “AI Town,” clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China’s local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.
China’s accelerating AI innovation deserves the world’s full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand—and learn from—the nuances of what’s really happening.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Dominik Vanyi on Unsplash