#435681 Video Friday: This NASA Robot Uses ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Let us know if you have suggestions for next week, and enjoy today’s videos.
Robots can land on the Moon and drive on Mars, but what about the places they can’t reach? Designed by engineers at NASA’s Jet Propulsion Laboratory in Pasadena, California, a four-limbed robot named LEMUR (Limbed Excursion Mechanical Utility Robot) can scale rock walls, gripping with hundreds of tiny fishhooks in each of its 16 fingers and using artificial intelligence to find its way around obstacles. In its last field test in Death Valley, California, in early 2019, LEMUR chose a route up a cliff, scanning the rock for ancient fossils from the sea that once filled the area.
The LEMUR project has since concluded, but it helped lead to a new generation of walking, climbing and crawling robots. In future missions to Mars or icy moons, robots with AI and climbing technology derived from LEMUR could discover similar signs of life. Those robots are being developed now, honing technology that may one day be part of future missions to distant worlds.
[ NASA ]
This video demonstrates the autonomous footstep planning developed by IHMC. Robots in this video are the Atlas humanoid robot (DRC version) and the NASA Valkyrie. The operator specifies a goal location in the world, which is modeled as planar regions using the robot’s perception sensors. The planner then automatically computes the necessary steps to reach the goal using a Weighted A* algorithm. The planner does not reject footholds that offer only partial support; instead, it modifies them after the plan is found to try to increase the support area.
Currently, narrow terrain has a success rate of about 50%, rough terrain is about 90%, whereas flat ground is near 100%. We plan to increase planner speed and add the ability to plan through mazes and to unseen goals by including a body-path planner as the first step. Control, Perception, and Planning algorithms by IHMC Robotics.
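For readers who want a feel for what a Weighted A* search actually does, here is a minimal sketch in Python on a toy occupancy grid. It only illustrates the general algorithm (the inflation factor w, the grid, and the Manhattan heuristic are assumptions of this sketch); it is not IHMC’s footstep planner, which searches over footholds rather than grid cells.

```python
# Minimal weighted A* on a 2D occupancy grid -- an illustrative sketch only,
# not IHMC's footstep planner. The inflation factor w (> 1) biases the
# search toward the goal, trading optimality for planning speed.
import heapq

def weighted_a_star(grid, start, goal, w=2.0):
    """grid[r][c] == 1 means blocked; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan heuristic
    open_set = [(w * h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, g_n, n = heapq.heappop(open_set)
        if n == goal:                          # reconstruct the path
            path = [n]
            while n in came_from:
                n = came_from[n]
                path.append(n)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (n[0] + dr, n[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                tentative = g_n + 1
                if tentative < g.get(nb, float("inf")):
                    g[nb], came_from[nb] = tentative, n
                    heapq.heappush(open_set, (tentative + w * h(nb), tentative, nb))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(weighted_a_star(grid, (0, 0), (2, 0)))
```

Raising w makes the search greedier and faster at the cost of longer paths, which is the usual trade-off when a humanoid needs a plan quickly rather than a provably shortest one.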
[ IHMC ]
I’ve never really been able to get into watching people play poker, but throw an AI from CMU and Facebook into a game of no-limit Texas hold’em with five humans, and I’m there.
[ Facebook ]
In this video, Cassie Blue is navigating autonomously. Right now, her world is very small, the Wavefield at the University of Michigan, where she is told to turn left at intersections. You’re right, that is not a lot of independence, but it’s a first step away from a human and an RC controller!
Using a RealSense RGB-D camera, an IMU, and our version of an InEKF with contact factors, Cassie Blue is building a 3D semantic map in real time that identifies sidewalks, grass, poles, bicycles, and buildings. From the semantic map, occupancy and cost maps are built, with the sidewalk identified as walkable area and everything else considered an obstacle. A planner then sets a goal to stay approximately 50 cm to the right of the sidewalk’s left edge and plans a path around obstacles and corners using D*. The path is translated into waypoints that are achieved via Cassie Blue’s gait controller.
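As a rough illustration of the semantic-map-to-cost-map step described above, here is a toy Python sketch: per-cell semantic labels become a cost map in which only sidewalk cells are traversable. The label IDs and cost values are made up for the example; this is not the Michigan team’s code.

```python
# Toy illustration (not the Michigan team's code): turning per-cell semantic
# labels into a cost map where only "sidewalk" is treated as traversable.
import numpy as np

SIDEWALK, GRASS, POLE, BUILDING = 0, 1, 2, 3   # hypothetical label IDs

def semantic_to_cost(labels, obstacle_cost=np.inf, sidewalk_cost=1.0):
    """labels: 2D int array of semantic classes -> 2D float cost map."""
    cost = np.full(labels.shape, obstacle_cost, dtype=float)
    cost[labels == SIDEWALK] = sidewalk_cost
    return cost

labels = np.array([[GRASS,    SIDEWALK, SIDEWALK, POLE],
                   [GRASS,    SIDEWALK, SIDEWALK, GRASS],
                   [BUILDING, SIDEWALK, SIDEWALK, GRASS]])
print(semantic_to_cost(labels))
```

A grid planner like D* then simply avoids the infinite-cost cells, and the resulting path is handed down to the gait controller as waypoints.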
[ University of Michigan ]
Thanks Jesse!
Dave from HEBI Robotics wrote in to share some new actuators that are designed to get all kinds of dirty: “The R-Series takes HEBI’s X-Series to the next level, providing a sealed robotics solution for rugged, industrial applications and laying the groundwork for industrial users to address challenges that are not well met by traditional robotics. To prove it, we shot some video right in the Allegheny River here in Pittsburgh. Not a bad way to spend an afternoon :-)”
The R-Series Actuator is a full-featured robotic component as opposed to a simple servo motor. The output rotates continuously, requires no calibration or homing on boot-up, and contains a thru-bore for easy daisy-chaining of wiring. Modular in nature, R-Series Actuators can be used in everything from wheeled robots to collaborative robotic arms. They are sealed to IP67 and designed with a lightweight form factor for challenging field applications, and they’re packed with sensors that enable simultaneous control of position, velocity, and torque.
[ HEBI Robotics ]
Thanks Dave!
If your robot hands out karate chops on purpose, that’s great. If it hands out karate chops accidentally, maybe you should fix that.
COVR is short for “being safe around collaborative and versatile robots in shared spaces”. Our mission is to significantly reduce the complexity in safety certifying cobots. Increasing safety for collaborative robots enables new innovative applications, thus increasing production and job creation for companies utilizing the technology. Whether you’re an established company seeking to deploy cobots or an innovative startup with a prototype of a cobot-related product, COVR will help you analyze, test and validate the safety for that application.
[ COVR ]
Thanks Anna!
EPFL startup Flybotix has developed a novel drone with just two propellers and an advanced stabilization system that allow it to fly for twice as long as conventional models. That fact, together with its small size, makes it perfect for inspecting hard-to-reach parts of industrial facilities such as ducts.
[ Flybotix ]
SpaceBok is a quadruped robot designed and built by a Swiss student team from ETH Zurich and ZHAW Zurich, currently being tested using Automation and Robotics Laboratories (ARL) facilities at our technical centre in the Netherlands. The robot is being used to investigate the potential of ‘dynamic walking’ and jumping to get around in low gravity environments.
SpaceBok could potentially go up to 2 m high in lunar gravity, although such a height poses new challenges. Once it comes off the ground, the legged robot needs to stabilise itself to come down again safely – like a mini-spacecraft. So, like a spacecraft, SpaceBok uses a reaction wheel to control its orientation.
[ ESA ]
A new video from GITAI showing progress on their immersive telepresence robot for space.
[ GITAI ]
Tech United’s HERO robot (a Toyota HSR) competed in the RoboCup@Home competition, and it had a couple of garbage-related hiccups.
[ Tech United ]
Even small drones are getting better at autonomous obstacle avoidance in cluttered environments at useful speeds, as this work from the HKUST Aerial Robotics Group shows.
[ HKUST ]
DelFly Nimbles now come in swarms.
[ DelFly Nimble ]
This is a very short video, but it’s a fairly impressive look at a Baxter robot collaboratively helping someone put a shirt on, a useful task for folks with disabilities.
[ Shibata Lab ]
ANYmal can inspect the concrete in sewers for deterioration by sliding its feet along the ground.
[ ETH Zurich ]
HUG is a haptic user interface for teleoperating advanced robotic systems such as the humanoid robot Justin or the assistive robotic system EDAN. With its lightweight robot arms, HUG can measure human movements and simultaneously display forces from the distant environment. In addition to such teleoperation applications, HUG serves as a research platform for virtual assembly simulations, rehabilitation, and training.
[ DLR ]
This video about “image understanding” from CMU in 1979 (!) is amazing, and even though it’s long, you won’t regret watching until 3:30. Or maybe you will.
[ ARGOS (pdf) ]
Will Burrard-Lucas’ BeetleCam turned 10 this month, and in this video, he recounts the history of his little robotic camera.
[ BeetleCam ]
In this week’s episode of Robots in Depth, Per speaks with Gabriel Skantze from Furhat Robotics.
Gabriel Skantze is co-founder and Chief Scientist at Furhat Robotics and Professor in speech technology at KTH with a specialization in conversational systems. He has a background in research into how humans use spoken communication to interact.
In this interview, Gabriel talks about how the social robot revolution makes it necessary to communicate with humans in a human way, through speech and facial expressions. This is necessary as we expand the number of people that interact with robots as well as the types of interaction. Gabriel gives us more insight into the many challenges of implementing spoken communication for cobots, where robots and humans work closely together. They need to communicate about the world, the objects in it, and how to handle them. We also get to hear how having an embodied system using the Furhat robot head helps the interaction between humans and the system.
[ Robots in Depth ]
#435593 AI at the Speed of Light
Neural networks shine for solving tough problems such as facial and voice recognition, but conventional electronic versions are limited in speed and hungry for power. In theory, optics could beat digital electronic computers in the matrix calculations used in neural networks. However, optics have been limited by their inability to perform some complex calculations that required electronics. Now new experiments show that all-optical neural networks can tackle those problems.
The key attraction of neural networks is their massive interconnections among processors, comparable to the complex interconnections among neurons in the brain. This lets them perform many operations simultaneously, like the human brain does when looking at faces or listening to speech, making them more efficient for facial and voice recognition than traditional electronic computers that execute one instruction at a time.
Today's electronic neural networks have reached eight million neurons, but their future use in artificial intelligence may be limited by their high power usage and limited parallelism in connections. Optical connections through lenses are inherently parallel. The lens in your eye simultaneously focuses light from across your field of view onto the retina in the back of your eye, where an array of light-detecting nerve cells detects the light. Each cell then relays the signal it receives to neurons in the brain that process the visual signals to show us an image.
Glass lenses process optical signals by focusing light, which performs a complex mathematical operation called a Fourier transform that preserves the information in the original scene but rearranges it completely. One use of Fourier transforms is converting time variations in signal intensity into a plot of the frequencies present in the signal. The military used this trick in the 1950s to convert raw radar return signals recorded by an aircraft in flight into a three-dimensional image of the landscape viewed by the plane. Today that conversion is done electronically, but the vacuum-tube computers of the 1950s were not up to the task.
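To make the Fourier-transform point concrete, here is the digital version of that trick in a few lines of Python: a time-domain signal containing two tones goes in, and its frequency content comes out. This is just a NumPy illustration of the math, not a claim about how any of the optical systems are implemented.

```python
# Digital analogue of the lens trick: a Fourier transform turns time-domain
# samples into the frequencies they contain.
import numpy as np

fs = 1000                                   # sample rate, Hz
t = np.arange(0, 1, 1 / fs)                 # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))                        # -> [50.0, 120.0]
```

A lens does the equivalent operation on an entire 2D light field at once, essentially for free, which is exactly why optics is attractive for the matrix-heavy math inside neural networks.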
Development of neural networks for artificial intelligence started with electronics, but their AI applications have been limited by their slow processing and need for extensive computing resources. Some researchers have developed hybrid neural networks, in which optics perform simple linear operations, but electronics perform more complex nonlinear calculations. Now two groups have demonstrated simple all-optical neural networks that do all processing with light.
In May, Wolfram Pernice of the Institute of Physics at the University of Münster in Germany and colleagues reported testing an all-optical “neuron” in which signals change target materials between liquid and solid states, an effect that has been used for optical data storage. They demonstrated nonlinear processing, and produced output pulses like those from organic neurons. They then produced an integrated photonic circuit that incorporated four optical neurons operating at different wavelengths, each of which connected to 15 optical synapses. The photonic circuit contained more than 140 components and could recognize simple optical patterns. The group wrote that their device is scalable, and that the technology promises “access to the high speed and high bandwidth inherent to optical systems, thus enabling the direct processing of optical telecommunication and visual data.”
Now a group at the Hong Kong University of Science and Technology reports in Optica that they have made an all-optical neural network based on a different process, electromagnetically induced transparency, in which incident light affects how atoms shift between quantum-mechanical energy levels. The process is nonlinear and can be triggered by very weak light signals, says Shengwang Du, a physics professor and coauthor of the paper.
In their demonstration, they illuminated rubidium-85 atoms cooled by lasers to about 10 microkelvin (10 microdegrees above absolute zero). Although the technique may seem unusually complex, Du said the system was the most accessible one in the lab that could produce the desired effects. “As a pure quantum atomic system [it] is ideal for this proof-of-principle experiment,” he says.
Next, they plan to scale up the demonstration using a hot atomic vapor cell, which is less expensive, does not require time-consuming preparation of cold atoms, and can be integrated with photonic chips. Du says the major challenges are reducing the cost of the nonlinear processing medium and increasing the scale of the all-optical neural network for more complex tasks.
“Their demonstration seems valid,” says Volker Sorger, an electrical engineer at George Washington University in Washington who was not involved in either demonstration. He says the all-optical approach is attractive because it offers very high parallelism, but the update rate is limited to about 100 hertz because of the liquid crystals used in their test, and he is not completely convinced their approach can be scaled error-free.
#435589 Construction Robots Learn to Excavate by ...
Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.
Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.
The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.
As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.
The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.
In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, volunteers attempted to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.
At outer-orbit intervals, including SE4’s dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay, but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
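Those figures follow directly from the geometry. Here is a quick back-of-the-envelope check, using approximate closest and farthest Earth-Mars separations (the distance values below are rough approximations):

```python
# Back-of-the-envelope check of the 3-to-22-minute figure.
# Distances are approximate closest and farthest Earth-Mars separations.
C_KM_PER_S = 299_792.458                    # speed of light, km/s

for label, km in (("closest", 54.6e6), ("farthest", 401e6)):
    minutes = km / C_KM_PER_S / 60          # one-way travel time in minutes
    print(f"one-way light delay at {label} approach: {minutes:.1f} min")
# -> roughly 3.0 and 22.3 minutes
```

And that is one way only; a command-and-confirm round trip doubles it, which is why joystick-style teleoperation simply does not work at Mars distances.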
A long-distance relationship
Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”
Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.
The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”
Semantic control
That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.
The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.
That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.
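Conceptually, the hand-off looks something like the sketch below: a scene of objects annotated with meaning, plus an ordered list of instruction modules the robot can execute without waiting for further round trips. The class names and fields are purely hypothetical, for illustration only; this is not SE4’s Semantic Control API.

```python
# Purely illustrative sketch of the idea behind "semantic" task hand-off:
# objects annotated with meaning, plus an ordered list of instruction modules
# the robot can execute without further round trips. Not SE4's actual API.
from dataclasses import dataclass, field

@dataclass
class AnnotatedObject:
    object_id: str
    label: str          # e.g. "building_block" vs. "rock"
    pose: tuple         # (x, y, z) in the robot's frame

@dataclass
class InstructionModule:
    action: str         # e.g. "pick", "place"
    target_id: str
    params: dict = field(default_factory=dict)

scene = [
    AnnotatedObject("obj1", "building_block", (0.4, 0.1, 0.0)),
    AnnotatedObject("obj2", "rock", (0.9, -0.3, 0.0)),
]
plan = [
    InstructionModule("pick", "obj1"),
    InstructionModule("place", "obj1", {"pose": (0.4, 0.1, 0.2)}),
]
for step in plan:
    print(f"{step.action} -> {step.target_id}")
```

The point of batching the plan this way is that everything above can be uplinked in one shot; the robot only needs local sensing to carry the steps out in order.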
Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”
This story was updated on 4 September 2019.
#435583 Soft Self-Healing Materials for Robots ...
If there’s one thing we know about robots, it’s that they break. They break, like, literally all the time. The software breaks. The hardware breaks. The bits that you think could never, ever, ever possibly break end up breaking just when you need them not to break the most, and then you have to try to explain what happened to your advisor who’s been standing there watching your robot fail and then stay up all night fixing the thing that seriously was not supposed to break.
While most of this is just a fundamental characteristic of robots that can’t be helped, the European Commission is funding a project called SHERO (Self HEaling soft RObotics) to try and solve at least some of those physical robot breaking problems through the use of structural materials that can autonomously heal themselves over and over again.
SHERO is a three-year, €3 million collaboration between Vrije Universiteit Brussel, University of Cambridge, École Supérieure de Physique et de Chimie Industrielles de la ville de Paris (ESPCI-Paris), and Swiss Federal Laboratories for Materials Science and Technology (Empa). As the name SHERO suggests, the goal of the project is to develop soft materials that can completely recover from the kinds of damage that robots are likely to suffer in day-to-day operations, as well as the occasional more extreme accident.
Most materials, especially soft materials, are fixable somehow, whether it’s with super glue or duct tape. But fixing things involves a human first identifying when they’re broken, and then performing a potentially skill-, labor-, time-, and money-intensive task. SHERO’s soft materials will, eventually, make this entire process autonomous, allowing robots to self-identify damage and initiate healing on their own.
Photos: SHERO Project
The damaged robot finger [top] can operate normally after healing itself.
How the self-healing material works
What these self-healing materials can do is really pretty amazing. The researchers are actually developing two different types—the first one heals itself when there’s an application of heat, either internally or externally, which gives some control over when and how the healing process starts. For example, if the robot is handling stuff that’s dirty, you’d want to get it cleaned up before healing it so that dirt doesn’t become embedded in the material. This could mean that the robot either takes itself to a heating station, or it could activate some kind of embedded heating mechanism to be more self-sufficient.
The second kind of self-healing material is autonomous, in that it will heal itself at room temperature without any additional input, and is probably more suitable for relatively minor scrapes and cracks. Here are some numbers about how well the healing works:
Autonomous self-healing polymers do not require heat. They can heal damage at room temperature. Developing soft robotic systems from autonomous self-healing polymers excludes the need of additional heating devices… The healing however takes some time. The healing efficiency after 3 days, 7 days and 14 days is respectively 62 percent, 91 percent and 97 percent.
This material was used to develop a healable soft pneumatic hand. Relevant large cuts can be healed entirely without the need of external heat stimulus. Depending on the size of the damage and even more on the location of damage, the healing takes only seconds or up to a week. Damage on locations on the actuator that are subjected to very small stresses during actuation was healed instantaneously. Larger damages, like cutting the actuator completely in half, took 7 days to heal. But even this severe damage could be healed completely without the need of any external stimulus.
Applications of self-healing robots
Both of these materials can be mixed together, and their mechanical properties can be customized so that the structure that they’re a part of can be tuned to move in different ways. The researchers also plan on introducing flexible conductive sensors into the material, which will help sense damage as well as providing position feedback for control systems. A lot of development will happen over the next few years, and for more details, we spoke with Bram Vanderborght at Vrije Universiteit in Brussels.
IEEE Spectrum: How easy or difficult or expensive is it to produce these materials? Will they add significant cost to robotic grippers?
Bram Vanderborght: They are definitely more expensive materials, but it’s also a matter of size of production. At the moment, we’ve made a few kilograms of the material (enough to make several demonstrators), and the price already dropped significantly from when we ordered 100 grams of the material in the first phase of the project. So probably the cost of the gripper will be higher [than a regular gripper], but you won’t need to replace the gripper as often as other grippers that need to be replaced due to wear, so it can be an advantage.
Moreover due to the method of 3D printing the material, the surface is smoother and airtight (so no post-processing is required to make it airtight). Also, the smooth surface is better to avoid contamination for food handling, for example.
In commercial or industrial applications, gradual fatigue seems to be a more common issue than more abrupt trauma like cuts. How well does the self-healing work to improve durability over long periods of time?
We did not test for gradual fatigue over very long times. But both macroscopic and microscopic damage can be healed. So hopefully it can provide an answer here as well.
Image: SHERO Project
After developing a self-healing robot gripper, the researchers plan to use similar materials to build parts that can be used as the skeleton of robots, allowing them to repair themselves on a regular basis.
How much does the self-healing capability restrict the material properties? What are the limits for softness or hardness or smoothness or other characteristics of the material?
Typically the mechanical properties of networked polymers are much better than thermoplastics. Our material is a networked polymer but in which the crosslinks are reversible. We can change quite a lot of parameters in the design of the materials. So we can develop very stiff (fracture strain at 1.24 percent) and very elastic materials (fracture strain at 450 percent). The big advantage that our material has is we can mix it to have intermediate properties. Moreover, at the interface of the materials with different mechanical properties, we have the same chemical bonds, so the interface is perfect. While other materials, they may need to glue it, which gives local stresses and a weak spot.
When the material heals itself, is it less structurally sound in that spot? Can it heal damage that happens to the same spot over and over again?
In theory we can heal it an infinite amount of times. When the wound is not perfectly aligned, of course in that spot it will become weaker. Also too high temperatures lead to irreversible bonds, and impurities lead to weak spots.
Besides grippers and skins, what other potential robotics applications would this technology be useful for?
Most self-healing materials available now are used for coatings. What we are developing are structural components, therefore the mechanical properties of the material need to be good for such applications. So maybe part of the skeleton of the robot can be developed with such materials to make it lighter, since it can be designed for regular repair. And for exceptional loads, it breaks and can be repaired like our human body.
[ SHERO Project ]
#435528 The Time for AI Is Now. Here’s Why
You hear a lot these days about the sheer transformative power of AI.
There’s pure intelligence: DeepMind’s algorithms readily beat humans at Go and StarCraft, and DeepStack triumphs over humans at no-limit hold’em poker. Often, these silicon brains generate gameplay strategies that don’t resemble anything from a human mind.
There’s astonishing speed: algorithms routinely surpass radiologists in diagnosing breast cancer, eye disease, and other ailments visible from medical imaging, essentially collapsing decades of expert training down to a few months.
Although AI’s silent touch is mainly felt today in the technological, financial, and health sectors, its impact across industries is rapidly spreading. At the Singularity University Global Summit in San Francisco this week, Neil Jacobstein, Chair of AI and Robotics, painted a picture of a better AI-powered future for humanity that is already here.
Thanks to cloud-based cognitive platforms, sophisticated AI tools like deep learning are no longer relegated to academic labs. For startups looking to tackle humanity’s grand challenges, the tools to efficiently integrate AI into their missions are readily available. The progress of AI is massively accelerating—to the point you need help from AI to track its progress, joked Jacobstein.
Now is the time to consider how AI can impact your industry, and in the process, begin to envision a beneficial relationship with our machine coworkers. As Jacobstein stressed in his talk, the future of a brain-machine mindmeld is a collaborative intelligence that augments our own. “AI is reinventing the way we invent,” he said.
AI’s Rapid Revolution
Machine learning and other AI-based methods may seem academic and abstruse. But Jacobstein pointed out that there are already plenty of real-world AI application frameworks.
Their secret? Rather than coding from scratch, smaller companies—with big visions—are tapping into cloud-based solutions such as Google’s TensorFlow, Microsoft’s Azure, or Amazon’s AWS to kick off their AI journey. These platforms act as all-in-one solutions that not only clean and organize data, but also contain built-in security and drag-and-drop coding that allow anyone to experiment with complicated machine learning algorithms.
Google Cloud’s Anthos, for instance, lets anyone migrate data from other servers—IBM Watson or AWS, for example—so users can leverage different computing platforms and algorithms to transform data into insights and solutions.
Rather than coding from scratch, it’s already possible to hop onto a platform and play around with it, said Jacobstein. That’s key: this democratization of AI is how anyone can begin exploring solutions to problems we didn’t even know we had, or those long thought improbable.
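What does “playing around with it” look like in practice? Something like the minimal sketch below: a tiny Keras classifier trained on synthetic data with TensorFlow’s high-level API. It is an illustration of how low the barrier has become, not a recipe tied to any particular cloud service; the data and model are made up for the example.

```python
# Minimal sketch of the kind of experiment the cloud platforms make easy:
# a small Keras classifier trained on synthetic data. Illustrative only.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("int32")        # toy binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)
print("training accuracy:", model.evaluate(x, y, verbose=0)[1])
```

The managed platforms wrap roughly this workflow in hosted notebooks, data pipelines, and security, so the hard part shifts from writing code to framing the problem and curating the data.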
The acceleration is only continuing. Much of AI’s mind-bending pace is thanks to a massive infusion of funding. Microsoft recently injected $1 billion into OpenAI, the Elon Musk venture that engineers socially responsible artificial general intelligence (AGI).
The other revolution is in hardware, and Google, IBM, and NVIDIA—among others—are racing to manufacture computing chips tailored to machine learning.
Democratizing AI is like the birth of the printing press. Mechanical printing allowed anyone to become an author; today, an iPhone lets anyone film a movie masterpiece.
However, this diffusion of AI into the fabric of our lives means tech explorers need to bring skepticism to their AI solutions, giving them a dose of empathy, nuance, and humanity.
A Path Towards Ethical AI
The democratization of AI is a double-edged sword: as more people wield the technology’s power in real-world applications, problems embedded in deep learning threaten to disrupt those very judgment calls.
Much of the press on the dangers of AI focuses on superintelligence—AI that’s more adept at learning than humans—taking over the world, said Jacobstein. But the near-term threat, which is far more insidious, lies in humans misusing the technology.
Deepfakes, for example, allow AI rookies to paste one person’s head on a different body or put words into a person’s mouth. As the panel said, it pays to think of AI as a cybersecurity problem, one with currently shaky accountability and complexity, and one that fails at diversity and bias.
Take bias. Thanks to progress in natural language processing, Google Translate works nearly perfectly today, so much so that many consider the translation problem solved. Not true, the panel said. One famous example is how the algorithm translates gender-neutral terms like “doctor” into “he” and “nurse” into “she.”
These biases reflect our own, and it’s not just a data problem. To truly engineer objective AI systems, ones stripped of our society’s biases, we need to ask who is developing these systems, and consult those who will be impacted by the products. In addition to gender, racial bias is also rampant. For example, one recent report found that a supposedly objective crime-predicting system was trained on falsified data, resulting in outputs that further perpetuate corrupt police practices. Another study from Google just this month found that their hate speech detector more often labeled innocuous tweets from African-Americans as “obscene” compared to tweets from people of other ethnicities.
We often think of building AI as purely an engineering job, the panelists agreed. But similar to gene drives, germ-line genome editing, and other transformative—but dangerous—tools, AI needs to grow under the consultation of policymakers and other stakeholders. It pays to start young: educating newer generations on AI biases will mold malleable minds early, alerting them to the problem of bias and potentially mitigating risks.
As panelist Tess Posner from AI4ALL said, AI is rocket fuel for ambition. If young minds set out using the tools of AI to tackle their chosen problems, while fully aware of its inherent weaknesses, we can begin to build an AI-embedded future that is widely accessible and inclusive.
The bottom line: people who will be impacted by AI need to be in the room at the conception of an AI solution. People will be displaced by the new technology, and ethical AI has to consider how to mitigate human suffering during the transition. Just because AI looks like “magic fairy dust” doesn’t mean that you’re home free, the panelists said. You, the sentient human, bear the burden of being responsible for how you decide to approach the technology.
The time for AI is now. Let’s make it ethical.
Image Credit: GrAI / Shutterstock.com