Tag Archives: monitor

#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of lifelike humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience, with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30+ motors suggests there are millions of possible combinations; researchers have identified 62 expressions in Sophia that they consider “human-like,” although not everyone agrees with that assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the much simpler chat engine the robots use, although it’s the latter that lets the robot inflate punters’ expectations with a few pre-programmed questions in an interview.
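For a rough sense of scale, here is a back-of-the-envelope calculation. It treats each motor as a simple on/off toggle, which already understates the real, continuous control space:

```python
# Rough lower bound on the number of face poses: treat each of ~30 motors as
# simply on or off. Real motors move continuously, so the true space is far larger.
motors = 30
binary_poses = 2 ** motors
print(f"{binary_poses:,} on/off combinations from {motors} motors")
# 1,073,741,824 -- well beyond "millions," of which only a few dozen read as human-like
```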

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley: the unsettling blend of unease and revulsion people feel when something looks almost, but not quite, human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human, even when they’re designed to interact with us. Consider the popularity of Aibo, Jibo, and other robots that don’t try to resemble humans. Something that resembles a human but isn’t quite right, though, is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great art has often closely depicted the human form, but that the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than to works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he monitor and log the physical movements people made on videotapes, but some of his robots are based on replicas of people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Future experiments were focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves, and decoding the origins of empathy and love, may be the greatest challenge we face. That is, if we want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com

Posted in Human Robots

#431995 The 10 Grand Challenges Facing Robotics ...

Robotics research has been making great strides in recent years, but there are still many hurdles to the machines becoming a ubiquitous presence in our lives. The journal Science Robotics has now identified 10 grand challenges the field will have to grapple with to make that a reality.

Editors conducted an online survey on unsolved challenges in robotics and assembled an expert panel of roboticists to shortlist the 30 most important topics, which were then grouped into 10 grand challenges that could have major impact in the next 5 to 10 years. Here’s what they came up with.

1. New Materials and Fabrication Schemes
Roboticists are beginning to move beyond motors, gears, and sensors by experimenting with things like artificial muscles, soft robotics, and new fabrication methods that combine multiple functions in one material. But most of these advances have been “one-off” demonstrations, which are not easy to combine.

Multi-functional materials merging things like sensing, movement, energy harvesting, or energy storage could allow more efficient robot designs. But combining these various properties in a single machine will require new approaches that blend micro-scale and large-scale fabrication techniques. Another promising direction is materials that can change over time to adapt or heal, but this requires much more research.

2. Bioinspired and Bio-Hybrid Robots
Nature has already solved many of the problems roboticists are trying to tackle, so many are turning to biology for inspiration or even incorporating living systems into their robots. But there are still major bottlenecks in reproducing the mechanical performance of muscle and the ability of biological systems to power themselves.

There has been great progress in artificial muscles, but their robustness, efficiency, and energy and power density need to be improved. Embedding living cells into robots can overcome challenges of powering small robots, as well as exploit biological features like self-healing and embedded sensing, though how to integrate these components is still a major challenge. And while a growing “robo-zoo” is helping tease out nature’s secrets, more work needs to be done on how animals transition between capabilities like flying and swimming to build multimodal platforms.

3. Power and Energy
Energy storage is a major bottleneck for mobile robotics. Rising demand from drones, electric vehicles, and renewable energy is driving progress in battery technology, but the fundamental challenges have remained largely unchanged for years.

That means that in parallel to battery development, there need to be efforts to minimize robots’ power utilization and give them access to new sources of energy. Enabling them to harvest energy from their environment and transmitting power to them wirelessly are two promising approaches worthy of investigation.

4. Robot Swarms
Swarms of simple robots that assemble into different configurations to tackle various tasks can be a cheaper, more flexible alternative to large, task-specific robots. Smaller, cheaper, more powerful hardware that lets simple robots sense their environment and communicate is combining with AI that can model the kind of behavior seen in nature’s flocks.

But more work is needed on the most efficient forms of control at different scales: small swarms can be controlled centrally, while larger ones need to be more decentralized. Swarms also need to be robust and adaptable to the changing conditions of the real world, and resilient to deliberate or accidental damage. Finally, more work is needed on heterogeneous swarms of robots with complementary capabilities.
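To make “decentralized” concrete, the minimal sketch below is a Boids-style update in which each robot adjusts its velocity using only its local neighbors, with no central controller. The weights, radius, and robot count are invented for illustration:

```python
import numpy as np

def decentralized_step(positions, velocities, radius=1.0, dt=0.1):
    """One swarm update in which each robot uses only its nearby neighbors:
    align headings with them and drift toward their center (no central controller)."""
    new_vel = velocities.copy()
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        neighbors = (dists < radius) & (dists > 0)
        if neighbors.any():
            align = velocities[neighbors].mean(axis=0)       # match neighbors' heading
            cohere = positions[neighbors].mean(axis=0) - p   # drift toward the local group
            new_vel[i] = 0.7 * velocities[i] + 0.2 * align + 0.1 * cohere
    return positions + new_vel * dt, new_vel

# Usage: 50 robots with random starting positions and small random velocities
pos = np.random.rand(50, 2) * 10
vel = np.random.randn(50, 2) * 0.1
for _ in range(100):
    pos, vel = decentralized_step(pos, vel)
```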

5. Navigation and Exploration
A key use case for robots is exploring places where humans cannot go, such as the deep sea, space, or disaster zones. That means they need to become adept at exploring and navigating unmapped, often highly disordered and hostile environments.

The major challenges include creating systems that can adapt, learn, and recover from navigation failures and are able to make and recognize new discoveries. This will require high levels of autonomy that allow the robots to monitor and reconfigure themselves while being able to build a picture of the world from multiple data sources of varying reliability and accuracy.

6. AI for Robotics
Deep learning has revolutionized machines’ ability to recognize patterns, but that needs to be combined with model-based reasoning to create adaptable robots that can learn on the fly.

Key to this will be creating AI that’s aware of its own limitations and can learn how to learn new things. It will also be important to create systems that are able to learn quickly from limited data rather than the millions of examples used in deep learning. Further advances in our understanding of human intelligence will be essential to solving these problems.

7. Brain-Computer Interfaces
BCIs will enable seamless control of advanced robotic prosthetics but could also prove a faster, more natural way to communicate instructions to robots or simply help them understand human mental states.

Most current approaches to measuring brain activity are expensive and cumbersome, though, so work on compact, low-power, and wireless devices will be important. They also tend to involve extended training, calibration, and adaptation due to the imprecise nature of reading brain activity. And it remains to be seen if they will outperform simpler techniques like eye tracking or reading muscle signals.

8. Social Interaction
If robots are to enter human environments, they will need to learn to deal with humans. But this will be difficult, as we have very few concrete models of human behavior and we are prone to underestimate the complexity of what comes naturally to us.

Social robots will need to perceive minute social cues like facial expression and intonation, understand the cultural and social context they are operating in, and model the mental states of the people they interact with, tailoring their behavior both in the short term and as long-standing relationships develop.

9. Medical Robotics
Medicine is one of the areas where robots could have significant impact in the near future. Devices that augment a surgeon’s capabilities are already in regular use, but the challenge will be to increase the autonomy of these systems in such a high-stakes environment.

Autonomous robot assistants will need to be able to recognize human anatomy in a variety of contexts and be able to use situational awareness and spoken commands to understand what’s required of them. In surgery, autonomous robots could perform the routine steps of a procedure, giving way to the surgeon for more complicated patient-specific bits.

Micro-robots that operate inside the human body also hold promise, but there are still many roadblocks to their adoption, including effective delivery systems, tracking and control methods, and crucially, finding therapies where they improve on current approaches.

10. Robot Ethics and Security
As the preceding challenges are overcome and robots are increasingly integrated into our lives, this progress will create new ethical conundrums. Most importantly, we may become over-reliant on robots.

That could lead to humans losing certain skills and capabilities, making us unable to take the reins in the case of failures. We may end up delegating tasks that should, for ethical reasons, have some human supervision, and allow people to pass the buck to autonomous systems in the case of failure. It could also reduce self-determination, as human behaviors change to accommodate the routines and restrictions required for robots and AI to work effectively.

Image Credit: Zenzen / Shutterstock.com

Posted in Human Robots

#431906 Low-Cost Soft Robot Muscles Can Lift 200 ...

Jerky mechanical robots are staples of science fiction, but to seamlessly integrate into everyday life they’ll need the precise yet powerful motor control of humans. Now scientists have created a new class of artificial muscles that could soon make that a reality.
The advance is the latest breakthrough in the field of soft robotics. Scientists are increasingly designing robots using soft materials that more closely resemble biological systems, which can be more adaptable and better suited to working in close proximity to humans.
One of the main challenges has been creating soft components that match the power and control of the rigid actuators that drive mechanical robots—things like motors and pistons. Now researchers at the University of Colorado Boulder have built a series of low-cost artificial muscles—as little as 10 cents per device—using soft plastic pouches filled with electrically insulating liquids that contract with the force and speed of mammalian skeletal muscles when a voltage is applied to them.

Three different designs of the so-called hydraulically amplified self-healing electrostatic (HASEL) actuators were detailed in two papers in the journals Science and Science Robotics last week. They could carry out a variety of tasks, from gently picking up delicate objects like eggs or raspberries to lifting objects many times their own weight, such as a gallon of water, at rapid repetition rates.
“We draw our inspiration from the astonishing capabilities of biological muscle,” Christoph Keplinger, an assistant professor at UC Boulder and senior author of both papers, said in a press release. “Just like biological muscle, HASEL actuators can reproduce the adaptability of an octopus arm, the speed of a hummingbird and the strength of an elephant.”
The artificial muscles work by applying a voltage to hydrogel electrodes on either side of pouches filled with liquid insulators, which can be as simple as canola oil. This creates an attraction between the two electrodes, pulling them together and displacing the liquid. This causes a change of shape that can push or pull levers, arms or any other articulated component.
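The attraction between the electrodes is ordinary electrostatics, so its rough size can be estimated from the parallel-plate Maxwell pressure. The numbers below (voltage, gap, permittivity, electrode area) are illustrative assumptions, not values taken from the papers:

```python
# Back-of-the-envelope estimate of the electrostatic (Maxwell) pressure that
# squeezes the liquid-filled pouch when a voltage is applied across it.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def maxwell_pressure(voltage, gap, rel_permittivity=3.0):
    """Attractive pressure (Pa) between two parallel electrodes separated by a
    liquid dielectric of thickness `gap` (m) when `voltage` (V) is applied."""
    return 0.5 * EPS0 * rel_permittivity * (voltage / gap) ** 2

p = maxwell_pressure(voltage=10_000, gap=1e-3)   # ~10 kV across ~1 mm of oil
area = 25e-4                                     # 25 cm^2 of electrode overlap, in m^2
print(f"pressure ≈ {p:.0f} Pa, force ≈ {p * area:.1f} N")  # roughly 1.3 kPa, a few newtons
```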
The design is essentially a synthesis of two leading approaches to actuating soft robots. Pneumatic and hydraulic actuators that pump fluids around have been popular due to their high forces, easy fabrication and ability to mimic a variety of natural motions. But they tend to be bulky and relatively slow.
Dielectric elastomer actuators apply an electric field across a solid insulating layer to make it flex. These can mimic the responsiveness of biological muscle. But they are not very versatile and can also fail catastrophically, because the high voltages required can cause a bolt of electricity to blast through the insulator, destroying it. The likelihood of this happening increases in line with the size of their electrodes, which makes it hard to scale them up. By combining the two approaches, researchers get the best of both worlds, with the power, versatility and easy fabrication of a fluid-based system and the responsiveness of electrically-powered actuators.
One of the designs holds particular promise for robotics applications, as it behaves a lot like biological muscle. The so-called Peano-HASEL actuators are made up of multiple rectangular pouches connected in series, which allows them to contract linearly, just like real muscle. They can lift more than 200 times their own weight, and, being electrically powered, they exceed the flexing speed of human muscle.
As the name suggests, the HASEL actuators are also self-healing. They are still prone to the same kind of electrical damage as dielectric elastomer actuators, but the liquid insulator is able to immediately self-heal by redistributing itself and regaining its insulating properties.
The muscles can even monitor the amount of strain they’re under to provide the same kind of feedback biological systems would. The muscle’s capacitance—its ability to store an electric charge—changes as the device stretches, which makes it possible to power the arm while simultaneously measuring what position it’s in.
The researchers say this could imbue robots with a similar sense of proprioception or body-awareness to that found in plants and animals. “Self-sensing allows for the development of closed-loop feedback controllers to design highly advanced and precise robots for diverse applications,” Shane Mitchell, a PhD student in Keplinger’s lab and an author on both papers, said in an email.
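A minimal sketch of how that self-sensing could close a feedback loop: the actuator's own capacitance is mapped back to position through a calibration table, and the error drives the commanded voltage. The calibration values, gains, and limits below are hypothetical, not the researchers' actual figures:

```python
import numpy as np

# Hypothetical calibration: measured capacitance (pF) at known actuator strokes (mm).
cal_capacitance = np.array([50.0, 62.0, 78.0, 97.0, 120.0])
cal_stroke = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

def estimate_stroke(measured_pf):
    """Self-sensing: infer the actuator's position from its own capacitance,
    using the monotonic calibration table above."""
    return np.interp(measured_pf, cal_capacitance, cal_stroke)

def voltage_command(measured_pf, target_mm, gain=500.0, v_max=10_000.0):
    """Simple proportional feedback: drive the voltage from the position error."""
    error = target_mm - estimate_stroke(measured_pf)
    return float(np.clip(gain * error, 0.0, v_max))

print(voltage_command(measured_pf=70.0, target_mm=3.0))  # still short of target: high voltage
```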
The researchers say the high voltages required are an ongoing challenge, though they’ve already designed devices in the lab that use a fifth of the voltage of those featured in the recent papers.
In most of their demonstrations, these soft actuators were being used to power rigid arms and levers, pointing to the fact that future robots are likely to combine both rigid and soft components, much like animals do. The potential applications for the technology range from more realistic prosthetics to much more dextrous robots that can work easily alongside humans.
It will take some work before these devices appear in commercial robots. But the combination of high performance with simple, inexpensive fabrication methods means other researchers are likely to jump in, so innovation could be rapid.
Image Credit: Keplinger Research Group/University of Colorado

Posted in Human Robots

#431828 This Self-Driving AI Is Learning to ...

I don’t have to open the doors of AImotive’s white 2015 Prius to see that it’s not your average car. This particular Prius has been christened El Capitan, the name written below the rear doors, and two small cameras are mounted on top of the car. Bundles of wire snake out from them, as well as from the two additional cameras on the car’s hood and trunk.
Inside is where things really get interesting, though. The trunk holds a computer the size of a microwave, and a large monitor covers the passenger glove compartment and dashboard. The center console has three switches labeled “Allowed,” “Error,” and “Active.”
Budapest-based AImotive is working to provide scalable self-driving technology alongside big players like Waymo and Uber in the autonomous vehicle world. On a highway test ride with CEO Laszlo Kishonti near the company’s office in Mountain View, California, I got a glimpse of just how complex that world is.
Camera-Based Feedback System
AImotive’s approach to autonomous driving is a little different from that of some of the best-known systems. For starters, they’re using cameras, not lidar, as primary sensors. “The traffic system is visual and the cost of cameras is low,” Kishonti said. “A lidar can recognize when there are people near the car, but a camera can differentiate between, say, an elderly person and a child. Lidar’s resolution isn’t high enough to recognize the subtle differences of urban driving.”
Image Credit: AImotive
The company’s aiDrive software uses data from the camera sensors to feed information to its algorithms for hierarchical decision-making, grouped under four concurrent activities: recognition, location, motion, and control.
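To make that hierarchy concrete, here is a toy Python sketch of a camera-first driving loop organized around those four activities. All class names, thresholds, and gains are hypothetical; this is not AImotive’s actual aiDrive code:

```python
from dataclasses import dataclass

@dataclass
class Perception:            # recognition: what the cameras see
    lane_offset_m: float     # car's lateral offset from lane center
    lead_car_gap_m: float    # distance to the vehicle ahead

@dataclass
class Location:              # location: where the car is on the road network
    lane_id: int
    progress_m: float

@dataclass
class MotionPlan:            # motion: what the car should do next
    target_speed_mps: float
    target_offset_m: float

def recognize(frame) -> Perception:
    # stand-in for camera-based detection networks
    return Perception(lane_offset_m=frame[0], lead_car_gap_m=frame[1])

def localize(p: Perception) -> Location:
    # stand-in for fusing detections into a position estimate
    return Location(lane_id=2, progress_m=1520.0)

def plan_motion(loc: Location, p: Perception) -> MotionPlan:
    # slow down if the gap ahead is short, otherwise hold highway speed
    speed = 29.0 if p.lead_car_gap_m > 40.0 else 20.0
    return MotionPlan(target_speed_mps=speed, target_offset_m=0.0)

def control(p: Perception, plan: MotionPlan) -> dict:
    # control: convert the plan into steering and speed commands
    steering = -0.5 * (p.lane_offset_m - plan.target_offset_m)
    return {"steering": steering, "target_speed_mps": plan.target_speed_mps}

frame = [0.3, 25.0]          # 0.3 m off-center, 25 m to the car ahead
perception = recognize(frame)
print(control(perception, plan_motion(localize(perception), perception)))
```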
Kishonti pointed out that lidar has already gotten more cost-efficient, and will only continue to do so.
“Ten years ago, lidar was best because there wasn’t enough processing power to do all the calculations by AI. But the cost of running AI is decreasing,” he said. “In our approach, computer vision and AI processing are key, and for safety, we’ll have fallback sensors like radar or lidar.”
aiDrive currently runs on Nvidia chips, which Kishonti noted were originally designed for graphics and are relatively power-hungry. “We’re planning to substitute lower-cost, lower-energy chips in the next six months,” he said.
Testing in Virtual Reality
Waymo recently announced its fleet has now driven four million miles autonomously. That’s a lot of miles, and hard to compete with. But AImotive isn’t trying to compete, at least not by logging more real-life test miles. Instead, the company is doing 90 percent of its testing in virtual reality. “This is what truly differentiates us from competitors,” Kishonti said.
He outlined the three main benefits of VR testing: it can simulate scenarios too dangerous for the real world (such as hitting something), too costly (not every company has Waymo’s funds to run hundreds of cars on real roads), or too time-consuming (like waiting for rain, snow, or other weather conditions to occur naturally and repeatedly).
“Real-world traffic testing is very skewed towards the boring miles,” he said. “What we want to do is test all the cases that are hard to solve.”
On a screen that looked not unlike multiple games of Mario Kart, he showed me the simulator. Cartoon cars cruised down winding streets, outfitted with all the real-world surroundings: people, trees, signs, other cars. As I watched, a furry kangaroo suddenly hopped across one screen. “Volvo had an issue in Australia,” Kishonti explained. “A kangaroo’s movement is different than other animals since it hops instead of running.” Talk about cases that are hard to solve.
AImotive is currently testing around 1,000 simulated scenarios every night, with a steadily rising curve of successful tests. These scenarios are broken down into features, and the car’s behavior around those features is fed into a neural network. As the algorithms learn more features, the level of complexity the vehicles can handle goes up.
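In spirit, that nightly process looks something like the sketch below: a batch of scenarios is replayed, each is scored pass/fail, and the failures become the next round of hard cases. The scenario names and the placeholder pass/fail check are invented; they stand in for the actual simulator:

```python
import random

SCENARIOS = [f"scenario_{i:04d}" for i in range(1000)]

def run_in_simulator(scenario: str) -> bool:
    """Stand-in for replaying one virtual-reality test case; returns pass/fail."""
    return random.random() > 0.1          # placeholder: ~90% of cases pass

def nightly_run(scenarios):
    failures = [s for s in scenarios if not run_in_simulator(s)]
    pass_rate = 1.0 - len(failures) / len(scenarios)
    return pass_rate, failures            # failures become tomorrow's training focus

rate, hard_cases = nightly_run(SCENARIOS)
print(f"pass rate: {rate:.1%}, new hard cases: {len(hard_cases)}")
```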
On the Road
After Kishonti and his colleagues filled me in on the details of their product, it was time to test it out. A safety driver sat in the driver’s seat, a computer operator in the passenger seat, and Kishonti and I in back. The driver maintained full control of the car until we merged onto the highway. Then he flicked the “Allowed” switch, his copilot pressed the “Active” switch, and he took his hands off the wheel.
What happened next, you ask?
A few things. El Capitan was going exactly the speed limit—65 miles per hour—which meant all the other cars were passing us. When a car merged in front of us or cut us off, El Cap braked accordingly (if a little abruptly). The monitor displayed the feed from each of the car’s cameras, plus multiple data fields and a simulation where a blue line marked the center of the lane, measured by the cameras tracking the lane markings on either side.
I noticed El Cap wobbling out of our lane a bit, but it wasn’t until two things happened in a row that I felt a little nervous: first we went under a bridge, then a truck pulled up next to us, both bridge and truck casting a complete shadow over our car. At that point El Cap lost it, and we swerved haphazardly to the right, narrowly missing the truck’s rear wheels. The safety driver grabbed the steering wheel and took back control of the car.
What happened, Kishonti explained, was that the shadows made it hard for the car’s cameras to see the lane markings. This was a new scenario the algorithm hadn’t previously encountered. If we’d only gone under a bridge or only been next to the truck for a second, El Cap may not have had so much trouble, but the two events happening in a row really threw the car for a loop—almost literally.
“This is a new scenario we’ll add to our testing,” Kishonti said. He added that another way for the algorithm to handle this type of scenario, rather than basing its speed and positioning on the lane markings, is to mimic nearby cars. “The human eye would see that other cars are still moving at the same speed, even if it can’t see details of the road,” he said.
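A minimal sketch of that fallback logic: steer from the lane markings when the detector is confident, and mimic the motion of neighboring cars when confidence drops, as it did in the shadows. The confidence score, thresholds, and gains are invented for illustration:

```python
def steering_command(lane_center_offset_m, marking_confidence,
                     neighbor_lateral_speed_mps, gain=0.5):
    """Steer toward the measured lane center when markings are trusted;
    otherwise fall back to following the flow of surrounding traffic."""
    if marking_confidence > 0.6:
        return -gain * lane_center_offset_m            # normal case: correct toward center
    return -gain * 0.2 * neighbor_lateral_speed_mps    # degraded case: mimic neighbors

print(steering_command(0.4, 0.9, 0.0))    # clear markings: steer back toward center
print(steering_command(0.4, 0.2, 0.05))   # deep shadow: ignore markings, follow traffic
```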
After another brief—and thankfully uneventful—hands-off cruise down the highway, the safety driver took over, exited the highway, and drove us back to the office.
Driving into the Future
I climbed out of the car feeling amazed not only that self-driving cars are possible, but that driving is possible at all. I squint when driving into a tunnel, swerve to avoid hitting a stray squirrel, and brake gradually at stop signs—all without consciously thinking to do so. On top of learning to steer, brake, and accelerate, self-driving software has to incorporate our brains’ and bodies’ unconscious (but crucial) reactions, like our pupils dilating to let in more light so we can see in a tunnel.
Despite all the progress of machine learning, artificial intelligence, and computing power, I have a wholly renewed appreciation for the thing that’s been in charge of driving up till now: the human brain.
Kishonti seemed to feel similarly. “I don’t think autonomous vehicles in the near future will be better than the best drivers,” he said. “But they’ll be better than the average driver. What we want to achieve is safe, good-quality driving for everyone, with scalability.”
AImotive is currently working with American tech firms and with car and truck manufacturers in Europe, China, and Japan.
Image Credit: Alex Oakenman / Shutterstock.com

Posted in Human Robots

#431599 8 Ways AI Will Transform Our Cities by ...

How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.
The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, technical fellow and a managing director at Microsoft Research.
Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, composed of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame its report around the impact AI will have on the average American city. Here’s how the panel thinks AI will affect eight key domains of city life in the next fifteen years.
1. Transportation
The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.
Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition towards similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live farther from work, which, combined with a reduced need for parking, could drastically change the face of modern cities.
Mountains of data from increasing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.
Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public’s first experience with physically embodied AI systems and will strongly influence public perception.
2. Home and Service Robots
Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots’ on-board computing capacity.
Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft’s Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.
But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.
3. Healthcare
AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.
If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.
At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.
Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.
4. Education
The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.
AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.
Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.
5. Low-Resource Communities
In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.
These areas are under-funded, though, so how quickly these capabilities will appear is uncertain. There are fears machine learning could inadvertently discriminate by correlating outcomes with race or gender, or with surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.
6. Public Safety and Security
By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.
Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.
7. Employment and Workplace
The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.
AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it’s hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.
These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.
8. Entertainment
Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and they will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.
Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.
But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. Democratizing the production of high-quality entertainment will make it nearly impossible to predict how humans’ highly fluid tastes in entertainment will develop.
Image Credit: Asgord / Shutterstock.com

Posted in Human Robots