Tag Archives: kind

#431599 8 Ways AI Will Transform Our Cities by ...

How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.
The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, technical fellow and a managing director at Microsoft Research.
Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, comprising experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame its report around the impact AI will have on the average American city. Here’s how the panel thinks AI will affect eight key domains of city life over the next fifteen years.
1. Transportation
The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.
Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition toward similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live farther from work, which could combine with a reduced need for parking to drastically change the face of modern cities.
Mountains of data from increasing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.
Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key, as these systems will be the public’s first experience with physically embodied AI and will strongly influence public perception.
2. Home and Service Robots
Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots’ on-board computing capacity.
Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft’s Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.
But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.
3. Healthcare
AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.
If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.
At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.
Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.
4. Education
The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.
AI-enabled education systems will learn individuals’ preferences, and by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will continue to widen educational access, making learning lifelong, enabling people to retrain, and bringing top-quality education to developing countries.
Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.
5. Low-Resource Communities
In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.
These areas are underfunded, though, so it is uncertain how quickly these capabilities will appear. There are fears that machine learning systems could inadvertently discriminate by correlating outcomes with race or gender, or with surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.
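To make the worry about surrogate factors concrete, here is a minimal synthetic sketch (invented numbers, purely illustrative—not from the panel’s report) of how a model that never sees group membership can still produce group-skewed outcomes when zip code correlates with that membership:

# Synthetic illustration of proxy discrimination: the scorer never sees
# "group", but zip code correlates with it, so outcomes diverge anyway.
import random

random.seed(0)

def make_person(group: str) -> dict:
    # Illustrative assumption: group A mostly lives in zip 00001.
    p_zip1 = 0.8 if group == "A" else 0.2
    return {"group": group, "zip": "00001" if random.random() < p_zip1 else "00002"}

people = [make_person("A") for _ in range(5000)] + [make_person("B") for _ in range(5000)]

def score(person: dict) -> float:
    """A 'group-blind' model that looks only at zip code."""
    return 1.0 if person["zip"] == "00001" else 0.4

for g in ("A", "B"):
    avg = sum(score(p) for p in people if p["group"] == g) / 5000
    print(g, round(avg, 2))  # A ~0.88, B ~0.52: skewed despite "blindness"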
6. Public Safety and Security
By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also make it possible to forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.
Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.
7. Employment and Workplace
The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.
AI is more likely to replace tasks than whole jobs in the near term, and it will also create new jobs and markets, even if it’s hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.
These structural shifts in the economy will require political rather than purely economic responses to ensure the new riches are shared. In the short run, this may mean pumping resources into education and retraining; in the longer term, it may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.
8. Entertainment
Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today’s. Breakthroughs in sensors and hardware will see virtual reality, haptics, and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and those systems will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.
Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to an unprecedented degree. There are concerns this could endow media conglomerates with outsized control over people’s online experiences and the ideas to which they are exposed.
But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. With the production of high-quality entertainment democratized, it becomes nearly impossible to predict how fluid human tastes in entertainment will develop.
Image Credit: Asgord / Shutterstock.com

Posted in Human Robots

#431424 A ‘Google Maps’ for the Mouse Brain ...

Ask any neuroscientist to draw you a neuron, and it’ll probably look something like a star with two tails: one stubby with extensive tree-like branches, the other willowy, lengthy and dotted with spindly spikes.
While a decent abstraction, this cartoonish image hides the uncomfortable truth that scientists still don’t know much about what many neurons actually look like, not to mention the extent of their connections.
But without untangling the jumbled mess of neural wires that zigzag across the brain, scientists are stumped in trying to answer one of the most fundamental mysteries of the brain: how individual neuronal threads carry and assemble information, which forms the basis of our thoughts, memories, consciousness, and self.
What if there was a way to virtually trace and explore the brain’s serpentine fibers, much like the way Google Maps allows us to navigate the concrete tangles of our cities’ highways?
Thanks to an interdisciplinary team at Janelia Research Campus, we’re on our way. Meet MouseLight, the most extensive map of the mouse brain ever attempted. The ongoing project has an ambitious goal: reconstructing thousands—if not more—of the mouse’s 70 million neurons into a 3D map. (You can play with it here!)
With map in hand, neuroscientists around the world can begin to answer how neural circuits are organized in the brain, and how information flows from one neuron to another across brain regions and hemispheres.
The first release, presented Monday at the Society for Neuroscience Annual Conference in Washington, DC, contains information about the shapes and sizes of 300 neurons.
And that’s just the beginning.
“MouseLight’s new dataset is the largest of its kind,” says Dr. Wyatt Korff, director of project teams. “It’s going to change the textbook view of neurons.”

http://mouselight.janelia.org/assets/carousel/ML-Movie.mp4
Brain Atlas
MouseLight is hardly the first rodent brain atlasing project.
The Mouse Brain Connectivity Atlas at the Allen Institute for Brain Science in Seattle tracks neuron activity across small circuits in an effort to trace a mouse’s connectome—a complete atlas of how the firing of one neuron links to the next.
MICrONS (Machine Intelligence from Cortical Networks), the $100 million government-funded “moonshot,” hopes to distill brain computation into algorithms for more powerful artificial intelligence. Its first step? Brain mapping.
What makes MouseLight stand out is its scope and level of detail.
MICrONS, for example, is focused on dissecting a cubic millimeter of the mouse visual processing center. In contrast, MouseLight involves tracing individual neurons across the entire brain.
And while connectomics outlines the major connections between brain regions, the bird’s-eye view entirely misses the intricacies of each individual neuron. This is where MouseLight steps in.
Slice and Dice
At only a fraction of the width of a human hair, neuron projections are hard to capture in their native state. Tug or squeeze the brain too hard, and the long, delicate branches distort or even shred into bits.
In fact, previous attempts at trying to reconstruct neurons at this level of detail topped out at just a dozen, stymied by technological hiccups and sky-high costs.
A few years ago, the MouseLight team set out to automate the entire process, with a few time-saving tweaks. Here’s how it works.
After injecting a mouse with a virus that causes a handful of neurons to produce a green-glowing protein, the team treated the brain with a sugar-alcohol solution. This step “clears” the brain, rendering the beige-colored organ translucent, making it easier for light to penetrate and boosting the signal-to-background noise ratio. The brain is then glued onto a small pedestal, ready for imaging.
Building upon an established method called “two-photon microscopy,” the team then tweaked several parameters to reduce imaging time from days (or weeks) down to a fraction of that. Endearingly known as “2P” by the experts, this type of laser microscope zaps the tissue with just enough photons to light up a single plane without damaging the tissue—sharper plane, better focus, crisper image.
After taking an image, the setup activates its vibrating razor and shaves off the imaged section of the brain—a wispy slice about 200 micrometers thick. The process is repeated until the whole brain is imaged.
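Conceptually, the acquisition is a simple image-and-cut loop. Here is a minimal Python sketch of that control flow; the helper functions and the 8 mm sample depth are illustrative stand-ins, not Janelia’s actual software:

# Sketch of the image-then-shave loop described above (hypothetical helpers).
SECTION_THICKNESS_UM = 200  # the wispy slice the vibrating razor removes

def image_plane(depth_um: int) -> str:
    """Stand-in for a two-photon acquisition of the exposed brain face."""
    return f"volume@{depth_um}um"

def shave_section(thickness_um: int) -> None:
    """Stand-in for the vibrating razor removing the imaged section."""

def image_whole_brain(total_depth_um: int = 8000) -> list:
    """Repeat image-then-cut until the whole sample has been consumed."""
    volumes, depth = [], 0
    while depth < total_depth_um:
        volumes.append(image_plane(depth))
        shave_section(SECTION_THICKNESS_UM)
        depth += SECTION_THICKNESS_UM
    return volumes  # later stitched back into a single 3D image

print(len(image_whole_brain()))  # 40 sections for an 8 mm deep sample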
This setup images 16 to 48 times faster than conventional microscopy, writes team leader Dr. Jayaram Chandrashekar, who published a version of the method early last year in eLife.
The resulting images strikingly highlight every nook and cranny of a neuronal branch, popping out against a pitch-black background. But pretty pictures come at a hefty data cost: each brain takes up a whopping 20 terabytes of data—roughly the storage space of 4,000 DVDs, or 10,000 hours of movies.
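The DVD comparison holds up to back-of-the-envelope arithmetic (assuming standard 4.7 GB single-layer discs):

# 20 TB expressed in single-layer DVDs (4.7 GB each).
brain_tb = 20
dvd_gb = 4.7
print(round(brain_tb * 1000 / dvd_gb))  # ~4255, i.e. "roughly 4,000 DVDs"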
Stitching individual images back into 3D is an image-processing nightmare. The MouseLight team used a combination of computational power and human prowess to complete this final step.
The reconstructed images are handed off to a mighty team of seven trained neuron trackers. With the help of tracing algorithms developed in-house and a keen eye, each member can track roughly a neuron a day—significantly less time than the week or so previously needed.
A Numbers Game
Even with just 300 fully reconstructed neurons, MouseLight has already revealed new secrets of the brain.
While it’s widely accepted that axons, the neurons’ outgoing projection, can span the entire length of the brain, these extra-long connections were considered relatively rare. (In fact, one previously discovered “giant neuron” was thought to link to consciousness because of its expansive connections).
Images captured from two-photon microscopy show an axon and dendrites protruding from a neuron’s cell body (sphere in center). Image Credit: Janelia Research Center, MouseLight project team
MouseLight blows that theory out of the water.
The data clearly shows that “giant neurons” are far more common than previously thought. For example, four neurons normally associated with taste had wiry branches that stretched all the way into brain areas that control movement and process touch.
“We knew that different regions of the brain talked to each other, but seeing it in 3D is different,” says Dr. Eve Marder at Brandeis University.
“The results are so stunning because they give you a really clear view of how the whole brain is connected.”
With a tried-and-true system in place, the team is now aiming to add 700 neurons to their collection within a year.
But appearance is only part of the story.
We can’t tell everything about a person simply by how they look. Neurons are the same: scientists can only infer so much about a neuron’s function by looking at its shape and position. The team also hopes to profile the gene expression patterns of each neuron, which could provide more hints about their roles in the brain.
MouseLight essentially dissects the neural infrastructure that allows information traffic to flow through the brain. But these anatomical highways are just the foundation. In Google Maps, roads form only the critical first layer; street view, traffic information, and other add-ons come later for a complete look at cities in flux.
The same will happen for understanding our ever-changing brain.
Image Credit: Janelia Research Campus, MouseLight project team

Posted in Human Robots

#431371 Amazon Is Quietly Building the Robots of ...

Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics by a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Clarke, after all, is responsible for a very famous quote: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? While true sci-fi robots won’t get here right away, the pieces are coming together—and the company doing the most to assemble them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly progressing. Notably, Amazon has more than just a dream; it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for, are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow for new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developers of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions towards a general-purpose, multi-functional robot. They built an impressive device—PR2—and programmed the operating system, ROS, that is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many workers at the company went into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve difficult problems of full autonomy for the robot, or manipulating its environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? First off, they just announced that their profits are up by 30 percent, and yet the company is well-known for their constantly-moving Day One philosophy where a great deal of the profits are reinvested back into the business. But lots of companies have ambition.
One thing Amazon has that few other corporations have, as well as big financial resources, is viable stepping stones for developing the technologies needed for this sort of robotics to become a reality. They already employ 100,000 robots: these are of the “pragmatic, boring, useful” kind that we’ve profiled, which move around the shelves in warehouses. These robots are allowing Amazon to develop localization and mapping software for robots that can autonomously navigate in the simple warehouse environment.
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robot hands to machine learning algorithms and having them grasp thousands of objects. Although results were promising, the 10 to 20 percent failure rate for grasps is too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
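To see why a 10 to 20 percent failure rate rules out warehouse use, consider how per-grasp errors compound across a multi-item order. A minimal sketch, under the simplifying assumption that grasps succeed independently:

# Per-grasp errors compound: P(order OK) = (1 - failure_rate) ** items.
def order_success(per_grasp_failure: float, items: int) -> float:
    return (1 - per_grasp_failure) ** items

for f in (0.10, 0.20):
    print(f"{f:.0%} grasp failure -> {order_success(f, 5):.0%} of 5-item orders succeed")
# 10% grasp failure -> 59% of 5-item orders succeed
# 20% grasp failure -> 33% of 5-item orders succeed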
Another area where humanoid robotics—especially bipedal locomotion, or walking—has been seriously suggested is the last-mile delivery problem. Amazon has shown willingness to be creative in this department with its notorious drone delivery service. But it’s all very well to have your self-driving car or van deliver packages to people’s doors—who puts the package on the doorstep? It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like CASSIE, developed at Oregon State, may one day be used to deliver parcels.
Again, no one stands to profit from cracking this technology more than Amazon. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through its website. Unlike companies like Google, Amazon has an easy way to make a direct profit from IoT devices, which could fuel funding.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment, and react accordingly. The way humans learn about our environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras, which is an ideal stepping stone task to train an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between today’s machines and our sci-fi vision of robots that can intelligently serve us, rather than mindlessly assemble cars, is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics—and with the unique stepping stones it has along the way—Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com

Posted in Human Robots

#431315 Better Than Smart Speakers? Japan Is ...

While American internet giants are developing speakers, Japanese companies are working on robots and holograms. They all share a common goal: to create the future platform for the Internet of Things (IoT) and smart homes.
Names like Bocco, EMIEW3, Xperia Agent, and Gatebox may not ring a bell for most people outside of Japan, but Sony, Hitachi, Sharp, and Softbank most certainly do. These companies, along with Japanese startups, have developed the robots, robot concepts, and even holograms behind that short list of names.
While there are distinct differences between the various systems, they share the potential to act as a remote control for IoT devices and smart homes. It is a very different direction from the one taken by companies like Google, Amazon, and Apple, which have so far focused on building IoT speaker systems.
Bocco robot. Image Credit: Yukai Engineering
“Technology companies are pursuing the platform—or smartphone if you will—for IoT. My impression is that Japanese companies—and Japanese consumers—prefer that such a platform should not just be an object, but a companion,” says Kosuke Tatsumi, designer at Yukai Engineering, a startup that has developed the Bocco robot system.
At Hitachi, a spokesperson said that the company’s human symbiotic service robot, EMIEW3, is currently in the field, doing proof-of-value tests at customer sites to investigate needs and potential solutions. This could include working as an interactive control system for the Internet of Things:
“EMIEW3 is able to communicate with humans, thus receive instructions, and as it is connected to a robotics IT platform, it is very much capable of interacting with IoT-based systems,” the spokesperson said.
The power of speech is getting feet
Gartner analysis predicts that there will be 8.4 billion internet-connected devices—collectively making up the Internet of Things—by the end of 2017, with 5.2 billion of those in the consumer category. By the end of 2020, the number of consumer IoT devices alone is expected to rise to 12.8 billion.
As a child of the 80s, I can vividly remember how fun it was to have separate remote controls for the TV, the VCR, and the stereo. I can imagine a situation where my internet-connected refrigerator, thermostat, television, and toaster all try to work out who I’m talking to and what I want them to do.
Consensus seems to be that speech will be the way to interact with many, if not most, IoT devices, and likewise that some form of virtual assistant will function as the IoT platform—or remote control. Almost everything else is still up for grabs, despite an early surge for speaker-based systems like those from Amazon, Google, and Apple.
Why robots could rule
Famous android creator and robot scientist Dr. Hiroshi Ishiguro sees the interaction between humans and the AI embedded in speakers or robots as central to both approaches. From there, the approaches differ greatly.
Image Credit: Hiroshi Ishiguro Laboratories
“It is about more than the difference of form. Speaking to an Amazon Echo is not a natural kind of interaction for humans. That is part of what we in Japan are creating in many human-like robot systems,” he says. “The human brain is constructed to recognize and interact with humans. This is part of why it makes sense to focus on developing the body for the AI mind as well as the AI mind itself. In a way, you can describe it as the difference between developing an assistant, which could be said to be what many American companies are currently doing, and a companion, which is more the focus here in Japan.”
Another advantage is that robots are more kawaii—a multifaceted Japanese word that can be translated as “cute”—than speakers are. This makes it easy for people to relate to them and forgive them.
“People are more willing to forgive children when they make mistakes, and the same is true with a robot like Bocco, which is designed to look kawaii and childlike,” Kosuke Tatsumi explains.
Japanese robots and holograms with IoT-control capabilities
So, what exactly do these robot and hologram companions look like, what can they do, and who’s making them? Here are seven examples of Japanese companies working to go a step beyond smart speakers with personable robots and holograms.
1. In 2016 Sony’s mobile division demonstrated the Xperia Agent concept robot that recognizes individual users, is voice controlled, and can do things like control your television and receive calls from services like Skype.

2. Sharp launched its Home Assistant at CES 2016, a robot-like, voice-controlled assistant that can control, among other things, air conditioning units and televisions. Sharp has also launched a robotic phone called RoBoHon.
3. Gatebox has created a holographic virtual assistant. Cynics will say that it is primarily the expression of an otaku (Japanese for nerd) dream of living with a manga heroine. Gatebox is, however, able to control things like lights, TVs, and other systems through API integration (a sketch of what such an integration might look like follows this list). It also provides its owner with weather-related advice like “remember your umbrella, it looks like it will rain later.” Gatebox can be controlled by voice, gesture, or via an app.
4. Hitachi’s EMIEW3 robot is designed to assist people in businesses and public spaces. It is connected to a robot IT-platform via the cloud that acts as a “remote brain.” Hitachi is currently investigating the business use cases for EMIEW3. This could include the role of controlling platform for IoT devices.

5. Softbank’s Pepper robot has been used by Avatarion as a platform to control medical IoT devices such as smart thermometers. The company has also developed various in-house systems that enable Pepper to control IoT devices like a coffee machine: a user simply asks Pepper to brew a cup of coffee, and it starts the machine.
6. Yukai Engineering’s Bocco registers when a person (e.g., a young child) comes home and acts as a communication center between that person and other members of the household (e.g., a parent still at work). The company is working on integrating voice recognition and voice control, and on having Bocco control the lights and other connected IoT devices.
7. Last year Toyota launched the Kirobo Mini, a companion robot which aims to, among other things, help its owner by suggesting “places to visit, routes for travel, and music to listen to” during the drive.
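Whatever the form factor, the shared “remote control for IoT” role boils down to mapping a recognized utterance onto a device command. Here is a minimal sketch; the endpoint, payload, and phrases are hypothetical, not any vendor’s real API:

# Hypothetical sketch: a companion robot relaying a voice command to a hub.
import json

def send_command(device: str, state: str) -> None:
    payload = json.dumps({"device": device, "state": state})
    # A real integration would POST this to the home hub's API; here we
    # just print what would be sent.
    print(f"POST /api/devices {payload}")

def handle_utterance(text: str) -> None:
    """Map a recognized phrase to a smart-home command."""
    if "lights on" in text.lower():
        send_command(device="living_room_light", state="on")

handle_utterance("Bocco, turn the lights on")
# -> POST /api/devices {"device": "living_room_light", "state": "on"}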

Today, Japan. Tomorrow…?
One of the key questions is whether this emerging phenomenon is a purely Japanese thing—whether the country’s love of robots makes it fundamentally different. Japan is, after all, a country where new units of Softbank’s Pepper robot routinely sell out in minutes and the RoBoHon robot-phone has its own cafe nights in Tokyo.
It is a country where TV introduces you to friendly, helpful robots like Doraemon and Astro Boy. I, on the other hand, first met robots in the shape of Arnold Schwarzenegger’s Terminator and struggled to work out why robots seemed intent on permanently borrowing things like clothes and motorcycles, not to mention why they hated people called Sarah.
However, research suggests that a big part of the reason the Japanese seem to like robots is a combination of exposure and positive experiences, which leads to greater acceptance. As robots spread to more and more industries—and into our homes—our acceptance of them will grow.
The argument is also backed by a project by Avatarion, which used Softbank’s Nao robot as a classroom representative for children who were in the hospital.
“What we found was that the other children quickly adapted to interacting with the robot and treating it as the physical representation of the child who was in hospital. They accepted it very quickly,” Thierry Perronnet, General Manager of Avatarion, explains.
His company has also developed solutions where Softbank’s Pepper robot is used as an in-home nurse and controls various medical IoT devices.
If robots end up becoming our preferred method for controlling IoT devices, it is by no means certain that said robots will be coming from Japan.
“I think that the goal for both Japanese and American companies—including the likes of Google, Amazon, Microsoft, and Apple—is to create human-like interaction. For this to happen, technology needs to evolve and adapt to us and how we are used to interacting with others, in other words, have a more human form. Humans’ speed of evolution cannot keep up with technology’s, so it must be the technology that changes,” Dr. Ishiguro says.
Image Credit: Sony Mobile Communications

Posted in Human Robots

#431243 Does Our Survival Depend on Relentless ...

Malthus had a fever dream in the 1790s. While the world was marveling at the first manifestations of modern science and technology and at the industrial revolution that was just beginning, he was concerned. He saw the exponential growth of the human population as a terrible problem for the species—an existential threat. He was afraid the human population would overshoot the availability of resources, and then things would really hit the fan.
“Famine seems to be the last, the most dreadful resource of nature. The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race. The vices of mankind are active and able ministers of depopulation.”
So Malthus wrote in his famous text, An Essay on the Principle of Population.
But Malthus was wrong—not just in his proposed solution, which was to stop giving aid and food to the poor so that their numbers wouldn’t explode. His prediction was also wrong: there was no great, overwhelming famine that held the population at 1790s levels. Instead, the world population—with a few dips—has continued to grow exponentially ever since. And it’s still growing.
Concurrently, there have been developments in agriculture and medicine and, in the 20th century, the Green Revolution, in which Norman Borlaug ensured that countries adopted high-yield varieties of crops—the first precursors to modern ideas of genetically engineering food for better crops and higher yields. The world became able to produce an astonishing amount of food—enough, in the modern era, for ten billion people. It is only a grave injustice in the way food is distributed that means 12 percent of the world goes hungry and starvation persists. But, that aside, we were saved by the majesty of another kind of exponential growth: the population grew, but the ability to produce food grew faster.
In so much of the world around us today, there’s the same old story. Take exploitation of fossil fuels: here, there is another exponential race. The exponential growth of our ability to mine coal, extract natural gas, refine oil from ever more complex hydrocarbons: this is pitted against our growing appetite. The stock market is built on exponential growth; you cannot provide compound interest unless the economy grows by a certain percentage a year.
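The arithmetic behind that expectation is simple but unforgiving; constant percentage growth is exponential growth. A minimal worked example:

# Constant percentage growth compounds: value(t) = start * (1 + rate) ** t
def compound(start: float, rate: float, years: int) -> float:
    return start * (1 + rate) ** years

# An economy growing a "modest" 3% a year doubles in roughly 24 years...
print(round(compound(1.0, 0.03, 24), 2))   # ~2.03
# ...and is more than 19 times larger after a century.
print(round(compound(1.0, 0.03, 100), 1))  # ~19.2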

“This relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species.”

When the economy fails to grow exponentially, it’s considered a crisis: a financial catastrophe. This expectation penetrates down to individual investors. In the cryptocurrency markets—hardly immune from bubbles and the bull-and-bear cycle of economics—the traders’ saying is “buy the rumor, sell the news.” Before an announcement is made, the expectation of growth, of a boost—the psychological shift—is almost invariably worth more than whatever the major announcement turns out to be. The idea of growth is baked into the share price, to the extent that even good news can often cause the price to dip when it’s delivered.
In the same way, this relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species. A third of Earth’s soil has been acutely degraded by agriculture; we are teetering on the brink of a topsoil crisis. In less relentless times, we might have tried to solve the problem by letting fields lie fallow for a few years. But that’s no longer an option: if we do so, people will starve. Instead, we look to a second Green Revolution—genetically modified crops, or hydroponics—to save us.
Climate change is considered by many to be an existential threat. The Intergovernmental Panel on Climate Change has already put its faith in the exponential growth of technology: many of the scenarios in which it can successfully imagine the human race dealing with the climate crisis involve the development and widespread deployment of carbon capture and storage technology. Our hopes for the future already have expectations of exponential technological growth in this field built in. Alongside this, to reduce carbon emissions to zero on the timescales we need, we will surely require new technologies in renewable energy, energy efficiency, and electrification of the transport system.
Without exponential growth in technology continuing, then, we are doomed. Humanity finds itself on a treadmill that’s rapidly accelerating, with the risk of plunging into the abyss if we can’t keep up the pace. Yet this very acceleration could also pose an existential threat. As our global system becomes more interconnected and complex, chaos theory takes over: the economics of a town in Macedonia can influence a US presidential election; critical infrastructure can be brought down by cybercriminals.
New threats, such as biotechnology, nanotechnology, or a generalized artificial intelligence, could put incredible power—power over the entire species—into the hands of a small number of people. We are faced with a paradox: the continued existence of our system depends on the exponential growth of our capacities outpacing the exponential growth of our needs and desires. Yet this very growth will create threats that are unimaginably larger than any humans have faced before in history.

“It is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.”

Neo-Luddites may find satisfaction in rejecting the ill-effects of technology, but they will still live in a society where technology is the lifeblood that keeps the whole system pumping. Now, more than ever, it is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.
If we decide that limitless exponential growth on a finite planet is unsustainable, we need to plan for the transition to a new way of living before our ability to accelerate runs out. If we require new technologies or fields of study to enable this growth to continue, we must focus our efforts on these before anything else. If we want to survive the 21st century without major catastrophe, we don’t have a choice but to understand it. Almost by default, we’re all accelerationists now.
Image Credit: focal point / Shutterstock.com

Posted in Human Robots