Tag Archives: face

#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of life-like humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that it is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30-plus motors suggests that there are millions of possible combinations; researchers have identified 62 in Sophia that they consider “human-like,” although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the comparatively simplistic chat engine the robots use, although it’s the latter that lets the robot inflate the punters’ expectations with a few pre-programmed questions in an interview.
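To make that combinatorial claim concrete, here is a back-of-the-envelope calculation (a deliberate simplification for illustration, not Hanson's actual control model): even if each facial motor could settle into only two positions, thirty independent motors would already yield over a billion configurations.

```python
def motor_combinations(num_motors: int, positions_per_motor: int) -> int:
    """Count the distinct configurations of independent motors, assuming
    each motor independently settles into one of a fixed set of positions."""
    return positions_per_motor ** num_motors

# Even with a crude two-position simplification, 30 motors give
# 2**30 = 1,073,741,824 possible face configurations.
print(motor_combinations(30, 2))  # 1073741824
```

With finer-grained motor positions the count explodes further, which underlines how few of the possible configurations read as coherent human expressions.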

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—the strange blend of unease and revulsion people feel when something appears almost, but not quite, human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human. This is true even if they’re designed to interact with us—consider the popularity of Aibo, Jibo, and other robots that make no attempt to resemble humans. Something that resembles a human but isn’t quite right, however, is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper “Upending the Uncanny Valley,” he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he videotape and log the physical movements people made, but some of his robots are based on replicas of real people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Later experiments focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge to face. That is, if you want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com

Posted in Human Robots

#432021 Unleashing Some of the Most Ambitious ...

At Singularity University, we are unleashing a generation of women who are smashing through barriers and starting some of the most ambitious technology companies on the planet.

Singularity University was founded in 2008 to empower leaders to use exponential technologies to solve our world’s biggest challenges. Our flagship program, the Global Solutions Program, has historically brought 80 entrepreneurs from around the world to Silicon Valley for 10 weeks to learn about exponential technologies and create moonshot startups that improve the lives of a billion people within a decade.

After nearly 10 years of running this program, we can say that about 70 percent of our successful startups have been founded or co-founded by female entrepreneurs (see below for inspiring examples of their work). This is in sharp contrast to the typical 10–20 percent of venture-backed tech companies that have a female founder, as reported by TechCrunch.

How are we so dramatically changing the game? While 100 percent of the credit goes to these courageous women, as both an alumna of the Global Solutions Program and our current vice chair of Global Grand Challenges, I want to share my reflections on what has worked.

At the most basic level, it is essential to deeply believe in the inherent worth, intellectual genius, and profound entrepreneurial caliber of women. While this may seem obvious, this is not the way our world currently thinks—we live in a world that sees women’s ideas, contributions, work, and existence as inherently less valuable than men’s.

For example, a 2017 Harvard Business Review article noted that even when women engage in the same behaviors and work as men, their work is considered less valuable simply because a woman did the job. An additional 2017 Harvard Business Review article showed that venture capitalists are significantly less likely to invest in female entrepreneurs and are more likely to ask men questions about the potential success of their companies while grilling women about the potential downfalls of their companies.

This doubt and lack of recognition of the genius and caliber of women is also why women are still paid less than men for completing identical work. Further, it’s why women’s work often gets buried in “number two” support roles to men in leadership roles, and why women are expected to take on second shifts at home managing tedious household chores in addition to their careers. I would also argue that these views, as well as the rampant sexual harassment, assault, and violence against women that exists today, stem from stubborn, historical, patriarchal views of women as living for the benefit of men, rather than for their own sovereignty and inherent value.

As with any other business, Singularity University has not been immune to these biases, but it is resolutely focused on helping women realize their intellectual genius and global entrepreneurial caliber by harnessing powerful exponential technologies.

We create an environment where women can physically and intellectually thrive free of harassment to reach their full potential, and we are building a broader ecosystem of alumni and partners around the world who not only support our female entrepreneurs throughout their entrepreneurial journeys, but who are also sparking and leading systemic change in their own countries and communities.

Respecting the Intellectual Genius and Entrepreneurial Caliber of Women
The entrepreneurial legends of our time—Steve Jobs, Elon Musk, Mark Zuckerberg, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin—are men who have all built their empires using exponential technologies. Exponential technologies helped these men succeed faster and with greater impact thanks to Moore’s Law and the Law of Accelerating Returns, which holds that any digital technology (such as computing, software, artificial intelligence, robotics, quantum computing, biotechnology, or nanotechnology) will become more sophisticated while dramatically falling in price, enabling rapid scaling.

Knowing this, an entrepreneur can plot her way to an ambitious global solution over time, releasing new applications just as the technology and market are ready. Furthermore, these rapidly advancing technologies often converge to create new tools and opportunities for innovators to come up with novel solutions to challenges that were previously impossible to solve.

For various reasons, women have not pursued exponential technologies as aggressively as men (or were prevented or discouraged from doing so).

While women in wealthy countries like the United States are founding firms at a higher rate than ever, the majority are small businesses in linear industries that have been around for hundreds of years, such as social assistance, health, education, administrative, or consulting services. In lower-income countries, international aid agencies and nonprofits often encourage women to pursue careers in traditional handicrafts, micro-enterprise, and micro-finance. While these jobs have historically helped women escape poverty and gain financial independence, they have done little to help women realize the enormous power, influence, wealth, and ability to transform the world for the better that comes from building companies, nonprofits, and solutions grounded in exponential technologies.

We need women to be working with exponential technologies today in order to be powerful leaders in the future.

Participants who enroll in our Global Solutions Program spend the first few weeks of the program learning about exponential technologies from the world’s experts and the final weeks launching new companies or nonprofits in their area of interest. We require that women (as well as men) utilize exponential technologies as a condition of the program.

In this sense, at Singularity University women start their endeavors with all of us believing and behaving in a way that assumes they can achieve global impact at the level of our world’s most legendary entrepreneurs.

Creating an Environment Where Women Can Thrive
While challenging women to embrace exponential technologies is essential, it is also important to create an environment where women can thrive. In particular, this means ensuring women feel at home on our campus by ensuring gender diversity, aggressively addressing sexual harassment, and flipping the traditional culture from one that penalizes women, to one that values and supports them.

While women initially made up only a small minority of our Global Solutions Program, in 2014 we achieved around 50 percent female attendance—a statistic that has held in the years since.

This is not due to a quota—every year we turn away extremely qualified women from our program (and we are working on reformulating the program to allow more people to participate in the future). While part of our recruiting success is due to the efforts of our marketing team, we also benefited from the efforts of some of our early female founders, staff, faculty, and alumnae, including Susan Fonseca, Emeline Paat-Dahlstrom, Kathryn Myronuk, Lajuanda Asemota, Chiara Giovenzana, and Barbara Silva Tronseca.

As early champions of Singularity University these women not only launched diversity initiatives and personally reached out to women, but were crucial role models holding leadership roles in our community. In addition, Fonseca and Silva also both created multiple organizations and initiatives outside of (or in conjunction with) the university that produced additional pipelines of female candidates. In particular, Fonseca founded Women@TheFrontier as well as other organizations focusing on women, technology and innovation, and Silva founded BestInnovation (a woman’s accelerator in Latin America), as well as led Singularity University’s Chilean Chapter and founded the first SingularityU Summit in Latin America.

These women’s efforts in globally scaling Singularity University have been critical in ensuring women around the world now see Singularity University as a place where they can lead and shape the future.

Also, thanks to Google (Alphabet) and many of our alumni and partners, we were able to provide full scholarships to any woman (or man) to attend our program regardless of their economic status. Google committed significant funding for full scholarships while our partners around the world also hosted numerous Global Impact Competitions, where entrepreneurs pitched their solutions to their local communities with the winners earning a full scholarship funded by our partners to attend the Global Solution Program as their prize.

Google and our partners’ support helped individuals attend our program and created a wider buzz around exponential technology and social change around the world in local communities. It led to the founding of 110 SU chapters in 55 countries.

Another vital aspect of our work in supporting women has been trying to create a harassment-free environment. Throughout Silicon Valley, more than 60 percent of women report that while they are trying to build their companies or get their work done, they are also dealing with physical and sexual harassment while being demeaned and excluded in other ways in the workplace. We have taken actions to educate and train our staff on how to deal with situations should they occur. All staff receive training on harassment when they join Singularity University, and all Global Solutions Program participants attend mandatory trainings on sexual harassment when they first arrive on campus. We also have male and female wellness counselors available who can offer support to both individuals and teams of entrepreneurs throughout the entire program.

While at a minimum our campus must be physically safe for women, we also strive to create a culture that values women and supports them in the additional challenges and expectations they face. For example, one of our 2016 female participants, Van Duesterberg, was pregnant during the program and said that instead of having people doubt her commitment to her startup or make her prove she could handle having a child and running a start-up at the same time, people went out of their way to help her.

“I was the epitome of a person not supposed to be doing a startup,” she said. “I was pregnant and would need to take care of my child. But Singularity University was supportive and encouraging. They made me feel super-included and that it was possible to do both. I continue to come back to campus even though the program is over because the network welcomes me and supports me rather than shuts me out because of my physical limitations. Rather than making me feel I had to prove myself, everyone just understood me and supported me, whether it was bringing me healthy food or recommending funders.”

Another strength that we have in supporting women is that after the Global Solutions Program, entrepreneurs have access to a much larger ecosystem.

Many entrepreneurs partake in SU Ventures, which can provide further support to startups as they develop, and we now have a larger community of over 200,000 people in almost every country. These members have often attended other Singularity University programs and events and are committed to our vision of the future. These women and men include business executives, Fortune 500 companies, investors, nonprofit and government leaders, technologists, members of the media, and other movers and shakers in the world. They have made introductions for our founders, collaborated with them on business ventures, invested in them, and showcased their work at high-profile events around the world.

Building for the Future
While our Global Solutions Program is making great strides in supporting female entrepreneurs, there is always more work to do. We are now focused on achieving the same degree of female participation across all of our programs and actively working to recruit and feature more female faculty and speakers on stage. As our community grows and scales around the world, we are also intent on how best to uphold our values and policies around sexual harassment across diverse locations and cultures. And like all businesses everywhere, we are focused on recruiting more women to serve at senior leadership levels within SU. As we make our way forward, we hope that you will join us in boldly leading this change and recognizing the genius and power of female entrepreneurs.

Meet Some of Our Female Moonshots
While we have many remarkable female entrepreneurs in the Singularity University community, the list below features a few of the women who have founded or co-founded companies at the Global Solutions Program that have launched new industries and are on their way to changing the way our world works for millions if not billions of people.

Jessica Scorpio co-founded Getaround in 2009. Getaround was one of the first car-sharing service platforms, allowing anyone to rent out their car using a smartphone app. Getaround was a revolutionary idea in 2009, not only because smartphones and apps were still in their infancy, but because it was unthinkable that a technology startup could disrupt the major entrenched car, transport, and logistics companies. Scorpio’s early insights and pioneering entrepreneurial work brought to life new ways for humans to relate to car sharing and to the future self-driving car industry. Scorpio and Getaround have won numerous awards, and Getaround now serves over 200,000 members.

Paola Santana co-founded Matternet in 2011, which pioneered the commercial drone transport industry. In 2011, only the military, hobbyists, or the film industry used drones. Matternet demonstrated that drones could be used for commercial transport in short point-to-point deliveries of high-value goods, laying the groundwork for drone transport around the world as well as some of the early thinking behind the future flying car industry. Santana was also instrumental in shaping regulations for the use of commercial drones around the world, making the industry possible.

Sara Naseri co-founded Qurasense in 2014, a life sciences startup that analyzes women’s health through menstrual blood, allowing women to track their health every month. Naseri is shifting our understanding of menstrual blood from a waste product and something “not to be talked about” to a rich, non-invasive, abundant source of information about women’s health.

Abi Ramanan co-founded ImpactVision in 2015, a software company that rapidly analyzes the quality and characteristics of food through hyperspectral images. Her long-term vision is to digitize food supply chains to reduce waste and fraud, given that one-third of all food is currently wasted before it reaches our plates. Ramanan is also helping the world understand that hyperspectral technology can be used in many industries to help us “see the unseen” and augment our ability to sense and understand what is happening around us in a much more sophisticated way.

Anita Schjøll Brede and Maria Ritola co-founded Iris AI in 2015, an artificial intelligence company that is building an AI research assistant that drastically improves the efficiency of R&D research and breaks down silos between different industries. Their long-term vision is for Iris AI to become smart enough that she will become a scientist herself. Fast Company named Iris AI one of the 10 most innovative artificial intelligence companies for 2017.

Hla Hla Win co-founded 360ed in 2016, a startup that conducts teacher training and student education through virtual reality and augmented reality in Myanmar. They have already connected teachers from 128 private schools in Myanmar with schools teaching 21st-century skills in Silicon Valley and around the world. Their moonshot is to build a platform where any teacher in the world can share best practices in teachers’ training. As they succeed, millions of children in some of the poorest parts of the world will have access to a 21st-century education.

Min FitzGerald and Van Duesterberg co-founded Nutrigene in 2017, a startup that ships freshly formulated, tailor-made supplement elixirs directly to consumers. Their long-term vision is to help people optimize their health using actionable data insights, so people can take a guided, tailored approach to thriving into longevity.

Anna Skaya co-founded Basepaws in 2016, which created the first genetic test for cats and is building a community of citizen scientist pet owners. They are creating personalized pet products such as supplements, therapeutics, treats, and toys while also developing a database of genetic data for future research that will help both humans and pets over the long term.

Olivia Ramos co-founded Deep Blocks in 2016, a startup using artificial intelligence to integrate and streamline the processes of architecture, pre-construction, and real estate. As digital technologies, artificial intelligence, and robotics advance, it no longer makes sense for these industries to exist separately. Ramos recognized the tremendous value and efficiency that exponential technologies can now unlock by integrating these industries.

Please also visit our website to learn more about other female entrepreneurs, staff, and faculty who are pioneering the future through exponential technologies.


#431958 The Next Generation of Cameras Might See ...

You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.

The latest camera research is shifting away from increasing the number of megapixels towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling of how light travels through the scene or the camera.

This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense anymore. Instead, we will use light detectors that only a few years ago we would never have considered of any use for imaging. And they will be able to do incredible things, like see through fog, inside the human body, and even behind walls.

Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.

To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
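The measure-and-combine loop described above can be sketched in a few lines of NumPy. This is a toy simulation under stated assumptions (random binary illumination patterns, a noiseless made-up scene, and a simple correlation-based reconstruction), not any particular camera's pipeline:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy scene: an 8x8 "object" with a bright square on a dark background.
size = 8
scene = np.zeros((size, size))
scene[2:6, 2:6] = 1.0

# Project many random binary illumination patterns onto the scene.
n_patterns = 4000
patterns = rng.integers(0, 2, size=(n_patterns, size, size)).astype(float)

# The single pixel records one number per pattern: total reflected light.
bucket = np.einsum("nij,ij->n", patterns, scene)

# "Add everything together": weight each pattern by its mean-subtracted
# bucket reading and sum, which correlates out an estimate of the scene.
estimate = np.einsum("n,nij->ij", bucket - bucket.mean(), patterns) / n_patterns
```

With enough patterns the bright square emerges in `estimate`; practical single-pixel systems use compressed sensing to get away with far fewer measurements than pixels.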

Clearly the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras—for example, ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.

It is even possible to capture images from light particles that have never even interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?

Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information about a scene, of which traditional techniques collect only a small part.

This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.

The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.

Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.

All of these techniques rely on combining images with models that explain how light travels through or around different substances.

Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.

Single-photon and quantum imaging technologies are also maturing to the point that they can take pictures at incredibly low light levels and videos at incredibly fast speeds, reaching a trillion frames per second. This is enough to capture images of light itself traveling across a scene.

Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.

This article was originally published on The Conversation. Read the original article.

Image Credit: Sylvia Adams / Shutterstock.com


#431920 If We Could Engineer Animals to Be as ...

Advances in neural implants and genetic engineering suggest that in the not-too-distant future we may be able to boost human intelligence. If that’s true, could we—and should we—bring our animal cousins along for the ride?
Human brain augmentation made headlines last year after several tech firms announced ambitious efforts to build neural implant technology. Duke University neuroscientist Mikhail Lebedev told me in July it could be decades before these devices have applications beyond the strictly medical.
But he said the technology, as well as other pharmacological and genetic engineering approaches, will almost certainly allow us to boost our mental capacities at some point in the next few decades.
Whether this kind of cognitive enhancement is a good idea or not, and how we should regulate it, are matters of heated debate among philosophers, futurists, and bioethicists, but for some it has raised the question of whether we could do the same for animals.
There’s already tantalizing evidence of the idea’s feasibility. As detailed in BBC Future, a group from MIT found that mice that were genetically engineered to express the human FOXP2 gene linked to learning and speech processing picked up maze routes faster. Another group at Wake Forest University studying Alzheimer’s found that neural implants could boost rhesus monkeys’ scores on intelligence tests.
The concept of “animal uplift” is most famously depicted in the Planet of the Apes movie series, whose planet-conquering protagonists are likely to put most people off the idea. But proponents are less pessimistic about the outcomes.
Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.
Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans.
Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.
The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each species combines a bundle of cognitive capabilities, some well below our own and others superhuman. He uses the example of the squirrel, which can remember the precise locations of thousands of acorns for years.
Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.
There are also fundamental barriers that may make it difficult to achieve human-level cognitive capabilities in animals, no matter how advanced brain augmentation technology gets. In 2013 Swedish researchers selectively bred small fish called guppies for bigger brains. This made them smarter, but growing the energy-intensive organ meant the guppies developed smaller guts and produced fewer offspring to compensate.
This highlights the fact that uplifting animals may require more than just changes to their brains, possibly a complete rewiring of their physiology that could prove far more technically challenging than human brain augmentation.
Our intelligence is intimately tied to our evolutionary history—our brains are bigger than other animals’; opposable thumbs allow us to use tools; our vocal cords make complex communication possible. No matter how much you augment a cow’s brain, it still couldn’t use a screwdriver or talk to you in English because it simply doesn’t have the machinery.
Finally, from a purely selfish point of view, even if it does become possible to create a level playing field between us and other animals, it may not be a smart move for humanity. There’s no reason to assume animals would be any more benevolent than we are, having evolved in the same ‘survival of the fittest’ crucible that we have. And given our already endless capacity to divide ourselves along national, religious, or ethnic lines, conflict between species seems inevitable.
We’re already likely to face considerable competition from smart machines in the coming decades if you believe the hype around AI. So maybe adding a few more intelligent species to the mix isn’t the best idea.
Image Credit: Ron Meijer / Shutterstock.com

Posted in Human Robots

#431599 8 Ways AI Will Transform Our Cities by ...

How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.
The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, technical fellow and a managing director at Microsoft Research.
Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, composed of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame its report around the impact AI will have on the average American city. Here’s how the panel thinks AI will affect eight key domains of city life over the next fifteen years.
1. Transportation
The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.
Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition towards similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live farther from work, which could combine with a reduced need for parking to drastically change the face of modern cities.
Mountains of data from increasing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.
Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public’s first experience with physically embodied AI systems and will strongly influence public perception.
2. Home and Service Robots
Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots’ on-board computing capacity.
Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft’s Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.
But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.
3. Healthcare
AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.
If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.
At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.
Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.
4. Education
The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.
AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.
Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.
5. Low-Resource Communities
In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.
Investment in these areas is lagging, though, so how quickly these capabilities will appear is uncertain. There are also fears that machine learning, often assumed to be value-neutral, could inadvertently discriminate by picking up on factors that correlate with race or gender, such as zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.
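The proxy-discrimination worry is easy to demonstrate with a toy example. The data and decision rule below are entirely made up for illustration: a model that never sees the protected attribute (“group”) can still reproduce a historical disparity by learning from a correlated proxy (here, zip code).

```python
def train_by_zip(history):
    """'Learn' an approval rule from historical decisions, using only zip code."""
    stats = {}
    for zip_code, _group, approved in history:
        ok, total = stats.get(zip_code, (0, 0))
        stats[zip_code] = (ok + approved, total + 1)
    # Approve applicants from zips where most past applicants were approved.
    return {z: ok / total >= 0.5 for z, (ok, total) in stats.items()}

def approval_rate(applicants, rule, group):
    """Fraction of a group's members the learned rule would approve."""
    members = [z for z, g, _ in applicants if g == group]
    return sum(rule[z] for z in members) / len(members)

# Hypothetical biased history: group membership correlates with zip code,
# and past approvals favored zip "10001".
history = [
    ("10001", "A", 1), ("10001", "A", 1), ("10001", "B", 1),
    ("20002", "B", 0), ("20002", "B", 0), ("20002", "A", 0),
]
rule = train_by_zip(history)

# The rule uses zip code only, yet group A fares better than group B.
rate_a = approval_rate(history, rule, "A")  # 2/3 approved
rate_b = approval_rate(history, rule, "B")  # 1/3 approved
```

On the other hand, because the learned rule is an inspectable object, this kind of disparity can be measured and audited directly, which is the sense in which algorithms may be easier to hold accountable than humans.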
6. Public Safety and Security
By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.
Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.
7. Employment and Workplace
The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.
AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it’s hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.
These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.
8. Entertainment
Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics, and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and those systems will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.
Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.
But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. This democratization of high-quality entertainment production, combined with the fluidity of human tastes, makes it nearly impossible to predict how entertainment will develop.
Image Credit: Asgord / Shutterstock.com

Posted in Human Robots