
#432891 This Week’s Awesome Stories From ...

TRANSPORTATION
Elon Musk Presents His Tunnel Vision to the People of LA
Jack Stewart and Aarian Marshall | Wired
“Now, Musk wants to build this new, 2.1-mile tunnel, near LA’s Sepulveda pass. It’s all part of his broader vision of a sprawling network that could take riders from Sherman Oaks in the north to Long Beach Airport in the south, Santa Monica in the west to Dodger Stadium in the east—without all that troublesome traffic.”

ROBOTICS
Feel What This Robot Feels Through Tactile Expressions
Evan Ackerman | IEEE Spectrum
“Guy Hoffman’s Human-Robot Collaboration & Companionship (HRC2) Lab at Cornell University is working on a new robot that’s designed to investigate this concept of textural communication, which really hasn’t been explored in robotics all that much. The robot uses a pneumatically powered elastomer skin that can be dynamically textured with either goosebumps or spikes, which should help it communicate more effectively, especially if what it’s trying to communicate is, ‘Don’t touch me!’”

VIRTUAL REALITY
In Virtual Reality, How Much Body Do You Need?
Steph Yin | The New York Times
“In a paper published Tuesday in Scientific Reports, they showed that animating virtual hands and feet alone is enough to make people feel their sense of body drift toward an invisible avatar. Their work fits into a corpus of research on illusory body ownership, which has challenged understandings of perception and contributed to therapies like treating pain for amputees who experience phantom limb.”

MEDICINE
How Graphene and Gold Could Help Us Test Drugs and Monitor Cancer
Angela Chen | The Verge
“In today’s study, scientists learned to precisely control the amount of electricity graphene generates by changing how much light they shine on the material. When they grew heart cells on the graphene, they could manipulate the cells too, says study co-author Alex Savtchenko, a physicist at the University of California, San Diego. They could make it beat 1.5 times faster, three times faster, 10 times faster, or whatever they needed.”

DISASTER RELIEF
Robotic Noses Could Be the Future of Disaster Rescue—If They Can Outsniff Search Dogs
Eleanor Cummins | Popular Science
“While canine units are a tried and fairly true method for identifying people trapped in the wreckage of a disaster, analytical chemists have for years been working in the lab to create a robotic alternative. A synthetic sniffer, they argue, could potentially prove to be just as or even more reliable than a dog, more resilient in the face of external pressures like heat and humidity, and infinitely more portable.”

Image Credit: Sergey Nivens / Shutterstock.com

Posted in Human Robots

#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots, which were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots allowed simple spoken commands like “Yes” or “No,” replacing the buttons with a limited ability to recognize a few words. Using them was often a far more frustrating experience than attempting to use a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”

Google Duplex scheduling a hair salon appointment:

Google Duplex calling a restaurant:

Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. Soundhound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
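The combinatorial blow-up described above is easy to quantify with a back-of-envelope sketch. The branching factor and conversation depth below are illustrative assumptions, not measured values:

```python
# Rough estimate of how quickly a scripted dialogue tree explodes.
# Both constants are illustrative assumptions for this sketch.
UTTERANCES_PER_TURN = 1_000  # distinct things a caller might plausibly say
TURNS = 3                    # conversational depth the bot must handle

paths = UTTERANCES_PER_TURN ** TURNS
print(f"{paths:,} possible conversation paths")  # 1,000,000,000
```

At a thousand plausible utterances per turn, a bot that scripts responses just three layers deep already faces a billion distinct paths, which is why hand-built chatbots collapse into menus.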

Yet Google, which surely remembers from Glass the risk of debuting technology prematurely, especially the kind that asks you to rethink how you interact with or trust in software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and their assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com


#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of lifelike humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30+ motors suggests that there are millions of possible combinations; researchers identify 62 that they consider “human-like” in Sophia, although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the more simplistic chat engine the robots use, although it’s the second one that allows it to inflate the punters’ expectations with a few pre-programmed questions in an interview.
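The “millions” figure is, if anything, conservative. Treating each of the 30 face motors as a simple on/off switch (a deliberate simplification for this sketch; the real actuators are continuous) already yields over a billion raw states, which is why winnowing the space down to a few dozen recognizably human expressions is the hard part:

```python
# Raw state count if each of ~30 face motors were a binary switch.
# This is a simplification for illustration; the real motors are
# continuous, so the true configuration space is far larger still.
MOTORS = 30
raw_states = 2 ** MOTORS
print(f"{raw_states:,} on/off combinations")  # 1,073,741,824

HUMAN_LIKE = 62  # expressions researchers classified as human-like
print(f"fraction deemed human-like: {HUMAN_LIKE / raw_states:.2e}")
```

Even under the coarsest possible discretization, the human-like expressions are a vanishingly small island in the configuration space.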

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of concern and revulsion people react with when things appear to be creepily human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human. This is true even if they’re designed to interact with us. Consider the popularity of Aibo, Jibo, or even some robots that don’t try to resemble humans. Something that resembles a human, but isn’t quite right, is bound to evoke a fear response in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he monitor and log the physical movements people made on videotapes, but some of his robots are based on replicas of people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Future experiments were focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge to face. That is, if you want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com


#432021 Unleashing Some of the Most Ambitious ...

At Singularity University, we are unleashing a generation of women who are smashing through barriers and starting some of the most ambitious technology companies on the planet.

Singularity University was founded in 2008 to empower leaders to use exponential technologies to solve our world’s biggest challenges. Our flagship program, the Global Solutions Program, has historically brought 80 entrepreneurs from around the world to Silicon Valley for 10 weeks to learn about exponential technologies and create moonshot startups that improve the lives of a billion people within a decade.

After nearly 10 years of running this program, we can say that about 70 percent of our successful startups have been founded or co-founded by female entrepreneurs (see below for inspiring examples of their work). This is in sharp contrast to the typical 10–20 percent of venture-backed tech companies that have a female founder, as reported by TechCrunch.

How are we so dramatically changing the game? While 100 percent of the credit goes to these courageous women, as both an alumna of the Global Solutions Program and our current vice chair of Global Grand Challenges, I want to share my reflections on what has worked.

At the most basic level, it is essential to deeply believe in the inherent worth, intellectual genius, and profound entrepreneurial caliber of women. While this may seem obvious, this is not the way our world currently thinks—we live in a world that sees women’s ideas, contributions, work, and existence as inherently less valuable than men’s.

For example, a 2017 Harvard Business Review article noted that even when women engage in the same behaviors and work as men, their work is considered less valuable simply because a woman did the job. An additional 2017 Harvard Business Review article showed that venture capitalists are significantly less likely to invest in female entrepreneurs and are more likely to ask men questions about the potential success of their companies while grilling women about the potential downfalls of their companies.

This doubt and lack of recognition of the genius and caliber of women is also why women are still paid less than men for completing identical work. Further, it’s why women’s work often gets buried in “number two” roles supporting men in leadership positions, and why women are expected to take on second shifts at home managing tedious household chores in addition to their careers. I would also argue that these views, as well as the rampant sexual harassment, assault, and violence against women that exist today, stem from stubborn, historical, patriarchal views of women as living for the benefit of men rather than for their own sovereignty and inherent value.

As with any other business, Singularity University has not been immune to these biases, but it is resolutely focused on helping women realize their intellectual genius and global entrepreneurial caliber by harnessing powerful exponential technologies.

We create an environment where women can physically and intellectually thrive free of harassment to reach their full potential, and we are building a broader ecosystem of alumni and partners around the world who not only support our female entrepreneurs throughout their entrepreneurial journeys, but who are also sparking and leading systemic change in their own countries and communities.

Respecting the Intellectual Genius and Entrepreneurial Caliber of Women
The entrepreneurial legends of our time—Steve Jobs, Elon Musk, Mark Zuckerberg, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin—are men who have all built their empires using exponential technologies. Exponential technologies helped these men succeed faster and with greater impact thanks to Moore’s Law and the Law of Accelerating Returns, which holds that any digital technology (such as computing, software, artificial intelligence, robotics, quantum computing, biotechnology, or nanotechnology) becomes more sophisticated while dramatically falling in price, enabling rapid scaling.
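The arithmetic behind that scaling logic can be sketched in a few lines. The 18-month doubling period below is an illustrative assumption (Moore’s Law is commonly quoted at 18 to 24 months), not a property of any particular technology:

```python
# Illustrative sketch of exponential price-performance growth.
DOUBLING_PERIOD_YEARS = 1.5  # assumed doubling period (illustrative)

def improvement_factor(years: float) -> float:
    """Price-performance multiple after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 6, 9):
    print(f"after {years} years: {improvement_factor(years):.0f}x")
# after 3 years: 4x
# after 6 years: 16x
# after 9 years: 64x
```

A capability that doubles every 18 months is 64 times better after nine years, which is the arithmetic behind plotting a product roadmap to ride the curve.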

Knowing this, an entrepreneur can plot her way to an ambitious global solution over time, releasing new applications just as the technology and market are ready. Furthermore, these rapidly advancing technologies often converge, creating new tools and opportunities for innovators to come up with novel solutions to challenges that were previously impossible to solve.

For various reasons, women have not pursued exponential technologies as aggressively as men (or were prevented or discouraged from doing so).

While women in wealthy countries like the United States are founding firms at a higher rate than ever, the majority are small businesses in linear industries that have been around for hundreds of years, such as social assistance, health, education, administrative, or consulting services. In lower-income countries, international aid agencies and nonprofits often encourage women to pursue careers in traditional handicrafts, micro-enterprise, and micro-finance. While these jobs have historically helped women escape poverty and gain financial independence, they have done little to help women realize the enormous power, influence, wealth, and ability to transform the world for the better that comes from building companies, nonprofits, and solutions grounded in exponential technologies.

We need women to be working with exponential technologies today in order to be powerful leaders in the future.

Participants who enroll in our Global Solutions Program spend the first few weeks of the program learning about exponential technologies from the world’s experts and the final weeks launching new companies or nonprofits in their area of interest. We require that women (as well as men) utilize exponential technologies as a condition of the program.

In this sense, at Singularity University women start their endeavors with all of us believing and behaving in a way that assumes they can achieve global impact at the level of our world’s most legendary entrepreneurs.

Creating an Environment Where Women Can Thrive
While challenging women to embrace exponential technologies is essential, it is also important to create an environment where women can thrive. In particular, this means ensuring women feel at home on our campus by ensuring gender diversity, aggressively addressing sexual harassment, and flipping the traditional culture from one that penalizes women, to one that values and supports them.

While women initially made up only a small minority of our Global Solutions Program, in 2014 we achieved around 50 percent female attendance—a statistic that has held ever since.

This is not due to a quota—every year we turn away extremely qualified women from our program (and we are working on reformulating the program to allow more people to participate in the future). While part of our recruiting success is due to the efforts of our marketing team, we also benefited from the efforts of some of our early female founders, staff, faculty, and alumnae, including Susan Fonseca, Emeline Paat-Dahlstrom, Kathryn Myronuk, Lajuanda Asemota, Chiara Giovenzana, and Barbara Silva Tronseca.

As early champions of Singularity University, these women not only launched diversity initiatives and personally reached out to women, but were also crucial role models holding leadership roles in our community. In addition, Fonseca and Silva both created multiple organizations and initiatives outside of (or in conjunction with) the university that produced additional pipelines of female candidates. In particular, Fonseca founded Women@TheFrontier as well as other organizations focusing on women, technology, and innovation, and Silva founded BestInnovation (a women’s accelerator in Latin America), led Singularity University’s Chilean chapter, and founded the first SingularityU Summit in Latin America.

These women’s efforts in globally scaling Singularity University have been critical in ensuring that women around the world now see Singularity University as a place where they can lead and shape the future.

Also, thanks to Google (Alphabet) and many of our alumni and partners, we were able to provide full scholarships to any woman (or man) to attend our program regardless of their economic status. Google committed significant funding for full scholarships while our partners around the world also hosted numerous Global Impact Competitions, where entrepreneurs pitched their solutions to their local communities with the winners earning a full scholarship funded by our partners to attend the Global Solution Program as their prize.

Google and our partners’ support helped individuals attend our program and created a wider buzz around exponential technology and social change in local communities around the world. It led to the founding of 110 SU chapters in 55 countries.

Another vital aspect of our work in supporting women has been trying to create a harassment-free environment. Across Silicon Valley, more than 60 percent of women report that while they are trying to build their companies or get their work done, they are also dealing with physical and sexual harassment, as well as being demeaned and excluded in other ways in the workplace. We have taken action to educate and train our staff on how to deal with situations should they occur. All staff receive training on harassment when they join Singularity University, and all Global Solutions Program participants attend mandatory trainings on sexual harassment when they first arrive on campus. We also have male and female wellness counselors available who can offer support to both individuals and teams of entrepreneurs throughout the entire program.

While at a minimum our campus must be physically safe for women, we also strive to create a culture that values women and supports them in the additional challenges and expectations they face. For example, one of our 2016 female participants, Van Duesterberg, was pregnant during the program and said that instead of having people doubt her commitment to her startup or make her prove she could handle having a child and running a startup at the same time, people went out of their way to help her.

“I was the epitome of a person not supposed to be doing a startup,” she said. “I was pregnant and would need to take care of my child. But Singularity University was supportive and encouraging. They made me feel super-included and that it was possible to do both. I continue to come back to campus even though the program is over because the network welcomes me and supports me rather than shuts me out because of my physical limitations. Rather than making me feel I had to prove myself, everyone just understood me and supported me, whether it was bringing me healthy food or recommending funders.”

Another strength that we have in supporting women is that after the Global Solutions Program, entrepreneurs have access to a much larger ecosystem.

Many entrepreneurs take part in SU Ventures, which can provide further support to startups as they develop, and we now have a larger community of over 200,000 people in almost every country. These members have often attended other Singularity University programs and events and are committed to our vision of the future. These women and men include business executives, Fortune 500 companies, investors, nonprofit and government leaders, technologists, members of the media, and other movers and shakers. They have made introductions for our founders, collaborated with them on business ventures, invested in them, and showcased their work at high-profile events around the world.

Building for the Future
While our Global Solutions Program is making great strides in supporting female entrepreneurs, there is always more work to do. We are now focused on achieving the same degree of female participation across all of our programs and actively working to recruit and feature more female faculty and speakers on stage. As our community grows and scales around the world, we are also thinking carefully about how best to uphold our values and policies around sexual harassment across diverse locations and cultures. And like all businesses everywhere, we are focused on recruiting more women to serve at senior leadership levels within SU. As we make our way forward, we hope that you will join us in boldly leading this change and recognizing the genius and power of female entrepreneurs.

Meet Some of Our Female Moonshots
While we have many remarkable female entrepreneurs in the Singularity University community, the list below features a few of the women who have founded or co-founded companies at the Global Solutions Program that have launched new industries and are on their way to changing the way our world works for millions if not billions of people.

Jessica Scorpio co-founded Getaround in 2009. Getaround was one of the first car-sharing platforms, allowing anyone to rent out their car using a smartphone app. Getaround was a revolutionary idea in 2009, not only because smartphones and apps were still in their infancy, but because it was unthinkable that a technology startup could disrupt the major entrenched car, transport, and logistics companies. Scorpio’s early insights and pioneering entrepreneurial work brought to life new ways that humans relate to car sharing and the future self-driving car industry. Scorpio and Getaround have won numerous awards, and Getaround now serves over 200,000 members.

Paola Santana co-founded Matternet in 2011, which pioneered the commercial drone transport industry. In 2011, only the military, hobbyists, and the film industry used drones. Matternet demonstrated that drones could be used for commercial transport in short point-to-point deliveries of high-value goods, laying the groundwork for drone transport around the world as well as some of the early thinking behind the future flying car industry. Santana was also instrumental in shaping regulations for the use of commercial drones around the world, making the industry possible.

Sara Naseri co-founded Qurasense in 2014, a life sciences startup that analyzes women’s health through menstrual blood, allowing women to track their health every month. Naseri is shifting our understanding of menstrual blood from a waste product and something “not to be talked about” to a rich, non-invasive, abundant source of information about women’s health.

Abi Ramanan co-founded ImpactVision in 2015, a software company that rapidly analyzes the quality and characteristics of food through hyperspectral images. Her long-term vision is to digitize food supply chains to reduce waste and fraud, given that one-third of all food is currently wasted before it reaches our plates. Ramanan is also helping the world understand that hyperspectral technology can be used in many industries to help us “see the unseen” and augment our ability to sense and understand what is happening around us in a much more sophisticated way.

Anita Schjøll Brede and Maria Ritola co-founded Iris AI in 2015, an artificial intelligence company that is building an AI research assistant that drastically improves the efficiency of R&D and breaks down silos between different industries. Their long-term vision is for Iris AI to become smart enough that she will become a scientist herself. Fast Company named Iris AI one of the 10 most innovative artificial intelligence companies for 2017.

Hla Hla Win co-founded 360ed in 2016, a startup that conducts teacher training and student education through virtual reality and augmented reality in Myanmar. They have already connected teachers from 128 private schools in Myanmar with schools teaching 21st-century skills in Silicon Valley and around the world. Their moonshot is to build a platform where any teacher in the world can share best practices in teacher training. As they succeed, millions of children in some of the poorest parts of the world will have access to a 21st-century education.

Min FitzGerald and Van Duesterberg co-founded Nutrigene in 2017, a startup that ships freshly formulated, tailor-made supplement elixirs directly to consumers. Their long-term vision is to help people optimize their health using actionable data insights, so people can take a guided, tailored approach to thriving into longevity.

Anna Skaya co-founded Basepaws in 2016, which created the first genetic test for cats and is building a community of citizen scientist pet owners. They are creating personalized pet products such as supplements, therapeutics, treats, and toys while also developing a database of genetic data for future research that will help both humans and pets over the long term.

Olivia Ramos co-founded Deep Blocks in 2016, a startup using artificial intelligence to integrate and streamline the processes of architecture, pre-construction, and real estate. As digital technologies, artificial intelligence, and robotics advance, it no longer makes sense for these industries to exist separately. Ramos recognized the tremendous value and efficiency that exponential technologies can now unlock by integrating these industries.

Please also visit our website to learn more about other female entrepreneurs, staff and faculty who are pioneering the future through exponential technologies.

Posted in Human Robots

#431958 The Next Generation of Cameras Might See ...

You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.

The latest camera research is shifting away from increasing the number of megapixels and toward fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing, where effects and filters are added to a picture, but rather a radically new approach where the incoming data may not look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and models of how light travels through the scene or the camera.

This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead, we will use light detectors that only a few years ago we would never have considered useful for imaging. And they will be able to do incredible things, like see through fog, inside the human body, and even behind walls.

Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.

To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
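The raster-scan version of this idea is simple enough to simulate in a few lines. The sketch below (in Python with NumPy; the function names are our own, purely illustrative) illuminates a tiny scene one spot at a time, records the total reflected light on a single “bucket” detector, and reassembles the measurements into an image. Real single-pixel cameras typically project structured patterns (such as Hadamard masks) and use compressive sensing to cut down the number of measurements, but the principle is the same.

```python
import numpy as np

def single_pixel_capture(scene):
    """Raster-scan illumination: one measurement per illuminated spot.

    For each spot, the projector lights a single point of the scene and
    the bucket detector records the total reflected light.
    """
    h, w = scene.shape
    measurements = []
    for i in range(h):
        for j in range(w):
            pattern = np.zeros((h, w))
            pattern[i, j] = 1.0  # illuminate one spot
            # The single-pixel detector only sees total intensity.
            measurements.append(np.sum(scene * pattern))
    return np.array(measurements)

def reconstruct(measurements, shape):
    """With raster-scan patterns, each measurement maps to one pixel,
    so reshaping the measurement vector recovers the image."""
    return measurements.reshape(shape)

# A tiny 2x2 "scene" of reflectance values.
scene = np.array([[0.0, 0.5],
                  [1.0, 0.25]])
image = reconstruct(single_pixel_capture(scene), scene.shape)
assert np.allclose(image, scene)
```

Note the cost the article describes: a regular camera needs one snapshot, while this scheme needs one measurement per pixel (or per pattern), which is exactly why structured patterns and compressive sensing matter in practice.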

Clearly the disadvantage of taking a photo this way is that you have to send out lots of illumination spots or patterns to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.

It is even possible to capture images from light particles that have never even interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?

Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and, on the face of it, still relies on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information about a scene of which traditional techniques collect only a small part.

This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.

The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.

Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.

All of these techniques rely on combining images with models that explain how light travels through or around different substances.

Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.

Single photon and quantum imaging technologies are also maturing to the point that they can take pictures with incredibly low light levels and videos with incredibly fast speeds reaching a trillion frames per second. This is enough to even capture images of light itself traveling across a scene.

Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.

This article was originally published on The Conversation. Read the original article.

Image Credit: Sylvia Adams / Shutterstock.com

Posted in Human Robots