Tag Archives: lights

#431559 Drug Discovery AI to Scour a Universe of ...

On a dark night, away from city lights, the stars of the Milky Way can seem uncountable. Yet from any given location no more than 4,500 are visible to the naked eye. Meanwhile, our galaxy has 100–400 billion stars, and there are even more galaxies in the universe.
The numbers of the night sky are humbling. And they give us a deep perspective…on drugs.
Yes, this includes wow-the-stars-are-freaking-amazing-tonight drugs, but also the kinds of drugs that make us well again when we’re sick. The number of possible organic compounds with “drug-like” properties dwarfs the number of stars in the universe by over 30 orders of magnitude.
Next to this multiverse of possibility, the chemical configurations scientists have made into actual medicines are like the smattering of stars you’d glimpse downtown.
But for good reason.
Exploring all that potential drug-space is as humanly impossible as exploring all of physical space, and even if we could, most of what we’d find wouldn’t fit our purposes. Still, the idea that wonder drugs must surely lurk amid the multitudes is too tantalizing to ignore.
Which is why, Alex Zhavoronkov said at Singularity University’s Exponential Medicine in San Diego last week, we should use artificial intelligence to do more of the legwork and speed discovery. This, he said, could be one of the next big medical applications for AI.
Dogs, Diagnosis, and Drugs
Zhavoronkov is CEO of Insilico Medicine and CSO of the Biogerontology Research Foundation. Insilico is one of a number of AI startups aiming to accelerate drug discovery with AI.
In recent years, Zhavoronkov said, the now-famous machine learning technique, deep learning, has made progress on a number of fronts. Algorithms that can teach themselves to play games—like DeepMind’s AlphaGo Zero or Carnegie Mellon’s poker playing AI—are perhaps the most headline-grabbing of the bunch. But pattern recognition was the thing that kicked deep learning into overdrive early on, when machine learning algorithms went from struggling to tell dogs and cats apart to outperforming their peers and then their makers in quick succession.
[Watch this video for an AI update from Neil Jacobstein, chair of Artificial Intelligence and Robotics at Singularity University.]

In medicine, deep learning algorithms trained on databases of medical images can spot life-threatening disease with accuracy equal to or greater than that of human professionals. There’s even speculation that AI, if we learn to trust it, could be invaluable in diagnosing disease. And, as Zhavoronkov noted, with more applications and a longer track record, that trust is coming.
“Tesla is already putting cars on the street,” Zhavoronkov said. “Three-year, four-year-old technology is already carrying passengers from point A to point B, at 100 miles an hour, and one mistake and you’re dead. But people are trusting their lives to this technology.”
“So, why don’t we do it in pharma?”
Trial and Error and Try Again
AI wouldn’t drive the car in pharmaceutical research. It’d be an assistant that, when paired with a chemist or two, could fast-track discovery by screening more possibilities for better candidates.
There’s plenty of room to make things more efficient, according to Zhavoronkov.
Drug discovery is arduous and expensive. Chemists sift tens of thousands of candidate compounds for the most promising to synthesize. Of these, a handful will go on to further research, fewer will make it to human clinical trials, and a fraction of those will be approved.
The whole process can take many years and cost hundreds of millions of dollars.
This is a big data problem if ever there was one, and deep learning thrives on big data. Early applications have shown their worth unearthing subtle patterns in huge training databases. Although drug-makers already use software to sift compounds, such software requires explicit rules written by chemists. AI’s allure is its ability to learn and improve on its own.
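To make that concrete, the “explicit rules” chemists write are filters like Lipinski’s rule of five, a classic hand-coded drug-likeness screen. Here is a minimal sketch of that rule in Python (the aspirin numbers are approximate, for illustration):

```python
def passes_lipinski(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's 'rule of five': a hand-written drug-likeness filter.

    Flags a compound as orally drug-like if it breaks at most one of
    these chemist-defined thresholds.
    """
    violations = sum([
        mol_weight > 500,    # molecular weight over 500 daltons
        logp > 5,            # octanol-water partition coefficient over 5
        h_donors > 5,        # more than 5 hydrogen-bond donors
        h_acceptors > 10,    # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Aspirin (MW ~180, logP ~1.2, 1 donor, ~4 acceptors) passes comfortably.
print(passes_lipinski(180.2, 1.2, 1, 4))  # True
```

A learned model, by contrast, would infer its own version of such thresholds, and subtler patterns besides, directly from data.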
“There are two strategies for AI-driven innovation in pharma to ensure you get better molecules and much faster approvals,” Zhavoronkov said. “One is looking for the needle in the haystack, and another one is creating a new needle.”
To find the needle in the haystack, algorithms are trained on large databases of molecules. Then they go looking for molecules with attractive properties. But creating a new needle? That’s a possibility enabled by the generative adversarial networks Zhavoronkov specializes in.
Such algorithms pit two neural networks against each other. One generates meaningful output while the other judges whether this output is true or false, Zhavoronkov said. Together, the networks generate new objects like text, images, or in this case, molecular structures.
“We started employing this particular technology to make deep neural networks imagine new molecules, to make it perfect right from the start. So, to come up with really perfect needles,” Zhavoronkov said. “[You] can essentially go to this [generative adversarial network] and ask it to create molecules that inhibit protein X at concentration Y, with the highest viability, specific characteristics, and minimal side effects.”
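For readers unfamiliar with the setup Zhavoronkov describes, here is a toy sketch of a generative adversarial network in PyTorch: a generator learns to mimic a target distribution while a discriminator judges real from fake. This shows only the bare adversarial pattern on 1-D data, not Insilico’s molecular model.

```python
import torch
import torch.nn as nn

# Generator maps random noise to a 1-D sample; discriminator scores samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generated samples

    # Discriminator ("the judge"): push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # drifts toward ~3.0 as G learns
```

In Insilico’s setting, the “real” samples would be known molecules encoded as structures or fingerprints, and the generator’s output would be candidate structures conditioned on the desired properties Zhavoronkov lists.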
Zhavoronkov believes AI can find or fabricate more needles from the array of molecular possibilities, freeing human chemists to focus on synthesizing only the most promising. If it works, he hopes we can increase hits, minimize misses, and generally speed the process up.
Proof’s in the Pudding
Insilico isn’t alone in its drug-discovery quest, nor is the field a brand-new area of interest.
Last year, a Harvard group published a paper on an AI that similarly suggests drug candidates. The software trained on 250,000 drug-like molecules and used its experience to generate new molecules that blended existing drugs and made suggestions based on desired properties.
An MIT Technology Review article on the subject highlighted a few of the challenges such systems may still face. The results returned aren’t always meaningful or easy to synthesize in the lab, and the quality of those results, as always, is only as good as the data the system is trained on.
Stanford chemistry professor and Andreessen Horowitz partner Vijay Pande said that images, speech, and text—three of the areas where deep learning has made quick strides—have better, cleaner data. Chemical data, on the other hand, is still being optimized for deep learning. Also, while there are public databases, much data still lives behind closed doors at private companies.
To overcome the challenges and prove their worth, Zhavoronkov said, his company is very focused on validating the tech. But this year, skepticism in the pharmaceutical industry seems to be easing into interest and investment.
AI drug discovery startup Exscientia inked a deal with Sanofi for $280 million and GlaxoSmithKline for $42 million. Insilico is also partnering with GlaxoSmithKline, and Numerate is working with Takeda Pharmaceutical. Even Google may jump in. According to an article in Nature outlining the field, the firm’s deep learning project, Google Brain, is growing its biosciences team, and industry watchers wouldn’t be surprised to see them target drug discovery.
With AI and the hardware running it advancing rapidly, the greatest potential may yet be ahead. Perhaps, one day, all 10^60 molecules in drug-space will be at our disposal. “You should take all the data you have, build n new models, and search as much of that 10^60 as possible” before every decision you make, Brandon Allgood, CTO at Numerate, told Nature.
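For a sense of scale, here is a quick back-of-the-envelope comparison of that 10^60 figure with the star count from this article’s opening, assuming a rough upper-end estimate of 10^24 stars in the observable universe:

```python
import math

drug_like_space = 10**60      # commonly cited estimate quoted above
stars_in_universe = 10**24    # rough upper-end estimate

gap = math.log10(drug_like_space) - math.log10(stars_in_universe)
print(f"Drug-like space outnumbers the stars by ~{gap:.0f} orders of magnitude")
# ~36 orders of magnitude, consistent with "over 30" in the opening
```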
Today’s projects need to live up to their promises, of course, but Zhavoronkov believes AI will have a big impact in the coming years, and now’s the time to integrate it. “If you are working for a pharma company, and you’re still thinking, ‘Okay, where is the proof?’ Once there is a proof, and once you can see it to believe it—it’s going to be too late,” he said.
Image Credit: Klavdiya Krinichnaya / Shutterstock.com


#431315 Better Than Smart Speakers? Japan Is ...

While American internet giants are developing speakers, Japanese companies are working on robots and holograms. They all share a common goal: to create the future platform for the Internet of Things (IoT) and smart homes.
Names like Bocco, EMIEW3, Xperia Agent, and Gatebox may not ring a bell for most people outside Japan, but Sony, Hitachi, Sharp, and Softbank most certainly do. Those companies, along with Japanese startups, have developed the robots, robot concepts, and even holograms hiding behind that short list of names.
While there are distinct differences between the various systems, they share the potential to act as a remote control for IoT devices and smart homes. It is a very different direction from the one taken by companies like Google, Amazon, and Apple, which have so far focused on building IoT speaker systems.
Bocco robot. Image Credit: Yukai Engineering
“Technology companies are pursuing the platform—or smartphone if you will—for IoT. My impression is that Japanese companies—and Japanese consumers—prefer that such a platform should not just be an object, but a companion,” says Kosuke Tatsumi, designer at Yukai Engineering, a startup that has developed the Bocco robot system.
At Hitachi, a spokesperson said that the company’s human symbiotic service robot, EMIEW3, is currently in the field, doing proof-of-value tests at customer sites to investigate needs and potential solutions. This could include working as an interactive control system for the Internet of Things:
“EMIEW3 is able to communicate with humans, thus receive instructions, and as it is connected to a robotics IT platform, it is very much capable of interacting with IoT-based systems,” the spokesperson said.
The power of speech is getting feet
Gartner analysis predicts that there will be 8.4 billion internet-connected devices—collectively making up the Internet of Things—by the end of 2017, with 5.2 billion of those devices in the consumer category. By the end of 2020, consumer IoT devices alone will number 12.8 billion.
As a child of the 80s, I can vividly remember how fun it was to have separate remote controls for TV, video, and stereo. I can imagine a similar situation where my internet-connected refrigerator, thermostat, television, and toaster all try to work out who I’m talking to and what I want them to do.
Consensus seems to be that speech will be the way to interact with many, if not most, IoT devices, and that some form of virtual assistant will function as the IoT platform—or remote control. Almost everything else is still up in the air, despite an early surge for speaker-based systems like those from Amazon, Google, and Apple.
Why robots could rule
Famous android creator and robot scientist Dr. Hiroshi Ishiguro sees the interaction between humans and the AI embedded in speakers or robots as central to both approaches. From there, the approaches differ greatly.
Image Credit: Hiroshi Ishiguro Laboratories
“It is about more than the difference of form. Speaking to an Amazon Echo is not a natural kind of interaction for humans. That is part of what we in Japan are creating in many human-like robot systems,” he says. “The human brain is constructed to recognize and interact with humans. This is part of why it makes sense to focus on developing the body for the AI mind as well as the AI mind itself. In a way, you can describe it as the difference between developing an assistant, which could be said to be what many American companies are currently doing, and a companion, which is more the focus here in Japan.”
Another advantage is that robots are more kawaii—a multifaceted Japanese word that can be translated as “cute”—than speakers are. This makes it easy for people to relate to them and forgive them.
“People are more willing to forgive children when they make mistakes, and the same is true with a robot like Bocco, which is designed to look kawaii and childlike,” Kosuke Tatsumi explains.
Japanese robots and holograms with IoT-control capabilities
So, what exactly do these robot and hologram companions look like, what can they do, and who’s making them? Here are seven examples of Japanese companies working to go a step beyond smart speakers with personable robots and holograms.
1. In 2016 Sony’s mobile division demonstrated the Xperia Agent concept robot that recognizes individual users, is voice controlled, and can do things like control your television and receive calls from services like Skype.

2. Sharp launched its Home Assistant at CES 2016, a robot-like, voice-controlled assistant that can control, among other things, air conditioning units and televisions. Sharp has also launched a robotic phone called RoBoHon.
3. Gatebox has created a holographic virtual assistant. Cynics will say it is primarily the expression of an otaku (Japanese for nerd) dream of living with a manga heroine. Gatebox is, however, able to control things like lights, TVs, and other systems through API integration (see the sketch after this list for what that kind of voice-to-device dispatch can look like). It also provides its owner with weather-related advice like “remember your umbrella, it looks like it will rain later.” Gatebox can be controlled by voice, gesture, or via an app.
4. Hitachi’s EMIEW3 robot is designed to assist people in businesses and public spaces. It is connected via the cloud to a robotics IT platform that acts as a “remote brain.” Hitachi is currently investigating business use cases for EMIEW3, which could include the role of a control platform for IoT devices.

5. Softbank’s Pepper robot has been used by Avatarion as a platform to control medical IoT devices such as smart thermometers. The company has also developed various in-house systems that enable Pepper to control IoT devices like a coffee machine: a user simply asks Pepper to brew a cup of coffee, and it starts the machine for them.
6. Yukai Engineering’s Bocco registers when a person (e.g., a young child) comes home and acts as a communication center between that person and other members of the household (e.g., a parent still at work). The company is working on integrating voice recognition and voice control, and on having Bocco control things like lights and other connected IoT devices.
7. Last year Toyota launched the Kirobo Mini, a companion robot that aims to, among other things, help its owner by suggesting “places to visit, routes for travel, and music to listen to” during the drive.
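As referenced in item 3 above, here is a minimal Python sketch of what “controlling IoT devices through API integration” might look like, covering commands like item 5’s coffee request. The endpoints, payloads, and intent names are invented for illustration; each real product (Gatebox, Pepper, Bocco) uses its own vendor API.

```python
import requests  # third-party HTTP library; assumes devices expose local REST endpoints

# Hypothetical intent-to-device mapping. URLs and payloads are invented
# for illustration; real products each use their own vendor APIs.
DEVICE_API = {
    "lights_on":   ("http://192.168.1.20/api/lights", {"state": "on"}),
    "lights_off":  ("http://192.168.1.20/api/lights", {"state": "off"}),
    "brew_coffee": ("http://192.168.1.30/api/brew",   {"cups": 1}),
}

def handle_intent(intent: str) -> None:
    """Dispatch a recognized voice intent to the matching IoT device."""
    if intent not in DEVICE_API:
        print(f"Unknown intent: {intent}")
        return
    url, payload = DEVICE_API[intent]
    requests.post(url, json=payload, timeout=5)  # fire the device command

handle_intent("brew_coffee")  # e.g., after hearing "Pepper, make me a coffee"
```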

Today, Japan. Tomorrow…?
One of the key questions is whether this emerging phenomenon is a purely Japanese thing, whether the country’s love of robots makes it fundamentally different. Japan is, after all, a country where new units of Softbank’s Pepper robot routinely sell out in minutes and the RoBoHon robot-phone has its own cafe nights in Tokyo.
It is a country where TV introduces you to friendly, helpful robots like Doraemon and Astro Boy. I, on the other hand, first met robots in the shape of Arnold Schwarzenegger’s Terminator and struggled to work out why robots seemed intent on permanently borrowing things like clothes and motorcycles, not to mention why they hated people called Sarah.
However, research suggests that a big part of the reason the Japanese seem to like robots is a combination of exposure and positive experiences, which leads to greater acceptance. As robots spread to more and more industries—and into our homes—our acceptance of them will grow.
The argument is also backed by an Avatarion project that used Softbank’s Nao robot as a classroom representative for children who were in the hospital.
“What we found was that the other children quickly adapted to interacting with the robot and treating it as the physical representation of the child who was in hospital. They accepted it very quickly,” Thierry Perronnet, General Manager of Avatarion, explains.
His company has also developed solutions where Softbank’s Pepper robot is used as an in-home nurse and controls various medical IoT devices.
If robots end up becoming our preferred method for controlling IoT devices, it is by no means certain that said robots will be coming from Japan.
“I think that the goal for both Japanese and American companies—including the likes of Google, Amazon, Microsoft, and Apple—is to create human-like interaction. For this to happen, technology needs to evolve and adapt to us and how we are used to interacting with others, in other words, have a more human form. Humans’ speed of evolution cannot keep up with technology’s, so it must be the technology that changes,” Dr. Ishiguro says.
Image Credit: Sony Mobile Communications


#431165 Intel Jumps Into Brain-Like Computing ...

The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain’s hardware could be the next stage in the evolution of computing.
On Monday the company unveiled an experimental “neuromorphic” chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain’s network of neurons and the connections between them, called synapses.
While neural networks—the in vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs.
The main benefit of mimicking the architecture of the brain on a physical chip, say neuromorphic computing’s proponents, is energy efficiency—the human brain runs on roughly 20 watts. The “neurons” in neuromorphic chips carry out the role of both processor and memory, removing the need to shuttle data back and forth between separate units the way traditional chips do. Each neuron also only needs to be powered while it’s firing.
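A toy software simulation helps make that “only powered while firing” point concrete. The leaky integrate-and-fire model below is the textbook abstraction of a spiking neuron, not Loihi’s actual circuit: the membrane potential leaks, integrates input current, and produces a discrete spike only when it crosses a threshold.

```python
import numpy as np

dt, tau, threshold = 1.0, 20.0, 1.0   # time step, leak constant, spike threshold
v, spike_times = 0.0, []

rng = np.random.default_rng(0)
for t in range(200):
    current = rng.uniform(0.0, 0.12)   # random input current at this step
    v += dt * (-v / tau + current)     # leak toward zero, integrate input
    if v >= threshold:                 # threshold crossed: emit a spike...
        spike_times.append(t)
        v = 0.0                        # ...and reset the membrane potential

print(f"{len(spike_times)} spikes in 200 steps, at t = {spike_times}")
```

In event-driven hardware, energy is spent only on those sparse spikes rather than on clocking every unit at every step.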

At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature’s efficiency could allow AI to be run directly on devices like smartphones, cars, and robots.
This is exactly the kind of application Michael Mayberry, managing director of Intel’s research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving.
There’s still a long way to go before that happens though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won’t be built until November.
Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims the chip will be capable of continuous learning.
Intel’s newly launched self-learning neuromorphic chip.
Normally deep learning works by training a neural network on giant datasets to create a model that can then be applied to new data. Loihi will combine training and inference on the same silicon, allowing it to learn on the fly, constantly updating its models and adapting to changing circumstances without having to be deliberately retrained.
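As a rough software analogy for that on-the-fly learning (Loihi itself uses spike-based plasticity rules, not gradient descent), here is a toy online learner that updates its weights with every new observation and so tracks a mid-stream shift in the data without any batch retraining:

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(2)   # the model: a simple linear predictor

for t in range(1000):
    # The environment changes halfway through: the true relationship shifts.
    true_w = np.array([1.0, -2.0]) if t < 500 else np.array([3.0, 0.5])
    x = rng.normal(size=2)
    y = x @ true_w + rng.normal(scale=0.1)

    err = x @ w - y        # prediction error on this single observation
    w -= 0.05 * err * x    # one-sample update: learn on the fly

print(w)  # ends near [3.0, 0.5]: the model adapted without batch retraining
```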
A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it’s commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems.
So far neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications.
Lee Gomes gives an excellent summary of the hurdles facing neuromorphic computing in IEEE Spectrum. One is that deep learning can run on very simple, low-precision hardware that can be optimized to use very little power, which suggests complicated new architectures may struggle to find purchase.
It’s also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes that Loihi wouldn’t work well with some deep learning models.
Finally, there’s considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market and companies like Google and Microsoft are developing their own in-house solutions.
Intel, for its part, isn’t putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess.
What new approaches will get us there are hard to predict, but it’s entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.
Image Credit: Intel


#431081 How the Intelligent Home of the Future ...

As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of The Future Will Care For You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”

Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it weren’t for processors that could handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. Artificial intelligence and machine learning are powering these devices, and they hinge on computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models to understand the change around them and thereby predict things. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
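As a toy illustration of the sense-model-act loop Arkenberg describes, the sketch below keeps a rolling baseline of a home’s power readings and flags sharp deviations, the sort of waste alert a product like Neurio might raise. This is illustrative only, not Neurio’s actual method:

```python
import numpy as np

def watch_power(readings, window=24, z_threshold=3.0):
    """Flag readings that deviate sharply from the rolling baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean, std = baseline.mean(), baseline.std() + 1e-9
        if abs(readings[i] - mean) / std > z_threshold:
            alerts.append((i, round(float(readings[i]))))
    return alerts

rng = np.random.default_rng(2)
usage = rng.normal(500, 20, size=100)   # hourly power draw in watts
usage[70] = 1500                        # say, a space heater left running
print(watch_power(usage))               # flags hour 70
```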
He sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day, your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it, it would tell you what to eat, how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. We’ll get convenience and functionality if we give up our data, but how far are we willing to go? “Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There are also cost and reliability concerns, interoperability and fragmentation across devices, and, conversely, what Arkenberg called ‘platform lock-on,’ where you’d end up relying on only one provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5


#430874 12 Companies That Are Making the World a ...

The Singularity University Global Summit in San Francisco this week brought brilliant minds together from all over the world to share a passion for using science and technology to solve the world’s most pressing challenges.
Solving these challenges means ensuring basic needs are met for all people. It means improving quality of life and mitigating future risks both to people and the planet.
To recognize organizations doing outstanding work in these fields, SU holds the Global Grand Challenge Awards. Three participating organizations are selected in each of 12 different tracks and featured at the summit’s EXPO. The ones found to have the most potential to positively impact one billion people are selected as the track winners.
Here’s a list of the companies recognized this year, along with some details about the great work they’re doing.
Global Grand Challenge Awards winners at Singularity University’s Global Summit in San Francisco.
Disaster Resilience
LuminAID makes portable lanterns that can provide 24 hours of light on 10 hours of solar charging. The lanterns grew out of a project to assist post-earthquake relief efforts in Haiti, when the product’s creators saw the dangerous nighttime conditions in the tent cities and realized light was a critical need. The lights have since been used in more than 100 countries and in the aftermath of disasters including Hurricane Sandy, Typhoon Haiyan, and the earthquakes in Nepal.

Environment
BreezoMeter uses big data and machine learning to deliver accurate air quality information in real time. Users can see pollution details as localized as a single city block, and the data factors in real-time traffic conditions. Forecasts of air pollution are also available up to four days ahead of time, along with historical data going back several years.
Food
Aspire Food Group believes insects are the protein of the future, and that technology has the power to bring the tradition of eating insects that exists in many countries and cultures to the rest of the world. The company uses technologies like robotics and automated data collection to farm insects that have the protein quality of meat and the environmental footprint of plants.
Energy
Rafiki Power acts as a rural utility company, building decentralized energy solutions in regions that lack basic services like running water and electricity. The company’s renewable hybrid systems are packed and standardized in recycled 20-foot shipping containers, and they’re currently powering over 700 household and business clients in rural Tanzania.

Governance
MakeSense is an international community that brings together people in 128 cities across the world to help social entrepreneurs solve challenges in areas like education, health, food, and environment. Social entrepreneurs post their projects and submit challenges to the community, then participants organize workshops to mobilize and generate innovative solutions to help the projects grow.
Health
Unima developed a fast and low-cost diagnostic and disease surveillance tool for infectious diseases. The tool allows health professionals to diagnose diseases at the point of care, in less than 15 minutes, without the use of any lab equipment. A drop of the patient’s blood is placed on a diagnostic paper, where an antibody generates a visible reaction on contact with the biomarkers in the sample. The result is read by taking a photo with a smartphone app, which evaluates it using image processing, artificial intelligence, and machine learning.
Prosperity
Egalite helps people with disabilities enter the labor market, and helps companies develop best practices for inclusion of the disabled. Egalite’s founders are passionate about the potential of people with disabilities and the return companies get when they invest in that potential.
Learning
Iris.AI is an artificial intelligence system that reads scientific paper abstracts and extracts key concepts for users, presenting concepts visually and allowing users to navigate a topic across disciplines. Since its launch, Iris.AI has read 30 million research paper abstracts and more than 2,000 TED talks. The AI uses a neural net and deep learning technology to continuously improve its output.
Security
Hala Systems, Inc. is a social enterprise focused on developing technology-driven solutions to the world’s toughest humanitarian challenges. Hala is currently focused on civilian protection, accountability, and the prevention of violent extremism before, during, and after conflict. Ultimately, Hala aims to transform the nature of civilian defense during warfare, as well as to reduce casualties and trauma during post-conflict recovery, natural disasters, and other major crises.
Shelter
Billion Bricks designs and provides shelter and infrastructure solutions for the homeless. The company’s housing solutions are scalable, sustainable, and able to create opportunities for communities to emerge from poverty. Their approach empowers communities to replicate the solutions on their own, reducing dependency on support and creating ownership and pride.

Space
Tellus Labs uses satellite data to tackle challenges like food security, water scarcity, and sustainable urban and industrial systems, and drive meaningful change. The company built a planetary-scale model of all 170 million acres of US corn and soy crops to more accurately forecast yields and help stabilize the market fluctuations that accompany the USDA’s monthly forecasts.
Water
Loowatt designed a toilet that uses a patented sealing technology to contain human waste within biodegradable film. The toilet is designed for linking to anaerobic digestion technology to provide a source of biogas for cooking, electricity, and other applications, creating the opportunity to offset capital costs with energy production.
Image Credit: LuminAID via YouTube
