#431189 Researchers Develop New Tech to Predict ...

It is one of the top 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose Alzheimer’s disease in its earliest stages, while some of the latest research says technologies like artificial intelligence can detect dementia years before the first symptoms occur.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says professor Francis L Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
Blood Diamond
Martin is the principal investigator on a project that has developed a technique to analyze blood samples to diagnose Alzheimer’s disease and distinguish it from other forms of dementia.
The researchers used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
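Neither the algorithm nor the underlying data are published in this article, but the workflow Martin describes (turn each spectrum into a numerical fingerprint, feed it to a classification algorithm, then report sensitivity and specificity on samples the model has not seen) can be sketched generically. The snippet below is purely illustrative: the random "spectra," the choice of classifier, and the feature count are assumptions, not the team's actual method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Placeholder data: one row per blood sample, one column per point in the
# spectral "fingerprint" region; labels 1 = Alzheimer's, 0 = other diagnosis.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(550, 200))   # roughly 550 samples, 200 spectral features
labels = rng.integers(0, 2, size=550)   # stand-in diagnoses

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.3, stratify=labels, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Sensitivity and specificity on the held-out samples.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))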
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Crystal Ball
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia from a single amyloid PET brain scan of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
One of the key goals of the project is to improve Alzheimer’s research by ensuring that the patients with the highest probability of developing dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working toward improving science’s predictive powers when it comes to detecting dementia early. One San Francisco startup, Darmiyan, claims its proprietary software can pick up signs of Alzheimer’s disease up to 15 years before onset.
Darmiyan didn’t respond to a request for comment for this article. VentureBeat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims to have a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to VentureBeat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told VentureBeat.
Conclusions
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 has been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated to be $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If you’re interested in seeing this research progress, you can help improve the quality of clinical trials by signing up for the Brain Health Registry.
Image Credit: rudall30 / Shutterstock.com

#431186 The Coming Creativity Explosion Belongs ...

Does creativity make human intelligence special?
It may appear so at first glance. Though machines can calculate, analyze, and even perceive, creativity may seem far out of reach. Perhaps this is because we find it mysterious, even in ourselves. How can the output of a machine be anything more than that which is determined by its programmers?
Increasingly, however, artificial intelligence is moving into creativity’s hallowed domain, from art to industry. And though much is already possible, the future is sure to bring ever more creative machines.
What Is Machine Creativity?
Robotic art is just one example of machine creativity, a rapidly growing sub-field that sits somewhere between the study of artificial intelligence and human psychology.
The winning paintings from the 2017 Robot Art Competition are strikingly reminiscent of those showcased each spring at university exhibitions for graduating art students. Like the works produced by skilled artists, the compositions dreamed up by the competition’s robotic painters are aesthetically ambitious. One robot-made painting features a man’s bearded face gazing intently out from the canvas, his eyes locking with the viewer’s. Another abstract painting, “inspired” by data from EEG signals, visually depicts the human emotion of misery with jagged, gloomy stripes of black and purple.
More broadly, a creative machine is software (sometimes encased in a robotic body) that synthesizes inputs to generate new and valuable ideas, solutions to complex scientific problems, or original works of art. In a process similar to that followed by a human artist or scientist, a creative machine begins its work by framing a problem. Next, its software specifies the requirements the solution should have before generating “answers” in the form of original designs, patterns, or some other form of output.
Although the notion of machine creativity sounds a bit like science fiction, the basic concept is one that has been slowly developing for decades.
Nearly 50 years ago, while still a high school student, inventor and futurist Ray Kurzweil created software that could analyze the patterns in musical compositions and then compose new melodies in a similar style. Aaron, one of the world’s most famous painting robots, has been hard at work since the 1970s.
Industrial designers have used an automated, algorithm-driven process for decades to design computer chips (or machine parts) whose layout (or form) is optimized for a particular function or environment. Emily Howell, a computer program created by David Cope, writes original works in the style of classical composers, some of which have been performed by human orchestras to live audiences.
What’s different about today’s new and emerging generation of robotic artists, scientists, composers, authors, and product designers is their ubiquity and power.
I’ve already mentioned the rapidly advancing fields of robotic art and music. In the realm of scientific research, so-called “robotic scientists” such as Eureqa and Adam and Eve develop new scientific hypotheses; their “insights” have contributed to breakthroughs that are cited by hundreds of academic research papers. In the medical industry, creative machines are hard at work creating chemical compounds for new pharmaceuticals. After it read over seven million words of 20th century English poetry, a neural network developed by researcher Jack Hopkins learned to write passable poetry in a number of different styles and meters.
The recent explosion of artificial creativity has been enabled by the rapid maturation of the same exponential technologies that have already re-drawn our daily lives, including faster processors, ubiquitous sensors and wireless networks, and better algorithms.
As they continue to improve, creative machines—like humans—will perform a broad range of creative activities, ranging from everyday problem solving (sometimes known as “Little C” creativity) to producing once-in-a-century masterpieces (“Big C” creativity). A creative machine’s outputs could range from a design for a cast for a marble sculpture to a schematic blueprint for a clever new gadget for opening bottles of wine.
In the coming decades, by automating the process of solving complex problems, creative machines will again transform our world. Creative machines will serve as a versatile source of on-demand talent.
In the battle to recruit a workforce that can solve complex problems, creative machines will put small businesses on equal footing with large corporations. Art and music lovers will enjoy fresh creative works that re-interpret the style of ancient disciplines. People with a health condition will benefit from individualized medical treatments, and low-income people will receive top-notch legal advice, to name but a few potentially beneficial applications.
How Can We Make Creative Machines, Unless We Understand Our Own Creativity?
One of the most intriguing—yet unsettling—aspects of watching robotic arms paint skillfully in oils is that we humans still do not understand our own creative process. Over the centuries, several different civilizations have devised a variety of models to explain creativity.
The ancient Greeks believed that poets drew inspiration from a transcendent realm parallel to the material world where ideas could take root and flourish. In the Middle Ages, philosophers and poets attributed our peculiarly human ability to “make something of nothing” to an external source, namely divine inspiration. Modern academic study of human creativity has generated vast reams of scholarship, but despite the value of these insights, the human imagination remains a great mystery, second only to that of consciousness.
Today, the rise of machine creativity demonstrates (once again) that we do not have to fully understand a biological process in order to emulate it with advanced technology.
Past experience has shown that jet planes can fly higher and faster than birds by using the forward thrust of an engine rather than wings. Submarines propel themselves forward underwater without fins or a tail. Deep learning neural networks identify objects in randomly-selected photographs with super-human accuracy. Similarly, using a fairly straightforward software architecture, creative software (sometimes paired with a robotic body) can paint, write, hypothesize, or design with impressive originality, skill, and boldness.
At the heart of machine creativity is simple iteration. No matter what sort of output they produce, creative machines fall into one of three categories depending on their internal architecture.
Briefly, the first group consists of software programs that use traditional rule-based, or symbolic, AI; the second group uses evolutionary algorithms; and the third group uses a form of machine learning called deep learning that has already revolutionized voice and facial recognition software.
1) Symbolic creative machines are the oldest artificial artists and musicians. In this approach, also known as “good old-fashioned AI” (GOFAI) or symbolic AI, the human programmer plays a key role by writing a set of step-by-step instructions to guide the computer through a task. Although symbolic AI is limited in its ability to adapt to environmental changes, a robotic artist programmed this way can still create an impressively wide variety of outputs.
2) Evolutionary algorithms (EA) have been in use for several decades and remain powerful tools for design. In this approach, potential solutions “compete” in a software simulator in a Darwinian process reminiscent of biological evolution. The human programmer specifies a “fitness criterion” that will be used to score and rank the solutions generated by the software. The software then generates a “first generation” population of random solutions (which typically are pretty poor in quality), scores this first generation of solutions, and selects the top 50% (those random solutions deemed to be the best “fit”). The software then takes another pass and recombines the “winning” solutions to create the next generation and repeats this process for thousands (and sometimes millions) of generations. (A minimal code sketch of this loop appears after this list.)
3) Generative deep learning (DL) neural networks represent the newest software architecture of the three, since DL is data-dependent and resource-intensive. First, a human programmer “trains” a DL neural network to recognize a particular feature in a dataset, for example, an image of a dog in a stream of digital images. Next, the standard “feed forward” process is reversed and the DL neural network begins to generate the feature, for example, eventually producing new and sometimes original images of (or poetry about) dogs. Generative DL networks have tremendous and unexplored creative potential and are able to produce a broad range of original outputs, from paintings to music to poetry.
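To make the evolutionary approach in item 2 concrete, here is a minimal sketch of that loop in Python. The genome encoding, the toy fitness criterion, and the mutation step are all assumptions chosen for illustration; they are not drawn from any particular design tool.

import random

GENOME_LEN = 20
POP_SIZE = 100

def fitness(genome):
    # Toy fitness criterion: reward genomes whose values sum close to 10.
    return -abs(sum(genome) - 10.0)

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def recombine(parent_a, parent_b):
    # One-point crossover followed by a small random mutation.
    cut = random.randrange(1, GENOME_LEN)
    child = parent_a[:cut] + parent_b[cut:]
    child[random.randrange(GENOME_LEN)] += random.gauss(0.0, 0.1)
    return child

# First generation: random (and typically poor) candidate solutions.
population = [random_genome() for _ in range(POP_SIZE)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]          # keep the top 50 percent
    children = [recombine(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children                # next generation

print("best fitness found:", max(fitness(g) for g in population))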
The Coming Explosion of Machine Creativity
In the near future as Moore’s Law continues its work, we will see sophisticated combinations of these three basic architectures. Since the 1950s, artificial intelligence has steadily mastered one human ability after another, and in the process of doing so, has reduced the cost of calculation, analysis, and most recently, perception. When creative software becomes as inexpensive and ubiquitous as analytical software is today, humans will no longer be the only intelligent beings capable of creative work.
This is why I have to bite my tongue when I hear the well-intended (but shortsighted) advice frequently dispensed to young people that they should pursue work that demands creativity to help them “AI-proof” their futures.
Instead, students should gain skills to harness the power of creative machines.
There are two skills in which humans excel that will enable us to remain useful in a world of ever-advancing artificial intelligence. One, the ability to frame and define a complex problem so that it can be handed off to a creative machine to solve. And two, the ability to communicate the value of both the framework and the proposed solution to the other humans involved.
What will happen to people when creative machines begin to capably tread on intellectual ground that was once considered the sole domain of the human mind, and before that, the product of divine inspiration? While machines engaging in Big C creativity—e.g., oil painting and composing new symphonies—tend to garner controversy and make the headlines, I suspect the real world-changing application of machine creativity will be in the realm of everyday problem solving, or Little C. The mainstream emergence of powerful problem-solving tools will help people create abundance where there was once scarcity.
Image Credit: adike / Shutterstock.com

#431181 Workspace Sentry collaborative robotics ...

PRINCETON, NJ, September 13, 2017 – ST Robotics announces the availability of its Workspace Sentry collaborative robotics safety system, specifically designed to meet the International Organization for Standardization (ISO)/Technical Specification (TS) 15066 on collaborative operation. The new ISO/TS 15066, a game changer for the robotics industry, provides guidelines for the design and implementation of a collaborative workspace that reduces risks to people.

The ST Robotics Workspace Sentry robot and area safety system are based on a small module that sends infrared beams across the workspace. If the user puts a hand (or any other object) into the workspace, the robot stops, using programmable emergency deceleration. Each module has three beams at different angles, and the distance each beam reaches is adjustable. Two or more modules can be daisy-chained to watch a wider area.
Photo Credit: ST Robotics – www.robot.md
“A robot that is tuned to stop on impact may not be safe. Robots where the trip torque can be set at low thresholds are too slow for any practical industrial application. The best system is where the work area has proximity detectors so the robot stops before impact and that is the approach ST Robotics has taken,” states David Sands, president and CEO of ST Robotics.
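The release describes behavior rather than implementation, but the core logic (poll the beams from each daisy-chained module and trigger the programmed deceleration the instant any beam is interrupted) is straightforward to picture. The sketch below is a hypothetical illustration only; the sensor interface and function names are invented and do not come from ST Robotics.

import time

def decelerate_and_stop():
    # Placeholder for the controller's programmable emergency deceleration.
    print("Beam interrupted: decelerating robot to a safe stop.")

def monitor(beam_readings, poll_interval_s=0.005):
    # Stop as soon as any beam in any daisy-chained module reads as interrupted.
    for beams in beam_readings:      # one tuple of beam states per polling cycle
        if any(beams):               # True = beam interrupted
            decelerate_and_stop()
            return
        time.sleep(poll_interval_s)

# Simulated readings from two modules (three beams each): two clear cycles, then a trip.
monitor([(False,) * 6, (False,) * 6, (False, True, False, False, False, False)])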

ST Robotics, widely known for ‘robotics within reach’, has offices in Princeton, New Jersey and Cambridge, England, as well as in Asia. One of the first manufacturers of bench-top robot arms, ST Robotics has been providing the lowest-priced, easy-to-program boxed robots for the past 30 years. ST’s robots are utilized the world over by companies and institutions such as Lockheed-Martin, Motorola, Honeywell, MIT, NASA, Pfizer, Sony and NXP. The numerous applications for ST’s robots benefit the manufacturing, nuclear, pharmaceutical, laboratory and semiconductor industries.

For additional information on ST Robotics, contact:
sales1@strobotics.com
(609) 584 7522
www.strobotics.com

For press inquiries, contact:
Joanne Pransky
World’s First Robotic Psychiatrist®
drjoanne@robot.md
(650) ROBOT-MD

The post Workspace Sentry collaborative robotics safety system appeared first on Roboticmagazine.

#431058 How to Make Your First Chatbot With the ...

You’re probably wondering what Game of Thrones has to do with chatbots and artificial intelligence. Before I explain this weird connection, I need to warn you that this article may contain some serious spoilers. Continue with your reading only if you are a passionate GoT follower, who watches new episodes immediately after they come out.
Why are chatbots so important anyway?
According to the study “When Will AI Exceed Human Performance?,” the AI researchers surveyed put the odds at 50 percent that artificial intelligence will outperform humans at every task by around the year 2060. This technology has already replaced dozens of customer service and sales positions and helped businesses make substantial savings.
Apart from the obvious business advantages, chatbot creation can be fun. You can create an artificial personality with a strong attitude and a unique set of traits and flaws. It’s like creating a new character for your favorite TV show. That’s why I decided to explain the most important elements of the chatbot creation process by using the TV characters we all know and love (or hate).
Why Game of Thrones?
Game of Thrones is the most popular TV show in the world. More than 10 million viewers watched the seventh season premiere, and you have probably seen internet users fanatically discussing the series’ characters, storyline, and possible endings.
Apart from writing about chatbots, I’m also a GoT fanatic, and I will base this chatbot on one of the characters from my favorite series. But before you find out the name of my bot, you should read a few lines about incredible free tools that allow us to build chatbots without coding.
Are chatbots expensive?
Today, you can create a chatbot even if you don’t know how to code. Most chatbot building platforms offer at least one free plan that allows you to use basic functionalities, create your bot, deploy it to Facebook Messenger, and analyze its performance. Free plans usually allow your bot to talk to a limited number of users.
Why should you personalize your bot?
Every platform will ask you to write a bot’s name before you start designing conversations. You will also be able to add the bot’s photograph and bio. Personalizing your bot is the only way to ensure that you will stick to the same personality and storyline throughout the building process. Users often see chatbots as people, and by giving your bot an identity, you will make sure that it doesn’t sound like it has multiple personality disorder.
I think connecting my chatbot with a GoT character will help readers understand the process of chatbot creation.
And the name of our GoT chatbot is…
…Cersei. She is mean, pragmatic, and fearless, and she would do anything to stay on the Iron Throne. Many people would rather hang out with Daenerys or Jon Snow. These characters are honest, noble, and good-hearted, which means their actions are often predictable.
Cersei, on the other hand, is the queen of intrigues. As the meanest and the most vengeful character in the series, she has an evil plan for everybody who steps on her toes. While viewers can easily guess where Jon and Daenerys stand, there are dozens of questions they would like to ask Cersei. But before we start talking to our bot, we need to build her personality by using the most basic elements of chatbot interaction.
Choosing the bot’s name on Botsify.
Welcome / Greeting Message
The welcome message is the greeting Cersei says to every commoner who clicks on the ‘start conversation’ button. She is not a welcoming person (ask Sansa), except if you are a banker from Braavos. Her introductory message may sound something like this:
“Dear {{user_full_name}}, My name is Cersei of the House Lannister, the First of Her Name, Queen of the Andals and the First Men, Protector of the Seven Kingdoms. You can ask me questions, and I will answer them. If the question is not worth answering, I will redirect you to Ser Gregor Clegane, who will give you a step-by-step course on how to talk to the Queen of Westeros.”
Creating the welcome message on Chatfuel
Default Message / Answer
In the bot game, users, bots, and their creators often need to learn from failed attempts and mistakes. The default message is the text Cersei will send whenever you ask her a question she doesn’t understand. Knowing Cersei, it would sound something like this:
“Ser Gregor, please escort {{user_full_name}} to the dungeon.”
Creating default message on Botsify
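Chatfuel and Botsify configure all of this through their web dashboards, so there is no code to write. Still, the logic of a welcome message plus a default fallback is simple enough to sketch in a few lines of Python; the trigger phrase, function names, and wording below are illustrative assumptions rather than any platform's API.

def welcome(user_full_name):
    return (f"Dear {user_full_name}, my name is Cersei of the House Lannister, "
            "the First of Her Name. You can ask me questions, and I will answer them.")

def default_reply(user_full_name):
    # The fallback sent whenever the bot does not understand the question.
    return f"Ser Gregor, please escort {user_full_name} to the dungeon."

def respond(message, user_full_name):
    if message.strip().lower() == "start conversation":
        return welcome(user_full_name)
    # No other intents are defined yet, so everything else gets the default message.
    return default_reply(user_full_name)

print(respond("start conversation", "Jon Snow"))
print(respond("Where is Arya?", "Jon Snow"))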
Menu
To avoid calling out the Mountain every time someone asks her a question, Cersei might give you a few (safe) options to choose from. The best way to do this is by using a menu function. We can classify the questions people want to ask Cersei into several different categories:

Iron Throne
Relationship with Jaime — OK, this isn’t a “safe option,” get ready to get up close and personal with Ser Gregor Clegane.
War plans
Euron Greyjoy

After users choose a menu item, Cersei can give them a default response on the topic or set up a plot that will make their lives miserable. Knowing Cersei, she will probably go for the second option.
Adding chatbot menu on Botsify
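In code terms, a menu is just a fixed mapping from the buttons shown to the user to a canned response for each topic. A rough sketch follows, with the responses invented for illustration.

# A menu is a fixed mapping from button labels to canned topic responses.
MENU = {
    "Iron Throne": "The throne is mine, and I intend to keep it.",
    "Relationship with Jaime": "Ser Gregor will see you out now.",
    "War plans": "A Lannister always pays her debts, in fire if necessary.",
    "Euron Greyjoy": "A useful fleet attached to a useless man.",
}

def handle_menu_choice(choice):
    return MENU.get(choice, "That is not one of the options, commoner.")

print("\n".join(f"[{label}]" for label in MENU))   # render the menu buttons
print(handle_menu_choice("War plans"))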
Stories / Blocks
This feature allows us to build a longer Cersei-to-user interaction. The structure of stories and blocks is different on every chatbot platform, but most of them use keywords and phrases for finding out the user’s intention.

Keywords — where the bot recognizes a certain keyword within the user’s reply. Users who have chosen the ‘war plans’ option might ask Cersei how she is planning to defeat Daenerys’s dragons. We can add ‘dragon’ and ‘dragons’ as keywords and connect them with an answer that will sound something like this:

“Dragons are not invulnerable as you may think. Maester Qyburn is developing a weapon that will bring them down for good!”
Adding keywords on Chatfuel
People may also ask her about White Walkers, for example: “Do you plan to join Daenerys and Jon Snow in a fight against the White Walkers?” After we add ‘White Walker’ and ‘White Walkers’ to the keyword list, Cersei will answer:
“White Walkers? Do you think the Queen of Westeros has enough free time to think about creatures from fairy tales and legends?”
Adding Keywords on Botsify

Phrases — more complex expressions that the bot can be trained to recognize. Many people would like to ask Cersei if she’s going to marry Euron Greyjoy after the war ends. We can add ‘Euron’ as a keyword, but then we won’t be sure what answer the user is expecting. Instead, we can use the phrase ‘(Will you) marry Euron Greyjoy (after the war?)’. Just to be sure, we should also add a few alternative phrases like ‘(Do you plan on) marrying Euron Greyjoy (after the war),’ ‘(Will you) end up with Euron Greyjoy (after the war?)’, ‘(Will) Euron Greyjoy be the new King?’ etc. Cersei would probably answer this inquiry in her style:

“Of course not, Euron is a useful idiot. I will use his fleet and send him back to the Iron Islands, where he belongs.”
Adding phrases on Botsify
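Under the hood, the keyword and phrase matching described above comes down to checking each message against increasingly specific patterns: phrases first, then single keywords, then the default message. The sketch below mimics that with plain string and regex matching; real platforms use their own (often machine-learned) matchers, so treat this as an approximation.

import re

KEYWORD_ANSWERS = {
    ("dragon", "dragons"):
        "Dragons are not invulnerable as you may think. "
        "Maester Qyburn is developing a weapon that will bring them down for good!",
    ("white walker", "white walkers"):
        "White Walkers? Do you think the Queen of Westeros has enough free time "
        "to think about creatures from fairy tales and legends?",
}

PHRASE_ANSWERS = [
    (re.compile(r"(marry|marrying|end up with)\s+euron\s+greyjoy", re.IGNORECASE),
     "Of course not, Euron is a useful idiot. I will use his fleet and send him "
     "back to the Iron Islands, where he belongs."),
]

def match(message):
    for pattern, answer in PHRASE_ANSWERS:          # phrases first: more specific
        if pattern.search(message):
            return answer
    lowered = message.lower()
    for keywords, answer in KEYWORD_ANSWERS.items():
        if any(keyword in lowered for keyword in keywords):
            return answer
    return None                                     # fall back to the default message

print(match("Will you marry Euron Greyjoy after the war?"))
print(match("How will you defeat the dragons?"))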
Forms
We have already asked Cersei several questions, and now she would like to ask us something. She can do so by using the form/user input feature. Most tools allow us to add a question and the criteria for checking the user’s answer. If the user provides an answer that matches the predefined format (like an email address, phone number, or ZIP code), the bot will identify and extract it. If the answer doesn’t fit the predefined criteria, the bot will notify the user and ask him or her to try again.
If Cersei were to ask you a question, she would probably want to know your address so she could send her guards to fill your basement with barrels of wildfire.
Creating forms on Botsify
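In other words, a form boils down to validating each reply against an expected pattern and asking again until something matches. Here is a rough illustration using a regular expression for an email address; the pattern and the retry wording are assumptions, not a platform's built-in rules.

import re

# Keep asking until the reply looks like an email address, then extract it.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def collect_email(replies):
    # Returns the first reply that matches the expected format, or None.
    for reply in replies:
        cleaned = reply.strip()
        if EMAIL_PATTERN.match(cleaned):
            return cleaned
        print("That does not look like an address. Try again, commoner.")
    return None

# Simulated conversation: two invalid replies, then a valid one.
print(collect_email(["King's Landing", "the Red Keep", "jon.snow@winterfell.example"]))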
Templates
If you have problems building your first chatbot, templates can help you create the basic conversation structure. Unfortunately, not all platforms offer this feature for free. Snatchbot currently has the most comprehensive list of free templates. There you can choose a pre-built layout. The template selection ranges from simple FAQ bots to ones created for a specific industry, like banking, airline, healthcare, or e-commerce.
Choosing templates on Snatchbot
Plugins
Most tools also provide plugins that can be used for making the conversations more meaningful. These plugins allow Cersei to send images, audio and video files. She can unleash her creativity and make you suffer by sending you her favorite GoT execution videos.

With the help of integrations, Cersei can talk to you on Facebook Messenger, Telegram, WeChat, Slack, and many other communication apps. She can also sell her fan gear and ask you for donations by integrating in-bot payments from PayPal accounts. Her sales pitch will probably sound something like this:
“Gold wins wars! Would you rather invest your funds in a member of a respected family, who always pays her debts, or in the chaotic war endeavor of a crazy revolutionary, whose strength lies in three flying lizards? If your pockets are full of gold, you are already on my side. Now you can complete your checkout on PayPal.”
Chatbot building is now easier than ever, and even small businesses are starting to use the incredible benefits of artificial intelligence. If you still don’t believe that chatbots can replace customer service representatives, I suggest you try to develop a bot based on your favorite TV show, movie or book character and talk with him/her for a while. This way, you will be able to understand the concept that stands behind this amazing technology and use it to improve your business.
Now I’m off to talk to Cersei. Maybe she will feed me some Season 8 spoilers.
This article was originally published by Chatbots Magazine. Read the original post here.
Image credits for screenshots in post: Branislav Srdanovic
Banner stock media provided by new_vision_studio / Pond5

#430874 12 Companies That Are Making the World a ...

The Singularity University Global Summit in San Francisco this week brought brilliant minds together from all over the world to share a passion for using science and technology to solve the world’s most pressing challenges.
Solving these challenges means ensuring basic needs are met for all people. It means improving quality of life and mitigating future risks both to people and the planet.
To recognize organizations doing outstanding work in these fields, SU holds the Global Grand Challenge Awards. Three participating organizations are selected in each of 12 different tracks and featured at the summit’s EXPO. The ones found to have the most potential to positively impact one billion people are selected as the track winners.
Here’s a list of the companies recognized this year, along with some details about the great work they’re doing.
Global Grand Challenge Awards winners at Singularity University’s Global Summit in San Francisco.
Disaster Resilience
LuminAID makes portable lanterns that can provide 24 hours of light on 10 hours of solar charging. The lanterns came from a project to assist post-earthquake relief efforts in Haiti, when the product’s creators considered the dangerous conditions at night in the tent cities and realized light was a critical need. The lights have been used in more than 100 countries and after disasters, including Hurricane Sandy, Typhoon Haiyan, and the earthquakes in Nepal.

Environment
BreezoMeter uses big data and machine learning to deliver accurate air quality information in real time. Users can see pollution details as localized as a single city block, and data is impacted by real-time traffic. Forecasting is also available, with air pollution information available up to four days ahead of time, or several years in the past.
Food
Aspire Food Group believes insects are the protein of the future, and that technology has the power to bring the tradition of eating insects that exists in many countries and cultures to the rest of the world. The company uses technologies like robotics and automated data collection to farm insects that have the protein quality of meat and the environmental footprint of plants.
Energy
Rafiki Power acts as a rural utility company, building decentralized energy solutions in regions that lack basic services like running water and electricity. The company’s renewable hybrid systems are packed and standardized in recycled 20-foot shipping containers, and they’re currently powering over 700 household and business clients in rural Tanzania.

Governance
MakeSense is an international community that brings together people in 128 cities across the world to help social entrepreneurs solve challenges in areas like education, health, food, and environment. Social entrepreneurs post their projects and submit challenges to the community, then participants organize workshops to mobilize and generate innovative solutions to help the projects grow.
Health
Unima developed a fast and low-cost diagnostic and disease surveillance tool for infectious diseases. The tool allows health professionals to diagnose diseases at the point of care, in less than 15 minutes, without the use of any lab equipment. A drop of the patient’s blood is put on a diagnostic paper, where the antibody generates a visual reaction when in contact with the biomarkers in the sample. The result is evaluated by taking a photo with a smartphone app that uses image processing, artificial intelligence, and machine learning.
Prosperity
Egalite helps people with disabilities enter the labor market, and helps companies develop best practices for inclusion of the disabled. Egalite’s founders are passionate about the potential of people with disabilities and the return companies get when they invest in that potential.
Learning
Iris.AI is an artificial intelligence system that reads scientific paper abstracts and extracts key concepts for users, presenting concepts visually and allowing users to navigate a topic across disciplines. Since its launch, Iris.AI has read 30 million research paper abstracts and more than 2,000 TED talks. The AI uses a neural net and deep learning technology to continuously improve its output.
Security
Hala Systems, Inc. is a social enterprise focused on developing technology-driven solutions to the world’s toughest humanitarian challenges. Hala is currently focused on civilian protection, accountability, and the prevention of violent extremism before, during, and after conflict. Ultimately, Hala aims to transform the nature of civilian defense during warfare, as well as to reduce casualties and trauma during post-conflict recovery, natural disasters, and other major crises.
Shelter
Billion Bricks designs and provides shelter and infrastructure solutions for the homeless. The company’s housing solutions are scalable, sustainable, and able to create opportunities for communities to emerge from poverty. Their approach empowers communities to replicate the solutions on their own, reducing dependency on support and creating ownership and pride.

Space
Tellus Labs uses satellite data to tackle challenges like food security, water scarcity, and sustainable urban and industrial systems, and drive meaningful change. The company built a planetary-scale model of all 170 million acres of US corn and soy crops to more accurately forecast yields and help stabilize the market fluctuations that accompany the USDA’s monthly forecasts.
Water
Loowatt designed a toilet that uses a patented sealing technology to contain human waste within biodegradable film. The toilet is designed for linking to anaerobic digestion technology to provide a source of biogas for cooking, electricity, and other applications, creating the opportunity to offset capital costs with energy production.
Image Credit: LuminAID via YouTube
