Tag Archives: taking
#430874 12 Companies That Are Making the World a ...
The Singularity University Global Summit in San Francisco this week brought brilliant minds together from all over the world to share a passion for using science and technology to solve the world’s most pressing challenges.
Solving these challenges means ensuring basic needs are met for all people. It means improving quality of life and mitigating future risks both to people and the planet.
To recognize organizations doing outstanding work in these fields, SU holds the Global Grand Challenge Awards. Three participating organizations are selected in each of 12 different tracks and featured at the summit’s EXPO. The ones found to have the most potential to positively impact one billion people are selected as the track winners.
Here’s a list of the companies recognized this year, along with some details about the great work they’re doing.
Global Grand Challenge Awards winners at Singularity University’s Global Summit in San Francisco.
Disaster Resilience
LuminAID makes portable lanterns that can provide 24 hours of light on 10 hours of solar charging. The lanterns came from a project to assist post-earthquake relief efforts in Haiti, when the product's creators considered the dangerous conditions at night in the tent cities and realized light was a critical need. The lights have been used in more than 100 countries and in the aftermath of disasters including Hurricane Sandy, Typhoon Haiyan, and the earthquakes in Nepal.
Environment
BreezoMeter uses big data and machine learning to deliver accurate air quality information in real time. Users can see pollution details as localized as a single city block, and the data incorporates real-time traffic conditions. Forecasts are available up to four days ahead, and historical data reaches back several years.
Food
Aspire Food Group believes insects are the protein of the future, and that technology has the power to bring the tradition of eating insects that exists in many countries and cultures to the rest of the world. The company uses technologies like robotics and automated data collection to farm insects that have the protein quality of meat and the environmental footprint of plants.
Energy
Rafiki Power acts as a rural utility company, building decentralized energy solutions in regions that lack basic services like running water and electricity. The company’s renewable hybrid systems are packed and standardized in recycled 20-foot shipping containers, and they’re currently powering over 700 household and business clients in rural Tanzania.
Governance
MakeSense is an international community that brings together people in 128 cities across the world to help social entrepreneurs solve challenges in areas like education, health, food, and environment. Social entrepreneurs post their projects and submit challenges to the community, then participants organize workshops to mobilize and generate innovative solutions to help the projects grow.
Health
Unima developed a fast, low-cost diagnostic and disease surveillance tool for infectious diseases. The tool allows health professionals to diagnose diseases at the point of care, in less than 15 minutes, without any lab equipment. A drop of the patient's blood is placed on a diagnostic paper, where an antibody produces a visible reaction on contact with the biomarkers in the sample. The result is read by taking a photo with a smartphone app that uses image processing, artificial intelligence, and machine learning.
Prosperity
Egalite helps people with disabilities enter the labor market, and helps companies develop best practices for inclusion of the disabled. Egalite’s founders are passionate about the potential of people with disabilities and the return companies get when they invest in that potential.
Learning
Iris.AI is an artificial intelligence system that reads scientific paper abstracts and extracts key concepts for users, presenting concepts visually and allowing users to navigate a topic across disciplines. Since its launch, Iris.AI has read 30 million research paper abstracts and more than 2,000 TED talks. The AI uses a neural net and deep learning technology to continuously improve its output.
Security
Hala Systems, Inc. is a social enterprise focused on developing technology-driven solutions to the world’s toughest humanitarian challenges. Hala is currently focused on civilian protection, accountability, and the prevention of violent extremism before, during, and after conflict. Ultimately, Hala aims to transform the nature of civilian defense during warfare, as well as to reduce casualties and trauma during post-conflict recovery, natural disasters, and other major crises.
Shelter
Billion Bricks designs and provides shelter and infrastructure solutions for the homeless. The company’s housing solutions are scalable, sustainable, and able to create opportunities for communities to emerge from poverty. Their approach empowers communities to replicate the solutions on their own, reducing dependency on support and creating ownership and pride.
Space
Tellus Labs uses satellite data to tackle challenges like food security, water scarcity, and sustainable urban and industrial systems, and drive meaningful change. The company built a planetary-scale model of all 170 million acres of US corn and soy crops to more accurately forecast yields and help stabilize the market fluctuations that accompany the USDA’s monthly forecasts.
Water
Loowatt designed a toilet that uses a patented sealing technology to contain human waste within biodegradable film. The toilet is designed for linking to anaerobic digestion technology to provide a source of biogas for cooking, electricity, and other applications, creating the opportunity to offset capital costs with energy production.
Image Credit: LuminAID via YouTube
#430814 The Age of Cyborgs Has Arrived
How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there.
In a presentation titled “Biohacking and the Connected Body” at Singularity University Global Summit, Hannes Sjoblad informed the audience that we’re already living in the age of cyborgs. Sjoblad is co-founder of the Sweden-based biohacker network Bionyfiken, a chartered non-profit that unites DIY-biologists, hackers, makers, body modification artists and health and performance devotees to explore human-machine integration.
Sjoblad said the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health. Sjoblad defined biohacking as applying hacker ethic to biological systems. Some biohackers experiment with their biology with the goal of taking the human body’s experience beyond what nature intended.
Smart insulin monitoring systems, pacemakers, bionic eyes, and cochlear implants are all examples of biohacking, according to Sjoblad. He told the audience, “We live in a time where, thanks to technology, we can make the deaf hear, the blind see, and the lame walk.” He is convinced that while biohacking could conceivably end up having Brave New World-like dystopian consequences, it can also be leveraged to improve and enhance our quality of life in multiple ways.
The field where biohacking can make the most positive impact is health. In addition to pacemakers and insulin monitors, several new technologies are being developed with the goal of improving our health and simplifying access to information about our bodies.
Ingestibles are a type of smart pill that use wireless technology to monitor internal reactions to medications, helping doctors determine optimum dosage levels and tailor treatments to different people. Your body doesn’t absorb or process medication exactly as your neighbor’s does, so shouldn’t you each have a treatment that works best with your unique system? Colonoscopies and endoscopies could one day be replaced by miniature pill-shaped video cameras that would collect and transmit images as they travel through the digestive tract.
Security is another area where biohacking could be beneficial. One example Sjoblad gave was the personalization of weapons: an intruder in your house couldn't fire your gun because it has been matched to your fingerprint or synced with your body so that it responds only to you.
Biohacking can also simplify everyday tasks. In an impressive example of walking the walk rather than just talking the talk, Sjoblad had an NFC chip implanted in his hand. The chip contains data from everything he used to have to carry around in his pockets: credit and bank card information, key cards to enter his office building and gym, business cards, and frequent shopper loyalty cards. When he’s in line for a morning coffee or rushing to get to the office on time, he doesn’t have to root around in his pockets or bag to find the right card or key; he just waves his hand in front of a sensor and he’s good to go.
Evolved from radio frequency identification (RFID), an older and widely deployed technology, NFC chips are activated by another chip, and small amounts of data can be transferred back and forth. No internet connection or pairing is necessary. Sjoblad sees his NFC implant as a personal key to the Internet of Things, a simple way for him to talk to the smart, connected devices around him.
Sjoblad isn’t the only person who feels a need for connection.
When British science writer Frank Swain realized he was going to go deaf, he decided to hack his hearing to be able to hear Wi-Fi. Swain developed software that tunes into wireless communication fields and uses an inbuilt Wi-Fi sensor to pick up router name, encryption modes and distance from the device. This data is translated into an audio stream where distant signals click or pop, and strong signals sound their network ID in a looped melody. Swain hears it all through an upgraded hearing aid.
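A minimal sketch of that kind of mapping is below, in Python with made-up scan data; Swain's actual software and its hearing-aid integration are not reproduced here. The idea is simply that weak, distant signals become sparse clicks while strong signals loop their network name as a short melody.

```python
# Toy sketch: map Wi-Fi scan results to audio parameters, in the spirit of
# Swain's hack. The scan data below is invented; a real version would read
# the operating system's Wi-Fi scan output instead of this stub.

def sonify(networks):
    """Turn (ssid, rssi_dbm) pairs into simple sound-event descriptions."""
    events = []
    for ssid, rssi in networks:
        if rssi < -75:                      # weak, distant signal: sparse clicks
            events.append({"ssid": ssid, "sound": "click", "rate_hz": 1.0})
        else:                               # strong signal: loop its name as a melody
            scale = [262, 294, 330, 392, 440]                  # pentatonic pitches in Hz
            melody = [scale[ord(c) % len(scale)] for c in ssid]
            events.append({"ssid": ssid, "sound": "melody", "pitches_hz": melody,
                           "volume": (rssi + 90) / 60})        # louder when closer
    return events

if __name__ == "__main__":
    fake_scan = [("CoffeeShop", -48), ("Neighbour5G", -82)]    # stubbed scan results
    for event in sonify(fake_scan):
        print(event)
```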
Global datastreams can also become sensory experiences. Spanish artist Moon Ribas developed and implanted a chip in her elbow that is connected to the global monitoring system for seismographic sensors; each time there’s an earthquake, she feels it through vibrations in her arm.
You can feel connected to our planet, too: North Sense makes a “standalone artificial sensory organ” that connects to your body and vibrates whenever you’re facing north. It’s a built-in compass; you’ll never get lost again.
Biohacking applications are likely to proliferate in the coming years, some of them more useful than others. But there are serious ethical questions that can’t be ignored during development and use of this technology. To what extent is it wise to tamper with nature, and who gets to decide?
Most of us are probably OK with waiting in line an extra 10 minutes or occasionally having to pull up a maps app on our phones if it means we don't need to implant computer chips into our forearms. If it's frightening to think of criminals stealing our wallets, imagine them cutting out a chunk of our skin to gain instant access to and control over our personal data. The physical invasiveness and the potential for something to go wrong seem to far outweigh the benefits the average person could derive from this technology.
But that may not always be the case. Technology continues to miniaturize at a rapid rate, and the smaller things get, the less invasive (and hopefully more useful) they'll be. Even today, some people are already benefiting sensibly from biohacking. If you look closely enough, you'll spot at least a couple of cyborgs on your commute tomorrow morning.
Image Credit: Movement Control Laboratory/University of Washington – Deep Dream Generator
#430761 How Robots Are Getting Better at Making ...
The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel, we wanted cool robots; instead, we got 140 characters and Flippy the burger bot. But scientists are making progress toward giving robots the ability to see and respond to their surroundings much as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. What sets their approach apart is that it confines the space in which the robot learns to classify objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
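The authors' own algorithm is not reproduced here; purely as a toy illustration of classifying within a restricted subspace, the sketch below learns a low-dimensional subspace with PCA over synthetic voxel grids and labels a new object by its nearest class centroid in that subspace. All names and data are invented.

```python
# Toy illustration of subspace-based object classification (synthetic data,
# not the authors' algorithm). Each "object" is a flattened 3D voxel grid;
# PCA finds a low-dimensional subspace, and new objects are classified by
# the nearest class centroid after projection into that subspace.
import numpy as np

rng = np.random.default_rng(0)
GRID = 16 * 16 * 16                         # flattened 16x16x16 voxel grid
N_PER_CLASS, CLASSES = 50, ["chair", "table", "toilet"]

# Synthetic training set: each class is a noisy variation of a random prototype shape.
prototypes = {c: (rng.random(GRID) > 0.5).astype(float) for c in CLASSES}
X, y = [], []
for c in CLASSES:
    for _ in range(N_PER_CLASS):
        X.append(np.clip(prototypes[c] + 0.3 * rng.standard_normal(GRID), 0, 1))
        y.append(c)
X, y = np.array(X), np.array(y)

# Learn a restricted subspace with PCA (top k principal components).
k = 20
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:k]                         # basis vectors spanning the subspace
Z = (X - mean) @ components.T               # training data projected into it
centroids = {c: Z[y == c].mean(axis=0) for c in CLASSES}

def classify(voxels):
    """Project a new (possibly noisy) voxel grid and pick the nearest class centroid."""
    z = (voxels - mean) @ components.T
    return min(CLASSES, key=lambda c: np.linalg.norm(z - centroids[c]))

test = np.clip(prototypes["chair"] + 0.3 * rng.standard_normal(GRID), 0, 1)
print(classify(test))                       # expected: "chair"
```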
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used at different levels of language. The machine learned not only to follow instructions but to recognize their level of abstraction, which was key to kickstarting its problem-solving abilities and tackling the job in the most appropriate way.
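Their system learns that mapping from the crowdsourced instructions; the sketch below is only a hand-written stand-in that illustrates the idea of routing a command to a different planner depending on its level of abstraction. The hints, planners, and commands are invented for illustration.

```python
# Toy sketch of routing commands by their level of abstraction. The heuristic
# below is hand-written for illustration; the paper's system learns this
# mapping from crowdsourced instructions instead.

STEP_HINTS = ("go north", "go south", "go east", "go west",
              "turn", "forward", "pick up", "set down")   # step-by-step phrasing

def abstraction_level(command: str) -> str:
    """Guess whether a command states a goal or dictates individual steps."""
    text = command.lower()
    return "step-by-step" if any(h in text for h in STEP_HINTS) else "high-level"

def plan(command: str) -> str:
    """Dispatch to a different (hypothetical) planner depending on abstraction."""
    if abstraction_level(command) == "high-level":
        return f"[goal planner] search for an action sequence achieving: {command!r}"
    return f"[step executor] run each instruction literally: {command!r}"

if __name__ == "__main__":
    print(plan("Take the chair to the blue room"))
    print(plan("Go north, go north, pick up the chair, go east"))
```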
The research eventually moved from virtual pixels to a real space, using a Roomba-like robot that responded to instructions within one second 90 percent of the time. When it could not identify the specificity of a task, planning took the robot 20 or more seconds about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash
#430649 Robotherapy for children with autism
A new robotherapy for children with autism could reduce the amount of patient supervision required from therapists.
05.07.2017
Autism treatments and therapies routinely make headlines, but with robot-enhanced therapies on the rise, the mental stress and physical toll these procedures take on therapists is often overlooked. Treatment can be taxing on the patient and the therapist alike, yet few realize the stress and workload carried by those who work with autistic patients.
It is against this backdrop that researchers from the Vrije Universiteit Brussel are pioneering a new technology to aid behavioural therapy, with a deliberate aim: they are using robots to boost the basic social learning skills of children with ASD and, in doing so, hope to make the therapists' job substantially easier.
A study just published in PALADYN – Journal of Behavioural Robotics examines the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy.
The growing deployment of robot-assisted therapies in recent decades means children with Autism Spectrum Disorder (ASD) can develop and nurture social behaviour and cognitive skills. Learning skills that carry over into real life is the foremost goal of all autism therapies, including Robot-Assisted Therapy (RAT), and effectiveness is always a key concern. This time, however, the scientists set themselves the additional mission of taking the load off the human therapists by letting parts of the intervention be handled by supervised yet autonomous robots.
The researchers developed a complete system of robot-enhanced therapy (RET) for children with ASD. The therapy works by teaching behaviours during repeated sessions of interactive games. Robots are often used for this purpose because individuals with ASD tend to be more responsive to feedback coming from interaction with technology. In the standard approach, the social robot acts as a mediator and remains remote-controlled by a human operator, a technique known as Wizard of Oz; it requires an additional person to run the robot, and the robot does not record the child's performance during therapy. To reduce operator workload, the authors introduced a system with a supervised autonomous robot, which is able to infer the psychological disposition of the child and use it to select actions appropriate to the current state of the interaction.
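The paper's architecture is considerably richer than this, but a minimal sketch of the supervised-autonomy idea might look as follows: the robot proposes an action from the child's estimated state, and the therapist approves or overrides it. State names and actions here are invented for illustration.

```python
# Minimal sketch of a supervised-autonomy loop: the robot proposes an action
# from the child's estimated state, and the therapist stays in the loop to
# approve or override it. States and actions are invented for illustration;
# this is not the architecture described in the paper.

ACTIONS = {
    "engaged":    "start_turn_taking_game",
    "distracted": "call_child_by_name",
    "frustrated": "pause_and_play_calming_sound",
}

def propose_action(child_state):
    """Robot side: map the estimated psychological state to a candidate action."""
    return ACTIONS.get(child_state, "wait_quietly")

def supervised_step(child_state, therapist_override=None):
    """Therapist side: accept the robot's proposal or substitute another action."""
    proposal = propose_action(child_state)
    action = therapist_override or proposal        # human supervisor has final say
    print(f"state={child_state!r:<14} proposed={proposal!r:<32} executed={action!r}")
    return action

if __name__ == "__main__":
    supervised_step("engaged")
    supervised_step("distracted")
    supervised_step("frustrated", therapist_override="hand_over_to_therapist")
```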
Robots with supervised autonomy can substantially benefit behavioural therapy for children with ASD, diminishing the therapist's workload on the one hand and providing more objective measurements of therapy outcomes on the other. Yet the therapy remains complex and calls for a multidisciplinary approach, as RET showed mixed effectiveness on the primary tasks (turn-taking, joint attention, and imitation) compared to Standard Human Treatment (SHT).
The results are likely to prompt further development of robot-assisted therapy with increasing robot autonomy. Many conceptual and technical issues remain to be tackled, but it is the ethical questions that pose one of the major challenges where the potential and maximal degree of robot autonomy is concerned.
The article is fully available in open access to read, download and share on De Gruyter Online.
The research was conducted as part of the DREAM (Development of Robot-Enhanced therapy for children with Autism spectrum disorders) project.
DOI: 10.1515/pjbr-2017-0002
Image credit: P.G. Esteban
About the Journal: PALADYN – Journal of Behavioural Robotics is a fully peer-reviewed, electronic-only journal that publishes original, high-quality research on topics broadly related to neuronally and psychologically inspired robots and other behaving autonomous systems.
The post Robotherapy for children with autism appeared first on Roboticmagazine.