Tag Archives: less

#430874 12 Companies That Are Making the World a ...

The Singularity University Global Summit in San Francisco this week brought brilliant minds together from all over the world to share a passion for using science and technology to solve the world’s most pressing challenges.
Solving these challenges means ensuring basic needs are met for all people. It means improving quality of life and mitigating future risks both to people and the planet.
To recognize organizations doing outstanding work in these fields, SU holds the Global Grand Challenge Awards. Three participating organizations are chosen in each of 12 tracks and featured at the summit’s EXPO, and the one in each track judged to have the most potential to positively impact one billion people is named the winner.
Here’s a list of the companies recognized this year, along with some details about the great work they’re doing.
Global Grand Challenge Awards winners at Singularity University’s Global Summit in San Francisco.
Disaster Resilience
LuminAID makes portable lanterns that can provide 24 hours of light on 10 hours of solar charging. The lanterns came from a project to assist post-earthquake relief efforts in Haiti, when the product’s creators considered the dangerous conditions at night in the tent cities and realized light was a critical need. The lights have been used in more than 100 countries and after disasters, including Hurricane Sandy, Typhoon Haiyan, and the earthquakes in Nepal.

Environment
BreezoMeter uses big data and machine learning to deliver accurate air quality information in real time. Users can see pollution details as localized as a single city block, and the data incorporates real-time traffic conditions. Forecasts extend up to four days ahead, and historical data reaches back several years.
Food
Aspire Food Group believes insects are the protein of the future, and that technology has the power to bring the tradition of eating insects that exists in many countries and cultures to the rest of the world. The company uses technologies like robotics and automated data collection to farm insects that have the protein quality of meat and the environmental footprint of plants.
Energy
Rafiki Power acts as a rural utility company, building decentralized energy solutions in regions that lack basic services like running water and electricity. The company’s renewable hybrid systems are packed and standardized in recycled 20-foot shipping containers, and they’re currently powering over 700 household and business clients in rural Tanzania.

Governance
MakeSense is an international community that brings together people in 128 cities across the world to help social entrepreneurs solve challenges in areas like education, health, food, and environment. Social entrepreneurs post their projects and submit challenges to the community, then participants organize workshops to mobilize and generate innovative solutions to help the projects grow.
Health
Unima developed a fast and low-cost diagnostic and disease surveillance tool for infectious diseases. The tool allows health professionals to diagnose diseases at the point of care, in less than 15 minutes, without any lab equipment. A drop of the patient’s blood is placed on a diagnostic paper, where the antibody generates a visual reaction on contact with the biomarkers in the sample. The result is read by photographing the paper with a smartphone app that uses image processing, artificial intelligence, and machine learning.
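As a rough illustration of that image-processing step (this is not Unima’s pipeline; the crop box and threshold below are invented for the example), a reader app mainly needs to measure how strongly the reaction zone has changed color, something a few lines of Pillow and NumPy can sketch:

```python
# Minimal sketch of reading a colorimetric reaction from a test-strip photo.
# Not Unima's algorithm: the crop box and threshold are illustrative only.
import numpy as np
from PIL import Image

def reaction_strength(photo_path, box=(100, 100, 200, 200)):
    """Mean darkness (0-255) of the reaction zone; higher means a stronger reaction."""
    img = Image.open(photo_path).convert("L")      # grayscale
    zone = np.asarray(img.crop(box), dtype=float)  # crop to the reaction window
    return 255.0 - zone.mean()                     # darker zone -> larger value

def is_positive(photo_path, threshold=60.0):
    """Crude positive/negative call against an assumed calibration threshold."""
    return reaction_strength(photo_path) > threshold

if __name__ == "__main__":
    print("Positive" if is_positive("strip_photo.jpg") else "Negative")
```

A production system would correct for lighting and camera differences and calibrate the decision threshold against known samples, which is where the machine learning comes in.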
Prosperity
Egalite helps people with disabilities enter the labor market and helps companies develop best practices for including them. Egalite’s founders are passionate about the potential of people with disabilities and the return companies get when they invest in that potential.
Learning
Iris.AI is an artificial intelligence system that reads scientific paper abstracts and extracts key concepts for users, presenting concepts visually and allowing users to navigate a topic across disciplines. Since its launch, Iris.AI has read 30 million research paper abstracts and more than 2,000 TED talks. The AI uses a neural net and deep learning technology to continuously improve its output.
Security
Hala Systems, Inc. is a social enterprise focused on developing technology-driven solutions to the world’s toughest humanitarian challenges. Hala is currently focused on civilian protection, accountability, and the prevention of violent extremism before, during, and after conflict. Ultimately, Hala aims to transform the nature of civilian defense during warfare, as well as to reduce casualties and trauma during post-conflict recovery, natural disasters, and other major crises.
Shelter
Billion Bricks designs and provides shelter and infrastructure solutions for the homeless. The company’s housing solutions are scalable, sustainable, and able to create opportunities for communities to emerge from poverty. Their approach empowers communities to replicate the solutions on their own, reducing dependency on support and creating ownership and pride.

Space
Tellus Labs uses satellite data to drive meaningful change on challenges like food security, water scarcity, and sustainable urban and industrial systems. The company built a planetary-scale model of all 170 million acres of US corn and soy crops to forecast yields more accurately and help smooth the market fluctuations that accompany the USDA’s monthly forecasts.
Water
Loowatt designed a toilet that uses a patented sealing technology to contain human waste within biodegradable film. The toilet is designed for linking to anaerobic digestion technology to provide a source of biogas for cooking, electricity, and other applications, creating the opportunity to offset capital costs with energy production.
Image Credit: LuminAID via YouTube


#430814 The Age of Cyborgs Has Arrived

How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there.
In a presentation titled “Biohacking and the Connected Body” at Singularity University Global Summit, Hannes Sjoblad informed the audience that we’re already living in the age of cyborgs. Sjoblad is co-founder of the Sweden-based biohacker network Bionyfiken, a chartered non-profit that unites DIY-biologists, hackers, makers, body modification artists and health and performance devotees to explore human-machine integration.
Sjoblad said the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health. Sjoblad defined biohacking as applying the hacker ethic to biological systems. Some biohackers experiment with their biology with the goal of taking the human body beyond what nature intended.
Smart insulin monitoring systems, pacemakers, bionic eyes, and cochlear implants are all examples of biohacking, according to Sjoblad. He told the audience, “We live in a time where, thanks to technology, we can make the deaf hear, the blind see, and the lame walk.” He is convinced that while biohacking could conceivably end up having Brave New World-like dystopian consequences, it can also be leveraged to improve and enhance our quality of life in multiple ways.
The field where biohacking can make the most positive impact is health. In addition to pacemakers and insulin monitors, several new technologies are being developed with the goal of improving our health and simplifying access to information about our bodies.
Ingestibles are smart pills that use wireless technology to monitor internal reactions to medications, helping doctors determine optimal dosages and tailor treatments to individual patients. Your body doesn’t absorb or process medication exactly as your neighbor’s does, so shouldn’t you each have a treatment that works best with your unique system? Colonoscopies and endoscopies could one day be replaced by miniature pill-shaped video cameras that collect and transmit images as they travel through the digestive tract.
Security is another area where biohacking could be beneficial. One example Sjoblad gave was the personalization of weapons: an invader in your house couldn’t fire your gun, because it would be matched to your fingerprint or synced with your body so that it responds only to you.
Biohacking can also simplify everyday tasks. In an impressive example of walking the walk rather than just talking the talk, Sjoblad had an NFC chip implanted in his hand. The chip contains data from everything he used to have to carry around in his pockets: credit and bank card information, key cards to enter his office building and gym, business cards, and frequent shopper loyalty cards. When he’s in line for a morning coffee or rushing to get to the office on time, he doesn’t have to root around in his pockets or bag to find the right card or key; he just waves his hand in front of a sensor and he’s good to go.
Evolved from radio frequency identification (RFID)—an old and widely deployed technology—NFC chips are powered and activated by a nearby reader chip, and small amounts of data can be transferred back and forth. No battery or network connection is necessary. Sjoblad sees his NFC implant as a personal key to the Internet of Things, a simple way for him to talk to the smart, connected devices around him.
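For the curious, here is a minimal sketch of what reading those small data payloads can look like, assuming the open-source nfcpy library and a USB NFC reader; the record handling is generic and not specific to Sjoblad’s implant.

```python
# Sketch of reading the NDEF records stored on an NFC tag or implant.
# Assumes the nfcpy library and a USB reader; illustrative only.
import nfc
import ndef  # nfcpy returns records as ndeflib objects

def dump_records(tag):
    """Print every NDEF record stored on the tag."""
    if tag.ndef is None:
        print("Tag holds no NDEF data")
        return False
    for record in tag.ndef.records:
        if isinstance(record, ndef.TextRecord):
            print("text:", record.text)
        elif isinstance(record, ndef.UriRecord):
            print("uri:", record.iri)
        else:
            print("record of type:", record.type)
    return False  # returning False lets connect() return after one read

with nfc.ContactlessFrontend("usb") as clf:
    clf.connect(rdwr={"on-connect": dump_records})
```

Access-control and payment uses add authentication on top of this exchange; the tag itself only stores a small payload, typically no more than a few kilobytes.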
Sjoblad isn’t the only person who feels a need for connection.

When British science writer Frank Swain realized he was going to go deaf, he decided to hack his hearing so he could hear Wi-Fi. Swain developed software that tunes into wireless communication fields and uses a built-in Wi-Fi sensor to pick up each router’s name, encryption mode, and distance from the device. This data is translated into an audio stream in which distant signals click or pop and strong signals sound their network ID in a looped melody. Swain hears it all through an upgraded hearing aid.
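As a rough illustration of the idea (this is not Swain’s software, and the scan results below are hard-coded stand-ins for whatever a platform-specific Wi-Fi scanner would return), the sketch maps weak signals to clicks and strong signals to tones and writes the result to a WAV file using only Python’s standard library.

```python
# Toy Wi-Fi sonification: distant networks become clicks, strong ones become tones.
# Not Swain's software; the scan data is hard-coded for illustration.
import math, struct, wave

RATE = 44100

def tone(freq, seconds, volume):
    """Generate a sine tone as a list of 16-bit samples."""
    n = int(RATE * seconds)
    return [int(volume * 32767 * math.sin(2 * math.pi * freq * i / RATE))
            for i in range(n)]

def sonify(networks, path="wifi.wav"):
    samples = []
    for ssid, dbm in networks:
        if dbm < -75:                                # distant network: short click
            samples += tone(2000, 0.02, 0.8) + [0] * 2000
        else:                                        # strong network: pitch encodes the SSID
            pitch = 300 + (hash(ssid) % 500)
            loudness = min(1.0, (dbm + 90) / 40.0)   # louder when closer
            samples += tone(pitch, 0.4, loudness) + [0] * 4000
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Stand-in for a real scan: (SSID, signal strength in dBm).
sonify([("HomeRouter", -48), ("CoffeeShop", -65), ("Neighbor5G", -82)])
```

Swain’s real setup does this continuously, streaming the audio into his hearing aid rather than writing files.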
Global datastreams can also become sensory experiences. Spanish artist Moon Ribas developed and implanted a chip in her elbow that is connected to the global monitoring system for seismographic sensors; each time there’s an earthquake, she feels it through vibrations in her arm.
You can feel connected to our planet, too: North Sense makes a “standalone artificial sensory organ” that connects to your body and vibrates whenever you’re facing north. It’s a built-in compass; you’ll never get lost again.
Biohacking applications are likely to proliferate in the coming years, some of them more useful than others. But there are serious ethical questions that can’t be ignored during development and use of this technology. To what extent is it wise to tamper with nature, and who gets to decide?
Most of us are probably OK with waiting in line an extra 10 minutes or occasionally having to pull up a maps app on our phone if it means we don’t need to implant computer chips into our forearms. If it’s frightening to think of criminals stealing our wallets, imagine them cutting out a chunk of our skin to gain instant access to, and control over, our personal data. For the average person, the physical invasiveness and the potential for something to go wrong seem to far outweigh the benefits of this technology.
But that may not always be the case. It’s worth noting that the miniaturization of technology continues at a quick pace, and the smaller things get, the less invasive (and hopefully more useful) they’ll be. Even today, there are already people benefiting from biohacking in sensible ways. If you look closely enough, you’ll spot at least a couple of cyborgs on your commute tomorrow morning.
Image Credit: Movement Control Laboratory/University of Washington – Deep Dream Generator


#430743 Teaching Machines to Understand, and ...

We humans are swamped with text. It’s not just news and other timely information: Regular people are drowning in legal documents. The problem is so bad we mostly ignore it. Every time a person uses a store’s loyalty rewards card or connects to an online service, his or her activities are governed by the equivalent of hundreds of pages of legalese. Most people pay no attention to these massive documents, often labeled “terms of service,” “user agreement,” or “privacy policy.”
These are just part of a much wider societal problem of information overload. There is so much data stored—exabytes of it, as much as has ever been spoken by people in all of human history—that it’s humanly impossible to read and interpret everything. Often, we narrow down our pool of information by choosing particular topics or issues to pay attention to. But it’s important to actually know the meaning and contents of the legal documents that govern how our data is stored and who can see it.
As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it in terms regular people can understand.
Can computers understand text?
Computers store data as 0s and 1s—data that cannot be directly understood by humans. They interpret these data as instructions for displaying text, sound, images, or videos that are meaningful to people. But can computers actually understand the language, not only presenting the words but also grasping their meaning?
One way to find out is to ask computers to summarize their knowledge in ways that people can understand and find useful. It would be best if AI systems could process text quickly enough to help people make decisions as they are needed—for example, when you’re signing up for a new online service and are asked to agree with the site’s privacy policy.
What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts. Companies could use this capability, too, to analyze contracts or other lengthy documents.
To do this sort of work, we need to combine a range of AI technologies, including machine learning algorithms that take in large amounts of data and independently identify connections among them; knowledge representation techniques to express and interpret facts and rules about the world; speech recognition systems to convert spoken language to text; and human language comprehension programs that process the text and its context to determine what the user is telling the system to do.
Examining privacy policies
A modern internet-enabled life more or less requires trusting for-profit companies with private information (like physical and email addresses, credit card numbers, and bank account details) and personal data (photos and videos, email messages, and location information).
These companies’ cloud-based systems typically keep multiple copies of users’ data as part of backup plans to prevent service outages. That means there are more potential targets—each data center must be securely protected both physically and electronically. Of course, internet companies recognize customers’ concerns and employ security teams to protect users’ data. But the specific and detailed legal obligations they undertake to do that are found in their impenetrable privacy policies. No regular human—and perhaps even no single attorney—can truly understand them.
In our study, we ask computers to summarize the terms and conditions regular users say they agree to when they click “Accept” or “Agree” buttons for online services. We downloaded the publicly available privacy policies of various internet companies, including Amazon AWS, Facebook, Google, HP, Oracle, PayPal, Salesforce, Snapchat, Twitter, and WhatsApp.
Summarizing meaning
Our software examines the text and uses information extraction techniques to identify key passages specifying legal rights, obligations, and prohibitions. It also uses linguistic analysis to determine whether each rule applies to the service provider, the user, or a third-party entity, such as advertisers and marketing companies. Then it presents that information in clear, direct, human-readable statements.
For example, our system identified one aspect of Amazon’s privacy policy as telling a user, “You can choose not to provide certain information, but then you might not be able to take advantage of many of our features.” Another aspect of that policy was described as “We may also collect technical information to help us identify your device for fraud prevention and diagnostic purposes.”
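As a rough illustration of this kind of extraction (a heuristic sketch, not the authors’ system; the modal and subject keyword lists are invented for the example), the snippet below uses spaCy to pull out sentences containing modal verbs and guesses from the grammatical subject whether each rule applies to the provider, the user, or a third party.

```python
# Toy extraction pass over privacy-policy text: find sentences with modal verbs
# (may, must, can, will) and guess whom each rule applies to.
# A heuristic stand-in for the authors' system, not their actual code.
import spacy

nlp = spacy.load("en_core_web_sm")   # small English model, installed separately

MODALS = {"may", "must", "shall", "can", "will", "might"}

def extract_rules(policy_text):
    rules = []
    for sent in nlp(policy_text).sents:
        modal = next((t for t in sent if t.tag_ == "MD" and t.lower_ in MODALS), None)
        if modal is None:
            continue
        # Guess the actor from the grammatical subject of the sentence.
        subjects = [t.lower_ for t in sent if t.dep_ in ("nsubj", "nsubjpass")]
        if any(s in ("we", "company") for s in subjects):
            actor = "service provider"
        elif any(s in ("you", "user", "users") for s in subjects):
            actor = "user"
        else:
            actor = "third party / unclear"
        rules.append((actor, modal.lower_, sent.text.strip()))
    return rules

sample = ("We may collect technical information to help us identify your device. "
          "You can choose not to provide certain information.")
for actor, modal, text in extract_rules(sample):
    print(f"[{actor}] ({modal}) {text}")
```

A real system layers far more linguistic analysis on top of this, but the basic move is the same: turn legal sentences into (actor, obligation, text) triples that can be presented plainly.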

We also found, with the help of the summarizing system, that privacy policies often include rules for third parties—companies that aren’t the service provider or the user—that people might not even know are involved in data storage and retrieval.
The largest number of rules in privacy policies—43 percent—apply to the company providing the service. Just under a quarter of the rules—24 percent—create obligations for users and customers. The rest of the rules govern behavior by third-party services or corporate partners, or could not be categorized by our system.

The next time you click the “I Agree” button, be aware that you may be agreeing to share your data with other, hidden companies that will be analyzing it.
We are continuing to improve our ability to succinctly and accurately summarize complex privacy policy documents in ways that people can understand and use to assess the risks of using a service.

This article was originally published on The Conversation. Read the original article.


#430652 The Jobs AI Will Take Over First

11th July 2017: The robotic revolution is set to cause the biggest transformation in the world’s workforce since the industrial revolution. In fact, research suggests that over 30% of jobs in Britain are under threat from breakthroughs in artificial intelligence (AI) technology.

With pioneering advances in technology, many jobs that weren’t considered ripe for automation suddenly are. RS Components has used PwC data to reveal how many jobs per sector are at risk of being taken by robots by 2030, a mere 13 years away. Did you think you were exempt from the robot revolution?

Among the sectors most exposed to automation are Transport and Storage, Manufacturing, and Wholesale and Retail, with 56%, 46%, and 44% risk of automation respectively. The PwC report identifies education as the key differentiator in the probability of losing a job to automation: those with a GCSE-level education or lower face a 46% risk, while those with undergraduate degrees or higher face a 12% risk. Jobs that are repetitive, physical, and require minimal training are the most likely to be automated.

The manufacturing industry has the third-highest likelihood of automation at 46.6%, behind Transportation and Storage (56.4%) and Water, Sewage and Waste Management (62.6%). Although it ranks third by likelihood, manufacturing has the second-largest number of jobs at risk of being taken by robots: an astonishing 1.22 million jobs are at risk in the near future. Repetitive manual labour and routine tasks can be taught to fixed machines and mimicked easily, saving employers both time and money.

The three sectors least at risk are Education, Health and Social, and Agriculture, Forestry and Fishing, with 9%, 17%, and 19% risk of automation respectively. Work in these fields is non-repetitive and relies on characteristics that cannot easily be taught or replicated with AI and robotics.

These are not the only fields where the introduction of AI will have an impact on employment prospects; Administrative and Support Services, Accommodation and Food Services, Finance and Insurance, Construction, Real Estate, Public Administration and Defence, and Arts and Entertainment are not out of the woods either.

The future is not all doom and gloom. Automation is set to boost productivity, enabling workers to focus on higher-value, more rewarding jobs and leaving repetitive, uncomplicated ones to the robots. Growth is also expected in sectors that are harder to automate, helped along by lower running costs. Wealth and spending should also be boosted as AI takes on more of the work. And there are simply some things AI cannot learn, so those jobs will remain safe.

In some sectors half of the jobs could be taken by a fully automated system. Is your job next?

The post The Jobs AI Will Take Over First appeared first on Roboticmagazine.


#430556 Forget Flying Cars, the Future Is ...

Flying car concepts have been around nearly as long as their earthbound cousins, but no one has yet made them a commercial success. MIT engineers think we’ve been coming at the problem from the wrong direction; rather than putting wings on cars, we should be helping drones to drive.
The team from the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL) added wheels to a fleet of eight mini-quadcopters and tested driving and flying them around a tiny toy town made out of cardboard and fabric.
Adding the ability to drive reduced the distance the drone could fly by 14 percent compared to a wheel-less version. But while driving was slower, it let the drone travel 150 percent farther than it could by flying. The result is a vehicle that combines the speed and mobility of flying with the energy efficiency of driving.
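To make those percentages concrete, here is a tiny worked example; the 100-meter baseline is invented, since the study reports relative figures rather than absolute ranges, and it assumes both comparisons are measured as described above.

```python
# Illustrative numbers only: the study reports percentages, not absolute ranges.
wheelless_flight_range = 100.0                              # assumed baseline, in meters
hybrid_flight_range = wheelless_flight_range * (1 - 0.14)   # wheels cut flight range by 14% -> 86 m
hybrid_driving_range = hybrid_flight_range * (1 + 1.50)     # driving reaches 150% farther than flying -> 215 m
print(hybrid_flight_range, hybrid_driving_range)
```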

CSAIL director Daniela Rus told MIT News their work suggested that when looking to create flying cars, it might make more sense to build on years of research into drones rather than trying to simply “put wings on cars.”
Historically, flying car concepts have looked like someone took apart a Cessna light aircraft and a family sedan, mixed all the parts up, and bolted them back together again. Not everyone has abandoned this approach—two of the most developed flying car designs from Terrafugia and AeroMobil are cars with folding wings that need an airstrip to take off.
But flying car concepts are looking increasingly drone-like these days, with multiple small rotors, electric propulsion, and vertical take-off abilities. Take the eHang 184 autonomous aerial vehicle being developed in China; the Kitty Hawk all-electric aircraft backed by Google founder Larry Page, which is little more than a quadcopter with a seat; the AirQuadOne designed by UK consortium Neva Aerospace; or Lilium Aviation’s Jet.
The attraction is obvious. Electric-powered drones are more compact, maneuverable, and environmentally friendly, making them suitable for urban environments.
Most of these vehicles are not quite the same as those proposed by the MIT engineers, as they’re pure flying machines. But a recent Airbus concept builds on the same principle that the future of urban mobility is vehicles that can both fly and drive. Its Pop.Up design is a two-passenger pod that can either be clipped to a set of wheels or hang under a quadcopter.
Importantly, they envisage their creation being autonomous in both flight and driving modes. And they’re not the only ones who think the future of flying cars is driverless. Uber has committed to developing a network of autonomous air taxis within a decade. This spring, Dubai announced it would launch a pilotless passenger drone service using the Ehang 184 as early as next month (July).
While integrating fully-fledged autonomous flying cars into urban environments will be far more complex, the study by Rus and her colleagues provides a good starting point for the kind of 3D route-planning and collision avoidance capabilities this would require.
The team developed multi-robot path-planning algorithms that controlled all eight drones as they flew and drove around the mock-up city, while also making sure they didn’t crash into each other and steered clear of no-fly zones.
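The team’s planner handles eight vehicles at once, but the core idea of mixed-mode route planning can be sketched as a shortest-path search over (cell, mode) states, where driving costs less energy than flying, walls can only be flown over, and no-fly zones can only be crossed on the ground. The grid, costs, and zones below are invented for illustration; this is not the CSAIL algorithm.

```python
# Toy single-vehicle, mixed-mode (drive/fly) planner on a grid, using Dijkstra.
# Grid layout, energy costs, and no-fly cells are invented; the MIT system also
# coordinates eight vehicles and handles collision avoidance between them.
import heapq

GRID_W, GRID_H = 6, 6
OBSTACLES = {(2, 1), (2, 2), (2, 3)}       # walls: can only be flown over
NO_FLY = {(4, 2), (4, 3)}                  # cells that may only be driven through
DRIVE_COST, FLY_COST, SWITCH_COST = 1.0, 3.0, 0.5

def neighbors(state):
    (x, y), mode = state
    # Option 1: switch mode in place (take off or land).
    yield ((x, y), "fly" if mode == "drive" else "drive"), SWITCH_COST
    # Option 2: move to an adjacent cell in the current mode.
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if not (0 <= nx < GRID_W and 0 <= ny < GRID_H):
            continue
        if mode == "drive" and (nx, ny) in OBSTACLES:
            continue                        # can't drive through walls
        if mode == "fly" and (nx, ny) in NO_FLY:
            continue                        # can't fly through no-fly zones
        yield ((nx, ny), mode), DRIVE_COST if mode == "drive" else FLY_COST

def plan(start, goal):
    """Return (energy, list of cells) from start to goal, starting on the ground."""
    frontier = [(0.0, (start, "drive"), [start])]
    settled = {}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state[0] == goal:
            return cost, path
        if settled.get(state, float("inf")) <= cost:
            continue
        settled[state] = cost
        for nxt, step in neighbors(state):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt[0]]))
    return None

print(plan((0, 0), (5, 5)))
```

Scaling this up to a fleet means planning the routes jointly so that no two vehicles occupy the same cell at the same time, which is the multi-vehicle coordination problem the CSAIL work addresses.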
“This work provides an algorithmic solution for large-scale, mixed-mode transportation and shows its applicability to real-world problems,” Jingjin Yu, a computer science professor at Rutgers University who was not involved in the research, told MIT News.
This vision of a driverless future for flying cars might be a bit of a disappointment for those who’d envisaged themselves one day piloting their own hover car just like George Jetson. But autonomy and Uber-like ride-hailing business models are likely to be attractive, as they offer potential solutions to three of the biggest hurdles drone-like passenger vehicles face.
Firstly, it makes the vehicles accessible to anyone by removing the need to learn how to safely pilot an aircraft. Secondly, battery life still limits most electric vehicles to flight times measured in minutes. For personal vehicles this could be frustrating, but if you’re just hopping in a driverless air taxi for a five minute trip across town it’s unlikely to become apparent to you.
Operators of the service simply need to make sure they have a big enough fleet to ensure a charged vehicle is never too far away, or they’ll need a way to swap out batteries easily, such as the one suggested by the makers of the Volocopter electric helicopter.
Finally, there has already been significant progress in developing the technology and regulations needed to integrate autonomous drones into our airspace, progress that future driverless flying cars can most likely piggyback on.
Safety requirements will inevitably be more stringent, but adding more predictable and controllable autonomous drones to the skies is likely to be more attractive to regulators than trying to license and police thousands of new amateur pilots.
Image Credit: Lilium
