Tag Archives: research

#431081 How the Intelligent Home of the Future Will Care for You

As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of the Future Will Care for You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”

Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
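Under the hood, making a device “smart” usually just means giving it a network identity and a message channel to listen on. As a minimal sketch of the idea—using MQTT, a common smart-home messaging protocol, with an invented broker hostname and topic rather than Wemo’s actual UPnP-based API—toggling a plug can be a few lines of code:

```python
# Hedged sketch: flip a hypothetical smart plug by publishing an MQTT
# message on the home network. Broker hostname and topic are invented
# for illustration; Wemo itself speaks a different, UPnP-based protocol.
import paho.mqtt.publish as publish

def set_plug(state: str) -> None:
    """Publish an ON/OFF command to a hypothetical smart plug."""
    publish.single(
        topic="home/livingroom/plug1/set",  # invented device topic
        payload=state,                      # "ON" or "OFF"
        hostname="broker.local",            # invented home MQTT broker
    )

set_plug("ON")
```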
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too—in some cases disappearing entirely in favor of ‘soft’ interfaces like voice and gesture.
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it weren’t for processors that can handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. The artificial intelligence and machine learning powering these devices hinge on that computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital sensing can replace dedicated hardware, it’s doing so, and cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models that let them understand the change around them and predict what comes next. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
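The logic behind weather-aware watering can be surprisingly simple. Here’s a hedged sketch of the general idea (not RainMachine’s actual algorithm): scale the watering time by the forecast probability of rain, skipping entirely above an assumed threshold:

```python
# Toy weather-aware watering rule; the 60% skip threshold and linear
# scaling are assumptions for illustration, not RainMachine's logic.
def watering_minutes(base_minutes: float, rain_probability: float) -> float:
    """Reduce watering as the chance of rain rises; skip if rain is likely."""
    if rain_probability >= 0.6:  # assumed skip threshold
        return 0.0
    return base_minutes * (1.0 - rain_probability)

print(watering_minutes(30, 0.25))  # ~22.5 minutes on a 25%-rain day
```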
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
Arkenberg sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it, it could tell you what to eat, how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have ingrained expectations that things will be connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. We’ll get convenience and functionality if we give up our data, but how far are we willing to go? “Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There are also the challenges of cost and reliability, of interoperability across a fragmented device landscape, and, conversely, of what Arkenberg called ‘platform lock-on,’ where you’d end up relying on a single provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5


#431006 Adoption of robotics into a ...

VTT Technical Research Centre of Finland studied the implementation of a logistics robot system at Seinäjoki Central Hospital in South Ostrobothnia. The aim is to reduce transportation costs, improve the availability of supplies, and alleviate congestion in hospital hallways by running deliveries around the clock, every day of the week. Joint planning and dialogue between the various occupational groups and stakeholders involved were necessary for a successful change process.


#430955 This Inspiring Teenager Wants to Save ...

It’s not every day you meet a high school student who’s been building functional robots since age 10. Then again, Mihir Garimella is definitely not your average teenager.
When I sat down to interview him recently at Singularity University’s Global Summit, that much was clear.
Mihir’s curiosity about robotics began at age two, when his parents brought home a pet dog—well, a robotic dog. A few years passed with this robotic companion by his side, and Mihir became fascinated with how software and hardware could bring inanimate objects to “life.”
When he was 10, Mihir built a robotic violin tuner called Robo-Mozart to help him address a teacher’s complaints about his always-out-of-tune violin. The robot analyzes the sound of the violin, determines which strings are out of tune, and then uses motors to turn the tuning pegs.
Robo-Mozart and other earlier projects helped Mihir realize he could use robotics to solve real problems. Fast-forward to age 14 and Flybot, a tiny, low-cost emergency response drone that won Mihir top honors in his age category at the 2015 Google Science Fair.

The small drone is propelled by four rotors and is designed to mimic how fruit flies can speedily see and react to surrounding threats. It’s a design idea that hit Mihir when he and his family returned home after a long vacation to discover they had left bananas on their kitchen counter. The house was filled with fruit flies.
After many failed attempts to swat the flies, Mihir started wondering how these tiny creatures with small brains and horrible vision were such masterful escape artists. He began digging through research papers on fruit flies and came to an interesting conclusion.
Since fruit flies can’t see a lot of detail, they compensate by processing visual information very fast—ten times faster than people do.
“That’s what enables them to escape so effectively,” says Mihir.
Escaping a threat for a fruit fly could mean quickly avoiding a fatal swat from a human hand. Applied to an emergency response drone, the scenario shifts—picture a drone instantaneously detecting and avoiding a falling ceiling while searching for survivors inside a collapsing building.
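In code, a fly-inspired escape reflex might reduce to watching how fast an object’s apparent size grows between frames and fleeing if it “looms” too quickly. The sketch below illustrates that principle only, not Flybot’s actual software; the threshold and inputs are assumptions:

```python
# Hedged sketch of a looming-based escape reflex (not Flybot's code).
# Object sizes would come from a fast, low-resolution vision pipeline.
def escape_heading(prev_size: float, curr_size: float,
                   threat_bearing_deg: float, dt: float,
                   loom_threshold: float = 2.0):
    """Return a heading away from the threat if it looms fast enough, else None."""
    # Relative expansion rate (fraction of size growth per second).
    expansion_rate = (curr_size - prev_size) / (prev_size * dt)
    if expansion_rate > loom_threshold:
        return (threat_bearing_deg + 180.0) % 360.0  # flee opposite the threat
    return None

print(escape_heading(10.0, 16.0, threat_bearing_deg=45.0, dt=0.1))  # -> 225.0
```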

Now, at 17, Mihir is still pushing Flybot forward. He’s developing software to enable the drone to operate autonomously and hopes it will be able to navigate environments such as a burning building, or a structure that’s been hit by an earthquake. The drone is also equipped with intelligent sensors to collect spatial data it will use to maneuver around obstacles and detect things like a trapped person or the location of a gas leak.
For everyone concerned about robots eating jobs, Flybot is a perfect example of how technology can aid existing jobs.
Flybot could substitute for a first responder entering a dangerous situation or help a firefighter make a quicker rescue by showing where victims are trapped. With its small and fast design, the drone could also presumably carry out an initial search-and-rescue sweep in just a few minutes.
Mihir is committed to commercializing the product and keeping it within a $250–$500 price range, which is a fraction of the cost of many current emergency response drones. He hopes the low cost will allow the technology to be used in developing countries.
Next month, Mihir starts his freshman year at Stanford, where he plans to keep up his research and create a company to continue work on the drone.
When I asked Mihir what fuels him, he said, “Curiosity is a great skill for inventors. It lets you find inspiration in a lot of places that you may not look. If I had started by trying to build an escape algorithm for these drones, I wouldn’t know where to start. But looking at fruit flies and getting inspired by them, it gave me a really good place to look for inspiration.”
It’s a bit mind-boggling how much Mihir has accomplished by age 17, but I suspect he’s just getting started.
Image Credit: Google Science Fair via YouTube


#430874 12 Companies That Are Making the World a ...

The Singularity University Global Summit in San Francisco this week brought brilliant minds together from all over the world to share a passion for using science and technology to solve the world’s most pressing challenges.
Solving these challenges means ensuring basic needs are met for all people. It means improving quality of life and mitigating future risks both to people and the planet.
To recognize organizations doing outstanding work in these fields, SU holds the Global Grand Challenge Awards. Three participating organizations are selected in each of 12 tracks and featured at the summit’s EXPO, and the one in each track judged to have the most potential to positively impact one billion people is named the track winner.
Here’s a list of the companies recognized this year, along with some details about the great work they’re doing.
Global Grand Challenge Awards winners at Singularity University’s Global Summit in San Francisco.
Disaster Resilience
LuminAID makes portable lanterns that provide 24 hours of light on 10 hours of solar charging. The lanterns grew out of a project to assist post-earthquake relief efforts in Haiti, where the product’s creators saw the dangerous nighttime conditions in the tent cities and realized light was a critical need. The lights have been used in more than 100 countries and after disasters including Hurricane Sandy, Typhoon Haiyan, and the earthquakes in Nepal.

Environment
BreezoMeter uses big data and machine learning to deliver accurate air quality information in real time. Users can see pollution details as localized as a single city block, and the data incorporates real-time traffic conditions. Forecasting is also available, with air pollution predictions up to four days ahead of time and historical data going back several years.
Food
Aspire Food Group believes insects are the protein of the future, and that technology has the power to bring the tradition of eating insects that exists in many countries and cultures to the rest of the world. The company uses technologies like robotics and automated data collection to farm insects that have the protein quality of meat and the environmental footprint of plants.
Energy
Rafiki Power acts as a rural utility company, building decentralized energy solutions in regions that lack basic services like running water and electricity. The company’s renewable hybrid systems are packed and standardized in recycled 20-foot shipping containers, and they’re currently powering over 700 household and business clients in rural Tanzania.

Governance
MakeSense is an international community that brings together people in 128 cities across the world to help social entrepreneurs solve challenges in areas like education, health, food, and environment. Social entrepreneurs post their projects and submit challenges to the community, then participants organize workshops to mobilize and generate innovative solutions to help the projects grow.
Health
Unima developed a fast, low-cost diagnostic and disease surveillance tool for infectious diseases. The tool allows health professionals to diagnose diseases at the point of care, in less than 15 minutes, without any lab equipment. A drop of the patient’s blood is placed on a diagnostic paper, where an antibody generates a visual reaction on contact with the biomarkers in the sample. The result is evaluated by taking a photo with a smartphone app, which uses image processing, artificial intelligence, and machine learning to interpret it.
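As a rough illustration of how a photo-based readout like this could work (this is not Unima’s actual software), one could compare the brightness of the paper’s test zone against a control zone; the region coordinates and threshold below are invented:

```python
# Hedged sketch of a colorimetric test readout. Zone coordinates and
# the decision threshold are invented; a real system would also handle
# lighting, alignment, and calibration.
from PIL import Image
import numpy as np

def reaction_detected(photo_path: str, threshold: float = 25.0) -> bool:
    img = np.asarray(Image.open(photo_path).convert("L"), dtype=float)
    test_zone = img[100:150, 100:200]     # invented test-region coordinates
    control_zone = img[200:250, 100:200]  # invented control-region coordinates
    # A test zone notably darker than the control suggests a visual reaction.
    return (control_zone.mean() - test_zone.mean()) > threshold
```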
Prosperity
Egalite helps people with disabilities enter the labor market and helps companies develop best practices for including them. Egalite’s founders are passionate about the potential of people with disabilities and the return companies get when they invest in that potential.
Learning
Iris.AI is an artificial intelligence system that reads scientific paper abstracts and extracts key concepts for users, presenting concepts visually and allowing users to navigate a topic across disciplines. Since its launch, Iris.AI has read 30 million research paper abstracts and more than 2,000 TED talks. The AI uses a neural net and deep learning technology to continuously improve its output.
Security
Hala Systems, Inc. is a social enterprise focused on developing technology-driven solutions to the world’s toughest humanitarian challenges. Hala is currently focused on civilian protection, accountability, and the prevention of violent extremism before, during, and after conflict. Ultimately, Hala aims to transform the nature of civilian defense during warfare, as well as to reduce casualties and trauma during post-conflict recovery, natural disasters, and other major crises.
Shelter
Billion Bricks designs and provides shelter and infrastructure solutions for the homeless. The company’s housing solutions are scalable, sustainable, and able to create opportunities for communities to emerge from poverty. Their approach empowers communities to replicate the solutions on their own, reducing dependency on support and creating ownership and pride.

Space
Tellus Labs uses satellite data to tackle challenges like food security, water scarcity, and sustainable urban and industrial systems, and drive meaningful change. The company built a planetary-scale model of all 170 million acres of US corn and soy crops to more accurately forecast yields and help stabilize the market fluctuations that accompany the USDA’s monthly forecasts.
Water
Loowatt designed a toilet that uses a patented sealing technology to contain human waste within biodegradable film. The toilet is designed for linking to anaerobic digestion technology to provide a source of biogas for cooking, electricity, and other applications, creating the opportunity to offset capital costs with energy production.
Image Credit: LuminAID via YouTube


#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel, we wanted cool robots; instead we got 140 characters and Flippy the burger bot. But scientists are making progress in empowering robots to see and respond to their surroundings much as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics ranging from making robots more conversational and helping them understand language ambiguities to helping them see and navigate complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. Where their approach differs is in confining the space in which the robot learns to classify objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
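As a toy illustration of subspace classification (not the paper’s exact method), one might learn a low-dimensional subspace from voxelized training objects with PCA, then label a new object by its nearest class mean in that subspace:

```python
# Toy subspace classifier: PCA projection plus nearest class mean.
# This illustrates the subspace idea only, not the paper's algorithm.
import numpy as np
from sklearn.decomposition import PCA

def fit_subspace(voxels: np.ndarray, labels: list, dim: int = 32):
    """voxels: (n_objects, n_voxels) array of flattened 3D occupancy grids."""
    pca = PCA(n_components=dim).fit(voxels)
    coords = pca.transform(voxels)
    class_means = {y: coords[np.array(labels) == y].mean(axis=0)
                   for y in set(labels)}
    return pca, class_means

def classify(pca, class_means, query_voxels: np.ndarray) -> str:
    q = pca.transform(query_voxels.reshape(1, -1))[0]
    return min(class_means, key=lambda y: np.linalg.norm(q - class_means[y]))
```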
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned to not only follow instructions but to recognize the level of abstraction. That was key to kickstart its problem-solving abilities to tackle the job in the most appropriate way.
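A hedged sketch of that core idea (not the authors’ actual model): first classify a command’s level of abstraction with a simple text classifier, then route it to the matching planner. The training pairs and planner names below are invented for illustration:

```python
# Toy abstraction-level router; training examples are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

commands = [
    ("take the chair to the blue room", "high"),
    ("go north two steps then grab the chair", "low"),
    ("move the chair into the red room", "high"),
    ("turn left, move forward, release the chair", "low"),
]
texts, levels = zip(*commands)

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, levels)

def route(command: str) -> str:
    """Send abstract goals to a task planner, concrete steps to primitives."""
    level = clf.predict([command])[0]
    return "task_planner" if level == "high" else "primitive_executor"

print(route("bring the chair to the green room"))
```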
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. Conversely, when the robot couldn’t identify the specificity of a task, it took 20 or more seconds to plan about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash
