Tag Archives: huge

#431081 How the Intelligent Home of the Future ...

As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of The Future Will Care For You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”

Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
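To make the "get it online" step concrete, here is a minimal sketch of switching a connected plug over a local network. The address, endpoint, and payload are hypothetical placeholders, not Wemo's (or any vendor's) actual API; real devices speak their own protocols, but the pattern is the same: a dumb appliance becomes controllable once it answers simple network requests.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical local address and endpoint for a connected plug.
# Real products (Wemo, etc.) use their own protocols; this only
# illustrates the pattern of controlling an appliance over the network.
PLUG_URL = "http://192.168.1.50/api/relay"

def set_plug(on: bool) -> bool:
    """Ask the plug to switch its relay on or off; return True on success."""
    resp = requests.post(PLUG_URL, json={"state": "on" if on else "off"}, timeout=2)
    return resp.ok

if __name__ == "__main__":
    if set_plug(True):
        print("Lamp switched on")
```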
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it weren’t for processors that can handle massive quantities of information, nothing resembling an Echo or Alexa would be possible. Artificial intelligence and machine learning power these devices, and they, too, hinge on computing power.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models to understand the change around them and thereby predict what’s coming. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
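As a rough sketch of the idea behind a product like RainMachine (an illustrative heuristic, not the product's actual algorithm), a watering schedule might be scaled back when the forecast already promises rain:

```python
# Illustrative heuristic: scale a baseline watering schedule by the chance
# of rain in a (hypothetical) daily forecast. Not RainMachine's actual
# logic, just the general idea of forecast-driven adjustment.

BASELINE_MINUTES = {"front_lawn": 20, "back_lawn": 25, "flower_beds": 10}

def adjust_schedule(baseline: dict, rain_probability: float, expected_rain_mm: float) -> dict:
    """Reduce watering when rain is likely; skip it entirely for a heavy forecast."""
    if expected_rain_mm >= 10:          # forecast already covers the lawn's needs
        return {zone: 0 for zone in baseline}
    factor = 1.0 - min(rain_probability, 0.8)   # never scale below 20% on probability alone
    return {zone: round(minutes * factor) for zone, minutes in baseline.items()}

print(adjust_schedule(BASELINE_MINUTES, rain_probability=0.6, expected_rain_mm=3))
# -> roughly 40% of the baseline minutes for each zone
```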
Arkenberg sees the homes of the future being equipped with digital ears (home assistants, sensors, and monitoring devices) and digital eyes (facial recognition and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day, your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it, it would tell you what to eat, how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. We’ll get convenience and functionality if we give up our data, but how far are we willing to go? “Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There are also the challenges of cost and reliability, interoperability and fragmentation among devices, and, conversely, what Arkenberg called ‘platform lock-on,’ where you’d end up relying on a single provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5

Posted in Human Robots

#431015 Finish Him! MegaBots’ Giant Robot Duel ...

It began two years ago when MegaBots co-founders Matt Oehrlein and Gui Cavalcanti donned American flags as capes and challenged Suidobashi Heavy Industries to a giant robot duel in a YouTube video that immediately went viral.
The battle proposed: MegaBots’ 15-foot tall, 1,200-pound MK2 robot vs. Suidobashi’s 9,000-pound robot, KURATAS. Oehrlein and Cavalcanti first discovered the KURATAS robot in a listing on Amazon with a million-dollar price tag.
In an equally flamboyant response video, Suidobashi CEO and founder Kogoro Kurata accepted the challenge. (Yes, he named his robot after himself.) Both parties planned to take a year to prepare their robots for combat.
In the end, it took twice that long. Nonetheless, the battle is going down this September at an undisclosed location in Japan.
Oehrlein shared more about the much-anticipated showdown during our interview at Singularity University’s Global Summit.

Two years after the initial video, MegaBots has completed the combat-capable MK3 robot, named Eagle Prime. This new 12-ton, 16-foot-tall robot is powered by a 430-horsepower Corvette engine and requires two human pilots.
It’s also the robot they recently shipped to Japan to take on KURATAS.

Building Eagle Prime has been no small feat. With arms and legs that each weigh as much as a car, assembling the robot takes forklifts, cranes, and a lot of caution. Fortress One, MegaBots’ headquarters in Hayward, California, is where the magic happens.
In terms of “weaponry,” Eagle Prime features a giant pneumatic cannon that shoots huge paint cannonballs. Oehrlein warns, “They can shatter all the windows in a car. It’s very powerful.” A logging grapple, which looks like a giant claw and exerts 3,000 pounds of steel-crushing force, has also been added to the robot.
“It’s a combination of range combat, using the paint balls to maybe blind cameras on the other robot or take out sensitive electronics, and then closing in with the claw and trying to disable their systems at close range,” Oehrlein explains.
Safety systems include a cockpit roll cage for the two pilots, five-point safety seatbelt harnesses, neck restraints, helmets, and flame retardant suits.
Co-founder Matt Oehrlein inside the cockpit of MegaBots’ Eagle Prime giant robot.
Oehrlein and Cavalcanti have also spent considerable time inside Eagle Prime practicing battlefield tactics and maneuvering the robot through obstacle courses.
Suidobashi’s robot is a bit shorter and lighter, but also a little faster, so the battle dynamics should be interesting.
You may be thinking, “Why giant dueling robots?”
MegaBots’ grand vision is a full-blown international sports league of giant fighting robots on the scale of Formula One racing. Picture a nostalgic evening sipping a beer (or three) and watching Pacific Rim- and Power Rangers-inspired robots battle—only in real life.
Eagle Prime is, in good humor, a proudly patriotic robot.
“Japan is known as a robotic powerhouse,” says Oehrlein. “I think there’s something interesting about the slightly overconfident American trying to get a foothold in the robotics space and doing it by building a bigger, louder, heavier robot, in true American fashion.”
For safety reasons, no fans will be admitted during the fight. The battle will be posted afterward on MegaBots’ YouTube channel and Facebook page.
We’ll soon find out whether this becomes another American underdog story.
In the meantime, I give my loyalty to MegaBots, and in the words of Mortal Kombat, say, “Finish him!”

Image Credit: MegaBots

Posted in Human Robots

#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots still fall well short of The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel, we wanted cool robots; instead, we got 140 characters and Flippy the burger bot. But scientists are making progress toward giving robots the ability to see and respond to their surroundings much as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. Where their approach differs is that it restricts the space in which the robot learns to classify objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
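A toy illustration of classifying in a restricted subspace, in the spirit of the approach described above (a generic PCA-based sketch with random stand-in data, not Burchfiel and Konidaris's actual method): flatten each object's 3D occupancy grid, learn a low-dimensional subspace from training examples, and label new objects by the nearest class mean in that subspace.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data: each "object" is an 8x8x8 occupancy grid, flattened to 512 features.
rng = np.random.default_rng(0)
n_train, n_classes, grid = 200, 4, 8 ** 3
X_train = rng.random((n_train, grid)) > 0.5          # stand-in for real 3D scans
y_train = rng.integers(0, n_classes, n_train)

# Learn a restricted subspace: most household shapes lie near a low-dimensional
# manifold, so a handful of components captures the variation that matters.
pca = PCA(n_components=16).fit(X_train)
Z_train = pca.transform(X_train)

# Represent each class by its mean in the subspace.
class_means = np.stack([Z_train[y_train == c].mean(axis=0) for c in range(n_classes)])

def classify(voxels: np.ndarray) -> int:
    """Project a flattened voxel grid into the subspace and return the nearest class."""
    z = pca.transform(voxels.reshape(1, -1))
    return int(np.argmin(np.linalg.norm(class_means - z, axis=1)))

print(classify(X_train[0]))
```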
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned to not only follow instructions but to recognize the level of abstraction. That was key to kickstart its problem-solving abilities to tackle the job in the most appropriate way.
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. When the system couldn’t identify the level of abstraction in a command, however, planning a task took 20 or more seconds about 50 percent of the time.
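A minimal sketch of the two-stage pipeline described above: infer how abstract a command is, then hand it to the planner that matches that level. The keyword heuristic and planner stubs are hypothetical stand-ins; the actual system learned these distinctions from the crowdsourced instructions rather than from hand-written rules.

```python
# Hypothetical two-stage pipeline: classify a command's level of abstraction,
# then dispatch it to the matching planner (both planners are stubs here).

LOW_LEVEL_CUES = ("step", "forward", "turn", "left", "right", "north", "south")

def abstraction_level(command: str) -> str:
    """Crude keyword stand-in for a learned classifier over instruction language."""
    words = command.lower().split()
    return "low" if any(cue in words for cue in LOW_LEVEL_CUES) else "high"

def plan(command: str) -> list[str]:
    """Route the command to a goal-level or step-level planner."""
    if abstraction_level(command) == "high":
        # A goal-level planner searches for the full action sequence on its own.
        return ["navigate_to(chair)", "grasp(chair)", "navigate_to(blue_room)", "release(chair)"]
    # A step-level planner executes the instruction almost verbatim.
    return [command.lower()]

print(plan("Take the chair to the blue room"))
print(plan("Turn left and move forward three steps"))
```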
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for replicating human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash

Posted in Human Robots

#428635 The 6 Ds of Tech Disruption: A Guide to ...

“The Six Ds are a chain reaction of technological progression, a road map of rapid development that always leads to enormous upheaval and opportunity.” –Peter Diamandis and Steven Kotler, Bold

We live in incredible times. News travels the globe in an instant. Music, movies, games, communication, and knowledge are ever-available on always-connected devices. From biotechnology to artificial intelligence, powerful technologies that were once only available to huge organizations and governments are becoming more accessible and…

Posted in Human Robots