Tag Archives: vision

#431000 Japan’s SoftBank Is Investing Billions ...

Remember the 1980s movie Brewster’s Millions, in which a minor league baseball pitcher (played by Richard Pryor) must spend $30 million in 30 days to inherit $300 million? Pryor goes on an epic spending spree for a bigger payoff down the road.
One of the world’s biggest public companies is making that film look like a weekend in the Hamptons. Japan’s SoftBank Group, led by its indefatigable CEO Masayoshi Son, is shooting to invest $100 billion over the next five years toward what the company calls the information revolution.
The newly created SoftBank Vision Fund, backed by a handful of key investors, appears ready to almost single-handedly bankroll the technology revolution. Announced only last year, the fund had its first major close in May with $93 billion in committed capital. The rest of the money is expected to be raised this year.
The fund is unprecedented. Data firm CB Insights notes that the SoftBank Vision Fund, if and when it hits the $100 billion mark, will equal the total amount that VC-backed companies received in all of 2016—$100.8 billion across 8,372 deals globally.
The money will go toward both billion-dollar corporations and startups, with a minimum $100 million buy-in. The focus is on core technologies like artificial intelligence, robotics and the Internet of Things.
Aside from being Japan’s richest man, Son is a futurist who has predicted the singularity, the moment in time when machines will become smarter than humans and technology will progress exponentially. Son pegs the date as 2047. He appears to be backing that bet in the biggest way possible.
Show Me the Money
Ostensibly a telecommunications company, SoftBank Group was founded in 1981 and started investing in internet technologies by the mid-1990s. Son famously lost about $70 billion of his own fortune when the dot-com bubble burst in the early 2000s. The company itself has a market cap of nearly $90 billion today, about half of where it was during the heyday of the internet boom.
The ups and downs did nothing to slake the company’s thirst for technology. It has made nine acquisitions and more than 130 investments since 1995. In 2017 alone, SoftBank has poured billions into nearly 30 companies and acquired three others. Some of those investments are being transferred to the massive SoftBank Vision Fund.
SoftBank is not going it alone with the new fund. More than half of the money—$60 billion—comes via the Middle East through Saudi Arabia’s Public Investment Fund ($45 billion) and Abu Dhabi’s Mubadala Investment Company ($15 billion). Other players at the table include Apple, Qualcomm, Sharp, Foxconn, and Oracle.
During a company conference in August, Son noted the SoftBank Vision Fund is not just about making money. “We don’t just want to be an investor just for the money game,” he said through a translator. “We want to make the information revolution. To do the information revolution, you can’t do it by yourself; you need a lot of synergy.”
Off to the Races
The fund has wasted little time creating that synergy. In July, its first official investment, not surprisingly, went to a company that specializes in artificial intelligence for robots: Brain Corp. The San Diego-based startup uses AI to turn manual machines into self-driving robots that navigate their environments autonomously. The first commercial application appears to be a really smart, commercial-grade cross between a Roomba and a Zamboni.

A second investment in July was a bit more surprising. SoftBank and its fund partners led a $200 million mega-round for Plenty, an agricultural tech company that promises to reshape farming by going vertical. Using IoT sensors and machine learning, Plenty claims its urban vertical farms can produce 350 times more vegetables than a conventional farm using 1 percent of the water.
Round Two
The spending spree continued into August.
The SoftBank Vision Fund led a $1.1 billion investment into a little-known biotechnology company called Roivant Sciences that goes dumpster diving for abandoned drugs and then creates subsidiaries around each therapy. For example, Axovant Sciences is devoted to neurology while Urovant focuses on urology. TechCrunch reports that Roivant is also creating a tech-focused subsidiary, called Datavant, that will use AI for drug discovery and other healthcare initiatives, such as designing clinical trials.
The AI angle may partly explain SoftBank’s interest in backing the biggest private placement in healthcare to date.
Also in August, the SoftBank Vision Fund led $2.5 billion in primary and secondary capital investments into Flipkart, an e-commerce company in the mold of Amazon, in what was touted as the largest single investment in a private Indian company.
The fund tacked on a $250 million investment in August in Kabbage, an Atlanta-based startup in the alternative-lending sector for small businesses, and ended the month big with a $4.4 billion investment in the co-working company WeWork.
Betterment of Humanity
And those investments only include companies that SoftBank Vision Fund has backed directly.
SoftBank itself will offer, or has already turned over, previous investments in more than a half-dozen companies to the Vision Fund. Those assets include its shares in Nvidia, which produces chips for AI applications, and its first serious foray into autonomous driving with Nauto, a California startup that uses AI and high-tech cameras to retrofit vehicles to improve driving safety. The more miles the AI logs, the more it learns about safe and unsafe driving behaviors.
Other recent acquisitions, such as Boston Dynamics, a well-known US robotics company owned briefly by Google’s parent company Alphabet, will remain under the SoftBank Group umbrella for now.

This spending spree raises the question: What is the overall vision behind SoftBank’s relentless pursuit of technology companies? A spokesperson for SoftBank told Singularity Hub that the “common thread among all of these companies is that they are creating the foundational platforms for the next stage of the information revolution.” All of the companies, he adds, share SoftBank’s criteria of working toward “the betterment of humanity.”
While the SoftBank portfolio is diverse, from agtech to fintech to biotech, it’s obvious that SoftBank is betting on technologies that will connect the world in new and amazing ways. For instance, it wrote a $1 billion check last year in support of OneWeb, which aims to launch 900 satellites to bring internet access to everyone on the planet. (That stake will also be turned over to the SoftBank Vision Fund.)
SoftBank also led a half-billion-dollar equity investment round earlier this year in Improbable, a UK company that employs cloud-based distributed computing to create virtual worlds for gaming. The next step for the company is massive simulations of the real world that support simultaneous users who can experience the same environment together. (Improbable is another candidate for the SoftBank Vision Fund.)
Even something as seemingly low-tech as WeWork, which provides a desk or office in locations around the world, points toward a more connected planet.
In the end, the singularity is about bringing humanity together through technology. No one said it would be easy—or cheap.
Stock Media provided by xackerz / Pond5

Posted in Human Robots

#430955 This Inspiring Teenager Wants to Save ...

It’s not every day you meet a high school student who’s been building functional robots since age 10. Then again, Mihir Garimella is definitely not your average teenager.
When I sat down to interview him recently at Singularity University’s Global Summit, that much was clear.
Mihir’s curiosity for robotics began at age two when his parents brought home a pet dog—well, a robotic dog. A few years passed with this robotic companion by his side, and Mihir became fascinated with how software and hardware could bring inanimate objects to “life.”
When he was 10, Mihir built a robotic violin tuner called Robo-Mozart to help him address a teacher’s complaints about his always-out-of-tune violin. The robot analyzes the sound of the violin, determines which strings are out of tune, and then uses motors to turn the tuning pegs.
Robo-Mozart and other earlier projects helped Mihir realize he could use robotics to solve real problems. Fast-forward to age 14 and Flybot, a tiny, low-cost emergency response drone that won Mihir top honors in his age category at the 2015 Google Science Fair.

The small drone is propelled by four rotors and is designed to mimic how fruit flies can speedily see and react to surrounding threats. It’s a design idea that hit Mihir when he and his family returned home after a long vacation to discover they had left bananas on their kitchen counter. The house was filled with fruit flies.
After many failed attempts to swat the flies, Mihir started wondering how these tiny creatures with small brains and horrible vision were such masterful escape artists. He began digging through research papers on fruit flies and came to an interesting conclusion.
Since fruit flies can’t see a lot of detail, they compensate by processing visual information very fast—ten times faster than people do.
“That’s what enables them to escape so effectively,” says Mihir.
Escaping a threat for a fruit fly could mean quickly avoiding a fatal swat from a human hand. Applied to a search-and-response drone, the scenario shifts—picture a drone instantaneously detecting and avoiding a falling ceiling while searching for survivors inside a collapsing building.
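To make the fly analogy concrete, here is a minimal sketch of the principle in Python: collapse each camera frame into a coarse brightness grid, trading detail for speed, and steer away from whatever region changes fastest between frames. The grid size, threshold, and function names are illustrative assumptions, not Flybot’s actual software.

```python
# A toy, fly-inspired escape reflex: low visual detail, fast reaction.
# All names and numbers here are illustrative, not Flybot's code.
import numpy as np

GRID = 8               # coarse "fly vision": an 8x8 brightness grid
LOOM_THRESHOLD = 0.15  # change needed in a cell to count as a threat

def downsample(frame, grid=GRID):
    """Average a grayscale frame (H x W, values in [0, 1]) into a grid x grid array."""
    h, w = frame.shape
    frame = frame[: h - h % grid, : w - w % grid]  # trim to a multiple of grid
    return frame.reshape(grid, frame.shape[0] // grid,
                         grid, frame.shape[1] // grid).mean(axis=(1, 3))

def escape_vector(prev, curr):
    """Return a direction away from the fastest-changing cell, or None."""
    diff = np.abs(curr - prev)
    if diff.max() < LOOM_THRESHOLD:
        return None  # nothing looming; keep searching
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    center = (GRID - 1) / 2.0
    return np.array([center - x, center - y])  # steer opposite the threat

# Usage: an obstacle suddenly fills the upper-left of the camera's view,
# so the escape vector points down and to the right.
prev = np.zeros((240, 320))
curr = prev.copy()
curr[:120, :160] = 0.8
print(escape_vector(downsample(prev), downsample(curr)))
```

Because each frame is reduced to just 64 numbers, the check can run at very high frame rates, which is exactly the trade-off the fly makes.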

Now, at 17, Mihir is still pushing Flybot forward. He’s developing software to enable the drone to operate autonomously and hopes it will be able to navigate environments such as a burning building, or a structure that’s been hit by an earthquake. The drone is also equipped with intelligent sensors to collect spatial data it will use to maneuver around obstacles and detect things like a trapped person or the location of a gas leak.
For everyone concerned about robots eating jobs, Flybot is a perfect example of how technology can aid existing jobs.
Flybot could substitute for a first responder entering a dangerous situation or help a firefighter make a quicker rescue by showing where victims are trapped. With its small and fast design, the drone could also presumably carry out an initial search-and-rescue sweep in just a few minutes.
Mihir is committed to commercializing the product and keeping it within a $250–$500 price range, which is a fraction of the cost of many current emergency response drones. He hopes the low cost will allow the technology to be used in developing countries.
Next month, Mihir starts his freshman year at Stanford, where he plans to keep up his research and create a company to continue work on the drone.
When I asked Mihir what fuels him, he said, “Curiosity is a great skill for inventors. It lets you find inspiration in a lot of places that you may not look. If I had started by trying to build an escape algorithm for these drones, I wouldn’t know where to start. But looking at fruit flies and getting inspired by them, it gave me a really good place to look for inspiration.”
It’s a bit mind-boggling how much Mihir has accomplished by age 17, but I suspect he’s just getting started.
Image Credit: Google Science Fair via YouTube

Posted in Human Robots

#430796 Kuri Robot Brings Autonomous Video to a ...

Mayfield Robotics improves its home robot Kuri, adding track wheels, structural updates, and “Kuri Vision,” an autonomous home video program.

Posted in Human Robots

#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel, we wanted cool robots; instead we got 140 characters and Flippy the burger bot. But scientists are making progress toward empowering robots to see and respond to their surroundings just like humans.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s-eye view. The algorithm made the right guess 75 percent of the time, versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. Where their approach differs is that it confines the space in which the robot learns to classify the objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
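As a rough illustration of what a subspace buys you, here is a hedged Python sketch: voxelize each 3D scan, fit a low-dimensional subspace to the training objects, and classify new scans inside that smaller space. The voxel resolution, component count, and nearest-neighbor classifier are stand-in assumptions; this shows the general idea, not Burchfiel and Konidaris’s actual algorithm.

```python
# A sketch of subspace-restricted 3D object classification. The voxel
# grid size, number of components, and classifier are illustrative
# stand-ins, not the method from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in training set: voxelized household scans as 16x16x16 occupancy
# grids flattened to 4,096-dim vectors, with one class label per object.
X_train = (rng.random((500, 16 * 16 * 16)) > 0.5).astype(float)
y_train = rng.integers(0, 10, size=500)

# "The space of all possible objects" is the full 4,096-dim voxel space,
# and almost all of it is meaningless Lego noise. Fitting a subspace keeps
# only the directions along which the training objects actually vary.
subspace = PCA(n_components=50).fit(X_train)

# Classify in the 50-dim subspace instead of the raw voxel space.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(subspace.transform(X_train), y_train)

def classify(voxel_grid):
    """Project a new (possibly partial or tipped-over) scan into the
    learned subspace and label it by its nearest training neighbors."""
    z = subspace.transform(voxel_grid.reshape(1, -1).astype(float))
    return clf.predict(z)[0]

print(classify(rng.random(16 * 16 * 16) > 0.5))
```

Projecting into the subspace also acts as a denoiser: a partial or oddly posed scan lands near the complete objects it most resembles, which is what makes obscured items easier to classify.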
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned to not only follow instructions but to recognize the level of abstraction. That was key to kickstart its problem-solving abilities to tackle the job in the most appropriate way.
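A minimal sketch of that idea: train a text classifier on labeled commands to predict how abstract an instruction is, then route it to a planner operating at that level. The example commands, labels, and two-planner split below are hypothetical simplifications for illustration, not the Brown system itself.

```python
# Toy abstraction-level inference for natural-language commands. The
# labels, training examples, and planner stubs are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Crowdsourced-style commands paired with abstraction labels:
# "high" = goal-level, "low" = step-by-step.
commands = [
    "take the chair to the blue room",
    "bring the chair into the red room",
    "go north one step",
    "move forward then turn left",
]
levels = ["high", "high", "low", "low"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(commands, levels)

def plan(command):
    """Dispatch to a planner matching the command's inferred abstraction,
    so goal-level requests aren't planned one motor action at a time."""
    level = model.predict([command])[0]
    if level == "high":
        return f"abstract planner handles goal: {command!r}"
    return f"low-level controller executes step: {command!r}"

print(plan("carry the chair to the green room"))
```

Matching the planner to the command’s granularity is what lets a robot respond to a goal-level request in about a second instead of grinding through a fine-grained search.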
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. By contrast, when the system could not identify the specificity of a task, the robot took 20 or more seconds to plan about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash

Posted in Human Robots

#430640 RE2 Robotics Receives Air Force Funding ...

PITTSBURGH, PA – June 21, 2017 – RE2 Robotics announced today that the Company was selected by the Air Force to develop a drop-in robotic system to rapidly convert a variety of traditionally manned aircraft to robotically piloted, autonomous aircraft under the Small Business Innovation Research (SBIR) program. This robotic system, named “Common Aircraft Retrofit for Novel Autonomous Control” (CARNAC), will operate the aircraft similarly to a human pilot and will not require any modifications to the aircraft.
Automation and autonomy have broad value to the Department of Defense with the potential to enhance system performance of existing platforms, reduce costs, and enable new missions and capabilities, especially with reduced human exposure to dangerous or life-threatening situations. The CARNAC project leverages existing aviation assets and advances in vehicle automation technologies to develop a cutting-edge drop-in robotic flight system.
During the program, RE2 Robotics will demonstrate system architecture feasibility, humanoid-like robotic manipulation capabilities, vision-based flight-status recognition, and cognitive architecture-based decision making.
“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”
###
About RE2 Robotics
RE2 Robotics develops mobile robotic technologies that enable robot users to remotely interact with their world from a safe distance — whether on the ground, in the air, or underwater. RE2 creates interoperable robotic manipulator arms with human-like performance, intuitive human robot interfaces, and advanced autonomy software for mobile robotics. For more information, please visit www.resquared.com or call 412.681.6382.
Media Contact: RE2 Public Relations, pr@resquared.com, 412.681.6382.

Posted in Human Robots