Tag Archives: partners

#431907 The Future of Cancer Treatment Is ...

In an interview at Singularity University’s Exponential Medicine in San Diego, Richard Wender, chief cancer control officer at the American Cancer Society, discussed how technology has changed cancer care and treatment in recent years.
Just a few years ago, microscopes were the primary tool used in cancer diagnosis, but we’ve come a long way since then.
“We still look at a microscope, we still look at what organ the cancer started in,” Wender said. “But increasingly we’re looking at the molecular signature. It’s not just the genomics, and it’s not just the genes. It’s also the cellular environment around that cancer. We’re now targeting our therapies to the mutations that are found in that particular cancer.”
Cancer treatments in the past have been largely reactionary, but they don’t need to be. Most cancer is genetic, which means treatment can be preventative. This is one reason newer cancer treatment techniques search for actionable targets in the specific genes involved before the cancer develops.

When asked how artificial intelligence and machine learning technologies are reshaping clinical trials, Wender acknowledged that how clinical trials have been run in the past won’t work moving forward.
“Our traditional ways of learning about cancer were by finding a particular cancer type and conducting a long clinical trial that took a number of years enrolling patients from around the country. That is not how we’re going to learn to treat individual patients in the future.”
Instead, Wender emphasized the need for gathering as much data as possible, and from as many individual patients as possible. This data should encompass clinical, pathological, and molecular data and should be gathered from a patient all the way through their final outcome. “Literally every person becomes a clinical trial of one,” Wender said.
For the best cancer treatment and diagnostics, Wender says the answer is to make the process collaborative by pulling in resources from organizations and companies that are both established and emerging.
It’s no surprise to hear that the best solutions come from pairing uncommon partners to innovate.
Image Credit: jovan vitanovski / Shutterstock.com

Posted in Human Robots

#431836 Do Our Brains Use Deep Learning to Make ...

The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already proven by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The blame game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like children trying to learn a skill the first time, initially the network doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweakings later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
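For the curious, here is what that loop looks like in code. Below is a minimal sketch in Python with numpy—a tiny two-layer network trained on a single made-up example—so the layer sizes, data, and learning rate are illustrative stand-ins, not anything from a production system.

```python
import numpy as np

# Minimal sketch of forward propagation and backprop on a toy
# two-layer network (all sizes and data are illustrative).
rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (784, 64))   # input -> hidden weights
W2 = rng.normal(0, 0.1, (64, 10))    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.random(784)        # stand-in for a hand-written digit image
target = np.eye(10)[4]     # one-hot label for the digit "four"

for step in range(1000):
    # Forward pass: the signal climbs the hierarchy.
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    # Error: how far off the ideal output was the guess?
    delta2 = (y - target) * y * (1 - y)
    # Backprop: blame flows down through the transposed weights.
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    # Each layer adjusts its computation toward the ideal.
    W2 -= 0.5 * np.outer(h, delta2)
    W1 -= 0.5 * np.outer(x, delta1)
```

Note the `W2.T` in the backward pass: the blame travels back through the very same weights used in the forward pass, just transposed. That symmetry turns out to matter in what follows.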
The brain does this too. How? We have no clue.
Biological No-Go
What’s clear, though, is that the deep learning solution—backprop as it stands—doesn’t work in the brain.
Backprop is a pretty needy function. It requires a very specific infrastructure to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop—essentially between pairs of neurons—they require the forward and backward connections to be symmetric. That hardly ever occurs in the brain’s synapses.
More recent algorithms adopt a slightly different strategy: they implement a separate feedback pathway that helps the neurons figure out errors locally. While that’s more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
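One concrete strategy from this family, published in the literature as “feedback alignment,” replaces the transposed forward weights with a fixed random feedback matrix, dropping the symmetry requirement entirely. Here is a minimal, self-contained sketch of that idea—it stands in for, rather than reproduces, the specific algorithms alluded to above, and the sizes are again illustrative.

```python
import numpy as np

# Sketch of feedback alignment: errors flow backward through a fixed
# random matrix B instead of the transposed forward weights W2.T, so
# the forward and backward pathways need not be symmetric.
rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.1, (784, 64))
W2 = rng.normal(0, 0.1, (64, 10))
B = rng.normal(0, 0.1, (10, 64))    # fixed feedback weights; never trained

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.5):
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    delta2 = (y - target) * y * (1 - y)
    delta1 = (delta2 @ B) * h * (1 - h)   # random feedback, not W2.T
    W2[...] -= lr * np.outer(h, delta2)
    W1[...] -= lr * np.outer(x, delta1)
```

Remarkably, such networks still learn: over training, the forward weights drift into rough alignment with the random feedback matrix, which is where the technique gets its name.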
What it does have are neurons with intricate structures, unlike the uniform “balls” that are currently applied in deep learning.
Branching Networks
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
Illustration of a multi-compartment neural network model for deep learning. Left: reconstruction of pyramidal neurons from mouse primary visual cortex. Right: simplified pyramidal neuron models. Image Credit: CIFAR
Curiously, the structure of neurons often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than having uniform neurons, they gave those in middle layers—sandwiched between the input and output—compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
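In code, a cartoon of such a unit might look like the following. To be clear, this is a loose sketch under simple assumptions (tanh activations and a hand-rolled local delta rule), not the actual equations from Richards’ study.

```python
import numpy as np

# A hidden unit with two segregated compartments: a "basal" one that
# receives feedforward input and an "apical" one that accumulates
# top-down feedback, combined only at a discrete teaching moment.
class TwoCompartmentUnit:
    def __init__(self, n_inputs, n_feedback, rng):
        self.w_basal = rng.normal(0, 0.1, n_inputs)     # bottom-up weights
        self.w_apical = rng.normal(0, 0.1, n_feedback)  # top-down weights

    def forward(self, x):
        # The basal compartment alone drives the unit's output rate.
        self.x = x
        self.rate = np.tanh(self.w_basal @ x)
        return self.rate

    def integrate_feedback(self, feedback, lr=0.1):
        # The apical compartment holds the feedback signal separately...
        apical = np.tanh(self.w_apical @ feedback)
        # ...and only now is it combined with the basal activity to
        # yield a local error that updates the feedforward weights.
        local_error = apical - self.rate
        self.w_basal += lr * local_error * (1 - self.rate**2) * self.x

rng = np.random.default_rng(2)
unit = TwoCompartmentUnit(n_inputs=4, n_feedback=3, rng=rng)
unit.forward(rng.random(4))             # feedforward drive only
unit.integrate_feedback(rng.random(3))  # teaching moment: blame assigned
```

The structural trick is visible in the code: input and feedback arrive at different compartments, and only `integrate_feedback` brings them together to compute a local error—no global backprop required.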
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also be nudging lower processing neurons toward a “better” activity pattern in real time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they could experimentally test: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com

Posted in Human Robots

#431175 Servosila introduces Mobile Robots ...

Servosila introduces a new member of its family of “Engineer” robots: a UGV called “Radio Engineer”. This new variant of the well-known backpack-transportable robot features a Software Defined Radio (SDR) payload module integrated into the robotic vehicle.

“Several of our key customers had asked us to enable Electronic Warfare (EW) or Cognitive Radio applications in our robots,” says a spokesman for the company. “By integrating a Software Defined Radio (SDR) module into our robotic platforms, we cater to both requirements. Radio spectrum analysis, radio signal detection, jamming, and radio relay are important features for EOD robots such as ours. Servosila continues to serve its customers by pushing the boundaries of what their Servosila robots can do. Our partners in the research world and academia shall also greatly benefit from the new functionality, which gives them more means of achieving their research goals.”
Photo Credit: Servosila – www.servosila.com
Coupling a programmable mobile robot with a software-defined radio creates a powerful platform for developing innovative applications that mix mobility and artificial intelligence with modern radio technologies. The new robotic radio applications include localized frequency hopping pattern analysis, OFDM waveform recognition, outdoor signal triangulation, cognitive mesh networking, automatic area search for radio emitters, passive or active mobile robotic radars, mobile base stations, mobile radio scanners, and many others.

The robot’s rotating head, with mounts for external antennae, acts as a pan-and-tilt device, enabling various scanning and tracking applications. The neck of the robotic head is equipped with a pair of highly accurate Servosila-made servos with a pointing precision of 3.0 angular minutes, which means the robot can point its antennae with unprecedented accuracy.

Researchers and academics can benefit from the platform’s support for GnuRadio, an open source software framework for developing SDR applications. An on-board Intel i7 computer capable of executing OpenCL code is internally connected to the SDR payload module, which makes it possible to run most existing GnuRadio applications directly on the robot’s on-board computer. The robot’s other sensors, such as GPS, an IMU, or a thermal vision camera, feed into sensor fusion algorithms.
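To give a flavor of what this enables, below is a minimal GnuRadio flowgraph in Python that measures instantaneous signal power, the kind of building block an automatic area search for radio emitters might start from. The blocks used are stock GnuRadio; since no driver block for Servosila’s particular SDR module is documented here, a simulated signal source stands in for the receive hardware—treat this as a sketch under that assumption.

```python
# Minimal GnuRadio flowgraph: a simulated signal source stands in for
# the robot's SDR receive chain (whose driver block is an assumption),
# feeding a power probe that could flag strong emitters during a search.
from gnuradio import gr, blocks, analog

class EmitterProbe(gr.top_block):
    def __init__(self, samp_rate=1e6):
        gr.top_block.__init__(self, "emitter_probe")
        # Stand-in for the SDR hardware source.
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 1.0)
        mag = blocks.complex_to_mag_squared()
        self.probe = blocks.probe_signal_f()
        self.connect(src, mag, self.probe)

if __name__ == "__main__":
    import time
    tb = EmitterProbe()
    tb.start()
    time.sleep(0.5)
    print("instantaneous power:", tb.probe.level())
    tb.stop()
    tb.wait()
```

In a real deployment one would presumably swap the simulated source for the module’s hardware source block (an osmosdr-style source, for example—an assumption, not Servosila’s documented API) and feed the probe’s readings into the robot’s search behavior.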

Since Servosila “Engineer” mobile robots are primarily designed for outdoor use, the SDR module is fully enclosed within the robot’s hardened body, which protects it from dust, rain, snow, and impacts with obstacles while the robot is on the move. The robot and its SDR payload module are both powered by the on-board battery, making the entire robotic radio platform independent of external power supplies.

Servosila plans to start shipping the SDR-equipped robots to international customers in October, 2017.

Web: https://www.servosila.com
YouTube: https://www.youtube.com/user/servosila/videos

About the Company
Servosila is a robotics technology company that designs, produces and markets a range of mobile robots, robotic arms, servo drives, harmonic reduction gears, robotic control systems as well as software packages that make the robots intelligent. Servosila provides consulting, training and operations support services to various customers around the world. The company markets its products and services directly or through a network of partners who provide tailored and localized services that meet specific procurement, support or operational needs.
Press Release above is by: Servosila
The post Servosila introduces Mobile Robots equipped with Software Defined Radio (SDR) payloads appeared first on Roboticmagazine.

Posted in Human Robots

#431000 Japan’s SoftBank Is Investing Billions ...

Remember the 1980s movie Brewster’s Millions, in which a minor league baseball pitcher (played by Richard Pryor) must spend $30 million in 30 days to inherit $300 million? Pryor goes on an epic spending spree for a bigger payoff down the road.
One of the world’s biggest public companies is making that film look like a weekend in the Hamptons. Japan’s SoftBank Group, led by its indefatigable CEO Masayoshi Son, is shooting to invest $100 billion over the next five years toward what the company calls the information revolution.
The newly created SoftBank Vision Fund, with a handful of key investors, appears ready to almost single-handedly hack the technology revolution. Announced only last year, the fund had its first major close in May with $93 billion in committed capital. The rest of the money is expected to be raised this year.
The fund is unprecedented. Data firm CB Insights notes that the SoftBank Vision Fund, if and when it hits the $100 billion mark, will equal the total amount that VC-backed companies received in all of 2016—$100.8 billion across 8,372 deals globally.
The money will go toward both billion-dollar corporations and startups, with a minimum $100 million buy-in. The focus is on core technologies like artificial intelligence, robotics and the Internet of Things.
Aside from being Japan’s richest man, Son is also a futurist who has predicted the singularity, the moment in time when machines will become smarter than humans and technology will progress exponentially. Son pegs the date as 2047. He appears to be hedging that bet in the biggest way possible.
Show Me the Money
Ostensibly a telecommunications company, SoftBank Group was founded in 1981 and started investing in internet technologies by the mid-1990s. Son infamously lost about $70 billion of his own fortune after the dot-com bubble burst around 2001. The company itself has a market cap of nearly $90 billion today, about half of where it was during the heydays of the internet boom.
The ups and downs did nothing to slake the company’s thirst for technology. It has made nine acquisitions and more than 130 investments since 1995. In 2017 alone, SoftBank has poured billions into nearly 30 companies and acquired three others. Some of those investments are being transferred to the massive SoftBank Vision Fund.
SoftBank is not going it alone with the new fund. More than half of the money—$60 billion—comes via the Middle East through Saudi Arabia’s Public Investment Fund ($45 billion) and Abu Dhabi’s Mubadala Investment Company ($15 billion). Other players at the table include Apple, Qualcomm, Sharp, Foxconn, and Oracle.
During a company conference in August, Son noted that the SoftBank Vision Fund is not just about making money. “We don’t just want to be an investor just for the money game,” he said through a translator. “We want to make the information revolution. To do the information revolution, you can’t do it by yourself; you need a lot of synergy.”
Off to the Races
The fund has wasted little time creating that synergy. In July, its first official investment, not surprisingly, went to a company that specializes in artificial intelligence for robots—Brain Corp. The San Diego-based startup uses AI to turn manual machines into self-driving robots that navigate their environments autonomously. The first commercial application appears to be a really smart, commercial-grade cross between a Roomba and a Zamboni.

A second investment in July was a bit more surprising. SoftBank and its fund partners led a $200 million mega-round for Plenty, an agricultural tech company that promises to reshape farming by going vertical. Using IoT sensors and machine learning, Plenty claims its urban vertical farms can produce 350 times more vegetables than a conventional farm using 1 percent of the water.
Round Two
The spending spree continued into August.
The SoftBank Vision Fund led a $1.1 billion investment into a little-known biotechnology company called Roivant Sciences that goes dumpster diving for abandoned drugs and then creates subsidiaries around each therapy. For example, Axovant Sciences is devoted to neurology while Urovant focuses on urology. TechCrunch reports that Roivant is also creating a tech-focused subsidiary, called Datavant, that will use AI for drug discovery and other healthcare initiatives, such as designing clinical trials.
The AI angle may partly explain SoftBank’s interest in backing the biggest private placement in healthcare to date.
Also in August, the SoftBank Vision Fund led a mix of $2.5 billion in primary and secondary capital investments into Flipkart, India’s largest private company, in what was touted as the largest single investment in a private Indian company. Flipkart is an e-commerce company in the mold of Amazon.
The fund tacked on a $250 million investment round in August to Kabbage, an Atlanta-based startup in the alt-lending sector for small businesses. It ended big with a $4.4 billion investment into a co-working company called WeWork.
Betterment of Humanity
And those investments only include companies that SoftBank Vision Fund has backed directly.
SoftBank itself will offer—or has already turned over—to the Vision Fund its previous investments in more than a half-dozen companies. Those assets include its shares in Nvidia, which produces chips for AI applications, and its first serious foray into autonomous driving with Nauto, a California startup that uses AI and high-tech cameras to retrofit vehicles to improve driving safety. The more miles the AI logs, the more it learns about safe and unsafe driving behaviors.
Other recent acquisitions, such as Boston Dynamics, a well-known US robotics company owned briefly by Google’s parent company Alphabet, will remain under the SoftBank Group umbrella for now.

This spending spree begs the question: What is the overall vision behind SoftBank’s relentless pursuit of technology companies? A spokesperson for SoftBank told Singularity Hub that the “common thread among all of these companies is that they are creating the foundational platforms for the next stage of the information revolution.” All of the companies, he adds, share SoftBank’s criteria of working toward “the betterment of humanity.”
While the SoftBank portfolio is diverse, from agtech to fintech to biotech, it’s obvious that SoftBank is betting on technologies that will connect the world in new and amazing ways. For instance, it wrote a $1 billion check last year in support of OneWeb, which aims to launch 900 satellites to bring internet to everyone on the planet. (The OneWeb investment will also be turned over to the SoftBank Vision Fund.)
SoftBank also led a half-billion-dollar equity investment round earlier this year in a UK company called Improbable, which employs cloud-based distributed computing to create virtual worlds for gaming. The next step for the company is massive simulations of the real world that support simultaneous users who can experience the same environment together (Improbable is another candidate for the SoftBank Vision Fund).
Even something as seemingly low-tech as WeWork, which provides a desk or office in locations around the world, points toward a more connected planet.
In the end, the singularity is about bringing humanity together through technology. No one said it would be easy—or cheap.
Stock Media provided by xackerz / Pond5

Posted in Human Robots

#430867 Amazon’s robots: Job destroyers or ...

Every day is graduation day at Amazon Robotics. Here's where the more than 100,000 orange robots that glide along the floors of various Amazon warehouses are made and taught their first steps.

Posted in Human Robots