Tag Archives: telecommunications

#434827 AI and Robotics Are Transforming ...

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief: how we can prevent disasters in the first place, and how we can get help to victims during that first golden hour, when immediate relief can save lives.

Here are the three areas of greatest impact:

AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply

Let’s dive in!

Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern applies AI to analytical disaster assessment and calculated damage estimates.

Partnering with Los Angeles, San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. By building robust models of each system, One Concern’s AI platform can then monitor the site-specific impacts not only of climate change but of each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with data on city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and flagging structures in need of reinforcement.

Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.

And as IoT-connected devices and intelligent hardware continue to boom, the coming trillion-sensor economy will only amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are just as formidable a tool for triage in humanitarian disasters.

One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”

Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.

Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.
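As a rough illustration of this kind of crowdsourced crisis mapping, the sketch below bins verified, geo-tagged reports into coarse grid cells to surface hotspots. The field names and grid size are hypothetical, chosen only for the example; they are not Ushahidi’s actual schema.

```python
from collections import Counter

def crisis_map(reports, cell_deg=0.1):
    """Bin verified, geo-tagged reports into ~0.1-degree grid cells.

    Each report is a dict with 'lat', 'lon', and 'verified' keys
    (hypothetical field names, for illustration only).
    """
    cells = Counter()
    for r in reports:
        if not r.get("verified"):
            continue  # mirror the verify-before-publishing step
        # integer cell indices avoid floating-point dictionary keys
        cell = (round(r["lat"] / cell_deg), round(r["lon"] / cell_deg))
        cells[cell] += 1
    return cells

reports = [
    {"lat": -1.28, "lon": 36.82, "verified": True},   # two nearby reports...
    {"lat": -1.29, "lon": 36.81, "verified": True},   # ...land in one cell
    {"lat": -1.30, "lon": 36.85, "verified": False},  # unverified: dropped
]
hotspots = crisis_map(reports)
```

The cells with the highest counts become the map’s hotspots; a real deployment layers verification, deduplication, and time decay on top of this.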

Hosting one of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable for crowdsourcing witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks, all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future…

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.

As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
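To make the idea concrete, here is a minimal sketch of how a message might be relayed peer to peer through such a mesh, with no central access point. The device names are hypothetical, and routing is simplified to shortest-hop search.

```python
from collections import deque

def relay_path(links, source, gateway):
    """Breadth-first search for a shortest relay path through a mesh.

    `links` maps each device to the peers within radio range; any node
    can forward traffic, so there is no hierarchy and no central hub.
    """
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == gateway:
            return path
        for peer in links.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None  # gateway unreachable

# hypothetical four-device mesh sharing one internet uplink
mesh = {
    "phone_a": {"phone_b"},
    "phone_b": {"phone_a", "phone_c"},
    "phone_c": {"phone_b", "uplink"},
    "uplink": {"phone_c"},
}
```

If any single node drops out, traffic simply reroutes through the remaining peers, which is exactly what makes mesh networks attractive when disaster knocks out fixed infrastructure.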

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which armed attacks break out across disjointed urban districts. Each cluster of eyewitnesses and at-risk civilians broadcasts an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
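The four steps above, cross-referencing, verification, urgency scoring, and size-appropriate dispatch, can be sketched as a toy pipeline. Every field name, threshold, and drone class below is a hypothetical illustration, not any deployed system.

```python
def dispatch_aid(report, sensor_readings, approve):
    """Toy aid-dispatch pipeline: corroborate, prioritize, approve, assign.

    All fields and thresholds are hypothetical, for illustration only.
    """
    # 1. cross-reference the crowdsourced report against nearby sensor data
    corroborated = any(
        s["type"] == report["need"] and s["severity"] >= 0.5
        for s in sensor_readings
    )
    if not corroborated:
        return None
    # 2. score urgency from the strongest matching sensor signal
    urgency = max(s["severity"] for s in sensor_readings
                  if s["type"] == report["need"])
    # 3. human-in-the-loop approval before any launch
    if not approve(report, urgency):
        return None
    # 4. pick a size-appropriate drone for the payload
    drone = "heavy_lift" if report["payload_kg"] > 5 else "quadcopter"
    return {"drone": drone, "dest": report["coords"], "urgency": urgency}

report = {"need": "medical", "payload_kg": 2, "coords": (29.76, -95.37)}
sensors = [{"type": "medical", "severity": 0.9}]
mission = dispatch_aid(report, sensors, approve=lambda r, u: True)
```

Note that the human approval step is kept explicit: fully automated launch decisions are precisely the part of this vision that still demands personnel sign-off.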

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Initially designed to perform a spectrum of inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity and range of motion, and on adding a gripping function with enhanced dexterity.

Cheetah III and future versions are aimed at saving lives in almost any environment.

And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which led to three meltdowns at the Fukushima Daiichi nuclear power plant, a robot successfully examined the reactor’s fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system that conducts reconnaissance and surveillance from larger transport vehicles.

As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also prevent population displacement and paralyzing damage to our natural environment before a fire has the chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long-term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.

Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.

And as shown during Hurricane Harvey in Houston, drones can provide a wealth of predictive intel on everything from future flooding to damage estimates.

Among multiple others, a team led by Dr. Robin Murphy, Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small quadcopters to military-grade unmanned planes. These flights were critical not only for monitoring levee infrastructure, but also for identifying survivors missed by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating gliders capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, Logistic Gliders’ aircraft are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether flown by a pilot or self-flown in accordance with real-time disaster-zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens up no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable for humanitarian relief.

Concluding Thoughts
Perhaps one of the most consequential applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Arcansel / Shutterstock.com

Posted in Human Robots

#433284 Tech Can Sustainably Feed Developing ...

In the next 30 years, virtually all net population growth will occur in urban regions of developing countries. At the same time, worldwide food production will become increasingly limited by the availability of land, water, and energy. These constraints will be further worsened by climate change and the expected addition of two billion people to today’s four billion now living in urban regions. Meanwhile, current urban food ecosystems in the developing world are inefficient and critically inadequate to meet the challenges of the future.

Combined, these trends could have catastrophic economic and political consequences. A new path forward for urban food ecosystems needs to be found. But what is that path?

New technologies, coupled with new business models and supportive government policies, can create more resilient urban food ecosystems in the coming decades. These tech-enabled systems can sustainably link rural, peri-urban (areas just outside cities), and urban producers and consumers, increase overall food production, and generate opportunities for new businesses and jobs (Figure 1).

Figure 1: The urban food value chain, from rural, peri-urban, and urban producers to end customers in urban and peri-urban markets.
Here’s a glimpse of the changes technology may bring to the systems feeding cities in the future.

A technology-linked urban food ecosystem would create unprecedented opportunities for small farms to reach wider markets and progress from subsistence farming to commercially producing niche cash crops and animal protein, such as poultry, fish, pork, and insects.

Meanwhile, new opportunities within cities will appear with the creation of vertical farms and other controlled-environment agricultural systems as well as production of plant-based and 3D printed foods and cultured meat. Uberized facilitation of production and distribution of food will reduce bottlenecks and provide new business opportunities and jobs. Off-the-shelf precision agriculture technology will increasingly be the new norm, from smallholders to larger producers.

As part of Agricultural Revolution 4.0, all this will be integrated into the larger collaborative economy—connected by digital platforms, the cloud, and the Internet of Things and powered by artificial intelligence. It will more efficiently and effectively use resources and people to connect the nexus of food, water, energy, nutrition, and human health. It will also aid in the development of a circular economy that is designed to be restorative and regenerative, minimizing waste and maximizing recycling and reuse to build economic, natural, and social capital.

In short, technology will enable transformation of urban food ecosystems, from expanded production in cities to more efficient and inclusive distribution and closer connections with rural farmers. Here’s a closer look at seven tech-driven trends that will help feed tomorrow’s cities.

1. Worldwide Connectivity: Information, Learning, and Markets
Connectivity, from simple cell phone SMS communication to internet-enabled smartphones and cloud services, is providing the platform for the increasingly powerful technologies driving a new agricultural revolution. Internet connections currently reach more than 4 billion people, about 55% of the global population. That number will grow fast in coming years.

These information and communications technologies connect food producers to consumers with just-in-time data, enhanced good agricultural practices, mobile money and credit, telecommunications, market information and merchandising, and greater transparency and traceability of goods and services throughout the value chain. Text messages on mobile devices have become the one-stop-shop for small farmers to place orders, gain technology information for best management practices, and access market information to increase profitability.

Hershey’s CocoaLink in Ghana, for example, uses text and voice messages with cocoa industry experts and small farm producers. Digital Green is a technology-enabled communication system in Asia and Africa to bring needed agricultural and management practices to small farmers in their own language by filming and recording successful farmers in their own communities. MFarm is a mobile app that connects Kenyan farmers with urban markets via text messaging.
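As a concrete illustration of SMS-based ordering, here is a minimal parser for a made-up message format. The `ORDER <crop> <kg> <market>` convention is invented for this sketch; it is not the protocol of MFarm or any other real service.

```python
def parse_order(sms):
    """Parse a farmer's SMS order of the form 'ORDER <crop> <kg> <market>'.

    The message format is a hypothetical illustration of SMS-based
    ordering, not any specific service's protocol.
    """
    parts = sms.strip().upper().split()
    if len(parts) != 4 or parts[0] != "ORDER":
        return None  # malformed or unrelated message
    _keyword, crop, qty, market = parts
    if not qty.isdigit():
        return None
    return {"crop": crop, "kg": int(qty), "market": market}

order = parse_order("order maize 50 nairobi")
```

The appeal of SMS is exactly this simplicity: a structured text message works on the cheapest handsets, with no smartphone or data plan required.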

2. Blockchain Technology: Greater Access to Basic Financial Services and Enhanced Food Safety
Gaining access to credit and executing financial transactions have been persistent constraints for small farm producers. Blockchain promises to help the unbanked access basic financial services.

The Gates Foundation has released an open source platform, Mojaloop, to allow software developers and banks and financial service providers to build secure digital payment platforms at scale. Mojaloop software uses more secure blockchain technology to enable urban food system players in the developing world to conduct business and trade. The free software reduces complexity and cost in building payment platforms to connect small farmers with customers, merchants, banks, and mobile money providers. Such digital financial services will allow small farm producers in the developing world to conduct business without a brick-and-mortar bank.

Blockchain is also important for meeting the traceability and transparency requirements of regulators and consumers across production, post-harvest handling, shipping, processing, and distribution. Combining blockchain with RFID technologies will also enhance food safety.
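The core mechanism behind such traceability, a hash-chained ledger in which each record commits to the one before it, can be sketched in a few lines. The record fields below are hypothetical examples of supply-chain stages.

```python
import hashlib
import json

def add_block(chain, record):
    """Append a traceability record, chaining it to the previous block's
    hash so any later tampering breaks verification."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"record": block["record"], "prev": prev_hash},
                             sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

ledger = []
add_block(ledger, {"lot": "A1", "stage": "harvest"})
add_block(ledger, {"lot": "A1", "stage": "shipping"})
```

A real blockchain adds distributed consensus on top of this, so no single party can quietly rewrite history, which is what makes the structure useful for food safety audits.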

3. Uberized Services: On-Demand Equipment, Storage, and More
Uberized services can advance development of the urban food ecosystem across the spectrum, from rural to peri-urban to urban food production and distribution. Whereas Uber and Airbnb enable sharing of rides and homes, the model can be extended in the developing world to include on-demand use of expensive equipment, such as farm machinery, or storage space.

This includes uberization of planting and harvesting equipment (Hello Tractor), transportation vehicles, refrigeration facilities for temporary storage of perishable product, and “cloud kitchens” (EasyAppetite in Nigeria, FoodCourt in Rwanda, and Swiggy and Zomato in India) that produce fresh meals to be delivered to urban customers, enabling young people with motorbikes and cell phones to become entrepreneurs or contractors delivering meals to urban customers.

Another uberized service is marketing and distributing “ugly food” or imperfect produce to reduce food waste. About a third of the world’s food goes to waste, often because of appearance; this is enough to feed two billion people. Such services supply consumers with cheaper, nutritious, tasty, healthy fruits and vegetables that would normally be discarded as culls due to imperfections in shape or size.

4. Technology for Producing Plant-Based Foods in Cities
We need to change diet choices through education and marketing and by developing tasty plant-based substitutes. This is not only critical for environmental sustainability, but also offers opportunities for new businesses and services. It turns out that current agricultural production systems for “red meat” have a far greater detrimental impact on the environment than automobiles.

There have been great advances in plant-based foods, like the Impossible Burger and Beyond Meat, that can satisfy the consumer’s experience and perception of meat. Rather than giving up the experience of eating red meat, technology is enabling marketable, attractive plant-based products that can potentially drastically reduce world per capita consumption of red meat.

5. Cellular Agriculture, Lab-Grown Meat, and 3D Printed Food
Lab-grown meat, literally meat grown from cultured cells, may radically change where and how protein and food is produced, including the cities where it is consumed. There is a wide range of innovative alternatives to traditional meats that can supplement the need for livestock, farms, and butchers. The history of innovation is about getting rid of the bottleneck in the system, and with meat, the bottleneck is the animal. Finless Foods is a new company trying to replicate fish fillets, for example, while Memphis Meats is working on beef and poultry.

3D printing, or additive manufacturing, is a “general purpose technology” used for making plastic toys, human tissues, aircraft parts, and buildings. 3D printing can also be used to convert alternative ingredients such as proteins from algae, beet leaves, or insects into tasty and healthy products that can be produced by small, inexpensive printers in home kitchens. The food can be customized for individual health needs as well as preferences. 3D printing can also contribute to the food ecosystem by making possible on-demand replacement parts—which are badly needed in the developing world for tractors, pumps, and other equipment. Catapult Design 3D prints tractor replacement parts as well as corn shellers, cart designs, prosthetic limbs, and rolling water barrels for the Indian market.

6. Alt Farming: Vertical Farms to Produce Food in Urban Centers
Urban food ecosystem production systems will rely not only on field-grown crops, but also on production of food within cities. There are a host of new, alternative production systems using “controlled environmental agriculture.” These include low-cost, protected poly hoop houses, greenhouses, roof-top and sack/container gardens, and vertical farming in buildings using artificial lighting. Vertical farms enable year-round production of selected crops, regardless of weather—which will be increasingly important in response to climate change—and without concern for deteriorating soil conditions that affect crop quality and productivity. AeroFarms claims 390 times more productivity per square foot than normal field production.

7. Biotechnology and Nanotechnology for Sustainable Intensification of Agriculture
CRISPR is a promising gene editing technology that can be used to enhance crop productivity while avoiding societal concerns about GMOs. CRISPR can accelerate traditional breeding and selection programs for developing new climate and disease-resistant, higher-yielding, nutritious crops and animals.

Plant-derived coating materials, developed with nanotechnology, can decrease waste, extend shelf-life and transportability of fruits and vegetables, and significantly reduce post-harvest crop loss in developing countries that lack adequate refrigeration. Nanotechnology is also used in polymers to coat seeds to increase their shelf-life and increase their germination success and production for niche, high-value crops.

Putting It All Together
The next generation “urban food industry” will be part of the larger collaborative economy that is connected by digital platforms, the cloud, and the Internet of Things. A tech-enabled urban food ecosystem integrated with new business models and smart agricultural policies offers the opportunity for sustainable intensification (doing more with less) of agriculture to feed a rapidly growing global urban population—while also creating viable economic opportunities for rural and peri-urban as well as urban producers and value-chain players.

Image Credit: Akarawut / Shutterstock.com

Posted in Human Robots

#431000 Japan’s SoftBank Is Investing Billions ...

Remember the 1980s movie Brewster’s Millions, in which a minor league baseball pitcher (played by Richard Pryor) must spend $30 million in 30 days to inherit $300 million? Pryor goes on an epic spending spree for a bigger payoff down the road.
One of the world’s biggest public companies is making that film look like a weekend in the Hamptons. Japan’s SoftBank Group, led by its indefatigable CEO Masayoshi Son, is shooting to invest $100 billion over the next five years toward what the company calls the information revolution.
The newly-created SoftBank Vision Fund, with a handful of key investors, appears ready to almost single-handedly hack the technology revolution. Announced only last year, the fund had its first major close in May with $93 billion in committed capital. The rest of the money is expected to be raised this year.
The fund is unprecedented. Data firm CB Insights notes that the SoftBank Vision Fund, if and when it hits the $100 billion mark, will equal the total amount that VC-backed companies received in all of 2016—$100.8 billion across 8,372 deals globally.
The money will go toward both billion-dollar corporations and startups, with a minimum $100 million buy-in. The focus is on core technologies like artificial intelligence, robotics and the Internet of Things.
Aside from being Japan’s richest man, Son is also a futurist who has predicted the singularity, the moment in time when machines will become smarter than humans and technology will progress exponentially. Son pegs the date as 2047. He appears to be hedging that bet in the biggest way possible.
Show Me the Money
Ostensibly a telecommunications company, SoftBank Group was founded in 1981 and started investing in internet technologies by the mid-1990s. Son infamously lost about $70 billion of his own fortune after the dot-com bubble burst around 2001. The company itself has a market cap of nearly $90 billion today, about half of where it was during the heydays of the internet boom.
The ups and downs did nothing to slake the company’s thirst for technology. It has made nine acquisitions and more than 130 investments since 1995. In 2017 alone, SoftBank has poured billions into nearly 30 companies and acquired three others. Some of those investments are being transferred to the massive SoftBank Vision Fund.
SoftBank is not going it alone with the new fund. More than half of the money—$60 billion—comes via the Middle East through Saudi Arabia’s Public Investment Fund ($45 billion) and Abu Dhabi’s Mubadala Investment Company ($15 billion). Other players at the table include Apple, Qualcomm, Sharp, Foxconn, and Oracle.
During a company conference in August, Son noted that the SoftBank Vision Fund is not just about making money. “We don’t just want to be an investor just for the money game,” he said through a translator. “We want to make the information revolution. To do the information revolution, you can’t do it by yourself; you need a lot of synergy.”
Off to the Races
The fund has wasted little time creating that synergy. In July, its first official investment, not surprisingly, went to a company that specializes in artificial intelligence for robots: Brain Corp. The San Diego-based startup uses AI to turn manual machines into self-driving robots that navigate their environments autonomously. The first commercial application appears to be a commercial-grade cleaning machine that is a cross between a Roomba and a Zamboni.

A second investment in July was a bit more surprising. SoftBank and its fund partners led a $200 million mega-round for Plenty, an agricultural tech company that promises to reshape farming by going vertical. Using IoT sensors and machine learning, Plenty claims its urban vertical farms can produce 350 times more vegetables than a conventional farm using 1 percent of the water.
Round Two
The spending spree continued into August.
The SoftBank Vision Fund led a $1.1 billion investment into a little-known biotechnology company called Roivant Sciences that goes dumpster diving for abandoned drugs and then creates subsidiaries around each therapy. For example, Axovant Sciences is devoted to neurology while Urovant focuses on urology. TechCrunch reports that Roivant is also creating a tech-focused subsidiary, called Datavant, that will use AI for drug discovery and other healthcare initiatives, such as designing clinical trials.
The AI angle may partly explain SoftBank’s interest in backing the biggest private placement in healthcare to date.
Also in August, the SoftBank Vision Fund led a mix of $2.5 billion in primary and secondary capital investments into Flipkart, an e-commerce company in the mold of Amazon, in what was touted as the largest single investment in a private Indian company.
The fund tacked on a $250 million investment round in August to Kabbage, an Atlanta-based startup in the alt-lending sector for small businesses. It ended big with a $4.4 billion investment into a co-working company called WeWork.
Betterment of Humanity
And those investments only include companies that SoftBank Vision Fund has backed directly.
SoftBank the company will offer—or has already turned over—previous investments to the Vision Fund in more than a half-dozen companies. Those assets include its shares in Nvidia, which produces chips for AI applications, and its first serious foray into autonomous driving with Nauto, a California startup that uses AI and high-tech cameras to retrofit vehicles to improve driving safety. The more miles the AI logs, the more it learns about safe and unsafe driving behaviors.
Other recent acquisitions, such as Boston Dynamics, a well-known US robotics company owned briefly by Google’s parent company Alphabet, will remain under the SoftBank Group umbrella for now.

This spending spree raises the question: What is the overall vision behind SoftBank’s relentless pursuit of technology companies? A spokesperson for SoftBank told Singularity Hub that the “common thread among all of these companies is that they are creating the foundational platforms for the next stage of the information revolution.” All of the companies, he adds, share SoftBank’s criteria of working toward “the betterment of humanity.”
While the SoftBank portfolio is diverse, from agtech to fintech to biotech, it’s obvious that SoftBank is betting on technologies that will connect the world in new and amazing ways. For instance, it wrote a $1 billion check last year in support of OneWeb, which aims to launch 900 satellites to bring internet to everyone on the planet. (It will also be turned over to the SoftBank Vision Fund.)
SoftBank also led a half-billion-dollar equity investment round earlier this year in a UK company called Improbable, which employs cloud-based distributed computing to create virtual worlds for gaming. The next step for the company is massive simulations of the real world that support simultaneous users who can experience the same environment together. (It is another candidate for the SoftBank Vision Fund.)
Even something as seemingly low-tech as WeWork, which provides a desk or office in locations around the world, points toward a more connected planet.
In the end, the singularity is about bringing humanity together through technology. No one said it would be easy—or cheap.
Stock Media provided by xackerz / Pond5

Posted in Human Robots

#430579 What These Lifelike Androids Can Teach ...

For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, change the concept of being human if a human-made creation starts telling you what kind of boys ‘she’ likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.

ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another closely linked concept is the connection between body and the feeling of self. The Ishiguro avatar was once giving a presentation in Austria. Its creator recalls how he felt distinctly like he was in Austria, even feeling the sensation of touch on his own body when people laid their hands on the android. If he was distracted, he felt almost ‘sucked’ back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both a philosophical and practical nature. Then add the hypotheticals: why send a human onto the hostile surface of Mars if you could send a remote-controlled android capable of relaying everything it sees, hears, and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently split roughly in two. One is the chatbot approach that companies like Amazon, Microsoft, Google, and, recently, Apple employ using stationary objects like speakers. Androids like ERICA represent another approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to how Japanese as a language—and market—is somewhat limited. This has a direct impact on the viability and practicality of ‘pure’ voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots, and have a different cultural and religious view of objects having a ‘soul’. However, it may also mean Japanese companies and android scientists are a lap ahead of their western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories, ATR; all rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by the ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.
