Tag Archives: uk

#431592 Reactive Content Will Get to Know You ...

The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their listeners. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script for Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.

For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally charged content.
Future Lighthouse informs me that, for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye-tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
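As a rough illustration of the “arousal parameter” idea, here is a minimal Python sketch. It is not Future Lighthouse’s actual system: the heart-rate function, baseline, and numbers are invented stand-ins for a real biometric feed, and the mapping simply caps scare intensity at the level of fear the viewer chose up front.

```python
# Toy "arousal parameter" loop: simulated heart rate modulates scare intensity,
# capped by the level of fear the viewer opted into. Illustrative only.
import random

RESTING_BPM = 65          # assumed baseline heart rate
USER_FEAR_CEILING = 0.7   # "fear" level the viewer chose (0.0 - 1.0)

def read_heart_rate():
    """Placeholder for a biometric sensor; returns beats per minute."""
    return RESTING_BPM + random.uniform(-5, 30)

def modulate_intensity(bpm, ceiling):
    """Map elevated heart rate to a scare intensity, never exceeding the ceiling."""
    arousal = max(0.0, (bpm - RESTING_BPM) / 60.0)      # rough 0-1 arousal estimate
    # Back off when the viewer is already aroused, push harder when they are calm.
    return min(ceiling, ceiling * (1.0 - arousal) + 0.1)

for _ in range(5):
    bpm = read_heart_rate()
    level = modulate_intensity(bpm, USER_FEAR_CEILING)
    print(f"heart rate {bpm:5.1f} bpm -> scare intensity {level:.2f}")
```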
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula to different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines, and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late ’70s, which was a great success in its literary form. Filmic takes on the theme, however, have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene-selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what happens next. Every time the audience is given the option to take a new path (“press this button,” “vote on X, Y, Z”), the narrative—and immersion within that narrative—is temporarily halted, and it takes the mind a while to settle back into the story.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology showed that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the user’s heartbeat. The relationship between bio-digital synchronicity, immersion, and emotional engagement will surely have revolutionary storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com


#431377 The Farms of the Future Will Be ...

Swarms of drones buzz overhead, while robotic vehicles crawl across the landscape. Orbiting satellites snap high-resolution images of the scene far below. Not one human being can be seen in the pre-dawn glow spreading across the land.
This isn’t some post-apocalyptic vision of the future à la The Terminator. This is a snapshot of the farm of the future. Every phase of the operation—from seed to harvest—may someday be automated, without the need to ever get one’s fingernails dirty.
In fact, it’s science fiction already being engineered into reality. Today, robots empowered with artificial intelligence can zap weeds with preternatural precision, while autonomous tractors move with tireless efficiency across the farmland. Satellites can assess crop health from outer space, providing gobs of data to help produce the sort of business intelligence once accessible only to Fortune 500 companies.
“Precision agriculture is on the brink of a new phase of development involving smart machines that can operate by themselves, which will allow production agriculture to become significantly more efficient. Precision agriculture is becoming robotic agriculture,” said professor Simon Blackmore last year during a conference in Asia on the latest developments in robotic agriculture. Blackmore is head of engineering at Harper Adams University and head of the National Centre for Precision Farming in the UK.
It’s Blackmore’s university that recently showcased what may someday be possible. The project, dubbed Hands Free Hectare and led by researchers from Harper Adams and private industry, farmed one hectare (about 2.5 acres) of spring barley without one person ever setting foot in the field.
The team repurposed, rewired, and roboticized farm equipment ranging from a Japanese tractor to a 25-year-old combine. Drones served as scouts to survey the operation and collect samples to help the team monitor the progress of the barley. At the end of the season, the robo farmers harvested about 4.5 tons of barley at a price tag of £200,000.

“This project aimed to prove that there’s no technological reason why a field can’t be farmed without humans working the land directly now, and we’ve done that,” said Martin Abell, mechatronics researcher for Precision Decisions, which partnered with Harper Adams, in a press release.
I, Robot Farmer
The Harper Adams experiment is the latest example of how machines are disrupting the agricultural industry. Around the same time that the Hands Free Hectare combine was harvesting barley, Deere & Company announced it would acquire a startup called Blue River Technology for a reported $305 million.
Blue River has developed a “see-and-spray” system that combines computer vision and artificial intelligence to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it can eliminate 90 percent of the chemicals used in conventional agriculture.
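To make the see-and-spray logic concrete, here is a hedged Python sketch of the decision step only. The detections are canned stand-ins for what a trained computer-vision model would output, and the confidence threshold is an assumption, not Blue River’s figure.

```python
# Toy sketch of see-and-spray decision logic (not Blue River's actual code).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # "crop" or "weed", from a hypothetical vision model
    confidence: float  # classifier confidence, 0-1

def spray_decision(det: Detection, threshold: float = 0.9) -> str:
    """Only actuate a nozzle when the classifier is confident; otherwise skip."""
    if det.confidence < threshold:
        return "skip"                  # uncertain: don't waste chemicals
    return "fertilize" if det.label == "crop" else "spray_herbicide"

# Canned detections standing in for live camera frames:
for det in [Detection("crop", 0.97), Detection("weed", 0.95), Detection("weed", 0.60)]:
    print(det, "->", spray_decision(det))
```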
It’s not just farmland that’s getting a helping hand from robots. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards.
“Traditional robots were designed to perform very specific tasks over and over again. But the robots that will be used in food and agricultural applications will have to be much more flexible than what we’ve seen in automotive manufacturing plants in order to deal with natural variation in food products or the outdoor environment,” Dan Harburg, an associate at venture capital firm Anterra Capital who previously worked at a Massachusetts-based startup making a robotic arm capable of grabbing fruit, told AgFunder News.
“This means ag-focused robotics startups have to design systems from the ground up, which can take time and money, and their robots have to be able to complete multiple tasks to avoid sitting on the shelf for a significant portion of the year,” he noted.
Eyes in the Sky
It will take more than an army of robotic tractors to grow a successful crop. The farm of the future will rely on drones, satellites, and other airborne instruments to provide data about its crops on the ground.
Companies like Descartes Labs, for instance, employ machine learning to analyze satellite imagery to forecast soy and corn yields. The Los Alamos, New Mexico startup collects five terabytes of data every day from multiple satellite constellations, including those operated by NASA and the European Space Agency. By combining that imagery with weather readings and other real-time inputs, Descartes Labs can predict cornfield yields with 99 percent accuracy. Its AI platform can even assess crop health from infrared readings.
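As a toy illustration of the underlying approach, the sketch below fits a simple least-squares model on invented per-field features (a greenness index and seasonal rainfall) against made-up historical yields. Descartes Labs’ actual pipeline is far larger and more sophisticated; this only conveys the shape of the idea.

```python
# Minimal yield-forecasting sketch: regress invented satellite-derived features
# against invented historical yields, then predict for an unseen field.
import numpy as np

# toy training data: [mean greenness index, season rainfall in mm] -> bushels/acre
X = np.array([[0.62, 480], [0.55, 300], [0.71, 520], [0.48, 260], [0.66, 450]])
y = np.array([175.0, 140.0, 195.0, 120.0, 180.0])

# ordinary least squares with an intercept column
A = np.hstack([X, np.ones((len(X), 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

new_field = np.array([0.60, 400, 1.0])   # features for an unseen field, plus intercept
print("predicted yield (bushels/acre):", new_field @ coeffs)
```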
The US agency DARPA recently granted Descartes Labs $1.5 million to monitor and analyze wheat yields in the Middle East and Africa. The idea is that accurate forecasts may help identify regions at risk of crop failure, which could lead to famine and political unrest. Another company called TellusLabs out of Somerville, Massachusetts also employs machine learning algorithms to predict corn and soy yields with similar accuracy from satellite imagery.
Farmers don’t have to reach orbit to get insights on their cropland. A startup in Oakland, Ceres Imaging, produces high-resolution imagery from multispectral cameras flown over fields aboard small planes. The snapshots capture the landscape at different wavelengths, revealing problems like water stress and providing estimates of chlorophyll and nitrogen levels. The geo-tagged images mean farmers can easily locate areas that need attention.
Growing From the Inside
Even the best intelligence—whether from drones, satellites, or machine learning algorithms—will be challenged to predict the unpredictable issues posed by climate change. That’s one reason more and more companies are betting the farm on what’s called controlled environment agriculture. Today, that doesn’t just mean fancy greenhouses, but everything from warehouse-sized, automated vertical farms to grow rooms run by robots, located not in the emptiness of Kansas or Nebraska but smack dab in the middle of the main streets of America.
Proponents of these new concepts argue these high-tech indoor farms can produce much higher yields while drastically reducing water usage and synthetic inputs like fertilizer and herbicides.
Iron Ox, out of San Francisco, is developing one-acre urban greenhouses that will be operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops.
Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce. Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.
“What I can envision is locating a larger scale indoor farm in the economically disadvantaged food desert, in order to stimulate a broader economic impact that could create jobs and generate income for that area,” said Dr. Gary Stutte, an expert in space agriculture and controlled environment agriculture, in an interview with AgFunder News. “The indoor agriculture model is adaptable to becoming an engine for economic growth and food security in both rural and urban food deserts.”
Still, the model is not without its own challenges and criticisms. Most of what these farms can produce falls into the “leafy greens” category and often comes with a premium price, which seems antithetical to the proposed mission of creating oases in the food deserts of cities. And while water usage may be minimized, the electricity required to power the operation, especially the LEDs (which played a huge part in revolutionizing indoor agriculture), is not cheap.
Still, all of these advances, from robo farmers to automated greenhouses, may need to be part of a future where nearly 10 billion people will inhabit the planet by 2050. An oft-quoted statistic from the Food and Agriculture Organization of the United Nations says the world must boost food production by 70 percent to meet the needs of the population. Technology may not save the world, but it will help feed it.
Image Credit: Valentin Valkov / Shutterstock.com


#431189 Researchers Develop New Tech to Predict ...

It is one of the top 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose Alzheimer’s disease in its earliest stages, while some of the latest research says technologies like artificial intelligence can detect dementia years before the first symptoms occur.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says professor Francis L. Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
Blood Diamond
Martin is the principal investigator on a project that has developed a technique to analyze blood samples to diagnose Alzheimer’s disease and distinguish it from other forms of dementia.
The researchers used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
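The validation step Martin describes, scoring predictions on unknown samples for sensitivity and specificity, can be illustrated with a small Python sketch. The labels below are invented; the real model operates on infrared fingerprint spectra, not toy lists.

```python
# Score a classifier's predictions on held-out samples for sensitivity (true
# positive rate) and specificity (true negative rate). Labels are made up.

def sensitivity_specificity(y_true, y_pred, positive="AD"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = ["AD", "AD", "healthy", "AD", "healthy", "healthy", "AD", "healthy"]
y_pred = ["AD", "AD", "healthy", "healthy", "healthy", "AD", "AD", "healthy"]

sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```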
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Crystal Ball
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia using a single amyloid PET scan of the brain of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
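As a hedged illustration of what an accuracy figure like that means in practice, the sketch below scores a simple threshold rule on an invented PET-derived feature against 24-month outcomes. The McGill model learns whole-brain amyloid patterns rather than a single number, and both the data and the cutoff here are hypothetical.

```python
# Toy evaluation: how often does a threshold on an invented amyloid-burden
# feature predict progression to dementia within 24 months? Data is fictional.

patients = [
    # (global amyloid burden, progressed to dementia within 24 months?)
    (1.45, True), (1.10, False), (1.52, True), (1.20, False),
    (1.38, True), (1.05, False), (1.40, False), (1.60, True),
]

THRESHOLD = 1.35   # hypothetical cutoff on the toy feature

correct = sum((burden > THRESHOLD) == progressed for burden, progressed in patients)
print(f"accuracy: {correct / len(patients):.1%}")
```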
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
One of the key goals of the project is to improve research in Alzheimer’s disease by ensuring that the patients with the highest probability of developing dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working toward improving science’s predictive powers when it comes to detecting dementia early. One startup called Darmiyan, out of San Francisco, claims its proprietary software can pick up signals of Alzheimer’s disease up to 15 years before onset.
Darmiyan didn’t respond to a request for comment for this article. VentureBeat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims to have a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to VentureBeat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told VentureBeat.
Conclusions
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 have been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated to be $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If interested in seeing this medical research progress, you can help by signing up on the Brain Health Registry to improve the quality of clinical trials.
Image Credit: rudall30 / Shutterstock.com


#431000 Japan’s SoftBank Is Investing Billions ...

Remember the 1980s movie Brewster’s Millions, in which a minor league baseball pitcher (played by Richard Pryor) must spend $30 million in 30 days to inherit $300 million? Pryor goes on an epic spending spree for a bigger payoff down the road.
One of the world’s biggest public companies is making that film look like a weekend in the Hamptons. Japan’s SoftBank Group, led by its indefatigable CEO Masayoshi Son, is shooting to invest $100 billion over the next five years toward what the company calls the information revolution.
The newly created SoftBank Vision Fund, with a handful of key investors, appears ready to almost single-handedly hack the technology revolution. Announced only last year, the fund had its first major close in May with $93 billion in committed capital. The rest of the money is expected to be raised this year.
The fund is unprecedented. Data firm CB Insights notes that the SoftBank Vision Fund, if and when it hits the $100 billion mark, will equal the total amount that VC-backed companies received in all of 2016—$100.8 billion across 8,372 deals globally.
The money will go toward both billion-dollar corporations and startups, with a minimum $100 million buy-in. The focus is on core technologies like artificial intelligence, robotics and the Internet of Things.
Aside from being Japan’s richest man, Son is also a futurist who has predicted the singularity, the moment in time when machines will become smarter than humans and technology will progress exponentially. Son pegs the date as 2047. He appears to be hedging that bet in the biggest way possible.
Show Me the Money
Ostensibly a telecommunications company, SoftBank Group was founded in 1981 and started investing in internet technologies by the mid-1990s. Son infamously lost about $70 billion of his own fortune after the dot-com bubble burst around 2001. The company itself has a market cap of nearly $90 billion today, about half of where it was during the heydays of the internet boom.
The ups and downs did nothing to slake the company’s thirst for technology. It has made nine acquisitions and more than 130 investments since 1995. In 2017 alone, SoftBank has poured billions into nearly 30 companies and acquired three others. Some of those investments are being transferred to the massive SoftBank Vision Fund.
SoftBank is not going it alone with the new fund. More than half of the money—$60 billion—comes via the Middle East through Saudi Arabia’s Public Investment Fund ($45 billion) and Abu Dhabi’s Mubadala Investment Company ($15 billion). Other players at the table include Apple, Qualcomm, Sharp, Foxconn, and Oracle.
During a company conference in August, Son notes the SoftBank Vision Fund is not just about making money. “We don’t just want to be an investor just for the money game,” he says through a translator. “We want to make the information revolution. To do the information revolution, you can’t do it by yourself; you need a lot of synergy.”
Off to the Races
The fund has wasted little time creating that synergy. In July, its first official investment, not surprisingly, went to a company that specializes in artificial intelligence for robots—Brain Corp. The San Diego-based startup uses AI to turn manual machines into self-driving robots that navigate their environments autonomously. The first commercial application appears to be a smart, commercial-grade cross between a Roomba and a Zamboni.

A second investment in July was a bit more surprising. SoftBank and its fund partners led a $200 million mega-round for Plenty, an agricultural tech company that promises to reshape farming by going vertical. Using IoT sensors and machine learning, Plenty claims its urban vertical farms can produce 350 times more vegetables than a conventional farm using 1 percent of the water.
Round Two
The spending spree continued into August.
The SoftBank Vision Fund led a $1.1 billion investment into a little-known biotechnology company called Roivant Sciences that goes dumpster diving for abandoned drugs and then creates subsidiaries around each therapy. For example, Axovant Sciences is devoted to neurology while Urovant focuses on urology. TechCrunch reports that Roivant is also creating a tech-focused subsidiary, called Datavant, that will use AI for drug discovery and other healthcare initiatives, such as designing clinical trials.
The AI angle may partly explain SoftBank’s interest in backing the biggest private placement in healthcare to date.
Also in August, the SoftBank Vision Fund led a mix of $2.5 billion in primary and secondary capital investments into Flipkart, an e-commerce company in the mold of Amazon, in what was touted as the largest single investment in a private Indian company.
The fund tacked on a $250 million investment round in August to Kabbage, an Atlanta-based startup in the alt-lending sector for small businesses. It ended big with a $4.4 billion investment into a co-working company called WeWork.
Betterment of Humanity
And those investments only include companies that SoftBank Vision Fund has backed directly.
SoftBank the company will offer—or has already turned over—previous investments to the Vision Fund in more than a half-dozen companies. Those assets include its shares in Nvidia, which produces chips for AI applications, and its first serious foray into autonomous driving with Nauto, a California startup that uses AI and high-tech cameras to retrofit vehicles to improve driving safety. The more miles the AI logs, the more it learns about safe and unsafe driving behaviors.
Other recent acquisitions, such as Boston Dynamics, a well-known US robotics company owned briefly by Google’s parent company Alphabet, will remain under the SoftBank Group umbrella for now.

This spending spree raises the question: What is the overall vision behind SoftBank’s relentless pursuit of technology companies? A spokesperson for SoftBank told Singularity Hub that the “common thread among all of these companies is that they are creating the foundational platforms for the next stage of the information revolution.” All of the companies, he adds, share SoftBank’s criteria of working toward “the betterment of humanity.”
While the SoftBank portfolio is diverse, from agtech to fintech to biotech, it’s obvious that SoftBank is betting on technologies that will connect the world in new and amazing ways. For instance, it wrote a $1 billion check last year in support of OneWeb, which aims to launch 900 satellites to bring internet to everyone on the planet. (That investment will also be turned over to the SoftBank Vision Fund.)
SoftBank also led a half-billion-dollar equity investment round earlier this year in a UK company called Improbable, which employs cloud-based distributed computing to create virtual worlds for gaming. The next step for the company is massive simulations of the real world that support simultaneous users who can experience the same environment together. (Improbable is another candidate for the SoftBank Vision Fund.)
Even something as seemingly low-tech as WeWork, which provides a desk or office in locations around the world, points toward a more connected planet.
In the end, the singularity is about bringing humanity together through technology. No one said it would be easy—or cheap.
Stock Media provided by xackerz / Pond5


#430556 Forget Flying Cars, the Future Is ...

Flying car concepts have been around nearly as long as their earthbound cousins, but no one has yet made them a commercial success. MIT engineers think we’ve been coming at the problem from the wrong direction; rather than putting wings on cars, we should be helping drones to drive.
The team from the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL) added wheels to a fleet of eight mini-quadcopters and tested driving and flying them around a tiny toy town made out of cardboard and fabric.
Adding the ability to drive reduced the distance the drone could fly by 14 percent compared to a wheel-less version. But while driving was slower, the drone could travel 150 percent further than when flying. The result is a vehicle that combines the speed and mobility of flying with the energy-efficiency of driving.

CSAIL director Daniela Rus told MIT News their work suggested that when looking to create flying cars, it might make more sense to build on years of research into drones rather than trying to simply “put wings on cars.”
Historically, flying car concepts have looked like someone took apart a Cessna light aircraft and a family sedan, mixed all the parts up, and bolted them back together again. Not everyone has abandoned this approach—two of the most developed flying car designs from Terrafugia and AeroMobil are cars with folding wings that need an airstrip to take off.
But flying car concepts are looking increasingly drone-like these days, with multiple small rotors, electric propulsion, and vertical take-off abilities. Take the EHang 184 autonomous aerial vehicle being developed in China; the Kitty Hawk all-electric aircraft backed by Google founder Larry Page, which is little more than a quadcopter with a seat; the AirQuadOne designed by UK consortium Neva Aerospace; or Lilium Aviation’s Jet.
The attraction is obvious. Electric-powered drones are more compact, maneuverable, and environmentally friendly, making them suitable for urban environments.
Most of these vehicles are not quite the same as those proposed by the MIT engineers, as they’re pure flying machines. But a recent Airbus concept builds on the same principle that the future of urban mobility is vehicles that can both fly and drive. Its Pop.Up design is a two-passenger pod that can either be clipped to a set of wheels or hung beneath a quadcopter.
Importantly, they envisage their creation being autonomous in both flight and driving modes. And they’re not the only ones who think the future of flying cars is driverless. Uber has committed to developing a network of autonomous air taxis within a decade. This spring, Dubai announced it would launch a pilotless passenger drone service using the EHang 184 as early as next month (July).
While integrating fully-fledged autonomous flying cars into urban environments will be far more complex, the study by Rus and her colleagues provides a good starting point for the kind of 3D route-planning and collision avoidance capabilities this would require.
The team developed multi-robot path-planning algorithms that were able to control all eight drones as they flew and drove around their mock-up city, while also making sure they didn’t crash into each other and avoided no-fly zones.
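For a sense of the kind of planning involved, here is a toy single-vehicle sketch: a breadth-first search over a grid in which some cells are marked as no-fly zones. The CSAIL planners coordinate eight vehicles across flying and driving modes with collision avoidance; the grid size, blocked cells, and coordinates below are invented.

```python
# Toy path planner: shortest grid path from start to goal that avoids no-fly cells.
from collections import deque

GRID_W, GRID_H = 8, 6
NO_FLY = {(3, 1), (3, 2), (3, 3), (5, 4)}   # blocked cells as (x, y)

def shortest_path(start, goal):
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            break
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < GRID_W and 0 <= nxt[1] < GRID_H
                    and nxt not in NO_FLY and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    # walk back from the goal to recover the path
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from.get(node)
        if node is None and path[-1] != start:
            return None   # goal unreachable
    return list(reversed(path))

print(shortest_path((0, 0), (7, 5)))
```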
“This work provides an algorithmic solution for large-scale, mixed-mode transportation and shows its applicability to real-world problems,” Jingjin Yu, a computer science professor at Rutgers University who was not involved in the research, told MIT News.
This vision of a driverless future for flying cars might be a bit of a disappointment for those who’d envisaged themselves one day piloting their own hover car just like George Jetson. But autonomy and Uber-like ride-hailing business models are likely to be attractive, as they offer potential solutions to three of the biggest hurdles drone-like passenger vehicles face.
Firstly, it makes the vehicles accessible to anyone by removing the need to learn how to safely pilot an aircraft. Secondly, battery life still limits most electric vehicles to flight times measured in minutes. For personal vehicles this could be frustrating, but if you’re just hopping in a driverless air taxi for a five-minute trip across town, it’s unlikely to be much of an issue.
Operators of the service simply need to make sure they have a big enough fleet to ensure a charged vehicle is never too far away, or they’ll need a way to swap out batteries easily, such as the one suggested by the makers of the Volocopter electric helicopter.
Finally, there has already been significant progress on the technology and regulations needed to integrate autonomous drones into our airspace, progress that future driverless flying cars can most likely piggyback on.
Safety requirements will inevitably be more stringent, but adding more predictable and controllable autonomous drones to the skies is likely to be more attractive to regulators than trying to license and police thousands of new amateur pilots.
Image Credit: Lilium
