Tag Archives: tech
#430148 Why Interstellar Travel Will Be Possible ...
The term “moonshot” is sometimes invoked to denote a project so outrageously ambitious that it can only be described by comparing it to the Apollo 11 mission to land the first human on the Moon. The Breakthrough Starshot Initiative transcends the moonshot descriptor because its purpose goes far beyond the Moon. The aptly-named project seeks to travel to the nearest stars.
The brainchild of Russian-born billionaire tech entrepreneur Yuri Milner, Breakthrough Starshot was announced in April 2016 at a press conference attended by renowned physicists including Stephen Hawking and Freeman Dyson. While still early, the current vision is that thousands of wafer-sized chips attached to large, silver lightsails will be placed into Earth orbit and accelerated by the pressure of an intense Earth-based laser hitting the lightsail.
After just two minutes of being driven by the laser, the spacecraft will be traveling at one-fifth the speed of light—a thousand times faster than any macroscopic object has ever achieved.
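For a sense of just how violent that two-minute push would be, here's a quick back-of-the-envelope calculation of the implied acceleration. It uses only the figures above and is an illustration, not a Starshot specification:

```python
# The acceleration implied by the numbers above: reaching one-fifth of
# light speed after two minutes under the laser.
c = 299_792_458          # speed of light, m/s
final_speed = 0.2 * c    # target cruise speed, m/s
burn_time = 120          # seconds of laser push

acceleration = final_speed / burn_time
print(f"{acceleration:.0f} m/s^2, or about {acceleration / 9.81:,.0f} g")
```

That works out to roughly 50,000 times Earth's gravity, which is one reason the payload has to be a featherweight chip rather than a conventional spacecraft.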
Each craft will coast for 20 years and collect scientific data about interstellar space. Upon reaching the planets near the Alpha Centauri star system, the onboard digital camera will take high-resolution pictures and send these back to Earth, providing the first glimpse of our closest planetary neighbors. In addition to scientific knowledge, we may learn whether these planets are suitable for human colonization.
The team behind Breakthrough Starshot is as impressive as the technology. The board of directors includes Milner, Hawking, and Facebook co-founder Mark Zuckerberg. The executive director is S. Pete Worden, former director of NASA Ames Research Center. A number of prominent scientists, including Nobel and Breakthrough Laureates, are serving as advisors to the project, and Milner has promised $100 million of his own funds to begin work. He will encourage his colleagues to contribute $10 billion over the next several years for its completion.
While this endeavor may sound like science fiction, there are no known scientific obstacles to implementing it. This doesn’t mean it will happen tomorrow: for Starshot to be successful, a number of advances in technologies are necessary. The organizers and advising scientists are relying upon the exponential rate of advancement to make Starshot happen within 20 years.
Here are 10 key Starshot technologies and how they are expected to advance exponentially over the next two decades.
Exoplanet Detection
An exoplanet is a planet outside our Solar System. While the first scientific detection of an exoplanet was only in 1988, as of May 1, 2017 there have been 3,608 confirmed detections of exoplanets in 2,702 planetary systems. While some resemble those in our Solar System, many have fascinating and bizarre features, such as rings 200 times wider than Saturn’s.
The reason for this deluge of discoveries? A vast improvement in telescope technology.
Just 100 years ago the world’s largest telescope was the Hooker Telescope at 2.54 meters. Today, the European Southern Observatory’s Very Large Telescope consists of four large 8.2-meter diameter telescopes and is now the most productive ground-based facility in astronomy, with an average of over one peer-reviewed, published scientific paper per day.
Researchers use the VLT and a special instrument to look for rocky extrasolar planets in the habitable zone (allowing liquid water) of their host stars. In May 2016, researchers using the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile announced Earth-sized exoplanets around the nearby star TRAPPIST-1; follow-up observations announced in early 2017 brought the count to seven, several of them in the habitable zone.
Meanwhile, in space, NASA’s Kepler spacecraft is designed specifically for this purpose and has already identified over 2,000 exoplanets. The James Webb Space Telescope, scheduled to launch in October 2018, will offer unprecedented insight into whether exoplanets can support life. “If these planets have atmospheres, [JWST] will be the key to unlocking their secrets,” according to Doug Hudgins, Exoplanet Program Scientist at NASA headquarters in Washington.
Launch Cost
The Starshot mothership will be launched aboard a rocket and release a thousand starships. The cost of transporting a payload using single-use rockets is immense, but private launch providers such as SpaceX and Blue Origin have recently demonstrated success with reusable rockets, which are expected to substantially reduce prices. SpaceX has already reduced costs to around $60 million per Falcon 9 launch, and as the private space industry expands and reusable rockets become more common, this price is expected to drop even further.
The Starchip
Each 15-millimeter-wide Starchip must contain a vast array of sophisticated electronic devices, such as a navigation system, camera, communication laser, radioisotope battery, camera multiplexer, and camera interface. The expectation that we’ll be able to compress an entire spacecraft onto a small wafer rests on exponentially shrinking sensor and chip sizes.
The first computer chips in the 1960s contained a handful of transistors. Thanks to Moore’s Law, we can now squeeze billions of transistors onto each chip. The first digital camera weighed 8 pounds and took 0.01 megapixel images. Now, a digital camera sensor yields high-quality 12+ megapixel color images and fits in a smartphone—along with other sensors like GPS, accelerometer, and gyroscope. And we’re seeing this improvement bleed into space exploration with the advent of smaller satellites providing better data.
For Starshot to succeed, we will need the chip’s mass to be about 0.22 grams by 2030, but if the rate of improvement continues, projections suggest this is entirely possible.
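As a rough illustration of how that kind of extrapolation works, here is a back-of-the-envelope sketch in Python. The 0.22-gram target is the figure above; the starting mass and the assumed halving period are purely hypothetical placeholders, not Starshot numbers:

```python
# Back-of-the-envelope extrapolation: how light could a "spacecraft on a
# chip" be by 2030 if miniaturization keeps halving its mass at a steady
# rate? The 0.22 g target is from the article; start_mass_g and
# halving_years are illustrative assumptions, not official figures.
target_mass_g = 0.22      # required Starchip mass by 2030 (from the article)
start_mass_g = 4.0        # hypothetical mass of comparable electronics today
halving_years = 2.0       # assumed Moore's-Law-style halving period
years = 2030 - 2017

projected = start_mass_g * 0.5 ** (years / halving_years)
print(f"Projected mass in 2030: {projected:.2f} g (target: {target_mass_g} g)")
```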
The Lightsail
The sail must be made of a material which is highly reflective (to gain maximum momentum from the laser), minimally absorbing (so that it is not incinerated by the heat), and also very lightweight (allowing quick acceleration). These three criteria are extremely restrictive, and at present no satisfactory material exists.
Image Credit: Breakthrough Starshot
The required advances may come from artificial intelligence automating and accelerating materials discovery. Such automation has advanced to the point where machine learning techniques can “generate libraries of candidate materials by the tens of thousands,” allowing engineers to identify which ones are worth pursuing and testing for specific applications.
Energy Storage
While the Starchip will use a tiny nuclear-powered radioisotope battery for its 24-year-plus journey, we will still need conventional chemical batteries for the lasers. The lasers will need to deliver tremendous energy in a short span of time, meaning that the power must be stored in nearby batteries.
Battery storage has improved at 5-8% per year, though we often don’t notice this benefit because appliance power consumption has increased at a comparable rate, resulting in steady operating lifetimes. If batteries continue to improve at this rate, in 20 years they should have 3 to 5 times their present capacity. Continued innovation is expected to be driven by Tesla and SolarCity’s big investment in battery technology. The companies have already installed close to 55,000 batteries in Kauai to help power the island’s grid.
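The “3 to 5 times” figure is simply compound growth, as a couple of lines of Python make explicit (the 5 and 8 percent rates are the ones quoted above):

```python
# Compound annual improvement: how much better do batteries get in 20 years
# at the 5-8% yearly rates cited above?
for annual_rate in (0.05, 0.08):
    factor = (1 + annual_rate) ** 20
    print(f"{annual_rate:.0%} per year for 20 years -> {factor:.1f}x capacity")
# ~2.7x at 5% and ~4.7x at 8%, i.e. roughly the 3-5x range cited.
```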
Lasers
Thousands of high-powered lasers will be used to push the lightsail to extraordinary speeds.
Lasers have followed Moore’s Law at a rate nearly identical to integrated circuits, with the cost-per-power ratio halving every 18 months. In particular, the last decade has seen a dramatic acceleration in the power scaling of diode and fiber lasers, the latter breaking through 10 kilowatts from a single-mode fiber in 2010 and the 100-kilowatt barrier a few months later. In addition to raw power, we will also need advances in combining lasers into phased arrays.
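If the cost-per-watt of laser power really does keep halving every 18 months, the compounding over Starshot’s timeline is dramatic. Here’s a quick sketch of the arithmetic, assuming only that the historical trend quoted above continues:

```python
# If cost-per-watt of lasers halves every 18 months (the historical trend
# cited above), how much cheaper is a watt of laser power in 20 years?
halving_period_years = 1.5
horizon_years = 20

improvement = 2 ** (horizon_years / halving_period_years)
print(f"Cost per watt falls by a factor of ~{improvement:,.0f}")
# Roughly a 10,000-fold drop, assuming the trend simply continues.
```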
Speed
Our ability to move quickly has…moved quickly. The steam locomotive was invented in 1804, and trains went on to reach the hitherto unheard-of speed of 70 mph. The Helios 2 spacecraft eclipsed such speeds in 1976: at its fastest, Helios 2 was traveling at 356,040 km/h. Roughly 40 years later, the New Horizons spacecraft crossed the solar system at a heliocentric speed of almost 45 km/s, or 100,000 miles per hour. Yet even at these speeds it would take a long, long time to reach Alpha Centauri, slightly more than four light-years away.
While accelerating subatomic particles to nearly light speed is routine in particle accelerators, this has never been achieved for macroscopic objects. Reaching 20 percent of light speed would make Starshot’s craft roughly 1,000 times faster than any object humans have ever built.
Memory Storage
Fundamental to computing is the ability to store information. Starshot depends on the continued decrease in the cost and size of digital memory to include sufficient storage for its programs and for the images taken of the Alpha Centauri star system and its planets.
The cost of memory has decreased exponentially for decades: in 1970, a megabyte cost about one million dollars; it’s now about one-tenth of a cent. The size required for the storage has similarly decreased, from a 5-megabyte hard drive being loaded via forklift in 1956 to the current availability of 512-gigabyte USB sticks weighing a few grams.
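Those two price points imply a remarkably steady halving time, which a few lines of Python can back out (both figures are the ones quoted above; the span is 1970 to 2017):

```python
import math

# Implied halving time of memory cost from the two price points above:
# ~$1,000,000 per megabyte in 1970 vs. ~$0.001 per megabyte today (2017).
cost_1970 = 1_000_000.0   # dollars per megabyte
cost_2017 = 0.001         # dollars per megabyte
years = 2017 - 1970

total_drop = cost_1970 / cost_2017          # ~1e9
halvings = math.log2(total_drop)            # ~30 halvings
print(f"Cost fell ~{total_drop:.0e}x, i.e. one halving every "
      f"{years / halvings:.1f} years")
```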
Telecommunication
Once the images are taken, the Starchip will send them back to Earth for processing.
Telecommunications has advanced rapidly since Alexander Graham Bell invented the telephone in 1876. The average internet speed in the US is currently about 11 megabits per second. Sending digital images across more than four light-years—over 20 trillion miles—will require taking advantage of the latest telecommunications technology.
One promising technology is Li-Fi, a wireless approach which is 100 times faster than Wi-Fi. A second is via optical fibers which now boast 1.125 terabits per second. There are even efforts in quantum telecommunications which are not just ultrafast but completely secure.
Computation
The final step in the Starshot project is to analyze the data returning from the spacecraft. To do so we must take advantage of the exponential increase in computing power, benefiting from the trillion-fold increase in computing over the past 60 years.
The cost of computing continues to fall dramatically, due in large part to the rise of cloud computing. Extrapolating into the future and taking advantage of new types of processing, such as quantum computing, we should see another thousand-fold increase in computing power by the time data from Starshot returns. Such extreme processing power will allow us to perform sophisticated scientific modeling and analysis of our nearest neighboring star system.
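For a sense of scale, the trillion-fold gain over 60 years implies a doubling time of roughly 18 months, and at that pace a further thousand-fold gain takes only about 15 more years. A short Python sketch makes the arithmetic explicit (both growth figures are the ones cited above; the assumption is simply that the trend holds):

```python
import math

# The trillion-fold gain over 60 years cited above implies a doubling time,
# from which we can estimate how long another thousand-fold gain would take
# if the trend simply continues.
doublings_so_far = math.log2(1e12)          # ~40 doublings
doubling_time = 60 / doublings_so_far       # ~1.5 years per doubling

doublings_needed = math.log2(1000)          # ~10 more doublings
print(f"Doubling time: {doubling_time:.1f} years; "
      f"a further 1000x takes ~{doubling_time * doublings_needed:.0f} years")
```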
Acknowledgements: The author would like to thank Pete Worden and Gregg Maryniak for suggestions and comments.
Image Credit: NASA/ESA/ESO
#430147 Deep Learning at the Speed of Light on ...
Deep learning has transformed the field of artificial intelligence, but the limitations of conventional computer hardware are already hindering progress. Researchers at MIT think their new “nanophotonic” processor could be the answer by carrying out deep learning at the speed of light.
In the 1980s, scientists and engineers hailed optical computing as the next great revolution in information technology, but it turned out that bulky components like fiber optic cables and lenses didn’t make for particularly robust or compact computers.
In particular, they found it extremely challenging to make scalable optical logic gates, and therefore impractical to make general optical computers, according to MIT physics post-doc Yichen Shen. One thing light is good at, though, is multiplying matrices—arrays of numbers arranged in columns and rows. You can actually mathematically explain the way a lens acts on a beam of light in terms of matrix multiplications.
This also happens to be a core component of the calculations involved in deep learning. Combined with advances in nanophotonics—the study of light’s behavior at the nanometer scale—this has led to a resurgence in interest in optical computing.
“Deep learning is mainly matrix multiplications, so it works very well with the nature of light,” says Shen. “With light you can make deep learning computing much faster and thousands of times more energy-efficient.”
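To see why light and deep learning are such a natural fit, it helps to remember that a neural-network layer is essentially one matrix multiplication followed by a simple nonlinearity. One common way to map an arbitrary weight matrix onto meshes of interferometers—not necessarily the exact scheme used in the MIT chip—is through its singular-value decomposition. The NumPy sketch below shows only the linear algebra, with purely illustrative sizes:

```python
import numpy as np

# A neural-network layer is essentially y = activation(W @ x): one matrix
# multiplication plus a nonlinearity. The matrix multiply is the part that
# maps naturally onto optics. One common way to realize an arbitrary weight
# matrix W with meshes of interferometers is via its singular-value
# decomposition W = U @ diag(s) @ Vh: the unitary factors U and Vh can be
# built from beam-splitter/phase-shifter meshes, and diag(s) from simple
# attenuation or gain. Sizes below are purely illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))      # weight matrix of one small layer
x = rng.normal(size=4)           # input "signal" vector

U, s, Vh = np.linalg.svd(W)      # W == U @ diag(s) @ Vh
y_direct = W @ x
y_factored = U @ (s * (Vh @ x))  # same product, factored into the pieces
                                 # a photonic mesh would implement

assert np.allclose(y_direct, y_factored)
print(np.maximum(y_direct, 0))   # nonlinearity (here ReLU) applied electronically
```

In a photonic version, the two unitary factors would be carried out by interferometer meshes at essentially no marginal energy cost, while the nonlinearity is, for now, handled electronically—exactly the split described later in this article.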
To demonstrate this, Shen and his MIT colleagues have designed an all-optical chip that can implement artificial neural networks—the brain-inspired algorithms at the heart of deep learning.
In a recent paper in Nature Photonics, the group describes a chip made up of 56 interferometers—components that allow the researchers to control how beams of light interfere with each other to carry out mathematical operations.
The processor can be reprogrammed by applying a small voltage to the waveguides that direct beams of light around the processor, which heats them and causes them to change shape.
The chip is best suited to inference tasks, the researchers say, where the algorithm is put to practical use by applying a learned model to analyze new data, for instance to detect objects in an image.
It isn’t great at learning, because heating the waveguides is relatively slow compared to how electronic systems are reprogrammed. So, in their study, the researchers trained the algorithm on a computer before transferring the learned model to the nanophotonic processor to carry out the inference task.
That’s not a major issue. For many practical applications it’s not necessary to carry out learning and inference on the same chip. Google recently made headlines for designing its own deep learning chip, the TPU, which is also specifically designed for inference, and most companies that use a lot of machine learning already split the two jobs.
“In many cases they update these models once every couple of months and the rest of the time the fixed model is just doing inference,” says Shen. “People usually separate these tasks. They typically have a server just doing training and another just doing inference, so I don’t see a big problem making a chip focused on inference.”
Once the model has been programmed into the chip, it can then carry out computations at the speed of light using less than one-thousandth the energy per operation compared to conventional electronic chips.
There are limitations, though. Because the chip deals with light waves that operate on the scale of a few microns, there are fundamental limits to how small these chips can get.
“The wavelength really sets the limit of how small the waveguides can be. We won’t be able to make devices significantly smaller. Maybe by a factor of four, but physics will ultimately stop us,” says MIT graduate student Nicholas Harris, who co-authored the paper.
That means it would be difficult to implement neural nets much larger than a few thousand neurons. However, the vast majority of current deep learning algorithms are well within that limit.
In a vowel-recognition test, the system achieved significantly lower accuracy than a standard computer running the same deep learning model, correctly identifying 76.7 percent of vowels compared to 91.7 percent.
But Harris says they think this was largely due to interference between the various heating elements used to program the waveguides, and that it should be easy to fix by using thermal isolation trenches or extra calibration steps.
Importantly, the chips are also built using the same fabrication technology as conventional computer chips, so scaling up production should be easy. Shen said the group has already had interest in their technology from prominent chipmakers.
Pierre-Alexandre Blanche, a professor of optics at the University of Arizona, said he’s very excited by the paper, which he said complements his own work. But he cautioned against getting too carried away.
“This is another milestone in the progress toward useful optical computing. But we are still far away to be competitive with electronics,” he told Singularity Hub in an email. “The argumentation about scalability, power consumption, speed etc. [in the paper] use a lot of conditional tense and assumptions which demonstrate that, if there is potential indeed, there is still a lot of research to be done.”
In particular, he pointed out that the system was only a partial solution to the problem. While the vast majority of neuronal computation involves multiplication of matrices, there is another component: calculating a non-linear response.
In the current paper this aspect of the computation was simulated on a regular computer. The researchers say in future models this function could be carried out by a so-called “saturable absorber” integrated into the waveguides that absorbs less light as the intensity increases.
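To get a feel for why a saturable absorber could play the role of that missing nonlinearity, here is a toy model of its response; the functional form and the saturation level are illustrative textbook-style choices, not parameters from the paper:

```python
import numpy as np

# Toy model of a saturable absorber as an optical activation function.
# Transmission rises as input intensity climbs past a saturation level,
# giving the kind of nonlinear response a neural network needs between
# its (linear) matrix-multiply layers. Parameters are illustrative only.
def saturable_transmission(intensity, t_linear=0.3, i_sat=1.0):
    """Fraction of light transmitted at a given input intensity."""
    return t_linear + (1.0 - t_linear) * intensity / (intensity + i_sat)

intensities = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
for i, t in zip(intensities, saturable_transmission(intensities)):
    print(f"input {i:6.2f} -> transmits {t:.2f} of the light")
```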
But Blanche notes that this is not a trivial problem, and it is something his group is currently working on. “It is not like you can buy one at the drug store,” he says.
Bhavin Shastri, a post-doc at Princeton whose group is also working on nanophotonic chips for implementing neural networks, said the research was important, as enabling matrix multiplications is a key step toward full-fledged photonic neural networks.
“Overall, this area of research is poised to usher in an exciting and promising field,” he added. “Neural networks implemented in photonic hardware could revolutionize how machines interact with ultrafast physical phenomena. Silicon photonics combines the analog device performance of photonics with the cost and scalability of silicon manufacturing.”
Stock media provided by across/Pond5.com
#430146 This Tech Could Charge Electric Cars ...
The global auto industry is worth $2 trillion, but electric and hybrid cars currently make up less than one percent of that figure. However, experts are predicting an explosion in electric car adoption.
Financial services company UBS predicted demand for electric cars will reach an inflection point in 2018 as their cost shrinks to equal (and eventually undercut) the cost of internal combustion engine vehicles. China saw a 53 percent increase in electric car sales from 2015 to 2016, and India is aiming to sell only electric cars by 2030.
Even though they’ll be affordable and they’ll keep the air cleaner, electric cars will still have one major limitation, and that’s…the fact that they’re electric. Electric things run on batteries, and if batteries don’t get recharged every so often, they die.
Tesla’s Model 3 will go 200 miles on one charge, and Chevy’s new Bolt goes 238 miles. These are no small distances, especially when compared to the Volt’s 30-mile range just three years ago. Even so, once the cars’ batteries are drained, recharging them takes hours.
Researchers at Stanford University just took a step toward solving this problem. In a paper published last week in Nature, the team described a new technique that wirelessly transmits electricity to a moving object within close range.
Wireless power transfer works using magnetic resonance coupling. An alternating magnetic field in a transmitter coil causes electrons in a receiver coil to oscillate, with the best transfer efficiency occurring when both coils are tuned to the same frequency and positioned at a specific angle.
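For intuition about why the tuning has to be so precise, here’s a toy Python model of a receiver coil as a driven resonant (RLC) circuit; the component values are illustrative, not taken from the Stanford setup:

```python
import math

# Why tuning matters: a receiver coil plus capacitor forms a resonant
# (RLC) circuit, and the power it picks up falls off sharply when the
# driving frequency moves away from its resonance. Component values are
# illustrative, not from the Stanford experiment.
R = 5.0        # ohms (coil resistance + load)
L = 25e-6      # henries
C = 10e-9      # farads
V = 1.0        # volts of induced EMF (arbitrary scale)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # resonant frequency (~318 kHz)

def received_power(f):
    """Average power dissipated in the receiver at drive frequency f."""
    w = 2 * math.pi * f
    reactance = w * L - 1 / (w * C)
    current = V / math.sqrt(R**2 + reactance**2)
    return 0.5 * current**2 * R

for detune in (1.0, 1.05, 1.2, 1.5):        # multiples of resonance
    p = received_power(detune * f0)
    print(f"{detune:.2f} x f0: {p / received_power(f0):.3f} of peak power")
```

With these values, even a few percent of detuning cuts the delivered power roughly in half—which is why a system that retunes itself automatically matters so much for a moving receiver.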
That makes it hard to transfer electricity to an object that’s moving, though. To bypass the need for continuous manual tuning, the Stanford team removed the radio-frequency source in the transmitter and replaced it with a voltage amplifier and a feedback resistor.
The system calibrates itself to the required frequency for different distances. Using this system, the researchers were able to wirelessly transmit a one-milliwatt charge of electricity to a moving LED light bulb three feet away. No manual tuning was needed, and transfer efficiency remained stable.
One milliwatt is a far cry from the tens of kilowatts an electric car needs. But now that they’ve established that an amplifier will do the trick, the team is working on ramping up the amount of electricity that can be transferred using this system.
Switching out the amplifier itself could make a big difference—for this test, they used a general-purpose amplifier with about ten percent efficiency, but custom-made amplifiers could likely boost efficiency to over 90 percent.
It will still be a while before electric cars can get zapped with infusions of charge while cruising down the highway, but that’s the future some energy experts envision.
“In theory, one could drive for an unlimited amount of time without having to stop to recharge,” said Shanhui Fan, professor of electrical engineering and senior author of the study. “The hope is that you’ll be able to charge your electric car while you’re driving down the highway. A coil in the bottom of the vehicle could receive electricity from a series of coils connected to an electric current embedded in the road.”
Embedding power lines in roads would be a major infrastructure project, and it wouldn’t make sense to undertake it until electric car adoption was widespread—when, say, electric cars accounted for at least 50 percent of the vehicles on the road. If charging were easier, though, more drivers might choose to go electric.
Tesla has already made electric car ownership a bit easier by investing heavily in its Supercharger network. There are currently 861 Supercharger stations around the world with 5,655 chargers, and hundreds more are in the works. The stations charge Tesla vehicles for free in half an hour to an hour instead of multiple hours.
Ripping up roads to embed power lines that can charge cars while they’re moving seems unnecessary as technologies like the Superchargers continue to proliferate. But as electric vehicles proliferate too, drivers will want their experiences to be as seamless as possible, and that could include not having to stop to charge your car.
Despite the significant hurdles left to clear, charging moving cars is the most exciting potential of the Stanford team’s wireless transfer system. But there are also smaller-scale applications like cell phones and personal medical implants, which will likely employ the technology before it’s used on cars. Fan even mentioned that the system “…may untether robotics in manufacturing.”
Image Credit: Shutterstock
#430116 Blooming Beasts: Dinosaurs Are Coming Up ...
A programmer used artificial intelligence to create images of dinosaurs that were constructed entirely out of flowers.
#430115 Tune Into the Future of Fintech at ...
Singularity University’s Exponential Finance Summit begins today and runs through June 9 in New York, the finance industry’s bustling capital. You can tune into the summit as it happens from anywhere with this livestream.
Singularity Hub is also covering the event as it brings together financial and technology leaders from across the industry. From exciting startups like Lemonade and HyperScience to established financial institutions such as BlackRock and Bank of America, we’ll be learning about how emerging technologies are changing the workings of the finance industry and how financial services companies do business.
At the summit, experts will dive into:
The future of blockchain and digital currencies.
How artificial intelligence is being used in finance.
The further decentralization and digitization of banking.
What quantum computing can do for finance.
How major institutions are evolving strategies to take advantage of new fintech startups and technology.
Ric Edelman, founder of Edelman Financial Services, and Sharon Sputz, director of Columbia University’s Data Science Institute, will discuss the future of financial advice and investing. Angela Strange, partner at Andreessen Horowitz, will break down exponential technology and insurance, and BlackRock’s chief talent officer, Matthew Breitfelder, will take a look at the future of work.
Of course, as usual, we’ll also keep an eye on talks and question-and-answer sessions with Ray Kurzweil, Singularity University cofounder and chancellor, and Peter Diamandis, Singularity University cofounder and chairman.
Be sure to join the conversation on the future of finance in real time on Twitter with @SingularityHub and @xfinance or using the hashtag #xfin.
Much of the latest technology driving fintech is still new, and its impact has yet to fully play out—which should make for an interesting summit.
Image Credit: Pond5