
#432293 An Innovator’s City Guide to Shanghai

Shanghai is a city full of life. With a population of 24 million, it embraces vibrant growth, fosters rising diversity, and attracts visionaries, innovators, and adventurers. Fintech, artificial intelligence, and e-commerce are booming. Now is a great time to explore this multicultural, inspiring city as it experiences rapid growth and ever-greater influence.

Meet Your Guide

Qingsong (Dora) Ke
Singularity University Chapter: Shanghai Chapter
Profession: Associate Director for Asia Pacific, IE Business School and IE University; Mentor, Techstars Startup Weekend; Mentor, Startupbootcamp; China President, Her Century

Your City Guide to Shanghai, China
Top three industries in the city: Automotive, Retail, and Finance

1. Coworking Space: Mixpace

With 10 convenient locations in downtown Shanghai, Mixpace offers a variety of office and event spaces at affordable prices to both foreign and local entrepreneurs and startups.

2. Makerspace: XinCheJian

China's first hackerspace, the non-profit XinCheJian was founded to support projects in physical computing, open source hardware, and the Internet of Things. It hosts regular events and talks to promote the development of hackerspaces across China.

3. Local meetups/networks: FinTech Connector

FinTech Connector is a community connecting local fintech entrepreneurs and start-ups with global professionals, thought leaders, and investors for the purpose of disrupting financial services with cutting-edge technology.

4. Best coffee shop with free WiFi: Seesaw

Clean and modern décor, convenient locations, a quiet environment, and high-quality coffee make Seesaw one of the most popular coffee shops in Shanghai.

5. The startup neighborhood: Knowledge & Innovation Community (KIC)

Located near 10 prestigious universities and over 100 scientific research institutions, KIC attempts to integrate Silicon Valley’s innovative spirit with the artistic culture of the Left Bank in Paris.

6. Well-known investor or venture capitalist: Nanpeng (Neil) Shen

Global executive partner at Sequoia Capital, founding and managing partner of Sequoia China, and co-founder of Ctrip.com and Home Inn, Neil Shen was named Best Venture Capitalist by Forbes China from 2010 to 2013 and ranked as the top Chinese investor on Forbes' list of Global Best Investors from 2012 to 2016.

7. Best way to get around: Metro

With 17 well-connected lines covering every corner of the city at affordable fares, the metro is the best way to get around Shanghai.

8. Local must-have dish and where to get it: Mini soupy buns (xiaolongbao, steamed soup dumplings) at Din Tai Fung in Shanghai.

Named one of the top ten restaurants in the world by the New York Times, Din Tai Fung makes the best xiaolongbao: delicate steamed dumplings filled with savory broth.

9. City’s best-kept secret: Barber Shop

This underground bar gets its name from the barber shop it’s hidden behind. Visitors must discover how to unlock the door leading to Barber Shop’s sophisticated cocktails and engaging music. (No website for this underground location, but the address is 615 Yongjia Road).

10. Touristy must-do: Enjoy the nightlife and the skyline at the Bund

Across the Huangpu River to the east stand the most modern skyscrapers, including Shanghai Tower, the Shanghai World Financial Centre, and Jin Mao Tower. The Bund itself, on the west bank, features 26 buildings in diverse architectural styles, including Gothic, Baroque, and Romanesque, and is known for its exotic architecture.

11. Local volunteering opportunity: Shanghai Volunteer

Shanghai Volunteer is a platform that connects volunteers with opportunities in fields such as education, elderly care, city culture, and the environment.

12. Local University with great resources: Shanghai Jiao Tong University

Established in 1896, Shanghai Jiao Tong University is the second-oldest university in China and one of the country’s most prestigious. It boasts notable alumni in government and politics, science, engineering, business, and sports, and it regularly collaborates with government and the private sector.

This article is for informational purposes only. All opinions in this post are the author’s alone and not those of Singularity University. Neither this article nor any of the listed information therein is an official endorsement by Singularity University.

Image Credits: Qingsong (Dora) Ke

Banner Image Credit: ESB Professional / Shutterstock.com


#432236 Why Hasn’t AI Mastered Language ...

In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations. Consequently, these ventures haven’t achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advancements in natural language processing and sentiment analysis. AI still has difficulty processing requests in a single language, even before the added complications of translation are factored in. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted that machines were also able to beat the best human Go player faster than anyone anticipated, because such games have very clear rules and a limited set of possible moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.”

Google’s technology is now able to consider the entirety of a sentence, as opposed to merely translating individual words. Still, the glitches linger. I asked Dr. Jorge Majfud, Associate Professor of Spanish, Latin American Literature, and International Studies at Jacksonville University, to explain why consistently accurate language translation has thus far eluded AI.

He replied, “The problem is that considering the ‘entire’ sentence is still not enough. The same way the meaning of a word depends on the rest of the sentence (more in English than in Spanish), the meaning of a sentence depends on the rest of the paragraph and the rest of the text, as the meaning of a text depends on a larger context called culture, speaker intentions, etc.”

He noted that sarcasm and irony only make sense within this widened context. Similarly, idioms can be problematic for automated translations.

“Google translation is a good tool if you use it as a tool, that is, not to substitute human learning or understanding,” he said, before offering examples of mistranslations that could occur.

“Months ago, I went to buy a drill at Home Depot and I read a sign under a machine: ‘Saw machine.’ Right below it, the Spanish translation: ‘La máquina vió,’ which means, ‘The machine did see it.’ Saw, not as a noun but as a verb in the preterit form,” he explained.

Dr. Majfud warned, “We should be aware of the fragility of their ‘interpretation.’ Because to translate is basically to interpret, not just an idea but a feeling. Human feelings and ideas that only humans can understand—and sometimes not even we, humans, understand other humans.”

He noted that cultures, gender, and even age can pose barriers to this understanding and also contended that an over-reliance on technology is leading to our cultural and political decline. Dr. Majfud mentioned that Argentinean writer Julio Cortázar used to refer to dictionaries as “cemeteries.” He suggested that automatic translators could be called “zombies.”

Erik Cambria is an academic AI researcher and assistant professor at Nanyang Technological University in Singapore. He mostly focuses on natural language processing, which is at the core of AI-powered language translation. Like Dr. Majfud, he sees the complexity and associated risks. “There are so many things that we unconsciously do when we read a piece of text,” he told me. Reading comprehension requires multiple interrelated tasks, which haven’t been accounted for in past attempts to automate translation.

Cambria continued, “The biggest issue with machine translation today is that we tend to go from the syntactic form of a sentence in the input language to the syntactic form of that sentence in the target language. That’s not what we humans do. We first decode the meaning of the sentence in the input language and then we encode that meaning into the target language.”
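
Cambria’s point can be made concrete with a minimal, purely hypothetical Python sketch (not Google’s system or anything the researchers built) that contrasts word-by-word transfer with a decode-the-meaning-first approach, using Dr. Majfud’s “Saw machine” sign. The tiny lookup tables are invented for illustration.

```python
# A toy illustration only: hand-made lookup tables, not a real translation engine.
# It contrasts word-by-word "syntactic" transfer with a meaning-first approach.

# Word-level lookup: the noun "saw" and the past-tense verb "saw" collapse
# into a single entry, so context is lost.
WORD_TABLE = {"saw": "vio", "machine": "máquina"}

def word_by_word(tokens):
    """Translate token by token, ignoring context entirely."""
    return " ".join(WORD_TABLE.get(t, t) for t in tokens)

# Meaning-first: map the whole phrase to an intended sense, then render
# that sense in the target language.
SENSE_TABLE = {("saw", "machine"): "cutting tool"}   # decode the meaning
RENDER_TABLE = {"cutting tool": "sierra mecánica"}   # encode it in Spanish

def meaning_first(tokens):
    """Resolve the phrase to a sense before producing target-language text."""
    sense = SENSE_TABLE.get(tuple(tokens))
    return RENDER_TABLE[sense] if sense else word_by_word(tokens)

print(word_by_word(["saw", "machine"]))   # "vio máquina": the mistranslation
print(meaning_first(["saw", "machine"]))  # "sierra mecánica": the intended sign
```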

Additionally, there are cultural risks involved with these translations. Dr. Ramesh Srinivasan, Director of UCLA’s Digital Cultures Lab, said that new technological tools sometimes reflect underlying biases.

“There tend to be two parameters that shape how we design ‘intelligent systems.’ One is the values and you might say biases of those that create the systems. And the second is the world if you will that they learn from,” he told me. “If you build AI systems that reflect the biases of their creators and of the world more largely, you get some, occasionally, spectacular failures.”

Dr. Srinivasan said translation tools should be transparent about their capabilities and limitations. He said, “You know, the idea that a single system can take languages that I believe are very diverse semantically and syntactically from one another and claim to unite them or universalize them, or essentially make them sort of a singular entity, it’s a misnomer, right?”

Mary Cochran, co-founder of Launching Labs Marketing, sees the commercial upside. She mentioned that listings in online marketplaces such as Amazon could potentially be auto-translated and optimized for buyers in other countries.

She said, “I believe that we’re just at the tip of the iceberg, so to speak, with what AI can do with marketing. And with better translation, and more globalization around the world, AI can’t help but lead to exploding markets.”

Image Credit: igor kisselev / Shutterstock.com


#432181 Putting AI in Your Pocket: MIT Chip Cuts ...

Neural networks are powerful things, but they need a lot of juice. Engineers at MIT have now developed a new chip that cuts neural nets’ power consumption by up to 95 percent, potentially allowing them to run on battery-powered mobile devices.

Smartphones these days are getting truly smart, with ever more AI-powered services like digital assistants and real-time translation. But typically the neural nets crunching the data for these services are in the cloud, with data from smartphones ferried back and forth.

That’s not ideal, as it requires a lot of communication bandwidth and means potentially sensitive data is being transmitted and stored on servers outside the user’s control. But the huge amounts of energy needed to power the GPUs neural networks run on make it impractical to implement them in devices that run on limited battery power.

Engineers at MIT have now designed a chip that cuts that power consumption by up to 95 percent by dramatically reducing the need to shuttle data back and forth between a chip’s memory and processors.

Neural nets consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives input from multiple neurons in the layer below it, and if the combined input passes a certain threshold it then transmits an output to multiple neurons above it. The strength of the connection between neurons is governed by a weight, which is set during training.

This means that for every neuron, the chip has to retrieve the input data for a particular connection and the connection weight from memory, multiply them, store the result, and then repeat the process for every input. That requires a lot of data to be moved around, expending a lot of energy.
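
As a rough illustration of that fetch-multiply-accumulate pattern, a single conventional neuron might be computed like the minimal Python sketch below; the numbers are made up, and the chip’s real dataflow is far more involved than this toy loop.

```python
def neuron_output(inputs, weights, threshold=0.0):
    """Weighted sum of one neuron's inputs followed by a simple threshold."""
    total = 0.0
    for x, w in zip(inputs, weights):  # each pair is a separate memory fetch
        total += x * w                 # one multiply-accumulate per connection
    return 1.0 if total > threshold else 0.0

# One neuron with three incoming connections (illustrative values).
print(neuron_output([0.5, -1.0, 2.0], [0.8, 0.1, 0.4]))  # -> 1.0
```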

The new MIT chip does away with that, instead computing all the inputs in parallel within the memory using analog circuits. That significantly reduces the amount of data that needs to be shoved around and results in major energy savings.

The approach requires the weights of the connections to be binary rather than a range of values, but previous theoretical work had suggested this wouldn’t dramatically impact accuracy, and the researchers found the chip’s results were generally within two to three percent of the conventional non-binary neural net running on a standard computer.
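
To see why binary weights simplify the arithmetic, here is a rough toy in Python. It illustrates the general binarization idea only, not the MIT chip’s actual circuitry or training method: once each weight is forced to +1 or -1, the multiply-accumulate collapses into additions and subtractions.

```python
def binarize(weights):
    """Replace each real-valued weight with its sign (+1 or -1)."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

def dot(inputs, weights):
    """Plain dot product: the core multiply-accumulate of a neural net layer."""
    return sum(x * w for x, w in zip(inputs, weights))

inputs = [0.5, -1.0, 2.0, 0.3]         # illustrative activations
real_weights = [0.8, -0.6, 0.4, -0.9]  # illustrative trained weights

print(dot(inputs, real_weights))            # about 1.53, full-precision weights
print(dot(inputs, binarize(real_weights)))  # about 3.2, only adds and subtracts
```

On a single dot product the binarized result can differ noticeably from the full-precision one; the two-to-three-percent figure above refers to whole networks, where training typically compensates for the coarser weights.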

This isn’t the first time researchers have created chips that carry out processing in memory to reduce the power consumption of neural nets, but it’s the first time the approach has been used to run powerful convolutional neural networks popular for image-based AI applications.

“The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays,” Dario Gil, vice president of artificial intelligence at IBM, said in a statement.

“It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future.”

It’s not just research groups working on this, though. The desire to get AI smarts into devices like smartphones, household appliances, and all kinds of IoT devices is driving the who’s who of Silicon Valley to pile into low-power AI chips.

Apple has already integrated its Neural Engine into the iPhone X to power things like its facial recognition technology, and Amazon is rumored to be developing its own custom AI chips for the next generation of its Echo digital assistant.

The big chip companies are also increasingly pivoting towards supporting advanced capabilities like machine learning, which has forced them to make their devices ever more energy-efficient. Earlier this year ARM unveiled two new chips: the Arm Machine Learning processor, aimed at general AI tasks from translation to facial recognition, and the Arm Object Detection processor for detecting things like faces in images.

Qualcomm’s latest mobile chip, the Snapdragon 845, features a GPU and is heavily focused on AI. The company has also released the Snapdragon 820E, which is aimed at drones, robots, and industrial devices.

Going a step further, IBM and Intel are developing neuromorphic chips whose architectures are inspired by the human brain and its incredible energy efficiency. That could theoretically allow IBM’s TrueNorth and Intel’s Loihi to run powerful machine learning on a fraction of the power of conventional chips, though they are both still highly experimental at this stage.

Getting these chips to run neural nets as powerful as those found in cloud services without burning through batteries too quickly will be a big challenge. But at the current pace of innovation, it doesn’t look like it will be too long before you’ll be packing some serious AI power in your pocket.

Image Credit: Blue Planet Studio / Shutterstock.com


#432165 Silicon Valley Is Winning the Race to ...

Henry Ford didn’t invent the motor car. The late 1800s saw a flurry of innovation by hundreds of companies battling to deliver on the promise of fast, efficient, and reasonably priced mechanical transportation. Ford later came to dominate the industry thanks to the development of the moving assembly line.

Today, the sector is poised for another breakthrough with the advent of cars that drive themselves. But unlike the original wave of automobile innovation, the race for supremacy in autonomous vehicles is concentrated among a few corporate giants. So who is set to dominate this time?

We’ve analyzed six companies we think are leading the race to build the first truly driverless car. Three of these—General Motors, Ford, and Volkswagen—come from the existing car industry and need to integrate self-driving technology into their existing fleets of mass-produced vehicles. The other three—Tesla, Uber, and Waymo (owned by Google’s parent company, Alphabet)—are newcomers from the digital technology world of Silicon Valley and have to build a mass-manufacturing capability.

While it’s impossible to know all the developments at any given time, we have tracked investments, strategic partnerships, and official press releases to learn more about what’s happening behind the scenes. The car industry typically rates self-driving technology on a scale from Level 0 (no automation) to Level 5 (full automation). We’ve assessed where each company is now and estimated how far they are from reaching the top level. Here’s how we think each player is performing.

Volkswagen
Volkswagen has invested in taxi-hailing app Gett and partnered with chip-maker Nvidia to develop an artificial intelligence co-pilot for its cars. In 2018, the VW Group is set to release the Audi A8, the first production vehicle that reaches Level 3 on the scale, “conditional driving automation.” This means the car’s computer will handle all driving functions, but a human has to be ready to take over if necessary.

Ford
Ford already sells cars with a Level 2 autopilot, “partial driving automation.” This means one or more aspects of driving are controlled by a computer based on information about the environment, for example, combined cruise control and lane centering. Alongside other investments, the company has put $1 billion into Argo AI, an artificial intelligence company for self-driving vehicles. Following a trial to test pizza delivery using autonomous vehicles, Ford is now testing Level 4 cars on public roads. These feature “high automation,” where the car can drive entirely on its own, but not in certain conditions, such as when the road surface is poor or the weather is bad.

General Motors
GM also sells vehicles with Level 2 automation but, after buying Silicon Valley startup Cruise Automation in 2016, now plans to launch by 2019 the first mass-production-ready Level 5 vehicle, one that drives completely on its own. The Cruise AV will have no steering wheel or pedals for a human to take over, and it will be part of a large fleet of driverless taxis the company plans to operate in big cities. But crucially, the company hasn’t yet secured permission to test the car on public roads.

Waymo (Google)

Waymo Level 5 testing. Image Credit: Waymo

Founded as a special project in 2009, Waymo separated from Google (though they’re both owned by the same parent firm, Alphabet) in 2016. Though it has never made, sold, or operated a car on a commercial basis, Waymo has created test vehicles that have clocked more than 4 million miles without human drivers as of November 2017. Waymo tested its Level 5 car, “Firefly,” between 2015 and 2017 but then decided to focus on hardware that could be installed in other manufacturers’ vehicles, starting with the Chrysler Pacifica.

Uber
The taxi-hailing app maker Uber has been testing autonomous cars on the streets of Pittsburgh since 2016, always with an employee behind the wheel ready to take over in case of a malfunction. After buying the self-driving truck company Otto in 2016 for a reported $680 million, Uber is now expanding its AI capabilities and plans to test NVIDIA’s latest chips in Otto’s vehicles. It has also partnered with Volvo to create a self-driving fleet of cars and with Toyota to co-create a ride-sharing autonomous vehicle.

Tesla
The first major car manufacturer to come from Silicon Valley, Tesla was also the first to introduce Level 2 autopilot back in 2015. The following year, it announced that all new Teslas would have the hardware for full autonomy, meaning once the software is finished it can be deployed on existing cars with an instant upgrade. Some experts have challenged this approach, arguing that the company has merely added surround cameras to its production cars that aren’t as capable as the laser-based sensing systems that most other carmakers are using.

But the company has collected data from hundreds of thousands of cars, driving millions of miles across all terrains. So, we shouldn’t dismiss the firm’s founder, Elon Musk, when he claims a Level 4 Tesla will drive from LA to New York without any human interference within the first half of 2018.

Winners

Who’s leading the race? Image Credit: IMD

At the moment, the disruptors like Tesla, Waymo, and Uber seem to have the upper hand. While the traditional automakers are focusing on bringing Level 3 and 4 partial automation to market, the new companies are leapfrogging them by moving more directly towards Level 5 full automation. Waymo may have the least experience of dealing with consumers in this sector, but it has already clocked up a huge amount of time testing some of the most advanced technology on public roads.

The incumbent carmakers are also focused on the difficult process of integrating new technology and business models into their existing manufacturing operations by buying up small companies. The challengers, on the other hand, can more easily partner with other big players, including manufacturers, to gain the scale and expertise they need more quickly.

Tesla is building its own manufacturing capability but also collecting vast amounts of critical data that will enable it to more easily upgrade its cars when ready for full automation. In particular, Waymo’s experience, technology capability, and ability to secure solid partnerships put it at the head of the pack.

This article was originally published on The Conversation. Read the original article.

Image Credit: Waymo


#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, has been playing the same game for a while with its proprietary brand of lifelike humanoid robots. Sometimes it can be a frustrating game to watch. Anyone who gives these robots the slightest bit of thought will realize that each is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30+ motors suggests millions of possible configurations; researchers identify 62 facial expressions in Sophia that they consider “human-like,” although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds that behind the more simplistic chat engine the robots use, although it’s the latter that allows a robot to inflate the punters’ expectations with a few pre-programmed questions in an interview.

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of concern and revulsion people react with when things appear to be creepily human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human, even when they’re designed to interact with us; consider the popularity of robots like Aibo and Jibo, which make no attempt to resemble humans. Something that resembles a human but isn’t quite right, however, is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he monitor and log the physical movements people made on videotapes, but some of his robots are based on replicas of people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Future experiments were focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge we face. That is, if you want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com
