Tag Archives: computing

#431872 AI Uses Titan Supercomputer to Create ...

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that, in less than a day, can generate neural networks as good as, if not better than, any developed by a human.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. Such a system, loosely modeled on the human brain, is known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
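To make that “complex web” concrete, here’s a minimal sketch, assuming nothing about any particular production system: each layer of a neural network is just a matrix multiply, a bias, and a nonlinearity, and a forward pass chains them together. The weights below are random and purely illustrative.

```python
import numpy as np

def forward_pass(x, weights, biases):
    """Push an input vector through a stack of fully connected layers.

    Each layer multiplies by a weight matrix, adds a bias, and applies a
    ReLU nonlinearity -- the 'complex web of mathematical algorithms'
    in miniature.
    """
    activation = x
    for W, b in zip(weights, biases):
        activation = np.maximum(0, W @ activation + b)
    return activation

# Toy network: 4 inputs -> 8 hidden units -> 2 outputs, random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
biases = [np.zeros(8), np.zeros(2)]
print(forward_pass(rng.normal(size=4), weights, biases))
```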
Computing Power
Of course, Google Brain project engineers had access to only 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case, a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL reduced by 30 percent the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to identify the optimal hyperparameters—the key variables—for tackling a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
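MENNDL’s code isn’t reproduced here, but the evolutionary idea behind it can be sketched in a few lines of Python: keep a population of candidate hyperparameter sets, score each one, keep the best, and mutate them into the next generation. The search space, mutation rule, and fitness function below are illustrative stand-ins, not ORNL’s actual implementation.

```python
import random

# Hypothetical search space of the kind an evolutionary tool might explore.
SEARCH_SPACE = {
    "layers": [2, 3, 4, 6],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_candidate():
    return {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

def mutate(candidate):
    child = dict(candidate)
    name = random.choice(list(SEARCH_SPACE))
    child[name] = random.choice(SEARCH_SPACE[name])
    return child

def evaluate(candidate):
    # Placeholder fitness: a real system would train the candidate network
    # on the target dataset and return its validation accuracy.
    return -abs(candidate["layers"] * candidate["filters"] - 128)

def evolve(generations=20, population_size=16, keep=4):
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        parents = population[:keep]
        children = [mutate(random.choice(parents)) for _ in range(population_size - keep)]
        population = parents + children
    return max(population, key=evaluate)

print(evolve())
```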
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a Silicon Valley startup that uses open-source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Much as with the human brain, it’s not always possible to understand how a machine, despite being designed by humans, arrives at its solutions. This lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects to make an even bigger impact beginning next year, when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, today the world’s fifth-most powerful supercomputer.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com


#431828 This Self-Driving AI Is Learning to ...

I don’t have to open the doors of AImotive’s white 2015 Prius to see that it’s not your average car. This particular Prius has been christened El Capitan, the name written below the rear doors, and two small cameras are mounted on top of the car. Bundles of wire snake out from them, as well as from the two additional cameras on the car’s hood and trunk.
Inside is where things really get interesting, though. The trunk holds a computer the size of a microwave, and a large monitor covers the passenger glove compartment and dashboard. The center console has three switches labeled “Allowed,” “Error,” and “Active.”
Budapest-based AImotive is working to provide scalable self-driving technology alongside big players like Waymo and Uber in the autonomous vehicle world. On a highway test ride with CEO Laszlo Kishonti near the company’s office in Mountain View, California, I got a glimpse of just how complex that world is.
Camera-Based Feedback System
AImotive’s approach to autonomous driving is a little different from that of some of the best-known systems. For starters, they’re using cameras, not lidar, as primary sensors. “The traffic system is visual and the cost of cameras is low,” Kishonti said. “A lidar can recognize when there are people near the car, but a camera can differentiate between, say, an elderly person and a child. Lidar’s resolution isn’t high enough to recognize the subtle differences of urban driving.”
Image Credit: AImotive
The company’s aiDrive software uses data from the camera sensors to feed information to its algorithms for hierarchical decision-making, grouped under four concurrent activities: recognition, location, motion, and control.
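AImotive hasn’t published aiDrive’s internals, so the sketch below is only a schematic of the division of labor Kishonti describes: one frame of camera data flows through recognition, location, motion, and control in turn. The stage functions and Frame fields are placeholders, not the company’s actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A single camera frame plus whatever each stage adds to it."""
    pixels: object
    detections: list = None   # recognition: pedestrians, lane markings, signs
    pose: tuple = None        # location: estimated position and heading
    trajectory: list = None   # motion: short-horizon planned path
    actuation: dict = None    # control: steering, throttle, brake commands

def recognition(frame):
    frame.detections = []
    return frame

def location(frame):
    frame.pose = (0.0, 0.0, 0.0)
    return frame

def motion(frame):
    frame.trajectory = [(0.0, 1.0), (0.0, 2.0)]
    return frame

def control(frame):
    frame.actuation = {"steer": 0.0, "throttle": 0.2, "brake": 0.0}
    return frame

def process(frame):
    # In a real system the four activities run concurrently; here, in order.
    for stage in (recognition, location, motion, control):
        frame = stage(frame)
    return frame.actuation

print(process(Frame(pixels=None)))
```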
Kishonti pointed out that lidar has already gotten more cost-efficient, and will only continue to do so.
“Ten years ago, lidar was best because there wasn’t enough processing power to do all the calculations by AI. But the cost of running AI is decreasing,” he said. “In our approach, computer vision and AI processing are key, and for safety, we’ll have fallback sensors like radar or lidar.”
aiDrive currently runs on Nvidia chips, which Kishonti noted were originally designed for graphics and are not terribly power-efficient. “We’re planning to substitute lower-cost, lower-energy chips in the next six months,” he said.
Testing in Virtual Reality
Waymo recently announced its fleet has now driven four million miles autonomously. That’s a lot of miles, and hard to compete with. But AImotive isn’t trying to compete, at least not by logging more real-life test miles. Instead, the company is doing 90 percent of its testing in virtual reality. “This is what truly differentiates us from competitors,” Kishonti said.
He outlined the three main benefits of VR testing: it can simulate scenarios too dangerous for the real world (such as hitting something), too costly (not every company has Waymo’s funds to run hundreds of cars on real roads), or too time-consuming (like waiting for rain, snow, or other weather conditions to occur naturally and repeatedly).
“Real-world traffic testing is very skewed towards the boring miles,” he said. “What we want to do is test all the cases that are hard to solve.”
On a screen that looked not unlike multiple games of Mario Kart, he showed me the simulator. Cartoon cars cruised down winding streets, outfitted with all the real-world surroundings: people, trees, signs, other cars. As I watched, a furry kangaroo suddenly hopped across one screen. “Volvo had an issue in Australia,” Kishonti explained. “A kangaroo’s movement is different than other animals since it hops instead of running.” Talk about cases that are hard to solve.
AImotive is currently testing around 1,000 simulated scenarios every night, with a steadily rising curve of successful tests. These scenarios are broken down into features, and the car’s behavior around those features is fed into a neural network. As the algorithms learn more features, the level of complexity the vehicles can handle goes up.
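A rough picture of what such a nightly regression run might look like, with hypothetical scenario tags and a placeholder simulator call standing in for the real 3D environment:

```python
import random

def run_scenario(policy, scenario):
    # Placeholder: a real simulator would replay the scenario in 3D and
    # report whether the driving policy handled it without incident.
    return random.random() < policy["skill"]

def nightly_regression(policy, scenarios):
    failures = [s for s in scenarios if not run_scenario(policy, s)]
    pass_rate = 1 - len(failures) / len(scenarios)
    return pass_rate, failures

# Hypothetical scenario tags; failed cases feed back into training data.
scenarios = [
    {"id": i, "tag": random.choice(
        ["cut-in", "shadowed-lane", "merging-truck", "animal-crossing"])}
    for i in range(1000)
]

pass_rate, failures = nightly_regression({"skill": 0.92}, scenarios)
print(f"pass rate: {pass_rate:.1%}, failures to retrain on: {len(failures)}")
```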
On the Road
After Kishonti and his colleagues filled me in on the details of their product, it was time to test it out. A safety driver sat in the driver’s seat, a computer operator in the passenger seat, and Kishonti and I in back. The driver maintained full control of the car until we merged onto the highway. Then he flicked the “Allowed” switch, his copilot pressed the “Active” switch, and he took his hands off the wheel.
What happened next, you ask?
A few things. El Capitan was going exactly the speed limit—65 miles per hour—which meant all the other cars were passing us. When a car merged in front of us or cut us off, El Cap braked accordingly (if a little abruptly). The monitor displayed the feed from each of the car’s cameras, plus multiple data fields and a simulation where a blue line marked the center of the lane, measured by the cameras tracking the lane markings on either side.
I noticed El Cap wobbling out of our lane a bit, but it wasn’t until two things happened in a row that I felt a little nervous: first we went under a bridge, then a truck pulled up next to us, both bridge and truck casting a complete shadow over our car. At that point El Cap lost it, and we swerved haphazardly to the right, narrowly missing the truck’s rear wheels. The safety driver grabbed the steering wheel and took back control of the car.
What happened, Kishonti explained, was that the shadows made it hard for the car’s cameras to see the lane markings. This was a new scenario the algorithm hadn’t previously encountered. If we’d only gone under a bridge or only been next to the truck for a second, El Cap may not have had so much trouble, but the two events happening in a row really threw the car for a loop—almost literally.
“This is a new scenario we’ll add to our testing,” Kishonti said. He added that another way for the algorithm to handle this type of scenario, rather than basing its speed and positioning on the lane markings, is to mimic nearby cars. “The human eye would see that other cars are still moving at the same speed, even if it can’t see details of the road,” he said.
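In code, that fallback idea might look something like the sketch below; the confidence threshold and function names are assumptions for illustration, not AImotive’s implementation.

```python
def lateral_reference(lane_confidence, marking_offset, neighbor_offsets):
    """Choose a lateral reference for lane keeping.

    Prefer the camera's lane-marking estimate; when confidence drops
    (say, in shadow under a bridge), fall back to the average lateral
    position of tracked neighboring vehicles.
    """
    CONFIDENCE_THRESHOLD = 0.6  # assumed tuning value
    if lane_confidence >= CONFIDENCE_THRESHOLD:
        return marking_offset
    if neighbor_offsets:
        return sum(neighbor_offsets) / len(neighbor_offsets)
    return 0.0  # nothing trackable: hold the current heading

print(lateral_reference(0.3, marking_offset=0.8, neighbor_offsets=[0.1, -0.05]))
```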
After another brief—and thankfully uneventful—hands-off cruise down the highway, the safety driver took over, exited the highway, and drove us back to the office.
Driving into the Future
I climbed out of the car feeling amazed not only that self-driving cars are possible, but that driving is possible at all. I squint when driving into a tunnel, swerve to avoid hitting a stray squirrel, and brake gradually at stop signs—all without consciously thinking to do so. On top of learning to steer, brake, and accelerate, self-driving software has to incorporate our brains’ and bodies’ unconscious (but crucial) reactions, like our pupils dilating to let in more light so we can see in a tunnel.
Despite all the progress of machine learning, artificial intelligence, and computing power, I have a wholly renewed appreciation for the thing that’s been in charge of driving up till now: the human brain.
Kishonti seemed to feel similarly. “I don’t think autonomous vehicles in the near future will be better than the best drivers,” he said. “But they’ll be better than the average driver. What we want to achieve is safe, good-quality driving for everyone, with scalability.”
AImotive is currently working with American tech firms and with car and truck manufacturers in Europe, China, and Japan.
Image Credit: Alex Oakenman / Shutterstock.com


#431678 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Can A.I. Be Taught to Explain Itself?
Cliff Kuang | New York Times
“Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the ‘black box’ problem—the inability to discern exactly what machines are doing when they’re teaching themselves novel skills—and it has become a central concern in artificial-intelligence research.”
BIOTECH
Semi-Synthetic Life Form Now Fully Armed and Operational
Antonio Regalado | MIT Technology Review
“By this year, the team had devised a more stable bacterium. But it wasn’t enough to endow the germ with a partly alien code—it needed to use that code to make a partly alien protein. That’s what Romesberg’s team, reporting today in the journal Nature, says it has done.”
COMPUTING
4 Strange New Ways to Compute
Samuel K. Moore | IEEE Spectrum
“With Moore’s Law slowing, engineers have been taking a cold hard look at what will keep computing going when it’s gone…What follows includes a taste of both the strange and the potentially impactful.”
INNOVATION
Google X and the Science of Radical Creativity
Derek Thompson | The Atlantic
“But what X is attempting is nonetheless audacious. It is investing in both invention and innovation. Its founders hope to demystify and routinize the entire process of making a technological breakthrough—to nurture each moonshot, from question to idea to discovery to product—and, in so doing, to write an operator’s manual for radical creativity.”
PRIVACY AND SECURITY
Uber Paid Hackers to Delete Stolen Data on 57 Million People
Eric Newcomer | Bloomberg
“Hackers stole the personal data of 57 million customers and drivers from Uber Technologies Inc., a massive breach that the company concealed for more than a year. This week, the ride-hailing firm ousted its chief security officer and one of his deputies for their roles in keeping the hack under wraps, which included a $100,000 payment to the attackers.”
Image Credit: singpentinkhappy / Shutterstock.com


#431671 The Doctor in the Machine: How AI Is ...

Artificial intelligence has received its fair share of hype recently. However, it’s hype that’s well-founded: IDC predicts worldwide spending on AI and cognitive computing will climb to a whopping $46 billion (with a “b”) by 2020, and all the tech giants are jumping on board faster than you can say “ROI.” But what is AI, exactly?
According to Hilary Mason, AI today is misused as a sort of catch-all term to describe “any system that uses data to do anything.” But it’s so much more than that. A truly artificially intelligent system is one that learns on its own, one that’s capable of crunching copious amounts of data in order to create associations and intelligently mimic actual human behavior.
It’s what powers the technology anticipating our next online purchase (Amazon), or the virtual assistant that deciphers our voice commands with incredible accuracy (Siri), or even the hipster-friendly recommendation engine that helps you discover new music before your friends do (Pandora). But AI is moving past these consumer-pleasing “nice-to-haves” and getting down to serious business: saving our butts.
Much in the same way robotics entered manufacturing, AI is making its mark in healthcare by automating mundane, repetitive tasks. This is especially true in the case of detecting cancer. By leveraging the power of deep learning, algorithms can now be trained to distinguish between sets of pixels in an image that represent cancer and sets that don’t—not unlike how Facebook’s image recognition software tags pictures of our friends without us having to type in their names first. This software can then scour millions of medical images (MRIs, CT scans, etc.) in a single day to detect anomalies on a scale humans just aren’t capable of. That’s huge.
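For a sense of what these models look like under the hood, here is a toy patch classifier in PyTorch; the architecture is a minimal sketch, far smaller than anything used clinically, and not any hospital’s or vendor’s actual system.

```python
import torch
from torch import nn

class PatchClassifier(nn.Module):
    """Tiny convolutional network that labels an image patch as
    suspicious or benign -- a stand-in for the kind of model described above."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # benign vs. suspicious

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = PatchClassifier()
patch = torch.randn(1, 1, 64, 64)   # one grayscale 64x64 scan patch
print(model(patch).softmax(dim=1))  # class probabilities (untrained, so ~50/50)
```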
As if that wasn’t enough, these algorithms are constantly learning and evolving, getting better at making these associations with each new data set that gets fed to them. Radiology, dermatology, and pathology will experience a giant upheaval as tech giants and startups alike jump in to bring these deep learning algorithms to a hospital near you.
In fact, some already are: the FDA recently gave their seal of approval for an AI-powered medical imaging platform that helps doctors analyze and diagnose heart anomalies. This is the first time the FDA has approved a machine learning application for use in a clinical setting.
But how efficient is AI compared to humans, really? Well, aside from the obvious fact that software programs don’t get bored or distracted or have to check Facebook every twenty minutes, AI is exponentially better than us at analyzing data.
Take, for example, IBM’s Watson. Watson analyzed genomic data from both tumor cells and healthy cells and was ultimately able to glean actionable insights in a mere 10 minutes. Compare that to the 160 hours it would have taken a human to analyze that same data. Diagnoses aside, AI is also being leveraged in pharmaceuticals to aid in the very time-consuming grunt work of discovering new drugs, and all the big players are getting involved.
But AI is far from being just a behind-the-scenes player. Gartner recently predicted that by 2025, 50 percent of the population will rely on AI-powered “virtual personal health assistants” for their routine primary care needs. What this means is that consumer-facing voice and chat-operated “assistants” (think Siri or Cortana) would, in effect, serve as a central hub of interaction for all our connected health devices and the algorithms crunching all our real-time biometric data. These assistants would keep us apprised of our current state of well-being, acting as a sort of digital facilitator for our personal health objectives and an always-on health alert system that would notify us when we actually need to see a physician.
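Stripped to its essentials, the always-on alert piece of such an assistant is just a rule over streaming biometrics. The heart-rate threshold and window below are invented for illustration, not medical guidance.

```python
from statistics import mean

RESTING_HR_ALERT = 100   # beats per minute; hypothetical threshold
READINGS_WINDOW = 7      # days of readings to average

def should_see_physician(daily_resting_hr):
    """Flag the user when a week of resting heart-rate readings trends high."""
    recent = daily_resting_hr[-READINGS_WINDOW:]
    return len(recent) == READINGS_WINDOW and mean(recent) > RESTING_HR_ALERT

print(should_see_physician([72, 75, 78, 101, 104, 108, 110, 112, 115, 118]))
```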
Slowly, and thanks to the tsunami of data and advancements in self-learning algorithms, healthcare is transitioning from a reactive model to more of a preventative model—and it’s completely upending the way care is delivered. Whether Elon Musk’s dystopian outlook on AI holds any weight or not is yet to be determined. But one thing’s certain: for the time being, artificial intelligence is saving our lives.
Image Credit: Jolygon / Shutterstock.com


#431603 What We Can Learn From the Second Life ...

For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are at the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
Google began selling Google Glass, the original smart glasses, in 2013 for $1,500 as prototypes for its acolytes: around 8,000 early adopters. Users could control the glasses with a touchpad or, after tilting the head back to activate them, with voice commands. Audio relay—as with several wearable products—is via bone conduction, which transmits sound by vibrating the skull bones of the user. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, regardless of whether it’s because we’re not used to them, or if it’s a more permanent feature, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with the ominous slogan posted on Google’s website: “Thanks for exploring with us.” Reminding the Glass users that they had always been referred to as “explorers”—beta-testing a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years that Google Glass was dormant, and now roaring back to life and commercially available, the Google Glass relaunch got under way in earnest in July of 2017. The difference here was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
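Put together, that killer application is a short loop: grab a frame, send it to an image-recognition model, and relay the confident labels back through audio. The sketch below fakes the vision call with canned labels; a real version would query a hosted neural network.

```python
def classify_image(image_bytes):
    # Placeholder for a cloud vision call returning (label, confidence) pairs.
    return [("stop sign", 0.94), ("street", 0.71)]

def annotate_view(image_bytes, speak):
    labels = [label for label, score in classify_image(image_bytes) if score > 0.8]
    if labels:
        speak("I can see: " + ", ".join(labels))

# The speak channel could be the glasses' bone-conduction audio.
annotate_view(b"...", speak=print)
```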
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com
