Tag Archives: environment

#431186 The Coming Creativity Explosion Belongs ...

Does creativity make human intelligence special?
It may appear so at first glance. Though machines can calculate, analyze, and even perceive, creativity may seem far out of reach. Perhaps this is because we find it mysterious, even in ourselves. How can the output of a machine be anything more than that which is determined by its programmers?
Increasingly, however, artificial intelligence is moving into creativity’s hallowed domain, from art to industry. And though much is already possible, the future is sure to bring ever more creative machines.
What Is Machine Creativity?
Robotic art is just one example of machine creativity, a rapidly growing sub-field that sits somewhere between the study of artificial intelligence and human psychology.
The winning paintings from the 2017 Robot Art Competition are strikingly reminiscent of those showcased each spring at university exhibitions for graduating art students. Like the works produced by skilled artists, the compositions dreamed up by the competition’s robotic painters are aesthetically ambitious. One robot-made painting features a man’s bearded face gazing intently out from the canvas, his eyes locking with the viewer’s. Another abstract painting, “inspired” by data from EEG signals, visually depicts the human emotion of misery with jagged, gloomy stripes of black and purple.
More broadly, a creative machine is software (sometimes encased in a robotic body) that synthesizes inputs to generate new and valuable ideas, solutions to complex scientific problems, or original works of art. In a process similar to that followed by a human artist or scientist, a creative machine begins its work by framing a problem. Next, its software specifies the requirements the solution should have before generating “answers” in the form of original designs, patterns, or some other form of output.
Although the notion of machine creativity sounds a bit like science fiction, the basic concept is one that has been slowly developing for decades.
Nearly 50 years ago, while still a high school student, inventor and futurist Ray Kurzweil created software that could analyze the patterns in musical compositions and then compose new melodies in a similar style. Aaron, one of the world’s most famous painting robots, has been hard at work since the 1970s.
Industrial designers have used an automated, algorithm-driven process for decades to design computer chips (or machine parts) whose layout (or form) is optimized for a particular function or environment. Emily Howell, a computer program created by David Cope, writes original works in the style of classical composers, some of which have been performed by human orchestras to live audiences.
What’s different about today’s new and emerging generation of robotic artists, scientists, composers, authors, and product designers is their ubiquity and power.

“The recent explosion of artificial creativity has been enabled by the rapid maturation of the same exponential technologies that have already re-drawn our daily lives.”

I’ve already mentioned the rapidly advancing fields of robotic art and music. In the realm of scientific research, so-called “robotic scientists” such as Eureqa and Adam and Eve develop new scientific hypotheses; their “insights” have contributed to breakthroughs that are cited by hundreds of academic research papers. In the medical industry, creative machines are hard at work creating chemical compounds for new pharmaceuticals. After it read over seven million words of 20th century English poetry, a neural network developed by researcher Jack Hopkins learned to write passable poetry in a number of different styles and meters.
The recent explosion of artificial creativity has been enabled by the rapid maturation of the same exponential technologies that have already re-drawn our daily lives, including faster processors, ubiquitous sensors and wireless networks, and better algorithms.
As they continue to improve, creative machines—like humans—will perform a broad range of creative activities, ranging from everyday problem solving (sometimes known as “Little C” creativity) to producing once-in-a-century masterpieces (“Big C” creativity). A creative machine’s outputs could range from a design for a cast for a marble sculpture to a schematic blueprint for a clever new gadget for opening bottles of wine.
In the coming decades, by automating the process of solving complex problems, creative machines will again transform our world. Creative machines will serve as a versatile source of on-demand talent.
In the battle to recruit a workforce that can solve complex problems, creative machines will put small businesses on equal footing with large corporations. Art and music lovers will enjoy fresh creative works that re-interpret the style of ancient disciplines. People with a health condition will benefit from individualized medical treatments, and low-income people will receive top-notch legal advice, to name but a few potentially beneficial applications.
How Can We Make Creative Machines, Unless We Understand Our Own Creativity?
One of the most intriguing—yet unsettling—aspects of watching robotic arms paint skillfully in oils is that we humans still do not understand our own creative process. Over the centuries, several different civilizations have devised a variety of models to explain creativity.
The ancient Greeks believed that poets drew inspiration from a transcendent realm parallel to the material world where ideas could take root and flourish. In the Middle Ages, philosophers and poets attributed our peculiarly human ability to “make something of nothing” to an external source, namely divine inspiration. Modern academic study of human creativity has generated vast reams of scholarship, but despite the value of these insights, the human imagination remains a great mystery, second only to that of consciousness.
Today, the rise of machine creativity demonstrates, once again, that we do not have to fully understand a biological process in order to emulate it with advanced technology.
Past experience has shown that jet planes can fly higher and faster than birds by using the forward thrust of an engine rather than wings. Submarines propel themselves forward underwater without fins or a tail. Deep learning neural networks identify objects in randomly-selected photographs with super-human accuracy. Similarly, using a fairly straightforward software architecture, creative software (sometimes paired with a robotic body) can paint, write, hypothesize, or design with impressive originality, skill, and boldness.
At the heart of machine creativity is simple iteration. No matter what sort of output they produce, creative machines fall into one of three categories depending on their internal architecture.
Briefly, the first group consists of software programs that use traditional rule-based, or symbolic, AI; the second group uses evolutionary algorithms; and the third group uses a form of machine learning called deep learning, which has already revolutionized voice and facial recognition software.
1) Symbolic creative machines are the oldest artificial artists and musicians. In this approach—also known as “good old-fashioned AI” (GOFAI) or symbolic AI—the human programmer plays a key role by writing a set of step-by-step instructions to guide the computer through a task. Despite the fact that symbolic AI is limited in its ability to adapt to environmental changes, it’s still possible for a robotic artist programmed this way to create an impressively wide variety of outputs.
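As a toy illustration of the symbolic approach, consider an L-system, a classic hand-coded generative technique from algorithmic art. The rewrite rules below are the textbook “algae” example, chosen for brevity; they are not drawn from any system mentioned above. The programmer writes every rule explicitly, yet the output grows in complexity with each pass:

```python
# A minimal rule-based (symbolic) generator: every rewrite rule is written
# by hand, yet repeated application yields increasingly intricate output.
RULES = {"A": "AB", "B": "A"}  # the classic "algae" L-system rules

def generate(axiom, steps):
    s = axiom
    for _ in range(steps):
        # Rewrite every symbol according to the fixed rule table.
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

print(generate("A", 5))  # -> ABAABABAABAAB
```

Note that the output lengths follow the Fibonacci sequence (1, 2, 3, 5, 8, 13, …), a hint at how simple symbolic rules can yield structured, non-obvious results.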
2) Evolutionary algorithms (EA) have been in use for several decades and remain powerful tools for design. In this approach, potential solutions “compete” in a software simulator in a Darwinian process reminiscent of biological evolution. The human programmer specifies a “fitness criterion” that will be used to score and rank the solutions generated by the software. The software then generates a “first generation” population of random solutions (which typically are pretty poor in quality), scores this first generation of solutions, and selects the top 50% (those random solutions deemed to be the best “fit”). The software then takes another pass and recombines the “winning” solutions to create the next generation and repeats this process for thousands (and sometimes millions) of generations.
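The loop just described can be sketched in a few dozen lines. The fitness criterion below (character matches against a target phrase) is a toy invented for illustration, but the structure—random first generation, scoring, keeping the top 50%, recombining to form the next generation—follows the process outlined above:

```python
import random

random.seed(42)  # for reproducibility of this sketch

TARGET = "creative machine"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # The "fitness criterion": count of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def random_solution():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def recombine(a, b):
    # Uniform crossover plus a small per-character chance of mutation.
    child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
    for i in range(len(child)):
        if random.random() < 0.05:
            child[i] = random.choice(ALPHABET)
    return "".join(child)

def evolve(pop_size=100, generations=200):
    # First generation: random (and typically poor) solutions.
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the top 50%
        children = [recombine(random.choice(survivors), random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Real design applications replace the toy fitness function with a physics or performance simulator, but the Darwinian loop is the same.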
3) Generative deep learning (DL) neural networks represent the newest of the three software architectures; DL is also the most data-dependent and resource-intensive. First, a human programmer “trains” a DL neural network to recognize a particular feature in a dataset, for example, an image of a dog in a stream of digital images. Next, the standard “feed forward” process is reversed and the DL neural network begins to generate the feature, for example, eventually producing new and sometimes original images of (or poetry about) dogs. Generative DL networks have tremendous and unexplored creative potential and are able to produce a broad range of original outputs, from paintings to music to poetry.
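One way to sketch the “reversed feed-forward” idea in miniature is activation maximization: freeze a trained detector and run gradient ascent on the *input* until the detector responds strongly. The single-neuron “network” and its weights below are invented for illustration, standing in for a real trained model:

```python
import math

# Hand-set weights of a single sigmoid neuron, a stand-in for a network
# trained to detect some feature. (In a real system these come from training.)
W = [2.0, -1.0, 0.5]

def forward(x):
    # Standard feed-forward pass: weighted sum through a sigmoid.
    z = sum(w * xi for w, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def generate(steps=500, lr=0.1):
    # "Reverse" the network: adjust the input, not the weights,
    # climbing the gradient of the output with respect to the input.
    x = [0.0, 0.0, 0.0]  # start from a blank "canvas"
    for _ in range(steps):
        y = forward(x)
        g = y * (1.0 - y)  # d(output)/d(z) for a sigmoid
        x = [xi + lr * g * w for xi, w in zip(x, W)]  # dz/dx_i = w_i
    return x, forward(x)

x, score = generate()
print(x, score)
```

The generated input drifts toward whatever pattern excites the detector most, which is the core mechanism behind neural networks that “dream up” images of the features they were trained to recognize.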
The Coming Explosion of Machine Creativity
In the near future as Moore’s Law continues its work, we will see sophisticated combinations of these three basic architectures. Since the 1950s, artificial intelligence has steadily mastered one human ability after another, and in the process of doing so, has reduced the cost of calculation, analysis, and most recently, perception. When creative software becomes as inexpensive and ubiquitous as analytical software is today, humans will no longer be the only intelligent beings capable of creative work.
This is why I have to bite my tongue when I hear the well-intended (but shortsighted) advice frequently dispensed to young people that they should pursue work that demands creativity to help them “AI-proof” their futures.
Instead, students should gain skills to harness the power of creative machines.
There are two skills in which humans excel that will enable us to remain useful in a world of ever-advancing artificial intelligence. One, the ability to frame and define a complex problem so that it can be handed off to a creative machine to solve. And two, the ability to communicate the value of both the framework and the proposed solution to the other humans involved.
What will happen to people when creative machines begin to capably tread on intellectual ground that was once considered the sole domain of the human mind, and before that, the product of divine inspiration? While machines engaging in Big C creativity—e.g., oil painting and composing new symphonies—tend to garner controversy and make the headlines, I suspect the real world-changing application of machine creativity will be in the realm of everyday problem solving, or Little C. The mainstream emergence of powerful problem-solving tools will help people create abundance where there was once scarcity.
Image Credit: adike / Shutterstock.com

Posted in Human Robots | Leave a comment

#431178 Soft Robotics Releases Development Kit ...

Cambridge, MA – Soft Robotics Inc., which has built a fundamentally new class of robotic grippers, announced the release of its expanded and upgraded Soft Robotics Development Kit, SRDK 2.0.

The Soft Robotics Development Kit 2.0 comes complete with:

Robot tool flange mounting plate
4-, 5- and 6-position hub plates
Tool Center Point
Soft Robotics Control Unit G2
6 rail-mounted, 4-accordion actuator modules
Custom pneumatic manifold
Mounting hardware and accessories

Where the SRDK 1.0 included five 4-accordion actuator modules and supported gripper configurations of two to five actuators, the SRDK 2.0 contains six 4-accordion actuator modules plus a six-position hub, giving users the ability to configure six-actuator test tools. This expands the use of the Development Kit to larger product applications, such as large bagged and pouched items, IV bags, bags of nuts, bread and other food items.

SRDK 2.0 also contains an upgraded Soft Robotics Control Unit (SRCU G2) – the proprietary system that controls all software and hardware with one turnkey pneumatic operation. The upgraded SRCU features new software with a cleaner, user-friendly interface and an IP65 rating. Highly intuitive, the software can store up to eight grip profiles and allows for very precise adjustments to actuation and vacuum.

Also new with the release of SRDK 2.0 is the introduction of several accessory kits that expand the number of configurations and product applications available for testing.

Accessory Kit 1 – For SRDK 1.0 users only – includes the six-position hub and 4-accordion actuators now included in SRDK 2.0
Accessory Kit 2 – For SRDK 1.0 or 2.0 users – includes 2-accordion actuators
Accessory Kit 3 – For SRDK 1.0 or 2.0 users – includes 3-accordion actuators

The shorter 2- and 3-accordion actuators provide increased stability for high-speed applications, increased placement precision, and higher grip force, and are optimized for gripping small, shallow objects.

Designed to plug and play with any existing robot currently on the market, the Soft Robotics Development Kit 2.0 gives end users and OEM integrators the ability to customize, test and validate their ideal Soft Robotics solution, with their own equipment, in their own environment.

Once an ideal solution has been found, the Soft Robotics team will take those exact specifications and build a production-grade tool for implementation into the manufacturing line. And, it doesn’t end there. Created to be fully reusable, the process – configure, test, validate, build, production – can start over again as many times as needed.

See the new SRDK 2.0 on display for the first time at PACK EXPO Las Vegas, September 25 – 27, 2017 in Soft Robotics booth S-5925.

Learn more about the Soft Robotics Development Kit at www.softroboticsinc.com/srdk.
Photo Credit: Soft Robotics – www.softroboticsinc.com
###
About Soft Robotics
Soft Robotics designs and builds soft robotic gripping systems and automation solutions
that can grasp and manipulate items of varying size, shape and weight. Spun out of the
Whitesides Group at Harvard University, Soft Robotics is the only company to be
commercializing this groundbreaking and proprietary technology platform. Today, the
company is a global enterprise solving previously off-limits automation challenges for
customers in food & beverage, advanced manufacturing and ecommerce. Soft Robotics’
engineers are building an ecosystem of robots, control systems, data and machine
learning to enable the workplace of the future. For more information, please visit
www.softroboticsinc.com.

Media contact:
Jennie Kondracki
The Kondracki Group, LLC
262-501-4507
jennie@kondrackigroup.com
The post Soft Robotics Releases Development Kit 2.0 appeared first on Roboticmagazine.

Posted in Human Robots | Leave a comment

#431171 SceneScan: Real-Time 3D Depth Sensing ...

Nerian Introduces a High-Performance Successor for the Proven SP1 System
Stereo vision, which is the three-dimensional perception of our environment with two sensors, like our eyes, is a well-known technology. As a passive method – there is no need to emit light in the visible or invisible spectral range – this technology can open up new possibilities for three-dimensional perception, even under difficult conditions.
But, as so often, the devil is in the details: for most applications, the software implementation with standard PCs, but also with graphics processors, is too slow. Another complicating factor is that these hardware platforms are expensive and not energy-efficient. The solution is to instead use specialized hardware for image processing. A programmable logic device – a so-called FPGA – can greatly accelerate the image processing.
As a technology leader, Nerian Vision Technologies has been following this path successfully for the past two years with the SP1 stereo vision system, which has enabled completely new applications in the fields of robotics, automation technology, medical technology, autonomous driving and other domains. Now the company introduces two successors:
SceneScan and SceneScan Pro. Real eye-catchers in a double sense: stereo vision in an elegant design! But more important, of course, are the significantly improved inner workings of the two new models in comparison to their predecessor. The new hardware allows processing rates of up to 100 frames per second at resolutions of up to 3 megapixels, which leaves the SP1 far behind:
Photo Credit: Nerian Vision Technologies – www.nerian.com

The table illustrates the difference: while SceneScan Pro has the highest possible computing power and is designed for the most demanding applications, SceneScan has been cost-reduced for applications with lower requirements. The customer can thus optimize their embedded vision solution both in terms of costs and technology.
The new duo is completed by Nerian’s proven Karmin stereo cameras. Of course, industrial USB3 Vision cameras from other manufacturers are also supported. This combination not only supports the above-mentioned applications even better, but also facilitates completely new and innovative ones. If required, customer-specific adaptations are also possible.
Contact
Nerian Vision Technologies
Owner: Dr. Konstantin Schauwecker
Gotenstr. 9
70771 Leinfelden-Echterdingen
Germany
Phone: +49 711 / 2195 9414
Email: service@nerian.com
Website: http://nerian.com
Press Release Authored By: Nerian Vision Technologies
The post SceneScan: Real-Time 3D Depth Sensing Through Stereo Vision appeared first on Roboticmagazine.

Posted in Human Robots | Leave a comment

#431170 This Week’s Awesome Stories From ...

AUGMENTED REALITY
ZED Mini Turns Rift and Vive Into an AR Headset From the Future
Ben Lang | Road to VR
“When attached, the camera provides stereo pass-through video and real-time depth and environment mapping, turning the headsets into dev kits emulating the capabilities of high-end AR headsets of the future. The ZED Mini will launch in November.”
ROBOTICS
Life-Size Humanoid Robot Is Designed to Fall Over (and Over and Over)
Evan Ackerman | IEEE Spectrum
“The researchers came up with a new strategy for not worrying about falls: not worrying about falls. Instead, they’ve built their robot from the ground up with an armored structure that makes it totally okay with falling over and getting right back up again.”
SPACE
Russia Will Team up With NASA to Build a Lunar Space Station
Anatoly Zak | Popular Mechanics
“NASA and its partner agencies plan to begin the construction of the modular habitat known as the Deep-Space Gateway in orbit around the Moon in the early 2020s. It will become the main destination for astronauts for at least a decade, extending human presence beyond the Earth’s orbit for the first time since the end of the Apollo program in 1972. Launched on NASA’s giant SLS rocket and serviced by the crews of the Orion spacecraft, the outpost would pave the way to a mission to Mars in the 2030s.”
TRANSPORTATION
Dubai Starts Testing Crewless Two-Person ‘Flying Taxis’
Thuy Ong | The Verge
“The drone was uncrewed and hovered 200 meters high during the test flight, according to Reuters. The AAT, which is about two meters high, was supplied by specialist German manufacturer Volocopter, known for its eponymous helicopter drone hybrid with 18 rotors…Dubai has a target for autonomous transport to account for a quarter of total trips by 2030.”
AUTONOMOUS CARS
Toyota Is Trusting a Startup for a Crucial Part of Its Newest Self-Driving Cars
Johana Bhuiyan | Recode
“Toyota unveiled the next generation of its self-driving platform today, which features more accurate object detection technology and mapping, among other advancements. These test cars—which Toyota is testing on both a closed driving course and on some public roads—will also be using Luminar’s lidar sensors, or radars that use lasers to detect the distance to an object.”
Image Credit: KHIUS / Shutterstock.com

Posted in Human Robots | Leave a comment

#431000 Japan’s SoftBank Is Investing Billions ...

Remember the 1980s movie Brewster’s Millions, in which a minor league baseball pitcher (played by Richard Pryor) must spend $30 million in 30 days to inherit $300 million? Pryor goes on an epic spending spree for a bigger payoff down the road.
One of the world’s biggest public companies is making that film look like a weekend in the Hamptons. Japan’s SoftBank Group, led by its indefatigable CEO Masayoshi Son, is shooting to invest $100 billion over the next five years toward what the company calls the information revolution.
The newly-created SoftBank Vision Fund, with a handful of key investors, appears ready to almost single-handedly hack the technology revolution. Announced only last year, the fund had its first major close in May with $93 billion in committed capital. The rest of the money is expected to be raised this year.
The fund is unprecedented. Data firm CB Insights notes that the SoftBank Vision Fund, if and when it hits the $100 billion mark, will equal the total amount that VC-backed companies received in all of 2016—$100.8 billion across 8,372 deals globally.
The money will go toward both billion-dollar corporations and startups, with a minimum $100 million buy-in. The focus is on core technologies like artificial intelligence, robotics and the Internet of Things.
Aside from being Japan’s richest man, Son is also a futurist who has predicted the singularity, the moment in time when machines will become smarter than humans and technology will progress exponentially. Son pegs the date as 2047. He appears to be hedging that bet in the biggest way possible.
Show Me the Money
Ostensibly a telecommunications company, SoftBank Group was founded in 1981 and started investing in internet technologies by the mid-1990s. Son infamously lost about $70 billion of his own fortune after the dot-com bubble burst around 2001. The company itself has a market cap of nearly $90 billion today, about half of where it was during the heydays of the internet boom.
The ups and downs did nothing to slake the company’s thirst for technology. It has made nine acquisitions and more than 130 investments since 1995. In 2017 alone, SoftBank has poured billions into nearly 30 companies and acquired three others. Some of those investments are being transferred to the massive SoftBank Vision Fund.
SoftBank is not going it alone with the new fund. More than half of the money—$60 billion—comes via the Middle East through Saudi Arabia’s Public Investment Fund ($45 billion) and Abu Dhabi’s Mubadala Investment Company ($15 billion). Other players at the table include Apple, Qualcomm, Sharp, Foxconn, and Oracle.
During a company conference in August, Son notes the SoftBank Vision Fund is not just about making money. “We don’t just want to be an investor just for the money game,” he says through a translator. “We want to make the information revolution. To do the information revolution, you can’t do it by yourself; you need a lot of synergy.”
Off to the Races
The fund has wasted little time creating that synergy. In July, its first official investment, not surprisingly, went to a company that specializes in artificial intelligence for robots—Brain Corp. The San Diego-based startup uses AI to turn manual machines into self-driving robots that navigate their environments autonomously. The first commercial application appears to be a smart, commercial-grade cross between a Roomba and a Zamboni.

A second investment in July was a bit more surprising. SoftBank and its fund partners led a $200 million mega-round for Plenty, an agricultural tech company that promises to reshape farming by going vertical. Using IoT sensors and machine learning, Plenty claims its urban vertical farms can produce 350 times more vegetables than a conventional farm using 1 percent of the water.
Round Two
The spending spree continued into August.
The SoftBank Vision Fund led a $1.1 billion investment into a little-known biotechnology company called Roivant Sciences that goes dumpster diving for abandoned drugs and then creates subsidiaries around each therapy. For example, Axovant Sciences is devoted to neurology while Urovant focuses on urology. TechCrunch reports that Roivant is also creating a tech-focused subsidiary, called Datavant, that will use AI for drug discovery and other healthcare initiatives, such as designing clinical trials.
The AI angle may partly explain SoftBank’s interest in backing the biggest private placement in healthcare to date.
Also in August, SoftBank Vision Fund led a mix of $2.5 billion in primary and secondary capital investments into Flipkart, India’s largest private company, in what was touted as the largest single investment in a private Indian company. Flipkart is an e-commerce company in the mold of Amazon.
The fund tacked on a $250 million investment round in August to Kabbage, an Atlanta-based startup in the alt-lending sector for small businesses. It ended big with a $4.4 billion investment into a co-working company called WeWork.
Betterment of Humanity
And those investments only include companies that SoftBank Vision Fund has backed directly.
SoftBank the company will offer—or has already turned over—previous investments to the Vision Fund in more than a half-dozen companies. Those assets include its shares in Nvidia, which produces chips for AI applications, and its first serious foray into autonomous driving with Nauto, a California startup that uses AI and high-tech cameras to retrofit vehicles to improve driving safety. The more miles the AI logs, the more it learns about safe and unsafe driving behaviors.
Other recent acquisitions, such as Boston Dynamics, a well-known US robotics company owned briefly by Google’s parent company Alphabet, will remain under the SoftBank Group umbrella for now.

This spending spree raises the question: what is the overall vision behind SoftBank’s relentless pursuit of technology companies? A spokesperson for SoftBank told Singularity Hub that the “common thread among all of these companies is that they are creating the foundational platforms for the next stage of the information revolution.” All of the companies, he adds, share SoftBank’s criteria of working toward “the betterment of humanity.”
While the SoftBank portfolio is diverse, from agtech to fintech to biotech, it’s obvious that SoftBank is betting on technologies that will connect the world in new and amazing ways. For instance, it wrote a $1 billion check last year in support of OneWeb, which aims to launch 900 satellites to bring internet to everyone on the planet. (It will also be turned over to the SoftBank Vision Fund.)
SoftBank also led a half-billion equity investment round earlier this year in a UK company called Improbable, which employs cloud-based distributed computing to create virtual worlds for gaming. The next step for the company is massive simulations of the real world that support simultaneous users who can experience the same environment together. (Improbable is another candidate for the SoftBank Vision Fund.)
Even something as seemingly low-tech as WeWork, which provides a desk or office in locations around the world, points toward a more connected planet.
In the end, the singularity is about bringing humanity together through technology. No one said it would be easy—or cheap.
Stock Media provided by xackerz / Pond5

Posted in Human Robots | Comments Off on Japan’s SoftBank Is Investing Billions ...