Tag Archives: edition
Millions of years of evolution have allowed animals to develop some elegant and highly efficient solutions to problems like locomotion, flight, and dexterity. As Boston Dynamics unveils its latest mechanical animals, here’s a rundown of nine recent robots that borrow from nature and why.
SpotMini – Boston Dynamics
Starting with BigDog in 2005, the US company has built up a whole stable of four-legged robots. The first was designed as a robotic packhorse for soldiers, borrowing the quadrupedal locomotion of animals to travel over terrain too rough for conventional vehicles.
The US Army ultimately rejected the robot for being too noisy, according to the Guardian, but since then the company has scaled down its design, first to the Spot, then a first edition of the SpotMini that came out last year.
The latter came with a robotic arm where its head should be and was touted as a domestic helper, but a sleeker second edition without the arm was released earlier this month. There’s little detail on what the new robot is designed for, but the more polished design suggests a more consumer-focused purpose.
OctopusGripper – Festo
Festo has released a long line of animal-inspired machines over the years, from a mechanical kangaroo to robotic butterflies. Its latest creation isn’t a full animal—instead it’s a gripper based on an octopus tentacle that can be attached to the end of a robotic arm.
The pneumatically powered device is made of soft silicone and features two rows of suction cups on its inner edge. By applying compressed air, the tentacle can wrap around a wide variety of differently shaped objects, just like its natural counterpart, and a vacuum can be applied to the larger suction cups to grip the object securely. Because it’s soft, the gripper holds promise for robots required to operate safely in collaboration with humans.
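The two-stage grip, inflate to wrap and then vacuum to hold, can be sketched as a toy state model. Everything here (the `SoftGripper` class, the pressure values) is a hypothetical illustration, not Festo’s actual control interface:

```python
class SoftGripper:
    """Toy model of a pneumatic soft gripper: positive pressure curls the
    silicone tentacle around an object; vacuum on the suction cups locks it."""

    def __init__(self):
        self.bend_pressure_kpa = 0.0  # compressed air in the bending chamber
        self.cup_vacuum_kpa = 0.0     # vacuum drawn on the larger suction cups
        self.holding = False

    def wrap(self, pressure_kpa):
        # Stage 1: apply compressed air so the tentacle curls passively
        # around whatever shape it meets.
        self.bend_pressure_kpa = pressure_kpa

    def secure(self, vacuum_kpa):
        # Stage 2: pull a vacuum on the suction cups; the grip only holds
        # if the tentacle is already wrapped around the object.
        self.cup_vacuum_kpa = vacuum_kpa
        self.holding = self.bend_pressure_kpa > 0 and vacuum_kpa > 0

    def release(self):
        # Vent both lines; the soft body springs back on its own.
        self.bend_pressure_kpa = 0.0
        self.cup_vacuum_kpa = 0.0
        self.holding = False


gripper = SoftGripper()
gripper.wrap(pressure_kpa=80)
gripper.secure(vacuum_kpa=40)
print(gripper.holding)  # True: wrapped and under vacuum
```

Because both stages are pressure-driven and the body is soft, failures degrade gently rather than crushing the object, which is part of why such grippers suit work alongside humans.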
CRAM – University of California, Berkeley
Cockroaches are renowned for their hardiness and ability to disappear down cracks that seem far too small for them. Researchers at UC Berkeley decided these capabilities could be useful for search and rescue missions and so set about experimenting on the insects to find out their secrets.
They found the bugs can squeeze into gaps a fifth of their normal standing height by splaying their legs out to the side without significantly slowing themselves down. So they built a palm-sized robot with a jointed plastic shell that could do the same to squeeze into crevices half its normal height.
Snake Robot – Carnegie Mellon University
Search and rescue missions are a common theme for animal-inspired robots, but the snake robot built by CMU researchers is one of the first to be tested in a real disaster.
A team of roboticists from the university helped Mexican Red Cross workers search collapsed buildings for survivors after the 7.1-magnitude earthquake that struck Mexico City in September. The snake design provides a small diameter and the ability to move in almost any direction, which makes the robot ideal for accessing tight spaces, though the team was unable to locate any survivors.
The snake currently features a camera on the front, but researchers told IEEE Spectrum that the experience helped them realize they should also add a microphone to listen for people trapped under the rubble.
Bio-Hybrid Stingray – Harvard University
Taking more than just inspiration from the animal kingdom, a group from Harvard built a robotic stingray out of silicone and rat heart muscle cells.
The robot uses the same synchronized undulations along the edge of its fins to propel itself as a ray does. But while a ray has two sets of muscles to pull the fins up and down, the new device has only one that pulls them down, with a springy gold skeleton that pulls them back up again. The cells are also genetically modified to be activated by flashes of light.
The project’s leader eventually hopes to engineer a human heart, and both his stingray and an earlier jellyfish bio-robot are primarily aimed at better understanding how that organ works.
Bat Bot – Caltech
Most recent advances in drone technology have come from quadcopters, but Caltech engineers think rigid devices with rapidly spinning propellers are probably not ideal for use in close quarters with humans.
That’s why they turned to soft-winged bats for inspiration. Mimicking one is no easy feat, though: bats use more than 40 joints with each flap of their wings, so the team had to pare the design down to nine joints to keep the robot from becoming too bulky. The simplified bat can’t ascend yet, but its onboard computer and sensors let it autonomously carry out glides, turns, and dives.
Salto – UC Berkeley
While even the most advanced robots tend to plod around, tree-dwelling animals have the ability to spring from branch to branch to clear obstacles and climb quickly. This could prove invaluable for search and rescue robots by allowing them to quickly traverse disordered rubble.
UC Berkeley engineers turned to the Senegal bush baby for inspiration after determining it scored highest in “vertical jumping agility”—a combination of how high and how frequently an animal can jump. They recreated its ability to get into a super-low crouch that stores energy in its tendons to create a robot that could carry out parkour-style double jumps off walls to quickly gain height.
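The agility metric itself is simply height per jump multiplied by jump rate, yielding an average upward speed in meters per second. A minimal sketch, using placeholder numbers rather than the published bush baby or Salto measurements:

```python
def vertical_jumping_agility(jump_height_m, jump_frequency_hz):
    """Vertical jumping agility: the average upward speed achievable
    by repeated jumping, in meters per second."""
    return jump_height_m * jump_frequency_hz


# Illustrative comparison only (placeholder values, not measured data):
animal = vertical_jumping_agility(jump_height_m=1.7, jump_frequency_hz=1.3)
robot = vertical_jumping_agility(jump_height_m=1.0, jump_frequency_hz=1.75)
print(f"animal: {animal:.2f} m/s, robot: {robot:.2f} m/s")
```

The metric rewards exactly the trade-off the bush baby excels at: a moderately high jump it can repeat quickly beats a single spectacular leap followed by a slow reset.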
Pleurobot – École Polytechnique Fédérale de Lausanne
Normally robots are masters of air, land, or sea, but the robotic salamander built by researchers at EPFL can both walk and swim.
Its designers used X-ray videos to carefully study how the amphibians move, then used that data to build a true-to-life robotic version with 3D-printed bones, motorized joints, and a synthetic nervous system made up of electronic circuitry.
The robot’s low center of mass and segmented legs make it great at navigating rough terrain without losing balance, and the ability to swim gives added versatility. They also hope it will help paleontologists gain a better understanding of the movements of the first tetrapods to transition from water to land, which salamanders are the best living analog of.
Eelume – Eelume
A snakelike body isn’t only useful on land—eels are living proof it’s an efficient way to travel underwater, too. Norwegian robotics company Eelume has borrowed these principles to build a robot capable of sub-sea inspection, maintenance, and repair.
The modular design allows operators to put together their own favored configuration of joints and payloads such as sensors and tools. And while an early version of the robot used the same method of locomotion as an eel, the latest version undergoing sea trials has added a variety of thrusters for greater speeds and more maneuverability.
Image Credit: Boston Dynamics / YouTube
For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
Google began selling Google Glass, the original smart glasses, in 2013, offering $1,500 prototypes to its acolytes: around 8,000 early adopters. Users could control the glasses with a touchpad or, after activating them by tilting the head back, with voice commands. Audio, as with several wearable products, is relayed via bone conduction, which transmits sound by vibrating the bones of the user’s skull. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, regardless of whether it’s because we’re not used to them, or if it’s a more permanent feature, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with an ominous slogan posted on Google’s website: “Thanks for exploring with us.” Reminding Glass users that they had always been referred to as “explorers”, beta testers in many ways, it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years the consumer version lay dormant, Google Glass roared back to life as a commercially available product, with the relaunch getting under way in earnest in July of 2017. The difference this time was the specific audience: factory workers who need hands-free computing because the job keeps their hands occupied.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine surgeons receiving real-time information during operations, or doctors augmented by artificial intelligence that offers additional diagnostic information or follow-up questions in response to a patient’s symptoms. The quest is on to develop a healthcare AI that can provide recommendations in response to natural-language queries. The famously untidy doctor’s handwriting, and the associated death toll, could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than letting people check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
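That real-time annotation loop can be sketched end to end with stub stages. Every name and function below is a hypothetical stand-in, not a Google API; the stubs only mark where a vision model and a translation service would plug in:

```python
def recognize_text(frame):
    # Stand-in for the vision stage; a real system would run a neural
    # network (on-device or via a cloud service) over the camera frame.
    return frame["text"]


def translate(text, target="en"):
    # Stand-in for the translation stage; a tiny glossary replaces
    # what would really be a machine-translation service.
    glossary = {"sortie": "exit", "ausgang": "exit"}
    return glossary.get(text.lower(), text)


def annotate(frame):
    # The killer loop: camera frame in, recognized text out, translated
    # and ready to be overlaid in the eyeline or spoken aloud.
    text = recognize_text(frame)
    return f"{text} -> {translate(text)}"


frame = {"text": "Sortie"}  # pretend camera frame showing a French sign
print(annotate(frame))  # Sortie -> exit
```

The point of the sketch is the shape of the pipeline, not the stubs: each stage is swappable, and latency at any one of them decides whether the annotation still counts as “real time.”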
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com
Halloween has never been my holiday of choice. Why? Because scary things, well, actually scare me. But here in the Bay Area, adults go nuts for Halloween. This year, technology companies are showing some serious commitment to Halloween too, and they’re using technology to amp up the fright factor—like creating virtual reality simulated haunted houses and using artificial intelligence to generate ridiculously scary images. I’ll be avoiding these tech-induced terrors this weekend, but here are a few stories we…
Singapore International Robo Expo debuts as the robotics sector is poised for accelerated growth
In partnership with Experia Events, the Singapore Industrial Automation Association sets its sights on boosting the robotics solutions industry with this strategic global platform for innovation and technology
Singapore, 18 October 2016 – The first Singapore International Robo Expo (SIRE), organised by Experia Events and co-organised by the Singapore Industrial Automation Association (SIAA), will be held from 1 to 2 November 2016, at Sands Expo and Convention Centre, Marina Bay Sands.
Themed Forging the Future of Robotics Solutions, SIRE will comprise an exhibition, product demonstrations, networking sessions and conferences. SIRE aims to be the global platform for governments, the private sector and academia to engage in dialogues, share industry best practices, network, forge partnerships, and explore funding opportunities for the adoption of robotics solutions.
“SIRE debuts at a time when robotics has been gaining traction worldwide due to the need for automation and better productivity. The latest World Robotics Report by the International Federation of Robotics has also identified Singapore as a market with one of the highest robot densities in manufacturing – giving us more opportunities for further development in this field, as well as its extension into the services sectors.
With the S$450 million pledged by the Singapore government to the National Robotics Programme to develop the industry over the next three years, SIRE is aligned with these goals to cultivate the adoption of robotics and support the growing industry. As an association, we are constantly looking for ways to drive robotics adoption, foster collaboration among partners, and provide funding support for our members. SIRE is precisely the strategic platform for this,” said Mr Oliver Tian, President, SIAA.
SIRE has attracted strong interest from institutes of higher learning (IHLs), research institutes, and local and international enterprises, with innovation and technology applicable to a vast range of industries from manufacturing to healthcare.
ST Kinetics, the Title Sponsor for the inaugural edition of the event, is one of the key exhibitors, together with other leading industry players such as ABB, Murata, Panasonic, SICK Pte Ltd, and Tech Avenue amongst others. Emerging SMEs such as H3 Dynamics, Design Tech Technologies and SMP Robotics Singapore will also showcase their innovations at the exhibition. Participating research institute, A*STAR’s SIMTech, and other IHLs supporting the event include Ngee Ann Polytechnic, Republic Polytechnic and the Institute of Technical Education (ITE).
Visitors will also be able to view “live” demonstrations at the Demo Zone and get up close with the latest innovations and technologies. Key highlights at the zone include the world’s only fully autonomous outdoor security robot, developed by SMP Robotics Singapore, as well as ABB’s YuMi (IRB 14000), a collaborative robot designed to work safely in close collaboration and proximity with humans. Dynamic Stabilization Systems, SIMTech and Design Tech will also demonstrate the capabilities of their robotic innovations at the zone.
At the Singapore International Robo Convention, key speakers representing regulators, industry leaders and academia will come together, exchange insights and engage in discourse to address the various aspects of robotic and automation technology, industry trends and case studies of robotics solutions. There will also be a session discussing the details of the Singapore National Robotics Programme led by Mr Haryanto Tan, Head, Precision Engineering Cluster Group, EDB Singapore.
SIRE will also host the France-Singapore Innovation Days in collaboration with Business France, the national agency supporting the international development of the French economy. The organisation will lead a delegation of 20 key French companies to explore business and networking opportunities with Singapore firms, and conduct specialized workshops.
To foster a deeper appreciation of the field and inspire the next generation of robotics and automation experts, the event will also host students from institutes of higher learning on Education Day on 2 November. Students will be able to immerse themselves in the exciting developments of the robotics industry and get a sampling of how robotics can be applied in real-world settings by visiting the exhibits and interacting with representatives from participating companies.
Mr Leck Chet Lam, Managing Director, Experia Events, says, “SIRE will be a game changer for the industry. We are expecting the industry’s best and new-to-market players to showcase their innovations, which could potentially add value to the operations across a wide spectrum of industry sectors, from manufacturing to retail and service, and healthcare. We also hope to inspire the robotics and automation experts of tomorrow with our Education Day programme.
Experia Events prides itself as a company that organises strategic events for the global stage, featuring thought leaders and working with the industries’ best. It is an honour for us to be partnering SIAA, a recognised body and key player in the robotics industry. We are privileged to be able to help elevate Singapore’s robotics industry through SIRE and are pulling out all the stops to ensure that the event will be a resounding success.”
SIRE is supported by Strategic Partner, IE Singapore as well as agencies including EDB Singapore, GovTech Singapore, InfoComm Media Development Authority, A*STAR’s SIMTech, and Spring Singapore.
For further enquiries, please contact:
Marilyn Ho
Experia Events Pte Ltd
Director, Communications
Tel: +65 6595 6130
Email: email@example.com

Genevieve Yeo
Experia Events Pte Ltd
Assistant Manager, Communications
Tel: +65 6595 6131
Email: firstname.lastname@example.org
The post Singapore International Robotics Expo appeared first on Roboticmagazine.