Tag Archives: everything

#435646 Video Friday: Kiki Is a New Social Robot ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

The DARPA Subterranean Challenge tunnel circuit takes place in just a few weeks, and we’ll be there!

[ DARPA SubT ]

This time-lapse video shows the robotic arm on NASA’s Mars 2020 rover handily maneuvering its 88-pound (40-kilogram) sensor-laden turret as it moves from a deployed to a stowed configuration.

If you haven’t read our interview with Matt Robinson, now would be a great time, since he’s one of the folks at JPL who designed this arm.

[ Mars 2020 ]

Kiki is a small, white, stationary social robot with an evolving personality that promises to be your friend. It costs $800 and is currently on Kickstarter.

The Kickstarter page is filled with the same type of overpromising that we’ve seen with other (now very dead) social robots: Kiki is “conscious,” “understands your feelings,” and “loves you back.” Oof. That said, we’re happy to see more startups trying to succeed in this space, which is certainly one of the toughest in consumer electronics, and hopefully they’ve been learning from the recent string of failures. And we have to say Kiki is a cute robot. Its overall design, especially the body mechanics and expressive face, look neat. And kudos to the team—the company was founded by two ex-Googlers, Mita Yun and Jitu Das—for including the “unedited prototype videos,” which help counterbalance the hype.

Another thing that Kiki has going for it is that everything runs on the robot itself. This simplifies privacy and means that the robot won’t partially die on you if the company behind it goes under, but also limits how clever the robot will be able to be. The Kickstarter campaign is already over a third funded, so… we’ll see.

[ Kickstarter ]

When your UAV isn’t enough UAV, so you put a UAV on your UAV.

[ CanberraUAV ]

ABB’s YuMi is testing ATMs because a human trying to do this task would go broke almost immediately.

[ ABB ]

DJI has a fancy new FPV system that features easy setup, digital HD streaming at up to 120 FPS, and <30ms latency.

If it looks expensive, that’s because it costs $930 with the remote included.

[ DJI ]

Honeybee Robotics has recently developed a regolith excavation and rock cleaning system for NASA JPL’s PUFFER rovers. This system, called POCCET (PUFFER-Oriented Compact Cleaning and Excavation Tool), uses compressed gas to perform all excavation and cleaning tasks. Weighing less than 300 grams with potential for further mass reduction, POCCET can be used not just on the Moon, but on other Solar System bodies such as asteroids, comets, and even Mars.

[ Honeybee Robotics ]

DJI’s 2019 RoboMaster tournament, which takes place this month in Shenzhen, looks like it’ll be fun to watch, with plenty of action and rules that are easy to understand.

[ RoboMaster ]

Robots and baked goods are an automatic Video Friday inclusion.

Wow I want a cupcake right now.

[ Soft Robotics ]

The ICRA 2019 Best Paper Award went to Michelle A. Lee at Stanford, for “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks.”

The ICRA video is here, and you can find the paper at the link below.

[ Paper ] via [ RoboHub ]

Cobalt Robotics put out a bunch of marketing-y videos this week, but this one is reasonably interesting, even if you’re familiar with what they’re doing over there.

[ Cobalt Robotics ]

RightHand Robotics launched RightPick2 with a gala event, which looked like fun as long as you were really, really into robots.

[ RightHand Robotics ]

Thanks Jeff!

This video presents a framework for whole-body control applied to the assistive robotic system EDAN. We show how the proposed method can be used for a task like opening, passing through, and closing a door. We also show the efficiency of whole-body coordination when controlling the end-effector with respect to a fixed reference, and how easily the system can be manually maneuvered by direct interaction with the end-effector, without the need for an extra input device.

[ DLR ]

You’ll probably need to turn on auto-translated subtitles for most of this, but it’s worth it for the adorable little single-seat robotic car designed to help people get around airports.

[ ZMP ]

In this week’s episode of Robots in Depth, Per speaks with Gonzalo Rey from Moog about their fancy 3D printed integrated hydraulic actuators.

Gonzalo talks about how Moog got started with hydraulic control, taking part in the space program and early robotics development. He shares how Moog’s technology is used in fly-by-wire systems in aircraft and in flow control in deep space probes. They have even reached Mars.

[ Robots in Depth ]

Posted in Human Robots

#435601 New Double 3 Robot Makes Telepresence ...

Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One of the things that we found out back then was that it takes a lot of practice to remotely drive the robot around. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving their new robot a level of intelligence that promises to make telepresence more accessible for everyone.

Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.

It’s those new depth sensors that really make Double 3 special. The D430 modules each use a pair of stereo cameras with a pattern projector to generate 1280 x 720 depth data at ranges from 0.2 to 10 meters. The Double 3 robot uses all of this high-quality depth data to locate obstacles, but at this point, it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there while avoiding obstacles (including dynamic obstacles) and related mishaps along the way.
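The drivable-area grid can be pictured as a simple thresholding operation over the depth data. The sketch below is purely illustrative (it is not Double Robotics’ actual algorithm, and the function name and the 5 cm tolerance are assumptions): cells whose estimated height sits close to the floor plane are marked drivable.

```python
FLOOR_TOLERANCE_M = 0.05  # cells within 5 cm of the floor plane count as flat

def drivable_cells(height_grid):
    """Return (row, col) of cells flat enough to drive over.

    height_grid holds each cell's estimated height above the floor plane,
    as might be computed from a depth sensor's point cloud.
    """
    cells = []
    for r, row in enumerate(height_grid):
        for c, h in enumerate(row):
            if abs(h) < FLOOR_TOLERANCE_M:
                cells.append((r, c))
    return cells

grid = [
    [0.00, 0.01, 0.40],  # 0.40 m cell: an obstacle, e.g. a chair leg
    [0.02, 0.00, 0.01],
]
print(drivable_cells(grid))  # every cell except the obstacle
```

In the real robot the interesting work is in building that height grid from stereo depth in the first place; the thresholding shown here is only the last, easy step.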

This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That makes it that much easier to encourage people to utilize telepresence for the first time. The way the system is implemented through augmented reality is particularly impressive, I think. It looks like it’s intuitive enough for an inexperienced user without being restrictive, and is a clever way of mitigating even significant amounts of lag.

Otherwise, Double 3’s mobility system is exactly the same as the one featured on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.

For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.

IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?

Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said, “What would be the best way of communicating this information to the user?” And the right way to do it ended up being to draw the graphics directly onto the scene. It’s really awesome—we have a full, real-time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.

How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?

We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.

It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?

We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.

Do you expect developers to be excited for this new mapping capability?

We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.

We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.

Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?

When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.

Do you think 5G will make a significant difference to telepresence robots?

We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.

DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.

When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.

[ Double Robotics ]


#435589 Construction Robots Learn to Excavate by ...

Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.

Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.
The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.

As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.

The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.

In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, volunteers attempted to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.

At interplanetary distances, including SE4’s dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay, but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
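Those figures follow directly from the geometry. Here is a quick back-of-the-envelope check using approximate published minimum (about 54.6 million km) and maximum (about 401 million km) Earth-Mars distances; the function name is just illustrative.

```python
# One-way light-time between Earth and Mars from approximate distances.
C = 299_792_458  # speed of light, m/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Light travel time, in minutes, over the given distance in km."""
    return distance_km * 1_000 / C / 60

closest = one_way_delay_minutes(54.6e6)   # closest approach, ~54.6 million km
farthest = one_way_delay_minutes(401e6)   # near maximum separation

print(f"{closest:.1f} to {farthest:.1f} minutes")  # roughly 3.0 to 22.3
```

A round trip (command out, telemetry back) doubles these numbers, which is why real-time teleoperation of a Mars robot is a non-starter.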

A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”
Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.
The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”

Semantic control
That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.
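The latency-sidestepping idea can be sketched as an ordered plan of grouped steps that is uploaded once and then executed locally, paying the communication delay a single time instead of once per micro-movement. This is a hypothetical illustration of the concept only; the class and function names are invented, not SE4’s API.

```python
# Batching micro-movements into logical instruction modules, executed in order.
from dataclasses import dataclass
from typing import List

@dataclass
class InstructionModule:
    name: str
    steps: List[str]  # micro-movements grouped into one logical unit

def execute_plan(plan: List[InstructionModule]) -> List[str]:
    """Carry out every module in order, returning a log of executed steps."""
    log = []
    for module in plan:
        for step in module.steps:
            log.append(f"{module.name}: {step}")
    return log

plan = [
    InstructionModule("pick_block", ["approach", "grasp", "lift"]),
    InstructionModule("place_block", ["move_to_target", "lower", "release"]),
]
print(len(execute_plan(plan)))  # 6 steps carried out from one upload
```

The hard part SE4 is tackling is not the queue itself but giving the robot enough semantic understanding to adapt each module when the environment does not match the plan.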
The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.

That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.

Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”

This story was updated on 4 September 2019.


#435541 This Giant AI Chip Is the Size of an ...

People say size doesn’t matter, but when it comes to AI the makers of the largest computer chip ever beg to differ. There are plenty of question marks about the gargantuan processor, but its unconventional design could herald an innovative new era in silicon design.

Computer chips specialized to run deep learning algorithms are a booming area of research as hardware limitations begin to slow progress, and both established players and startups are vying to build the successor to the GPU, the specialized graphics chip that has become the workhorse of the AI industry.

On Monday Californian startup Cerebras came out of stealth mode to unveil an AI-focused processor that turns conventional wisdom on its head. For decades chip makers have been focused on making their products ever-smaller, but the Wafer Scale Engine (WSE) is the size of an iPad and features 1.2 trillion transistors, 400,000 cores, and 18 gigabytes of on-chip memory.

The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. It measures 46,225 square millimeters and includes 1.2 trillion transistors. Optimized for artificial intelligence compute, the WSE is shown here for comparison alongside the largest graphics processing unit. Image Credit: Used with permission from Cerebras Systems.
There is a method to the madness, though. Currently, getting enough cores to run really large-scale deep learning applications means connecting banks of GPUs together. But shuffling data between these chips is a major drain on speed and energy efficiency because the wires connecting them are relatively slow.

Building all 400,000 cores into the same chip should get round that bottleneck, but there are reasons it’s not been done before, and Cerebras has had to come up with some clever hacks to get around those obstacles.

Regular computer chips are manufactured using a process called photolithography to etch transistors onto the surface of a wafer of silicon. The wafers are inches across, so multiple chips are built onto them at once and then split up afterwards. But at 8.5 inches across, the WSE uses the entire wafer for a single chip.

The problem is that while for standard chip-making processes any imperfections in manufacturing will at most lead to a few processors out of several hundred having to be ditched, for Cerebras it would mean scrapping the entire wafer. To get around this the company built in redundant circuits so that even if there are a few defects, the chip can route around them.
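The redundancy idea is the same one used elsewhere in chip manufacturing (spare rows in memory, disabled cores in GPUs): build more units than you advertise, then remap around the defective ones. A toy sketch, with invented names and numbers rather than Cerebras’ actual scheme:

```python
# Remap logical core IDs onto the physical cores that passed testing.
def remap_cores(num_physical: int, defective: set) -> dict:
    """Map logical core indices, in order, onto working physical cores."""
    good = [p for p in range(num_physical) if p not in defective]
    return {logical: physical for logical, physical in enumerate(good)}

# 10 physical cores with 2 manufacturing defects -> 8 usable logical cores.
mapping = remap_cores(10, {3, 7})
print(len(mapping), mapping[3])  # logical core 3 lands on physical core 4
```

At wafer scale the remapping also has to preserve the mesh topology between neighboring cores, which is where the real engineering difficulty lies.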

The other big issue with a giant chip is the enormous amount of heat the processors can kick off—so the company has had to design a proprietary water-cooling system. That, along with the fact that no one makes connections and packaging for giant chips, means the WSE won’t be sold as a stand-alone component, but as part of a pre-packaged server incorporating the cooling technology.

There are no details on costs or performance so far, but some customers have already been testing prototypes, and according to Cerebras results have been promising. CEO and co-founder Andrew Feldman told Fortune that early tests show they are reducing training time from months to minutes.

We’ll have to wait until the first systems ship to customers in September to see if those claims stand up. But Feldman told ZDNet that the design of their chip should help spur greater innovation in the way engineers design neural networks. Many cornerstones of this process—for instance, tackling data in batches rather than individual data points—are guided more by the hardware limitations of GPUs than by machine learning theory, but their chip will do away with many of those obstacles.

Whether that turns out to be the case or not, the WSE might be the first indication of an innovative new era in silicon design. When Google announced its AI-focused Tensor Processing Unit in 2016, it was a wake-up call for chipmakers that we need some out-of-the-box thinking to square the slowing of Moore’s Law with skyrocketing demand for computing power.

It’s not just tech giants’ AI server farms driving innovation. At the other end of the spectrum, the desire to embed intelligence in everyday objects and mobile devices is pushing demand for AI chips that can run on tiny amounts of power and squeeze into the smallest form factors.

These trends have spawned renewed interest in everything from brain-inspired neuromorphic chips to optical processors, but the WSE also shows that there might be mileage in simply taking a sideways look at some of the other design decisions chipmakers have made in the past rather than just pumping ever more transistors onto a chip.

This gigantic chip might be the first exhibit in a weird and wonderful new menagerie of exotic, AI-inspired silicon.

Image Credit: Used with permission from Cerebras Systems.


#435520 These Are the Meta-Trends Shaping the ...

Life is pretty different now than it was 20 years ago, or even 10 years ago. It’s sort of exciting, and sort of scary. And hold onto your hat, because it’s going to keep changing—even faster than it already has been.

The good news is, maybe there won’t be too many big surprises, because the future will be shaped by trends that have already been set in motion. According to Singularity University co-founder and XPRIZE founder Peter Diamandis, a lot of these trends are unstoppable—but they’re also pretty predictable.

At SU’s Global Summit, taking place this week in San Francisco, Diamandis outlined some of the meta-trends he believes are key to how we’ll live our lives and do business in the (not too distant) future.

Increasing Global Abundance
Resources are becoming more abundant all over the world, and fewer people are seeing their lives limited by scarcity. “It’s hard for us to realize this as we see crisis news, but what people have access to is more abundant than ever before,” Diamandis said. Products and services are becoming cheaper and thus available to more people, and having more resources then enables people to create more, thus producing even more resources—and so on.

Need evidence? The proportion of the world’s population living in extreme poverty is currently lower than it’s ever been. The average human life expectancy is longer than it’s ever been. The costs of day-to-day needs like food, energy, transportation, and communications are on a downward trend.

Take energy. In most of the world, though its costs are decreasing, it’s still a fairly precious commodity; we turn off our lights and our air conditioners when we don’t need them (ideally, both to save money and to avoid wastefulness). But the cost of solar energy has plummeted, and the storage capacity of batteries is improving, and solar technology is steadily getting more efficient. Bids for new solar power plants in the past few years have broken each other’s records for lowest cost per kilowatt hour.

“We’re not far from a penny per kilowatt hour for energy from the sun,” Diamandis said. “And if you’ve got energy, you’ve got water.” Desalination, for one, will be much more widely feasible once the cost of the energy needed for it drops.

Knowledge is perhaps the most crucial resource that’s going from scarce to abundant. All the world’s knowledge is now at the fingertips of anyone who has a mobile phone and an internet connection—and the number of people connected is only going to grow. “Everyone is being connected at gigabit connection speeds, and this will be transformative,” Diamandis said. “We’re heading towards a world where anyone can know anything at any time.”

Increasing Capital Abundance
It’s not just goods, services, and knowledge that are becoming more plentiful. Money is, too—particularly money for business. “There’s more and more capital available to invest in companies,” Diamandis said. As a result, more people are getting the chance to bring their world-changing ideas to life.

Venture capital investments reached a new record of $130 billion in 2018, up from $84 billion in 2017—and that’s just in the US. Globally, VC funding grew 21 percent from 2017 to a total of $207 billion in 2018.

Through crowdfunding, any person in any part of the world can present their idea and ask for funding. That funding can come in the form of a loan, an equity investment, a reward, or an advanced purchase of the proposed product or service. “Crowdfunding means it doesn’t matter where you live, if you have a great idea you can get it funded by people from all over the world,” Diamandis said.

All this is making a difference; the number of unicorns—privately-held startups valued at over $1 billion—currently stands at an astounding 360.

One of the reasons why the world is getting better, Diamandis believes, is because entrepreneurs are trying more crazy ideas—not ideas that are reasonable or predictable or linear, but ideas that seem absurd at first, then eventually end up changing the world.

Everyone and Everything, Connected
As already noted, knowledge is becoming abundant thanks to the proliferation of mobile phones and wireless internet; everyone’s getting connected. In the next decade or sooner, connectivity will reach every person in the world. 5G is being tested and offered for the first time this year, and companies like Google, SpaceX, OneWeb, and Amazon are racing to develop global satellite internet constellations, whether by launching 12,000 satellites, as SpaceX’s Starlink is doing, or by floating giant balloons into the stratosphere like Google’s Project Loon.

“We’re about to reach a period of time in the next four to six years where we’re going from half the world’s people being connected to the whole world being connected,” Diamandis said. “What happens when 4.2 billion new minds come online? They’re all going to want to create, discover, consume, and invent.”

And it doesn’t stop at connecting people. Things are becoming more connected too. “By 2020 there will be over 20 billion connected devices and more than one trillion sensors,” Diamandis said. By 2030, those projections go up to 500 billion and 100 trillion. Think about it: there are home devices like refrigerators, TVs, dishwashers, digital assistants, and even toasters. There’s city infrastructure, from stoplights to cameras to public transportation like buses or bike sharing. It’s all getting smart and connected.

Soon we’ll be adding autonomous cars to the mix, and an unimaginable glut of data to go with them. Every turn, every stop, every acceleration will be a data point. Some cars already collect over 25 gigabytes of data per hour, Diamandis said, and car data is projected to generate $750 billion of revenue by 2030.

“You’re going to start asking questions that were never askable before, because the data is now there to be mined,” he said.

Increasing Human Intelligence
Indeed, we’ll have data on everything we could possibly want data on. We’ll also soon have what Diamandis calls just-in-time education, where 5G combined with artificial intelligence and augmented reality will allow you to learn something in the moment you need it. “It’s not going and studying, it’s where your AR glasses show you how to do an emergency surgery, or fix something, or program something,” he said.

We’re also at the beginning of massive investments in research working towards connecting our brains to the cloud. “Right now, everything we think, feel, hear, or learn is confined in our synaptic connections,” Diamandis said. What will it look like when that’s no longer the case? Companies like Kernel, Neuralink, Open Water, Facebook, Google, and IBM are all investing billions of dollars into brain-machine interface research.

Increasing Human Longevity
One of the most important problems we’ll use our newfound intelligence to solve is that of our own health and mortality, making 100 years old the new 60—then eventually, 120 or 150.

“Our bodies were never evolved to live past age 30,” Diamandis said. “You’d go into puberty at age 13 and have a baby, and by the time you were 26 your baby was having a baby.”

Seeing how drastically our lifespans have changed over time makes you wonder what aging even is; is it natural, or is it a disease? Many companies are treating it as one, and using technologies like senolytics, CRISPR, and stem cell therapy to try to cure it. Scaffolds of human organs can now be 3D printed then populated with the recipient’s own stem cells so that their bodies won’t reject the transplant. Companies are testing small-molecule pharmaceuticals that can stop various forms of cancer.

“We don’t truly know what’s going on inside our bodies—but we can,” Diamandis said. “We’re going to be able to track our bodies and find disease at stage zero.”

Chins Up
The world is far from perfect—that’s not hard to see. What’s less obvious but just as true is that we’re living in an amazing time. More people are coming together, and they have more access to information, and that information moves faster, than ever before.

“I don’t think any of us understand how fast the world is changing,” Diamandis said. “Most people are fearful about the future. But we should be excited about the tools we now have to solve the world’s problems.”

Image Credit: spainter_vfx / Shutterstock.com
