
#439662 An Army of Grain-harvesting Robots ...

The field of automated precision agriculture is based on one concept—autonomous driving technologies that guide vehicles through GPS navigation. Fifteen years ago, when high-accuracy GPS became available for civilian use, farmers thought things would be simple: Put a GPS receiver station at the edge of the field, configure a route for a tractor or a combine harvester, and off you go, dear robot!

Practice has shown, however, that this kind of carefree field cultivation is inefficient and dangerous. It works only in ideal fields, which are almost never encountered in real life. If there's a log or a rock in the field, or a couple of village paramours dozing in the rye under the sun, the tractor will run right over them. And not all countries have reliable satellite coverage—in agricultural markets like Kazakhstan, coverage can be unstable. This is why, if you want safe and efficient farming, you need to equip your vehicle with sensors and an artificial intelligence that can see and understand its surroundings instead of blindly following GPS navigation instructions.

The Cognitive Agro Pilot system lets a human operator focus on harvesting rather than driving. An integrated display and control system in the cab handles driving based on a video feed from a single low-resolution camera, no GPS or Internet connectivity required. Cognitive Pilot

You might think that GPS navigation is ideal for automated agriculture, since the task facing the operator of a farm vehicle like a combine harvester is simply to drive around the field in a serpentine pattern, mowing down all the wheat or whatever crop it is filled with. But reality is far different. There are hundreds of things operators must watch even as they keep their eyes fastened to the edge of the field to ensure that they move alongside it with fine precision. An agricultural combine is not dissimilar to a church organ in terms of its operational complexity. When a combine operator works with an assistant, one of them steers along the crop edge, while the other controls the reel, the fan, the threshing drum, and the harvesting process in general. In Soviet times, there were two operators in a combine crew, but now there is only one. This means choosing between safe driving and efficient harvesting. And since you can't harvest grain without moving, driving becomes the top priority, and the efficiency of the harvesting process tends to suffer.

Harvesting efficiency is especially important in Eastern Europe, where farming is high risk and there is only one harvest a year. The season starts in March and farmers don't rest until the autumn, when they have only two weeks to harvest the crops. If something goes wrong, every day they miss may lead to a loss of 10 percent of the yield. If a driver does a poor job of harvesting or gets drunk and crashes the machine, precious time is lost—hours or even days. About 90 percent of the combine operator's time is spent making sure that the combine is driving exactly along the edge of the unharvested crop to maximize efficiency without missing any of the crop. But this is the most unpleasant part of the driving, and due to fatigue at the end of the shift, operators typically leave nearly a meter at the edge of each row uncut. These steering errors account for a 25 percent overall increase in harvesting time. Our technology allows combine operators to delegate the driving so that they can instead focus on optimizing harvesting quality.

Add to this the fact that the skilled combine operator is a dying breed. Professional education has declined, and the young people joining the labor force aren't up to the same standard. Though the same can be said of most manual trades, this effect creates a great demand for our robotic system, the Cognitive Agro Pilot.

Developing AI systems is in my genome. My father, Anatoly Uskov, was on the first team of AI program developers at the System Research Institute of the Russian Academy of Sciences. Their program, named Kaissa, became the world computer chess champion in 1974. Two decades later, after the collapse of the Soviet Union, the System Research Institute's AI laboratories formed the foundation of my company, Cognitive Technologies. Our first business was developing optical character recognition software used by companies including HP, Oracle, and Samsung, and our success allowed us to support an R&D team of mathematicians and programmers conducting fundamental research in computer vision and adjacent areas.

In 2012, we added a group of mathematicians developing neural networks. Later that year, this group proudly introduced me to their creation: Vasya, a football-playing toy car with a camera for an eye. “One-eyed Vasya” could recognize a ball among other objects in our long office hallway, and push it around. The robot was a massive distraction for everyone working on that floor, as employees went out into the hallway and started “testing” the car by tripping it up and blocking its way to the ball with obstacles. Meanwhile, the algorithm showed stable performance. Politely swerving around obstacles, the car kept on looking for the ball and pushing it. It almost gave an impression of a living creature, and this was our “eureka” moment—why don't we try doing the same with something larger and more useful?

A combine driven by the Cognitive Agro Pilot harvests grain while a human supervises from the driver's seat. Cognitive Pilot

After initially experimenting with large heavy-duty trucks, we realized that the agricultural sector doesn't have the major legal and regulatory constraints that road transport has in Russia and elsewhere. Since our priority was to develop a commercially viable product, we set up a business unit called Cognitive Pilot that develops add-on autonomy for combine harvesters, which are the machines used to harvest the vast majority of grain crops (including corn, wheat, barley, oats, and rye) on large farms.

Just five years ago, it was impossible to use video-content analysis to operate agricultural machinery at this level of automation because there weren't any fully functional neural networks that could detect the borders of a crop strip or see any obstacles in it.

At first, we considered combining GPS with visual data analysis, but it didn't take us long to realize that visual analytics alone is enough. For a GPS steering system to work, you need to prepare a map in advance, install a base station for corrections, or purchase a package of signals. It also requires pressing a lot of buttons in a lot of menus, and combine operators have very little appreciation for user interfaces. What we offer is a camera and a box stuffed with processing power and neural networks. As soon as the camera and the box are mounted and connected to the combine's control system, we're good to go. Once in the field, the newly installed Cognitive Agro Pilot says: “Hurray, we're in the field,” asks the driver for permission to take over, and starts driving. Five years from now, we predict that all combine harvesters will be equipped with a computer vision–based autopilot capable of controlling every aspect of harvesting crops.

From a single video stream, Cognitive Agro Pilot's neural networks are able to identify crops, cleared ground, static obstacles, and moving obstacles like people or other vehicles. Cognitive Pilot
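To make the idea concrete, here is a minimal sketch of what per-pixel scene parsing from a single camera can look like. The class list, the toy network, and the crop-edge rule are illustrative assumptions for this article, not Cognitive Pilot's actual models.

```python
# A minimal sketch of single-camera semantic segmentation for field scenes.
# CLASSES, the network, and the edge rule are assumptions, not the real system.
import numpy as np
import torch
import torch.nn as nn

CLASSES = ["uncut_crop", "cut_ground", "static_obstacle", "moving_obstacle"]

class TinySegNet(nn.Module):
    """Toy fully convolutional network: per-pixel class scores."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

def crop_edge_column(mask: np.ndarray) -> np.ndarray:
    """For each image row, find the leftmost column labeled 'uncut_crop'
    (-1 if the row contains no crop). Steering can then regulate this
    column toward a fixed set-point."""
    crop = mask == CLASSES.index("uncut_crop")
    has_crop = crop.any(axis=1)
    return np.where(has_crop, crop.argmax(axis=1), -1)

frame = torch.rand(1, 3, 120, 160)          # stand-in for a camera frame
with torch.no_grad():
    mask = TinySegNet()(frame).argmax(dim=1)[0].numpy()
print(crop_edge_column(mask)[:10])
```

A production network would be far deeper and trained on labeled field footage, but the output contract is the same: a class per pixel, from which a crop edge can be extracted geometrically.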

Getting to this point has meant solving some fascinating challenges. We realized we would be facing an immense diversity of field scenes that our neural network must be trained to understand. Working with farmers in the early stages of the project, we found that the same crops can look completely different in different climatic zones. Preparing for mass production of our system, we tried to compile a highly diversified data set of various fields and crops, starting with videos filmed in the fields of several farms across Russia under different weather and lighting conditions. But it soon became evident that we needed a more adaptable solution.

We decided to use a coarse-to-fine approach to train our networks for autonomous driving. The initial version is improved with each new client, as we obtain additional data on different locations and crops. We use this data to make our networks more accurate and reliable, employing unsupervised domain adaptation to recalibrate them in a short time by adding carefully randomized noise and distortions to the training images to make the networks more robust. Humans are still needed to help with semantic segmentation on new varieties of crops. Thanks to this approach, we have now obtained highly resilient all-purpose networks suitable for use on over a dozen different crops grown across Eastern Europe.
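As an illustration of the randomization described above, here is a minimal augmentation sketch. The specific distortions and parameter ranges are assumptions for demonstration, not the values used in production.

```python
# A sketch of randomized noise and distortions applied to training images
# to make a field-scene network more robust. Parameter ranges are assumed.
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 float array in [0, 1]. Returns a randomized variant.
    In real training, geometric ops (flip, roll) must also be applied
    to the corresponding label masks."""
    out = img.copy()
    out *= rng.uniform(0.6, 1.4)                  # global brightness change
    out += rng.normal(0.0, 0.03, size=out.shape)  # sensor-style noise
    if rng.random() < 0.5:                        # horizontal flip
        out = out[:, ::-1, :]
    shift = int(rng.integers(-8, 9))              # small horizontal jitter
    out = np.roll(out, shift, axis=1)
    return np.clip(out, 0.0, 1.0)

batch = [augment(np.full((120, 160, 3), 0.5)) for _ in range(4)]
```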

The way the Cognitive Agro Pilot drives a combine is similar to how a human driver does it. That is, our unique competitive edge is the system's ability to see and understand the situation in the field much as a human would, so it maintains full efficiency in collaboration with human drivers. At the end of the day, it all comes down to economics. One human-driven combine can harvest around 20 hectares of crops during one shift. When Cognitive Agro Pilot does the driving, the operators' workload is considerably lower: They don't get tired, can make fewer stops, and take fewer breaks. In practical terms, it means harvesting around 25 to 30 hectares per shift. For a business owner, it means that two combines equipped with our system deliver the performance of three combines without it.
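As a quick sanity check, the claimed fleet equivalence follows directly from the per-shift figures quoted above; a rough calculation using the article's own numbers:

```python
# Back-of-the-envelope check of the fleet economics quoted above.
baseline = 20            # hectares per shift, human-driven
assisted = (25 + 30)/2   # midpoint of the quoted 25-30 ha/shift range
print(2 * assisted / baseline)  # -> 2.75: two assisted combines do roughly
                                #    the work of three unassisted ones
```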

While the combine drives itself, the human operator can make adjustments to the harvesting system to maximize speed and efficiency. Cognitive Pilot

There are some standalone developments on the market from various agricultural-machinery companies, but each autonomous feature is implemented as a separate function: driving along a field edge, driving along a row, and so on. We haven't yet seen another industrial system that can drive entirely with computer vision, but one-eyed Vasya showed us that this was possible. And so, as we thought about cost optimization and solving the task with a minimum set of devices, we decided that for a farmer's AI-based robot assistant, one camera is enough.

The Cognitive Agro Pilot's primary sensor is a single 2-megapixel color video camera that can see a wide area in front of the vehicle, mounted on a bracket near one of the combine's side mirrors. A control unit with an Nvidia Jetson TX2 computer module is mounted inside the cab, with an integrated display and driver interface. This control unit contains the main stack of autonomy algorithms, processes the video feed, and issues commands to the combine's hydraulic systems for control of steering, acceleration, and braking. A display in the cab provides the interface for the driver and displays warnings and settings. We are not tied to any particular brand; our retrofit kit will work with any combine harvester model available in the farmer's fleet. For a combine more than five years old, interfacing with its control system may not be quite so easy (sometimes an additional steering-angle sensor is required), but the installation and calibration can still usually be done within one day, and it takes just 10 minutes to train a new driver.
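Functionally, the control unit closes a loop from the detected crop edge to a steering command. Below is a heavily simplified sketch of that loop's core, a proportional controller on the crop-edge position in the image. The set-point, gain, and units are invented for illustration and bear no relation to the production system's tuning.

```python
# A toy proportional steering controller on the crop-edge image position.
# TARGET_COLUMN and KP are illustrative assumptions.
import numpy as np

TARGET_COLUMN = 40.0   # desired crop-edge position in the image, pixels
KP = 0.02              # proportional steering gain

def steering_command(edge_columns: np.ndarray) -> float:
    """Map detected crop-edge columns (one per image row, -1 = no crop)
    to a normalized steering command in [-1, 1]."""
    valid = edge_columns[edge_columns >= 0]
    if valid.size == 0:
        return 0.0                    # no edge found: hold course, alert driver
    error = TARGET_COLUMN - valid.mean()
    return float(np.clip(KP * error, -1.0, 1.0))

# Simulated detection: the edge sits near column 55, so the command
# steers the machine back toward the 40-pixel set-point.
print(steering_command(np.full(120, 55.0)))   # -> -0.3
```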

Our vision-based system drives the combine, so the operator can focus on the harvest and adjusting the process to the specific features of the crop. The Cognitive Agro Pilot does all of the steering and maintains a precise distance between rows, minimizing gaps. It looks for obstacles, categorizes them, and forecasts their trajectory if they're moving. If there is time, it warns the driver to avoid the obstacles, or it decides to drive around them or slow down. It also coordinates its movement with a grain truck and with other combines when it is part of a formation. The only time that the operator is routinely required to drive is to turn the combine around at the end of a run. If you need to turn, go ahead—the Cognitive Agro Pilot releases the controls and starts looking for a new crop edge. As soon as it finds one, the robot says: “Let me do the driving, man.” You push the button, and it takes over. Everything is simple and intuitive. And since a run is normally up to 5 kilometers long, these turns account for less than 1 percent of a driver's workload.
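The obstacle-handling behavior described above amounts to a classify-predict-decide policy. Here is a minimal sketch of such a policy; the thresholds, class names, and the stop-for-people rule are illustrative assumptions, not the shipped logic.

```python
# A toy obstacle policy: classify, estimate time to impact, then
# warn, swerve, slow, or stop. All thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str          # e.g. "person", "vehicle", "rock"
    distance_m: float  # along the driving path
    closing_mps: float # closing speed; > 0 means converging with our path

def decide(obs: Obstacle, speed_mps: float) -> str:
    time_to_impact = obs.distance_m / max(speed_mps + obs.closing_mps, 0.1)
    if obs.kind == "person":
        return "stop"                 # never trade safety for throughput
    if time_to_impact > 10.0:
        return "warn_driver"          # plenty of time: let the human decide
    if obs.kind == "rock" and obs.distance_m > 15.0:
        return "drive_around"
    return "slow_down"

print(decide(Obstacle("rock", 20.0, 0.0), 2.0))   # -> 'drive_around'
```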


During our pilot project last year, the yield from the same fields increased by 3 to 5 percent due to the ability of the harvester to maintain the cut width without leaving unharvested areas. It increased an additional 3 percent simply because the operators had time to more closely monitor what was going on in front of them, optimizing the harvesting performance. With our copilot, drivers' workloads are very low. They start the system, let go of the steering wheel, and can concentrate on controlling the machinery or checking commodity prices on their phones. Harvesting weeks are a real ordeal for combine drivers, who get no rest except for some sleep at night. In one month they need to earn enough for the upcoming six, so they are exhausted. However, the drivers who were using our solution realized they even had some energy left, and those who chose to work long hours said they could easily work 2 hours more than usual.

Gaining 10 or 15 percent more working hours over the course of the harvest may sound negligible, but it means that a driver has three extra days to harvest the crops. Consequently, if there are days of bad weather (like rain that causes the grain to germinate or fall over), the probability of keeping the crop yield high is a lot greater. And since combine operators get paid by harvested volume, using our system helps them make more money. Ultimately, both drivers and managers say unanimously that harvesting has become easier, and typically the cost of the system (about US $10,000) is paid off in just one season.

Combine drivers quickly get the hang of our technology—after the first few days, many drivers either start to trust in our robot as an almighty intelligence, or decide to test it to death. Some get the misconception that our robots think like humans and are a little disappointed to see that our system underperforms at night and has trouble driving in dust when multiple combines are driving in file. Even though humans have problems in these situations too, operators would grumble: "How can it not see?" A human driver understands that the distance to the combine ahead is about 10 meters and that they are traveling at a constant speed. The dust cloud will blow away in a minute, and everything will be fine. No need to brake. Alex, the driver of the combine ahead, definitely won't brake. Or will he? Since the system hasn't spent years alongside Alex and cannot use life experience to predict his actions, it stops the combine and releases the controls. This is where human intelligence once again wins out over AI.

Turns at the end of each run are also left to human intelligence, for now. This feature never failed to amaze combine drivers but turned out to be the most challenging during tests: The immense width of the header means that a huge number of hypotheses about objects beyond the line of sight of our single camera need to be factored in. To automate this feature, we're waiting for the completion of tests on rugged terrain. We are also experimenting with our own synthetic-aperture radar technology, which can see crop edges and crop rows as radio-frequency images. This does not add much to the total solution cost, and we plan to use radar for advanced versions of our “agrodroids” intended for work in low visibility and at night.

Robot in Disguise

It takes just four parts to transform almost any human-driven combine harvester into a robot. A camera [1] mounted on a side-view mirror watches the field ahead, sending a video stream to a combined computing unit, display, and driver interface [2] in the driver's cab. A neural network analyzes the video to find crop edges and obstacles, and sends commands to the hydraulic unit [3] to control the combine. For older combines, a steering sensor [4] mounted inside a wheel provides directional feedback for precision driving. While Cognitive Pilot's system takes care of the driving, it's the job of the human operator in the cab to optimize the performance of the header [5] to harvest the crop efficiently. Cognitive Pilot

During the summer and autumn of 2020, more than 350 autonomous combines equipped with the Cognitive Agro Pilot system drove across over 160,000 hectares of fields and helped their human supervisors harvest more than 720,000 tonnes of crops, from Kaliningrad on the Baltic Sea to Vladivostok in the Russian Far East. Our robots worked more than 230,000 hours and drove more than 950,000 autonomous kilometers last year. By the end of 2021, our system will also be available in the United States and South America.

Ordinary farmers, the end users of our solutions, may have heard about driverless cars in the news or seen the words "neural network" a couple of times, but that about sums up their AI experience. So it is fascinating to hear them say things like "Look how well the segmentation has worked!" or "The neural network is doing great!" in the driver's cab.

Changing the technological paradigm takes time, so we ensure the widest possible compatibility of our solutions with existing machinery. Undoubtedly, as farmers adapt to the current innovations, we will continuously increase the autonomy of all types of machinery for all kinds of tasks.

A few years ago, I studied the work of the United Nations mission in Rwanda dealing with the issues of chronic child malnutrition. I will never forget the photographs of emaciated children. It made me think of the famine that gripped a besieged Leningrad during World War II. Some of my relatives died there and their diaries are a testament to the fact that there are few endings more horrible than death from starvation. I believe that robotic automation and AI enhancement of agricultural machinery used in high-risk farming areas or regions with a shortage of skilled workers should be the highest priority for all governments concerned with providing an adequate response to the global food-security challenges.

This article appears in the September 2021 print issue as "On Russian Farms, the Robotic Revolution Has Begun."


#437639 Boston Dynamics’ Spot Is Helping ...

In terms of places where you absolutely want a robot to go instead of you, what remains of the utterly destroyed Chernobyl Reactor 4 should be very near the top of your list. The reactor, which suffered a catastrophic meltdown in 1986, has been covered up in almost every way possible in an effort to keep its nuclear core contained. But eventually, that nuclear material is going to have to be dealt with somehow, and in order to do that, it’s important to understand which bits of it are just really bad, and which bits are the actual worst. And this is where Spot is stepping in to help.

The big open space that Spot is walking through is right next to what’s left of Reactor 4. Within six months of the disaster, Reactor 4 was covered in a sarcophagus made of concrete and steel to try and keep all the nasty nuclear fuel from leaking out more than it already had, and it still contains “30 tons of highly contaminated dust, 16 tons of uranium and plutonium, and 200 tons of radioactive lava.” Oof. Over the next 10 years, the sarcophagus slowly deteriorated, and despite the addition of that gigantic network of steel support beams that you can see in the video, in the late 1990s it was decided to erect an enormous building over the entire mess to try and stabilize it for as long as possible.

Reactor 4 is now snugly inside the massive New Safe Confinement (NSC) structure, and the idea is that eventually, the structure will allow for the safe disassembly of what’s left of the reactor, although nobody is quite sure how to do that. This is all just to say that the area inside of the containment structure offers a lot of good opportunities for robots to take over from humans.

This particular Spot is owned by the U.K. Atomic Energy Authority, and was packed off to Ukraine with the assistance of the Robotics and Artificial Intelligence in Nuclear (RAIN) initiative and the National Centre for Nuclear Robotics. Dr. Dave Megson-Smith, who is a researcher at the University of Bristol, in the U.K., and part of the Hot Robotics Facility at the National Nuclear User Facility, was one of the scientists lucky enough to accompany Spot on its adventure. Megson-Smith specializes in sensor development, and he equipped Spot with a collimated radiation sensor in addition to its mapping payload. "We actually built a map of the radiation coming out of the front wall of Chernobyl power plant as we were in there with it," Megson-Smith told us. He shared the picture below, which shows a map of gamma photon count rate:

Image: University of Bristol

Researchers equipped Spot with a collimated radiation sensor and used one of the data readings (gamma photon count rate) to create a map of the radiation coming out of the front wall of the Chernobyl power plant.
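Conceptually, building such a map means binning pose-tagged count-rate readings onto a spatial grid. Here is a minimal sketch of that idea; the data layout, grid size, and synthetic readings are illustrative assumptions, not the Bristol team's pipeline.

```python
# A sketch of binning pose-tagged count rates into a radiation map.
# Grid size and the synthetic data are assumptions.
import numpy as np

def radiation_map(xs, ys, counts, grid=(50, 50)):
    """Average gamma photon count rate per spatial cell the robot visited."""
    xs, ys, counts = map(np.asarray, (xs, ys, counts))
    sums, xe, ye = np.histogram2d(xs, ys, bins=grid, weights=counts)
    visits, _, _ = np.histogram2d(xs, ys, bins=[xe, ye])
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(visits > 0, sums / visits, np.nan)  # NaN = unvisited

# Example: 1,000 readings along a walk, hotter near the origin.
rng = np.random.default_rng(1)
xs, ys = rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000)
rate = 500.0 / (1.0 + xs**2 + ys**2)        # synthetic count rate
m = radiation_map(xs, ys, rate)
print(round(float(np.nanmax(m)), 1))        # hottest visited cell
```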

So why would you want to use a very expensive legged robot to wander around what looks like a very flat and robot-friendly floor? As it turns out, the floor in there is very dusty, and a priority inside the NSC is to keep dust down as much as possible, since the dust is radioactive, gets on everything, and is consequently the easiest way for radioactivity to escape the NSC. "You want to minimize picking up material, so we consider the total contact surface area," says Megson-Smith. "If you use a legged system rather than a wheeled or tracked system, you have a much smaller footprint and you disturb the environment a lot less." While it's nice that Spot is nimble and can climb stairs and stuff, tracked vehicles can do that as well, so in this case, the primary driving factor in choosing a legged robot to work inside Chernobyl is minimizing those contact points.

Right now, routine weekly measurements in contaminated spaces at Chernobyl are done by humans, which puts those humans at risk. Spot, or a robot like it, could potentially take over from those humans, as a sort of "automated safety checker" able to work in medium-level contaminated environments. As far as more dangerous areas go, there's a lot of uncertainty about what Spot is actually capable of, according to Megson-Smith. "What you think the problems are, and what the industry thinks the problems are, are subtly different things. We were thinking that we'd have to make robots incredibly radiation proof to go into these contaminated environments, but they said, 'Can you just give us a system that we can send into places where humans already can go, but where we just don't want to send humans?'"

Making robots incredibly radiation proof is challenging, and without extensive testing and ruggedizing, failures can be frequent, as many robots discovered at Fukushima. Indeed, Megson-Smith says that in Fukushima there's a particular section known as a "robot graveyard" where robots just go to die, and they've had to up their standards again and again to keep the robots from failing. "So the thing they're worried about with Spot is, what is its tolerance? What components will fail, and what can we do to harden it?" he says. "We're approaching Boston Dynamics at the moment to see if they'll work with us to address some of those questions."

There's been a small amount of testing of how robots fare under harsh radiation, Megson-Smith told us, including (relatively recently) a KUKA LBR800 arm, which "stopped operating after a large radiation dose of 164.55(±1.09) Gy to its end effector, and the component causing the failure was an optical encoder." And in case you're wondering how much radiation that is, a 1 to 2 Gy dose to the entire body gets you acute radiation sickness and possibly death, while 8 Gy is usually just straight-up death. The goal here is not to kill robots (I mean, it sort of is), but as Megson-Smith says, "if we can work out what the weak points are in a robotic system, can we address those, can we redesign those, or at least understand when they might start to fail?" Now all he has to do is convince Boston Dynamics to send them a Spot that they can zap until it keels over.

The goal for Spot in the short term is fully autonomous radiation mapping, which seems very possible. It'll also get tested with a wider range of sensor packages, and (happily for the robot) this will all take place safely back at home in the U.K. As far as Chernobyl is concerned, robots will likely have a substantial role to play in the near future. "Ultimately, Chernobyl has to be taken apart and decommissioned. That's the long-term plan for the facility. To do that, you first need to understand everything, which is where we come in with our sensor systems and robotic platforms," Megson-Smith tells us. "Since there are entire swathes of the Chernobyl nuclear plant where people can't go in, we'd need robots like Spot to do those environmental characterizations."


#437265 This Russian Firm’s Star Designer Is ...

Imagine discovering a new artist or designer—whether visual art, fashion, music, or even writing—and becoming a big fan of her work. You follow her on social media, eagerly anticipate new releases, and chat about her talent with your friends. It’s not long before you want to know more about this creative, inspiring person, so you start doing some research. It’s strange, but there doesn’t seem to be any information about the artist’s past online; you can’t find out where she went to school or who her mentors were.

After some more digging, you find out something totally unexpected: your beloved artist is actually not a person at all—she’s an AI.

Would you be amused? Annoyed? Baffled? Impressed? Probably some combination of all these. If you wanted to ask someone who’s had this experience, you could talk to clients of the biggest multidisciplinary design company in Russia, Art.Lebedev Studio (I know, the period confused me at first too). The studio passed off an AI designer as human for more than a year, and no one caught on.

They gave the AI a human-sounding name—Nikolay Ironov—and it participated in more than 20 different projects that included designing brand logos and building brand identities. According to the studio’s website, several of the logos the AI made attracted “considerable public interest, media attention, and discussion in online communities” due to their unique style.

So how did an AI learn to create such buzz-worthy designs? It was trained using hand-drawn vector images, each associated with one or more themes. To start a new design, someone enters a few words describing the client, such as what kind of goods or services they offer. The AI uses those words to find associated images and generate various starter designs, which then go through another series of algorithms that "touch them up." A human designer then selects the best options to present to the client.
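In outline, that is a brief-to-candidates pipeline with a human at the end. Here is a hedged toy sketch of that flow; the function names, theme index, and touch-up step are hypothetical, since the studio has not published its actual implementation.

```python
# A toy sketch of the described pipeline: brief -> theme lookup ->
# candidate generation -> automated touch-up -> human selection.
# All names and data here are illustrative, not Art.Lebedev's system.
import random

THEME_INDEX = {                      # hand-drawn vector motifs by theme (toy)
    "coffee": ["cup.svg", "bean.svg"],
    "tech":   ["chip.svg", "orbit.svg"],
}

def random_palette():
    return ["#%06x" % random.randrange(0xFFFFFF) for _ in range(3)]

def touch_up(design):
    design["kerning"] = "auto"       # stand-in for the refinement algorithms
    return design

def generate_identities(brief: str, n: int = 5):
    words = set(brief.lower().split())
    motifs = [m for theme, ms in THEME_INDEX.items()
              if theme in words for m in ms]
    candidates = []
    for _ in range(n):
        base = random.choice(motifs) if motifs else "abstract.svg"
        candidates.append(touch_up({"motif": base, "palette": random_palette()}))
    return candidates                # a human designer picks what the client sees

print(generate_identities("specialty coffee roaster")[0])
```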

“These systems combined together provide users with the experience of instantly converting a client’s text brief into a corporate identity design pack archive. Within seconds,” said Sergey Kulinkovich, the studio’s art director. He added that clients liked Nikolay Ironov’s work before finding out he was an AI (and liked the media attention their brands got after Ironov’s identity was revealed even more).

Ironov joins a growing group of AI “artists” that are starting to raise questions about the nature of art and creativity. Where do creative ideas come from? What makes a work of art truly great? And when more than one person is involved in making art, who should own the copyright?

Art.Lebedev is far from the first design studio to employ artificial intelligence; Mailchimp is using AI to let businesses design multi-channel marketing campaigns without human designers, and Adobe is marketing its new Sensei product as an AI design assistant.

While art made by algorithms can be unique and impressive, though, there’s one caveat that’s important to keep in mind when we worry about human creativity being rendered obsolete. Here’s the thing: AIs still depend on people to not only program them, but feed them a set of training data on which their intelligence and output are based. Depending on the size and nature of an AI’s input data, its output will look pretty different from that of a similar system, and a big part of the difference will be due to the people that created and trained the AIs.

Admittedly, Nikolay Ironov does outshine his human counterparts in a handful of ways; as the studio’s website points out, he can handle real commercial tasks effectively, he doesn’t sleep, get sick, or have “crippling creative blocks,” and he can complete tasks in a matter of seconds.

Given these superhuman capabilities, then, why even keep human designers on staff? As detailed above, it will be a while before creative firms really need to consider this question on a large scale; for now, it still takes a hard-working creative human to make a fast-producing creative AI.

Image Credit: Art.Lebedev


#435436 Undeclared Wars in Cyberspace Are ...

The US is at war. That’s probably not exactly news, as the country has been engaged in one type of conflict or another for most of its history. The last time we officially declared war was after Japan bombed Pearl Harbor in December 1941.

Our biggest undeclared war today is not being fought by drones in the mountains of Afghanistan or even through the less-lethal barrage of threats over the nuclear programs in North Korea and Iran. In this particular war, it is the US that is under attack and on the defensive.

This is cyberwarfare.

The definition of what constitutes a cyber attack is a broad one, according to Greg White, executive director of the Center for Infrastructure Assurance and Security (CIAS) at The University of Texas at San Antonio (UTSA).

At the level of nation-state attacks, cyberwarfare could involve “attacking systems during peacetime—such as our power grid or election systems—or it could be during war time in which case the attacks may be designed to cause destruction, damage, deception, or death,” he told Singularity Hub.

For the US, the Pearl Harbor of cyberwarfare occurred during 2016 with the Russian interference in the presidential election. However, according to White, an Air Force veteran who has been involved in computer and network security since 1986, the history of cyber war can be traced back much further, to at least the first Gulf War of the early 1990s.

“We started experimenting with cyber attacks during the first Gulf War, so this has been going on a long time,” he said. “Espionage was the prime reason before that. After the war, the possibility of expanding the types of targets utilized expanded somewhat. What is really interesting is the use of social media and things like websites for [psychological operation] purposes during a conflict.”

The 2008 conflict between Russia and the Republic of Georgia is often cited as a cyberwarfare case study due to the large scale and overt nature of the cyber attacks. Russian hackers managed to bring down more than 50 news, government, and financial websites through denial-of-service attacks. In addition, about 35 percent of Georgia’s internet networks suffered decreased functionality during the attacks, coinciding with the Russian invasion of South Ossetia.

The cyberwar also offers lessons for today on Russia’s approach to cyberspace as a tool for “holistic psychological manipulation and information warfare,” according to a 2018 report called Understanding Cyberwarfare from the Modern War Institute at West Point.

US Fights Back
News in recent years has highlighted how Russian hackers have attacked various US government entities and critical infrastructure such as energy and manufacturing. In particular, a shadowy group known as Unit 26165 within the country’s military intelligence directorate is believed to be behind the 2016 US election interference campaign.

However, the US hasn’t been standing idly by. Since at least 2012, the US has put reconnaissance probes into the control systems of the Russian electric grid, The New York Times reported. More recently, we learned that the US military has gone on the offensive, putting “crippling malware” inside the Russian power grid as the U.S. Cyber Command flexes its online muscles thanks to new authority granted to it last year.

“Access to the power grid that is obtained now could be used to shut something important down in the future when we are in a war,” White noted. “Espionage is part of the whole program. It is important to remember that cyber has just provided a new domain in which to conduct the types of activities we have been doing in the real world for years.”

The US is also beginning to pour more money into cybersecurity. The 2020 fiscal budget calls for spending $17.4 billion throughout the government on cyber-related activities, with the Department of Defense (DoD) alone earmarked for $9.6 billion.

Despite the growing emphasis on cybersecurity in the US and around the world, the demand for skilled security professionals is well outpacing the supply, with a projected shortfall of nearly three million open or unfilled positions according to the non-profit IT security organization (ISC)².

UTSA is rare among US educational institutions in that security courses and research are being conducted across three different colleges, according to White. About 10 percent of the school’s 30,000-plus students are enrolled in a cyber-related program, he added, and UTSA is one of only 21 schools that has received the Cyber Operations Center of Excellence designation from the National Security Agency.

“This track in the computer science program is specifically designed to prepare students for the type of jobs they might be involved in if they went to work for the DoD,” White said.

However, White is extremely doubtful there will ever be enough cybersecurity professionals to meet demand. "I've been preaching that we've got to worry about cybersecurity in the workforce, not just the cybersecurity workforce, not just cybersecurity professionals. Everybody has a responsibility for cybersecurity."

Artificial Intelligence in Cybersecurity
Indeed, humans are often seen as the weak link in cybersecurity. That point was driven home at a cybersecurity roundtable discussion during this year’s Brainstorm Tech conference in Aspen, Colorado.

Participant Dorian Daley, general counsel at Oracle, said insider threats are at the top of the list when it comes to cybersecurity. “Sadly, I think some of the biggest challenges are people, and I mean that in a number of ways. A lot of the breaches really come from insiders. So the more that you can automate things and you can eliminate human malicious conduct, the better.”

White noted that automation is already the norm in cybersecurity. “Humans can’t react as fast as systems can launch attacks, so we need to rely on automated defenses as well,” he said. “This doesn’t mean that humans are not in the loop, but much of what is done these days is ‘scripted’.”

The use of artificial intelligence, machine learning, and other advanced automation techniques has been part of the cybersecurity conversation for quite some time, according to White, such as pattern analysis to look for specific behaviors that might indicate an attack is underway.

“What we are seeing quite a bit of today falls under the heading of big data and data analytics,” he explained.
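The kind of pattern analysis White describes often reduces to flagging behavior that deviates sharply from a baseline. Here is a minimal defensive sketch of that idea on event-rate data; the threshold and the synthetic log are illustrative assumptions.

```python
# A minimal anomaly-detection sketch: flag time windows whose event rate
# is a statistical outlier against the overall baseline. Threshold assumed.
import numpy as np

def flag_anomalies(events_per_minute: np.ndarray, z_threshold: float = 3.0):
    """Return indices of minutes whose rate is a z-score outlier."""
    mu, sigma = events_per_minute.mean(), events_per_minute.std()
    z = (events_per_minute - mu) / max(sigma, 1e-9)
    return np.flatnonzero(z > z_threshold)

rng = np.random.default_rng(7)
rates = rng.poisson(20, size=600).astype(float)   # normal login traffic
rates[300:305] = 400                              # sudden burst of attempts
print(flag_anomalies(rates))                      # -> [300 301 302 303 304]
```

Real deployments use rolling baselines and richer features, but the principle is the same: automate the first pass, and keep humans in the loop for judgment calls.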

But there are signs that AI is going off-script when it comes to cyber attacks. In the hands of threat groups, AI applications could lead to an increase in the number of cyberattacks, wrote Michelle Cantos, a strategic intelligence analyst at cybersecurity firm FireEye.

“Current AI technology used by businesses to analyze consumer behavior and find new customer bases can be appropriated to help attackers find better targets,” she said. “Adversaries can use AI to analyze datasets and generate recommendations for high-value targets they think the adversary should hit.”

In fact, security researchers have already demonstrated how a machine learning system could be used for malicious purposes. The Social Network Automated Phishing with Reconnaissance system, or SNAP_R, generated more than four times as many spear-phishing tweets on Twitter as a human, and was just as successful at targeting victims in order to steal sensitive information.

Cyber war is upon us. And like the current war on terrorism, there are many battlefields from which the enemy can attack and then disappear. While total victory is highly unlikely in the traditional sense, innovations through AI and other technologies can help keep the lights on against the next cyber attack.

Image Credit: pinkeyes / Shutterstock.com


#433668 A Decade of Commercial Space ...

In many industries, a decade is barely enough time to cause dramatic change unless something disruptive comes along—a new technology, business model, or service design. The space industry has recently been enjoying all three.

But 10 years ago, none of those innovations were guaranteed. In fact, on Sept. 28, 2008, an entire company watched and hoped as their flagship product attempted a final launch after three failures. With cash running low, this was the last shot. Over 21,000 kilograms of kerosene and liquid oxygen ignited and powered two booster stages off the launchpad.

This first official picture of the Soviet satellite Sputnik I was issued in Moscow Oct. 9, 1957. The satellite measured 1 foot, 11 inches and weighed 184 pounds. The Space Age began as the Soviet Union launched Sputnik, the first man-made satellite, into orbit, on Oct. 4, 1957. AP Photo/TASS
When that Falcon 1 rocket successfully reached orbit and the company secured a subsequent contract with NASA, SpaceX had survived its ‘startup dip’. That milestone, the first privately developed liquid-fueled rocket to reach orbit, ignited a new space industry that is changing our world, on this planet and beyond. What has happened in the intervening years, and what does it mean going forward?

While scientists are busy developing new technologies that address the countless technical problems of space, there is another segment of researchers, including myself, studying the business angle and the operations issues facing this new industry. In a recent paper, my colleague Christopher Tang and I investigate the questions firms need to answer in order to create a sustainable space industry and make it possible for humans to establish extraterrestrial bases, mine asteroids and extend space travel—all while governments play an increasingly smaller role in funding space enterprises. We believe these business solutions may hold the less-glamorous key to unlocking the galaxy.

The New Global Space Industry
When the Soviet Union launched their Sputnik program, putting a satellite in orbit in 1957, they kicked off a race to space fueled by international competition and Cold War fears. The Soviet Union and the United States played the primary roles, stringing together a series of “firsts” for the record books. The first chapter of the space race culminated with Neil Armstrong and Buzz Aldrin’s historic Apollo 11 moon landing which required massive public investment, on the order of US$25.4 billion, almost $200 billion in today’s dollars.

Competition characterized this early portion of space history. Eventually, that evolved into collaboration, with the International Space Station being a stellar example, as governments worked toward shared goals. Now, we’ve entered a new phase—openness—with private, commercial companies leading the way.

The industry for spacecraft and satellite launches is becoming more commercialized, due, in part, to shrinking government budgets. According to a report from the investment firm Space Angels, a record 120 venture capital firms invested over $3.9 billion in private space enterprises last year. The space industry is also becoming global, no longer dominated by the Cold War rivals, the United States and USSR.

In 2018 to date, there have been 72 orbital launches, an average of two per week, from launch pads in China, Russia, India, Japan, French Guiana, New Zealand, and the US.

The uptick in orbital launches of actual rockets as well as spacecraft launches, which includes satellites and probes launched from space, coincides with this openness over the past decade.

More governments, firms and even amateurs engage in various spacecraft launches than ever before. With more entities involved, innovation has flourished. As Roberson notes in Digital Trends, “Private, commercial spaceflight. Even lunar exploration, mining, and colonization—it’s suddenly all on the table, making the race for space today more vital than it has felt in years.”

Worldwide launches into space. Orbital launches include manned and unmanned spaceships launched into orbital flight from Earth. Spacecraft launches include all vehicles such as spaceships, satellites and probes launched from Earth or space. Wooten, J. and C. Tang (2018) Operations in Space, Decision Sciences; Space Launch Report (Kyle 2017); Spacecraft Encyclopedia (Lafleur 2017), CC BY-ND

One can see this vitality plainly in the news. On Sept. 21, Japan announced that two of its unmanned rovers, dubbed Minerva-II-1, had landed on a small, distant asteroid. For perspective, the scale of this landing is similar to hitting a 6-centimeter target from 20,000 kilometers away. And earlier this year, people around the world watched in awe as SpaceX’s Falcon Heavy rocket successfully launched and, more impressively, returned its two boosters to a landing pad in a synchronized ballet of epic proportions.

Challenges and Opportunities
Amidst the growth of capital, firms, and knowledge, both researchers and practitioners must figure out how entities should manage their daily operations, organize their supply chain, and develop sustainable operations in space. This is complicated by the hurdles space poses: distance, gravity, inhospitable environments, and information scarcity.

One of the greatest challenges involves actually getting the things people want in space, into space. Manufacturing everything on Earth and then launching it with rockets is expensive and restrictive. A company called Made In Space is taking a different approach by maintaining an additive manufacturing facility on the International Space Station and 3D printing right in space. Tools, spare parts, and medical devices for the crew can all be created on demand. The benefits include more flexibility and better inventory management on the space station. In addition, certain products can be produced better in space than on Earth, such as pure optical fiber.

How should companies determine the value of manufacturing in space? Where should capacity be built and how should it be scaled up? The figure below breaks up the origin and destination of goods between Earth and space and arranges products into quadrants. Humans have mastered the lower left quadrant, made on Earth—for use on Earth. Moving clockwise from there, each quadrant introduces new challenges, for which we have less and less expertise.

A framework of Earth-space operations. Wooten, J. and C. Tang (2018) Operations in Space, Decision Sciences, CC BY-ND
I first became interested in this particular problem as I listened to a panel of robotics experts discuss building a colony on Mars (in our third quadrant). You can’t build the structures on Earth and easily send them to Mars, so you must manufacture there. But putting human builders in that extreme environment is equally problematic. Essentially, an entirely new mode of production using robots and automation in an advance envoy may be required.

Resources in Space
You might wonder where one gets the materials for manufacturing in space, but there is actually an abundance of resources: Metals for manufacturing can be found within asteroids, water for rocket fuel is frozen as ice on planets and moons, and rare elements like helium-3, a potential fusion fuel, are embedded in the crust of the moon. If we could bring that particular isotope back to Earth in quantity, it could in principle reduce our dependence on fossil fuels.

As demonstrated by the recent Minerva-II-1 asteroid landing, people are acquiring the technical know-how to locate and navigate to these materials. But extraction and transport are open questions.

How do these cases change the economics in the space industry? Already, companies like Planetary Resources, Moon Express, Deep Space Industries, and Asterank are organizing to address these opportunities. And scholars are beginning to outline how to navigate questions of property rights, exploitation and partnerships.

Threats From Space Junk
A computer-generated image of objects in Earth orbit that are currently being tracked. Approximately 95 percent of the objects in this illustration are orbital debris – not functional satellites. The dots represent the current location of each item. The orbital debris dots are scaled according to the image size of the graphic to optimize their visibility and are not scaled to Earth. NASA
The movie "Gravity" opens with a Russian satellite exploding, which sets off a chain reaction of destruction thanks to debris hitting a space shuttle, the Hubble telescope, and part of the International Space Station. The sequence, while not perfectly plausible as written, depicts a very real phenomenon. In fact, in 2013, a Russian satellite disintegrated when it was hit with fragments from a Chinese satellite destroyed in a 2007 anti-satellite missile test. This kind of collision cascade is known as the Kessler effect, and the danger from the 500,000-plus pieces of space debris has already gotten some attention in public policy circles. How should one prevent, reduce, or mitigate this risk? Quantifying the environmental impact of the space industry and addressing sustainable operations is still to come.

NASA scientist Mark Matney is seen through a fist-sized hole in a 3-inch thick piece of aluminum at Johnson Space Center’s orbital debris program lab. The hole was created by a thumb-size piece of material hitting the metal at very high speed simulating possible damage from space junk. AP Photo/Pat Sullivan
What’s Next?
It’s true that space is becoming just another place to do business. There are companies that will handle the logistics of getting your destined-for-space module on board a rocket; there are companies that will fly those rockets to the International Space Station; and there are others that can make a replacement part once there.

What comes next? In one sense, it’s anybody’s guess, but all signs point to this new industry forging ahead. A new breakthrough could alter the speed, but the course seems set: exploring farther away from home, whether that’s the moon, asteroids, or Mars. It’s hard to believe that 10 years ago, SpaceX launches were yet to be successful. Today, a vibrant private sector consists of scores of companies working on everything from commercial spacecraft and rocket propulsion to space mining and food production. The next step is working to solidify the business practices and mature the industry.

Standing in a large hall at the University of Pittsburgh as part of the White House Frontiers Conference, I see the future. Wrapped around my head are state-of-the-art virtual reality goggles. I’m looking at the surface of Mars. Every detail is immediate and crisp. This is not just a video game or an aimless exercise. The scientific community has poured resources into such efforts because exploration is preceded by information. And who knows, maybe 10 years from now, someone will be standing on the actual surface of Mars.

Image Credit: SpaceX

Joel Wooten, Assistant Professor of Management Science, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.
