Intelligence Technology Is a Double-Edged Sword - Signal Magazine

From the outer space environment of the moon to the virtual realm of cyberspace, technology challenges have the potential to vex the intelligence community. Many of the tools that the community is counting on to accomplish its future mission can be co-opted or adopted by adversaries well-schooled in basic scientific disciplines. So U.S. intelligence officials must move at warp speed to develop innovations that give them an advantage over adversaries while concurrently denying foes the use of the same innovations against the United States.

Over the past few decades, the U.S. intelligence community has excelled at keeping pace with technology advances, turning them into key enablers of espionage and counterespionage, offers Bob Gourley, co-founder and chief technology officer of OODA LLC. But now, technologies are changing so quickly that adversaries are able to even the score. And they are learning to do what the United States has long been adept at: turning national security technology against its host.

Gourley uses the acronym CAMBRIC to describe the future of information technology. While the word in common usage refers to a finely woven cloth, the acronym’s letters stand for cloud, artificial intelligence (AI), mobility, big data, robotics, the Internet of Things (IoT) and cybersecurity. The intelligence community is directly in the crosshairs of its elements, he says.

For the cloud, Gourley estimates that only about 5 percent of intelligence community workloads are in some type of cloud. A big push looms for moving much of the other 95 percent to the cloud, and that effort has just begun. He adds that adversaries also are moving to the cloud, which brings about the dichotomy of defending the community’s cloud and attacking someone else’s.

In the technology domain, AI is being counted on to help the collection and sorting of data. However, AI can be a trap for unwitting intelligence officials, Gourley offers.

For example, machine learning algorithms can retrain themselves as new data comes in. This is happening today, but problems already have arisen that bode ill for future applications lacking appropriate safeguards. Machine learning algorithms can be deceived by adversaries in ways that degrade the fidelity of the information gleaned from the data, Gourley says. And AI can be self-deceiving. “Any algorithm that can change itself can corrupt itself,” he states (see AI Can Be Its Own Worst Enemy, below).
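Gourley’s warning about self-corruption can be made concrete with a toy feedback loop. The sketch below is a hypothetical, deliberately simplified model that retrains on its own slightly inflated reports, treating them as fresh ground truth; all numbers are illustrative, and no adversary is involved.

```python
# A hypothetical, deliberately simplified feedback loop: a model retrains
# on its own slightly inflated reports, treating them as fresh ground truth.
true_rate = 0.30   # true fraction of "threat" events in the world
estimate = 0.30    # the model's learned estimate, initially correct

for _ in range(20):
    reported = min(1.0, estimate * 1.1)         # the model over-reports by 10 percent
    estimate = 0.5 * estimate + 0.5 * reported  # ...then learns from its own reports

print(round(estimate, 2))   # → 0.8, well above the true rate of 0.3
```

Each cycle compounds the last one’s error, so a small reporting bias inflates the estimate by more than a factor of two in 20 iterations, with no outside interference at all.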

Outsiders can have the same effect. “Machine learning algorithms can be deceived,” he declares. In national security applications, an adversary could program or manipulate data to influence AI into generating misleading information, and the intelligence community must take pains to avoid this scenario.
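A minimal sketch of the kind of data manipulation Gourley describes, under the assumption of a trivially simple nearest-centroid classifier (the data, labels, and scenario are invented for illustration): an adversary who can slip mislabeled points into the training set shifts the model’s decision on inputs it previously handled correctly.

```python
import numpy as np

# Toy "sensor reading" classifier: nearest-centroid over two classes of 1-D readings.
clean_a = np.array([1.0, 1.2, 0.8])   # class A: low readings
clean_b = np.array([5.0, 5.2, 4.8])   # class B: high readings

def classify(x, a, b):
    # Assign x to whichever class centroid (mean) is closer.
    return "A" if abs(x - a.mean()) < abs(x - b.mean()) else "B"

probe = 2.4
print(classify(probe, clean_a, clean_b))   # "A": closer to centroid ~1.0 than ~5.0

# An adversary who can inject labeled training data poisons class A with a
# handful of high readings, dragging its centroid upward.
poisoned_a = np.concatenate([clean_a, np.array([10.0, 10.0, 10.0])])
print(classify(probe, poisoned_a, clean_b))  # now "B": the same probe is misread
```

Three planted points are enough to flip the answer on an unchanged input, which is why red teaming the training pipeline, not just the finished model, matters.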

Many commercial firms have discovered that their AI algorithms or data are vulnerable to attacks from outsiders, which can lead to thoroughly corrupted information. Gourley describes how businesses red team their AI constructs with outside experts who try to manipulate the algorithm and its data. But even with this red team validation, companies must maintain scrutiny of their AI to ensure it is generating expected results. Having verifying information may be necessary for decision making, he says. “These kinds of lessons will apply to the intelligence community. But the intelligence community is operating at a much greater scale and will need much more well-engineered solutions than commercial industry will,” he emphasizes.

“Some of these machine learning algorithms change themselves so much that no human can understand how they work,” Gourley declares.

The community must incorporate a means of explaining AI findings if it is to avoid deliberate or accidental deception, he says. “We need to know what made [AI] come up with this conclusion,” Gourley says of a hypothetical AI-driven report. He adds that many major academic institutions that teach AI are addressing this issue, and this should help the intelligence community in the future.

Startup companies also are being formed to deal with this need. One approach is to examine the millions of variables that machine learning systems have taught themselves, and then have the AI tell the human the top 10 variables, he relates. “There is a lot of innovation in AI protection and explainability.”
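The “top 10 variables” approach Gourley relates can be sketched in a few lines, assuming a hypothetical linear model whose learned weights are available for inspection; real explainability tooling for self-modifying deep networks is far more involved, but the reporting idea is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 1_000
weights = rng.normal(scale=0.01, size=n_features)  # most learned weights are negligible
weights[[3, 7, 42]] = [5.0, 3.0, -4.0]             # a few features dominate the model

# Rank every variable by how strongly the model relies on it,
# then report only the most influential handful to the human analyst.
ranking = np.argsort(np.abs(weights))[::-1]
top10 = ranking[:10]
print("most influential features:", top10)  # the three planted features lead the list
```

Instead of asking an analyst to audit a thousand opaque variables, the system surfaces the short list that actually drives its conclusions, which is the starting point for asking “what made it come up with this?”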

For mobility, devices have given humans the ability to exchange thoughts at a distance as if everyone had extrasensory perception. Today, 70 percent of the world’s population have mobile devices, and that percentage is increasing steadily. The devices are becoming ever smarter, and the intelligence community must continue to track that progress, Gourley posits.

Atop the intelligence community’s technology wish list are new analytical capabilities for big data, he points out. The community is doing well in this area, he says, and that trend is likely to continue. Being able to exploit big data effectively is vital for intelligence, and while challenges remain, solving them is essential.

The biggest issue in robotics is understanding adversaries’ use of robots, he offers. Many foes are fielding robots for a growing number of military applications, and the U.S. intelligence community must study and understand them. “Robotics in adversary militaries are a key threat, and if we can figure out exactly what they’re doing and how to destroy them, that’s good for us,” he points out. Conversely, U.S. robots must be protected from enemy action.

However, new technologies emerging under the rubric of the IoT have the potential to be both an asset and a challenge to the intelligence community. Gourley attests that the IoT will explode with capabilities, as 20 billion IoT devices soon will be in operation. “The concern is whether the intelligence community is keeping up with that technology so that we can defend our own systems and understand what adversaries are doing in that domain,” he declares. “This world of small, embedded sensors that are everywhere is a key topic area.”

The IoT could prove to be a model domain for intelligence community activities. Traditionally, intelligence has focused on learning what the adversaries are doing and determining what they will do next. The IoT can cloud the community’s ability to attain those goals, Gourley suggests, adding that the community must be able to understand how adversaries are using the IoT as well as how to gain knowledge from their use of it.

At the heart of this is defending the United States from enemy activities in the IoT. “What is the counterintelligence threat to the employees of the intelligence community?” he asks. “What is the counterintelligence threat to the small [business] startups that are creating our economy and now using these IoT devices? How can adversaries threaten our economy by attacking companies through IoT devices?”

Gourley continues that the intelligence community has a role in helping figure out what adversaries will do with the IoT. “It all starts with understanding what the adversary objectives are,” he states. Similarly, U.S. intelligence can exploit an adversary’s IoT to learn what it is up to.

But above all, cybersecurity is the linchpin for optimizing all these intelligence information technologies. Just as the Internet is nowhere near secure after nearly three decades of popular use, no one should expect the IoT or other future technologies—including AI and robotics—to be secure, Gourley points out. The intelligence community must expect these information-based technologies to be targeted at their data points, and it must guard against intrusions.

After these information technology areas, the commercialization of space is a vital arena where the intelligence community must heed rapid developments. It can reap the benefits of commercial space technology, but it also must be on the alert for foreign efforts to counter those technologies.

NASA is relying on the commercial sector to provide access to low earth orbit as well as for support hardware in space, Gourley notes. NASA’s Artemis program, which aims to return astronauts to the moon by 2024 as a step toward establishing a sustained lunar presence, relies extensively on commercial technologies and capabilities. The intelligence community must gear up to protect the program from adversary exploitation, he attests.

“There are bad guys who are going to attack Project Artemis,” he professes. “Who are they, and what are their objectives? How do we inform the rest of government to take action to defend Project Artemis?” he asks.

“This is an example of an acceleration of technology where there needs to be more of a focus on what the threat actors are going to do to oppose U.S. national interests,” Gourley declares. He warns of a potential “Pearl Harbor in space” if the intelligence community is not sufficiently vigilant about the high frontier.

NASA’s new acquisition approaches will serve as models for the intelligence community, he professes. The agency effectively is ordering “space as a service” in which it lets a contract to several companies and pays them when they provide the requested service. Industry is left to determine how to provide the service and then to carry it out. The intelligence community should follow that model and offer contracts to, for example, place a fixed number of satellites with defined capabilities into low earth orbit within a given period of time. Industry would be able to use technologies such as cubesats and new sensor systems placed in orbit via new commercial launch services to achieve this goal, and the intelligence community would have rapid deployment capabilities on demand.

With this enhanced commercialization of space comes the challenge—and opportunity—to launch new and faster sensors to collect information on adversaries. The intelligence community must be able to take advantage of this opportunity, he warrants.

Looking further ahead, the intelligence community’s most important technology may be quantum effects, Gourley offers. Two aspects stand out. In the next three to five years, he predicts, adversaries will be able to use quantum computing to break U.S. encryption. Messages once considered unbreakable, already sent by the United States and recorded by adversaries, suddenly might become transparent when subjected to quantum processing methods. A database hijacked by an enemy today could be warehoused until effective quantum decryption technology emerges.

“We have information that should be protected for a long time that adversaries soon are going to have access to,” he predicts. “We need to start thinking now—how do we improve our protection for this world of quantum computing?” This would include an assessment of the data that might need to be protected for 50 years, he points out. It can be done, but it must be achieved methodically.
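One methodical way to run that assessment is Mosca’s inequality from post-quantum planning: data is exposed if its required secrecy lifetime, plus the time needed to migrate to quantum-safe cryptography, exceeds the estimated arrival of a cryptographically relevant quantum computer. The sketch below applies that rule; the specific year figures are purely illustrative.

```python
def at_risk(secrecy_years: float, migration_years: float, quantum_eta_years: float) -> bool:
    """Mosca's inequality: data is at risk if it must stay secret past the point
    a quantum computer arrives, accounting for how long migration itself takes."""
    return secrecy_years + migration_years > quantum_eta_years

# Illustrative numbers only: records needing 50 years of secrecy, a 5-year
# migration effort, and a hypothetical 15-year quantum timeline are clearly exposed.
print(at_risk(50, 5, 15))   # True: this data is already being harvested on borrowed time
print(at_risk(2, 1, 15))    # False: short-lived data is comparatively safe
```

The point of the exercise is triage: the 50-year category must move to quantum-resistant protection first, long before a working quantum computer exists, because the recording has already started.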

The other aspect of quantum effects is U.S. research into its related technologies, success in which will enhance the ability of the U.S. intelligence community to steal adversaries’ secrets. “Are we doing enough to plan for ways that we can break adversary encryption using quantum computing and gain insights into what bad guys are planning and what adversaries are trying to do to us in the world?” Gourley asks.

“The biggest driver of change in the intelligence community, when it comes to capabilities, over the next five years is going to be the transition into the quantum age of computing,” he declares.

AI Can Be Its Own Worst Enemy

The hope behind artificial intelligence (AI) is that an algorithm will think for itself and learn from experience as it serves its human masters. However, Gourley warns that machine learning can lead an algorithm astray to the point where its original mission is corrupted by its own learning.

He cites the example, originally reported by Reuters, of Amazon’s use of AI to screen resumes submitted by prospective job applicants. The self-learning algorithms were designed to scan for the best potential recruits and submit the top resumes up the human chain of review for consideration. The computer models vetted applicants by observing patterns in their resumes over a 10-year period.

However, this approach inadvertently enabled the AI to become misogynistic. Basing its learning experience on traditional resume patterns amassed over 10 years—which largely represented men—the AI taught itself that male resumes were preferable and overwhelmingly rejected resumes from women. Elements of women’s resumes that did not appear on men’s were flagged as undesirable even though they did not indicate any lack of ability or poor work habits. Amazon worked to fix the recruiting engine but determined that it could not be assured that the AI would not discriminate in other ways. Ultimately, it abandoned the AI-driven screening process.
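The failure mode is easy to reproduce in miniature. The sketch below assumes a deliberately naive word-frequency scorer and a tiny invented dataset; it is not Amazon’s system, but it shows how a term absent from historically “hired” resumes gets penalized regardless of merit.

```python
from collections import Counter

# Hypothetical historical data: resumes labeled by past hiring decisions,
# where the "hired" pile happens to contain no women's organizations.
hired = ["captain chess club", "football team captain", "chess club president"]
rejected = ["women's chess club captain", "women's debate team lead"]

hired_words = Counter(w for r in hired for w in r.split())
rejected_words = Counter(w for r in rejected for w in r.split())

def score(resume: str) -> int:
    # A naive scorer: reward words that correlate with past hires,
    # penalize words that correlate with past rejections.
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

print(score("chess club captain"))           # 3: matches the historical pattern
print(score("women's chess club captain"))   # 1: same qualifications, penalized for "women's"
```

Two otherwise identical resumes diverge solely on a word that says nothing about ability, which is exactly the pattern the Reuters reporting described, learned straight from the historical labels.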

Another commercial AI setback occurred in 2016 with Microsoft’s Tay, an AI chatbot. Released by Microsoft on Twitter, Tay quickly became the target of trolls, who bombarded it with dialogue that turned it into a sex-crazed racial supremacist. Tay, which was designed to mimic the language patterns of a 19-year-old American woman, learned the wrong lessons from interactions with these miscreants and began parroting Nazi rhetoric, and Microsoft had to take it down within hours of its introduction.

Other examples of AI distortion abound, but these two cases highlight pitfalls of AI in intelligence applications. In the first, a well-meaning AI algorithm drew the wrong conclusions as it pursued its mission, which did its users no favors. In the second, outside influences conspired to corrupt the algorithm and transform it into something that ran counter to everything its owners hoped. The intelligence community must guard against these types of outcomes, as either could be devastating.


