Artificial Intelligence: Friend or Foe?

Microsoft reportedly has launched a new Azure generative AI service specifically for U.S. intelligence agencies like the CIA that includes security features designed to let analysts pore over classified information beyond the reach of the outside world.

In an interview with Bloomberg, William Chappell, Microsoft's CTO for strategic missions and technology, said the new, as-yet-unnamed service, which is based on OpenAI’s GPT-4 large language model (LLM), gives spy agencies the benefits of tools like ChatGPT for processing top-secret information without the risks normally associated with generative AI, including data leaks and breaches.

“This is the first time we’ve ever had an isolated version – when isolated means it’s not connected to the internet – and it’s on a special network that’s only accessible by the U.S. government,” Chappell told the news site. “You don’t want it to learn on the questions that you’re asking and then somehow reveal that information.”

He noted that intelligence agencies had been looking for a way to bring generative AI capabilities into their work, but that the products on the market posed security risks that were too high. The new service’s air-gapped cloud environment, by contrast, can’t be reached via the internet, and while the service can read data and files, it can’t learn from them.

A Modified AI Supercomputer

Cloud services that use tools like ChatGPT and Microsoft’s own Copilot generative AI technology run in Azure data centers, but Chappell said the tech vendor spent 18 months developing the new system, work that included modifying an AI supercomputer in Iowa.

The service launched last week, with Chappell saying it can be used by about 10,000 people within the intelligence community.

Intelligence agencies have not been shy about their ambitions to use generative AI in their work. The CIA has been looking into generative AI for more than a year, with reports dating back to February 2023 about the spy agency beginning to explore how the emerging technology could help its mission and what guardrails would need to be in place.

Generative AI an ‘Inflection Point’

At the time, Lakshmi Raman, the CIA’s AI director, speaking at an AI summit in Virginia, said the agency had “seen the excitement in the public space around ChatGPT. It’s certainly an inflection point in this technology, and we definitely need to [be exploring] ways in which we can leverage new and upcoming technologies. And I think the way we’re approaching it is we need to approach it in a disciplined way.”

Raman added that “a lot of work is underway to ensure the CIA’s success in becoming a mature and AI-driven organization, as well as expanding our understanding of adversaries’ use of AI and [machine learning] capabilities,” pointing to such efforts as creating a common platform for shared services and creating ways to increase workers’ skills with such technologies.

The CIA reportedly later in the year announced plans to create its own generative AI chatbot service. The ChatGPT-style chatbot was designed by the agency’s Open Source Enterprise unit, with the goal of making it available to other intelligence agencies in hopes of pushing back against China’s growing AI capabilities.

A Bulwark Against China

The CIA’s chatbot will leverage advanced generative AI to help its agents access and understand open-source data, the agency said, and will be rolled out to multiple intelligence agencies besides the CIA as part of an ongoing effort to rival the growing, AI-powered intelligence capabilities of China.

Randy Nixon, director of the CIA’s Open Source Enterprise unit, told Bloomberg that such a tool was needed to manage the growing mass of data it was collecting, adding that “we have to find needles in the needle field. The scale of how much we collect and what we collect has grown astronomically over the last 80-plus years. So much so, that this could be daunting and at times unusable for our consumers.”

NSA’s AI Interest

The National Security Agency (NSA) also has talked about its use of AI in its operations, with then-Director Paul Nakasone – who retired in February – saying late last year that his agency and other intelligence and defense agencies already use AI.

“AI helps us, but our decisions are made by humans. And that’s an important distinction,” Nakasone said, according to the Associated Press. “We do see assistance from artificial intelligence. But at the end of the day, decisions will be made by humans and humans in the loop.”

That said, not everyone is keen on the NSA’s use of AI. The American Civil Liberties Union (ACLU) late last month filed a lawsuit under the Freedom of Information Act (FOIA) seeking studies, reports and other documents on how the agency is using AI and how that use may hinder citizens’ civil rights and liberties.

“In recent years, AI has transformed many of the NSA’s daily operations: The agency uses AI tools to help gather information on foreign governments, augment human language processing, comb through networks for cybersecurity threats, and even monitor its own analysts as they do their jobs,” Patrick Toomey, deputy director of the ACLU’s National Security Project, and Shaiba Rather, an ACLU Fellow, wrote in a blog post. “Unfortunately, that’s about all we know. As the NSA integrates AI into some of its most profound decisions, it’s left us in the dark about how it uses AI and what safeguards, if any, are in place to protect everyday Americans and others around the globe whose privacy hangs in the balance.”
