WASHINGTON (Reuters) - The U.S. Space Force has paused the use of web-based generative artificial intelligence tools like ChatGPT for its workforce over data security concerns, according to a memo seen by Reuters.
A memo dated Sept. 29 and addressed to Guardians, as the Space Force calls its workforce, prohibits personnel from using such AI tools, including large language models, on government computers until they receive formal approval from the force's Chief Technology and Innovation Office.
It said the temporary ban was "due to data aggregation risks."
Use of generative AI, powered by large language models that ingest huge troves of data to learn, has exploded in the past year, underpinning fast-evolving products such as OpenAI's ChatGPT that can swiftly generate content like text, images or video from a simple prompt.
Lisa Costa, Space Force's chief technology and innovation officer, said in the memo that the technology "will undoubtedly revolutionize our workforce and enhance Guardians' ability to operate at speed."
An Air Force spokesperson confirmed the temporary ban, which was first reported by Bloomberg.
"A strategic pause on the use of Generative AI and Large Language Models within the U.S. Space Force has been implemented as we determine the best path forward to integrate these capabilities into Guardians' roles and the USSF mission," Air Force spokesperson Tanya Downsworth said in a statement.
"This is a temporary measure to protect the data of our service and Guardians," she added.
Costa said in the memo that her office had formed a generative AI task force with other Pentagon offices to consider ways to use the technology in a "responsible and strategic manner."
More guidance on the Space Force's use of generative AI would be released in the coming month, she added.
(Reporting by Joey Roulette; editing by Jonathan Oatis)