Day in, day out – CISA chief uploads documents to public ChatGPT
- Jan 29
In August 2025, security systems detected multiple unauthorized uploads of FOUO-marked materials to public ChatGPT. FOUO stands for For Official Use Only: sensitive but unclassified information whose disclosure could harm citizens' privacy and the functioning of programs critical to U.S. national security.
The FOUO material ended up in a public service where data can be stored by the provider and, in an extreme scenario, surface in responses to some of the roughly 700 million ChatGPT users worldwide. It is, of course, grotesque that the agency responsible for protecting federal networks itself violated fundamental principles of data governance.

Unfortunately, it becomes much less funny when we realize how many times a day the same thing happens in Polish government offices. Data flows out of government offices in a wide stream, and no one knows what it will be used for, or whether it will end up in the hands of American or Chinese intelligence services. After all, no one knows whether officials, lacking official tools, are using ChatGPT, or perhaps DeepSeek, on their phones.
For the public sector: this is not a “user error,” but an architectural error
The CISA incident (and our everyday reality) is not just one person's breach of procedure. It is the result of the absence of official AI tools and of a consistent model for their use. As a consequence, critical information resources sit one paste away from public consumer services, where they cannot be audited.
Of course, you can ignore the problem, or "solve" it by writing a rule that says "you must not use private DeepSeek" — but that is a very naive approach. Nothing will change the fact that people use AI tools because doing so significantly increases their productivity at work. And that, after all, is what everyone wants, bosses and employees alike. The largest corporations (such as Citibank) say it outright: if you don't learn to work with AI, you will lose your job, because your results will be much worse than those of a colleague who embraces it.
There is a solution, and CISA already knows it
- a clearly separated, sovereign AI stack (on-prem or sovereign cloud),
- central query-routing policies (what may go to the public cloud and what must stay local),
- an ergonomic internal AI alternative with a UX comparable to public chatbots.
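The query-routing policy in the second point can be sketched in a few lines. A minimal version is a gate that checks each outgoing prompt for sensitivity markings and pins anything marked to the internal model. The endpoint URLs and the list of markings below are illustrative assumptions, not any agency's actual configuration.

```python
import re

# Illustrative markings that must never leave the internal network;
# a real deployment would use the organization's own classification scheme.
SENSITIVE_MARKINGS = re.compile(
    r"\b(FOUO|For Official Use Only|CUI|Controlled Unclassified)\b",
    re.IGNORECASE,
)

# Hypothetical endpoints: an on-prem model and a public cloud API.
LOCAL_ENDPOINT = "https://llm.internal.example.gov/v1"
PUBLIC_ENDPOINT = "https://api.public-llm.example.com/v1"

def route_query(text: str) -> str:
    """Return the endpoint a query may be sent to.

    Anything carrying a sensitivity marking is pinned to the
    on-prem endpoint; everything else may use the public cloud.
    """
    if SENSITIVE_MARKINGS.search(text):
        return LOCAL_ENDPOINT
    return PUBLIC_ENDPOINT

print(route_query("Summarize this FOUO incident report"))  # routed on-prem
print(route_query("Draft a public press release"))         # may go to the cloud
```

In practice such a gate would sit in a central proxy in front of all AI traffic, so the policy is enforced once rather than trusted to each user's discipline.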
If not, “shadow AI” in the form of public chatbots will become the norm, and the consequences for the state and its citizens will be difficult to imagine.
Besides, even the most rigorous security procedures and policies lose out to daily operational pressure if the secure tool is slower, less convenient, or formally harder to access than a public chatbot.