
Ocultia: an anonymiser to use AI without leaking private data
Rule number one when using AI: never paste personal or sensitive information into ChatGPT, Gemini, Claude or any other AI chatbot. There is no guarantee that this data won’t end up being used to train models, or that it won’t leak through some vulnerability or security breach on the AI provider’s side.
While we were preparing the syllabus for our next AI course, focused on a sector where data privacy is essential, our students kept raising the same concern: the fear of pasting sensitive information into ChatGPT and having that data end up where it shouldn’t.
With that in mind, we decided to build Ocultia: a free web app (with the option of an enterprise version running inside your own intranet) that anonymises your text or documents directly in the browser, without any data leaving your machine.
The idea is simple: before pasting anything into ChatGPT, you run the content through Ocultia, which detects and masks the sensitive information. Then you work with the AI without worrying, and when you’re done you bring back the original text with a single click.
The problem with the tools that already existed
There are powerful anonymisation solutions on the market. The problem is that almost all of them require local installation, Python dependencies, operating-system configuration… a process that varies from machine to machine and that, in practice, is an insurmountable barrier for anyone who isn’t technical and just wants to use AI safely in their day-to-day work.
We needed something that worked from the browser, with no installation, no account creation, and no data leaving the user’s device. And it turns out the technology to do this already exists. You just have to know how to put all the pieces together. And that’s something we’re genuinely good at.
AI running in the browser, with no servers in between
The key piece is Transformers.js, a library that lets you run AI models directly in the browser, on anything from a desktop computer to a phone. The model is downloaded once. From that point on, everything works completely offline. Any document or text the user pastes never leaves the browser: it isn’t sent to any server and isn’t stored anywhere.
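To make the detection step concrete, here is a minimal sketch of how the output of an in-browser entity-recognition model could drive masking. The `entities` array is hard-coded to mimic the grouped output of a Transformers.js token-classification pipeline (the exact shape, label names and marker format are our assumptions for illustration; Ocultia’s real model and post-processing may differ):

```javascript
// Sample input and simulated NER output (shape assumed: word, entity
// type, and character offsets, as a token-classification pipeline with
// entity grouping would produce).
const text = 'Maria Lopez visited Dr. Chen on 4 May.';
const entities = [
  { word: 'Maria Lopez', entity: 'PER', start: 0, end: 11 },
  { word: 'Chen', entity: 'PER', start: 24, end: 28 },
];

// Replace each detected entity with a numbered label and remember the
// label -> original mapping (the basis of a "key file").
function maskEntities(text, entities) {
  const key = {};      // label -> original value
  const counters = {}; // per-type counters: PER_1, PER_2, ...
  let masked = '';
  let cursor = 0;
  for (const e of entities) {
    counters[e.entity] = (counters[e.entity] || 0) + 1;
    const label = `[${e.entity}_${counters[e.entity]}]`;
    key[label] = text.slice(e.start, e.end);
    masked += text.slice(cursor, e.start) + label;
    cursor = e.end;
  }
  masked += text.slice(cursor);
  return { masked, key };
}

const { masked, key } = maskEntities(text, entities);
console.log(masked); // → "[PER_1] visited Dr. [PER_2] on 4 May."
```

The masked text is what you would paste into the chat; the `key` object is what makes the round trip back possible.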
The workflow: anonymise, use the AI, and recover the original
Ocultia has two complementary modes:
Anonymisation mode: you paste the text or upload a document (Word, PDF or plain text). The tool identifies sensitive entities and automatically shows you the anonymised result, letting you edit or fine-tune anything you need. Once you’re happy, you copy the clean text and take it to whichever AI chat you use: ChatGPT, Gemini, Claude, you name it.
Reintegration mode: when you’re done in the chat and have the text reworked by the AI, you paste it back into Ocultia. If the session is still open, the tool automatically replaces every label or marker generated in the previous step with the real data. If some time has passed, you load the anonymisation key file generated at the start and get the original text rehydrated with the real data.
The result is a complete workflow that doesn’t force you to choose between productivity and security.
A project born from a real need
Ocultia started as an answer to our students’ problem. But while we were building it (using Claude Code as our development assistant), we realised the problem is much broader.
Any professional who handles third-party data and wants to make the most of AI runs into the same friction. Doctors, lawyers, advisors, administrators. People working with real information about real people, who can’t afford a slip-up.
Now they have a free tool, with no sign-up, that runs in the browser and sends no data to any server. The compute cost is borne by the user’s machine; our cost was the development time. We think it’s a fair trade.
For the more technical: LLMs running in the browser
While building Ocultia, we explored several technical approaches. Beyond Transformers.js, we tried two more paths: running small versions of Gemma through the same library, and using Chrome’s native API to run Gemini Nano directly, the model Google has built into the browser itself.
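Of those, the built-in Gemini Nano path can be sketched as follows. Chrome’s Prompt API is experimental and its surface has changed between releases, so treat the `LanguageModel` global, `create()` and `prompt()` as assumptions based on current Chrome documentation rather than a stable contract; feature-detecting first is the idiomatic way to use it:

```javascript
// Sketch of Chrome's built-in Prompt API (experimental; names may change).
// Feature-detect: the LanguageModel global only exists in Chrome builds
// that ship the built-in model, so everywhere else we return null.
async function promptGeminiNano(text) {
  if (typeof LanguageModel === 'undefined') {
    return null; // not available (other browsers, Node, older Chrome)
  }
  const session = await LanguageModel.create();
  const answer = await session.prompt(text);
  session.destroy();
  return answer;
}

promptGeminiNano('Summarise this paragraph: ...').then((answer) => {
  console.log(answer ?? 'Built-in model not available; fall back to Transformers.js.');
});
```

The feature-detection branch is also where a real app would fall back to a model loaded via Transformers.js, so the same code path works in any modern browser.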
All three work. Each with its own quirks, but a modern browser is already capable of running LLMs. The line between “web app” and “app with local AI” is fading faster than it might seem.
For Ocultia’s specific task, the small entity-recognition model is still the fastest and most efficient option. But having confirmed that integrating an LLM in the browser is feasible and reasonably affordable opens the door to applying it in other projects. You’ll probably see it soon.
You can try it at ocultia.com. And if your use case has more layers, or you need an enterprise version to run an Ocultia instance inside your own intranet, tell us about it and we’ll explore together how to make it happen.