Privacy & Local Mode
Your data stays yours — really
Kiln can work fully offline. Connect to a local inference provider like Ollama, vLLM, or Llama.cpp, and your datasets never leave your machine. No trust required — verify the source and build checksums.
Anyone who takes data privacy seriously
- 01 Most AI tools require sending your data to a third-party cloud — you have no control over how it is stored or used.
- 02 AI providers may train on your inputs unless you negotiate an enterprise contract with custom terms.
- 03 Local-mode tools exist but require stitching together multiple CLI utilities with no shared workflow.
- 04 Privacy policies change without notice, and opting out of data collection is buried or impossible.
- 05 Regulated industries (healthcare, finance, legal, defense) face compliance barriers that block cloud AI adoption entirely.
- 06 Even 'private' cloud offerings still require trusting a vendor's infrastructure and access controls.
How Kiln protects your data
Kiln runs locally
Kiln stores prompts, outputs, ratings, and fine-tuning jobs as plain JSON files on your filesystem. API calls go directly from your machine to the provider you choose. Optional cloud-assisted features (AI Assistant, Auto-Optimize, Auto-Evals) are opt-in, not on by default.
Run fully offline with local inference
Connect Kiln to Ollama, vLLM, Llama.cpp, or any OpenAI-compatible local server running on your own hardware. Your data never touches the internet. Build, evaluate, and fine-tune AI — completely air-gapped. Kiln works offline, really.
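Because local servers like Ollama, vLLM, and llama.cpp expose the same OpenAI-compatible chat-completions schema, one request shape works against any of them. A minimal sketch, assuming Ollama's default port 11434 and a placeholder model name — the actual send is commented out so nothing leaves your machine:

```python
import json

# Assumed local endpoint: Ollama's OpenAI-compatible API defaults to this port;
# vLLM and llama.cpp's server expose the same /v1/chat/completions route.
BASE_URL = "http://localhost:11434/v1"

# The body follows the OpenAI chat-completions schema, which is why any
# OpenAI-compatible server works interchangeably.
payload = {
    "model": "llama3.1",  # placeholder: whichever model you've pulled locally
    "messages": [{"role": "user", "content": "Summarize this record."}],
}

# To actually send it (requires a running local server):
#   import urllib.request
#   req = urllib.request.Request(
#       f"{BASE_URL}/chat/completions",
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())

print(json.dumps(payload, indent=2))
```

Swapping providers means changing only `BASE_URL` and the model name; the request and your data stay on localhost.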
Open source and verifiable
The Kiln Python library is MIT-licensed. The desktop app is source-available with public CI builds and checksums on every release. You don't need to trust our claims — read the code, verify it yourself.
Privacy by design
Not by policy.
Kiln can run fully locally. No account, no cloud dependency, no data leaving your machine.
Run models on your own hardware via Ollama, vLLM, Llama.cpp, or any OpenAI-compatible endpoint.
Air-gapped environments, restricted networks, or just your laptop on a plane. Kiln works everywhere.
When you do use a cloud provider, API calls go directly from your machine.
Kiln can't even see your data unless you opt in to cloud-assisted features (AI Assistant, Auto-Optimize, Auto-Evals).
MIT-licensed library, source-available app, public CI builds, checksums on every release.
No database, no proprietary format. Plain JSON files you can inspect, copy, or delete at will.
The desktop app collects UI analytics. Never dataset content, API keys, or project names.
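"Plain JSON you can inspect, copy, or delete" means nothing but the standard library is needed to audit your data. A sketch with a hypothetical record and filename — Kiln's real files have their own schema; the point is only the workflow:

```python
import json
from pathlib import Path

# Hypothetical record and path — Kiln's actual schema differs; this just shows
# that local JSON files are auditable with stdlib tools alone.
record = {"input": "example prompt", "output": "example response", "rating": 5}
path = Path("sample_run.json")
path.write_text(json.dumps(record, indent=2))

# Inspect: any text editor, `jq`, or a few lines of Python will do.
loaded = json.loads(path.read_text())
print(loaded["rating"])  # prints 5

# Delete at will — it's just a file.
path.unlink()
```

No export step, no database dump: `cp` is your backup tool and `rm` is your right to erasure.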
AI privacy before and after Kiln
Before Kiln
- Send your prompts, data, and model outputs to a cloud service you cannot audit, hoping the privacy policy holds.
- Negotiate enterprise contracts to opt out of training on your data — or accept the default and hope for the best.
- Stitch together local inference, eval harnesses, and data tools from separate projects with no shared workflow.
- Rely on a vendor's word that your data is private, with no way to verify the claim.
With Kiln
- Kiln stores everything as local JSON files on your filesystem. API calls go directly to the provider you chose.
- Run fully offline with Ollama, vLLM, or Llama.cpp. Your data never touches the internet. No opt-out needed when there is nothing to opt out of.
- One tool for prompts, evals, fine-tuning, RAG, and deployment — all local, all offline-capable.
- MIT-licensed library, source-available app, public CI builds. Read the code and verify every claim yourself.
About telemetry
We believe in transparency.
The Kiln desktop app collects analytics so we can understand how people use it. This includes which pages are visited and which actions are taken in the app's UI. Analytics are collected with PostHog.
- The Kiln Python library does not collect analytics. Only the desktop app's user interface collects analytics.
- Analytics never include your dataset information — no inputs to models, no outputs from models, no project names.
- Analytics never include your API keys.
- Analytics may be associated with your email address if you register it.
We share this because privacy-focused users deserve the full picture, not just the parts that make us look good.
Frequently asked
Does Kiln send any data to Kiln's servers?
By default, your datasets do not leave your machine: prompts, outputs, ratings, and fine-tuning jobs stay as JSON on your filesystem, and provider API calls go directly from your machine to the provider you chose. The desktop app does send anonymous UI analytics (page visits, button clicks) via PostHog — never dataset content, model I/O, project names, or API keys. The Python library sends nothing. Optional Kiln-hosted features (AI Assistant, Auto-Optimize, Auto-Evals) are opt-in and not on by default.
Can I run Kiln completely offline?
Yes. Connect Kiln to a local inference provider like Ollama, vLLM, or Llama.cpp running on your own hardware. Kiln works fully offline — prompts, outputs, and all data stay on your machine. No internet connection required.
Which local inference providers does Kiln support?
Kiln supports Ollama, vLLM, Llama.cpp, and any service that exposes an OpenAI-compatible API endpoint. Run any open-source model — Llama, Mistral, Gemma, Phi, Qwen, and more — on your own hardware.
Can I verify Kiln's privacy claims?
Yes. The Python library source is MIT-licensed on GitHub. App binaries are built on public CI (GitHub Actions) with verifiable checksums. Your security team can inspect the code and reproduce the build.
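Checksum verification is a one-command workflow with standard tools. A sketch using a stand-in artifact name — the real binary and checksums file come from the GitHub release page:

```shell
# Stand-in artifact — in practice you download the release binary and the
# checksums file published alongside it on the GitHub release page.
printf 'demo release artifact' > Kiln.AppImage
sha256sum Kiln.AppImage > checksums.txt

# Verify: prints "Kiln.AppImage: OK" when the local digest matches the
# published one, and fails loudly on any mismatch.
sha256sum -c checksums.txt
```

On macOS, substitute `shasum -a 256` for `sha256sum`; the check-mode workflow is the same.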
What about regulated industries (healthcare, finance, defense)?
Kiln's fully-local default is designed for exactly this. No data leaves your environment, and the open data model (plain JSON files) integrates with existing compliance workflows. For air-gapped environments, run local inference with Ollama. For enterprise requirements, we offer contracts with disabled telemetry, VPC deployment, and SSO.
Private AI that actually works.
Download Kiln, connect to a local model, and start building — without sending a single byte to the cloud.