Kiln for Enterprise

AI infrastructure that passes your security review on day one

Run Kiln fully on-prem with source-available builds, or opt in to enterprise services with SSO, VPC deployment, and the telemetry controls your security team requires.

Free · MIT library · macOS · Windows · Linux
[Diagram: the Kiln app, JSON files, and API keys stay on your machine; API calls go directly to your AI provider (Bedrock, Azure OpenAI, Vertex AI). No Kiln server in the path.]
Solution for

Enterprise teams deploying AI at scale

ENGINEERING DIRECTOR
What hurts today
  1. Adopting a new cloud AI service triggers months of procurement, security assessment, and legal review.
  2. Regulated data cannot leave your environment, but most AI tools are cloud-only by design.
  3. Proprietary data formats and single-provider dependencies create vendor lock-in risk.
  4. When governance asks 'why did the model produce this output?', there is no traceable history — just scattered notebooks.
  5. Engineering, QA, compliance, and business teams all need access to AI data, but tools are single-user or require a shared cloud database.

How Kiln meets enterprise requirements

Fully on-prem possible

Kiln's default is fully local: prompts, outputs, and fine-tuning jobs stay as plain JSON on your filesystem, and API calls go directly from your machine to the provider you choose — keys never pass through Kiln. Optional cloud-assisted features (AI Assistant, Auto-Optimize, Auto-Evals) are opt-in, not on by default. Every claim is verifiable: MIT-licensed library, source-available app, public CI builds, checksums on every release.
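Because datasets live as plain JSON on disk, any standard tooling can read them. A minimal sketch with the Python standard library; the record shape below is illustrative only, not the official Kiln schema:

```python
import json
from pathlib import Path

# Hypothetical record shape -- illustrative, not the official Kiln schema.
record = {
    "input": "Summarize the attached compliance policy.",
    "output": "The policy requires ...",
    "rating": 4,
}

# Write and re-read the record with nothing but the standard library,
# showing the data stays as inspectable plain JSON on your filesystem.
path = Path("sample_run.json")
path.write_text(json.dumps(record, indent=2))

loaded = json.loads(path.read_text())
print(loaded["rating"])  # → 4
```

The same property is what makes the data auditable and diffable in Git: every file is human-readable text.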


Audit trail on your Git host

Git Sync turns GitHub Enterprise, GitLab, Bitbucket, or Azure DevOps into a full audit trail. Every change is a commit, traceable and revertible. Non-technical team members connect via OAuth in a few clicks — no new vendor to approve.

COMMIT LOG
a3f8c2d update prompt template SC 2m ago
b71e0a5 add eval: factual accuracy MR 15m ago
c9d4f18 rate 24 samples JL 1h ago
d2a6b93 fine-tune: iteration 3 SC 3h ago
e5c1d07 import compliance dataset AK 1d ago

Use approved models and providers

15+ providers including AWS Bedrock, Azure OpenAI, Vertex AI, OpenAI, Anthropic, and Fireworks, plus internal / self-hosted services via any OpenAI-compatible endpoint. Run open-source models locally via Ollama for air-gapped environments. Switch providers without changing a prompt or dataset.

AWS Bedrock
Azure OpenAI
Vertex AI
OpenAI
Anthropic
Fireworks
Gemini
Ollama
Together
Internal Services
Custom API
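"OpenAI-compatible" means the service speaks the standard chat-completions wire format, so targeting an internal endpoint is just a matter of swapping the base URL. A standard-library sketch of the request such a client would send; the host, token, and model name are placeholders, not real services:

```python
import json
import urllib.request

# Placeholder values -- substitute your internal service and approved model.
BASE_URL = "https://llm.internal.example.com/v1"
API_KEY = "internal-token"

payload = {
    "model": "internal-llama-3-70b",
    "messages": [{"role": "user", "content": "Classify this ticket."}],
}

# Build the standard chat-completions request against the internal base URL.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; omitted here because the host
# above is a placeholder.
```

Because the wire format is the same everywhere, switching providers is a configuration change, not a code change.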

Built for enterprise requirements

Fully on-prem possible

Run fully local with no cloud dependency, or layer in opt-in services.

Verifiable builds

MIT-licensed library, source-available app, public CI builds, checksums on every release.

Git audit trail

Every change is a Git commit on your existing host.

15+ AI providers

Bedrock, Azure OpenAI, Vertex AI, OpenAI, Anthropic, Fireworks, Ollama, internal services, and BYO endpoints.

Safety eval templates

Toxicity, bias, maliciousness, factual correctness, jailbreak testing.

Open JSON data model

No database, no proprietary format. Data portability guaranteed.

Self-hostable REST API

FastAPI server deploys within your own infrastructure.

Enterprise contract options

SSO, VPC peering, and telemetry controls available — talk to us about your requirements.

How Kiln compares for enterprise

Kiln eliminates the cloud dependency that makes most AI tools a procurement headache.

| Capability | Kiln | Cloud AI Platforms | DIY Notebooks |
| --- | --- | --- | --- |
| Data residency | Local filesystem | Vendor servers | Manual |
| Security review | Local default; opt-in services reviewable | Full review required | Per-tool |
| Audit trail | Git history | Varies | — |
| Vendor lock-in | None (open JSON) | Proprietary format | Framework-dependent |
| Multi-team collaboration | Git Sync + UI | Cloud accounts | — |
| Model flexibility | 15+ providers + BYO | Single vendor | Manual integration |
| Safety evals | Built-in templates | Varies | Manual |
| Offline / air-gapped | Yes | No | Partial |

AI tooling before and after Kiln

Without Kiln
  • Spend months in security review for a new cloud AI service before any team member can use it.
  • Scatter AI datasets across notebooks, spreadsheets, and proprietary databases with no traceable history.
  • Lock into one provider's models and data format, then run a migration project when requirements change.
With Kiln
  • Run Kiln fully on-prem by default, with optional opt-in services available under an enterprise contract (SSO, VPC, telemetry controls).
  • Every prompt, output, and rating is version-controlled in Git on your existing host, with full audit trail.
  • Switch between 15+ providers without changing a prompt or dataset—open JSON means your data is always portable.

Where enterprise teams use Kiln

Three deployment shapes, all on infrastructure you control.

Scenario 1
A platform team at a regulated financial firm ships an internal AI assistant for compliance review.
Problem

Cloud AI tools are blocked by data residency requirements. Manual prompt testing offers no audit trail for regulators.

With Kiln

Run Kiln locally with Azure OpenAI. Built-in safety evals test for toxicity and factual correctness. Git Sync on GitHub Enterprise provides a full history of every prompt and output.

Outcome

Auditable AI pipeline on approved infrastructure with no new vendor to onboard.

Scenario 2
An ML engineering team at a defense contractor ships a document classification model for sensitive data.
Problem

No data can leave the network. Open-source model tooling requires stitching together multiple libraries with no shared dataset format.

With Kiln

Kiln runs fully offline with Ollama. Fine-tuned weights export to GGUF for on-premises deployment. The open JSON data model integrates with existing tooling.

Outcome

End-to-end AI pipeline in an air-gapped environment.

Scenario 3
A cross-functional AI team at a healthcare company ships a patient FAQ agent used across multiple product lines.
Problem

Clinicians, engineers, and PMs need to collaborate on AI data, but the only shared tooling is a cloud platform that failed security review.

With Kiln

Kiln's UI lets clinicians rate outputs and write specs without Git knowledge. Automatic Git Sync on the company's self-hosted GitLab provides version control and access control. Engineers iterate on models using the same project.

Outcome

Multi-role collaboration on sensitive data with no cloud dependency.

Frequently asked

Does Kiln send any data to Kiln's servers?

By default, datasets do not leave your machine: prompts, outputs, ratings, and fine-tuning jobs stay as JSON on your filesystem, and provider API calls go directly from your machine to the provider you choose. The desktop app does send anonymous UI analytics (page visits, button clicks) via PostHog — never dataset content, model I/O, project names, or API keys. The Python library sends nothing. Optional Kiln-hosted features (AI Assistant, Auto-Optimize, Auto-Evals) are opt-in and not on by default. For enterprise contracts we can disable telemetry, deploy services into your VPC, and add SSO — talk to us about your requirements.

Can we verify Kiln's privacy claims?

Yes. The library source is MIT-licensed on GitHub. App binaries are built on public CI (GitHub Actions) with verifiable checksums. Your security team can inspect the code and reproduce the build.
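Verifying a release checksum needs nothing beyond the standard library. A sketch, assuming you have downloaded a release artifact and its published SHA-256 digest; the filename and digest below are placeholders:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large binaries aren't read into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder artifact name; compare against the digest published with the release.
artifact = Path("Kiln.dmg")
published = "..."  # paste the checksum from the release page
if artifact.exists():
    print(sha256_of(artifact) == published)
```

On macOS or Linux, `shasum -a 256 <file>` does the same thing from a shell.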

Which enterprise AI providers does Kiln support?

15+ providers including AWS Bedrock, Azure OpenAI, Vertex AI, OpenAI, Anthropic, Fireworks, and Gemini. Ollama for fully offline inference. Internal / self-hosted services via any OpenAI-compatible endpoint. 2,000+ test cases validate model capabilities across providers, updated weekly.

How does collaboration work without a cloud service?

Git Sync uses your existing host — GitHub Enterprise, GitLab, Bitbucket, or Azure DevOps. Non-technical users connect through the UI in a few clicks via OAuth. In most organizations, a Git host is already approved.

Can Kiln run in an air-gapped environment?

Yes. Kiln works offline with Ollama for local inference. Fine-tuned weights export to GGUF for on-premises deployment.

Can we get SSO, VPC deployment, or disabled telemetry for an enterprise contract?

Yes — talk to us. We can run Kiln's optional services inside your VPC, add SSO, disable PostHog telemetry, and accommodate other security and procurement requirements via an enterprise contract.

Enterprise AI without the enterprise cloud risk.

Run Kiln on infrastructure you already control. For SSO, VPC deployment, or custom telemetry policies, talk to us about an enterprise contract.