No. Customer validation data is not used to train third-party large language models, and we do not operate a shared model that learns from one customer to serve another. Each model interaction is scoped: we retrieve the minimum relevant data from your environment, send a constructed prompt, receive a response, and complete the request. We are not fine-tuning a persistent model on your proprietary content.
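The request lifecycle described above can be sketched in a few lines. This is an illustrative sketch only, not Valkit.ai's actual API: every function and class name here is hypothetical, and the model call is a placeholder. The point it demonstrates is that each interaction is scoped and stateless: minimal context in, one response out, nothing retained between calls.

```python
# Illustrative sketch of a scoped, stateless model interaction.
# All names here are hypothetical, not Valkit.ai's actual API.

from dataclasses import dataclass

@dataclass
class ScopedRequest:
    """One model interaction: minimal relevant context in, one response out."""
    query: str
    context: list[str]  # only the minimum relevant records for this request

def retrieve_minimum_context(records: dict[str, str], keys: list[str]) -> list[str]:
    # Pull only the records this request actually needs from the environment.
    return [records[k] for k in keys if k in records]

def build_prompt(req: ScopedRequest) -> str:
    # Construct the prompt from the scoped context; nothing else is sent.
    context_block = "\n".join(req.context)
    return f"Context:\n{context_block}\n\nQuestion: {req.query}"

def complete_request(prompt: str) -> str:
    # Placeholder for the model call; no state is retained between calls,
    # and nothing here feeds back into model training.
    return f"[model response to {len(prompt)} chars of prompt]"

# Each call starts from scratch: no memory, no accumulated thread history.
records = {"REQ-1": "The system shall log all approvals.",
           "REQ-2": "Access requires MFA."}
req = ScopedRequest(query="Which requirement covers approvals?",
                    context=retrieve_minimum_context(records, ["REQ-1"]))
response = complete_request(build_prompt(req))
```

Because the request object is rebuilt for every interaction, there is no persistent state that could accumulate one customer's data and surface it for another.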
What that looks like in practice
The system of record remains your governed objects and workflow state. Where an LLM is enabled, it acts as a stateless assistant over that data—not as a second brain that accumulates memory of your organization. Outputs that matter for compliance still land as traceable records tied to requirements, risk, and approvals, not as an opaque thread history you cannot explain to an auditor.
All AI features optional
You can use Valkit.ai with all AI features disabled. If you enable AI, Valkit.ai is LLM-agnostic: you can bring your own large language model (LLM) or use ours. Turning those features on or off does not replace the core product; it changes how much automated drafting and gap-filling you get inside the same workflows. If the model is one you trained or fine-tuned yourself, how it was trained and what it was trained on remain under your control and within your review story, not ours.
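One way to picture this opt-in, LLM-agnostic design is below. This is a hypothetical sketch, not Valkit.ai's configuration schema: the names `AIConfig` and `draft_suggestion` are invented for illustration. It shows the shape of the design: AI is off by default, the core workflow behaves the same either way, and enabling AI just means plugging in whichever model you choose.

```python
# Hypothetical sketch (not Valkit.ai's actual schema) of an opt-in,
# LLM-agnostic design: AI off by default, model supplied by the customer.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AIConfig:
    enabled: bool = False                         # all AI features off by default
    model: Optional[Callable[[str], str]] = None  # bring your own model, or use a provided one

def draft_suggestion(cfg: AIConfig, prompt: str) -> Optional[str]:
    # The core workflow runs the same either way; AI only adds drafting on top.
    if not cfg.enabled or cfg.model is None:
        return None  # workflow proceeds with no automated drafting
    return cfg.model(prompt)

# A customer-supplied model is just a callable the platform invokes.
own_model = lambda p: f"[draft for: {p}]"
```

With the default config, `draft_suggestion(AIConfig(), ...)` returns `None` and the workflow continues unassisted; with `AIConfig(enabled=True, model=own_model)`, the same call returns a draft from the customer's own model.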
For how this fits into our broader design choices—workflow coordination, automation, and keeping intelligence in your data—see Letter from our CTO: workflows, automation, and AI.