Glossary

Fine-Tuning

Fine-tuning is the process of further training a pre-trained AI model on your own dataset to adapt its behavior, knowledge, or style for a specific domain or task. It changes the model's internal weights, making it permanently better at your particular use case.

How It Works

A base language model like GPT-4 or Claude knows a lot about the world but nothing about your specific business. Fine-tuning teaches the model your domain by training it on examples of the inputs and outputs you care about.

The process works like this: you prepare a dataset of example inputs and desired outputs, then run a training process that adjusts the model's weights. After fine-tuning, the model responds in ways that align with your examples. It might learn your company's terminology, follow a specific format, or handle domain-specific tasks more accurately.
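The data-preparation step above can be sketched in a few lines. This is an illustrative example only: the ticket-summary records and the `to_jsonl` helper are hypothetical, but JSONL (one JSON object per line) is the format most fine-tuning pipelines accept for prompt/completion pairs.

```python
import json

# Hypothetical training examples: input/output pairs from your domain.
examples = [
    {"prompt": "Summarize this support ticket: printer offline",
     "completion": "Category: Hardware. Action: restart print spooler."},
    {"prompt": "Summarize this support ticket: cannot log in",
     "completion": "Category: Access. Action: reset credentials."},
]

def to_jsonl(records):
    """Serialize examples as JSONL, one record per line."""
    return "\n".join(json.dumps(r) for r in records)

training_file = to_jsonl(examples)
```

A real run would upload a file like this to your provider's fine-tuning endpoint; the point is that the training signal is nothing more than paired examples of the behavior you want.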

Fine-tuning is most useful when you need the model to adopt a specific style, consistently follow a complex format, or develop deep expertise in a narrow domain. Medical report generation, legal document analysis, and code generation for a specific framework are common use cases.

The main alternative to fine-tuning is retrieval-augmented generation (RAG). RAG gives the model relevant context at runtime without changing the model itself. For most enterprise use cases, RAG is the better starting point: it is cheaper, faster to set up, and easier to update when your data changes.
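To make the contrast concrete, here is a minimal RAG sketch. The retrieval here is simple word overlap purely for illustration; production systems use embedding-based search, but the shape is the same: find relevant text, prepend it to the prompt, and leave the model's weights untouched.

```python
import re

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    q = tokens(query)
    return max(documents, key=lambda d: len(q & tokens(d)))

def build_prompt(query, documents):
    """Prepend the retrieved document as context at runtime."""
    return f"Context: {retrieve(query, documents)}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
prompt = build_prompt("When are support hours?", docs)
```

Updating the system is just updating `docs`; no retraining is involved, which is why RAG is usually the cheaper starting point.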

Fine-tuning makes sense when RAG is not enough. If the model needs to behave differently (not just know different things), fine-tuning is the right tool. Many production systems combine both: a fine-tuned model for style and format, with RAG for up-to-date knowledge.

Related Solutions

Generative AI Applications
AI Knowledge Base

Need help implementing this?

We build production AI systems for enterprises. Tell us what you are working on and we will scope it in 30 minutes.