Together Fine-Tuning Support

Last updated: July 3, 2025


Together AI facilitates every step of the fine-tuning process, from data preparation to model deployment. Together supports two types of fine-tuning:

  1. LoRA (Low-Rank Adaptation) fine-tuning: Fine-tunes only a small subset of weights compared to full fine-tuning. This is faster, requires fewer computational resources, and is recommended for most use cases. Our fine-tuning API defaults to LoRA.

  2. Full fine-tuning: Updates all weights in the model, which requires more computational resources but may provide better results for certain tasks.
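As a rough sketch, choosing between the two modes typically comes down to a single flag when creating a job. The example below assumes the Together Python SDK (`pip install together`) and its `fine_tuning.create` call; the file ID and model name are placeholders, so check the quickstart linked below for the current parameter list.

```python
import os

def build_finetune_args(training_file_id: str, model: str, use_lora: bool = True) -> dict:
    """Assemble keyword arguments for a fine-tuning job.

    With use_lora=True, only low-rank adapter weights are trained
    (the API default); use_lora=False requests a full fine-tune,
    which updates all model weights.
    """
    return {
        "training_file": training_file_id,
        "model": model,
        "lora": use_lora,
    }

# Only launch a real job when credentials are available.
if __name__ == "__main__" and os.environ.get("TOGETHER_API_KEY"):
    from together import Together

    client = Together()
    job = client.fine_tuning.create(
        **build_finetune_args(
            "file-abc123",  # placeholder: ID returned by a prior file upload
            "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",  # placeholder model
        )
    )
    print(job.id)
```

Keeping the argument assembly in a small helper makes it easy to switch between LoRA and full fine-tuning without duplicating the job-creation code.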

For more detailed steps, see our Fine-Tuning Guide:
- https://docs.together.ai/docs/fine-tuning-quickstart
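Before a job can run, training data must be prepared and uploaded. Below is a minimal sketch of that first step, assuming JSON Lines with a `"messages"` list per record (one common conversational format) and the SDK's `files.upload` call; the example record and filename are illustrative only.

```python
import json
import os

# Hypothetical training record: each JSONL line is one JSON object
# containing a "messages" list of role/content turns.
examples = [
    {"messages": [
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant", "content": "A parameter-efficient fine-tuning method."},
    ]},
]

# Write one JSON object per line, as the JSONL format requires.
with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

# Upload only when credentials are available (requires `pip install together`).
if os.environ.get("TOGETHER_API_KEY"):
    from together import Together

    client = Together()
    uploaded = client.files.upload(file="train.jsonl", purpose="fine-tune")
    print(uploaded.id)  # use this file ID when creating the fine-tuning job
```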