Supported Fine-Tuning Models
Last updated: September 19, 2025
The list of models supported for fine-tuning changes over time as we add support for newer models and retire older, obsolete ones.
We keep an up-to-date list of all models that currently support both LoRA fine-tuning and full fine-tuning in their own separately maintained charts here:
Please note that this is different from the list of models that specifically support serverless LoRA inference, which lets your team perform LoRA fine-tuning on the many available models through Together AI and then run inference right away. The list of models supporting this is maintained separately here:
Important: When uploading LoRA adapters for serverless inference, you must use base models from the serverless LoRA list, not the fine-tuning models list. Using an incompatible base model (such as Turbo variants) will result in a "No lora_model specified" error during upload. For example, use meta-llama/Meta-Llama-3.1-8B-Instruct-Reference instead of meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo for serverless LoRA adapters.
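As an illustration of the rule above, a small helper like the following can catch the Turbo-vs-Reference mistake before an upload is attempted. This is a hypothetical sketch, not part of the Together SDK; the function name and the assumption that every valid serverless base simply swaps the `-Turbo` suffix for `-Reference` are ours, so always check the actual serverless LoRA list for the model you intend to use.

```python
# Hypothetical helper (NOT part of the Together SDK): rewrite a Turbo
# model name to the -Reference variant expected for serverless LoRA
# adapter uploads. Assumes the simple suffix swap shown in the docs
# example; verify the result against the serverless LoRA model list.
def serverless_lora_base(model: str) -> str:
    """Return a base-model name suitable for a serverless LoRA upload.

    Turbo variants are not valid bases for serverless LoRA adapters
    (they trigger a "No lora_model specified" error during upload),
    so map them to the corresponding -Reference name.
    """
    if model.endswith("-Turbo"):
        return model[: -len("-Turbo")] + "-Reference"
    return model

print(serverless_lora_base("meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"))
# meta-llama/Meta-Llama-3.1-8B-Instruct-Reference
```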