I continued a Fine-Tuning job and now Serverless is gone

Last updated: August 29, 2025


Continuing a fine-tuning job with LoRA hyper-parameters identical to the parent job's (rank, alpha, and the selected modules) has no effect on model availability; the resulting adapter remains compatible with LoRA Serverless.

If you change any of those three settings or switch the continuation to Full fine-tuning, LoRA Serverless will be disabled. Likewise, continuing an existing Full fine-tuning job keeps LoRA Serverless disabled.
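These rules reduce to a simple check. The sketch below is a minimal illustration in plain Python, not the provider's SDK; the class and field names (`JobConfig`, `lora_r`, `lora_alpha`, `lora_modules`) are hypothetical and chosen only to mirror the three hyper-parameters named above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class JobConfig:
    """Illustrative continuation settings; field names are hypothetical."""
    method: str                       # "lora" or "full"
    lora_r: int | None = None         # LoRA rank
    lora_alpha: int | None = None     # LoRA alpha
    lora_modules: Tuple[str, ...] = ()  # selected modules

def keeps_lora_serverless(parent: JobConfig, child: JobConfig) -> bool:
    """True iff a continuation preserves LoRA Serverless availability.

    Serverless survives only when both the parent and the continuation
    are LoRA jobs and the three adapter hyper-parameters (rank, alpha,
    selected modules) are unchanged.
    """
    if parent.method != "lora" or child.method != "lora":
        return False  # Full fine-tuning, in either job, disables it
    return (
        parent.lora_r == child.lora_r
        and parent.lora_alpha == child.lora_alpha
        and set(parent.lora_modules) == set(child.lora_modules)
    )

# Example: changing only the rank disables LoRA Serverless.
parent = JobConfig("lora", lora_r=8, lora_alpha=16, lora_modules=("q_proj", "v_proj"))
same   = JobConfig("lora", lora_r=8, lora_alpha=16, lora_modules=("q_proj", "v_proj"))
bigger = JobConfig("lora", lora_r=32, lora_alpha=16, lora_modules=("q_proj", "v_proj"))
print(keeps_lora_serverless(parent, same))    # True
print(keeps_lora_serverless(parent, bigger))  # False
```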

The system disables the Serverless capability in these cases because the Fine-Tuning API merges the parent adapter into the base model whenever it detects new adapter hyper-parameters. Training then continues from the merged full-weight checkpoint, which preserves training quality, but the result is no longer a standalone adapter that LoRA Serverless can serve.
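For intuition, merging an adapter folds its low-rank update into the base weights. The NumPy sketch below uses the standard LoRA formulation, W' = W + (alpha / r) * B A, from the original LoRA paper; it is a generic illustration of the merge, not the provider's implementation, and all shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16  # illustrative dimensions

W = rng.normal(size=(d_out, d_in))  # frozen base weight matrix
A = rng.normal(size=(r, d_in))      # LoRA down-projection
B = rng.normal(size=(d_out, r))     # LoRA up-projection

# Standard LoRA merge: fold the scaled low-rank update into the base.
W_merged = W + (alpha / r) * (B @ A)

# After the merge there is no separate adapter left to hot-swap: the
# update now lives inside the full-weight matrix, so the merged
# checkpoint behaves like any full fine-tune for serving purposes.
x = rng.normal(size=(d_in,))
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```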

For complete details on continuing fine-tuning jobs, see: