I can't run inference with my model. What is going on?

If you want to run inference:

  1. Choose a model from the available models list. (The first sketch after these steps shows how to fetch that list through the API.)

  2. Serverless Endpoints models support inference directly, with no need to start a virtual machine (VM). A request sketch follows these steps.

  3. If you're running inference on a model you fine-tuned, start its VM instance first, either:

    • Directly from the model's page on api.together.ai, or

    • Using our start and stop instance API endpoints (see the last sketch after these steps).

  4. If your desired model isn't listed, feel free to request a model.
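
For step 1, here is a minimal sketch of fetching the available models list programmatically. It assumes Together's OpenAI-compatible model-listing endpoint at https://api.together.xyz/v1/models and that the response is a JSON array of model objects with an `id` field; check the API reference if the shape differs.

```python
import os
import requests

resp = requests.get(
    "https://api.together.xyz/v1/models",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    timeout=60,
)
resp.raise_for_status()

# Assumed response shape: a JSON array of model objects with an "id" field.
for model in resp.json():
    print(model["id"])
```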
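
For step 2, the sketch below sends a chat request to a Serverless Endpoints model through Together's OpenAI-compatible chat completions endpoint. The model name is only an example; substitute any serverless model from the available models list.

```python
import os
import requests

response = requests.post(
    "https://api.together.xyz/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",  # example serverless model
        "messages": [{"role": "user", "content": "Hello, world!"}],
    },
    timeout=60,
)
response.raise_for_status()

# Print the assistant's reply from the first choice.
print(response.json()["choices"][0]["message"]["content"])
```

No VM needs to be started beforehand; the request runs against the shared serverless deployment.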
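
For step 3, this is a hedged sketch of starting and stopping a fine-tuned model's VM instance from code. The `/instances/start` and `/instances/stop` paths, the `model` query parameter, and the model name are all assumptions here; confirm the exact routes in the API reference before relying on them.

```python
import os
import requests

API_KEY = os.environ["TOGETHER_API_KEY"]
MODEL = "your-account/your-fine-tuned-model"  # hypothetical model name

def set_instance(action: str) -> dict:
    """Start or stop the model's VM instance (action: 'start' or 'stop').

    The endpoint path and query parameter below are assumptions, not
    confirmed routes -- check the API reference.
    """
    resp = requests.post(
        f"https://api.together.xyz/instances/{action}",
        params={"model": MODEL},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

set_instance("start")   # spin up the instance before running inference
# ... run inference against the fine-tuned model ...
set_instance("stop")    # shut it down when finished to avoid idle charges
```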
