Organic

Frequently Asked Questions

When will Synthetic get model X?

Before you ask this, make sure it:

  1. Has weights actually available on HuggingFace (“announced as open weight” and “released” are not always the same thing: sometimes a model is planned to be open weight but has not been published yet).
  2. Has a license that allows Synthetic to actually make money hosting the model (sometimes model weights are published “openly,” but only under modified OSS licenses that, for instance, require royalties once profits exceed a certain threshold).
  3. Has an NVFP4 quantization available so that Synthetic can run it on their GPUs at optimal speed (there are some exceptions to this — if a model is sufficiently desired, they may make their own quant).
  4. Has solid support for that model or its general architecture in sglang, the inference engine Synthetic uses to actually run the models.
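
The four criteria above can be expressed as a small checklist helper. This is purely illustrative: the names (`ModelCandidate`, `blockers`) are hypothetical, not any real Synthetic or sglang API.

```python
# Hypothetical checklist for "when will Synthetic get model X?"
# All names here are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class ModelCandidate:
    weights_on_huggingface: bool   # actually released, not just announced
    license_allows_hosting: bool   # no royalty / revenue-cap strings attached
    nvfp4_quant_available: bool    # or Synthetic is willing to make one
    supported_in_sglang: bool      # model or architecture runs reliably

def blockers(m: ModelCandidate) -> list[str]:
    """Return the criteria this model still fails."""
    checks = {
        "weights not yet on HuggingFace": m.weights_on_huggingface,
        "license does not permit paid hosting": m.license_allows_hosting,
        "no NVFP4 quantization yet": m.nvfp4_quant_available,
        "no solid sglang support": m.supported_in_sglang,
    }
    return [reason for reason, ok in checks.items() if not ok]
```

For example, `blockers(ModelCandidate(True, True, False, True))` reports only the missing NVFP4 quantization; an empty list means all four boxes are ticked.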

Factors that can delay Synthetic getting a model:

  • If it is unusually large, it may take time for them to acquire or free up GPU space to host it.
  • If it has a novel or unusual architecture (such as DeepSeek Sparse Attention for GLM 5), it will take time for inference engines like sglang to get reliable support for the model.
  • If the model has not yet been quantized to NVFP4, Synthetic will have to wait for NVIDIA to do that, or make one themselves, both of which can take some time.

Additionally, it is worth keeping in mind that many models from labs known for creating open-weight models may either be closed source—such as Qwen 3.6-Plus—or available only through the lab’s API for user testing and feedback (and to give the lab a profitable head start) but not yet released as open weights. This was the case with GLM 5.1 for a few weeks and remains true as of April 9th, 2026, for MiniMax M2.7.

Why is model X from Synthetic not in $preferred_harness's list?

Many of these lists are updated by hand, so you might be the first $preferred_harness user who’s noticed it’s missing! You can:

  • Wait until someone else opens a PR to $preferred_harness’s list
  • Find where $preferred_harness sources its data from and open that PR yourself
  • Use some kind of provider-specific plugin for $preferred_harness with a list that updates more frequently
  • Maintain your own provider/model list for $preferred_harness
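
If you go the last route, one minimal approach is to overlay a locally maintained list on top of the harness’s built-in one. The sketch below assumes a hypothetical JSON format keyed by model id — real harnesses each have their own schema, so treat it only as an illustration.

```python
# Minimal sketch: overlay a local model list on an upstream one.
# The JSON schema (a list of {"id": ...} objects) is hypothetical.
import json
from pathlib import Path

def merge_model_lists(upstream: list[dict], local_path: Path) -> list[dict]:
    """Merge a locally maintained model list over the harness's
    built-in one, keyed by model id. Local entries win."""
    merged = {m["id"]: m for m in upstream}
    if local_path.exists():
        for m in json.loads(local_path.read_text()):
            merged[m["id"]] = m
    return list(merged.values())
```

Keying by id means a local entry can both add a model the upstream list lacks and override stale metadata for one it already has.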