hf:zai-org/GLM-5.1
Price: $1.00/mtok in, $3.00/mtok out
GLM-5.1 is currently in beta. It may count more heavily against rate limits than stable models. Pricing may increase to better reflect compute costs (figures of ~$1.40/mtok in and ~$4.40/mtok out have been discussed).
GLM-5.1 is currently the smartest and most capable coding/agentic model hosted directly by Synthetic. It is also the most capable open-weight model, period, trading blows with SOTA proprietary models like Opus 4.6 and GPT-5.4.
Consequently, it is also the most expensive model Synthetic hosts. Its output price is actually lower than Kimi K2.5's, but its input price is roughly 2x Kimi K2.5's, and input tokens usually dominate output tokens in coding and agentic workloads, so input pricing dominates the overall cost comparison.
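To make the input-dominance point concrete, here is a quick cost comparison using the listed prices. The 100k-in / 5k-out workload shape is an illustrative assumption, not a measured ratio:

```python
# Hypothetical cost comparison for a coding-agent request where input
# tokens dominate. Prices are per million tokens, taken from the listings
# above; the 100k-in / 5k-out workload is an assumed example.

def request_cost(in_tokens, out_tokens, in_price, out_price):
    """Dollar cost of one request, with prices given per million tokens."""
    return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

glm_51 = request_cost(100_000, 5_000, in_price=1.00, out_price=3.00)
kimi_k25 = request_cost(100_000, 5_000, in_price=0.45, out_price=3.40)

print(f"GLM-5.1:   ${glm_51:.3f}")   # input term ($0.100) dominates
print(f"Kimi K2.5: ${kimi_k25:.3f}") # cheaper overall despite pricier output
```

Despite Kimi K2.5's higher output price, the request lands at roughly half the cost because the input term dominates.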
GLM-5.1 runs on 4 B200 GPUs per replica at NVFP4 quantization (same as GLM-5), and uses SGLang instead of vLLM for better cache hit rates. This is compared to 8 B200s for Kimi K2.5, making it theoretically both faster (less NVLink overhead) and cheaper (lower energy use and fewer expensive GPUs to rent). There is no official NVFP4 quant from Nvidia yet, but llmcompressor supports the GLM-5 architecture, which has allowed Synthetic to produce their own quant.
Despite the above numbers, Synthetic claims GLM-5.1 is actually *more* compute-intensive than Kimi K2.5. They say the price may need to increase to better reflect this, and have floated the idea that a price increase might resolve GLM-5.1's noted instability (see below). The price point they describe as the goal (the "market rates") is the per-token API price on OpenRouter, where GLM-5.1 is slightly more expensive than GLM-5.
However, users have noted that Synthetic's subscription-based pricing differs significantly from the per-token pricing seen on OpenRouter: OpenRouter providers aim for a per-token profit, so the higher OpenRouter price could simply reflect greater demand for GLM-5.1 given its increased capabilities, whereas under Synthetic's subscription model token pricing is only meant to be a bellwether for compute costs.
These price hikes have been floated as a solution to the recent instability of the GLM-5.1 replicas, though it is unclear whether an increase in pricing would actually resolve that instability.
See also: GLM-5 (predecessor, being retired), Kimi K2.5 (complementary frontier model with vision)
hf:zai-org/GLM-5
Price: $1.00/mtok in, $3.00/mtok out (beta pricing)
GLM-5 is in beta and is slated to be retired/proxied once GLM-5.1 exits beta. For new projects, prefer GLM-5.1.
GLM-5 was launched in beta on March 30, 2026, after SGLang made progress stabilizing GLM-5’s new architecture. It trades blows with proprietary models like Opus 4.6 and GPT-5.4 for coding and agentic tasks.
GLM-5 has fewer total parameters than Kimi K2.5, making it more efficient to serve on B200 hardware at NVFP4 quantization. However, vLLM and SGLang support for GLM-5’s architecture was initially poor — Synthetic had to wait for upstream fixes before stable hosting was possible.
The model runs on SGLang (which is faster for the GLM series) and uses NVFP4 quantization on B200 GPUs. Each replica requires 4 B200 GPUs (tp4).
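A launch command matching the setup described above might look like the following. This is a sketch only: flag spellings vary across SGLang versions, and the `--quantization modelopt_fp4` value for NVFP4 checkpoints is an assumption to verify against your SGLang release, not a documented Synthetic configuration:

```shell
# Hypothetical tp4 SGLang launch for a GLM-5 NVFP4 checkpoint on 4 B200s.
# Verify flag names and quantization values against your SGLang version.
python -m sglang.launch_server \
  --model-path zai-org/GLM-5 \
  --tp 4 \
  --quantization modelopt_fp4 \
  --port 30000
```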
Based on public statements from Synthetic staff, the plan is:
1. Take GLM-5 out of beta, stop self-hosting GLM-4.7
2. Put [[:models:glm-5.1|GLM-5.1]] in beta
3. Once GLM-5.1 is out of beta, retire/proxy GLM-5
Old models are typically proxied to Fireworks or TogetherAI, although proxy duration depends on load since proxies are expensive.
See also: GLM-5.1 (the replacement), GLM-4.7 (the predecessor)
hf:moonshotai/Kimi-K2.5
Price: $0.45/mtok in, $3.40/mtok out
hf:moonshotai/Kimi-K2.5 and hf:nvidia/Kimi-K2.5-NVFP4 are aliased internally and can be used interchangeably.
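Since the two strings are aliased, requests differ only in the `model` field. A minimal sketch, assuming an OpenAI-style chat-completions payload (the actual endpoint URL and auth are not shown and are not documented here):

```python
# Sketch: the two Kimi K2.5 model strings are interchangeable.
# Assumes an OpenAI-compatible chat-completions payload shape.
import json

def chat_payload(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

a = chat_payload("hf:moonshotai/Kimi-K2.5", "Summarize this diff.")
b = chat_payload("hf:nvidia/Kimi-K2.5-NVFP4", "Summarize this diff.")

# Only the model string differs; Synthetic aliases them internally,
# and either may be routed to either hardware backend.
print(json.dumps(a, indent=2))
```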
A powerful agentic model with above-average lateral thinking/debugging and great design skills.
Although Kimi was trained with Agent Swarms, reproducing MoonshotAI's results would require their proprietary swarms endpoint, which is not available on Synthetic. However, similar results may be achieved by running massively parallel sub-agents, or by using an SDK such as Swarms with agents assigned various roles.
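One way to approximate swarm-style parallelism is to fan subtasks out across concurrent Kimi K2.5 calls and merge the results afterwards. In this sketch, `call_model` is a hypothetical helper wrapping a chat-completions request; it is a stub here, and nothing below is Moonshot's proprietary swarms endpoint:

```python
# Sketch of massively parallel sub-agents: fan a task list out to many
# concurrent model calls. call_model is a placeholder for a real
# chat-completions request to Synthetic.
from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, prompt: str) -> str:
    # Stub: replace with an actual API call.
    return f"[{model}] {prompt}"

def swarm(subtasks: list[str], workers: int = 16) -> list[str]:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(
            lambda t: call_model("hf:moonshotai/Kimi-K2.5", t), subtasks))

results = swarm(["audit auth.py", "audit db.py", "audit api.py"])
print(len(results))  # one result per subtask
```

A stronger model (e.g. GLM-5.1) can then be given the collected results to synthesize a final answer, mirroring the orchestrator/sub-agent split described elsewhere on this page.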
Since February 15, 2026, Synthetic routes Kimi K2.5 requests between two hardware backends based on current load. Both model strings (hf:moonshotai/Kimi-K2.5 and hf:nvidia/Kimi-K2.5-NVFP4) may silently hit either backend. You don’t need to do anything differently.
Why: B200 capacity was sometimes overloaded while H200s sat with excess capacity. Routing between them based on load smooths this out and lets Synthetic scale Kimi via B200s going forward. Some INT4 capacity remains on reserved H200s (NVFP4 provides no perf advantage on H200 hardware).
Benchmark comparison (from Synthetic’s internal testing):
| Benchmark | INT4 | NVFP4 | Delta |
|---|---|---|---|
| AIME | 91.0 | 93.3 | NVFP4 +2.3 |
| Aider Polyglot | 74.4 | 71.1 | INT4 +3.3 |
| LiveCodeBench (subset) | — | — | NVFP4 +4.0 |
All within margin of error. NVFP4 mildly better on 2 of 3 benchmarks. Synthetic considers them equivalent for routing purposes.
Since April 7, 2026, Synthetic serves Kimi K2.5 from a mix of NVFP4 (B200s) and original INT4 (H200s). The /models API endpoint reports NVFP4 as the quant format, but your request may silently hit either backend based on current load.
When serving a model in multiple precisions, Synthetic reports the “least legitimate” (i.e., lab-released original) format in the /models endpoint. Since Moonshot’s original release was INT4 and Nvidia’s NVFP4 is a derived quant, NVFP4 is reported. However, INT4 capacity remains on reserved H200s where NVFP4 provides no performance advantage.
This means: even if you explicitly use hf:nvidia/Kimi-K2.5-NVFP4, your request may still be served by the INT4 variant on H200s during times of B200 load.
hf:MiniMaxAI/MiniMax-M2.5
Price: $0.40/mtok in, $2.00/mtok out
Currently the most capable middle-tier model on Synthetic for general agentic and coding tasks. Best used as a fast subagent orchestrated by a more powerful model like GLM-5 or Kimi K2.5.
hf:moonshotai/Kimi-K2-Thinking
Price: $0.60/mtok in, $2.50/mtok out
The most capable model on Synthetic before GLM-5 and Kimi K2.5 arrived. Still by far the best writing model, though.
hf:nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4
Price: $0.30/mtok in, $1.00/mtok out
The most powerful budget model on Synthetic. Well worth using for agentic web search and report gathering, basic agentic terminal automation, thread summary and title generation, and other housekeeping tasks that don't need a frontier model. It should not be allowed anywhere near code, however.
hf:zai-org/GLM-4.7-Flash
Price: $0.10/mtok in, $0.50/mtok out
By far the cheapest model on Synthetic. Capable at basic tasks like summarization, classification, and simple translation of natural-language requests into tool calls or terminal commands.
hf:Qwen/Qwen3.5-397B-A17B
Price: See Synthetic pricing page
Qwen 3.5 is a large MoE model (397B total / 17B active) self-hosted by Synthetic. It was launched in beta on February 20, 2026.
Qwen 3.5 is still in beta. Performance may improve as Synthetic optimizes serving.
hf:deepseek-ai/DeepSeek-V3.2
Price: Uses Fireworks pricing (proxied model)
DeepSeek V3.2 is proxied to Fireworks — it is not self-hosted by Synthetic. This means Synthetic cannot control reliability or fix tool-calling issues. Uptime is approximately 99.5% (per status.synthetic.new).
DeepSeek V3.2 is a powerful model that was one of the first available on Synthetic, but its experience has been inconsistent due to proxying.
Because DeepSeek V3.2 is proxied to Fireworks, prefer self-hosted models like GLM-5, Kimi K2.5, or GLM-4.7-Flash when reliability matters.