==== DeepSeek V3.2 ====

''hf:deepseek-ai/DeepSeek-V3.2''

**Price**: Uses Fireworks pricing (proxied model)

<WRAP center round alert 60%>
DeepSeek V3.2 is **proxied** to Fireworks; it is not self-hosted by Synthetic. This means Synthetic cannot control reliability or fix tool-calling issues. Uptime is approximately 99.5% (per status.synthetic.new).
</WRAP>

DeepSeek V3.2 is a powerful model and was one of the first available on Synthetic, but the experience has been inconsistent because it is proxied.

  * **Pros:** Strong model capabilities. One of the fastest models when working properly (fastest proxied model over 24 hours in early 2026).
  * **Cons:** Poor tool-calling reliability due to proxying. Occasional timeouts. Synthetic cannot patch the inference engine for this model.

When DeepSeek 4 is released, Synthetic has indicated it will try to self-host it.

=== Proxy Implications ===

Because DeepSeek V3.2 is proxied to Fireworks:

  - Synthetic forwards the price it pays the underlying inference provider
  - Tool-calling bugs cannot be fixed on Synthetic's end
  - TTFT and TPS depend on Fireworks' infrastructure
  - If Fireworks stops hosting the model, Synthetic will have to drop support as well

For better reliability, prefer self-hosted models such as [[:models:glm-5|GLM-5]], [[:models:kimi-k25|Kimi K2.5]], or [[:models:glm-47-flash|GLM-4.7-Flash]].
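
Since the timeouts mentioned above happen on Fireworks' side and cannot be fixed by Synthetic, callers may want to handle them client-side. A minimal retry-with-backoff sketch in Python (the helper name and parameters are illustrative, not part of any Synthetic API; it assumes your request function raises ''TimeoutError'' on a timed-out call):

```python
import time

def call_with_retries(fn, max_attempts=3, base_delay=1.0):
    """Call fn(), retrying on TimeoutError with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:
            # Give up after the final attempt.
            if attempt == max_attempts - 1:
                raise
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
```

You would wrap whatever function performs the actual request to the proxied model, e.g. ''call_with_retries(lambda: client.chat(...))''.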