Revisions compared: models [2026/04/10 06:20] kat → models [2026/04/20 06:52] (current) kat
===== Selection Criteria =====
  
==== GLM-5 ====

''hf:zai-org/GLM-5''

**Price**: $1.00/mtok in, $3.00/mtok out

Currently the <wrap hi>smartest and most capable coding and agentic model</wrap> hosted directly by Synthetic.

Also, barring GLM 5.1, the most capable open-weight model, period, trading blows with SOTA proprietary models like Opus 4.6 and GPT-5.4.

Consequently, it is also currently the most expensive model: its output cost is actually lower than Kimi K2.5's, but <wrap hi>GLM 5's input cost is 2x Kimi K2.5's</wrap>, and input tokens usually dominate output tokens in coding and agentic use cases, so input price dominates the comparison.

  * **Pros:** Excels at almost all coding (see below) and long-horizon agentic work. Widely considered to be exceptionally good at code review.

  * **Cons:** Quite a bit worse at user interface work. Overkill for basic assistant work, such as for OpenClaw. Worse at lateral thinking than other frontier models (needs more explicit guidance).

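To make the input-dominance point concrete, here is a quick back-of-the-envelope calculation using the listed prices. The 10:1 input-to-output token ratio is an illustrative assumption for agentic coding sessions, not a measured figure:

```python
# Back-of-the-envelope cost comparison for a coding/agentic workload.
# Prices are per million tokens, as listed above; the 10:1 input:output
# token ratio is an illustrative assumption, not a measurement.

def session_cost(in_price, out_price, in_mtok, out_mtok):
    """Total cost in dollars for a session measured in millions of tokens."""
    return in_price * in_mtok + out_price * out_mtok

# A hypothetical session: 10M input tokens, 1M output tokens.
glm5 = session_cost(1.00, 3.00, in_mtok=10, out_mtok=1)  # $13.00
kimi = session_cost(0.45, 3.40, in_mtok=10, out_mtok=1)  # $7.90

print(f"GLM-5: ${glm5:.2f}, Kimi K2.5: ${kimi:.2f}")
# GLM-5 ends up pricier overall despite its cheaper output tokens,
# because input volume dominates the bill.
```

Flip the ratio toward output-heavy work (e.g. long-form generation) and GLM 5 becomes the cheaper of the two, which is why the ratio matters more than either headline price.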
==== Kimi K2.5 ====

''hf:moonshotai/Kimi-K2.5''

**Price**: $0.45/mtok in, $3.40/mtok out

<WRAP center round info 60%>
''hf:moonshotai/Kimi-K2.5'' and ''hf:nvidia/Kimi-K2.5-NVFP4'' are aliased internally and can be used interchangeably.
</WRAP>

A powerful agentic model with above-average lateral thinking/debugging and <wrap hi>great design skills</wrap>.

  * **Pros:** Solid code. Amazing at orchestrating other agents thanks to special "agent swarm" reinforcement learning ([[https://www.kimi.com/blog/kimi-k2-5#_2-agent-swarm|source]]). <wrap hi>Only frontier-class model on Synthetic with vision.</wrap> Best model Synthetic has for UI work (probably because it was trained extensively with vision and to translate between visual input and code).

  * **Cons:** Prone to outright laziness (keeping code for "backward compatibility", marking things as "to implement later") and thinking a bit //too// laterally. Keep an eye on it during longer tasks. Not quite as good as GLM-5 for backend work.

Although Kimi was trained with agent swarms, getting the same results as MoonshotAI would require their proprietary swarms endpoint, which is not available on Synthetic. However, similar results may be achieved using massively parallel sub-agents, or an SDK such as [[Swarms]] with various roles.

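One way to approximate that behaviour without the proprietary endpoint is a plain parallel fan-out over role-scoped sub-agent tasks. The sketch below uses Python's asyncio; ''run_subagent'' is a hypothetical stand-in for whatever actually calls the model (no Synthetic API details are assumed here):

```python
import asyncio

async def run_subagent(role: str, task: str) -> str:
    """Hypothetical stand-in for a real model call.

    In practice this would send `task` to Kimi K2.5 (or a cheaper
    worker like MiniMax M2.5) with a role-specific system prompt.
    """
    await asyncio.sleep(0)  # yield control; a real call would await the API here
    return f"[{role}] finished: {task}"

async def fan_out(tasks: dict[str, str]) -> list[str]:
    """Run all role/task pairs concurrently and gather their reports in order."""
    return await asyncio.gather(
        *(run_subagent(role, task) for role, task in tasks.items())
    )

results = asyncio.run(fan_out({
    "researcher": "survey the codebase",
    "reviewer": "audit the diff",
    "tester": "draft regression tests",
}))
for line in results:
    print(line)
```

A real orchestrator would have the parent model write the task list and synthesize the returned reports; the fan-out itself stays this simple.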
==== MiniMax M2.5 ====

''hf:MiniMaxAI/MiniMax-M2.5''

**Price**: $0.40/mtok in, $2.00/mtok out

Currently the most capable middle-tier model on Synthetic for general agentic and coding tasks. <wrap hi>Best used as a fast subagent orchestrated by a more powerful model</wrap> like GLM 5 or Kimi K2.5.

  * **Pros:** Very fast due to a very low active parameter count (10B). Pretty good at straightforward agentic tool use, agentic terminal use, and writing working, adequate code, as well as thoroughly exploring and writing reports on codebases or document collections.

  * **Cons:** Will very easily get stuck in loops if it can't debug an issue with its code, or its tools, within 1-2 turns. Requires //detailed and thorough// instructions to correctly execute the desired task (otherwise it will misinterpret what you mean, leave crucial things out, or just not understand the assignment).

==== Kimi K2-Thinking ====

''hf:moonshotai/Kimi-K2-Thinking''

**Price**: $0.60/mtok in, $2.50/mtok out

The most capable model on Synthetic before GLM 5 and Kimi K2.5 came around. <wrap hi>Still by far the best writing model</wrap>, though.

  * **Pros:** Mostly just very good at writing, especially in a way that avoids noticeable Claude-like LLM writing tells, and at picking up on emotions and nuances.

  * **Cons:** Writing isn't always great at conveying coherent physical spaces or motions; can have occasional continuity issues. Shouldn't really be used for anything but its writing style at this point.

==== Nemotron Super ====

''hf:nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4''

**Price**: $0.30/mtok in, $1.00/mtok out

The <wrap hi>most powerful budget model</wrap> on Synthetic. Definitely worth using for agentic web search and report gathering, basic agentic terminal automation, thread summary and title generation, and other basic housekeeping tasks you don't need a frontier model for. Should not be allowed anywhere near code, though.

  * **Pros:** Very long context for such a cheap/small model (double the context of GPT-OSS 120B, which is the same size). Extremely, almost unnervingly fast, and does not really slow down over long contexts at all, thanks to its hybrid state-space-model architecture. Most powerful and capable //fully// open-source model ([[https://www.signalbloom.ai/posts/nvidia-nemotron-3-super-is-a-bigger-deal-than-you-think/|source]]).

  * **Cons:** Not very flexible at problem solving. Can lose the plot pretty hard if set loose on a difficult problem for a long time without feedback, although it doesn't really suffer from context rot and is very tenacious, so it has that going for it. Probably shouldn't be allowed to write code. Not that smart.

==== GLM 4.7 Flash ====

''hf:zai-org/GLM-4.7-Flash''

**Price**: $0.10/mtok in, $0.50/mtok out

By far the <wrap hi>cheapest model on Synthetic</wrap>. Capable at basic tasks like summarization, classification, and simple translation of natural-language commands into tool calls or terminal commands.

  * **Pros**: Cheapest. Very fast.

  * **Cons**: Only for basic usage.
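For the tool-call use case, the surrounding harness is just a schema plus a dispatcher; the cheap model only has to fill in the arguments. A minimal sketch, where the ''{"name": ..., "arguments": ...}'' dict shape and the ''list_files'' tool are illustrative assumptions rather than any particular API's wire format:

```python
# Minimal dispatch harness for model-produced tool calls.
# The tool-call dict shape below is illustrative, not a real API's format.

def list_files(path: str) -> str:
    """Example tool the model can invoke (stubbed: no filesystem access)."""
    return f"(would list files under {path})"

# Registry mapping tool names to callables.
TOOLS = {"list_files": list_files}

def dispatch(tool_call: dict) -> str:
    """Execute a tool call of the form {'name': ..., 'arguments': {...}}."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# A cheap model translates "show me what's in src/" into something like:
result = dispatch({"name": "list_files", "arguments": {"path": "src/"}})
print(result)
```

The model's only job is producing that small dict correctly, which is exactly the kind of constrained translation a budget model handles well.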
  
===== Embedding Models =====
  
==== Nomic Embed ====

''hf:nomic-ai/nomic-embed-text-v1.5''

If you have a Synthetic subscription, using an embedding model is free. Embedding models are largely used in vector databases to give you good semantic search over your data. Think of it as an easy way to look through a hundred documents without having to give an agent the whole context.

Nomic Embed only works on text, and it isn't state of the art anymore either; expect middling performance.
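The retrieval mechanics look roughly like this: embed each document once, embed the query, and rank by cosine similarity. The toy bag-of-words ''embed'' below is a stand-in (real use would get vectors from ''hf:nomic-ai/nomic-embed-text-v1.5'' instead); only the ranking logic carries over:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real model call."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "how to rotate api keys safely",
    "recipe for sourdough bread",
    "deploying the api gateway to production",
]
vectors = [(d, embed(d)) for d in docs]  # index the collection once

def search(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = embed(query)
    ranked = sorted(vectors, key=lambda dv: cosine(qv, dv[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(search("rotate api keys"))
```

With real embeddings the query "key rotation" would also match "rotate api keys safely" despite sharing no exact words, which is the whole point of semantic over keyword search.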