Diff of page ''models'': revision 2026/04/11 14:44 by kat – [Kimi K2-Thinking] → current revision 2026/04/20 06:52 by kat
Line 3:
 ===== Selection Criteria =====
  
 +{{page>models:glm-5.1}}
 {{page>models:glm-5}}
 {{page>models:kimi-k25}}
 {{page>models:minimax-m25}}
-{{page>model:kimi-k2-thinking}}
-==== Nemotron 3 Super ====
-
-''hf:nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4''
-
-**Price**: $0.30/mtok, $1.00/mtok
-
-The <wrap hi>most powerful budget model</wrap> on Synthetic. Definitely worth using for agentic web search and report gathering, basic agentic terminal automation, thread summary and title generation, and other housekeeping tasks you don't need a frontier model for. It should not be allowed anywhere near code.
-
-  * **Pros:** Very long context for such a cheap/small model (double the context of GPT-OSS 120b, which is the same size). Extremely, almost unnervingly fast, and does not slow down over long contexts at all, thanks to its hybrid state space model architecture. The most powerful and capable //fully// open source model ([[https://www.signalbloom.ai/posts/nvidia-nemotron-3-super-is-a-bigger-deal-than-you-think/|source]]).
-
-  * **Cons:** Not very flexible at problem solving. Can lose the plot pretty hard if set loose on a difficult problem for a long time without feedback, although it is tenacious and doesn't really suffer context rot, so it has that going for it. Probably shouldn't be allowed to write code. Not that smart.
-
-==== GLM 4.7 Flash ====
-
-''hf:zai-org/GLM-4.7-Flash''
-
-**Price**: $0.10/mtok, $0.50/mtok
-
-By far the <wrap hi>cheapest model on Synthetic</wrap>. Capable at basic tasks like summarization, classification, and simple translation of natural-language commands into tool calls or terminal commands.
-
-  * **Pros**: Cheapest. Very fast.
-
-  * **Cons**: Only for basic usage.
+{{page>models:kimi-k2-thinking}}
+{{page>models:nemotron-3-super}}
+{{page>models:glm-47-flash}}
+{{page>models:qwen-3.5}}
+{{page>models:deepseek-v3.2}}
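The per-mtok prices quoted above translate directly into per-request costs. A minimal sketch, assuming the first figure in each **Price** line is the input price and the second the output price, both per million tokens (the page does not label them explicitly):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_mtok: float, out_price_per_mtok: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return (input_tokens / 1_000_000) * in_price_per_mtok \
         + (output_tokens / 1_000_000) * out_price_per_mtok

# Hypothetical summarization call on GLM 4.7 Flash ($0.10 in / $0.50 out):
# 20k tokens of input, 1k tokens of output.
cost = request_cost(20_000, 1_000, 0.10, 0.50)
print(f"${cost:.4f}")  # → $0.0025
```

At these rates even heavy housekeeping traffic stays well under a cent per call, which is the whole argument for routing such tasks away from frontier models.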
  
 ===== Embedding Models =====
  
-==== Nomic Embed ====
-
-''hf:nomic-ai/nomic-embed-text-v1.5''
-
-If you have a Synthetic subscription, using an embedding model is free. Embedding models are commonly used in vector databases to provide effective semantic search over your data. Think of it as an easy way to browse through a hundred documents without giving an agent the entire context.
-
-Nomic only works with text and is no longer state-of-the-art, so expect average performance.
+{{page>models:nomic-embed-text-15}}
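The semantic-search idea behind embedding models (embed every document once, then rank documents by vector similarity to an embedded query) can be sketched with toy data. Everything below is illustrative: the three-dimensional "embeddings" and filenames are made up, and a real model such as nomic-embed-text-v1.5 emits hundreds of dimensions per text.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical precomputed document embeddings (toy 3-dim vectors).
docs = {
    "invoice_2024.txt":   [0.9, 0.1, 0.0],
    "holiday_photos.txt": [0.0, 0.2, 0.9],
    "tax_summary.txt":    [0.8, 0.3, 0.1],
}

# Pretend this is the embedding of the query "billing documents".
query = [1.0, 0.2, 0.0]

# Rank all documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranked[0])  # → invoice_2024.txt
```

This is what a vector database does at scale: the agent only ever sees the top-ranked documents, not all hundred of them.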