It is highly recommended that you not use Claude Code as a coding harness with Synthetic, due to the excessive token bloat its poor infrastructure causes.
Claude Code, from Anthropic, is perhaps the most popular agentic coding harness currently. However, it is not really designed to be used with anything other than Anthropic’s own Claude subscriptions.
To use Claude Code with Synthetic, you will need to use something like Claude Code Router, a custom script, or a tool provided by the community, as described here in the official documentation.
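As a sketch of the custom-script route: if Synthetic exposes an Anthropic-compatible endpoint, Claude Code can be pointed at it with two environment variables it already honors. The URL below is a placeholder, not Synthetic's real endpoint; copy the actual base URL from the Synthetic developer docs.

```shell
# Point Claude Code at a third-party Anthropic-compatible endpoint.
# PLACEHOLDER URL — substitute the real base URL from Synthetic's developer docs.
export ANTHROPIC_BASE_URL="https://api.synthetic.example/anthropic"

# Claude Code sends this as the bearer token instead of an Anthropic key.
export ANTHROPIC_AUTH_TOKEN="$SYNTHETIC_API_KEY"

claude  # launches Claude Code against the endpoint above
```

Tools like Claude Code Router do essentially this, plus request translation and per-model routing for the haiku/sonnet/opus slots.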
Cons: Very poorly vibe-coded: updates often break things; the TUI is slow, flickers a lot, and does not support terminal scrollback. Extreme token bloat in the system prompts, tool prompts, and feedback to the models. Extreme request bloat: it fires off many requests across the various models you’ve specified as haiku/sonnet/opus alternatives. Requires custom scripts or wrappers to work with anything but Anthropic’s API.
This guide covers setting up Hermes Agent with Synthetic as your provider. For the official Hermes docs, see here.
Hermes Agent is an open-source AI agent framework by Nous Research that runs in your terminal, messaging platforms, and IDEs. It’s provider-agnostic, and supports persistent memory, multi-platform gateways (Telegram, Discord, Slack, etc.), skills, plugins, and profiles.
Pros: Self-improving through skills. Persistent cross-session memory. Multi-platform gateway. Provider-agnostic — swap models and providers mid-workflow. Plugins and MCP servers. Profiles for isolated configs. Cron scheduling. Extensive toolset (web, terminal, file, browser, code execution, vision, etc.)
Octofriend is Synthetic’s very own coding harness! You can read the official docs here. Despite being made by Synthetic, it still supports any provider, if you’re the kind of person who swaps between a lot of them.
Pros: Native support for Synthetic, obviously. Built-in-by-default access to two very helpful auto-fix models to solve some of the most common issues coding agents run into: malformed JSON responses and poorly written code diffs. Super fast. Stable, with few-to-no notable bugs. Well-coded.
Cons: Extremely simple. Lacks essentially any features beyond “send request, receive edits/response” that you may have come to expect from other coding harnesses. No plugin API to help mitigate that simplicity. Infrequently updated.
The Pi agent is minimal, lightweight, and extremely opinionated. It prioritizes a polished, reliable implementation of the absolute basics of an agent harness over any more advanced features, and ruthlessly eschews features that are merely popular in favor of a specific design philosophy (source). As a result, whether it works for you will be highly hit-or-miss.
However, Pi is also extremely extensible. For instance, it is the agent at the core of OpenClaw.
Extending Pi is easy: models running in Pi have immediate access to the full Pi documentation and source code, and extending it is as simple as dropping a TypeScript file in a specific directory. The codebase is designed to be easily hooked into and overridden.
To use with Synthetic, you can follow the Custom Model instructions using the information from the Synthetic developer docs, though the pi-synthetic-provider extension is preferable and offers additional integration.
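If you do take the Custom Model route, it amounts to registering Synthetic’s OpenAI-compatible endpoint with Pi. The snippet below is purely illustrative: the file location, field names, and values are all assumptions, so follow the actual Custom Model instructions and the Synthetic developer docs for the real schema.

```jsonc
// HYPOTHETICAL sketch — every field name here is an assumption, not Pi's real schema.
{
  "models": [
    {
      "id": "synthetic-example",                            // assumed identifier field
      "baseUrl": "https://api.synthetic.example/openai/v1", // placeholder endpoint URL
      "apiKey": "SYNTHETIC_API_KEY",                        // assumed: name of an env var holding your key
      "model": "placeholder-model-id"                       // copy a real model ID from Synthetic's docs
    }
  ]
}
```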
Pros: Extremely extensible. Self-documenting and self-extending. Very polished and reliable implementation (e.g., a flicker-free, extremely fast TUI with scrollback support, and seamless provider switching within one session). Small and fast. Has powerful extension collections. Very popular. Minimal request and token usage by default. Very transparent introspection into what the harness is using from your provider.
Cons: The default experience is very barebones, and opinionated. It might be useless for you, or the opinions might rub you the wrong way. As always when adding features to something minimal via extensions, this invites yak shaving, possible instability, and the risk that overcomplicated, buggy, or overbearing additions are the only options besides extending it yourself: in general, the Neovim or Emacs experience.
See pi/tips for community tips on getting the most out of Pi.
The Zed Agent is the built-in agentic harness integrated with the Zed editor — not to be confused with the agents Zed has the capability to use through the Agent Client Protocol.
The built-in Zed Agent has several features fundamentally more advanced than any other agent’s. It lets you stay more involved in writing, reading, and reviewing agent decisions and code changes without slowing your agent down, reversing the incentive toward vibe coding that most other agents have. However, it is missing many other common features, since most of its advantages come from fundamental algorithms and data structures, not feature improvements.
To configure use of Synthetic with Zed, you can configure it as an OpenAI API Compatible Provider (instructions) using the information from the Synthetic developer docs. You’ll then have to set your API Key either in the agent sidebar > three dots > Settings > Synthetic > API key, or as an environment variable. (The former is recommended, so you don’t run into strange env issues).
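Roughly, the provider entry in Zed’s settings.json looks like the sketch below; the URL and model values are placeholders, so copy the real ones from the Synthetic developer docs, and treat the exact field names as subject to Zed’s current instructions.

```json
{
  "language_models": {
    "openai_compatible": {
      "Synthetic": {
        "api_url": "https://api.synthetic.example/openai/v1",
        "available_models": [
          {
            "name": "placeholder-model-id",
            "display_name": "Example Model",
            "max_tokens": 128000
          }
        ]
      }
    }
  }
}
```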
Pros:
CRDT-based collaborative editing with the agent means that you can: edit files at the same time the agent edits them, or before you’ve accepted/rejected its proposed changes; see the agent’s proposed changes inline in the actual buffer you’d use to edit the file, instead of in a separate interface; and accept or reject proposed changes hunk-by-hunk without confusing the model.
Multibuffer support means that when you look at the unified (live updating) view of all of your agent’s proposed changes as it makes them, you can actually edit that collected buffer.
Zed’s DeltaDB also means agent-suggested edits don’t have to be accepted or rejected before the agent can move on to its next turn of reading source code and editing. You are not forced to accept all changes in one diff just to see where the agent is going: you can let the agent build up a “stack” of changes for a complete feature on autopilot without finally accepting those edits, and the agent can iteratively refine proposed changes by writing a change, looking at the resulting file with its changes in place (even though you haven’t permanently accepted them yet), spotting problems, and iterating again.
Agent-proposed edits are actually in the file; DeltaDB just records the information needed to perfectly revert those changes at the text-edit-operation level if you reject them. This means LSP diagnostics, compilation, benchmarks, test suites, fuzzing, and so on can all be run on an agent’s proposed changes before accepting them. In fact, the Zed system prompt encourages the agent to run its LSP diagnostics tool on its edits after making them, before they are accepted, feeding into the iterative process.
Zed uses a custom “intent to edit file with goal” tool call, then runs a separate request with the same chat history and model to actually generate the code, and live-generates the diff from the agent’s search-and-replace output, so you can watch the agent move around and edit the file in real time, token by token.
Zed allows you to “follow” agents, so you can watch which files they’re currently reading, searching in, or editing.
Cons: In-file live diff generation is laggy. The Zed Agent has subagents, but they’re forced to use the same model as the parent thread. The Zed Agent doesn’t currently have a todo list tool (although that is on the HEAD branch). It also doesn’t have Claude Code style templated slash commands or Agent Skills (although this feature is coming soon). Additionally, the “intent to edit” system means you’ll use 2x the input tokens and 2x the requests for edits.