## Overview
`Conversation.fork()` deep-copies a conversation — events, agent config, workspace metadata — into a new conversation with its own ID. The fork starts in `idle` status and retains the full event memory of the source, so calling `run()` picks up right where the original left off.
Use cases:
- CI debugging — an agent produced a wrong patch; fork to debug without losing the original run’s audit trail
- A/B testing — fork at a given turn, change one variable, compare downstream outcomes
- Tool-change — fork and swap in a different agent with new tools mid-conversation
## Basic Usage
### Create a fork
### Source stays immutable
Forking deep-copies events and state. Anything you do on the fork never touches the source.

### Fork with a different agent
Swap the agent on fork — useful for A/B testing models or adding/removing tools.

### Tags and metadata
Forks support `title` and arbitrary `tags` for organization.
### Metrics reset
By default, cost/token stats start fresh on the fork. Pass `reset_metrics=False` to preserve them.
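In SDK terms: `conversation.fork(reset_metrics=False)`. The behavior can be sketched as follows (illustrative helper with hypothetical names, not the SDK's internals):

```python
import copy

# Hypothetical sketch of the reset_metrics behavior: by default the fork
# starts with zeroed stats; reset_metrics=False carries the source's over.
def fork_metrics(source_metrics: dict, reset_metrics: bool = True) -> dict:
    if reset_metrics:
        return {"cost": 0.0, "tokens": 0}
    return copy.deepcopy(source_metrics)

source_metrics = {"cost": 0.42, "tokens": 1234}
print(fork_metrics(source_metrics))                       # fresh stats on the fork
print(fork_metrics(source_metrics, reset_metrics=False))  # stats preserved
```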
## API Reference
| Parameter | Default | Description |
|---|---|---|
| `conversation_id` | auto-generated UUID | ID for the forked conversation |
| `agent` | deep copy of source | Agent for the fork (swap model, tools, etc.) |
| `title` | `None` | Sets `tags["title"]` on the fork |
| `tags` | `None` | Arbitrary key-value metadata |
| `reset_metrics` | `True` | Whether cost/token stats start at zero |
Returns a `Conversation` with the same event history but independent state.
## What Gets Copied
| Component | Behavior |
|---|---|
| Events | Deep-copied; source is never modified |
| Agent | Deep-copied by default, or replaced via the `agent` kwarg |
| Workspace | Shared (same working directory) |
| Confirmation policy | Copied from source |
| Security analyzer | Copied from source |
| Stats / Metrics | Reset by default (`reset_metrics=True`) |
| Execution status | Always `idle` on the fork |
| Conversation ID | New UUID (or explicit via `conversation_id`) |
## Agent-Server REST Endpoint
When using the agent-server, forks are available via a REST endpoint that returns the `ConversationInfo` for the newly created fork.
When you call `fork()` on a `RemoteConversation`, the SDK sends the fork request for you and returns a new `RemoteConversation` pointing at the server-side copy. Remote forks always reuse the server-managed agent configuration, so `RemoteConversation.fork(agent=...)` is intentionally unsupported.
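The restriction can be sketched with a toy stand-in (`ToyRemoteConversation` is hypothetical, and raising an error is just one plausible way an `agent` argument could be rejected):

```python
# Hypothetical stand-in illustrating the remote-fork restriction: the server
# owns the agent configuration, so passing agent= is rejected client-side.
class ToyRemoteConversation:
    def fork(self, agent=None, **kwargs) -> "ToyRemoteConversation":
        if agent is not None:
            raise ValueError("RemoteConversation.fork() does not accept agent=")
        return ToyRemoteConversation()  # new handle to the server-side copy

remote = ToyRemoteConversation()
forked = remote.fork()  # fine: reuses the server-managed agent config
try:
    remote.fork(agent=object())
except ValueError as err:
    print(err)  # RemoteConversation.fork() does not accept agent=
```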
## Agent-Server Example
This example is available on GitHub: `examples/02_remote_agent_server/11_conversation_fork.py`
## Ready-to-run Example
This example is available on GitHub: `examples/01_standalone_sdk/48_conversation_fork.py`
The model name should follow the LiteLLM convention: `provider/model_name` (e.g., `anthropic/claude-sonnet-4-5-20250929`, `openai/gpt-4o`). The `LLM_API_KEY` environment variable should be the API key for your chosen provider.

## Next Steps
- Persistence — Save and restore conversation state
- Pause and Resume — Control execution flow
- Agent Server — Deploy agents with the REST API

