agent.go
A field guide for Go engineers

Build AI agents
that ship — in Go.

A practical, code-first handbook for designing, running, and operating autonomous agents in Go. The loop, the tools, the memory, the orchestration — and the production tradeoffs nobody writes down.

Download the runnable bundle — agentgo-examples.zip · 8 examples, all tests passing · 30 KB
24 runnable Go examples · 6 production patterns · Provider-agnostic
agent.go go run ./cmd/agent
// the entire agent loop, in 18 lines of Go.
func Run(ctx context.Context, goal string) error {
    msgs := []Message{{Role: "user", Content: goal}}

    for turn := 0; turn < 12; turn++ {
        resp, err := llm.Complete(ctx, msgs, tools)
        if err != nil { return err }

        if resp.Done {
            log.Info("agent finished", "answer", resp.Text)
            return nil
        }

        // model wants to call tools — execute & feed results back.
        results := tools.Dispatch(ctx, resp.Calls)
        msgs = append(msgs, resp.AsMessage(), results...)
    }
    return errors.New("agent: max turns exceeded")
}

Provider-agnostic. Works with the Go SDKs you already use.

Anthropic OpenAI Gemini Ollama langchaingo Bedrock Cohere
Why Go

Agents are systems software.
Go is systems software.

An agent is a long-running, concurrent, network-bound service that calls tools, holds state, retries, streams, and crashes in interesting ways. Go was built for exactly this shape of program — without the runtime overhead Python brings.

Concurrency that matches the workload

An agent fans out to N tool calls, streams an LLM response, watches for context cancellation, and emits telemetry — all on the same goroutine budget. Goroutines and channels are the right primitives for this workload.

A single static binary in production

No virtualenv, no requirements.txt drift, no Dockerfile gymnastics. go build produces one file you can scp anywhere — including a Lambda, a sidecar, or a hardened container.

Type-safe tool schemas

Tool arguments and structured outputs map cleanly to Go structs and JSON tags. The compiler catches drift between what the model emits and what your code expects, before it ever ships.
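
A sketch of catching drift at the decode boundary: `json.Decoder`'s DisallowUnknownFields turns an argument the struct doesn't declare into an error instead of a silent drop. DeployArgs is an illustrative shape, not a real tool.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// The shape the code expects — field names are illustrative.
type DeployArgs struct {
	Service  string `json:"service"`
	Replicas int    `json:"replicas"`
}

// decodeArgs decodes model-emitted JSON strictly: an argument the
// struct doesn't declare is an error, not a silently dropped field.
func decodeArgs(raw []byte) (DeployArgs, error) {
	var a DeployArgs
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	err := dec.Decode(&a)
	return a, err
}

func main() {
	ok, err := decodeArgs([]byte(`{"service":"api","replicas":3}`))
	fmt.Println(ok.Service, ok.Replicas, err == nil)

	// A drifted argument the model invented surfaces immediately.
	_, err = decodeArgs([]byte(`{"service":"api","region":"us"}`))
	fmt.Println(err != nil)
}
```

The compile-time half of the story is the struct itself: rename a field and every caller breaks at build time, not in production.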

context.Context is built for this

Cancellation, timeouts, deadlines, request-scoped values — every long-running agent loop needs them, and Go gives them to you in the standard library. No bolt-on async framework.

Production observability is first-class

OpenTelemetry, structured slog, pprof, expvar — all in stdlib or one import away. Tracing an agent's tool calls and token spend is a 20-line job, not a platform decision.

Boring is a feature

Agents are stochastic enough on the model side. The framework around them should be deterministic, debuggable, and dull. Go's culture of "no magic" pairs unusually well with the chaos LLMs introduce.

The core loop

Every agent — chatbot, coder, researcher — runs the same four-step loop.

Whatever the framework, whatever the model — when you peel back the abstractions you find the same shape: a model decides, tools execute, results return, the model decides again. Master this loop and the rest is decoration.

Read the foundations chapter
1 · Decide

Model receives messages + available tools, returns either a final answer or one or more tool calls.

2 · Act

Dispatcher executes tool calls in parallel — HTTP, SQL, filesystem, MCP servers, whatever the agent has access to.

3 · Observe

Tool results (or errors) are appended to the message history as tool_result turns.

4 · Repeat — or stop

Loop back to step 1 until the model returns a stop turn, hits a budget, or trips a guardrail.

tools/search.go go test ./...
type SearchArgs struct {
    Query    string `json:"query" jsonschema:"required"`
    MaxHits  int    `json:"max_hits,omitempty"`
}

var WebSearch = agent.Tool[SearchArgs]{
    Name:        "web_search",
    Description: "Query the web. Returns a list of titled snippets.",
    Run: func(ctx context.Context, a SearchArgs) (any, error) {
        if a.MaxHits == 0 { a.MaxHits = 5 }
        hits, err := brave.Search(ctx, a.Query, a.MaxHits)
        if err != nil {
            return nil, fmt.Errorf("search: %w", err)
        }
        return hits, nil
    },
}
Tool use

Tools are just typed Go functions.

A tool is a struct with a name, a description, and a function from typed args to a result. The framework derives the JSON schema for you, the model picks one to call, and the dispatcher routes it back into Go. No hand-written schemas. No string-typed argument bags.

  • JSON schema is generated from struct tags — no drift.
  • Errors are passed back to the model as observations, not crashes.
  • Tools are unit-tested like any other Go function.
  • The same registry serves agents, evals, and CLI sandboxes.
Three architectures

Pick the smallest shape that solves the problem.

Most "agent" demos are actually deterministic workflows. Most "workflows" should actually be agents. Choosing the right shape is the single highest-leverage decision in the project.

Workflow

Pre-defined chains

A fixed sequence of LLM calls and tool calls. Cheap, fast, predictable, and the right choice for 70% of "AI features." If your task fits in a flowchart, this is it.

Examples — summarization, classification, structured extraction, RAG retrieve-then-answer, content moderation pipelines.

Workflow patterns
Single-loop agent

One agent, many tools

The model decides which tool to call next inside a single loop, until it produces a final answer. The right shape when the path through the problem isn't known up-front.

Examples — coding assistants, customer-support resolvers, researchers, autonomous CLI helpers, deep-search agents.

Single-agent patterns
Multi-agent

Specialist teams

An orchestrator delegates to specialist sub-agents — researcher, writer, critic. Powerful, but the coordination tax is real. Most systems are better served by a single agent with more tools.

Examples — multi-document research, parallel codebase analysis, role-played simulations, supervisor + workers patterns.

Multi-agent patterns
~250 LOC · a working agent loop with tool use, retries, and tracing
4 · provider SDKs covered: Anthropic, OpenAI, Gemini, Ollama
24 · runnable example projects in this guide
0 · frameworks required — every example uses stdlib + one provider SDK
What you'll build

Six end-to-end projects, all in Go.

Each project ships as a self-contained module — go run ./cmd/<project> and you have a working agent. Each one isolates a single architectural lesson worth learning in production.

01 Single-loop agent · ~180 LOC

A coding agent that edits its own repo

A bash + read + edit + run-tests agent that mirrors how Claude Code works internally. Demonstrates the canonical four-tool loop, persistent diffs, and how to scope blast radius with a chroot-like sandbox.

read_file·write_file·bash·run_tests
02 RAG workflow · ~220 LOC

A retrieval pipeline over your own docs

Ingest markdown → chunk → embed → store in pgvector → query. The boring, deterministic backbone of 80% of "AI search" features, written in Go because the ingest pipeline lives next to your other services.

pgvector·openai-go·chunked ingest
03 Reflection loop · ~150 LOC

A self-critiquing report writer

A draft → critique → revise loop using two distinct system prompts on the same model. A clean demonstration of how a single extra LLM turn dramatically improves output quality at predictable cost.

structured output·critique loop·token budget
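
The draft → critique → revise shape can be sketched without an API key. The scripted llm function below is a hypothetical stand-in for one model addressed with two system prompts; the control flow is the part that carries over to a real provider.

```go
package main

import "fmt"

// llm stands in for one model behind two system prompts; the scripted
// replies keep the example runnable without a network call.
func llm(system, prompt string) string {
	switch system {
	case "writer":
		if prompt == "" {
			return "Go is good for agents." // first draft
		}
		// Revision, "informed" by the critique in the prompt.
		return "Go's concurrency, static binaries, and stdlib context make it a strong fit for agents."
	case "critic":
		return "Too vague — name concrete language features."
	}
	return ""
}

// reflect runs the one extra turn: draft, critique, revise.
func reflect(goal string) string {
	draft := llm("writer", "")
	critique := llm("critic", draft)
	return llm("writer", goal+"\ncritique: "+critique+"\ndraft: "+draft)
}

func main() {
	fmt.Println(reflect("Why Go for agents?"))
}
```

The cost model is exactly what the card claims: two extra fixed-size LLM turns per output, no open-ended looping.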
04 Multi-agent · ~340 LOC

A research team — planner, searchers, writer

An orchestrator agent spawns N parallel research sub-agents, each with its own context window, then hands findings to a writer. Demonstrates the "wide context, narrow context" pattern.

fan-out·goroutines·context budget
05 Streaming + UI · ~280 LOC

A chat interface with SSE streaming

A net/http server that streams model output and tool-call events to the browser via Server-Sent Events. The reference shape for any "chat with my data" product surface.

net/http·SSE·tool events
06 Production · ~410 LOC

An agent with traces, retries, and budgets

The same single-loop agent — wrapped in OTel tracing, exponential-backoff retries, token-cost accounting, and circuit breakers. The boring layers that turn a demo into a service that runs at 3am.

OTel·slog·circuit breaker
Provider matrix

The Go SDKs you'll actually use.

The agent loop is provider-agnostic — but the concrete SDK matters when you're wiring up tool use, streaming, and structured output. Here's how the four most-used Go SDKs stack up for agent work.

Provider · Module · Tool use · Streaming · Structured output · Best for
Anthropic · official anthropic-sdk-go · first-class, parallel · SSE w/ events · tool-shaped JSON · long-running agentic loops
OpenAI · official openai-go · function calling · SSE · response_format · cheapest tool use, broad ecosystem
Gemini · google.golang.org/genai · function calling · yes · schema-typed · multimodal, long context
Ollama (local) · github.com/ollama/ollama/api · limited (model-dependent) · NDJSON · JSON mode · air-gapped, dev/test, edge
langchaingo · github.com/tmc/langchaingo · wrapper-level · wrapper-level · output parsers · switching providers, prototyping

Recommendation — start with the official SDK for the model you're using. Wrap it behind a tiny LLM interface in your own code. Don't reach for langchaingo until you actually have a portability problem to solve.

"The hardest part of an agent isn't the loop — it's deciding what the agent shouldn't do. Tool scope, context budget, and stop conditions are where production systems live or die." — from Patterns: stop conditions and budgets

Get the code

A complete Go project. Compiles. Tests pass. No API key.

Every example shown on this site ships in one zip. Drop it on disk, go test ./..., go run ./cmd/hello, and you're inside a working agent loop in under a minute.

agentgo-examples.zip

Eight runnable Go agents, in one self-contained module.

Plain Go 1.22. Zero external dependencies. The whole thing builds offline. Includes the agent loop, a typed tool registry with auto-generated JSON Schema, a sandboxed file-tools layer, and two API-key-free LLM stand-ins (Scripted + Toy) so every example runs end-to-end without a network call.

30 KB · compressed bundle
1,832 LOC · across 20 Go files
18 / 18 · tests passing · race-clean
0 deps · stdlib only
# project layout
agentgo-examples/
├── go.mod              # Go 1.22 · no external deps
├── Makefile            # make · make test · make run-all
├── README.md
├── internal/
│   ├── agent/          # the ~250-line core
│   │   ├── message.go
│   │   ├── llm.go
│   │   ├── registry.go # typed tools → JSON schema
│   │   ├── agent.go    # the ReAct loop
│   │   └── agent_test.go
│   ├── llm/            # API-key-free LLM stand-ins
│   │   ├── scripted.go
│   │   ├── toy.go
│   │   └── llm_test.go
│   └── tools/          # reusable building blocks
│       ├── calc.go
│       ├── weather.go
│       ├── files.go    # sandboxed read/write/list
│       └── tools_test.go
└── cmd/                # 8 runnable examples
    ├── hello/          # 01 · ReAct + 2 tools
    ├── chain/          # 02 · prompt chaining
    ├── route/          # 03 · routing
    ├── parallel/       # 04 · fan-out
    ├── reflect/        # 05 · draft → critique → revise
    ├── coding/         # 06 · sandboxed file tools
    ├── registry/       # 07 · types → JSON schema
    └── research/       # 08 · orchestrator + workers
# From an empty terminal to a working agent loop, in 4 commands.
$ unzip agentgo-examples.zip && cd agentgo-examples
$ go test ./...       # 18 tests, all green
$ go run ./cmd/hello  # first agent run, no API key
$ make run-all        # every example in sequence

Skip the framework. Read the code.

Every chapter in this guide is backed by a runnable Go module. Clone the repo, go mod tidy, and read the agent loop with a debugger attached. That's how this stuff actually clicks.