
Chapter 3 — Workflow Topology

How to connect agents into a graph

In Chapter 2, you learned how to configure a single agent. Now the question becomes: how many agents do you need, and how do they connect? This is the topology — the shape of your graph.

Topology is a design-time decision. You draw it before anything runs. It answers the question: “If I were to sketch this system on a whiteboard, what would the diagram look like?”

Jargon alert: You’ll encounter terms like “sequential decomposition,” “linear flow,” “sequential chaining,” and “pipeline architecture” in blog posts and papers. These all describe the same thing: agents in a line. We call it a pipeline. Similarly, “hierarchical delegation,” “manager-worker pattern,” and “supervisor architecture” all describe a tree with a coordinator at the top. Don’t let synonyms trick you into thinking there are more patterns than there are.

There are exactly six canonical topologies. Every multi-agent system you’ll ever encounter is one of these, or a combination of them.

3.1 — Single Agent

One node. No graph. The agent handles everything. This isn’t really a “topology” in the graph sense, but it’s the baseline you should always consider first.

Single Agent topology — Entry flows to Agent

When to Use

When to Move Beyond

Running Example

Our Alert Classifier from Chapter 2 is a single agent. It receives an alert, classifies it, and returns a result. No decomposition needed. But what happens after classification? The alert needs to be investigated based on its category. That’s when we need more agents.

3.2 — Pipeline (Chain)

Nodes in a line: A → B → C. Each agent takes the previous agent’s output as input and produces output for the next one. This is the most common and most intuitive multi-agent pattern.

Pipeline topology — agents connected in a line

When to Use

Design Tradeoffs

| Advantage | Disadvantage |
| --- | --- |
| Simple to understand and debug | Each agent adds latency (sequential execution) |
| Each agent has focused context | Errors propagate forward — if Agent A is wrong, B and C are wrong too |
| Easy to add or remove stages | No parallelism — can’t do multiple things at once |
| Natural for refine-and-improve patterns | Later agents may lose information from earlier stages |

Running Example: Security Alert Pipeline
Pipeline topology — Entry → Classifier → Investigator → Responder
```
Classifier:
  Model: GPT-4o-mini
  Tools: none
  "Classify the alert by category and severity."

Investigator:
  Model: GPT-4o
  Tools: [check_ip, search_logs, check_hash]
  "Investigate the classified alert. Gather evidence."

Responder:
  Model: GPT-4o-mini
  Tools: [block_ip, create_ticket]
  "Based on the investigation, take appropriate action."
```

Each agent has a focused role, its own tools, and a clean handoff to the next. The Classifier is cheap and fast. The Investigator uses a powerful model because investigation requires complex reasoning. The Responder is cheap again — it just executes the plan.

In a pipeline, each connection is a direct edge — “after this node finishes, always go to the connected node.” No conditions, no branching.
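A minimal sketch of this structure, with each agent stubbed as a plain function (the function names, the `run_pipeline` helper, and the classification logic are all illustrative, not part of any particular framework):

```python
# A pipeline is sequential composition: each stage receives the previous
# stage's output. Real systems would invoke an LLM at each step; here the
# three agents are stubbed with deterministic functions.

def classifier(alert: str) -> dict:
    # Stub for the cheap classification step.
    category = "malware" if "binary" in alert else "other"
    return {"alert": alert, "category": category, "severity": "high"}

def investigator(classified: dict) -> dict:
    # Stub for the expensive investigation step; it only sees A's output.
    return {**classified, "evidence": f"investigated {classified['category']}"}

def responder(investigated: dict) -> dict:
    # Stub for the cheap response step; it only sees B's output.
    return {**investigated, "action": "create_ticket"}

def run_pipeline(alert: str) -> dict:
    state = classifier(alert)      # A
    state = investigator(state)    # B: direct edge from A
    state = responder(state)       # C: direct edge from B
    return state

result = run_pipeline("suspicious binary download detected")
```

Note how an error in `classifier` would flow unchecked into both later stages; that is the error-propagation tradeoff from the table above.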

3.3 — Fan-out / Fan-in (Parallel)

One input dispatched to multiple agents simultaneously. All agents run in parallel on the same input, then their results merge at a collection point.

Fan-out / Fan-in topology — Dispatch fans out to Analyst A, B, C, which merge into Merge

When to Use

Key Distinction from Router

Fan-out = ALL branches run. Every downstream agent processes the input.
Router = ONE branch runs. The router picks the single best handler.

This is the fundamental difference. Fan-out is about getting multiple answers. Router is about picking the right handler (the Router pattern is explained in more detail in Section 3.4).
Example: Multi-Perspective Binary Analysis

A suspicious binary is analyzed by three agents simultaneously, each examining it from a different perspective.

A Merge Agent combines all three reports into a unified assessment, potentially finding connections that individual analysts missed (e.g., the static analyst found an XOR loop, and the behavior analyst found a suspicious DNS query — together these suggest encrypted C2 communication).

To implement fan-out, connect one node’s output to multiple downstream nodes using direct edges. Then connect all downstream nodes to a single merge node. The merge node sees all outputs in the shared state.
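Framework details aside, the fan-out/fan-in shape itself can be sketched in plain Python. This is a generic illustration with stubbed analysts (the analyst names and `fan_out_fan_in` helper are hypothetical):

```python
# Fan-out runs EVERY analyst on the SAME input in parallel, then a merge
# step sees all outputs together. Contrast with a router, where exactly
# one branch would run.
from concurrent.futures import ThreadPoolExecutor

def static_analysis(binary: str) -> str:
    return f"static: XOR loop found in {binary}"

def behavior_analysis(binary: str) -> str:
    return f"behavior: suspicious DNS query from {binary}"

def threat_intel(binary: str) -> str:
    return f"intel: no known hash match for {binary}"

def fan_out_fan_in(binary: str) -> dict:
    analysts = [static_analysis, behavior_analysis, threat_intel]
    with ThreadPoolExecutor() as pool:
        # All branches execute; results arrive at a single merge point.
        reports = list(pool.map(lambda analyze: analyze(binary), analysts))
    # The merge node sees all outputs in the shared state.
    return {"binary": binary, "reports": reports}

merged = fan_out_fan_in("sample.exe")
```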

Agentish constraint: An LLM node’s output may connect to at most one other LLM node. True parallel fan-out from a single LLM node to multiple LLM nodes is not supported — the topology validator will block export if this rule is violated. Worker nodes are not subject to this limit (an LLM can bind any number of Worker nodes). If you need to dispatch to multiple parallel agents, restructure so each parallel branch is reached through a dedicated dispatcher LLM that feeds into exactly one downstream LLM.

3.4 — Router (Dispatcher)

One node classifies the input and sends it to exactly one of N downstream handlers. Only one branch activates per execution. This saves tokens and time by routing work to the right specialist.

Router topology (generic) — Router with conditional edges to Malware Agent, Intrusion Agent, and Misconfig Agent

When to Use

How Router Nodes Decide

A Router Node is an LLM-powered decision maker. It doesn’t use hard-coded rules (like “if alert contains ‘malware’ → malware agent”). Instead, it uses the LLM with structured output to make nuanced decisions:

```
# What the Router Node does internally:
1. Reads the conversation history from the shared state
2. Receives a system prompt describing routing criteria
3. Sees a list of available targets:
     - llm_4_node: "Malware Agent" (LLMNode)
     - llm_5_node: "Intrusion Agent" (LLMNode)
     - llm_6_node: "Misconfig Agent" (LLMNode)
4. Uses structured output to return a decision:
     {
       "next_node": "llm_4_node",
       "reason": "Alert mentions suspicious binary download and PE file
                  execution, indicating malware"
     }
5. The system validates the choice against available targets
6. Execution continues at the chosen node
```

The decision should be validated — if the LLM returns a target that doesn’t exist, the system should fall back to a safe default. The reason field is stored in the shared state for debugging and audit trails.
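A minimal sketch of that validation step, assuming the node IDs from the listing above (the `validate_decision` helper and the choice of default target are illustrative):

```python
# Validate the router LLM's structured decision against the known
# targets; fall back to a safe default instead of crashing when the
# model hallucinates a node that doesn't exist.

TARGETS = {"llm_4_node", "llm_5_node", "llm_6_node"}
DEFAULT_TARGET = "llm_6_node"  # assumed safe default for this sketch

def validate_decision(decision: dict) -> dict:
    next_node = decision.get("next_node")
    if next_node not in TARGETS:
        # Non-existent target returned by the LLM: use the fallback.
        return {
            "next_node": DEFAULT_TARGET,
            "reason": f"invalid target {next_node!r}; routed to default",
        }
    # The reason field is kept for debugging and audit trails.
    return decision

ok = validate_decision({"next_node": "llm_4_node", "reason": "malware indicators"})
bad = validate_decision({"next_node": "llm_99_node", "reason": "hallucinated"})
```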

Running Example: Alert Category Router
Alert Category Router example — Entry → Classifier → Router → Malware/Intrusion/Misconfig Handlers → Responder

The Router reads the Classifier’s output and decides: “This alert is about a suspicious binary → route to Malware Handler.” Only the Malware Handler runs. The other two are skipped.

A Router Node is connected to downstream nodes using conditional edges. Each conditional edge represents one possible routing target. At runtime, the router’s LLM picks one value, and only that branch executes.

3.5 — Hierarchy (Manager-Worker)

A coordinator agent delegates subtasks to worker agents, who perform the work and return results to the coordinator. The coordinator then synthesizes the results. Think of it as a manager who assigns tasks to their team.

Hierarchy topology — Coordinator delegates to Recon, Exploit, Cleanup workers; results flow back to Coordinator

Router vs. Hierarchy

These are easy to confuse because both have a “central node” connected to multiple downstream nodes. The critical difference:

| Aspect | Router | Hierarchy |
| --- | --- | --- |
| How many run? | Exactly one branch | Multiple workers (potentially all) |
| Results flow? | Forward to the next stage | Back to the coordinator |
| Central node role | Classifier / dispatcher | Task decomposer / synthesizer |
| Analogy | A receptionist directing you to the right department | A project manager assigning tasks to team members |

When to Use

Example: CTF Challenge Coordinator

A CTF coordinator agent receives a binary challenge and delegates subtasks (e.g., recon, exploitation, cleanup) to its workers.

Each worker reports back to the coordinator with a structured result ({"result": "...", "success": true/false}). The coordinator decides what to do next based on the results.
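The delegate-and-synthesize loop can be sketched with workers as plain callables returning the structured `{"result", "success"}` payload described above (worker bodies and the `coordinator` function are stubs for illustration):

```python
# Workers are callable sub-agents: they do their task and return a
# structured result to the caller instead of routing forward in a graph.

def recon_worker(task: str) -> dict:
    return {"result": f"recon complete for {task}", "success": True}

def exploit_worker(task: str) -> dict:
    # Stub: this attempt "fails" so the coordinator has a decision to make.
    return {"result": f"exploit attempt on {task} segfaulted", "success": False}

def coordinator(challenge: str) -> dict:
    # Decompose, delegate, collect structured results, then synthesize.
    recon = recon_worker(challenge)
    exploit = exploit_worker(challenge)
    next_step = "cleanup" if exploit["success"] else "retry exploit"
    return {"recon": recon, "exploit": exploit, "next_step": next_step}

report = coordinator("crackme.bin")
```

Unlike a router, both workers ran, and their results flowed back to the coordinator rather than forward to a next stage.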

Workers vs. Regular Agents

Many frameworks distinguish between regular agents (which route forward in the graph) and workers (which return results to their caller). The key differences:

| Aspect | Regular Agent | Worker |
| --- | --- | --- |
| Role in graph | A full graph node (part of the execution flow) | A callable sub-agent (invoked as a tool by another agent) |
| Routing | Routes to the next node in the graph | Returns its result to the calling agent |
| State access | Reads from and writes to shared state | Typically does NOT update shared state directly |
| Output format | Flexible (text, structured output) | Structured result returned to the caller |

3.6 — Cyclic (Looping)

The graph has back-edges — an agent’s output feeds back to a previous agent (or itself). This is where iteration and refinement live. The system tries something, evaluates the result, and tries again if needed.

Cyclic topology — Analyze → Attempt → Evaluate, with loop back to Analyze or exit to END

When to Use

⚠️ Cyclic topologies can run forever. Unlike all other topologies, a cyclic graph can execute indefinitely, so you must include termination logic. We cover termination in depth in Chapter 4 and Chapter 6.

Loop history management: When an agent is re-entered via a loop, you need to decide how its conversation history is handled. A feedback variable (e.g., the evaluator’s reasoning for looping back) can inject new context on each re-entry, guiding the agent toward a different approach.
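The loop-with-feedback pattern can be sketched as a bounded Analyze → Attempt → Evaluate cycle (all agents are stubs here, and `run_loop` with its `max_iterations` cap is an illustrative termination mechanism, not a framework API):

```python
# A cyclic graph needs two things: a feedback variable injected on each
# re-entry, and a hard iteration cap so the loop cannot run forever.

def run_loop(max_iterations: int = 5) -> dict:
    feedback = ""  # evaluator's reasoning, carried into the next iteration
    plan = ""
    for i in range(1, max_iterations + 1):
        # Analyze: the feedback variable steers this iteration's approach.
        plan = f"attempt {i}" + (f" (adjusting for: {feedback})" if feedback else "")
        # Attempt + Evaluate, stubbed: "succeeds" on the second try.
        solved = i >= 2
        if solved:
            return {"solved": True, "iterations": i, "plan": plan}
        feedback = "offset was wrong"  # loop back to Analyze with context
    # Termination guarantee: give up after max_iterations.
    return {"solved": False, "iterations": max_iterations, "plan": plan}

outcome = run_loop()
```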
Example: Iterative Exploit Development
```
# Iteration 1:
Analyze:  "Buffer overflow in check_password. Try 64-byte payload."
Attempt:  "Segfault at 0x41414141. Overflow confirmed, offset wrong."
Evaluate: "Not solved. Offset was wrong. Need to adjust."
→ Loop back to Analyze

# Iteration 2:
Analyze:  "Offset was 64, but EIP at 0x41414141 means we overshot.
           Try 72 bytes before the return address."
Attempt:  "Got control of EIP! Redirecting to decrypt_flag..."
Evaluate: "Flag captured: FLAG{s3cur1ty_ftw}. Done!"
→ Exit loop
```

Combining Topologies

Real-world workflows are rarely a single pure topology. They combine patterns. A pipeline might have a router at one stage. A hierarchy might have cycles inside each worker. The six canonical topologies are building blocks you compose together.

Running Example: Full Security Triage (Pipeline + Router)
Combined topology — Entry → Classifier → Router → Malware/Intrusion/Misconfig Handlers → Responder

This combines: Pipeline (Entry → Classifier → Router → … → Responder) and Router (Router dispatches to one of three handlers).

Implementation Summary

| Topology | Implementation | Edge Type |
| --- | --- | --- |
| Single Agent | Entry point → one agent | Direct edge |
| Pipeline | Agents in sequence | Direct edge |
| Fan-out / Fan-in | One LLM → one downstream LLM per branch → a merge node (each branch is a separate LLM-to-LLM chain; an LLM may not connect directly to more than one other LLM) | Direct edge |
| Router | Router node → multiple targets | Conditional edge |
| Hierarchy | Coordinator agent + worker sub-agents | Workers are callable functions, not edges |
| Cyclic | Edges that loop back; controlled by iteration limits or a router | Direct (back-edge) or conditional edge |

How to Choose a Topology

Decision guide: Ask these questions in order:
  1. Can one agent handle it? → Use Single Agent.
  2. Does it have sequential phases? → Use Pipeline.
  3. Do different inputs need different handlers? → Add a Router.
  4. Can parts run independently in parallel? → Add Fan-out/Fan-in.
  5. Does a coordinator need to delegate subtasks? → Use Hierarchy.
  6. Does it need trial-and-error? → Add cycles to any of the above.

Start with the simplest topology that could work, then add complexity only when you hit a limitation. A single agent is simpler than a pipeline. A pipeline is simpler than a router. Don’t over-engineer.

Chapter Summary

Key Takeaways: