
Chapter 5 Information Flow

What data moves through the graph, and how

Chapter 4 covered where execution goes. This chapter covers what it carries. In a multi-agent system, agents need to share data: the output of one agent becomes the input of the next. How does that work?

5.1 — The Single State Philosophy

Many multi-agent frameworks give each agent its own private state. When agents need to share data, you write explicit “handoff” logic to copy fields between states. This gets messy fast — state synchronization bugs are a top source of workflow failures.

The recommended approach is one shared state, accessible by all nodes. Every node reads from and writes to the same state dictionary. No copying, no synchronization, no handoff logic.

```python
# Every node receives the same shared state:
def investigator(state):
    # Read shared data
    alert_text = state.get("alert_text", "")
    classification = state.get("classification", "")

    # ... perform investigation ...

    # Write results back to the same shared state
    state["investigation_result"] = "Found suspicious IP 10.0.0.5..."
```

5.2 — The Shared State: A Blackboard Architecture

The shared state is a typed dictionary — each field has a name and a type. You define the fields when you design your workflow. At runtime, every node reads from and writes to this same dictionary.

```
# Example shared state schema:

# ── User-defined fields ──
alert_text: str                          # The input alert
classification: str                      # Classifier output
severity: str                            # Severity level

# ── Framework-managed fields ──
count: int (additive)                    # Execution counter
messages: List[Message] (append-only)    # Chat history
per_node_messages: List[Message]         # Per-agent conversation history
routing_reason: str                      # Why the router chose this path
```
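In code, a schema like this can be declared as a typed dictionary. The following is a minimal sketch using only the standard library (`TypedDict` plus `Annotated` reducer metadata, the convention LangGraph-style frameworks use); the field names mirror the schema above, while the class name `WorkflowState` and the simplification of message objects to plain strings are illustrative choices, not the framework's actual types:

```python
import operator
from typing import Annotated, List, TypedDict


class WorkflowState(TypedDict, total=False):
    # ── User-defined fields ──
    alert_text: str          # The input alert
    classification: str      # Classifier output
    severity: str            # Severity level
    # ── Framework-managed fields ──
    count: Annotated[int, operator.add]           # Additive reducer: updates accumulate
    messages: Annotated[List[str], operator.add]  # Append reducer (Message simplified to str)
    routing_reason: str      # Why the router chose this path
```

The `Annotated[..., operator.add]` metadata is how a field carries its reducer; fields without it default to last-write-wins, as described in the next section.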

There are three categories of fields:

| Category | Who Creates Them | Examples |
| --- | --- | --- |
| User-defined | You, when designing the workflow | alert_text, classification, severity |
| Framework-managed per-node | The framework, based on nodes in the graph | Per-node message history, LLM call counts, tool iteration counters |
| Router tracking | The framework, if routers exist | routing_reason |

Data Type Reference

When defining state variables, each type has specific storage and merge behavior:

| Type | Python Type | Merge Semantics | When to Use |
| --- | --- | --- | --- |
| str | str | Last-write-wins. Later writes replace earlier ones. | Analysis results, reports, reasoning text, flags. Most common type. |
| int | int | Last-write-wins for user-defined fields. The built-in count uses Annotated[int, operator.add] (additive reducer). | Counters, scores, numeric identifiers. |
| float | float | Last-write-wins. | Confidence scores, probabilities, thresholds. |
| Annotated[List[BaseMessage], ...] | List[BaseMessage] | Append reducer. New items are concatenated onto the existing list. Items are never removed. | The built-in messages field. Also used for per-node message tracking (node_X_messages). You typically do not create new fields with this type. |
⚠️ Last-write-wins pitfall: If two nodes write to the same str variable, only the last writer’s value survives. Design your state schema so each variable has exactly one writer, or use separate variables.

5.3 — Reducers: How Updates Merge

Here’s a subtle but critical concept. When a node returns Command(update={"count": 1}), what does that 1 mean? Is the new count 1, or is it old count + 1?

The answer depends on the field’s reducer. A reducer is a function that defines how new values merge with existing values.
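To make the mechanics concrete, here is a hypothetical sketch of the merge step (an illustration of the concept, not the framework's actual implementation): each field either has a registered reducer that combines old and new values, or defaults to last-write-wins:

```python
import operator


def merge(state, update, reducers):
    """Merge an update into state: apply the field's reducer if one is
    registered (and an old value exists), otherwise last write wins."""
    out = dict(state)
    for key, new_value in update.items():
        if key in reducers and key in out:
            out[key] = reducers[key](out[key], new_value)
        else:
            out[key] = new_value
    return out


reducers = {"count": operator.add, "messages": operator.add}
state = {"classification": "malware", "count": 1, "messages": ["Hello"]}
state = merge(state, {"classification": "phishing", "count": 1, "messages": ["World"]}, reducers)
# → {"classification": "phishing", "count": 2, "messages": ["Hello", "World"]}
```

Note how the same update dictionary means different things per field: `classification` was replaced, while `count` and `messages` were combined with the old values.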

Fields Without Reducers: Last Write Wins

```python
# No reducer — last write wins:
classification: str

# Agent A writes: classification = "malware"
# Agent B writes: classification = "phishing"
# Final value: "phishing" (Agent B's value overwrites Agent A's)
```

Fields With Additive Reducer: Accumulate

```
# With additive reducer — values accumulate:
count: int (additive)

# Starting value: count = 0
# Agent A writes: count += 1  →  system computes: 0 + 1 = 1
# Agent B writes: count += 1  →  system computes: 1 + 1 = 2
# Final value: 2
```

Fields With List Reducer: Append

```
# With list concatenation reducer — items append:
messages: List[Message] (append-only)

# Starting value: messages = []
# Agent A writes: messages += [AIMessage("Hello")]
#   → system computes: [] + [AIMessage("Hello")]
#                    = [AIMessage("Hello")]
# Agent B writes: messages += [AIMessage("World")]
#   → system computes: [AIMessage("Hello")] + [AIMessage("World")]
#                    = [AIMessage("Hello"), AIMessage("World")]
```
⚠️ The “counter bug”: If count uses operator.add but your node writes old_value + 1 instead of just 1, you’ll double-count. With a reducer, agents should return increments (the delta), not totals (the absolute value).
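A quick sketch of why returning totals double-counts under an additive reducer: the framework computes `new = reducer(old, returned)`, so the node's job is to supply only the delta:

```python
import operator

old_count = 3

# Correct: the node returns the increment (the delta).
new_count = operator.add(old_count, 1)                   # 3 + 1 = 4

# Buggy: the node returns old_value + 1 (an absolute total).
double_counted = operator.add(old_count, old_count + 1)  # 3 + 4 = 7, not 4
```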

Summary Table

| Field | Reducer | Update Meaning | Node Returns |
| --- | --- | --- | --- |
| classification | None (last write wins) | Replace the value | The new value: "malware" |
| count | Additive | Add to existing value | The increment: 1 |
| messages | Append (list concatenation) | Append to existing list | New items: [msg] |
| tool_iteration_count | None (last write wins) | Replace the value | The new absolute count: 5 |

5.4 — Messages: The Conversation Thread

The most important field in the shared state is messages. It’s the shared conversation history — a chronological list of all messages exchanged across all nodes.

Message Types

| Type | Who Creates It | Contains |
| --- | --- | --- |
| SystemMessage | The node’s system prompt | Instructions for the LLM (not stored in global messages) |
| HumanMessage | The user, the human prompt, or the system (e.g., iteration warning) | Input text or task description |
| AIMessage | The LLM’s response | Text response or tool call requests |
| ToolMessage | Tool execution results | Tool output data (JSON, text, etc.) |

Global vs. Per-Node Messages

There are two message lists in play:

```
# Global messages — the full conversation across all agents:
messages: List[Message] (append-only)

# Per-agent messages — this specific agent's conversation:
agent_messages: List[Message] (append-only)
```

Why both? Each agent builds its prompt from its own message history (per-agent messages), not the global one. This keeps each agent’s context focused. But the node also writes to the global messages list so downstream nodes can see what happened upstream.
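That prompt-assembly logic can be sketched in a few lines. The helper name `build_prompt` and the `(role, text)` tuple format below are illustrative choices, not the framework's API; the structure (own system prompt, own history, last global message) follows the description above:

```python
def build_prompt(state, node_key, system_prompt):
    """Assemble an agent's prompt from its own system prompt, its per-node
    message history, and the last global message (the upstream output)."""
    per_node = state.get(node_key, [])
    last_global = state["messages"][-1:] if state.get("messages") else []
    return [("system", system_prompt)] + per_node + last_global


state = {"messages": [("ai", "Classification: Intrusion, Severity: High")],
         "node_3_messages": []}
prompt = build_prompt(state, "node_3_messages", "You are the investigator.")
# → [("system", "You are the investigator."),
#    ("ai", "Classification: Intrusion, Severity: High")]
```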

Example: Message Flow in a Pipeline
```
# After Classifier (Node 2) runs:
global messages = [
    HumanMessage("New alert: suspicious login from 10.0.0.5"),
    AIMessage("Classification: Intrusion, Severity: High")
]
node_2_messages = [
    HumanMessage("New alert: suspicious login from 10.0.0.5"),
    AIMessage("Classification: Intrusion, Severity: High")
]

# Investigator (Node 3) starts. It builds its prompt from:
#   1. Its own system prompt
#   2. Its own message history (node_3_messages = [] at first)
#   3. The last global message (to see Classifier's output)

# After Investigator runs:
global messages = [
    HumanMessage("New alert: suspicious login from 10.0.0.5"),
    AIMessage("Classification: Intrusion, Severity: High"),
    AIMessage("Investigation: IP 10.0.0.5 has 47 failed logins...")
]
node_3_messages = [
    AIMessage("Investigation: IP 10.0.0.5 has 47 failed logins...")
]
```

5.5 — Template Variables

How does an agent access specific shared state fields in its prompt? Through template variables — placeholders in curly braces that get replaced with actual values at run time.

```
# In your agent's prompt template:
"Investigate this {classification} alert.
The raw alert text is: {alert_text}
Previous analysis: {investigation_result}"

# At runtime, the system renders the template against the shared state:
render_template(prompt_text, state)

# Result:
"Investigate this Intrusion alert.
The raw alert text is: Suspicious login attempt from 10.0.0.5 at 03:42 UTC
Previous analysis: {investigation_result}"
# ↑ Note: investigation_result wasn't set yet, so it stays as-is
```

Key behaviors:

- Each placeholder is replaced with the current value of the matching state field at render time.
- If the field hasn’t been written yet, the placeholder is left in place literally; rendering doesn’t fail.

Design tip: Template variables are how you create data dependencies between agents. If Agent B’s prompt references {classification}, then Agent A (the Classifier) must write to the classification field before Agent B runs. The topology (Agent A → Agent B) ensures this ordering.
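The rendering behavior described above can be sketched in a few lines: substitute each `{field}` with the state value when the field is set, and leave the placeholder untouched otherwise. The function name `render_template` comes from the example; this body is an illustrative implementation, not the framework's:

```python
import re


def render_template(template, state):
    """Replace {field} placeholders with state values; leave unset ones as-is."""
    def substitute(match):
        key = match.group(1)
        return str(state[key]) if key in state else match.group(0)
    return re.sub(r"\{(\w+)\}", substitute, template)


state = {"classification": "Intrusion"}
rendered = render_template(
    "Investigate this {classification} alert. Previous analysis: {investigation_result}",
    state,
)
# → "Investigate this Intrusion alert. Previous analysis: {investigation_result}"
```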

5.6 — Data Passing Patterns

There are three common patterns for how data flows between agents:

Pattern 1: Message Relay

Agents communicate through the global messages list. Each agent reads the previous agent’s output from the last message. Simple and implicit. Good for pipelines.
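A hypothetical two-node pipeline using this pattern: each node reads the last entry of the shared messages list and appends its own output. Plain strings stand in for message objects here, and the node bodies are stubs rather than real LLM calls:

```python
def classifier(state):
    alert = state["messages"][-1]        # read the previous output
    state["messages"].append(f"Classification of {alert!r}: Intrusion")
    return state


def investigator(state):
    prior = state["messages"][-1]        # read the classifier's output
    state["messages"].append(f"Investigating, given: {prior}")
    return state


state = {"messages": ["New alert: suspicious login from 10.0.0.5"]}
state = investigator(classifier(state))
# state["messages"] now holds three entries: alert, classification, investigation
```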

Pattern 2: Named Fields

Agents write to and read from specific named fields in the shared state (e.g., classification, severity). Explicit and type-safe. Good for structured handoffs.

Pattern 3: Template Injection

Agents read data by referencing fields in their prompts via {field_name} template variables. The data is injected into the prompt text before the LLM sees it. Good for providing context without the agent needing to “ask” for it.

Running Example: All Three Patterns Combined
```python
# Classifier writes classification to a named field:
state["classification"] = "intrusion"                     # Pattern 2: Named field
state["severity"] = "high"                                # Pattern 2: Named field
state["messages"].append(AIMessage("It's an intrusion"))  # Pattern 1: Message relay

# Investigator's prompt uses template variables:          # Pattern 3: Template injection
system_prompt = "You are investigating a {classification} alert with severity {severity}."

# All three patterns complement each other.
```

Chapter Summary

Key Takeaways:

- All nodes read and write one shared state dictionary; there are no private states to synchronize and no handoff logic to write.
- Reducers decide how updates merge: last-write-wins by default, additive for count, append-only for messages.
- Under a reducer, nodes return deltas (increments, new items), not absolute totals.
- Each agent builds its prompt from its own per-node message history, but also appends to the global messages list so downstream agents can see its output.
- Template variables like {classification} inject state values into prompts and create data dependencies between agents.
- The three data-passing patterns (message relay, named fields, template injection) complement each other and are typically combined.