
Chapter 3: The LLM Node

The core building block of every workflow

The LLM Node is an AI agent powered by a Large Language Model. It reads state, reasons about it, optionally calls tools, and writes its output back to state. Most of your workflow design time will be spent configuring LLM Nodes.

Deep dive: For the theory behind agent configuration (model selection, reasoning strategies, prompt engineering principles), see Agentic Workflow — Chapter 2: The Agent.

Inspector Fields

Click on an LLM Node to see its configuration in the right panel:

| Field | Purpose | Tips |
| --- | --- | --- |
| Node title | Display name on the canvas and in compiled code. | Each node must have a unique title. |
| Input State | Which state variables to include in the LLM's context. | Only check the variables this node actually needs. Less context = better focus. |
| Global state to update | Which state variables this node writes to after execution. | The LLM's structured output fields map to these variables. |
| System prompt | The agent's identity, role, and behavioral rules. Stays constant. | Include: who the agent is, what to focus on, constraints, output format. |
| Human prompt | The task input. Supports `{variable}` template syntax to inject state values. | Use template variables to pass dynamic data from upstream nodes. |
| Output Schema | Defines structured output fields (name, type, description). Forces the LLM to return data in a predictable format. | Maps to "Global state to update." |
| Selected tools | Which tools from the Function Catalog this node can call. | Keep to 2–5 tools. More tools = more confusion for the LLM. |
| Max tool iterations | Safety limit on tool call loops. Visible only when tools are selected. Default is 30. | Lower for simple tasks, higher for complex investigations. |
| Iteration warning message | Warning injected when approaching the tool limit. Tells the LLM to wrap up. | Fires at ~80% of max iterations. |
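To make the field list above concrete, here is a minimal sketch of what an LLM Node's configuration might look like as a Python dataclass. The class and method names are illustrative assumptions, not the framework's actual API; only the field names and the 30/~80% defaults come from the documentation.

```python
from dataclasses import dataclass, field

@dataclass
class LLMNodeConfig:
    """Hypothetical mirror of the inspector fields (names are illustrative)."""
    title: str
    input_state_keys: list = field(default_factory=list)
    output_state_keys: list = field(default_factory=list)
    system_prompt: str = ""
    human_prompt: str = ""
    selected_tools: list = field(default_factory=list)
    max_tool_iterations: int = 30          # default per the docs
    iteration_warning_message: str = ""

    def warning_iteration(self) -> int:
        # The warning message fires at ~80% of the iteration limit.
        return int(self.max_tool_iterations * 0.8)

cfg = LLMNodeConfig(title="Analyzer", max_tool_iterations=15)
print(cfg.warning_iteration())  # 12
```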

Output Connection Rule

Rule: An LLM Node’s output may connect to at most one other LLM node. Connecting to multiple LLM nodes simultaneously (fan-out) is not allowed and will be blocked both in the editor and at export time. Worker nodes are not subject to this limit — an LLM node may bind any number of Worker nodes.
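The editor and exporter enforce this rule automatically, but the check itself is simple. Here is a sketch of what the fan-out validation might look like, assuming a hypothetical edge-list representation (the function and tuple shapes are illustrative, not the framework's real internals):

```python
from collections import Counter

def find_fanout_violations(edges):
    """edges: list of (src_id, dst_id, dst_type) flow edges.
    Returns src ids of nodes that connect to more than one LLM node.
    Worker-node bindings are not counted, per the rule."""
    llm_targets = Counter(src for src, dst, dst_type in edges
                          if dst_type == "LLMNode")
    return [src for src, n in llm_targets.items() if n > 1]

edges = [("1", "2", "LLMNode"),
         ("1", "3", "LLMNode"),    # second LLM target -> violation
         ("1", "w1", "WorkerNode")]  # worker binding is fine
print(find_fanout_violations(edges))  # ['1']
```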

The Terminal Node Rule

Rule: At least one LLM Node in your workflow must have no outgoing flow edge. This is the terminal node — it compiles to goto = END, which tells the graph “we’re done.”

If every LLM Node connects to another node, execution can never finish. The topology validator will block export with the error:

"No terminal LLM node found. At least one LLM node must have no outgoing flow edge so the graph can reach END."

How to fix: Make sure your final LLM Node (the one that produces the workflow’s end result) has no outgoing edges. Its output slot should be unconnected.
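The terminal-node check can be sketched in a few lines. This is an illustrative reimplementation of what the topology validator presumably does, assuming a simple dict/edge-list graph representation (not the framework's actual code):

```python
def find_terminal_llm_nodes(nodes, edges):
    """nodes: {node_id: node_type}; edges: list of (src, dst) flow edges.
    A terminal LLM node has no outgoing flow edge; it compiles to goto = END."""
    sources = {src for src, _ in edges}
    return [nid for nid, ntype in nodes.items()
            if ntype == "LLMNode" and nid not in sources]

nodes = {"1": "LLMNode", "2": "LLMNode"}
edges = [("1", "2")]                 # node 2 has no outgoing edge
print(find_terminal_llm_nodes(nodes, edges))  # ['2']

# An empty result would trigger the export error quoted above.
```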

Writing Good System Prompts

The system prompt is the highest-leverage configuration. A well-written prompt can make a simple workflow outperform a complex one with poor prompts.

The Four Components

| Component | What It Does | Example |
| --- | --- | --- |
| Identity | Establishes the agent's role and expertise. | "You are a senior binary reverse engineer." |
| Focus | Defines what to analyze or produce. | "Your task is to find the decryption key in the binary." |
| Constraints | Sets boundaries on behavior. | "Never guess. If unsure, say so explicitly." |
| Output format | Specifies the structure of the response. | "Return: 1) key found, 2) location in binary, 3) confidence." |
Tip: Write your prompts in an external text editor (e.g., notepad.link) where you have more space, then paste them into the inspector. The small textarea in the inspector is not ideal for writing long prompts.

Template Variables

Both the system prompt and human prompt support {variable_name} syntax. At runtime, these are replaced with the current value of that state variable.
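The substitution itself is straightforward string templating. A minimal sketch of how the runtime might render a prompt (the function name is an assumption; the real framework may use a different mechanism, e.g. `str.format`):

```python
import re

def render_prompt(template: str, state: dict) -> str:
    """Replace each {variable_name} with its current state value.
    Unknown variables are left untouched rather than raising."""
    def substitute(match):
        key = match.group(1)
        return str(state.get(key, match.group(0)))
    return re.sub(r"\{(\w+)\}", substitute, template)

state = {"discovered_functions": "main, decrypt_flag, check_pw"}
print(render_prompt("Analyze these functions: {discovered_functions}", state))
# Analyze these functions: main, decrypt_flag, check_pw
```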

Example

```
Human Prompt:
"Analyze the following binary functions: {discovered_functions}
Previous analysis: {analysis_result}
Find the flag."

# At runtime, if discovered_functions = "main, decrypt_flag, check_pw"
# the LLM receives:
# "Analyze the following binary functions: main, decrypt_flag, check_pw"
```

Structured Output Schema

The Output Schema table lets you define fields that the LLM must return in a structured format. Each field has a name, type, and description.

When the LLM finishes reasoning, its structured output values are written to the corresponding state variables (as selected in “Global state to update”). This is the primary mechanism for passing data between nodes.

Example: Output Schema for an Analyzer

```
# Output Schema fields:
Name: analysis_result
Type: str
Description: "The analysis findings"

Name: confidence
Type: float
Description: "Confidence score 0.0-1.0"

# These values get written to the global state after the node runs.
# The next node can read them via input_state_keys or template variables.
```
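Conceptually, the state update is a filtered merge: only the structured-output fields selected in "Global state to update" are written back. A sketch under that assumption (the function is illustrative, and the real merge semantics may differ):

```python
def apply_structured_output(state: dict, output: dict,
                            output_state_keys: list) -> dict:
    """Write the LLM's structured output into global state.
    Only keys listed in output_state_keys are written."""
    updates = {k: v for k, v in output.items() if k in output_state_keys}
    return {**state, **updates}

state = {"discovered_functions": "main, decrypt_flag"}
output = {"analysis_result": "decrypt_flag XORs the buffer with a fixed key",
          "confidence": 0.9}
new_state = apply_structured_output(state, output,
                                    ["analysis_result", "confidence"])
# new_state now carries both fields for downstream nodes to read.
```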

Loop Mode (Advanced)

When an LLM Node is the target of a loop (i.e., it has more than one incoming edge — one from upstream and one from a Router back-edge), you must configure loop mode to control how conversation history is managed across iterations.

This field only appears in the inspector when the node has multiple incoming edges. See Chapter 7: Loops for the full explanation.

ASL Representation

ASL — Complete LLM Node

```json
{
  "id": "2",
  "type": "LLMNode",
  "label": "Analyzer",
  "config": {
    "title": "Analyzer",
    "input_state_keys": ["discovered_functions"],
    "output_state_keys": ["analysis_result"],
    "system_prompt": "You are a binary reverse engineer...",
    "human_prompt": "Analyze these functions: {discovered_functions}",
    "structured_output_schema": [
      {
        "name": "analysis_result",
        "type": "str",
        "description": "The analysis findings"
      }
    ],
    "selected_tools": ["decompile_function", "list_functions"],
    "max_tool_iterations": 15,
    "iteration_warning_message": "Wrap up your analysis."
  }
}
```
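Because ASL is plain JSON, a node definition can be loaded and sanity-checked with the standard library. The checks below are hypothetical, simply restating rules from this chapter; they are not the framework's actual validator:

```python
import json

asl_node = json.loads("""
{
  "id": "2",
  "type": "LLMNode",
  "config": {
    "title": "Analyzer",
    "selected_tools": ["decompile_function"],
    "max_tool_iterations": 15
  }
}
""")

cfg = asl_node["config"]
assert asl_node["type"] == "LLMNode"
assert cfg["title"], "each node needs a unique, non-empty title"
if cfg.get("selected_tools"):
    # max_tool_iterations defaults to 30 and must be positive
    assert cfg.get("max_tool_iterations", 30) > 0
```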

Chapter Summary

Key Takeaways:

- An LLM Node reads selected state variables, reasons (optionally calling tools), and writes structured output back to state.
- Keep context tight: check only the input state a node actually needs, and limit it to 2–5 tools.
- An LLM Node's output may connect to at most one other LLM node; Worker node bindings are unlimited.
- At least one LLM Node must have no outgoing flow edge so the graph can reach END.
- The system prompt is the highest-leverage setting: identity, focus, constraints, and output format.
- The Output Schema plus "Global state to update" is the primary mechanism for passing data between nodes.
← Chapter 2: The Entry Point Chapter 4: The Router Node →