Chapter 9: Tools
Giving your agents the ability to act
Without tools, your agents can only generate text. Tools let them interact with the challenge environment — read files, analyze binaries, query databases, and take actions. In the iCTF, tools are the MCP endpoints provided by each challenge.
The Function Catalog
The Function Catalog is the sidebar panel in the editor that lists all available tools. These tools come from the challenge’s challengish.yml file, which defines the MCP servers and their endpoints.
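As a rough illustration, a challengish.yml might declare an MCP server and its endpoints like this. This is a hypothetical sketch: the field names are assumptions based on the description above, not the actual schema; only the server name and port are taken from the walkthrough later in this chapter.

```yaml
# Hypothetical sketch of a challengish.yml — field names are assumptions,
# not the real schema. Server name/port follow the example in this chapter.
mcp_servers:
  - name: mcp_binary
    url: http://mcp_binary:8002
    tools:
      - name: list_functions
        description: List the functions found in the target binary.
      - name: decompile_function
        description: Return pseudocode for a named function.
```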
Each tool in the catalog shows its name, description, and parameters. Read these carefully — understanding what each tool does is critical for designing effective workflows.
MCP Tools
MCP (Model Context Protocol) tools are HTTP endpoints that your agent calls at runtime. The challenge provides them. You don’t implement them — you just use them.
At execution time, when your agent’s LLM decides to call a tool, the compiled code makes an HTTP request to the MCP server. The response is returned to the LLM as a tool result, and the LLM continues reasoning.
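At the HTTP level, the tool call is an ordinary GET request that returns JSON. A minimal sketch of that leg, assuming a server that answers `GET <base_url>/mcp/<tool>` with a JSON body (the path mirrors the walkthrough that follows; real compiled code will differ):

```python
import json
import urllib.request

def call_mcp_tool(base_url: str, tool_name: str) -> dict:
    """Send a GET request to an MCP endpoint and parse the JSON result.

    The /mcp/<tool> path is an assumption based on the example request
    shown in this chapter; the actual compiled bindings may differ.
    """
    url = f"{base_url}/mcp/{tool_name}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The parsed dict is what gets serialized back to the LLM as the tool result.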
# 1. LLM decides to call a tool:
LLM: "I should list the functions in the binary."
→ tool_call: list_functions()
# 2. Compiled code sends HTTP request:
GET http://mcp_binary:8002/mcp/list_functions
# 3. MCP server responds:
{ "success": true, "functions": ["main", "decrypt_flag", "check_pw"] }
# 4. Response is given back to the LLM:
ToolMessage: '{"success": true, "functions": ["main", "decrypt_flag", "check_pw"]}'
# 5. LLM continues reasoning with this new information.

Custom Tools
In addition to MCP tools from the challenge, you can define custom tools directly in the Agentish editor. Custom tools are Python functions that you write. They are included in the compiled code alongside MCP tool bindings.
Use the Tool Editor (accessible from the toolbar) to create custom tools. Each custom tool needs:
- Name — A valid Python function name.
- Description — What the tool does (shown to the LLM).
- Implementation — Python code that takes arguments and returns a result dict.
def tool_implementation(a: int, b: int, state: dict = None) -> dict:
    """
    Add two numbers together.

    Args:
        a: First number
        b: Second number
        state: Agent state (optional, read-only access)

    Returns:
        dict with result and success flag
    """
    try:
        result = a + b
        return {"result": result, "success": True}
    except Exception as e:
        return {"result": None, "success": False, "error": str(e)}

Assigning Tools to Nodes
Tools are assigned per-node in the inspector under Selected tools. You drag tools from the Function Catalog into the node’s tool list, or select them from the available options.
Tool Iteration Limits
When an LLM Node has tools, it enters a ReAct loop: reason, call a tool, observe the result, repeat. The max_tool_iterations field controls how many times the loop can repeat.
| Setting | Default | Guidance |
|---|---|---|
| max_tool_iterations | 30 | Lower for simple tasks (5–10). Higher for complex investigations (20–50). Never set to 0. |
| iteration_warning_message | “You are close to the tool iteration limit. Wrap up soon without more tool calls.” | Injected as a system message when the LLM reaches ~80% of the limit. Customize to be specific to the task. |
When the limit is hit, the LLM is forced to produce a final response without further tool calls. See Chapter 11: Troubleshooting for what to do when your agent hits the limit before completing its task.
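The cap and the ~80% warning can be pictured with a small simulation. This is an illustrative sketch, not Agentish’s actual implementation: `llm_step` is a hypothetical stand-in for one LLM turn that returns either a tool call or a final answer.

```python
def run_tool_loop(llm_step, max_tool_iterations: int = 30,
                  iteration_warning_message: str = "Wrap up soon.") -> list:
    """Illustrative sketch of the ReAct iteration cap (not the real code).

    `llm_step(i)` is a hypothetical callable returning either
    ("tool", payload) for another tool call or ("final", text) to stop.
    """
    transcript = []
    warn_at = int(max_tool_iterations * 0.8)  # warning at ~80% of the limit
    for i in range(max_tool_iterations):
        if i == warn_at:
            # Injected as a system message near the limit.
            transcript.append(("system", iteration_warning_message))
        kind, payload = llm_step(i)
        transcript.append((kind, payload))
        if kind == "final":
            return transcript
    # Limit reached: force a final response with no further tool calls.
    transcript.append(("final", "forced final response"))
    return transcript
```

An agent that never stops calling tools still terminates: it gets the warning near 80% of the budget, and a forced final answer once the budget is spent.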
Chapter Summary
- Tools let agents interact with the challenge environment via HTTP (MCP endpoints).
- The Function Catalog shows all available tools from the challenge.
- Assign 2–5 tools per node for best results.
- max_tool_iterations prevents runaway tool loops.
- Custom tools can be defined in the Tool Editor for additional functionality.