← Back to Agentish Framework Guide

Chapter 9: Tools

Giving your agents the ability to act

Without tools, your agents can only generate text. Tools let them interact with the challenge environment — read files, analyze binaries, query databases, and take actions. In the iCTF, tools are the MCP endpoints provided by each challenge.

Deep dive: For the theory of tool use, the ReAct loop, and tool assignment strategies, see Agentic Workflow — Chapter 2: The Agent.

The Function Catalog

The Function Catalog is the sidebar panel in the editor that lists all available tools. These tools come from the challenge’s challengish.yml file, which defines the MCP servers and their endpoints.

Each tool in the catalog shows its name, description, and parameters. Read these carefully — understanding what each tool does is critical for designing effective workflows.
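For orientation, a challenge's challengish.yml might define its MCP servers roughly like the sketch below. The field names and structure here are illustrative assumptions, not the actual schema — always check the real challenge file:

```yaml
# Hypothetical sketch — field names are assumptions, not the real schema.
mcp_servers:
  binary:
    url: http://mcp_binary:8002
    tools:
      - name: list_functions
        description: List function names found in the binary.
      - name: disassemble
        description: Disassemble a named function.
        parameters:
          function: string
```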

MCP Tools

MCP (Model Context Protocol) tools are HTTP endpoints that your agent calls at runtime. The challenge provides them. You don’t implement them — you just use them.

At execution time, when your agent’s LLM decides to call a tool, the compiled code makes an HTTP request to the MCP server. The response is returned to the LLM as a tool result, and the LLM continues reasoning.

How a Tool Call Works at Runtime
```
# 1. LLM decides to call a tool:
LLM: "I should list the functions in the binary."
  → tool_call: list_functions()

# 2. Compiled code sends HTTP request:
GET http://mcp_binary:8002/mcp/list_functions

# 3. MCP server responds:
{ "success": true, "functions": ["main", "decrypt_flag", "check_pw"] }

# 4. Response is given back to the LLM:
ToolMessage: '{"success": true, "functions": ["main", "decrypt_flag", "check_pw"]}'

# 5. LLM continues reasoning with this new information.
```
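The HTTP round-trip in steps 2–4 can be sketched in Python. This is a minimal illustration of the pattern, not the framework's actual implementation; the helper name `call_mcp_tool` is hypothetical, and the host comes from the example above:

```python
import json
import urllib.request

def call_mcp_tool(base_url: str, tool_name: str, timeout: float = 10.0) -> dict:
    """Call an MCP tool endpoint and return the parsed JSON result.

    The returned dict is what gets serialized into the ToolMessage
    that the LLM sees on its next reasoning step.
    """
    url = f"{base_url}/mcp/{tool_name}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except Exception as e:
        # Errors are returned as data, not raised: the LLM can observe
        # the failure in the tool result and try a different approach.
        return {"success": False, "error": str(e)}

# Matches step 2 above (requires the MCP server to be running):
# result = call_mcp_tool("http://mcp_binary:8002", "list_functions")
```

Returning errors as data rather than raising matters: a raised exception would crash the loop, while a `{"success": false}` result lets the LLM recover.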

Custom Tools

In addition to MCP tools from the challenge, you can define custom tools directly in the Agentish editor. Custom tools are Python functions that you write. They are included in the compiled code alongside MCP tool bindings.

Use the Tool Editor (accessible from the toolbar) to create custom tools. Each custom tool needs a name, type-annotated parameters, a docstring describing what it does and its arguments, and a dict return value with a success flag:

Custom Tool Template
```python
def tool_implementation(a: int, b: int, state: dict = None) -> dict:
    """
    Add two numbers together.

    Args:
        a: First number
        b: Second number
        state: Agent state (optional, read-only access)

    Returns:
        dict with result and success flag
    """
    try:
        result = a + b
        return {"result": result, "success": True}
    except Exception as e:
        return {"result": None, "success": False, "error": str(e)}
```

Assigning Tools to Nodes

Tools are assigned per-node in the inspector under Selected tools. You drag tools from the Function Catalog into the node’s tool list, or select them from the available options.

Keep it focused: Assign only 2–5 tools per node. Research shows that LLM accuracy on tool selection drops significantly with more than 5 tools. If you need many tools, split the work across multiple nodes or use Worker nodes.

Tool Iteration Limits

When an LLM Node has tools, it enters a ReAct loop: reason, call a tool, observe the result, repeat. The max_tool_iterations field controls how many times the loop can repeat.

| Setting | Default | Guidance |
| --- | --- | --- |
| max_tool_iterations | 30 | Lower for simple tasks (5–10); higher for complex investigations (20–50). Never set it to 0. |
| iteration_warning_message | "You are close to the tool iteration limit. Wrap up soon without more tool calls." | Injected as a system message when the LLM reaches ~80% of the limit. Customize it to be specific to the task. |

When the limit is hit, the LLM is forced to produce a final response without further tool calls. See Chapter 11: Troubleshooting for what to do when your agent hits the limit before completing its task.
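The mechanics above can be sketched as a small loop. This is a simplified model, not the compiled code: `llm_step` is a hypothetical stand-in for one LLM turn, and the message shapes are assumptions:

```python
def run_react_loop(llm_step, max_tool_iterations: int = 30,
                   warning: str = ("You are close to the tool iteration limit. "
                                   "Wrap up soon without more tool calls.")):
    """Drive a ReAct loop: reason → tool call → observe, up to the cap.

    llm_step(messages, tools_allowed) models one LLM turn and returns
    either ("tool_call", observation) or ("final", answer).
    """
    messages = []
    warn_at = int(max_tool_iterations * 0.8)  # inject warning at ~80% of the limit
    for i in range(max_tool_iterations):
        if i == warn_at:
            messages.append(("system", warning))
        kind, payload = llm_step(messages, tools_allowed=True)
        if kind == "final":
            return payload
        messages.append(("tool", payload))  # observation fed back to the LLM
    # Limit hit: force a final response with no further tool calls.
    _, payload = llm_step(messages, tools_allowed=False)
    return payload
```

The key behaviors from the table show up directly: the warning is appended as a system message near 80% of the cap, and once the cap is exhausted the model is called one last time with tools disabled.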

Chapter Summary

Key Takeaways:
- Tools let agents act: MCP tools are HTTP endpoints provided by the challenge, while custom tools are Python functions you write in the Tool Editor.
- Assign tools per node in the inspector, and keep each node focused on 2–5 tools.
- max_tool_iterations caps the ReAct loop; a warning message is injected near ~80% of the limit, and hitting the limit forces a final response without further tool calls.
← Chapter 8: State Chapter 10: Logging →