How an Agent Uses Tools (Basics) β€” Python (Full Implementation)

Full runnable tool calling example with allowlist, tool execution, and an agent loop.
On this page
  1. What this example demonstrates
  2. Project structure
  3. How to run
  4. Code
  5. tools.py β€” tools that are actually executed
  6. executor.py β€” safety boundary and execution
  7. llm.py β€” model call and available tool definitions
  8. main.py β€” agent loop (model β†’ tool β†’ model)
  9. requirements.txt
  10. Output example
  11. Why this is an agent, not one model call
  12. What to change in this example
  13. Full code on GitHub

This is a full implementation of the example from the article How an Agent Uses Tools (Basics).

If you have not read the article yet, start there. This page focuses only on code.


What this example demonstrates

  • How the LLM decides when a tool should be called
  • How the system checks whether the tool is allowed (allowlist)
  • How to execute a tool call and return the result to the model
  • How the agent finishes the loop when data is sufficient

Project structure

TEXT
examples/
└── foundations/
    └── tool-calling-basics/
        └── python/
            β”œβ”€β”€ main.py           # agent loop
            β”œβ”€β”€ llm.py            # model call + tool definitions
            β”œβ”€β”€ executor.py       # allowlist check + tool execution
            β”œβ”€β”€ tools.py          # tools (business logic)
            └── requirements.txt

This split is practical in a real project: model, policy, and tool execution are not mixed in one file.


How to run

1. Clone the repository and go to the folder:

BASH
git clone https://github.com/AgentPatterns-tech/agentpatterns.git
cd agentpatterns/examples/foundations/tool-calling-basics/python

2. Install dependencies:

BASH
pip install -r requirements.txt

3. Set API key:

BASH
export OPENAI_API_KEY="sk-..."

4. Run the example:

BASH
python main.py

⚠️ If you forget to set the key, the script fails immediately with a hint about what to do.


Code

tools.py β€” tools that are actually executed

PYTHON
from typing import Any

USERS = {
    42: {"id": 42, "name": "Anna", "tier": "pro"},
    7: {"id": 7, "name": "Max", "tier": "free"},
}

BALANCES = {
    42: {"currency": "USD", "value": 128.40},
    7: {"currency": "USD", "value": 0.0},
}


def get_user_profile(user_id: int) -> dict[str, Any]:
    user = USERS.get(user_id)
    if not user:
        return {"error": f"user {user_id} not found"}
    return {"user": user}


def get_user_balance(user_id: int) -> dict[str, Any]:
    balance = BALANCES.get(user_id)
    if not balance:
        return {"error": f"balance for user {user_id} not found"}
    return {"balance": balance}

The model has no direct access to these dictionaries. It can only ask the system to call these functions.
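Because the tools are plain functions, you can exercise them without any model in the loop. A minimal sketch (the data and function below mirror tools.py, inlined so the snippet is self-contained):

```python
from typing import Any

# Mirrors the fixture data in tools.py.
USERS = {
    42: {"id": 42, "name": "Anna", "tier": "pro"},
    7: {"id": 7, "name": "Max", "tier": "free"},
}

def get_user_profile(user_id: int) -> dict[str, Any]:
    user = USERS.get(user_id)
    if not user:
        # Errors are returned as data, so the model can react to them.
        return {"error": f"user {user_id} not found"}
    return {"user": user}

print(get_user_profile(42))   # known user: returns the profile dict
print(get_user_profile(999))  # unknown user: returns an error dict, no exception
```

Note that the error path returns a dict instead of raising: the model receives the error as tool output and can decide what to do next.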


executor.py β€” safety boundary and execution

PYTHON
import json
from typing import Any

from tools import get_user_balance, get_user_profile

TOOL_REGISTRY = {
    "get_user_profile": get_user_profile,
    "get_user_balance": get_user_balance,
}

ALLOWED_TOOLS = {"get_user_profile", "get_user_balance"}


def execute_tool_call(tool_name: str, arguments_json: str) -> dict[str, Any]:
    if tool_name not in ALLOWED_TOOLS:
        return {"error": f"tool '{tool_name}' is not allowed"}

    tool = TOOL_REGISTRY.get(tool_name)
    if tool is None:
        return {"error": f"tool '{tool_name}' not found"}

    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return {"error": "invalid JSON arguments"}

    try:
        result = tool(**args)
    except TypeError as exc:
        return {"error": f"invalid arguments: {exc}"}

    return {"tool": tool_name, "result": result}

The key idea: even if the model requests something unexpected, the system executes only what is explicitly allowed.
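To see the boundary in action, feed the executor a tool name the model might hallucinate. This self-contained sketch inlines the same checks as executor.py (the tool names and the lambda stand-in are illustrative, not part of the real files):

```python
import json
from typing import Any, Callable

ALLOWED_TOOLS = {"get_user_profile", "get_user_balance"}
# For the sketch, the registered tool is a simple stand-in.
TOOL_REGISTRY: dict[str, Callable[..., Any]] = {
    "get_user_profile": lambda user_id: {"user": {"id": user_id}},
}

def execute_tool_call(tool_name: str, arguments_json: str) -> dict[str, Any]:
    # Same order of checks as executor.py: policy first, then lookup, then parsing.
    if tool_name not in ALLOWED_TOOLS:
        return {"error": f"tool '{tool_name}' is not allowed"}
    tool = TOOL_REGISTRY.get(tool_name)
    if tool is None:
        return {"error": f"tool '{tool_name}' not found"}
    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return {"error": "invalid JSON arguments"}
    try:
        return {"tool": tool_name, "result": tool(**args)}
    except TypeError as exc:
        return {"error": f"invalid arguments: {exc}"}

print(execute_tool_call("drop_database", "{}"))           # blocked by the allowlist
print(execute_tool_call("get_user_profile", "not json"))  # rejected as invalid JSON
print(execute_tool_call("get_user_balance", "{}"))        # allowed but not registered
```

Each failure mode produces a distinct error dict, so the model gets a specific reason rather than a silent drop.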


llm.py β€” model call and available tool definitions

PYTHON
import os
from openai import OpenAI

api_key = os.environ.get("OPENAI_API_KEY")

if not api_key:
    raise EnvironmentError(
        "OPENAI_API_KEY is not set.\n"
        "Run: export OPENAI_API_KEY='sk-...'"
    )

client = OpenAI(api_key=api_key)

SYSTEM_PROMPT = """
You are an AI support agent. When you need data, call the available tools.
Once you have enough information, give a short answer.
""".strip()

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_user_profile",
            "description": "Returns user profile by user_id",
            "parameters": {
                "type": "object",
                "properties": {"user_id": {"type": "integer"}},
                "required": ["user_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_user_balance",
            "description": "Returns user balance by user_id",
            "parameters": {
                "type": "object",
                "properties": {"user_id": {"type": "integer"}},
                "required": ["user_id"],
            },
        },
    },
]


def ask_model(messages: list[dict]):
    completion = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *messages],
        tools=TOOLS,
        tool_choice="auto",
    )
    return completion.choices[0].message

Tools are described here as schemas. The model sees this list and chooses from it.
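The schema tells the model what arguments to send, but nothing forces it to comply, so validating arguments before execution is a common hardening step. `validate_arguments` below is a hypothetical helper, not part of the example code; it is a minimal stdlib sketch (a dedicated library such as jsonschema would cover far more of the spec):

```python
import json
from typing import Any

# JSON Schema type names mapped to Python types (enough for this example).
# In JSON Schema, "integer" excludes booleans, so bool is checked explicitly below.
PY_TYPES = {"integer": int, "string": str, "number": (int, float), "boolean": bool}

def validate_arguments(arguments_json: str, schema: dict[str, Any]) -> list[str]:
    """Return a list of problems; an empty list means the arguments match."""
    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return ["arguments are not valid JSON"]
    problems = []
    for name in schema.get("required", []):
        if name not in args:
            problems.append(f"missing required argument '{name}'")
    for name, value in args.items():
        spec = schema.get("properties", {}).get(name)
        if spec is None:
            problems.append(f"unexpected argument '{name}'")
            continue
        expected = PY_TYPES.get(spec.get("type"))
        if expected and (isinstance(value, bool) and expected is not bool
                         or not isinstance(value, expected)):
            problems.append(f"argument '{name}' should be {spec['type']}")
    return problems

schema = {
    "type": "object",
    "properties": {"user_id": {"type": "integer"}},
    "required": ["user_id"],
}
print(validate_arguments('{"user_id": 42}', schema))    # [] — valid
print(validate_arguments('{"user_id": "42"}', schema))  # wrong type
print(validate_arguments('{}', schema))                 # missing required field
```

A natural place for such a check would be inside execute_tool_call, right after the allowlist check, returning the problems as an error dict.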


main.py β€” agent loop (model β†’ tool β†’ model)

PYTHON
import json

from executor import execute_tool_call
from llm import ask_model

MAX_STEPS = 6

TASK = "Prepare a short account summary for user_id=42: name, tier, and balance."


def to_assistant_message(message) -> dict:
    tool_calls = []
    for tc in message.tool_calls or []:
        tool_calls.append(
            {
                "id": tc.id,
                "type": "function",
                "function": {
                    "name": tc.function.name,
                    "arguments": tc.function.arguments,
                },
            }
        )
    return {
        "role": "assistant",
        "content": message.content or "",
        "tool_calls": tool_calls,
    }


def run():
    messages: list[dict] = [{"role": "user", "content": TASK}]

    for step in range(1, MAX_STEPS + 1):
        print(f"\n=== STEP {step} ===")
        assistant = ask_model(messages)
        messages.append(to_assistant_message(assistant))

        text = assistant.content or ""
        if text.strip():
            print(f"Assistant: {text.strip()}")

        tool_calls = assistant.tool_calls or []
        if not tool_calls:
            print("\nDone: model finished without a new tool call.")
            return

        for tc in tool_calls:
            print(f"Tool call: {tc.function.name}({tc.function.arguments})")
            execution = execute_tool_call(
                tool_name=tc.function.name,
                arguments_json=tc.function.arguments,
            )
            print(f"Tool result: {execution}")

            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": json.dumps(execution, ensure_ascii=False),
                }
            )

    print("\nStop: MAX_STEPS reached.")


if __name__ == "__main__":
    run()

This is the classic loop: the model asks for a tool, the system executes it, the result goes back to the model, and the model makes the next decision.


requirements.txt

TEXT
openai>=1.0.0

Output example

TEXT
=== STEP 1 ===
Tool call: get_user_profile({"user_id": 42})
Tool result: {'tool': 'get_user_profile', 'result': {'user': {'id': 42, 'name': 'Anna', 'tier': 'pro'}}}

=== STEP 2 ===
Tool call: get_user_balance({"user_id": 42})
Tool result: {'tool': 'get_user_balance', 'result': {'balance': {'currency': 'USD', 'value': 128.4}}}

=== STEP 3 ===
Assistant: User Anna is a pro tier member with a current balance of 128.4 USD.

Done: model finished without a new tool call.

Why this is an agent, not one model call

                                                One model call    Agent with tools
Works only with already available text                βœ…                  βŒ
Can request new data via tool                         βŒ                  βœ…
Has an explicit execution boundary (allowlist)        βŒ                  βœ…
Takes multiple steps toward a final answer            βŒ                  βœ…

What to change in this example

  • Block get_user_balance in ALLOWED_TOOLS β€” what will the agent return to the user?
  • Replace user_id=42 with user_id=999 β€” how will the agent handle the error?
  • Add a get_user_orders tool but do not add it to ALLOWED_TOOLS β€” will the model try to call it?
  • Add a limit for the number of tool calls separately from MAX_STEPS
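For the last exercise, one possible shape for a tool-call budget tracked separately from MAX_STEPS. ToolCallBudget and guarded_execute are hypothetical names, and the real execute_tool_call is stubbed here:

```python
class ToolCallBudget:
    """Hard cap on executed tool calls, independent of the step limit."""
    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.used = 0

    def allow(self) -> bool:
        # Consume one unit of budget if any remains.
        if self.used >= self.limit:
            return False
        self.used += 1
        return True

budget = ToolCallBudget(limit=2)

def guarded_execute(tool_name: str, arguments_json: str) -> dict:
    if not budget.allow():
        # Returned as data, so the model can wrap up with what it already has.
        return {"error": "tool call budget exhausted"}
    return {"tool": tool_name, "result": "ok"}  # stand-in for execute_tool_call

print(guarded_execute("get_user_profile", "{}"))  # runs
print(guarded_execute("get_user_balance", "{}"))  # runs
print(guarded_execute("get_user_orders", "{}"))   # blocked by the budget
```

In main.py the guard would wrap the execute_tool_call invocation inside the step loop; a step that produces no tool calls still counts against MAX_STEPS, but only executed tools consume the budget.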

Full code on GitHub

The repository contains the full version of this demo: with ALLOWED_TOOLS, step loop, and error handling. If you want to run it quickly or walk through the code line by line, open the full example.

View full code on GitHub β†—
⏱️ 6 min read β€’ Updated Mar 2026 β€’ Difficulty: β˜…β˜†β˜†
Add guardrails to tool-calling agents
Ship this pattern with governance:
  • Budgets (steps / spend caps)
  • Tool permissions (allowlist / blocklist)
  • Kill switch & incident stop
  • Idempotency & dedupe
  • Audit logs & traceability
Integrated mention: OnceOnly is a control layer for production agent systems.
Author

This documentation is curated and maintained by engineers who ship AI agents in production.

The content is AI-assisted, with human editorial responsibility for accuracy, clarity, and production relevance.

Patterns and recommendations are grounded in post-mortems, failure modes, and operational incidents in deployed systems, including during the development and operation of governance infrastructure for agents at OnceOnly.