This is a full implementation of the example from the article How an Agent Uses Tools (Basics).
If you have not read the article yet, start there. This page focuses only on code.
What this example demonstrates
- How the LLM decides when a tool should be called
- How the system checks whether the tool is allowed (allowlist)
- How to execute a tool call and return the result to the model
- How the agent finishes the loop when data is sufficient
Project structure
```
examples/
└── foundations/
    └── tool-calling-basics/
        └── python/
            ├── main.py          # agent loop
            ├── llm.py           # model call + tool definitions
            ├── executor.py      # allowlist check + tool execution
            ├── tools.py         # tools (business logic)
            └── requirements.txt
```
This split is practical in a real project: model, policy, and tool execution are not mixed in one file.
How to run
1. Clone the repository and go to the example folder:

   ```shell
   git clone https://github.com/AgentPatterns-tech/agentpatterns.git
   cd agentpatterns/examples/foundations/tool-calling-basics/python
   ```

2. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Set the API key:

   ```shell
   export OPENAI_API_KEY="sk-..."
   ```

4. Run the example:

   ```shell
   python main.py
   ```
⚠️ If you forget to set the key, the agent will immediately tell you, with a hint on what to do.
Code
tools.py: tools that are actually executed
```python
from typing import Any

USERS = {
    42: {"id": 42, "name": "Anna", "tier": "pro"},
    7: {"id": 7, "name": "Max", "tier": "free"},
}

BALANCES = {
    42: {"currency": "USD", "value": 128.40},
    7: {"currency": "USD", "value": 0.0},
}

def get_user_profile(user_id: int) -> dict[str, Any]:
    user = USERS.get(user_id)
    if not user:
        return {"error": f"user {user_id} not found"}
    return {"user": user}

def get_user_balance(user_id: int) -> dict[str, Any]:
    balance = BALANCES.get(user_id)
    if not balance:
        return {"error": f"balance for user {user_id} not found"}
    return {"balance": balance}
```
The model has no direct access to dictionaries. It can only ask to call these functions.
executor.py: safety boundary and execution
```python
import json
from typing import Any

from tools import get_user_balance, get_user_profile

TOOL_REGISTRY = {
    "get_user_profile": get_user_profile,
    "get_user_balance": get_user_balance,
}

ALLOWED_TOOLS = {"get_user_profile", "get_user_balance"}

def execute_tool_call(tool_name: str, arguments_json: str) -> dict[str, Any]:
    if tool_name not in ALLOWED_TOOLS:
        return {"error": f"tool '{tool_name}' is not allowed"}

    tool = TOOL_REGISTRY.get(tool_name)
    if tool is None:
        return {"error": f"tool '{tool_name}' not found"}

    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return {"error": "invalid JSON arguments"}

    try:
        result = tool(**args)
    except TypeError as exc:
        return {"error": f"invalid arguments: {exc}"}

    return {"tool": tool_name, "result": result}
```
Key idea here: even if the model asks for something strange, the system executes only what is explicitly allowed.
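You can see that boundary in action without any model at all. A quick standalone check (the registry here is a trimmed, inlined copy with a stub tool, so the snippet runs on its own):

```python
import json
from typing import Any, Callable

# Trimmed copy of the executor pieces, inlined so this snippet is self-contained.
# The lambda stands in for a real tool from tools.py.
TOOL_REGISTRY: dict[str, Callable[..., Any]] = {
    "get_user_profile": lambda user_id: {"user": {"id": user_id}},
}
ALLOWED_TOOLS = {"get_user_profile"}

def execute_tool_call(tool_name: str, arguments_json: str) -> dict[str, Any]:
    # The allowlist check runs first: unknown names never reach a tool.
    if tool_name not in ALLOWED_TOOLS:
        return {"error": f"tool '{tool_name}' is not allowed"}
    tool = TOOL_REGISTRY[tool_name]
    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return {"error": "invalid JSON arguments"}
    return {"tool": tool_name, "result": tool(**args)}

print(execute_tool_call("get_user_profile", '{"user_id": 42}'))
# {'tool': 'get_user_profile', 'result': {'user': {'id': 42}}}
print(execute_tool_call("delete_database", "{}"))
# {'error': "tool 'delete_database' is not allowed"}
print(execute_tool_call("get_user_profile", "not json"))
# {'error': 'invalid JSON arguments'}
```

Note that every failure path returns an error dict rather than raising: the error goes back to the model as a tool result, so the agent can react to it.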
llm.py: model call and available tool definitions
```python
import os

from openai import OpenAI

api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise EnvironmentError(
        "OPENAI_API_KEY is not set.\n"
        "Run: export OPENAI_API_KEY='sk-...'"
    )

client = OpenAI(api_key=api_key)

SYSTEM_PROMPT = """
You are an AI support agent. When you need data, call the available tools.
Once you have enough information, give a short answer.
""".strip()

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_user_profile",
            "description": "Returns user profile by user_id",
            "parameters": {
                "type": "object",
                "properties": {"user_id": {"type": "integer"}},
                "required": ["user_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_user_balance",
            "description": "Returns user balance by user_id",
            "parameters": {
                "type": "object",
                "properties": {"user_id": {"type": "integer"}},
                "required": ["user_id"],
            },
        },
    },
]

def ask_model(messages: list[dict]):
    completion = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *messages],
        tools=TOOLS,
        tool_choice="auto",
    )
    return completion.choices[0].message
```
Tools are described here as a schema. The model sees this list and chooses from it.
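One thing the example leaves implicit: nothing forces the model's arguments to actually match the schema. A small pre-dispatch check is one way to close that gap. This is a sketch, not part of the repo, and `check_arguments` is a hypothetical helper; it only covers the `required` keys and integer typing used by the schemas above:

```python
import json
from typing import Any

# Hypothetical helper: validates an arguments_json string against the same
# JSON-schema fragment used in TOOLS ("required" keys plus integer typing).
def check_arguments(schema: dict[str, Any], arguments_json: str) -> list[str]:
    errors: list[str] = []
    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return ["arguments are not valid JSON"]
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in args and spec.get("type") == "integer" and not isinstance(args[key], int):
            errors.append(f"argument '{key}' must be an integer")
    return errors

# Same parameters block as get_user_profile / get_user_balance above.
schema = {
    "type": "object",
    "properties": {"user_id": {"type": "integer"}},
    "required": ["user_id"],
}
print(check_arguments(schema, '{"user_id": 42}'))   # []
print(check_arguments(schema, '{}'))                # ['missing required argument: user_id']
print(check_arguments(schema, '{"user_id": "42"}')) # ["argument 'user_id' must be an integer"]
```

In a real project you would more likely reach for a full JSON Schema validator; the point is that schema validation belongs on the system side, next to the allowlist.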
main.py: agent loop (model → tool → model)
```python
import json

from executor import execute_tool_call
from llm import ask_model

MAX_STEPS = 6
TASK = "Prepare a short account summary for user_id=42: name, tier, and balance."

def to_assistant_message(message) -> dict:
    tool_calls = []
    for tc in message.tool_calls or []:
        tool_calls.append(
            {
                "id": tc.id,
                "type": "function",
                "function": {
                    "name": tc.function.name,
                    "arguments": tc.function.arguments,
                },
            }
        )
    msg = {"role": "assistant", "content": message.content or ""}
    # Only attach tool_calls when there are any: the API rejects an
    # assistant message with an empty tool_calls list.
    if tool_calls:
        msg["tool_calls"] = tool_calls
    return msg

def run():
    messages: list[dict] = [{"role": "user", "content": TASK}]

    for step in range(1, MAX_STEPS + 1):
        print(f"\n=== STEP {step} ===")

        assistant = ask_model(messages)
        messages.append(to_assistant_message(assistant))

        text = assistant.content or ""
        if text.strip():
            print(f"Assistant: {text.strip()}")

        tool_calls = assistant.tool_calls or []
        if not tool_calls:
            print("\nDone: model finished without a new tool call.")
            return

        for tc in tool_calls:
            print(f"Tool call: {tc.function.name}({tc.function.arguments})")
            execution = execute_tool_call(
                tool_name=tc.function.name,
                arguments_json=tc.function.arguments,
            )
            print(f"Tool result: {execution}")
            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": json.dumps(execution, ensure_ascii=False),
                }
            )

    print("\nStop: MAX_STEPS reached.")

if __name__ == "__main__":
    run()
```
This is the classic loop: the model asks for a tool, the system executes it, the result goes back to the model, and then the next decision is made.
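The loop itself does not care where the tool calls come from, which makes it easy to exercise offline. A sketch with a scripted stand-in for `ask_model` (the `SimpleNamespace` objects mimic just the `.content` / `.tool_calls` attributes the loop reads, and the canned tool result replaces a real executor; no API key needed):

```python
from types import SimpleNamespace

# Scripted "model": first turn asks for a tool, second turn answers.
SCRIPT = [
    SimpleNamespace(
        content=None,
        tool_calls=[SimpleNamespace(
            id="call_1",
            function=SimpleNamespace(name="get_user_profile",
                                     arguments='{"user_id": 42}'),
        )],
    ),
    SimpleNamespace(content="Anna is on the pro tier.", tool_calls=None),
]

def fake_ask_model(messages):
    # Pick the next scripted turn based on how many assistant turns exist.
    return SCRIPT[sum(1 for m in messages if m["role"] == "assistant")]

# Same shape as main.py's loop, condensed.
messages = [{"role": "user", "content": "Summarize user 42"}]
while True:
    assistant = fake_ask_model(messages)
    messages.append({"role": "assistant", "content": assistant.content or ""})
    if not assistant.tool_calls:
        print(assistant.content)  # Anna is on the pro tier.
        break
    for tc in assistant.tool_calls:
        # Canned tool output standing in for execute_tool_call.
        messages.append({"role": "tool", "tool_call_id": tc.id,
                         "content": '{"user": {"id": 42, "name": "Anna", "tier": "pro"}}'})
```

This kind of scripted model is handy for unit-testing the loop, the allowlist, and the message bookkeeping deterministically.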
requirements.txt
```
openai>=1.0.0
```
Output example
```
=== STEP 1 ===
Tool call: get_user_profile({"user_id": 42})
Tool result: {'tool': 'get_user_profile', 'result': {'user': {'id': 42, 'name': 'Anna', 'tier': 'pro'}}}

=== STEP 2 ===
Tool call: get_user_balance({"user_id": 42})
Tool result: {'tool': 'get_user_balance', 'result': {'balance': {'currency': 'USD', 'value': 128.4}}}

=== STEP 3 ===
Assistant: User Anna is a pro tier member with a current balance of 128.4 USD.

Done: model finished without a new tool call.
```
Why this is an agent, not one model call
| | One model call | Agent with tools |
|---|---|---|
| Works only with already available text | ✅ | ❌ |
| Can request new data via a tool | ❌ | ✅ |
| Has an explicit execution boundary (allowlist) | ❌ | ✅ |
| Takes multiple steps toward a final answer | ❌ | ✅ |
What to change in this example
- Block `get_user_balance` in `ALLOWED_TOOLS`: what will the agent return to the user?
- Replace `user_id=42` with `user_id=999`: how will the agent handle the error?
- Add a `get_user_orders` tool but do not add it to `ALLOWED_TOOLS`: will the model try to call it?
- Add a limit for the number of tool calls, separate from `MAX_STEPS`.
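For the last exercise, one possible shape (a sketch, not part of the repo): wrap the executor in a counter so tool calls get their own budget, independent of the loop's step count. `with_call_budget` is a hypothetical helper, and the lambda stands in for `executor.execute_tool_call`:

```python
from typing import Any, Callable

def with_call_budget(execute: Callable[[str, str], dict[str, Any]],
                     max_calls: int) -> Callable[[str, str], dict[str, Any]]:
    # Closure-based counter: once the budget is spent, every further
    # call returns an error dict instead of reaching a tool.
    calls = {"used": 0}

    def limited(tool_name: str, arguments_json: str) -> dict[str, Any]:
        if calls["used"] >= max_calls:
            return {"error": f"tool call budget of {max_calls} exhausted"}
        calls["used"] += 1
        return execute(tool_name, arguments_json)

    return limited

# Demo with a stub executor standing in for executor.execute_tool_call.
limited = with_call_budget(lambda name, args: {"tool": name, "result": "ok"}, max_calls=2)
print(limited("get_user_profile", "{}"))  # {'tool': 'get_user_profile', 'result': 'ok'}
print(limited("get_user_balance", "{}"))  # {'tool': 'get_user_balance', 'result': 'ok'}
print(limited("get_user_balance", "{}"))  # {'error': 'tool call budget of 2 exhausted'}
```

Because the budget error comes back as an ordinary tool result, the model can still read it and produce a final answer with whatever data it already has.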
Full code on GitHub
The repository contains the full version of this demo, with ALLOWED_TOOLS, the step loop, and error handling.
If you want to run it quickly or walk through the code line by line, open the full example.