What problem does function calling solve in AI agents?
It eliminates the need to parse unstructured text responses by having the model return structured function calls instead.
Why was manual parsing difficult before function calling?
LLM responses varied in format, so developers had to engineer strict output templates and brittle error-handling logic to extract actions from free text.
What does function calling guarantee about tool execution?
That the model returns a structured JSON function call containing the function name and its arguments, instead of free-form text that must be parsed.
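A minimal sketch of what such a structured call looks like, assuming the OpenAI-style format where the arguments arrive as a JSON-encoded string (exact field names vary by provider):

```python
import json

# Hypothetical raw function call as the model might return it.
raw_call = '{"name": "read_file", "arguments": "{\\"file_name\\": \\"notes.txt\\"}"}'

call = json.loads(raw_call)
args = json.loads(call["arguments"])  # arguments arrive as a JSON string
print(call["name"], args)  # read_file {'file_name': 'notes.txt'}
```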
What does the agent do when it receives a function call?
It matches the tool_name to a Python function and executes it with the provided arguments.
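The dispatch step can be sketched as a dictionary lookup; the stand-in tool bodies below are illustrative, not the real implementations:

```python
# Map tool names to Python functions, then invoke with the parsed arguments.

def list_files():
    return ["a.txt", "b.txt"]  # stand-in result

def read_file(file_name):
    return f"contents of {file_name}"  # stand-in result

tool_functions = {"list_files": list_files, "read_file": read_file}

tool_name = "read_file"                 # taken from the model's function call
arguments = {"file_name": "notes.txt"}  # parsed from the call's JSON arguments

result = tool_functions[tool_name](**arguments)
print(result)  # contents of notes.txt
```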
What happens if no function call is included in the LLM’s response?
The agent treats the output as plain text and prints it.
In the simplified loop, how are tools passed into the LLM?
As a list of JSON Schema tool definitions in the tools= parameter of the completion call.
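A sketch of what that list of JSON Schema tool definitions might look like, assuming the OpenAI-style `tools=` shape; descriptions and required fields here are illustrative:

```python
# Tool definitions in the shape expected by an OpenAI-style tools= parameter.
tools = [
    {
        "type": "function",
        "function": {
            "name": "list_files",
            "description": "List files in the current directory.",
            "parameters": {"type": "object", "properties": {}, "required": []},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the content of a file and return it.",
            "parameters": {
                "type": "object",
                "properties": {"file_name": {"type": "string"}},
                "required": ["file_name"],
            },
        },
    },
]
# These would be passed as tools=tools in the completion call.
```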
What is stored in the agent’s memory after executing a tool?
The action taken (assistant role) and the result of that action (user role).
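A sketch of that memory convention, serializing the action under the assistant role and the result under the user role (the exact message shapes in the original loop may differ):

```python
import json

memory = [{"role": "user", "content": "What files are here?"}]

# Hypothetical action and result from one iteration.
tool_name, args, result = "list_files", {}, ["a.txt", "b.txt"]

memory.append({"role": "assistant",
               "content": json.dumps({"tool_name": tool_name, "args": args})})
memory.append({"role": "user",
               "content": json.dumps({"result": result})})
```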
What triggers the agent loop to end?
The LLM calling the terminate tool.
What does the list_files tool do?
Returns a list of files in the current directory.
What does the read_file tool do?
Reads the content of a file and returns it.
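Plausible implementations of these two tools; the original agent may add path restrictions or error handling beyond this sketch:

```python
import os

def list_files():
    """Return a list of files in the current directory."""
    return os.listdir(".")

def read_file(file_name):
    """Read a file's content and return it as a string."""
    with open(file_name, "r", encoding="utf-8") as f:
        return f.read()
```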
Why do we define tools using JSON Schema?
So the LLM knows valid parameters and can format tool calls correctly.
What dictionary maps tool names to their Python functions?
The tool_functions dictionary.
How does the simplified loop detect a tool call?
By checking response.choices[0].message.tool_calls.
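The detection step can be sketched with mocked response objects standing in for an OpenAI-style client response; if `tool_calls` is present, dispatch, otherwise fall back to plain text:

```python
from types import SimpleNamespace

def handle(response):
    message = response.choices[0].message
    if message.tool_calls:                      # a tool call is present
        call = message.tool_calls[0].function
        return ("tool", call.name, call.arguments)
    return ("text", message.content)            # plain-text fallback

# Mock responses for illustration only.
mock_call = SimpleNamespace(
    function=SimpleNamespace(name="list_files", arguments="{}"))
with_tool = SimpleNamespace(choices=[SimpleNamespace(
    message=SimpleNamespace(tool_calls=[mock_call], content=None))])
plain = SimpleNamespace(choices=[SimpleNamespace(
    message=SimpleNamespace(tool_calls=None, content="Done."))])

print(handle(with_tool))  # ('tool', 'list_files', '{}')
print(handle(plain))      # ('text', 'Done.')
```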
Why is function calling more reliable than prompt-engineered parsing?
Because function calls are generated in a structured, machine-readable format rather than recovered from free text with brittle parsing rules.
What type of error should you catch when parsing function call arguments?
json.JSONDecodeError.
Why does error-handling still matter with function calling?
The LLM can sometimes produce malformed JSON or invalid arguments.
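A sketch of defensive parsing around the argument string; the helper name here is hypothetical:

```python
import json

def safe_parse_args(raw):
    """Parse a tool call's argument string, returning (args, error)."""
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as e:
        return None, f"Invalid JSON arguments: {e}"

args, err = safe_parse_args('{"file_name": "notes.txt"}')
bad_args, bad_err = safe_parse_args('{"file_name": notes.txt}')  # unquoted value
```

On failure, the agent can feed the error message back into memory so the model can retry with corrected arguments.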
What does the agent do with each iteration’s result?
Appends it to memory so the LLM can use updated context in the next step.
How many iterations can the loop run before stopping?
Up to the maximum defined by max_iterations.
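The pieces above fit together as a bounded loop. This skeleton replaces the LLM call with a scripted stub so it is self-contained; a real loop would call the completion API with `tools=` and pass memory as the messages:

```python
import json

def fake_llm(memory):
    """Stub standing in for the completion call: list files, then terminate."""
    if len(memory) <= 1:
        return {"tool_name": "list_files", "args": {}}
    return {"tool_name": "terminate", "args": {"message": "Done."}}

tool_functions = {"list_files": lambda: ["a.txt"],
                  "terminate": lambda message: message}

memory = [{"role": "user", "content": "What files are here?"}]
max_iterations = 5

for _ in range(max_iterations):
    call = fake_llm(memory)
    result = tool_functions[call["tool_name"]](**call["args"])
    memory.append({"role": "assistant", "content": json.dumps(call)})
    memory.append({"role": "user", "content": json.dumps({"result": result})})
    if call["tool_name"] == "terminate":  # terminate tool ends the loop
        break
```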
How does function calling unify conversation + actions?
It allows the LLM to mix function calls with normal text responses seamlessly.
What is the agent’s first input based on the example loop?
The user’s task, collected through input().