ML: Agents Flashcards

(21 cards)

1
Q

What defines an “Agentic” AI system compared to a standard LLM generation pipeline?

A

A standard pipeline is linear (Input → LLM → Output). An Agentic system has autonomy: it uses the LLM as a reasoning engine to break down tasks, decide which external tools to use, observe the results, and iteratively adjust its plan until the goal is met.

2
Q

Explain the ReAct framework.

A

ReAct stands for Reasoning and Acting. It prompts the LLM to output a continuous loop of three steps: Thought (evaluating the current state), Action (calling a tool), and Observation (reading the tool’s output). This forces the LLM to “think out loud” before acting.
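The Thought → Action → Observation loop can be sketched in plain Python. This is a minimal illustration: the `scripted_turns` list stands in for real LLM calls, and the calculator tool is a toy.

```python
# Minimal ReAct-style loop with a stubbed "LLM" (scripted_turns stands in
# for real model calls; calculator is a toy tool).
def calculator(expr: str) -> str:
    return str(eval(expr))  # never eval untrusted input in production

scripted_turns = [
    {"thought": "I need to compute 17 * 23.", "action": ("calculator", "17 * 23")},
    {"thought": "I have the answer.", "action": ("finish", None)},
]

def react_loop(turns, tools):
    observation = None
    for turn in turns:
        print(f"Thought: {turn['thought']}")       # Thought: think out loud
        name, arg = turn["action"]
        if name == "finish":
            return observation
        observation = tools[name](arg)             # Action, then Observation
        print(f"Observation: {observation}")

result = react_loop(scripted_turns, {"calculator": calculator})
```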

3
Q

What are the core components of a standard AI Agent architecture?

A
1. Brain (LLM): Handles reasoning and planning.
2. Memory: Short-term (context window) and Long-term (vector DBs).
3. Tools: APIs, calculators, search engines the agent can interact with.
4. Planning: Task decomposition and reflection.
4
Q

How does Tool Calling (or Function Calling) work under the hood?

A

You pass the LLM a JSON schema defining the available tools, their parameters, and descriptions. Instead of generating raw text, the LLM generates a structured JSON object specifying which function to call and with what arguments. The application executes the function and returns the result to the LLM.
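A sketch of the mechanism, using the OpenAI-style function-calling schema (the tool name and payload here are illustrative, and the model's structured reply is hardcoded rather than generated):

```python
import json

# A tool definition in the OpenAI-style function-calling format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stubbed implementation

# Pretend the model emitted this structured call instead of raw text:
model_output = json.dumps({"name": "get_weather", "arguments": {"city": "Oslo"}})

# The application parses the call, dispatches it, and would then feed
# `result` back to the LLM as the tool's output.
call = json.loads(model_output)
result = {"get_weather": get_weather}[call["name"]](**call["arguments"])
```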

5
Q

What is LangGraph, and why use it over traditional LangChain Agents?

A

LangChain agents are often “black boxes” that run a hardcoded AgentExecutor loop, making them hard to debug or customize. LangGraph models the agent as a stateful graph. It allows for highly controllable, cyclic workflows where you explicitly define how state passes between nodes.

6
Q

Explain Nodes, Edges, and State in LangGraph.

A

State: A shared data structure (like a dictionary) updated by the nodes.
Nodes: Python functions (often containing LLM calls) that read the state, perform work, and return state updates.
Edges: The routing logic (conditional or direct) that dictates which node executes next based on the current state.
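The three concepts can be mimicked in plain Python without the langgraph dependency (node and key names here are illustrative, not LangGraph's API):

```python
from typing import TypedDict

# Plain-Python analogue of LangGraph's State / Nodes / Edges.
class State(TypedDict):
    question: str
    draft: str
    approved: bool

def draft_node(state: State) -> dict:      # Node: reads state, returns an update
    return {"draft": f"Answer to: {state['question']}"}

def review_node(state: State) -> dict:
    return {"approved": "Answer" in state["draft"]}

def route(state: State) -> str:            # Conditional edge: picks the next node
    return "END" if state["approved"] else "draft"

nodes = {"draft": draft_node, "review": review_node}
edges = {"draft": lambda s: "review", "review": route}

state: State = {"question": "What is LangGraph?", "draft": "", "approved": False}
current = "draft"
while current != "END":
    state.update(nodes[current](state))    # node runs, update is merged into state
    current = edges[current](state)        # edge decides where to go next
```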

7
Q

Why are cyclic graphs essential for agent workflows?

A

Unlike standard DAGs (Directed Acyclic Graphs) used in basic pipelines, agents need to loop. They must try an action, observe the result, and if it fails, loop back to try a different approach. Cyclic graphs allow for this iterative “try-observe-correct” loop.

8
Q

How do you manage memory and state persistence in LangGraph?

A

LangGraph uses Checkpointers (like MemorySaver or database-backed checkpointers). They save the graph’s State at every step (superstep). This provides conversation memory and allows you to pause, resume, or “time-travel” back to previous states if an error occurs.
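The snapshot-per-step idea can be shown with a toy in-memory checkpointer (illustrative only; LangGraph's real MemorySaver has a richer interface keyed by thread and checkpoint IDs):

```python
import copy

# Toy checkpointer: snapshot the state after every step ("superstep"),
# enabling resume and "time-travel" to an earlier state.
class TinyCheckpointer:
    def __init__(self):
        self.snapshots = []

    def save(self, state: dict) -> None:
        self.snapshots.append(copy.deepcopy(state))

    def restore(self, step: int) -> dict:
        return copy.deepcopy(self.snapshots[step])

cp = TinyCheckpointer()
state = {"messages": []}
for word in ["hello", "world", "!"]:
    state["messages"].append(word)   # one graph step
    cp.save(state)                   # checkpoint after each step

rewound = cp.restore(0)              # time-travel: just after the first step
```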

9
Q

Describe the “Routing” workflow pattern.

A

The LLM acts as a router that classifies an incoming query and directs it to a specialized downstream pipeline (e.g., Route to “Code Assistant”, “Customer Support”, or “General Chit-Chat”). It only takes one action and does not loop.
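A keyword-based stand-in for the router (the classification rules are illustrative; a real system would use an LLM or embeddings). Note the shape: classify once, dispatch once, no loop.

```python
# Router pattern: one classification, one dispatch, no loop.
def classify(query: str) -> str:
    q = query.lower()
    if any(w in q for w in ("bug", "code", "function")):
        return "code_assistant"
    if any(w in q for w in ("refund", "order", "account")):
        return "customer_support"
    return "general_chat"

handlers = {
    "code_assistant": lambda q: f"[code] {q}",
    "customer_support": lambda q: f"[support] {q}",
    "general_chat": lambda q: f"[chat] {q}",
}

query = "My order never arrived"
route = classify(query)
reply = handlers[route](query)
```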

10
Q

Describe the “Orchestrator-Worker” (Supervisor) multi-agent workflow.

A

A central “Supervisor” LLM receives a complex task, breaks it down, and delegates sub-tasks to specialized “Worker” agents (e.g., a Researcher agent and a Coder agent). The workers report back, and the supervisor synthesizes the final output.

11
Q

Describe the “Evaluator-Optimizer” workflow.

A

One agent generates a solution (Optimizer), and a second agent critiques it (Evaluator). The Evaluator provides feedback, and the Optimizer refines the solution in a loop until the Evaluator approves it or a max retry limit is reached. Great for code generation.
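The loop structure, with both agents stubbed (a real evaluator would run tests or an LLM critique; here it just checks the draft):

```python
# Evaluator-Optimizer loop with stubbed agents. Empty feedback means
# the evaluator approved; a max-retry limit bounds the loop.
def optimize(draft: str, feedback: str) -> str:
    return draft + " fixed" if feedback else draft   # stand-in for an LLM revision

def evaluate(draft: str) -> str:
    return "" if draft.count("fixed") >= 2 else "still failing tests"

draft, max_retries = "v0", 5
for attempt in range(max_retries):
    feedback = evaluate(draft)
    if not feedback:              # evaluator approved: exit the loop
        break
    draft = optimize(draft, feedback)
```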

12
Q

How do you prevent an AI agent from getting stuck in an infinite loop?

A
1. Implement a hard recursion limit (max steps).
2. Pass the step count into the agent’s prompt so it knows time is running out.
3. Use a “give_up” tool that the agent can call if it realizes the task is impossible.
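The first two safeguards can be sketched together (the agent_step stub never finishes on its own, so the hard limit is what stops it):

```python
# Hard step limit plus a countdown injected into the prompt.
MAX_STEPS = 4

def agent_step(prompt: str) -> str:
    return "continue"   # stub for an LLM call that may eventually return "done"

steps_taken = 0
for step in range(MAX_STEPS):
    # Safeguard 2: tell the agent how much budget remains.
    prompt = f"Step {step + 1} of {MAX_STEPS}. Wrap up if the task seems done."
    if agent_step(prompt) == "done":
        break
    steps_taken += 1
# Safeguard 1: the loop bound guarantees termination even if "done" never comes.
```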
13
Q

What is MCP (Model Context Protocol), and what problem does it solve?

A

MCP is an open standard that standardizes how AI models connect to external data sources and tools. Instead of writing custom API integrations for every new agent and data source, MCP provides a universal, two-way protocol so agents can securely query local or remote contexts.

14
Q

Explain the Client-Server architecture of MCP.

A

The MCP Client is the AI application (like Claude Desktop or your custom agent). The MCP Server is a lightweight program connected to a data source (like a database, GitHub, or Slack). The Client asks the Server for context or tool execution over a standardized JSON-RPC connection.
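A sketch of what crosses the wire. MCP messages are JSON-RPC 2.0, and `tools/call` is a real MCP method name; the tool name and payloads below are illustrative.

```python
import json

# Client -> Server: ask the server to execute one of its tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "query_database", "arguments": {"sql": "SELECT 1"}},
}

# Server -> Client: a result carrying the same id as the request.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "1"}]},
}

wire = json.dumps(request)   # what actually travels over the transport
```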

15
Q

How would you design a multi-agent system for a software development task?

A

I would use a LangGraph workflow with three agents:
1. Planner Agent: Breaks the user request into a technical spec.
2. Coder Agent: Writes the code based on the spec.
3. Reviewer Agent: Runs unit tests and provides feedback to the Coder. The Coder and Reviewer loop until tests pass.

16
Q

What is Semantic Routing, and when would you use it over LLM-based routing?

A

Semantic routing uses vector embeddings to compare a user’s query against predefined route examples, bypassing the LLM. It is much faster, cheaper, and more deterministic than asking an LLM to decide the route, making it ideal for high-volume, initial request sorting.
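The mechanism is just nearest-neighbor search over embeddings. A toy version with made-up 3-d vectors (a real system would embed text with a sentence-embedding model):

```python
import math

# Toy semantic router: cosine similarity against precomputed route embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

routes = {
    "code_assistant": [0.9, 0.1, 0.0],
    "customer_support": [0.1, 0.9, 0.1],
}

query_embedding = [0.2, 0.8, 0.1]   # pretend embedding of "where is my refund?"
best_route = max(routes, key=lambda r: cosine(routes[r], query_embedding))
```

No LLM call is made: the decision is a deterministic vector comparison, which is why it is fast and cheap.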

17
Q

How do you handle tool failure or API errors in an agentic workflow?

A

Never let the application crash. Catch the exception and return the error message back to the LLM as the “tool observation.” Prompt the LLM to read the error, understand what went wrong (e.g., missing parameter), and try calling the tool again with corrected inputs.
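The pattern in code (call_tool and the error text are illustrative): catch the exception and hand it back as the observation rather than crashing.

```python
# Feed tool errors back to the model as observations instead of crashing.
def call_tool(name: str, args: dict) -> str:
    if "city" not in args:
        raise ValueError("missing required parameter: city")
    return "Sunny"

def safe_tool_call(name: str, args: dict) -> str:
    try:
        return call_tool(name, args)
    except Exception as exc:
        # Returned as the "tool observation" so the LLM can self-correct.
        return f"ERROR: {exc}"

first = safe_tool_call("get_weather", {})                # agent forgot the arg
retry = safe_tool_call("get_weather", {"city": "Oslo"})  # corrected call
```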

18
Q

What is the difference between a Hierarchical and a Networked multi-agent architecture?

A

Hierarchical: A strict chain of command (Supervisor delegates to Workers). Predictable and easy to manage.
Networked: Agents can talk to any other agent peer-to-peer to solve problems organically. More flexible but highly prone to getting stuck in loops or losing focus.

19
Q

What is “Human-in-the-loop” (HITL), and how is it implemented in LangGraph?

A

HITL pauses the agent before taking a high-risk action (like deleting a database or sending an email) to require human approval. In LangGraph, you set an interrupt_before flag on a specific node. The graph pauses, saves state to the Checkpointer, and waits for a human to resume it.

20
Q

Explain the concept of “Reflection” in agentic AI.

A

Reflection is a pattern where an agent is prompted to explicitly evaluate its own past actions and outcomes before deciding on the next step. It looks at its scratchpad, identifies mistakes it made in previous turns, and updates its strategy, significantly reducing hallucinations.

21
Q

What are “Structured Outputs” and why are they critical for multi-agent systems?

A

Structured outputs force the LLM to respond in a strict format (usually JSON matching a Pydantic model). This is critical because downstream agents or Python functions expect exact keys and data types; raw text generation would break the software pipeline.
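Pydantic is the common choice in practice; a stdlib sketch of the same idea with a dataclass (field names are illustrative): parse the LLM's JSON into a typed object, rejecting it if the contract is violated.

```python
import json
from dataclasses import dataclass

# Stdlib stand-in for a Pydantic model: downstream code gets exact
# keys and types, or parsing fails loudly.
@dataclass
class TaskPlan:
    task: str
    priority: int

def parse_plan(raw: str) -> TaskPlan:
    data = json.loads(raw)
    if not isinstance(data.get("priority"), int):
        raise ValueError("priority must be an integer")
    return TaskPlan(task=data["task"], priority=data["priority"])

# Pretend the LLM was forced to emit JSON matching the schema:
plan = parse_plan('{"task": "write unit tests", "priority": 2}')
```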