Tool Types Flashcards

(99 cards)

1
Q

What is the purpose of agent observability & evals?

A

To debug and improve reliability through traces, prompts, tool calls, and outcomes

Examples include LangSmith/OpenAI Evals-style platforms; logs + run metadata to compare agent graph versions.

2
Q

What is the role of workflow orchestration (non-AI)?

A

To kick off agents on schedules, react to webhooks, branch, and fan-out to workers

Examples include n8n, Airflow, Prefect—pair these with agent runtimes for robust, resumable jobs.

3
Q

Why is knowledge/RAG plumbing important for agents?

A

Agents need context; vector DBs and retrieval pipelines supply it

Examples include Chroma/Weaviate/PGVector + document loaders + chunking/index QA.
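A minimal sketch of the chunking step in plain Python — sizes are illustrative, and real pipelines would typically chunk by tokens or semantic boundaries rather than characters:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Fixed-size character chunks with overlap, so neighboring
    chunks share context at their boundaries."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # advance, keeping `overlap` chars of context
    return chunks

pieces = chunk_text("x" * 500, size=200, overlap=40)
print(len(pieces))  # → 4
```

The overlap is what lets a retrieval hit carry a little surrounding context into the prompt.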

4
Q

What is the purpose of secrets & configuration?

A

To safely hand tokens, API keys, and per-environment configs to agents

Examples include Doppler, Vault, 1Password Secrets Automation.
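A minimal sketch of per-environment configuration, assuming secrets are injected via environment variables (as Vault/Doppler integrations typically do); all endpoints and variable names here are illustrative:

```python
import os

# Per-environment defaults; endpoints are made up for illustration.
DEFAULTS = {
    "dev":  {"api_base": "http://localhost:8080", "timeout_s": 30},
    "prod": {"api_base": "https://api.example.com", "timeout_s": 10},
}

def load_config(env: str = "") -> dict:
    env = env or os.environ.get("APP_ENV", "dev")
    cfg = dict(DEFAULTS[env])
    # The secret itself comes from the environment (injected by the
    # secrets manager), never from source code.
    cfg["api_key"] = os.environ.get("API_KEY")
    if env == "prod" and not cfg["api_key"]:
        raise RuntimeError("API_KEY must be set in prod")
    return cfg

cfg = load_config("dev")
print(cfg["api_base"], cfg["timeout_s"])
```

Failing fast when a prod secret is missing is cheaper than debugging an agent that silently ran unauthenticated.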

5
Q

What is the function of queueing & concurrency control?

A

To throttle external APIs, make jobs idempotent, and recover from failures

Examples include Redis queues, Celery/RQ, BullMQ.
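The idempotency-plus-retry pattern can be sketched in plain Python — the in-memory set stands in for a Redis key store, and the external API call is a placeholder:

```python
import hashlib
import time

PROCESSED: set[str] = set()  # stands in for a Redis set in this sketch

def idempotency_key(payload: str) -> str:
    """Same payload → same key, so redeliveries are detectable."""
    return hashlib.sha256(payload.encode()).hexdigest()

def process_once(payload: str, attempts: int = 3) -> str:
    key = idempotency_key(payload)
    if key in PROCESSED:
        return "skipped"  # duplicate delivery: safe no-op
    for attempt in range(attempts):
        try:
            # ... call the flaky external API here (placeholder) ...
            PROCESSED.add(key)
            return "done"
        except ConnectionError:
            time.sleep(2 ** attempt)  # exponential backoff between retries
    return "failed"

print(process_once("order-42"), process_once("order-42"))  # → done skipped
```

Queues redeliver messages by design; it is the idempotency key, not the queue, that makes redelivery harmless.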

6
Q

Why is data quality & linting for prompts and code necessary?

A

To keep prompts versioned and codebases clean as agents modify files

Examples include prompt registries/versioning, pre-commit, ESLint/ruff; pair with Sourcery/Sonar to trigger refactors.
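A toy sketch of prompt versioning, assuming a simple in-process registry (real setups would back this with a database or git; names and templates are made up):

```python
# Illustrative in-process prompt registry: versions are pinned explicitly,
# so a deploy never silently picks up an edited prompt.
PROMPTS = {
    ("summarize", "v1"): "Summarize the following document:\n{doc}",
    ("summarize", "v2"): "Summarize the document below in three bullets:\n{doc}",
}

def get_prompt(name: str, version: str) -> str:
    """Look up a pinned prompt version; refuse silent fallbacks."""
    try:
        return PROMPTS[(name, version)]
    except KeyError:
        raise KeyError(f"unregistered prompt {name}@{version}") from None

print(get_prompt("summarize", "v2").splitlines()[0])
```

Pinning by (name, version) pairs is what makes prompt changes reviewable and roll-back-able like code.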

7
Q

What is the purpose of desktop/RPA complements?

A

To handle flows that are outside the browser, such as file dialogs and native apps

Examples include Power Automate, AutoHotkey, UiPath (for full RPA).

8
Q

What are snippet/launcher utilities used for?

A

For faster hand-offs between tasks and reusable text blocks

Examples include Raycast/Alfred snippets, Espanso—pairs nicely with your clipboard manager.

9
Q

Why are API testing & contract tools needed?

A

To verify API integrations called by agents

Examples include Postman, Insomnia, Pact for contracts.

10
Q

What is the importance of packaging & environment management?

A

To ensure reproducible tool stacks across machines/CI

Examples include Docker, uv/Poetry/pipx, asdf.

11
Q

What are agent orchestration tools?

A

Frameworks for building multi-step, tool-using AI agents and coordinating multiple agents

They handle routing, memory, retries, and hand-offs.

12
Q

Name a good product option for agent orchestration that provides deterministic, stateful agent flows.

A

LangGraph (LangChain’s graph runtime)

Great for guardrails and repeatability.

13
Q

What is an example of an agent orchestration workflow?

A

A requirements-review agent hands off to a standards-checker, then to a remediator; failed checks loop back via an error edge

This illustrates the multi-step coordination of agents.
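A hand-rolled sketch of that loop, with illustrative node functions standing in for real agents (a production build would use a graph runtime such as LangGraph):

```python
# Three illustrative nodes passing a shared state dict; the "check"
# logic is a placeholder that passes once one remediation has run.
def review(state: dict) -> dict:
    state["reviewed"] = True
    return state

def check(state: dict) -> dict:
    state["ok"] = state.get("fixes", 0) >= 1
    return state

def remediate(state: dict) -> dict:
    state["fixes"] = state.get("fixes", 0) + 1
    return state

def run(state: dict, max_loops: int = 3) -> dict:
    state = review(state)
    for _ in range(max_loops):
        state = check(state)
        if state["ok"]:
            return state          # checks pass: leave the graph
        state = remediate(state)  # error edge: loop back through the checker
    raise RuntimeError("checks still failing after max_loops")

result = run({})
print(result)
```

The bounded loop is the point: an error edge without a `max_loops` cap is how agent graphs burn budget.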

14
Q

What is the purpose of refactoring tools?

A

Restructure code, improve clarity, and reduce technical debt without changing behavior

They help maintain code quality and readability.

15
Q

Name a good product option for refactoring that offers deep, language-aware refactorings.

A

JetBrains IDEs (IntelliJ IDEA, PyCharm, etc.)

Known for their robust refactoring capabilities.

16
Q

What does Sourcery provide in terms of refactoring?

A

Automated refactoring and code-quality suggestions, especially strong for Python

It includes continuous analysis to maintain code quality.

17
Q

What are clipboard management tools used for?

A

Multi-item clipboard histories with search, images/snippets, formatting rules, and cross-device sync

They significantly speed up repetitive work.

18
Q

Name a good product option for clipboard management on macOS.

A

Raycast Clipboard History

Offers fast global search and OCR on images in history.

19
Q

What is the function of web browser interaction/control tools?

A

Automates real browsers for clicking, typing, navigation, file uploads/downloads, scraping, and multi-tab control

Useful for testing, RPA, or data extraction.

20
Q

Name a good product option for web browser automation that is modern and reliable.

A

Playwright

It supports multi-browser and multi-language capabilities.

21
Q

What is an example of a no/low-code web browser automation tool?

A

Browserflow

Allows recording actions, adding loops/conditions, and scheduling runs without coding.

22
Q

What are the strengths of agent orchestration tools?

A
  • Structure: encode multi-step plans, roles, tools, and guardrails; reproducible graphs
  • Reliability: retries, state, hand-offs reduce single-agent brittleness
  • Extensibility: easy to slot in new tools (RAG, browsers, code exec, evaluators)
  • Observability: many support tracing, run metadata, and evaluation hooks

These strengths enhance the effectiveness and reliability of agent orchestration in various applications.

23
Q

What are the weaknesses of agent orchestration tools?

A
  • Complexity overhead: graph/routing adds infra and maintenance burden
  • Data/Prompt drift: versioning prompts & tools becomes a governance task
  • Cost opacity: multi-agent fan-out can explode API spend if unmetered
  • Debuggability gap: non-determinism + tool chains still tricky to debug

These weaknesses can hinder the implementation and maintenance of agent orchestration tools.

24
Q

What are the opportunities for agent orchestration tools?

A
  • Governance-by-design: policy checks, safety filters, and audit trails embedded in the graph
  • Verticalization: domain agents (e.g., compliance, RFQ, BOM cleanup) with reusable subgraphs
  • Auto-evaluation loops: continuous improvement using unit tests, golden datasets, and human-in-the-loop review
  • Hybrid runtimes: combine workflow orchestrators (Airflow/Prefect) with agent graphs for SLAs

These opportunities can lead to enhanced functionality and governance in agent orchestration.

25
What are the **threats** to **agent orchestration tools**?
* Vendor churn: SDK/runtime changes, model deprecations, tool API shifts
* Security/compliance: secret handling, PII, data egress risks with tool plugins
* Shadow IT: teams spin up agents without standards → inconsistent quality
* Model regressions: upstream model updates can silently change behavior

These threats can pose significant risks to the stability and security of agent orchestration tools.
26
What are the **strengths** of **refactoring tools**?
* Proven ROI: safer renames/moves/signature changes; fewer regressions
* Quality uplift: consistent style, dead-code removal, complexity reductions
* Developer velocity: automated fixes + safe refactors beat manual edits
* CI integration: quality gates stop debt before merge

These strengths contribute to improved code quality and developer efficiency.
27
What are the **weaknesses** of **refactoring tools**?
* Coverage limits: complex architectural refactors still need humans
* Language/tooling variance: depth of refactors differs across stacks
* False positives/negatives: static analysis can miss runtime issues
* Culture change: teams may resist enforced standards or autofixes

These weaknesses can limit the effectiveness of refactoring tools in certain scenarios.
28
What are the **opportunities** for **refactoring tools**?
* Architecture codemods: scripted repo-wide migrations (e.g., API v1→v2)
* AI-assisted design: suggest smaller, testable units and enforce SOLID/clean-architecture patterns
* Education: use refactor suggestions as coaching for juniors
* Debt dashboards: link code smells to incident rates & cycle time

These opportunities can enhance the capabilities and educational value of refactoring tools.
29
What are the **threats** to **refactoring tools**?
* Over-automation: “green CI” masking missing tests or runtime contracts
* Tool lock-in: proprietary project metadata or license costs
* Performance regressions: accidental hot-path slowdowns if not profiled
* Monorepo scale: very large trees can outstrip tool performance

These threats can undermine the reliability and performance of refactoring tools.
30
What are the **strengths** of **clipboard management tools**?
* Immediate productivity: multi-item history, search, OCR, templates
* Error reduction: reuse exact strings (part numbers, SKUs, commands)
* Lightweight: minimal learning curve; works across apps and roles
* Team consistency (some tools): shared boards/snippets reduce drift

These strengths enhance user efficiency and consistency in data handling.
31
What are the **weaknesses** of **clipboard management tools**?
* Security/privacy: sensitive data can persist in history or sync
* Fragmentation: per-OS features produce uneven team experience
* Organization debt: unmanaged histories become cluttered quickly
* Limited logic: not a replacement for proper snippet/automation systems

These weaknesses can pose challenges in managing sensitive information and maintaining organization.
32
What are the **opportunities** for **clipboard management tools**?
* Policy profiles: auto-expire secrets/PII; blocklist patterns (tokens, keys)
* Workflow glue: pair with launchers (Raycast/Alfred) and text expanders
* Domain libraries: curated boards (BOM fields, IPC notes, Jira macros)
* Analytics: track paste patterns to identify automation candidates

These opportunities can improve the functionality and security of clipboard management tools.
33
What are the **threats** to **clipboard management tools**?
* Compliance findings: auditors may flag unencrypted clip histories/sync
* Phishing/exfiltration: malicious apps reading clipboard contents
* OS changes: platform privacy restrictions can break features
* Vendor shutdowns: cloud-sync tools discontinuing services

These threats can impact the reliability and security of clipboard management tools.
34
What are the **strengths** of **web browser interaction/control tools**?
* Realistic automation: full browser engines, modern waits, multi-browser
* Test + RPA: one skillset for QA, scraping, and internal portal automation
* Rich ecosystem: recorders, locators, assertions, screenshots, videos
* Parallelism: scale out for coverage or data collection

These strengths facilitate comprehensive testing and automation across web applications.
35
What are the **weaknesses** of **web browser interaction/control tools**?
* Flaky selectors: DOM churn and anti-bot measures increase brittleness
* Maintenance cost: tests/flows rot when UIs change
* Data limits: authenticated scraping can hit legal/ToS boundaries
* Skill gap: reliable flows need engineering discipline (selectors, retries, idempotency)

These weaknesses can complicate the use of web browser interaction tools in dynamic environments.
36
What are the **opportunities** for **web browser interaction/control tools**?
* Contracted selectors: collaborate with app teams to expose test IDs
* Headful + human-in-the-loop: mix automation with checkpoints for hard steps
* Synthetic monitoring: reuse flows for 24/7 uptime/user-journey checks
* Agent integration: let an agent call Playwright for “last-mile” web tasks

These opportunities can enhance the effectiveness and reliability of web browser interaction tools.
37
What are the **threats** to **web browser interaction/control tools**?
* Anti-automation tech: bot detection, captchas, rate limits
* Legal risk: scraping or data handling that violates terms/regulations
* Browser changes: engine updates breaking APIs or stealth modes
* API replacement: targets ship official APIs that obsolete scraping

These threats can significantly impact the functionality and legality of web browser interaction tools.
38
What is a recommended **quick playbook** for **agent orchestration**?
* Start with a single high-value graph (e.g., “spec → requirement checks → remediation PR”)
* Add tracing, golden tests, and a cost budget from day one
* Gate production with evals; pin model versions

This playbook provides a structured approach to implementing agent orchestration effectively.
39
What is a recommended **quick playbook** for **refactoring**?
* Define a “refactor only with passing tests” policy
* Establish a debt backlog with measurable outcomes (MTTR, defects)
* Add pre-commit linters and CI quality gates; pilot codemods on a branch

This playbook helps ensure that refactoring efforts are effective and maintain code quality.
40
What is a recommended **quick playbook** for **clipboard management**?
* Roll out with a policy: auto-purge sensitive items, disable cloud sync for secrets
* Curate shared boards for repetitive text (BOM boilerplate, ticket templates)
* Pair with a text expander/launcher for snippets and commands

This playbook enhances the security and efficiency of clipboard management practices.
41
What is a recommended **quick playbook** for **browser control**?
* Standardize on Playwright (dev) + a no-code tool (ops)
* Require data owner sign-off and ToS/legal review for scraping
* Use resilient locators (test IDs), retries, and visual checks; monitor flake rate

This playbook provides guidelines for effective and compliant web browser control.
42
What are the **strengths** of **agent observability & evals**?
* Full trace of prompts, tool calls, tokens, costs
* Regression checks with golden datasets

These strengths facilitate easier debugging and measurable quality over time.
43
What are the **weaknesses** of **agent observability & evals**?
* Extra infra + vendor lock-in risk
* Instrumentation adds latency
* Building good eval sets is labor-intensive

These weaknesses can complicate implementation and maintenance.
44
What **opportunities** exist for **agent observability & evals**?
* Policy/audit readiness (SOX/ISO/FAA-style traceability)
* Auto-alerts on drift, prompt regressions, or cost spikes

These opportunities can enhance compliance and proactive monitoring.
45
What are the **threats** to **agent observability & evals**?
* Sensitive data in logs
* Upstream model changes invalidate historical baselines

These threats can lead to data privacy issues and unreliable evaluations.
46
What are the **strengths** of **workflow orchestration (non-AI)**?
* Reliable scheduling, retries, backoffs, SLAs for long/async jobs
* Clear DAGs make dependencies visible

These strengths improve operational efficiency and clarity.
47
What are the **weaknesses** of **workflow orchestration (non-AI)**?
* Learning curve
* Overkill for small projects or simple cron jobs

These weaknesses may deter adoption for simpler tasks.
48
What **opportunities** exist for **workflow orchestration (non-AI)**?
* Hybrid pipelines (ETL → agent → QA → publish)
* Cost control via batching/fan-out/fan-in

These opportunities can optimize resource usage and enhance workflow efficiency.
49
What are the **threats** to **workflow orchestration (non-AI)**?
* Single point of failure if poorly deployed
* Cloud costs creep with always-on workers

These threats can lead to increased operational risks and costs.
50
What are the **strengths** of **knowledge / RAG plumbing**?
* Contextual answers tied to your docs
* Modular: loaders, chunkers, rerankers, caches

These strengths enhance the accuracy and relevance of information retrieval.
51
What are the **weaknesses** of **knowledge / RAG plumbing**?
* Retrieval quality highly data-dependent
* Ongoing ingestion and access-control governance needed

These weaknesses can affect the reliability of the system.
52
What **opportunities** exist for **knowledge / RAG plumbing**?
* Hybrid search (BM25 + vectors + rerankers)
* Domain adapters (specs, IPC/DO-178, internal SOPs)

These opportunities can significantly improve search accuracy.
53
What are the **threats** to **knowledge / RAG plumbing**?
* Data leakage via embeddings or mis-scoped indices
* Schema drift in sources breaks pipelines

These threats can compromise data security and system integrity.
54
What are the **strengths** of **secrets & configuration**?
* Centralized rotation, audit logs, and scoped access
* Templated per-env configs reduce misconfig bugs

These strengths enhance security and reduce configuration errors.
55
What are the **weaknesses** of **secrets & configuration**?
* Initial setup friction
* Secret sprawl when projects multiply

These weaknesses can complicate management and adoption.
56
What **opportunities** exist for **secrets & configuration**?
* Just-in-time credentials; short-lived tokens for agents
* Policy-as-code for data egress and key usage

These opportunities can improve security and compliance.
57
What are the **threats** to **secrets & configuration**?
* Vault compromise is catastrophic
* Hard-coded fallbacks linger in legacy repos

These threats can lead to severe security breaches.
58
What are the **strengths** of **queueing & concurrency control**?
* Smooths spikes; isolation of slow/flaky integrations
* Idempotency + retries = resilient external API usage

These strengths enhance system reliability and performance.
59
What are the **weaknesses** of **queueing & concurrency control**?
* Operational overhead (dead-letter queues, monitoring)
* Requires disciplined job design to avoid duplication

These weaknesses can increase complexity and maintenance efforts.
60
What **opportunities** exist for **queueing & concurrency control**?
* Priority lanes (human-facing vs. batch)
* Rate-limit orchestration across multiple vendors/models

These opportunities can optimize resource allocation and improve performance.
61
What are the **threats** to **queueing & concurrency control**?
* Poison messages clogging queues
* Vendor outages causing cascading retries/costs

These threats can disrupt operations and increase costs.
62
What are the **strengths** of **data quality & linting (prompts + code)**?
* Consistent style, safety filters, and best practices enforced automatically
* Prevents debt: catches smells before merge

These strengths help maintain code quality and prevent issues.
63
What are the **weaknesses** of **data quality & linting (prompts + code)**?
* False positives drain attention
* Hard to encode higher-level design constraints

These weaknesses can lead to frustration and inefficiencies.
64
What **opportunities** exist for **data quality & linting (prompts + code)**?
* Prompt registries with versioning, tests, and red-team checklists
* Repo-wide codemods for API migrations or security patches

These opportunities can streamline development processes.
65
What are the **threats** to **data quality & linting (prompts + code)**?
* Teams “green-bar game” the gate
* Tooling lock-in via proprietary configs

These threats can undermine the effectiveness of linting tools.
66
What are the **strengths** of **desktop / RPA complements**?
* Automates native apps, file dialogs, legacy systems
* Great for operations teams without heavy dev support

These strengths enhance productivity and reduce manual effort.
67
What are the **weaknesses** of **desktop / RPA complements**?
* Fragile against UI updates
* Licensing can be pricey at scale

These weaknesses can limit the effectiveness and scalability of RPA solutions.
68
What **opportunities** exist for **desktop / RPA complements**?
* Human-in-the-loop checkpoints for tricky steps
* Replace swivel-chair tasks (portals, CSV wrangling)

These opportunities can enhance automation capabilities.
69
What are the **threats** to **desktop / RPA complements**?
* Compliance/ToS pitfalls
* OS patches break robots at the worst time

These threats can create operational risks and compliance issues.
70
What are the **strengths** of **snippet / launcher utilities**?
* Huge local productivity: instant commands, snippets, and app control
* Extensible with plugins

These strengths can significantly enhance user efficiency.
71
What are the **weaknesses** of **snippet / launcher utilities**?
* Personalization → uneven team standardization
* Not ideal for heavy data transformation or long-running jobs

These weaknesses can hinder team collaboration and effectiveness.
72
What **opportunities** exist for **snippet / launcher utilities**?
* Shared snippet packs (BOM boilerplate, ticket macros)
* Bridge to agents via quick actions and command palettes

These opportunities can improve collaboration and efficiency.
73
What are the **threats** to **snippet / launcher utilities**?
* Sensitive text in snippets; sync risk
* Plugin ecosystem volatility

These threats can lead to security vulnerabilities and instability.
74
What are the **strengths** of **API testing & contract tools**?
* Early detection of breaking changes
* Contract testing reduces cross-team friction

These strengths enhance development efficiency and collaboration.
75
What are the **weaknesses** of **API testing & contract tools**?
* Can diverge from end-to-end reality if contracts are too strict
* Test sprawl without maintenance

These weaknesses can lead to ineffective testing and increased maintenance burdens.
76
What **opportunities** exist for **API testing & contract tools**?
* Tie agent tools to mocked integrations for safe sandboxes
* CI gates on contract drift

These opportunities can improve testing accuracy and reliability.
77
What are the **threats** to **API testing & contract tools**?
* False confidence if coverage is shallow
* Vendor schema changes outside your control

These threats can undermine the effectiveness of API testing.
78
What are the **strengths** of **packaging & environment management**?
* Reproducible builds; parity between dev/CI/prod
* Fast onboarding; fewer “works on my machine” issues

These strengths enhance consistency and reduce onboarding time.
79
What are the **weaknesses** of **packaging & environment management**?
* Image bloat and long CI times if unmanaged
* Multi-language stacks add toolchain complexity

These weaknesses can complicate management and increase build times.
80
What **opportunities** exist for **packaging & environment management**?
* Slim, SBOM-aware images; provenance for supply-chain trust
* Per-project tool shims for painless switching

These opportunities can enhance security and flexibility.
81
What are the **threats** to **packaging & environment management**?
* Supply-chain attacks via base images
* Orphaned containers & cache costs

These threats can lead to security vulnerabilities and increased costs.
82
What is the **first step** to deploy in the **90-day cut**?
Secrets + packaging baseline (Vault/Doppler + Docker + uv/Poetry)

This step establishes a secure and reproducible environment.
83
What is the **second step** to deploy in the **90-day cut**?
Orchestrator (Prefect) + queues (Redis/RQ) + observability (LangSmith/Promptfoo)

This step enhances workflow management and monitoring.
84
What is the **third step** to deploy in the **90-day cut**?
RAG MVP (PGVector + reranker) with data quality gates (Semgrep/Sonar)

This step focuses on improving data retrieval and quality.
85
What is the **fourth step** to deploy in the **90-day cut**?
Browser/desktop automations for one high-value process; add API contract tests

This step aims to automate key processes and ensure API reliability.
86
What is the purpose of the **LLM** in the decision-making process for selecting an agentic stack?
To extract requirements and lock the decision surface

The LLM acts as a facilitator, analyst, and scribe throughout the process.
87
List the **core categories** typically considered when defining a stack taxonomy.
* Agent orchestration/runtime
* Model layer
* Retrieval/RAG
* Tools
* Workflow orchestration
* Queues
* Observability & evals
* Secrets/config
* State & memory
* Safety & policy
* Packaging/DevEx
* Governance

These categories help in organizing the selection of products for the agentic stack.
88
What are the **non-functional requirements** to consider when selecting an agentic stack?
* Reliability targets (SLOs)
* Cost ceilings
* Governance/PII
* Deployment environments
* IP restrictions

These requirements ensure the stack meets operational and compliance standards.
89
What does the **MoSCoW** prioritization method stand for?
* Must have
* Should have
* Could have
* Won't have

This method helps prioritize requirements based on their importance.
90
What is the output of the **requirements extraction** process?
YAML for requirements; a concise decision surface summary

This output provides a structured format for further analysis.
91
Define the **weighted scoring rubric** criteria for evaluating agentic stacks.
* Reliability (0.18)
* Safety/compliance (0.15)
* Fit to workloads (0.15)
* DevEx & maintainability (0.12)
* Cost & lock-in (0.12)
* Performance (0.10)
* Integrations (0.10)
* Observability/evals (0.08)

Each criterion has a weight that reflects its importance in the overall evaluation.
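A minimal sketch of applying these rubric weights in plain Python; the 0–5 ratings assigned to the candidate are illustrative:

```python
# Rubric weights from the card; they sum to 1.0, so a weighted total
# stays on the same 0-5 scale as the raw ratings.
WEIGHTS = {
    "reliability": 0.18, "safety_compliance": 0.15, "workload_fit": 0.15,
    "devex_maintainability": 0.12, "cost_lock_in": 0.12,
    "performance": 0.10, "integrations": 0.10, "observability_evals": 0.08,
}

def weighted_total(scores: dict[str, float]) -> float:
    """scores maps each criterion to a 0-5 rating for one candidate."""
    return round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS), 3)

candidate = {c: 4 for c in WEIGHTS}  # a product rated 4/5 on every criterion
print(weighted_total(candidate))     # → 4.0, since the weights sum to 1.0
```

Running this per candidate produces the weighted totals that the decision matrix compares.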
92
What is the purpose of the **decision matrix** in the selection process?
To score products against evaluation criteria and compute a weighted total per product

This matrix helps visualize the strengths and weaknesses of each candidate.
93
What are the three **scenario tests** to run for stress testing the selected stack?
* 100 parallel agents performing browser + RAG tasks with 95th-percentile latency < 10 s
* Strict PII governance (EU residency, audit logs, redaction)
* Brown-out of the primary LLM provider; the system must degrade gracefully

These tests assess the stack's performance under various conditions.
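The latency check in the first scenario can be sketched with the standard library; the per-run latency values below are made up:

```python
import statistics

# Hypothetical per-run latencies (seconds) from 100 parallel agent runs.
latencies = [1.2] * 90 + [4.0] * 9 + [12.5]  # one slow outlier

# quantiles(n=100) returns 99 cut points; index 94 is the 95th percentile.
p95 = statistics.quantiles(latencies, n=100)[94]
print(f"p95 = {p95:.2f}s (target < 10s: {p95 < 10})")
```

Note how the p95 target tolerates the single outlier that a max-latency target would flag.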
94
What should the **reference architecture** include?
* Agent graph design
* State/memory strategy
* Workflow + queue layering
* Observability
* Secrets & policy gates

This architecture outlines the logical design and components of the selected stack.
95
What is included in the **policy pack** generated for governance and safety?
* Data handling (PII classes, redaction, retention)
* Tool governance (allow/deny, approval workflow, audit)
* Prompt/version control (registry, rollbacks, pinning)
* Cost guardrails (budget caps, per-run limits, fallback ladders)
* Incident playbooks (model regression, API outage, data leak)

This pack provides guidelines for managing data and tools within the system.
96
What is the final deliverable called that summarizes the stack selection process?
Architecture Decision Record (ADR)

The ADR includes context, options considered, scores, decision, consequences, and rollback plan.
97
True or false: The **minimum viable** combination of tools should be proposed first to avoid over-tooling.
TRUE

This approach helps ensure that only necessary tools are included in the stack initially.
98
What is a potential risk associated with **agent sprawl**?
Lack of a registry for agents

A registry helps manage agents by documenting their purpose, inputs, tools, and owners.
99
What should be enforced to prevent **hidden costs** in the agentic stack?
A cost model (requests × tokens × retries) and a budget-to-fail plan

This ensures that costs are monitored and controlled throughout the project.
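A back-of-envelope sketch of that cost model; all prices, counts, and the budget cap are illustrative assumptions, not real vendor pricing:

```python
# requests × tokens × retries, with retries modeled as an average rate.
def run_cost(requests: int, tokens_per_request: int, retry_rate: float,
             price_per_1k_tokens: float) -> float:
    effective_requests = requests * (1 + retry_rate)       # retries re-spend tokens
    total_tokens = effective_requests * tokens_per_request
    return round(total_tokens / 1000 * price_per_1k_tokens, 2)

BUDGET_CAP = 50.0  # budget-to-fail: abort the run past this spend

cost = run_cost(requests=10_000, tokens_per_request=1_500,
                retry_rate=0.1, price_per_1k_tokens=0.002)
print(cost, cost <= BUDGET_CAP)  # → 33.0 True
```

Checking projected cost against `BUDGET_CAP` before launch is the "budget-to-fail" part: the run refuses to start (or halts) rather than overspend.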