Complete Testing Flashcards

(156 cards)

1
Q

Why is testing described as an investigative process rather than a verification process?

A

Because its primary purpose is generating information about risk to inform decisions—not proving correctness.

2
Q

Can testing prove a program is bug-free?

A

No. Testing can only show the presence of bugs—never guarantee their absence.

3
Q

You’ve run 500 tests and they all pass. A stakeholder asks if the software is bug-free. How do you respond?

A

No—passing tests only prove those specific scenarios work. Untested paths, edge cases, and unknown conditions remain unverified.

4
Q

Why is verification testing necessary but insufficient on its own?

A

It confirms requirements are met (necessary) but only finds problems you already thought to look for (insufficient for unknowns).

5
Q

Your test suite covers all documented requirements and passes. A user reports a crash. What type of testing might have caught this?

A

Investigative (exploratory) testing—asking “what if” questions beyond the requirements checklist.

6
Q

How do verification and investigation testing complement each other?

A

Verification ensures known requirements work. Investigation discovers unknown problems. Mature products need both.

7
Q

What is a Test Oracle?

A

A mechanism for determining whether a test passed or failed—the source of expected behavior.

8
Q

You’re writing a test but realize you don’t know what “correct” output looks like. What concept are you missing?

A

A test oracle—without knowing expected behavior you cannot determine pass/fail.

9
Q

Why do some tests lack clear oracles?

A

When correct behavior is subjective (UX quality) or undefined (exploratory edge cases). These require human judgment or heuristics.

10
Q

Why are there two fundamental approaches to test case creation?

A

Because you can either test from external behavior (specification-based) or internal structure (code-based)—each reveals different issues.

11
Q

How do black-box and white-box testing differ in what they can reveal?

A

Black-box finds behavior that doesn’t match specifications. White-box finds code paths that aren’t exercised or have internal issues.

12
Q

You’re testing an API by sending requests and checking responses, with no access to source code. What approach is this?

A

Black-box (specification-based) testing.

13
Q

You examine a function’s code, notice an edge case in a conditional branch, and write a test for it. What approach is this?

A

White-box (code-based) testing.

14
Q

Why can specification-based tests be developed before code is complete?

A

Because they’re independent of implementation—they only need to know expected inputs and outputs.

15
Q

Why does specification-based testing often leave coverage gaps?

A

You can’t see which code paths remain untested—a requirement may be “covered” while error-handling branches are never exercised.

16
Q

Why does code-based testing enable coverage metrics that specification-based cannot?

A

Because you can instrument the code to measure which lines/branches tests actually execute.

17
Q

Your black-box tests all pass, but a bug exists in an error-handling branch. Why might black-box testing miss this?

A

Black-box testing can’t see internal code paths—it may never exercise error handling if specifications don’t cover those scenarios.

18
Q

When should you combine both testing approaches?

A

Most projects benefit from both: specification-based for requirement coverage and acceptance criteria; code-based for finding untested paths.

19
Q

How does the ‘postal service vs. phone call’ analogy explain Selenium vs. Playwright speed?

A

Selenium (WebDriver) is like mailing letters—each command requires a round-trip. Playwright (CDP) is like an open phone line—continuous bidirectional communication.

20
Q

Why does the communication protocol matter so much for test performance?

A

Tests send many commands. Per-command latency accumulates. Lower latency protocols (CDP, WebSocket) dramatically reduce total execution time.
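The accumulation can be sketched with back-of-envelope arithmetic. The latency figures below are illustrative assumptions, not benchmarks:

```python
# Illustrative per-command costs (assumed, not measured)
commands = 200                 # a mid-sized UI test can send hundreds of commands
http_round_trip_ms = 50        # assumed WebDriver-style HTTP request/response cost
ws_message_ms = 5              # assumed CDP-style cost over a persistent WebSocket

http_total_s = commands * http_round_trip_ms / 1000
ws_total_s = commands * ws_message_ms / 1000
print(f"HTTP: {http_total_s:.1f}s, WebSocket: {ws_total_s:.1f}s")  # HTTP: 10.0s, WebSocket: 1.0s
```

The per-command difference looks trivial; multiplied across every command in every test, it dominates total suite time.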

21
Q

Where does Playwright sit in relation to the browser—inside or outside?

A

Outside. Playwright controls the browser externally via WebSocket. Only Cypress runs inside the browser.

22
Q

Why does Cypress running inside the browser create both advantages and limitations?

A

Advantages: direct state access, synchronous DOM manipulation. Limitations: same-origin restrictions, shared execution context reduces isolation.

23
Q

Which browser automation tool provides maximum browser coverage, and why?

A

Selenium—WebDriver is a W3C standard implemented by all major vendors including Safari and older browsers.

24
Q

Your app must work in Safari. Which tools can you use?

A

Selenium (full Safari support) or Playwright (WebKit engine approximation). Cypress has limited Safari support.

25
How do the three major tools differ in their browser interaction architecture?
Selenium: WebDriver over HTTP (remote). Playwright: DevTools Protocol over WebSocket (remote). Cypress: runs inside browser (local).
26
Why is Selenium described as offering maximum freedom with maximum responsibility?
It provides low-level control with few guardrails—teams must explicitly manage waiting, browser differences, drivers, and timing.
27
Why does Selenium's flexibility create maintenance overhead?
Without enforced patterns every team invents their own waiting strategies and abstractions—leading to inconsistency and technical debt.
28
Why is Selenium's browser setup more complex than Playwright's?
Selenium requires separate driver downloads and version management. Playwright bundles everything.
29
You're setting up Selenium and get "driver version mismatch" errors. What architectural decision causes this?
Selenium separates browser drivers from the library—you must manually synchronize chromedriver/geckodriver versions with browsers.
30
Why does Playwright's auto-waiting improve both reliability and readability?
Every method automatically waits for actionability—no explicit wait code clutters tests and no timing issues from forgotten waits.
31
Why is browser version determinism valuable for CI/CD?
Pinning the Playwright version (in a requirements file) also pins the browser builds it bundles and is certified against. Local and CI runs therefore exercise identical browser versions and should produce identical results.
32
What is the risk of testing against version-locked browsers?
Missing bugs that appear on newest browser versions. However risk is low because browsers prioritize backward compatibility.
33
Your Playwright tests pass locally but a user on Chrome beta reports a bug. What trade-off does this illustrate?
Version-locked browsers provide determinism but may miss bleeding-edge issues. The trade-off favors reliability over catching rare early-adopter bugs.
34
Why would a team choose Cypress despite its browser limitations?
Exceptional debugging experience (time-travel and DOM snapshots) and seamless JavaScript integration outweigh limitations for Chrome-only projects.
35
Why does Cypress's shared execution context create scaling challenges?
Tests may develop hidden coupling through implicit shared state that only surfaces as the suite grows.
36
Your Cypress tests pass individually but fail when run together. What likely causes this?
Implicit shared state—tests are coupled through data that persists between them in the shared execution context.
37
Why can't Cypress easily test cross-origin scenarios?
Running inside the browser means same-origin restrictions apply—Cypress can't freely navigate between different domains.
38
What makes Playwright a "middle ground" between Selenium and Cypress?
Strong defaults and fast interactions like Cypress; explicit control and architectural separation like Selenium.
39
You're choosing a browser automation tool. What three questions should you ask first?
1) What browsers must you support? 2) What language does your team use? 3) How important is debugging vs. architectural flexibility?
40
Your team debates Selenium vs. Playwright. What's the strongest maintenance argument for Playwright?
Playwright's opinionated defaults reduce decision fatigue and inconsistency—the cognitive load that causes Selenium suite quality to degrade over time.
41
You need Safari testing, have a JavaScript team, and want great debugging. How do you prioritize?
Safari is a hard requirement (eliminates Cypress). Choose Selenium (full Safari) or Playwright (WebKit), depending on whether exact Safari behavior or maintainability matters more.
42
Why is isolation the prerequisite for parallelization?
Only isolated tests can run simultaneously without interference. Shared state causes unpredictable failures when tests execute concurrently.
43
Your tests pass serially but fail randomly under pytest -n 4. What's almost certainly the cause?
Test isolation failure—tests share state and interfere when running simultaneously.
44
Why does parallel execution reveal hidden problems that sequential runs miss?
Shared resources get accessed simultaneously—exposing race conditions and coupling that sequential runs accidentally avoid through timing.
45
Why must each test create its own uniquely-identified data?
So tests never collide—parallel tests accessing the same database row or account will corrupt each other's state.
46
Test A creates user "testuser" and Test B also uses "testuser". They pass alone but fail together. How do you fix this?
Each test creates uniquely-named data (e.g. testuser_{uuid}) so they never share resources.
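A minimal sketch of unique-data generation (the helper name is hypothetical):

```python
import uuid

def unique_username(prefix: str = "testuser") -> str:
    # A short random suffix guarantees parallel tests never share a user
    return f"{prefix}_{uuid.uuid4().hex[:8]}"

print(unique_username())  # e.g. testuser_3f9a1c2e
```

Every test that calls this gets its own row, account, or record, so parallel workers can never collide on shared data.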
47
Why can't you just "be careful" instead of designing for isolation?
Human vigilance doesn't scale. As suites grow subtle dependencies become impossible to track. Architecture must enforce isolation.
48
How does parallelization affect execution time as suites scale?
Constant speedup regardless of size. 8 workers ≈ 8x faster whether running 50 or 500 tests.
49
Why is efficient scaling the most important architectural advantage?
QA team won't grow proportionally with test count. Without parallelization execution time becomes the CI/CD bottleneck.
50
Your 50-test suite takes 5 minutes. You expect 500 tests next year. How does parallelization change the math?
Without parallelization: ~50 minutes (linear growth). With 8 workers: still ~6 minutes.
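The card's numbers worked through, assuming a perfectly even split across workers:

```python
per_test_s = 5 * 60 / 50          # 50 tests in 5 minutes = 6 seconds per test
workers = 8

for n_tests in (50, 500):
    serial_min = n_tests * per_test_s / 60
    parallel_min = serial_min / workers   # idealized even distribution
    print(f"{n_tests} tests: serial {serial_min:.0f} min, parallel {parallel_min:.2f} min")
```

Serial time grows linearly with test count; parallel time grows by the same factor divided by worker count, which is why the 500-test suite stays near 6 minutes.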
51
Why should test isolation be designed in from the start rather than retrofitted?
Retrofitting isolation requires rewriting tests that assumed shared state—often the majority of a legacy suite.
52
Why does Playwright use browser contexts rather than separate browser processes for isolation?
Browser launches take seconds; context creation takes milliseconds. Same isolation much faster.
53
How does a new Context differ from a new Page in Playwright?
A new Page shares state within its Context (same cookies). A new Context provides complete isolation.
54
In the apartment building analogy, what does each level represent?
Browser = building (one process). Context = apartment (isolated session). Page = room (tab within apartment). Apartments share nothing.
55
Why is sharing a live browser context across tests risky?
Tests can mutate cookies/storage/page state—causing unpredictable failures and order-dependent behavior.
56
You could launch a fresh browser for each test to guarantee isolation. Why is this overkill?
BrowserContext already provides complete isolation. Browser launches are expensive; context creation is cheap.
57
Why do parallel tests not interfere with each other's authentication state?
Each test runs in its own context with isolated cookies and storage. They never see each other's session data.
58
How does Playwright's storageState solve the speed-vs-isolation dilemma for authentication?
Log in once and save cookies to a file. Load that file into fresh isolated contexts—each test gets isolation AND pre-authentication.
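A sketch of the save/load cycle using plain JSON. The file shape here is a simplified assumption; the real file is produced by Playwright's `context.storage_state(path=...)` and consumed via `browser.new_context(storage_state=...)`:

```python
import json
import os
import tempfile

# Hypothetical, simplified shape of a saved auth state
state = {"cookies": [{"name": "session", "value": "abc123"}], "origins": []}

path = os.path.join(tempfile.mkdtemp(), "auth.json")
with open(path, "w") as f:
    json.dump(state, f)            # "log in once, save the cookies"

with open(path) as f:              # each fresh, isolated context loads the file
    loaded = json.load(f)

print(loaded["cookies"][0]["name"])  # session
```

Because every context loads from the file rather than sharing a live session, each test starts authenticated without being able to pollute any other test's state.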
59
Why is reusing saved auth state safer than sharing a logged-in context?
Each test gets its own isolated context (can't pollute others) while starting pre-authenticated (fast).
60
When should authentication state NOT be shared between tests?
When testing login/logout flows or role-based authorization—anything where auth state is the test subject.
61
You need to test both admin and regular user permissions in the same test. How do you structure this?
Create two separate contexts with different storageState files. Both exist simultaneously with complete isolation.
62
How does the optimized authenticated fixture architecture work?
Session-scoped browser (one launch) + session-scoped auth_state (one login saved) + function-scoped context (isolated but pre-authenticated per test).
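The cost profile of this architecture can be simulated with stand-in objects (no browser required). In a real suite the stand-ins would be Playwright's Browser (expensive launch) and BrowserContext (cheap, isolated):

```python
class StubBrowser:
    """Stand-in for a real browser: expensive to launch, cheap contexts."""
    launches = 0

    def __init__(self):
        StubBrowser.launches += 1              # session scope: happens once

    def new_context(self, storage_state=None):
        # Function scope: each context gets its own isolated cookie jar
        return {"cookies": dict(storage_state or {})}

browser = StubBrowser()                                    # one launch per session
auth_state = {"session_cookie": "saved-after-one-login"}   # one login, saved once

# Three "tests", each with a fresh pre-authenticated context
contexts = [browser.new_context(storage_state=auth_state) for _ in range(3)]
contexts[0]["cookies"]["scratch"] = "test-local mutation"  # stays local

print(StubBrowser.launches)  # 1
```

One launch, one login, yet every test receives isolated, pre-authenticated state: the mutation in the first context never appears in the others.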
63
Why must test running and browser automation be separate responsibilities?
So changes in one domain (browsers) don't force refactors across the other (test organization). Separation enables independent evolution.
64
How does separating runner from automation help debugging?
You can identify whether failures come from test logic (your code), execution flow (runner), or browser interaction (automation library).
65
In the tournament analogy, what is pytest's role?
Tournament organizer—schedules matches (tests) and records results but doesn't play the games (browser interactions).
66
Why shouldn't a browser automation library decide which tests to run?
Test selection is an orchestration concern. Mixing concerns creates tight coupling that makes both harder to modify.
67
Why is tool replaceability important in test automation?
Technologies change. Separation lets you swap Playwright for something new without rewriting test structure and fixtures.
68
Your tests are well-isolated but tightly coupled to Playwright APIs throughout. Is this good architecture?
Partial. Isolation is good but tight coupling violates separation of concerns—makes tool replacement difficult.
69
Why does pytest use naming conventions for test discovery?
Convention over configuration—pytest finds test_*.py files and test_* functions automatically without manual registration.
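A simplified sketch of the default filename convention (the real behavior is configurable via pytest's `python_files` setting):

```python
from fnmatch import fnmatch

def is_default_test_file(name: str) -> bool:
    # pytest's default discovery convention: test_*.py or *_test.py
    return fnmatch(name, "test_*.py") or fnmatch(name, "*_test.py")

print(is_default_test_file("test_login.py"), is_default_test_file("helpers.py"))
```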
70
How does pytest know to run a fixture?
Test functions declare fixtures as parameters. pytest resolves the dependency graph and runs fixtures before tests.
71
Why is it significant that fixtures are your code not pytest code?
You control what fixtures do. pytest just orchestrates when they run. A fixture failure is your setup code failing.
72
A traceback points to a fixture and no assertions ran. What failed?
Setup failed inside the fixture or something it called—not the test body itself.
73
Your test fails on page.click(). Is this a pytest problem or a Playwright problem?
Playwright problem—pytest orchestrated running the test but Playwright executed the interaction that failed.
74
Why is pytest's declarative fixture system more powerful than explicit setup code in tests?
Declaring dependencies as parameters lets pytest build a dependency graph—enabling scoping decisions, reuse, proper teardown, and parallelization.
75
How does fixture scope control resource lifecycle?
Function scope = fresh per test. Session scope = once for entire run. Higher scopes reduce setup time but increase pollution risk.
76
Explain session vs. function scope time savings with concrete numbers.
Login takes 3 seconds. 50 tests. Function scope = 50 logins = 2.5 minutes. Session scope = 1 login = 3 seconds.
77
Why can't a session-scoped fixture depend on a function-scoped fixture?
The function-scoped dependency would be destroyed after the first test—breaking the session fixture that expected it to persist.
78
In a pytest fixture using yield, what happens before vs. after?
Before yield: setup runs. Yield: resource passed to test. After yield: teardown runs regardless of test outcome.
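The mechanics can be shown by driving a generator by hand, roughly what pytest does internally with a yield fixture:

```python
def database_fixture():
    db = {"connected": True}      # setup: runs before yield
    yield db                      # the value handed to the test
    db["connected"] = False       # teardown: runs after the test finishes

gen = database_fixture()
db = next(gen)                    # setup executes; the test receives db
assert db["connected"]            # ...test body would run here...
try:
    next(gen)                     # resuming past the yield runs teardown
except StopIteration:
    pass
print(db["connected"])            # False
```

pytest additionally guarantees the code after `yield` runs even when the test body raises, which is what makes this the idiomatic place for cleanup.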
79
Why might you intentionally use function scope for page navigation despite the time cost?
Clean page state per test—no leftover filters or modals. Trading seconds for isolation confidence.
80
You forget cleanup code after yield. What happens?
Resources accumulate: browser contexts stay open, connections leak, and memory grows until later tests slow down or fail.
81
Why does knowing the fixture dependency graph enable parallelization?
pytest-xdist can determine which tests share fixtures (coordinate) and which are independent (parallelize).
82
What is a "worker" in pytest parallel execution?
A separate Python process running a subset of tests concurrently with other workers.
83
What does pytest -n 4 do?
Splits the suite across 4 worker processes running in parallel.
84
If you run pytest without -n, how many workers?
One—a single process runs everything sequentially.
85
Why is worker-scoped setup useful?
Runs expensive setup once per worker rather than once per test—balancing speed with isolation.
86
Do workers run the same tests or different tests by default?
Different—each test runs once total distributed across workers.
87
Why might a team choose unittest over pytest despite fewer features?
It's built into Python—no additional dependency. Sufficient for simple projects with basic fixture needs.
88
Why does Robot Framework use keyword-driven syntax instead of pure code?
To enable non-programmers to read and write tests. The trade-off is added complexity for developer-centric teams.
89
Why does BDD create a maintenance burden?
Two synchronized codebases (feature files and step definitions)—changes in one often require changes in the other.
90
When is pytest better than BDD frameworks?
Developer-centric team and stakeholders care about results not test scenarios. The key question: who reads the tests?
91
Your stakeholders start asking to review test scenarios. You have a pytest suite. What are your options?
Layer BDD on top—step definitions become thin wrappers calling existing page objects. No rewrite needed.
92
How do you decide between pytest and BDD frameworks?
Ask: who needs to read the tests? Technical people → pytest. Non-technical stakeholders → BDD.
93
How do pytest and Playwright divide responsibilities?
pytest: orchestration (fixture lifecycle and scoping and dependencies). Playwright: execution (browser automation and isolation and auth persistence).
94
In the choreographer/dancer analogy, which is which?
pytest is the choreographer (decides what happens when). Playwright is the dancer (performs the movements).
95
Why is it dangerous for a browser automation tool to handle orchestration?
Blurs separation of concerns and makes failures harder to attribute.
96
You're debugging a test failure with an element selection error. Which tool's docs do you check?
Playwright—element selection is browser automation not test orchestration.
97
Both fixture scope and BrowserContext provide isolation. What's the difference?
Fixture scope: when resources are created/destroyed (lifecycle). BrowserContext: browser state isolation (cookies/storage). Different layers.
98
Separation of concerns and test isolation both keep things separate. How do they differ?
Separation of concerns = which tool handles what (architectural design). Test isolation = tests don't share state (runtime behavior).
99
Why is explicit context handling important in long-lived test suites?
Forces conscious state management. Implicit state leads to hidden dependencies that surface as bugs later.
100
Why is a CI matrix strategy better than looping through browsers in test code?
Matrix spawns independent jobs—parallel execution and clear failure isolation and separate reports. In-test loops create complexity.
101
How does a CI matrix approach change fixture architecture?
Fixtures return single objects. CI handles multi-browser by running the full suite multiple times with different browser parameters.
102
Your Firefox CI job fails but Chrome passes. How does the matrix approach help debugging?
Firefox has its own job with isolated logs and artifacts. No parsing combined output.
103
You could loop through browsers inside each test. Why is this worse?
Failures harder to isolate and parallelization lost and reports muddied and test code more complex.
104
Test isolation vs. separation of concerns: both "keep things separate." How do they differ?
Test isolation: tests don't share runtime state (data). Separation of concerns: tools have distinct responsibilities (design).
105
Fixture scope vs. context scope: both control "how long something lives." What's different?
Fixture scope: pytest's lifecycle management. Context scope: Playwright's isolation mechanism. Orchestration vs. execution.
106
BrowserContext vs. Page: both are Playwright concepts. When do you need which?
New Page: another tab sharing session. New Context: complete isolation (different user or clean state).
107
WebDriver vs. CDP: both control browsers. What's the fundamental difference?
WebDriver: request/response over HTTP (higher latency). CDP: bidirectional WebSocket (real-time and lower latency).
108
Verification vs. investigation testing: when do you use each?
Verification: confirming requirements (checklist). Investigation: discovering unknowns (exploration). Most products need both.
109
Function scope vs. session scope: what trade-off are you making?
Function: maximum isolation but slower (repeated setup). Session: maximum speed but pollution risk. Choose by cost and state sensitivity.
110
You're designing fixture scope for a resource. What questions determine the right scope?
How expensive is setup? How stateful is the resource? Can tests pollute each other through it?
111
A new team member asks why you use pytest + Playwright instead of just Cypress. How do you explain?
Separation of concerns—pytest handles orchestration (fixtures and parallelization) while Playwright handles browser automation. Cypress combines both limiting flexibility and isolation.
112
Why must you choose isolation strategy before choosing tools?
Tool capabilities constrain what isolation patterns are possible. Wrong tool choice can make good architecture impossible.
113
You're evaluating a new browser automation tool. What architectural questions matter most?
How does it achieve isolation? What's the communication protocol latency? How does it integrate with test runners? What browsers does it support?
114
Tests pass locally but fail in CI. Name three likely causes.
Environment differences (browser versions), timing/network variations, or execution order exposing isolation problems.
115
Tests pass individually but fail together. What's almost certainly wrong?
Shared state—tests are coupled through data that persists between them.
116
Tests fail randomly not consistently. What category of problem is this?
Flakiness—usually timing issues or race conditions or external dependencies.
117
Fixture setup fails and the test never runs. Where do you debug?
The fixture code or its dependencies—not the test body.
118
Browser clicks happen but nothing changes on screen. Test passes. What's wrong?
Missing assertions—the test didn't verify expected outcomes. Passing tests ≠ working features.
119
You're starting a new test suite. What decisions do you make in order?
1) Browser automation tool. 2) Test runner. 3) Fixture architecture. 4) Project structure. 5) CI integration.
120
Your suite has 200 tests taking 30 minutes. What's your first optimization?
Parallelization—ensure tests are isolated then add workers. 8 workers could reduce time to ~4 minutes.
121
You need both admin and regular user in one test. How do you structure it?
Two separate contexts with different storageState files. Both exist simultaneously with complete isolation.
122
Your page object is 500 lines. What pattern should you consider?
Component objects—extract reusable UI components (tables and modals and navigation) into separate classes.
123
The app adds a new user role. What's the minimal infrastructure change?
Add a new auth fixture/storageState for that role. Tests request it by parameter. No structural changes if architecture is sound.
124
What is the WebDriver protocol?
A W3C standard that defines how test code communicates with browsers via HTTP requests and JSON responses.
125
What is the Chrome DevTools Protocol (CDP)?
A protocol allowing direct communication with Chromium-based browsers via persistent WebSocket connections.
126
What is test flakiness?
When a test produces inconsistent results (sometimes passes and sometimes fails) without any code changes.
127
What is a test fixture in the general sense?
Any fixed state or setup that tests rely on—data or configuration or environment needed before tests run.
128
What is test discovery?
The process by which a test runner finds and identifies which tests exist in a codebase.
129
What is test collection in pytest?
The phase after discovery where pytest gathers tests into a tree structure and resolves fixture dependencies.
130
What is a test marker in pytest?
A decorator that attaches metadata to tests for filtering or special handling (like @pytest.mark.slow).
131
What is pytest-xdist?
A pytest plugin that enables parallel test execution by distributing tests across multiple worker processes.
132
What is the conftest.py file?
A pytest file for sharing fixtures across multiple test files without explicit imports. Fixtures defined there are auto-discovered.
133
What is a page object?
A design pattern that encapsulates a web page's elements and interactions into a single class.
134
What is test atomicity?
The principle that each test should be self-contained and independent—not relying on other tests running first.
135
What is the test pyramid?
A model suggesting many unit tests at the base and fewer integration tests and fewest UI tests at the top.
136
What is end-to-end (E2E) testing?
Testing that validates entire user workflows from start to finish through the actual UI.
137
What is a test assertion?
A statement that checks whether an expected condition is true—the mechanism that determines pass or fail.
138
What is test coverage?
A metric measuring what percentage of code (lines or branches or paths) is executed by tests.
139
What does headless browser mode mean?
Running a browser without a visible GUI—faster execution and suitable for CI/CD environments.
140
What is a locator in browser automation?
A strategy for finding elements on a page (CSS selector or XPath or text content or test ID).
141
What is the DOM?
Document Object Model—the browser's tree representation of HTML that automation tools interact with.
142
What is an explicit wait in browser automation?
Code that pauses execution until a specific condition is met (element visible or clickable or text appears).
143
Explicit waits vs. implicit waits: what's the key difference?
Explicit waits target specific conditions for specific elements. Implicit waits apply globally to all element lookups with a timeout.
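The explicit-wait pattern is a polling loop over a specific condition. A minimal stdlib sketch (the `element.is_displayed()` call in the comment is a hypothetical usage):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Explicit wait: poll a specific condition until true or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Usage sketch: wait_for(lambda: element.is_displayed(), timeout=10)
```

An implicit wait, by contrast, would apply one global timeout to every element lookup with no per-condition control.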
144
Test flakiness vs. test failure: how do they differ?
Failure is consistent (always fails for a reason). Flakiness is inconsistent (sometimes passes and sometimes fails)—harder to diagnose.
145
pytest markers vs. pytest fixtures: what's the distinction?
Markers attach metadata for filtering or behavior modification. Fixtures provide setup/teardown and dependency injection.
146
conftest.py vs. a regular test file: what makes conftest special?
Fixtures in conftest.py are auto-discovered and shared across all tests in that directory tree. Regular files require explicit imports.
147
Test discovery vs. test collection: what happens in each phase?
Discovery finds test files and functions. Collection builds the dependency graph and resolves which fixtures each test needs.
148
Page object vs. component object: when do you use each?
Page objects represent entire pages. Component objects represent reusable UI elements (tables or modals) that appear across multiple pages.
149
E2E tests vs. integration tests: what's the scope difference?
E2E tests validate complete user workflows through the UI. Integration tests verify that components work together but may skip the UI.
150
Headless vs. headed browser mode: what are the trade-offs?
Headless is faster and works in CI. Headed shows the browser for debugging but requires a display and runs slower.
151
CSS selectors vs. XPath: when might you prefer each?
CSS selectors are faster and more readable for most cases. XPath can traverse up the DOM and match text content directly.
152
Auto-waiting vs. explicit waiting: what's Playwright's approach?
Playwright builds auto-waiting into every action, so no wait code is needed. Selenium requires explicit waits coded manually.
153
Test coverage vs. test quality: why aren't they the same?
High coverage means code was executed but not necessarily validated. Tests without meaningful assertions can achieve coverage without quality.
154
Why do UI tests sit at the top of the test pyramid?
They're slowest and most brittle. The pyramid suggests investing more in faster and more stable lower-level tests.
155
Why is headless mode preferred in CI/CD pipelines?
No display required and faster execution and lower resource usage. Visual debugging isn't needed in automated pipelines.
156
Why do test frameworks separate markers from fixtures?
Different concerns: markers classify and filter tests while fixtures manage resources and dependencies. Separation keeps both simpler.