Testing Fundamentals Flashcards

(130 cards)

1
Q

What is Testing really about?

A

Testing is an investigative process that generates information about risk to inform decisions

2
Q

Does testing eventually show the absence of bugs in a program?

A

No, testing can only show the presence of bugs, never guarantee their absence

3
Q

What does Testing actually accomplish?

A
1. How a system behaves under various conditions.
2. Where a system’s weak points are found.
3. What risk exists in the use of the system.
4. What assumptions about the system turned out to be wrong.
4
Q

Is Testing attempting to prove something?

A

No, testing is used to gather evidence to inform understanding

5
Q

Describe the “Checklist Mindset” of testing.

A

Testing as verification. Does it meet requirements? Follows a script. Binary: pass/fail. Goal: confirm the system works as specified

6
Q

With Verification testing, what is the goal?

A

The goal of verification testing is to show the system works as specified

7
Q

Describe the “Explorer Mindset” of testing.

A

Testing as investigation. What can I learn about the system? Asks “what if”, “how does it handle”, “what happens when”. Discovers unexpected behaviors. Goal: generate useful information about the system

8
Q

What is the Goal of Investigative testing?

A

The goal of Investigative testing is to generate useful information about the system

9
Q

What is the drawback of only using Verification testing?

A

Verification testing only finds the problems you already thought to look for

10
Q

What is a Test Oracle?

A

A Test Oracle is a mechanism used to decide whether a test passed or failed

11
Q

Name two fundamental approaches to test case creation.

A

1) Specification-based 2) Code-based

12
Q

Describe the concept of black-box testing.

A

Black-box testing is testing in which the implementation (how something happens) is not known; the code is understood entirely in terms of its inputs and outputs.

13
Q

What are the benefits of Specification-based testing?

A

Independent of implementation, so implementation can change. Test cases can be developed in parallel with code development

14
Q

What are the drawbacks of Specification-based test cases?

A

Specification-based test cases often contain significant redundancy and leave gaps of untested software

15
Q

What is the idea of White-box or Clear-box testing?

A

White-box or Clear-box testing is testing while knowing how the code is implemented and using that knowledge to inform the test cases.

16
Q

Associate Specification-based and Code-based testing with the “box” model.

A

Specification-based is black-box and Code-based is white-box

17
Q

What does Code-based testing provide and why?

A

Code-based testing provides the ability to track test coverage metrics due to its strong theoretical basis.

18
Q

Why does Playwright use one browser instance per worker instead of one per test?

A

Browser launches are expensive (they take seconds), while creating new contexts is fast (milliseconds). One instance per worker minimizes the expensive launches while maintaining complete test isolation through separate contexts for each test.

19
Q

What is the relationship between browser contexts and authentication sessions?

A

Sessions are an application concept: the server keeps the session data and identifies it by a session cookie. When you log in, that session cookie is stored in the browser context. When the context is destroyed, the session cookie is lost. Each new context starts with no session data.

20
Q

Why don’t two tests running in parallel interfere with each other’s authentication state?

A

Because each test runs in its own browser context with isolated cookies, localStorage, and cache. They never see each other’s data, even if using the same browser instance.

21
Q

What efficiency advantage does Playwright’s instance/context model provide?

A

Instead of launching a new browser for every test (which takes seconds each time), Playwright launches one browser per worker and creates lightweight contexts for each test. This dramatically reduces startup overhead.

22
Q

When comparing Selenium, Playwright, and Cypress, which provides the maximum browser coverage and why?

A

Selenium provides the maximum browser coverage because it uses the WebDriver protocol, which is standardized and implemented by all major browser vendors including Safari. It also supports older browser versions.

23
Q

What are three strong architectural arguments for choosing Playwright?

A

Browser version determinism prevents “worked yesterday, broke today” scenarios. Auto-waiting reduces cognitive load and makes tests more readable. Efficient scaling architecture through fast context creation and built-in parallelization keeps feedback loops short as the suite grows.

24
Q

How does parallelization affect test suite execution time as the suite scales?

A

Parallelization provides a constant speedup factor regardless of suite size. With 8 workers, you get roughly 8x speedup whether running 65 tests or 500 tests. Without parallelization, execution time grows linearly with test count.

25
What pattern must every test follow to enable parallel execution?
Each test must: 1) Create its own test data with unique identifiers, 2) Operate only on that specific data, 3) Clean up its own data afterward. This ensures complete data isolation.
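A stdlib-only sketch of that create/operate/clean-up shape (the store dict and helper names are invented stand-ins for real application data):

```python
import uuid

def unique_name(prefix: str) -> str:
    # Unique identifier so parallel tests never collide on the same record
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

# `store` stands in for the application's data layer
store = {}

def test_rename_user():
    user = unique_name("test-user")
    store[user] = {"name": user}              # 1) create own data
    store[user]["name"] = user + "-renamed"   # 2) operate only on that data
    assert store[user]["name"].endswith("-renamed")
    del store[user]                           # 3) clean up afterward
```

Two tests following this pattern can run in any order, in any worker, because they never touch the same record.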
26
Why is Playwright's API more readable than Selenium's for element interactions?
Playwright's auto-waiting is built into every method. page.click('#button') automatically waits for the element to be visible, enabled, and stable. Selenium requires explicit waits that must be coded manually for every interaction.
27
What is the risk of testing against version-locked browsers?
Missing bugs that early adopters encounter on the very latest browser versions. However, this risk is low because browsers are extremely careful about backward compatibility, and most browser update issues affect test infrastructure rather than user-facing functionality.
28
Why is Selenium's browser setup more complex than Playwright's?
Selenium requires separate driver downloads, version compatibility management between driver and browser, path configuration, and cleanup of driver processes. Playwright bundles everything with one command.
29
In what scenario would Cypress be the best tool choice despite its limitations?
When you only need Chrome-family browser coverage, don't require cross-origin testing, value the exceptional debugging experience with time-travel and DOM snapshots, and your team uses JavaScript/TypeScript exclusively.
30
What does it mean that Cypress runs inside the browser's JavaScript context?
Cypress code executes as JavaScript within the browser's engine, giving direct access to the application's JavaScript variables, functions, and state. This enables direct state manipulation, easy network stubbing, and synchronous DOM access.
31
How does the 'postal service vs. phone call' analogy explain the speed difference between Selenium and Playwright?
Selenium's WebDriver protocol is like sending letters - each command requires a round-trip HTTP request/response. Playwright's CDP connection is like an open phone line - commands and responses flow continuously over a persistent WebSocket with minimal latency.
32
What is the relationship between test isolation and parallel execution?
Test isolation is the prerequisite for parallel execution. Only tests that are isolated can safely run in parallel. Without isolation, parallel tests will interfere with each other causing unpredictable failures. Isolation enables parallelization; parallelization enables efficient scaling.
33
Why is efficient scaling considered the most important architectural advantage?
Because QA team size is unlikely to grow proportionally with the test suite. As the suite grows, execution time becomes the bottleneck for CI/CD velocity. Parallelization keeps test time relatively constant as the suite grows, maintaining short feedback loops.
34
What is the main responsibility of a test runner in automated testing?
Discovering tests and controlling how, when, and in what order they are executed.
35
What kinds of questions does a test runner answer?
Which tests run, how they're grouped, how failures are handled, and how results are reported.
36
What is the primary responsibility of a browser automation library like Playwright or Selenium?
Translating test instructions into concrete browser actions such as clicks, typing, and waiting.
37
Why shouldn't a browser automation library decide which tests to run?
Because test selection and execution strategy are lifecycle concerns, not browser interaction concerns.
38
What does "separation of concerns" mean in test automation architecture?
Assigning distinct responsibilities to different tools so changes remain localized and manageable.
39
Why is it risky to combine test running and browser automation into a single "mega-tool"?
Changes in one domain like browsers can force widespread refactors across the entire test system.
40
How does separation of concerns reduce cognitive load?
It allows engineers to reason about and modify one component without understanding the entire system.
41
Why is tool replaceability important in test automation?
It allows teams to adapt to new environments or technologies without rewriting the entire test suite.
42
What role does pytest play in a Playwright-based test suite?
pytest acts as the test runner, managing execution flow, fixtures, parallelization, and reporting.
43
Why is it better to think in terms of "roles" instead of specific tools like pytest or Playwright?
Because tools can change, but architectural roles remain stable and transferable across ecosystems.
44
What kinds of problems are easier to debug when responsibilities are separated?
Determining whether failures come from test logic, setup, execution flow, or browser interaction.
45
Why do different parts of a test ecosystem evolve at different speeds?
Browsers change rapidly, while test execution patterns and organization tend to be more stable.
46
Which component should handle parallel execution, retries, tagging, and reporting?
The test runner, because these are execution-level concerns independent of browser behavior.
47
What is meant by the phrase "execution is not interaction"?
Deciding when tests run is separate from how a browser is manipulated during a test.
48
What is the core architectural difference between Selenium, Cypress, and Playwright?
How they interact with the browser: Selenium uses WebDriver via a remote protocol, Cypress runs inside the browser, and Playwright controls the browser via DevTools-level protocols from the outside.
49
Why is Selenium described as offering "maximum freedom"?
Because it gives low-level control with few guardrails, requiring teams to manage waiting, browser differences, drivers, and timing explicitly.
50
What is the main architectural cost of Selenium's flexibility?
Increased cognitive load, higher risk of flakiness, inconsistent patterns across tests, and greater maintenance overhead as the suite grows.
51
How does Cypress's execution model differ from Selenium and Playwright?
Cypress runs inside the browser and shares execution context with the application, rather than controlling the browser externally.
52
What architectural trade-off does Cypress make to improve developer experience?
It makes implicit decisions about timing, state, and execution context, reducing flexibility and isolation in exchange for ease of use.
53
Why can Cypress tests become harder to reason about at scale?
Because shared execution context and implicit state can lead to hidden coupling between tests.
54
What makes Playwright a "middle ground" between Selenium and Cypress?
It provides strong defaults and fast interactions like Cypress, while preserving explicit control and architectural separation like Selenium.
55
Why is explicit context handling important in long-lived test suites?
It forces developers to consciously manage state sharing, reducing accidental coupling and making failures easier to diagnose.
56
What is a Playwright browser context?
An isolated browser environment with its own cookies, storage, permissions, and session state.
57
What is the difference between a new page and a new context in Playwright?
A new page shares state within the same context; a new context provides a clean, isolated environment.
58
Why is sharing a live browser context across tests risky?
Because tests can mutate shared state, leading to unpredictable failures and flakiness, especially under parallel execution.
59
What does Playwright's "saved auth state" enable?
Reusing authenticated session data to seed new, isolated browser contexts without repeating the login flow.
60
Why is reusing saved auth state safer than sharing a logged-in browser?
Each test gets its own isolated context while still starting authenticated, preventing cross-test interference.
61
Which layer should control test discovery, execution order, and lifecycle?
The test runner like pytest, not the browser automation tool.
62
Why is it dangerous for a browser automation tool to handle orchestration responsibilities?
It blurs separation of concerns and makes failures harder to attribute and debug.
63
When should authentication state NOT be shared between tests?
When testing login/logout flows, role-based authorization, or any behavior where auth state itself is the subject of the test.
64
What is a “worker” in pytest parallel execution?
A separate Python process that runs a subset of tests in parallel with other processes (commonly via pytest-xdist).
65
What does pytest -n 4 do?
Splits the test suite across 4 workers so tests run in parallel, with each test generally running once total.
66
If you run pytest without -n, how many workers are there?
One process runs everything—effectively a single worker.
67
Why is “worker-scoped” setup useful?
It runs expensive setup once per parallel worker, improving speed while avoiding shared mutable state across workers.
68
Do workers run the same tests or different tests by default?
Different tests—each test runs once overall unless you intentionally repeat it.
69
Why is sharing a live browser context across tests risky, especially with parallel runs?
Because tests can mutate shared state (cookies/storage/auth), causing flaky failures and order-dependent behavior.
70
What is a test runner’s main job?
To discover tests, decide which ones run, execute them, manage setup/teardown, determine outcomes, and report results.
71
In one analogy, what is pytest?
A tournament organizer (or stage manager) that schedules and runs the tests and records what happens.
72
What does pytest do during “test discovery”?
It finds test files/functions/classes based on naming conventions and collection rules.
73
What does pytest control about test execution?
The lifecycle of running tests, including selection, ordering (unless parallelized), setup/teardown, stopping rules, and result handling.
74
How does pytest know to run a fixture?
A test function asks for it by name as a parameter; pytest resolves and executes that fixture before the test.
75
Are fixtures “pytest code” or “your code”?
Your code. pytest just orchestrates running it.
76
If a traceback points to a fixture and no assertions ran, what failed?
Setup failed inside the fixture (or something it called), not the test body.
77
Does pytest define what an assertion means?
No—Python defines assert; pytest enhances how assertion failures are captured and reported.
78
What does pytest contribute to assertions?
Rich failure output, status classification (fail/error), and integration into reporting.
79
What is NOT pytest’s responsibility in browser testing?
Clicking, waiting, element detection, interacting with the DOM—those belong to the browser automation library (e.g., Playwright).
80
What is NOT pytest’s responsibility in API testing?
Performing CRUD actions or understanding API behavior—those belong to your API clients/helpers and the application under test.
81
What’s the difference between “pytest failed” and “a fixture failed”?
pytest is the runner; a fixture failure is your setup code failing while pytest is doing its normal orchestration.
82
What does it usually mean if tests pass serially but fail under pytest -n 4?
A test isolation problem—tests are coupled through shared state and interfere when run in parallel.
83
Why does parallel execution often reveal hidden problems?
Because shared resources/state get used at the same time, exposing race conditions and unintended coupling.
84
Name common sources of shared state that break isolation.
Shared accounts, shared DB rows, shared files, global variables/singletons, cached auth state reused incorrectly, shared browser contexts, and shared environment/config mutation.
85
What is the runner vs. tool separation in UI automation?
pytest runs and organizes tests; Playwright performs the browser actions.
86
Why is pytest a strong choice for SERCA-style automation?
Powerful fixture system, composability, plugin ecosystem, low ceremony, and it supports deliberate architecture instead of forcing one.
87
What are the three distinct phases of pytest's test execution workflow?
1) Discovery: finds test files (test_*.py), classes (Test*), and functions (test_*)
2) Collection: gathers tests into a tree, resolves which fixtures each test needs, applies markers and filters
3) Execution: runs setup → test → teardown for each test, captures results
88
What is conftest.py's actual purpose in pytest?
conftest.py is pytest's mechanism for sharing fixtures across multiple test files without explicit imports. Any fixture defined in a conftest.py is automatically available to tests in that directory and all subdirectories below it. It's not a configuration file in the traditional sense—it's a fixture sharing mechanism.
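A minimal sketch of that mechanism (the fixture name and value are invented):

```python
# conftest.py — any fixture defined here is injectable, with no import,
# into every test file in this directory and its subdirectories
import pytest

@pytest.fixture
def base_url():
    # Hypothetical shared value; a real project might read it from config
    return "https://staging.example.com"

# tests/test_home.py — a separate file; pytest matches the parameter
# name "base_url" against conftest.py automatically:
#
# def test_home_loads(base_url):
#     assert base_url.startswith("https://")
```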
89
What is the key architectural advantage of pytest's declarative fixture system (declaring dependencies as function parameters) versus imperative setup (calling setup functions inside tests)?
Declarative dependencies allow pytest to build a complete dependency graph during collection, before any test code executes. This enables pytest to:
- Create scoped fixtures at the right moment
- Reorder tests to maximize fixture reuse
- Know when it's safe to tear down fixtures
- Make smart decisions about caching and lifecycle
With imperative calls, each test is a black box—pytest has no visibility into what's happening until runtime.
90
What does fixture "scope" control in pytest?
Scope controls how many times pytest creates the fixture and how long it stays alive:
- scope="function" (default): fresh fixture for every test
- scope="class": one fixture per test class
- scope="module": one fixture per test file
- scope="session": one fixture for the entire test run
Higher scopes mean fewer creations but more sharing between tests.
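In pytest syntax the two extremes look like this; the fixture bodies are placeholder strings standing in for real resources:

```python
import pytest

@pytest.fixture(scope="session")
def browser():
    # Created once for the whole run (stands in for an expensive launch)
    return "one-browser-per-run"

@pytest.fixture  # scope="function" is the default
def page(browser):
    # Recreated for every test, but reuses the session-scoped browser
    return f"fresh-page-using-{browser}"
```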
91
Using concrete numbers, explain the time savings of session-scoped vs function-scoped fixtures for authentication.
If login takes 3 seconds and you have 50 tests:
- Function scope: 50 logins × 3 seconds = 150 seconds (2.5 minutes) just logging in
- Session scope: 1 login × 3 seconds = 3 seconds total
The savings come from avoiding repeated expensive setup operations.
92
What pytest rule prevents a session-scoped fixture from depending on a function-scoped fixture?
pytest enforces that a fixture cannot depend on a fixture with a narrower scope. A session-scoped fixture expects to live for the entire test run, but a function-scoped dependency would be destroyed after the first test—breaking the session fixture. pytest raises an error to prevent this invalid dependency chain.
93
In Playwright's hierarchy, what do Browser, BrowserContext, and Page represent? Use the apartment building analogy.
1) Browser - apartment building - one Chrome/Firefox process running on your machine
2) BrowserContext - individual apartment - isolated session with separate cookies, storage, cache
3) Page - room in apartment - a single tab within a context
Different contexts (apartments) in the same browser (building) share nothing—complete isolation.
94
What is the fundamental difference between what pytest controls versus what Playwright controls in test automation?
- pytest controls orchestration: when fixtures are created/destroyed, dependency management, test lifecycle, scoping decisions
- Playwright controls execution: actual browser automation, isolation mechanisms (contexts), authentication persistence (storageState)
"pytest is the choreographer. Playwright is the dancer."
95
What problem does Playwright's storageState feature solve, and how does it work?
Problem: we want fresh contexts for isolation, but authentication lives in cookies inside the context—so a fresh context means logging in again.
Solution:
1) Log in once in a temporary context
2) Save the authentication state (cookies, localStorage) using context.storage_state()
3) For each test, create a fresh context but load the saved state: browser.new_context(storage_state=saved_state)
Result: fresh isolated context + pre-loaded authentication = both speed and isolation.
96
Why is creating a new browser process per test "overkill" for achieving test isolation?
Playwright's BrowserContext already provides complete isolation—separate cookies, cache, storage. You don't need a new browser process to get isolation; you only need a new context within the same browser. Browser creation is expensive (launching Chrome), while context creation is cheap (just allocating memory for isolated state).
97
Describe the optimized fixture architecture for authenticated tests using proper scoping.
browser (session scope)
└── auth_state (session scope) — logs in once, saves storageState
    └── logged_in_page (function scope) — creates fresh context with saved auth
- One browser launch for entire test run
- One actual login, authentication captured
- Each test gets isolated context that starts already authenticated
- Context closes after each test, browser stays alive
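That architecture might be sketched as follows with pytest and Playwright's sync API; the URL, selectors, and credentials are placeholders, not the real application:

```python
import pytest

@pytest.fixture(scope="session")
def browser():
    # One real browser launch for the whole run
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        b = p.chromium.launch()
        yield b
        b.close()

@pytest.fixture(scope="session")
def auth_state(browser, tmp_path_factory):
    # Log in once in a throwaway context, then persist cookies/localStorage
    state_path = str(tmp_path_factory.mktemp("auth") / "state.json")
    context = browser.new_context()
    page = context.new_page()
    page.goto("https://example.com/login")   # placeholder URL
    page.fill("#username", "qa-user")        # selectors/credentials assumed
    page.fill("#password", "secret")
    page.click("#submit")
    context.storage_state(path=state_path)   # capture the session
    context.close()
    return state_path

@pytest.fixture
def logged_in_page(browser, auth_state):
    # Fresh, isolated context per test, seeded with the saved auth state
    context = browser.new_context(storage_state=auth_state)
    yield context.new_page()
    context.close()                          # context dies, browser survives
```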
98
In a pytest fixture using yield, what happens before vs after the yield statement?
- Before yield: setup code runs, prepares the resource
- yield statement: pauses the fixture, hands the resource to the test
- Test executes: uses the yielded resource
- After yield: teardown code runs (cleanup), regardless of test pass/fail
This is pytest's mechanism for guaranteeing cleanup happens even if tests fail.
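Under the hood a yield fixture is an ordinary Python generator; this stdlib-only sketch mimics, in simplified form, how the runner drives it:

```python
events = []

def resource_fixture():
    events.append("setup")      # everything before yield is setup
    yield "the-resource"        # value handed to the test
    events.append("teardown")   # everything after yield is teardown

# Simplified stand-in for what the runner does with a yield fixture:
gen = resource_fixture()
resource = next(gen)            # runs setup, receives the yielded value
events.append(f"test uses {resource}")
try:
    next(gen)                   # resume after yield: teardown runs
except StopIteration:
    pass                        # generator is exhausted, as expected
```

pytest wraps the test call in the equivalent of try/finally, which is why the code after yield runs even when the test fails.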
99
How does pytest handle a fixture dependency chain like test → installations_page → logged_in_page → browser_context_and_page?
pytest resolves the chain during collection and executes in dependency order:
- browser_context_and_page runs up to its yield
- logged_in_page runs up to its yield (receives browser/context/page)
- installations_page runs up to its yield (receives logged-in pages)
- Test executes
- Teardown runs in reverse order: installations_page → logged_in_page → browser_context_and_page
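The same nesting and reverse unwinding can be demonstrated with contextlib.ExitStack; the fixture names are taken from the card and the behavior is simulated, not real pytest internals:

```python
from contextlib import ExitStack, contextmanager

log = []

@contextmanager
def fixture(name):
    log.append(f"setup {name}")     # runs up to its yield
    yield name
    log.append(f"teardown {name}")  # runs when the stack unwinds

with ExitStack() as stack:
    # Entered in dependency order, as pytest resolves the chain
    stack.enter_context(fixture("browser_context_and_page"))
    stack.enter_context(fixture("logged_in_page"))
    stack.enter_context(fixture("installations_page"))
    log.append("test runs")
# On exit the stack unwinds in reverse:
# installations_page -> logged_in_page -> browser_context_and_page
```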
100
If a fixture has function scope but depends on a session-scoped fixture, what happens?
The function-scoped fixture runs fresh for each test, but it receives the same session-scoped resource that was created once. Example: installations_page (function) depends on logged_in_page (session)—navigation happens every test, but login happened only once at session start.
101
Why might you intentionally use function scope for a page-specific fixture like installations_page even though it adds time?
Function scope for page navigation provides:
- Clean page state: each test starts with a freshly-loaded page, not whatever state the previous test left (filters applied, modals open, scrolled position)
- Isolation at page level: if one test corrupts the page, the next test gets a clean slate
- Data isolation: tests that manipulate data don't affect each other
This trades a few seconds of navigation for confidence that tests don't interfere.
102
What would happen if you tried to use Selenium instead of Playwright for the session-scoped auth pattern?
Selenium doesn't have a built-in storageState equivalent. Your options would be:
- Log in every test (slow)
- Share a browser session across tests (fast but no isolation—test pollution risk)
- Manually extract and restore cookies yourself (possible but messy)
Playwright's architecture was designed with this problem in mind—it's one of the "learned from Selenium's pain points" advantages.
103
Where does Playwright actually sit in relation to the browser—inside or outside?
Playwright sits outside the browser, just like Selenium. The difference is the communication protocol:
- Selenium: outside, communicating through HTTP requests (like a mailbox)
- Playwright: outside, but with a persistent WebSocket connection (like an always-on phone call)
- Cypress: actually inside the browser (runs in the same JavaScript context as the app)
Playwright can observe browser events in real time despite being external.
104
What is "dependency injection" in the context of pytest fixtures?
Dependency injection means tests declare what they need (via function parameters) rather than creating what they need (via internal setup code). pytest "injects" the required fixtures into the test function. This pattern:
- Makes dependencies explicit and visible
- Allows pytest to manage lifecycle and scope
- Enables fixture composition and reuse
- Decouples test logic from setup/teardown mechanics
105
Why does knowing the complete fixture dependency graph before execution enable test parallelization?
Tools like pytest-xdist can analyze the fixture graph to determine:
- Which tests share fixtures (must run sequentially or share safely)
- Which tests are independent (can run in parallel)
- When fixtures can be safely torn down (no remaining tests need them)
Without the declarative dependency graph, the runner couldn't make these decisions intelligently.
106
What's the difference between "isolation" and "speed" in test architecture, and how do scoped fixtures help balance them?
- Maximum isolation (function scope everything): every test is independent, but repeated setup is slow
- Maximum speed (session scope everything): setup happens once, but tests can pollute each other
Scoped fixtures let you choose per resource:
- Session scope for expensive, stateless resources (browser process)
- Function scope for cheap, stateful resources (contexts with cookies)
- Combined with storageState: get both speed (one login) and isolation (fresh contexts)
107
What are the three distinct phases of pytest's test execution workflow?
Discovery (finds test files and functions), Collection (builds dependency graph, resolves fixtures), and Execution (runs setup, test, teardown, captures results).
108
What is conftest.py's actual purpose in pytest?
It's pytest's mechanism for sharing fixtures across multiple test files without explicit imports. Any fixture defined in conftest.py is automatically available to tests in that directory and below.
109
What is the key advantage of pytest's declarative fixture system over writing setup code inside each test?
Declaring dependencies as parameters allows pytest to build the complete dependency graph before execution. This enables smart scoping decisions, fixture reuse, proper teardown ordering, and potential test reordering—none of which are possible when setup is hidden inside test functions.
110
What does fixture "scope" control in pytest?
Scope controls how many times pytest creates the fixture and how long it stays alive. Function scope creates fresh fixtures per test; session scope creates one fixture for the entire test run.
111
Explain the time savings of session-scoped versus function-scoped fixtures using authentication as an example.
If login takes 3 seconds and you have 50 tests, function scope means 50 logins (2.5 minutes). Session scope means 1 login (3 seconds). The savings come from avoiding repeated expensive setup.
112
What pytest rule prevents mixing fixture scopes incorrectly?
A fixture cannot depend on a fixture with a narrower scope. A session-scoped fixture cannot depend on a function-scoped fixture because the dependency would be destroyed after the first test.
113
In Playwright's hierarchy, what do Browser, BrowserContext, and Page represent?
Browser is the Chrome/Firefox process (the apartment building). BrowserContext is an isolated session with separate cookies and storage (an individual apartment). Page is a tab within that context (a room in the apartment). Different contexts share nothing.
114
What is the fundamental difference between what pytest controls versus what Playwright controls?
pytest controls orchestration—when fixtures are created and destroyed, dependency management, scoping. Playwright controls execution—actual browser automation, isolation mechanisms, authentication persistence. pytest is the choreographer; Playwright is the dancer.
115
What problem does Playwright's storageState feature solve?
We want fresh contexts for isolation, but authentication lives in cookies inside the context. storageState lets you log in once, save that state, then load it into fresh contexts—getting both isolation and speed without repeated logins.
116
Why is creating a new browser process per test "overkill" for achieving test isolation?
Playwright's BrowserContext already provides complete isolation. You only need a new context, not a new browser. Browser creation is expensive; context creation is cheap.
117
Describe the optimized fixture architecture for authenticated tests.
Session-scoped browser (one process for entire run), session-scoped auth_state (logs in once, saves storageState), function-scoped logged_in_page (creates fresh context with saved auth for each test). One browser, one login, many isolated tests.
118
In a pytest fixture using yield, what happens before versus after the yield statement?
Before yield runs setup code and prepares the resource. The yield pauses the fixture and hands the resource to the test. After the test completes, code after yield runs teardown, regardless of whether the test passed or failed.
119
If a fixture has function scope but depends on a session-scoped fixture, what happens?
The function-scoped fixture runs fresh for each test, but receives the same session-scoped resource that was created once. Example: page navigation happens every test, but login happened only once.
120
Why might you intentionally use function scope for page navigation even though it adds time?
Function scope provides clean page state for each test—no leftover filters, modals, or scroll positions from previous tests. You trade a few seconds of navigation for confidence that tests don't interfere with each other.
121
What is "dependency injection" in the context of pytest fixtures?
Tests declare what they need via parameters rather than creating it themselves. pytest injects the required fixtures. This makes dependencies explicit, enables lifecycle management, and decouples test logic from setup mechanics.
122
What is unittest and what is its main limitation compared to pytest?
unittest is Python's built-in class-based testing framework. Its main limitation is rigid fixture handling—only setUp/tearDown per test or setUpClass/tearDownClass per class. No flexible scoping, dependency injection, or fixture composition.
123
What is Robot Framework and when does it make sense to use it?
Robot Framework uses keyword-driven tabular syntax instead of pure Python. It makes sense when non-technical stakeholders need to read or write tests. The tradeoff is added complexity for developer-centric teams comfortable with Python.
124
What is BDD (Behavior Driven Development) and what is its main maintenance burden?
BDD separates tests into human-readable feature files (Given/When/Then scenarios) and Python step definitions. The burden is maintaining two synchronized codebases—changes to one layer often require changes to the other.
125
When is pytest the right choice over BDD frameworks?
When the team is developer-centric, stakeholders care about results rather than reviewing test scenarios, and you need flexible fixtures for complex setup. The key question is "who needs to read the tests?"
126
What factors led to choosing pytest for SERCA?
Technical QA writing tests (not non-programmers), stakeholders who don't review test code, complex fixture needs, scope beyond UI (API, backend, data verification), need for rich plugin ecosystem, and organizational consistency with WildXR.
127
How can BDD be layered over existing pytest code if stakeholder needs change?
Step definitions become thin wrappers that call existing page objects and fixtures. Your existing code stays intact; BDD adds a human-readable translation layer on top rather than requiring a rewrite.
128
What is the key question to ask when choosing between pytest and BDD frameworks?
"Who needs to read and understand the tests?" If developers and technical QA, pytest's expressiveness is an asset. If non-technical stakeholders need to review test logic, BDD's Gherkin syntax provides accessibility.
129
Why is a CI matrix strategy better than iterating over browsers within test code?
CI matrix spawns independent jobs per browser, providing parallel execution, clear failure isolation, separate reports, and simpler test code. In-test iteration creates complex loops, muddied results, and harder debugging.
130
How does the CI matrix approach change your fixture architecture?
Fixtures return single objects instead of lists. The CI system handles multi-browser by running the entire suite multiple times with different browser parameters, rather than each test handling multiple browsers internally.
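One common way to wire this up: each matrix job exports a browser name that a single fixture reads at startup. The BROWSER variable name here is an assumption, not a convention the cards define:

```python
import os

def browser_name() -> str:
    # Each CI matrix job exports BROWSER (e.g. chromium, firefox, webkit);
    # the variable name is an assumption: any per-job parameter works.
    # Locally, with no variable set, the suite runs one default browser.
    return os.environ.get("BROWSER", "chromium")

# A session fixture would then launch exactly one browser type, e.g.:
# @pytest.fixture(scope="session")
# def browser(playwright):
#     return getattr(playwright, browser_name()).launch()
```

Test code never loops over browsers; the matrix runs the whole suite once per value of the parameter.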