ISTQB - CTFL Flashcards

(114 cards)

1
Q

According to ISTQB Glossary, the word ‘bug’ is synonymous with which word?

A

b. Defect

Humans make mistakes/errors → these cause defects/bugs/faults → which in turn cause failures

2
Q

According to ISTQB Glossary, a risk relates to which of the following?

A

c. Negative consequences that COULD occur

Risk is potential, not certain

3
Q

What is an Error?

A

Human action that produces an incorrect result

4
Q

What is a Defect (Bug/Fault)?

A

Flaw in component or system that can cause failure

5
Q

What is a Failure?

A

Observable deviation from expected behavior when a defect is executed

6
Q

What is Root Cause?

A

The fundamental reason/underlying cause that led to the defect

7
Q

Testing vs Debugging - what is the key difference?

A

TESTING = triggers failures to find defects. DEBUGGING = finds, analyzes, and removes the CAUSE of failures

8
Q

According to ISTQB Glossary, what is an Incident?

A

ANY event during testing that requires investigation

9
Q

According to ISTQB Glossary, what is a Test Level?

A

A group of test activities that are organized together with specific objectives

10
Q

What are the 4 test levels?

A
  • Unit/Component
  • Integration
  • System
  • Acceptance
11
Q

According to ISTQB Glossary, what is a Test Manager?

A

The leader of a test team or teams

12
Q

What is a Product Risk?

A

Risk related to quality characteristics of the TEST OBJECT/software

13
Q

What is a Project Risk?

A

Risk related to PROJECT MANAGEMENT aspects

14
Q

Which is a PRODUCT risk?

a. The arrival of a more-reliable competing product
on the market.
b. Delivery of an incomplete test release to the first
cycle of system test.
c. An excessively high number of defect fixes fail
during re-testing.
d. Failure to accept allowed credit cards.

A

d. Failure to accept allowed credit cards

A quality shortcoming of the software itself, so a product risk

15
Q

Which is a PROJECT risk?

a. Unexpected illness of a key team member
b. Excessively slow transaction-processing time
c. Data corruption under network congestion
d. Failure to handle a key use case

A

a. Unexpected illness of a key team member

People/resource issues are project management concerns

16
Q

Is ‘Excessively slow transaction processing time’ a Product or Project risk?

A

PRODUCT risk

17
Q

Is ‘Data corruption under network congestion’ a Product or Project risk?

A

PRODUCT risk

18
Q

Is ‘Failure to handle a key use case’ a Product or Project risk?

A

PRODUCT risk

19
Q

Is ‘Delivery of incomplete test release’ a Product or Project risk?

A

PROJECT risk

20
Q

Is ‘High number of defect fixes failing during re-testing’ a Product or Project risk?

A

PROJECT risk

21
Q

What is the formula for Risk Level?

A

Risk Level = Likelihood × Impact

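The formula can be sketched in code. A minimal sketch, assuming 1-5 ordinal scales for both factors (the scales are illustrative, not ISTQB-mandated):

```python
# Risk level = likelihood of the risk occurring x impact if it does.
# The 1-5 scales here are an illustrative assumption.
def risk_level(likelihood: int, impact: int) -> int:
    return likelihood * impact

# An unlikely-but-severe risk can score the same as a likely-but-mild one:
assert risk_level(2, 5) == risk_level(5, 2) == 10
```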
22
Q

A product risk analysis meeting is held. Which determines the level of risk?

a. Difficulty of fixing related problems in code
b. The harm that might result to the user
c. The price for which the software is sold
d. The technical staff in the meeting

A

b. The harm that might result to the user

23
Q

According to ISTQB Glossary, a product risk is related to which of the following?

a. Control of the test project
b. The test object
c. A single test item
d. A potential negative outcome

A

b. The test object

24
Q

What should you do with Product Risks during test planning?

A

Determine the EXTENT/SCOPE of testing required

25
What should you do with **Project Risks** during test planning?
Define MITIGATION actions and CONTINGENCY plans
26
During test planning, what else should you do with risk lists?
a. Determine extent of testing for product risks and mitigation for project risks
27
Product risk report shows 40% of defects in 'OTHER' category. What does this indicate?
NEW/UNKNOWN risks to system quality that weren't identified during planning
28
Why is independent testing important?
b. More effective at finding defects
29
Company runs acceptance test before production. Most likely reason?
a. Build confidence in the application
30
Why start test design during requirements phase?
a. Preventing defects in the system
31
What is the purpose of **Regression Testing**?
To ensure defects have NOT been introduced by a modification
32
What is the purpose of **Confirmation Testing (Re-testing)**?
To verify the success of corrective actions
33
What does a **Regression Test** check?
c. Unchanged areas of software to see if they have been affected
34
List the **7 ISTQB Test Principles**
1. Testing shows the presence of defects, not their absence
2. Exhaustive testing is impossible
3. Early testing saves time and money (shift left)
4. Defects cluster together
5. Pesticide paradox (repeated tests lose effectiveness)
6. Testing is context dependent
7. Absence-of-errors fallacy
35
Test team finds 90-95% of defects but management says 'too many bugs missed.' Which principle explains why this is actually good?
a. Exhaustive testing is impossible
36
Explain: 'Exhaustive testing is impossible'
Testing everything is not feasible
37
Explain: 'Defect clustering'
A small number of modules typically contain most of the defects
38
Explain: 'Pesticide paradox'
If same tests are repeated, they become less effective at finding new defects
39
Explain: 'Absence-of-errors fallacy'
Finding and fixing defects doesn't help if system doesn't meet user needs
40
How do test principles apply across the test lifecycle?
d. Affect activities throughout the test lifecycle
41
What are **Entry Criteria**?
Preconditions that must be met BEFORE starting a test activity
42
What are **Exit Criteria**?
Conditions that must be achieved to COMPLETE a test activity
43
When are Exit Criteria DEFINED vs ASSESSED?
DEFINED during test planning, ASSESSED during test execution
44
Which test plan element is assessed during execution (not just planned)?
c. Exit criteria
45
What is 'Definition of Done' in Agile?
Exit criteria - the team's agreed, objective checklist that a work item must satisfy to be considered releasable
46
What is 'Definition of Ready' in Agile?
Entry criteria - conditions a user story must fulfill to start development/testing
47
Name 3 typical **Entry Criteria**
1. Availability of resources 2. Availability of testware 3. Initial quality
48
Name 3 typical **Exit Criteria**
1. Measures of thoroughness 2. Completion criteria 3. Can include time/budget limits
49
Consider these exit criteria: I) No critical defects, II) All interfaces tested, III) 100% code coverage, IV) All requirements satisfied, V) Matches legacy system. Which belong in **ACCEPTANCE test plan**?
d. Only I, IV, and V
50
'All interfaces between components tested' - which test level does this belong to?
INTEGRATION testing
51
'100% code coverage of all units' - which test level does this belong to?
UNIT testing
52
What does **Acceptance Testing** focus on?
Business requirements, user perspective
53
What does **Unit Testing** focus on?
Test individual components in isolation
54
What does **Integration Testing** focus on?
Test interfaces and interactions between components or systems
55
What does **System Testing** focus on?
Test complete system behavior and capabilities
56
Which is a **TEST TYPE** (not a test level)?
b. Functional testing
57
What is **Functional Testing**?
Testing WHAT the system does
58
What is **Non-functional Testing**?
Testing HOW WELL the system behaves
59
Which is a non-functional quality characteristic?
b. Usability
60
What are the **7 ISO 25010 non-functional quality characteristics**?
1. Performance efficiency 2. Compatibility 3. Usability 4. Reliability 5. Security 6. Maintainability 7. Portability
61
Which is a **FUNCTIONAL test**?
c. Checking screen info against database contents
62
Non-functional testing includes:
b. Testing quality attributes like reliability and usability
63
What are good practices for testing in the development lifecycle?
d. A and B
64
Which best describes objectives for test levels?
d. Each level has objectives specific to that level
65
For emergency changes, what is the proper testing approach?
d. Retest the changed area + risk analysis for reasonable regression subset
66
What is most important to maintain good **tester-developer relationships**?
b. Explaining test results in a neutral fashion ## Footnote Not blame-oriented, constructive communication.
67
Where is **Beta Testing** performed?
a. By customers at their own site ## Footnote Real user environment.
70
What is **Static Testing**?
Testing WITHOUT executing code (reviews, inspections, walkthroughs, static analysis) ## Footnote Focuses on reviewing documentation and code.
71
What is **Dynamic Testing**?
Testing BY executing code and observing behavior ## Footnote Involves running the software to find defects.
72
Which artifacts can be examined using **review techniques**?
d. All of the above (software code, requirements specification, test designs) ## Footnote Any work product can be reviewed.
73
What does a **Static Analysis tool** do?
a. Gives quality information about code WITHOUT executing it ## Footnote Finds coding standards violations, unreachable code, etc.
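A hypothetical illustration of the kind of defect static analysis reports without running the code (the function and its return values are invented for the example):

```python
def get_status(code: int) -> str:
    if code == 0:
        return "ok"
    else:
        return "error"
    return "unknown"  # unreachable: every path above already returned.
                      # A static analysis tool flags this dead code
                      # without ever executing the function.
```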
74
Which is **NOT** a review type?
d. Management approval ## Footnote Actual review types include Walkthrough, Inspection, Technical review, Informal review.
75
What are the **4 main review types** in ISTQB?
1. Informal review 2. Walkthrough 3. Technical review 4. Inspection ## Footnote Each type serves a different purpose in the review process.
76
Who leads a **Walkthrough** vs an **Inspection**?
WALKTHROUGH = led by author. INSPECTION = led by trained moderator ## Footnote Leadership roles differ in these review types.
77
Who leads a **Technical Review** vs an **Inspection**?
TECHNICAL REVIEW = led by trained leader (not moderator). INSPECTION = led by trained moderator ## Footnote Different leadership roles for effective reviews.
78
What statement about **reviews** is true?
a. Inspections led by moderator, technical reviews not necessarily ## Footnote Clarifies the leadership structure in different review types.
79
Match characteristics to review types: 1) Led by author, 2) Undocumented, 3) No mgmt participation, 4) Led by trained moderator/leader, 5) Uses entry/exit criteria. s=Inspection, t=Technical, u=Informal, v=Walkthrough
b. s=4&5, t=3, u=2, v=1 ## Footnote Correct matching of characteristics to review types.
80
What can **static analysis** find that dynamic testing struggles with?
a. Defects difficult to find with dynamic testing ## Footnote Includes unreachable code, standard violations, etc.
81
Static code analysis - what can it **NOT** find?
c. Faults in the requirements ## Footnote Analyzes CODE, not requirements.
82
Early test design - which statements are **TRUE**? 1) Defects found early are more expensive to fix, 2) Early test can find defects, 3) Early test can cause requirement changes, 4) Early test takes more effort
c. 2&3 are true, 1&4 are false ## Footnote Early testing DOES find defects and CAN cause requirement changes.
83
Which **IEEE 829 document** contains step-by-step instructions including setup, logging, environment, measurement?
d. Test procedure specification ## Footnote Detailed execution steps - HOW to execute.
84
What is a **Test Plan**?
Document describing test objectives, scope, approach, resources, schedule for ONE OR MORE test levels ## Footnote Strategy document for testing.
85
What is a **Test Design Specification**?
Document specifying test conditions and identifying high-level test cases ## Footnote Defines WHAT to test.
86
What is a **Test Procedure Specification**?
Document specifying detailed sequence of actions for executing tests ## Footnote Describes HOW to execute, step-by-step.
87
Test Plan vs Design Spec vs Procedure Spec - primary difference?
a. Plan=levels/strategy, Design=test cases (WHAT), Procedure=execution steps (HOW) ## Footnote Clarifies the purpose of each document.
88
For experienced tester under time pressure - best test procedure approach?
a. High-level outline of conditions and general steps ## Footnote Experienced tester can fill in details.
89
What is a **Test Summary Report**?
Document summarizing test activities and results for a completed stage of testing ## Footnote Covers test level, cycle, iteration.
90
What is a **Test Progress Report**?
Document providing ongoing status of test activities, generated regularly during testing ## Footnote Can be daily, weekly.
91
What is an **Incident Report**?
Document describing ANY event during testing that requires investigation ## Footnote Includes defects, failures, anomalies, questions.
92
Test summary report: 'Payment fails for AmEx - must-work feature.' Which section?
a. Evaluation ## Footnote Critical assessment of results in summary report.
93
Incident report: Detailed expected vs actual with error messages. Which section?
d. Incident description ## Footnote Detailed description of what happened.
94
Test execution order for database modifications: 1)Print record, 2)Change house#, 3)Error msg, 4)Change postal, 5)Open record, 6)Close, 7)Add empty customer. Best order?
c. 5,4,2,1,7,3,6 ## Footnote Setup → Positive tests → Negative tests → Cleanup.
95
Why use BOTH **specification-based** AND **structure-based testing**?
a. Find different types of defects ## Footnote Black-box finds functional issues, White-box finds code structure issues.
96
Key characteristic of **structure-based (white-box) testing**?
b. Measure coverage and design tests to increase coverage ## Footnote Dual purpose of measuring and designing tests.
97
What is **Statement Coverage**?
Percentage of executable statements that have been exercised by tests ## Footnote Measures how much of the code is tested.
98
What is **Branch Coverage**?
Percentage of decision outcomes (branches) that have been exercised by tests ## Footnote Ensures all paths are tested.
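The difference between the two measures can be sketched with a hypothetical discount function: a single test can reach 100% statement coverage while exercising only half the branches.

```python
# Hypothetical function under test (integer arithmetic keeps results exact).
def discount(total: int) -> int:
    if total > 100:            # a decision with two outcomes (branches)
        total -= total // 10   # this statement runs only on the True branch
    return total

# This one test executes every statement -> 100% statement coverage,
# but exercises only the True outcome -> 50% branch coverage.
assert discount(200) == 180

# A second test is needed to cover the False branch as well.
assert discount(50) == 50
```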
99
Decision table testing for financial app at system-test level - best example?
c. Rules for mortgage applications ## Footnote Business-level decisions at system test.
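A toy sketch of decision table testing with invented mortgage rules (conditions and outcomes are illustrative only, not real lending criteria): each rule, i.e. each column of the table, gets at least one test.

```python
# Invented rules for illustration only.
def mortgage_decision(good_credit: bool, income_ok: bool) -> str:
    if good_credit and income_ok:   # Rule 1: approve
        return "approve"
    if good_credit:                 # Rule 2: good credit, income too low
        return "refer"
    return "decline"                # Rules 3-4: poor credit, either income

# One test per decision-table rule:
assert mortgage_decision(True, True) == "approve"
assert mortgage_decision(True, False) == "refer"
assert mortgage_decision(False, True) == "decline"
assert mortgage_decision(False, False) == "decline"
```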
100
State transition testing - valid coverage measures? V)All states reached, W)Response time adequate, X)Every transition exercised, Y)All boundaries exercised, Z)Specific sequences exercised
d. V,X,Z ## Footnote State coverage, Transition coverage, Sequence coverage.
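A sketch of the coverage idea using a hypothetical door state machine: the single event sequence below achieves both all-states (V) and all-transitions (X) coverage.

```python
# (state, event) -> next state; missing pairs are invalid transitions.
TRANSITIONS = {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state: str, event: str) -> str:
    return TRANSITIONS[(state, event)]  # KeyError = invalid transition

# One sequence exercising every transition (and thereby every state):
state = "closed"
for event in ("open", "close", "lock", "unlock"):
    state = step(state, event)
assert state == "closed"
```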
101
What are the **4 Black-box techniques** in ISTQB?
1. Equivalence Partitioning 2. Boundary Value Analysis 3. Decision Table Testing 4. State Transition Testing ## Footnote Techniques for testing without knowledge of internal code.
102
What is **Equivalence Partitioning**?
Dividing inputs into partitions expected to exhibit similar behavior ## Footnote Selecting one representative from each.
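A sketch with a hypothetical age field accepting 18-65 (range and function invented for illustration): three partitions, one representative value tested per partition.

```python
def is_valid_age(age: int) -> bool:  # hypothetical system under test
    return 18 <= age <= 65

# One representative stands in for its whole partition:
assert not is_valid_age(10)   # partition: below the valid range
assert is_valid_age(40)       # partition: inside the valid range
assert not is_valid_age(80)   # partition: above the valid range
```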
103
What is **Boundary Value Analysis**?
Testing at boundaries between partitions ## Footnote Includes min, min+1, max-1, max values.
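A sketch using the same kind of hypothetical 18-65 age field: 3-value boundary value analysis tests each boundary plus both of its neighbors.

```python
def is_valid_age(age: int) -> bool:  # hypothetical system under test
    return 18 <= age <= 65

# min-1, min, min+1 and max-1, max, max+1:
expected = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for value, outcome in expected.items():
    assert is_valid_age(value) == outcome
```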
104
Typical task of a **test leader**?
d. Gather and report test progress metrics ## Footnote NOT develop requirements - that's BA/PO task.
105
What are **Test Leader/Manager key responsibilities**?
Gather/report test metrics, manage test team, coordinate test execution, manage test planning ## Footnote Essential roles for effective test management.
106
Most useful metric to monitor **DURING test execution**?
c. Number of defects found and fixed ## Footnote Execution metric; others are planning metrics.
107
What are **Planning Metrics** (3 examples)?
% test cases written, % requirements covered, test environment readiness ## Footnote Metrics to assess test preparation.
108
What are **Execution Metrics** (3 examples)?
Defects found/fixed, test cases run/passed/failed, test execution progress ## Footnote Metrics to assess test execution.
109
Test status: 90% tests run, 127 defects found, 112 fixed, 15 won't be fixed (mgmt decision). Most reasonable next step?
b. Run remaining 10% of test cases ## Footnote Management already decided not to fix those 15, but tests are incomplete.
110
Biggest influence on **test effort**?
c. Quality of information used to develop tests ## Footnote Test basis quality - requirements, specs.
111
What is **Configuration Management**?
Discipline of identifying configuration items, controlling changes, maintaining integrity and traceability throughout lifecycle ## Footnote Ensures consistency and control over software changes.
112
What is **Configuration Control**?
Element of CM ensuring changes to configuration items are tracked and only authorized versions are used ## Footnote Prevents unauthorized changes.
113
What is **Traceability**?
Ability to identify related items in documentation and software ## Footnote e.g., requirements ↔ test cases ↔ defects.
114
Defect was fixed, confirmed, but appears again later. Which CM aspect failed?
c. Configuration control ## Footnote Version control failed - old code was re-introduced.