Vulnerability Management Flashcards

This deck teaches vulnerability management as an operational system: how findings are discovered, triaged, fixed, and verified; how severity scoring maps to real risk via exploitability and impact; and how different scanners (SCA, SAST, DAST, image scanning) actually produce findings. It also covers SBOM usage, why false positives and false negatives happen, patching and version-pinning trade-offs, how to verify remediation, and how exceptions and compensating controls are handled without losing accountability. (28 cards)

1
Q

What is the vulnerability lifecycle (discover → triage → remediate → verify)?

A

The vulnerability lifecycle is an operational flow that turns findings into fixed and verified outcomes.
- Discover: tools or reports produce findings tied to an asset and evidence.
- Triage: decide validity, severity, ownership, and fix plan.
- Remediate: change code/config/dependency to remove the vulnerable condition.
- Verify: re-scan/re-test to confirm the vulnerable condition no longer exists.
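The four stages above can be sketched as a tiny state machine that refuses to skip steps (the stage names and function are illustrative, not from any specific tool):

```python
# Minimal sketch of the lifecycle as an ordered state machine.
LIFECYCLE = ["discovered", "triaged", "remediated", "verified"]

def advance(state: str) -> str:
    """Move a finding to the next lifecycle stage, refusing to skip steps."""
    i = LIFECYCLE.index(state)
    if i == len(LIFECYCLE) - 1:
        raise ValueError("already verified")
    return LIFECYCLE[i + 1]

state = "discovered"
for _ in range(3):
    state = advance(state)
print(state)  # verified
```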

2
Q

How does vulnerability discovery work mechanically?

A

Discovery happens when a system evaluates an asset against a rule and produces a match.
- Scanner ingests inputs (source code, dependency graph, running endpoints, images).
- Scanner applies detection logic (pattern match, version match, runtime probe).
- If rule matches, it outputs a finding with location, evidence, and metadata.
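The ingest → evaluate → emit flow can be sketched as follows; the asset and rule shapes here are invented for illustration:

```python
# Sketch of discovery: evaluate each asset against each rule, emit findings on match.
def scan(assets, rules):
    findings = []
    for asset in assets:
        for rule in rules:
            if rule["matches"](asset):       # detection logic: pattern/version/probe
                findings.append({
                    "asset": asset["name"],                 # location
                    "rule_id": rule["id"],                  # metadata
                    "evidence": asset.get("evidence", ""),  # evidence
                })
    return findings

assets = [{"name": "api", "version": "1.2.0", "evidence": "version 1.2.0 in lockfile"}]
rules = [{"id": "RULE-1", "matches": lambda a: a["version"] == "1.2.0"}]
print(scan(assets, rules))
```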

3
Q

What breaks vulnerability discovery in practice?

A

Cause → system behavior → security impact.
- Cause: incomplete asset inventory or scan coverage.
- Behavior: vulnerable assets are never evaluated, so no findings are produced.
- Impact: exposure persists because “no findings” is treated as “no risk.”

4
Q

What is triage in vulnerability management?

A

Triage is the decision process that turns a raw finding into an actionable item.
- Validate: confirm the finding maps to a real reachable condition in the asset.
- Classify: set severity based on exploitability and impact in your context.
- Assign: choose owner and fix path (upgrade, code change, config change, mitigation).
Triage is required because scanner output is not equal to real risk by default.

5
Q

How do you triage a vulnerability mechanically?

A

Step 1: Confirm asset identity and location (repo/path, image digest, endpoint).
Step 2: Confirm evidence (dependency version, code path, reachable endpoint, package present in image).
Step 3: Determine exploitability conditions (reachable from where, required privileges, required user interaction).
Step 4: Determine impact if exploited (data exposure, code execution, privilege gain, availability loss).
Step 5: Set severity and assign remediation owner with a specific verification method.
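A hypothetical triage record following these five steps; all field names and the severity rule are assumptions for illustration, not a standard:

```python
def triage(finding, context):
    """Turn a raw finding plus system context into an actionable triage record."""
    exploitable = context["reachable"] and not context["auth_required"]
    return {
        "asset": finding["asset"],        # step 1: identity/location
        "evidence": finding["evidence"],  # step 2: evidence
        "exploitable": exploitable,       # step 3: exploitability conditions
        "impact": context["impact"],      # step 4: impact if exploited
        # step 5: severity, owner, and a specific verification method
        "severity": "high" if exploitable and context["impact"] == "rce" else "medium",
        "owner": context["owner"],
        "verify_by": "re-run the originating scan after the fix",
    }

finding = {"asset": "payments-api", "evidence": "lodash 4.17.20 in lockfile"}
context = {"reachable": True, "auth_required": False, "impact": "rce", "owner": "payments-team"}
print(triage(finding, context)["severity"])  # high
```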

6
Q

What is a severity scoring mental model (what changes real risk)?

A

Severity changes when exploitability and impact change for your asset.
- Exploitability changes with reachability, required privileges, and presence of mitigations.
- Impact changes with data sensitivity, permissions of the affected component, and blast radius.
- A scanner score is a starting point; real risk is the score after applying system context.
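One way to make "score after applying system context" concrete is a simple adjustment function; the weights here are illustrative, not from CVSS or any real tool:

```python
# Sketch: start from the scanner's base score, then adjust for context.
def contextual_severity(base_score, reachable, privileged_component, mitigations):
    score = base_score
    if not reachable:
        score -= 3.0              # exploitability drops if nothing can reach the code
    if privileged_component:
        score += 1.0              # impact rises with the component's permissions
    score -= 0.5 * mitigations    # each enforced mitigation lowers effective risk
    return round(max(0.0, min(10.0, score)), 1)

# A "critical" finding in an unreachable component with two mitigations:
print(contextual_severity(9.8, reachable=False, privileged_component=False, mitigations=2))  # 5.8
```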

7
Q

What is exploitability vs impact reasoning?

A

Exploitability is how likely an attacker can trigger the vulnerability; impact is what happens if they do.
- Exploitability depends on reachable interfaces, prerequisites, and attacker capability needed.
- Impact depends on what the vulnerable component can access or change.
- High impact with low exploitability can still be urgent if prerequisites are easy to obtain in your environment.

8
Q

What breaks severity reasoning in practice?

A

Cause → system behavior → security impact.
- Cause: severity is set by default scanner score without checking reachability or permissions.
- Behavior: teams over-fix low-risk items or under-fix high-risk reachable items.
- Impact: effort is misallocated and real exposure remains.

9
Q

What is SCA and how does it work?

A

SCA (Software Composition Analysis) finds vulnerable third-party dependencies by building a dependency inventory and matching versions.
- Tool resolves dependency graph from manifests/lockfiles.
- Tool maps packages and versions to a vulnerability database.
- Tool outputs findings when a vulnerable version is present in the resolved graph.
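The version-match step can be sketched as a lookup against a vulnerability database; the database format here is invented, though CVE-2021-23337 really does affect lodash below 4.17.21:

```python
# Minimal SCA sketch: match resolved (package, version) pairs against a vuln DB.
VULN_DB = {
    ("lodash", "4.17.20"): ["CVE-2021-23337"],
}

def sca_findings(resolved_graph):
    findings = []
    for name, version in resolved_graph:
        for cve in VULN_DB.get((name, version), []):
            findings.append({"package": name, "version": version, "cve": cve})
    return findings

print(sca_findings([("lodash", "4.17.20"), ("react", "18.2.0")]))
```

Note the mechanism: if a vulnerable package never appears in the resolved graph (e.g. a missing lockfile), no finding is produced — which is exactly the failure mode the next card describes.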

10
Q

What breaks SCA in practice?

A

Cause → system behavior → security impact.
- Cause: missing lockfiles, private registries not indexed, or incomplete resolution of transitive deps.
- Behavior: dependency graph is wrong or incomplete, so vulnerable components are missed or misreported.
- Impact: teams believe dependencies are safe while vulnerable versions still ship.

11
Q

What is SAST and how does it work?

A

SAST (Static Application Security Testing) analyzes source code without running it to find risky patterns.
- Tool parses code into structures (syntax trees, data flows, call graphs).
- Tool applies rules for patterns (injection sinks, unsafe deserialization, hardcoded secrets patterns).
- Findings point to code locations and data-flow explanations when available.
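A toy SAST rule shows the shape of pattern matching; real tools work on syntax trees and data flows, so this regex sketch is a deliberate simplification:

```python
import re

# Toy rule: flag string-formatted SQL passed to execute() as a potential injection sink.
SQLI_PATTERN = re.compile(r'execute\(\s*["\'].*%s.*["\']\s*%')

def sast_scan(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if SQLI_PATTERN.search(line):
            findings.append({"line": lineno, "rule": "sql-injection-format-string"})
    return findings

code = 'cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)'
print(sast_scan(code))  # finding at line 1
```

This also illustrates why SAST is noisy: the regex cannot tell whether `user_id` is attacker-controlled, which is exactly the missing-context problem the next card describes.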

12
Q

What breaks SAST in practice?

A

Cause → system behavior → security impact.
- Cause: insufficient context (framework-specific flows not modeled) or noisy rules.
- Behavior: many false positives reduce trust, or real flows are missed due to weak modeling.
- Impact: teams ignore the tool or ship vulnerable code paths that were not detected.

13
Q

What is DAST and how does it work?

A

DAST (Dynamic Application Security Testing) tests a running application by sending requests and observing responses.
- Tool probes endpoints and injects payloads into inputs.
- Tool checks responses and side effects for vulnerability signals (unexpected data, error patterns, state changes).
- Findings are tied to specific endpoints and request/response evidence.
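The probe → observe → signal loop can be sketched for a reflected-XSS check; `send_request` is a stand-in parameter (a real tool would use an HTTP client), and the fake echoing server exists only to show the detection path:

```python
# Sketch of a DAST probe: inject a payload, then look for it reflected unescaped.
def probe_reflected_xss(send_request, endpoint, param):
    payload = "<script>alert(1)</script>"
    response_body = send_request(endpoint, {param: payload})
    if payload in response_body:
        return {"endpoint": endpoint, "param": param,
                "signal": "payload reflected unescaped"}
    return None

# Fake server that echoes input back, to demonstrate a positive signal.
fake = lambda url, params: "You searched for: " + params["q"]
print(probe_reflected_xss(fake, "/search", "q"))
```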

14
Q

What breaks DAST in practice?

A

Cause → system behavior → security impact.
- Cause: scan cannot reach authenticated paths, or environment differs from production.
- Behavior: only public unauth paths are tested, so many real attack surfaces are not evaluated.
- Impact: critical issues behind auth remain undetected despite “DAST ran.”

15
Q

What is container/image vulnerability scanning and how does it work?

A

Image scanning finds vulnerable packages inside container images by inspecting layers and package metadata.
- Scanner reads image layers and extracts installed packages and versions.
- It matches them to vulnerability databases.
- It outputs findings tied to image digest and package evidence, not to source code paths.
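The layer-walk and package-match steps can be sketched as follows; the image and layer structures are invented, though CVE-2021-3711 really does affect OpenSSL 1.1.1k:

```python
# Sketch: walk image layers, collect installed packages, match against a vuln DB.
VULN_DB = {("openssl", "1.1.1k"): ["CVE-2021-3711"]}

def scan_image(image):
    packages = {}
    for layer in image["layers"]:        # later layers override earlier ones
        packages.update(layer["packages"])
    findings = []
    for name, version in packages.items():
        for cve in VULN_DB.get((name, version), []):
            # finding is tied to the image digest, not a source code path
            findings.append({"digest": image["digest"], "package": name, "cve": cve})
    return findings

image = {"digest": "sha256:demo", "layers": [{"packages": {"openssl": "1.1.1k"}}]}
print(scan_image(image))
```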

16
Q

What breaks image scanning in practice?

A

Cause → system behavior → security impact.
- Cause: packages exist but are not used, or the scanner cannot map OS package variants correctly.
- Behavior: findings may not reflect runtime reachability, or vulnerabilities can be missed due to package identification gaps.
- Impact: teams either waste effort on non-reachable packages or miss reachable ones.

17
Q

What is an SBOM and what does it contain?

A

SBOM (Software Bill of Materials) is a structured inventory of components in a build artifact.
- It lists packages/components and versions, often with dependency relationships.
- It is tied to a specific artifact (build output or image) so it reflects what ships.
- It supports later vulnerability matching and incident scoping.

18
Q

How is an SBOM used in vulnerability management?

A

SBOM is used as an authoritative component list for matching and scoping.
- Systems match SBOM components to known vulnerabilities to produce findings.
- During incidents, SBOM helps identify which shipped artifacts contain affected components.
- SBOM usefulness depends on being complete and linked to the exact artifact digest/version.
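Incident scoping with SBOMs can be sketched as a search over artifact inventories; the SBOM structure here is simplified for illustration (real formats like CycloneDX or SPDX carry more fields):

```python
# Sketch: find which shipped artifacts contain an affected component version.
def affected_artifacts(sboms, package, bad_version):
    hits = []
    for sbom in sboms:
        for comp in sbom["components"]:
            if comp["name"] == package and comp["version"] == bad_version:
                hits.append(sbom["artifact_digest"])
    return hits

sboms = [
    {"artifact_digest": "sha256:aaa",
     "components": [{"name": "log4j-core", "version": "2.14.1"}]},
    {"artifact_digest": "sha256:bbb",
     "components": [{"name": "log4j-core", "version": "2.17.1"}]},
]
print(affected_artifacts(sboms, "log4j-core", "2.14.1"))  # ['sha256:aaa']
```

Note that the answer is only trustworthy if each SBOM really describes the artifact behind that digest — the exact dependency the next card's failure mode breaks.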

19
Q

What breaks SBOM usage in practice?

A

Cause → system behavior → security impact.
- Cause: SBOM not tied to the shipped artifact, or generated from source without reflecting build outputs.
- Behavior: component inventory does not match what is deployed.
- Impact: vulnerability scope and remediation decisions are based on incorrect component lists.

20
Q

What are false positives and why do they happen?

A

A false positive is a finding that reports a vulnerability that is not actually present or not actually exploitable as claimed.
- Causes include wrong dependency resolution, wrong package identification, or code patterns that are safe in context.
- Systems produce false positives when they match on incomplete context and cannot prove reachability.

21
Q

What are false negatives and why do they happen?

A

A false negative is a missed vulnerability that is present and exploitable.
- Causes include missing scan coverage, unsupported languages/frameworks, missing auth paths in DAST, or incomplete inventories.
- Systems produce false negatives when inputs are incomplete or detection logic cannot model the real execution path.

22
Q

What are patch management and version pinning trade-offs?

A

Patch management changes versions to remove vulnerable components; pinning controls which versions can be used.
- Pinning increases reproducibility because builds use known versions.
- Pinning can delay fixes if update flow is slow or approvals block changes.
- Unpinned dependencies can introduce unexpected changes, but can also pick up fixes automatically in some setups.

23
Q

What breaks patch management in practice?

A

Cause → system behavior → security impact.
- Cause: dependency updates are not deployable quickly (test failures, breaking changes, slow release cadence).
- Behavior: vulnerable versions remain in production despite known fixes.
- Impact: exposure window stays open even though a patch exists.

24
Q

How do you verify remediation (what to re-scan/re-test)?

A

Verification confirms the vulnerable condition is gone in the shipped artifact and/or running system.
- Re-run the same scan type that produced the finding (SCA/SAST/DAST/image scan) after the change.
- Confirm the vulnerable version/pattern is absent in the new artifact (lockfile, image digest, endpoint evidence).
- Confirm deployment updated to the fixed artifact; otherwise the fix exists only in source, not in production.
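These three checks can be sketched as one verification function; the finding/artifact structures are illustrative:

```python
# Sketch: verification must check the new artifact AND the deployed state,
# not just the repo.
def verify_remediation(finding, new_artifact, deployed_digest):
    still_present = (finding["package"], finding["bad_version"]) in new_artifact["packages"]
    deployed = deployed_digest == new_artifact["digest"]
    if still_present:
        return "not fixed: vulnerable version still in artifact"
    if not deployed:
        return "fixed in artifact, but production still runs the old digest"
    return "verified"

finding = {"package": "lodash", "bad_version": "4.17.20"}
artifact = {"digest": "sha256:new", "packages": {("lodash", "4.17.21")}}
print(verify_remediation(finding, artifact, deployed_digest="sha256:old"))
```

The second branch is the trap the next card describes: the source and artifact are fixed, but the runtime has not changed.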

25
Q

What breaks remediation verification in practice?

A

Cause → system behavior → security impact.
- Cause: verification checks source state but not deployed state.
- Behavior: scans show “fixed” for the repo, while production still runs the old artifact.
- Impact: risk remains because the vulnerable runtime has not changed.

26
Q

What is exception handling in vulnerability management?

A

Exception handling is formally allowing a known finding to remain for a defined scope and time.
- The system records the exception with asset, vulnerability, reason, and expiration.
- The finding is suppressed only under those constraints, not globally.
- Exceptions must preserve accountability by keeping the decision auditable.

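The scoped, expiring suppression check can be sketched as follows; all field names and the example data are illustrative:

```python
from datetime import date

# Sketch: an exception suppresses a finding only for a matching asset,
# matching vulnerability, and an unexpired window.
def is_suppressed(finding, exceptions, today=None):
    today = today or date.today()
    for exc in exceptions:
        if (exc["asset"] == finding["asset"]
                and exc["vuln_id"] == finding["vuln_id"]
                and today <= exc["expires"]):
            return True
    return False

exceptions = [{"asset": "billing-api", "vuln_id": "EXAMPLE-0001",
               "reason": "upgrade blocked by vendor dependency",
               "expires": date(2024, 6, 30)}]
finding = {"asset": "billing-api", "vuln_id": "EXAMPLE-0001"}
print(is_suppressed(finding, exceptions, today=date(2024, 7, 1)))  # False: expired
```

Because the expiration is checked on every evaluation, the finding automatically reappears when the window closes — the opposite of the permanent suppression the last card warns about.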
27
Q

What are compensating controls for a vulnerability exception?

A

Compensating controls are enforced behaviors that reduce exploitability or impact while the vulnerable condition remains.
- Examples include network restriction, stronger auth, reduced permissions, input validation, or WAF rules.
- Compensating controls must map to the abuse path and cause attacker steps to fail or become detectable.
A control is not compensating if it does not change the attacker’s success conditions.

28
Q

What breaks exception handling in practice?

A

Cause → system behavior → security impact.
- Cause: exceptions have no expiration or are applied too broadly.
- Behavior: vulnerabilities become permanent by policy suppression rather than by remediation.
- Impact: long-term exposure accumulates and true risk is hidden from reporting.