What is the vulnerability lifecycle (discover → triage → remediate → verify)?
The vulnerability lifecycle is an operational flow that turns raw findings into verified fixes.
- Discover: tools or reports produce findings tied to an asset and evidence.
- Triage: decide validity, severity, ownership, and fix plan.
- Remediate: change code/config/dependency to remove the vulnerable condition.
- Verify: re-scan/re-test to confirm the vulnerable condition no longer exists.
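The four stages above can be sketched as explicit state transitions, so a finding can only move forward through allowed stages. This is a hypothetical illustration; the stage names and the `advance` helper are not from any specific tool.

```python
# Allowed forward transitions in the lifecycle (illustrative names).
ALLOWED = {
    "discovered": {"triaged"},
    "triaged": {"remediated"},
    "remediated": {"verified"},
    "verified": set(),
}

def advance(state: str, next_state: str) -> str:
    """Move a finding to the next stage, rejecting skips
    (e.g. remediating without triage)."""
    if next_state not in ALLOWED[state]:
        raise ValueError(f"cannot move from {state} to {next_state}")
    return next_state

# A finding must pass through every stage in order.
state = "discovered"
for step in ("triaged", "remediated", "verified"):
    state = advance(state, step)
```

Modeling the lifecycle this way makes skipped stages (fixing without triage, closing without verification) a visible error instead of a silent gap.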
How does vulnerability discovery work mechanically?
Discovery happens when a system evaluates an asset against a rule and produces a match.
- Scanner ingests inputs (source code, dependency graph, running endpoints, images).
- Scanner applies detection logic (pattern match, version match, runtime probe).
- If rule matches, it outputs a finding with location, evidence, and metadata.
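The match loop above can be sketched minimally: a rule is a predicate over asset metadata, and a match produces a finding with location and evidence. All names (`VULN-1`, `libfoo`, the `detect` field) are illustrative assumptions, not a real scanner API.

```python
# Hypothetical sketch: evaluate every asset against every rule.
def scan(assets, rules):
    findings = []
    for asset in assets:
        for rule in rules:
            evidence = rule["detect"](asset)  # detection logic
            if evidence:
                findings.append({
                    "rule_id": rule["id"],
                    "asset": asset["name"],   # location
                    "evidence": evidence,     # why it matched
                })
    return findings

# One asset, one version-match rule (illustrative data).
assets = [{"name": "api", "deps": {"libfoo": "1.2.0"}}]
rules = [{"id": "VULN-1",
          "detect": lambda a: a["deps"].get("libfoo") == "1.2.0"
                              and "libfoo==1.2.0"}]
results = scan(assets, rules)
```

Note the failure mode from the next section falls out of this loop: an asset missing from `assets` is never evaluated, so it can never produce a finding.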
What breaks vulnerability discovery in practice?
Cause → system behavior → security impact.
- Cause: incomplete asset inventory or scan coverage.
- Behavior: vulnerable assets are never evaluated, so no findings are produced.
- Impact: exposure persists because “no findings” is treated as “no risk.”
What is triage in vulnerability management?
Triage is the decision process that turns a raw finding into an actionable item.
- Validate: confirm the finding maps to a real reachable condition in the asset.
- Classify: set severity based on exploitability and impact in your context.
- Assign: choose owner and fix path (upgrade, code change, config change, mitigation).
Triage is required because scanner output is not equal to real risk by default.
How do you triage a vulnerability mechanically?
Step 1: Confirm asset identity and location (repo/path, image digest, endpoint).
Step 2: Confirm evidence (dependency version, code path, reachable endpoint, package present in image).
Step 3: Determine exploitability conditions (reachable from where, required privileges, required user interaction).
Step 4: Determine impact if exploited (data exposure, code execution, privilege gain, availability loss).
Step 5: Set severity and assign remediation owner with a specific verification method.
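The five steps can be condensed into one hypothetical triage function; the field names and the two-level severity rule are illustrative assumptions, not a standard.

```python
# Sketch of triage: identity/evidence checks, then context-driven severity.
def triage(finding: dict, context: dict) -> dict:
    # Steps 1-2: confirm asset identity and evidence; otherwise close as invalid.
    if not finding.get("asset") or not finding.get("evidence"):
        return {"status": "invalid", "reason": "missing identity or evidence"}
    # Steps 3-4: exploitability and impact from system context.
    exploitable = context["reachable"] and not context["requires_admin"]
    impact = context["data_sensitivity"]  # e.g. "low" | "high"
    # Step 5: severity plus an owner and a concrete verification method.
    severity = "high" if exploitable and impact == "high" else "low"
    return {"status": "triaged", "severity": severity,
            "owner": context["owner"], "verify_by": "re-scan after fix"}
```

The point of the structure is that severity is only assigned after evidence and context checks, never straight from the raw finding.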
What is a severity scoring mental model (what changes real risk)?
Severity changes when exploitability and impact change for your asset.
- Exploitability changes with reachability, required privileges, and presence of mitigations.
- Impact changes with data sensitivity, permissions of the affected component, and blast radius.
- A scanner score is a starting point; real risk is the score after applying system context.
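That "score after applying system context" idea can be sketched as a simple adjustment function. The weights here are illustrative assumptions, not a published formula.

```python
# Hypothetical sketch: start from the scanner's base score, then adjust
# for reachability, mitigations, and data sensitivity in this environment.
def contextual_score(base: float, reachable: bool, mitigated: bool,
                     sensitive_data: bool) -> float:
    score = base
    if not reachable:
        score *= 0.3   # unreachable code sharply lowers exploitability
    if mitigated:
        score *= 0.5   # e.g. a compensating control reduces likelihood
    if sensitive_data:
        score = min(10.0, score * 1.3)  # higher impact raises real risk
    return round(score, 1)
```

The exact multipliers matter less than the shape: the same base score lands very differently once reachability and permissions are checked.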
What is exploitability vs impact reasoning?
Exploitability is how likely an attacker can trigger the vulnerability; impact is what happens if they do.
- Exploitability depends on reachable interfaces, prerequisites, and attacker capability needed.
- Impact depends on what the vulnerable component can access or change.
- High impact with low exploitability can still be urgent if prerequisites are easy to obtain in your environment.
What breaks severity reasoning in practice?
Cause → system behavior → security impact.
- Cause: severity is set by default scanner score without checking reachability or permissions.
- Behavior: teams over-fix low-risk items or under-fix high-risk reachable items.
- Impact: effort is misallocated and real exposure remains.
What is SCA and how does it work?
SCA (Software Composition Analysis) finds vulnerable third-party dependencies by building a dependency inventory and matching versions.
- Tool resolves dependency graph from manifests/lockfiles.
- Tool maps packages and versions to a vulnerability database.
- Tool outputs findings when a vulnerable version is present in the resolved graph.
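The version-matching step can be sketched as a lookup of resolved packages against a vulnerability database. This is a simplification: real SCA tools match version *ranges*, not exact versions, and the database entry shown is a placeholder.

```python
# Illustrative vulnerability database keyed by (package, exact version).
VULN_DB = {("libfoo", "1.2.0"): "CVE-XXXX-0001"}  # placeholder advisory ID

def sca_scan(resolved_deps: dict) -> list:
    """Match a resolved dependency graph (pkg -> version) against VULN_DB."""
    findings = []
    for pkg, version in resolved_deps.items():
        advisory = VULN_DB.get((pkg, version))
        if advisory:
            findings.append({"package": pkg, "version": version,
                             "advisory": advisory})
    return findings
```

Note the dependency on input quality: if the resolved graph is missing a transitive dependency, this loop simply never sees it, which is the failure mode described next.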
What breaks SCA in practice?
Cause → system behavior → security impact.
- Cause: missing lockfiles, private registries not indexed, or incomplete resolution of transitive deps.
- Behavior: dependency graph is wrong or incomplete, so vulnerable components are missed or misreported.
- Impact: teams believe dependencies are safe while vulnerable versions still ship.
What is SAST and how does it work?
SAST (Static Application Security Testing) analyzes source code without running it to find risky patterns.
- Tool parses code into structures (syntax trees, data flows, call graphs).
- Tool applies rules for patterns (injection sinks, unsafe deserialization, hardcoded secrets patterns).
- Findings point to code locations and data-flow explanations when available.
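A minimal version of the parse-and-match step can be shown with Python's standard `ast` module: parse source into a syntax tree, then flag calls to `eval()` as a classic risky pattern. Real SAST also models data flow; this sketch deliberately does not.

```python
import ast

def find_eval_calls(source: str) -> list:
    """Walk the syntax tree and report every direct call to eval()."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            # Finding points to a code location, as described above.
            findings.append({"line": node.lineno, "pattern": "eval-call"})
    return findings
```

Because this rule has no data-flow context, it flags `eval` even on safe constants, which is exactly the false-positive behavior discussed in the next section.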
What breaks SAST in practice?
Cause → system behavior → security impact.
- Cause: insufficient context (framework-specific flows not modeled) or noisy rules.
- Behavior: many false positives reduce trust, or real flows are missed due to weak modeling.
- Impact: teams ignore the tool or ship vulnerable code paths that were not detected.
What is DAST and how does it work?
DAST (Dynamic Application Security Testing) tests a running application by sending requests and observing responses.
- Tool probes endpoints and injects payloads into inputs.
- Tool checks responses and side effects for vulnerability signals (unexpected data, error patterns, state changes).
- Findings are tied to specific endpoints and request/response evidence.
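The "check responses for vulnerability signals" step can be sketched as a function over a single request/response pair. The signal strings are illustrative; a real DAST scanner uses much richer heuristics and tracks state changes.

```python
# Illustrative error signatures a scanner might look for in response bodies.
ERROR_SIGNALS = ("syntax error", "traceback", "sql")

def looks_vulnerable(payload: str, status: int, body: str) -> bool:
    """Flag a response if the payload is reflected unescaped or the
    server shows error signals suggesting the input was mishandled."""
    reflected = payload in body  # unescaped reflection signal
    errored = status >= 500 or any(s in body.lower() for s in ERROR_SIGNALS)
    return reflected or errored
```

The finding ties back to the exact payload and response, which is the request/response evidence mentioned above.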
What breaks DAST in practice?
Cause → system behavior → security impact.
- Cause: scan cannot reach authenticated paths, or environment differs from production.
- Behavior: only public unauth paths are tested, so many real attack surfaces are not evaluated.
- Impact: critical issues behind auth remain undetected despite “DAST ran.”
What is container/image vulnerability scanning and how does it work?
Image scanning finds vulnerable packages inside container images by inspecting layers and package metadata.
- Scanner reads image layers and extracts installed packages and versions.
- It matches them to vulnerability databases.
- It outputs findings tied to image digest and package evidence, not to source code paths.
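The layer-inspection step can be sketched as merging per-layer package maps (later layers override earlier ones) and matching the effective set against a database, with the image digest as the finding's anchor. Data and names are illustrative.

```python
def scan_image(digest: str, layers: list) -> list:
    """Merge layer package maps and match against a vuln DB; findings are
    tied to the image digest, not to source code paths."""
    packages = {}
    for layer in layers:        # each layer: {package: version}
        packages.update(layer)  # later layers override earlier ones
    vuln_db = {("openssl", "1.1.1"): "ADVISORY-1"}  # placeholder entry
    return [{"image": digest, "package": pkg, "version": ver,
             "advisory": vuln_db[(pkg, ver)]}
            for (pkg, ver) in packages.items() if (pkg, ver) in vuln_db]
```

Note what the sketch cannot see: whether the matched package is ever executed at runtime, which is the reachability gap described next.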
What breaks image scanning in practice?
Cause → system behavior → security impact.
- Cause: packages exist but are not used, or the scanner cannot map OS package variants correctly.
- Behavior: findings may not reflect runtime reachability, or vulnerabilities can be missed due to package identification gaps.
- Impact: teams either waste effort on non-reachable packages or miss reachable ones.
What is an SBOM and what does it contain?
SBOM (Software Bill of Materials) is a structured inventory of components in a build artifact.
- It lists packages/components and versions, often with dependency relationships.
- It is tied to a specific artifact (build output or image) so it reflects what ships.
- It supports later vulnerability matching and incident scoping.
How is an SBOM used in vulnerability management?
SBOM is used as an authoritative component list for matching and scoping.
- Systems match SBOM components to known vulnerabilities to produce findings.
- During incidents, SBOM helps identify which shipped artifacts contain affected components.
- SBOM usefulness depends on being complete and linked to the exact artifact digest/version.
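Both uses can be sketched over SBOMs keyed by artifact digest. The digests and component names are illustrative; real SBOMs use structured formats such as CycloneDX or SPDX rather than flat dicts.

```python
# Hypothetical SBOM store: artifact digest -> {component: version}.
sboms = {
    "sha256:aaa": {"libfoo": "1.2.0", "libbar": "2.0.1"},
    "sha256:bbb": {"libbar": "2.0.1"},
}

def affected_artifacts(package: str, version: str) -> list:
    """Incident scoping: which shipped artifacts contain the affected
    component at the affected version?"""
    return [digest for digest, components in sboms.items()
            if components.get(package) == version]
```

This is why the digest linkage matters: the answer to "which artifacts ship the vulnerable component?" is only trustworthy if each SBOM describes exactly the artifact its digest names.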
What breaks SBOM usage in practice?
Cause → system behavior → security impact.
- Cause: SBOM not tied to the shipped artifact, or generated from source without reflecting build outputs.
- Behavior: component inventory does not match what is deployed.
- Impact: vulnerability scope and remediation decisions are based on incorrect component lists.
What are false positives and why do they happen?
A false positive is a finding that reports a vulnerability that is not actually present, or that is not exploitable as claimed.
- Causes include wrong dependency resolution, wrong package identification, or code patterns that are safe in context.
- Systems produce false positives when they match on incomplete context and cannot prove reachability.
What are false negatives and why do they happen?
A false negative is a missed vulnerability that is present and exploitable.
- Causes include missing scan coverage, unsupported languages/frameworks, missing auth paths in DAST, or incomplete inventories.
- Systems produce false negatives when inputs are incomplete or detection logic cannot model the real execution path.
What are patch management and version pinning trade-offs?
Patch management changes versions to remove vulnerable components; pinning controls which versions can be used.
- Pinning increases reproducibility because builds use known versions.
- Pinning can delay fixes if update flow is slow or approvals block changes.
- Unpinned dependencies can introduce unexpected changes, but can also pick up fixes automatically in some setups.
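The trade-off is visible in a dependency manifest; the example below uses pip-style requirement syntax as an illustration, with placeholder package names.

```text
# Pinned: reproducible builds, but the fixed release must be bumped
# explicitly, which can delay a security patch.
libfoo==1.2.0

# Range: can pick up a patched 2.x automatically, but builds may change
# between runs without any code change on your side.
libbar>=2.0,<3.0
```

A common middle ground is pinning via a lockfile while running automated update proposals, so reproducibility and patch speed are both preserved.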
What breaks patch management in practice?
Cause → system behavior → security impact.
- Cause: dependency updates are not deployable quickly (test failures, breaking changes, slow release cadence).
- Behavior: vulnerable versions remain in production despite known fixes.
- Impact: exposure window stays open even though a patch exists.
How do you verify remediation (what to re-scan/re-test)?
Verification confirms the vulnerable condition is gone in the shipped artifact and/or running system.
- Re-run the same scan type that produced the finding (SCA/SAST/DAST/image scan) after the change.
- Confirm the vulnerable version/pattern is absent in the new artifact (lockfile, image digest, endpoint evidence).
- Confirm deployment updated to the fixed artifact; otherwise the fix exists only in source, not in production.
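The three checks above can be sketched as one verification function. Field names and the digest comparison are illustrative assumptions about how findings and deployments are tracked.

```python
def verify_remediation(finding: dict, new_scan_findings: list,
                       new_lockfile: dict, deployed_digest: str,
                       fixed_digest: str) -> dict:
    """Check all three verification conditions for one finding."""
    # 1. Re-run of the same scan no longer reports this rule.
    rescan_clean = all(f["rule_id"] != finding["rule_id"]
                       for f in new_scan_findings)
    # 2. The vulnerable version is absent from the new artifact's lockfile.
    version_absent = new_lockfile.get(finding["package"]) != finding["version"]
    # 3. The deployment actually runs the fixed artifact.
    deployed = deployed_digest == fixed_digest
    return {"rescan_clean": rescan_clean, "version_absent": version_absent,
            "deployed": deployed,
            "verified": rescan_clean and version_absent and deployed}
```

The third check is the one most often skipped: without it, the fix exists only in source while production still runs the old artifact.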