What are image build inputs and what trust boundaries do they cross?
Image build inputs are the materials the build system fetches and turns into an image.
- Base image references and layers pulled from a registry.
- OS packages and language dependencies fetched from package repos.
- Source code and build scripts from a repo.
- Build-time secrets or credentials used to fetch private inputs.
Each input crosses a trust boundary because compromise upstream changes what gets built.
How do systems enforce trust for image build inputs?
Step 1: Resolve base image and dependencies to specific versions or digests.
Step 2: Fetch inputs over authenticated channels with integrity checks when available.
Step 3: Record exactly what was used (digests, versions) as build metadata/provenance.
Step 4: Fail the build if required inputs cannot be verified to expected identities.
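The fail-closed check in Step 4 can be sketched as a small verifier. This is a minimal illustration, not a real build tool; the `PINNED` table and input names are hypothetical, and the pin here is computed inline purely so the example is self-contained.

```python
import hashlib

# Hypothetical pin file: digests recorded when inputs were first vetted.
PINNED = {
    "base-image.tar": "sha256:" + hashlib.sha256(b"base image bytes").hexdigest(),
}

def verify_input(name: str, data: bytes) -> bytes:
    """Fail closed: reject any input whose content hash does not match its pin."""
    expected = PINNED.get(name)
    if expected is None:
        raise ValueError(f"no pinned digest recorded for {name}")
    actual = "sha256:" + hashlib.sha256(data).hexdigest()
    if actual != expected:
        raise ValueError(f"digest mismatch for {name}: {actual} != {expected}")
    return data
```

The important property is the order of failure: an unknown input and a tampered input are both build errors, never silent successes.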
What breaks image build input trust in practice?
Cause → system behavior → security impact.
- Cause: floating tags, unpinned dependencies, or compromised package registries.
- Behavior: build pulls different content than expected while still “succeeding.”
- Impact: attacker-controlled code or binaries get embedded in images without obvious changes in source.
What is image immutability and how does digest pinning enforce it?
Image immutability means the deployed image content does not change without a new identifier.
- A digest identifies the exact image content (content-addressed).
- Pinning to a digest means the runtime pulls that exact content, not “whatever tag points to now.”
- Tags are mutable pointers; digests are stable identifiers for the content.
How is digest pinning used in practice?
Step 1: Build produces an image and calculates its digest.
Step 2: Deployment manifests reference the digest, not just a tag.
Step 3: Runtime pulls by digest; if the registry returns different content, the digest check fails and the pull is rejected.
Step 4: Only a new build produces a new digest, forcing explicit rollout for changes.
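A toy model makes the tag-vs-digest distinction concrete. The `registry` dict and image bytes below are illustrative; real registries store manifests and layers, but the verification logic is the same shape.

```python
import hashlib

def digest_of(content: bytes) -> str:
    # Content-addressed identity: the same bytes always yield the same digest.
    return "sha256:" + hashlib.sha256(content).hexdigest()

# Hypothetical registry: tags are mutable pointers, digests are not.
good = b"image v1 layers"
pinned = digest_of(good)
registry = {pinned: good, "myapp:latest": good}

def pull_by_digest(ref: str) -> bytes:
    # Verify on pull: content that does not hash to the requested
    # digest is rejected rather than run.
    content = registry[ref]
    if digest_of(content) != ref:
        raise ValueError("digest mismatch: registry returned different content")
    return content

# An attacker retags "latest", but the digest-pinned reference is unaffected.
registry["myapp:latest"] = b"malicious layers"
```

After the retag, `pull_by_digest(pinned)` still returns the original content, while any consumer of the bare tag silently gets the attacker's bytes.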
What breaks immutability in practice?
Cause → system behavior → security impact.
- Cause: deployments use mutable tags (latest, stable) without digest.
- Behavior: pulling the same tag at different times can yield different image content.
- Impact: attacker can retag malicious images and get them deployed without changing manifests.
What is image signing and what does a verification gate enforce?
Image signing attaches a cryptographic proof to an image identity; verification gates enforce that proof before run.
- Signing creates a signature over an image digest (and often metadata).
- Verification checks the signature using a trusted public key and policy.
- Gate denies deploy/pull if verification fails, blocking unknown or tampered images.
How do systems enforce image signing and verification gates?
Step 1: Build system signs the image digest with a private key.
Step 2: Signature is stored in an associated store/registry alongside the digest identity.
Step 3: At deploy/admission/pull time, verifier fetches signature and validates it against allowed keys/policies.
Step 4: If signature missing/invalid, the gate rejects the deployment of that image.
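The gate in Steps 3-4 can be sketched as follows. Real systems use asymmetric signatures so verifiers hold only a public key; HMAC stands in here solely to keep the sketch stdlib-only, and all key and digest values are illustrative.

```python
import hashlib
import hmac

# Stand-in for the build system's signing key (a real deployment would
# use an asymmetric key pair, with only the public half at the verifier).
SIGNING_KEY = b"build-system-secret"

def sign(digest: str, key: bytes = SIGNING_KEY) -> str:
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def admission_gate(digest: str, signature) -> bool:
    """Admit an image only if a valid signature over its digest is presented."""
    if signature is None:
        return False  # missing signature: fail closed
    expected = sign(digest)
    return hmac.compare_digest(expected, signature)

image_digest = "sha256:" + hashlib.sha256(b"image bytes").hexdigest()
sig = sign(image_digest)
```

Note that the gate rejects both a missing signature and one made with the wrong key; if either case were allowed through, the control would be cosmetic, as the breakage list below describes.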
What breaks signing and verification in practice?
Cause → system behavior → security impact.
- Cause: verification not enforced on the deploy path, or keys are not protected.
- Behavior: unsigned or attacker-signed images pass because no gate checks them or keys are compromised.
- Impact: artifact integrity controls become cosmetic and malicious images can run.
What are minimal images and how do they reduce attack surface mechanically?
Minimal images shrink the attack surface by limiting the code and tools installed, and therefore available to an attacker.
- Fewer packages means fewer known vulnerabilities to match and exploit.
- Fewer debugging tools means fewer built-in capabilities for discovery and pivoting after compromise.
- Smaller filesystem reduces exposed configuration files and credentials accidentally baked in.
What breaks minimal-image benefits in practice?
Cause → system behavior → security impact.
- Cause: images include compilers, shells, package managers, and extra utilities not needed at runtime.
- Behavior: attacker can download/install tools or use existing tools to explore and persist.
- Impact: post-compromise capability increases and patch surface grows.
What are runtime permissions in containers (user, capabilities)?
Runtime permissions are OS-enforced constraints on what a container process can do.
- User identity: determines file ownership and permission checks inside the container filesystem.
- Linux capabilities: granular privileges that allow specific privileged operations (not full root).
- Defaulting to least privilege reduces what attacker code can execute successfully.
How do systems enforce container user and capability settings?
Step 1: Container runtime starts the process with a configured UID/GID.
Step 2: Runtime applies capability set (drop/add) to the process.
Step 3: When the process tries a privileged operation, kernel checks capabilities and denies if missing.
Step 4: When the process accesses files, kernel checks UID/GID permissions and denies if not allowed.
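The two kernel checks in Steps 3-4 can be modeled as small functions. Capability names mirror real Linux capabilities, but the check functions themselves are illustrative models, not kernel APIs, and the file check is simplified to owner-only access.

```python
# Hypothetical mapping from privileged operation to required capability.
PRIVILEGED_OPS = {"mount": "CAP_SYS_ADMIN", "raw_socket": "CAP_NET_RAW"}

def try_privileged_op(op: str, caps: set) -> str:
    # Model of Step 3: the kernel denies the operation if the process's
    # capability set lacks the required capability.
    needed = PRIVILEGED_OPS[op]
    if needed not in caps:
        raise PermissionError(f"{op} denied: missing {needed}")
    return f"{op} permitted"

def try_file_access(proc_uid: int, file_owner_uid: int) -> str:
    # Model of Step 4, simplified: only the owning UID may access the file.
    if proc_uid != file_owner_uid:
        raise PermissionError(f"access denied for uid {proc_uid}")
    return "access permitted"
```

Running as a non-root UID with capabilities dropped means both checks fail by default, which is exactly the least-privilege posture the bullets above describe.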
What breaks runtime permission controls in practice?
Cause → system behavior → security impact.
- Cause: containers run as root with broad capabilities.
- Behavior: privileged operations succeed (mounts, raw sockets, kernel interfaces) that would otherwise fail.
- Impact: easier escalation and higher chance of container escape or host impact.
What is seccomp and how does it work (syscall allow/deny)?
Seccomp (secure computing mode) filters which Linux syscalls a process may call.
- A profile defines allowed/denied syscalls (and sometimes arguments).
- Kernel evaluates each syscall; if denied, the syscall fails or the process is killed depending on policy.
- This limits exploit techniques that require specific syscalls.
How do systems enforce seccomp for a container?
Step 1: Container is started with a seccomp profile.
Step 2: Profile is loaded into the kernel for that process.
Step 3: On each syscall attempt, kernel checks the profile.
Step 4: Disallowed syscalls are blocked, preventing those code paths from succeeding.
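The steps above can be sketched as an allowlist filter. Real seccomp profiles are loaded into the kernel (e.g. Docker ships a default JSON profile); this class only models the decision logic, and the syscall names in the example profile are illustrative.

```python
class SeccompProfile:
    """Toy model of a seccomp allowlist and its default action."""

    def __init__(self, allowed, default_action="errno"):
        self.allowed = set(allowed)
        self.default_action = default_action  # "errno" or "kill"

    def check(self, syscall: str) -> str:
        # Model of Steps 3-4: every syscall is checked against the profile.
        if syscall in self.allowed:
            return "allow"
        if self.default_action == "kill":
            raise SystemExit(f"process killed: {syscall} not permitted")
        raise PermissionError(f"{syscall} failed with EPERM")

profile = SeccompProfile(allowed={"read", "write", "exit", "futex"})
```

The breakage case below corresponds to `allowed` being so broad that calls like `ptrace` or `mount`-related syscalls still return "allow".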
What breaks seccomp in practice?
Cause → system behavior → security impact.
- Cause: unconfined profiles or overly broad allowlists.
- Behavior: dangerous syscalls remain available, so exploit chains can use them.
- Impact: runtime exploitability increases and containment relies only on other controls.
What is AppArmor/SELinux and how do they enforce policy?
AppArmor and SELinux are Mandatory Access Control (MAC) systems that enforce policy beyond basic Unix permissions.
- Policies define allowed file paths, capabilities, and interactions for labeled processes.
- Kernel checks policy on access attempts and denies operations that violate it.
- They reduce damage even when a process runs as root inside the container.
How do systems enforce AppArmor/SELinux for containers?
Step 1: Container is started with a specific AppArmor profile or SELinux label.
Step 2: Kernel associates the process with that policy context.
Step 3: On file/operation access, kernel checks MAC policy rules.
Step 4: Forbidden actions are denied regardless of container user privileges.
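A minimal model of the MAC check: policy is keyed by the process's label or profile, not its UID, which is why Step 4 holds even for root in the container. The label, paths, and policy table below are illustrative.

```python
# Hypothetical MAC policy: (action, path) pairs permitted per process label.
POLICY = {
    "container_t": {
        ("read", "/app/config"),
        ("write", "/app/data"),
    },
}

def mac_check(label: str, action: str, path: str) -> str:
    # The decision depends only on the label's policy, never on the UID,
    # so running as root inside the container grants nothing extra here.
    if (action, path) not in POLICY.get(label, set()):
        raise PermissionError(f"{label}: {action} {path} denied by policy")
    return "allowed"
```

A "permissive" mode, one of the failure causes below, corresponds to logging the denial but returning "allowed" anyway.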
What breaks AppArmor/SELinux in practice?
Cause → system behavior → security impact.
- Cause: profiles not applied, set to permissive, or policies are too broad.
- Behavior: processes can access sensitive host paths or perform risky operations that were never intended.
- Impact: attackers gain more options for persistence, data access, and escape attempts.
What are filesystem and host mount risks in containers?
Host mounts expose host resources to the container, weakening isolation boundaries.
- hostPath mounts can expose host filesystem paths to container processes.
- Docker socket or container runtime sockets allow controlling other containers and images.
- Mounting sensitive host directories can expose credentials and configuration.
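These risks are easy to lint for before deployment. The sketch below loosely follows the shape of a Kubernetes pod spec's `volumes` list; the field names and the exact-match risky-path set are simplifications, not a full schema or policy.

```python
# Hypothetical denylist of host paths that should never be mounted.
RISKY_PATHS = {
    "/",
    "/etc",
    "/proc",
    "/var/run/docker.sock",
    "/run/containerd/containerd.sock",
}

def risky_mounts(spec: dict) -> list:
    """Return hostPath mounts in the spec that match the denylist exactly."""
    findings = []
    for vol in spec.get("volumes", []):
        host = vol.get("hostPath", {}).get("path")
        if host in RISKY_PATHS:
            findings.append(host)
    return findings

bad_spec = {"volumes": [{"hostPath": {"path": "/var/run/docker.sock"}}]}
ok_spec = {"volumes": [{"hostPath": {"path": "/app/cache"}}]}
```

A real admission policy would also catch subpaths of sensitive directories, not just exact matches, but the principle is the same: mounting the runtime socket hands the container control of every other container on the node.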
What breaks isolation with host mounts in practice?
Cause → system behavior → security impact.
- Cause: container has access to hostPath directories or runtime sockets.
- Behavior: attacker reads host secrets/config or uses runtime API to create privileged containers.
- Impact: compromise jumps from one container to host or to other containers (lateral movement).
What are container escape concepts (boundary failures)?
Container escape is when code execution in a container reaches host-level execution or control.
- It happens when isolation boundaries fail (kernel exploit, misconfigurations, exposed host interfaces).
- Escape often uses available privileges/capabilities, dangerous mounts, or vulnerable kernel surfaces.
- The result is attacker actions outside the intended container boundary.
What breaks container boundaries in practice?
Cause → system behavior → security impact.
- Cause: privileged containers, kernel vulnerabilities, or exposed host interfaces (sockets, devices).
- Behavior: attacker executes host-level operations or controls other workloads.
- Impact: full node compromise and broader cluster/environment compromise.