Goal of Vulnerability Management Programs
Identify, prioritize, and remediate vulnerabilities before an attacker exploits them to undermine the CIA of information assets
Regulatory Schemes Mandating Vulnerability Management
These two mandate it:
* PCI DSS (Payment Card Industry Data Security Standard)
* FISMA (Federal Information Security Management Act)
These two don’t:
* HIPAA (Health Insurance Portability and Accountability Act)
* GLBA (Gramm-Leach-Bliley Act)
PCI DSS
Payment Card Industry Data Security Standard
This describes specific security controls for merchants who handle credit card transactions, and service providers who assist merchants with these transactions
This standard includes the most specific vulnerability scanning requirements of any major standard:
* Orgs must run both internal and external vulnerability scans
* Orgs must run scans at least once every quarter and after any significant changes
* Internal scans must be conducted by qualified personnel
* Orgs must remediate any high-risk vulnerabilities and repeat scans to confirm that they’re resolved, until they receive a “clean” scan report
* External scans must be conducted by an ASV (approved scanning vendor) authorized by the PCI SSC (payment card industry security standards council)
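The "at least once every quarter" requirement above can be sketched as a simple compliance check. The 90-day window and dates are illustrative assumptions, not PCI DSS text:

```python
from datetime import date, timedelta

# Illustrative approximation of "at least quarterly": flag assets whose
# last scan is more than ~90 days old.
QUARTER = timedelta(days=90)

def scan_overdue(last_scan: date, today: date) -> bool:
    """Return True if more than ~90 days have passed since the last scan."""
    return today - last_scan > QUARTER

print(scan_overdue(date(2024, 1, 1), date(2024, 5, 1)))   # True: overdue
print(scan_overdue(date(2024, 3, 15), date(2024, 5, 1)))  # False: within the quarter
```

Remember PCI DSS also requires rescanning after any significant change, which a pure calendar check like this won't catch.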
FISMA
Federal Information Security Management Act
Requires that government agencies, and other orgs operating on their behalf, comply with a series of security standards
The specific controls depend on whether the government designates the system as low, moderate, or high impact
Further guidance is found in FIPS (federal information processing standard) 199: Standards for Security Categorization of Federal Information and Information Systems
All federal information systems, regardless of their impact categorization, must meet the basic requirements for vulnerability scanning found in NIST Special Publication 800-53: Security and Privacy Controls for Federal Information Systems and Organizations
FIPS 199 includes a table mapping each security objective (confidentiality, integrity, availability) to a potential impact level: low (limited adverse effect), moderate (serious adverse effect), or high (severe or catastrophic adverse effect)
NIST Special Publication 800-53
Security and Privacy Controls for Federal Information Systems and Organizations
This requires that each org subject to FISMA do the following:
A) Monitor and scan for vulnerabilities in the system and hosted applications and, when new vulnerabilities potentially affecting the system are identified, report them
B) Employ vulnerability scanning tools and techniques that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for:
1. Enumerating platforms, software flaws, and improper configurations
2. Formatting checklists and test procedures
3. Measuring vulnerability impact
C) Analyze vulnerability scan reports and results from vulnerability monitoring
D) Remediate legitimate vulnerabilities in accordance with an organizational assessment of risk
E) Share information obtained from the vulnerability scanning process and security control assessments to help eliminate similar vulnerabilities in other information systems (systemic weaknesses or deficiencies)
F) Employ vulnerability monitoring tools that include the capability to readily update the vulnerabilities to be scanned
These requirements establish a baseline for all federal information systems
CIS Benchmarks
Center for Internet Security
They publish a series of security benchmarks that represent the consensus opinions of multiple subject matter experts
The benchmarks provide detailed configuration instructions for a variety of OS, apps, and devices
They’re a great starting point for organizations making their own system configuration efforts and baselining
ISO Standards
International Organization for Standardization
They publish a series of standards related to information security
ISO 27000 provides an overview of information security management and the vocabulary used throughout the series
ISO 27001 specifies the requirements for an information security management system
ISO 27002 provides a code of practice for information security management
Orgs can become officially certified as compliant with ISO 27001
OWASP Top Ten
Open Web Application Security Project
OWASP provides a regularly updated list of significant vulnerabilities and proactive controls, the Top Ten
Some examples on the Top Ten:
* Broken access control
* Cryptographic failures
* Injection
* Insecure design
* Security misconfiguration
* Vulnerable and outdated components
* Identification and authentication failures
* Software and data integrity failures
* Security logging and monitoring failures
* Server-side request forgery
Many vulnerability scanners use OWASP's list as a core reference when conducting web app scans
Identifying Scan Targets
Use these questions to help determine whether to scan all systems, a subset, or none at all:
* What is the data classification of the information stored, processed, or transmitted by the system?
* Is the system exposed to the internet or other public or semipublic networks?
* What services are offered by the system?
* Is the system a production, test, or development system?
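One way to act on these questions is to fold the answers into a rough scoring function so the highest-risk systems are scanned first. The weights below are purely illustrative:

```python
# Hypothetical scoring sketch: turn the four scoping questions into a
# numeric priority. Weights are illustrative, not from any standard.
def target_priority(classification: int, internet_facing: bool,
                    sensitive_services: int, is_production: bool) -> int:
    """classification: 0 (public) .. 3 (restricted); sensitive_services: count."""
    score = classification * 10
    score += 20 if internet_facing else 0
    score += sensitive_services * 5
    score += 10 if is_production else 0
    return score

# An internet-facing production database outranks an internal dev box.
print(target_priority(3, True, 2, True))    # 70
print(target_priority(1, False, 0, False))  # 10
```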
Asset Inventory and Asset Criticality
Build an asset inventory by searching the network for connected systems, whether they were previously known or unknown
Once you have the inventory, supplement it with additional information to determine which systems are critical or noncritical
Both of these help guide decisions about the types of scans that should be performed, the frequency of the scans, and the priority admins should place on remediating vulnerabilities detected by the scans
Frequency of Vulnerability Scans
Many factors influence how often an org should conduct vulnerability scans against its systems:
* Risk Appetite: The willingness to tolerate risk within the environment
* Regulatory Requirements: PCI DSS, FISMA, corporate policy, etc, may dictate a minimum frequency for scans
* Performance Constraints: The scanning system might only be capable of so many scans in a given day, and you may need to adjust frequency to accommodate
* Operations Constraints: You may not be able to conduct resource-intensive scans during periods of high business activity—avoid disruption of critical processes
* Licensing Limitations: Your software license might allow only a certain amount of bandwidth to be consumed, or a limited number of simultaneous scans
Balance each consideration when planning the vulnerability scanning program—start small and slowly expand the scope
Active Vulnerability Scanning
Most vulnerability scanning tools are active scanners: they interact directly with the scanned host to identify open services and check for possible vulnerabilities
Active scanning provides high quality results, but with some drawbacks:
* Attempts to connect to every device on a network looking for open ports and vulnerable apps
* Noisy and will be detected by admins of scanned systems, which is an issue if you want to keep scanning quiet
* Potential to accidentally exploit vulnerabilities and interfere with the functioning of production systems
* Can miss entire systems if they’re blocked by firewall, IPS, segmented network, or other security controls
* Can disrupt the network and interrupt normal business operations
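At its core, an active scanner's port discovery is just a series of connection attempts. A minimal sketch using Python's stdlib socket module:

```python
import socket

# Minimal sketch of active scanning's core behavior: attempt a TCP
# connection to each candidate port and record which ones accept.
def probe_ports(host: str, ports, timeout: float = 0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real scanners such as Nessus or OpenVAS layer service fingerprinting and vulnerability checks on top of this basic probing, which is exactly why they generate the noise and disruption risks listed above.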
Passive Vulnerability Scanning
Meant to supplement active scans
Instead of probing systems for vulnerabilities, passive scanners monitor the network the way an IDS does
But instead of watching for intrusion attempts, they look for the telltale signatures of outdated systems and apps, then report results to admins
They’re only capable of detecting vulnerabilities that are reflected in network traffic, and shouldn’t replace active scans—but the two work well together
Scoping Vulnerability Scans
The scope of a vulnerability scan describes the extent of the scan, including answers to the following questions:
* What systems and networks will be included in the vulnerability scan?
* What technical measures will be used to test whether systems are present on the network?
* What tests will be performed against systems discovered by a vulnerability scan?
Answer these questions in a general sense to ensure the scans are appropriate and unlikely to cause disruption to the business
Dion’s Notes
* Scope is the range of hosts or subnets included within a single scan job
* It’s important to adjust the scope to make scanning more efficient and:
1. Schedule scans on different portions of the scope for different times of the day
2. Configure the scope based on a particular compliance objective
3. Rescan scopes containing critical assets more often
4. Pay attention to internal vs external scan perspective
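Defining a scope as a set of subnets and testing host membership can be sketched with Python's stdlib ipaddress module (the ranges below are illustrative RFC 1918 examples):

```python
import ipaddress

# Sketch: a scan scope as a list of subnets, plus a membership test for
# deciding whether a discovered host belongs to the current scan job.
SCOPE = [ipaddress.ip_network(n) for n in ("10.0.1.0/24", "192.168.50.0/24")]

def in_scope(host: str) -> bool:
    addr = ipaddress.ip_address(host)
    return any(addr in net for net in SCOPE)

print(in_scope("10.0.1.42"))   # True: inside the first subnet
print(in_scope("172.16.0.5"))  # False: outside both subnets
```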
Scan Sensitivity Levels
Pay attention to the configuration settings related to the scan sensitivity level
These settings determine the types of checks that the scanner will perform and should be customized to ensure that the scan meets its objectives while minimizing the possibility of disrupting the target environment
Four Types of Scan with Different Sensitivities
Discovery Scan
* Used to create and update an inventory of assets by conducting enumeration of the network and its targets without scanning for vulnerabilities
* Mostly used for enumeration
Fast/Basic Assessment Scan
* A scan that contains options for analyzing hosts for unpatched software vulnerabilities and configuration issues
Full/Deep Assessment Scan
* Comprehensive scan that forces the use of more plugin types
* Risk of causing disruption
* Will ignore previous scan results and fully rescan every host
Compliance Scan
* Scan based on a compliance template or checklist to ensure the controls and configuration settings are properly applied to a given target or host
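The four scan types can be summarized as behavior profiles. The field names below are illustrative, not tied to any particular scanner product:

```python
# Hypothetical profile table for the four scan sensitivity levels.
# Fields are illustrative: real scanners expose far more options.
SCAN_PROFILES = {
    "discovery":  {"port_scan": True,  "vuln_checks": False, "full_rescan": False},
    "basic":      {"port_scan": True,  "vuln_checks": True,  "full_rescan": False},
    "deep":       {"port_scan": True,  "vuln_checks": True,  "full_rescan": True},
    "compliance": {"port_scan": False, "vuln_checks": True,  "full_rescan": False},
}

def profile_for(scan_type: str) -> dict:
    """Look up the behavior profile for a named scan type."""
    return SCAN_PROFILES[scan_type]
```

The "full_rescan" flag captures the deep scan's behavior of ignoring previous results and fully rescanning every host.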
Avoiding Disruption With Scans
Some plugins for vulnerability scanners can disrupt activity on a production system or, worse, destroy content on those systems
One way around this problem is to maintain a test environment containing copies of the same systems running on the production network, and running scans against the test systems first
If the scans detect problems in the test environment, you can correct the underlying causes on both networks
Supplementing Network Scans
Basic vulnerability scans run over a network and probe a system from a distance
This provides a realistic view of the system’s security by simulating what an attacker might see from another network vantage point
But firewalls, IPS, and other security controls can affect the scan and provide an inaccurate view of the server’s security
Modern vulnerability management solutions can supplement these remote scans with trusted information about server configurations in two ways:
1. Admins can provide the scanner with credentials that allow the scanner to connect to the target server and retrieve config information—aka, a credentialed scan
2. Agent-based scanning where admins install small software agents on each target server, which conduct scans of the server config and provide an inside-out vulnerability scan
NOTE: If you use agent-based scanning, it could still cause performance or stability issues on servers, so approach this concept conservatively
Scan Perspectives
Each scan perspective conducts the scan from a different location on the network, providing a different view into vulnerabilities
External scans run from the internet and give a view of what an outside attacker might see
Internal scans provide a view that a malicious insider might encounter
Scanners located in a datacenter and agents located on a server offer the most accurate view of the real state of the server by showing vulnerabilities that might be blocked by other security controls
SCAP
Security Content Automation Protocol
An effort led by NIST to create a standardized approach for communicating security-related information
This standardization is important to the automation of interactions between security components
Some of the SCAP standards include:
* Common Configuration Enumeration (CCE): Provides a standard nomenclature for discussing system configuration issues
* Common Platform Enumeration (CPE): Provides a standard nomenclature for describing product names and versions
* Common Vulnerabilities and Exposures (CVE): Provides a standard nomenclature for describing security-related software flaws
* National Vulnerability Database (NVD): A superset of the CVE database maintained by NIST; contains additional information such as analysis, CVSS severity scores, and fix information
* Common Attack Pattern Enumeration and Classification (CAPEC): A knowledge base maintained by MITRE that classifies specific attack patterns focused on application security and exploit techniques
* Common Vulnerability Scoring System (CVSS): Provides a standardized approach for measuring and describing the severity of security-related software flaws
* Extensible Configuration Checklist Description Format (XCCDF): An XML schema for developing and auditing best practice configuration checklists and rules
* Open Vulnerability and Assessment Language (OVAL): An XML schema for describing system security states and querying vulnerability reports and information
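As a concrete example of SCAP naming, a CPE 2.3 formatted string can be split into its named attributes. This is a simplified sketch that ignores CPE's escaping rules:

```python
# CPE 2.3 formatted strings have the shape:
#   cpe:2.3:part:vendor:product:version:update:edition:language:
#   sw_edition:target_sw:target_hw:other
# A real parser must also handle escaped colons; this sketch does not.
CPE_FIELDS = ("part", "vendor", "product", "version", "update", "edition",
              "language", "sw_edition", "target_sw", "target_hw", "other")

def parse_cpe(cpe: str) -> dict:
    prefix, ver, *attrs = cpe.split(":")
    assert prefix == "cpe" and ver == "2.3", "only CPE 2.3 strings supported"
    return dict(zip(CPE_FIELDS, attrs))

info = parse_cpe("cpe:2.3:a:openbsd:openssh:9.6:*:*:*:*:*:*:*")
print(info["vendor"], info["product"], info["version"])  # openbsd openssh 9.6
```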
Remediation Workflow
Since vulnerability scans produce a fairly steady stream of security issues that require attention, you need a workflow to manage it all
The typical vulnerability management lifecycle goes like this:
* Detection → Remediation → Testing → Detection…
It should be as automated as possible given the tools at your disposal, and many vulnerability management tools have built-in workflow mechanisms
That way you can track vulnerabilities through the remediation process and automatically close them out after testing confirms a successful remediation
Some orgs choose not to use them in favor of tracking vulnerabilities in the ITSM (IT service management) tool (like Jira)
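The detection → remediation → testing loop can be sketched as a tiny state machine. The state names and transitions here are illustrative, not taken from any specific tool:

```python
# Illustrative workflow states for a single vulnerability finding.
# A failed retest sends the finding back to "detected", closing the loop.
TRANSITIONS = {
    "detected":   {"remediated"},
    "remediated": {"testing"},
    "testing":    {"closed", "detected"},
}

def advance(state: str, new_state: str) -> str:
    """Move a finding to a new state, rejecting invalid transitions."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {new_state!r}")
    return new_state
```

Encoding the workflow this way is what lets a tool automatically close findings once testing confirms the fix, rather than relying on someone remembering to do it.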
Dion’s Notes
* Remediation is the overall process of reducing exposure to the effects of risk factors to minimize the leftover, residual risk
* It mitigates risk exposure down to an acceptable level based on organizational risk appetite
Ongoing Scanning
Moves away from the scheduled scanning approach that tested systems on a weekly or monthly basis
Instead configures scanners to simply scan systems on a rotating basis, checking for vulnerabilities as often as scanning resources permit
This can be bandwidth and resource-intensive, but it provides earlier detection of vulnerabilities
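The rotating-scan idea can be sketched with itertools.cycle: scan whichever asset is next whenever scanner capacity frees up (host names are hypothetical):

```python
import itertools

# Sketch of ongoing scanning: rotate through the asset inventory
# continuously instead of waiting for a fixed weekly/monthly schedule.
assets = ["web01", "db01", "app01"]
rotation = itertools.cycle(assets)

# Simulate five slots of freed-up scanner capacity.
next_to_scan = [next(rotation) for _ in range(5)]
print(next_to_scan)  # ['web01', 'db01', 'app01', 'web01', 'db01']
```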
Continuous Monitoring
The technique of constantly evaluating an environment for changes so that new risks may be quickly detected and business operations improved
This is an ongoing effort to obtain information vital in managing risk within the organization and provides benefits like:
* Situational awareness
* Routine audits
* Real time analysis
* Transform from reactive to proactive
Incorporates data from agent-based approaches to vulnerability detection and reports security-related configuration changes to the vulnerability management platform as soon as they occur
From there, you can analyze the changes for potential vulnerabilities
Prioritizing Remediation
Take into account the following factors when choosing where to turn your attention first:
Criticality of the Systems and Information Affected by the Vulnerability
* Criticality measures should take into account CIA requirements, depending on the nature of the vulnerability
* EX: If the vulnerability allows a DoS attack, consider the impact to the org if the system becomes unusable
* EX: If the vulnerability allows theft of stored information from a database, consider the impact on the org if it’s stolen
Difficulty of Remediating the Vulnerability
* If fixing a vulnerability will require an inordinate commitment of human or financial resources, that fact should be factored into the decision-making process
* You might find you can fix five issues ranked 2 through 6 in priority order for the same investment that would be required to address the top issue
* You shouldn’t only make a decision based on cost and difficulty, but it’s an important consideration
Severity of the Vulnerability
* The more severe an issue is, the more important it is to correct that issue
* You might use the CVSS to provide relative severity rankings for different vulnerabilities
Exposure of the Vulnerability
* Consider how exposed a vulnerability is to potential exploitation
* EX: If an internal server has a serious SQL injection vulnerability, but that server is accessible only from internal networks, remediation might take lower priority than a less severe issue exposed to the internet
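A hedged sketch of how these four factors might combine into a single priority score; the formula and weights are illustrative and should be tuned to your org's risk appetite:

```python
# Hypothetical priority formula: severity scaled by asset criticality and
# exposure, discounted by remediation difficulty. Weights are illustrative.
def remediation_priority(cvss: float, criticality: int,
                         internet_exposed: bool, fix_effort: int) -> float:
    """cvss: 0-10; criticality: 1-5; fix_effort: 1 (easy) to 5 (hard)."""
    exposure = 1.5 if internet_exposed else 1.0
    return (cvss * criticality * exposure) / fix_effort

# A moderate flaw on an exposed system can outrank a severe internal one.
print(remediation_priority(6.5, 4, True, 1))   # 39.0
print(remediation_priority(9.0, 4, False, 2))  # 18.0
```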
Testing and Implementing Fixes
Before deploying any remediation activity, always thoroughly test your planned fixes in a sandbox environment
This allows you to identify any unforeseen side effects of the fix and reduces the likelihood that remediation activities will disrupt business operations or cause damage to your org's information assets
After deploying a fix by patching or hardening the affected systems, verify the mitigation was effective—repeat the scan and confirm the issue is gone
NOTE: When you mitigate something, always remember to update your configuration baseline to ensure future systems are patched against that same vulnerability from the start
Dion’s Notes
Configuration Baseline
* Settings for services and policy configuration for a server operating in a particular application role
* You set up everything you want, and then you do a vulnerability scan
* You find out what vulnerabilities exist, remediate what you want, accept residual risk, and then you have a baseline
* Any deviation from that baseline must be remediated or it’s now accepted risk