Methodology

Several aspects must be considered when developing a proper methodology for conducting vulnerability assessments: developing standards that do not limit creativity, and a plan that follows those standards (mapping the theatre of war, qualifying targets, creating attack profiles, attacking, and finally defending). Each is essential to performing a successful vulnerability assessment and maintaining a top-notch security posture.

Methodology Standards

Several aspects are important when standardizing on a methodology for conducting vulnerability assessments. One is to recognize the value of the creativity of the staff who will conduct the assessments. Standards are important, but if creativity is limited by standards so formal that the quality of the assessment suffers, the organization ultimately suffers too. The human factor of the assessment provides the creativity necessary to find potential vulnerabilities that cannot be found by automated tools alone.

Another aspect to consider when standardizing the methodology is that no one solution or product can meet all security needs. An organization must be thorough in verifying security just as it (hopefully) followed a meticulous approach to developing its security posture to begin with. The use of multiple tools, products, and mechanisms will increase the chances of finding vulnerabilities.

Although each organization will differ in its security needs and must be flexible, it is important to mention that some standards must be implemented to maximize the capability of identifying potential security risks and threats. These standards should, at a minimum, outline all aspects of the organization's infrastructure that should be assessed.

A realistic "external" vulnerability assessment should analyze every pertinent aspect of the network perimeter from outside the organization's network, essentially performing the assessment from the same perspective an experienced attacker would have across the Internet. This provides a view of an organization's infrastructure from an "outside looking in" perspective (a phrase you will hear throughout the assessment methodology). The pertinent network perimeter elements that should be included are listed in Table 13-1 below.

Table 13-1: Vulnerability Assessment Methodology Standards

Each assessment standard below is described by what it includes and why it is important.

Information gathering/reconnaissance

What it includes: Information gathering to determine all public avenues available. These include:

  • Public routing prefix announcements
  • ISP route filter policy
  • Address registrar configuration
  • Exterior routing protocol configuration
  • Domain registrations
  • Name service (DNS), including SOA and other pertinent DNS records
  • Web page documents found via search engine results
  • Published directory information
  • Corporate records
  • SEC registrations
  • Press releases
  • Employment postings

Why it's important: You must determine all public exposure for your organization. This is extremely important (and difficult) if you do not handle all of your domain and network registrations yourself (or if your organization is global and uses multiple registrars).

Mapping out your theatre of war

What it includes: Use data found while gathering information to map the focus areas of the assessment. Create and validate initial topology maps for use while planning attacks. These maps should include packet filters, firewalls, load balancers, and other "interesting" devices that may alter results during attacks.

Why it's important: Knowing what you are up against will make your assessment that much more accurate. Remember, for "external" assessments, view your organization from "the outside looking in," just as an attacker would.

Target qualification

What it includes: Determine and prequalify potential targets. Use the information gathered as a baseline for initial targets and determine live hosts, services running, and potential attack vectors. Update topology maps as live hosts and services are found. Do this through:

  • Port scanning
  • Web searches

Why it's important: If you do not determine what hosts and services are live, you cannot accurately determine the best attack plan.

Attack profiling

What it includes: Plan and optimize attack vectors for each system. Attack vectors should be based on protocol, platform, and network variables determined in earlier stages. Tools should be used to plan attack vectors, but human interpretation is paramount for a successful audit.

Why it's important: An attacker will use very specific and strategic attacks. You must act like an attacker to conduct an accurate assessment. Do not use a "shotgun approach" when attacking.

Attacking

What it includes: Perform an in-depth examination of hosts, their operating systems, application software, and more, all based on a very directed and strategic attack plan:

  • Attack
  • Validate results
  • Prioritize vulnerabilities found

Why it's important: Once you have conducted attacks based on your attack profiles, you must validate results, account for false positives, and prioritize the vulnerabilities found to justify remediation and gauge possible impacts to the organization.

Defending

What it includes: Determine what remediation is necessary and create plans to implement the remediation steps. These plans should include remediation priorities, which vulnerabilities will not be remediated, and a plan to retest to ensure the threats were addressed.

Why it's important: A plan to defend your organization's infrastructure must include remediation of known problems as well as methods to retest to ensure you are not still vulnerable.

If an organization only checks its "known live hosts" for vulnerabilities, for example, there may be inadvertent and unidentified "holes" open in access control lists (ACLs) and firewall rules that allow access to systems meant for internal use only. Likewise, for an internal assessment, if you only test what's externally visible, you may miss out on vulnerabilities that are exploitable from inside your network (remember layered security). Performing vulnerability assessments that include a standards-based approach helps organizations ensure all public services are secure, and that no unintentional services/data are available without the organization's knowledge. Selecting the appropriate standards (as outlined earlier) helps ensure thorough assessments are conducted without limiting the creative flexibility of a first-rate security analyst.

All too often, inexperienced security analysts are quick to enter a list of IP addresses into a security auditing/scanning software package without considering many of the elements just listed. As of today, no security auditing software package correctly analyzes all of these elements. Multiple tools must be used, and human interpretation of those tools' results and feedback is paramount.

A human has the ability to visualize the target network as information is gathered and factors are validated. Software tools cannot do this in the same manner as the human mind. Human interaction allows "whiteboarding" and other activities that continuously piece together the network maps as more information and details are uncovered. For example, when a firewall or other packet filter is identified by certain tools, that device and its IP address are added to the map on the whiteboard (literally or figuratively) in order to track its importance within the overall topology and the assessment.

By having a standard methodology, the human factor can be achieved without sacrificing the comprehensiveness of the assessment.

Information Gathering

The process and high-level steps for conducting a vulnerability assessment are outlined as standards in the previous section. So how do you determine the logical path for getting started? Remember, conducting a vulnerability assessment is essentially simulating an attack on your own network. The first thing you must do is gather information on which to base your simulated attack.

For the purpose of developing your theatre of war and attack profiles for your vulnerability assessment as outlined in the standards previously mentioned, you must discover all of your publicly available systems and services. In order to find all publicly available systems, several steps should be taken to determine what networks and domains are under your organization's control. There are several tools publicly available to validate known networks, but more importantly they may be used to determine any unknown networks by searching public records. Whois databases, Autonomous System Number announcement tracking, and network and domain registrations are just a few of the public sources available to solicit information about your organization's networks and domains. Other services required to provide public information services (such as DNS) are also a great resource for gathering information. In Chapter 14, you will see how to maximize public service exposure findings, where to look for information, and some of the limitations experienced while gathering information. Some tools used during this portion of the assessment are also explained further in the following chapter.
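As a minimal sketch of this discovery step, the following Python fragment (standard library only) resolves a list of candidate hostnames and issues a raw whois query over TCP port 43, per RFC 3912. The hostnames and the whois server shown are placeholders chosen for illustration, not findings or recommendations for any particular registry.

```python
import socket

def resolve_names(names):
    """Resolve candidate hostnames; unresolvable names map to None."""
    results = {}
    for name in names:
        try:
            results[name] = socket.gethostbyname(name)
        except OSError:
            results[name] = None
    return results

def whois_query(query, server="whois.arin.net", port=43, timeout=10.0):
    """Send a raw whois query (RFC 3912) and return the text reply."""
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Hypothetical candidate names turned up by public-records research;
    # failures (None) are themselves useful data for the map
    print(resolve_names(["www.example.com", "mail.example.com"]))
```

In practice you would iterate over every name turned up by domain registration searches and feed unexpected answers back into your topology map.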

Mapping Out Your Theatre of War

At this point you have reviewed public records to determine how your organization appears publicly on the Internet. During your reconnaissance, you also searched for any additional networks and/or domains that may not have been made known to you prior to the assessment. Additionally, you reviewed name service infrastructure not directly controlled by your organization but still capable of causing outages if services are interrupted. You have successfully created your first phase of mapping the theatre of war by "finding the boundaries" in which you will perform the assessment.

Your investigations have allowed you to take the first step in completing your organization's network map as it is viewed by the outside world. You will be able to use this map as a baseline for your assessment plans. An important aspect to mention is that no matter how often you conduct vulnerability assessments of your organization's infrastructure, you should never assume things have not changed since the last assessment. One of the biggest mistakes made during vulnerability assessments (even by security firms) is that all the information provided to the assessor remains unquestioned and is often taken at face value.

As you will see in the example assessment in Chapters 14 and 15, even the most honest administrator may not provide you with all the information, and the information provided may not be entirely accurate. The human mind has an amazing way of justifying almost anything it wants. Just because an administrator doesn't think something is important and neglects to tell you about it doesn't mean you shouldn't do your research and find the truth yourself. Your initial mapping defines your area of focus for the entire assessment: your theatre of war. If it is inaccurate from the beginning, your entire vulnerability assessment may be skewed and ultimately inaccurate.

What do you do next? The answer is simple: attack! But not so fast. One of the most important aspects of a successful vulnerability assessment is knowing what to attack, when, and how. Creating the theatre of war is important in order to set boundaries for your attack. But to ensure accurate results and to maximize the effectiveness of the time spent during the assessment, you will need to take the next step and qualify your potential targets. A very strategic, directed attack is paramount to achieving accurate results in the least amount of time possible. Your time is valuable; plan to use it accordingly! Use it to continue tracking targets to determine which are eligible for further attack planning.

Target Qualification

Once you have created your initial theatre of war, you must continue to develop it by limiting the attack vectors possible. A list of networks publicly available and an initial mapping depicting where those networks are available is a good start; however, it is only a start. Now you must determine what IP addresses have live hosts and what services are running on those hosts, and gather as much information as possible about those services and applications to determine which vulnerabilities may be useful in your attack. These are all preliminary qualifications that help to build attack profiles for each target (which may be a host, application, or service).

The key to qualifying targets successfully is to be so thorough (and to do so from a public source) as to not miss any available services. There are some factors that make this difficult. The first is the perimeter security put in place by your organization. Applications and services are made available publicly by allowing network traffic requests to specific ports on various systems. These systems listen for requests on the appropriate ports and, when properly solicited, conduct a communication session with the requesting host. One of the most popular methods of qualifying targets for attack is port scanning. There are several techniques and methods that can be used when port scanning. Unfortunately, the perimeter security discussed above can cause different port scanning techniques to provide very different results. An experienced security analyst will identify various data returned during port scans and adjust the port scanner as necessary to get the most accurate scan possible (including scanning from inside the perimeter of security devices). The key to remember during port scanning is that it is as much an art as it is a science. There are some principles to follow (which we will outline in the next chapters), but you must experiment with your port scanning tools and determine what methods and techniques work best to get the most accurate results in your organization.
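To illustrate the principle (not to replace a dedicated scanner such as nmap), here is a bare-bones TCP connect() scan in Python. A real assessment would combine several techniques, since connect() scans behave differently through firewalls and packet filters than SYN or FIN scans do; the target and ports below are just examples.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """TCP connect() scan: return the subset of ports that accept a connection.

    A full connect() is the noisiest but most portable technique; dedicated
    scanners also offer SYN, FIN, and other probes that may yield different
    results through perimeter security devices."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Harmless demonstration against loopback; a real sweep would target
    # the qualified addresses from your theatre of war
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Comparing the results of several techniques, from inside and outside the perimeter, is what lets you adjust the scanner as the text above describes.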

One of your goals during port scanning should be to conduct reconnaissance on all addresses within your organization's net blocks. As discussed earlier, even if the department or staff conducting the assessment(s) are responsible for maintaining and allocating public address space, all routed addresses should be checked initially to ensure addresses are not in use by other departments within the organization or, worse, hijacked by an outside party.
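A quick way to make sure no routed address is skipped is to expand the announced prefixes programmatically with Python's ipaddress module; the prefixes below are documentation examples (RFC 5737), not real allocations.

```python
import ipaddress

def expand_netblocks(prefixes):
    """Expand CIDR prefixes into individual host addresses to sweep.

    Sweeping every routed address, not just "known live hosts," is what
    surfaces forgotten systems and hijacked address space."""
    targets = []
    for prefix in prefixes:
        network = ipaddress.ip_network(prefix, strict=False)
        targets.extend(str(host) for host in network.hosts())
    return targets

# Example: two hypothetical announced prefixes from the theatre of war
targets = expand_netblocks(["198.51.100.0/29", "203.0.113.8/30"])
```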

Conducting public source port scanning means you must have access to a resource outside your organization that has an Internet presence. Since many organizations have complex routing structures, results may differ when reconnoitering from an organization's internal network versus a public source. This is another factor that makes accurate vulnerability assessments difficult (but certainly one that can be overcome). If a vulnerability assessment is conducted from inside the network and routing to various hosts and services differs from the public routing, your assessment will not accurately depict the vulnerabilities seen by the public. To ensure you see exactly what an attacker will see (a true "outside looking in" perspective), you must conduct analysis from a public source.

Depending on configuration options such as which ports to scan per host, timing policies, and sheer number of addresses, the time taken for port scans for an organization can vary greatly. Generally speaking, port scanning takes a long time. While port scanners run, there is plenty of time to conduct other searches for information that can help qualify targets, provide additional information, and generally help create specific attack profiles. Web searches for sensitive information about your organization help reveal public server misconfigurations. Web, FTP, DNS, and other servers may be supplying information to the general public. Misconfigurations in these systems may be making more information available than necessary. Tools such as Google's Advanced Search and other search tools are a great resource for finding this information online.

Attack Profiling

Many times security analysts fail at vulnerability assessments simply because false assumptions are made or misinterpretations take place. As the analyst conducting the assessment, you must leave no stone unturned. Working together with other departments inside your organization (such as the security, network, and systems groups) as well as vendors and providers, you must coordinate the collection of routing, connectivity, topology, and other network heuristic data. When that data is gathered, you must determine which addresses host active services and spend time qualifying those targets. With this stage of the assessment complete, you must use its results and additional research tools to plan the optimal means of attack based on protocol, platform, and other network data already learned; hence, the attack profile.

The idea of an attack profile can be easily compared to military terms such as "fields of fire" or "avenues of approach." One must study and analyze targets thoroughly to ensure awareness of all attack vectors, defensive postures, and evasion techniques in use. In the information technology world, this may include tools for spoofing, man-in-the-middle, and distributed denial-of-service (DDoS) attacks. In short, an attack profile is the result of researching the target network, detecting its points of ingress and egress, analyzing its filtering strategies and the filtering strategies of its upstream network providers, identifying the targets and defenses in place, and formulating the optimal tools and sequence of tactics to accomplish the objective: infiltration or disruption of the target.
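One possible (illustrative, not canonical) way to capture an attack profile is as a structured per-target record that accumulates everything learned so far. The field names and priority scheme below are assumptions chosen for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class AttackProfile:
    """A per-target plan: what to hit, how, and what defenses to expect."""
    target: str                                     # IP address or FQDN
    open_ports: list = field(default_factory=list)  # from target qualification
    platform: str = "unknown"                       # OS/stack fingerprint, if known
    defenses: list = field(default_factory=list)    # firewalls, IDS, filters seen
    vectors: list = field(default_factory=list)     # planned (priority, description)

    def add_vector(self, description, priority):
        self.vectors.append((priority, description))
        self.vectors.sort()    # execute highest-priority (lowest number) first

# Hypothetical target assembled from earlier stages
profile = AttackProfile(target="198.51.100.4", open_ports=[80, 443],
                        platform="linux/apache")
profile.add_vector("probe virtual hosts for default pages", priority=2)
profile.add_vector("test TLS configuration", priority=1)
```

Keeping the profile as data rather than notes makes it easy to feed the same plan to several tools and to update it as attacks validate or refute assumptions.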

Detailed Plans, Assessment Successes

Creating a custom attack profile for each target leverages a good analyst's experience and increases the attack's effectiveness. Using tools discussed later in the next chapter gives you, like an experienced attacker, advantages when analyzing the target network. These advantages allow you to see deeper into the target network (from the outside) than traditional methodologies generally would allow. A better understanding of the routing and target networks (as well as the target network's upstream provider(s)) results in a better plan of attack, or initial attack profile.

Using tools such as Layer Four Traceroute, or LFT (covered in Chapter 14), and other tools mentioned later, you will be able to create a more sophisticated attack profile such as the one depicted here.

[Illustration: example of a sophisticated attack profile]

Using the proper tools and a little human intelligence, the true makeup of the target network can be perceived, even seeing beyond firewalls and their rule sets. Following the paths that protocols take and analyzing the replies received allows the security analyst to accurately determine paths through load balancers, intrusion detection solutions, packet filters, and a variety of other security technologies. Many of these systems, once identified, become targets themselves. The result is a higher quality and more accurate vulnerability assessment.

In addition to probing into the network to identify services and systems, developing the attack profile allows you to validate your information-gathering findings. While many tools exist to assist in discovery, automated tools should only be trusted as far as the analyst validates them. No tool is 100 percent foolproof; therefore, the human factor must come into play. You, as the analyst, must validate what your tools find. If you don't, the assessment is nothing more than a large stack of paper (or a large electronic file) filled with false positives and poor assumptions. Technology gets more sophisticated with every new version and every new breakthrough, but there simply is no substitute for human interpretation of findings. As mentioned in the "Methodology Standards" section earlier in this chapter, humans have the ability to map the environment, which helps in determining attack vectors. Remember this when planning your security budget; it is not just a budget for a single tool or suite of tools; it must also cover human time, whether that is you as the analyst or an outsourced security firm.

Be Wary of Online Vulnerability Scanners and Services

The human factor needed to provide accurate interpretations of findings is one reason online vulnerability assessments/scanners should be used with caution. Online vulnerability services are generally designed to require minimal human interaction. The idea of automation is appealing to the organizations that deliver such services because it generally means higher revenue with less staff, and therefore higher profit margins. Cost savings are also (often) passed through to the customer. The unfortunate consequence of this business model is significant degradation in overall quality. Vulnerability assessment tools are not sophisticated enough to run themselves. Whether you use an open source scanner such as Nessus or pay a premium commercial price for a tool like ISS Internet Scanner, the fact of the matter is vulnerability assessment (VA) tools do not use artificial intelligence (AI), nor can they replace human intelligence. That said, there are several interesting automated vulnerability management tools from Foundstone, nCircle, Tenable, and others that perform automated discovery and automated reporting of changes. While this doesn't necessarily make them sophisticated enough to think for you, they may reduce your workload significantly in a large network.

Most online scanners also cannot provide thorough assessments based on the information gathered during the scan preparation. As with everything in life, there is an exception to every rule. Take, for example, web servers. Online vulnerability assessment tools and services ask the customer what IP address ranges should be scanned. While this seems to make sense, consider this: a web server has the capability to use host headers and/or virtual servers to host many web sites on a single IP address instead of using one IP address per site.

Customarily, hosting providers use this tactic and map a "This page has not been built yet" web site as the primary site for the IP address. The server then differentiates web sites hosted by the name requested. Each of these names conceivably can resolve to that same IP address. If only an IP address is solicited in scan preparation with online scanners, when the IP address is scanned, the only web site analyzed during the assessment is the main web site or default site (the generic site described above as "This page has not been built yet"). Entire web sites (accessed by host headers or by name) can be missed in the assessment. Likewise, since these web sites may all have a unique set of features, web server configuration settings, and document roots, the assessment may actually provide completely inaccurate results.

This is just one example where human interaction by a good analyst would find additional web sites hosted by an organization through a variety of methods, including domain registration searches, DNS zone enumerations, reverse DNS sweeps, and even social engineering. An attacker would do this, so your vulnerability assessment provider (whether you or a third party) should do it too! Beware of automated vulnerability assessment providers that ask for a range of IP addresses to scan but don't ask for a list of web addresses (fully qualified domain names, or FQDNs) to use. If they're not asking, they're not looking, because these can't reliably be discovered autonomously. Again, this is just one of many reasons to weigh heavily the cost versus gain of using online automated vulnerability assessment services.
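To see why a list of FQDNs matters, consider this sketch: it requests the same IP address while presenting different Host headers, which is exactly how named virtual hosts are told apart. The hostnames you would feed it come from your domain and DNS research; everything here uses only Python's standard library.

```python
import http.client

def probe_vhost(ip, port, hostname, path="/", timeout=10.0):
    """Request `path` from a given IP while presenting `hostname` in the
    Host header. Different hostnames on one IP can yield entirely
    different sites (host headers / virtual servers), so scanning the IP
    alone only ever sees the default site."""
    conn = http.client.HTTPConnection(ip, port, timeout=timeout)
    try:
        conn.request("GET", path, headers={"Host": hostname})
        response = conn.getresponse()
        return response.status, response.read()
    finally:
        conn.close()
```

Comparing the responses for the default site and each discovered name quickly shows whether an IP-only scan would have missed entire web sites.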

Attack!

So, at this point you have reconnoitered your environment, gathered information through research and port scanning, determined your boundaries or theatre of war, and developed your plan in your attack profile. It's time to attack!

Your goal during the attack portion of the assessment should be to conduct a series of mock attacks against your environment based on your attack profiles. These attacks will surface a number of threats, including unsecured or default authentication credentials, Trojans, weak encryption and authentication methods, exploitable software components, backdoors, improperly configured firewalls and routers, unpatched or unmaintained system software, and a plethora of other potential problems that may have gone overlooked for some time. The attacks will also identify tripwires and counterstrike potential, measuring your network's readiness as embodied in its monitoring and response capabilities. All in all, through human and automated testing tools (some commercial and some open source, as discussed in the next chapters), you have the ability through unique multipoint analysis to test for literally thousands of potential vulnerabilities on systems you have already targeted as potential threats to your environment.

At this point, you should be using tools such as Nessus (an open source vulnerability scanner) and other VA tools to do much of your heavy lifting during the attacks. Nessus and other VA tools can be configured to analyze systems for virtually every known vulnerability. When a vulnerability is announced for a particular system, operating system, or application, several organizations go about creating testing criteria to determine whether systems are vulnerable. VA tools have these vulnerability checks continuously updated to ensure new vulnerabilities are included in assessments. In the case of Nessus, modules are created for each new vulnerability (for more information, see Chapter 15). If your vulnerability scanner falls short or appears to report false positives, use additional tools (some examples are found in the next chapter, or some of your own choosing) to validate your findings. Don't rely solely on results from your VA scanner just because it spits out a slew of vulnerabilities for your systems. Other tools, even ones as simple as netcat or telnet connecting to a specific port, or a web browser with specially crafted URLs, can help validate findings and/or eliminate false positives. For more information on running tools during the assessment attack, please refer to the "Vulnerability Assessment Tools" section in Chapter 15.
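As a sketch of the netcat/telnet style of validation, the following Python helper simply connects to a reported port and reads whatever banner the service volunteers; the host and port would be whatever your scanner flagged.

```python
import socket

def grab_banner(host, port, timeout=5.0, max_bytes=1024):
    """Connect to a service and read whatever it sends first (its banner).

    Useful for confirming by hand what a VA scanner claims is listening;
    services like SSH, SMTP, and many FTP daemons announce themselves
    immediately on connect."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        try:
            return sock.recv(max_bytes).decode("utf-8", errors="replace")
        except socket.timeout:
            return ""  # service waits for the client to speak first
```

For services that speak first, the banner alone often confirms or refutes a reported software version before you reach for heavier tools.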

Validate Results

Even the most advanced tools available today won't catch every vulnerability. And those same advanced tools will alert you about vulnerabilities that do not truly exist (false positives). Because of the way some of the tools are written and subsequently the way they check for vulnerabilities, simple configuration changes on a system can raise flags during a scan. It is up to you to validate the results discovered with your tools. You will see a recurring theme throughout this chapter that human intelligence is the best tool used during vulnerability assessments. This is because VA tools today simply cannot be written with enough intelligence to find every vulnerability with 100 percent accuracy when system configurations can contain an infinite number of nuances from one installation to the next.

In short, if your VA scanner claims it found sensitive documents publicly available on one of your web servers, open a web browser and test the vulnerability for yourself. Look for those documents at the URL provided by the scanner. While validating all the results of your VA tools seems daunting at first, the more you do it, the quicker it becomes. You will gain experience and, most importantly, an even better understanding of the strengths and weaknesses of your VA scanner. Additionally, you will come to know the other tools you use for validation, and your knowledge of your own network environment will increase. This will help you spot false positives more efficiently, so less time is spent validating something you have previously determined to be a false positive. In the famous words of Ronald Reagan, "Trust, but verify!"
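The browser check described above can also be scripted as a quick HEAD request using only Python's standard library. This is a sketch: a clean 404 is strong evidence of a false positive, a 2xx suggests the finding is real, and anything ambiguous still deserves a human look.

```python
import urllib.request
import urllib.error

def url_exists(url, timeout=10.0):
    """Check whether a scanner-reported URL actually serves a document.

    Returns True for a 2xx reply, False for a definitive 404, and None
    for anything ambiguous (other errors, unreachable host) that a
    human should examine."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 300
    except urllib.error.HTTPError as err:
        return False if err.code == 404 else None
    except urllib.error.URLError:
        return None
```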

It should be mentioned that although many analysts dismiss false positives and move on, they should still be documented and configuration changes made to avoid them if possible. Why? If your VA scanner sees the attack vector as a successful means of infiltration, so do some attackers. This will only encourage them; therefore, the elimination of false positives from the report should take a back seat to the elimination of the configuration issue that caused the false positive in the first place. Don't ignore your false positives; remedy the configuration problem causing them!

Documenting Results

Now that you have conducted mock attacks and validated the results from your tools and manual attacks, what should you do with all the data? One of the most important aspects of vulnerability assessments is the ability to report on findings and provide organized information that is not riddled with false positives. In order to do this, you must have some type of reporting scheme/format that fits the needs of your organization.

Documentation needs will vary from one organization to the next. One organization may not pass the reports on to anyone other than the analysts. Others may provide reports to executive management in an attempt to justify remediation resources and budget. Whatever your organization's needs, the key to documentation is to know your audience. A report for an analyst reads very differently from one written for a group of business VPs. An enhanced report with more explanation of how the assessment was conducted, as well as descriptions of any issues found, may be required if you will be presenting to management. This type of report should include the following items to help management understand the scope of the assessment, as well as how the organization fared overall.

  • Scope What was assessed

  • Methodology Tools and methods employed

  • Identified Vulnerabilities Including high-level descriptions

  • Exposure How vulnerable the organization is (and to what extent)

  • Remediation How vulnerabilities should be addressed

Having this information available provides the management staff with the tools to make educated decisions regarding information security.

Prioritizing Vulnerabilities Found

How tired are you of "high," "medium," and "low" when it comes to the seriousness of a vulnerability? Exactly how high do you have to go to find out how serious the vulnerability is? At what point is a ladder necessary for extremely serious vulnerabilities? And after all, if you are looking for the vulnerability down low, it may simply sneak right past you and escalate to a medium steak errr state.

All joking aside, the relevance of high, medium, and low is purely subjective and could mean just about anything (as portrayed by the authors' poor attempt at humor above). In practice, what is subjectively a high priority to one company could be a low priority to another, and most security vendors never bother to explain why a particular vulnerability is marked "high" or "low" anyway. Oftentimes, the VA software package in use simply makes the decision on its own without any knowledge of your business. In Nessus, for example, the analyst who wrote the test plug-in for a specific vulnerability may classify the vulnerability however he or she chooses. In nearly every incarnation of VA scanner, a high/medium/low classification is assigned to each vulnerability, and its basis may be completely irrelevant to your organization. To provide any real value, a prioritization scheme should use very specific, measurable concepts directly related to your business.

That said, some type of standardization or predefined methodology for assigning priorities must be used to ensure you speak "apples to apples" when comparing vulnerabilities with others. What's in use by the IT staff must also be consistent with a management priority and communicable to outside parties. Some key aspects that should be considered when developing your organization's per-vulnerability prioritization scheme are included in Chapter 12 and are outlined here for convenience:

  • A scheme based on a security engineering standard from a body such as the National Infrastructure Advisory Council (NIAC) or the Software Engineering Institute's Capability Maturity Model

  • The base metric group

  • The temporal metric group

  • The environmental metric group

The illustration below provides prioritization example criteria previously outlined in Chapter 12, displaying the submetrics for each group listed above.

[Illustration: example prioritization criteria from Chapter 12, showing the submetrics for the base, temporal, and environmental metric groups]

By standardizing vulnerability priorities, your organization can make certain any particular vulnerability ranking is based on the risk it poses to your organization and not a subjective and vague high/medium/low classification. Instead of creating arbitrary levels of risk, you can define exactly what the risks are so you can focus on getting them fixed in the order that is most important to your organization. All vulnerabilities surfaced during your vulnerability assessment should be associated with a priority factor and that factor should consider information about the vulnerability and its availability, the complexity of the tactics required to exploit the vulnerability, and the threat to the business (or scope) such exploitation represents. If you can make this system a documented standard used within your organization, you will not spend all of your time searching "high and low" for your vulnerabilities' impacts to your organization (here ends the authors' second attempt at humor in this chapter). When a manager asks "How severe is this new vulnerability?" it will feel much better saying "It's a Level 4 threat based on our prioritization and threat criticality model" as opposed to responding "low." Sound bites aside, everyone is then on the same page and using terminology in keeping with your organization's business.
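To make the idea concrete, the following is a minimal sketch of combining submetric scores from the three metric groups into a single discrete priority level. The weights, submetric names, and level mapping are purely illustrative assumptions, not the book's (or any official) scoring formula; the point is that the scheme is explicit and measurable rather than a vendor's opaque high/medium/low.

```python
# Hypothetical sketch: combine base, temporal, and environmental
# submetric scores (each normalized to 0.0-1.0) into a single
# organization-specific priority level. Weights are illustrative.

def priority_level(base, temporal, environmental,
                   weights=(0.5, 0.2, 0.3)):
    """Return an integer priority level from 1 (lowest) to 5 (highest)."""
    groups = (base, temporal, environmental)
    # Average the submetrics within each group...
    group_scores = [sum(g) / len(g) for g in groups]
    # ...then take a weighted sum across the three groups.
    score = sum(w * s for w, s in zip(weights, group_scores))
    # Map the 0.0-1.0 composite score onto discrete levels 1-5.
    return min(5, int(score * 5) + 1)

# Example: highly exploitable, patch available, critical asset.
level = priority_level(
    base=[0.9, 0.8],          # e.g. access vector, attack complexity
    temporal=[0.4],           # e.g. remediation level (patch exists)
    environmental=[1.0, 0.9]) # e.g. collateral damage, target distribution
# level == 4: "a Level 4 threat based on our prioritization model"
```

Because every term is documented and repeatable, two analysts scoring the same vulnerability arrive at the same level, which is exactly what a high/medium/low label cannot guarantee.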

Vulnerability Assessment vs. Penetration Testing

Now that you have spent time preparing for your vulnerability assessment and have carefully laid out plans and executed your attacks with expert precision, you will be frustrated to hear false positives are still possible. A vulnerability assessment tool is generally only as good as its developer(s), and in some cases, developers must get "creative" to find ways to test for vulnerabilities. Some of these creative methods simply do not provide accurate checks in every environment. As an analyst conducting an assessment, you may have the luxury of checking a system manually for a vulnerability that shows up continuously in your assessments. This insider information is nice if available, but if it is not, where do you draw the line between assessing and actually verifying (penetrating)?

A generally accepted rule of thumb is that if you are "testing" a system (whether automated or manually), you are conducting vulnerability assessments. The moment you begin exploiting potential vulnerabilities found on systems in an attempt to escalate privileges, upload a threat payload, or cause undesired/alternative operation of the (presumably) vulnerable software, you have started a penetration test (pen test). During vulnerability assessments, you will find it necessary to validate findings as discussed earlier in the chapter; however, if you go beyond simply checking if a vulnerability is likely to exist and you actually attempt to exploit the vulnerability (attempt to gain a root or command shell, gain administrative access through exploits, and so on), you are moving beyond the vulnerability assessment realm.
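The assessment side of that line can be illustrated with a simple, non-intrusive check: inferring a likely vulnerability from a service banner without sending any exploit traffic. The product name, banner format, and fixed-version cutoff below are invented for the example; a real check would use the affected versions published in an advisory.

```python
# Illustrative "assessment" check: compare an advertised version
# against a known-fixed version, without exploiting anything.
# "ExampleFTPd" and the 2.4.0 cutoff are hypothetical.

def banner_suggests_vulnerable(banner, product="ExampleFTPd",
                               fixed_version=(2, 4, 0)):
    """Return True if the banner advertises a version below the fix."""
    if not banner.startswith(product):
        return False
    version = tuple(int(p) for p in banner.split()[1].split("."))
    # A version comparison only *suggests* a vulnerability; it can be
    # a false positive (e.g., the vendor backported the fix without
    # bumping the advertised version).
    return version < fixed_version
```

The moment a tool goes further and actually fires exploit code at that service to pop a shell, it has crossed into pen testing as defined above.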

In order to conduct pen testing, vulnerable targets must be selected, and suspected vulnerabilities must then be selected for attempted penetration. Research is performed to find exploit code that may exist for these vulnerabilities in commercial penetration testing software, as well as exploit code "in the wild" (code that may be developed by security professionals or other security companies). Additionally, some customization is necessary from time to time to ensure the exploit code fits the vulnerability in question, thereby maximizing the potential for penetration.

Once targets and exploit code have been selected, hosts are "attacked" to attempt penetration of the system. If a system is penetrated, privilege escalation is attempted. If the vulnerability is exploited (the host penetrated) and privileges are escalated, further scanning of the local network from the compromised host is attempted to exploit any trust relationships that may exist between adjacent local systems. Pen testing techniques generally go well beyond basic vulnerability validation and require an experienced security analyst to successfully attempt penetrations. Additionally, several tools are used during a penetration test beyond those used during the vulnerability assessment. Two of the more popular tools are Core Impact from Core Security Technologies and Metasploit, which is maintained as an open source project.

Core Impact provides a framework for conducting information gathering and host analysis if you do not have other tools to complete those portions of the testing. It can also take input from Nessus results derived from a previous round of information gathering or assessment. Once the vulnerability assessment portion is completed, Core Impact provides tracking mechanisms for auditing your penetration work as well as exploit code for dozens of potential vulnerabilities. Attempts are made to upload agents (often referred to as "eggs") through possible vulnerabilities. If the agent successfully uploads, privilege escalation is attempted and the possibility of further compromise is checked through the analysis of trust relationships associated with the compromised host.

The Metasploit web site (http://www.metasploit.com) provides a consolidated location for exploit code that may be available "in the wild." Metasploit also provides a framework for exploit code to be used and is based on open source vulnerability code. It is a Perl-based package that can run on most UNIX systems (whereas Core Impact runs on Windows systems and is Python-based). Metasploit also provides not just the framework but also actual exploit code written by Metasploit maintainers and others from the open source community.

In short, a simple distinction can be drawn between vulnerability assessments and penetration testing: vulnerability assessments evaluate a system in hopes of finding suspected vulnerabilities, whereas pen testing verifies suspected vulnerabilities by actually attempting to exploit them.

One gray area in the vulnerability assessment versus pen testing debate is denial-of-service (DoS) testing. In order to test whether a system is vulnerable to DoS, an actual test must be conducted. The only accurate test is the actual DoS itself. Some argue that DoS testing is considered pen testing because it is intrusive; however, most DoS attacks are not conducted to gain access to a system. This attack technique is used more as a service interruption tactic rather than a privilege escalation attack mechanism. During vulnerability assessments, your organization must determine whether DoS tests should be conducted as they can be intrusive if services fail as a result of the tests. Most vulnerability assessment tools have the ability to conduct "safe checks" that do not include DoS testing. In the event DoS tests are to be conducted, your organization should schedule assessment activities during maintenance windows where outages are acceptable to avoid any service interruption that may result in loss of productivity.

One important aspect of DoS testing that is often overlooked (because of the risk required to conduct the testing) is the inherent ability to find design flaws in the organization's physical and logical network topology. If the topology of your organization's network has been poorly designed, a DoS attack from the outside may stop internal networks from routing (see Chapter 9) and/or internal services from being accessible. The only true way to reveal these types of problems is during DoS testing. Sometimes the importance of the test isn't that your organization can be taken offline from the Internet; rather, it is that the internal local area network (that is, the management LAN) also goes offline because the Internet gateway device also routes the internal network. We believe every organization is vulnerable to DoS in some form (whether that be an application or the entire infrastructure). The fact of the matter is that with services available publicly, enough packets can be pushed to a service (or entire infrastructure such as the border router) and the service will "fall over" or become nonresponsive due to sheer volume. The point of DoS testing is not to see how many packets it takes to overwhelm a device or service; it is to ensure there are no design flaws that may cause entire site outages when one service is attacked with a DoS.

Defend!

With a completed vulnerability assessment, you have the information outlining potential vulnerabilities on your organization's network. All of your efforts go to waste if you are not able to take what you learned and apply it to the defense of your organization's network. There are some simple steps that can be followed to defend your network based on your assessments. These steps include

  1. Determining remediation options required for each vulnerability

  2. Prioritizing the remediation items

  3. Developing a remediation plan

  4. Documenting vulnerabilities not being mitigated

  5. Conducting remediation

  6. Retesting

The steps listed above allow analysts to use vulnerability assessments for more than just meeting a legal mandate or fulfilling a checkbox for risk analysis in HIPAA, Gramm-Leach-Bliley, and other standards. The outcome of the assessments can be used to help provide a more secure environment for your organization and its data.
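The six steps above imply tracking each finding through a defined lifecycle. Here is a minimal sketch of such a record; the field and function names are illustrative, not from the book:

```python
# A minimal sketch of a record for tracking each vulnerability
# through the remediation steps listed above. All names are
# illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RemediationItem:
    vuln_id: str                 # e.g., a CVE or scanner identifier
    priority: int                # from your prioritization scheme
    options: list = field(default_factory=list)  # candidate remedies (step 1)
    chosen: str = ""             # remedy selected in the plan (step 3)
    accepted_risk: bool = False  # documented as not mitigated (step 4)
    remediated: bool = False     # remediation conducted (step 5)
    retested: bool = False       # verified by retesting (step 6)

def remediation_order(items):
    """Highest-priority items first; accepted risks sort last (step 2)."""
    return sorted(items, key=lambda i: (i.accepted_risk, -i.priority))
```

Even a structure this simple forces the discipline the steps describe: nothing is marked done until it has been remediated and retested, and accepted risks stay visible rather than silently disappearing.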

Determining Remediation

Your vulnerability assessment should include all vulnerabilities found; that's a given. But most vulnerability scanning tools provide a baseline for remediation as well. Many include links to vendor web sites where patches or system configuration change instructions are located. During your process of determining how to remediate each vulnerability, you must consider several factors outlined earlier in the chapter:

  • Security

  • Functionality

  • Scalability

  • Performance

  • Budget

Security is the obvious factor in question since the lack of it is the reason you are planning your remediation steps. The other factors (functionality, scalability, performance, and budget) all affect how the service will function after the remediation along with what financial burden the organization must bear. Generally, there will be compromises that must be made in either functionality or security to make the service usable for your constituents while keeping acceptable security practices in place. In addition to security and functionality concerns, performance must also be accounted for during remediation planning. And of course, budget is always a concern with any IT remediation. The days of endless spending to build an infrastructure so robust, so redundant, and so secure that Fort Knox would be proud to use it are long gone; therefore, you must figure out the magic of "doing more with less" and making your application or service secure within a predefined budget. The key to remember is that there may be many ways to remedy each problem. Options should be considered and documented so the options can be analyzed using the factors outlined above to determine the optimal solution/remedy.

Prioritizing Remediation

Once you know how you will remediate, you must prioritize your vulnerabilities. Using the vulnerability prioritizations included in your assessment and the factors discussed in the previous paragraph, you can determine the remediation order. High-priority vulnerabilities that can be remedied with low cost and minimal negative impact on users should come first. On the other hand, low priority, high cost, and major changes required of users are all factors that argue for delayed remediation. It is really quite simple: if you have multiple holes in your dam and all are leaking water, you start plugging the biggest holes first and the ones closest to your reach. The same goes for vulnerabilities in your infrastructure. The potential for leaked information, data theft, or complete outages to your organization should be the first "holes" considered for remediation.
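The "biggest holes first, closest to reach" rule can be sketched as a simple ranking function: vulnerability priority, discounted by the cost and user disruption of the fix. The weights and the example findings below are invented for illustration; your organization would tune them to its own budget and tolerance for disruption.

```python
# Hedged sketch of remediation ordering: higher vulnerability
# priority raises the score, while fix cost and user disruption
# lower it. The 2.0 and 1.0 weights are illustrative assumptions.

def remediation_score(priority, cost, user_impact):
    """
    priority:    1 (low) to 5 (high) from the prioritization scheme
    cost:        estimated expense, normalized 0.0 (cheap) to 1.0
    user_impact: disruption to users, normalized 0.0 (none) to 1.0
    Higher score = fix sooner.
    """
    return priority - 2.0 * cost - 1.0 * user_impact

# Hypothetical findings: (name, score)
fixes = [
    ("missing-patch", remediation_score(5, 0.1, 0.1)),  # big hole, easy fix
    ("weak-cipher",   remediation_score(2, 0.8, 0.5)),  # small hole, costly
    ("open-relay",    remediation_score(4, 0.3, 0.2)),
]
fixes.sort(key=lambda f: f[1], reverse=True)
# Resulting order: missing-patch, open-relay, weak-cipher
```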

Developing the Plan

Your remediation plan should account for all vulnerabilities in need of remediation, the priorities assigned (effectively the remediation order), and estimates for completing remediation. When developing the remediation plan, maintenance windows, outages, and (possibly) change control must all be contemplated. A good analyst will plan and document his or her remediation plan. Management is more apt to spend precious dollars (or at least have more interest) on a well-thought-out plan with justifications for why it must be conducted. This planning takes time to complete, but as the remediation gets underway, you will see the planning pay back in spades.

Documenting Accepted Vulnerabilities

Not every vulnerability will be cost-effective to remedy. Some will simply be too expensive to fix relative to the risk they pose to the organization. Vulnerabilities classified as acceptable risks to the organization should be documented. This documentation should include information about the vulnerability; the systems, applications, and user base that could be affected; and justification for why remediation will not take place. By documenting the accepted vulnerabilities, your organization will be able to provide justification for any vulnerability found repeatedly during ongoing assessments.
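A risk-acceptance record along those lines might look like the following sketch. The field names (and the CVE identifier in the example) are hypothetical; the point is that every accepted finding carries its justification with it, so recurring scanner hits can be answered immediately.

```python
# Hypothetical documentation record for a vulnerability accepted
# as-is. Field names are illustrative assumptions.

ACCEPTED_VULNERABILITY_TEMPLATE = {
    "vuln_id": "",           # e.g., CVE or scanner identifier
    "description": "",       # what the vulnerability is
    "affected_systems": [],  # hosts/applications exposed
    "affected_users": "",    # user base that could be impacted
    "justification": "",     # why remediation will not take place
    "reviewed_on": "",       # date of the risk-acceptance decision
    "approver": "",          # who accepted the risk
}

def document_accepted(vuln_id, justification, **fields):
    """Return a filled-in risk-acceptance record."""
    record = dict(ACCEPTED_VULNERABILITY_TEMPLATE)
    record.update(vuln_id=vuln_id, justification=justification, **fields)
    return record
```

When the same finding surfaces in next quarter's assessment, the analyst looks up the record instead of re-litigating the decision.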

Conducting Remediation

Now that you have all the planning and documentation completed, it's time to put the remediation plan into action. Conduct remediation following your plans. As remediation items are completed, it is important to test those systems/services to ensure each vulnerability is truly eliminated. If you continue to see the vulnerability, verify it is not a false positive. If it is, ensure you remove this vulnerability from your future assessments by determining why it is a false positive and subsequently how to determine/validate that it is in fact not a true vulnerability. Remember, conducting vulnerability assessments is as much an art as it is a job or science. Not everything is black and white; there is a whole world of gray.

Retesting

Finally, when vulnerabilities are remediated, it is imperative that continuous retesting takes place on a schedule acceptable to your organization. You must verify your infrastructure as a whole periodically as you remediate problem areas. Also remember that as quickly as infrastructure changes, it will be highly unlikely that you will see the same systems and/or services every time you conduct an assessment. For the reasons listed above, ongoing assessments are vital to securing your organization's perimeter so that you can find and subsequently remediate vulnerabilities.

Defense Tactics

Whether your organization conducts its own vulnerability assessments or the assessments are outsourced to a third party, you must understand what the assessment contains in order to successfully remediate the vulnerabilities. Some vulnerabilities and exploits are straightforward and easy to comprehend; however, there are some vulnerabilities that are more complex. In order to understand these, additional research is often required. Many vulnerability assessment tools contain references to various sources of information regarding the vulnerabilities. The following list includes some of the well-known sources for vulnerability information:

  • Common Vulnerabilities and Exposures (CVE) A list providing standard naming for vulnerabilities and exposures (delineates between vulnerabilities and exposures because vulnerability has become such a loosely used word in the IT industry). Great for cross-referencing vendors' and/or organizations' vulnerability IDs with each other. http://cve.mitre.org

  • CERT Coordination Center (CERT/CC) A part of the Software Engineering Institute located at Carnegie Mellon University. CERT acts as a coordination center for Internet security. http://www.cert.org

  • U.S. Computer Emergency Readiness Team (US-CERT) U.S. federally funded and Department of Homeland Security-administered threat tracker and coordination center. http://www.uscert.gov

  • National Infrastructure Security Co-ordination Centre (NISCC/UNIRAS) The U.K. counterpart of US-CERT, which, as of this writing, is arguably more timely and comprehensive than the U.S. variety. http://www.uniras.org

  • SecurityFocus Vulnerability Database (BugTraq ID) SecurityFocus is a source for security information on the Internet. It operates the BugTraq mailing lists and tracks vulnerabilities for all platforms and services (not vendor-specific). http://www.securityfocus.com

  • Open Source Vulnerability Database (OSVDB) An independent database containing vulnerabilities from both commercial and open source software available for security professionals. Note: "Open Source" in the name refers to the database use itself and not the contents/purpose of the database. http://www.osvdb.org

There may be some minor differences on how vulnerabilities are reported within each of the organizations listed above, but the key is to find information sources you are comfortable with and that are compatible with your vulnerability assessment tools.

Being Proactive

It is very difficult to maintain an IT infrastructure that contains zero vulnerabilities. That said, if an attacker wants into your network and has some skills and experience, it is nearly impossible to keep him or her out. The purpose of external vulnerability assessments and defending your network is to minimize your organization's risk footprint on the Internet. The more difficult it is to successfully exploit your systems, the more apt an attacker will be to move on to the next target. There are simply plenty of systems and networks that have gaping holes on the Internet today. An attacker does not have to be choosy unless specifically looking for something within your organization.

To minimize your organization's risk footprint, you are taking the right steps in researching vulnerability assessments (reading this book), but you must take that extra step and think with security foremost in your mind with every IT decision that is made. Of course, you must ensure systems can be used for their intended business purposes; however, when risks are created by opening a service, these should be documented and other risk mitigation techniques such as system hardening, layered security through border router access lists, firewall rule sets, and host firewalls should all be considered. The days of creating the castle wall with a very deep moat around it are long gone in the IT world; be proactive with every system, application, and service on your network. Harden them, filter traffic to and from them, only allow specific access, log that access, and most importantly, check your work continuously through ongoing vulnerability assessments.



Extreme Exploits: Advanced Defenses Against Hardcore Hacks (Hacking Exposed)
ISBN: 0072259558
Year: 2005