Why Benchmark?

Before you can identify the ideal hardware and software configuration for your directory services environment, you need to have a general idea, or baseline, of how your hardware and software work together. Benchmarking involves running a set of standard tests in a controlled environment. In this controlled environment, you can manipulate individual components (hardware and software) to see what enhances and what degrades performance, and thereby establish what works best in your environment. Periodic benchmark measurements also keep you apprised of the ongoing performance of your directory services so you can make sure everything is running optimally.

As you will find out as you go through this chapter, performance analysis of the Sun ONE Directory Server software is an immense task that involves not only throughput and availability, but can also include replication, backup and recovery considerations, and many other areas.

Before we get into the specific details, be aware that this chapter provides one example of performing a benchmark, and that any actual benchmarking should be strongly based on the usage patterns you expect in a production environment.

Directory Server Benchmark Objectives

It is important in any performance benchmark to fully understand what you are testing. With this in mind, let's define some of the performance characteristics of the Sun ONE Directory Server software, and describe the acceptance criteria for performance.

The benchmark testing environment comprises a set of load-generation programs that simulate real-world user interactions with the Sun ONE Directory Server. The main objectives are to measure the following:

  • Vertical scalability of the directory server:

    Identifies when the performance of one instance of the Directory Server reaches a plateau, even when the number of CPUs is increased.

  • General scalability of the directory server is identified by:

    • How many SEARCH LDAP operations the Directory Server can handle on the same node. Here, the LDAP client creates a persistent, anonymous connection to the directory server. Entries are selected using weighted operations that target some entries more frequently than others across the entire set of data in the directory server, and are retrieved with either a subtree or a substring search on the userid. This approach is a much closer approximation to real-world usage patterns and can have a significant impact on performance, particularly when the data set is too large to fit entirely in the cache. The search returns the user's email address based on the userid. This is measured in queries per second.

    • How many ADD LDAP operations the Directory Server can handle on the same node. The LDAP client creates a persistent, authenticated connection to the directory server. Whole entries are added to the DIT. This is measured in operations per second.

    • How many DELETE LDAP operations the Directory Server can handle on the same node. The LDAP client creates a persistent, authenticated connection to the directory server. Whole entries are deleted from the DIT. This is measured in operations per second.

    • How many MODIFY LDAP operations the Directory Server can handle on the same node. The LDAP client creates a persistent, authenticated connection to the directory server. A single attribute is updated in the entries selected across the entire set of data in the directory server. The attribute is indexed with an equality index. This is measured in updates per second.

    • How many AUTHENTICATE operations the Directory Server can handle on the same node. The LDAP client creates a persistent, anonymous connection to the directory server. User authentication involves first performing a search to find the user's entry, and then a bind to verify the credentials provided. There may also be an additional step to determine whether the user belongs to a particular group or role. This is measured in authentications per second.

    • How many RANDOM LDAP operations the Directory Server can handle on the same node. The LDAP client creates a persistent, anonymous connection to the directory server. A mix of the above operations is performed against the entire set of data in the directory server. This is measured in combined operations (searches, authentications, and updates) per second.

    • How many SEARCH, ADD, DELETE, MODIFY, AUTHENTICATE, and RANDOM LDAP operations the directory server can handle on the same node over TLSv1/SSL.

    • How many IMPORT operations the Directory Server can handle on the same node using the LDIF2DB utility, measured in entries per second. An LDAP Data Interchange Format (LDIF) file containing the test entry records is generated by the MakeLDIF file generator application (discussed in detail later) and imported using the Sun ONE Directory Server LDIF2DB program.
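The weighted-operation selection described for the SEARCH test above can be sketched as follows. This is a minimal illustration, not the benchmark's actual generator: the two-tier split (a small "hot" subset weighted more heavily than the rest), the tier sizes, the weights, and the uid values are all assumptions chosen for clarity.

```python
import random

def build_weighted_targets(entry_ids, hot_fraction=0.2, hot_weight=8):
    """Assign a higher selection weight to a 'hot' subset of entries,
    approximating real-world access patterns in which some entries are
    searched far more often than others."""
    cutoff = int(len(entry_ids) * hot_fraction)
    weights = [hot_weight if i < cutoff else 1 for i in range(len(entry_ids))]
    return entry_ids, weights

def pick_search_target(entry_ids, weights, rng=random):
    """Draw one entry to search for, according to the weight list."""
    return rng.choices(entry_ids, weights=weights, k=1)[0]
```

With the values above, each "hot" entry is eight times more likely to be targeted than a "cold" one, so the cache holds the hot set easily while the cold tail still forces occasional database reads.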

The acceptance criteria are determined by the success or failure of each test described in the performance benchmark. Depending on the test, the success criteria can include: receipt of a particular return code from the server (often expressed as an error message), getting a response from the directory server, or displaying search results correctly on the requesting LDAP client. If the criteria are not met for a given performance test, that test is deemed to have failed.
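A return-code check of the kind described above can be sketched as a small pass/fail helper. The function name and message format are illustrative; the numeric result codes themselves come from the LDAP protocol (RFC 4511), where 0 means success and 49 means invalidCredentials.

```python
# LDAP result codes per RFC 4511 (a small, relevant subset)
LDAP_SUCCESS = 0
LDAP_NO_SUCH_OBJECT = 32
LDAP_INVALID_CREDENTIALS = 49

def check_result(test_name, result_code, expected_code=LDAP_SUCCESS):
    """Return (passed, message) for a single benchmark operation.
    The operation passes only when the server returned the expected code."""
    passed = (result_code == expected_code)
    status = "PASS" if passed else f"FAIL (code {result_code})"
    return passed, f"{test_name}: {status}"
```

Note that a non-zero code is not always a failure: a DELETE test that expects the entry to be gone could pass expected_code=LDAP_NO_SUCH_OBJECT for its verification search.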

Benchmark Test Harness Description

This section describes the test harness (the required functionality exercised by the benchmark tests) for the benchmark performance validation test, so that testing conditions can be clearly characterized and the performance numbers obtained from one run are comparable to another.

Overview of Benchmark Tasks

There are many approaches you can take in performing a benchmark for your directory services. The next few sections of this chapter describe the benchmark process using various tools that Sun has developed to simplify this difficult endeavor. A high-level outline of the process using these tools is as follows:

  1. Set up and configure the hardware in your test environment (select, install, and configure the hardware, network and so on).

  2. Install and patch the operating system.

  3. Use idsktune to ensure that the necessary patches and tuning are performed.

  4. Configure and tune your systems (storage, in particular).

  5. Install the Sun ONE Directory Server software in your test environment. This includes areas such as directory server configuration, DIT structure and database topology, data structure, indexing, access control structure, and so on.

  6. Load directory server data. This chapter uses the MakeLDIF program (covered later in this chapter) to simplify this task.

  7. Pretest. The preliminary testing phase consists of verifying that the directory server testing process does indeed work as expected and that it produces valid results. You also need to consider the expected length of each test when computing the time required for this testing.

  8. Perform the benchmark testing and tuning. This chapter uses the SLAMD application to quickly and conveniently run various test scenarios.

  9. Collect and analyze the benchmark results. This chapter does not list the numerous ways in which the data can be analyzed. In some cases, you might need to experiment with different hardware and software configurations, or different test scenarios, rerunning the benchmark tests until you reach an optimum configuration. Take throughput, latency, and utilization into account.
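As a sketch of step 9, the per-operation timing samples collected during a run can be reduced to the throughput and latency figures mentioned above. The function name and the sample layout (a flat list of per-operation latencies in milliseconds, plus the run's wall-clock duration) are assumptions for illustration, not part of SLAMD's output format.

```python
import statistics

def summarize_run(latencies_ms, elapsed_s):
    """Summarize one benchmark run: throughput in operations per second,
    mean latency, and 95th-percentile latency in milliseconds."""
    throughput = len(latencies_ms) / elapsed_s
    # quantiles(n=20) yields 19 cut points; the last one is the 95th percentile
    p95 = statistics.quantiles(latencies_ms, n=20)[-1]
    return {
        "ops_per_sec": round(throughput, 1),
        "mean_ms": round(statistics.fmean(latencies_ms), 2),
        "p95_ms": round(p95, 2),
    }
```

Comparing the 95th percentile rather than the mean alone is what exposes the long tail: a configuration change that leaves mean latency flat but doubles p95 is a regression the mean would hide.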



LDAP in the Solaris Operating Environment: Deploying Secure Directory Services
ISBN: 131456938
Year: 2005
Pages: 87
