Using SLAMD, the Distributed Load Generation Engine

The directory server performance benchmark testing in this section is accomplished using the SLAMD application, a tool developed by Sun engineering for benchmarking the Sun ONE Directory Server product. It is important to understand that SLAMD was not designed solely for testing directory servers. It was intentionally designed in a somewhat abstract manner so that it could be used equally well for load testing and benchmarking virtually any kind of network application. Although most of the jobs provided with SLAMD are intended for use with LDAP directory servers, there are also jobs that can be used for testing messaging, calendar, portal, identity, and web servers.

The SLAMD environment is in essence a distributed computing system with a primary focus on load generation and performance assessment. Each unit of work is called a job, and a job may be processed concurrently on multiple systems, each of which reports results back to the SLAMD server where those results can be viewed and interpreted in a number of ways. The SLAMD environment comprises many components, each of which has a specific purpose. The components of the SLAMD environment include:

  • The core server

  • The configuration handler

  • The scheduler

  • The logger

  • The client listener

  • The job cache

  • SLAMD clients

  • The administrative interface

  • The access control manager

In this next section we take a look at SLAMD, an extremely useful Java application for benchmarking the Sun ONE Directory Server 5.x software, although it is not limited to that product. This application is available for download. See "Obtaining the Downloadable Files for This Book" on page xxvii.

SLAMD Overview

The SLAMD Distributed Load Generation Engine is a Java-based application designed for stress testing and performance analysis of network-based applications. Unlike many other load generation utilities, SLAMD provides an easy way to schedule a job for execution, either immediately or at some point in the future; to distribute that job information to a number of client systems; and to execute the job concurrently on those clients, generating higher levels of load and more realistic usage patterns than a standalone application operating on a single system. Upon completing the assigned task, the clients report the results of their execution back to the server, where the data is combined and summarized. Using an HTML-based administrative interface, you can view results, either in summary form or in varying levels of detail. You can also view graphs of the statistics collected and even export that data into a format that can be imported into spreadsheets or other external applications for further analysis.

The SLAMD environment is highly extensible. Custom jobs that interact with network applications and collect statistics can be developed either as Java class files or as scripts run by the embedded scripting engine. The kinds of statistics that are collected while jobs are being executed can also be customized, as can the kinds of information that can be provided to a job to control the way in which it operates. Although it was originally designed for assessing the performance of LDAP directory servers, SLAMD is well suited for interacting with any network-based application that uses either TCP- or UDP-based transport protocols.

This section provides information about installing and running the components of the SLAMD environment. Additional topics, like developing custom jobs for execution in the SLAMD environment, are not covered in this book.

Installation Prerequisites

Before SLAMD may be installed and used, a number of preliminary requirements must be satisfied:

  • All components of the SLAMD environment have been written in Java and therefore a Java runtime environment is required to use them. All components have been developed using the Java 1.4.0 specification. Version 1.4.0 or higher of the runtime should be installed on the system that hosts the SLAMD server, as well as all systems used to run the SLAMD client. Any system used to develop custom jobs for execution in the SLAMD environment should have the Java 1.4.0 or higher SDK installed.

    Note

    Both the Java runtime environment and the Java SDK may be obtained online from http://java.sun.com/.


  • Some aspects of job execution are time sensitive, and differences in system clocks can cause inaccuracies in the results that are obtained. Therefore, time synchronization (for example, using NTP) should be employed on all systems in the SLAMD environment to ensure that such clock differences do not occur.

  • The communication that occurs between clients and the SLAMD server requires that the host name of those systems be available. The addresses of all client systems must be resolvable by both the client and server systems through DNS or some other mechanism like the /etc/hosts file.

  • Much of the configuration and all of the job data for the SLAMD environment is stored in an LDAP directory server. Therefore, a directory server must be accessible by the system that acts as the SLAMD server. SLAMD has been designed and tested with the iPlanet Directory Server 5.1 and the Sun ONE Directory Server 5.2 software.

    Note

    The entries that store information about scheduled jobs may be quite large. It is therefore necessary to ensure that the configuration directory is properly tuned to handle these entries.


  • The HTML that makes up the SLAMD administrative interface is dynamically generated using Java servlet 1.2 technology. A servlet engine is required to provide this capability. By default, SLAMD is provided with the Apache Tomcat servlet engine, which is quite capable of providing the administrative interface. However, it should be possible to use any compliant servlet engine to host that interface. In addition to Tomcat, SLAMD has been tested with the Sun ONE Web Server 6.0 SP3 software.

  • Nearly all interaction with the SLAMD server is performed through an HTML interface. A Web browser must be installed on all systems that access the administration interface. SLAMD has been tested with Netscape and Mozilla, but the administrative interface has been designed in accordance with the HTML 4.01 specification, and therefore any browser capable of rendering such content may be used. Even text-based browsers are quite capable of performing all administrative tasks.

    Note

    If SLAMD is to be used in a purely text-based environment, it is recommended that the Links browser be installed on systems that need to access the administration interface. Links is a text-based browser that is similar to the better-known lynx (available at http://lynx.browser.org/), but provides better support for rendering tables, which are used throughout the SLAMD administrative interface. See http://links.sourceforge.net/ for more information.
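
As a sketch of the host name requirement described above, entries such as the following (all names and addresses are hypothetical) could be added to the /etc/hosts file on both the server and client systems when DNS is not available:

```
# Hypothetical /etc/hosts entries for the SLAMD systems
192.168.1.10   slamd-server.example.com   slamd-server
192.168.1.21   slamd-client1.example.com  slamd-client1
192.168.1.22   slamd-client2.example.com  slamd-client2
```

Whichever mechanism is used, each address must resolve identically from both the client and the server side.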


Installing the SLAMD Server

Once all the prerequisites have been met, it is possible to install the SLAMD server.

The following procedure assumes that you have acquired the slamd-1.5.1.tar.gz file. It is available as a downloadable file for this book (see "Obtaining the Downloadable Files for This Book" on page xxvii).

To Install the SLAMD Server
  1. Copy the slamd-1.5.1.tar.gz file onto the server system, and into the location in which you wish to install SLAMD.

  2. Uncompress the slamd-1.5.1.tar.gz file as shown:

     $  gunzip slamd-1.5.1.tar.gz  

    Note

    If the gunzip command is not available on the system, use the -d option with the gzip command:

     $  gzip -d slamd-1.5.1.tar.gz  
  3. Extract the files from the slamd-1.5.1.tar file as shown:

     $  /usr/bin/tar -xvf slamd-1.5.1.tar  

    All files are placed in a subdirectory named slamd .

    Note

    If you wish to use a name other than slamd for the base directory, simply rename that directory immediately after extracting the installation archive. However, once the SLAMD server has been started, the path in which it is installed must not be changed.
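
The decompress and extract steps above can also be combined into a single pipeline. The following sketch demonstrates the technique on a small stand-in archive built in a scratch directory; for a real installation the input file would be slamd-1.5.1.tar.gz:

```shell
# Build a stand-in archive in a scratch directory (names are illustrative).
cd "$(mktemp -d)"
mkdir -p slamd/bin
touch slamd/bin/startup.sh
tar -cf - slamd | gzip > slamd-demo.tar.gz   # create the demo .tar.gz
rm -rf slamd                                 # discard the original copy

# Decompress and extract in one step, without an intermediate .tar file.
gzip -dc slamd-demo.tar.gz | tar -xf -
ls slamd/bin                                 # lists the extracted startup.sh
```

This avoids leaving a large uncompressed .tar file on disk alongside the extracted tree.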

  4. Change to the newly created slamd directory.

  5. Edit the bin/startup.sh shell script.

    This shell script is used to start the SLAMD server and the administrative interface, but it must first be edited so that the settings are correct for your system. Set the value of the JAVA_HOME variable to the location in which the Java 1.4.0 or higher runtime environment has been installed. You may also edit the INITIAL_MEMORY and MAX_MEMORY variables to specify the amount of memory in megabytes that the SLAMD server and the administrative interface are allowed to consume. Finally, comment out or remove the two lines at the top of the file that provide the warning message indicating that the startup file has not been configured.
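
For example, the edited portion of bin/startup.sh might resemble the following; the Java path and memory sizes are illustrative values, not defaults shipped with the distribution:

```shell
# Illustrative bin/startup.sh settings (all values are examples)
JAVA_HOME=/usr/java/j2sdk1.4.2   # location of the Java 1.4.0 or higher runtime
INITIAL_MEMORY=256               # initial JVM heap size, in megabytes
MAX_MEMORY=512                   # maximum JVM heap size, in megabytes
```
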

  6. Edit the bin/shutdown.sh shell script.

    This shell script is used to stop the SLAMD server and the administrative interface. Set the value of the JAVA_HOME variable to the location in which the Java 1.4.0 or higher runtime has been installed, and comment out or remove the two lines at the top of the file that provide the warning message indicating that the shutdown file has not been configured.

  7. Execute the bin/startup.sh shell script to start the Tomcat servlet engine and make the SLAMD administrative interface available.

  8. Start a Web browser and access the SLAMD administrative interface.

    The SLAMD administrative interface is available at http://address:8080/slamd, where address is the IP address or DNS host name of the SLAMD server machine. A page is displayed indicating that the SLAMD server is unavailable because it has not yet been configured.

  9. Click on the Initialization Parameters link to go to a page on which it is possible to specify the configuration directory settings.

    The configuration directory is the LDAP directory server that is used to store much of the configuration and all of the job data for jobs that have been scheduled.

  10. Click on the Config Directory Address link.

    You are presented with a form that allows you to specify the address to use for the configuration directory server.

    The address may be entered as either an IP address or a host name, as long as the SLAMD server machine can contact the configuration directory machine using the provided address. Repeat this process for the remaining configuration directory settings. TABLE 9-4 describes the kind of value that should be used for each:

    Table 9-4. SLAMD Configuration Parameters

    Config Directory Address: Specifies the address that should be used to contact the configuration directory server.

    Config Directory Port: Specifies the port number that should be used to contact the configuration directory server.

    Config Directory Bind DN: The DN of the user that should be used to bind to the configuration directory server. This DN must have full read and write permissions (including the ability to add and remove entries) for the portion of the directory that is to hold the SLAMD configuration data.

    Config Directory Bind Password: The password for the configuration bind DN.

    Configuration Base DN: The location in the configuration directory under which all SLAMD information is stored. If this entry does not exist, the SLAMD server can create it provided that the DN specified is under an existing suffix in the configuration directory.

    Use SSL for Config Directory, SSL Key Store Location, SSL Key Store Password, SSL Trust Store Location, SSL Trust Store Password: Settings that control whether the communication between the SLAMD server and the configuration directory is encrypted with SSL. It is recommended that the initial configuration be completed without SSL. See a later section for information on configuring SLAMD for use with SSL.

  11. Click the Add SLAMD Schema button at the bottom of the Initialization Settings page.

    This communicates with the configuration directory server, finds the schema subentry, and adds the custom SLAMD schema definitions over LDAP while the server is online. This schema information must be added to the configuration directory before SLAMD can use it to store configuration and job information.

    Note

    If you prefer, you can add the schema information to the directory manually rather than over LDAP. A file containing the SLAMD schema definitions is included in the installation archive as conf/98slamd.ldif.

  12. Click the Add SLAMD Config button at the bottom of the Initialization Settings page.

    This automatically adds all required entries to allow that directory instance to be used as the configuration directory for SLAMD. All information added to the directory is at or below the configuration base DN.

    Note

    At least one entry in the hierarchy of the configuration base DN must already exist in the directory server before the configuration may be added to it. For example, if the directory is configured with a suffix of dc=example,dc=com and the configuration base DN is specified as ou=SLAMD,ou=Applications,dc=example,dc=com, then at least the dc=example,dc=com entry must already be present in the directory server before attempting to add the SLAMD configuration data. Any entries between the directory suffix and the SLAMD configuration base DN that are not present in the directory are automatically added when the Add SLAMD Config button is pressed.
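
Continuing the example in the note above, a minimal dc=example,dc=com suffix entry could look like the following LDIF; the objectclass values shown are the conventional ones for a domain-style entry:

```
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example
```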

  13. Click the Test Connection button at the bottom of the Initialization Settings page.

    This establishes a connection to the configuration directory server, verifies that the bind DN and password provided are correct, and verifies that the SLAMD schema and configuration entries have been added to that directory. If all tests succeed, a message is displayed indicating that it is suitable for use as the SLAMD configuration directory. If any failure occurs, details about that failure are displayed so that the problem may be corrected.

  14. Follow the SLAMD Server Status link on the left side of the page.

    This page normally displays a significant amount of information about the SLAMD server, including the number and types of jobs defined, the number of clients connected, and statistics about the Java environment in which SLAMD is running. However, when the SLAMD server is offline, much of this information is not available.

  15. Click the Start SLAMD button in the Server Status section and, when prompted, click Yes to provide confirmation for the startup.

    If all goes well, the SLAMD server is started properly and the full set of SLAMD functions is made available.

SLAMD Clients

Because SLAMD is a distributed load generation engine, the SLAMD server itself does not execute any of the jobs. Rather, the actual execution is performed by SLAMD clients, and the server merely coordinates the activity of those clients. Therefore, before any jobs can be executed, it is necessary to have clients connected to the server to run those jobs.

The client application communicates with the SLAMD server using a TCP-based protocol. Therefore, it is possible to install clients on machines other than the one on which the SLAMD server is installed. In fact, this is recommended so that the client and server do not compete for the same system resources, which could interfere with the ability of the client to obtain accurate results. Further, it is possible to connect to the SLAMD server with a large number of clients to process multiple jobs concurrently. In such cases, it is best to have those clients distributed across as many machines as possible to avoid problems in which the clients compete with each other for system resources.

The following procedure assumes that you have the SLAMD client package called slamd_client-1.5.1.tar.gz. This file is available for download in two ways:

  • As part of the slamd-1.5.1.tar.gz downloadable file

  • As a separate compressed file that contains only the SLAMD client software (slamd_client-1.5.1.tar.gz)

See "Obtaining the Downloadable Files for This Book" on page xxvii for download information.

To Install the SLAMD Client
  1. Copy the slamd_client-1.5.1.tar.gz file onto the client system, and into the location in which you wish to install the SLAMD client.

  2. Uncompress the slamd_client-1.5.1.tar.gz file as shown:

     $  gunzip slamd_client-1.5.1.tar.gz

    Note

    If the gunzip command is not available on the system, use the -d option with the gzip command:

     $  gzip -d slamd_client-1.5.1.tar.gz
  3. Extract the files from the slamd_client-1.5.1.tar file as shown:

     $  /usr/bin/tar -xvf slamd_client-1.5.1.tar

    All files are placed in a slamd_client subdirectory.

    Note

    As in the server installation, the directory in which the client files are placed may be modified. However, unlike the server, the client does not store any path information. Therefore, it is possible to move or rename the directory containing the SLAMD client files even after the client has been used.

  4. Edit the start_client.sh script.

    This script may be used to start the SLAMD client application, but it must first be edited so that the settings are correct for your system. Set the value of the JAVA_HOME variable to the location in which the Java 1.4.0 or higher runtime environment has been installed, and set the SLAMD_HOST and SLAMD_PORT variables to indicate the address and port number that the client should use to communicate with the server.

    You can also edit the INITIAL_MEMORY and MAX_MEMORY variables to specify the amount of memory in megabytes that the SLAMD client is allowed to consume.

    Finally, comment out or remove the two lines at the top of the file that provide the warning message indicating that the file has not been configured.
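
After editing, the relevant portion of start_client.sh might resemble the following; the Java path, server address, port number, and memory sizes are all illustrative assumptions rather than values from the distribution:

```shell
# Illustrative start_client.sh settings (all values are examples)
JAVA_HOME=/usr/java/j2sdk1.4.2        # location of the Java 1.4.0+ runtime
SLAMD_HOST=slamd-server.example.com   # address of the SLAMD server
SLAMD_PORT=3000                       # client listener port (assumed value)
INITIAL_MEMORY=256                    # initial JVM heap size, in megabytes
MAX_MEMORY=512                        # maximum JVM heap size, in megabytes
```
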

To Start the SLAMD Client
  • Use the following command:

     $  ./start_client.sh  

    This starts the client application, which connects to the SLAMD server and indicates that it is ready to accept new job requests. If a problem occurs, an error message is printed that indicates the cause of the problem so that it may be corrected.

    While it is not necessary to provide any arguments to this script, there are a couple of options that are supported. One such option is the -a argument. This argument indicates that the client should aggregate all the data collected by each thread into a single set of statistics before sending those results to the SLAMD server. This can significantly reduce the volume of data that the client needs to send to the server and that the server needs to manage, but it does prevent the end user from being able to view performance information about each individual thread on the client (it is still possible to view aggregate data for each client).

    It is also possible to use the -v option when starting the client application. This starts the client in verbose mode, which means that it prints additional information that may be useful for debugging problems, including detailed information about the communication between the client and the server. However, using verbose mode might incur a performance penalty in some cases and therefore it is recommended that its use be reserved for troubleshooting problems that cannot be solved with the standard output provided by the client.

The SLAMD Administration Interface

All interaction with the SLAMD server is performed through an HTML administration interface. This interface provides a number of capabilities, including the ability to:

  • Schedule new jobs for execution

  • View the results of jobs that have completed execution

  • View status information about the SLAMD server

  • Make new job classes available for use in the SLAMD environment

  • Configure the SLAMD server

  • Customize the appearance of the HTML interface

This section provides a brief overview of the administrative interface to describe how it can be used.

By default, the administrative interface is accessed through the URL http://address:8080/slamd, where address is the address of the system on which the SLAMD server is installed. Any browser that supports the HTML 4.01 standard should be able to use this interface, although different browsers may have differences in the way that the content is rendered.

Provided that the SLAMD server is running, the left side of the page contains a navigation bar with links to the various tasks that can be performed. This navigation bar is divided into four major sections:

  • Manage Jobs

  • Startup Configuration

  • SLAMD Configuration

  • SLAMD Server Status

Note

If access control is enabled in the administration interface, some sections or options may not be displayed if the current user does not have permission to use those features. Configuring the administrative interface to use access control is documented in a later section.


FIGURE 9-10 illustrates the SLAMD administrative interface.

Figure 9-10. SLAMD Administrative Interface


The Manage Jobs section provides options to schedule new jobs for execution, view results of jobs that have already been completed, view information about jobs that are currently running or awaiting execution, and view the kinds of jobs that may be executed in the SLAMD environment.

The Startup Configuration section provides options to edit settings in the configuration file that contains the information required to start the SLAMD server (by default, webapps/slamd/WEB-INF/slamd.conf). These include the settings used for communicating with the configuration directory and the settings used for access control.

The SLAMD Configuration section provides options to edit the SLAMD server settings that are stored in the configuration directory. These settings include options to configure the various components of the SLAMD server, and to customize the appearance of the administrative interface.

The SLAMD Server Status option provides the ability to view information about the current state of the SLAMD environment, including the number of jobs currently running and awaiting execution, the number of clients that are connected and what each of them is doing, and information about the Java Virtual Machine (JVM) software in which the SLAMD server is running. This section also provides administrators with the ability to start and stop the SLAMD server, and a means of interacting with the cache used for storing access control information.

Scheduling Jobs for Execution

One of the most important capabilities of the SLAMD server is the ability to schedule jobs for execution. You can schedule them to execute immediately or at some point in the future, on one or more client systems, using one or more threads per client system, and with a number of other options.

For a job to be available for processing, it must first be defined in the SLAMD server. You can develop your own job classes and add them to the SLAMD server so that they are executed by clients. The process for defining new job classes is discussed later, but the SLAMD server is provided with a number of default job classes that can be used to interact with an LDAP directory server.

To schedule a job for execution, first follow the Schedule a Job link in the Manage Jobs section of the navigation sidebar. This displays a page containing a list of all job classes that have been defined in the server. To choose the type of job to execute, follow the link for that job class; a new page is displayed containing a form in which you may specify how the job is to be executed. Some of the parameters that can be specified are specific to the type of job that was chosen. The parameters specific to the default job classes are documented in a later section. However, some options are the same for every type of job.

The common configuration parameters that are displayed by default are as follows:

  • Description This field allows you to provide a brief description of the job that allows it to be distinguished in a list of completed jobs. This is an optional field. If no description is desired, leave this field blank.

  • Start Time This field allows you to specify the time at which the job should start running. If no value is provided, the job starts running as soon as possible. If a value is provided, it must be in the form YYYYMMDDhhmmss.

  • Stop Time This field allows you to specify the time at which the job should stop running, provided that it has not already stopped for some other reason. If no value is provided, the job is not stopped because of the stop time. If a value is provided, it must be in the form YYYYMMDDhhmmss.

  • Duration This field allows you to specify the maximum length of time in seconds that the job should be allowed to run. This is different from the stop time because it is calculated from the time that the job actually starts running, regardless of the scheduled start time. If no value is provided, there is no maximum duration.

  • Number of Clients This field allows you to specify the number of clients on which the job runs. When the time comes for the job to run, it is sent to the specified number of clients to do the processing. This is a required parameter, and it must be a positive integer.

  • Wait for Available Clients This checkbox allows you to specify what should happen if the time comes for the job to start but there is not an appropriate set of clients available to perform that processing. If this box is not checked, the job is cancelled. If this box is checked, the job is delayed until an appropriate set of clients is available.

  • Threads per Client This field allows you to specify the number of threads that should be created on each client to run the job. Each thread executes the instructions associated with the job at the same time, which allows a single client to perform more work and generate a higher load against the target server. However, specifying too many threads may cause a scenario in which the client system or the target server is overloaded, which could produce inaccurate results.

  • Statistics Collection Interval This field allows you to specify the minimum interval over which statistics are collected while a job is being executed. Statistics are collected for the entire duration of the job, but they are also collected for each interval of the specified duration while that job is active. This is helpful for graphing or otherwise analyzing the results of the job over time.

  • Job Comments This text area allows you to add free-form text that describes additional aspects of the job that cannot be reflected through the other parameters associated with the job. Unlike the other parameters common to all jobs, the job comments field appears at the bottom of the schedule job form. In addition, these comments can be edited after the job has started running or even after the job has completed. This makes it possible to provide comments on the job based on observations gathered during job execution.
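
The Start Time and Stop Time fields described above expect timestamps in the YYYYMMDDhhmmss form. One way to generate such a value is with the date command; the -d option shown here is GNU date syntax and may differ on other systems:

```shell
# Print the current time in the YYYYMMDDhhmmss form used by the scheduler.
date +%Y%m%d%H%M%S

# With GNU date, a relative time such as five minutes from now can also be
# formatted the same way (useful for the Start Time field).
date -d '+5 minutes' +%Y%m%d%H%M%S
```
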

In addition to the above parameters, there are a number of other options that are available by clicking on the Show Advanced Scheduling Options button. These options are more specialized than the above parameters, and therefore are not commonly used. However, they are available for use if necessary. These advanced options include:

  • Job Is Disabled This checkbox allows you to specify whether the job should be disabled when it is scheduled. When a job is disabled, it is not considered eligible for execution until it is enabled. This can be beneficial for cases in which it is necessary to make changes to the job after it has been scheduled (for example, if a number of jobs are scheduled at the same time and then individual changes need to be made to each of them).

  • Number of Copies This field allows you to specify the number of copies that are made of the current job when it is scheduled. By default, only a single copy is made, but it is possible to specify any number of copies. This is useful for cases in which a number of jobs of the same type are to be executed with only minor differences between them, since each job can be edited after it is scheduled to make those minor changes.

  • Time Between Copy Startups This field allows you to specify the length of time in seconds that should be left between each job's start time when multiple jobs are scheduled. Note that this is the interval between start times for the jobs, not the time between the end of one job and the beginning of the next. If this is not specified, all copies are scheduled with the same start time.

  • Use Specific Clients This text area allows you to request that the job be executed on a specific set of clients. By default, the clients that are used to run a job cannot be guaranteed. However, if a specific set of addresses is specified, those systems are used to run the job. It is possible to specify either the IP address or the DNS host name (as long as those names can be resolved to IP addresses). A separate address must be specified on each line.

  • Thread Startup Delay This field allows you to specify the length of time in milliseconds that each client should wait between starting each thread. This is useful for cases in which there are a number of threads on the client that may be attempting the same operations at the same time in a manner that could lead to resource contention. By introducing a small delay between each thread, that contention might be somewhat reduced for more accurate results.

  • Job Dependencies This field allows you to specify one or more jobs on which the current job is dependent. That is, any job on which the current job is dependent must complete its execution before the current job is considered eligible for execution. When a job is initially scheduled, only a single dependency can be specified. However, by editing the job after it is scheduled, it is possible to add additional dependencies. Only jobs that are currently in the pending or running job queues can be specified as dependencies when scheduling or editing a job.

  • Notify on Completion This field allows the user to specify the email addresses of one or more users that should be notified when the job has completed. If multiple addresses are to be included, separate them with commas and/or spaces. This option only appears if the SLAMD mailer has been enabled.

The remaining parameters that appear on the form when scheduling a new job are specific to that job type. The default jobs are described later in this book. Regardless of the job type, following the link at the top of the page in the sentence Click here for help regarding these parameters displays a page with information on each of those parameters.

Once all appropriate parameters are specified for the job, clicking the Schedule Job button causes those parameters to be validated. If all the values provided are acceptable, the job is scheduled for execution. If any of the parameters are unacceptable, an error message is displayed indicating the reason that the provided value was inappropriate, and a form is displayed allowing you to correct the problem.

Managing Scheduled Jobs

Once a job is scheduled for execution, it is added to the pending jobs queue to await execution. Once all of the criteria required to start the job are met (for example, the start time has arrived, the job is not disabled, all dependencies have been satisfied, and an appropriate set of clients is available), that job is moved from the pending jobs queue into the running jobs queue, and the job is sent out to the clients for processing. When the job completes execution, the job is removed from the running jobs queue, and the job information is updated in the configuration directory.

As described previously, a job can be in one of three stages:

  • Pending The job is awaiting execution and has the potential to be executed once all the appropriate conditions are met. This also includes jobs that are disabled.

  • Running The job information has been sent to at least one client, and the server is currently waiting for results from at least one client.

  • Completed The job has completed all the execution that it is going to do. This covers a range of outcomes, from an indication that the job completed successfully to an indication that a problem prevented the job from being started at all.
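The three stages above form a strictly forward-moving lifecycle: pending to running to completed. A minimal sketch of that progression (the class and method names are illustrative, not SLAMD's implementation):

```python
# Hedged sketch of the three-stage job lifecycle: pending -> running ->
# completed, with no transitions out of the completed stage.

class Job:
    VALID = {"pending": "running", "running": "completed"}

    def __init__(self):
        self.state = "pending"

    def advance(self):
        """Move the job to its next stage; raise on an illegal transition."""
        if self.state not in self.VALID:
            raise ValueError("job already completed")
        self.state = self.VALID[self.state]
        return self.state
```

Note that "completed" here is terminal whether the job succeeded or failed, matching the description above.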

Viewing Job Execution Results

Once a job has executed and all clients have sent their results back to the SLAMD server, those results are made available through the administration interface in a variety of forms, and the data collected can even be exported for use with external programs like spreadsheets or databases.

When the job summary page is displayed for a particular job, all of the parameters used to schedule that job are displayed. If that job has completed execution, additional information is available about the results of that execution. That additional information falls into three categories:

  • Job Execution Data This section provides an overview of the execution results and links to obtain more detailed information. This section is described in more detail later.

  • Clients Used This section provides the client IDs of all the clients used to run the job. The client ID contains the address of that client, which makes it possible to determine which systems were used in the job execution.

  • Messages Logged This section provides a list of all messages that were logged while the job was in progress. These messages may provide additional information about problems that occurred while the job was active, or any other significant information that should be known about the job execution. If there are no such messages, this section is not displayed.

Of these three sections, the one of most interest is that containing the job execution data, because it provides the actual results.

Optimizing Jobs

When using SLAMD to run benchmarks against a network application, it is often desirable to find the configuration that yields the best performance. In many cases, this also involves trying different numbers of clients or threads per client to determine the optimal amount of load that can be placed on the server to yield the best results. To help automate this process, SLAMD offers optimizing jobs.

An optimizing job is actually a collection of smaller jobs. It runs the same job repeatedly with increasing numbers of threads per client until it finds the number that yields the best performance for a particular statistic. At the present time, optimizing jobs do not alter the number of clients used to execute the job, although that option may be available in the future.
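The sweep an optimizing job performs can be sketched as follows. This is a simplified illustration under stated assumptions: `run_job` stands in for a full SLAMD job execution that returns the value of the statistic being optimized, and a real optimizing job may stop early once results stop improving rather than exhausting the range.

```python
# Illustrative sketch of an optimizing job: run the same job with
# increasing numbers of threads per client and keep the thread count that
# maximizes the chosen statistic. run_job is a hypothetical stand-in.

def find_best_thread_count(run_job, max_threads):
    """Return (best_threads, best_value) where best_value is the highest
    metric returned by run_job(threads) for threads in 1..max_threads."""
    best_threads, best_value = 0, float("-inf")
    for threads in range(1, max_threads + 1):
        value = run_job(threads)
        if value > best_value:
            best_threads, best_value = threads, value
    return best_threads, best_value
```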

There are two ways in which an optimizing job may be scheduled:

  • View information about a particular completed job that contains the settings you wish to use for the optimizing job. On this page, click the Optimize Results button at the top of the page.

  • Follow the View Optimizing Jobs link in the navigation sidebar. Choose the job type from the drop-down list box at the top of that page and click Submit.

Organizing Job Information

After the SLAMD server is used to schedule and run a large number of jobs, the page that stores completed job information can grow quite large, and the process of displaying that page can take more time and consume more server resources. Therefore, for the purposes of both organization and conserving system resources, the server offers the ability to arrange jobs into folders. It is also possible to specify a variety of criteria that can be used to search for job information, regardless of the folder in which it is contained. This section provides information on using job folders and searching for job information.

Real Job Folders

Real job folders correspond to the location of the job information in the configuration directory. As such, it is only possible for a job to exist in a single real folder. Real job folders are used to store information about jobs that have completed execution so that viewing completed job information does not become an expensive process.

Virtual Job Folders

Although real job folders can be very beneficial for a number of reasons, they also have some drawbacks that prevent them from being useful in all circumstances. For that reason, the SLAMD server offers the ability to classify jobs in virtual folders in addition to real folders.

Virtual job folders offer a number of advantages over real job folders:

  • A virtual job folder is completely independent of real job folders. That is, a virtual job folder can contain jobs from a number of different real job folders.

  • A virtual job folder is not dependent upon the location of the job information in the configuration directory. Therefore, a single job can exist in multiple virtual job folders while it can only exist in a single real job folder.

  • A virtual job folder can contain jobs in any state. Real job folders do not display jobs in the pending, running, or disabled states.

  • In addition to having a specific list of jobs to include, a virtual job folder can also have a set of search criteria that can be used to dynamically include jobs. This allows newly created jobs to be automatically included in the virtual job folder without requiring any administrative action.
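The membership model described in the last bullet (an explicit job list plus optional search criteria that include jobs dynamically) can be sketched as below. The field names and predicate form are illustrative assumptions, not SLAMD's internal representation.

```python
# Sketch of virtual-folder membership: explicitly listed jobs plus any job
# matching an optional search predicate, with no duplicates.

def virtual_folder_contents(all_jobs, explicit_ids, criteria=None):
    """Return the jobs in a virtual folder.

    all_jobs: list of dicts with at least an 'id' key;
    explicit_ids: set of job IDs explicitly placed in the folder;
    criteria: optional predicate applied to every job for dynamic inclusion.
    """
    contents = [j for j in all_jobs if j["id"] in explicit_ids]
    if criteria is not None:
        for job in all_jobs:
            if criteria(job) and job not in contents:
                contents.append(job)
    return contents
```

Because the predicate is evaluated against all jobs, a newly created job that matches is included automatically, as described above.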

The Default Job Classes

When the SLAMD server is installed, a number of default job classes are registered with the server. The majority of these job classes are used to generate load against LDAP directory servers, because that is the first intended purpose for SLAMD. However, it is quite possible to develop and execute jobs that communicate with any kind of network application that uses a TCP- or UDP-based protocol. The process for adding custom jobs to the SLAMD server is described later. The remainder of this section describes each of the jobs provided with the SLAMD server by default, including the kinds of parameters that are provided to customize their behavior.

Null Job

The null job is a very simple job that does not perform any actual function. Its only purpose is to consume time. When combined with the job dependency feature, it provides the ability to insert a delay between jobs that would otherwise start immediately after the previous job had completed. It does not have any job-specific parameters.

Exec Job

The exec job provides a means of executing a specified command on the client system and optionally capturing the output resulting from the execution of that command. It can be used for any purpose, although its original intent is to be used to execute a script that can perform setup or cleanup before or after processing another job (for example, to restore an LDAP directory server to a known state after a job that may have made changes to it).

HTTP GetRate Job

The HTTP GetRate job is intended to generate load against web servers using the HTTP protocol. It works by repeatedly retrieving a specified URL using the HTTP GET method, and can simulate the kind of load that can be generated when a large number of users attempt to access the server concurrently using web browsers.

LDAP SearchRate Job

The LDAP SearchRate job is intended to generate various kinds of search loads against an LDAP directory server. It is similar to the searchrate command-line utility included in the Sun ONE Directory Server Resource Kit software, but there are some differences in the set of configurable options, and the SLAMD version has support for additional features not included in the command-line version.

Weighted LDAP SearchRate Job

The Weighted LDAP SearchRate job is very similar to the SearchRate job, with two exceptions: it is possible to specify two different filters to use when searching, and also to specify a percentage to use when determining which filter to issue for the search. If this is combined with the ability to use ranges of values in the filter, it is possible to implement a set of searches that conform to the 80/20 rule (80 percent of the searches are targeted at 20 percent of the entries in the directory) or some other ratio. This makes it possible to more accurately simulate real-world search loads on large directories in which it is not possible to cache the entire contents of the directory in memory.
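The weighted selection described above amounts to choosing one of two filters with a configured probability per search. A minimal sketch (the filter strings are hypothetical examples; SLAMD's actual parameter handling differs):

```python
import random

# Sketch of weighted filter choice: pick filter_a with probability
# weight_a, else filter_b. With weight 0.8 and a filter_a that targets 20
# percent of the entries, this approximates the 80/20 rule.

def choose_filter(filter_a, filter_b, weight_a, rng=random):
    """Return filter_a with probability weight_a, otherwise filter_b."""
    return filter_a if rng.random() < weight_a else filter_b
```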

LDAP Prime Job

The LDAP prime job is a specialized kind of SearchRate job that can be used to easily prime an LDAP directory server (retrieve all or a significant part of the entries contained in the directory so that they may be placed in the server's entry cache, allowing them to be retrieved more quickly in the future). It is true that the process of priming a directory server can often be achieved by a whole subtree search with a filter of (objectClass=*). However, this job offers two distinct advantages over using that method. The first is that it allows multiple clients and multiple client threads to be used concurrently to perform the priming, which allows it to complete in significantly less time and with significantly less resource consumption on the directory server system. The second is that this job makes it possible to prime the server with only a subset of the data, whereas an (objectClass=*) filter results in the retrieval of the entire data set.

The LDAP prime job requires that the directory server be populated with a somewhat contrived data set. Each entry should contain an attribute (indexed for equality) whose value is a sequentially incrementing integer. As such, while the job can easily be used with data sets intended for benchmarking the performance of the directory server, it is probably not adequate for use on a directory loaded with actual production data.
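Multi-client priming over a sequential integer attribute implies dividing the value range so each worker retrieves a disjoint slice. A sketch of that partitioning (illustrative only; SLAMD's own distribution logic is not shown here):

```python
# Sketch: split the inclusive value range [low, high] into `workers`
# contiguous, near-equal, non-overlapping slices, one per client or thread.

def partition_range(low, high, workers):
    """Return a list of (start, end) tuples covering [low, high]."""
    total = high - low + 1
    base, extra = divmod(total, workers)
    slices, start = [], low
    for i in range(workers):
        size = base + (1 if i < extra else 0)  # spread the remainder
        slices.append((start, start + size - 1))
        start += size
    return slices
```

The same idea underlies the workaround suggested in the notes below for add and delete jobs: multiple copies of a job, each operating on a different slice of the range.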

LDAP ModRate Job

The LDAP ModRate job is intended to generate various kinds of modify load against an LDAP directory server. It is similar to the modrate command-line utility included in the Sun ONE Directory Server Resource Kit software, but there are some differences in the set of configurable options. The SLAMD version also has support for additional features not included in the command-line version.

LDAP ModRate with Replica Latency Job

The LDAP ModRate with replica latency job is intended to generate various kinds of modify load against an LDAP directory server while tracking the time required to replicate those changes to another directory server. It accomplishes this by registering a persistent search against the consumer directory server and using it to detect changes to an entry that is periodically modified in the supplier directory. The time between the change made on the supplier and its appearance on the consumer is recorded to the nearest millisecond.

It is important to note that this job works through sampling. The replication latency is not measured for most of the changes made in the supplier server. Rather, updates are periodically made to a separate entry and only changes to that entry are measured. This should allow the change detection to be more accurate for those changes that are measured, and provide a measurement of the overall replication latency. However, it does not measure the latency of changes made to other entries by other worker threads. Therefore, it is not possible to guarantee that the maximum or minimum latency for all changes is measured.

LDAP AddRate Job

The LDAP AddRate job is intended to generate various kinds of add load against an LDAP directory server. It is similar to the infadd command-line utility included in the Sun ONE Directory Server Resource Kit software, but there are some differences in the set of configurable options. The SLAMD version also has support for additional features not included in the command-line version, such as the ability to use SSL and the ability to specify additional attributes to include in the generated entries.

Note

Because individual clients are unaware of each other when asked to process a job, this job class should never be run with multiple clients. If this job is run on multiple clients, most operations fail because all clients attempt to add the same entries. However, alternatives do exist. It is possible to use one client with many threads because threads running on the same client can be made aware of each other. Additionally, it is possible to create multiple copies of the same job, each intended to run on one client (with any number of threads) but operating on a different range of entries.


LDAP AddRate with Replica Latency Job

The LDAP AddRate with replica latency job is intended to generate various kinds of add load against an LDAP directory server while measuring the time required to replicate those changes to another directory server. It is very similar to the LDAP AddRate job, although it does not provide support for communicating over SSL.

The process for monitoring replication latency in this job is identical to the method used by the LDAP ModRate job that tests replica latency. That is, a persistent search is registered against a specified entry on the consumer and periodic modifications are performed against that entry on the master directory. The fact that this job performs adds while the test to measure replication latency is based on modify operations is not significant because replicated changes are performed in the order that they occurred regardless of the type of operation (that is, add, modify, delete, and modify RDN operations all have the same priority).

Note

Because individual clients are unaware of each other when asked to process a job, this job class should never be run with multiple clients. If this job is run on multiple clients, most operations fail because all clients attempt to add the same entries. However, alternatives do exist. It is possible to use one client with many threads because threads running on the same client can be made aware of each other. Additionally, it is possible to create multiple copies of the same job, each intended to run on one client (with any number of threads) but operating on a different range of entries.


LDAP DelRate Job

The LDAP DelRate job is intended to generate delete load against an LDAP directory server. It is similar to the ldapdelete command-line utility included in the Sun ONE Directory Server Resource Kit software, but it has many additional features, including the ability to use multiple concurrent threads to perform the delete operations.

Note

Because individual clients are unaware of each other when asked to process a job, this job class should never be run with multiple clients. If this job is run on multiple clients, most operations fail because all clients attempt to delete the same entries. However, alternatives do exist. It is possible to use one client with many threads because threads running on the same client can be made aware of each other. Additionally, it is possible to create multiple copies of the same job, each intended to run on one client (with any number of threads) but operating on a different range of entries.


LDAP DelRate with Replica Latency Job

The LDAP DelRate with replica latency job is intended to generate delete load against an LDAP directory server while measuring the time required to replicate those changes to another directory server. It is very similar to the LDAP DelRate job, although it does not provide support for communicating over SSL.

The process for monitoring replication latency in this job is identical to the method used by the LDAP ModRate job that tests replica latency. That is, a persistent search is registered against a specified entry on the consumer and periodic modifications are performed against that entry on the master directory. The fact that this job performs deletes while the test to measure replication latency is based on modify operations is not significant because replicated changes are performed in the order that they occurred regardless of the type of operation (that is, add, modify, delete, and modify RDN operations all have the same priority).

Note

Because individual clients are unaware of each other when asked to process a job, this job class should never be run with multiple clients. If this job is run on multiple clients, most operations fail because all clients attempt to delete the same entries. However, alternatives do exist. It is possible to use one client with many threads because threads running on the same client can be made aware of each other. Additionally, it is possible to create multiple copies of the same job, each intended to run on one client (with any number of threads) but operating on a different range of entries.


LDAP CompRate Job

The LDAP CompRate job is intended to generate various kinds of compare load against an LDAP directory server. The Sun ONE Directory Server Resource Kit software does not have a command-line utility capable of generating load for LDAP compare operations, although it does provide the ldapcmp utility that makes it possible to perform a single compare operation.

LDAP AuthRate Job

The LDAP AuthRate job is intended to simulate the load generated against an LDAP directory server by various kinds of applications that use the directory server for authentication and authorization purposes. It first performs a search operation to find a user's entry based on a login ID value. Once the entry has been found, a bind is performed as that user to verify that the provided password is correct, that the password is not expired, and that the user's account has not been inactivated. Upon a successful bind, it might optionally verify whether that user is a member of a specified static group, dynamic group, or role.

Note

The Sun ONE Directory Server Resource Kit software does contain a command-line authrate utility, but the behavior of that utility is significantly different because it only provides the capability to perform repeated bind operations as the same user.


LDAP DIGEST-MD5 AuthRate Job

The LDAP DIGEST-MD5 AuthRate job is very similar to the LDAP AuthRate job, except that instead of binding using simple authentication, binds are performed using the SASL DIGEST-MD5 mechanism. DIGEST-MD5 is a form of authentication in which a password is used to verify a user's identity, but rather than providing the password itself in the bind request (where it could be visible in clear text to anyone who happened to be watching network traffic), the password, along with some other information agreed upon by the client and the server, is hashed in an MD5 digest. This prevents the password from being transferred over the network in clear text, although it does require that the server have access to the clear-text password in its own database so that it can perform the same hash to verify the credentials provided by the client.
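A simplified sketch of the DIGEST-MD5 response computation (based on RFC 2831) shows why both sides need the clear-text password: each must be able to compute the same hash. The parameter values are illustrative, and details such as authzid handling are omitted.

```python
import hashlib

# Simplified DIGEST-MD5 response sketch (RFC 2831). The password never
# crosses the wire; only the final hex digest does.

def _h(data: bytes) -> bytes:
    return hashlib.md5(data).digest()

def _hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def digest_md5_response(user, realm, password, nonce, cnonce,
                        nc="00000001", qop="auth",
                        digest_uri="ldap/localhost"):
    """Compute the client's response hash from the shared secrets."""
    # A1 mixes the hashed credentials with the server and client nonces.
    a1 = _h(f"{user}:{realm}:{password}".encode()) + \
        f":{nonce}:{cnonce}".encode()
    a2 = f"AUTHENTICATE:{digest_uri}".encode()
    return _hex(
        f"{_hex(a1)}:{nonce}:{nc}:{cnonce}:{qop}:{_hex(a2)}".encode())
```

The server, holding the same clear-text password, repeats this computation and compares digests, which is exactly why the directory must store the password in clear text for DIGEST-MD5 binds.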

Because the only difference between this job and the LDAP AuthRate job is the method used to bind to the directory server, all configurable parameters are exactly the same and are provided in exactly the same manner.

LDAP Search and Modify Load Generator Job

The LDAP ModRate job makes it possible to generate modify-type load against an LDAP directory server. To accomplish this, the DNs of the entries to be modified must be explicitly specified, or constructed from a fixed string and a randomly chosen number. Neither of these methods may be feasible in some environments, and in such cases the LDAP search and modify job might be more appropriate. Rather than constructing or using an explicit list of DNs, the search and modify job performs searches in the directory server to find entries and performs modifications on the entries returned.

LDAP Load Generator with Multiple Searches Job

The LDAP load generator with multiple searches job provides the capability to perform a number of operations in an LDAP directory server. Specifically, it is able to perform add, compare, delete, modify, modify RDN, and up to six different kinds of search operations in the directory with various relative frequencies. It is very similar to the LDAP load generator job, with the exception that it makes it possible to perform different kinds of searches to better simulate the different kinds of search load that applications can place on the directory.

Note

The different kinds of searches to be performed must be specified using filter files; it is not possible to specify filter patterns for them.


Solaris OE LDAP Authentication Load Generator Job

The Solaris OE LDAP authentication load generator is a job that simulates the load that Solaris 9 OE clients place on the directory server when they are configured to use pam_ldap for authentication. Although this behavior can vary quite dramatically based on the configuration provided through the idsconfig and ldapclient utilities, many common configurations can be accommodated through the job parameters. In particular, the lookups can be configured to be performed either anonymously or through a proxy user, using either simple or DIGEST-MD5 authentication with or without SSL.

The directory server against which the authentication is to be performed should be configured properly to process authentication requests from Solaris clients. It may be configured in this manner using the idsconfig and ldapaddent tools provided with the Solaris OE, with at least the passwd, shadow, and hosts databases imported into the directory. However, it may be more desirable to simulate this information using the MakeLDIF utility with the solaris.template template file. The data produced in that case is better suited for use by this job because all user accounts can be created with an incrementing numeric value in the user ID and with the same password, which makes it possible to simulate a much broader range of users authenticating to the directory server.

The full set of parameters that may be used to customize the behavior of this job is as follows:

  • Directory Server Address This field allows the user to specify the address that the clients should use when attempting to connect to the LDAP directory server. All clients that may run the job must be able to access the server using this address.

  • Directory Server Port This field allows the user to specify the port number that the clients should use when attempting to connect to the LDAP directory server. If the authentication should be performed over SSL, this should reference the directory server's secure port.

  • Directory Base DN This field allows the user to specify the base DN under which the Solaris OE naming data and user accounts exist. All searches are performed using a subtree scope, so this base DN may be at any level in the directory under which at least the hosts and user account information exist.

  • Credential Level This field allows the user to specify the manner in which clients should bind to the server when finding user accounts and other naming information. The credential level may be either anonymous, in which case no authentication is performed and all the user IDs and host information should be available without authentication, or proxy, which means that a third-party account should be used to bind to the directory server to retrieve this information. If the proxy method is selected, a proxy bind DN and password must be specified.

  • Proxy User DN This field allows the user to specify the DN that the client uses to bind to the server when performing lookups to find user entries and to retrieve other naming information. This is only used if the credential level is set to proxy, in which case a proxy user DN and password must be specified. The typical proxy user DN as created by idsconfig is cn=proxyagent,ou=profile,{baseDN}, where {baseDN} is the DN of the entry below which Solaris OE naming information is stored.

  • Proxy User Password This field allows the user to specify the password for the proxy user account. This is only used if the credential level is set to proxy, in which case a proxy user DN and password must be specified.

  • Authentication Method This field allows the user to specify the means by which users authenticate to the directory server. The authentication method may be either simple or DIGEST-MD5 authentication, and it may or may not be configured to use SSL. It should be noted that if DIGEST-MD5 authentication is to be performed, the directory server must be configured to store the passwords for the specified users in clear text.

  • User ID This field allows the user to specify the user ID of the user or users that are used to authenticate to the directory server. It is possible to include a range of numeric values enclosed in brackets that is replaced with a randomly chosen integer from that range. For example, if the value specified for this field is user.[1-1000], a random integer between 1 and 1000 (inclusive) is chosen and used to replace the bracketed range. It is also possible to specify a sequential range of values for the user IDs by replacing the dash with a colon. For example, a value of user.[1:1000] results in the first login ID generated being user.1, the second being user.2, and the one thousandth being user.1000. If the maximum value is reached, the generation starts over with the minimum value again (so user ID 1001 would be user.1).

  • User Password This field allows the user to specify the password for the user or users that are used to authenticate to the directory server. If the user ID contains a range of values, this password must be the same for all those users. Further, if the job is configured to perform authentication using DIGEST-MD5, the directory must be configured to store this password in clear text for the user or users specified by the user ID.

  • Simulated Client Address Range This field allows the user to specify the IP address or address range from which clients appear to be originating. Whenever a client connects to a Solaris OE system configured to use pam_ldap, the Solaris OE system attempts to find the host name associated with the IP address of that client system, and the value provided for this parameter specifies what those addresses should be. The value may be either a single IPv4 address (in standard dotted-quad format) or a range of addresses in classless inter-domain routing (CIDR) format. CIDR is a popular means of expressing contiguous ranges of IP addresses on the same network and is defined in RFC 1519. The format is a.b.c.d/e, in which a.b.c.d is an IPv4 address and /e specifies the number of bits of the provided IPv4 address that a client address must match to be considered in the range. For example, 192.168.1.0/24 indicates that any IPv4 address matching the first 24 bits of 192.168.1.0 is considered part of the range. Although the number of bits may be any integer value between 0 and 32 (inclusive), the most commonly used values are 0 (any address matches), 8 (the first octet must match), 16 (the first two octets must match), 24 (the first three octets must match), and 32 (all four octets must match). See RFC 1519 for further explanation. This parameter does not currently support the use of IPv6 addresses.

  • SSL Key Store This field allows the user to specify the location (on the local file system of the client) of the Java Secure Socket Extension (JSSE) key store used to help determine whether to trust the SSL certificate presented by the directory server. It is used if either of the SSL-based authentication methods is chosen.

  • SSL Key Store Password This field allows the user to specify the password that should be used to access the information in the JSSE key store. It is used if either of the SSL-based authentication methods is chosen.

  • SSL Trust Store This field allows the user to specify the location (on the local file system of the client) of the JSSE trust store used to help determine whether to trust the SSL certificate presented by the directory server. It is used if either of the SSL-based authentication methods is chosen.

  • SSL Trust Store Password This field allows the user to specify the password that should be used to access the information in the JSSE trust store. It is used if either of the SSL-based authentication methods is chosen.
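The User ID patterns described above ([1-1000] for random selection, [1:1000] for a wrapping sequential counter) can be sketched as a small generator. The parsing here is an illustration of the documented behavior, not SLAMD's implementation.

```python
import random
import re

# Sketch of bracketed user ID expansion: "[low-high]" picks a random value
# on each call; "[low:high]" counts sequentially and wraps at high.

_PATTERN = re.compile(r"\[(\d+)([-:])(\d+)\]")

def make_user_id_generator(pattern, rng=random):
    """Return a zero-argument function producing user IDs from pattern."""
    match = _PATTERN.search(pattern)
    if match is None:
        return lambda: pattern          # no range: literal value
    low, sep, high = int(match.group(1)), match.group(2), int(match.group(3))
    prefix, suffix = pattern[:match.start()], pattern[match.end():]
    counter = {"next": low}

    def next_id():
        if sep == "-":                  # random selection from the range
            value = rng.randint(low, high)
        else:                           # sequential, wrapping at high
            value = counter["next"]
            counter["next"] = low if value == high else value + 1
        return f"{prefix}{value}{suffix}"
    return next_id
```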
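The CIDR matching rule used by the Simulated Client Address Range parameter (an address belongs to a.b.c.d/e when its first e bits match) can be checked directly with Python's standard ipaddress module; this sketch only illustrates the matching semantics, not how SLAMD generates addresses.

```python
import ipaddress

# Sketch of CIDR membership: an IPv4 address is in a.b.c.d/e when its
# first e bits match the network address.

def in_cidr_range(address, cidr):
    """Return True if the IPv4 address falls within the CIDR block."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(
        cidr, strict=False)  # strict=False tolerates host bits being set
```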

SiteMinder LDAP Load Simulator Job

As its name implies, the SiteMinder LDAP load simulation job attempts to simulate the load that Netegrity SiteMinder (with password services enabled) places on a directory server whenever a user authenticates. This simulation was based on information obtained by examining the directory server's access log during a time that SiteMinder was in use.

POP CheckRate Job

The POP CheckRate job provides the capability to generate load against a messaging server that can communicate using POP3. It chooses a user ID, authenticates to the POP server as that user, retrieves a list of the messages in that user's mailbox, and disconnects from the server.

IMAP CheckRate Job

The IMAP CheckRate job is very similar to the POP CheckRate job, except that it communicates with the messaging server over IMAPv4 instead of POP3. Like the POP CheckRate job, it chooses a user ID, authenticates to the IMAP server as that user, retrieves a list of the messages in that user's INBOX folder, and disconnects from the server.

Calendar Initial Page Rate Job

The Calendar Initial Page Rate job provides the capability to generate load against the Sun ONE Calendar Server version 5.1.1. It does this by communicating with the Calendar Server over HTTP and simulating the interaction that a web-based client would have with the server when a user authenticates to the server and displays the initial schedule page.

It is important to note that because of the way in which this job operates and the specific kinds of requests that are required, it may not work with any version of the Calendar Server other than version 5.1.1. At the time that this job was developed, version 5.1.1 was the most recent release available, but it is not possible to ensure that future versions of the Calendar Server will continue to behave in the same manner.

Adding New Job Classes

The SLAMD server was designed in such a way that it is very extensible. One of the ways this is evident is the ability for an end user to develop a new job class and add that class to the SLAMD server. Once that class has been added to the SLAMD server, it is immediately possible to schedule and run jobs that make use of that job class. It is not necessary to copy that job class to all client systems, as that is done automatically whenever a client is asked to run a job for which it does not have the appropriate job class.

Note

Although job classes are automatically transferred from the SLAMD server to clients as necessary, if a job class uses a Java library that is not already available to those client systems, that library must be manually copied to each client. Libraries in Java Archive (JAR) file form should be placed in the lib directory of the client installation, and libraries provided as individual class files should be placed under the classes directory (with all appropriate parent directories created in accordance with the package in which those classes reside).


If any new versions of job classes are installed, it is necessary to manually update each client, as the client has no way of knowing that it would otherwise be using an outdated version of the job class.

Using the Standalone Client

Even though jobs are designed to be scheduled and coordinated by the SLAMD server, it is possible to execute a job as a standalone entity. This is convenient if you want to run a job in an environment where there is no SLAMD server available, if you do not need advanced features like graphing results, or if you are developing a new job for use in the SLAMD environment and wish to test it without scheduling it through the SLAMD server.

The standalone client is similar to the network-based client in that it is included in the same installation package as the network client and requires a Java environment (preferably 1.4.0) installed on the client system. However, because there is no communication with the SLAMD server, it is not necessary that the client address be resolvable or that any SLAMD server be accessible.

Before the standalone client may be used, it is necessary to edit the standalone_client.sh script. This script may be used to run the standalone client, but it must first be edited so that the settings are correct for your system. Set the value of the JAVA_HOME variable to the location in which the Java 1.4.0 or higher runtime environment has been installed. You may also edit the INITIAL_MEMORY and MAX_MEMORY variables to specify the amount of memory in megabytes that the standalone client is allowed to consume. Finally, comment out or remove the two lines at the top of the file that provide the warning message indicating that the file has not been configured.
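For example, the relevant settings near the top of standalone_client.sh might end up looking like this (the Java path and memory sizes are illustrative values, not defaults):

```shell
# Location of a Java 1.4.0 or later runtime (illustrative path).
JAVA_HOME=/usr/java/j2sdk1.4.0

# JVM heap limits for the standalone client, in megabytes.
INITIAL_MEMORY=64
MAX_MEMORY=256

# The two lines originally at the top of the file that print the
# "not yet configured" warning should be commented out or removed
# once these values have been set.
```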

Since the standalone client operates independently of the SLAMD server, it is not possible to use the administrative interface to define the parameters to use for the job. Instead, the standalone client reads the values of these parameters from a configuration file. In order to generate an appropriate configuration file, issue the command:

$ ./standalone_client.sh -g job_class -f config_file

where job_class is the fully qualified name of the job class (for example, com.example.slamd.example.SearchRateJobClass), and config_file is the path and name of the configuration file to create. This creates a configuration file that can be read by the standalone client to execute the job. This configuration file likely needs to be modified before it can actually be used to run a job, but comments in the configuration file should explain the purpose and acceptable values for each parameter.

Once an appropriate configuration file is available, the standalone client may be used to run the job. In its most basic form, it may be executed using the command

$ ./standalone_client.sh -F config_file

This reads the configuration file and executes the job defined in that configuration file using a single thread until the job completes. However, this default configuration is not sufficient for many jobs, and therefore there are additional command-line arguments that may be provided to further customize its behavior.

Starting and Stopping SLAMD

The SLAMD server has been designed so that it should not need to be restarted frequently. Most of the configuration parameters that may be specified within the SLAMD server can be customized without the need to restart the server itself or the servlet engine that provides the administrative interface. However, some changes do require that the server be restarted for that change to take effect. This section describes the preferred ways of starting, stopping, and restarting the SLAMD server and the Tomcat servlet engine.

Starting the Tomcat Servlet Engine

By default, SLAMD uses the Tomcat servlet engine to generate the HTML pages used for interacting with the SLAMD server. However, the servlet engine is responsible not only for generating these HTML pages but also for running the entire SLAMD server: all components of the server run inside the Java Virtual Machine used by the servlet engine. Therefore, unless the servlet engine is running, the SLAMD server is not available.

As described earlier in this document in the discussion on installing the SLAMD server, the Tomcat servlet engine may be started by using the bin/startup.sh shell script provided in the installation archive. This shell script must be edited to specify the path of the Java installation, the amount of memory to use, and the location of an X server to use when generating graphs. Once that has been done, this shell script may be used to start the Tomcat servlet engine.
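A sketch of those edits is shown below. Apart from JAVA_HOME, the variable names are assumptions about a typical Tomcat setup rather than documented SLAMD settings, and all values are illustrative:

```shell
# Illustrative edits to bin/startup.sh before first use. The exact
# variable names may differ in your copy of the script.
JAVA_HOME=/usr/java/j2sdk1.4.0; export JAVA_HOME

# JVM heap sizes for the servlet engine (and thus the SLAMD server).
CATALINA_OPTS="-Xms256m -Xmx512m"; export CATALINA_OPTS

# X server used when rendering result graphs (hypothetical host).
DISPLAY=graphhost:0.0; export DISPLAY
```

With those values set, running `./bin/startup.sh` launches Tomcat and, with it, the SLAMD server.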

Starting SLAMD

By default, the SLAMD server is loaded and started automatically when the servlet engine starts. However, if a problem is encountered when the servlet engine tries to start the SLAMD server (for example, if the configuration directory server is unavailable), the Tomcat servlet engine starts but SLAMD remains offline. If this occurs, a message is displayed indicating that the SLAMD server is unavailable; this message should also include information that can help administrators diagnose and correct the problem.

When the problem has been corrected, the SLAMD server may be started by following the SLAMD Server Status link at the bottom of the navigation bar and clicking the Start SLAMD button (this button is only visible if the SLAMD server is not running). This attempts to start the SLAMD server. If the attempt is successful, the full user interface is available. If the SLAMD server could not be started for some reason, it remains offline and an informational message describing the problem that occurred is displayed.

Restarting SLAMD

As indicated earlier, a few configuration parameters require the SLAMD server to be restarted in order for changes to take effect. This can be done easily through the administrative interface without the need to restart the servlet engine. To do so, follow the SLAMD Server Status link at the bottom of the navigation bar and click the Restart SLAMD button on the status page (this button is only visible if the SLAMD server is currently running). This causes the SLAMD server to be stopped and immediately restarted.

Stopping SLAMD

Restarting the SLAMD server should be sufficient for cases in which it is only necessary to re-read configuration parameters, but in some cases it may be necessary to stop the SLAMD server and leave it offline for a period of time (for example, if the configuration directory server is taken offline for maintenance). This can be done by following the SLAMD Server Status link at the bottom of the navigation bar and clicking the Stop SLAMD button on the status page. This causes the SLAMD server to be stopped, and it remains offline until the Start SLAMD button is clicked or until the servlet engine is restarted.

Note

Stopping or restarting the SLAMD server (or the servlet engine in which it is running) disconnects all clients currently connected to the server. If any of those clients are actively processing jobs, an attempt is made to cancel those jobs and obtain at least partial results, but this cannot be guaranteed. Any jobs that are in the pending jobs queue are also stored in the configuration directory and are properly re-loaded when the SLAMD server is restarted. However, if the SLAMD server is offline for any significant period of time, the start times for some jobs may have passed, which could cause the pending jobs queue to become backlogged when the server is restarted.


Stopping the Tomcat Servlet Engine

It should be possible to edit all of the SLAMD server's configuration without needing to restart the servlet engine in which SLAMD is running. However, if the configuration of the Tomcat servlet engine itself is to be modified, it is necessary to stop and restart Tomcat for those changes to take effect.

Before stopping Tomcat, it is recommended that the SLAMD server be stopped first. To do this, follow the SLAMD Server Status link and click the Stop SLAMD button. Once the SLAMD server has been stopped, it is possible to stop the Tomcat servlet engine using the bin/shutdown.sh shell script.

If the SLAMD server is not stopped before the attempt to stop the Tomcat servlet engine, it is possible (although unlikely) that the Tomcat servlet engine will not stop properly. If that occurs, the servlet engine may be stopped by killing the Java process in which it is running (note that on Linux systems it may appear as multiple processes). The Tomcat startup scripts have been modified so that the process ID of the Tomcat process is written to the logs/pid file. If Tomcat does not shut down properly, this PID may be used to determine which process or processes should be killed. If it is necessary to kill the Tomcat process manually, use the SIGTERM signal (the default signal for the kill command). A SIGKILL signal should be used only if the Tomcat process or processes do not respond to SIGTERM.
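The fallback procedure can be sketched as follows. Here a background sleep stands in for the Tomcat JVM so the commands are safe to run, and a temporary file simulates logs/pid:

```shell
# Stand-in for the Tomcat process; Tomcat records its real PID
# in the logs/pid file.
sleep 300 &
echo $! > /tmp/demo_pid

PID=$(cat /tmp/demo_pid)
kill "$PID"                      # SIGTERM first: the kill default
sleep 2

# Escalate to SIGKILL only if the process is still alive.
if kill -0 "$PID" 2>/dev/null; then
    kill -9 "$PID"
fi
wait "$PID" 2>/dev/null || true  # reap the demo process
```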

Tuning the Configuration Directory

In addition to storing the SLAMD configuration, the configuration directory stores information about all jobs that have been scheduled for execution in the SLAMD environment, including the statistical information gathered from jobs that have completed. Nearly all operations performed in the administrative interface require some kind of interaction with the configuration directory, so properly tuning the configuration directory can dramatically improve the performance of the administrative interface and the SLAMD server in general. Further, entries that store statistical information may grow quite large, and without proper configuration it may not be possible to store this information in the directory. The changes that should be made to the directory server configuration are described below.

Configuring for Large Entries

All information about scheduled jobs is stored in the configuration directory. For completed jobs, this includes the statistical information gathered while those jobs were running. As a result, these entries can be required to store several megabytes of data, especially for those jobs with a large number of threads, with a long duration, or that maintain statistics for a number of items. This can cause a problem because by default the directory server is configured to allow only approximately two megabytes of information to be sent to the server in a single LDAP message. This limit is controlled by the nsslapd-maxbersize configuration attribute, which specifies the maximum allowed message size in bytes. A value of at least 100 megabytes (104857600 bytes) should be specified to prevent updates with large amounts of statistical information from being rejected, although it is possible that a job could return even more than 100 megabytes of data, particularly for jobs that run for a very long period of time and have a relatively short collection interval.
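Applied with ldapmodify, the change might look like the following LDIF fragment. The host, port, and bind credentials in a real invocation are site-specific, and a server restart may be required for the new value to take effect:

```ldif
# Raise the maximum LDAP message size to 100 MB (104857600 bytes)
# so that large job-result updates are not rejected.
dn: cn=config
changetype: modify
replace: nsslapd-maxbersize
nsslapd-maxbersize: 104857600
```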

Cache Tuning

The directory server contains two caches that may be utilized to improve overall performance: the entry cache and the database cache. The entry cache holds copies of the most recently used entries in memory so they can be retrieved without having to access the database. The database cache holds pages of the database in memory so it is not necessary to access the data stored on the disk. By default, both of these caches are configured to store approximately ten megabytes of information. Increasing the sizes of these caches increases the amount of information stored in memory and therefore the overall performance when it is necessary to retrieve information from the directory server. Increasing the size of the database cache can also improve the performance of the server when writing information to the database.
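As a hedged sketch, the caches could be enlarged with an LDIF change like the one below. The attribute names and DNs are assumptions about a default Sun ONE Directory Server 5.x instance (nsslapd-dbcachesize for the database cache, nsslapd-cachememsize on the userRoot backend for the entry cache) and the 100 MB values are illustrative; verify both against your server's documentation before applying:

```ldif
# Assumed DNs/attributes for Sun ONE Directory Server 5.x defaults.
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 104857600

dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 104857600
```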

Proper Indexing

Whenever the SLAMD server needs to retrieve information from the configuration directory, it issues an LDAP search request to the directory. If the directory server is properly indexed, the server is able to locate the matching entries more quickly. Adding indexes for the following attributes helps the directory server process the queries from SLAMD more efficiently:

  • slamdJobActualStopTime -- Presence and equality

  • slamdJobClassName -- Equality

  • slamdJobID -- Equality

  • slamdJobState -- Equality

  • slamdOptimizingJobID -- Equality

  • slamdParameterName -- Equality
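These indexes can be created by adding index entries under the backend's index container, then regenerating the indexes (for example, with a re-import or the server's db2index utility). The backend name userRoot and the DNs below are assumptions for a default Sun ONE Directory Server 5.x instance; two representative entries are shown, and the remaining equality-only attributes follow the slamdJobID pattern:

```ldif
# Presence and equality index for slamdJobActualStopTime.
dn: cn=slamdJobActualStopTime,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsIndex
cn: slamdJobActualStopTime
nsSystemIndex: false
nsIndexType: pres
nsIndexType: eq

# Equality index for slamdJobID.
dn: cn=slamdJobID,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsIndex
cn: slamdJobID
nsSystemIndex: false
nsIndexType: eq
```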

Typical SLAMD Architecture

FIGURE 9-11 shows an example of how you might architect and deploy SLAMD.

Figure 9-11. SLAMD Architecture


FIGURE 9-11 depicts SLAMD clients distributed across multiple machines, each receiving job data from the SLAMD server. This job data describes how the Sun ONE Directory Server should be load tested; the clients generate that load through LDAP protocol requests, and each client reports its results back to the SLAMD server when the job is done. Because the clients are distributed, the SLAMD server must aggregate the results from all participating clients and present them to the user as a single job. The SLAMD server also requires a directory server in which to store configuration and result data. A typical SLAMD server and its configuration directory reside on the same system, such as a Sun Enterprise 420R.



LDAP in the Solaris Operating Environment: Deploying Secure Directory Services
ISBN: 131456938
Year: 2005