Optimizing Notification Generation


In this section, we look at ways to improve the performance of notification generation in SQL-NS applications. As you've seen, the generator's primary job is to run the rules you define in the ADF. It is largely the performance of those rules that determines the performance of notification generation. Improving generation performance involves carefully examining the rules to find ways to make them run more efficiently. Generation performance can also be affected by a number of generator execution settings that you can adjust to the optimal values for your application. In this section, we first examine techniques for optimizing rules and then look at the generator execution settings.

Note

This chapter assumes that you still have the music store instance that you created in Chapter 10, "Delivery Protocols" (and used in the debugging exercises in Chapter 11, "Debugging Notification Generation"). We will use this instance to make some performance tuning adjustments. If you already cleaned up the instance, follow the instructions in the "Re-creating the SQL-NS Instance" section (p. 362) in Chapter 10 to get the instance back up and running. Make sure that you add subscriptions and submit some events that generate notifications; we need this data to be in the application to complete some of the performance-related steps described in this chapter. The section "Adding Subscriber Devices for New Delivery Channels" (p. 365) in Chapter 10 provides instructions for adding the required subscriber and subscription data, and the section "Preparing to Debug: Disabling the Generator and Submitting Events" (p. 405) in Chapter 11 describes some sample event data you can use.


Indexes and Query Optimization

All the rules that you define in the ADF are just SQL queries that operate over data in the events and subscription tables. Improving rule performance is really about tuning these SQL queries to make them run as efficiently as possible.

There are usually two aspects to optimizing SQL queries:

  • Indexing the referenced tables efficiently

  • Refining the queries so as to best leverage the indexes

Because rules operate against views of the event and subscription tables, proper indexing of those tables is the basis for improving rule performance. When the SQL-NS compiler creates the event and subscription tables, it creates indexes on some of the built-in columns used by the SQL-NS engine. But rule performance can usually benefit from additional indexes on some of the event and subscription class fields. The SQL-NS compiler has no understanding of the meaning of these fields, so it does not index them automatically. It is left to you to create the appropriate indexes over these fields, based on your knowledge of their semantics. This section describes the techniques you can use to identify the appropriate indexes for your application and the ADF elements you can use to create them.
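Before adding indexes of your own, it can be useful to see which indexes the SQL-NS compiler has already created. The following sketch uses the standard sp_helpindex procedure against the internal tables; the event table name NSSongAddedEvents appears later in this chapter, and the subscription table name shown here is an assumption based on the NS&lt;SubscriptionClass&gt;Subscriptions naming pattern, so adjust it to match the tables in your MusicStore database.

USE MusicStore
GO

-- List the indexes the SQL-NS compiler created on the internal event table.
EXEC sp_helpindex 'SongAlerts.NSSongAddedEvents'
GO

-- The subscription table name below is an assumption based on the
-- NS<SubscriptionClass>Subscriptions naming pattern; adjust it to match
-- the tables you actually see in the MusicStore database.
EXEC sp_helpindex 'SongAlerts.NSNewSongByArtistSubscriptions'
GO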

Note

SQL query optimization is such a broad topic that entire books have been written about it. There is no way that this small section can provide adequate general coverage of this vast topic. Instead it offers some broadly applicable guidelines and focuses on aspects of query optimization specific to SQL-NS. To learn more about query optimization in general, the "Query Tuning" section in the SQL Server Books Online is a good place to start.


Viewing the Query Execution Plans for Rules

When working with simple queries, just looking at the query text is usually enough to tell you which columns in the referenced tables need to be indexed. Typically, those columns involved in joins are good candidates. In a more complex query, it helps to look at the query execution plan, which describes the way in which SQL Server executes the various parts of the query. The query execution plan (sometimes called the query plan for short) usually tells you which parts of the query are most costly to execute. This information can guide you in creating the appropriate indexes and adjusting the query to use them efficiently.

With the help of the SQL-NS debugging stored procedures (described in Chapter 11), you can use the SQL Server tools to examine the query plan for the rules in a SQL-NS application. To do this, you set the quantum clock to a time period that encompasses a previously submitted event batch, prepare the first rule firing, and then turn on the feature of Management Studio that displays execution plans for queries. With this feature turned on, you then execute the rules manually and observe the graphical query plans that Management Studio displays. Use the following instructions to try this with the music store application:

1. From a Notification Services Command Prompt on your development machine, navigate to the music store sample's scripts directory by typing the following command:

 cd /d C:\SQL-NS\Samples\MusicStore\Scripts 


2. Run disable_generator.cmd to disable the generator (this is required to run the debugging stored procedures).

3. Open Management Studio and log in to your SQL Server.

4. Open a new query window and issue the following query:

USE MusicStore
GO

SELECT * FROM [SongAlerts].[NSSongAddedEventBatches]


The resultset shows a row for each submitted event batch. Choose one of these event batches and remember its event batch ID.

5. Load the file C:\SQL-NS\Chapters\11\Scripts\SetQuantumClock.sql. (The code in this file is shown in Chapter 11; see Listing 11.2, p. 407.)

6. In this file, you will see a comment that reads /*... fill in your event batch ID here ...*/ at the end of the WHERE clause in the first SELECT statement. Replace this comment with the ID of the event batch you chose in step 4.

7. Run the script.

8. Execute the procedure that prepares the first rule firing by typing the following code into a query window and then executing it:

 EXEC [SongAlerts].[NSPrepareRuleFiring] 


9. Go to Management Studio's Query menu and select Include Actual Execution Plan to toggle the display of queries' execution plans. When this feature is enabled, the icon next to the menu item appears selected.

10. Execute a rule firing manually by typing the following code into a query window and then executing it:

 EXEC [SongAlerts].[NSExecuteRuleFiring] 


Caution

When invoking NSExecuteRuleFiring with the option to include query plans turned on, you may receive a warning message that reads:

 The query has exceeded the maximum number of resultsets that can be displayed in the Execution Plan pane. Only the first 25 resultsets are displayed in the Execution Plan pane. 


You can ignore this message because the rule statement is always among the first 25 queries executed within the NSExecuteRuleFiring stored procedure.

11. In the results pane of the query window, you will see three tabs, one of which is labeled Execution Plan. Click this tab and your window should look something like Figure 12.4.

Figure 12.4. Management Studio showing the execution plan for a rule.


12. The results pane shows the execution plan for each statement executed when you invoked NSExecuteRuleFiring. Some of these statements come from the internals of the SQL-NS stored procedures, but among them you will find the rule that executed (as specified in the ADF). Below the rule statement you will find the query execution plan, which you can study to understand how the query is executed.

13. You can repeat steps 10 through 12 to view the execution plan for each subsequent rule in the application.

Note

After you finish working with the debugging stored procedures, you must reenable the generator for the application to work in the normal way again. You can use the enable_generator.cmd script, described in the "Reenabling the Generator" section (p. 413) in Chapter 11, to do this.


Tip

If you hold the mouse pointer over an element in the query plan display, Management Studio will show a ToolTip that provides additional detailed information about that element. For general information about reading graphical query plans, see the "Displaying Graphical Execution Plans (SQL Server Management Studio)" topic in the SQL Server Books Online.
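If you prefer a scriptable alternative to the Include Actual Execution Plan menu toggle used in step 9, SQL Server's SET STATISTICS XML option returns the plan for each executed statement as an XML document in the results grid, including the statements run inside the debugging stored procedures. This is a minimal sketch, assuming you have already prepared a rule firing as described in the preceding steps:

USE MusicStore
GO

-- Return an XML showplan for every statement executed from this point on,
-- including the statements run inside the debugging stored procedures.
SET STATISTICS XML ON
GO

EXEC [SongAlerts].[NSExecuteRuleFiring]
GO

SET STATISTICS XML OFF
GO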


As an example of how the query plan can be used to identify inefficiencies in a query, let's examine the match rule for the NewSongByArtist subscription class. This match rule joins the events with the SongDetails view on song ID and the NewSongByArtist subscriptions on artist name. Listing 12.1 shows the match rule text, and Figure 12.5 shows portions of the resulting query plan (obtained using the steps described in this section).

Listing 12.1. The Match Rule Used in the NewSongByArtist Subscription Class

INSERT INTO [SongAlerts].[NewSong]
SELECT subscriptions.SubscriberId,
       subscriptions.DeviceName,
       subscriptions.Locale,
       songs.SongTitle,
       songs.ArtistName,
       songs.AlbumTitle,
       songs.GenreName
FROM   [SongAlerts].[SongAdded] events
JOIN   [Catalog].[SongDetails] songs
    ON events.SongId = songs.SongId
JOIN   [SongAlerts].[NewSongByArtist] subscriptions
    ON subscriptions.ArtistName = songs.ArtistName

Figure 12.5. The execution plan for the NewSongByArtist match rule.


The query plan shows some inefficiencies. For example, the lookup of the subscription data requires an index scan, and the join with the song data requires a hash match. Together, these operations constitute 40% of the total query cost.

Caution

The cost percentages shown in the query plans are usually just estimates. Although in most cases they are fairly accurate, be aware that the real costs could deviate from those shown in the query plan.
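One way to corroborate the estimated percentages with measured numbers is to turn on SQL Server's I/O and timing statistics before executing a rule firing manually. A minimal sketch, assuming a rule firing has been prepared as in the earlier steps; the measurements appear on the Messages tab:

USE MusicStore
GO

-- Report actual logical reads and elapsed times for each statement, so you
-- can compare the measured costs with the estimated plan percentages.
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO

EXEC [SongAlerts].[NSExecuteRuleFiring]
GO

SET STATISTICS IO OFF
SET STATISTICS TIME OFF
GO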


Note

To make the query plan fit on the printed page, only the parts relevant to the discussion in this section are shown in Figure 12.5. Other parts of the query plan have been omitted. Breaks in the lines in Figure 12.5 indicate the places where parts of the full query plan have been removed.

Also, because the content of Figure 12.5 was derived from Management Studio screen captures, some of the labels on plan elements have been truncated. I recommend that you execute the rule and look at the query plan in Management Studio yourself, so that you can see it in its entirety. When doing so, look at the detailed information in the ToolTips that appear when you hover the mouse pointer over the plan elements.

If you do this, you may notice that the lookup of the event data executes over a table called NSCurrentSongAddedEvents, not the events table you're familiar with, NSSongAddedEvents. NSCurrentSongAddedEvents is an internal table used by the generator to materialize the SongAdded event data for the event batches being processed in the current quantum. There is a separate NSCurrent<EventClass>Events table for every event class in the application.


SQL Server has to resort to a scan over the subscription data because the subscriptions table is not indexed on the column used in the match rule (the ArtistName column). Had there been an index on the ArtistName column in the subscriptions table, the expensive scan could have been replaced with a cheaper lookup mechanism. In the next section, we look at how index creation statements are specified in the ADF and observe how the query plan changes when the right indexes are in place.

Creating Indexes in the ADF

Through dedicated ADF elements, you can specify indexes for the tables the SQL-NS compiler creates from event class and subscription class declarations. Listing 12.2 shows these elements used to create indexes that may help make the query plan we examined in the previous section more efficient.

Listing 12.2. Specifying Indexes in the ADF

<Application>
  ...
  <EventClasses>
    <EventClass>
      <EventClassName>SongAdded</EventClassName>
      <Schema>
        <Field>
          <FieldName>SongId</FieldName>
          <FieldType>INT</FieldType>
          <FieldTypeMods>NOT NULL</FieldTypeMods>
        </Field>
      </Schema>
      <IndexSqlSchema>
        <SqlStatement>
          CREATE INDEX SongIdIndex
          ON [SongAlerts].[SongAdded](SongId)
        </SqlStatement>
      </IndexSqlSchema>
      ...
    </EventClass>
  </EventClasses>
  <SubscriptionClasses>
    <SubscriptionClass>
      <SubscriptionClassName>NewSongByArtist</SubscriptionClassName>
      <Schema>
        <Field>
          <FieldName>ArtistName</FieldName>
          <FieldType>NVARCHAR(255)</FieldType>
          <FieldTypeMods>NOT NULL</FieldTypeMods>
        </Field>
        ...
      </Schema>
      <IndexSqlSchema>
        <SqlStatement>
          CREATE INDEX ArtistNameIndex
          ON [SongAlerts].[NewSongByArtist](ArtistName)
        </SqlStatement>
      </IndexSqlSchema>
      ...
    </SubscriptionClass>
    <SubscriptionClass>
      <SubscriptionClassName>NewSongByGenre</SubscriptionClassName>
      <Schema>
        <Field>
          <FieldName>GenreName</FieldName>
          <FieldType>NVARCHAR(255)</FieldType>
          <FieldTypeMods>NOT NULL</FieldTypeMods>
        </Field>
        ...
      </Schema>
      <IndexSqlSchema>
        <SqlStatement>
          CREATE INDEX GenreNameIndex
          ON [SongAlerts].[NewSongByGenre](GenreName)
        </SqlStatement>
      </IndexSqlSchema>
      ...
    </SubscriptionClass>
  </SubscriptionClasses>
  <NotificationClasses>
    ...
  </NotificationClasses>
  ...
</Application>

Below the <Schema> element in an event class or subscription class declaration, you can declare an <IndexSqlSchema> element. The <IndexSqlSchema> element contains one or more <SqlStatement> elements that define indexes. After the SQL-NS compiler creates the event and subscription tables, it executes these SQL statements to create the indexes. SQL-NS already creates a primary key clustered index on these tables (over the columns it adds to the schema), so you cannot create another clustered index in your <IndexSqlSchema> declaration (SQL Server allows only one clustered index per table). Beyond that, there are no restrictions: The <SqlStatement> elements can contain any valid CREATE INDEX statements, so all index options are supported. In fact, the <SqlStatement> elements within an <IndexSqlSchema> declaration can contain any valid T-SQL statements, but they are intended to encapsulate CREATE INDEX statements.

In the example shown in Listing 12.2, the event class declaration specifies an index over the SongId column. Notice that in the CREATE INDEX statement, the table name is given as SongAdded, the name of the event class itself. This might surprise you because, as you've already seen, the events table is called NSSongAddedEvents. In this context, you can use the event class name because the tables and indexes are created as follows: SQL-NS first creates the events table with the name of the event class, SongAdded, then applies the SQL statements in the <IndexSqlSchema>, and finally renames the table to NSSongAddedEvents. This allows you to specify indexes without having to know the internal naming scheme that SQL-NS uses (which may well change in future versions).

The subscription classes in Listing 12.2 also contain index declarations. The <IndexSqlSchema> element in the NewSongByArtist subscription class specifies an index over the ArtistName column. As in the event class, the subscription class name is used as the table name in the CREATE INDEX statement (SQL-NS uses the same renaming scheme to make this work). In the NewSongByGenre subscription class, the <IndexSqlSchema> declares an index over the GenreName column. Although we didn't actually observe the query plan for the NewSongByGenre match rule, based on what was learned about the structurally similar NewSongByArtist match rule, we can deduce that an index will be needed on this column because it's involved in a join.
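After the instance has been updated with these declarations (as described in the next section), you can verify that the indexes exist on the renamed internal tables by querying the catalog views. A quick sketch:

USE MusicStore
GO

-- Confirm that the indexes declared in the ADF were created. They end up on
-- the renamed internal tables (NSSongAddedEvents and so on), not on tables
-- named after the event and subscription classes.
SELECT OBJECT_NAME(i.object_id) AS TableName,
       i.name                   AS IndexName,
       i.type_desc              AS IndexType
FROM   sys.indexes i
WHERE  i.name IN ('SongIdIndex', 'ArtistNameIndex', 'GenreNameIndex')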

Tip

You can create indexes over a chronicle table by specifying a CREATE INDEX statement directly in the <SqlStatement> elements in the chronicle declaration's <SqlSchema>. There are no dedicated elements for creating chronicle indexes. If chronicle tables play a significant role in your match rules, creating the right indexes over them may be just as important as indexing the event and subscription tables.
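As a sketch of where such a statement goes, the fragment below adds an index to a hypothetical chronicle declaration. The chronicle name and columns are made up for illustration (the music store application does not necessarily define this chronicle); only the placement of the CREATE INDEX statement inside the chronicle's <SqlSchema> is the point.

<Chronicles>
  <Chronicle>
    <ChronicleName>SongAddedChronicle</ChronicleName>
    <SqlSchema>
      <SqlStatement>
        CREATE TABLE [SongAlerts].[SongAddedChronicle]
        (
            SongId     INT           NOT NULL,
            ArtistName NVARCHAR(255) NOT NULL
        )
      </SqlStatement>
      <SqlStatement>
        CREATE INDEX SongAddedChronicleArtistNameIndex
        ON [SongAlerts].[SongAddedChronicle](ArtistName)
      </SqlStatement>
    </SqlSchema>
  </Chronicle>
</Chronicles>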


Testing the Effects of the New Indexes

In this section, we update the music store application with the index creation code shown in Listing 12.2 and then observe the effect on the query plan generated for the NewSongByArtist match rule.

The code in Listing 12.2 is supplied in the file C:\SQL-NS\Chapters\12\SupplementaryFiles\ApplicationDefinition-16.xml. To test it, you need to add this code to the main ADF in the music store sample's source directory, C:\SQL-NS\Samples\MusicStore\SongAlerts\ApplicationDefinition.xml. You can do so either by typing it in manually or by copying the supplementary ADF over the original. After you've added the code, update the instance as you've done before (disable the instance, stop the service, run the update_with_argument_key.cmd script, reenable the instance, and restart the service). For detailed instructions on updating the music store instance, see steps 2 through 7 in the "Testing the FileSystemWatcherProvider in the Music Store Application" section, p. 251, in Chapter 8, "Event Providers."

Unfortunately, adding indexes to the event and subscription tables causes the data in those tables to be removed during the update process. The addition of indexes appears to the SQL-NS compiler as a schema change, so it rebuilds the event and subscription tables. The old event data is discarded, but the compiler preserves the old subscription data.

Before re-creating the subscription tables, it renames the existing tables to NS<SubscriptionClass>SubscriptionsOld (where <SubscriptionClass> is the name of the subscription class). After the update completes, you can copy data from the old subscription tables into the new ones to preserve subscriptions.

To do this on your system, open the script C:\SQL-NS\Chapters\12\Scripts\CopyOldSubscriptionData.sql in Management Studio and run it. This script copies all the data from the old subscription tables into the new ones and then drops the old subscription tables. Dropping the old tables after copying the data out of them is an important step. If a subsequent update to the application causes the SQL-NS compiler to rebuild the subscription tables again, it will try to rename the existing ones to NS<SubscriptionClass>SubscriptionsOld once more. If the old tables from the previous update still exist, there will be a naming conflict and the update will fail.
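For a simple (non-scheduled) subscription class such as NewSongByArtist, the copy amounts to an INSERT...SELECT followed by dropping the old table. The sketch below is an outline only: the internal table names assume the NS&lt;SubscriptionClass&gt;Subscriptions naming pattern, the column list is illustrative, and whether IDENTITY_INSERT is needed depends on the generated schema. Use the supplied CopyOldSubscriptionData.sql script for the real work.

USE MusicStore
GO

-- Outline of the copy for the NewSongByArtist class. The table names assume
-- the NS<SubscriptionClass>Subscriptions naming pattern, and the column list
-- is illustrative; check the actual schema in your database before running.
SET IDENTITY_INSERT [SongAlerts].[NSNewSongByArtistSubscriptions] ON

INSERT INTO [SongAlerts].[NSNewSongByArtistSubscriptions]
       (SubscriptionId, SubscriberId, DeviceName, Locale, ArtistName)
SELECT  SubscriptionId, SubscriberId, DeviceName, Locale, ArtistName
FROM   [SongAlerts].[NSNewSongByArtistSubscriptionsOld]

SET IDENTITY_INSERT [SongAlerts].[NSNewSongByArtistSubscriptions] OFF

-- Drop the old table so that a later update does not hit a naming conflict
-- when the compiler tries to rename the tables again.
DROP TABLE [SongAlerts].[NSNewSongByArtistSubscriptionsOld]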

Caution

Unfortunately, in the case of scheduled subscriptions, copying the old subscription data into the new tables isn't as simple as one INSERT-SELECT statement. The subscription schedule data is normalized into several tables, all of which are re-created during the update process. As it does for the main subscriptions table, the SQL-NS compiler renames the other tables by appending the "Old" suffix. The data in all these tables must be copied into the corresponding new tables while preserving all foreign key relationships.

The CopyOldSubscriptionData.sql script described in this section includes the code that restores all the scheduled subscription data for our scheduled subscription class, NewSongByGenre. Take a look at the script to see how this is done. Comments in the code explain each operation.


Because the old event data was discarded by the SQL-NS compiler during the update, we need to submit new batches of events to test the effects of the new indexes. As you did before, run the AddSongs program again to submit some events. You can enter any song data you choose, as long as the artist name is Miles Davis or Diana Krall (these are the artists for which subscriptions exist). Be sure to check the Submit Events for Songs Added box before clicking the Add to Database button to submit your song data.

With the subscriptions and events in place, run the instructions for viewing query plans listed in the "Viewing the Query Execution Plans for Rules" section (p. 425) earlier in this chapter. Make sure that you run step 2 (disabling the generator) if the instance was enabled after performing the update. Also, pay particular attention to steps 4 and 6. Because you've submitted new events, the event batch ID you will use in the SetQuantumClock.sql script will likely be different from the one you used before.

The rule we're interested in is the match rule for the NewSongByArtist subscription class, which is the second rule executed (you will have to perform step 10 twice). Portions of the new query plan for this rule are shown in Figure 12.6.

Figure 12.6. The new, more efficient, execution plan for the NewSongByArtist match rule that takes advantage of the new indexes.


Note

Like Figure 12.5, Figure 12.6 is a truncation of the full query plan. To make the plan fit on the printed page, the sections that aren't relevant to the discussion in this section have been omitted. Omissions are indicated by breaks in the lines that connect plan elements.


The index scan has been replaced by a nonclustered index seek and the hash join has been replaced with a nested loops join. These are more efficient strategies and lead to lower overall query cost. As this example shows, by creating the right indexes, we can enable the query processor to execute the match rules more efficiently, and this ultimately leads to better notification generation performance.

Optimizing the Quantum Duration

Recall from Chapter 11 that the generator divides time into discrete units called quantums and that the quantum clock drives the generator's operation. The duration of a quantum is the polling interval at which the generator looks for new event batches to process. By default, the quantum duration is 1 minute, but this is adjustable through the ADF. In this section, we look at the effect that the quantum duration can have on generator performance.

The most visible impact of the quantum duration is latency in the processing pipeline. Consider an event batch submitted while the generator is sleeping. The event batch will not be processed until the start of the next quantum. The quantum duration determines how long that delay will be. In the music store sample application, we used a quantum duration of 15 seconds, instead of the default, 1 minute, to reduce the latency in the processing of events. Reducing the quantum interval makes the application more responsive because new event batches will be processed with less delay. The minimum quantum duration allowed is 1 second.

As you might expect, the increased responsiveness comes at a cost. First, a shorter quantum duration means that the generator checks for event batches more often, and the overhead involved in doing this check can become significant if it is done too frequently. Checking for event batches involves querying the database and writing records to the generator's internal tables, all of which can put load on the SQL Server.

Perhaps more important than the overhead cost is the reduced opportunity for batching that results from a short quantum duration. If the quantum duration is long, several event batches can accumulate in a single quantum. The generator can process these event batches together (assuming the application does not use strict event batch ordering) and amortize fixed costs over large sets of data. As the quantum duration is reduced, the number of event batches in a single quantum is likely to be less. This has a negative performance impact because processing event batches individually (or in small groups) is typically less efficient than processing several event batches together.

For these reasons, although reducing the quantum duration can make an application more responsive, doing so may well reduce the overall efficiency. The particular quantum duration appropriate for a given application depends on several factors, including

  • The expected arrival rate of events

  • Users' expectations of responsiveness and timeliness

  • The granularity of scheduled subscriptions

In some applications, events arrive frequently. The stock price application we built in Chapter 3 is an example of such an application. In other applications, events arrive only a few times a day. Our music store application would most likely fall into this category in reality. If events don't arrive frequently, it usually doesn't make sense to have a short quantum duration.

However, event arrival rate must be considered together with the second listed factor: the users' expectations of responsiveness and timeliness. In some applications, even if events occur infrequently, users want to be notified about those events as soon as they happen because there is a high value associated with the timeliness of the information. An example of this is an application that sends out breaking news headlines. In a given day, there may be only a few breaking news events, but when they occur, notifications should be sent out immediately. In these applications, a short quantum duration is appropriate, even though events are submitted infrequently.

The third factor in the list, the granularity of scheduled subscriptions, is also related to users' expectations of timeliness. The quantum duration actually limits the scheduled subscription granularity that can be supported. As described in Chapter 11, during a quantum, the generator executes scheduled rules against subscriptions whose schedules fall within the quantum's boundaries. So, a given scheduled subscription will be evaluated only during a quantum that encompasses its next evaluation time (as determined by its schedule). If the quantum duration is long, the real-time at which this quantum begins executing may be significantly offset from the actual time specified in the subscription schedule.

For example, consider a scheduled subscription with a schedule time of 12:02 in an application with a quantum duration of 5 minutes. Suppose that one quantum runs from 11:55 to 12:00, so the next quantum encompassing the scheduled subscription would be 12:00 to 12:05. Recall that when the quantum clock is up-to-date (in other words, not falling behind), the current quantum always runs from the current real clock time to one quantum duration in the past. So, the 12:00 to 12:05 quantum would only begin executing at 12:05 in real-time. Thus, the 12:02 scheduled subscription would be evaluated some time after 12:05.

Even though the SQL-NS subscription management API allows you to specify a subscription schedule in 1-second granularity, the effective granularity is determined by the quantum duration, as the example shows. When choosing a quantum duration for an application, you need to consider how accurately users expect their subscription schedules to be honored.
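To make the arithmetic concrete, the following sketch computes the earliest real time at which a given schedule time can be evaluated, assuming (as in the example above) that quantum boundaries fall on whole multiples of the quantum duration measured from midnight:

-- Earliest evaluation time for a scheduled subscription, assuming quantum
-- boundaries fall on whole multiples of the quantum duration from midnight.
DECLARE @QuantumSeconds int
DECLARE @ScheduleTime   datetime
SET @QuantumSeconds = 300                      -- 5-minute quantum duration
SET @ScheduleTime   = '2006-01-01 12:02:00'    -- subscriber asked for 12:02

DECLARE @Midnight datetime
DECLARE @SecondsSinceMidnight int
SET @Midnight = CONVERT(datetime, CONVERT(varchar(10), @ScheduleTime, 120))
SET @SecondsSinceMidnight = DATEDIFF(second, @Midnight, @ScheduleTime)

-- The quantum containing 12:02 runs from 12:00 to 12:05 and begins executing
-- only when it ends, so the earliest evaluation time reported is 12:05.
SELECT DATEADD(second,
               ((@SecondsSinceMidnight / @QuantumSeconds) + 1) * @QuantumSeconds,
               @Midnight) AS EarliestEvaluationTime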

Tip

In your SMI, you can restrict subscription schedules to units of time equal to the quantum duration. For example, if the quantum duration were 5 minutes, the SMI could allow only schedule times that fall on 5-minute boundaries to be entered.


The three factors and their relative importance vary with the semantics of each application. When choosing a quantum duration for an application, you will have to weigh these factors against the efficiency concerns, as described in this section.

As you've seen in the music store ADF, to set the quantum duration, you use the <QuantumDuration> element. <QuantumDuration> is one of several optional elements that can be declared in the <ApplicationExecutionSettings> element to control the runtime behavior of the SQL-NS engine. Some of the other settings that can be declared in <ApplicationExecutionSettings> are described in the later sections of this chapter and in Chapter 13.

Listing 12.3 shows the quantum duration specified in the ADF. This example sets a quantum duration of 15 seconds. Notice that the quantum duration value is specified in the XSD duration syntax (explained in the sidebar "The XSD duration Data Type," p. 157, in Chapter 5).

Listing 12.3. Specifying a Quantum Duration in the ADF

<Application>
  ...
  <ApplicationExecutionSettings>
    <QuantumDuration>PT15S</QuantumDuration>
    ...
  </ApplicationExecutionSettings>
</Application>

Note

If the <QuantumDuration> element is not specified, a default value of 1 minute is used.


Quantum Limits

As described in the "Discrepancies Between the Quantum Clock and the Real-Time Clock" section (p. 397) of Chapter 11, the generator's quantum clock can fall behind the real-time clock. This can happen if the processing that the generator does within a quantum takes longer than the quantum duration, or if the generator restarts after a period of downtime.

When the quantum clock falls behind, the generator keeps processing quantums until it catches up to the real-time clock. Depending on the workload, this may take a long time. While the generator is catching up, new events are ignored as old events are processed (because the generator proceeds one quantum at a time, in chronological order). In applications whose semantics demand that every single event be processed, this cannot be avoided. However, there are many applications in which old events can be skipped so that the generator can get to new events more quickly.

An example of this is the news headline application, mentioned earlier in this chapter. Imagine that the generator goes down, and when it comes back up, the quantum clock is several hours behind, with several hundred old quantums' worth of news events to process. Headlines are of little value when they're old, so this application would serve its users better by skipping over the old events and processing the newest events sooner.

SQL-NS allows you to define quantum limits for the generator that constrain how far the quantum clock is allowed to fall behind before the generator starts skipping quantums. Quantum limits are always expressed as a number of quantums. When properly used, quantum limits can improve the performance of an application by avoiding the unnecessary processing of old and possibly irrelevant events.

To understand how quantum limits are used, consider an application that has a quantum duration of 1 minute and imagine the generator goes down for 1 hour, starting at 12:00. When it comes back up at 1:00 (in real-time), its quantum clock will be on the 11:59 to 12:00 quantum (as it would have been at the moment the generator went down). Had the quantum clock been up-to-date, at 1:00 the generator would have processed the 12:59 to 1:00 quantum. Therefore, the quantum clock is 1 hour, or 60 quantums, behind.

By specifying a quantum limit (for example, 20 quantums), the application developer instructs the generator to skip old quantums until it is no more than 20 quantums behind. With this setting in place, the quantum clock would jump to the 12:39 to 12:40 quantum, and the generator would start its processing from there. All event batches submitted during the quantums that were skipped (all those from 11:59 through 12:39) would never be processed.

Quantum limits are specified in the <ApplicationExecutionSettings> element of the ADF. Two separate quantum limits can be specified:

  • A chronicle quantum limit that defines the maximum number of old quantums for which the generator executes chronicle rules

  • A subscription quantum limit that defines the maximum number of old quantums for which the generator executes subscription class match rules

SQL-NS provides two separate quantum limit controls because, in some applications, the semantics that define when it is acceptable to skip the processing of old event batches may be different for chronicle rules than for match rules. It may be necessary to run the chronicle rules to update application state, even if the match rules are skipped. Using the two settings, you can limit the old quantum processing for these rule types separately.

As an example of how the limits work, picture an application that specifies a chronicle quantum limit of 40 and a subscription quantum limit of 10. If the generator gets more than 40 quantums behind, it will completely skip quantums (without running any rules) until it reaches the chronicle quantum limit of 40 quantums behind. Then, for the next 30 quantums, it will execute only the chronicle rules and skip the match rules. When it has caught up to the subscription quantum limit, 10 quantums behind, it will start executing both chronicle and match rules until it is completely caught up.

Listing 12.4 shows the ADF elements used to specify the two quantum limits.

Listing 12.4. Specifying Quantum Limits in the ADF

<Application>
  ...
  <ApplicationExecutionSettings>
    ...
    <ChronicleQuantumLimit>40</ChronicleQuantumLimit>
    <SubscriptionQuantumLimit>10</SubscriptionQuantumLimit>
    ...
  </ApplicationExecutionSettings>
</Application>

The chronicle quantum limit is defined in the <ChronicleQuantumLimit> element, and the subscription quantum limit is defined in the <SubscriptionQuantumLimit> element. Both elements specify an integer number of quantums. The default value for the <ChronicleQuantumLimit> element (which is used if the element is not specified) is 1,440 quantums. With the default quantum duration of 1 minute, this amounts to 24 hours, or 1 day. The default value for the <SubscriptionQuantumLimit> element is 30 quantums. A value of zero in either element is taken to mean no limit.

Caution

Because the nonzero default values apply even when the <ChronicleQuantumLimit> and <SubscriptionQuantumLimit> elements are not declared, it is easy to forget that quantum limits are in effect. Be aware that quantum limits will always be enforced unless you explicitly declare these elements with a value of zero.




