Optimize for Processing


To understand how to improve UDM processing performance, you first need to understand the processing operation of Analysis Services. Analysis Services 2005 supports ROLAP and MOLAP storage modes for dimensions and ROLAP, MOLAP, and HOLAP storage modes for partitions. Assume the data source is a relational database. Figure 13-15 shows the architecture of a regular processing operation when the storage mode for the dimensions and cubes is MOLAP. Analysis Services sends separate relational queries to retrieve the dimension and fact data. The relational data source reads records from its file storage for the queries and sends the records to Analysis Services. Analysis Services reads the data from the relational data source and stores it in a proprietary format for fast data access. During dimension processing, Analysis Services sends separate queries to process each attribute of the dimension. Members of each attribute are stored and indexed by key and name for fast data access. Properties related to the attribute are also indexed. If an attribute has a related attribute defined, the related attribute needs to be processed before the attribute itself. Analysis Services processes attributes in parallel based on resource availability, the parallelism specified, and these dependencies. The key attribute of the dimension is the last attribute processed, since all the other attributes are related to the key attribute. While processing the partitions, Analysis Services reads fact data from the relational data source and stores it in the proprietary format. Analysis Services then creates indexes to access the data efficiently. If aggregations are designed for the partitions, the aggregations are built, followed by the indexes.

Figure 13-15

If the storage type is ROLAP, Analysis Services only needs to store metadata. There is no data transfer between the relational data source and the Analysis Services database, and there is no actual data storage on the Analysis Services side; hence ROLAP processing is faster. Use of the ROLAP storage mode does impose a performance penalty at query time, however; query results are returned more slowly than with MOLAP. This is because at query time the data needs to be retrieved from the data source, aggregated within Analysis Services, and then returned to the end user.

When the storage type is HOLAP, the source data resides in the data source while aggregations are calculated and stored on Analysis Services. HOLAP storage is typically used only when space constraints prevent you from replicating the data on Analysis Services; it does not save much processing time compared to MOLAP. Given the query performance advantage of MOLAP data storage, we recommend using MOLAP storage for Analysis Services databases instead of HOLAP. For small and active partitions (current-period data), we recommend the ROLAP storage mode so that queries see the most recent data.

You can have users querying the cube when you initiate processing. Analysis Services uses locks inside transactions to ensure atomic changes and consistent views of data. The lock is referred to as the database (DB) commit lock. Usually everything just works, and one does not even need to know about the DB commit lock. However, for sophisticated usage scenarios, understanding this process can help explain system behavior that might otherwise seem anomalous. It can also help you schedule processing operations based on the load on the system.

During a query, the server takes a read DB commit lock. This ensures the database will not be deleted or modified during the query. During processing, a new version of the object (dimension, fact-table partition, etc.) is created. The original version on disk is not immediately overwritten, though; a shadow copy is created. Once the new version has been successfully created, a write DB commit lock is acquired within the process transaction. Then the new files automatically replace the old files, and the lock is released. The duration over which the lock is held is typically very small, perhaps less than 1 second. However, acquiring this lock requires waiting for current lock holders to relinquish their locks. Thus a long-running query can block the completion of a processing command.

The ForceCommitTimeout server property specifies the period of time a process command waits to acquire the DB commit lock. The default is 30000 milliseconds (30 seconds) and is specified in the configuration file \Program Files\Microsoft SQL Server\<MSSQL.x>\OLAP\Config\msmdsrv.ini. After the timeout, all transactions holding the lock are forced to fail. For the English version of Analysis Services the error message returned is "The operation has been cancelled." There are other scenarios under which this message is returned, but this is one cause to keep in mind when troubleshooting. Typically the holder of the read lock will be one or more long-running queries, and for the system as a whole, forcing them to fail is probably the better choice.
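For reference, here is a minimal sketch of how that setting appears in msmdsrv.ini. The surrounding settings are omitted, and the value shown is simply the 30-second default; this is an illustration rather than a recommended value.

     <ConfigurationSettings>
       <!-- Time (in milliseconds) a processing transaction waits for the write
            DB commit lock before conflicting lock holders are canceled. -->
       <ForceCommitTimeout>30000</ForceCommitTimeout>
     </ConfigurationSettings>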

Lock chains can result when long-running queries coexist with processing commands. While a process command is waiting for a long-running query, new queries must wait for the processing command to complete. Although the request for a new read lock is compatible with existing granted read locks, granting the new request could lead to starvation of the processing command, so locks are queued. The resulting behavior can appear to be a server hang even though it is not. You can see this behavior if you try to connect to Analysis Services using SSMS while a long-running query coexists with a processing command that is waiting for the query to complete. This is because SSMS sets a default database, which requires a read DB commit lock. Here is the sequence of events that creates a lock chain:

     1. Long-running query acquires and holds read DB commit lock.

     2. Process command completes and waits to acquire write DB commit lock.

     3. New query waits to acquire read DB commit lock.

Having learned the basic trade-offs in processing time and query performance for the various storage modes supported by Analysis Services, let us look at some of the techniques that help optimize processing.

Create Partitions to Speed Up Processing

We expect Analysis Services to be installed on a multi-processor machine so that it can take advantage of multiple processors for better performance. When a fact table is partitioned into multiple partitions within a measure group, the partitions are processed in parallel due to the inherent parallelism in Analysis Services. You can also specify the parallelism to be used by Analysis Services through the processing options discussed in Chapter 12. Hence having multiple partitions reduces the processing time for the measure group compared to having a single partition.

As discussed in the design optimization section, creating partitions can significantly improve processing performance during the typical daily, weekly, or monthly updates. Most often only the partition corresponding to the most recent time period needs to be updated. Since a small subset of the data (the most recent partition) is processed, you speed up the processing time for the entire measure group.

We recommend you create multiple partitions for the measure groups whenever you have large volumes of data. Consistent with the Microsoft SQL Server 2000 Analysis Services Operations Guide by Carl Rabeler and Dave Wickert (http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/anservog.mspx), you should consider having partitions of size 5GB, or 20 million rows. Based on your business scenario, you might need to create partitions outside of that size profile.
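To make this concrete, the following is a minimal sketch of an XMLA Create command that adds a partition bound to a query returning one slice of the fact table (here, a single period). The partition ID, the fact table, and the date filter are assumptions made for this illustration; substitute the objects and filter column from your own measure group.

     <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <ParentObject>
         <DatabaseID>Adventure Works DW</DatabaseID>
         <CubeID>Adventure Works DW</CubeID>
         <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
       </ParentObject>
       <ObjectDefinition>
         <Partition xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <ID>Internet_Sales_Current</ID>
           <Name>Internet_Sales_Current</Name>
           <!-- Hypothetical filter: only the most recent period lands in this partition -->
           <Source xsi:type="QueryBinding">
             <DataSourceID>Adventure Works DW</DataSourceID>
             <QueryDefinition>
               SELECT * FROM FactInternetSales WHERE OrderDateKey &gt;= 20040101
             </QueryDefinition>
           </Source>
           <StorageMode>Molap</StorageMode>
         </Partition>
       </ObjectDefinition>
     </Create>

In practice you would typically define such partitions in the Partitions tab of the cube designer; the script form is shown here only to make the query binding explicit. Because each partition carries its own binding, the most recent partition can be reprocessed on its own while the historical partitions remain untouched.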

Creating too many partitions can potentially hurt processing performance. Analysis Services processes objects in parallel based on your system resources. If all the partitions are processed in parallel, they compete for resources. Furthermore, the relational data source can also slow down processing performance if Analysis Services sends requests to retrieve data for all the partitions at once. Analysis Services allows you to control the number of objects to be processed in parallel (described later in this chapter). Make sure you design the right number of partitions based on user queries and process with an appropriate degree of parallelism.

Choose Small and Appropriate Data Types and Sizes

Choosing integers as keys for your tables helps improve processing and query performance. Using single integer keys (rather than strings or composite keys) results in faster processing due to decreased size, decreased network usage, and because the handling of keys internal to the relational data source and Analysis Services can be done with simple native machine instructions. Analysis Services looks up the keys of dimension members while processing dimensions as well as cubes. The key lookup routines used by Analysis Services can run tens or hundreds of times faster for integer data types than for other data types used as the key. In general, we recommend you set appropriate keys and/or consider a design using integer surrogate keys in the relational data source in advance of building your UDM.

SQL Server and Analysis Services Installations

When you install SQL Server 2005, you can have SQL Server and Analysis Services installed on the same machine or on different machines. There are some trade-offs you might want to consider for processing when your UDMs retrieve data from SQL Server. If you have both installations on the same machine, SQL Server and Analysis Services may compete for resources. You need to make sure you have sufficient processors and memory on your system. Whenever your Analysis Services dimensions are large (millions of members), there will be an impact on processing speed. If SQL Server and Analysis Services are competing for memory, you might have significant paging of data to disk, which could slow down operations dramatically.

For 32-bit versions of Windows we recommend turning on the /3GB flag. By default, each process running in Windows can address a maximum of 2GB. By turning on the /3GB flag, you allow the Analysis Services process to access up to 3GB of addressable space. This increases the accessible memory and facilitates large dimension processing and aggregation building. To enable the /3GB option, open the boot.ini file on your system drive, add the /3GB option as shown here, and then reboot the machine.

     multi(0)disk(0)rdisk(0)partition(2)\WINNT="????" /3GB

In addition to turning on the /3GB option on a 32-bit machine, consider using a machine with a large amount of memory (e.g. 8GB) whenever you have SQL Server and Analysis Services installations on the same machine and your UDMs have large dimensions (on the order of millions of members). The additional memory on the 32-bit machine can be accessed by SQL Server 2005 through Address Windowing Extensions (AWE). Hence adding the additional memory helps ensure that both servers have sufficient memory resources to provide good performance. Another option is to use SQL Server and Analysis Services on 64-bit machines with larger memory.

If you do install SQL Server and Analysis Services on separate machines, they do not compete for resources, but you need to make sure you have a good high-speed network connection between the servers, such as gigabit Ethernet. Having good network connectivity helps reduce the network transfer time for queries returning large volumes of data. To stay legal, we recommend you check your licensing agreement for installing SQL Server and Analysis Services on separate machines.

Optimizing a Relational Data Source

Analysis Services 2005 has improved fact table data reading by sending the fact table scan query without joining dimension tables during MOLAP partition processing. In Analysis Services 2000, you had to set the dimension key uniqueness to eliminate the joins between the fact table and dimension tables during the processing of the fact table. Analysis Services 2005 sends a table scan query similar to the following to get the fact data without the join, and hence is more efficient than Analysis Services 2000 by default.

     SELECT [dbo_WB_Fact].[BudgetExpenseAmount] AS [dbo_WB_FactBudgetExpenseAmount0_0],
            [dbo_WB_Fact].[EmployeeKey] AS [dbo_WB_FactEmployeeKey0_1],
            [dbo_WB_Fact].[quarterkey] AS [dbo_WB_Factquarterkey0_2]
     FROM [dbo].[WB_Fact] AS [dbo_WB_Fact]

This query is a pure table scan query for the whole partition. It is unnecessary to put an index on the fact table because there are no joins involved with the dimension table.

However, if you do have ROLAP partitions set up, we recommend you create appropriate indexes so that the queries sent to the relational data source at query time return results faster, because those queries will involve joins to the dimension tables. If you trace the operations of Analysis Services with the help of SQL Profiler, you can identify the queries sent to your relational server. We recommend that you set up efficient indexes for the queries targeted at your relational server for the best performance when querying ROLAP partitions.

Avoid Excessive Aggregation Design

Aggregation design is a way to define the aggregated data that gets created during processing of partitions. You learn more about aggregations later in this chapter. Having the right aggregations helps query performance. However, having excessive aggregations for partitions increases the processing time of those partitions: Analysis Services may need additional temporary files during processing and must write more data to disk for each partition. As a general rule of thumb, we recommend you create aggregations to improve performance by 10-30%. If you need additional query performance improvements, we recommend you use usage-based aggregation design to create targeted aggregations based on the requests sent by the users accessing the cubes. You also have the choice of using different aggregation designs for each partition, which helps in removing unwanted aggregations for certain partitions and hence speeds up cube processing time. We recommend designing more aggregations on heavily queried partitions and fewer aggregations for partitions that are rarely used. Okay, so perhaps we have a flair for the obvious; just be sure you do this!

Use Incremental Processing when Appropriate

Often, data changes in the relational data source. These changes can be new rows added to existing tables or updates to existing rows. If you have set up the storage mode for dimensions and partitions as ROLAP, you will be retrieving the data from the relational data source for queries sent by users. In situations where the result set has been cached on Analysis Services due to a previous query, Analysis Services will not fetch data from the relational database by default. You can force Analysis Services to always fetch data from the relational data source by using ROLAP (see Chapter 18 for more details). However, if the storage mode for all your cubes and dimensions is MOLAP, Analysis Services serves queries only from the processed MOLAP data that resides on Analysis Services. Future updates to the relational tables will not be available to end users unless you update the MOLAP data on Analysis Services.

You have several ways of updating the data on Analysis Services. You can do a full process of the corresponding dimensions and/or cubes that need to be refreshed due to changes in the corresponding relational tables. There are several processing options to optimize your processing needs. Process Incremental and Process Add are options that help you process the dimensions and partitions so that they are updated with the new data on the relational data source as necessary. Not all the data in the partitions or dimensions gets updated during these operations; only data that changed since the last round of processing will be refreshed.

During an incremental process of a partition, Analysis Services retrieves the new data from the relational data source and adds it to a temporary partition. Aggregations are then created for this new data, and finally the temporary partition is merged into the existing partition. As soon as the data is merged it is available for querying, and you will see the new data reflected in user queries. Because Analysis Services only retrieves the data newly added to the relational tables, the incremental processing option helps you process the partition much faster than a full process. However, if a partition is processed frequently due to data changes in the relational data source, we recommend you do a full process on that partition periodically (not often), because a full process will sort and optimize the storage of the multidimensional data. Similar to defragmenting file storage, sorting the data on disk improves query performance.
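As a sketch (not taken from the book's samples), an incremental update of a partition can be expressed in XMLA as a ProcessAdd command with an out-of-line query binding that restricts the rows to those added since the last process. The partition ID and the WHERE clause below are placeholders for this illustration; the object IDs reuse the Adventure Works sample referenced later in this chapter.

     <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Parallel>
         <Process xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <Object>
             <DatabaseID>Adventure Works DW</DatabaseID>
             <CubeID>Adventure Works DW</CubeID>
             <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
             <PartitionID>Internet_Sales_2004</PartitionID>
           </Object>
           <Type>ProcessAdd</Type>
           <!-- Out-of-line binding: read only the fact rows added since the last
                process. The filter column and value are hypothetical. -->
           <Bindings>
             <Binding>
               <DatabaseID>Adventure Works DW</DatabaseID>
               <CubeID>Adventure Works DW</CubeID>
               <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
               <PartitionID>Internet_Sales_2004</PartitionID>
               <Source xsi:type="QueryBinding">
                 <DataSourceID>Adventure Works DW</DataSourceID>
                 <QueryDefinition>
                   SELECT * FROM FactInternetSales WHERE OrderDateKey &gt; 20040630
                 </QueryDefinition>
               </Source>
             </Binding>
           </Bindings>
         </Process>
       </Parallel>
     </Batch>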

Dimensions can be incrementally processed using the ProcessIncremental or ProcessAdd options. ProcessIncremental retrieves the data from the relational data source, compares it to the existing dimension members on Analysis Services, and then updates existing dimension members if there are any changes. If there are new members, they are added to the dimension, and this does not affect the partitions. However, if dimension members are updated such that the relationships between attributes have changed (for example, an employee's marital status changed from single to married), then the aggregations for the corresponding partitions are dropped. The cube is still available for querying, but performance can suffer for queries that were using the dropped aggregations. ProcessIncremental takes more time than a full process of the same dimension because Analysis Services does the additional work of checking for updates to existing members; however, it gives you the flexibility of keeping the dimension available for querying without having to reprocess the cube.

ProcessAdd is a new option in Analysis Services 2005 for processing dimensions. It allows you to add new dimension members that have been added in the relational data source to an existing processed dimension. We recommend using the ProcessAdd option for dimensions whenever new members are being added to the corresponding dimension tables in the relational data source; it is useful, for example, when new products are added to the products table on a periodic basis. You need to specify a relational query that returns the new rows added since the last dimension update. The query to retrieve the new data, along with the data source and data source view elements, is specified in the DDL along with the ProcessAdd command; this is called out-of-line binding. Below is an example of ProcessAdd DDL that uses out-of-line binding.

   <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <Parallel>
       <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
         <Object>
           <DatabaseID>Adventure Works DW</DatabaseID>
           <DimensionID>Dim Product</DimensionID>
         </Object>
         <Type>ProcessAdd</Type>
         <DataSourceView>
           <ID>Adventure Works DW</ID>
           <Name>Adventure Works DW</Name>
           <DataSourceID>Adventure Works DW</DataSourceID>
           <Schema>
             <xs:schema xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
                 xmlns:msprop="urn:schemas-microsoft-com:xml-msprop">
               <xs:element name="Adventure_x0020_Works_x0020_DW" msdata:IsDataSet="true"
                   msdata:UseCurrentLocale="true">
                 <xs:complexType>
                   <xs:choice minOccurs="0" maxOccurs="unbounded">
                     <xs:element name="dbo_DimProduct" msprop:FriendlyName="DimProduct"
                         msprop:DbSchemaName="dbo" msprop:DbTableName="DimProduct"
                         msprop:QueryDefinition="SELECT * FROM DimProduct WHERE ProductKey &gt; 600"
                         msprop:IsLogical="True" msprop:TableType="View">
                       <xs:complexType>
                         <!-- ... details of the columns returned ... -->
                       </xs:complexType>
                     </xs:element>
                   </xs:choice>
                 </xs:complexType>
               </xs:element>
             </xs:schema>
           </Schema>
         </DataSourceView>
       </Process>
     </Parallel>
   </Batch>

Parallelism during Processing

Compared to Analysis Services 2000, Analysis Services 2005 has improved processing behavior by using maximum parallelism to process independent components by default. If you have 16 partitions, all 16 partitions are processed in parallel. Having too much parallelism can actually hurt performance through context switching and disk thrashing; happily, Analysis Services provides you with several options to control the amount of parallelism used for processing. Based on the complexity of the cube and the set of aggregations, we suggest you have two to three objects being processed per CPU.

You can control the amount of parallelism by changing certain server properties. For processing, the main server property impacting performance is CoordinatorExecutionMode. The server properties can be changed using the properties dialog in SQL Server Management Studio or in the config file msmdsrv.ini located in the %System Drive%\Program Files\Microsoft SQL Server\MSSQL.x\OLAP\Config folder. The CoordinatorExecutionMode server property sets the maximum parallelism allowed for a specific job, such as processing, that needs to be executed on the server at a specific time. This property can have a positive or negative value. If the value is positive, the actual value is used; if the value is negative, it is multiplied by the number of processors and the absolute value of the result is used. For example, the default Analysis Services 2005 value is -4, which on a 4-processor machine indicates that the maximum parallelism to be used is 16 for each request. By setting this property you can prevent the server from being overloaded with processing operations, perhaps leaving some resources for queries.
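For reference, here is a minimal sketch of the corresponding entry in msmdsrv.ini, shown with the default value of -4; other settings are omitted.

     <ConfigurationSettings>
       <!-- Negative values are multiplied by the number of processors, so -4 on a
            4-processor machine allows up to 16 parallel jobs per request. -->
       <CoordinatorExecutionMode>-4</CoordinatorExecutionMode>
     </ConfigurationSettings>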

Another property, ThreadPool\Processing\MaxThreads, specifies the maximum number of threads in the processing thread pool. We recommend this not be used as a way to limit concurrency, because internally the server often executes tasks by queuing other tasks and waiting for their completion; the server is designed to be smart enough to exceed MaxThreads when more threads are needed to avoid deadlocking.

When all the objects within a database are processed in parallel, Analysis Services sends queries to the data source for dimension as well as partition processing. Sometimes having too many concurrent connections and queries to the data source can increase processing time. You can limit the number of connections Analysis Services establishes with the data source through the data source property Maximum Number of Connections. This value can be altered from SQL Server Management Studio, as shown in Figure 13-16.

Figure 13-16
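If you prefer a scripted change, the same limit can be set through an XMLA Alter on the data source; the underlying ASSL property is named MaxActiveConnections. The sketch below makes some assumptions: the data source ID follows the Adventure Works sample used later in this chapter, and the connection string is a placeholder you would replace with the one from your existing data source.

     <Alter ObjectExpansion="ObjectProperties"
         xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Object>
         <DatabaseID>Adventure Works DW</DatabaseID>
         <DataSourceID>Adventure Works DW</DataSourceID>
       </Object>
       <ObjectDefinition>
         <DataSource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:type="RelationalDataSource">
           <ID>Adventure Works DW</ID>
           <Name>Adventure Works DW</Name>
           <!-- Placeholder connection string; copy the one from your existing data source -->
           <ConnectionString>Provider=SQLNCLI.1;Data Source=localhost;Integrated Security=SSPI;Initial Catalog=AdventureWorksDW</ConnectionString>
           <MaxActiveConnections>4</MaxActiveConnections>
         </DataSource>
       </ObjectDefinition>
     </Alter>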

You can specify the amount of parallelism for a specific processing operation along with the processing command. Follow the steps below to restrict the parallelism in the Process dialog while processing a measure group:

  1. Open the sample Adventure Works DW project and deploy it to the Analysis Services instance.

  2. Open SSMS and connect to the Analysis Services instance.

  3. Navigate to the Internet_Sales measure group of the Adventure Works cube in the Object Explorer window.

  4. Right-click the measure group and choose Process to open the Process dialog shown in Figure 13-17.

    Figure 13-17

  5. Click the Change Settings button to open the settings dialog.

  6. The settings dialog shown in Figure 13-18 allows you to specify the amount of parallelism to use while processing the current object. Select the value 8 as shown in the figure and click OK.

    Figure 13-18

  7. You learned in Chapter 12 that you can process the objects on an Analysis Services instance using the dialogs or through scripts. The Process dialog has the option to script the current settings to process the Internet_Sales measure group. Click Script to New Window in the Process dialog shown in Figure 13-17. A processing script opens within SSMS, as shown below. Note that you could also use SQL Profiler to view the process command received by the server.

     <Batch ProcessAffectedObjects="true"
         xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Parallel MaxParallel="8">
         <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <Object>
             <DatabaseID>Adventure Works DW</DatabaseID>
             <CubeID>Adventure Works DW</CubeID>
             <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
           </Object>
           <Type>ProcessFull</Type>
           <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
         </Process>
       </Parallel>
     </Batch>

In the processing script you can see the attribute MaxParallel="8", which instructs the Analysis Services instance to process a maximum of 8 objects in parallel.

Identify Resource Bottlenecks

Analysis Services processing and query performance requires well-configured hardware resources for the best results. Processing performance depends on sufficient memory, CPU speed, and good hard disk IO speed; these three play a significant role in getting the best performance. Analysis Services allows you to monitor the resources used during server operations by way of perfmon counters. There are specific perfmon counters that show the memory utilization on the Analysis Services instance. You can monitor the server behavior during a processing operation via the perfmon counters of the objects \\<machinename>\Processor, \\<machinename>\Memory, and \\<machinename>\PhysicalDisk, and the specific counters provided by Analysis Services under \\<machinename>\MSAS 2005: Processing, \\<machinename>\MSAS 2005: ProcIndexes, and \\<machinename>\MSAS 2005: ProcAggregations. After identifying the bottlenecks of the system, you can take appropriate action to relieve the server performance hot spots.

Some simple hardware additions, such as increasing server memory, adding more CPUs, or using disks with faster write speeds, can potentially improve processing performance. As mentioned earlier, if your 32-bit machine has more than 3GB of memory, you might consider using the /3GB flag to allow Analysis Services to use memory beyond the default 2GB limit, as discussed earlier in this chapter. If memory is the main bottleneck, we highly recommend you consider using 64-bit machines to increase the performance and capacity of your system, especially for cubes with large dimensions (>10 million dimension members).

In certain cases where the number of partitions is on the order of hundreds, processing partitions in parallel on a 32-bit machine can result in the partitions competing with one another for memory. Processing can then fail with errors indicating that the system does not have sufficient memory. In such circumstances you can split the processing of the partitions using one of the following techniques.

  1. Process fewer partitions at a time and stagger them to complete processing of all the partitions.

  2. Use one of the techniques mentioned in the Parallelism during Processing section.

  3. Instead of doing a full process of the partitions, which includes processing data, indexes, and aggregations, split the processing into Process Data first for all partitions, followed by Process Indexes (see the sketch after this list).

  4. By default the server property OLAP\ProcessPlan\MemoryLimit is set to 65, which means 65 percent of the available memory. If the /3GB flag is not enabled, Analysis Services determines this to be 65 percent of 2GB. If you know that you have more memory available, you can try increasing this value to 75 or 80.
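For technique 3, a minimal sketch of the two passes follows; you would execute the first batch for all partitions and then the second. The object IDs reuse the Adventure Works sample shown earlier in this chapter, and the partition ID and MaxParallel value are placeholders for your own objects and settings. The first pass reads and stores the fact data only; the second builds the indexes and aggregations once the data is already in place.

     <!-- Pass 1: read and store fact data only (no indexes or aggregations) -->
     <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Parallel MaxParallel="4">
         <Process>
           <Object>
             <DatabaseID>Adventure Works DW</DatabaseID>
             <CubeID>Adventure Works DW</CubeID>
             <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
             <PartitionID>Internet_Sales_2001</PartitionID>
           </Object>
           <Type>ProcessData</Type>
         </Process>
         <!-- ...one Process element per partition... -->
       </Parallel>
     </Batch>

     <!-- Pass 2: build indexes and aggregations for the data stored in pass 1 -->
     <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Parallel MaxParallel="4">
         <Process>
           <Object>
             <DatabaseID>Adventure Works DW</DatabaseID>
             <CubeID>Adventure Works DW</CubeID>
             <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
             <PartitionID>Internet_Sales_2001</PartitionID>
           </Object>
           <Type>ProcessIndexes</Type>
         </Process>
         <!-- ...one Process element per partition... -->
       </Parallel>
     </Batch>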


