Chapter 12: Administering Analysis Services


Administration is an important task on any server product. As an administrator of Analysis Services you need to make sure Analysis Services is secure, reliable, and provides efficient access to end users. You can administer Analysis Services in two ways: through the SQL Server 2005 tool set (SSMS or BIDS), or programmatically using an object model called AMO (Analysis Management Objects). You can accomplish tasks such as processing objects, providing access to Analysis Services database objects, and synchronizing databases between Analysis Services instances using SSMS. You can use BIDS to connect to a specific OLAP database to perform design changes and accomplish follow-on tasks such as processing and providing access to users. SSMS and BIDS both use AMO behind the scenes to accomplish all management tasks. The AMO object model itself is installed and registered in the GAC (Global Assembly Cache) when the product is installed. The AMO .NET assembly, by the way, is Microsoft.AnalysisServices.dll. In the first section of this chapter you learn about key administrative tasks and how to accomplish those tasks using SSMS and BIDS; in the second section you learn about the programmatic approach, using AMO, to accomplish the same tasks and others.

Administration using SQL Server 2005 Tools

Let's just jump in and get our feet wet, shall we? In Chapter 2 you used SSMS to view the objects found in an Analysis Services 2005 database. We'll start here on a similar footing; first, open SSMS, connect to Analysis Services, and open the Databases folder. You will see a tree view of those databases you have saved on the server to date, as shown in Figure 12-1. One of those databases should be titled AnalysisServices2005Tutorial — you should take a moment to review the tree nodes and what they contain because you will be learning the administrative tasks associated with those objects.

Figure 12-1

Managing the Server

SSMS, the integrated management environment for SQL Server 2005 products, provides you the flexibility of managing several Analysis Services instances. In this chapter we use the word "server" to denote an instance of Analysis Services, and "servers" to denote multiple instances. If you have a set of production servers that serve customers and a set of test servers used for development and testing, you typically want to manage them differently. The most logical approach is to group these servers. Using the Registered Servers window of SQL Server Management Studio you can group a set of Analysis Services instances to form a server group, as shown in Figure 12-2. You can register several Analysis Services instances or organize a few of them into a group and manage them using the Server Group and Server Registration dialogs, which can be launched by right-clicking the Analysis Services folder in the Registered Servers window.

Figure 12-2

Common tasks such as starting, stopping, restarting, or configuring Analysis Services instances can be accomplished from the Registered Servers window: right-click the specific Analysis Services instance and choose the appropriate operation. From this window you can also connect to the server in the Object Explorer or launch the MDX query editor.

Once you are connected to an Analysis Services instance in the Object Explorer window you can accomplish various administrative tasks on the server, such as creating new databases, providing permissions, processing objects, and moving databases from test servers to production servers. First and foremost, the Analysis Services administrator must provide access permissions to the users who will be administering the server. The following steps show how you can add a user as an administrator of an Analysis Services instance by making them part of the object called the Server Role:

  1. In the Object Explorer window right-click on the Analysis Services instance and select Properties.

  2. You will now see the Analysis Services properties dialog.

  3. Click Security in the page list, as shown in Figure 12-3. Click the Add button to add a user to the Analysis Services administrators group. You can only add domain users or groups to the administrators group for Analysis Services. If your user is a local user, you can specify machinename\username to add the user to this server administrators group.

Another important management task is setting appropriate properties on Analysis Services to enable features or configure behavior so that Analysis Services performs optimally. You can change the properties using the Analysis Services Properties dialog shown in Figure 12-4. Analysis Services needs to be restarted for certain properties to take effect; this is indicated by a "yes" in the Restart column of the dialog. Some of the most important properties control the parallelism used for processing and querying, and the read buffer size used when reading data from disk for faster query response time. Equally important are the maximum amount of memory to be used by the Analysis Services process, the maximum number of connections to the server, and the ability to turn server features on and off. You learn about some of these properties in this chapter and others in Chapter 13.
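When you change a property and click OK, SSMS uses AMO to send an XML/A Alter command against the Server object. The following is a minimal sketch of such a command for illustration only; the server ID and the property shown (Memory\TotalMemoryLimit) are examples, and the exact script SSMS generates may differ:

     <Alter ObjectExpansion="ObjectProperties"
            xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <!-- An empty Object reference targets the server itself -->
       <Object />
       <ObjectDefinition>
         <Server xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <ID>YourServer</ID>
           <Name>YourServer</Name>
           <ServerProperties>
             <ServerProperty>
               <Name>Memory\TotalMemoryLimit</Name>
               <Value>80</Value>
             </ServerProperty>
           </ServerProperties>
         </Server>
       </ObjectDefinition>
     </Alter>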

Figure 12-3

Managing Analysis Services Objects

Several management tasks can be performed on Analysis Services objects. Some of the most important tasks are processing cubes and dimensions, providing access permissions to various objects within a database, managing the partitions of a cube based on usage, and adding assemblies to databases. Even though SQL Server Management Studio provides a great interface to manage Analysis Services 2005 and abstracts all the internal details, it is beneficial to understand the underlying operations that take place when you perform management operations. Knowledge of the server internals gives you an edge in understanding the operations better and managing the server effectively when unforeseen problems occur.

Figure 12-4

All communication with Analysis Services is through XML for Analysis (XML/A). The management tasks executed through SSMS use the management object model AMO (Analysis Management Objects), which in turn sends XML/A Execute commands to the Analysis Services instance. You will see some of the commands sent to the server as you perform the management tasks in this chapter.

Processing Analysis Services Database Objects

One of the important jobs of an Analysis Services DBA (database administrator) is to process the objects resident in databases. Analysis Services 2005 gives the DBA fine-grained control over processing the objects within a database using the Process dialog. You can launch the Process dialog by right-clicking an individual object, a group of objects, or the object folders such as Cubes, Dimensions, and Mining Structures. Based on the location from which the Process dialog is launched, the options for processing the object or group of objects will vary. To process the database AnalysisServices2005Tutorial, do the following:

  1. Right-click the database AnalysisServices2005Tutorial and click Process as shown in Figure 12-5.

    Figure 12-5

  2. You will see the Process dialog as shown in Figure 12-6. This dialog shows the name of the object to be processed along with the type of object. There are several processing options for each object. The default option for the database object is Process Full. As the name implies, the Process Full option processes the object completely, clearing any data that was processed earlier. Click OK in this dialog.

    When you click OK the Process dialog uses AMO to send the Process command to the Analysis Services instance. You can see the command that will be sent to the server by clicking Script→Script Action to New Query Window. You will see the following script command:

     <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Parallel>
         <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <Object>
             <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
           </Object>
           <Type>ProcessFull</Type>
           <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
         </Process>
       </Parallel>
     </Batch>

    Figure 12-6

    The above script contains several commands that are interpreted by Analysis Services. Since the medium of communication with Analysis Services is XML/A requests, the script is embedded within SOAP Envelope tags when it is sent to the server. This script can be executed from the XMLA query editor within SQL Server Management Studio; SSMS adds the appropriate SOAP envelope tags when sending the script to Analysis Services. The commands in the script are Batch, Parallel, and Process. The Process command is part of a set of commands that manipulate the data in Analysis Services; these commands that change the data in Analysis Services databases are called the DML (data manipulation language). The Batch command allows multiple commands to be executed within a single statement. The Parallel command instructs the server to execute all the commands within it in parallel. The Process command is used to process an Analysis Services object and needs several properties, such as the Object reference (containing the DatabaseID) and the processing Type.
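    For illustration, here is roughly how the same Batch script looks once it is wrapped in the SOAP Envelope tags and the XML/A Execute method before being sent to the server (a sketch; the exact set of Properties SSMS sends may vary):

     <Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
       <Body>
         <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
           <Command>
             <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
               <Parallel>
                 <Process>
                   <Object>
                     <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
                   </Object>
                   <Type>ProcessFull</Type>
                 </Process>
               </Parallel>
             </Batch>
           </Command>
           <Properties />
         </Execute>
       </Body>
     </Envelope>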

  3. When you click OK in the Process dialog, a Process command with the appropriate options is sent to the Analysis Services instance. This command requests that the server process the database. To process the objects within a database the server needs to read data from the data source, which it does by issuing queries to the data source. You will now see the Process Progress dialog, which shows fine-grained details of each processing operation on the server. As you can see from Figure 12-7, each object within the database being processed is reported along with timing information and whether the processing succeeded or failed. You can also see the query sent to the data source to retrieve the data.

  4. Once all the objects have been processed you will see the status of the processing. If all the objects were successfully processed, the status shows Process Succeeded, as shown in Figure 12-7. If there were errors during processing, the status bar shows an appropriate message. Objects whose processing resulted in an error are shown in red in the tree view of the Process Progress dialog, and you can drill down into the details to understand the reason for the failure.

Figure 12-7

Several operations take place in response to the preceding Process command. All the objects within the database are processed in parallel based on the settings of the Analysis Services instance. If there are dependencies, the dependent objects are processed first; for example, the dimensions of a cube need to be processed before the cube itself can be processed. Analysis Services processes all the objects of the database under a single transaction. This means that if one of the objects fails during processing, the remaining objects are not processed. For example, if all the dimensions of a cube were successfully processed and there were errors while processing the cube, the processing of the dimension objects is rolled back. Once all the objects have been successfully processed, the server commits the transaction, which means the objects are marked as processed and are available for querying.
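This single-transaction behavior corresponds to the Transaction attribute of the Batch command. As a minimal sketch (the dimension IDs here are illustrative), the following batch processes two dimensions in parallel within one transaction, so a failure in either rolls back both:

     <Batch Transaction="true"
            xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Parallel>
         <Process>
           <Object>
             <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
             <DimensionID>Dim Product</DimensionID>
           </Object>
           <Type>ProcessFull</Type>
         </Process>
         <Process>
           <Object>
             <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
             <DimensionID>Dim Customer</DimensionID>
           </Object>
           <Type>ProcessFull</Type>
         </Process>
       </Parallel>
     </Batch>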

Assume an Analysis Services object has been processed and is being queried by users. While users are querying the object, you can initiate processing on the same object. Because a version of the object is currently being queried, Analysis Services stores the newly processed, uncommitted object in temporary files. At commit time, the server first ensures no users are still using the old version of the objects, removes the previous version of the processed objects, and then marks the temporary files as primary. You see this in detail in the following section.

Processing a Cube

An Analysis Services database can contain several cubes and dimensions. You have the flexibility to control the processing of individual cubes and dimensions by launching the Process dialog from the appropriate cube or dimension object. There are several options for processing a cube, as shown in Figure 12-8. All of the processing options available for partitions and measure groups are also available for the cube, because a cube is a collection of measure groups, which in turn are collections of partitions.

Figure 12-8

When a cube is created you will typically do a full process of it (Process Full in the Process dialog) so that you can browse the cube. Usually the cube structure does not change after the initial design; instead, you receive additional fact data that you want to add to the existing cube. For example, you might have a Sales cube and receive new sales fact data from each store every month. Processing the entire cube every time takes a considerable amount of time, and end users might have to wait a long period to see the most up-to-date data. Analysis Services 2005 provides an option to process only the new fact data instead of the entire cube; this is called incremental processing. To add the new fact data to the cube you can add a new partition to the cube and process that partition. Alternatively, you can use the Process Incremental option in the Process dialog and specify the query that provides the new fact data to be processed. Incremental processing is a common management task for data warehouses. If you specify the Process Default option, the server checks for all the objects that have not been processed and processes only those; if the cube data has been processed but the aggregations and indexes have not, only the aggregations and indexes are processed.
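To give a feel for what Process Incremental sends to the server, the following is a hedged sketch of a ProcessAdd command with an out-of-line query binding that supplies only the new fact rows. The object IDs and the SQL query are illustrative, and the exact binding syntax SSMS generates may differ slightly:

     <Process xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Object>
         <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
         <CubeID>Adventure Works DW</CubeID>
         <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
         <PartitionID>Fact Internet Sales</PartitionID>
       </Object>
       <Type>ProcessAdd</Type>
       <Bindings>
         <Binding>
           <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
           <CubeID>Adventure Works DW</CubeID>
           <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
           <PartitionID>Fact Internet Sales</PartitionID>
           <Source xsi:type="QueryBinding">
             <DataSourceID>Adventure Works DW</DataSourceID>
             <!-- Illustrative query returning only the newly arrived fact rows -->
             <QueryDefinition>
               SELECT * FROM FactInternetSales WHERE DueDateKey &gt;= 900
             </QueryDefinition>
           </Source>
         </Binding>
       </Bindings>
     </Process>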

When you choose the Process Full option for processing a cube, the server performs three internal operations. If the storage mode for the cube is MOLAP, the server first reads the data from the relational data source and stores it in a compact format. Second, if aggregations are defined for the cube, the server builds those aggregations during processing. Finally, the server creates indexes for the data that help speed access during querying; even if no aggregations are specified for the cube, the server still creates the indexes. The Process Data option is just the first step of Process Full, where the server reads data from the relational data source and stores it in the proprietary format. The second and third steps, building aggregations and indexes, can be accomplished separately with the Process Index option. You might be wondering why you have the Process Data and Process Index options when Process Full and Process Default accomplish the same task. These options give the administrator fine-grained control. They are especially important when you have limited time to access the relational data source and want to optimize processing: you first process the data, and once you have all the data in Analysis Services, you can then create your aggregations and indexes, which do not need access to the relational data source.
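As a sketch of this two-step approach, the following batch first loads the data and then builds aggregations and indexes; only the first command needs access to the relational source. Commands placed directly inside Batch (rather than inside Parallel) execute sequentially:

     <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <!-- Step 1: read from the relational source and store the data -->
       <Process>
         <Object>
           <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
           <CubeID>Adventure Works DW</CubeID>
         </Object>
         <Type>ProcessData</Type>
       </Process>
       <!-- Step 2: build aggregations and indexes; no relational access needed -->
       <Process>
         <Object>
           <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
           <CubeID>Adventure Works DW</CubeID>
         </Object>
         <Type>ProcessIndexes</Type>
       </Process>
     </Batch>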

If you choose the Process Structure option, the server processes all the cube's dimensions and the cube definitions so that the cube's structure is processed without any of the data. The server does not process the partitions or measure groups of the cube, so you cannot see any of the fact data; however, you can browse the cube because the cube definitions are processed. You can retrieve metadata information about the cube (measure names, measure groups, dimensions, KPIs, actions, and so on) after processing the cube's structure, but you will not be able to query the cube data. For a cube that has been processed with Process Structure, you can see the cube in the SQL Server Management Studio MDX query editor when you open the cube drop-down list. If your cube contains linked measure groups that have already been processed, you will be able to query the measures in those linked measure groups after a Process Structure. Often when you design your UDM you will want to make sure your design is correct and your customers are able to see the right measures and dimensions; Process Structure is helpful in validating your design. As soon as the data for the cube is available, the cube can be processed with the Process Default option so that end users can query the data.

You can clear the data in the cube using the Unprocess option. The Process Script Cache option is used when you want Analysis Services to cache the calculations specified in the MDX script; Analysis Services evaluates the calculations specified using the CACHE statement in the MDX script. You have now learned the various processing options for a cube. The processing options provided in the Process dialog differ in name from the process types specified in the Process command sent to Analysis Services. The following table shows how the various processing options map to the process types sent to Analysis Services.

Process Option in Process Dialog    Process Type in Process Command
--------------------------------    -------------------------------
Process Full                        ProcessFull
Process Default                     ProcessDefault
Process Data                        ProcessData
Process Structure                   ProcessStructure
Unprocess                           ProcessClear
Process Index                       ProcessIndexes
Process Incremental                 ProcessAdd
Process Script Cache                ProcessScriptCache

The processed data of a cube is stored in a hierarchical directory structure that is equivalent to the structure you see in the Object Explorer. Figure 12-9 shows the directory structure of the processed data of the Adventure Works DW sample database in Analysis Services 2005. The directory also shows the files within a partition. The metadata about the cubes and dimensions is stored as XML files, while the data is stored in a proprietary Microsoft format. Every time an object is processed, a new version number is appended to the object's files. For example, the files shown in Figure 12-9 are under a specific partition directory. The file info.<versionnumber>.xml stores the metadata about the partition; similar metadata files are stored within the directories of each object: cube, dimension, and measure group. We recommend you browse through each object folder to see the metadata information. The fact data is stored in the file with the extension .data.

The key to an OLAP database is fast access to data. You learned earlier that a cell is represented by a tuple, which is the intersection of various dimension members. For fast data access, Analysis Services builds indexes to access data across multiple dimensions. The index files in Analysis Services have the extension .map. In Figure 12-9 you can see the .map files, which have the format <version>.<Dimension>.<Hierarchy>.fact.map. There is an associated header file for each map file. Analysis Services stores the data as blocks called segments for fast access; the associated header file contains offsets to the various segments for fast access during queries.

Figure 12-9

The Process dialog provides you the flexibility of processing objects in parallel or within the same transaction. If errors are encountered during processing, you can set options to handle them. You can configure the parallelism and error options by clicking the Change Settings button in the Process dialog. You will see the Change Settings dialog as shown in Figure 12-10, which enables you to configure certain processing operations and error settings. Setting the parallelism option is as simple as selecting the appropriate option for the processing order. By default all the objects are processed in parallel and within the same transaction. If you do not want the failure of one object to impact the other objects, you should process the objects under different transactions by choosing the Sequential option.

Figure 12-10

You might encounter errors while processing your Analysis Services objects due to incorrect design or referential integrity problems in the relational data source. For example, if you have a fact record that contains a dimension ID that is not available in the dimension table, you will see a "Key not found" error while processing the cube. By default, when an error is encountered during processing, the processing operation fails. You can change the settings in the Change Settings dialog to take an action other than failing the operation; the dialog changes the error configuration settings for all the objects selected for processing. Whenever you encounter key errors you can either convert the offending values to the unknown member or discard the erroneous records. You can run into key errors while processing facts or dimensions. If you encounter a key error while processing a cube, it means Analysis Services was unable to find a corresponding key in the dimension; you can assign the fact value to the member called the Unknown member for that specific dimension. You can encounter key errors while processing a snowflake dimension when an attribute defined as a foreign key does not exist in the foreign table, or when there are duplicate entries. The two most common types of key errors you might encounter during dimension processing are key-not-found and duplicate-key errors.
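The error settings chosen in the Change Settings dialog travel with the Process command as an ErrorConfiguration element. The following sketch, with illustrative object IDs and limits, converts key-not-found values to the unknown member and stops after 100 key errors; verify the exact script generated on your own server via Script Action to New Query Window:

     <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Object>
         <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
         <CubeID>Adventure Works DW</CubeID>
       </Object>
       <Type>ProcessFull</Type>
       <ErrorConfiguration>
         <KeyErrorLimit>100</KeyErrorLimit>
         <!-- ConvertToUnknown assigns the fact value to the Unknown member;
              DiscardRecord drops the erroneous record instead -->
         <KeyErrorAction>ConvertToUnknown</KeyErrorAction>
         <KeyNotFound>ReportAndContinue</KeyNotFound>
         <KeyDuplicate>ReportAndContinue</KeyDuplicate>
       </ErrorConfiguration>
     </Process>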

Processing a Dimension

You can process dimensions independently of the cube. After the initial processing of a dimension, you might process it on a periodic basis if additional records are added to the dimension table or if columns of an existing row change. An example of additions to a dimension is new products being added to the Products dimension; you would want this information reflected in the dimension so that you can see the sales of the new products. An example of changes in a dimension is an employee moving from one city to another, which requires the employee's attributes to change. The Process dialog therefore provides various options for processing a dimension, as shown in Figure 12-11.

Figure 12-11

While processing a dimension, Analysis Services reads data from the dimension table(s). When a dimension is processed, each attribute of the dimension is processed separately; based on the parallelism specified for the Analysis Services instance, these attributes can be processed in parallel. Each dimension contains an attribute called the All attribute. This is not exposed to the user but is used internally by Analysis Services; you can see the files associated with this attribute as <version>.(All).<extension> in Figure 12-12. When each attribute is processed, several files are created. Similar to fact data, the dimension data is stored in a proprietary format. Each attribute of a dimension has a key column and a name column; these map directly into two files with the extensions kstore and sstore, which refer to the key store and string store, respectively. Further files are created for each attribute of the dimension to help with fast access to the names, keys, and levels of attributes and hierarchies. The files with the extension map are created when indexes are processed for each attribute and help in fast retrieval of the related attributes of a dimension member.

Figure 12-12

The time taken to process a dimension depends on the number of attributes and hierarchies in the dimension as well as the number of members in each hierarchy. When a processing command is sent to the server, the server reads the data from the relational data source and updates the dimension. SQL Server Analysis Services 2000 had a limitation of 64,000 members under each parent and a maximum of 256 levels within a single dimension; Analysis Services 2005 has no such limitation. When a dimension is processed, the attributes of the dimension are processed separately. Some attributes can be processed in parallel, while some cannot. The order of processing of the various attributes depends on the member property definitions and the resources available on the machine. For example, say you have a Customer dimension that contains the attributes Customer Name, SSN, City, State, and Country. Assume SSN is the key attribute for this dimension and that, by default, all attributes within the dimension are member properties of the key attribute. In addition, assume the following member property relationships have been established: Country→State, State→City, City→Customer Name, State→Customer Name, and Country→Customer Name. Due to these relationships, the order of processing of the attributes in the Customer dimension is Country, State, City, Customer Name, and then SSN. This is because Analysis Services needs to have information about Country in order to establish the member property relationship while processing State, Customer Name, or SSN.

When the Process Default option is chosen, the dimension's data or indexes are processed only if they have not already been processed. If the Process Full option is chosen, the entire dimension is reprocessed: previously processed dimension data and indexes are dropped, and the data is retrieved afresh from the data source. Based on the dimension size (the number of dimension members as well as the number of attributes and hierarchies in the dimension), the processing operation can take a long time.

Similar to incremental processing of cubes, you can incrementally process dimensions using the Process Update option. The Process Update option in the Process dialog maps to the ProcessUpdate process type in the Process command, which applies only to dimensions. Some dimensions, such as Employees, Customers, or Products, can potentially contain a large number of members. Typically, new members are added to these dimensions, or some attributes of existing members change. A full reprocess of the entire dimension is often not only unnecessary but also unaffordable due to business needs. Under these circumstances, incremental processing of the dimension or an update of the dimension's attributes should be sufficient. When you choose the Process Update option, the server scans all the members in the dimension table. If there were changes to a member's caption or description, they are updated. If new members were added to the dimension table, they are added to the existing dimension. When the Process Update option is chosen, the attributes of each dimension member are updated; the key of each dimension member is assumed to be the same, but other attributes may be updated. The most important things updated are the member properties of each member. If you have a parent-child hierarchy in a dimension and the parent attribute has changed, that information is also updated during Process Update.
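A Process Update issued from the dialog translates into a Process command with the ProcessUpdate type against the dimension; a minimal sketch (the dimension ID is illustrative):

     <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Object>
         <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
         <DimensionID>Dim Customer</DimensionID>
       </Object>
       <Type>ProcessUpdate</Type>
     </Process>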

The Process Data option for dimensions processes the dimension data without building indexes. The Process Index option is used to create indexes for the attributes of the dimension. If the ProcessingMode dimension property is set to LazyAggregations, Analysis Services builds indexes for new attributes of the dimension as a lazy operation on a background thread; if you want to rebuild these indexes immediately, you can do so by choosing the Process Index option. The Unprocess option clears the data within the dimension.

Managing Partitions

In a fit of unrestrained metaphor use back in Chapter 4, we noted that carving bars of soap using razor blades could result in intricate little statues, but could also result in fingers covered with band-aids. Well, use of partitions carries a similar warning. Partitions enable you to distribute fact data and aggregated data within Analysis Services so that the resources on a machine can be efficiently utilized. When you create multiple partitions on the same server, you reap the benefits of partitions because Analysis Services reads and writes data in parallel across multiple partitions. Fact data in the data source can be stored as several fact tables (Sales_Fact_2002, Sales_Fact_2003, and so on) or as a single large fact table, such as Sales Fact. You can create multiple partitions within a measure group: one for each fact table in the data source, or by splitting the data from a single large fact table through several queries. Partitions also allow you to split the data across two or more machines running Analysis Services; such partitions are called remote partitions. As an administrator you might be wondering what the size of each partition should be to achieve the best results. Based on the performance analyzed for Analysis Services 2000 customers, Microsoft recommends each partition be no larger than 5GB or 20 million records. You learn more about optimizing partitions in Chapter 13.

A sales cube's partitions usually contain data spread across time; that is, a new partition might be created for every month or quarter. As an administrator you would create a new partition and process it so that it is available for users. To create a new partition, perform the following steps; the first few prepare the fact data in BIDS, and the remaining steps use the Partition Wizard in SSMS.

  1. Open the AnalysisServices2005Tutorial project you have used in previous chapters.

  2. Change the FactInternetSales table to a named query so that it has the WHERE condition DueDateKey < 700. In case you don't recall how this is done, we've included the steps here:

    1. Open Adventure Works DW.dsv under the Data Source Views folder.

    2. Right-click the FactInternetSales table in the diagram view and select Replace Table→With New Named Query.

    3. In the Create Named Query dialog, enter <700 in the DueDateKey Filter text entry box. Your change will automatically be reflected in the query window. Click OK to continue.

  3. In the DSV, right-click and select Add/Remove Tables, and add the FactInternetSales table to the Included objects list.

  4. In the diagram view, replace the newly added FactInternetSales table with a named query. In the named query, set the Filter to DueDateKey >= 700. Rename the named query FactInternetSalesNew. Deploy the project.

  5. Connect to the AnalysisServices2005Tutorial database using SSMS. Navigate to the measure group FactInternetSales.

  6. Right-click the Partitions folder and select New Partition. You will now be in the Partition Wizard. Click Next on the welcome screen.

  7. Choose the named query FactInternetSalesNew to create the new partition, as shown in Figure 12-13, and click Next.

    Figure 12-13

  8. If you want to split the data from the fact table into multiple partitions, you can do so by specifying the query on the page shown in Figure 12-14. Click the Next button.

    Figure 12-14

    One way Analysis Services provides scalability is through remote partitions, where the partitions reside on two or more Analysis Services instances. On the Processing and Storage Location page shown in Figure 12-15, you can specify where to store the partition. You can specify a remote Analysis Services instance on this page, but a data source pointing to the remote Analysis Services instance must already be defined in this database. You can also change the storage location where you want the partition to reside on any of the Analysis Services instances.

  9. Choose the default options as shown in Figure 12-15 and click Next.

    Figure 12-15

  10. You will now be on the final page of the Partition Wizard. Select Design aggregations later and Process Now, and then click Finish. The partition will be processed, and you can browse the cube data. A sketch of the command the wizard sends follows these steps.
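Behind the wizard, SSMS sends a Create command that adds the partition definition to the measure group. The following is a hedged sketch of such a command, assuming a query binding against the relational data source; the IDs and query text are illustrative:

     <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <ParentObject>
         <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
         <CubeID>Adventure Works DW</CubeID>
         <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
       </ParentObject>
       <ObjectDefinition>
         <Partition xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <ID>Fact Internet Sales New</ID>
           <Name>Fact Internet Sales New</Name>
           <Source xsi:type="QueryBinding">
             <DataSourceID>Adventure Works DW</DataSourceID>
             <QueryDefinition>
               SELECT * FROM FactInternetSales WHERE DueDateKey &gt;= 700
             </QueryDefinition>
           </Source>
           <StorageMode>Molap</StorageMode>
         </Partition>
       </ObjectDefinition>
     </Create>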

The number of partitions for a specific cube typically increases over time. Users might not browse historical data at the same granularity as recent data. For example, you might be more interested in comparing sales data for the current month with that of the previous month rather than with data from five years ago; however, you might want to compare year-over-year data for several years. By merging partition data you can gain some query performance benefits. You learn about the considerations you should take into account when merging partitions in Chapter 13.

There are two main requirements for merging partitions: the partitions must have the same storage type, and they need to be on the same Analysis Services instance. Therefore, remote partitions can be merged together only if they reside on the same Analysis Services instance. To merge partitions, do the following:

  1. Launch the Merge partition dialog by right-clicking the Partitions folder within a measure group. You will see the Merge partition dialog as shown in Figure 12-16.

    Figure 12-16

  2. Select the Target partition that will contain the merged data and the list of partitions to merge data from in the Merge partition dialog as shown in Figure 12-16 and click OK.

All the data from the source partitions is merged into the target partition, and the source partitions are deleted by this operation. SSMS sends the following command to Analysis Services to merge the partitions:

     <MergePartitions xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Sources>
         <Source>
           <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
           <CubeID>Adventure Works DW</CubeID>
           <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
           <PartitionID>Fact Internet Sales New</PartitionID>
         </Source>
       </Sources>
       <Target>
         <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
         <CubeID>Adventure Works DW</CubeID>
         <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
         <PartitionID>Fact Internet Sales</PartitionID>
       </Target>
     </MergePartitions>

Managing Assemblies

You learned about .NET and COM assemblies in Chapter 10. Assemblies, also referred to as stored procedures, can only be added by Analysis Services administrators. You need to make sure your instance of Analysis Services is safe and secure irrespective of the operations performed by the stored procedures. Security is always a concern, and you do not want any assembly to bring down the server. Because hackers try to attack servers, most software products are now built to be secure by default, and the administrator needs to enable certain components and options to make them available to users. By default, Analysis Services does not allow execution of stored procedures. The administrator first needs to set the server properties Feature\ManagedCodeEnabled and Feature\ComUdfEnabled to true (a value of 1 in the Analysis Services config file) to enable managed assemblies and COM DLLs, respectively. This is accomplished using the server properties dialog, which enables registration and execution of assemblies.

The key to managing assemblies is to understand the nature of the assembly and setting appropriate properties while adding assemblies to your Analysis Services. Figure 12-17 shows the dialog to add assemblies to the Server or to a specific database.

You saw in Chapter 10 that Analysis Services supports two types of assemblies: COM and .NET CLR assemblies. Once you specify the type and name of an assembly, you need to specify its security information. There are two parameters by which you can control the security of these stored procedures: Impersonation and Permissions. The permission set defines the scope of access for the assembly, such as access to the file system, the network, and unmanaged code. There are three values for the permission set:

  • Safe. The most secure of the three permission sets. When the Safe permission set is specified for an assembly, the assembly is intended only for computation and cannot access any protected resource. It guarantees protection against information leakage and elevation-of-privilege attacks by malicious code, and it guarantees reliability.

  • External Access. This value gives the assembly access to external resources without compromising reliability, but it does not offer any specific security guarantees. You can use this if you, as the DBA, trust the programmer's ability to write good code and there is a need to access external resources, such as data from an external file.

  • Unrestricted. This value is primarily intended for people who have a very good understanding of programming on servers and need access to all resources. This permission set guarantees neither code security nor reliability. Unrestricted access should only be given to assemblies written by users who absolutely need access to external resources, have a very good understanding of security issues such as denial-of-service attacks and information leakage, and are able to handle them within the stored procedures. We recommend you use this option only when it is absolutely essential and you have full confidence in the programming abilities of the developer of the assembly.

Figure 12-17

Impersonation allows you to specify the account under which the stored procedure needs to be executed. There are five different values for Impersonation information:

  • Default: The Default value executes the stored procedure under a secure mode with minimum privileges. If the assembly is of type COM, the default is Impersonate Current User. For a .NET CLR assembly, the default depends on the permission set: if the permission set is Safe, the impersonation mode will be Impersonate Service Account, but if the permission set is External Access or Unrestricted, the impersonation mode will be Impersonate Current User.

  • Anonymous: If you want the stored procedure to be executed as an anonymous user, select Impersonate Anonymous. The stored procedure will have limited access when executed this way.

  • Use the credentials of the current user: This impersonation mode is typically used when you want the stored procedure to be executed with the current user's credentials. This is a safe option to select: if the stored procedure accesses external resources and the current user does not have permissions, execution of the stored procedure will not cause any ill effects. One use of this impersonation mode is to define dynamic data security, where the current user's credentials are needed to access external resources.

  • Use the service account: If you choose Impersonate Service Account, the stored procedure is executed under the credentials of the Analysis Services service startup account. An example of a stored procedure needing this impersonation mode is an AMO stored procedure that performs management operations on the server.

  • Use a specific username and password: If your business needs a stored procedure to always be executed in the context of a specific user, choose this option. You need to specify a Windows account name and password for this impersonation mode. A typical example of where you might use this option is when you access an external source, such as a data source or web service, with this account and use the retrieved value within the stored procedure for computation.

To summarize, COM assemblies support only the credentials of the current user, whereas .NET CLR assemblies support all the impersonation modes. As an administrator of Analysis Services you need to choose the impersonation mode and permission set that suit your business needs.

When you register an assembly for a specific Analysis Services database or for the server using the Register Assembly dialog, SSMS uses AMO to set up the correct properties, which in turn sends a Create command to the Analysis Services instance, as shown below:

     <Create AllowOverwrite="true"
             xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <ParentObject>
         <DatabaseID>Adventure Works DW</DatabaseID>
       </ParentObject>
       <ObjectDefinition>
         <Assembly xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:type="ClrAssembly">
           <ID>AmoSproc</ID>
           <Name>AmoSproc</Name>
           <Description />
           <ImpersonationInfo>
             <ImpersonationMode>Default</ImpersonationMode>
           </ImpersonationInfo>
           <Files>
             <File>
               <Name>AmoSproc.dll</Name>
               <Type>Main</Type>
               <Data>
                 <Block>------------Content about the stored procedure------</Block>
                 <Block>------------Content about the stored procedure------</Block>
                 <Block>------------Content about the stored procedure------</Block>
                 <Block>------------Content about the stored procedure------</Block>
               </Data>
             </File>
           </Files>
           <PermissionSet>ExternalAccess</PermissionSet>
         </Assembly>
       </ObjectDefinition>
     </Create>

The information within the Block tags is a huge amount of text, which for illustration purposes has been restricted to single lines here. The text within the Block tags is the content of the assembly to be registered, which is stored within the Analysis Services instance. When queries use functions within the assembly, Analysis Services loads the assembly within its own process and executes the CLR code, passing the appropriate parameters. The results from the assembly are passed back to Analysis Services for further evaluation of the query.

Back-up and Restore

Backup is an operation that is part of every individual's life. If you have an important document, you make a photocopy as a backup. Similarly, backup is an extremely critical operation for any data warehouse. There are several reasons why you should periodically back up your Analysis Services databases: one is disaster recovery, another is auditing. Irrespective of the purpose, it is always a good idea to back up your databases on a periodic basis. You can back up the databases on your Analysis Services instance through SSMS. Follow the steps below to back up the AnalysisServices2005Tutorial database:

  1. Connect to Analysis Services instance using SSMS.

  2. Navigate to the database AnalysisServices2005Tutorial in the Object Explorer window.

  3. Right-click the database and select Back Up.

You will see the Backup Database dialog shown in Figure 12-18. By default the dialog chooses the database name as the backup name, and the backup file will be created in the Backup folder of your Analysis Services installation. If you want the backup in a different drive or directory, you first need to change the Analysis Services server property AllowedBrowsingFolders by adding the appropriate directory. You can then choose the folder by clicking Browse in the Backup Database dialog and specifying a file name for the backup.

You have the option to encrypt the backup by specifying a password; you will need that password to restore the database. If you have remote partitions in the database, you have the option of specifying the backup location for each remote partition; backup of these partitions is done on the respective Analysis Services instances.

  4. Disable the option to encrypt the backup file.

  5. Select the option to overwrite any existing backup files with the same name.

  6. Choose the default backup file name and click OK.

SSMS sends the following command to the Analysis Services instance to back up the database AnalysisServices2005Tutorial:

     <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Object>
         <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
       </Object>
       <File>AnalysisServices2005Tutorial.abf</File>
       <AllowOverwrite>true</AllowOverwrite>
     </Backup>

Figure 12-18

An Analysis Services 2005 backup file with the extension .abf is created in the Backup folder. Analysis Services 2000 had a 2GB limit on the backup file size for a database; Analysis Services 2005 has no such limitation. Analysis Services 2005 also allows you to back up multiple databases at the same time. Through SQL Server Management Studio you can launch the Backup dialog for each database you want to back up and run the backups in parallel. Alternatively, you can create a DDL script that backs up multiple databases within the same command.
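For example, a single Batch command can back up more than one database in one round trip to the server; a minimal sketch with illustrative database IDs and file names:

     <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Backup>
         <Object>
           <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
         </Object>
         <File>AnalysisServices2005Tutorial.abf</File>
         <AllowOverwrite>true</AllowOverwrite>
       </Backup>
       <Backup>
         <Object>
           <DatabaseID>Adventure Works DW</DatabaseID>
         </Object>
         <File>Adventure Works DW.abf</File>
         <AllowOverwrite>true</AllowOverwrite>
       </Backup>
     </Batch>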

Whenever you want to restore an Analysis Services database for which you have a backup, you can do so using the Restore dialog. To restore a database on an Analysis Services instance, launch the Restore dialog by right-clicking the Databases folder in the Object Explorer of SSMS and selecting Restore. You will see the Restore dialog shown in Figure 12-19.

Figure 12-19

Specify the database you want to restore and the backup file for that database. If the database already exists and you want to overwrite it, click the Allow Database Overwrite option. You can restore the entire database including security information, or you can skip the security defined in the backup file during restore. If you provided a password during backup of the database, you need to specify the same password for the restore operation. Once the database has been restored you can query it. You can take a backup of a database from your test servers and restore it on a production server; in such a circumstance you might choose to skip the security information if the security defined on the production server differs from that on your test servers, and then secure the database by defining the right security on the production server. Where the backup was taken on your production server and you are restoring the database on an upgraded production machine, we expect you to restore the database with the security information.
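The Restore dialog ultimately sends a Restore command like the following sketch, which overwrites an existing database and skips role membership; the file path is illustrative, and a Password element would be needed only if the backup was encrypted:

     <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <File>C:\Backups\AnalysisServices2005Tutorial.abf</File>
       <DatabaseName>AnalysisServices2005Tutorial</DatabaseName>
       <AllowOverwrite>true</AllowOverwrite>
       <!-- CopyAll restores security as-is; SkipMembership keeps roles but drops members -->
       <Security>SkipMembership</Security>
     </Restore>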

Synchronization

Synchronization sounds like a sophisticated, highly technical area of endeavor, but actually, it couldn't be simpler; consider synchronization as just replication for Analysis Services 2005 databases. The name actually is suitable because it allows you to "synchronize" a database resident on one Analysis Services instance to another Analysis Services instance. Typically engineers test an Analysis Services database in a test environment before moving it to the production servers, and they often have to back up the database on the test servers and restore it on the production servers. With the synchronization feature in Analysis Services 2005, engineers can instead move well-tested databases from test servers to production servers with ease.

If you have an Analysis Services instance actively supporting a population of users, you want to be able to update the database they're querying against without taking the system down to do so. Using the Synchronize Database Wizard you can accomplish the database update seamlessly. The wizard copies both data and metadata from your development and test machine (staging server) to the production server and automatically switches users to the newly copied data and metadata. To try this out, you need a second instance of Analysis Services, either on another machine or on your current machine; we recommend you install another instance called I2 on the same machine. Follow the steps below to synchronize a database from the default instance to the newly installed named instance:

  1. Launch SSMS and connect to your default instance (localhost) and named instance (localhost\I2) of Analysis Services as shown in Figure 12-20.

    Figure 12-20

  2. We assume that you have deployed AnalysisServices2005Tutorial to your default instance. Right-click the Databases folder of the named instance and select Synchronize as shown in Figure 12-20.

  3. If you see the welcome screen click Next.

  4. In the Select Database to Synchronize page of the Synchronize wizard, type the default instance localhost as the Source server and select the Source database AnalysisServices2005Tutorial as shown in Figure 12-21, and click Next.

Figure 12-21

On the Specify Locations for Local Partitions page you can change the locations of the partitions during synchronization if the destination server allows it. In Figure 12-22 you can see that all the partitions of AnalysisServices2005Tutorial will be restored to the default location.

Figure 12-22

  5. Click Next in the Specify Locations for Local Partitions page. In the Specify Query Criteria page you can choose to synchronize the database with or without the security permissions specified on the source server, using the options shown in Figure 12-23: you can copy all the roles, skip the membership information in the roles, or skip all the security permissions. Analysis Services 2005 provides these options because customers often synchronize databases from test servers to production servers. When doing so, you can choose to keep all roles if the security permissions in the test environment are identical to those in production. If the security permissions can be utilized in the production environment but the users differ, you can use Skip membership; if you choose this option you will need to define the membership after synchronization. Select the Skip membership option as shown in Figure 12-23 and click Next.

    Figure 12-23

  6. In the Select Synchronization Method page you can choose to start the synchronization process immediately, or to script the command to a file and send it to the destination server later using SSMS or custom programs. Choose the Synchronize now method as shown in Figure 12-24 and click Finish.

Figure 12-24

As soon as you click Finish, the following Synchronize command is sent to the destination server to synchronize the database:

     <Synchronize xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <Source>
         <ConnectionString>Provider=MSOLAP.3;Data Source=localhost;ConnectTo=9.0;Integrated Security=SSPI;Initial Catalog=AnalysisServices2005Tutorial</ConnectionString>
         <Object>
           <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
         </Object>
       </Source>
       <Locations />
       <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
       <ApplyCompression>true</ApplyCompression>
     </Synchronize>

You should be aware that the destination server contacts the source server for synchronization using the credentials of its service startup account, not those of the user who initiated the synchronize operation from SSMS. You need to make sure the service startup account of the destination server has access to the database(s) on the source server. The source server creates a backup of the objects that have changed, compresses them, and sends them to the destination server, where they are first restored into temporary files. If there are active queries executing against the database on the destination server, the server waits for those queries to complete and then updates the objects. On the source server, the objects are locked during synchronization; until the objects have been sent to the destination server you cannot perform operations that would modify them, such as processing.

  7. You will see the Synchronization Progress page as shown in Figure 12-25. The progress percentage of the synchronization shown on this page is updated periodically. After the synchronization is completed you will see the success message shown in Figure 12-25. Click Close.

Figure 12-25

You can use the Synchronize Database Wizard periodically to synchronize databases from a source server to a destination server. We typically expect the source server to be your test environment and the destination server to be the production environment. Synchronization is a pull-model operation, where the destination server pulls data from the source server. If a version of the database already exists on the destination server, the source server sends only the objects that have changed. Typically on the destination server you might have established new memberships or security permissions; if you choose the appropriate options to skip membership or ignore roles during synchronization, the security permissions on the destination server will not be changed.

Important

There is an important security requirement you must implement to complete a successful synchronization. The destination server's service startup account must have access to the database(s) on the source server that are expected to be synchronized.

Managing Security

As an administrator, managing security is the most critical operation for your Analysis Services databases. The phrase "managing security" can mean several things: managing the roles of Analysis Services databases; using the Analysis Services security features, dimension and cell security; enabling and disabling features in Analysis Services; and setting up Analysis Services with appropriate firewall protection, which ensures your Analysis Services instance communicates appropriately via the Internet and intranet.

Server and Database Roles

Roles are of vital importance in securing the databases on your Analysis Services instance. You will deal with two kinds of roles when using the product: the server role and database roles. The server role is required for a login that performs administrative functions through the user interface (SSMS) or programmatically using AMO. Database roles are defined on an as-needed basis; you can give users read/write permissions to all objects in a database, or permissions as fine-grained as access to certain cells in a cube. You learned about the server role and how to specify its membership earlier in this chapter, and in Chapter 9 you learned to define read/write access to dimensions and cubes in a database. To provide access permissions to users you use the Roles dialog, which can be launched by right-clicking the Roles folder within a specific database. Figure 12-26 shows the Roles dialog in SSMS.

Figure 12-26

The Roles dialog is used to define access to database objects. The pages in the Roles dialog are identical to those of the Role designer you learned about in Chapter 9; please refer to Chapter 9 for details. Chapter 19 provides extensive coverage of database role management through a scenario in which access is restricted to specific dimension members (dimension security) or cells in a cube (cell security). Just to recap briefly: you can add several roles to a specific database, add members to a role, and give the role read, write, or read/write access. In addition you can specify cell security and dimension security for the role using MDX expressions that limit access to specific cell data or dimension members. When a user is part of multiple roles, Analysis Services provides access to data in the least restrictive manner: if a user is denied access to members of a dimension in one role but granted access to the same members in another role, the user will be able to access those members.
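When you create a role in the Roles dialog, SSMS sends a Create command for the Role object, as in the minimal sketch below; the role name and member are illustrative, and the read/write permissions themselves are granted through separate permission objects on the database, cubes, and dimensions:

     <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
       <ParentObject>
         <DatabaseID>AnalysisServices2005Tutorial</DatabaseID>
       </ParentObject>
       <ObjectDefinition>
         <Role xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
           <ID>Role 1</ID>
           <Name>Analysts</Name>
           <Members>
             <!-- Illustrative domain user -->
             <Member>
               <Name>DOMAIN\AnalystUser</Name>
             </Member>
           </Members>
         </Role>
       </ObjectDefinition>
     </Create>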

Enabling or Disabling Features

Managing database roles is one aspect of securing data in Analysis Services. You can add users to the server role so that you have several administrators for your Analysis Services instance, and the administrators can define appropriate levels of access to databases and the objects within them. However, Analysis Services 2005 provides another level of protection: you can disable features that are not used by your users. One of the most common ways to protect your server from security attacks is to reduce the attack surface by running your server or application with minimum functionality; for example, you can turn off unused operating system services that listen for user requests by default. As and when features are needed, they can be enabled by the administrator. Similarly, Analysis Services allows you to enable or disable certain features to prevent security attacks, thereby making your Analysis Services instance more secure. Following is the list of features that can be enabled or disabled using the server properties of your Analysis Services instance:

  • Feature\ManagedCodeEnabled

  • Feature\LinkInsideInstanceEnabled

  • Feature\LinkToOtherInstanceEnabled

  • Feature\LinkFromOtherInstanceEnabled

  • Feature\COMUDFEnabled

  • Datamining\AllowAdhocOpenRowSetQueries

  • Security\RequireClientAuthentication

The LinkInsideInstanceEnabled, LinkToOtherInstanceEnabled, and LinkFromOtherInstanceEnabled features enable or disable linked objects (measure groups and dimensions) within the same instance and between instances of Analysis Services. The ManagedCodeEnabled and COMUDFEnabled features allow or disallow loading assemblies into Analysis Services. You can allow or deny ad hoc open rowset data mining queries using the property Datamining\AllowAdhocOpenRowSetQueries. The server property Security\RequireClientAuthentication allows or denies anonymous connections to Analysis Services.
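These settings are persisted in the Analysis Services configuration file (msmdsrv.ini) in the Config folder of your installation. The following is a hedged sketch of roughly how the relevant entries appear, with the element nesting mirroring the property names listed above; verify against your own config file, and prefer changing these values through the server properties dialog rather than editing the file by hand:

     <ConfigurationSettings>
       <Feature>
         <ManagedCodeEnabled>1</ManagedCodeEnabled>
         <ComUdfEnabled>0</ComUdfEnabled>
         <LinkInsideInstanceEnabled>1</LinkInsideInstanceEnabled>
         <LinkToOtherInstanceEnabled>0</LinkToOtherInstanceEnabled>
         <LinkFromOtherInstanceEnabled>0</LinkFromOtherInstanceEnabled>
       </Feature>
       <Datamining>
         <AllowAdhocOpenRowsetQueries>0</AllowAdhocOpenRowsetQueries>
       </Datamining>
       <Security>
         <RequireClientAuthentication>1</RequireClientAuthentication>
       </Security>
     </ConfigurationSettings>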

SQL Server 2005 provides a configuration utility called Surface Area Configuration to reduce the attack surface. You can launch the Surface Area Configuration dialog from Program Files→Microsoft SQL Server 2005→Configuration Tools→SQL Server Surface Area Configuration. The dialog provides two options. The first is to configure SQL Server 2005 services, where you can start or stop services and enable local and remote connections to them. The second is to configure features; if you select Surface Area Configuration for Features you will see the dialog shown in Figure 12-27, from which you can enable or disable data mining queries, anonymous connections, linked objects, and COM user-defined functions.

Figure 12-27


