Additional Cube Features


Throughout the past few chapters, we have learned how to create and, most recently, deploy Analysis Services cubes. I know you are all ready to rush off and create data marts and cubes to start analyzing your own data. This is when I need to be like the knife salesperson on late-night TV and say, "But wait, there's more!"

SQL Server 2005 offers a number of special features in its OLAP cubes. These features enable us to provide our users with more meaningful, better organized, and more secure business intelligence. Space does not permit a detailed examination of each of these features. Instead, the goal here is to give you a basic understanding of each item, so you can do additional exploration on your own. SQL Server 2005 includes online documentation called Books Online. Refer to this documentation for more detailed explanations.

Linked Objects

Depending on the intricacy of your organization and its data processing structure, you may end up creating a number of different Analysis Services cubes from a number of different data sources. Odds are there will be measures in two or three different cubes that a user wants to analyze together. We could solve this problem by duplicating the measures in several cubes. Duplicating data is not a great option, especially if we are preprocessing hundreds of thousands, or even millions, of aggregate values in each cube. Fortunately, we have a better alternative using linked objects.

By linking objects, we can take measures and their related dimensions from one cube and allow them to be referenced through another cube. This solves our problem without creating maintenance nightmares. Once created, these linked measures and dimensions look to the end user like any other measure or dimension in the cube.

Creating a Linked Measure

Linked objects are created on the Cube Structure tab of the Cube Design tab in the Business Intelligence Development Studio. We open the project for the cube that will contain the linked objects. Right-click in the Measures or Dimensions area and select New Linked Object from the Context menu, as shown in Figure 9-17, to launch the Linked Object Wizard.

Figure 9-17: Creating a linked object

After the introductory page, the Select a Data Source page of the wizard is displayed. Here, we can select an existing Analysis Services data source, if one exists, or create a new data source using the New Data Source button. If the New Data Source button is clicked, the Data Source Wizard appears. This wizard walks us through the creation of a new data source as we have done previously. The only difference is we are creating a connection to an Analysis Services database, rather than a transactional database.

Once a data source is created, we are returned to the Linked Object Wizard. Clicking Next takes us to the Select Objects page of the wizard. On this page, we can select the dimensions and measures we want to utilize from this cube to link back into the cube open in the Business Intelligence Development Studio. This is shown in Figure 9-18.

Figure 9-18: The Select Objects page of the Linked Object Wizard

The next page of the wizard lets us review our selections and complete the process. After we click Finish, the selected items are added to the cube definition open in the Business Intelligence Development Studio as shown in Figure 9-19. The linked items appear in the Measures and Dimensions areas with a chain link added to their icons. These items must still be maintained through their original cube. This is why the source tables for these dimensions are not added to the Data Source View in this cube.

Figure 9-19: A cube definition with linked objects

The Business Intelligence Wizard

A number of complex business intelligence concepts can be applied to cubes in SQL Server 2005 Analysis Services. The Business Intelligence Development Studio wraps a number of these concepts into the Business Intelligence Wizard. You can think of this as being nine wizards wrapped into one.

The Business Intelligence Wizard enables us to do any of the following:

  • Define time intelligence Adds time-related calculated measures to the cube. This includes period-to-date calculations, rolling averages, and period over period growth. (A sketch of one such calculation follows this list.)

  • Define account intelligence Identifies a dimension and its attributes that define a chart of accounts. Once the chart of accounts is defined, additional calculated values can be created using the account definition.

  • Define dimension intelligence Identifies a dimension and its attributes as being of a certain type, such as products, customers, rates, time, and so forth. Once the dimension type is defined, additional calculated values can be created using the dimension definition.

  • Specify a unary operator Specifies the operator to be used instead of the default aggregation in a dimension with a parent-child hierarchy.

  • Create a custom member formula Creates a calculation to be used instead of the default aggregation in a hierarchy.

  • Specify attribute ordering Specifies how the members of a particular attribute are to be sorted. The default order is by name.

  • Enable dimension writeback Allows the members and their attributes to be updated through the cube.

  • Define semiadditive behavior Defines the semiadditive behavior of a measure.

  • Define currency conversion Defines the rules for converting currency and analyzing other multinational information.
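
To make the time intelligence item concrete, here is a hand-written sketch of the kind of period-to-date calculated measure the wizard can generate. The Accepted Products measure and the DateOfManufacture hierarchy are taken from this book's Max Min Manufacturing DM cube; the exact script the wizard produces will differ:

      CREATE MEMBER CURRENTCUBE.[Measures].[Accepted Products YTD]
      AS
          -- Sum the Accepted Products measure over all days from the
          -- start of the year up to the current time member.
          SUM(
              YTD([Dim Time].[DateOfManufacture].CurrentMember),
              [Measures].[Accepted Products]
          ),
      FORMAT_STRING = '#,##0';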

The Business Intelligence Wizard is launched by clicking the Add Business Intelligence button in the Cube Design tab toolbar or by right-clicking in the Solution Explorer window and selecting Add Business Intelligence from the Context menu. This is shown in Figure 9-20.

Figure 9-20: Launching the Business Intelligence Wizard

Key Performance Indicators

A Key Performance Indicator (KPI) lets us define a simple, graphic method for analyzing business intelligence that is important to our organization. KPIs are meant to enable a decision maker to monitor a number of key aspects of the organization's operations at a single glance. KPIs are often used as part of a concept called a digital dashboard.

The digital dashboard is modeled after the dashboard in your car: on the car's dashboard, a number of gauges and lights enable you to easily monitor the status of your car's operation. When you want to know what speed you are traveling at, you don't need to wade through an output of the tire's rotational speed over the past few minutes or perform a calculation to determine the tire's circumference. All of the intelligence is built into the dashboard. You simply look down at the speedometer and the needle tells you what your speed is.

The same is true with the digital dashboard. The decision maker doesn't need to plow through voluminous reports or crunch numbers in a spreadsheet to determine how the organization is doing against specified goals. This business intelligence is in the digital dashboard. A set of electronic gauges and other symbols provides the necessary status information at a single glance.

In SQL Server 2005 Analysis Services, a KPI can reflect five different status levels:

  • Very Good

  • Good

  • Fair

  • Bad

  • Very Bad

Creating a Key Performance Indicator

Key Performance Indicators are created using the KPIs tab on the Cube Design tab shown in Figure 9-21. On this tab, we define each property of the KPI.

Figure 9-21: The KPIs tab on the Cube Design tab

  • Name A unique name for this KPI.

  • Associated Measure Group The measure group being used to calculate the values for this KPI. This can be a single measure group in the cube or all measure groups in the cube.

  • Value Expression An MDX expression used to calculate the current value of the KPI. This may be as simple as a single measure or a calculated member from the cube, or it may be a complex MDX expression looking at the interaction of several measures.

  • Goal Expression An MDX expression used to express the current goal for the KPI. If the goal is a set value, this is a constant value. In other cases, the goal may need to reflect cyclic trends or it may need to vary, depending on dimensional members. The goal could even come from a dimensional attribute. For example, the production goal for each product could be an attribute in the product dimension. (A sketch of such a goal expression follows this list.)

  • Status Indicator The graphical representation to be used with this KPI. The Business Intelligence Development Studio and its KPI viewer support eight different graphics. Be aware, however: the graphics available to the user depend heavily on the client tool being used to browse and analyze the cube.

  • Status Expression An MDX expression used to translate the current value of the KPI into a graphic representation. As noted previously, the KPI can support five different status levels. These equate to five different numeric values: 1 for Very Good, 0.5 for Good, 0 for Fair, -0.5 for Bad, and -1 for Very Bad. The MDX expression must calculate the appropriate value for each status.
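
As an example of the Goal Expression coming from a dimensional attribute, the expression might read a member property, as sketched below. The Dim Product dimension name and the Production Goal attribute are hypothetical, used only for illustration:

      -- Read the (hypothetical) Production Goal attribute of the
      -- current product as a typed value.
      [Dim Product].[Product].CurrentMember.Properties("Production Goal", TYPED)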

Beyond these required items, we can define the following to give the KPI additional functionality:

  • Trend Indicator The graphical representation to be used when expressing the direction the KPI value is trending. The Business Intelligence Development Studio and its KPI viewer support four different graphics. As with the Status Indicator, the graphics available to the user depend heavily on the client tool being used to browse and analyze the cube.

  • Trend Expression An MDX expression used to translate the trend of the KPI value into a graphic representation. The KPI supports five trend states: 1 for an upward trend, 0.5 for a mildly upward trend, 0 for a flat trend, -0.5 for a mildly downward trend, and -1 for a downward trend. The MDX expression must calculate the appropriate value for each trend situation. (A sketch of a trend expression follows this list.)

  • Display Folder The grouping folder for this KPI. This property can be used to collect KPIs into groupings. KPIs that are related can be placed in the same display folder. The client tool used to browse and analyze the cube should then present all of the KPIs in a display folder as a single grouping.

  • Parent KPI The KPI that serves as a rollup for this and other child KPIs. KPIs can be created in a hierarchical structure where one KPI displays the status of a rollup value and its child KPIs display the status of the more detailed indicators that make up this value.

  • Current Time Member An MDX expression defining the current time member. We discuss the concept of a current member of a dimension in Chapter 10.

  • Weight An MDX expression that expresses the emphasis given to this KPI relative to other KPIs in the group. This expression usually returns a numeric value.

  • Description A textual description of the KPI.
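
Here is a minimal sketch of a Trend Expression for the Percent Rejected KPI created later in this chapter. It compares the current value to the value one year back; the [Year] level name on the DateOfManufacture hierarchy is an assumption. Because a lower rejection rate is better, a falling value counts as an upward (improving) trend:

      Case
          -- No prior-year value to compare against: flat trend
          When IsEmpty(
                   ( ParallelPeriod([Dim Time].[DateOfManufacture].[Year], 1),
                     [Measures].[Percent Rejected] ) )
              Then 0
          -- Rejection rate falling year over year: improving
          When [Measures].[Percent Rejected] <
               ( ParallelPeriod([Dim Time].[DateOfManufacture].[Year], 1),
                 [Measures].[Percent Rejected] )
              Then 1
          Else -1
      End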

Key Performance Indicators and Client Software

We can define KPIs in our cubes and even suggest the graphics we would like used to present those KPIs. However, it is up to the client software used to browse and analyze the cube to display the KPI. Therefore, the client software must not only have the same graphics available, but also honor our suggested graphic for each KPI for the KPIs to appear as we designed them.

You should look at the capabilities of the client tools being used in your organization before spending a lot of time defining KPIs. Make sure KPIs are supported and determine which parts of the KPI definition the client tools can utilize.

Learn by Doing—Adding a KPI to the Max Min Manufacturing DM Cube

Feature Highlighted
  • Defining a Key Performance Indicator

Business Need Maximum Miniatures is contemplating the creation of a digital dashboard using KPIs. An initial KPI should be created showing the status of the Percent Rejected calculated measure. This KPI will be used as part of a proof of concept project.

The Percent Rejected KPI should reflect status as follows:

  • Values less than 1.03 percent should get a Very Good status.

  • Values greater than or equal to 1.03 percent, but less than 1.04 percent, should get a Good status.

  • Values greater than or equal to 1.04 percent, but less than 1.05 percent, should get a Fair status.

  • Values greater than or equal to 1.05 percent, but less than 1.06 percent, should get a Bad status.

  • Values greater than or equal to 1.06 percent should get a Very Bad status.

Steps
  1. Open the Business Intelligence Development Studio.

  2. Open the MaxMinManufacturingDM project.

  3. Double-click the entry for Max Min Manufacturing DM.cube in the Solution Explorer window. The Cube Design tab appears.

  4. Select the KPIs tab. Click the New KPI button on the KPIs tab toolbar.

  5. Enter Percent Rejected for Name.

  6. Select Manufacturing Fact from the Associated Measure Group drop-down list.

  7. Enter the following for Value Expression:

     [Measures].[Percent Rejected] 

  8. Enter the following for Goal Expression:

     "1.02% or less" 

  9. Leave Status Graphic set to the default of Gauge.

  10. Enter the following MDX expression for Status Expression:

      Case
          When ROUND([Measures].[Percent Rejected],4) < .0103
              Then 1
          When ROUND([Measures].[Percent Rejected],4) >= .0103 AND
               ROUND([Measures].[Percent Rejected],4) < .0104
              Then .5
          When ROUND([Measures].[Percent Rejected],4) >= .0104 AND
               ROUND([Measures].[Percent Rejected],4) < .0105
              Then 0
          When ROUND([Measures].[Percent Rejected],4) >= .0105 AND
               ROUND([Measures].[Percent Rejected],4) < .0106
              Then -.5
          Else -1
      End

    We discuss MDX expressions in detail in Chapter 10.

  11. Click the Save All button in the toolbar. The KPIs tab should appear similar to Figure 9-21.

  12. We now want to use the Business Intelligence Development Studio's KPI browser to look at the result of our efforts. To do this, we need to deploy these modifications to the Analysis Services database. Select Build | Deploy MaxMinManufacturingDM from the Main menu.

  13. When the Deployment Completed Successfully message appears in the Deployment Progress window, click the Browser View button in the KPIs tab toolbar as shown in Figure 9-22. The KPI browser appears.

    The KPI browser shows us the Percent Rejected KPI we just created, as shown in Figure 9-23. The value shown represents the value for the Percent Rejected calculated measure across the entire cube. Our KPI translates this value to a status of Bad (between the red and yellow areas on the gauge).

    The upper portion of the KPI browser allows us to filter the information we are sending to the KPI. We use the filter to look at the KPI for some specific dates of production, rather than across the entire cube.

  14. Click the cell containing <Select dimension> in the upper portion of the KPI browser. A dimension drop-down list appears. Select Dim Time from this drop-down list. (If <Select dimension> does not appear in the upper portion of the KPI browser, click the Reconnect button in the toolbar of the KPIs tab.)

  15. Click the cell immediately below the Hierarchy heading. The hierarchy drop-down list appears. Select DateOfManufacture from this drop-down list.

  16. Click the cell immediately below the Filter Expression heading. The Filter Expression drop-down window control appears. Click the down arrow to display the drop-down window.

  17. Expand the All entry on the drop-down window. Check 1/2/2003 and click OK.

  18. Click anywhere on the KPI browser, so the Filter Expression cell loses focus. This causes the KPI to refresh. The KPI browser now has a value of 1.04 percent, which translates to a status of Fair (straight up in the yellow), as shown in Figure 9-24.

  19. Try this process again for the following dates:

    Date          Value    Status
    1/7/2003      1.06%    Very Bad
    1/17/2003     1.03%    Good
    1/29/2003     1.02%    Very Good

  20. You can experiment with additional analysis. Try adding a second filter to see the KPI for a single product or a single product type.

Figure 9-22: The Browser View toolbar button

Figure 9-23: The KPI browser with the Percent Rejected KPI for the entire cube

Figure 9-24: The KPI browser with the Percent Rejected KPI for January 2, 2003

Actions

Actions allow the OLAP cubes to "reach out and touch someone." They enable us to define commands, statements, and directives that are to be executed outside of the cube. In fact, actions are designed to be executed outside of the Analysis Services environment altogether.

Actions are instructions that are defined and stored inside the cube itself. They are linked to certain objects in the cube. When the user is browsing a certain object in the client software, that software can look for any actions related to the object. These actions are then displayed to the user as menu items or action buttons on the client screen. The user can select one of these actions to launch separate, but related, applications or interfaces to accomplish related tasks.

For example, suppose a user is browsing a cube that analyzes changing stock prices. This cube may include an action for navigating to a website that contains up-to-the-minute market analysis. If the stock symbol for the company is stored as an attribute in the cube, the action could pass this along as a parameter in the URL.
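
A URL action expression for this scenario might simply concatenate the attribute value onto a query string, as in the following sketch. The site address, dimension name, and Stock Symbol attribute are all hypothetical:

      // Hypothetical market-analysis site; Stock Symbol is an assumed attribute
      "http://quotes.example.com/analysis?symbol=" +
          [Company].[Stock Symbol].CurrentMember.Name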

In another case, a user may be browsing a cube that contains information on purchasing. The cube could include an action that would launch the organization's document management system and take the user to the folder containing scanned copies of the hardcopy purchase orders. Then, when a questionable purchase is spotted, the user can launch the document management program and examine the paper trail for the purchase.

Types of Actions

Three different types of actions are available:

  • Action The generic action type, which has a number of subtypes:

    • Dataset Retrieves a dataset.

    • Proprietary A proprietary action defined by the client software.

    • Rowset Retrieves a rowset.

    • Statement Runs an OLE DB command.

    • URL Displays a page in a browser.

  • Drillthrough Action Defines a dataset to be returned as a drillthrough to a more detailed level.

  • Report Action Launches a SQL Server 2005 Reporting Services report.

Creating an Action

Actions are created on the Actions tab on the Cube Design tab shown in Figure 9-25. On this tab, we define each property of the action.

Figure 9-25: The Actions tab on the Cube Design tab

  • Name A unique name for this Action.

  • Target type The type of object to which this Action is associated.

  • Target object The specific objects of the selected Target type to which this Action is associated.

  • Condition An MDX expression defining a condition that would further limit the objects to which this Action is associated.

  • Type The type of Action (Dataset, Proprietary, Rowset, Statement, or URL).

  • Action expression The command that is executed to carry out the Action.

  • Invocation How the command is executed: Batch, Interactive, On Open.

  • Application The application used to carry out the Action.

  • Description A description of the Action.

  • Caption A caption for this Action.

  • Caption is MDX A flag showing whether the caption is a string constant or an MDX expression to be evaluated.

Creating Drillthrough Actions

For the most part, Drillthrough Actions have the same properties as Actions. Drillthrough Actions do not have Target Type or Target Object properties. In their place, the Drillthrough Action has the following:

  • Drillthrough Columns Defines the objects to be included in the drillthrough dataset.

  • Default A flag showing whether this is the default Drillthrough Action.

  • Maximum Rows The maximum number of rows to be included in the drillthrough dataset.

Creating Report Actions

For the most part, Report Actions have the same properties as Actions. Report Actions do not have Target Type or Target Object properties. In their place, the Report Action has the following:

  • Server name The name of the report server.

  • Server path The path to the report server.

  • Report format The format of the report to be returned (HTML5, HTML3, Excel, PDF).

  • Parameters The parameter values to be passed to the report.

Actions and Client Software

We can define Actions in our cubes. However, it is up to the client software used to browse and analyze the cube to display and implement the Action. You should look at the capabilities of the client tools being used in your organization before spending a lot of time defining Actions. Make sure Actions are supported and determine which parts of the Action definition the client tools can utilize.

Partitions

We have deployed and processed our cubes to databases on the SQL Server Analysis Services server. We now have, as we have seen, cubes full of information that can be browsed and analyzed. What we have not discussed up to this point is exactly how and where the information in the databases is stored. Let's take care of that deficiency right now.

Measure Groups and Partitions

When we created the measure groups in our cubes, the Business Intelligence Development Studio created something else for us behind the scenes. It created one partition for each of our measure groups. These partitions are where all of the measure group information is stored. We can view the partitions defined in a cube by using the Partitions tab on the Cube Design tab as shown in Figure 9-26.

Figure 9-26: The Partitions tab on the Cube Design tab

In Figure 9-26, you can see the partitions that were created for each of the measure groups in the Max Min Manufacturing DM cube. The partitions were given the same name as the measure group they are tied to. The Source column on this tab tells us where each partition and, therefore, each measure group, gets its fact data. Each partition is tied to a fact table in the data mart. The Aggregations column tells us how each partition is configured for storing aggregate data.

Multiple Partitions for a Measure Group

Although we start out with one partition for each measure group, that does not mean we have to keep it that way. We can assign a number of partitions to a single measure group. Each partition has its own data source table.

This is similar to the concept of partitioned views on the relational side of SQL Server. With partitioned views, we distribute the data from what would be a large table among a number of smaller tables as shown in Figure 9-27. Sales data from 2002 is in one table. Sales data from 2003 is in another table, and so on. These tables are presented to the client application as if they were a single table by combining them together in a partitioned view. This is done for ease of management and for better performance. A T-SQL sketch of such a view follows Figure 9-27.

Figure 9-27: A partitioned view
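
For reference, a partitioned view like the one in Figure 9-27 can be sketched in T-SQL as follows. The yearly table names are assumptions based on the figure:

      -- Present the yearly sales tables as a single logical table.
      -- A CHECK constraint on the year column of each table lets the
      -- query optimizer read only the relevant partition.
      CREATE VIEW SalesAll
      AS
      SELECT * FROM Sales2002
      UNION ALL
      SELECT * FROM Sales2003
      UNION ALL
      SELECT * FROM Sales2004;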

We can essentially do the same thing with partitions and measure groups as shown in Figure 9-28. Our data can be divided among a number of tables. These tables could be in the same data source or in different data sources. Because they can be in different data sources, the tables could even be on different database servers. We can define multiple partitions, each one pulling data from one of the data source tables. These multiple partitions are then brought together in a single measure group.

Figure 9-28: A measure group with multiple partitions from multiple relational tables

As with a partitioned view in a relational database, all of the partitions combined in a single measure group must have the same structure. They must have the same measures and the same relationships to dimensions. If the measures and dimensional relationships don't match, we cannot make the partitions into a single measure group.

It is also possible to create multiple partitions from a single table. We may want to do this because our source data works just fine in a single table, but we can better manage it as a number of separate partitions in Analysis Services. This is shown in Figure 9-29.

Figure 9-29: A measure group with multiple partitions from a single relational table

We do this by creating a number of views in the relational database or named queries in the data source view. You can think of named queries as equivalent to views. They are defined in the data source view of a cube. We try creating a few named queries in the section "Learn By Doing—Creating Multiple Partitions from a Single Table." Each view or named query becomes the data source for a single partition. The partitions are joined together again by the measure group.

Any time we use multiple partitions in a single measure group, but especially when using named queries with a single table, we must be careful not to get the same data in two partitions. If this occurs, the information in our measure group will be incorrect. To prevent this, it is important to strictly define the criteria for membership in each view or named query. In many cases, dividing data by time is wise, as is shown in Figure 9-27, Figure 9-28, and Figure 9-29.

Learn By Doing—Creating Multiple Partitions from a Single Table

Features Highlighted
  • Creating Named Queries in a data source view

  • Creating multiple partitions for a single measure group

Business Need To better manage the Manufacturing Fact information in Analysis Services, this measure group should utilize three partitions. One partition should contain data from 2003, one partition should contain data from 2004, and one partition should contain data from 2005.

Steps
  1. Open the Business Intelligence Development Studio.

  2. Open the MaxMinManufacturingDM project.

  3. Double-click the entry for the Max Min Manufacturing DM.dsv data source view. The Data Source View Design tab appears.

  4. Right-click any place there isn't a table in the data source diagram area as shown in Figure 9-30. Select New Named Query from the Context menu. The Create Named Query dialog box appears.

  5. Enter ManufacturingFact_2003 for Name.

  6. Enter the following query in the SQL pane:

     SELECT * FROM ManufacturingFact WHERE YEAR(DateOfManufacture) = 2003 

    The Create Named Query dialog box appears as shown in Figure 9-31.

  7. Click OK.

  8. Create a second Named Query called ManufacturingFact_2004, which includes only 2004 data.

  9. Create a third Named Query called ManufacturingFact_Current using the following query:

     SELECT * FROM ManufacturingFact WHERE YEAR(DateOfManufacture) > 2004 

    Obviously, some annual maintenance is required at the beginning of each year. At least, by keeping our final Named Query open-ended, using > 2004 rather than = 2005, we do not run the risk of leaving out data.

  10. Click the Save All button in the toolbar and close the Data Source View Design tab.

  11. Double-click the entry for Max Min Manufacturing DM.cube in the Solution Explorer window. The Cube Design tab appears.

  12. Select the Partitions tab.

  13. Click the New Partition link or the New Partition button in the Partitions tab toolbar. The Welcome page of the Partition Wizard dialog box appears.

  14. Click Next. The Specify Source Information page of the wizard appears. The Manufacturing Fact measure group and the Max Min Manufacturing DM data source view should already be selected for us. Check ManufacturingFact_2003 in the Available Tables list.

  15. Click Next. The Restrict Rows page of the wizard appears. Here, we can further filter the rows in the selected table or Named Query to insure no overlap exists between partitions. Our Named Queries are structured such that no overlap occurs, so we do not need to specify a filter here.

  16. Click Next. The Processing and Storage Locations page of the wizard appears. This page enables us to specify which Analysis Services server is to be used to calculate the aggregates and where the resulting data should be stored. You can see, if we needed a great deal of computing power to calculate a large set of aggregates, we could have a separate server for calculating each partition! If we needed a large amount of storage space, we could have a separate drive system to store the data from each partition! We don't have those types of requirements, so we can leave the defaults.

  17. Click Next. The Completing the Wizard page appears. This page lets us choose when and how to design aggregations. We execute the Aggregation Design Wizard manually in the next Learn By Doing exercise. For now, select Design Aggregations Later.

  18. Click Finish. The new partition appears under the Manufacturing Fact measure group.

  19. Repeat Step 13 through Step 18 for the ManufacturingFact_2004 Named Query and again for the ManufacturingFact_Current Named Query.

  20. Select the original Manufacturing Fact partition. Click the Delete button in the Partitions tab toolbar. The Delete Objects dialog box appears.

  21. Click OK to confirm the deletion. Click the Save All button in the toolbar. The Partitions tab should appear as shown in Figure 9-32.

Figure 9-30: Creating a New Named Query

Figure 9-31: The ManufacturingFact 2003 Named Query definition

Figure 9-32: The Manufacturing Fact measure group with three partitions

Partitions and Storage Options

In Chapter 4 of this book, we discussed the different types of OLAP architectures: Relational OLAP (ROLAP), Multidimensional OLAP (MOLAP), and Hybrid OLAP (HOLAP). Refer to Figure 4-4, if you need a refresher. We are now going to discuss these storage options with regard to our partitions.

As we have said, the aggregates in our partitions are going to be preprocessed. Once the preprocessing is completed, all of that data needs to be stored somewhere. SQL Server 2005 Analysis Services provides us with a number of options here, enabling us to select the best mix of query speed versus data latency.

If we click the Storage Settings link or the Storage Settings button in the Partitions tab toolbar, we see the Storage Settings dialog box as shown in Figure 9-33. This dialog box presents a continuum of storage choices on the Standard Setting slider. These are preconfigured storage options that Analysis Services makes available to us. You can also create your own custom storage settings, but leave that alone for the moment.

Figure 9-33: The Storage Settings dialog box

The options on the left side of the slider represent the lowest latency, but the slowest query speed. The options on the right side of the slider represent the highest latency, but the fastest query speed. We need to determine the best choice for each partition.

The predefined options control the following characteristics:

  • Storage location for the detail data and the preprocessed aggregates The detail data and preprocessed aggregates can be stored in one of the following three ways:

    • ROLAP The detail data and the preprocessed aggregates are both stored in a relational format.

    • HOLAP The detail data is stored in a relational format and the aggregates are stored in a multidimensional format.

    • MOLAP The detail data and the preprocessed aggregates are both stored in a multidimensional format.

  • How Analysis Services finds out a data source has changed Analysis Services can discover data source changes in the following ways:

    • SQL Server Notification The SQL Server relational database engine notifies Analysis Services when changes are made to a data source table. When the partition is linked to a view or a Named Query, we must specify a SQL table that will cause the notification when it is changed. This is known as a tracking table.

    • Client Initiated Notification The client software that is responsible for updating the relational data source sends a NotifyTableChange command to Analysis Services to let it know that data has been changed. Of course, this works only if the data source is modified exclusively through this client software.

    • Scheduled Polling Analysis Services periodically runs a query to determine if the data has changed. This query must return a single value that is compared with the value saved from the previous query execution. This value could be the MAX() of an identity field in the fact table, which catches inserts but not edits or deletes. It could also be the MAX() of a last-updated date/time stamp, which would catch inserts and edits, but still miss deletes. (A sketch of such a polling query follows this list.)

  • Enable Proactive Caching Whether proactive caching is used or not. (See the section "Proactive Caching" in Chapter 4 for more information on Proactive Cache.)

  • Silence Interval The length of time the data must go without a change before processing begins. The thinking here is that data updates often come in bursts. We don't want to start processing the cube if another ten data changes are coming down the pike. We can wait until things settle down, and then start processing.

  • Silence Override Interval The length of time we wait for a silence interval before we go ahead and initiate processing without one. This prevents the data in the cube from getting too old, while waiting for the data changes to die down.

  • Drop Outdated Cache Latency This is the maximum amount of time we permit outdated aggregates to live in proactive cache while we are waiting for processing to complete. When this time period is reached, the cache is dropped and querying is directed to the relational data source.

  • Update Cache Periodically This is a schedule for updating the proactive cache, even if we have not received a notification that it is out-of-date. This is often used when no notification mechanism is in place.
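
As an illustration of the polling approach mentioned above, the polling query can be as simple as the following T-SQL. The LastUpdated column is hypothetical; it is not part of the MaxMinManufacturingDM schema used in this book:

      -- Returns a single value that changes whenever rows are inserted
      -- or updated; deleted rows would still go undetected.
      SELECT MAX(LastUpdated) FROM ManufacturingFact;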

Let's look at each of the predefined settings in a bit more detail.

Real-Time ROLAP All detail data and aggregates are queried directly from the relational data source. No notification is necessary. No proactive caching is used. This may result in slow query performance, but data is always current.

This setting is best for data that is changing frequently, leaving no time for cube processing, but which must always be up-to-date.

Real-Time HOLAP Detail data remains in the relational data source. Aggregates are in multidimensional storage. When Analysis Services is notified that the aggregates are out-of-date, it processes the cube. It does not wait for a silence interval. While the aggregates are out-of-date or being processed, queries are sent directly to the relational data source. No proactive cache is used. This provides better performance for queries when the aggregates are up-to-date, but reverts to slow performance while processing.

This setting is best for data that is also changing frequently, but provides some intervals for processing.

Low-Latency MOLAP Detail data and aggregates are in multidimensional storage. When Analysis Services is notified that the aggregates are out-of-date, it waits for a silence interval of ten seconds before beginning processing. It uses a silence override interval of ten minutes. While the cube is processing, queries are sent to a proactive cache. If processing takes longer than 30 minutes, the proactive cache is dropped and queries are sent directly to the relational data source. This provides fast query response unless processing takes longer than 30 minutes. Maximum latency is 30 minutes.

This setting is best in situations where query performance is important, but data must remain fairly current.

Medium-Latency MOLAP Detail data and aggregates are in multidimensional storage. When Analysis Services is notified that the aggregates are out-of-date, it waits for a silence interval of ten seconds before it starts processing. It uses a silence override interval of ten minutes. While the cube is processing, queries are sent to a proactive cache. If processing takes longer than four hours, the proactive cache is dropped and queries are sent directly to the relational data source. This provides fast query response unless processing takes longer than four hours. Maximum latency is four hours.

This setting is best in situations where query performance is important and a bit more latency can be tolerated.

Automatic MOLAP Detail data and aggregates are in multidimensional storage. When Analysis Services is notified that the aggregates are out-of-date, it waits for a silence interval of ten seconds before it starts processing. It uses a silence override interval of ten minutes. While the cube is processing, queries are sent to a proactive cache. Proactive cache is not dropped, no matter how long processing takes. This provides fast query response at all times, but it can lead to a large latency if processing is long-running.

This setting is best in situations where query performance is the most important factor and a potentially large latency can be tolerated.

Scheduled MOLAP Detail data and aggregates are in multidimensional storage. Analysis Services does not receive notification of data source changes. Instead, it processes automatically every 24 hours. While the cube is processing, queries are sent to a proactive cache. Proactive cache is not dropped, no matter how long processing takes. This provides fast query response at all times, but it has a maximum latency of 24 hours, plus the time required for processing.

This setting is typically used in situations where a notification mechanism is not available or where data updates occur nightly.

MOLAP Detail data and aggregates are in multidimensional storage. Analysis Services does not receive notification of data source changes. Instead, processing is initiated manually by an administrator. No proactive caching is used, so queries cannot be run while processing is occurring.

This setting is typically used in situations where data is changing rarely or for development and testing environments.

Learn By Doing—Setting Storage Options for a Cube Partition

Features Highlighted
  • Setting storage options for a cube

  • Enabling SQL Notification when using Named Queries

Business Need Before we put the Max Min Manufacturing DM cube into production, we need to revise the storage settings, so it keeps itself up-to-date without having to be manually processed. The vice president of production at Maximum Miniatures has asked that this cube have a maximum latency of four hours.

Steps
  1. Open the Business Intelligence Development Studio.

  2. Open the MaxMinManufacturingDM project.

  3. Double-click the entry for Max Min Manufacturing DM.cube in the Solution Explorer window. The Cube Design tab appears.

  4. Select the Partitions tab.

  5. Select the Manufacturing Fact 2003 partition.

  6. Click the Storage Settings link or the Storage Settings button in the Partitions tab toolbar. The Partition Storage Settings - Manufacturing Fact 2003 dialog box appears.

  7. Move the slider to the Medium-latency MOLAP setting. Note the warning message at the bottom of the screen telling us that the default notification method does not work with a Named Query as a data source.

  8. Click the Options button. The Storage Options dialog box appears as shown in Figure 9-34.

  9. Select the Notifications tab.

  10. Check Specify Tracking Tables. This lets us specify a table whose changes cause changes to our Named Queries. In other words, when this table changes, the data in our Named Queries change and the cube needs to be processed.

  11. Click the ellipsis (…) button to the right. The Relational Objects dialog box appears.

  12. Check the box next to the ManufacturingFact table. Changes to this table cause the SQL Server relational database engine to notify Analysis Services that the cube must be processed.

  13. Click OK to exit the Relational Objects dialog box. The Notifications tab of the Storage Options dialog box should appear as shown in Figure 9-35.

  14. Click OK to exit the Storage Options dialog box. Click OK to exit the Partition Storage Settings dialog box.

  15. Repeat Step 5 through Step 14 for the Manufacturing Fact 2004 partition and the Manufacturing Fact Current partition.

  16. Select the Save All button from the toolbar.

  17. Select Build | Deploy MaxMinManufacturingDM from the Main menu. The cube is deployed to the server and processed.

  18. Now let's test our notification mechanism. Select the Browser tab on the Cube Design tab.

  19. Expand the Measures entry, drag the Total Products calculated member, and then drop it on Drop Totals or Detail Fields Here. The total should be 10,296,577.

  20. Open the SQL Server Management Studio (leave the Business Intelligence Development Studio open as well) and connect to the SQL Server database engine hosting the MaxMinManufacturingDM relational database.

  21. Expand the items in the Object Explorer window until you can see the ManufacturingFact table in the MaxMinManufacturingDM database.

  22. Right-click the ManufacturingFact table and select Open Table from the Context menu.

  23. Change the AcceptedProducts field at the top of the grid from 3875 to 4875.

  24. Move your cursor off of the modified row and wait about 20 seconds.

  25. Return to the Business Intelligence Development Studio.

  26. Click the Refresh button in the Browser tab toolbar. If everything is working properly, the total should change to 10,297,577.

  27. Go back to the SQL Server Management Studio and change the AcceptedProducts field back to 3875.

Figure 9-34: The General tab of the Storage Options dialog box

Figure 9-35: The Notifications tab of the Storage Options dialog box

Aggregation Design

Previously, we discussed preprocessing all of the aggregates in a cube and putting the results in multidimensional storage. This is a bit of an overstatement. In fact, Analysis Services determines a pattern of aggregates within the partition that strikes a balance between storage requirements and query speed. This is called an aggregation design. This process determines key aggregates, and then calculates and stores them. Other aggregations are calculated from the key aggregates at the time a query is submitted.

The aggregation design is created by the Aggregation Design Wizard. (No big surprise there.) This wizard is launched by clicking the Design Aggregations button on the Partitions tab toolbar.

After the Welcome page, the wizard displays a Specify Storage and Caching Options page. This page looks exactly like the Storage Options dialog box we worked with in the previous sections. It should be obvious that storage options must be determined before the most appropriate aggregation design can be created.

The next page of the wizard is the Specify Object Counts page shown in Figure 9-36. When the Count button is clicked, the wizard counts the number of members in each of the dimensions, including the fact dimension. If you are working with a large cube and do not want to wait for the wizard to do all of this counting, you can enter the member counts manually.

Figure 9-36: The Specify Object Counts page of the Aggregation Design Wizard

The following page is the Set Aggregation Options page. This is shown in Figure 9-37. On this page, we tell the wizard how to optimize the aggregation design. We can specify a maximum amount of storage we want to allow for aggregates, specify a maximum performance gain we require, or choose not to design aggregations at all. A fourth option allows the optimization process to run until we manually tell it to stop. With this last option, the optimizer continues running until it has the best solution or until you get tired of waiting for it and stop it manually. As the optimizer runs, it shows you a graph of its progress.

Figure 9-37: The Set Aggregation Options page of the Aggregation Design Wizard

The last page of the wizard enables us to either deploy and process the changes immediately or save them for later processing. Once the aggregation design has been completed, the Partitions tab displays the percentage of aggregations being preprocessed for each partition. This is shown in Figure 9-38.

Figure 9-38: The Partitions tab with aggregation design completed

Be sure to complete the aggregation design before putting your cubes into production to insure you get the query performance you expect.

Note 

In addition to the optimization provided by the aggregation design, there is also a Usage-Based Optimization Wizard available from the Partitions tab.

Writeback

Many times, while a user is analyzing data in a cube, they discover data that needs to be adjusted for one reason or another. It may be too time-consuming to make the change in the data source, and then wait for it to be processed into the cube. Also, in some situations, it may not be possible, for business reasons, to make the adjustment in the data source. This is where writeback comes in.

Writeback enables us to make adjusting entries to the cube data. These entries are stored in a special table that is added to the data source. This table then shows up as an additional partition in the measure group. The adjusting entries are incorporated into queries from this measure group in the same way that data from all of the other partitions is combined.
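
Clients typically submit these adjusting entries with an MDX UPDATE CUBE statement. The following is only a sketch against this book's cube; the date member reference is an assumption, and the entry takes effect when the client commits its transaction:

      UPDATE CUBE [Max Min Manufacturing DM]
      SET ( [Measures].[Accepted Products],
            [Dim Time].[DateOfManufacture].[1/2/2003] ) = 4875
      -- Spread the value equally across the leaf cells making up
      -- this tuple if it is above leaf granularity.
      USE_EQUAL_ALLOCATION;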

Writeback is enabled by right-clicking on a partition on the Partitions tab and selecting Writeback Settings on the Context menu. This displays the Enable Writeback dialog box shown in Figure 9-39. We can specify the name of the writeback table being created and the data source it is created in. Once writeback is enabled, the writeback partition can be seen on the Partitions tab, as shown in Figure 9-40. Clicking Writeback Settings on the Context menu when writeback is enabled lets you disable this feature.

Figure 9-39: The Enable Writeback dialog box

Figure 9-40: The Partitions tab with a writeback partition

As with a number of other cube features, the client software used to browse and analyze the cube must support writeback for this feature to be put to use. Microsoft's Office tools (Excel 2000, Excel XP, and Excel 2003) all support this writeback function.

Perspectives

In real life, many objects look different, depending on how you look at them—depending on your perspective. In the same way, we can define different perspectives, so our cubes look different to different users. By using perspectives, we can provide our users with what they need and not overwhelm them with things they don't.

Creating Perspectives

Perspectives are created on the Perspectives tab. This tab lists all of the items in our cube, as shown in Figure 9-41. We can then determine which items are visible in each perspective.

Figure 9-41: The Perspectives tab

New perspectives are added by clicking the New Perspective button in the Perspectives tab toolbar. Each perspective is given a unique name to identify it. When the perspective is first created, it includes all of the items in the cube. We can remove (uncheck) the items that are not needed or should not be viewed by people who are going to use that perspective to access the cube.

In Figure 9-41, we created two perspectives in our Max Min Manufacturing DM cube. The Inventory Monitoring perspective is for users who will be looking at the Inventory Fact measures. They do not need access to the Manufacturing Fact measures or other items that apply only to those measures. The Highlevel Mfg Info perspective is for users who are only interested in the KPI and the two calculated members we have created in our cube. Therefore, only those items and the dimensions are included in this perspective.

Translations

We live in a global society. More and more, we are required to work internationally to get things done. Business intelligence is certainly no exception to this trend.

To help this along, Analysis Services provides us with the means to localize the metadata contained in our cubes. This is done using the Translations tab. The Translations tab is shown in Figure 9-42.

Figure 9-42: The Translations tab

Creating Translations

New translations are created by clicking the New Translation button in the Translations tab toolbar. First, we must pick the language for the translation. The Select Language dialog box, shown in Figure 9-43, enables us to select the language and country designation for this translation. Next, we can fill in the appropriate translation for each of the metadata items in the cube.

Figure 9-43: The Select Language dialog box

The translations entered in the Translations tab only apply to metadata in the cubes, not to the data in each of the dimensions. To implement multilingual support for the dimensional members, each dimension must have an attribute for each language supported. These attributes then contain the appropriate translations for each member.



