Introducing Data Mining with SQL Server


Although SQL Server 7.0 offered Online Analytical Processing (OLAP) as OLAP Services, it was not until the release of SQL Server 2000 that data-mining algorithms were included. Analysis Services comes bundled with SQL Server as a separate install. It allows developers to build complex OLAP cubes and then utilize two popular data-mining algorithms to process data within the cubes.

Of course, it is not necessary to build OLAP cubes in order to utilize data-mining techniques. Analysis Services also allows mining models to be built against one or more tables from a relational database. This is a big departure from traditional data-mining methodologies. It means that users can access data-mining predictions without the need for OLAP services.

Data mining involves the gathering of knowledge to facilitate better decision-making. It is meant to empower organizations to learn from their experiences, or in this case their historical data, in order to form proactive and successful business strategies. It does not replace decision-makers, but instead provides them with a useful and important tool.

The introduction of data-mining algorithms with SQL Server represents an important step toward making data mining accessible to more companies. The built-in tools allow users to visually create mining models and then train those models with historical data from relational databases.

Data-Mining Algorithms

Data mining with Analysis Services is accomplished using one of two popular mining algorithms: decision trees and clustering. These algorithms are used to find meaningful patterns in a group of data and then make predictions about the data. Table 5.1 lists the key terms related to data mining with Analysis Services.

Table 5.1. Key terms related to data mining with Analysis Services.

Case: The data and relationships that represent a single object you wish to analyze, for example, a product and all its attributes, such as Product Name and Unit Price. A case is not necessarily equivalent to a single row in a relational table, because attributes can span multiple related tables; the product case could include all the order detail records for a single product.

Case Set: A collection of related cases. A case set represents the way the data is viewed, not necessarily the data itself. One case set involving products could focus on the product, whereas another might focus on the purchase detail for the same product.

Clustering: One of the two popular algorithms used by Analysis Services to mine data. Clustering involves the classification of data into distinct groups. As opposed to the other algorithm, decision trees, clustering does not require an outcome variable.

Cubes: Multidimensional data structures built from one or more tables in a relational database. Cubes can be the input for a data-mining model, but with Analysis Services the input can also be one or more actual relational tables.

Decision Trees: One of the two popular algorithms used by Analysis Services to mine data. The decision trees algorithm involves the creation of a tree that allows the user to map a path to a successful outcome.

Testing Dataset: A portion of the historical data that can be used to validate the predictions of a trained mining model. The model is trained using a training dataset that is representative of all the historical data. By validating against a separate testing dataset, the developer can ensure that the mining model was designed correctly and can be trusted to make useful predictions.

Training Dataset: A portion of the historical data that is representative of all the input data. It is important that the training dataset represent the input variables in proportions similar to their occurrences in the entire dataset. In the case of Savings Mart, we would want the training dataset to include all the stores that were open during the same time period, so that no bias is unintentionally introduced.
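The training/testing split described above can be sketched in a few lines of Python. The helper below and its 70/30 default are illustrative assumptions for this chapter's discussion, not part of Analysis Services:

```python
import random

def split_cases(cases, test_fraction=0.3, seed=42):
    """Shuffle the case set and hold out a portion for testing.

    Returns (training_dataset, testing_dataset).
    """
    rng = random.Random(seed)      # fixed seed so the split is repeatable
    shuffled = list(cases)
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_fraction)
    cut = len(shuffled) - n_test
    return shuffled[:cut], shuffled[cut:]

# Example: 10 historical cases, 70/30 split
train, test = split_cases(list(range(10)))
print(len(train), len(test))  # 7 3
```

A stratified split (sampling each store or outcome class proportionally) would better honor the proportionality requirement noted above; the random shuffle here is the simplest starting point.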


Decision Trees

Decision trees are useful for predicting exact outcomes. Applying the decision trees algorithm to a training dataset results in the formation of a tree that allows the user to map a path to a successful outcome. At every node along the tree, the user answers a question (or makes a "decision"), such as "years applicant has been at current job (0 to 1, 1 to 5, or more than 5 years)."

The decision trees algorithm would be useful for a bank that wants to ascertain the characteristics of good customers. In this case, the predicted outcome is whether or not the applicant represents a bad credit risk. The outcome of a decision tree may be a Yes/No result (applicant is/is not a bad credit risk) or a list of numeric values, with each value assigned a probability. We will see the latter form of outcome later in this chapter.

The training dataset consists of the historical data collected from past loans. Attributes that affect credit risk might include the customer's educational level, the number of kids the customer has, or the total household income. Each split on the tree represents a decision that influences the final predicted variable. For example, a customer who graduated from high school may be more likely to pay back the loan. The variable used in the first split is considered the most significant factor. So if educational level is in the first split, it is the factor that most influences credit risk.
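To illustrate how the most significant split can be chosen, the sketch below ranks attributes by information gain over a handful of hypothetical loan cases. This is a common way decision-tree builders select the first split, not the actual Analysis Services implementation, and the attribute values are invented for the example:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of the outcome labels (e.g., 'good'/'bad' risk)."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(cases, attribute, outcome="risk"):
    """Reduction in outcome entropy achieved by splitting on one attribute."""
    base = entropy([c[outcome] for c in cases])
    remainder = 0.0
    for value in {c[attribute] for c in cases}:
        subset = [c[outcome] for c in cases if c[attribute] == value]
        remainder += len(subset) / len(cases) * entropy(subset)
    return base - remainder

# Hypothetical loan histories: education level, income band, and credit risk
cases = [
    {"education": "high school", "income": "low",  "risk": "bad"},
    {"education": "high school", "income": "high", "risk": "good"},
    {"education": "college",     "income": "low",  "risk": "good"},
    {"education": "college",     "income": "high", "risk": "good"},
    {"education": "none",        "income": "low",  "risk": "bad"},
    {"education": "none",        "income": "high", "risk": "bad"},
]

# The attribute with the highest gain becomes the first split in the tree.
best = max(["education", "income"], key=lambda a: information_gain(cases, a))
print(best)  # education
```

In this toy dataset, education separates good and bad risks far better than income, so it would appear in the first split, mirroring the educational-level example above.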

Clustering

Clustering is different from decision trees in that it involves grouping data into meaningful clusters with no specific outcome. It goes through a looped process whereby it reevaluates each cluster against all the other clusters looking for patterns in the data. This algorithm is useful when a large database with hundreds of attributes is first evaluated. The clustering process may uncover a relationship between data items that was never suspected. In the case of the bank that wants to determine credit risk, clustering might be used to identify groups of similar customers. It could reveal that certain customer attributes are more meaningful than originally thought. The attributes identified in this process could then be used to build a mining model with decision trees.
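The reevaluate-and-regroup loop can be sketched with k-means, one simple clustering approach. The Microsoft Clustering algorithm in Analysis Services differs in its details; this is only a minimal illustration of the idea, with invented customer data:

```python
from math import dist  # Python 3.8+

def k_means(points, k=2, rounds=10):
    """Minimal k-means sketch: assign each point to its nearest centroid,
    recompute the centroids, and repeat until the groups settle."""
    centroids = points[:k]  # naive initialization: the first k points
    clusters = []
    for _ in range(rounds):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centroids[i]
            for i, g in enumerate(clusters)
        ]
    return clusters

# Two obvious groups of customers by (age, income in $1000s)
points = [(25, 30), (27, 32), (26, 28), (60, 90), (62, 95), (61, 88)]
clusters = k_means(points)
print(clusters)
```

Note that no outcome variable appears anywhere: the groups emerge purely from similarity between cases, which is exactly why clustering is useful for a first look at a wide, unfamiliar database.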

OLE DB for Data-Mining Specification

Analysis Services is based on the OLE DB for Data Mining (OLE DB for DM) specification. OLE DB for DM, an extension of OLE DB, was developed by the Data Mining Group at Microsoft Research. It includes an Application Programming Interface (API) that exposes data-mining functionality. This allows third-party providers to implement their own data-mining algorithms. These algorithms can then be made available through the Analysis Services Manager application when building new mining models.
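As a sketch of the syntax the specification defines, a mining model for the bank scenario might be declared as follows; the model and column names here are hypothetical:

```sql
CREATE MINING MODEL [Credit Risk]
(
    [Customer Id]      LONG   KEY,
    [Education Level]  TEXT   DISCRETE,
    [Household Income] DOUBLE CONTINUOUS,
    [Bad Risk]         TEXT   DISCRETE PREDICT
)
USING Microsoft_Decision_Trees
```

The PREDICT flag marks the outcome column, and the USING clause names the algorithm; substituting Microsoft_Clustering would build a clustering model against the same case definition.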

Tip

Readers interested in learning more about the OLE DB for Data Mining Specification can download documentation from the Microsoft Web site at http://www.microsoft.com/downloads/details.aspx?FamilyID=01005f92-dba1-4fa4-8ba0-af6a19d30217&displaylang=en.




Building Intelligent .NET Applications(c) Agents, Data Mining, Rule-Based Systems, and Speech Processing
Year: 2005