What About the DBMS?

You are somewhat fortunate in that you are preparing for a Microsoft exam that deals with Microsoft technologies and Microsoft tools. Not every real-world project has this luxury. It is easy to take the system you have modeled and step back to choose between Access, for a small system, and SQL Server, for larger-scale systems. This does not mean conceptual modeling should be done only in a Microsoft environment: ORM is a modeling practice that can be used in any data environment. In fact, ORM can be equally valuable outside database design, although that is obviously not your focus here.

There are many advantages to using ORM in a database environment. Major entities stand out, and those entities become tables. The relationships you have already modeled become constraints in the physical database structure. Because you have done all this planning while drawing the conceptual model, taking the next step to a logical data model, or even a physical data model, is much easier.
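To make the mapping concrete, here is a minimal sketch in Transact-SQL, assuming hypothetical entity types Employee and Department and the fact type "Employee works in Department" (these names are illustrative and are not taken from the case study). Each entity type becomes a table, and the modeled relationship becomes a foreign key constraint:

    -- Hypothetical entity type: Department
    CREATE TABLE Department (
        DepartmentID   int          NOT NULL PRIMARY KEY,
        DepartmentName nvarchar(50) NOT NULL
    );

    -- Hypothetical entity type: Employee. The fact type
    -- "Employee works in Department" becomes a foreign key constraint.
    CREATE TABLE Employee (
        EmployeeID   int          NOT NULL PRIMARY KEY,
        EmployeeName nvarchar(50) NOT NULL,
        DepartmentID int          NOT NULL
            REFERENCES Department (DepartmentID)
    );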

You can fine-tune the ORM diagram still more, even after completing the final design. At this level, however, you might not be interested in eliminating redundancy. In fact, introducing redundant data can provide better performance. You can identify candidates for this denormalization by looking at the number of interconnected entities needed to produce any single result in the system.

A calculated value might require data from a series of tables. Performing the calculation once and storing the result, rather than recomputing it from those tables on every request, improves performance in most cases. You must choose at this juncture what is more important: speed or the resources consumed by the extra data storage. In general, resources are cheap and time is money. Denormalization of data is discussed in Chapter 8, so I won't spend any more time on the topic in this chapter.
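As a hedged illustration (the table and column names here are invented, not from the case study), an order total that would otherwise be summed from a detail table on every read can be stored in the header row and refreshed whenever the details change:

    -- OrderTotal is derivable from OrderDetail, but storing it
    -- avoids a join and a SUM on every read (denormalization).
    CREATE TABLE OrderHeader (
        OrderID    int   NOT NULL PRIMARY KEY,
        OrderTotal money NOT NULL DEFAULT 0  -- redundant, stored for speed
    );

    CREATE TABLE OrderDetail (
        OrderID   int   NOT NULL REFERENCES OrderHeader (OrderID),
        LineNo    int   NOT NULL,
        LineTotal money NOT NULL,
        PRIMARY KEY (OrderID, LineNo)
    );

    -- Any code that modifies detail rows must refresh the stored total.
    DECLARE @OrderID int;
    SET @OrderID = 1;  -- the order whose details changed

    UPDATE OrderHeader
    SET OrderTotal = ISNULL((SELECT SUM(LineTotal)
                             FROM OrderDetail
                             WHERE OrderDetail.OrderID = OrderHeader.OrderID), 0)
    WHERE OrderID = @OrderID;

The trade-off described above is visible here: reads become cheaper, but every write to OrderDetail now carries the extra cost of keeping OrderTotal current.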

Converting an ORM conceptual diagram into a relational database is relatively simple. Tools such as Microsoft Visio can perform this task by using a straightforward algorithm that groups fact types into normalized tables. Without such a tool, however, any experienced database designer could still perform this task easily.

If the fact entities have been laid out properly, the data schema mapping should be error free and contain no unplanned redundancy. Planned redundancy (that is, denormalization) should be minimized in the initial stages of the database layout; it can easily be added later. Each elementary fact type maps to exactly one table. Per the modeling process, fact types that map to the same table have uniqueness constraints based on the same attribute, which usually becomes that table's primary key.

Several key structures are used in the database-mapping algorithm. You can obtain the available keys, either simple or composite, from the uniqueness constraints defined for each role. Other elements, such as compositely identified object types, are either nested object types or types whose primary reference is based on an external uniqueness constraint. Either way, primary key and foreign key relationships should be quite visible in an ORM diagram. Each fact type that has a composite uniqueness constraint is mapped to a table of its own, with the primary key based on that composite key. Fact types that share a simple key are placed into the same table, with the primary key based on that identifier.
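To make the two cases concrete, here is a minimal sketch with invented fact types (again not from the case study). The functional fact types "Employee has Name" and "Employee has Hiredate" share the simple key EmployeeID and group into one table, whereas the many-to-many fact type "Employee speaks Language" carries a composite uniqueness constraint and maps to a table of its own:

    -- Fact types keyed on the same simple identifier (EmployeeID)
    -- are grouped into a single table.
    CREATE TABLE Employee (
        EmployeeID   int          NOT NULL PRIMARY KEY,
        EmployeeName nvarchar(50) NOT NULL,  -- Employee has Name
        HireDate     datetime     NOT NULL   -- Employee has Hiredate
    );

    -- A many-to-many fact type has a uniqueness constraint spanning
    -- both roles, so it maps to its own table with a composite key.
    CREATE TABLE EmployeeLanguage (
        EmployeeID   int     NOT NULL REFERENCES Employee (EmployeeID),
        LanguageCode char(2) NOT NULL,
        PRIMARY KEY (EmployeeID, LanguageCode)
    );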

Most of the constraints you have defined correspond to data types, data rules attached to fields, primary/foreign key constraints, and other field- and table-level constraints defined in the table or database structure. Some more complex constraints can be enforced by triggers, in which data values are checked with more involved logic. A small percentage of the constraints cannot be expressed in the database at all and must be handled through stored procedures or through application-level code outside the physical database.
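As an example of this split (a sketch using hypothetical names), a simple value rule maps directly to a CHECK constraint, while a rule that compares values across tables needs trigger logic:

    -- A simple value constraint maps to a CHECK constraint.
    CREATE TABLE Product (
        ProductID int   NOT NULL PRIMARY KEY,
        UnitPrice money NOT NULL CHECK (UnitPrice >= 0)
    );

    CREATE TABLE SalePrice (
        ProductID int   NOT NULL PRIMARY KEY
            REFERENCES Product (ProductID),
        SalePrice money NOT NULL
    );
    GO

    -- A more involved rule (a sale price may never exceed the
    -- product's stored unit price) requires a trigger.
    CREATE TRIGGER trg_SalePrice_Check ON SalePrice
    AFTER INSERT, UPDATE
    AS
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN Product p ON p.ProductID = i.ProductID
               WHERE i.SalePrice > p.UnitPrice)
    BEGIN
        RAISERROR ('Sale price exceeds unit price.', 16, 1);
        ROLLBACK TRANSACTION;
    END;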

Most databases have limited capability to base data values on comparisons with data outside the physical database itself. This limitation causes two problems in the Billington system: supplier identifiers entered into the product table and employee identifiers entered into the call table must both be validated against external data sources. Stored procedures or application code in the middle tier can handle this validation, but primary/foreign key relationships are not possible in these cases.
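A hedged sketch of the stored-procedure approach follows. The linked server name ExternalSrv, the external Supplier table, and the Product columns are all assumptions made for illustration; the same check could equally live in middle-tier code:

    -- Validate a supplier identifier against an external source before
    -- inserting a product row. Assumes a Product table with ProductID
    -- and SupplierID columns and a linked server named ExternalSrv.
    CREATE PROCEDURE usp_InsertProduct
        @ProductID  int,
        @SupplierID int
    AS
    BEGIN
        IF NOT EXISTS (SELECT 1
                       FROM ExternalSrv.SupplierDb.dbo.Supplier
                       WHERE SupplierID = @SupplierID)
        BEGIN
            RAISERROR ('Unknown supplier identifier.', 16, 1);
            RETURN;
        END;

        INSERT INTO Product (ProductID, SupplierID)
        VALUES (@ProductID, @SupplierID);
    END;

Because the check runs in procedural code rather than as a declared constraint, the database itself cannot guarantee the relationship the way a foreign key would.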


