When you send encrypted messages, certain requirements must be met to make sure the data is secure. You can use encryption in two ways: you can encrypt the message itself, or you can have the queuing service encrypt messages for you through the MessageQueue class.
To encrypt a message, you can use the DestinationSymmetricKey property of a message and then send this message to the queue. The DestinationSymmetricKey property represents the symmetric key used to encrypt application-encrypted messages.
The EncryptionRequired property of MessageQueue determines whether a queue accepts nonencrypted messages, encrypted messages, or both. Its value comes from the EncryptionRequired enumeration, which has three members: Body, None, and Optional. The Body value makes the queue accept only private (encrypted) messages, the None value makes it accept only nonencrypted messages, and the Optional value lets it accept both.
The EncryptionAlgorithm property of the Message class specifies the type of encryption applied to a message. Its value comes from the EncryptionAlgorithm enumeration, which has three members: None, Rc2, and Rc4. None means no encryption, Rc2 is a 64-bit block cipher, and Rc4 is a stream cipher. A block cipher encrypts one block of data at a time, and a stream cipher encrypts the data bit by bit as it flows through.
Listing 21-10 sends encrypted messages. As you can see, it sets the UseEncryption property of a Message object to activate the encryption.
Listing 21-10: Sending Encrypted Messages
Try
    Dim msg As System.Messaging.Message = _
        New System.Messaging.Message(titleTextBox.Text)
    MessageQueue1.EncryptionRequired = _
        Messaging.EncryptionRequired.Body
    msg.UseEncryption = True
    If (rc4RadioBtn.Checked) Then
        msg.EncryptionAlgorithm = Messaging.EncryptionAlgorithm.Rc4
    ElseIf (rc2RadioBtn.Checked) Then
        msg.EncryptionAlgorithm = Messaging.EncryptionAlgorithm.Rc2
    Else
        msg.EncryptionAlgorithm = Messaging.EncryptionAlgorithm.None
    End If
    msg.Label = titleTextBox.Text
    msg.Body = bodyTextBox.Text
    MessageQueue1.Send(msg)
Catch exp As Exception
    MessageBox.Show(exp.Message)
End Try
In this chapter, we covered some basic programming of messaging services. The System.Messaging namespace offers much more than we're able to discuss in this chapter. Still, this chapter should give you a quick jump start on MSMQ using .NET.
We also discussed the classes provided by the .NET Framework Library to work with messaging services. We discussed how to create, delete, and manage queues, and how to send and receive normal and transactional messages. Finally, we discussed the security, encryption, and authentication issues involved in messaging.
Ever since the early days of relational databases, optimizing database performance has been as much an art as a science. Even today, you'll often hear of new "secrets" through professional relationships that take on the air of furtive back-alley exchanges more reminiscent of a le Carré novel.
The following sections provide some of the "back-alley" secrets we've picked up along the way.
Properly selecting and managing your indexes can have the single biggest impact on performance in any database, and Microsoft SQL Server is no different. Indexes are like small, optimized tables that organize the data you query most often so that rows in the main table can be found quickly.
To manage the indexes on a table, simply right-click the table in Enterprise Manager and choose All Tasks Ø Manage Indexes from the context menu (see Figure 22-1).
Figure 22-1: The Manage Indexes menu in Enterprise Manager
From the dialog box that appears, you can manage any of the tables in any of the databases running in the current instance of Microsoft SQL Server (see Figure 22-2). You can change databases by choosing a different database from the Database drop-down list, or you can change the table you're managing your indexes on by choosing another table from the Table drop-down list.
Figure 22-2: The Manage Indexes dialog box
Once you've selected the database and table whose indexes you want to manage, the indexes that already exist on that table appear in the Existing Indexes area.
To add a new index to the table, click the New button at the bottom of this dialog box. This opens the Create New Index dialog box (see Figure 22-3). Having a good understanding of this dialog box and its implications is critical to creating a top-performing application. Determining which options to choose here, and which columns deserve an index at all, is the subject of the following sections.
Figure 22-3: The Create New Index dialog box
Checking the Unique Values box ensures that no two rows can hold the same value in the indexed column, or in the indexed combination of columns.
The real value of this option comes into play when dealing with a multiple-column index that, when combined, produces a unique value. For example, a complicated join table may not have any single column that represents a unique value, but a combination of two of the columns is unique.
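As a minimal sketch (the VideoTapeCategory join table and its columns are hypothetical, not part of the book's schema), a multiple-column unique index might look like this:

-- Neither column is unique on its own, but the combination is
CREATE UNIQUE INDEX IX_VideoTapeCategory
ON VideoTapeCategory (VideoTapeID, CategoryID)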
The Pad Index and Fill Factor options work together. The Fill Factor value controls how full each leaf-level page of the index is when the index is created, leaving free space for rows that are inserted later.
The Pad Index option applies that same fill factor to the intermediate-level pages of the index as well.
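As a rough sketch (the index name is invented, and the 80 percent figure is only an example), the same settings expressed in Transact-SQL look like this:

-- Leave 20 percent free space on leaf-level pages, and apply the same
-- fill factor to the intermediate index pages
CREATE INDEX IX_VideoTape_CategoryID
ON VideoTape (CategoryID)
WITH PAD_INDEX, FILLFACTOR = 80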
The Clustered Index option internally sorts the data in a way that is optimal for a number of queries, because the rows themselves are stored in the order of the index.
Clustered indexes don't help on columns that contain data with little variability, such as bit columns or columns that join to something finite, such as values representing states. Unless the country goes on a huge annexation binge, that number will stay at 50.
You can have only one clustered index on any given table, so think carefully about where you want to take advantage of the potential performance boost. This may sound trivial; however, if you closely examine a table, you'll see a number of candidates, and you'll have to determine which ones your queries will use the most.
The Do Not Recompute Statistics option comes with a warning that it is not recommended. In almost all circumstances, heed this warning or you could get some unexpected results. The only situation where you should use this option is when a table is read-only and will never be modified again.
Knowing which columns to index is as important as knowing what options to use when indexing them. This is a good place to start: Any column you frequently query is probably a good candidate for an index.
The best candidates for indexes are usually the primary keys of your tables, because when you're doing a multiple-table join, you'll usually join on the primary key. If most of your queries use a particular column, it's probably optimal to make that column the clustered index. It's never necessary to create a separate index for your primary keys, though, because SQL Server builds one automatically when the primary key constraint is defined.
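You can confirm this with sp_helpindex, which lists the indexes that already exist on a table, including the one created for the primary key constraint:

-- List the indexes on the VideoTape table
EXEC sp_helpindex 'VideoTape'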
SQL Server stores data efficiently. Its overall goal is to make sure that when a row is inserted, it can be placed physically between the rows above and below it in terms of the primary key's sort order.
If your table uses an auto-numbered primary key, your records will automatically be in the correct order as you insert them because of the auto numbering that SQL Server uses to store your data. In this case, it's not beneficial to use table padding.
Also, when choosing which columns to index, remember that you only get one clustered index. The clustered index groups and orders the data so that SQL Server can quickly find what's being queried.
Also, don't waste your clustered index on a column with a small, finite set of values. This includes bit columns and columns such as the state codes mentioned earlier.
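If, say, Title turned out to be the column most of your queries filter and sort on (purely an assumption for illustration), the clustered index might look like this:

-- Only one clustered index is allowed per table, so spend it on the
-- column your queries use the most
CREATE CLUSTERED INDEX CIX_VideoTape_Title
ON VideoTape (Title)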
You can learn quite a few things about how to optimize your database by using the SQL Server Profiler. The SQL Server Profiler allows you to see what the database is doing, when it is doing it, and how long it takes.
You can launch the SQL Server Profiler from the Microsoft SQL Server submenu in the Start menu or from the Tools menu of Enterprise Manager. Once you've launched the SQL Server Profiler, you can start a new trace by choosing File Ø New Ø Trace. This opens the Trace Properties dialog box (see Figure 22-4).
Figure 22-4: The Trace Properties dialog box
Once you've launched the Trace Properties dialog box, you'll see a number of important options. If you intend to run the optimizer (which can automatically perform a number of optimizations on your database), you'll need to save the output to a trace file. You can do this by checking the Capture to File box and then choosing a file by clicking the folder icon to the right of the box.
The Events tab allows you to manually choose the events you'd like to include in the trace. These events include table scans, lock activity, stored procedure execution and recompilation, and the execution of individual T-SQL statements.
The Data Columns tab provides you with the ability to choose the kind of information you want captured when each of the selected events occurs. Some of the options included are the Application Name, NT User Name, Duration, CPU, Reads, and Writes columns.
The Filters tab allows you to choose the items you would like excluded from your trace.
Once you've started your trace, you'll be able to watch the activity on your database in real time and understand how long each query and stored procedure takes to execute (see Figure 22-5).
Figure 22-5: The SQL Server Profiler with a trace running
If you're watching a trace on a live production system, you should be able to quickly identify any stored procedures that are taking a long time to run by looking at the Duration column. Any stored procedure taking longer than 50 milliseconds or so deserves a closer look.
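If you saved the trace to a file, you can also load it back into the Query Analyzer with the fn_trace_gettable function and hunt for slow statements there. The file path below is only a placeholder:

-- Pull a saved trace file into a result set and sort by duration
SELECT TextData, Duration, CPU, Reads, Writes
FROM ::fn_trace_gettable('C:\Traces\production.trc', default)
WHERE Duration > 50
ORDER BY Duration DESC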
The SQL Server Profiler also provides an outstanding tool that can automatically optimize your database for you: the Index Tuning Wizard.
Foremost, it's important to keep in mind that the Index Tuning Wizard does nothing for optimizing the database for queries contained in stored procedures. This is a bit of a problem given that it's a good practice to keep all of your database queries wrapped in stored procedures for optimal performance.
For more information about SQL Server performance, see Brad M. McGehee's site at www.sql-server-performance.com . You can also refer to William R. Vaughn's books, notably ADO.NET and ADO Examples and Best Practices for VB Programmers, Second Edition (Apress, 2002) or the classic Hitchhiker's Guide to Visual Basic and SQL Server (Microsoft Press, 1998), which discusses Visual Basic 6, SQL Server 7, and SQL Server 2000, Microsoft Data Engine (MSDE), and ADO 2.0.
Fortunately, there are some workarounds to this particular problem. You can copy and paste queries from your stored procedures into the Query Analyzer and have it assess the load you're placing on the database.
Once you've pasted your query into the Query Analyzer, choose Query Ø Perform Index Analysis, and the Query Analyzer will use the SQL Server Profiler to show you the best course of action (see Figure 22-6).
Figure 22-6: The Query Analyzer's recommendations for the current query
If the Query Analyzer has recommendations on what you should do, a second dialog box will pop up and ask if you want it to automatically implement the recommendations. Often, it recommends adding an index to a particular column that is used heavily by the query (see Figure 22-7).
Figure 22-7: The Query Analyzer's recommendation to create an index that would help the query run faster
You can also attach the SQL Server Profiler to run a trace against your production database to identify stored procedures that are taking longer to execute than you would like. Then, launch the wizard and choose Profile the Performance of a Stored Procedure; the tool will test that particular stored procedure and recommend what you can do to optimize its performance.
It's largely a matter of personal preference as to how you want to work to improve the performance of your stored procedures. Although the wizard makes suggestions, you don't have the opportunity to tweak and alter the stored procedure the way you do with the Query Analyzer.
You can tune your Transact-SQL queries and stored procedures for performance in a number of ways. Think of your queries as asking the database a question, and think of the optimizations you make as asking the question in a way that requires less thought from the database. Listing 22-1 shows an inefficient query that you can optimize.
Listing 22-1: The Query from SelectVideosCheckedIn, Not Optimized
SELECT VideoTapeID, Title, Description
FROM VideoTape
WHERE VideoTapeID NOT IN
(
    SELECT VideoTapeID FROM VideoCheckOut as vco
    WHERE VideoTapeID NOT IN
        (SELECT VideoTapeID FROM VideoCheckIn as vci
         WHERE vco.VideoCheckOutID = vci.VideoCheckOutID)
)
The query in Listing 22-1 took 15 seconds to return 42,000 rows from the VideoStore2 database available in the downloadable code for this book. Most of this is the time required to pipe out that many rows to the Query Analyzer. But even after factoring that in, there's a way to shave a second off the query time. Figure 22-8 shows the execution plan of Listing 22-1.
Figure 22-8: The estimated execution plan of the query in Listing 22-1
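One way to get hard numbers for timing comparisons like this, rather than watching the clock, is to turn on the Query Analyzer's statistics output before running the query. This is standard Transact-SQL, sketched here:

-- Report elapsed/CPU time and logical reads for whatever runs next
SET STATISTICS TIME ON
SET STATISTICS IO ON

-- ... run the query from Listing 22-1 here ...

SET STATISTICS IO OFF
SET STATISTICS TIME OFF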
To test the subqueries, SQL Server has to do a comparison to make sure that the VideoTapeID doesn't match at each level of the subquery. To understand what SQL Server is doing to perform this query, it's helpful to be able to see the execution plan.
You can view this in the Query Analyzer by choosing Query Ø Display Estimated Execution Plan. This tool is an especially powerful weapon in the war against inefficient queries.
Next to each item in the execution path is a cost expressed as a percentage of the overall execution time (the percentages always add up to 100). As you follow the execution path and move your mouse over each item, watch for a few key values in the pop-up text. The three most critical are Estimated Number of Executes, Estimated CPU, and Estimated IO.
Estimated Number of Executes lets you know how many times the comparison will be performed. If you can think of a way to keep the logic of your query and still reduce the number of times this comparison must be performed, you can have an enormous impact on the query's overall execution time.
The Estimated CPU and Estimated IO values let you know how much CPU usage and disk IO a single step in the execution of your query or stored procedure will generate.
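If you prefer the same information as text rather than a graphical plan, the Query Analyzer's SHOWPLAN_TEXT setting returns the estimated plan without executing the query. Here's a minimal sketch:

-- While SHOWPLAN_TEXT is on, queries are not executed; SQL Server
-- returns the estimated plan as text instead
SET SHOWPLAN_TEXT ON
GO
SELECT VideoTapeID, Title, Description FROM VideoTape WHERE CategoryID = 5
GO
SET SHOWPLAN_TEXT OFF
GO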
Using the information derived from the estimated execution plan, you can identify a few areas that are ripe for optimization. You can significantly reduce the cost of the table scan, which is taking up 60 percent of the execution time, by adding a narrowing condition to the WHERE clause.
One of the biggest impacts you can have on the speed of your query execution is keeping the result set as small as possible. To this end, you can modify the
stored procedure to include a simple way to reduce the number of rows that must be compared to the subquery. In this case, narrowing the results based upon the CategoryID vastly reduces the number of rows that have to be tested against the subqueries (see Listing 22-2).
Listing 22-2: An Optimized Version of the SelectVideosCheckedIn Query
SELECT VideoTapeID, Title, Description
FROM VideoTape
WHERE CategoryID = 5
  AND VideoTapeID NOT IN
(
    SELECT VideoTapeID FROM VideoCheckOut as vco
    WHERE VideoTapeID NOT IN
        (SELECT VideoTapeID FROM VideoCheckIn as vci
         WHERE vco.VideoCheckOutID = vci.VideoCheckOutID)
)
Another effect of narrowing the results from the VideoTape table before performing the subquery is that the Estimated Number of Executes for the Table Spool step and its related steps drops dramatically (see Figure 22-9).
Figure 22-9: The estimated execution plan of the query in Listing 22-2
A second strategy you can use to reduce the number of rows returned is to use SELECT TOP to reduce the number of rows you expect to be returned. This can provide an enormous performance boost, especially in situations where you don't care precisely what records from the larger result set are required.
An example of where this behavior is beneficial is an unordered queue. If you had a table that tracks tasks for a pool of workers, and any 10 unassigned tasks will do, you could write the query like this:
SELECT TOP 10 TaskID, Name FROM Tasks WHERE Assigned=0
The performance increase comes from two elements. First, the number of rows that have to be piped across the network is significantly less. Second, the query can stop running as soon as it has any 10 records that meet the criteria.
If the query had to return the tasks in some particular order, such as by a due-date column, it would lose the benefit of being able to stop executing as soon as it identified 10 items meeting the criteria. Adding an ORDER BY clause forces the query to execute completely so that it can return the top 10 items in the order you expect.
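For comparison, here's what the ordered variant might look like; the DateDue column is assumed for illustration:

-- The ORDER BY forces the full result set to be evaluated before
-- the top 10 rows can be returned
SELECT TOP 10 TaskID, Name
FROM Tasks
WHERE Assigned = 0
ORDER BY DateDue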
Understanding the execution plans of your queries can help you spot this kind of bottleneck and write queries that let SQL Server do less work.
One of the reasons stored procedures are faster than simply executing SQL directly against the database is that stored procedures are parsed and compiled into a structure that SQL Server can execute directly, without having to build an execution plan from scratch on every call.
A simple way to avoid excessive recompilations is to keep all of your variable and temporary table declarations together in your stored procedure. Some types of declarations, such as the creation of a temporary table, cause the stored procedure to recompile, but if all of those declarations appear together at the top of the procedure, the recompilation happens once rather than each time a new declaration is encountered partway through execution.
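As a rough sketch of what that looks like (the procedure name, temporary table, and date column are invented for illustration), all of the declarations sit together at the top, before any data access:

CREATE PROCEDURE dbo.SelectRecentCheckOuts
AS
    -- All variable and temporary table declarations grouped together
    DECLARE @Cutoff datetime
    CREATE TABLE #Recent (VideoCheckOutID int, CheckOutDate datetime)

    SET @Cutoff = DATEADD(day, -7, GETDATE())

    INSERT INTO #Recent (VideoCheckOutID, CheckOutDate)
    SELECT VideoCheckOutID, CheckOutDate
    FROM dbo.VideoCheckOut
    WHERE CheckOutDate >= @Cutoff

    SELECT VideoCheckOutID, CheckOutDate FROM #Recent

    DROP TABLE #Recent
GO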
Another cause of your stored procedure being frequently recompiled is if the contents of the tables the stored procedure is querying change frequently. If you have a table that is constantly changing, using sp_executesql can keep your stored procedure from recompiling (see Listing 22-3). The downside of this particular approach is that SQL Server won't be able to cache the execution plan of the query you run in your stored procedure.
Listing 22-3: Using sp_executesql to Execute a Database Query
EXEC dbo.sp_executesql N'
SELECT VideoTapeID, Title, Description
FROM VideoTape
WHERE VideoTapeID NOT IN
(
    SELECT VideoTapeID FROM VideoCheckOut as vco
    WHERE VideoTapeID NOT IN
        (SELECT VideoTapeID FROM VideoCheckIn as vci
         WHERE vco.VideoCheckOutID = vci.VideoCheckOutID)
)'
To identify what stored procedures in your database are recompiling excessively, use the SQL Server Profiler and make sure you have the SP:Recompile event selected. The SQL Server Profiler shows you when each recompilation occurs. If you're able to identify any stored procedures that recompile often, you'll have an excellent place to start tuning your database performance.
To avoid two threads accessing the same resource at the same time, SQL Server has a locking system. If your CPU usage and IO are low but your database is performing slowly, it's likely you have some bad locking conditions occurring.
If you have a stored procedure that needs frequent recompilation, make sure you include the table owner in all of your statements. When a stored procedure is being recompiled, it's in a locked state and can't be called by another process until it has finished recompiling. If it has to look up who owns each table as it does the recompile, it'll take longer.
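For example, a trivial sketch of the difference:

-- Owner-qualified: no owner lookup is needed during recompilation
SELECT VideoTapeID, Title, Description FROM dbo.VideoTape
-- Unqualified: the owner has to be resolved while the procedure is locked
-- SELECT VideoTapeID, Title, Description FROM VideoTape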
Also, avoid using cursors when a set-based query will do the job; a cursor can hold locks open for as long as it takes to step through its rows.
Finally, only use transactions when you need to—and keep transactions short. Because a transaction can't have any data modified outside of the transaction while it's occurring, it'll lock almost everything it touches. If you think through the execution of your transaction and come up with any way to keep your transactional integrity without actually using a transaction, then you can reduce the number of locks that occur.
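As a sketch of keeping a transaction tight (the CheckedIn column and the literal key value are invented for illustration), begin the transaction immediately before the statements that must succeed or fail together, and commit as soon as they're done:

BEGIN TRANSACTION
    -- Only the statements that must stay consistent sit inside the transaction
    INSERT INTO VideoCheckOut (VideoTapeID) VALUES (42)
    UPDATE VideoTape SET CheckedIn = 0 WHERE VideoTapeID = 42
COMMIT TRANSACTION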
To achieve the best performance from your database, you'll need to start thinking about performance as you design your table schema. In every step of the process, think about how you'll query the data in the table and how you'll run updates on it. Unfortunately, sometimes you'll have to break away from clean design and accept a schema that's less tidy than you'd like in order to achieve optimal performance.
A common myth in designing a database for performance is that every table must have a primary key. In general, it's a good idea for every table to have a primary key—unless the primary key will never be used. A good example of this is a join table that sits between two tables and expresses a relationship between the two. Adding an extra column to a table that represents a many-to-many relationship is like having a fifth wheel on your car—it won't break anything, but it might hurt your gas mileage just a little. Figure 22-10 shows a join table.
Figure 22-10: A join table that doesn't need a primary key
In the table structure shown in Figure 22-10, the VideoTapeType table doesn't need a primary key. If you added another VideoTapeTypeID column as an autonumbered primary key, it would be completely superfluous and add a useless piece of data to the database.
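As a sketch of that shape (the TypeID column is an assumption, since the book's actual schema isn't shown here), the join table carries nothing but the two keys it relates:

-- No surrogate VideoTapeTypeID column; just the two related keys
CREATE TABLE VideoTapeType
(
    VideoTapeID int NOT NULL,
    TypeID int NOT NULL
)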
If performance is your primary concern, it also pays to think about how much physical space each row occupies. In SQL Server 2000, data is stored internally as a collection of 8KB pages. The more rows that fit on a single 8KB page, the faster queries against the table will run and the less space the table will take up.
A simple way to find out how many bytes each row takes up is to look at the Size column in design view. If you add up the size of each of these columns and the total is larger than it needs to be, trimming or shrinking columns will let more rows fit on each 8KB page.
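To see how this plays out for an existing table, sp_spaceused reports how much space the table's data and indexes actually occupy:

-- Report row count, reserved space, data size, and index size
EXEC sp_spaceused 'VideoTape'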
Database page size is one of the reasons why it's generally better to have multiple small tables than a single wide one. The exception is a deliberately denormalized table that combines data from several tables specifically to avoid joins.
The performance benefit of these tables comes from the fact that the database doesn't need to join multiple tables together. Denormalization is useful for situations where you need most of the data that's in the table. If a denormalized table has 40 columns, and you really only need the data in two of the columns the majority of the time, you're likely better off just using a regular normalized table schema to achieve optimal performance.
When creating a database schema, you have to maintain a fine balance between three main elements:
Clean normalized design
Performance
Speed of development
If you're designing a large, complicated application, a good normalized design is probably the way to go. If scalability is the prime consideration, then performance comes first. If the application you're developing is small and doesn't have scalability requirements, obviously speed of development is at the top of your list.
Above all, make sure you're designing the application the way it needs to be designed; as with all professional applications, use the right tool for the job and avoid architectural "gold plating."
With this in mind, you have to be careful when you start slipping away from clean design, because without discipline it can become a "slippery slope."