Either your application architecture dictates which connection strategy you must use, or best practice does. You'll find a lot of advice out there about how and when to connect (and disconnect). Here, I'll show you the trade-offs and advantages of each approach. Consider that getting connected is expensive, even when connection pooling is enabled (and it is by default). If you're using SSPI authentication, each time you connect, the "user" credentials must be revalidated. When connecting to SQL Server, the server must reset the connection when it's first opened, and again each and every time it's reopened (unless you disable this feature). Since connecting to a SQL Server instance is not free, you should take steps to minimize the number of connections you make if you're concerned about performance.

That said, remember that SQL Server connections are a limited resource, but not that limited. Of course, your SQL Server license might dictate how many user connections you can expose, so that number would be your upper limit and dictate how you manage connections. If you don't have a licensing restriction, you should be able to connect a thousand or more clients to the server if your code manages what's getting executed on these dedicated connections. Even though a user is connected, their client application might not be executing query or update code, so the impact on SQL Server is simply the RAM resources and a few thread cycles used to poll the status.

SQL Server has a finite capacity to perform work. You know that, but you can't easily tell how that capacity relates to the number of customers, clients, hits-per-second, or other load metrics without considerable testing. Just because SQL Server can support thousands of connections does not mean it can handle a thousand simultaneous ASP pages or active client applications demanding attention at once.
At one point or another, your server might reach capacity; how your application(s) behave if that happens is a critical factor in their success. Having connections time out when you try to open them is a symptom of sloppy connection object management (which I discuss later in this chapter), poor query design, or a server that simply cannot handle additional volume. Let's look at a few connection strategies and how you should approach application connection.

Connecting Just in Time

When working with ASP applications or XML Web Services, your connection (generally) cannot be maintained between invocations of your code[5]. That is, in a typical ASP (or ASP.NET) application, your page is loaded, your code opens a connection, you perform any queries or updates, and you close the connection before your page closes. However, this execution pattern does not mean you need to open your connection several times in the course of processing a page; in fact, you shouldn't. If your code can ensure that any open connections are closed before the page is unloaded, it's perfectly okay to leave the connection open if you anticipate having to reuse it later in the application.
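The just-in-time pattern can be sketched in a few lines of C#. This is a minimal illustration, not code from the text: the connection string, database, and query are placeholders, and the using blocks guarantee the connection is closed (returned to the pool) before the page finishes.

```csharp
using System.Data.SqlClient;

// Placeholder connection string; substitute your own server and catalog.
string connString = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;";

using (SqlConnection cn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand("SELECT CompanyName FROM Customers", cn))
{
    cn.Open();                          // draws a connection from the pool
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            // ... process each row ...
        }
    }
}   // Dispose closes the connection here, returning it to the pool
```

Because the pool keeps the physical SQL Server connection open behind the scenes, the repeated Open/Close calls in this pattern are far cheaper than they appear.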
"Just-in-time" (JIT) connection strategies rely heavily on the connection pool to optimize performance, even though the pooler is robbing your application of some performance. When using JIT connections, the pooler keeps the SQL Server connection open in anticipation of your application returning to use it in the immediate future. The problem with using the connection pool is that you need to (constantly) watch it for overflow issues, for the reasons I just mentioned. Remember, the connection pool should not grow linearly (or exponentially) past its optimum performance plateau. The number of pooled connections should remain relatively flat once the system reaches stasis[6], when the number of connections being demanded matches those ready to be used in the pool.
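If you do need to constrain pool growth, the pool is tuned through ConnectionString keywords. The values below are examples only, not recommendations from the text; a Max Pool Size cap keeps a runaway page from flooding SQL Server with new connections, and Min Pool Size pre-opens connections so the first users don't pay the connection cost.

```csharp
// Illustrative pool settings (example values, not recommendations).
string connString =
    "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;" +
    "Min Pool Size=5;Max Pool Size=100;Connect Timeout=15;";
```

Once the pool reaches Max Pool Size, further Open calls queue until a connection is freed or the timeout expires, which is exactly the overflow symptom worth monitoring.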
Persistent Connections

When building client/server applications, you don't really need to use the connection pool, especially if your application is designed to open a connection and hold it open for the life of the application. I've been designing applications that use this strategy for decades. Another benefit of this approach is that you can set and maintain server state on a connection-by-connection basis. This means you'll be able to implement server-side cursors, manage #temp tables, make SQL Server SET option changes, and much more. If you're using the connection pool but want to leverage server state, you might find it very frustrating, as the default behavior resets the connection (and any server state) when it's reopened. Yes, you can disable this behavior, as I describe later in this chapter (see the ConnectionString options).

The trick to using an application-shared connection is to serialize its use. That is (unless you use MARS, as discussed later), you need to make sure that only one thread in your application is using the connection at any one time. For example, if your application has several forms, they can all use the same connection, but only one at a time if you leave the connection open. One approach to managing connections is to create a routine that opens the connection if it's not already open, prevents other threads from using it, and closes it if it's idle for a period of time. However, if you need to go to this trouble, you should reconsider using the connection pooler; it does all of this for you. The other problem with a home-grown connection-management scheme is that the ADO.NET methods that manage connections on their own won't use it; they simply call the Open method themselves. No, I'm not a fan of manually managing connections. It's a lot safer, cheaper, and easier to just use the connection pool.
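To make the serialization point concrete, here is a minimal sketch of the kind of home-grown gatekeeper the text argues against. The class and member names are hypothetical; a lock ensures only one thread touches the shared connection at a time, but note everything this sketch does not do (idle-timeout closing, reconnection after failure) that the pooler handles for free.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

// Hypothetical application-shared connection with serialized access.
static class SharedConnection
{
    private static readonly object _gate = new object();
    private static SqlConnection _cn;

    public static void Execute(string connString, Action<SqlConnection> work)
    {
        lock (_gate)                    // only one thread at a time
        {
            if (_cn == null)
                _cn = new SqlConnection(connString);
            if (_cn.State != ConnectionState.Open)
                _cn.Open();             // open on first use (or after a drop)
            work(_cn);                  // caller runs its commands here
        }
    }
}
```

Every form or class routes its work through Execute, so server state (SET options, #temp tables) persists on the one connection, at the cost of all threads queuing behind the lock.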
That said, if you plan to use server-side cursors as described in Appendix IV, "Creating and Managing Server-Side Cursors," you'll want to use a persistent connection: one that's left open while the cursor is being accessed.

Understanding MARS and Its Implications

Multiple Active Resultsets (MARS) is a new feature in ADO.NET 2.0 that you might never need. Fortunately, it's disabled by default[7]. MARS permits developers to execute multiple queries on the same physical connection. ADO.NET accomplishes this behind the scenes by creating up to nine "logical" connections per physical connection. Each of these "sessions" is managed (more or less) independently. If you attempt to execute more than nine operations on the same connection, your performance will degrade (dramatically).
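Because MARS is off by default, it has to be requested explicitly in the ConnectionString. The fragment below shows the keyword for illustration only, not as a recommendation; the rest of the string is a placeholder.

```csharp
// MARS must be enabled per connection via the ConnectionString
// (shown for illustration; the text argues against using it).
string marsConnString =
    "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;" +
    "MultipleActiveResultSets=True;";
```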
One benefit of MARS is that it might prevent you from seeing the troublesome "There is already an open SqlDataReader on the connection..." exception. The problem with hiding this exception is that it's telling you that you have left an uncompleted operation behind. Enabling MARS to solve this problem is like taking the bulb out of the engine's low-oil-pressure indicator. MARS does not help performance; it adds overhead to the connection pooling mechanism. As a matter of fact, if you make a mistake and overload MARS with more than ten operations, your application might appear to lock up as MARS careens into the ditch trying to accommodate the load. Why did Microsoft do this? Well, IMHO, MARS was invented so Microsoft could add a "check" to the box in the Oracle vs. SQL Server marketing material that said it supported multiple operations per connection.

Another (apparently) good reason to use MARS is that it permits you to open a connection, run a query to fetch rows, and use the same connection to post updates to those rows. Since both operations can now share a common transaction, there is some benefit to this approach. However, the approach itself is discredited. Most applications that move rows to the client to perform updates in this manner can (and should) be rewritten to perform these operations directly on the server with a smart WHERE clause or a stored procedure. IMHO, MARS: Just say no.

Choosing the Right Scope for the Connection Object

In your application's design phase, you need to decide how many forms, pages, or classes need access to a common connection, if any do. Remember, connections are a "semi-precious" commodity, like gas 200 miles from the nearest station, but unless you plan to scale your application to support thousands of users, it's probably okay to use multiple "local" connections rather than one central connection. A local connection is one that's opened in the routine that needs it.
A "global" connection is declared as a SqlConnection object in a scope that can be accessed by all of the routines (forms, modules, classes) in your application. No, you can't usually get away with globally scoped SqlConnection objects in an ASP or middle-tier application; they make the most sense in a Windows Forms application. You're asking for trouble if you instantiate a SqlConnection in a function and pass out a SqlDataReader feeding off the open SqlConnection. As I'll show later in Chapter 10, "Managing SqlCommand Objects," you can program the SqlDataReader to close its associated connection when (or if) the SqlDataReader is properly closed. Based on my experience, the problem is that developers don't always close the SqlDataReader when they're done with it, or the SqlConnection associated with it. That might be because the consumer of your function (the one that returns a SqlDataReader) is another person or team that does not realize it's their responsibility to close the SqlDataReader once they have finished reading the data. The problem is, code that accepts a SqlDataReader as an argument does not have direct access to the SqlConnection used to feed the SqlDataReader, so it can't close the underlying SqlConnection even if it remembers to do so.
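The technique referred to here is the CommandBehavior.CloseConnection option on ExecuteReader. This sketch is illustrative (the method name and query are placeholders): it ties the connection's lifetime to the reader, so closing the reader closes the connection, but the caller must still remember to close the reader.

```csharp
using System.Data;
using System.Data.SqlClient;

// Returns a SqlDataReader whose Close also closes its connection.
// Deliberately no using block around the connection: it must outlive
// this method so the caller can read the rows.
static SqlDataReader GetCustomers(string connString)
{
    SqlConnection cn = new SqlConnection(connString);
    SqlCommand cmd = new SqlCommand("SELECT CompanyName FROM Customers", cn);
    cn.Open();
    return cmd.ExecuteReader(CommandBehavior.CloseConnection);
}

// Caller's responsibility:
// using (SqlDataReader rdr = GetCustomers(cs))
// {
//     while (rdr.Read()) { /* ... */ }
// }   // Dispose closes the reader AND the underlying connection
```

If the caller forgets the using block (or an explicit Close), both the reader and the connection leak until the garbage collector gets around to them.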
No, it has never been possible to pass a live SqlConnection object between processes. This has more to do with the fundamental architecture of the low-level connectivity interfaces than anything else. While it's entirely possible to pass a ConnectionString from tier to tier, you won't get away with passing a populated SqlConnection instance.