When Microsoft announced its plans for .NET, it was clear the company was designing a framework that would let developers easily build applications that work over the Internet. Sure, there are ways to do this with previous versions of Visual Studio and subsequent Software Development Kit (SDK) downloads, but the .NET Framework was designed with the Internet in mind from the start. This becomes immediately apparent once you delve into ADO.NET.
When Active Server Pages (ASP) and other related technologies were released, developers were finally able to push large amounts of data from a database residing on a server to an end user without having to copy the entire database and run it locally. However, this also ushered in a new era of database access: the disconnected client. Although calling a user browsing the Web a disconnected client may sound like an oxymoron, the disconnection here refers to the relationship between the user and the data source. A multitiered architecture generally has three key pieces. First is the user interface, which is rendered for the client. The second tier assembles the data; more precisely, it contains the logic that connects the user interface with the third tier, the data source. So, what exactly does it mean when I say that a client is disconnected?
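To make the three tiers concrete, here is a minimal sketch (in Python rather than a .NET language, purely for illustration) in which each tier talks only to its immediate neighbor. The function names and the in-memory "table" are hypothetical, not part of any real framework:

```python
# Hypothetical three-tier sketch: each tier calls only its neighbor.

def data_tier():
    """Tier 3: the data source (here, just an in-memory table)."""
    return [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

def logic_tier():
    """Tier 2: assembles the data and applies business rules for the UI."""
    rows = data_tier()
    return [row["name"].title() for row in rows]

def presentation_tier():
    """Tier 1: renders whatever the logic tier hands it."""
    return ", ".join(logic_tier())

print(presentation_tier())  # → Widget, Gadget
```

The point of the layering is that the user interface never touches the data source directly; only the middle tier needs to know how (and when) to connect to it, which is exactly where the disconnection discussed below takes place.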
Suppose you had to construct the necessary logic to support a few simultaneous connections from clients. After you get past the code to synchronize the data, connect and disconnect clients, and transfer data, you're essentially finished. With just a few client connections, your data flows smoothly. Now, add a hundred more simultaneous connections (or a thousand, or a million) and watch the performance of your application drop substantially. By keeping all these connections open, your application will sooner or later come crashing down. This, in essence, is the connected client scenario. In the early days of data access, building an application of such magnitude would have been extremely difficult because of the complexity of the database-access API functions available at the time.
The previous example calls for a data-access method that solves the performance problems associated with large numbers of simultaneous connections. One solution is to limit the number of simultaneous connections, but as the number of clients grows, some of them may wait a long time. Another solution would be to transfer the entire database to the client along with the logic needed to manipulate the data on the client machine; I think you can judge for yourself why that would be a bad idea. A better approach is to connect with a client, give the code that houses the data-access logic a quick snapshot of the current data, and then disconnect, letting the logic tier take over from there and periodically reconnecting to make small changes. This is one of the methods implemented within ADO.NET, and it's known as the disconnected client scenario.
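The connect, snapshot, disconnect, reconnect cycle described above can be sketched in a few lines. This is not ADO.NET itself; it is an illustrative Python version of the same pattern using the standard-library `sqlite3` module, with a made-up `products` table and helper names:

```python
import sqlite3

DB_PATH = "inventory_demo.db"  # hypothetical database file for the sketch

def setup_demo_db():
    """Create a small sample table so the sketch is self-contained."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS products "
                 "(id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")
    conn.execute("DELETE FROM products")
    conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                     [(1, "widget", 10), (2, "gadget", 5)])
    conn.commit()
    conn.close()

def fetch_snapshot():
    """Connect, copy the current rows into memory, then disconnect."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT id, name, qty FROM products ORDER BY id").fetchall()
    conn.close()  # held open only long enough to grab the snapshot
    return [list(row) for row in rows]

def apply_changes(changes):
    """Reconnect briefly to push only the accumulated changes back."""
    conn = sqlite3.connect(DB_PATH)
    conn.executemany("UPDATE products SET qty = ? WHERE id = ?", changes)
    conn.commit()
    conn.close()

setup_demo_db()
snapshot = fetch_snapshot()        # client now works disconnected
for row in snapshot:
    row[2] -= 1                    # edit the local copy offline
apply_changes([(row[2], row[0]) for row in snapshot])
```

Notice that a connection exists only inside `fetch_snapshot` and `apply_changes`; between those calls, any number of clients can hold and edit their local snapshots without consuming a single server connection, which is precisely what makes the disconnected model scale.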