The Rise of Client/Server Computing



Until client computers had real processing power and capability, the early client/server paradigm operated without the conventional client and server machines common in most current business environments. Rather, the early client/server model established an ad hoc definition of client and server based on which unit issued a request for information or services (thereby becoming the client) and which unit responded to such requests (thereby becoming the server). This role-based definition continues to be used to this day, particularly in peer-to-peer services. Primitive implementations of this technology are also forever preserved in old reliable Internet applications, including File Transfer Protocol (FTP) and Telnet (networked virtual terminal).
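The role-based definition above can be made concrete with a short sketch: whichever endpoint issues the request is, for that exchange, the client, and whichever endpoint answers is the server. The following minimal Python socket example (the function name `serve_once` and the message contents are illustrative, not from any particular protocol) shows one program playing both roles in a single request/response exchange:

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Answer exactly one request, acting as the server for that exchange."""
    srv = socket.create_server((host, port))   # port=0 picks a free ephemeral port
    chosen_port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()                 # wait for whichever unit issues a request
        with conn:
            request = conn.recv(1024)          # the requester is, by definition, the client
            conn.sendall(b"reply to " + request)  # the responder is the server
        srv.close()

    threading.Thread(target=handler).start()
    return chosen_port

# The same machine can play either role; the roles exist per exchange,
# not per device -- the essence of the early ad hoc client/server model.
port = serve_once()
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"hello")                      # this end becomes the client
    print(cli.recv(1024).decode())             # prints "reply to hello"
```

In peer-to-peer services each node runs both halves of this sketch at once, taking whichever role the current exchange requires.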

Initial client/server computing centered around extremely large, expensive devices called mainframes. Mainframes could serve multiple simultaneous users by running multiple-user operating systems and were first available to big business and academia. Access to these large-scale computers, for system operators and end users alike, came via a terminal console (an input device and output display). Mainframe designs and concepts of yesteryear differ vastly from those of contemporary mainframes, and they are explored later in this chapter.

Functionality for the early mainframes consisted of business and academic applications that required no programming skill to use or maintain. This permitted business managers to create spreadsheets for ad hoc business modeling and reporting and to keep database entries for internal records, for example. The legacy applications they continue to run (and serve up) provide mainframes with their incredible staying power, often because business managers are wary of abandoning older, working systems for newer technology. Transitions from oldies but goodies to more modern equivalents can also be time-consuming and expensive, and this continues to provide a powerful argument for maintaining the mainframe as a form of status quo. When a move from old systems to new ones is unavoidable, parallel operation is nearly always practiced during a cutover phase, so that the old system runs alongside the new one, often in complete lock-step. This is especially likely whenever the costs or risks of downtime or new system failure are unacceptably high; the old system is kept up and running as a kind of "hot standby" in case anything affects the operation of the new one.

Dedicated server computing found its initial justification in the consolidation and centralization of common network resources and devices it enabled. Owing to the prohibitive cost of then-nascent technologies such as early printers, tape backups, and large storage repositories, and the operational overhead involved in maintaining and operating a mainframe, business managers opted to centralize the most commonly used resources and attach them to the mainframe (sometimes directly, sometimes through a variety of peripheral processing units). This eventually spurred the development of more cost-effective "minicomputer" designs, which replicated most mainframe functionality at a fraction of the cost. This centralization benefited business operations by providing higher resource availability and yielding higher productivity.




Upgrading and Repairing Servers
ISBN: 078972815X
Year: 2006
Pages: 240