Computers have played a role in the business environment for more than four decades, and for most of that time, server-side processing has been central to the majority of business applications. Let's take a look at how those four decades have played out.
In the early 1960s, mainframe computers began to find their way into large enterprises. These were extremely large machines responsible for all logic, storage, and processing of data. "Dumb terminals" allowed users to interact with the mainframe.
These systems continued in widespread use for more than 30 years, and to some degree, continue to exist today. Architecturally, they were designed at a time when processing power was scarce and expensive; therefore, it was cost effective to centralize all the power on the server. The clients for mainframe systems contained virtually no logic because they relied on the server for everything, including the display logic.
The Age of Microcomputers
As memory and processing power became cheaper, the microcomputer (also known as the personal computer) began to find its way into businesses. Originally, these were used to run stand-alone applications, where everything needed by the application resided directly on the machine at which the end user worked. These machines were often easier to use because the user interface had improved. It was during this time that graphical user interfaces (GUIs) became available, further increasing the ease of use of these systems. However, as stand-alone systems, they offered no effective way to centralize data or business rules.
With the migration from mainframe to microcomputer, the pendulum swung from one extreme (having all logic on the server) to the other extreme (having all logic on the client). Sensing the imbalance in this, several vendors began to develop a system that could encapsulate all the benefits of the microcomputer as well as those of the mainframe systems. This led to the birth of client/server applications.
Client/server applications were frequently written in languages such as Visual Basic or PowerBuilder, and they offered a lot of flexibility to application developers. Interfaces that were very interactive and intuitive could be created and maintained independent of the logic that drove the application functionality. This separation allowed modifications to be made to the user interface (the place in an application where changes are most frequent), without the need to impact business rules or data access. Additionally, by connecting the client to a remote server, it became possible to build systems in which multiple users could share data and application functionality. With business and data access logic centrally located, any changes to these could be made in a single place.
Although traditional client/server applications offered tremendous advantages over stand-alone and mainframe applications, they lacked an effective means of distributing the client. This meant that for each change made to the user interface, the files comprising the client had to be reinstalled at each workstation, often requiring dynamic link library (DLL) files to be updated. The phrase "DLL hell" aptly captured the frustration of the many IT professionals whose job it was to keep client applications current within a business.
During the days of client/server dominance, the U.S. government project ARPANET evolved into the Internet and started becoming available to businesses as a means to share files across a distributed network. Most of the early protocols of the Internet, such as File Transfer Protocol (FTP) and Gopher, were specifically related to file sharing. The Hypertext Transfer Protocol (HTTP) followed these and introduced the concept of "hyperlinking" between networked documents. The Internet, in many ways, is like the mainframe systems that predate it, in that an ultra-thin client (the browser) is used to display information retrieved from the server. The documents on the server contain all the information needed to determine how the page will be displayed in the client.
Businesses began to embrace the Internet as a means to share documents, and in time, many realized that the distributed nature of the Internet could free them from the DLL hell of their client/server applications. This newfound freedom established the Internet as more than a document-sharing system and introduced the concept of the web-based application. Of course, these web-based applications lacked the richness and usability that had been taken for granted in the client/server days.
Establishing the Need for Rich Internet Applications (RIAs)
Through the transition from client/server applications to web-based applications, businesses were able to save a tremendous amount of money on the costs of desktop support for applications. No longer was it necessary to move from one desk to the next to reinstall the latest version of the application client with each change. Instead, each time the application was used, the latest client logic was downloaded from the server.
Of course, within a few years, many businesses realized that there was a downside to this model. Although they were indeed saving money on distribution costs, they also lost money, largely due to the productivity losses of their employees. The richness of the client in client/server applications allowed end users to achieve their goals quickly and efficiently. However, the page-based nature of web-based applications mandated that for each action users took, the data had to be sent back to the server and a new page retrieved. Although this often was a matter of only seconds per page, over the course of an eight-hour work day, those seconds quickly added up to several minutes per day. Many businesses found that over the course of a work week, employees heavily involved in data entry operations were losing as many as 3-5 hours a week in productive time, compared to doing the same tasks in their earlier client/server applications.
Looking to regain the lost productivity, several variations of rich clients for Internet applications were attempted. One of the early attempts was Java applets, but these often failed because the file size was too large and there were many issues with platform independence.
Fortunately, with the release of Macromedia Flash MX in 2002, a new tool to solve the problem was introduced. With Flash as a client, it was again possible to have all the richness and benefits of a traditional client/server application along with the distributed nature of a web-based system. The end result was that the productivity of the client/server days was restored, without the added expense of keeping the user base up-to-date.
However, to begin using Flash in this way, Flash developers had to make a logical leap. Traditionally, Flash was used to build stand-alone applications, often in the form of animations or movies, which most often ran on local data. To fully leverage the benefits of the client/server model, developers needed to understand the advantages of connecting to a server, and how to properly divide the processing of data between the client and the server.