C. JavaScript history

During its checkered history JavaScript's purpose has been continually redefined, not only because every generation of browsers supports more JavaScript, but also because of the struggle for pre-eminence between the fat and the thin models of JavaScript programming. A historical overview will therefore acquaint you with ancient and modern thinking about JavaScript's purpose, as well as provide perspective on the fat client vs. thin client question discussed earlier. You'll also begin to understand why JavaScript has been frequently misunderstood, especially by "hard" programmers.

False start

JavaScript was created by one man: Brendan Eich, then working for Netscape Communications Corporation. Its public history starts with the release of Netscape 2 in March 1996, but unfortunately it was a false start in more than one respect.

Eich's purpose in creating JavaScript was to give Web developers (not known for their technical prowess) an easy way to add bits of interactivity to pages. The idea was to copy scripts from other pages and tweak them a bit. In this respect he succeeded admirably: many JavaScript developers (including myself) started out as copy-pasters. Unfortunately, JavaScript had the wrong name and the wrong syntax.

Originally JavaScript was named LiveScript, but at the last possible moment its name was changed. This was done purely for marketing reasons: in 1996 Java was very popular, and Netscape thought it a good idea to hitch a ride on this popularity by choosing a similar name. Even worse, Netscape's marketing managers summarily ordered Eich to "make the language look like Java." The result was a language that superficially resembles Java in name and syntax and that is easy to learn by copy-pasting. This inevitably led people to discount JavaScript as a "dumbed-down" version of Java: a cute little scripting language you could do a few tricks with, but that wasn't worth a serious programmer's attention.
We're still suffering the consequences of this fatal misperception.

De facto standard

Netscape 2, the first browser to support JavaScript, was a roaring success. Netscape 3 followed pretty soon, and to everybody's delight it supported even more functionality. Netscape 3 also gave JavaScript its first de facto standard. Although this is not an official specification promulgated by an official body like the W3C, it is no less real: every browser that supports JavaScript also supports the de facto Netscape 3 standard.

In 1996, Netscape 3 was the king of the hill. Web developers enthusiastically used its new functionalities and advanced features. Therefore, Netscape's competitors, such as Microsoft's Internet Explorer, had to support everything that Netscape supported. After all, who wanted a browser that didn't support the cool stuff used on thousands of Web sites? This copycatting is what made a de facto standard.

This is an important theme in the Web's history: once enough Web sites use a certain functionality, any browser must support that functionality and continue to support it indefinitely. If a browser doesn't, users notice that their favorite sites don't work, and they blame the browser they're using, whether that's fair or not.

The first thin phase

In those early days, the browser was still definitely a thin client. Form validation and mouseovers are fine and dandy, but they don't allow you to handle a significant amount of user interaction on the client. Users were forced to go back to the server time and again in order to truly interact with Web sites. Back then that was no problem; no user expected otherwise.

The Browser Wars

The browser market started to change rapidly. The first Internet hype began, and the Browser Wars were heating up. Who was going to rule the Web: Netscape or Explorer?
Neither party was confident of winning the Wars with its version 3 browser, and therefore both decided to create upgraded and extended version 4 browsers. Both vendors implemented W3C's CSS specification, partly because they had helped shape the specification, and partly because each was afraid that the competitor would support CSS and gain an advantage. Unfortunately, Netscape's and Microsoft's CSS implementations did not match the standard (or each other's), and that set back the adoption of CSS by many years. Part of the reason was that the two competitors deliberately paid no attention to each other's implementation, a state of affairs that also hindered JavaScript in those days.

A bunch of backgrounds and borders was not deemed sufficiently cool to win the Browser Wars, though. Both browser vendors therefore allowed JavaScript control over CSS declarations. It became possible to use position: absolute to create a layer "on top of" the rest of the site, and then continuously change its top and left properties to make it move across the screen. This was seriously cool stuff.
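The moving-layer trick described above can be sketched in a few lines of modern-style JavaScript. This is an illustrative reconstruction, not code from the era: the element id "layer", the step size, and the 800-pixel wrap width are all hypothetical, and the movement logic is split into a pure function so it can be shown without a browser.

```javascript
// Pure movement logic: advance the layer and wrap around at the right edge.
function nextLeft(current, step, width) {
  return (current + step) % width;
}

// Browser-only part: assumes markup like <div id="layer" style="position: absolute">.
if (typeof document !== 'undefined') {
  var layer = document.getElementById('layer');
  var x = 0;
  setInterval(function () {
    x = nextLeft(x, 5, 800);       // move 5px every tick
    layer.style.left = x + 'px';   // reposition the absolutely positioned layer
  }, 50);
}
```

Continuously rewriting top and left like this was the entire basis of Browser Wars-era animation; there was no transition or animation support in CSS itself.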
Collectively, these tricks became known as DHTML. They had little to do with actual HTML, and much to do with CSS and JavaScript, but a now-forgotten marketing genius coined the term, which has persevered even to the present day.

Competing standards

In order to make DHTML possible, the browsers needed to extend their JavaScript capabilities. In the past, Web developers could access only form fields, links, and images, but now it became mandatory to allow access to layers, too, so that they could be moved across the browser window. A DOM upgrade was necessary.

Unfortunately, in those days no specification existed for the improvement of JavaScript. Worse, both Netscape and Microsoft were trying to get a decisive advantage over the other, and therefore deliberately created totally incompatible DOM extensions. Both hoped that their DOM would become the new standard and the other DOM would be relegated to the ash heap of history. Thus the proprietary DOMs (also known as "intermediate DOMs," since they came between Netscape 3's and W3C's DOMs) came into being. Netscape 4 supported the document.layers DOM, while Explorer supported (and still supports) the document.all DOM. They worked quite differently, but in the end Explorer's implementation was closer than Netscape's to the eventual W3C DOM, as well as easier to use.
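A hedged reconstruction of the kind of branching script this situation forced on developers: the function below returns a reference to a named layer via whichever DOM the browser supports, falling back to null so older browsers fail silently. The doc parameter stands in for the global document object purely so the logic can be illustrated outside a browser; the layer name is hypothetical.

```javascript
// Classic three-way DOM detection from the Browser Wars era (sketch).
function getLayer(doc, name) {
  if (doc.getElementById) {   // W3C DOM (later, standards-compliant browsers)
    return doc.getElementById(name);
  }
  if (doc.all) {              // Explorer 4's proprietary document.all DOM
    return doc.all[name];
  }
  if (doc.layers) {           // Netscape 4's proprietary document.layers DOM
    return doc.layers[name];
  }
  return null;                // older browsers: do nothing rather than error out
}
```

Note that the detection tests for the feature itself (does document.all exist?) rather than for a browser name, which is why scripts in this style kept working as new browsers appeared.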
Managing these competing DOMs was the main challenge for Web developers during the Browser Wars. If you wanted to move a layer across the browser window (any Web developer's fantasy in those days), you had to write separate scripts to accommodate Netscape 4's and Explorer 4's DOMs and event models, as well as a bit of code that made sure older browsers would not try to execute either script. Some Web developers thought this was great fun; others complained about a certain lack of standardization.

The first fat phase

Thanks to these new features, the client could now handle significant amounts of interaction. Animations, show/hide scripts, and other eye candy became possible overnight, and in the hands of a competent interaction designer these tricks could help users make more sense of a site and take more actions before being forced to load another page. Unfortunately, competent interaction designers were thin on the ground.

The fat model gained adherents, who began to redefine JavaScript's purpose. From a thin, simple language that could do a few tricks, it became a much more comprehensive construct that allowed developers to create true single-page interfaces. Unfortunately, all this activity did not lead to the revolution promised by ecstatic gurus. Although a few experiments were interesting, most interfaces and libraries only offered variations on the moving layer and the dropdown menu. JavaScript's redefined purpose was all about technology, and not about usability. This is a recurring theme in JavaScript history. Eight years later, we see essentially the same thing happening in JavaScript's next fat phase.

Browser Peace

Microsoft came out with Explorer 5 in 1999. It supported quite decent CSS, as well as the new W3C DOM standard. In contrast, Netscape 4 died, despite desperate last-resort attempts from both its parent company and Web developers to alleviate its suffering.
Roughly at the same time, JavaScript's first fat phase broke down into its component parts: a bit of JavaScript and a lot of hot air. This double demise of Netscape 4 and JavaScript's first fat phase allowed Explorer to win the Browser Wars and made room for the CSS revolution and a new way of thinking about Web development.

The CSS revolution

At the tail end of the Browser Wars, a group of concerned Web developers united in the Web Standards Project (WaSP). Their mission was to increase the standards awareness and compatibility of the various browsers and of Web developers themselves. Back then JavaScript stood for all that was wrong with the old way of making Web sites. There was no standard; the average JavaScript-"improved" site was bloated, more likely than not worked in only one browser, and didn't consider accessibility at all. A fundamental rethink was necessary.

For those reasons, among others, the WaSP, and Web developers sympathizing with its goals, focused on CSS. Many Web developers were tired of the hacks and workarounds that the Browser Wars era had given rise to, and desperately wanted to clean the slate. CSS, and not JavaScript, gave them the best chance to radically break with the past.

The second thin phase

When the first fat phase of JavaScript ended, interest in the language dwindled, and its purpose became rather hazy. Some developers reverted to the pre-Browser Wars form validation/mouseover school of thought; others continued to churn out fat (not to say obese) interfaces that pleased nobody; and many participants in the CSS revolution excluded JavaScript totally.

History could have proceeded differently from this point. Explorer 5.0 was already available, and it supported large swaths of the W3C DOM, as well as the XMLHttpRequest object that has come to play such a vital role in JavaScript's second fat phase. But fat clients had gone out of fashion and simply stopped evolving.
A new start

From about 2003 onwards, a few pioneers began to write JavaScript in a new style that was heavily influenced by the ideas of the CSS revolution. For the first time it was tightly embedded in a comprehensive theory of Web development, and the identification and solving of accessibility issues was taken seriously.
The resulting scripts were thin, and mostly concerned themselves with subtly enhancing HTML pages and adding light touches of functionality. If the browser does not support JavaScript, little is lost except for a bit of usability. This coding style is known as unobtrusive scripting, and we'll discuss it in detail in Chapter 2.

Unobtrusive scripting didn't immediately conquer the world. In progressive Web-development circles, JavaScript still had a bad name for being inaccessible, while developers of Browser Wars-style bloatware were mostly unaware of the new approach.

The second fat phase

Then JavaScript's second fat phase started with a big bang. One article crystallized slumbering technical and usability notions by showing that modern techniques allow the creation of a single-page interface that silently loads little chunks of data from the server. The article was a resounding success, which in itself indicates that people were ready, even eager, to start a new fat phase of JavaScript use. And it's true: a site can become much, much more usable if a few smart scripts make sure that one single page contains everything the user needs and allows her to take all the actions she wants. Thus JavaScript's purpose was again redefined. Fat clients became fashionable overnight.

The Ajax wave brought new blood into the JavaScript community: an infusion of people from other disciplines, most importantly the server-side languages, with fundamentally different ways of looking at JavaScript.
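The "silently loads little chunks of data" pattern can be sketched with the XMLHttpRequest object mentioned earlier. This is a minimal illustration, not code from the article: the URL and the target element id are hypothetical, and the readiness check is factored into a pure function so the core logic can be shown on its own.

```javascript
// Pure check: the request is complete (readyState 4) and succeeded (HTTP 200).
function isReady(readyState, status) {
  return readyState === 4 && status === 200;
}

// Fetch a chunk of data and insert it into the page without a full reload.
// Assumes a browser environment providing XMLHttpRequest and a DOM.
function loadChunk(url, targetId) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (isReady(xhr.readyState, xhr.status)) {
      document.getElementById(targetId).innerHTML = xhr.responseText;
    }
  };
  xhr.open('GET', url, true); // asynchronous: the page stays responsive
  xhr.send(null);
}
```

The asynchronous flag in open() is the whole point: the user keeps interacting with the page while the data arrives in the background, which is what makes a single-page interface feel like an application rather than a sequence of page loads.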
These developers are interpreting the purpose of JavaScript in different ways. Put (too) simply, traditional Web developers heavily influenced by the CSS revolution create thin, accessible JavaScripts in spaghetti code, while "hard programmers" coming from server-side development create fat, inaccessible Ajax clients in impeccably object-oriented code.

In some respects Ajax resembles DHTML too closely for comfort. Accessibility, for instance, is hardly an issue for many Ajax applications. And the hype tends to concentrate on technical issues (how Ajax?), while usability and interaction issues (why Ajax?) remain underreported. Finally, bloatware libraries (called "frameworks" nowadays) are on the rise again. Fortunately, there has been one significant change since the first fat phase: browser vendors and JavaScript developers agree that standards are there to be followed. Although browser problems will always exist, the deliberate incompatibilities that characterized the Browser Wars era have gone.

What's next?

At the time of writing, the Ajax hype is still running at full speed. Nonetheless, I believe that it will end just as DHTML did: people will simply lose interest, and it will fall apart into a bit of JavaScript and a lot of hot air, though I don't know when this will happen.
JavaScript will swing back to a thin phase in which its purpose is again redefined and large-scale solutions make way for smaller, simpler scripts. Of course, in due time this third thin phase will be followed by a third fat phase, in which an as-yet-uninvented acronym will redefine JavaScript's purpose for the sixth time. This movement between fat and thin phases seems to be one of the few "laws" of JavaScript history.

Can we break these cycles somehow? Essentially, that's only possible if everyone agrees on a single purpose for JavaScript. Therefore I hope that by the time the third fat phase comes around, JavaScript developers, including those coming from "hard" programming backgrounds, will have learned to look beyond cool code and slick libraries/frameworks, and will base their actions on the context their scripts run in: standards-compliant, accessible Web pages.