Who's Online?

Throughout this book we have used a news site as a basis for the examples. On this site there are articles and commentary, and presumably many people view the content. A recent popular trend in online services is the development of a social community where like-minded people can virtually "hang out." On some sites, the overall content is so focused that the simple fact that a user is visiting puts him in the same community as every other visitor. A news site is not like that.

News sites have a tremendous amount of content, and the reader communities are many and overlapping. The articles represent political opinions, ethnic and racial interests, as well as geographically focused information. As a white male who reads the technology section, you might imagine I am not in the same community as a Hispanic woman who only reads about the political activities in Washington D.C. However, she is my neighbor, so we do have tremendous commonalities.

How do we establish and represent a sense of community to the visitors of our news site? The approach is a basic one and a spectacular basis for many useful website features as well as market research opportunities. We will track who is viewing each page and expose to the visitor the other users who are viewing the same pages.

The premise is that when I view a page, I am considered to be actively viewing that page for the next 30 minutes or until I load another page. On each page there will be a buddy list that will dynamically update with the 30 most recent viewers of that page or (for more popular pages) there will be a count of current readers. Users will be allowed to expose their profiles publicly, and from the buddy list you can see information (if the user allows it) about your peer readers.

Technical Setup

Although we have discussed Oracle and PostgreSQL previously in this book, MySQL is an excellent and tremendously popular RDBMS solution for websites. Just for the sake of diversity (and as a reminder that this book is about techniques and approaches, not products), we will assume that our entire news site has been built atop MySQL.

We want to track the most recently loaded page of each user. This is a trivial exercise using mod_perl because it provides the capability to run code in the log-handling phase of the Apache request. Before we jump into the code, we should figure out what questions we must answer and of what subsystem we will ask them. We need to know the following:

  • The total number of users viewing the site

  • The total number of users viewing any given dynamic page

  • The list of users who last viewed any given dynamic page; we will limit it to the 30 most recent views

To answer these questions, we store information (such as user ID or username) on each user who has loaded any dynamic page within the last 30 minutes. When a user loads a new object, her information is tracked on the new dynamic page and not on the previous dynamic page. This will tell us at any point in time who last viewed each object. After 30 minutes pass without the user loading a dynamic page, we will consider her offline, and her information should not contribute to the current viewers of the site.

Why do we want to do this? The first and foremost reason is that the business (and likely marketing) thought it would be a product differentiator and would both attract and retain more users. Knowing the trend of online users can tell you (without deep log analysis) how you are growing and can allow you to gauge, in relative terms, how captive your audience is.

This seems like a rather simple problem. We use MySQL already to drive our site, and it does so with gusto. All we need to do is add a table to MySQL called recent_hits and stick users into that table as they load dynamic pages on the site; the SQL to achieve that follows:

CREATE TABLE recent_hits (
  SiteUserID INT NOT NULL PRIMARY KEY,
  URL VARCHAR(255) NOT NULL,
  Hitdate DATETIME NOT NULL
);
CREATE INDEX recent_hits_url_idx on recent_hits(URL, Hitdate);
CREATE INDEX recent_hits_hitdate_idx on recent_hits(Hitdate);


To track new page loads, we simply REPLACE INTO the table with all the columns filled in. Because this will track users forever, it may be in our best interest to have a periodic administrative task that performs a DELETE FROM recent_hits WHERE Hitdate < SUBDATE(NOW(), INTERVAL 30 MINUTE) to cull rows that are not important. However, because this table will always be strictly limited to the number of users in the database (primary keyed off SiteUserID), this isn't strictly necessary.
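If we do schedule such a cull, it is a one-query perl script; the following is a minimal sketch (the DSN and credentials are the same placeholders used in the logging handler later in this section):

use strict;
use DBI;

# Connect with the same placeholder DSN/credentials as the logging handler.
my $dbh = DBI->connect("dbi:mysql:database=db;host=dbhost", "user", "pw",
                       { RaiseError => 1 });
# Cull rows older than our 30-minute activity window.
$dbh->do(q{DELETE FROM recent_hits
            WHERE Hitdate < SUBDATE(NOW(), INTERVAL 30 MINUTE)});
$dbh->disconnect;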

When attempting to answer the three questions we posed, we quickly derive simple SQL statements that do the job perfectly:

  • Total users:

         SELECT COUNT(*)
           FROM recent_hits
          WHERE Hitdate > SUBDATE(NOW(), INTERVAL 30 MINUTE);

  • Total users on a page:

         SELECT COUNT(*)
           FROM recent_hits
          WHERE Hitdate > SUBDATE(NOW(), INTERVAL 30 MINUTE)
            AND URL = ?;

  • Thirty most recent users on a page:

         SELECT SiteUserID, Hitdate
           FROM recent_hits
          WHERE Hitdate > SUBDATE(NOW(), INTERVAL 30 MINUTE)
            AND URL = ?
       ORDER BY Hitdate DESC
          LIMIT 30;
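For illustration, a page handler might run these three statements through DBI along the following lines; this is a sketch, not the site's actual code, and the connection parameters and URL are placeholders:

use strict;
use DBI;

my $dbh = DBI->connect("dbi:mysql:database=db;host=dbhost", "user", "pw",
                       { RaiseError => 1 });
my $uri = "/b/index.html";    # placeholder URL

# Total users online in the last 30 minutes.
my ($total) = $dbh->selectrow_array(q{
  SELECT COUNT(*) FROM recent_hits
   WHERE Hitdate > SUBDATE(NOW(), INTERVAL 30 MINUTE)});

# Total users whose last view was this page.
my ($on_page) = $dbh->selectrow_array(q{
  SELECT COUNT(*) FROM recent_hits
   WHERE Hitdate > SUBDATE(NOW(), INTERVAL 30 MINUTE) AND URL = ?},
  undef, $uri);

# The 30 most recent viewers of this page.
my $recent = $dbh->selectall_arrayref(q{
  SELECT SiteUserID, Hitdate FROM recent_hits
   WHERE Hitdate > SUBDATE(NOW(), INTERVAL 30 MINUTE) AND URL = ?
   ORDER BY Hitdate DESC LIMIT 30}, undef, $uri);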

A Quick perl Example

Implementing the previous solution within our application should be an exercise devoid of intensity and challenge. Were our application a mod_perl application, we would (after authorizing the user) poke the SiteUserID into the Apache connection using Apache->request->connection->user($SiteUserID) and provide something like the following for the PerlLogHandler:

01: package NewsSite::RealtimeExtensionToAppendRequestData;
02: use DBI;
03: use vars qw/$g_dbh $g_sth/;
04:
05: sub handler {
06:   my $r = shift;
07:   return unless($r->connection->user); # bail if no user information.
08:   for ( 1 .. 2 ) {   # Two tries (if the first errors)
09:     eval {
10:       if(!$g_dbh) {
11:         # If we are not connected, connect and prepare our statement.
12:         $g_dbh = DBI->connect("dbi:mysql:database=db;host=dbhost",
13:                               "user", "pw");
14:         $g_sth = $g_dbh->prepare(q{
15:            REPLACE INTO recent_hits
16:                         (SiteUserID, URL, Hitdate)
17:                  VALUES (?,?,NOW()) });
18:       }
19:       # Insert the user and the URI they loaded.
20:       $g_sth->execute($r->connection->user, $r->uri);
21:     };
22:     return unless($@);    # No error, we are done.
23:     undef $g_sth;         # Error... drop our statement handle
24:     undef $g_dbh;         #       and our database connection.
25:   }
26: }
27: 1;


We can activate this handler by adding the following line to our Apache configuration file:

PerlLogHandler NewsSite::RealtimeExtensionToAppendRequestData

After we do this, during the logging phase Apache/mod_perl will invoke the handler and replace that user's row in the recent_hits table with the page she just loaded.

That's Not Fair!

For all the PHP programmers out there screaming "That's not fair!": relax. Although PHP does not have hooks in all the various phases of the Apache request serving process, if there is a will, there is a way. This should encourage you to not be intimidated by the prospect of bending the tools at hand to your will; after all, they are there to serve you.

The only two clever things in the previous example are

  • Changing the apparent authorized user from the Apache perspective

  • Performing some action (a database insertion) at the end of the request after the payload has been handed to the client

We will create a PHP extension that provides a newssite_poke_user function that updates the user information on the connection servicing the current Apache request. Also, we will hook the request shutdown phase to log to MySQL, which emulates the perl variant's behavior as closely as possible given that PHP lacks the capability to run code in Apache's log handler phase.

To build a PHP extension, you need to create a config.m4 that indicates how the module should be named and linked and a source file that contains all intelligence. The config.m4 file follows:

01: PHP_ARG_ENABLE(RealtimeExtensionToAppendRequestData,
02:                NewsSite request logging,
03:   [--enable-newssite    Enable NewsSite request logging ])
04:
05: if test "$PHP_REALTIMEEXTENSIONTOAPPENDREQUESTDATA" != "no"; then
06:   PHP_NEW_EXTENSION(RealtimeExtensionToAppendRequestData,
07:                     RealtimeExtensionToAppendRequestData.c,
08:                     $ext_shared)
09: fi
10:
11: PHP_ADD_INCLUDE(/opt/ecelerity/3rdParty/include)
12: PHP_ADD_INCLUDE(/opt/ecapache/include)
13:
14: PHP_SUBST(REALTIMEEXTENSIONTOAPPENDREQUESTDATA_SHARED_LIBADD)
15: PHP_ADD_LIBRARY_WITH_PATH(
16:   mysqlclient, /opt/ecelerity/3rdParty/lib/amd64,
17:   REALTIMEEXTENSIONTOAPPENDREQUESTDATA_SHARED_LIBADD)


As described in our config.m4 file, the C source file is called RealtimeExtensionToAppendRequestData.c:

01: #include "config.h" 02: #include "php.h" 03: #include "SAPI.h" 04: #include <mysql.h> 05: #include <httpd.h> 06: 07: static int is_connected = 0; 08: static MYSQL dbh; 09: 10: static PHP_FUNCTION(newssite_poke_user) { 11:   long SiteUserID; 12:   char number[32]; 13:   request_rec *r; 14:   if(FAILURE == zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, 15:                                       "l", &SiteUserID)) 16:     return; 17:   snprintf(number, sizeof(number), "%ld", SiteUserID); 18:   /* Stick our SiteUserID as a string into the */ 19:   /* request->connection->user                        */ 20:   r = (request_rec*)SG(server_context); 21:   if(r && r->connection) 22:     r->connection->user = ap_pstrdup(r->connection->pool, number); 23:   return; 24: } 25: 26: static function_entry newssite_functions[] = { 27:   PHP_FE(newssite_poke_user, NULL) 28:   { NULL, NULL, NULL }, 29: }; 30: 31: #define RECENT_HITS_REPLACE \ 32:   "REPLACE INTO recent_hits (SiteUserID, URL, Hitdate) " \ 33:   "                  VALUES (\"%s\", \"%s\", NOW())" 34: 35: static PHP_RSHUTDOWN_FUNCTION(newssite) { 36:   request_rec *r; 37:   char *sql_buffer, *uri_buffer, *user_buffer; 38:   int sql_len, uri_len, user_len, err, should_reattempt = 1; 39:   r = (request_rec*)SG(server_context); 40: 41:   /* If we don't have information to log, leave now. */ 42:   if(!r || !r->connection || !r->connection->user || !r->uri) 43:     goto bail; 44: 45:  reattempt: 46:   if(!is_connected) { 47:     if(!mysql_real_connect(&dbh, "dbhost", "user", "pw", "db", 48:                                        3308, NULL, 0)) goto bail; 49:     is_connected = 1; 50:   } 51:   /* calculate room we need to escape args and construct SQL */ 52:   user_len = strlen(r->connection->user)*2 + 1; 53:   uri_len = strlen(r->uri)*2 + 1; 54:   sql_len = strlen(RECENT_HITS_REPLACE) + user_len + uri_len; 55:   /* allocate space */ 56:   user_buffer = emalloc(user_len); 57:   uri_buffer = emalloc(uri_len); 58:   sql_buffer = emalloc(sql_len); 59:   /* escape our arguments*/ 60:   user_len = mysql_real_escape_string(&dbh, user_buffer, 61:                                           r->connection->user, 62:                                           strlen(r->connection->user)); 63:   uri_len = mysql_real_escape_string(&dbh, uri_buffer, 64:                                          r->uri, strlen(r->uri)); 65:   /* Build our SQL */ 66:   sql_len = snprintf(sql_buffer, sql_len, RECENT_HITS_REPLACE, 67:                      user_buffer, uri_buffer); 68:   /* Run the query and bail out if there are no errors */ 69:   if((err = mysql_real_query(&dbh, sql_buffer, sql_len)) == 0) goto bail; 70:   /* There was a error close down the connection */ 71:   mysql_close(&dbh); 72:   is_connected = 0; 73:   if(should_reattempt--) goto reattempt; 74:  bail: 75:   /* We always return success.                  */ 76:   /* Our failures aren't interesting to others. */ 77:   return SUCCESS; 78: } 79: 80: zend_module_entry newssite_module_entry = { 81:   STANDARD_MODULE_HEADER, 82:   "RealtimeExtensionToAppendRequestData", 83:   newssite_functions, 84:   NULL, NULL, NULL, 85:   PHP_RSHUTDOWN(newssite), 86:   NULL, 87:   "1.0", 88:   STANDARD_MODULE_PROPERTIES, 89: }; 90: 91: #ifdef COMPILE_DL_REALTIMEEXTENSIONTOAPPENDREQUESTDATA 92: ZEND_GET_MODULE(newssite) 93: #endif 


This PHP example is longer than the perl one because current versions of PHP do not support hooks outside of Apache's actual content handler. This will likely be addressed formally in future versions of PHP because it can be useful. The reason we wrote a module in C for PHP (a PHP extension) is so that we could stick our fingers into parts of the Apache runtime that we cannot reach via PHP scripting.

Lines 10-24 illustrate the newssite_poke_user function that will take the SiteUserID argument and place it in the user field of the Apache connection record. This will allow it to be logged via other logging systems already in Apache.

The RSHUTDOWN function (lines 35-78) does all the heavy lifting of performing the REPLACE INTO SQL statements against MySQL.

Lines 80 and on describe the module to PHP when it is loaded from the php.ini file, and the config.m4 file is used by phpize to aid in compiling the extension.

As with all PHP extensions, you place these two files in a directory and run the following:

phpize && ./configure && make && make install.

Then you add to your php.ini:

extension=RealtimeExtensionToAppendRequestData.so

In the authentication code that drives the website, we will take the resultant SiteUserID of the visiting user and "poke" it into the right spot in Apache's data structures:

<?php newssite_poke_user($SiteUserID); ?>

Now on any page, users who are logged in will be journaled to the recent_hits table with the URI they are visiting and the time stamp of the request.

Although this may seem intimidating at first, it took me about an hour to write the previous code, set up a MySQL database, create the tables, configure Apache, and test it end-to-end. It is reasonable to expect any C programmer (without knowledge of Apache, PHP, or MySQL) to produce these 94 lines within a day.

Defining Scope

Now that we have an implementation that can track the data we need, we can define the size and scope of the problem. Wait a minute! Perhaps we should have defined the size and scope of the problem before we built a solution!

Based on the results of Chapter 6, "Static Content Serving for Speed and Glory," we are aiming to service 500 new arrivals per second and 2,000 subsequent page views per second. This was an expected average, and we had projected, based on empirical data, needing to support 15% of the overall traffic in the peak hour. This works out to 9,000 dynamic page loads per second during the peak hour.
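As a quick sanity check, here is the arithmetic behind that figure, using the averages above:

my $avg_per_sec  = 500 + 2_000;            # average dynamic requests/second
my $per_day      = $avg_per_sec * 86_400;  # 216,000,000 requests/day
my $peak_hour    = $per_day * 0.15;        # 15% of a day's traffic in one hour
my $peak_per_sec = $peak_hour / 3_600;     # 9,000 dynamic page loads/second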

Each of the pages served will display information about the total number of users online, the number of users viewing the served page, and the last 30 users who viewed that page.

Let's see whether we can get MySQL to do the things we want it to do. With 500 new arrivals per second and a 30-minute window on our user activity, we see an average of 900,000 users online at any given time. However, we must account for the 15% overall traffic peak hour. That brings this number to 3.24 million users considered to be online at once.
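The arithmetic, under the same assumptions, looks like this:

my $arrivals_per_sec = 500;                          # new visitors/second (average)
my $window           = 30 * 60;                      # 30-minute activity window
my $avg_online       = $arrivals_per_sec * $window;  # 900,000 users on average
my $peak_factor      = 0.15 * 24;                    # peak hour is 3.6x the average hour
my $peak_online      = $avg_online * $peak_factor;   # 3,240,000 users at peak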

So, we have 9,000 insert/updates per second against the recent_hits table and 9,000 executions per second of each of the three previously described SQL queries. Each insert/update will completely invalidate MySQL's query cache. So, the query for the total users online will effectively induce a full table scan over a three-million-row table, and the other two queries will involve index range scans (one with a sort). You might think at first that the latter queries, limited as they are by URL, will be efficient; however, popular URLs will have many rows in the range scan, and, obviously, the same URLs will be queried more often for results. In other words, popular URLs will induce a greater number of insert/updates and queries and as such will perform worse because more data must be digested to answer the queries.

In my tests, by pushing on this implementation I was able to achieve around 800 operations per second. An operation in this case consists of the three queries used to display information on the page followed by the insert/update query used to track the page view. Eight hundred is a far cry from 9,000. Clearly this was not well thought out.

Stepping Back

Clearly, we need to step back from our proposed solution and revisit the problem. The problem was clearly stated and mentioned no requirement that we use an RDBMS to solve it. Why did we choose to use MySQL? Simply put, because it was there. MySQL drives our site; we use it for retrieving information about users and the page content, so it only seemed natural to leverage the existing infrastructure for this task.

The questions being asked are acutely painful for a relational database. Why? ACID. Quite frankly, these semantics are of little importance in providing this specific feature. Specifically, durability is not required. The information being used to compose the "who's online" display is only relevant for a 30-minute window of time. Isolation has no importance either as the operations are effectively single-row manipulations of a single table. Consistency and atomicity are the same thing in this situation because there are no views on the data, and all the data modifications are the result of single-statement transactions. Basically, there is no need to use SQL for this.

There are several fundamental problems with the previously offered solution:

  • Most databases do not keep an accurate cardinality for tables or index ranges. This means (even ignoring the Hitdate limitation) that SELECT COUNT(*) FROM RECENT_HITS and SELECT COUNT(*) FROM RECENT_HITS WHERE URL=? require a significant amount of work.

  • A statement such as SELECT SiteUserID FROM RECENT_HITS WHERE URL=? ORDER BY Hitdate DESC LIMIT 30 will perform a full sort of the resultset despite the index on Hitdate and the LIMIT.

  • The method by which logging was instrumented makes it happen serially with the delivery of content. This means that if, for some reason, the REPLACE INTO statements were to slow down, the whole site would come screeching to a halt, when the desired behavior would be to display slightly outdated information while the system catches up.

It is important to note that these requirements are in addition to the already demanding requirements of the site. The previous tests were performed against idle hardware in the best of conditions. If I were to repeat the tests and actually push a MySQL-based solution into production, I predict the end of the website moments later.

Thinking Outside the Box

The only information we really need to solve this problem is who is viewing which page, in real time. Note that the slow part of the application before was the direct logging to a database. There was nothing egregiously bad about poking the SiteUserID into the logs. In fact, it means that all the information we need is in the logs.

Glancing back at Chapter 9, "Juggling Logs and Other Circus Tricks," note that all these logs are already flying across our network and that subscribing to the log stream is effectively free. This is a crucially important mental step in solving the problem.

One of the serious bottlenecks in the previously posed solution was that the logging of the page view to the recent_hits table was synchronous with the serving of the page. Even though we made a specific point to engineer the logging of that information after the page was served to the end-user, the fact that the Apache resource used to serve the page will not be available to service another request until the SQL command is processed is a recipe for disaster. By processing the page views from the log stream passively, we still achieve near real-time accuracy while ensuring that no slowness in the logging of such data can ever slow down the application. In the event of an unexpected surge of traffic, the only possible negative effect is that the information lags "reality" by a small margin until the back-end layer can catch up.

This, of course, does not solve the issue that MySQL is simply "too slow" to provide this service to your application. I put "too slow" in quotation marks because that is the type of speak you would hear from someone casually describing the problem. The truth of the matter is that MySQL is not at all too slow; it is inappropriate. MySQL is simply the wrong tool for this job.

We need something that is less expensive than a database. Preferably something that doesn't touch storage and has data structures that make it easy (computationally inexpensive) to answer the exact questions we are asking.

Choosing a Platform

The first step to implementing a solution is to choose the platform on which it will be built. We chose MySQL unsuccessfully, but now we have revised the approach to be driven by passive log analysis. We already have a system called spreadlogd that is passively collecting all of our logs via Spread. It seems natural that, because spreadlogd is privy to all the necessary information, it be considered as a platform for supporting this new service.

Spreadlogd already has the infrastructure for subscribing to mod_log_spread log channels as well as supporting loadable modules (extensible in C and perl). Because this needs to perform rather well (9,000 requests/second), we'll approach this problem in C so that we have better control over the data structures that will manage our 3.24 million "online" users.

Spreadlogd's module API provides three hook points: initialization, logline, and shutdown. Initialization and shutdown are each called once in rather obvious places, and logline is called for each log entry seen. Additionally, spreadlogd has a multi-indexed skiplist data structure that will allow us to store hit information in an ideal form.

Let's start by setting up an online.h header that will house some defaults and simple structures to hold the information we need to track:

01: #ifndef _ONLINE_H_
02: #define _ONLINE_H_
03:
04: #include "sld_config.h"
05: #include "module.h"
06: #include "echash.h"
07: #include "nethelp.h"
08: #include "skip_heap.h"
09:
10: #define MAX_AGE 30*60
11: #define MAX_USER_INFO 30
12: #define DEFAULT_PORT 8989
13: #define DEFAULT_ADDRESS "*"
14:
15: typedef struct urlcount {
16:   char *URL;                   /* The URL, so we only store one copy  */
17:   unsigned int cnt;            /* count of users who last viewed this */
18: } urlcount_t;
19:
20: typedef struct hit {
21:   unsigned long long SiteUserID;   /* The viewer's ID     */
22:   char *URL;                       /* The URL, refs urls  */
23:   time_t Hitdate;                  /* the time of the hit */
24: } hit_t;
25:
26: void online_service(int fd, short event, void *vnl);
27: urlcount_t *get_url(const char *url, int len);
28: void cull_old_hits();
29: struct skiplistnode *get_first_hit_for_url(const char *url);
30: unsigned int get_current_online_count();
31:
32: #endif


Although C is always a bit more lengthy than we'd like it to be, it certainly won't do anything we don't tell it to. Outside the run-of-the-mill C header stuff, in lines 10-13 we can see declarations for the 30-minute timeout, the interest in only the last 30 users, and a default listening address for the "who's online" service we will be exposing. Lines 15-18 describe the urlcount_t type that will hold a URL and the number of users (just a count) whose last viewed page was this URL. Lines 20-24 describe the hit_t structure that tracks the time of the last URL loaded by a given user (identified by SiteUserID). Lines 26-30 just expose some functions that will be used to drive the service portion of the module. All these functions' roles should be obvious except for get_first_hit_for_url, which we will cover later.

Although we haven't yet discussed tracking the information, we have an API for accessing the information we need to answer our three service questions: total users online globally, total users on a specific URL, and the last 30 users on a URL.

Spreadlogd uses libevent to drive its network-level aspects such as receiving data from Spread. We, too, can leverage the libevent API to drive our "who's online" service.

The Service Provider

We want this to be fast and efficient (and easy to write), so we will use a simple binary format for the client-server communications. The client will connect and send an unsigned 16-bit integer in network byte order describing the length in bytes of the URL it would like information on followed by the URL. The server will respond to this request by passing back the results as three unsigned 32-bit integers in network endian describing the total users online, the total users on the specified URL, and the number of detailed user records that will follow, respectively. It will follow this with the detailed user records each of which consists of a 64-bit network endian SiteUserID, a 32-bit network endian age in seconds, and 32 bits of padding (because I'm lazy and the example is shorter if we can assume 64-bit alignment). Figure 10.1 illustrates the client-server communication.

Figure 10.1. A sample client-server "who's online" session
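To make the wire format concrete before we dive into the C, here is a small illustrative perl sketch; it fakes a response in memory (using the sample numbers from the Dumper output later in this chapter) rather than talking to the real service, and it assumes a 64-bit perl:

use strict;

# Request: 16-bit network-order length followed by the URL itself.
my $url     = "/b/index.html";
my $request = pack('n', length($url)) . $url;

# Fake response: total online, total on this URL, and one detail record
# (64-bit SiteUserID split into two 32-bit halves, 32-bit age, 32-bit pad).
my ($total, $url_total, $nusers) = (978, 39, 1);
my ($id, $age) = (4108724910, 337);
my $response = pack('NNN', $total, $url_total, $nusers)
             . pack('NNNN', $id >> 32, $id & 0xffffffff, $age, 0);

# Unpacking mirrors what the real client does later in this chapter.
my ($t, $ut, $n)         = unpack('NNN',  substr($response, 0, 12));
my ($hi, $lo, $a, $pad)  = unpack('NNNN', substr($response, 12, 16));
printf "total=%d url_total=%d user=%d age=%ds\n", $t, $ut, ($hi << 32) | $lo, $a;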


The code listing for online-server.c follows:

01: #include "online.h" 02: typedef struct user_hit_info { 03:   unsigned long long SiteUserID;  /* This viewer's ID */ 04:   int age;                        /* Seconds since view */ 05:   int pad;                        /* 4-byte alignment pad */ 06: } user_hit_info_t; 


One user_hit_info_t structure is sent back to the requesting client for each of the (up to 30) most recent viewers of the subject URL.

07: static void get_hit_info(const char *url, unsigned int *total,
08:                          unsigned int *url_total,
09:                          user_hit_info_t *uinfo,
10:                          unsigned int *nusers) {
11:   struct skiplistnode *first;
12:   urlcount_t *uc;
13:   time_t now;
14:   int max_users = *nusers;
15:
16:   /* Clean up any hits that are too old */
17:   cull_old_hits();
18:   *total = get_current_online_count();
19:   if((uc = get_url(url, strlen(url))) == NULL) {
20:     *nusers = *url_total = 0;
21:     return;
22:   }
23:   now = time(NULL);
24:   *url_total = uc->cnt;
25:   first = get_first_hit_for_url(uc->URL);
26:   *nusers = 0;
27:   if(!first) return; /* something is very wrong */
28:   while(*nusers < max_users && first) {
29:     hit_t *hit = (hit_t *)first->data;
30:     /* keep going until we see a new URL */
31:     if(!hit || strcmp(hit->URL, uc->URL)) break;
32:     uinfo[*nusers].SiteUserID = hit->SiteUserID;
33:     uinfo[*nusers].age = now - hit->Hitdate;
34:     (*nusers)++;
35:     sl_next(&first);
36:   }
37:   return;
38: }


The get_hit_info function takes a URL and fills out all the information we need to send back to the requesting client. The first step in the function (line 17) removes any users from the tallies whose last access is considered too old. Lines 18-22 set the totals, fetch specific information about the URL, and return early if no users are currently viewing that URL. The rest of the function finds the "first" view of the specified URL and populates the uinfo list with *nusers of the most recent viewers.

Next comes the tricky network programming part. We need to read the request from the client, format the information retrieved by the get_hit_info function into the on-wire format, and send it to the client.

39: void online_service(int fd, short event, void *vnl) {
40:   netlisten_t *nl = (netlisten_t *)vnl;
41:   int expected_write, actual_write, i;
42:   struct iovec io[4];
43:   unsigned int total, url_total, nusers = MAX_USER_INFO;
44:   user_hit_info_t uinfo[MAX_USER_INFO];
45:   struct {
46:     unsigned short sofar;
47:     unsigned short ulen;
48:     char *url;
49:   } *req;
50:
51:   if(NULL == (req = nl->userdata))
52:     nl->userdata = req = calloc(1, sizeof(*req));
53:


The userdata member of nl lives for the life of the client connection. The first time we enter the function, it is NULL, and we allocate it to represent req, which describes the progress of reading the request from the client. We need to track the progress because we are non-blocking and we may read only a portion of the request and then be re-entered later to finish reading the rest of the socket. This is a nuance of non-blocking network socket programming.

54:   if(!req->ulen) {
55:     /* Read the length of the URL to be passed (network short) */
56:     if(read(nl->fd, &req->ulen, sizeof(req->ulen)) !=
57:        sizeof(req->ulen)) goto bail;
58:     req->ulen = ntohs(req->ulen);
59:     /* Allocate our read length plus a terminator */
60:     req->url = malloc(req->ulen + 1);
61:     req->url[req->ulen] = '\0';
62:   }


The ulen member of req describes the length of the URL we expect to read from the client. If it is zero, we haven't read the expected URL length from the client yet and proceed to do so. In this implementation we expect to read both of the bytes we are expecting at once (and fail if we do not). The two bytes we read are the 16-bit network endian length of the URL to follow. We turn the datum into host-endian form and allocate enough room to read the URL.

63:   while(req->sofar < req->ulen) {
64:     int len = read(nl->fd, req->url + req->sofar, req->ulen - req->sofar);
65:     if(len == -1 && errno == EAGAIN) return;  /* incomplete read */
66:     if(len <= 0) goto bail;                   /* error */
67:     req->sofar += len;
68:   }
69:


Until we have read as many bytes as we expect to see in the URL, we keep reading into our URL buffer. By returning when we see an EAGAIN error, we will be called later to finish the job when more data is available.

70:   /* Get the answer */
71:   get_hit_info(req->url, &total, &url_total, uinfo, &nusers);
72:
73:   /* Pack it on the network */
74:   expected_write = sizeof(total) * 3 + nusers * sizeof(*uinfo);
75:   io[0].iov_base = &total;     io[0].iov_len = sizeof(total);
76:   io[1].iov_base = &url_total; io[1].iov_len = sizeof(url_total);
77:   io[2].iov_base = &nusers;    io[2].iov_len = sizeof(nusers);
78:   io[3].iov_base = uinfo;
79:   io[3].iov_len = nusers * sizeof(*uinfo);
80:
81:   total = htonl(total);
82:   url_total = htonl(url_total);
83:   for(i=0;i<nusers;i++) {
84:     uinfo[i].SiteUserID = bswap64(uinfo[i].SiteUserID);
85:     uinfo[i].age = htonl(uinfo[i].age);
86:   }
87:   nusers = htonl(nusers);
88:


After calling the get_hit_info function (line 71) we must fill out the I/O vector (lines 74-79) with the results and make sure that they are in network endian (lines 81-87).

89:   /* We should be able to write it all at once. We don't support */
90:   /* command pipelining, so the total contents of the outbound   */
91:   /* buffer will only ever be this large.                        */
92:   actual_write = writev(nl->fd, io, 4);
93:   if(actual_write != expected_write) goto bail;
94:
95:   free(req->url);
96:   memset(req, 0, sizeof(*req));
97:   return;


On line 92 we write out the response to the client. We expect to write the whole response in one call to writev. This is a reasonable assumption because we write out only one response at a time; the client doesn't issue a second question without reading the answer to the first. Once written, we free the memory we allocated and zero out the progress-tracking structure in preparation for a subsequent request.

098:  bail:
099:   if(req) {
100:     if(req->url) free(req->url);
101:     free(req);
102:   }
103:   close(nl->fd);
104:   event_del(&nl->e);
105:   return;
106: }


The last bit of code is the path to execute if there is an error servicing the request. It frees memory, closes down the socket, and removes itself from the libevent system.

The Information Collector

Now that our structures are defined, we need to populate them with the log information. We'll do this by storing the URLs (unique with counts) in a hash table and the last hit by user in a multi-indexed skiplist. The multiple indexes on the skiplist will be (you guessed it) the same as those on our original database table: unique on SiteUserID, ordered on Hitdate, and ordered on URL-Hitdate. The code listing for online.c follows:

01: #include "online.h" 02: 03: static Skiplist hits;    /* Tracks each users's last hit */ 04: static ec_hash_table urls; /* Tracks the count on each URL */ 05: 06: void urlcount_free(void *vuc) { 07:    if(((urlcount_t *)vuc)->URL) free(((urlcount_t *)vuc)->URL); 08:    free(vuc); 09: } 10: urlcount_t *get_url(const char *url, int len) { 11:    void *uc; 12:    if(echash_retrieve(&urls, url, len, &uc)) return uc; 13:    return NULL; 14: } 15: static void urlcount_decrement(const char *url) { 16:    urlcount_t *uc; 17:    if((uc = get_url(url, strlen(url))) != NULL) { 18:       if(!(--uc->cnt)) 19:          echash_delete(&urls, url, strlen(url), NULL, urlcount_free); 20:    } 21: } 22: void hit_free(void *vhit) { 23:    urlcount_decrement(((hit_t *)vhit)->URL); 24:    free(vhit); 25: } 


As can be seen with this leading code snippet, C is not a terse or compact language. As discussed earlier, we place the URL view counts in a hash table (line 4) and the actual per-user view information in a multi-indexed skiplist (line 3). Lines 6-9 and 22-25 are resource deallocation functions (free). When a user's hit information is freed from the system, we must decrement the count of current views on that URL (line 23, implemented in lines 15-21).

More C verbosity is evident when we go to implement the comparison functions that drive each skiplist index.

26: /* comparator for the URL,Hitdate index */
27: static int url_hitdate_comp(const void *a, const void *b) {
28:   int ret = strcmp(((hit_t *)a)->URL, ((hit_t *)b)->URL);
29:   if(ret) return ret;
30:   /* Newest (greatest) in front */
31:   return (((hit_t *)a)->Hitdate < ((hit_t *)b)->Hitdate)?1:-1;
32: }
33: /* comparator for the Hitdate */
34: static int hitdate_comp(const void *a, const void *b) {
35:   /* Oldest in front... so we can pop off expired ones */
36:   return (((hit_t *)a)->Hitdate < ((hit_t *)b)->Hitdate)?-1:1;
37: }
38: /* comparator for the SiteUserID */
39: static int SiteUserID_comp(const void *a, const void *b) {
40:   if(((hit_t *)a)->SiteUserID == ((hit_t *)b)->SiteUserID) return 0;
41:   if(((hit_t *)a)->SiteUserID < ((hit_t *)b)->SiteUserID) return -1;
42:   return 1;
43: }
44: static int SiteUserID_comp_key(const void *a, const void *b) {
45:   if(*((unsigned long long *)a) == ((hit_t *)b)->SiteUserID) return 0;
46:   if(*((unsigned long long *)a) < ((hit_t *)b)->SiteUserID) return -1;
47:   return 1;
48: }


Lines 26-32 implement our URL-Hitdate index, lines 33-37 implement our Hitdate index, and lines 38-48 implement the unique SiteUserID index.

Now we have the tools to implement the API that powers the service offered in online-server.c.

49: unsigned int get_current_online_count() {
50:   return hits.size;
51: }
52:
53: void cull_old_hits() {
54:   hit_t *hit;
55:   time_t oldest;
56:   oldest = time(NULL) - MAX_AGE;
57:   while((hit = sl_peek(&hits)) != NULL && (hit->Hitdate < oldest))
58:     sl_pop(&hits, hit_free);
59: }
60:
61: struct skiplistnode *get_first_hit_for_url(const char *url) {
62:   struct skiplistnode *match, *left, *right;
63:   hit_t target;
64:   target.URL = (char *)url;
65:   /* ask for the node one second in the future.  We'll miss and */
66:   /* 'right' will point to the newest node for that URL.        */
67:   target.Hitdate = time(NULL) + 1;
68:   sl_find_compare_neighbors(&hits, &target, &match, &left, &right,
69:                             url_hitdate_comp);
70:   return right;
71: }


The get_current_online_count function returns the total number of online users, which is simply the size of the skiplist holding the unique online users.

One of the things that we had to do in the SQL version of this service was limit our queries to hits that had occurred within the last 30 minutes. In this approach, instead of counting only the hits that occurred within the last 30 minutes, we simply eliminate older hits from the system before answering any such questions; the cull_old_hits function (lines 53-59) performs this. It's good to note that because one of the indexes on the hits skiplist is on Hitdate in ascending order, the first element on that index is the oldest. This means that we can just pop items off the front of the list until we see one that is not too old; popping off a skiplist is O(1) (that is, really inexpensive computationally).

Lines 61-71 are likely the most complicated lines of code in this entire example because the approach for finding the 30 most recent viewers of a URL may not be intuitive. The sl_find_compare_neighbors function attempts to find a node in a skiplist with the side effect of noting the element to the left and right of the node. If the node is not in the skiplist, it will note the nodes that would be on the left and right if it were to exist.

The index we are using for this lookup is the URL-Hitdate index. This means that all the viewers for a given URL are grouped together in the list and are ordered from newest (largest time stamp) to oldest (smallest time stamp). Because this list is used to track hits that have already happened, it stands to reason that the largest time stamp possible would be the current time. On line 67, we set the "target" time stamp to be one second in the future (guaranteeing no match) and then look up the target URL in the hits skiplist. We anticipate that the node we are looking for will be absent, but we also count on the fact that the "right neighbor" returned will be the most recent viewer of the URL in question. This is depicted in Figure 10.2.

Figure 10.2. Finding the most recent viewer for a URL


Now we have done everything but take log data and populate our system.

72: static int online_init(const char *config) {
73:   char *host = NULL, *sport = NULL;
74:   unsigned int port;
75:   echash_init(&urls);
76:   sl_init(&hits);
77:   sl_set_compare(&hits, hitdate_comp, hitdate_comp);
78:   sl_add_index(&hits, SiteUserID_comp, SiteUserID_comp_key);
79:   sl_add_index(&hits, url_hitdate_comp, url_hitdate_comp);
80:
81:   if(config) host = strdup(config);
82:   if(host) sport = strchr(host, ':');
83:   if(sport) {
84:     *sport++ = '\0';
85:     port = atoi(sport);
86:   } else
87:     port = DEFAULT_PORT;
88:   if(!host) host = DEFAULT_ADDRESS;
89:   if(tcp_dispatch(host, port, 100, EV_READ|EV_PERSIST, online_service,
90:                   NULL) < 0) {
91:     fprintf(stderr, "Could not start service on %s\n", config);
92:     return -1;
93:   }
94:   return 0;
95: }
96:
97: static void online_shutdown() {
98:   fprintf(stderr, "Stopping online module.\n");
99: }


The online_init function is invoked during the configuration phase of spreadlogd's startup sequence when the LoadModule directive is seen. Lines 75-79 initialize our hash table and create our skiplist with the three indexes we need. Lines 81-88 parse an optional argument (passed from spreadlogd.conf), and lines 89-90 register the "who's online" service we built in online-server.c with libevent (via the spreadlogd helper function tcp_dispatch). Lines 98-99 just implement an online_shutdown function that says goodbye.

100: #define SET_FROM_TOKEN(a,b) do { \
101:   a ## _len = tokens[(b)+1]-tokens[(b)]-1; \
102:   a = tokens[(b)]; \
103: } while(0)
104:
105: static void online_logline(SpreadConfiguration *sc,
106:          const char *sender, const char *group, const char *message) {
107:   const char *tokens[8];
108:   const char *user, *url;
109:   unsigned long long SiteUserID;
110:   int user_len, url_len;
111:   urlcount_t *uc;
112:   hit_t *hit;
113:   int i;
114:
115:   tokens[0] = message;
116:   for(i=1; i<8; i++) {
117:     tokens[i] = strchr(tokens[i-1], ' ');
118:     if(!tokens[i]++) return;   /* couldn't find token */
119:   }
120:   /* the userid is field 3 and the URI is field 7 based on white space */
121:   SET_FROM_TOKEN(user, 2);
122:   SET_FROM_TOKEN(url, 6);


Lines 100-103 and 115-122 perform tokenization of the logline (passed to us by spreadlogd) based on white space. We know that in the Apache common log format the user is field 3, and the URL is field 7 (which are array index offsets 2 and 6, respectively).
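For reference, here is how a fabricated common log format line tokenizes; field 3 carries the poked SiteUserID and field 7 carries the URI, exactly as the code above assumes (the host, date, and byte count are made up):

use strict;

# A made-up common-log-format line with the SiteUserID in the user field.
my $logline = '10.1.2.3 - 662735628 [12/Oct/2006:10:00:00 -0400] ' .
              '"GET /b/index.html HTTP/1.0" 200 5120';
my @fields = split / /, $logline;
my ($SiteUserID, $uri) = ($fields[2], $fields[6]);
print "user=$SiteUserID uri=$uri\n";    # user=662735628 uri=/b/index.html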

123:   SiteUserID = strtoul(user, NULL, 10);
124:   /* Find the URL in the URL counts, creating if necessary */
125:   if((uc = get_url(url, url_len)) == NULL) {
126:     uc = calloc(1, sizeof(*uc));
127:     uc->URL = malloc(url_len+1);
128:     memcpy(uc->URL, url, url_len);
129:     uc->URL[url_len] = '\0';
130:     echash_store(&urls, uc->URL, url_len, uc);
131:   }
132:   /* Increment the counter on the URL */
133:   uc->cnt++;
134:


We translate the user to an unsigned long (because our user IDs are numeric), and we look up the URL in our URL hash table. If we don't find a copy, this is the only user viewing this URL in the system, so we create a new urlcount_t object with this URL and insert it. We also increment the count of current viewers on this URL.

135:   /* Fetch this user's last hit */
136:   hit = sl_find_compare(&hits, &SiteUserID, NULL, SiteUserID_comp);
137:   if(!hit) {
138:     /* No hit for this user, allocate one */
139:     hit = calloc(1, sizeof(*hit));
140:   }
141:   else {
142:     /* We have an old hit.  We must reduce the count on the old URL.
143:      * It is not our string, so we don't free it. */
144:     sl_remove_compare(&hits, &SiteUserID, NULL, SiteUserID_comp);
145:     urlcount_decrement(hit->URL);
146:   }
147:   hit->URL = uc->URL;
148:   hit->SiteUserID = SiteUserID;
149:   hit->Hitdate = time(NULL);
150:   sl_insert(&hits, hit);
151:   cull_old_hits();
152: }


Finally, we pull a switch. We attempt to find the hit recording the URL the user was viewing immediately before this one was logged and remove it if successful. Then we insert the new hit information marking the user, time, and URL. Finally, we preemptively remove any hit information that is no longer pertinent.

sld_module_abi_t online = {
  "online",
  online_init,
  online_logline,
  online_shutdown,
};


The last bit of code required is used to provide spreadlogd's module loader a handle to our logging implementation. The sld_module_abi_t is a structure that contains the name of the module and function pointers to our initialization, logging, and shutdown routines.

C, as you know by now, is not a wonderful example language to be used in a book that isn't about C. However, because we are building a performance-centric solution here, it simply makes sense to use a language that won't blink at 9,000 queries per second. The particular features we need to achieve such performance are cheap data management and extremely granular control over the data structures we use. This is possible in other languages as well, but I know C, so sue me.

Loading the Module

Now that we have a module, we will compile it, link it, and install it as /opt/spreadlogd/libexec/online.so on a server called sldhost running spreadlogd installed in /opt/spreadlogd/. Then we tailor a spreadlogd.conf file to log our news site logs through our new module:

BufferSize = 65536
ModuleDir = /opt/spreadlogd/libexec
LoadModule online *:8989
Spread {
  Port = 4803
  Log {
    Group newssite
    ModuleLog online
  }
}


This connects to Spread on port 4803 and listens for mod_log_spread published logs to the group newssite, pumping all the messages it sees through our new module.

After this is up and running, we have a fully usable "who's online" service running on port 8989. But how do we pull data from it?

Writing a Client

We have a service running that presumably has all the data we need to meet the goals for our new feature. Our website is written in perl, so we can scratch together the following modules to implement, from the client side, the client-server protocol described earlier. NewsSite/WhosOnline.pm contains our client base class:

01: package NewsSite::WhosOnline;
02:
03: use NewsSite::WhosOnline::INET;
04: use vars qw/$VERSION/;
05: use strict;
06: use bigint;
07:
08: $VERSION = "1.0";
09:
10: sub new {
11:   my $class = shift;
12:   return NewsSite::WhosOnline::INET->new(@_);
13: }


This is the perl package preamble; it declares a version number and provides a new class method. In this method, we just hand our arguments to the new method of our subclass, NewsSite::WhosOnline::INET, to build a TCP/IP client. This is good practice because perhaps in the future we will provide some other method of connecting to the service.

14: sub query {
15:   my($self, $url) = @_;
16:   $url = pack('n', length($url)) . $url;
17:   # Binary net-strings-style request
18:   my $wlen = $self->syswrite($url, length($url));
19:   die if(!defined($wlen) || ($wlen != length($url)));
20:   return $self->read_response;
21: }


The query method takes a URL and writes it to the server, preceded by its length as a 16-bit network-endian unsigned integer.

22: sub getcounts {
23:   my $self = shift;
24:   my $pss;
25:   # 4 bytes x (total, url_total, nusers)
26:   die if($self->sysread($pss, 12) != 12);
27:   return unpack('NNN', $pss);
28: }
29: sub getuserinfo {
30:   my ($self, $count) = @_;
31:   my ($pss, @hits);
32:   # The structure handed back to us is (for each user)
33:   # 8 bytes of SiteUserID, 4 bytes of age in seconds
34:   # and 4 bytes of padding.
35:   die if($self->sysread($pss, 16 * $count) != (16 * $count));
36:   my @part = unpack('NNNN' x $count, $pss);
37:   while(@part) {
38:     # this little trick will allow bigint to kick in without
39:     # rolling an int on 32bit systems.
40:     my $rv = $part[0];
41:     $rv *= 0xffffffff;
42:     $rv += $part[0] + $part[1];
43:     push @hits, [ "$rv", $part[2] ];
44:     # We don't do anything with $part[3], it's padding.
45:     splice(@part, 0, 4); # eat these 4, and onto the next
46:   }
47:   return \@hits;
48: }
49: sub read_response {
50:   my $self = shift;
51:   my $response;
52:   eval {
53:     my ($total, $url_total, $nusers) = $self->getcounts;
54:     $response = { total       => $total,
55:                   url_total   => $url_total,
56:                   recent_hits => $self->getuserinfo($nusers) };
57:   };
58:   return $@?undef:$response;
59: }
60: 1;


The read_response method (lines 49-59) simply calls the getcounts and getuserinfo methods to populate a hash and return it to the caller. The getcounts method reads and unpacks our three 32-bit unsigned integers in network endian (lines 26-27). In getuserinfo we read and unpack all the user hit information on lines 35 and 36 and spend lines 37-46 processing them and compensating for the fact that the SiteUserID is a 64-bit integer. We want to support large integers, so we used the bigint package on line 6; this allows perl to handle that "magically." The NewsSite/WhosOnline/INET.pm code listing follows:

01: package NewsSite::WhosOnline::INET;
02:
03: use NewsSite::WhosOnline;
04: use IO::Socket::INET;
05: use vars qw/$VERSION @ISA/;
06: use strict;
07:
08: $VERSION = "1.0";
09: @ISA = (qw/NewsSite::WhosOnline IO::Socket::INET/);
10:
11: sub new {
12:   my ($class, $hostname, %args) = @_;
13:   $args{Port} ||= 8989;   # set the default port
14:   return $class->IO::Socket::INET::new(PeerAddr => $hostname,
15:                                        PeerPort => $args{Port},
16:                                        Proto    => 'tcp',
17:                                        Timeout  => $args{Timeout});
18: }
19: 1;


The NewsSite::WhosOnline::INET package implements the TCP/IP-specific details of the client by subclassing IO::Socket::INET. On line 13, we default the remote port to 8989 to keep it in line with our server implementation.

It's important to note that the "who's online" client implementation is only 79 lines of code. Were our application written in PHP, Java, or just about any other language, the client code would be similarly simple.

Running Some Tests

Now that we have a working client, we can ping our service and see who is currently visiting /b/index.html:

# perl -MNewsSite::WhosOnline -MData::Dumper \
    -e 'print Dumper(NewsSite::WhosOnline->new("sldhost")
                                         ->query("/b/index.html"))'
$VAR1 = {
          'url_total' => 39,
          'recent_hits' => [
                             [
                               '662735628',
                               0
                             ],
                             [
                               '2873826139',
                               10
                             ],
                             # 27 more detail here
                             [
                               '4108724910',
                               337
                             ],
                           ],
          'total' => 978
        };


Now that we have everything working, did we solve our problem? Of course, we think we did. We are confident that our design doesn't suffer from the acute performance issues identified in the SQL-based implementation. But was it enough?

Testing the Solution

How do you go about testing a solution like this? Our goal was to sustain 9,000 queries per second. Although it is easy to write a simple test that runs and requests information on 9,000 URLs as rapidly as possible, that is only useful if the service is populated with a dataset that fairly represents what we expect to see in production.

One of the truly beautiful aspects of this solution is that the tracking side of it does not sit in the critical path. If it is not being queried for URL information there is no way it can disrupt our architecture. This allows us to do performance tests without investing a tremendous amount in test-harness design. We can run our test queries against a new, unused spreadlogd instance feeding on real production logs.

So, we crank up a new box running spreadlogd and our "who's online" service module, and we wait. It tracks users' clickstreams up to 30 minutes old. The instance will only have usable data (realistic size, quantity, and diversity) after 30 minutes of operation.

After the system is warmed up with some real data, we run a really simple test that requests a frequently viewed page 100,000 times in a loop and calculates the actual queries per second. This isn't a bad test in general because we know a bit about our data structures. Every URL looked up costs the same, and there is a limit of 30 records handed back; as long as the URL in our test has many users associated with it, it will be a "worst case" and thus a fairly good test. We can script this test case as whosonline-test.pl:

use strict;
use NewsSite::WhosOnline;
use Time::HiRes qw/gettimeofday tv_interval/;

my $cnt = shift || 100000;
my $nw = NewsSite::WhosOnline->new("sldhost");
my $start = [gettimeofday];
for(my $i=0; $i<$cnt; $i++) {
  $nw->query("/b/index.html");
}
printf "$cnt queries [%0.2f q/s]\n", $cnt/tv_interval($start);


We can run our test and find out how fast our spectacular new service is.

; perl whosonline-test.pl
100000 queries [43.29 q/s]


Oh, dear! Forty-three queries per second, that's just not right! How could that possibly be? Is it the test script or is it the server?

; time perl -d:DProf whosonline-test.pl
100000 queries [43.29 q/s]
1911.386u 132.212s 38:30.46 88.4%      0+0k 0+0io 390pf+0w


The preceding line shows 1911 seconds of CPU time to run the script! The script isn't doing anything but asking questions. What is taking so much horsepower?

; perl -d:DProf whosonline-test.pl
100000 queries [43.29 q/s]
; dprofpp tmon.out
Total Elapsed Time = 1314.906 Seconds
  User+System Time = 1203.177 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c  Name
 20.7   249.0 407.63 640001   0.0000 0.0001  Math::BigInt::new
 19.4   233.4 257.00 182000   0.0000 0.0000  Math::BigInt::numify
 14.4   173.2 164.10 920000   0.0000 0.0000  Math::BigInt::round
 13.0   156.4 1355.1 276000   0.0000 0.0000  Math::BigInt::__ANON__
 9.48   114.0 392.31 300000   0.0000 0.0001  Math::BigInt::badd
 9.19   110.6 1424.2 100000   0.0011 0.0142  NewsSite::WhosOnline::getuserinfo
 8.95   107.6 236.65 340000   0.0000 0.0001  Math::BigInt::objectify
 8.90   107.1 207.33 320000   0.0000 0.0001  Math::BigInt::bmul
 6.50   78.15 59.958 182000   0.0000 0.0000  Math::BigInt::Calc::_num
 5.48   65.88 62.890 300000   0.0000 0.0000  Math::BigInt::_split
 5.28   63.50 57.109 640001   0.0000 0.0000  Math::BigInt::Calc::_new
 3.76   45.26 72.699 300000   0.0000 0.0000  Math::BigInt::bstr
 3.03   36.42 33.430 300000   0.0000 0.0000  Math::BigInt::Calc::_str
 2.70   32.50 29.310 320000   0.0000 0.0000  Math::BigInt::Calc::_mul_use_div
 2.64   31.77 28.780 300000   0.0000 0.0000  Math::BigInt::Calc::_add


It should be clear that we spend pretty much all of our time dealing with Math::BigInt, which is the way that 32-bit perl can deal transparently with 64-bit integers. This is not what we bargained for, so let's eliminate this by passing the SiteUserID back to the perl caller as an array of a high and low part (that is, the 32 most significant bits as a 32-bit integer and the 32 least significant bits as a 32-bit integer). By doing this we will avoid the majority of the expense highlighted in our profiling output, and, hopefully, our solution will perform adequately.

We will comment out line 6 of the NewsSite/WhosOnline.pm file and change the getuserinfo method as follows:

29: sub getuserinfo {
30:   my ($self, $count) = @_;
31:   my ($pss, @hits);
32:   # The structure handed back to us is (for each user)
33:   # 8 bytes of SiteUserID, 4 bytes of age in seconds
34:   # and 4 bytes of padding.
35:   die if($self->sysread($pss, 16 * $count) != (16 * $count));
36:   my @part = unpack('NNNN' x $count, $pss);
37:   while(@part) {
38:     # this little trick will allow bigint to kick in without
39:     # rolling an int on 32bit systems... TOO EXPENSIVE
40:     # my $rv = $part[0];
41:     # $rv *= 0xffffffff;
42:     # $rv += $part[0] + $part[1];
43:     push @hits, [ [ $part[0], $part[1] ], $part[2] ];
44:     # We don't do anything with $part[3], it's padding.
45:     splice(@part, 0, 4); # eat these 4, and onto the next
46:   }


We comment out lines 40-42, which was our "clever" bigint helper. Then we change line 43 to pass the SiteUserID back as an array of $part[0] and $part[1]. Then, we rerun our test.

; time perl -d:DProf whosonline-test.pl
100000 queries [2858.75 q/s]
27.208u 4.078s 0:35.11 89.0%    0+0k 0+0io 388pf+0w


This is much more reasonable. The example is still not where we want to be, but we also won't be running a single test client in the end. Our goal is to have our web application be a client to this service and be able to perform 9,000 queries per second against it.

To better emulate this, we take the previous test and run it from a number of our web servers. We're no longer interested in how long the test takes or what the perl profiling looks like; we just want the "q/s" results. Starting tests across all the various servers involves an element of human error. The tests will not start at the same time, and they won't end at the same time. The goal is to gain an understanding of what performance we can achieve with them all running in parallel. The easiest way to do this is to understand the human error and make sure that its effect on the outcome is minimal. Going overboard in the testing process here can be a waste of time as well. The purpose of this exercise is to build confidence in the performance of our solution. We'll bump the iterations up to 10,000,000 and let the test run on eight nodes. It takes just over two hours, and the results are excellent:

www-0-1:   1534 q/s
www-0-2:   1496 q/s
www-0-3:   1521 q/s
www-0-4:   1501 q/s
www-0-5:   1524 q/s
www-0-6:   1462 q/s
www-0-7:   1511 q/s
www-0-8:   1488 q/s


That should do just fine. Across all the nodes, we are seeing 12,037 queries per second.
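For what it's worth, kicking off the runs in rough unison doesn't require anything fancy; a small fork-and-ssh driver such as the following sketch will do (the hostnames, remote path, and iteration count are assumptions matching the test above):

use strict;

my @nodes = map { "www-0-$_" } (1 .. 8);
my %pids;
for my $node (@nodes) {
  my $pid = fork();
  die "fork failed: $!" unless defined $pid;
  if ($pid == 0) {
    # Each child runs the test remotely and inherits our stdout.
    exec "ssh", $node, "perl whosonline-test.pl 10000000";
    exit 1;    # only reached if exec fails
  }
  $pids{$pid} = $node;
}
waitpid($_, 0) for keys %pids;    # wait for every node to report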

This chapter contains a lot of complicated code; as such, the "who's online" module discussed here is included in the spreadlogd source distribution for your convenience.



