Log Files


Whenever someone gets a page from a Web site, the server makes note of it. In a traditional store or office, management can only guess where people go and what they look at, but Web servers know exactly who is looking at what, and when (though, of course, looking doesn't guarantee understanding or interest). A Web site is like a jewelry store: visitors have to ask to see every piece they're interested in. It's different from a supermarket, where a shopper can spend all day squeezing every tomato and no one would ever know. That's why a jewelry store clerk is likely to understand customers' behavior much better than a supermarket cashier does.

Web servers can collect a surprisingly large amount of information. For every request (every page, every image, every search), they record the address the request came from, what was requested, when it was requested, what browser made the request, what operating system that browser was running on, and a number of other facts about the connection. This adds up quickly: it's not unusual for busy sites to produce gigabyte-sized log files daily.
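
To make this concrete, here is a minimal sketch in Python of how one such record might be pulled apart. It assumes the server writes the widely used Apache/Nginx "combined" log format; the sample line and field names are illustrative, not taken from any particular site.

import re

# One request per line in the "combined" log format (an assumption;
# other server configurations log different fields).
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) '         # who made the request
    r'\[(?P<time>[^\]]+)\] '                    # when it was made
    r'"(?P<request>[^"]*)" '                    # what was requested
    r'(?P<status>\d{3}) (?P<size>\S+) '         # how the server answered
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'  # coming from where, with what browser
)

sample = ('192.0.2.1 - - [10/Oct/2000:13:55:36 -0700] '
          '"GET /products/rings.html HTTP/1.0" 200 2326 '
          '"http://www.example.com/start.html" '
          '"Mozilla/4.08 [en] (Win98; I ;Nav)"')

match = LOG_PATTERN.match(sample)
if match:
    entry = match.groupdict()
    print(entry['host'], entry['time'], entry['request'], entry['agent'])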

So how is this mountain of data mined? Carefully. Up-front planning reduces the vertigo that comes with confronting so much raw information. As you answer your initial questions, you discover what other information should be examined, collected, and processed.
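
As an illustration of such a first pass, a few lines of Python can answer one simple starting question: which pages are requested most often? The file name access.log is hypothetical, and the code assumes the quoted-request layout shown in the earlier sketch.

from collections import Counter

def top_pages(log_path, n=10):
    """Tally the most-requested paths in an access log."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            try:
                # The request field, e.g. 'GET /path HTTP/1.0', sits
                # between the first pair of double quotes.
                path = line.split('"')[1].split()[1]
            except IndexError:
                continue  # skip malformed lines
            counts[path] += 1
    return counts.most_common(n)

print(top_pages('access.log'))  # 'access.log' is a hypothetical name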
