Testing Philosophy for Logging


In the previous section, Logging to Files, you changed logging behavior by modifying the logging.properties file. The ability to make dynamic changes using properties files is very powerful and allows your system to remain flexible.

Note that the test you wrote executed regardless of where the logging output went. Your test code instead proved that your code sent a specific message to the logger object. You were able to prove this by inspecting the message sent from the logger to one of its handlers.
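To make the technique concrete, here is a minimal sketch of such a capturing handler. The class name TestHandler and its method names are assumptions made for this example; the actual test from the previous section may differ in detail.

    import java.util.logging.Handler;
    import java.util.logging.LogRecord;

    // A test-only handler that remembers the last record published,
    // so a test can inspect the message sent to the logger.
    public class TestHandler extends Handler {
        private LogRecord lastRecord;

        @Override
        public void publish(LogRecord record) {
            lastRecord = record;
        }

        @Override
        public void flush() {
        }

        @Override
        public void close() {
        }

        public String lastMessage() {
            return lastRecord == null ? null : lastRecord.getMessage();
        }
    }

A test attaches an instance with logger.addHandler(handler), executes the code under test, and asserts against handler.lastMessage(). Nothing about files or consoles enters into the assertion.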

However, the destination of logging output is another specification detail, one that needs to be adequately tested. It would be very easy to make invalid changes to the properties file. When the application shipped, those changes would cause serious problems. It is imperative that you test not only your Java code but also how the configuration of the system impacts that code.

You could write JUnit tests to ensure that log files were correctly created.[11] The strategy for the test (sketched in code after the list) would be:

[11] ...once you learned how to write code that works with files. See Lesson 11.

  1. write out a properties file with the proper data

  2. force the logging facility to load this properties file

  3. execute code that logs a message

  4. read the expected log file and ensure that it contains the expected message
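Here is a rough sketch of such a test. Everything specific in it, including the logger name, the log file name, and the properties content, is an assumption made for illustration.

    import java.io.BufferedReader;
    import java.io.ByteArrayInputStream;
    import java.io.FileReader;
    import java.util.logging.LogManager;
    import java.util.logging.Logger;

    import junit.framework.TestCase;

    public class LogFileTest extends TestCase {
        public void testMessageAppearsInLogFile() throws Exception {
            // 1. write out a properties file with the proper data
            //    (supplied here as an in-memory stream for brevity)
            String properties =
                "handlers=java.util.logging.FileHandler\n"
                + "java.util.logging.FileHandler.pattern=test.log\n"
                + "java.util.logging.FileHandler.formatter="
                + "java.util.logging.SimpleFormatter\n";

            // 2. force the logging facility to load this properties file
            LogManager.getLogManager().readConfiguration(
                new ByteArrayInputStream(properties.getBytes()));

            // 3. execute code that logs a message
            Logger logger = Logger.getLogger("sis.studentinfo.Student");
            logger.info("expected message");

            // 4. read the expected log file and ensure that it
            //    contains the expected message
            StringBuilder contents = new StringBuilder();
            BufferedReader reader =
                new BufferedReader(new FileReader("test.log"));
            String line;
            while ((line = reader.readLine()) != null)
                contents.append(line);
            reader.close();
            assertTrue(contents.toString().contains("expected message"));
        }
    }

Note that this test configures and exercises the real logging facility and touches the file system, which is exactly what pushes it beyond the scope of a unit test.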

Or you could consider that this test falls into the scenario of what is known as integration testing. (Some shops may refer to this as customer testing, system testing, or acceptance testing.) You have already tested that your unit of code, Student, interacts with the logging facility correctly. Testing how the logging facility works in conjunction with dynamic configurations begins to fall outside the realm of unit testing.

Regardless of the semantics, whether or not you consider such a test a unit test, it is imperative that you test the configuration that you will ship. You might choose to do this manually and visually inspect the log files as you did in the previous section. You might translate the four-step strategy above into an executable test and call it an integration test. You will want to execute such tests as part of any regression test suite. A regression test suite ensures that new changes don't break existing code by executing a comprehensive set of tests against the entire system.

Logging is a tool for supportability. Supportability is a system requirement, just like any other functional requirement. Testing logging at some level is absolutely required. But should you write a test for each and every log message?

The lazier answer is no. Once you have proved that the basic mechanics of logging are built correctly, you might consider writing tests for additional logging a waste of time. Often, the only reason you log is so that you as a developer can analyze what the code is doing. Adding new calls to the logger can't possibly break things.

The better answer is yes. It will be tedious to maintain a test for every log message, but it will also keep you from getting into trouble for a few reasons.

Many developers introduce try-catch blocks because the compiler insists upon them, not because they have a specification or test. They don't have a solution for what to do when an exception occurs. The developer's nonthinking reaction is to log a message from within the catch block. This produces an Empty Catch Clause, effectively hiding what could be a serious problem.

By insisting that you write a test for all logged messages, you don't necessarily solve the problem of Empty Catch Clause. But two things happen: First, you must figure out how to emulate generating the exception in question in order to code the test. Sometimes just going through this process will educate you about how to eliminate the need for an Empty Catch Clause. Second, writing a test elevates the folly of Empty Catch Clause to a higher level of visibility. Such a test should help point out the problem.
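As a sketch of what such a test might look like: the Parser class below, its message text, and the test names are all invented for this example. The point is the shape of the test, not the specifics.

    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.LogRecord;
    import java.util.logging.Logger;

    import junit.framework.TestCase;

    public class CatchClauseLoggingTest extends TestCase {
        // an invented class whose catch clause does nothing but log
        static class Parser {
            static final Logger log =
                Logger.getLogger(Parser.class.getName());

            int parse(String input) {
                try {
                    return Integer.parseInt(input);
                } catch (NumberFormatException e) {
                    // the nonthinking reaction: log and move on
                    log.log(Level.WARNING, "bad input: " + input, e);
                    return 0;
                }
            }
        }

        private LogRecord captured;

        public void testBadInputIsLogged() {
            Parser.log.addHandler(new Handler() {
                @Override
                public void publish(LogRecord record) {
                    captured = record;
                }

                @Override
                public void flush() {
                }

                @Override
                public void close() {
                }
            });

            // emulate generating the exception: nonnumeric input
            new Parser().parse("not a number");

            assertEquals("bad input: not a number", captured.getMessage());
        }
    }

Writing this test forces you to produce the nonnumeric input that drives the catch clause, which is precisely the exercise that may teach you how to eliminate the clause altogether.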

Another benefit of writing a test for every logged message is that it's painful. Sometimes pain is a good thing. The pain here will make you think about each and every logging message going into the system. "Do I really need to log something here? What will this buy me? Does it fit into our team's logging strategy?" If you take the care to log only with appropriate answers to these questions, you will avoid the serious problem of overlogging.

Finally, it is possible to introduce logging code that breaks other things. Unlikely, but possible.


