Process Ingredients to Value


I'm not much of a process guy, but I still would like to add a couple of thoughts. Before I get started, what do I mean by "process"? It's the methodology for moving forward in projects: What should be done, when, and how?

The classic process was the so-called waterfall. Each phase followed after the other in a strictly linear fashion with no going back whatsoever. A condensed description could be that first a specification was written, and then the system was built from that specification, then tested, then deployed.

Since then, numerous different processes have been introduced over the years, all with merits and pitfalls of their own. A recent one, Extreme Programming (XP) [Beck XP], one of the processes gathered under the umbrella of Agile [Cockburn Agile] processes, could probably be described as the opposite of waterfall. One of the basic ideas of XP is that it's impossible to know enough to write a really good, detailed specification on day one. Knowledge evolves during the project, not just because time passes, but because parts of the system get built, which is a very efficient way of gaining insight.

Note

There's much more to XP than what I just said. See, for example, [Beck XP] for more information. I will discuss TDD and Refactoring, which have their roots in XP.

Also note that XP doesn't always start from scratch. In the XP forums there is also a lot of interest in XP and legacy systems. See, for example, Object-Oriented Reengineering Patterns [Demeyer/Ducasse/Nierstrasz OORP].


I try to find a good combination of smaller ingredients for the situation. I have a couple of different current favorite ingredients. I'd like to discuss Domain-Driven Design, Test-Driven Development, and Refactoring, but let's start with Up-Front Architecture Design.

XP and a Focus on the User

Another thing that was considered new with XP was its focus on user-centric development. Users should be involved in the projects throughout the complete development.

For a Swedish guy, that wasn't very revolutionary. I don't know why, but we have a fairly long history of user-centric development in Sweden, long before XP.

Without meaning to sound as if Sweden is better than any other country, I should add that we also have a long history of using waterfall processes.


Up-Front Architecture Design

Even though I like many of the Agile ideas about not making up-front, premature decisions about things that couldn't possibly be known at that stage, I don't think that we should start the construction of new projects with only a blank sheet of paper, either. Most often, we have a fair amount of information from the very beginning (the more the better) about the production environment, expected load, expected complexity, and so on. At least keep that in the back of your mind and use the information for doing some initial/early proof of concept of your up-front architecture.

Reuse Ideas from Your Successful Architectures

We can't afford to start from scratch with every new application, and this is especially true when it comes to the architecture. I usually think about my current favorite default architecture and see how it fits with the situation and requirements of the new application that is to be built. I also evaluate the last few applications to think of how they could have been improved from an architecture perspective, again with the context of the new application in mind. Always evaluating and trying to improve is definitely something to value.

If I assume for the moment that you like the idea of the Domain Model pattern [Fowler PoEAA], the architecture decisions could actually be a bit less hard to make initially because a great deal of the focus will go just there, on the Domain Model. Deciding whether or not to use the Domain Model pattern is actually an up-front architecture design decision, and a very important one because it will affect a lot of your upcoming work.

It is also important to point out that even though you do make an up-front decision, it's not written in stone, not even your decision about whether or not to utilize the Domain Model pattern.
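To make that decision a bit more concrete, here is a minimal sketch of what a small piece of a Domain Model might look like in C#. The Order/OrderLine names and the quantity rule are purely hypothetical illustrations (they are not taken from this chapter); the point is only that behavior and rules live on the domain classes themselves rather than in transaction scripts or datasets.

using System;
using System.Collections.Generic;

// A minimal, hypothetical Domain Model sketch. The names and the rule
// are illustrative only.
public class OrderLine
{
    private readonly string _product;
    private readonly int _quantity;

    public OrderLine(string product, int quantity)
    {
        _product = product;
        _quantity = quantity;
    }

    public string Product { get { return _product; } }
    public int Quantity { get { return _quantity; } }
}

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    // Domain behavior and rules live on the entity itself,
    // not in a transaction script or behind a recordset.
    public void AddLine(string product, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentException("Quantity must be positive.");

        _lines.Add(new OrderLine(product, quantity));
    }

    public int TotalQuantity
    {
        get
        {
            int total = 0;
            foreach (OrderLine line in _lines)
                total += line.Quantity;
            return total;
        }
    }
}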

Note

I recently heard that the recommendation from some gurus was to implement the Transaction Script pattern [Fowler PoEAA] with the Recordset pattern [Fowler PoEAA] for as long as possible and move to another solution when it proved to be a necessity. I disagree. Sure, there are worse things that could happen, but that transition is not something I'd like to do late in the project because it will affect so much.


Consistency

An important reason for why it's good to make early architecture decisions is so that your team of several developers will work in the same direction. Some guidelines are needed for the sake of consistency.

The same is also true for IS departments that build many applications for the company. It's very beneficial if the architectures of the applications are somewhat similar as it makes it much easier and more efficient to move people between projects.

Software Factories

This brings us nicely to talking a little bit about Software Factories [Greenfield/Short SF] (with inspiration from Product Line Architecture [Bosch Product Line]). The idea of Software Factories is to have two lines in the software company. One line creates architectures, frameworks, and such to be used for families of applications. The other line creates the applications by using what the first line has produced, and thereby amortizing the cost of the frameworks on several projects.

Note

A problem is that it's troublesome to just invent a framework. It's probably a better idea to harvest instead [Fowler HarvestedFramework].


Another thing that is somewhat problematic with Software Factories, though, is that they probably require pretty large organizations before being efficient to use. A friend who has used Product Line Architectures said that the organization needs a head count of thousands rather than fifty or a hundred because of the large investments and overhead costs. He also said (and I've heard the same from others) that even in organizations that have started to use Product Line Architectures, it's not necessarily and automatically used all the time. The overhead and bureaucracy it brings with it should not be underestimated. Think about a tiny framework that you use in several applications, and then think much bigger, and you get a feeling for it.

That said, I definitely think the Software Factories initiative is interesting.

At the heart of Software Factories are Domain Specific Languages (DSLs) [Fowler LW]. A quick description could be that a DSL deals with a sub-problem using a language specialized for the task at hand. The languages themselves can be graphical or textual. They can also be generic, as with XML, UML, and C#, or specific, like the WinForms editor in VS.NET or a little language you define on your own for a certain task.
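As a small illustration of the "little language you define on your own" end of that spectrum, here is a hypothetical sketch of a textual DSL embedded in C# as a fluent interface. The CustomerFilter example is made up for this purpose; it is not part of any Software Factories tooling.

// A hypothetical "little language" for describing a customer filter,
// embedded in C# as a fluent interface. Purely illustrative.
public class CustomerFilter
{
    private string _country;
    private decimal _minimumOrderTotal;

    public CustomerFilter InCountry(string country)
    {
        _country = country;
        return this;
    }

    public CustomerFilter WithOrdersTotallingAtLeast(decimal amount)
    {
        _minimumOrderTotal = amount;
        return this;
    }

    public override string ToString()
    {
        return string.Format("Country={0}, MinOrderTotal={1}", _country, _minimumOrderTotal);
    }
}

// Usage reads almost like a sentence in the problem domain:
//
// CustomerFilter filter = new CustomerFilter()
//     .InCountry("Sweden")
//     .WithOrdersTotallingAtLeast(1000m);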

Another approach, but with many similarities to DSL, is Model-Driven Architecture.

Model-Driven Architecture

Model-Driven Architecture (MDA) [OMG MDA] is something like "programming by drawing diagrams in UML." One common idea from the MDA arena is to create a Platform Independent Model (PIM) that can then be transformed into a Platform Specific Model (PSM), and from there transformed into executable form.

Thinking about it, it feels like writing 100% of a new program with a 3GL, such as C#, is overkill. It should be time to increase the abstraction level.

I think one problem many are seeing with MDA is its tight coupling to UML. One part of the problem is the loss of precision when going between code and UML; another part of the problem is that UML is a generic language, with the pros and cons that come with that.

For the moment, let's summarize those approaches (both DSL and MDA) with the term Model-Driven Development. I feel confident to say that we are moving in that direction, so Model-Driven Development will probably be a big thing. Even today, you can go a pretty long way with the current tools for Model-Driven Development.

Furthermore, I think both the DSL and MDA approaches fit very well with Domain-Driven Design, especially its mindset of focusing on the model.

Domain-Driven Design

We have already discussed model focus and Domain-Driven Design (DDD) in the architecture section, but I'd like to add a few words about DDD in the process perspective also. Using DDD [Evans DDD] as the process will focus most of the energy on building a good model and implementing it as closely as possible in software.

What it's all about is creating as simple a model as possible, one that still captures what's important for the domain of the application. During development, the process could really be described as knowledge-crunching by the developers and domain experts together. The knowledge that is gained is put into the model.

Find Old Knowledge as a Shortcut

Of course, not all knowledge has to be gained from scratch. Depending upon the domain, there could well be loads of knowledge available in books, and not just in specific books for the domain in question. There is actually information in a couple of software books, too. Until recently, I would only consider the books called Data Model Patterns [Hay Data Model Patterns] and Analysis Patterns [Fowler Analysis Patterns], but I would strongly suggest that you get your hands on a book about Archetype Patterns as well. The book is called Enterprise Patterns and MDA [Arlow/Neustadt Archetype Patterns].

Refactor for Deeper Knowledge

Something I definitely think should be valued is continuous evaluation, and that goes for DDD as well. Is the current model the best one? Every now and then, question the current model constructively, and if you come up with important simplifications, don't be afraid to refactor.

The same also applies when you find something new. A couple of simple refactorings might make that new feature not only possible but also easy to achieve so it fits well within the model.

Refactorings alone might also lead to deeper knowledge, like when you do a refactoring that can open everything up and lead to one of those rare "eureka!" moments.

We will come back to refactoring shortly, but first I'd like to talk about something closely related, namely Test-Driven Development.

Test-Driven Development

I've often heard developers say that they can't use automatic unit testing because they don't have any good tools. Well, the mindset is much more important than the tools, although tools do help, of course.

I wrote my own tool (see Figure 1-6) a couple of years ago and used it for registering and executing tests of stored procedures, COM components, and .NET classes. Thanks to this tool, I could skip those forms with 97 buttons that have to be pressed in a certain order to execute the tests.

Figure 1-6. Screen shot of my old test tool called JnskTest


Later on, when NUnit [NUnit] (a derivative of the other xUnit versions) was released (see Figure 1-7), I started using that tool instead. Using NUnit is way more productive. For example, my tool couldn't discover the existing tests by reflection; instead, you had to explicitly register information about them.

Figure 1-7. NUnit, the GUI version


Note

Now I use another tool, called Testdriven.Net, but as I said, what tool you use is of less importance.


No matter what process you use, you can use automatic unit tests. For an even larger positive effect, I strongly recommend you find out if Test-Driven Development (TDD) is for you.

The Next Level

TDD is about writing tests before writing the real code. In doing this, the tests will drive your design and programming.

TDD sounds dull and boring, and developers often expect it to be a pain in the backside. They couldn't be more wrong! In my experience, the opposite is true: it's actually great fun, which came as a surprise to me. I guess the reason that it is such fun is that you get instant feedback on your changes, and because we are professionals, we enjoy creating high-quality applications.

Another way to put it is that TDD isn't about testing. It's about programming and design. It's about writing simpler, clearer, and more robust code! (Sure, the unit tests you get as a "side effect" are extremely important!)

Why TDD?

The reason I started with TDD in the first place was that I wanted to improve the quality of my projects. Improvement in quality is probably the most obvious and important effect. We don't want to create applications that crash when the customer uses them for the first time or applications that break down when we need to enhance them. It's just not acceptable anymore.

TDD won't automatically help you never release products with bugs again, but the quality will improve.

Note

The automatic tests themselves aren't the primary reason for TDD; they are nice side effects. If quality is everything, there are other formal methods, but for many scenarios they are considered too "expensive." Again, context is important.


You can see the effect of improved quality by writing tests after the real code, too. What I mean is that you don't have to apply TDD (which means writing tests before the real code); you just need a lot of discipline. On the other hand, using TDD gets the tests written. Otherwise, there is a very real risk that you won't write any tests when you're pressed for time, which always happens late in a project. Again, TDD makes the tests happen.

The second effect you can expect when applying TDD is to see improved simplicity of design. In the words of two popular sayings, "Simple is beautiful" and "KISS." They are very important because, for example, complexity produces bugs.

Instead of creating loads of advanced blueprints covering every little detail upfront, when using TDD you will focus on the core customer requirements and just add the stuff the customer needs. You get more of a customer perspective than a technical perspective.

TDD is not about skipping design. On the contrary, you are doing design the whole time when using TDD.

In the past I've been very good at overcomplicating simple things. TDD helps me keep focused and not do anything other than what is really necessary now. This effect (getting improved simplicity of design) requires TDD. It's not enough to just write the tests afterwards.

Yet another effect of TDD is that you will get high productivity all the way. This might sound counterintuitive at first. When you start a new project, it feels very productive to get going and write the real code. At first you are very productive, but it's very common that the productivity completely drops near the end of the project. Bugs start cropping up; the customer decides on a couple of pretty substantial changes that upset everything; you find out that you have misunderstood some things...well, you get the picture.

Tests will force you to challenge the requirements, and to challenge them early. Thereby, you will find out early whether you have understood them. You will also reveal missing and contradictory requirements; again, early.

By the way, you shouldn't ask the customer if you should use TDD or not, at least not if you're asking for more payment/time/whatever at the same time. He will just tell you to do it right instead. When considering the project from start to finish, if using TDD incurs no extra cost, I believe you should just go ahead. The customer will be happy afterward when he gets the quality he expects.

Note

Let's for a moment skip TDD and focus only on automatic tests.

A colleague of mine has been extremely skeptical of the need for automatic unit tests (created before or after real code). He told me that during his two-decade career, automatic unit tests would not have helped him once. However, I think he changed his mind a bit recently. We were working together on a project where he wrote a COM component in C++ and I wrote tests as specifications and as just tests. When we were done, the customer changed one thing in the requirements. My colleague made a small change, but the tests caught a bug that occurred just four times in 1,000 executions. The bug was found after only seconds of testing, compared to hours if it had been done manually. And if it had been done manually, the bug would most probably not have been found at all, but would have shown itself during production.


The TDD Flow

Now I have tried to get you motivated to start with TDD, so let's have a closer look at how the process flows. (We'll get back to this in Chapter 3 when we will investigate it in a bit more depth with a real world demo.) Assuming you have a decent idea about the requirements, the flow goes like this:

First of all, you start writing a test. You make the test fail meaningfully so that you can be sure that the test is testing what you think. This is a simple and important rule, but even so, I have skipped over it several times and that is just asking for trouble.

The second step is to write the simplest possible code that makes the test pass.

The third step is to refactor if necessary, because you identify code that smells (for example, code duplication), and then you start all over again, adding another test.

If we use NUnit lingo, the first step should give you a red light/bar, and the second step should give you a green light/bar.
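As a minimal sketch of that rhythm in NUnit (the Calculator class is a hypothetical example invented just for this illustration), the test is written first and fails red because Calculator doesn't yet do the right thing; the simplest implementation then turns it green:

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Step 1: write the test first and make sure it fails meaningfully (red).
    [Test]
    public void AddReturnsSumOfTwoNumbers()
    {
        Calculator calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}

// Step 2: write the simplest possible code that makes the test pass (green).
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

// Step 3: refactor if anything smells, and then start over with the next test.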

I mentioned refactoring previously and as the third step in the general process of TDD, and I think I should briefly explain the term a bit more.

Refactoring

Continuous learning was something we heard about in school all the time. It's very much true in the case of refactoring [Fowler R]: refactoring to get a better model, for example.

Refactoring is about making small, well-known changes step by step in order to improve the design of existing code. That is, to improve its maintainability without changing its observed behavior. Another way to say it is to change how, not what.

In a nutshell, what refactoring does is to take you from smelly code to nice code. It's as simple as that.
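As a tiny, hypothetical example of one such small, well-known change, here is Extract Method applied to some duplicated formatting code (the InvoicePrinter class is made up for this illustration); the observable behavior is identical before and after, only the design improves:

namespace Before
{
    // The same formatting logic is duplicated (smelly code).
    public class InvoicePrinter
    {
        public string PrintHeader(string customer, decimal total)
        {
            return customer.ToUpper() + ": " + total.ToString("0.00");
        }

        public string PrintFooter(string customer, decimal total)
        {
            return "Total for " + customer.ToUpper() + ": " + total.ToString("0.00");
        }
    }
}

namespace After
{
    // The duplication is extracted into one method; behavior is unchanged.
    public class InvoicePrinter
    {
        public string PrintHeader(string customer, decimal total)
        {
            return FormatLine(customer, total);
        }

        public string PrintFooter(string customer, decimal total)
        {
            return "Total for " + FormatLine(customer, total);
        }

        private string FormatLine(string customer, decimal total)
        {
            return customer.ToUpper() + ": " + total.ToString("0.00");
        }
    }
}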

So you don't have to come up with a perfect design up-front. That is good news, because it can't be done anyway.

Why Use Refactoring?

None of us has any trouble recognizing smelly code. What might be more troublesome is knowing when to fix it. As I see it, you should deal with the problem as soon as it arises. You should use refactoring because without continuous maintenance of your code, it will start to degenerate and crumble.

Note

Mark Burhop said the following: "A good friend keeps a list of Software Development Laws. One goes something like 'Code, left untouched, will develop bugs.'"


Let's use the analogy of your home. Maintenance you choose to ignore, such as fixing windows, repairing the roof, painting the woodwork, and so on, will only grow into bigger problems over time. That's an immutable law. So sooner or later your house will fall to bits, and at that point it's worthless. Nobody wants that situation, right?

Software is different because it's not built of organic material and isn't affected by weather and wind if left untouched. Still, we intuitively have a feeling for what happens over time to software when refactoring isn't applied during bug fixes and extensions.

How Should I Use Refactoring?

Refactoring can be used in all phases of the application lifecycle; for instance, during development of the first version of an application. But just assume we don't use refactoring, but a traditional, design-heavy up-front process instead (now often referred to as Big Design Up-Front, BDUF). We will spend quite a lot of time on initial detailed design, creating loads of detailed UML diagrams, but as a result we will expect the development to go very smoothly and quickly. Even assuming that it does, there is still a risk that the code will be, well, smelly.

Instead, let's say we just accept the fact that we can't get it right up front the first time. In this case, a slightly different approach is to move some of the effort from initial detailed design over to development instead (and of course all development, especially in this case, is design) and to be prepared for doing refactoring continuously when we learn more. Learning more is exactly what we do during development, and as I see it, this approach results in higher quality code.

So instead of doing too much guessing, we do more learning and proofing!

Note

I was probably overly positive about BDUF so as not to distract you from the point I was after regarding smelly code. Doing a lot of guesswork on day one will often lead to wasted time because it is just guesswork.

My friend Jimmy Ekbäck commented on this by saying, "BDUF can be even worse than wasted time because of incorrect guesses. BDUF can also lead to self-fulfilling prophecies."


Refactoring + TDD = True

In order to be able to use refactoring in a safe way, you must have extensive tests. If you don't, you will introduce bugs, and/or you will stop making changes that are purely for the sake of maintainability, because the risk of introducing bugs is just too large. And when you stop making changes for the sake of maintainability, your code has slowly started to degrade.

Note

You will find much more coverage, with focus on hands-on examples, about both TDD and Refactoring in Chapter 3.


It's a good idea to use TDD and refactoring for bug fixing, too. First expose the bug with a red test, then solve the bug so you get green, and then refactor.
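As a sketch of that flow (the OrderTotalCalculator and the empty-order bug are made up purely for illustration), the first step is a test that reproduces the reported bug and fails; the fix then turns it green, and the test stays around as a regression guard:

using NUnit.Framework;

[TestFixture]
public class OrderTotalBugTests
{
    // Red: written first, to reproduce the reported bug before the code is touched.
    [Test]
    public void TotalIsZeroWhenOrderHasNoLines()
    {
        OrderTotalCalculator calculator = new OrderTotalCalculator();
        Assert.AreEqual(0m, calculator.Total(new decimal[0]));
    }
}

public class OrderTotalCalculator
{
    // Green: the fixed version; the hypothetical bug was that an empty
    // order was not handled. After green, refactor if anything smells.
    public decimal Total(decimal[] lineAmounts)
    {
        decimal total = 0m;
        foreach (decimal amount in lineAmounts)
            total += amount;
        return total;
    }
}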

Which Ingredient or a Combination?

Again, I'm sure many of you are wondering which way to go. For instance, should you focus on up-front design or TDD?

As I see it, you can mix up-front design and TDD successfully. For example, set up some up-front architecture, work with Domain-Driven Design, and for each piece of behavior build it with TDD (including refactoring). Then go back to your architecture and change it in accordance with what you have learned. Then work with Domain-Driven Design, and continue like that.

Note

I have to admit that I often fall back into the old habit of doing detailed up-front design. However, thinking about the problem in different ways is often the most efficient thing to do. A little bit top-down, a little bit bottom-up. A little bit inside out, a little bit outside in.


I think it's pretty well known that a Big Design Up-Front (BDUF) has some big problems. At the same time, most often we know some things from day one. It's a matter of balance.

Finally, a last remark regarding DDD and TDD: Domain Models are very suitable for TDD. Sure, you can also apply TDD with more database-oriented design, but I haven't been able to apply it as gracefully and productively as when I'm working with Domain Models.

Note

When I discussed TDD and/or DDD with Eric Evans he said the following, which I think is spot on:

"Myself, I actually play with the model while writing the tests. Writing the test lets me see what sort of client code different assignments of responsibility would produce, as well as the fine-tuning of method names and so on to communicate intention and have a good flow."


No matter whether you focus on TDD or BDUF, there are lots of techniques that are useful anyway. The chapter will end by focusing on a couple of such things, such as operational aspects, but the first example is called Continuous Integration.



