We'll ask this question whenever we find a defect that wasn't caught by the tests. In a perfect world, there would be no defects: every defect in the system should have been tested for, and prevented, by one or more of our tests. So when we encounter defects, we reflect on how our tests could have served us better.
In the case of this application, the tests could have been more helpful had there been a better connection among testing, TextModel, and TextBox. We started with the assumption that if we tested TextModel, TextBox would take care of itself. That was nearly true, but not completely: TextBox carries too much functionality in the product to be left quite so alone as our tests leave it. This showed up particularly when we were fiddling with getting Undo to work.
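To make the lesson concrete, here is a minimal Python sketch (not the book's actual code; the class shapes and method names are invented for illustration) of why testing the model alone can miss defects: a test that drives the model *through* a stand-in for the view catches synchronization problems, such as the view not reflecting an Undo, that model-only tests never see.

```python
class TextModel:
    """Holds the text and a history of prior states for Undo."""
    def __init__(self):
        self.lines = []
        self._history = []

    def insert_line(self, line):
        # Save the current state so Undo can restore it.
        self._history.append(list(self.lines))
        self.lines.append(line)

    def undo(self):
        if self._history:
            self.lines = self._history.pop()


class FakeTextBox:
    """Stand-in for the real TextBox: mirrors the model's lines on demand."""
    def __init__(self):
        self.displayed = []

    def show(self, model):
        self.displayed = list(model.lines)


# Exercise the model through the view. If the view were only updated on
# insert and not after Undo, a model-only test would still pass while
# the display would be wrong -- exactly the kind of gap described above.
model = TextModel()
box = FakeTextBox()
model.insert_line("hello")
box.show(model)
model.undo()
box.show(model)
assert box.displayed == []
```

The point is not this particular fake, but that some tests need to cross the model/view seam rather than trusting the view to take care of itself.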
The tests could have been more helpful if the customer tests had been easier to run one at a time, and if it had been easier to find out why they failed when they failed. In this case, it was our testing infrastructure that wasn't helping, and we might have profited from making it a bit stronger.
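What "a bit stronger" might look like can be sketched in a few lines of Python (the test names and the `render` function are invented for illustration): a runner that executes a single named customer test and reports *why* it failed, not just that it failed.

```python
def render(lines):
    # Stand-in for the feature the customer tests exercise.
    return "\n".join(lines)


def test_empty_document():
    assert render([]) == "", "empty document should render as an empty string"


def test_two_lines():
    assert render(["a", "b"]) == "a\nb", "lines should be joined with newlines"


CUSTOMER_TESTS = {
    "empty document": test_empty_document,
    "two lines": test_two_lines,
}


def run_one(name):
    """Run one named customer test; return a readable result line."""
    try:
        CUSTOMER_TESTS[name]()
        return f"{name}: PASS"
    except AssertionError as err:
        # Surface the assertion message so a failure explains itself.
        return f"{name}: FAIL - {err}"


result = run_one("two lines")
```

Infrastructure this small, running one test in isolation and echoing the failing assertion's message, would have answered "why did it fail?" without digging through a full batch run.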
Teams using this test-driven approach are reporting results so good as to seem unbelievable. In one example, I'm told of a test-driven C++ project, which included interfacing to hardware, that went into heavy production with zero defects reported. Many test-driven XP teams are reporting similar results: bug lists that used to contain hundreds of defects contain only one or two on the XP project. It does take work, and we have to develop skill, but the payoffs being reported are quite exciting.