Readers who feel they still lack material for finding test cases are referred to the extensive testing literature; see the references in the Bibliography and List of References, Further Reading. A few additional tips and ideas, drawn from various sources, follow:
Compile a test catalog with typical error cases and use it for inspiration. A good starting point might be the catalog at [URL:TestingCat].
Test all the distinct result categories of a function call. The explanations above about error cases and exceptions are based on this principle.
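The principle can be sketched as follows. The lookup method and its three result categories (a hit, a miss, an invalid call) are purely illustrative assumptions, not code from this book; the point is that each distinct kind of result gets at least one test.

```java
import java.util.Map;

public class DistinctResultsTest {

    // Illustrative method under test (an assumption for this sketch):
    // returns the code for a country, null if unknown, and throws on
    // an invalid argument -- three distinct result categories.
    static String findCountryCode(Map<String, String> codes, String country) {
        if (country == null || country.isEmpty()) {
            throw new IllegalArgumentException("country must not be empty");
        }
        return codes.get(country); // null signals "not found"
    }

    public static void main(String[] args) {
        Map<String, String> codes = Map.of("Germany", "DE", "France", "FR");

        // Category 1: a successful lookup
        if (!"DE".equals(findCountryCode(codes, "Germany"))) {
            throw new AssertionError("hit category failed");
        }

        // Category 2: a miss
        if (findCountryCode(codes, "Spain") != null) {
            throw new AssertionError("miss category failed");
        }

        // Category 3: an invalid call must raise the exception
        boolean thrown = false;
        try {
            findCountryCode(codes, "");
        } catch (IllegalArgumentException expected) {
            thrown = true;
        }
        if (!thrown) {
            throw new AssertionError("exception category failed");
        }

        System.out.println("all result categories covered");
    }
}
```

With a testing framework, each category would typically become its own test method, so a failure report names the category that broke.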
Sometimes it makes sense to generate test data randomly and to determine the expected result with a test oracle, instead of using only test cases with exactly defined input and output data. This is usually the case when certain errors cannot be provoked deterministically. A typical example is generating valid random input strings for a parser to uncover potential ambiguities in the underlying grammar [Metsker01].
Note that such tests sever the causal relationship between a failure and previous changes to the code; they should therefore be kept separate from "normal" unit tests.
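A minimal sketch of oracle-based random testing, under the assumption that a trusted reference implementation can serve as the oracle. The code under test here (a hand-written insertion sort checked against the library sort) is illustrative; in practice the oracle might be an older implementation, an inverse operation, or an invariant check.

```java
import java.util.Arrays;
import java.util.Random;

public class RandomOracleTest {

    // Illustrative code under test (an assumption): insertion sort.
    static int[] insertionSort(int[] in) {
        int[] a = in.clone();
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
        return a;
    }

    public static void main(String[] args) {
        // Deliberately unseeded: each run exercises fresh data, which is
        // exactly why such tests belong outside the "normal" unit tests.
        Random random = new Random();
        for (int run = 0; run < 100; run++) {
            int[] input = random.ints(random.nextInt(50), -1000, 1000).toArray();
            int[] expected = input.clone();
            Arrays.sort(expected); // the oracle: a trusted library sort
            if (!Arrays.equals(expected, insertionSort(input))) {
                // Log the failing input so the case can be replayed later
                throw new AssertionError("mismatch for " + Arrays.toString(input));
            }
        }
        System.out.println("100 random inputs matched the oracle");
    }
}
```

When such a test fails, the offending input must be recorded so it can be turned into a deterministic regression test afterwards.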
Developers who take their role as testers seriously will rarely face the problem of testing too little; their problem is rather knowing when to stop. Chapter 8 discusses this issue in detail. The following heuristics may help you decide on a minimal set of tests when deadlines are looming or when you have to trade off your testing effort:
Test at least the explicit functional requirements.
Add a unit test whenever a bug has slipped through to functional testing or production.
Test wherever you have already found many bugs. Statistical studies have shown that bugs tend to come in clusters; they are not evenly distributed over the entire application.
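The second heuristic, turning an escaped bug into a permanent unit test, can be sketched as follows. The scenario (an off-by-one in truncating text for display) is an illustrative assumption; the essential step is reproducing the exact reported case so the bug can never return unnoticed.

```java
public class TruncateRegressionTest {

    // Fixed code under test (an assumption for this sketch): truncate text
    // to maxLen characters, appending "..." only when something was cut.
    // The original, buggy version also truncated strings whose length
    // exactly equaled maxLen.
    static String truncate(String text, int maxLen) {
        if (text.length() <= maxLen) {
            return text;
        }
        return text.substring(0, maxLen) + "...";
    }

    public static void main(String[] args) {
        // The exact boundary case reported from functional testing
        if (!truncate("abcde", 5).equals("abcde")) {
            throw new AssertionError("regression: boundary case truncated again");
        }
        // A neighboring case to pin down the intended behavior
        if (!truncate("abcdef", 5).equals("abcde...")) {
            throw new AssertionError("regression: truncation broken");
        }
        System.out.println("regression tests passed");
    }
}
```

Keeping such tests in the suite is cheap insurance: the clustering heuristic above suggests that code which produced one bug is likely to produce more.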