When the target model is regenerated, we need to be certain that the model fragments that have been elaborated (in other words, modified purposefully) aren't replaced by regenerated model fragments or portions of models. A simple approach is the concept of protected areas. If an area of the model is marked as protected, the mapping function simply preserves its manually-entered contents when regenerating. However, both the source model and the target model can be polluted by protected area boundary tags (though modern editors can hide them). Moreover, it's difficult to build sophisticated merges that do more than preserve entire protected areas. If the granularity of the protected areas is chosen appropriately, however, less sophistication is required than you might expect.
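As a sketch of this approach: the following preserves protected areas across regeneration by copying their contents from the previously elaborated target into the freshly generated one, matching areas by a stable id. The `// PROTECTED REGION <id>` marker syntax is a hypothetical stand-in; real generators use comparable begin/end tags.

```python
import re

# Hypothetical boundary-tag syntax; the id must stay stable across runs.
REGION = re.compile(
    r"// PROTECTED REGION (?P<id>\S+) BEGIN\n"
    r"(?P<body>.*?)"
    r"// PROTECTED REGION END\n",
    re.DOTALL,
)

def preserve_protected_regions(old_target: str, new_target: str) -> str:
    """Copy the body of each protected area in the previously elaborated
    target into the freshly regenerated target, matched by region id."""
    old_bodies = {m.group("id"): m.group("body")
                  for m in REGION.finditer(old_target)}

    def substitute(m: re.Match) -> str:
        # Fall back to the freshly generated (usually empty) body
        # if this region id did not exist before.
        body = old_bodies.get(m.group("id"), m.group("body"))
        return (f"// PROTECTED REGION {m.group('id')} BEGIN\n"
                f"{body}// PROTECTED REGION END\n")

    return REGION.sub(substitute, new_target)
```

Note that the merge is only as good as the id scheme: if the generator derives region ids from element names, renaming an element in the source model orphans the old contents, which is exactly the identity problem discussed below.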
Once you've elaborated a target model, there's the issue of how to deal with these manual modifications when the mapping function has to be executed again due to changes in the source model(s). Of course, the desirable outcome is that all manual modifications are "properly" preserved after executing the mapping function again; the trick, of course, is the definition of "properly."
Consider an incomplete mapping function that accepts a UML source model containing classes with operations, attributes, and associations, and produces a class declaration in an object-oriented programming language for each UML class. For each modeled operation, the mapping function creates a method declaration with a signature, but with empty method contents. Let's assume that the mapping expects the developers to fill in the method bodies in the generated source code instead of specifying them in the UML model.
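Such a mapping might be sketched as follows, using plain Python dicts as a stand-in for a real UML metamodel (all names and the dict structure here are illustrative, not any particular tool's API):

```python
def generate_class(uml_class: dict) -> str:
    """Map one UML class to a class declaration with attribute
    declarations and empty method stubs for each modeled operation."""
    lines = [f"class {uml_class['name']}:"]
    for attr in uml_class.get("attributes", []):
        lines.append(f"    {attr['name']}: {attr['type']} = None")
    for op in uml_class.get("operations", []):
        params = ", ".join(["self"] + [p["name"] for p in op.get("params", [])])
        lines.append(f"    def {op['name']}({params}):")
        # The body is deliberately left empty: the mapping expects
        # the developer to fill it in within the generated code.
        lines.append("        pass  # TODO: implement")
    if len(lines) == 1:
        lines.append("    pass")
    return "\n".join(lines) + "\n"
```

Running this for a `Customer` class with a `rename(new_name)` operation yields a compilable skeleton whose method bodies are the elaboration points.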
However, what if the modeler changes the method signature in the model between mappings? Does this mean that the same method body should still be used, which risks the production of uncompilable code because the new signature doesn't match the existing manual implementation body? Or what if the modeler deleted an operation from the model and then created it again? In this case, the operation model element would be viewed as a new entity that doesn't match the existing implementation body. And what should we do with the manually-inserted implementation if the modeler decides to remove the operation from the model?
We can express this challenge as the problem of finding out which manual changes have actually occurred. Judging from the questions above, the following are crucial:

- how the identity of a model element is established across changes to the source model (whether an operation with a changed signature, or one that was deleted and recreated, counts as "the same" element), and
- how a manually-written addition, such as a method body, is associated with the generated element it belongs to.
General solutions to this problem are quite sophisticated, but many boil down to finding differences and merging them ("diff-and-merge"). As with any diff-and-merge algorithm, there are cases that are hard or impossible to resolve automatically. These are called merge conflicts. In these cases, the modeler has to determine manually which of the changes are to be merged and how this has to be done. Naturally, the modeler's decision should be retained for later use.
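A minimal per-element sketch of such a diff-and-merge compares three versions: the previously generated target (the base), the manually elaborated target, and the freshly regenerated target. Representing each version as a dict from element id to content is an assumption made for illustration:

```python
def three_way_merge(base: dict, manual: dict, regenerated: dict):
    """Merge per element id. Returns (merged, conflicts); conflicted
    ids changed on both sides and need a manual decision."""
    merged, conflicts = {}, []
    for key in base.keys() | manual.keys() | regenerated.keys():
        b, m, r = base.get(key), manual.get(key), regenerated.get(key)
        if m == b:
            # No manual change: take the regenerated version
            # (None means the element was removed from the source model).
            if r is not None:
                merged[key] = r
        elif r == b:
            # Only a manual change: preserve it.
            if m is not None:
                merged[key] = m
        elif m == r:
            # Both sides made the same change.
            if m is not None:
                merged[key] = m
        else:
            # Changed differently on both sides: a merge conflict.
            conflicts.append(key)
    return merged, conflicts
```

The conflict branch is where the modeler's decision comes in; a real tool would record that decision so it can be replayed on later regenerations, as noted above.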
Diffing and merging models using text is tractable, but providing the same level of functionality for complex metamodels expressed in MOF, or for graphical models, is still a subject of research, though a great deal can be achieved even today.
Note that diff-and-merge is not the only technique. Alternatively, generated target elements can carry origin tags that point to the corresponding source model element and so act as traceability links. Manually-added elements have no origin tag, so on regeneration, elements with an origin are replaced by their newly generated counterparts, while elements with no origin are simply left alone.
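This regeneration policy might be sketched as follows, assuming (purely for illustration) that each target element is a dict whose optional `origin` key points at its source model element:

```python
def regenerate(target: list, generated: dict) -> list:
    """Rebuild the target: elements with an origin tag are replaced by
    their newly generated counterparts (or dropped if the source element
    is gone); elements without an origin were added manually and stay."""
    result = []
    for element in target:
        origin = element.get("origin")
        if origin is None:
            result.append(element)            # manual addition: keep as-is
        elif origin in generated:
            result.append(generated[origin])  # replace with regenerated version
        # else: source element was deleted, so its generated element is dropped
    # Append elements generated for source elements that are new.
    existing = {e.get("origin") for e in result}
    for origin, element in generated.items():
        if origin not in existing:
            result.append(element)
    return result
```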
Yet another approach is to generate the target so that manual code is separate from the generated code (for example, using inheritance or callbacks).
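With inheritance, for instance, the generated base class can be overwritten wholesale on every regeneration, while the hand-written subclass is never touched; the generated code calls back into the manual code through overridable hooks. A sketch, with hypothetical class and method names:

```python
class CustomerBase:
    """Generated class: may be regenerated and overwritten at any time."""

    def save(self) -> str:
        self.validate()       # callback into manually-written code
        return "saved"

    def validate(self) -> None:
        pass                  # hook with an empty default implementation


class Customer(CustomerBase):
    """Hand-written class: lives in a separate file the generator
    never writes to, so no merging is needed at all."""

    def validate(self) -> None:
        self.checked = True   # manual business rule
```

The cost of this separation is a somewhat more indirect design; the benefit is that the regeneration problem disappears entirely for the manual code.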