Even with these advances in the level of reuse, we nonetheless have a problem: There's still little reuse of applications.
Over and over, we see systems that reimplement existing functionality just to take advantage of improved technology, and we see systems that are unable to reuse existing platforms because those platforms have become interwoven with an existing application. Components and frameworks are helping, but there's still significantly more reuse of the pieces closer to the machine: we see more reuse of databases and data servers (general services that rely on implementation technologies) than we see reuse of customer objects, which in turn rely on those general services.
Figure 1-3 shows the overall effect. Each line between layers represents an opportunity for standardization to support run-time interoperability. Standards allow one layer to be replaced by a different implementation that conforms to the same standard. This is the value of interoperability: By defining a standard interface, we may replace one CORBA implementation with another, say, or one SQL database with another.
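The substitution that a standard interface enables can be sketched in a few lines. This is a minimal illustration, using hypothetical class names rather than any real CORBA or SQL API: the application codes against the interface alone, so any conforming implementation can be swapped in.

```python
# A minimal sketch of run-time interoperability through a standard
# interface. VendorAStore and VendorBStore are hypothetical stand-ins
# for two vendors' conforming implementations.
from abc import ABC, abstractmethod

class DataStore(ABC):
    """The 'standard': the contract both implementations conform to."""
    @abstractmethod
    def query(self, key: str) -> str: ...

class VendorAStore(DataStore):
    def query(self, key: str) -> str:
        return f"A:{key}"

class VendorBStore(DataStore):
    def query(self, key: str) -> str:
        return f"B:{key}"

def application(store: DataStore) -> str:
    # The application depends only on the standard interface,
    # so either vendor's implementation will do.
    return store.query("customer-42")

print(application(VendorAStore()))  # A:customer-42
print(application(VendorBStore()))  # B:customer-42
```

The point is that the dependency runs from the application to the standard, not to any particular vendor; conformance to `DataStore` is what makes the layer replaceable.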
Figure 1-3. The difficulty of reusing applications
Standards and interoperability of this nature certainly help, but the problem still remains: What happens if the CORBA implementation you prefer relies on some operating system or database that you don't want? Tough. You're stuck, because each layer in the pyramid relies on all of the layers below it.
Moreover, components and frameworks may not fit together architecturally. This problem, dubbed architectural mismatch by David Garlan (1994), comes about when the components and/or frameworks in a system embody differing assumptions about how the system fits together.
Here are some examples, moving from the concrete to the more abstract:
In each of these cases, the problem is not merely one of interfaces, though that's often how it presents itself, but rather one of completely different concepts of the software architecture of the system.
Expressed abstractly, reuse at the code level is multiplicative, not additive. For example, if there are three possible operating systems, three possible data servers, and three possible CORBA implementations, there are 27 possible implementations (3 x 3 x 3).
The chances of the stars aligning so that we have the right database, the right operating system, and so forth are relatively small (1 in 27), even though there are only ten components (3 + 3 + 3, plus one for the application).
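The arithmetic above can be checked directly. The counts here are the hypothetical ones from the text (three choices per layer), not data about any real market:

```python
# Hypothetical counts of interchangeable implementations per layer.
operating_systems = 3
data_servers = 3
corba_implementations = 3

# Code-level reuse is multiplicative: every combination of layers is a
# distinct configuration that must be made to work together.
combinations = operating_systems * data_servers * corba_implementations
print(combinations)  # 27

# An additive scheme would need each component only once,
# plus one for the application itself.
components = operating_systems + data_servers + corba_implementations + 1
print(components)  # 10
```

Adding a fourth choice to any one layer adds nine more combinations but only one more component, which is why the gap widens so quickly as systems grow.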
The consequences of this problem are ghastly and gargantuan. Even as we raise the level of abstraction and the level of reuse, we'll continue to have difficulties as the number of layers increases, as it must as we come to tackle ever larger problems. The main reason is that once we've mixed the pieces of code together, it becomes impossibly difficult to reuse each of the parts, because each part relies so heavily on the code that glues the pieces together, and glue makes everything it touches sticky.
To realize an additive solution, one that allows reuse of each layer independently of the others, we must glue layers together using mechanisms that are independent of the content of each layer. These mechanisms are bridges between layers, expressed as systems of mappings between elements of the layers. Bridges localize the interfaces, so that a change to an interface can subsequently be propagated through the code.
Relying on reuse of code, no matter how chunky that code is, addresses only a part of the problem. The dependencies between the layers must be externalized and added in only when the system is deployed. The glue must be mixed and applied only at the last moment. Each model is now a reusable, stand-alone asset, not an expense.
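A bridge of this kind can be sketched as a mapping supplied only at deployment time. All the names here are hypothetical illustrations: the application layer is written against the names in the bridge, the platform layer knows nothing of the application, and swapping platforms means choosing a different mapping rather than rewriting either layer.

```python
# A minimal sketch of a deployment-time bridge between two layers.
# The bridge is a mapping from the names the application layer uses
# to the platform-layer functions that realize them; neither layer
# refers to the other directly, so each remains a stand-alone asset.

# Platform layer: two interchangeable realizations of "store".
def sql_store(item: str) -> str:
    return f"INSERT INTO orders VALUES ('{item}')"

def file_store(item: str) -> str:
    return f"append {item} to orders.log"

# Application layer: written only against the names in the bridge.
def place_order(bridge: dict, item: str) -> str:
    return bridge["store"](item)

# The glue is mixed only at the last moment: choose a mapping,
# not a rewrite.
sql_bridge = {"store": sql_store}
file_bridge = {"store": file_store}

print(place_order(sql_bridge, "widget"))   # INSERT INTO orders VALUES ('widget')
print(place_order(file_bridge, "widget"))  # append widget to orders.log
```

The dictionary here stands in for the system of mappings between layer elements; in a model-driven setting the mapping would be generated from the models rather than written by hand.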
Model-driven architecture, then, imposes the system's architecture only at the last moment. In other words, by deferring the gluing of the layers together and combining models at the last (design) minute, model-driven architecture enables design-time interoperability.