Designing and Architecting a Migration Solution

When designing the target technology platform, you must understand the degree of change required by the enterprise. Changes might be in business functionality or, more commonly, in the service level. Within the context of this book and the methodology presented in it, the design stage addresses issues of platform technology, process, and people. The following tasks are involved in the design and architecture of a migration solution:

  • Identifying the degree of change required

  • Identifying service level goals

  • Documenting design goals with the SunTone Architecture Methodology

  • Creating a component and technique map

  • Refining high-level designs

  • Creating a transition plan

  • Developing a configuration management plan

  • Creating a system I/O map

  • Creating an acceptance test plan

  • Planning test strategies

  • Prototyping the process

  • Designing a training plan for the new environment

Identifying the Degree of Change Required

One of the key objectives of migration is to preserve an organization's investment in business logic. Therefore, the migration project team will rarely have to exercise business logic design skills except to maintain semantic meaning. Despite this priority, changes are often required, usually in the nonfunctional qualities of the system, to improve costs or service levels. How likely such changes are depends on the migration strategy chosen, because the primacy of preserving the business logic unchanged varies with that strategy. For example, a refronting strategy might cause the presentation layer to change significantly; if business usability or business logic is encapsulated in the presentation code modules, the business logic might need to change or be enhanced.

Earlier in this book, we described the functionality sedimentation process. For example, we explained how, when the application of a replacement strategy leads to the decision to move printing from the application code to a utility, the recording of a print date can impact the recording of business-relevant data such as an invoice date. This example illustrates that the proposition that migration projects require no changes to the business logic is not universally true; the degree of change in the business logic (D_L) depends on the strategy adopted (S).

D_L = f(S)

The amount of change in the business logic depends on the migration strategy chosen to meet the business requirements. Further, because it is the requirements (R) that drive strategy selection, the degree of change in the business logic ultimately depends on the requirements.

D_L = f(R)

Changes in the functional requirements of an application cannot be achieved by platform design alone; the migration project must apply one or more of the migration strategies or techniques to the software components requiring change. Changes in the service level requirements should be achievable either through platform architecture and design or through process design. Change requirements will often form part of the benefits case for a platform transition. For testing designs, a requirements statement is mandatory.

Identifying Service Level Goals

The SunTone Architecture Methodology defines an application solution's systemic qualities as belonging to one of the following four families:

  • Manifest. Reflect the user experience of the nonfunctional qualities of the system.

  • Operational. Reflect the experience of operations managers and operators.

  • Developmental. Reflect the views of developers or builders.

  • Evolutionary. Anticipate the future needs of the application.

These systemic qualities define the application community's service level goals for an application. Critically, they exclude the business logic and the implementation of business process or transactional logic.

Manifest qualities include usability, performance (response time), reliability, availability, and accessibility. Operational qualities include performance (throughput), manageability, security, serviceability, and maintainability. The developmental qualities include buildability, budgetability, and planability, and the evolutionary qualities include scalability, maintainability, extensibility, reusability, and portability. Note that maintainability occurs in both the operational and evolutionary families; the difference between them follows from the definition of each family, the operational versus the evolutionary requirements and qualities of the system. It is also important to understand that some of these nonfunctional qualities might need to be modeled and defined in the functional analysis. The obvious examples are response time performance and security as it applies to user authority definition. The decision to implement user privilege management in specific software components (either application code modules or infrastructure products) is made by the application architect.

The following table summarizes these service-level goals.

Table 6-1. System Qualities for Systemic Quality Families

  Systemic Quality Family    System Quality
  -----------------------    ----------------------------------------------------
  Manifest                   Usability, performance (response time), reliability,
                             availability, and accessibility
  Operational                Performance (throughput), manageability, security,
                             serviceability, and maintainability
  Developmental              Buildability, budgetability, and planability
  Evolutionary               Scalability, maintainability, extensibility,
                             reusability, and portability

For information about the SunTone Architecture Methodology, refer to http://www.sun.com/service/sunps/jdc/suntoneam_wp_5.2.4.pdf.

Identify Manifest Quality Design Goals

The preceding table shows that what might traditionally be seen as performance has spilled over into several areas and, in the case of response time, can also be derived from the functional analysis. The following sections explain how to identify manifest quality design goals for an application.

Performance and Scalability

Scalability is usually planned by adopting a strategy to apply when performance thresholds are no longer met because the scale of the system has changed, typically through an increase in business volumes or in the user community. Several scalability strategies are available to platform designers.

Scalability has two dimensions:

  • The definition of the change strategy required to meet changes in the business volume or user community size. These change strategies impact deployment decisions and might vary from software component to software component, which means that the target platform needs to account for more than one scalability strategy. For example, designers might apply a vertically scaling design pattern to the database layer and a horizontally scaling pattern to the web servers. Scalability design patterns can also be applied to storage, network, and backup solutions. Diagonally scaling solutions might also apply, for instance, when vertical and horizontal strategies are applied in an iterative sequence.

  • The recognition that the appropriate change strategy might change as different volume thresholds are met. A good example is an RDBMS. A vertically scaling solution can be planned until a certain volume threshold is reached. At that point, the deployment managers can move to a horizontally scaling solution, either with parallel database technology (such as Oracle RAC) or through application design that implements multiple databases.

Scalability strategies (an evolutionary quality) have a significant impact on the deployment design. Designers need to design for growth, and business volume predictions are therefore required. These can be hard to discover, but migrators have the advantage that the history should be available because the application, and hence the business process, already exist. Scalability design also needs to account for any predictions that the performance constraint will move as business volumes grow.

System performance management is about understanding whether the application's response time or throughput is constrained by memory, CPU, or I/O. Because the source and target systems can have massively different performance ratios across these three dimensions, an application could be CPU bound on the source system and become I/O bound on the target system. This situation can be discovered through prototyping or testing; if you expect this problem to arise, include performance tests as part of the platform integration test plan. If an application's performance can be characterized as bound by a single hardware attribute, scaling performance is relatively easy: additional resources can be added, although the constraint will often be accepted for cost reasons if the performance is good enough and the additional cost is not justified. In some applications, particularly those implemented on distributed platforms, there might be multiple constraints. Each system is different, and the bottleneck moves as the application's multiple components process the business transaction. This is rarely a problem in user-facing online transaction processing (OLTP) modules, because the required and actual response times are relatively low, but an overnight batch process or an investment banking risk calculation might display this sort of behavior.

System performance management is usually reactive, but it involves identifying the system constraints on meeting the performance requirements and designing them out of the configuration. A migration project has both a performance history and the opportunity to prototype or benchmark, and a number of design patterns to deploy:

  • CPU constraint. SMP, horizontal scaling, CPU clock upgrade (occasionally)

  • Memory constraint. 64-bit very large memory addressing, horizontal scaling

  • I/O constraint (disk). Bandwidth scaling, RAID, disk cache

  • I/O constraint (network). Bandwidth scaling (trunking or base technology), with caching implemented

However, applying platform design solutions to the performance and scalability requirements presumes that the code scales within these designs and does not meet a threshold that moves the bottleneck. If it does, and the bottleneck becomes another system constraint, then an additional appropriate pattern can be applied. When considering scalability, consider the possibility that when software is moved from one system to another, new-generation system, bottlenecks might surface in the application's code, caused by the length of the code path and exposed by the new, more powerful CPUs. The multiuser concurrency mechanisms might also cause bottlenecks when run on hardware considerably more capable than that for which they were originally designed. This is unlikely for recently built software, but applications undertaking their fourth or fifth migration might suffer from this problem. An additional problem for some RDBMS users is that as their business scales and the demand on the application grows, the ISV-supplied database code lines become contention bound. Scalability constraints are therefore not exclusively hardware based or system based, and the only way around software bottlenecks is to recode or redesign.

Availability and Reliability

In the SunTone architecture requirements methodology, availability is defined in a slightly more complex way than platform designers expect. This ensures that software developers consider all the issues that they need to address so that platform designers can apply their patterns. It is neither possible nor sensible to improve availability through platform design when an application has critical components that fail more frequently, or take longer to recover, than the host platform. Additionally, issues such as online patching and upgrades need to be designed into the application's environment through software architecture, although protection against failure is an operational quality and online upgrades and patching are evolutionary qualities.

The two key availability patterns are clustering and component replication. Clustering permits the hosted software components to fail fast and restart; software component replication redirects nonfailing client processes to an alternative server process. Sun Cluster software and its competitors use a shared disk configuration architecture to ensure that an alternative host system is available to restart a software component that previously existed on a failing hardware node. Software components suitable for clustering must be capable of failing cleanly, and they must be capable of taking their network identities from the cluster configuration, not from the individual host system nodes. RDBMSs are examples of software components that can be clustered, whereas the common name server components usually provide some form of replication (or cache server) functionality and are suitable for replication-based patterns.

Some versions of the Sun ONE Application Server also support clustering, albeit without a shared disk model. The choice between clustering and replication also maps to the decision in clustered-solutions design to leverage a host service (making it cluster aware) or to encapsulate a service within a logical host. Using these two patterns in your designs abstracts the location of the service from its physical host system, which is beneficial because the other systemic qualities can then be easily incorporated into the design. Abstracting the software component's network identity from the physical platform also leads to greater runtime flexibility and prepares the application for deployment on N1 compute fabrics.
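
As a simple illustration of that abstraction, a client can resolve a logical service name owned by the cluster configuration rather than a physical node name. The following sketch is hypothetical; the service name is a placeholder.

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class ServiceLocator {
        // "payments-svc" is a logical host name owned by the cluster
        // configuration; it follows the service as it moves between
        // physical nodes, so clients never need to change.
        public static InetAddress locateService() throws UnknownHostException {
            return InetAddress.getByName("payments-svc");
        }
    }

Because the client binds to the logical name, the service can be restarted on another node without any client-side change.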

As a systemic quality, reliability is a software design goal that reflects error condition definition and handling. In a migration project, the design goal is to minimize code line changes to only those required to leverage the new platform; therefore, improvements in the software's reliability might not be a goal of the project. Platform reliability design is examined further in "Serviceability and Maintainability" on page 90. It is worth repeating that the application's service cannot be more reliable than the software itself. If the application contains fatal failure conditions based on logic paths, the platform designer will have difficulty protecting users against service outages caused by these failures. It is also worth noting that spending heavily to make the platform of an inherently fragile application more rugged is unlikely to be worthwhile, although reductions in the mean time to recover (MTTR) might be, and the techniques discussed under availability patterns might be worth considering.

Usability and Accessibility

Usability and accessibility are software design goals. The refronting strategy can have usability implications if, for instance, ASCII or 3270 forms are being ported to a browser or Java solution. Usability qualities need to be examined, and the solution selection might be affected by these goals. If the project's goal is zero change, an emulation solution is more appropriate. If changes supported by replacement are required, apply the replacement strategy and rehost the presentation logic in the new infrastructure component.

Identify Operational Quality Design Goals

Throughput performance is another aspect of scalability. A throughput goal might be defined as part of the acceptance test criteria at transition time, but it will need to be defended against change during operations. The section "Identify Manifest Quality Design Goals" on page 83 provides more detail about designing for performance and scalability. Because the number of users (each of whom can conduct only one transaction at a time) is the product of throughput and transaction duration (or response time), there is a direct relationship between response time, system throughput, and the number of users.

T = U / rt

where U = number of users, rt = response time, and T = throughput, which is a rate.
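
For example, under these definitions, 1,000 concurrent users each completing one transaction every 2 seconds (rt = 2) sustain a throughput of T = 1,000/2 = 500 transactions per second; doubling the user community at a constant response time doubles the throughput the platform must deliver.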

The change in the number of users defines the scalability problem.

The main difference between an operations manager's view of throughput and a user's view centers on systems management jobs that must operate on relatively large volumes of data. A user typically undertakes an OLTP transaction and is interested in only a tiny subset of the available data, whereas a systems manager is interested in the whole database. The systems manager's jobs are often serial batch jobs that can require relatively long periods of time, and they often require sole use of software components in order to undertake the management activity. Much of the solutions design around this problem is spent in minimizing the exclusive usage period required by the systems manager. The other batch constraint is the seriality of the process; the seriality factors only become an issue once any hardware constraints have been removed. The key design patterns are as follows:

  • Asynchronous processing. Planning and configuring to do the work before or after the time it is required.

  • Symmetrization. The application of multiple workload processors to a job or transaction.

Strictly speaking, these are not design patterns but classes of patterns. Examples of asynchronous processing include the use of disk image snapshots to allow backups to occur while the original image remains in use. Another example comes from the database design world, in which a schema is denormalized by including total fields as columns in the master table of a master-detail relationship, updated by the application's logic or by triggers. The additional logic then occurs when the database is updated, not when it is read: the total is processed at the time the detail record is written or updated, and a table join and its processing overhead are avoided on read. Asynchronous processing is particularly useful when the expected response time is long and users are happy to wait for their replies. Submitting reports into a job queue is an example of asynchronous processing. A call center user or investment banker might not be able to wait, but a call center manager can ask for a member's next calls in advance of needing them.
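
The report-queue example might be sketched as follows (a minimal, hypothetical illustration): the submission call returns immediately, and a background worker runs the report after the fact.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ReportQueue {
        private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>();

        public ReportQueue() {
            // The worker drains the queue after submission time,
            // decoupling the user's wait from the report's run time.
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        jobs.take().run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        // Returns immediately; the caller does not wait for the report.
        public void submit(Runnable reportJob) {
            jobs.add(reportJob);
        }
    }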

For an application to use symmetrization techniques, the application's components must themselves be amenable to parallel processing. Key patterns in this class include increasing the number of batch job queues, using SMP hardware, designing for or deploying system grids, and using infrastructure software that automatically symmetrizes a serial problem, such as a backup solution that uses multiple tape drives or an RDBMS that uses parallel query optimization plans. From these examples, you can see that symmetrization is complex and requires that the objects to which the technique is applied are appropriate in that they lack significant seriality. Seriality is an outcome of either the design process or fundamental properties of the application's problem. At the moment, because of the need to negotiate between more than one user, databases must have a degree of seriality in the lock resolution code lines and the database write-ahead log. This seriality can be reduced by either good database design or DBMS design. Seriality in batch jobs or third-GL legacy code can also be reviewed to apply optimization techniques. Because infrastructure providers such as Sun spend a lot of research and development money offering platforms for symmetrically scaling software, symmetrization, where appropriate, is cheaper to implement and less likely to conflict with other design goals.
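
A minimal sketch of symmetrization, assuming the batch job's work items are fully independent (all names are hypothetical): the items are spread across a pool of workload processors, and any residual seriality caps the achievable speedup.

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class BatchSymmetrizer {
        // Apply multiple workload processors to one batch job by
        // partitioning its independent work items across a pool.
        public static void process(List<Runnable> workItems, int workers)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (Runnable item : workItems) {
                pool.execute(item);
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }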

Manageability

When you are designing for manageability, the key requirement to understand is what operational management transactions need to be undertaken. Some management transactions will be defined by use cases and are therefore application management transactions. The operational management transactions are those undertaken and defined by system managers. System managers' activities also impact the IT service delivery process design. These are examined later in this chapter in the section on management design.

Application management activities are defined by developers. Typically, these include running specific ad hoc or regular jobs as required by the applications process. Examples might include cutting tapes for intercompany communication or setting the logical day clock.

The other major manageability activity is problem and incident handling, which can be IT based or applications based. The solutions designer should seek to create a single incident-handling regime, although when the application's error handling is tightly integrated with the transaction logic, migrating these code lines to infrastructure or utility software might be prohibitively costly.

Security

The security qualities you can incorporate in your design can be broken into four problem areas:

  • Secrecy

  • Integrity

  • Authentication

  • Nonrepudiation

The enforcement of business processing rules based on users' authority requires an authentication solution that cannot be compromised, strong and effective privilege definitions, and the assignment of a privilege list. The business rule is a functional requirement, and capturing it requires use-case analysis and actor identification. For example, the business rule might require that payments over a certain amount be approved by someone with approval authorization, and larger amounts might need multiple signatories. This implies a sign-off attribute on financial authorities and a signed-off state on a payment, which itself is derived from possessing sufficient sign-offs. This business rule can only be captured with functional design techniques. The decision to use certain technologies to implement the security piece is an architectural design decision. The option to embed the business rules in the application code exists; however, secrecy and integrity can be provided through infrastructure products, as can authority limits and privileges.
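
A minimal sketch of how such a rule might be modeled follows; the thresholds, names, and single/dual sign-off policy are invented for illustration. Whether the approver set behind this check is derived from application code or from an infrastructure product such as a directory server is precisely the architectural decision described above.

    import java.util.Set;

    public class PaymentApproval {
        // Hypothetical authority limits.
        static final double SIGNOFF_THRESHOLD = 10000.00;
        static final double DUAL_SIGNOFF_THRESHOLD = 100000.00;

        // A payment reaches the signed-off state once it has collected
        // enough sign-offs from users holding approval authorization.
        public static boolean isSignedOff(double amount, Set<String> approvers) {
            if (amount <= SIGNOFF_THRESHOLD) {
                return true;  // below the threshold, no sign-off is required
            }
            // Larger amounts need one approver; the largest need two signatories.
            int required = (amount <= DUAL_SIGNOFF_THRESHOLD) ? 1 : 2;
            return approvers.size() >= required;
        }
    }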

Encryption is the technology that underpins the security design. Secrecy and integrity design goals can be developed in isolation from applications, and thus users approaching the application will have a known level of identity. Most installations will have security and security technology policies, which migration project planners need to read, understand, and conform to.

The application being migrated will place additional constraints on the security solutions design, and where replacement strategies are being applied, functionality might migrate from the application's code to infrastructure products. For example, user privileges can be held in directory servers as additional personal attributes. Clearly, the application would need to be designed to leverage these additional services, and this is an example of recoding to support a replacement strategy. Again, the strategies used in migration strongly influence the design process.

An additional dimension to the security quality is an application's support for multiple economic entities or businesses. At one level, this is just another object in the security model, but because an OS instance is a security delivery object, it is necessary to take this into account while the platform is designed.

Serviceability and Maintainability

First and foremost, serviceability and maintainability are software design goals. The applications that are the objects of migration projects are less likely than modern software products to have effective design solutions for these qualities.

Platform solution designers must understand the constraints and strategies designed by application designers, and they need to design a platform that meets these constraints and requirements. Platform designers have a number of tools in the design toolbox.

The Solaris OS has a number of software features to support the serviceability and maintainability of the application's runtime file system tree. These include:

  • UNIX System V packages

  • Solaris Live Upgrade

  • UNIX file system semantics

  • Solaris library management semantics and utilities

Examine the application being migrated to determine if any of these features can improve the serviceability and maintainability of the application once it is deployed on the target system. Again, you must examine the proposed architecture to understand the capability of the application. It is not cost effective to overachieve on the goals within the platform if the application can't take advantage of the improvements in the platform.

Identify Developmental Quality Design Goals

Developmental qualities consist of buildability, budgetability, and planability. These qualities derive from a series of design decisions made early in the development cycle. Given that in a migration project, the application already exists, the opportunity to improve these qualities might be limited. However, changes in the platform can lead to improvements in all three areas.

Buildability

The new generation of hardware can significantly reduce compilation times, allowing a more proactive release cycle, while migration to the Solaris OS allows access to a wide range of development tools, including the Sun ONE development environment and tools.

Budgetability and Planability

In the context of a migration project in which the application already exists, these qualities reduce to financial scalability. They also affect platform component selection. Horizontally scaling components can be deployed on a just-in-time basis; vertically scaling components can be deployed using financial engineering tools such as lease financing to allow additional system boards to be deployed over time in Sun Fire systems.

Alternatively, a storage design that permits the easy swapping of systems into and out of the configuration permits midrange systems to be swapped for large systems when required. Both designs allow the deployment of additional resources when required, and the financial solution allows an appropriate stable payment schedule.

The following design solutions are available to meet these qualities:

  • Horizontal scaling

  • Vertical scaling, using platform product features

  • Design for upgrade, which impacts storage and network design and enables vertical scaling through replacement

Identify Evolutionary Quality Design Goals

Evolutionary qualities relate to the ability and ease of changing the application and its platform configuration. Scalability is clearly an evolutionary quality; however, the appropriate strategies to scale an application's component or its platform are manifest qualities. Designing for scalability is covered in "Identify Manifest Quality Design Goals" on page 83.

Maintainability

The tools and approaches discussed in the operational qualities section must be designed into the software; the design of the platform's use of the maintenance features of the hosted software is an operational quality design problem. The design decisions that resolve the maintainability requirements are closely related to the design of preproduction environments. Most end-user software departments possess development and production environments, and the majority have at least one preproduction environment, variously referred to as the "user acceptance test," "integration test," or "quality assurance" environment. The variety of titles strongly implies that these preproduction environments might have different purposes and thus that multiple preproduction environments might be required. The number and purpose of these environments is an enterprise management decision because of their cost. The decision does, however, have technical constraints, and the proposed software release process has a major impact on it.

The key input to decisions about designing for maintainability is the frequency of releases and the number of versions of the application required at any one time. Developers might be working on the next+2 and next+3 releases while integration testing is being undertaken on next+1, the user support team and user training might be working on the next release, and production is, of course, working on the current release. In a service provider environment, additional presales environments might also be required to support the sales process, so that customers in transition have facilities that allow them to migrate their business or undertake a "try before you buy" exercise. This latter example might need to be segmented by customer.

Extensibility and Reusability

Extensibility and reusability are software design qualities. A refronting strategy might introduce significant changes to these qualities. The goals should be documented and designed against. Extensibility refers to functional extensibility. A migration can be a good opportunity to improve the extensibility and reusability of the application's code.

Portability

Depending on the source code's implementation technology and the migration strategy adopted, improvements in the portability qualities of an application can be obvious, inexpensive, and required. As you identify which code idioms require change, you might identify existing options to improve the portability of code lines. This can involve changing code to use POSIX library calls, or replacing proprietary SQL calls with open connectivity libraries such as Java Database Connectivity (JDBC) software. Refronting offers new and specific opportunities to improve the portability qualities of an application.
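
For instance, a query routed through the vendor-neutral JDBC API might look like the following minimal sketch; the connection URL, credentials, and table are placeholders, and the JDBC URL remains the only vendor-specific element.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class PortableQuery {
        public static void main(String[] args) throws SQLException {
            // Only the JDBC URL (and its driver) is vendor specific.
            String url = "jdbc:oracle:thin:@dbhost:1521:PROD";  // placeholder
            try (Connection con = DriverManager.getConnection(url, "user", "pw");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT invoice_no FROM invoices")) {
                while (rs.next()) {
                    System.out.println(rs.getString("invoice_no"));
                }
            }
        }
    }

Swapping the RDBMS then reduces to changing the driver and URL, not the application's query code.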

Documenting Design Goals With the SunTone Architecture Methodology

The SunTone Architecture Methodology's nonfunctional quality list gives platform and migration designers a framework for documenting their design goals and the extent to which the current system meets the required goals. Designers can then specify the changes required and design against the new goals. These new goals might require no change to the current system, or, where a trade-off design decision must be made, you might have to reduce a quality goal. Given that the primary reason for a migration is the preservation of business logic, changes in goals are more likely to be within the operational qualities family. Because platform design is aimed at improving service management goals, functional improvements are unusual in migration projects. This is examined further in the section on management design.

The application of technology patterns to meet requirements is the key to design.

Creating a Component and Technique Map

Chapter 3 identifies a series of strategies that can be applied to migration projects. Typically, one of these strategies dominates a project; in fact, it is often obvious which strategy is the best fit for a situation. The choice of strategy rests on two axioms: first, that the business logic is worth preserving (except in the case of a replacement project), and second, that the cost/benefit analysis proves the need for a migration project.

While strategy selection is often straightforward, the techniques usually associated with the chosen strategy will rarely be sufficient to complete the project and will need to be augmented with others. This is where the component/technique map is required. Additionally, the way you apply a technique might vary according to the strategy you are using.

Once you have selected a strategy, you need to identify the techniques you will use to apply this strategy to the components involved in the migration. This information is captured in a document called a component and technique map. Be aware that you might need to update this document during the project; prototyping and iteration might also lead to changes in it. During an initial architecture stage, this document serves as a first-cut component/technique map.

The technique you use to implement a strategy is based on cost and risk criteria and on the extent to which you will be using tools. When source code porting techniques are used, an interim source code control system might be required and the build environment might need to be ported or reverse-engineered. Source code porting also encourages the use of code scanners to identify common known idioms in the source environment and to apply known transformations to them. These techniques can also be applied to data structures, although the data transformations will differ from code transformations.

The techniques you apply to data files are of critical importance to replacement or rehost-technology porting strategies. Data files might be anything from source environment proprietary formats through indexed or ASCII flat files to RDBMS data volumes. Such data migrations might require specific conversion programs to be written or purchased and deployed, and decisions about physical versus logical copy strategies need to be defined, along with the need to transform or augment the source data.

Creating a component/technique map involves creating a component list based on data attributes of the source object and assigning a migration technique to each object class in the component list.
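
In its simplest form, the map is an association from each object class in the component list to its assigned technique. The following first-cut sketch is purely illustrative; the component classes and techniques are invented.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ComponentTechniqueMap {
        // First-cut map: each source component class is assigned the
        // technique that will carry it to the target environment.
        static final Map<String, String> FIRST_CUT = new LinkedHashMap<>();
        static {
            FIRST_CUT.put("C source modules",  "port: code scanner plus recompile");
            FIRST_CUT.put("ISAM data files",   "convert to RDBMS tables");
            FIRST_CUT.put("3270 screen forms", "refront in a browser");
            FIRST_CUT.put("Job control decks", "rehost in a third-party scheduler");
        }
    }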

Refining High-Level Designs

Platform design artifacts that might require further refinement include the following:

  • Required system capability

  • Deployment topology (for example, the number of boxes)

  • Storage requirements

  • Storage connectivity topology

  • A network connection topology

  • A nonfunctional requirements compliance statement

This refinement process is iterative in nature. As more information, requirements, constraints, and costs are discovered, changes to the overall design might also be required. These changes must be documented and agreed to by those responsible for managing as well as implementing the migration.

Refine Platform Design

Earlier in this chapter, we examined a methodology for defining platform design requirements and some of the platform design patterns used to meet differing requirements. The overall assembly framework used by Sun platform designers, called the service-point architecture, is based on the provision of compute pools conforming to a vertical, horizontal, edge role-based organizational nexus. This is supported by a storage architecture that is based on block and file services (SAN and NAS) and a management capability.

Applying platform design patterns to requirements permits the development of a candidate architecture capable of supporting an instance of the application. This candidate design needs to be tested against two additional issues: environments and consolidation.

Most organizations have one or more preproduction environments to support their development and software release processes. They might also have multiple production environments to support differing user communities, which might or might not be from different companies. The least expensive design effort is simply to replicate the production environment; however, the capital and management costs involved in this approach might prohibit its use, and cost scaling might need to be applied to at least one of the preproduction environments. An environment built for full-volume performance or scalability testing needs to be large enough to meet these goals and might, therefore, be larger than the production environment; it might also contain a load generation capability. In situations in which a development team has a significant commitment to the Solaris OS and the applications to be migrated are only a small part of the portfolio, Sun's N1 Provisioning Server, currently available only on blades, might offer development communities the opportunity to share hardware on a bookable basis, because hardware resources can be switched between development users by a reprovisioning transaction. For example, if developers need an integration test facility for only one month as they enter the integration test phase, you might provision the environment and release the resources to other users after the tests have been completed. A similar solution could be used for the volume test environment's load generators, and even for some components of the volume test environment itself.

Infrastructure designers are now looking to increase system utilization through various forms of consolidation, one of the most important being workload sharing. Planning for consolidation is another iteration in the platform design cycle, with cost effectiveness and utilization maximization as the key goals. The citizenship factors of an application depend on the degree of exclusive use the application makes of scarce resources. Traditional bad citizenship caused by memory leaks or fork bombs can now be managed by the Solaris OS such that while such applications still fail, they do not jeopardize the OS instance. Although we don't live in a perfect world, applications that exhibit these traits are not production quality and shouldn't be implemented. The majority of scarce resources, where scarce means only one instance is available, are related to the TCP/IP stack: TCP addresses, port addresses, or gethostname replies. CPU, RAM, and I/O bandwidth are never scarce in this sense, because it is always possible to design more of these resources into a platform through the vertical and horizontal scaling design patterns. Defining the citizenship factors of the application's components, either formally or informally, is a prerequisite for designing the co-hosting schema in a consolidation solution. Applications with good citizenship can be collocated, a strategy that leads to sharing nonutilized cycles. Abstracting the application from a host instance by means of remote storage patterns and application name services permits flexible and timely job location decisions, such as the development and load generator solutions documented above.

Even after conducting a detailed requirements definition and iteratively refining the system and storage platform architectures, you must tune the design to the strategies employed.

Refronting, replacement, and interoperation usually introduce new components into the application's runtime environment, and both original and new design patterns, based on fact-finding through prototyping, might be required. For example, these strategies might be used if the customer organization is able to leverage newer infrastructure components and if the migrated application is sharing resources that are already deployed, such as a message bus infrastructure when the interoperation strategy is employed. This case enables current capacity planning and service level management knowledge to be engineered into the platform design. It also enables the migration project to leverage historic investments, which is often the key justification for the migration. One of the reasons for performing a system I/O architecture review is to determine whether economies in the feed interfaces (if they exist) can be achieved. It is these sorts of economies that can often justify a migration necessitated by the retirement of data propagation code lines and their replacement with publish/subscribe messaging solutions.

Refronting will have required the isolation, extraction, and reimplementation of presentation logic, and this raises host platform redesign issues because new software components require hosting. Implementing the presentation logic in browser-hosted technology raises a software distribution problem that is fairly simple to solve, because the application becomes available through an HTTP URL. It is also likely to raise training issues, because the GUI will probably change enough that retraining the user base will be required. Obviously, if refronting is undertaken to extend the user community from within the company to suppliers or customers, the training effort is likely to be justified. Optionally, you can undertake refronting by provisioning the legacy terminal environment, thereby making the training issues less acute. The definition of the hosting environment as a browser will often mean that system infrastructure is already in place to support this requirement.

Replacement strategies that leverage the functionality sedimentation process require the provision of additional infrastructure, and the designer must include these new functional components in the runtime environment catalog.

Refine the Build Environment Design

The outputs from project planning will determine the list of infrastructure components required by the migration project. These might include the following:

  • Document management repository

  • Code control systems

  • Compilation environment

  • Analytic/architecture repositories

  • Data dictionaries

  • Software release and distribution control system

These tools require system resources on which to run, so a migration workbench is required. The capability and capacity of the migration workbench will depend on the transition plan. If the proposed production resources are available before you transition the production environment, the need for additional resources is limited. Software release and distribution is an ITIL function and thus might logically fall under designing or refining the management solution. In the context of migration, however, software release and distribution is highly likely to be a significant implementation and transition enabler; therefore, the solutions design for software release and distribution should be considered part of the project architecture.

In addition to a build environment for the target application's execution environment, you might need additional system resources (for example, system and infrastructure software) to develop components of the management solution. These resources can be made available through the sharing of the application's build environment or through the candidate production resource. If the candidate production resource is currently in use for production, as is the case when already existing infrastructure is leveraged, formal change control is required and a separate build environment for the management solutions is needed. It is better to build the proposed software release environment before transition and to use it in the transition process than to implement the management regime after the transition.

Create a Target Runtime Environment's Inventory

The definition of the target runtime environment might seem obvious, but you should test it against the component and technique map. The purpose of this testing is to ensure completeness of the target environment's functionality to meet the needs of the migrated application. The key requirements are as follows:

  • Additional libraries (for example, a JVM or RogueWave)

  • Application servers (for example, the Sun ONE Application Server, WebSphere, or WebLogic)

  • Third-party products (such as shell interpreters, RDBMSs, or emulators)

  • Deployment or provisioning agents for the build environment (for example, JumpStart technology or rdist)

  • Software agents required for transition (for example, SQL-BackTrack or database replication technology)

Obviously, objects that should be supported in the target environment and captured in the runtime environment catalog include any interpreters or other application development environments required on the target production systems, including RDBMSs and their forms packages.

The runtime environment's inventory needs to be comprehensive, and it needs to fit within the budget defined during the cost/benefit analysis. For example, if you're migrating from a non-UNIX system to the Solaris environment, it might be best to rehost the job control language (JCL) logic in either UNIX shell scripts or an infrastructure component such as a third-party job scheduler. This technique reduces the complexity and cost of the target system and maximizes its usefulness.

Refine the Application Design

Given that the goal of a migration project is usually the preservation of business logic, application design skills are only required in projects in which the migration strategy introduces significant changes to the code base. For example, if you were migrating business systems by moving from one COTS product to another, the migration problem would be reduced to data transformation and migration. Even in this example, the techniques of reverse engineering minimize the need for application design skills. However, detailed data modeling skills might still be required, and suitable tools might need to be deployed to support the modelers. Additionally, utilities might need to be planned, designed, or acquired to support transformations and to eliminate bugs that are exposed by the migration. This might also be necessary when implementation technologies differ, for example, with COBOL packed decimals.

Strategies adopted will determine the amount of applications design skill and infrastructure required to support the migration project, with rehosting requiring the least and rearchitecture requiring the most. The amount of genuine applications design skill that is required within the project will determine the complexity, functionality, and cost of the build environment and its supporting infrastructure.

Refine the Network Design

Organizations with a strong networking capability regard network design as a core competence. They frequently have strong, inflexible policies that are closely aligned to their business service delivery. Two factors are changing the constraints on solutions network designers:

  • Network hardware technology

  • Intersystem communication overtaking intersite communication

Network hardware vendors are among the prime beneficiaries of the sedimentation process discussed earlier in this book. The security, virtualization, and routing capabilities of network hardware are significantly more effective than they have been in the past, and this trend will continue. The bandwidth and speed of TCP/IP networks are on a growth curve, and costs are in long-term decline. As with systems, the supply industry is offering improved price performance as a long-term trend, and this must influence network designers.

The deepening reality behind Sun's tagline "The Network Is the Computer" is introducing new requirements to network design within the data center. These requirements differ from traditional design criteria in that they emphasize intersystem communication, as opposed to geographical or intersite communication. This could mean that your data center design might vary from your overall enterprise network design. Mass consumer portals such as vodafone.net are examples of these new Internet data centers. It is clear that the needs of intersystem communication are beginning to influence the principles and patterns of network design and are becoming as important as those of the physical network. For example, geographic distribution models have traditionally driven much of the intellectual property involved in current network design efforts, and therefore they have driven network design standards.

Applications designers will have provided an intercomponent data communication model. This is also a critical input to the network design process.

Most data networks are owned by a networking group, and the network design rules are often a constraint on migration project designers. Negotiation with the network's standards owner needs to be based on sound requirements rather than preferred solutions. The business requirements are not negotiable by the networks team, and if the requirements mandate a standards or policy change, this change should be achievable. Changes proposed for nonessential reasons, such as the personal comfort of the platform designer, are not justifiable.

Another key area of network design is the need to supply name services. The design rules and the division of responsibilities between IP-address distribution architecture and host-name resolution are well understood. Distributed applications require solution designers to supply a name service for the application's components, and many infrastructure providers mandate enhancements or additions to the traditional name service applications to allow intercomponent communication. For instance, Oracle implements a name services solution (SQL*Net) to allow clients, servers, and replication nodes to talk to each other. Sybase also implements a name services solution, using files or the Lightweight Directory Access Protocol (LDAP).
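
Where the application's components are Java based, such lookups are commonly abstracted behind the standard JNDI API, so that the directory technology (LDAP, DNS, or NIS) is selected by configuration rather than by code. The following minimal sketch assumes a JNDI provider has been configured; the component name is hypothetical.

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class NameServiceLookup {
        // The InitialContext's provider (LDAP, DNS, NIS, ...) is set by
        // configuration, so components can be relocated without recoding.
        public static Object lookup(String componentName) throws NamingException {
            Context ctx = new InitialContext();
            return ctx.lookup(componentName);
        }
    }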

The needs of management also place demands on the network designer. The usual design is to separate management traffic from business traffic because they have different demand profiles and routing requirements, and applying quality of service criteria to business traffic is easier when separate network segments are used.

Parts of security solutions design can be delegated to network hardware. Sun consultants recommend an in-depth security strategy that leverages an architecture with network hardware and system hosts playing specific cooperative roles.

Develop a Management Services Design

Sun's current management vision is driven by SunTone and the IT Infrastructure Library (ITIL). ITIL defines runtime systems management as consisting of service delivery and service support, as follows:

  • ITIL service delivery. Service level management, financial management, capacity management, availability management, and IT service continuity management

  • ITIL service support. Service desk, incident management, problem management, change management, release management, and configuration management

While business logic needs to be preserved, the management regime is often where improvement is expected, and these improvements can be specified as improvements in the SunTone Architecture Methodology's nonfunctional requirements. As a UNIX implementation, the Solaris OS has rich functionality, and it is one of the most popular operating systems for ISV offerings. Management solution designers therefore have many choices about how to implement management solutions. They can choose to use Solaris OE-based functionality, to use third-party layered software products (including those that emulate the behavior of the source environment, such as DEC Command Language (DCL) shells for ex-VMS users), or to use elements of the BMC Software Inc. or Computer Associates (CA) product sets, often available on both IBM mainframe operating systems and the Solaris OS.

Service improvements can be based on either improved quality or reduced cost. Where cost improvements are sought, implementing Solaris environment-bundled functionality should be considered: this functionality has already been paid for, and support is included in Sun's standard support offerings. In a number of cases, some of this functionality can be suboptimal, and the cost/benefit case should be carefully considered. For instance, the UNIX clock daemon, cron, and its various interfaces will run jobs at specific times and will implement a primitive security (privilege) model, but they do not support a job contingency semantic or language. Invoking application jobs within a distributed deployment also requires that you leverage name servers and use the remote execution capabilities of the Solaris OS (UNIX). The cron utility can be suboptimal and lack some functionality, but it might also meet all the business requirements, and it is free in that it is bundled with the Solaris OS.

The good thing is that these features do exist, although if you decide to implement job control in the UNIX shell, the developer productivity for job control might be less than is required, jeopardizing both time-to-market goals and the cost of new job deployment. In this case, applying a "no worse than before" rule might help you make decisions. Returning to the preceding example, if the job control in the source system is based on a proprietary job control language such as REXX or DCL, then porting the job control scripts to a UNIX shell will lead to a "no worse than before" deployment and cost solution while improving the portability of the job control solution. Other key areas in which the UNIX market, and hence the Solaris market, offers cheaper layered software solutions include service continuity, capacity planning, incident management, and release management. A decision to migrate functionality from one third-party supplier to a more cost-effective software provider can offer both functional and cost advantages, and in the case of migration from IBM proprietary mainframes, this can add significantly to the benefits case.

In summary, migrating management tools from proprietary or bespoke technologies to either cheaper third-party products or base OS functionality can save money.

We can now examine the process piece. The preservation of business logic is a key requirement. Applications are tightly bound to business processes, and process redesign opportunities are, therefore, likely to be restricted to the IT department.

The development of a process and the selection and implementation of tools are iterative and are driven by requirements. To maximize financial efficiency, adopt a principle of minimal compliance, and evaluate the evolutionary qualities of the management solution to determine the level of investment protection that is required and the changes that are expected to be paid for as part of the project.

The big rule is this: "Don't design for tomorrow's problem unless the customer wants to pay for it today." The corollary is that the business case owner will want to pay for some of tomorrow's problems today.

Examine each of the ITIL disciplines and determine the service level, process definition, and technology infrastructure supporting the process. Determine the service level changes required, both improvements and reductions. Because these changes might require a design that will impact the system, storage, and network solutions design, be sure to obtain sign-off for the changes before implementing them. In addition, leverage the current infrastructure. For instance, if the migration is from a proprietary environment to the Solaris environment and the organization has a significant Solaris estate, many of the ITIL disciplines will be in place and an infrastructure and process will already be defined. In this case, the design will be about including the new targets within a solution that already exists.

It is also likely that differing ITIL disciplines will be implemented to different levels of effectiveness. The platform design and the management process design need to go through an iterative, cooperative design review to ensure that design features of the platform and application can be used by the service support/delivery functions of the organization. This way, process change can be identified and the design can be simplified to meet the required IT process. An example of where design drives process is the definition of backup operations within the IT service continuity function. The actual backup processes are defined by the solutions designer, and the design needs to be reflected in the operational instructions that include how to manage backup source systems and backup media. The availability management function goals include an additional requirement statement for the platform designer, and the design needs to be tested for both compliance and potential over-supply and complexity.

In summary, designing for management services involves the following tasks:

  • Identifying management requirements, using the operational qualities and ITIL

  • Identifying where changes must occur as a result of the platform changes

  • Identifying where changes must occur to implement the benefits case

  • Testing the platform design against the management requirements

  • Testing the management process against the solutions design

Creating a Transition Plan

After developing a component and technique map, develop a first-pass transition plan that addresses the business acceptability of downtime windows. Transition planning is the process of planning the transition of the production environment from the legacy environment to your new, production-ready target environment. Planning the transition tasks involves the following activities:

  • Determining the transition window

  • Extending the downtime window (for example, with database replication)

  • Identifying and planning transition activities

It is significantly easier to develop a transition plan if downtime windows are available that permit the final data copying to occur while the application is not running. When only limited downtime is permitted, you need to design alternative transition plans that leverage additional infrastructure such as replication or messaging software. The plan also needs to take the final acceptance plan into account, because that plan might itself have to fit within the available downtime window. The transition plan also needs to take the system I/O architecture into account: any system that feeds the application from elsewhere in the enterprise needs to be repointed at the new production solution, and documenting the time at which this occurs is an essential part of the transition plan. A simple feasibility check for the final data copy is sketched below.
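
For example, a back-of-the-envelope feasibility check for the final copy can be scripted. The figures below are invented; the point is only the arithmetic a transition planner needs to perform:

    # Hypothetical feasibility check: does the final data copy, plus the
    # final acceptance test, fit the agreed downtime window?
    DATA_SET_GB = 400          # size of the production data set (assumed)
    COPY_RATE_MB_PER_S = 60    # rate measured on a test transition (assumed)
    WINDOW_HOURS = 8           # downtime window agreed with the business
    ACCEPTANCE_HOURS = 3       # runtime of the final acceptance test (assumed)

    copy_hours = (DATA_SET_GB * 1024) / COPY_RATE_MB_PER_S / 3600
    total_hours = copy_hours + ACCEPTANCE_HOURS

    print(f"copy {copy_hours:.1f} h; copy + acceptance {total_hours:.1f} h")
    if total_hours > WINDOW_HOURS:
        print("Window too small: consider replication to shrink the final copy.")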

The process involved in performing this task is described in Chapter 7.

Developing a Configuration Management Plan

Configuration management is frequently more complex in a migration project than in other deployments because additional tools are required to act as configuration management repositories. Both reverse engineering and source code porting place unique demands on the configuration management discipline, and the proposed approach and tool set must be planned and documented in a configuration management plan. Install any required tools on the development support systems before they are needed, and define the migration project's additional hardware requirements (for example, the tool hosts and disks to be used).
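
The plan itself can start life as a simple inventory. The entries below are hypothetical and indicate only the kind of information the configuration management plan should capture for each tool:

    # Hypothetical configuration management plan entries.
    cm_plan = [
        {"tool": "source repository", "purpose": "hold ported source trees",
         "host": "devsupport1", "disk_gb": 50, "installed": True},
        {"tool": "reverse-engineering workbench", "purpose": "recover designs",
         "host": "devsupport2", "disk_gb": 20, "installed": False},
    ]

    # Tools must be installed before the work packages that need them start.
    for entry in cm_plan:
        status = "ready" if entry["installed"] else "INSTALL BEFORE USE"
        print(f'{entry["tool"]} on {entry["host"]} '
              f'({entry["disk_gb"]} GB): {status}')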

Creating a System I/O Map

The scope of a project implicitly defines a series of system objects that will not be migrated to a new technology. This is especially the case when you are applying the refronting and interoperation strategies. For platform architects, system I/O normally applies to network, serial port, or disk I/O. However, in this case, we are talking about a software system.

Some of the classic I/O sources are likely to be ported and should be identified in a system I/O map. These include the following:

  • I/O using forms packages

  • Other data feeds

  • Interfaces that might need to be redesigned to ensure that legacy (or heritage) functionality remains available to the migrated system

  • Batch feeds from other systems

  • Value-added network (VAN) connections such as credit checking transactions

Batch feeds might be implemented in nonbatch technologies such as a message bus. These system interfaces need to be identified, and both migration and transition plans need to be designed.
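
A system I/O map can be as simple as a structured inventory of these interfaces. The following sketch is illustrative (the feed names and fields are assumptions) and shows the kind of record each interface needs, including when it is repointed during the transition:

    # Hypothetical system I/O map entries for a migration project.
    io_map = [
        {"interface": "GL batch feed", "kind": "batch", "direction": "inbound",
         "mechanism": "nightly file transfer", "repoint_at": "transition step 4"},
        {"interface": "credit check", "kind": "VAN", "direction": "outbound",
         "mechanism": "gateway connection", "repoint_at": "transition step 6"},
        {"interface": "order entry forms", "kind": "forms", "direction": "inbound",
         "mechanism": "forms package", "repoint_at": "transition step 2"},
    ]

    # The transition plan must record when each interface is repointed.
    for feed in sorted(io_map, key=lambda f: f["repoint_at"]):
        print(f'{feed["repoint_at"]}: repoint {feed["interface"]} ({feed["kind"]})')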

Creating an Acceptance Test Plan

One of the first things to plan for in any project is a mutually acceptable definition of an end point for the project. The mutuality might be between the business and the IT department, or it might be between the IT department and its suppliers. The definition you establish to identify the successful completion of the project should be captured in a document called an acceptance test plan.

Two key inputs to designing an acceptance test plan are necessary:

  • An acceptable definition of the required functionality

  • The definition of the improvement goals as defined in the project scope

Testing costs both money and time. The downtime window available for transition is also a factor to consider during the development of the acceptance test plan: the runtime of the final acceptance test needs to fit within the downtime window (for example, don't design an acceptance test that jeopardizes the production transition). The critical tests to include in an acceptance test should focus on the business, to ensure that the business logic on the target systems meets the migration requirements. This usually means testing that all the requirements are addressed and that the migrated solution remains an accurate and usable representation of the state of the business.
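
One lightweight way to make these two inputs concrete is a traceability list that ties each requirement and improvement goal to an acceptance test. A minimal sketch, with invented requirement identifiers:

    # Hypothetical traceability from requirements to acceptance tests.
    acceptance_plan = [
        ("REQ-01", "month-end ledger close completes",
         "run the close against migrated data"),
        ("REQ-02", "nightly batch feed loads",
         "replay one night's feed and reconcile totals"),
        ("IMP-01", "close runtime reduced by 30%",
         "time the close; compare against the recorded baseline"),
    ]

    for req_id, requirement, test in acceptance_plan:
        print(f"{req_id}: {requirement} -> acceptance test: {test}")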

Planning Test Strategies

The usual principles of unit testing should be applied to designated components and work packages. These need to be supplemented by integration tests that involve members of the user community and by a user acceptance testing infrastructure. Some organizations will release test scripts; when appropriate, these should be used during unit testing. The tests you perform might or might not be the final acceptance tests, or they might be prefinal tests. The purpose of the unit tests is to ensure that the planned inputs and outputs are consumed and produced correctly. These tests should be designed to ensure that the components interact correctly with each other and that the application's inputs and outputs are correctly handled. The application's system I/Os include screens, forms, reports, and any batch or real-time feeds. The application and its components also need to be tested against any mandatory systemic qualities.
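
For instance, a unit test for a migrated batch-feed parser might check that a known input record produces the expected output. The parser and record format below are invented purely for illustration:

    import unittest

    def parse_feed_record(line: str) -> dict:
        # Hypothetical fixed-width record: 8-char account, 10-digit pence amount.
        return {"account": line[:8].strip(), "amount_pence": int(line[8:18])}

    class FeedParserTest(unittest.TestCase):
        def test_known_record(self):
            record = parse_feed_record("ACC00042" + "0000012345")
            self.assertEqual(record["account"], "ACC00042")
            self.assertEqual(record["amount_pence"], 12345)

    if __name__ == "__main__":
        unittest.main()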

In addition, any software or process designed to support the transition requires testing. Again, the usual principles of unit and integration testing need to be applied. However, specific instrumentation can and should be designed into the transition harness to reduce post-transition testing. If the transition process can reliably report on its success, or degree of success, post-transition testing for the completeness and accuracy of the transition can be avoided, and the only testing required is the final acceptance test.
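
Instrumentation can be as simple as counting and checksumming what is read and written. The following sketch (the file names are placeholders) reports on its own success so that completeness does not have to be re-verified after the transition:

    import hashlib

    def instrumented_copy(src_path: str, dst_path: str) -> None:
        """Copy records and report a count and digest of what was moved."""
        count, digest = 0, hashlib.sha256()
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            for line in src:
                digest.update(line)
                dst.write(line)
                count += 1
        # This log line is the evidence the acceptance testers rely on.
        print(f"copied {count} records, sha256={digest.hexdigest()}")

    # Usage (hypothetical file names):
    # instrumented_copy("legacy_extract.dat", "target_load.dat")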

The acceptance test must be business focused. A degree of pretesting is required to ensure that the acceptance test can be performed in an acceptable time frame, and the copy process will require some instrumentation so that the acceptance testers have confidence that they are testing the correct bill of materials.

Prototyping the Process

One dimension of the design problem in migration projects that has not yet been explored is that of multiple environments. The word environment as used here describes an instance of the application to which different purposes and management goals are applied. Most applications possess multiple environments, if only a development and a production environment. Usually, at least one test environment will exist, typically called User Acceptance Testing (UAT), Training, Volume Test, Quality Assurance, or something similar. The variety of titles illustrates that it is possible to have more than one test environment. Additionally, instances of the application might exist to allow certain states of the business to be represented over a longer period of time than they would be in reality. For example, the end-of-month position can be copied into an instance of an application to allow read-intensive "as at end of month" reports to be run over the succeeding month. This can be done to help minimize contention between the users of historic data and the new-business workers, or because the application cannot support "as at" reporting. Other environments can be created to support business continuity. In the case study in Chapter 11, the organization had development, training, user acceptance, production, and MIS instances of the application's environment. The MIS instance held an "as at" end-of-previous-month state.

Separate environments are likely to have different downtime criteria and different tolerances for failure of the transition; a training instance, for example, may be able to absorb the remediation time of a partially successful copy. Additionally, regression from an unsuccessful migration is likely to be simpler. These differences in the manageability of the separate instances should be exploited in the migration planning process. One of the key advantages multiple environments deliver is a prototyping and benchmarking resource.

Prototyping and benchmarking can be used to test the delivered manifest qualities and the manifest qualities of the transition process itself. The need to instrument the transition process has already been discussed, but the existence of separate environments allows the instrumentation to be developed iteratively. The existence of reduced-size but integral data sets also allows rapid prototyping and the use of extrapolation for predicting the performance of the transition harness (see the sketch after the following list); if the transition harness's logic needs testing, this can be done on a subset. The performance of the transition process is a key factor in the transition, and if a full-size data set can be obtained in a nonintrusive way, test transitions should be undertaken to provide the following benefits:

  • To assure users and migrators of the accuracy and acceptability of the transition process

  • To train the transition team in the transition process to minimize the chance of process failure
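
On extrapolation: a first-order estimate scales the timings measured on reduced-size data sets linearly with volume. The numbers below are invented; in practice you would also allow for nonlinear effects such as index rebuilds:

    # Hypothetical timings from prototype runs on reduced-size data sets.
    subset_runs = [(5, 410), (10, 830), (20, 1650)]  # (GB copied, seconds)

    # Fit a simple per-gigabyte rate (least squares through the origin).
    rate = (sum(gb * s for gb, s in subset_runs)
            / sum(gb * gb for gb, _ in subset_runs))

    FULL_SIZE_GB = 400  # assumed production data set size
    print(f"~{rate:.0f} s/GB; full transition estimated "
          f"at {rate * FULL_SIZE_GB / 3600:.1f} h")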

Prototyping can be of an iterative or throwaway nature. Because the transition harness is only to be used for a short period of time while the various environments are migrated, throwaway techniques can be appropriate for the harness. However, the development of the test plans and any transition code to be inherited by either the development team or operations staff is best developed iteratively and documented appropriately.

Designing a Training Plan for the New Environment

At this point, you need to consider the impact of the new solution on its users and administrators and develop a training plan that addresses both. This plan will reflect the strategies and techniques used in the migration and will also fall along the IT/business divide. We have already argued that process redesign should be minimal in the context of migration projects. However, user interfaces might change dramatically, as in the case of a COTS version upgrade that changes the middleware infrastructure, moving from ASCII forms to a web browser/server presentation solution.

The rest of the training plan should concentrate on the IT department and the process and technology changes implemented within the context of a migration project.

Training falls into two areas: skills and process. Training potentially has two target communities: business and IT. The amount of training required will depend on the strategies you have adopted. A review of the project benefits case will confirm where business logic within the application remains unchanged and where, consequently, business skills and business processes should require minimal enhancement. The three strategies that contradict this statement are:

  • Refronting

  • Rearchitecting

  • Replacing

Refronting might require you to retrain users in the use of the new GUI. Rearchitecting and replacing might have significant user impact; the migration of a business from a legacy ledger solution to an up-to-date COTS solution is likely to have a massive impact on both business processes and individual use of the system. Retirement strategies might mean that certain operations are no longer conducted, and therefore process changes are required.

Another factor influencing the amount of change, and therefore the amount of training potentially required, is the change in the technologies used by the business and IT. Migrations from non-UNIX platforms will require the enterprise to conduct a skills audit to ensure that its IT department has sufficient skills to run the new platform. The likelihood of a skills gap depends on the degree of heterogeneity in the IT department before the migration; if the migration has led to the enterprise's first Solaris systems, then a training and transition plan needs to be implemented. The impact of a migration on an organization's IT process also varies with the maturity and coverage of that process (for example, how many ITIL functions are implemented, and how well). The final factor is the degree of change implemented in the process, which might depend on the technologies deployed during the migration.
