26.1 Migration Assessment

Before beginning any migration, assess your current configuration to determine what work must be done before the migration can succeed.

26.1.1 Migration Issues

Consider this list of migration issues. You will have to address these before initiating your migration since many of them could turn out to be showstoppers if you don't plan ahead.

  • Storage and the cluster interconnect cannot be shared between V1.[56] and V5.X clusters.

  • Some additional hardware may be required.

    • Storage for new cluster-common file systems, member boot disks, and quorum disk.

    • Cluster Interconnect (LAN or Memory Channel), which may not already be in place in many configurations (such as ASE-only configurations).

      • If you are currently using Memory Channel 1 and you will be adding newer systems to your cluster (such as the DS or ES series AlphaServers, which only support Memory Channel 2), be aware that Memory Channel 1 and Memory Channel 2 cannot be mixed on the same rail. Check the Supported Options web site to see what is supported.

        http://www.compaq.com/alphaserver/products/options.html

    • Additional shared SCSI controllers to eliminate single points of failure.

    • Additional network interface to reduce single points of failure.

  • Tru64 UNIX version 5.X is a major operating system upgrade, and you should read the Release Notes and Installation Guide to familiarize yourself with the various changes before diving in.

  • Applications must be supported on Tru64 UNIX version 5.X. The version of your production application that is supported on V5.X may differ significantly from your current version in its installation, configuration, and implementation. Know this before moving forward.

  • There is a new AdvFS file system on-disk structure in V5.X. This change is only relevant for newly created file domains, and the previous format of the domain is compatible with V5.X. No update to the existing domain structure occurs during an OS upgrade.

    • The new on-disk structure allows tracking of more files in a domain and delivers better performance when creating and accessing files.

  • The Logical Storage Manager (LSM) has on-disk changes that take effect during an installupdate(8). These changes can be rolled back if you run volsave(8) before the upgrade and save the files it produces; clu_migrate_save runs volsave for you. Note: the ability to roll back the LSM on-disk structures is important if you upgrade; however, we recommend a fresh install of V5.X, not an installupdate to V5.X.
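If you prefer to run volsave by hand before the upgrade, a minimal sketch follows. The save directory is an illustrative assumption, and the guard simply reports when volsave(8), a Tru64-only command, is unavailable:

```shell
# Hedged sketch: manually save the LSM configuration before an upgrade.
# volsave(8) exists only on Tru64 UNIX; the save directory below is an
# illustrative assumption, not a required path.
SAVEDIR=/usr/var/tmp/lsm.premigration

if command -v volsave >/dev/null 2>&1; then
    volsave -d "$SAVEDIR"      # writes the LSM description set
    STATUS="saved LSM config to $SAVEDIR"
else
    STATUS="volsave not found; run this on the Tru64 member"
fi
echo "$STATUS"
```

Copy the resulting description set off the member before running installupdate; volrestore(8) can then roll the on-disk structures back if needed.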

  • TruCluster Server V5.X requires a new license (TCS-UA), which is member-specific; that is, it must be installed on each member.

  • The UFS file system is not supported read-write cluster-wide (but is supported read-write while mounted exclusively on a single member as of V5.1A).[1]

  • Multiple ASEs in a V1.[56] configuration are tricky and require special consideration but should rarely occur. See section 26.4.6 for more information on multiple ASEs.

  • ASE-specific environment variables (such as MEMBER_STATE, ASEROUTING, ASE_ID, and ASE_PARTIAL_MIRRORING) do not exist in TruCluster Server.

26.1.2 Migration Planning

Ask yourself these questions to get an idea of how you will migrate to the new version and what resources and hardware you'll need.

  • How many systems are in the current cluster and will any of them be available for use in the migration? This is key if you choose migration option 3. The migration options are covered in section 26.3.

  • What is the current version of the operating system and TruCluster software? The migration tools require TruCluster V1.5 or V1.6.

  • Are there free PCI slots in the future members of the V5.X cluster? You may need them to add a cluster interconnect adapter and/or additional storage cards.

  • What is the acceptable downtime for the migration? Consider this maintenance window carefully, especially if you choose migration option 3, which requires some "hard" downtime.

  • Are there scripts that reference Tru64 UNIX version 4.X disk names (/dev/rz23c)? These will have to be modified (or symbolic links added) since TruCluster Server does not support the old device names. See clu_migrate_save output for the mappings.
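A quick way to hunt for old-style rz device names in your service scripts is a simple grep. The sketch below creates a sample script only so the example is self-contained; in practice you would point SCRIPTDIR at your real ASE script directory:

```shell
# Illustrative sketch: find scripts that reference old-style rz device names.
# The sample script and temporary directory exist only to make the example
# self-contained; substitute your own script directory.
SCRIPTDIR=$(mktemp -d)
cat > "$SCRIPTDIR/start_db.sh" <<'EOF'
#!/bin/sh
mount /dev/rz23c /data        # old-style V4.x device name
EOF

# Old-style names look like rzNN[a-h] (raw devices are rrzNN[a-h]).
MATCHES=$(grep -l -E '/dev/r?rz[0-9]+[a-h]' "$SCRIPTDIR"/*.sh)
echo "$MATCHES"
```

Each file listed needs its device names mapped to the new /dev/disk/dskN[a-h] style (or bridged with symbolic links).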

  • What applications are being used? These will have to be checked to see if they are compatible with Tru64 UNIX version 5.X.

  • What types of ASE services are being used? ASE services are like CAA resources in version 5.X. See Section 23.4 for more on CAA resources.

    • Disk services are not needed in TruCluster Server version 5.X because of the nature of shared storage.

      • You no longer have to associate an application resource with storage because the storage is accessible (with a few exceptions) from each member. You may, however, create CAA resources such that the application tries to relocate the storage to the member that is running the application. See Chapter 24 for an example.

      • You need a CAA resource if you want to set the relocation policy and/or to set up start and stop scripts, which are combined into one script in TruCluster Server.
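For reference, a CAA application resource is described by a profile; a minimal sketch might look like the following. The resource name and script name are hypothetical, and the full set of attributes is documented in caa_profile(8):

```
NAME=dbapp
TYPE=application
ACTION_SCRIPT=dbapp.scr
AUTO_START=0
CHECK_INTERVAL=60
FAILURE_THRESHOLD=0
PLACEMENT=balanced
RESTART_ATTEMPTS=1
```

The single ACTION_SCRIPT replaces the separate ASE start and stop scripts; it is invoked with start, stop, and check arguments. You would then register and start the resource with caa_register(8) and caa_start(8).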

    • NFS services aren't required; instead NFS service is fully integrated into V5.X clusters. Clients can use the default cluster alias or another cluster alias for NFS mounting. See Section 20.4.2 for more about NFS Serving and Section 20.4.2.3 for the details about the exports.aliases file.

    • The old-style DRD services are also not needed, but you may need to create symbolic links to point to the new device names (/dev/rdisk/dskN[a-h]) for the application to run.
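If an application has an old device name hard-coded, a symbolic link can bridge it to the new name. The sketch below uses a scratch directory as a stand-in for /dev so it is self-contained; the device names themselves are illustrative:

```shell
# Illustrative sketch: map an old-style device name to a new-style one with
# a symbolic link. On a real V5.X member you would link /dev/rz23c to the
# corresponding /dev/rdisk/dskNc shown in the clu_migrate_save mappings.
DEVDIR=$(mktemp -d)          # stand-in for /dev so the sketch is runnable
mkdir -p "$DEVDIR/rdisk"
touch "$DEVDIR/rdisk/dsk5c"  # stand-in for the real device node

ln -s "$DEVDIR/rdisk/dsk5c" "$DEVDIR/rz23c"
readlink "$DEVDIR/rz23c"
```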

  • What are the versions of SRM firmware, storage controller (HSx), and Host Bus Adapters (KxPxx)? These should all be updated to the latest version.

  • Do you have single points of failure for the new cluster-common file systems or disks? You should use some combination of software and/or hardware mirroring for cluster_root, cluster_usr, and cluster_var. You can use LSM to mirror cluster_usr and cluster_var; as of V5.1A, you can also mirror cluster_root and swap with LSM. The member boot partition and CNX partition must be mirrored at the hardware level as of this writing. Since primary swap and the boot and CNX partitions are all on the same disk, it probably makes sense to mirror the whole disk at the hardware level. Note: if you follow our recommendations in Chapter 21, the member boot disk can be restored in 10-15 minutes, provided that the failing member didn't cause the cluster to lose quorum. So even if you don't mirror the member boot disk, you can re-create it rather quickly.
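As a sketch, adding LSM mirrors to the cluster-common volumes might look like the following. volassist(8) is Tru64-specific, so the guard lets the sketch degrade gracefully elsewhere, and the volume names are assumptions based on a default clu_create configuration:

```shell
# Hedged sketch: attach an LSM mirror to the cluster_usr and cluster_var
# volumes. volassist(8) exists only on Tru64 UNIX; the volume names are
# assumptions, not guaranteed to match your configuration.
if command -v volassist >/dev/null 2>&1; then
    volassist mirror cluster_usr
    volassist mirror cluster_var
    RESULT="mirrors attached"
else
    RESULT="volassist not found; run this on a V5.X cluster member"
fi
echo "$RESULT"
```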

[1]Using "mount -o server_only".




TruCluster Server Handbook (HP Technologies)
ISBN: 1555582591
Year: 2005
Pages: 273