8.3 Tuning an HFS Filesystem

     

There are only a few changes we can make to an HFS filesystem, and any changes should be made when the filesystem is created or shortly thereafter. They involve the block size, the fragment size, the inode density, and the proportion of space within a cylinder group that an individual file may consume. The following sections provide some examples.

8.3.1 Filesystems containing only a few large files

In this instance, we know up front that this filesystem will be used solely by a few large files. We want to allocate space to the files as efficiently as possible. In this instance, we will:

  1. Use the largest block and fragment size possible.

     

     newfs -b 65536 -f 8192 ...

  2. Lower the inode density to increase user data space.

     

     newfs -i 65536 ...

  3. Ensure that the minimum amount of free space in the filesystem does not fall below 10 percent.

     

     tunefs -m 10 ...

    When the designers of HFS tested the performance of the filesystem, they found that, on average, once free space fell below 10 percent, the filesystem's ability to find free resources dropped dramatically. For today's large filesystems, it may be appropriate to drop this percentage by a few points. The difference is that in the 1980s the designers of HFS were dealing with filesystems of a few hundred megabytes, whereas today we are dealing with filesystems of a few hundred gigabytes; 10 percent of 500GB is a rather different amount of space from 10 percent of 500MB.

  4. Allow largefiles.

     

     newfs -o largefiles ...

  5. Allow a single file to utilize an entire cylinder group.

     

     tunefs -e <bpg> ...

  6. Ensure that the rotational delay in the filesystem is set to zero.

     

     tunefs -d 0 ...
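
Pulling the six changes above together, the creation-time options plus the follow-up tunefs adjustments look roughly like the sketch below. It is only a sketch: the device file is the one used in the examples that follow, the <bpg> value must be read from the tunefs -v output of your own filesystem, and the exact tunefs option letters should be checked against tunefs(1M) on your release.

     # Sketch only - device names, sizes, and option letters are examples.
     newfs -F hfs -b 65536 -f 8192 -i 65536 -o largefiles /dev/vx/rdsk/ora1/archive   # items 1, 2, 4
     tunefs -m 10 /dev/vx/rdsk/ora1/archive          # item 3: keep minfree at 10 percent
     tunefs -v /dev/vx/rdsk/ora1/archive | grep bpg  # read bpg for this filesystem
     tunefs -e <bpg> /dev/vx/rdsk/ora1/archive       # item 5: one file may fill a cylinder group
     tunefs -d 0 /dev/vx/rdsk/ora1/archive           # item 6: rotational delay of zero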

The first two changes must be made at filesystem creation time. At that time, we can also include options 3 and 4, although both can be implemented later. The last two tasks should be undertaken as soon as the filesystem is created; if not, the initial allocation policies will be less than optimal, and files will be laid out in such a way that fixing them would require us to delete each file and restore it from a backup tape. Options 1, 4, and 5 will, we hope, yield a performance benefit when our filesystem consists of only a few large files, while options 2, 3, and 4 are capacity related. Let's start with a simple baseline test in which we create a single large file. This is not a complete test by any means, because we should really test both sequential and random IO (large and small IO requests); it will simply indicate whether changing the way the filesystem is created makes any difference to its performance. To start with, we will create the filesystem with the default options.

 

root@hpeos003[] newfs -F hfs /dev/vx/rdsk/ora1/archive
mkfs (hfs): Warning - 224 sector(s) in the last cylinder are not allocated.
mkfs (hfs): /dev/vx/rdsk/ora1/archive - 4194304 sectors in 6722 cylinders of 16 tracks, 39 sectors
        4295.0Mb in 421 cyl groups (16 c/g, 10.22Mb/g, 1600 i/g)
Super block backups (for fsck -b) at:
    16,  10040,  20064,  30088,  40112,  50136,  60160,  70184,  80208,  90232,
100256, 110280, 120304, 130328, 140352, 150376, 159760, 169784, 179808, 189832,
 ...
4193456
root@hpeos003[]

We will now mount the filesystem and time how long it takes to create a 1GB file in the filesystem.

 

root@hpeos003[] mkdir /test
root@hpeos003[] mount /dev/vx/dsk/ora1/archive /test
root@hpeos003[] cd /test
root@hpeos003[test] time prealloc 1GB.file 1073741824

real    11:38.8
user        0.1
sys        29.1
root@hpeos003[test]

I took some samples of IO performance (using sar) during this test:

 

root@hpeos003[] sar -d 5 5

HP-UX hpeos003 B.11.11 U 9000/800    11/13/03

00:02:53   device   %busy   avque   r+w/s  blks/s  avwait  avserv
00:02:58  c1t15d0    0.59    0.50       0       2    6.96   18.60
           c0t4d0  100.00 31949.50      82    1302 150717.84   97.60
          c4t12d0  100.00 22443.50     164    2620 144854.70   49.21
00:03:03  c1t15d0    3.45    0.50       3      13    5.71   15.57
           c0t4d0  100.00 31470.00     110    1756 147746.78   72.94
          c4t12d0  100.00 21586.50     178    2852 151833.02   44.99
00:03:08  c1t15d0    1.20    0.50       1       4    5.19   15.61
           c0t4d0  100.00 30945.00     101    1612 158546.36   78.75
          c4t12d0  100.00 20772.00     149    2385 157004.81   51.24
00:03:13  c1t15d0    4.01    0.50       3       9    6.57   22.09
           c0t4d0  100.00 30477.00      86    1374 164202.09   93.72
          c4t12d0  100.00 20001.50     159    2535 158627.81   52.65
00:03:18  c1t15d0    2.21    0.50       2       6    6.20   19.53
           c0t4d0  100.00 29947.50     127    2022 163093.66   63.05
          c4t12d0  100.00 19147.50     184    2937 165405.70   43.43
Average   c1t15d0    2.28    0.50       2       7    6.07   18.47
Average    c0t4d0  100.00 30895.00     101    1611 157029.62   79.25
Average   c4t12d0  100.00 20775.00     167    2665 155637.33   48.07
root@hpeos003[]

I could hardly believe the average wait time for IO requests, although if you look at the average service time and the average queue size, it starts to make sense. From the fsdb output below, we can see that the file is now using the double indirect pointer (a13). This will increase the amount of IO to the filesystem:

 

root@hpeos003[test] ll -i
total 2098208
     4 -rw-rw-rw-   1 root       sys        1073741824 Nov 12 23:59 1GB.file
     3 drwxr-xr-x   2 root       root          8192 Nov 12 23:52 lost+found
root@hpeos003[test] echo "4i" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 4194304(frags)
isize/cyl group=200(Kbyte blocks)
primary block size=8192(bytes)
fragment size=1024
no. of cyl groups = 421
i#:4  md: f---rw-rw-rw-  ln:    1  uid:    0  gid:    3  sz: 1073741824
ci:0
a0 :   280  a1 :   288  a2 :   296  a3 :   304  a4 :   312  a5 :   320
a6 :   328  a7 :   336  a8 :   344  a9 :   352  a10:   360  a11:   368
a12:  9992  a13: 79880  a14:     0
at: Wed Nov 12 23:53:30 2003
mt: Wed Nov 12 23:59:55 2003
ct: Thu Nov 13 00:05:07 2003
root@hpeos003[test]

Let's try the same test, but on a filesystem created with the features listed above.

 

root@hpeos003[] umount /test
root@hpeos003[] newfs -F hfs -b 65536 -f 8192 -i 65536 -o largefiles /dev/vx/rdsk/ora1/archive
mkfs (hfs): Warning - 224 sector(s) in the last cylinder are not allocated.
mkfs (hfs): /dev/vx/rdsk/ora1/archive - 4194304 sectors in 6722 cylinders of 16 tracks, 39 sectors
        4295.0Mb in 421 cyl groups (16 c/g, 10.22Mb/g, 512 i/g)
Super block backups (for fsck -b) at:
    64,  10112,  20160,  30208,  40256,  50304,  60352,  70400,  80448,  90496,
100544, 110592, 120640, 130688, 140736, 150784, 159808, 169856, 179904,
 ...
4193600
root@hpeos003[]

We will now tune the filesystem to ensure that a single file can utilize an entire cylinder group (maxbpg = bpg):

 

root@hpeos003[] tunefs -v /dev/vx/rdsk/ora1/archive
super block last mounted on:
magic   5231994 clean   FS_CLEAN        time    Thu Nov 13 00:19:27 2003
sblkno  8       cblkno  16      iblkno  24      dblkno  32
sbsize  8192    cgsize  8192    cgoffset 8      cgmask  0xfffffff0
ncg     421     size    524288  blocks  514175
bsize   65536   bshift  16      bmask   0xffff0000
fsize   8192    fshift  13      fmask   0xffffe000
frag    8       fragshift       3       fsbtodb 3
minfree 10%     maxbpg  39      maxcontig 1     rotdelay 0ms    rps     60
csaddr  32      cssize  8192    csshift 12      csmask  0xfffff000
ntrak   16      nsect   39      spc     624     ncyl    6722
cpg     16      bpg     156     fpg     1248    ipg     512
nindir  16384   inopb   512     nspf    8
nbfree  64269   ndir    2       nifree  215548  nffree  14
cgrotor 0       fmod    0       ronly   0
fname           fpack
featurebits     0x3     id      0x0,0x0
optimize        FS_OPTTIME
cylinders in last group 2
blocks in last group 19
root@hpeos003[] tunefs -e 156 /dev/vx/rdsk/ora1/archive
maximum blocks per file in a cylinder group changes from 39 to 156
root@hpeos003[]

The rotational delay is zero, as we wanted it to be. We can now try our test:

 

root@hpeos003[] mount /dev/vx/dsk/ora1/archive /test
root@hpeos003[] cd /test
root@hpeos003[test] time prealloc 1GB.file 1073741824

real       54.8
user        0.0
sys         4.4
root@hpeos003[test]

This is an astonishing difference in times. I managed to capture similar sar output during this test.

 

root@hpeos003[] sar -d 5 5

HP-UX hpeos003 B.11.11 U 9000/800    11/13/03

00:22:05   device   %busy   avque   r+w/s  blks/s  avwait  avserv
00:22:10  c1t15d0    2.79    0.50      14     123    4.86    2.70
           c0t4d0  100.00 2254.00     520   66561 10704.99   15.34
          c4t12d0  100.00 1793.00     530   67788 10670.21   15.10
00:22:15  c1t15d0    6.40   21.34      23     102   52.45   17.21
           c0t4d0   71.60 2658.75     298   36509 8667.97   18.07
          c4t12d0   55.60 2718.96     218   26109 6239.87   18.49
00:22:20  c1t15d0    0.20    0.50       0       1    0.11   10.28
           c0t4d0  100.00 4338.00     336   41197 4008.19   23.88
          c4t12d0  100.00 4222.50     352   43235 3945.94   23.03
00:22:25   c0t4d0  100.00 2569.00     371   47226 8976.19   21.73
          c4t12d0  100.00 2483.12     344   43610 9085.68   23.28
00:22:30   c0t4d0  100.00  940.60     280   35142 13835.80   28.58
          c4t12d0  100.00  883.60     296   37280 13772.03   27.08
Average   c1t15d0    1.88   13.29       8      45   34.02   11.69
Average    c0t4d0   94.32 2604.22     361   45335 9252.38   20.74
Average   c4t12d0   91.12 2515.36     348   43614 8971.14   20.78
root@hpeos003[]

While the disks are still maxed out most of the time, the average wait time has been reduced significantly. The average service time has been cut drastically, hinting that the disks are finding it easier to perform the IO. The reads/writes per second and the number of blocks transferred have also increased. All of this has led to a dramatic reduction in the average queue size. At first, I thought these figures were simply too different, so I ran the same tests a number of times, always with the same results. I also checked my diagnostic logs for hardware errors; there were none. If we look at the structure of the inode, we can see that we have avoided the double-indirect pointer (a13).

 

root@hpeos003[test] ll -i
total 2097408
     4 -rw-rw-rw-   1 root       sys        1073741824 Nov 13 00:22 1GB.file
     3 drwxr-xr-x   2 root       root         65536 Nov 13 00:19 lost+found
root@hpeos003[test] echo "4i" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 524288(frags)
isize/cyl group=64(Kbyte blocks)
primary block size=65536(bytes)
fragment size=8192
no. of cyl groups = 421
i#:4  md: f---rw-rw-rw-  ln:    1  uid:    0  gid:    3  sz: 1073741824
ci:0
a0 :    72  a1 :    80  a2 :    88  a3 :    96  a4 :   104  a5 :   112
a6 :   120  a7 :   128  a8 :   136  a9 :   144  a10:   152  a11:   160
a12:  1256  a13:     0  a14:     0
at: Thu Nov 13 00:21:54 2003
mt: Thu Nov 13 00:22:32 2003
ct: Thu Nov 13 00:22:49 2003
root@hpeos003[test]
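
The nindir value reported by tunefs -v above (16384) explains why: each indirect block holds bsize/4 block pointers (4-byte pointers), so the reach of the direct and single-indirect pointers grows with the block size. A quick back-of-the-envelope check, reproducible in the shell, shows why the 1GB file needed the double indirect pointer with 8KB blocks but not with 64KB blocks:

     # Sketch: how much file data the direct and single-indirect pointers cover.
     BS=8192                           # default block size
     echo $(( 12 * BS ))               # 98304      - 12 direct pointers cover ~96KB
     echo $(( (BS / 4) * BS ))         # 16777216   - single indirect covers 16MB, so a 1GB file needs a13
     BS=65536                          # tuned block size
     echo $(( 12 * BS ))               # 786432     - direct pointers now cover 768KB
     echo $(( (BS / 4) * BS ))         # 1073741824 - single indirect covers exactly 1GB, so a13 stays 0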

I then had a flash of inspiration. The VxVM volume I am using is a relayout of a RAID 5 volume to a concat-mirror, and I wondered whether this could be the issue. I removed the volume entirely and recreated it as a simple concatenated volume. Let's get back to the test, first with the baseline configuration.

 

root@hpeos003[] newfs -F hfs /dev/vx/rdsk/ora1/archive
...
root@hpeos003[] mount /dev/vx/dsk/ora1/archive /test
root@hpeos003[] cd /test
root@hpeos003[test] time prealloc 1GB.file 1073741824

real     2:31.6
user        0.1
sys         8.1
root@hpeos003[test]

This is a dramatic difference! Here are the sar statistics collected during this test:

 

root@hpeos003[] sar -d 5 5

HP-UX hpeos003 B.11.11 U 9000/800    11/13/03

01:19:12   device   %busy   avque   r+w/s  blks/s  avwait  avserv
01:19:17  c1t15d0    0.80    0.50       2       9    0.20   10.56
          c4t13d0  100.00 50608.50    1081   17297 7996.79    6.30
01:19:22  c1t15d0    4.20   10.96      16      73   30.94   15.32
          c4t13d0  100.00 45837.00     935   14839 13218.79    9.83
01:19:27  c4t13d0  100.00 40412.00    1223   19524 17993.24    6.51
01:19:32  c4t13d0  100.00 34814.50    1016   16222 23178.15    7.93
01:19:37  c4t13d0  100.00 29783.00     996   15899 27687.47    8.04
Average   c1t15d0    1.00    9.58       4      16   26.88   14.69
Average   c4t13d0  100.00 40430.61    1050   16756 17924.13    7.62
root@hpeos003[]

And now with the enhanced options:

 

root@hpeos003[test] umount /test
umount: cannot unmount /test : Device busy
root@hpeos003[test] cd /
root@hpeos003[] newfs -F hfs -b 65536 -f 8192 -i 65536 -o largefiles /dev/vx/rdsk/ora1/archive
mkfs (hfs): Warning - 224 sector(s) in the last cylinder are not allocated.
mkfs (hfs): /dev/vx/rdsk/ora1/archive - 4194304 sectors in 6722 cylinders of 16 tracks, 39 sectors
...
root@hpeos003[] mount /dev/vx/dsk/ora1/archive /test
root@hpeos003[] cd /test
root@hpeos003[test] time prealloc 1GB.file 1073741824

real       47.9
user        0.0
sys         4.1
root@hpeos003[test]

With sar statistics of:

 

root@hpeos003[] sar -d 5 5

HP-UX hpeos003 B.11.11 U 9000/800    11/13/03

01:24:39   device   %busy   avque   r+w/s  blks/s  avwait  avserv
01:24:44  c1t15d0    0.80    0.50       2       8    1.82    8.79
          c4t13d0  100.00 2146.50     482   61726 11380.62   16.58
01:24:49  c4t13d0   70.00 3447.08     288   32566 9157.99   14.85
01:24:54  c1t15d0    2.80    8.41      11      54   20.88   14.90
          c4t13d0  100.00 6522.11     457   52835 3078.64   17.41
01:24:59  c1t15d0    0.20    0.50       0       0    0.12   12.11
          c4t13d0  100.00 4171.00     461   57635 8195.14   17.39
01:25:04  c4t13d0  100.00 1913.00     443   55488 13146.13   18.20
Average   c1t15d0    0.76    6.92       3      12   17.27   13.79
Average   c4t13d0   94.00 3628.46     426   52054 8977.38   17.04
root@hpeos003[]

This demonstrates two things: we need to be careful that the details of our tests are clearly understood, and while the advanced features of volume management products may be useful, they can have a dramatic effect on how our applications perform.

NOTE: This is a simplistic test designed to show that differences in filesystem construction can have an impact on IO throughput in certain circumstances. In real life, I would be more rigorous about the tests I perform, as well as the sampling techniques used. If I were to perform small, random IO to this filesystem, there is no guarantee that I would see any performance improvement at all. As always with performance tuning, "it depends"; it depends on your own specific circumstances. The moral of the story is this: always take a baseline measurement, modify, and then measure again, and ensure that you use the same test data and the same test conditions throughout.
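
As a starting point for that kind of rigour, a small wrapper script helps keep the conditions identical between runs. The following is only a sketch using the device, mount point, and file size from the examples above; run sar -d from another window while it loops, as was done here, and pass your newfs options as arguments to the script.

     #!/sbin/sh
     # Sketch: repeat the same 1GB prealloc test three times under identical conditions.
     RAW=/dev/vx/rdsk/ora1/archive
     BLK=/dev/vx/dsk/ora1/archive
     MNT=/test
     SIZE=1073741824                         # 1GB test file, as in the examples above
     i=1
     while [ $i -le 3 ]
     do
         umount $MNT 2>/dev/null             # ignore the error on the first pass
         newfs -F hfs "$@" $RAW              # newfs options are passed as script arguments
         mount $BLK $MNT
         cd $MNT
         time prealloc 1GB.file $SIZE        # the figure we are comparing between runs
         cd /
         i=$(( i + 1 ))
     done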

8.3.2 Resizing an HFS filesystem

Unlike VxFS, an HFS filesystem must be unmounted before it can be resized. This is a major problem when working in a High Availability environment, where you will have to tell users to stop using the filesystem, probably by shutting down the associated application. This is unfortunate but unavoidable. In this first step, I simply resize the underlying volume:

 

root@hpeos003[] bdf /test
Filesystem          kbytes    used   avail %used Mounted on
/dev/vx/dsk/ora1/archive
                   4113400 1048736 2653320   28% /test
root@hpeos003[] vxassist -g ora1 growby archive 1G
root@hpeos003[] vxprint -g ora1 archive
TY NAME         ASSOC      KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
v  archive      fsgen      ENABLED  5242880  -        ACTIVE   -       -
pl archive-01   archive    ENABLED  5242880  -        ACTIVE   -       -
sd ora_disk4-04 archive-01 ENABLED  5242880  0        -        -       -
root@hpeos003[]
root@hpeos003[] bdf /test
Filesystem          kbytes    used   avail %used Mounted on
/dev/vx/dsk/ora1/archive
                   4113400 1048736 2653320   28% /test
root@hpeos003[]

As you can see, the filesystem hasn't grown even though the volume has been increased in size. Next, we need to unmount the filesystem in order to run the extendfs command:

 

root@hpeos003[] umount /test
root@hpeos003[] extendfs -F hfs /dev/vx/rdsk/ora1/archive
max number of sectors extendible is 1048576.
extend file system /dev/vx/rdsk/ora1/archive to have 1048576 sectors more.
Warning: 592 sector(s) in last cylinder unallocated
extended super-block backups (for fsck -b#) at:
 4203648, 4213696, 4223744, 4233792, 4243840, 4253888, 4263936, 4273984, 428403,
 4304128, 4313152, 4323200, 4333248, 4343296, 4353344, 4363392, 4373440, 438348,
 4403584, 4413632, 4423680, 4433728, 4443776, 4453824, 4463872, 4472896, 448294,
 4503040, 4513088, 4523136, 4533184, 4543232, 4553280, 4563328, 4573376, 458342,
 4603520, 4613568, 4623616, 4632640, 4642688, 4652736, 4662784, 4672832, 468288,
 4702976, 4713024, 4723072, 4733120, 4743168, 4753216, 4763264, 4773312, 478336,
 4802432, 4812480, 4822528, 4832576, 4842624, 4852672, 4862720, 4872768, 488281,
 4902912, 4912960, 4923008, 4933056, 4943104, 4952128, 4962176, 4972224, 498227,
 5002368, 5012416, 5022464, 5032512, 5042560, 5052608, 5062656, 5072704, 508275,
 5102848, 5111872, 5121920, 5131968, 5142016, 5152064, 5162112, 5172160, 518220,
 5202304, 5212352, 5222400, 5232448, 5242496,
root@hpeos003[] mount /dev/vx/dsk/ora1/archive /test
root@hpeos003[] bdf /test
Filesystem          kbytes    used   avail %used Mounted on
/dev/vx/dsk/ora1/archive
                   5141816 1048744 3578888   23% /test
root@hpeos003[]

This is one of the most limiting factors of using an HFS filesystem; there is no way to reduce the size of the filesystem without destroying it entirely. The situation is similar for defragmenting a filesystem: you would store all the data elsewhere (on tape, for example), recreate the filesystem from scratch, tune it, and then restore the data. In doing so, the data will be laid out in the filesystem in an optimal fashion.
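
A minimal sketch of that recreate-and-restore procedure is shown below. It uses tar purely as an example archiver (fbackup/frecover or your usual backup tool would do just as well), /dev/rmt/0m as an example tape device, and it assumes the application using /test has already been stopped.

     # Sketch only: archive, recreate with the desired tuning, restore.
     cd /test && tar cf /dev/rmt/0m .                 # archive the existing data
     cd /
     fuser -ku /dev/vx/dsk/ora1/archive               # kill any remaining users of the filesystem
     umount /test
     newfs -F hfs -b 65536 -f 8192 -i 65536 -o largefiles /dev/vx/rdsk/ora1/archive
     tunefs -e <bpg> /dev/vx/rdsk/ora1/archive        # re-apply any tunefs changes
     mount /dev/vx/dsk/ora1/archive /test
     cd /test && tar xf /dev/rmt/0m                   # restore; files are laid out afresh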

8.3.3 Symbolic and hard links

We all know the difference between symbolic and hard links, so this is just a simple demonstration of how they work from an inode perspective. First, we will set up a symbolic link and a hard link to our 1GB file.

 

root@hpeos003[test] ll
total 2097408
-rw-rw-r--   1 root       sys      1073741824 Nov 13 01:25 1GB.file
drwxr-xr-x   2 root       root       65536 Nov 13 01:23 lost+found
root@hpeos003[test] ln -s 1GB.file 1GB.soft
root@hpeos003[test] ln 1GB.file 1GB.hard
root@hpeos003[test] ll
total 4194704
-rw-rw-r--   2 root       sys      1073741824 Nov 13 01:25 1GB.file
-rw-rw-r--   2 root       sys      1073741824 Nov 13 01:25 1GB.hard
lrwxrwxr-x   1 root       sys            8 Nov 13 01:39 1GB.soft -> 1GB.file
drwxr-xr-x   2 root       root       65536 Nov 13 01:23 lost+found
root@hpeos003[test]

It's interesting to note the way in which soft and hard links are implemented. A hard link is simply a directory entry referencing the same inode:

 

root@hpeos003[test] echo "2i.fd" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 524288(frags)
isize/cyl group=64(Kbyte blocks)
primary block size=65536(bytes)
fragment size=8192
no. of cyl groups = 421
d0: 2      .
d1: 2      .  .
d2: 3      l  o  s  t  +  f  o  u  n  d
d3: 4      1  G  B  .  f  i  l  e
d4: 5      1  G  B  .  s  o  f  t
d5: 4      1  G  B  .  h  a  r  d
root@hpeos003[test]

A symbolic link, on the other hand, is a unique file in its own right (inode 5); the interesting thing is the way the symbolic link itself is implemented.

 

root@hpeos003[test] echo "5i" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 524288(frags)
isize/cyl group=64(Kbyte blocks)
primary block size=65536(bytes)
fragment size=8192
no. of cyl groups = 421
i#:5  md: l---rwxrwxr-x  ln:    1  uid:    0  gid:    3  sz: 8
ci:0
a0 :   400  a1 :     0  a2 :     0  a3 :     0  a4 :     0  a5 :     0
a6 :     0  a7 :     0  a8 :     0  a9 :     0  a10:     0  a11:     0
a12:     0  a13:     0  a14:     0
at: Thu Nov 13 01:39:56 2003
mt: Thu Nov 13 01:39:47 2003
ct: Thu Nov 13 01:39:47 2003
root@hpeos003[test]
root@hpeos003[test] echo "5i.f0c" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 524288(frags)
isize/cyl group=64(Kbyte blocks)
primary block size=65536(bytes)
fragment size=8192
no. of cyl groups = 421
14400000    :  1  G  B  .  f  i  l  e  \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
14400040    ...
root@hpeos003[test]

It is evident that the pathname used in the ln command is stored in the data fragment of the symbolic link. There is a kernel parameter called create_fastlinks, which allows HFS to store the pathname (if it is 13 characters or smaller) directly in the inode, without using a data fragment.

 

root@hpeos003[test] kmtune -q create_fastlinks
Parameter             Current Dyn Planned                  Module     Version
=============================================================================
create_fastlinks            0  -  0
root@hpeos003[test]

This feature is turned off by default. While turning it on isn't going to suddenly make your system run much faster, it might make a slight difference. The only proviso is that once you change the kernel parameter, you need to delete and recreate existing symbolic links before they benefit from it.
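
Because create_fastlinks is not dynamic (note the "-" in the Dyn column above), changing it means rebuilding the kernel and rebooting. On 11.11 the outline is roughly as follows; treat it as a sketch and check kmtune(1M) and mk_kernel(1M) for your release.

     kmtune -s create_fastlinks=1        # record the planned value
     mk_kernel -o /stand/vmunix_new      # build a kernel with the planned values
     kmupdate /stand/vmunix_new          # install it as the kernel for the next boot
     shutdown -ry 0                      # reboot, then recreate any existing symlinks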

When we delete a symbolic link, it simply disappears because it is treated like any other file. When we delete a hard link, the link count of the inode is consulted:

 

root@hpeos003[test] echo "4i" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 524288(frags)
isize/cyl group=64(Kbyte blocks)
primary block size=65536(bytes)
fragment size=8192
no. of cyl groups = 421
i#:4  md: f---rw-rw-r--  ln:    2  uid:    0  gid:    3  sz: 1073741824
ci:0
a0 :    72  a1 :    80  a2 :    88  a3 :    96  a4 :   104  a5 :   112
a6 :   120  a7 :   128  a8 :   136  a9 :   144  a10:   152  a11:   160
a12:  1256  a13:     0  a14:     0
at: Thu Nov 13 01:24:27 2003
mt: Thu Nov 13 01:25:06 2003
ct: Thu Nov 13 01:39:54 2003
root@hpeos003[test]

When we delete a hard link, the directory entry is zeroed and the link count of the inode is decreased by one.

 

root@hpeos003[test] rm 1GB.file
root@hpeos003[test]
root@hpeos003[test] echo "4i" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 524288(frags)
isize/cyl group=64(Kbyte blocks)
primary block size=65536(bytes)
fragment size=8192
no. of cyl groups = 421
i#:4  md: f---rw-rw-r--  ln:    1  uid:    0  gid:    3  sz: 1073741824
ci:0
a0 :    72  a1 :    80  a2 :    88  a3 :    96  a4 :   104  a5 :   112
a6 :   120  a7 :   128  a8 :   136  a9 :   144  a10:   152  a11:   160
a12:  1256  a13:     0  a14:     0
at: Thu Nov 13 01:24:27 2003
mt: Thu Nov 13 01:25:06 2003
ct: Thu Nov 13 02:04:54 2003
root@hpeos003[test]
root@hpeos003[test] echo "2i.fd" | fsdb -F hfs /dev/vx/rdsk/ora1/archive
file system size = 524288(frags)
isize/cyl group=64(Kbyte blocks)
primary block size=65536(bytes)
fragment size=8192
no. of cyl groups = 421
d0: 2      .
d1: 2      .  .
d2: 3      l  o  s  t  +  f  o  u  n  d
d3: 5      1  G  B  .  s  o  f  t
d4: 4      1  G  B  .  h  a  r  d
root@hpeos003[test]

Only when the link count drops to zero does an inode get deleted.
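
A quick way to convince yourself of this, in any scratch directory, is a sketch along these lines:

     cd /test
     echo hello > a
     ln a b                  # two names, one inode: link count is now 2
     rm a                    # link count drops to 1; the data is untouched
     cat b                   # still prints "hello" via the remaining name
     rm b                    # link count drops to 0; now the inode and its blocks are freed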

Finally, on symbolic links: I love the idea of being able to create a symbolic link to something that doesn't exist (see 1GB.soft, whose target we have just deleted):

 

root@hpeos003[test] ll
total 2097424
-rw-rw-r--   2 root       sys      1073741824 Nov 13 01:25 1GB.hard
lrwxrwxr-x   1 root       sys            8 Nov 13 01:39 1GB.soft -> 1GB.file
drwxr-xr-x   2 root       root       65536 Nov 13 01:23 lost+found
root@hpeos003[test] ln -s cat mouse
root@hpeos003[test] ln -s mouse cat
root@hpeos003[test] ll
total 2097456
-rw-rw-r--   2 root       sys      1073741824 Nov 13 01:25 1GB.hard
lrwxrwxr-x   1 root       sys            8 Nov 13 01:39 1GB.soft -> 1GB.file
lrwxrwxr-x   1 root       sys            5 Nov 13 01:56 cat -> mouse
drwxr-xr-x   2 root       root       65536 Nov 13 01:23 lost+found
lrwxrwxr-x   1 root       sys            3 Nov 13 01:56 mouse -> cat
root@hpeos003[test]

It doesn't make much sense to be able to point to a black hole, but that's life! What would happen if I ran the command cat mouse?


