5.2. Observing Physical I/O

The traditional method of observing file system activity is to infer it from the bottom end of the file system, that is, from physical I/O. This is easily done with iostat or DTrace, as shown in the following iostat example; Chapter 4 covers physical I/O observation in more detail.

$ iostat -xnczpm 3
     cpu
 us sy wt id
  7  2  8 83
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.6    3.8    8.0   30.3  0.1  0.2   20.4   37.7   0   3 c0t0d0
    0.6    3.8    8.0   30.3  0.1  0.2   20.4   37.7   0   3 c0t0d0s0 (/)
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   48.7   0   0 c0t0d0s1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0s2
    0.0    0.0    0.0    0.0  0.0  0.0  405.2 1328.5   0   0 c0t1d0
    0.0    0.0    0.0    0.0  0.0  0.0  405.9 1330.8   0   0 c0t1d0s1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t1d0s2
   14.7    4.8  330.8    6.8  0.0  0.3    0.0   13.9   0   8 c4t16d1
   14.7    4.8  330.8    6.8  0.0  0.3    0.0   13.9   0   8 c4t16d1s7 (/export/home)
    1.4    0.4   70.4    4.3  0.0  0.0    0.0   21.8   0   2 c4t16d2
    1.4    0.4   70.4    4.3  0.0  0.0    0.0   21.8   0   2 c4t16d2s7 (/export/home2)
   12.8   12.4   73.5    7.4  0.0  0.1    0.0    2.5   0   3 c4t17d0
   10.8   10.8    0.4    0.4  0.0  0.0    0.0    0.0   0   0 c4t17d0s2
    2.0    1.6   73.1    7.0  0.0  0.1    0.0   17.8   0   3 c4t17d0s7 (/www)
    0.0    2.9    0.0  370.4  0.0  0.1    0.0   19.1   0   6 rmt/1


Using iostat, we can observe I/O counts, bandwidth, and latency at the device level; the -m option additionally displays the mount point for each device (note that this works only for file systems, such as UFS, that mount a single device). In the above example, we can see that /export/home is mounted on c4t16d1s7, which is servicing 14.7 reads per second and 4.8 writes per second with a response time of 13.9 milliseconds. But that is all we know; far too often we deduce too much from physical I/O characteristics alone. For example, in this case we could easily assume that the upper-level application is experiencing good response times, when in fact substantial latency is being added in the file system layer and is masked by these statistics. We discuss common scenarios in which latency is added in the file system layer in Section 5.4.
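
One way to see the latency that applications actually experience, rather than the device-level view iostat provides, is to measure at the system-call layer with DTrace. The following one-liner is a minimal sketch, not one of this book's scripts; it assumes the files of interest reside on UFS and uses the built-in fds[] array to filter read(2) calls by file system type.

# dtrace -n '
    syscall::read:entry
    /fds[arg0].fi_fs == "ufs"/
    {
            /* timestamp the start of each read from a UFS file */
            self->ts = timestamp;
    }
    syscall::read:return
    /self->ts/
    {
            /* aggregate elapsed time as a power-of-two distribution */
            @["read latency (ns)"] = quantize(timestamp - self->ts);
            self->ts = 0;
    }'

If the latencies reported here are much larger than the asvc_t that iostat reports for the underlying device, the additional time is being spent in the file system layer rather than on physical I/O.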

By using the DTrace io provider, we can easily connect physical I/O events with file-system-level information such as file names. The script from Section 5.4.3 shows a simple example of how DTrace can display per-operation information combining file-system-level and physical I/O details.
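
That script is built on the io:::start probe, which fires for each physical I/O request and presents the buffer, device, and file involved as args[0], args[1], and args[2], respectively. A minimal sketch of such a script, assuming those standard io provider arguments (and not necessarily the exact listing from Section 5.4.3), looks like this:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
        printf("%10s %58s %2s %8s\n", "DEVICE", "FILE", "RW", "SIZE");
}

io:::start
{
        /* args[0] is the buf, args[1] the device, args[2] the file */
        printf("%10s %58s %2s %8d\n", args[1]->dev_statname,
            args[2]->fi_pathname,
            args[0]->b_flags & B_READ ? "R" : "W",
            args[0]->b_bcount);
}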

# ./iotrace.d
    DEVICE                                                       FILE RW      SIZE
     cmdk0                               /export/home/rmc/.sh_history  W      4096
     cmdk0                                 /opt/Acrobat4/bin/acroread  R      8192
     cmdk0                                 /opt/Acrobat4/bin/acroread  R      1024
     cmdk0                                 /var/tmp/wscon-:0.0-gLaW9a  W      3072
     cmdk0                           /opt/Acrobat4/Reader/AcroVersion  R      1024
     cmdk0             /opt/Acrobat4/Reader/intelsolaris/bin/acroread  R      8192
     cmdk0             /opt/Acrobat4/Reader/intelsolaris/bin/acroread  R      8192
     cmdk0             /opt/Acrobat4/Reader/intelsolaris/bin/acroread  R      4096
     cmdk0             /opt/Acrobat4/Reader/intelsolaris/bin/acroread  R      8192
     cmdk0             /opt/Acrobat4/Reader/intelsolaris/bin/acroread  R      8192




