The following are sample outputs of the pfilestat tool for various scenarios.



Starting with something simple,

Running: dd if=/dev/rdsk/c0d0s0 of=/dev/null bs=56k     # x86, 32-bit

   # ./pfilestat `pgrep -x dd`

        STATE   FDNUM      Time Filename
         read       3        2% /devices/pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
        write       4        3% /devices/pseudo/mm@0:null
      waitcpu       0        7%
      running       0       16%
      sleep-r       0       69%

        STATE   FDNUM      KB/s Filename
        write       4     53479 /devices/pseudo/mm@0:null
         read       3     53479 /devices/pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0

   Total event time (ms): 4999   Total Mbytes/sec: 104

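(As a sanity check, these totals are consistent: the read and write rates
of 53479 KB/s each sum to 106958 KB/s, and 106958 / 1024 is roughly 104
Mbytes/sec.)
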
Most of the time we are sleeping on reads (the sleep-r state), which is to
be expected, as dd on the raw device takes a simple path -> read:entry,
strategy, biodone, read:return. CPU time in read() itself is small.
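
As a cross-check on that claim (this is separate from pfilestat), a rough
DTrace one-liner can compare elapsed time with on-CPU time inside read();
vtimestamp only advances while the thread is actually on a CPU:

   # dtrace -n '
       syscall::read:entry /pid == $target/
       {
               self->ts  = timestamp;
               self->vts = vtimestamp;
       }
       syscall::read:return /self->ts/
       {
               /* wall-clock vs on-CPU nanoseconds spent in read() */
               @["read() elapsed (ns)"] = sum(timestamp - self->ts);
               @["read() on-CPU (ns)"]  = sum(vtimestamp - self->vts);
               self->ts = 0;
               self->vts = 0;
       }' -p `pgrep -x dd`

For the raw device, the on-CPU sum should be a small fraction of the
elapsed sum.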



Now for the dsk device,

Running: dd if=/dev/dsk/c0d0s0 of=/dev/null bs=56k     # x86, 32-bit

   # ./pfilestat `pgrep -x dd`

        STATE   FDNUM      Time Filename
        write       4        5% /devices/pseudo/mm@0:null
      waitcpu       0        8%
      running       0       15%
      sleep-r       0       18%
         read       3       53% /devices/pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0

        STATE   FDNUM      KB/s Filename
         read       3     53492 /devices/pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
        write       4     53492 /devices/pseudo/mm@0:null

   Total event time (ms): 4914   Total Mbytes/sec: 102

Whoa, we are now spending much more time in read()! I imagine segmap is
a busy bee. The "running" and "write" times are hardly different.
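
One hedged way to test the segmap theory is to count calls into the segmap
kernel routines with fbt; the exact function names (segmap_getmap,
segmap_getmapflt, ...) vary by Solaris release, hence the wildcard:

   # dtrace -n 'fbt::segmap_getmap*:entry /execname == "dd"/
       { @[probefunc] = count(); }'

A busy count here would support the guess.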



Now for a SPARC demo of the same,

Running: dd if=/dev/dsk/c0d0s0 of=/dev/null bs=56k     # SPARC, 64-bit

   # ./pfilestat `pgrep -x dd`

        STATE   FDNUM      Time Filename
        write       4        3% /devices/pseudo/mm@0:zero
      waitcpu       0        7%
      running       0       17%
         read       3       24% /devices/pci@1f,0/pci@1,1/ide@3/dad@0,0:a
      sleep-r       0       54%

        STATE   FDNUM      KB/s Filename
         read       3     13594 /devices/pci@1f,0/pci@1,1/ide@3/dad@0,0:a
        write       4     13606 /devices/pseudo/mm@0:zero

   Total event time (ms): 4741   Total Mbytes/sec: 25

I did prime the cache by running this a few times first. There is less
read() time than in the x86 32-bit demo; my guess is that the process is
more often exhausting the (faster) segmap cache and reaching the point
where it must sleep. (However, do take this comparison with a grain of
salt - my development servers aren't ideal for comparing statistics: one
is an 867 MHz Pentium, and the other a 360 MHz Ultra 5.)

The file system cache is faster on 64-bit systems due to the segkpm
enhancement in Solaris 10. For details see,
http://blogs.sun.com/roller/page/rmc?entry=solaris_10_fast_filesystem_cache



Now, back to x86.

Running: tar cf /dev/null /

   # ./pfilestat `pgrep -x tar`

        STATE   FDNUM      Time Filename
         read      11        0% /extra1/test/amd64/libCstd.so.1
         read      11        0% /extra1/test/amd64/libXm.so
         read      11        0% /extra1/test/amd64/libXm.so.4
         read      11        1% /extra1/test/amd64/libgtk-x11-2.0.so
         read      11        2% /extra1/test/amd64/libgtk-x11-2.0.so.0
      waitcpu       0        2%
         read       9        4% /extra1/5000
        write       3        7% /devices/pseudo/mm@0:null
      running       0       19%
      sleep-r       0       46%

        STATE   FDNUM      KB/s Filename
         read      11       293 /extra1/test/amd64/libgdk-x11-2.0.so
         read      11       295 /extra1/test/amd64/libgdk-x11-2.0.so.0
         read       9       476 /extra1/1000
         read      11       526 /extra1/test/amd64/libCstd.so.1
         read      11       594 /extra1/test/amd64/libXm.so.4
         read      11       594 /extra1/test/amd64/libXm.so
         read      11      1603 /extra1/test/amd64/libgtk-x11-2.0.so.0
         read      11      1606 /extra1/test/amd64/libgtk-x11-2.0.so
         read       9      4078 /extra1/5000
        write       3     21254 /devices/pseudo/mm@0:null

   Total event time (ms): 4903   Total Mbytes/sec: 41

Fair enough. tar is cruising along at 21 Mbytes/sec (writes to fd 3!).
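
If in doubt about which file a descriptor maps to, pfiles(1) can confirm
it; for this invocation, fd 3 should be /dev/null:

   # pfiles `pgrep -x tar`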



More interesting is to do the following,

Running: tar cf - / | gzip > /dev/null

   # ./pfilestat `pgrep -x tar`

        STATE   FDNUM      Time Filename
         read      11        0% /extra1/test/amd64/libm.so
         read      11        0% /extra1/test/amd64/libm.so.2
         read      11        0% /extra1/test/amd64/libnsl.so
         read      11        0% /extra1/test/amd64/libnsl.so.1
         read      11        0% /extra1/test/amd64/libc.so.1
        write       3        2% <none>
      waitcpu       0        4%
      sleep-r       0        4%
      running       0        6%
      sleep-w       0       78%

        STATE   FDNUM      KB/s Filename
         read      11        74 /extra1/test/amd64/libldap.so
         read      11        75 /extra1/test/amd64/libldap.so.5
         read      11        75 /extra1/test/amd64/libresolv.so.2
         read      11        76 /extra1/test/amd64/libresolv.so
         read      11        97 /extra1/test/amd64/libm.so.2
         read      11        98 /extra1/test/amd64/libm.so
         read      11       174 /extra1/test/amd64/libnsl.so
         read      11       176 /extra1/test/amd64/libnsl.so.1
         read      11       216 /extra1/test/amd64/libc.so.1
        write       3      3022 <none>

   Total event time (ms): 4911   Total Mbytes/sec: 6

Whoa now, tar is writing only 3 Mbytes/sec - AND spending 78% of its time
in sleep-w, sleeping on writes! Of course, this is because we are piping
the output to gzip, which spends a while compressing the data. The 78%
matches the time gzip was on the CPU (measured with either "prstat -m" or
dtrace; procfs's pr_pctcpu would take too long to catch up).
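
For the dtrace approach, this is the usual on-cpu/off-cpu idiom (a sketch,
separate from pfilestat), summing gzip's on-CPU nanoseconds:

   # dtrace -n '
       sched:::on-cpu /execname == "gzip"/ { self->ts = timestamp; }
       sched:::off-cpu /self->ts/
       {
               /* accumulate time between going on- and off-CPU */
               @["gzip on-CPU (ns)"] = sum(timestamp - self->ts);
               self->ts = 0;
       }'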



Also interesting is,

Running: perl -e 'while (1) {;}' &
Running: perl -e 'while (1) {;}' &
Running: perl -e 'while (1) {;}' &
Running: perl -e 'while (1) {;}' &
Running: tar cf /dev/null /

   # ./pfilestat `pgrep -x tar`

        STATE   FDNUM      Time Filename
         read      11        0% /extra1/test/amd64/libxml2.so.2
         read      11        0% /extra1/test/amd64/libgdk-x11-2.0.so.0
         read      11        0% /extra1/test/amd64/libgdk-x11-2.0.so
         read      11        0% /extra1/test/amd64/libCstd.so.1
         read      11        0% /extra1/test/amd64/libgtk-x11-2.0.so.0
         read      11        2% /extra1/test/amd64/libgtk-x11-2.0.so
        write       3        2% /devices/pseudo/mm@0:null
      running       0        8%
      sleep-r       0       22%
      waitcpu       0       65%

        STATE   FDNUM      KB/s Filename
         read      11       182 /extra1/test/amd64/libsun_fc.so
         read      11       264 /extra1/test/amd64/libglib-2.0.so
         read      11       266 /extra1/test/amd64/libglib-2.0.so.0
         read      11       280 /extra1/test/amd64/libxml2.so.2
         read      11       293 /extra1/test/amd64/libgdk-x11-2.0.so
         read      11       295 /extra1/test/amd64/libgdk-x11-2.0.so.0
         read      11       526 /extra1/test/amd64/libCstd.so.1
         read      11       761 /extra1/test/amd64/libgtk-x11-2.0.so.0
         read      11      1606 /extra1/test/amd64/libgtk-x11-2.0.so
        write       3      7881 /devices/pseudo/mm@0:null

   Total event time (ms): 4596   Total Mbytes/sec: 13

Now we have "waitcpu" as tar competes for CPU cycles with the greedy
infinite-loop perl processes.
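
The waitcpu percentage can be cross-checked with microstate accounting:
the LAT column of "prstat -mL" reports the percentage of time each thread
spent waiting for a CPU. For example,

   # prstat -mL -p `pgrep -x tar` 1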
    201