This shows the IO stack for AIX. When tuning, we have to be aware of all the layers, as each layer impacts performance, and there are knobs to turn at each layer.
IOs can be coalesced into fewer, larger IOs, or broken up into more, smaller IOs, as they travel up and down through the IO layers. Generally, fewer, larger IOs give better throughput in MB/s, and with fewer IOs there is less CPU overhead to handle them.
Note that system setup, from a data layout viewpoint, is generally done from the bottom up: first the disk subsystem is configured, then the device layer (hdisks, vpaths, etc.), then the LVM layer (VGs, then LVs), then the filesystems, and finally the files.
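As a rough sketch of that bottom-up order (the hdisk, VG, LV and mount point names here are hypothetical), the AIX commands might be:

    lsdev -Cc disk                            # confirm the hdisks presented by the disk subsystem
    mkvg -y datavg hdisk2 hdisk3              # LVM layer: create the volume group
    mklv -y datalv -t jfs2 datavg 64          # then a logical volume of 64 logical partitions
    crfs -v jfs2 -d datalv -m /data -A yes    # filesystem layer: JFS2 on the LV
    mount /data                               # after which files can be created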
The disk interconnect technology sits below the device driver level, both between the host and the disk subsystem and, where applicable, within the disk subsystem itself. With SANs, NAS and iSCSI, there is additional latency in getting the IO across the storage network.
Direct IO (DIO) and concurrent IO (CIO) bypass the JFS/JFS2 file cache, which is beneficial in some circumstances, e.g., when updating log files. Direct IO can be specified either via the mount option (mount -o dio) or by a program opening a file with the O_DIRECT open flag.
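For example (mount points hypothetical), DIO or CIO can be requested at mount time:

    mount -o dio /logs    # every file in /logs is opened for direct IO
    mount -o cio /db      # concurrent IO (JFS2 only), which includes DIO semantics

Alternatively, the option can be placed in the filesystem's /etc/filesystems stanza (options = dio) so it applies at every mount.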
Synchronous and asynchronous IO refer to whether the application waits for an IO to complete: if the application blocks until the IO finishes, the IO is synchronous; if it can continue processing and handle the completion later, the IO is asynchronous. Write IOs to JFS or JFS2 are asynchronous by default, unless the application specifically codes them to be synchronous (e.g., by opening the file with O_SYNC).
Note that most applications use the character device (the r device, e.g., /dev/r<lvname>) for raw LV IO, though it is also possible to use the block device.
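For instance (LV name hypothetical), the same LV can be read through either interface:

    dd if=/dev/rdatalv of=/dev/null bs=256k count=400   # character (raw) device
    dd if=/dev/datalv  of=/dev/null bs=256k count=400   # block device

IO to the character device moves directly between the application buffer and the LV in the size requested, while block device IO passes through the kernel block buffer cache.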
NFS file attribute caching is controlled via the actimeo, acregmin, acregmax, acdirmin and acdirmax options in /etc/filesystems. AIX also supports a cached filesystem (CacheFS) on NFS clients, administered with the cfsadmin command, so that files from the NFS server are copied to local disk.
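A sketch of tuning attribute caching in /etc/filesystems (server name, export path and timeout values are hypothetical), plus creating a CacheFS cache directory:

    /nfsdata:
            dev       = "/export/data"
            vfs       = nfs
            nodename  = nfsserver
            mount     = true
            options   = rw,bg,actimeo=60

    cfsadmin -c /cachedir    # create the local cache directory for CacheFS

Setting actimeo=60 caches file and directory attributes for 60 seconds, overriding the four individual acreg*/acdir* values.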
Set maxreqs (the maximum number of asynchronous IO requests that can be outstanding at one time) to a minimum of 8192. Note: it remains to be determined why this parameter has so much impact on performance; the variable description does not explain it.
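A sketch of checking and raising maxreqs, assuming legacy AIO on AIX 5.3 or earlier, where AIO is configured through the aio0 device:

    lsattr -El aio0                    # show current AIO attributes, including maxreqs
    chdev -l aio0 -a maxreqs=8192 -P   # raise the value; -P applies it at the next reboot

On AIX 6.1 and later, the corresponding (restricted) ioo tunable is aio_maxreqs.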