Nowhere near enough info to make any definitive suggestions.
An assortment of comments/observations ...
- you mention having observed IO contention, but we've got no details on where you saw this contention or how you measured it; measured at the OS level? measured via the MDA tables? where is the data showing the contention? which device(s) do you believe have contention?
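fwiw, one way to quantify device-level contention from inside ASE is via the MDA tables (assuming 'enable monitoring' and the related config options are on; column names per the monDeviceIO documentation), eg, something like:

```sql
-- devices with high semaphore waits relative to requests are candidates
-- for actual IO contention (requires MDA/monitoring to be enabled)
select LogicalName, Reads, APFReads, Writes,
       DevSemaphoreRequests, DevSemaphoreWaits, IOTime
from   master..monDeviceIO
order  by DevSemaphoreWaits desc
```

and at the OS level something like AIX `iostat -DlT 5 12` would show per-disk service times and queue depths for the same window; without numbers like these we're guessing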
- there's no context for the sysmon/diskio data; what was the sampling period? what activity (OLTP? DSS? maintenance?) was going on at the time of the sampling? were these samples taken at the same time as your IO contention observations (eg, if IO contention was observed @ 5:00 am while sysmon was collected @ 12:00 pm, then the sysmon data is useless)? by itself (ie, without context) the sysmon data looks 'ok' to me
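to be useful, the sysmon sample needs to cover the same window as the observed contention, eg (the 10-minute interval is just an example):

```sql
-- capture a sample spanning the window in which the contention was seen,
-- and record (separately) what workload was running at the time
exec sp_sysmon '00:10:00'
```

a sysmon sample plus a note of the concurrent workload gives the numbers some context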
- I can't think of any cases where I've seen the size of an ASE device have any effect on performance; disk/IO performance usually relates to a) rate of IOs (ie, number of IOs per time period), b) type of IOs (read vs write), c) device configuration (ASE/directio vs ASE/dsync vs AIX/cio; filesystem (FS) vs raw; degradation due to FS journaling; raw disk vs RAID 1+0 vs RAID 5 vs ...; availability of FS/SAN cache; etc), d) ASE cache/pool configurations and e) problematic queries and/or poorly designed processes
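re: item c) ... current device settings are easy enough to review, and (in recent ASE versions) dsync/directio can be toggled per device; the device name below is made up:

```sql
-- device list; dsync/directio settings show up in the description column
exec sp_helpdevice

-- hypothetical example: enable directio on a filesystem device
-- (device name is made up; depending on version this may require the
--  device to be quiesced and/or an ASE restart to take effect)
exec sp_deviceattr 'data_dev1', 'directio', 'true'
```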
- no mention of what changes you've made to your system based on past recommendations in these SAP forums, eg, did you address the AIX/cio issues as suggested in 'Increasing the size of the 128K memory pool'?
-----------
As for improving IO performance ... keeping in mind that at this point I have no idea a) if you have IO performance issues or b) the root cause of any IO performance issues ...
- if IO performance degradation is noticed during specific database operations (eg, certain queries), it may be possible to tune said queries to reduce their IO requirements, ymmv
- assuming no query performance issues, an improperly sized data cache/pool can lead to excessive disk reads; IO performance degradation could show up in the form of 'slow' reads or overall slow IOs due to flooding the disk subsystem with excessive read requests
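current cache/pool sizing can be reviewed with something like the following (the MDA query requires 'enable monitoring'; a high PhysicalReads count relative to CacheSearches would point at an undersized cache flooding the disks with reads):

```sql
-- review configured caches and their pools
exec sp_cacheconfig

-- MDA view of per-cache activity (requires monitoring to be enabled)
select CacheName, CacheSearches, PhysicalReads, PhysicalWrites, Stalls
from   master..monDataCache
```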
- sysmon data shows tempdb device writes (w/ little/no reads) responsible for 12-30% of IO requests; you could probably reduce, if not eliminate, these IOs by a) assigning tempdb to its own appropriately sized named cache and b) disabling the housekeeper (HK) wash for that cache
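as a rough sketch (cache name and sizes are made up; size the cache based on your actual tempdb working set):

```sql
-- create an appropriately sized named cache for tempdb
exec sp_cacheconfig 'tempdb_cache', '2G'

-- optional: carve out a large-IO (128K) pool within the cache
exec sp_poolconfig  'tempdb_cache', '512M', '128K'

-- bind tempdb to the new cache (requires tempdb in single-user/quiet state
-- or an ASE restart depending on version)
exec sp_bindcache   'tempdb_cache', 'tempdb'

-- stop the housekeeper from washing (writing) dirty tempdb buffers to disk
exec sp_cacheconfig 'tempdb_cache', 'HK ignore cache'
```

with tempdb fully cached and the HK ignoring the cache, most of those tempdb device writes should disappear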
- if, If, IF it turns out that your disk subsystem is 'ok', and your cache/pools are 'ok', and your queries/operations are 'ok', and that you're experiencing contention due to some hot tables/indexes then it may be possible to improve IO performance by spreading your disk IOs across different devices, but this would require a) knowing which tables/indexes are being hit the hardest, b) moving said tables/indexes to different user-defined segments and c) making sure said segments are assigned to different OS/SAN 'disks' (so as to spread disk IO activity across several physical 'disks'); [NOTE: I've seen much higher IO request volumes in sysmon/diskIO output at various clients where there was little/no IO contention, ie, at this point I have no reason to believe that moving tables/indexes to separate database segments would be of any use]
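IF all of those preconditions held (and, again, I've seen no evidence they do), the mechanics would look something like this (all object/segment/device names below are hypothetical):

```sql
use mydb
go
-- create a segment on a device that maps to a separate physical 'disk',
-- optionally extending it across additional devices/disks
exec sp_addsegment    'hotseg', 'mydb', 'data_dev3'
exec sp_extendsegment 'hotseg', 'mydb', 'data_dev4'

-- direct FUTURE space allocations for the hot table to the new segment;
-- existing pages don't move until the table/index is rebuilt
-- (eg, reorg rebuild, or drop/recreate of the clustered index on 'hotseg')
exec sp_placeobject   'hotseg', 'hot_table'
```

but I'd treat this as a last resort after the disk subsystem, caches and queries have all been ruled out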
-----------
There are likely many other possibilities I'm missing/forgetting at this time ... regardless, addressing IO performance issues needs to start with an understanding of where/why the performance is degraded and then figuring out the correct action to improve said IO performance. ("Duh, Mark!" ?)