How to Minimize and Optimize I/Os in your RPG Application

Data3/IBM COMMON Sweden
Stockholm, Sweden
September 30th to October 1st, 2013
Gottfried Schimunek
Senior Architect
Application Design
IBM STG Software
Development
Lab Services
3605 Highway 52 North
Rochester, MN 55901
Tel 507-253-2367
Fax 845-491-2347
Gottfried@us.ibm.com
IBM ISV Enablement
Follow us @IBMpowersystems
Learn more at www.ibm.com/power
Agenda
Disk I/O definitions and concepts
Disk I/O configuration/settings related to performance
High-level description of common disk I/O performance symptoms
Common disk I/O related performance problems
Review common disk I/O problems with performance tools
Case Studies
Quiz (3 or 4 questions)
IO Subsystem Definitions and Concepts
Page Faults - Faulting occurs when a referenced page, or piece of data, is not in memory. This causes programs to stop
because they must wait for the data to be paged in. Faulting is normal behavior on IBM i. However, the size of the storage
pools can affect the faulting rates, and storage pools that are too small can result in excessive faulting. You can examine
and set the storage pool sizes by using the Work with System Status (WRKSYSSTS) command. If you have too many
active threads or threads that use too many system resources, you will see numerous page faults for the storage pools in
which applications are running. Increasing the storage pool size or decreasing the activity level might reduce the number of
page faults.
IO Pending Faults - A wait for a page fault that is already in progress on behalf of another job.
Synchronous I/O versus asynchronous I/O
Synchronous I/O requests require the job to wait for the I/O operation to complete before it can continue processing.
Asynchronous I/Os are background I/O operations that occur while the job is processing records; they can be converted to synchronous if I/O bottlenecks occur.
Paging - Paging is the movement of data in and out of memory, both synchronously and asynchronously. Pages can be written
out to storage or removed from memory without being written if they have not been changed. Faulting causes paging to
occur on the server.
Write cache - When a disk write operation is requested, the data is written to a memory buffer and is queued to be written out
to a disk device at a later time (this later operation is called destaging). After a buffer is queued, disk operation completion is
signaled. As far as the waiting application is concerned, the write operation is over. Of course, data is not on a disk surface
yet, so what happens if a system loses power before destaging is complete? In this situation, a write cache is protected by
an independent source of power, a battery, which keeps write cache contents intact until power is restored and data can be
evacuated to its proper place on a disk surface. This is called non-volatile storage (NVS).
Write cache overruns - A write cache absorbs disk write activity until the physical disk devices are overwhelmed to the point
where the destaging operation queue backs up and there is no more memory left in the buffer pool to accept new write
operations from the system. New disk operations must then wait until some earlier operations complete and free some
memory. This is called a write cache overrun.
Write cache efficiency - As data sits in the write cache waiting to be destaged to disk, the cache management logic
applies all kinds of optimizations to reduce the actual number of disk accesses. For example, a series of writes to
adjacent sectors can be coalesced into a single multi-sector operation, and multiple writes to the same sector may update
the cached data several times before it is ever written out. The purpose of the write cache efficiency ratio metric
is to show how successful the cache software was at this task. For example, a write cache efficiency of 15% means that
the number of write operations was reduced by 15% by the cache management software. This metric is defined only for
internal disk units and is calculated as:
100 - ((DSDWOP / DIVISOR) * 100) / DSWRTS
DIVISOR depends on the disk configuration. It is 1 in non-RAID configurations, 2 for RAID 5, and 3 for RAID 6; it reflects
the additional disk writes that result from the RAID. In general, this metric is more important to the storage adapter
cache management software designers, but it can also give insight into workload behavior. One has to be very careful with
this metric: if a RAID configuration is in exposed mode due to a single disk failure, the number of physical disk writes
will no longer correspond to these assumptions and the metric will no longer be valid.
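A worked example with illustrative numbers (not taken from a real report): on a RAID 5 configuration, DIVISOR = 2, so if a collection showed DSWRTS = 1,000 write requests and DSDWOP = 1,700 physical disk write operations, the efficiency would be 100 - ((1700 / 2) * 100) / 1000 = 100 - 85 = 15%.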
IO Subsystem Technical Definition and Concepts
Extended adaptive cache - Extended Adaptive Cache is an advanced read cache technology that improves
both the I/O subsystem and system response times by reducing the number of physical I/O requests that
are read from disk. Extended Adaptive Cache operates at the disk subsystem controller level, and does
not affect the AS/400 system processor. Management of the cache is performed automatically within the
I/O adapter. It is designed to cache data by using a predictive algorithm. The algorithm considers how
recently and how frequently the host has accessed a predetermined range of data.
Physical Disk IO - the traffic toward the storage subsystem.
Logical DB IO - an I/O request that retrieves records from memory through the open data path (ODP) to the
program object. If the requested data is not found in memory, it generates a physical I/O and the
system fetches it from the disks.
Disk service times - the time required for the disk I/O subsystem to perform the I/O request with no waits.
Expert cache - Expert cache is a set of algorithms the operating system uses to maximize the efficiency of the
cache and main storage usage, as workloads change over the course of a typical day. By caching in main
storage, the system not only eliminates accesses to the storage devices, but all associated I/O traffic on
the internal system buses as well. In addition, an algorithm running in the main processor has a better
view of actual application trends and should do a better job, overall, of assessing what data should be
cached. Expert Cache works by minimizing the effect of synchronous DASD I/Os on a job. The best
candidates for performance improvement are those jobs that are most affected by synchronous DASD
I/Os.
Internal arms
- SAS arms
- Solid State Drives (SSD)
External arms
- LUNs
- Fibre Channel
Independent Auxiliary Storage Pools (IASP) - Independent disk pools provide the ability to group together
storage that can be taken offline or brought online independent of system data or other unrelated data. An
independent disk pool can be either:
- Privately connected to a single system (also known as stand-alone or primary IASPs)
- Switched among multiple systems or partitions in a clustered environment
Virtual I/O
Disk I/O requests
Faults: demand paging, an implicit memory request
- Referenced instruction/data (virtual address) NOT in memory
- Either DB or NDB
- A fault is generally a re-read of data that was the subject of an earlier BRING
- Fault I/O size: page = 4 KB; block = 32 KB (Java up to 64 KB)
Bring: anticipatory paging (not counted as a fault)
- Transfers objects to main storage, for example on a program CALL or for blocks of DB records
- Initial physical DB input is via BRING; SETOBJACC can bring an entire object
- Bring I/O size: 4 KB default, 128 KB optimum; more with DB blocking/"sequential only"; SQL statements establish the size
- Bring explicitly reads virtual pages into memory and improves performance by anticipation!
Clear: memory is cleared (set to binary 0), particularly when the current content is not valid
Exchange: implicitly identifies pages whose contents are replaced, without the "aging algorithm"!
Purge: memory pages are written to disk, for example journal pages
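Where an object is hot enough to pin in memory, the SETOBJACC command mentioned above can be driven from RPG. A minimal sketch, assuming hypothetical names (APPLIB/CUSTMAST), that the object fits in the target pool, and that POOL(*JOB) suits your configuration:

**FREE
// Pre-load an entire file into a main storage pool so later reads are
// satisfied from memory instead of disk. APPLIB/CUSTMAST is hypothetical;
// POOL(*JOB) targets the pool this job runs in, POOL(*PURGE) removes it.
dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd char(256) const;
  cmdLen packed(15:5) const;
end-pr;
dcl-s cmd varchar(256);

cmd = 'SETOBJACC OBJ(APPLIB/CUSTMAST) OBJTYPE(*FILE) POOL(*JOB)';
QCMDEXC(cmd : %len(cmd));
*inlr = *on;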
Disk Subsystems
Collection Services records disk I/O information in the QAPMDISK performance database file.
Write request: the CPU issues synchronous or asynchronous requests for pages in main storage to be written. If the data can be held in the IOA cache, the write completes there; otherwise a physical write to the disk drives occurs.
Read request: the CPU may reference virtual addresses that are not in main storage. The disk controller checks whether the data is in the IOA cache; otherwise a physical read from the disk drives occurs.
Disk operations
Categorized by main storage management
- Database I/O: physical or logical file I/O
- Non-database I/O: disk I/O not related to physical or logical file I/O
Categorized by concurrency with processing
- Synchronous I/O: processing waits and does not continue until the I/O completes. Includes some database I/O and the servicing of memory faults; contributes to response/run time.
- Asynchronous I/O: processing is concurrent with the I/O. WRITE is typically asynchronous, as are FORCE operations, SEQONLY(*YES) DB input, and journal deposits with commitment control (synchronous without it).
Taken together: disk I/O divides into DB I/O and non-DB I/O; each can be synchronous or asynchronous, and each of those can be a read or a write.
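To illustrate SEQONLY(*YES) input, the sketch below (hypothetical file CUSTMAST; the block size of 1000 records is arbitrary) overrides the file so the system moves large blocks of records per physical I/O ahead of the READ loop:

**FREE
// Blocked sequential input: OVRDBF SEQONLY(*YES 1000) asks the system to
// move 1000 records per physical I/O, so most READs complete from memory.
dcl-f CUSTMAST disk usage(*input) usropn;
dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd char(256) const;
  cmdLen packed(15:5) const;
end-pr;
dcl-s cmd varchar(256);

cmd = 'OVRDBF FILE(CUSTMAST) SEQONLY(*YES 1000)';
QCMDEXC(cmd : %len(cmd));      // override must be in place before the open
open CUSTMAST;
read CUSTMAST;
dow not %eof(CUSTMAST);
  // ... process the record ...
  read CUSTMAST;
enddo;
close CUSTMAST;
*inlr = *on;

Because the override supplies records in blocks, most READ operations complete from memory; only the block fetches touch the disk subsystem.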
Asynchronous I/O wait
Case 1, fully overlapped: the disk I/O request is issued, the request is scheduled, and disk request processing starts and finishes before the job needs the data from the I/O request. Job processing and disk I/O overlap completely, and the data becomes available to the job with no wait.
Case 2, partially overlapped: the job needs the data from the I/O request before disk request processing has finished. The job must wait for the remainder of the operation; that wait is the asynchronous disk I/O wait time.
WHY DOES THIS HAPPEN?
1 Faster CPU/less CPU contention
2 Slower disk response
3 Improved code (job uses less CPU)
TOTAL RUN TIME IMPROVEMENT FROM A CHANGE CAN BE LESS THAN EXPECTED. A faster CPU may not help as much as expected!
Logical DB I/O versus physical disk I/O
(1) Application program performs a READ operation
(2) Data in the job work area (ODP)?
- Data is moved into the program variables
No logical DB I/O
No physical I/O
(3) Data in the partition buffer?
- Moved to job work area/program variables
Logical DB I/O
No physical I/O
(4) Data not in partition buffer?
- Data read from disk subsystem
- Data moved to job work area
- Data is moved to program variables
Logical DB I/O
Physical I/O
Diagram: PROGRAM-A (1) → job work areas, program variables and ODP (2) → partition buffer (3, logical DB I/O) → IOP/IOA → disk (4, physical disk I/O).
Impact of queuing on response time
Service time * queuing multiplier = response time: S * QM = T, where QM = 1/(1-U) and U = utilization.

Service Time (secs) S    Queuing Multiplier QM    Resp Time (secs) T
0.005                    3                        0.015
0.010                    3                        0.030
0.050                    3                        0.150
0.100                    3                        0.300
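A sketch of this formula in the deck's own language (the procedure and field names are illustrative):

**FREE
ctl-opt nomain;
// Response time T = S * QM, and QM = 1/(1-U), so T = S / (1-U).
dcl-proc diskRespTime export;
  dcl-pi *n packed(11:6);
    svcTime packed(11:6) const;   // service time S, in seconds
    util    packed(7:4)  const;   // utilization U, 0 <= U < 1
  end-pi;
  return svcTime / (1 - util);
end-proc;

diskRespTime(0.005 : 0.6667) returns roughly 0.015, matching the first table row; a QM of 3 corresponds to about 67% utilization.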
Disk storage allocation
Space assignment to an object usually follows space availability. Example: drive #1 is 15% full and drive #2 is 50% full; the system will favor allocating space on drive #1.
Disk accesses usually follow space usage. Example, following from the disk allocation method: drive #1 (70 GB) is 50% full and drive #2 (140 GB) is 50% full. Busy = operations per second, and 50% of 140 GB (70 GB) holds twice the data of 50% of 70 GB (35 GB), so the 140 GB drive may be ~2 times busier.
“Mixed” drives
Mixing drives of different capacities in the same ASP with insufficient write cache may cause performance problems.
Consider three drives with similar physical attributes (rotational delay, seek time, etc.) and a disk service time of 5 milliseconds. Assume an even distribution of data and no cache benefits:
- The 35 GB drive is half as busy (20%) as the 70 GB drive (40%)
- The 70 GB drive is half as busy (40%) as the 140 GB drive (80%)
35 GB drive: QM = 1/(1-0.20) = 1/0.80 = 1.25; averages (1.25 x 5 msecs) = 6.25 milliseconds/operation
70 GB drive: QM = 1/(1-0.40) = 1/0.60 = 1.66; averages (1.66 x 5 msecs) = 8.33 milliseconds/operation
140 GB drive: QM = 1/(1-0.80) = 1/0.20 = 5.00; averages (5.00 x 5 msecs) = 25.00 milliseconds/operation
Balancing disk
Restore objects
- Spreads large objects over all arms in an ASP (1 MB extents)
Rebalance disk
- Trace ASP Balance (TRCASPBAL) command
  Monitors data access on the disk units within the specified ASP; high-use and low-use data on the units is identified, and the statistics collected for the ASP may be used to balance the disk drives
- Start ASP Balance (STRASPBAL) command
  Performs ASP balancing on one or more ASPs
  Capacity balancing: data balanced based on the percentage of disk space used
  Usage balancing (*USAGE): requires the high-use/low-use data from TRCASPBAL; attempts to balance arm utilization within the ASP
Balance soon after collecting the statistics; their usefulness diminishes as they age.
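A minimal sketch of the trace-then-balance sequence, driven from RPG through QCMDEXC (the ASP number and the 120-minute trace window are illustrative, not from this presentation):

**FREE
// Collect high-use/low-use statistics during a peak period, then balance
// by usage.
dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd char(256) const;
  cmdLen packed(15:5) const;
end-pr;
dcl-s cmd varchar(256);

cmd = 'TRCASPBAL SET(*ON) ASP(1) TIMLMT(120)';
QCMDEXC(cmd : %len(cmd));
// ... run the balance only after the trace window has ended ...
cmd = 'STRASPBAL TYPE(*USAGE) ASP(1) TIMLMT(*NOMAX)';
QCMDEXC(cmd : %len(cmd));
*inlr = *on;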
Example: Disk response imbalance

(System Report, disk utilization section for ASP 1: 36 units of type 4328, sizes 105,847-132,309 MB, total 4,516,140 MB. The per-unit rows are condensed below; every unit is roughly 40% full.)

Units 0001-0024, IOP CMB01: IOP util ~5.2%, unit util 2.0-3.4%, 15.7-42.8 ops per second, 5.9-25.9 K per I/O, service time .0007-.0013 sec, wait time .0001-.0006 sec, response time .0010-.0017 sec.

Units 0025-0036, IOP CMB10: IOP util ~15.8%, unit util 29.3-44.3%, 99.2-118.9 ops per second, 31.6-37.5 K per I/O, service time .0024-.0038 sec, wait time .0062-.0449 sec, response time .0086-.0487 sec.

Total/average for ASP ID 1: 4,516,140 MB; 13.4% util, 40.3% full, 52.44 ops per second, 30.4 K per I/O, .0025 sec service, .0116 sec wait, .0141 sec response.

The twelve units behind IOP CMB10 show wait and response times an order of magnitude higher than the twenty-four behind CMB01 - a clear arm/IOP imbalance.
Reorganize physical file member
First ...
- Reorganize the DB file member to remove deleted records: this compresses out deleted records and optionally reorganizes by key
- Alternatively, consider re-using deleted records
Then ...
- SAVE the file, DELETE the file, RESTORE the file: space is re-allocated in extents of up to 1 MB
(Before: a file object with deleted-record slots scattered through it; after: a compact object with the deleted records removed.)
Many jobs processing files sequentially have reported up to 50% improvement with deleted records removed from a physical file. Check with the application provider!
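A minimal command sketch for both options, driven from RPG (APPLIB/ORDHIST is hypothetical; check with the application provider whether key order or arrival order matters before reorganizing):

**FREE
// Compress out deleted records, or let new records re-use deleted slots.
// KEYFILE(*FILE) would also re-sequence the data by the file's key
// instead of leaving it in arrival order.
dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd char(256) const;
  cmdLen packed(15:5) const;
end-pr;
dcl-s cmd varchar(256);

cmd = 'RGZPFM FILE(APPLIB/ORDHIST) KEYFILE(*NONE)';
QCMDEXC(cmd : %len(cmd));
cmd = 'CHGPF FILE(APPLIB/ORDHIST) REUSEDLT(*YES)';
QCMDEXC(cmd : %len(cmd));
*inlr = *on;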
Solid State Drives (SSD)
Today's applications can often benefit from a faster storage option. SSD's high speed can really help get rid of I/O bottlenecks, bridging the gap between memory and disk speeds: improved performance, with space and energy savings at the same time.
Access speed:
- Processors: very, very, very, very, very fast (< 10s of ns)
- Memory: very, very, very fast (~100 ns)
- SSD: fast (~200,000 ns)
- Disk: very, very slow comparatively (1,000,000 - 8,000,000 ns)
Mixed SSD + HDD Can Be a Great Solution
It is typical for databases to have a large percentage of data which is infrequently used ("cold") and a small
percentage of data which is frequently used ("hot"). Hot data may be only 10-20% of capacity, but represent 80-90% of activity.
- SSD offers the best price/performance when focused on "hot" data, and can be run closer to 100% of capacity.
- HDD offers the best storage cost, so focus it on "cold" data - sort of a hierarchical approach. You may be able to use larger HDDs and/or a larger percentage of capacity used.
SSD Performance
[Two charts compare SSD with HDD: random I/Os per second (sustained), and power consumption in watts required for 135K IOPS performance.]
IBM i Load Balancer
Monitors partition/ASP using “trace”
- User turns trace on during a peak time
- User turns trace off after reasonable sample time
- Negligible performance impact expected
- Tool monitors “reads” to identify hot data
- TRCASPBAL followed by STRASPBAL TYPE(*HSM)
Upon command, moves hot data to SSD, cold data to HDD
Minimal performance impact, done in background
Re-monitor and rebalance any time
- Probably a weekly or monthly activity
- Perhaps less often if data not volatile
Place DB2 objects on SSD through new parameters on DB2 commands
- CRTPF / CHGPF
- CREATE / ALTER TABLE
- CREATE INDEX
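A minimal sketch of those media-preference parameters (object names are hypothetical; the embedded SQL assumes an SQLRPGLE source member, and the UNIT options require a release that supports them):

**FREE
// Media preference: ask the system to place hot objects on SSD.
dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd char(256) const;
  cmdLen packed(15:5) const;
end-pr;
dcl-s cmd varchar(256);

cmd = 'CHGPF FILE(APPLIB/ORDDTL) UNIT(*SSD)';
QCMDEXC(cmd : %len(cmd));
// The SQL equivalents carry a UNIT SSD clause.
exec sql CREATE INDEX APPLIB.ORDDTL_IX1
           ON APPLIB.ORDDTL (ORDNBR) UNIT SSD;
*inlr = *on;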
Where should SSD be considered?
Where server performance is really important and I/O dependent
Where server has a lot of HDD with low capacity utilization (or ought to have them
configured this way)
Where high value (from a performance perspective) can be focused in SSD
Fairly small portion of total data (“hot data”)
Specific indexes/tables/files/libraries of the operating system or application
Best workload characteristics for SSD
Lots of random reads … low percentage of sequential/predictable reads
Higher percentage reads than writes
Assuming a disk adapter/controller with enough write cache, SSD
writes may or may not be that much faster than HDD writes
Whitepaper at http://www.ibm.com/systems/resources/ssd_ibmi.pdf
SSD Analyzer Tool for IBM i
• Quick, easy, no-charge analysis introduced 4Q 2009
• Looks at standard performance report output
• Provides “probably yes”, “probably no”, or “maybe”
SSD ANALYSIS TOOL (ANZSSDDTA)
Type choices, press Enter.
PERFORMANCE MEMBER . . . . . . . *DEFAULT      Name, *DEFAULT
  LIBRARY  . . . . . . . . . . .               Name
Additional Parameters
REPORT TYPE  . . . . . . . . . . *SUMMARY      *DETAIL, *SUMMARY, *BOTH
TIME PERIOD:
  START TIME AND DATE:
    BEGINNING TIME . . . . . . . *AVAIL        Time, *AVAIL
    BEGINNING DATE . . . . . . . *BEGIN        Date, *BEGIN
  END TIME AND DATE:
    ENDING TIME  . . . . . . . . *AVAIL        Time, *AVAIL
    ENDING DATE  . . . . . . . . *END          Date, *END
NUMBER OF RECORDS IN REPORT  . . 50            0 - 9999
                                                                 Bottom
F3=Exit  F4=Prompt  F5=Refresh  F12=Cancel  F13=How to use this display  F24=More keys
Available via http://www.ibm.com/support/techdocs in “Presentations &
Tools”. Search using keyword SSD.
Common Disk IO Performance Symptoms
- Longer nightly batch runs
- Intermittent response problems with IO intensive transactions
- High disk utilization
- High disk response time for some arms
- Noticeable seize/lock conflicts due to high disk response times
- High percentage full on arms
- Longer save/restore and system backup times
Potential Causes
Customers often assume high disk response times are caused by a lack of arms, but the following causes of disk response issues are more common:
Excessive page faulting
- Memory pools not properly configured
- Excessive number of records being paged in the memory pools
- Lack of memory
Expert Cache not turned on
Small read/write cache I/O adapters (IOAs)
Too many arms/LUNs attached to one IOA
Excessive number of synchronous writes causing write cache overruns
- Forced write ratio
- Journal caching or commitment control not enabled
- Lack of blocked writes
Excessive number of synchronous reads due to poor programming techniques
- Programs not blocking the read requests
Potential Causes
Arm balance issues (hot objects)
- ASP balance not performed after new arms were added
- RAID parity started with only a subset of the arms initially configured
- Different size arms configured on a specific IOA
Save/restore activity occurring during production hours
External storage configuration problems
- Insufficient number of logical units (LUNs) configured
- Insufficient Fibre Channel configuration
- Insufficient amount of write cache
Workload needs to be tuned to reduce unnecessary I/O
Workload simply too I/O intensive for the arm configuration
Hardware errors
I/O subsystem undersized
Actions/Preventions
The first action is to verify that all the necessary I/O performance features have been enabled/installed:
- Expert cache
- QQRYDEGREE system value = *IO
- HA Journal Performance LPP
The second action should always be to prevent unnecessary page faulting. Page faulting for some objects may be unavoidable; those objects could be faulted in more quickly with solid state drives (SSD).
The third action is to determine when the worst disk response times occur, and which jobs and programs/SQL statements are responsible.
The fourth step is to determine whether there are specific arms with higher response times, higher percent busy, higher operations per second, and/or higher K per I/O.
The fifth step is to determine whether the arms with poor response times are attached to a specific IOA, and the size of the read/write cache available.
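A minimal sketch of enabling the first-action items from RPG (the journal name APPLIB/APPJRN and the pool choice are hypothetical; JRNCACHE(*YES) assumes the HA Journal Performance feature is installed):

**FREE
dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd char(256) const;
  cmdLen packed(15:5) const;
end-pr;
dcl-s cmd varchar(256);

// Expert cache: set the shared pool's paging option to *CALC.
cmd = 'CHGSHRPOOL POOL(*BASE) PAGING(*CALC)';
QCMDEXC(cmd : %len(cmd));
// Allow the query optimizer to use parallel I/O.
cmd = 'CHGSYSVAL SYSVAL(QQRYDEGREE) VALUE(*IO)';
QCMDEXC(cmd : %len(cmd));
// Cache journal entries in memory before they are written to disk.
cmd = 'CHGJRN JRN(APPLIB/APPJRN) JRNCACHE(*YES)';
QCMDEXC(cmd : %len(cmd));
*inlr = *on;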
Reviewing general I/O performance via Performance Tools
System Report
The disk utilization section of the system report will show the average service times, wait times, and response times for each arm. Remember this is an average for each arm, using 5-15 minute intervals for the number of intervals selected, and it includes both read and write response times.
- The guideline for reads is 5 ms per I/O or less
- The guideline for writes is .5 ms per I/O or less
Look for arms with higher response times, utilization, percentage full, ops/sec, and K per I/O.
Look to see if the arms with response problems are all attached to the same IOP/IOA.
Reviewing general I/O performance via Performance Tools
Component Report
Page down to the disk activity section of the component report.
- Review the write cache efficiency column: the write cache efficiency should be over 60%
- Review the write cache overrun column: write cache overruns higher than 0 are worth investigating
- Review the write cache size of the IOAs connected to the arms with poor response
- Review the number of arms and the percentage full of these arms
- Determine which jobs and programs are driving the high read/write volume
Case Study 1 – Hot Objects
The following PDI graph shows the poor response times for intervals with less workload
Case Study 1 – Hot Objects
PDI - Disk response times by arm
Case Study 1 – Hot Objects
PDI Job Watcher – intervals with highest synchronous disk response times
Case Study 1 – Hot Objects
PDI Job Watcher - Jobs driving the most synchronous IO
Case Study 1 – Hot Objects
iDoctor Job Watcher – intervals with highest synchronous disk response times
Case Study 1 – Hot Objects
iDoctor Job Watcher - Jobs driving the most synchronous IO
Case Study 1 – Hot Objects
iDoctor – QSPLMAINT call stack (DLTSPLF job)
Case Study 1 – Hot Objects
iDoctor – RCV_RQS call stack (DSPJRNA Command)
Case Study 2 – Job Watcher
[A sequence of six Job Watcher screenshot slides drills down from the wait buckets to the jobs that are waiting, the type and length of the contention (disk writes), the program driving the writes, the type of write operations, and the file being written to.]
Case Study 2
At this point you know:
- Which jobs are waiting
- The type of contention and the length of time per interval – Disk writes
- Which program is driving the writes
- The type of write operations (blocked inserts - QDBPUTM)
- The file being written to
What you don’t know is:
- Why the writes are being performed synchronously
The possible causes are:
- Write cache overruns
- The object being written to is only on a subset of arms (hot object)
- The writes are forced to be synchronous
Case Study 2 – Disk Watcher
Case Study 2 – Disk Watcher
Case Study 2 – Disk Watcher
DSPFD Command
A Display File Description (DSPFD) command against the file with the long write times shows a force write ratio of 1, which forces each write to be physically completed before control returns to the program.
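A minimal sketch of checking and then relaxing the attribute (APPLIB/ORDDTL is a hypothetical file; before removing a force-write ratio, make sure journaling or another mechanism protects the data):

**FREE
// Check the file's attributes (including FRCRATIO), then remove the
// forced write so writes can be cached and destaged asynchronously.
dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd char(256) const;
  cmdLen packed(15:5) const;
end-pr;
dcl-s cmd varchar(256);

cmd = 'DSPFD FILE(APPLIB/ORDDTL) TYPE(*ATR)';
QCMDEXC(cmd : %len(cmd));
cmd = 'CHGPF FILE(APPLIB/ORDDTL) FRCRATIO(*NONE)';
QCMDEXC(cmd : %len(cmd));
*inlr = *on;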
Quiz (1 of 2)
For integrated disks controlled by IBM i, you may use RAID-5 and mirroring only.
True
False
Higher disk drive utilization always results in increased disk response times.
True
False
Increased queuing beyond a critical level may result in disk response time being
significantly impacted.
True
False
Quiz solutions (1 of 2)
For integrated disks controlled by IBM i, you may use RAID-5 and mirroring only.
Answer: False (RAID 6 is also available)
Higher disk drive utilization always results in increased disk response times.
Answer: False
Increased queuing beyond a critical level may result in disk response time being significantly impacted.
Answer: True
Quiz (2 of 2)
Increased disk adapter cache can improve disk subsystem performance.
True
False
Read-intensive workloads with high levels of random reads are more likely to benefit
from Solid State Drives (SSD) than other workloads.
True
False
Quiz solutions (2 of 2)
Increased disk adapter cache can improve disk subsystem performance.
Answer: True
Read-intensive workloads with high levels of random reads are more likely to benefit from Solid State Drives (SSD) than other workloads.
Answer: True
Summary
I/Os can be optimized without touching application programs
Simple best practices can lead to significant I/O performance
improvements
Optimization can significantly improve performance and runtimes by
– Reducing I/Os and wait times
– Avoiding I/Os rather than merely optimizing them
Thank You!
IBM Systems Lab Services and Training
Leverage the skills and expertise of IBM's technical consultants to implement projects that achieve faster business value:
- Ensure a smooth upgrade
- Improve your availability
- Design for efficient virtualization
- Reduce management complexity
- Assess your system security
- Optimize database performance
- Modernize applications for iPad
- Deliver training classes & conferences
How to contact us at COMMON:
For Lab Services: contact Mark Even at even@us.ibm.com
For Technical Training: contact Lisa Ryan at lisaryan@us.ibm.com
Meet Mark and Lisa in the IBM Expo booth
Follow us at @IBMSLST
Learn more: ibm.com/systems/services/labservices
Special notices
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in
other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM
offerings available in your area.
Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions
on the capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give
you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY
10504-1785 USA.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives
only.
The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or
guarantees either expressed or implied.
All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the
results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations
and conditions.
IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions
worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment
type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal
without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are
dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this
document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document
should verify the applicable data for their specific environment.
Revised September 26, 2006
Special notices (cont.)
IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 5L, AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, i5/OS, i5/OS (logo), IBM Business
Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC
System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, Active Memory, Balanced Warehouse,
CacheFlow, Cool Blue, IBM Watson, IBM Systems Director VMControl, pureScale, TurboCore, Chiphopper, Cloudscape, DB2 Universal Database, DS4000, DS6000,
DS8000, EnergyScale, Enterprise Workload Manager, General Parallel File System, , GPFS, HACMP, HACMP/6000, HASM, IBM Systems Director Active Energy
Manager, iSeries, Micro-Partitioning, POWER, PowerLinux, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power Everywhere, Power
Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2, POWER3, POWER4,
POWER4+, POWER5, POWER5+, POWER6, POWER6+, POWER7, POWER7+, Systems, System i, System p, System p5, System Storage, System z, TME 10,
Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other
countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols
indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law
trademarks in other countries.
A full list of U.S. trademarks owned by IBM may be found at: http://www.ibm.com/legal/copytrade.shtml.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or
other countries.
AltiVec is a trademark of Freescale Semiconductor, Inc.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency which is now part of the Office of Government Commerce.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Linear Tape-Open, LTO, the LTO Logo, Ultrium, and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
PowerLinux™ uses the registered trademark Linux® pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the Linux® mark on a worldwide basis.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are
trademarks of the Standard Performance Evaluation Corp (SPEC).
The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Revised November 28, 2012
Other company, product and service names may be trademarks or service marks of others.
Notes on benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html .
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, the latest
versions of AIX were used. All other systems used previous versions of AIX. The SPEC CPU2006, LINPACK, and Technical Computing benchmarks were compiled using
IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C for
AIX v11.1, XL C/C++ for AIX v11.1, XL FORTRAN for AIX v13.1, XL C/C++ for Linux v11.1, and XL FORTRAN for Linux v13.1.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
TPC
http://www.tpc.org
SPEC
http://www.spec.org
LINPACK
http://www.netlib.org/benchmark/performance.pdf
Pro/E
http://www.proe.com
GPC
http://www.spec.org/gpc
VolanoMark
http://www.volano.com
STREAM
http://www.cs.virginia.edu/stream/
SAP
http://www.sap.com/benchmark/
Oracle, Siebel, PeopleSoft http://www.oracle.com/apps_benchmark/
Baan
http://www.ssaglobal.com
Fluent
http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
Ideas International
http://www.ideasinternational.com/benchmark/bench.html
Storage Performance Council
http://www.storageperformance.org/results
Revised December 2, 2010
Notes on HPC benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html .
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, the latest
versions of AIX were used. All other systems used previous versions of AIX. The SPEC CPU2006, LINPACK, and Technical Computing benchmarks were compiled using
IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C for
AIX v11.1, XL C/C++ for AIX v11.1, XL FORTRAN for AIX v13.1, XL C/C++ for Linux v11.1, and XL FORTRAN for Linux v13.1. Linpack HPC (Highly Parallel Computing)
used the current versions of the IBM Engineering and Scientific Subroutine Library (ESSL). For Power7 systems, IBM Engineering and Scientific Subroutine Library (ESSL)
for AIX Version 5.1 and IBM Engineering and Scientific Subroutine Library (ESSL) for Linux Version 5.1 were used.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
SPEC
http://www.spec.org
LINPACK
http://www.netlib.org/benchmark/performance.pdf
Pro/E
http://www.proe.com
GPC
http://www.spec.org/gpc
STREAM
http://www.cs.virginia.edu/stream/
Fluent
http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
AMBER
http://amber.scripps.edu/
FLUENT
http://www.fluent.com/software/fluent/fl5bench/index.htm
GAMESS
http://www.msg.chem.iastate.edu/gamess
GAUSSIAN
http://www.gaussian.com
ANSYS
http://www.ansys.com/services/hardware-support-db.htm
Click on the "Benchmarks" icon on the left hand side frame to expand. Click on "Benchmark Results in a Table" icon for benchmark results.
ABAQUS
http://www.simulia.com/support/v68/v68_performance.php
ECLIPSE
http://www.sis.slb.com/content/software/simulation/index.asp?seg=geoquest&
MM5
http://www.mmm.ucar.edu/mm5/
MSC.NASTRAN http://www.mscsoftware.com/support/prod%5Fsupport/nastran/performance/v04_sngl.cfm
STAR-CD
www.cd-adapco.com/products/STAR-CD/performance/320/index/html
NAMD
http://www.ks.uiuc.edu/Research/namd
HMMER
http://hmmer.janelia.org/
http://powerdev.osuosl.org/project/hmmerAltivecGen2mod
Revised December 2, 2010
Notes on performance estimates
rPerf for AIX
rPerf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX
systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC
and SPEC benchmarks. The rPerf model is not intended to represent any specific public benchmark results and
should not be reasonably used in that way. The model simulates some of the system operations such as CPU,
cache and memory. However, the model does not simulate disk or network I/O operations.
rPerf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the
time of system announcement. Actual performance will vary based on application and configuration specifics. The
IBM eServer pSeries 640 is the baseline reference system and has a value of 1.0. Although rPerf may be used to
approximate relative IBM UNIX commercial processing performance, actual system performance may vary and is
dependent upon many factors including system hardware configuration and software design and configuration.
Note that the rPerf methodology used for the POWER6 systems is identical to that used for the POWER5 systems.
Variations in incremental system performance may be observed in commercial workloads due to changes in the
underlying system architecture.
All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM.
Buyers should consult other sources of information, including system benchmarks, and application sizing guides to
evaluate the performance of a system they are considering buying. For additional information about rPerf, contact
your local IBM office or IBM authorized reseller.
========================================================================
CPW for IBM i
Commercial Processing Workload (CPW) is a relative measure of performance of processors running the IBM i
operating system. Performance in customer environments may vary. The value is based on maximum
configurations. More performance information is available in the Performance Capabilities Reference at:
www.ibm.com/systems/i/solutions/perfmgmt/resource.html
Revised April 2, 2007