
IBM Power Systems Technical University October 25–29, 2010 — Lyon, France
“How to optimize performance, runtimes, footprint and energy with Solid State Drives (SSDs) for IBM i”
Gottfried Schimunek
Senior IT Architect
Application Design
IBM STG Software
Development
Lab Services
3605 Highway 52 North
Rochester, MN 55901
Tel 507-253-2367
Fax 845-491-2347
Gottfried@us.ibm.com
IBM ISV Enablement
© 2010 IBM Corporation
Acknowledgements
This presentation would not be possible without the contributions of:
Sue Baker
Clark Anderson
Dan Braden
Eric Hess
Fant Steele
Ginny McCright
Glen Nelson
Henry May
Jacques Cote
Jim Hermes
John Hock
Lee Cleveland
Mark Olson
Robert Gagliardi
Tom Crowley
Abstract and agenda
Abstract:
– Solid State Drives (SSDs) offer an exciting new way to solve I/O disk bottlenecks that cannot be easily handled with more spinning disk drives. This presentation offers configuration and usage insights on this game-changing technology and includes the latest insights from development, the benchmark center, the Power performance teams, etc. for IBM i environments.
Agenda:
– Overview of SSD technology
– Using SSDs for IBM i workloads
• ASP balance enhancements for SSDs
• DB2 for IBM i enhancements for SSDs
– Performance results for IBM i workloads
– SSD configuration rules
Today’s Cheat Sheet
SCSI – Small Computer System Interface – decades old
SAS – Serial Attached SCSI – modern, higher performance replacement for SCSI
HDD – Hard Disk Drive
SSD – Solid State Drive
SAN – Storage Area Network
NPIV – N_Port ID Virtualization
VIOS – Virtual I/O Server
SFF – Small Form Factor – 2 ½” HDDs or SSDs
IOA – I/O Adapter
IOP – I/O Processor – previously used to support IOAs for specific generations of IBM i systems
SmartIOA – an IOA which doesn’t require an IOP, reducing card slot usage and costs
PCI-X – PCI eXtended – enhanced PCI card and slot
PCIe – PCI Express – latest and fastest enhanced PCI card and slot
HSL – High Speed Loop – POWER4 through POWER6 I/O bus interconnect
RIO – Remote I/O – same as HSL, but called RIO when used on System p
12X – IBM’s POWER system implementation of InfiniBand bus interconnect
CEC – Central Electronics Complex.
– Refers to the processor enclosures for POWER5/5+, POWER6/6+, and POWER7 systems.
• 520, 550, and 750 systems have a single enclosure
• 560, 570, 770, and 780 systems have 1 or more enclosures.
Overview of SSD technology
Solid State Drives – 40 years in the making
Timeline highlights, 1970–2010 (shown on a decade axis in the original chart):
– Silicon nitride electrically alterable ROMs (EAROMs): rewritable, user-removable, non-volatile solid state storage for industrial controls
– EMC SSD storage for minicomputers
– Intel 1-Mbit bubble memory
– Curtis ROMDISK for the IBM PC
– DEC EZ5x family solid state disks
– ATTO Technology SiliconDisk II: 5.25" SCSI-3 RAM SSD, 1.6 GB, 22K IOPS
– First high-volume Windows XP notebook using SSDs (Samsung)
– IBM and STEC announce collaboration and availability of SSDs (SAS & FC) in DS8000 and Power Systems servers: 45K IOPS
Why SSDs?
Access speed, fastest to slowest:
– Processors: very, very, very, very, very fast – < 10s of ns
– Memory: very, very, very fast – ~100 ns
– SSD: fast – ~200,000 ns
– Disk: very, very slow, comparatively – 1,000,000–8,000,000 ns

Seagate 15k RPM / 3.5" drive specifications:
                                    2002    2008
  Capacity (GB)                       73     450
  Max sustained data rate (MB/s)      75     171
  Read seek (ms)                     3.6     3.4

HDDs continue to provide value on a $/GB metric … but are getting worse on an IO/GB metric.
HDD sweet spot: workloads with no strong performance need, or performance measured in sustained GB/s.
SSDs and read response times
4 KB/op read response time (ms):
– IOA cache hit: 0.1
– SSD: 0.33
– 15k RPM HDD, short seek: 3.9
– 15k RPM HDD, long seek: 8
Power Solid State Disk (SSD) Technology
Enterprise Grade Solid State Drive (SSD) – faster than a spinning disk
– Also called a “Flash Drive”
– Built-in wear leveling
Rated capacity: 69.7 GB
– Actually has 128 GB for industrial strength
– Extra 83% capacity for long drive life
First SAS SSD in the industry – 2.5 inch (SFF), SAS interface (3 Gb)
– Higher performance interface
– Higher levels of redundancy/reliability
– 2.5 / 3.5 inch inserts/carriers
Sustained performance throughput:
– 220 MB/s read
– 115 MB/s write
Random transactional operations: 28,000 IOPS
Average access time: typically around 200 microseconds vs. >2 milliseconds for HDDs
Power consumption: 8.4 W max, 5.4 W idle
– Same as an SFF 15k HDD
– About half of a 3.5" 15k HDD (16–18 W for today’s larger capacities)
Using SSDs for IBM i workloads
Workloads where SSD technology may help
Application performance is really important and I/O dependent (e.g., a critical batch window with very high I/O rates)
Server has a lot of HDDs with low capacity utilization (% full), or ought to be configured this way
Highly valued data (from a performance perspective) can be focused on SSD
– Fairly small portion of total data is “hot data”
– Specific indexes/tables/files/libraries of the operating system or application
Best application workload characteristics for SSD
– Lots of random reads … and a low percentage of sequential/predictable reads
– Higher percentage of reads than writes
Mixed SSD + HDD Can Be A Great Solution
It is typical for databases to have a large percentage of data which is infrequently used (“cold”) and a small percentage of data which is frequently used (“hot”)
– Hot data may be only 10–20% of capacity, but represent 80–90% of activity
SSD offers the best price/performance when focused on “hot” data
– Can run SSD closer to 100% capacity
HDD offers the best storage cost, so focus it on “cold” data … sort of a hierarchical approach
– May be able to use larger HDDs and/or a larger % of capacity used
The mixed approach:
• Reduces footprint
• Reduces power
• Reduces cooling load
• Increases performance and scalability
Using SSDs for IBM i workloads:
ASP balance enhancements for SSDs
IBM i SSD Data Balancer – introduction and performance summary
Industry-leading automated capability
Monitors the partition/ASP using a “trace”
– User turns the trace on during a peak time
– User turns the trace off after a reasonable sample time (1 hour or more)
IBM i intelligent hot/cold placement makes a big difference vs. the normal IBM i striping/scattering of data across all drives
– The trace monitors “reads” to identify hot data
Upon command, moves hot data to SSD and cold data to HDD
– Minimal performance impact; done in the background
Can remonitor and rebalance at any time
(Chart: application response time vs. transactions per minute for a 72 HDD + 16 SSD configuration, with no balancing vs. data balanced; tracing has a negligible performance impact.)
Types of ASP balancing
Balance data between busy units and idle units (STRASPBAL TYPE(*USAGE))
Make all of the units in the ASP the same percent full (STRASPBAL TYPE(*CAPACITY))
Drain the data from a disk unit to prepare it to be removed from the configuration (STRASPBAL TYPE(*MOVDTA))
(Almost obsolete) Move hot data off of a compressed disk and move cold data to the compressed disk (STRASPBAL TYPE(*HSM))
– Requires specific disk controllers with compression capability – feature code 2741, 2748, or 2778
– Compression only allowed in user ASPs – not allowed in ASP1 (SYSBASE)
Move cold data to HDDs and move hot data to SSDs (STRASPBAL TYPE(*HSM))
Using TRCASPBAL to place hot data on SSDs
(Diagram: example read counts per 1 MB stripe across HDD1–HDD4 and the SSD, used to identify the hottest stripes for migration.)
Trace ASP balance counts the read operations based on 1 MB stripes
– TRCASPBAL SET(*ON) ASP(1) TIMLMT(*NOMAX)
Start ASP balance moves the data
– STRASPBAL TYPE(*HSM) ASP(1) TIMLMT(*NOMAX)
– Target is for 50% of read operations to be on SSD
– Cold data is moved (multiple threads) to HDDs; hot data is moved (single thread) to SSD
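A minimal sketch, using the commands named on this and the previous chart, of the end-to-end monitor-then-balance flow; the one-hour sample window is illustrative, and CHKASPBAL / ENDASPBAL are shown only as optional ways to check on or end the balance activity (verify parameter details against the CL reference):

  TRCASPBAL SET(*ON) ASP(1) TIMLMT(*NOMAX)    /* Start collecting read counts per 1 MB stripe */
  /* ... run the peak workload for an hour or more ... */
  TRCASPBAL SET(*OFF) ASP(1)                  /* Stop the trace; the collected counts are kept */
  STRASPBAL TYPE(*HSM) ASP(1) TIMLMT(*NOMAX)  /* Move hot data to SSD, cold data to HDD        */
  CHKASPBAL                                   /* Check the status of the balance activity      */
  ENDASPBAL ASP(1)                            /* Optionally end the balance before completion  */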
Using SSDs for IBM i workloads:
DB2 for IBM i enhancements for SSDs
DB2 and SSD integration for IBM i – CL enhancements
IBM i V5R4 and V6R1
– CRTPF, CRTLF, CHGPF, CHGLF, CRTSRCPF, and CHGSRCPF commands enhanced
to indicate preference for placement on SSDs
• V5R4 – examples
» CRTPF lib1/pf1 SRCFILE(libsrc/dds) UNIT(255)
» CHGPF lib1tst/pf1 UNIT(255)
• V6R1 – examples
» CRTPF lib1/pf1 SRCFILE(libsrc/dds) UNIT(*SSD)
» CHGPF lib1tst/pf1 UNIT(*SSD)
– Delivered via Database Group PTF plus additional PTFs
• V5R4 SF99504
» Version 22 (Recommended minimum level)
• V6R1 SF99601
» Version 10 (Recommended minimum level)
• Capabilities are continuously being added to DB2. You should stay current to take
advantage of them. See support document 8625761F00527104.
Notes:
– When using the CRTPF, CRTLF, CHGPF, CHGLF commands, if the table or physical file has multiple partitions or members, the media preference applies to all partitions or members.
– An exclusive lock is required to change the PF/LF and is then released.
– Movement will be synchronous until the asynchronous movement PTF is released in 1H10.
DB2 and SSD integration for IBM i – SQL enhancements
IBM i V6R1 SQL support
– UNIT SSD on the object level
• CREATE TABLE
• ALTER TABLE
• CREATE INDEX
– UNIT SSD on the partition level
• CREATE TABLE
• ALTER TABLE ... ADD PARTITION
• ALTER TABLE ... ALTER PARTITION
Examples (sketched below): add a partition for the current year and place it on SSD; move the partition with last year’s data back to HDDs
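A minimal sketch of what those two examples might look like; the table, partition, and boundary values (SALES, P2010, P2009, the date range) are illustrative, and the exact partitioning and media-preference clauses should be verified against the DB2 for i SQL reference:

  -- Add a partition for the current year and place it on SSD
  ALTER TABLE SALES
    ADD PARTITION P2010
      STARTING FROM ('2010-01-01') ENDING AT ('2010-12-31')
      UNIT SSD;

  -- Move the partition holding last year's data back to HDDs
  ALTER TABLE SALES
    ALTER PARTITION P2009 UNIT ANY;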
How to Find Hot Tables and Indexes
Performance Explorer
– BY FAR the best solution
– Perform analysis based on read-complete and write-complete events
DB2 maintains statistics about the number of operations on a table or index
– Statistics are zeroed on each IPL
– Statistics only identify candidates (logical operations include both random and sequential operations)
– Available via:
• Display File Description (DSPFD) command
• Application programming interface (API) QUSRMBRD
• System i Navigator Health Center (V6R1 only)
• SQL catalog queries
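As one illustration of the SQL catalog approach, a query along these lines ranks table members by logical read activity; it assumes the QSYS2.SYSPARTITIONSTAT catalog view and its LOGICAL_READS column, and the schema name MYLIB is a placeholder; verify the view and column names for your release:

  -- Rank table members in MYLIB by logical reads accumulated since the last IPL
  SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_PARTITION, LOGICAL_READS
    FROM QSYS2.SYSPARTITIONSTAT
    WHERE TABLE_SCHEMA = 'MYLIB'
    ORDER BY LOGICAL_READS DESC
    FETCH FIRST 50 ROWS ONLY;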
DS8000 Automated Data Relocation
DS8000 features and deliverables:
Smart Monitoring (R3.1)
• Fine-grained (sub-LUN level) performance data collection
• Hot-spot performance analysis tools
Volume-Based Data Relocation (R5.1)
• Customer-driven, volume-based “non-disruptive” migration
• Enables mobility among ranks and pools
Extent-Based Data Relocation (R5.1)
• Automatically enabled in a merged SSD+HDD pool
• Automatic hot-spot identification and relocation of hot spots to SSD ranks
Traditional Disk Mapping vs. Smart Storage Mapping
Traditional disk mapping: host volumes map directly to physical devices. Volumes have different characteristics, and applications need to place them on the correct tiers of storage based on usage.
Smart storage mapping: host volumes map to smart virtual volumes. All volumes appear “logically” homogeneous to applications, but data is placed at the right tier of storage based on its usage through smart data placement and migration.
Workload Learning through Smart Monitoring
(Chart: heat map of a LUN’s I/O activity with Logical Block Address (LBA) ranges on the y-axis and time in minutes on the x-axis; regions are labeled hot data, cool data, and inactive disk.)
Each workload has its own unique I/O access patterns and characteristics over time.
The smart monitoring and analysis tool allows customers to develop new insight into application optimization on the storage infrastructure.
The chart shows historical performance data for a LUN over 12 hours.
– Y-axis (top to bottom): LBA ranges
– X-axis (left to right): time in minutes
This workload shows performance skew in three LBA ranges.
Average Response Time Shows Significant Improvement with Smart Data Placement and Migration
(Chart: average response time over time; migration begins after 1 hour, and occasional DA health checks are annotated.)
Performance results for IBM i workloads
Application Disk Read Footprint
Workload A
– Hot data evident
– Small footprint, but….
– High reads/service time
Workload B
– Hot data not as evident
– Uniform footprint
– Uniform reads/service time
Performance Results using ASP Balancer
ASP Balancer used to move “hot” data to SSDs: 108 SCSI HDDs vs. a mixed configuration (36 HDDs + 8 SSDs)
– TRCASPBAL ran over the entire throughput curve
– SSDs = 30% full after balance
– HDDs = 39% full after balance
LOWER application response time
– 93% lower response time at the highest throughput point
– 91% lower average response time over the full curve
HIGHER performance / throughput
– 17% higher Trans/min at an application response time < 0.5 sec
– 2.5x higher average Trans/min per drive
(Charts: application response time vs. throughput, 50,000–230,000 Trans/min, for the two configurations; and bar charts comparing all-SCSI-HDD vs. mixed SSD/HDD average throughput per drive and average application response time.)
Performance Results using DB2 Media Preference
DB2 media preference used to move “hot” data to SSDs: 108 SCSI HDDs vs. a mixed configuration (36 HDDs + 8 SSDs)
– CHGPF used to move 16 files over to SSDs
– SSDs = 28% full after move
– HDDs = 35% full after move
LOWER application response time
– 30% lower response time at the highest throughput point
– 50% lower average response time over the full curve
HIGHER performance / throughput
– 14% higher Trans/min
– 2.5x higher average Trans/min per drive
(Charts: application response time vs. throughput, 0–250,000 Trans/min, for the two configurations; and bar charts comparing all-SCSI-HDD vs. mixed SSD/HDD average throughput per drive and average application response time.)
Customer SSD Benchmarking Experience
Batch run time (minutes) was compared against a maximum desired run time for four configurations, with each run broken down into CPU time, CPU queuing, DASD page faults, DASD other, and other wait time:
– Typical run: 72 SAS HDD (baseline)
– Memory adds US$40K cost: delivers 40 GB main storage for use by active work; 34% performance improvement (40 GB pinned memory)
– SSDs add US$106K cost: delivers 488 GB usable SSD storage; 44% performance improvement (72 SAS HDD + 8 SSD)
– SSDs add US$47K cost: delivers 210 GB usable storage; 32% performance improvement (60 SAS HDD + 4 SSD)
Batch Job Wait Signature
(Chart: batch job wait signatures for three jobs – one disk constrained, one waiting for other jobs, and one already running ‘well’ – with run-time reductions of 54%, 38%, and 17% respectively after moving hot data to SSD.)
Answering the question … Will SSDs help me?
Introducing ANZSSDDTA
SSD ANALYSIS TOOL (ANZSSDDTA)

Type choices, press Enter.

PERFORMANCE MEMBER . . . . . . .   *DEFAULT     Name, *DEFAULT
  LIBRARY  . . . . . . . . . . .                Name

                         Additional Parameters

REPORT TYPE  . . . . . . . . . .   *SUMMARY     *DETAIL, *SUMMARY, *BOTH
TIME PERIOD:
  START TIME AND DATE:
    BEGINNING TIME . . . . . . .   *AVAIL       Time, *AVAIL
    BEGINNING DATE . . . . . . .   *BEGIN       Date, *BEGIN
  END TIME AND DATE:
    ENDING TIME  . . . . . . . .   *AVAIL       Time, *AVAIL
    ENDING DATE  . . . . . . . .   *END         Date, *END
DETAIL REPORT SORT ORDER . . . .   *DISKTOT     *JOBNAME, *CPUTOT...
NUMBER OF RECORDS IN REPORT  . .   50           0 - 9999

                                                                        Bottom
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

Visit www.ibm.com/support/techdocs and search for document PRS3780.
ANZSSDDTA – *SUMMARY output
SSD Data Analysis - Disk Read Wait Summary
Performance member Q224014937 in library @SUE
Time period from 2009-08-12-01.49.40.000000 to 2009-08-13-00.00.00.000000
--------------------------------------------------------------------------------
Disk read wait average response was 00.003058. Maybe candidate.
Bottom
F3=Exit F12=Cancel F19=Left F20=Right F24=More keys
ANZSSDDTA – *DETAIL output
SSD Data Analysis - Jobs Sorted by Disk Read Time
Performance member Q224014937 in library @SUE
Time period from 2009-08-12-01.49.40.000000 to 2009-08-13-00.00.00.000000

                                  CPU        Disk Read    Disk Read     Disk
                                  Total      Wait Total   Wait Average  Read Wait
Job Name                          Seconds    Seconds      Seconds       /CPU
-------------------------------   ---------  -----------  ------------  ---------
QSPRC00001/QSYS/448980               38.010    4,276.730       .004677        113
DELSMBQPRT/SUEBA02/455198            67.096    3,551.437       .004724         53
QSPLMAINT/QSYS/448961                23.377    2,820.571       .004547        121
PERFNAVDS/SUEBA01/451039             15.865      862.070       .001861         54
WCSMBB01/SUEBAK/456767              144.285      774.387       .002174          5
WCSMBB02/SUEBAK/456856               49.446      589.355       .003625         12
QPADEV000F/SUEBA01/451414             7.612      544.305       .004620         72
SB055J/SUEBA01/453231               690.375      482.659       .002527          1
QCLNSYSPRT/QPGMR/456926               5.232      459.801       .005025         88
DELSMBQQPR/SUEBA02/455585            12.035      431.057       .004763         36

Callouts: QSPRC00001 accumulated 71 minutes of disk read wait time, and 113 seconds of disk read wait per second of CPU run time.
Action plan beyond the tool ….
Contact your seller of IBM equipment
– IBM Business Partners: contact your distributor … or …
– Contact Techline and request
• IBM i SSD Capacity Planning assistance
» This is a no-additional-charge analysis to
♦ Estimate the number of SSDs
♦ Recommend the best use of SSDs for a measured workload
» The analysis uses
♦ Collection Services data
♦ PEX collection of specific disk events
SSD configuration rules for IBM i
Code prerequisites for SSDs
Server firmware – 3.4.2
– 520 – EL340_075
– 550 – EL340_075
– 560 – EM340_075
– 570 – EM340_075
– 595 – EH340_075 + EB340_078
HMC – V7R3.4.0 service pack 2, fix level MH01162

V5R4M5 LIC and V5R4 OS
– Cumulative PTF package C9104540
– Respin RS545-F LIC, RS540-30 OS (does not need to be installed, but the CD is needed for system reload purposes)
– Database group 22 (SF99507)
– The following PTFs should be in temporary, permanent or superseded status on the system:
• MF46591
• MF46593
• MF46594
• MF46595
• MF46743 – THIS IS A DELAYED PTF
• MF46748
• SI35126
• SI35365
• SI35305

V6R1M0 LIC and OS
– Cumulative PTF package C9111610
– Respin RS610-F, RS640-00 OS (does not need to be installed, but the CD is needed for system reload purposes)
– Database group 10 (SF99601)
– The following PTFs should be in temporary, permanent or superseded status on the system:
• MF46390
• MF46518
• MF46587
• MF46588
• MF46609
• MF46714
• MF46771
• MF46817
• MF47076 – THIS IS A DELAYED PTF
• MF47224 – THIS IS A DELAYED PTF
• SI35299
• SI35379
• SI35572
• SI35653

See support document 8625761F00527104 for up-to-date information.
SSD configuration requirements – all models, all form factors
POWER6 system required
IBM i 5.4.5 and 6.1 operating system
IBM i requires disk controllers with write cache
– Must be protected using IBM i mirroring, RAID-5 or RAID-6
– Maximum of 8 SSDs per controller
When installed in FC 5886 EXP12S enclosures
– No HDDs allowed in same enclosure
– Maximum of one (1) 5886 per disk controller (5904, 5906, 5908) or
controller pair (5903 x 2) [Statement of Direction for IBM i support]
SSDs and other disk types (HDDs) are not allowed to mirror each other.
SSDs are not compatible with features 5900, 5901, and 5912 SAS
adapters.
SSDs and HDDs cannot be mixed within a RAID array
References
IBM - Performance Management on IBM i Resource Library
http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html
Performance Value of Solid State Drives using IBM i
http://www.ibm.com/systems/resources/ssd_ibmi.pdf
IBM Systems Lab Services and Training
http://www.ibm.com/systems/services/labservices
IBM Power Systems(i) Benchmarking and Proof-of-Concept Centers
http://www.ibm.com/systems/i/support/benchmarkcenters
Questions
IBM Systems Lab Services and Training
Practice areas: Mainframe Systems, Power Systems, System x & BladeCenter, System Storage, IT Infrastructure Optimization, Data Center Services, Training Services

Our Mission and Profile
– Support the IBM Systems Agenda and accelerate the adoption of new products and solutions
– Maximize performance of our clients’ existing IBM systems
– Deliver technical training, conferences, and other services tailored to meet client needs
– Team with IBM Service Providers to optimize the deployment of IBM solutions (GTS, GBS, SWG Lab Services and our IBM Business Partners)

Our Competitive Advantage
– Leverage relationships with the IBM development labs to build deep technical skills and exploit the expertise of our developers
– Combined expertise of Lab Services and the Training for Systems team
– Skills can be deployed worldwide to assure all client needs can be met
– Successful worldwide history: 17 years in Americas, 9 years in Europe/Middle East/Africa, 5 years in Asia Pacific

www.ibm.com/systems/services/labservices
stgls@us.ibm.com
IBM Systems Lab Services and Training Power Services
Key Offerings
– High Availability Services on Power Systems (including Advanced Copy Services for PowerHA™ on IBM i)
– Systems Director Services
– PowerCare Services
– Performance and Scalability services (including system, application, and database tuning)
– Virtualization Services for AIX® on Power Systems™
– Application and database modernization consulting (SOA implementation)
– Linux® on Power consulting, custom application development, implementation, and optimization services
– Security on Power consulting and implementation services
– System consolidation and migration services
– High Performance Computing consulting and implementation services
– SAP® on IBM i consulting
– Power Blades on BladeCenter (including VIOS on i and blades running IBM i implementation)
– Smart Analytics services (including DB2® Web Query implementation and consulting)
– Public, private, customized and self-paced virtual training
– Power Systems Technical University

Americas, WW Contacts
– Frank Kriss – kriss@us.ibm.com, 507-253-1354 – IBM i, High Availability
– Karen Anderson – kanders@us.ibm.com, 972-561-6337 – IBM i Vouchers
– Stephen Brandenburg – sbranden@us.ibm.com, 301-803-6199 – PowerVouchers, Virtualization Program, AIX
– George Henningsen – gehenni@us.ibm.com, 516-349-3530 – SWOT/SWAT, AIX
– Allen Johnston – allenrj@us.ibm.com, 704-340-9165 – PowerCare
– Dawn May – dmmay@us.ibm.com, 507-253-2121 – Power Performance and Scalability Center
– Mark Even – even@us.ibm.com, 507-253-1313 – IBM i

www.ibm.com/systems/services/labservices
stgls@us.ibm.com
Special notices
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other
countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available
in your area.
Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY 10504-1785 USA.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees
either expressed or implied.
All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results
that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions.
IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to
qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and
may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent
on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may
have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some
measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for
their specific environment.
Revised September 26, 2006
Special notices (cont.)
IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, i5/OS, i5/OS (logo), IBM Business Partner
(logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC
System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill, Cloudscape, DB2
Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, , GPFS, HACMP, HACMP/6000, HASM, IBM
Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power
Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2,
POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10, Workload
Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or
both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S.
registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in
other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml
The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are
trademarks of the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Revised April 24, 2008
Other company, product and service names may be trademarks or service marks of others.
Notes on benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html .
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX
Version 4.3, AIX 5L or AIX 6 were used. All other systems used previous versions of AIX. The SPEC CPU2006, SPEC2000, LINPACK, and Technical Computing
benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of
these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++
Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN
and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other
software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
TPC                          http://www.tpc.org
SPEC                         http://www.spec.org
LINPACK                      http://www.netlib.org/benchmark/performance.pdf
Pro/E                        http://www.proe.com
GPC                          http://www.spec.org/gpc
VolanoMark                   http://www.volano.com
STREAM                       http://www.cs.virginia.edu/stream/
SAP                          http://www.sap.com/benchmark/
Oracle Applications          http://www.oracle.com/apps_benchmark/
PeopleSoft                   To get information on PeopleSoft benchmarks, contact PeopleSoft directly
Siebel                       http://www.siebel.com/crm/performance_benchmark/index.shtm
Baan                         http://www.ssaglobal.com
Fluent                       http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers        http://www.top500.org/
Ideas International          http://www.ideasinternational.com/benchmark/bench.html
Storage Performance Council  http://www.storageperformance.org/results
Revised March 12, 2009
Notes on HPC benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html .
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX
Version 4.3 or AIX 5L were used. All other systems used previous versions of AIX. The SPEC CPU2000, LINPACK, and Technical Computing benchmarks were compiled
using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL
C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux, and
XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck &
Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM ESSL
for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
SPEC                   http://www.spec.org
LINPACK                http://www.netlib.org/benchmark/performance.pdf
Pro/E                  http://www.proe.com
GPC                    http://www.spec.org/gpc
STREAM                 http://www.cs.virginia.edu/stream/
Fluent                 http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers  http://www.top500.org/
AMBER                  http://amber.scripps.edu/
FLUENT                 http://www.fluent.com/software/fluent/fl5bench/index.htm
GAMESS                 http://www.msg.chem.iastate.edu/gamess
GAUSSIAN               http://www.gaussian.com
ANSYS                  http://www.ansys.com/services/hardware-support-db.htm
                       Click on the "Benchmarks" icon on the left hand side frame to expand. Click on "Benchmark Results in a Table" icon for benchmark results.
ABAQUS                 http://www.simulia.com/support/v68/v68_performance.php
ECLIPSE                http://www.sis.slb.com/content/software/simulation/index.asp?seg=geoquest&
MM5                    http://www.mmm.ucar.edu/mm5/
MSC.NASTRAN            http://www.mscsoftware.com/support/prod%5Fsupport/nastran/performance/v04_sngl.cfm
STAR-CD                www.cd-adapco.com/products/STAR-CD/performance/320/index/html
NAMD                   http://www.ks.uiuc.edu/Research/namd
HMMER                  http://hmmer.janelia.org/
                       http://powerdev.osuosl.org/project/hmmerAltivecGen2mod
Revised March 12, 2009
Notes on performance estimates
rPerf for AIX
rPerf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX
systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC
and SPEC benchmarks. The rPerf model is not intended to represent any specific public benchmark results and
should not be reasonably used in that way. The model simulates some of the system operations such as CPU,
cache and memory. However, the model does not simulate disk or network I/O operations.
rPerf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the
time of system announcement. Actual performance will vary based on application and configuration specifics.
The IBM eServer pSeries 640 is the baseline reference system and has a value of 1.0. Although rPerf may be
used to approximate relative IBM UNIX commercial processing performance, actual system performance may
vary and is dependent upon many factors including system hardware configuration and software design and
configuration. Note that the rPerf methodology used for the POWER6 systems is identical to that used for the
POWER5 systems. Variations in incremental system performance may be observed in commercial workloads due
to changes in the underlying system architecture.
All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM.
Buyers should consult other sources of information, including system benchmarks, and application sizing guides
to evaluate the performance of a system they are considering buying. For additional information about rPerf,
contact your local IBM office or IBM authorized reseller.
========================================================================
CPW for IBM i
Commercial Processing Workload (CPW) is a relative measure of performance of processors running the IBM i
operating system. Performance in customer environments may vary. The value is based on maximum
configurations. More performance information is available in the Performance Capabilities Reference at:
www.ibm.com/systems/i/solutions/perfmgmt/resource.html
Revised April 2, 2007