White Paper
FUJITSU Server PRIMERGY
Performance Report PRIMERGY RX2530 M1
This document contains a summary of the benchmarks executed for the FUJITSU Server
PRIMERGY RX2530 M1.
The PRIMERGY RX2530 M1 performance data are compared with the data of other
PRIMERGY models and discussed. In addition to the benchmark results, an explanation
has been included for each benchmark and for the benchmark environment.
Version 1.1, 2015-04-15
http://ts.fujitsu.com/primergy
Contents
Document history
Technical data
SPECcpu2006
SPECpower_ssj2008
Disk I/O: Performance of RAID controllers
OLTP-2
vServCon
VMmark V2
STREAM
LINPACK
Literature
Contact
Document history
Version 1.0
New:
- Technical data
- SPECpower_ssj2008: Measurement with Xeon E5-2699 v3
- Disk I/O: Performance of RAID controllers: Measurements with “LSI SW RAID on Intel C610 (Onboard SATA)”, “PRAID CP400i”, “PRAID EP400i” and “PRAID EP420i” controllers
- OLTP-2: Results for Intel® Xeon® Processor E5-2600 v3 Product Family
- vServCon: Results for Intel® Xeon® Processor E5-2600 v3 Product Family
- VMmark V2: Measurement with Xeon E5-2699 v3; “Performance with Server Power” measurement with Xeon E5-2699 v3; “Performance with Server and Storage Power” measurement with Xeon E5-2699 v3
Version 1.1
New:
- SPECcpu2006: Measurements with Intel® Xeon® Processor E5-2600 v3 Product Family
- STREAM: Measurements with Intel® Xeon® Processor E5-2600 v3 Product Family
- LINPACK: Measurements with Intel® Xeon® Processor E5-2600 v3 Product Family
Technical data
[Product photos: PRIMERGY RX2530 M1, model versions PY RX2530 M1 4x 3.5" and PY RX2530 M1 10x 2.5"]
Decimal prefixes according to the SI standard are used for measurement units in this white paper (e.g. 1 GB = 10^9 bytes). In contrast, these prefixes should be interpreted as binary prefixes (e.g. 1 GB = 2^30 bytes) for the capacities of caches and memory modules. Separate reference will be made to any further exceptions where applicable.
Model                               PRIMERGY RX2530 M1
Model versions                      PY RX2530 M1 4x 3.5"
                                    PY RX2530 M1 4x 2.5" expandable
                                    PY RX2530 M1 10x 2.5"
Form factor                         Rack server
Chipset                             Intel® C612
Number of sockets                   2
Number of processors orderable      1 or 2
Processor type                      Intel® Xeon® Processor E5-2600 v3 Product Family
Number of memory slots              24 (12 per processor)
Maximum memory configuration        768 GB
Onboard HDD controller              Controller with RAID 0, RAID 1 or RAID 10 for up to 8 SATA HDDs
PCI slots                           2 × PCI-Express 3.0 x8
                                    2 × PCI-Express 3.0 x16
Max. number of internal hard disks  PY RX2530 M1 4x 3.5": 4
                                    PY RX2530 M1 4x 2.5" expandable: 8
                                    PY RX2530 M1 10x 2.5": 10
Processors (since system release)

Processor          Cores  Threads  Cache  QPI Speed  Rated      Max. Turbo  Max. Memory  TDP
                                   [MB]   [GT/s]     Frequency  Frequency   Frequency    [Watt]
                                                     [GHz]      [GHz]       [MHz]
Xeon E5-2623 v3    4      8        10     8.00       3.00       3.50        1866         105
Xeon E5-2637 v3    4      8        15     9.60       3.50       3.70        2133         135
Xeon E5-2603 v3    6      6        15     6.40       1.60       n/a         1600         85
Xeon E5-2609 v3    6      6        15     6.40       1.90       n/a         1600         85
Xeon E5-2620 v3    6      12       15     8.00       2.40       3.20        1866         85
Xeon E5-2643 v3    6      12       20     9.60       3.40       3.70        2133         135
Xeon E5-2630L v3   8      16       20     8.00       1.80       2.90        1866         55
Xeon E5-2630 v3    8      16       20     8.00       2.40       3.20        1866         85
Xeon E5-2640 v3    8      16       20     8.00       2.60       3.40        1866         90
Xeon E5-2667 v3    8      16       20     9.60       3.20       3.60        2133         135
Xeon E5-2650 v3    10     20       25     9.60       2.30       3.00        2133         105
Xeon E5-2660 v3    10     20       25     9.60       2.60       3.30        2133         105
Xeon E5-2650L v3   12     24       30     9.60       1.80       2.50        2133         65
Xeon E5-2670 v3    12     24       30     9.60       2.30       3.10        2133         120
Xeon E5-2680 v3    12     24       30     9.60       2.50       3.30        2133         120
Xeon E5-2690 v3    12     24       30     9.60       2.60       3.50        2133         135
Xeon E5-2683 v3    14     28       35     9.60       2.00       3.00        2133         120
Xeon E5-2695 v3    14     28       35     9.60       2.30       3.30        2133         120
Xeon E5-2697 v3    14     28       35     9.60       2.60       3.60        2133         145
Xeon E5-2698 v3    16     32       40     9.60       2.30       3.60        2133         135
Xeon E5-2699 v3    18     36       45     9.60       2.30       3.60        2133         145
All the processors that can be ordered with the PRIMERGY RX2530 M1, apart from the Xeon E5-2603 v3 and Xeon E5-2609 v3, support Intel® Turbo Boost Technology 2.0. This technology allows the processor to be operated at higher frequencies than the nominal frequency. The "Max. Turbo Frequency" listed in the processor table is the theoretical frequency maximum with only one active core per processor. The maximum frequency that can actually be achieved depends on the number of active cores, the current consumption, electrical power consumption and the temperature of the processor.
As a matter of principle Intel does not guarantee that the maximum turbo frequency will be reached. This is
related to manufacturing tolerances, which result in a variance regarding the performance of various
examples of a processor model. The range of the variance covers the entire scope between the nominal
frequency and the maximum turbo frequency.
The turbo functionality can be set via BIOS option. Fujitsu generally recommends leaving the "Turbo Mode"
option set at the standard setting "Enabled", as performance is substantially increased by the higher
frequencies. However, since the higher frequencies depend on general conditions and are not always
guaranteed, it can be advantageous to disable the "Turbo Mode" option for application scenarios with
intensive use of AVX instructions and a high number of instructions per clock unit, as well as for those that
require constant performance or lower electrical power consumption.
Memory modules (since system release)

Memory module                        Capacity  Ranks  Bit width of the  Frequency  Low      Load     Registered  ECC
                                     [GB]             memory chips      [MHz]      voltage  reduced
8GB (1x8GB) 1Rx4 DDR4-2133 R ECC     8         1      4                 2133       -        -        ✓           ✓
8GB (1x8GB) 2Rx8 DDR4-2133 R ECC     8         2      8                 2133       -        -        ✓           ✓
16GB (1x16GB) 2Rx4 DDR4-2133 R ECC   16        2      4                 2133       -        -        ✓           ✓
32GB (1x32GB) 2Rx4 DDR4-2133 R ECC   32        2      4                 2133       -        -        ✓           ✓
32GB (1x32GB) 4Rx4 DDR4-2133 LR ECC  32        4      4                 2133       -        ✓        -           ✓
Power supplies (since system release)

Power supply unit              Max. number
Modular PSU 450W platinum hp   2
Modular PSU 800W platinum hp   2
Modular PSU 800W titanium hp   2
Some components may not be available in all countries or sales regions.
Detailed technical information is available in the data sheet PRIMERGY RX2530 M1.
SPECcpu2006
Benchmark description
SPECcpu2006 is a benchmark which measures the system efficiency with integer and floating-point
operations. It consists of an integer test suite (SPECint2006) containing 12 applications and a floating-point
test suite (SPECfp2006) containing 17 applications. Both test suites are extremely computing-intensive and
concentrate on the CPU and the memory. Other components, such as Disk I/O and network, are not
measured by this benchmark.
SPECcpu2006 is not tied to a special operating system. The benchmark is available as source code and is compiled before the actual measurement. The compiler version used and its optimization settings also affect the measurement result.
SPECcpu2006 contains two different performance measurement methods: the first method (SPECint2006 or SPECfp2006) determines the time which is required to process a single task. The second method (SPECint_rate2006 or SPECfp_rate2006) determines the throughput, i.e. the number of tasks that can be handled in parallel. Both methods are also divided into two measurement runs, “base” and “peak”, which differ in the use of compiler optimization. When publishing the results the base values are always used; the peak values are optional.
Benchmark               Arithmetics     Type  Compiler      Measurement  Application
                                              optimization  result
SPECint2006             integer         peak  aggressive    Speed        single-threaded
SPECint_base2006        integer         base  conservative  Speed        single-threaded
SPECint_rate2006        integer         peak  aggressive    Throughput   multi-threaded
SPECint_rate_base2006   integer         base  conservative  Throughput   multi-threaded
SPECfp2006              floating point  peak  aggressive    Speed        single-threaded
SPECfp_base2006         floating point  base  conservative  Speed        single-threaded
SPECfp_rate2006         floating point  peak  aggressive    Throughput   multi-threaded
SPECfp_rate_base2006    floating point  base  conservative  Throughput   multi-threaded
The measurement results are the geometric average of normalized ratio values which have been determined for the individual benchmarks. The geometric average - in contrast to the arithmetic average - means that there is a weighting in favour of the lower individual results. Normalized means that the measurement shows how fast the test system is in comparison to a reference system. The value “1” was defined for the SPECint_base2006, SPECint_rate_base2006, SPECfp_base2006 and SPECfp_rate_base2006 results of the reference system. For example, a SPECint_base2006 value of 2 means that the measuring system has handled this benchmark twice as fast as the reference system. A SPECfp_rate_base2006 value of 4 means that the measuring system has handled this benchmark some 4/[# base copies] times faster than the reference system. “# base copies” specifies how many parallel instances of the benchmark have been executed.
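As an illustration of this scoring scheme, the following minimal sketch (not part of the SPEC tool set; the ratio values are made up for illustration) shows how a suite result is formed as the geometric mean of the normalized per-benchmark ratios:

    from math import prod

    def suite_metric(normalized_ratios):
        # Geometric mean: the n-th root of the product of the n normalized ratios.
        # Compared with the arithmetic mean it weights lower individual results more strongly.
        return prod(normalized_ratios) ** (1.0 / len(normalized_ratios))

    # Hypothetical normalized ratios (test system vs. reference system)
    # for the 12 applications of the integer suite:
    ratios = [31.2, 28.4, 35.0, 30.1, 27.8, 33.3, 29.5, 32.0, 30.7, 28.9, 34.1, 31.6]
    print(round(suite_metric(ratios), 1))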
Not every SPECcpu2006 measurement is submitted by us for publication at SPEC. This is why the SPEC
web pages do not have every result. As we archive the log files for all measurements, we can prove the
correct implementation of the measurements at any time.
Benchmark environment
System Under Test (SUT)

Hardware
Model       PRIMERGY RX2530 M1
Processor   Intel® Xeon® Processor E5-2600 v3 Product Family
Memory      16 × 16GB (1x16GB) 2Rx4 DDR4-2133 R ECC
Software
Operating system   SPECint_base2006, SPECint2006:
                     Xeon E5-2630 v3, E5-2650 v3: Red Hat Enterprise Linux Server release 7.0
                     All others: Red Hat Enterprise Linux Server release 6.5
                   SPECint_rate_base2006, SPECint_rate2006:
                     Xeon E5-2630 v3: Red Hat Enterprise Linux Server release 7.0
                     All others: Red Hat Enterprise Linux Server release 6.5
                   SPECfp_base2006, SPECfp2006, SPECfp_rate_base2006, SPECfp_rate2006:
                     Red Hat Enterprise Linux Server release 7.0
Operating system   echo always > /sys/kernel/mm/redhat_transparent_hugepage/enabled
settings
Compiler           SPECint_base2006, SPECint2006:
                     Xeon E5-2630 v3, E5-2650 v3: C/C++: Version 15.0.0.090 of Intel C++ Studio XE for Linux
                     All others: C/C++: Version 14.0.0.080 of Intel C++ Studio XE for Linux
                   SPECint_rate_base2006, SPECint_rate2006:
                     Xeon E5-2630 v3: C/C++: Version 15.0.0.090 of Intel C++ Studio XE for Linux
                     All others: C/C++: Version 14.0.0.080 of Intel C++ Studio XE for Linux
                   SPECfp_base2006, SPECfp2006, SPECfp_rate_base2006, SPECfp_rate2006:
                     C/C++: Version 15.0.0.090 of Intel C++ Studio XE for Linux
                     Fortran: Version 15.0.0.090 of Intel Fortran Studio XE for Linux
Some components may not be available in all countries or sales regions.
Benchmark results
Processor          #Processors  SPECint_base2006  SPECint2006  SPECint_rate_base2006  SPECint_rate2006
Xeon E5-2623 v3    1                                           209 (est.)             216 (est.)
                   2            56.2              59.1         410                    424
Xeon E5-2637 v3    1                                           233 (est.)             240 (est.)
                   2            61.3              64.5         457                    471
Xeon E5-2603 v3    1                                           135 (est.)             140 (est.)
                   2            29.3              30.5         265                    275
Xeon E5-2609 v3    1                                           156 (est.)             162 (est.)
                   2            33.9              35.3         306                    317
Xeon E5-2620 v3    1                                           261                    270
                   2            53.8              57.0         508                    524
Xeon E5-2643 v3    1                                           340 (est.)             352 (est.)
                   2            63.6              67.0         667                    690
Xeon E5-2630L v3   1                                           288 (est.)             298 (est.)
                   2            49.9              53.3         564                    585
Xeon E5-2630 v3    1                                           339 (est.)             353 (est.)
                   2            56.1              58.6         664                    692
Xeon E5-2640 v3    1                                           359 (est.)             371 (est.)
                   2            58.4              62.2         703                    727
Xeon E5-2667 v3    1                                           414 (est.)             429 (est.)
                   2            63.3              66.8         812                    840
Xeon E5-2650 v3    1                                           420 (est.)             435 (est.)
                   2            54.6              56.6         823                    852
Xeon E5-2660 v3    1                                           453 (est.)             468 (est.)
                   2            57.9              61.4         888                    918
Xeon E5-2650L v3   1                                           403 (est.)             416 (est.)
                   2            46.3              48.7         790                    816
Xeon E5-2670 v3    1                                           494 (est.)             510 (est.)
                   2            56.4              59.3         968                    1000
Xeon E5-2680 v3    1                                           531 (est.)             546 (est.)
                   2            60.0              63.1         1040                   1070
Xeon E5-2690 v3    1                                           556 (est.)             571 (est.)
                   2            62.0              65.4         1090                   1120
Xeon E5-2683 v3    1                                           546 (est.)             561 (est.)
                   2            53.7              56.8         1070                   1100
Xeon E5-2695 v3    1                                           577 (est.)             597 (est.)
                   2            58.7              61.9         1130                   1170
Xeon E5-2697 v3    1                                           622 (est.)             643 (est.)
                   2            63.2              66.9         1220                   1260
Xeon E5-2698 v3    1                                           643 (est.)             663 (est.)
                   2            62.6              66.3         1260                   1300
Xeon E5-2699 v3    1                                           704 (est.)             724 (est.)
                   2            63.3              66.8         1380                   1420
In terms of processors the benchmark result depends primarily on the size of the processor cache, the support for Hyper-Threading, the number of processor cores and on the processor frequency. In the case of processors with Turbo mode the number of cores, which are loaded by the benchmark, determines the maximum processor frequency that can be achieved. In the case of single-threaded benchmarks, which largely load one core only, the maximum processor frequency that can be achieved is higher than with multi-threaded benchmarks.
The results marked (est.) are estimates.
Processor          #Processors  SPECfp_base2006  SPECfp2006  SPECfp_rate_base2006  SPECfp_rate2006
Xeon E5-2623 v3    1                                         190 (est.)            196 (est.)
                   2                                         379                   389
Xeon E5-2637 v3    1                                         214 (est.)            222 (est.)
                   2                                         425                   439
Xeon E5-2603 v3    1                                         142 (est.)            145 (est.)
                   2            55.4             57.2        282                   287
Xeon E5-2609 v3    1                                         163 (est.)            167 (est.)
                   2            61.9             64.1        324                   330
Xeon E5-2620 v3    1                                         235 (est.)            242 (est.)
                   2            95.2                         468                   479
Xeon E5-2643 v3    1                                         289 (est.)            298 (est.)
                   2                                         575                   591
Xeon E5-2630L v3   1                                         254 (est.)            262 (est.)
                   2                                         506                   519
Xeon E5-2630 v3    1                                         284 (est.)            293 (est.)
                   2            102              107         566                   581
Xeon E5-2640 v3    1                                         294 (est.)            303 (est.)
                   2            106              112         586                   600
Xeon E5-2667 v3    1                                         330 (est.)            340 (est.)
                   2            116              121         656                   674
Xeon E5-2650 v3    1                                         342 (est.)            354 (est.)
                   2                                         681                   700
Xeon E5-2660 v3    1                                         357 (est.)            369 (est.)
                   2            110              114         711                   730
Xeon E5-2650L v3   1                                         323 (est.)            333 (est.)
                   2                                         642                   659
Xeon E5-2670 v3    1                                         381 (est.)            394 (est.)
                   2            105              110         759                   780
Xeon E5-2680 v3    1                                         391 (est.)            405 (est.)
                   2            110              115         779                   802
Xeon E5-2690 v3    1                                         403 (est.)            416 (est.)
                   2            114              118         801                   824
Xeon E5-2683 v3    1                                         405 (est.)            418 (est.)
                   2            104                          805                   827
Xeon E5-2695 v3    1                                         412 (est.)            426 (est.)
                   2            105              110         820                   844
Xeon E5-2697 v3    1                                         429 (est.)            444 (est.)
                   2            111              117         853                   879
Xeon E5-2698 v3    1                                         437 (est.)            452 (est.)
                   2            107              113         869                   895
Xeon E5-2699 v3    1                                         461 (est.)            478 (est.)
                   2            109              116         918                   946
The following four diagrams illustrate the throughput of the PRIMERGY RX2530 M1 in comparison to its
predecessor PRIMERGY RX200 S8, in their respective most performant configuration.
[Diagram] SPECcpu2006: integer performance, PRIMERGY RX2530 M1 vs. PRIMERGY RX200 S8
PRIMERGY RX200 S8 (2 x Xeon E5-2667 v2):  SPECint2006 = 67.5, SPECint_base2006 = 62.7
PRIMERGY RX2530 M1 (2 x Xeon E5-2643 v3): SPECint2006 = 67.0, SPECint_base2006 = 63.6
[Diagram] SPECcpu2006: integer performance, PRIMERGY RX2530 M1 vs. PRIMERGY RX200 S8
PRIMERGY RX200 S8 (2 x Xeon E5-2697 v2):  SPECint_rate2006 = 960, SPECint_rate_base2006 = 929
PRIMERGY RX2530 M1 (2 x Xeon E5-2699 v3): SPECint_rate2006 = 1420, SPECint_rate_base2006 = 1380
[Diagram] SPECcpu2006: floating-point performance, PRIMERGY RX2530 M1 vs. PRIMERGY RX200 S8
PRIMERGY RX200 S8 (2 x Xeon E5-2667 v2):  SPECfp2006 = 113, SPECfp_base2006 = 108
PRIMERGY RX2530 M1 (2 x Xeon E5-2667 v3): SPECfp2006 = 121, SPECfp_base2006 = 116
[Diagram] SPECcpu2006: floating-point performance, PRIMERGY RX2530 M1 vs. PRIMERGY RX200 S8
PRIMERGY RX200 S8 (2 x Xeon E5-2697 v2):  SPECfp_rate2006 = 696, SPECfp_rate_base2006 = 678
PRIMERGY RX2530 M1 (2 x Xeon E5-2699 v3): SPECfp_rate2006 = 946, SPECfp_rate_base2006 = 918
The diagram below reflects how the performance of the PRIMERGY RX2530 M1 scales from one to two
processors when using the Xeon E5-2620 v3.
[Diagram] SPECcpu2006: integer performance, PRIMERGY RX2530 M1 (2 sockets vs. 1 socket)
1 x Xeon E5-2620 v3: SPECint_rate2006 = 270, SPECint_rate_base2006 = 261
2 x Xeon E5-2620 v3: SPECint_rate2006 = 524, SPECint_rate_base2006 = 508
SPECpower_ssj2008
Benchmark description
SPECpower_ssj2008 is the first industry-standard SPEC benchmark that evaluates the power and
performance characteristics of a server. With SPECpower_ssj2008 SPEC has defined standards for server
power measurements in the same way they have done for performance.
The benchmark workload represents typical server-side Java business applications. The workload is
scalable, multi-threaded, portable across a wide range of platforms and easy to run. The benchmark tests
CPUs, caches, the memory hierarchy and scalability of symmetric multiprocessor systems (SMPs), as well
as the implementation of Java Virtual Machine (JVM), Just In Time (JIT) compilers, garbage collection,
threads and some aspects of the operating system.
SPECpower_ssj2008 reports power consumption for servers at different performance levels (from 100% to “active idle” in 10% segments) over a set period of time. The graduated workload recognizes the fact that processing loads and power consumption on servers vary substantially over the course of days or weeks. To compute a power-performance metric across all levels, measured transaction throughputs for each segment are added together and then divided by the sum of the average power consumed for each segment. The result is a figure of merit called “overall ssj_ops/watt”. This ratio provides information about the energy efficiency of the measured server. The defined measurement standard enables customers to compare it with other configurations and servers measured with SPECpower_ssj2008. The diagram shows a typical graph of a SPECpower_ssj2008 result.
The benchmark runs on a wide variety of operating systems and hardware architectures and does not require extensive client or storage infrastructure. The minimum equipment for SPEC-compliant testing is two networked computers, plus a power analyzer and a temperature sensor. One computer is the System Under Test (SUT) which runs one of the supported operating systems and the JVM. The JVM provides the environment required to run the SPECpower_ssj2008 workload which is implemented in Java. The other computer is a “Control & Collection System” (CCS) which controls the operation of the benchmark and captures the power, performance and temperature readings for reporting. The diagram provides an overview of the basic structure of the benchmark configuration and the various components.
Benchmark environment
System Under Test (SUT)

Hardware
Model               PRIMERGY RX2530 M1
Model version       PY RX2530 M1 4x 3.5"
Processor           Xeon E5-2699 v3
Memory              8 × 8GB (1x8GB) 2Rx8 DDR4-2133 R ECC
Network interface   1 × PLAN AP 1x1Gbit Cu Intel I210-T1 LP
Disk subsystem      Onboard HDD controller
                    1 × DOM SATA 6G 64GB Main N H-P
Power supply unit   1 × Modular PSU 800W titanium hp

Software
BIOS                R1.11.0
BIOS settings       Hardware Prefetcher = Disabled
                    Adjacent Cache Line Prefetch = Disabled
                    DCU Streamer Prefetcher = Disabled
                    Onboard USB Controllers = Disabled
                    Power Technology = Custom
                    QPI Link Frequency Select = 6.4 GT/s
                    Turbo Mode = Disabled
                    Intel Virtualization Technology = Disabled
                    ASPM Support = L1 Only
                    DMI Control = Gen1
                    COD Enable = Enabled
                    Early Snoop = Disabled
Firmware            7.69F
Operating system    Microsoft Windows Server 2008 R2 Enterprise SP1
Operating system    Using the local security settings console, “lock pages in memory” was enabled for the user
settings            running the benchmark.
                    Power Management: Enabled (“Fujitsu Enhanced Power Settings” power plan)
                    Set “Turn off hard disk after = 1 Minute” in OS.
                    Benchmark was started via Windows Remote Desktop Connection.
                    Microsoft Hotfix KB2510206 has been installed due to known problems of the group
                    assignment algorithm which does not create a balanced group assignment. For more
                    information see: http://support.microsoft.com/kb/2510206
JVM                 IBM J9 VM (build 2.6, JRE 1.7.0 Windows Server 2008 R2 amd64-64 20120322_106209
                    (JIT enabled, AOT enabled))
JVM settings        start /NODE [0,1,2,3] /AFFINITY [0x3,0xC,0x30,0xC0,0x300,0xC00,0x3000,0xC000,0x30000]
                    -Xmn825m -Xms975m -Xmx975m -Xaggressive -Xcompressedrefs -Xgcpolicy:gencon
                    -XlockReservation -Xnoloa -XtlhPrefetch -Xlp -Xconcurrentlevel0 -Xthr:minimizeusercpu
                    -Xgcthreads2 (-Xgcthreads1 for JVM5 and JVM23)
Other software      IBM WebSphere Application Server V8.5.0.0, Microsoft Hotfix for Windows (KB2510206)
Some components may not be available in all countries or sales regions.
Benchmark results
The PRIMERGY RX2530 M1 achieved the following result:
SPECpower_ssj2008 = 9,811 overall ssj_ops/watt
The adjoining diagram shows the result of the configuration described above. The red horizontal bars show the performance to power ratio in ssj_ops/watt (upper x-axis) for each target load level tagged on the y-axis of the diagram. The blue line shows the run of the curve for the average power consumption (bottom x-axis) at each target load level marked with a small rhomb. The black vertical line shows the benchmark result of 9,811 overall ssj_ops/watt for the PRIMERGY RX2530 M1. This is the quotient of the sum of the transaction throughputs for each load level and the sum of the average power consumed for each measurement interval.
The following table shows the benchmark results for the throughput in ssj_ops, the power consumption in
watts and the resulting energy efficiency for each load level.
Target Load   Performance   Power               Energy Efficiency
              ssj_ops       Average Power (W)   ssj_ops/watt
100%          3,231,698     289                 11,191
90%           2,906,711     260                 11,173
80%           2,584,271     230                 11,236
70%           2,263,620     199                 11,377
60%           1,937,836     172                 11,283
50%           1,618,047     153                 10,544
40%           1,292,232     138                 9,367
30%           970,277       123                 7,896
20%           646,383       108                 5,982
10%           322,218       92.3                3,492
Active Idle   0             47.3                0
                            ∑ssj_ops / ∑power = 9,811
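To illustrate how the overall metric follows from the table above, the following minimal sketch (not part of the SPEC tool set) sums the per-level throughputs and divides them by the sum of the average power values; small deviations come from the rounding of the published per-level figures:

    levels = [            # (ssj_ops, average power in watts) per target load level
        (3_231_698, 289), (2_906_711, 260), (2_584_271, 230), (2_263_620, 199),
        (1_937_836, 172), (1_618_047, 153), (1_292_232, 138), (970_277, 123),
        (646_383, 108), (322_218, 92.3),
        (0, 47.3),        # active idle
    ]

    overall = sum(ops for ops, _ in levels) / sum(watt for _, watt in levels)
    print(f"overall ssj_ops/watt = {overall:.0f}")   # prints approximately 9811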
The PRIMERGY RX2530 M1 achieved a new class record with this result, surpassing the best result of the competition by 23% (as of March 03, 2015). The PRIMERGY RX2530 M1 thus proves itself to be the most energy-efficient 1U rack server in the world. The latest SPECpower_ssj2008 benchmark results are available at http://www.spec.org/power_ssj2008/results.
The comparison with the competition makes the advantage of the PRIMERGY RX2530 M1 in the field of energy efficiency evident. With 23% more energy efficiency than the best result of the competition in the class of 1U rack servers, the Supermicro SYS-1028R-WC1RT server, the PRIMERGY RX2530 M1 is setting new standards.

[Diagram] SPECpower_ssj2008: PRIMERGY RX2530 M1 vs. competition (overall ssj_ops/watt)
Supermicro SYS-1028R-WC1RT:         8,004
Fujitsu Server PRIMERGY RX2530 M1:  9,811
The following diagram shows for each load level the power consumption (on the right y-axis) and the
throughput (on the left y-axis) of the PRIMERGY RX2530 M1 compared to the predecessor PRIMERGY
RX200 S8.
[Diagram] SPECpower_ssj2008: PRIMERGY RX2530 M1 vs. PRIMERGY RX200 S8 (throughput and power consumption per load level)
[Diagram] SPECpower_ssj2008 overall ssj_ops/watt: PRIMERGY RX2530 M1 vs. PRIMERGY RX200 S8
Fujitsu Server PRIMERGY RX200 S8:   7,670 overall ssj_ops/watt
Fujitsu Server PRIMERGY RX2530 M1:  9,811 overall ssj_ops/watt

Thanks to the new Haswell processors the PRIMERGY RX2530 M1 has a substantially higher throughput than the PRIMERGY RX200 S8. Despite the higher power consumption, this results in an overall increase in energy efficiency of 28% for the PRIMERGY RX2530 M1.
Disk I/O: Performance of RAID controllers
Benchmark description
Performance measurements of disk subsystems for PRIMERGY servers are used to assess their
performance and enable a comparison of the different storage connections for PRIMERGY servers. As
standard, these performance measurements are carried out with a defined measurement method, which
models the accesses of real application scenarios on the basis of specifications.
The essential specifications are:
- Share of random accesses / sequential accesses
- Share of read / write access types
- Block size (kB)
- Number of parallel accesses (# of outstanding I/Os)
A given value combination of these specifications is known as “load profile”. The following five standard load
profiles can be allocated to typical application scenarios:
Standard load  Access      Type of access   Block size  Application
profile                    read    write    [kB]
File copy      random      50%     50%      64          Copying of files
File server    random      67%     33%      64          File server
Database       random      67%     33%      8           Database (data transfer),
                                                        Mail server
Streaming      sequential  100%    0%       64          Database (log file),
                                                        Data backup;
                                                        Video streaming (partial)
Restore        sequential  0%      100%     64          Restoring of files
In order to model applications that access in parallel with a different load intensity, the “# of Outstanding I/Os” is increased, starting with 1, 3 and 8 and going up to 512 (from 8 onwards in powers of two).
The measurements of this document are based on these standard load profiles.
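As an illustration, the load intensity series for the “# of Outstanding I/Os” can be generated as follows (a minimal sketch, not part of the measurement tooling):

    # 1 and 3, then powers of two from 8 up to 512
    outstanding_ios = [1, 3] + [2 ** n for n in range(3, 10)]
    print(outstanding_ios)   # [1, 3, 8, 16, 32, 64, 128, 256, 512]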
The main results of a measurement are:
- Throughput [MB/s]:    throughput in megabytes per second
- Transactions [IO/s]:  transaction rate in I/O operations per second
- Latency [ms]:         average response time in ms
The data throughput has established itself as the normal measurement variable for sequential load profiles, whereas the measurement variable "transaction rate" is mostly used for random load profiles with their small block sizes. Data throughput and transaction rate are directly proportional to each other and can be converted into each other according to the formulas

Data throughput [MB/s]   = Transaction rate [IO/s] × Block size [MB]
Transaction rate [IO/s]  = Data throughput [MB/s] / Block size [MB]
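A minimal sketch of this conversion (the example value is taken from the results later in this section; block sizes are given in kB and converted to MB on the binary basis stated below):

    def throughput_mb_s(transactions_io_s: float, block_size_kb: float) -> float:
        # Data throughput [MB/s] = transaction rate [IO/s] x block size [MB], with 1 MB = 1024 kB
        return transactions_io_s * (block_size_kb / 1024.0)

    def transactions_io_s(throughput: float, block_size_kb: float) -> float:
        # Transaction rate [IO/s] = data throughput [MB/s] / block size [MB]
        return throughput / (block_size_kb / 1024.0)

    # 137616 IO/s with 8 kB blocks corresponds to roughly 1075 MB/s
    print(round(throughput_mb_s(137616, 8)))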
This section specifies capacities of storage media on a basis of 10 (1 TB = 10^12 bytes) while all other capacities, file sizes, block sizes and throughputs are specified on a basis of 2 (1 MB/s = 2^20 bytes/s).
All the details of the measurement method and the basics of disk I/O performance are described in the white
paper “Basics of Disk I/O Performance”.
Benchmark environment
All the measurement results discussed in this chapter were determined using the hardware and software
components listed below:
System Under Test (SUT)
Hardware
Controller
1 × “LSI SW RAID on Intel C610 (Onboard SATA)”
1 × “PRAID CP400i”
1 × “PRAID EP400i”
1 × “PRAID EP420i”
Drive
4 × 3.5" SATA HDD Seagate ST3000NM0033
4 × 2.5" SATA SSD Intel SSDSC2BA400G3C
10 × 2.5" SAS SSD Toshiba PX02SMF040
10 × 2.5" SAS HDD HGST HUC156045CSS204
Software
BIOS settings
Intel Virtualization Technology = Disabled
VT-d = Disabled
Energy Performance = Performance
Utilization Profile = Unbalanced
CPU C6 Report = Disabled
Operating system
Microsoft Windows Server 2012 Standard
Operating system
settings
Choose or customize a power plan: High performance
For the processes that create disk I/Os: set the AFFINITY to the CPU node to which the
PCIe slot of the RAID controller is connected
Administration
software
ServerView RAID Manager 5.7.2
Initialization of RAID
arrays
RAID arrays are initialized before the measurement with an elementary block size of 64 kB
(“stripe size”)
File system
NTFS
Measuring tool
Iometer 2006.07.27
Measurement data
Measurement files of 32 GB with 1 – 8 hard disks; 64 GB with 9 – 16 hard disks;
128 GB with 17 or more hard disks
Some components may not be available in all countries / sales regions.
Benchmark results
The results presented here are designed to help you choose the right solution from the various configuration
options of the PRIMERGY RX2530 M1 in the light of disk-I/O performance. Various combinations of RAID
controllers and storage media will be analyzed below. Information on the selection of storage media
themselves is to be found in the section “Disk I/O: Performance of storage media”.
Hard disks
The hard disks are the first essential component. If there is a reference below to “hard disks”, this is meant
as the generic term for HDDs (“hard disk drives”, in other words conventional hard disks) and SSDs (“solid
state drives”, i.e. non-volatile electronic storage media).
Mixed drive configurations of SAS and SATA hard disks in one system are permitted, unless they are
excluded in the configurator for special hard disk types.
More hard disks per system are possible as a result of using 2.5" hard disks instead of 3.5" hard disks. Consequently, the load that each individual hard disk has to handle decreases and the maximum overall performance of the system increases.
More detailed performance statements about hard disk types are available in the section “Disk I/O:
Performance of storage media” in this performance report.
Model versions
The maximum number of hard disks in the system depends on the system configuration. The following table
lists the essential cases.
Form factor  Interface          Connection type  Number of PCIe  Maximum number
                                                 controllers     of hard disks
2.5", 3.5"   SATA 6G            direct           0               4
3.5"         SATA 6G, SAS 12G   direct           1               4
2.5"         SATA 6G, SAS 12G   direct           1               8
2.5"         SATA 6G, SAS 12G   Expander         1               10
RAID controller
In addition to the hard disks the RAID controller is the second performance-determining key component. In
the case of these controllers the “modular RAID” concept of the PRIMERGY servers offers a plethora of
options to meet the various requirements of a wide range of different application scenarios.
The following table summarizes the most important features of the available RAID controllers of the
PRIMERGY RX2530 M1. A short alias is specified here for each controller, which is used in the subsequent
list of the performance values.
Controller name       Alias          Cache  Supported   In the     Max. # disks     RAID levels      BBU/
                                            interfaces  system     per controller                    FBU
LSI SW RAID on Intel  Onboard C610   -      SATA 6G     -          4 × 2.5"         0, 1, 10         -/-
C610 (Onboard SATA)                                                4 × 3.5"
PRAID CP400i          PRAID CP400i   -      SATA 6G     PCIe 3.0   8 × 2.5"         0, 1, 1E, 5,     -/-
                                            SAS 12G     x8         4 × 3.5"         10, 50
PRAID EP400i          PRAID EP400i   1 GB   SATA 6G     PCIe 3.0   10 × 2.5"        0, 1, 1E, 5, 6,  -/✓
                                            SAS 12G     x8         4 × 3.5"         10, 50, 60
PRAID EP420i          PRAID EP420i   2 GB   SATA 6G     PCIe 3.0   10 × 2.5"        0, 1, 1E, 5, 6,  -/✓
                                            SAS 12G     x8         4 × 3.5"         10, 50, 60
The onboard RAID controller is implemented in the chipset Intel C610 on the system board of the server and
uses the CPU of the server for the RAID functionality. This controller is a simple solution that does not
require a PCIe slot.
System-specific interfaces
The interfaces of a controller in CPU direction (PCIe or, in the case of onboard controllers, "Direct Media Interface", DMI for short) and in the direction of the hard disks (SAS or SATA) each have specific limits for data throughput. These limits are listed in the following table. The minimum of these two values is a definite limit, which cannot be exceeded; in all the configurations listed it is the throughput limit of the CPU-side interface.
Controller     Effective in the configuration
alias          # Disk-side     Limit for        # CPU-side     Limit for        Connection
               data channels   throughput of    data channels  throughput of    via
                               disk interface                  CPU-side         expander
                                                               interface
Onboard C610   4 × SATA 6G     2060 MB/s        4 × DMI 2.0    1716 MB/s        -
PRAID CP400i   8 × SAS 12G     8240 MB/s        8 × PCIe 3.0   6761 MB/s        -/✓
PRAID EP400i   8 × SAS 12G     8240 MB/s        8 × PCIe 3.0   6761 MB/s        -/✓
PRAID EP420i   8 × SAS 12G     8240 MB/s        8 × PCIe 3.0   6761 MB/s        -/✓
More details about the RAID controllers of the PRIMERGY systems are available in the white paper “RAID
Controller Performance”.
Settings
In most cases, the cache of HDDs has a great influence on disk-I/O performance. It is frequently regarded as a security risk in case of power failure and is thus switched off. On the other hand, it was integrated by the hard disk manufacturers for the good reason of increasing write performance. For performance reasons it is therefore advisable to enable the hard disk cache. To prevent data loss in case of power failure it is recommended to equip the system with a UPS.
In the case of controllers with a cache there are several parameters that can be set. The optimal settings can
depend on the RAID level, the application scenario and the type of data medium. In the case of RAID levels
5 and 6 in particular (and the more complex RAID level combinations 50 and 60) it is obligatory to enable the
controller cache for application scenarios with write share. If the controller cache is enabled, the data
temporarily stored in the cache should be safeguarded against loss in case of power failure. Suitable
accessories are available for this purpose (e.g. a BBU or FBU).
For the purpose of easy and reliable handling of the settings for RAID controllers and hard disks it is advisable to use the RAID manager software "ServerView RAID" that is supplied for PRIMERGY servers. All the cache settings for controllers and hard disks can usually be made en bloc – specifically for the application – by using the pre-defined modes "Performance" or "Data Protection". The "Performance" mode ensures the best possible performance settings for the majority of application scenarios.
More information about the setting options of the controller cache is available in the white paper “RAID
Controller Performance”.
Performance values
In general, disk-I/O performance of a RAID array depends on the type and number of hard disks, on the
RAID level and on the RAID controller. If the limits of the system-specific interfaces are not exceeded, the
statements on disk-I/O performance are therefore valid for all PRIMERGY systems. This is why all the
performance statements of the document “RAID Controller Performance” also apply for the PRIMERGY
RX2530 M1 if the configurations measured there are also supported by this system.
The performance values of the PRIMERGY RX2530 M1 are listed in table form below, specifically for
different RAID levels, access types and block sizes. Substantially different configuration versions are dealt
with separately. The established measurement variables, as already mentioned in the subsection
Benchmark description, are used here. Thus, transaction rate is specified for random accesses and data
throughput for sequential accesses. To avoid any confusion among the measurement units the tables have
been separated for the two access types.
The table cells contain the maximum achievable values. This has three implications: first, hard disks with optimal performance were used (the components used are described in more detail in the subsection Benchmark environment). Second, the cache settings of the controllers and hard disks, which are optimal for the respective access scenario and RAID level, are used as a basis. And third, each value is the maximum value for the entire load intensity range (# of outstanding I/Os).
In order to also visualize the numerical values each table cell is highlighted with a horizontal bar, the length
of which is proportional to the numerical value in the table cell. All bars shown in the same scale of length
have the same color. In other words, a visual comparison only makes sense for table cells with the same
colored bars.
Since the horizontal bars in the table cells depict the maximum achievable performance values, they are
shown by the color getting lighter as you move from left to right. The light shade of color at the right end of
the bar tells you that the value is a maximum value and can only be achieved under optimal prerequisites.
The darker the shade becomes as you move to the left, the more frequently it will be possible to achieve the
corresponding value in practice.
2.5" - Random accesses (maximum performance values in IO/s):
Onboard C610 SSDSC2BA400G3C SATA SSD
HUC156045CSS204 SAS HDD
PRAID CP400i
PX02SMF040 SAS SSD
SSDs random
64 kB blocks
67% read
[IO/s]
SSDs random
8 kB blocks
67% read
[IO/s]
HDDs random
64 kB blocks
67% read
[IO/s]
RAID level
#Disks
Hard disk
type
RAID
Controller
Configuration version
HDDs random
8 kB blocks
67% read
[IO/s]
PRIMERGY RX2530 M1
Model version PY RX2530 M1 4x 2.5' expandable
Model version PY RX2530 M1 10x 2.5'
2
1 N/A
N/A
47337
7870
4
4
0 N/A
10 N/A
N/A
N/A
78887
63426
14951
12256
2
1
1290
1112
75925
12445
8
8
8
10
0
5
3948
5490
2827
2524
3466
1920
100776
137616
29911
55843
77081
19148
2
1
1394
1122
78733
12318
PRAID EP400i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
8
8
8
10
0
5
4113
5558
3292
2610
3504
2176
113462
132049
54614
58778
81445
23046
PRAID EP400i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
10
10
10
10
0
5
4902
6398
3470
3145
4298
2570
112511
133706
54063
47547
94661
22576
2
1
1495
1212
80178
12460
PRAID EP420i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
8
8
8
10
0
5
4212
5452
3194
2804
3506
2010
105915
123219
54214
58569
79893
22894
PRAID EP420i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
10
10
10
10
0
5
4876
6382
3622
3328
4348
2439
112459
133049
54582
48085
94396
22812
2.5" - Sequential accesses (maximum performance values in MB/s):
Onboard C610 SSDSC2BA400G3C SATA SSD
HUC156045CSS204 SAS HDD
PRAID CP400i
PX02SMF040 SAS SSD
SSDs sequential
64 kB blocks
100% write
[MB/s]
SSDs sequential
64 kB blocks
100% read
[MB/s]
HDDs sequential
64 kB blocks
100% write
[MB/s]
RAID level
#Disks
Hard disk
type
RAID
Controller
Configuration version
HDDs sequential
64 kB blocks
100% read
[MB/s]
PRIMERGY RX2530 M1
Model version PY RX2530 M1 4x 2.5' expandable
Model version PY RX2530 M1 10x 2.5'
2
1 N/A
N/A
726
443
4
4
0 N/A
10 N/A
N/A
N/A
1264
1027
1190
605
2
1
394
235
1603
420
8
8
8
10
0
5
1006
1816
1577
913
1820
1583
5918
5838
5844
1652
3295
1868
2
1
411
235
1596
420
PRAID EP400i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
8
8
8
10
0
5
1001
1836
1600
926
1808
1591
5873
5818
5790
1653
3295
2651
PRAID EP400i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
10
10
10
10
0
5
1251
2295
2046
1149
2256
1974
5909
5889
5899
2035
4051
2743
2
1
440
274
1595
421
PRAID EP420i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
8
8
8
10
0
5
1027
1919
1636
958
1851
1605
5888
5848
5847
1650
3281
2611
PRAID EP420i
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
10
10
10
10
0
5
1269
2334
2033
1151
2325
2070
5905
5899
5900
2034
4050
2684
3.5" - Random accesses (maximum performance values in IO/s):
Onboard C610
PRAID CP400i
PRAID EP400i
PRAID EP420i
ST3000NM0033 SATA HDD
SSDSC2BA400G3C SATA SSD
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
SSDs random
64 kB blocks
67% read
[IO/s]
SSDs random
8 kB blocks
67% read
[IO/s]
HDDs random
64 kB blocks
67% read
[IO/s]
RAID level
#Disks
Hard disk
type
RAID
Controller
Configuration version
HDDs random
8 kB blocks
67% read
[IO/s]
PRIMERGY RX2530 M1
Model version PY RX2530 M1 4x 3.5'
2
1
487
435
47337
7870
4
4
0
10
1081
813
609
464
78887
63426
14951
12256
2
1
1290
1112
75925
12445
4
4
4
10
0
5
2216
2634
1578
1251
1526
953
101722
131832
28395
21815
41626
16470
2
1
1394
1122
78733
12318
HUC156045CSS204 SAS HDD
4
10
2045
1288
112215
22104
PX02SMF040 SAS SSD
4
4
0
5
2681
1516
1770
971
128779
36954
41399
13079
2
1
1495
1212
80178
12460
HUC156045CSS204 SAS HDD
4
10
2133
1397
111969
21776
PX02SMF040 SAS SSD
4
4
0
5
2697
2518
1747
1157
128792
36908
42029
13835
3.5" - Sequential accesses (maximum performance values in MB/s):
Onboard C610
PRAID CP400i
PRAID EP400i
PRAID EP420i
ST3000NM0033 SATA HDD
SSDSC2BA400G3C SATA SSD
HUC156045CSS204 SAS HDD
PX02SMF040 SAS SSD
SSDs sequential
64 kB blocks
100% write
[MB/s]
SSDs sequential
64 kB blocks
100% read
[MB/s]
HDDs sequential
64 kB blocks
100% write
[MB/s]
RAID level
#Disks
Hard disk
type
RAID
Controller
Configuration version
HDDs sequential
64 kB blocks
100% read
[MB/s]
PRIMERGY RX2530 M1
Model version PY RX2530 M1 4x 3.5'
2
1
178
173
726
443
4
4
0
10
667
350
672
336
1264
1027
1190
605
2
1
394
235
1603
420
4
4
4
10
0
5
537
913
682
459
914
686
3239
3224
3206
822
1649
1162
2
1
411
235
1596
420
HUC156045CSS204 SAS HDD
4
10
549
472
3270
831
PX02SMF040 SAS SSD
4
4
0
5
946
698
938
673
3274
3189
1660
1238
421
2
1
440
274
1595
HUC156045CSS204 SAS HDD
4
10
576
480
3249
837
PX02SMF040 SAS SSD
4
4
0
5
963
726
958
677
3233
3197
1670
1251
Conclusion
At full configuration with powerful hard disks the PRIMERGY RX2530 M1 achieves a throughput of up to 5918 MB/s for sequential load profiles and a transaction rate of up to 137616 IO/s for typical, random application scenarios.
For the best possible performance we recommend one of the plug-in PCIe controllers. For operating SSDs in the maximum performance range the PRAID CP400i is already suitable for the simpler RAID levels 0, 1 and 10; for RAID 5 a controller with cache is to be preferred.
With HDDs, the controller cache has performance advantages for all RAID levels in the case of random load profiles with a significant write share.
OLTP-2
Benchmark description
OLTP stands for Online Transaction Processing. The OLTP-2 benchmark is based on the typical application
scenario of a database solution. In OLTP-2 database access is simulated and the number of transactions
achieved per second (tps) determined as the unit of measurement for the system.
In contrast to benchmarks such as SPECint and TPC-E, which were standardized by independent bodies
and for which adherence to the respective rules and regulations are monitored, OLTP-2 is an internal
benchmark of Fujitsu. OLTP-2 is based on the well-known database benchmark TPC-E. OLTP-2 was
designed in such a way that a wide range of configurations can be measured to present the scaling of a
system with regard to the CPU and memory configuration.
Even if the two benchmarks OLTP-2 and TPC-E simulate similar application scenarios using the same load profiles, the results cannot be compared or even treated as equal, as the two benchmarks use different methods to simulate user load. OLTP-2 values are typically similar to TPC-E values. A direct comparison, or even referring to the OLTP-2 result as TPC-E, is not permitted, especially because there is no price-performance calculation.
Further information can be found in the document Benchmark Overview OLTP-2.
Benchmark environment
The measurement set-up is symbolically illustrated below:
[Diagram] Measurement set-up: the driver (clients) is connected via a network to Tier A (application server), which is connected via a further network to Tier B (database server) with its disk subsystem; Tier A and Tier B form the System Under Test (SUT).
All results were determined by way of example on a PRIMERGY RX2540 M1.
Database Server (Tier B)

Hardware
Model              PRIMERGY RX2540 M1
Processor          Intel® Xeon® Processor E5-2600 v3 Product Family
Memory             1 processor:  8 × 32GB (1x32GB) 4Rx4 DDR4-2133 LR ECC
                   2 processors: 16 × 32GB (1x32GB) 4Rx4 DDR4-2133 LR ECC
Network interface  2 × onboard LAN 10 Gb/s
Disk subsystem     RX2540 M1: Onboard RAID controller PRAID EP400i
                     2 × 300 GB 15k rpm SAS Drive, RAID1 (OS),
                     4 × 450 GB 15k rpm SAS Drive, RAID10 (LOG)
                   5 × LSI MegaRAID SAS 9286CV-8e or 5 × PRAID EP420e (same performance with OLTP-2)
                     5 × JX40: 13 × 400 GB SSD Drive each, RAID5 (data)

Software
BIOS               Version R1.0.0
Operating system   Microsoft Windows Server 2012 R2 Standard
Database           Microsoft SQL Server 2014 Enterprise
Application Server (Tier A)
Hardware
Model
1 × PRIMERGY RX200 S8
Processor
2 × Xeon E5-2667 v2
Memory
64 GB, 1600 MHz registered ECC DDR3
Network interface
2 × onboard LAN 1 Gb/s
1 × Dual Port LAN 10 Gb/s
Disk subsystem
2 × 250 GB 7.2k rpm SATA Drive
Software
Operating system
Microsoft Windows Server 2012 Standard
Client
Hardware
Model
2 × PRIMERGY RX200 S7
Processor
2 × Xeon E5-2670
Memory
32 GB, 1600 MHz registered ECC DDR3
Network interface
2 × onboard LAN 1 Gb/s
1 × Dual Port LAN 1Gb/s
Disk subsystem
1 × 250 GB 7.2k rpm SATA Drive
Software
Operating system
Microsoft Windows Server 2008 R2 Standard
Benchmark
OLTP-2 Software EGen version 1.13.0
Some components may not be available in all countries / sales regions.
Benchmark results
Database performance greatly depends on the configuration options with CPU and memory and on the connectivity of an adequate disk subsystem for the database. In the following scaling considerations for the processors we assume that both the memory and the disk subsystem have been adequately chosen and are not a bottleneck.
A guideline in the database environment for selecting main memory is that a sufficient quantity is more important than the speed of the memory accesses. This is why a configuration with a total memory of 512 GB was considered for the measurements with two processors and a configuration with a total memory of 256 GB for the measurements with one processor. Both memory configurations have a memory access speed of 2133 MHz. Further information about memory performance can be found in the White Paper Memory performance of Xeon E5-2600 v3 (Haswell-EP)-based systems.
The following table shows the OLTP-2 transaction rates that can be achieved with one and two processors of the Intel® Xeon® Processor E5-2600 v3 Product Family.
OLTP-2 transaction rates [tps]

Processor              2 CPUs, 512 GB RAM   1 CPU, 256 GB RAM
E5-2699 v3 - 18C, HT   3768.70              2136.52
E5-2698 v3 - 16C, HT   3435.90              1915.90
E5-2697 v3 - 14C, HT   3235.78              1842.92
E5-2695 v3 - 14C, HT   2985.40              1689.76
E5-2683 v3 - 14C, HT   2889.68              1606.03
E5-2690 v3 - 12C, HT   2838.84              1586.30
E5-2680 v3 - 12C, HT   2765.09              1545.09
E5-2670 v3 - 12C, HT   2601.38              1454.37
E5-2650L v3 - 12C, HT  2188.13              1219.78
E5-2660 v3 - 10C, HT   2357.78              1310.06
E5-2650 v3 - 10C, HT   2232.40              1240.39
E5-2667 v3 - 8C, HT    2244.71              1224.02
E5-2640 v3 - 8C, HT    1996.46              1068.08
E5-2630L v3 - 8C, HT   1623.49              874.06
E5-2630 v3 - 8C, HT    1906.56              1019.98
E5-2643 v3 - 6C, HT    1823.73              979.99
E5-2620 v3 - 6C, HT    1449.64              773.16
E5-2609 v3 - 6C        732.37               409.34
E5-2603 v3 - 6C        684.82               372.70
E5-2637 v3 - 4C, HT    1247.97              667.62
E5-2623 v3 - 4C, HT    1128.35              608.60

HT: Hyper-Threading. Part of the values was measured, part was calculated.
It is evident that a wide performance range is covered by the variety of released processors. If you compare
the OLTP-2 value of the processor with the lowest performance (Xeon E5-2603 v3) with the value of the
processor with the highest performance (Xeon E5-2699 v3), the result is a 5.2-fold increase in performance.
The features of the processors are summarized in the section “Technical data”.
The relatively large performance differences between the processors can be explained by their features. The
values scale on the basis of the number of cores, the size of the L3 cache and the CPU clock frequency and
as a result of the features of Hyper-Threading and turbo mode, which are available in most processor types.
Furthermore, the data transfer rate between processors (“QPI Speed”) also determines performance.
A low performance can be seen in the Xeon E5-2603 v3 and E5-2609 v3 processors, as they have to
manage without Hyper-Threading (HT) and turbo mode (TM).
Within a group of processors with the same number of cores scaling can be seen via the CPU clock
frequency.
If you compare the maximum achievable OLTP-2 values of the current system generation with the values
that were achieved on the predecessor systems, the result is an increase of about 52%.
[Diagram] Maximum OLTP-2 tps, comparison of system generations: the predecessor systems TX300 S8, RX200 S8, RX300 S8 and RX350 S8 (2 × E5-2697 v2, 512 GB, SQL Server 2012) versus the current systems TX2560 M1, RX2530 M1, RX2540 M1 and RX2560 M1 (2 × E5-2699 v3, 512 GB, SQL Server 2014): approximately +52%.
vServCon
Benchmark description
vServCon is a benchmark used by Fujitsu to compare server configurations with hypervisor with regard to
their suitability for server consolidation. This allows both the comparison of systems, processors and I/O
technologies as well as the comparison of hypervisors, virtualization forms and additional drivers for virtual
machines.
vServCon is not a new benchmark in the true sense of the word. It is rather a framework that combines already established benchmarks (some in modified form) as workloads in order to reproduce the load of a consolidated and virtualized server environment. Three proven benchmarks are used, which cover the application scenarios database, application server and web server.
Application scenario     Benchmark                                No. of logical CPU cores  Memory
Database                 Sysbench (adapted)                       2                         1.5 GB
Java application server  SPECjbb (adapted, with 50% - 60% load)   2                         2 GB
Web server               WebBench                                 1                         1.5 GB
Each of the three application scenarios is allocated to a dedicated virtual machine (VM). Add to these a
fourth machine, the so-called idle VM. These four VMs make up a “tile”. Depending on the performance
capability of the underlying server hardware, you may as part of a measurement also have to start several
identical tiles in parallel in order to achieve a maximum performance score.
[Diagram] System Under Test: each tile consists of a database VM, a Java VM, a web VM and an idle VM; several identical tiles (tile 1, tile 2, tile 3, ..., tile n) are run in parallel on the system under test.
Each of the three vServCon application scenarios provides a specific benchmark result in the form of
application-specific transaction rates for the respective VM. In order to derive a normalized score, the
individual benchmark results for one tile are put in relation to the respective results of a reference system.
The resulting relative performance values are then suitably weighted and finally added up for all VMs and
tiles. The outcome is a score for this tile number.
Starting as a rule with one tile, this procedure is performed for an increasing number of tiles until no further
significant increase in this vServCon score occurs. The final vServCon score is then the maximum of the
vServCon scores for all tile numbers. This score thus reflects the maximum total throughput that can be
achieved by running the mix defined in vServCon that consists of numerous VMs up to the possible full
utilization of CPU resources. This is why the measurement environment for vServCon measurements is
designed in such a way that only the CPU is the limiting factor and that no limitations occur as a result of
other resources.
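The following minimal sketch (not the actual vServCon framework; the reference values and weights are made-up placeholders) illustrates how such a score for one tile count could be formed from normalized, weighted per-VM results:

    # Assumed reference transaction rates and weights - placeholders, not vServCon's real values
    REFERENCE = {"database": 250.0, "java": 180.0, "web": 90.0}
    WEIGHT    = {"database": 0.3,   "java": 0.4,   "web": 0.3}

    def score_for_tile_count(tiles):
        # tiles: one dict of measured transaction rates per application VM for each tile
        total = 0.0
        for tile in tiles:
            for app, rate in tile.items():
                total += WEIGHT[app] * (rate / REFERENCE[app])   # normalize, weight, sum
        return total

    three_tiles = [{"database": 800.0, "java": 600.0, "web": 280.0}] * 3
    print(round(score_for_tile_count(three_tiles), 2))
    # The final vServCon score is the maximum of these scores over all tile counts tried.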
The progression of the vServCon scores for the tile numbers provides useful information about the scaling
behavior of the “System under Test”.
A detailed description of vServCon is in the document: Benchmark Overview vServCon.
Benchmark environment
The measurement set-up is symbolically illustrated below:
[Diagram] Measurement set-up: the server with its disk subsystem forms the System Under Test (SUT); it is connected to the load generators and the framework controller via multiple 1 Gb or 10 Gb networks.
All results were determined by way of example on a PRIMERGY RX2540 M1.
System Under Test (SUT)

Hardware
Processor          Intel® Xeon® Processor E5-2600 v3 Product Family
Memory             1 processor:  8 × 32GB (1x32GB) 4Rx4 DDR4-2133 LR ECC
                   2 processors: 16 × 32GB (1x32GB) 4Rx4 DDR4-2133 LR ECC
Network interface  1 × dual port 1GbE adapter
                   1 × dual port 10GbE server adapter
Disk subsystem     1 × dual-channel FC controller Emulex LPe12002
                   ETERNUS DX80 storage systems:
                   Each tile: 50 GB LUN
                   Each LUN: RAID 0 with 2 × Seagate ST3300657SS disks (15 krpm)
Software
Operating system
VMware ESXi 5.5.0 U2 Build 2068190
Load generator (incl. Framework controller)
Hardware (Shared)
Enclosure
PRIMERGY BX900
Hardware
Model
18 × PRIMERGY BX920 S1 server blades
Processor
2 × Xeon X5570
Memory
12 GB
Network interface
3 × 1 Gbit/s LAN
Software
Operating system   Microsoft Windows Server 2003 R2 Enterprise with Hyper-V
Load generator VM (per tile 3 load generator VMs on various server blades)
Hardware
Processor
1 × logical CPU
Memory
512 MB
Network interface
2 × 1 Gbit/s LAN
Software
Operating system
Microsoft Windows Server 2003 R2 Enterprise Edition
Some components may not be available in all countries or sales regions.
Benchmark results
®
The PRIMERGY dual-socket rack and tower systems dealt with here are based on processors of the Intel
®
Xeon Processor E5-2600 v3 Product Family. The features of the processors are summarized in the section
“Technical data”.
The available processors of these systems with their results can be seen in the following table.
Intel® Xeon® Processor E5-2600 v3 Product Family

Processor                                Score  #Tiles
4 cores, Hyper-Threading, Turbo Mode
  E5-2623 v3                             7.71   4
  E5-2637 v3                             8.65   4
6 cores
  E5-2603 v3                             5.13   5
  E5-2609 v3                             5.83   5
6 cores, Hyper-Threading, Turbo Mode
  E5-2620 v3                             10.1   6
  E5-2643 v3                             13.1   6
8 cores, Hyper-Threading, Turbo Mode
  E5-2630L v3                            11.4   8
  E5-2630 v3                             13.6   8
  E5-2640 v3                             14.1   8
  E5-2667 v3                             15.9   8
10 cores, Hyper-Threading, Turbo Mode
  E5-2650 v3                             16.6   10
  E5-2660 v3                             17.8   10
12 cores, Hyper-Threading, Turbo Mode
  E5-2650L v3                            16.2   11
  E5-2670 v3                             20.0   12
  E5-2680 v3                             21.4   12
  E5-2690 v3                             22.4   13
14 cores, Hyper-Threading, Turbo Mode
  E5-2683 v3                             21.6   14
  E5-2695 v3                             23.5   14
  E5-2697 v3                             25.5   15
16 cores, Hyper-Threading, Turbo Mode
  E5-2698 v3                             27.3   16
18 cores, Hyper-Threading, Turbo Mode
  E5-2699 v3                             30.3   18
These PRIMERGY dual-socket rack and tower systems are very suitable for application virtualization thanks to the progress made in processor technology. Compared with a system based on the previous processor generation, approximately 76% higher virtualization performance can be achieved (measured as the vServCon score in the maximum configuration).
The first diagram compares the virtualization performance values that can be achieved with the processors
reviewed here.
[Diagram: Final vServCon Score by processor of the Intel® Xeon® Processor E5-2600 v3 Product Family, with the respective number of tiles; the values correspond to the table above]
The relatively large performance differences between the processors can be explained by their features. The
values scale on the basis of the number of cores, the size of the L3 cache and the CPU clock frequency and
as a result of the features of Hyper-Threading and turbo mode, which are available in most processor types.
Furthermore, the data transfer rate between processors (“QPI Speed”) also determines performance.
A low performance can be seen in the Xeon E5-2603 v3 and E5-2609 v3 processors, as they have to
manage without Hyper-Threading (HT) and turbo mode (TM). In principle, these weakest processors are only
to a limited extent suitable for the virtualization environment.
Within a group of processors with the same number of cores scaling can be seen via the CPU clock
frequency.
As a matter of principle, the memory access speed also influences performance. A guideline in the
virtualization environment for selecting main memory is that sufficient quantity is more important than the
speed of the memory accesses. The vServCon scaling measurements presented here were all performed
with a memory access speed – depending on the processor type – of at most 2133 MHz. More information
about the topic “Memory Performance” and QPI architecture can be found in the White Paper Memory
performance of Xeon E5-2600 v3 (Haswell-EP)-based systems.
Until now we have looked at the virtualization performance of a
fully configured system. However, with a server with two sockets
the question also arises as to how good performance scaling is
from one to two processors. The better the scaling, the lower the
overhead usually caused by the shared use of resources within a
server. The scaling factor also depends on the application. If the
server is used as a virtualization platform for server consolidation,
the system scales with a factor of 1.97. When operated with two
processors, the system thus achieves a significantly better
performance than with one processor, as is illustrated in the following diagram using the processor version Xeon E5-2699 v3 as an example.
[Diagram: Final vServCon Score – 1 × Xeon E5-2699 v3: 15.4@9 tiles; 2 × Xeon E5-2699 v3: 30.3@18 tiles; scaling factor × 1.97]
The next diagram illustrates the virtualization performance for increasing numbers of VMs based on the Xeon E5-2640 v3 (8-core) and E5-2695 v3 (14-core) processors.

[Diagram: vServCon score as a function of the number of tiles]

#Tiles        0     1     2     3     4     5     6     7     8     9     10    11    12    13    14
E5-2640 v3    0     2.89  5.70  8.36  10.3  11.6  12.8  13.8  14.1
E5-2695 v3    0     2.65  5.47  8.20  10.7  13.3  14.8  17.2  18.9  19.8  21.0  22.0  22.7  23.4  23.5
In addition to the increased number of physical cores, Hyper-Threading, which is supported by almost all processors of the Intel® Xeon® Processor E5-2600 v3 Product Family, is an additional reason for the high number of VMs that can be operated. As is known, a physical processor core is divided into two logical cores so that the number of cores available for the hypervisor is doubled. This standard feature thus generally increases the virtualization performance of a system.
The previous diagram examined the total performance of all application VMs of a host. However, studying the performance from the viewpoint of an individual application VM is also interesting. This information can also be derived from the previous diagram. For example, the total optimum is reached in the above Xeon E5-2640 v3 situation with 24
application VMs (eight tiles, not including the idle VMs); the low load case is represented by three application
VMs (one tile, not including the idle VM). Remember: the vServCon score for one tile is an average value
across the three application scenarios in vServCon. This average performance of one tile drops when
changing from the low load case to the total optimum of the vServCon score - from 2.89 to 14.1/8=1.76, i.e.
to 61%. The individual types of application VMs can react very differently in the high load situation. It is thus
clear that in a specific situation the performance requirements of an individual application must be balanced
against the overall requirements regarding the numbers of VMs on a virtualization host.
The virtualization-relevant progress in processor technology since 2008 has an effect on the one hand on an
individual VM and, on the other hand, on the possible maximum number of VMs up to CPU full utilization.
The following comparison shows the proportions for both types of improvements.
Six systems with similar housing construction are compared: a system from 2008, a system from 2009, a system from 2011, a system from 2012, a system from 2013 and a current system, with the best processors each (see table below), for few VMs and for highest maximum performance.

2008       2009       2011       2012       2013       2014/2015
RX200 S4   RX200 S5   RX200 S6   RX200 S7   RX200 S8   RX2530 M1
RX300 S4   RX300 S5   RX300 S6   RX300 S7   RX300 S8   RX2540 M1
                      TX300 S6   RX350 S7   RX350 S8   RX2560 M1
TX300 S4   TX300 S5   TX300 S6   TX300 S7   TX300 S8   TX2560 M1
Year   Best performance, few VMs   vServCon score (1 tile)   Best performance, maximum   vServCon score (max.)
2008   X5460                       1.91                      X5460                       2.94@2 tiles
2009   X5570                       2.45                      X5570                       6.08@6 tiles
2011   X5690                       2.63                      X5690                       9.61@9 tiles
2012   E5-2643                     2.73                      E5-2690                     13.5@8 tiles
2013   E5-2667 v2                  2.85                      E5-2697 v2                  17.1@11 tiles
2014   E5-2643 v3                  3.22                      E5-2699 v3                  30.3@18 tiles
The clearest performance improvements arose from 2008 to 2009 with the introduction of the Xeon 5500 processor generation (e.g. via the feature "Extended Page Tables" (EPT)¹). One sees an increase of the vServCon score by a factor of 1.28 with a few VMs (one tile).
[Diagram: Virtualization-relevant improvements, few VMs (1 tile) – vServCon score per generation: 1.91 (2008, X5460, 3.17 GHz, 4C), 2.45 (2009, X5570, 2.93 GHz, 4C), 2.63 (2011, X5690, 2.93 GHz, 6C), 2.73 (2012, E5-2643, 3.3 GHz, 4C), 2.85 (2013, E5-2667 v2, 3.3 GHz, 8C), 3.22 (2014, E5-2643 v3, 3.4 GHz, 6C); generation-to-generation factors × 1.28, × 1.07, × 1.04, × 1.04, × 1.13]
With full utilization of the systems with VMs there was an increase by a factor of 2.07. One reason was the performance increase that could be achieved for an individual VM (see the score for a few VMs). The other reason was that more VMs were possible at the total optimum (via Hyper-Threading). However, it can be seen that the optimum was "bought" with three times the number of VMs and a reduced performance of the individual VM.
¹ EPT accelerates memory virtualization via hardware support for the mapping between host and guest memory addresses.
[Diagram: Virtualization-relevant improvements, score at optimum tile count – vServCon score per generation: 2.94 (2008, X5460, 3.17 GHz, 4C), 6.08 (2009, X5570, 2.93 GHz, 4C), 9.61 (2011, X5690, 2.93 GHz, 6C), 13.5 (2012, E5-2690, 2.9 GHz, 8C), 17.1 (2013, E5-2697 v2, 2.7 GHz, 12C), 30.3 (2014, E5-2699 v3, 2.3 GHz, 18C); generation-to-generation factors × 2.07, × 1.58, × 1.40, × 1.27, × 1.77]
Where exactly is the technology progress between 2009 and 2014?
The performance for an individual VM in low-load situations has only slightly increased for the processors
compared here with the highest clock frequency per core. We must explicitly point out that the increased
virtualization performance as seen in the score cannot be completely deemed as an improvement for one
individual VM.
The decisive progress is in the higher number of physical cores and – associated with it – in the increased
values of maximum performance (factor 1.58, 1.40, 1.27 and 1.77 in the diagram).
Up to and including 2011 the best processor type of a processor generation had both the highest clock
frequency and the highest number of cores. From 2012 there have been differently optimized processors on
offer: Versions with a high clock frequency per core for few cores and versions with a high number of cores,
but with a lower clock frequency per core. The features of the processors are summarized in the section
“Technical data”.
Performance increases in the virtualization environment since 2009 are mainly achieved by increased VM
numbers due to the increased number of available logical or physical cores. However, since 2012 it has
been possible - depending on the application scenario in the virtualization environment – to also select a
CPU with an optimized clock frequency if a few or individual VMs require maximum computing power.
VMmark V2
Benchmark description
VMmark V2 is a benchmark developed by VMware to compare server configurations with hypervisor
solutions from VMware regarding their suitability for server consolidation. In addition to the software for load
generation, the benchmark consists of a defined load profile and binding regulations. The benchmark results
can be submitted to VMware and are published on their Internet site after a successful review process. After
the discontinuation of the proven benchmark “VMmark V1” in October 2010, it has been succeeded by
“VMmark V2”, which requires a cluster of at least two servers and covers data center functions, like Cloning
and Deployment of virtual machines (VMs), Load Balancing, as well as the moving of VMs with vMotion and
also Storage vMotion.
In addition to the “Performance Only” result, it is also possible from version 2.5 of VMmark to alternatively
measure the electrical power consumption and publish it as a “Performance with Server Power” result (power
consumption of server systems only) and/or “Performance with Server and Storage Power” result (power
consumption of server systems and all storage components).
VMmark V2 is not a new benchmark in the actual sense. It is in fact a framework that consolidates already established benchmarks as workloads in order to simulate the load of a virtualized consolidated server environment. Three proven benchmarks, which cover the application scenarios mail server, Web 2.0 and e-commerce, were integrated in VMmark V2.

Application scenario   Load tool            # VMs
Mail server            LoadGen              1
Web 2.0                Olio client          2
E-commerce             DVD Store 2 client   4
Standby server         (IdleVMTest)         1
Each of the three application scenarios is assigned to a total of seven dedicated virtual machines. Then add
to these an eighth VM called the “standby server”. These eight VMs form a “tile”. Because of the
performance capability of the underlying server hardware, it is usually necessary to have started several
identical tiles in parallel as part of a measurement in order to achieve a maximum overall performance.
A new feature of VMmark V2 is an infrastructure component, which is present once for every two hosts. It
measures the efficiency levels of data center consolidation through VM Cloning and Deployment, vMotion
and Storage vMotion. The Load Balancing capacity of the data center is also used (DRS, Distributed
Resource Scheduler).
The result of VMmark V2 for test type "Performance Only" is a number, known as a "score", which provides
information about the performance of the measured virtualization solution. The score reflects the maximum
total consolidation benefit of all VMs for a server configuration with hypervisor and is used as a comparison
criterion of various hardware platforms.
This score is determined from the individual results of the VMs and an infrastructure result. Each of the five
VMmark V2 application or front-end VMs provides a specific benchmark result in the form of application-specific transaction rates for each VM. In order to derive a normalized score, the individual benchmark results for one tile are put in relation to the respective results of a reference system. The resulting dimensionless performance values are then averaged geometrically and finally added up for all VMs. This value is included
in the overall score with a weighting of 80%. The infrastructure workload is only present in the benchmark
once for every two hosts; it determines 20% of the result. The number of transactions per hour and the
average duration in seconds respectively are determined for the score of the infrastructure workload
components.
In addition to the actual score, the number of VMmark V2 tiles is always specified with each VMmark V2
score. The result is thus as follows: “Score@Number of Tiles”, for example “4.20@5 tiles”.
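As an illustration of this aggregation (not the official VMmark V2 harness), the following C sketch combines per-VM results into a score along the lines described above: normalization against a reference system, geometric averaging per tile, summation over tiles and the 80%/20% weighting of application and infrastructure results. All reference values, measured rates and the normalized infrastructure result are assumptions for illustration only.

```c
/* Illustrative sketch of a VMmark-V2-style score calculation (not the official harness). */
#include <math.h>
#include <stdio.h>

#define VMS_PER_TILE 5

/* geometric mean of the normalized application results of one tile */
static double tile_score(const double measured[VMS_PER_TILE],
                         const double reference[VMS_PER_TILE]) {
    double log_sum = 0.0;
    for (int i = 0; i < VMS_PER_TILE; i++)
        log_sum += log(measured[i] / reference[i]);   /* normalize against the reference system */
    return exp(log_sum / VMS_PER_TILE);
}

int main(void) {
    double reference[VMS_PER_TILE] = { 100, 200, 150, 150, 50 };  /* assumed reference rates */
    double tile1[VMS_PER_TILE]     = { 180, 350, 260, 270, 90 };  /* assumed measured rates  */
    double tile2[VMS_PER_TILE]     = { 175, 340, 255, 265, 88 };

    double app_part   = tile_score(tile1, reference) + tile_score(tile2, reference);
    double infra_part = 1.5;                          /* assumed normalized infrastructure result */

    double score = 0.8 * app_part + 0.2 * infra_part; /* 80% application, 20% infrastructure */
    printf("VMmark-style score: %.2f @ 2 tiles\n", score);
    return 0;
}
```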
In the case of the two test types “Performance with Server Power” and “Performance with Server and
Storage Power” a so-called “Server PPKW Score” and “Server and Storage PPKW Score” is determined,
which is the performance score divided by the average power consumption in kilowatts (PPKW =
performance per kilowatt (kW)).
The results of the three test types should not be compared with each other.
A detailed description of VMmark V2 is available in the document Benchmark Overview VMmark V2.
Benchmark environment
The measurement set-up is symbolically illustrated below:
[Diagram: System under Test (SUT) with server(s) and storage system, connected via multiple 1Gb or 10Gb networks and a separate vMotion network to the load generators incl. Prime Client and the Datacenter Management Server]
System Under Test (SUT)

Hardware
Number of servers   2
Model               PRIMERGY RX2540 M1
Processor           2 × Xeon E5-2699 v3
Memory              "Performance Only" measurement result:
                    512 GB: 16 × 32GB (1x32GB) 4Rx4 DDR4-2133 LR ECC
                    "Performance with Server Power" and "Performance with Server and Storage Power" measurement results:
                    384 GB: 12 × 32GB (1x32GB) 4Rx4 DDR4-2133 LR ECC
Network interface   "Performance Only" measurement result:
                    1 × Emulex OneConnect OCe14000 Dual Port Adapter with 10Gb SFP+ DynamicLoM interface module
                    1 × PLAN CP 2x1Gbit Cu Intel I350-T2 LP Adapter
                    "Performance with Server Power" and "Performance with Server and Storage Power" measurement results:
                    1 × Emulex OneConnect OCe14000 Dual Port Adapter with 10Gb SFP+ DynamicLoM interface module
Disk subsystem      1 × Dual port PFC EP LPe16002
                    2 × PRIMERGY RX300 S8 configured as Fibre Channel target:
                    7/6 × SAS-SSD (400 GB)
                    2 × Fusion-io ioDrive®2 PCIe-SSD (1.2 TB)
                    RAID 0 with several LUNs
                    Total: 8440 GB

Software
BIOS                Version V5.0.0.9 R1.1.0
BIOS settings       See details
Operating system    VMware ESXi 5.5.0 U2 Build 1964139
Operating system settings   ESX settings: see details
Details
See disclosure
http://www.vmware.com/a/assets/vmmark/pdf/2014-12-09-Fujitsu-RX2530M1.pdf
http://www.vmware.com/a/assets/vmmark/pdf/2014-12-23-Fujitsu-RX2530M1serverPPKW.pdf
http://www.vmware.com/a/assets/vmmark/pdf/2014-12-23-Fujitsu-RX2530M1serverstoragePPKW.pdf
Datacenter Management Server (DMS)

Hardware (shared)
Enclosure           PRIMERGY BX600
Network switch      1 × PRIMERGY BX600 GbE Switch Blade 30/12

Hardware
Model               1 × server blade PRIMERGY BX620 S5
Processor           2 × Xeon X5570
Memory              24 GB
Network interface   6 × 1 Gbit/s LAN

Software
Operating system    VMware ESXi 5.1.0 Build 799733

Datacenter Management Server (DMS) VM

Hardware
Processor           4 × logical CPU
Memory              10 GB
Network interface   2 × 1 Gbit/s LAN

Software
Operating system    Microsoft Windows Server 2008 R2 Enterprise x64 Edition
Prime Client

Hardware (shared)
Enclosure           PRIMERGY BX600
Network switch      1 × PRIMERGY BX600 GbE Switch Blade 30/12

Hardware
Model               1 × server blade PRIMERGY BX620 S5
Processor           2 × Xeon X5570
Memory              12 GB
Network interface   6 × 1 Gbit/s LAN

Software
Operating system    Microsoft Windows Server 2008 Enterprise x64 Edition SP2
Load generator

Hardware
Model               2 × PRIMERGY RX600 S6
Processor           4 × Xeon E7-4870
Memory              512 GB
Network interface   5 × 1 Gbit/s LAN

Software
Operating system    VMware ESX 4.1.0 U2 Build 502767

Load generator VM (per tile 1 load generator VM)

Hardware
Processor           4 × logical CPU
Memory              4 GB
Network interface   1 × 1 Gbit/s LAN

Software
Operating system    Microsoft Windows Server 2008 Enterprise x64 Edition SP2
Some components may not be available in all countries or sales regions.
Benchmark results
"Performance Only" measurement result (December 9th, 2014)
On December 9, 2014 Fujitsu achieved with a PRIMERGY RX2530 M1 with Xeon E5-2699 v3 processors
and VMware ESXi 5.5.0 U2 a VMmark V2 score of “26.37@22 tiles” in a system configuration with a total of
2 × 36 processor cores and when using two identical servers in the “System under Test” (SUT). With this
result the PRIMERGY RX2530 M1 is in the official VMmark V2 “Performance Only” ranking the second most
powerful 2-socket server in a “matched pair” configuration consisting of two identical hosts (valid as of
benchmark results publication date).
All comparisons for the competitor products reflect the status of 9th December 2014. The current VMmark V2
“Performance Only” results as well as the detailed results and configuration data are available at
http://www.vmware.com/a/vmmark/.
The diagram shows the "Performance Only" result of the PRIMERGY RX2530 M1 in comparison with the PRIMERGY RX2540 M1 and the best competitor system with 2 × 2 processors of the Intel® Xeon® Processor E5-2600 Product Family (v1/v2/v3).
The PRIMERGY RX2530 M1 obtains a performance level almost identical to that of the PRIMERGY
RX2540 M1.
[Diagram: VMmark V2 Score – PRIMERGY RX2530 M1 compared to PRIMERGY RX2540 M1 and HP ProLiant DL380p Gen8:
2 × Fujitsu PRIMERGY RX2540 M1 (2 × 2 × Xeon E5-2699 v3): 26.48@22 tiles
2 × Fujitsu PRIMERGY RX2530 M1 (2 × 2 × Xeon E5-2699 v3): 26.37@22 tiles
2 × HP ProLiant DL380p Gen8 (2 × 2 × Xeon E5-2697 v2): 16.54@14 tiles (+59% for the PRIMERGY RX2530 M1)]
The processors used, which with a good hypervisor setting could make optimal use of their processor
features, were the essential prerequisites for achieving the PRIMERGY RX2530 M1 result. These features
include Hyper-Threading. All this has a particularly positive effect during virtualization.
All VMs, their application data, the host operating system as well as additionally required data were on a
powerful Fibre Channel disk subsystem. As far as possible, the configuration of the disk subsystem takes the
specific requirements of the benchmark into account. The use of flash technology in the form of SAS SSDs
and PCIe-SSDs in the powerful Fibre Channel disk subsystem resulted in further advantages in response
times of the storage medium used.
The network connection to the load generators was implemented via 10Gb LAN ports. The infrastructure-workload connection between the hosts was by means of 1Gb LAN ports.
All the components used were optimally attuned to each other.
"Performance with Server Power" measurement result (December 23rd, 2014)
On December 23, 2014 Fujitsu achieved with a PRIMERGY RX2530 M1 with Xeon E5-2699 v3 processors
and VMware ESXi 5.5.0 U2 a VMmark V2 “Server PPKW Score” of “25.2305@22 tiles” in a system
configuration with a total of 2 × 36 processor cores and when using two identical servers in the “System
under Test” (SUT). With this result the PRIMERGY RX2530 M1 is in the official VMmark V2 “Performance
with Server Power” ranking the second most energy-efficient virtualization server worldwide (valid as of
benchmark results publication date).
All comparisons for the competitor products reflect the status of 23rd December 2014. The current
VMmark V2 “Performance with Server Power” results as well as the detailed results and configuration data
are available at http://www.vmware.com/a/vmmark/2/.
The diagram shows all VMmark V2 "Performance with Server Power" results.
The PRIMERGY RX2530 M1, as a rack server system with only one height unit, achieves a very good value for this compact construction; it is beaten by only about 6% by the PRIMERGY RX2540 M1, which is twice as high, but is clearly better than the competition system with two height units.
[Diagram: VMmark V2 Server PPKW Score ("Performance with Server Power") – 2 × Fujitsu PRIMERGY RX2540 M1 and 2 × Fujitsu PRIMERGY RX2530 M1 (each 2 × 2 × Xeon E5-2699 v3): 25.2305@22 tiles and 23.6493@22 tiles; 2 × HP ProLiant DL380 Gen9 (2 × 2 × Xeon E5-2699 v3): 17.6899@20 tiles; +34% annotated against the HP system]
"Performance with Server and Storage Power" measurement result (December 23rd, 2014)
On December 23, 2014 Fujitsu achieved with a PRIMERGY RX2530 M1 with Xeon E5-2699 v3 processors
and VMware ESXi 5.5.0 U2 a VMmark V2 “Server and Storage PPKW Score” of “20.8067@22 tiles” in a
system configuration with a total of 2 × 36 processor cores and when using two identical servers in the
“System under Test” (SUT). With this result the PRIMERGY RX2530 M1 is in the official VMmark V2
“Performance with Server and Storage Power” ranking the second most energy-efficient virtualization
platform worldwide (valid as of benchmark results publication date).
All comparisons for the competitor products reflect the status of 23rd December 2014. The current
VMmark V2 “Performance with Server and Storage Power” results as well as the detailed results and
configuration data are available at http://www.vmware.com/a/vmmark/3/.
The diagram shows all VMmark V2 "Performance with Server and Storage Power" results.
The PRIMERGY RX2530 M1 as a rack server system with only one height unit – together with the energy-efficient disk subsystem – has a very good value for this compact construction; it is beaten by approx. 5% by the PRIMERGY RX2540 M1, which is twice as high, with the same energy-efficient disk subsystem, but is clearly better than the overall competition configuration, which contains servers with two height units.
[Diagram: VMmark V2 Server and Storage PPKW Score ("Performance with Server and Storage Power") – 2 × Fujitsu PRIMERGY RX2540 M1 and 2 × Fujitsu PRIMERGY RX2530 M1 (each 2 × 2 × Xeon E5-2699 v3): 20.8067@22 tiles and 19.7263@22 tiles; 2 × HP ProLiant DL380 Gen9 (2 × 2 × Xeon E5-2699 v3): 12.7058@20 tiles; +55% annotated against the HP system]
STREAM
Benchmark description
STREAM is a synthetic benchmark that has been used for many years to determine memory throughput and
which was developed by John McCalpin during his professorship at the University of Delaware. Today
STREAM is supported at the University of Virginia, where the source code can be downloaded in either
Fortran or C. STREAM continues to play an important role in the HPC environment in particular. It is for
example an integral part of the HPC Challenge benchmark suite.
The benchmark is designed in such a way that it can be used both on PCs and on server systems. The unit
of measurement of the benchmark is GB/s, i.e. the number of gigabytes that can be read and written per
second.
STREAM measures the memory throughput for sequential accesses. These can generally be performed
more efficiently than accesses that are randomly distributed on the memory, because the processor caches
are used for sequential access.
Before execution the source code is adapted to the environment to be measured. Therefore, the size of the
data area must be at least 12 times larger than the total of all last-level processor caches so that these have
as little influence as possible on the result. The OpenMP program library is used to enable selected parts of
the program to be executed in parallel during the runtime of the benchmark, consequently achieving optimal
load distribution to the available processor cores.
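As a worked example of this sizing rule (an illustration only, assuming processors with a 45 MB last-level cache, the largest in this family): with two such processors the last-level caches total 90 MB, so the data area must be at least 12 × 90 MB ≈ 1.08 GB.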
During execution the defined data area, consisting of 8-byte elements, is successively processed with four operation types; arithmetic calculations are also performed to some extent.
Type    Execution                Bytes per step   Floating-point calculation per step
COPY    a(i) = b(i)              16               0
SCALE   a(i) = q × b(i)          16               1
SUM     a(i) = b(i) + c(i)       24               1
TRIAD   a(i) = b(i) + q × c(i)   24               2
The throughput is output in GB/s for each type of calculation. The differences between the various values are
usually only minor on modern systems. In general, only the determined TRIAD value is used as a
comparison.
The measured results primarily depend on the clock frequency of the memory modules; the processors
influence the arithmetic calculations.
This chapter specifies throughputs on a basis of 10⁹ (1 GB/s = 10⁹ Byte/s).
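To make the measurement principle concrete, the following C sketch shows the TRIAD kernel with OpenMP parallelization and the corresponding throughput calculation. It is a simplified illustration, not the official stream.c; the array size N is an assumption chosen only to stay well above the cache-size rule mentioned above, and compilation with OpenMP support (e.g. -fopenmp) is assumed.

```c
/* Minimal sketch of the STREAM TRIAD kernel (not the official stream.c). */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 80000000L   /* assumed size: ~640 MB per array, far above 12 x the last-level caches */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;
    double q = 3.0;

    #pragma omp parallel for              /* OpenMP distributes the loop over all cores */
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for              /* TRIAD: a(i) = b(i) + q * c(i) */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + q * c[i];
    double t1 = omp_get_wtime();

    /* 24 bytes per loop iteration (read b, read c, write a), cf. the table above */
    double gbytes = 24.0 * (double)N / 1e9;
    printf("TRIAD: %.1f GB/s\n", gbytes / (t1 - t0));

    free(a); free(b); free(c);
    return 0;
}
```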
Benchmark environment
System Under Test (SUT)
Hardware
Model
PRIMERGY RX2530 M1
Processor
2 processors of the Intel® Xeon® Processor E5-2600 v3 Product Family
Memory
16 × 16GB (1x16GB) 2Rx4 DDR4-2133 R ECC
Software
BIOS settings
EnergyPerformance = Performance
Cores per processor < 10: COD Enable = disabled, Early Snoop = enabled
else:
COD Enable = enabled, Early Snoop = disabled
Operating system
Red Hat Enterprise Linux Server release 6.5
Operating system
settings
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
Compiler
Intel C++ Composer XE 2013 SP1 for Linux Update 1
Benchmark
Stream.c Version 5.9
Some components may not be available in all countries or sales regions.
Benchmark results
Processor          Memory Frequency [MHz]   Max. Memory Bandwidth [GB/s]   Cores   Processor Frequency [GHz]   Number of Processors   TRIAD [GB/s]
Xeon E5-2603 v3    1600                     51                              6      1.60                        2                      47.4
Xeon E5-2609 v3    1600                     51                              6      1.90                        2                      58.2
Xeon E5-2623 v3    1866                     59                              4      3.00                        2                      73.3
Xeon E5-2620 v3    1866                     59                              6      2.40                        2                      88.9
Xeon E5-2630L v3   1866                     59                              8      1.80                        2                      86.9
Xeon E5-2630 v3    1866                     59                              8      2.40                        2                      89.9
Xeon E5-2640 v3    1866                     59                              8      2.60                        2                      90.1
Xeon E5-2637 v3    2133                     68                              4      3.50                        2                      89.9
Xeon E5-2643 v3    2133                     68                              6      3.40                        2                      90.3
Xeon E5-2667 v3    2133                     68                              8      3.20                        2                      90.3
Xeon E5-2650 v3    2133                     68                             10      2.30                        2                      116
Xeon E5-2660 v3    2133                     68                             10      2.60                        2                      115
Xeon E5-2650L v3   2133                     68                             12      1.80                        2                      116
Xeon E5-2670 v3    2133                     68                             12      2.30                        2                      118
Xeon E5-2680 v3    2133                     68                             12      2.50                        2                      118
Xeon E5-2690 v3    2133                     68                             12      2.60                        2                      118
Xeon E5-2683 v3    2133                     68                             14      2.00                        2                      117
Xeon E5-2695 v3    2133                     68                             14      2.30                        2                      118
Xeon E5-2697 v3    2133                     68                             14      2.60                        2                      117
Xeon E5-2698 v3    2133                     68                             16      2.30                        2                      117
Xeon E5-2699 v3    2133                     68                             18      2.30                        2                      116
The following diagram illustrates the throughput of the PRIMERGY RX2530 M1 in comparison to its
predecessor, the PRIMERGY RX200 S8, in their most performant configuration.
[Diagram: STREAM TRIAD – PRIMERGY RX200 S8 (2 × Xeon E5-2697 v2): 101 GB/s; PRIMERGY RX2530 M1 (2 × Xeon E5-2680 v3): 118 GB/s]
LINPACK
Benchmark description
LINPACK was developed in the 1970s by Jack Dongarra and some other people to show the performance of
supercomputers. The benchmark consists of a collection of library functions for the analysis and solution of
linear systems of equations. A description can be found in the document
http://www.netlib.org/utk/people/JackDongarra/PAPERS/hplpaper.pdf.
LINPACK can be used to measure the speed of computers when solving a linear equation system. For this
purpose, an n × n matrix is set up and filled with random numbers between -2 and +2. The calculation is then
performed via LU decomposition with partial pivoting.
A memory of 8n² bytes is required for the matrix. In the case of an n × n matrix the number of arithmetic operations required for the solution is 2/3·n³ + 2·n². Thus, the choice of n determines the duration of the
measurement: a doubling of n results in an approximately eight-fold increase in the duration of the
measurement. The size of n also has an influence on the measurement result itself: as n increases, the
measured value asymptotically approaches a limit. The size of the matrix is therefore usually adapted to the
amount of memory available. Furthermore, the memory bandwidth of the system only plays a minor role for
the measurement result, but a role that cannot be fully ignored. The processor performance is the decisive
factor for the measurement result. Since the algorithm used permits parallel processing, in particular the
number of processors used and their processor cores are - in addition to the clock rate - of outstanding
significance.
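For illustration (an assumed example, not one of the measured configurations): with n = 150,000 the matrix occupies 8·n² = 180 GB, which would fit into the 256 GB memory configuration used below, and the solution requires roughly 2/3·n³ ≈ 2.25·10¹⁵ floating-point operations; at a sustained rate of about 1000 GFlops this corresponds to a run time in the order of 2,250 seconds.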
LINPACK is used to measure how many floating point operations were carried out per second. The result is
referred to as Rmax and specified in GFlops (Giga Floating Point Operations per Second).
An upper limit, referred to as Rpeak, for the speed of a computer can be calculated from the maximum
number of floating point operations that its processor cores could theoretically carry out in one clock cycle:
Rpeak = Maximum number of floating point operations per clock cycle
× Number of processor cores of the computer
× Rated processor frequency [GHz]
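As a plausibility check of this formula (assuming the 16 double-precision floating-point operations per clock cycle that the AVX2/FMA units of this processor generation can execute per core): for two Xeon E5-2699 v3 processors with 2 × 18 = 36 cores at a rated frequency of 2.3 GHz,
Rpeak = 16 × 36 × 2.3 GHz ≈ 1325 GFlops,
which matches the Rpeak value listed for this processor in the results table below.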
LINPACK is classed as one of the leading benchmarks in the field of high performance computing (HPC).
LINPACK is one of the seven benchmarks currently included in the HPC Challenge benchmark suite, which
takes other performance aspects in the HPC environment into account.
Manufacturer-independent publication of LINPACK results is possible at http://www.top500.org/. The use of a
LINPACK version based on HPL is prerequisite for this (see: http://www.netlib.org/benchmark/hpl/).
Intel offers a highly optimized LINPACK version (shared memory version) for individual systems with Intel
processors. Parallel processes communicate here via "shared memory", i.e. jointly used memory. Another
version provided by Intel is based on HPL (High Performance Linpack). Intercommunication of the LINPACK
processes here takes place via OpenMP and MPI (Message Passing Interface). This enables communication
between the parallel processes - also from one computer to another. Both versions can be downloaded from
http://software.intel.com/en-us/articles/intel-math-kernel-library-linpack-download/.
Manufacturer-specific LINPACK versions also come into play when graphics cards for General Purpose
Computation on Graphics Processing Unit (GPGPU) are used. These are based on HPL and include
extensions which are needed for communication with the graphics cards.
Benchmark environment
System Under Test (SUT)
Hardware
Model
PRIMERGY RX2530 M1
Processor
2 processors of the Intel® Xeon® Processor E5-2600 v3 Product Family
Memory
16 × 16GB (1x16GB) 2Rx4 DDR4-2133 R ECC
Software
BIOS settings
EnergyPerformance = Performance
COD Enable = disabled
Early Snoop = disabled
All processors apart from Xeon E5-2603 v3 and E5-2609 v3:
Turbo Mode = Enabled (default) / = Disabled
Hyper Threading = Disabled
Operating system
Red Hat Enterprise Linux Server release 7.0
Benchmark
Shared memory version: Intel Optimized LINPACK Benchmark 11.2 for Linux OS
Some components may not be available in all countries or sales regions.
Benchmark results

Processor          Cores   Rated Frequency [GHz]   Number of processors   Rpeak [GFlops]   Rmax (without Turbo Mode) [GFlops]   Efficiency   Rmax (with Turbo Mode) [GFlops]   Efficiency
Xeon E5-2623 v3     4      3.00                    2                       384              336                                 88%          365                               95%
Xeon E5-2637 v3     4      3.50                    2                       448              388                                 87%          388                               87%
Xeon E5-2603 v3     6      1.60                    2                       307              273                                 89%          –                                 –
Xeon E5-2609 v3     6      1.90                    2                       365              321                                 88%          –                                 –
Xeon E5-2620 v3     6      2.40                    2                       461              411                                 89%          442                               96%
Xeon E5-2643 v3     6      3.40                    2                       653              565                                 87%          579                               89%
Xeon E5-2630L v3    8      1.80                    2                       461              432                                 94%          454                               98%
Xeon E5-2630 v3     8      2.40                    2                       614              575                                 94%          587                               96%
Xeon E5-2640 v3     8      2.60                    2                       666              597                                 90%          597                               90%
Xeon E5-2667 v3     8      3.20                    2                       819              734 (est.)                          90%          734 (est.)                        90%
Xeon E5-2650 v3    10      2.30                    2                       736              686                                 93%          702                               95%
Xeon E5-2660 v3    10      2.60                    2                       832              713                                 86%          712                               86%
Xeon E5-2650L v3   12      1.80                    2                       691              542                                 78%          541                               78%
Xeon E5-2670 v3    12      2.30                    2                       883              823                                 93%          829                               94%
Xeon E5-2680 v3    12      2.50                    2                       960              838                                 87%          838                               87%
Xeon E5-2690 v3    12      2.60                    2                       998              896                                 90%          896                               90%
Xeon E5-2683 v3    14      2.00                    2                       896              835                                 93%          874                               98%
Xeon E5-2695 v3    14      2.30                    2                      1030              929                                 90%          929                               90%
Xeon E5-2697 v3    14      2.60                    2                      1165              983 (est.)                          84%          982 (est.)                        84%
Xeon E5-2698 v3    16      2.30                    2                      1178             1084                                 92%         1086                               92%
Xeon E5-2699 v3    18      2.30                    2                      1325             1185                                 89%         1186                               90%
The results marked (est.) are estimates.
Rmax = Measurement result
Rpeak = Maximum number of floating point operations per clock cycle
× Number of processor cores of the computer
× Rated frequency [GHz]
As explained in the section "Technical data", due to manufacturing tolerances Intel does not as a matter of principle guarantee that the maximum turbo frequency can be reached in the processor models. A further restriction applies for workloads such as those generated by LINPACK, with intensive use of AVX instructions and a high number of instructions per clock cycle. Here the frequency of a core can also be limited if the upper limits of the processor for power consumption and temperature are reached before the upper limit for current consumption. This can result in lower performance with turbo mode than without turbo mode. In such cases, you should disable the turbo functionality via the BIOS option.
System comparison
The following diagram illustrates the throughput of the PRIMERGY RX2530 M1 in comparison to its
predecessor, the PRIMERGY RX200 S8, in their most performant configuration.
[Diagram: LINPACK Rmax – PRIMERGY RX200 S8 (2 × Xeon E5-2697 v2): 546 GFlops; PRIMERGY RX2530 M1 (2 × Xeon E5-2699 v3): 1186 GFlops]
Literature
PRIMERGY Servers
http://primergy.com/
PRIMERGY RX2530 M1
This White Paper:
http://docs.ts.fujitsu.com/dl.aspx?id=2b2fb785-30f8-4277-9d4a-c3083476c197
http://docs.ts.fujitsu.com/dl.aspx?id=8fcfa982-2a17-4eac-b796-05702e93fba2
http://docs.ts.fujitsu.com/dl.aspx?id=5b7faa9f-d6be-45a3-b14d-938f600ef1c7
Data sheet
http://docs.ts.fujitsu.com/dl.aspx?id=afc62316-7690-4222-814b-ad0203928a07
PRIMERGY Performance
http://www.fujitsu.com/fts/x86-server-benchmarks
Performance of Server Components
http://www.fujitsu.com/fts/products/computing/servers/mission-critical/benchmarks/x86components.html
BIOS optimizations for Xeon E5-2600 v3 based systems
http://docs.ts.fujitsu.com/dl.aspx?id=f154aca6-d799-487c-8411-e5b4e558c88b
Memory performance of Xeon E5-2600 v3 (Haswell-EP)-based systems
http://docs.ts.fujitsu.com/dl.aspx?id=74eb62e6-4487-4d93-be34-5c05c3b528a6
RAID Controller Performance
http://docs.ts.fujitsu.com/dl.aspx?id=e2489893-cab7-44f6-bff2-7aeea97c5aef
Disk I/O: Performance of RAID controllers
Basics of Disk I/O Performance
http://docs.ts.fujitsu.com/dl.aspx?id=65781a00-556f-4a98-90a7-7022feacc602
Information about Iometer
http://www.iometer.org
LINPACK
The LINPACK Benchmark: Past, Present, and Future
http://www.netlib.org/utk/people/JackDongarra/PAPERS/hplpaper.pdf
TOP500
http://www.top500.org/
HPL - A Portable Implementation of the High-Performance Linpack Benchmark for Distributed-Memory Computers
http://www.netlib.org/benchmark/hpl/
Intel Math Kernel Library – LINPACK Download
http://software.intel.com/en-us/articles/intel-math-kernel-library-linpack-download/
OLTP-2
Benchmark Overview OLTP-2
http://docs.ts.fujitsu.com/dl.aspx?id=e6f7a4c9-aff6-4598-b199-836053214d3f
SPECcpu2006
http://www.spec.org/osg/cpu2006
Benchmark overview SPECcpu2006
http://docs.ts.fujitsu.com/dl.aspx?id=1a427c16-12bf-41b0-9ca3-4cc360ef14ce
SPECpower_ssj2008
http://www.spec.org/power_ssj2008
Benchmark Overview SPECpower_ssj2008
http://docs.ts.fujitsu.com/dl.aspx?id=166f8497-4bf0-4190-91a1-884b90850ee0
STREAM
http://www.cs.virginia.edu/stream/
VMmark V2
Benchmark Overview VMmark V2
http://docs.ts.fujitsu.com/dl.aspx?id=2b61a08f-52f4-4067-bbbf-dc0b58bee1bd
VMmark V2
http://www.vmmark.com
vServCon
Benchmark Overview vServCon
http://docs.ts.fujitsu.com/dl.aspx?id=b953d1f3-6f98-4b93-95f5-8c8ba3db4e59
Contact
FUJITSU
Website: http://www.fujitsu.com/
PRIMERGY Product Marketing
mailto:Primergy-PM@ts.fujitsu.com
PRIMERGY Performance and Benchmarks
mailto:primergy.benchmark@ts.fujitsu.com
© Copyright 2015 Fujitsu Technology Solutions. Fujitsu and the Fujitsu logo are trademarks or registered trademarks of Fujitsu Limited in Japan and other
countries. Other company, product and service names may be trademarks or registered trademarks of their respective owners. Technical data subject to
modification and delivery subject to availability. Any liability that the data and illustrations are complete, actual or correct is excluded. Designations may be
trademarks and/or copyrights of the respective manufacturer, the use of which by third parties for their own purposes may infringe the rights of such owner.
For further information see http://www.fujitsu.com/fts/resources/navigation/terms-of-use.html
2015-04-15 WW EN