
Roland Mueller
Reference Architecture: System x3650 M5
Scalable Solution for Microsoft Exchange
Server 2013 Using Internal Disks
Solution Reference Number: BAPEXCX1344
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Business problem and business value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Architectural overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Software and hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Functional requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Best practice and implementation guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Solution validation methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
References and helpful links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
About the Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Appendix A: Bill of materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Appendix B: Configuring the RAID1 boot array on the ServeRAID M5210 . . . . . . . . . . . . . 20
Appendix C: Configuring the RAID0 single-disk arrays on the ServeRAID M5210 . . . . . . . 24
Appendix D: Creating volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Appendix E: Microsoft Exchange Jetstress 2013 Stress Test Result Report . . . . . . . . . . . 28
© Copyright Lenovo 2014. All rights reserved.
Introduction
This document describes a scalable reference architecture for a 4-node Microsoft Exchange
Server 2013 mail environment that uses System x3650 M5 servers and internal storage. The
solution as described supports a user population of 5,000 users with 3,500 MB mailboxes;
however, when it is deployed in aggregate, it can support any number of mailboxes in
multiples of 5,000.
By using the native high availability features of Exchange Server 2013, the environment that
is described in this publication allows administrators to eliminate traditional backup methods,
which frees critical enterprise resources. In addition, the use of internal disks in a
cache-enabled JBOD configuration drastically reduces the overall cost of the solution by
eliminating SAN or DAS-based storage and reduces the need for storage administrators.
This paper is intended to provide the planning, design considerations, and best practices for
implementation.
This document is for clients who are implementing Exchange Server 2013 and for IT
engineers familiar with the hardware and software that make up this reference architecture.
Also, the System x® sales teams and their customers who are evaluating or pursuing
Exchange Server 2013 solutions can benefit from this validated configuration.
Comprehensive experience with the various reference configuration technologies is
recommended before implementing this solution.
Business problem and business value
Today’s IT managers are looking for efficient ways to grow their IT infrastructure, while
simultaneously reducing administration and power costs. Add the requirement for high
availability, and organizations must maintain a delicate balance between availability of the
mail environment and the cost of purchasing and maintaining the solution.
The ability to respond rapidly to changing business needs with simple, fast deployment and
configuration, while keeping systems and services running, directly affects the vitality of
your business. Natural disasters, malicious attacks, and even simple software upgrade
patches can cripple services and applications until administrators resolve the problems and
restore any backed-up data.
Business value
The last three generations of Microsoft Exchange Server have seen dramatic improvements
in native high availability and site resiliency features. Cluster Continuous Replication, the
foundational technology that first appeared in Exchange Server 2007, evolved into the
Database Availability Group in Exchange Server 2010. Exchange Server 2013 further
improves availability and site resiliency by decoupling the Mailbox and Client Access Server
(CAS) roles and simplifying the namespace requirements. In addition to these improvements,
optimizations at the database level reduced IOPS requirements by 50% compared with
Exchange Server 2010. This reduction in IOPS allows Exchange Server 2013 to use less
expensive, larger capacity hard disk drives (HDDs).
Lenovo® made a significant investment in the System x platform to use the improvements in
Exchange Server 2013. The large memory capacity of the server improves search query
performance and reduces IOPS, while the impressive number of internal drive slots provides
organizations with the option of using internal disks for their mail environment rather than
purchasing expensive DAS or SAN-based storage subsystems. The improved architecture of
the x3650 M5 makes it a perfect fit.
This System x3650 M5 solution for Exchange Server 2013 provides businesses with an
affordable, interoperable, and reliable industry-leading email solution. This reference
architecture combines Microsoft software, System x servers, consolidated guidance, and
validated configurations to provide a high level of redundancy and fault tolerance to ensure
high availability of email resources.
Architectural overview
The design consists of four x3650 M5 servers that are part of a database availability group
(DAG) that spans two data centers, with two servers in each data center. Four copies of each
mailbox database (two in each datacenter) provide fault tolerance and site resiliency, and
eliminate the need for traditional backups.
Figure 1 shows the overall architecture of the solution.
Figure 1 Architectural overview: a database availability group spans the primary (active) and secondary (passive) datacenters, each with management servers, domain controllers, network switches, and a Layer 4 load balancer. The witness server is in the primary datacenter. Four Mailbox/CAS servers (two per site) host database copies 1 through 4, with MAPI and replication traffic on separate VLANs of the corporate network.
For the storage configuration, the Exchange databases and logs are hosted on the servers’
internal disks rather than an external disk system. Each disk that is hosting Exchange mailbox
databases and logs is defined as a discrete, single-disk RAID0 array, which creates a
JBOD-like design while using the RAID controller’s 2 GB cache for maximum performance.
Each server has the mailbox and the CAS role installed (multi-role).
The native high availability features in Exchange Server 2013 help eliminate single points of
failure without having to rely on hardware high availability features as heavily as in previous
releases of Microsoft Exchange Server. Therefore, users have near-continuous access to
business-critical email resources.
Multiple paths connect the servers to the networking infrastructure to maintain access to
critical resources if there is a planned or unplanned outage. A hardware network load
balancer is also used to balance network connections between the CAS servers.
A witness server is placed at the primary site. The witness server gives the primary site three
votes (its two mailbox servers plus the witness), as opposed to the secondary site’s two votes
(one from each of its mailbox servers). If there is a WAN outage, the mailbox servers at the
primary site remain active. Because all mail users are at the same site as the primary
datacenter, no users lose access to email. When the WAN is restored, replication resumes.
Software and hardware overview
This section provides a short summary of the software and hardware components that are
used in this reference architecture.
The reference configuration is constructed of the following enterprise-class components:
• Four System x3650 M5 servers that are members of a DAG and installed with the
Exchange Server Mailbox and CAS roles
• Microsoft Exchange Server 2013 SP1
• Microsoft Windows Server 2012 R2
Together, these software and hardware components form a cost-effective solution that
supports an Exchange Server 2013 mail environment that is flexible and scalable enough to
support a user population of any size (in multiples of 5,000 users) when it is deployed in
aggregate (four servers support 5,000 users, eight servers support 10,000 users, and so on).
This design consists of four System x3650 M5 servers that are running the Microsoft
Windows 2012 R2 operating system and installed with Microsoft Exchange Server 2013 SP1.
System x3650 M5
At the core of this solution, the System x3650 M5 server delivers the performance and
reliability that is required for business-critical applications, such as Exchange Server 2013.
System x3650 M5 servers can be equipped with up to two 18-core Intel Xeon E5-2600 v3
processors and up to 3 TB of TruDDR4™ memory. Up to seven PCIe 3.0 expansion slots,
four 1 Gb network ports, and an optional embedded dual-port 10 GbE network adapter
provide ports for both your data and storage connections.
The x3650 M5 includes an on-board RAID controller and the choice of spinning hot swap SAS
or SATA disks and SFF hot swap solid-state drives (SSDs). The large number of HDD slots in
the x3650 M5 makes it the perfect platform for running Microsoft Exchange Server mail
environments from local disk.
The x3650 M5 supports the following components:
• Up to 8 2.5-inch Gen3 Simple Swap HDDs
• Up to 8 3.5-inch Simple Swap HDDs
• Up to 24+2+2 SAS/SATA 2.5-inch Gen3 Hot Swap HDDs
• Up to 12+2 SAS/SATA 3.5-inch Hot Swap HDDs
The x3650 M5 also supports remote management via the Integrated Management Module
(IMM), which enables continuous management capabilities. All of these key features,
including many that are not listed here, help solidify the dependability that customers are
accustomed to with System x servers.
Figure 2 shows the System x3650 M5.
Figure 2 System x3650 M5 with 12 3.5-inch drive bays at the front
ServeRAID M5210 RAID controller
The ServeRAID™ M5210 SAS/SATA controller for System x is part of the ServeRAID M
Series family, which offers a complete server storage solution that consists of RAID
controllers, cache and flash modules, energy packs, and software feature upgrades in an
ultra-flexible offerings structure. The M5210 is a small form factor PCIe adapter.
Two internal x4 HD Mini-SAS connectors provide connections to up to 32 internal drives
(depending on the server model). This reference architecture uses the optional 2 GB onboard
data cache upgrade (DDR3 running at 1866 MHz) with battery backup.
Figure 3 shows the ServeRAID M5210 Controller with an optional cache installed.
Figure 3 ServeRAID M5210 SAS/SATA Controller with optional cache installed
Note: When the ServeRAID M5210 RAID controller is used in a pure JBOD (just a bunch
of disks) configuration, the controller cannot be installed with the optional onboard data
cache (DDR3 that is running at 1866 MHz) with battery backup; the disk drives are passed
directly through to the operating system. Pure JBOD deployments are significantly affected
by the lack of battery-backed cache; therefore, to allow the use of cache, RAID0 is used to
create discrete, single-disk arrays.
Microsoft Windows Server 2012 R2
Windows Server 2012 R2 provides the enterprise with a scalable and highly elastic platform
for mission-critical applications. The operating system supports up to 4 TB of RAM, 320
logical processors, and 64 nodes per cluster. It also includes updates to Active Directory to
help the performance of applications such as Exchange.
Deployment
Figure 4 shows the hardware as it is deployed in the data centers. As shown in Figure 4, each
rack includes two System x3650 M5 servers, two top-of-rack network switches, and a network
load balancer.
Figure 4 Deployed hardware: in both the primary (active) and secondary (passive) datacenters, the rack contains two top-of-rack networking switches, a network load balancer, and two x3650 M5 servers.
Note: The x3650 M5 servers are the only hardware components that are covered in-depth
in this reference architecture.
Functional requirements
This section describes the designed function of the reference architecture and the customer
profile and design requirements.
To demonstrate functionality and design decision points, this paper presents a case study of a
fictitious organization with 5,000 employees. The employee population is at the same site as
the primary data center. A single namespace spans both sites to use new site resiliency
features that are included in Exchange Server 2013.
User profile
The company determined that each user sends and receives approximately 100 emails per
day, with an average email size of 75 KB. Each user is assigned a 3,500 MB mailbox. The
organization requires a deleted item retention window of 14 days to give users ample time to
recover unintentionally deleted emails.
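This retention window maps to the DeletedItemRetention setting on each mailbox database. The following Exchange Management Shell sketch shows how such a policy might be applied; the database name DB1 is a hypothetical placeholder:

    # Set a 14-day deleted item retention window on a mailbox database
    # ("DB1" is a hypothetical database name)
    Set-MailboxDatabase -Identity "DB1" -DeletedItemRetention 14.00:00:00

    # Verify the setting
    Get-MailboxDatabase -Identity "DB1" | Format-List Name,DeletedItemRetention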
High availability and site resiliency
If an organization has multiple data centers, the Exchange Server infrastructure can be
deployed in one site or distributed across two or more sites. Typically, the service level
agreement that is in place determines the degree of high availability and the placement of the
servers.
In this solution, the organization has two data centers with a user population that is co-located
with the primary datacenter. The organization determined that site resiliency is required;
therefore, the Exchange Server 2013 design is based on a multiple site deployment with site
resiliency. The secondary data center is passive and used for disaster recovery, if needed.
The organization also decided a 24-hour recovery point objective is sufficient.
Backup and recovery
Exchange Server 2013 includes several native features that provide data protection that,
when implemented correctly, can eliminate the need for traditional backups. Such backups
often are used for disaster recovery, recovery of accidentally deleted items, long-term data
storage, and point-in-time database recovery. Each of these scenarios is addressed with
features in Exchange Server 2013, such as high availability database copies in a DAG, the
Recoverable Items folder with the optional Hold Policy, archiving, multiple mailbox search,
message retention features, and lagged database copies.
If there is a server failure and recovery (or reseeding) is necessary, rebuilding a failed
database might take hours or even days when Exchange Server 2013 native data protection
features are used. Having a traditional backup can greatly reduce the time that is required to
bring a failed database back online. However, the downsides to the use of backups are not
insignificant; the administrative overhead, licensing costs, and the extra storage capacity that
is required for the backup files can greatly increase the total cost of ownership of the solution.
In this solution, the organization decided to forgo traditional backups in favor of the use of an
Exchange Server 2013 native data protection strategy.
Database copies
Before you determine the number of database copies that are needed, it is important to
understand the following types of database copies:
• High availability database copy
This type of database copy has a log replay time of zero seconds. When a change is made
in the active database copy, changes are immediately replicated to passive database
copies.
System x3650 M5 Scalable Solution for Microsoft Exchange Server 2013 Using Internal Disks
7
• Lagged database copy
This type of database copy has a pre-configured delay that is built into the log replay time.
When a change is implemented in the active database copy, the logs are copied over to
the server hosting the lagged database copy, but are not immediately implemented. This
configuration provides point-in-time protection, which can be used to recover from logical
corruption of a database (logical corruption occurs when data is added, deleted, or
manipulated in a way the user or administrator did not expect). Lagged database copies
allow up to 14 days of lagged data.
Log replay time for lagged database copies: Lenovo recommends the use of a replay
lag time of 72 hours. This lag time gives administrators time to detect logical corruption that
occurred at the start of a weekend.
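Although this reference architecture does not deploy lagged copies, the recommended 72-hour lag could be configured as in the following sketch; the database and server names are hypothetical:

    # Configure a 72-hour replay lag on a passive database copy
    # ("DB1" and "MBX4" are hypothetical names)
    Set-MailboxDatabaseCopy -Identity "DB1\MBX4" -ReplayLagTime 3.00:00:00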
Another factor to consider when you are choosing the number of database copies is
serviceability of the hardware. If only one high availability database copy is present at each
site, the administrator must switch over to the database copy that is hosted at the secondary
datacenter whenever a server is powered off for servicing. To avoid this issue, maintaining a
second database copy at the same geographic location as the active database copy
preserves hardware serviceability and reduces administrative overhead.
Improvements to the Exchange Server 2013 namespace design reduced the complexity of
this action, which makes datacenter switchover more transparent than in Exchange 2010.
Microsoft recommends having a minimum of three high availability database copies before
removing traditional forms of backup. Microsoft also recommends having at least two
database copies at each site when a JBOD storage design is used. Our example organization
chose to forgo traditional forms of backups and is using cache-enabled JBOD rather than
traditional RAID arrays; therefore, the organization requires at least two copies of each
database at each site.
The organization determined that a 14-day deleted item retention window provides sufficient
protection and decided not to deploy lagged database copies.
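After the copies are deployed, their health can be checked per database. A minimal sketch, assuming a hypothetical database named DB1:

    # Show the status of every copy of DB1 across the DAG
    Get-MailboxDatabaseCopyStatus -Identity "DB1" |
        Format-Table Name,Status,CopyQueueLength,ReplayQueueLength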
Database Availability Groups
The database availability group (DAG) is the building block for highly available and
disaster-recoverable solutions. DAGs were first introduced in Exchange 2010 and evolved
from cluster continuous replication (CCR) in Exchange 2007.
A DAG is composed of up to 16 mailbox servers that host a set of replicated databases and
provide automatic database-level recovery from failures that affect individual servers or
databases.
Microsoft recommends minimizing the number of DAGs that are deployed for administrative
simplicity. However, multiple DAGs are required in the following circumstances:
• More than 16 mailbox servers are deployed.
• Active mailbox users are in multiple sites (active/active site configuration).
• Separate DAG-level administrative boundaries are required.
• Mailbox servers are in separate domains (the DAG is domain bound).
In this reference architecture, the organization is deploying an active/passive site
configuration (active users at the primary site only); therefore, the organization uses one
DAG, which spans both sites. A file share witness in the primary datacenter gives that site
three votes, as opposed to the two votes at the secondary datacenter (one from each of its
two mailbox servers).
If there is a WAN outage, the database copies in the primary datacenter remain active. No
users lose access to email during the WAN outage. After the WAN is restored, replication
resumes to the secondary datacenter.
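A minimal Exchange Management Shell sketch of this topology follows; the DAG, server, and witness names are hypothetical placeholders:

    # Create the DAG with a file share witness in the primary datacenter
    New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer "FSW1" `
        -WitnessDirectory "C:\DAG1"

    # Add the two primary-site and two secondary-site Mailbox/CAS servers
    "MBX1","MBX2","MBX3","MBX4" | ForEach-Object {
        Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer $_
    }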
Best practice and implementation guidelines
A successful Microsoft Exchange Server deployment and operation can be significantly
attributed to a set of test-proven planning and deployment techniques. Proper planning
includes sizing the server resources (CPU and memory), storage (capacity and IOPS), and
networking (bandwidth and VLAN assignment) that are needed to support the infrastructure.
This plan can then be implemented by using industry-standard best practices to achieve
optimal performance and the growth headroom that is necessary for the life of the solution.
Configuration best practices and implementation guidelines, which aid in planning and
configuration of the solution, are described in the following sections:
• “Racking and power distribution”
• “System x3650 M5 Setup”
• “Network configuration” on page 10
• “Storage design and configuration” on page 12
• “Exchange Server 2013 database and log placement” on page 13
Racking and power distribution
Power distribution units (PDUs) and their cabling should be installed before any system is
racked. When you are cabling the PDUs, consider the following points:
• Ensure sufficient, separate electrical circuits and receptacles to support the required
PDUs.
• To minimize the chance of a single electrical circuit failure taking down a device, ensure
that there are sufficient PDUs to feed redundant power supplies by using separate
electrical circuits.
• For devices that have redundant power supplies, plan for individual electrical cords from
separate PDUs.
• Maintain appropriate shielding and surge suppression practices and use appropriate
battery back-up techniques.
System x3650 M5 Setup
The Exchange Server 2013 architecture consists of four dual-socket System x3650 M5
servers with the following configuration:
• 2x Intel Xeon E5-2640 v3 8-core 2.6 GHz processors
• 12x 16 GB TruDDR4 Memory (2Rx4, 1.2 V) PC4-17000 CL15 2133 MHz LP RDIMM
• 10x 4 TB NL SAS 3.5-inch Hot Swap HDDs
• 2x 2 TB NL SAS 3.5-inch Hot Swap HDDs
• ServeRAID M5210 SAS/SATA Controller for System x
• ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
• On-board quad-port Broadcom BCM5719 1Gb network adapter
Setup involves the installation and configuration of Windows Server 2012 R2, storage, and
networking configuration on each server.
The following pre-operating system installation steps are used:
1. Validate that firmware levels are consistent across all servers.
2. Verify that the two 2 TB NL SAS local disks are configured as a RAID1 array (for the
operating system). (For more information, see “Appendix B: Configuring the RAID1 boot
array on the ServeRAID M5210” on page 20.)
The following Windows installation and configuration steps are used:
1. Install Windows Server 2012 R2.
2. Set your server name and join the domain.
3. Install the Exchange Server 2013 prerequisite features. For more information about
prerequisites, see this website:
http://technet.microsoft.com/en-us/library/bb691354(v=exchg.150).aspx
4. Run Windows Update to ensure that any new patches are installed.
Note: All the servers must have the same software updates (patches) and service packs.
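As a hedged convenience, the required Windows features can be installed from PowerShell before running Windows Update; the feature list below is a commonly used subset, and the authoritative list is on the TechNet page in step 3:

    # Install Windows features that Exchange Server 2013 requires on
    # Windows Server 2012 R2 (verify the full list against TechNet)
    Install-WindowsFeature RSAT-ADDS, Desktop-Experience, NET-Framework-45-Features,
        RPC-over-HTTP-proxy, RSAT-Clustering, Web-Mgmt-Console, WAS-Process-Model,
        Web-Asp-Net45, Web-Basic-Auth, Web-Windows-Auth, Web-Metabase, Web-Server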
Network configuration
This section describes the networking topology and best practices to correctly configure the
network environment.
In our solution, we rely heavily on the use of virtual LANs (VLANs), which are a way to
logically segment networks to increase network flexibility without changing the physical
network topology. With network segmentation, each switch port connects to a segment that is
a single broadcast domain. When a switch port is configured to be a member of a VLAN, it is
added to a group of ports that belong to one broadcast domain. Each VLAN is identified by a
VLAN identifier (VID). A VID is a 12-bit portion of the VLAN tag in the frame header that
identifies an explicit VLAN.
Network topology design
Two isolated networks are recommended to support this reference architecture: a MAPI
network and a Replication network.
The use of two network ports in each DAG member provides you with one MAPI network and
one Replication network, with redundancy for the Replication network and the following
recovery behaviors:
• If there is a failure that affects the MAPI network, a server failover occurs (assuming there
are healthy mailbox database copies that can be activated).
• If there is a failure that affects the Replication network, log shipping and seeding
operations revert to using the MAPI network if the MAPI network is unaffected by the
failure, even if the MAPI network has its ReplicationEnabled property set to False. When
the failed Replication network is restored and ready to resume log shipping and seeding
operations, you must manually switch over to the Replication network.
To change replication from the MAPI network to a restored Replication network, you can
suspend and resume continuous replication by using the Suspend-MailboxDatabaseCopy
and Resume-MailboxDatabaseCopy cmdlets, or restart the Microsoft Exchange Replication
service.
Microsoft recommends the use of suspend and resume operations to avoid the brief
outage that is caused by restarting the Microsoft Exchange Replication service.
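A sketch of the suspend and resume approach, with a hypothetical copy identity:

    # Suspend and resume continuous replication for one database copy so
    # that log shipping moves back to the restored Replication network
    Suspend-MailboxDatabaseCopy -Identity "DB1\MBX2" -Confirm:$false
    Resume-MailboxDatabaseCopy -Identity "DB1\MBX2"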
A combination of physical and virtual isolated networks is configured at the server and the
switch layers to satisfy isolation best practices.
Each x3650 M5 server is equipped with an on-board quad-port Broadcom BCM5719 1 Gb
network adapter that is used for all data traffic. Each server maintains two 1 Gb connections
to each of the network switches that are used in this reference architecture.
The connections between the servers and the switches are shown in Figure 5.
Note: If more network ports are needed for management or monitoring networks, extra
network adapters can be purchased and installed.
Figure 5 Network connections between host servers and network switches at one of the data centers: each Mailbox/CAS server maintains two connections to each of the two network switches, which uplink to the corporate network.
Windows Server 2012 R2 NIC teaming can be used to provide fault tolerance and load
balancing to all of the networks (MAPI network and Replication network). This setup allows
the most efficient use of network resources with a highly optimized configuration for network
connectivity.
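For example, a switch-independent team might be created as follows; the team and adapter names are hypothetical:

    # Create a Windows Server 2012 R2 NIC team for the MAPI network
    New-NetLbfoTeam -Name "MAPI-Team" -TeamMembers "NIC1","NIC3" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic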
At the switch layer, VLANs should be used to provide logical isolation between the various
networks. A key element is properly configuring the switches to maximize available bandwidth
and reduce congestion. However, based on individual environment preferences, there is
flexibility regarding how many VLANs are created and what type of role-based traffic they
handle. After a final selection is made, ensure that the switch configurations are saved and
backed up.
Before continuing, test the network implementation thoroughly to ensure that communication
is not lost despite the loss of a network switch or connection.
Storage design and configuration
This section describes the storage topology and includes instructions to correctly configure
the internal storage on the x3650 M5.
Key storage concepts and terminology
This section describes the following basic concepts and terminology that are used throughout
the next sections:
• JBOD
An architecture that includes multiple drives, while making them accessible as
independent drives or a combined (spanned) single logical volume with no actual RAID
functionality.
• RAID0
RAID0 comprises striping (but not parity or mirroring). This level provides no data
redundancy or fault tolerance. RAID0 has no error detection mechanism; therefore, the
failure of one disk causes the loss of all data on the array.
• RAID1
RAID1 creates an exact copy (or mirror) of a set of data on two disks. This configuration is
useful when read performance or reliability is more important than data storage capacity.
A classic RAID1 mirrored pair contains two disks.
• Cache-enabled JBOD
This storage design uses discrete, single-disk RAID0 arrays to maintain the independent
disk design of a traditional JBOD configuration while also using the cache of the storage
controller.
• AutoReseed
AutoReseed is a new high availability feature for Database Availability Groups in
Exchange Server 2013. After a disk failure, this feature automatically reseeds the affected
database to a “pool” of volumes that were pre-configured for this purpose. If the failed
drive included an active database, Exchange fails over to one of the other passive copies
and reseeds the failed database to the AutoReseed volume. If the failed drive contained
one of the passive copies, Exchange reseeds the database on the same drive. This
feature allows administrators to replace failed drives at their leisure, rather than being
on-site to repair the failed hardware when the failure occurs. (A configuration sketch
follows this list.)
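AutoReseed is driven by three DAG-level properties that point at the volume pool and database mount-point roots. A minimal sketch, assuming a hypothetical DAG name and the folder layout of this design (two database copies per volume):

    # Point AutoReseed at the volume pool and database mount-point roots
    Set-DatabaseAvailabilityGroup -Identity "DAG1" `
        -AutoDagVolumesRootFolderPath "C:\ExchangeVolumes" `
        -AutoDagDatabasesRootFolderPath "C:\ExchangeDatabases" `
        -AutoDagDatabaseCopiesPerVolume 2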
Storage Partitioning
This section describes the storage partitioning that is required to support the configuration as
described in this document. Figure 6 on page 13 shows the volumes and the underlying RAID
architecture.
A RAID1 array that uses the 2 TB disks in slots 0 and 1 is used for the Windows operating
system and the Exchange Transport database.
In our Exchange testing, we found the cache to have a significant performance effect. With
cache enabled, the performance bottleneck shifts from the disk subsystem to the server’s
processors. Therefore, to use the cache, 10 drives are configured as discrete, individual
RAID0 arrays. These arrays host the Exchange database files and log files, a restore logical
unit number (LUN), and an AutoReseed LUN.
Figure 6 RAID1 array for the OS and Transport database, and 10 RAID0 arrays formed from individual drives
Storage Configuration
Configuration of the ServeRAID M5210 is performed from within the UEFI shell of the server.
For more information about creating a RAID1 boot array for the Windows operating system,
see “Appendix B: Configuring the RAID1 boot array on the ServeRAID M5210” on page 20.
For more information about creating the RAID0 single-disk arrays for the Exchange
databases and the Restore and AutoReseed volumes, see “Appendix C: Configuring the
RAID0 single-disk arrays on the ServeRAID M5210” on page 24.
After the RAID0 arrays are created, create the volumes from within the Windows operating
system. For more information about creating volumes, see “Appendix D: Creating volumes”
on page 27.
Exchange Server 2013 database and log placement
This section describes the Exchange Server 2013 mailbox database and log placement.
Because our example organization chose to forgo traditional forms of backups and is using
cache-enabled JBOD rather than a standard RAID configuration, the organization requires at
least two copies of each database at each site for a total of four copies of each database. A
total of 16 active databases are required to support the user population of 5,000 users. Each
of the 16 active databases has three more passive copies to provide high availability.
Therefore, 64 database copies are needed to support the entire user population. The
databases are divided between four servers; therefore, each server hosts 16 databases.
Each volume (except for the operating system, Restore, and AutoReseed volumes) hosts two
database copies. The log files for each of the databases are placed on the same volume in
separate folders.
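A hedged sketch of how one database and its copies might be created to match this layout (the server names are hypothetical; the E:\DB1 and E:\LOG1 paths match the volume layout that is used in Appendix E):

    # Create DB1 with its logs on the same volume, then add three copies
    New-MailboxDatabase -Name "DB1" -Server "MBX1" `
        -EdbFilePath "E:\DB1\DB1.edb" -LogFolderPath "E:\LOG1"

    Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "MBX2"  # primary site
    Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "MBX3"  # secondary site
    Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "MBX4"  # secondary site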
Figure 7 on page 14 shows the database distribution between the servers. The logs follow the
same pattern and are on the same drives as their respective databases.
Figure 7 Database copy distribution between the servers: each of the four servers hosts a copy of DB1 through DB16 (two databases per volume) plus a Restore volume and an AutoReseed volume alongside the RAID1 OS/Transport array. The 16 active copies are split between the two servers in the primary (active) datacenter; all other copies, including those on the two servers in the secondary (passive) datacenter, are passive members of DAG-1.
Solution validation methodology
Correctly designing an Exchange Server 2013 infrastructure requires accurate sizing of the
servers and the storage and stress testing of the storage to ensure it can handle peak loads.
Microsoft provides two tools to evaluate both aspects of an Exchange environment: the
Exchange Server 2013 Server Role Requirements Calculator for sizing the servers and
storage, and the Microsoft Exchange Server Jetstress 2013 Tool for testing the performance
of the storage.
Storage validation
Storage performance is critical in any type of Exchange deployment. A poorly performing
storage subsystem results in high transaction latency, which affects the user experience. It is
important to correctly validate storage sizing and configuration when Exchange is deployed in
any real-world scenario.
To facilitate the validation of Exchange storage sizing and configuration, Microsoft provides
the Microsoft Exchange Server Jetstress 2013 Tool. Jetstress simulates an Exchange I/O
workload at the database level by interacting directly with the Extensible Storage Engine
(ESE). The ESE is the database technology that Exchange uses to store messaging data on
the Mailbox server role. The Jetstress utility can simulate a target profile of user count and
per-user IOPS and validate that the storage subsystem can maintain an acceptable level of
performance with the target profile. Test duration is adjustable and can be set to an extended
period to validate storage subsystem reliability.
Testing storage systems by using Jetstress focuses primarily on database read latency, log
write latency, processor time, and the number of transition pages that are repurposed per
second (an indicator of ESE cache pressure). Jetstress returns a Pass or Fail report,
depending on how well the storage is performing.
To ensure valid performance results, the storage was subjected to rigorous testing to
establish a baseline, so that any potential bottleneck could be conclusively isolated and
identified as a genuine server performance issue.
Note: In normal operations, two servers host the 16 active databases (8 active databases
per server in the active datacenter). However, to ensure that a single server can handle the
workload if the second server was down for maintenance, a single server was tested with
16 active databases.
Performance Testing Results
Table 1 lists the parameters that were used to evaluate the storage performance.
Table 1 Testing parameters

Parameter | Value
Database Sizing
Database Files (Count) | 16
Number of Users | 5000
IOPS Per User | 0.10
Mailbox Size (MB) | 3500
Jetstress System Parameters
Thread Count | 10
Minimum Database Cache | 512 MB
Maximum Database Cache | 4096.0 MB
Insert Operations | 40%
Delete Operations | 20%
Replace Operations | 5%
Read Operations | 35%
Lazy Commits | 70%
Run Background Database Maintenance | True
Number of Copies per Database | 4
Table 2 presents the target IOPS and the IOPS that were achieved during the 24-hour testing
cycle. The target of 500 transactional IOPS follows from the user profile (5,000 users × 0.10
IOPS per user); the achieved IOPS exceeded this target by 55.756.
Table 2 Database Sizing and Throughput

Parameter | Value
Achieved Transactional I/O per Second | 555.756
Target Transactional I/O per Second | 500
Jetstress evaluates latency values for Database Reads and Log writes because these affect
the user experience.
Table 3 displays the test results of the load on the database files. The second column
(Database Reads Avg Latency) should not exceed 20 msec; a higher value indicates that the
storage cannot handle the sustained peak workload. In our testing, the maximum value is
9.79 msec, which is well below the 20 msec limit.
Table 3 Testing results for database files

Database Instances | Database Reads Avg Latency (msec) | Database Writes Avg Latency (msec) | Database Reads/sec | Database Writes/sec | Database Reads Avg Bytes | Database Writes Avg Bytes
Database 1 | 9.484 | 0.779 | 33.127 | 10.701 | 97022.911 | 37286.02
Database 2 | 9.561 | 0.774 | 33.061 | 10.61 | 97292.983 | 37334.507
Database 3 | 9.471 | 0.817 | 33.135 | 10.679 | 97053.051 | 37332.095
Database 4 | 9.635 | 0.816 | 33.108 | 10.67 | 97152.714 | 37320.87
Database 5 | 9.423 | 0.813 | 33.152 | 10.702 | 97024.653 | 37294.253
Database 6 | 9.713 | 0.814 | 33.114 | 10.685 | 97166.529 | 37315.497
Database 7 | 9.452 | 0.797 | 33.147 | 10.679 | 97032.998 | 37294.355
Database 8 | 9.651 | 0.8 | 33.114 | 10.665 | 97173.984 | 37308.086
Database 9 | 9.54 | 0.79 | 33.087 | 10.648 | 97085.963 | 37342.839
Database 10 | 9.79 | 0.795 | 33.103 | 10.68 | 97115.872 | 37308.252
Database 11 | 9.345 | 0.81 | 33.126 | 10.661 | 97030.37 | 37315.01
Database 12 | 9.597 | 0.811 | 33.181 | 10.724 | 97007.538 | 37266.338
Database 13 | 9.281 | 0.845 | 33.107 | 10.659 | 97100.703 | 37328.53
Database 14 | 9.461 | 0.852 | 33.171 | 10.74 | 97024.463 | 37305.502
Database 15 | 9.286 | 0.903 | 33.098 | 10.697 | 97120.82 | 37315.966
Database 16 | 9.476 | 0.91 | 33.188 | 10.717 | 97078.352 | 37285.638
Table 4 displays the test results of the load on the log files. The third column (Log Writes Avg
Latency (msec)) should not exceed 10 msec; a higher value indicates that the storage cannot
handle the workload. In our testing, the maximum value is 0.066 msec, which is well below
the 10 msec limit.
Table 4 Testing results for log files

Database Instances | Log Reads Avg Latency (msec) | Log Writes Avg Latency (msec) | Log Reads/sec | Log Writes/sec | Log Reads Avg Bytes | Log Writes Avg Bytes
Database 1 | 0.418 | 0.065 | 0.555 | 8.027 | 72134.133 | 8059.677
Database 2 | 0.415 | 0.065 | 0.554 | 7.942 | 72065.852 | 8142.75
Database 3 | 0.377 | 0.065 | 0.554 | 8.013 | 71994.351 | 8093.48
Database 4 | 0.379 | 0.065 | 0.556 | 8.001 | 72114.505 | 8118.133
Database 5 | 0.36 | 0.065 | 0.555 | 7.989 | 72037.059 | 8101.961
Database 6 | 0.347 | 0.066 | 0.557 | 7.998 | 72353.925 | 8140.286
Database 7 | 0.388 | 0.065 | 0.553 | 8.001 | 71937.236 | 8128.899
Database 8 | 0.365 | 0.065 | 0.556 | 7.962 | 72114.505 | 8136.209
Database 9 | 0.355 | 0.065 | 0.555 | 7.977 | 72103.65 | 8151.282
Database 10 | 0.359 | 0.065 | 0.558 | 8.038 | 72400.069 | 8096.973
Database 11 | 0.352 | 0.065 | 0.553 | 7.954 | 71848.013 | 8119.665
Database 12 | 0.37 | 0.065 | 0.554 | 8.023 | 71945.965 | 8108.243
Database 13 | 0.358 | 0.065 | 0.555 | 7.995 | 72085.155 | 8153.69
Database 14 | 0.307 | 0.065 | 0.556 | 8.035 | 72247.057 | 8077.712
Database 15 | 0.321 | 0.065 | 0.56 | 8.05 | 72722.668 | 8104.566
Database 16 | 0.31 | 0.066 | 0.554 | 8 | 71952.964 | 8104.541
For more information about the complete results from the 24-hour JetStress test, see
“Appendix E: Microsoft Exchange Jetstress 2013 Stress Test Result Report ” on page 28.
Summary
The System x3650 M5 with cache-enabled JBOD that uses internal disks performed
admirably throughout the test duration. This test demonstrates the capability of the x3650 M5
in supporting 5,000 mail users with 3,500 MB mailboxes.
The System x3650 M5 Scalable Solution for Microsoft Exchange Server 2013 provides a
highly available and site-resilient answer for organizations that are ready to upgrade their
existing mail environment or implement a new low-cost infrastructure. The environment that
is described in this document allows administrators to eliminate traditional backup methods,
which frees critical enterprise resources.
The use of internal disks in a cache-enabled JBOD configuration drastically reduces the
overall cost and complexity of the solution by eliminating SAN or DAS-based storage and
reducing the need for storage administrators. This solution also provides a framework for
scaling up to meet the needs of any sized user population.
References and helpful links
For more information about the topics in this paper, see the following resources:
• High availability and site resilience:
http://technet.microsoft.com/en-us/library/dd638137(v=exchg.150).aspx
• The New Exchange:
http://blogs.technet.com/b/exchange/archive/2012/07/23/the-new-exchange.aspx
• Exchange Server 2013 storage configuration options:
http://technet.microsoft.com/en-us/library/ee832792(v=exchg.150).aspx
• Namespace Planning in Exchange Server 2013:
http://blogs.technet.com/b/exchange/archive/2014/02/28/namespace-planning-in-exchange-2013.aspx
• Non-RAID drive architectures:
http://en.wikipedia.org/wiki/Non-RAID_drive_architectures
• ServeRAID M5210 and M5210e SAS/SATA Controllers Product Guide:
http://www.redbooks.ibm.com/abstracts/tips1069.html
• IBM Support:
http://www.ibm.com/support
• IBM Firmware update and best practices guide:
http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5082923
About the Author
Roland G. Mueller works for Lenovo in Kirkland, Washington. He has a second office at the
Microsoft main campus in Redmond, Washington to facilitate close collaboration with
Microsoft. He specializes in Exchange Server infrastructure sizing, design, and performance
testing. Before joining Lenovo, Roland worked for IBM from 2002 to 2014, specializing in
various technologies, including virtualization, bare-metal server deployment, and Microsoft
Exchange Server.
Appendix A: Bill of materials
Each of the four servers is configured as listed in Table 5.
Table 5 Bill of materials

Feature code | Description | Quantity
5462AC1 | Server1: System x3650 M5 | 1
A5FP | System x3650 M5 PCIe Riser 1 (2 x8 FH/FL + 1 x8 FH/HL Slots) | 1
5977 | Select Storage devices: No IBM configured RAID required | 1
A5R5 | System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots) | 1
A5FF | System x3650 M5 12x 3.5-inch Base without Power Supply | 1
A5EW | System x 900W High Efficiency Platinum AC Power Supply | 2
A5EL | Extra Intel Xeon Processor E5-2640 v3 8C 2.6 GHz 20 MB 1866 MHz 90 W | 1
9206 | No Preload Specify | 1
A5B7 | 16 GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133 MHz LP RDIMM | 12
A5EY | System Documentation and Software-US English | 1
A5GT | Intel Xeon Processor E5-2640 v3 8C 2.6 GHz 20 MB Cache 1866 MHz 90 W | 1
A3YZ | ServeRAID M5210 SAS/SATA Controller | 1
A3Z2 | ServeRAID M5200 Series 2 GB Flash/RAID 5 Upgrade | 1
A5GE | x3650 M5 12x 3.5-inch HS HDD Assembly Kit | 1
A5VQ | IBM 4 TB 7.2K 12 Gbps NL SAS 3.5-inch G2HS 512e HDD | 10
A5VP | IBM 2 TB 7.2K 12 Gbps NL SAS 3.5-inch G2HS 512e HDD | 2
6400 | 2.8m, 13 A/125-10 A/250 V, C13 to IEC 320-C14 Rack Power Cable | 2
A5FV | System x Enterprise Slides Kit | 1
A5G1 | System x3650 M5 IBM EIA Plate | 1
A2HP | Configuration ID 01 | 1
A5V5 | System x3650 M5 Right EIA for Storage Dense Model | 1
A5FM | System x3650 M5 System Level Code | 1
A5FH | System x3650 M5 Agency Label GBM | 1
A5EA | System x3650 M5 Planar | 1
A5G5 | System x3650 M5 Riser Bracket | 2
A5FT | System x3650 M5 Power Paddle Card | 1
A47G | Super Cap Cable 425mm for ServRAID M5200 Series Flash | 1
5374CM1 | HIPO: Configuration Instruction | 1
A2HP | Configuration ID 01 | 1
A46P | ServeRAID M5210 SAS/SATA Controller Placement | 1
A46S | ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade Placement | 1
A2JX | Controller 01 | 1
6756ND6 | Service pack1: Lenovo RTS for System x - Base - 3yr | 1
67568HG | Service pack2: 3 Year Onsite Repair 24x7 4 Hour Response | 1
Appendix B: Configuring the RAID1 boot array on the
ServeRAID M5210
Configuration of the ServeRAID M5210 is performed from within the UEFI shell of the server.
Complete the following steps to configure the internal storage:
1. Power on the server you want to configure and press F1 to enter UEFI Setup when the
UEFI splash window opens.
2. From the UEFI Setup menu, select System Settings and press Enter, as shown in
Figure 8.
Figure 8 UEFI Setup main menu
3. From the System Settings menu, scroll down and select Storage and press Enter.
4. From the Storage menu, select the storage adapter and press Enter, as shown in Figure 9.
Figure 9 Available RAID adapters
5. From the Main menu, select Configuration Management and press Enter, as shown in
Figure 10.
Figure 10 ServeRAID M5210 Main Menu
6. From the Configuration Management menu, select Create Virtual Drive - Advanced and
press Enter.
7. From the Create Virtual Drive - Advanced menu, ensure that RAID1 is selected as the
RAID level and 256 KB is selected as the Stripe size (be aware that it is spelled “Strip
size”). Select Select Drives and press Enter, as shown in Figure 11.
Figure 11 Create Virtual Drive menu that shows correct RAID level and Stripe size
8. From the Select Drives menu, highlight the HDDs in slots 0 and 1 and press Enter to
select them. Then, select Apply Changes and press Enter, as shown in Figure 12.
Figure 12 Select Drives menu
9. From the Success page, select OK and press Enter. You are returned to the Create Virtual
Drive – Advanced menu.
10.From the Create Virtual Drive - Advanced menu, select Save Configuration and press
Enter.
11.From the Warning page (as shown in Figure 13), highlight Confirm and press Enter to
select it. Next, select Yes and press Enter.
Figure 13 Warning page
12.From the Success page, select OK and press Enter.
13.Press Esc twice to return to the Main Menu. Select Controller Management and press
Enter, as shown in Figure 14.
Figure 14 Selecting Controller Management
14.From the Controller Management page (as shown in Figure 15), scroll down and select
Boot Device and press Enter.
Figure 15 Selecting a boot device
15.Select the RAID1 array and press Enter.
16.Press Esc to exit to the Main Menu.
17.After you configure the RAID1 boot array, exit UEFI and boot to your installation media to
install the Windows operating system.
Appendix C: Configuring the RAID0 single-disk arrays on the
ServeRAID M5210
Configuration of the ServeRAID M5210 is performed from within the UEFI shell of the server.
Complete the following steps to configure the internal storage:
1. Power on the server that you want to configure and press F1 to enter UEFI Setup when
the UEFI splash window opens.
2. From the UEFI Setup menu, select System Settings (as shown in Figure 16 on page 25)
and press Enter.
Figure 16 UEFI Setup main menu
3. From the System Settings menu, scroll down and select Storage and press Enter.
4. From the Storage menu, select the storage adapter (see Figure 17) and press Enter.
Figure 17 Available RAID adapters
5. From the Main menu, select Configuration Management (as shown in Figure 18) and
press Enter.
Figure 18 ServeRAID M5210 Main Menu
6. From the Configuration Management menu, select Create Virtual Drive - Advanced and
press Enter.
7. From the Create Virtual Drive - Advanced menu, ensure that RAID0 is selected as the
RAID level and 256 KB is selected as the Stripe size and then select Select Drives and
press Enter (see Figure 19).
Figure 19 Create Virtual Drive menu that shows correct RAID level and Stripe size
8. From the Select Drives menu, highlight the first HDD and press Enter to select it, as
shown in Figure 20. Then, select Apply Changes and press Enter.
Figure 20 Select Drives menu
9. From the Success page, select OK and press Enter. You are returned to the Create Virtual
Drive – Advanced menu.
10.From the Create Virtual Drive - Advanced menu, select Save Configuration and press
Enter.
11.From the Warning page (see Figure 21), highlight Confirm and press Enter to select it.
Next, select Yes and press Enter.
Figure 21 Warning page
12.From the Success page, select OK and press Enter.
13.Repeat steps 7 - 12 to build RAID0 arrays on each individual remaining drive.
14.After you configure the drives, exit UEFI and boot to Windows normally.
When you are finished, each of the remaining 10 HDDs in the front of the server should be a
discrete RAID0 array.
Appendix D: Creating volumes
After you create the RAID0 single-disk arrays, volumes must be defined and assigned mount
points or drive letters from within the Windows operating system. Complete the following
steps:
1. Open Server Manager and click File and Storage Services from the navigation tree.
2. Click Disks, and then bring each of the disks online and initialize them as GPT by
right-clicking each disk and clicking Bring Online or Initialize Disk from the menu.
3. Right-click each disk and click New Volume.
4. Click Next to bypass the Before You Begin page.
5. On the Server and Disk page, verify that the correct disk is selected and click Next.
6. Click Next to accept the default size, which uses the entire disk.
7. On the Drive Letter or Folder page, select a drive letter or choose a folder as a mount point
for the volume and click Next to continue.
8. On the File System Settings page, verify that NTFS is selected as the File System and use
the drop-down menu to select 64K as the Allocation unit size, as shown in Figure 22. Click
Next to continue.
Figure 22 File System Settings window
9. In the Confirmation window, click Create to create the volume. Click Close after the
process completes.
10.Repeat these steps for the remaining 11 disks.
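The same result can be scripted with the Windows storage cmdlets. A minimal sketch, assuming the data disks are the ones that are currently offline; verify disk numbers before running this against a production server:

    # Bring each offline data disk online, initialize it as GPT, and
    # create one NTFS volume with a 64 KB allocation unit size
    Get-Disk | Where-Object IsOffline | ForEach-Object {
        Set-Disk -Number $_.Number -IsOffline:$false
        Initialize-Disk -Number $_.Number -PartitionStyle GPT
        New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
            Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$false
    }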
Appendix E: Microsoft Exchange Jetstress 2013 Stress Test
Result Report
Test Summary

Overall Test Result | Pass
Machine Name | WIN-6RCTC0BTSAJ
Test Description | 5,000 users; 4 database copies; 3500 MB mailbox; 0.10 IOPS; 2 databases per volume with logs; 8 disks used + 1 restore + 1 autoreseed; 10 threads
Test Start Time | 11/11/2014 9:08:30 AM
Test End Time | 11/12/2014 9:17:42 AM
Collection Start Time | 11/11/2014 9:17:18 AM
Collection End Time | 11/12/2014 9:17:19 AM
Jetstress Version | 15.00.0658.004
ESE Version | 15.00.0516.026
Operating System | Windows Server 2012 R2 Datacenter (6.2.9200.0)
Performance Log | C:\Program Files\Exchange Jetstress\Stress_2014_11_11_9_9_3.blg

Database Sizing and Throughput

Achieved Transactional I/O per Second | 555.756
Target Transactional I/O per Second | 500
Initial Database Size (bytes) | 18352035594240
Final Database Size (bytes) | 18369081245696
Database Files (Count) | 16

Jetstress System Parameters

Thread Count | 10
Minimum Database Cache | 512.0 MB
Maximum Database Cache | 4096.0 MB
Insert Operations | 40%
Delete Operations | 20%
Replace Operations | 5%
Read Operations | 35%
Lazy Commits | 70%
Run Background Database Maintenance | True
Number of Copies per Database | 4
Database Configuration

Instance2416.1 | Log path: E:\LOG1 | Database: E:\DB1\Jetstress001001.edb
Instance2416.2 | Log path: E:\LOG2 | Database: E:\DB2\Jetstress002001.edb
Instance2416.3 | Log path: F:\LOG3 | Database: F:\DB3\Jetstress003001.edb
Instance2416.4 | Log path: F:\LOG4 | Database: F:\DB4\Jetstress004001.edb
Instance2416.5 | Log path: G:\LOG5 | Database: G:\DB5\Jetstress005001.edb
Instance2416.6 | Log path: G:\LOG6 | Database: G:\DB6\Jetstress006001.edb
Instance2416.7 | Log path: H:\LOG7 | Database: H:\DB7\Jetstress007001.edb
Instance2416.8 | Log path: H:\LOG8 | Database: H:\DB8\Jetstress008001.edb
Instance2416.9 | Log path: I:\LOG9 | Database: I:\DB9\Jetstress009001.edb
Instance2416.10 | Log path: I:\LOG10 | Database: I:\DB10\Jetstress010001.edb
Instance2416.11 | Log path: J:\LOG11 | Database: J:\DB11\Jetstress011001.edb
Instance2416.12 | Log path: J:\LOG12 | Database: J:\DB12\Jetstress012001.edb
Instance2416.13 | Log path: K:\LOG13 | Database: K:\DB13\Jetstress013001.edb
Instance2416.14 | Log path: K:\LOG14 | Database: K:\DB14\Jetstress014001.edb
Instance2416.15 | Log path: L:\LOG15 | Database: L:\DB15\Jetstress015001.edb
Instance2416.16 | Log path: L:\LOG16 | Database: L:\DB16\Jetstress016001.edb
Transactional I/O Performance
The Transactional I/O Performance results are shown in Table 6.
Table 6 Transactional I/O Performance

MSExchange Database ==> Instances | I/O Database Reads Average Latency (msec) | I/O Database Writes Average Latency (msec) | I/O Database Reads/sec | I/O Database Writes/sec | I/O Database Reads Average Bytes | I/O Database Writes Average Bytes | I/O Log Reads Average Latency (msec) | I/O Log Writes Average Latency (msec) | I/O Log Reads/sec | I/O Log Writes/sec | I/O Log Reads Average Bytes | I/O Log Writes Average Bytes
Instance2416.1 | 9.484 | 0.779 | 24.060 | 10.701 | 34943.000 | 37286.020 | 0.000 | 0.065 | 0.000 | 8.027 | 0.000 | 8059.677
Instance2416.2 | 9.561 | 0.774 | 23.983 | 10.610 | 35025.687 | 37334.507 | 0.000 | 0.065 | 0.000 | 7.942 | 0.000 | 8142.750
Instance2416.3 | 9.471 | 0.817 | 24.068 | 10.679 | 34988.864 | 37332.095 | 0.000 | 0.065 | 0.000 | 8.013 | 0.000 | 8093.480
Instance2416.4 | 9.635 | 0.816 | 24.030 | 10.670 | 34962.288 | 37320.870 | 0.000 | 0.065 | 0.000 | 8.001 | 0.000 | 8118.133
Instance2416.5 | 9.423 | 0.813 | 24.085 | 10.702 | 35001.460 | 37294.253 | 0.000 | 0.065 | 0.000 | 7.989 | 0.000 | 8101.961
Instance2416.6 | 9.713 | 0.814 | 24.037 | 10.685 | 35004.918 | 37315.497 | 0.000 | 0.066 | 0.000 | 7.998 | 0.000 | 8140.286
Instance2416.7 | 9.452 | 0.797 | 24.080 | 10.679 | 34998.128 | 37294.355 | 0.000 | 0.065 | 0.000 | 8.001 | 0.000 | 8128.899
Instance2416.8 | 9.651 | 0.800 | 24.035 | 10.665 | 35004.899 | 37308.086 | 0.000 | 0.065 | 0.000 | 7.962 | 0.000 | 8136.209
Instance2416.9 | 9.540 | 0.790 | 24.021 | 10.648 | 34939.419 | 37342.839 | 0.000 | 0.065 | 0.000 | 7.977 | 0.000 | 8151.282
Instance2416.10 | 9.790 | 0.795 | 24.028 | 10.680 | 34919.938 | 37308.252 | 0.000 | 0.065 | 0.000 | 8.038 | 0.000 | 8096.973
Instance2416.11 | 9.345 | 0.810 | 24.057 | 10.661 | 34920.377 | 37315.010 | 0.000 | 0.065 | 0.000 | 7.954 | 0.000 | 8119.665
Instance2416.12 | 9.597 | 0.811 | 24.100 | 10.724 | 34924.357 | 37266.338 | 0.000 | 0.065 | 0.000 | 8.023 | 0.000 | 8108.243
Instance2416.13 | 9.281 | 0.845 | 24.036 | 10.659 | 34955.596 | 37328.530 | 0.000 | 0.065 | 0.000 | 7.995 | 0.000 | 8153.690
Instance2416.14 | 9.461 | 0.852 | 24.088 | 10.740 | 34895.876 | 37305.502 | 0.000 | 0.065 | 0.000 | 8.035 | 0.000 | 8077.712
Instance2416.15 | 9.286 | 0.903 | 24.026 | 10.697 | 34948.610 | 37315.966 | 0.000 | 0.065 | 0.000 | 8.050 | 0.000 | 8104.566
Instance2416.16 | 9.476 | 0.910 | 24.104 | 10.717 | 35006.508 | 37285.638 | 0.000 | 0.066 | 0.000 | 8.000 | 0.000 | 8104.541
Background Database Maintenance I/O Performance
The Background Database Maintenance I/O Performance results are shown in Table 7.
Table 7 Background Database Maintenance I/O Performance

MSExchange Database ==> Instances | Database Maintenance IO Reads/sec | Database Maintenance IO Reads Average Bytes
Instance2416.1 | 9.067 | 261760.460
Instance2416.2 | 9.078 | 261798.657
Instance2416.3 | 9.067 | 261797.134
Instance2416.4 | 9.078 | 261770.029
Instance2416.5 | 9.067 | 261786.940
Instance2416.6 | 9.077 | 261773.815
Instance2416.7 | 9.067 | 261789.291
Instance2416.8 | 9.078 | 261769.501
Instance2416.9 | 9.066 | 261749.045
Instance2416.10 | 9.075 | 261788.745
Instance2416.11 | 9.069 | 261788.931
Instance2416.12 | 9.081 | 261774.595
Instance2416.13 | 9.071 | 261769.221
Instance2416.14 | 9.083 | 261794.643
Instance2416.15 | 9.072 | 261779.992
Instance2416.16 | 9.084 | 261784.701
Log Replication I/O Performance
The Log Replication I/O Performance results are shown in Table 8.
Table 8 Log Replication I/O Performance

MSExchange Database ==> Instances | I/O Log Reads/sec | I/O Log Reads Average Bytes
Instance2416.1 | 0.555 | 72134.133
Instance2416.2 | 0.554 | 72065.852
Instance2416.3 | 0.554 | 71994.351
Instance2416.4 | 0.556 | 72114.505
Instance2416.5 | 0.555 | 72037.059
Instance2416.6 | 0.557 | 72353.925
Instance2416.7 | 0.553 | 71937.236
Instance2416.8 | 0.556 | 72114.505
Instance2416.9 | 0.555 | 72103.650
Instance2416.10 | 0.558 | 72400.069
Instance2416.11 | 0.553 | 71848.013
Instance2416.12 | 0.554 | 71945.965
Instance2416.13 | 0.555 | 72085.155
Instance2416.14 | 0.556 | 72247.057
Instance2416.15 | 0.560 | 72722.668
Instance2416.16 | 0.554 | 71952.964
Total I/O Performance
The Total I/O Performance results are shown in Table 9.
Table 9 Total I/O Performance

Columns (1) - (12) are the same counters, in the same order, as in Table 6.

Instance          (1)    (2)    (3)     (4)     (5)        (6)        (7)    (8)    (9)    (10)   (11)       (12)
Instance2416.1    9.484  0.779  33.127  10.701  97022.911  37286.020  0.418  0.065  0.555  8.027  72134.133  8059.677
Instance2416.2    9.561  0.774  33.061  10.610  97292.983  37334.507  0.415  0.065  0.554  7.942  72065.852  8142.750
Instance2416.3    9.471  0.817  33.135  10.679  97053.051  37332.095  0.377  0.065  0.554  8.013  71994.351  8093.480
Instance2416.4    9.635  0.816  33.108  10.670  97152.714  37320.870  0.379  0.065  0.556  8.001  72114.505  8118.133
Instance2416.5    9.423  0.813  33.152  10.702  97024.653  37294.253  0.360  0.065  0.555  7.989  72037.059  8101.961
Instance2416.6    9.713  0.814  33.114  10.685  97166.529  37315.497  0.347  0.066  0.557  7.998  72353.925  8140.286
Instance2416.7    9.452  0.797  33.147  10.679  97032.998  37294.355  0.388  0.065  0.553  8.001  71937.236  8128.899
Instance2416.8    9.651  0.800  33.114  10.665  97173.984  37308.086  0.365  0.065  0.556  7.962  72114.505  8136.209
Instance2416.9    9.540  0.790  33.087  10.648  97085.963  37342.839  0.355  0.065  0.555  7.977  72103.650  8151.282
Instance2416.10   9.790  0.795  33.103  10.680  97115.872  37308.252  0.359  0.065  0.558  8.038  72400.069  8096.973
Instance2416.11   9.345  0.810  33.126  10.661  97030.370  37315.010  0.352  0.065  0.553  7.954  71848.013  8119.665
Instance2416.12   9.597  0.811  33.181  10.724  97007.538  37266.338  0.370  0.065  0.554  8.023  71945.965  8108.243
Instance2416.13   9.281  0.845  33.107  10.659  97100.703  37328.530  0.358  0.065  0.555  7.995  72085.155  8153.690
Instance2416.14   9.461  0.852  33.171  10.740  97024.463  37305.502  0.307  0.065  0.556  8.035  72247.057  8077.712
Instance2416.15   9.286  0.903  33.098  10.697  97120.820  37315.966  0.321  0.065  0.560  8.050  72722.668  8104.566
Instance2416.16   9.476  0.910  33.188  10.717  97078.352  37285.638  0.310  0.066  0.554  8.000  71952.964  8104.541
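Table 9 is the superposition of the three preceding workloads: transactional I/O (Table 6), background database maintenance (Table 7), and log replication (Table 8). A quick consistency check for Instance2416.1, with values transcribed from the tables above:

```python
# Total database reads/sec (Table 9) should equal transactional reads/sec
# (Table 6) plus background maintenance reads/sec (Table 7); likewise the
# log reads/sec in Table 9 come from the replication stream in Table 8.
transactional_reads = 24.060  # Table 6, Instance2416.1
maintenance_reads = 9.067     # Table 7, Instance2416.1
total_reads = 33.127          # Table 9, Instance2416.1

assert abs((transactional_reads + maintenance_reads) - total_reads) < 0.001
print("Table 9 reads/sec reconciles with Tables 6 and 7.")
```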
Host System Performance
The Host System Performance results are shown in Table 10.
Table 10 Host System Performance

Counter                            Average        Minimum        Maximum
% Processor Time                   2.311          0.441          51.140
Available MBs                      123896.336     123331.000     124367.000
Free System Page Table Entries     16411242.386   16410212.000   16411446.000
Transition Pages RePurposed/sec    0.000          0.000          0.000
Pool Nonpaged Bytes                102092183.137  95023104.000   109391872.000
Pool Paged Bytes                   167382025.460  131985408.000  275730432.000
Database Page Fault Stalls/sec     0.000          0.000          0.000
Test Log
11/11/2014 9:08:30 AM -- Preparing for testing ...
11/11/2014 9:08:46 AM -- Attaching databases ...
11/11/2014 9:08:46 AM -- Preparations for testing are complete.
11/11/2014 9:08:46 AM -- Starting transaction dispatch ..
11/11/2014 9:08:46 AM -- Database cache settings: (minimum: 512.0 MB, maximum: 4.0 GB)
11/11/2014 9:08:46 AM -- Database flush thresholds: (start: 40.9 MB, stop: 81.9 MB)
11/11/2014 9:09:03 AM -- Database read latency thresholds: (average: 20 msec/read, maximum: 200
msec/read).
11/11/2014 9:09:03 AM -- Log write latency thresholds: (average: 10 msec/write, maximum: 200 msec/write).
11/11/2014 9:09:05 AM -- Operation mix: Sessions 10, Inserts 40%, Deletes 20%, Replaces 5%, Reads 35%,
Lazy Commits 70%.
11/11/2014 9:09:05 AM -- Performance logging started (interval: 15000 ms).
11/11/2014 9:09:05 AM -- Attaining prerequisites:
11/11/2014 9:17:18 AM -- \MSExchange Database(JetstressWin)\Database Cache Size, Last: 3865498000.0
(lower bound: 3865470000.0, upper bound: none)
11/12/2014 9:17:20 AM -- Performance logging has ended.
11/12/2014 9:17:20 AM -- JetInterop batch transaction stats: 76558, 76558, 76558, 76558, 76558, 76558,
76558, 76558, 76558, 76558, 76558, 76558, 76558, 76558, 76558 and 76558.
11/12/2014 9:17:20 AM -- Dispatching transactions ends.
11/12/2014 9:17:20 AM -- Shutting down databases ...
11/12/2014 9:17:42 AM -- Instance2416.1 (complete), Instance2416.2 (complete), Instance2416.3
(complete), Instance2416.4 (complete), Instance2416.5 (complete), Instance2416.6 (complete),
Instance2416.7 (complete), Instance2416.8 (complete), Instance2416.9 (complete), Instance2416.10
(complete), Instance2416.11 (complete), Instance2416.12 (complete), Instance2416.13 (complete),
Instance2416.14 (complete), Instance2416.15 (complete) and Instance2416.16 (complete)
11/12/2014 9:17:42 AM -- C:\Program Files\Exchange Jetstress\Stress_2014_11_11_9_9_3.blg has 5769
samples.
11/12/2014 9:17:42 AM -- Creating test report ...
11/12/2014 9:18:42 AM -- Instance2416.1 has 9.5 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.1 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.1 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.2 has 9.6 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.2 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.2 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.3 has 9.5 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.3 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.3 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.4 has 9.6 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.4 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.4 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.5 has 9.4 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.5 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.5 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.6 has 9.7 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.6 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.6 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.7 has 9.5 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.7 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.7 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.8 has 9.7 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.8 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.8 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.9 has 9.5 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.9 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.9 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.10 has 9.8 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.10 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.10 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.11 has 9.3 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.11 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.11 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.12 has 9.6 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.12 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.12 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.13 has 9.3 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.13 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.13 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.14 has 9.5 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.14 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.14 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.15 has 9.3 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.15 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.15 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.16 has 9.5 for I/O Database Reads Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.16 has 0.1 for I/O Log Writes Average Latency.
11/12/2014 9:18:42 AM -- Instance2416.16 has 0.1 for I/O Log Reads Average Latency.
11/12/2014 9:18:42 AM -- Test has 0 Maximum Database Page Fault Stalls/sec.
11/12/2014 9:18:42 AM -- The test has 0 Database Page Fault Stalls/sec samples higher than 0.
11/12/2014 9:18:42 AM -- C:\Program Files\Exchange Jetstress\Stress_2014_11_11_9_9_3.xml has 5736
samples queried.
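The thresholds recorded at the start of the log (an average of 20 msec per database read and 10 msec per log write, with a 200 msec maximum for each) are the criteria Jetstress uses to judge the run. As an illustrative sketch only (Jetstress performs this evaluation itself), the worst-case measured averages from Table 9 can be compared against those limits:

```python
# Compare worst-case measured average latencies (Table 9) against the
# Jetstress pass thresholds recorded in the test log above.
thresholds_ms = {
    "I/O Database Reads Average Latency": 20.0,  # average: 20 msec/read
    "I/O Log Writes Average Latency": 10.0,      # average: 10 msec/write
}
worst_measured_ms = {
    "I/O Database Reads Average Latency": 9.790,  # Instance2416.10
    "I/O Log Writes Average Latency": 0.066,      # Instance2416.6 and .16
}

for counter, limit in thresholds_ms.items():
    measured = worst_measured_ms[counter]
    verdict = "Pass" if measured < limit else "Fail"
    print(f"{counter}: {measured} msec (limit {limit} msec) -> {verdict}")
```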
Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult
your local Lenovo representative for information on the products and services currently available in your area.
Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any Lenovo intellectual property right may be used instead. However, it is the user's responsibility
to evaluate and verify the operation of any other product, program, or service.
Lenovo may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
Lenovo (United States), Inc.
1009 Think Place - Building One
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or
third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. Results obtained in other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
© Copyright Lenovo 2014. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by General Services
Administration (GSA) ADP Schedule Contract
This document REDP-5165-00 was created or updated on December 17, 2014.
Send us your comments in one of the following ways:
• Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
• Send your comments in an email to:
  redbooks@us.ibm.com
Trademarks
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
Lenovo(logo)®
ServeRAID™
System x®
TruDDR4™
The following terms are trademarks of other companies:
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.