Deployment in Virtual Environments
Scalability Guidelines for a CommServe
Disconnecting Idle GUI Connections
Scalability Guidelines for a CommCell
Increasing Streams for Concurrent Backups
Decreasing Network Agents for Non-LAN Optimized Backups
Setting up Fan-In Ratio for Connections to a CDR/WBA Destination
Setting up Job Preemption Control for the CommCell
Managing Content Indexing Scalability
Scalability Guidelines for CommServe/CommNet Server
Large CommCell Optimization Parameters
CommCell® Performance Parameters
Performance Degradation Impact Analysis
Calypso® software is deployed in large enterprise environments, and various scalability criteria must be addressed to ensure a successful and sustainable installation.
Certain guidelines associated with the design, deployment, and support of large CommCell® environments must be followed. Consider the following as you plan the installation and configuration of Calypso®.
The following list outlines the benefits gained by deploying Calypso® software across multiple CommCell groups in an Enterprise environment:
Allows for partial downtime of backup operations. For example, when one CommCell group is taken offline for software updates/upgrades or for hardware upgrades and maintenance, the other CommCell groups continue to function without impact on data movement operations.
Strategically placing the CommServe within the shortest network distance of the servers it protects dramatically reduces issues associated with slow or unreliable network connectivity between LAN clients and the CommServe.
Balancing the data protection load across multiple CommServes smooths the operation of each CommCell group and improves response time in each individual CommCell Console GUI.
Reports from CommServes in multiple CommCells can be e-mailed to the same user or collected in the same folder or spreadsheet, which helps with consolidated processing and review. However, using CommNet as a central executive monitoring console and reporting mechanism is the recommended way to span multiple CommCell groups.
As the data protection infrastructure evolves, Calypso® provides an easy-to-use method for maintaining the balance of resources across CommCell groups via CommCell Migration.
This section describes the deployment requirements and considerations for a Workgroup, Datacenter, or Enterprise environment. Each environment has different hardware and software requirements.
Consider these requirements as you plan how you will install and configure Calypso®.
The following table illustrates the recommended system requirements for the CommServe and MediaAgents, based on scalability requirements.
Refer to System Requirements for a detailed list of required components for each CommCell module.
It is important to complete an accurate assessment of planned data growth when finalizing the CommCell® design configuration.
This ensures that the design aligns with the scalability requirements in this document when calculating backup, tape utilization, and retention requirements.
Adding any new CommServe® servers early allows future CommCell growth and upgrades to be completed in a more controlled fashion.
There can be only one active CommServe instance in a CommCell at a time.
When deploying multiple CommCells ensure that disk and tape libraries of different types are placed in different CommCells.
Conversely, ensure tape libraries of the same type are placed within the same CommCell group, whenever possible.
For large tape libraries, use vendor software that allows configuration of smaller virtual libraries.
Tape exports from multiple CommCells can be centrally managed by creating the appropriate Vault Tracker® policies in each CommCell configuration.
When deploying Calypso® software in virtual environments consider the following:
If the CommServe server is configured on a Virtual Machine (VM), it typically operates at approximately 60% of the efficiency of a comparable physical server, so scalability limits are reduced relative to a physical server environment. Take the following conditions into consideration when deploying the CommServe in a virtual environment:
During a storage vMotion, every IO operation issued by the virtual machine requires at least two IO operations on the physical disk, and additional IO is sent to the SAN, which decreases the available IO capacity.
During any of these phases, the additional IO can cause an IO-intensive application on the virtual machine, one that is sensitive to read/write times beyond the configured OS timeouts, to time out on some requests if the SAN cannot absorb the extra workload and still return read/write requests within that application's timeout settings. Even when requests return fast enough to avoid a timeout, they still incur additional latency. Databases are among the types of applications this can affect. vMotion operations associated with the CommServe database create undesirable latency on CommServe operations; under no circumstances should a vMotion operation be performed against an active CommServe database.
Storage vMotions cause additional IO load, which can cause a heavily loaded storage device to respond more slowly than an application might expect.
As a best practice step when configuring a CommServe on a VM, it is recommended to configure an alarm to notify the CommCell administrator of a vMotion event. The CommCell administrator can use this alarm to diagnose any CommServe performance issue that follows the vMotion event.
If a MediaAgent is configured on a Virtual Machine (VM), it typically operates at approximately 60% of the efficiency of a comparable physical server, so scalability limits are reduced relative to a physical server environment. Take the following conditions into consideration when deploying a MediaAgent in a virtual environment:
Because of the limited support for tape devices in virtualized environments, library creation for a virtualized MediaAgent is limited to writes to a single magnetic library or standalone library.
For Hyper-V MediaAgents, disks can be passed to the virtualized Hyper-V MediaAgent with the pass-through disk feature. Note that the disk must be in an offline state on the parent server to function on the guest. For more information on pass-through disks in Hyper-V, refer to the Microsoft Hyper-V R2 website: http://www.microsoft.com/windowsserver2008/en/us/hyperv-main.aspx.
A warning is issued to administrators when the scalability limits are approached. The warning message advises you to modify the current settings or to configure the entities that exceed the scalability guidelines within a different CommCell group.
The following environments have been identified as scalability thresholds within a single CommCell group. Threshold considerations are divided as follows:
Observe the following parameters for implementing a CommCell hierarchy.
If your CommServe is running on a physical computer (a non-virtual machine) with a solid-state drive (SSD), you can scale it to twice the limits stated below. For example, the number of sustained clients could reach 5,000 instead of the usual 2,500.
CommCell Class | Multiplexing Factor | Concurrent Job Streams | Client Count | MediaAgent Count | Drive Count | Slot Count
Workgroup | 2 | 75 | 50 | 20 | 20 | No Limit
Datacenter | 5 | 300 | 200 | 50 | 100 | No Limit
MS SQL Express Edition | 5 | 10 | 25 | 25 | 500 | No Limit
Enterprise | 25 | 1000 | 2500 | No separate restriction | 1000 | No Limit
Notes
Multiplexing Factor: the multiplexing limit for a single CommCell. Administrator notifications are issued at a multiplexing count of 5.
Concurrent Job Streams: the number of active job streams corresponding to concurrently running jobs. Running Job Streams = number of tape drives * (Multiplexing Factor + Magnetic Writers). This is the high watermark value in the GUI and is enforced at 1,200.
Client Count: the number of supported clients within a single CommCell. The maximum client count is inclusive of MediaAgents. A hard limit notification occurs at 4,500 clients within the CommCell.
MediaAgent Count: the number of supported MediaAgents within a single CommCell. Any client computer can be installed as a MediaAgent; there is no separate restriction for MediaAgents. Express CommCells have a limit of 25 MediaAgents. The maximum number of concurrent streams to a single MediaAgent is 300.
Drive Count: the number of supported tape drives within a single CommCell.
Slot Count: the number of supported slots within a single CommCell.
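The Running Job Streams formula from the notes above can be sketched as a quick sizing helper. This is a hypothetical illustration for planning purposes, not part of the Calypso® software; the 1,200 cap is the enforced high watermark stated above.

```python
def running_job_streams(tape_drives: int, multiplexing_factor: int,
                        magnetic_writers: int) -> int:
    """Running Job Streams = tape drives * (multiplexing factor + magnetic writers),
    capped at the GUI-enforced high watermark of 1,200."""
    HIGH_WATERMARK = 1200
    streams = tape_drives * (multiplexing_factor + magnetic_writers)
    return min(streams, HIGH_WATERMARK)

# Example: a Datacenter-class CommCell with 50 tape drives,
# a multiplexing factor of 5, and 2 magnetic writers:
print(running_job_streams(50, 5, 2))  # 350, well under the 1,200 cap
```

A result near 1,200 indicates the CommCell is at its concurrent-stream ceiling and the workload should be split across CommCell groups.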
To improve the performance of a CommCell group, enable the option that disconnects the CommCell Console when it is inactive for a certain period of time. This option disconnects idle GUI sessions, allowing other GUIs to connect without exceeding the established connection limit.
Follow the steps below to enable this option:
It is strongly recommended that the soft limits mentioned in the sections below are followed and not exceeded.
It is also recommended to review the Deduplication Architecture Guide when planning Calypso® Block Level Deduplication CommCell group deployments.
Contact Customer Support or your Account Team for the current release of that separate document.
To increase the number of streams for concurrent backups from a large number of clients, enable the option to optimize for concurrent backups. This increases the current stream count limit by 200 additional streams.
For better throughput on non-LAN optimized backups, specify a lower value for the number of network agents (data pipes/processes) that the client uses to transfer data over the network.
For maximum performance and robustness, the total number of Replication Pairs configured for the same source volume should be kept to a minimum. If multiple Replication Pairs for the same source volume are required, the following limits must be observed.
CommCell Class | Fan-In Ratio (Win 32) | Fan-In Ratio (Win 64)
Workgroup | 1 to 20 | 1 to 60
Datacenter | 21 to 50 | 61 to 150
Enterprise | 51 to 100 | 151 to 500
In Virtual Tape Library environments, scalability may extend beyond 60,000 tapes when backup jobs preempt auxiliary copy jobs.
It is important to manage concurrently running jobs; this can be done by staggering schedules. Use multiple schedule policies on different client groups and adjust the timing of the schedules to optimize scalability.
The table below displays the maximum number of concurrent jobs permitted in different environments.
CommCell Class | Total Permitted Job Count
Workgroup | 1 to 100
Datacenter | 101 to 300
Enterprise | 301 to 1,000
Notes
This count includes jobs in a Waiting/Pending status.
There is no limit to the number of Storage Policies in a single CommCell group for Enterprise environments.
Stagger job start times by intervals of up to 20 minutes.
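The schedule-staggering guidance above can be sketched as a simple planning helper. This is a hypothetical illustration, not a Calypso® feature; the batch size of 100 and the 20-minute interval follow the limits stated in this section.

```python
from datetime import datetime, timedelta

def staggered_starts(first_start: datetime, job_count: int,
                     interval_minutes: int = 20, batch_size: int = 100):
    """Assign each batch of jobs a start time, with successive batches
    separated by the stagger interval (up to 20 minutes recommended)."""
    starts = []
    for i in range(job_count):
        batch = i // batch_size          # which staggered batch this job falls into
        starts.append(first_start + timedelta(minutes=interval_minutes * batch))
    return starts

# Example: 250 jobs starting at 22:00, in batches of 100, 20 minutes apart
starts = staggered_starts(datetime(2011, 1, 1, 22, 0), 250)
print(starts[0].strftime("%H:%M"),
      starts[100].strftime("%H:%M"),
      starts[200].strftime("%H:%M"))  # 22:00 22:20 22:40
```

In practice, the computed start times would be applied to separate schedule policies attached to different client groups, as described above.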
There is no software limit on the number of hardware-generated Snapshots using Calypso®. However for limits imposed by each manufacturer’s controllers and software, refer to your hardware provider's documentation.
Content Indexing and Search provides the ability to content index and search both file server data and protected data for data discovery.
The following limits must be observed to achieve maximum performance.
Installation Type | Hardware Type | Object Count per Index Node
Upgrade | Legacy | 50,000,000
New Installation | Virtual Machine-based | 50,000,000
New Installation | x64, 16 GB RAM servers | 100,000,000
Notes
Content Indexing 100,000,000 objects per Index Node is a hard limit and cannot be exceeded.
This section presents scalability thresholds for a co-located CommServe and CommNet Server, where the CommNet Server functions as the reporting server for a single CommServe.
CommCell Class | Maximum Number of Supported Clients
Workgroup | 1 to 50
Datacenter | 51 to 200
Enterprise | 201 to 2,500
Subclient count optimization makes daily backup operations easier for the administrator to manage. Periodically review the subclients to determine whether any redundant or unneeded subclients exist and can be removed from the CommCell configuration. The CommServe tracks information about each subclient, so reducing the number of subclients greatly reduces the amount of tracking information.
This parameter impacts tape backup operations. A higher chunk size gives better throughput.
A lower value for this setting is recommended so that slower data protection operations are checked more frequently, especially when data is moving across a WAN link.
It is recommended to implement a formal software update management process. This process allows for planned software updates to be aggregated on a 30-day or 60-day basis.
It is recommended to use a test CommServe to validate updates before placing scheduled updates into a production environment. This helps improve backup stability.
It is also recommended to increase the Job Manager Update Interval for your Agent; this can be done using the following steps:
Additional detail regarding this registry key is available at: http://support.microsoft.com/default.aspx?scid=kb;en-us;842411&Product=w
This allows jobs to meet backup window needs without overloading a single MediaAgent on the network.
Associate Storage Policies and MediaAgents evenly with backups in order to balance the data protection operations.
File System Multi-Streaming employs multiple data streams per subclient for the data protection operation, distributing the subclient's contents across all the streams and transmitting them in parallel to the storage media. It is recommended to use Automatic File Multi-Streaming for larger subclients (1 TB or more).
It allows the file system backup to use multiple readers for increased performance; this configuration in turn reduces duplicate file scan time on client servers.
For subclients less than 1 TB in total size, set the number of readers on the subclient to 1.
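The reader-count rule above can be sketched as a small sizing helper. This is a hypothetical illustration, not part of the Calypso® software; the 1 TB threshold comes from the guidance above, while the multi-reader count of 4 is an assumed example value (in practice, Automatic File Multi-Streaming would choose it).

```python
TB = 1024 ** 4  # bytes in one terabyte

def recommended_readers(subclient_size_bytes: int, max_readers: int = 4) -> int:
    """Use a single reader for subclients under 1 TB; otherwise allow
    multi-streaming to use several readers (count here is an assumption)."""
    if subclient_size_bytes < TB:
        return 1
    return max_readers

print(recommended_readers(500 * 1024**3))  # 500 GB subclient -> 1
print(recommended_readers(2 * TB))         # 2 TB subclient   -> 4
```

The point of the rule is that small subclients gain nothing from multiple readers, while large subclients benefit from parallel streams to the storage media.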
After upgrading to a newer release of Calypso®, delete the system state backups.
Newly installed clients do not have the separate subclient.
CommCell performance is based on load, which can be measured by its impact on the following types of operations:
Within a CommCell hierarchy, the following subsystems must be examined so that a degradation in performance does not occur. The areas of concern depend on the individual data protection objectives, the server, and the available storage resources. The subsystem recommendations described in this section are based on size and expected growth trends.
Index updates on a MediaAgent scale up to the number of drives controlled through each MediaAgent.
Plan for approximately six backup jobs per drive per night on a MediaAgent, for a maximum of 36 processes that may start concurrently.
You may stagger up to 100 jobs at a time, 20 minutes apart.