Overview - The QSnap® Service



Introduction

Configuration

QSnap® Copy-On-Write Cache

The Block-Filter Driver and Bitmaps

Persistence

QSnap® Driver on UNIX

CXBF Devices

Recovery Points

License Requirement


Introduction

The QSnap® Service is a software-based snapshot implementation that integrates with other Agents, providing all of the components necessary for basic snapshot functionality without requiring specialized hardware. The QSnap product is an installable, licensed software module. In this release, the QSnap service will still be automatically installed with CDR but will not be used as the default snap engine. The QSnap® service will be used as the source snapshot engine on Windows 2000 and Windows XP. It will not be used for creating Recovery Points, even on those operating systems.

On Unix platforms, the driver name is Unix QSnap. You will often see "CXBF" in references to the Unix QSnap driver. For example, a CXBF device is a volume or partition that is monitored by the CXBF block-filter driver.

The QSnap service provides the following functionality:

Back to Top


Configuration

For all Agents, the QSnap service must be installed before you configure the software. If your Agent supports integration with the QSnap service, an installation procedure that includes the steps for installing the QSnap software is provided. See Installation for installation procedures.

QSnap configuration differs depending upon the Agent with which it is working; select your Agent from the following list for specific configuration information and procedures:

Back to Top


QSnap® Copy-On-Write Cache

The QSnap software creates a copy-on-write (COW) cache, to which it copies blocks that are being overwritten on the source volume during snapshot creation. This preserves the original data, ensuring an accurate snapshot for data protection and recovery operations.

The COW cache contains only copies of blocks that have been overwritten. Any new data that is written to free space on a source volume is not cached. It is important to ensure there is enough space for the cached blocks.
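
The following is a minimal sketch, in Python, of the copy-on-write behavior described above; the in-memory volume representation, class name, and block contents are hypothetical and are not part of the QSnap implementation.

  class CowSnapshot:
      """Illustrative copy-on-write snapshot over an in-memory 'volume'
      (a dict of block number -> bytes). Not the QSnap implementation."""

      def __init__(self, volume):
          self.volume = volume                  # live volume (mutated by writes)
          self.cache = {}                       # COW cache: block -> original bytes
          self.blocks_at_snap = set(volume)     # blocks allocated at snapshot time

      def write_block(self, block_no, data):
          # Preserve the original contents only when overwriting a block that
          # existed at snapshot time and has not already been cached.
          if block_no in self.blocks_at_snap and block_no not in self.cache:
              self.cache[block_no] = self.volume[block_no]
          # Writes to free space are not cached, so they consume no cache space.
          self.volume[block_no] = data

      def read_snapshot_block(self, block_no):
          # Point-in-time view: cached originals first, then unchanged live blocks.
          if block_no not in self.blocks_at_snap:
              return None                       # block was free at snapshot time
          return self.cache.get(block_no, self.volume[block_no])

  # Overwriting existing data caches the original; writing to free space does not.
  vol = {0: b"old0", 1: b"old1"}
  snap = CowSnapshot(vol)
  snap.write_block(0, b"new0")
  snap.write_block(5, b"fresh")
  assert snap.read_snapshot_block(0) == b"old0"
  assert snap.read_snapshot_block(5) is None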

By default on Windows platforms, the COW cache minimum size is 50MB, and the system expands the cache file as needed up to 90% of the total capacity of the volume. A snapshot will be terminated if it causes the cache file to go beyond this size.

You can override the default cache sizes by entering megabyte values for minimum cache and maximum cache in the General tab of the iDataAgent's Properties window. (See Change the COW Cache Size.)

However, 90% of the volume's total capacity is a hard upper limit for the maximum cache size. Although the system will accept a megabyte value that represents more than this limit (for example, 95 MB on a 100 MB volume), the snapshot will still terminate when the cache reaches 90% of total volume capacity (90 MB in that example).

The minimum cache size has a lower limit of 25 MB. If you try to set the minimum below this limit, the software treats the value as 25 MB. When resetting the minimum and maximum values, be sure not to set the minimum higher than the maximum value or the 90% limit.
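
As an illustration of how these limits interact, here is a small Python sketch; the function name and the example values are hypothetical, while the 25 MB floor and the 90% ceiling are the limits described above.

  def effective_cache_limits(requested_min_mb, requested_max_mb, volume_capacity_mb):
      """Illustrative only: apply the 25 MB minimum floor and the hard
      90%-of-volume ceiling to the requested cache sizes (values in MB)."""
      hard_ceiling = 0.90 * volume_capacity_mb      # snapshot terminates past this point
      effective_min = max(requested_min_mb, 25)     # values below 25 MB are treated as 25 MB
      effective_max = min(requested_max_mb, hard_ceiling)
      if effective_min > effective_max:
          raise ValueError("minimum cache size must not exceed the maximum or the limit")
      return effective_min, effective_max

  # Example from the text: a 95 MB maximum is accepted on a 100 MB volume,
  # but the snapshot still terminates once the cache reaches 90 MB.
  print(effective_cache_limits(10, 95, 100))        # -> (25, 90.0)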

Another factor is whether the data is written to free space on a volume. For example, if 550MB of new data is written to free space on a source volume, no blocks are overwritten and therefore no blocks are cached, and the default COW cache settings do not require adjustment. The snapshot driver does not need to cache data written to free space in order to create a snapshot. However, if 550MB of data is written over existing data and the 550MB represents more than 90% of total capacity, then the default maximum cache size will be exceeded and the snapshot will terminate.

Therefore, when determining the maximum size of the copy-on-write cache, consider the following, in order of importance:

Be sure to set the maximum cache high enough (not to exceed 90% of the volume's total capacity) to accommodate these factors.

The COW cache must be located on a volume with a supported file system.

Windows COW Cache

By default, a copy-on-write (COW) cache is created on each source volume. Whether the source volume uses NTFS or FAT/FAT32, any NTFS volume is supported as the COW cache location, with the following caveats:

Changing the Cache Location on Windows

On the Windows platform, the COW cache location can be changed through the Client level properties. Note that if you specify an alternate COW cache location, the COW caches for all of the volumes on the client will use that location. Be sure there is enough free space to accommodate the caches.

If you are using the QSnap services with the Windows File System iDataAgent in a cluster environment, and you want to change the COW cache location:

See Change the COW Cache Location for step-by-step instructions.

Cache Size on Windows

The COW cache size is adjustable through the Agent level properties. You can set the Minimum and Maximum cache sizes for the volumes associated with the Agent. By default, the minimum size is 50MB and the maximum size is 90% of the total capacity of the volume.

See Change the COW Cache Size for step-by-step instructions.

Changing the Write-Inactivity-Period

The QSnap enabler requires a short period of no disk activity to create the snapshot. You may require a longer write inactivity period due to slow disk performance. See Change the Write Inactivity Period (WIP) for step-by-step instructions.

Cache Location for Quick Recovery Agent Subclient using Recovery Points

If you are creating a subclient that will use a QR Policy with QSnap as the snapshot enabler and Recovery Points enabled, you must specify a cache volume (unless a cache volume had been specified in Client Properties/Advanced). The cache volume specified during Quick Recovery® Agent subclient creation will be used as the cache volume for all snapshots (source and destination) and for all Agents for that particular machine.

Considerations - QSnap COW Cache for Windows

Consider the following about the COW cache size, location, configuration, and use:

UNIX COW Cache

For Unix, the mount point for the copy-on-write (COW) cache is specified during installation. For instructions on mounting a partition to the COW cache mount point, see Mount the COW Cache Partition. It is recommended that you use a local file system for the COW cache.

  • The COW cache must be located on a volume with a supported file system.

  • The cache partition cannot be on a network share.

After installation, you can assign any volume/partition to this mount point so long as it is one of the supported configurations below:

Source Volume and Supported COW Cache Location:

JFS2 - AIX
  • Any volume other than the source volume
EXT2, EXT3, EXT4 - Linux
  • Any volume other than the source volume
XFS - Linux
  • Any volume other than the source volume
Reiserfs - Linux
  • Any volume other than the source volume
UFS - Solaris
  • Any volume other than the source volume, with a UFS file system
VxFS - Solaris
  • Any volume other than the source volume, with a UFS file system

Notes:
  • Volumes managed by VxVM can be used for the COW cache partition; however, VxFS volumes cannot be used.
  • Only one COW cache partition is allowed per client. It is recommended that this partition be a dedicated cache partition (not used for other purposes).
  • For the QR Agent on Solaris, the COW cache partition must be on a CXBF device.
  • On UNIX, the source volume is not supported as the COW cache partition. You must specify a volume other than the source.

New data does not use the Unix COW cache on Solaris volumes or on Linux ext2/ext3 volumes; only changed data on these volumes uses the cache. However, both new data and changed data use the Unix COW cache on AIX volumes and on Linux volumes using a file system other than ext2/ext3.

Not all agents support all source volume types and file systems. See Supported Data Types in the Product Overview for your agent.

Calculate COW Cache Size on Unix

The maximum size of the COW cache equals the size of the volume mounted to the COW cache mount point (which was specified during the software installation). If the COW cache size exceeds the size of this partition, then a larger partition must be mounted to this mount point.

The size of the cache partition should be determined as follows, based on the assumption that 25% of each volume is modified between snap-on and snap-off:

If m is the number of volumes on the client, s is the size of each volume, and n is the number of volumes on which snapshots are expected to run at a time, then the cache size should be at least:

(n x s)/4

For example, if there are 20 volumes (m) of 10 GB each (s) and we expect to run snapshots on 2 volumes (n) at a time, then the cache partition size should be at least:

(2 x 10 GB)/4 = 5 GB
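
The same calculation as a small Python sketch; the function name and default change fraction are illustrative, and the 25% change assumption is the one stated above.

  def unix_cow_cache_size_gb(volume_size_gb, concurrent_snapshots, change_fraction=0.25):
      """Illustrative sizing helper based on the (n x s)/4 rule above: assume
      change_fraction (default 25%) of each volume is modified between snap-on
      and snap-off, for every volume snapped at the same time."""
      return concurrent_snapshots * volume_size_gb * change_fraction

  # Example from the text: 10 GB volumes, snapshots on 2 volumes at a time.
  print(unix_cow_cache_size_gb(10, 2))    # -> 5.0 (GB)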

COW Cache for Linux Volumes Greater than 3 TB

  • Reformat the file system containing the CXBF cache to one that accepts sparse files 3 TB or larger in size (XFS, EXT4, Reiserfs).

  • Set the value of the dSnapChunkSize registry key to 1048576.

COW Cache for Red Hat Linux

The Red Hat Linux file system chooses a block size according to disk size, so if you are using a small disk for the cache, the file system will use a small block size such as one kilobyte. A small block size combined with a source disk containing many files can cause the CXBF driver scan to fail. In this situation, it is recommended that you manually specify a block size of 4 KB (4096 bytes) when creating the file system, using the following command: mke2fs -b 4096.

Calculate Bitmap Size on Unix

For the bitmap, one bit per extent marks whether or not the extent has been modified. The default extent size is 2 MB.

For example, a 547 GB volume contains 280,064 extents (547 x 1024 MB / 2 MB).

At one bit per extent, 280,064 extents need 35,008 bytes.

So, for 547 GB, we need only about 35 KB of space in the /etc/galaxy directory.
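
The same arithmetic as a quick Python check; the helper name is illustrative, and the 2 MB extent size is the default mentioned above.

  def bitmap_size_bytes(volume_size_gb, extent_size_mb=2):
      """One bit per extent, rounded up to whole bytes (illustrative only)."""
      extents = (volume_size_gb * 1024) // extent_size_mb
      return extents, (extents + 7) // 8

  extents, size_bytes = bitmap_size_bytes(547)
  print(extents, size_bytes)    # -> 280064 35008  (about 35 KB)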

The cache can grow as large as the volume itself: a 547 GB file system that is completely modified from scratch will produce 547 GB of cached data. Note that the cache is a sparse file, so the size shown by ls -l is not an indication of the space consumed; "du -sk" on the file is the correct measure of the actual blocks used.

Back to Top


The Block-Filter Driver and Bitmaps

Through the use of a block-filter driver, the QSnap software creates bitmaps to track the block-level changes to a volume over time. These bitmaps are stored in memory and later written to disk. The bitmaps help ensure that the next data protection operation can be an incremental backup instead of a full backup. Unlike the COW cache, which is created and used only when creating snapshots, bitmaps are always maintained for devices monitored by the block-filter driver.
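
As a conceptual illustration (not the QSnap implementation), the following Python sketch shows a block-level change bitmap of the kind described above; the class name and the 2 MB extent size are assumptions for the example.

  class ChangeBitmap:
      """Illustrative change-tracking bitmap: one bit per extent of the volume.
      A set bit means at least one block in that extent was written since the
      last data protection operation."""

      def __init__(self, volume_size_bytes, extent_size_bytes=2 * 1024 * 1024):
          self.extent_size = extent_size_bytes
          n_extents = (volume_size_bytes + extent_size_bytes - 1) // extent_size_bytes
          self.bits = bytearray((n_extents + 7) // 8)

      def record_write(self, offset_bytes):
          # Mark the extent containing this write as changed.
          extent = offset_bytes // self.extent_size
          self.bits[extent // 8] |= 1 << (extent % 8)

      def changed_extents(self):
          # Extents that must be read for the next incremental job.
          return [i for i in range(len(self.bits) * 8)
                  if self.bits[i // 8] & (1 << (i % 8))]

  # A write at byte offset 5 MB dirties extent 2; only changed extents would
  # need to be copied by the next incremental job.
  bm = ChangeBitmap(100 * 1024 * 1024)
  bm.record_write(5 * 1024 * 1024)
  print(bm.changed_extents())    # -> [2]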

Block-Filter Activation

When the QSnap software is installed, its block-filter is activated for all volumes by default.

Manually Activating/Deactivating the Block-Filter

The block-filter may be active on volumes where you do not want it to be, as may be the case after upgrading QSnap. You can manually activate or deactivate the QSnap block-filter driver on specific volumes, using the following:

Bitmaps and Changed Blocks

After the first full backup, you can update the backup incrementally so that only the changed blocks on the source volume are backed up or copied. In order to keep track of the changes, the QSnap program creates a bitmap that records the changed blocks for each source volume.

If the system cannot verify the integrity of a bitmap, it will force the next backup to be a full backup, which may be undesirable in the following situations:


Persistence

To avoid unwanted full backups or QR Volume Creation jobs, you can configure QSnap® bitmap persistence on a volume so that the bitmap of blocks that have not yet been read and copied is preserved across ungraceful restarts and failovers. When bitmap persistence is enabled, non-full QR Volume Creation or Image backup jobs remain non-full after both graceful and ungraceful restarts or failovers. If bitmap persistence is not enabled, non-full jobs remain non-full only after a graceful restart or failover; in the event of an ungraceful restart or failover, the system forces a full backup or QR Volume creation.

The following table summarizes the expected behavior of the Agents after graceful (planned) and ungraceful ("blue screen," crash, etc.) restarts and failovers, depending on how you configure the software:

Default
  • Non-Cluster: Allows non-fulls* after graceful restarts only. Ungraceful restarts will force a full backup or QR Volume creation.
  • Cluster: Allows non-fulls* after graceful restarts or failovers only. Ungraceful restarts or failovers will force a full backup or QR Volume creation.

Configured for Persistence
  • Non-Cluster: Allows non-fulls* after both graceful and ungraceful restarts.
  • Cluster: Allows non-fulls* after both graceful and ungraceful restarts and failovers.

*Where non-full refers to QR Incremental Updates or Incremental backups.

Enable Persistence

By default, Persistence is enabled on all volumes. If required, you can also enable Persistence on volumes with the Qsnp2Config tool, which is located in <Install Directory>\Base on the client and is installed with QSnap® software. For step-by-step instructions, see one of the following:

Disable Persistence

Persistence is disabled with the Qsnp2Config tool. For step-by-step instructions, see one of the following:

Bitmap Location

The bitmap location is specified during the installation of QSnap functionality. Only NTFS volumes are supported for the bitmap location. For a cluster installation, the default location for storing bitmaps is the corresponding shared volume; in the case of an upgrade, the shared location selected prior to the upgrade remains the default location for the bitmaps. You can change the bitmap location using Qsnp2Config. See Change the QSnap Bitmap Location for step-by-step instructions.

Enable SAN Environment

Enabling the SAN Environment option allows you to continue incremental operations after a device (SAN-attached disk) has been reconnected. This option is enabled by default. Disabling this feature will force a full QR Volume creation in cases where a disconnected device has been reconnected. See Enable SAN Environment for step-by-step instructions.

Use Bitmaps to Measure Change

You can predict the size of your next incremental job by using the TrackBlockIO tool. This tool reads the bitmap information to determine the number of changed blocks. See Use TrackBlockIO to measure changed blocks for step-by-step instructions.


QSnap® Driver on UNIX

The Unix QSnap driver interfaces between the file system (UFS) and the sd disk driver on Unix. The Unix QSnap driver preserves all the properties of the underlying sd driver and is transparent to both applications and file systems. The driver intercepts I/O and keeps track of the blocks that have been modified. The bitmap file location is managed by the Unix QSnap® driver.

  • The Unix QSnap driver creates device nodes in /devices. It does not have hot pluggable support for devices.
  • On Unix, QSnap bitmaps are referred to as CXBF bitmaps, and they are managed by the QSnap Unix driver.
  • The Unix QSnap driver comes with a utility called CVSnap, which can be used to configure and list CXBF devices.
  • The Solaris EFI disk label, introduced in Solaris 10 to support disks larger than one terabyte, is not supported by the Unix QSnap driver.

CXBF Devices

On a Unix platform, a CXBF device is a volume or partition that is monitored by the CXBF block-filter driver. Any volume that you add to a subclient is automatically configured as a CXBF device. You can also create CXBF devices using the Configure cxbf device right-click option in Volume Explorer on any volumes available to the client.

  • CXBF should not be configured on devices containing the Operating System software. Typically, this is the device mounted to "/".
  • For the Quick Recovery Agent, adding a volume to the subclient content does not automatically configure it as a CXBF device.

  • Raw volumes are not automatically configured as CXBF devices when added to the subclient content. Use the Volume Explorer to configure raw volumes as CXBF devices.

  • CXBF devices can only be configured on quiescent file systems (i.e., file systems that can be unmounted or are not busy).

For AIX:

The CXBF devices for AIX disks must be Logical Volume Manager (LVM) devices with the JFS2 virtual file system. (The JFS virtual file system is not supported.) These devices will be created in the paths of:

/dev/cxbf/cxbfx/blk

where x is a number from 0 to 32256.

The devices are selected by their device name in the CommCell® Console (for example, /dev/cxbf/cxbf24/blk/dept).

  • CXBF devices on AIX do not support online file system expansion or shrinking using the chfs command. If a volume or file system controlled by a CXBF block-filter driver is extended or shrunk, the operation will fail. Therefore, you must deconfigure the CXBF device before using chfs.

  • If a CXBF device on AIX was configured using Volume Explorer (where the CXBF device was added to /etc/filesystems), be sure to mount the CXBF device before you deconfigure it. If an unmounted CXBF device on AIX is deconfigured using Volume Explorer, the /etc/filesystems system file will be out of synchronization with the actual file system configuration.

    If /etc/filesystems and the actual file system configuration get out of synchronization, a manual edit of /etc/filesystems is required to correct the inconsistencies. Remove the cxbf entry and uncomment the original entry (that is, remove "*Galaxy" from the beginning of each line).

For Solaris:

The CXBF devices for these disks will be created in the paths of /dev/cxbf/dsk and /dev/cxbf/rdsk. They are selected by their device name in the CommCell® Console (for example, /dev/cxbf/dsk/c2t1d1s1).

The naming convention for CXBF devices is as follows:

cxtxdxsx

Where the system assigns integer x as follows:

QSnap® Software Performance on Solaris

If the server system is heavily loaded with I/O or many processes, or is low on memory, you may need to tune the buffer cache parameter in /etc/system. If the buffer cache is too low, backup processes may hang under heavily loaded conditions.

The bufhwm variable, set in the /etc/system file, controls the maximum amount of memory allocated to the buffer cache and is specified in KB. The default value of bufhwm is 0, which allows up to two percent of system memory to be used. This may need to be increased to 10 percent for a dedicated file server with a relatively small memory system, and can be increased up to 20 percent. On a larger system, the bufhwm variable may need to be limited to prevent the system from running out of operating system kernel virtual address space.

The buffer cache is used to cache inode, indirect block, and cylinder group related disk I/O only. The following is an example of a buffer cache (bufhwm) setting in the /etc/system file that can handle up to 10 MB of cache. This is the highest value to which you should set bufhwm:

set bufhwm=10240 (the unit is KB)

For Linux:

Linux disk devices are named as follows:

/dev/hdx for an IDE disk

/dev/sdx for a SCSI disk

/dev/vg00/vol0 for an LVM volume

where x is a letter a, b, c, d, etc.

The CXBF devices for these disks will be created in the paths of:

/dev/cxbf/cxbfx/

where x is a number from 0 to 32256.

They are selected by their device name in the CommCell® Console (for example, /dev/cxbf/cxbf32/dsk).

To find the relationship between a Linux CXBF device and its original device, change to the /proc/cxbf/cxbfx directory and type cat hdd_name. This reports the original device name.
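
For example, a small Python sketch that walks this /proc layout and prints the CXBF-to-original-device mapping (illustrative only; it assumes the /proc/cxbf/cxbfx/hdd_name layout described above):

  import glob
  import os

  # List each CXBF device directory under /proc/cxbf and print the original
  # device name recorded in its hdd_name entry.
  for entry in sorted(glob.glob("/proc/cxbf/cxbf*/hdd_name")):
      cxbf_dev = os.path.basename(os.path.dirname(entry))   # e.g. cxbf32
      with open(entry) as f:
          original = f.read().strip()
      print(f"{cxbf_dev} -> {original}")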

CXBF Device Configuration

After installing the software, you must create CXBF devices for the volumes you want to back up or protect. When you add a volume to subclient content, it is automatically configured as a CXBF device.

See the following procedures for step-by-step instructions:

  • If you encounter errors while configuring or deconfiguring CXBF devices on the client, go to /cvd.log on the client for more detailed information concerning the error.

  • If you apply an update to an AIX, Linux, or Solaris platform where CXBF drivers have been installed and CXBF devices have been configured, you must reboot the system after the update is applied in order to activate the CXBF drivers.

  • For the Quick Recovery Agent, adding a volume to the subclient content does not automatically configure it as a CXBF device.

  • Use the Volume Explorer to deconfigure a CXBF device; deleting a subclient does not deconfigure CXBF devices for a client.

For Oracle Volumes:

Using the CVSnap Tool:

Back to Top


Recovery Points

When used with the Quick Recovery® Agent (QR) on Windows or with ContinuousDataReplicator (CDR) on Unix, the QSnap service provides an additional feature, Recovery Points, which are created by taking a snapshot of the file system data on a QR Volume or Destination machine. For more information, see Recovery Points.

Note: When using QSnap functionality with CDR on Windows, you cannot create recovery points.

Back to Top


License Requirement

To perform a data protection operation using this Agent, a specific Product License must be available in the CommServe® Server.

Review general license requirements included in License Administration. Also, View All Licenses provides step-by-step instructions on how to view the license information.

Recovery Points Feature

This feature requires a Feature License to be available in the CommServe® Server.

Review general license requirements included in License Administration. Also, View All Licenses provides step-by-step instructions on how to view the license information.

 

Back to Top