Import Wizard Tool

How are the DataFabric Manager entities mapped to the CommServe entities?

The following table shows the mapping of the DataFabric Manager server entities to the Simpana CommServe entities.

Storage Service → Storage Policy
  Each existing storage service in the DataFabric Manager server is mapped to a storage policy in the Simpana software with the following name format:

  <DFM_name>_<StorageServiceID>_<StorageServiceName>

Storage Service Nodes (Primary, Vault/Backup, Mirror) → Storage Policy Copies
  The storage service nodes are mapped as follows:
  • Primary node to the Primary(Snap) copy
  • Mirror node to a Mirror<nodeID> copy

    Example: in a Primary-Mirror (P-M) configuration, the mirror node is mapped to the Mirror(2) copy.

  • Backup node to a Backup<nodeID> copy

Dataset → Subclient
  Each existing dataset in the DataFabric Manager server is mapped to a subclient in the Simpana software with the following name format:

  <DFM_name>_<DatasetID>_<DatasetName>

  Each dataset is mapped to a single subclient unless multiple storage devices are provisioned on the primary node of the dataset. In that case, a separate subclient is created for each storage device in the dataset.

Storage Device in the Primary Node of the Dataset → Client
  The storage device is mapped as a NAS client.

Job from Primary Node → SnapProtect Job
  A job run from the primary node is identified as a third-party SnapProtect job.

Backup/Vault Job → SnapVault Job
  These jobs and their snapshots are skipped.

Mirror Job → SnapMirror Job
  These jobs and their snapshots are skipped.

Resource Pool → Resource Pool associated with the Storage Policy Copy properties
  If a resource pool does not exist in the configuration, a new resource pool is created and associated with the storage service.

Provisioning Policy → Provisioning Policy associated with the Storage Policy Copy properties
  Primary provisioning policy settings are imported when a non-application dataset is converted to an application dataset during the import.

  The secondary provisioning policy is imported as follows:

  • A provisioning policy with the SnapProtect prefix retains its name.
  • A provisioning policy without the SnapProtect prefix is duplicated, and the duplicate is renamed to SnapProtect<ProvisioningPolicyName>.
  • If there are no provisioning policies, the tool creates a placeholder SnapProtect provisioning policy named SnapProtectDefaultProvisioningPolicyForSecondary.

Protection Policy → Source Copy for the Storage Policy Copy
  The topology of the protection policy is mapped to the corresponding storage policy copy.

Libraries → Media Libraries

Schedule → SnapProtect or Auxiliary Copy Schedule
  A schedule from the DataFabric Manager server can have multiple sub-schedules. Each sub-schedule is converted to a SnapProtect or Auxiliary Copy schedule. Schedules are enabled by default.

Retention → Basic retention on the Storage Policy Copy and SnapProtect schedule properties
  The retention information imported from the DataFabric Manager server is set as basic retention on the storage policy copies.
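The naming formats described above can be illustrated with a short sketch. The server name, IDs, and entity names below are hypothetical examples, not real output from the tool.

```python
# Illustrative sketch of the import tool's naming conventions.
# "DFM01", the IDs, and the entity names are made-up examples.

def storage_policy_name(dfm_name: str, service_id: int, service_name: str) -> str:
    """Storage services map to policies named <DFM_name>_<StorageServiceID>_<StorageServiceName>."""
    return f"{dfm_name}_{service_id}_{service_name}"

def subclient_name(dfm_name: str, dataset_id: int, dataset_name: str) -> str:
    """Datasets map to subclients named <DFM_name>_<DatasetID>_<DatasetName>."""
    return f"{dfm_name}_{dataset_id}_{dataset_name}"

print(storage_policy_name("DFM01", 57, "GoldService"))  # DFM01_57_GoldService
print(subclient_name("DFM01", 1001, "Dataset1"))        # DFM01_1001_Dataset1
```

Knowing these formats makes it easy to locate the imported storage policy and subclient in the CommCell Browser after the import completes.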

What DataFabric Manager configurations are supported for import?

The following lists show the configurations that are supported for import as well as the configurations that are not.

What Gets Imported

  • All non-application datasets
  • All relationships listed in the External Relationship section of the DataFabric Manager server
  • Datasets with the following SnapProtect protection policies:
    • PM (Mirror)
    • PV (Back up)
    • PMV (Mirror, then Back up)
    • PVM (Back up, then Mirror)
    • PMM (Chain of two mirrors)
    • P<MM (Mirror to two destinations)
    • P<MV (Mirror and Back up)

    For external relationships, all of the above are supported except P<MM and P<MV.

  • SnapProtect schedules
  • Jobs and snapshots on the primary node of a dataset

What Does Not Get Imported

  • Datasets with disaster recovery (DR) protection policies
  • Application datasets that were not created through SnapProtect
  • Datasets with primary members that are part of existing subclients
  • Snapshots on the secondary and tertiary nodes of a dataset
  • Datasets with a file server or aggregate as a primary or secondary node member
  • Throttling settings on the protection policies
  • OSSV (Open Systems SnapVault) relationships
  • Pre-existing schedules, jobs, and snapshots on external relationships
  • The following external relationships:
    • Qtree level SnapMirror relationships
    • Volume level SnapVault relationships
    • More than one relationship coming from the same source

What should be done when the Import operation fails?

When a failure occurs during the import operation, perform the following steps to troubleshoot the issue:

  1. Review the failure reason in the Simpana console and logs and fix any issues found.
  2. Run the tool manually from the command line and execute the import command again to re-import the DataFabric Manager server information.

    Do not export the information again, as the configuration details may be in an inconsistent state.

  3. Once the import operation is successful, execute the command to export the DataFabric Manager server information.

Can I have multiple CommServers managing the same datasets?

Yes. However, having multiple CommServers manage the same datasets is not recommended, as it may cause data inconsistency.

What happens when I import datasets with only non-qtree data as primary members?

After the import operation, the corresponding subclient will have the entire volume as content, which includes non-qtree data and all other qtrees.

How can I skip the Import of specific datasets?

To skip specific datasets during the import operation, run the import tool from the command line and follow the steps below.

  1. Identify the Dataset Name and Dataset ID of each dataset you want to skip in the DataFabric Manager server. For example purposes, these steps use Dataset1 as the dataset name and 1001 as the dataset ID.
  2. Execute the following command to export the DataFabric Manager configuration details and store them in XML files.

    SnapProtectImport.exe WriteToXML

  3. Navigate to the <software installation path>\Base\SnapProtectImportXMLs folder. This folder contains the XML files you will need to edit.
  4. Open the SnapProtectImport_Config_<DFM Server Name>.xml file and delete the XML tags, and all their child tags, corresponding to the dataset you want to skip.
  5. Open the SnapProtectImport_Schedule_<DFM Server Name>.xml file and delete the Schedules tag and all its child tags corresponding to the dataset you want to skip. Example of this tag:

<Schedules DataSetId="1001" DatasetName="Dataset1" …

  6. Open the SnapProtectImport_Jobs_<DFM Server Name>.xml file and delete the Jobs tag and all its child tags corresponding to the dataset you want to skip. Example of this tag:

    <jobs DatasetId="1001" …

  7. Execute the following command to import the configuration details of the DataFabric Manager from the edited XML files to the CommServe:

    SnapProtectImport.exe ReadFromXML
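If you have many datasets to skip, the XML edits in steps 4 through 6 can be scripted. Below is a minimal sketch in Python that removes the elements for one dataset before you run the ReadFromXML command. The element and attribute names follow the examples above (note the mixed DataSetId/DatasetId casing); verify them against your own exported XML files, as they may differ in your environment.

```python
# Minimal sketch: drop the XML elements for a dataset to skip before import.
# Element/attribute names follow the examples in this FAQ; verify against
# your exported files in <software installation path>\Base\SnapProtectImportXMLs.
import xml.etree.ElementTree as ET

def drop_dataset(xml_path: str, tag: str, dataset_id: str) -> None:
    """Remove every <tag> element whose DatasetId/DataSetId attribute matches dataset_id."""
    tree = ET.parse(xml_path)
    root = tree.getroot()
    # Materialize the iterator so removals do not disturb traversal.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag == tag and dataset_id in (
                child.get("DatasetId"), child.get("DataSetId")
            ):
                parent.remove(child)
    tree.write(xml_path, xml_declaration=True, encoding="utf-8")

# Example: skip dataset 1001 in the schedule and job files for a server
# named DFM01 (a hypothetical server name):
# drop_dataset("SnapProtectImport_Schedule_DFM01.xml", "Schedules", "1001")
# drop_dataset("SnapProtectImport_Jobs_DFM01.xml", "jobs", "1001")
```

The dataset entries in the Config XML file (step 4) can be removed the same way once you confirm the tag name used in your export.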
