The following table shows the mapping of the DataFabric Manager server entities to the Simpana CommServe entities.

| DataFabric Manager Entity | Simpana CommServe Entity | Notes |
|---|---|---|
| Storage Service | Storage Policy | All existing storage services in the DataFabric Manager server are mapped to a storage policy in the Simpana software with the following name format: <DFM_name>_<StorageServiceID>_<StorageServiceName> |
| Storage Service nodes (Primary, Vault/Backup, Mirror) | Storage Policy Copies | The storage service nodes are mapped to the corresponding storage policy copies. |
| Dataset | Subclient | All existing datasets in the DataFabric Manager server are mapped to a subclient in the Simpana software with the following name format: <DFM_name>_<DatasetID>_<DatasetName>. A single subclient is created for each dataset unless multiple storage devices are provisioned on the primary node of the dataset; in that case, a new subclient is created for each storage device in the dataset. |
| Storage Device in the Primary Node for the Dataset | Client | The storage device is mapped as a NAS client. |
| Job from Primary Node | SnapProtect Job | A job run from the primary node is identified as a third-party SnapProtect job. |
| Backup/Vault Job | SnapVault Job | These jobs and snapshots are skipped. |
| Mirror Job | SnapMirror Job | These jobs and snapshots are skipped. |
| Resource Pool | Resource Pool associated with the Storage Policy Copy properties | If a resource pool does not exist in the configuration, a new resource pool is created and associated with the storage service. |
| Provisioning Policy | Provisioning Policy associated with the Storage Policy Copy properties | Primary provisioning policy settings are imported when a non-application dataset is converted to an application dataset during the import. The secondary provisioning policy is imported as well. |
| Protection Policy | Source Copy for the Storage Policy Copy | The topology of the protection policy is mapped to the corresponding storage policy copy. |
| Libraries | Media Libraries | |
| Schedule | SnapProtect or Auxiliary Copy schedule | A schedule from the DataFabric Manager server can have multiple sub-schedules. Each sub-schedule is converted to a SnapProtect or Auxiliary Copy schedule. Schedules are enabled by default. |
| Retention | Basic retention on the Storage Policy Copy and SnapProtect schedule properties | The retention information imported from the DataFabric Manager server is set as basic retention on the storage policy copies. |
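The name formats in the table above can be sketched as simple string compositions. This is an illustrative sketch only; the function names and the sample values ("dfm01", the IDs, and the entity names) are assumptions, not part of the product.

```python
# Sketch of the entity name formats described in the mapping table above.
# The sample values below are illustrative, not taken from a real configuration.

def storage_policy_name(dfm_name, service_id, service_name):
    # Storage Service -> Storage Policy:
    # <DFM_name>_<StorageServiceID>_<StorageServiceName>
    return f"{dfm_name}_{service_id}_{service_name}"

def subclient_name(dfm_name, dataset_id, dataset_name):
    # Dataset -> Subclient:
    # <DFM_name>_<DatasetID>_<DatasetName>
    return f"{dfm_name}_{dataset_id}_{dataset_name}"

print(storage_policy_name("dfm01", 2001, "GoldService"))  # dfm01_2001_GoldService
print(subclient_name("dfm01", 1001, "Dataset1"))          # dfm01_1001_Dataset1
```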
The following table displays the configurations that are supported for import as well as those that are not.

| What Gets Imported | What Does Not Get Imported |
When a failure occurs during the import operation, perform the following steps to troubleshoot the issue:
Do not export the information again, as the configuration details may be in an inconsistent state.
Yes. However, it is not recommended to have multiple CommServe servers handling the same datasets, as this may cause data inconsistency.
After the import operation, the corresponding subclient will have the entire volume as content, which includes non-qtree data and all other qtrees.
Follow the steps below to skip specific datasets during the import operation by running the import tool from the command line.
SnapProtectImport.exe WriteToXML
<DFMDataSetList DFMDataSetId="1001" DFMDataSetName="Dataset1" …
<DFMDataSetList DFMDataSetId="0" DFMDataSetName="Dataset1_
<DFMSubClients DFMDataSetName="Dataset1" …
<DFMSubClients DFMDataSetName="Dataset1_
<DFMStorageServices DFMStorageServiceName="Dataset1" StorageServiceId="0" …
<Schedules DataSetId="1001" DatasetName="Dataset1" …
<jobs DatasetId="1001" …
SnapProtectImport.exe ReadFromXML
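The manual edit above can also be scripted: after running WriteToXML, strip every element that references a skipped dataset, then run ReadFromXML. This is a minimal sketch, assuming the element and attribute names shown in the fragments above appear as direct children of the export file's root; the export file name and layout are assumptions about your environment.

```python
# Sketch: remove all entries for skipped datasets from the exported XML
# before running "SnapProtectImport.exe ReadFromXML".
# Element/attribute names are taken from the fragments above; the file
# layout (entries as direct children of the root) is an assumption.
import xml.etree.ElementTree as ET

def remove_skipped_datasets(root, skip):
    """Drop direct children referencing a dataset in `skip`, including
    derived names such as 'Dataset1_...'."""
    for tag, attr in [("DFMDataSetList", "DFMDataSetName"),
                      ("DFMSubClients", "DFMDataSetName"),
                      ("Schedules", "DatasetName")]:
        for elem in list(root.findall(tag)):
            name = (elem.get(attr) or "").strip()
            if name in skip or any(name.startswith(s + "_") for s in skip):
                root.remove(elem)

# Usage sketch (the file name is an assumption about your export):
# tree = ET.parse("SnapProtectImport.xml")
# remove_skipped_datasets(tree.getroot(), {"Dataset1"})
# tree.write("SnapProtectImport.xml", xml_declaration=True, encoding="utf-8")
```

Note that `findall` here matches only direct children of the root; if the entries are nested deeper in your export, walk the tree and remove them from their actual parent element instead.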