DPX Open Storage Best Practices



Catalogic DPX™ is built to protect data, applications, and servers using snapshot-based, block-level functionality. The source machines can use many storage technologies, and the destination (target) for backups can be NetApp (NSB solution) or non-NetApp storage (DPX Open Storage). This guide specifically describes using any non-NetApp disk storage with DPX software to provide a data protection solution. The hardware and software components are configured to implement a system that protects data on supported client systems to any Open Storage and optionally archives the data to tape. This guide offers specific recommendations for system configuration, as well as general guidelines across all components, including data protection software, storage system hardware and software, and tape library configuration. This ensures that the overall solution operates optimally and fulfills the customer's specific data protection needs.
DPX Open Storage technology is a disk-based storage backup feature designed to back up data to a central, secondary disk-based storage system. The source of the backup is referred to as the DPX client, whereas the destination of the backup is referred to as the DPX Open Storage Server. The DPX Open Storage feature is supported in DPX release 4.3 or later. In terms of technology, features, and functionality, DPX Open Storage is similar to the NSB solution that Catalogic offers, but it allows users to back up to a centralized server that is not a NetApp storage system.
DPX is compatible with a wide range of disk storage systems available in the market. For the latest system requirements and compatibility details regarding supported hardware, file systems, applications, operating systems, and service packs, refer to the System Requirements and Compatibility section on our website.

The DPX Open Storage Server component can be installed only on Windows-based systems running Windows 2008 R2, Windows 2012, Windows 2012 R2, or Windows 2016. The DPX Open Storage feature can be used to perform block-level backups of servers and applications by installing the DPX client agent on the servers.
The following features are not supported with DPX Open Storage Server architecture:
  • Server level recovery (BMR/IV/FV/RRP) for UEFI based clients
For system requirements to deploy the DPX Open Storage Server component, refer to the latest System Requirements and Compatibility Guide section on our website. To do so, navigate to the website, select your product version, and click the hyperlink titled “Product Compatibility”.
It is strongly recommended to maintain at least 30% free space on each destination volume used by the DPX Open Storage Server. By default, a warning is issued when free space falls below 30%, and the backup fails if free space falls and remains below 20%. It is therefore very important to ensure that adequate free space is available in each volume at all times. The default free-space thresholds and alerts must not be modified.
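These default thresholds can also be monitored externally. The following Python sketch is illustrative only: the 30% and 20% values mirror the defaults described above, but the classification logic and function name are assumptions, not DPX's implementation (a real script might feed it values from `shutil.disk_usage`):

```python
# Hypothetical monitor mirroring the default DPX free-space thresholds:
# warn below 30% free, error (backup fails) below 20% free.
WARN_PCT = 30
ERROR_PCT = 20

def free_space_status(total_bytes, free_bytes):
    """Classify a destination volume's free space against the defaults."""
    pct_free = 100.0 * free_bytes / total_bytes
    if pct_free < ERROR_PCT:
        return "error"    # backups to this volume will fail
    if pct_free < WARN_PCT:
        return "warning"  # DPX issues a warning
    return "ok"
```

For example, a 10 TB volume with 2.5 TB free (25%) is above the error threshold but below the warning threshold.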

System Requirements

The following are additional considerations for DPX Open Storage Server installation:
  • For new installations, a Windows 2016 machine is recommended.
  • A DPX Open Storage Server can be hosted on a Windows 2012 R2 machine; however, not all Windows 2012 storage features are supported. In general, Windows 2012 features that are also available on Windows 2008 are supported.
  • A minimum of a dual-core CPU or two CPUs is required. A quad-core CPU or four CPUs are recommended.
  • Windows x64 is required. Cluster nodes are not supported.
  • A minimum of 4 GB of available memory is required for new installations. 16 GB or more is recommended.
  • It is recommended to use a 10 GigE network for better performance.
  • It is recommended to use high-performance disk drives, such as 15K SAS drives with at least a 3 Gb/s connection.
  • Contact your Catalogic sales engineer for environments that exceed 15 TB in size (total backup data greater than 15 TB). Depending on the environment and how DPX is used, additional Open Storage Servers may be necessary.
  • If you are upgrading from an older software release such as BEX 3.4.1, you can continue using your Advanced Recovery for Open Storage (AROS) server if it meets the minimum requirements specified in the compatibility matrix.
  • DPX Open Storage Servers must reside in only one DPX Enterprise and relate to only one master server; that is, you cannot share a single DPX Open Storage Server across multiple DPX Master Servers.
  • A single DPX Enterprise can contain multiple DPX Open Storage Servers.
  • A DPX Open Storage Server must not be used for any purpose other than DPX. Additional applications or data on the server might reduce backup performance, degrade application performance, and increase the risk of storage data corruption. We strongly recommend using a dedicated server for the DPX Open Storage Server and not sharing it with the DPX Master Server.
  • It is not recommended to use the server for DISKDIRECTORY volumes or for reporting applications.
  • A highly reliable configuration such as RAID 5 with hot spares is recommended.
  • The Microsoft iSCSI Initiator is required; however, the Microsoft iSCSI Target service must not be running. Use the StarWind iSCSI target service only.
  • It is important to ensure that the network infrastructure sustains the desired data transfer rate across all segments and devices. Use of multiple network adapters in combination with technologies such as NIC teaming and port trunking is recommended when backing up many server volumes concurrently to the DPX Open Storage Server.
Do not use continuous, real-time, or online defragmentation utilities with the server. These utilities can interfere with backup, condense, and restore operations. The server is optimized to manage its files without additional defragmentation.

Installing DPX Open Storage Server

Follow the steps outlined in the DPX Deployment Guide to install the DPX Open Storage Server component.

Migrating DPX Open Storage Server (OSS)

If you need to migrate the DPX Open Storage Server to another server, review the KB article DPX Open Storage Server Migration Guide for details.

Storage and Sizing for Secondary Data

Any physical disk drives that can be formatted with a block-based file system can be used for secondary storage. Secondary storage needs are typically met with lower-cost and/or larger-capacity drives. It is strongly recommended to use high-performance disk drives, such as FC, DAS, or SAS, for better performance.
Storage needs for DPX Open Storage depend on the size of existing data, frequency of backup, retention period, and change rate. Consult your Catalogic Software sales engineer for approximate storage requirement estimates for your specific environment and data protection policies. It is advised to take a conservative approach for initial storage provisioning as it can be difficult to estimate what an environment’s change rate and growth will be over time. Additionally, note that storage efficiency savings are not absolute and are inherently data dependent. Deduplication and compression may not be appropriate for all secondary storage volumes and the savings achieved with either are highly dependent on similarity and compressibility of the source data.
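As a rough illustration of how these factors interact, the following Python sketch estimates secondary-storage needs from source size, change rate, and retention. The formula and the efficiency factor are simplifying assumptions for planning discussions only, not a Catalogic sizing tool; consult your sales engineer for real estimates.

```python
def estimate_secondary_storage_tb(source_tb, daily_change_rate,
                                  retention_days, efficiency_factor=1.0):
    """Rough estimate: one base copy plus one day's changed blocks for
    each retained recovery point, scaled by an assumed deduplication/
    compression factor (1.0 = no savings assumed)."""
    raw = source_tb * (1 + daily_change_rate * retention_days)
    return raw * efficiency_factor

# Example: 5 TB of source data, 3% daily change, 30-day retention,
# no assumed dedup savings -> 5 * (1 + 0.03 * 30) = 9.5 TB.
```

Because change rate and growth are hard to predict, treat any such estimate conservatively, as the surrounding text advises.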
Short-term iSCSI restore operations, for example IA map, generally do not consume much space. However, longer-term use, such as long-running RRP restores or use of IV for I/O-intensive large data sets, could consume significant space in the volume containing the LUN. Regular monitoring of disk space is recommended to avoid out-of-space conditions.
Adhere to the following guidelines when creating the destination volumes on the DPX Open Storage Server:
  • Destination volumes should not be co-located with the system root partition.
  • Destination volumes should be dedicated to disk backups and not be shared with other applications.
  • Consider the anticipated client fan-in, retention needs, backup frequency, incremental data growth, and the minimum required free space thresholds when determining the size of the destination volume.
  • We recommend no more than three destination volumes (a max of 30 TB each) on a DPX Open Storage Server.
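The free-space guideline can be folded directly into volume sizing. In this hypothetical Python helper, the 30% headroom and the 30 TB per-volume maximum come from this guide, while the sizing formula itself is an assumption for illustration:

```python
def destination_volume_size_tb(expected_backup_data_tb, headroom_pct=30,
                               max_volume_tb=30):
    """Size a destination volume so the expected backup data still leaves
    the recommended free-space headroom. Returns (size_tb, within_cap),
    where within_cap is False if the volume would exceed the suggested
    30 TB per-volume maximum."""
    size = expected_backup_data_tb / (1 - headroom_pct / 100.0)
    return size, size <= max_volume_tb
```

For example, 14 TB of expected backup data with 30% headroom calls for a 20 TB volume, which is within the suggested cap; 28 TB of expected data would call for a 40 TB volume and should instead be split or revisited.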

General Considerations

  • When more than one client is included in a backup job, select and group clients across jobs carefully. While no single rule applies to all situations and environments, we recommend including no more than 10 clients per job. The number of clients per job may need to be further reduced if the clients have large amounts of data (more than 1 TB per client).
  • When grouping multiple clients to take advantage of space-efficiency features, grouping clients that run similar operating systems and host similar types of data may be more beneficial.
  • Retention periods are specified at the job level, so if a backup job definition includes 10 clients, the retention period defined for that job applies to all 10 clients in that job. It is important to consider this when grouping clients in a job.
  • Backup jobs can be scheduled to start automatically at predefined times. The schedule, however, is associated with the job, and all the clients included in a job are backed up together. It is important to consider this when grouping clients in a job.
  • Backup jobs should be scheduled carefully to adhere to defined RPO requirements. Since all the clients included in a job will have identical recovery points, the desired RPO should also be taken into consideration when combining multiple clients in a single job. It is good practice to consult all the concerned administrators and stakeholders beforehand to ensure that Service Level Agreement (SLA) requirements are fully met.
  • There must be a one-to-one relationship between a backup job (source) and the destination volume (target) regardless of the number of clients included in the job.
  • At the start of, and throughout, a backup, DPX checks for certain minimum percentages (30% and 20%) of free space in the destination volume. If available free space falls below the error threshold at any time during the backup, the data transfer task to the target volume fails and is queued for retry according to the chosen retry parameters. If free space remains below the threshold after all retries are exhausted, the backup job itself fails. It is therefore recommended to create fewer, larger volumes instead of many smaller volumes on the DPX Open Storage Server so that free space is not divided across many volumes.
  • If the backup data on disk is further backed up to tape via the “Archive to Media” backup type, make sure that the “Archive to Media” backup does not overlap with the corresponding disk-to-disk backup. “Archive to Media” transfers the base image, not the incremental image, to tape and hence may take longer than expected to complete.
  • Virtual disks such as BMR, SQL, and EXCH should be included in the source selection of the backup if advanced recovery features are desired. A server recovery (Bare Metal Restore or Virtualization) cannot be performed from a backup if “BMR” is not included in the backup job definition. Similarly, database backups will not be application-consistent if virtual objects such as SQL or EXCH are not included.
  • Except for the first (base) backup, all subsequent backups build on top of the previous backup. It is therefore important to verify these backups frequently, either manually or as a scheduled job. Refer to the User Guide for more information.
  • It is critical to maintain and verify the health of the backup snapshots stored on the Open Storage Server. For agent-based OSS backups, refer to the KB article Data Verification for DPX Open Storage Server for details on the various methods of verification and to learn how to automate the verification process.
  • Expiring DPX Open Storage Server recovery points via a Catalog Condense process is a CPU- and I/O-intensive task that can take a significant amount of time. The time required scales directly with the number of snapshots that need to be expired. A backup frequency of daily or less often is recommended to reduce the load on Catalog Condense. If hourly backups are performed, do so judiciously with this consideration in mind.
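The grouping guidance above can be sketched as a simple partitioning routine. Everything here is illustrative: the 10-client cap and the 1 TB "large client" threshold come from this guide, while the smaller cap for large clients, the data structure, and the function name are assumptions:

```python
from itertools import groupby

def group_clients_into_jobs(clients, max_per_job=10, large_tb=1.0,
                            max_large_per_job=5):
    """Partition (name, os, size_tb) tuples into backup jobs: clients
    with similar operating systems are grouped together, with at most
    10 clients per job and an assumed smaller cap when a client holds
    more than ~1 TB of data."""
    jobs = []
    clients = sorted(clients, key=lambda c: c[1])  # similar OSes together
    for _, group in groupby(clients, key=lambda c: c[1]):
        current = []
        for client in group:
            cap = max_large_per_job if client[2] > large_tb else max_per_job
            if len(current) >= cap:
                jobs.append(current)
                current = []
            current.append(client)
        if current:
            jobs.append(current)
    return jobs
```

Remember that every client in a resulting job shares the job's retention period, schedule, and recovery points, as noted above.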

Protecting the DPX Open Storage Server to Tape

This section provides guidelines on how to protect the Open Storage Server (OSS) itself.  An OSS installation can encompass two different types of client data protection technologies:
  • Agent-based technology.  A client with the DPX agent installed in the guest operating system; backups are performed to the OSS server through the DPX agent.
  • Agentless technology. A VM is selected to be protected without an agent installed; backups to the OSS server are performed without an agent.
For agentless technology, use the methods outlined in this chapter to protect that data to tape. For agent-based technology backups, these same methods can be used, but agent-based technology backups also have the ability to use the DPX Archive (or Archive to Media) feature described in the next chapter.
The methods in this chapter detail how to protect an entire OSS server volume to tape.  Since these volumes are typically large, plan accordingly using a scheme that does not involve daily base backups; for example, a weekly/monthly base and daily incrementals.  These methods can be used for disaster recovery. 
The technology options available to protect the OSS volume to tape are Image backup and File backup. File backup allows individual file backup and restore, but it is currently not supported on volumes enabled for Windows deduplication. Therefore, how you protect and recover the DPX Open Storage Server depends on whether deduplication is used on the OSS volume containing the data to be protected to tape. For OSS volumes with deduplication enabled, the Image backup feature must be used; Image backup requires recovery of the volume in its entirety, not of individual files. If the target OSS volume does not have deduplication enabled, File backup becomes available as an additional option, allowing you to recover either the entire volume or individual files on the OSS server. Although Image backup cannot recover individual files, it works at the block level, so its incremental backups may be substantially smaller.
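The choice above reduces to a small rule, sketched here in Python (the function name and return values are illustrative, not part of DPX):

```python
def tape_backup_options(volume_dedup_enabled):
    """Tape protection methods available for an OSS volume, per the
    guidance above: Image backup always applies; File backup (which
    adds individual-file restore) is unavailable on volumes with
    Windows deduplication enabled."""
    options = ["image"]
    if not volume_dedup_enabled:
        options.append("file")
    return options
```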

Archive to Media for Agent-based Clients

Customers that need a disk-to-disk-to-tape solution can use the Tape Archive function of the DPX Open Storage solution. Through Tape Archive, you can schedule a backup to the DPX Open Storage Server disk and then immediately archive the snapshot to a physical tape device. The resulting snapshot appears on tape as a full backup, with a separately defined retention period.
When restoring, DPX determines whether the snapshot resides on DPX Open Storage Server disk and, if so, uses that snapshot to perform the restore. If the snapshot has since expired on disk, due to the longer-term retention that you can set on tape, DPX automatically references the tape containing the blocks of backup data in a single-step restore.

General Considerations

  • Note that this feature does not currently work with agentless backups. To protect and restore agentless backups from tape, see the methods outlined in the previous chapter, “Protecting the DPX Open Storage Server to Tape”.
  • When restoring from tape, Block restore features such as Instant Access, Bare Metal Restore, and Virtualization restore will not be available.
  • Even if file history is disabled on the original disk-to-disk job, it will be generated and stored in the DPX catalog when the snapshot is archived to tape.
Note: File history is not supported on a deduplicated source volume. For volumes that have been deduplicated, you will not be able to use Archive to Media.

Archive to Media

Best Practices

  • Define the archive to media process as a schedule in the SAME job definition as your disk-to-disk job. This associates the archive schedule with the latest snapshot for this job residing on the DPX Open Storage Server, archiving it to a tape device. Setting up a separate backup job and archive schedule will not have the intended results.
  • Unless required for compliance purposes, archive to media should be scheduled on a weekly or monthly basis. Scheduling archive to media daily, while greatly improving the recovery point objective on tape, is not cost-effective in terms of tape usage, as it is comparable to performing a base backup to tape every night.
  • Schedule the archive to media process for a time when you know your disk-to-disk job will have completed. For example, if you schedule your D2D job for 6:00 PM every evening, schedule the archive to media for Saturday at 6:00 AM.
  • Avoid scheduling Archive to Media while Catalog Condense processes are running.
  • Calculate the amount of time and number of tape devices you will need in order to perform the archive based on the technology you are using.
Example: To archive a server snapshot that is 50 GB in size using a single tape drive capable of writing 150 MB/s, attached locally to the DPX Open Storage Server, the time required would be 50 GB / (150 MB/s) ≈ 6 minutes.


  • Archive to Media requires file history processing to be enabled under Destination > NDMP options. It automatically generates file history if the option has been disabled (a best-practice recommendation during implementation to conserve catalog space), but file history cannot be generated if the original source volume is deduplicated.
  • By default, Archive to Media creates one task per volume contained in the snapshot. Each task attempts to allocate its own tape device, regardless of the default MAXDEVICES global setting in the enterprise. Manually modifying the job definition is an alternative, but this should only be done under the guidance of Catalogic Support services. Be sure to make a backup of the job definition file before any changes are made.
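The throughput calculation in the example above can be generalized. This Python sketch assumes the drives stream in parallel at their rated speed and ignores tape load and positioning time, so treat its results as lower bounds:

```python
def archive_time_minutes(snapshot_gb, drive_mb_per_s, drives=1):
    """Estimated time to archive a snapshot to tape, assuming parallel
    streaming at rated drive speed (1 GB treated as 1000 MB)."""
    seconds = (snapshot_gb * 1000) / (drive_mb_per_s * drives)
    return seconds / 60

# The 50 GB / 150 MB/s example above works out to roughly 5.6 minutes.
```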

Tape Library Configuration Best Practices

  • It is recommended that the tape library be directly connected to the DPX Open Storage Server for efficient, high-speed local backups of snapshots residing on that server.
  • A separate media pool should be used for backup to tape. Media pools should be organized based on tape retention (e.g. weekly, monthly, yearly, etc.).
  • Avoid performing tape backups to a tape library attached to another device server (one that is not the DPX Open Storage Server). Such a configuration greatly impacts archiving performance and places unnecessary load on the network.
  • Avoid mixing tape media used for other job types, e.g., file-level, image, and NDMP backup jobs.
Article Type: Long Form
Article Number: 000004947
Article Created Date: 3/27/2017 9:26 AM
