
Backup Repositories

We should consider the following factors when building a backup repository:

  • Capacity
  • Write performance
  • Read performance
  • Data density
  • Security
  • Backup file utilization

As a basic guideline, a repository should be highly resilient, since it is hosting customer data. It also needs to be scalable, allowing backups to grow as needed.

Organization policies may require different storage types for backups with different retention. In such scenarios, you may configure two backup repositories:

  • A high-performance repository hosting several recent retention points for instant restores and other quick operations
  • A repository with more capacity, but using a cheaper and slower storage, storing long-term retention points

You can consume both layers by setting up a backup copy job from the first to the second repository, or leverage Scale-out Backup Repository, if licensed.
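
To size the two tiers, a rough capacity estimate per tier can be useful. The Python sketch below is a minimal illustration only: the compression ratio, daily change rate and retention values are assumptions made for the example, not Veeam defaults or measured figures.

```python
# Rough capacity estimate for a two-tier repository layout.
# All ratios and retention values are illustrative assumptions.

def full_backup_size(source_tb, compression=0.5):
    """Size of one full backup file after compression/deduplication."""
    return source_tb * compression

def incremental_size(source_tb, change_rate=0.05, compression=0.5):
    """Size of one daily incremental backup file."""
    return source_tb * change_rate * compression

def performance_tier_tb(source_tb, daily_points=14):
    """Short-term chain kept on fast storage: one full plus daily incrementals."""
    return full_backup_size(source_tb) + (daily_points - 1) * incremental_size(source_tb)

def capacity_tier_tb(source_tb, weekly_fulls=12):
    """Long-term retention kept on cheaper storage as periodic full backups."""
    return weekly_fulls * full_backup_size(source_tb)

source = 20.0  # TB of source data (example value)
print(f"Performance tier: ~{performance_tier_tb(source):.1f} TB")
print(f"Capacity tier:    ~{capacity_tier_tb(source):.1f} TB")
```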

Per-VM Backup Files

It is possible to write one backup file chain per VM on a repository, instead of the regular chain holding data for all the VMs of a given job. This option greatly eases job management, allowing you to create jobs containing many more VMs than jobs with single chains, and it also enhances performance thanks to more simultaneous write streams towards the repository, even when running a single job.

In addition to optimizing write performance with additional streams to multiple files, there are other positive side effects. When using the forward incremental forever backup mode, you may experience improved merge performance. When backup file compacting is enabled, per-VM backup files require less free space: instead of requiring enough space to temporarily accommodate an additional entire full backup file, only free space equivalent to the largest backup file in the job is required. Parallel processing to tape will also perform better, as multiple files can be written to separate tape devices simultaneously.
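
The free-space difference for compacting can be illustrated with a quick comparison. The following Python sketch uses made-up per-VM full backup sizes; only the relative difference between the two layouts matters.

```python
# Temporary free space needed by backup file compacting,
# per-job chain vs. per-VM chains. Sizes in GB are example values.

vm_full_sizes_gb = {"vm01": 800, "vm02": 450, "vm03": 1200, "vm04": 300}

# Per-job chain: all VMs share one full backup file, so compacting needs
# scratch space for a second copy of that entire file.
per_job_scratch_gb = sum(vm_full_sizes_gb.values())

# Per-VM chains: files are compacted one at a time, so the worst case is
# scratch space equal to the largest single backup file in the job.
per_vm_scratch_gb = max(vm_full_sizes_gb.values())

print(f"Per-job chain compacting scratch space: {per_job_scratch_gb} GB")
print(f"Per-VM chain compacting scratch space:  {per_vm_scratch_gb} GB")
```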


Per-VM backup files is an advanced option available for backup repositories, and it is disabled by default for new simple backup repositories. When enabling the option on a repository, an active full backup is necessary for existing jobs to apply the setting.

Using per-VM backup files will negatively impact repository space usage, since Veeam deduplication is file based. If backup jobs have been created by grouping similar guests to optimize deduplication and an active full is used, the per-VM backup chains might require additional repository space.

** NOTE: In Scale-Out Backup Repositories, Per-VM backup files option is ENABLED by default **

Concurrent Tasks

Start by configuring one concurrent task per CPU core and adjust based on the load of the server, storage and network.

Depending on the storage used, too many write threads to a storage device might be counterproductive. For example, a low-end NAS will probably not react well to the high number of parallel write processes created by per-VM backup files; to mitigate this, it is better to limit the number of concurrent tasks in such cases. On the other hand, a high-end deduplication appliance might be limited in single write stream performance but handle many parallel tasks very well, and thus profits from per-VM backup files combined with enough concurrent repository tasks.

** Note: Also account for tasks used by read operations on backup repositories (such as backup copy jobs). **
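
As a rough illustration of these sizing considerations, the Python sketch below starts from the one-task-per-core guideline and adjusts it per storage class. The storage classes, their caps and the read-task reserve are illustrative assumptions for the example, not Veeam settings or recommendations.

```python
# Minimal sizing sketch for the repository concurrent task limit.
# The one-task-per-core starting point comes from the text; the
# per-storage adjustments and read-task reserve are assumptions.

def suggested_task_limit(cpu_cores: int, storage_type: str, read_tasks: int = 2) -> dict:
    baseline = cpu_cores  # start with one concurrent task per CPU core

    # Hypothetical adjustments per storage class: a low-end NAS tolerates
    # few parallel writers, a high-end deduplication appliance handles
    # many parallel tasks well.
    adjusted = {
        "low_end_nas": min(baseline, 4),
        "general_block": baseline,
        "dedupe_appliance": 2 * baseline,
    }.get(storage_type, baseline)

    # Read operations (e.g. backup copy jobs) also consume repository tasks,
    # so keep part of the limit available for them.
    write_tasks = max(1, adjusted - read_tasks)
    return {"task_limit": adjusted,
            "write_tasks": write_tasks,
            "read_tasks": adjusted - write_tasks}

print(suggested_task_limit(16, "low_end_nas"))       # capped for a slow NAS
print(suggested_task_limit(16, "dedupe_appliance"))  # allows more parallel tasks
```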

Backup File Size

Best practice is to keep the backup chain size (sum of a single full and linked incrementals) under 10 TB (~16 TB of source data).

Remember that very big objects can become hard to manage. Since Veeam allows a backup chain to be moved from one repository to another with nothing more than a copy/paste operation on the files themselves, it is recommended to keep file sizes manageable. This allows for a smooth, simple and effortless repository storage migration and better storage-use distribution in SOBRs.

Per-VM backup files help here, as the size of each chain depends only on the size of a single VM.
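
The effect on chain size can be illustrated with a simple estimate. The Python sketch below uses assumed compression, change-rate and retention values and hypothetical VM sizes; it only demonstrates how a per-job chain can exceed the 10 TB guideline while the individual per-VM chains stay below it.

```python
# Rough check of backup chain sizes against the ~10 TB guideline.
# Compression ratio, change rate, retention and VM sizes are
# illustrative assumptions, not measured values.

COMPRESSION = 0.5     # compressed size / source size
CHANGE_RATE = 0.05    # daily changed data as a fraction of source
RETENTION = 14        # restore points per chain (1 full + 13 incrementals)
GUIDELINE_TB = 10.0

def chain_size_tb(source_tb):
    """Estimated size of one full backup plus its linked incrementals."""
    full = source_tb * COMPRESSION
    incrementals = (RETENTION - 1) * source_tb * CHANGE_RATE * COMPRESSION
    return full + incrementals

vm_sizes_tb = {"fileserver": 12.0, "db01": 4.0, "web01": 0.5}  # hypothetical VMs

# Per-job chain: all VMs end up in one chain.
job_chain = chain_size_tb(sum(vm_sizes_tb.values()))
print(f"Per-job chain: {job_chain:.1f} TB (guideline: {GUIDELINE_TB} TB)")

# Per-VM chains: each chain only depends on its own VM size.
for vm, size in vm_sizes_tb.items():
    print(f"  {vm}: {chain_size_tb(size):.1f} TB")
```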

