Saturday, November 8, 2014

Use Synology as a Veeam B&R "Linux Repository"

I posted a fix earlier today for adding back the key exchange & cipher sets that Veeam needs when connecting to a Synology NAS running DSM 5.1 as a Linux host for use as a backup repository. As it turns out, some folks with Synology devices didn't know that using them as a "native Linux repository" was possible. This post will document the process I used to get it going originally on DSM 5.0; it wasn't a lot of trial-and-error, thanks to the work done by others and posted to the Veeam forums.

Caveat: I have no clue whether this will work on DSM 4.x; I was already running 5.0 by the time I started working on it.

  1. Create a shared folder on your device. Mine is /volume1/veeam
  2. Install Perl in the Synology package center.
  3. If running DSM 5.1 or later, update the /etc/ssh/sshd_config file as documented in my other post
  4. Enable SSH (control panel --> system --> terminal & snmp)
  5. Enable User Home Service (control panel --> user --> advanced)
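For step 3, the change amounts to re-adding legacy key-exchange and cipher lines to sshd_config. An illustrative fragment follows; the exact algorithm lists are the ones from my earlier post and depend on your DSM and Veeam versions, so treat these as placeholders:

```
# /etc/ssh/sshd_config -- illustrative lines only; use the algorithm lists
# from the earlier post for your DSM/Veeam combination, then restart SSH.
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
```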
Once this much is done, Veeam B&R will successfully create a Linux-style repository using that path. However, it will not be able to correctly recognize free space without an additional tweak, and for that tweak, you need to understand how B&R works with a Linux repository...

When integrating a Linux repository, B&R does not install software on the Linux host. Here's how it works: 
  1. connects to the host over SSH
  2. transmits a "tarball" (veeam_soap.tar)
  3. extracts the tarball into a temporary location on the host
  4. runs some Perl scripts found in the tarball
It does this Every. Time. It. Connects.

One of the files in this bundle (in lib/Esx/System/Filesystem/) invokes the Linux 'df' command with arguments that the Synology's busybox shell doesn't understand/support. To get Veeam to correctly recognize the space available on the Synology volume, you'll need to edit that file to remove the invalid "-x vmfs" argument (line 72 in my version). The edited file must then be put back into the tarball so it gets re-sent to the Synology every time Veeam connects. This also means every Linux repository will receive the change (in general, this shouldn't be an issue, because the typical Linux host won't have a native VMFS volume to ignore).
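Assuming GNU tar and sed, the unpack/edit/repack cycle looks like this. To keep the snippet self-contained it fabricates a stand-in tarball first; in a real run you'd start from Veeam's veeam_soap.tar, and "filesystem.pm" is a placeholder name for the actual file identified in the forum thread:

```shell
# Stand-in tarball so the steps can be tried anywhere; in a real run, start
# from veeam_soap.tar in the B&R install directory. "filesystem.pm" is a
# placeholder file name -- edit the actual file identified in the forums.
cd "$(mktemp -d)"
mkdir -p soap/lib/Esx/System/Filesystem
printf 'my @out = `df -P -B 1K -x vmfs`;\n' > soap/lib/Esx/System/Filesystem/filesystem.pm
tar -cf veeam_soap.tar -C soap .

# The actual fix: unpack, strip the "-x vmfs" argument, repack.
mkdir work && tar -xf veeam_soap.tar -C work
sed -i 's/ -x vmfs//g' work/lib/Esx/System/Filesystem/*.pm
tar -cf veeam_soap.tar -C work .

# Verify the argument is gone:
tar -xOf veeam_soap.tar ./lib/Esx/System/Filesystem/filesystem.pm
```

On Windows, the same edit can be done with 7-Zip and a text editor, as described below.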

Requests have been made in the Veeam forums to build some real intelligence into the Perl module so that it properly recognizes when the '-x' argument is valid and when it isn't.

So how does one complete this last step? First task: finding the tarball. On my backup server running Windows Server 2012R2 and Veeam B&R 7, it's in c:\program files\veeam\backup and replication\backup. If you used a non-default install directory or have a different version of B&R, you might have to look elsewhere.

Second, I used a combination of 7-Zip and Notepad++ to manage the file edit on my Windows systems. Use whatever tool suits you, but do not use an editor that doesn't respect *nix-style text file conventions (like the end-of-line character).
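The concern is concrete: a file re-saved with Windows CRLF endings carries stray carriage-return bytes that the Synology's shell and Perl will choke on. A quick way to check (the .pm files below are created just for the demonstration):

```shell
# A CR byte anywhere in the file means Windows (CRLF) endings; the files
# below are stand-ins created just to demonstrate the check.
printf 'df -P -B 1K\n'   > unix.pm
printf 'df -P -B 1K\r\n' > windows.pm
CR=$(printf '\r')
grep -c "$CR" unix.pm      # prints 0 -- safe to repack
grep -c "$CR" windows.pm   # prints 1 -- re-save with Unix endings
```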

Once you edit the file and re-save the tarball, a rescan of the Linux repository that uses your Synology should result in valid space available results.

One final note: why do it this way? The Veeam forums have several posts suggesting that using an iSCSI target on the Synology--especially in conjunction with Windows 2012R2's NTFS dedupe capability--is a superior solution to using it as a Linux Repository. And I ran it that way for a long time: guest initiator in the backup host, direct attached to an iSCSI target. But I also ran into space issues on the target, and there aren't good ways to shrink things back down once you've consumed that space--even when thin provisioning for the target is enabled. No, it's been my experience that, while it's not as space-efficient, there are other benefits to using the Synology as a Linux repo. Your mileage may vary.


  1. Jim, how (or why) did you figure out that some Perl script buried that deep was the cause of the file-size reporting issues?! Mine used to say I had something like 80000TB of free space (I wish) ... but after applying your fix, it's now correct. I had a different issue that involved conversations with Veeam support, and they did not seem concerned about the incorrectly reported size, nor especially interested in fixing it. In their defense, it was not really related to my support case, I suppose. Anyway, thanks for the solution.

    1. Jeff, I can't take credit for finding the right file & line to edit in the tarball; that came from a thread in one of the Veeam forums. I didn't think of linking it, but here is the important one for posterity:

    2. Well that makes more sense ... I guess I never even thought of scouring the forums for a solution. Thanks for posting the link.

  2. Oh, and I can confirm that a Synology box running DSM 4.X (one of the later 4.X versions) does work ... at least mostly. I originally was running some version of DSM 4 when I first set this up. I say "mostly", because the support issue I mentioned in my previous post was logged because while my backups were working fine, the retention policy did not seem to be taking effect. At least, the physical files weren't getting deleted. In the end, they had me update to the latest DSM (standard support procedure) ... but I also tweaked something with NFS file services, and NFS permissions at the same time. One of those things fixed it, because it's working quite well now.

    1. It's interesting that tweaking NFS permissions & services had an impact on things: the whole point of getting this working correctly is that Veeam's own transport mechanism--not NFS, or anything else on the linux box--is being used. If you are instead referring to "regular" permissions, then yes, that can have an impact. But because we all pretty much use the 'root' user, that should trump everything else.

    2. Well, I'm not certain at all that anything I did regarding NFS had any effect. I guess I thought it might, due to the vPower NFS stuff in the repository setup. But after reading it again, it seems that's only used for running VMs directly off of backup files. I don't think I changed any "regular" permissions, as (like you said) I just set it up to connect as root.

      I guess I was just hoping that something else I did fixed the problem. It's either that, or the update to a later DSM fixed it, but that just seemed like customer-support wishful thinking. Perhaps I'll turn off NFS services and see what happens. At first it was tough to convince Veeam that I really did have a problem. They were convinced it was just retention policy ... however, I know for a fact that it was not physically removing old backup files, even though Veeam had removed them from its metadata, issued a command to delete them, and (I guess) presumed that they were deleted. But files would build up until the disk ran out of space, and I had to manually delete old backups so it could continue.

  3. Thank you for your guide so far. Your Key and Cipher post got me over the initial hurdle, but I have followed this page and my backups still fail with:

    Failed to start client agent on the host '<' Timeout to start agent

    I can see it uploading a 15 MB file to the tmp directory, and it has executable permissions, but it's still failing. I had followed the guide, removing -x vmfs and re-tarring the file.

    Any help would be greatly appreciated.

  4. Same here running DSM 5.2-5592 Update 1

  5. Same here on my Synology Diskstation.

    1. Same here. After analyzing the uploaded 16 MB file, it appears to be an x86 binary executable, not a Perl script. So on my Diskstation, which has an ARM CPU, there's no chance it will run :(

    2. This method only works on Synology products with x86 architecture. This is a limitation of Veeam, which only supports x86 (even in Veeam B&R 9).

  6. For DSM 6.x you will need to use public key authentication when adding the Synology as a Linux server. If you try to use password authentication, it will fail with a message about starting the SOAP API. See the following post for a guide on how to generate a key:
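    For reference, a hedged sketch of the key-generation step (the key file name and NAS host name below are assumptions, not anything Veeam or Synology mandates):

```shell
# Generate an RSA (SSH2) key pair for Veeam to use; no passphrase here for
# simplicity -- add one if your setup requires it.
ssh-keygen -t rsa -b 2048 -f veeam_syno_key -N ''
# Install the public half for root on the NAS (host name is an assumption):
#   ssh-copy-id -i veeam_syno_key.pub root@diskstation
# Then point Veeam's "Linux server" credentials at the veeam_syno_key file.
```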

    1. That may be one work-around. The other one--a simple one I used--is to reset the root password to something known. That's all I needed on all of my boxes...

  7. DSM 6.x here.
    Having a few issues getting this going.
    I edited sshd_config and added the lines for key authentication,

    created keys and put them in place; I even used 'cat > authorized_keys'
    (a single '>', so it overwrote anything already in the file).

    Something with the key is still not right; I can't seem to get past it.
    Also, Veeam 9 only supports RSA2/SSH2 keys, whereas Synology seems to want RSA1...

    I've seen that it might be possible to install OpenSSH for a better/newer SSH on the Synology.
    We're using an Intel chip in our RS2416+ unit, so it should be good to go.

    Figured we might as well get it working over NFS and with no proxy!

  8. Hello,

    Today it worked for me without changing anything in sshd_config:

    1° Create a shared folder on your device. Mine is /volume1/backup
    2° Install Perl in the Synology package center.
    3° Enable SSH (control panel --> system --> terminal & snmp) with the security level set to low (by default it's on normal, but Veeam can't connect at that level)
    4° Enable User Home Service ( control panel --> user --> advanced)

    Have fun.

    1. My Syno is running DSM 6.1.5.x
      It's a RS818+ NAS

    2. Just did this with Veeam and a Synology on 6.2-23739, and it works fine without sshd_config changes. Seems like the storage reporting is accurate as well; I'll send a few jobs there and see if it stays accurate. DS1815+
