Showing posts with label Veeam. Show all posts

Tuesday, June 7, 2022

Synology DSM and Veeam 11

For a long time, Veeam has been telling its users not to use "low-end NAS boxes" (e.g., Synology, QNAP, Thecus) as backup repositories for Backup & Replication (VBR), even though these Linux-based devices should be compatible as long as they have an "x86" architecture (as opposed to ARM).

The reality is that none of these devices runs a "bog standard" Linux distribution, and their appliance-based nature puts significant limitations on what can be done with their custom distributions.

However, there are many folks—both home users and small/budget-limited businesses—who are willing to "take their lumps" and give these things a shot as repositories.

I am one of them, particularly for my home "lab" environment. I've written about this use case (in particular, the headaches) a couple of times in this blog [1, 2], and this post joins them, addressing yet another fix/workaround that I've had to implement.

Background

I use a couple of different Synology boxes for backup purposes, but the one I'm dealing with today is the DS1817+. It has a 10GbE interface for connectivity to my network, a quad-core processor (the Intel Atom C2538) and 8GB RAM (upgradable to 16GB, but I haven't seen the demand that would require it). It is populated with 8x1TB SATA SSDs for ~6TB of backup capacity.

I upgraded DSM to 7.0 a while back, and had to make some adjustments to the NFS target service to continue to support ESXi datastores via NFS 4.1.

Yesterday, I updated it to 7.1-42661 Update 2, and was greeted by a number of failed backup jobs this morning.

Symptoms

All the failed jobs had a uniform symptom: "Timeout to start agent"

With further investigation, I saw that my DS1817+ managed server was "not available", and when I attempted to get VBR to re-establish control, I kept getting the same error during installation of the transport services:

Installing Veeam Data Mover service Error: Failed to invoke command /opt/veeam/transport/veeamtransport --install 6162:  /opt/veeam/transport/veeamtransport: error while loading shared libraries: libacl.so.1: cannot open shared object file: No such file or directory

Workaround

After some fruitless Linux-related searching for a fix, I discovered a thread on the Veeam Community Forum that addresses this exact issue [3].

This is apparently a known issue with VBR11 and Synology boxes, and as Veeam moves further and further away from "on the fly" deployment of the transport agent toward a permanently-installed "Data Mover" daemon (which is necessary to provide the Immutable Backup feature), it becomes a bigger issue. Veeam has no control over the distribution—and would just as soon have clients use other architectures—and Synology would probably be happy for customers to consider its own backup tool over competing options...

At any rate, some smart people posted workarounds to the issue after doing their own research, and I'm re-posting for my own reference because it worked for me.

  1. Download the latest ACL library from Debian source mirrors. The one I used—and the one in the Forum thread—is http://ftp.debian.org/debian/pool/main/a/acl/libacl1_2.2.53-10_amd64.deb
  2. Unpack the .deb file using 7-Zip
  3. Upload the data.tar file to your Synology box. Feel free to rename the file to retain your sanity; I did.
  4. Extract the tarball to the root directory using the "-C /" argument:
    tar xvf data.tar -C /
  5. If you are using a non-root account to do this work, you'll need to use "sudo" to write to the root filesystem. You will also need to adjust owner/permissions on the extracted directories & files:
    sudo tar xvf data.tar -C /
    sudo chown -R root:root /usr/lib/x86_64-linux-gnu
    sudo chmod -R 755 /usr/lib/x86_64-linux-gnu
  6. Create soft links for these files in the box's filesystem:
    sudo ln -sf /usr/lib/x86_64-linux-gnu/libacl.so.1 /usr/lib/libacl.so.1
    sudo ln -sf /usr/lib/x86_64-linux-gnu/libacl.so.1.1.2253 /usr/lib/libacl.so.1.1.2253
  7. Last, get rid of any "debris" from previous failed transport installations:
    sudo rm -R /opt/veeam
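Steps 4 through 7 can be wrapped in a small shell function so the whole sequence can be rehearsed against a scratch directory before touching the real root filesystem. This is my own sketch, not something from the Forum thread; the function name and the target-root parameter are my additions.

```shell
# fix_libacl: apply steps 4-7 of the workaround against a target root.
# Pass "/" as the second argument on the real Synology (run via sudo);
# pass a scratch directory instead to rehearse the sequence safely.
fix_libacl() {
    tarball="$1"
    root="${2:-/}"
    tar xf "$tarball" -C "$root" || return 1
    # chown needs root; ignore the failure when rehearsing as a normal user
    chown -R root:root "$root/usr/lib/x86_64-linux-gnu" 2>/dev/null || true
    chmod -R 755 "$root/usr/lib/x86_64-linux-gnu"
    # soft links so the loader finds the library on its default search path
    ln -sf /usr/lib/x86_64-linux-gnu/libacl.so.1 "$root/usr/lib/libacl.so.1"
    ln -sf /usr/lib/x86_64-linux-gnu/libacl.so.1.1.2253 "$root/usr/lib/libacl.so.1.1.2253"
    # clear debris from previous failed transport installations
    rm -rf "$root/opt/veeam"
}

# On the Synology (after uploading the renamed tarball to /tmp/libacl.tar):
#   sudo sh -c 'fix_libacl /tmp/libacl.tar /'
```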
Once the Synology is prepped, you must go back into VBR and re-synchronize with the Linux repository:
  1. Select the "Backup Infrastructure" node in the VBR console
  2. Select the Linux node under Managed Servers
  3. Right-click on the Synology box being updated and select "Properties..." from the popup menu.
  4. Click [Next >] until the only option is [Finish]. On the way, you should see that the Synology is correctly identified as a compatible Linux box, and the new Data Mover transport service is successfully installed.

Summary

I can't guarantee that this will work after a future update of DSM, and there may come a day when other libraries are "broken" by updates to VBR or DSM. But this workaround was successful for me.

Update

The workaround has persisted through a set of updates to DSM7. I have seen the same issue come up with DSM6, but this workaround does not work there; too many platform incompatibilities, I suspect. I need to do some more research & experimentation for DSM6...

Wednesday, May 17, 2017

VBR v10 new hotness

Sitting in the general session is not typically the way I'd compose a new post, but I'm pretty stoked by some new, long-desired features announced for the next version of Veeam Backup and Replication (VBR), version 10.

First is the (long awaited) inclusion of physical endpoint backup management via VBR console. We've had Endpoint Backup for a while, which is awesome, and we've been able to use VBR repositories to store backups, but all management was at the endpoint itself. In addition to centralized management, the newest version of the managed endpoint backup (alright, alright... Agent) will support Microsoft Failover Clusters at GA!

Second is a new feature that significantly expands VBR's capability: the ability to back up NAS devices. Technically, it's via SMB or NFS shares, so you could target any share--including one on a supported virtual or physical platform--but the intention is to give great backup & recovery options for organizations that utilize previously-unsupported NAS platforms, like NetApp, Celerra, etc.

Third--and most exciting to me, personally--is the addition of a replication mode utilizing VMware's new "VMware APIs for I/O Filtering" (VAIO). This replication mode uses a snapshot-free capture of VMDK changes on the source, with the destination being updated on a configurable, by-the-second interval (default of 15s). This new replication method is branded "Veeam CDP" (Continuous Data Protection). There are competing products on the market that offer similar capability, but Veeam advertises that it is the first to leverage VAIO, while other products use either undocumented/unsupported APIs or old APIs intended for physical replication devices.

There are a number of other nice, new features coming--Object storage support, Universal APIs for storage integration, etc.--but these three will be the big, compelling reasons to not only upgrade to Version 10 when it arrives (for current customers) but to upgrade your vSphere environments if you haven't already embraced Version 6.x.

Saturday, December 19, 2015

Veeam 9 and StoreOnce Catalyst

HPE has offered their StoreOnce deduplication platform as a free, 1TB virtual appliance for some time (the appliance is also available in licensed 5TB and 10TB variants). As a competitor to other dedupe backup targets, it offers similar protocols and features: virtual tape library, SMB (although they persist in calling it CIFS), NFS...and a proprietary protocol branded as Catalyst.
StoreOnce protocols
Catalyst is part of a unified protocol from HPE that ties together several different platforms, allowing "dedupe once, replicate anywhere" functionality. Like competing protocols, Catalyst also provides some performance improvements for both reads and writes as compared to "vanilla" file protocols.

Veeam has supported the StoreOnce platform since v8, but only through the SMB (err... CIFS?) protocol. With the imminent release of Veeam 9—with support for Catalyst—I decided to give the free product a try and see how it works with v8 and v9, and what the upgrade/migration process looks like.

HPE offers the StoreOnce VSA in several variants (ESXi stand-alone, vCenter-managed and Hyper-V), and it is very easy to deploy, configure and use through its integrated browser-based admin tool. Adding a storage pool is as simple as attaching a 1TB virtual disk to the VM (ideally, on a secondary HBA) before initialization.

Creating SMB shares is trivial, but if the appliance is configured to use Active Directory authentication, share access must be configured through the Windows Server Manager MMC snap-in; while functional, it's about as cumbersome as one might think. StoreOnce owners would be well-served if HPE added permission/access functionality into the administrative console. Using local authentication eliminates this annoyance, and is possibly the better answer for a dedicated backup appliance...but I digress.

StoreOnce fileshare configuration
Irrespective of the authentication method configured on the appliance, local authentication is the only option for Catalyst stores, which are also trivial to create & configure. In practice, the data stored in a Catalyst store is not visible or accessible via file or VTL protocols—and vice-versa; at least one competing platform with which I'm familiar doesn't have this restriction. This functional distinction makes it more difficult to migrate stored data from one protocol to another. Among other possible scenarios, this is particularly germane when an existing StoreOnce+Veeam user wishes to upgrade from v8 to v9 (presuming StoreOnce is also running a firmware version supported for Veeam's Catalyst integration) and has a significant amount of data on the file share "side" of the StoreOnce. A secondary effect is that there is no way to utilize the Catalyst store without a Catalyst-compatible software product: in my case, ingest is only possible using Veeam, whether via one of the backup job functions or the in-console file manager.

Veeam 9 file manager
As of this writing, I have no process for performing the data migration from File to Catalyst without first transferring the data to an external storage platform that can be natively managed by Veeam's "Files" console. Anyone upgrading from Veeam 8 to Veeam 9 will see the existing "native" StoreOnce repositories converted to SMB repositories; as a side effect, file-level management of the StoreOnce share is lost. Any new Catalyst stores can be managed through the Veeam console, but the loss of file management for the "share side" means there is no direct transfer possible. Data must be moved twice in order to migrate from File to Catalyst; competing platforms that provide simultaneous access via file & "proprietary" protocols allow migration through simple repository rescans.

Administrative negatives aside, the StoreOnce platform does a nice job of optimizing storage use with good dedupe ratios. Prior to implementing StoreOnce (with Veeam 8, so only SMB access), I was using Veeam-native compression & deduplication on a Linux-based NAS device. With no other changes to the backup files, migrating them from the non-dedupe NAS to StoreOnce resulted in an immediate 2x deduplication ratio; modifying the Veeam jobs to use dedupe-appliance-aware settings (e.g., no compression at storage) brought additional gains in dedupe efficiency. After upgrading to Veeam 9 (as a member of a partner organization, I have early access to the RTM build)—and going through the time-consuming process of migrating the folders from File to Catalyst—my current ratio is approaching 5x, giving me the impression that dedupe performance may be superior on Catalyst stores as compared to File shares. As far as I'm concerned, this is already pretty impressive dedupe performance (given that the majority of the job files are still using sub-optimal settings), and I'm looking forward to increasing efficiency as the job files cycle from the old settings to dedupe-appliance-optimized ones as retention points are aged out.

Appliance performance during simultaneous read, write operations
StoreOnce appliance performance will be variable, based not only on the configuration of the VM (vCPU, memory) but also on the performance of the underlying storage platform; users of existing StoreOnce physical appliances will have a fixed level of performance based on the platform/model. Users of the virtual StoreOnce appliance can inject additional performance into the system by upgrading the underlying storage (not to mention more CPU or memory, as dictated by the capacity of the appliance) to a higher performance tier.

Note: Veeam's deduplication appliance support—which is required for Catalyst—is only available with Enterprise (or Enterprise Plus) licensing. The 60-day trial license includes all Enterprise Plus features and can be used in conjunction with the free 1TB StoreOnce appliance license to evaluate this functionality in your environment, whether you are a current Veeam licensee or not.

Update

With the official release of Veeam B&R v9, Catalyst and StoreOnce are now available to those of you holding Enterprise B&R licenses. I will caution you, however, to use a different method of converting from shares to Catalyst than I used. Moving the files does work, but it's not a good solution: you don't get to take advantage of the per-VM backup files that are a feature of v9 (if a backup starts with a monolithic file, it will continue to use it; only creating a new backup—or completely deleting the existing files—will allow per-VM files to be created). Per-VM is the preferred format for Catalyst, and the dedupe engine works more efficiently with per-VM files than it does with monolithic files; I'm sure there's a technical reason for it, but I can vouch for it in practice. Prior to switching to per-VM files, my entire backup footprint, even after cycling through the monolithic files to eliminate dedupe-unfriendly elements like job-file compression, consumed over 1TB of raw storage with a dedupe ratio that never actually reached 5:1. After discarding all those jobs and starting fresh with cloned jobs and per-VM files, I now have all of my backups & restore points on a single 1TB appliance with room to spare and a dedupe ratio currently above 5:1.


I'm still fine-tuning, but I'm very pleased with the solution.

Saturday, November 8, 2014

Use Synology as a Veeam B&R "Linux Repository"

I posted a fix earlier today for adding back the key exchange & cipher sets that Veeam needs when connecting to a Synology NAS running DSM 5.1 as a Linux host for use as a backup repository. As it turns out, some folks with Synology devices didn't know that using them as a "native Linux repository" was possible. This post will document the process I used to get it going originally on DSM 5.0; it wasn't a lot of trial-and-error, thanks to the work done by others and posted to the Veeam forums.

Caveat: I have no clue if this will work on DSM 4.x, as I was already running 5.0 when I started working on it.

  1. Create a shared folder on your device. Mine is /volume1/veeam
  2. Install Perl in the Synology package center.
  3. If running DSM 5.1 or later, update the /etc/ssh/sshd_config file as documented in my other post
  4. Enable SSH (control panel --> system --> terminal & snmp)
  5. Enable User Home Service (control panel --> user --> advanced)
Once this much is done, Veeam B&R will successfully create a Linux-style repository using that path. However, it will not be able to correctly recognize free space without an additional tweak, and for that tweak, you need to understand how B&R works with a Linux repository...

When integrating a Linux repository, B&R does not install software on the Linux host. Here's how it works: 
  1. connects to the host over SSH
  2. transmits a "tarball" (veeam_soap.tar)
  3. extracts the tarball into a temporary location
  4. runs some Perl scripts found in the tarball
It does this Every. Time. It. Connects.

One of the files in this bundle (lib/Esx/System/Filesystem/Mount.pm) uses arguments to the Linux 'df' command that the Synology's busybox shell doesn't understand/support. To get Veeam to correctly recognize the space available on the Synology volume, you'll need to edit the Mount.pm file to remove the invalid "-x vmfs" argument (line 72 in my version) from the file. However, that file must be replaced within the tarball so it can be re-sent to the Synology every time it connects. This also means every Linux repository will get the change (in general, this shouldn't be an issue, because the typical Linux host won't have a native VMFS volume to ignore).

Requests in the Veeam forum have been made to build in some more real intelligence for the Perl module so that it will properly recognize when the '-x' argument is valid and when it isn't.

So how does one complete this last step? First task: finding the tarball. On my backup server running Windows Server 2012R2 and Veeam B&R 7, it's in c:\program files\veeam\backup and replication\backup. If you used a non-default install directory or have a different version of B&R, you might have to look elsewhere.

Second, I used a combination of 7-Zip and Notepad++ to manage the file edit on my Windows systems. Use whatever tool suits you, but do not use an editor that doesn't respect *nix-style text file conventions (like the end-of-line character).

Once you edit the file and re-save the tarball, a rescan of the Linux repository that uses your Synology should result in valid space available results.
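For reference, the same edit can be scripted end-to-end on a system with GNU tar and sed (e.g., under WSL or Cygwin). This is a sketch of my manual process, not an official procedure: the function name is mine, and the sed pattern assumes " -x vmfs" appears with a leading space as it does in my copy of Mount.pm, so verify against your version first.

```shell
# strip_vmfs_flag: back up veeam_soap.tar, remove the " -x vmfs" df
# argument from Mount.pm, and repack the tarball in place.
strip_vmfs_flag() {
    tarball="$1"
    work=$(mktemp -d)
    cp "$tarball" "$tarball.bak" || return 1     # keep a pristine copy
    tar xf "$tarball" -C "$work" || return 1
    # drop the argument the Synology busybox 'df' doesn't understand
    sed -i 's/ -x vmfs//' "$work/lib/Esx/System/Filesystem/Mount.pm" || return 1
    tar cf "$tarball" -C "$work" .
}

# e.g. (default B&R 7 path, adjust for your install):
#   strip_vmfs_flag "/c/Program Files/Veeam/Backup and Replication/Backup/veeam_soap.tar"
```

After repacking, rescan the repository as described below; keeping the .bak copy makes it easy to revert if a B&R patch replaces the tarball.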

One final note: why do it this way? The Veeam forums have several posts suggesting that using an iSCSI target on the Synology--especially in conjunction with Windows 2012R2's NTFS dedupe capability--is a superior solution to using it as a Linux Repository. And I ran it that way for a long time: guest initiator in the backup host, direct attached to an iSCSI target. But I also ran into space issues on the target, and there aren't good ways to shrink things back down once you've consumed that space--even when thin provisioning for the target is enabled. No, it's been my experience that, while it's not as space-efficient, there are other benefits to using the Synology as a Linux repo. Your mileage may vary.

Repair Synology DSM5.1 for use as a Linux backup repository.

After updating my Synology to DSM 5.1-5004, the following morning I was greeted by a rash of error messages from my Veeam B&R 7 backup jobs: "Error: Server does not support diffie-hellman-group1-sha1 for keyexchange"

I logged into the backup host and re-ran the repository resync process, to be greeted by the same error.
Synology DSM 5.1 error
The version of SSH on the Synology was OpenSSH 6.6p2.

As it turns out, this version of SSH doesn't enable the required key exchange protocol by default; luckily, that's an easy edit of the /etc/ssh/sshd_config file. And to play it safe, I added not only the needed KexAlgorithms parameter but also the published defaults.
KexAlgorithms diffie-hellman-group1-sha1,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
After restarting SSH in the DSM control panel, then re-scanning the repository, all was not quite fixed:

Back to the man page for sshd_config...

The list of supported ciphers is impressive, but rather than add all of them into the list, I thought it would be useful to get a log entry from the daemon itself as it negotiated the connection with the client. Unfortunately, it wasn't clear where it was logging, so it took some trial-and-error with the config settings before I found a useful set of parameters:
SyslogFacility USER
LogLevel DEBUG
At that point, performing a rescan resulted in an entry in /var/log/messages:
Armed with that entry, I could add a Ciphers entry to sshd_config, adding the options used by the Veeam SSH client to the defaults available in this version of sshd:
Ciphers aes128-cbc,blowfish-cbc,3des-cbc,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
One more rescan, and all was well, making it possible to retry the failed jobs.

Follow Up

There have been responses of both successes and failures from people using this post to get their repository back on line. I'm not sure what's going on, but I'll throw in these additional tips for editing sshd_config:
  1. Each of these entries (KexAlgorithms and Ciphers) is a single-line entry. You must have the keyword—case sensitive—followed by a single space, followed by the comma-separated values without whitespace or breaks.
  2. There's a spot in the default sshd_config that "looks" like the right place to put these entries; that's where I put them. It's a heading labelled "# Ciphers and keying." Just drop them into the space before the Logging section. In the screenshot below, you can see how there's no wrap, no whitespace, etc. This works for me.
  3. Restart the SSH service. You can use the command line (I recommend using telnet during this operation, or you'll lose your SSH connection as the daemon cycles) or the GUI control panel. If using the latter, uncheck SSH, save, then re-check SSH.
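Putting those tips together, the relevant stretch of my sshd_config looks like the following (values taken from this post; each directive is a single unbroken line, dropped in just before the Logging section):

```
# Ciphers and keying
KexAlgorithms diffie-hellman-group1-sha1,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
Ciphers aes128-cbc,blowfish-cbc,3des-cbc,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com

# Logging
```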

Friday, October 10, 2014

VeeamON 2014: A post-event challenge

Branded as the industry's first and only "Data Center Availability" conference, Veeam's freshman effort was a success by almost any measure.

Disclaimer: I work for a Veeam Partner, and my conference attendance was comp'd in exchange for some marketing/promotional activities prior to the conference. I have also been a long-time user of Veeam Backup & Replication—since before my transition to the partner side of the business—thanks to my vExpert status in the VMware community.

Because I work for a partner, I arrived in Las Vegas on Sunday, October 5 to attend the partner-oriented social & networking events and to be ready for the 8:30am start on Monday morning for the partner keynote.

In a twist from other industry conferences I've attended, the keynote was MC'd by comedian Richard Laible, with a format intended to mimic those of late-night talk shows. It was successful, and the give-and-take between Richard and his "guest" was well-orchestrated and amusing.

In the first "interview," Veeam CEO Ratmir Timashev was able to tell the story of the founding of Veeam, underscore the company's love of its reseller-partners, and reaffirm the company's longstanding policy of staying 100% "channel-based" (no customer may purchase directly from Veeam); most important, he talked about the shift of Veeam from being "merely the best" backup product for virtualization to striving to produce the best availability product for the enterprise.

Other Veeam employees took to the stage, and customer success stories were played out. In other words, much like any other keynote.

The remainder of the day was filled with breakout sessions covering a wide range of topics--both technical and business-oriented--for the partner crowd. The obligatory sponsor exposition opened for a happy hour/dinner reception, which also capped-off the scheduled activities for the day.

The second full day of events (Tuesday) opened with a second keynote, which echoed much of the messaging in the partner keynote but with an obviously new audience: the customers & prospects attending the event. In addition to even more entertainment (a pair from X-Pogo performed), some additional features of the forthcoming Version 8 of the "Availability Suite" (a rebranding of the former Backup & Management Suite) were shared, as well as even more customer testimonials underscoring Veeam's commitment not just to protecting data, but to making good on its aim to create the "always available datacenter."

The remainder of the day was again filled with breakout sessions, again ranging from business to technical topics. The day ran late, however, ending with the optional party at the "LIGHT" nightclub in the Mandalay Bay hotel.

The third and final day opened with breakout sessions; these seemed principally to be presented by sponsor partners rather than Veeam employees with Veeam-specific topics. None of the sessions I attended, however, seemed too far off-base for a Veeam-oriented conference: the connection and/or synergy between the sponsor's products and Veeam's was clear by the end of each session.

A final keynote by reddit.com's co-founder, Alexis Ohanian, was both humorous and insightful, and essentially closed out the conference.

There are many other posts out there with even more details and insight into the conference; check out my fellow #vDBer Mike Preston's series from the conference at http://blog.mwpreston.net for more insight and reporting.



My retelling of this is all to aim towards one thought: Veeam did a great job on their first conference. The content was relevant, the sponsors were invested and made sense, and it was both informative and entertaining.

Here's the challenge: What about 2015?

Unless the breakout catalog is significantly expanded, I'm not sure how many folks will want/need to attend a second year. Don't get me wrong: I'm not saying that no one will attend. On the contrary: if they repeated next year with a cookie-cutter duplicate of this year, anyone who a) didn't attend and b) wants to learn more about Veeam's products and how they can boost the availability of the datacenter would find their time well-spent.

I'm saying that everyone who went this year was a first-timer, and Veeam got the first-timer experience spot-on. They can still fine-tune it, but next year's first-time attendees will get great value whether they change it or not.

No, the problem is getting repeat attendees. The conference can grow its first-time attendance on positive word-of-mouth alone, but that approach will hit its ceiling far sooner than one that attracts both new attendees and repeat (alumni?) attendees.

As it was, the number that was rumored prior to the conference—around 1200 people comprised of attendees & Veeam staff—seemed to have some validity. The conference space at the Cosmopolitan was sized well for the attendees, and it was never crowded or crazy like VMworld can feel (with almost 20x the attendance). But I can't imagine that Veeam is going to be content with putting on a two-and-a-half-day conference for "only" 1000 people. Yes, you want a multi-day conference to help justify the travel costs, but let's be honest: the VMUG organization has chapters that manage to put together single-day conferences for that number of attendees.

This isn't meant as criticism: I'm identifying the challenge they now face, and sending a call-to-action to Veeam to plan next year's event—as far as I know, TBA for place & time, yet expected per Doug Hazelman's parting "See you next year at VeeamON 2015"—with the goals of both increasing the number of new attendees compared to the inaugural "class" from this year and compelling most (if not all) of this year's attendees to return.