Saturday, December 19, 2015

Veeam 9 and StoreOnce Catalyst

HPE has offered their StoreOnce deduplication platform as a free, 1TB virtual appliance for some time (the appliance is also available in licensed 5TB and 10TB variants). As a competitor to other dedupe backup targets, it offers similar protocols and features: virtual tape library, SMB (although they persist in calling it CIFS), NFS...and a proprietary protocol branded as Catalyst.
StoreOnce protocols
Catalyst is part of a unified protocol from HPE that ties together several different platforms, allowing "dedupe once, replicate anywhere" functionality. Like competing protocols, Catalyst also provides some performance improvements for both reads and writes as compared to "vanilla" file protocols.

Veeam has supported the StoreOnce platform since v8, but only through the SMB (err... CIFS?) protocol. With the imminent release of Veeam 9 (with support for Catalyst), I decided to give the free product a try and see how it works with v8 and v9, and what the upgrade/migration process looks like.

HPE offers the StoreOnce VSA in several variants (ESXi stand-alone, vCenter-managed and Hyper-V), and it is very easy to deploy, configure and use through its integrated browser-based admin tool. Adding a storage pool is as simple as attaching a 1TB virtual disk to the VM (ideally on a secondary HBA) before initialization.

Creating SMB shares is trivial, but if the appliance is configured to use Active Directory authentication, share access must be configured through the Windows Server Manager MMC snap-in; while functional, it's about as cumbersome as one might think. StoreOnce owners would be well-served if HPE added permission/access functionality into the administrative console. Using local authentication eliminates this annoyance, and is possibly the better answer for a dedicated backup appliance...but I digress.

StoreOnce fileshare configuration
Irrespective of the authentication method configured on the appliance, local authentication is the only option for Catalyst stores, which are also trivial to create and configure. In practice, the data stored in a Catalyst store is not visible or accessible via file or VTL protocols, and vice-versa; at least one competing platform with which I'm familiar doesn't have this restriction. This functional distinction makes it more difficult to migrate stored data from one protocol to another; among other scenarios, it is particularly germane when an existing StoreOnce+Veeam user with a significant amount of data on the file share "side" of the StoreOnce wishes to upgrade from v8 to v9 (presuming the StoreOnce is also running a firmware version supported for Veeam's Catalyst integration). A secondary effect is that there is no way to utilize a Catalyst store without a Catalyst-compatible software product: in my case, ingest is only possible using Veeam, whether through one of the backup job functions or the in-console file manager.

Veeam 9 file manager
As of this writing, I have no process for performing the data migration from File to Catalyst without first transferring the data to an external storage platform that can be natively managed by Veeam's "Files" console. Anyone upgrading from Veeam 8 to Veeam 9 will see existing "native" StoreOnce repositories converted to SMB repositories; as a side effect, file-level management of the StoreOnce share is lost. New Catalyst stores can be managed through the Veeam console, but the loss of file management for the "share side" means no direct transfer is possible. Data must be moved twice in order to migrate from File to Catalyst; competing platforms that provide simultaneous access via file and "proprietary" protocols allow migration through simple repository rescans.

Administrative negatives aside, the StoreOnce platform does a nice job of optimizing storage use with good dedupe ratios. Prior to implementing StoreOnce (with Veeam 8, so only SMB access), I was using Veeam-native compression and deduplication on a Linux-based NAS device. With no other changes to the backup files, migrating them from the non-dedupe NAS to StoreOnce resulted in an immediate 2x deduplication ratio; modifying the Veeam jobs to use dedupe-appliance-aware settings (e.g., no compression at storage) yielded additional gains in dedupe efficiency. After upgrading to Veeam 9 (as a member of a partner organization, I have early access to the RTM build) and going through the time-consuming process of migrating the folders from File to Catalyst, my current ratio is approaching 5x, which suggests that dedupe performance may be superior on Catalyst stores as compared to File shares. As far as I'm concerned, this is already impressive dedupe performance (given that the majority of the job files are still using sub-optimal settings), and I'm looking forward to further gains as the job files cycle from the old settings to dedupe-appliance-optimized ones as retention points are aged out.

Appliance performance during simultaneous read, write operations
StoreOnce appliance performance is variable, based not only on the configuration of the VM (vCPU, memory) but also on the performance of the underlying storage platform; users of physical StoreOnce appliances will have a fixed level of performance based on the platform/model. Users of the virtual StoreOnce appliance can boost performance by upgrading the underlying storage to a higher performance tier (not to mention adding CPU or memory, as dictated by the capacity of the appliance).

Note: Veeam's deduplication appliance support—which is required for Catalyst—is only available with Enterprise (or Enterprise Plus) licensing. The 60-day trial license includes all Enterprise Plus features and can be used in conjunction with the free 1TB StoreOnce appliance license to evaluate this functionality in your environment, whether you are a current Veeam licensee or not.


With the official release of Veeam B&R v9, Catalyst and StoreOnce are now available to those of you holding Enterprise B&R licenses. I will caution you, however, to use a different method of converting from shares to Catalyst than I used. Moving the files does work, but it's not a good solution: you don't get to take advantage of the per-VM backup files that are a feature of v9 (if a backup starts with a monolithic file, it will continue to use it; only creating a new backup, or completely deleting the existing files, will allow per-VM files to be created). Per-VM is the preferred format for Catalyst, and the dedupe engine works more efficiently with per-VM files than it does with monolithic files; I'm sure there's a technical reason for it, but I can vouch for it in practice. Prior to switching to per-VM files, my entire backup footprint, even after cycling through the monolithic files to eliminate dedupe-unfriendly elements like job-file compression, consumed over 1TB of raw storage with a dedupe ratio that never actually reached 5:1. After discarding all those jobs and starting fresh with cloned jobs and per-VM files, I now have all of my backups and restore points on a single 1TB appliance with room to spare and a dedupe ratio currently above 5:1.

I'm still fine-tuning, but I'm very pleased with the solution.

Monday, November 23, 2015

Long-term self-signed certs

While I'm a big proponent of using an enterprise-class certificate authority—either based on internal offline root/online issuing or public CAs—there are some instances when using a self-signed cert fits the bill. Unfortunately, most of the tools for creating a self-signed cert have defaults that result in less-than-stellar results: the digest algorithm is SHA-1, the key is likely to be only 1024 bits, and the extensions that define the cert for server and/or client authentication are missing.

With a ton of references discoverable on The Interwebz, I spent a couple of hours trying to figure out how to generate a self-signed cert with the following characteristics:

  • 2048-bit key
  • sha256 digest
  • 10-year certificate life (because, duh, I don't want to do this every year)
  • Extended key usage: server auth, client auth
It took pulling pieces from several different resources, documented herein:

Required Software

OpenSSL (command-line software)
Text editor (to create the config file for the cert)


  1. Create a text file that specifies the "innards" of the cert:
    default_bits = 2048
    encrypt_key = no
    distinguished_name = req_dn
    prompt = no

    [ req_dn ]
    CN={replace with server fqdn}
    OU={replace with department}
    O={replace with company name}
    L={replace with city name}
    ST={replace with state name}
    C={replace with 2-letter country code}

    [ exts ]
    extendedKeyUsage = serverAuth,clientAuth
  2. Run the following openssl command (all one line) to create the new private key & certificate:
    openssl req -x509 -config {replace with name of config file created above} -extensions "exts" -sha256 -nodes -days 3652 -newkey rsa:2048 -keyout host.rsa -out host.cer
  3. Run the following openssl command to bundle the key & cert together in a bundle that can be imported into Windows:
    openssl pkcs12 -export -out host.pfx -inkey host.rsa -in host.cer

What's happening

The text file sets up a number of configuration items that you'd either be unable to specify at all (the extensions) or would have to manually input during creation (the distinguished name details).

The request in the second step creates a 2048-bit private key (host.rsa) and a self-signed certificate (host.cer) with a 10-year lifetime (3652 days), the necessary usage flags, and a SHA-256 digest.
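It's worth sanity-checking the result before deploying it. The sketch below is a self-contained run of the same recipe (with placeholder DN values standing in for your real server details), followed by an openssl x509 inspection to confirm the digest, key size, and extended key usage actually landed in the cert:

```shell
# Minimal stand-in config; DN values here are placeholders, not real ones
cat > req.cnf <<'EOF'
default_bits = 2048
encrypt_key = no
distinguished_name = req_dn
prompt = no

[ req_dn ]
CN=host.example.com
O=Example Co
C=US

[ exts ]
extendedKeyUsage = serverAuth,clientAuth
EOF

# Same command as step 2, pointed at the stand-in config
openssl req -x509 -config req.cnf -extensions "exts" -sha256 -nodes \
  -days 3652 -newkey rsa:2048 -keyout host.rsa -out host.cer

# Inspect the cert: expect sha256WithRSAEncryption, a 2048-bit key,
# and both TLS Web Server/Client Authentication usages
openssl x509 -in host.cer -noout -text | \
  grep -E 'Signature Algorithm|Public-Key|TLS Web'
```

If any of those three grep lines come back missing, the extensions section or digest flag didn't take effect and the cert should be regenerated rather than imported.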

Friday, June 5, 2015

Resurrecting a TomTom XL

I'm a longtime fan of TomTom GPS devices, and thanks to my friends over at w00t, I've bought quite a few over the last score of years, gifting some and reselling others.

While my most reliable mapping/routing service recently has been Waze on my iPhone, I've kept an older TomTom XL·S 310/340 in the company car, because Waze isn't always available or accurate (more because of Verizon CDMA limitations than anything else, but that's a different story), and having a dedicated device is super convenient.

I've been doing a bunch of travel in that company car, and the out-of-date map on the TomTom has become a bit of an annoyance. Unlike the XL I have for the personal car, this one doesn't have lifetime map updates, so I had a conundrum: do I purchase a new map ($45), subscribe to a year of updates ($49), punt and live with just the iPhone, or purchase a new device for home and move the one with lifetime maps to the company car, letting the XL·S go to the electronics graveyard?

Because the device had been working flawlessly otherwise—with the exception of essentially zero battery life—I went ahead and selected the Map Update service.

After attaching the device to my PC and downloading several updates to the TomTom Home management application, the purchased map update was immediately available as an installable option. This old unit only had 2GB of local storage, so the old map had to be deleted before installing the new update; I bravely went ahead with the update process.

And after a goodly while, I received errors that Home was unable to copy a file to the device, so it aborted the process. The management app itself suggested disconnecting, reconnecting and retrying the update, so I did that.

A common sight: errors writing to internal storage
Unfortunately, repeating the process didn't help: it might error out at a different file, but over and over, it would still fail.

As it happens, however, when the TomTom is attached to the PC, it shows up as a removable USB drive. The Home application can create backup copies of the device's filesystem on the PC, and by comparing that data against the properly-updating home XL, I was able to make some assumptions about the XL·S filesystem. Instead of relying on the Home application to transfer the map to the device, I let Windows do it, copying the map data from the downloaded ZIP file to the removable drive that was the TomTom's internal storage.

One problem: I was missing a file from the map download.

TomTom uses DRM to keep non-subscribers from using their maps. I was fine with that: as a subscriber, I should have rights to use those maps. However, some searching on the interwebz didn't net me any solutions. Luckily, I also thought to look on my PC where Home was running; there was a second download that had an "" file. Inspecting it, I found a .dct file; a quick google search informed me that this was my DRM key.

By putting the map and the DRM key on the TomTom manually, I now had a map that was usable by the device.

Or did I?

While I knew I could operate the device and use the map via the Home management app, the device refused to boot independently. Again, I used my google-fu and discovered that I should be able to wipe the local storage and get Home to reinstall the boot image and application software. And after wiping, but prior to doing the install, I performed Windows filesystem checks to make sure the TomTom local storage was functional and free of errors.

The Home tool worked as documented, but once again, after trying to add the map update, copy/install errors became my bane. I tried again to use Windows to copy the map update and DRM file, and lo... success! Not only would the device operate with the Home app, but it worked when independently powered.

So that's the trick:

  1. Wipe the TomTom local storage. Completely.
  2. Let Home reinstall the boot image and mapping application. This could require several restarts of the device, including hard resets (press and hold the power button until the TomTom logo appears and the drum sound is played).
  3. Extract the PC-based map to the TomTom local storage.
  4. Extract the .dct file to the map folder on the TomTom local storage.
  5. Restart the TomTom.
The device was working perfectly, so I continued with adding the MapShare corrections, and as the image above shows, I ran into another file transfer error. Following this error, the device refused to restart properly, getting stuck at the indemnity acknowledgement screen and spontaneously restarting. I reconnected the device and removed the most recent files from the map folder—the ones that didn't match the files received in the map update or the DRM file—and restarted the device, and it recovered nicely.

Update 2:
Before anyone asks: the .dct file that's the DRM key is specifically created by TomTom for my use on this device alone and is unusable on any other device, with any other map. The device serial number and map thumbprint are both part of the decryption key for DRM, so even if I didn't care about TomTom's IP rights and the possibility of litigation (which I actually do, on both counts), sharing the DRM file with the world wouldn't help anyone. So no, I will not share any of the files I received from TomTom in this update process.