Thursday, December 25, 2014

Synology DS2413+

Based on the recommendations of many members of the vExpert community, I purchased a Synology DS2413+. This is a 12-bay, Linux-based array that can be expanded to 24 spindles with the addition of the DX1211 expansion chassis. My plan was to eliminate a pair of arrays in my home setup (an aging Drobo Pro and my older iomega px6-300d), keeping a second array for redundancy.

The array is a roughly cube-shaped box which sits nicely on a desk, with easy access to the 12 drive trays and "blinky lights" on the front panel. It also sports two gigabit (2x1000Mb/s) network ports that can be bonded (LACP is an option if the upstream switch supports it) for additional throughput.

Synology has a page full of marketing information if you want more details about the product. The intent of this post is to provide the benchmark information for comparison to other arrays, as well as information about the device's comparative performance in different configurations.

The Synology array line is based on their "DSM" (DiskStation Manager) operating system, and as of this iteration (4.1-2661), there are several different ways to configure a given system. The result is a variety of different potential performance characteristics for a VMware environment, depending on the number of spindles working together along with the configuration of those spindles in the chassis.

The two major classes of connectivity for VMware are represented in DSM: You can choose a mix of NFS and/or iSCSI. In order to present either type of storage to a host, disks in the unit must be assembled into volumes and/or LUNs, which are in turn published via shares (NFS) or targets (iSCSI).

DSM supports a panoply of array types—Single-disk, JBOD, RAID0, RAID1, RAID5, RAID6, RAID1+0—as the basis for creating storage pools. They also have a special "SHR" (Synology Hybrid RAID) mode that automatically expands storage capacity when drives of mixed sizes are present; both single-drive- and dual-drive-failure protection modes are available with SHR on the DS2413+.

When provisioning storage, you have essentially two starting options: do you completely dedicate a set of disks to a volume/LUN ("Single volume on RAID"), or do you provision different portions of a set of disks to different volumes and/or LUNs ("Multiple volumes on RAID")?

iSCSI presents a different sort of twist to the scenario. DSM permits the admin to create both "Regular files" and "Block-level" LUNs for iSCSI. The former resides as a sparse file on an existing volume, while the latter is created as a new partition on either dedicated disks ("Single LUNs on RAID") or a pre-existing disk group ("Multiple LUNs on RAID"). The "Regular files" LUN is the only option that allows for "thin provisioning" and VMware VAAI support; the Single LUN option is documented as highest-performing.

For purposes of comparison, the only mode of operation for the iomega px6-300d (which I've written about several times on this blog) is like using "Multiple Volumes/LUNs on RAID" in the Synology, while the older iomega ix2-200d and ix4-200d models operate in the "Regular files" mode. So the DSM software is far more versatile than iomega's StorCenter implementations.

So that leaves a lot of dimensions for creating a test matrix:
  • RAID level (which is also spindle-count sensitive)
  • Volume/LUN type
  • Protocol
DS2413+ benchmark results (iSCSI):

Protocol  RAID     Disks  1 block seq read  4K random read  4K random write  512K seq write  512K seq read
iSCSI     none     1      16364             508             225              117.15          101.11
iSCSI     RAID1    2      17440             717             300              116.19          116.91
iSCSI     RAID1/0  4      17205             2210            629              115.27          107.75
iSCSI     RAID1/0  6      17899             936             925              43.75           151.94
iSCSI     RAID5    3      17458             793             342              112.29          116.34
iSCSI     RAID5    4      18133             776             498              45.49           149.27
iSCSI     RAID5    5      17256             1501            400              115.15          116.12
iSCSI     RAID5    6      15768             951             159              60.41           114.08
iSCSI     RAID0    2      17498             1373            740              116.44          116.22
iSCSI     RAID0    3      18191             1463            1382             50.01           151.83
iSCSI     RAID0    4      18132             771             767              52.41           151.05
iSCSI     RAID0    5      17692             897             837              56.01           114.35
iSCSI     RAID0    6      18010             1078            1014             50.87           151.47
iSCSI     RAID6    6      17173             2563            870              114.06          116.37
DS2413+ benchmark results (NFS):

Protocol  RAID     Disks  1 block seq read  4K random read  4K random write  512K seq write  512K seq read
NFS       none     1      16146             403             151              62.39           115.03
NFS       RAID1    2      15998             625             138              63.82           96.83
NFS       RAID1/0  4      15924             874             157              65.52           115.45
NFS       RAID1/0  6      16161             4371            754              65.87           229.52
NFS       RAID5    3      16062             646             137              63.2            115.15
NFS       RAID5    4      16173             3103            612              65.19           114.76
NFS       RAID5    5      15718             1013            162              59.26           116.1
NFS       RAID0    2      15920             614             183              66.19           114.85
NFS       RAID0    3      15823             757             244              64.98           114.6
NFS       RAID0    4      16258             3769            1043             66.17           114.64
NFS       RAID0    5      16083             4228            1054             66.06           114.91
NFS       RAID0    6      16226             4793            1105             65.54           115.27
NFS       RAID6    6      15915             1069            157              64.33           114.94

While this matrix isn't a complete set of the available permutations for this device, sticking with the 6-disk variations that match the iomega already in my lab, I was stunned by the high latency and otherwise shoddy performance of the iSCSI implementation. Further testing with additional spindles did not—counter to expectations—improve the situation.

I've discovered the Achilles' Heel of the Synology device line: regardless of their protestations to the contrary about iSCSI improvements, their implementation is still a non-starter for VMware environments.

I contacted support on the subject, and their recommendation was to create dedicated iSCSI target volumes. Unfortunately, that eliminates both the ability to use VAAI-compatible iSCSI volumes and the ability to share disk capacity with NFS/SMB volumes. For most use cases of these devices in VMware environments, that's not just putting lipstick on a pig: the px6 still beat the performance of a 12-disk RAID1/0 set using all of Synology's tuning recommendations.

NFS performance is comparable to the px6, but as I've discovered in testing the iomega series, NFS is not as performant as iSCSI, so that's not saying much. What to do, what to do: this isn't a review unit that was free to acquire and free to return...

I've decided to build out the DS2413+ with 12x2TB 7200RPM Seagate ST2000DM001 drives in a RAID1/0 and use it as an NFS/SMB repository. With over 10TB of formatted capacity, I'll use it for non-VMware storage (backups, ISOs/media, etc.) and low-performance-requirement VMware workloads (logging, coredumps), and keep the px6-300d I was planning to retire.

I'll wait and see what improvements Synology can make to their iSCSI implementation, but in general I don't see using these boxes for anything but NFS-only implementations.

Update 2:
Although I was unsatisfied with the DS2413+, I had a use case for a new array to experiment with Synology's SSD caching, so I tried a DS1813+. Performance with SSD was improved over the non-hybrid variation, but iSCSI latency for most VMware workloads was still totally unacceptable. I also ran into data-loss issues when using NFS with VAAI in this configuration (although peers on Twitter reported contrary results).

On a whim, I went to the extreme of removing all the spinning disk in the DS1813+ and replacing them with SSD.


The iSCSI performance is still "underwhelming" when compared to what a "real" array could do with a set of 8 SATA SSDs, but for once, not only did it exceed the iSCSI performance of the px6-300d, but it was better than anything else in the lab. I could only afford to populate it with 256GB SSDs, so the capacity is considerably lower than an array full of 2TB drives, but the performance of a "Consumer AFA" makes me think positively about Synology once again.

Now I just need to wait for SSD prices to plummet...

Tuesday, December 16, 2014

Remote Switchport Identification for ESXi

I was working remotely, trying to complete some work in a client's VMware environment, when I discovered that one of the hosts didn't have the proper trunking on its network adapters. I had access to the managed switch, but for one reason or another, the ports weren't identified in the switch. Had the switch been from Cisco, the host itself could've told me what I needed: ESXi supports CDP on the standard virtual switch & uplinks.
But this was an HP switch.
Luckily, I had three things going for me:

  1. The HP switch supported LLDP
  2. I had access to temporary Enterprise Plus licensing
  3. The host had redundant links for the virtual switch.
How did that help? 

While the standard switch will only support CDP, the VMware Distributed Switch (VDS) supports either CDP or LLDP.

Here's how I managed to get my port assignments:
  1. Create a VDS instance
  2. Modify the VDS to use LLDP instead of CDP (the default)
  3. Update host licensing to the temporary Enterprise Plus license
  4. Add one (1) adapter to the VDS uplink group
  5. After 30 seconds, click on the "information" link for the adapter to retrieve switchport details
  6. Return the adapter to the original standard switch
  7. Repeat steps 4-6 for additional adapters
  8. Remove the host from the VDS
  9. Return host licensing back to the original license
  10. Repeat steps 3-9 for remaining hosts
  11. Remove the VDS from the environment

Saturday, November 8, 2014

Use Synology as a Veeam B&R "Linux Repository"

I posted a fix earlier today for adding back the key exchange & cipher sets that Veeam needs when connecting to a Synology NAS running DSM 5.1 as a Linux host for use as a backup repository. As it turns out, some folks with Synology devices didn't know that using them as a "native Linux repository" was possible. This post will document the process I used to get it going originally on DSM 5.0; it wasn't a lot of trial-and-error, thanks to the work done by others and posted to the Veeam forums.

Caveat: I have no clue if this will work on DSM 4.x; I was already running 5.0 when I started working on it.

  1. Create a shared folder on your device. Mine is /volume1/veeam
  2. Install Perl in the Synology package center.
  3. If running DSM 5.1 or later, update the /etc/ssh/sshd_config file as documented in my other post
  4. Enable SSH (control panel --> system -->terminal & snmp)
  5. Enable User Home Service ( control panel --> user --> advanced)
Once this much is done, Veeam B&R will successfully create a Linux-style repository using that path. However, it will not be able to correctly recognize free space without an additional tweak, and for that tweak, you need to understand how B&R works with a Linux repository...

When integrating a Linux repository, B&R does not install software on the Linux host. Here's how it works: 
  1. connects to the host over SSH
  2. transmits a "tarball" (veeam_soap.tar)
  3. extracts the tarball into a temporary location
  4. runs some Perl scripts found in the tarball
It does this Every. Time. It. Connects.

One of the files in this bundle (under lib/Esx/System/Filesystem/) uses arguments with the Linux 'df' command that the Synology's busybox shell doesn't understand/support. To get Veeam to correctly recognize the space available on the Synology volume, you'll need to edit that file to remove the invalid "-x vmfs" argument (line 72 in my version). However, the edited file must be replaced within the tarball so it can be re-sent to the Synology every time B&R connects. This also means every Linux repository will get the change (in general, this shouldn't be an issue, because a typical Linux host won't have a native VMFS volume to ignore).

Requests in the Veeam forum have been made to build in some more real intelligence for the Perl module so that it will properly recognize when the '-x' argument is valid and when it isn't.

So how does one complete this last step? First task: finding the tarball. On my backup server running Windows Server 2012R2 and Veeam B&R 7, it's in c:\program files\veeam\backup and replication\backup. If you used a non-default install directory or have a different version of B&R, you might have to look elsewhere.

Second, I used a combination of 7-Zip and Notepad++ to manage the file edit on my Windows systems. Use whatever tool suits you, but do not use an editor that doesn't respect *nix-style text file conventions (like the end-of-line character).

Once you edit the file and re-save the tarball, a rescan of the Linux repository that uses your Synology should result in valid space available results.

One final note: why do it this way? The Veeam forums have several posts suggesting that using an iSCSI target on the Synology--especially in conjunction with Windows 2012R2's NTFS dedupe capability--is a superior solution to using it as a Linux Repository. And I ran it that way for a long time: guest initiator in the backup host, direct attached to an iSCSI target. But I also ran into space issues on the target, and there aren't good ways to shrink things back down once you've consumed that space--even when thin provisioning for the target is enabled. No, it's been my experience that, while it's not as space-efficient, there are other benefits to using the Synology as a Linux repo. Your mileage may vary.

Repair Synology DSM5.1 for use as a Linux backup repository.

After updating my Synology to DSM 5.1-5004, the following morning I was greeted by a rash of error messages from my Veeam B&R 7 backup jobs: "Error: Server does not support diffie-hellman-group1-sha1 for keyexchange"

I logged into the backup host and re-ran the repository resync process, to be greeted by the same error.
Synology DSM 5.1 error
The version of SSH on the Synology was OpenSSH 6.6p2.

As it turns out, this version of SSH doesn't enable the required key exchange algorithm by default; luckily, that's an easy edit of the /etc/ssh/sshd_config file. And to play it safe, I added not only the needed Kex parameter but also the published defaults.
KexAlgorithms diffie-hellman-group1-sha1,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
After restarting SSH in the DSM control panel, then re-scanning the repository, all was not quite fixed:

Back to the man page for sshd_config...

The list of supported ciphers is impressive, but rather than add all of them into the list, I thought it would be useful to get a log entry from the daemon itself as it negotiated the connection with the client. Unfortunately, it wasn't clear where it was logging, so it took some trial-and-error with the config settings before I found a useful set of parameters:
SyslogFacility USER
LogLevel DEBUG
At that point, performing a rescan resulted in a useful entry in /var/log/messages.
Armed with that entry, I could add the Ciphers entry in sshd_config, adding the options offered by the Veeam SSH client to the defaults available in this version of sshd:
Ciphers aes128-cbc,blowfish-cbc,3des-cbc,aes128-ctr,aes192-ctr,aes256-ctr,,,
One more rescan, and all was well, making it possible to retry the failed jobs.

Follow Up

There have been responses of both successes and failures from people using this post to get their repository back on line. I'm not sure what's going on, but I'll throw in these additional tips for editing sshd_config:
  1. Each of these entries (KexAlgorithms and Ciphers) is a single-line entry. You must have the keyword—case sensitive—followed by a single space, followed by the entries without whitespace or breaks.
  2. There's a spot in the default sshd_config that "looks" like the right place to put these entries; that's where I put them. It's a heading labelled "# Ciphers and keying." Just drop them into the space before the Logging section. In the screenshot below, you can see how there's no wrap, no whitespace, etc. This works for me.
  3. Restart the SSH service. You can use the command line (I recommend using telnet during this operation, or you'll lose your SSH connection as the daemon cycles) or the GUI control panel. If using the latter, uncheck SSH, save, check SSH.
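For reference, here's a sketch of how that section of my sshd_config ends up looking (the algorithm lists are abbreviated with "..." here; in the real file each is one unbroken line, exactly as given above):

```
# Ciphers and keying
KexAlgorithms diffie-hellman-group1-sha1,ecdh-sha2-nistp256,...
Ciphers aes128-cbc,blowfish-cbc,3des-cbc,aes128-ctr,...
```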

Friday, October 10, 2014

VeeamON 2014: A post-event challenge

Branded as the industry's first and only "Data Center Availability" conference, Veeam's freshman effort was a success by almost any measure.

Disclaimer: I work for a Veeam Partner and my conference attendance was comp'd in exchange for some marketing/promotional activities prior to the conference. I have also been a long-time user of Veeam Backup & Replication, since before my transition to the partner side of business due to my vExpert status in the VMware community.

Because I work for a partner, I arrived in Las Vegas on Sunday, October 5 to attend the partner-oriented social & networking events and to be ready for the 8:30am start on Monday morning for the partner keynote.

In a twist from other industry conferences I've attended, the keynote was MC'd by comedian Richard Laible, with a format intended to mimic those of late-night talk shows. It was successful, and the give-and-take between Richard and his "guest" was well-orchestrated and amusing.

In the first "interview," Veeam CEO Ratmir Timashev was able to tell the story of the founding of Veeam, underscore the company's love of their reseller-partners, and reaffirm the company's longstanding policy of staying 100% "channel-based" (no customer may purchase directly from Veeam); most importantly, he talked about Veeam's shift from being "merely the best" backup product for virtualization to striving to produce the best availability product for the enterprise.

Other Veeam employees took to the stage, and customer success stories were played out. In other words, much like any other keynote.

The remainder of the day was filled with breakout sessions covering a wide range of topics--both technical and business-oriented--for the partner crowd. The obligatory sponsor exposition opened for a happy hour/dinner reception, which also capped-off the scheduled activities for the day.

The second full day of events (Tuesday) was opened with a second keynote which echoed much of the messaging in the Partner keynote, but with an obvious new audience: the customer & prospects attending the event. In addition to even more entertainment (a pair from X-Pogo performed), some additional features of the forthcoming Version 8 for the "Availability Suite" (a rebranding of the former Backup & Management Suite) were shared, as well as even more customer testimonials which underscored Veeam's commitment not just to protecting data, but to making good on their aim to create the "always available datacenter."

The remainder of the day was again filled with breakout sessions, again ranging from business to technical topics. The day was scheduled late, however, with the optional party at the "LIGHT" nightclub in the Mandalay Bay hotel.

The third and final day opened with breakout sessions, principally presented by sponsor partners rather than by Veeam employees on Veeam-specific topics. None of the sessions I attended, however, seemed too far off-base for a Veeam-oriented conference: the connection and/or synergy between the sponsor's product & Veeam's products was clear by the end of the session.

A final keynote by reddit co-founder Alexis Ohanian was both humorous and insightful, and essentially closed out the conference.

There are many other posts out there with even more details and insight into the conference; check out my fellow #vDBer Mike Preston's series from the conference for more insight and reporting.

My retelling of this is all to aim towards one thought: Veeam did a great job on their first conference. The content was relevant, the sponsors were invested and made sense, and it was both informative and entertaining.

Here's the challenge: What about 2015?

Unless the breakout catalog is significantly expanded, I'm not sure how many folks will want/need to attend a second year. Don't get me wrong: I'm not saying that no one will attend. On the contrary: if they repeated next year with a cookie-cutter duplicate of this year, anyone who a) didn't attend and b) wants to learn more about Veeam's products and how they can boost the availability of the datacenter would find their time well-spent.

I'm saying that everyone that went was a first-timer, and they got that spot-on. They can still fine-tune it, but next year's first-time attendee will get great value whether they change it or not.

No, the problem is getting repeat attendees. The conference can increase their first-time attendee counts simply based on positive word-of-mouth recommendations, but the top end for that will be reached far sooner than getting both those new attendees and the repeat (alumni?) attendees.

As it was, the number that was rumored prior to the conference—around 1200 people comprised of attendees & Veeam staff—seemed to have some validity. The conference space at the Cosmopolitan was sized well for the attendees, and it was never crowded or crazy like VMworld can feel (with almost 20x the attendance). But I can't imagine that Veeam is going to be content with putting on a two-and-a-half-day conference for "only" 1000 people. Yes, you want a multi-day conference to help justify the travel costs, but let's be honest: the VMUG organization has chapters that manage to put together single-day conferences for that number of attendees.

This isn't meant as criticism: I'm identifying the challenge they now face, and sending a call-to-action to Veeam to plan next year's event—as far as I know, TBA for place & time, yet expected from Doug Hazelman's parting "See you next year at VeeamON 2015"—with the goals of both increasing the number of new attendees compared to the inaugural "class" from this year and compelling most (if not all) of this year's attendees to return.

Wednesday, April 9, 2014

Importing a CA-signed certificate in VMware vSphere Log Insight

I came across a retweet tonight; someone looking for help getting VMware vSphere Log Insight (vCLog) to accept his CA-signed certificate for trusted SSL connections:

As luck would have it, I was able to get it going in my home environment a while back. In the course of studying for Microsoft certifications, I had the opportunity to get some real practice with their certificate authority role, and I have a "proper" offline root and online enterprise intermediate as an issuer running in my home lab environment.

With that available to me, I'd already gone through and replaced just about every self-signed certificate I could get my hands on with my own enterprise certs, and vCLog was just another target.

I will admit that in my early work with certificates and non-Windows systems, I had a number of false starts; I've probably broken SSL in my environment as many times as I've fixed it.

One thing I've learned about the VMware certs: they tend to work similarly. I learned early on that the private key cannot be PKCS#1 encoded; it must be a PKCS#8 key. How can you tell which encoding you have?

If the Base64 header for your private key looks like this:

-----BEGIN RSA PRIVATE KEY-----

you have a PKCS#1 key. If, instead, it looks like this:

-----BEGIN PRIVATE KEY-----

then you have a PKCS#8 key. Unfortunately, many CAs that provide the tools needed to create the private key and a signed cert only provide you with the PKCS#1 key. What to do? Use your handy-dandy OpenSSL tool to convert it (note: there are live/online utilities that can do this for you, but think twice and once more just for good measure: do you really want to give a copy of your private key to some 3rd party?):
openssl pkcs8 -topkcs8 -nocrypt -in private.pkcs1 -out private.pkcs8
Once you have the properly formatted private key, you must assemble a single file with all the certs in the chain—in the correct order—starting with the private key. This can be done with a text editor, but make sure you use one that honors *NIX-style end-of-line characters (newline as opposed to carriage-return+linefeed like DOS/Windows likes to use).

Most public Certificate Authorities (I personally recommend DigiCert) are going to be using a root+intermediate format, so you'll end up with four "blobs" of Base64 in your text file:
[Base64-encoded private key blob]
[Base64-encoded intermediate-signed server certificate]
[Base64-encoded root-signed intermediate CA cert]
[Base64-encoded root CA cert]
Note that there's nothing in between the END and BEGIN statements, nor preceding or following the sections. Even OpenSSL's tools for converting from PKCS#12 to PEM-encoded certificates may put "bag attributes" and other "human readable" information about the certificates in the files; you have to strip that garbage out of there for the file to be acceptable.
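The assembly itself is just concatenation in the right order. A sketch, with placeholder blobs standing in for your real PEM files so the example is self-contained (all filenames are illustrative):

```shell
# Assemble the vCLog bundle: key first, then server cert, then
# intermediate, then root. The file contents here are placeholders.
set -e
printf -- '-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n' > private.pkcs8.pem
printf -- '-----BEGIN CERTIFICATE-----\nMIIC...\n-----END CERTIFICATE-----\n' > server.pem
cp server.pem intermediate.pem
cp server.pem root.pem
cat private.pkcs8.pem server.pem intermediate.pem root.pem > vclog_bundle.pem
# Four Base64 blocks, nothing between them
grep -c 'BEGIN' vclog_bundle.pem
```

Using `cat` (rather than a Windows editor) also guarantees the *NIX-style line endings the import function expects.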

If you follow these rules for assembly, your file will be accepted by vCLog's certificate import function, and your connection will be verified as a trusted connection.

Wednesday, April 2, 2014

An odd thing happened on the way to the VSAN...

Object placement

Cormac Hogan has a nice post on the nature of VSAN from an "Objects & Components" perspective, Rawlinson Rivera describes witness creation & placement, and Duncan Epping teaches the user how to see the placement of objects in VSAN.

Based on these (and many other articles written by them and other authors—check out Duncan's compendium of VSAN links) I thought I had a pretty good idea of how a VM would be laid out on a VSAN datastore.

Turns out, I was wrong...

Default use case

Take a VM with a single VMDK and put it on a VSAN datastore with no storage policy, and you get the default configuration of Number of Failures to Tolerate (nFT) = 1 and Number of Disk Stripes per Object (nSO) = 1. You'd expect to see the disk mirrored between two hosts, with a third host acting as witness:
Image shamelessly copied from Duncan
If you drill down into the object information on the Web Client, it bears out what you'd expect:

Multi-stripe use case

The purpose of the "Number of Disk Stripes per Object" policy is to leverage additional disks in a host to provide more performance. The help text from the nSO policy is as follows:
The number of HDDs across which each replica of a storage object is striped. A value higher than 1 may result in better performance (e.g. when flash read cache misses need to get serviced from HDD), but also results in higher use of system resources. Default value: 1, Maximum value: 12.
In practice, adding additional stripes results in VSAN adding a new "RAID 0" layer in the leaf-object hierarchy under the "RAID 1" layer. The first level is the distribution of replicas needed to meet the nFT policy rule; the second layer represents the distribution of stripe components necessary to meet the nSO policy rule.

As you can see from the diagram, the stripes for a single replica aren't necessarily written to the same host. Somehow, I'd gotten the impression that a replica had a 1:1 relationship with a host, which isn't the way it's run in practice.

I'd also been misreading the details in the web client for the distribution of the components; when all your disks have a display name that only varies in the least-significant places of the "naa" identifier, it's easy to get confused. To get around that, I renamed all my devices to reflect the host.slot and type so I could see where everything landed at a glance:

As this screencap shows, the VM's disk is a 4-part stripe, split among all three of my hosts. One host (esx2) has all four components, so the other hosts need enough "secondary" witnesses to balance it out (three for esx3 because it holds one data component, one for esx1 because it holds three data components). There's also a "tiebreaker" witness (on esx1) because the sum of the data components and secondary witnesses is an even number.

The other disks show similar distribution, but the details of disk utilization are not the same. The only thing I've found to be fairly consistent in my testing is that one host will always get an entire replica, while the other two hosts share components for the other replica; this occurs for all policies with nSO>1. If you have more than 3 hosts, your results will likely be different.

Thursday, February 6, 2014

SSL Reverse Proxy using Citrix NetScaler VPX Express

Part 5 in a series

This part is the final post of the series; it builds on the previous posts by adding an SSL-based content switch on top of our previously-created simple HTTP content switch.

The NetScaler does a fine job of handling SSL traffic in a manner similar to the way it handles the unencrypted HTTP traffic. The key differentiator—other than making sure to distinguish the traffic as being SSL-bound—is the inclusion of certificate handling.

Of course, the "outside" or Content Switching virtual server must have an SSL certificate; the client trying to reach your host(s) is expecting an SSL connection, so the listener responding to the particular host request must respond with a conforming certificate or he/she will have to deal with certificate errors.

The "inside" server that's the target of Content Switching probably wants to communicate with its clients using SSL, too (In some special cases—known as "SSL Offload"—the inside server allows non-encrypted connections from specific hosts that are pre-configured to handle the overhead of SSL encryption; NetScaler can do this, too). In order for the NetScaler to perform as a proxy, it must have sets of SSL certificates for both the inside and the outside connections. Once you have those, you can quickly set up an SSL-based content switching configuration that mirrors the HTTP setup.

And the best part? Only the Content Switching virtual server needs to have an SSL certificate that is signed by a trusted root! (Caveat: it must be either a wildcard or multiple-SAN certificate. Remember: the DNS name must match either the CN [common name] or one of the DNS SAN [subject alternate name] entries of the host certificate) The "inside" servers that you're putting "behind" the NetScalers can have self-signed certificates or certificates signed by an in-house CA.

A little about Certificate files

The NetScaler has a ton of flexibility for working with many certificate formats—PEM and DER encoding, PKCS#12 bundles, etc.—but I find that it's easiest and most flexible when using individual, single-certificate (or key) PEM-type, Base64-encoded text files. It's easiest if you just have them ready-to-go; if you don't, you can learn about using OpenSSL, or you can simply use an online converter like SSL Shopper's Certificate Tools. Personally, I use a local copy of OpenSSL.
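As an example of the local-OpenSSL route, this self-contained sketch creates a throwaway self-signed certificate, bundles it as PKCS#12 (the way a CA might deliver it), then splits it back into the single-item PEM files the NetScaler handles most easily (all filenames and the passphrase are illustrative):

```shell
set -e
# Throwaway key + self-signed cert, standing in for your real material
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj '/CN=demo.example.com' 2>/dev/null
# Bundle as PKCS#12
openssl pkcs12 -export -in demo.crt -inkey demo.key -out demo.p12 -passout pass:demo
# Split back out to single-item PEM files; note that pkcs12 adds
# "Bag Attributes" headers you may want to strip before uploading
openssl pkcs12 -in demo.p12 -passin pass:demo -clcerts -nokeys -out server.pem
openssl pkcs12 -in demo.p12 -passin pass:demo -nocerts -nodes -out server.key
grep -c 'BEGIN CERTIFICATE' server.pem
```

The final count confirms server.pem carries exactly one certificate, which is the shape the upload/install steps below expect.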

For the purpose of this tutorial, I'm going to assume you have all the certificates you need, already in PEM format.

SSL handling in the NetScaler

The SSL feature must be enabled to do any sort of SSL load balancing or proxy configuration; it is enabled in the same place that Load Balancing and Content Switching are enabled, off the System->Settings menu:

Preparing Certificates

Once that's enabled, the yellow warning symbol for the Traffic Management function disappears. The first step to managing certificates is to get certificate files uploaded to the NetScaler. Select the SSL option itself:

then "Manage Certificates / Keys /CSRs" in the Tools section of the right-hand column.
The dialog resembles a file management window because it essentially is: it's a tool that lets you upload certificate files to the NetScaler's certificate store. Click [Upload...] to load the certificate files on the NetScaler. You'll need both the certificate and its private key, plus any CA certificates—including intermediates—that were used in a signing chain.

Once you have your certificates loaded, close the file dialog and expand the SSL menu tree and select Certificates

Click [Install...]. This process both creates a configuration object that the NetScaler can use to bind certificates to interfaces, and it gives you the opportunity to link certificates together if they form a signing chain. Although you can also use this interface to perform the upload function, I find it works more consistently—especially when handling filenames—to upload in one step, then install.

The server certificate itself needs to be installed using both the certificate and its key file; signing CAs can be loaded with just the certificate file.

Once all the certificates in the chain are loaded, select the server certificate and click the [Action] dropdown, then the "Link..." option. 

If you've got a recognized file and the CA that signed the file is already installed, it will be pre-selected in the Link Certificate dialog. Click [OK].
Repeat with any other certificates in the chain, back to the CA root.
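For reference, the install-and-link steps above have CLI equivalents if you'd rather script them. This is a sketch using the NetScaler `add ssl certKey` and `link ssl certKey` commands; the object and file names are examples, and exact syntax can vary by firmware release:

```
# Create certkey objects from the uploaded files (server cert needs its key;
# CA certs need only the certificate file):
add ssl certKey server-cert -cert server-cert.pem -key server-key.pem
add ssl certKey intermediate-ca -cert intermediate-ca.pem
add ssl certKey root-ca -cert root-ca.pem

# Link the chain, from the server certificate up to the root:
link ssl certKey server-cert intermediate-ca
link ssl certKey intermediate-ca root-ca
```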

Creating the Content Switching Configuration

With minor exceptions, we'll follow the same process as for creating a standard HTTP content-switching config. Specific differences will be highlighted using italic typeface.
  1. If they don't exist already, create your server entries. Because I'm building on the work previously documented, my servers are already present.
  2. Create SSL-based services for the servers; configure https as a monitor:
  3. Create SSL-based Load Balancing Virtual Servers
    1. Set the protocol to SSL
    2. Disable "Directly Addressable"
    3. Enable the SSL-based service
    4. Switch to the SSL Tab
    5. Highlight the server certificate and click [Add >] to bind the certificate to the server
  4. Create the new Content Switching policies. We can't use the previous ones—even if they're functionally identical—because we're going to use them on a different CS Virtual Server.
  5. Create (or modify) an SSL-based Content Switching Virtual Server
    1. Set the protocol to SSL
    2. Set the IP address for the virtual server. It can be the same address as the HTTP virtual server.
    3. Insert policies and set targets to SSL-based targets
    4. Switch to the SSL Tab
    5. Highlight the server certificate and click [Add >] to bind the certificate to the server
    6. Highlight the next CA cert in the signing chain; click the drop-down arrow on the [Add >] button and select [as CA >] to add the signing cert.
    7. Repeat step 6 for all remaining certificates in the signing chain.
    8. Click [Create] when complete.
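The SSL-specific steps above can also be sketched at the CLI. This assumes the HTTP objects from the earlier part of the series already exist, uses example names and the addresses from this post, and shows only server A (repeat the service/vserver/policy lines for server B); exact policy-expression syntax varies by firmware release:

```
# SSL service and non-addressable SSL LB vserver for server A:
add service svc-ssl-serverA serverA SSL 443
add lb vserver lb-ssl-serverA SSL 0.0.0.0 0
bind lb vserver lb-ssl-serverA svc-ssl-serverA
bind ssl vserver lb-ssl-serverA -certkeyName server-cert

# SSL content switching vserver with its own policy set:
add cs vserver cs-ssl SSL 192.168.106.37 443
add cs policy pol-ssl-serverA -rule "HTTP.REQ.HOSTNAME.EQ(\"serverA\")"
bind cs vserver cs-ssl -policyName pol-ssl-serverA -targetLBVserver lb-ssl-serverA -priority 100

# Bind the server certificate, plus the signing CA(s), to the CS vserver:
bind ssl vserver cs-ssl -certkeyName server-cert
bind ssl vserver cs-ssl -certkeyName intermediate-ca -CA
```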
As soon as the configuration has "settled" in the innards of the NetScaler, the "State" should indicate "Up" and you can again test using your HOSTS file. Note: you may still get a certificate error if your URL doesn't match the name in the certificate bound to the Content Switching virtual server (e.g., a short name will not match a domain wildcard certificate).

Parts in this series:

HTTP Reverse Proxy using Citrix NetScaler VPX Express

Part 4 in a series

So far: the first three parts of this series introduced a problem (multiple servers behind a NAT firewall that use the same port) and its solution (Citrix NetScaler VPX Express), laid the groundwork for configuring the solution, and gave an overview of what we'll be configuring.

Because it is possible to set up content switching with a single host (the degenerate case), this is the method we'll begin with. While it doesn't do much for us on its own, simply repeating the steps for a second (and any subsequent) host results in a working solution. Other guides lay down the steps with two hosts already in mind, and teasing apart the pieces to apply them to your situation can be more difficult.


Some planning must be done prior to this setup. The first item is a set of IP addresses that you'll need to have handy. This post will use the following addresses; substitute your own:
  • CS Virtual Server: 192.168.106.37
  • Target Server A: 192.168.106.38
  • Target Server B: 192.168.106.39

Enable Features

The bare-bones install of the NetScaler has a number of features enabled, but the ones we need for content switching are disabled. Open the System configuration tree and select Settings

Select "Configure basic features" and make sure the following features are enabled (checked):
  • Load Balancing
  • Content Switching
If you selected "Traffic Management" in the left menu before and after enabling the feature, this is what you'd see:
Default, features disabled
LB and CS enabled
Begin the setup by expanding "Load Balancing" under "Traffic Management" and select "Servers":

In the center section, click [Add...] and create the server. The "Server Name" is an identifier used in the NetScaler; it does NOT have to be the FQDN or short name for the server.

Then switch to the Services option

and create a protocol-specific entry for the server, including a monitor
(I like to use http because it doesn't require any customization; a custom http-ecv monitor can be created to check for the explicit function of the target server, but that's beyond the scope of this series).

I also recommend using a naming convention that includes the type of object you're creating ('svc' for the service) and the protocol it's tied to ('http'); that will make it more obvious where a given object comes from when you see them bound in other places.

Switch to the Virtual Servers menu

and click [Add...] to build the virtual server.

Make sure you uncheck the "Directly Addressable" option (this eliminates the need to give the virtual server its own address; we want to give an address to the Content Switching virtual server instead), and select the service we just created.
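The server, service, and load balancing vserver steps so far map to a few CLI commands, if you prefer to script them. Names are examples following the naming convention suggested above; the `0.0.0.0 0` address/port pair is how a non-addressable vserver is created at the CLI, and exact syntax may vary by firmware release:

```
# Server entry (the name is a NetScaler identifier, not necessarily the FQDN):
add server serverA 192.168.106.38

# HTTP service with the built-in http monitor:
add service svc-http-serverA serverA HTTP 80
bind service svc-http-serverA -monitorName http

# Non-addressable LB vserver bound to the service:
add lb vserver lb-http-serverA HTTP 0.0.0.0 0
bind lb vserver lb-http-serverA svc-http-serverA
```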

Switch to the Content Switching menu and select "Policies"

Click [Add...] to create a policy to trigger sending the traffic based on the hostname used in the HTTP header.

Select the Virtual Servers option under Content Switching

and click [Add...] to create a new virtual server.
This server gets the IP address to which we'll be forwarding traffic.

Click "Insert Policy" to insert a new policy

Select the new policy from the drop-down, then pull down the list of targets, selecting the new load balancing server. You will get a warning about the "Goto Expression"

Select [Yes], then [Create] to make the server.
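The content-switching policy and virtual server steps can likewise be sketched at the CLI. The policy expression here is one common way to match on the hostname in the HTTP header; names are examples, and the exact expression syntax varies by firmware release:

```
# Policy that matches requests whose Host header is "serverA":
add cs policy pol-http-serverA -rule "HTTP.REQ.HOSTNAME.EQ(\"serverA\")"

# CS vserver gets the shared IP address; bind the policy to its LB target:
add cs vserver cs-http HTTP 192.168.106.37 80
bind cs vserver cs-http -policyName pol-http-serverA -targetLBVserver lb-http-serverA -priority 100
```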

At this point, your setup should function for the first server you configured!

Now: go back to the step for creating the server entry and repeat the whole process for your second server, with one exception: do not create a new Content Switching virtual server.

Now: open the existing Content Switching virtual server

and add another policy, using the new server's policy and LB virtual server entry:

You can test this internally by either updating your DNS server entries or adding a line to your machine's HOSTS file that points both names at the Content Switching virtual server's address: 192.168.106.37 serverA serverB

Point your browser at http://serverA after you make the change, and voila!, you get to the target. Switch to http://serverB, and you get that target instead.

Once you've verified the functionality from the inside, update the forwarding on your NAT firewall and test using an outside address (e.g., use a cell phone that's not on your home WiFi).
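If you'd rather not touch DNS or a HOSTS file at all, you can also exercise the content switch from any machine with curl by forcing the Host header; the address is the CS virtual server's from this post, and each request should land on the corresponding target:

```
curl -H "Host: serverA" http://192.168.106.37/
curl -H "Host: serverB" http://192.168.106.37/
```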

Parts in this series: