Wednesday, May 13, 2026

Retrieving tags from NSX-T

In the process of preparing a new NSX-T environment, a request came for a dump of the tags in the production environment so that we could match those in the new one. PowerShell & RESTful API to the rescue!

Note: after working with VMware solutions since 2005 and for the company itself for just over 6 years, I have it ingrained to call it "VMware". Aside from this paragraph, you'll probably never see me write "VMware by Broadcom" or even something as gross as "Broadcom vSphere." Just understand that "by Broadcom" is implied until such time as the tech stack finds new ownership.

At any rate: PowerCLI, the module set for PowerShell that VMware publishes for automating parts/pieces of their software stack, is extremely light in cmdlets for interacting with NSX-T. But NSX-T has a very rich RESTful API, so that's what I'm taking advantage of.

I found several solutions that other folks had written, but for one reason or another, they just weren't producing output the way we needed it. So here are a couple of iterations that I wrote. The first dumps a CSV-formatted file that lists the tags and the entities associated with them; the second dumps a CSV-formatted file that lists the VMs and the tags (if any) applied to them.

Get-NsxTags.ps1

function Read-Param {
    <# Generic function to grab input from the user, providing for defaults if there's no entry provided #>
    param( $prompt, $default )
    $value = Read-Host "$prompt [$default]"
    if (-not $value) { return $default } else { return $value }
}

function Get-Data {
    <# Perform a RESTful API call against the NSX-T manager #>
    param( $nsx, $usr, $secPwd, $apiPath )
    $password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto(
        [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secPwd))
    $credPair = "$($usr):$($password)"
    $encCreds = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($credPair))
    $params = @{
        Uri                  = "$nsx$apiPath"
        Method               = 'GET'
        Headers              = @{
            "Authorization" = "Basic $encCreds"
            "Content-Type"  = "application/json"
        }
        Body                 = '{}'
        SkipCertificateCheck = $true
    }
    $results = Invoke-RestMethod @params
    return $results.results
}

# All NSX-T API paths "hang" off the base path
$ApiBase = "/policy/api/v1"
# DFW-related stuff is off the Policies path, below.
# Note: the "default" domain seems to be the only one we have; technically it's a variable...
$ApiTags = "$ApiBase/infra/tags"

<# INPUT SECTION #>
# The actual URI must have the "https://" prefix
$nsxtManager = "https://" + (Read-Param "Enter NSX-T IP or FQDN" "Default VIP")
$output      = Read-Param "Enter output filepath" "c:\temp\get-nsxtags.csv"
$username    = Read-Param "Enter username" "defaultusername"
$secPwd      = Read-Host "Enter password" -AsSecureString

<# ALGORITHM
   NSX provides an API to list the tags, and then individual entries provide the tag name
   and the scope required to retrieve the effective resources associated with them #>
$itemlist = @()   # storage for the output list

# Get all the tags
$tags = Get-Data -nsx $nsxtManager -usr $username -secPwd $secPwd -apiPath $ApiTags
Write-Host '.' -NoNewline

# Loop through the tags and grab the associated resources, appending the
# resource names to a single string field for later output
for ($t = 0; $t -lt $tags.Length; $t++) {
    $items = ''
    if ($tags[$t].tagged_objects_count -gt 0) {
        # If there are no objects associated with the tag, don't try to retrieve them
        $ApiItems = $ApiTags + '/effective-resources?scope=' + $tags[$t].scope + '&tag=' + $tags[$t].tag
        Write-Host '.' -NoNewline   # this will take a while; show progress happening...
        $itemSet = Get-Data -nsx $nsxtManager -usr $username -secPwd $secPwd -apiPath $ApiItems
        foreach ($item in $itemSet) {
            switch ($item.target_type) {
                'VirtualMachine'    { $items += $item.target_display_name + ' [vm],' }
                'HostTransportNode' { $items += $item.target_display_name + ' [Host],' }
                default             { $items += $item.target_display_name + '<<undefined>>,' }
            }
        }
    }
    # Add a row to the output, removing a trailing comma if needed
    $row = New-Object PSObject -Property @{
        Tag   = $tags[$t].tag
        Count = $tags[$t].tagged_objects_count
        Items = ''
    }
    if ($items.Length -gt 0) { $row.Items = $items.Substring(0, $items.Length - 1) }
    $itemlist += $row
}
Write-Host '.'
Write-Host 'Done'
$itemlist | Select-Object 'Tag','Count','Items' | Export-Csv -Path $output -NoTypeInformation


Get-NSXvmTags.ps1

$dfltNSXMGR = "default"
$nsxtManager = Read-Host "Enter NSX-T IP or FQDN [$dfltNSXMGR]"
if (-not $nsxtManager) {
    $nsxtManager = "https://$dfltNSXMGR"
} else {
    $nsxtManager = "https://$nsxtManager"
}
$output   = Read-Host "Enter output filepath"
$username = Read-Host "Enter username"
$secPwd   = Read-Host "Enter password" -AsSecureString
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto(
    [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secPwd))

$response = Invoke-RestMethod -Uri "$nsxtManager/api/v1/fabric/virtual-machines?included_fields=display_name,tags" -Method Get -Headers @{
        "Authorization" = "Basic $( [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${username}:${password}")) )"
        "Content-Type"  = "application/json"
    } -Body '{}' -SkipCertificateCheck

$vmlist = @()
foreach ($vm in $response.results) {
    $tags = ''
    foreach ($tag in $vm.tags) { $tags += $tag.tag + ',' }
    if ($tags.Length -gt 0) {
        $row = New-Object PSObject -Property @{
            VM   = $vm.display_name
            Tags = $tags.Substring(0, $tags.Length - 1)
        }
    } else {
        $row = New-Object PSObject -Property @{
            VM   = $vm.display_name
            Tags = ''
        }
    }
    $vmlist += $row
}
$vmlist | Select-Object 'VM','Tags' | Export-Csv -Path $output -NoTypeInformation

These two are interesting to compare: in the first case, you use one API URI to get the tags (which provide the parameters, scope and tag) and a child URI to get the items associated with each tag. But in the second case, a single URI provides the list of VMs along with the tags applied to them as properties.
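One caveat on the first approach: Get-NsxTags.ps1 concatenates the raw scope and tag values into the effective-resources query string, which can break if a value contains spaces or other reserved characters. Here's a small bash sketch of the same URI construction with percent-encoding added (the urlencode helper and the sample scope/tag values are my own inventions, not part of the scripts above):

```shell
#!/bin/bash
# Percent-encode a single query-string value (RFC 3986 unreserved characters pass through).
urlencode() {
    local s=$1 out='' c i
    for (( i = 0; i < ${#s}; i++ )); do
        c=${s:i:1}
        case $c in
            [a-zA-Z0-9.~_-]) out+=$c ;;
            *) printf -v c '%%%02X' "'$c"; out+=$c ;;   # everything else becomes %XX
        esac
    done
    printf '%s' "$out"
}

# Build the effective-resources path the same way Get-NsxTags.ps1 does,
# but with the scope/tag values safely encoded first.
scope='my scope'       # example values; the real ones come from the tag list
tag='web/tier-1'
apiItems="/policy/api/v1/infra/tags/effective-resources?scope=$(urlencode "$scope")&tag=$(urlencode "$tag")"
echo "$apiItems"
```

In PowerShell, the same thing can be done with [uri]::EscapeDataString() around each value before concatenating.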

Resuming this site

 Well, when I look at this blog and see that the last post was from 2022, I'm a bit shocked. But it's not exactly a surprise:

When I started working for VMware in October 2017, the demands of maintaining my own systems and keeping my tech skills sharp were greatly reduced. Yes, I was still maintaining everything, and even implementing things on the VMware stack that I hadn't previously, but there just wasn't the time or impetus to do much publishing.

Then "the Broadcom thing" happened.

I got laid off along with 4K other VMware associates.

I spent nine months trying to get back into a technical pre-sales position in the industry, and "came in second" several times. But second isn't good enough, and $2300/mo in COBRA payments wasn't going to be very good either.

On a whim, I applied to the United States Postal Service to be a letter carrier. It seemed like straightforward work, and I knew I could handle it not just from a skills perspective but, with all the bicycling I was doing, from a physical one as well.

I had a contingent job offer in less than 24 hours. Which was both exciting and depressing. Exciting, because working as a civil servant would give me access to great healthcare—a key reason for having a good job in the US of A—and a path forward to retirement. If I could stick it out for just over 5 years, I could "retire" at 62, keep my healthcare as a federal retiree, and get a part-time job in a bike shop to feed my passion.

Being a letter carrier is the hardest "easy job" I've ever had.

When you start, the idea is to gradually ease you into the demands of the position: you're limited in the number of hours they can work you each day, and in the number of days you can go without a break. For someone who had not only worked "9 to 5" salaried jobs his whole career, but also had the previous 9 months of free time, it was a shock.

Shortly after finishing the official training, I was regularly working 6 days a week--including Sundays--with no way to know if/when I was getting a day off. And each day, although I was guaranteed at least 4 hours of work if I was scheduled, I rarely worked fewer than 10h each day.

The work seems easy. You put letters, magazines, and packages into someone's mailbox or front porch. Rinse and repeat about 1000 times every day. "Anyone can do it."


I'm here to tell you now: it's exhausting, both mentally and physically—and that's without having walking routes to do! From the end of Aug 2024 until June 2025, I was stationed at an office without any walking routes; until January 2025—with few exceptions—I never knew what route I'd do when I reported. Some days, I'd get a text telling me to report to a different office; some days I'd show up and get sent elsewhere. And some days, I'd work 10+ hours, return to the station and still get sent to yet another location to help.

Such is the life of a "Part-Time Flexible" city carrier, aka "PTF."

In January 2025, I was able to get a "hold down" on a route that was temporarily vacant due to the regular carrier being out on extended sick leave. That meant I couldn't be sent anywhere else at the start of the day, and I knew when my days off should fall, but otherwise I was still working 6 days a week and (usually) over 8h/day. I was promoted from PTF to "Full-time Regular" (aka "Regular") in February, and working on Sundays ended along with most of the enforced overtime; I could still be mandated to work over 8, but only under certain circumstances. Life got much better. I would still have to worry about a route when the hold-down ended, but for that winter and spring, I felt like I'd gotten some of my life back.

The carrier on sick leave—it was his route I'd been holding for several months—decided to go ahead and retire rather than finish out his leave and come back in any limited fashion, so now the clock was ticking: the route would be declared vacant, and I'd lose my hold when it was assigned to a new carrier through the bidding process. Yes, I'd request it, but my lack of seniority would be a severe limitation. So I started watching the vacancy postings, keeping an eye out for a route that I could make my own: one that wouldn't draw bids because it was super hard and no other carriers would want it.

I made several bids that lost, but finally won one. I didn't think it seemed too bad, but several carriers told me that it "was a real hoofer," meaning a lot of hard walking.

I transferred stations and became the regular for "Route 12C035" in mid-June, 2025. It is a walking route with ~460 homes and ~52 businesses. On my first day, it took me over 10h to get everything ready that morning and deliver, walking over 12mi in the process. My legs were shattered. My feet were numb. It was all I could do to come in the next day and do it all again.

Four weeks later I had my first podiatrist appointment. I have high arches; I needed an orthotic to help with the mechanics of my walking. I needed better shoes.

I needed a different job!

The reality is that I never stopped applying for positions in technical sales. It was hard to schedule interviews amid the demands of the carrier schedule, but I made it happen. But still no job offers.

I kept getting better, faster, and stronger on the route. If I could get prepped and "on the street" within an hour of reporting, I could typically finish the route by the end of 8 hours on the clock. I learned that was all but unprecedented: only one other carrier who had held the route could do the same with regularity. And I got frustrated with co-workers who wouldn't come to work for whatever reason, since their absences meant mandatory overtime for me to help cover those routes as well.

In late January of 2026, one of my IT colleagues called to let me know that a position would be opening on the enterprise infrastructure team at a private company, and asked whether I'd be interested. It would be focused on "keeping the lights on" for the core systems, something I'd done before both privately and as a consulting engineer, so I reached out to the hiring manager to learn more.

Life went on with the Post Office—including a few weeks as an acting supervisor—when I got the call I'd been hoping to receive since the end of 2023: a job offer in high tech that was fair in compensation, for a good company, with a manager & team that I'd met and would be able to work with.

I resigned from the USPS the same day, working my final shift on 20-Mar-2026.

19 months. Exactly 82 weeks. For 574 days, I was a City Letter Carrier for the United States Postal Service, and I both hated and loved it.

But now I'm back in IT, and I have stuff to share again!

Tuesday, June 7, 2022

Synology DSM and Veeam 11

For a long time, Veeam has been telling its users not to use "low-end NAS boxes" (e.g., Synology, QNAP, Thecus) as backup repositories for Backup & Replication (VBR), even though these Linux-based devices should be compatible if they have an x86 architecture (as opposed to ARM).

The reality is that none of these devices use "bog standard" Linux distributions, and due to their appliance-based nature, have some significant limitations on what can be done to their custom distributions.

However, there are many folks—both as home users or within small/budget-limited businesses—who are willing to "take their lumps" and give these things a shot as repositories.

I am one of them, particularly for my home "lab" environment. I've written about this use case (in particular, the headaches) a couple of times in this blog [1, 2], and this post joins them, addressing yet another fix/workaround that I've had to implement.

Background

I use a couple of different Synology boxes for backup purposes, but the one I'm dealing with today is the DS1817+. It has a 10GbE interface for connectivity to my network, a quad-core processor (the Intel Atom C2538) and 8GB RAM (upgradable to 16GB, but I haven't seen the demand that would require it). It is populated with 8x1TB SATA SSDs for ~6TB of backup capacity.

I upgraded DSM to 7.0 a while back, and had to make some adjustments to the NFS target service to continue to support ESXi datastores via NFS 4.1.

Yesterday, I updated it to 7.1-42661 Update 2, and was greeted by a number of failed backup jobs this morning.

Symptoms

All the failed jobs have uniform symptoms: Timeout to start agent

With further investigation, I saw that my DS1817+ managed server was "not available", and when attempting to get VBR to re-establish control, I kept getting the same error with the installation of transport services:

Installing Veeam Data Mover service Error: Failed to invoke command /opt/veeam/transport/veeamtransport --install 6162:  /opt/veeam/transport/veeamtransport: error while loading shared libraries: libacl.so.1: cannot open shared object file: No such file or directory

Workaround

After failing to find a fix through some Linux-related searches, I discovered a thread on the Veeam Community Forum that addressed this exact issue [3].

This is apparently a known issue with VBR11 and Synology boxes, and as Veeam is moving further and further away from the "on the fly" deployment of the transport agent to a permanently-installed "Data Mover" daemon (which is necessary to provide the Immutable Backup feature), it becomes a bigger issue. Veeam has no control over the distribution—and would just as soon have clients use other architectures—and Synology would probably be happy with customers considering their own backup tool over competing options...

At any rate, some smart people posted workarounds to the issue after doing their own research, and I'm re-posting for my own reference because it worked for me.

  1. Download the latest ACL library from Debian source mirrors. The one I used—and the one in the Forum thread—is http://ftp.debian.org/debian/pool/main/a/acl/libacl1_2.2.53-10_amd64.deb
  2. Unpack the .deb file using 7zip
  3. Upload the data.tar file to your Synology box. Feel free to rename the file to retain your sanity; I did.
  4. Extract the tarball to the root directory using the "-C /" argument:
    tar xvf data.tar -C /
  5. If you are using a non-root account to do this work, you'll need to use "sudo" to write to the root. You will also need to adjust owner/permissions on the extracted directories & files:
    sudo tar xvf data.tar -C /
    sudo chown -R root:root /usr/lib/x86_64-linux-gnu
    sudo chmod -R 755 /usr/lib/x86_64-linux-gnu
  6. Create soft links for these files in the box's filesystem:
    sudo ln -sf /usr/lib/x86_64-linux-gnu/libacl.so.1 /usr/lib/libacl.so.1
    sudo ln -sf /usr/lib/x86_64-linux-gnu/libacl.so.1.1.2253 /usr/lib/libacl.so.1.1.2253
  7. Last, get rid of any previous "debris" from failed transport installations:
    sudo rm -R /opt/veeam
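For reference, steps 4 through 7 above can be rolled up into one small bash function. This is only a sketch under my own assumptions: the install_libacl name and the destdir rehearsal argument are my inventions, and it expects data.tar to already be sitting in the working directory on the Synology. Run it with no arguments on the box itself.

```shell
#!/bin/bash
# Sketch of steps 4-7: extract the Debian libacl files at the root of the
# filesystem, fix ownership/permissions, create the compatibility symlinks,
# and clear out debris from earlier failed transport installations.
install_libacl() {
    local destdir=${1:-}           # empty = the real root filesystem
    local tarball=${2:-data.tar}   # the tarball unpacked from the .deb
    local libdir=usr/lib/x86_64-linux-gnu
    local sudo=sudo
    [ -n "$destdir" ] && sudo=''   # rehearsing in a scratch dir: no sudo needed

    # Steps 4-5: extract the tarball at the root, then fix owner/permissions
    $sudo mkdir -p "${destdir:-/}"
    $sudo tar xf "$tarball" -C "${destdir:-/}"
    if [ -z "$destdir" ]; then     # ownership only matters on the real box
        sudo chown -R root:root "/$libdir"
        sudo chmod -R 755 "/$libdir"
    fi

    # Step 6: soft links where the Veeam transport expects to find the library
    $sudo mkdir -p "$destdir/usr/lib"
    $sudo ln -sf "/$libdir/libacl.so.1" "$destdir/usr/lib/libacl.so.1"
    $sudo ln -sf "/$libdir/libacl.so.1.1.2253" "$destdir/usr/lib/libacl.so.1.1.2253"

    # Step 7: remove leftovers from failed transport installations
    $sudo rm -rf "$destdir/opt/veeam"
}

# On the Synology itself (real root, needs sudo):
# install_libacl
```

Passing a scratch directory as the first argument lets you rehearse the extraction without touching the real filesystem.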
Once the Synology is prepped, you must go back into VBR and re-synchronize with the Linux repository:
  1. Select the "Backup Infrastructure" node in the VBR console
  2. Select the Linux node under Managed Servers
  3. Right-click on the Synology box being updated and select "Properties..." from the popup menu.
  4. Click [Next >] until the only option is [Finish]. On the way, you should see that the Synology is correctly identified as a compatible Linux box, and the new Data Mover transport service is successfully installed.

Summary

I can't guarantee that this will work after a future update of DSM, and there may come a day when other libraries are "broken" by updates to VBR or DSM. But this workaround was successful for me.

Update

The workaround has persisted through a set of updates to DSM7. I have seen this come up with DSM6, but this workaround does not work on that; too many platform incompatibilities, I suspect. Need to do some more research & experimentation for DSM6...

Friday, February 28, 2020

Update: maintaining the pi-hole HA pair

In an earlier post, I shared how I got pi-hole working in my environment, thanks to a number of posts on a reddit thread. Since then, I've been living with the setup and tweaking my configuration a bit.

This post documents one of the tweaks that might be useful for others...

If you're using the method documented by Panja0, you know that there's a script in the pi-hole distribution (gravity.sh) that must be edited in order to synchronize files between the nodes of the HA pair. Well, he reminds you in the tutorial that it'll need to be re-edited every time you update pi-hole, or the synchronization won't occur.

As you might guess, I didn't remember when I updated a week ago, and couldn't understand why my settings weren't getting synchronized. So I went back to the post, reviewed my settings, and face-palmed when I discovered my oversight: I had failed to re-edit gravity.sh.

After I did the necessary edits, I realized that, even if I'd remembered about it, I'd still need to refer to the original post to get the right command line, etc., for the edits.

I didn't want to spend the time to figure out how to trigger a script to make the update for me upon a pi-hole update, but I sure could figure out the script to do the correct updates!

I mean... come on: what better use of automation than to use a script to a) check to see if the update has already been performed, and b) if not, perform the update?

#!/bin/bash
# make sure the pihole-gemini script is being run by gravity.sh

GEMINI='su -c /usr/local/bin/pihole-gemini - <gemini user>'
GRAVITY=/opt/pihole/gravity.sh

# capture the second-to-last line of gravity.sh (where the gemini call belongs)
TRIGGER=$(sed -e '$!{h;d;}' -e x $GRAVITY)
if [ "$TRIGGER" != "$GEMINI" ]
then
        # insert the gemini commandline before the last line of the script
        sed -i "$ i$GEMINI" $GRAVITY
fi

If you decide to use the script, just make sure that you make any necessary modifications for the first two script variables to match your installation. You also need it on both nodes of your HA pair!

In my setup, I'm saving this script in the /etc/scripts directory, which I'm using for other "keepalived" scripts. I'll remember to run it next time I update pi-hole, and that's all I'll need to recall!
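If you want to sanity-check the insertion logic before pointing the script at the real /opt/pihole/gravity.sh, you can rehearse it against a throwaway copy. The file contents and the "piholeuser" name below are made up; only the sed expressions match the script above.

```shell
#!/bin/bash
# Rehearse the check-and-insert logic against a scratch "gravity.sh".
GEMINI='su -c /usr/local/bin/pihole-gemini - piholeuser'   # hypothetical gemini user
GRAVITY=$(mktemp)
printf '%s\n' '#!/usr/bin/env bash' 'echo updating gravity' 'exit 0' > "$GRAVITY"

# Same logic as the script above: peek at the second-to-last line...
TRIGGER=$(sed -e '$!{h;d;}' -e x "$GRAVITY")
# ...and insert the gemini command line before the last line if it's missing
if [ "$TRIGGER" != "$GEMINI" ]
then
        sed -i "$ i$GEMINI" "$GRAVITY"
fi

grep -c 'pihole-gemini' "$GRAVITY"   # prints 1: inserted exactly once
```

Running it a second time is a no-op, which is the whole point: the script can safely run after every pi-hole update.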