Friday, December 23, 2016

Apple Watch First Impressions

 ...from a former Pebble user


When Pebble announced their acquisition by Fitbit, I was wary about the future of the product: I backed the original Pebble on Kickstarter, as well as the Pebble Steel, Time Steel and, finally, the Time 2 as the opportunities presented themselves. But after recent events—a total reset that scrambled all my settings (and required a factory reset to get things back), plus a limited lifetime (and no more warranty support) for the existing units—I decided to look elsewhere for a good smartwatch.

As a longtime iPhone/iPad user, I'd looked at the specs for the Apple Watch when it was first released, and between the significant cost difference from the Pebble (roughly 4x more expensive, depending on the edition and band choices) and significant hardware limitations (single-day battery life? Really? Not water resistant?), the sale of Pebble was making my smartwatch options pretty bleak.

However, the recently released Series 2 from Apple addressed two of the three biggest faults I had with the platform (nothing is going to address the cost problem: this is Apple we're talking about, and all of its options are boutique-priced) by making significant strides in battery life and adding 50m water resistance.

So I pulled the trigger and yesterday was able to take delivery of a 42mm Stainless Steel with Milanese Loop band in Space Black.
42mm Apple Watch Series 2 in Space Black with Milanese Loop band
If you're interested in an un-boxing, you can search elsewhere. Suffice it to say that, in typical Apple fashion, the watch was simultaneously beautifully packaged and over-packaged; a fair expectation for an $800 timepiece, whether it comes from Apple or not, but the amount of material waste from the packaging harks back to when Apple thought they were competing in the luxury timepiece market rather than the fitness wearables market. They really, really could've gone with less.

I started by placing the watch on the charging disc for a few hours to make sure it was well charged, then I went through the pairing process. Unlike Pebble, the Watch doesn't use two different Bluetooth profiles (one standard and one low-energy), and pairing with my iPhone 6s running iOS 10.2 was smooth and less error-prone compared to my usual experience with Pebble pairing. If there's one thing to be said for getting the two devices from the same manufacturer, it's the effortless user experience with pairing.

Before purchasing, I visited a local Apple store to get a feel for my choices in cases and bands. I selected the 42mm over the 38mm because of the larger display and my old eyes. The stainless steel case has a heftier feel than aluminum (or ceramic), which I definitely prefer, and there was a noticeable difference in weight between the 38mm and 42mm as well, solidifying my choice of that size. Lighter watches tend to slide around to the underside of my wrist, while heavier ones seem to stay in place on the top. And if I have to deal at all with the watch on the underside of my wrist, the sapphire crystal of the stainless steel & ceramic cases was a must. I also prefer the heavier link band, but between the $500 premium and its "butterfly clasp" (which I hate), there was no way I was going with the Apple link band. The Milanese felt "weighty" enough in comparison to the link band, and its "infinite adjustability" had some appeal as well.

Once I had the watch paired and on my wrist, I started digging into the features I'd become accustomed to on the Pebble. Probably the biggest surprise was the dearth of watch face choices: unlike the Pebble ecosystem, with thousands of watch faces to choose from—everything from utilitarian designs to homages to Star Trek to the silly "Drunk O'Clock" face—the Watch ecosystem offers only a handful of faces.

Worse, while all the Watch faces are customizable to some degree, none of them allow the customization of "time" itself. The face I'm most accustomed to on the Pebble—YWeather by David Rincon—is nearly reproducible on the Watch using the "Modular" face, but the options—or "Complications," as Apple terms them—aren't very flexible and make "time" a less-prominent feature of the face. Which, in my opinion, sort of defeats the purpose of a watch face.
Apple Watch "Modular" vs. Pebble "YWeather"

If I could just move the Time to the center section and make it more prominent, while moving the date to the upper-right, it'd be good enough...

Notifications are also very different on the Apple Watch; the most significant difference seems to be the suppression of all notifications when the phone is actively being used, which I'm extremely unhappy with. Among other things, it means I'm not getting notifications when I've got the phone plugged into power and showing a route in Waze. Even when the phone is locked and the screen is off, I'm finding that notifications I usually received on the Pebble are missing/silent on the watch: I've yet to get a notification from Slack, which is one of the busiest apps on my phone after Mail itself.
Yes, I've made sure that things like "cover to mute" are disabled and "mirror phone" is set for pretty much all of the integrations on the watch, but the only notifications I seem to get are from Messages and Calendar.

Application integration is handy for many of the apps I have on the phone; being able to quickly raise/lower the garage door using GarageIO from the watch instead of the phone is nice, as is checking the home alarm. However, it does seem that some watch app integrations require the phone-based app to be running (or at least backgrounded) in order for the watch component to function. It's not consistent, so I'm still trying to figure out which ones need to be running in order to work.

The blob of apps in the App Layout sucks, however. While I have the ability to move apps around to change their proximity to the "central" Clock app, the fact that there are so many that I'd just as soon never see—even after telling the Watch to uninstall the integration—is mind-boggling when you consider the minimalist design elements used everywhere else in Apple products.

At any rate, I'm still getting used to this thing. From my perspective, I like parts of it, but other parts are still inferior to the Pebble.

Tuesday, November 8, 2016

Virtual SAN Cache Device upgrade

Replacing/Upgrading the cache+buffer device in VSAN

Dilemma: I've got a VSAN cluster at home, and I decided to switch from a single disk group per host to two, to give myself a bit more availability as well as additional buffer capacity (with all-flash, there's not much need for a read cache).

My scenario has some unique challenges for this transformation. First, although I already have the new buffer device to head the new disk group, I don't actually have all the new capacity disks that I'll need for the final configuration: I'll need to use some of the existing capacity disks if I want to get the second disk group going before I have the additional capacity devices. Second, I have insufficient capacity in the remainder of the VSAN datastore to perform a full evacuation while still maintaining policy compliance (which is sort of why I'm looking to add capacity in addition to splitting the one disk group up).


The nominal way to perform my transformation is:
  1. Put the host into maintenance mode, evacuating all registered VMs.
  2. Delete the disk group, evacuating the data so all VMs remain storage policy-compliant.
  3. Add the new device.
  4. Rebuild the disk group(s).
I already took a maintenance outage during the last patch updates and added my new cache+buffer device to each host, so "Step 3" is already completed.
And then I hit on something: while removing the buffer device from a disk group causes the decommissioning of the entire disk group, individual capacity devices can be removed without affecting anything more than the objects stored on that device alone. I have sufficient capacity in the remainder of the disk group—not to mention on the other hosts in the cluster—to operate on individual capacity elements.

So, here's my alternative process:

  1. Remove one capacity device from its disk group with full migration.

  2. Add the capacity device to the new disk group, then repeat with the next device.

It takes longer because I'm doing the evacuation and reconfiguration "in series" rather than "in parallel," but it leaves me with more capacity and availability online throughout than evacuating an entire disk group at once.

My hosts will ultimately have two disk groups, but they'll break one "rule of thumb" by being internally asymmetric: my buffer devices are 400GB and 800GB NVMe cards, so when I'm fully populated with ten (10) 512GB capacity disks in each host, four (4) will be grouped with the smaller card and six (6) with the larger. When you keep in mind that Virtual SAN won't use more than 600GB of a cache+buffer device regardless of its size, it actually has some internal symmetry: each capacity disk will be (roughly) associated with 100GB of buffer, for a ~1:5 buffer-to-capacity ratio.
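Spelling out the arithmetic behind that symmetry:

  Smaller group: 400GB buffer / 4 disks ≈ 100GB of buffer per capacity disk
  Larger group:  800GB buffer, capped at 600GB usable / 6 disks = 100GB per capacity disk
  Either way:    ~100GB of buffer for each 512GB capacity disk, or roughly 1:5 buffer-to-capacity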

CLI alternative

Although this entire process can be performed using the Web Client, an alternative is to write a CLI script. The commands needed are all in the esxcli storage and esxcli vsan namespaces; combined with some shell/PowerShell scripting, it is conceivable that one could work through the steps below (a rough sketch follows the list):
  • Identify storage devices:
    esxcli storage core device list
  • Identify any existing disk groups, cache+buffer and capacity devices:
    esxcli vsan storage list
  • Remove one of the capacity disks with migration:
    esxcli vsan storage remove -d <device> -m evacuateAllData
  • Create a new disk group, using an available flash device from the core device list as the new group's cache+buffer device and the recently evacuated device as the capacity device:
    esxcli vsan storage add -s <cache+buffer device> -d <device>
  • Loop through the remaining capacity devices, first removing and then adding them to the new disk group. The esxcli vsan storage remove command is blocking when run from the ESXi console, so the script will naturally wait for full evacuation before moving on to the next device.
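Pulling those commands together, here's a rough sketch of what that loop could look like from the ESXi shell. The device names are placeholders—pull the real identifiers from the two "list" commands above—and I'd verify each step interactively before trusting any script with my data:

# Placeholders only: substitute identifiers gathered from
# 'esxcli storage core device list' and 'esxcli vsan storage list'.
NEW_CACHE="naa.new_nvme_cache_device"
CAPACITY_DEVICES="naa.capacity_1 naa.capacity_2 naa.capacity_3"

for DEV in $CAPACITY_DEVICES; do
    # Evacuate the capacity device from its current disk group; the command
    # blocks until migration completes, so the loop waits automatically.
    esxcli vsan storage remove -d "$DEV" -m evacuateAllData

    # Re-add the evacuated device behind the new cache+buffer device; the
    # first pass creates the new disk group, later passes grow it.
    esxcli vsan storage add -s "$NEW_CACHE" -d "$DEV"
done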

Thursday, October 13, 2016

Adding floppy for PVSCSI drivers when creating a VM in vCenter Web Client

Someone asked in a private Slack channel whether it was "just him" or whether you really can't add a floppy image when creating a VM using the Web Client. This is relevant any time you want to build a VM on the PVSCSI controller and attach the driver floppy so the drivers are always available, even if VMware Tools is uninstalled.
The answer—at least with v6.0U2—is that it's not just him: you can't.
In this scenario, the vmimages folder won't expand; it offers the "arrowhead" showing there is content to be discovered within, but when you select it, you get no content...

Fortunately, there's a workaround: if you go ahead and save the new VM (without powering on) and then edit it, modifying the source for the floppy image, the vmimages folder will correctly expand and populate, allowing you to select one.

UPDATE: It turns out we were talking about two different Web Clients! My assumption was that we were referring to the vCenter Web Client, while the person asking was referring to the new(ish) Host Web Client.

The defect and workaround as I've documented them only apply to the vCenter Web Client. The Host Web Client will not behave correctly even with the workaround; it's a solid defect there. There are other workarounds—use the C# client, copy the IMG file to an accessible datastore, etc.—but none are as good as the defect being eliminated in the first place.
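Another option, if you've gone the "copy the image to a datastore" route: PowerCLI can attach the floppy image without touching either Web Client. This is just a sketch—the VM name and datastore path below are made up—but it avoids the GUI entirely:

# Hypothetical VM name and datastore path -- adjust for your environment.
$vm = Get-VM -Name "new-windows-vm"
New-FloppyDrive -VM $vm -FloppyImagePath "[datastore1] images/pvscsi-Windows8.flp"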

Friday, February 26, 2016

NTFS, dedupe, and the "large files" conundrum.

Microsoft did the world a huge favor when they added the deduplication feature to NTFS with the release of Windows Server 2012. We can have a discussion outside of this context on whether inline or post-process dedupe would have been better (the NTFS implementation is post-process), but the end result is something that seems to have minimal practical impact on performance but provides huge benefits in storage consumption, especially on those massive file servers that collect files like a shelf collects dust.

Under the hood, the dedupe engine collects the duplicate blocks, hides them under the hidden "System Volume Information" folder, and leaves pointers in the main MFT. You can do a disk size scan and see very little on-disk capacity taken by a given folder, yet a ginormous amount of disk is being consumed in that hidden folder.


See that little slice of color on the far left? That's the stub of files that aren't sitting in the restricted dedupe store. The statistics tell a different story:


200GB of non-scannable data (in the restricted store) versus 510MB stored in the "regular" MFT space. Together they comprise some 140K files in 9K folders, and the net action of dedupe is saving over 50GB in capacity on that volume:
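If you'd rather pull those numbers from the command line than from a screenshot, the Deduplication PowerShell module reports the same statistics—a quick check, assuming D: is the volume in question:

Get-DedupVolume -Volume D:   # capacity, saved space, and savings rate for the volume
Get-DedupStatus -Volume D:   # optimized/in-policy file counts and last job results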


The implementation is fairly straightforward, and I've found few instances where it didn't save the client a bunch of pain.

Except when used as a backup target.

Personally, I thought this was the perfect use case—and it is, but with the caveats discussed herein—because backup tools like Veeam can perform deduplication within a backup job, but job-to-job deduplication isn't in the cards. Moving the backup repository to a deduplicating volume would save a ton of space, giving me either room to store more data or more restore points for existing backups.
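For reference, getting a repository volume to that state is only a couple of commands—a sketch, assuming Windows Server 2012 R2 or later and a repository volume of D::

Install-WindowsFeature -Name FS-Data-Deduplication   # one-time feature install
Enable-DedupVolume -Volume D:                        # turn on dedupe for the volume
Start-DedupJob -Volume D: -Type Optimization         # kick off the first post-process run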

Unfortunately, I ran into issues with it after running backups for a couple of weeks. Everything would run swimmingly for a while, then suddenly backups would fail with filesystem errors. I'd wipe the backup chain and start again, only to have it happen again. Fed up, I started searching for answers...

Interestingly, the errors I was receiving ("The requested operation could not be completed due to a file system limitation.") go all the way back to limitations in NTFS without deduplication, and to early assertions by Microsoft that "defragmentation software isn't needed with NTFS because it protects itself from fragmentation." Anyone else remember that gem?!? Well, the Diskeeper folks were able to prove that NTFS volumes do, in fact, become fragmented, and a cottage industry of competing companies popped up to create defrag software. Microsoft finally relented and not only agreed that the problem can exist on NTFS, but licensed a "lite" version of Diskeeper and included it in every version of Windows since Windows 2000. They even went so far as to add API calls to the filesystem and device manager so that defragger software could operate safely rather than "working around" the previous limitations.

I digress...

The errors and the underlying limitation have to do with the way NTFS handles file fragmentation. It has special hooks to readily locate multiple fragments across the disk (which is, in part, why Microsoft argued that a fragmented NTFS volume wouldn't suffer the same sort of performance penalty that an equivalently-fragmented FAT volume would experience), but the data structures that hold that information are a fixed resource. Once a file's fragmentation reaches a certain level, those data structures are exhausted and I/O for the affected file is doomed. The fix? Run a defragger on the volume to free up those data structures (every fragment consumes essentially one entry in the table, so the fewer fragments that exist, the fewer table resources are consumed, irrespective of total file size) and things start working again.
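On a plain (non-deduplicated) volume, the built-in defragger is enough to check for and clean up that kind of fragmentation—again assuming the volume is D::

defrag D: /A /V    (analysis only; reports fragmentation statistics without changing anything)
defrag D: /U /V    (runs an actual defragmentation pass, with progress and verbose output)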

Enter NTFS deduplication

Remember that previous description of how the dedupe engine takes duplicate blocks from the volume—whether they're within a single file or across multiple files—puts them in the System Volume Information folder, and then leaves pointers in the main MFT so that multiple files (or the same file) can access those blocks?

Well, we just engineered a metric crapton (yes, that's a technical description) of intentional fragmentation on the volume. So when individual deduplicated files grow beyond a certain size (personal evidence says it's ~200GB, posts I've found here and there say it's as little as 100GB, and MS says it's 500GB: https://support.microsoft.com/en-us/kb/2891967), you can't do anything with the file. Worse, defrag tools can't fix it, because this fragmentation isn't something the algorithms can "grab"; the only real fix—other than throwing away the files and starting over—is to disable dedupe. And if you're near the edge of capacity thanks to the benefit of dedupe, even that's no option: rehydrating the files will blow past your capacity. Lose-lose.

Luckily, Microsoft identified the issue and gave us a tool for building volumes intended for deduplication: the "large files" flag in the format command. Unfortunately, as you might guess from the word "format," it's destructive. The structures laid down on the physical media when formatting a volume are immutable in this case; only an evacuation and reformat fixes the problem.

Given that restriction, wouldn't it be helpful to know whether your existing volumes support large files (i.e., extreme fragmentation) before you enable deduplication? Sure it would!

The filesystem command "fsutil" is your friend. From an administrative command prompt, run the following command and arguments (this is an informational query that makes no changes to the volume, but it requires administrative access to read the system information):

fsutil fsinfo ntfsinfo <drive letter>



Notice the Bytes Per FileRecord Segment value? On a volume that does not support high levels of fragmentation, you'll see the default value of 1024. You'll want to reformat that volume with the "/L" argument before enabling dedupe for big backup files on that bad boy. And no, that format option is not available in the GUI when creating a new volume; you've got to use the command line.
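For the record, the reformat itself is a one-liner—the drive letter is a placeholder, and (obviously) everything on the volume goes away:

format D: /FS:NTFS /L /Q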

What does it look like after you've reformatted it? Here you go:


The Bytes Per FileRecord Segment value jumps up to the new value of 4096.

You'll still want to adhere to Microsoft's dedupe best practices (https://msdn.microsoft.com/en-us/library/windows/desktop/hh769303(v=vs.85).aspx), and if you're reformatting anyway, by all means do it with the 64K cluster size so you don't run into any brick walls if you expect to expand the volume in the future. Note that the fsutil command also shows the volume's cluster size (Bytes Per Cluster) if you want to check that, too.
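Putting both recommendations together, the reformat for a dedupe-backed backup repository ends up looking something like this (drive letter and label are placeholders):

format D: /FS:NTFS /A:64K /L /V:Backups /Q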

Special thanks to fellow vExpert Frank Buechsel, who introduced me to using fsutil for this enquiry.