
Thursday, April 14, 2011

Drive Performance

UPDATE: more data from other arrays & configurations
In an earlier post I said I was building a table of performance data from my experimentation with my new iomega ix2-200 as well as other drive configurations for comparison. In addition to the table that follows, I'm also including a spreadsheet with the results:
The Corners

| Configuration | 1-block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) | Notes |
|---|---|---|---|---|---|---|
| local SSD RAID0 | 10400 | 2690 | 3391 | 63.9 | 350.6 | 2 x Kingston "SSD Now V-series" SNV425 |
| ix2 SSD CIFS | 3376 | 891 | 308 | 25.7 | 40.4 | 2 x Kingston "SSD Now V-series" SNV425 |
| ix2 SSD iSCSI | 4032 | 664 | 313 | 29.4 | 38.5 | 2 x Kingston "SSD Now V-series" SNV425 |
| local 7200 RPM SATA RAID1 | 7242 | 167 | 357 | 94.3 | 98.1 | 2 x Western Digital WD1001FALS |
| ix4 7200RPM CIFS** | 2283 | 133 | 138 | 32.5 | 39.4 | 4 x Hitachi H3D200-series; **jumbo frames enabled |
| ix2 7200RPM CIFS | 2362 | 125 | 98 | 9.81 | 9.2 | 2 x Hitachi H3D200-series |
| ix2 7200RPM iSCSI | 2425 | 123 | 104 | 9.35 | 9.64 | 2 x Hitachi H3D200-series |
| ix4 7200RPM iSCSI** | 4687 | 117 | 122 | 37.4 | 40.8 | 4 x Hitachi H3D200-series; **jumbo frames enabled |
| ix4a stock CIFS | 2705 | 112 | 113 | 24 | 27.8 | 4 x Seagate ST32000542AS |
| ix4 stock iSCSI | 1768 | 109 | 96 | 34.5 | 41.7 | 4 x Seagate ST31000520AS |
| ix4a stock iSCSI* | 408 | 107 | 89 | 24.2 | 27.2 | 4 x Seagate ST32000542AS; *3 switch "hops" with no storage optimization introduce additional latency |
| ix2 stock CIFS | 2300 | 107 | 85 | 9.85 | 9.35 | 2 x Seagate ST31000542AS |
| ix2 stock iSCSI | 2265 | 102 | 84 | 9.32 | 9.66 | 2 x Seagate ST31000542AS |
| ix4 stock CIFS | 4407 | 81 | 81 | 32.1 | 37 | 4 x Seagate ST31000520AS |
| DROBO PRO (iSCSI) | 1557 | 71 | 68 | 33.1 | 40.5 | 6 x Seagate ST31500341AS + 2 x Western Digital WD1001FALS; jumbo frames |
| DROBO USB | 790 | 63 | 50 | 11.2 | 15.8 | 2 x Seagate ST31000333AS + 2 x Western Digital WD3200JD |
| DS2413+ 7200RPM RAID1/0 iSCSI | 12173 | 182 | 194 | 63.53 | 17.36 | 2 x Hitachi HDS722020ALA330 + 6 x HDS723020BLA642 |
| DS2413+ 7200RPM RAID1/0 NFS | | | | | | 2 x Hitachi HDS722020ALA330 + 6 x HDS723020BLA642 |
| DS2413+ SSD RAID5 iSCSI | 19238 | 1187 | 434 | 69.79 | 123.97 | 4 x Crucial M4 |

PX6-300 (iSCSI)

| Protocol | RAID | Disks | 1-block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) |
|---|---|---|---|---|---|---|---|
| iSCSI | none | 1 | 16364 | 508 | 225 | 117.15 | 101.11 |
| iSCSI | RAID1 | 2 | 17440 | 717 | 300 | 116.19 | 116.91 |
| iSCSI | RAID1/0 | 4 | 17205 | 2210 | 629 | 115.27 | 107.75 |
| iSCSI | RAID1/0 | 6 | 17899 | 936 | 925 | 43.75 | 151.94 |
| iSCSI | RAID5 | 3 | 17458 | 793 | 342 | 112.29 | 116.34 |
| iSCSI | RAID5 | 4 | 18133 | 776 | 498 | 45.49 | 149.27 |
| iSCSI | RAID5 | 5 | 17256 | 1501 | 400 | 115.15 | 116.12 |
| iSCSI | RAID5 | 6 | 18022 | 1941 | 1065 | 52.64 | 149.1 |
| iSCSI | RAID0 | 2 | 17498 | 1373 | 740 | 116.44 | 116.22 |
| iSCSI | RAID0 | 3 | 18191 | 1463 | 1382 | 50.01 | 151.83 |
| iSCSI | RAID0 | 4 | 18132 | 771 | 767 | 52.41 | 151.05 |
| iSCSI | RAID0 | 5 | 17692 | 897 | 837 | 56.01 | 114.35 |
| iSCSI | RAID0 | 6 | 18010 | 1078 | 1014 | 50.87 | 151.47 |
| iSCSI | RAID6 | 6 | 17173 | 2563 | 870 | 114.06 | 116.37 |
PX6-300 (NFS)

| Protocol | RAID | Disks | 1-block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) |
|---|---|---|---|---|---|---|---|
| NFS | none | 1 | 16146 | 403 | 151 | 62.39 | 115.03 |
| NFS | RAID1 | 2 | 15998 | 625 | 138 | 63.82 | 96.83 |
| NFS | RAID1/0 | 4 | 15924 | 874 | 157 | 65.52 | 115.45 |
| NFS | RAID1/0 | 6 | 16161 | 4371 | 754 | 65.87 | 229.52 |
| NFS | RAID5 | 3 | 16062 | 646 | 137 | 63.2 | 115.15 |
| NFS | RAID5 | 4 | 16173 | 3103 | 612 | 65.19 | 114.76 |
| NFS | RAID5 | 5 | 15718 | 1013 | 162 | 59.26 | 116.1 |
| NFS | RAID5 | 6 | 16161 | 1081 | 201 | 63.85 | 114.63 |
| NFS | RAID0 | 2 | 15920 | 614 | 183 | 66.19 | 114.85 |
| NFS | RAID0 | 3 | 15823 | 757 | 244 | 64.98 | 114.6 |
| NFS | RAID0 | 4 | 16258 | 3769 | 1043 | 66.17 | 114.64 |
| NFS | RAID0 | 5 | 16083 | 4228 | 1054 | 66.06 | 114.91 |
| NFS | RAID0 | 6 | 16226 | 4793 | 1105 | 65.54 | 115.27 |
| NFS | RAID6 | 6 | 15915 | 1069 | 157 | 64.33 | 114.94 |

About the data

After looking around the Internet for tools that can be used to benchmark drive performance, I settled on the venerable IOmeter. Anyone who has used it, however, knows that there is an almost infinite set of possibilities for configuring it for data collection. In my original research on storage benchmarks, I came across several posts that suggest IOmeter along with various sets of test parameters to run against your storage. Because I'm a big fan of VMware, and Chad Sakac of EMC is one of the respected names in the VMware ecosystem, I found his blog post to be a nice starting point for IOmeter test parameters. His set is a good one, but it requires some manual setup to get things going.

I also came across a company called Enterprise Strategy Group, which not only does validation and research for hire but has also published its custom IOmeter workloads as an IOmeter ".icf" configuration file. The data published above was collected by running their workload against a 5GB iobw.tst test file. While the table above represents "the corners" for the storage systems tested, I also captured the entire result set from the IOmeter runs and have published the spreadsheet in case anyone is interested in the additional data.
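If you want to reproduce a run, IOmeter will replay a saved configuration unattended from the command line. Here's a minimal sketch; both filenames are placeholders (substitute ESG's .icf or your own), and the 5GB test file size is set inside the configuration rather than on the command line:

    REM Unattended IOmeter run -- filenames are placeholders, not ESG's actual file.
    REM /c loads a saved configuration (.icf); /r writes the raw results to a CSV.
    REM IOmeter creates iobw.tst on each target itself; a "Maximum Disk Size" of
    REM 10485760 sectors (512 bytes each) in the disk-target setup yields the 5GB file.
    IOmeter.exe /c esg-nas-workload.icf /r corners-results.csv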

px6-300 Data Collection

The data in the px6-300 tables represents a bit of a shift in methodology: the original data sets were collected using the Windows version of IOmeter, while the px6-300 data was collected using the VMware Labs ioAnalyzer 1.5 "Fling". Because the testing runs from a virtual appliance, a little disclosure about the environment is due: the test unit is connected by a pair of LACP active/active 1Gb/s links to a Cisco SG-300 switch. In turn, an ESXi 5.1 host is connected to the switch via 4x1Gb/s links, each of which has a vmkernel port bound to it. The stock ioAnalyzer test disk (SCSI0:1) has been increased in size to 2GB and uses an eager-zeroed thick VMDK (for iSCSI). The test unit has all unnecessary protocols disabled and sits on a storage VLAN shared by the other storage systems in my lab network. The unit is otherwise idle of any workloads (including the mdadm synchronization that takes place when configuring different RAID levels, a very time-consuming process); there may be other workloads on the ESXi host, but DRS is enabled for the host's cluster, and if CPU availability were ever an issue in an I/O test (it isn't), other workloads would be migrated away from the host to free up resources.
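For reference, the per-uplink vmkernel binding was done with the standard ESXi 5.x commands. This is only a sketch; the portgroup, vmnic, vmk and vmhba names below are placeholders rather than the actual ones in my lab:

    # Pin each iSCSI portgroup to a single active uplink (names are placeholders)...
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic0
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic1
    # ...then bind the corresponding vmkernel ports to the software iSCSI adapter
    # so that each physical NIC shows up as a distinct path.
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2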

The Takeaway

As expected, the SSD-based systems were by far the best-performing on a single-spindle basis. Also as expected, though, an aggregate of spindles can meet or exceed the throughput of SSD, and locally-attached storage can likewise make up the difference in I/O performance. The trade-off, of course, is cost (both up-front and long-term) versus footprint.

Wednesday, April 13, 2011

iomega StorCenter drive swaps

In my previous post about the iomega ix2, I wasn't sure whether resizing the storage could be done from the GUI alone, without shell access. I've now played with it enough to know that a) it can, but it's a destructive operation, and b) the same technique can be used to shrink the available storage instead of growing it.

First, the technique:
  1. Swap in your new drive.
  2. Let it rebuild.
  3. Swap in the second drive.
  4. Before it finishes rebuilding, use the "delete disk" option:

    You'll have to jump through a bunch of hoops to make this happen, not least of which is deleting all the data and shares (like I said, it's a destructive technique!), but once it's going, you might not get any indication that it's actually "doing" anything.
  5. Restart the unit.
When it starts, if you have email alerts configured, you'll get a message that it's rebuilding the storage. Log back into the GUI and check your Dashboard; it should now show the new, increased capacity.

So why would anyone ever want to go smaller with one of these units? Well, suppose you had a pair of SSDs that you picked up for cheap from Woot! or some other retailer, and suppose you also picked up a pair of Icy Dock 2.5"-to-3.5" HDD converters so they'd fit correctly in the ix2's drive frame. You would then have the opportunity to build a smoking-fast NAS with those SSDs.

I don't know enough about the StorCenter's Linux kernel (or Linux kernels in general) to tell whether the unit can or does use TRIM to keep write speeds optimal. But let's be fair: even without it, an SSD blows away spinning disk. Given the cost/capacity ratio of SSDs, however, you'd have to be pretty starved for performance to try such a thing—and would certainly be better served by putting SSDs in a higher-performance box than an ix2-200!
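If you do get SSH access to the unit, a couple of quick checks at least hint at whether TRIM is even in play. This is just a sketch: the device path is a guess, and hdparm may not be installed on a kernel as old as the StorCenter's:

    # Does the drive advertise TRIM at all? (device path is a guess)
    hdparm -I /dev/sda | grep -i trim
    # Is the data filesystem mounted with 'discard'? Without that (or a periodic
    # fstrim, on kernels new enough to have it), the kernel never issues TRIM anyway.
    grep discard /proc/mounts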

Sunday, April 10, 2011

iomega ix2-200 quick look

I received an iomega ix2-200 as a "spiff" from EMC, courtesy of Chad Sakac (thanks, Chad!), because I participated in some VMware environment storage performance metric collection.

The unit I received is the 2TB version, meaning that it had a pair of 1TB, 5900 RPM SATA-2 disks (Seagate ST31000520AS, to be exact). You can find reviews of this unit (and its drives) all over the Internet, both singing its praises as well as trashing iomega, EMC and anyone considered foolish enough to entrust data to the device.

In a nutshell, the unit is a Linux appliance built around the Marvell Kirkwood MV88F6281, a 1GHz ARM926EJ-S-compatible SoC. The ix2 has 256MB RAM, one Gigabit Ethernet port and three USB ports in addition to the two drives.



It takes advantage of LVM for its flexibility in storage management and data protection. The Linux heritage also gives it a long list of integrated, optional features that make it very interesting to someone who otherwise has no fileserver at home, but unless you're ready to get your hands dirty, it isn't expandable as an application platform. The device exposes its functionality through a very user-friendly web interface, and has four front-panel lights to communicate system status to the user/operator.


Personally, I thought it might be a clever way to store disk images in a shared, network-accessible location without using expensive SAN storage. We do this all the time on a regular workstation with a gigantic drive, and I thought this would give us a smaller footprint than a full-size PC. Further, I thought the ~1TB of usable storage could be augmented with a Drobo connected via USB as add-on capacity without increasing the physical footprint.

This use case more or less failed miserably. Although I knew a mirrored pair of SATA drives wouldn't have much throughput, I didn't "do the math" ahead of time to see just how bad it would be to save an image. My test platform was pretty basic: I attached a 320GB SATA disk to a USB adapter and began writing the image to a CIFS share. The image was larger than usual because it was a disk I wanted to archive after its run in production, not a template image for other production disks.

I gave up on the image when it was an hour in and the estimated time remaining was another seven hours.

Truth is, I didn't expect much out of it, given that it's basically a single, slow SATA disk: in mirrored mode you might get some improvement in reads, but writes take a small penalty to keep the mirror set in sync. The fact that I didn't pay for the thing made the discovery pretty painless.

The second part of the plan was also thwarted: the first-gen Drobo that I plugged into the unit wouldn't show up as attached storage. The Drobo works fine on my Windows 7 desktop, and other types of USB storage worked fine on the ix2. Nope, these two guys just aren't compatible.

So now I'm playing: I picked up a pair of 2TB, 7200RPM SATA-3 drives on sale (Hitachi H3D20006472S), and have proven to myself that it's possible to a) upgrade to faster disk and b) expand the capacity on the ix2. There's no secret to getting a faster disk installed: replace the two disks one at a time (with a disk of the same or larger capacity) and let the Linux mdadm subsystem resynchronize the data from the remaining disk. Increasing the capacity is a bit more technical, requiring console access for a non-destructive resize; it may also be possible to resize via a factory reset, but I haven't tried that myself. I do know, however, that SSH access and a basic understanding of LVM make storage modification a cakewalk. I've even contemplated resizing the user storage volume down to a bare minimum in order to replace the disks with SSDs. The capacity is minuscule compared to spinning disk, but the experiment to see how fast it could be is kind of compelling...
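For the curious, the shell side of that shrink-the-volume idea is ordinary LVM plus resize2fs. The sketch below assumes an ext2/ext3-style data volume, and the volume group, logical volume and mountpoint names (and sizes) are made up; check vgdisplay, lvdisplay and mount on the unit for the real ones:

    # All names and sizes below are placeholders -- substitute the ix2's actual ones.
    umount /mnt/data                      # the data volume must be offline to shrink it
    e2fsck -f /dev/ix2_vg/data_lv         # a forced fsck is required before shrinking ext2/3
    resize2fs /dev/ix2_vg/data_lv 100G    # shrink the filesystem below the target LV size
    lvreduce -L 110G /dev/ix2_vg/data_lv  # then shrink the LV, leaving headroom above the fs
    resize2fs /dev/ix2_vg/data_lv         # grow the fs back out to exactly fill the LV
    mount /dev/ix2_vg/data_lv /mnt/data   # remount the (now smaller) data volume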

At any rate: I'm in the process of building a table of performance data from IOmeter, and I'm including not just the stock drives but also the upgraded pair, my first-gen Drobo and my Drobo Pro. I'm also going to include results from a pair of 128GB Kingston SSDs in a stripe set (RAID 0) on my desktop machine as a "best-case scenario" for comparison.

And finally, one might think that my experience with the ix2 would have soured me on the whole line of products. It hasn't. I also have a 4TB (raw) ix4-200d that I picked up for cheap on eBay, which I've proven to myself makes a stellar backup target for VMware environments using NFS. I will add the performance metrics from that unit to my results, as well as the results I get when I finish upgrading its disks from 4x1TB 5900RPM units to 4x2TB 7200RPM units. I predict I'll be much happier with the performance of the upgraded ix4 than with what I'm currently getting out of the Drobo Pro.

Stay tuned!