
Thursday, September 15, 2011

iomega px6-300d: first look

The iomega px6-300d is a storage appliance that can be provisioned with up to six SATA drives, both traditional rotating-disk and SSD. It is one of four NAS/SAN appliances in the "px" line from iomega, and is the largest-capacity "desktop" model.
Under the hood, it contains a dual-core Intel Atom processor (running at 1.8GHz) with 2GB RAM, sports a pair of Gigabit Ethernet ports and three USB ports (1 x USB 3.0, 2 x USB 2.0), and runs a customized version of Linux called "EMC LifeLine." The unit can be purchased with various pre-installed drive configurations (6, 12 & 18TB raw capacity); along with the px4 models, iomega has also (finally!) offered a "diskless" configuration.
Retail Box
I purchased a diskless unit ($750 from B&H) for use in my home lab, and independently acquired a matched set of six 2TB Hitachi 7200RPM SATA-III drives (HDS723020BLA642, $110 from NewEgg). This 12TB (raw) setup cost me less than $1500; in contrast, the list price for a pre-populated 12TB unit is $3,299.99.
The unit came in a nice retail box, and was well-padded and packed. In addition to the unit itself, the package contained the external power supply, power cord, CAT5e Ethernet cable, a "Quick Start Guide" and a "Solutions CD."
px6-300d front
px6-300d rear
The front of the unit provides drive access, power button, information display (and a pair of input buttons) and one USB port; the rear of the unit has a pair of 90mm fans, the power connection, reset button, Gigabit Ethernet ports and a pair of USB ports. The rear also sports (what appears to be) a surprising and undocumented x1 PCIe expansion slot.
The Quick Start Guide instructs the owner to install software from the Solutions CD, which will then assist in setting up the array; having experience with the ix line, I simply created a DHCP reservation (the MAC address is documented on a sticker on the rear of the unit), connected the unit to my network, and powered it on. Having purchased a diskless unit, I was curious to see how the system would behave when booted with no drives. True to form, this situation was handled gracefully.
front door open, showing drive trays
drive trays with 2.5", 3.5" drives

trays line up SATA connector
Up to six drives can be installed using the supplied hot-swap trays, which support both 3.5" and 2.5" form-factor drives. In addition to the matched drives, I also had a 2.5" SSD available for testing, but the supplied screws didn't fit that drive properly. It was a bit of a struggle to find screws that would work with both the drive and the sled: the smaller drives must be attached to the sled from the bottom, so care must be taken to ensure the screws do not protrude too far from the bottom of the sled.
The unit I received was pre-installed with out-of-date firmware (3.1.10; 3.1.14 is the most current as of this posting), which booted correctly even with no drives installed in the unit. This is a distinct departure from the "ix" line of NAS boxes, which require at least one functioning drive to store the firmware (which also explains why the older lines didn't offer a diskless purchase option). The management interface will not allow you to do anything until at least one drive is installed, however, so that was the first order of business.

After installing the first drive, the management interface permitted me to add the unit to my Windows domain, and I also went through the exercise of downloading and updating the firmware. This is a very easy process, so novice users shouldn't be afraid of it.
The unit supports a wide variety of RAID levels, and arbitrary combinations of drives can be grouped into "storage pools" to provide different levels of performance and protection based on your needs. I will be testing the various permutations with increasing spindle counts.
| Drive Count | Storage Pool Protection Options |
|---|---|
| 1 | None |
| 2 | None, RAID0, RAID1 |
| 3 | None, RAID0, RAID5 |
| 4 | None, RAID0, RAID1/0, RAID5 |
| 5 | None, RAID0, RAID5 |
| 6 | None, RAID0, RAID1/0, RAID5, RAID6 |
Once you create a storage pool, you can create one or more NAS volumes (for CIFS, NFS or other share types) or iSCSI volumes in it. When creating a file-sharing volume, the unit recognizes space utilization on a file-by-file basis and will display free/used space as one would expect; a volume used for iSCSI, on the other hand, appears 100% allocated as soon as it is created, even if no initiator has ever connected to and accessed the LUN. Additionally, multiple shares can be created for a single file-sharing volume, but there is a one-to-one mapping between an iSCSI volume and a target. Security for either file shares or iSCSI is optional; the unit defaults to read/write permissions for everyone.
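Connecting an initiator to one of these iSCSI targets works the same way as with any other array. As a rough sketch, here is how a Linux host running open-iscsi might discover and log in to a LUN; the IP address and IQN below are placeholders, not values from this unit:
    # discover targets advertised by the array (address is a placeholder)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    # log in to the discovered target (IQN is a placeholder)
    iscsiadm -m node -T iqn.1992-04.com.emc:px6.example.lun0 -p 192.168.1.50 --login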
There are a number of nice features that I'll investigate further, but the first order of business is performance testing. With luck, I'll learn that the device can scale with each added spindle; it will also be interesting to see how much of a performance hit the device will take when using RAID6 instead of RAID5 in the 6-spindle configuration.

Monday, April 25, 2011

Non-destructive Drive Expansion in the StorCenter

If you can't tell from the series of posts I've already published, I'm having some fun playing with the iomega ix2-200 that I received from Chad Sakac of EMC. In reviewing those posts, I realized that I didn't publish anything on the trick to expanding the storage on the ix2 (which should also apply to all the models in the StorCenter series) without destroying your data.

This technique is fairly straightforward, but it takes time and a bit of work at the command line via "Support mode", and you will be best served if all your data is backed up first. Note: to get at the support page on a "Cloud Edition" unit, the URL is /diagnostics.html

Preparation...

  1. Upgrade your drives with bigger models.
    1. It's not strictly required, but I suggest you upgrade starting at the highest-labelled drive and work your way down to the lowest.
    2. In order to make full use of each drive's capacity, they should all be identical.
    3. Shut down your unit each time you swap a drive (unless you're using a model that is explicitly hot-swappable).
    4. Allow the unit to fully redistribute the protected data before swapping the next spindle.
  2. Enable SSH access to your unit.
    1. There's an unlinked page on the system called support.html; access it by putting the address directly into your browser's address bar.
      support.html
    2. Check Allow remote access for support (SSH and SFTP) and click Apply
  3. Use an SSH client to log on to your unit (see the example after this list)
    • username: root
    • default password: soho
    • If you have security enabled, you will need to append the primary administrator's password to the default password. For example, if your primary user's password is ducksauce, the SSH password is sohoducksauce.
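For example (the address below is hypothetical; substitute your unit's IP or DHCP-reserved hostname):
    # log on as root; the password rules are described in step 3 above
    ssh root@192.168.1.50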

Magic Time!

The devices can be expanded because of the layered storage subsystem used by the Linux-based firmware: an md RAID array holds an LVM physical volume, whose volume group contains the XFS-formatted user volume. The process is straightforward: expand the "outer" container before expanding the inner container(s).
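If you want to confirm that layout before making changes, the standard md/LVM tools will show it. A quick sanity-check sketch (the device and volume-group names match the steps below, but may differ on other units):
    cat /proc/mdstat              # lists the md arrays and their member partitions
    pvdisplay                     # shows /dev/md1 as the LVM physical volume
    vgdisplay                     # shows the md1_vg volume group built on top of it
    mount | grep soho_storage     # confirms the XFS volume mounted at /mnt/soho_storage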
  1. Dismount the user storage volume:
    root@ix2-200d:/# umount -l /mnt/soho_storage
  2. Expand the RAID pseudo-device:
    root@ix2-200d:/# mdadm --grow /dev/md1 --size=max
  3. Expand the LVM physical volume:
    root@ix2-200d:/# pvresize /dev/md1
  4. Determine the free space in the LVM volume group:
    root@ix2-200d:/# vgdisplay
    --- Volume group ---
      VG Name               md1_vg
        .
        .
        .
      Free  PE / Size       476930 / 931.50 GB
        .
    
  5. Expand the LVM logical volume by the amount of free blocks:
    root@ix2-200d:/# lvextend -l +476930 /dev/md1_vg/md1vol1
  6. Mount the expanded volume:
    root@ix2-200d:/# mount -t xfs -o noatime /dev/mapper/md1_vg-md1vol1 /mnt/soho_storage
  7. Expand the xfs file system:
    root@ix2-200d:/# xfs_growfs /dev/md1_vg/md1vol1
  8. Reboot the system (so the web management tools will recognize the expansion):
    root@ix2-200d:/# telinit 6
Your device will reboot and (if you have email notifications correctly configured) it will notify you that "data protection is being reconstructed", a side effect of expanding the outermost RAID container.
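Once the unit is back up, a quick SSH session can confirm the result. This is just a sanity-check sketch using the same device names as above:
    cat /proc/mdstat                        # md1 should show its new, larger size (and any resync progress)
    mdadm --detail /dev/md1 | grep "Array Size"
    df -h /mnt/soho_storage                 # the XFS volume should now report the expanded capacity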

Thursday, April 14, 2011

Drive Performance

UPDATE: more data from other arrays & configurations
In an earlier post I said I was building a table of performance data from my experimentation with my new iomega ix2-200 as well as other drive configurations for comparison. In addition to the table that follows, I'm also including a spreadsheet with the results:
| The Corners | 1 block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) | Notes |
|---|---|---|---|---|---|---|
| local SSD RAID0 | 10400 | 2690 | 3391 | 63.9 | 350.6 | 2 x Kingston "SSD Now V-series" SNV425 |
| ix2 SSD CIFS | 3376 | 891 | 308 | 25.7 | 40.4 | 2 x Kingston "SSD Now V-series" SNV425 |
| ix2 SSD iSCSI | 4032 | 664 | 313 | 29.4 | 38.5 | 2 x Kingston "SSD Now V-series" SNV425 |
| local 7200 RPM SATA RAID1 | 7242 | 167 | 357 | 94.3 | 98.1 | 2 x Western Digital WD1001FALS |
| ix4 7200RPM CIFS | 2283 | 133 | 138 | 32.5 | 39.4 | 4 x Hitachi H3D200-series; jumbo frames enabled |
| ix2 7200RPM CIFS | 2362 | 125 | 98 | 9.81 | 9.2 | 2 x Hitachi H3D200-series |
| ix2 7200RPM iSCSI | 2425 | 123 | 104 | 9.35 | 9.64 | 2 x Hitachi H3D200-series |
| ix4 7200RPM iSCSI | 4687 | 117 | 122 | 37.4 | 40.8 | 4 x Hitachi H3D200-series; jumbo frames enabled |
| ix4a stock CIFS | 2705 | 112 | 113 | 24 | 27.8 | 4 x Seagate ST32000542AS |
| ix4 stock iSCSI | 1768 | 109 | 96 | 34.5 | 41.7 | 4 x Seagate ST31000520AS |
| ix4a stock iSCSI | 408 | 107 | 89 | 24.2 | 27.2 | 4 x Seagate ST32000542AS; 3 switch "hops" with no storage optimization introduce additional latency |
| ix2 stock CIFS | 2300 | 107 | 85 | 9.85 | 9.35 | 2 x Seagate ST31000542AS |
| ix2 stock iSCSI | 2265 | 102 | 84 | 9.32 | 9.66 | 2 x Seagate ST31000542AS |
| ix4 stock CIFS | 4407 | 81 | 81 | 32.1 | 37 | 4 x Seagate ST31000520AS |
| DROBO PRO (iSCSI) | 1557 | 71 | 68 | 33.1 | 40.5 | 6 x Seagate ST31500341AS + 2 x Western Digital WD1001FALS; jumbo frames |
| DROBO USB | 790 | 63 | 50 | 11.2 | 15.8 | 2 x Seagate ST31000333AS + 2 x Western Digital WD3200JD |
| DS2413+ 7200RPM RAID1/0 iSCSI | 12173 | 182 | 194 | 63.53 | 17.36 | 2 x Hitachi HDS722020ALA330 + 6 x HDS723020BLA642 |
| DS2413+ 7200RPM RAID1/0 NFS | | | | | | 2 x Hitachi HDS722020ALA330 + 6 x HDS723020BLA642 |
| DS2413+ SSD RAID5 iSCSI | 19238 | 1187 | 434 | 69.79 | 123.97 | 4 x Crucial M4 |

PX6-300

| Protocol | RAID | Disks | 1 block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) |
|---|---|---|---|---|---|---|---|
| iSCSI | none | 1 | 16364 | 508 | 225 | 117.15 | 101.11 |
| iSCSI | RAID1 | 2 | 17440 | 717 | 300 | 116.19 | 116.91 |
| iSCSI | RAID1/0 | 4 | 17205 | 2210 | 629 | 115.27 | 107.75 |
| iSCSI | RAID1/0 | 6 | 17899 | 936 | 925 | 43.75 | 151.94 |
| iSCSI | RAID5 | 3 | 17458 | 793 | 342 | 112.29 | 116.34 |
| iSCSI | RAID5 | 4 | 18133 | 776 | 498 | 45.49 | 149.27 |
| iSCSI | RAID5 | 5 | 17256 | 1501 | 400 | 115.15 | 116.12 |
| iSCSI | RAID5 | 6 | 18022 | 1941 | 1065 | 52.64 | 149.1 |
| iSCSI | RAID0 | 2 | 17498 | 1373 | 740 | 116.44 | 116.22 |
| iSCSI | RAID0 | 3 | 18191 | 1463 | 1382 | 50.01 | 151.83 |
| iSCSI | RAID0 | 4 | 18132 | 771 | 767 | 52.41 | 151.05 |
| iSCSI | RAID0 | 5 | 17692 | 897 | 837 | 56.01 | 114.35 |
| iSCSI | RAID0 | 6 | 18010 | 1078 | 1014 | 50.87 | 151.47 |
| iSCSI | RAID6 | 6 | 17173 | 2563 | 870 | 114.06 | 116.37 |

| Protocol | RAID | Disks | 1 block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) |
|---|---|---|---|---|---|---|---|
| NFS | none | 1 | 16146 | 403 | 151 | 62.39 | 115.03 |
| NFS | RAID1 | 2 | 15998 | 625 | 138 | 63.82 | 96.83 |
| NFS | RAID1/0 | 4 | 15924 | 874 | 157 | 65.52 | 115.45 |
| NFS | RAID1/0 | 6 | 16161 | 4371 | 754 | 65.87 | 229.52 |
| NFS | RAID5 | 3 | 16062 | 646 | 137 | 63.2 | 115.15 |
| NFS | RAID5 | 4 | 16173 | 3103 | 612 | 65.19 | 114.76 |
| NFS | RAID5 | 5 | 15718 | 1013 | 162 | 59.26 | 116.1 |
| NFS | RAID5 | 6 | 16161 | 1081 | 201 | 63.85 | 114.63 |
| NFS | RAID0 | 2 | 15920 | 614 | 183 | 66.19 | 114.85 |
| NFS | RAID0 | 3 | 15823 | 757 | 244 | 64.98 | 114.6 |
| NFS | RAID0 | 4 | 16258 | 3769 | 1043 | 66.17 | 114.64 |
| NFS | RAID0 | 5 | 16083 | 4228 | 1054 | 66.06 | 114.91 |
| NFS | RAID0 | 6 | 16226 | 4793 | 1105 | 65.54 | 115.27 |
| NFS | RAID6 | 6 | 15915 | 1069 | 157 | 64.33 | 114.94 |

About the data

After looking around the Internet for tools that can be used to benchmark drive performance, I settled on the venerable IOmeter. Anyone who has used it, however, knows that there is an almost infinite set of possibilities for configuring it for data collection. In originally researching storage benchmarks, I came across several posts that suggest IOmeter along with various sets of test parameters to run against your storage. Because I'm a big fan of VMware, and Chad Sakac of EMC is one of the respected names in the VMware ecosystem, I found his blog post to be a nice starting point when looking for IOmeter test parameters. His set is a good one, but requires some manual setup to get things going. In my research I also came across a company called Enterprise Strategy Group, which not only does validation and research for hire but has also published its custom IOmeter workloads as an IOmeter "icf" configuration file. The data published above was collected using their workload against a 5GB iobw.tst buffer. While the table above represents "the corners" for the storage systems tested, I also captured the entire result set from the IOmeter runs and have published the spreadsheet for anyone interested in additional data.

px6-300 Data Collection

The data in the px6-300 tables represents a bit of a shift in methodology: the original data sets were collected using the Windows version of IOmeter, while the px6-300 data was collected using the VMware Labs ioAnalyzer 1.5 "Fling". Because this approach uses a virtual appliance, a little disclosure is due: the test unit is connected by a pair of LACP active/active 1Gb/s links to a Cisco SG-300 switch. In turn, an ESXi 5.1 host is connected to the switch via 4 x 1Gb/s links, each of which has a vmkernel port bound to it. The stock ioAnalyzer's test disk (SCSI0:1) has been increased in size to 2GB and uses an eager-zeroed thick VMDK (for iSCSI). The test unit has all unnecessary protocols disabled and is on a storage VLAN shared by other storage systems in my lab network. The unit is otherwise free of workloads (including the mdadm synchronization that takes place when configuring different RAID levels for disks, a very time-consuming process); there may be other workloads on the ESXi host, but DRS is enabled for the host's cluster, and if CPU availability were ever an issue in an I/O test (it isn't), other workloads would be migrated away from the host to provide additional resources.
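For reference, an eager-zeroed thick test disk like the one described above can be created from the ESXi shell with vmkfstools. This is just an illustrative sketch; the datastore path and file name are placeholders, not the exact ones from my lab:
    # create a 2GB eager-zeroed thick disk for the ioAnalyzer test device (path is a placeholder)
    vmkfstools -c 2G -d eagerzeroedthick /vmfs/volumes/px6-iscsi/ioAnalyzer/test-disk.vmdk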

The Takeaway

As expected, the SSD-based systems were by far the best-performing on a single-spindle basis. However, as one might expect, an aggregate of spindles can deliver performance that meets or exceeds the capability of SSD, and locally-attached storage can also make up the difference in I/O performance. The trade-off, of course, is cost (both up-front and long-term) versus footprint.