
Saturday, December 17, 2011

drobo's top-notch customer support

Although I'm a professional storage administrator, I really like the dead-simple maintenance you get with the storage appliances from the folks at Drobo. It's true that they seem a bit pricey, and it's very true that their consumer-oriented units are "performance-challenged," but when it comes to an easy way to create a huge, self-managing, readily growing storage pool for everything from home video and digital stills to a dumping ground for backup images, they're really hard to beat.
I own four units:

  • 2 first-generation 4-spindle units (USB 2.0)
  • 1 8-spindle DroboPro (USB 2.0, FireWire, iSCSI)
  • 1 5-spindle Drobo FS (CIFS)
The reason for this post is a problem I had today with my Pro: although it presents a 16TB volume to the connected host via Ethernet/iSCSI, it's filled with a mix of 1TB and 1.5TB SATA-2 drives. The usable capacity is in the neighborhood of 8TB, and I had it roughly 70% full with backups and staging files when it decided to die.
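(For the curious, that "neighborhood of 8TB" squares with the usual single-redundancy BeyondRAID estimate: raw capacity minus the largest drive, then converted from the drive makers' decimal terabytes to the binary terabytes the OS reports. The sketch below assumes the 6 x 1.5TB + 2 x 1TB mix listed for the DroboPro in the Drive Performance post further down this page; Drobo doesn't publish its exact allocation algorithm, so treat it as a rough approximation.)

```python
# Rough BeyondRAID single-redundancy capacity estimate for the DroboPro.
# Assumes the 6 x 1.5TB + 2 x 1TB drive mix listed for the DroboPro in the
# Drive Performance post below; "raw minus the largest drive" is Drobo's
# published rule of thumb, not the exact allocation algorithm.

drives_tb = [1.5] * 6 + [1.0] * 2            # decimal TB, as marketed

raw_tb = sum(drives_tb)                      # 11.0 TB raw
protected_tb = raw_tb - max(drives_tb)       # 9.5 TB protected (decimal)
reported_tib = protected_tb * 1e12 / 2**40   # ~8.6 "TB" as the OS reports it

print(f"raw: {raw_tb} TB, protected: {protected_tb} TB "
      f"(~{reported_tib:.1f} TiB reported)")
```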
I pulled the power and restarted it, and it went into an endless boot cycle.
Nuts.
After checking online and finding a troubleshooting post from Drobo, I went through the steps to address the problem. No joy.
I'd purchased the unit in November 2009, so it was clearly out of warranty, but I took a chance and sent a help request to Drobo Support. It didn't take long before Chad was answering back through their support system, and once I'd given him good, descriptive information about the problem, he quickly responded with "please call me at..." to gather some final details on my symptoms.
Unfortunately, they see this happen enough that they've actually coined a name for it: the Yellow Brick Road.
Luckily, the folks at Drobo recognized some time ago that this particular boot-loop cannot be solved by the customer, and even when the device is out of warranty, they'll RMA the thing and get you back in business.
By any measure, that's awesome customer support.

Thursday, April 14, 2011

Drive Performance

UPDATE: more data from other arrays & configurations
In an earlier post I said I was building a table of performance data from my experimentation with my new iomega ix2-200 as well as other drive configurations for comparison. In addition to the table that follows, I'm also including a spreadsheet with the results:
| The Corners | 1 block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) | Notes |
|---|---|---|---|---|---|---|
| local SSD RAID0 | 10400 | 2690 | 3391 | 63.9 | 350.6 | 2 x Kingston "SSD Now V-series" SNV425 |
| ix2 SSD CIFS | 3376 | 891 | 308 | 25.7 | 40.4 | 2 x Kingston "SSD Now V-series" SNV425 |
| ix2 SSD iSCSI | 4032 | 664 | 313 | 29.4 | 38.5 | 2 x Kingston "SSD Now V-series" SNV425 |
| local 7200 RPM SATA RAID1 | 7242 | 167 | 357 | 94.3 | 98.1 | 2 x Western Digital WD1001FALS |
| ix4 7200RPM CIFS** | 2283 | 133 | 138 | 32.5 | 39.4 | 4 x Hitachi H3D200-series; **jumbo frames enabled |
| ix2 7200RPM CIFS | 2362 | 125 | 98 | 9.81 | 9.2 | 2 x Hitachi H3D200-series |
| ix2 7200RPM iSCSI | 2425 | 123 | 104 | 9.35 | 9.64 | 2 x Hitachi H3D200-series |
| ix4 7200RPM iSCSI** | 4687 | 117 | 122 | 37.4 | 40.8 | 4 x Hitachi H3D200-series; **jumbo frames enabled |
| ix4a stock CIFS | 2705 | 112 | 113 | 24 | 27.8 | 4 x Seagate ST32000542AS |
| ix4 stock iSCSI | 1768 | 109 | 96 | 34.5 | 41.7 | 4 x Seagate ST31000520AS |
| ix4a stock iSCSI* | 408 | 107 | 89 | 24.2 | 27.2 | 4 x Seagate ST32000542AS; *3 switch "hops" with no storage optimization introduce additional latency |
| ix2 stock CIFS | 2300 | 107 | 85 | 9.85 | 9.35 | 2 x Seagate ST31000542AS |
| ix2 stock iSCSI | 2265 | 102 | 84 | 9.32 | 9.66 | 2 x Seagate ST31000542AS |
| ix4 stock CIFS | 4407 | 81 | 81 | 32.1 | 37 | 4 x Seagate ST31000520AS |
| DROBO PRO (iSCSI) | 1557 | 71 | 68 | 33.1 | 40.5 | 6 x Seagate ST31500341AS + 2 x Western Digital WD1001FALS; jumbo frames |
| DROBO USB | 790 | 63 | 50 | 11.2 | 15.8 | 2 x Seagate ST31000333AS + 2 x Western Digital WD3200JD |
| DS2413+ 7200RPM RAID1/0 iSCSI | 12173 | 182 | 194 | 63.53 | 17.36 | 2 x Hitachi HDS722020ALA330 + 6 x HDS723020BLA642 |
| DS2413+ 7200RPM RAID1/0 NFS | | | | | | 2 x Hitachi HDS722020ALA330 + 6 x HDS723020BLA642 |
| DS2413+ SSD RAID5 iSCSI | 19238 | 1187 | 434 | 69.79 | 123.97 | 4 x Crucial M4 |

PX6-300

| Protocol | RAID | Disks | 1 block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) |
|---|---|---|---|---|---|---|---|
| iSCSI | none | 1 | 16364 | 508 | 225 | 117.15 | 101.11 |
| iSCSI | RAID1 | 2 | 17440 | 717 | 300 | 116.19 | 116.91 |
| iSCSI | RAID1/0 | 4 | 17205 | 2210 | 629 | 115.27 | 107.75 |
| iSCSI | RAID1/0 | 6 | 17899 | 936 | 925 | 43.75 | 151.94 |
| iSCSI | RAID5 | 3 | 17458 | 793 | 342 | 112.29 | 116.34 |
| iSCSI | RAID5 | 4 | 18133 | 776 | 498 | 45.49 | 149.27 |
| iSCSI | RAID5 | 5 | 17256 | 1501 | 400 | 115.15 | 116.12 |
| iSCSI | RAID5 | 6 | 18022 | 1941 | 1065 | 52.64 | 149.1 |
| iSCSI | RAID0 | 2 | 17498 | 1373 | 740 | 116.44 | 116.22 |
| iSCSI | RAID0 | 3 | 18191 | 1463 | 1382 | 50.01 | 151.83 |
| iSCSI | RAID0 | 4 | 18132 | 771 | 767 | 52.41 | 151.05 |
| iSCSI | RAID0 | 5 | 17692 | 897 | 837 | 56.01 | 114.35 |
| iSCSI | RAID0 | 6 | 18010 | 1078 | 1014 | 50.87 | 151.47 |
| iSCSI | RAID6 | 6 | 17173 | 2563 | 870 | 114.06 | 116.37 |
| Protocol | RAID | Disks | 1 block seq read (IOPS) | 4K random read (IOPS) | 4K random write (IOPS) | 512K seq write (MB/s) | 512K seq read (MB/s) |
|---|---|---|---|---|---|---|---|
| NFS | none | 1 | 16146 | 403 | 151 | 62.39 | 115.03 |
| NFS | RAID1 | 2 | 15998 | 625 | 138 | 63.82 | 96.83 |
| NFS | RAID1/0 | 4 | 15924 | 874 | 157 | 65.52 | 115.45 |
| NFS | RAID1/0 | 6 | 16161 | 4371 | 754 | 65.87 | 229.52 |
| NFS | RAID5 | 3 | 16062 | 646 | 137 | 63.2 | 115.15 |
| NFS | RAID5 | 4 | 16173 | 3103 | 612 | 65.19 | 114.76 |
| NFS | RAID5 | 5 | 15718 | 1013 | 162 | 59.26 | 116.1 |
| NFS | RAID5 | 6 | 16161 | 1081 | 201 | 63.85 | 114.63 |
| NFS | RAID0 | 2 | 15920 | 614 | 183 | 66.19 | 114.85 |
| NFS | RAID0 | 3 | 15823 | 757 | 244 | 64.98 | 114.6 |
| NFS | RAID0 | 4 | 16258 | 3769 | 1043 | 66.17 | 114.64 |
| NFS | RAID0 | 5 | 16083 | 4228 | 1054 | 66.06 | 114.91 |
| NFS | RAID0 | 6 | 16226 | 4793 | 1105 | 65.54 | 115.27 |
| NFS | RAID6 | 6 | 15915 | 1069 | 157 | 64.33 | 114.94 |

About the data

After looking around the Internet for tools that can be used to benchmark drive performance, I settled on the venerable IOmeter. Anyone who has used it, however, knows that there is an almost infinite set of possibilities for configuring it for data collection. In originally researching storage benchmarks, I came across several posts that suggest IOmeter along with various sets of test parameters to run against your storage. Because I'm a big fan of VMware, and Chad Sakac of EMC is one of the respected names in the VMware ecosystem, I found his blog post to be a nice starting point when looking for IOmeter test parameters. His set is a good one, but it requires some manual setup to get things going. In my research I also came across a company called Enterprise Strategy Group, which not only does validation and research for hire but has also published its custom IOmeter workloads as an IOmeter "icf" configuration file. The data published above was collected using their workload against a 5GB iobw.tst buffer. While the table above represents "the corners" for the storage systems tested, I also captured the entire result set from the IOmeter runs and have published the spreadsheet for anyone interested in additional data.
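For anyone who wants to boil their own IOmeter runs down to the same "corners," here's a minimal sketch of the kind of post-processing I'm describing. It assumes you've already trimmed the raw IOmeter output to a simple CSV with one header row and one row per access specification, that the column names ("Access Specification Name", "IOps", "MBps") match your IOmeter version, and that the specification names match the labels used in the table above; adjust any of those to suit your export.

```python
# Minimal sketch: pull "the corners" out of a flattened IOmeter results CSV.
# Assumptions: one header row, one row per access specification, and column
# names that match your IOmeter version (adjust the strings below if not).
import csv

IOPS_CORNERS = {"1 block seq read", "4K random read", "4K random write"}
MBPS_CORNERS = {"512K seq write", "512K seq read"}

def summarize_corners(path):
    corners = {}
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            spec = (row.get("Access Specification Name") or "").strip()
            if spec in IOPS_CORNERS:
                corners[spec] = (float(row["IOps"]), "IOPS")
            elif spec in MBPS_CORNERS:
                corners[spec] = (float(row["MBps"]), "MB/s")
    return corners

if __name__ == "__main__":
    for spec, (value, unit) in summarize_corners("results.csv").items():
        print(f"{spec:20s} {value:10.2f} {unit}")
```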

px6-300 Data Collection

The data in the px6-300 tables represents a bit of a shift in methodology: the original data sets were collected using the Windows version of IOmeter, while the px6-300 data was collected using the VMware Labs ioAnalyzer 1.5 "Fling". Because it uses the virtual appliance, a little disclosure is due: the test unit is connected by a pair of LACP active/active 1Gb/s links to a Cisco SG-300 switch. In turn, an ESXi 5.1 host is connected to the switch via 4 x 1Gb/s links, each of which has a vmkernel port bound to it. The stock ioAnalyzer test disk (SCSI0:1) has been increased in size to 2GB and uses an eager-zeroed thick VMDK (for iSCSI). The test unit has all unnecessary protocols disabled and sits on a storage VLAN shared by other storage systems in my lab network. The unit is otherwise free of any workloads (including the mdadm synchronization that takes place when configuring different RAID levels, a very time-consuming process); there may be other workloads on the ESXi host, but DRS is enabled for the host's cluster, and if CPU availability were ever an issue during an I/O test (it isn't), other workloads would be migrated away from the host to provide additional resources.
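One sanity check worth applying to the sequential numbers above: a single 1Gb/s link tops out around 115-117 MB/s once protocol overhead is accounted for, which is exactly where most of the 512K results land, while the handful of higher readings (the ~150 MB/s iSCSI reads, or the 229 MB/s NFS read) imply the traffic was spread across more than one link. A back-of-the-envelope calculation follows; the ~7% overhead allowance is my own rough assumption, not a measured value.

```python
# Back-of-the-envelope wire-speed ceiling for the px6-300 sequential results.
# The ~7% protocol-overhead allowance is a rough assumption, not a measurement.

def link_ceiling_mb_s(links=1, gbit_per_link=1.0, efficiency=0.93):
    """Approximate usable throughput (MB/s) across N gigabit links."""
    return links * gbit_per_link * 1000 / 8 * efficiency

print(f"1 x 1Gb/s link : ~{link_ceiling_mb_s(1):.0f} MB/s")  # ~116 MB/s
print(f"2 x 1Gb/s links: ~{link_ceiling_mb_s(2):.0f} MB/s")  # ~232 MB/s
```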

The Takeaway

As expected, the SSD-based systems were by far the best performers on a per-spindle basis. An aggregate of spindles, however, can provide enough parallelism to meet or exceed the capability of SSD, and locally attached storage can also make up the difference in I/O performance. The trade-off, of course, is cost (both up-front and long-term) versus footprint.

Monday, January 14, 2008

Playing with drobo

Okay, I'm a sucker for certain gizmos. I am now the proud owner of not one, but two Data Robotics "drobo" external storage devices.

I bought the first one from Dell for $409 + s/h. That's almost $100 off the current list price that everyone else posts. I picked up a pair of 750GB Western Digital drives from NewEgg, the Caviar SE16 WD7500AAKS. These are 7200RPM, 3.0Gb/s SATA drives with a 3-year OEM warranty. Nice. With the >650GB of protected storage, I've been able to move a bunch of home video from tape (Digital8) to best-quality AVI on the drobo.

As a reward for purchasing and registering the drobo, Data Robotics offered me another one for $299.

I couldn't resist.

I bought the second one along with another pair of the WD drives, and was prepared to be in storage nirvana before the end of the year.

But fate had different ideas...

One of the two new drives was DOA. The drobo failed it, and when I attached it to a USB dongle and tried to get it to work as a plain drive on my desktop, it also failed. Oh well, time to get the drive back to NewEgg for a replacement.

Yechh. 25% out-of-box failure rate on these drives. Good thing I'm using them in a protected storage array!

Anyway, by the time the replacement drive arrived and I got the drobo installed on my Vista system, it was 10 days into the new year.

And this is where things get to the point. My Vista machine is running four 320GB drives in a pair of RAID-1 volumes; I'm not using RAID-0/1 because of issues with the drivers and chipset, and I was wasting quite a bit of space in that configuration.

So I had the clever idea: swap the 4x320GB drives in my machine for the 2x750GB drives in the drobo.

I started by moving all the data in the second RAID-1 array to the drobo.

When that was done, I deleted the volume in Vista, removed the 2 drives for that array from my system, installed them in the drobo, and waited for it to incorporate the drives.

Then I swapped one of the new 750GB drives in the drobo for one of the remaining 320GB drives in the PC. Both the PC and the drobo went into 'degraded state' while the arrays were rebuilt with the swapped spindles, but everything eventually 'went green.'

Again, I swapped a 750GB in the drobo for a 320GB in the PC. Again, I went through the process of having the arrays rebuild themselves. Finally, I had to delete the 320GB RAID1 volume on the PC and create the 750GB volume without clearing the data in order to get the PC to use the entire space available on the new drives.

I now have a nearly 1TB storage volume on my drobo, coupled with a 750GB RAID-1 volume with my Vista installation on it. Now it's time to redo the volumes in Vista to take advantage of the additional space...