Monday, April 25, 2011

Non-destructive Drive Expansion in the StorCenter

If you can't tell from the series of posts I've already published, I'm having some fun playing with the iomega ix2-200 that I received from Chad Sakac of EMC. In reviewing those posts, I realized that I didn't publish anything on the trick to expanding the storage on the ix2 (which should also apply to all the models in the StorCenter series) without destroying your data.

This technique is fairly straightforward, but it takes time and a bit of work at the command line via "Support mode", and you will be best served if all your data is backed up before you start. Note: to get at the support page on a "Cloud Edition" unit, the URL is /diagnostics.html

Preparation...

  1. Upgrade your drives with bigger models.
    1. It's not strictly required, but I suggest you upgrade starting at the highest-labelled drive and work your way down to the lowest-labelled drive.
    2. In order to make full use of each drive's capacity, they should all be identical.
    3. Shut down your unit each time you swap a drive (unless you're using a model that is explicitly hot-swappable).
    4. Allow the unit to fully redistribute the protected data before swapping the next spindle (you can confirm the rebuild has finished with the /proc/mdstat check shown after this list).
  2. Enable SSH access to your unit.
    1. There's an unlinked page on the system called support.html; access it by putting the address directly into your browser's address bar.
      support.html
    2. Check "Allow remote access for support (SSH and SFTP)" and click Apply.
  3. Use an SSH client to logon to your unit
    • username: root
    • default password: soho
    • If you have security enabled, you will need to append the primary administrator's password to the default password. For example, if your primary user's password is ducksauce, the SSH password is sohoducksauce.
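
Once you're logged in, you can also use SSH to confirm that the unit has finished redistributing data after each drive swap by checking the kernel's RAID status (a quick sketch; the data array on my unit is /dev/md1, yours may be labelled differently):
    root@ix2-200d:/# cat /proc/mdstat
Wait until any "recovery" or "resync" progress line disappears before pulling the next drive.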

Magic Time!

The devices can be expanded because of the way the storage subsystem is layered in the Linux kernel: an md RAID array carries an LVM physical volume, which carries a logical volume, which in turn carries the XFS file system. The process is straightforward: expand the "outer" container before expanding the inner container(s).
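If you want to see that layering for yourself before growing anything, a few read-only commands will show each container from the outside in (a sketch using the device names from my unit; yours may differ):
    root@ix2-200d:/# cat /proc/mdstat                    # the md RAID array (outermost)
    root@ix2-200d:/# pvdisplay /dev/md1                  # the LVM physical volume on top of it
    root@ix2-200d:/# lvdisplay /dev/md1_vg/md1vol1       # the LVM logical volume
    root@ix2-200d:/# df -h /mnt/soho_storage             # the xfs file system (innermost)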
  1. Dismount the user storage volume:
    root@ix2-200d:/# umount -l /mnt/soho_storage
  2. Expand the RAID pseudo-device:
    root@ix2-200d:/# mdadm --grow /dev/md1 --size=max
  3. Expand the LVM physical volume:
    root@ix2-200d:/# pvresize /dev/md1
  4. Determine the free space in the LVM volume group:
    root@ix2-200d:/# vgdisplay
    --- Volume group ---
      VG Name               md1_vg
        .
        .
        .
      Free  PE / Size       476930 / 931.50 GB
        .
    
  5. Expand the LVM logical volume by the number of free extents reported above:
    root@ix2-200d:/# lvextend -l +476930 /dev/md1_vg/md1vol1
  6. Mount the expanded volume:
    root@ix2-200d:/# mount -t xfs -o noatime /dev/mapper/md1_vg-md1vol1 /mnt/soho_storage
  7. Expand the xfs file system:
    root@ix2-200d:/# xfs_growfs /dev/md1_vg/md1vol1
  8. Reboot the system (so the web management tools will recognize the expansion):
    root@ix2-200d:/# telinit 6
Your device will reboot and (if you have email notifications correctly configured) it will notify you that "data protection is being reconstructed", a side effect of expanding the outermost RAID container.
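
To double-check that every layer picked up the new capacity after the reboot, you can compare the sizes each one reports (a sketch, again assuming the stock md1/md1_vg names):
    root@ix2-200d:/# mdadm --detail /dev/md1 | grep "Array Size"
    root@ix2-200d:/# vgdisplay md1_vg | grep "VG Size"
    root@ix2-200d:/# df -h /mnt/soho_storage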

10 comments:

  1. Only trouble I'm running into is I can't find xfs_growfs??

    ReplyDelete
  2. xfs_growfs is in /mnt/apps/usr/sbin
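
    If it's not on your PATH, you can call it by its full path or add that directory for the session, e.g. (a sketch assuming the stock volume names from the post):
    root@ix2-200d:/# export PATH=$PATH:/mnt/apps/usr/sbin
    root@ix2-200d:/# xfs_growfs /dev/md1_vg/md1vol1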

    ReplyDelete
  3. Jim - Thanks for the instructions so simple even this monkey could follow and execute.

    I am now, assuming the unit reboots, the proud owner of a 2 TB mirror for my VMware backup to fill up. The new drives, FYI, came out of some Seagate "Expansion" external desktop drives from Target, ~$99 each, which was $20 less than NewEgg for a bare drive.

    Thanks again.

    ReplyDelete
  4. Jim - My disk space was > 95% full before upgrade and now is not, but I still have the warning up. I went back and checked the steps but everything is "fixed" for the larger size. Been digging into what might need to be changed but thought I would ask for any ideas?

    Thanks

    ReplyDelete
    Replies
    1. Are you sure that you got the LVM expansion done correctly? Is the GUI interface showing the right capacity (forget the % used for now)?

      Delete
  5. this works perfectly!

    ReplyDelete
  6. Thank you for your post. I changed the original Seagate drives (one of them had died this past week) and installed two 2TB WD Caviar Green drives. The process worked perfectly on my Iomega ix2-200. Now I have a 2TB RAID-1. Good job!

    ReplyDelete
  7. I have an ix2-200 Cloud Edition and my vgdisplay shows different volume group names... Can someone or the author tell me if this procedure will work for my cloud version?
    root@BradshawNAS:/# mount
    rootfs on / type rootfs (rw)
    /dev/root.old on /initrd type ext2 (rw,relatime,errors=continue)
    none on / type tmpfs (rw,relatime,size=51200k,nr_inodes=31083)
    /dev/md0_vg/BFDlv on /boot type ext2 (rw,noatime,errors=continue)
    /dev/loop0 on /mnt/apps type ext2 (ro,relatime)
    /dev/loop1 on /etc type ext2 (rw,sync,noatime)
    /dev/loop2 on /oem type cramfs (ro,relatime)
    proc on /proc type proc (rw,relatime)
    none on /proc/bus/usb type usbfs (rw,relatime)
    none on /proc/fs/nfsd type nfsd (rw,relatime)
    none on /sys type sysfs (rw,relatime)
    devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620)
    tmpfs on /mnt/apps/lib/init/rw type tmpfs (rw,nosuid,relatime,mode=755)
    tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
    /dev/mapper/md0_vg-vol1 on /mnt/system type xfs (rw,noatime,attr2,logbufs=8,noquota)
    /dev/mapper/18796e6b_vg-lv592c2f3 on /mnt/pools/A/A0 type xfs (rw,noatime,attr2,nobarrier,logbufs=8,noquota)
    root@BradshawNAS:/# vgdisplay
    --- Volume group ---
    VG Name 18796e6b_vg
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 6
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 1
    Open LV 1
    Max PV 0
    Cur PV 1
    Act PV 1
    VG Size 1.80 TB
    PE Size 4.00 MB
    Total PE 471809
    Alloc PE / Size 471809 / 1.80 TB
    Free PE / Size 0 / 0
    VG UUID FoobxS-1p3S-3hRw-8IvE-LUHl-LF43-zMiKZC

    --- Volume group ---
    VG Name md0_vg
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 3
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 2
    Open LV 2
    Max PV 0
    Cur PV 1
    Act PV 1
    VG Size 20.01 GB
    PE Size 4.00 MB
    Total PE 5122
    Alloc PE / Size 5122 / 20.01 GB
    Free PE / Size 0 / 0
    VG UUID iJw46u-5mxS-20lj-Ph8p-aM0s-yHwd-n00voZ

    ReplyDelete
    Replies
    1. Understanding that this is over 3 years old, I just had to do this and had the same issues of different VG names (58c8bb82_vg), as well as a different user storage name (/mnt/pools/A/A0).

      Just substitute your VG name and leverage tab completion when doing anything with /dev//..
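
      For a Cloud Edition unit like the one above, the same sequence would look roughly like this (a sketch only; substitute your own VG/LV names, mount point, and md device):
      root@BradshawNAS:/# umount -l /mnt/pools/A/A0
      root@BradshawNAS:/# pvs                                # confirm which /dev/mdX backs the data VG
      root@BradshawNAS:/# mdadm --grow /dev/md1 --size=max
      root@BradshawNAS:/# pvresize /dev/md1
      root@BradshawNAS:/# vgdisplay 18796e6b_vg              # note the Free PE count
      root@BradshawNAS:/# lvextend -l +<free PE> /dev/18796e6b_vg/lv592c2f3
      root@BradshawNAS:/# mount -t xfs -o noatime /dev/mapper/18796e6b_vg-lv592c2f3 /mnt/pools/A/A0
      root@BradshawNAS:/# xfs_growfs /dev/18796e6b_vg/lv592c2f3
      root@BradshawNAS:/# telinit 6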

      Delete