This technique is fairly straightforward. It takes some time and a bit of work at the command line via "Support mode", and you will be best served if all of your data is backed up before you begin. Note: to reach the support page on a "Cloud Edition" unit, the URL is /diagnostics.html
Preparation...
- Upgrade your drives with bigger models.
- It's not strictly required, but I suggest you upgrade starting at the highest labelled drive, working your way to the lowest labelled drive
- In order to make full use of each drive's capacity, they should all be identical.
- Shut down your unit each time you swap a drive (unless you're using a model that is explicitly hot-swappable).
- Allow the unit to fully redistribute the protected data before swapping the next spindle (see the sketch after this list for a way to check progress).
- Enable SSH access to your unit.
- Use an SSH client to log on to your unit:
- username: root
- default password: soho
- If you have security enabled, you will need to append the primary administrator's password to the default password. For example, if your primary user's password is ducksauce, the SSH password is sohoducksauce.
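Not part of the original write-up, but once you have SSH access you can confirm from the command line that the rebuild from the previous swap has actually finished before pulling the next drive; this assumes the data array is /dev/md1, as on my unit:
root@ix2-200d:/# cat /proc/mdstat
While a rebuild is running you will see a "recovery" line with a percentage; once the array shows all members present (e.g. [UU]), it is safe to swap the next drive.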
Magic Time!
The devices can be expanded because the storage stack is built from standard Linux components: an md RAID array containing an LVM volume group, with an XFS file system on the logical volume. The process is straightforward: expand the "outer" container before expanding the inner container(s).
- Dismount the user storage volume:
root@ix2-200d:/# umount -l /mnt/soho_storage
- Expand the RAID pseudo-device:
root@ix2-200d:/# mdadm --grow /dev/md1 --size=max
- Expand the LVM physical volume:
root@ix2-200d:/# pvresize /dev/md1
- Determine the free space in the LVM volume group:
root@ix2-200d:/# vgdisplay
--- Volume group ---
VG Name               md1_vg
. . .
Free PE / Size        476930 / 931.50 GB
- Expand the LVM logical volume by the amount of free blocks:
root@ix2-200d:/# lvextend -l +476930 /dev/md1_vg/md1vol1
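As an aside, lvextend can also be told to take all remaining free space directly, which avoids copying the extent count by hand. I have not confirmed that the older LVM2 build shipped on these units accepts the %FREE syntax, so treat this as an untested alternative:
root@ix2-200d:/# lvextend -l +100%FREE /dev/md1_vg/md1vol1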
- Mount the expanded volume:
root@ix2-200d:/# mount -t xfs -o noatime /dev/mapper/md1_vg-md1vol1 /mnt/soho_storage
- Expand the xfs file system:
root@ix2-200d:/# xfs_growfs /dev/md1_vg/md1vol1
- Reboot the system (so the web management tools will recognize the expansion):
root@ix2-200d:/# telinit 6
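For reference, here is the whole sequence gathered into one small /bin/sh sketch, with comments noting which layer (outer to inner) each command grows. I ran the commands interactively rather than as a script, and the device, volume group, and mount point names are the stock ix2-200 ones used above, so substitute your own and treat this as a summary rather than a tested tool.
#!/bin/sh
# Expand the ix2-200 data volume, outermost container first, innermost last.
# Assumes the stock names used above: /dev/md1, md1_vg, md1vol1, /mnt/soho_storage.
set -e

# Set this to the "Free PE" value reported by vgdisplay for md1_vg.
FREE_PE=476930

umount -l /mnt/soho_storage                          # release the data volume
mdadm --grow /dev/md1 --size=max                     # 1. RAID array (outermost)
pvresize /dev/md1                                    # 2. LVM physical volume
lvextend -l "+$FREE_PE" /dev/md1_vg/md1vol1          # 3. LVM logical volume
mount -t xfs -o noatime /dev/mapper/md1_vg-md1vol1 /mnt/soho_storage
/mnt/apps/usr/sbin/xfs_growfs /dev/md1_vg/md1vol1    # 4. XFS file system (innermost)
telinit 6                                            # reboot so the web UI sees the new size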
Only trouble I'm running into is I can't find xfs_growfs??
عارف
xfs_growfs is in /mnt/apps/usr/sbin
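If it is not on your PATH, you can call it by its full path or add that directory for the session; both forms below just repeat the growfs step from the instructions above:
root@ix2-200d:/# /mnt/apps/usr/sbin/xfs_growfs /dev/md1_vg/md1vol1
or
root@ix2-200d:/# export PATH=$PATH:/mnt/apps/usr/sbin
root@ix2-200d:/# xfs_growfs /dev/md1_vg/md1vol1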
Jim - Thanks for the instructions, so simple even this monkey could follow and execute.
I am now, assuming the unit reboots, the proud owner of a 2 TB mirror for my VMware backups to fill up. The new drives, FYI, came out of some Seagate "Expansion" external desktop drives from Target at ~$99 each, which was $20 less than NewEgg for a bare drive.
Thanks again.
Jim - My disk space was > 95% full before the upgrade and now is not, but I still have the warning up. I went back and checked the steps, but everything is "fixed" for the larger size. I've been digging into what might need to be changed, but thought I would ask for any ideas?
Thanks
Are you sure that you got the LVM expansion done correctly? Is the GUI interface showing the right capacity (forget the % used for now)?
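Not in the original thread, but a few standard commands that report the size at each layer can help narrow down where an expansion didn't take; the names below assume the stock ix2-200 layout from the post:
root@ix2-200d:/# mdadm --detail /dev/md1
root@ix2-200d:/# pvdisplay /dev/md1
root@ix2-200d:/# lvdisplay /dev/md1_vg/md1vol1
root@ix2-200d:/# df -h /mnt/soho_storage
If each of those already reports the new size, the storage layers themselves are expanded and the problem lies elsewhere.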
This works perfectly!
Thank you for your post. I changed the original Seagate drives (one of them had died this past week) and installed two 2TB WD Caviar Greens. The process worked perfectly on my Iomega ix2-200. Now I have a 2TB RAID-1. Good job!
I have an ix2-200 Cloud Edition and my vgdisplay shows different labels... Can someone, or the author, tell me if this procedure will work for my Cloud version?
ReplyDeleteroot@BradshawNAS:/# mount
rootfs on / type rootfs (rw)
/dev/root.old on /initrd type ext2 (rw,relatime,errors=continue)
none on / type tmpfs (rw,relatime,size=51200k,nr_inodes=31083)
/dev/md0_vg/BFDlv on /boot type ext2 (rw,noatime,errors=continue)
/dev/loop0 on /mnt/apps type ext2 (ro,relatime)
/dev/loop1 on /etc type ext2 (rw,sync,noatime)
/dev/loop2 on /oem type cramfs (ro,relatime)
proc on /proc type proc (rw,relatime)
none on /proc/bus/usb type usbfs (rw,relatime)
none on /proc/fs/nfsd type nfsd (rw,relatime)
none on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620)
tmpfs on /mnt/apps/lib/init/rw type tmpfs (rw,nosuid,relatime,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
/dev/mapper/md0_vg-vol1 on /mnt/system type xfs (rw,noatime,attr2,logbufs=8,noquota)
/dev/mapper/18796e6b_vg-lv592c2f3 on /mnt/pools/A/A0 type xfs (rw,noatime,attr2,nobarrier,logbufs=8,noquota)
root@BradshawNAS:/# vgdisplay
--- Volume group ---
VG Name 18796e6b_vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.80 TB
PE Size 4.00 MB
Total PE 471809
Alloc PE / Size 471809 / 1.80 TB
Free PE / Size 0 / 0
VG UUID FoobxS-1p3S-3hRw-8IvE-LUHl-LF43-zMiKZC
--- Volume group ---
VG Name md0_vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 20.01 GB
PE Size 4.00 MB
Total PE 5122
Alloc PE / Size 5122 / 20.01 GB
Free PE / Size 0 / 0
VG UUID iJw46u-5mxS-20lj-Ph8p-aM0s-yHwd-n00voZ
Understanding that this is over 3 years old, I just had to do this and had the same issues of different VG names (58c8bb82_vg), as well as a different user storage name (/mnt/pools/A/A0).
Just substitute your VG name and leverage tab completion when doing anything with /dev//..
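Not from the original post, but based on the mount and vgdisplay output pasted above, the same sequence on a Cloud Edition unit would look roughly like this. The volume group (18796e6b_vg), logical volume (lv592c2f3), and mount point (/mnt/pools/A/A0) are taken from that output; the md device backing the data volume is an assumption, so confirm it first with pvdisplay before growing anything.
root@BradshawNAS:/# pvdisplay
root@BradshawNAS:/# umount -l /mnt/pools/A/A0
root@BradshawNAS:/# mdadm --grow /dev/md1 --size=max
root@BradshawNAS:/# pvresize /dev/md1
root@BradshawNAS:/# vgdisplay 18796e6b_vg
root@BradshawNAS:/# lvextend -l +<Free PE from vgdisplay> /dev/18796e6b_vg/lv592c2f3
root@BradshawNAS:/# mount -t xfs -o noatime /dev/mapper/18796e6b_vg-lv592c2f3 /mnt/pools/A/A0
root@BradshawNAS:/# /mnt/apps/usr/sbin/xfs_growfs /dev/18796e6b_vg/lv592c2f3
root@BradshawNAS:/# telinit 6
In the mdadm and pvresize lines, substitute whichever /dev/mdX device pvdisplay reports as holding 18796e6b_vg.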