I had two 1 TB drives in my Synology DS212j in RAID1 (or rather Synology Hybrid RAID), and I ran out of space.
I bought a single new 4 TB drive, planning to add the second one later from a different production batch (lower probability of both failing at the same time).
I thought it would be trivial to replace the two drives with a degraded RAID of one larger drive and increase the available space. Unfortunately, it was not trivial.
At first, I replaced one of the drives and powered on in a degraded state. I used the Synology GUI to repair the RAID and thus sync all data to the new 4 TB drive. That worked and took some hours.
Then I removed the second 1 TB drive to keep the RAID degraded, now with only the new larger drive. I powered on my NAS, but it didn't offer to expand the space. In fact, all the options in Storage Manager were greyed out. Bad luck.
So I went the command-line way.
Now some theory: Synology Hybrid RAID (SHR) actually uses LVM (Linux logical volume management) on top of RAID1 (managed by mdadm). Unfortunately, this additional layer of LVM between the RAID and the filesystem complicated things for me.
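If you want to see this layering on your own box before changing anything, a few read-only commands give a good picture (the device names are the ones from my setup, yours may differ):

cat /proc/mdstat     # md arrays and their member partitions
mdadm -D /dev/md2    # details of the data array, including its current size
pvdisplay            # the LVM physical volume sitting on top of /dev/md2
vgdisplay -v         # the volume group and the logical volume inside it
df -h /volume1       # the ext4 filesystem mounted from /dev/vg1000/lv

None of these change anything, so they are safe to run at any point to check where you stand.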
So, in order to extend my volume, I need to:
- Resize the physical partition on the drive to fill all available space (e.g., /dev/sda3)
- Resize the RAID1 array on top of /dev/sda3 (e.g., /dev/md2)
- Resize the LVM physical volume on top of /dev/md2
- Resize the LVM volume group built from that physical volume (e.g., /dev/vg1000)
- Resize the LVM logical volume spanning /dev/vg1000 (e.g., /dev/vg1000/lv)
- Resize the ext4 filesystem on top of /dev/vg1000/lv
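Roughly speaking, each layer maps to one command; the exact invocations and the caveats I hit are in the steps below:

parted /dev/sda                    # recreate /dev/sda3 so it spans the whole disk
mdadm --grow /dev/md2 --size max   # grow the RAID array into the bigger partition
pvresize /dev/md2                  # grow the LVM physical volume (the VG picks up the space automatically)
lvextend -L+100G /dev/vg1000/lv    # grow the logical volume into the free space (repeat or adjust the size)
resize2fs /dev/vg1000/lv           # grow the ext4 filesystem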
Quite a number of steps. Here is how I did it:
1. Resizing the physical partitions is possible with parted. Note that fdisk cannot handle drives as big as 4 TB. However, newer versions of parted have the resize command removed (bastards), so you actually need to delete the partition and recreate it in its place. Scary, but it works.
For that, start parted /dev/sda and issue these commands (an example session is sketched after the list):
- unit s - this makes parted use sectors instead of MB/GB/etc.; being exact is crucial when recreating the partition
- print free - will list the current partition table along with the free space at the end
- rm 3 - will delete the 3rd partition (check that it is the correct number; mine was 5 for some odd reason)
- mkpart ext4 - will create a new partition in its place. Make sure to specify the same start sector that was printed, and the last sector of the free space, so the whole disk is used. If it complains about alignment, choose Ignore - it will still be minimally aligned.
- Now reboot - I didn't find a working way of forcing the Synology kernel to reread the partition table. Even though my data partition was /dev/sda5 and became /dev/sda3 after recreation, Linux RAID was still able to detect it (probably by UUID) after the reboot and assemble the RAID array correctly.
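To make this more concrete, here is roughly what the parted session looks like; answer the prompts with exactly the start sector and the last free sector that print free reported for your disk:

parted /dev/sda
(parted) unit s
(parted) print free
(parted) rm 3
(parted) mkpart ext4
  ... answer the prompts, giving the original start sector and the last sector of the free space ...
(parted) print free
(parted) quit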
2. After the reboot you should see lots of space in /proc/partitions, but mdadm -D /dev/md2 will show that the array still uses only a fraction of it.
Run mdadm --grow /dev/md2 --size max - this should do the trick. But it didn't work for me. In fact, that was the trickiest part to figure out. If it actually increased the size of your RAID array, proceed to step 3, but if it kept the old size, read on.
Now you need to reassemble the RAID array, asking it to update the device size. Unfortunately, you need to unmount the filesystem and disable LVM for it to work.
- Stop all Synology apps in the GUI, like Download manager and others.
- Stop other services accessing the filesystem (/volume1); I had to stop these:
/usr/syno/etc/rc.d/S20pgsql.sh stop
/usr/syno/etc/rc.d/S78iscsitrg.sh stop
/usr/syno/etc/rc.d/S81atalk.sh stop
/usr/syno/etc/rc.d/S83nfsd.sh stop
/usr/syno/etc/rc.d/S84rsyncd.sh stop
/usr/syno/etc/rc.d/S85synonetbkpd.sh stop
/usr/syno/etc/rc.d/S88synomkflvd.sh stop
/usr/syno/etc/rc.d/S66S2S.sh stop
/usr/syno/etc/rc.d/S66fileindexd.sh stop
/usr/syno/etc/rc.d/S66synoindexd.sh stop
/usr/syno/etc/rc.d/S77synomkthumbd.sh stop
/usr/syno/etc/rc.d/S80samba.sh stop
- umount /volume1
- umount -f /volume1/@optware - this one was trickier, hence the -f. After that you will probably lose your SSH session. Go to the web GUI and enable SSH again in Control Panel / Terminal.
- umount /volume1/@optware - after reconnecting, you need to do this once more
- vgchange -a n - this will deactivate your LVM volume group, also deactivating the logical volume. This will only work if both /volume1 and /volume1/@optware are unmounted.
- mdadm -S /dev/md2 - only now you can stop the RAID array
- mdadm -A /dev/md2 -U devicesize /dev/sda3 - this will reassemble the RAID array, updating the device size
- mdadm --grow /dev/md2 -z max - finally, growing of the array will work!
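Whichever of the two paths worked for you, it is worth confirming that the array really grew before touching LVM:

cat /proc/mdstat
mdadm -D /dev/md2    # the Array Size line should now reflect the full partition

Also note that since the volume group was deactivated above, at some point before running e2fsck/resize2fs on /dev/vg1000/lv you will need its device node back - vgchange -a y should reactivate it (a reboot achieves the same).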
3. Resizing the LVM physical volume is then easy.
- vgdisplay -v - will show you everything, use it to check your resizing steps.
- pvresize /dev/md2 - will use all available space on /dev/md2
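To double-check:

pvdisplay /dev/md2    # PV Size should now show the full array

There is no separate resize step for the volume group: since /dev/md2 is already a member of vg1000, the new space simply appears as free extents in the VG.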
4. Check vgdisplay - at this point it will show how much free space you have in the LVM volume group.
5. The free space in the VG should now be allocated to the logical volume (LV). I didn't find a way of telling it to use all available space at once, so I did it in steps:
- vgdisplay -v - will show you how much free space is left in your VG
- lvextend -L+100G /dev/vg1000/lv - will extend the LV by 100 GB. You can specify the exact amount of free space reported by vgdisplay here, or just repeat this until no free space is left.
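As a side note, reasonably recent LVM versions can extend a logical volume into all remaining free space in one go:

lvextend -l +100%FREE /dev/vg1000/lv

I haven't verified whether the LVM build shipped with the DS212j supports this syntax, so extending in fixed chunks as above is the safe route.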
6. Congratulations! Now we have bigger underlying storage, but we still need to resize the ext4 filesystem on top of LVM.
- e2fsck -f /dev/vg1000/lv - this is the required step before doing the actual resize.
- resize2fs /dev/vg1000/lv - this will take a long time again
But once the resize is completed you can reboot the NAS and enjoy your newly available free space!
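After the reboot, df -h /volume1 should report the new size, and Storage Manager should show the expanded volume as well.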
Too bad Synology's own tools could not do this, so I had to spend many hours researching the setup and doing it manually. Hopefully this blog post will help someone save time.
Good luck!