#storage

7 posts.

Expanding an OPNsense VM disk

After resizing the VM disk at the host level, OPNsense does not automatically use the extra space. gpart show may report the disk as corrupt. This is expected.

What is actually happening is that the backup GPT header is still sitting where the old end of the disk was, instead of at the new end.
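A small sketch of why the warning appears: GPT keeps its backup header in the disk's last LBA, so after the disk grows that copy is stranded mid-disk. The sizes and the 512-byte sector assumption below are illustrative only.

```python
# The GPT backup header lives in the final sector (last LBA) of the disk.
# After a host-level resize, the header written at the old last LBA is no
# longer at the end of the disk, which is what gpart flags as "corrupt".
# `gpart recover` rewrites it at the new end. 512-byte sectors assumed.
SECTOR = 512

def backup_header_lba(disk_bytes: int) -> int:
    # Last addressable sector of a disk of the given size.
    return disk_bytes // SECTOR - 1

old_lba = backup_header_lba(20 * 2**30)  # hypothetical 20 GiB disk before resize
new_lba = backup_header_lba(40 * 2**30)  # same disk grown to 40 GiB
print(old_lba, new_lba)  # → 41943039 83886079
```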

Fix

Enter the console shell.

Repair GPT metadata. This does not touch data.

gpart recover ada0

Resize the root partition. On a default UFS install this is freebsd-ufs, usually partition 3.

gpart resize -i 3 ada0

Grow the filesystem.

growfs /

Verify.

df -h

Notes

  • Applies to UFS installs
  • The corruption warning after a disk resize is normal
  • No reinstall required

Attempts to fix corrupted BTRFS volume in DSM

/ Nas /

I was restarting a Docker container on my Synology NAS when I suddenly got a warning that my primary volume was mounted in read-only mode.

Checking dmesg, I saw an error about a corrupted leaf. At this point, I didn’t really know how Btrfs works, or what a leaf is.

[ 363.524916] BTRFS critical (device dm-1): [cannot fix] corrupt leaf: root=1461 block=8947565723648 slot=1, bad key order
[ 363.526807] md3: [Self Heal] Retry sector [229802368] round [1/2] start: sh-sector [76600704], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.529030] md3: [Self Heal] Retry sector [229802376] round [1/2] start: sh-sector [76600712], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.529228] md3: [Self Heal] Retry sector [229802368] round [1/2] choose d-disk
[ 363.529230] md3: [Self Heal] Retry sector [229802368] round [1/2] finished: get same result, retry next round
[ 363.529232] md3: [Self Heal] Retry sector [229802368] round [2/2] start: sh-sector [76600704], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.529391] md3: [Self Heal] Retry sector [229802368] round [2/2] choose p-disk
[ 363.529394] md3: [Self Heal] Retry sector [229802368] round [2/2] finished: get same result, give up
[ 363.538846] md3: [Self Heal] Retry sector [229802384] round [1/2] start: sh-sector [76600720], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.539030] md3: [Self Heal] Retry sector [229802376] round [1/2] choose d-disk
[ 363.539032] md3: [Self Heal] Retry sector [229802376] round [1/2] finished: get same result, retry next round
[ 363.539035] md3: [Self Heal] Retry sector [229802376] round [2/2] start: sh-sector [76600712], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.539187] md3: [Self Heal] Retry sector [229802376] round [2/2] choose p-disk
[ 363.539190] md3: [Self Heal] Retry sector [229802376] round [2/2] finished: get same result, give up
[ 363.549362] md3: [Self Heal] Retry sector [229802392] round [1/2] start: sh-sector [76600728], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.549567] md3: [Self Heal] Retry sector [229802384] round [1/2] choose d-disk
[ 363.549570] md3: [Self Heal] Retry sector [229802384] round [1/2] finished: get same result, retry next round
[ 363.549572] md3: [Self Heal] Retry sector [229802384] round [2/2] start: sh-sector [76600720], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.549738] md3: [Self Heal] Retry sector [229802384] round [2/2] choose p-disk
[ 363.549741] md3: [Self Heal] Retry sector [229802384] round [2/2] finished: get same result, give up
[ 363.559460] md3: [Self Heal] Retry sector [229802392] round [1/2] choose d-disk
[ 363.560726] md3: [Self Heal] Retry sector [229802392] round [1/2] finished: get same result, retry next round
[ 363.562301] md3: [Self Heal] Retry sector [229802392] round [2/2] start: sh-sector [76600728], d-disk [3:sata3p5], p-disk [0:sata1p5], q-disk [-1: null]
[ 363.564761] md3: [Self Heal] Retry sector [229802392] round [2/2] choose p-disk
[ 363.566015] md3: [Self Heal] Retry sector [229802392] round [2/2] finished: get same result, give up

I spent two days trying to recover. Most of the advice is to salvage the files and rebuild the filesystem.

The likely cause is a bit flip in memory. I recently upgraded my RAM to 4 × 16 GB and just plugged it in without testing it. After a couple of days, my filesystem got corrupted.

List btrfs devices

btrfs fi show

Unmount volume and stop services in DSM

synostgvolume --unmount /volume2
# I forgot the correct command, but it should resemble something like --unmount-with-packages

This is supposed to stop services and unmount the volume, but it was not working for me.

Trying out btrfs check --repair

I tried btrfs check --repair as a last resort, but I was blocked by the following error and was not able to figure out how to get past it.

couldn't open RDWR because of unsupported option features (800000000000003).

Mounting DSM volumes in Ubuntu

apt-get update
apt-get install -y mdadm lvm2  # Install RAID (mdadm) and LVM tooling
mdadm -AsfR                    # Assemble all arrays found by scanning; force and run them
vgchange -ay                   # Activate volume group
cat /proc/mdstat               # List active RAID arrays
btrfs fi show                  # List btrfs devices
btrfs check /dev/mapper/vg1000-lv

I attempted to mount the volume in Ubuntu because I could not unmount it in DSM. I could not do anything aside from btrfs check because of an unknown feature error. I suspect DSM has custom code baked into its Btrfs build.

btrfs error screenshot

couldn't open RDWR because of unsupported option features (0x800000000000003)
ERROR: cannot open file system
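The feature bits in that error can be decoded. The bit names below follow the upstream btrfs incompat flags (my assumption: the unrecognized high bit is a Synology vendor extension, which would explain why stock btrfs-progs refuses to open the filesystem read-write):

```python
# Upstream btrfs incompat feature bits (subset, per btrfs-progs headers).
# Any set bit not in this table is unknown to stock tools.
UPSTREAM_INCOMPAT = {
    0: "MIXED_BACKREF",
    1: "DEFAULT_SUBVOL",
    2: "MIXED_GROUPS",
    3: "COMPRESS_LZO",
    4: "COMPRESS_ZSTD",
    5: "BIG_METADATA",
    6: "EXTENDED_IREF",
    7: "RAID56",
    8: "SKINNY_METADATA",
    9: "NO_HOLES",
}

def decode(flags: int):
    """Split a feature mask into known names and unknown bit positions."""
    known, unknown = [], []
    for bit in range(flags.bit_length()):
        if flags & (1 << bit):
            if bit in UPSTREAM_INCOMPAT:
                known.append(UPSTREAM_INCOMPAT[bit])
            else:
                unknown.append(bit)
    return known, unknown

known, unknown = decode(0x800000000000003)
print(known)    # → ['MIXED_BACKREF', 'DEFAULT_SUBVOL']
print(unknown)  # → [59]  (bit 59: not an upstream flag; likely vendor-specific)
```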

Summary

I decided to back up everything and rebuild my volume. It was originally built in July 2020 and has gone through a lot of changes, such as capacity increases from adding new disks. btrfs check reported a lot of errors, too.

It’s hard to keep using it while doubting whether the error will happen again.

The bad key order corruption likely came from a memory bit flip. I’ll run a memtest on the machine before doing anything else.

I’m cutting my losses and not spending more time on this issue. I learned a bit about Btrfs, which is good because I will keep using it, and I now have a better idea of what to check next time.

Writing this down so I have reference in the future.

Segfault on emulated NVMe as SSD Cache

/ Nas /

I had an idea to emulate an NVMe drive and use it as an SSD cache in Synology DSM. I got to the point where DSM tried to mount the cache, then QEMU segfaulted.

[ 1846.792660] kvm[24147]: segfault at 0 ip 000055bb2d97fb32 sp 00007fc7a62a2fb0 error 4 in qemu-system-x86_64[55bb2d857000+613000] likely on CPU 1 (core 1, socket 0)
[ 1846.793173] Code: e1 27 54 00 e8 6f 7b ed ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 53 48 8b 77 30 48 89 fb 44 8b 43 38 48 8b 06 8b 7e 60 <48> 8b 08 45 85 c0 78 46 80 7b 4c 00 74 58 8b 43 48 83 c0 01 3d 00
[ 1846.801440] zd64: p1 p2 p3

mergerfs and SnapRAID

/ Nas /

Another setup to try. I have a bunch of disks of various sizes, and I don’t know the best setup for them.

mergerfs combines disks with different filesystems and presents them as a single mount. Files stay flat on the underlying disks, which can still be mounted individually.

SnapRAID can be used to set up a parity drive to allow recovery if one drive fails.

The idea is to combine the two technologies: mergerfs to pool the disks, plus SnapRAID for data redundancy.
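As a rough sketch of how the two pieces could fit together (the mount paths, options, and parity location are assumptions, not a tested config):

```
# fstab-style mergerfs line: pool two data disks into one mount
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0

# snapraid.conf fragment: one parity disk protects the data disks
parity  /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

Parity would then be updated on a schedule with snapraid sync and verified with snapraid scrub; note that the parity disk must be at least as large as the largest data disk.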

Something to check out.

https://selfhostedhome.com/combining-different-sized-drives-with-mergerfs-and-snapraid/

https://perfectmediaserver.com/tech-stack/snapraid/

Showing 4 of 7 posts