Linux Fun

I’ll probably break these out, but for now this is the title and location.

We’re finally hiring a Linux administrator (13 years of experience) to replace me and my “hobby”. The good thing is he’s way better than I can ever hope to be. The bad thing is he doesn’t start for another couple of weeks and I have a lot of Linux-y stuff going on.

So here are some helpful commands:
What Linux Distribution Are You Using?
cat /etc/*-release
OR
lsb_release -a
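On newer systemd-based distributions there is usually also an /etc/os-release file; pulling just the friendly name might look like:
grep PRETTY_NAME /etc/os-release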

What Kernel Version Am I Running?
uname -a
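If you only want the kernel release string (handy in scripts), narrow it down with:
uname -r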

What Processors Am I Using?
cat /proc/cpuinfo
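If the full /proc/cpuinfo dump is too noisy, there’s a shorter summary and a quick core count (lscpu ships with util-linux, so it should be available on most distributions):
lscpu
grep -c ^processor /proc/cpuinfo (count of logical CPUs)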

What Hardware Specs Do I Have? (Motherboard Model, BIOS Revision, etc.)
dmidecode
OR, if installed
hwinfo
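dmidecode can also be pointed at just the piece you care about, for example:
dmidecode -t bios
dmidecode -t baseboard
dmidecode -t memory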

How Do I Set Up No-Password-Needed SSH?
ssh-keygen
(press Enter twice to accept the default key location and an empty passphrase)
ssh-copy-id username@systemname
(type the remote user’s password when prompted)
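Afterwards, it’s worth verifying the key actually took (and on reasonably current OpenSSH you can generate a stronger key with ssh-keygen -t ed25519 instead of the default):
ssh username@systemname
(should log you in without a password prompt)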

ZFS Replace A Drive
zpool offline poolname /dev/daX
zpool replace poolname /dev/daX /dev/daY
zpool status poolname
After the rebuild, ZFS normally detaches the old device on its own once the resilver completes; if it still shows up in zpool status, remove it manually:
zpool detach poolname /dev/daX
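If the replacement disk comes up at the same device node as the failed one, zpool replace can also be run with a single device argument (the old and new device are the same path in that case):
zpool replace poolname /dev/daX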

LVM – Create Physical Volume, Volume Group, and Logical Volume
pvcreate /dev/sdb1
vgcreate vgpool /dev/sdb1
lvcreate -L 3G -n lvstuff vgpool
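The new logical volume still needs a filesystem and a mount point before it’s useful; a minimal sketch, assuming ext4 and a made-up /mnt/stuff mount point:
mkfs.ext4 /dev/vgpool/lvstuff
mkdir -p /mnt/stuff
mount /dev/vgpool/lvstuff /mnt/stuff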

LVM – Display Current Status
pvdisplay
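pvdisplay only covers the physical volumes; vgdisplay and lvdisplay do the same for volume groups and logical volumes, and the terse one-line summaries are handy too:
vgdisplay
lvdisplay
pvs
vgs
lvs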

LVM – Add A New Disk
fdisk /dev/sdX
n, p, 1, t, 8e, w (new partition, primary, partition 1, set the type to 8e "Linux LVM", write)
pvcreate /dev/sdX1
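If you’d rather not walk through fdisk interactively, parted can script the same partitioning; a sketch assuming the new disk is /dev/sdc and an MBR label is acceptable:
parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 100% set 1 lvm on
pvcreate /dev/sdc1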

LVM – Extend The Volume Group To The New Disk
vgextend vgpool /dev/sdX1
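A quick sanity check that the volume group actually picked up the new space:
vgdisplay vgpool
OR
vgs vgpool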

LVM – Resize The File System (resize2fs is needed whether you grow or shrink)
lvextend -L+8G /dev/vgpool/lvstuff (grow by 8 GB)
lvextend -L50G /dev/vgpool/lvstuff (or grow to a total of 50 GB)
resize2fs /dev/vgpool/lvstuff
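Two related notes, both assuming an ext2/3/4 filesystem: newer LVM can grow the LV and the filesystem in one step with the -r/--resizefs flag, and shrinking has to go in the opposite order (filesystem first, then the logical volume) on an unmounted volume. The 40G target and /mnt/stuff mount point below are just example values:
lvextend -r -L+8G /dev/vgpool/lvstuff (grow LV and filesystem together)
umount /mnt/stuff
e2fsck -f /dev/vgpool/lvstuff
resize2fs /dev/vgpool/lvstuff 40G
lvreduce -L 40G /dev/vgpool/lvstuff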

ZFS Zpool Replace Failed Drive

So I am using ZFS on my NAS4Free installation (ZFS pool version 5000) and I had a failed drive. Background: 20x 3 TB SATA drives arranged as mirrored pairs striped together in one pool, 2x 3 TB hot spares (not sure if these actually work; maybe more information later), and a ZIL on 120 GB SSDs hanging off the RAID card.

One of the drives in a mirror failed. I had a technician replace the drive, but forgot to offline it before it was pulled. Since this was my first time in a pure ZFS environment (usually the RAID controller did the heavy lifting and ZFS was just sitting on top), I detached the drive, which caused all sorts of issues.

So then I had:

# zpool status -v
        NAME        STATE     READ WRITE CKSUM
        zfs         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da8     ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da10    ONLINE       0     0     0
          mirror-4  ONLINE       0     0     0
            da11    ONLINE       0     0     0
            da12    ONLINE       0     0     0
          da14      ONLINE       0     0     0
          mirror-6  ONLINE       0     0     0
            da15    ONLINE       0     0     0
            da16    ONLINE       0     0     0
          mirror-7  ONLINE       0     0     0
            da17    ONLINE       0     0     0
            da18    ONLINE       0     0     0
          mirror-8  ONLINE       0     0     0
            da19    ONLINE       0     0     0
            da20    ONLINE       0     0     0
        logs
          da1p1     ONLINE       0     0     0
        spares
          da21      AVAIL
          da22      AVAIL

We can see mirror-5 is gone and da14 is now sitting in the pool as a single striped disk with no redundancy. Awesome.

After adding the new drive and formatting it for ZFS via the GUI, I then ran:
zpool attach zfs /dev/da14 /dev/da13
(generically: zpool attach YOURPOOLNAME /dev/YOURWORKINGDEVICE /dev/YOURNEWDEVICE)

# zpool status
  pool: zfs
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Aug 19 13:51:48 2014
        3.23G scanned out of 1.41T at 276M/s, 1h29m to go
        322M resilvered, 0.22% done

It actually finished in 40 minutes with no errors. Resilvering is a type of scrub, so a scrub cannot be running at the same time.

Then, just to be sure, I ran the scrub to verify data integrity:
zpool scrub zfs
zpool scrub YOURPOOLNAME

To stop a scrub:
zpool scrub -s YOURPOOLNAME

Verify the status with (a scrub checks that all known data is valid):
zpool status -v

# zpool status
  pool: zfs
 state: ONLINE
  scan: scrub in progress since Tue Aug 19 17:06:45 2014
        22.8G scanned out of 1.41T at 614M/s, 0h39m to go
        0 repaired, 1.57% done