Category Archives: Linux

The Linux Category actually encompasses *BSD, RH, Fedora, Ubuntu, and the like.

BTRFS Snapshot Replication

This post takes a slightly different approach to BTRFS and replication. The “normal” way is to use snapshots for backup purposes – or, if you use a snapshot to clone data, you end up copying the data from the snapshot to the new location. My needs are a little different: I can serve the data (read-only) from the snapshot directly.

In my scenario I have a source-of-truth server with 2.3TB of data and over 800,000 files that sees daily changes (mostly additions). The change rate is only moderate – about 500 files/1GB per day – but the changes occur throughout the business day rather than at a single point in time. The original spec was to rsync changes to remote sites daily for backups and distribution points, but this was quickly changed to “can we have this run every hour?” by the consumers of the data. Client systems connect to the closest server available and rsync data based on their XML payloads (not including that configuration in this post as it is out of scope).

Unfortunately rsync is terribly slow when copying from a single directory with many files. It gets even worse when the copy travels over SSH to geographically distant sites; the latency at the best site is 40ms and at the worst is over 150ms. Because this is critical data, it was deemed necessary to checksum (MD5) each file to guarantee the distribution points are identical to the source-of-truth server. Even though only 500 files/1GB change per day, a full checksummed run still took 12-20 hours at our worst site.
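
For reference, a sketch of the kind of checksummed rsync run this replaced (the paths and host are placeholders, not the production job):

rsync -avc --delete -e ssh /srv/data/ user@remote-site:/srv/data/

The -c flag forces rsync to checksum every file rather than trust size/mtime, which is exactly what made the runs so slow over high-latency links.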

Enter BTRFS. Yes, I know that ZFS offers this as well, but BTRFS is slightly more native on Linux than ZFS, and I only need the checksum (scrub) and snapshot abilities for my filesystems since the data is neither compressible nor deduplication friendly.
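
For completeness, a minimal sketch of the two features I'm leaning on (the device and mountpoint are examples, not my actual layout):

mkfs.btrfs /dev/sdb1
mount /dev/sdb1 /mnt/btrfs
btrfs scrub start /mnt/btrfs     # verifies the built-in checksums across all data and metadata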

Snapshot

I actually downloaded some BTRFS-friendly scripts from GitHub: https://github.com/nachoparker/btrfs-snp

chmod +x btrfs-snp
mv btrfs-snp /usr/local/sbin

btrfs-snp /mnt/btrfs/ hourly 2 3600

Sync to Remote Server

Once again, a BTRFS-friendly script from GitHub: https://github.com/nachoparker/btrfs-sync

chmod +x btrfs-sync
mv btrfs-sync /usr/local/sbin

btrfs-sync -d -v /mnt/btrfs/.snapshots/ root@10.10.3.21:/mnt/btrfs/snapshot/
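
btrfs-sync runs btrfs receive on the far end over SSH, so the destination needs passwordless key-based login first; a minimal sketch, assuming root-to-root SSH is acceptable in your environment:

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@10.10.3.21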

My Script

In my case I then needed the data available for clients connecting via rsync, which means it had to be in a specific spot already advertised to those clients. Enter symlinks! Here is my full script.

#ping to make sure the remote device responds before continuing
ping -c 3 10.130.20.200 || exit 1

#create snapshot tagged 'hourly', keep at most 2 snapshots, skip if the last one is under 3600 seconds old
#btrfs-snp /mnt/btrfs/ hourly 2 3600

#TESTING PURPOSES: same as above, but with a 600 second minimum age
btrfs-snp /mnt/btrfs/ hourly 2 600

#send the snapshot to the other system (requires authkeys ssh setup)
btrfs-sync -d -v /mnt/btrfs/.snapshots/ root@10.10.3.21:/mnt/btrfs/snapshot/

#now run the following on the remote system - needs testing
## as this may cause rsync to fail if a player is currently loading
ssh root@10.10.3.21 'latestdir=$(ls -rt /mnt/btrfs/snapshot | tail -1) && rm /mnt/btrfs/data && ln -s /mnt/btrfs/snapshot/$latestdir/ /mnt/btrfs/data'
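
To meet the “every hour” request I schedule the script from cron; one way to do it (the script name, path, and log file here are placeholders I chose, not part of the original setup):

cat > /etc/cron.d/btrfs-replicate <<EOF
0 * * * * root /usr/local/sbin/btrfs-replicate.sh >> /var/log/btrfs-replicate.log 2>&1
EOF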

Other Scripts

In case they're ever removed from GitHub, I figured I'd put them here:

#!/bin/bash

#
# Simple script that synchronizes BTRFS snapshots locally or through SSH.
# Features compression, retention policy and automatic incremental sync
#
# Usage:
#  btrfs-sync [options] <src> [<src>...] [[user@]host:]<dir>
#
#  -k|--keep NUM     keep only last <NUM> sync'ed snapshots
#  -d|--delete       delete snapshots in <dst> that don't exist in <src>
#  -z|--xz           use xz     compression. Saves bandwidth, but uses one CPU
#  -Z|--pbzip2       use pbzip2 compression. Saves bandwidth, but uses all CPUs
#  -q|--quiet        don't display progress
#  -v|--verbose      display more information
#  -h|--help         show usage
#
# <src> can either be a single snapshot, or a folder containing snapshots
# <user> requires privileged permissions at <host> for the 'btrfs' command
#
# Cron example: daily synchronization over the internet, keep only last 50
#
# cat > /etc/cron.daily/btrfs-sync <<EOF
# #!/bin/bash
# /usr/local/sbin/btrfs-sync -q -k50 -z /home user@host:/path/to/snaps
# EOF
# chmod +x /etc/cron.daily/btrfs-sync
#
# Copyleft 2018 by Ignacio Nunez Hernanz <nacho _a_t_ ownyourbits _d_o_t_ com>
# GPL licensed (see end of file) * Use at your own risk!
#
# More at https://ownyourbits.com
#

set -e -o pipefail

# help
print_usage() {
  echo "Usage:
  $BIN [options] [[user@]host:]<src> [<src>...] [[user@]host:]<dir>

  -k|--keep NUM     keep only last <NUM> sync'ed snapshots
  -d|--delete       delete snapshots in <dst> that don't exist in <src>
  -z|--xz           use xz     compression. Saves bandwidth, but uses one CPU
  -Z|--pbzip2       use pbzip2 compression. Saves bandwidth, but uses all CPUs
  -p|--port         SSH port. Default 22
  -q|--quiet        don't display progress
  -v|--verbose      display more information
  -h|--help         show usage

<src> can either be a single snapshot, or a folder containing snapshots
<user> requires privileged permissions at <host> for the 'btrfs' command

Cron example: daily synchronization over the internet, keep only last 50

cat > /etc/cron.daily/btrfs-sync <<EOF
#!/bin/bash
/usr/local/sbin/btrfs-sync -q -k50 -z /home user@host:/path/to/snaps
EOF
chmod +x /etc/cron.daily/btrfs-sync
"
}

echov() { if [[ "$VERBOSE" == 1 ]]; then echo "$@"; fi }

#----------------------------------------------------------------------------------------------------------

# preliminary checks
BIN="${0##*/}"
[[ $# -lt 2      ]] && { print_usage                                ; exit 1; }
[[ ${EUID} -ne 0 ]] && { echo "Must be run as root. Try 'sudo $BIN'"; exit 1; }

# parse arguments
KEEP=0
PORT=22
ZIP=cat PIZ=cat
SILENT=">/dev/null"

OPTS=$( getopt -o hqzZk:p:dv -l quiet -l help -l xz -l pbzip2 -l keep: -l port: -l delete -l verbose -- "$@" 2>/dev/null )
[[ $? -ne 0 ]] && { echo "error parsing arguments"; exit 1; }
eval set -- "$OPTS"

while true; do
  case "$1" in
    -h|--help   ) print_usage; exit  0 ;;
    -q|--quiet  ) QUIET=1    ; shift 1 ;;
    -d|--delete ) DELETE=1   ; shift 1 ;;
    -k|--keep   ) KEEP=$2    ; shift 2 ;;
    -p|--port   ) PORT=$2    ; shift 2 ;;
    -z|--xz     ) ZIP=xz     PIZ=( xz     -d ); shift 1 ;;
    -Z|--pbzip2 ) ZIP=pbzip2 PIZ=( pbzip2 -d ); shift 1 ;;
    -v|--verbose) SILENT=""  VERBOSE=1        ; shift 1 ;;
    --)                shift;  break   ;;
  esac
done

SRC=( "${@:1:$#-1}" )
DST="${@: -1}"

# detect remote dst argument
[[ "$SRC" =~ : ]] && {
  NET_SRC="$( sed 's|:.*||' <<<"$SRC" )"
  SRC="$( sed 's|.*:||' <<<"$SRC" )"
  SSH_SRC=( ssh -p "$PORT" -o ServerAliveInterval=5 -o ConnectTimeout=1 -o BatchMode=yes "$NET_SRC" )
}

[[ "$SSH_SRC" != "" ]] && SRC_CMD=( ${SSH_SRC[@]} ) || SRC_CMD=( eval )
${SRC_CMD[@]} test -x "$SRC" &>/dev/null || {
  [[ "$SSH_SRC" != "" ]] && echo "SSH access error to $NET_SRC. Do you have passwordless login setup, and adequate permissions for $SRC?"
  [[ "$SSH_SRC" == "" ]] && echo "Access error. Do you have adequate permissions for $SRC?"
}

# detect remote dst argument
[[ "$DST" =~ : ]] && {
  NET="$( sed 's|:.*||' <<<"$DST" )"
  DST="$( sed 's|.*:||' <<<"$DST" )"
  SSH=( ssh -p "$PORT" -o ServerAliveInterval=5 -o ConnectTimeout=1 -o BatchMode=yes "$NET" )
}
[[ "$SSH" != "" ]] && DST_CMD=( ${SSH[@]} ) || DST_CMD=( eval )
${DST_CMD[@]} test -x "$DST" &>/dev/null || {
  [[ "$SSH" != "" ]] && echo "SSH access error to $NET. Do you have passwordless login setup, and adequate permissions for $DST?"
  [[ "$SSH" == "" ]] && echo "Access error. Do you have adequate permissions for $DST?"
  exit 1
}

#----------------------------------------------------------------------------------------------------------

# more checks

## don't overlap
pgrep -F  /run/btrfs-sync.pid  &>/dev/null && { echo "$BIN is already running"; exit 1; }
echo $$ > /run/btrfs-sync.pid

${DST_CMD[@]} "pgrep -f btrfs\ receive &>/dev/null" && { echo "btrfs-sync already running at destination"; exit 1; }

## src checks
echov "* Check source"
while read entry; do SRCS+=( "$entry" ); done < <(
  "${SRC_CMD[@]}" "
    for s in "${SRC[@]}"; do
      src=\"\$(cd \"\$s\" &>/dev/null && pwd)\" || { echo \"\$s not found\"; exit 1; } #abspath
      btrfs subvolume show \"\$src\" &>/dev/null && echo \"0|\$src\" || \
      for dir in \$( ls -drt \"\$src\"/* 2>/dev/null ); do
        DATE=\"\$( btrfs su sh \"\$dir\" 2>/dev/null | grep \"Creation time:\" | awk '{ print \$3, \$4 }' )\" \
        || continue   # not a subvolume
        SECS=\$( date -d \"\$DATE\" +\"%s\" )
        echo \"\$SECS|\$dir\"
      done
    done | sort -V | sed 's=.*|=='
  "
)
[[ ${#SRCS[@]} -eq 0 ]] && { echo "no BTRFS subvolumes found"; exit 1; }

## check pbzip2
[[ "$ZIP" == "pbzip2" ]] && {
    "${SRC_CMD[@]}" type pbzip2 &>/dev/null && \
    "${DST_CMD[@]}" type pbzip2 &>/dev/null || {
      echo "INFO: 'pbzip2' not installed on both ends, fallback to 'xz'"
      ZIP=xz PIZ=unxz
  }
}

## use 'pv' command if available
PV=( pv -F"time elapsed [%t] | rate %r | total size [%b]" )
[[ "$QUIET" == "1" ]] && PV=( cat ) || type pv &>/dev/null || {
  echo "INFO: install the 'pv' package in order to get a progress indicator"
  PV=( cat )
}

#----------------------------------------------------------------------------------------------------------

# sync snapshots

get_dst_snapshots() {      # sets DSTS DST_UUIDS
  local DST="$1"
  unset DSTS DST_UUIDS
  while read entry; do
    DST_UUIDS+=( "$( sed 's=|.*==' <<<"$entry" )" )
    DSTS+=(      "$( sed 's=.*|==' <<<"$entry" )" )
  done < <(
    "${DST_CMD[@]}" "
      DSTS=( \$( ls -d \"$DST\"/* 2>/dev/null ) )
      for dst in \${DSTS[@]}; do
        UUID=\$( sudo btrfs su sh \"\$dst\" 2>/dev/null | grep 'Received UUID' | awk '{ print \$3 }' )
        [[ \"\$UUID\" == \"-\" ]] || [[ \"\$UUID\" == \"\" ]] && continue
        echo \"\$UUID|\$dst\"
      done"
  )
}

choose_seed() {      # sets SEED
  local SRC="$1"

  SEED="$SEED_NEXT"
  if [[ "$SEED" == "" ]]; then
    # try to get most recent src snapshot that exists in dst to use as a seed
    local RXID_CALCULATED=0
    declare -A PATH_RXID DATE_RXID SHOWP RXIDP DATEP
    local LIST="$( "${SRC_CMD[@]}" sudo btrfs subvolume list -su "$SRC" )"
    SEED=$(
      for id in "${DST_UUIDS[@]}"; do
        # try to match by UUID
        local PATH_=$( awk "{ if ( \$14 == \"$id\" ) print \$16       }" <<<"$LIST" )
        local DATE=$(  awk "{ if ( \$14 == \"$id\" ) print \$11, \$12 }" <<<"$LIST" )

        # try to match by received UUID, only if necessary
        [[ "$PATH_" == "" ]] && {
          [[ "$RXID_CALCULATED" == "0" ]] && { # create table during the first iteration if needed
            local PATHS=( $( "${SRC_CMD[@]}" sudo btrfs su list -u "$SRC" | awk '{ print $11 }' ) )
            for p in "${PATHS[@]}"; do
              SHOWP="$( "${SRC_CMD[@]}" sudo btrfs su sh "$( dirname "$SRC" )/$( basename "$p" )" 2>/dev/null )"
              RXIDP="$( grep 'Received UUID' <<<"$SHOWP" | awk '{ print $3     }' )"
              DATEP="$( grep 'Creation time' <<<"$SHOWP" | awk '{ print $3, $4 }' )"
              [[ "$RXIDP" == "" ]] && continue
              PATH_RXID["$RXIDP"]="$p"
              DATE_RXID["$RXIDP"]="$DATEP"
            done
            RXID_CALCULATED=1
          }
          PATH_="${PATH_RXID["$id"]}"
           DATE="${DATE_RXID["$id"]}"
        }

        [[ "$PATH_" == "" ]] || [[ "$PATH_" == "$( basename "$SRC" )" ]] && continue

        local SECS=$( date -d "$DATE" +"%s" )
        echo "$SECS|$PATH_"
      done | sort -V | tail -1 | cut -f2 -d'|'
    )
  fi
}

exists_at_dst() {
  local SHOW="$( "${SRC_CMD[@]}" sudo btrfs subvolume show "$SRC" )"

  local SRC_UUID="$( grep 'UUID:' <<<"$SHOW" | head -1 | awk '{ print $2 }' )"
  grep -q "$SRC_UUID" <<<"${DST_UUIDS[@]}" && return 0;

  local SRC_RXID="$( grep 'Received UUID' <<<"$SHOW"   | awk '{ print $3 }' )"
  grep -q "^-$"       <<<"$SRC_RXID"       && return 1;
  grep -q "$SRC_RXID" <<<"${DST_UUIDS[@]}" && return 0;

  return 1
}

## sync incrementally
sync_snapshot() {
  local SRC="$1"
  "${SRC_CMD[@]}" test -d "$SRC" || return

  exists_at_dst "$SRC" && { echov "* Skip existing '$SRC'"; return 0; }

  choose_seed "$SRC"  # sets SEED

  # incremental sync argument
  [[ "$SEED" != "" ]] && {
    local SEED_PATH="$( dirname "$SRC" )/$( basename $SEED )"
    "${SRC_CMD[@]}" test -d "$SEED_PATH" &&
      local SEED_ARG=( -p "$SEED_PATH" ) || \
      echo "INFO: couldn't find $SEED_PATH. Non-incremental mode"
  }

  # do it
  echo -n "* Synchronizing '$src'"
  [[ "$SEED_ARG" != "" ]] && echov -n " using seed '$SEED'"
  echo "..."

  "${SRC_CMD[@]}" \
  sudo btrfs send -q ${SEED_ARG[@]} "$SRC" \
    | "$ZIP" \
    | "${PV[@]}" \
    | "${DST_CMD[@]}" "${PIZ[@]} | sudo btrfs receive \"$DST\" 2>&1 |(grep -v -e'^At subvol ' -e'^At snapshot '||true)" \
    || {
      "${DST_CMD[@]}" sudo btrfs subvolume delete "$DST"/"$( basename "$SRC" )" 2>/dev/null
      return 1;
    }

  # update DST list
  DSTS+=("$DST/$( basename "$SRC" )")
  DST_UUIDS+=("$SRC_UUID")
  SEED_NEXT="$SRC"
}

#----------------------------------------------------------------------------------------------------------

# sync all snapshots found in src
echov "* Check destination"
get_dst_snapshots "$DST" # sets DSTS DST_UUIDS
for src in "${SRCS[@]}"; do
  sync_snapshot "$src" && RET=0 || RET=1
  for i in $(seq 1 2); do
    [[ "$RET" != "1" ]] && break
    echo "* Retrying '$src'..."
    sync_snapshot "$src" && RET=0 || RET=1
  done
  [[ "$RET" == "1" ]] && { echo "Abort"; exit 1; }
done

#----------------------------------------------------------------------------------------------------------

# retention policy
[[ "$KEEP" != 0 ]] && \
  [[ ${#DSTS[@]} -gt $KEEP ]] && \
  echov "* Pruning old snapshots..." && \
  for (( i=0; i < $(( ${#DSTS[@]} - KEEP )); i++ )); do
    PRUNE_LIST+=( "${DSTS[$i]}" )
  done && \
  ${DST_CMD[@]} sudo btrfs subvolume delete "${PRUNE_LIST[@]}" $SILENT

# delete flag
[[ "$DELETE" == 1 ]] && \
  for dst in "${DSTS[@]}"; do
    FOUND=0
    for src in "${SRCS[@]}"; do
      [[ "$( basename $src )" == "$( basename $dst )" ]] && { FOUND=1; break; }
    done
    [[ "$FOUND" == 0 ]] && DEL_LIST+=( "$dst" )
  done
[[ "$DEL_LIST" != "" ]] && \
  echov "* Deleting non existent snapshots..." && \
  ${DST_CMD[@]} sudo btrfs subvolume delete "${DEL_LIST[@]}" $SILENT

exit 0

# License
#
# This script is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This script is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this script; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place, Suite 330,


#!/bin/bash

#
# Script that creates BTRFS snapshots, manually or from cron
#
# Usage:
#          sudo btrfs-snp  <dir> (<tag>) (<limit>) (<seconds>) (<destdir>)
#
# Copyleft 2017 by Ignacio Nunez Hernanz <nacho _a_t_ ownyourbits _d_o_t_ com>
# GPL licensed (see end of file) * Use at your own risk!
#
# Based on btrfs-snap by Birger Monsen
#
# More at https://ownyourbits.com
#

function btrfs-snp()
{
  local   BIN="${0##*/}"
  local   DIR="${1}"
  local   TAG="${2:-snapshot}"
  local LIMIT="${3:-0}"
  local  TIME="${4:-0}"
  local   DST="${5:-.snapshots}"
  local MARGIN=15 # allow for some seconds of inaccuracy for cron / systemd timers

  ## usage
  [[ "$*" == "" ]] || [[ "$1" == "-h" ]] || [[ "$1" == "--help" ]] && {
    echo "Usage: $BIN <dir> (<tag>) (<limit>) (<seconds>) (<destdir>)

  dir     │ create snapshot of <dir>
  tag     │ name the snapshot <tag>_<timestamp>
  limit   │ keep <limit> snapshots with this tag. 0 to disable
  seconds │ don't create snapshots before <seconds> have passed from last with this tag. 0 to disable
  destdir │ store snapshot in <destdir>, path absolute or relative to <dir>

Cron example: Hourly snapshot for one day, daily for one week, weekly for one month, and monthly for one year.

cat > /etc/cron.hourly/$BIN <<EOF
#!/bin/bash
/usr/local/sbin/$BIN /home hourly  24 3600
/usr/local/sbin/$BIN /home daily    7 86400
/usr/local/sbin/$BIN /home weekly   4 604800
/usr/local/sbin/$BIN /     weekly   4 604800
/usr/local/sbin/$BIN /home monthly 12 2592000
EOF
chmod +x /etc/cron.hourly/$BIN"
    return 0
  }

  ## checks
  local SNAPSHOT=${TAG}_$( date +%F_%H%M%S )

  [[ ${EUID} -ne 0  ]] && { echo "Must be run as root. Try 'sudo $BIN'"; return 1; }
  [[ -d "$SNAPSHOT" ]] && { echo "$SNAPSHOT already exists"            ; return 1; }

  mount -t btrfs | cut -d' ' -f3 | grep -q "^${DIR}$" || {
    btrfs subvolume show "$DIR" &>/dev/null || {
      echo "$DIR is not a BTRFS mountpoint or snapshot"
      return 1
    }
  }

  [[ "$DST" = /* ]] || DST="$DIR/$DST"
  mkdir -p "$DST"
  local SNAPS=( $( ls -d "$DST/${TAG}_"* 2>/dev/null ) )

  ## check time of the last snapshot for this tag
  [[ "$TIME" != 0 ]] && [[ "${#SNAPS[@]}" != 0 ]] && {
    local LATEST=$( sed -r "s|.*_(.*_.*)|\\1|;s|_([0-9]{2})([0-9]{2})([0-9]{2})| \\1:\\2:\\3|" <<< "${SNAPS[-1]}" )
    LATEST=$( date +%s -d "$LATEST" ) || return 1

    [[ $(( LATEST + TIME )) -gt $(( $( date +%s ) + MARGIN )) ]] && { echo "No new snapshot needed for $TAG in $DIR"; return 0; }
  }

  ## do it
  btrfs subvolume snapshot -r "$DIR" "$DST/$SNAPSHOT" || return 1

  ## prune older backups
  [[ "$LIMIT" != 0 ]] && \
  [[ ${#SNAPS[@]} -ge $LIMIT ]] && \
    echo "Pruning old snapshots..." && \
    for (( i=0; i <= $(( ${#SNAPS[@]} - LIMIT )); i++ )); do
      btrfs subvolume delete "${SNAPS[$i]}"
    done

  echo "snapshot $SNAPSHOT generated"
}

btrfs-snp "$@"

# License
#
# This script is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This script is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this script; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place, Suite 330,
# Boston, MA  02111-1307  USA

List Ubuntu Version

When logging into a system, the MOTD generally shows you the current version. This server had its MOTD changed, so I needed another way to grab the pertinent information.

lsb_release -d
cat /etc/issue

Or on newer systems (16.04 or later)

cat /etc/os-release
hostnamectl
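
If you only want the release string itself (handy inside a script), sourcing os-release works too; a quick sketch:

. /etc/os-release && echo "$PRETTY_NAME"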

Joan Room Booking

I wanted to add some room booking assistants to our conference rooms. My last place used Evoko units – which worked quite well – but they 1) required PoE Ethernet drops and 2) cost $1,100 each.

So after reading about Joan I decided to get one to try out. They offer both SaaS and on-prem hosting options; I opted for on-prem because it is free and I haven't set up a Linux server in a while.

They don't have installation steps for the software running on the host, but they do offer an OVF just for our needs (currently running VMware 6.7 with vCenter). Great!

I followed along with the getjoan install site (https://support.getjoan.com/hc/en-us/articles/115003534485-On-premises-hosting) and downloaded the OVF tgz file.
Already I was a bit upset – why is this compressed as a tgz? I should note there are no space savings realized by doing this; we're still at just over 2GB for the entire file, but now I have three copies of it.
I uncompressed it and got a .tar file. Inside the .tar are the OVF and VMDK files I need, so I simply renamed the .tar to .ova and went about importing it into VMware.

Selected my files for import and during validation it FAILED!
Issues detected with selected template. Details: – 60:7:VALUE_ILLEGAL: Value ”lsilogic” of ResourceSubType element not found in []. – 69:7:VALUE_ILLEGAL: Value ”lsilogic” of ResourceSubType element not found in []. – 78:7:VALUE_ILLEGAL: Value ”3” of Parent element does not refer to a ref of type DiskControllerReference.

Super helpful. So I untar'd the file to edit the .ovf manifest in my favorite text editor. I changed it from lsilogic to lsilogicsas and re-ran. Received a different error, but still no dice. Some Google searches later, I attempted to bypass vSphere completely and import directly onto one of my hosts.

http://host/ui and a login later, I had the OVA imported successfully! Yay!

Booted it up and found the VirtualBox guest tools already installed. This delays the startup of the machine while it waits on:
“A start job is running for the raise network”
Five. Minutes. Later.

I'm on VMware, so this VirtualBox bridged network won't ever work!

Well, let's install VMware Tools so that I can at least stop using the console and SSH in like a normal person: Failed!

Need to add the cdrom drive to the machine, but I can’t do that while it’s running. Stop the machine, add the CDROM drive, and start it back up.

Another. Five. Minutes. FML!

Fix the waiting game for Networking:
sudo mkdir -p /etc/systemd/system/networking.service.d/
sudo bash -c 'echo -e "[Service]\nTimeoutStartSec=20sec" > /etc/systemd/system/networking.service.d/timeout.conf'
sudo systemctl daemon-reload

Stop the VirtualBox guest additions from starting up and failing:
sudo systemctl disable vboxadd.service

Now to install VMware Tools:
Using vCenter, select Install VMware Tools on the running VM
Then, from the command line:
sudo mkdir /mnt/cdrom
sudo mount /dev/cdrom /mnt/cdrom
tar xzvf /mnt/cdrom/VMwareTools-* -C /tmp
cd /tmp/vmware-tools-distrib/
sudo ./vmware-install.pl
Follow along with the wizard to install
Reboot
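
After the reboot, a quick sanity check I like to run (my own addition, assuming the installer finished cleanly):

vmware-toolbox-cmd -v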

I’ll potentially update this when/if I actually get into the configuration of Joan.

Reset WordPress Password

Taking over the IT department when the previous IT regime had zero plans on how to integrate the series of businesses they had acquired over the past several years makes for some fun times. I have 4 different GoDaddy accounts, a couple DH accounts, and even one from a German company I had never heard of. And I had to fight, beg, talk, email, reverse engineer, and guess my way into several logins. Something something “no documentation”.

That being said, I've also had the responsibility of migrating and managing some of our WordPress sites and was SOL when it came to logins. Luckily GD, DH, and even the German cPanel host all allowed some sort of MySQL access – whether that was shell access or phpMyAdmin – so I could “easily” reset the credentials.

On Dreamhost using SSH:
mysql -h MYSQL.DOMAINNAME.TLD -u MYDBUSERNAMEFROMTHEPANEL -p
Enter your DB User password
show databases;
use DATABASENAMEHERE;
show tables;
Look for one with “users” at the end (eg wp_users)
List the Users Table along with the ID you’ll need later (First Column)
select id, user_login, user_pass from NAMEOFUSERTABLE;
update NAMEOFUSERTABLE set user_pass = MD5('YOURNEWPASSWORDHERE') where ID = NUMBERFOUNDABOVE;
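
If you'd rather do it as a single non-interactive command instead of the interactive session above, something like this should work (the host, user, database, table, and ID are the same placeholders as in the steps):

mysql -h MYSQL.DOMAINNAME.TLD -u MYDBUSERNAMEFROMTHEPANEL -p DATABASENAMEHERE -e "UPDATE NAMEOFUSERTABLE SET user_pass = MD5('YOURNEWPASSWORDHERE') WHERE ID = NUMBERFOUNDABOVE;"

WordPress will transparently re-hash the MD5 value into its stronger native format the next time that user logs in.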

Through PHPMYADMIN
Open PHPMyAdmin and click on the WP database
Find the “Users” table (eg wp_users)
Click on Browse
Click on edit by the user for which you desire to change the password
Where it says “user_pass” change the function drop down to MD5 and then type in a plain text password.
Hit save/submit

Unifi Linux and Windows Certificates

I thought I knew it all about certificates, but then I was humbled once again.

I needed to “secure” an internal Linux webserver using our Windows 2016 CA to remove the “this is an unverified site” messages that liked to pop up when browsing the various sites.

The process I had used in the past was to create the CSR using openssl, copy the CSR text, open up my trusty http://certserverhere/certsrv/ site, and go through the process of requesting a web server certificate. Then, when finished, just download the certificate and the CA + chain, import them on Linux, and profit.
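
For anyone following along with that old method, the CSR generation looked roughly like this (key size, file names, and subject are examples):

openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr -subj "/C=US/ST=State/L=Town/O=CompanyName/CN=server.domain.tld"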

Well, the new versions of the templates (V3 and V4 specifically) no longer allowed the web enrollment using my trusty http://certserverhere/certsrv site. Booo.

I could probably get it to work by just requesting my own certificates using the MMC, but I’m still leaning towards the whole CLI phase of life. I should also note that I find the performance and management of Unifi on Linux to be significantly better and easier than that on Windows. YMMV.

By the way, this is technically how I published a certificate on our Unifi wireless controller. The CA is a Windows Server 2016 machine that's been published in AD. The Unifi machine is running Ubuntu 17.10 and Unifi version 5.6.29. I also used WinSCP and PuTTY, and my base machine is Win10 (not super applicable).

SSH to the Unifi Machine
(I did this as root, so add “sudo” before commands if you’re not the root god)
cd /usr/lib/unifi
java -jar lib/ace.jar new_cert unifi.domain.tld CompanyName Town State Country
This creates unifi_certificate.csr.der and unifi_certificate.csr.pem – the DER is binary-encoded and the PEM is what we need.

Get the PEM over to your CA Server
I just used nano to view all the data and then copy pasted, but feel free to WinSCP it over as well
nano unifi_certificate.csr.pem
Copy this text, then on the CA create a new text file and paste the data there. Save.

Certreq
Open an administrative Command Prompt on your CA server
certreq -submit -attrib "SAN:dns=unifi.yourdomain.tld&dns=unifi" -attrib "CertificateTemplate:WebServer2018" unifi_certificate.csr.pem
By default your Certificate Template will be “WebServer” instead of the one I listed above – I created my own template with the year it’s valid for the sake of record keeping.

Save the Certificate
Assuming the request went through, you’ll be able to name and save your signed certificate. In my case I named it unifi_withSAN.domain.tld.cer. I also navigated to the http://certserverhere/certsrv site and downloaded the CA certificate, Certificate chain, or CRL (I just downloaded the CA Certificate as it’s a single host with no subs).

Copy it back to Unifi
I used WinSCP to copy both the signed certificate as well as the CA Certificate I downloaded back to my /home directory on the Unifi server.

Final Touches
Back on your Unifi SSH session (in the /usr/lib/unifi directory)
java -jar lib/ace.jar import_cert /home/unifi_withSAN.domain.local.cer /home/srv-cert01-ca.cer
Replace srv-cert01-ca with the name of your CA certificate.
If successful, restart the unifi services
service unifi restart

Close your browser and open back up to https://unifi:8443 and no more error!
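
If you want to confirm from the command line that the controller is serving the new certificate (an optional check, using whatever hostname you issued the cert for):

openssl s_client -connect unifi.domain.tld:8443 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates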

Xibo Install Ubuntu 17.04

Technically this guide could be used for 16.04 and 16.10 (maybe even 17.10 when it arrives), but I tested on 17.04. I wanted to get Xibo installed to stop using a monthly subscription for terrible service, save some money, be the hero, and get a slightly larger bonus.

Install Ubuntu 17.04
LAMP
Mail
Standard
OpenSSH

Enable Root, SSHD Config (optional, may make your configuration less secure)
sudo passwd root
newpassword
sudo su -
nano /etc/ssh/sshd_config
PermitRootLogin yes
Ctrl x
y
service sshd restart

Update Your Server
apt-get update && apt-get dist-upgrade
y

Install PHP 5.6
I know, by default LAMP installs PHP 7 now. We need PHP 5.6+ but less than 7.
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install php7.0 php5.6 php5.6-mysql php-gettext php5.6-mbstring php-mbstring php7.0-mbstring php-xdebug libapache2-mod-php5.6 libapache2-mod-php7.0

Install PHP 7 (NOTE: XIBO CURRENTLY DOES NOT SUPPORT PHP 7+, SO THESE NOTES ARE TO BE DISREGARDED)
apt-get install php-gd php-mcrypt php-soap php-dom php-curl php-zip

Switch From PHP7 to PHP5.6
a2dismod php7.0 ; sudo a2enmod php5.6 ; sudo service apache2 restart
update-alternatives --set php /usr/bin/php5.6

Switch From PHP5.6 to PHP7 (OPTIONAL)
a2dismod php5.6 ; sudo a2enmod php7.0 ; sudo service apache2 restart
update-alternatives --set php /usr/bin/php7.0

Download XIBO, Change Permissions on Apache (Currently version 1.8.2)
wget https://github.com/xibosignage/xibo-cms/releases/download/1.8.2/xibo-cms-1.8.2.tar.gz
tar xvzf xibo-cms-1.8.2.tar.gz
mv xibo-cms-1.8.2 /var/www/html/xibo-server
chown -R www-data:www-data /var/www/html/xibo-server
apache2ctl restart

Create XIBO Uploads Directory
mkdir /var/www/xibouploads
My default www (DocumentRoot) location is /var/www/html, so this directory sits outside of the web root (a good thing).
chown -R www-data:www-data /var/www/xibouploads

Configure XIBO Installation
Open a web browser to http://YOURSERVERIP/xibo-server/web/install/index.php
You may want to change your document root or apache virtual host at a later time because remembering http://YOURSERVERIP/xibo-server/web/index.php/login is a PITA.
Follow the white rabbit wizard to complete the setup.

Edit Apache and Redirect
I ended up creating a virtual host for my system and adding a redirect (there was a pesky “I want to load /login instead of index.php” issue).
nano /etc/apache2/sites-enabled/000-default.conf
At the bottom add:

<VirtualHost *:80>
ServerAdmin ITSUPPORT@yourcompany.tld
DocumentRoot /var/www/html/xibo-server/web
ServerName xibo
ServerAlias xibo.yourdomain.local
<Directory "/var/www/html/xibo-server/web">
Options -Indexes +FollowSymLinks -MultiViews
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>

Enable mod_rewrite in Apache with a2enmod rewrite, or cp /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/ && apache2ctl restart
sudo a2enmod rewrite

Add the /login redirect
nano /var/www/html/xibo-server/web/.htaccess
At the bottom add the following:

Redirect /login/ /index.php