Time For A New PC

I have been on the lookout for a small computer case to replace my aging cube case. The front panel had stopped working, which made it somewhat awkward to plug in USB thumb drives. However, I had found it so difficult to find an appropriate replacement at the last iteration of my computer build, some 5 years ago, that I had not replaced it then. Sure, I could get a Home Theatre PC case, but that was too small. The next size up seemed to be a mini tower, and that was too large to fit in the small space under my desk.

It was then that I came across the Silverstone SST-SG13, which comes in several flavours - the one I liked was the SST-SG13B. This seemed perfect.

computerCase

Of course, I started to ask myself if I should replace the rest of my PC. I have two monitors, a relatively new 27 inch main screen and a 24 inch side monitor, and both of those were absolutely fine. I had recently replaced my computer speakers, and I have pretty well stopped using a CD drive (although I have a portable unit if needed), so the only real benefit to be had was from the core PC itself and the mouse and keyboard. I really liked the look of a low profile wireless keyboard and mouse, so I bought them and now use them with my PC.

keyboard_mouse

So I started examining performance and price against the performance of my old PC, which was based on an Intel Core i5-5200. Looking at benchmark comparisons, it seemed to me that an AMD Ryzen 5 3600 hit a sweet spot in price, with an almost 7 times performance improvement. (I am deliberately being naive here.) I also considered the option where the graphics are built in to the processor chip (as they had been on the Core i7), but two considerations led me down the GPU card route.

  1. The performance/price penalty seemed high (specifically compared to the equivalent trade-off when I bought the Core i7).
  2. I am thinking that I will be doing much more video editing in the future, and GPU support for that would be a big bonus.

Memory was an interesting consideration. I had 16GB on the Core i7 machine and I had never come close to running out. I did look at moving to 32GB, but at the time I didn't feel I would use it, so the extra cost wasn't worth it. I am not so sure now, having butted up against the limit recently. The upgrade is not totally excluded, but it will mean replacing a pair of 8GB memory modules with a pair of 16GB ones.

I started looking at what I should do about disks. On my old system I had a pair of 2TB hard drives and a smaller 240GB SSD. I had used the SSD as my main root filesystem, and had used some of one of the hard drives to provide an ext4 partition (the remainder was running btrfs, in a RAID 1 configuration with the other drive) for a Windows 10 virtual machine I needed for my business requirements. I had used some of the other drive for an alternative root system to boot into if ever the SSD failed me, but I had never had the need to use it. I had also come to the conclusion that RAID was more trouble than it was worth: if I accidentally deleted a file it was deleted on both hard drives, whereas if I did a regular copy from one drive to the other, loss of a file on one drive didn't mean it was lost altogether (although more on this in my next blog post).

It was then that I discovered NVMe and the massive performance gains I could make by using it. After a bit of consideration I decided that I could equip this PC with a 1TB NVMe drive as the prime storage medium. I wanted to have a separate device on which to store data that was either less important or a backup of what was contained on the prime storage medium, and I realised that I could afford a 1TB regular SSD for that. It was also clear that I could fit one more SSD into this small case, and that it would be silly to throw away the one currently in the machine rather than re-use it.

I decided to give my old PC (sans monitors etc) to a charity, but before I could do that I had to recover my data from it and store it somewhere. In the end I decided to leave one 2TB hard drive in the PC as its main root filesystem, keep the second hard drive with my recovered data on it, and clear the SSD to act as the filesystem for my Windows 10 virtual machine.

I have come to value using btrfs as my root filesystem. The ability to manage subvolumes, and to easily make multiple copies of them without incurring the penalty of fully duplicating all the data, is invaluable. But I am nervous about using it when what is actually happening is updates within a big file, such as a database or a virtual machine image. For these I prefer to use an ext4 (without a journal) filesystem. Note, however, this only applies while the database engine or the virtual machine is actively running against the file. It is, I believe, perfectly acceptable to use btrfs for backup storage of these files.
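As an aside, creating such a journal-less ext4 filesystem is a one-liner; a minimal sketch, with a purely hypothetical partition name, looks like this:

# Create an ext4 filesystem without a journal (the partition name is illustrative)
mkfs.ext4 -O ^has_journal /dev/nvme0n1p5
# Confirm that has_journal is absent from the feature list
tune2fs -l /dev/nvme0n1p5 | grep 'features'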

In terms of requirements today, I frequently develop software using nodejs, accessing database engines. My main development is for a client running sqlserver, and until recently I had been running up a Windows virtual machine and using the database engine on that to run sqlserver. My linux node development would access that engine. But I have come across the docker sqlserver package that runs on linux, and I have switched over to that for the majority of my development. So ideally I want that package to access (via docker volumes) an ext4 partition on my NVMe drive, to give the highest performance. I have set aside a 50G partition (within the 1TB) for it.

partition_layout
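For illustration, a sketch of how that container can be started, assuming the 50G ext4 partition is mounted at /database (the mount point and password here are placeholders, not my real values):

# Run Microsoft's SQL Server container with its data directory bind-mounted
# onto the ext4 partition, so the heavy database writes stay off btrfs
docker run -d --name sqlserver \
	-e "ACCEPT_EULA=Y" \
	-e "SA_PASSWORD=ChangeMe_123" \
	-p 1433:1433 \
	-v /database/mssql:/var/opt/mssql \
	mcr.microsoft.com/mssql/server:2019-latest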

Now that Windows no longer has to provide the database engine for the majority of the time, and much of the work I used to undertake with it is (considering my retirement) no longer needed, the drives have been cleaned up, and I am therefore able to use the SSD from my old machine to hold that virtual machine (and potentially others).

The 1TB SSD is used for backup and as a staging post for some specialist files. I do have a Synology Disk Server for longer term backup, but I want to be sure that I have a backup local to my machine, on a separate physical device from the main datastores. Since I initially set up this new computer I have gone through a failure of the NVMe drive's btrfs filesystem that required recovery from backup. This has caused me to reflect on my approach to backup, and as a result I have made some minor changes.

The result of my reflection is that I can see two separate needs for backups:-

  1. Protection for failure of a storage device. This is graded in the sense that we need to cover:-
    • failure of a single device
    • failure of the entire computer
    • disaster at home, in which case both my computer and the Disk Server are inoperable
  2. Recovery of a file that you have deleted, or altered when you want the old content back.

It was way back in 2005 that I first blogged about this, and then again in 2011. Those basic mechanisms are still in place, but I have been adjusting them, both because of the demands of point 1 above and because of the fantastic ability of btrfs to meet the needs of point 2. It is so cheap to make a daily snapshot of my entire home directory and of the /etc directory that that is what I do.

Here is my daily backup file - driven by cron (the weekly and monthly ones are similar, so there is no point in repeating them here):

#!/bin/sh

ARCHIVE=/bak
MOUNTPT=/mnt/btrfs_pool

logger -t "Backup:" "Starting Daily Backup"

# Do we need Owl to be awake?
if [ $(($(date +%_d) % 4)) -eq 0 ] || [ -n "$(ls -A $ARCHIVE/archive)" ] ; then

	# wait up to 5 hours for owl to become available
	counter=0
	until [ $counter -gt 300 ] || (ping -c 1 -W 1 owl.home > /dev/null)
	do
		sleep 60
		counter=$((counter+1))
		logger -t "Backup:" "Still Waiting for Owl ..."
	done
	if [ $counter -le 300 ] ; then
		logger -t "Backup:" "Owl up, so send archive directory to it"
		[ -n "$(ls -A $ARCHIVE/archive)" ] && rsync -aq --remove-source-files $ARCHIVE/archive/ disk:/volume1/NetBackup/archive/snap/archive/

		# We peel the oldest Accuvision front end mdb away to the archive every four days, although
		# since we only receive about three a month this will slow down: we stop when there
		# are only 2 copies left

		if [ $(($(date +%_d) % 4)) -eq 0 ] ; then
			logger -t "Backup:" "Peeling off an Accuvision Front End for Archiving"
			if [ $(ls -t $ARCHIVE/accu_fe|wc -l) -gt 2 ] ; then
				rsync -aq --remove-source-files $ARCHIVE/accu_fe/$(ls -tr $ARCHIVE/accu_fe|head -n 1) disk:/volume1/NetBackup/archive/snap/
			fi
		fi
	fi
fi


logger -t "Backup:" "Fetching Money Backups"

scp chroot:/var/www/db/money-live* /bak/money/ >/dev/null

logger -t "Backup:" "Alan Home"

if [ -d $MOUNTPT/backup-alan-daily-7 ] ; then
	if [ -d $MOUNTPT/backup-alan-daily-5 ] ; then
		btrfs subvolume delete -c $MOUNTPT/backup-alan-daily-7 2>&1 > /dev/null
	fi
fi
if [ -d $MOUNTPT/backup-alan-daily-5 ] ; then
	mv $MOUNTPT/backup-alan-daily-5 $MOUNTPT/backup-alan-daily-7
fi
if [ -d $MOUNTPT/backup-alan-daily-3 ] ; then
	if [ -d $MOUNTPT/backup-alan-daily-4 ] ; then
		btrfs subvolume delete -c $MOUNTPT/backup-alan-daily-4 2>&1 > /dev/null
		mv $MOUNTPT/backup-alan-daily-3 $MOUNTPT/backup-alan-daily-5
	else
		mv $MOUNTPT/backup-alan-daily-3 $MOUNTPT/backup-alan-daily-4
	fi
fi
if [ -d $MOUNTPT/backup-alan-daily-2 ] ; then
	mv $MOUNTPT/backup-alan-daily-2 $MOUNTPT/backup-alan-daily-3
fi
if [ -d $MOUNTPT/backup-alan-daily-1 ] ; then
	mv $MOUNTPT/backup-alan-daily-1 $MOUNTPT/backup-alan-daily-2
fi
btrfs subvolume snapshot -r $MOUNTPT/alan  $MOUNTPT/backup-alan-daily-1 2>&1 > /dev/null

logger -t "Backup:" "Copy Alan Snapshot to bak"

if [ -d $ARCHIVE/backup-alan-daily-2 ] ; then
	btrfs subvolume delete -c $ARCHIVE/backup-alan-daily-2 2>&1 > /dev/null
fi
if [ -d $ARCHIVE/backup-alan-daily-1 ] ; then
	mv $ARCHIVE/backup-alan-daily-1 $ARCHIVE/backup-alan-daily-2
fi

if [ -d $MOUNTPT/backup-alan-daily-2 ] && [ -d $ARCHIVE/backup-alan-daily-2 ] ; then
	btrfs send -q -p $MOUNTPT/backup-alan-daily-2 $MOUNTPT/backup-alan-daily-1 | btrfs receive $ARCHIVE/
else
	btrfs send -q $MOUNTPT/backup-alan-daily-1 | btrfs receive $ARCHIVE/
fi

logger -t "Backup:" "Starting etc backup"


if [ -d $MOUNTPT/backup-etc-daily-7 ] ; then
	if [ -d $MOUNTPT/backup-etc-daily-5 ] ; then
		btrfs subvolume delete -c $MOUNTPT/backup-etc-daily-7 2>&1 > /dev/null
	fi
fi
if [ -d $MOUNTPT/backup-etc-daily-5 ] ; then
	mv $MOUNTPT/backup-etc-daily-5 $MOUNTPT/backup-etc-daily-7
fi

if [ -d $MOUNTPT/backup-etc-daily-3 ] ; then
	if [ -d $MOUNTPT/backup-etc-daily-4 ] ; then
		btrfs subvolume delete -c $MOUNTPT/backup-etc-daily-4 2>&1 > /dev/null
		mv $MOUNTPT/backup-etc-daily-3 $MOUNTPT/backup-etc-daily-5
	else
		mv $MOUNTPT/backup-etc-daily-3 $MOUNTPT/backup-etc-daily-4
	fi
fi

if [ -d $MOUNTPT/backup-etc-daily-2 ] ; then
	mv $MOUNTPT/backup-etc-daily-2 $MOUNTPT/backup-etc-daily-3
fi

if [ -d $MOUNTPT/backup-etc-daily-1 ] ; then
	mv $MOUNTPT/backup-etc-daily-1 $MOUNTPT/backup-etc-daily-2
fi

btrfs subvolume create $MOUNTPT/backup-etc-daily-1 2>&1 > /dev/null
cp  -a --reflink=always $MOUNTPT/rootfs/etc/* $MOUNTPT/backup-etc-daily-1/
# make it readonly
btrfs property set -ts $MOUNTPT/backup-etc-daily-1 ro true

logger -t "Backup:" "Copy etc backup to /bak"

if [ -d $ARCHIVE/backup-etc-daily-2 ] ; then
	btrfs subvolume delete -c $ARCHIVE/backup-etc-daily-2 2>&1 > /dev/null
fi

if [ -d $ARCHIVE/backup-etc-daily-1 ] ; then
	mv $ARCHIVE/backup-etc-daily-1 $ARCHIVE/backup-etc-daily-2
fi

if [ -d $MOUNTPT/backup-etc-daily-2 ] && [ -d $ARCHIVE/backup-etc-daily-2 ] ; then
	btrfs send -q -c $MOUNTPT/backup-etc-daily-2 $MOUNTPT/backup-etc-daily-1 | btrfs receive $ARCHIVE/
else
	btrfs send -q $MOUNTPT/backup-etc-daily-1 | btrfs receive $ARCHIVE/
fi



logger -t "Backup:" "Daily backup complete"

Right at the beginning of the file I check whether there is any backup to send to the disk server; since that powers down each night, it's possible that it hasn't started again when my cron job wants to run, shortly after I start the machine up in the morning. So I use the ping technique shown there to see if it's awake yet (only if I know this backup has something for it) and, if not, sleep for a minute and try again. I wait for up to 5 hours (I do sometimes get up in the middle of the night because something is buzzing round my head, when the server still has several hours before waking up).
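For context, the crontab entry that drives it looks something like this (the time and script path are illustrative rather than my exact values):

# Run the daily backup shortly after the machine is normally switched on
30 7 * * * /usr/local/sbin/backup-daily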

The second step (if we didn't time out) is to send the contents of a special archive directory to the disk server. Anything I place in this directory is for long term archiving, so the script takes it, sends it to the server and then deletes it from the source.

The third step is to keep a long term repository of versions of an Access database front end. I get a new release about 3 times a month, and they are put into the referenced directory. Every 4 days, if there are more than two back copies remaining, I select the oldest and send it to the remote archive.

Finally come the snapshots of my home directory and of /etc. The first part of each section supports recovery of files I have lost because I deleted or altered them; the second part moves the snapshot off the main disk to my backup one.
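Recovering a lost file is then just a copy back out of the relevant snapshot; a sketch, assuming my home subvolume is mounted at /home/alan and using a hypothetical file path:

# Pull yesterday's copy of an accidentally deleted file out of the most recent snapshot
cp -a /mnt/btrfs_pool/backup-alan-daily-1/projects/notes.txt /home/alan/projects/notes.txt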

Installation of Debian on this system was quite problematic. I think this was caused by the need for GPU firmware that is not available in the installation images. My first attempts when I first had the hardware kept failing. With the benefit of hindsight I think this is because the firmware for the graphics drivers is in the non-free section of the repository, and that was only available to me by downloading the DVD live image.

The second installation (after the disk corruption, which I suspect was caused by overheating, as I had been running boinc rosetta@home tasks using all my memory and all 12 cores of the CPU continuously for a few weeks - I have since left it running only on some Raspberry Pi 4s, hopefully saving my desktop's and my laptop's fans) also caused problems. The main issue came at the stage of the installation where it installs the grub bootloader: this step was failing, but without much indication of why. I even checked the logs, but not knowing what to look out for it took a long time to find anything. Eventually I found a message that said it had run out of space on the device. I immediately assumed my boot partition was not big enough, and did another installation from scratch where I set the boot partition size to 1 GB. But it still failed.

I eventually found a post that said there is a limited space in the EFI firmware for boot variables, and that space had been used up. So I used efibootmgr to list and then delete the stale boot entries. Grub was finally able to install its boot entry then, but I had wasted nearly a whole day. I hope someone finds this post and can use it to solve their problem. Once I had found that and dealt with it, I was able to install Debian Bullseye, and it has been great.
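For anyone hitting the same wall, the commands involved are roughly these (the entry number 0003 is only an example; check your own listing first):

# List the current EFI boot entries
efibootmgr -v
# Delete a stale entry, for example Boot0003, to free up variable space
efibootmgr -b 0003 -B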