A couple of months ago I decided to upgrade the OS of my data server, a machine I built several years ago with 8 x 750GB hard disks serving a 4TB RAID-5 set. To handle the RAID-5 set I use a RocketRAID 2320 SATA-II controller from HighPoint Technologies. The motherboard, CPU, memory and video card have been replaced over the years, but the RAID controller is still operational.
As OS I run Debian. I started originally with Etch (I think) and over the years updated it, now finally to sid. Every update gave me some trouble with the RocketRAID driver, which I could solve with some tweaks.
This time I had even more problems. Normally I only install important security updates, so by the time I decide to upgrade Debian, several months have passed. Multiple runs of aptitude and apt-get and some small fixes were needed to get all packages updated.
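For what it's worth, the upgrade itself boiled down to repeating the usual Debian cycle until everything was consistent; roughly something like this (the exact package fixes needed vary per system, so this is just the general loop, not a transcript of my session):

```shell
# Refresh package lists and pull in everything upgradable
apt-get update
apt-get dist-upgrade

# Let aptitude propose solutions for leftover conflicts
aptitude full-upgrade

# Repair packages left half-configured by a failed run
dpkg --configure -a
apt-get -f install
```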
The second thing I had to do was make my system bootable again and make sure it would mount all my file systems during startup. I use two separate disks in software RAID-1, and on those disks I use LVM with several logical volumes for my file systems. After the first boot I was dropped to a maintenance prompt, so I had to assemble the arrays and mount my file systems manually.
# vi /etc/mdadm/mdadm.conf
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md2 devices=/dev/sda3,/dev/sdb3
# mdadm --assemble -s
# vgchange -a y
# mkdir /mnt
# mount /dev/vgsystem/root /mnt
# chroot /mnt
# vgscan ; vgchange -a y
# mount -a
# vi /etc/mdadm/mdadm.conf
# update-initramfs -u
The problem, however, was in udev: after a forced reinstall, the initrd found my RAID set and the root logical volume, but somehow systemd couldn't find my volume group. Again I was dropped to the maintenance prompt. I couldn't find out why systemd wouldn't mount my devices; there were no critical errors in the log files and all file systems were clean. After an hour of googling and trying to fix the small things that occurred during booting, I still always ended up at the maintenance prompt.
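For anyone hitting the same wall: from the maintenance shell the standard read-only systemd and device-mapper diagnostics look roughly like this (none of these turned up a useful error in my case, which is exactly what made it so hard to search for):

```shell
# Units that failed during this boot
systemctl --failed

# Journal for the current boot, errors and worse only
journalctl -b -p err

# What device-mapper actually created -- should list the logical volumes
dmsetup ls
lvs
vgs
```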
So I decided to remove systemd. I couldn't remember whether I had installed it myself or whether it came in as part of the upgrade. Systemd is the shinier replacement for sysvinit, but since sysvinit never disappointed me over the years, I did an apt-get remove systemd. After the removal apt-get would automatically install sysvinit-core, so after that installation I expected to be back in business.
But… somehow the removal of systemd fscked up my terminal during the installation: it stopped responding to my keyboard input, so the only thing I could do was reboot the system. And guess what: another maintenance prompt. Or actually it dropped me to a shell, since /sbin/init was gone.
This time LVM refused to create my devices: both vgscan and vgchange seemed to run successfully, but device mapper wouldn't create the device nodes, so I was unable to mount anything. I googled for some time, but without any error messages it is hard to find anything. Since I couldn't chroot into my system to fix things, I needed a boot CD. My DVD drive wouldn't boot, so I created a USB disk to boot from. Both a Debian rescue disk and Ubuntu 14.04 failed to boot; I don't know why, probably something to do with scanning my USB controller during boot.
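Writing a rescue image to a USB stick is straightforward with dd. A sketch, with rescue.iso as a placeholder for whichever image you use and /dev/sdX for the stick (double-check the device name with lsblk first, because dd to the wrong target destroys that disk):

```shell
# Identify the USB stick -- picking the wrong device wipes it
lsblk

# Write the ISO raw to the whole device (not to a partition),
# then flush buffers before pulling the stick
dd if=rescue.iso of=/dev/sdX bs=4M
sync
```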
Finally I was able to boot using ploplinux. I mounted my system in a chroot and was able to install sysvinit-core. After a reboot I finally had a working system, although I was still unable to access the data on my RAID-5 set. Time to compile a kernel module.
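The chroot repair from the rescue environment follows the usual pattern; a minimal sketch of what that looks like (vgsystem/root is my root logical volume, as in the listing earlier; yours will differ):

```shell
# Assemble the RAID arrays and activate LVM from the rescue system
mdadm --assemble --scan
vgchange -a y

# Mount the root LV, plus the pseudo filesystems the package tools need
mount /dev/vgsystem/root /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys

# Enter the installed system and put the init back
chroot /mnt
apt-get install sysvinit-core
```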
Which shouldn't be that hard, because HighPoint has Linux driver source available, including a README that explains how to compile the kernel module. However, the source code was written for Linux 2.4 and the early 2.6 kernels, and since those versions there have been changes in the SCSI kernel interfaces, so compiling ended with errors. Long story short, I changed the source code with some help from Google and uploaded the patched version to this article.
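For reference, an out-of-tree module is normally built against the headers of the running kernel like this. HighPoint's tree wraps this in its own Makefile, so treat this as a generic kbuild sketch rather than their exact build steps, and the module name rr232x.ko is assumed from the product family:

```shell
# Build the module in the current directory against the running kernel
# (requires the matching linux-headers package to be installed)
make -C /lib/modules/$(uname -r)/build M=$PWD modules

# Load it and check the kernel log for the controller being detected
insmod ./rr232x.ko
dmesg | tail
```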