Categories
trivial shenanigans

Upgrading Linux boxes

After returning to upgrading my main Linux box (prompted by playing with Docker and by running some stuff, genome annotation pipelines, that needs more than the system max of 16GB RAM), I came across a blog post about a similar situation (albeit with rather more time involved): The beautiful machine.
I’ve generally always upgraded my own computers. My main Linux box has been upgraded in bits and pieces for some time; I think the oldest current part is the case, which is at least 15 years old. It’s been heavily modified with sound insulation from the car audio scene, although TBH it is much quieter now than when the motherboard was an Asus PC-DL running a pair of power-hungry overclocked Xeons.
However, for one reason or another (lack of time, stable hardware, iPadOS etc.) I’ve not done this for some time. My box was last opened two years ago when the TV tuner card (PCIe TBS6980) died and I replaced it with an almost identical model. The previous real upgrade was seven or more years ago.
So I got myself a ten-year-old server board with dual Xeons and a maximum of 32GB RAM: an Intel S5500BC with a pair of Xeon E6240s. The setup only cost £70, but each CPU is far faster than the previous single Xeon X3470, and the maximum RAM is double what I had before.
However, where previously I could swap over a motherboard in less than an hour, this time I made multiple beginner errors.
– The first error was the board refusing to boot due to a grounding fault. I’d assumed there would be the usual nine standoffs of the ATX format. Nope. There is no motherboard hole for a middle-bottom standoff. What compounded the grounding error was that the lower-left standoff was too short. Whoops!
– Then I’d not inserted the RAM correctly. It turns out a proper server board does not fail to POST; it just disables the offending DIMM slots and lets the rest work. Luckily a red LED indicates which DIMM slots are at fault.
– Then all 32GB of RAM (8 x 4GB) was recognised, but once booted into Linux only 24GB was seen. It turns out this was another beginner error: a DIMM was inserted far enough to be recognised, but not far enough to work properly.
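
A quick way to sanity-check this sort of thing (assuming dmidecode is installed) is to compare what the firmware reports for each slot with what the running kernel can actually see:

dmidecode --type memory | grep -i size   # per-DIMM sizes as reported by the BIOS/SMBIOS tables
free -m                                  # total memory the running kernel can actually use
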
As well as the beginner errors, this is a server board and the CPU fans ran so fast that it was difficult to think over the noise. It turns out most Intel server boards are intended to be paired with an Intel chassis. If the board does not detect the chassis it just runs the fans at full speed instead of managing them according to CPU temperature! Noisy!
I found a few blog posts on reflashing the BIOS to a more recent version AND also updating the firmware of something called a Baseboard Management Controller (BMC). When did those come along? The newer firmware adds a non-Intel chassis profile for fan speed and allows the fans to be managed in line with CPU temperature.
Even though Intel have EOLed these boards, I still found the latest BIOS on their site. The existing BIOS was so old, though, that I needed to update to an intermediate build, BIOS 66, and then flash to BIOS 69, which is the latest. Flashing the BIOS on server boards is easy! The board uses EFI and can be booted to a shell, which allows you to flash the BIOS from a USB stick. Even easier, there’s a BAT script on the stick to do this for you. Funky!
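
For the record, from the EFI shell it goes roughly like this (the filesystem number, directory and script name below are illustrative, not the exact ones from Intel’s package):

map -r          # list filesystems and find the USB stick
fs0:            # switch to it (the number varies)
cd \BIOS69
startup.nsh     # the supplied script that runs the flash utility
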
BUT the BMC firmware was very difficult to find. I eventually found it via a niche YouTube video.
Anyway, the lesson I should learn is that hardware upgrades are only easy if you spend a lot of money. If you want to save money then you need to do them regularly to keep your skills up!

Categories
trivial shenanigans

old-fashioned kernel upgrading

I keep the kernel on my Linux box fairly up to date. With more or less every point release, after my distro, Gentoo, has released a fairly ‘mature’ patched version, I upgrade. However, I’m thinking that I’m using some pretty old-fashioned techniques in doing so. For example, I manually configure the kernel, my boot loader is LILO, and I do not use any of the distro’s helpers.

My usual procedure is:

copy the old config directly from /proc and, using the ‘oldconfig’ target, update it with any new options in the new kernel. Since I rarely let this drift by more than one version, there are generally only 20 or so differences:

cp /proc/config.gz .
gunzip config.gz
cp config /usr/src/NEWKERNEL/.config
cd /usr/src/NEWKERNEL
make oldconfig

Once that’s done I compile the kernel using the bzImage target, then compile the modules and install them, all in one go:

make bzImage && make modules && make modules_install

Incidentally, that double ampersand is a cool shortcut: if the previous command exits with an error, the next one does not run.
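
A one-line illustration:

false && echo "this never prints"   # the first command failed, so the second is skipped
true && echo "this prints"          # the first command succeeded, so the second runs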

Once compiled, I change the /usr/src/linux link to the new kernel, copy the kernel image to the boot folder, add the new kernel to lilo.conf, run lilo, and reboot with a prayer to whatever humanist non-deity you don’t believe in!

rm /usr/src/linux
ln -s /usr/src/NEWKERNEL /usr/src/linux
cp /usr/src/NEWKERNEL/arch/x86/boot/bzImage /boot/vmlinuz-NEWKERNEL   # image path varies with kernel version/arch
vi /etc/lilo.conf
lilo
reboot

Incidentally, if you use a distro that stores the Linux headers, or rather uses the kernel’s own ones, in /usr/src/linux, then be careful changing this link. Luckily the distro I use keeps the headers elsewhere, so you can upgrade kernels willy-nilly without affecting what glibc is compiled against.

Categories
Computergeekery

Use of Backup in Anger!

I lost my entire RAID10 array yesterday. In a fit of “too much noise in the office” I removed the hot-swap SCSI array box from my workstation, attached it to a wooden platform, and suspended it in a large plastic box using an old inner tube from my bike. This really reduced the noise; however, like a moron, I did not attach the SCSI cable properly and two drives got kicked from the array. That in itself was not a problem. What was a problem was trying to re-assemble the array without checking the cable first, which ended up wiping one of the RAID partitions. Still not a major issue, except I subsequently zeroed out the superblock of the missing drive in order to add it back in. Anyway, that was my array lost!

As my main backup strategy I use a homebrewed incremental rsync script to back up my Linux workstation every night to a 2TB ReadyNAS+ box (the Macs are backed up with a combination of Time Machine and SuperDuper). So now I had a chance to test it out. After recreating the array and copying the data back across the network I was up and running again!

mdadm --create /dev/md1 --chunk=256 -R -l 10 -n 4 -p f2 /dev/sd[abcd]3   # recreate the 4-disk RAID10 (far2 layout)
echo 300000 > /sys/block/md1/md/sync_speed_max                           # let the initial sync run faster
watch cat /proc/mdstat                                                   # keep an eye on the rebuild
mkfs.xfs /dev/md1
mount /home                                                              # fstab already maps /dev/md1 to /home
rsync -avP /mnt/backup/SCOTGATEHome/current/ /home/                      # pull everything back from the NAS

It took about an hour to sync, and then three hours to copy the 156GB of files back across the network.
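(For the curious, 156GB in roughly three hours works out at around 14MB/s sustained across the network.)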

It all worked great, and I’m very pleased to know that my backup strategy is working!

Now back to completing the “silent and suspended hard drive array”!

Categories
trivial shenanigans

Google Browser Sync now open source!

Among the downsides to Firefox 3 are the mouse scrolling speed on Linux (not on OS X), and the fact that I can no longer share my passwords, bookmarks, and cookies between my Mac and Linux box. I did take a look at Weave, but it seemed pretty poor so far (although I’m sure it will get better).

Well, the first step towards Firefox 3 browser syncing has been taken.

http://google-opensource.blogspot.com/2008/07/open-sourcing-browser-sync.html

Cool. Now all I need to do is to wait for somebody to port it to FF3 and I’m set! That may take a while, but at least it’s now possible.

One thing that seems not to have been noticed is that without an encrypted backend to store the synced info, the plugin is useless. The utility did not sync between browsers directly; it synced each browser to an encrypted backend at Google. I wonder if Google would still allow that? Still, at least it’s now possible.

Perhaps those Weave people will take this code to better their own? GBS did work pretty damned well, so that would be a good idea.

Categories
Mac trivial shenanigans

Don’t just leave your backups alone, check them!

I use a ReadyNAS+ to back up all my Linux boxes using a homegrown incremental rsync script, and my Mac using SuperDuper.  For the last few weeks the space on the little NAS has been getting pretty low.  I just assumed that I had a lot of stuff…  But thinking further, 1.4TB is a hell of a lot of stuff.  Where on earth was my space going?  A few uses of du -h showed that I had 200GB of music and film.  My Mac took over 300GB (3 x rotated SuperDuper sparse images).  My main Linux box took a lot, but surprisingly my MythTV backup took 0.5TB.  WTF? I only keep 3 incremental backups (plus an archived one every now and again) of that machine’s root partition, i.e. 15GB max!  Then it dawned on me.  With Myth 0.21’s new Disc Groups, I added a new 300GB partition for storage, BUT I never added it to the exclude list for my rsync backup. So I’d been backing up all my recordings for the last few months. Doh!!

That’s quite a lot to back up on a nightly basis!
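
For the record, the fix is just an extra exclude in the nightly rsync; something along these lines (the paths here are made up, my real script differs):

# exclude the new MythTV recording partition from the nightly root backup
rsync -a --delete \
    --exclude='/proc/' --exclude='/sys/' --exclude='/dev/' \
    --exclude='/mnt/recordings/' \
    / /mnt/backup/mythbox/current/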

Categories
trivial shenanigans

SMBFS to CIFS

Note to self: if you are ever going to migrate some Samba shares from the deprecated SMBFS to CIFS, don’t just change the filesystem type in fstab and then wonder why the damned thing refuses to mount. Try reading up a bit first and realise that the mount helper is a different utility, and is quite likely not to be installed.

What a numpty.  Anyway, two tips for anybody who’s doing this:

  1. The following is a great way to increase the debug output:
    echo 1 > /proc/fs/cifs/cifsFYI
  2. Install the mount.cifs helper (see the fstab sketch below).
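
For anyone doing the same, the fstab change itself is roughly this (server, share and options are illustrative):

# old entry, using the deprecated smbfs
//nas/media  /mnt/media  smbfs  credentials=/etc/samba/cred,uid=1000  0 0
# new entry, using cifs (needs the mount.cifs helper installed)
//nas/media  /mnt/media  cifs   credentials=/etc/samba/cred,uid=1000  0 0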

Linux Raid Wiki

There’s a lot of outdated stuff about Linux software RAID out there. Over the last six months there’s been a concerted effort by people on the linux-raid mailing list to improve this situation. The continuing results can be seen on this wiki:

http://linux-raid.osdl.org/

It’s a great resource already, and getting better. Go have a look if you want to know about the current status of just what cool stuff you can do.

History Meme

OK, I give in. After seeing this on many blogs over the last 48 hours, I have to join in.

history|awk '{a[$2]++} END{for(i in a){printf "%5d\t%s\n",a[i],i}}'|sort -rn|head

MacBook Pro:
98 ls
96 cd
30 ping
29 ssh
26 du
22 sudo
20 rm
19 cvs
18 top
18 sed

Linux box:

88 ls
63 cd
33 ssh
31 su
30 vi
24 du
17 exit
16 ./Encode2flv.sh
13 rm
13 mv

Even on my MythTV box:

33 ls
23 df
22 su
19 sudo
12 firefox
9 mplayer
9 cd
7 killall
5 vi
5 tail

Not that informative, but a good laugh! Although I do have to ask myself: why do I need to list a directory so many times?  Another thought that comes to mind is that I do too much stuff as root; there are a lot of commands that I know I run that are missing here.

Categories
Mac trivial shenanigans

Linux to OS X

It’s been almost 12 months in my new job, and those 12 months have seen a slow migration to having my MacBook Pro (MBP) as my main computing plaything. Previously I’d always had a Windows laptop as a business machine, and consequently, for reasons not worth going over, my Linux workstation was my main machine. However, the MBP has now slowly but surely supplanted it as my main machine, obviously for work but for personal stuff too.

I’ve spent nearly a grand this year upgrading the Linux box, and to be honest it was money poorly spent. OK, once again I currently have a pretty fast machine: dual 3.6GHz Xeons, 6GB RAM, reasonably fast graphics and a 0.5TB RAID10 array. But even though it’s connected to a pair of Dell 24″ widescreen LCDs, I seem to spend much of my computing time sat in front of the telly with the laptop, placed where it’s named to be placed. Sometimes even in the local pub/cafe connected to the free wireless.

Furthermore, now that I’ve got the 1.4TB ReadyNAS+ as the central storage on the home LAN, and have moved the DNS server over to the WRT54GS running DD-WRT, the machine does not even act as a server. Two years ago I would almost start jonesing if the machine was down. Now I even turn it off sometimes. What a change!

I really enjoyed the upgrading, and I suppose the enjoyment of rebuilding the hardware is worth it. Something I did not enjoy, though, was the money and time spent quietening the damned machine; working from a home office, the constant hum is an annoyance.  My next challenge is that I really do want to get the machine booting from RAID (a small RAID1 /boot partition with two RAID10 arrays for everything else). Perhaps having the MBP and other reliable storage will stop me panicking during some tricky rebuild when I remember that my backup is quite old! Although there’s nothing quite like the live reshaping of a system when you have no backup.
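
In mdadm terms the plan is something like this (device names and partition numbers are purely illustrative):

mdadm --create /dev/md0 --level=1  --raid-devices=2 /dev/sd[ab]1    # small RAID1 /boot, readable by the bootloader
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2  # first RAID10 array, e.g. root
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[abcd]3  # second RAID10 array, e.g. /home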

Categories
Mac

Incremental backups using Rsync

Many thanks to Raj for pointing me to this great tutorial on making incremental backups using rsync.

I’ve been using rsync for backups for a while now. First, my MacBook Pro’s Docs and Scripts directories are synced with my Linux box’s (initiated by a cron job on the Linux box). Then I irregularly back up my home directory to a USB disc. I’ve recently moved this to the ReadyNAS+ box, although its limited horsepower makes rsync over ssh a little slow. I read a few emails on the linux-raid list a while ago intimating that rsync incrementals are possible by hardlinking to the previous backup directory and, as a consequence, only putting new or changed files in the new directory. However, laziness has fought against motivation and it’s not got done.

This nice tutorial should fight against the inertia and make me do it.
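
The hardlink trick the list was describing maps onto rsync’s --link-dest option (the tutorial may use a cp -al variant of the same idea); a minimal sketch with made-up paths:

# unchanged files are hardlinked against yesterday's snapshot, so only new or
# changed files take up extra space in today's directory
rsync -a --delete \
    --link-dest=/mnt/backup/home/2008-07-01 \
    /home/ /mnt/backup/home/2008-07-02/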