old-fashioned kernel upgrading

I keep the kernel on my Linux box fairly up to date. With more or less every point release, after my distro, Gentoo, has released a fairly ‘mature’ patched version, I upgrade. However, I’m thinking that I’m using some pretty old-fashioned techniques in doing so. For example, I manually configure the kernel, my boot loader is LILO, and I don’t use any of the distro’s helpers.

My usual procedure is:

copy the old config directly from /proc and, using the ‘oldconfig’ target, update it with all the new options for the new kernel. Since I rarely let this drift more than one version, there are generally only 20 or so differences:

cp /proc/config.gz .
gunzip config.gz
cp config /usr/src/NEWKERNEL/.config
cd /usr/src/NEWKERNEL
make oldconfig

Once that’s done I compile the kernel as a bzImage, and compile and install the modules at the same time:

make bzImage && make modules && make modules_install

Incidentally, that double ampersand is a cool shortcut: if the previous command exits with an error, the next one does not run.
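
For example, at the shell:

false && echo "this never runs"
true && echo "this does"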

Once compiled, I change the /usr/src/linux link to point at the new kernel, copy the image to the boot folder, add the new kernel to lilo.conf, run lilo, and reboot with a prayer to whatever humanist non-deity you don’t believe in!

rm /usr/src/linux
ln -s /usr/src/NEWKERNEL /usr/src/linux
cp /usr/src/NEWKERNEL/arch/x86/boot/bzImage /boot/bzImage-NEWKERNEL
vi /etc/lilo.conf
lilo
reboot
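
The lilo.conf addition is just another image stanza, something along these lines (the image path, label and root device here are made up for illustration):

image = /boot/bzImage-NEWKERNEL
  label = newkernel
  root = /dev/sda3
  read-only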

Incidentally, if you use a distro that stores the Linux headers, or rather uses the kernel’s own ones, in /usr/src/linux, then be careful changing this link. Luckily the distro I use stores these somewhere else, so you can upgrade kernels willy-nilly without affecting what glibc is compiled against.

January 27, 2009. trivial shennanigans. Tags: . 3 Comments.

Use of Backup in Anger!

I lost my entire RAID10 array yesterday. In a fit of “too much noise in the office” I removed the hot-swap SCSI array box from my workstation, attached it to a wooden platform, and suspended it in a large plastic box using an old inner tube from my bike. This really reduced the noise; however, like a moron, I did not attach the SCSI cable properly and 2 drives got kicked from the array. That in itself was not a problem. The problem was that I tried to re-assemble the array without checking the cable, and ended up wiping one of the RAID partitions. Still not a major issue, except that I subsequently zeroed out the superblock of the missing drive in order to add it back in. Anyway, that was my array lost!

As my main backup strategy I use a homebrewed incremental rsync script to back up my Linux workstation every night to a 2TB ReadyNas+ box (the Macs are backed up with a combination of Time Machine and SuperDuper). So now I had a chance to test it out. After recreating the array and copying the data back across the network I was up and running again!

# recreate the 4-disc RAID10 (far-2 layout, 256k chunks) and start it straight away
mdadm --create /dev/md1 --chunk=256 -R -l 10 -n 4 -p f2 /dev/sd[abcd]3
# raise the resync speed ceiling so the rebuild isn't throttled
echo 300000 >> /sys/block/md1/md/sync_speed_max
# keep an eye on the rebuild
watch cat /proc/mdstat
# new filesystem, mount it (via fstab), then pull the data back from the NAS
mkfs.xfs /dev/md1
mount /home
rsync -avP /mnt/backup/SCOTGATEHome/current/ /home/

It took about an hour for the array to sync, and then 3 hours to copy the 156GB of files back across the network.

It all worked great, and I’m very pleased to know that my backup strategy is working!

Now back to completing the “silent and suspended hard drive array”!

November 2, 2008. Computergeekery. Tags: , , , . No Comments.

Google Browser sync now open source!

One of the downsides to Firefox 3 is the mouse scrolling speed on Linux (not on OS X), and the fact that I can no longer share my passwords, bookmarks and cookies between my Mac and Linux box. I did take a look at Weave, but it seemed pretty poor so far (although I’m sure it will get better).

Well, the first step towards Firefox 3 browser syncing has been taken.

http://google-opensource.blogspot.com/2008/07/open-sourcing-browser-sync.html

Cool. Now all I need to do is to wait for somebody to port it to FF3 and I’m set! That may take a while, but at least it’s now possible.

One thing that seems not to have been noticed is that without an encrypted backend to store the synced info the plugin is useless. The utility did not sync directly between browsers; it synced each browser to an encrypted backend at Google. I wonder if Google would still allow that? Still, at least it’s now possible.

Perhaps the Weave people will take this code to improve their own? GBS did work pretty damned well, so that would be a good idea.

July 10, 2008. trivial shennanigans. Tags: . No Comments.

Don’t just leave your backups alone, check them!

I use a ReadyNas+ to back up all my Linux boxes using a homegrown incremental rsync script, and my Mac using SuperDuper. For the last few weeks the space on the little NAS has been getting pretty low. I just thought that I had a lot of stuff… But thinking further, 1.4TB is a hell of a lot of stuff. Where on earth was my space going? A few uses of du -h showed that I had 200GB of music and film. My Mac took over 300GB (3 x rotated SuperDuper sparse images). My main Linux box took a lot, but surprisingly my Mythtv backup took 0.5TB. WTF? I only keep 3 incremental backups (plus an archived one every now and again) of that machine’s root partition, i.e. 15GB max! Then it dawned on me. With Myth 0.21’s new Disc Groups, I added a new 300GB partition for storage, BUT I never added it to the exclude list for my rsync backup. So I’d been backing up all my recordings for the last few months. Doh!!

That’s quite a lot to back up on a nightly basis!
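
The fix is just one more entry in the exclude list my backup script feeds to rsync; roughly along these lines (the recordings path, exclude file and destination are made up for illustration):

# hypothetical: add the MythTV recordings partition to the backup's exclude list
echo '/mnt/recordings/' >> /etc/backup.excludes
rsync -a --delete --exclude-from=/etc/backup.excludes / /mnt/backup/mythbox/current/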

May 26, 2008. Mac, trivial shennanigans. Tags: . No Comments.

SMBFS to CIFS

Note to self. If you are ever going to migrate some Samba shares from the deprecated SMBFS to CIFS, don’t just change the filesystem type in fstab and then wonder why the damned thing refuses to mount. Try reading up a bit first and realise that the actual mount helper is a different utility, and is likely not to be installed.

What a numpty. Anyway, 2 tips for anybody who’s doing this:

  1. The following is a great way to increase the debug output:
    echo 1 >/proc/fs/cifs/cifsFYI
  2. Install mount.cifs, the CIFS mount helper; it probably isn’t installed by default.
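
For anybody doing the same migration, the fstab change itself is only the filesystem type; something like this (the share, mount point and options below are made up for illustration):

# old smbfs line
//nas/share  /mnt/share  smbfs  credentials=/etc/samba/creds,uid=1000  0 0
# new cifs line (needs mount.cifs installed)
//nas/share  /mnt/share  cifs   credentials=/etc/samba/creds,uid=1000  0 0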

May 24, 2008. trivial shennanigans. Tags: . No Comments.

Linux Raid Wiki

There’s a lot of outdated stuff concerning Linux software RAID out there. Over the last 6 months there’s been a concerted effort by people on the Linux RAID mailing list to improve this situation. The continuing results can be seen on this wiki:

http://linux-raid.osdl.org/

It’s a great resource already, and getting better. Go have a look if you want to know just what cool stuff you can currently do.

May 11, 2008. Uncategorized. Tags: , . No Comments.

History Meme

OK, I give in. After seeing this on many blogs over the last 48 hrs I have to join in.

history|awk '{a[$2]++} END{for(i in a){printf "%5d\t%s\n",a[i],i}}'|sort -rn|head

MacBook Pro:
98 ls
96 cd
30 ping
29 ssh
26 du
22 sudo
20 rm
19 cvs
18 top
18 sed

Linux box:

88 ls
63 cd
33 ssh
31 su
30 vi
24 du
17 exit
16 ./Encode2flv.sh
13 rm
13 mv

Even on my Mythtv box:

33 ls
23 df
22 su
19 sudo
12 firefox
9 mplayer
9 cd
7 killall
5 vi
5 tail

Not that informative, but a good laugh! Although I do have to ask myself: why do I need to list a directory so many times? Another thing that comes to mind is that I think I do too much stuff as root; there are a lot of commands that I know I run that are missing here.

April 14, 2008. Uncategorized. Tags: . 3 Comments.

Linux to OS X

It’s been almost 12 months in my new job, and those 12 months have seen a slow migration to having my MacBook Pro (MBP) as my main computing plaything. Previous to this I’d always had a Windows laptop as a business machine, and consequently, for reasons not worth going over, my Linux workstation was my main machine. Now, however, the MBP has slowly but surely supplanted it as my main machine: obviously for work, but for personal stuff too.

I’ve spent nearly a grand this year upgrading the Linux box, and to be honest it was money poorly spent. OK, once again I currently have a pretty fast machine: dual 3.6GHz Xeons, 6GB of RAM, reasonably fast graphics and a 0.5TB RAID10 array. But even though it’s connected to a pair of Dell 24″ widescreen LCDs, I seem to spend much of my computing time sat in front of the telly with the laptop placed where its name says it should be, and sometimes even in the local pub/cafe on the free wireless.

Furthermore, now that I’ve got the 1.4TB ReadyNas+ as the central storage on the home LAN, and moved the DNS server over to the WRT54GS running DD-WRT, the machine does not even act as a server. Two years ago I would almost get the jones if the machine was down. Now I even turn it off sometimes. What a change!

I really enjoyed the upgrading, and I suppose the enjoyment of rebuilding the hardware makes it worth it. Something I did not enjoy, though, was the money and time spent quietening the damned machine; working from a home office, the constant hum is an annoyance. My next challenge is that I really do want to get the machine booting from RAID10 (a small RAID1 /boot partition, with 2 RAID10 arrays for everything else). Perhaps having the MBP and other reliable storage will stop me panicking during some tricky rebuild when I remember that my backup is quite old! Although there’s nothing quite like live-reshaping a system when you have no backup.

December 16, 2007. Mac, trivial shennanigans. Tags: . No Comments.

Incremental backups using Rsync

Many thanks to Raj for pointing me to this great tutorial on making incremental backups using rsync.

I’ve been using rsync for backups for a while now. First, my MacBook Pro’s Docs and Scripts directories are synced with my Linux box’s (initiated by a cron job on the Linux box). Then I irregularly back up my home directory to a USB disc. I’ve recently moved this to the ReadyNas+ box, although its horsepower is a little low for rsync over ssh. I read a few emails on the Linux RAID list a while ago intimating that rsync incrementals were possible by hardlinking to the previous backup directory and consequently only storing new or changed files in the new directory. However, laziness has fought against motivation and it’s not got done.
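
The trick boils down to rsync’s --link-dest option. A minimal sketch of the idea, assuming a dated snapshot directory and a ‘current’ symlink (the paths and date format are illustrative, not my actual script):

#!/bin/sh
# Hypothetical incremental backup: unchanged files are hardlinked against the
# previous snapshot, so each dated directory looks like a full backup but only
# new or changed files take up extra space.
SRC=/home/
DEST=/mnt/backup/SCOTGATEHome
TODAY=$(date +%F)

rsync -a --delete --link-dest="$DEST/current" "$SRC" "$DEST/$TODAY/"

# repoint 'current' at the snapshot we just made
rm -f "$DEST/current"
ln -s "$DEST/$TODAY" "$DEST/current"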

This nice tutorial should fight against the inertia and make me do it.

December 2, 2007. Mac. Tags: . No Comments.

Missing XInputExtension when using XForwarding over SSH

I recently had some issues whilst trying to tunnel an X session over from another machine (basically because I was too lazy to walk downstairs to my Mythtv box to use xxdiff to compare 2 config files).

Using this command
ssh -X mythtv

resulted in many error messages as soon as I ran any app that tried to open an X display:

mythtv ~ # xxdiff httpd.conf httpd.conf.old
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
Major opcode: 55
Minor opcode: 0
Resource id: 0x1a6
X Error: BadWindow (invalid Window parameter) 3
Major opcode: 2
Minor opcode: 0
Resource id: 0x1a6
Xlib: extension "XInputExtension" missing on display "localhost:10.0".
Failed to get list of devices
X Error: BadWindow (invalid Window parameter) 3
Major opcode: 2
Minor opcode: 0
Resource id: 0x1a6
X Error: BadWindow (invalid Window parameter) 3
Major opcode: 2
Minor opcode: 0
Resource id: 0x1a6
X Error of failed request: BadGC (invalid GC parameter)
Major opcode of failed request: 60 (X_FreeGC)
Resource id in failed request: 0x3200000
Serial number of failed request: 131
Current serial number in output stream: 133

Apparently recent versions of SSH treat “-X” as untrusted X11 forwarding, which restricts the X Window extensions that forwarded clients can use, and XInputExtension is one of them. If instead you use “-Y” you get trusted forwarding, ALL the extensions are allowed, and it works:

ssh -Y mythtv

Great!
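
If you always want this for a particular host, the same behaviour can be set in ~/.ssh/config with the ForwardX11Trusted option (the host name here is just the one from the example above):

# ~/.ssh/config
Host mythtv
    ForwardX11 yes
    ForwardX11Trusted yes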

Laziness promotes education.

November 28, 2007. trivial shennanigans. Tags: . 6 Comments.

