
Monday, 23 April 2012

Deleting Google Mail (GMail) messages via Mac Mail

Maybe I'm old school, but this idea of archiving mail instead of deleting it just doesn't sit well with me. I have over 22k emails in one inbox from god knows how many years, and I back this up periodically, which takes ages! I don't want it clogging up with crap mail I've hit the delete key on.

Now, when you send the delete command to Google Mail (GMail) via Mac Mail using their recommended settings (don't move deleted messages to the trash box, don't leave the message on the server), GMail simply archives the message. This is a pain.

If you want mail to truly be deleted (i.e. moved to GMail's trash), tick both of those boxes under Preferences > Accounts in Mac Mail, then select your GMail trash folder in Mac Mail and click Mailbox > Use this mailbox for > Trash.

Now all mail you delete will go to the GMail trash and GMail will automatically delete it after 30 days.

Sunday, 19 February 2012

High Availability or Backups?

Clients often ask me to make their sites and services "Highly available". By this they mean they want some form of redundancy and their server to have as much uptime as possible.

This in itself is not a bad thing and it's not massively complicated to set up (the complexity depends on the software involved).

However, one thing they never ask is "Can you set up a backup solution?". That seems odd to me; in my experience a decent backup solution is far more valuable than an HA setup.

What might you need HA for?
1. Server hardware failure
2. Network failure
3. Data centre goes boom!
4. Scheduled maintenance requires a server to go offline

Out of all of those, 4 is probably the most common, and that would normally be a reboot for software updates, which takes around a minute. A minute of downtime every few months is no problem for most sites. (Over a year, rebooting once per month at one minute of downtime per reboot is 12 minutes out of roughly 525,600 minutes, which is about 99.998% uptime.)

What might you need backups for?
1. Server hardware failure
2. Data centre goes boom!
3. User error corrupts files/databases
4. Server is compromised

So 1 and 2 appear in both the HA and backups lists. Granted, HA will respond more quickly and with fresher data than backups, so if your data is critical (and I mean really critical, i.e. financial stuff) then having a database replica is a good idea.

User error happens more often than you'd think: a typo in an SQL UPDATE clause, for example, can kill a database. This is where backups are a must; HA can't help you here, since the bad change will be replicated to all servers.
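As a hypothetical example (the database, table and column names here are made up), one missed WHERE clause is all it takes:

mysql mydb -e "UPDATE users SET email = 'bob@example.com' WHERE id = 42"   # what you meant
mysql mydb -e "UPDATE users SET email = 'bob@example.com'"                 # what you typed - every row gets overwritten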

Server is compromised: heaven forbid this ever happens to you, but you should always be prepared. If it does, the safest solution is to load a backup from a date before the intrusion and fix the hole (keeping the server offline while fixing it), then you're good to go again. Backups in this case also provide a history of file changes, which can help pinpoint when you were compromised.

Now consider that HA requires duplicate servers to deal with problems that rarely happen (in the past year all my Pingdom graphs show 99.99%+ uptime), which makes it expensive. Backups are a lot cheaper: storage solutions such as Amazon S3 cost peanuts in comparison to a second server.

So ask yourself: do I really need HA? Do I have a backup plan in place first?

I'm sure if you think about it you'll agree backups are more important than HA.

Wednesday, 15 February 2012

chroot SSH using OpenSSH ChrootDirectory with Ubuntu/Debian

It's quite common to set up sftp jails using OpenSSH's ChrootDirectory and ForceCommand internal-sftp directives in the sshd_config file; however, it's not as obvious how to set up a full shell in a chroot.
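For reference, a typical sftp-only jail looks something like this in sshd_config (the group name here is just an example):

Match Group sftponly
ChrootDirectory /home/%u
ForceCommand internal-sftp
AllowTcpForwarding no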

Since shell access requires some files (/bin/bash, various files from /lib, /dev/null, etc.), the common way I've seen on the internet to set up a chroot shell is to simply copy these files in.

Personally I don't like that method: you have to copy each file and all its libraries over, and then it's a pain to keep them up to date since package managers won't touch them... you get the picture.

So this is what I've done. Note I've not tested it for security; the chroot I required was to prevent a user with limited experience from breaking a live system while still giving them access to the files in /home.

Replace all instances of [username] with the chrooted user's username.

Step 1:

Create a chroot directory; I chose /chroots/[username].

Make sure this is owned by root and only writable by root.
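Something like this should do it (755 is just one reasonable choice; anything that leaves it writable only by root is fine):

mkdir -p /chroots/[username]
chown root:root /chroots/[username]
chmod 755 /chroots/[username]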

Step 2:
At the end of /etc/ssh/sshd_config add

Match User [username]
ChrootDirectory /chroots/[username]

Restart ssh
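On Ubuntu/Debian that's usually:

service ssh restart

(or /etc/init.d/ssh restart on older releases)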

Step 3:
Install and run debootstrap. This creates a minimal install of your chosen distribution in the chroot, so all your binaries and libraries are there, including an apt configuration so you can update using apt.

aptitude -y install debootstrap
debootstrap lucid /chroots/[username]

This installs Ubuntu Lucid to /chroots/[username].

Now a few files need to be shared from the main system into the chroot; you can either 1) copy these or 2) hard link them (there's an example below).
These are, at a minimum:
/etc/apt/sources.list
/etc/passwd
/etc/group

You can then use apt to update the system as normal by running chroot /chroots/[username] followed by your normal apt commands.
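For example, copying the files in and then updating inside the chroot looks something like this (use ln instead of cp if you'd rather hard link them, which only works when both paths are on the same filesystem):

cp /etc/apt/sources.list /chroots/[username]/etc/apt/sources.list
cp /etc/passwd /chroots/[username]/etc/passwd
cp /etc/group /chroots/[username]/etc/group

chroot /chroots/[username]
apt-get update && apt-get upgrade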

Step 4:

Mount the home directory into the chroot. In the main system add a line like this to /etc/fstab:
/home/[username]/ /chroots/[username]/home/[username]/ none defaults,bind 0 0
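To apply it without rebooting, create the mount point inside the chroot (if it doesn't already exist) and mount everything from fstab:

mkdir -p /chroots/[username]/home/[username]
mount -a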

And that's pretty much it.