MT Hardware Recommendations
To NAS/SAN, or not?
In theory, it may be a better design to use a NAS or SAN. Network storage scales better, has built-in redundancy, etc. However, an often overlooked factor is that more hardware = more failures = more maintenance = more cost. Buy extra hardware only when required, such as when you cannot buy a single system capable of handling the task(s), or when five 9's of uptime is required.
The cost of hardware is more than its purchase price. It must also include the cost of operation and the time humans spend interacting with the machine throughout its life, and those factors usually dwarf the purchase price. Think in terms of TCO (total cost of ownership). It may not make sense to use three machines (a SAN plus two servers) where one N+1 server would suffice.
Buy one machine, sized to last 24 months
Example: In 2004 I replaced two dual-PIII 700MHz/36GB/1GB systems with one dual Xeon 3.0/75GB/2GB. RRDutil graphs showed the rate of disk usage growth over the past two years, and I determined that 75GB of disk and 2GB of RAM would suffice for 2-3 years. I paid only for the hardware I needed, knowing that hardware would be cheaper and faster in the future.
In 2009 I replaced that server with a dual quad-core Xeon (E5410) 2.33/150GB/16GB. Again, I sized the new server using data on the prior years' usage, and doubled the RAM to account for the switch from 32-bit to 64-bit.
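The projection itself is simple arithmetic. Here is a hedged illustration of the method (the 40GB and 1.5GB/month figures are invented for the example, not taken from my graphs): read the growth rate off the RRDutil graphs and extend it over the server's intended lifetime.

    # sketch: project disk need from an assumed 40GB used, growing ~1.5GB/month
    echo "40 + 1.5 * 36" | bc    # ~94GB over 3 years; 150GB leaves headroom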
For near-line recovery, buy a spare large disk
Even if you use RAID, keep a spare disk/array in the system with a very recent "snapshot" of the system. For example, if you get 1TB of storage from three SAS disks in a RAID array, stick a 1TB or 2TB SATA/SAS disk in there as a near-line backup.
I spin up my near-line disk once a week and sync my "primary" disk to it. My sync script then spins it back down using camcontrol. I also run a shell script at boot time that automatically spins it down (the boot process spins it up). My near-line disk will last almost indefinitely and consumes very little power. It is rated at 10,000 spin up/down cycles, meaning I could spin it up once a day for roughly 27 years before hitting its duty rating.
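A minimal sketch of such a sync script, assuming FreeBSD with the near-line disk at da1, a filesystem on da1s1d, and a placeholder mount point (substitute your own device names and paths):

    #!/bin/sh
    # weekly near-line sync (sketch): spin up, mount, mirror, unmount, spin down
    camcontrol start da1                         # spin the near-line disk up
    mount /dev/da1s1d /mnt/nearline              # mount its filesystem
    rsync -aH --delete /usr/ /mnt/nearline/usr/  # mirror the primary data
    umount /mnt/nearline
    camcontrol stop da1                          # spin it back down

The boot-time script is just the last line, run from an rc script once the boot process has spun the disk up.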
I use this disk if I ever suffer a catastrophic disk/RAID failure. That is unlikely, because I run smartmontools and have remote monitoring and out-of-band alerts, so I should know far enough in advance to repair an ailing volume. But if the unexpected strikes and my RAID array disappears, I still have that near-line backup to recover from. It won't take nearly as long to restore a week's worth of files as it would to restore the entire disk/array.
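A minimal smartd.conf sketch for that kind of monitoring, assuming three array members at da0-da2 and a placeholder alert address:

    # /usr/local/etc/smartd.conf (sketch): watch each array member and
    # mail an alert on SMART failures or rising error counts (-a = monitor all)
    /dev/da0 -a -m pager@example.com
    /dev/da1 -a -m pager@example.com
    /dev/da2 -a -m pager@example.com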
Use ECC RAM
If you depend on the machine and can't get your hands on it in less than 10 minutes, use ECC RAM in it. Period.
A Backup Server
If complete hardware redundancy is important enough to you to warrant the cost, make the second machine identical to the first. Then you've got an inventory of spare parts sitting in the rack next to your really important machine. The more identical the secondary, the better: configure the disk layouts identically, keep copies of the other system's /etc/rc.conf and other config files on both systems, etc.
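Keeping those copies current is a small cron job. A sketch, assuming the hostnames p1 and b1 (placeholders) with ssh keys set up between them:

    #!/bin/sh
    # nightly on b1 (sketch): pull p1's key config files so a current
    # copy always lives on the standby machine
    rsync -a root@p1:/etc/rc.conf    /backups/p1/etc/
    rsync -a root@p1:/etc/fstab      /backups/p1/etc/
    rsync -a root@p1:/usr/local/etc/ /backups/p1/usr.local.etc/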
Use SAS disks
The SAS standard allows SAS and SATA disks to be plugged in interchangeably (if your backplane supports it). On P1, your "production" server, you will of course use SAS disks. However, on B1, your backup server, you can use SATA disks instead. Because you have Really Big Disks in there, you can still do your rsync copy of the entire disk image from P1 to B1. Plus, you can use rsnapshot to _also_ keep incremental backups of your production system.
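A minimal rsnapshot.conf sketch for B1; the hostname p1 and the paths are placeholders, and note that rsnapshot requires tabs, not spaces, between fields (newer versions spell the interval directive "retain"):

    # /usr/local/etc/rsnapshot.conf (sketch); fields are tab-separated
    snapshot_root   /backups/snapshots/
    cmd_ssh         /usr/bin/ssh
    interval        daily   7
    interval        weekly  4
    backup          root@p1:/etc/           p1/
    backup          root@p1:/usr/local/     p1/

Run "rsnapshot daily" from cron each night and "rsnapshot weekly" once a week.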
If the pooh really hits the fan and P1 implodes, traffic can be redirected to B1. You have a complete copy of P1 on B1, as well as all the incremental backups, so you can be right back online while you fix P1, wait for spare parts to arrive, etc.
If you want to get wild and crazy, install PF on both, plug in a crossover cable between their second NIC ports, and set them up for automatic network failover using CARP. But that is probably overkill. I think a stronger argument can be made for having B1 on another network in another data center.
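For the curious, a minimal CARP sketch using the carp(4) pseudo-interface of that FreeBSD era; the vhid, password, and the 192.0.2.10 shared address are placeholders:

    # /etc/rc.conf on P1 (master); on B1 raise advskew (e.g. 100) so it
    # claims the shared address only when P1 stops advertising
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 advskew 0 pass s3cret 192.0.2.10/24"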