In reply to Rob Parsons:
> That is: what happens when your prime copy gets corrupted/messed about with - and then is happily mirrored to the 'backup' copy?
It depends on the algorithm used to determine change, and on what is corrupted. I choose to use file date as the basis of change, so corruptions to my working copy that didn't change the date wouldn't cause the mirror backup to be modified. If you choose to use file content comparison, or length comparison, then corruption is likely to be propagated. But file content comparison is very expensive in terms of data transfer and CPU time, so most people are unlikely to select it. Corruption of the body of the file is unlikely to modify the length, so, again, that wouldn't be propagated. Anything that corrupts the date or file length is likely to corrupt the file header in the directory, and thus likely to be detected during a delta scan (header CRC mismatch) and reported.
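To make the date-based test concrete, here's a minimal Python sketch of the decision my tool effectively makes per file (the function name and tolerance value are illustrative, not my actual code):

```python
import os

def needs_copy(src, dst, tolerance=2.0):
    """Decide whether src should be copied to the mirror at dst,
    using only the modification date - not content or length.
    A corruption that leaves the date alone is invisible to this test."""
    if not os.path.exists(dst):
        return True
    # FAT filesystems store mtimes at 2-second resolution, so allow slack.
    return os.path.getmtime(src) > os.path.getmtime(dst) + tolerance
```

The tolerance matters in practice: copying between filesystems with different timestamp resolutions would otherwise flag every file as changed on every run.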
> As a minor comment: mirror != backup
There are many forms of backup.
The simplest form of backup is just a bulk copy from one disk to another, performed manually. The downside is that it will always copy everything, and will therefore take a long time. It will also not cope with deletions, folder moves, etc. Everything will be there, but it will get out of sync with the working drive. It also suffers from the corruption problem.
The next level up is to use a difference-detecting copier, say xcopy on PC. This will only copy changes, so works a lot quicker once the backup is established. It has the same problem of not deleting old/moved stuff, and the corruption problem.
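What an xcopy-style tool does is roughly this (a self-contained Python sketch, not xcopy itself): copy anything missing or newer in the destination, and never delete.

```python
import os
import shutil

def diff_copy(src_dir, dst_dir):
    """xcopy-style one-way copy: copy files that are missing or newer
    in the destination; never delete anything from the destination.
    Old/moved files therefore accumulate in the backup."""
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        out = os.path.join(dst_dir, rel)
        os.makedirs(out, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves the modification date
```

Preserving the modification date (`copy2` rather than `copy`) is what makes the second run fast: unchanged files compare equal and are skipped.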
Mirroring using difference detection sorts out the problem of removing old versions, so that your backup tracks additions and deletions to your working copy. The downside is that if you delete a file from your working copy, or it gets corrupted, those changes are copied to the backup. This is especially bad if it operates automatically, particularly 'live mirroring', where deleting a file on your working copy automatically and immediately removes it from the mirror. As such, it's probably best used under manual control, when the working copy is known to be 'stable'. I use this with a 'two tier backup', where I have a working copy, a routine backup mirror, and a long-term backup mirror that only gets refreshed infrequently (it's an offsite mirror). I suppose I could come up with a scheme to record changes to my working copy from the date of the last long-term backup, and hope that those don't get corrupted...
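The mirror step is the difference-copy above plus a deletion pass. A sketch (again illustrative, not my actual tool) makes the propagation problem obvious: the second pass is exactly where deletions and date-changing corruptions reach the backup.

```python
import os
import shutil

def mirror(src_dir, dst_dir):
    """Mirror: copy new/changed files, then remove anything from the
    destination that no longer exists in the source. This is the pass
    that propagates deletions - hence running it under manual control."""
    # Pass 1: copy missing or newer files (same as a difference copy).
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        out = os.path.join(dst_dir, rel)
        os.makedirs(out, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)
    # Pass 2: delete files and folders present only in the mirror.
    # Bottom-up walk so a removed folder's contents go before the folder.
    for root, dirs, files in os.walk(dst_dir, topdown=False):
        rel = os.path.relpath(root, dst_dir)
        back = os.path.join(src_dir, rel)
        for name in files:
            if not os.path.exists(os.path.join(back, name)):
                os.remove(os.path.join(root, name))
        for name in dirs:
            if not os.path.exists(os.path.join(back, name)):
                shutil.rmtree(os.path.join(root, name), ignore_errors=True)
```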
Syncing allows two copies to be 'working copies', with some means of arbitrating which copy to use if they differ; most commonly, keeping the latest copy (not always the best strategy, depending on how frequently the sync is performed).
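The 'keep the latest' arbitration can be sketched like this (single-level directories only, for brevity; real sync tools also have to handle conflicts, renames, and subfolders):

```python
import os
import shutil

def sync_latest(dir_a, dir_b):
    """Two-way sync, arbitrating by 'keep the latest copy': whichever
    side has the newer modification date wins. Note the weakness: a
    recently corrupted file beats an older good one."""
    names = set(os.listdir(dir_a)) | set(os.listdir(dir_b))
    for name in names:
        a = os.path.join(dir_a, name)
        b = os.path.join(dir_b, name)
        if not os.path.exists(b):
            shutil.copy2(a, b)
        elif not os.path.exists(a):
            shutil.copy2(b, a)
        elif os.path.getmtime(a) > os.path.getmtime(b):
            shutil.copy2(a, b)
        elif os.path.getmtime(b) > os.path.getmtime(a):
            shutil.copy2(b, a)
```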
Reversionary backup, where you can restore files that have been deleted or corrupted, is the most sophisticated, and requires a versioned backup store. That may be implemented as a version management system storing file differences, or simply by keeping multiple dated versions of each file, or by using multiple backup devices (e.g. backup tapes, old skool). The downside is the increased storage, or the use of compressed file differences (so it's not easy to use as a working copy without doing a restore), or the need to maintain a strict backup device rotation scheme. If you use device rotation, you only defer the corruption problem, unless you take measures to actively detect file corruption.
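The 'multiple dated versions' flavour is the simplest to sketch. This hypothetical example stamps each backup copy with the source file's modification date, so an unchanged file costs nothing and every changed version survives for later restore:

```python
import os
import shutil
import time

def versioned_backup(src, store_dir):
    """Reversionary backup of a single file: keep one dated copy per
    version, so deleted or corrupted versions can be restored later.
    The cost is the extra storage mentioned above."""
    os.makedirs(store_dir, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S",
                          time.localtime(os.path.getmtime(src)))
    dst = os.path.join(store_dir, f"{os.path.basename(src)}.{stamp}")
    if not os.path.exists(dst):  # unchanged file -> same stamp -> no new copy
        shutil.copy2(src, dst)
    return dst
```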
They're all forms of backup. You just have to pick one that is appropriate for your situation.
All of them are better than having no backup at all...
Do you have any suggestions for detecting file corruption across all files on a working copy? Running fsck before you back up, perhaps? I'm always open to suggested improvements in my routine.
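One option I've considered (a sketch, not something I currently run): fsck checks filesystem metadata rather than file contents, so silent corruption of the body would slip past it. A content-hash manifest, rebuilt and compared before each backup, would catch exactly the case date-based detection misses:

```python
import hashlib
import os

def build_manifest(root):
    """Record a SHA-256 hash per file under root. Comparing a fresh
    manifest against a stored one flags files whose content changed
    even though the date and length may not have."""
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[os.path.relpath(path, root)] = h.hexdigest()
    return manifest

def changed_files(old_manifest, new_manifest):
    """Files whose hash changed: corruption candidates (or real edits -
    the scheme can't tell them apart, only a human can)."""
    return [path for path, digest in new_manifest.items()
            if path in old_manifest and old_manifest[path] != digest]
```

The catch is the same one as content comparison for change detection: it reads every byte of every file, so it's slow. Running it just before the infrequent long-term mirror refresh, rather than on every routine backup, might be the right trade-off.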