Every Linux distribution provides a range of utilities that you can use to make backups of your files. Here is how I get the job done with crontab and a shell script using tar and find.
I wanted to secure all the data files on my system on a regular basis, and regular, for me, implies automated. I tried doing a manual backup every week or so, but that was too much hassle, so effectively I stopped making backups altogether...
Furthermore, I wanted an up-to-date backup of the most essential configuration settings on my system. This would help me in case I accidentally lost some data files or settings.
In case of losing everything, I would need to reinstall my Linux distribution (plus extra installed software) and restore my data files and settings. I decided that a full backup of the whole system wouldn't be worth the effort (and resources!).
Say "backup" and most Unix people think "tape drive". However, hard drives have become so cheap nowadays that I chose to add an extra hard drive to my AMD 400 machine instead.
This cheap option has the advantage that a hard drive can be mounted automatically; there is no need to insert tapes by hand. A disadvantage is that the backup resides in the same physical unit as the very data it is supposed to secure. However, since I have a CD writer on my local network, I still have the option of copying a backup to a CD once in a while.
My main hard drive holds 6 GB; the backup drive, 10 GB.
After adding the drive to my machine, I wrote a little shell script (for bash) that does all the work. To run it every night, add this line to root's crontab (using crontab -e as root):

1 3 * * * /root/scripts/daily_backup
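For reference, the five fields before the command in a crontab entry are minute, hour, day of month, month and day of week, and an asterisk matches every value. So this entry fires at 03:01 every night:

```
# min  hour  day-of-month  month  day-of-week  command
  1    3     *             *      *            /root/scripts/daily_backup
```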
Here's the actual code:
#!/bin/bash
#
# creates backups of essential files
#
DATA="/home /root /usr/local/httpd"
CONFIG="/etc /var/lib /var/named"
LIST="/tmp/backlist_$$.txt"
#
mount /mnt/backup
# split the output of `date` (e.g. "Sun Mar 5 03:01:00 CET 2000") into
# the positional parameters: $1 = weekday, $2 = month, $3 = day, $6 = year
set $(date)
#
if test "$1" = "Sun" ; then
    #
    # weekly a full backup of all data and config. settings:
    #
    tar cfz "/mnt/backup/data/data_full_$6-$2-$3.tgz" $DATA
    rm -f /mnt/backup/data/data_diff*
    #
    tar cfz "/mnt/backup/config/config_full_$6-$2-$3.tgz" $CONFIG
    rm -f /mnt/backup/config/config_diff*
else
    #
    # incremental backup:
    #
    find $DATA -depth -type f \( -ctime -1 -o -mtime -1 \) -print > "$LIST"
    tar cfzT "/mnt/backup/data/data_diff_$6-$2-$3.tgz" "$LIST"
    rm -f "$LIST"
    #
    find $CONFIG -depth -type f \( -ctime -1 -o -mtime -1 \) -print > "$LIST"
    tar cfzT "/mnt/backup/config/config_diff_$6-$2-$3.tgz" "$LIST"
    rm -f "$LIST"
fi
#
# create an SQL dump of the databases:
mysqldump -u root --password=mypass --opt mydb > "/mnt/backup/database/mydb_$6-$2-$3.sql"
gzip "/mnt/backup/database/mydb_$6-$2-$3.sql"
#
umount /mnt/backup
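To restore with this scheme, you unpack the most recent full archive first and then the newer incremental archives on top of it, oldest first. Here is a small self-contained sketch of that order, using throwaway archives under a temporary directory (the paths and file names are just examples, not my real backup locations):

```shell
#!/bin/bash
# Demo of the full + incremental restore order.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/src"
echo "version 1" > "$WORK/src/file.txt"
tar cfz "$WORK/full.tgz" -C "$WORK" src            # the "full" backup
echo "version 2" > "$WORK/src/file.txt"
tar cfz "$WORK/diff.tgz" -C "$WORK" src/file.txt   # a newer "diff" backup
mkdir "$WORK/restore"
tar xfz "$WORK/full.tgz" -C "$WORK/restore"        # unpack the full first...
tar xfz "$WORK/diff.tgz" -C "$WORK/restore"        # ...then the diffs on top
RESTORED=$(cat "$WORK/restore/src/file.txt")
echo "$RESTORED"                                   # -> version 2
rm -rf "$WORK"
```

The newer archive simply overwrites the older copy of each file, which is why the order matters.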
data files:
All my data files are in /root, /home or /usr/local/httpd.
settings:
I chose to back up all the settings in /etc (where most essential settings are stored), /var/named (name server settings) and /var/lib (not sure about the importance of this one...). I might need to add more to the list, but I'm still far from being a Unix guru ;-). All suggestions are welcome!
tar versus cpio
The first version of this script used cpio instead of tar to create the backups. However, I found the cpio format not very handy for restoring single files, so I changed it to tar.
A disadvantage of using tar is that you can't (as far as I know) simply pipe the output of find to it. Using a construct like tar cvfz archive.tgz `find /home -ctime -1 -depth -print` caused errors for files whose names contain a space character. This problem was solved by writing the output of find to a file first and passing that file to tar with the -T option.
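With GNU find and GNU tar there is also a way to skip the temporary file: find's -print0 separates the names with NUL characters instead of newlines, and tar's --null with -T - reads that list from standard input. I haven't used this in the script above, but a small sketch (again with throwaway paths, not my real ones) would look like this:

```shell
#!/bin/bash
# Pipe NUL-separated file names from find straight into tar;
# names containing spaces survive intact.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/data"
echo hello > "$WORK/data/file with spaces.txt"
find "$WORK/data" -type f -print0 |
    tar cfz "$WORK/archive.tgz" --null -T -
# list the archive to show the file made it in despite the spaces:
LISTING=$(tar tfz "$WORK/archive.tgz")
echo "$LISTING"
rm -rf "$WORK"
```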