TYPO free

fighting for TYPO free code

TYPO3 database backup without the heavy tables

23 Jul 2008

There are times when you need to back up your database or even copy the complete database to another server. If you are unlucky, you cannot use the mysqldump tool to copy the complete database in one go, because that would bring down the server completely. It would also be nice to be able to rsync the data over to another server (the staging server), and for that one huge dump file is sub-optimal too.

A lightweight solution is needed. I made a small script that dumps a database using mysqldump. It has the following features:

  • Dumps every table into a separate sql file;
  • Keeps a table 'blacklist';
  • For blacklisted tables only the structure is dumped, not the data;
  • If a table that was just dumped contains more than X rows, the script sleeps for a few seconds so the database server will not be overloaded.
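The heart of the blacklist feature is a simple membership check: walk the list and, on a match, switch mysqldump into structure-only mode. A minimal sketch of that idea in isolation (the function name and the shortened list are illustrative, not taken from the script below):

#!/bin/sh

# A few tables whose data we never want in the dump (structure only).
skip="sys_log cache_hash cache_pages"

# Returns 0 (true) when the table named in $1 is on the blacklist.
is_blacklisted() {
    for t in $skip; do
        if [ "$t" = "$1" ]; then
            return 0
        fi
    done
    return 1
}

if is_blacklisted sys_log; then
    noData='--no-data'
else
    noData=''
fi
echo "flag for sys_log: $noData"
# prints: flag for sys_log: --no-data

Passing $noData to mysqldump then makes it emit only the CREATE TABLE statement for blacklisted tables.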

#!/bin/bash
# Database dump tool with table blacklist
# Michiel Roos <[email protected]>

host=localhost
database=test
username=test
password=secret
binDir=/Applications/MAMP/Library/bin/
backupDir=/Users/dude/tmp/backup/
sleep=5
aLotOfRows=1000

skip=(
sys_log
cache_hash
cache_pages
cache_pagesection
index_fulltext
index_grlist
index_phash
index_rel
index_section
index_stat_search
index_stat_word
index_words
tx_realurl_chashcache
tx_realurl_errorlog
tx_realurl_pathcache
tx_realurl_urldecodecache
tx_realurl_urlencodecache
)

len=${#skip[*]}

# for each table in the database
for table in `nice -n 19 ${binDir}mysql --batch -u$username -p$password -h$host -D$database -e 'show tables' | egrep -v Tables_in_`;
do
    i=0
    noData=''
    while [ $i -lt $len ];
    do
        if [ "$table" = "${skip[$i]}" ];
        then
            #echo "'$table' is equal to '${skip[$i]}'"
            noData='--no-data'
        fi
        let i++
    done
    echo "dumping $table"
    rowCount=`nice -n 19 ${binDir}mysql --batch -u$username -p$password -h$host -D$database -e "SELECT COUNT(*) FROM $table" | egrep -v COUNT`
    nice -n 19 ${binDir}mysqldump -u$username -p$password -h$host $noData --add-drop-database --add-locks --create-options --delayed-insert --disable-keys --quick $database $table > $backupDir$table.sql
    if [ $rowCount -lt $aLotOfRows ];
    then
        echo "done!"
    else
        echo "done! Sleeping $sleep seconds . . ."
        sleep $sleep
    fi
done

Sure, there is room for improvement; this incarnation is pretty crude. But I have found it very useful already.
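One nice side effect of having every table in its own file is that syncing the dumps to a staging server becomes cheap: rsync only has to transfer the tables whose contents actually changed. A minimal sketch, using local example paths (for a real staging server you would point the destination at something like user@host:/path/ instead):

#!/bin/sh
# Example paths -- replace the destination with user@host:/path/
# to push to a real staging server over ssh.
backupDir=/tmp/backup/
stagingDir=/tmp/staging/

# -a preserves permissions and timestamps, -z compresses during
# transfer, --delete removes dumps of tables that no longer exist.
rsync -az --delete "$backupDir" "$stagingDir"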

Update: don't count the rows if we're only dumping the structure.

#!/bin/bash
# Database dump tool with table blacklist
# Michiel Roos <[email protected]>

host=localhost
database=test
username=test
password=secret
binDir=/Applications/MAMP/Library/bin/
backupDir=/Users/dude/tmp/backup/
sleep=5
aLotOfRows=1000

skip=(
sys_log
cache_hash
cache_pages
cache_pagesection
index_fulltext
index_grlist
index_phash
index_rel
index_section
index_stat_search
index_stat_word
index_words
tx_realurl_chashcache
tx_realurl_errorlog
tx_realurl_pathcache
tx_realurl_urldecodecache
tx_realurl_urlencodecache
)

len=${#skip[*]}

# for each table in the database
for table in `nice -n 19 ${binDir}mysql --batch -u$username -p$password -h$host -D$database -e 'show tables' | egrep -v Tables_in_`;
do
    i=0
    noData=''
    rowCount=0
    while [ $i -lt $len ];
    do
        if [ "$table" = "${skip[$i]}" ];
        then
            noData='--no-data'
        fi
        let i++
    done

    if [ "--no-data" = "${noData}" ];
    then
        echo "dumping structure of '$table'"
        nice -n 19 ${binDir}mysqldump -u$username -p$password -h$host $noData --add-drop-database --add-locks --create-options --delayed-insert --disable-keys --quick $database $table > $backupDir$table.sql
    else
        echo "dumping '$table'"
        rowCount=`nice -n 19 ${binDir}mysql --batch -u$username -p$password -h$host -D$database -e "SELECT COUNT(*) FROM $table" | egrep -v COUNT`
        nice -n 19 ${binDir}mysqldump -u$username -p$password -h$host --add-drop-database --add-locks --create-options --delayed-insert --disable-keys --quick $database $table > $backupDir$table.sql
    fi

    if [ $rowCount -lt $aLotOfRows ];
    then
        echo "done!"
    else
        echo "done! Sleeping $sleep seconds . . ."
        sleep $sleep
    fi
done

Steffen 24 Jul 2008, 01:11
Hi Michiel,

this afternoon I spent some time writing backup scripts. Your timing is just perfect ;-)

One suggestion when using your script as a cronjob:

Test the target device or touch the backup file before starting any DB stuff. That makes it possible to track down errors more precisely, especially if you use NFS or other network-attached storage.

touch $backupDir$table.sql
if [ $? -ne 0 ]
then
    echo -e "ERROR in ${0}: Could not touch file $backupDir$table.sql." | mail $RECIPIENT -s "[Backup error]"
    exit 1
fi

Of course you can test if the file already exists:
if [ -f $backupDir$table.sql ]
...
fi

... or loop some time if the storage device is temporarily down (using bash syntax here):

ATTEMPTS=0
while [ ! -b /dev/sdb1 ] && [ $ATTEMPTS -lt 9 ]
do
    let ATTEMPTS+=1
    sleep 60
done