Forum OpenACS Q&A: GZIP backup not working

Posted by Matthew Coupe on
Help!

I've got a problem with my backup.

I am running PostgreSQL 8.2.5 and carry out a daily database dump using pg_dump, which is then compressed with gzip. These files are transferred automatically to another server and restored there, so we have two servers that are almost in sync with each other for backup and failover purposes.
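
Roughly, the nightly job looks like this (a simplified sketch; the database name, file paths and remote host are placeholders rather than the real script):

pg_dump mydb > /backup/mydb.dmp
gzip /backup/mydb.dmp
scp /backup/mydb.dmp.gz backupserver:/backup/
# on the second server: decompress and restore
gunzip -c /backup/mydb.dmp.gz | psql mydb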

This was working fine until last month. The file has now reached 2.5GB, and when I try to gzip the .dmp file I get a Permission Denied message. I've tried changing the permissions on the .dmp file without success. Is there some sort of maximum file size? I am using Red Hat Enterprise Linux 4.

Any ideas would be greatly appreciated. I can also provide more info if it's needed.

Thanks
Matthew

Posted by Matthew Coupe on
Update:

Further investigation suggests the problem is with the .dmp file rather than with gzip itself. I can successfully compress an arbitrary file using gzip.

I can't copy the file anywhere, and when I open it in Emacs there is no content.

Strange.

Posted by Dave Bauer on
What is the file size limit on your filesystem? I think it's probably 2GB on Red Hat 4.
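
A quick way to check whether it really is a file size limit rather than gzip itself (rough sketch; adjust the path to wherever the dump is written):

# print the shell's file size limit, if any
ulimit -f
# try to write a test file a little over 2GB
dd if=/dev/zero of=/backup/bigtest bs=1M count=2100
rm /backup/bigtest
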
Posted by Matthew Coupe on
It seems 2GB is the maximum file size. I'm trying a split dump operation to break it into smaller chunks; I'll feed back on how it goes.

From: http://www.postgresql.org/docs/8.0/interactive/backup.html#BACKUP-DUMP-LARGE

Use split. The split command allows you to split the output into pieces that are acceptable in size to the underlying file system. For example, to make chunks of 1 megabyte:

pg_dump dbname | split -b 1m - filename

Reload with

createdb dbname
cat filename* | psql dbname
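
It looks like the same trick can be combined with gzip so the pieces stay compressed for the transfer as well (I haven't tested this exact line yet; dbname and the 1000m chunk size are just examples):

pg_dump dbname | gzip | split -b 1000m - filename.gz.

# reload with
createdb dbname
cat filename.gz.* | gunzip | psql dbname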

Posted by Matthew Coupe on
Yup, that was the issue. The syntax above works for a PostgreSQL dump and restore on systems with a 2GB maximum file size.

Perhaps this should go in one of the backup docs on this site somewhere?

Cheers for the help, Dave.