Forum OpenACS Q&A: PostgreSQL database replication just by transferring/mirroring the files

I have some questions about replicating data from one PostgreSQL instance to another, but using a somewhat different approach.

As a short summary of my project:

I am working on a data replication project, and I have finished the kernel module programming (kernel 2.6). The setup has two machines, A and B: when I update any file (anywhere in the directory tree) under a specified directory on machine A, my program updates the same file on machine B.

(On each write system call on machine A, the difference between the new file and the old file is patched onto machine B.)
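To illustrate the idea (the real mechanism is a kernel 2.6 module hooking the write system call, not this code), here is a rough user-space sketch in Python; send_delta is a hypothetical placeholder for the diff-and-ship step:

    import os
    import time

    WATCH_DIR = "/usr/share/data"   # example source directory on machine A

    def snapshot(root):
        # Record the last-modified time of every file under root.
        mtimes = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                mtimes[path] = os.stat(path).st_mtime
        return mtimes

    def watch(root, send_delta, interval=1.0):
        # Poll for modified files and hand each changed path to send_delta().
        seen = snapshot(root)
        while True:
            time.sleep(interval)
            current = snapshot(root)
            for path, mtime in current.items():
                if seen.get(path) != mtime:
                    send_delta(path)   # hypothetical: diff against B's copy, ship the patch
            seen = current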



So now I can have my PostgreSQL database in some directory, say /usr/share/data, on machine A, and initially have the same directory contents on machine B.



Now what I want to do is replicate the changes made on machine A over to B.



So I started my program in this situation by passing it the whole data directory, and went through the following cycle (sketched in code after the list):

1> Stopped postgres on B.

2> Updated data on A.

3> Started postgres on B.

4> Checked the database on B: it was updated.
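Here is that cycle as a rough script (a sketch only: the pg_ctl calls would actually run on machine B, the paths are examples, and sync_data_dir is a placeholder for my replicator):

    import subprocess

    PGDATA_B = "/usr/share/data"   # example data directory on machine B

    def stop_postgres_b():
        # Shut the server on B down before its files get overwritten.
        subprocess.run(["pg_ctl", "stop", "-D", PGDATA_B, "-m", "fast"], check=True)

    def start_postgres_b():
        subprocess.run(["pg_ctl", "start", "-D", PGDATA_B, "-l", "/tmp/pg_b.log"], check=True)

    def test_cycle(sync_data_dir):
        # 1> stop postgres on B, 2> let the replicator push A's changes, 3> restart B.
        stop_postgres_b()
        sync_data_dir()            # placeholder hook into the kernel-module replicator
        start_postgres_b()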



The only problem now is that the update takes a long time; I want it to run faster.



So, can I avoid replicating some files, like log files?

For example, the 000010000000 file in pg_xlog is 16 MB and takes too long to patch.

Or should I replicate only the files that are in the ..../base/ directory?
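Something like this is what I have in mind for the filter (just a sketch; the skip list is a guess, and I'm not yet sure skipping pg_xlog is safe):

    import os

    DATA_DIR = "/usr/share/data"   # example data directory on machine A
    SKIP_DIRS = {"pg_xlog"}        # candidate directories to exclude from replication

    def should_replicate(path):
        # Skip any file whose top-level directory under DATA_DIR is on the skip list.
        rel = os.path.relpath(path, DATA_DIR)
        top = rel.split(os.sep, 1)[0]
        return top not in SKIP_DIRS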



(Currently, to check my code, I'm using diff and patch, which I know are not efficient; I'll implement proper algorithms later.)

I'm taking a binary difference.
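For now the binary difference is as naive as this sketch (byte-by-byte comparison producing (offset, bytes) patches; not the final algorithm):

    def binary_diff(old, new):
        # Return a list of (offset, replacement bytes) runs plus the new file length.
        patches, i = [], 0
        limit = min(len(old), len(new))
        while i < limit:
            if old[i] != new[i]:
                start = i
                while i < limit and old[i] != new[i]:
                    i += 1
                patches.append((start, new[start:i]))
            else:
                i += 1
        if len(new) > limit:               # the file grew: append the new tail
            patches.append((limit, new[limit:]))
        return patches, len(new)

    def binary_patch(old, patches, new_len):
        # Rebuild the new version on machine B from its old copy plus the patches.
        buf = bytearray(old[:new_len].ljust(new_len, b"\x00"))
        for offset, data in patches:
            buf[offset:offset + len(data)] = data
        return bytes(buf)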

The replication works something like this. I update something on machine A, and then:

1> The WAL is updated (.../pg_xlog/00000100000).

2> (After about 2 minutes) the numbered database directory under .../base/ gets updated.

3> (After about 5 minutes) the WAL is updated again (.../pg_xlog/00000100000).

4> ../pg_clog/0000 is updated.

5> ../global/pg_control gets updated.

After all this I can see the database updated!

How can I see immediate results?

I use CHECKPOINT to flush the WAL immediately so that my database updates quickly, but the main problem remains: replicating the big WAL file even for a small database update.
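This is roughly how I force the checkpoint (a sketch using psycopg2; the connection parameters are made up, and CHECKPOINT needs superuser rights):

    import psycopg2

    # Made-up connection parameters for machine A.
    conn = psycopg2.connect(dbname="mydb", user="postgres", host="machine_a")
    conn.autocommit = True
    cur = conn.cursor()
    # CHECKPOINT forces dirty pages out to the data files right away,
    # so the changes show up under base/ without the usual delay.
    cur.execute("CHECKPOINT")
    cur.close()
    conn.close()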

Sounds like you're trying to replicate what DRBD already does: http://www.drbd.org/

I haven't tried DRBD with PostgreSQL yet, but it should be a good experiment. DRBD acts as a block device that copies all changes made to it to other machines via a dedicated network. RAID 1 over a network. This would work if you made sure the PostgreSQL on the secondary machine does not make any writes to the data.

Now, why are you doing this through kernel modules? Is slony-i not sufficient for your needs?

-Roberto

I'm working on a file system replication project and have finished that part: I replicate files from machine A to machine B over the network as soon as any file is modified on machine A.

So I want to apply this concept to databases as well.

I've tried several approaches that worked for replicating databases.

One approach I used recently works like this:

I have two machines, A and B, on the network, with PostgreSQL installed on both and the postmaster on B turned off.

Now I keep track of all the files in the PostgreSQL data directory on machine A,

and I mirror all the files from machine A's database directory to machine B (except the xlog directory; its big files cause bandwidth problems).

When I need the database back (in case of physical damage to machine A), I call "pg_resetxlog" on machine B to avoid the consistency issues caused by the inconsistent xlog files,

then I start the postmaster on machine B and get my database back.
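The failover step on machine B looks roughly like this (a sketch; the data directory path is an example, and pg_resetxlog -f can lose the most recent transactions):

    import subprocess

    PGDATA_B = "/usr/share/data"   # example data directory on machine B

    def bring_up_replica():
        # Discard the WAL that was never mirrored, then start the server on B.
        # pg_resetxlog rewrites pg_control and clears pg_xlog; -f forces it.
        subprocess.run(["pg_resetxlog", "-f", PGDATA_B], check=True)
        subprocess.run(["pg_ctl", "start", "-D", PGDATA_B, "-l", "/tmp/pg_b.log"], check=True)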

Any problem with this approach?

Earlier I mirrored the xlog too; that did work, of course, but it wasted time patching the big xlog files each time.

(BTW, I have to stick to the file system replication approach; that's part of my project.)

Ah, I see, so this is for a university course.

Well, then benchmark your module against DRBD and see how they fare, if you dare 😊

As for your proposed approach, it looks like it would work, but it's not very useful. If you just need to complete the course, you could do that; or you could challenge yourself and do something more interesting.

-Roberto

Yes, of course. It's a one-man, one-month project, and I'd definitely like to do more, but I have time constraints; the log file backup was taking too much time to patch up!

thanks..