Discussion:
[ADMIN] streaming replication
Karuna Karpe
2011-11-02 07:48:47 UTC
But I have a huge amount of data in the database, so it takes a long time to take a dump from the slave db and load it into the master db.
For example:

        I have one master db server with 20GB of data and two slave db servers replicating from the master. When my master db server fails, one of the slave servers becomes the new master. On the new master, 50MB of data is then added to the database. After some time the failed (old) master comes back up, and I want to make the old master the master again and the new master a slave (as in the previous setup). So please let me know how to replicate only that 50MB of data into the old master database server.

Please give me a solution for that.

Regards,
Karuna karpe.
Dear Karuna,
Streaming replication is one-way, i.e. master to slave. Once the failed master DB is up, you need to reconfigure streaming replication. Before that, you need to take an updated dump of the slave db and load it into the master db.
Hope this information is useful.
Vinay
Hello,
I am replicating a master server to a slave server using streaming replication. I want to know: when my master server fails and my slave server becomes the master, with some additional data added to the database, and the failed master server later comes back up, how will I replicate those additional changes to the failed master server?
Can anyone tell me how to do this?
Regards,
karuna karpe.
Fujii Masao
2011-11-04 01:38:17 UTC
On Wed, Nov 2, 2011 at 4:48 PM, Karuna Karpe wrote:
Post by Karuna Karpe
But I have a huge amount of data in the database, so it takes a long time to take a dump from the slave db and load it into the master db.
For example:
        I have one master db server with 20GB of data and two slave db servers replicating from the master. When my master db server fails, one of the slave servers becomes the new master. On the new master, 50MB of data is then added to the database. After some time the failed (old) master comes back up, and I want to make the old master the master again and the new master a slave (as in the previous setup). So please let me know how to replicate only that 50MB of data into the old master database server.
What about using rsync to take a base backup from the new master and load it onto the old master? rsync can reduce the backup time by sending only the differences between the two servers.

Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
--
Sent via pgsql-admin mailing list (pgsql-***@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
Alex Lai
2011-11-07 16:13:05 UTC
Post by Fujii Masao
What about using rsync to take a base backup from new master and load it
onto old master? rsync can reduce the backup time by sending only differences
between those two servers.
Regards,
My postgres instance has two databases. The pg_dump output is about 30GB. Rsyncing the entire $PGDATA to an empty directory takes about an hour. When I rsync $PGDATA to the existing directory, it still takes 50 minutes. It seems to me that rsync still spends most of the time checking for changes even when very little has changed. Maybe I'm missing some rsync option that could speed up the update.
--
Best regards,


Alex Lai
***@sesda2.com
Scott Ribe
2011-11-07 16:52:00 UTC
Post by Alex Lai
Rsyncing the entire $PGDATA to an empty directory takes about an hour. When I rsync $PGDATA to the existing directory, it still takes 50 minutes.
1) How slow is your disk? (Rsync computer to computer across the network should actually be faster if there aren't many changes.)

2) Why is an hour to bring the old master up to date such a problem? Are you planning to fail over that frequently?
--
Scott Ribe
***@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Kevin Grittner
2011-11-07 17:10:03 UTC
Post by Alex Lai
Post by Fujii Masao
What about using rsync to take a base backup from new master and
load it onto old master? rsync can reduce the backup time by
sending only differences between those two servers.
My postgres instance has two databases. The pg_dump output is about 30GB. Rsyncing the entire $PGDATA to an empty directory takes about an hour. When I rsync $PGDATA to the existing directory, it still takes 50 minutes. It seems to me that rsync still spends most of the time checking for changes even when very little has changed. Maybe I'm missing some rsync option that could speed up the update.
If the bottleneck is the network, be sure that you are using a daemon on the remote side; otherwise you drag all the data over the wire for any file which doesn't have an identical timestamp and size. An example of how to do that from the rsync man page:

rsync -av -e "ssh -l ssh-user" rsync-***@host::module /dest

This will try to identify matching portions of files and avoid
sending them over the wire.

-Kevin
Scott Ribe
2011-11-07 17:19:42 UTC
Post by Kevin Grittner
If the bottleneck is the network, be sure that you are using a daemon on the remote side; otherwise you drag all the data over the wire for any file which doesn't have an identical timestamp and size.
This will try to identify matching portions of files and avoid sending them over the wire.
??? The normal way of using it will use rolling checksums rather than sending all the data over the network:

rsync -av rsync-***@host:/source /dest
--
Scott Ribe
***@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Kevin Grittner
2011-11-07 17:36:03 UTC
Post by Scott Ribe
??? The normal way of using it will use rolling checksums rather than sending all the data over the network.
Perhaps this is an unexpected learning opportunity for me. If there
is no daemon running on the other end, what creates the remote
checksums?

-Kevin
Scott Ribe
2011-11-07 18:20:23 UTC
Post by Kevin Grittner
Perhaps this is an unexpected learning opportunity for me. If there
is no daemon running on the other end, what creates the remote
checksums?
rsync--it invokes rsync on the other end by default.
--
Scott Ribe
***@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Kevin Grittner
2011-11-07 19:00:35 UTC
Post by Scott Ribe
Post by Kevin Grittner
Perhaps this is an unexpected learning opportunity for me. If
there is no daemon running on the other end, what creates the
remote checksums?
rsync--it invokes rsync on the other end by default.
Empirically confirmed. I don't know how I got it into my head that
one of the daemon options is needed in order to start an rsync
instance on the remote side.

Thanks for straightening me out on that,

-Kevin
senthilnathan
2011-11-29 12:08:32 UTC
Just check the following thread for more details:
http://postgresql.1045698.n5.nabble.com/Timeline-Conflict-td4657611.html
We have a system (cluster) with a master replicating to 2 standby servers, i.e.

M |-------> S1
  |-------> S2

If the master fails, we create a trigger file at S1 so it takes over as master. Now we need to re-point the standby S2 as a slave of the new master (i.e. S1). While trying to start standby S2, there is a conflict in timelines, since on recovery the new master generates a new timeline.
Is there any way to solve this issue?

Basically you need to take a fresh backup from the new master and restart the standby using it. But if S1 and S2 share the archive, S1 is ahead of S2 (i.e., the replay location of S1 is greater than or equal to that of S2), and recovery_target_timeline is set to 'latest' in S2's recovery.conf, you can skip taking a fresh backup from the new master. In this case, you can re-point S2 as a standby just by changing primary_conninfo in S2's recovery.conf and restarting S2. When S2 restarts, it reads the timeline history file which was created by S1 at failover and adjusts its timeline ID to S1's, so the timeline conflict doesn't happen.
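A sketch of what S2's recovery.conf could look like after re-pointing (the hostname and user are hypothetical; this assumes the recovery.conf mechanism of 9.0/9.1-era PostgreSQL):

```
standby_mode = 'on'
# Point S2 at the new master S1 instead of the failed old master
primary_conninfo = 'host=s1.example.com port=5432 user=replicator'
# Follow the timeline switch that S1 made at failover
recovery_target_timeline = 'latest'
```

With recovery_target_timeline = 'latest', S2 picks up S1's new timeline from the shared archive instead of refusing to start on a timeline mismatch.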

Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center