
Postgres major version upgrade using replication server

Submitted by: @import:stackexchange-dba

Problem

I am planning to upgrade my database from Postgres 10 to Postgres 14. To reduce the downtime, I plan to use replication. The plan (in short) is to:

  • Dump the data using pg_dumpall

  • Kill the Postgres service running on the replica

  • Set up a new Postgres 14 service

  • Restore the data from the dump created in step 1

  • Set up logical replication with the primary to bring the data up to date

  • Promote the replica to primary, and have the applications start connecting to it

  • Kill the Postgres service running on the old primary, and upgrade it to 14 as well

  • Set up streaming replication between the new primary (the old replica) and a new replica

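Steps 1 and 4 of the plan above might look like this in practice; the host names are placeholders, and note that the answer below ties the dump to a replication slot's snapshot, which a plain pg_dumpall does not do:

```shell
# Step 1, against the old primary: dump globals plus all databases as SQL
pg_dumpall -h old-primary -f cluster.sql

# Step 4, against the new Postgres 14 server: replay the dump
psql -h new-server -d postgres -f cluster.sql
```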
I have a few questions regarding this:

  • Is it advisable to take a dump and restore it, or to let logical replication copy everything (apart from the DDL) from scratch? Which would be the faster option? We have close to 650 GB of data. The dump would be compressed, but restoring it (on the same machine) could still take a long time.

  • Do I need to set anything up to make sure that replication starts from the point the dump was taken, or is that taken care of automatically?

  • One limitation I see with logical replication is that it does not replicate sequences. Before the switchover, could I increment each sequence by a very large number to make sure that there are no conflicts?

  • I have also seen the pglogical extension that I could use for the setup, although I have yet to dig deeper. From your experience, does it offer any advantages / would it make the setup easier?

  • Although it surely depends on many parameters, how long would the restore/replication typically take for a database of this size, until the primary and the replica are in sync?

Solution

Ad 1 and 2:

The easy thing would be not to copy the data yourself, but to let logical replication handle that.

If you insist on using dump/restore, you will have to proceed like this:

  • Create a logical replication slot with the CREATE_REPLICATION_SLOT command and the pgoutput plugin on a replication connection, which will export a snapshot.

  • While the replication connection is still open, run pg_dump and pass the snapshot name with the --snapshot option.

  • Use that replication slot for the logical replication.

Otherwise, replication will not start at the right point.

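A sketch of the first two steps, assuming the slot name upgrade_slot and placeholder host names; the crucial point is that the session that created the slot must stay open until pg_dump finishes:

```shell
# Open a *replication* connection (note replication=database in the
# connection string) and create the logical slot; the result row includes
# a snapshot_name column. Keep this session open.
psql "host=old-primary dbname=mydb replication=database" \
     -c "CREATE_REPLICATION_SLOT upgrade_slot LOGICAL pgoutput;"

# In another terminal, while the session above is still open, dump the
# database exactly as of the exported snapshot (name copied from the
# snapshot_name column above):
pg_dump -h old-primary -d mydb --snapshot="$SNAPSHOT_NAME" -Fc -f mydb.dump
```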
Ad 3:

Setting the sequences to a high value is fine.
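One way to bump all sequences at once, run on the new primary just before the switchover; the offset of 1000000 is an arbitrary safety margin, and the query only covers the current database, so repeat it per database:

```shell
psql -d mydb <<'SQL'
-- Advance every sequence well past anything the old primary could
-- have handed out; relkind 'S' selects sequences in pg_class
SELECT setval(c.oid::regclass, nextval(c.oid::regclass) + 1000000)
FROM pg_class AS c
WHERE c.relkind = 'S';
SQL
```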
Ad 4:

I have no opinion there.
Ad 5:

Nobody can predict that. You will have to do a test run.

Context

StackExchange Database Administrators Q#301877, answer score: 5
