PostgreSQL 14 Logical Replication with Pglogical
PostgreSQL Logical Replication allows one database cluster to send changed records to another, even across different major versions, and can also be used to perform two-way or “multi-master” replication (with major caveats).
Building on my previous post, Setup PostgreSQL Streaming Replication, we will add Logical Replication from the original “PostgreSQL 14 Cluster” to a newer “PostgreSQL 15 Cluster”. Since upgrading major versions of PostgreSQL requires downtime for upgrading the data pages on your server, either with pg_upgrade or a dump/restore, we can instead stand up a new cluster and use pglogical to sync the data to the new cluster, switching to the new cluster when complete.
Logical replication sends new and updated rows, and marks rows that have been deleted, to the replica nodes, where the changes are applied locally. Conversely, streaming replication sends the WAL (Write-Ahead Log) stream to the replica instead of logical rows, which is low level and version specific.
Logical replicas run in read-write mode, while streaming replicas operate only in read-only mode. The biggest benefit is migrating data to a new (upgraded) PostgreSQL cluster with minimal downtime.
Limitations of pglogical
Because of the nature of logical replication, there are limits on what can be replicated and requirements your schema must meet.
- Temporary tables not supported
- Only one database can be replicated (per replication slot)
- Requires Primary Keys on all tables
- DDL (Schema changes) replication is not supported (but a function can help)
- Foreign Keys are not checked
- Sequences are updated periodically only, not on each change
- Truncate with Cascade clause does not affect the replica (subscriber)
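Since every replicated table needs a primary key (for UPDATE/DELETE replication), it is worth checking your schema up front. A catalog query along these lines, a sketch against the standard system catalogs, lists the tables in `public` that lack one:

```sql
-- List tables in the public schema without a PRIMARY KEY constraint.
SELECT t.schemaname, t.tablename
FROM pg_tables t
WHERE t.schemaname = 'public'
  AND NOT EXISTS (
    SELECT 1
    FROM pg_constraint c
    JOIN pg_class r ON r.oid = c.conrelid
    JOIN pg_namespace n ON n.oid = r.relnamespace
    WHERE c.contype = 'p'            -- 'p' = primary key constraint
      AND n.nspname = t.schemaname
      AND r.relname = t.tablename
  );
```

Any table this returns will need a primary key added (see below) before it can join a replication set that replicates UPDATEs and DELETEs.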
Setup the PG14 Cluster
We are going back to the primary node (db01) of the PG14 cluster we previously set up for streaming replication, to add logical replication as well. Replication can only run on the primary node, so we won’t be referencing the streaming standby replica here, but you should make the same changes on it so it can run when it gets promoted to primary. (It will complain if max_worker_processes differs!)
pglogical refers to the replication source, or upstream node, as the “provider node.” Replica or downstream nodes are called “subscriber nodes.” Providers can also be subscribers and vice versa; this is multi-master replication.
Use the package manager for your OS to locate and install pglogical. Since I’m setting this up on FreeBSD, and the port has not been updated for PG14 (even though PG15 is currently in beta), I’ll install from source. I’ll need to patch a couple of files to add some header includes as well.
Installing pglogical from source on FreeBSD 12:
git clone https://github.com/2ndQuadrant/pglogical.git
cd pglogical

vim pglogical_fe.c
(Add these #include statements after similar lines, save and exit)
#include <sys/types.h>
#include <sys/wait.h>

vim pglogical_apply_spi.c
(Add this #include statement after similar lines, save and exit)
#include <arpa/inet.h>

pkg install gmake
gmake
sudo gmake install
Configure for pglogical
Update your postgresql.conf file with these settings:
wal_level = 'logical'            # logical is a superset of 'replica'
max_worker_processes = 10        # one per database needed on provider node
                                 # one per node needed on subscriber node
max_replication_slots = 10       # one per node needed on provider node
max_wal_senders = 10             # one per node needed on provider node
shared_preload_libraries = 'pglogical'
track_commit_timestamp = on      # needed for last/first update wins conflict resolution
pglogical runs as a process that connects to PostgreSQL. We need a rule in pg_hba.conf to allow this connection from localhost. (Since PostgreSQL 10+, it no longer uses a “replication” control in the database column.) This default rule should work, or add it if you removed it previously.
host all all 127.0.0.1/32 trust
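The trust method requires no password from any local connection. If you prefer something tighter, a rule along these lines restricts access to one database and one user with password authentication (the `replicator` role name is a placeholder; use whatever user your connection strings authenticate as):

```
# TYPE  DATABASE  USER        ADDRESS        METHOD
host    mydb      replicator  127.0.0.1/32   scram-sha-256
```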
Restart PostgreSQL so the pglogical extension listed in shared_preload_libraries gets loaded.
Otherwise you may get this error when you create the extension:
ERROR: pglogical is not in shared_preload_libraries
Create the extension in the database you will be replicating. This must be done as a PostgreSQL superuser.
psql -d mydb -U postgres
mydb=# CREATE EXTENSION pglogical;
CREATE EXTENSION
The provider node (db01) also needs to be set up for pglogical.
mydb=# SELECT pglogical.create_node(
           node_name := 'mydb_pg14',
           dsn := 'host=db01 port=5432 dbname=mydb');
Next, we’ll create a replication set named “default”, containing the list of tables to replicate. This adds all tables from the given list of schemas in the database to the set.
mydb=# SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);
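To confirm what ended up in the set, pglogical exposes a `tables` view; a quick check looks something like this (column names per the pglogical docs):

```sql
-- Show which tables belong to which replication set.
mydb=# SELECT set_name, nspname, relname
       FROM pglogical.tables
       ORDER BY nspname, relname;
```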
Every table in the list MUST have a primary key. If one does not, you will get this error:
ERROR:  table my_table cannot be added to replication set default
DETAIL:  table does not have PRIMARY KEY and given replication set is configured to replicate UPDATEs and/or DELETEs
HINT:  Add a PRIMARY KEY to the table
You can do that with the ALTER TABLE command. Something like:
mydb=# alter table my_table add primary key (id);
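If the table has no natural key at all, one option (sketched here on a hypothetical table) is to add a surrogate identity column as the primary key. Note that this rewrites the table and takes an exclusive lock while it runs:

```sql
-- Add an auto-generated surrogate key (identity columns require PostgreSQL 10+).
mydb=# ALTER TABLE my_table
       ADD COLUMN id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY;
```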
However, new tables will not be added automatically. If this is a short-term replication used for upgrades to newer hardware or PostgreSQL versions, that may not be a problem for you. Otherwise, you will need to manually add new tables, or use an event trigger to have it done automatically. This code is from the official docs:
CREATE OR REPLACE FUNCTION pglogical_assign_repset()
RETURNS event_trigger AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        IF obj.object_type = 'table' THEN
            IF obj.schema_name = 'config' THEN
                PERFORM pglogical.replication_set_add_table('configuration', obj.objid);
            ELSIF NOT obj.in_extension THEN
                PERFORM pglogical.replication_set_add_table('default', obj.objid);
            END IF;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER pglogical_assign_repset_trg
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE', 'CREATE TABLE AS')
    EXECUTE PROCEDURE pglogical_assign_repset();
Setup the PG15 Cluster
Create a “db03” node and install postgresql, pglogical, and any other extensions or packages that you also have on the PG14 cluster. Set the postgresql.conf and pg_hba.conf configurations as in the PG14 Cluster. After that, make sure we can create the extension.
psql -d template1 -U postgres
template1=# create extension pglogical;
CREATE EXTENSION
template1=# drop extension pglogical;
If you are migrating data to another cluster and turning off the old cluster, you probably also want one or more nodes as standby read-only replicas. You should set those up now as well.
I found the next step undocumented: Does pglogical create the database? The tables? Can data already be there? Turns out: no, no, and sure. That makes some sense, as pglogical does not handle DDL (CREATE/ALTER for databases, tables, etc.). So we have several options to get things ready.
Data Schema Import
(Here, “schema” mostly refers to table definitions, not a namespace grouping tables.)
The easiest way to do this is to back up the primary database, schema only, no data, then load that into our replica database. pglogical created a ‘pglogical’ schema (here, a namespace of tables) there, so we should exclude it from the dump. It also grumbled about some pglogical permissions even though I tried to avoid them. Also note that the create extension pglogical command should already be in your backup, since we already created it on the provider node.
createuser -U postgres --createdb app
createdb -U app mydb
pg_dump -h db01 -U postgres --schema-only --exclude-schema=pglogical mydb > mydb.schema.sql
psql -d mydb -U app < mydb.schema.sql
When we create the subscription, it will transfer all the data from the provider node to the logical replica node. This can take some time, and updating indexes may be slow. It may not be a great option if you have a very large database.
To create the node and subscription, we issue these SQL statements. create_subscription returns immediately, and the initial data load happens in the background. To wait until it completes, use the wait_for_subscription_sync_complete function.
psql -U postgres -d mydb

mydb=# SELECT pglogical.create_node(
           node_name := 'mydb_pg15',
           dsn := 'host=db03 port=5432 dbname=mydb'
       );
mydb=# SELECT pglogical.create_subscription(
           subscription_name := 'mydb_pg14_sub',
           provider_dsn := 'host=db01 port=5432 dbname=mydb'
       );
-- Optionally wait for it to complete.
mydb=# SELECT pglogical.wait_for_subscription_sync_complete('mydb_pg14_sub');
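While the initial copy runs, you can check on it from the subscriber with pglogical’s show_subscription_status function; roughly, the status should move from initializing through syncing to replicating:

```sql
-- Check subscription state on the subscriber node.
mydb=# SELECT subscription_name, status
       FROM pglogical.show_subscription_status('mydb_pg14_sub');
```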
pglogical_create_subscriber
This utility, which comes with pglogical, takes a node that is currently a streaming replica of the provider, or was initialized with a pg_basebackup. Because of that, it can’t be used across major PostgreSQL versions. The benefit is the decreased time spent transferring the data from the provider node. There isn’t a lot of information available on the tool, but you can learn more from reading its help screen.
With pglogical_create_subscriber, you don’t create a subscription, but a “subscriber” directly.
This method copies the entire data directory from the provider, but pglogical works on a single database (per replication slot), so you may get more data than you need. However, this should avoid loading all the data through the replication protocol.
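An invocation looks roughly like this; the option names come from the tool’s help screen, while the node name, DSNs, and data directory path are assumptions for illustration:

```shell
# Turn a basebackup/streaming standby into a pglogical subscriber directly.
pglogical_create_subscriber \
  --subscriber-name=mydb_pg14_copy \
  --subscriber-dsn='host=db02 port=5432 dbname=mydb' \
  --provider-dsn='host=db01 port=5432 dbname=mydb' \
  --replication-sets=default \
  -D /var/db/postgres/data14
```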
If you are using replication to transfer your data to a new cluster, you can turn off your subscription when complete.
mydb=# SELECT pglogical.drop_subscription('mydb_pg14_sub');
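Remember that sequences are only synced periodically, so before the final cutover you may want to push their current values to the subscriber. Assuming you added them to a set on the provider with replication_set_add_all_sequences, a synchronize_sequence pass (both functions appear in the pglogical docs) brings each one current:

```sql
-- On the provider: add sequences to the set, then force a sync before cutover.
mydb=# SELECT pglogical.replication_set_add_all_sequences('default', ARRAY['public']);
mydb=# SELECT pglogical.synchronize_sequence(seqoid)
       FROM pglogical.sequence_state;
```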
You can completely remove pglogical from your database and configurations, or set them up as a new provider for a future transfer.
mydb=# SELECT pglogical.drop_node('mydb_pg15');
mydb=# drop extension pglogical;
Instead of the above, you can set up this node as a provider by defining a replication set. You may not want to do this until you need it, as new tables won’t be added to the replication set automatically.
mydb=# SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);
When your next cluster comes online, set it up as a subscriber and add a subscription to this node to transfer your data again.
Replicating Multiple Databases
If you need more databases replicated between the servers, each one needs another replication slot, another pglogical process, and another subscription. If that seems like a hassle, consider consolidating all your databases as schemas in a central database. (Forthcoming post)