Performance impact of row filters and column lists in logical replication

New features introduced in PostgreSQL 15 allow selective publication of tables' contents within logical replication publications, through the ability to specify column lists and row filter conditions. Let's see how they affect performance.

Row filters and column lists can reduce the amount of data sent by the publisher and processed by the subscriber, thus reducing network consumption and CPU overhead.

One of the differences between streaming replication [1] and logical replication [2] in PostgreSQL is that streaming replication replicates the entire instance, while logical replication allows you to replicate only certain tables and operations.

Before PostgreSQL 15, if a table was published, all of its rows and columns were replicated. With PostgreSQL 15, you can specify row filters and column lists for publications, so that only the rows and columns matching the specified criteria are replicated [3]. If you are not familiar with these two new features, you can refer to my colleagues' blog posts below.
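As a quick sketch of the syntax (the table, column, and publication names here are hypothetical, not the ones used in the tests), a single publication can combine a row filter and a column list:

```sql
-- Hypothetical example: publish only rows of "orders" with amount > 100,
-- and only the id and name columns of "customers".
CREATE PUBLICATION pub_filtered FOR TABLE
    orders WHERE (amount > 100),
    customers (id, name);
```

The subscriber creates a subscription as usual; the filtering happens on the publisher side during logical decoding.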

These two new features not only meet functional needs, but also affect the performance of logical replication. When filters are used, the data is filtered before transmission, which adds some overhead. On the other hand, less data is sent, which saves bandwidth and time spent on data transfer, and subscribers spend less time applying transactions.

Let's see their overall impact on performance with some tests.

Contents

- Introduction to the performance test
  - Test methods
  - Test steps
  - Specified GUC parameters
  - Hardware environment
  - Table structure
- Test results
  - Row filter
  - Column list
- Effect
- For the future

Introduction to the performance test

I used two different methods to check how the filter features affect logical replication performance.

Test methods

I used the following two methods to test different aspects:

The first method, ordinary logical replication through a subscription, is closer to real-world scenarios. The second method uses pg_recvlogical to receive changes from a logical replication slot and redirect the changes in the specified publication to a file; this is not affected by anything other than decoding, filtering, and data transfer.
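The second method can be sketched roughly as follows (the slot, database, and publication names are placeholders; this requires a running PostgreSQL server with a matching publication):

```shell
# Create a logical replication slot that uses the pgoutput plugin
pg_recvlogical -d postgres --slot=perf_slot --create-slot -P pgoutput

# Stream the changes for the specified publication into a file
pg_recvlogical -d postgres --slot=perf_slot --start \
    -o proto_version=1 -o publication_names='pub_tbl' \
    -f /tmp/changes.out
```

Because the changes go straight to a file, the measurement excludes the subscriber's apply work and isolates decoding, filtering, and transfer.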

The tests compare replication using row filter/column list against replication not using filters. For row filter, I tested filtering different amounts of rows.

Test steps

For each of the test methods, I followed the steps below:

Specified GUC parameters

The following GUC parameters were set for the performance test to prevent interference from other factors.

shared_buffers = 8GB
checkpoint_timeout = 30min
max_wal_size = 20GB
min_wal_size = 10GB
autovacuum = off

Hardware environment

Two machines were used in the test – one as the publisher and the other as the subscriber (by creating a subscription or using pg_recvlogical). The two machines were on the same internal network, and their hardware specs were as follows:

|            | Publisher                                  | Subscriber                                 |
| ---------- | ------------------------------------------ | ------------------------------------------ |
| CPU        | 16                                         | 16                                         |
| Model name | Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz | Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz |
| Memory     | 50 GB                                      | 50 GB                                      |

The network bandwidth between the two machines was about 20.0 Gbits/sec.

Table structure

The table structure in the test was as follows:

```
postgres=# \d tbl
                         Table "public.tbl"
 Column |            Type             | Collation | Nullable | Default
--------+-----------------------------+-----------+----------+---------
 id     | integer                     |           | not null |
 kind   | integer                     |           |          |
 key    | integer                     |           |          |
 time   | timestamp without time zone |           |          | now()
 item1  | integer                     |           |          |
 item2  | integer                     |           |          |
 item3  | integer                     |           |          |
 item4  | integer                     |           |          |
Indexes:
    "tbl_pkey" PRIMARY KEY, btree (id)
```

For the row filter, I filtered different proportions of the data according to the value of the kind column. Due to a limitation of the row filter feature, I set the replica identity columns to id and kind.
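The limitation is that a row filter on a publication that publishes UPDATE or DELETE may only reference columns covered by the table's replica identity. A sketch of how the replica identity could be extended to id and kind (the index name is my own; the index used for REPLICA IDENTITY must be unique and all of its columns NOT NULL):

```sql
-- kind must be NOT NULL to be usable in a replica identity index
ALTER TABLE tbl ALTER COLUMN kind SET NOT NULL;
CREATE UNIQUE INDEX tbl_id_kind_idx ON tbl (id, kind);
ALTER TABLE tbl REPLICA IDENTITY USING INDEX tbl_id_kind_idx;
```

With this in place, a filter such as `WHERE (kind > 1)` is allowed even when the publication replicates updates and deletes.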

In the column list test, only 4 columns were replicated: id, kind, key, and time.
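Such a publication might look like the following (the publication name is a placeholder):

```sql
-- Replicate only 4 of the 8 columns of tbl
CREATE PUBLICATION pub_cols FOR TABLE tbl (id, kind, key, time);
```

The omitted item1 through item4 columns are never decoded into the output stream, which is where the savings measured below come from.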

Test results

Having described the setup used for the tests, let's have a look at the results.

Row filter

First, I compared the time taken for initial synchronization with different proportions of rows filtered out, and with no filter. The time taken decreases roughly in proportion to the amount of data filtered out.

Row filter performance comparison - Test 1

The following two figures show the test results for incremental logical replication. In the bulk insert scenario (test 2 below), when a quarter of the data is sent after filtering, the time is reduced by approximately 33% compared with no filter. When executing transactions with a small number of updates (test 3 below), there is also a significant improvement in TPS, which is about 70% higher when three-quarters of the data is filtered out.

Row filter performance comparison - Test 2

Row filter performance comparison - Test 3

Below are the results using pg_recvlogical. When a quarter of the data is sent, the time in the bulk insert scenario (test 4 below) is reduced by about 43% compared with no filter, and in the scenario with transactions containing a small number of updates (test 5 below) it is reduced by about 59%. The improvement is more pronounced in the latter scenario.

Row filter performance comparison - Test 4

Row filter performance comparison - Test 5

In conclusion, initial synchronization, bulk inserts, and small-update transactions can all be sped up significantly by using a row filter. Even when the filter does not reduce the amount of data sent, the degradation is barely noticeable (less than 3%).

Column list

In the initial synchronization scenario, as shown in the figure below, using a column list reduces the time by about 11% compared with no filter.

Column list performance comparison - Test 1

In the incremental replication test, using a column list saves about 14% of the time in the bulk insert scenario (test 2 below). When transactions with a small number of updates are executed, TPS increases by 2.5% (test 3 below); the improvement is not obvious in this case.

Column list performance comparison - Test 2

Column list performance comparison - Test 3

The pg_recvlogical results are similar to those for incremental replication, with about a 12% improvement in the bulk insert scenario (test 4 below) and a 2.7% improvement in the small-updates scenario (test 5 below).

Column list performance comparison - Test 4

Column list performance comparison - Test 5

Overall, the use of column lists improves performance for initial table synchronization and bulk inserts, but for transactions that execute only a small number of updates, the improvement is not obvious. However, if the excluded columns are wide (for example, columns storing photos or article content), the improvement may be more pronounced.

Effect

The test results above show some of the performance improvements brought by row filters and column lists in logical replication. By using these two features, we can reduce the amount of data sent by the publisher and processed by the subscriber, thus reducing network consumption and CPU overhead. They also save disk space on the subscriber side. If users need only a subset of a table's rows or columns, they can consider using filters to improve performance.

For the future

Row filter and column list in logical replication bring performance improvements. Let us look forward to more new features and performance improvements related to logical replication in future versions of PostgreSQL.


References in this post:
