Andrew Dunstan's PostgreSQL and Technical blog

Announcing Release 18 of the PostgreSQL Buildfarm client

I have pushed Release 18 of the PostgreSQL Buildfarm client.

In addition to numerous minor tweaks and bug fixes, the following changes have been made:

Review: PostgreSQL 16 Administration Cookbook

This book, published in December 2023 by Packt, is the latest incarnation of a book originally principally authored by Simon Riggs, and brought up to date for PostgreSQL 16 by my EDB colleagues Gianni Ciolli, Boriss Mejías, Jimmy Angelakos and Vibhor Kumar. It's available in both hard copy and electronic format. For review purposes, the publisher provided me with one of each. DBAs will probably find the electronic format more useful because of the ability to cut and paste from it. And in these days of devops, where just about every developer is their own DBA to some extent, the book should appeal to a wide audience of PostgreSQL users.

Books like this don't necessarily aim to be comprehensive guides, but rather to provide a toolbox full of useful tips and recipes. And this book is chock full of them. They cover topics relating to pretty much every aspect of PostgreSQL administration, from installation to maintenance to troubleshooting. And the book is pretty reliable. That's not surprising to me, as I know these authors to be fairly meticulous in their attention to detail. When I'm given a book to review I usually start by looking at areas where I have pretty good knowledge already. All too often I find errors that are egregious or just plain careless. I can report that I didn't find any errors beyond an insignificant typo or two. This book is pretty solid, and I'm happy to recommend it.

I do have a few relatively minor criticisms. First, the layout could be improved. A horizontal bar between recipes would be useful; simply using a slightly larger font for recipe headings doesn't make it easy enough to see where one topic ends and another starts. Better visual cues would be good. If that increases the book's length, maybe they could save some space by removing the repetitious "Learn more on Discord" section at the end of each chapter.

More substantially, some topics are covered in too little detail to be very useful. For example, the section on Foreign Data Wrappers is very sparse, as is the section that mentions Prometheus and Grafana. It might be better to have fewer recipes but with more useful information.

Finally, it is odd to read things stated in the first person singular in a book with multiple authors. I imagine this is a holdover from earlier incarnations of the book, but it still feels strange. If it's going to be kept then the reader should be told whose voice this is. But in general I think it would be better to use "we" and "our" instead of "I" and "my".

None of this should deter you from buying the book. If you're a PostgreSQL developer or administrator you are highly likely to find useful gems of information in this book.

Announcing Release 17 of the PostgreSQL Buildfarm client

I have pushed Release 17 of the PostgreSQL Buildfarm client.

Release 17 has two main features:

There are also a number of relatively small bug fixes and tweaks (e.g. some improvements in processing typedefs).

The release is available at https://github.com/PGBuildFarm/client-code/releases or https://buildfarm.postgresql.org/downloads/latest-client.tgz

Enjoy!

Announcing Release 16 of the PostgreSQL Buildfarm client

Hot on the heels of Release 15 comes Release 16.

This release deals with some issues that were discovered in Release 15's check-for-update feature when used with the force_every and trigger_exclude features; the update check now works correctly with those features.

It also features these items:

The release can be downloaded from

https://github.com/PGBuildFarm/client-code/releases/tag/REL_16 or

https://buildfarm.postgresql.org/downloads

Announcing Release 15 of the PostgreSQL Buildfarm client

Changes

The release can be downloaded from

https://github.com/PGBuildFarm/client-code/releases/tag/REL_15 or

https://buildfarm.postgresql.org/downloads

Upgrading is highly recommended.

Goodbye Windows XP

This time for sure (h/t Bullwinkle)

Yesterday I finally turned off my Windows XP instance for good. It's been running for quite a long time, almost 12 years, basically as the host for the buildfarm animals currawong, brolga and frogmouth. However, XP itself went out of support in 2014, and in recent years modern Postgres will not build or run on it due to some platform limitations. The latest version of Postgres that could run on it was release 10, which has now reached end of life, so there is no longer a supported version of Postgres that can build on XP, and thus no longer a need for these buildfarm animals.

This means we're also probably saying goodbye to Microsoft Visual C 2008, and Msys version 1.

Taming Buildfarm Logs

The output from running a buildfarm step is the output from running a command such as make check followed by any relevant log files that the step creates. These log files can get quite large, and in some cases there are quite a lot of them. Up to now the buildfarm server has just shown all of this in one undifferentiated text blob. That has made finding out what might have gone wrong harder than it needs to be, and there have been complaints about it over the years.

A few days ago I implemented a new way of presenting this information. You still see the command output, but instead of showing the log file contents you see a list of the log files. Click on one log file and it will show you the contents of that file. Click on it again and the contents are hidden. So if, say, a TAP test fails at script number 12, you can quickly locate regress_log_012_mytest.log and the associated server logs, without having to scroll through thousands of lines that don't interest you.

You can still see the old format if you need to - the page for each stage has a link to the raw log.

I hope this will enhance the usability and usefulness of the buildfarm.

Thanks to Kori Lane for pointing me at the JavaScript gadget that is used to implement this.

Update: All issues now resolved.

Release 14 of the PostgreSQL Buildfarm client

This is to announce release 14 of the PostgreSQL Buildfarm client.

Significant changes:

Because of the changes to how pg_upgrade logging works, owners are strongly urged to upgrade to the new release as soon as possible if they are running the TestUpgrade module.

The meanings of the branches_to_build keywords are as follows: STABLE means all the live branches other than master/HEAD, while OLD means those branches older than STABLE that we are now supporting limited builds for (currently REL9_2_STABLE through REL9_6_STABLE).
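For illustration, in the animal's config file this might look something like the following (syntax abbreviated; the sample config file shipped with the release is authoritative):

branches_to_build => 'STABLE',   # every live branch except master/HEAD

or

branches_to_build => 'OLD',      # the limited-build branches listed above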

Release 13.1 of the PostgreSQL BuildFarm client

I have just pushed Release 13.1 of the PostgreSQL BuildFarm client.

This update to Release 13 fixes errors that occur from the new default branch name checking code when used with versions of git that are too old. This code is now disabled if the git version is not capable of running it, and in verbose mode a warning is printed. The warning can be disabled by explicitly setting "skip_git_default_check => 1" in the config file. In either case, the owner will need to update their git installation or remove all branch and mirror repositories when the default branch name changes.

Downloads are available at:

https://github.com/PGBuildFarm/client-code/releases and

https://buildfarm.postgresql.org/downloads

Release 13 of the PostgreSQL BuildFarm client

I have pushed Release 13 of the PostgreSQL BuildFarm client.

Change highlights:

MSVC animals are now very much simpler to set up, and to upgrade to a new compiler. Using the new mechanism, as shown in the sample config file, all that's needed is to specify a location where the standard script vcvarsall.bat can be found. The client will then run that script, extract the settings, and apply them. That means that upgrading to a new version of Visual Studio entails just a one-line change in the config file.

If you put a file called [animalname].inhibit-runs in the build root, all runs will be stopped until the file is removed. If you put a file called [animalname].force-one-run in the build root, each configured branch will be forced to run once, and the file will be removed. These only apply if you use the run_branches.pl script. An example of both appears below.
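For example, assuming an animal named "myanimal" and a build root of /home/bf/buildroot (both invented for illustration):

# pause all runs until the file is removed
touch /home/bf/buildroot/myanimal.inhibit-runs

# resume normal scheduling
rm /home/bf/buildroot/myanimal.inhibit-runs

# run each configured branch once; the file is removed automatically
touch /home/bf/buildroot/myanimal.force-one-run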

The client should transparently deal with any change that is made in the upstream repository's default branch name. This avoids the need for a flag day when we eventually change the default branch name for PostgreSQL, as I assume we will do before long. The branch bf_HEAD which the client creates now refers to the upstream default, whatever it might be, rather than the hardcoded name 'master'. The code of the SCM module underwent quite a lot of change to make this happen; the checkout code had become quite long and convoluted, and I had to refactor it somewhat before I could make and test this change. The changes have been fairly extensively tested, but I'm still slightly nervous about them. Owners are asked to report any issues promptly.

The use_discard_caches setting reflects a change in the way postgres handles this - it's now a runtime setting rather than a compile-time setting. On older branches it sets "-DCLOBBER_CACHE_ALWAYS". If you use this setting, don't also set that define.
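For illustration, a config fragment along these lines should do it (the placement within the config hash is my assumption; check the sample config file):

our %conf = (
    # ...
    # on new branches this sets debug_discard_caches at runtime;
    # on older branches it adds -DCLOBBER_CACHE_ALWAYS at build time
    use_discard_caches => 1,
    # ...
);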

Downloads are available at

https://github.com/PGBuildFarm/client-code/releases and https://buildfarm.postgresql.org/downloads

Running Standalone TAP Tests

Recent discussion set me thinking about what would be involved in setting up standalone TAP tests for Postgres, to do, say, more code coverage than we get from the current core tests. So I started experimenting. And with a tiny bit of Makefile magic it turns out to be absurdly easy.

You need just two things: an entirely boilerplate Makefile and a tiny bit of glue at the top of your TAP test files.

First let's look at the Makefile. Here it is in its entirety:

TAP_TESTS = 1

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

# install_path for PostgreSQL binaries etc
PG_INSTALL_ROOT := $(dir $(shell $(PG_CONFIG) --bindir))
export PG_INSTALL_ROOT

# where to find PostgresNode and friends
PG_NODE_LOC = $(top_srcdir)/src/test/perl
export PG_NODE_LOC

Then in your TAP test perl file(s) (which should reside in a subdirectory called "t") you put this:

use lib "$ENV{PG_NODE_LOC}";

use PostgresNode; use Test::More;

local $ENV{PG_REGRESS} = "/bin/true";

my nodeinst=node_inst = nodeinst=ENV{PG_INSTALL_ROOT};

for pre--release-14 releases you would possibly set LD_LIBRARY_PATH

based on this. For release 14 and up PostgresNode does that for you

via the install_path parameter.

my node=PostgresNode−>getnewnode(′dummy′,installpath=>node = PostgresNode->get_new_node('dummy', install_path => node=PostgresNode>getnewnode(dummy,installpath=>node_inst);

...

That's all you need. Given that, you can run your TAP tests with just a Postgres installation, as in this example:

andrew@emma:tests $ make PG_CONFIG=../inst.head.5701/bin/pg_config installcheck
rm -rf '/home/andrew/pgl/tests'/tmp_check
/usr/bin/mkdir -p '/home/andrew/pgl/tests'/tmp_check
cd ./ && TESTDIR='/home/andrew/pgl/tests' PATH="/home/andrew/pgl/inst.head.5701/bin:$PATH" PGPORT='65701' \
top_builddir='/home/andrew/pgl/tests//home/andrew/pgl/inst.head.5701/lib/postgresql/pgxs/src/makefiles/../..' \
PG_REGRESS='/home/andrew/pgl/tests//home/andrew/pgl/inst.head.5701/lib/postgresql/pgxs/src/makefiles/../../src/test/regress/pg_regress' \
REGRESS_SHLIB='/src/test/regress/regress.so' \
/usr/bin/prove -I /home/andrew/pgl/inst.head.5701/lib/postgresql/pgxs/src/makefiles/../../src/test/perl/ -I ./ t/*.pl
t/dummy.pl .. ok
All tests successful.
Files=1, Tests=1, 3 wallclock secs ( 0.04 usr 0.01 sys + 1.22 cusr 0.36 csys = 1.63 CPU)
Result: PASS

The Makefile and a demo TAP test are at this Gitlab repo.

In the next episode, we'll look at how to leverage this so that your standalone tests are run by your buildfarm animal.

Buildfarm adopts modern git naming

In keeping with modern git practice, I have renamed the default branch on both the client-code and server-code repositories to main. If you have a clone of either of these repositories, you can adapt to the change by running the following commands in your local repo:

git checkout master
git branch -m main
git fetch
git branch --unset-upstream
git branch -u origin/main
git symbolic-ref refs/remotes/origin/HEAD refs/remotes/origin/main
git fetch -p

There is still a bunch of work to do to ensure that things run smoothly when source repositories such as The PostgreSQL Git repository make this change, as will surely happen at some stage. I will be testing this on a small source repository I control - the plan is that when it happens it will be transparent to buildfarm animals.

Update (August 4 2021):

I just went to update a couple of machines with very old versions of git. They choked on these lines:

git branch --unset-upstream
git branch -u origin/main

The solution is instead to edit the .git/config file directly. In these lines change "master" to "main":

[branch "main"]    remote = origin merge = refs/heads/master

The rest of the recipe worked fine. In some cases you might need a final git pull.

Using EXPLAIN json format output in plpgsql

It's possible to use EXPLAIN output in plpgsql. In the default text format, the result comes back as a set of text values, so you can process them in a loop like this:

declare
    exp text;
begin
    for exp in explain myquery
    loop
        raise notice '%', exp;
    end loop;
end;

If you use json format output, however, the result comes back as a single json document rather than a set of lines. You still might need a loop - I haven't found another way yet of getting the output from EXPLAIN into a variable - but the loop will only have one iteration. Here is an example taken from a function I wrote the other day that lets you get the estimated number of rows from the plan:

declare
    exp json;
begin
    for exp in explain (format json) myquery
    loop
        raise notice 'rows: %', exp #>> '{0,Plan,Plan Rows}';
    end loop;
end;
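Wrapping this up, here is a sketch along the lines of that function, for an arbitrary query passed as text (the function name is invented, and this is a simplified version rather than the exact code):

create or replace function estimated_rows(query text) returns numeric
language plpgsql as $$
declare
    exp json;
begin
    -- execute runs the query supplied as a string; the loop runs exactly once
    for exp in execute 'explain (format json) ' || query
    loop
        return (exp #>> '{0,Plan,Plan Rows}')::numeric;
    end loop;
end;
$$;

-- usage, for example:
-- select estimated_rows('select * from pg_class where relkind = ''r''');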

Removing a Buildfarm client/server perl dependency

With one very small exception, there is nothing in the buildfarm server that requires the client to be running perl, even though both the server and the standard client are written in perl.

Andreas Scherbaum has written a new specialized client in python, and he asked me about removing that dependency. Essentially what this requires is that an alternative mechanism be allowed for transmitting the serialized configuration in the client's web transaction. Currently the client uses perl's Storable module to serialize the data, and the server uses the same module to de-serialize it. The obvious alternative candidate is to serialize it as JSON.

The first problem was to determine if we can sanely distinguish between data serialized by the two mechanisms. And we can. JSON is not allowed to contain any control characters, and a structure serialized using Storable's nfreeze() method is pretty much guaranteed to contain such characters. So I've added a test to the receiving program that looks for such characters and if it doesn't find them assumes that the data is JSON and decodes it accordingly. This has been tested using the client nightjar.
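In outline, the server-side test looks something like this (a sketch, not the actual receiving-program code):

use Storable qw(thaw);
use JSON::PP qw(decode_json);

sub deserialize_payload
{
    my ($data) = @_;

    # JSON may not contain raw control characters (tab, newline and
    # carriage return may only appear as whitespace between tokens),
    # while Storable's nfreeze output virtually always contains some.
    return thaw($data)
      if $data =~ /[\x00-\x08\x0B\x0C\x0E-\x1F]/;
    return decode_json($data);
}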

There are a few wrinkles, though. The main one is that we can't assume that every client has a module available to encode the data as JSON. In fact, nightjar didn't until I installed one. So in order to build this into the client I'll need to find a way to determine at runtime if it's available and fall back to using Storable otherwise.
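A plausible way of doing that (again just a sketch, with %conf standing in for the data to be sent):

use Storable ();

# prefer JSON if a known encoder module is installed; otherwise
# fall back to the existing Storable serialization
my $json_class;
foreach my $class (qw(JSON::XS JSON::PP))
{
    if (eval "require $class; 1")
    {
        $json_class = $class;
        last;
    }
}

my $payload =
  $json_class
  ? $json_class->new->canonical->encode(\%conf)
  : Storable::nfreeze(\%conf);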

We should also look at actually storing the data as JSON rather than as a serialized blob. When the buildfarm was created we didn't have any database support for JSON, but now the sensible thing to do would be to store the data as jsonb, and make it searchable.

But that's a project for another day.

Using wal-e with Postgres on a non-standard port

I just discovered after a great deal of puzzlement and far too much time that if you run multiple instances of Postgres on a machine, and you're using wal-e for continuous backup, you'll need an entry for PGPORT or PGSERVICE in the environment directory for any instance that's not running on port 5432. Otherwise, wal-e will happily start and stop a backup on your port 5432 server and your non-5432 backups will be completely useless since they won't have a backup label.
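Concretely, with wal-e's envdir style of configuration, that means something like this for an instance running on port 5433 (paths invented for the example):

# record the port in the instance's environment directory
echo 5433 > /etc/wal-e.d/cluster2/env/PGPORT

# wal-e invocations via envdir will now talk to the right postmaster
envdir /etc/wal-e.d/cluster2/env wal-e backup-push /var/lib/pgsql/cluster2/data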

Yet another example of the fact that any untested backup is not worth anything. If I hadn't tested this and found the problem we could have been left without a backup when we needed it.

Indiscriminate use of CTEs considered harmful

Common Table Expressions are a wonderful thing. Not only are they indispensable for creating recursive queries, but they can be a powerful tool in keeping complex queries comprehensible. It's very easy to get lost in a fog of sub-sub-sub-queries, so using CTEs as building blocks can make things a lot nicer.

However, there is one aspect of the current implementation of CTEs that should make you pause. Currently CTEs are in effect materialized before they can be used. That is, Postgres runs the query and stashes the data in a temporary store before it can be used in the larger query. There are a number of consequences of this.

First, this can be a good thing. I have on a number of occasions used this fact to good effect to get around problems with poorly performing query plans. It's more or less the same effect as putting "offset 0" on a subquery.

However, it can also result in some very inefficient plans. In particular, if CTEs return a lot of rows they can result in some spectacularly poorly performing plans at the point where you come to use them. Note also that any indexes on the underlying tables will be of no help to you at all at this point, since you are no longer querying against those tables but against the intermediate result mentioned above, which has no indexes at all.
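A contrived illustration (names invented): the two queries below are logically equivalent, but with the current implementation the first materializes the whole of big_table and then filters the unindexed intermediate result, while the second can be flattened by the planner and use an index on big_table(id):

-- CTE form: the whole table is materialized first, so the index can't help
with c as (
    select * from big_table
)
select * from c where id = 42;

-- subquery form: the planner can flatten this and use the index on id
select * from (select * from big_table) s
where id = 42;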

This was brought home to me forcefully on Friday and Saturday when I was looking at a very poorly performing query. After some analysis and testing, the simple act of inlining two CTEs in the query in question resulted in the query running in 4% of the time it had previously taken. Indiscriminate use of CTEs had made the performance of this query 25 times worse.

So the moral is: be careful in using CTEs. They are not just a convenient tool for abstracting away subqueries.

There has been some discussion about removing this aspect of the implementation of CTEs. It's not something that is inherent in CTEs, it's simply a part of the way we have implemented them in PostgreSQL. However, for now, you need to be familiar with the optimization effects when using them, or you might face the same problem I was dealing with above.

Windows XP - PostgreSQL's long goodbye.

I just committed a patch to allow building with Visual Studio 2015. Due to a change in the way we need to detect locales with this compiler, it will not be possible to use binaries built with it on Windows XP. Despite the fact that Microsoft declared Windows XP to be completely out of support well over a year ago, it still lives on in a huge number of installations. My own instance still gets occasional updates from Microsoft. And you can still build and run the very latest PostgreSQL on Windows XP. But you can't use it on Windows XP if it's built with Visual Studio 2015 or later.

I will keep my instance (whose only job is to run several buildfarm members) running as long as it doesn't require any significant tweaks. But a day will eventually come when it does require such tweaks, or just won't build and run successfully any more, and at that stage I'll shut it down.

Building an extension with VS2015

I needed to get a DLL prepared of my tiny closed format ranges extension, and so I tried following the excellent instructions Craig Ringer gave a couple of years ago. I was using Visual Studio 2015, on a Windows 10 box, targeting PostgreSQL 9.4, 64bit.

But I ran into some problems. I could compile the thing quite cleanly, but Postgres couldn't find the functions when I tried to create the extension. I had a short and very helpful email exchange with Craig, and came up with this solution: the exported functions needed to be declared with a prototype before the call to PG_FUNCTION_INFO_V1, and both the prototype and the function definition needed to be decorated with PGDLLEXPORT. Once I had done that, the DLL loaded when I ran "create extension", and the types worked as expected.
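In outline, the pattern that worked looks like this (the function name and body are illustrative, not the actual extension code):

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* prototype, decorated with PGDLLEXPORT, before the PG_FUNCTION_INFO_V1 call */
PGDLLEXPORT Datum my_func(PG_FUNCTION_ARGS);

PG_FUNCTION_INFO_V1(my_func);

/* the definition gets the same decoration */
PGDLLEXPORT Datum
my_func(PG_FUNCTION_ARGS)
{
    PG_RETURN_INT32(42);
}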

I don't know if this is required with any earlier versions of Visual Studio, or even if it works with them. When I get a chance I will see if I can find out.

Based on this, I think it's probably worth trying at some stage to put together a script to build the project and solution files for extensions. I don't have time to do that right now, however.