Reinout van Rees’ weblog

Python Leiden (NL) meetup: building apps with streamlit - Daniël Kentrop

2026-04-02

Tags: python, pun

(One of my summaries of the Leiden (NL) Python meetup).

Daniël is a civil engineer turned software developer. He works for a company involved in improving the 17000 km of dikes in the Netherlands. He liked programming, but interfacing with his colleagues was a bit problematic. Jupyter notebooks without a venv? Interfaces with QT (which explode in size)? PyInstaller? In the end, he often opted for data in an excel sheet and then running a python script…

He now likes to use streamlit, a Python library for creating simple web apps, prototyping and visualisation. It has lots of built-in elements and widgets: data entry, all sorts of (plotly) charts, sliders, selectboxes, pop-ups, basically everything you need.

You can add custom components with html/css/js.

How does it work? It is basically a script. The whole page is loaded every time and re-run. A widget interaction means a re-run. Pressing a button means a re-run. There *is* state you can store on the server per session. He showed a demo demonstrating the problems caused by the constant re-running (and loss of state) and how to solve it with the session state.

He then showed a bigger streamlit demo. On a map, you could draw an area and select water level measurement stations and then show water levels of the last month. Nice.

An upcoming change to streamlit: they’re going to move from the Tornado web runner to Starlette, which also means ASGI support.

Python Leiden (NL) meetup: newwave, python package setup and cpython contribution - Michiel Beijen

2026-04-02

Tags: python, pun

(One of my summaries of the Leiden (NL) Python meetup).

Wave audio, the .wav format, is more or less the original representation of digital audio (there are competing formats like .au or .aiff). Typically it is “PCM-encoded”, just like what comes off a CD.

The .wav format was introduced by Microsoft in Windows 3.1 in 1992. But… it is still relevant for audio production and podcasts. And for lab equipment. The company Michiel works for (https://samotics.com/, sponsor of the meetup’s pizza, thanks!) records the sound of big motors and other big equipment for analysis purposes. They use .wav for this.

Around 1992, Python also started to exist. Version 1.0 (1994) included the “wave” module in its standard library. But: it is only intended for regular .wav usage, so two stereo channels and max 44.1 kHz frequency. He needed three channels (for sound recordings for the three phases of the electrical motor) and he needed higher resolution.

He showed that you could actually put three channels into a .wav with Python. Audacity loaded it just fine, but the “flac” encoder refused it, with an error about a missing WAVE_FORMAT_EXTENSIBLE setting. He started investigating the Python bug tracker and discovered someone had already provided a fix for reading wav files in that extended format.
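As a rough illustration (not Michiel’s code): the stdlib wave module happily writes more than two channels, so a minimal sketch of a three-channel file could look like this (the filename is made up):

```python
import math
import struct
import wave

# Hypothetical sketch: a short 3-channel, 16-bit PCM .wav written with
# only the standard library. "three_phase.wav" is a made-up filename.
framerate = 48000
nframes = framerate // 10  # 0.1 seconds

with wave.open("three_phase.wav", "wb") as out:
    out.setnchannels(3)  # e.g. one channel per motor phase
    out.setsampwidth(2)  # 16-bit samples
    out.setframerate(framerate)
    frames = bytearray()
    for i in range(nframes):
        sample = int(32767 * math.sin(2 * math.pi * 440 * i / framerate))
        frames += struct.pack("<hhh", sample, sample, sample)
    out.writeframes(frames)

with wave.open("three_phase.wav", "rb") as check:
    print(check.getnchannels())  # 3
```

(As the talk explained, tooling that expects WAVE_FORMAT_EXTENSIBLE may still refuse such a file, even though Python writes it without complaint.)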

So he dug further. And discovered some bugs himself and reported them. And found undocumented methods and reported them. After reporting, you should try fixing things with a pull request. He discovered a half-finished PR and worked on the basis of that. Lots of discussion in the ticket, but in the end it got merged. Another bugfix also got merged. Hurray!

But… those fixes will end up in Python 3.15, October 2026. And his company is just moving from 3.10 to 3.11… So he made a library out of it at https://codeberg.org/michielb/newwave . (And he put in some good words for https://codeberg.org , as that’s a nice github alternative: operated by volunteers under a German foundation, instead of we-have-put-all-our-open-source-eggs-in-one-basket Github, owned by Microsoft/USA.)

Python Leiden (NL) meetup: creating QR codes with python - Rob Zwartenkot

2026-04-02

Tags: python, pun

(One of my summaries of the Leiden (NL) Python meetup).

QR codes are everywhere. They’re used to transport data; the best-known example is a link to a website, but you can use them for a lot of things. A nice different usage is a WIFI connection string, something like WIFI:T:WPA;S:NetworkName;P:password;; . Rob focuses on the URL kind.

There’s a standard, ISO/IEC 18004. QR codes need to be square. With a bit of a margin around it. The cells should also be square. You sometimes see somewhat rounded cells, but that’s not according to the standard. You sometimes see a logo in the middle, but that actually destroys data! Luckily there’s error correction in the standard, that’s the only reason why it works. There’s more to QR codes than you think!

He uses segno as QR code library (instead of “qrcode”). It is more complete, allows multiple output formats and it can control error correction:

import segno

qr = segno.make("https://pythonleiden.nl/")
qr.save("leiden.png")

Such an image is very small. If you scale it, it gets blurry. And there’s no border and no error correction. We can do better:

import segno

# "h" is the "high" level of error correction: it allows
# for up to 30% corruption.
qr = segno.make("https://pythonleiden.nl/", error="h")

qr.save(
    "leiden.png",
    scale=10,
    border=4,
)

Segno can also give you the raw matrix of cells. That way you can do some further processing on it, for instance with PIL (the Python Imaging Library). As an example, he placed a logo in the middle of the QR code.

How you can work with the matrix:

# ... same as before ...
for line in qr.matrix:
    for cell in line:
        ...
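The matrix is simply rows of truthy (dark) and falsy (light) cells. A minimal sketch of what you can do with such a matrix, using a small hand-made matrix here so the example doesn’t depend on segno being installed:

```python
# Hand-made stand-in for segno's qr.matrix: rows of 0/1 cells.
# (A real segno matrix iterates the same way: rows of cells.)
matrix = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]

def to_ascii(matrix):
    """Render dark cells as '##' and light cells as two spaces."""
    return "\n".join(
        "".join("##" if cell else "  " for cell in row)
        for row in matrix
    )

print(to_ascii(matrix))
```

The same loop structure works for drawing round dots with PIL instead of ASCII characters.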

He went totally overboard with round dots and colors and a logo in the middle. At least on my phone, it still worked! Funny.

https://reinout.vanrees.org/images/2026/qr-example.png

Tombi, pre-commit, prek and uv.lock

2026-03-18

Tags: python, django

In almost all my Python projects, I’m using pre-commit to handle/check formatting and linting. The advantage: pre-commit is the only tool you need to install. Pre-commit itself reads its config file and installs the formatters and linters you defined in there.

Here’s a typical .pre-commit-config.yaml:

default_language_version:
  python: python3

repos:

The “tombi” at the end might be a bit curious. There’s already the built-in “check-toml” toml syntax checker, right? Well, tombi also does formatting and schema validation. And in a recent project, I handled configuration through toml files.

It was for a Django website where several geographical maps were shown, each with its own title, description, legend yes/no, etcetera. I made up a .toml configuration format so that a colleague could configure all those maps without needing to deal with the python code. I created a json schema as format specification (yes, json is funnily used for that purpose). With tombi, I could make sure the config files were valid.

Oh, and tombi has an LSP plugin, so my colleague got autocomplete and syntax help out of the box. Nice.

I’m also using uv a lot. That generates a uv.lock file with all the version pins. It is a toml file, but without the .toml extension, so pre-commit ignored it. Until suddenly it started complaining about the indentation. But only in a github action, not locally.

Note: the complaint about the indentation is probably correct, as there’s an issue in the uv bugtracker about changing the indentation from 4 to 2 in the lockfile.

The weird thing for me was that I pin the versions of the plugins. So the behaviour locally and on github should be the same. Some observations:

Some further debugging showed that pre-commit was actually skipping the uv.lock file locally, but apparently not on github. I did some searching in pre-commit’s source code and tombi’s pre-commit hook definition. The only relevant part there was types: [toml]. So somehow pre-commit has a definition of what a toml file is. But I couldn’t find anything.

Until I spotted that pre-commit uses identify as the means to detect file types. (Looks like a handy library, btw!) And that project had a change a couple of weeks ago that identifies uv.lock as a toml file!

Anyway: small mystery solved.

Write the docs meetup: digital sovereignty for writers - Olufunke Moronfolu

2026-03-05

Tags: writethedocs, python

(One of my summaries of the Amsterdam *write the docs* meetup).

Full title: digital sovereignty for writers: your data, your decisions. Olufunke Moronfolu has her website at https://writerwhocodes.com/ .

“Digital sovereignty is the ability to have control over your own digital destiny: the data, hardware and software that you rely on and create” (quote from the World Economic Forum).

What do writers want? Mostly: to be read. For this you could for instance start looking for (commercial) blogging platforms, searching for the best one. And after a while you start looking for a different one. On and on. You can run into problems: Substack might ban your newsletter. A Google Workspace domain being blocked. A Medium story getting deleted without feedback.

Tim Berners-Lee intended for the web to be universal and open. But now it is mostly a collection of isolated silos.

There are some questions you can ask yourself to test your sovereignty. If your current platform deletes your account, is your content completely lost? Second question: can you export your work in some portable format (like markdown)?

If you are a technical writer, you have to do the test twice. Once for your own content and once for your company’s documentation.

Own your content. Most sovereign for your own website/blog would be hugo/jekyll or other static generators. In the middle are (self-hosted?) wordpress sites. Least sovereign is something like linkedin/medium/substack. For company content, confluence/notion would be least sovereign. Wiki.js/bookstack middle. The best is docs as code like some markdown in git.

So: review the platform’s policy. What is the ease of export? Do you have control? What’s the stability? Do you have an identity there? Perhaps even a domain?

Own your identity. Having your own domain is best. If you’re some-platform.com/name, your identity goes away if the site disappears.

Decide how to share. Sovereign would be an email list, an RSS feed or something like the POSSE approach (Publish (on your) Own Site, Syndicate Elsewhere).

Build for the future: build something. Start. It doesn’t have to be perfect. Your own domain name and a single static page is already much more sovereign than a million followers on a site that could vanish tomorrow.

If you want to do more, join the “independent web” (indieweb, https://indieweb.org) movement.


Personal note: I’ve got my own domain. This is a blog entry that ends up in an RSS/atom feed. The site is .rst files in a git repo. Statically generated with Sphinx. So: yeah, pretty sovereign :-)

Write the docs meetup: developers documentation, your hidden strength - Frédéric Harper

2026-03-05

Tags: writethedocs, python

(One of my summaries of the Amsterdam *write the docs* meetup).

If you have a product, you need good developer documentation. “It is an integral part of your product: one cannot exist without the other”. You might have the best product, but if people don’t know how to use it, it doesn’t matter.

What he tells developers: good documentation reduces support tickets and angry customers. You should be able to “sell” good documentation to your company: it saves money and results in more sales.

Some notes on documentation contents:

Some extra notes:

Python Leiden meetup: PR vs ROC curves, which to use - Sultan K. Imangaliyev

2026-01-22

Tags: python, pun

(One of my summaries of the Python Leiden meetup in Leiden, NL).

Precision-recall (PR) versus Receiver Operating Characteristics (ROC) curves: which one to use if data is imbalanced?

Imbalanced data: for instance when you’re investigating rare diseases. “Rare” means few people have them. So if you have data, most of the data will be of healthy people, there’s a huge imbalance in the data.

Sensitivity versus specificity: sensitivity means you find most of the sick people, specificity means you rarely flag healthy people as sick (few false positives). Sensitivity/specificity looks a bit like precision/recall.

If you classify, you can classify immediately into healthy/sick, but you can also use a *probabilistic classifier*, which returns a chance (percentage) that someone is sick. You can then tweak which threshold you want to use: how sensitive and/or specific do you want to be?
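Not his code, but a minimal pure-Python sketch of the idea: given probabilistic scores, the threshold you pick determines precision and recall.

```python
def precision_recall(y_true, scores, threshold):
    """Classify scores >= threshold as 'sick', then compute precision/recall."""
    tp = fp = fn = 0
    for truth, score in zip(y_true, scores):
        predicted = score >= threshold
        if predicted and truth:
            tp += 1          # true positive: correctly flagged as sick
        elif predicted and not truth:
            fp += 1          # false positive: healthy but flagged
        elif not predicted and truth:
            fn += 1          # false negative: sick but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Imbalanced toy data: only 2 "sick" cases out of 10.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2, 0.1, 0.3, 0.1, 0.2, 0.1, 0.4]
print(precision_recall(y_true, scores, 0.5))
```

Sweeping the threshold from 0 to 1 and plotting the resulting (precision, recall) pairs gives you exactly the PR curve from the talk.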

PR and ROC curves (curve = graph showing the sensitivity/specificity relation on two axes) are two ways of measuring/visualising the sensitivity/specificity relation. He showed some data: if the data is imbalanced, PR is much better at evaluating your model. He compared balanced and imbalanced data with ROC and there was hardly a change in the curve.

He used scikit-learn for his data evaluations and demos.

Python Leiden meetup: PostgreSQL + Python in 2026 – Aleksandr Dinu

2026-01-22

Tags: python, pun

(One of my summaries of the Python Leiden meetup in Leiden, NL).

He’s going to revisit common gotchas of Python ORM usage, plus some PostgreSQL-specific tricks.

ORMs (object relational mappers) define tables, columns etc. using Python concepts: classes, attributes and methods. In your software, you work with objects instead of rows. They can help with database schema management (migrations and so on). It looks like this:

class Question(models.Model):
    question = models.CharField(...)
    answer = models.CharField(...)

You often have Python “context managers” for database sessions.
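For example (a sqlite3 sketch, since that is in the standard library): the connection acts as a context manager that commits on success and rolls back on an exception:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE question (text TEXT)")

# The "with" block commits automatically on success;
# on an exception it rolls the transaction back.
with conn:
    conn.execute("INSERT INTO question VALUES ('What is 6 x 7?')")

count = conn.execute("SELECT COUNT(*) FROM question").fetchone()[0]
print(count)  # 1
```

ORM session objects (SQLAlchemy sessions, for instance) follow the same pattern.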

ORMs are handy, but you must beware of what you’re fetching:

# Bad: grabs all objects and then takes the length using Python:
questions_count = len(Question.objects.all())

# Good: let the database do it;
# this does the equivalent of "SELECT COUNT(*)":
questions_count = Question.objects.all().count()

Relational databases allow 1:M and N:M relations. You use them with JOIN in SQL. If you use an ORM, make sure you use the database to follow the relations. If you first grab the first set of objects and then grab the second kind of objects with python, your code will be much slower.
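The same point in a small sqlite3 sketch (plain SQL instead of an ORM, but the pattern is identical): one JOIN beats a query per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT,
                       author_id INTEGER REFERENCES author(id));
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book VALUES (1, 'X', 1), (2, 'Y', 1), (3, 'Z', 2);
""")

# Slow "N+1" pattern: one extra query per book to fetch its author.
slow = []
for book_id, title, author_id in conn.execute(
        "SELECT id, title, author_id FROM book ORDER BY id"):
    (author,) = conn.execute(
        "SELECT name FROM author WHERE id = ?", (author_id,)).fetchone()
    slow.append((title, author))

# Fast pattern: a single JOIN; the database follows the relation.
fast = list(conn.execute(
    "SELECT book.title, author.name FROM book "
    "JOIN author ON author.id = book.author_id ORDER BY book.id"))

print(slow == fast)  # True: same result, but the JOIN is one query
```

In Django terms this is what select_related/prefetch_related are for: they make the ORM issue the JOIN instead of the per-row queries.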

“Migrations” generated by your ORM to move from one version of your schema to the next are really handy. But not all SQL concepts can be expressed in an ORM: custom types, stored procedures. You have to handle those yourself. You can also get undesired behaviour, as certain schema changes can take the database a long time to rebuild.

Migrations are nice, but they can lead to other problems from a database maintainer’s point of view, like the performance suddenly dropping. And optimising is hard, as you often don’t know which server is connecting how much, nor what is being queried. Some solutions for PostgreSQL:

If you’ve found a slow query, run it with EXPLAIN (ANALYZE, BUFFERS) the-query. BUFFERS tells you how many 8 kB pages the server uses for your query (and whether those were memory or disk pages). This is so useful that they made it the default in PostgreSQL 18.

Some tools:

Ansible-lint pre-commit problem + “fix”

2025-11-19

Tags: python, django

I’m used to running pre-commit autoupdate regularly to update the versions of the linters/formatters that I use. Especially when there’s some error.

For example, a couple of months ago, there was some problem with ansible-lint. You have an ansible-lint, ansible and ansible-core package and one of them needed an upgrade. I’d get an error like this:

ModuleNotFoundError: No module named 'ansible.parsing.yaml.constructor'

The solution: pre-commit autoupdate, which grabbed a new ansible-lint version that solved the problem. Upgrading is good.

But… a little over a month ago, ansible-lint pinned Python to 3.13 in its pre-commit hook. So when you update, you suddenly need to have 3.13 on your machine. I have that locally, but on the often-used “ubuntu-latest” (24.04) github action runner, only 3.12 is installed by default. Then you’d get this:

[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/ansible-community/ansible-lint.git.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/opt/hostedtoolcache/Python/3.12.12/x64/bin/python', '-mvirtualenv', '/home/runner/.cache/pre-commit/repomm4m0yuo/py_env-python3.13', '-p', 'python3.13')
return code: 1
stdout:
    RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.13'
stderr: (none)
Check the log at /home/runner/.cache/pre-commit/pre-commit.log
Error: Process completed with exit code 3.

Ansible-lint’s pre-commit hook needs 3.10+ or so, but won’t accept anything except 3.13. Here’s the change: https://github.com/ansible/ansible-lint/pull/4796 (including some comments that it is not ideal, including the github action problem).

The change apparently gives a good error message to people running too-old Python versions, but it punishes those who do regular updates (and have perfectly fine non-3.13 Python versions). A similar pin was done in “black” and later reverted (see the comments on this issue) as it caused too many problems.

Note: this comment gives some of the reasons for hardcoding 3.13. Pre-commit itself doesn’t have a way to specify a minimum Python version. Apparently old Python versions can lead to weird install errors, though I haven’t found a good ticket about that in the issue tracker. The number of issues in the tracker is impressively high, so I can imagine such a hardcoded version helping a bit.

Now on to the “fix”. Override the language_version like this:
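Something like this in .pre-commit-config.yaml (a sketch; the rev is whatever pre-commit autoupdate gives you):

```yaml
- repo: https://github.com/ansible-community/ansible-lint.git
  rev: ...  # whatever "pre-commit autoupdate" picked
  hooks:
    - id: ansible-lint
      language_version: python3.12  # override the hardcoded 3.13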

If you use ansible-lint a lot (like I do), you’ll have to add that line to all your (django) project repositories when you update your pre-commit config…

I personally think this pinning is a bad idea. After some discussion in issue 4821, I created a sub-optimal proposal to at least set the default to 3.12, but that issue was closed and locked because I apparently “didn’t search the issue tracker”.

Anyway, this blog post hopefully helps people adjust their many pre-commit configs.

Python Leiden (NL) meetup summaries

2025-11-13

Tags: python, pun

My summaries from the sixth Python meetup in Leiden (NL).

Python and MongoDB, a perfect marriage - Mathijs Gaastra

His first experience with Mongodb was when he had to build a *patient data warehouse* based on literature. He started with postgres, but the fixed table structure was very limiting. Mongodb was much more flexible.

Postgres is a relational database, Mongodb is a document database. Relational: tables, clearly defined relationships and a pre-defined structure. Document/nosql: documents, flexible relationships and a flexible structure.

Nosql/document databases can scale horizontally. Multiple servers, connected. Relational databases have different scaling mechanisms.

Why is mongo such a nice combination with python?

He showed example python code, comparing a mysql example with a Mongodb version. The Mongodb version did indeed look simpler.

The advantage of Mongodb (the freedom) also is its drawback: you need to do your own validation and your own housekeeping, otherwise your data slowly becomes unusable.

Mathijs is now only using Mongodb, mostly because of the speed of development he enjoys with it.

Identifying “blast beats” in music using Python - Lino Mediavilla

He showed a couple of videos of drummers, some with and some without “blast beats”. In metal (if I understood correctly) it means a lot of bass drum, but essentially also a “machine gun” on the snare drum. He likes this kind of music a lot, so he wanted to analyse it programmatically.

He used the demucs library for his blast beat counter project. Demucs separates different instruments out of a piece of music.

With fourier transforms, he could analyse the frequencies. Individual drum sounds (snare drum hit, bass drum hit) were analysed this way.
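Not his code, but a minimal numpy sketch of the idea (assuming numpy is available): a Fourier transform reveals the dominant frequency of a short sound.

```python
import numpy as np

# A synthetic "drum hit": one second of a 440 Hz tone at 8 kHz.
samplerate = 8000
t = np.arange(samplerate) / samplerate
signal = np.sin(2 * np.pi * 440 * t)

# Fourier transform: which frequency bin has the most energy?
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / samplerate)
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # 440.0
```

Real drum hits contain a spread of frequencies rather than one peak, but comparing spectra like this is how individual drum sounds can be told apart.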

With the analysed frequency bits, they could recognise them in a piece of music and count occurrences and pick out the blast beats. He had some nice visualisations, too.

He was asked to analyse “never gonna give you up” by Rick Astley :-) Downloading it from youtube, separating out the drums, analysing it, visualising it: it worked! Nice: live demo. (Of course there were no blast beats in the song.)

Deploying Python apps on your own infra with Github actions - Michiel Beijen

Live demo time again! He built a quick jekyll site (static site generator) and he’s got a small Hetzner server. Just a bit of apache config and he’s got an empty directory that’s being hosted on a domain name. He quickly did this by hand.

Next he added his simple code to a git repo and uploaded it to github.

A nice trick for Github actions is self-hosted runners. They’re easy to install: just follow the instructions on Github.

The runner can then run what’s in your github’s action, like “generate files with jekyll and store them in the right local folder on the server”.

The runner runs on your server, running your code: a much nicer solution than giving your ssh key to Github and having it log into your server. You can also use it on some local computer without an external address: the runner polls Github instead of Github sending you messages.

The auto-deploy worked. And while he was busy with his demo, two PRs with changes to the static website had already been created by other participants. He merged them and the site was indeed updated right away.
