Matthias Kestenholz: Posts about Django

https://406.ch/writing/category-django/ (updated 2026-04-08)

Switching all of my Python packages to PyPI trusted publishing
https://406.ch/writing/switching-all-of-my-python-packages-to-pypi-trusted-publishing/ (2026-04-08)

As I teased on Mastodon, I’m switching all of my packages to PyPI trusted publishing. I have used it to release django-debug-toolbar a few times but never set it up myself; the process seemed tedious.

The malicious releases uploaded to PyPI two weeks ago and the blog post about digital attestations in pylock.toml finally pushed me to make the switch. All of my PyPI tokens have been revoked so there is no quick shortcut.

Note

I’m also looking at other code hosting platforms. I was using git before GitHub existed and I’ll probably still be using git when GitHub has completed its enshittification. For now the cost/benefit ratio of staying on GitHub is still positive for me, and since trusted publishing isn’t available everywhere, it is GitHub anyway.

In the end, switching an existing project was easier than expected. I have completed the process for django-prose-editor and feincms3-cookiecontrol.

For my future benefit, here are the step-by-step instructions I have to follow:

  1. Have a package which is buildable using e.g. uvx build

  2. On PyPI add a trusted publisher in the project’s publishing settings:

    • Owner: matthiask, feincms, feinheit, whatever the user or organization’s name is.
    • Repository: django-prose-editor
    • Workflow name: publish.yml
    • Environment: release
  3. In the GitHub repository, create a release environment in Settings / Environments. Add myself and potentially other releasers as required reviewers. I allow self-reviews and disallow administrators from bypassing the protection rules.

  4. Run git tag x.y.z and git push, no more uvx twine or hatch publish.

  5. Approve the release in the actions tab on the repository.

  6. Either enjoy or swear and repeat the steps.

I’m happy with testing the release process in production. The older I get the less I care if people think I’m stupid. That’s also why feincms3-cookiecontrol 1.7.0 doesn’t exist, only 1.7.1 – the process failed and I had to bump the patch version and try again.

Copy the publish.yml from a known good place, for example from the django-prose-editor repository. I have added an if: github.repository == 'feincms/django-prose-editor' condition which ensures that the workflow only runs in the main repository; that’s optional if you don’t care about failing workflows in forks.
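For reference, here is a hedged sketch of what such a publish.yml typically looks like. The repository name, tag pattern and build invocation are assumptions; copy a known-good file from an existing project instead of this sketch.

```yaml
# Sketch of a PyPI trusted publishing workflow; names are assumptions.
name: Publish to PyPI

on:
  push:
    tags: ["*"]

jobs:
  publish:
    # Optional: only run in the main repository, not in forks.
    if: github.repository == 'feincms/django-prose-editor'
    runs-on: ubuntu-latest
    environment: release  # must match the environment configured on PyPI
    permissions:
      id-token: write  # required for trusted publishing (OIDC)
    steps:
      - uses: actions/checkout@v4
      - name: Build the package
        run: pipx run build
      - name: Upload to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```

The id-token: write permission is what lets GitHub mint the short-lived OIDC token that PyPI verifies against the trusted publisher configuration, replacing long-lived API tokens.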

LLMs for Open Source maintenance: a cautious case
https://406.ch/writing/llms-for-open-source-maintenance-a-cautious-case/ (2026-03-25)

When ChatGPT appeared on the scene I was very annoyed at all the hype surrounding it. Since I’m working in the fast moving and low margin business of communication and campaigning agencies I’m surrounded by people eager to jump on the hype train when a tool promises to lessen the workload and take stuff from everyone’s plate.

These discussions, coupled with the fact that the training of these tools required unfathomable amounts of stealing, made me very reluctant to try them out. I’m using the word stealing here on purpose, since that’s exactly the crime Aaron Swartz was accused of by the U.S. Attorney’s office for the District of Massachusetts. It’s frustrating that some people can get away with the same crime when it is so much bigger; OpenAI and Anthropic downloaded much more data than Aaron ever did.

A somewhat related thing happened with the too-big-to-fail banks: There, the people at the top were even compensated with golden parachutes at the end. LLM companies seem to be above accountability too.

Despite all this, I have slowly started integrating these tools into my workflows. I don’t remember the exact point in time, but since some time in 2025 my opinion of their utility has started to change. At the beginning, I always removed the attribution and took great care to write and rewrite the code myself, only using the LLMs for inspiration and maybe to generate integration tests. More and more I have to admit that they are useful, especially in time-constrained projects with a clear focus and purpose.

Last month I fixed and/or closed all open issues in the django-tree-queries repository with the help of Claude Code. Is that a good thing? It could be argued I should have done the work myself. But I wouldn’t have — I have other things I want to do with my time. I don’t want to (always) work on Open Source software in the evening. I definitely also have leaned heavily on LLMs when working on django-prose-editor.

Is faster better?

We can produce more code, more features and close tickets faster than before. In my experience the speed-up isn’t as big as some people may want us to believe, but it’s there. And contrary to what people in my LinkedIn feed say, that’s not an obviously good thing. Is it a race to the bottom where we drown in LLM-generated slop in quantities impossible to maintain? It doesn’t feel like that – but it’s a race that could go both ways. Throwaway code can be thrown away though, and well-tested code does what the tests say, which is good enough according to my rules for releasing open source software.

Speaking as someone who has put more into the training set than they’ve taken out so far, I don’t feel all that bad using the tools. Coding agents can already be run locally with reasonable hardware requirements, at least during inference, which is where the ongoing cost sits. Maybe using them is still rationalization. But contribution and profit needing to stay in some rough balance feels like the right frame. Total abstinence isn’t the only ethical choice we have.

Community tensions

What makes me less comfortable is how communities are reacting. There are real concerns within the Django world, and not just the practical one of overworked maintainers wading through hastily generated patches that don’t actually fix anything. The deeper worry is about the communal nature of contribution: that working on Django is supposed to be a learning experience, a way into the community, and that using an LLM as a vehicle rather than a tool hollows out that process. Reviewers end up interacting with what is essentially a facade, unable to tell whether anyone actually understood the problem. That’s a real concern and I don’t want to dismiss it.

But it maps onto a different situation from what I’ve been describing. Using Claude Code to close issues in projects I maintain and understand is not the same as using it to paper over gaps in comprehension on a ticket in someone else’s project. Whether LLM-assisted contributions to Django itself are appropriate is a difficult question; whether it’s appropriate to use them when maintaining your own software less so.

There’s also a harder tension around quality. Django’s conservatism has real value: rigorous review, minimal magic, a coherent philosophy. The ORM and template system don’t need to reinvent themselves; they work well and are still evolving while staying rock-solid for all my use cases. And reading the release notes always brings me joy. But it could be more exciting more often. Quality isn’t a strictly positive thing; everything has costs. It’s not great if the price of that high bar is that legitimate bugs sit open for years because nobody has a few evenings to spend on them. It happened with django-tree-queries before I went through it with Claude Code. I think the bar for contributing to Django is too high. I would value a little more motion and a little less stability, even as someone running dozens of Django websites and apps.

Then there’s the pile-on dynamic that plays out on Mastodon and GitHub. When the Harfbuzz and chardet maintainers disclosed LLM usage, the reaction from some corners was something to behold. People expressing what amounted to personal grievance over tooling choices in projects they may not even use. There’s a particular kind of entitlement in telling a maintainer – who is keeping software alive, possibly even in their spare time – that the way they choose to do that work is an affront. Open source is a gift, whether paid or not, and nobody has to accept it, but disclosing your tooling isn’t an invitation for complaints. The ethical concerns about training data, resource use and other negative externalities are legitimate and worth raising. Performative outrage directed at individual maintainers is not the same thing.

I don’t have an easy conclusion. The tools are useful, the ethics are murky, and communities are still figuring out how to respond. A cautious, honest use of them feels better to me than the alternatives.

Weeknotes (2026 week 11)
https://406.ch/writing/weeknotes-2026-week-11/ (2026-03-11)

Last time I wrote that I seem to be publishing weeknotes monthly. Now, a quarter of a year has passed since the last entry. I do enjoy the fact that I have published more posts focused on a single topic. That said, what has been going on in open source land is certainly interesting too.

LLMs in Open Source

I have started a longer piece to think about my stance regarding using LLMs in Open Source. The argument I’m thinking about is that there’s a balance between LLMs having ingested all of my published open source code and myself using them now to help myself and others again.

The happenings in the last two weeks (think Pentagon, Iran, and the bombings of schools) have again brought to the foreground the perils of using those tools. I therefore haven’t been motivated to pursue this train of thought for the moment. When the upsides are somewhat questionable and tentative and the downsides are so clear and impossible to miss, it’s hard to use my voice to speak in favor of these tools.

That said, all the shaming when someone uses an LLM that I see in my Mastodon feed also annoys me. I’ll quote part of a post here which I liked and leave it at that for the moment:

The AI hype-cyclone is bad, but so is the anti-AI witch hunt. Commits co-authored by Claude do not mean that a project has “abandoned engineering as a serious endeavor”

[…]

@nedbat on Mastodon

Other goings-on

Releases since December

Rich text editors: How restrictive can we be?
https://406.ch/writing/rich-text-editors-how-restrictive-can-we-be/ (2025-12-17)

How restrictive should a rich text editor be? It’s a question I keep coming back to as I work on FeinCMS and Django-based content management systems.

The last blog post specifically about django-prose-editor, Menu improvements in django-prose-editor, was published in August 2025. Its most interesting part was the short mention of the TextClass extension at the bottom, which allows adding a predefined list of CSS classes to arbitrary spans of text.

In the meantime, I have spent a lot of time working on extensions that try to answer this question: the TextClass extension for adding CSS classes to inline text, and more recently the NodeClass extension for adding classes to nodes and marks. It’s high time to write a post about it.

Rich Text editing philosophy

All of this convinced me that offering the user a rich text editor with too many capabilities is a really bad idea. The rich text editor in FeinCMS only has bold, italic, bullets, links and headlines activated (and the HTML code button, because that’s sort of inevitable – sometimes the rich text editor messes up and you cannot fix it other than by going directly into the HTML code. Plus, if someone really knows what they are doing, I’d still like to give them the power to shoot themselves in the foot).

Commit in the FeinCMS repository, August 2009, current version from django-content-editor design decisions

Should we let users shoot themselves in the foot?

Giving power users an HTML code button would have been somewhat fine if only the editors themselves were affected. Unfortunately, that was not the case.

As a team we have spent more time than we ever wanted debugging strange problems only to find out that the culprit was a blob of CSS or JavaScript inserted directly into an unsanitized rich text editor field. We saw everything from a few reasonable and well scoped lines of CSS to hundreds of KiBs of hotlinked JavaScript code that broke layouts, caused performance issues, and possibly even created security vulnerabilities.

We have one more case of Betteridge’s law of headlines here.

The pendulum swings

The first version of django-prose-editor which replaced the venerable CKEditor 4 in our project was much more strict and reduced – no attributes, no classes, just a very short list of allowlisted HTML tags in the schema.

We quickly hit some snags. When users needed similar headings with different styles, we worked around it by using H2 and H3 — not semantic at all. I wasn’t exactly involved in this decision; I just didn’t want to rock the boat too much, since I was so happy that we were even able to use the more restricted editor at all in this project.

Everything was good for a while, but more and more use cases cropped up until it was clear that something had to be done about it. First, the TextClass extension was introduced to allow adding classes to inline text, and later also the NodeClass extension mentioned above. This was a compromise: the customer wanted inline styles, and we wanted as little customizability as possible without getting in the way.

That said, we obviously had to move a bit; going back to a less strict editor or even offering an HTML blob injection would have been worse. If we try to be too restrictive we will probably have to go back to allowing everything some way or the other, after all:

The more you tighten your grip, Tarkin, the more star systems will slip through your fingers.

– Princess Leia

Combining CSS classes

The last words are definitely not spoken just yet. As teased on Mastodon at the beginning of this month I am working on an even more flexible extension which unifies the NodeClass and TextClass extensions into a single ClassLoom extension.

The code is getting real-world use now, but I’m not ready to integrate it into the official repository yet. However, you can use it if you want; it’s 1:1 the version from a project repository. Get the ClassLoom extension here.

This extension also allows combining classes on a single element. If you have 5 colors and 3 text styles, you’d have to add 15 combinations if you were only able to apply a single class. Allowing combinations brings the number of classes down to manageable levels.
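The arithmetic behind this can be made concrete with a tiny sketch (the class names here are made up for illustration):

```python
# Illustrative only: hypothetical color and text-style classes.
colors = ["red", "green", "blue", "cyan", "magenta"]
styles = ["lead", "small", "caption"]

# One class per span: every color/style pairing needs its own combined class.
single_class_variants = len(colors) * len(styles)

# Combinable classes: a color and a style can be stacked on the same element.
combinable_classes = len(colors) + len(styles)

print(single_class_variants, combinable_classes)  # 15 8
```

The multiplicative blow-up is exactly why single-class application stops scaling as soon as more than one styling axis exists.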

Conclusion

So, back to the original question: How restrictive can we be?

The journey from CKEditor 4’s permissiveness through django-prose-editor’s initial strictness to today’s ClassLoom extension has been one of finding that balance. Each extension — TextClass, NodeClass, and now ClassLoom — represents a step toward controlled flexibility: giving content editors the styling options they need while keeping the content structured, maintainable, and safe.

Weeknotes (2025 week 49)
https://406.ch/writing/weeknotes-2025-week-49/ (2025-12-05)

I seem to be publishing weeknotes monthly, so I’m now thinking about renaming the category :-)

Mosparo

I have started using a self-hosted mosparo instance for my captcha needs. It’s nicer than Google reCAPTCHA. Also, not sending data to Google and not training AI models on traffic signs feels better.

Fixes for the YouTube 153 error

Simon Willison published a nice writeup about YouTube embeds failing with a 153 error. We have also encountered this problem in the wild and fixed the feincms3 embedding code to also set the required referrerpolicy attribute.
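As a hedged sketch (this is not the actual feincms3 code, and the helper name and remaining attributes are assumptions), the fix amounts to making sure generated embeds carry the referrerpolicy attribute:

```python
from html import escape


def youtube_embed(video_id: str) -> str:
    # Without a referrerpolicy attribute, YouTube embeds on some sites
    # now fail with playback error 153.
    src = f"https://www.youtube-nocookie.com/embed/{escape(video_id)}"
    return (
        f'<iframe src="{src}" '
        'referrerpolicy="strict-origin-when-cross-origin" '
        'loading="lazy" allowfullscreen></iframe>'
    )
```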

Updated packages since 2025-11-04

Thoughts about Django-based content management systems
https://406.ch/writing/thoughts-about-django-based-content-management-systems/ (2025-11-05)

I have almost exclusively used Django for implementing content management systems (and other backends) since 2008.

In this time, content management systems have come and gone. The big three systems many years back were django CMS, Mezzanine and our own FeinCMS.

During all this time I have always kept an eye open for other CMS than our own but have steadily continued working in my small corner of the Django space. I think it’s time to write down why I have been doing this all this time, for myself and possibly also for other interested parties.

Why not use Wagtail, django CMS or any of those alternatives?

Let’s start with the big one. Why not use Wagtail?

The Django administration interface is actually great. Even though some people say that it should be treated as a tool for developers only, recent improvements to the accessibility and the general usability suggest otherwise. I have written more about my views on this in The Django admin is a CMS. Using and building on top of the Django admin is a great way to immediately profit from all current and future improvements without having to reimplement anything.

I don’t want to have to reimplement Django’s features, I want to add what I need on top.

Faster updates

Everyone implementing and maintaining other CMS is doing a great job and I don’t want to throw any shade. I still feel that it’s important to point out that systems can make it hard to adopt new Django versions on release day:

These larger systems have many more (very talented) people working on them. I’m not saying I’m doing a better job. I’m only pointing out that I’m following a different philosophy where I’m conservative about running code in production and I’d rather have less features when the price is a lot of maintenance later. I’m always thinking about long term maintenance. I really don’t want to maintain some of these larger projects, or even parts of them. So I’d rather not adopt them for projects which hopefully will be developed and maintained for a long time to come. By the way: This experience has been earned the hard way.

The rule of least power

From Wikipedia:

In programming, the rule of least power is a design principle that “suggests choosing the least powerful [computer] language suitable for a given purpose”. Stated alternatively, given a choice among computer languages, classes of which range from descriptive (or declarative) to procedural, the less procedural, more descriptive the language one chooses, the more one can do with the data stored in that language.

Django itself already provides lots and lots of power. I’d argue that a very powerful platform on top of Django may be too much of a good thing. I’d rather keep it simple and stupid.

Editing heterogeneous collections of content

Django admin’s inlines are great, but they are not sufficient for building a CMS. You need something to manage different types. django-content-editor does that and has done that since 2009.

When Wagtail introduced the StreamField in 2015 it was definitely a great addition to an already great CMS, but it wasn’t a new idea in general, and not a new thing in Django land either. They didn’t claim it was, and I welcomed the fact that they, too, started using a better way to structure content.

Structured content is great. Putting everything into one large rich text area isn’t what I want. Django’s ORM and admin interface are great for actually modelling the data in a reusable way. And when you need more flexibility than what’s offered by Django’s forms, the community offers many projects extending the admin. These days, I really like working with the django-json-schema-editor component; I even reference other model instances in the database and let the JSON editor handle the referential integrity transparently for me (so that referenced model instances do not silently disappear).

More reading

The future of FeinCMS and the feincms category may be interesting. Also, I’d love to talk about these thoughts, either by email or on Mastodon.

Weeknotes (2025 week 45)
https://406.ch/writing/weeknotes-2025-week-45/ (2025-11-04)

Autumn is nice

I love walking through the forest with all the colors and the rustling when you walk through the leaves on the ground.

Updated packages since 2025-10-23

Weeknotes (2025 week 43)
https://406.ch/writing/weeknotes-2025-week-43/ (2025-10-23)

I published the last weeknotes entry in the first half of September.

Drama in OSS

I have been following the Ruby gems debacle a bit. Initially at Feinheit we used our own PHP-based framework swisdk2 to build websites. This obviously didn’t scale and I was very annoyed with PHP, so I was looking for alternatives.

I remember comparing Ruby on Rails and Django, and decided to switch from PHP/swisdk2 to Python/Django for two reasons: the automatically generated admin interface, and the fact that Ruby source code just had too many punctuation characters for my taste. It’s a very whimsical reason and I don’t put much weight on it. That being said, given how some of the prominent figures in Ruby/Rails land behave, I’m very, very glad to have chosen Python and Django. While not everything is perfect (it never is), at least those communities agree that trying to behave nicely towards each other is something to be cheered and not something to be sneered at.

Copilot

I assigned some GitHub issues to Copilot. The result wasn’t very useful. I don’t know if I want to repeat it; local tools work fine when I really need them.

Python and Django compatibility

It’s that time again to update the GitHub actions matrix and the Trove classifiers. I do not like doing it. You can expect all maintained packages to be compatible with the latest and best versions; no upper bounds necessary. Man, if only AI could automate those tasks…

Updated packages since 2025-09-10

My favorite Django packages
https://406.ch/writing/my-favorite-django-packages/ (2025-10-22)

Inspired by other posts I also wanted to write up a list of my favorite Django packages. Since I’ve been working in this space for so long and since I’m maintaining quite a large list of packages I worry a bit about tooting my own horn too much here; that said, the reasons for choosing some packages hopefully speak for themselves.

Also, I’m sure I’m forgetting many many packages here. Sorry for that in advance.

Core Django

Data structures

CMS building

I have been working on FeinCMS since 2009. So, it shouldn’t surprise anyone that this is still my favorite way to build CMS on top of Django. I like that it’s basically a thin layer on top of Django’s administration interface and doesn’t want to take over the whole admin interface or even the whole website.

Working with external content

PDF generation

Testing and development

Last but not least, I really like django-debug-toolbar. So much that I have even been helping with its maintenance since 2016.

Serving

We mostly use Kubernetes to serve websites these days. Inside the pods, I’m working with the granian RSGI/ASGI server and with blacknoise for serving static files.

LLMs are making me a better programmer…
https://406.ch/writing/llms-are-making-me-a-better-programmer/ (2025-09-12)

I’m still undecided about LLMs for programming. Sometimes they are very useful, especially when working on a clearly stated problem within a delimited area. Cleaning the code up afterwards is painful and takes a long time though. Even for small changes I’m unsure if using LLMs is a way to save (any) resources, be it time, water, energy or whatever.

They do help me get started, and help me be more ambitious. That’s not a new idea. Simon Willison wrote a post about this in 2023 and the more I think about it or work with AI the more I think it’s a good way to look at it.

A recent example which comes to mind is writing end-to-end tests. I can’t say I had a love-hate relationship with end-to-end testing; it was mostly a hate-hate relationship. I hate writing them because it’s so tedious, and I hate debugging them because of all the timing issues and the general flakiness of end-to-end testing. And I especially hate the fact that those tests break all the time when changing the code, even when the changes are mostly unrelated.

When I discovered that I could just use Claude Code to write those end-to-end tests I was ecstatic. Finally a way to add relevant tests to some of my open source projects without having to do all this annoying work myself! Unfortunately, I quickly discovered that Claude Code decided (ha!) it’s more important to make tests pass than to actually exercise the functionality in question. When some HTML/JavaScript widget wouldn’t initialize, why not just manipulate innerHTML so that the DOM looks as if the JavaScript actually ran? Of course, that’s a completely useless test. The amount of prodding and instructing the LLM agent required to stop it from adding workarounds and fallbacks everywhere was mindboggling. Also, since tests are code which has to be maintained in the future, does generating a whole lot of code actually help? The amount of code involved certainly wasn’t a big help when I really had to dig into it to debug a gnarly issue, and the way the test was written didn’t exactly help either!

I didn’t want to go back to the previous state of things when I had only backend tests though, so I had to find a better way.

Playwright codegen to the rescue

I already had some experience with Playwright codegen, having used it for testing some complex onboarding code for a client project a few years back. So I was already aware that I could run the browser, click through the interface myself, and Playwright would generate some of the required Python code for the test itself.

This worked fine for a project, but what about libraries? There, I generally do not have a full project ready to be used with ./manage.py runserver and Playwright. So, I needed a different solution: Running Playwright from inside a test!

If your test uses LiveServerTestCase, all you have to do is insert the following lines into the body of your test, directly after creating the necessary data in the database (using fixtures, or probably better yet something like factory-boy):

import subprocess

print(f"Live server URL: {self.live_server_url}")
subprocess.Popen(["playwright", "codegen", f"{self.live_server_url}/admin/"])
input("Press Enter when done with codegen...")

Or of course the equivalent invocation using live_server.url when using the live_server fixture from pytest-django.

Of course Tim pointed me towards page.pause(). I didn’t know about it; I think it’s even better than what I discovered, so I’m probably going to use that one instead. I still think writing down the discovery process makes sense.

So now, when LiveServerTestCase is already set up and I already have a sync Playwright context lying around, I can just do:

page = context.new_page()
page.pause()

TLDR

Claude Code helped me get off the ground with adding end-to-end tests to my projects. Now, my tests are better because – at least for now – I’m not using AI tools anymore.

Weeknotes (2025 week 37)
https://406.ch/writing/weeknotes-2025-week-37/ (2025-09-10)

I’m having a slow week after the last wisdom tooth extraction – finally, that was the last one! I’m slowly recuperating from it.

I’m trying to split up the blog posts a bit and writing more standalone pieces instead of putting everything into weeknotes. Publishing more focussed pieces sounds like a good thing and should also help me with finding my own writing later.

Releases

Weeknotes (2025 week 35)
https://406.ch/writing/weeknotes-2025-week-35/ (2025-08-29)

Summer was and is nice. The hot days seem to be over (for now), but in the last years summer hasn’t really left until the end of September, so we’ll see. I personally like the warm weather but I really hoped that our leaders were smarter. The climate emergency could be seen from far away. The pigheadedness is hard to stomach. And of course it’s not the only problem we’re facing as humanity at all.

Releases

I did some longer-form writing about two of the releases here: Menu improvements in django-prose-editor and django-content-editor now supports cloning of content

django-content-editor now supports cloning of content
https://406.ch/writing/django-content-editor-cloning/ (2025-08-27)

What is the content editor?

Django’s builtin admin application provides a really good and usable administration interface for creating and updating content. django-content-editor extends Django’s inlines mechanism with an interface and tools for managing and rendering heterogeneous collections of content, as are often necessary for content management systems.

We are using django-content-editor in basically all projects, as a part of feincms3. The content editor is used not only for building page content, but also for blog entries, for building multi-step intelligent form wizards, for learning units and even to digitize teaching materials for schools, including static and interactive content.

The great thing about it is that it enables us to edit complex content inside Django’s administration interface without trying to replace it with a completely separate interface, as some other more well-known Django-based CMS want to do.

Cloning content

The complexity of managed content has grown a bit, especially since we introduced support for nesting sections. Teaching materials are often available in several learning levels, with only minor differences between them. Unfortunately, the differences aren’t purely additive: It’s not the case that higher levels just have more materials available. Otherwise, we’d probably have used a level on content items to hide content which shouldn’t be shown to students. Content is sometimes totally different. Because of this we’re using content editor’s regions for the learning level, one region per level.

Even then, the basic structure is often the same and building that manually for all levels is annoying at best. That’s why I finally got the occasion to add support for cloning content between regions to the editor.

Of course, cloning should also take the other features into account and allow selecting sections as a whole instead of having to select individual items. Here’s a screenshot of the current interface:

Screenshot showing the content cloning interface

Closing thoughts

I’m still really happy with the content editor. I wish the Django admin looked a little nicer, because then people would probably be more encouraged to actually learn how powerful it is. The first impression is unfortunately that it looks old and a bit too technical, but in my experience working with many, many customers that impression doesn’t hold: most people are immediately able to work with it, find the interface well structured, and appreciate the no-bullshit attitude, because working with it really is efficient.

Menu improvements in django-prose-editor
https://406.ch/writing/menu-improvements-in-django-prose-editor/ (2025-08-23)

I have repeatedly mentioned the django-prose-editor project in my weeknotes but I haven’t written a proper post about it since rebuilding it on top of Tiptap at the end of 2024.

Much has happened in the meantime. A lot of work went into the menu system (as alluded to in the title of this post), but by no means does that cover all the work. As always, the CHANGELOG is the authoritative source.

0.11 introduced HTML sanitization which only allows the HTML tags and attributes that can be added through the editor interface. Previously, we used nh3 to clean up HTML and protect against XSS, but now we can be much stricter and use a restrictive allowlist.

We also switched to using ES modules and importmaps in the browser.

Last but not least 0.11 also introduced end-to-end testing using Playwright.

The main feature in 0.12 was the switch to Tiptap 3.0 which fixed problems with shared extension storage when using several prose editors on the same page.

In 0.13 we switched from esbuild to rslib. Esbuild’s configuration is nicer to look at, but rslib is built on the very powerful rspack which I’m using everywhere.

In 0.14, 0.15 and 0.16 the Menu extension was made more reusable and the way extensions can register their own menu items was reworked.

The upcoming 0.17 release (alpha releases are available and I’m using them in production right now!) is a larger release again and introduces a completely reworked menu system. The menu now not only supports button groups and dialogs but also dropdowns directly in the toolbar. This allows for example showing a dropdown for block types:

Screenshot showing prose editor dropdowns

The styles are the same as those used in the editor interface.

The same interface can not only be used for HTML elements, but also for HTML classes. Tiptap has a TextStyle extension which allows using inline styles; I’d rather have a more restricted way of styling spans, and the prose editor TextClass extension does just that: It allows applying a list of predefined CSS classes to <span> elements. Of course the dropdown also shows the resulting presentation if you provide the necessary CSS to the admin interface.

Weeknotes (2025 week 27) | 2025-07-05 | https://406.ch/writing/weeknotes-2025-week-27/

Weeknotes (2025 week 27)

I have again missed a few weeks, so the releases section will be longer than usual since it covers six weeks.

django-prose-editor

I have totally restructured the documentation to make it clearer. The configuration chapter is shorter and more focussed, and the custom extensions chapter actually shows all required parts now.

The most visible change is probably the refactored menu system. Extensions now have an addMenuItems method where they can add their own buttons to the menu bar. I wanted to do this for a long time but have only just this week found a way to achieve this which I actually like.

I’ve reported a bug to Tiptap where a .can() chain always succeeded even though the actual operation could fail (#6306).

Finally, I have also switched from esbuild to rslib; I’m a heavy user of rspack anyway and am more at home with its configuration.

django-content-editor

The 7.4 release mostly contains minor changes; one new feature is the content_editor.admin.RefinedModelAdmin class. It includes tweaks to Django’s standard behavior such as supporting a Ctrl-S shortcut for the “Save and continue editing” functionality and an additional warning when people want to delete inlines but would actually delete the whole object. This seems to happen often even though people are shown the full list of objects which will be deleted.
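The shortcut part can be sketched in a few lines of JavaScript. This is a simplified sketch, not django-content-editor’s actual code; the only assumption taken from Django itself is that the change form names the “Save and continue editing” submit button `_continue`:

```javascript
// Decide whether a keyboard event is the save shortcut (Ctrl-S, or Cmd-S
// on macOS). Kept as a pure function so it's easy to test.
function isSaveShortcut(event) {
  return (event.ctrlKey || event.metaKey) && event.key.toLowerCase() === "s"
}

// Wire it up when running in a browser.
if (typeof document !== "undefined") {
  document.addEventListener("keydown", (event) => {
    if (isSaveShortcut(event)) {
      // Stop the browser's own "save page" dialog.
      event.preventDefault()
      // Django's change form names the "Save and continue editing"
      // submit button "_continue".
      document.querySelector('input[name="_continue"]')?.click()
    }
  })
}
```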

Releases

Preserving referential integrity with JSON fields and Django | 2025-06-04 | https://406.ch/writing/preserving-referential-integrity-with-json-fields-and-django/

Preserving referential integrity with JSON fields and Django

Motivation

The great thing about using feincms3 and django-content-editor is that CMS plugins are Django models – if using them you immediately have access to the power of Django’s ORM and Django’s administration interface.

However, using one model per content type can be limiting on larger sites. Because of this we like using JSON plugins with schemas for more fringe use cases or for places where we have richer data but do not want to write a separate Django app for it. This works well as long as you only work with text, numbers etc. but gets a bit ugly once you start referencing Django models because you never know if those objects are still around when actually using the data stored in those JSON fields.

Django has a nice on_delete=models.PROTECT feature, but that of course only works when using real models. So, let’s bridge this gap and allow using foreign key protection with data stored in JSON fields!

Models

First, you have to start using the django-json-schema-editor and specifically its JSONField instead of the standard Django JSONField. The most important difference between those two is that the schema editor’s field wants a JSON schema. So, for the sake of an example, let’s assume that we have a model with images and a model with galleries. Note that we’re omitting many of the fields actually making the interface nice such as titles etc.

from django.db import models

from django_json_schema_editor.fields import JSONField


class Image(models.Model):
    image = models.ImageField(...)


gallery_schema = {
    "type": "object",
    "properties": {
        "caption": {"type": "string"},
        "images": {
            "type": "array",
            "format": "table",
            "minItems": 3,
            "items": {
                "type": "string",
                "format": "foreign_key",
                "options": {
                    # raw_id_fields URL:
                    "url": "/admin/myapp/image/?_popup=1&_to_field=id",
                },
            },
        },
    },
}


class Gallery(models.Model):
    data = JSONField(schema=gallery_schema)

Now, if we were to do it by hand, we’d define a through model for a ManyToManyField linking galleries to images, set on_delete=models.PROTECT on the through model’s image foreign key, and update the many-to-many table whenever the Gallery object changes. Since that’s somewhat boring but also tricky code I have already written it (including unit tests, of course) and all that’s left to do is define the linking:

Gallery.register_data_reference(
    # The model we're referencing:
    Image,
    # The name of the ManyToManyField:
    name="images",
    # The getter which returns a list of stringified primary key values or nothing:
    getter=lambda obj: obj.data.get("images"),
)

Now, attempting to delete an image which is still used in a gallery somewhere will raise a ProtectedError exception. That’s what we wanted to achieve.

When you have a gallery instance you can now use the images field to fetch all images and use the order from the JSON data:

def gallery_context(gallery):
    images = {str(image.pk): image for image in gallery.images.all()}
    return {
        "caption": gallery.data["caption"],
        "images": [images[pk] for pk in gallery.data["images"]],
    }

JSONPluginBase and JSONPluginInline

I would generally do the instantiation of models slightly differently and use django-json-schema-editor’s JSONPluginBase and JSONPluginInline which offer additional niceties such as streamlined JSON models with only one backing database table (using proxy models) and supporting not just showing the primary key of referenced model instances but also their __str__ value.

The example above would have to be changed to look more like this:

from django_json_schema_editor import JSONPluginBase


class JSONPlugin(JSONPluginBase, ...):
    pass


JSONPlugin.register_data_reference(...)

Gallery = JSONPlugin.proxy("gallery", schema=gallery_schema)

However, that’s not documented yet, so for now you unfortunately have to read the code and the test suite, sorry about that. It’s used heavily in production though, so if you start using it, it won’t suddenly break in the future.

How I’m bundling frontend assets using Django and rspack these days | 2025-05-26 | https://406.ch/writing/how-i-m-bundling-frontend-assets-using-django-and-rspack-these-days/

How I’m bundling frontend assets using Django and rspack these days

I last wrote about configuring Django with bundlers in 2018: Our approach to configuring Django, Webpack and ManifestStaticFilesStorage. An update has been a long time coming. I have wanted to write this down for a while, but each time I started explaining how configuring rspack is actually nice, I’d look at the files we’re using and switch to writing about something else. This time I managed to get through – it’s not that bad, I promise.

This is quite a long post. A project where all of this can be seen in action is Traduire, a platform for translating gettext catalogs. I announced it on the Django forum.

Our requirements

The requirements were still basically the same:

We have old projects using SASS. These days we’re only using PostCSS (especially autoprefixer and maybe postcss-nesting). Rewriting everything is out of the question, so we needed a tool which handles all of that as well.

People in the frontend space seem to like tools like Vite or Next.js a lot. I have also looked at Parcel, esbuild, rsbuild and others. Either they didn’t support our old projects, were too limited in scope (e.g. no HMR), too opinionated or I hit bugs or had questions about their maintenance. I’m sure all of them are great for some people, and I don’t intend to talk badly about any of them!

In the end, the flexibility, speed and trustworthiness of rspack won me over even though I have a love-hate relationship with the Webpack/rspack configuration. We already had a reusable library of configuration snippets for webpack though and moving that library over to rspack was straightforward.

That being said, configuring rspack from scratch is no joke, that’s why tools such as rsbuild exist. If you already know Webpack well or really need the flexibility, going low level can be good.

High-level project structure

The high-level overview is:

During development:

During deployment:

In production:

Example configuration

Here’s an example configuration which works well for us. What follows is the rspack configuration itself, building on our snippet library rspack.library.js. We mostly do not change anything in here except for the list of PostCSS plugins:

rspack.config.js:

module.exports = (env, argv) => {
  const { base, devServer, assetRule, postcssRule, swcWithPreactRule } =
    require("./rspack.library.js")(argv.mode === "production")
  return {
    ...base,
    devServer: devServer({ backendPort: env.backend }),
    module: {
      rules: [
        assetRule(),
        postcssRule({
          plugins: [
            "postcss-nesting",
            "autoprefixer",
          ],
        }),
        swcWithPreactRule(),
      ],
    },
  }
}

The default entry point is main and loads frontend/main.js. The rest of the JavaScript and styles are loaded from there.

The HTML snippet loader works by adding WEBPACK_ASSETS = BASE_DIR / "static" to the Django settings and adding the following tags to the <head> of the website, most often in base.html:

{% load webpack_assets %}
{% webpack_assets 'main' %}

The corresponding template tag in webpack_assets.py follows:

from functools import cache

from django import template
from django.conf import settings
from django.utils.html import mark_safe

register = template.Library()


def webpack_assets(entry):
    path = settings.BASE_DIR / ("tmp" if settings.DEBUG else "static") / f"{entry}.html"
    return mark_safe(path.read_text())


if not settings.DEBUG:
    webpack_assets = cache(webpack_assets)

register.simple_tag(webpack_assets)
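For illustration, the snippet file the tag reads (e.g. static/main.html in production) contains just the head tags emitted by HtmlWebpackPlugin. The hashes below are made up:

```html
<link href="/static/main.0f3a1b2c9d7e.css" rel="stylesheet">
<script defer src="/static/main.8c4d2e1f6a5b.js"></script>
```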

Last but not least, the fabfile contains the following task definition:

@task
def dev(ctx, host="127.0.0.1", port=8000):
    backend = random.randint(50000, 60000)
    jobs = [
        f".venv/bin/python manage.py runserver {backend}",
        f"HOST={host} PORT={port} yarn run rspack serve --mode=development --env backend={backend}",
    ]
    # Run these two jobs at the same time:
    _concurrently(ctx, jobs)

The fh-fablib repository contains the _concurrently implementation we’re using at this time.

The library which enables the nice configuration above

Of course, the whole library of snippets has to be somewhere. The fabfile automatically updates the library when we release a new version, and the library is the same in all the dozens of projects we’re working on. Here’s the current version of rspack.library.js:

const path = require("node:path")
const HtmlWebpackPlugin = require("html-webpack-plugin")
const rspack = require("@rspack/core")
const assert = require("node:assert/strict")
const semver = require("semver")

assert.ok(semver.satisfies(rspack.rspackVersion, ">=1.1.3"), "rspack outdated")

const truthy = (...list) => list.filter((el) => !!el)

module.exports = (PRODUCTION) => {
  const cwd = process.cwd()

  function swcWithPreactRule() {
    return {
      test: /\.(j|t)sx?$/,
      loader: "builtin:swc-loader",
      exclude: [/node_modules/],
      options: {
        jsc: {
          parser: {
            syntax: "ecmascript",
            jsx: true,
          },
          transform: {
            react: {
              runtime: "automatic",
              importSource: "preact",
            },
          },
          externalHelpers: true,
        },
      },
      type: "javascript/auto",
    }
  }

  function swcWithReactRule() {
    return {
      test: /\.(j|t)sx?$/,
      loader: "builtin:swc-loader",
      exclude: [/node_modules/],
      options: {
        jsc: {
          parser: {
            syntax: "ecmascript",
            jsx: true,
          },
          transform: {
            react: {
              runtime: "automatic",
              // importSource: "preact",
            },
          },
          externalHelpers: true,
        },
      },
      type: "javascript/auto",
    }
  }

  function htmlPlugin(name = "", config = {}) {
    return new HtmlWebpackPlugin({
      filename: name ? `${name}.html` : "[name].html",
      inject: false,
      templateContent: ({ htmlWebpackPlugin }) =>
        `${htmlWebpackPlugin.tags.headTags}`,
      ...config,
    })
  }

  function htmlSingleChunkPlugin(chunk = "") {
    return htmlPlugin(chunk, chunk ? { chunks: [chunk] } : {})
  }

  function postcssLoaders(plugins) {
    return [
      { loader: rspack.CssExtractRspackPlugin.loader },
      { loader: "css-loader" },
      { loader: "postcss-loader", options: { postcssOptions: { plugins } } },
    ]
  }

  function cssExtractPlugin() {
    return new rspack.CssExtractRspackPlugin({
      filename: PRODUCTION ? "[name].[contenthash].css" : "[name].css",
      chunkFilename: PRODUCTION ? "[name].[contenthash].css" : "[name].css",
    })
  }

  return {
    truthy,
    base: {
      context: path.join(cwd, "frontend"),
      entry: { main: "./main.js" },
      output: {
        clean: PRODUCTION,
        path: path.join(cwd, PRODUCTION ? "static" : "tmp"),
        publicPath: "/static/",
        filename: PRODUCTION ? "[name].[contenthash].js" : "[name].js",
        // Same as the default but prefixed with "_/[name]."
        assetModuleFilename: "_/[name].[hash][ext][query][fragment]",
      },
      plugins: truthy(cssExtractPlugin(), htmlSingleChunkPlugin()),
      target: "browserslist:defaults",
    },
    devServer(proxySettings) {
      return {
        host: "0.0.0.0",
        hot: true,
        port: Number(process.env.PORT || 4000),
        allowedHosts: "all",
        client: {
          overlay: {
            errors: true,
            warnings: false,
            runtimeErrors: true,
          },
        },
        devMiddleware: {
          headers: { "Access-Control-Allow-Origin": "*" },
          index: true,
          writeToDisk: (path) => /\.html$/.test(path),
        },
        proxy: [
          proxySettings
            ? {
                context: () => true,
                target: `http://127.0.0.1:${proxySettings.backendPort}`,
              }
            : {},
        ],
      }
    },
    assetRule() {
      return {
        test: /\.(png|webp|woff2?|svg|eot|ttf|otf|gif|jpe?g|mp3|wav)$/i,
        type: "asset",
        parser: { dataUrlCondition: { maxSize: 512 /* bytes */ } },
      }
    },
    postcssRule(cfg) {
      return {
        test: /\.css$/i,
        type: "javascript/auto",
        use: postcssLoaders(cfg?.plugins),
      }
    },
    sassRule(options = {}) {
      let { cssLoaders } = options
      if (!cssLoaders) cssLoaders = postcssLoaders(["autoprefixer"])
      return {
        test: /\.scss$/i,
        use: [
          ...cssLoaders,
          {
            loader: "sass-loader",
            options: {
              sassOptions: {
                includePaths: [path.resolve(path.join(cwd, "node_modules"))],
              },
            },
          },
        ],
        type: "javascript/auto",
      }
    },
    swcWithPreactRule,
    swcWithReactRule,
    resolvePreactAsReact() {
      return {
        resolve: {
          alias: {
            react: "preact/compat",
            "react-dom/test-utils": "preact/test-utils",
            "react-dom": "preact/compat", // Must be below test-utils
            "react/jsx-runtime": "preact/jsx-runtime",
          },
        },
      }
    },
    htmlPlugin,
    htmlSingleChunkPlugin,
    postcssLoaders,
    cssExtractPlugin,
  }
}

Closing thoughts

Several utilities from this library aren’t used in the example above, for example the sassRule or the HTML plugin utilities which are useful when you require several entry points on your website, e.g. an entry point for the public facing website and an entry point for a dashboard used by members of the staff.

Most of the code in here is freely available in our fh-fablib repo under an open source license. Anything in this blog post can also be used under the CC0 license, so feel free to steal everything. If you do, I’d be happy to hear your thoughts about this post, and please share your experiences and suggestions for improvement – if you have any!

Django, JavaScript modules and importmaps | 2025-05-22 | https://406.ch/writing/django-javascript-modules-and-importmaps/

How I’m using Django, JavaScript modules and importmaps together

I have been spending a lot of time in the last few months working on django-prose-editor. First I rebuilt the editor on top of Tiptap because I wanted a framework for extending the underlying ProseMirror and didn’t want to reinvent this particular wheel. While doing that work I noticed that using JavaScript modules in the browser would be really nice, but Django’s ManifestStaticFilesStorage doesn’t yet support rewriting import statements in modules out-of-the-box without opting into the experimental support accessible through subclassing the storage. A better way to use JavaScript modules with the cache busting offered by ManifestStaticFilesStorage would be importmaps.

Motivation

Developing Django applications that include JavaScript has always been challenging when it comes to properly distributing, loading, and versioning those assets. The traditional approach using Django’s forms.Media works well for simple use cases, but falls short when dealing with modern JavaScript modules.

The ability to ship reusable JavaScript utilities in third-party Django apps has been a pain point for years. Often developers resort to workarounds like bundling all JS into a single file, using jQuery-style global variables, or requiring complex build processes for consumers of their apps.

Importmaps offer a cleaner solution that works with native browser modules, supports cache busting, and doesn’t require complex bundling for simple use cases.

The history

The conversation around better JavaScript handling in Django has been ongoing for years. Thibaud Colas’ DEP draft comes to mind, as does the discussion about whether to improve or deprecate forms.Media.

A few packages offer solutions in this space:

django-js-asset came before Django added official support for object-based media CSS and JS paths but has since been changed to take advantage of that official support. It has enabled the removal of ugly hacks. In the meantime, Django has even added official support for object-based Script tags.

My DEP draft

Building on these efforts, I’ve been thinking about submitting my own DEP draft for importmap support. It hasn’t gotten far yet, though; I’m still more occupied with verifying and using my existing solution, especially learning whether it has limitations which would make the implemented approach unworkable for official inclusion.

The current effort

As alluded to above, I already have a working solution for using importmaps (in django-js-asset) and I’m actively using it in django-prose-editor. Here’s how it works:

importmap.update({
    "imports": {
        "django-prose-editor/editor": static_lazy("django_prose_editor/editor.js"),
    }
})

A minimal editor implementation using this:

import {
  // Tiptap extensions
  Document, Paragraph, HardBreak, Text, Bold, Italic,
  // Prose editor utilities
  Menu, createTextareaEditor, initializeEditors,
} from "django-prose-editor/editor"

const extensions = [
  Document, Paragraph, HardBreak, Text, Bold, Italic, Menu,
]

initializeEditors((textarea) => {
  createTextareaEditor(textarea, extensions)
})

The importmap looks as follows when using Django’s ManifestStaticFilesStorage which produces filenames containing the hash of the file’s contents for cache busting (edited for readability):

<script type="importmap">
{"imports": {
  "django-prose-editor/editor": "/static/django_prose_editor/editor.6e8dd4c12e2e.js"
}}
</script>

This means that when your code has import { ... } from "django-prose-editor/editor", the browser automatically loads the file from /static/django_prose_editor/editor.6e8dd4c12e2e.js. The hashed filename provides cache busting while the import statement remains clean and consistent.

Problems with the current implementation

While this approach works, there are several issues to address:

Comparison to django-esm

django-esm takes a different approach. It assumes you’re using JavaScript modules everywhere and solves the problem of exposing the correct paths to those modules to the browser. It supports both private modules from your repository and modules installed in node_modules.

However, it doesn’t fully address the scenario where a third-party Django app (a Python package) ships JavaScript modules that need to be integrated into your application.

I still use a bundler for most of my JavaScript from node_modules, so I don’t need this specific functionality yet. That will probably change in the future.

Using bundlers

If you’re still using a bundler, as I do, you want to ensure that the import isn’t actually evaluated by the bundler but left as-is. The rspack configuration I’m using at the moment is also documented in the django-prose-editor README but I’m duplicating it here for convenience:

module.exports = {
  // ...
  experiments: { outputModule: true },
  externals: {
    "django-prose-editor/editor": "module django-prose-editor/editor",
    // Or the following, I'm never sure.
    "django-prose-editor/editor": "import django-prose-editor/editor",
  },
}

This configuration marks the dependency as “external” (so it won’t be bundled) and specifies that it should be loaded as a module using a static import statement.

For browser compatibility, you can also include es-module-shims to support browsers that don’t yet handle importmaps natively (around 5% at the time of writing according to caniuse.com).
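Including the shim is a one-liner; it has to be loaded before the importmap it polyfills. The static path below is an assumption, adjust it to wherever you serve the file from:

```html
<script async src="/static/es-module-shims.js"></script>
<script type="importmap">
{"imports": {"django-prose-editor/editor": "/static/django_prose_editor/editor.6e8dd4c12e2e.js"}}
</script>
```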

Using django-compressor or similar packages

Tools like django-compressor aren’t well-suited for modern JavaScript modules as they typically produce old-style JavaScript files rather than ES modules. They’re designed for a different era of web development and don’t integrate well with the importmap approach.

Note

The problem is that django-compressor currently emits non-module script files. Using import statements in these files isn’t possible; instead, you have to use dynamic imports.

// Instead of
import { Document, ... } from "django-prose-editor/editor"
// you need
import("django-prose-editor/editor").then(({ Document, ... }) => {
})

Both work fine. The bundle emitted by django-compressor will not contain the prose editor module itself though; including this module inside the bundle is not possible.

Conclusion

Using importmaps with Django provides a clean solution for managing JavaScript modules in Django applications, especially for third-party apps that need to ship their own JavaScript. While there are still some rough edges to smooth out, this approach works well and offers a path forward that aligns with modern web standards.

Have you tried using importmaps with Django? I’d be interested to hear about your experiences and approaches.

Weeknotes (2025 week 21) | 2025-05-21 | https://406.ch/writing/weeknotes-2025-week-21/

Weeknotes (2025 week 21)

I have missed two co-writing sessions and didn’t manage to post much outside of that, but let’s get things back on track.

django-prose-editor 0.12

The last weeknotes entry contains more details about the work of really connecting Tiptap extensions with server-side sanitization. 0.12 includes many improvements and bugfixes which have been made during real-world use of the prose editor in customer-facing products.

I’m not completely happy about the way we’re specifying the editor configuration and haven’t been able to settle on either extensions or config as a keyword argument. The field supports both ways, at least for now. It’s probably fine.

Releases

Customizing Django admin fieldsets without fearing forgotten fields | 2025-04-14 | https://406.ch/writing/customizing-django-admin-fieldsets-without-fearing-forgotten-fields/

Customizing Django admin fieldsets without fearing forgotten fields

When defining fieldsets on Django ModelAdmin classes I always worry that I’ll forget to update the fieldsets later when adding or removing model fields, and not without reason: it has already happened to me several times. Forgetting to remove fields is mostly fine because system checks will complain about it; forgetting to add fields can be really bad. A recent example was a crashing website, because a required field was missing from the admin and was therefore left empty when creating new instances!

I have now published another Django package which solves this by adding support for the special "__remaining__" field in fieldsets definitions. The "__remaining__" placeholder is automatically replaced by all model fields which haven’t been explicitly added already or added to exclude¹.

Here’s a short example for a modeladmin definition using django-auto-admin-fieldsets:

from django.contrib import admin

from django_auto_admin_fieldsets.admin import AutoFieldsetsModelAdmin

from app import models


@admin.register(models.MyModel)
class MyModelAdmin(AutoFieldsetsModelAdmin):
    # Define fieldsets as usual with a placeholder
    fieldsets = [
        ("Basic Information", {"fields": ["title", "slug"]}),
        ("Content", {"fields": ["__remaining__"]}),
    ]
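Under the hood, the placeholder expansion can be imagined roughly like this. This is a simplified sketch, not the package’s actual code, and expand_remaining is a made-up name:

```python
def expand_remaining(fieldsets, all_fields, exclude=()):
    # Fields which were placed explicitly somewhere in the fieldsets.
    placed = {
        field
        for _label, opts in fieldsets
        for field in opts["fields"]
        if field != "__remaining__"
    }
    # Everything else, in model field order, minus excluded fields.
    remaining = [f for f in all_fields if f not in placed and f not in exclude]
    # Rebuild the fieldsets, splicing the remaining fields in wherever
    # the placeholder appears.
    return [
        (label, {**opts, "fields": [
            expanded
            for field in opts["fields"]
            for expanded in (remaining if field == "__remaining__" else [field])
        ]})
        for label, opts in fieldsets
    ]
```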

I have used Claude Code a lot for the code and the package, and as always, I had to fix bugs and oversights. I hope it didn’t regurgitate the code of an existing package – I searched for an existing solution first but didn’t find any.

The package is available on PyPI and is developed on GitHub, at least for the time being.


  1. Autocreated fields such as surrogate primary keys or fields which aren’t editable are also excluded automatically of course.