Logging: "atomically" log messages of different severity
March 21, 2026, 1:05am 1
In a multi-threaded context, assume I want to do the following:
- Log the main messages of exceptions with a severity of e.g. ERROR. This can of course easily be done via logger.exception() or using exc_info=.
- But if the logger's severity is DEBUG, I'd also like the exception's traceback to be logged… at severity DEBUG. Now, it would be easy if the traceback were simply printed as ERROR as well: then I could do something like a custom formatter which, depending on the log level, includes the traceback or not. But if the traceback should be DEBUG, I think it can only be done with an extra call to logger.debug().
- Now I could of course do this in my calling code and simply call logger.error() with the main exception message and, conditionally, logger.debug() with the traceback. But then the problem is that these two records are no longer guaranteed to be directly consecutive in the output, when another thread comes in between.
Any ideas how this could be done (and I don't mean workarounds like including the thread ID in the messages or so)?
Thanks 
flyinghyrax (Matt Seiler) March 21, 2026, 2:22pm 2
Requiring that the traceback be in a separate message makes this pretty difficult. I’m not sure how this would work without managing a lock around the logger. 
Lucas_Malor (Lucas Malor) March 21, 2026, 2:58pm 3
IMHO this is a YAGNI. Can I ask you why you want to do so?
calestyo (Christoph Anton Mitterer) March 21, 2026, 3:43pm 4
Not even sure whether it would work with a lock… assuming that any library I'm using might also use logging, such a lock would have to be honoured by that library too?!
calestyo (Christoph Anton Mitterer) March 21, 2026, 3:44pm 5
Well, as I've said: to display e.g. the exception info only when the log level is DEBUG, and to also have it marked as such, rather than simply included in the ERROR-level message.
Lucas_Malor (Lucas Malor) March 21, 2026, 4:35pm 6
I understood what you want. But why?
calestyo (Christoph Anton Mitterer) March 21, 2026, 4:55pm 7
Not really sure whether this question makes sense, other than as a philosophical question or as a way of rejecting the use case.
Why do people want logging with different severity levels? Why do they want to be able to customise log format strings?
All to make logs more readable.
If one does e.g. some HTTP request and it fails because of hostname resolution or connection refused, then *that* is the error in that case. The traceback is not; it's just for, well, debugging it.
So I think it makes a lot of sense to also record that differently in the log (which alone would still be easy).
The problem is making sure that the debug info comes right after the corresponding error, which I guess is a valid wish too, because it makes it far easier to understand what it's about.
Lucas_Malor (Lucas Malor) March 21, 2026, 5:41pm 8
Just trying to understand whether there's a Good Reason For Doing That™.
So you want a way to filter out a long traceback easily, right?
If so, isn't a grep sufficient?
How do you view the logs? Where are they stored?
And probably this is not possible without a lock. And probably this is a good way to slow down the entire multi-threaded app 
But I suppose the app will not throw an error every second! 
Since I had no interest in that, I asked Gemini:
import logging
import threading
import sys


class SplitTracebackLogger(logging.Logger):
    # Class-level lock so multiple instances of this logger
    # don't step on each other's toes while writing to the same file! 🔒
    _consecutive_lock = threading.RLock()

    def _log(self, level, msg, args, exc_info=None, extra=None,
             stack_info=False, stacklevel=1):
        # If there's no exception attached, or it's already a DEBUG call,
        # just behave like a totally normal logger.
        if not exc_info or level <= logging.DEBUG:
            super()._log(level, msg, args, exc_info, extra,
                         stack_info, stacklevel)
            return

        # BOOM! We hit an error with a traceback. Lock the doors! 🚪
        with self._consecutive_lock:
            # 1. Log the main message natively, but STRIP the exc_info
            #    (pass None). This ensures the standard formatter doesn't
            #    double-print the traceback.
            super()._log(level, msg, args, None, extra,
                         stack_info, stacklevel)

            # 2. If the user wants to see DEBUG info, print the traceback
            #    right below it!
            if self.isEnabledFor(logging.DEBUG):
                # If exc_info is exactly `True` (like in logger.exception()),
                # we need to grab the active exception tuple for the debug call.
                if exc_info is True:
                    exc_info = sys.exc_info()
                # We use the standard _log() to format the traceback
                # at DEBUG level!
                super()._log(logging.DEBUG, "Traceback:", (), exc_info,
                             extra, stack_info, stacklevel)


# --- THE MAGIC REGISTRATION ---
# Tell Python to use OUR class for all future loggers!
logging.setLoggerClass(SplitTracebackLogger)

# Setup as usual...
logger = logging.getLogger("OOMaster")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

# --- THE USAGE ---
# The rest of the application stays beautifully clean. No wrapper functions!
try:
    1 / 0
except ZeroDivisionError:
    # 🤯 This ONE call natively splits into two synchronized outputs!
    logger.exception("The server is on fire!")
I quickly tested it and it seems to work. It doesn't look particularly well written to me, but who cares 
flyinghyrax (Matt Seiler) March 21, 2026, 8:03pm 9
It would need to lock the handler, rather than the logger, I think? (If you have multiple handlers writing to the same output then that wouldn’t help either). In any case like @Lucas_Malor says, you’d have to watch that contention wasn’t taking away all your concurrency benefits.
I think the way I would handle something like this would be a contextual identifier. Although you mentioned not wanting to use thread IDs, that’s an ideal way to filter in whatever log viewer you’re using to a single sequence of operations. Interleaved logs from other threads can be filtered out when viewing. Or for a narrower scope, something like a request ID if you have such.
I guess in general, instead of trying to control how the logs are emitted, put enough contextual information in the log records to filter or sequence them however you want afterwards.
It might be technically possible to do what you want with careful locking and a custom log handler, but the complexity doesn’t seem worth it for what you’re getting in return!
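For illustration, a minimal sketch of the contextual-identifier idea, using a logging.Filter that injects a made-up request ID (the names and values here are invented):

```python
import logging
import sys

class ContextFilter(logging.Filter):
    """Attach a per-operation identifier to every record (illustrative)."""

    def __init__(self, request_id):
        super().__init__()
        self.request_id = request_id

    def filter(self, record):
        record.request_id = self.request_id
        return True

logger = logging.getLogger("ctx_demo")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(
    logging.Formatter("[%(request_id)s] %(levelname)s: %(message)s"))
logger.addHandler(handler)
logger.addFilter(ContextFilter("req-42"))

try:
    1 / 0
except ZeroDivisionError:
    logger.error("Request failed")
    logger.debug("Traceback:", exc_info=True)
# Interleaved lines from other threads can later be filtered out
# by grepping for [req-42] in the log viewer.
```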
flyinghyrax (Matt Seiler) March 21, 2026, 8:10pm 10
Another option: it would be significantly easier to conditionally add traceback information to the same error log depending on the log level. That is, one statement is logged at ERROR level, but the formatted output only includes the traceback if the current level is DEBUG, or based on some similar runtime configuration like an environment specifier. You can do it at the call site by selectively setting exc_info, or in a custom log formatter, or by configuring your handlers differently at startup based on the environment (so lots of options!)
You don’t get the conceptual tidiness of the debug info being in a debug statement, but you get all the information you wanted in the contexts you wanted it.
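For example, the call-site variant could look roughly like this (the logger setup is illustrative):

```python
import logging
import sys

logger = logging.getLogger("single_record")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)

try:
    1 / 0
except ZeroDivisionError:
    # One ERROR record; the traceback is attached only when the
    # logger is currently enabled for DEBUG output. Being a single
    # record, it can never be split up by another thread.
    logger.error("The server is on fire!",
                 exc_info=logger.isEnabledFor(logging.DEBUG))
```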
Lucas_Malor (Lucas Malor) March 22, 2026, 9:37am 11
Ahah, I was just thinking of the same solution right now X-D It's dead simple 
IMHO it's better anyway to use a separate variable rather than checking the log level.
Why? Because it would be surprising to see a stack trace logged at ERROR level just because the log level is DEBUG 
I don't think you need to put a lock on all the handlers. The handlers are all invoked by the single logging call. And anyway the locking problem is small, since I suppose you don't have exceptions every millisecond!
So maybe I created FUD.
The Gemini code above works (even if I suppose it should check the level of the logger, not the level of the log record). But it's an overcomplication, and it's quite unexpected to see something like that in logs. Your idea above is just better and simpler 
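By a separate variable I mean something roughly like this (the variable and environment-variable names are just examples):

```python
import logging
import os
import sys

# Separate switch, independent of the logger's level
# (the names here are made up for the example).
LOG_TRACEBACKS = os.environ.get("LOG_TRACEBACKS", "0") == "1"

logger = logging.getLogger("flag_demo")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)

try:
    1 / 0
except ZeroDivisionError:
    # The traceback is attached to the ERROR record only when the
    # flag is set, so a stack trace never shows up "surprisingly"
    # just because the log level happens to be DEBUG.
    logger.error("Request failed", exc_info=LOG_TRACEBACKS)
```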