Got it -- fair enough.
We deploy so often where I work (a couple of times a week at least) that 104 days seems like an eternity. But I can see where for a very stable file server or something you might well run it that long without deploying. Then again, why are you doing performance tuning on a "very stable server"?
-Ben
On Mon, Oct 16, 2017 at 11:58 AM, Guido van Rossum <guido@python.org> wrote:
On Mon, Oct 16, 2017 at 8:37 AM, Ben Hoyt <benhoyt@gmail.com> wrote:

> I've read the examples you wrote here, but I'm struggling to see what the real-life use cases are for this. When would you care about *both* very long-running servers (104 days+) and nanosecond precision? I'm not saying it could never happen, but would want to see real "experience reports" of when this is needed.

A long-running server might still want to log precise *durations* of various events. (Durations of events are the bread and butter of server performance tuning.) And for this it might want to use the most precise clock available, which is perf_counter(). But if perf_counter()'s epoch is the start of the process, after 104 days it can no longer report ns precision due to float rounding (even though the internal counter does not lose ns).

--
Guido van Rossum (python.org/~guido)
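
[Editor's note: a minimal sketch of the arithmetic behind the 104-day figure, not part of the original thread. It assumes only the standard library; math.ulp needs Python 3.9+.]

    import math

    # A double has 53 significant bits, so it can count nanoseconds exactly
    # only up to 2**53 ns -- which is roughly 104 days:
    print(2**53 / 1e9 / 86400)      # ~104.25 days

    # A perf_counter()-style float of seconds at ~105 days past its epoch has
    # a gap to the next representable float (ULP) larger than one nanosecond:
    t = 105 * 86400.0               # ~105 days, in seconds
    print(math.ulp(t))              # ~1.86e-09 s -- coarser than 1 ns

    # Equivalently, two instants one nanosecond apart can collapse to the
    # same float once the count exceeds 2**53 ns:
    t_ns = 105 * 86400 * 10**9      # the same instant, in integer nanoseconds
    print(float(t_ns) == float(t_ns + 1))   # True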