[Python-Dev] Python Benchmarks
"Martin v. Löwis" martin at v.loewis.de
Sat Jun 3 10:01:58 CEST 2006
Fredrik Lundh wrote:
> since process time is sampled, not measured, process time isn't exactly invulnerable either.
I can't share that view. The scheduler knows exactly which thread is running on the processor at any time, and that thread won't change until the scheduler makes it change. So if you discount time spent in interrupt handlers (which might be falsely accounted to the thread that happens to be running when the interrupt occurs), process time is measured, not sampled, on any modern operating system: it is updated whenever the scheduler schedules a different thread.
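As a minimal sketch (stdlib only, using os.times() for the CPU time the kernel has charged to the process and time.time() for wall time), you can see that the charged time only advances while the process actually runs, so a sleep adds essentially nothing:

    import os, time

    def cpu_seconds():
        # os.times() returns (user, system, children-user, children-system,
        # elapsed); user + system is the CPU time the kernel has charged to
        # this process.
        t = os.times()
        return t[0] + t[1]

    wall0, cpu0 = time.time(), cpu_seconds()

    sum(i * i for i in range(10 ** 6))   # burn some CPU
    time.sleep(1.0)                      # block without using the CPU

    wall1, cpu1 = time.time(), cpu_seconds()
    print("wall elapsed: %.3f s" % (wall1 - wall0))
    print("cpu charged:  %.3f s" % (cpu1 - cpu0))  # the sleep adds ~nothing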
Of course, the question remains what the resolution of the clock making these measurements is. For Windows NT+, I would expect it to be "quantum units", but I'm uncertain whether it can also measure fractions of a quantum unit when the process makes a blocking call.
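One rough way to get a feel for that resolution is to spin until the reported CPU time changes and record the step size; a sketch (the smallest observed step is only an upper bound on the true granularity):

    import os

    def cpu_seconds():
        t = os.times()
        return t[0] + t[1]

    def smallest_step(samples=50):
        # Spin until the reported CPU time changes and record the step size;
        # the smallest step seen approximates the clock's granularity
        # (often around 10 ms).
        steps = []
        last = cpu_seconds()
        while len(steps) < samples:
            cur = cpu_seconds()
            if cur != last:
                steps.append(cur - last)
                last = cur
        return min(steps)

    print("smallest observed CPU-clock step: %g s" % smallest_step())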
> I don't think that sampling errors can explain all the anomalies we've been seeing, but I wouldn't be surprised if a high-resolution wall-time clock on a lightly loaded multiprocess system was, in practice, more reliable than sampled process time on an equally loaded system.
On Linux, process time is accounted in jiffies. Unfortunately, for compatibility, times(2) converts that to clock_t, losing precision.
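A sketch of how coarse that clock_t unit is on a given box, reading the tick rate via sysconf (assuming a POSIX system that exposes SC_CLK_TCK):

    import os

    # times(2) reports CPU time in clock_t units of 1/sysconf(_SC_CLK_TCK)
    # seconds; on Linux this is USER_HZ (typically 100), independent of the
    # kernel's internal jiffy rate, so the reported value can be coarser
    # than what the kernel actually accounts.
    hz = os.sysconf("SC_CLK_TCK")
    print("clock_t tick: 1/%d s = %.4f s" % (hz, 1.0 / hz))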
Regards, Martin