The original reason was that the Unix wall clock was more accurate
than its CPU clock. If that's changed we should probably (perhaps in a
platform-dependent way) change the default to the most accurate clock
available.
Currently it seems the clock_gettime() APIs have nanosecond resolution, whereas gettimeofday() only has microsecond resolution. Beyond that, clock_gettime() has a significant advantage: it offers a per-process CPU-time clock, which will improve the accuracy of the timing information for the profiled application.
--
Sumer Cip