The function datetime.datetime.fromtimestamp() can throw a ValueError when the timestamp is close to, but not exactly, an integer value due to rounding errors. It then gives the following error:

    microsecond must be in 0..999999

This can be seen by running the attached code (the values are taken from an actual event log), which gives the following output:

    1146227423.0 -> 2006-04-28 14:30:23
    1146227448.7 -> 2006-04-28 14:30:48.702000
    1146227459.95 -> 2006-04-28 14:30:59.947000
    1146227468.41 -> 2006-04-28 14:31:08.409000
    1146227501.4 -> 2006-04-28 14:31:41.399000
    1146227523.0 -> Error converting 1146227522.99999976
    microsecond must be in 0..999999

Admittedly, I can work around the bug in this case by summing the durations first and calculating all times from "starttime" directly. Nevertheless, I think this is a bug in datetime: it should work as long as the input is any floating point value within the supported range (which follows from the supported date range).

Details of my Python environment:

    Python 2.4.2 (#1, Feb 6 2006, 13:53:18)
    [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2

Cheers,
Erwin
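For illustration, the failure mechanism can be reproduced directly without datetime (a sketch of what happens internally; the timestamp is the value from the log above). The fractional part of the float is so close to 1.0 that rounding it to microseconds yields a full second's worth, which falls outside the valid 0..999999 range:

```python
t = 1146227522.99999976  # value from the event log above

# Split into whole seconds and a fractional part, roughly as
# fromtimestamp() does internally before building the datetime.
frac = t - int(t)
microseconds = int(round(frac * 1e6))

# The fraction rounds up to a full second: 1000000 microseconds,
# which is outside the valid range 0..999999.
print(microseconds)  # -> 1000000
```

Note that whether `1146227522.99999976` even survives the trip through a C double is platform-dependent in principle, but on IEEE-754 doubles the nearest representable value keeps a fraction of about 0.99999976, which is what triggers the rounding overflow.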
Logged In: YES user_id=31435

Huh! My comment got lost. The patch looks good, but add 1 to `timet` instead of 1.0. We don't know whether the C time_t type is an integral or floating type, and using an integer literal works smoothly for both. For that matter, we don't know that time_t counts number of seconds either (e.g., perhaps it counts number of nanoseconds), but other code in Python assumes that it does, so there's no special sin in assuming it does here too.
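The carry the patch performs can be sketched in Python (a hypothetical `fromtimestamp_fixed` helper, not the actual C patch): when rounding the fractional part yields a full 1000000 microseconds, add the integer literal 1 to the whole-second part and zero the microseconds, instead of passing an out-of-range microsecond value along.

```python
import math

def fromtimestamp_fixed(t):
    """Sketch of the carry fix: split a float timestamp into whole
    seconds and microseconds, carrying a rounding overflow into the
    seconds instead of raising ValueError."""
    frac, timet = math.modf(t)
    timet = int(timet)
    us = int(round(frac * 1e6))
    if us >= 1000000:
        # Integer literal, per the comment above: works whether the
        # underlying time_t is an integral or a floating type.
        timet += 1
        us -= 1000000
    return timet, us
```

With the problematic value from the report, this yields whole seconds 1146227523 and microsecond 0, rather than an error.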