[Python-Dev] datetime nanosecond support
Guido van Rossum guido at python.org
Wed Jul 25 05:47:54 CEST 2012
- Previous message: [Python-Dev] datetime nanosecond support
- Next message: [Python-Dev] datetime nanosecond support
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Tue, Jul 24, 2012 at 8:25 PM, Vincenzo Ampolo <vincenzo.ampolo at gmail.com> wrote:
On 07/24/2012 06:46 PM, Guido van Rossum wrote:
You're welcome. Hi Guido, I'm glad you spent your time reading my mail. I would never have imagined that my mail could come to your attention.
Stop brownnosing already. :-) If you'd followed python-dev you'd have known I read it.
Have you read PEP 410 and my rejection of it (http://mail.python.org/pipermail/python-dev/2012-February/116837.html)? Even though that's about using Decimal for timestamps, it could still be considered related. I've read it, and point 5 is very much like this issue. You said: "[...] I see only one real use case for nanosecond precision: faithful copying of the mtime/atime recorded by filesystems, in cases where the filesystem (like e.g. ext4) records these times with nanosecond precision. Even if such timestamps can't be trusted to be accurate, converting them to floats and back loses precision, and verification using tools not written in Python will flag the difference. But for this specific use case a much simpler set of API changes will suffice; only os.stat() and os.utime() need to change slightly (and variants of os.stat() like os.fstat()). [...]" I think that's based on a wrong hypothesis: just one use case -> let's handle it in a different way (modifying os.stat() and os.utime()). I would say: it's not just one use case; there are at least two other scenarios. One is like mine, parsing network packets; the other is parsing stock trading data. But in these cases there is no os.stat() or os.utime() that can be modified. I have to write my own class to deal with time and lose all the power and flexibility that the datetime module adds to the Python language.
Also, this use case is unlike the PEP 410 use case, because the timestamps there use a numeric type, not datetime (and that was separately argued).
Not every use case deserves an API change. :-)
First you will have to show how you'd have to code this without nanosecond precision in datetime and how tedious that is. (I expect that representing the timestamp as a long integer expressing a posix timestamp times a billion would be very reasonable.) Yeah, that's exactly how we built our Time class to handle this, and we also wrote a Duration class to represent timedeltas. The code we developed is 383 lines of Python, but it is not comparable with all the functionality that the datetime module offers, and it's also really slow compared to the native datetime module, which is written in C.
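The integer-timestamp approach Guido suggests (a posix timestamp times a billion) can be sketched roughly as follows. This is a hypothetical illustration, not the company's actual Time/Duration code; the class and method names are made up:

```python
from datetime import datetime, timezone

NS_PER_SECOND = 1_000_000_000

class NsTimestamp:
    """Hypothetical sketch: a POSIX timestamp stored as a single
    integer number of nanoseconds since the epoch."""

    def __init__(self, ns):
        self.ns = ns

    @classmethod
    def from_parts(cls, seconds, nanos):
        # Pack whole seconds and a 0..999999999 nanosecond remainder.
        return cls(seconds * NS_PER_SECOND + nanos)

    def seconds(self):
        return self.ns // NS_PER_SECOND

    def nanos(self):
        return self.ns % NS_PER_SECOND

    def to_datetime(self):
        # Lossy by design: datetime only keeps microseconds.
        return datetime.fromtimestamp(self.ns / NS_PER_SECOND, tz=timezone.utc)

    def __sub__(self, other):
        # A duration is just an integer number of nanoseconds,
        # standing in for a full Duration class.
        return self.ns - other.ns
```

Arithmetic and comparison fall out of plain integers, but everything datetime provides (calendars, formatting, time zones) has to be rebuilt by hand, which is the tedium being described.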
So what functionality specifically do you require? You speak in generalities but I need specifics.
As you may imagine, using that approach in a web application is very limiting, since there is no strftime() in this custom class.
Apparently you didn't need it? :-) Web frameworks usually have their own date/time formatting anyway.
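For what it's worth, a nanosecond-aware strftime can be layered on top of the integer-timestamp approach. The %N directive below is a hypothetical extension (borrowed from GNU date), not something datetime.strftime() supports:

```python
from datetime import datetime, timezone

def strftime_ns(ns_timestamp, fmt):
    """Format an integer nanosecond POSIX timestamp, expanding a
    hypothetical %N directive to the 9-digit nanosecond field."""
    seconds, nanos = divmod(ns_timestamp, 1_000_000_000)
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
    # Substitute %N before handing the rest of the directives to strftime.
    return dt.strftime(fmt.replace("%N", f"{nanos:09d}"))
```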
I cannot share the code right now since it's copyrighted by the company I work for, but I've asked permission to do so. I just need to wait until tomorrow morning (PDT) for them to approve my request. Looking at the code, you can see how tedious it is to try to remake all the conversions that are already implemented in the datetime module. Just let me know if you actually want to have a look at the code.
I believe you.
I didn't read the entire bug, but it mentioned something about storing datetimes in databases. Do databases support nanosecond precision?
Yeah. According to http://wiki.ispirer.com/sqlways/postgresql/data-types/timestamp, at least Oracle supports timestamps with nanosecond accuracy, and SQL Server supports 100-nanosecond accuracy. Since I personally use PostgreSQL, the best way to accomplish it (also suggested on #postgresql on freenode) is to store the timestamp with nanoseconds (like 1343158283.880338907242) as a bigint and let the ORM (a Python ORM) do all the conversion work. And yet again, having nanosecond support in datetime would make things much easier.
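The bigint scheme described above amounts to a pair of pack/unpack helpers at the ORM layer. A minimal sketch (the function names are hypothetical):

```python
NS_PER_SECOND = 1_000_000_000

def to_bigint(seconds, nanos):
    """Pack a (seconds, nanoseconds) pair into one value that fits a
    64-bit bigint column."""
    return seconds * NS_PER_SECOND + nanos

def from_bigint(value):
    """Unpack a bigint column value back into (seconds, nanoseconds)."""
    return divmod(value, NS_PER_SECOND)
```

A 64-bit signed bigint holds nanosecond timestamps out to roughly the year 2262, so the scheme works for any plausible wall-clock data.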
How so, given that the database you use doesn't support it?
While I was writing this mail, Chris Lambacher answered with more data about nanosecond support in databases.
Thanks, Chris.
TBH, I think that adding nanosecond precision to the datetime type is not unthinkable. You'll have to come up with some clever backward compatibility in the API though, and that will probably be a bit ugly (you'd have a microsecond parameter with a range of 0-1000000 and a nanosecond parameter with a range of 0-1000). Also the space it takes in memory would probably increase (there's no room for an extra 10 bits in the carefully arranged 8-byte internal representation).
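The backward-compatible API shape Guido describes could look roughly like this. NanoDatetime and everything in it is hypothetical, purely to illustrate keeping the existing microsecond parameter and adding a separate sub-microsecond one:

```python
from datetime import datetime

class NanoDatetime:
    """Hypothetical sketch of a backward-compatible nanosecond API:
    the microsecond parameter keeps its existing meaning and range,
    and a new nanosecond parameter carries the extra 0..999 digits."""

    def __init__(self, year, month, day, hour=0, minute=0, second=0,
                 microsecond=0, nanosecond=0):
        if not 0 <= nanosecond < 1000:
            raise ValueError("nanosecond must be in 0..999")
        # Delegate to datetime for everything it already validates and stores.
        self._dt = datetime(year, month, day, hour, minute, second, microsecond)
        self.nanosecond = nanosecond

    @property
    def microsecond(self):
        return self._dt.microsecond

    def isoformat(self):
        # Append the three extra nanosecond digits to the usual output.
        base = self._dt.isoformat()
        if self._dt.microsecond == 0:
            base += ".000000"  # datetime omits the fraction when it is zero
        return base + f"{self.nanosecond:03d}"
```

Existing callers that never pass nanosecond see unchanged behavior, which is the backward-compatibility property at stake; the ugliness Guido predicts is visible in the split 0..999999 / 0..999 pair of parameters.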
But let me be clear -- are you willing to help implement any of this? You can't just order a feature, you know...
-- --Guido van Rossum (python.org/~guido)