[Python-Dev] Install Hook [Was: Re: PEP 414 updated]
Vinay Sajip vinay_sajip at yahoo.co.uk
Mon Mar 5 00:19:29 CET 2012
- Previous message: [Python-Dev] Install Hook [Was: Re: PEP 414 updated]
- Next message: [Python-Dev] PEP 414 updated
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Armin Ronacher <armin.ronacher at active-4.com> writes:
> I would hope they both have the same effect: namely, stripping the 'u' prefix in all variations.
Okay, that's all I was curious about.
> Why did I go with the tokenize approach? Because I never even considered a 2to3 solution. Part of the reason I wrote this PEP was that 2to3 is so awfully slow, and I was assuming that this was largely down to the initial parsing step rather than the fixers themselves. Why did I not time it with just the unicode fixer? Because, if you look at how simple the tokenize version is, you can see that it did not take me more than a good minute to write, and maybe ten more for the distutils hooking.
You don't need to justify your approach - to me, anyway ;-) I suppose tokenize needed changing because of the grammar change, so it seems reasonable to put the changed version to work.
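To illustrate how little code a tokenize-based converter needs, here is a minimal sketch in the spirit of the hook being discussed (not Armin's actual implementation; the function name is hypothetical):

```python
import io
import tokenize

def strip_u_prefix(source):
    """Strip u/U prefixes from string literals using the tokenize module.

    A minimal sketch, not the actual PEP 414 install hook: it walks the
    token stream, trims the leading 'u'/'U' from STRING tokens, and
    reassembles the source with untokenize (2-tuple compatibility mode,
    so exact whitespace is not preserved).
    """
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        value = tok.string
        if tok.type == tokenize.STRING and value[:1] in ('u', 'U'):
            value = value[1:]
        tokens.append((tok.type, value))
    return tokenize.untokenize(tokens)
```

Because only the tokenizer is involved, there is no tree construction at all, which is where the speed difference against a full lib2to3 pass would come from.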
I agree that 2to3 seems slow sometimes, but I can't say I've pinned down exactly where the time is spent. I assumed it was just because it seems to run over a lot of files each time, regardless of whether they've been changed since the last run. (I believe there might be ways of optimising that, but my understanding is that in the default/simple cases it runs over everything.)
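The kind of optimisation alluded to here could be as simple as an mtime cache that skips unchanged files. A hedged sketch (the function name is hypothetical, and 2to3 itself does not do this by default):

```python
import os

def changed_files(paths, cache):
    """Yield only the files modified since the last run.

    'cache' maps path -> last-seen modification time and is updated in
    place, so a second call with the same unchanged files yields nothing.
    This is an illustrative sketch of skipping unchanged sources, not a
    feature of 2to3.
    """
    for path in paths:
        mtime = os.path.getmtime(path)
        if cache.get(path) != mtime:
            cache[path] = mtime
            yield path
```

Persisting the cache between runs (e.g. as a small pickle or JSON file) would then let a conversion step touch only what actually changed.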
I factored out the transformation step in my hook into a method, so I should be able to swap out the lib2to3 approach with a tokenize approach without too much work, should that prove necessary or desirable.
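The factoring described above could look something like the following sketch: a hook whose file processing delegates to a single transform() method, so a lib2to3-based strategy can be swapped for a tokenize-based one by overriding that one method. Class and method names here are hypothetical, not taken from the actual hook:

```python
import io
import tokenize

class ConversionHook:
    """Base hook: file handling is fixed, the transform is pluggable."""

    def transform(self, source):
        """Return converted source text; subclasses choose the strategy
        (lib2to3, tokenize, ...)."""
        raise NotImplementedError

    def process_file(self, path):
        # Read, convert, and write back in place.
        with open(path, encoding='utf-8') as f:
            converted = self.transform(f.read())
        with open(path, 'w', encoding='utf-8') as f:
            f.write(converted)

class TokenizeHook(ConversionHook):
    """A tokenize-based transform, swapped in by overriding one method."""

    def transform(self, source):
        tokens = []
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            value = tok.string
            if tok.type == tokenize.STRING and value[:1] in ('u', 'U'):
                value = value[1:]
            tokens.append((tok.type, value))
        return tokenize.untokenize(tokens)
```

With this shape, swapping strategies is a one-class change and the distutils wiring never needs to know which backend is in use.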
Regards,
Vinay Sajip