
On 4/22/2019 4:03 PM, Inada Naoki wrote:
On Tue, Apr 23, 2019 at 2:18 AM Chris Barker via Python-Dev wrote:

On Fri, Apr 12, 2019 at 10:20 AM Brett Cannon wrote:



This doesn't strike me as needing an optimization through a dedicated method.



Maybe a new dict mapping type -- "shared_dict" -- it would be used in places like the csv reader where it makes sense, but wouldn't impact the regular dict at all.

You could get really clever and have it auto-convert to a regular dict when any changes were made that are incompatible with the shared keys...



My current idea is to add a builder somewhere in the stdlib (maybe collections?):

builder = DictBuilder(keys_tuple)
value = builder(values) # repeatedly called.

I don't want to add a new mapping type because we already have the shared-key dict,
and changing the mapping type may cause backward compatibility problems.
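To make the proposed interface concrete, here is a minimal pure-Python sketch of what a DictBuilder might look like. This is only an illustration of the API shape shown above, not the actual implementation: the real optimization would share the key table at the C level, which plain Python code cannot express.

```python
# Hypothetical sketch of the proposed DictBuilder API.
# The names and error behavior here are assumptions, not part of
# any accepted proposal.
class DictBuilder:
    def __init__(self, keys):
        # Freeze the key set once, up front.
        self._keys = tuple(keys)

    def __call__(self, values):
        # Build one dict per call, pairing values with the fixed keys.
        values = tuple(values)
        if len(values) != len(self._keys):
            raise ValueError(
                "expected %d values, got %d" % (len(self._keys), len(values))
            )
        return dict(zip(self._keys, values))


builder = DictBuilder(("name", "age", "city"))
row = builder(("Alice", 30, "Oslo"))  # repeatedly called, once per record
```

In a csv-reader-style loop, `builder` would be constructed once from the header row and then called for each data row.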

Regards,



As a heavy user of some self-written code that does stuff very
similar to the csv reader, and creates lots of same-key dicts, I'd be
supportive of a performance-enhancing solution here, although I
haven't done a detailed study of where the time is currently spent.



Is the problem that the existing shared key dict isn't always
detected? Or just that knowing in advance that it is expected to be
a shared key dict can save the detection work?



I do know that in my code, I have a complete list of keys and values
when I create each dict, and would be happy to tweak it to use the
most performant technique. The above looks like a nice interface,
assuming that values is expected to be in the same iterable order as
keys_tuple (but is there a need for keys_tuple to be a tuple? could
it be any iterable?).
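For reference, the common idiom this kind of code uses today is `dict(zip(keys, row))` for each record, which re-hashes the same keys on every call. A builder-style API would aim to avoid that repeated per-row key work. A quick sketch of the current idiom (the data here is made up for illustration):

```python
# Today's idiom: build many same-key dicts with dict(zip(...)).
# Each call re-inserts (and re-hashes) the same keys; this is the
# per-row overhead a DictBuilder-style API would try to eliminate.
keys = ("id", "name", "score")
rows = [(1, "a", 0.5), (2, "b", 0.75)]
records = [dict(zip(keys, row)) for row in rows]
```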