Issue 28199: Compact dict resizing is doing too much work

Created on 2016-09-18 22:13 by rhettinger, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Messages (39)

msg276921

Author: Raymond Hettinger (rhettinger) * (Python committer)

Date: 2016-09-18 22:13

The dictresize() method unnecessarily copies the keys, values, and hashes in addition to making insertions in the index table. Only the latter step is necessary or desirable.

Here is the pure Python code for resizing, taken from the original proof-of-concept code at https://code.activestate.com/recipes/578375

def _resize(self, n):
    '''Reindex the existing hash/key/value entries.
       Entries do not get moved, they only get new indices.
       No calls are made to hash() or __eq__().

    '''
    n = 2 ** n.bit_length()                     # round-up to power-of-two
    self.indices = self._make_index(n)
    for index, hashvalue in enumerate(self.hashlist):
        for i in Dict._gen_probes(hashvalue, n-1):
            if self.indices[i] == FREE:
                break
        self.indices[i] = index
    self.filled = self.used

And here is a rough sketch of what it would look like in the C code (very rough, not yet compileable):

static void
insert_indices_clean(PyDictObject *mp, Py_hash_t hash)
{
    size_t i, perturb;
    PyDictKeysObject *k = mp->ma_keys;
    size_t mask = (size_t)DK_SIZE(k) - 1;

    i = hash & mask;
    for (perturb = hash; dk_get_index(k, i) != DKIX_EMPTY;
         perturb >>= PERTURB_SHIFT) {
        i = mask & ((i << 2) + i + perturb + 1);
    }
    dk_set_index(k, i, k->dk_nentries);
}
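A property worth noting about the probe loop above: on a power-of-two table, once perturb has shifted down to zero the recurrence degenerates to i = 5*i + 1 (mod size), which is a full-period sequence, so the scan is guaranteed to reach an empty slot. A small Python model (the function name is invented here) demonstrates the coverage:

```python
def probe_sequence(h, size, limit=None):
    """Collect the first `limit` probe slots for hash h in a table of
    `size` slots (size must be a power of two). Mirrors the C loop:
    i = mask & (i*5 + perturb + 1); perturb >>= PERTURB_SHIFT (5)."""
    mask = size - 1
    perturb = h & 0xFFFFFFFFFFFFFFFF   # the C code casts the hash to size_t
    i = perturb & mask
    seq = []
    for _ in range(limit if limit is not None else 4 * size):
        seq.append(i)
        perturb >>= 5
        i = (i * 5 + perturb + 1) & mask
    return seq
```

Within a few multiples of the table size, the sequence visits every slot, regardless of the starting hash.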

static int
dictresize(PyDictObject *mp, Py_ssize_t minused)
{
    Py_ssize_t i, newsize;
    PyDictKeyEntry *ep0;

    /* Find the smallest table size > minused. */
    for (newsize = PyDict_MINSIZE;
         newsize <= minused && newsize > 0;
         newsize <<= 1)
        ;
    if (newsize <= 0) {
        PyErr_NoMemory();
        return -1;
    }

    /* Resize and zero-out the indices array */
    realloc(dk->dk_indices, es * newsize);
    memset(&dk->dk_indices.as_1[0], 0xff, es * newsize);
    dk->dk_size = newsize;

    /* Loop over hashes, skipping NULLs, inserting new indices */
    for (i = 0; i < mp->dk_nentries; i++) {
        PyDictKeyEntry *ep = &ep0[i];
        if (ep->me_value != NULL) {
            insert_indices_clean(mp, ep->me_hash);
        }
    }
    return 0;
}

msg276927

Author: Raymond Hettinger (rhettinger) * (Python committer)

Date: 2016-09-19 00:37

Just before the re-insertion, we should also do an in-place compaction of the keys/values/hashes arrays if they have a significant number of holes from previously deleted entries.
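The suggested compaction step can be sketched as a stable in-place squeeze over parallel entry arrays. This is a hypothetical illustration (the function name and the "key is None marks a hole" convention are invented here, not the C implementation):

```python
def compact_entries(hashes, keys, values):
    """Squeeze out deleted slots (marked here by key is None) in place,
    preserving insertion order; returns the new entry count."""
    write = 0
    for read in range(len(keys)):
        if keys[read] is None:          # a hole left by a deletion
            continue
        if write != read:               # shift live entry down
            hashes[write] = hashes[read]
            keys[write] = keys[read]
            values[write] = values[read]
        write += 1
    del hashes[write:], keys[write:], values[write:]
    return write
```

After compaction the entries are dense again, so the reindex-only resize can assume no holes.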

msg276931

Author: Xiang Zhang (xiang.zhang) * (Python committer)

Date: 2016-09-19 03:16

Then what about the entries (key/value pairs)? The size of the entries array does not match the number of available hash slots. For example, an empty dict allocates an 8-slot hash array that allows 5 entries. Now we resize the hash array to size 16, which can afford 10 entries, but you still only have room for 5 entries. How could we insert the 6th entry?
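As a sanity check on those numbers: CPython 3.6 sizes the entry array at two thirds of the index-table size (the USABLE_FRACTION macro in Objects/dictobject.c), which is where the 5-of-8 and 10-of-16 figures come from. A one-line model:

```python
def usable_entries(size):
    # CPython's USABLE_FRACTION: number of entry slots available for an
    # index table of the given (power-of-two) size.
    return (size << 1) // 3
```

So growing the index table without also growing the entry arrays would leave the dict unable to hold the entries the new table size promises.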

msg276934

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-09-19 04:01

The current compact ordered dict implementation is a bit different from yours. When an item has been removed, there are NULL entries in ma_entries, instead of swapping the last item into the deleted slot. This is important for keeping insertion order.
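A toy illustration of the difference (plain lists stand in for the entries array; the function names are invented for this sketch): leaving a NULL hole preserves insertion order, while swapping the last entry into the hole does not.

```python
def delete_with_hole(entries, pos):
    # CPython 3.6 style: leave a NULL entry; iteration skips it,
    # so the order of the remaining items is unchanged.
    entries[pos] = None

def delete_with_swap(entries, pos):
    # Alternative scheme: move the last entry into the hole.
    # Compact, but the moved entry loses its position in the order.
    entries[pos] = entries[-1]
    entries.pop()

e1 = ['a', 'b', 'c', 'd']
delete_with_hole(e1, 1)
order1 = [x for x in e1 if x is not None]   # insertion order kept

e2 = ['a', 'b', 'c', 'd']
delete_with_swap(e2, 1)
order2 = e2                                  # 'd' jumped ahead of 'c'
```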

But it's easy to detect a clean dict. Your suggestion can be used when:

I think dictresize() for a split table could be split out into its own function. But I don't know whether that would improve performance or readability.

msg276938

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-09-19 04:40

We can still clean this up for Python 3.6. We're in feature freeze, not development freeze.

Does it mean there is a chance to improve OrderedDict to use the new dict implementation, if it seems safe enough? Is the new implementation a feature?

(After OrderedDict implementation is improved, functools.lru_cache can use OrderedDict and remove doubly linked list too.)

msg276944

Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer)

Date: 2016-09-19 04:52

(After OrderedDict implementation is improved, functools.lru_cache can use OrderedDict and remove doubly linked list too.)

functools.lru_cache can just use an ordered dict. But the simple implementation is 1.5 times slower. I'm working on this.

I think that changing the implementation of lru_cache and OrderedDict is a new feature and can come only in 3.7, when the new dict implementation will have been tested more.

msg277737

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-09-30 05:02

As written in the comment, reusing the keys object breaks the odict implementation. I think we can't avoid the copy for Python 3.6.

msg277751

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-09-30 08:16

$ ./install/bin/python3.6-default -m perf timeit -s 'x = range(1000); d={}' 'for i in x: d[i]=i; del d[i];'
....................
Median +- std dev: 363 us +- 11 us
$ ./install/bin/python3.6 -m perf timeit -s 'x = range(1000); d={}' 'for i in x: d[i]=i; del d[i];'  # patched
....................
Median +- std dev: 343 us +- 17 us
$ ./install/bin/python3.6-default -m perf timeit -s 'x = range(1000); d={}' 'for i in x: d[i]=i; del d[i];'
....................
Median +- std dev: 362 us +- 11 us
$ ./install/bin/python3.6 -m perf timeit -s 'x = range(1000); d={}' 'for i in x: d[i]=i; del d[i];'  # patched
....................
Median +- std dev: 342 us +- 14 us
$ ./install/bin/python3.6-default -m perf timeit -s 'x = range(1000); d={}' 'for i in x: d[i]=i; del d[i];'
....................
Median +- std dev: 364 us +- 11 us

msg277755

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2016-09-30 14:45

Oh, your message reminded me that I always wanted an option in the timeit module to run the benchmark on two Python versions and then directly compare the result. I just added the feature to perf and then I released perf 0.7.11, enjoy! The output is more compact and it's more reliable because the comparison ensures that the difference is significant.


$ export PYTHONPATH=~/prog/GIT/perf
$ ./python-resize -m perf timeit --inherit-environ=PYTHONPATH --compare-to=./python-ref -s 'x = range(1000); d={}' 'for i in x: d[i]=i; del d[i];' --rigorous
python-ref: ........................................ 77.6 us +- 1.8 us
python-resize: ........................................ 74.8 us +- 1.9 us

Median +- std dev: [python-ref] 77.6 us +- 1.8 us -> [python-resize] 74.8 us +- 1.9 us: 1.04x faster

I can reproduce the 4% speedup.

(I didn't review the patch yet.)

msg277756

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2016-09-30 14:46

dictresize.patch is quite big; it seems like half of the patch is cleanup/refactoring. Can you please split the patch into two parts, so there is a smaller patch containing just the optimization?

msg277758

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-09-30 15:19

Ah, I'm sorry. I forgot to remove some changes related to in-place compaction (reusing oldkeys when oldsize == newsize).

msg278223

Author: Raymond Hettinger (rhettinger) * (Python committer)

Date: 2016-10-07 03:10

For the simple case with no dummy entries, I was expecting a fast path that just reallocs the keys/values/hashes arrays and then updates the index table with reinsertion logic that only touches the indices. Using realloc() is nice because it makes it possible that the keys/values/hashes don't have to be recopied at all, and if they do, a fast memcpy moves them to the newly resized array.

msg278225

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-10-07 03:33

Since the entries array is embedded in PyDictKeysObject, we can't realloc the entries. And when the values are a split array, dictresize() converts the split table into a combined table.

A split table may have a large enough ma_values from the start in the typical case. And in the atypical case, a split table may not be used at all. So I think reallocing ma_values is premature optimization, unless a profiler says make_keys_shared() is slow.

msg279446

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-10-25 18:57

@haypo, could you review this?

msg279593

Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer)

Date: 2016-10-28 11:10

What if we split copying the entries from inserting the indices? First fill a contiguous array of entries (memcpy could be used if there were no deletions), then build a hashtable of contiguous indices. The following patch implements this idea.
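The two-pass idea can be sketched in Python. This is an illustrative model, not the patch itself: the function and variable names are invented, entries are (hash, key, value) tuples, and a deleted entry is marked by value None.

```python
FREE = -1

def probe(h, mask, indices):
    # Find the first free index slot using CPython-style perturb probing.
    perturb = h & 0xFFFFFFFFFFFFFFFF   # treat the hash as unsigned
    i = perturb & mask
    while indices[i] != FREE:
        perturb >>= 5
        i = (i * 5 + perturb + 1) & mask
    return i

def two_pass_resize(old_entries, new_size):
    """Pass 1: copy the entries into a new dense array, squeezing out
    deleted slots (this is where memcpy applies when there are none).
    Pass 2: build the index table for the now-dense entries."""
    new_entries = [e for e in old_entries if e[2] is not None]
    indices = [FREE] * new_size
    mask = new_size - 1
    for n, (h, _key, _value) in enumerate(new_entries):
        indices[probe(h, mask, indices)] = n
    return new_entries, indices
```

Each pass touches a smaller working set than a fused loop would, which is part of the cache argument discussed below in the thread.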

msg279595

Author: Xiang Zhang (xiang.zhang) * (Python committer)

Date: 2016-10-28 11:32

Hmm, what's the advantage?

msg279597

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-10-28 11:42

LGTM.

msg279598

Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer)

Date: 2016-10-28 11:49

The advantage is being able to use memcpy in the case of a combined table without deletions. This is the common case when creating a dict without pre-resizing: dict(list), dict(iterator), etc.

In the future, when it becomes possible to reuse the old entries array, this could also help in the case of multiple modifications without significant size growth: first squeeze the entries (skipping NULL values), then clear and rebuild the index table.

msg279599

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-10-28 12:01

The current code and my patch call insertdict_clean() or insert_index() for each entry. On the other hand, Serhiy's patch calls build_indices() once. This may be faster when the compiler doesn't inline the helper function. As a bonus, we can use memcpy to copy the entries.

The con of Serhiy's patch is that it's two-pass. If the entries are larger than the L2 cache, more fetches may come from the L3 cache. So I can't declare that Serhiy's patch is faster until benchmarking.

(My forecast is no performance difference between my patch and Serhiy's on an amd64 machine, with Serhiy's patch faster on weaker CPUs.)

msg279605

Author: Xiang Zhang (xiang.zhang) * (Python committer)

Date: 2016-10-28 13:13

I doubt how much memcpy could help. Two passes do not necessarily make it faster. I made a simple test:

With dictresize3 (I made insert_index inline):

[bin]$ ./python3 -m perf timeit -s 'd = {i:i for i in range(6)}' 'dict(d)'
....................
Median +- std dev: 441 ns +- 21 ns
[bin]$ ./python3 -m perf timeit -s 'd = {i:i for i in range(60)}' 'dict(d)'
....................
Median +- std dev: 2.02 us +- 0.10 us
[bin]$ ./python3 -m perf timeit -s 'd = {i:i for i in range(600)}' 'dict(d)'
....................
Median +- std dev: 18.1 us +- 0.9 us

With dictresize4:

[bin]$ ./python3 -m perf timeit -s 'd = {i:i for i in range(6)}' 'dict(d)'
....................
Median +- std dev: 448 ns +- 33 ns
[bin]$ ./python3 -m perf timeit -s 'd = {i:i for i in range(60)}' 'dict(d)'
....................
Median +- std dev: 2.04 us +- 0.09 us
[bin]$ ./python3 -m perf timeit -s 'd = {i:i for i in range(600)}' 'dict(d)'
....................
Median +- std dev: 18.2 us +- 0.1 us

Just as Inada states, there is hardly any difference. And you need two passes for dicts with deleted members.

msg279615

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2016-10-28 17:03

Remark about perf timeit: you can use --compare-to with the patched Python binary to check whether the result is significant and to compute the "x.xx faster" number.

msg279625

Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer)

Date: 2016-10-28 20:03

Yes, the difference is pretty small. The largest difference I got is 7%, in the following microbenchmark:

$ ./python -m perf timeit -s 'x = list(range(1000))' -- 'dict.fromkeys(x)'
Unpatched: Median +- std dev: 80.6 us +- 0.5 us
dictresize3.patch: Median +- std dev: 80.9 us +- 2.8 us
dictresize4.patch: Median +- std dev: 75.4 us +- 2.1 us

I suppose this is due to using memcpy. In other microbenchmarks the difference is usually 1-2%.

Xiang, your microbenchmark doesn't work for this patch because dict(d) makes only one resize.

Cons of Serhiy's patch is it's two pass. If entries are larger than L2 cache, fetch from L3 cache may be larger.

Agree. But on the other hand, dictresize3.patch needs to fit the whole indices table in the L2 cache and prefetch two entries arrays, old and new. dictresize4.patch works with less memory in each loop: the two entries arrays in the first loop, and the indices table plus the new entries array in the second loop.

msg279655

Author: Xiang Zhang (xiang.zhang) * (Python committer)

Date: 2016-10-29 06:14

If you inline insert_index, dictresize3 is not that bad.

./python3 -m perf timeit -s 'x = list(range(1000))' -- 'dict.fromkeys(x)'
dictresize3: Median +- std dev: 43.9 us +- 0.7 us
dictresize3 (insert_index inlined): Median +- std dev: 41.6 us +- 0.6 us
dictresize4: Median +- std dev: 41.7 us +- 1.2 us

But let's not dwell on microbenchmarks, just move on. I just think the logic is not as clear as dictresize3's, but the ease of future modification makes sense.

msg279656

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-10-29 06:42

Serhiy, would you commit it by 3.6b3?

-- sent from mobile

msg279663

Author: Roundup Robot (python-dev) (Python triager)

Date: 2016-10-29 07:50

New changeset 6b88dfc7b25d by Serhiy Storchaka in branch '3.6': Issue #28199: Microoptimized dict resizing. Based on patch by Naoki Inada. https://hg.python.org/cpython/rev/6b88dfc7b25d

New changeset f0fbc6071d7e by Serhiy Storchaka in branch 'default': Issue #28199: Microoptimized dict resizing. Based on patch by Naoki Inada. https://hg.python.org/cpython/rev/f0fbc6071d7e

msg279664

Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer)

Date: 2016-10-29 07:57

I don't have strong preferences, but at this moment dictresize4.patch looks a little better to me. Maybe I'm wrong and in future optimizations we will return to insert_index().

One more opportunity for future optimization: inline dk_get_index() and dk_set_index() and move the check for the indices width out of the loop. I don't know whether this would have a significant effect.
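For context on the width check mentioned above: dk_get_index()/dk_set_index() dispatch on how wide each index slot is, which CPython derives from the table size (small tables store indices as single bytes, larger ones as 2-, 4-, or 8-byte integers). A model of that selection, with thresholds as I read them in Objects/dictobject.c (treat the exact cutoffs as an assumption):

```python
def index_width(size):
    # Bytes per slot in dk_indices for an index table of `size` slots.
    # Indices are signed so DKIX_EMPTY (-1) and DKIX_DUMMY (-2) fit.
    if size <= 0xff:
        return 1
    if size <= 0xffff:
        return 2
    if size <= 0xffffffff:
        return 4
    return 8
```

Hoisting this dispatch out of the resize loop would replace a per-iteration branch with a single choice made once per resize.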

msg279806

Author: Jason R. Coombs (jaraco) * (Python committer)

Date: 2016-10-31 17:03

In https://github.com/pypa/setuptools/issues/836, I've pinpointed this commit as implicated in dictionaries spontaneously losing keys. I have not yet attempted to replicate the issue in a standalone environment, and I'm hoping someone with a better understanding of the implementation can devise a reproduction that distills the issue that setuptools seems only to hit in very specialized conditions.

Please let me know if I can help by providing more detail in the environment where this occurs or by filing another ticket. Somewhere we should capture that this is a regression pending release in 3.6.0b3 today, and for that reason, I'm adding Ned to this ticket.

msg279810

Author: Ned Deily (ned.deily) * (Python committer)

Date: 2016-10-31 17:20

Thanks, Jason, for the heads-up. Serhiy, can you take a look at this quickly? I'm going to hold 3.6.0b3 until we have a better idea what's going on.

msg279814

Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer)

Date: 2016-10-31 18:17

I have rolled back the changes.

msg279815

Author: Ned Deily (ned.deily) * (Python committer)

Date: 2016-10-31 18:19

Thanks, Serhiy! Jason, can you verify that there is no longer a 3.6 regression with your tests?

msg279816

Author: Jason R. Coombs (jaraco) * (Python committer)

Date: 2016-10-31 18:20

Testing now...

msg279817

Author: Jason R. Coombs (jaraco) * (Python committer)

Date: 2016-10-31 18:21

Confirmed. Tests are now passing where they were failing before. Thanks for the quick response!

msg279818

Author: Ned Deily (ned.deily) * (Python committer)

Date: 2016-10-31 18:22

Excellent, thanks everyone! I'll leave this open for re-evaluation for 3.7.

msg279882

Author: Xiang Zhang (xiang.zhang) * (Python committer)

Date: 2016-11-01 16:09

I used gdb to run the setuptools test suite and found that the assumption that split tables are always dense is broken by both dictresize3 and dictresize4.

#0 0x00007ffff71171c7 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55
#1 0x00007ffff7118e2a in __GI_abort () at abort.c:89
#2 0x00007ffff71100bd in __assert_fail_base (fmt=0x7ffff7271f78 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x5e4b90 "oldvalues[i] != ((void *)0)", file=file@entry=0x5e4aa0 "Objects/dictobject.c", line=line@entry=1270, function=function@entry=0x5e59f0 <__PRETTY_FUNCTION__.12083> "dictresize") at assert.c:92
#3 0x00007ffff7110172 in __GI___assert_fail (assertion=assertion@entry=0x5e4b90 "oldvalues[i] != ((void *)0)", file=file@entry=0x5e4aa0 "Objects/dictobject.c", line=line@entry=1270, function=function@entry=0x5e59f0 <__PRETTY_FUNCTION__.12083> "dictresize") at assert.c:101
#4 0x000000000048bddc in dictresize (mp=mp@entry=0x7ffff219d2b0, minused=<optimized out>) at Objects/dictobject.c:1270
#5 0x000000000048bf93 in insertion_resize (mp=mp@entry=0x7ffff219d2b0) at Objects/dictobject.c:1100
#6 0x000000000048c5fd in insertdict (mp=mp@entry=0x7ffff219d2b0, key=key@entry=0x7ffff579c3c0, hash=-3681610201421769281, value=value@entry=0x7ffff07f56e8) at Objects/dictobject.c:1136
#7 0x000000000048fdfd in PyDict_SetItem (op=op@entry=0x7ffff219d2b0, key=key@entry=0x7ffff579c3c0, value=value@entry=0x7ffff07f56e8) at Objects/dictobject.c:1572
#8 0x0000000000492cb5 in _PyObjectDict_SetItem (tp=tp@entry=0xd52548, dictptr=0x7ffff080cbd8, key=key@entry=0x7ffff579c3c0, value=value@entry=0x7ffff07f56e8) at Objects/dictobject.c:4274
#9 0x000000000049df8a in _PyObject_GenericSetAttrWithDict (obj=0x7ffff080cbb8, name=0x7ffff579c3c0, value=0x7ffff07f56e8, dict=dict@entry=0x0) at Objects/object.c:1172
#10 0x000000000049e0cf in PyObject_GenericSetAttr (obj=<optimized out>, name=<optimized out>, value=<optimized out>) at Objects/object.c:1194
#11 0x000000000049d80e in PyObject_SetAttr (v=v@entry=0x7ffff080cbb8, name=name@entry=0x7ffff579c3c0, value=value@entry=0x7ffff07f56e8) at Objects/object.c:932

Thanks to Victor's _PyDict_CheckConsistency, it's then easy to find that even without dictresize3 and dictresize4 (i.e. with the current version), the test suite still fails (#define DEBUG_PYDICT).

#0 0x00007ffff71171c7 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55
#1 0x00007ffff7118e2a in __GI_abort () at abort.c:89
#2 0x00007ffff71100bd in __assert_fail_base (fmt=0x7ffff7271f78 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x5e53a0 "mp->ma_values[i] != ((void *)0)", file=file@entry=0x5e4d00 "Objects/dictobject.c", line=line@entry=498, function=function@entry=0x5e5dd0 <__PRETTY_FUNCTION__.11869> "_PyDict_CheckConsistency") at assert.c:92
#3 0x00007ffff7110172 in __GI___assert_fail (assertion=assertion@entry=0x5e53a0 "mp->ma_values[i] != ((void *)0)", file=file@entry=0x5e4d00 "Objects/dictobject.c", line=line@entry=498, function=function@entry=0x5e5dd0 <__PRETTY_FUNCTION__.11869> "_PyDict_CheckConsistency") at assert.c:101
#4 0x000000000048ba17 in _PyDict_CheckConsistency (mp=mp@entry=0x7ffff0806e68) at Objects/dictobject.c:498
#5 0x00000000004927a3 in PyDict_SetDefault (d=d@entry=0x7ffff0806e68, key=0x7ffff2ffcdd8, defaultobj=0x8abf20 <_Py_NoneStruct>) at Objects/dictobject.c:2807
#6 0x0000000000492854 in dict_setdefault (mp=0x7ffff0806e68, args=<optimized out>) at Objects/dictobject.c:2824
#7 0x0000000000499469 in _PyCFunction_FastCallDict (func_obj=func_obj@entry=0x7ffff0f2f8c8, args=args@entry=0x105afe8, nargs=nargs@entry=2, kwargs=kwargs@entry=0x0) at Objects/methodobject.c:234
#8 0x0000000000499815 in _PyCFunction_FastCallKeywords (func=func@entry=0x7ffff0f2f8c8, stack=stack@entry=0x105afe8, nargs=nargs@entry=2, kwnames=kwnames@entry=0x0) at Objects/methodobject.c:295
#9 0x0000000000537b6f in call_function (pp_stack=pp_stack@entry=0x7fffffff5cd0, oparg=oparg@entry=2, kwnames=kwnames@entry=0x0) at Python/ceval.c:4793

From the backtrace we can see that PyDict_SetDefault breaks the invariant. And reading the code: yes, it doesn't handle split tables separately.

I simply replaced the logic in PyDict_SetDefault with insertdict to make a test. It doesn't fail, even with dictresize4.

An easy example to reproduce:

>>> class C:
...     pass
...
>>> c1, c2 = C(), C()
>>> c1.a, c1.b = 1, 2
>>> c2.__dict__.setdefault('b', None)
python: Objects/dictobject.c:498: _PyDict_CheckConsistency: Assertion `mp->ma_values[i] != ((void *)0)' failed.
Aborted (core dumped)

msg279890

Author: Xiang Zhang (xiang.zhang) * (Python committer)

Date: 2016-11-01 18:37

Opened #28583 and #28580 to tackle this.

msg279906

Author: Inada Naoki (methane) * (Python committer)

Date: 2016-11-02 08:32

Thanks, Xiang.

Shared-key dicts are very hard to maintain...

msg280042

Author: Xiang Zhang (xiang.zhang) * (Python committer)

Date: 2016-11-04 08:06

#28580 and #28583 are resolved now. I think dictresize4 can be recommitted now.

msg280139

Author: Roundup Robot (python-dev) (Python triager)

Date: 2016-11-06 15:24

New changeset 39f33c15243b by Serhiy Storchaka in branch 'default': Issue #28199: Microoptimized dict resizing. Based on patch by Naoki Inada. https://hg.python.org/cpython/rev/39f33c15243b

msg280140

Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer)

Date: 2016-11-06 15:26

Re-committed. It might be dangerous to commit this to 3.6 at this stage.

History

Date                 User              Action  Args
2022-04-11 14:58:37  admin             set     github: 72386
2016-11-06 15:26:21  serhiy.storchaka  set     status: open -> closed; resolution: fixed; messages: +; stage: commit review -> resolved
2016-11-06 15:24:08  python-dev        set     messages: +
2016-11-04 08:06:55  xiang.zhang       set     messages: +; stage: needs patch -> commit review
2016-11-02 08:32:02  methane           set     messages: +
2016-11-01 18:37:41  xiang.zhang       set     dependencies: + Optimize iterating split table values, PyDict_SetDefault doesn't combine split table when needed; messages: +
2016-11-01 16:09:36  xiang.zhang       set     messages: +
2016-10-31 18:22:43  ned.deily         set     priority: release blocker -> ; resolution: duplicate -> (no value); messages: +; stage: test needed -> needs patch
2016-10-31 18:21:02  jaraco            set     messages: +
2016-10-31 18:20:03  jaraco            set     messages: +
2016-10-31 18:19:31  ned.deily         set     messages: +
2016-10-31 18:17:31  serhiy.storchaka  set     messages: +
2016-10-31 17:31:26  xiang.zhang       set     messages: -
2016-10-31 17:25:37  xiang.zhang       set     messages: +
2016-10-31 17:20:29  ned.deily         set     status: closed -> open; priority: normal -> release blocker; messages: +; resolution: fixed -> duplicate; stage: resolved -> test needed
2016-10-31 17:03:08  jaraco            set     nosy: + ned.deily, jaraco; messages: +
2016-10-29 07:57:44  serhiy.storchaka  set     status: open -> closed; versions: + Python 3.7, - Python 3.6; type: performance; messages: +; resolution: fixed; stage: resolved
2016-10-29 07:50:23  python-dev        set     nosy: + python-dev; messages: +
2016-10-29 06:42:48  methane           set     messages: +
2016-10-29 06:14:00  xiang.zhang       set     messages: +
2016-10-28 20:03:45  serhiy.storchaka  set     messages: +
2016-10-28 17:03:36  vstinner          set     messages: +
2016-10-28 13:13:31  xiang.zhang       set     messages: +
2016-10-28 12:01:58  methane           set     messages: +
2016-10-28 11:49:48  serhiy.storchaka  set     messages: +
2016-10-28 11:42:46  methane           set     messages: +
2016-10-28 11:32:57  xiang.zhang       set     messages: +
2016-10-28 11:10:09  serhiy.storchaka  set     files: + dictresize4.patch; messages: +
2016-10-25 18:57:41  methane           set     messages: +
2016-10-07 03:33:02  methane           set     messages: +
2016-10-07 03:10:39  rhettinger        set     messages: +
2016-10-06 09:21:09  methane           set     files: + dictresize3.patch
2016-09-30 15:19:39  methane           set     files: + dictresize2.patch; messages: +
2016-09-30 14:46:33  vstinner          set     messages: +
2016-09-30 14:45:42  vstinner          set     nosy: + vstinner; messages: +
2016-09-30 08:16:38  methane           set     assignee: methane; messages: +
2016-09-30 05:02:52  methane           set     files: + dictresize.patch; keywords: + patch; messages: +
2016-09-19 04:52:58  serhiy.storchaka  set     messages: +
2016-09-19 04:40:49  methane           set     messages: +
2016-09-19 04:01:39  methane           set     messages: +
2016-09-19 03:16:54  xiang.zhang       set     nosy: + xiang.zhang; messages: +
2016-09-19 00:37:56  rhettinger        set     messages: +
2016-09-18 22:13:49  rhettinger        create