Issue 12352: multiprocessing.Value() hangs - Python tracker

Created on 2011-06-17 13:19 by greg.ath, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Messages (27)

msg138506 - (view)

Author: (greg.ath)

Date: 2011-06-17 13:19

Hi,

My multithreaded application uses multiprocessing.Value() to ensure thread-safe operations on shared data. Unexpectedly, after a change in my code, the call now hangs consistently.

I did a gdb backtrace of the hanging process and discovered that multiprocessing/heap.py tries to acquire the same non-recursive lock twice. The first acquire is done in the "malloc" function: #61 call_function (f= Frame 0x13be190, for file /usr/lib/python2.6/multiprocessing/heap.py, line 173, in malloc (self=<Heap(_stop_to_block={}, _lengths=[], _lock=<thread.lock at remote 0x7f00fc770eb8>, _allocated_blocks=set([...

The second acquire is done in the "free" function: #3 0x00000000004a7c5e in call_function (f= Frame 0x1662d50, for file /usr/lib/python2.6/multiprocessing/heap.py, line 155, in free (self=<Heap(_stop_to_block={}, _lengths=[], _lock=<thread.lock at remote 0x7f00fc770eb8>, _allocated_blocks=set([...

I don't understand the link between these two calls, so I am unable to write a simple script to reproduce the problem. My guess is that a garbage collection ran inside malloc, which then called free.

Python 2.6.5
Linux dev 2.6.32-25-server #45-Ubuntu SMP Sat Oct 16 20:06:58 UTC 2010 x86_64 GNU/Linux
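
For context on why the process hangs rather than crashes: a plain threading.Lock is not reentrant, so a thread that already holds it blocks forever on a second acquire. A minimal illustration of the pattern described above (not the actual heap.py code):

    import threading

    lock = threading.Lock()

    def free_block():
        with lock:              # second acquire from the same thread
            pass

    def malloc_block():
        with lock:              # first acquire
            free_block()        # e.g. triggered indirectly by the garbage collector

    # Calling malloc_block() would hang forever because threading.Lock is not
    # reentrant; with threading.RLock() the nested acquire would succeed.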

msg138741 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-20 17:18

Thanks for reporting this. There's indeed a bug which can lead to this deadlock. Relevant code in Lib/multiprocessing/heap.py

The obvious solution is to use a recursive lock instead. Could you try your application after changing:

    class Heap(object):

        _alignment = 8

        def __init__(self, size=mmap.PAGESIZE):
            self._lastpid = os.getpid()
            self._lock = threading.Lock()

to:

    class Heap(object):

        _alignment = 8

        def __init__(self, size=mmap.PAGESIZE):
            self._lastpid = os.getpid()
            self._lock = threading.RLock()

One could probably reproduce this by allocating and freeing many multiprocessing.Values, preferably with a lower GC threshold.
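
To illustrate that suggestion, a reproducer might look roughly like this (a sketch only, not a script from this report; the cyclic reference forces the wrappers to be reclaimed by the garbage collector rather than by reference counting):

    import gc
    import multiprocessing

    gc.set_threshold(5)          # collect very often to make the race likely

    values = []
    for i in range(100000):
        v = multiprocessing.Value('i', i)   # allocates a block from the shared Heap
        v.cycle = values                    # cycle -> reclaimed by the GC, not refcounting
        values.append(v)
        if i % 100 == 0:
            values = []                     # drop the batch; a GC run may fire inside malloc()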

msg138774 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-21 09:47

The obvious solution is to use a recursive lock instead.

Note that it's not really a solution, just a workaround to avoid deadlocks, because this might lead to corruption if free is called while the heap is in an inconsistent state. I have to think some more about a final solution, but I'd like to check first that this is really what's happening here.

msg138775 - (view)

Author: (greg.ath)

Date: 2011-06-21 10:02

Hi,

I also wonder about the performance cost of a recursive lock.

I am still unable to reproduce the bug in a simple script. Looking closely at the gdb stack, there is this frame: Frame 0x13be190, for file /usr/lib/python2.6/multiprocessing/heap.py, line 173. I understand that Python reuses only the beginning of a memory block, so it frees the remainder of the block. I use both Value(c_int) and Value(c_double), which have different sizes. That may explain this behaviour.

in heap.py, in the malloc function:

    167         self._lock.acquire()
    168         try:
    169             size = self._roundup(max(size,1), self._alignment)
    170             (arena, start, stop) = self._malloc(size)
    171             new_stop = start + size
    172             if new_stop < stop:
    173                 self._free((arena, new_stop, stop))

Thanks for your help

msg138776 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-21 10:28

Looking closely at the gdb stack, there is this frame:

Yeah, but it calls _free, which runs unlocked. That's not the problem.
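
To make the distinction explicit: in heap.py the public free() takes the heap lock and then calls the internal _free(), which assumes the lock is already held, while malloc() calls _free() directly while it still holds the lock. A simplified sketch of that structure (paraphrased, not the verbatim 2.6 source):

    class Heap(object):

        def free(self, block):
            # Public entry point; may be invoked by a Finalize callback during GC.
            # If the same thread is already inside malloc(), this acquire blocks
            # forever, since threading.Lock is not reentrant.
            self._lock.acquire()
            try:
                self._free(block)            # internal helper, assumes lock held
            finally:
                self._lock.release()

        def malloc(self, size):
            self._lock.acquire()             # first acquire
            try:
                size = self._roundup(max(size, 1), self._alignment)
                (arena, start, stop) = self._malloc(size)
                new_stop = start + size
                if new_stop < stop:
                    # give back the unused tail; calls the internal _free()
                    # directly, so no second acquire happens here
                    self._free((arena, new_stop, stop))
                ...
            finally:
                self._lock.release()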

I am still unable to reproduce the bug in a simple script.

Try with this one:

""" import multiprocessing.heap

tab = []

for i in range(100000): print(i) b = multiprocessing.heap.BufferWrapper(10) # create a circular reference (we want GC and not refcount collection when # the block goes out of scope) b.tab = tab tab.append(b) # drop buffers refcount to 0 to make them eligible to GC if i % 100 == 0: del tab tab = [] """

It deadlocks pretty quickly (well, on my box). And, as expected, disabling/enabling the GC inside malloc solves the problem. I have to think a little bit more for a clean solution.

msg138779 - (view)

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2011-06-21 11:02

I also wonder about the performance cost of a recursive lock.

An alternative is to disable the garbage collector in malloc():

def malloc(self, size):
    ...
    enabled = gc.isenabled()
    if enabled:
        # disable the garbage collector to avoid a deadlock if block
        # is freed (if self.free() is called)
        gc.disable()
    try:
        with self._lock:
            size = self._roundup(max(size,1), self._alignment)
            ...
            return block
    finally:
        if enabled:
            gc.enable()

gc.disable() and gc.enable() just set an internal flag and so should be fast.

msg139102 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-25 17:10

Patch (with test) attached. It disables the garbage collector inside critical sections. Of course, if another thread re-enables the GC while the current thread is inside a critical section, things can break (the probability is really low, but who knows). I can't think of any fully satisfying solution; it's tricky because the code doesn't just need to be thread-safe but reentrant, which is much more difficult. I've got another patch which does the following:

Another solution would be to make the Finalizer callback run lockless: maybe just add the block to a list/set, and perform the freeing of pending blocks synchronously when malloc() is called (or the heap is finalized). There are two drawbacks to that:

Suggestions welcome.

msg139126 - (view)

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2011-06-25 22:31

Or you can combine your two ideas:

in free(), perform a trylock of the mutex; if the trylock fails, then create a new Finalizer to postpone the freeing of the same block to a later time, ...

... perform the freeing of pending blocks synchronously when malloc() is called

If free() is called indirectly from malloc() (by the garbage collector), free() adds the block to free in a "pending free" list. Pseudo code:


    def __init__(self):
        ...
        self._pending_free = queue.Queue()

    def _exec_pending_free(self):
        while True:
            try:
                block = self._pending_free.get_nowait()
            except queue.Empty:
                break
            self._free(block)

    def free(self, block):
        if self._lock.acquire(False):
            try:
                self._exec_pending_free()
                self._free(block)
            finally:
                self._lock.release()
        else:
            # malloc holds the lock
            self._pending_free.put(block)

    def malloc(self, size):
        with self._lock:
            block = self._malloc(size)
            self._exec_pending_free()
            return block

Problem: if self._pending_free.put() adds a block after malloc() has already exited, the free will have to wait until the next call to malloc() or free(). I don't know if this case (free() called while malloc() is running, but malloc() exiting before free() finishes) can really occur: this issue is a deadlock because free() is "called" from malloc(), and so malloc() waits until free() is done. It might occur if the garbage collector calls free() after _exec_pending_free() but before the lock is released.

msg139130 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-25 23:33

[...]

    def free(self, block):
        if self._lock.acquire(False):
            try:
                self._exec_pending_free()
                self._free(block)
            finally:
                self._lock.release()
        else:
            # malloc holds the lock
            self._pending_free.put(block)

_pending_free uses a lock internally to make it thread-safe, so I think this will have exactly the same problem (the trylock can fail in case of contention or free() from multiple threads, thus we can't be sure that the else clause is executed on behalf of the garbage collector and it won't run while we're adding the block to _pending_free).

Anyway, this seems complicated and error-prone, disabling the gc seems the most straightforward way to handle that.

msg139134 - (view)

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2011-06-26 00:36

_pending_free uses a lock internally to make it thread-safe, so I think this will have exactly the same problem

You are probably right. Can't we use a lock-less list? list.append is atomic thanks to the GIL, isn't it? But I don't know how to implement the lock-less list consumer. It would be nice to have a function to remove and return the content of the list, an atomic "content=mylist[:]; del mylist[:]" function.

(the trylock can fail in case of contention or free() from multiple threads, thus we can't be sure that the else clause is executed on behalf of the garbage collector and it won't run while we're adding the block to _pending_free)

If two threads call free at the same time, the "second" one (taking the GIL) will add the block to pending_free.

Anyway, this seems complicated and error-prone, disabling the gc seems the most straightforward way to handle that.

I don't like touching such a global "variable", but you are right.

msg139156 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-26 09:42

[...]

I don't like touching such a global "variable", but you are right.

Well, I don't like it either, but I can't really think of any other solution. Antoine, any thought on that?

msg139160 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-26 10:14

You are probably right. Can't we use a lock-less list? list.append is atomic thanks to the GIL, isn't it? But I don't know how to implement the lock-less list consumer. It would be nice to have a function to remove and return the content of the list, an atomic "content=mylist[:]; del mylist[:]" function.

Why not just something like:

    while True:
        try:
            block = list.pop()
        except IndexError:
            break
        _free(block)

Lock-less lists are not strictly atomic (that only holds on CPython), but I doubt that gc.disable() is available and works on every Python interpreter anyway... So the idea would be:

msg139220 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-26 20:57

Here's a patch based on the second approach.
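
For readers following along, the lockless approach amounts to deferring frees that would otherwise deadlock. A minimal sketch of the idea, reconstructed from the discussion above (it is not the verbatim content of heap_gc_deadlock_lockless.diff, and details of the real Heap class are omitted):

    import mmap
    import threading

    class Heap(object):

        def __init__(self, size=mmap.PAGESIZE):
            ...                                   # rest of __init__ unchanged
            self._lock = threading.Lock()
            # plain list: append() and pop() are atomic under CPython's GIL
            self._pending_free_blocks = []

        def _free_pending_blocks(self):
            # Called with self._lock held: drain the deferred frees.
            while True:
                try:
                    block = self._pending_free_blocks.pop()
                except IndexError:
                    break
                self._free(block)

        def free(self, block):
            if not self._lock.acquire(False):
                # Probably called by the GC while malloc() holds the lock:
                # defer the free instead of deadlocking.
                self._pending_free_blocks.append(block)
            else:
                try:
                    self._free_pending_blocks()
                    self._free(block)
                finally:
                    self._lock.release()

        def malloc(self, size):
            with self._lock:
                self._free_pending_blocks()
                return self._malloc(size)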

msg139329 - (view)

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2011-06-27 22:28

heap_gc_deadlock_lockless.diff: _free_pending_blocks() and free() execute the following instructions in a different order, is it a problem?

vs

You may call _free_pending_blocks() just after lock.acquire() and a second time before lock.release()... it is maybe overkill, but it should reduce the probability of the delayed-free problem.

You may document that _pending_free_blocks.append() and _pending_free_blocks.pop() are atomic in CPython and don't need a specific lock.

msg139330 - (view)

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2011-06-27 22:32

You may document that _pending_free_blocks.append() and _pending_free_blocks.pop() are atomic in CPython and don't need a specific lock.

Oops, I completely skipped your long comment explaining everything! It is enough.

msg139332 - (view)

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2011-06-27 22:51

There are different techniques to work around this issue. My preferred one is heap_gc_deadlock_lockless.diff because it has fewer side effects and has a well-defined behaviour.

msg139372 - (view)

Author: Antoine Pitrou (pitrou) * (Python committer)

Date: 2011-06-28 18:45

Nice work! I also think heap_gc_deadlock_lockless.diff is good, except for Victor's reservation: is it deliberate that you reversed the following two statements in _free_pending_blocks(), compared to the code in free()?

msg139397 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-06-29 07:13

Nice work! I also think heap_gc_deadlock_lockless.diff is good, except for Victor's reservation: is it deliberate that you reversed the following two statements in _free_pending_blocks(), compared to the code in free()?

No, it's not deliberate (it shouldn't have any impact since they're protected by the mutex, though). As for calling _free_pending_blocks() a second time, I'm not sure that's necessary; I find the code simpler and cleaner as it is. I'll provide a new patch in a couple of days (no access to my development box right now).

msg139627 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-07-02 09:09

Updated patch.

msg139630 - (view)

Author: STINNER Victor (vstinner) * (Python committer)

Date: 2011-07-02 10:10

The last heap_gc_deadlock_lockless.diff looks good.

Note: please try to use different filenames for different versions of the same patch. For example, add a number (heap_gc_deadlock_lockless-2.diff) to the name.

msg139637 - (view)

Author: Antoine Pitrou (pitrou) * (Python committer)

Date: 2011-07-02 11:50

It seems you won't restore the thresholds if the GC wasn't enabled at first.
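
In other words, the cleanup should run unconditionally, e.g. in a finally block. A minimal sketch of the pattern being asked for (illustrative only, not the actual test_free_from_gc() code; the loop body is a placeholder):

    import gc

    old_threshold = gc.get_threshold()
    was_enabled = gc.isenabled()
    gc.set_threshold(10)        # make collections very frequent
    gc.enable()
    try:
        pass                    # allocate/free multiprocessing heap blocks here
    finally:
        if not was_enabled:
            gc.disable()
        gc.set_threshold(*old_threshold)   # restored even if the GC was disabled before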

msg139639 - (view)

Author: Roundup Robot (python-dev) (Python triager)

Date: 2011-07-02 11:57

New changeset 96a0788583c6 by Charles-François Natali in branch '2.7':
Issue #12352: Fix a deadlock in multiprocessing.Heap when a block is freed by
http://hg.python.org/cpython/rev/96a0788583c6

msg139640 - (view)

Author: Roundup Robot (python-dev) (Python triager)

Date: 2011-07-02 12:09

New changeset 874143242d79 by Charles-François Natali in branch '2.7':
Issue #12352: In test_free_from_gc(), restore the GC thresholds even if the GC
http://hg.python.org/cpython/rev/874143242d79

msg139641 - (view)

Author: Roundup Robot (python-dev) (Python triager)

Date: 2011-07-02 12:36

New changeset 0d4ca1e77205 by Charles-François Natali in branch '3.1':
Issue #12352: Fix a deadlock in multiprocessing.Heap when a block is freed by
http://hg.python.org/cpython/rev/0d4ca1e77205

msg139642 - (view)

Author: Roundup Robot (python-dev) (Python triager)

Date: 2011-07-02 12:40

New changeset 37606505b227 by Charles-François Natali in branch '3.2':
Merge issue #12352: Fix a deadlock in multiprocessing.Heap when a block is
http://hg.python.org/cpython/rev/37606505b227

msg139643 - (view)

Author: Roundup Robot (python-dev) (Python triager)

Date: 2011-07-02 12:43

New changeset fd8dc3746992 by Charles-François Natali in branch 'default':
Merge issue #12352: Fix a deadlock in multiprocessing.Heap when a block is
http://hg.python.org/cpython/rev/fd8dc3746992

msg139678 - (view)

Author: Charles-François Natali (neologix) * (Python committer)

Date: 2011-07-03 09:40

The test passes on all the buildbots, closing. greg, thanks for reporting this!

History

Date                 | User       | Action | Args
2022-04-11 14:57:18  | admin      | set    | github: 56561
2011-07-03 09:40:37  | neologix   | set    | status: open -> closed; resolution: fixed; messages: +; stage: resolved
2011-07-02 12:43:41  | python-dev | set    | messages: +
2011-07-02 12:40:21  | python-dev | set    | messages: +
2011-07-02 12:36:24  | python-dev | set    | messages: +
2011-07-02 12:09:49  | python-dev | set    | messages: +
2011-07-02 11:57:47  | python-dev | set    | nosy: + python-dev; messages: +
2011-07-02 11:50:17  | pitrou     | set    | messages: +
2011-07-02 10:10:58  | vstinner   | set    | messages: +
2011-07-02 09:13:45  | neologix   | set    | files: - heap_gc_deadlock_lockless.diff
2011-07-02 09:13:28  | neologix   | set    | files: - heap_gc_deadlock_lockless.diff
2011-07-02 09:13:17  | neologix   | set    | files: - heap_gc_deadlock.diff
2011-07-02 09:13:05  | neologix   | set    | files: + heap_gc_deadlock_lockless.diff
2011-07-02 09:09:46  | neologix   | set    | files: + heap_gc_deadlock_lockless.diff; messages: +
2011-06-29 07:13:08  | neologix   | set    | messages: +
2011-06-28 18:45:05  | pitrou     | set    | messages: +
2011-06-27 22:51:49  | vstinner   | set    | messages: +
2011-06-27 22:32:33  | vstinner   | set    | messages: +
2011-06-27 22:28:02  | vstinner   | set    | messages: +
2011-06-26 20:57:29  | neologix   | set    | files: + heap_gc_deadlock_lockless.diff; messages: +
2011-06-26 10:14:00  | neologix   | set    | messages: +
2011-06-26 09:42:31  | neologix   | set    | messages: +
2011-06-26 00:36:20  | vstinner   | set    | messages: +
2011-06-25 23:33:18  | neologix   | set    | messages: +
2011-06-25 22:31:44  | vstinner   | set    | messages: +
2011-06-25 17:17:21  | neologix   | set    | files: + heap_gc_deadlock.diff
2011-06-25 17:16:52  | neologix   | set    | files: - heap_gc_deadlock.diff
2011-06-25 17:10:47  | neologix   | set    | files: + heap_gc_deadlock.diff; keywords: + patch
2011-06-25 17:10:03  | neologix   | set    | messages: +
2011-06-21 11:02:56  | vstinner   | set    | nosy: + vstinner; messages: +
2011-06-21 10:28:30  | neologix   | set    | nosy: + pitrou; messages: +
2011-06-21 10:02:16  | greg.ath   | set    | messages: +
2011-06-21 09:47:54  | neologix   | set    | messages: +
2011-06-20 17:18:43  | neologix   | set    | nosy: + neologix; messages: +
2011-06-17 13:19:47  | greg.ath   | create |