Enable retries in the std.py backend. by ShaheedHaque · Pull Request #31 · poppyred/python-consul2
This repository was archived by the owner on May 30, 2024. It is now read-only.
Conversation
Resolves #30. This PR also:
- Replaces the deprecated call to requests.session() with its current counterpart, requests.Session().
- Allows arbitrary connection kwargs to be passed down, not least to allow the new retry behaviour to be overridden.
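To make the retry behaviour concrete, here is a minimal sketch of a retry-enabled requests.Session built with urllib3's Retry and HTTPAdapter. The values shown are illustrative assumptions, not necessarily the defaults this PR adopts, and the kwargs python-consul2 forwards may be spelled differently:

```python
# Illustrative sketch only; the retry defaults python-consul2 applies
# and the kwargs it forwards may differ from what is shown here.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# requests.Session() is the current spelling; requests.session() is
# the deprecated alias this PR replaces.
session = requests.Session()

# Retry transient failures with exponential backoff.
retries = Retry(
    total=3,
    backoff_factor=0.5,
    status_forcelist=[429, 500, 502, 503, 504],
)
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))
```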
Hi, is there any chance of a release with this PR incorporated? FWIW, there is a Celery fix (celery/celery#5605) which benefits from this; I had wanted to hold it back until this was available, but I feel I need to pursue it at this time.
ShaheedHaque added a commit to ShaheedHaque/celery that referenced this pull request
As per celery#5605, the Consul backend does not cleanly associate responses from Consul with the outbound Celery request that caused them. This leaves it prone to mistaking the (final) response from operation N as the response to an (early) part of operation N + 1.
This change fixes that by using a separate connection for each request. That of course has the downside of (a) being relatively expensive and (b) increasing the rate of connection requests into Consul:
The former is annoying, but at least the backend works reliably.
The latter can cause Consul to reject excessive connection attempts, but if it does, at least it returns a clear indication of this (IIRC, it responds with an HTTP 429 "too many connections" indication). Additionally, this issue can be ameliorated by enabling retries in python-consul2 (which I believe should be turned on regardless, to handle transient network issues). This is fixed by the PR in https://github.com/poppyred/python-consul2/pull/31.
Note that we have never seen (b) outside a test specifically trying to hammer the system, but we see (a) all the time in our normal system tests.
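As a hedged illustration of the 429 handling described above: urllib3's Retry can be restricted to that status, and it honours a Retry-After header by default, so a rejected connection attempt is retried after a pause. The numbers here are assumptions, not python-consul2's shipped defaults:

```python
# Illustrative only: a retry policy aimed at the HTTP 429 case above.
# The values are example assumptions, not python-consul2's defaults.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry_429 = Retry(
    total=5,
    backoff_factor=1.0,               # sleep grows between attempts
    status_forcelist=[429],           # retry when Consul sheds load
    respect_retry_after_header=True,  # honour a Retry-After header, if sent
)

session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=retry_429))
```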
ShaheedHaque added a commit to ShaheedHaque/celery that referenced this pull request
ShaheedHaque added a commit to ShaheedHaque/celery that referenced this pull request
auvipy pushed a commit to celery/celery that referenced this pull request
- As per #5605, the Consul backend does not cleanly associate responses from Consul with the outbound Celery request that caused them. This leaves it prone to mistaking the (final) response from operation N as the response to an (early) part of operation N + 1.
This change fixes that by using a separate connection for each request. That of course has the downside of (a) being relatively expensive and (b) increasing the rate of connection requests into Consul:
The former is annoying, but at least the backend works reliably.
The latter can cause Consul to reject excessive connection attempts, but if it does, at least it returns a clear indication of this (IIRC, it responds with an HTTP 429 "too many connections" indication). Additionally, this issue can be ameliorated by enabling retries in python-consul2 (which I believe should be turned on regardless, to handle transient network issues). This is fixed by the PR in https://github.com/poppyred/python-consul2/pull/31.
Note that we have never seen (b) outside a test specifically trying to hammer the system, but we see (a) all the time in our normal system tests.
To opt out of the new behaviour, add the parameter "one_client=1" to the connection URL.
- Increase code coverage.
- Rewrite the Consul backend documentation, and describe the options now available.
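For reference, a sketch of the opt-out in a Celery configuration. The consul:// URL scheme follows Celery's usual result-backend convention; only the one_client=1 query parameter comes from the commit message above, and the host and port are placeholders:

```python
# Hypothetical configuration; only the one_client=1 query parameter is
# taken from the commit message above. Host/port are placeholders.
from celery import Celery

app = Celery('tasks', broker='pyamqp://guest@localhost//')

# Default (new) behaviour: a separate connection per request.
app.conf.result_backend = 'consul://localhost:8500/'

# Opt out and reuse a single client connection instead:
# app.conf.result_backend = 'consul://localhost:8500/?one_client=1'
```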
jeyrce pushed a commit to jeyrce/celery that referenced this pull request
@poppyred Any chance of a release with this merged please?