bpo-29512: Add test.bisect, bisect failing tests by vstinner · Pull Request #2452 · python/cpython
Conversation
Add a new "python3 -m test.bisect" tool to bisect failing tests.
It can be used to find which test method(s) leak references, leak
files, etc.
Ok, here is a first complete implementation of my "bisect" tool for Python tests. It's likely incomplete, but it worked well when I needed it to find a reference leak. It's especially useful to find a leak on Windows, where development is harder for me ;-)
"# FIXME: document that following arguments are test arguments"
I failed to find the right argparse option to say that all unknown options must go into test_args, without having to declare the 20+ options of regrtest.
The code works. It's just an issue with --help, which doesn't show "test_args" in the usage line.
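For context, the standard argparse way to collect unparsed options is `parse_known_args()`, which returns the leftovers as a list without declaring each regrtest flag, though (as noted above) the leftovers indeed don't appear in the --help usage line. A minimal sketch, with made-up option values:

```python
import argparse

# Only the tool's own options are declared; everything else is
# collected into test_args and can be forwarded to regrtest.
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', help='file listing test names')

args, test_args = parser.parse_known_args(
    ['-i', 'tests.txt', '-R', '3:3', 'test_os'])
print(args.input)   # option known to the parser: 'tests.txt'
print(test_args)    # unknown options, left as-is: ['-R', '3:3', 'test_os']
```

The alternative, `argparse.REMAINDER`, captures everything after the first unrecognized argument into a declared positional, which does show up in the usage line, but it changes how interleaved options are parsed.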
I wrote the tool to find the failing method leaking references. The tool can skip a failing test when at least 2 tests fail. Since the tool takes a random sample of tests at each iteration, it's likely that running the tool gives two different results when multiple tests fail.
It might be possible to make the tool smarter by validating that skipped tests don't fail, but I'm not sure that it's worth it, at least for the first version of the tool. It's always possible to enhance such a tool step by step ;-)
A more concrete example of multiple failing tests: right now, at least 3 methods of test_code leak references. Each run of test.bisect randomly displays one of these failing tests:
test.test_code.CoExtra.test_free_different_thread
test.test_code.CoExtra.test_get_set
test.test_code.CoExtra.test_free_called
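The randomized narrowing described above can be sketched roughly as follows. This is a toy stand-in, not the real tool: `fails(subset)` is a hypothetical callback replacing the subprocess run of regrtest, and the loop bound is arbitrary. Because a random half is sampled at each iteration, several independently failing tests can make different runs converge on different survivors, as in the test_code example:

```python
import random

def bisect_tests(tests, fails, max_iter=100):
    """Shrink a list of test names to a smaller subset that still fails.

    fails(subset) stands in for running the test suite on the subset
    and reporting whether it still fails (e.g. still leaks references).
    """
    while len(tests) > 1 and max_iter:
        max_iter -= 1
        subset = random.sample(tests, len(tests) // 2)
        if fails(subset):
            tests = subset  # keep only the smaller, still-failing subset
    return tests

# Toy reproduction: one "leaking" test hidden among harmless ones.
leaking = 'test.test_code.CoExtra.test_get_set'
tests = ['test_%d' % i for i in range(15)] + [leaking]
result = bisect_tests(tests, lambda subset: leaking in subset)
print(result)  # usually narrows down to the single failing test
```

With two or more independently failing tests, `fails()` would accept any subset containing at least one of them, so the loop can discard the others early, which is why repeated runs report different methods.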
For me, it's ok, I understand how it works ;-)
The code looks fine. I'm a big +1 on having more tools to debug ref leaks, knowing firsthand how hard it is to find such regressions. Let's merge it and work/improve it.
1st1 approved these changes Jun 28, 2017
Let's merge it and work/improve it.
Since I also proposed the same development method: ok, let's do that ;-)
vstinner added a commit that referenced this pull request
…r to 3.6 (#2513)
Add a new "python3 -m test.bisect" tool to bisect failing tests.
It can be used to find which test method(s) leak references, leak files, etc. (cherry picked from commit 84d9d14)
Only report a leak if each run leaks at least one memory block. (cherry picked from commit beeca6e)
vstinner added a commit that referenced this pull request
… from 3.6 to 3.5 (#2540)
Add a new "python3 -m test.bisect" tool to bisect failing tests.
It can be used to find which test method(s) leak references, leak files, etc. (cherry picked from commit 84d9d14)
Only report a leak if each run leaks at least one memory block. (cherry picked from commit beeca6e)
(cherry picked from commit a3ca94d)
--forever now stops if a fail changes the environment. (cherry picked from commit 5e87592) (cherry picked from commit 4132adb)
vstinner added a commit that referenced this pull request
…o 2.7 (#2541)
Add a new "python3 -m test.bisect" tool to bisect failing tests.
It can be used to find which test method(s) leak references, leak files, etc.
--forever now stops if a fail changes the environment.
- Fix test_bisect: use absolute import