Issue 18968: Find a way to detect incorrectly skipped tests

Issue 18952 (fixed in http://hg.python.org/cpython/rev/23770d446c73) was another case where a test suite change resulted in tests not being executed as expected. This wasn't noticed initially, since the change didn't fail the tests; it just silently skipped them.

We've had similar issues in the past, due to test name conflicts (so the second test shadowed the first), to old regrtest-style test discovery missing a class name from the test list, and to incorrect skip conditions on platform-specific tests.
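The name-conflict case is worth illustrating, since Python gives no warning for it. The sketch below (a hypothetical example, not taken from any real test module) shows how redefining a test method silently discards the first definition, so the loader only ever sees one test:

```python
import unittest

class Example(unittest.TestCase):
    def test_foo(self):
        self.assertEqual(1 + 1, 2)

    # Rebinding the same name in the class body silently replaces the
    # method above: only this version is collected, and nothing reports
    # that the first test was lost.
    def test_foo(self):
        self.assertEqual(2 + 2, 4)

loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(Example)
print(suite.countTestCases())  # only 1 test collected, not 2
```

Because the total test count simply comes out lower, nothing in a normal test run flags the shadowed test as missing.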

Converting "unexpected skips" to a failure isn't enough, since these errors occur at a narrower scope than entire test modules.

I'm not sure what would work, though. Perhaps collecting platform-specific coverage stats for the test suite itself and looking for regressions?

Run the test suite both with and without the patch, and compare the results. Additional skipped tests, additional failed tests, or fewer new tests than expected all signal a problem.
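A minimal sketch of that comparison, using only the stock unittest result object (the `Before`/`After` classes here are hypothetical stand-ins for the pre-patch and post-patch versions of a suite):

```python
import io
import unittest

def run_stats(suite):
    """Run a suite silently and return (tests_run, failed, skipped)."""
    result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
    return (result.testsRun,
            len(result.failures) + len(result.errors),
            len(result.skipped))

class Before(unittest.TestCase):       # stand-in: suite without the patch
    def test_a(self): pass
    def test_b(self): pass

class After(unittest.TestCase):        # stand-in: suite with the patch
    def test_a(self): pass
    @unittest.skip("accidentally disabled by the patch")
    def test_b(self): pass

load = unittest.defaultTestLoader.loadTestsFromTestCase
before = run_stats(load(Before))
after = run_stats(load(After))

# The first two checks are the automatable ones: any increase in skips
# or failures relative to the baseline run is suspicious.
extra_skips = after[2] - before[2]
extra_fails = after[1] - before[1]
if extra_skips > 0 or extra_fails > 0:
    print(f"suspicious: +{extra_skips} skips, +{extra_fails} failures")
```

The third signal (fewer new tests than the patch was expected to add) can't be computed from the results alone, since the tooling has no way to know how many tests the patch *should* have added.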

The first two should be automatable; the last depends on a human paying attention. ;)

This should at least deal with problems created by a patch.