[Python-Dev] Test the test suite?
Nick Coghlan ncoghlan at gmail.com
Wed Aug 28 23:40:13 CEST 2013
On 29 Aug 2013 02:34, "Serhiy Storchaka" <storchaka at gmail.com> wrote:
> 28.08.13 14:37, Victor Stinner wrote:
>> No, my question is: how can we detect that a test is never run? Do we need test coverage on the test suite? Or inject faults into the code to test the test suite? Any other idea?
>
> Currently a lot of tests are skipped silently. See issue18702 [1]. Perhaps we need a tool which collects the skipped and run tests, compares these sets with the sets from a previous run on the same buildbot, and reports if they differ.
>
> [1] http://bugs.python.org/issue18702
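The tool Serhiy describes could be prototyped along these lines; this is a minimal sketch assuming each buildbot run dumps the names of its run and skipped tests to a small JSON file (the file names and format here are hypothetical, not something the test runner produces today):

import json
from pathlib import Path

# Hypothetical file names; each run would write its results here before
# the comparison step executes.
PREVIOUS = Path("previous_run.json")
CURRENT = Path("current_run.json")

def load(path):
    # Assumed (hypothetical) format:
    # {"run": ["test_foo", ...], "skipped": ["test_bar", ...]}
    data = json.loads(path.read_text())
    return set(data["run"]), set(data["skipped"])

def report_differences(previous, current):
    prev_run, prev_skipped = load(previous)
    cur_run, cur_skipped = load(current)
    # Tests that executed last time but are now skipped or missing.
    newly_skipped = prev_run & cur_skipped
    disappeared = prev_run - cur_run - cur_skipped
    # Tests that used to be skipped but now execute.
    newly_run = prev_skipped & cur_run
    for name in sorted(newly_skipped):
        print("now skipped:", name)
    for name in sorted(disappeared):
        print("no longer collected:", name)
    for name in sorted(newly_run):
        print("now running:", name)

if __name__ == "__main__":
    report_differences(PREVIOUS, CURRENT)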
Figuring out a way to collect and merge coverage data would likely be more useful, since that could be applied to the standard library as well. Ned Batchelder's coverage.py supports aggregating data from multiple runs.
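As a rough illustration of that aggregation, here is a sketch using coverage.py's Coverage API (the class and method names reflect later coverage.py releases, and run_tests() is a hypothetical stand-in for whatever actually exercises the test suite):

import coverage

def run_tests():
    # Hypothetical stand-in for invoking the test suite, e.g. via
    # unittest or regrtest; replace with the real entry point.
    pass

# data_suffix=True makes each run write its own .coverage.<suffix> file
# instead of overwriting a single .coverage file.
cov = coverage.Coverage(data_suffix=True)
cov.start()
run_tests()
cov.stop()
cov.save()

# After any number of such runs (possibly on different machines, with the
# data files copied into one directory), merge them and report:
combined = coverage.Coverage()
combined.combine()    # folds all .coverage.* files into one data set
combined.save()
combined.report()     # or combined.html_report() for HTML output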
Cheers, Nick.