Issue 18451: Omit test files in devinabox coverage run
This issue has been migrated to GitHub: https://github.com/python/cpython/issues/62651
classification

| Title: | Omit test files in devinabox coverage run | | |
|---|---|---|---|
| Type: | behavior | Stage: | needs patch |
| Components: | | Versions: | |
process

| Status: | closed | Resolution: | wont fix |
|---|---|---|---|
| Dependencies: | | Superseder: | |
| Assigned To: | brett.cannon | Nosy List: | brett.cannon, pitrou |
| Priority: | normal | Keywords: | easy |
Created on 2013-07-14 15:19 by brett.cannon, last changed 2022-04-11 14:57 by admin. This issue is now closed.
Messages (6)
msg193056 - Author: Brett Cannon (brett.cannon) | Date: 2013-07-14 15:19

Devinabox's full_coverage.py run should omit test files. Probably need to put the path in quotes for proper escaping (same for report).
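For context, the change being discussed maps onto coverage.py's `omit` setting, which accepts glob patterns to exclude files from measurement and reporting. As a rough sketch (the exact paths relative to devinabox's CPython checkout are an assumption here, not taken from the issue), a `.coveragerc` along these lines would drop the test suite from both the run and the report:

```ini
# Hypothetical .coveragerc sketch -- the Lib/test paths are assumed,
# not quoted from the issue or from devinabox's full_coverage.py.
[run]
omit =
    */test/*
    */tests/*

[report]
omit =
    */test/*
    */tests/*
```

Equivalently, the patterns can be passed on the command line (e.g. `coverage report --omit="*/test/*"`), quoted so the shell does not expand the glob itself, which appears to be the escaping concern Brett raises above.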
msg193273 - Author: Antoine Pitrou (pitrou) | Date: 2013-07-18 09:44

Is it common practice to ignore test files in coverage reports? It sounds like not omitting them can help you find out if e.g. some tests are not run by mistake.
msg193290 - Author: Brett Cannon (brett.cannon) | Date: 2013-07-18 13:23

The key problem with keeping them is that beginners might mistake that a test didn't run simply because some resource wasn't available when the tests were run (e.g. I forget to run the coverage report so I do it at an airport on the way to the conference and have no Internet). Plus if you find this out you need to know hours in advance as a complete coverage run takes quite a while.
msg193291 - Author: Antoine Pitrou (pitrou) | Date: 2013-07-18 13:32

> The key problem with keeping them is that beginners might mistake
> that a test didn't run simply because some resource wasn't available
> when the tests were run (e.g. I forget to run the coverage report so
> I do it at an airport on the way to the conference and have no Internet). Plus
> if you find this out you need to know hours in advance as a complete
> coverage run takes quite a while.

I don't understand what the problem is. The coverage shows you precisely what was run and what wasn't. How is it a bug?
msg193297 - Author: Brett Cannon (brett.cannon) | Date: 2013-07-18 13:53

The problem is confusing new contributors.

"Why wasn't this test run?"
"Because you're not on OS X."
"Why wasn't this run?"
"I didn't have internet at the time."

It's noise that's unnecessary. People should be focusing on the coverage of the modules in the stdlib and not the tests themselves. Plus the process takes so darn long already I don't think it's worth the time to waste on covering the tests as well.
msg193299 - Author: Antoine Pitrou (pitrou) | Date: 2013-07-18 14:14

> The problem is confusing new contributors.
>
> "Why wasn't this test run?"
> "Because you're not on OS X."
> "Why wasn't this run?"
> "I didn't have internet at the time."

Well, you're trying to fix a symptom, rather than the underlying cause. And the concept of skipped tests is quite basic, it shouldn't be very hard to grasp.

> It's noise that's unnecessary. People should be focusing on the
> coverage of the modules in the stdlib and not the tests themselves.
> Plus the process takes so darn long already I don't think it's worth
> the time to waste on covering the tests as well.

Whether or not the report includes the test files shouldn't impact the test runtime.
History

| Date | User | Action | Args |
|---|---|---|---|
| 2022-04-11 14:57:48 | admin | set | github: 62651 |
| 2013-08-12 16:15:34 | brett.cannon | set | status: open -> closed; resolution: wont fix |
| 2013-07-18 14:14:17 | pitrou | set | messages: + |
| 2013-07-18 13:53:22 | brett.cannon | set | messages: + |
| 2013-07-18 13:32:54 | pitrou | set | messages: + |
| 2013-07-18 13:23:37 | brett.cannon | set | messages: + |
| 2013-07-18 09:44:13 | pitrou | set | nosy: + pitrou; messages: + |
| 2013-07-14 15:19:29 | brett.cannon | create | |