Stubtest: Fix __mypy-replace false positives by AlexWaygood · Pull Request #15689 · python/mypy

Running stubtest on typeshed's stdlib stubs with the mypy master branch (or the mypy 1.5 branch) currently produces the following output:

error: pstats.FunctionProfile.__mypy-replace is not present at runtime
Stub: in file stdlib\pstats.pyi:32
def (*, ncalls: builtins.str =, tottime: builtins.float =, percall_tottime: builtins.float =, cumtime: builtins.float =, percall_cumtime: builtins.float =, file_name: builtins.str =, line_number: builtins.int =)
Runtime:
MISSING

error: pstats.StatsProfile.__mypy-replace is not present at runtime
Stub: in file stdlib\pstats.pyi:41
def (*, total_tt: builtins.float =, func_profiles: builtins.dict[builtins.str, pstats.FunctionProfile] =)
Runtime:
MISSING

Found 2 errors (checked 541 modules)

This is due to recent changes @ikonst made to mypy's dataclasses plugin: mypy now generates internal __mypy-replace methods in order to type check dataclasses.replace() calls more accurately.
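For context, here is a minimal sketch of the kind of call the synthetic method lets mypy check, using one of the pstats dataclasses from the errors above (the __mypy-replace method only ever exists inside mypy's analysis; it is never present at runtime):

    from dataclasses import replace
    from pstats import FunctionProfile

    fp = FunctionProfile(
        ncalls="1", tottime=0.5, percall_tottime=0.5, cumtime=0.5,
        percall_cumtime=0.5, file_name="spam.py", line_number=1,
    )

    replace(fp, tottime=0.25)     # fine
    replace(fp, tottime="0.25")   # mypy can now flag the incompatible argument type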

To fix these false positives, I've added a new slot to SymbolTableNode that indicates whether a plugin-generated method actually exists at runtime. If we don't want to do this, there are other approaches we could take, such as hardcoding a list of problematic plugin-generated method names in stubtest.py so that they are never checked. I went with this approach because it felt more principled, but I'm happy to change it if adding a new slot to SymbolTableNode is problematic.
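To make that concrete, here is a rough sketch of how stubtest could consume such a flag when walking a class's symbol table; absent_at_runtime is a placeholder name invented for this illustration, not necessarily the attribute the diff actually adds:

    from mypy.nodes import TypeInfo


    def names_to_verify(info: TypeInfo) -> list[str]:
        """Collect the class members that stubtest should compare against the runtime."""
        names = []
        for name, entry in info.names.items():
            # Skip symbols that a plugin synthesised purely for type checking
            # (e.g. "__mypy-replace"): they have no runtime counterpart, so
            # comparing them against the runtime object would be a false positive.
            # "absent_at_runtime" is a hypothetical slot name for illustration.
            if getattr(entry, "absent_at_runtime", False):
                continue
            names.append(name)
        return names

Marking the SymbolTableNode itself keeps the knowledge next to the plugin that generates the symbol, rather than maintaining a parallel hardcoded list in stubtest.py.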

I tried adding this test to mypy/test/teststubtest.py:

diff --git a/mypy/test/teststubtest.py b/mypy/test/teststubtest.py
index 661d46e9f..80ece38ad 100644
--- a/mypy/test/teststubtest.py
+++ b/mypy/test/teststubtest.py
@@ -1842,6 +1842,27 @@ class StubtestUnit(unittest.TestCase):
             error=None,
         )
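A new case in teststubtest.py follows the usual Case(stub=..., runtime=..., error=...) pattern used throughout that file, roughly along these lines (an illustration of the shape only, not the exact content of the hunk above):

    @collect_cases
    def test_dataclasses(self) -> Iterator[Case]:
        yield Case(
            stub="""
            from dataclasses import dataclass

            @dataclass
            class Foo:
                bar: str
            """,
            runtime="""
            from dataclasses import dataclass

            @dataclass
            class Foo:
                bar: str
            """,
            error=None,
        )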

Unfortunately, the test fails, I think because mypy doesn't currently generate __eq__ and __repr__ methods for dataclasses (see #12186). There's also something going on with _DT that I don't really understand:

======================================================================== FAILURES =========================================================================
______________________________________________________________ StubtestUnit.test_dataclasses ______________________________________________________________
[gw3] win32 -- Python 3.11.2 C:\Users\alexw\coding\mypy\venv\Scripts\python.exe

args = (<mypy.test.teststubtest.StubtestUnit testMethod=test_dataclasses>,), kwargs = {}
cases = [<mypy.test.teststubtest.Case object at 0x0000027188E74210>, <mypy.test.teststubtest.Case object at 0x0000027188E74250>]
expected_errors = set()
c = <mypy.test.teststubtest.Case object at 0x0000027188E74250>, @py_assert1 = False
@py_format3 = "{'test_module...Foo.__repr__'} == set()\nExtra items in the left set:\n'test_module.Foo._DT'\n'test_module.Foo.__eq__'\n'test_module.Foo.__repr__'\nUse -v to get more diff"
@py_format5 = "test_module.Foo._DT\ntest_module.Foo.__eq__\ntest_module.Foo.__repr__\n\n>assert {'test_module...Foo.__repr__'} ==...he left set:\n'test_module.Foo._DT'\n'test_module.Foo.__eq__'\n'test_module.Foo.__repr__'\nUse -v to get more diff"
output = 'test_module.Foo._DT\ntest_module.Foo.__eq__\ntest_module.Foo.__repr__\n'
actual_errors = {'test_module.Foo._DT', 'test_module.Foo.__eq__', 'test_module.Foo.__repr__'}

def test(*args: Any, **kwargs: Any) -> None:
    cases = list(fn(*args, **kwargs))
    expected_errors = set()
    for c in cases:
        if c.error is None:
            continue
        expected_error = c.error
        if expected_error == "":
            expected_error = TEST_MODULE_NAME
        elif not expected_error.startswith(f"{TEST_MODULE_NAME}."):
            expected_error = f"{TEST_MODULE_NAME}.{expected_error}"
        assert expected_error not in expected_errors, (
            "collect_cases merges cases into a single stubtest invocation; we already "
            "expect an error for {}".format(expected_error)
        )
        expected_errors.add(expected_error)
    output = run_stubtest(
        stub="\n\n".join(textwrap.dedent(c.stub.lstrip("\n")) for c in cases),
        runtime="\n\n".join(textwrap.dedent(c.runtime.lstrip("\n")) for c in cases),
        options=["--generate-allowlist"],
    )

    actual_errors = set(output.splitlines())
>   assert actual_errors == expected_errors, output

E   AssertionError: test_module.Foo._DT
E     test_module.Foo.__eq__
E     test_module.Foo.__repr__
E
E   assert {'test_module...Foo.__repr__'} == set()
E     Extra items in the left set:
E     'test_module.Foo._DT'
E     'test_module.Foo.__eq__'
E     'test_module.Foo.__repr__'
E     Use -v to get more diff

mypy\test\teststubtest.py:171: AssertionError
================================================================= short test summary info =================================================================
FAILED mypy/test/teststubtest.py::StubtestUnit::test_dataclasses - AssertionError: test_module.Foo._DT
============================================================== 1 failed, 48 passed in 6.78s ===============================================================

However, I have verified manually that this change gets rid of all the new false positives when running stubtest on typeshed's stdlib.