[Python-Dev] Is core dump always a bug? Advice requested

Bob Ippolito bob at redivi.com
Wed May 12 00:05:49 EDT 2004


On May 11, 2004, at 11:33 PM, Phillip J. Eby wrote:

At 10:14 PM 5/11/04 -0400, Fred L. Drake, Jr. wrote:

On Tuesday 11 May 2004 10:06 pm, Greg Ewing wrote:
> Just a thought, but is statically verifying the bytecode even
> possible in principle? Seems to me it could be equivalent to
> the halting problem.

> I don't see any reason to think them equivalent; we don't need to
> determine that the code will execute to completion, only that the
> bytecodes can be decoded without error. Not trivial by any means, but
> I think it's a more constrained problem.

Right; it should be possible, for example, to verify the stack depth used, that stack growth isn't unlimited, that there is a return at the end, that there are no invalid references to locals or co_names, and so on. Basic "sanity" stuff, not in-depth analysis. But it isn't clear to me that this is really necessary for new.code(), let alone .pyc/.pyo files. And anybody who's crazy enough to send bytecode "over the wire" as part of an agent system or whatever had darn well better have written their own bytecode verifier already, and maybe a sandbox or two as well.
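The "decoded without error" half of this check can be sketched with the standard-library `dis` module (a modern sketch; `dis.get_instructions` postdates this thread, and the helper name here is hypothetical):

```python
import dis
import types

def bytecode_decodes(code: types.CodeType) -> bool:
    """Hypothetical sanity check: return True if every bytecode
    instruction in the code object can be decoded without error.
    This is the 'constrained' problem Fred describes -- no attempt
    is made to reason about whether the code halts."""
    try:
        # dis.get_instructions walks the raw bytecode, raising if it
        # hits something it cannot decode into an instruction.
        for _ in dis.get_instructions(code):
            pass
    except Exception:
        return False
    return True

def example(x):
    return x + 1

# Code compiled by CPython itself should always decode cleanly.
assert bytecode_decodes(example.__code__)
```

Checking stack depth, local/name indices, and the presence of a trailing return would require walking the same instruction stream with extra bookkeeping, but the shape of the loop is the same.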

Exactly; why bother, when there's plenty of valid bytecode that crashes? Even without third-party extension modules, there are usually platform-specific files you can open and play with that'll cause a crash.
os._exit and os.abort are essentially crashes too, and then there's the signal module, os.kill, etc.
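The os._exit point is easy to demonstrate: it terminates the process immediately, skipping atexit handlers and all interpreter cleanup, which from the outside looks much like a crash. A small sketch, run in a child process so the demonstration itself survives:

```python
import subprocess
import sys

# Run a child interpreter that registers an atexit handler and then
# calls os._exit.  os._exit ends the process with the given status
# without running any cleanup, so the handler never fires.
script = (
    "import atexit, os\n"
    "atexit.register(lambda: print('cleanup'))\n"
    "os._exit(3)\n"
)
result = subprocess.run(
    [sys.executable, "-c", script],
    capture_output=True,
    text=True,
)

assert result.returncode == 3          # the status passed to os._exit
assert "cleanup" not in result.stdout  # the atexit handler was skipped
```

os.abort goes further still, raising SIGABRT and (on most platforms) producing an actual core dump, which is exactly the kind of "crash on demand" the thread is about.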

The only example I can think of is if you were experimenting with genetic algorithms or something, and you decided to randomly generate and mutate bytecode to see if it produces interesting results... but in that case you probably want a much smaller and simpler VM anyway.

-bob


