[Python-Dev] C API for gc.enable() and gc.disable()

"Martin v. Löwis" martin at v.loewis.de
Sun Jun 22 01:00:41 CEST 2008


> XEmacs implements this strategy in a way which is claimed to give constant amortized time (i.e., averaged over memory allocated).

See my recent proposal. The old trick is to trigger a reorganization whenever the total size has grown by a fixed fraction, resulting in amortized-constant overhead per increase (assuming each reorganization takes time linear in the total size).
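A toy illustration of that trick (a hypothetical container, not CPython's collector): reorganize whenever the size has grown by a fixed fraction since the last reorganization. If each reorganization costs time linear in the current size, the reorganization sizes form a geometric series, so the total work over n insertions stays within a constant factor of n.

```python
class ReorganizingStore:
    """Toy container: reorganizes when its size has grown by a fixed
    fraction (here 50%) since the last reorganization."""

    GROWTH_FRACTION = 0.5  # any fixed fraction > 0 gives the same asymptotics

    def __init__(self):
        self.items = []
        self.last_reorg_size = 1   # size at the last reorganization
        self.reorg_work = 0        # total linear-time work performed so far

    def add(self, item):
        self.items.append(item)
        if len(self.items) >= self.last_reorg_size * (1 + self.GROWTH_FRACTION):
            self._reorganize()

    def _reorganize(self):
        # Stand-in for a full GC pass or compaction: cost linear in size.
        self.reorg_work += len(self.items)
        self.last_reorg_size = len(self.items)


store = ReorganizingStore()
for i in range(100_000):
    store.add(i)

# Total reorganization work is a small constant multiple of the number
# of insertions, i.e., amortized-constant overhead per insertion.
print(store.reorg_work / len(store.items))
```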

> However, isn't the real question whether there is memory pressure or not? If you've got an unloaded machine with 2GB of memory, even a 1GB spike might have no observable consequences. How about a policy of GC-ing with decreasing period ("time" measured by bytes allocated or number of allocations) as the fraction of memory used increases, starting from a pretty large fraction (say 50% by default)?
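The quoted policy might be sketched as follows; `collection_period` is a hypothetical helper, not a CPython API, and how to obtain the used-memory fraction is exactly the difficulty raised in the reply below:

```python
def collection_period(memory_fraction, start_at=0.5, base_period=1_000_000):
    """Hypothetical policy: below `start_at` (50% by default), never
    collect automatically; above it, shrink the collection period
    (measured in bytes allocated) as the used-memory fraction nears 1."""
    if memory_fraction < start_at:
        return None  # plenty of headroom: no automatic collection
    # Linearly scale the period down from base_period (at start_at)
    # toward zero (as memory_fraction approaches 1).
    headroom = (1.0 - memory_fraction) / (1.0 - start_at)
    return max(1, int(base_period * headroom))


# The period shrinks as memory fills up:
for frac in (0.4, 0.5, 0.75, 0.95):
    print(frac, collection_period(frac))
```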

The problem with such an approach is that memory pressure is very difficult to measure. What to do about virtual memory? What to do about other applications that also consume memory?

On some systems (Windows in particular), the operating system indicates memory pressure through some IPC mechanism; on such systems, it might be reasonable to perform garbage collection (only?) when the system asks for it. However, the system might not ask for GC while the swap space is still not exhausted, meaning that the deferred GC would take a long time to complete (having to page in every object).

> Nevertheless, I think the real solution has to be for Python programmers to be aware that there is GC, and that they can tune it.
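CPython does expose such knobs at the Python level: gc.set_threshold() controls how many allocations trigger a collection, and gc.disable()/gc.enable() can bracket an allocation-heavy phase (the pattern this thread's subject line asks to expose in the C API). A minimal sketch:

```python
import gc

# Save the current collection thresholds so they can be restored later.
saved = gc.get_threshold()

# Make generation-0 collections ten times less frequent.
gc.set_threshold(saved[0] * 10, *saved[1:])

# Or suspend cyclic GC entirely around a bulk build, then catch up.
gc.disable()
try:
    graph = [{"id": i} for i in range(100_000)]  # many container allocations
finally:
    gc.enable()
    gc.collect()  # one deferred pass instead of many incremental ones

gc.set_threshold(*saved)  # restore the original thresholds
```

Disabling the collector during a phase that allocates many containers but creates no garbage cycles avoids repeated, fruitless passes; the single gc.collect() afterwards pays the deferred cost once.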

I don't think there is a "real solution". I think programmers should abstain from complaining if they can do something about the problem in their own application (unless the complaint is formulated as a patch) - wait - I think programmers should abstain from complaining, period.

Regards, Martin


