[Python-Dev] Benchmarking the int allocator (Was: Type of range object members)
Guido van Rossum guido at python.org
Wed Aug 16 19:08:00 CEST 2006
- Previous message: [Python-Dev] Benchmarking the int allocator (Was: Type of range object members)
- Next message: [Python-Dev] Benchmarking the int allocator (Was: Type of range object members)
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On 8/16/06, "Martin v. Löwis" <martin at v.loewis.de> wrote:
I now have some numbers. For the attached t.py, the unmodified svn python gives

Test 1 3.25420880318
Test 2 1.86433696747

and the one with the attached patch gives

Test 1 3.45080399513
Test 2 2.09729003906

So there apparently is a performance drop on int allocations of about 5-10%. On this machine (P4 3.2GHz) I could not find any difference in pystones from this patch. Notice that this test case is extremely focused on measuring int allocation (I just noticed I should have omitted the for loop in the second case, though).
I think the test isn't nearly focused enough on int allocation. I wonder if you could come up with a benchmark that repeatedly allocates 100s of 1000s of ints and then deletes them? What if it also allocates other small objects so the ints become more fragmented?
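[A minimal sketch of the kind of benchmark being asked for might look like the following. The function names and parameters are illustrative, not from the thread; the `+ 257` offset assumes CPython's small-int cache covers -5..256, so larger values force fresh int allocations.]

```python
import time

def bench_ints(n=500_000):
    """Allocate n fresh int objects at once, then delete them all."""
    t0 = time.perf_counter()
    xs = [i + 257 for i in range(n)]   # values above 256 bypass the small-int cache
    del xs
    return time.perf_counter() - t0

def bench_ints_fragmented(n=500_000):
    """Interleave each int with a small tuple so the freed ints
    end up scattered among other live small objects."""
    t0 = time.perf_counter()
    xs = []
    for i in range(n):
        xs.append(i + 257)
        xs.append((i,))  # another small object between each pair of ints
    del xs
    return time.perf_counter() - t0

if __name__ == "__main__":
    for f in (bench_ints, bench_ints_fragmented):
        # report the best of several runs to reduce timer noise
        print(f.__name__, min(f() for _ in range(5)))
```

Taking the minimum over several rounds helps filter out scheduling noise, which matters when the effect being measured is only 5-10%.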
--
--Guido van Rossum (home page: http://www.python.org/~guido/)