[Python-Dev] an idea for improving struct.unpack api
Ilya Sandler ilya at bluefir.net
Thu Jan 6 06:27:16 CET 2005
A problem:
The current struct.unpack API works well for unpacking C structures, where everything is usually unpacked at once, but it becomes inconvenient when unpacking binary files, where things often have to be unpacked field by field. Then one has to keep track of offsets, slice the strings, call struct.calcsize(), etc.
E.g., with the current API, unpacking a record that consists of a header followed by a variable number of items would go like this:

    from struct import unpack, calcsize

    hdr_fmt = "iiii"
    item_fmt = "IIII"
    item_size = calcsize(item_fmt)
    hdr_size = calcsize(hdr_fmt)
    hdr = unpack(hdr_fmt, rec[0:hdr_size])   # rec is the record to unpack
    offset = hdr_size
    for i in range(hdr[0]):                  # assume 1st field of header is a counter
        item = unpack(item_fmt, rec[offset:offset + item_size])
        offset += item_size
which is quite inconvenient...
A solution:
We could add an optional offset argument to unpack():
unpack(format, buffer, offset=None)
The offset argument would be an object containing a single integer field, which gets incremented inside unpack() to point to the next byte to be consumed.
So with the new API, the above code could be written as:

    offset = struct.Offset(0)
    hdr = unpack("iiii", rec, offset)
    for i in range(hdr[0]):
        item = unpack("IIII", rec, offset)
When an offset argument is provided, unpack() should allow some bytes to be left unpacked at the end of the buffer.
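Purely as a sketch of the intended semantics (the Offset class and the wrapping unpack() below are hypothetical helpers, not the proposed C implementation), the behaviour could be prototyped in pure Python along these lines:

    import struct

    class Offset:
        # hypothetical helper holding a single mutable integer field
        def __init__(self, value=0):
            self.value = value

    def unpack(fmt, buffer, offset=None):
        # behaves like struct.unpack, but when an Offset is given it unpacks
        # starting at offset.value, advances it past the consumed bytes,
        # and tolerates trailing bytes at the end of the buffer
        if offset is None:
            return struct.unpack(fmt, buffer)
        size = struct.calcsize(fmt)
        fields = struct.unpack(fmt, buffer[offset.value:offset.value + size])
        offset.value += size
        return fields

With such a wrapper, the header-plus-items example reduces to the four lines above, and any bytes after the last item are simply ignored.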
Does this suggestion make sense? Any better ideas?
Ilya