pgut001@cs.auckland.ac.nz (Peter Gutmann) writes:
> If only it were that easy. Consider what happens when someone says "You
> have a 900K transfer buffer, stream data through it".

Do you mean you have 900K to transfer, or you have a buffer of 900K
that you can use to buffer something larger? In either case, I'd use
the largest buffer-size that fits (i.e. 512K in this case) and just
eat the rest of the buffer.

> Instead of being able to drop a simple 32-bit length at the start and read
> in 899,996 bytes, encrypt, and write it out again, I have to break off a
> 512K chunk, then a 256K chunk, then a 64K chunk, then a 32K chunk, then an
> 8K chunk, then a 4K chunk, etc etc etc.
> Because the amount of expansion is variable-length, depending on the size
> of the I/O buffer I have to precalculate the encoding expansion beforehand
> and reduce the data amount read by that amount to see what it'll take to
> optimally fill the buffer, and because I have to keep dropping in
> partial-length headers all over the place I can't do a single I/O but have
> to do it all in dribs and drabs, and in general do a lot of unnecessary
> work (both in terms of writing code and CPU time) and waste a lot more
> space in encoding than simply using a 32-bit length would have.
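The "dribs and drabs" sequence can be sketched directly. This is a minimal illustration of the power-of-two decomposition described above, not code from any PGP implementation (the function name is mine, and in the OpenPGP format as eventually standardized the final chunk may carry a normal, non-power-of-two length header):

```python
def partial_chunks(total_bytes):
    """Split total_bytes into descending power-of-two chunk sizes,
    greedily peeling off the largest 2**k that still fits."""
    chunks = []
    remaining = total_bytes
    while remaining > 0:
        chunk = 1 << (remaining.bit_length() - 1)  # largest 2**k <= remaining
        chunks.append(chunk)
        remaining -= chunk
    return chunks

# The 899,996-byte example: 512K, 256K, 64K, 32K, 8K, 4K, and on down,
# each chunk needing its own partial-length header.
sizes = partial_chunks(899_996)
```

For 899,996 bytes this yields thirteen chunks, which is exactly the bookkeeping being complained about: thirteen headers and thirteen partial writes where a single 32-bit length would have allowed one.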
The reason Colin and I came up with this idea was the belief that

a) we might want to encode data whose length doesn't fit in 32 bits, and
b) we needed a way to differentiate ourselves from the
   then-existing PGP 2.x length-encoding scheme

So, given those constraints, we couldn't use a simple 32-bit length
encoding without adding extra bytes of overhead. Perhaps this was
a poor choice, but it's what we came up with at the time.
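For context, here is a sketch of the new-format length encoding this design evolved into, as later standardized in OpenPGP (RFC 4880, section 4.2.2): short packets pay only one or two length octets instead of a fixed 32-bit field, with the octet values 224..254 reserved for power-of-two partial body lengths. The function names are mine:

```python
def encode_length(n):
    """Encode a definite body length as RFC 4880 new-format length octets."""
    if n < 192:
        return bytes([n])                           # one octet: 0..191
    if n < 8384:
        n -= 192
        return bytes([192 + (n >> 8), n & 0xFF])    # two octets: 192..8383
    return bytes([255]) + n.to_bytes(4, "big")      # five octets: up to 2**32 - 1

def encode_partial_length(power):
    """Encode a partial body length of 2**power (0 <= power <= 30)."""
    return bytes([224 + power])                     # one octet: 224..254
```

So a 100-byte packet costs one length octet and a 1723-byte packet two (0xC5 0xFB, the example from the RFC), while a streamed 32K chunk needs only the single partial-length octet 0xEF.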
> Peter.
-derek
--
Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
Member, MIT Student Information Processing Board (SIPB)
URL: http://web.mit.edu/warlord/ PP-ASEL-IA N1NWH
warlord@MIT.EDU PGP key available