ietf-openpgp

RE: Compression and partial length chunks

2003-12-29 12:07:56

Hasnain -

> If I must hold the whole source data in memory before I can compress
> and then encrypt it, then I would always know beforehand the lengths
> of the resulting packets (literal, compressed, and encrypted). So when
> exactly does the need arise to use PBLs? Even when I am streaming
> data, I cannot avoid the basic constraint that the entire source data
> must be in memory before it can be processed. It is not the case, as
> you pointed out, that I am feeding chunks of the source data to the
> compression and encryption routines and then sending them out as PBLs.

I'm afraid I misled you in my earlier comments.  Nothing in the OpenPGP
algorithms or formats requires you to hold the entire document in memory.
The compression and encryption algorithms can operate on one chunk at a
time, as long as the chunks arrive in order: first chunk, second chunk,
third chunk, and so on.
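As a minimal sketch of the sending side (in Python, with the OpenPGP
packet framing and the encryption layer omitted, and "chunks" standing
in for whatever source feeds you data), zlib's compressobj carries the
compression state between calls, so only the current chunk needs to be
in memory at any moment:

    import zlib

    def compress_chunks(chunks):
        """Compress an in-order stream of byte chunks, one at a time."""
        compressor = zlib.compressobj()   # holds state between chunks
        for chunk in chunks:              # chunks must be fed in order
            out = compressor.compress(chunk)
            if out:                       # small inputs may be buffered
                yield out
        yield compressor.flush()          # drain remaining state at the end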

You asked earlier whether you could process the chunk "independently of
the chunks already received" and my answer was no.  What I meant was
that you have to have processed those other chunks already, in order,
and retained state information from those chunks.  The widely used zlib
decompression library provides this functionality, for example, as do
most decryption libraries which provide the CFB chaining mode used by PGP.
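To make the statefulness concrete, here is a hedged sketch of the
receiving side in Python. The decompression half uses only the standard
zlib module; the decryption half uses the third-party "cryptography"
package with plain AES-CFB, whereas PGP actually uses its own CFB
variant with a resynchronization step, so treat it as an illustration
of state carried across chunks rather than the exact PGP cipher setup:

    import zlib
    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    def decompress_chunks(chunks):
        # decompressobj() retains state, so each chunk depends on
        # having already processed the chunks before it, in order.
        decompressor = zlib.decompressobj()
        for chunk in chunks:
            yield decompressor.decompress(chunk)
        yield decompressor.flush()

    def decrypt_cfb_chunks(key, iv, chunks):
        # Plain AES-CFB for illustration; the chaining state flows
        # across update() calls just as zlib's state does above.
        decryptor = Cipher(algorithms.AES(key), modes.CFB(iv)).decryptor()
        for chunk in chunks:
            yield decryptor.update(chunk)
        yield decryptor.finalize()      # CFB is a stream mode: returns b""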

> I am new to the interesting world of PGP. I think it would be useful,
> if at all possible, to be able to deal with the chunks independently
> so as to avoid physical memory limitations.

I hope my clarification above will show that it is possible to do this,
but that state must be retained from chunk to chunk in the decompression
and decryption modules.
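
For completeness, the partial body length encoding itself (RFC 2440,
section 4.2.2.4) is what makes this streaming framing possible: a
single length octet in the range 224-254 announces a partial chunk of
1 << (octet & 0x1F) octets, so every chunk except the last must be a
power of two between 1 and 2**30, and the final chunk is framed with an
ordinary length header. A small hypothetical helper:

    def partial_length_octet(chunk_len):
        # RFC 2440: octet values 224..254 encode a partial body length
        # of 1 << (octet & 0x1F), i.e. chunks of 2**0 .. 2**30 octets.
        exponent = chunk_len.bit_length() - 1
        if not 0 <= exponent <= 30 or chunk_len != 1 << exponent:
            raise ValueError("partial chunks must be powers of two <= 2**30")
        return bytes([224 + exponent])

A streaming writer can thus pick a chunk size such as 8192, emit
partial_length_octet(8192) before each full chunk, and close with a
normally framed final chunk whose length need not be a power of two.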

Hal Finney
