On Wed, 27 Feb 2019 23:37:20 +0100,
Jon Callas wrote:
> Moreover, forbidding it says that we state now that no one could
> ever have a reasonable use; my experience is that categorical
> statements like that are just asking the fates to bite us in an
> uncomfortable place. Amazon S3 increased their max object size to
> 5 TB a few years ago. I'm not saying it should be that large, but I
> think this is a pretty convincing argument that 16K is too small.
I think you misunderstand the point of the chunk size parameter.
Modulo perhaps performance, the chunk size is not visible to the
application. That is, messages encrypted using AEAD can
(theoretically) still be 4 exbibytes in size when using a 16 KiB
chunk size; a message can consist of any number of chunks. The chunk
size is like HTTP's chunked transfer encoding or OpenPGP's partial
body lengths.
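To illustrate how the message size is decoupled from the chunk size,
here is a minimal sketch. It uses a toy encrypt-then-MAC framing as a
stand-in for a real AEAD (the names `seal_chunk` and `seal_message`
are illustrative, not OpenPGP's actual packet format):

```python
import hashlib
import hmac
import os

CHUNK_SIZE = 16 * 1024  # 16 KiB chunks; the message itself can be any length


def seal_chunk(key: bytes, index: int, chunk: bytes) -> bytes:
    # Toy stand-in for an AEAD: authenticate the chunk index and data.
    # (A real AEAD would also encrypt; this only shows the framing.)
    tag = hmac.new(key, index.to_bytes(8, "big") + chunk, hashlib.sha256).digest()
    return chunk + tag


def seal_message(key: bytes, message: bytes):
    # Any number of chunks, so the chunk size puts no limit on the
    # total message size.
    for i in range(0, len(message), CHUNK_SIZE):
        yield seal_chunk(key, i // CHUNK_SIZE, message[i : i + CHUNK_SIZE])


key = os.urandom(32)
msg = os.urandom(100 * 1024)  # a 100 KiB message
chunks = list(seal_message(key, msg))
print(len(chunks))  # 7 chunks, each at most 16 KiB
```

The decryptor only ever needs a 16 KiB buffer, no matter how large
the message grows.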
The reason that we don't want large chunk sizes is that when
decrypting a chunk, the whole chunk MUST (RFC 5116) be buffered
before any plaintext is released. Since the encryptor generally
doesn't know the context in which the decryption will occur, it must
be conservative and choose a small chunk size. Otherwise, a decryptor
will inevitably encounter a chunk that it can't buffer and
gratuitously fail. Alternatively, it violates that MUST and outputs
the decrypted data without first verifying it, thereby setting up the
next EFAIL.
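The buffer-then-verify rule can be sketched as follows, again with a
toy encrypt-then-MAC construction standing in for a real AEAD (the
helper `open_chunk` is illustrative only):

```python
import hashlib
import hmac


def open_chunk(key: bytes, index: int, sealed: bytes) -> bytes:
    # The whole chunk must be buffered: the tag at the end
    # authenticates all of it, and nothing may be output before
    # verification succeeds.
    chunk, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, index.to_bytes(8, "big") + chunk, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("chunk authentication failed")  # release nothing
    return chunk  # safe to hand to the application only now


key = b"k" * 32
sealed = b"hello" + hmac.new(key, (0).to_bytes(8, "big") + b"hello", hashlib.sha256).digest()
print(open_chunk(key, 0, sealed))  # the authenticated chunk

# A tampered chunk is rejected outright; no partial plaintext escapes.
try:
    open_chunk(key, 0, b"Hello" + sealed[5:])
except ValueError:
    print("rejected")
```

Note that the required buffer is exactly one chunk, which is why the
encryptor's choice of chunk size dictates what every future decryptor
must be able to hold in memory.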
So I think that a SHOULD is the right way to put it. I care less
about what the SHOULD limit should be, but a small hard limit sounds
like a bad idea.
Do you still think a hard upper limit is a bad idea?
openpgp mailing list