On Feb 28, 2019, at 12:51 AM, Neal H. Walfield <neal(_at_)walfield(_dot_)org> wrote:
I think you misunderstand the point of the chunk size parameter.
Modulo perhaps performance, the chunk size is not visible to the
application. That is, messages encrypted using AEAD can
(theoretically) still be 4 exbibytes in size when using a 16 KiB chunk
size; a message can consist of any number of chunks. Chunk size is
like HTTP's chunked transfer encoding or OpenPGP's partial body
lengths.
The reason that we don't want large chunk sizes is that when
decrypting a chunk, the chunk MUST (RFC 5116) be buffered. Since the
encryptor generally doesn't know the context in which the decryption
will occur, it must be conservative and choose a small chunk size.
Otherwise, a decryptor will inevitably encounter a chunk size that it
can't buffer and gratuitously fail. Alternatively, it violates the
MUST and outputs the decrypted data without first verifying it,
thereby resulting in the next EFAIL.
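The buffer-before-release requirement above can be sketched as follows. This is a toy illustration only: it uses HMAC-SHA256 in place of a real AEAD, and the framing (chunk || tag) is invented for the example, not the OpenPGP wire format. The point it demonstrates is that the decryptor must hold an entire chunk in memory and verify it before releasing a single byte.

```python
import hashlib
import hmac

TAG_LEN = 32  # HMAC-SHA256 tag appended to each chunk (toy scheme, not OpenPGP)

def seal_chunks(key: bytes, plaintext: bytes, chunk_size: int) -> bytes:
    """Split plaintext into chunks and append a MAC to each (illustrative only)."""
    out = bytearray()
    for i in range(0, len(plaintext), chunk_size):
        chunk = plaintext[i:i + chunk_size]
        out += chunk + hmac.new(key, chunk, hashlib.sha256).digest()
    return bytes(out)

def open_chunks(key: bytes, sealed: bytes, chunk_size: int):
    """Yield each chunk only after its tag verifies.

    The decryptor must buffer a full chunk (plus tag) before releasing
    any of it -- which is why large chunk sizes force large buffers on
    the receiver.
    """
    step = chunk_size + TAG_LEN
    for i in range(0, len(sealed), step):
        block = sealed[i:i + step]
        chunk, tag = block[:-TAG_LEN], block[-TAG_LEN:]
        if not hmac.compare_digest(tag, hmac.new(key, chunk, hashlib.sha256).digest()):
            raise ValueError("chunk failed authentication; nothing released")
        yield chunk  # safe to release: the whole chunk was verified first

# A message of any length can be carried as many small chunks:
key = b"\x00" * 32
msg = b"x" * 100_000
sealed = seal_chunks(key, msg, 16 * 1024)  # 16 KiB chunks
assert b"".join(open_chunks(key, sealed, 16 * 1024)) == msg
```

Note that the sender's choice of chunk_size fixes the receiver's minimum buffer size, which is the interoperability concern being discussed.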
Perhaps I do misunderstand, but that’s kinda what I was expecting: an
equivalent to the old streaming chunks.
So I think that a SHOULD is the right way to put it. I care less
about what the SHOULD limit should be, but a small hard limit sounds
like a bad idea.
Do you still think a hard upper limit is a bad idea?
No, I don’t think a hard upper limit is a bad idea. In the text you quote
above, I said a *small* hard upper limit. And no, I don’t know what small
means. I think 16K is small. 256K might be small. AES-GCM has issues at
about 64 GiB. One might argue that’s a reasonably large upper limit. Me, I
think anything from a few megabytes to a gigabyte or so is just fine.
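For reference, the 64 GiB figure comes from AES-GCM's per-invocation plaintext limit of 2^39 − 256 bits (NIST SP 800-38D), which works out to just under 64 GiB:

```python
# AES-GCM limits plaintext per (key, nonce) invocation to 2^39 - 256 bits
# (NIST SP 800-38D); in bytes that is just under 64 GiB.
max_bits = 2**39 - 256
max_bytes = max_bits // 8
assert max_bytes == 68719476704          # a shade under 2^36 bytes = 64 GiB
assert 63.9 < max_bytes / 2**30 < 64.0   # i.e. ~64 GiB
```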
Let me rewind a bit and get to my real point, which I don’t think I’m making
well.
When one creates a standard, one needs to be careful with parameters,
particularly the MUSTs. Seemingly sensible things can have downstream effects
that convince people to use some other protocol. Worse, someone’s angry blog
post about something can quickly go into “not even wrong” territory and embed
itself into folklore and you can’t get it out. There are plenty of bad ideas
that someone else has a really reasonable use case for.
The Partial Body Lengths that OpenPGP has had from the beginning have no
restrictions on them. There’s non-normative guidance that pretty much says you
shouldn’t even use them, but there’s no restriction on size. This has never
caused a problem that I’m aware of. I’m sure it has caused problems that none of
us are aware of, and the implementors just solved them on their own. It is this
experience that has me wondering about what the restrictions ought to be.
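For context, RFC 4880 encodes a Partial Body Length in a single octet with value 224–254, and the length is the power of two 1 << (octet & 0x1F), from 1 byte up to 1 GiB; a decoder for just that octet is tiny:

```python
def partial_body_length(octet: int) -> int:
    """Decode an RFC 4880 Partial Body Length octet (values 224-254).

    The length is always a power of two: 1 << (octet & 0x1F),
    ranging from 1 byte up to 1 GiB.
    """
    if not 224 <= octet <= 254:
        raise ValueError("not a partial body length octet")
    return 1 << (octet & 0x1F)

assert partial_body_length(224) == 1            # 2^0: smallest
assert partial_body_length(238) == 16 * 1024    # 2^14 = 16 KiB
assert partial_body_length(254) == 1 << 30      # 2^30 = 1 GiB: largest
```

So even here the wire format tops out at 1 GiB per partial piece, yet the spec places no limit on how many pieces a packet may have.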
The best way to deal with it in a standard is to have non-normative guidance.
It is non-normative guidance that I was suggesting. Let me write an example bit
of non-normative guidance below:
- - - - -
Implementations should pick a chunk size with care. Large chunk sizes may
limit a receiver’s ability to process chunks, particularly with an AEAD
algorithm that is not single-pass, because the system decrypting a chunk must
hold the entire chunk in memory as it decrypts it. In general, the chunk size
is irrelevant to any code outside of the chunk-handling code, so smaller is
better.
An implementation SHOULD pick a chunk size of 256 KiB or less. An
implementation MAY use larger sizes, but this could limit interoperability
with resource-limited systems and should be done with care.
- - - - -
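As a concrete point of comparison (my reading of the rfc4880bis AEAD draft under discussion; the encoding below is an assumption from that draft, and its upper bound on the octet is precisely what this thread is debating): the chunk size is itself a one-octet field c, with the chunk size equal to 2^(c + 6) bytes.

```python
def aead_chunk_size(c: int) -> int:
    """Chunk size from the one-octet chunk size field: 2^(c + 6) bytes.

    (Encoding as in the rfc4880bis AEAD draft; how large c may be
    is the open question in this thread.)
    """
    return 1 << (c + 6)

assert aead_chunk_size(0) == 64            # smallest possible chunk: 64 B
assert aead_chunk_size(8) == 16 * 1024     # 16 KiB, the size Neal mentions
assert aead_chunk_size(12) == 256 * 1024   # 256 KiB, the suggested SHOULD limit
```

Under that encoding, the suggested guidance amounts to recommending chunk size octets of 12 or less.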
I don’t think that’s perfect, but it’s okay as a first draft. I would
completely support something like the text above. I’d support it if it said
16 KiB, too. I’ll also support hard limits, but my intuition is that that
decision will come with frustration for someone else.
Does this make more sense?
openpgp mailing list