‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, February 27, 2019 4:03 PM, Jon Callas wrote:
On Feb 27, 2019, at 3:00 PM, Bart Butler wrote:
Do I understand correctly that you also oppose shrinking the allowable range
with a MUST? I think the argument for this is fairly convincing from a usage
perspective: it ensures that someone decrypting a large message is not
obligated to download a huge amount of data before finding out that it is
corrupted or has otherwise been tampered with. Likewise, we had to address
unanticipated performance issues in OpenPGP.js with very small chunks, which
could have allowed a bad actor to essentially DoS the library with a
strangely constructed message.
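To make the download-before-detection point concrete, here is a minimal chunked encrypt-then-MAC sketch in Python (stdlib only; a stand-in for per-chunk AEAD, not the OpenPGP wire format -- the chunk size and tag construction are illustrative assumptions). Because each chunk carries its own tag, a verifier can abort at the first corrupted chunk instead of buffering the whole message:

```python
import hmac
import hashlib

CHUNK = 16 * 1024  # hypothetical 16 KiB chunk size

def seal(key, data):
    """Split data into chunks; attach an HMAC tag (bound to the
    chunk's byte offset) to each chunk. Stand-in for per-chunk AEAD."""
    out = []
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        tag = hmac.new(key, off.to_bytes(8, "big") + chunk,
                       hashlib.sha256).digest()
        out.append((chunk, tag))
    return out

def open_stream(key, sealed):
    """Verify chunk by chunk, raising on the first bad tag -- a
    consumer never has to buffer past the corrupted chunk."""
    for i, (chunk, tag) in enumerate(sealed):
        expect = hmac.new(key, (i * CHUNK).to_bytes(8, "big") + chunk,
                          hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError(f"chunk {i} failed authentication")
        yield chunk
```

With one tag per chunk, tampering near byte N is detected after downloading roughly N bytes; with a single-chunk message, it is detected only after downloading everything.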
In other words, I'm not really swayed by the implementation-simplification
argument, but I do think that very small or very large chunk sizes, in
addition to probably being useless, pose a real threat in terms of abuse.
So I think having a MUST for the range, maybe 16 KiB to 256 KiB, or 16 KiB
to 1024 KiB, is a reasonable thing to do. And as long as we keep the size
byte, we can always increase the upper limit of the range in the future if
needed.
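For reference, the rfc4880bis draft encodes the AEAD chunk size as a single octet c, with the chunk size being 2^(c+6) octets; a quick sketch of how the proposed bounds map onto that byte (the octet values below are derived from that formula, not quoted from the draft):

```python
def chunk_size(c):
    # rfc4880bis AEAD packet: chunk size octet c encodes 2**(c + 6) octets
    return 1 << (c + 6)

# Proposed MUST-range endpoints and their size octets
assert chunk_size(8) == 16 * 1024      # c=8  -> 16 KiB
assert chunk_size(12) == 256 * 1024    # c=12 -> 256 KiB
assert chunk_size(14) == 1024 * 1024   # c=14 -> 1024 KiB
```

Since a full octet can encode sizes far beyond any sane limit, keeping the byte while constraining its legal values leaves room to raise the ceiling later without a wire-format change.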
My warning is against shooting someone else in the foot, or forcing them to
use some other protocol.
Thus, saying (e.g.) that the range MUST be between 1K and 16K is a bad idea;
we even know now that 256K has an efficiency advantage in some cases. You can
say: MUST support 1K to 16K, SHOULD support up to 256K, and MAY support larger
sizes. There can also be a couple of paragraphs explaining that there are
good reasons to be neither very small nor very large.
The problem is that a MAY for either very small or very large chunk sizes in
some ways forces library maintainers to support those chunk sizes anyway,
because they WILL appear in the wild, and then there will be complaints if
such messages do not decrypt. If they are technically legal, then everyone
has to account for these edge cases in their apps--e.g., a single-chunk,
multi-gigabyte AEAD movie whose playback technically can't start until the
whole thing has been downloaded and buffered, because the authentication
comes only at the end.
My concern is someone saying something like, “Gosh, I’d like to have OpenPGP
AEAD encryption for S3 Objects, but I can’t ‘cause those go up to 5TB.”
Anyone who’s going to use 5 TB objects probably knows the headaches they
inherit, and yeah, you aren’t going to do that on a Cortex M0.
Does this make sense?
It does, and normally on this kind of thing I would completely agree with you,
but in this case I think there are two mitigating factors:
1. AEAD chunk size does not limit message/file size in any meaningful way:
assuming we set the upper chunk-size limit to something reasonable like 1024
KiB, you just use multiple chunks, which is the idea anyway.
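A quick back-of-the-envelope check of point 1: even a 5 TB S3 object chunked at 1024 KiB costs only a per-chunk tag's worth of overhead (a 16-byte tag and a 5 TiB object size are assumed here for round numbers):

```python
import math

OBJECT = 5 * 2**40    # 5 TiB object (illustrative; S3's limit is 5 TB)
CHUNK = 1024 * 1024   # 1024 KiB chunks
TAG = 16              # assumed bytes of authentication tag per chunk

chunks = math.ceil(OBJECT / CHUNK)
overhead = chunks * TAG

print(chunks)                   # 5242880 chunks
print(overhead / 2**20)         # 80.0 MiB of tags
print(100 * overhead / OBJECT)  # ~0.0015 percent size overhead
```

So chunking never caps the object size; the cost of per-chunk authentication is a rounding error even at S3 scale.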
2. Abuse potential in an open standard
It's #2 that is really compelling for me, for exactly the reason that we DO
want this to be usable for arbitrary purposes and message sizes in federated
contexts, and for that to be possible we need to set reasonable limits to
prevent malicious or careless users from creating bad-but-legal payloads.
openpgp mailing list