On Fri, 29 Mar 2019 04:17:03 +0100,
Peter Gutmann wrote:
> Neal H. Walfield <neal@walfield.org> writes:
>> Until now, OpenPGP didn't require buffering data. A decrypted AEAD chunk
>> MUST only be released when it has been authenticated. In the current
>> proposal, AEAD chunks are potentially unbounded (well, up to 4 exabytes...)
>> in size. No one can decrypt such chunks without cheating, i.e., releasing
>> unauthenticated plaintext.
> This has been considered before, e.g. with S/MIME's authenticated encryption:
> https://tools.ietf.org/html/rfc6476#section-6
> and so far doesn't seem to have caused any major problems. That is, it's not
> that there's a perfect solution, it's that actual problem situations seem to
> be pretty rare.
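To make the constraint concrete, here is roughly what the receiving side
has to do per chunk. This is only a sketch in Python with AES-GCM as a
stand-in AEAD; the chunk framing and nonce derivation are simplified
assumptions, not the draft's actual format:

# Sketch of the receiving side: buffer each chunk in full, check its
# tag, and only then release its plaintext.  The per-chunk buffer is
# exactly what stops scaling if a single chunk may be exabytes long.
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_chunks(key, chunks, associated_data=b""):
    """Yield each chunk's plaintext only after its tag has verified.

    `chunks` is an iterable of (index, ciphertext_with_tag) pairs.
    """
    aead = AESGCM(key)
    for index, ct in chunks:
        # Bind the chunk index into the nonce so chunks can't be
        # reordered or replayed (placeholder scheme, not the draft's).
        nonce = index.to_bytes(12, "big")
        try:
            yield aead.decrypt(nonce, ct, associated_data)
        except InvalidTag:
            raise ValueError("chunk %d failed authentication" % index)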
The advice seems pretty good:
   The exact solution to these issues is somewhat implementation-
   specific, with some suggested mitigations being as follows:
   implementations should buffer the entire message if possible and
   verify the MAC before performing any decryption. If this isn't
   possible due to streaming or message-size constraints, then
** implementations should consider breaking long messages into a
** sequence of smaller ones, each of which can be processed atomically
** as above. If even this isn't possible, then implementations should
   make obvious to the caller or user that an authentication failure has
   occurred and that the previously returned or output data shouldn't be
   used. Finally, any data-formatting problem, such as obviously
   truncated data or missing trailing data, should be treated as a MAC
   verification failure even if the rest of the data was processed
   correctly.
It seems to me that we can take this advice at the protocol layer and
largely avoid the security concerns. We just always use small
chunks.
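Concretely, something like the following is all I have in mind (again a
sketch: AES-GCM as a stand-in, and the 16 KiB cap, nonce scheme, and
framing are assumptions of the example, not the draft's format):

# A minimal sketch of "just always use small chunks": split the message
# into fixed-size chunks and seal each one independently, so the
# receiver never needs to hold more than one small chunk before
# authenticating it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK_SIZE = 16 * 1024  # a small, protocol-level cap (assumed value)

def encrypt_chunks(key, plaintext, associated_data=b""):
    """Return a list of (index, ciphertext_with_tag) pairs."""
    aead = AESGCM(key)
    out = []
    for index, offset in enumerate(range(0, len(plaintext), CHUNK_SIZE)):
        chunk = plaintext[offset:offset + CHUNK_SIZE]
        nonce = index.to_bytes(12, "big")  # placeholder per-chunk nonce
        out.append((index, aead.encrypt(nonce, chunk, associated_data)))
    return out

key = AESGCM.generate_key(bit_length=128)
chunks = encrypt_chunks(key, os.urandom(100 * 1024))  # 100 KiB example
# Seven chunks of at most 16 KiB each; a receiver can verify and
# release them one at a time with bounded memory.

The receiver never has to hold more than one small chunk of unreleased
plaintext, and nothing unauthenticated is ever handed to the caller.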
> If you want to do it right, you'd really want some formal academic treatment
> rather than guessing at chunk sizes and what may or may not be needed, i.e.
> typical message size X, typical chunk size Y gives these security bounds. PGP
> is typically used to encrypt data at rest (make the chunk size the file size)
> or short email messages (chunk size doesn't matter, it's short). That leaves
> a remainder of large emails, which we know exist but don't know how frequent
> they are or how often they're sent or from what sorts of systems.
I'm having trouble imagining why a larger chunk size would ever be
better in either of these cases:

- File encryption: a smaller chunk size means errors are found sooner.
- Large emails: a smaller chunk size makes it possible to preview the
  email, which is helpful on mobile connections.
Please help me understand when a large chunk size could be better.
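For what it's worth, the size cost of small chunks also looks negligible
on the back of an envelope, assuming a 16-byte tag per chunk and
ignoring per-chunk packet framing:

# Extra bytes spent on per-chunk authentication tags (assumed 16-byte
# tag per chunk, packet framing ignored).
TAG_LEN = 16

def tag_overhead(message_size, chunk_size):
    n_chunks = -(-message_size // chunk_size)  # ceiling division
    return n_chunks * TAG_LEN / message_size

for chunk_size in (16 * 1024, 256 * 1024, 4 * 1024 * 1024):
    print("%6d KiB chunks: %.4f%% overhead on a 1 GiB file"
          % (chunk_size // 1024, 100 * tag_overhead(1 << 30, chunk_size)))
# Even 16 KiB chunks cost only about 0.1% in extra tags.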
Thanks,
:) Neal
_______________________________________________
openpgp mailing list
openpgp@ietf.org
https://www.ietf.org/mailman/listinfo/openpgp