On Tue, Sep 15, 2009 at 05:39:50PM -0400, Jeffrey Hutzelman wrote:
> --On Tuesday, September 15, 2009 02:55:54 PM -0500 Nicolas Williams wrote:
>
> > I think the right answer is to leave _query_ strings unnormalized and
> > require that _storage_ strings be normalized (see my separate reply on
> > that general topic, with a different Subject:, just now).
>
> Or at least, leave query strings unnormalized until just before the query
> happens, and then normalize them in the same way as the storage string.
> More generally, what you want is normalization-insensitive comparison, and
> normalization of storage strings when they are stored is just an
> optimization for that.
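The point above (normalization-insensitive comparison, with normalize-on-store as an optimization of it) can be sketched in Python. The choice of NFC and the dict-based store are illustrative assumptions, not anything the thread specifies:

```python
import unicodedata

def nfc(s: str) -> str:
    """Normalize to NFC; any single form would do, as long as it is used consistently."""
    return unicodedata.normalize("NFC", s)

def equivalent(query: str, stored: str) -> bool:
    """Normalization-insensitive comparison: compare under one canonical form."""
    return nfc(query) == nfc(stored)

# "é" as one precomposed code point vs. "e" plus a combining acute accent:
assert "\u00e9" != "e\u0301"              # code-point comparison fails
assert equivalent("\u00e9", "e\u0301")    # normalization-insensitive compare succeeds

# If storage strings are normalized when stored (the optimization above),
# lookups only need to normalize the query side:
store = {nfc("caf\u00e9"): "row-42"}      # hypothetical store, key normalized on write
assert store.get(nfc("cafe\u0301")) == "row-42"
```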
> > I think that normalization-on-create is more likely to result in better
> > interoperability as all implementors ensure that their displays can
> > handle a particular NF. I say that as an advocate of
> > normalization-insensitive-but-preserving behavior in ZFS, but that was
> > mostly a result of realities on the ground (one NFS implementation
> > normalizes to NFD on create, most others don't normalize but their input
> > methods tend to generate NFC, and not all implementors properly display
> > unnormalized text nor text in all NFs). For most apps, I think
> > normalizing storage strings is the better approach.
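The two behaviors contrasted above can be illustrated with a toy sketch: normalize-on-create (as the NFD-normalizing NFS implementation does) versus normalization-insensitive-but-preserving (as ZFS does). The dict-based "filesystem" and NFD as the comparison form are assumptions made for the example:

```python
import unicodedata

def nfd(s: str) -> str:
    return unicodedata.normalize("NFD", s)

class NormalizeOnCreate:
    """Rewrites names to a fixed form (here NFD) when files are created."""
    def __init__(self):
        self.files = {}
    def create(self, name: str) -> None:
        self.files[nfd(name)] = b""
    def lookup(self, name: str) -> bool:
        return nfd(name) in self.files
    def names(self) -> list:
        return list(self.files)

class InsensitiveButPreserving:
    """Keeps the name exactly as given, but compares form-insensitively."""
    def __init__(self):
        self.files = {}   # normalized key -> name as originally created
    def create(self, name: str) -> None:
        self.files[nfd(name)] = name
    def lookup(self, name: str) -> bool:
        return nfd(name) in self.files
    def names(self) -> list:
        return list(self.files.values())

nfc_name = "caf\u00e9"   # NFC input, as most input methods generate
a, b = NormalizeOnCreate(), InsensitiveButPreserving()
a.create(nfc_name)
b.create(nfc_name)

# Both find the file under the decomposed spelling of the same name:
assert a.lookup("cafe\u0301") and b.lookup("cafe\u0301")
# But only the preserving variant returns the name in its original form:
assert b.names() == [nfc_name]
assert a.names() == ["cafe\u0301"]
```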
> Except in cases like SCRAM, where strings are used to derive cryptographic
> keys or in other ways where it matters whether they're the same, but you
> don't get to compare them directly. Then you need to ensure that the same
> input string is always transformed in the same way, or things break.
>
> Unfortunately, for SCRAM passwords, it's the client that has to do that
> transformation on every transaction, so we must ensure that all clients do
> so in the same way.
Right. I didn't mean to imply that the client should not normalize
passwords; I should have mentioned that.
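The SCRAM concern above can be shown with a small sketch. Real SCRAM (RFC 5802) runs the password through SASLprep before deriving keys with PBKDF2; in this example NFKC stands in for SASLprep's normalization step, and the salt and iteration count are made up for illustration:

```python
import hashlib
import unicodedata

SALT = b"example-salt"   # illustrative only; SCRAM salts come from the server
ITERATIONS = 4096

def salted_password(password: str, normalize: bool) -> bytes:
    """Derive a key from the password, SCRAM Hi()-style (PBKDF2-HMAC-SHA-1)."""
    if normalize:
        # Stand-in for the SASLprep normalization step (which uses NFKC):
        password = unicodedata.normalize("NFKC", password)
    return hashlib.pbkdf2_hmac("sha1", password.encode("utf-8"), SALT, ITERATIONS)

precomposed = "p\u00e9n"   # "pén" with a precomposed é
decomposed = "pe\u0301n"   # the same text, decomposed

# Without normalizing, canonically equivalent passwords derive different keys,
# so two clients typing "the same" password would fail to authenticate alike:
assert salted_password(precomposed, False) != salted_password(decomposed, False)
# When every client applies the same transformation first, the keys agree:
assert salted_password(precomposed, True) == salted_password(decomposed, True)
```

Since the server only ever sees the derived values, it cannot compensate for a client that normalizes differently; the transformation has to be identical everywhere, which is exactly the interoperability point being made.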
Ietf mailing list