> Both l= and x= are bad for interoperability, because it is utterly unclear
> what a recipient will do with them. Whenever I ask, the answer is they
> might do this and they could do that. If I put a really long x= into a
> signature, will recipient systems accept a stale message that otherwise
> they wouldn't? If I sign the first 100 bytes of a 10K message, will
> recipient systems accept it, and if so, what will users see? There's no
> way to tell, because everyone just makes something up.
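
For what it's worth, RFC 6376 does pin down the mechanics of both tags, even
if receiver policy is left open: x= is an absolute expiration time that the
verifier compares against the current time, and l= limits the body hash to
the first l octets of the canonicalized body. A minimal sketch of both checks
in Python (the function names are mine, and the body is assumed to be already
canonicalized):

    import hashlib
    import time

    def body_hash(canonicalized_body, l=None):
        # bh= per RFC 6376: when l= is present, hash only the first
        # l octets; everything past that point is unsigned.
        data = canonicalized_body if l is None else canonicalized_body[:l]
        return hashlib.sha256(data).digest()

    def is_expired(x=None, now=None):
        # x= per RFC 6376: absolute expiration, seconds since the epoch.
        # An expired signature fails verification; what the receiver
        # does with the message after that is local policy, not protocol.
        if x is None:
            return False
        return (now if now is not None else int(time.time())) > x

    # The interoperability gap in the 100-byte example: any content
    # appended past the signed l= octets yields the same body hash,
    # so the signature still verifies.
    signed = b"A" * 100 + b"B" * 9900
    altered = b"A" * 100 + b"something appended later"
    assert body_hash(signed, l=100) == body_hash(altered, l=100)
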
I would argue that your specification of l=100 when the actual message size is
10K is intentional breakage of your own signature. Perhaps I am short on
imagination, but I cannot imagine why anyone would intentionally specify an
incorrect value for l=, knowing that this will invalidate the signature. You
seem to be pushing the limits of credibility in an apparent attempt to justify
dropping l= from the spec. If you don't want to use l= in your DKIM signature,
don't use it, but some of us believe it to be a useful attribute.
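
For a concrete picture of the legitimate use some of us have in mind: a
signer can set l= to the length of the canonicalized body at signing time, so
that a downstream mailing list appending a footer does not break the original
signature. A rough sketch under the same assumptions as above (sha256, body
already canonicalized, helper name hypothetical):

    import hashlib

    def body_hash(canonicalized_body, l=None):
        # Same truncation rule as above: hash only the first l octets.
        data = canonicalized_body if l is None else canonicalized_body[:l]
        return hashlib.sha256(data).digest()

    # Signer records the body length at signing time in the l= tag.
    original = b"Hello, world\r\n"
    l_tag = len(original)
    bh_at_signing = body_hash(original, l=l_tag)

    # A mailing list later appends a footer; the stored bh= still
    # matches because the footer falls entirely outside the l= octets.
    relayed = original + b"-- \r\nlist footer\r\n"
    assert body_hash(relayed, l=l_tag) == bh_at_signing
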
--
Paul Russell, Senior Systems Administrator
OIT Messaging Services Team
University of Notre Dame
prussell(_at_)nd(_dot_)edu