At 07:44 AM 11/3/2004 -0800, Dave Crocker wrote:
On Mon, 01 Nov 2004 16:29:58 -0800, Jim Fenton wrote:
I agree that mailing lists should re-sign messages. But I expect
that it will take quite a while before that happens, and in the
meanwhile, I want the original signature to work wherever possible.
and it will take quite a while for other sending software to start
signing, too. shall we try to compensate for them, too, somehow?
Please explain what you mean by 'other sending software'.
Again, what is special about mailing lists is that they are in series with the
message path. Let 'p1' represent the fraction of messages that are signed, and
'p2' the fraction of mailing lists that re-sign messages. To avoid damaging its
reputation, assume a mailing list only sends signed messages when it receives a
signed message. The fraction of messages coming through mailing lists that is
signed is then p1*p2, which will be considerably less than either p1 or p2
until both get closer to 1. If p1=p2=20%, the result is only 4%.
If we're able to get signatures to survive intact through half of the mailing
lists that don't re-sign, then this improves to p1*p2+p1*(1-p2)/2, or 12% in
the above example. I think
this is worthy of a modest effort, and I consider what IIM does in this regard
to be modest.
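The arithmetic above can be sketched as a small calculation (the helper name
and the 20% figures are just the illustrative values from this message, not
measurements):

```python
def signed_fraction(p1, p2, passthrough=0.0):
    """Fraction of mailing-list traffic that arrives signed.

    p1          -- fraction of messages signed at the origin
    p2          -- fraction of mailing lists that re-sign
    passthrough -- fraction of the remaining (non-re-signing) lists
                   whose modifications still leave the original
                   signature verifiable
    """
    return p1 * p2 + p1 * (1 - p2) * passthrough


p1 = p2 = 0.20
print(round(signed_fraction(p1, p2), 4))       # 0.04 -> only 4% signed
print(round(signed_fraction(p1, p2, 0.5), 4))  # 0.12 -> 12% if half survive
```

The point is that the two factors multiply because the list sits in series
with the sender, so either term being small suppresses the product.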
what is significant about the current thread is the nature of the
analysis that needs to be done, to handle the changes a mailing list
can introduce into a previously-signed message.
internet standards that rely on these kinds of statistical and case
analyses make for complex, problematic implementation and testing.
We are proposing no statistical analysis as part of IIM; you're making it sound
like we're proposing something like Bayesian filtering! But I think it's very
appropriate to use statistics or experimentation to decide how far we should go
with this.
BTW, what about the canonicalization that is proposed in both IIM and DK: do
you advocate eliminating that as well?
"complex, problematic implementation and testing" is a code-phrase for
"difficult to adopt and make interoperate on a large scale".
anyone with internet-scale testing, deployment and use experience to
the contrary should speak up.
Again, I disagree with the premise that this is complex. In calling it
problematic, you show that you have already made a judgment.
Folks -- we are working on a topic that has an unbroken track record
of failing to gain ANY large-scale successes in the entire history of
the IETF, in spite of repeated attempts.
This track record calls for taking the most narrow, focused approach
we can. This means the specification should be absolutely minimalist.
We should strive for the most basic and straightforward capability we
can.
As I have said before, Dave, 'narrow' and 'focused' are relative and subject to
interpretation. I think we need to agree to disagree on this particular point.
-Jim