Hi!
I'm sure others will make more intelligent comments, but I have a few
that I'd like to offer.
First, the SSP draft states repeatedly that receivers are free to
dispose of their messages as they see fit, so I believe the Review's
frequent claims to the contrary are incorrect.
> In general, the draft needs to consider adoption incentives for
> receivers.
SSP offers itself as a means to detect unauthorized domain use. That is
sufficient incentive for adoption by receivers.
> My guess is that such an analysis will show that there is a relatively
> small set of publishers and receivers who are highly motivated
> to implement the more advanced features -- advanced relative to
> earlier SSP drafts --
I don't know what you consider "the more advanced features," but I can
speak with some authority for small receivers, and they would welcome
the unauthorized-domain-use detection capability which SSP provides.
> In my opinion, the draft should be broken into an initial core, with
> optional extensions. The core should define the publication mechanism
> and the smallest set of features that are deemed useful and likely
> to receive a broad base of initial adoption.
No, no. Let's resist this urge. The "smallest set of features that are
deemed useful" is exactly what many believe the existing draft already
is, so there's nothing to split out.
> the protocol is not constrained to "interpret" with respect to defining
> what the published information means, but rather is meant to guide, or even
> mandate, how the mail receive-side participant should handle messages.
There are several statements within the document to the contrary. How
do you reconcile those with your assertion above?
> There have been some Internet publication mechanisms used that might be
> thought to be similar to SSP. Most are third-party, centrally controlled
> attribute or quality databases. These are entirely different
> administrative and information models from the self-published directive
> nature of SSP.
You're speaking here of BLs and SPF, whose existence and ubiquitous use
are proof of at least three things: (1) the idea of offering input into
receive-side filters is not unprecedented; (2) the network
infrastructure is very robust with respect to such queries; and (3) the
world has embraced the concept of DNS-based input into the message
decision-making matrix. Those are precisely the criteria upon which we
can reliably presume the mechanical and adoptive success of SSP.
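To make the point concrete, here is a minimal sketch of the kind of
DNS-based input receivers already rely on: a DNSBL-style lookup, where
the connecting IP's octets are reversed and queried under a list zone.
The zone name and the in-memory record set are hypothetical stand-ins
for a real resolver; this is illustrative only, not part of SSP.

```python
def dnsbl_query_name(ip, zone="bl.example.net"):
    """Build the reversed-octet query name a DNSBL lookup uses."""
    # e.g. 192.0.2.1 under bl.example.net -> 1.2.0.192.bl.example.net
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(dns_a_records, ip, zone="bl.example.net"):
    """Check a (stubbed) DNS table for a DNSBL hit.

    A "hit" is any A record under the zone; a miss (NXDOMAIN) simply
    means the IP is not listed -- failed queries are routine.
    """
    return dnsbl_query_name(ip, zone) in dns_a_records
```

The same pattern (publish in DNS, query per message, treat a miss as a
normal outcome) is what SSP asks receivers to do one more time.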
> The original SSP specification applied only to unsigned messages.
Not sure what you mean by "original SSP specification," but you state
this more than once in your Review. I believe it is mistaken, but it's
not interesting: whatever changes may have occurred were accomplished
through the WG process and so were not done in secret.
> If a signer has a good reputation, then why is that not
> sufficient for enabling delivery? In other words, with a
> signature of a domain with a good reputation, what threats is
> SSP trying to protect against?
Perhaps it is sufficient, and when it is, some steps in the SSP
algorithm (all steps after step 1) are a waste of time and resources.
I've struggled with this myself and am not altogether satisfied. An
easy optimization would be to add "Verifier Acceptable Third-Party
Signatures" (VATPS) to the text of step 1. But I don't think this
problem amounts to the receiver being "told what to do" by the domain
owner. In other words, SSP's requirement that the algorithm in Section
4.4 continue beyond step 1 even when a VATPS is detected early is an
inefficiency in the algorithm, not a mandate. If a receiver knows it
will always accept messages bearing a VATPS, then queries to determine
whether the From domain's published policy agrees with that practice
are pointless.
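A minimal sketch of the short-circuit described above, assuming a
verifier that tracks its own set of acceptable third-party signing
domains. The function and variable names are hypothetical and are not
taken from the SSP draft; the point is only that a VATPS check in step
1 avoids the later policy lookup entirely.

```python
def evaluate_message(valid_signature_domains, from_domain,
                     acceptable_third_parties):
    """Return a disposition given the message's verified signing domains."""
    # Step 1: a valid first-party signature (signing domain matches the
    # From domain) always suffices.
    if from_domain in valid_signature_domains:
        return "accept:first-party"
    # Proposed optimization: a Verifier Acceptable Third-Party Signature
    # also short-circuits the algorithm, skipping the SSP DNS query.
    if valid_signature_domains & acceptable_third_parties:
        return "accept:vatps"
    # Otherwise fall through to the published-policy lookup (not modeled).
    return "lookup-ssp-policy"
```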
Now, Jim will have something to say to clarify this issue further, I'm
sure (hoping). :)
> The draft does note that initial receive-side adopters of SSP
> will find no SSP DNS record. However the draft does not
> address the adoption and use impact of being expected to make
> a query that will almost always fail for a significant number
> of years into the future.
What is the significance of this observation? The DNS infrastructure
has proven to be very robust, and a half-dozen or more queries are now
routinely performed on each and every message received. If the DNS
infrastructure cannot scale for one more query per message, then I
suggest we've got much bigger problems to work on than this WG is
tasked with. Is there something especially negative about a failed
query that I don't understand?
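A sketch of why the failed query is harmless, assuming the policy
record is published at a `_ssp._domainkey` label under the From domain
(as I read the draft) and modeling DNS as an in-memory table: a miss
simply means "no policy published," and the verifier proceeds exactly
as it does today. The helper name and the table are illustrative only.

```python
SSP_LABEL = "_ssp._domainkey."

def fetch_ssp_policy(dns_txt_records, from_domain):
    """Return the published SSP policy string, or None when absent.

    One additional TXT query per message; during the long
    initial-adoption period the common result is a miss
    (NXDOMAIN/NODATA), which is a normal outcome, not an error.
    """
    return dns_txt_records.get(SSP_LABEL + from_domain)
```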
> In reviewing the apparent semantics of full SSP use, I believe
> it seeks to move a DKIM signature, which uses the same domain
> name as is in the From field, into the realm of declaring content
> to be valid.
I see absolutely no basis for such a belief. Please explain, and please
define both "content" and "valid".
> 8. Filter Engine vs. End-user as SSP Target
>
> The current SSP has re-introduced a range of assumptions about use with
> human recipients, and has used those assumptions for dictating specification
> details. The specification needs to remove all consideration of human
> issues or else to provide substantial empirical basis for its inclusion.
What are these assumptions?
Arvel