Trojans that send much of the virus traffic (to propagate more Trojans) and
spam (for revenue) are not concerned with full compliance with a
specification, nor do they implement proper SMTP client states.
OK.
Why attribute abuse
to malevolent intent when expediency is more likely the principal
motivation?
I don't care what the motivation is.
So stopping a message only after the full bandwidth of the rejected
message has been wasted may seem like a small concern, but there may be
another 100,000 of these Trojans requesting service.
I don't much care when the message is rejected. It is great if the
message can be rejected early on, but that does not mean that I
am going to reject a scheme just because it does not make this
possible.
In the real world, spam zombies have a finite life. Sure, some
botnets have 100,000 machines, but most do not, and there is a lot of
work going on to reduce the value of hijacked machines as spambots.
If you think there is a problem here, suggest a fix. Spreading
FUD about a mechanism that is not intended to address the
problem does not help.
In addition, many of these spammers use a random sub-domain label,
resolved via a wildcard record, as a means to evade filters. It also
means the data needed for a repeated visit or a repeated message will
not be in a DNS cache, costing even more bandwidth on DNS queries,
possibly more than on the messages being checked.
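To make the cache-miss point concrete, here is a minimal sketch (Python; the base domain and function name are illustrative) of how a wildcarded zone lets a sender mint a never-repeated name for each message:

    import uuid

    def throwaway_name(base_domain: str) -> str:
        # The zone publishes *.base_domain, so every label resolves,
        # but no two messages reuse a label; the receiver's resolver
        # can never answer the lookup from its cache.
        return f"{uuid.uuid4().hex[:12]}.{base_domain}"

    # Each message carries a name no resolver has seen before.
    print(throwaway_name("bulk-sender.example"))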
Sounds like a very noticeable scheme to me.
I believe most MARID DNS queries will be one or two per transaction and
can be cached easily. I have seen nothing to suggest that the
relationship is other than linear. Therefore I dismiss "these concerns",
which seem to suggest that the relationship is geometric or exponential
or whatever. It doesn't make sense, and it is not borne out by the early
adopters of SPF.
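As a rough sketch of why caching keeps the load linear (illustrative Python, not measured data; the counts and domain names are invented):

    from functools import lru_cache

    authoritative_queries = 0

    @lru_cache(maxsize=None)
    def lookup_policy(domain: str) -> str:
        # Stand-in for a MARID/SPF TXT query; each cache miss
        # would be one query to the authoritative server.
        global authoritative_queries
        authoritative_queries += 1
        return f"policy-for-{domain}"

    # 10,000 messages drawn from only 50 distinct sending domains:
    for i in range(10_000):
        lookup_policy(f"domain{i % 50}.example")

    print(authoritative_queries)  # 50, not 10,000: grows with domains, not messages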
Could you explain the premise used for this assumption?
Observation of the deployed SPF records, perhaps?
Hmm, this paragraph also seems to have a high FUD-to-fact ratio.
Sending hosts > receiving hosts (increases scale and scope)
Originating domains > receiving domains (increases scale and scope)
Receiving domain sets > receiving domains (increases scale and scope)
I do not have the slightest idea what point you are trying to
make here, and I don't think you do either.
My premise is that abuse of published partial lists left open will
ensure either the removal of these records or an attempt to publish a
fully comprehensive record set. A desire to delegate to other domains
providing services runs the risk that, as those domains add a few
indirections of their own, hard errors may cause the complete loss of
mail service without the administrator of the domain having done
anything atypical, or perhaps without having done anything at all.
The posts I made earlier describing a means of applying a restriction
on indirections have not made it to the list yet.
It is really easy to do.
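Since those posts haven't appeared yet, here is one way such a restriction could look: a minimal sketch in Python, where fetch_record, the domain names, and the budget of 10 are assumptions drawn from this thread rather than from any draft.

    MAX_INDIRECTIONS = 10  # illustrative budget taken from the discussion

    class PermError(Exception):
        # Hard error: the record chain exceeded the indirection budget.
        pass

    def evaluate(domain, fetch_record, remaining=MAX_INDIRECTIONS):
        # Follow include/redirect-style targets, decrementing one
        # shared budget across the whole chain. fetch_record is a
        # hypothetical callback mapping a domain to the domains its
        # published record references.
        for target in fetch_record(domain):
            if remaining <= 0:
                raise PermError(f"too many indirections under {domain}")
            remaining -= 1
            remaining = evaluate(target, fetch_record, remaining)
        return remaining

    # Toy record set: a domain delegates to two providers, one of
    # which chains on to a pool of its own.
    records = {
        "example.com": ["esp-a.example", "esp-b.example"],
        "esp-a.example": ["pool.esp-a.example"],
        "esp-b.example": [],
        "pool.esp-a.example": [],
    }
    evaluate("example.com", records.get)  # uses 3 of the 10 allowed indirections

A chain that grows past the budget fails with a hard error instead of fanning out into unbounded DNS traffic.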
Assuming that Sender-ID records are able to limit the required queries
to 10, and setting the "transientError" timeout at 10 seconds, what
will be the impact on network integrity?
If you end up with a problem, process offline; but I don't see why
a problem should arise.
The Internet stops working if you hypothesize worst-case responses
for every transaction. So what? We knew that.
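To put a number on that worst case (using the thread's own figures of 10 queries and a 10-second timeout; neither is normative):

    # Every query timing out on a single check:
    queries_per_check = 10
    timeout_seconds = 10
    print(queries_per_check * timeout_seconds)  # 100 s before a transientError verdict

That bound applies only when every lookup fails, which is the hypothetical worst case, not the expected one.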