Re: [ietf-dkim] Re: SSP + SPF records in DNS
2008-01-03 20:16:15
On Jan 3, 2008, at 1:36 AM, Frank Ellermann wrote:
Douglas Otis wrote:
A realistic estimate of the text-based CIDR payload might be about
140 bytes. These CIDRs must span both IPv6 and IPv4 ranges.
Did you count 20 bytes in a:example.org/24//32 and then replace 11
for example.org by 130 to get 140 ?
Estimates based upon resource records approaching system limits would
be problematic. For review, the estimate of 140 assumed that an IPv4
or IPv6 CIDR takes about 22 characters on average, within payloads
able to accommodate typical ancillary information; without that
headroom, truncation might result.
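As a rough sanity check of this sort of budget, one can ask how many ~22-character CIDR mechanisms fit in a classic UDP response (the 512-byte limit is real, but the 100 bytes reserved for headers and ancillary information is an illustrative assumption, not a figure from the thread):

```python
# How many ~22-character CIDR mechanisms fit before a classic DNS UDP
# response truncates? The 100-byte reserve for headers, owner name,
# and ancillary information is an assumed figure, not from the thread.

UDP_PAYLOAD = 512    # classic DNS-over-UDP limit without EDNS0
RESERVED = 100       # assumed overhead plus ancillary information
CIDR_MECHANISM = 22  # average ip4:/ip6: mechanism length per the estimate

print((UDP_PAYLOAD - RESERVED) // CIDR_MECHANISM)  # 18 mechanisms
```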
A high percentage of abusive email emanates from compromised systems.
While large CIDRs may reduce the number required, this tactic reduces
protection. Unexpected sources of email are always popping up.
Some libraries limit MX/hostname cascades to 50 CIDRs.
There are no "cascades" in SPF, let alone in SSP, and if a pre-MARID
SPF library doesn't implement the post-MARID RFC 4408 file a bug
report, use a fresher implementation, fix it, or hire somebody to
fix it.
It remains clear the SPF exploit concern is still not understood. In
addition, victims of SPF parsing exploits might not be publishing or
checking SPF records, or even using email.
Initial SPF records causing a "cascade of transactions" can be cached
and then execute entirely different sequences. It does not matter
whether a cascade includes A, AAAA, PTR, MX, or TYPE 99/TXT records.
For a spammer/attacker, the attack can recycle targets and become free
once the duration of a spam campaign exceeds a receiver's negative
caching.
When receivers parse RFC 4408 SPF records to evaluate email
addresses, bad actors can generate a cascade of DNS transactions from
cached records, such as:
(SPF) -> MX -> Hostname-A or Hostname-AAAA
            -> Hostname-A or Hostname-AAAA
            ...
      -> MX -> Hostname-A or Hostname-AAAA
            -> Hostname-A or Hostname-AAAA
            ...
      ...
or
(SPF) -> SPF
      -> SPF
      -> SPF
      -> SPF
      -> SPF
      -> SPF
      -> SPF
      -> SPF
      -> SPF
      -> SPF
      -> TXT
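A toy query counter conveys how such cascades multiply (the zone layout, a chain of ten includes ending in one mx mechanism, is hypothetical):

```python
# Toy counter for the cascades sketched above. Real evaluators enforce
# RFC 4408's limit of 10 mechanisms that cause DNS lookups; the zone
# below is an illustrative worst case, not a real configuration.

MX_HOSTS = 10  # assumed hosts behind each MX, each needing an A/AAAA

def count_queries(record, zone):
    """Count DNS transactions triggered by evaluating one SPF record."""
    queries = 0
    for mech in record:
        kind, _, target = mech.partition(":")
        if kind == "include":
            queries += 1                     # fetch the included record
            queries += count_queries(zone[target], zone)
        elif kind == "mx":
            queries += 1 + MX_HOSTS          # MX query + address lookups
    return queries

# A chain of includes d1 -> d2 -> ... -> d10, the last using mx:
zone = {f"d{i}": [f"include:d{i + 1}"] for i in range(10)}
zone["d10"] = ["mx:victim"]
print(count_queries(["include:d1"], zone))   # 21 transactions
```

All of these transactions are triggered by the receiver's resolver, not the attacker's systems.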
Is it really wise to invite unrelated TXT resource records into
this congested location, already (ab)used by a PRA evaluation
process?
PRA doesn't look at v=ssp1, nobody with an IQ above zero uses PRA
looking at v=spf1, and pro-PRA arguments for its spf2.0/pra are
limited to "plausible after an erroneous decision to ignore the
envelope sender address" (and then IMO more plausible than SSP's
"first author" approach).
If DKIM SSP were to use type 99 RRs to avoid repeated transactions,
expect use of TXT records for the same reason. In essence, type 99 is
unlikely to ever become adopted, as it will be seen as a wasted
transaction to be avoided as well.
Of course, the TXT RR at the base domain does not scale and should be
avoided. : )
I can't tell how many "type 99" (or rather corresponding TXT) record
users don't have 25 bytes for SSP directly, I can't tell how many
would be willing to pull spf2.0/pra as "nice try, but now it's over"
in favour of SSP, and I can't tell if 25 is realistic: RFC 5016 4.7
and 5.4 (2) claim that 25 is not good enough. OTOH a future SSP RFC
is free to obsolete parts of the informational RFC 5016.
Each RR in an RR-set consumes 12 bytes for RR properties, and each
string consumes an additional byte or so for its length. For the
current 41-byte SSP string of "v=ssp1 dkim=strict handling=process
t=n:s", a total of 54 bytes is needed for its separate RR, which
could comprise about 18% of the available resource space. When more
than two different protocols utilize the same TXT RR and location,
newer versions are unlikely to remain viable.
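The byte arithmetic can be checked directly (the 12-byte fixed overhead and single length octet follow the estimate above, with the owner name assumed compressed away):

```python
# Checking the 54-byte estimate: fixed per-RR overhead, one length
# octet for the <character-string>, plus the policy string itself.

policy = "v=ssp1 dkim=strict handling=process t=n:s"
RR_FIXED = 12      # per-RR properties (TYPE, CLASS, TTL, RDLENGTH, ...)
LENGTH_OCTET = 1   # each character-string carries a length prefix

total = RR_FIXED + LENGTH_OCTET + len(policy)
print(len(policy), total)   # 41 54
```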
Each email-address evaluation may require processing an entire SPF
record set. This set may span as many as 111 resource record
transactions, ignoring checks for TXT or type 99 and reporting
records.
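The 111 figure appears to decompose as one policy fetch, ten MX queries, and one hundred address lookups (the 10x10 shape being the worst case permitted by RFC 4408's limits):

```python
# Worst-case transaction count behind the "111" figure: one policy
# fetch, ten mx mechanisms (the RFC 4408 lookup limit), and ten hosts
# behind each MX, each needing an A/AAAA query.

spf_fetch = 1
mx_queries = 10
address_queries = 10 * 10

print(spf_fetch + mx_queries + address_queries)       # 111
print(spf_fetch + mx_queries + address_queries + 1)   # 112, counting the
                                                      # extra TXT/type 99
                                                      # type check
```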
If you don't ignore it you get 112 instead of 111. I've no idea
what reporting records are.
SPF has an exp modifier to fetch a macro-expanded message that could
be added to a DSN. While an unlikely method to spam, this adds yet
another possible TXT transaction to the sequence.
The 111/112 affects receivers wishing to evaluate an SPF policy with
10 MX mechanisms, each with 10 names. For receivers wishing to
evaluate only v=ssp1 that's irrelevant, admittedly they'd need both
query=txt and query=spf for some years.
Again, this is a "plan B" if the two objections against SSP at
_ssp._domainkey listed above turn out to be bad enough. I can't tell
if that's the case.
The macro expansion of SPF records to evaluate DKIM-related email
addresses must be avoided. To ensure this, plan B should be avoided.
Rampant SPF evaluations can result in a devastating attack that is
completely free for the spammer/attacker.
In your scenario the attacker (not a spammer) maintains the MX
records, each with 10 bogus addresses of the victim. It is 1
query=mx to the attacker for 10 query=a to the victim, that's cheap
but certainly not "completely free". Your idea to use various MX
records selected by different local parts would stabilize this 1:10
amplification, for less traffic on the attacking NS they'd stick to
their evil MX records.
This conclusion overlooks that cached records induce the sequence of
transactions by leveraging SPF macros. The resulting transactions no
longer require _any_ of an attacker's/spammer's resources, which
makes this a zero-cost attack.
Now that we have discussed it again also on this list let's drop it
*here*. A future 4408bis will have to fix the MX issue if needed,
it's simple enough.
Further limiting the SPF MX mechanism will not solve the caching
exploit created by SPF's macro expansion feature. Preventing
expansion of local-part components will require exploits to expend
more resources initially, but then any label component is able to
recycle an attack from cached records, again at zero cost. : (
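The local-part macro at the heart of this concern can be illustrated with a minimal expander (a simplification; real RFC 4408 macro processing supports transformers, delimiters, and many more macro letters, and the mechanism target below is hypothetical):

```python
# Minimal sketch of SPF's %{l} (local-part) and %{d} (domain) macros:
# every distinct local part in forged mail expands to a fresh label,
# so each message yields a cache-missing DNS lookup.

def expand(target, local_part, domain):
    """Expand %{l} and %{d} in one SPF mechanism target."""
    return target.replace("%{l}", local_part).replace("%{d}", domain)

target = "%{l}._spf.%{d}"   # e.g. the target of an exists: mechanism
print(expand(target, "alice", "example.org"))   # alice._spf.example.org
print(expand(target, "bob", "example.org"))     # bob._spf.example.org
```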
E.g. limit the mx per record to one, deprecate mx in favour of
include, or limit the number of NXDOMAIN
Such limitations will affect those who depend upon MX/CIDRs for
delegating to domains that are not publishing SPF records.
- in fact a SHOULD in RFC 4408 can be implemented by limiting
NXDOMAIN as noted on the page cited by Julian (see the last
paragraph of the rebuttal, "a viable void-lookups limit for SPF
might lie between 2 and 5.")
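That SHOULD might be implemented with a simple counter wrapped around the resolver (a sketch only: the limit of 2 follows the quoted range, and the resolver callable is hypothetical):

```python
# Sketch of a void-lookup limit: abort SPF evaluation once the number
# of "void" answers (NXDOMAIN or NODATA) crosses a small threshold.
# The resolve callable stands in for a real DNS resolver.

class VoidLookupLimiter:
    def __init__(self, resolve, limit=2):
        self.resolve = resolve   # callable returning a list of answers
        self.limit = limit
        self.voids = 0

    def query(self, name, rrtype):
        answers = self.resolve(name, rrtype)
        if not answers:          # NXDOMAIN or NODATA counts as void
            self.voids += 1
            if self.voids > self.limit:
                raise RuntimeError("permerror: void-lookup limit exceeded")
        return answers

limiter = VoidLookupLimiter(lambda name, rrtype: [], limit=2)
limiter.query("a.example", "A")
limiter.query("b.example", "A")
try:
    limiter.query("c.example", "A")
except RuntimeError as e:
    print(e)   # permerror: void-lookup limit exceeded
```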
Unless a different version of the SPF record is published and older
versions are removed, there is no way to prevent the use of unsafe
routines. IMHO, converting to a different version is highly
unlikely. There is no viable means of escape.
http://www.openspf.org/draft-otis-spf-dos-exploit_Analysis#rebuttal
The FUD is an attempt to increase awareness of risks remaining
within the SPF protocol.
We are aware of it, and "interoperability and implementation
reports" should IMHO state *how* they implement the relevant RFC
4408 SHOULD.
Unfortunately, even you are still unaware of the exploit concern, nor
are there any simple fixes.
A focus on security should move email away from IP address path
registration. Converting to DKIM in conjunction with TPA-SSP would
provide safe alternatives for controlling DSNs in a manner compatible
with RFC1123.
SPF leverages MX records, where MX records themselves offer a
moderate amplification of about 10.
No, the limit of 10 names per mx-mechanism is a hard RFC 4408 limit,
it is no limit of "MX records themselves".
However, size constraints of MX responses provide roughly comparable
limits.
SPF permits references to MX records to be based upon labels
generated by macro-expanding email-address local-parts. : (
One fresh malicious MX triggered per mail is enough to get an MX
amplification, that you insist on using local part macros for this
attack is IMO irrational.
Staging an attack at _zero_ resource cost to the attacker is
irrational? Is it rational to use a method that both reflects and
greatly amplifies an attack compared to the spam itself? Of course
the spam would be sent regardless. The draft that I wrote should not
have estimated gain in this fashion. It seems to have been too
distracting. I apologize for not being clearer.
I'm no fan of SPF's local part macros, they are odd for UTF8SMTP,
and messy for quoted strings with leading / adjacent / trailing
dots, but they are not essential for an attack scenario.
Fortunately, resources consumed in staging an attack normally provide
revenue when not attacking. SPF's macro feature enables a free
attack, which removes an important factor constraining an otherwise
greater botnet problem.
Normal use of MX records will seldom cause all hostnames to be
resolved at once when in different domains.
"Call back verification" is an example of no "normal use" - spammers
have a reason why they forge "plausible" MAIL FROMs.
It is also unlikely that call back verification will rapidly attempt
to resolve all MX targets that are within different domains. However,
this will happen with SPF.
spam enabling the attack is still accepted by appending a "+all" or
its equivalent as the final SPF mechanism.
The attacker isn't interested in delivering spam, that could help to
locate the NS of the attacker. Your idea to run a DoS attack and a
spam campaign simultaneously is irrational.
The NS of the attacker is likely to be a compromised system that is
moved every few minutes. A spammer/attacker is interested in
capitalizing upon their botnet. When their attack emanates from DNS
resolvers used by recipients, the botnet remains hidden. An attack
that does not impair their spamming activities would be a generous gift.
Several proponents of SPF remarked they think DNS is broken.
RFC 1123 broke responsible SMTP forwarding, and it's kind of odd
that determining a DNS zone cut doesn't work, if that's what you're
talking about
It would appear DKIM helps solve your concern about forwarding. TPA-
SSP can impose the desired constraints on the MailFrom without
reliance upon IP address path registration. : )
This DNS oddity resulted in CSV's tree walk
IMHO, Dave was right back then about validating SMTP client first.
Publishing domain wide policy was considered by others to be a means
to encourage CSV use. However, it seems providers would rather have
recipients depend solely upon email domain authorizations. At least
providers should not be troubled by a TPA-SSP authorization scheme. : )
it might affect the current SSP proposal, but SPF can do without
it: It's *unnecessary* to protect domains when they have neither an
MX nor an address with an SMTP server. It's *possible* but not
*necessary* - forging such addresses makes no sense for spammers,
such addresses are "implausible", they fail in a "call back
verification".
I am even in favour of eliminating acceptance when there are no MX
records. Permitting just A records for discovery would cause the
hunt for email policy to punish second-level domain providers, or
require policy to be published adjacent to every A record. Perhaps
hosts could publish within a "_host." subdomain to avoid this problem?
-Doug
_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html