
Re: Proposed DNSSEC Plenary Experiment for IETF 74

2008-11-28 13:45:01
Andrew,

I don't want to stretch this discussion out too much because I
think the point has been made, but a few comments below.

--On Friday, 28 November, 2008 10:58 -0500 Andrew Sullivan
<ajs@shinkuro.com> wrote:

> On Fri, Nov 28, 2008 at 10:09:16AM -0500, John C Klensin wrote:
>
>> ones) are the most likely targets of attacks.  If they are,
>> then having DNSSEC verification only to those servers, with
>> client machines trusting the nearby caching servers without
>> DNSSEC protection, provides very little protection at all.
>> Put differently, if we cannot extend DNSSEC protection and
>> verification to the desktop, DNSSEC provides very little
>> marginal security advantage.

> This doesn't actually follow, because there could be another
> way to validate the link between the end host and the
> validating recursive resolver.  For instance, we could use
> TSIG between a non-validating stub resolver and a validating
> recursive resolver in order to ensure that attacks between
> those two points aren't successful.  If I know I have the
> right node doing the validation for me, then attacks against
> the ISP's validating recursive resolver require complete
> takeover of that machine: by no means impossible, for sure,
> but a bigger deal than just spoofing answers to a stub
> resolver.

Sure.  But I suspect that the number of systems that fully
support TSIG but do not support client validation is small.  I'd
be happy to be proven wrong about that.  One could also run the
DNS queries between stub resolver and validating recursive
resolver over a properly-validated and secured tunnel, but the
number of those isn't huge either.  We could also debate what
is, and isn't, difficult -- depending on network topology and
operational quality, it is often much easier and more effective
in practice to mount an attack against a server than against the
network.
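
For concreteness, a rough sketch of the arrangement Andrew
describes, using Python's dnspython library (the key name, secret,
and resolver address below are placeholders, and a real deployment
needs key provisioning that this glosses over):

    import dns.flags
    import dns.message
    import dns.query
    import dns.tsigkeyring

    # Shared secret provisioned out of band on both the stub and the
    # recursor; the name and secret here are illustrative placeholders.
    keyring = dns.tsigkeyring.from_text(
        {"stub-recursor-key.": "pRP5FapFoSnUXsmEhJ6OPQ=="})

    # DO=1 asks the recursor to do DNSSEC processing on our behalf.
    query = dns.message.make_query("example.com.", "A", want_dnssec=True)
    query.use_tsig(keyring, keyname="stub-recursor-key.")

    # 192.0.2.53 stands in for the validating recursive resolver.
    # dnspython checks the TSIG on the response and raises if it is
    # bad, so a successful return proves the answer came from the
    # recursor holding the shared key.
    response = dns.query.udp(query, "192.0.2.53", timeout=5)

    # The AD (Authenticated Data) bit carries the recursor's verdict.
    if response.flags & dns.flags.AD:
        print("validated by the recursor, over a TSIG-protected path")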

> That said, I don't want to make light of the end-point
> problem, since TSIG between a stub and a recursor isn't a
> trivial problem today either.  Moreover, since end nodes in
> many environments get their recursor's address(es) via DHCP,
> and since that path is pretty easy to compromise, the whole
> edifice rests on a sandy foundation.

Exactly.

> Nevertheless, I just want
> to be clear that having every end node in the world doing RFC
> 4035-and-friends validation is not the only path to useful
> DNSSEC.

I would never go so far as to say "only path to useful...".
I'm actually a big believer, in the present environment, in
LAN-local validating caching resolvers.  But that is not a
popular setup, especially in the residential, SOHO, and
small-business environments that are often at greatest risk.
Unless one can either take advantage of special cases or harden
the servers and data paths well past current norms, I don't see
DNSSEC living up to the expectations and hype without end-node
(or at least end-network) validation.

>> As several people have pointed out, effective use of DNSSEC to
>> the desktop requires sufficient work on APIs and UIs that an
>> application, or the user, can distinguish between "signed and
>> validated", "signed but does not validate", and "unsigned".

> Why?  It seems to me that acceptable definitions of "works" and
> "doesn't work" in a security-aware context could include
> "validated or insecure delegation" and "bogus delegation"
> respectively.  In my opinion, any plans that involve users
> making sensible security trade-offs due to validation failures
> will get us right back where we are with self-signed or
> expired (or both) certificates for https.  It seems a
> perfectly good idea to me that "bogus" means exactly the same
> thing as "site off the air".

We are in agreement about end users doing security validation
and decision-making.  But, unless you can deploy DNSSEC, with
signing of all relevant zones, on a flag-day basis, the end-user
software needs to be able to distinguish between "address
validated with DNSSEC" and "address accepted because no
signatures are present".  Otherwise, one has to treat every
address as equally untrusted, and that is more or less equivalent
to DNSSEC not being present at all.
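
To make that three-way distinction concrete: with a validating
recursor, a signed-and-valid answer comes back NOERROR with the AD
bit set, unsigned data comes back NOERROR without AD, and bogus
data comes back SERVFAIL; retrying with CD set separates a
validation failure from an ordinary server problem.  A rough
sketch (the resolver address is a placeholder, and NXDOMAIN
handling is omitted):

    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    RESOLVER = "192.0.2.53"   # placeholder validating recursive resolver

    def classify(name):
        """Rough classification: 'validated', 'unsigned', or 'bogus'."""
        query = dns.message.make_query(name, "A", want_dnssec=True)
        response = dns.query.udp(query, RESOLVER, timeout=5)

        if response.rcode() == dns.rcode.NOERROR:
            # AD set: the recursor validated the signature chain.
            # AD clear: the answer came from unsigned (insecure) data.
            return "validated" if response.flags & dns.flags.AD else "unsigned"

        if response.rcode() == dns.rcode.SERVFAIL:
            # Retry with CD (Checking Disabled): if the data now comes
            # back, the earlier failure was a validation failure.
            retry = dns.message.make_query(name, "A", want_dnssec=True)
            retry.flags |= dns.flags.CD
            retry_resp = dns.query.udp(retry, RESOLVER, timeout=5)
            if retry_resp.rcode() == dns.rcode.NOERROR:
                return "bogus"

        return "no answer"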

Whether it is appropriate to treat "failed validation" as
equivalent to "no domain" or "no server response" is a much more
subtle question, one I'm much more comfortable trying to answer
with a signed root and tree than I am with lookaside.

>> ...
>> the middlebox problem, with servers not under the user's
>> control making decisions about whether or not particular
>> strings are resolved or reported to the user machine as
>> non-existent.  I have not been following the DNSSEC protocol
>> work closely enough to be sure, but my impression is that
>> such protocol work has not even been started, much less
>> concluded and standardized.

> You have exactly two options: allow the recursive server to
> make the decisions you seem to dislike -- and I think people
> who like that approach think it's a feature, not a bug -- or
> else do validation out at the end nodes.  The end node gets
> a bit to tell upstream validators that it is going to do all
> validation itself, and those upstream systems are required to
> pass along all the data necessary for such validation.  So
> it's still possible to do everything at the end node.
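
For reference, the bit Andrew mentions is the CD (Checking
Disabled) flag of RFC 4035, used together with DO.  A minimal
illustration (the resolver address is again a placeholder):

    import dns.flags
    import dns.message
    import dns.query

    # DO=1 requests DNSSEC records (RRSIGs and friends); CD=1 tells
    # the upstream validator to hand the data over even when its own
    # validation would have failed, so the end node can judge itself.
    query = dns.message.make_query("example.com.", "A", want_dnssec=True)
    query.flags |= dns.flags.CD

    response = dns.query.udp(query, "192.0.2.53", timeout=5)
    for rrset in response.answer:
        print(rrset)   # includes the RRSIGs needed for local validation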

I neither like nor dislike that recursive-server model.  I
just think that the quality of security/trust improvement it
provides is questionable given current operational realities
and, perhaps more important, that only a very small number of
successful attacks on such servers that people depend on
could bring the whole DNSSEC concept into serious disrepute.

> This is quite independent of the question of whether
> applications have the ability to understand the results from
> the validator.  I agree that OS APIs seem to be missing.  I'm
> not sure that's something the IETF ought to be solving, but
> I'd happily entertain arguments either way.

IMO, the dividing line is precisely between doing validation at
the endpoints and doing it somewhere else.  If the answer is
"endpoints", then it is perfectly sensible and consistent with
IETF history to say "local problem".  On the other hand, if
validation is at the caching resolver, then it seems to me that
the model for communicating between the stub resolver and that
system is precisely an IETF problem (again, if only to be sure
that the application can tell the difference between "validated"
and "unsigned").

>> several of them, do we need search rules for look-aside
>> databases

> My personal reading of the current specifications is that, if
> you have at least one path to validation, then validation is
> supposed to work.  So search rules ought not to be needed.
> What the implementations actually do is currently at variance
> with my interpretation, however.

Again, I'm much more concerned about current operational
practice, and how it is evolving, than I am about the theory in
the specs.  I know that, in any situation like this, a single
authoritative tree is a lot easier to manage than multiple
arrangements, and a lot harder for either a bad guy or
carelessness to diddle in subtle ways without being caught and
held accountable.  And it is quite clear that we don't have that
tree today.

To paraphrase Bill, if there are two possible validation paths,
using two different sets of lists on different servers, there is
the possibility of different answers on different paths.  And,
precisely because this mechanism is supposed to provide security
and trust validation, playing "see no evil, hear no evil,
anticipate no evil" with that particular risk is wildly
inappropriate.
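
To make the risk concrete: "validates" is always relative to the
set of trust anchors a resolver starts from, so two validation
paths can genuinely disagree about the very same data.  A toy
sketch with dnspython (a real resolver builds each key set by
walking DS/DNSKEY records down from its configured anchors, which
this glosses over):

    import dns.dnssec

    def validates(rrset, rrsigset, anchor_keys):
        # anchor_keys maps owner names to DNSKEY rdatasets -- in
        # effect, one "validation path".
        try:
            dns.dnssec.validate(rrset, rrsigset, anchor_keys)
            return True
        except dns.dnssec.ValidationFailure:
            return False

    # The same answer can verify on one path and fail on the other:
    #   validates(answer, sigs, keys_via_signed_root)  -> True
    #   validates(answer, sigs, keys_via_lookaside)    -> False
    # and nothing forces the two paths to agree.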

I'm in favor of getting things signed just as quickly as that is
feasible -- either from the root down or using look-aside
mechanisms that are, themselves, fully validated and with good
tools for dealing with potential conflicts.   But my reading of
Russ's proposed experiment had to do with demonstrating that
DNSSEC is actually useful in dealing with threats and plausible
attack scenarios, not just demonstrating that one can safely
sign zones and deploy validating software in some places on the
network.  For that demonstration of effectiveness, we are not,
IMO, quite there yet and it is frightening that we are only
having the discussion now (from that point of view, the proposed
experiment has already succeeded because we [finally] are having
the discussion).

    john

