
Re: https at ietf.org

2013-12-10 13:44:15
On 12/10/2013 06:00 AM, John C Klensin wrote:


--On Monday, December 09, 2013 22:39 -0500 Phillip Hallam-Baker
<hallam(_at_)gmail(_dot_)com> wrote:

...
For a similar reason, removal of TLDs can't happen, as people
can still graft on namespace and establish TAs for the
grafted-on namespace.


It is trivial to fix when the validation is taking place in a
service in the cloud (aka a resolver).

Rather less easy to do if people drink the DANE Kool-Aid and
do the job at the endpoint.

Now you can take this point as either arguing against doing
DANE or considering the risk and deploying the appropriate
control. But you do have to consider it.


What you are in effect asserting is that the resolver
providers are the apex of the trust chain and so there is a
diffuse trust surface rather than a sharp point. Which is true
when the validation takes place in the resolver.

I am probably going to regret getting involved in this thread,

GMTA :)

but I would draw two rather different conclusions from the above
and a number of other comments:

(1) We have seriously oversold DNSSEC as a data quality and
reliability mechanism when it is merely a transmission integrity
mechanism.  The former is about the DNS and associated
registration database (e.g., "whois") records being accurate,
secure, and maybe even information-containing.  The latter is
merely about an assurance that the data one receives hasn't been
altered in transit in the DNS.

Now most of the people who have been involved in the design and
implementation of DNSSEC have been quite careful about the
above, at least most of the time.  But sometimes they are sloppy
about their language; sometimes they say "DNSSEC" and people
hear "DNS Security" and make inferences about data quality.  More
important, there is an (illogical) chain of reasoning from
"DNSSEC is in use" to "[now] the DNS is secure" to "all of the
data provided by the DNS or its supplemental databases are of
high quality".

While the integrity checks of DNSSEC provide some protection
against some types of attacks on the "data quality" part of the
DNS environment, the attacks they protect against are very
difficult to mount in the first place.

John,

IMO you're spot on with all of the above. Especially about being careful with our language about what DNSSEC is designed to do. I try very hard to be precise, but it's good to have this reminder periodically.
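
To keep that distinction concrete, here is a rough sketch of what a "DNSSEC says OK" answer actually tells a client. It assumes dnspython and uses Google's public resolver purely as an example of a validating resolver; the AD bit only asserts that the answer arrived unaltered from a signed zone, not that the registration data behind it is accurate:

    import dns.flags
    import dns.resolver

    # Example only: any validating resolver will do; Google's public one
    # is just a convenient stand-in here.
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]

    # Set the DO bit so the resolver performs DNSSEC validation for us.
    resolver.use_edns(0, dns.flags.DO, 1232)

    answer = resolver.resolve("ietf.org", "A")

    # AD = "authenticated data": the records were not altered in transit
    # between the signer and this resolver.  It says nothing about whether
    # the registry/registrar data that produced them is correct.
    print("AD flag set:", bool(answer.response.flags & dns.flags.AD))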

An attacker with the resources to apply them would
almost certainly find it easier, less resource-expensive, and
harder to detect to attack registry databases (before data are
entered into DNS zones and signed), registrar practices, or
post-validation servers.  Non-technical attacks, such as the
oft-cited hypothetical NSL, are easily applied at those points
as well -- much more easily than tampering with keys or
signatures.

As previously mentioned, these attacks are theoretically possible, but they are trivially detectable, since all of the critical data is visible in the DNS. They may not be detected _immediately_ (and in fact likely would not be); how quickly depends widely on the value/profile of the target. But we have plenty of experience with people noticing DNS problems for critical resources. And in the DNSSEC case we now have major players doing validation (Comcast and Google leap to mind), so shenanigans involving DNSSEC are going to be noticed.
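
And the watching is cheap, since the DS RRset lives in the parent zone and is world-readable. A rough monitoring sketch, assuming dnspython and using a made-up zone name and placeholder digests, is about all it takes:

    import dns.resolver

    # Made-up zone and placeholder DS strings, purely for illustration.
    ZONE = "example.org"
    EXPECTED_DS = {
        "12345 13 2 0123456789abcdef...",  # placeholder digest
    }

    # The DS RRset is published by the parent zone, so anyone can poll it
    # and compare it to what the zone owner expects to be there.
    current = {rr.to_text() for rr in dns.resolver.resolve(ZONE, "DS")}

    if current != EXPECTED_DS:
        print("DS RRset changed!")
        print("unexpected:", current - EXPECTED_DS)
        print("missing:   ", EXPECTED_DS - current)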

The point I'm trying to get across here is that any sort of manipulation of the DNS by a 3rd party (such as a malicious hack, NSL, etc.) is valuable only to the extent that it can go unnoticed, and therefore cause innocent end users to depend on the 3rd party's resources instead of the valid ones _without their knowledge_.

We don't need DNSSEC to see that this sort of thing only works for a limited time. We have had lots of events where high profile sites have had their registrar data changed, and there is a huge public hue and cry. Of course a skillful attacker could create a phishing page that looks enough like the real site to gather a non-trivial number of user passwords, but again, we don't need DNSSEC for that. In fact, if the sophisticated attacker manages to socially engineer the registrar credentials (the most popular form of this type of attack) then they can update the DS record in addition to the NS records, and have a fake site that validates perfectly.

But even that sort of attack would only work for a short period of time. More importantly, how easy it is to slide malicious data into the DNS at all depends on where the resource sits in the tree. We already know that individual zones are vulnerable to registrar attacks. However, new TLD DS records in the root are greeted with fanfare (at least amongst a fairly substantial number of DNS wonks), so the ability to slip something in at that level is minimal at best. That is even more true at the root itself.

So to be concise (yeah, I know, too late): claiming that DNSSEC is vulnerable to external manipulation at the root or TLD level is almost certainly wrong. There are theoretical attacks that could be launched, but their practical value is nil. If someone has a valid attack that uses a method I haven't taken into account, they should find a trusted channel to make that known.

(2) In a different version of some of the comments on the
thread, the "where to validate" question is important.  If one
tries to validate at the endpoints, endpoint systems, including
embedded ones, should have the code and resources needed to
validate certs and handle rollovers, even under hostile
conditions, and that isn't easy.  If one relies on intermediate,
especially third-party, servers to validate, then much of the
expected integrity protection is gone... and the number of times
such servers have been compromised would make this a
non-theoretical problem even without concerns about
governmental-type attacks (NSL and otherwise) on those servers.
No easy solutions here.

Again, spot on. I've been saying for many years now that the most interesting part of DNSSEC is going to be local on the end user side. Pushing validation all the way down is critical.
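
For anyone who hasn't played with it, validating a single link of the chain at the end point doesn't take much machinery. A rough sketch with dnspython (its dns.dnssec module needs the cryptography package installed); the resolver address and zone are examples only, and a real validator of course walks the whole chain of trust down from the root trust anchor rather than checking one RRset against itself:

    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    # Examples only: any DNSSEC-signed zone and any resolver that returns
    # RRSIGs will do.
    RESOLVER = "8.8.8.8"
    zone = dns.name.from_text("ietf.org")

    # Ask for the zone's DNSKEY RRset along with its RRSIGs (DO bit set).
    req = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
    resp = dns.query.tcp(req, RESOLVER, timeout=5)

    dnskeys = resp.get_rrset(resp.answer, zone, dns.rdataclass.IN,
                             dns.rdatatype.DNSKEY)
    rrsigs = resp.get_rrset(resp.answer, zone, dns.rdataclass.IN,
                            dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

    if dnskeys is None or rrsigs is None:
        raise SystemExit("no signed DNSKEY answer; is the zone signed?")

    # Verify the DNSKEY RRset against the keys it contains (the KSK signs it).
    try:
        dns.dnssec.validate(dnskeys, rrsigs, {zone: dnskeys})
        print("DNSKEY RRset validates")
    except dns.dnssec.ValidationFailure as err:
        print("validation failed:", err)

The check itself isn't the hard part; the rollover handling and resource constraints John mentions (especially on embedded gear) are where it gets painful.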

I don't know where that combination of situations leaves
initiatives like DANE, but I suspect we should be looking at
trust conditions and relationships a lot more carefully than the
discussions and claims I've seen suggest we have been doing.

I like DANE a lot, and I think it has a critical role going forward. The problem is that it is only as valid as the registrar data for the zone. So it's not clear that DANE is going to be a complete replacement for a CA-provided cert.
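
To make concrete what DANE actually binds together, here's a rough sketch of a TLSA check using dnspython plus the standard library. The host name is made up, and it only handles the simplest case (full certificate, SHA-256); crucially, it means nothing unless the TLSA answer itself arrived DNSSEC-validated, which is exactly why it can never be better than the zone and registrar data behind it:

    import hashlib
    import ssl

    import dns.resolver

    # Made-up host, purely for illustration.
    HOST = "www.example.com"

    # DANE publishes certificate associations at _<port>._<proto>.<host>
    # (RFC 6698), so the TLSA records for HTTPS live here:
    tlsa = dns.resolver.resolve(f"_443._tcp.{HOST}", "TLSA")

    # Fetch the server's certificate and hash the DER form for comparison.
    pem = ssl.get_server_certificate((HOST, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    cert_sha256 = hashlib.sha256(der).digest()

    for rr in tlsa:
        # Simplest case only: selector 0 = full certificate, mtype 1 = SHA-256.
        if rr.selector == 0 and rr.mtype == 1:
            print(f"usage={rr.usage} matches cert:", rr.cert == cert_sha256)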

Doug
