
Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

2017-06-14 18:36:14

In message <db4218d7-59b4-7dd8-2cf6-ed9673960cac(_at_)nic(_dot_)cz>, Petr Špaček writes:
On 14.6.2017 00:03, Joe Touch wrote:
Hi, all,

...
  Title           : The Harmful Consequences of Postel's Maxim
https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01

Before I dive into details, let me state that I support this document in
its current form.


I completely agree with John Klensin that a test suite defines the
protocol standard (warts and all).

However, I disagree with the characterization of the Postel Principle in
this doc, and strongly disagree with one of its key conclusions ("fail
noisily in response to...undefined inputs"). Failing in response to bad
inputs is fine, but "undefined" needs to be treated agnostically.

IMO, the Postel Principle is an admission of that sort of agnosticism -
if you don't know how the other end will react, act conservatively. If
you don't know what the other end intends, react conservatively. Both
are conservative actions - in one sense, you try not to trigger
unexpected behavior (when you send), and in another you try not to
create that unexpected behavior (when you receive).

That's the very definition of how unused bits need to be handled. Send
conservatively (use the known default value), but allow any value upon
receipt.

This very much depends on the original specification.

If the spec says "send zeros, ignore on receipt" and marks this clearly
as an extension mechanism, then it might be okay, as long as the extension
mechanism is well defined.
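
To make that concrete, here is a minimal sketch in a few lines of Python
(a hypothetical flags field, not any particular protocol) of what "send
zeros, ignore on receipt" looks like when the spec defines the reserved
bits as an extension point:

    # Hypothetical 8-bit flags field: bits 0-2 are defined today,
    # bits 3-7 are reserved for future extensions.
    DEFINED_MASK = 0b00000111

    def build_flags(defined_bits):
        # Sender: be conservative - reserved bits always go out as zero.
        return defined_bits & DEFINED_MASK

    def parse_flags(wire_byte):
        # Receiver: ignore reserved bits rather than rejecting the message,
        # because the spec explicitly reserves them for future use.
        return wire_byte & DEFINED_MASK

    assert parse_flags(build_flags(0b101)) == 0b101
    # A message from a newer implementation that sets a reserved bit
    # still parses cleanly instead of breaking interoperability:
    assert parse_flags(0b00001101) == 0b00000101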


On the other hand, accepting values/features/requests which are not
specified is asking for trouble, especially in the long term. Look at the
DNS protocol; it is a mess.

- CNAME at apex? Some resolvers will accept it, some will not.

Well this is something that should be checked on the authoritative
server when the zone is loaded / updated.  A resolver can't check
this as DNS is loosely coherent.  Delegations can come or go.  Other
data comes and goes.
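
As a rough sketch of that kind of load-time check (a simplified data
model in Python, not any server's actual code):

    # Simplified zone representation: {owner_name: set of RR types present}.
    def check_apex(zone, apex_name):
        # The apex always carries SOA and NS, so a CNAME there necessarily
        # coexists with other data, which the DNS data model forbids.
        types_at_apex = zone.get(apex_name, set())
        if "CNAME" in types_at_apex and types_at_apex != {"CNAME"}:
            raise ValueError("CNAME at apex; refusing to load/update zone")

    check_apex({"example.com.": {"SOA", "NS", "A"}}, "example.com.")        # loads
    # check_apex({"example.com.": {"SOA", "NS", "CNAME"}}, "example.com.")  # rejected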

- Differing TTLs inside an RRset during zone transfer? Some servers will
accept it and some will not.

And we have rules to truncate to the minimum value.
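
A sketch of that rule (illustrative Python only, not BIND's
implementation) as applied to an RRset received during a transfer:

    def normalize_rrset_ttls(ttls):
        # Differing TTLs within one RRset are not valid, but instead of
        # rejecting the transfer the rule is to truncate them all to the
        # minimum value seen.
        lowest = min(ttls)
        return [lowest] * len(ttls)

    assert normalize_rrset_ttls([3600, 300, 86400]) == [300, 300, 300]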

Named fails hard on a number of zone content issues when running as
the master server for a zone, issues which it deliberately ignores when
running in slave mode because the slave operator can't fix them.  That
said, if there were IETF consensus that these should be fatal on slave
zones, so that we aren't left to pick up a bad reputation for failing
to serve the zone, it would be easy to change that.

We are already failing to resolve signed zones when validating where
the authoritative servers return FORMERR or BADVERS, or fail to
respond to queries with a DNS COOKIE EDNS option present.  In both
cases we fall back to plain DNS queries, which are incompatible with
DNSSEC.  There are a number of .GOV zones served by QWEST that fall
into this category.  Yes, we have attempted to inform QWEST for the
last 2+ years that their servers are broken.

See https://ednscomp.isc.org/compliance/gov-full-report.html#eo
for zones.  They are highlighted in orange.

If we ever need to set an EDNS flag or send EDNS version 1 queries,
the number of zones that will fail on a validating resolver will
increase, mostly because too many firewalls default to "these fields
must be zero" and drop the request instead of following the EDNS RFC,
which says to IGNORE unknown flags and to return BADVERS with the
highest version you do support if you don't support the requested version.

Firewalls are capable of generating a TCP RST, so they should be capable
of generating a BADVERS response if they don't want to pass version
!= 0 queries.
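
Roughly, what the EDNS RFC asks of anything that answers the query
(a Python sketch; the names below are illustrative, not any
implementation's API):

    SUPPORTED_EDNS_VERSION = 0
    DO_BIT = 0x8000          # the only EDNS flag defined so far
    RCODE_BADVERS = 16

    def handle_edns(opt_version, opt_flags):
        # Unknown flag bits MUST be ignored, not treated as an error.
        flags = opt_flags & DO_BIT

        if opt_version > SUPPORTED_EDNS_VERSION:
            # Answer with BADVERS advertising the highest version we do
            # support, instead of dropping the query or returning FORMERR.
            return {"rcode": RCODE_BADVERS,
                    "edns_version": SUPPORTED_EDNS_VERSION}

        return {"rcode": 0, "edns_version": opt_version, "flags": flags}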

Note: It doesn't have to be resolvers that detect DNS protocol errors.
You can test for these sorts of errors easily and refuse to delegate
to servers that don't follow the protocol.

Resolvers can't continue to work around every stupid response
authoritative servers return.  They don't have enough time.

To sum it up, the decision about what is acceptable and what is
unacceptable should be in the protocol developer's hands. Implementations
should reject non-specified messages/things unless the protocol explicitly
says otherwise. No more "ignore this for interoperability"!


With my DNS-software-developer hat on, I very clearly see the value of
The New Design Principle in section 4.

Set it in stone! :-)


The principle does not set up the feedback cycle in Sec 2; a bug is a bug
and should be fixed, and accommodating alternate behaviors is the very
definition of "be generous in what you receive". "Being conservative in
what you send" doesn't mean "never send anything new" - it means do so
only deliberately.

-----
Failing noisily is, even when appropriate (e.g., on a known incorrect
input), an invitation for a DOS attack.

That behavior is nearly as bad as interpreting unexpected (but not
prohibited) behavior as an attack. Neither one serves a useful purpose
other than overreaction, which provides increased leverage for a real
DOS attack.

Sorry, but I cannot agree. This very much depends on the properties of
"hard fail" messages.

If "error messages" are short enough, they will not create significantly
more problems than a mere flood of random packets (which can be used for
DoS no matter what we do). In fact, a short, predictable error message is
even better because it gives you the ability to filter it somewhere.

Also, passing underspecified messages further down the pipeline causes
problems of its own. (Imagine cases where a proxy passes
malformed/underspecified messages to the backend just because it can.)


So again, I really like this document. Thank you!

-- 
Petr Špaček  @  CZ.NIC

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: marka(_at_)isc(_dot_)org
