
Re: The internet architecture

2009-01-01 09:27:37
Tony Finch <dot(_at_)dotat(_dot_)at> wrote:

In fact the use of the term "multihoming" to describe multiple IP
addresses on the same link is a serious problem with practical
consequences.

   I see no benefit in setting up "multi-homing" without separate links.
Routing based on source address is a dangerous practice, even when the
source address is trusted.

The Internet's idea of multihoming as supported by the architecture is
NOT what I consider to be multihoming. (The lack of support for my
idea of multihoming makes Internet connections klunky, fragile, and
immobile, but we all know that, and should be embarrassed by it.)

   I'm not at all clear what "support" of "multihoming" Tony is asking
for...

The Internet's idea of multihoming is reasonably precisely
encapsulated by RFC 3484. This specification breaks existing practice
and fails to do what it is designed to.

   RFC 3484, of course, is "Default Address Selection for IPv6". I
guess that Tony is referring to Section 10.5 (which, frankly, I have
never succeeded in deciphering). If anyone actually does what 10.5
suggests, I have not stumbled upon it.

OK, so what is Internet multihoming? If the DNS resolves a hostname
to multiple IP addresses then those are assumed to be multiple links
to the same host.

   That is sometimes true, and often not.

   We indeed do that in a few cases, to have a path which bypasses
routing or connectivity problems. More often, we use separate DNS names
to point to the different interfaces on the same host.

   Typically, multiple address (A) records for the same domain name will
point to different hosts configured to return identical responses to
service requests.
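
   A minimal sketch (mine, not anything from an RFC) of what the client
actually sees -- socket.getaddrinfo() in Python, with a placeholder
hostname -- makes the ambiguity concrete: the answer is just a list of
addresses, with nothing to say whether they are different hosts,
different links to one host, or a mixture:

import socket

def addresses_for(hostname, port=80):
    """Return the IP addresses getaddrinfo() reports for a name."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # address itself is the first element of sockaddr for v4 and v6.
    return [sockaddr[0] for _, _, _, _, sockaddr in infos]

print(addresses_for("www.example.com"))   # placeholder name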

Unfortunately there is no way for a client to make an informed
decision about which IP address to choose.

   Our attitude is that clients _should_ be expected to choose blindly.

RFC 3484 specifies how to make an UNINFORMED decision, or at best,
how one could in principle inform the decision. However, in order to be
informed, a host needs to be hooked into the routing infrastructure -
but the Internet architecture says that the "dumb" network need not
explain its workings to the intelligent but ignorant hosts.

   ... which seems about right. Layer 3 is supposed to find an
interconnection from one network in the Internet to another. There
seems to be little point in "explaining" how it does this to the
endpoints.

As a result, having multiple addresses for the same hostname on
different links does not work.

   There's some logical inference missing here. It certainly does work,
though presumably it doesn't do something which Tony thinks might be
expected of it.

It never worked before RFC 3484, and though RFC 3484 tries to fix it,
it fails because of the lack of routing information.

   I don't follow...

What is worse is it breaks DNS round robin - which I admit is a hack,
but it's a hack with 15 years of deployment, therefore not to be
broken without warning. Naïve implementations have broken round-robin
DNS because RFC 3484 ignores it.

   Round-robin seems mostly unrelated -- it was never guaranteed to be
particularly good at load-balancing.
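
   Still, the mechanism Tony points at is real enough. A rough sketch
(mine alone, loosely modelled on RFC 3484's longest-matching-prefix
rule rather than a faithful implementation, using documentation-prefix
addresses) shows why a deterministic client-side sort defeats the
server's rotation:

import ipaddress

def common_prefix_len(a, b):
    # Number of leading bits two IPv4 addresses share.
    xor = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
    return 32 - xor.bit_length()

def client_sort(candidates, local_addr):
    # Prefer the destination sharing the longest prefix with our address.
    return sorted(candidates,
                  key=lambda addr: common_prefix_len(addr, local_addr),
                  reverse=True)

rotated_answers = [
    ["192.0.2.10", "198.51.100.10", "203.0.113.10"],   # first query
    ["198.51.100.10", "203.0.113.10", "192.0.2.10"],   # rotated reply
]
for answer in rotated_answers:
    # The same destination wins no matter how the records are rotated.
    print(client_sort(answer, "192.0.2.200")[0])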

So to summarize:

A host has no way to use multiple links to provide redundancy or
resilience, without injecting a PI route into the DFZ - like the
anycasted root name servers.

   This would be nice to fix, but it's not clear there's a sufficient
constituency interested in fixing it.

Given multiple addresses for the same hostname, a client has no way
to make an informed decision about which is the best to connect to.
This is why hosts that support IPv6 do not work as well as IPv4-only
hosts.

   I guess I don't follow. Indeed, IPv6 hosts that follow RFC 3484
will defeat some attempts at load-balancing, but it would seem that
this would only affect server farms using IPv6 -- which ought to know
better than to depend on those load-balancing tricks.

The Internet addressing architecture has a built-in idea that there
is one instance of each application per host, and applications are
identified by protocols (port numbers).

   This is a broken idea. It should be abandoned.

There is no support for multiple instances of the same application
per host (i.e. virtual hosting) unless the application has its own
addressing.

   I'm not clear what Tony might see as such "support".

There is no support for distributing an application across multiple
hosts (or multiple links to the same host) because address selection
is blind to availability and reachability - whether you consider them
to be binary or fractional values.

   Again, I'm not clear.

If you try to use it, you are either relying on the non-kosher
round-robin DNS, or you are likely to suffer failed or sub-optimal
connections.

   RFC 3484 specifies that implementations of getaddrinfo() should sort
the list of IPv6 and IPv4 addresses that they return. (This has never
seemed to me a particularly good idea.) It goes on to state that
applications should iterate through the list until they find a working
address. (This need not imply a delay, as I read it.)
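
   For concreteness, a minimal sketch of that iterate-until-success
advice (my own illustration, not text from the RFC; the hostname and
port are placeholders, and a real client would want smarter timeout
handling):

import socket

def connect_first_working(hostname, port):
    # Walk getaddrinfo()'s (already sorted) list until one address answers.
    last_err = None
    for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
            hostname, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.settimeout(5.0)      # don't hang forever on a dead address
            sock.connect(sockaddr)
            return sock               # first reachable address wins
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err if last_err else OSError("no addresses returned")

# conn = connect_first_working("www.example.com", 80)   # placeholder name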

In practice multihomed services (services with multiple redundant
links to the public Internet) do not use any of the techniques
described in RFCs as host multihoming, and often use techniques
that are contrary to the architecture or outright protocol
violations (e.g. Akamai's use of CNAME chains).

   Does Tony have an alternative to suggest?

--
John Leslie <john(_at_)jlc(_dot_)net>
