I've been asked twice now in private email to clarify what I mean, so this
is going to turn into a massive rant about how the current Internet
architecture - as it is deployed, and as it seems to be developing - has a
completely broken idea of how to address endpoints. The multiple meanings
of the word "multihoming" relate directly to the multiple points in the
Tony, I pretty much agree with everything you say here. There really is a
pretty serious disconnect between what we seem to be able to build
and what applications actually need.
This is rather more common for machines that mostly accept connections
(web servers), as you can do relatively simple source address based policy routing.
In my experience policy routing is not what people use multiple addresses
on the same link for. Multiple addresses on the same link are used for
virtual hosting for application protocols that don't signal
application-level addresses within the application protocol. The canonical
example is HTTP over SSL, but POP and IMAP have the same problem. (People
sometimes hack around this design error in POP and IMAP by embedding the
virtual domain in the username, which is yet another example of why every
application protocol needs its own addressing architecture.)
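The username hack described above can be sketched in a few lines. This is an illustrative sketch only (the function names and the default domain are made up, not taken from any real POP/IMAP server): the server splits the login name to recover the virtual domain, because the protocol itself gives it no other signal.

```python
# Sketch of the "domain embedded in the username" hack used to
# retrofit virtual hosting onto POP and IMAP. All names here are
# illustrative assumptions, not a real server's API.

def split_virtual_login(username, default_domain="example.org"):
    """Return (local_part, virtual_domain) for logins like
    'alice@vhost1.example.org'; fall back to a default domain
    for clients that send a bare username."""
    if "@" in username:
        local, _, domain = username.partition("@")
        return local, domain
    return username, default_domain

def mailbox_path(username):
    """Pick the per-virtual-host mailbox for a login name."""
    local, domain = split_virtual_login(username)
    return f"/var/mail/{domain}/{local}"
```

The point is that the addressing burden lands on the authentication step, which is exactly why the approach interacts badly with proxies and shared authentication databases.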
Indeed. But of course this approach doesn't work when a different POP or IMAP
server is needed for different virtual hosts, so you throw in a proxy to deal
with that case. The proxy in turn can make authentication fairly ... exciting.
But you're still stuck the minute you hit the need for different virtual hosts
to use different certificates. More recent protocols, starting, I believe, with
MTQP (RFC 3887) have a domain parameter on the STARTTLS command. And I think
there's a TLS extension to do this. But older protocols (which means most of
them) don't have the parameter, and TLS-level support is nonexistent in
practice, so in this case you're stuck with certificate selection based on IP
addresses. In some very unusual scenarios with disjoint client pools or
unusually flexible clients you may be able to get away with doing it based on
source address or destination port respectively, but most of the time you end
up with multiple destination IPs to handle this.
And even the parameter is clunky - it feels like what it is: a tacked-on field.
You're absolutely right that this highlights our failure to give application
addressing design the attention it deserves.
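The TLS extension alluded to above is Server Name Indication (SNI, originally RFC 3546). The selection logic it enables, and the IP-address fallback you are stuck with when the client doesn't send it, can be sketched as plain lookup logic (the certificate tables and names below are invented for illustration, not any real server's configuration):

```python
# Sketch of the certificate selection problem: prefer the TLS
# server_name (SNI) extension when the client sends one, otherwise
# fall back to choosing by the destination IP address the
# connection arrived on. Tables and names are illustrative.

CERTS_BY_NAME = {
    "mail.vhost1.example": "vhost1-cert.pem",
    "mail.vhost2.example": "vhost2-cert.pem",
}
CERTS_BY_IP = {
    "192.0.2.1": "vhost1-cert.pem",
    "192.0.2.2": "vhost2-cert.pem",
}

def select_certificate(sni_name, dest_ip):
    if sni_name is not None and sni_name in CERTS_BY_NAME:
        return CERTS_BY_NAME[sni_name]
    # Older clients send no SNI, so the only signal left is which
    # of the server's multiple addresses the client connected to -
    # hence the need for one destination IP per virtual host.
    return CERTS_BY_IP.get(dest_ip)
```

The fallback branch is why "multiple destination IPs" is the common answer in practice: without SNI, the destination address is the only bit of per-virtual-host information that survives to the TLS handshake.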
To reiterate, one computer providing multiple different services on the same
IP address is NOT multihoming, in the same way that one computer providing
multiple services on different port numbers is not multihoming.
The dual scenario is multiple computers providing the same service on
different IP addresses. The simplest deployment is to scale up on the
cheap by relying on round-robin DNS. In this case there are multiple
links, but they are often on the same layer 2 segment, so again not
multihoming in the sense that I meant. If you scale up further, the first
thing you do is get proper resilience with a load-balancing router (which
probably requires the architecturally impure NAT).
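The round-robin DNS trick mentioned above is simple enough to sketch: the name server rotates the order of the address records in each answer, and naive clients just connect to the first one, spreading load across the instances. The addresses and the rotation counter below are illustrative:

```python
# Minimal sketch of round-robin DNS load spreading: the name server
# rotates the address list one step per query, and a naive client
# picks whichever address comes first in the answer it receives.

from itertools import count

ADDRS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
_query = count()  # stands in for the server's rotation state

def resolve_round_robin():
    """Return the address list rotated one step per query."""
    n = next(_query) % len(ADDRS)
    return ADDRS[n:] + ADDRS[:n]

# Three successive "clients" get three different first addresses:
first_picks = [resolve_round_robin()[0] for _ in range(3)]
```

Note that the whole scheme depends on clients honouring the order of the answer, which is exactly the assumption that later address-selection rules quietly break.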
There are also lots of commonly-used solution points between round-robin DNS
and load-balancing routers where more than one host effectively shares
an IP address. These tricks are often, but not always, associated with
various clustering schemes.
One unfortunate side effect of all this is that applications end up having to
know all sorts of stuff about IP addresses - both internal and external. This
in turn contributes to the renumbering problem in ways that IPv6 support for
multiple addresses per interface cannot address.
If you require more
than one point of presence, the next step (beyond layer 2 techniques like
trunking ethernet vlans between multiple sites) is IP routing tricks, i.e.
anycast. For serious wide-area services, you are likely to use location-
and availability-sensitive DNS to direct users to the right instance.
Note that NONE of these techniques use the architecturally-supported idea
of multihoming. In fact most of them deliberately avoid it because it does not work.
The Internet's idea of multihoming as supported by the architecture is NOT
what I consider to be multihoming. (The lack of support for my idea of
multihoming makes Internet connections clunky, fragile, and immobile, but
we all know that, and should be embarrassed by it.) The Internet's idea of
multihoming is reasonably precisely encapsulated by RFC 3484. This
specification breaks existing practice and fails to do what it is designed to do.
OK, so what is Internet multihoming? If the DNS resolves a hostname to
multiple IP addresses then those are assumed to be multiple links to the same host.
Unfortunately there is no way for a client to make an informed decision
about which IP address to choose. RFC 3484 specifies how to make an
UNINFORMED decision, or at best, how one could in principle inform the
decision. However in order to be informed, a host needs to be hooked into
the routing infrastructure - but the Internet architecture says that the
"dumb" network need not explain its workings to the intelligent but
And what ends up happening in practice is applications grow a handful of knobs
for tuning this stuff. But since there's no specification for any of this
different implementations end up with knobs with different semantics and
different levels of applicability. (In the latter case I'm referring to, say,
an application that uses multiple protocols having the same or different knobs
for each protocol.)
As a result, having multiple addresses for the same hostname on different
links does not work. It never worked before RFC 3484, and though RFC 3484
tries to fix it, it fails because of the lack of routing information. What
is worse is it breaks DNS round robin - which I admit is a hack, but it's
a hack with 15 years of deployment, therefore not to be broken without
warning. Naïve implementations have broken round-robin DNS because RFC
3484 ignores it.
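The breakage is easy to demonstrate with RFC 3484's longest-matching-prefix rule (rule 9): the client re-sorts the answer against its own source address, so the rotation done by the name server no longer matters. A sketch, using illustrative IPv4 addresses:

```python
# Why RFC 3484 rule 9 (longest matching prefix) defeats round-robin
# DNS: the client re-sorts the answer, so every rotation of the
# record set collapses to the same "best" destination.

import ipaddress

def common_prefix_len(a, b):
    """Length of the common leading prefix of two IPv4 addresses."""
    x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
    return 32 - x.bit_length()

def rfc3484_rule9_sort(candidates, source):
    # Higher common prefix length with the source address wins.
    return sorted(candidates,
                  key=lambda d: common_prefix_len(d, source),
                  reverse=True)

SOURCE = "192.0.2.9"
rotated_1 = ["198.51.100.1", "192.0.2.10", "203.0.113.1"]
rotated_2 = ["192.0.2.10", "203.0.113.1", "198.51.100.1"]

# Both rotations produce the same first choice, so the name
# server's load spreading is silently undone:
pick_1 = rfc3484_rule9_sort(rotated_1, SOURCE)[0]
pick_2 = rfc3484_rule9_sort(rotated_2, SOURCE)[0]
```

Every client near the same source prefix makes the same "best" choice, which concentrates load on one instance instead of spreading it.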
So to summarize:
A host has no way to use multiple links to provide redundancy or
resilience, without injecting a PI route into the DFZ - like the
anycasted root name servers.
Given multiple addresses for the same hostname, a client has no way to
make an informed decision about which is the best to connect to. This is
why hosts that support IPv6 do not work as well as IPv4-only hosts.
In my experience this is often the initial reason for adding
application-specific configuration settings to control this stuff.
The Internet addressing architecture has a built-in idea that there is one
instance of each application per host, and applications are identified by
protocols (port numbers). There is no support for multiple instances of
the same application per host (i.e. virtual hosting) unless the
application has its own addressing.
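The "addressing of its own" that rescues an application looks like HTTP's Host header: since the IP architecture only demultiplexes by port number, multiple instances of the same application behind one address must demultiplex themselves at the application layer. A sketch with invented virtual host names and handlers:

```python
# Sketch: the IP architecture identifies at most one instance of an
# application per (address, port), so virtual hosting needs an
# application-level address - here, HTTP's Host header. The host
# names and handler bodies are made up for illustration.

HANDLERS = {
    "www.vhost1.example": lambda path: f"vhost1 serves {path}",
    "www.vhost2.example": lambda path: f"vhost2 serves {path}",
}

def dispatch(host_header, path):
    """Route a request to the right virtual host's handler."""
    handler = HANDLERS.get(host_header.lower())
    if handler is None:
        return "404: unknown virtual host"
    return handler(path)
```

Protocols without such a field (pre-SNI TLS, classic POP/IMAP) are exactly the ones forced back onto one IP address per instance.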
And the more virtualization there is the more of a problem this becomes.
There is no support for distributing an application across multiple hosts
(or multiple links to the same host) because address selection is blind to
availability and reachability - whether you consider them to be binary or
fractional values. If you try to use it you are either relying on the
non-kosher round-robin DNS, or you are likely to suffer failed or degraded connections.
In some cases you also end up with applications breaking another set of
rules and maintaining caches of past connectivity results.
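That workaround - try each address in turn and remember what failed - can be sketched as follows. The `connect_fn` parameter is a stand-in for a real connect call, and the cache policy (a bare set of failed addresses) is deliberately simplistic:

```python
# Sketch of the "cache of past connectivity results" workaround:
# try each address in turn, record failures, and demote recently
# failed addresses on the next attempt. connect_fn stands in for
# a real connect(); the cache policy here is an illustrative toy
# (real implementations expire entries, track latency, etc.).

failed = set()  # addresses that failed recently

def connect_any(addrs, connect_fn):
    # Prefer addresses with no recorded failure, then retry the rest.
    ordered = ([a for a in addrs if a not in failed]
               + [a for a in addrs if a in failed])
    for addr in ordered:
        try:
            return connect_fn(addr)
        except OSError:
            failed.add(addr)
    raise OSError("all addresses failed")
```

This is the host doing, badly and privately, the availability tracking that the routing system refuses to share with it.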
In practice multihomed services (services with multiple redundant links to
the public Internet) do not use any of the techniques described in RFCs as
host multihoming, and often use techniques that are contrary to the
architecture or outright protocol violations (e.g. Akamai's use of CNAME records).
Yep, that's exactly where things end up.