
Re: ietf.org end-to-end principle

2016-03-17 09:57:35
Hi,

But, before we draw too many conclusions, may I ask what constitutes an 
end-to-end solution in this space, and what does not? I may be dense today, 
but it isn't necessarily clear to me.

Well we'd first have to define what an "end" is :-)

As I read your list, I realised that pretty much any of those can be
called end-to-end - or not, depending on what an end is for you.

If the "end" is one particular host on the internet, mirrors are "many
ends" already.

That is either a concern or not, depending on whether the ends are all
controlled by one entity (so that it can guarantee content
synchronisation among them all) or by independent third parties.

Mirrors run by the same entity are all alike; it doesn't matter which
one you connect to.

Mirrors run independently are different, because they are controlled by
different entities and can be subject to manipulations.
FTP mirror operators realised that a long time ago, and repos typically
carry extra metadata to prevent de-synchronisation: package signatures,
MD5 sums on a different host for manual verification, ...
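That out-of-band verification step can be sketched roughly as follows: compute a digest of the downloaded package locally and compare it against one published on an independent host (the filenames and the digest algorithm here are illustrative assumptions, not any particular repo's scheme):

```python
import hashlib


def file_digest(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded package file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large packages don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path: str, published_digest: str) -> bool:
    """Compare against a digest fetched out-of-band from a different host.

    If the mirror we downloaded from was manipulated, the digests
    (obtained from an independent end) will not match.
    """
    return file_digest(path) == published_digest
```

The point is that the digest comes from a *different* end than the package itself, which is exactly the application-layer workaround for "one out of many ends" described above.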

At that point, something on the application layer is starting to work
around the fact that "the" end is actually just "one out of many" ends.

For "servers which are duplicated" and load balancers the same reasoning
applies. As soon as some of the ends are different from others, problems
start to arise if one needs to identify which end exactly one is talking to.

In the examples where you talk about "a server" which does various
things, that looks like a single end to me. The fact that it may load
remote content from other sites is just HTML&friends.

In fact, the browser world is very much used to the fact that a website
is not an end; it's a collection of many ends, glued together to form a
cohesive appearance. And I guess all of us know the complexities around
that concept: mixed content, cross-site scripting, the inability of end
users to identify the actual source... and yet, it works (somewhat,
depending on your personal definition of "works"). And if it is
well-known and works at the application layer, maybe we shouldn't
despise it at the lower layers. Yes, we'll have to bite the bullet of
much more complexity than we are currently comfortable with.
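The "one website, many ends" observation is easy to make concrete: walk a page's markup and collect the distinct origins it pulls resources from. A minimal sketch using only the standard library (the page content and origin names are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class OriginCollector(HTMLParser):
    """Collect the distinct origins (scheme://host) a page references."""

    def __init__(self):
        super().__init__()
        self.origins = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                u = urlparse(value)
                # Relative URLs have no scheme/netloc: they point back
                # at the same end and are not counted.
                if u.scheme and u.netloc:
                    self.origins.add(f"{u.scheme}://{u.netloc}")


page = """<html><body>
<img src="https://images.example.org/logo.png">
<script src="https://cdn.example.net/app.js"></script>
<a href="/local/page">internal link</a>
</body></html>"""

collector = OriginCollector()
collector.feed(page)
# collector.origins now holds the two remote origins: the "website"
# the user sees is assembled from several independent ends.
```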

The thing is: it's not like we have much choice.

"NAT is evil, there shall not be NAT!" said the IETF.
"Oh really?" said the NAT steam roller as he rolled over a pile of RFCs.

CDN steam rollers are following suit :-)


Which ones of the following practices are not end-to-end:

* a mirror
* a server that implements some (possibly dynamic) rules on what connection 
attempts are honoured
* collaboration between the routing system and servers on controlling dos 
attacks
* a server that has login or captcha procedures, run on the server
* a server that has login or captcha procedures, but they are implemented on a 
different entity where traffic is redirected as needed
* a server that is duplicated or copied in multiple instances
* server(s) residing on an any cast address
* arrangements where DNS or other mechanisms are used to distribute requests 
to the most suitable or geographically local point
* a server whose function is distributed to a number of nodes (such as a load 
balancer in front)
* arrangements where the server is run by a contracted party
* the concept of a CDN

(My quick reaction to all of the above is that these are still arrangements 
that are in the hands of the party that serves information; the emergence of 
these practices in the Internet is more about the scale of the services than 
about inserting NAT- or firewall like other parties on a path. But I could be 
wrong...)

Scale is about making something big. CDNs make things... different (and
enable making it big in the process). The difference shows at some spots
(TLS) but not others.

My definition of an end is probably (but this really needs much more
thought) that "one end" is something that is controlled by one entity
(for some things on any layer: IP hosts, HTML web pages, ...). As you
distribute control to more than one entity, you create multiple ends. If
you then need to identify one particular end out of the set for some
reason, things can get complicated.

I hope that the above actually holds water on many layers. Applying it
to anycast addresses feels ok: anycasting is nice if the entire set of
servers is under central control. If it's not, and one host out of the
set starts giving strange answers, or routing to one instance goes bad,
identifying the problem can become mighty complicated.
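A toy illustration of that failure mode, assuming we can somehow query each anycast instance individually (the instance names and answers below are invented): if the set is centrally controlled and in sync, every instance gives the same answer, and any divergent answer flags the instance worth investigating.

```python
from collections import Counter


def find_divergent(answers: dict[str, str]) -> set[str]:
    """Return the instances whose answer differs from the majority answer.

    answers maps an instance identifier to the response it gave for one
    and the same query.
    """
    majority, _count = Counter(answers.values()).most_common(1)[0]
    return {node for node, a in answers.items() if a != majority}


answers = {
    "pop-ams": "192.0.2.10",
    "pop-fra": "192.0.2.10",
    "pop-nyc": "192.0.2.99",  # stale or manipulated instance
}
```

The hard part in practice is not the comparison but the prerequisite: being able to address one specific instance behind a single anycast address at all, which is exactly the "identify one particular end out of the set" problem.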

If I'm talking rubbish, sorry for stealing everyone's time :-)

Greetings,

Stefan

-- 
Stefan WINTER
Ingenieur de Recherche
Fondation RESTENA - Réseau Téléinformatique de l'Education Nationale et
de la Recherche
2, avenue de l'Université
L-4365 Esch-sur-Alzette

Tel: +352 424409 1
Fax: +352 422473

PGP key updated to 4096 Bit RSA - I will encrypt all mails if the
recipient's key is known to me

http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xC0DE6A358A39DC66
