Just to add one more point to what Brian has raised:
The Overlay Pros&Cons discussion (the first bullet of 4.1) should mention that
the benefit of overlay is positively correlated with the ratio of
"intermediate nodes" to edge nodes (NVE).
In many of today's typical data centers, Access Switches are connected to the
DC Gateways either directly or via one layer of Aggregation Switches, i.e. if
NVEs are implemented on the Access Switches, the "Intra-DC network" in Figure 1
of the draft consists only of the Aggregation Switches. In this environment,
the only beneficiaries of the overlay are the Aggregation Switches. For data
centers with Access Switches connected directly to the DC Gateways, the
benefit disappears.
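To make the point concrete, here is a toy count (my own illustration, not from
the draft; the topology sizes are invented) of how many underlay devices are
relieved of per-tenant state in the two topologies described above, assuming
the NVEs sit on the Access Switches:

```python
# Toy model (illustrative only): count the underlay devices that benefit
# from the overlay, i.e. intermediate nodes that no longer need to hold
# per-tenant state. With NVEs on the Access Switches, the Access Switches
# and DC Gateways are tunnel endpoints, so only the layer(s) between them
# benefit.

def overlay_beneficiaries(access, aggregation, gateways):
    """Number of intermediate (non-NVE, non-gateway) underlay nodes."""
    return aggregation

# Three-tier DC: 48 access switches, 4 aggregation switches, 2 gateways.
print(overlay_beneficiaries(48, 4, 2))   # 4 switches benefit

# Two-tier DC: access switches connect directly to the gateways.
print(overlay_beneficiaries(48, 0, 2))   # 0 -- the benefit disappears
```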
Linda
-----Original Message-----
From: nvo3 [mailto:nvo3-bounces(_at_)ietf(_dot_)org] On Behalf Of Brian E
Carpenter
Sent: Saturday, May 24, 2014 3:46 PM
To: ietf(_at_)ietf(_dot_)org
Cc: nvo3(_at_)ietf(_dot_)org
Subject: Re: [nvo3] Last Call: <draft-ietf-nvo3-framework-06.txt> (Framework
for DC Network Virtualization) to Informational RFC
A few comments below. I can't help feeling that NVO3 is creating a monster,
however.
4.1. Pros & Cons
...
- Traffic carried over an overlay may not traverse firewalls and
NAT devices.
I don't know whether "may not" means "might not" or "must not", and that
completely determines what the sentence means. For example, does it mean this?
- Traffic carried over an overlay might fail to traverse firewalls and
NAT devices.
I suggest reviewing every instance of "may not" to avoid this ambiguity.
- Hash-based load balancing may not be optimal as the hash
algorithm may not work well due to the limited number of
combinations of tunnel source and destination addresses. Other
NVO3 mechanisms may use additional entropy information than
source and destination addresses.
Load balancing appears out of nowhere here. Are we supposed to assume that load
balancing is a requirement? Load balancing between what - between different
tenants, different physical DCs, different servers?
Also, there seems to be an assumption that load balancing is only based on
addresses. Actually it's usually based on ports as well, and more or less by
definition they are invisible to the underlay.
So it's worse than "may not work well".
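The port-invisibility point is easy to demonstrate with a toy ECMP model (my
own sketch, not from the draft): if the underlay hashes only on the outer
tunnel source/destination addresses, every inner flow between the same pair
of NVEs lands on the same link, whereas folding inner-flow entropy into the
outer header (e.g. the way VXLAN derives the outer UDP source port from a
hash of the inner headers) restores the spread:

```python
# Toy ECMP model (illustrative sketch): 1000 inner flows between the same
# two NVEs, load-balanced across 8 equal-cost links.
import random
from collections import Counter

NUM_LINKS = 8
nve_src, nve_dst = "10.0.0.1", "10.0.0.2"

random.seed(0)
inner_flows = [(random.randrange(1024, 65536),   # inner source port
                random.randrange(1024, 65536))   # inner destination port
               for _ in range(1000)]

# Case 1: the underlay hashes only the outer tunnel addresses.
# Every flow presents the identical tuple, so all land on one link.
naive = Counter(hash((nve_src, nve_dst)) % NUM_LINKS
                for _ in inner_flows)

# Case 2: the NVE copies per-flow entropy into the outer UDP source port
# (as VXLAN does), so the underlay hash finally sees it.
entropy = Counter(hash((nve_src, nve_dst, hash(flow) % 65536)) % NUM_LINKS
                  for flow in inner_flows)

print(len(naive), "link(s) used without entropy")   # 1
print(len(entropy), "link(s) used with entropy")    # spreads across links
```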
I would have expected QoS support to also appear as a challenge, for similar
reasons. Isn't giving tenants a fair share of the underlay capacity an issue?
(There's a mention of traffic engineering later, but surely you don't want this
issue to be handled by operators twiddling knobs?)
4.2.4. Path MTU
...
TCP will
adjust its maximum segment size accordingly.
And how will that work for non-TCP traffic?
It is also possible to rely on the NVE to perform segmentation and
reassembly operations without relying on the Tenant Systems to know
about the end-to-end MTU. The assumption is that some hardware
assist is available on the NVE node to perform such SAR operations.
However, fragmentation by the NVE can lead to performance and
congestion issues due to TCP dynamics and might require new
congestion avoidance mechanisms from the underlay network [FLOYD].
In a word: yuck. Surely you should be recommending against anything like that,
or any attempt to re-segment TCP on the fly.
Finally, the underlay network may be designed in such a way that the
MTU can accommodate the extra tunneling and possibly additional NVO3
header encapsulation overhead.
Surely you should be recommending this, which is by far the safest solution.
(And of course it should allow for the IPv6 minimum MTU.)
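For the record, the arithmetic behind "accommodate the extra overhead" is
simple. A sketch with my own numbers (the exact overhead depends on the NVO3
encapsulation finally chosen; a VXLAN-style header over an IPv6 underlay is
assumed here) for a 1500-byte tenant MTU:

```python
# MTU budget sketch (illustrative numbers; actual overhead depends on the
# encapsulation chosen).
TENANT_MTU   = 1500   # classic Ethernet payload the Tenant System expects
INNER_ETH    = 14     # inner Ethernet header carried inside the tunnel
NVO3_HEADER  = 8      # VXLAN-style NVO3 header (8 bytes)
OUTER_UDP    = 8      # outer UDP header
OUTER_IPV6   = 40     # outer IPv6 header (IPv6 minimum link MTU is 1280,
                      # which this budget comfortably exceeds)

required_underlay_mtu = (TENANT_MTU + INNER_ETH + NVO3_HEADER
                         + OUTER_UDP + OUTER_IPV6)
print(required_underlay_mtu)  # 1570 -- so e.g. a 1600-byte underlay MTU is safe
```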
7. References
...
[NVOPS] Narten, T. et al, "Problem Statement : Overlays for Network
Virtualization", draft-narten-nvo3-overlay-problem-
statement (work in progress)
Nit: that draft was replaced a long time ago by
http://tools.ietf.org/html/draft-ietf-nvo3-overlay-problem-statement
(which is already in the RFC Editor queue).
Brian
_______________________________________________
nvo3 mailing list
nvo3(_at_)ietf(_dot_)org
https://www.ietf.org/mailman/listinfo/nvo3