> From: John Scudder <jgs(_at_)juniper(_dot_)net>
> On Mar 13, 2012, at 3:04 PM, Joel M. Halpern wrote:
>> It may take engineering and evaluating some cache management
> Isn't it relevant to the architecture document, that it be possible
> for a reader to judge whether the architecture is a good one or not?
Yes and no.
At the _architectural_ level, the details of the algorithms are not
generally visible - and, more importantly, as explained below, in a 'good'
architecture, it's _designed_ so that algorithms can be changed. (As
Corbato so neatly put it in his Multics design paper, without the ability
to change course, one has a boat without a rudder.) So in one sense, no,
it's not an 'architectural' issue.
However, an architecture might be a failure if there is _no possible_
algorithm which can perform a certain required function. But that
condition is awfully hard to ascertain (it's hard to prove a negative).
For some insight into this, look at TCP: _could_ you say, looking at the
TCP specs circa 1977, that it was "[architecturally] a good [design] or
not"?
Even more importantly, was it right to go off and build the Internet, with
the retransmission/etc algorithms we had in TCP in the late 1970s?
(Especially given that the retransmission algorithms later turned out to
be colossally wrong, leading to congestive collapse of the network about 5
years later.)
Sometimes you just gotta go build it and see what happens, and see if
smart people can figure out ways around whatever issues crop up. Not being
able to give an absolute, cast-iron proof, a priori, that something is
feasible is not a reasonable requirement: had that bar been in place in the
late 1970s, we'd never have built the Internet.
_At an architectural level_, TCP did have one thing going for it with
regard to its algorithms: the retransmission, window management, etc,
algorithms could be replaced piecemeal (i.e. we did not need a
co-ordinated flag day). That really saved our bacon when it turned out
they had issues (and not just with retransmission - I actually remember
Silly Window Syndrome).
The same thing is true of the cache management algorithm(s) in the xTRs.
There's no requirement that everyone run the same one, and if the current
cache management algorithms turn out to have issues, it will be no more
hassle to change them than it was to change TCP's retransmission algorithm.
So, at the _architectural level_, LISP's cache management does have that
desirable property: we can experiment with new ones, and deploy better
ones, whenever we want. Also, some sites might have load patterns that are
different from others, so that we might want to run different cache
management algorithms in different places. Again, _at an architectural
level_, this turns out to be trivial to do.
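The pluggability argued for above can be sketched in a few lines. This is purely illustrative - the names (MapCache, LRUPolicy, etc.) are hypothetical, not taken from any actual xTR implementation - but it shows the architectural point: the map-cache's eviction policy is a replaceable component, so one site can run LRU, another something smarter, and either can be swapped out later without touching the cache machinery or coordinating with anyone else.

```python
from collections import OrderedDict

class EvictionPolicy:
    """Interface an xTR map-cache needs from any eviction policy.
    (Hypothetical sketch, not from any LISP spec or implementation.)"""
    def touch(self, key): ...    # note that a key was just used
    def victim(self): ...        # pick which key to evict
    def forget(self, key): ...   # key was removed from the cache

class LRUPolicy(EvictionPolicy):
    """One possible policy: evict the least-recently-used EID-prefix."""
    def __init__(self):
        self._order = OrderedDict()
    def touch(self, key):
        self._order.pop(key, None)
        self._order[key] = True          # most-recently-used goes last
    def victim(self):
        return next(iter(self._order))   # least-recently-used is first
    def forget(self, key):
        self._order.pop(key, None)

class MapCache:
    """EID-prefix -> RLOC-set cache with a swappable eviction policy."""
    def __init__(self, capacity, policy):
        self.capacity = capacity
        self.policy = policy             # the pluggable part
        self._entries = {}
    def lookup(self, eid):
        if eid in self._entries:
            self.policy.touch(eid)
            return self._entries[eid]
        return None                      # miss: caller would Map-Request
    def insert(self, eid, rlocs):
        if len(self._entries) >= self.capacity and eid not in self._entries:
            loser = self.policy.victim()
            del self._entries[loser]
            self.policy.forget(loser)
        self._entries[eid] = rlocs
        self.policy.touch(eid)

# One site's configuration; another site could pass a different policy.
cache = MapCache(capacity=2, policy=LRUPolicy())
cache.insert("10.0.0.0/8", ["rloc-a"])
cache.insert("10.1.0.0/16", ["rloc-b"])
cache.lookup("10.0.0.0/8")               # refresh; 10.1.0.0/16 is now LRU
cache.insert("10.2.0.0/16", ["rloc-c"])  # evicts 10.1.0.0/16
```

The point is that MapCache never looks inside the policy; changing the eviction algorithm is a local, per-site decision, which is exactly the "no flag day" property that saved TCP.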