ietf-openproxy

Protocol Performance

2002-06-25 16:34:50


Hi,
It's been a while since I could follow this group's activities (due to a company
transfer). I am trying to catch up, so sorry if my comments/questions have
already been addressed.

I have two questions:
1. With regard to the email below, which I found in the archive, has there been
any discussion (or are there any recent results) on the performance evaluation
of an edge module with and without ICAP deployment?
In draft-stecher-opes-icap-eval-00.txt, section 4.1 only indicates that the
performance requirements are met; is there a more comprehensive document
somewhere?
2. Is it safe to say that an L7 switch (web switch, e.g. a Nortel Alteon) is a
simplified OPES device (since ICAP is not a requirement)? If not, why?

Minor typos & comments wrt the latest drafts:
*draft-ietf-opes-scenarios-00.txt, section 4, 4th bullet, 1st sentence: the
phrase "but in the content" appears twice.
*draft-ietf-opes-architecture-02.txt:
--minor typo: first page, the first author's name is followed by a period
--section 2.1, 3rd paragraph, "..opes architecture is largely independent..":
what does "largely" mean here? Is it fully independent, or are there cases
where it is not; if so, which ones?
--section 2.1.1, 1st paragraph, "..compiled from several sources..": for a
first-time reader, "sources" might not be clear.
--section 2.3, 1st paragraph, "..in this model, all data filters are invoked
for all data..": for a first-time reader, "data filters" might not be clear.

gamze


-----------------------------------------------------------------
Gamze Seckin, Ph.D.
Research Engineer: Media Networking
Hutchison Mediator (US), Inc.

---------------------------------------------------------------------------------------------------------------------------------------------

To: "Rahman, Rezaur" <rezaur(_dot_)rahman(_at_)intel(_dot_)com>,
 "ietf-openproxy(_at_)imc(_dot_)org" <ietf-openproxy(_at_)imc(_dot_)org>,
 Frank Berzau <frank(_at_)webwasher(_dot_)com>
Subject: Re: Protocol Performance
From: Volker Hilt <vhilt(_at_)dnrc(_dot_)bell-labs(_dot_)com>
Date: Tue, 24 Jul 2001 09:30:40 -0400
List-archive: <http://www.imc.org/ietf-openproxy/mail-archive/>

> I haven't heard of anyone doing any performance analysis between SOAP and
> ICAP. I think your work will help the openproxy group choose the right
> protocol. I would suggest you post the performance comparison methodology
> you are planning to use to this mailing list, so that the group can comment
> on its applicability to the OPES framework.

Our main goal is to measure the following two aspects of both protocols:
- additional bandwidth introduced by the protocol 
  (compared to the original message size) 
- CPU cycles needed for protocol processing (e.g. for
    * the creation of headers,
    * message marshaling,
    * transferring data to/from network,
    * ...)
These aspects seem to be most significant during the run-time of a
remote callout protocol, in particular if a large number of requests
have to be processed. (We're not planning to look into non-run-time
aspects such as design issues, ease of implementation, etc.)
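
As a rough illustration of the first metric, here is a Python sketch that
computes the per-message byte overhead. The example request and the
ICAP-style wrapper are made up for illustration; the wrapper is deliberately
simplified and is not an exact RFC 3507 encoding.

def wrap_icap_style(http_message: bytes) -> bytes:
    """Wrap a raw HTTP message in a simplified, ICAP-style callout request."""
    icap_headers = (
        b"REQMOD icap://callout.example.com/service ICAP/1.0\r\n"
        b"Host: callout.example.com\r\n"
        b"Encapsulated: req-hdr=0, null-body=%d\r\n\r\n" % len(http_message)
    )
    return icap_headers + http_message

def overhead_ratio(original: bytes, wrapped: bytes) -> float:
    """Additional bytes introduced by the protocol, relative to the original."""
    return (len(wrapped) - len(original)) / len(original)

http_request = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"Accept: text/html\r\n\r\n"
)
wrapped = wrap_icap_style(http_request)
print(f"per-message overhead: {overhead_ratio(http_request, wrapped):.1%}")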

The evaluation setup we're currently thinking of is based on measurements of
existing protocol implementations. We will generate fixed HTTP requests and
responses and feed them into the client-side implementations of the protocol
stacks. We will not include the message parser and the rule processor in the
performance measurements. Instead, we are planning to feed the prepared
messages directly into the interface of the protocol itself. This excludes
all non-protocol-related effects from the measurements (even if they would be
the same for both protocols). On the server side we're planning to use a
dummy service that simply returns the message it has received from the client.
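
As a schematic stand-in for that setup, the Python sketch below runs a dummy
echo service and pushes one prepared HTTP request through it. In the actual
measurements the prepared messages would be handed to the real ICAP/SOAP
client stacks; the plain socket client, host, and port here are placeholders.

import socket
import threading

HOST, PORT = "127.0.0.1", 11344  # arbitrary values for this sketch
ready = threading.Event()

def dummy_echo_server() -> None:
    """Dummy service: return the message exactly as received."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                   # signal that the server is accepting
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(65536)   # one small prepared message per connection
            conn.sendall(data)        # echo it back unchanged

threading.Thread(target=dummy_echo_server, daemon=True).start()
ready.wait()

prepared_message = b"GET /index.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(prepared_message)
    echoed = cli.recv(65536)
assert echoed == prepared_message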

The measurements themselves will be taken by introducing checkpoints into
the protocol code. This should enable us to measure bandwidth as well as the
required CPU cycles at a fine-grained level.
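
Something along these lines, for example (a hypothetical checkpoint helper in
Python; the phase names and the placement of the marks are invented for
illustration):

import time

class Checkpoints:
    """Record wall-clock and CPU time at named points in the protocol code."""

    def __init__(self) -> None:
        self.samples = []  # list of (name, wall_ns, cpu_ns) tuples

    def mark(self, name: str) -> None:
        self.samples.append((name, time.perf_counter_ns(), time.process_time_ns()))

    def report(self) -> None:
        for (n0, w0, c0), (n1, w1, c1) in zip(self.samples, self.samples[1:]):
            print(f"{n0} -> {n1}: wall {(w1 - w0) / 1e6:.3f} ms, "
                  f"cpu {(c1 - c0) / 1e6:.3f} ms")

cp = Checkpoints()
cp.mark("start")
# ... create the protocol headers here ...
cp.mark("headers_created")
# ... marshal the message here ...
cp.mark("marshaled")
# ... send to the callout server and receive the echoed message here ...
cp.mark("received")
cp.report()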

An implication of evaluating performance with existing implementations is
that the quality of the implementation itself has an impact on the result of
the analysis. To minimize this effect, we are trying to pick two
implementations that are reasonably comparable. But even if the
implementations differ, the analysis will still provide accurate measures
of the fractions of time that are spent on header processing, message
marshaling, and sending/receiving data.

> I would also like to see a comparison between a caching box
> running with and without ICAP, or with and without SOAP (if any
> exists yet): kind of an ICAP bake-off. If we had an ICAP server
> that did no real processing but just sent the response back to the
> caching box, plus a load-balanced setup so that the server does not
> become the bottleneck, that should give us an idea of the
> performance impact of enabling ICAP.
>
> Would that be in the scope of what you're planning?

Currently we're not planning to do such measurements. However, our
analysis will show the costs of using a callout protocol, which seems to
be an important component in determining the break-even point at which
the use of a remote callout server becomes preferable to local service
execution.
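
Purely to illustrate that break-even idea, a back-of-the-envelope comparison
in Python; all of the figures are assumed, not measured:

# Illustrative only: every number below is an assumption, not a measurement.
callout_overhead_ms = 0.4   # per-request cost of the callout protocol (assumed)
local_exec_ms = 1.5         # service executed locally on the proxy (assumed)
remote_exec_ms = 0.9        # same service on a dedicated callout server (assumed)

# Remote execution pays off once its total per-request cost drops below
# the local execution time.
remote_total_ms = callout_overhead_ms + remote_exec_ms
print("remote callout preferable" if remote_total_ms < local_exec_ms
      else "local execution preferable")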

Volker

