
Re: [Asrg] SICS

2004-12-23 19:16:28
On Dec 23 2004, Barry Shein wrote:

> On December 23, 2004 at 16:04 laird(_at_)lbreyer(_dot_)com (Laird Breyer) wrote:
> > BTW, the proxy scenario I gave above isn't intended to be a ready
> > solution; rather, it's an argument to prop up my claim that the
> > invalid requests can be handled scalably.

> All I get from it is that an in-memory hash or other fast lookup
> algorithm, if it's not being used already, might speed up response
> time assuming that your model improves on the resources which are
> actually strapped.

I'm sure this sort of thing is being used already where needed.
However, it addresses scalability because each handled request uses up
a fixed amount of resources (bounded number of processing
instructions, fixed memory, I/O channels) independent of the number of
users or the number of queries. So there's not going to be a slowdown
with size. Moreover, it's clearly a parallelizable concept.
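
To make the fixed-cost point concrete, here's a minimal sketch in
Python (the names and file format are purely illustrative, not any
particular MTA's interface):

    # Minimal sketch: recipient validation against an in-memory set.
    VALID_RECIPIENTS = set()

    def load_recipients(path):
        """Load one address per line into the in-memory set."""
        with open(path) as f:
            for line in f:
                addr = line.strip().lower()
                if addr:
                    VALID_RECIPIENTS.add(addr)

    def is_valid_recipient(addr):
        # Average-case O(1): the cost per query grows with neither the
        # number of users nor the number of queries already served.
        return addr.strip().lower() in VALID_RECIPIENTS

Each lookup touches a bounded amount of memory and CPU, and the same
table can be replicated verbatim on as many front-end machines as you
like, which is what I mean by parallelizable.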


> That doesn't address scalability, it just shifts the knee of the curve
> much like a faster CPU or more memory might.

Yes, it shifts the burden of spawning full-fledged SMTP sessions onto
the subset of valid requests. The question is whether the resulting
cost of handling an invalid request is insignificant enough, with
off-the-shelf hardware, to allow orders of magnitude more requests
than your current full-fledged SMTP setup. I believe it is in this case.
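
To sketch the idea (this is only an illustration, not a production
MTA; the address, port and replies are made up), a cheap front end can
answer the envelope phase itself and hand only valid recipients on to
the real server:

    # Envelope-only front end: rejects unknown recipients with a 550
    # before any full SMTP session is spawned.  A real deployment would
    # relay accepted envelopes to the actual MTA instead of just saying 250.
    import asyncio

    VALID_RECIPIENTS = {"laird@example.org"}   # stand-in for the user list

    async def handle(reader, writer):
        writer.write(b"220 frontend ready\r\n")
        while True:
            line = (await reader.readline()).decode(errors="replace").strip()
            if not line or line.upper().startswith("QUIT"):
                writer.write(b"221 bye\r\n")
                break
            if line.upper().startswith("RCPT TO:"):
                addr = line.split(":", 1)[1].strip(" <>").lower()
                if addr in VALID_RECIPIENTS:
                    writer.write(b"250 ok\r\n")            # hand off to the real MTA
                else:
                    writer.write(b"550 no such user\r\n")  # constant-cost rejection
            else:
                writer.write(b"250 ok\r\n")                # HELO/MAIL/etc. are cheap
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 2525)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

Apart from the set lookup, every request walks through the same
handful of instructions, which is why I claim the per-request cost is
bounded no matter who is hammering the port.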

So we don't need to treat invalid requests as being as difficult to
address as spam sent to valid users, which is all I'm trying to
convince you of. Put differently, so long as (invalid user) requests
stay within a constant couple of orders of magnitude of the (valid
user) requests, they are a pure optimization problem. If, however, the
invalid requests grow exponentially compared to the valid requests,
the above won't work.
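
As a back-of-envelope illustration (the numbers are invented for the
sake of argument, not measured): if rejecting an invalid recipient
costs about 1/1000 of the resources of a full SMTP session, then even
100 invalid requests per valid one add only 100 * 0.001 = 0.1
session-equivalents of load per valid session, i.e. roughly a 10%
overhead. A constant ratio like that is absorbed by a constant amount
of extra hardware; an exponentially growing ratio is not.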


> Scalability has to address:
>
> a) The critical resource(s)
> b) Permutational effects
>
> Where (b) is often subtle and significant. For example, if you use N
> servers you have to load balance between them. How does that
> decision-making scale as N increases? It's often somewhere between
> O(N^2) and O(N!).

Come on, for practical purposes you're not looking for the global
optimum, just a local optimum, so the big O isn't as bad as you suggest.

The load involved in screening out invalid-user SMTP requests is going
to be purely I/O bound, since the picture I painted clearly won't tax
the CPU. Each system can handle only so many simultaneous socket
connections, and each such connection is going to cost a nearly
identical amount of resources to any other connection.

So I doubt you'll need complex load balancing. Just pick a server at
random for each request. This is different from full-fledged SMTP
because with SMTP, the total resources needed vary according to the
input, so the resultant load varies too. You will still need to
load-balance your real SMTP servers.
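
In code the "load balancing" I have in mind is nothing more than this
(illustrative only; the host names are made up):

    # With identical per-request cost on every front end, uniform random
    # assignment keeps the expected load even without any coordination
    # between the servers.
    import random

    FRONT_ENDS = ["fe1.example.org", "fe2.example.org", "fe3.example.org"]

    def pick_front_end():
        return random.choice(FRONT_ENDS)

Because every front end does the same bounded amount of work per
connection, random assignment keeps them evenly loaded on average,
with no state to synchronize; adding the (N+1)th machine costs no more
decision-making than adding the second.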


> At any rate, as fascinating as some might find it, I sincerely don't
> believe the problem with spam, even at the ingress, is going to be
> solved or even ameliorated for very long by some improvement in
> recipient validity processing.
>
> It's like someone breaking your windows unchallenged and someone says,
> "I know, let's find cheaper ways to make window panes!"
>
> It helps no doubt but somehow misses the point, including whether the
> cost is in the panes or installing the panes, etc.

Well, if you're going to start with window analogies, I've got one too ;-) 

You're saying we can't just worry about someone breaking the windows,
we also need to worry about those who try and fail, because they leave
scratches. I'm saying make your windows scratch-proof (cheaply) and then
only bother with the people who actually break the windows. There are
enough window-breakers to keep us occupied.

-- 
Laird Breyer.

_______________________________________________
Asrg mailing list
Asrg(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/asrg

