
Re: Naive question on multiple TCP/IP channels

2015-02-04 15:24:04
On Wed, Feb 4, 2015 at 4:00 PM, Christopher Morrow 
<morrowc(_dot_)lists(_at_)gmail(_dot_)com>
wrote:

On Wed, Feb 4, 2015 at 3:47 PM, Phillip Hallam-Baker
<phill(_at_)hallambaker(_dot_)com> wrote:

I know that traditionally we have considered congestion control to be a
per-connection thing rather than per process or per user. Those don't exactly
exist in the Internet model. But if we consider DDoS to be an extreme form
of congestion, we have to look at the O/S implementing broader rate
limiting anyway.

I'm not sure this scales in a useful manner...especially if you want
(for instance) to be able to read from your cloud/nas/etc at 'line
rate' but then limit access rates to other places because of 'dos'
concerns.


No, it is not simple at all!

At a gut level, there is something wrong if a node is generating vast numbers
of SYN messages and never doing anything more. But only looking for SYN floods
at the network border means the attackers will send some data as well.

Working at the endpoint is a lot simpler, but then we rely on the endpoint
not being compromised, which is a bad assumption for a machine doing DDoS
attacks.
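As a sketch of what endpoint-side rate limiting might look like, a token
bucket on outbound SYNs is one option. The rate and burst numbers below are
purely illustrative, not taken from any deployed O/S policy:

```python
import time

class TokenBucket:
    """Toy token bucket for rate-limiting outbound SYNs at an endpoint.

    The rate/burst values are hypothetical, chosen only to illustrate
    the mechanism.
    """
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, then spend one token per SYN.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=10, burst=20)
results = [bucket.allow() for _ in range(100)]
# Roughly the first 20 SYNs pass on the burst allowance; the rest are throttled.
```

A legitimate host rarely notices such a cap, while a bot spraying SYNs hits
it immediately — which is exactly the asymmetry an egress control wants.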


The reason I am doing this work is to try to work out what controls to place
where. Traditionally a firewall has been seen as a device that protects the
local network from attacks coming in. But preventing outbound attacks is
equally important. Nobody is going to bother attacking Comcast's customers if
all their residential gateways have egress controls that make them
useless as bots. Conversely, nobody is going to pay for Comcast service
if the egress controls make it useless for browsing etc.

Policy for that is not going to be clear, or simple, or reasonable.

If the case for multiple streams is better performance based on friendlier
slow start parameters, maybe these should be allowed without the end run.
If the net is surviving with people starting five streams instead of one,
maybe the slow start parameters could start at five packets per destination
host instead of one per connection. It would be a little more code to
implement, but speed is hardly an issue where its purpose is throttling
anyway.
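To make the arithmetic concrete: here is a toy calculation showing how N
parallel streams multiply the effective initial window unless the allowance
is accounted per destination host. The IW value and the cap are illustrative
numbers, not any standard's recommendation:

```python
# Illustrative per-connection initial window, in packets (made-up value).
IW = 3

def effective_iw(num_connections, per_host_cap=None):
    """Total packets a sender may have in flight to one destination at start.

    With per-connection accounting the total scales with the number of
    connections; a per-destination-host cap removes that incentive.
    """
    total = num_connections * IW
    if per_host_cap is not None:
        total = min(total, per_host_cap)
    return total

print(effective_iw(1))                  # one connection: 3 packets
print(effective_iw(5))                  # five parallel connections: 15 packets
print(effective_iw(5, per_host_cap=5))  # per-destination cap: 5 packets
```

Under per-host accounting, opening five streams buys no extra burst at
startup, which is the point of moving the parameter from the connection to
the destination host.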

More connections means you may avoid the 'random early drop' problem
for some of your connections, right? Presuming somewhere between
src/dest there's a thinner pipe (or more full pipe) and RED/etc is
employed (or just queue drops in general), putting all your eggs in
one connection basket is less helpful than 5 connection baskets.
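A toy probability model makes the eggs-in-baskets point, under the (strong)
assumption that drops hit each connection independently; the value of p is
made up:

```python
# If a queue drop stalls a connection with probability p, one connection
# stalls all your traffic with probability p, while with five independent
# connections everything stalls only if all five are hit at once.
p = 0.2
one_conn_stall = p            # the single basket is hit
five_conn_stall = p ** 5      # all five baskets hit: roughly 0.00032
```

Real drops at a shared bottleneck are correlated, so the advantage is smaller
in practice, but the direction of the effect is the same.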


Hmm, again this sounds like gaming the congestion parameters rather than
an argument for a particular approach.



One connection (one traditional http/tcp connection) also means object
serialization gets you as well. (I think roberto alludes to this
above)


There are several necessary functions. One of them is MUX and another is
serializing data from API calls. The question is which function belongs at
which layer.
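For concreteness, a minimal MUX sketch: several logical streams share one
connection by wrapping each chunk in a length-prefixed frame carrying a
stream id. The wire format here is invented for illustration, not any real
protocol's framing:

```python
import struct

def mux(stream_id: int, payload: bytes) -> bytes:
    # Frame header: 2-byte stream id, 4-byte payload length (network order).
    return struct.pack("!HI", stream_id, len(payload)) + payload

def demux(buf: bytes):
    """Yield (stream_id, payload) frames from a concatenated byte stream."""
    off = 0
    while off < len(buf):
        sid, length = struct.unpack_from("!HI", buf, off)
        off += 6
        yield sid, buf[off:off + length]
        off += length

# Two logical streams interleaved over one byte stream.
wire = mux(1, b"hello") + mux(2, b"world") + mux(1, b"!")
frames = list(demux(wire))
# [(1, b'hello'), (2, b'world'), (1, b'!')]
```

Note the layering question this exposes: MUX at this level shares one
congestion context across streams, whereas five TCP connections get five —
which is exactly the end run discussed above.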