Do you really think that yet another transport protocol can be
designed, built, and deployed in the world-wide Internet any quicker?
Even considering a move to IPv6?
Well, I think it may require quite a few years to migrate to IPv6. So maybe
there's time to prepare a transport protocol.
It will require a good 2 years or so to get the specification done,
then you have bakeoffs (with respects to Pillsbury :->) and of
course you must rally support so that it gets implemented so you
have enough entrants for the bakeoff :)
[ Well, I'm managing to rally support somewhere:-) ]
I don't see how you can propose a new protocol and get it out
any quicker than you will see SCTP deployed...
As far as I can perceive, SCTP is by now the optimum in the IPv4
internetworking environment. But it was constrained by the incumbent
Internet; for example, it could not adequately assume deployment of ECN.
But I have to admit that nothing prevents SCTP from exploiting ECN.
[I noted that SCTP has, for example, reserved chunk IDs for 'Explicit
Congestion Notification Echo' and 'Congestion Window Reduced'.]
Since SCTP has to compete with TCP in the IPv4 internetworking
environment (well, this argument seems just an assumption), I don't
think it will be widely deployed in the incumbent Internet.
If the 'Interface with Upper Layer' were to provide downward
compatibility with TCP (as ATP intends to), SCTP might well be
deployed more quickly.
I think XTP is inefficient. SCTP is a functional superset of TCP,
RTP+UDP and RTSP+UDP in the unicast scenario.
Yes, SCTP shares a lot in common with TCP, but there are some differences.
No, the problem was that we could not mandate that SCTP use ECN because
ECN was an Experimental RFC. As such we had to stand on our heads
to get it in without a normative reference. I think you
will find that in the bis we will require an SCTP implementation
to have ECN.
Even at that, you can NOT rely solely on ECN, since it is NOT fully
deployed in the Internet. You must keep treating packet loss as a
signal of congestion.
Even if ECN were magically deployed everywhere tomorrow, you still
have a problem. What happens when a router gets a BURST of traffic
that exceeds its buffers? It MUST DROP packets if it runs out
of buffer space. A lost packet needs to be interpreted as a congestion
signal.
ATP bets on wide deployment of ECN in the next-generation Internet. I
support ECN so strongly that I believe it should be mandatory in the
IPv6 environment. And I can't deny that the bet might fail.
If a burst of traffic causes a router to drop packets, maybe it is
better not to feed back the congestion, for quite likely the congestion
will disappear after the burst, before the feedback makes the sender
lower its transmit rate. The router may choose to randomly set the ECN
flag on only a small fraction of packets, and the receiver may choose
to ignore bursty ECN signals. It is not prohibitively hard to detect
the burstiness.
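The random-marking idea above resembles RED-style probabilistic
marking. Here is a minimal sketch under that assumption; the
thresholds and the names `mark_probability` and `enqueue` are my own
illustrative choices, not from any spec:

```python
import random

# Illustrative sketch: a router that, as its queue grows, sets the
# ECN CE flag on only a random fraction of packets rather than
# marking everything; a short burst then produces only a few marks.
random.seed(1)

def mark_probability(queue_len, min_th=5, max_th=15, max_p=0.1):
    """RED-style marking probability as the queue grows."""
    if queue_len < min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    return max_p * (queue_len - min_th) / (max_th - min_th)

def enqueue(queue_len, packet):
    # Mark (rather than drop) with a probability that tracks queue depth.
    if random.random() < mark_probability(queue_len):
        packet["ce"] = True  # ECN Congestion Experienced
    return packet

# A burst briefly fills the queue; only some packets get marked.
marked = sum(enqueue(q, {"ce": False})["ce"] for q in range(20))
print(marked)
```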
I think there may be other ways of detecting a corrupted
packet that could be employed... TCP HACK and similar schemes... but
there still needs to be more research...
I think it implies that we all agree it is important to differentiate
loss due to congestion from loss due to link-layer error.
We discussed this in sigtran... question? How do you know (from the
receiver side) that you are missing a packet?
You setup a connection with me.
Send me something.
You send something else, making an additional request, one I
am unaware you are going to make. How do
I know to ask you for a retransmission when that packet is lost?
Send me something (with packet sequence number N).
You send something else, unfortunately it is lost.
I (the receiver) time out. I send (and may resend) you a NAK:
"Hi! I haven't got your message for quite a long time (since I got the
previous packet, sequence number N). What's the matter?"
[ In ATP it's more like this:
"Hi! I haven't got your message with sequence number N+1 for quite a
long time. Is it lost?"
And it may act as the keep-alive signal as well.]
I mean, no, the receiver alone can never know whether a packet is lost
or not. But the cooperation between the receiver and the sender can.
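The exchange above might be sketched as follows. All names are
illustrative and the 0.5-second NAK timeout is an arbitrary
assumption; the point is that a sequence-number gap plus prolonged
silence is what lets the receiver ask "is N+1 lost?":

```python
import time

# Sketch of the receiver-timeout NAK idea from the thread. The
# receiver alone cannot know a packet was lost, but silence beyond a
# threshold lets it NAK the next expected sequence number.

class Receiver:
    def __init__(self, nak_timeout=0.5):
        self.highest_seq = 0                  # highest in-order seq seen
        self.last_arrival = time.monotonic()
        self.nak_timeout = nak_timeout

    def on_packet(self, seq):
        self.last_arrival = time.monotonic()
        if seq == self.highest_seq + 1:
            self.highest_seq = seq

    def poll(self, now=None):
        """Return a NAK for highest_seq + 1 if we've waited too long."""
        now = time.monotonic() if now is None else now
        if now - self.last_arrival > self.nak_timeout:
            return ("NAK", self.highest_seq + 1)  # may be resent later
        return None

rx = Receiver(nak_timeout=0.5)
rx.on_packet(1)            # "Send me something (sequence number N)."
# ...packet 2 is lost; nothing arrives for a while...
print(rx.poll(now=rx.last_arrival + 1.0))   # ('NAK', 2)
```

The same periodic poll can double as the keep-alive mentioned in the
ATP note, since it fires exactly when the line has gone quiet.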
But why should the receiver time out instead of the sender? Because of
the upper-layer application: in ATP the receiver may detect upper-layer
application congestion (in a common multi-threaded host system) and
deliberately deny a packet it has received but not yet accepted. By
sending back an explicit congestion feedback and a negative
acknowledgement, the receiver makes the sender both slow down and
retransmit the unaccepted packet as soon as possible. The sender waits
one round-trip time; this is expected to be faster than letting the
sender time out.
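A rough sketch of that receiver-driven feedback, with invented names
and a made-up application-queue limit (none of this is real ATP code):

```python
# Sketch: when the upper layer is congested, the receiver discards a
# delivered-but-unaccepted packet and returns combined congestion
# feedback + NAK, so the sender slows down AND retransmits after one
# round trip instead of waiting for its own retransmission timeout.

def receiver_handle(seq, app_queue, app_limit):
    if len(app_queue) >= app_limit:
        # Application can't keep up: deny the packet, signal both facts.
        return {"congestion": True, "nak": seq}
    app_queue.append(seq)
    return {"congestion": False, "ack": seq}

def sender_handle(feedback, cwnd):
    if feedback.get("congestion"):
        cwnd = max(cwnd // 2, 1)          # slow down
    if "nak" in feedback:
        return cwnd, feedback["nak"]      # retransmit this seq now
    return cwnd, None

queue = [10, 11, 12]                      # app already has a backlog
fb = receiver_handle(13, queue, app_limit=3)
cwnd, rtx = sender_handle(fb, cwnd=8)
print(cwnd, rtx)                          # 4 13
```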
4. SCTP supports multiple streams in a single connection.
I cannot clearly perceive the necessity.
Look at head of line blocking. This will explain the
item you are missing. A telephone example works best
but it can also be applied to HTTP or many other apps.
I have a connection carrying control information on
20 phone calls: setup, teardown, additional info, etc.
Now the first packet (a setup on call 1) is lost, but the
setups for calls 2, 3, and 4 (in subsequent packets) are
received at the server. The TCP connection will hold
these packets awaiting the retransmission of call 1's packet.
In SCTP you would use separate streams for each of the
calls (or some subset), and thus calls 2, 3, and 4 would
not be held up.
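The head-of-line blocking example can be illustrated with a toy
delivery model. This is purely illustrative; `deliverable` and the
stream names are mine, not SCTP API calls:

```python
# One ordered stream vs. one stream per call: losing call 1's setup
# stalls everything in the first case, only call 1 in the second.

def deliverable(packets, lost):
    """Return messages the receiver can hand to the app, preserving
    order within each stream: a lost packet blocks only its own stream."""
    blocked = set()
    out = []
    for stream, msg in packets:
        if (stream, msg) in lost:
            blocked.add(stream)
        elif stream not in blocked:
            out.append(msg)
    return out

# TCP-like: all four call setups share one ordered stream "s".
setups = [("s", "setup-1"), ("s", "setup-2"), ("s", "setup-3"), ("s", "setup-4")]
print(deliverable(setups, {("s", "setup-1")}))
# []  -- one stream: calls 2-4 wait for call 1's retransmission

# SCTP-like: one stream per call.
per_call = [("c1", "setup-1"), ("c2", "setup-2"), ("c3", "setup-3"), ("c4", "setup-4")]
print(deliverable(per_call, {("c1", "setup-1")}))
# ['setup-2', 'setup-3', 'setup-4']  -- only call 1 is held up
```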
HTTP has a similar problem when downloading jpeg's and