
Re: [Asrg] About that e-postage draft [POSTAGE]

2009-02-22 06:53:35
On 2009-02-21 19:07:09 -0500, Bill Cole wrote:
Steve Atkins wrote, On 2/20/09 7:26 PM:
On Feb 20, 2009, at 3:25 PM, Bill Cole wrote:
[Quoting John Leslie]
 The bottom line is, redeeming a million tokens per second is practical
with processing delay not much greater than network latency. (This was
not true ten years ago...)

I think I'd quibble with details on that, but they really are not all that
important.

I guess maybe one detail is...

If the processing delay for every redemption attempt is of the same order 
of magnitude as irreducible network latencies, i.e. tens to hundreds of 
milliseconds, handling a million one-use token redemption attempts per 
second is absolutely hopeless.

I think John meant "processing delay" as seen from the client. So this
includes a) the time to send the request (tens to hundreds of
milliseconds), b) the time it takes the server to process the request
(sub-milliseconds?) and c) the time for the response to get back to the
client (again tens to hundreds of milliseconds). 

The number of requests a server can process per second is mostly
determined by b). The communication delays only increase the number of
open but idle connections (which may also be a problem).

So if checking a single token takes 1 millisecond (rather conservative,
IMHO), one server can check roughly 1000 tokens per second, so a server
farm of 1000 servers can check 1 million tokens per second. That doesn't
seem "absolutely hopeless" to me. It's certainly technically possible,
although I'm sceptical whether it's economically feasible.
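
To put rough numbers on that: the latencies a) and c) only determine how many
requests are in flight at once, while b) determines how many servers are
needed. A quick sketch with the figures above (the 200 ms round trip and the
1 ms per-token cost are just assumed values):

    # Back-of-the-envelope numbers; the per-token cost and the round trip
    # time are assumptions, not measurements.
    target_rate = 1_000_000      # redemption requests per second, total
    per_token_check = 0.001      # seconds of server time per token (item b)
    round_trip = 0.2             # seconds of network latency per request (a + c)

    checks_per_server = 1 / per_token_check            # ~1000 tokens/s per server
    servers_needed = target_rate / checks_per_server   # ~1000 servers
    idle_connections = target_rate * round_trip        # ~200,000 connections in flight

    print(f"servers needed:           {servers_needed:.0f}")
    print(f"connections open at once: {idle_connections:.0f}")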

I'm pretty sure that I'm not the best systems analyst/designer on this list.
I certainly hope I'm not the best one to have thought about e-postage. I'd
be happy to learn from a master how it is in fact possible to make an
ideally simplified minimal system like this work as a starting point 
for how to assemble a more complex system that has more elements of 
reality in it. I think (but may be wrong!) that it isn't possible to 
design a system that will be theoretically capable of correctly 
handling a million redemption requests per second of which ~90% are 
the result of someone working to break the system.


It's fun to consider, though.

Using your numbers - one million redemption requests a second of which 
at least 90% are invalid, leads to 100,000 outstanding valid requests 
per second, which would give around 250 billion outstanding stamps at 
any one time, if we expire them after a month (expiring would likely 
involve voiding the unused stamps and issuing new ones in the same 
amount, but that's a business issue, and doesn't affect the redemption 
requirements).

I expect most tokens would be redeemed pretty soon, so you won't have
to keep them around for a month, at least not for real-time checking.
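
For scale, using Steve's figures (the one-day lifetime below is only an
assumption, to illustrate how much early redemption shrinks the working set):

    # Rough size of the outstanding-stamp table under the figures above.
    valid_per_second = 100_000            # 10% of one million requests/s
    month = 30 * 24 * 3600                # seconds in a month
    day = 24 * 3600                       # seconds in a day

    kept_a_month = valid_per_second * month   # ~259 billion stamps
    kept_a_day = valid_per_second * day       # ~8.6 billion stamps

    print(f"expired after a month: {kept_a_month:,}")
    print(f"redeemed within a day: {kept_a_day:,}")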


To me it feels like the hard bit of this is handling a million packets 
in and out per second reliably, along with the overhead of providing 
robustness and redundancy, rather than the redemption itself.

That was my point, because it seems to me that a redemption cannot be done 
with just one packet in and one out, but really needs two in and one out.
A legitimate stamp needs to have 3 possible states in the server's map: 
redeemed, unredeemed, and pending acknowledgment of redemption. If the 
server only has two states for a stamp, then it would end up with one of 
two flaws by design:

1. If the stamp is marked as redeemed when a successful redemption attempt 
completes on the server, it is possible that the success will not be 
successfully communicated to the client. If the client then retries the 
redemption, it will fail.

Yes, but is this a problem which needs to be avoided at all costs? If the
redemption fails, the client can simply buy another stamp and send the
message again. Delivery of that particular message is then twice as
expensive as an average message, but if that happens only infrequently it
doesn't matter. If it does happen frequently for a particular bank, the
affected client(s) will probably be reconfigured to prefer tokens from
other banks.

So if your 50k clients try to redeem the same token at the same time,
one of these 50k requests will be the first to be processed by the
server. The server marks the token as redeemed and sends a positive
reply to the client. The other 49999 requests will be denied. If
(because of network congestion caused by the 50k nearly simultaneous
requests) the reply never reaches the "lucky" client, the token will be
lost. 

Sending 50k requests with the same token is guaranteed to yield at most
one positive reply in this case. It isn't viable for someone who wants
to send many messages, but it may be a viable DoS attack.
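
A minimal sketch of that two-state, first-wins behaviour (the in-memory dict
and the token id are stand-ins for the bank's real store):

    # Two-state redemption: a token is either unredeemed or redeemed.
    # The first request wins; every duplicate gets a hard failure.
    # If the single positive reply is lost on the way back, the token is gone.

    tokens = {"tok-0001": "unredeemed"}   # token id -> state

    def redeem(token_id):
        state = tokens.get(token_id)
        if state is None:
            return "invalid"              # never issued (or long expired)
        if state == "unredeemed":
            tokens[token_id] = "redeemed"
            return "ok"                   # the one positive reply
        return "already-redeemed"         # the other 49,999 requests land here

    # 50,000 identical requests yield exactly one "ok"
    replies = [redeem("tok-0001") for _ in range(50_000)]
    print(replies.count("ok"), replies.count("already-redeemed"))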


2. If the stamp is left as unredeemed while waiting for the client ack of 
success, stamp "reuse" becomes a question of how many redemption decisions 
can be made per client RTT.

The server may have to defer many thousands of clients for scores of 
milliseconds while waiting for one to send an ack.

Yes, but it only needs to defer those clients which sent a request with
the same token. It can process lots of other tokens in the meantime. 

Also the server doesn't have to keep the connection open. If the token
it wants to check is currently in the "pending" state, it can reply with
a temporary error and leave it to the client to retry at a later time.
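
A sketch of how that three-state variant with a temporary error could look
(the state names, the timeout and the rollback rule are all assumptions, not
something taken from the draft):

    import time

    # unredeemed -> pending -> redeemed. A duplicate request that hits a
    # "pending" token gets a temporary error instead of an open connection;
    # if the ack never arrives, the pending state is rolled back after a
    # timeout so a retry can still succeed.

    PENDING_TIMEOUT = 0.5                       # seconds; an assumed value

    tokens = {"tok-0001": ("unredeemed", 0.0)}  # token id -> (state, pending-since)

    def redeem(token_id):
        state, since = tokens.get(token_id, (None, 0.0))
        if state is None:
            return "invalid"
        if state == "unredeemed":
            tokens[token_id] = ("pending", time.monotonic())
            return "ok-please-ack"              # client must acknowledge
        if state == "pending":
            if time.monotonic() - since > PENDING_TIMEOUT:
                tokens[token_id] = ("unredeemed", 0.0)   # ack never came
                return redeem(token_id)
            return "temporary-error"            # retry later
        return "already-redeemed"

    def ack(token_id):
        state, _ = tokens.get(token_id, (None, 0.0))
        if state == "pending":
            tokens[token_id] = ("redeemed", 0.0)

    print(redeem("tok-0001"))   # ok-please-ack
    print(redeem("tok-0001"))   # temporary-error (duplicate while pending)
    ack("tok-0001")
    print(redeem("tok-0001"))   # already-redeemed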

I think the ways to handle that all include dividing the front end
between multiple machines, but that creates a tougher problem keeping
the back end recordkeeping fast and coherent from the viewpoints of all
of the front ends.

I think it is possible to solve the coherency problem by using the token
to choose the server. Then there is always exactly one server responsible
for a specific token, and no coherency problem occurs.
The draft doesn't specify the protocol used to buy or redeem tokens, so
this could be done in the client: A token might consist of two parts,
the first one identifies the server to connect to, the second is unique
within that server.
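
For instance (the token format and the host naming are made up; the draft
says nothing about either):

    # Route each token to exactly one redemption server, so no two servers
    # ever need to agree on the state of the same token.
    # Assumed token format: "<server part>-<part unique within that server>".

    def server_for(token):
        server_part, _, _ = token.partition("-")
        return f"redeem-{server_part}.bank.example"   # hypothetical naming scheme

    print(server_for("0f-4dc3a1b2"))   # redeem-0f.bank.example, always the same host
    print(server_for("0f-77e0c9d4"))   # ...for every token issued by that server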

I suspect that the "solution" that will be chosen if anyone tries to create 
a real e-postage system instead of hand-waving about it will be to open it 
to lost packet damage as the cost of scalability.

Yes, that's what I'd probably do.

Network latency with clients becomes irrelevant if the server assumes
that its redemption messages are always delivered. That allows for a
lot of optimization. Once in a while a stamp that should work will
fail to do so, and if such a system ever gets into the real world I'm
sure its users will be shocked at how much higher the real-world
failures are than in their tests...

If you are still cheaper than the competition, that doesn't matter. If
you aren't, but already have sufficient market share (because you were
one of the first), it might not matter, either. Otherwise you're out of
business.

        hp

-- 
   _  | Peter J. Holzer    | Openmoko has already embedded
|_|_) | Sysadmin WSR       | voting system.
| |   | hjp@hjp.at         | Named "If you want it -- write it"
__/   | http://www.hjp.at/ |  -- Ilja O. on community@lists.openmoko.org
