mail-ng

Re: What I see as problems to solve ... and a strawman solution

2004-01-30 10:09:26


On Jan 30, 2004, at 2:35 AM, Kai Henningsen wrote:

> 1. Mail should really be binary transparent (just like ftp can be with the
> right options - that's where mail came from, in the beginning ...)

> The new mechanism should be based on XML. All text parts are in UTF-8 or UTF-16, base 64 encoded.

as soon as you do that, all the issues about headers vs. bodies, binary transport, how things are formatted, how to handle new and emerging data types, and internationalization (with a few limitations) basically go away. Then you define what XML pieces you must have to be a conforming message (which header parts are required for transport and identification, what the required default content part is and how it's set up, and some meta-part explaining what's in the message), and now you have a transparent transfer system that's infinitely expandable.
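
Just to make that concrete, here's a rough Python sketch of what a conforming message could look like under that model. The element names (message, transport, meta, content) and their layout are invented for illustration, not a proposed schema.

import base64
import xml.etree.ElementTree as ET

# rough sketch of a conforming message. the element names are made up
# for illustration only, not from any actual spec.
def build_message(sender, recipient, subject, body_text):
    msg = ET.Element("message", version="1.0")

    # required pieces for transport and identification
    transport = ET.SubElement(msg, "transport")
    ET.SubElement(transport, "from").text = sender
    ET.SubElement(transport, "to").text = recipient

    # meta-part explaining what's in the message
    meta = ET.SubElement(msg, "meta")
    ET.SubElement(meta, "subject").text = subject
    ET.SubElement(meta, "parts").text = "1"

    # required default content part: UTF-8 text, base64 encoded,
    # so the transfer stays binary transparent
    content = ET.SubElement(msg, "content", type="text/plain",
                            charset="UTF-8", encoding="base64")
    content.text = base64.b64encode(body_text.encode("utf-8")).decode("ascii")

    return ET.tostring(msg, encoding="unicode")

print(build_message("alice@example.org", "bob@example.net",
                    "hello", "any text, any language, any bytes"))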

The internationalization limitation is one that needs to be resolved by internationalizing domain/DNS info, but once that part's solved, this new system is ready to handle it.

XML is a natural here.

Other requirements I have:

(a) the server-to-server transfer should be SSL encrypted. no plain text where people can sniff it. (a rough sketch of this follows the list.)

(b) while I've been critical of SPF (http://www.plaidworks.com/chuqui/blog/001257.html) under SMTP, I think it's a very useful tool for authenticating servers, so some flavor of it should be part of the server-to-server approval for accepting mail from a site.

(c) as a good friend of mine who does computer security keeps pounding into my head, "just because I can authenticate your existence doesn't mean you're authorized to do something". Just because SPF says you can send mail doesn't mean I want you to send mail to me. So there needs to be a mechanism, built in from the start, for servers and their admins to classify peers as friend/foe/unknown, with the ability to hook that determination into some kind of distributed white/black/greylists as well as into deterministic algorithms for on-the-fly evaluation. (see the sketch after this list.)

(d) I would strongly suggest that anyone in that "unknown" bucket be rate-limited by the receiving server as part of the greylisting.
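
For (a), here's a minimal sketch of the kind of always-encrypted server-to-server channel I mean, using Python's standard ssl module. The hostname and port are placeholders, not a proposal.

import socket
import ssl

# sketch for (a): the transfer channel is always TLS-wrapped, so there's
# no plain text on the wire for anyone to sniff.
context = ssl.create_default_context()              # verifies the peer's certificate
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("mx.example.net", 4625)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="mx.example.net") as tls:
        print("negotiated", tls.version())
        tls.sendall(b"...message transfer goes here...")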
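And for (c) and (d), a toy sketch of the friend/foe/unknown decision plus rate limiting of the unknowns. The list contents and the 30-second interval are made up purely for illustration.

import time
from collections import defaultdict

LOCAL_WHITELIST = {"mx.friendly.example"}
LOCAL_BLACKLIST = {"relay.hostile.example"}

def classify(server_identity, distributed_lookup=None):
    """Return 'friend', 'foe', or 'unknown' for an already-authenticated peer."""
    if server_identity in LOCAL_WHITELIST:
        return "friend"
    if server_identity in LOCAL_BLACKLIST:
        return "foe"
    if distributed_lookup is not None:          # hook for shared white/black/greylists
        return distributed_lookup(server_identity)
    return "unknown"

# minimum-interval rate limiter for the "unknown" bucket (greylisting-style)
_last_seen = defaultdict(float)
UNKNOWN_MIN_INTERVAL = 30.0                     # seconds between accepted messages

def allow_message(server_identity):
    verdict = classify(server_identity)
    if verdict == "foe":
        return False
    if verdict == "friend":
        return True
    now = time.monotonic()
    if now - _last_seen[server_identity] < UNKNOWN_MIN_INTERVAL:
        return False                            # too soon; try again later
    _last_seen[server_identity] = now
    return True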

> 3a. Server-to-server communications need to be authenticated.

> 3b. Client-to-server communications need to be authenticated.

heck, we can actually do a pretty good job of authenticating servers now -- we generally know their hostname and we always know their IP address. You can build any kind of authentication system you want, and at the other end it still won't tell you whether you want that thing to enact a transaction with you.

What needs to be thought about is how, once you have this identity token, you determine whether they're a black, white, or grey hat. And in reality, most servers are going to be dealing mostly with greyhats, because there's no way to define trust. So my suggestion is that, instead of trying to build in something like a distributed web of trust or whatever you prefer (where you will have scalability problems, poisoning, and all of the fun and games we have today -- you're simply changing the playground), the focus be on tools for identifying white and black hats, and, for all others, on ways to (for lack of a better term) "chroot" the greyhats so that if they are hostile, the damage is limited or stopped before it happens.
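
To make the "chroot" idea a little more concrete, here's a toy sketch of per-class session limits. Every number and field name in it is invented for illustration.

from dataclasses import dataclass

# sketch of "chrooting" the greyhats: once a peer is classified, the
# session gets a policy that bounds how much damage a hostile sender
# can do. all of the limits below are invented for illustration.
@dataclass
class SessionPolicy:
    max_messages: int
    max_message_bytes: int
    max_recipients: int

POLICIES = {
    "friend":  SessionPolicy(max_messages=10000, max_message_bytes=50000000, max_recipients=1000),
    "unknown": SessionPolicy(max_messages=20, max_message_bytes=1000000, max_recipients=5),
    "foe":     SessionPolicy(max_messages=0, max_message_bytes=0, max_recipients=0),
}

def policy_for(verdict):
    """greyhats get a tightly fenced-in session; known whitehats get the run of the place."""
    return POLICIES.get(verdict, POLICIES["unknown"])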