
[spf-discuss] Re: Automatic key verification / CERT in DNS / RFC4398 (Was: [Announce] GnuPG 1.4.3 released)

2006-04-05 18:17:00
At 1:57 PM +0200 2006-04-05, Jeroen Massar wrote:

        See DKIM.

 Which is not there yet and will take the coming X years to be designed,
 developed, and deployed, while PGP has already been working for several
 years and has proven to be very reliable.

Yeah, but you're talking about inventing new protocol parts to fit into the overall PGP/GPG structure, and which will likewise take years to be designed, developed, and deployed in a standards-compliant manner.

Keep in mind that relatively few people use any kind of personal encryption at all, and most of those who do use S/MIME instead of PGP or GPG, because S/MIME is what Microsoft and Netscape/Mozilla provide by default. You're an anthill right now, and you've got to become a mountain before your proposal can start having any kind of measurable positive impact on the Internet community. Assuming you get there at all, it's going to take you a while.

        Think about ten million users, or fifty million.  Each user
 having several hundred bytes (or even several KB) of data stored for
 them.  Stored in the DNS.  In a single flat zone.  Bad idea.  Like,
 really bad idea.  Like, one of the worst DNS-related ideas I think
 I've ever heard of, at least in a very long time.

 Checking http://dailychanges.com/ status of domains on 4/4/2006:
 49,340,982 .com
 7,425,723  .net

 etc. So what was the issue again with storing a couple of million
 records in DNS?

Yeah, but those records don't change frequently, most of those zones contain relatively little data, and you're not trying to store a complete copy of all 49-million-plus .com zones within the .com gTLD servers themselves.

Try loading all 49+ million .com zones onto the same thirteen .com gTLD servers, and see what happens. Now have each of those .com sub-zones publish DNSSEC crypto records, all of which also have to be contained in the same single flat .com zone.

I don't think there's a nameserver farm anywhere in the world that could handle that kind of load or those kinds of memory requirements.
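
To put rough numbers on the objection, here is a back-of-the-envelope sketch in Python (the per-record byte counts are assumptions for illustration, not measurements):

    # Back-of-the-envelope memory estimate for an all-in-RAM flat zone.
    # All per-record sizes below are assumed values, not measurements.
    users = 50_000_000        # roughly the size of the .com zone in 2006
    cert_record_bytes = 2048  # assumed: a PGP key wrapped in a CERT record
    dnssec_overhead = 512     # assumed: RRSIG/NSEC data per name
    index_overhead = 128      # assumed: in-memory lookup structures per name

    total = users * (cert_record_bytes + dnssec_overhead + index_overhead)
    print(f"{total / 2**30:.0f} GiB")  # roughly 125 GiB, before any growth

Even with those fairly conservative assumptions, you're well past 100 GiB that an all-in-RAM nameserver would have to hold for a single flat zone.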

          If you claim it is a 'bad idea' then why does the CERT record
 exist at all, when it is intended to allow PGP keys to be
 stored? ;)

 Using for instance nsd (http://www.nlnetlabs.nl/nsd/) should not pose a
 problem in serving this amount of content.

I'm familiar with NSD. It is very fast, but also quite memory-intensive. There were some interesting reports at RIPE from the folks responsible for SUNET, which is secondary for some of the largest ccTLD zones in the world, about how they were unable to fit all the necessary data into even very large quantities of memory on their server (something like 64 GB of RAM?).

Do you know of any machines that can have terabytes of RAM available to store that much data?


 Hundreds of GB of data stored on disk for serving up crypto keys for
 millions of users is not that much of a problem. Okay, you're going to
 want some pretty beefy web servers with lots of RAM for caching, but
 you're never going to attempt to fit all that data into memory.

That's not the way that most nameservers work. Most nameservers (NSD especially) store all data in RAM, and when you're talking about relatively large crypto keys for each user in a group of tens of millions of users, that is highly non-scalable.

Yes, there are database-backed nameservers, but unless you're going to pay some pretty big bucks for the Nominum software, they're relatively non-standard and relatively unknown. And no one knows how even the best database-backed nameservers are going to handle data at these kinds of scales.

 It is more a 'separation' question I am asking, so that one has a
 subzone for these records. This allows one to have, say, 3 nameservers
 for example.org, which are registered at the TLD servers and thus can't
 easily be changed, but 20 nameservers, which you stuff in example.org
 itself, handling the load for _certs.example.org where the CERTs are
 stored. It's a choice item giving the possibility of doing it.

Flat databases don't scale. We know this. This is why we no longer use HOSTS.TXT, but instead use the hierarchical DNS. Either find a way to do this kind of work in a hierarchy (so that the load can be spread out amongst many servers), or change the protocol used to store the data so that you're not trying to keep it all in RAM.
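
For what it's worth, the delegation Jeroen describes is straightforward to express; a minimal zone-file sketch (the server names and 192.0.2.x documentation addresses are made up for illustration) might look like:

    ; In the example.org zone: delegate the CERT data to its own subzone,
    ; served by a separate (and separately scalable) set of nameservers.
    _certs.example.org.    IN  NS  ns1.certs.example.org.
    _certs.example.org.    IN  NS  ns2.certs.example.org.
    ns1.certs.example.org. IN  A   192.0.2.10
    ns2.certs.example.org. IN  A   192.0.2.11

That keeps the delegation at the TLD untouched while letting the bulky record set be served by as many machines as you like -- but the _certs.example.org data itself still has to fit on those machines, which is the point above.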

 Which is why I noted that a single PGP key for a complete domain could
 cover all the cases where one doesn't have the end user signing the
 messages but a central system doing it for them. Which is in effect
 what DKIM does, but allowing the freedom to have per-user keys too.

So long as you stick to just one key for the entire domain, it doesn't matter if it's DKIM or PGP. It still has greatly increased CPU requirements (because every single message passing through the server will now have to be cryptographically signed, which raises the server's per-message CPU cost enormously), but at least it has the possibility of being scalable in terms of the amount of key data that has to be stored and accessed on a frequent basis.
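
To get a feel for that per-message cost, here is a small timing sketch (assuming Python with the third-party 'cryptography' package installed; the 2048-bit RSA key and 4 KB message size are arbitrary choices):

    # Micro-benchmark: how long does one RSA signature take?
    # Assumes the third-party 'cryptography' package is installed.
    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"x" * 4096  # stand-in for an average message body

    n = 1000
    start = time.perf_counter()
    for _ in range(n):
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    elapsed = time.perf_counter() - start
    print(f"{elapsed / n * 1000:.2f} ms per signature, "
          f"~{n / elapsed:.0f} signatures/second")

On commodity hardware that works out to on the order of a millisecond per signature -- cheap for one message, but a cost a large mail farm now pays on every single outbound message.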

 Eg large sites like hotmail/gmail/yahoo/whatever could have a 'websign
 key' where the outbound webengine signs the message for the user.
 Presto, 60M users served with one key.

I have yet to be convinced that cryptographically signing each and every message that passes through the server can be scalable in any common sense of the word, but at least that's a different problem which might be addressable through custom hardware.

We did try this technique before -- it was called pgpsendmail, and it cryptographically signed every message passing through the system. It didn't work very well, and few people ended up using it. I don't think that using this same concept all over again is likely to work any better this time than it did last time.

But if you've got some new magic oil that could be applied to the process and instantly solve the problem, I'd be interested in hearing about it.


Doing client-side signing and verification is definitely scalable, but is difficult to get jump-started.
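
For reference, the client-side flow is already a one-liner with existing tools; e.g., with GnuPG (the file names here are placeholders):

    # Sign a message on the sender's machine ...
    gpg --clearsign message.txt    # writes message.txt.asc

    # ... and verify it on the recipient's machine.
    gpg --verify message.txt.asc

The hard part, as noted, is not the tooling but getting enough of the user base to adopt it.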

 I don't care if we go for PKA or CERT records as long as the silly
 spoofing of source addresses gets halted.

I don't think that's likely to happen any time soon. The solutions which are easy to implement are non-scalable, and the scalable solutions are much more difficult to implement.

--
Brad Knowles, <brad@stop.mail-abuse.org>

"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."

    -- Benjamin Franklin (1706-1790), reply of the Pennsylvania
    Assembly to the Governor, November 11, 1755

 LOPSA member since December 2005.  See <http://www.lopsa.org/>.

