--On Wednesday, 14 November, 2007 05:16 -0800 Dave Crocker
wrote:

> John C Klensin wrote:
>> But, in fairness to the proposal, the general idea has one
>> advantage.  If one is concerned about source / originator
>> identity and authentication, having to make a real-time
>> direct connection back to the sender's repository permits
>> thinking about much stronger methods than, e.g., header
>> signatures.
>
> I'm pretty sure it permits nothing stronger at all.
> So I need you to provide some detail.
>
> I'll also note that a mechanism with this much cost --
> especially at the infrastructure -- needs a proof-positive
> guarantee that it will deliver benefits, and not just that it
> makes benefits possible.

To take your comments in reverse order, I dislike this
mechanism. I strongly encouraged Doug to write it up because I
didn't think repeated references to it without any details were
doing any of us any good. Now that I've seen and thought about
the details -- and read the comments of others -- I like it even
less than I expected to. I expected to dislike it because I
consider "synchronize and run for the airplane" and "synchronize
before the stinking wireless link goes down again and then
actually read the mail offline" to be _very_ important mail-use
modes.  I am also, to put it mildly, unwilling to lose multihop
relaying to reach locations with less-than-perfect or
some-times-of-day-only connectivity. Consequently, I see it as
costly and burdensome on the receivers (and other "good guys")
while providing the bad guys only a temporary impediment.
Before the "temporary" is questioned, I have every confidence
that, if we were to establish something like this as the primary
way of transferring mail, the designers of bot and zombie
software would observe that a great many machines run either
without firewalls or without adequate restrictions on incoming
ports. They would notice that most others are running
"personal" firewalls with well-known interfaces. They would
then set up mail databases on the compromised machines, open
ports and compromise firewalls, and then go merrily on their
way. It is obvious to me that this is possible and that the
hard problem is only in figuring out the details of how to do
it. Once one does figure it out, the marginal effort to
compromise a network's worth of machines with this newer,
fancier, and smarter software rather than older and more
simplistic versions is trivial.
Yes, I can also imagine relatively simple countermeasures to the
countermeasures, but getting users, unlike bots, to install and
maintain such countermeasures has historically been hard. After
all, were prophylactic technology that we understand reasonably
well widely deployed and properly configured and maintained, we
would have no botnets. So we need to assume that a machine
that can be compromised today using low-quality malware can be
compromised six months hence using high-quality software.
Statistically, the higher threshold -- or the greater use of
machine resources and higher odds of being noticed by the user
-- would drop some machines out of the bot-candidate pool, but,
to the spammer or botnet operator, even losing a few tens of
thousands, or even a few hundreds of thousands, of candidate
machines is trivial.
So please don't take any of my comments as supportive of this
proposal.
Now, all of that said, and repeating my belief that giving up
relaying is too high a price to pay and that making SMTP
sessions much longer is undesirable: the critical
authentication problem
with techniques based on header signatures is that they require
either a widely-deployed, well-managed, and robust PKI (and we
know where _that_ condition gets us) or they require reliance on
a PKI-alternative that is less robust. Putting keys or other
authorization credentials in the DNS isn't bad, but DNS spoofing
of various flavors is not exactly unheard of (and I dread the
effects of telling the bad guys that they have to learn that
particular skill and apply it in the vicinity of ISP and
enterprise DNS forwarders). That problem presumably gets _lots_
better in the presence of DNSSEC, but only if "presence" means
very widespread deployment in resolvers, signatures up and down
the tree, and a really good strategy for dealing with DNSSEC
validation failures at the user end (e.g., we know that popping
up little boxes that say

   Warning: <incomprehensible gibberish> happened, do you
   want to continue?

is not part of such a strategy).  For some reason, I'm not
expecting that this month or even this year.
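To make the DNS dependence concrete, here is a minimal sketch of
the DKIM-style lookup path (the `_domainkey` naming follows DKIM;
everything else is illustrative): the resolver is replaced by a
dictionary, and an HMAC stands in for the real public-key
signature, since the point is only that verification keys arrive
via DNS names an attacker could spoof.

```python
import hashlib
import hmac

# Hypothetical, mocked DNS data: selector._domainkey.<domain> -> key bytes.
# Whoever controls the answers for these names controls verification,
# which is why DNS spoofing matters and why DNSSEC would help.
FAKE_DNS_TXT = {
    "sel1._domainkey.example.com": b"stand-in-key-material",
}

def lookup_key(selector, domain):
    """Stand-in for the DNS TXT query a verifier would make."""
    name = "%s._domainkey.%s" % (selector, domain)
    return FAKE_DNS_TXT[name]

def verify(headers, signature, selector, domain):
    """Recompute the MAC with the 'published' key and compare."""
    key = lookup_key(selector, domain)
    expected = hmac.new(key, headers.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Signing side, using the same stand-in key:
hdrs = "From: a@example.com\r\nSubject: test\r\n"
sig = hmac.new(b"stand-in-key-material", hdrs.encode(),
               hashlib.sha256).digest()
print(verify(hdrs, sig, "sel1", "example.com"))  # True
```

A spoofed answer for `sel1._domainkey.example.com` would let the
attacker substitute a key of his own choosing, which is exactly
the exposure described above.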
By contrast, suppose we were to abandon relaying and deploy
IPSec or some other technique that guards against mid-session
TCP hijacking if that is necessary. Then we suddenly have the
full range of multiple-handshake and key exchange mechanisms,
validation checks with external servers, etc., available to us.
If one desired to validate message integrity, one could also use
different and less tricky signature mechanisms, since the
delivered message would be known and one could more easily
bypass the effects of valid in-transit modifications (such as
addition of trace fields).
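The "less tricky signature" point can be sketched as follows,
assuming the verifier holds the complete delivered message: one
simply excludes the trace fields that conforming relays add
before hashing (the field list and canonicalization here are
illustrative, not a proposal), so valid in-transit additions do
not invalidate the check.

```python
import hashlib
from email import message_from_string

# Hypothetical canonicalization: drop fields that relays
# legitimately add or rewrite in transit, then hash what remains.
TRACE_FIELDS = {"received", "return-path"}

def content_digest(raw):
    """Digest of the message minus trace fields added in transit."""
    msg = message_from_string(raw)
    kept = ["%s: %s" % (k, v) for k, v in msg.items()
            if k.lower() not in TRACE_FIELDS]
    material = "\n".join(kept) + "\n\n" + (msg.get_payload() or "")
    return hashlib.sha256(material.encode()).hexdigest()

original = "From: a@example.com\nSubject: hi\n\nbody\n"
relayed = "Received: from mx1 by mx2; Wed, 14 Nov 2007\n" + original
print(content_digest(original) == content_digest(relayed))  # True
```

The digest survives the added Received field but changes if the
body or any non-trace header is altered.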
Tedious? Worth the trouble for "ordinary" messages? Almost
certainly not, even if one ignores the considerable costs of
abandoning relaying. And, as I tried to say, if this is what
one wanted to do, then TBR is far too complicated and expensive.
But the checks themselves would, IMO, be slightly stronger.