Re: Need for Complexity in SPF Records
2005-03-27 19:06:26
David MacQuigg wrote:
Radu, I wrote this response yesterday, then today decided it doesn't
sound quite right. I'm really not as sure of what I'm saying as it
sounds. Show me I'm wrong, and I'll re-double my efforts to find
solutions that don't abandon what is already in SPF, solutions like your
mask modifier. Examples are the best way to do that. Your example.com
below is almost there, but it still doesn't tell me why we really need
exists and redirect.
Ok, we'll have a look at all the ideas on the table. That's what the
table is for, right? :)
I won't cut anything out of your message, so that the progression of the
explanation is easily seen and reflected upon if necessary.
At 07:21 PM 3/26/2005 -0500, Radu wrote:
David MacQuigg wrote:
At 04:06 PM 3/26/2005 -0500, Radu wrote:
David MacQuigg wrote:
Now I'm confused. If the reason for masks is *not* to avoid
sending multiple packets, and *only* to avoid processing mechanisms
that require another lookup, why do we need these lookups on the
client side? Why can't the compiler do whatever lookups the client
would do, and make the client's job as simple as possible?
Sorry for creating confusion.
Say that you have a policy that compiles to 1500 bytes.
The compiler will split it into 4 records of about 400 bytes each.
example.com IN TXT \
"v=spf1 exists:{i}.{d} ip4:... redirect=_s1.{d2} m=-65/8 m=24/8"
_s1.example.com IN TXT "v=spf1 ip4:.... .... .... redirect=_s2.{d2}"
_s2.example.com IN TXT "v=spf1 ip4:.... .... .... redirect=_s3.{d2}"
_s3.example.com IN TXT "v=spf1 ip4:.... .... .... -all"
We want the mask to be applied after the exists:{i}.{d}. Since that
mechanism is in the initial query and cannot be expanded to a list of IPs,
the mask cannot possibly apply to it.
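For illustration, here is a rough sketch (in Python, with made-up names like
_s1 and the 400-byte budget; it is not the output of any real compiler) of how
such a compiler might split a long policy into a daisy chain:

  MAX_RECORD = 400  # rough per-record budget, leaving headroom in the UDP reply

  def chain_records(domain, head_terms, ip_terms):
      # Split ip_terms into ~400-byte chained TXT records under 'domain'.
      records = {}
      chunks, current, size = [], [], len("v=spf1 ")
      for term in ip_terms:
          # Start a new chunk if this term would overflow the budget,
          # keeping room for the trailing redirect= or -all.
          if size + len(term) + 1 + len(" redirect=_s99." + domain) > MAX_RECORD:
              chunks.append(current)
              current, size = [], len("v=spf1 ")
          current.append(term)
          size += len(term) + 1
      chunks.append(current)

      # The head record keeps the client-dependent terms (exists:, masks)
      # and points at the first continuation record.
      records[domain] = "v=spf1 %s redirect=_s1.%s" % (" ".join(head_terms), domain)
      for i, chunk in enumerate(chunks, start=1):
          name = "_s%d.%s" % (i, domain)
          tail = "redirect=_s%d.%s" % (i + 1, domain) if i < len(chunks) else "-all"
          records[name] = "v=spf1 %s %s" % (" ".join(chunk), tail)
      return records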
I think what you are saying is that the compiler can't get this down
to a simple list of IPs, because we need redirects containing macros
that depend on information only the client has. So if we are to put
the burden of complex SPF evaluations on the server side, where it
belongs, it seems like we have to pass all the necessary information to
the server in the initial query. We already pass the domain name.
Adding the IP address should not be a big burden, and it would have
some other benefits we discussed.
If you can find a way to do that and still keep the query cacheable,
let me know. If it is compatible with the way DNS works currently,
I'll even listen and pay attention. ;)
That 1 UDP packet might not seem like a lot. But currently it is
cacheable and most of the time is not even seen on the internet.
Making it uncacheable would be a many-fold burden on bandwidth.
That's exactly why caching and the TTL mechanism were invented, and now
you suggest we give it up?
No, I see your point. If we truly need %{i} macros, and we evaluate them
on the server side, that would produce a different response record for
every IP address, and it might not make sense to cache such records.
Responses for SPF records with no %{i} macros would cache as always.
The %{d} macros would not impair caching. Even the %{i} responses might
be worth caching for a few minutes, if you are getting hammered by one IP.
Actually, all records should have the longest possible TTL (within the
constraints of the network design). This avoids caching name servers
everywhere asking the same queries too often.
Responses to %{i} queries are no different. Since there are 2^32
possible questions, you want each one to come up as infrequently as
possible. If you have a pest or even regular traffic every hour, but
your %{i} TTL is 59 minutes, then the cache efficiency is 0%. But if you
could make it 1h and 1 minute, the cache efficiency would be 50%. On the
other hand, for steady traffic, the cache efficiency would be really
high, so even a lower TTL would not make much difference, as the savings
are huge compared to the cost. It's a little counterintuitive that
the "uncacheable" records should have long TTLs. Anyway, this is
somewhat philosophical, because you can't cache 2^32 * {number of forged
domains that publish %{i}}.
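To make the arithmetic concrete, here is a small sketch (assuming perfectly
regular arrivals, which is of course a simplification) of how the hit rate
flips around the one-hour mark:

  def cache_hit_rate(interval_min, ttl_min, n_queries=1000):
      # One query every interval_min minutes; answers live ttl_min minutes.
      hits, expires_at, t = 0, None, 0
      for _ in range(n_queries):
          if expires_at is not None and t < expires_at:
              hits += 1                    # answered from the local cache
          else:
              expires_at = t + ttl_min     # cache miss: fetch and cache
          t += interval_min
      return hits / n_queries

  print(cache_hit_rate(60, 59))   # ~0.0 -> every query goes out over the net
  print(cache_hit_rate(60, 61))   # ~0.5 -> every other query is a cache hit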
As an example, let's pretend that yahoo publishes a record with %{i} and
a TTL of 10 minutes. Potentially it will receive the same 2^32 questions
from all the caching servers of the world, every 10 minutes. I know for
sure that ohmi will be asking every 10 minutes, because I get lots of
forgeries as yahoo.com (say 1 every 11 minutes). So will all the other
little servers. So doubling that TTL means I'll only ask every 20
minutes. This is where the damage is, little servers asking for the
information every 10 minutes, but never using it more than once.
But when yahoo users send 300M messages a day to their hotmail friends,
hotmail will ask yahoo for the information 144 times, and use its cache
the other 299,999,856 times. So the cost of %{i} as seen by yahoo is
not coming from hotmail querying it, but from the swarm of little servers
everywhere.
Whether the loss of caching on a few records is too high a price depends
on the severity of the threatened abuse. Should we tolerate a small
increase in DNS load for the normal flow of email, to limit the
worst-case abuse of the %{i} macro? I don't know.
Well, the %{i} is not a small increase. It is even far more expensive
than PTR. Let's say that you have a spewing spambox that uses forgery
techniques. (let's say it's at 1.1.1.1)
Let's say that all domains used one %{i} mechanism.
The spambox sends ohmi N forgeries from different domains.
If every domain listed a PTR mechanism, I would query the 1.1.1.1.in-addr.arpa
address once, and for the remaining N-1 queries I would find it in the
local cache. So my cost of the PTR is 1 query per mail source.
But if everyone uses an %{i}, I now have to ask the following questions:
1.1.1.1._spf.domain1.com
1.1.1.1._spf.domain2.com
1.1.1.1._spf.domain3.com
1.1.1.1._spf.domain4.com
...
1.1.1.1._spf.domainN.com
these are distinct queries, and I only ask each question exactly once,
so even though the local DNS cache does cache the answers, I will never
ask for them again. All that traffic will go over my DSL connection to
the ISP, to the root servers, and so on. Actually, as Tod pointed out,
every time my caching server is asked about a new domain, it generates
multiple recursive queries: the 1st one to the root servers, the 2nd to the
authority NS servers, the 3rd one to the subdomains, and so on. I hadn't
thought about this, or I would have presented a much gloomier SPF-doom
scenario.
So every one of those queries costs 3 queries on my DSL line: 3*N in total,
compared to the PTR mechanism, which only costs 1 query across DSL. I have
the caching server on my side of the DSL modem, I don't use the ISP's. I
also get charged for excess bandwidth consumed.
If I used the ISP's caching server, I would ask N questions even for the
PTR case. The further the caching server is, the more expensive it is to
use it. Also the benefit is lost, as the further it is, the higher the
response latency gets. (Assume my DSL connection has a 200ms latency.
Asking N questions would take N*200ms, while asking the same N questions
from a cache on my side of the modem would be 200ms for the 1st
question, and 0.1ms for every subsequent one.) And I'd be paying dollars
for the N*200ms performance.
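Putting rough numbers on it (N forged domains from one source IP, the 3x
recursion factor, and the 200ms/0.1ms latencies used above; the figures are
only illustrative):

  N = 1000                 # forged domains seen from one spam source
  RECURSION_FACTOR = 3     # root -> authority NS -> subdomain, per new name
  WAN_LATENCY = 0.200      # seconds, across the DSL link
  LAN_LATENCY = 0.0001     # seconds, local caching resolver

  # PTR: one reverse lookup, cached locally for the remaining N-1 checks.
  ptr_queries = 1
  ptr_time = WAN_LATENCY + (N - 1) * LAN_LATENCY

  # %{i}: every domain needs its own 1.1.1.1._spf.<domain> lookup, and each
  # new name costs ~3 recursive queries over the DSL link.
  macro_queries = N * RECURSION_FACTOR
  macro_time = N * WAN_LATENCY

  print(ptr_queries, round(ptr_time, 2))      # 1 query,      ~0.3 seconds
  print(macro_queries, round(macro_time, 2))  # 3000 queries, ~200 seconds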
What I *would* do is discourage the widespread use of macros, redirects,
and includes, and state in the standard that processing of records with
these features SHOULD be lower priority than processing simple records.
That may help to implement a defense mode if these features are abused.
Absolutely, I'm with you on this. I already suggested that the expensive
macros be limited to 1 per record. The %{d} and %{o} macros are not
expensive, as they expand the same way no matter what the source of the
connection is or what the claimed mail-from is.
I would not introduce the concept of 'priority' though.
After all, no one is forcing the postmaster to do 10 queries, or N queries.
Even my sendmail implementation of SPF has configuration options for how
expensive the check is allowed to get. You can say that checks with %{i}
are never done, and in that case the policy does not result in an
answer, and you can also configure the max number of DNS mechs to an
arbitrarily low number. If it is lower than the spec, and the checker
sees more than that in the record, it doesn't try to expand even one,
and returns with "record too expensive". In both of those cases, no
Received-SPF header is added.
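Something like this sketch captures the idea (the option names and limits are
invented for illustration, not the actual configuration syntax):

  MAX_DNS_MECHS = 4        # site policy: refuse records needing more lookups
  ALLOW_I_MACRO = False    # site policy: never expand %{i}

  DNS_MECHS = {"include", "a", "mx", "ptr", "exists", "redirect"}

  def mech_name(term):
      # "-include:isp.com" -> "include", "ip4:1.2.3.0/24" -> "ip4", etc.
      t = term.lstrip("+-~?")
      for sep in (":", "=", "/"):
          t = t.split(sep, 1)[0]
      return t

  def affordable(spf_record):
      # Return False to skip the check entirely (no Received-SPF header added).
      if not ALLOW_I_MACRO and "%{i}" in spf_record:
          return False                        # expensive per-IP macro in use
      terms = spf_record.split()[1:]          # drop the "v=spf1" version tag
      dns_terms = [t for t in terms if mech_name(t) in DNS_MECHS]
      return len(dns_terms) <= MAX_DNS_MECHS  # else "record too expensive"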
Maybe I'm just not seeing the necessity of setups like the above
example.com. I'm sure someone could come up with a scenario where it
would be real nice if all SPF checkers could run a Perl script
embedded in an SPF record, but we have to ask, is that really
necessary to verify a domain name?
The "..." imply a list of ip4: mechanism that is 400-bytes long.
That's why the chaining is necessary. ebay.com has something like
that. hotmail.com uses something similar too. When you have lots of
outgoing servers, you need more space to list them, no?
Why can't they make each group of servers a sub-domain with its own
simple DNS records, as rr.com has done with its subdomains?
_s3.example.com can have as many servers as can be listed in a 400 byte
SPF record, and that includes some racks with hundreds of servers listed
in one 20 byte piece of the 400 byte record. With normal clustering of
addresses, I would think you could list thousands of servers in each
subdomain, with nothing but ip4's in the SPF record.
It may already be that way. If I had that longer list of domains that
publish SPF, I could run the spfcompiler on them and find out very
quickly what the average, min and max compiled record lengths would be.
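For what it's worth, the back-of-the-envelope arithmetic supports the
"thousands per subdomain" estimate (192.0.2.0/24 is just a placeholder prefix):

  term = "ip4:192.0.2.0/24 "          # one /24 term, about 17 bytes
  record_budget = 400 - len("v=spf1 ") - len("-all")
  terms_per_record = record_budget // len(term)
  hosts_per_term = 2 ** (32 - 24)     # 256 addresses in a /24

  print(terms_per_record, terms_per_record * hosts_per_term)
  # roughly 20+ terms per record -> several thousand addresses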
One reason I can see why mail servers can't be clustered too tightly is
in an application like ebay's. Their business depends on being able to
send "last chance" emails, so they have to have mail servers sprinkled
all over for redundancy (load sharing too).
As I understand it, users sending mail from _s3.example.com will still
see 'example.com' in their headers, but the envelope address will be the
real one _s3.example.com. That's the one that needs to authenticate,
and the one that will inherit its reputation from example.com.
I'm afraid you misunderstood. The _s3-like names are generated by the
compiler, but nothing in the configuration of the SMTP server is changed
to reflect it. So if the next version of the compiler changes to using
_p3, there is zero effect on the mail users. Because the _s records
are daisy chained, it's only the root of the chain that can be used as
the start of policy. That root is at domain.com.
Also, as the network changes, the contents of _s3 changes too. Maybe the
whole daisy chain gets shorter or longer. That will not affect the
envelope address used on mail. Evaluation must always start at
domain.com (the top of the daisy chain).
Seems to me this is using DNS exactly the way it was intended,
distributing the data out to the lowest levels, and avoiding the need to
construct hierarchies within the SPF records. Sure, it can be done, but
what is the advantage over just putting simple records at the lowest
levels, and letting DNS take care of the hierarchy? Why does ebay.com
need four levels of hierarchy in its SPF records?
Currently just for convenience, as they're not using any compiler. In
the future, the compiler will flatten the hierarchy. It may be a while
till then, so in the meanwhile we need a transition plan.
If we simply can't sell SPF without all these whiz-bang features, I
would say put it *all* on the server side. All the client should
have to do is ask - "Hey <domain> is this <ip> OK?" We dropped that
idea because it doesn't allow caching on the client side, but with a
simple PASS/FAIL response, the cost of no caching is only one UDP
round trip per email. This seems like small change compared to
worries about runaway redirects, malicious macros, etc.
I'll humour you:
This server-side processing would not be happening on a caching
server, correct? That would not save anything. I hope you agree.
If the caching server were in the domain which created the expensive SPF
record, then it would save traffic to and from the client, at the
expense of traffic within the domain that deserves it. If example.com
needs 100 queries within their network to answer my simple query "Is
this <ip> OK?", then they need to think about how to better organize
their records. All I need is a simple PASS/FAIL, or preferably a list
of IP blocks that I can cache to avoid future queries. ( This should be
the server's choice.)
I see where the misunderstanding started. Let me attempt to clear it up:
Caching servers are rarely/never deployed close to the authoritative
servers. Caching servers really only make sense if they are close to
where the queries are generated. I showed this above with my 200ms DSL
connection example. It was a little exaggerated, but it serves the
purpose of explanation well.
Caches generally are most beneficial when they are closer to the
consumer. The principle applies equally to processor L1 caches, disk
caches, HTTP page/GIF caches.
The processor caches offer a great example:
The L1 cache runs at the same speed as the core, so assuming a processor
speed of 1GHz, every read and write which is a cache hit costs 1ns. If
the data is not found, the request goes to the L2 cache, which is
bigger, but much slower. So now every request that ends up at the L2
cache takes maybe 5ns. So if the CPU is running on L2 data, it is
waiting 80% of the time. If the data is not in L2, the request goes to
memory. It now takes at least 100ns to do a cache-line fill from memory,
so the CPU is waiting 99% of the time when reading from RAM. The next
level is the disk-based swap space. It's the next best thing to
re-reading a file, especially if it is a file on a network drive. If it
needs to run off swap space, we all know that it's just not worth running
at that point, that's how slow it is.
In the CPU world, slow is expensive. It's like having a 3GHz machine
running on swap data. The MIPS/dollar proposition is pitiful.
The same thing applies to networks. In the example I gave above - 200ms
DSL - waiting N*200ms instead of 200ms + (N-1)*0.1ms is slow, and
therefore expensive, because now I cannot check 1000 domains per second,
but only 5.
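In numbers (same 200ms/0.1ms figures as above; the 1000/second was a modest
target, and the real gap is even wider):

  wan_rtt = 0.200      # seconds per lookup over the 200ms DSL link
  lan_rtt = 0.0001     # seconds per lookup against a cache on my side

  print(1 / wan_rtt)   # ~5 domains/second when every check crosses the modem
  print(1 / lan_rtt)   # ~10000 domains/second when the local cache answers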
What I *don't* want in answer to my simple query, is a complex script to
run every time I have a similar query. That seems to be the fundamental
source of our problem. SPF needs to focus on its core mission,
authenticating domain names, and doing just that as efficiently and
securely as possible. All these complex features seem to be motivated
by a desire to provide functionality beyond the core mission -
constructing DNS block lists, etc. Now we are finding that the complex
features are not only slowing us down, but have opened up some
unnecessary vulnerabilities to abuse.
Unfortunately, Java, JavaScript, Flash, and others found the model
of scripts running on the client to be much
better than scripts running on the server.
But we should make the distinction between expensive scripts and cheap
scripts. All those web-enabling technologies are scripts that get
downloaded in one shot, and then run continuously without needing to
communicate with the server again. That makes them cheap.
Analogously, cheap SPF scripts (IP lists) are much better than expensive
scripts (DNS mechanisms), where the entire work is in transferring
tidbits of data across the net.
The Flash format would have failed if it needed to request each polygon
from the server individually, and serially.
So the only place where it might make a difference is if the
evaluation was run on the authoritative server for the domain.
The problem with that, is that authoritative servers are designed with
performance and reliability in mind (as opposed to caching servers,
which care more about cost savings). As such, the auth servers *do
not* do recursive queries, as an SPF record evaluation might require. They
also do not do any caching. They respond to every query with a piece
of data they already have in memory or on disk. If they don't have
that piece of information, they return an empty response or "it
doesn't exist" (NXDOMAIN). They never look for it anywhere else.
That's why they are authoritative. If they don't know about it, it
doesn't exist.
Now, the spfcompiler only makes sense if it is running on a master
server. The master for a zone is itself authoritative. The above
authoritative servers are slaves. They take the information from the
master server and disseminate it as if it were their own. It is the
administrator of the master zone server that allows them to do so. No
other server can respond authoritatively to queries for the zone in
question.
So, the only place the spf compiler makes sense is on the master
server, because ultimately, it is the only one who really knows the
facts. When the facts change, the master informs the slaves, which do
a zone transfer in order to update their databases. So the truth
propagates nearly instantly from the master to the slaves, and as such
the slaves can be seen as clones of the master, identical in every
way, except for the 3 second delay it takes them to transfer the zone
files.
You cannot run the compiler on the slaves, because they might each
evaluate the record differently, as they are coping with different
network conditions (such as lost packets, etc). In that case, they
would each tell a different "truth" from each other and from the
master server, and they would no longer be authoritative.
Now, having the master zone server respond to queries that require it
to do calculations of any kind is an absolute no-no. That is because
no matter how big the zone is (yahoo, rr, whatever), there is only one
master. Ok, there may be a couple, but their job is not to respond to
queries, but to 'hold the authority'. The slaves are for responding to
queries.
I would also say the slaves are the right machines on which to do
whatever complex lookups are needed to answer a query. The owners of
those machines are the only ones who will make the tradeoff of cost vs
desired complexity.
I actually said that you cannot run the compiler (i.e., the complex evaluator
program), so I will disagree here, but I will explain in more detail.
It is a common best practice for a domain name to employ slave
authoritative servers that are well spread around the world. This is so
that if one trunk gets cut somewhere, the domain name does not suffer, as it
is able to serve queries from its redundant servers. (When a resolver call
fails, it tries the next authoritative server on the list of
authoritative servers for the domain.)
As such, the slaves for a domain are separated by great geographical
distances, and this makes the whole system more reliable.
But since they are separated, if you ask them all to resolve the same
SPF record independently of each other, they will come up with different
answers. This is because different queries time out for each one, and
they are asking different other servers for the answers to the questions.
For instance, say that an ISP has 2 name servers: ns1.isp.com and
ns2.isp.com. A customer of that ISP has a vanity domain, or is a large
company. Either way, the customer uses an include:smtplist.isp.com in its own SPF
record.
The customer uses 5 slave name servers, which are different from the
ISP's name servers: ns1.dnsRus.com, ns2.dnsRus.com, ns1.weknowdns.com
ns2.weknowdns.com, ns3.weknowdns.com.
If the slaves compile the include:smtplist.isp.com mechanism, they might
come up with different results, because they would do the compile at
different times. Indeed, if the ISP needs to change the TXT record at
smtplist.isp.com, it might take a minute or two for the change to
propagate to ns1.isp.com and 5-6 minutes to propagate to ns2.isp.com.
That may depend on how busy each of the servers is, and their
configurations, which may not be the same, etc.
So in the case of the large company's compiled SPF record, some of the 5
name servers ask ns1.isp.com and some ask ns2.isp.com for the TXT record
at smtplist.isp.com. Oops! The slaves have now compiled different SPF
records for the big company. And slaves don't ask each other for
confirmation. They are authoritative, so they all *know* that their info
is correct.
That's the problem, right there... authoritative servers presenting
different information as correct.
When they designed the DNS system, they avoided this problem by design.
That's why the slave servers are called slaves, because they're supposed
to do nothing but what the master tells them. Then the chain of command
is intact. So a thinking slave is an oxymoron by design in this case.
You cannot ask the slave to do any thinking/compiling, as that would
break the assumptions that the whole DNS system is based on.
So, the only place that compiling can be done by a DNS server is at the
master servers. They are absolutely authoritative, and there's no risk
of disagreeing with other servers.
In fact, the master servers for a zone use the same source of information
(database, zone file, etc). When that file changes, they read it in and
inform the slaves. Then, after the necessary propagation delays all the
slaves are updated and respond with the exact same information.
Sometimes the source of information is a database, like in the MyDNS
server, which uses MySQL. MyDNS is purely a master server. It neither
does recursive queries (which would make it a caching server), nor does it
accept incoming zone transfers (which would make it capable of being a slave).
When there are multiple master DNS servers, using the same database,
they have to use database replication, so that all masters use the same
information. In that case, only one of the databases is writeable, and
the rest read-only. Even there, on the back end, there is this concept
of a master database, and multiple replicated copies. The compiler will
run on the master-of-masters, and update the master database. The update
will then replicate to all the slave databases, which are used by the
other master DNS servers, which will update the slave DNS servers, and
everyone is on the same page.
So if the compiler, running on the master-of-masters, queries ns2.isp.com
(which is a couple of minutes late to update), there is no problem, as
the master-of-masters does the recompilation every TTL seconds (isp.com's
setting), and the ISP does not expect its records to propagate to the
world in less time than the TTL it specifies.
Of course if the TTL of the big company's records is less than that of
the ISP, the SPF record will have that shorter TTL, and will be compiled
more often.
So doing what you propose would require the DNS system to be turned
upside down. The justification of SPF is just not good enough.
I don't see how this turns anything upside down. DNS is supposed to be
decentralized. If complex lookups are necessary, having a bunch of
slave servers do the work on behalf of a master server is consistent
with decentralization.
Well, compare what you are suggesting with the way I understand the DNS
world to work. From my perspective, your proposal is a departure from
the way things work. If I'm wrong, I hope that someone more
knowledgeable will give me a well-placed kick. (I promise to take it
like a man... gimme!) :)
Let's estimate the worst-case load on DNS if we say "no lookups, one
packet only in any response". I'm guessing 90% of domains will provide
a simple, compiled, cacheable list of IP blocks. This is as good as it
gets, with the possible exception of a fallback to TCP if the packet is
too long. The 10% with really complex policies may have a big burden
from queries and computations within their own network, but what goes
across the Internet is a simple UDP packet with a PASS or FAIL.
Oh, but the critical detail is that a lot of firewalls block port 53
TCP, whether by design or configuration. Since this is the state of the
world, DNS queries over TCP are inherently unreliable.
I doubt that the 10% of domains with long compiled SPF records will accept
that unreliability as a fact of life. They will stick to UDP, which is
more or less guaranteed, in the sense that even if a packet is lost
once, the next time it will probably make it. The DNS system deals
gracefully with temporary problems like this, so not a problem.
But when your record depends on TCP, and some firewall somewhere blocks
it, there's no amount of retrying that will get that connection through.
And because of this, we're stuck with daisy-chaining the longest records.
In the end, it's done for the sake of reliability, at the expense of
some extra traffic.
That response is not cacheable, but let's compare the added load to some
other things that happen with each email. Setting up a TCP connection
is a minimum of three packets. SMTP takes two packets for the HELO and
response. MAIL FROM is another two. Then we need two for the
authentication. At that point we can send a reject (one packet) and
terminate the connection (4 packets).
Looks to me like the additional load on DNS is insignificant for normal
mail, and only a few percent of the minimum traffic per email in a DoS
storm. Also, the additional load is primarily on the domain with the
expensive SPF records, where it should be.
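Tallying the packets in that minimal rejected transaction (the figures are
the ones listed above):

  tcp_setup = 3   # SYN, SYN-ACK, ACK
  helo      = 2   # HELO + response
  mail_from = 2   # MAIL FROM + response
  spf_check = 2   # the DNS query + response added for authentication
  reject    = 1   # the rejection reply
  teardown  = 4   # closing the connection

  total = tcp_setup + helo + mail_from + spf_check + reject + teardown
  print(total, spf_check)   # 14 packets in the minimal exchange, 2 of them DNS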
But the load is not always primarily on the publisher. Consider a case like:
"v=spf1 ip4:1.1.1.1/28 mx:t-online.de include:isp1.com include:isp2.com
include:isp3.com -all"
Say that the 3 ISPs don't even publish SPF yet, but the includes are
there just in case they ever do.
This record is very cheap on the publisher's DNS (only 1 TXT query goes
to the publisher's DNS). But for every bandwidth penny spent by the
publisher, the 3 ISPs have to spend 1 penny each. Poor t-online.de has
to spend 10 pennies for each penny that the publisher spends.
And the sad thing is, while the ISPs can minimize the cost by
publishing cheap SPF records, there's nothing t-online can do to lower
its damage.
What's even worse is that t-online can't even find out why it sees
increased bandwidth levels. It's extremely complicated to track an MX or
A query back to an email address.
Even worse than that, the default max-ncache-ttl in BIND is 3 hours.
That means that even if the publisher's TXT record has a TTL of 24H, the
ISPs will be hit with a query every 3 hours, while the publisher is hit
only every 24H.
So no, the cost is not necessarily on the publisher.
Taking into account the TTLs above, and the TTL of t-online's records of
1H, the score would be
1:8:240
So the publisher's record costs t-online.de 2.40 euro for every penny
it costs the publisher.
The ISP's pay 8 pennies each.
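Here is where the 1:8:240 score comes from, per querying resolver per day
(the 10-queries-per-evaluation cost for mx:t-online.de, presumably one MX
lookup plus A lookups for its mail hosts, matches the 10-pennies figure above):

  HOURS_PER_DAY = 24

  publisher_ttl = 24   # hours: the publisher's TXT record
  ncache_ttl    = 3    # hours: BIND's default max-ncache-ttl, for the ISPs
  tonline_ttl   = 1    # hours: TTL on t-online.de's MX/A records
  mx_cost       = 10   # queries per evaluation of mx:t-online.de

  publisher = HOURS_PER_DAY // publisher_ttl            # 1 query/day
  per_isp   = HOURS_PER_DAY // ncache_ttl               # 8 queries/day each
  tonline   = (HOURS_PER_DAY // tonline_ttl) * mx_cost  # 240 queries/day

  print(publisher, per_isp, tonline)                    # -> 1 : 8 : 240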
Even if this were a spammer
domain, and they weren't *really* doing any internal lookups, the load
on their DNS server is two packets for every additional two-packet load
on the victims. No amplification factor here.
Add that the spammer is actually likely to both use t-online.de's
resources, and be stupid enough to not realize that mail doesn't go
through the MX exchange. Suddenly, the amplification factor becomes a
certainty.
How about this: All SPF records SHOULD be compiled down to a list of
IPs. If you need more than that, then do as much as you like, but
give the client a simple PASS or FAIL. Most domains will then say
"Here is our list of IPs. Don't ask again for X hours." Only a few
will say "Our policy is so complex, you can't possibly understand
it. Send us every IP you want checked."
That's exactly what the exists:{i}.domain does. It tells the domain
about every IP it wants checked, and the server checks it.
Unfortunately, it is extremely expensive because it's AGAU.
If I were writing an SPF-doom virus, this is where I would start.
I need to get back to designing ICs. :>)
Nah... you've got some great ideas and I value your contribution and
feedback.
And I appreciate your time in getting me up to speed on these problems.
I hope one day I can return the favor.
It's a pleasure to be of service. SPF is a good cause, and I think it
deserves to be saved.
Incidentally, I got curious and did some tests, and it appears that
yahoo does not do any DNS queries on incoming mail. Hotmail does two but
either doesn't respect TTLs, or does queries on a spot-check basis,
because even though I have a low TTL, they did not refresh.
It could be that, even without checking SPF, these two have already figured
out that DNS is more expensive than storing spam. Fascinating!
This wasn't a scientific test as I would normally do, but a quick
check-your-fears check.
So at least for now, I think I know that yahoo and hotmail will not do
any spf checks any time soon, based on this little test and a lot of
extrapolation. ;)
Radu.