Howdy,
Below I explain why the assertions made a couple of weeks ago by
Peter Williams (that the STT message specification "conveys all that
is necessary to implement the mechanics of the payment protocol", is
"tuned for optimized encodings", "use(s) ASN.1, or a minor variant",
"use(s) BER, else a minor optimized variant", and is "far more
carefully thought out" than SEPP) are incorrect. I also offer my
opinion on the quality of the Cambridge Prefix Notation and the
associated STT encoding rules.
> I managed to partially implement the VISA/Microsoft STT spec over the
> weekend. The design is clearly a component of a security system; the
> specification conveys all that is necessary to implement the mechanics
> of the payment protocol.
As you can see from the other mail I sent a few minutes ago, the STT
specs are immature, and if two peers succeed in communicating using STT
it is by coincidence.
> I tried to do the same with the SEPP spec. The document tends more to
> be of a system design, rather than a component spec, and seems rather
> incomplete from the perspective of an implementor and an evaluator of
> the trusted system. In particular, ASN.1 OIDs and some types are
> either not defined or missing.
STT has a bunch of problems of its own, some quite severe, as you can
see from my previous mail. I did not expect such blatant errors to
exist in what I had been led to believe was supposed to be a well
thought out specification. As far as SEPP goes, I agree that it has
oversights, but SEPP is a draft that is still open to public comment,
so I am not concerned to see that it still has some oversights at this
point.
> SEPP uses an innovative X.509 v3-oriented CMS access protocol, with
> uniform messaging security enveloping shared with the payment
> protocol's own messaging transport.
> STT uses a variation of X.509 v1 certificates tuned for optimised
> encodings and real-world id structures for end-systems/terminals.
> [...]
> Interestingly, both use ASN.1, or a minor variant, for notation
Incorrect. SEPP uses ASN.1; STT does not. Nor does STT use a
notation that is even close to ASN.1. The Cambridge Prefix Notation
that STT uses (as documented in the STT document) is far less mature
compared to ASN.1. Indeed, the STT document willingly admits this
when it quite aptly refers to the Cambridge Prefix Notation as an
"impure notation". (See p. 63 where STT is forced to use informalisms
such as "TLV_CREDTAG*", "ADF" and "SignatureSection" in the definition
of Credential).
The main point here is that STT is by no means a minor (or any sort
of) variant of ASN.1. Peter was mistaken in concluding that they are
similar.
> and both use BER, else a minor optimised variant, for encoding.
This too is incorrect. SEPP uses DER (not BER), and STT uses ad hoc
encoding rules that share little similarity with BER. Sure, the ad hoc
encoding rules use a tag, length and value, and sometimes just a tag
and value are used, but to say that this makes it similar to BER is
like saying that Yiddish and English are similar because they both
employ words and are used in human communication. Again, Peter was
mistaken in his conclusions on similarity.
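Since the BER/DER distinction matters to the point above, here is a small sketch of my own (illustrative only, not taken from either spec). DER is a restricted subset of BER that admits exactly one legal encoding per value, which is what a protocol needs when encodings are signed; BOOLEAN is the classic example:

```python
# DER is a restricted subset of BER: for each value there is exactly
# one legal DER encoding.  BOOLEAN TRUE (universal tag 0x01, length 1)
# illustrates this: BER accepts any non-zero content octet, while DER
# requires the content octet to be 0xFF.
ber_true = [bytes([0x01, 0x01, v]) for v in range(1, 256)]  # 255 legal BER encodings
der_true = bytes([0x01, 0x01, 0xFF])                        # the single legal DER encoding

print(len(ber_true))         # 255
print(der_true in ber_true)  # True: every DER encoding is also valid BER
```

This is why a verifier can re-encode a DER value and get back the exact bytes that were signed, something BER cannot guarantee.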
> ... a minor optimised variant, for encoding.
And I don't see any sort of optimization in the ad hoc encoding
rules that STT uses that makes them more efficient than BER.
For example, in BER tags are typically 1 byte long (there are none
longer than this in X.509, SEPP, or most other specs that I have seen)
followed by typically a 1 or 3 byte length field (though it can be 4
or 5 bytes long, or more). So, BER-encoded values typically have a
tag/length overhead of at most 2 or 4 bytes. Compare this to the
overhead of the ad hoc encoding rules, which is 8 bytes (tag is 4
bytes, length is 4 bytes) for the TLV form, or typically 5 or 7 bytes
for the "optimized" TV form. Use of STT atomic data types without
the TV or TLV form is possible, but is a rarity in the STT spec.
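To make the overhead arithmetic above concrete, here is a rough sketch of my own (`ber_overhead` models only the common 1-byte-tag case described above; `stt_tlv_overhead` reflects the fixed 4-byte tag and 4-byte length of the TLV form; neither is taken from an official encoder):

```python
def ber_overhead(value_len):
    # BER definite-length form: assume a 1-byte tag (the common case in
    # X.509 and SEPP), followed by the length octets.
    if value_len < 128:
        length_octets = 1  # short form: a single length octet
    else:
        # long form: one initial octet plus the minimal number of
        # octets needed to hold the length itself
        n, v = 0, value_len
        while v > 0:
            n += 1
            v >>= 8
        length_octets = 1 + n
    return 1 + length_octets  # tag + length

def stt_tlv_overhead(value_len):
    # STT ad hoc TLV form: 4-byte tag plus 4-byte length, no matter
    # how small the value is.
    return 4 + 4

# A 16-byte value: 2 bytes of overhead under BER vs. 8 under STT TLV.
print(ber_overhead(16), stt_tlv_overhead(16))      # 2 8
# A 1000-byte value: BER long form needs 1 tag + 3 length octets.
print(ber_overhead(1000), stt_tlv_overhead(1000))  # 4 8
```

For small values (the common case in a payment message) the STT TLV form thus carries four times the tag/length overhead of BER.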
Maybe the word "optimization" referred to CPU utilization, since it is
not more efficient in bandwidth. I am not certain whether it is more
optimal than BER in CPU utilization, but if speed of encoding/
decoding (not the time taken to push the bytes across the wire) was
most important, why wasn't a well thought out encoding rule such as the
Light Weight Encoding Rules developed by Christian Huitema (and
others?) a few years ago used? Or for that matter, PER is very fast
to encode/decode *and* produces very compact encodings. Why reinvent
the wheel?
> Neither will require ASN.1/CN application-oriented data-structure
> compilers for quality-implementation.
Compilers are never required, even when writing programs. However,
one of the benefits of using a compiler is that you can at least
verify that a specification can be implemented, assuming there are no
semantic errors. Since the Cambridge Prefix Notation (at least as
presented in the STT document) cannot be processed by compilers even
if all the obvious bugs were cleaned up, there is no way of being
readily certain that no syntactic errors remain.
> Overall, the STT material leaves me with the impression of being far
> more carefully thought-out, from the security standpoint. SEPP seems
> to be more politically and commercially astute.
I cannot comment from the security standpoint, but I can assure you
that STT is not well thought out from the standpoint of the syntax
used to describe its messages or from the standpoint of the ad hoc
encoding rules that it employs.
----
One thing that I wondered as I read the STT document is that maybe the
Cambridge Prefix Notation and the ad hoc encoding rules that STT
employs are not as brain-damaged as it would seem; maybe it is just
STT's use of these tools that is brain-damaged. I cannot say because
I have never seen a formal description of Cambridge Prefix Notation
and the ad hoc encoding rules, aside from what is described in the STT
document. Is it that the description of these in the STT document is
a poor summary of a more well thought out solution, or is what is in
the STT document really all there is? Can anyone shed light on this?
Bancroft Scott
Open Systems Solutions, Inc.