ietf-smime

Re: draft-housley-binarytime-00.txt

2004-09-15 06:13:17


I think this is an unacceptable approach.  It could lead to decode errors 
in implementations that support the signing-time attribute that is defined 
in CMS.

Right, that's why we have David Kemp's comment:


  > The other Peter's suggestion:

    Time ::= CHOICE {
        utcTime          UTCTime,
        generalizedTime  GeneralizedTime,
        epochSeconds     INTEGER}

  > is fine with me, but I'm not sure existing applications would deal with 
  > an unrecognized CHOICE value as gracefully as they would with an 
  > unrecognized attribute.  To avoid problems, the S/MIME -msg spec would 
  > have to state that epochSeconds MUST NOT be used by sending applications.

  > Dave
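To make the size trade-off in this CHOICE concrete, here is a small sketch (not from the draft) that hand-encodes the same instant both ways under DER: once as the UTCTime content string used by the existing attribute, once as a minimal two's-complement INTEGER of epoch seconds. The encoding logic is a simplified illustration, not a full ASN.1 implementation.

```python
# Sketch: hand-encode 2004-09-15 06:13:17Z under DER as a UTCTime
# and as an epochSeconds INTEGER, to compare the resulting sizes.
import calendar

utc_time = b"040915061317Z"                                   # UTCTime content octets
seconds = calendar.timegm((2004, 9, 15, 6, 13, 17, 0, 0, 0))  # 1095228797

# Minimal DER: tag octet, short-form length, content octets.
der_utctime = bytes([0x17, len(utc_time)]) + utc_time
n = (seconds.bit_length() // 8) + 1            # minimal two's-complement width
content = seconds.to_bytes(n, "big", signed=True)
der_integer = bytes([0x02, len(content)]) + content

print(len(der_utctime), len(der_integer))      # 15 vs. 6 octets
```

So the INTEGER form does save octets, but only a handful per attribute, which is the "very weak" saving discussed further down.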


There is no need at all to introduce these options; they serve
no purpose at all, since the other attribute exists.

We have already had this debate. 

When did we debate introducing UTCTime and GeneralizedTime into the new
attribute? I must have missed that. As far as I remember, the only
debate was whether we need a specification with second granularity at all.

You answered Peter Gutmann's comment ...
  >The proposal isn't so much limiting as... bizarre.  The ASN.1 time formats
  >have been around forever, everything supports them, and this proposal is for a
  >format that isn't even as "flexible" as the not-very-flexible UTCTime.  What
  >real problem is this addressing?  Why a time_t?  Why not a 64-bit time to keep
  >the Java guys happy?  Or the Windows nanoseconds-since-1600 time?  Or the
  >Macintosh seconds-since 1904?  You could make it a choice, so no-one would
  >feel left out, with at least one of the choices being an identified-by-OID
  >value so everyone could add their own favourite oddball format...

with:
  Any of these is possible, but a choice defeats the whole point, which is to 
  avoid complex conversion.

And now you add a choice, and, furthermore, values that you qualify
as more difficult to handle.

                                  At least two people see potential value 
'A potential value'? I have not seen anything concrete except 

   There are no existing applications in this space, and
   thus no backwards compatibility issues.  But CMS and a newly defined
   attribute can be applied to future applications without having any
   impact on existing apps.


in the BinaryTime.  Clearly there is not a ground swell of support.  This 
is the reason that the "Status of this Memo" section indicates that this 
will become an Experimental RFC.  In this way, we can find out if there is 
any value from implementations.  If no implementations emerge, then we can 
let it drop.  On the other hand, if implementors find it useful, then a 
standards-track RFC can follow later.


First of all, the IESG has something to say here. So please don't say 
'it will become'.  

Becoming Experimental requires that the IESG determine whether the protocol
conflicts with existing text. At least with the new text, a conflict seems
clearly the case to me, i.e., it conflicts with almost everything
that can be said about the existing attribute, and all of the supposed
benefits seem wrong to me.

At least, I think the IESG should add some appropriate wording to
explain whether or not the text is in conflict with existing standardisation
work.

But I'll address the statements of the text (a second time).

1.1  BinaryTime

   Many operating systems represent date and time as an integer.  This
   document specifies an ASN.1 type for representing a date and time in
   a manner that is compatible with these operating systems.  This
   approach has several advantages over the UTCTime and GeneralizedTime
   types.

Not many systems represent date and time in BER as far as I remember.

I do not understand this comment.  The quoted text says that operating 
systems use an integer, not a character string.  BER (or other encoding) is 
going to be applied in either case.  This is essential to resolve endian 
issues at a minimum.

An integer encoded in BER is, AFAIK, not the way seconds are encoded
on any known system. (And that is before even getting into endian issues.)

   First, a BinaryTime value is smaller than either a UTCTime or a
   GeneralizedTime value.

True, but a very weak saving compared to the size of the document, ...

   Second, in many operating systems, the value can be used without
   conversion.  The operating systems that do require conversion can do
   so with straightforward computation.

You need at least some conversion from a BER-encoded variable-length
integer to some value. And you give no indication of why you would want
to compare the date with some local integer value.

Many values can be used directly.  If the endian ordering is different on 
a particular system, then straightforward manipulation is needed.  If the 
epoch is different, addition or subtraction is needed to compensate.  If 
the granularity is something other than seconds, then multiplication or 
division is needed to compensate.  To me, these are "straightforward 
computation."
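For illustration, the "straightforward computation" for a differing epoch and granularity can be sketched like this. The constants are the well-known ones for the Windows FILETIME format (100-nanosecond ticks since 1601-01-01T00:00Z); the helper names are mine, not from any specification.

```python
# Sketch of the epoch/granularity adjustments described above, using
# Windows FILETIME as the example native format.
EPOCH_DIFF_SECONDS = 11_644_473_600   # 1601-01-01 .. 1970-01-01, in seconds
TICKS_PER_SECOND = 10_000_000         # FILETIME granularity: 100 ns ticks

def filetime_to_unix(filetime: int) -> int:
    """Division for the granularity, subtraction for the epoch difference."""
    return filetime // TICKS_PER_SECOND - EPOCH_DIFF_SECONDS

def unix_to_filetime(seconds: int) -> int:
    """The inverse: addition for the epoch, then multiplication."""
    return (seconds + EPOCH_DIFF_SECONDS) * TICKS_PER_SECOND

print(filetime_to_unix(unix_to_filetime(1095228797)))  # round-trips: 1095228797
```

Whether two integer multiplications and an addition count as materially simpler than parsing a fixed-position digit string is exactly the point under dispute below.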

To summarize: 

   - variable length encoding
   - endian
   - subtraction or addition,
   - multiplications on IBM 3x0 series.
   
This seems to me at least as involved as handling a character-string-based
format, which has no variable-length encoding, no endian issue, and needs
only a few multiplications plus an addition/subtraction for UTCTime. ...

I can state all of this if you like:

    Second, in many operating systems, the value can be used without
    conversion.  The operating systems that do require conversion can do
    so with straightforward computation.  If the endian ordering is different
    than the ASN.1 representation of an INTEGER, then straightforward
    manipulation is needed.  If the epoch is different than the one chosen
    for BinaryTime, addition or subtraction is needed to compensate.  If the
    granularity is something other than seconds, then multiplication or
    division is needed to compensate.  Also, padding may be needed to
    convert the variable-length ASN.1 encoding of INTEGER to a fixed-length
    value used in the operating system.
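The padding step mentioned in that proposed text can be sketched as follows. DER INTEGER content octets are minimal-length, so widening them to a fixed-size native field means sign-extending on the left (and possibly byte-swapping on a little-endian host, which is omitted here). The helper name is mine.

```python
# Sketch: sign-extend the minimal-length DER INTEGER content octets of a
# time value to a fixed 8-byte (int64-sized), big-endian field.

def der_int_to_int64(content: bytes) -> bytes:
    """Pad DER INTEGER content octets on the left to 8 big-endian bytes."""
    if len(content) > 8:
        raise ValueError("does not fit in 64 bits")
    # Negative values (top bit set) are padded with 0xFF, others with 0x00.
    fill = b"\xff" if content and content[0] & 0x80 else b"\x00"
    return fill * (8 - len(content)) + content

# 1095228797 encodes in 4 content octets; a 64-bit field needs 4 pad octets.
padded = der_int_to_int64((1095228797).to_bytes(4, "big"))
print(padded.hex())  # 000000004147dd7d
```

Small as it is, this is one more fix-up the character-string encodings never need, which supports the "at least as involved" comparison above.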

Tell me any operating system where the value can be used without at least
some small treatment. See also the comments about internal formats from P.G.
concerning the word 'many'. 
 

Comparison of a date/time value in a protocol to the current time from the 
operating system seems very obvious to me.

And that comparison should lead to what result? The signature is too old,
or not yet valid? What application are you thinking about? Secure NTP?

 
As far as I remember, date comparisons have to be made when
you want to check certificates. In that case, the logic to
convert a local time value to a GeneralizedTime already exists
on the machine. Of course, if you assume that no certs are used
at all, ... then you might still save more octets by reducing
the SignerInfo structure.

The certificates and CRLs are optional in SignedData.  The SignerInfo sid 
field can be used to identify a public key that is not embedded in a 
certificate, such as a trust anchor.

Indeed, this case is one where SignedData has GeneralizedTime (unless
I have overlooked something). Well, the type is also proposed for
AuthenticatedData, and somewhere in the key exchange info you
have a date in GeneralizedTime, but this is probably never used at all here.
 
   Third, date comparison is very easy with BinaryTime.  Integer
   comparison is easy, even when multi-precision integers are involved.
   Date comparison with UTCTime or GeneralizedTime can be complex when
   the two values to be compared are provided in different time zones.

There are no time zones involved in the signingTime attribute.
One change in RFC 3369 vs. 2630 was to uppercase the MUST.

Signing-time is not the only possible use of BinaryTime.  It is the one 
specified in the document.  However, if the ASN.1 type is useful, then it 
will start appearing in other places.  This would be an indication that the 
Experimental RFC is useful.

You don't address what I have said. You indicate that time zone
comparisons are difficult, but they do not even occur within the
existing signingTime attribute. 
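To spell out the disagreement: GeneralizedTime in general does permit offset forms ("YYYYMMDDHHMMSS+HHMM"), and comparing those requires normalization, but a signingTime value is mandated to be Zulu, where plain string comparison already orders correctly. A minimal sketch (the parser handles only whole-second values, and the helper name is mine):

```python
# Sketch: comparing offset-form GeneralizedTime values needs normalization;
# Zulu-only values, as signingTime requires, compare correctly as strings.
from datetime import datetime, timezone, timedelta

def parse_generalized_time(s: str) -> datetime:
    """Parse a whole-second GeneralizedTime with a 'Z' or '+-HHMM' suffix."""
    base = datetime.strptime(s[:14], "%Y%m%d%H%M%S")
    if s[14] == "Z":
        return base.replace(tzinfo=timezone.utc)
    sign = 1 if s[14] == "+" else -1
    offset = timedelta(hours=int(s[15:17]), minutes=int(s[17:19]))
    return (base - sign * offset).replace(tzinfo=timezone.utc)

# The same instant in two zones: strings differ, normalized values agree.
a, b = "20040915061317Z", "20040915081317+0200"
assert parse_generalized_time(a) == parse_generalized_time(b)
assert a != b   # naive string comparison would call these different
```

So the complexity the draft cites is real for GeneralizedTime at large, but it never arises inside the existing attribute, which is the objection being made here.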

 
 String comparison is as easy as integer comparison.

In 25 years, time definitions of 32-bit machines may become difficult
to compare with an integer. Nothing guarantees you that the local
time definitions will simply shift, or whatever else may happen.

I think this is addressed by my proposed text above.

Yes, which makes the logic as "complex" as for the existing character
encodings IMO. 


The textual representation of GeneralizedTime in Zulu holds for at least
a few more years, and beyond 9999 there is already an RFC :-)

The integer representation will not have trouble in 10000 ;-)

   This is a rare instance where both memory and processor cycles are
   saved.

Processor cycles are not saved, since soon, i.e. in about 25 years,
you have to check whether you are beyond the epoch's range, etc. So you
may need at least some (also rather simple) logic, as with the adjustments
of UTCTime.

I do not see this one.  Some operating systems already use int64 to 
represent time.
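The "about 25 years" point can be made concrete: a signed 32-bit time_t runs out in January 2038, while the ASN.1 INTEGER itself is unbounded, so the range check only arises when mapping the decoded value into a fixed-width native field. A small sketch:

```python
# Sketch: the DER INTEGER is unbounded, but a signed 32-bit time_t is not;
# the "are we beyond the range" check becomes necessary in January 2038.
import struct
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1   # last representable second: 2038-01-19T03:14:07Z

rollover = int(datetime(2038, 1, 19, 3, 14, 8, tzinfo=timezone.utc).timestamp())
assert rollover > INT32_MAX          # one second past the 32-bit range

try:
    struct.pack(">i", rollover)      # packing into an int32 field fails
except struct.error:
    print("does not fit in a signed 32-bit time_t")

struct.pack(">q", rollover)          # an int64 field holds it comfortably
```

Which is both sides of the argument at once: int64 systems avoid the problem, while 32-bit consumers of the attribute inherit a range check that the character encodings defer until the year 10000.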

Padding, endian conversion, etc., all need time. 

Or, to summarize: the only argument that I can see is saving a few octets.
If you want to do this, encode in PER for example.

PER of the character string will not reduce it to 4 or 5 octets.

I was thinking of PER for the whole SignedData, or some completely
different format removing all unnecessary tags, encapsulations, etc. 


5  Security Considerations

   This specification does not introduce any new security considerations
   beyond those already discussed in [CMS].

CMS has no security considerations concerning the signingTime attribute.
Anyway, in the following you are doing quite the contrary, i.e., you
add new considerations.

Okay.  This is not the point I wanted to make, but I can see how you can 
interpret it that way.  I'll delete the paragraph.

   Use of the updated signing-time attribute does not necessarily
   provide confidence in the time that the signature value was produced.
   Therefore, acceptance of a purported signing time is a matter of a
   recipient's discretion.  RFC 3161 [TSP] specifies a protocol for
   obtaining time stamps from a trusted entity.

   The original signing-time attribute defined in [CMS] has the same
   semantics as the updated signing-time attribute specified in this
   document.  If both of these attributes are present, they SHOULD

Does the SHOULD assume that, absent good reasons, the data should be
identical, or that a client should perform a comparison? If you don't
assume any work to be done by the client, you should mention that nothing
can be said about the two values.

   provide the same date and time.

At least, if both are present, the only vaguely valid argument about
saving space vanishes, and CPU cycles are also needed to skip
or parse the extra attribute.

RFC 3369 has a lot of text saying that there must be only one occurrence
of the signingTime attribute and only one value.

With this new specification you now add a second occurrence. Does this
mean that you consider the existing 3369 spec too strong?

No.  I was trying to accommodate a situation where the signature would be 
checked by two recipients, one that prefers signing-time and one that 
prefers signing-time2.  I cannot see a better way to handle this situation.

At least, as Peter Gutmann says, you have lost all the space gain; as the
other Peter noted, implementations would probably need to add both
attributes.

Is there a particular reason for the SHOULD in the following? 

   The original signing-time attribute defined in [CMS] has the same
   semantics as the updated signing-time attribute specified in this
   document.  If both of these attributes are present, they SHOULD
   provide the same date and time.

Peter