
Re: draft-housley-binarytime-00.txt

2004-09-13 13:20:43


I think you have misunderstood some of the remarks:

The new text adds the syntax of the existing signingTime attribute
to the new one, thus making the new one a superset.

The suggestion was to add a binary time to the existing one
(assuming that anything should be done at all), but certainly not to
reintroduce the options of the existing attributes.

I think this is an unacceptable approach. It could lead to decode errors in implementations that support the signing-time attribute that is defined in CMS.

There is no need at all to introduce these options, they serve
nothing at all, since the other attribute exists.

We have already had this debate. At least two people see potential value in the BinaryTime. Clearly there is not a ground swell of support. This is the reason that the "Status of this Memo" section indicates that this will become an Experimental RFC. In this way, we can find out if there is any value from implementations. If no implementations emerge, then we can let it drop. On the other hand, if implementors find it useful, then a standards-track RFC can follow later.

But I'll address the statements of the text (a second time).

1.1  BinaryTime

   Many operating systems represent date and time as an integer.  This
   document specifies an ASN.1 type for representing a date and time in
   a manner that is compatible with these operating systems.  This
   approach has several advantages over the UTCTime and GeneralizedTime

Not many systems represent date and time in BER as far as I remember.

I do not understand this comment. The quoted text says that operating systems use an integer, not a character string. BER (or other encoding) is going to be applied in either case. This is essential to resolve endian issues at a minimum.

   First, a BinaryTime value is smaller than either a UTCTime or a
   GeneralizedTime value.

True, but very weak compared to the size of the document, ...
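To make the size claim concrete, here is a rough sketch (mine, not from the draft) comparing the DER encoding sizes of the same instant, 2004-09-13 13:20:43 UTC, as an INTEGER versus the UTCTime and GeneralizedTime character strings. The epoch-seconds value assumes the 1970-01-01 epoch.

```python
# Rough size comparison of DER encodings of one instant,
# 2004-09-13 13:20:43 UTC (assumed 1970-01-01 epoch).
seconds = 1095081643  # seconds since 1970-01-01T00:00:00Z

# DER INTEGER: tag (1) + length (1) + minimal two's-complement content.
# The formula below is correct for positive values.
int_content = (seconds.bit_length() // 8) + 1
der_integer = 2 + int_content

# UTCTime "040913132043Z" and GeneralizedTime "20040913132043Z":
# tag (1) + length (1) + one octet per character.
der_utctime = 2 + len("040913132043Z")
der_generalizedtime = 2 + len("20040913132043Z")

print(der_integer, der_utctime, der_generalizedtime)  # 6 15 17
```

So the integer form saves roughly nine to eleven octets per value.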

   Second, in many operating systems, the value can be used without
   conversion.  The operating systems that do require conversion can do
   so with straightforward computation.

You need at least some conversion from a BER encoded variable length
integer to some value. You don't give any indication of why you would
want to compare the date with some local integer value.

Many values can be used directly. If the endian ordering is different on a particular system, then straightforward manipulation is needed. If the epoch is different, addition or subtraction is needed to compensate. If the granularity is something other than seconds, then multiplication or division is needed to compensate. To me, these are "straightforward computation."

I can state all of this if you like:

   Second, in many operating systems, the value can be used without
   conversion.  The operating systems that do require conversion can do
   so with straightforward computation.  If the endian ordering is different
   than the ASN.1 representation of an INTEGER, then straightforward
   manipulation is needed.  If the epoch is different than the one chosen
   for BinaryTime, addition or subtraction is needed to compensate.  If the
   granularity is something other than seconds, then multiplication or
   division is needed to compensate.  Also, padding may be needed to
   convert the variable-length ASN.1 encoding of INTEGER to a fixed-
   length value used in the operating system.
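The four adjustments listed in the proposed text can be sketched as follows. This is a hypothetical illustration, not code from the draft; the `epoch_offset` and `ticks_per_second` parameters are my assumptions for how a particular system might differ.

```python
# Hypothetical sketch of the "straightforward computation" described
# above: turning the content octets of a DER-encoded BinaryTime
# INTEGER into a local 32-bit time value.
def binarytime_to_local(content: bytes,
                        epoch_offset: int = 0,
                        ticks_per_second: int = 1) -> int:
    # Endian handling: ASN.1 INTEGER content octets are big-endian
    # two's complement; int.from_bytes resolves any mismatch with
    # the local byte order.
    seconds = int.from_bytes(content, byteorder="big", signed=True)
    # Epoch adjustment: add or subtract if the local epoch differs.
    seconds += epoch_offset
    # Granularity adjustment: multiply if the local clock counts
    # something finer than seconds.
    value = seconds * ticks_per_second
    # Padding: widen the variable-length encoding to a fixed 32-bit
    # field as an operating system might store it.
    return value & 0xFFFFFFFF

# Four content octets for 2004-09-13 13:20:43 UTC.
print(binarytime_to_local(bytes([0x41, 0x45, 0x9E, 0xAB])))  # 1095081643
```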

Comparison of a date/time value in a protocol to the current time from the operating system seems very obvious to me.

As far as I remember, date comparisons have to be made if
you want to check certificates. In this case, the logic to
convert a local time value to a GeneralizedTime already exists
on the machine. Of course, if you assume that no certs are used
at all, ... then you might still save more octets by reducing
the SignerInfo structure.

The certificates and CRLs are optional in SignedData. The SignerInfo sid field can be used to identify a public key that is not embedded in a certificate, such as a trust anchor.

   Third, date comparison is very easy with BinaryTime.  Integer
   comparison is easy, even when multi-precision integers are involved.
   Date comparison with UTCTime or GeneralizedTime can be complex when
   the two values to be compared are provided in different time zones.

There are no time zones involved in the signingTime attribute.
One change in RFC 3369 vs. 2630 was to uppercase the MUST.

Signing-time is not the only possible use of BinaryTime. It is the one specified in the document. However, if the ASN.1 type is useful, then it will start appearing in other places. This would be an indication that the Experimental RFC is useful.
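The comparison complexity the quoted draft text alludes to can be illustrated with a small sketch (mine, not from the draft): two GeneralizedTime strings with different zone offsets name the same instant, yet naive string comparison suggests otherwise, while the integer forms compare directly. The minimal parser below handles only the two offset forms used in the example.

```python
# Two GeneralizedTime values naming the same instant,
# written with different time zone offsets.
from datetime import datetime, timedelta, timezone

a = "20040913152043+0200"
b = "20040913132043Z"

def parse_generalized(s: str) -> datetime:
    # Minimal illustrative parser: handles only "Z" and "+/-HHMM".
    if s.endswith("Z"):
        return datetime.strptime(s[:-1], "%Y%m%d%H%M%S").replace(
            tzinfo=timezone.utc)
    body, off = s[:-5], s[-5:]
    delta = timedelta(hours=int(off[1:3]), minutes=int(off[3:5]))
    if off[0] == "-":
        delta = -delta
    return datetime.strptime(body, "%Y%m%d%H%M%S").replace(
        tzinfo=timezone(delta))

print(a < b)                                         # False: string order misleads
print(parse_generalized(a) == parse_generalized(b))  # True: same instant
```

With BinaryTime both values would be the same integer, so a plain integer comparison suffices.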

 String comparison is as easy as integer comparison.

In 25 years, time definitions of 32-bit machines may become difficult
to compare with an integer. Nothing guarantees you that the local
time definitions will simply shift or anything else.

I think this is addressed by my proposed text above.

The textual representation of GeneralizedTime in Zulu holds at least
a few years more, and beyond 9999 there is already an RFC :-)

The integer representation will not have trouble in 10000 ;-)

   This is a rare instance where both memory and processor cycles are
   saved.

Processor cycles are not saved, since soon, i.e., in about 25 years,
you have to check whether you are beyond the epoch, etc. So you may
need at least some (also rather simple) logic, as with the adjustments
of UTCTime.

I do not see this one. Some operating systems already use int64 to represent time.

Or, to sum up: the only argument that I can see is saving a few octets.
If you want to do this, code in PER, for example.

PER of the character string will not reduce it to 4 or 5 octets.
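A back-of-the-envelope check (an assumption-laden sketch of mine, not from the draft or the PER specification) supports this: even if unaligned PER could pack each character of the restricted UTCTime alphabet into the minimum number of bits, the 13-character string stays well above 4 or 5 octets, while the integer fits in 4.

```python
# Rough lower bound on a bit-packed UTCTime string versus the
# integer content octets.  The 11-symbol alphabet (digits plus 'Z')
# is an illustrative assumption for the packing limit.
import math

chars = len("040913132043Z")                      # 13 characters
alphabet = 11                                     # digits 0-9 plus 'Z'
bits_per_char = math.ceil(math.log2(alphabet))    # 4 bits each at best
per_octets = math.ceil(chars * bits_per_char / 8)

integer_octets = math.ceil((1095081643).bit_length() / 8)

print(per_octets, integer_octets)  # 7 4
```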

5  Security Considerations

   This specification does not introduce any new security considerations
   beyond those already discussed in [CMS].

CMS has no security considerations concerning the signingTime attribute.
Anyway, in the following you are doing quite the contrary, i.e., you
add new considerations.

Okay. This is not the point I wanted to make, but I can see how you can interpret it that way. I'll delete the paragraph.

   Use of the updated signing-time attribute does not necessarily
   provide confidence in the time that the signature value was produced.
   Therefore, acceptance of a purported signing time is a matter of a
   recipient's discretion.  RFC 3161 [TSP] specifies a protocol for
   obtaining time stamps from a trusted entity.

   The original signing-time attribute defined in [CMS] has the same
   semantics as the updated signing-time attribute specified in this
   document.  If both of these attributes are present, they SHOULD

Does the SHOULD assume that, absent good reasons, the data should be
identical, or that a client should perform a comparison? If you don't
assume any work to be done by the client, you should mention that
nothing can be said about the two values.

   provide the same date and time.

At least, if both are present, the only vaguely valid argument about
savings of space vanishes, and CPU cycles are also necessary to skip
or parse the second value.

RFC 3369 has a lot of text saying that there must only be one occurrence
of the signingTime attribute and only one value.

With this new specification you now add a second occurrence. Does this
mean that you consider the existing 3369 spec too strong?

No. I was trying to accommodate a situation where the signature would be checked by two recipients, one that prefers signing-time and one that prefers signing-time2. I cannot see a better way to handle this situation.

Does someone remember the reason why the 3369 spec says that
all dates between 1950 and 2050 MUST be coded in UTCTime?

Yep.  That is the rule in RFC 2459.

Is it because of existing systems or in order to have a
canonical form, i.e., like DER?
With the proposed spec, identical information can have two different
encodings.

At the time, most implementations could not handle GeneralizedTime at all. This technique provided a transition period.