At 13:13 12/11/04 -0500, you wrote:
The IESG has approved the following document:
- 'BinaryTime: An alternate format for representing date and time in ASN.1 '
<draft-housley-binarytime-02.txt> as an Experimental RFC
This document has been reviewed in the IETF but is not the product of an
IETF Working Group.
The IESG contact person is Steve Bellovin.
Technical Summary
This protocol provides a means to represent time in ASN.1 as an integral
number of seconds since the epoch (00:00:00 UTC, January 1, 1970). This
avoids the well-known problems with comparison of timestamps that are
given with respect to some particular timezone.
Protocol Quality
Steve Bellovin has reviewed this document for the IESG.
RFC Editor Note:
Section 2, old text:
BinaryTime ::= INTEGER
New text:
BinaryTime ::= INTEGER (0..MAX)
OLD:
The integer value is the number of seconds after midnight UTC,
January 1, 1970.
NEW:
The integer value is the number of seconds, ignoring leap seconds,
after midnight UTC, January 1, 1970.
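[A rough illustration, not part of the draft: under the revised text, a
BinaryTime value coincides with a POSIX timestamp, which by definition
also ignores leap seconds. The helper name below is hypothetical.]

```python
import calendar
from datetime import datetime, timezone

def to_binary_time(dt: datetime) -> int:
    """Hypothetical helper: encode a UTC date-time as a BinaryTime
    integer, i.e. seconds after the epoch, ignoring leap seconds
    (the POSIX convention)."""
    return calendar.timegm(dt.utctimetuple())

# The epoch itself encodes as 0.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(to_binary_time(epoch))  # 0

# Exactly one calendar day later is always 86400 in this count,
# even on a day that actually contained a leap second.
print(to_binary_time(datetime(1970, 1, 2, tzinfo=timezone.utc)))  # 86400
```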
This slipped under my radar until this announcement.
Has there been detailed discussion of leap second issues? What exactly
does the revised text "ignoring leap seconds" actually mean (I think I can
guess, but I also think there's some room for misinterpretation here)? Has
any consideration been given to conversion between the integer timestamp
and more conventional representations involving dates and times; if so, is
that documented?
My view is that leap seconds are a small detail with great potential
for causing confusion. I've been involved in some discussion of a
date-time library for a programming language (Haskell) [1]; that
discussion is currently stalled because of unresolved leap second issues.
A particular concern of mine is that if a simple count of elapsed seconds
is used, then there is no way to define an algorithm for accurately
determining the correspondence between this binary timestamp and (say) ISO
8601 date-time representations. Leap second occurrences are not known very
far in advance.
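[For concreteness, a sketch of such a correspondence under the POSIX
convention of ignoring leap seconds; the helper names are my own
invention, not from the draft. Note that a genuine leap second such as
1998-12-31T23:59:60Z has no distinct integer in this count, which is
exactly the ambiguity at issue.]

```python
from datetime import datetime, timezone

def binary_time_to_iso(t: int) -> str:
    """Render a BinaryTime-style integer as an ISO 8601 UTC string,
    assuming the leap-second-ignoring POSIX mapping."""
    return datetime.fromtimestamp(t, tz=timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ")

def iso_to_binary_time(s: str) -> int:
    """Inverse mapping under the same assumption."""
    dt = datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc)
    return int(dt.timestamp())

# Round trip for an ordinary instant.
s = "2005-01-01T00:00:00Z"
t = iso_to_binary_time(s)
print(t)                      # 1104537600
print(binary_time_to_iso(t))  # 2005-01-01T00:00:00Z
```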
My own preference [2] for a binary time representation is to use a pair of
numbers: a day count and a second (or sub-second) count. I feel this
adequately handles the vast majority of use cases for timestamps in
applications that do not care or know about leap seconds (i.e. where the
inaccuracy of ignoring leap seconds is not considered significant), or
where the intervals concerned are generally much less than a day and
hence are measured in terms of time of day only, while still allowing
specialized applications that do know about such things to make the
appropriate adjustments when working with timestamps and time intervals.
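[To sketch what I mean, an illustration only; the epoch choice and
function names are mine, not from any specification. Most days have
86400 seconds within the day, but a leap-second day has 86401, and the
pair form can represent that extra second without ambiguity.]

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)  # assumed epoch for this sketch

def to_day_seconds(d: date, secs_in_day: int) -> tuple:
    """Encode a timestamp as (day count since epoch, seconds within
    that day). Leap-second-aware applications may use 86400 as a
    valid seconds value on a leap-second day."""
    return ((d - EPOCH).days, secs_in_day)

def from_day_seconds(days: int, secs: int) -> tuple:
    """Inverse: recover the calendar day and the seconds-within-day."""
    return (EPOCH + timedelta(days=days), secs)

# The day after the epoch, at midnight:
print(to_day_seconds(date(1970, 1, 2), 0))  # (1, 0)

# The leap second 1998-12-31T23:59:60Z is representable as second
# 86400 of that day, distinct from (next_day, 0):
leap = to_day_seconds(date(1998, 12, 31), 86400)
```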
I submit these as considerations for when the experimental status of this
specification is reviewed, and request that the resolutions be documented
if the specification moves from experimental to standards-track status.
#g
--
[1] Approximate locus of Haskell library discussion:
http://www.haskell.org/pipermail/libraries/2003-June/001093.html
http://www.haskell.org/pipermail/libraries/2003-June/001157.html
[2] Pretty much my own current view in that debate...
http://www.haskell.org/pipermail/libraries/2003-June/001211.html
http://www.haskell.org/pipermail/libraries/2003-November/001541.html
(but there are other views "nearby")
------------
Graham Klyne
For email:
http://www.ninebynine.org/#Contact
_______________________________________________
Ietf mailing list
Ietf(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/ietf