On 4-mei-04, at 12:25, Brett Watson wrote:

>> Going for an 'epoch' system (eg seconds since 17th March 1973 or
>> whatever) is (IMHO) more problematic since you start getting
>> affected by leap-seconds (or at least having to worry about how not
>> to be affected by them) - it's also less human readable.
> I won't argue with your "less human readable" assessment,
IMO, this is only a problem if EVERYTHING else is also human readable.
As soon as there is binary data of any kind, you need a tool that can
handle it for even the most superficial glance at the stuff, and then
the two lines of code that convert the date to something human
readable aren't an issue at all.
> but I will point out that it is correspondingly more machine
> readable. The grammar for a valid ISO date of the form you specified
> may seem trivial, but it isn't once you check for all conditions
> (appropriate month lengths and leap year rules).
Yes, this is a problem.
Epoch offset *is* trivial, because every valid number corresponds to a
unique valid date - a very desirable property.
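To make the contrast concrete, here is a rough sketch of what "checking
all conditions" amounts to. I don't know which exact form was specified
earlier in the thread, so assume YYYYMMDDHHMMSS for the sake of
illustration:

```python
import re
from datetime import datetime, timedelta, timezone

def valid_iso_date(s):
    """Validate a YYYYMMDDHHMMSS string, including month lengths and
    Gregorian leap year rules."""
    m = re.fullmatch(r"(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})", s)
    if not m:
        return False
    year, month, day, hour, minute, second = map(int, m.groups())
    if not 1 <= month <= 12:
        return False
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    # Leap year rule: divisible by 4, except centuries, except those
    # divisible by 400.
    if month == 2 and year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        days_in_month[1] = 29
    # Note: second < 60 quietly rejects "xx:xx:60", which is exactly
    # the leap second case the thread is arguing about.
    return (1 <= day <= days_in_month[month - 1]
            and hour < 24 and minute < 60 and second < 60)

def epoch_to_date(seconds):
    """Decode an epoch offset: every integer maps to a unique date, so
    there is nothing to validate."""
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=seconds)
```

Note how the epoch direction needs no validation at all, while the ISO
direction has to encode the whole calendar.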
But don't think that the ISO format is any simpler with regard to leap
seconds!
The only simple way to deal with leap seconds is not to use them.
But this leads to problems when you have to interact with systems that
do use them. So I think this isn't a solution either.
But maybe there is some middle ground to be found by keeping the date
and the time apart. So use epoch style for both independently: a day
number, and a number of seconds since midnight. This should work well
both when the local clock isn't all that good and is synchronized with
UTC once in a while, and when there is very accurate UTC support. We
can even make a rule that forbids sending mail at leap second times to
keep implementations simple. :-)
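A minimal sketch of that split representation, with the day number and
the seconds-since-midnight count kept as two independent fields. The
choice of 1970-01-01 as day zero is my assumption here; any fixed epoch
day would do:

```python
from datetime import date, timedelta

EPOCH_DAY = date(1970, 1, 1)  # assumed day zero for this sketch

def encode(d, hour, minute, second):
    """Return (day_number, seconds_since_midnight)."""
    day_number = (d - EPOCH_DAY).days
    # On a day with a positive leap second the seconds count can
    # legitimately reach 86400; the day number is unaffected, which is
    # the point of keeping the two fields apart.
    return day_number, hour * 3600 + minute * 60 + second

def decode(day_number, seconds):
    """Inverse of encode."""
    d = EPOCH_DAY + timedelta(days=day_number)
    hour, rest = divmod(seconds, 3600)
    minute, second = divmod(rest, 60)
    return d, hour, minute, second
```

A poorly synchronized clock then only corrupts the seconds field, never
the day, and a leap second never propagates into the day count.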
> (Assume you have a convenient list of leap second instances
> available in both cases.)
I think this is exactly the one thing we can't reasonably require.