
Re: Predictable Internet Time

2017-01-01 13:24:35
There are two separate problems.

The first problem is ensuring that every machine has a common understanding of
the current instant in time. In Windows, as in UNIX, this is represented as the
number of seconds that have elapsed since a particular fixed date and time.

The main issues there are precision and extent. The original UNIX and Windows
time formats used 32-bit integers representing seconds. These have been
replaced with 64-bit representations expressed in ticks. For example, in
Windows:

ticks
Type: System.Int64
<https://msdn.microsoft.com/en-us/library/system.int64(v=vs.110).aspx>

A date and time expressed in the number of 100-nanosecond intervals that
have elapsed since January 1, 0001 at 00:00:00.000 in the Gregorian
calendar.
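
To make the two representations concrete, here is a rough Python sketch of the
conversion between a UNIX timestamp (seconds since 1970-01-01) and a .NET-style
tick count; the constant and function names are my own illustration, not any
standard API:

# Illustrative only: convert between UNIX seconds and .NET-style ticks.
EPOCH_GAP_SECONDS = 62_135_596_800  # 0001-01-01 00:00 to 1970-01-01 00:00
TICKS_PER_SECOND = 10_000_000       # one tick is 100 nanoseconds

def unix_to_ticks(unix_seconds):
    """Seconds since the UNIX epoch -> 100 ns ticks since year 1."""
    return int((unix_seconds + EPOCH_GAP_SECONDS) * TICKS_PER_SECOND)

def ticks_to_unix(ticks):
    """Inverse conversion, back to seconds since the UNIX epoch."""
    return ticks / TICKS_PER_SECOND - EPOCH_GAP_SECONDS

# 2017-01-01 00:00:00 UTC is 1483228800 in UNIX time.
assert unix_to_ticks(1483228800) == 636_188_256_000_000_000
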
The second problem is the representation of time, that is, the conversion to
a human-readable form. This is complicated by the fact that many systems use
human-readable forms for communication between machines, and many of those
systems fail when presented with a leap second.
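
To see how little it takes to fail, note that a perfectly ordinary date-time
constructor will simply reject the 61st second. In Python, for instance:

from datetime import datetime

# The leap second written the way the standard requires it, 23:59:60.
try:
    datetime(2016, 12, 31, 23, 59, 60)
except ValueError as err:
    print("leap second rejected:", err)  # "second must be in 0..59"

Any protocol implementation built on such a library falls over the moment it
is handed a leap second.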

Any code path that is exercised for at most one second per year is a code
path that is unlikely to be properly tested. It is thus code that is likely
to fail.

This is one of the many reasons that leap seconds are such a terrible idea.
The other is that they are unpredictable. PIT is designed to deal with the
second problem.

To have a complete solution, the way forward would be to require systems
using PIT to use the 'time smear' approach that was pioneered by Akamai and
is now used by Amazon, Google, etc., albeit in slightly different and
non-standard ways.

Using time smearing, a program will never emit the time value 23:59:60 as
demanded by the standard. Instead, the leap second is added to the machine's
clock gradually over the course of 20 or 24 hours. This avoids the need to
emit a time value that could cause a system to fail, at the cost of a modest
difference between the purported and the actual value.
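
For the curious, a minimal sketch of a linear 24-hour smear follows; the
window, constants and names are my own illustration rather than Google's or
Amazon's exact scheme:

LEAP_INSTANT = 1483228800  # UNIX label of 2017-01-01 00:00:00 UTC
SMEAR_WINDOW = 86400       # absorb the extra second over the prior 24 hours
SMEAR_START = LEAP_INSTANT - SMEAR_WINDOW

def smeared_clock(true_elapsed):
    """Map true elapsed seconds to a clock that never shows 23:59:60.

    true_elapsed counts actual SI seconds since the UNIX epoch, including
    the inserted leap second. Before the window the two clocks agree; inside
    it the reported clock runs fractionally slow; afterwards it is exactly
    one second behind the uncorrected count, i.e. the leap has been absorbed.
    """
    if true_elapsed <= SMEAR_START:
        return true_elapsed
    if true_elapsed >= LEAP_INSTANT + 1:
        return true_elapsed - 1.0
    fraction = (true_elapsed - SMEAR_START) / (SMEAR_WINDOW + 1)
    return true_elapsed - fraction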


Oh, and if you think leap seconds are bad in UTC, they are even worse once
the complications of time zones are considered. London, New York and
Mountain View all see the leap second at different local clock times.



On Sun, Jan 1, 2017 at 10:58 AM, <sandy(_at_)weijax(_dot_)net> wrote:

   The (human) universe has competing time standards.  How much of this
problem is mitigated by time services?  I mean, if every device that cared
about a common time was kept in synch by timeservers, and all the
timeservers updated together, wouldn't that fix the problem? If so, the
problem is really that not everything has access to timeservers.  Many
devices aren't always on the net, and when they are they have better things
to spend bandwidth on than constant time updates.
   How much of this problem can be solved by better time services, and how
much of the problem cannot be?

Sandy Wills
interested lurker


----- Original Message -----
From:
"Phillip Hallam-Baker" <phill(_at_)hallambaker(_dot_)com>

To:
"IETF Discussion Mailing List" <ietf(_at_)ietf(_dot_)org>
Cc:

Sent:
Sat, 31 Dec 2016 16:32:20 -0500
Subject:
Predictable Internet Time



Well, the astronomers are at it again. They are messing about with time,
which is a terrible idea. Specifically, they are adding another leap second.

There are many arguments against leap seconds and the arguments in favor
really amount to the astronomers declaring that they are going to rub our
noses in it for as long as we let them. So here is an alternative proposal.

The single biggest problem with UTC is that the decisions to add seconds
are made by a committee a few months in advance of the change. And this
results in time becoming unpredictable because it is never possible to know
if we are dealing with a corrected or uncorrected time. For this reason, I
have been using TAI as the basis for time representation in my recent
protocol proposals. This reduces but does not eliminate the confusion.

Leap seconds occur at a rate of roughly ten per 25 years, so not correcting
would mean a drift of about 40 seconds over a century.


So, to remove the confusion entirely while avoiding the need for a
discontinuous adjustment of the drift between UTC and TAI, I propose
Predictable Internet Time (PIT) as follows.

PIT = TAI + Delta(y)

where

  Delta(y) = int(37 + 0.4 * (y - 2016))    for y < 2116
  Delta(y) = 77 + (UTC-Delta(y - 100))     for y >= 2116

For years from 2116 onward, PIT would make use of the table of UTC
corrections with a delay of one century. This would enable manufacturers to
build devices with built-in correction tables for a design life of one
century, which should meet everyone's needs except Danny Hillis, who is
building a clock anyway.
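
Expressed as code, one reading of the definition above is the following
Python sketch; the table of future UTC corrections is of course a placeholder
here, and the function names are mine:

# Hypothetical sketch of the PIT offset defined above.
UTC_CORRECTIONS = {}  # year -> leap seconds accumulated by UTC after 2016 (placeholder)

def pit_delta(year):
    """Whole seconds to add to TAI to obtain PIT for a given year."""
    if year < 2116:
        return int(37 + 0.4 * (year - 2016))
    # From 2116 onward, replay the published UTC corrections a century late.
    return 77 + UTC_CORRECTIONS.get(year - 100, 0)

def tai_to_pit(tai_seconds, year):
    """Convert a TAI second count to PIT by adding the offset for that year."""
    return tai_seconds + pit_delta(year)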


A conversion to PIT would be feasible for most governments as it is highly
unlikely that the variance between UTC and PIT would ever be greater than a
few seconds.

The big problem with planning such a transition in the past is that the
alternatives on the table have been stopping further leap seconds completely
or continuing the UTC scheme. That would be a recipe for disaster unless the
EU and US both adopted TAI+36 seconds or whatever. We could end up with a
situation in which one side digs in its heels and refuses to change, leaving
us with a 'give us back our eleven days' type of correction.

Nor is changing the definition of UTC to effect a simultaneous change a
feasible solution, because to do so would be to demand that the astronomers
accept a diminution of their own prestige.

With a suitable definition, PIT could create a condition in which it would
only take a decision by one major government to force a change on the
astronomers. The commercial advantages of PIT over UTC are obvious - fewer
things are going to break for no good reason. That is an argument that
every politician is willing to listen to.

While a variance of a second or three between New York and London might be
inconvenient, the inconvenience is going to be a lot worse for the side not
using PIT, which would have all the inconvenience of unpredictable leap
seconds plus the inconvenience of the difference. The pressure on other
governments to adopt PIT is going to be significant. It is hard to see that
there would be any real constituency for the UTC approach; the astronomers
are much more interested in buying telescopes than time in any case.


We could tweak the definition so that the corrections kick in sooner, but it
should be possible to build for a minimum of a fifty-year service life.

