
Re: What is the right way to do Web Services discovery?

2016-11-23 07:09:17
On Tue, Nov 22, 2016 at 2:42 PM, Jared Mauch
<jared@puck.nether.net> wrote:

(Not going to address things where I perhaps disagree, but focus on areas
of concern …)

On Nov 22, 2016, at 10:04 AM, Phillip Hallam-Baker
<phill@hallambaker.com> wrote:

I am asking here as there seems to be a disagreement between HTTP land
and DNS land.

Here are the constraints as I see them:

0) For any discovery mechanism to be viable, it must work in 100% of
cases. That includes IPv4, IPv6, and either of them behind NAT.

I think it must be sufficiently robust, where that is defined as 98+%.
Studies on operational networks seem to indicate the threshold is in
the 95-99% range. Saying 100% is a goal, but perhaps unrealistic.


Let us remind ourselves of what the purpose of SRV records originally was.
They were developed to improve reliability by enabling fault tolerance.

In the commercial world, systems have been expected to be four-nines
reliable as a matter of course for a decade. That isn't even state of the
art; it is the baseline. There are plenty of services built for 99.9999%
uptime. And that is actually essential, because if you have a system that
can fail at multiple points, the errors accumulate and pretty soon you
have a system that is visibly unreliable.
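
To put numbers on that: availabilities multiply across independent
failure points, so a request path touching five components that are
each 99.9% available is only about 0.999^5 ≈ 99.5% available end to
end. Every additional fallible step in discovery counts against the
whole service.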

Any Web Service discovery architecture has to be 100% compatible with the
legacy infrastructure. If it isn't, it is going to reduce reliability,
not increase it.

Building in SRV means that the cost of achieving 99.99% uptime is reduced.
But reducing the SLA to 98% would be vastly cheaper.
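
To make the failover point concrete, here is a minimal Go sketch of the
client side: query SRV first, order the targets by priority, and fall
back to the bare hostname on the well-known port when no SRV record
exists. The service label and port here are illustrative, not a proposal.

    package main

    import (
        "fmt"
        "net"
        "sort"
    )

    // discover returns candidate endpoints for a service, in the
    // order a client should try them.
    func discover(domain string) []string {
        // RFC 2782 lookup: _https._tcp.<domain>
        _, srvs, err := net.LookupSRV("https", "tcp", domain)
        if err != nil || len(srvs) == 0 {
            // Legacy fallback: no SRV record, so use the domain
            // itself on the well-known port. This is what keeps
            // the mechanism compatible with deployed zones.
            return []string{net.JoinHostPort(domain, "443")}
        }
        // Lower priority values are tried first (RFC 2782).
        sort.Slice(srvs, func(i, j int) bool {
            return srvs[i].Priority < srvs[j].Priority
        })
        var out []string
        for _, s := range srvs {
            out = append(out, net.JoinHostPort(s.Target, fmt.Sprint(s.Port)))
        }
        return out
    }

    func main() {
        fmt.Println(discover("example.com"))
    }

A real client would also do the weight-based random selection within
each priority class that RFC 2782 describes; it is omitted here for
brevity.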


On Tue, Nov 22, 2016 at 5:38 PM, Mark Andrews <marka@isc.org> wrote:


In message
<CAMm+LwgtJuLdL_RKJNSVNGODGj8D25nfj0jkhnBLFS=aaXG+rA@mail.gmail.com>,
Phillip Hallam-Baker writes:



1) Attempting to introduce new DNS records is a slow process. For
practical purposes, any discovery mechanism that requires more than
SRV + TXT is not going to be widely used.

Absolute total garbage.

Introducing a new DNS record isn't slow.  It takes a couple of weeks.
Really.  That's how long it takes to allocate a code point.

RFC 1034 compliant recursive servers and resolver libraries should
handle it the moment you start to use it.


Well, until you can persuade the ISPs to provide RFC 1034 compliant
interfaces in their web configuration tools, the majority of sites will
not be able to use a new record.

Allocating records isn't the problem. The Internet is defined by running
code, not allocated code points. Only a minority of network admins
actually edit zone files these days.

CAA was specified several years ago now. We are only just getting to the
point where it is viable.


It isn't my job to get your specifications deployed by having my systems
break unless people upgrade. It isn't anyone's job.

Besides which, we already have an RFC that says use SRV+TXT - RFC 6763.
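
For reference, the SRV+TXT pattern RFC 6763 describes amounts to two
queries against the same owner name: SRV for the host and port, TXT for
key=value service parameters. A rough Go sketch, with an illustrative
service label:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Host and port come from the SRV record
        // (_example-svc._tcp.example.com per RFC 2782 naming;
        // LookupSRV builds that name itself).
        _, srvs, err := net.LookupSRV("example-svc", "tcp", "example.com")
        if err != nil {
            panic(err)
        }
        for _, s := range srvs {
            fmt.Printf("host=%s port=%d\n", s.Target, s.Port)
        }
        // Service parameters come from TXT records at the same
        // name, conventionally key=value pairs (RFC 6763, sect. 6).
        txts, err := net.LookupTXT("_example-svc._tcp.example.com")
        if err == nil {
            for _, t := range txts {
                fmt.Println("param:", t)
            }
        }
    }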