ietf

Re: What is the right way to do Web Services discovery?

2016-11-22 13:53:30
I assume y'all have read RFC 6763...

On Tue, Nov 22, 2016 at 2:42 PM, Jared Mauch <jared@puck.nether.net> wrote:
(Not going to address things where I perhaps disagree, but focus on areas of 
concern …)

On Nov 22, 2016, at 10:04 AM, Phillip Hallam-Baker <phill@hallambaker.com> wrote:

I am asking here as there seems to be a disagreement in HTTP land and DNS 
land.

Here are the constraints as I see them:

0) For any discovery mechanism to be viable, it must work in 100% of cases. 
That includes IPv4, IPv6, and either of those behind NAT.

I think it must be sufficiently robust, where that’s defined as 98+%.  Studies 
of operational networks seem to indicate the threshold is in the 95-99% 
range.  Saying 100% is a goal, but perhaps unrealistic.  Downgrade modes 
don’t necessarily represent an attack, and may be the intent of an 
enterprise.  (I don’t want to argue this point, but raise it for those who 
may not be aware.)


1) Attempting to introduce new DNS records is a slow process. For practical 
purposes, any discovery mechanism that requires more than SRV + TXT is not 
going to be widely used.

2) The apps area seems to have settled on a combination of SRV+TXT as the basis 
for discovery. But right now, how these records are used is left to individual 
protocol designers to decide, which is another way of saying 'we don't have 
a standard'.
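For illustration, a DNS-SD-style SRV+TXT pairing for a hypothetical service 
(all names invented) might look like:

```
; SRV names the target host and port; TXT carries key=value metadata
; (the RFC 6763 convention). "_mysvc" and the values are placeholders.
_mysvc._tcp.example.com.  3600 IN SRV 10 5 443 api1.example.com.
_mysvc._tcp.example.com.  3600 IN TXT "path=/mysvc/v1" "proto=json"
```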

3) The DNS query architecture as deployed works best if the server can 
anticipate further requests. So a system that uses only SRV+TXT allows 
for a lot more optimization than one using a large number of records.

4) There are not enough TCP ports to support all the services one would 
want, and keeping ports open incurs costs. Pretty much the only HTTP 
functionality that Web Services make use of is the URL stem, which 
effectively creates more ports: a hundred different Web services can 
all share port 80.
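The URL-stem-as-port-multiplexer idea in point 4 can be sketched as a trivial 
dispatcher; the service names and handlers below are hypothetical:

```python
# Minimal sketch: many "services" share one listening port by dispatching
# on the URL stem, much as the kernel dispatches on TCP port numbers.
SERVICES = {
    "/calendar": lambda path: "calendar service handled " + path,
    "/mail":     lambda path: "mail service handled " + path,
    "/files":    lambda path: "files service handled " + path,
}

def dispatch(path: str) -> str:
    """Route a request path to the service owning its URL stem."""
    for stem, handler in SERVICES.items():
        # Match the stem exactly, or as a prefix followed by "/".
        if path == stem or path.startswith(stem + "/"):
            return handler(path)
    return "404: no service at " + path
```

A real server would hang this off an HTTP listener; the point is only that the 
stem, not the port number, selects the service.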

5) The SRV record does not specify the URL stem, though. That means the stem 
either has to be specified in some other DNS record (URI, or a path in TXT) 
or has to follow a convention (i.e. .well-known).

I think the historical error was that there wasn’t a Web-property application 
record similar to the MX records we have for other applications, e.g. e-mail.  
This led to many workarounds, including e-mail’s fallback of delivering to a 
host’s address record when no MX exists, on the presumption that was enough.  
It worked well enough to be usable for decades, but it also set into people’s 
minds that HTTP would go directly to the host rather than through an 
indirection layer similar to e-mail’s.
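The two options in point 5 above, carrying the stem in DNS versus relying on a 
fixed convention, could look like the following (records and names are 
hypothetical):

```
; Option A: the stem travels in DNS, e.g. in a TXT key or a URI record
_mysvc._tcp.example.com.  IN TXT "path=/mysvc/v1"
_mysvc._tcp.example.com.  IN URI 10 1 "https://api1.example.com/mysvc/v1"

; Option B: a fixed .well-known convention (RFC 8615), no extra record:
;   https://example.com/.well-known/mysvc
```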

6) Sometimes SRV records don't get through and so any robust service has to 
have a strategy for dealing with that situation.

Think of it as a degrade mode, e.g.: what if MX records were filtered?  At some 
point, you want to exert pain on broken devices and networks to give them an 
incentive to correct unintended behaviors.  One can’t work around problems 
forever; there must be an inherent or implied TTL.  Anyone who carries legacy 
around with them understands.  I have my own personal mistakes from 20+ years 
ago lying around my own infrastructure that will be painful to correct.  One 
should think about how we migrate things like the HTTP application/transport 
away from the individual A/AAAA records (if that’s a desired goal) and how it 
would move further along the path.
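The fallback strategy point 6 calls for might be sketched like this, with the 
SRV lookup injected as a callable so the example stays self-contained; the 
service label and default port are assumptions, and a real client would pass 
in a DNS library's resolver here:

```python
from typing import Callable, Optional, Tuple

def discover(domain: str,
             srv_lookup: Callable[[str], Optional[Tuple[str, int]]],
             default_port: int = 443) -> Tuple[str, int]:
    """Prefer the SRV answer; if the record is missing or filtered,
    degrade to the bare host on a conventional default port."""
    try:
        # "_mysvc" is a hypothetical service label for the sketch.
        answer = srv_lookup("_mysvc._tcp." + domain)
    except Exception:
        answer = None  # treat lookup failure like a filtered record
    if answer is not None:
        return answer
    return (domain, default_port)
```

Injecting the resolver keeps the degrade path explicit and easy to exercise: 
the same code path handles "no SRV record" and "SRV query blocked".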

- Jared