
RE: [DNSOP] Last Call: <draft-ietf-dnsop-onion-tld-00.txt> (The .onion Special-Use Domain Name) to Proposed Standard

2015-08-11 10:08:02
In retrospect, the definition of the “http” and “https” schemes (i.e. RFC 7230) probably should have clearly enumerated which name registries were acceptable for those schemes, so that the following language from RFC 7320 (a BCP) could be invoked against any attempt by an app (Onion or anyone else) to inject its own unique brand of “specialness” into the interpretation of the authority component of its URIs:

Scheme definitions define the presence, format and semantics of an
authority component in URIs; all other specifications MUST NOT
constrain, or define the structure or the semantics for URI
authorities, unless they update the scheme registration itself.

RFC 7230 casually mentions DNS and the “network host table” as name registries that can potentially be used for “http” and/or “https”, but never implies that those two constitute the only members of the set.

If both injecting “specialness” into the URI and updating the “https” scheme itself were viewed as unavailable or unappealing options, then Onion (and anyone else who follows the same path) would be left with no choice but to define its own scheme.
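As a minimal illustrative sketch (not from the thread): to a generic, scheme-agnostic URI parser, an onion authority is just another host. Nothing in the generic URI syntax of RFC 3986 marks it as special, which is why any “specialness” has to live either in the name-resolution layer or in the scheme registration itself:

```python
# Sketch: a generic URI parser treats an .onion authority exactly like
# any other host; the syntax carries no hint of special resolution.
from urllib.parse import urlsplit

parts = urlsplit("https://someonionaddress.onion/index.html")
print(parts.scheme)    # https
print(parts.hostname)  # someonionaddress.onion
```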

- Kevin

From: DNSOP [mailto:dnsop-bounces@ietf.org] On Behalf Of Alec Muffett
Sent: Monday, August 10, 2015 5:25 PM
To: Joe Hildebrand
Cc: Edward Lewis; Ted Hardie; ietf@ietf.org; Richard Barnes; dnsop@ietf.org; Mark Nottingham
Subject: Re: [DNSOP] Last Call: <draft-ietf-dnsop-onion-tld-00.txt> (The .onion 
Special-Use Domain Name) to Proposed Standard


On Aug 10, 2015, at 1:25 PM, Joe Hildebrand <hildjj@cursive.net> wrote:

If the smiley means "they're already deployed, so we don't get to talk about 
whether they're appropriate", then fine, but that's why a bunch of people are 
complaining about the precedent this sets. If the smiley means "this is a good 
protocol design that other people should follow", then I'm not sure we have 
consensus on that.

I apologise that my personal opinion and cheery demeanour appear to have been extrapolated into a couple of contrasting strategic position statements.

To put my personal opinion at greater and clearer length:

In the context of URI schemes, accessing a Tor onion site (currently) over HTTP or HTTPS is precisely *that*: a fetch of HTML or other content that an HTTP or HTTPS request might typically be used to access, without the client software needing to be adapted for Tor access at “Layer 7”.

Such a fetch is functionally just vanilla HTTP/S over an “alternate” transport, the latter generally enabled by a SOCKS proxy or a content-aware VPN.
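A hedged sketch of what that “alternate transport” looks like in practice (the proxy address 127.0.0.1:9050 is Tor’s conventional default and an assumption here, as is the use of the third-party `requests` library):

```python
# Sketch, assuming Tor's SOCKS proxy listens at its default 127.0.0.1:9050.
# "socks5h" means the hostname is resolved by the proxy itself, so the
# .onion name never reaches the local DNS resolver -- the HTTP request
# on top of it is otherwise entirely vanilla.
proxies = {
    "http":  "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# With the third-party `requests` library (plus its SOCKS extra) the
# fetch itself would just be an ordinary GET:
#   import requests
#   requests.get("https://someonionaddress.onion/", proxies=proxies)
```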

Such fetches currently work, have done so for several years, and have been used 
by many, many thousands of users, possibly millions.

Similarly, ssh://user@someonionaddress.onion is equally an extant and functional SSH request to someonionaddress.onion.
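To make the ssh:// case concrete, a hypothetical ~/.ssh/config fragment (Tor’s SOCKS proxy at its conventional 127.0.0.1:9050, OpenBSD netcat as the proxy hop; both are assumptions, not details from the thread) is all the “adaptation” the stock OpenSSH client needs; the SSH protocol exchange itself is unchanged:

```
# Hypothetical ~/.ssh/config fragment: route all .onion hosts through
# Tor's SOCKS proxy (assumed at the default 127.0.0.1:9050).
Host *.onion
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p
```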

Equally, git://someonionaddress.onion/user/project-name.git would not immediately strike me as needing to be forcibly changed to “onion-git://” simply because Git is invoked over an “alternate” transport with an “alternate” name resolution. It currently works, so why break it?

From this observation, my personal opinion of “the proper scheme for an HTTP/S fetch to an Onion address” is something of a duck test:

TEST: if a fetch looks like HTTP/S and quacks like HTTP/S, then I think it should likely be given an HTTP/S scheme.

Conversely: it’s arguable that schemes like “daap” or “rtsp” are also 
“HTTP-based”, and that *they* have special schemes, so perhaps fetches from 
Onion-addresses should “have special schemes” too?

I can sympathise with this argument.  It makes logical sense.

I personally differentiate and resolve this argument in terms of intent, and in terms of client and server software.

“rtsp://”, for instance, is used for streaming and requires magic, RTSP-specific headers; the frontend is something like VLC or iTunes, and the backend requires a special streaming stack.

To me, this smells of specialism.

Equally: if iTunes incorporates a webview and fetches a bunch of web content for human consumption, it likely uses an HTTP/S scheme to do so, rather than a specialist “ituneshttps://” scheme.

This smells of specialist software trying to be a "general-purpose browser".

So, given these conditions:

- if the intent is largely to provide HTML+CSS content to human beings,
- if the client software is most likely a well-known browser (Tor Browser = Firefox) operating in its normal mode,
- …but not necessarily or exclusively a browser…
- and if the server software is something like Apache + {WordPress, Drupal, a bunch of static HTML},

…then under these conditions, again, I apply the duck test. I feel that such 
fetches are HTTP/S and should have that scheme.

Hence I feel that the extant, vanilla HTTP/S schemes are most appropriate for fetching browser content from onion addresses.

The other matters, regarding domain name resolution and generic URI syntax, I 
have already covered at some length.

   - a

*aside: using VLC to RTSP to an onion address will work just fine when SOCKS (etc.) is configured…

—
Alec Muffett
Security Infrastructure
Facebook Engineering
London
