
Re: Last Call: <draft-nottingham-safe-hint-05.txt> (The "safe" HTTP Preference) to Proposed Standard

2014-10-24 14:49:32
Hi

I have read the draft, and I do not support its publication. What worries me 
about it is not speculation about extensions that we’ll be asked to do or 
misuse by content providers. It is that there is no way to use it properly. 
The draft does not specify what “safe” content is and what “unsafe” content 
is, and some people treat this as an advantage. The result is that there’s no 
way for a content provider to know what a user means when their browser emits 
the “safe” hint, and no way for the user to know what kind of content they are 
going to get.
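
For concreteness, here is a minimal sketch (mine, not text from the draft) of what emitting the hint looks like from a client; the draft rides on the Prefer request header (RFC 7240) with a “safe” token, and example.com is just a stand-in for any content site:

    # Minimal sketch: a client sending the "safe" preference.
    # Assumes the third-party "requests" library; example.com is hypothetical.
    import requests

    response = requests.get(
        "https://example.com/videos",
        headers={"Prefer": "safe"},   # the one-bit "safe" hint
    )
    # Two different sites receiving this identical request are free to
    # filter in completely different ways, because the draft never says
    # what "safe" filtering means.
    print(response.status_code)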

Stephen Farrell has made some interesting points about what “safe” might mean 
in other cultures. I think the failure is much closer to home, so let’s assume 
for the sake of argument that everyone affected is a mainstream American 
(although neither Stephen nor I are Americans). So obviously anyone would 
consider porn to be “unsafe”, because we don’t want the children to see it and 
we don’t want it at work. A rabbit teaching the alphabet to kids, OTOH, is “safe” 
([1]). But those are the easy types of content. What about political content? 
What about political content that is non-mainstream? Even inflammatory? Is it 
safe? For whom? A signal is useless if there are no agreed-upon semantics 
behind it, yet the draft punts on attaching any.

Section 3 mentions YouTube. That is actually a perfect example of what I mean. 
Sites like YouTube, deviantArt, Flickr, Wattpad, and even Wikipedia provide 
user-generated content. How are they to decide what is and isn’t “safe”? They 
have several choices:
 - They can ask the contributors to mark their content. Many of them do that. 
When you upload something, they require you to mark it as “mature”, “strictly 
mature”, or not, and you even get to pick one of several categories of mature: 
"nudity", "sexual themes", "violence/gore", "strong language", and 
“ideologically sensitive” ([2]). They can use those indications to filter 
content for people who prefer their content “safe” (a rough sketch of this 
appears after the list). But what happens when a user complains that the 
content they got was not safe even though it doesn’t fall into any of those 
categories? (And “ideologically sensitive”? Anything can fall into that!) 
 - They can assume everything is safe until someone complains. That makes 
sense. Everything that someone objects to is by definition objectionable. 
Pretty soon, only the alphabet-teaching rabbit is considered safe, and people 
have to turn off the hint to get anything done. 
 - They can intelligently (through a combination of computer heuristics and 
human intervention) actually judge all content and arrive at some rational 
definition of safe. There are two problems with that. First, it’s hugely 
expensive. Wikipedia would die if it needed to do that, and I doubt even Google 
can afford to have people watch all of YouTube’s videos to rate them. Second, 
whatever definition of “safe” they come up with, the users may not agree with 
it. 
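
To make the first choice above concrete, here is a rough sketch (all names hypothetical, not any real site’s code) of filtering on contributor-supplied flags when a request carries the safe preference. It also shows where the complaint scenario comes from: anything the contributor left unflagged sails through as “safe”, whatever the user actually meant by the hint.

    # Sketch of choice 1: trust contributor flags, filter when "safe" is asked for.
    # The item structure, MATURE_FLAGS, and list_items are illustrative inventions.

    MATURE_FLAGS = {
        "nudity", "sexual themes", "violence/gore",
        "strong language", "ideologically sensitive",
    }

    def wants_safe(headers):
        # The draft's hint is a "safe" token in the Prefer request header.
        prefer = headers.get("Prefer", "")
        return any(t.strip().lower() == "safe" for t in prefer.split(","))

    def list_items(items, headers):
        if not wants_safe(headers):
            return items
        # "Safe" here can only mean "no mature flag was set by the uploader".
        return [i for i in items if not (i["flags"] & MATURE_FLAGS)]

    items = [
        {"title": "Alphabet rabbit", "flags": set()},
        {"title": "Gory short film", "flags": {"violence/gore"}},
        {"title": "Inflammatory political poem", "flags": set()},  # unflagged, so "safe"
    ]
    print(list_items(items, {"Prefer": "safe"}))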

IMO this does more harm than good, and I think we should not publish it.

Yoav

[1] Yes, there are some who view talking animals as a violation of the second 
commandment. I did say “mainstream” Americans.
[2] That is also a great argument against the claim that content providers want 
just one bit. I think it is the browser vendors who aren’t willing to emit more 
than one bit.
