
RE: Notes on a threat and trust model

2002-03-27 08:53:14

hilarie

see comments inline

abbie

-----Original Message-----
From: The Purple Streak (Hilarie Orman) 
[mailto:ho(_at_)alum(_dot_)mit(_dot_)edu]
Sent: Tuesday, March 26, 2002 7:39 PM
To: ietf-openproxy(_at_)imc(_dot_)org
Subject: Notes on a threat and trust model



Here's a start on the notions of threat and trust in OPES; it's
a first cut, may have significant lacunae.  For discussion.


=================


An OPES system has the responsibility of delivering data that is an
accurate representation of what the publisher wants, what content
enhancement systems can provide, and what end users want to see.


--- abbie
I would say that it should provide an accurate representation of what the
publisher/author intentions are. (A lot of the authoring techniques talk
about what the author's intentions are ..)
------------

SNIP

OPES systems also have these responsibilities, but they must also
apply content enhancement functions.  These may be simple or
complicated.  They must not introduce errors into error-free content,
and they should not violate publisher or end user policies.

---- abbie
it could be worded better, but the ideas are there. I would also state that
OPES can only apply authorized content enhancement functions as specified by
the content provider.
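
To make that concrete, here is a minimal Python sketch of the idea
(ProviderPolicy, Service, and process are invented names, not from any
draft): the intermediary simply skips any requested service the provider
has not authorized.

# Minimal sketch (invented names): apply provider-authorized services only.
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class ProviderPolicy:
    authorized_services: Set[str] = field(default_factory=set)

@dataclass
class Service:
    name: str
    apply: Callable[[bytes], bytes]

def process(content: bytes, services: List[Service], policy: ProviderPolicy) -> bytes:
    for svc in services:
        if svc.name in policy.authorized_services:   # skip anything unauthorized
            content = svc.apply(content)
    return content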

Threats come in a few classes, with some overlap. The classes are:

  - errors that decrease the quality of the user experience
    (as compared to non-modified content)
  - privacy violations
  - access control violations
  - malicious content introduction
  - content misattribution, leading to loss of ownership information or
    altering trust decisions made by participants

More specifically, an improper OPES intermediary could
   -  get the wrong policy for a user or web site
   -  send data to the wrong callout server, one that is
      malicious
   -  introduce pathways for denial of service attacks
   -  introduce errors that appeared to come from another party
   -  violate privacy or confidentiality
      (e.g., by caching data that should only be delivered once, or
      by confusing variants of objects)
   -  give confidential data to a callout server (or other party, such
      as an accounting service) that is outside the
      trust boundary of the end user or publisher
   -  be induced to add malicious content
   -  communicate with an authorization server that is
      malicious
   -  obscure the origin of data, inducing trust by the end user
      that was not warranted
   -  mishandle authentication information
   -  erase copyright notices
   -  alter information that is essential for authentication
   -  mislabel environment information that is needed by a callout
      server
   -  assist in masquerading by malicious parties
   -  facilitate lurking and looming


--- Abbie --- 
sounds good, i cannot think of more at this time

We turn to the question of how parties establish trust and use the
trust relationships to mitigate risk.

1. Authorization, authentication, and policy servers must be trusted by the
    OPES intermediary.  There must be a sound basis for its identification
    of trusted servers and for the communication with them.
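
One way to read "sound basis for identification" is credential checking on
the connection itself.  A rough Python sketch, assuming a locally configured
trust anchor file and a placeholder policy-server address and request format
(none of these names are defined by OPES):

# Illustrative only: TLS with a pinned trust anchor and hostname check as
# the basis for trusting a policy server.  File name, host, port, and the
# GET-POLICY exchange are all placeholders.
import socket, ssl

TRUSTED_CA = "opes-trust-anchors.pem"          # assumed local trust store
POLICY_SERVER = ("policy.example.net", 4443)   # placeholder address

ctx = ssl.create_default_context(cafile=TRUSTED_CA)
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection(POLICY_SERVER) as raw:
    with ctx.wrap_socket(raw, server_hostname=POLICY_SERVER[0]) as tls:
        # only after the certificate verifies do we exchange policy data
        tls.sendall(b"GET-POLICY user-42\n")
        reply = tls.recv(4096)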

2. Mapping content and actions to subjects.  There must be a way to
    identify end users, publishers, and special administrators in order
    to apply policy relating to them correctly.  This does not mean
    that the OPES intermediary must know personal information about a
    user, but it must be able to associate content traffic with the
    policy rules for the generator of the traffic.


--- Abbie--- agreed. Do we call those profiles, and how could OPES access
them in a trusted manner?
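
A tiny sketch of what such a lookup could look like if the intermediary keys
profiles off an opaque subject token rather than personal data; the header
name and token format below are made up:

# Sketch only: policy rules keyed by an opaque subject token, so the
# intermediary applies the right rules without holding personal data.
profiles = {
    "tok-7f3a": {"allow_translation": True, "report_usage": False},
}

def rules_for(request_headers: dict) -> dict:
    token = request_headers.get("X-OPES-Subject")   # hypothetical header
    return profiles.get(token, {})                  # unknown subject -> default policy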

SNIP

3. Authentication of policy information relating to content requestors
    and responders.  An OPES intermediary may receive policy
    information from a policy server, from an end user, from a
    publisher, through a new protocol, or through content extensions.
    In all cases, it must have a well-founded way of determining that the
    policy is from a party with authority to set policy.


--- Abbie -- Agreed.
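
As a strawman for "a party with authority to set policy", the intermediary
could keep a registry of keys for the authorized parties and check a tag
over the received policy bytes; the HMAC below just stands in for whatever
signature scheme would actually be chosen:

# Strawman: key registry for parties authorized to set policy, plus a tag
# check over the policy bytes (HMAC standing in for a real signature).
import hmac, hashlib

authorized_keys = {"publisher-A": b"secret-key-A"}   # assumed key registry

def policy_is_authentic(policy_bytes: bytes, sender: str, tag: bytes) -> bool:
    key = authorized_keys.get(sender)
    if key is None:
        return False                                  # unknown party, no authority
    expected = hmac.new(key, policy_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)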

4. Site policy.  The intermediary must have a policy regarding the
    acceptance of policy rules.  For example, some sites will not allow
    users to remove restrictions, but they will allow them to add
    restrictions.

---- Abbie, makes good sense to me.
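
The "add but never remove restrictions" rule is easy to picture as a one-way
merge, where user rules can only tighten the effective policy (illustrative
sketch, invented names):

# User rules can add restrictions but never remove the site's own.
def effective_restrictions(site: set, user_requested: set) -> set:
    return site | user_requested

assert effective_restrictions({"no-scripts"}, {"no-ads"}) == {"no-scripts", "no-ads"}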

5. Enforcement of user confidentiality requirements.  There must be a
    privacy policy and it must be honored.

--- Abbie, yes, but how do we do that? Do we need a policy verification
mechanism??
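
One possible building block for such a mechanism is a check made before any
data leaves for a callout server; the labels and the trusted_callouts field
below are placeholders, not defined terms:

# Placeholder check: confidential data stays inside the user's declared
# trust boundary, never handed to an untrusted callout server.
def may_send_to_callout(data_labels: set, server_id: str, user_policy: dict) -> bool:
    if "confidential" in data_labels and \
            server_id not in user_policy.get("trusted_callouts", []):
        return False
    return True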

6. Enforcement of publisher access control policy.

7. Delegation of authority by end user or publisher

8. Content policy.
    a. do not modify
    b. apply only language or device translation services
    c. apply services signed by designated authorities
    d. do not use callout servers run by list of authorities
    e. alert publisher of content rejection
    f. do not apply list of services
    g. must apply list of services
    h. do not report usage
    i. report usage
    j. must add link to non-modified content
    k. do not replace URLs
    l. maintain content trace (content traceroute)
    m. do not add scripts
    n. do not introduce any URIs naming new parties


---- Abbie, in VPCN we call this a content profile. Can we add here the
conditions for error reporting (to report when and where????)
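
If we go the content-profile route, the list above maps fairly directly onto
a declarative structure; every field name below is invented for illustration,
and the last entry shows where the error-reporting conditions could live:

# Invented field names, illustration only.
content_profile = {
    "modify": False,                                         # (a)
    "allowed_services": ["lang-translate", "device-adapt"],  # (b), (g)
    "forbidden_services": [],                                # (f)
    "require_signed_by": ["publisher-CA"],                   # (c)
    "forbidden_callout_operators": [],                       # (d)
    "alert_publisher_on_rejection": True,                    # (e)
    "report_usage": False,                                   # (h), (i)
    "link_to_original": True,                                # (j)
    "replace_urls": False,                                   # (k)
    "content_trace": True,                                   # (l)
    "add_scripts": False,                                    # (m)
    "introduce_new_uris": False,                             # (n)
    "error_report": {"when": "on-rejection", "to": "publisher"},
}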


--- Abbie
Hilarie, good work


abbie
 