
Review of draft-hartman-webauth-phishing-05

2007-08-17 10:21:00
I'm just digging out of my backlog and I got to this draft. Please
consider these late Last Call comments.

-Ekr

$Id: draft-hartman-webauth-phishing-05-rev.txt,v 1.1 2007/08/17 17:20:20 ekr Exp $

GENERAL
I think this draft is premature. There's a very large and growing
academic literature on phishing resistant authentication systems
and I don't think the data definitively supports any single
approach. Given that, it seems to me that it's too early for
the IETF to be publishing anything claiming to be "requirements"
for phishing resistant authentication, especially if the intent
is to hold future efforts to this standard. If the IESG chooses
to publish this, it should be marked with a clear disclaimer
that this is an individual's opinion and not a stake in the
ground by the IETF.

I'm disappointed by the sparseness of the bibliography. As I
said, there's a large literature on this topic and this draft
cites essentially none of it. At minimum it would be appropriate
to cite:

  [0] J. Alex Halderman, Brent Waters, and Edward W. Felten, "A Convenient
  Method for Securely Managing Passwords", Proceedings of the 14th
  International World Wide Web Conference (WWW 2005), 2005.

  [1] Blake Ross, Collin Jackson, Nicholas Miyake, Dan Boneh and John C.
  Mitchell, "Stronger Password Authentication Using Browser Extensions",
  Proceedings of the 14th USENIX Security Symposium, 2005.

  [2] Stuart Schechter, Rachna Dhamija, Andy Ozment and Ian Fischer,
  "The Emperor's New Security Indicators", Proceedings of the IEEE
  Symposium on Security and Privacy, May 2007.

  [3] Rachna Dhamija, J. D. Tygar and Marti Hearst, "Why Phishing Works",
  Proceedings of the Conference on Human Factors in Computing
  Systems (CHI 2006), 2006.
  
But really this just scratches the surface.

It's possible that this is just an editorial choice, but when I read
this document I came away with the impression that it ignored large
parts of the existing literature--as opposed to just not citing it.
In particular, I don't see how one can discuss strong password 
equivalence in cleartext password systems without discussing
PwdHash, etc. This leaves the impression that the author has
pre-decided on what set of approaches are good, which, as I said,
is premature. It's particularly surprising that this document
focuses attention on ZKPP/PAKE systems and yet utterly neglects
TLS-PSK.

There seems to be a general failure in this document to distinguish
between the interface provided by passwords and protocols that use
passwords.  For instance, basic auth, PwdHash, digest, SRP, and
password-derived public keys all could be made to have very similar
user experiences, yet the underlying protocol technology is
fundamentally very different. The entire draft needs a scrub
for this.


DETAILED COMMENTS

S 1.
   TLS implementations typically confirm
   that the name entered by the user in the URL corresponds to the
   certificate as described in [RFC2818].


- The TLS stack doesn't do this. The HTTPS implementation does.
- It's not just the "name entered by the user". It's the link
  being dereferenced, no matter how entered.


   information.  Domain names that look like target websites, links in

s/like target/like those of/
 
The domain names don't look like the web sites themselves.


S 3.1.
   We assume attackers can convince the user to go to a website of their
   choosing.  Since the attacker controls the web site and since the
   user chose to go to the website the TLS certificate will verify and
   the website will appear to be secure.  The certificate will typically
   not be issued to the entity the user thinks they are communicating
   with, but as discussed above, the user will not notice this.

It's really important to emphasize that this is a production versus
verification issue. If you simply never go to URLs in emails, you
will in general not get phished because your intent is captured
by whatever URL you've bookmarked, typed in, etc. The way you
get phished is when you allow someone else to produce a URL that
is claimed to match some real-world identity.

Why this is important:
1. It suggests that another form of user education might work.
2. Once you realize that this is an issue of getting the user
to check for the presence of given indicia, it becomes questionable
whether *any* approach will work (see [2]).

So, I'm not convinced this is a correct threat model.


   The attacker can convincingly replicate any part of the UI of the
   website being spoofed.  The attacker can also spoof trust markers
   such as the security lock, URL bar and other parts of the browser UI.
   There is one limitation to the attacker's ability to replicate UI.
   The attacker cannot replicate a UI that depends on information the
   attacker does not know.  For example, an attacker could generally
   replicate the UI of a banking site's login page.  However the
   attacker probably could not replicate the account summary page until
   the attacker learned the user name and password because the attacker
   would not know what accounts to list or approximate balances that
   will look convincing to a user.  Of course attackers may know some
   personal information about a user.  Websites that want to rely on
   attackers not knowing certain information need to maintain the
   privacy of that information.

I'm not sure what this is supposed to be claiming, but it seems to
me to be either trivial or wrong.

The trivial interpretation is that the attacker can take a picture of
the intended UI and display it on the screen. This is obviously true,
but whether that's convincing is less clear: for instance, browsers
could ALWAYS frame every site-produced window, thus allowing the
user to distinguish the browser UI from site-produced data. Whether
such framing helps in practice is of course an empirical question, but
given the data on the types of scheme this draft proposes, I think
that this statement is too strong.

The wrong version would be to say that the attacker can *cause* the
browser to display the wrong security indicia. This is not correct
as far as I know. Clearly the attacker can cause the browser to
display the "lock" icon, but he can only partly control the cert.
Again, whether this is "convincing" is an empirical question.


   The attacker can convince the user to do anything with the phishing
   site that they would do with the real target site.  As a consequence,
   when passwords are used, if we want to avoid the user giving the
   attacker their password, the web site must prove that it has an
   established authentic relationship with the user without requiring a
   static password to do so, and in a way that cannot be visually
   mimicked so as to trick a user. 

As I noted earlier, this confuses passwords-the-UI with
passwords-the-protocol. SRP uses passwords (heck, it's in the name!)
but it's not vulnerable to this attack.


S 4.1.

   A solution to these requirements MUST also support smart cards and
   other authentication solutions.  Some environments have security
   requirements that are strong enough that passwords simply are not a
   viable option. 

This seems premature. Moreover, it again confuses interface with
protocol. To take an example, SRP can be used perfectly well with
smartcards: you do the client-side computation on the smartcard.
Pretty much any password-based solution can be ported to smartcards
simply by placing the password on the card. Calling out smartcards
in particular is also way too specific.



S 4.2.

   There are three basic approaches to establishing a trusted UI.  The
   first is to use a dynamic UI based on a secret shared by the user and
   the local UI; the paper [ANTIPHISHING] recommends this approach.  A
   second approach is to provide a UI action that highlights trusted or
   non-trusted components in some way.  This could work similarly to the
   Expose feature in Apple's OS X where a keystroke visually
   distinguishes structural elements of the UI.  Of course such a
   mechanism would only be useful if users actually used it.

This seems to me to neglect a number of approaches:

1. The Ctrl-Alt-Delete approach, which doesn't highlight trusted or
non-trusted components but simply brings up an un-interceptable
dialog.

2. Chrome on every single component (forbid frameless windows).

I appreciate that (2) is out of style right now, but again, the
data showing that the approaches you suggest actually work better is
minimal.

   Finally,
   the multi-level security community has extensive research in
   designing UIs to display classified, compartmentalized information.
   It is critical that these UIs be able to label information and that
   these labels not be spoofable.

This seems like a category error. The first two approaches are actual
things. This is just "there is research". Did that research actually
lead to any technology?


S 4.3.
   A critical requirement is that when a user authenticates to a
   website, the website MUST NOT receive a strong password equivalent
   [IABAUTH].  A strong password equivalent is anything that would allow
   a phisher to authenticate as a user with a different identity
   provider.

s/with a different identity provider/to a different relying party/.


   Weak password equivalents (quantities that act as a
   password for a given service but cannot be reused with other services
   ) MAY only be sent when a new identity is being enrolled or a
   password is changed.  A weak password equivalent allows a party to
   authenticate to a given identity provider as the user.

Where does this come from? Seems wrong to me (see [1]). The key
is to make sure that the weak password equivalent (WPE) goes to the
right place.
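
To make the point concrete, here's a rough sketch of the kind of
domain-bound WPE I have in mind (illustration only, not the actual
PwdHash construction; the domain names and the choice of HMAC-SHA256
are mine):

    import hashlib, hmac

    def domain_bound_password(master_password, domain):
        # The domain must come from the browser's view of where the
        # form is actually being submitted, not from page content.
        return hmac.new(master_password.encode(), domain.encode(),
                        hashlib.sha256).hexdigest()

    # The value the phisher receives is useless at the real site:
    print(domain_bound_password("hunter2", "bank.example"))
    print(domain_bound_password("hunter2", "bank-example.attacker.test"))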


   There are two implications of this requirement.  First, a strong
   cryptographic authentication protocol needs to be used instead of
   sending the password encrypted over TLS.

Only because of the unjustified second requirement on WPEs. I don't
see any security reason why PwdHash isn't fine here.


   The zero-knowledge class of
   password protocols such as those discussed in section 8 of the IAB
   authentication mechanisms document [IABAUTH] seem potentially useful
   in this case.  Note that mechanisms in this space tend to have
   significant deployment problems because of intellectual property
   issues.

I'm confused as to why you're citing mechanisms which the IETF has
in many cases declined to standardize when there are phishing-resistant
mechanisms that don't have that problem, i.e., conventional
challenge-response.
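
For reference, the basic shape of such a mechanism is something like
the following sketch (the choice of HMAC-SHA256 and a 16-byte random
challenge is mine, not taken from any particular RFC); the point is
just that the password itself never crosses the wire:

    import os, hmac, hashlib

    def server_challenge():
        # Fresh random challenge per authentication attempt.
        return os.urandom(16)

    def client_response(password, challenge):
        # The client proves knowledge of the password without sending it.
        return hmac.new(password.encode(), challenge,
                        hashlib.sha256).digest()

    def server_verify(stored_secret, challenge, response):
        expected = hmac.new(stored_secret.encode(), challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    c = server_challenge()
    r = client_response("hunter2", c)
    assert server_verify("hunter2", c, r)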


S 4.4.
   Authentication of the server and client at the TLS level is
   sufficient to meet the requirement of mutual authentication.  If
   authentication is based on a shared secret such as a password, then
   the authentication protocol MUST prove that the secret or a suitable
   verifier is known by both parties.  Interestingly the existence of a
   shared secret will provide better protection that the right server is
   being contacted than if public key credentials are used.  By their
   nature, public key credentials allow parties to be contacted without
   a prior security association. 

Again, this confuses interface with protocol. There are lots of ways
around this problem. For instance, you could use SSH leap-of-faith
style authentication and warn when a new public key is encountered
from the server.
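
A minimal sketch of that kind of key continuity, assuming some local
store of previously seen server keys (the file name and the SHA-256
fingerprinting are illustrative choices of mine):

    import hashlib, json, os

    KNOWN_SERVERS = "known_servers.json"   # hypothetical local store

    def fingerprint(server_public_key_bytes):
        return hashlib.sha256(server_public_key_bytes).hexdigest()

    def check_server_key(server_name, server_public_key_bytes):
        known = {}
        if os.path.exists(KNOWN_SERVERS):
            with open(KNOWN_SERVERS) as f:
                known = json.load(f)
        fp = fingerprint(server_public_key_bytes)
        if server_name not in known:
            # First contact: remember the key and warn the user.
            known[server_name] = fp
            with open(KNOWN_SERVERS, "w") as f:
                json.dump(known, f)
            return "new key: warn the user"
        if known[server_name] == fp:
            return "key matches previous visits"
        return "KEY CHANGED: possible man-in-the-middle"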


   In protecting against phishing
   targeted at obtaining other confidential information, this may prove
   a liability.  However public key credentials provide strong
   protection against phishing targeted at obtaining authentication
   credentials because they are not vulnerable to dictionary attacks.

Well, maybe. One natural way to build a public-key system on top of
passwords (interface/protocol again) is to derive the key pair from
the password, at which point you have a dictionary attack again.
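
A sketch of the problem, with a stand-in for the real key derivation
(the KDF parameters and the fake "public part" are purely
illustrative):

    import hashlib

    def derive_key_seed(password, salt=b"example-salt"):
        # Deterministic, so the same password always yields the same keys.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10000)

    def public_part(seed):
        # Stand-in for "compute the public key from the private seed".
        return hashlib.sha256(b"pub" + seed).hexdigest()

    # Anyone who sees the public part can run a dictionary attack:
    observed = public_part(derive_key_seed("hunter2"))
    for guess in ["password", "letmein", "hunter2"]:
        if public_part(derive_key_seed(guess)) == observed:
            print("password recovered:", guess)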


   Such dictionary attacks are a significant weakness of shared secrets
   such as passwords intended to be remembered by humans.  For public
   key protocols, this requirement would mean that the server typically
   needs to sign an assertion of what identity it authenticated.

I don't understand this last point.


S 4.5.
   Users expect that whatever party they authenticate to will be the
   party that generates the content they see.  One possible phishing
   attack is to insert the phisher between the user and the real site as
   a man-in-the-middle.  On today's websites, the phisher typically
   gains the user's user name and password.  Even if the other
   requirements of this specification are met, the phisher could gain
   access to the user's session on the target site.  This attack is of
   particular concern to the banking industry.  A man-in-the-middle may
   gain access to the session which may give the phisher confidential
   information or the ability to execute transactions on the user's
   behalf.

Well, maybe. Again, see PwdHash, which ties the WPE to the URI, hence
to the TLS cert.


   1.  Assuming that only certificates from trusted CAs are accepted and
       the user has not bypassed server certificate validation, it is
       sufficient to confirm that the identity of the server at the TLS
       level is the same at the HTTP authentication level.  In the case
       of TLS client authentication this is trivially true.

I don't understand this last claim.


S 4.6.
   important that the protocol enable this.  For such identities, the
   user MUST be assured that the target server is authorized by the
   identity provider to accept identities from that identity provider.
   Several mechanisms could be used to accomplish this:

I'm not convinced that just because some financial institution wants
this it suddenly becomes an IETF requirement. Doing any kind of
multi-party authentication is a lot more complicated than two-party,
so it's way premature to be specifying requirements. This entire
section should be struck.


S 4.6.
   In Section 4.2, we discuss how a secret between the user and their
   local computer can be used to let the user know when a password will
   be handled securely.  A similar mechanism can be used to help the
   user once they are authenticated to the website.  The website can
   present information based on a secret shared between the user and
   website to convince the user that they have authenticated to the
   correct site.  This depends critically on the requirements of
   Section 4.5 to guarantee that the phisher cannot obtain the secret.
   It is tempting to use this form of trusted UI before authentication.
   For example, a website could request a user name and then display
   information based on a secret for that user before accepting a
   password.  The problem with this approach is that phishers can obtain
   this information, because it can be obtained without knowing the
   password.  However if the secret is displayed after authentication
   then phishers could not obtain the secret.  This is one of the many
   reasons why it is important to prevent phishing targeted at
   authentication credentials.

This only applies if the channel is encrypted or the attacker is 
off-path.


S 7.
ben Laurie -> Ben Laurie


Appendix A.
This is redundant.













