Usable Security

Lecture Slides

File:Cs260-slides-25-security.pdf

Extra Materials

Discussant's Slides and Materials

Reading Responses

Airi Lampinen - 11/24/2010 15:12:28

Whitten and Tygar present a usability evaluation of an encryption tool called PGP 5.0, pointing out a number of challenges in the current design and showing that many cryptography novices were unable to achieve effective electronic mail security. The results are based on a cognitive walkthrough analysis and a laboratory user test.

The paper also includes a proposal for a definition of usability in the context of security. The definition states that security software is usable if the people who are expected to use it are reliably made aware of the security tasks they need to perform, are able to figure out how to successfully perform those tasks, don’t make dangerous errors, and are sufficiently comfortable with the interface to continue using it. While this definition cannot be straightforwardly applied to the realm of managing interactional privacy, it might be a useful starting point.

The results that Whitten and Tygar found do not come as a big surprise, but the analysis of the problems with the icons and representations in the studied system is very interesting. Designing for usable security is a very challenging task, as security tends to be a secondary goal, and when secure procedures require extra effort, they are most likely neglected by the majority of users. I don't believe this is a problem that could be solved by increasing awareness and educating end-users, but I do feel it's an area where careful and mindful design that takes the everyday practices of users into consideration is needed.

The second paper, Egelman et al.'s article on web browser phishing warnings, presents somewhat similar results from a related domain. The authors conclude that highly targeted phishing attacks are likely to continue to be very effective as long as users do not understand how easy it is to forge email. However, they believe that effective browser warnings may mitigate the need for user education. This paper, too, shows that users do not really understand what is going on, nor are they usually motivated enough to make the effort to figure it out.

The best take-away from this article is the set of concluding guidelines for what needs to be taken into account when designing browsers to warn users of phishing attacks. First, phishing indicators need to be designed to interrupt the user's task. Second, they need to provide the user with clear options on how to proceed, rather than simply displaying a block of text. Third, they must be designed such that one can only proceed to the phishing website after reading the warning message. Finally, phishing indicators need to be distinguishable from less serious warnings and used only when there is a clear danger. While these conclusions are drawn within a specific domain, I believe they could also be fruitful more widely when thinking about how to communicate threats to end-users who are not highly motivated to think about information security.
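These guidelines are concrete enough to sketch in code. Below is a minimal, hypothetical Python sketch of an active interstitial that follows all four guidelines; the function name and message text are illustrative assumptions, not any browser's actual implementation:

  def phishing_interstitial(url, confirmed_phish):
      """Return True only if navigation to url should proceed."""
      if not confirmed_phish:
          # Guideline 4: reserve the severe warning for clear danger,
          # so it stays distinguishable from routine dialogs.
          return True
      # Guideline 1: the warning replaces the page, interrupting the task.
      print(f"WARNING: {url} has been reported as a phishing site.")
      print("Entering a password or card number here may hand it to attackers.")
      # Guidelines 2 and 3: explicit, labeled choices instead of a block of
      # text with an OK button, and no way to reach the page except through
      # this prompt after the message has been shown.
      choice = input("Type LEAVE to go back (recommended) or PROCEED to continue: ")
      return choice.strip().upper() == "PROCEED"  # anything else fails safely

Requiring a typed word rather than a one-click button is one (deliberately heavy-handed) way to ensure the warning is read before the user can proceed.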


Charlie Hsu - 11/26/2010 21:59:06

Why Johnny Can't Encrypt

This paper argued that effective, usable security requires different design techniques than standard user interface design to get right. To test this, the authors performed a case study on PGP, an email security plugin. The authors stepped through the interface with a cognitive walkthrough, and ran a user test in which subjects who were cryptography novices attempted to achieve secure email. The paper concluded that PGP was not effective in providing usable security to typical computer users.

From the beginning of the paper, I had to admit I had low expectations. The hypothesis itself was not particularly insightful to me: since when is "standard user interface design" good enough? This term was never defined in the paper, and the paper simply seemed to imply that since PGP had an "attractive graphical user interface" and was generally agreed to have a "good user interface", it had good standard user interface design. Certainly aesthetics are a part of good interface design, but I feel that task analysis and user-centered design are even more important. Creating a simple, visible cognitive model of the task to be performed is a central part of user-centered design, and security is no exception to this! PGP failed to create that simple mental model.

The user study and cognitive analysis also didn't seem particularly insightful, reading more like a critique of a design than an attempt to tackle the harder, more significant question of how to create a simple mental model of security digestible by the average computer user. One thing the paper did reveal with stunning clarity was the complicated state of encryption. Even for me, wrapping my head around generating public/private key pairs, maintaining them, and understanding digital signatures and how they work are daunting tasks, and ones I've had trouble with personally (Keychain Access on Mac OS X). As soon as the usability standard was set for PGP in the paper (section 2.3), I knew that my own slight confusion was a bad sign; it felt like the authors were setting PGP up for inevitable failure against their standard.


You've Been Warned

This paper summarized a study of the effectiveness of different phishing warnings. Phishing warnings were considered successful if they stopped the user from entering sensitive personal information into a fraudulent website. Active warnings that interrupted the user's task were found to be more effective than passive ones. The paper also provided some recommendations for increasing the effectiveness of warnings, including providing clear choices, failing safely, preventing habituation, and dynamically altering the phishing website to promote mistrust.

I found the recommendations discussed at the end of the paper very logical and fitting to the results of the study. The insights about effective security warnings can apply to all sorts of different warning needs: warnings about removing a file/directory, warnings about resending information in an HTTP POST request, etc. Warning messages usually have a tradeoff to consider; the degree to which a task is interrupted should vary depending on how important the warning is. However, the case of phishing provides a relatively uncommon, important warning to provide, and thus designing a strong interruption of the task is important. Firefox's interruption (the most effective of those tested) does not allow the user to continue the task without reading the warning and recognizing the choices; IE's active warning performs the same interrupt but offers two relatively large buttons representing choices that users may gravitate towards before reading the warning.

Preventing habituation is particularly important in warning messages. One commonly referenced example that shows the importance of preventing habituation is that of a user attempting to delete all the files in one directory and being asked for a confirmation for each file (contrast the two styles in the sketch below). The authors were right to emphasize the effectiveness of Firefox's unique phishing warning. One note that came to mind as I was thinking about preventing habituation: it can even be used by phishing websites as a tactic in itself! By fostering habits (e.g., a lengthy series of repetitive confirmations), a site can build trust with (or mere annoyance in) the user, and the user's guard can be let down.
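To make the habituation contrast concrete, here is a small hypothetical Python sketch of the two confirmation styles from the file-deletion example; the per-file loop trains users to answer "y" reflexively, while the batched prompt keeps the confirmation rare enough to stay meaningful:

  import os

  # Habituating design: one prompt per file teaches users to confirm blindly.
  def delete_each_with_prompt(paths):
      for p in paths:
          if input(f"Delete {p}? [y/N]: ").strip().lower() == "y":
              os.remove(p)

  # Less habituating design: a single consequential prompt for the whole batch.
  def delete_batch_with_prompt(paths):
      if input(f"Delete all {len(paths)} files? [y/N]: ").strip().lower() == "y":
          for p in paths:
              os.remove(p)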


Linsey Hansen - 11/28/2010 13:56:33

pass


Kurtis Heimerl - 11/28/2010 15:17:42

pass


Brandon Liu - 11/28/2010 15:57:08

"You've been warned"

A potential missing variable in the study is the stress factor. In a more realistic environment, participants would be under time pressure and not necessarily relaxed during their online shopping and email experience.

Even if stress were factored in, it would likely only increase susceptibility to phishing. The results of the study showed nearly everyone was already highly vulnerable.

The strength of this study was that the researchers were able to gather so many participants. There was also a great discussion of the backgrounds of the participants and how the orientation questions were designed. One of the most interesting qualitative observations was that more technical users were less likely to heed phishing warnings.

The study had a strong discussion of the root causes of the security problem, including a mismatched mental model of phishing attacks. The solutions proposed include creating dynamic warning messages and messages with a totally distinct appearance (as in the Firefox case). One problem with this solution in practice is that any notification mechanism can be abused to the point where users are again habituated. A remedy would be to make phishing protection an OS-level feature that client applications cannot easily access.

"Why Johnny Can't Encrypt"

The conclusion of this study was that PGP 5 is not usable, for a number of reasons. I found the discussion of the types of icons used for the different security techniques to be irrelevant, since it was obvious to me that the main flaw of the system was that novice users didn't understand the model behind it. It would have been nice to see some discussion of the effect of icon designs on the typical experienced PGP 5 user.

An obvious path for future studies is to step down a level and begin evaluating more basic security applications to see what parts are usable by novices. This would be more valuable than explaining why novice users don't understand public key cryptography.

The paper ends with a discussion of how to teach typical email users the mental model for security. They describe that there needs to be a "simple and minimal model" of how encryption works. Such a model would probably involve "locked messages" or something - it is a challenge to develop such a model since everyone understands email in terms of physical messages like paper letters.

A general solution to the problems described is to make everything encrypted by default. Security software fails if anything intended to be encrypted is not successfully encrypted. Thus, this seems like one of the few viable solutions.


Matthew Chan - 11/28/2010 16:51:06

Why Johnny Can't Encrypt

In this paper, the authors explore the role of the user interface in security and how it might be the cause of 90% of all security failures. This paper is very important because it highlights the fact that UI for security cannot be treated like general consumer software. The authors performed a cognitive walkthrough of the user interface and then ran a lab study with users. One of the significant critiques of the UI is the confusion it can cause, such as the quill as a metaphor for the digital signature, or displaying "encoding" when it should really be "encrypting." Other critiques involved the metaphors the software used and how it fails to give feedback or instruction to users on how to use the software, especially for key revocation, etc. Another big flaw is that PGP advises users to save their keys to an external storage device like a CD, yet the default storage location is the Desktop or Home directory on the hard drive. In addition, the key graphics (brass versus old-fashioned keys) were unclear in signifying whether a user's key used RSA or Diffie-Hellman encryption.

In the laboratory study, participants used Eudora with the PGP extension. Users were given a campaign scenario and had to email a secret itinerary to five other team members. The results were astounding: users took a long time to figure out how to complete certain tasks and did not thoroughly understand the model of encryption, public and private keys, etc. Some users even tried to decrypt another member's key.

Overall, this paper is very important since there's very little research on the role of UI in security. This paper does not relate to my work, but the topic has always been fascinating to me. Even after reading the paper, I installed PGP and Thunderbird onto my MacBook to play around with it, and it was a little confusing to get it set up correctly (i.e., the general population might have trouble running the "make" commands just for the setup). I did not see any blind spots in this paper, but it does make designers think more about security when designing other systems and interfaces.

You've Been Warned!

The overall message of this paper is that browsers that actively alert a user about a phishing website work better than those with passive alerts. Phishing scams have increased over time, and the authors decided to explore the effectiveness of current alerts meant to warn users, such as icons, color changes in the URL bar, etc. Specifically, the authors looked at Microsoft Internet Explorer and Mozilla Firefox 2.0.

The authors then ran a large-scale recruitment and had users make purchases on Amazon and eBay. Unknown to the users, the authors sent a phishing email using amazonaccounts.net and ebay-logins.net. The authors also modified the local blacklist in Firefox or had Microsoft add the URLs to their blacklist; also, to ensure the phishing emails went through, the authors used openSPF and DKIM, since the public email domains did a good job of filtering phishing emails.

Many of the participants fell victim to phishing emails such as "Your order has been delayed and will be cancelled unless you click the following link." Another important finding concerns habituation: Windows IE users just clicked "OK" or the "X" in the upper right corner to bypass or close warning boxes without reading them. Another big factor was that some users skipped or ignored the alerts because they trusted or had confidence in the website (partly because it looked and felt legitimate); another proceeded simply because s/he had the option to ignore the warning and move on. The authors also explored the effectiveness of the different alerts mentioned above and found that users heeded active alerts, which break their momentum and interrupt them, far more.

In conclusion, the authors made a few recommendations for phishing alerts, such as breaking habituation, failing safely, providing clear choices, and interrupting the user's primary task. Potential blind spots were highlighted by the authors, such as sending the phishing emails right after a purchase was made, and they noted that their study might not generalize to ordinary, untargeted phishing scams.


Siamak Faridani - 11/28/2010 17:44:29

Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0

The authors start by hypothesizing that human error is one of the major factors causing security failures. They point out that proper security practices are not often followed by average users, and that increasing access to the internet is exposing these security holes to spammers and phishing websites. Finally, in the introduction, they claim that all other solutions (like training, automation, ...) are simply ineffective, and that a solution is heading towards failure if it does not include an intuitive UI for the user. As a result, the rest of the paper focuses on highlighting crucial elements in designing user interfaces for computer security. They use a case study to validate their hypothesis that current best-in-class user interfaces for security are still incapable of ensuring information security for novice users. They chose PGP 5.0 because of its many years of development and mature UI. They use a cognitive walkthrough and a lab user test, and finally conclude that the UI for PGP 5.0 is far from an ideal UI that follows all their security guidelines.

In section 2.1 the authors provide a definition of a usable security software UI: for example, they define security software to be usable if users do not make dangerous errors and are comfortable using it. I am not sure this is a sufficiently thorough definition; for example, why not include thorough documentation and on-demand help for each option? To me there is always a trade-off between ease of use and security, and I believe a UI should be able to figure that out. For example, I personally do not care about the security of my messages over AIM, but I am worried about the security of my communication with my bank. The rest of the paper is about observing users using PGP 5.0 and making comments about these observations. I personally didn't find those sections particularly interesting, as they are tailored to PGP and cannot be easily extended to other software packages (for example, to a user interface for bank accounts).

You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings

This paper is mainly about phishing: how active warnings in new browsers can warn people when a phishing website wants to steal their information. The authors look at whether or not these active warnings are effective at reducing the risk of losing information to attackers. They particularly look at two browsers, IE7 and Firefox 2; although active warnings have been implemented in Opera, Netscape, IE, and Firefox, they focused only on the last two because of how widely used they are. Different browsers use different methods to indicate phishing: some use passive warnings, which users can totally dismiss, while others use active warnings, where users have to take action if they choose to see the phishing content. The question is whether users understand what to do after they are presented with active or passive warnings. The authors report results from an in-lab user study. In their C-HIP model they look at the different stages at which a warning may fail; for example, the first is the attention level: does the warning fail to grab the user's attention?

I particularly liked the way they set up their study. They framed it as a shopping experiment, having participants make two purchases online, one from eBay and one from Amazon. These two websites were chosen because they are common targets of phishing attacks. It was also very interesting to me that the phishing message was sent to users right after the purchase, disguised as a follow-up message. They show that Firefox is better at preventing information loss, and that active IE is also more effective than passive IE. Another interesting aspect of their design was the control group: they showed that passive IE does not have a significant effect in preventing attacks, and they go on to provide justifications for why they think passive IE is ineffective.

Another interesting observation by the authors was the way people comprehend warnings. For example, they showed that Firefox warnings are better understood by users than the active warnings in IE. Users who trusted the warnings obeyed them, and the authors observed a strong correlation between trusting a warning and obeying it. Another interesting point was related to the flood of warning messages in supposedly secure environments: one participant said they ignored the warning simply because their PC bombards them with similar messages. This made me think that perhaps not all applications should generate warnings. Also, users who thought they recognized the warning were more likely to ignore it; SSL warnings in IE in particular look very similar to passive phishing warnings.

They conclude with a number of recommendations: for example, active warnings instead of passive warnings, providing choices with highlighted consequences, and making the safest action the default setting. The best recommendation might be preventing habituation; for example, more serious warnings should look different from less serious warnings. The last recommendation that seemed plausible was drawing trust away from the website: the browser can deface the website or completely change its content, since the content of the website should not override the content of the warning.


Luke Segars - 11/28/2010 17:48:28

Why Johnny Can't Encrypt

This paper describes reasons why a common "usable" security technology called PGP fails to make security accessible to non-technical users. The authors go over some properties of security that make it particularly difficult to design for, including the important fact that security is often a secondary goal of a user (unlike emailing someone, browsing the internet, or playing a game). The authors then conduct a user study to determine whether new users could successfully send an encrypted email given 90 minutes to do so.

While I agree that PGP doesn't make security as easy as it needs to be, I came away feeling very unimpressed by the authors' evaluation of the PGP system. Many of their criticisms were made in regards to particular icons and minor UI issues that I suspect have very little impact on a user's understanding of the system. They then consider a set of fringe cases that PGP supposedly allows to occur too easily.

In my opinion, the problem with PGP (and other security layers) isn't that the icons are wrong or that the pop-up boxes aren't wordy enough. The problem is exactly what the authors stated earlier in their paper: security is a secondary goal for the overwhelming majority of internet users. The fact that PGP requires any sort of complex decision making, be it well-decorated or not, often puts a large enough barrier between users and their goal to convince them that they'd rather do without. If widespread encryption for simple applications is ever going to come about, it will be for one of two reasons:

  (1) users will become significantly more security-minded, likely due to an increased appreciation of the importance of security (e.g. after massive identity theft).
  (2) security-minded professionals will come up with a near-invisible means of securing communication so that users don't need to be concerned with it (e.g. HTTPS).

Ultimately the problem with PGP is nothing particular about the interface; the problem is that the interface exists at all. People generally don't understand (and shouldn't need to understand) public and private keys, key servers, key revocation, and all of the other advanced concepts that come along with securing digital information, and there's really no reason (yet) for that to change. Once we are able to produce security mechanisms that are almost invisible, users will be happy to accept them, but for now it's just not worth comprehending a dozen complex topics from Wikipedia pages just to send an email.
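For perspective on what Johnny is being asked to absorb, here is a minimal sketch of the underlying public-key model using the Python cryptography package (the parameter choices are illustrative assumptions, not PGP's actual implementation). It is exactly this which-key-does-what mapping that Whitten and Tygar's participants mixed up, for example by encrypting with their own key or trying to decrypt a key:

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  # Each correspondent generates a key pair; the private half never leaves them.
  private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  public_key = private_key.public_key()  # this half is shared freely

  message = b"the secret campaign itinerary"
  oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  # Encrypt with the RECIPIENT'S public key; only their private key decrypts.
  ciphertext = public_key.encrypt(message, oaep)
  assert private_key.decrypt(ciphertext, oaep) == message

  # Sign with YOUR OWN private key; anyone holding your public key can verify.
  pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH)
  signature = private_key.sign(message, pss, hashes.SHA256())
  public_key.verify(signature, message, pss, hashes.SHA256())  # raises if forged

Twenty-odd lines of code, yet the asymmetry (encrypt with theirs, sign with yours) is precisely the mental model most users never form, which supports the argument that the interface should hide it entirely.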



Shaon Barman - 11/28/2010 18:19:23

Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0

The authors use user testing and direct analysis to identify the usability weaknesses in PGP 5.0, a tool used for encryption along with key management.

Security is a huge field, and making a system completely secure is difficult. When the human element is added, this task becomes much more difficult. Even with secure tools, it is difficult for the average person to use them effectively. A fundamental limitation of PGP is that in order to use the tool effectively, one must have an idea of how the system works. The tool presents a key metaphor, but this metaphor breaks down with public and private keys, along with key repositories. Even after reading the paper, I still have only a vague idea of what "validity" and "trust" actually mean and how they differ. There are many terms the average user does not regularly encounter. Because of this, the task of sending a secure email includes the task of learning how the encryption system works. Because security is a secondary issue, most users will neglect it in favor of getting their main task done. For a system such as email to be secure, the secure route should be the path of least effort.

I liked the use of direct analysis to find specific problem points with PGP. One addition I would have liked is whether or not any of the users actually encountered these problems, such as an irreversible action. The user study also provided concrete evidence of UI problems, and showed how specific users handled the system and where they got confused. Because these results are given as descriptions, though, it is difficult to see what each user accomplished; a table showing which tasks each user completed would have been nice. Overall, this paper shows that the problems in security lie not only in the system, but also in training the users who will use that system.

You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings

The authors analyzed the usefulness of phishing warnings in IE and Firefox.

I liked how the authors disguised their experiment in the context of a shopping study. This gave more validity to their results. Overall, the results seem to confirm common knowledge: most people, when exposed to a warning message, will disregard it unless it presents a major obstacle, and a repeated warning message is even more useless. When using a computer, the user is constantly bombarded by messages, whether it's ads, disclaimers, warnings, etc. Processing all of these messages takes a lot of effort and usually comes with little reward. In response, most people have tuned these messages out so they can focus on the task at hand. In addition, it seems that tools like email filtering or Google work so well that it does not seem necessary to verify that an email or website is malicious. Because of this, the user is more vulnerable when a malicious website is not detected (or is given a lower-priority warning). The big takeaway seems to be that in order to get the user's attention, the UI must make the user actively engage with the warning, not just present a message that can be easily dismissed.


Arpad Kovacs - 11/28/2010 18:27:15

The "Why Johnny Can't Encrypt" paper describes a cognitive walkthrough/heuristic evaluation and lab-based user test of PGP 5.0, and concludes that most participants were unable to use PGP to sign and encrypt messages due to usability issues. The main usability flaw of PGP is that it presents too much information to the user, and does not clearly explain to the user all of the lock-and-key metaphors it uses to represent the encryption process (eg the user must read the manual to learn that the old-fashioned blue key represents RSA, while the modern brass key represents Diffie-Hellman/DSS algorithm, and which key to use in which case). As a result, although PGP is quite featureful, many users will misinterpret ratings such as validity and trust, or will fail to learn about its advanced capabilities (such as key revocation). Another issue with the PGP 5.0 user interface is that it allows users to perform irreversible actions, such as deleting, revoking, or publicly uploading a private key, and only presents a confirmation dialogue without warning the user of the possible consequences.

The contributions of this paper are proposing a definition of usability that is prioritized according to security concerns, as well as discussing properties inherent to security that make it a difficult problem domain for user-interface design. The authors' findings show how essential usability is to effective security, since some experts estimate that 90% of security breaches can be traced to misconfiguration. The most valuable lesson I learned from the paper is that the user must have a good mental model of the system in order to use it correctly and securely. It seems that the main issue in the user tests was that users did not understand the distinction between public and private keys and the encrypted message (for example, some users tried to decrypt the keys themselves), nor the process for exchanging keys and encrypting, decrypting, and verifying messages. Newer plugins such as Enigmail for Thunderbird try to remedy this using a step-by-step walkthrough tutorial/wizard, which guides users through the process of generating a keypair and uploading the public key, then encrypting and signing a test message, sending it to an echo server, then decrypting and verifying it. Since the majority of the population uses webmail clients such as gmail/hotmail/yahoo, perhaps the best solution to usable email encryption is direct integration into webmail systems. This way, the mail provider can perform all of the configuration, and the system will be ready-to-use. This requires the user to trust the mail server administrators to keep their information confidential; however, it seems that most users would prefer this option to the risk of misconfiguring or misusing existing email systems and being subject to a man-in-the-middle attack. Of course, advanced users and those who require high security levels would still prefer to store keys and perform the encryption/decryption on a local computer.

However, in my experience, the greatest barrier to effective use of security software is not just a program's lack of effective guidance, but rather the fact that most users are not immersed in a security-conscious environment where they understand the consequences and security implications of every action they take. This is reflected by the fact that most users of email encryption are already knowledgeable in computer security, or have friends or peers who can set up the system and teach the user how it works. In contrast, normal users often do not even understand why sending email in plaintext is a bad idea, and those who want to learn the system on their own face a steep learning curve, since they must not only learn the software itself, but also adopt a new security-aware mindset (e.g., never divulge your private key) to be effective. The software should assist users in learning better security practices by not only labeling the options that are available, but also discussing the rationale behind each option (e.g., signing vs. encrypting), and in which cases it should or should not be used.


The "You've been warned" paper analyzes the effectiveness of active vs passive phishing filters using the Communication-Human Information Processing Model. The C-HIP model helps measure the hazard matching (how accurately a warning conveys risk) and arousal strength (the urgency of the warning) of software warning indicators. The experiment was set up to be as realistic as possible, with the participants using their genuine email addresses and financial information to order items from Amazon and eBay, and asked to participate in a shopping study. A novel aspect of this study was that the spear phishing attempt coincided with an actual transaction with the website being spoofed, therefore victims were more likely to believe that the phishing attempt was legitimate. The results of the user study show that Internet Explorer 7's passive protection was as ineffective as the control group, which was not subject to any anti-phishing warning system and therefore were highly susceptible to the phishing attack. The active warning systems made a statistically significant difference, with half of the IE7 users not falling to the phishing scam, and all of the Firefox users obeying the warnings.

The contributions of this paper are comparing the phishing warning mechanisms of the Internet Explorer 7 and Firefox 2 web browsers, analyzing their effectiveness using C-HIP, and offering recommendations for improving warnings in future versions. As can be seen from the test results, warnings must be active and interrupt the user in order to grab his/her attention; passive warnings are too easy to inadvertently dismiss or ignore. In addition, it is vital for the warning to be unique and not be perceived as a false positive, since, as the study shows, many users do not fully read or comprehend the text of the warning message (for example, many of the IE users believed that the warning was for a self-signed or expired SSL certificate, an error condition they had encountered before and did not think was significant). I found it interesting that technical experience was negatively correlated with the rate at which users obeyed warnings in the Internet Explorer active warning group; perhaps advanced Windows/IE users are so accustomed to seeing warning/error messages that their arousal strength thresholds have increased, and thus they ignore even important dialogs. Thus the error message should be fail-safe: even if the user takes the "autopilot" course of action (pressing OK or clicking the red X to close the warning), the phishing website should not appear unless the user clearly understands the risks and makes a conscious effort to bypass the warning. Finally, it is important to show the user clear choices so they can see a clear course of action, and possibly visibly alter the website in order to prevent the user from falling into the habitual behavior that can result from too many false positives.


Drew Fisher - 11/28/2010 18:27:36

Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0

The key point to be taken from this study is that usability in security software needs to account for a number of things beyond those usually considered in usability evaluations. Users expect security to be easy and transparent. Often, real security is neither of these things, for reasons ranging from lack of user knowledge about the complicated subject of cryptography to poor default behaviors.

I'm disappointed in the participant that couldn't remember her password for 90 minutes. It's not like there was a ton else for her to do during the study. I guess that's a sign of the sort of people that we have to support, though.

User behavior and attitudes towards software have largely changed since the 1980s and early 1990s, when users would go through a tutorial on how to use a mouse to interact with their computer. It would be interesting to see if our approaches to usability need to adapt to what the average user is willing to put up with. Users would likely have benefitted from a short tutorial on using PGP 5.0 to help them internalize cryptography concepts, but I suspect many people in this day and age would skip the tutorial, and then fail to use the software. As noted before, people are notoriously bad at putting in effort up-front to gain longer-term benefits.


You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings

This paper confirms numerically what we've long suspected: users generally don't read anything. They've been conditioned by applications throwing up so many error messages and dialog boxes for all sorts of reasons that they no longer believe that such messages can be important. This paper shows that the only way to get people to pay any attention is to actively interrupt their task.

It seems that one of the key aspects of producing messages that will be heeded is their novelty. It appears this contrasts with making user interfaces consistent and learnable - if you want your messages to be read, you have to slow down the user and interrupt their flow. Perhaps a technique that randomly generates certain aspects of the chrome would improve this situation, but at the cost of users recognizing and believing the warning. The research question here would be: "is negative attention better than no attention?"

This paper highlights a problem that afflicts Windows particularly badly: the lack of a usable, unintrusive notification system. Another issue is that different software publishers are competing for the attention and use of the end user, so the most obnoxious application wins the most user attention. Unfortunately, this winds up conditioning the user to ignore notifications that may be important. This is akin to the "Loudness War" that has plagued audio recordings in recent years; the situation maps to the prisoner's dilemma. I'd be interested to see if there's a way to construct interfaces to ecologically discourage this self-interested behavior from software developers.


Thejo Kote - 11/28/2010 18:51:50

You've been warned:

In this paper, Egelman and co-authors present the results of a study into the effectiveness of the warnings displayed by browsers when users visit a phishing website. They find that most users (97%) click on the link in a phishing e-mail and visit the website. Once they've clicked on a link, active warnings are more likely to protect users than passive ones. They also find that the active warning in Firefox performed better than that in IE7.

They analyse the results of their experiment using the C-HIP framework. The main recommendations they make are that warning systems should interrupt a user's primary task to be effective, they should provide clear choices about what the user can do and change how the warning is presented based on the severity to prevent habituation.

This paper reiterated the dangers of habituation. End users of the internet are now bombarded with security warnings so often that they simply ignore most of them. The discussion of the mental models that users have about phishing was also revealing: most users blindly trust the e-mails they receive. Now that the major e-mail providers also show warnings in the e-mail interface itself, it would be interesting to conduct a follow-up study to determine how useful that is.

Why Johnny can't encrypt:

Whitten and Tygar address usability of security systems in this paper. They argue that standard best practices in the creation of user interface cannot directly be applied to achieve effective security. They conduct a cognitive walkthrough and a user test of the PGP 5.0 e-mail encryption software package to test their hypothesis.

Their usability analysis of PGP 5.0 and the user test show that the software is far from usable, even though usability was one of the design goals of PGP 5.0. But the paper isn't about the usability of one application; it is, in general, about the challenges of creating usable and effective security. Are those two goals fundamentally at odds with each other?

Prof. Tygar teaches a class at the School of Information where we re-evaluated the PGP system for a mid-term last year. I came away with the feeling that while things have improved quite a bit compared to what is described in the paper, it's still not ready for use by most end users who are not technically sophisticated. In the case of e-mail encryption, key distribution and management remain highly unusable. I can't think of any easy way to improve that important piece of the system without massive changes to the entire ecosystem, requiring the co-operation of many entities with varied interests and motivations. The fact that most people don't encrypt their e-mail today should be evidence enough that it is not ready for prime time. Security seems to work well when it "just works" without any intervention by end users, like encryption of web traffic (for the most part), but the fundamental question is: can we have usable security that is at the same time provably effective?


Kenzan boo - 11/28/2010 18:52:15

Why Johnny can't encrypt: a usability evaluation of PGP 5.0. The article describes PGP's user interface and the problems associated with using PGP. It states that most security UIs are either confusing to the point that people avoid using them, or are nearly nonexistent. The authors conduct a study of PGP 5.0's user interface, which is viewed as the most successful for mail security.

I have personally used PGP at my workplace for all secure mail correspondence. From using it at work, once it is set up, it is not too difficult to send or view encrypted mail; all you need is a password. However, the initial setup process is extremely difficult for a novice joining the company. It took me a while, together with one of our sysadmins, to properly set up PGP and have him sign my key for me. I did not use the UI to do it, but the setup process and the web of trust that needs to be initially verified do add a huge barrier to using PGP.


You've been warned: an empirical study of the effectiveness of web browser phishing warnings.

This article is a study of several modern web browsers' built-in anti-phishing warnings. Many browsers, like Chrome, now include active warnings that redirect the user's flow, whereas they used to have passive warnings. The study found that 97% of the users fell for at least one of the phishing attacks; with active warnings, however, 79% avoided the attacks.

One of the better-known browsers that has done this very well is Chrome. Firefox also attempts this but does not properly scare the user away. Chrome shows a bright red, scary page that warns the user; anyone is immediately averse to a bright red page, and it will catch their attention, whereas Firefox just asks the user to click a few buttons to continue.



Bryan Trinh - 11/28/2010 18:53:25

You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings

Browsers of all kinds have implemented some sort of warning system to prevent users from accidentally giving away personal information to suspicious websites, but not all have been successful. This paper evaluates active warning systems for preventing users from falling victim to phishing schemes, using the Communication-Human Information Processing model borrowed from the warning sciences.

The cognitive model they presented reminded me of the human cognitive model: in both, the human thought process is segmented into easily definable stages, and, like the human cognitive model, it provides an easy way to improve the micro-tasks involved in executing the overall task. Although nothing completely new and novel was evaluated here, this is a good example of successfully reapplying existing knowledge from another field, and they were able to glean very valuable information. For instance, given the negative Pearson correlation they found, designers of warnings now know to create warnings that are drastically different for increasingly severe attacks.

The disparity between the Firefox and IE users was left unexplained. Somewhat funny how that turned out, though.


What does an "attractive user interface" mean?

Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0

This paper evaluates the effectiveness of the PGP 5.0 system in making security usable for its users. From this case study the authors describe continuing work that looks further into the problem of creating user interfaces for security. The authors hypothesize that user interfaces for security need to follow design principles that are quite different from those of traditional software user interfaces.

This paper essentially provides a list of things one shouldn't do when creating an email encryption system, but not much else. The authors simply evaluate one system and note all of its failures.

It might just be me, but I don't buy their claim that PGP 5.0 is "attractive" and "simple to use". Those are very general, subjective statements made among a long list of other statements that are grounded in more concrete data. After reading the study it is clear that the system is not very easy to use; the authors even acknowledge the mismatch between icons and meaning. Things are not obvious, and the human model does not map well to the machine's model, which should be articulated through the interface. Anyway, it is just a weird point of discussion that should have been removed from the paper.


Matthew Can - 11/28/2010 18:56:07

Why Johnny Can’t Encrypt

This paper develops a definition of usability for security and evaluates PGP 5.0 against this usability standard. The evaluation reveals that PGP 5.0 has usability flaws that keep it from providing adequate security. The authors conclude that UI design for security is sufficiently different from general UI design, and they provide guidelines for usability for security.

The paper’s contribution to HCI is that it opens the door for further research into UI design for security. It establishes a definition of usability for security and provides some guidelines for designing UI for security.

I’m not sure how useful it was for this paper to evaluate PGP 5.0 to validate their hypothesis. PGP 5.0 requires that users be knowledgeable about computer security, and the authors seemed quite concerned with how well an interface gives the user a good mental model of the security mechanism in place. In contrast, I think the user should have to know as little as possible about how the security works, simply whether it is working or not. In fact, the only thing I’m certain I learned from the paper’s user test is that the average person is not a security expert.

One thing I did like about the paper is that it addressed the problem of too much information for UI for security. This is a problem that Facebook has been trying to deal with, and there doesn’t seem to be any good solution. One suggestion is to present only the most relevant security information, choosing default settings for less important security parameters. These default settings should be fail-safe. The problem is, this is often in conflict with what companies like Facebook want you to do, namely to share more information.


You’ve Been Warned

This paper presents a lab study of the effectiveness of browser phishing warnings. The paper analyzes the results of the study in the context of the C-HIP model from the warning sciences. The analysis leads the authors to suggest guidelines for making phishing warnings more effective.

What I liked about this paper is that it analyzes the results of the study with a warning model in mind. By looking at each stage of the model separately, the paper can explain the results and provide specific suggestions at a fine granularity. For example, the warning comprehension stage of the model examines how well the warning conveys a sense of danger and presents suggested actions. The authors found that one way that Firefox warnings were better than IE warnings is that they provide more effective warning comprehension.

One thing I am concerned about is that this study lacks ecological validity. There could be side effects from the lab study that made people more willing to complete the study, despite the risks. For example, participants might have chosen to ignore the phishing warnings because of the formality of the study and because they placed trust in the researchers (think of the Milgram experiment). The authors do state that they think the desire to complete the study was offset by the lack of desire to purchase the items in the study. However, it is not at all clear what their basis is for thinking that the "lab effect" was nullified.


Richard Shin - 11/28/2010 19:00:19

Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0

This paper explores the need for security-specific usability standards through a usability evaluation of PGP 5.0 that examines whether novice users can successfully achieve security goals using the software. Through a cognitive walkthrough and laboratory user testing, the authors find that PGP 5.0 fails usability standards for ensuring security, despite seemingly meeting conventional usability goals. They then discuss possible techniques and guidelines for user interface design that better guide novice users into ensuring security when using software.

As the authors point out, ensuring security through usability poses challenges different from most other usability goals; chiefly, users tend to care not about security on its own, but merely that security is maintained while they carry out their goals. Previous research we have read focused mainly on enabling what users actively and specifically desired, rather than nudging users toward a hidden goal or maintaining ambient expectations, so I thought this paper covered a previously-undiscussed aspect of designing user interfaces. I don't recall having previously seen the technique of cognitive walkthrough, either, which seemed a powerful yet convenient way for designers to discover potential problems in interfaces, albeit not a specific contribution of this paper. Testing whether users successfully realize the authenticity or security of some aspect of a computer system carries great importance in research into usable security, and guidelines for carrying out such tests could help researchers determine how users react to security problems.

However, I thought that the paper's use of cognitive walkthrough seemed somewhat inappropriate for determining whether PGP is actually usable or not. While the technique lets designers devise hypothetical problems users might face, it provides little knowledge about whether such problems are real or whether users would actually suffer from them. The authors don't make an attempt to ground their assumptions about the user's thinking processes in empirical observation, and it seemed to me that the authors could easily have been picking at inconsequential problems in the user interface, or presenting non-problems that users would have no trouble with. The approach also doesn't uncover any potentially positive aspects of the user interface, making it difficult to judge whether the UI overall provides usable security.

You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings

This paper presents a study of how effective browser warnings are at preventing users from falling victim to known phishing attacks. The authors compare what they call 'active' and 'passive' phishing warnings, distinguished by whether the user must dismiss the warning in order to proceed, or the warning merely informs the user of a possible phishing attempt and requires no acknowledgement. They found that while only one participant in their study heeded passive phishing warnings, a dramatically greater 79% of participants followed active warnings, demonstrating the great impact a seemingly small design decision can have.

Unlike many of the other papers we have read in the past, this paper does not discuss a novel system nor systemize/classify a large number of systems; instead, it studies the effectiveness of specific representative example interfaces, deriving general conclusions about their representative characteristics. The conclusions of the study, that active warnings have greater effectiveness compared to passive warnings, seem obvious in a sense, but the study still seems valuable in rigorously and empirically examining whether such intuitions actually hold true, and demonstrating that active warnings stop phishing attacks more than an order of magnitude more often than passive warnings. Undoubtedly, future products which incorporate security warnings could learn from this research and better ensure the safety of their users.

It seemed to me, though, that the authors only examined the effect of one dimension of variance in warnings; they could have additionally studied the effects of other variables without much trouble, which would have been more helpful than just the passive/active distinction. It was also disappointing that they tested only two browsers; perhaps they could have uncovered more insights if they had exposed study participants to more warning designs. Also, the authors didn't seem to make much effort to generalize their findings to different kinds of warnings; phishing indicators are a fairly narrow category, and it's hard to tell whether the same lessons would necessarily hold true for other types.


Thomas Schluchter - 11/28/2010 19:00:19

Why Johnny can't encrypt

The paper presents the results of a user study on PGP software that explores how well various security-related tasks are supported.

I don't understand why the user interface is supposed to support a task that isn't well understood in the first place. Much like Microsoft Word won't help an illiterate person write a letter, and the bash shell won't help someone completely unfamiliar with UNIX execute system commands, PGP's UI won't be able to help users who are not aware of the underlying principles of securing their communication. Both Word and the bash shell are tools built to execute tasks, and both the illiterate Word user and the UNIX novice will be able to wield them to produce some kind of outcome: most likely an illegible document or severe damage to the file system.

A common complaint about Facebook's privacy settings is that they are too complicated, resulting in poor user choices. I think this complaint ignores an important part of reality: People are also just plain unaware of privacy implications of what they do online. Surely, a confusing UI will not help them become aware of these implications, but the existence of a highly optimized interface will not guarantee that everyone turns into a privacy geek. There is a parallel to using encryption software here: Poor usability impacts those who know what they are doing and would like to get it done. Optimized usability does not help those who don't even know what they are doing.

To me, the paper doesn't answer the question why Johnny can't encrypt, it rather discusses why the concept of encryption is difficult to successfully convey in a user interface.

You've been warned

The paper reports the results of a study on the effects of different phishing warnings in popular web browsers. It finds that active warnings that disrupt the user's flow are far more effective than passive warnings.

The paper is interesting in that it systematically looks at what causes us to pay attention. The 'crying wolf' phenomenon is probably familiar to any user of personal computers. It raises the question of whether usable security should be implicit rather than explicit.

Instead of relying on the user to be a responsible individual who reads all warnings, analyzes them in context, and then makes a conscious decision about how to proceed, maybe it would be more effective to take a very conservative approach to security problems by default. In the case of phishing, for example, one could imagine automatically blocking access to the flagged site, and only letting the user view it through an explicit action.


Pablo Paredes - 11/28/2010 19:01:56

Summary for Egelman, S., Cranor, L., Hong, J. - You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings.

This paper, in too many words, describes an analysis of how users behave when confronted with phishing warnings. The study analyzed three groups: one with Internet Explorer (IE) active warnings (a prompted choice menu), one with IE passive warnings (a simple window), and one with active Firefox warnings. It was interesting, yet not surprising, to note that IE users did suffer from the "crying wolf" effect. Apparently IE's warnings for a variety of important and unimportant issues all look very similar, and are obtrusive enough, that people learn to mostly disregard them without reading them. This effect was not observed with Firefox.

Although users were screened for computer literacy, one doubt I have concerns the profiles of Firefox versus IE users: the paper does not clearly explain them (time using the tool, reason for choosing it, etc.), and therefore it is not clear whether this has an effect. My other doubt is about the methodology. There are loose ends in both the qualitative and quantitative analysis: the groups were not completely balanced, the answers to the qualitative questions were not complete, and it is unclear how much the coincidence of the phishing message with the buying exercise had an effect, as there is no control group in which people receive phishing messages without any induced buying exercise.

Overall, it is clear that active warnings have stronger effects than passive ones, as they activate the first phase of the C-HIP model, the attention-switch element. However, there is no relevant analysis of whether and how the other elements of the process are affected. No analysis was performed to determine the learning effect in users. A plausible hypothesis is that a quick learning curve (and therefore a lasting behavioral change) can be reached due to the seriousness of the potential attacks. Also, habituation to warnings is not the problem in itself, but rather the action taken after receiving the warning. Habituation could lead to a different outcome if the perceived risk is assessed differently; i.e., habituation could lead to immediately closing the page at the mere appearance of a specific (well-designed) phishing warning.

Finally, another notion to take into account is that many users are not fully aware of the different pop-up warnings and windows, and we cannot assume that they are paying full attention to the screen, as reported by some users who never saw the warnings due to their concentration on the keyboard. Therefore, additional indicators could perhaps be added to other parts of the hardware, such as lighting up the keyboard, making noises, or giving haptic feedback like vibration.

As a whole, I believe this paper is far too long and overanalyzes the problem. However, the issue of proper design for serious warnings is indeed of interest, and it should be considered holistically, not only at the design level but also as a social process that may demand investment in dedicated publicity and educational campaigns.


Summary for Whitten, A., Tygar, J. D. - Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0

Again, an overly worded paper. Much of the analysis could have gone into an appendix describing the process, rather than including all the verbatim transcripts in the main body. Still, the notion of interface design for security is clearly shown here, although I strongly believe the issue should not be analyzed as a design-only process, but rather through a holistic approach that recognizes that cyber security has to be managed from a constructivist point of view, i.e., as a social process.

The paper defines the five "differences" between a security application and a normal end-user application by analyzing PGP. What is puzzling is the description of PGP as a good interface. The very notion of good interface design is based on the effective accomplishment of the task the interface is designed for. In this case, given the alarming number of people who failed to encrypt an email, it is evident the application is NOT well designed.

The main lesson from this paper is that the user does not treat security as a primary need or objective when using computers, but rather as a secondary one, or even worse, as a necessary evil. This complex problem should be analyzed beyond a design perspective. The very fact that security, given all the threats it guards against, is not acknowledged by the regular user as a primary need or objective clearly shows the need to take a step back and analyze whether users are aware enough of the issue. This could even be interpreted as a public security problem, and therefore society and industry leaders should attack this issue at the root, i.e., starting with proper education and awareness campaigns.

The personal and singular nature of security processes demands a strong analysis of the educational process. I argue that with an adequate educational process, one that clearly shows why security is worth worrying about and the clear, simple steps needed to be protected, the need to change the design paradigm becomes less pressing. What I mean is that by raising privacy and security on users' scale of needs and objectives, best-practice traditional design tools and processes can be used to quickly develop strong security interfaces. However, it is important to recognize that some additional measures will be needed to make interfaces simpler and stronger. It is disappointing that the authors do not even attempt to describe what these design "practices" should be, or how they could address the security properties they list (unmotivated user, abstraction, lack of feedback, barn door, weakest link). I believe one quick step toward better design is to ensure simplicity (to avoid distractions), and, first and foremost, to teach users the concepts of privacy.

As a whole, I find security design an interesting subject, and I agree with both papers that more research should be done. However, I think a holistic approach should be taken, beginning with understanding why security is not seen as a primary concern by users, and more robust longitudinal and field experiments should be performed. This research should be endorsed and financed by the corporations that have much more to lose than the weakest link (the end user) does.


Dan Lynch - 11/28/2010 19:02:36

Why Johnny Can't Encrypt

This article starts out discussing the implications of security issues for user interface design. To test this, the group used PGP 5.0 to see whether cryptography novices could send an encrypted email message. Their conclusion was that PGP was not usable enough to ensure security for the majority of users. This is clearly a problem, particularly because the paper states that 90 percent of all security failures are due to configuration errors, which are directly impacted by usability.

This paper is extremely important, as security is one of the most discussed topics in the media today and millions of people's information traverses the net every single day. It is imperative that these issues are resolved, since personal information and privacy are our first defense against major crimes such as identity theft. Overall this paper is relevant and on point with today's issues, even though it wasn't written recently.

You've Been Warned

This paper discusses and studies the efficacy of phishing warnings in web browsers. In other words, it looks at how effective web browsers are at letting users know their personal information may be used maliciously by the current web page. The study compared active and passive warning systems, examining how well people paid attention to these warnings and whether they heeded them at all.

Again, a very important topic. A model for warnings is imperative if we are to defeat the evil hackers of the world (note that there are many good hackers). Active warnings are probably the only way to really tell people they should be on their toes---especially when most people are not as educated in computer security issues as computer science majors.


Anand Kulkarni - 11/28/2010 19:03:07

Phishing


The authors experimentally compare active and passive warning systems for phishing attacks, finding that active warning systems significantly outperform passive systems.

The contribution here makes a natural policy recommendation for designing browsers and email clients -- the machine should shoulder the work of identifying purported phishing attacks and cautioning the user. This recommendation applies more generally to HCI in security settings -- since most contexts have long periods without security hazards, users become incautious in general except when explicitly warned against a specific circumstance as it arises. This could lead to better security systems in general, since I suspect many designers don't consider the HCI implications. The authors also include a model of how users interpret warnings, which could lead to a more effective understanding of how to structure them; some effort is spent on generalizing the findings. The primary validation is experimental, with a setup that mimicked a typical online shopping activity. I was impressed that the authors carried out more extensive user interviews to try to pinpoint the reasons certain warnings succeeded or failed.


Why Johnny Can't Encrypt

The authors attempt to develop design principles unique to security applications and contexts, making the case that existing usability principles are inadequate through an experimental evaluation of PGP 5.

The core contribution here is the development of usability design principles for security, which is an important cross-disciplinary application. While security research more commonly focuses on guarantees of theoretical security or effective techniques for application, it seems equally important (and relatively unexamined) to determine how to control for human error and non-use of rigorous security by improving design and making good security practices easy to carry out. It seems both natural and appropriate to do so. The use of PGP 5 is a particularly compelling example, since it is both a well-regarded, fairly central piece of consumer and industrial security software and one which the authors report scores highly on traditional, non-security-specific metrics of usability. I particularly like that the experiment attempted a social engineering attack by asking for the key in plaintext, and that it considered the possibility of sending untrustworthy keys -- these seem to show that there is a significantly higher standard for "secure" usability than for traditional usability.


Aaron Hong - 11/29/2010 4:33:02

In "You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings" by Egelman et al., they discuss two different kinds of warnings in web browsers, active and passive, and the effectiveness of them. They show that active warnings have more power to deter users from entering a phishing website, but that a staggering 97% fell for at least one of the messages despite 79% of participants heeding the active warnings. They corroborate research done on phishing also by showing that many decisions made to trust the phishing website was based on the professional "look-and-feel." They discuss a few methods for more effective warnings such as interrupting the primary task or looking more "dangerous."


Overall, I thought it was an interesting study and definitely a very important field to look into, since phishing is probably one of the most effective scamming methods. I like the approach they took; however, I thought the experimental set-up was a little contrived--biasing participants to be more likely to trust the phishing websites. First, one issue they noticed themselves was the precisely timed nature of their attack, which they justified as possible in the real world through wireless traffic sniffing. But most phishing messages are not so aptly timed within a few seconds of a purchase. Another issue is that the laboratory setting may have led participants to trust the information given to them--since they were in a sterile lab at a large university, they may have been less suspicious that something shady was going on. There is no clear indication of whether participants would have reacted differently in the wild, but since these factors were not controlled, I question whether they had an effect.

In "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0" by Whitten and Tygar, they talk about user interface designs for security systems as a different domain than general software user interface designs. There also are other criteria for security systems that take priority as "don't make dangerous errors" as in "exposing the secrete you thought you had encrypted." Another thing I thought the paper made a good point about is that security concerns are usually secondary compared to what you intended to do, which in this case is send an email. With that perspective in mind, these security mechanisms need to be designed in a way were we can operate them even if they are a corollary concern. Certainly, it shouldn't take 90 minutes and still not necessary work. It was appropriate the paper noted some users said it was "unlikely that they would have continued to use it in the real world."