- Bjoern's Slides
- Extra Materials
- Discussant's Materials
- Reading Responses
- Cranor, L. F. and Garfinkel, S. Security and Usability. O'Reilly, 2005.
- Whitten, A. and Tygar, J. D. Why Johnny can't encrypt: a usability evaluation of PGP 5.0. In Proceedings of the 8th USENIX Security Symposium (Washington, D.C., August 23-26, 1999).
- Egelman, S., Cranor, L. F., and Hong, J. You've been warned: an empirical study of the effectiveness of web browser phishing warnings. In Proceedings of CHI 2008, pp. 1065-1074.
Valkyrie Savage - 10/31/2011 23:00:56
Users are bad at making security decisions. There are ways to influence them to make better ones, and also it would be nice if we understood why.
The conditioned-safe ceremonies paper dealt with some things that I have considered during conversations with Drew. First, though, I want to say that I really like their word choice of “ceremonies.” It rings as exactly the right term for a protocol with people, and it brings up mental images (for me) of something which can be long and drawn out and frilly and not necessarily useful or, alternatively, something which is functional, short, and plain. It’s a great metaphor for the circus we currently have of security measures implemented inconsistently across dozens of websites of varying importance in our lives.
I minored in psychology during undergrad, and I can’t say that I learned much from the bits about how users are trained into behaviors. I pondered instead the things that I would question about this paper: mainly its extolling of email registration. I suppose at the end they do mention that email passwords can be short and hacked themselves, but they didn’t present a reasonable alternative for ensuring that the correct user is being registered. My friend just this week had his Xbox Live account hacked, and he managed to beat the attacker to clicking the password-change link in his email (for some reason, MS made the VERY questionable security decision of sending a lost-password link to both the registered email address for the account and an UNCONFIRMED new email address that the attacker had added). I think it’s a shame that there are “ethical” barriers to performing real studies on how people would react to proper phishing scams. Too bad the black hats don’t give a damn.
The shorter paper by West was kind of a cute, happy-go-lucky blurb about a few easy ways to make security easier. I accept all his commentary about users and how they are more inclined to go for their goals than to worry about other things. How can we make security something more than an abstract concept? I can’t remember who it was, but someone I was speaking with recently postulated that if we could display, for instance, an example web page that an attacker could build out of data gotten via a phishing attack on a particular site, it might convince users to be a bit more cautious. Then again, what are people cautious about? As West discussed, everyone thinks he is a better-than-average driver who will live longer than average (well, not me, but that’s a different story), and many, many factors contribute more to fear than reason does. People are more afraid of dying in plane crashes than car crashes, despite the fact that it’s statistically much less likely, for reasons that perhaps aren’t totally clear yet. There’s a lot more psych work to be done before we can tap into the right parts of users’ brains to get them to make good decisions.
Laura Devendorf - 11/27/2011 21:15:58
Karlof et al. introduce and discuss conditioned-safe ceremonies, which promote safe security practices in human-computer interaction by conditioning users to engage in safe practices and to be naturally suspicious in attack scenarios. "The Psychology of Security" discusses human practices in relation to security to suggest ways system designers can promote the right actions.
I appreciated the approach taken by Karlof et al. in developing technologies to support how humans actually interact instead of how or what they should understand. I think the paper presents an interesting problem in conditioning users toward best practices while also maintaining some realistic measure of usability. I also thought the evaluation they completed was very well executed considering the circumstances, and it was interesting in the way it used deception to measure actions. My only concern with the study is the use of the XLab system and how it might provide access to one type of user that may not be representative of the entire population. It was also surprising how much money they provided to participants.
I liked the discussion of factors to consider when developing interfaces for security, and the studies that showed interesting trends between people evaluating risk and those evaluating reward. However, I'm not sure that all methods for teaching users about the risks they face were discussed. For instance, I've ignored warnings simply because I don't really understand what they mean. I don't want to read a paragraph of text before getting to the point I need. Perhaps we need to use different language in warnings that makes what is happening simpler, but not pedantic. Similarly, I feel like a marketing campaign showing the risks and teaching people to be safe online might be more effective than punishing them through the computer.
Amanda Ren - 11/27/2011 21:56:10
The West paper describes how human behavior relates to people making security decisions.
This paper states that regardless of the user interface of a system, it's how users think of risk that guides their behavior when it comes to security. Most users believe that they themselves are not at risk. It was interesting that users would end up in a decision loop where they take a security risk and gamble on accomplishing a task rather than accept failure. If the outcome is favorable, they continue taking the risk. This relates to today's technology because there is an increasing number of Internet users and, with it, an increasing number of hackers and spammers. The good news is that because of these predictable characteristics of decision making, we can design systems so that secure settings come as the default. Or, as the author also suggests, use positive and negative reinforcement.
The Karlof paper presents an email-based registration ceremony, which the authors compare to registration based on challenge questions.
This paper was a response to humans relying on "click-whirr" responses when making decisions. Because of this, people often fall victim to social engineering threats. Many web sites have instead turned to machine authentication, but this also requires human intervention because users usually have multiple machines. The paper's technique is to condition users to always click a registration link when authenticating a machine. As mentioned in the previous paper, many users follow malicious instructions because they don't associate much risk with the website. This paper focuses on the fact that humans have conditioned responses. I have seen this exploited in today's attacks, where the attacker imitates the popup Windows pushes when it detects that your computer is at risk from a virus. Users are conditioned to allow the program access, and because of that, give the virus access to install itself.
Sally Ahn - 11/27/2011 23:02:29
The readings for this week discuss the challenge of designing systems to maximize security for online users. In "The psychology of security," Ryan West outlines some human tendencies revealed in psychology and decision making studies that are relevant to the design of security systems. The second paper investigates this issue further and argues for conditioned-safe ceremonies that leverage relevant findings in human psychology to improve the security of machine authentication systems.
In the first article, Ryan West lists psychological tendencies of human users that make them vulnerable to security attacks. The major points are that humans do not perceive risk accurately, and that they favor quick decisions rather than slower but more accurately evaluated decisions. In the second paper, Karlof et al. also describe some human tendencies (namely, click-whirr responses and rule-based decision making) that malicious systems can exploit to gain access to users' private information. Although these psychological findings are not surprising, it is interesting to consider them from the perspective of a security designer. West's suggestions (e.g. prove the value of security systems with notifications of thwarted attacks, distinguish warning alerts from other system alerts) are reasonable and straightforward, but Karlof et al.'s findings show that users become conditioned to ignore such notifications and alerts, which suggests that the efficacy of these measures will decrease over time. Thus, I think Karlof et al.'s idea of conditioned-safe ceremonies is quite valuable. Their email registration experiment was a thorough user study that exemplifies the efficacy of this approach. However, I am not sure how widely this idea can be applied to security design in general. Can we believe that conditioned-safe ceremonies are indeed general design principles when we are only given one example that implements such a design?
To be fair, usable security is a particularly challenging field in HCI because the designer must also contend with other (malicious) designers. The existence of human adversaries forces constant change in security interfaces as each side tries to subvert the current practice of its opponent; security designs that are successful today may become targets for new malicious systems tomorrow. This makes me appreciate the difficulty of creating general design principles like those of conditioned-safe ceremonies. I do think that the authors make a valuable contribution by showing how systems that integrate human tendencies into their design make it much harder for malicious parties to counterattack.
Galen Panger - 11/28/2011 1:08:39
The conditioned-safe ceremonies study was very compelling. I am not at all familiar with the computer security field, so it was news to me that the challenge-questions approach could be easily circumvented. However, I am quite familiar with behavioral research, and so the distinctions around rule-based decision making and conditioned responses make the failure of the common approaches (login screens and challenge questions) obvious, once highlighted. The study’s idea of using email authentication, because clicking the link always registers the user’s computer, not the phisher's, is a good one. Also, the finding that warnings either did not affect or actually increased users’ susceptibility to attack was a bit chilling—warnings can counterintuitively make people think they are safer!
I also very much sympathize with the statement on page 3 that “decades of buggy software have conditioned users to expect errors, failures and other incomprehensible system behavior.” I don’t think the importance of this can be overstated—often pages don’t load correctly even though they are authentic, and so phishing attacks don’t have to exactly replicate the targeted website because users have come to expect buggy outcomes. Even worse, even though users are told to check the security certificate, LOTS of websites have invalid certificates, sometimes for months at a time (case in point: discoverbank.com). Even some Google properties have insecure elements, and so Chrome (a Google browser!) shows users a warning about them. This inconsistency makes even the most basic heuristic (check the URL and certificate) failure-prone.
The other study, on the psychology of security, I thought was a bit basic, but it was nonetheless helpful to see basic concepts of usability and behavioral economics applied to computer security. Most important, I thought, was the highlighting of the real costs of proactive security procedures and the uncertain gains (which are themselves only uncertain losses—meaning people are likely to gamble). Here, another basic behavioral economics rule (set good defaults) also applies, but seems too basic to be very helpful. Any thoughtful security designer should know intuitively that setting defaults will have a big impact on what users do vis-a-vis their security.
Hanzhong (Ayden) Ye - 11/28/2011 1:51:23
The first article discusses specific psychological issues in Internet security. The article points out several underlying patterns that otherwise go neglected in daily life. One such pattern is that people tend to believe they are less vulnerable to risks than others, and that they are less likely to be harmed by consumer products than others; another is that people do not perceive gains and losses equally, which means that when designers take security into consideration in a UI, they must be aware that users will perceive a loss as greater in magnitude than an equivalent gain. These underlying patterns suggest many useful tips for designing a security-conscious user interface: reward pro-security behavior, improve awareness of risk, catch corporate security policy violators, reduce the cost of implementing security mechanisms, etc.
The second paper introduces the notion of a conditioned-safe ceremony. The authors use the word ‘ceremony’ because it is similar to the conventional notion of a protocol, but it includes interaction with human participants. Several important ideas from the human factors and human reliability communities are introduced and used, including forcing functions, defense in depth, and human tendencies. Based on the principles they propose, an email registration ceremony is implemented and a study is launched with 200 participants. Some deception strategies are used to get more credible results. They found that conditioning does help email registration users resist attacks, while it also makes challenge-question users more vulnerable. However, I think a conditioned-safe ceremony still has its limits; for example, it is vulnerable when adversaries attempt to convince users to disable protective mechanisms, and in some other particular situations.
Suryaveer Singh Lodha - 11/28/2011 5:01:21
Conditioned-safe Ceremonies and a User Study of an Application to Web Authentication:
The paper discusses click-whirr response issues in common Internet use. Also, even if some websites use "challenge questions" as a way to provide an additional level of security, I agree with the authors that the security benefit of such an approach is minimal, if it exists at all. Challenge questions are vulnerable to man-in-the-middle (MITM) attacks. The authors discuss how users frequently omit the browser security mechanisms, and I can relate to it. Bank of America's online banking uses one similar check - I need to select an image as my safe image while creating my account, and if the image doesn't appear on the login webpage every time I log in, then I can figure out something is fishy and the webpage is not secure for me to enter my details. Initially, when I was new to this, I used to take care to check the image before entering my details, but slowly it has become part of my routine, and I can't even recollect the last time I consciously checked the image on the login page. The authors present a case for the usefulness of a "conditioned-safe ceremony" and back it up with a user study which proves its worth. A conditioned-safe ceremony is one that deliberately conditions users to reflexively act in ways that protect them from attacks. Such ceremonies only condition safe rules, and also condition at least one immunizing rule, which when applied during an attack causes the attack to fail. I liked the user study discussion; especially the different anecdotes from various users, which definitely reflect the diversity in user background and understanding. The results are consistent with the hypothesis that conditioned-safe ceremonies help in resisting social engineering attacks by a big factor.
The psychology of Security:
This article briefly introduces research on risk, uncertainty, and the psychology of human decision making, and focuses on how these factors affect users' security decisions. Humans tend to be optimistic and hence tend to believe they are less vulnerable to risks than others. One of the major problems is the non-acceptance of security tools by users. When evaluating alternatives in making a decision, outcomes that are abstract in nature tend to be less persuasive than outcomes that are concrete. Safety is an abstract concept, as the reward for being secure is "nothing bad happening," and hence it doesn't factor much into a decision between alternatives. People do not perceive gains and losses equally. While a system designer may consider the cost of a security effort small, users could perceive that loss as worse than the greater gain in safety. People tend to focus more on the losses that will affect their immediate goal than on the gains when making decisions under time pressure. We should find ways to reward pro-security behavior, improve awareness of risk among users, and catch corporate security policy violators, and at the same time focus on reducing the cost of implementing security (probably by employing secure default settings, unless the user spends time changing the settings to something insecure).
Allie - 11/28/2011 8:29:06
In "The Psychology of Security" by Ryan West, the author asserts that encryption, authorization, and authentication can be difficult for people to use and understand, and that they often fail to recognize security risks or the information provided to cue them. This is because: 1) people do not think they are at risk; 2) users aren't stupid, they're unmotivated cognitive misers; 3) safety is an abstract concept; 4) feedback and learning from security-related decisions are limited; 5) evaluating the security/cost tradeoff is hard; 6) people must make trade-offs between risk, losses, and gains; 7) users are more likely to gamble on a loss than accept a guaranteed loss, meaning people do not perceive gains and losses equally - this is important for the system designer, who must consider the cost of security in the context of loss, since it could be perceived as worse than the greater gain; and 8) losses are perceived disproportionately to gains.
The paper suggests the following steps to improve security awareness among users: 1) reward pro-security behavior; 2) improve the awareness of risk; 3) catch corporate security policy violators; 4) reduce the cost of implementing security.
In the "Conditioned-safe Ceremonies and a User Study of an Application to Web Authentication" paper, Karlof, Tygar, and Wagner propose a registration ceremony for machine authentication based on email. They found that conditioning helped email registration users resist security attacks, but contributed toward making challenge-question users more vulnerable. The study is based on the idea that click-whirr responses can be harnessed to reinforce authentication security. A conditioned-safe ceremony is one in which users reflexively act in ways that protect them from security attacks. This theory was supported by a subsequent user study of an email-based registration ceremony. Conditioned-safe ceremonies 1) should only condition safe rules; and 2) should condition at least one immunizing rule, i.e. a rule which, when applied during an attack, causes the attack to fail. Nonetheless, 42% of email users in the study were vulnerable to simulated attacks despite the registration ceremony.
Both papers are interesting in applying understandings of behavioral psychology to enhance computer security. The study conducted in the second paper leaves much room for improvement, as perhaps the assumptions made prior to the study may not have been all that on the mark. Nonetheless I believe the authors are on the right track in terms of addressing security issues from a psychological perspective.
Vinson Chuong - 11/28/2011 8:35:26
In "The Psychology of Security", West gathers past research into a list of bullet points describing (1) how users weigh tradeoffs when making security-related decisions and (2) how to present those tradeoffs to encourage "safe" decisions. In "Conditioned-Safe Ceremonies...", Karlof, Tygar, and Wagner present an example implementation of the above guidelines and show that users can indeed be guided towards "safe" decisions.
The main claim in West's paper is that safety is so abstract, and the consequences of not making safe decisions so rare, that people are unable to evaluate the tradeoffs accurately. He thus argues that people would rather take the gamble and make the unsafe decision than not perform their intended task. He goes on to suggest that to encourage people to make safe decisions, (1) the risks behind making unsafe decisions must be made concrete and (2) the costs of making safe decisions must be reduced. In other words, the expected cost of making an unsafe decision must be perceived as higher than the cost of not performing an intended task.
Karlof, Tygar, and Wagner show that one effective way of doing this is by incorporating the safe actions into the users' routines for performing their intended tasks. They argue that people would prefer routine to unexpected change. However, as conceded in the paper, the sheer number of possible attack vectors limits the usefulness of any single security mechanism.
In his conclusion, West wonders if there is any way to remove the human from the equation--to either automate or force safe actions by default. An example of this is the use of password managers. A password manager can map a website (SSL certificate and domain name) to user credentials and automatically submit the right credentials when the user visits the right website. If an attacker attempts to coerce the user into entering his credentials into a similar but different website, the password manager would prevent it.
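The exact-match idea behind this defense can be illustrated with a toy sketch (the `PasswordVault` class and its method names are invented for illustration; real password managers also verify the SSL certificate before filling):

```python
class PasswordVault:
    """Toy password vault keyed by exact origin (scheme + host)."""

    def __init__(self):
        self._store = {}  # origin -> (username, password)

    def save(self, origin, username, password):
        self._store[origin] = (username, password)

    def credentials_for(self, origin):
        # Exact match only: a lookalike phishing domain has no entry.
        return self._store.get(origin)

vault = PasswordVault()
vault.save("https://www.bank.com", "alice", "s3cret")

assert vault.credentials_for("https://www.bank.com") == ("alice", "s3cret")
assert vault.credentials_for("https://www.bank-secure.com") is None  # lookalike gets nothing
```

Because the lookup is an exact match, the user's reflexive autofill habit never hands credentials to a similar-but-different site, which is precisely the failure mode the attacker depends on.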
Another example is spam filtering. I don't think I've ever received a malicious email in my Gmail inbox (assuming I could detect such emails). Google is able to take advantage of the vast amounts of information available to it to detect and filter malicious content. Google does this not only in Gmail, but also in its Chrome browser, preventing users from visiting websites that may have malicious content.
So, what about crowdsourcing? Assume for a moment that Google Chrome (and its mobile variants) is the only browser in existence. Suppose that the browsers have a button which allows users to report suspicious-looking websites. A social engineering attack can't possibly be 100% successful. The question is, can this approach prevent users from being exposed to the attack over time?
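That reporting idea can be sketched crudely as follows (the threshold and names are made up; a real system such as Safe Browsing would also weigh reporter reputation and verify reports before blocking):

```python
from collections import Counter

REPORT_THRESHOLD = 50   # hypothetical number of reports before a site is flagged
reports = Counter()     # url -> number of user reports so far
blocklist = set()       # urls the browser would warn about

def report(url):
    """Called when a user clicks the hypothetical 'report suspicious site' button."""
    reports[url] += 1
    if reports[url] >= REPORT_THRESHOLD:
        blocklist.add(url)

def is_blocked(url):
    """Checked before the browser renders a page."""
    return url in blocklist
```

The interesting empirical question is exactly the one raised above: whether reports accumulate past the threshold faster than the attack spreads to new victims.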
Alex Chung - 11/28/2011 8:56:23
As stated in "Psychology of Security", the concept of safety is an abstract idea that is difficult to quantify. Risk assessment requires stakeholders to evaluate the value of property before deciding on the premium necessary to hedge against the risk. Thus consumers often offload the responsibility of picking safer products by associating safety with a brand. For example, Apple consumers often cite "no computer viruses" in Mac OS X as one of their major purchasing reasons. Yet that has more to do with marketing than anything else.
After I read "Why Johnny Can't Encrypt" and did the evaluation of PGP 5.0, my conclusion was that users want more security but don't want to pay for it by spending more time or effort. Transparency was also important, but minimal time and effort take precedence.
Currently, there are two extreme approaches to computer security. On the one hand, Microsoft Vista pushes all the burden of decision making onto users by spraying alert windows all over the place. It creates user fatigue and desensitizes users, like the agreement button on an online form. On the other hand, Facebook hides its security and privacy menu so deep that non-CS people would never be able to find it unless they watch the 10-minute video. Both cases are bad examples of HCI implementation of security and privacy.
Rohan Nagesh - 11/28/2011 9:01:14
This week's readings focused on the usability and design aspects of security. The first paper covered principles of users' security habits and the lessons we as designers can incorporate into our systems. The second paper covered conditioned-safe ceremonies, an approach that exploits the click-whirr reflexes of humans.
I thought the first reading's note about the two scenarios with losses vs. gains was quite interesting. I saw myself going with the sure $5 in the positive scenario but then taking a chance on losing $10 on a heads coin flip in the negative scenario. The fact that this scenario analysis translates very well to security was quite interesting to me. Users tend to focus on the fact that the gains from security are a bit abstract, and they'll take a chance on the potential ramifications of a security vulnerability.
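For what it's worth, the two framings described here have identical expected values, which is what makes the preference reversal striking. A quick check (the dollar amounts are taken from the scenario as described above):

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Gain frame: a sure $5 vs. a coin flip for $10.
sure_gain   = expected_value([(1.0, 5)])
gamble_gain = expected_value([(0.5, 10), (0.5, 0)])

# Loss frame: a sure -$5 vs. a coin flip risking -$10.
sure_loss   = expected_value([(1.0, -5)])
gamble_loss = expected_value([(0.5, -10), (0.5, 0)])

assert sure_gain == gamble_gain == 5.0    # mathematically identical choices...
assert sure_loss == gamble_loss == -5.0   # ...yet people pick differently in each frame
```

Rationally the choice within each frame is a coin toss, so the systematic flip (sure thing for gains, gamble for losses) is pure psychology, which is exactly why it matters for security warnings framed as potential losses.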
The second paper analyzed an email registration ceremony with a few different attacks: challenge questions vs email forwarding and email copy/paste. I found it interesting that challenge questions were identified by users to be less secure than email forwarding/copy paste. I'd imagine that the click-whirr principles of humans would trick them into forwarding their emails or copying their activation link as opposed to actually taking the time to fill out the challenge questions. I also found it interesting that the warnings presented in the system didn't really help.
Jason Toy - 11/28/2011 9:03:20
The Psychology of Security
"The Psychology of Security" is about how people think of risk and how this guides their behavior.
The paper discusses a framework for why users make the decisions they do regarding security, and solutions for guiding users to make better decisions. Other research has discovered that users make poor decisions because risk and security are hard to measure. Risk is abstract and hard to quantify: when users do a better job of being secure, they see less tangible results. In addition, users often find security a hindrance to their main objective and have been conditioned to ignore bugs and warnings, making warnings ineffective as users often bypass them. Two of the things mentioned about users and security that interest me are the cost/complexity of security and the inability of users to understand what is going on in many cases, or to understand their risk. I think the first is exemplified by router setup, which is often complex, leading users to fail to actually secure their routers. The second can be seen in the creation of Firesheep, an extension for Firefox that intercepts unencrypted cookies from sites like Facebook, allowing others on the network to hijack your session. While Firesheep is widely known even among the general public and countermeasures exist, I am unaware of many people who either care about or understand its implications.
The paper does a good job discussing various aspects of risk and safety, and one example I especially liked was the certificate and ActiveX control example. However, I felt that the paper could have gone into more depth on examples like these: given the example, does it matter how much you trust the site you are viewing, or how quickly you want to obtain your answer?
Conditioned-safe Ceremonies and a User Study of an Application to Web Authentication
"Conditioned-safe Ceremonies" discusses a registration ceremony for machine authentication based on email.
This paper discusses a new system for reducing phishing attacks by creating a ceremony for authentication. The paper argues that humans' automatic or "click-whirr" responses lead them to see a legitimate-looking login form and immediately punch in their credentials. The same problem occurs with challenge questions, which makes them an ineffective safeguard. The authors' solution is to use email to send a randomly generated link to users registering a computer for use. After the link is used on that computer, it is rendered useless. Even if an attacker can obtain user credentials, they cannot obtain the link giving their computer access to the account. Unless the user gives the attacker the link and fails to use it before the attacker does, the user is safe. This system is similar to the one used by the gaming distribution platform Steam today. When you log on using Steam, you have to authenticate a new computer. In addition, when you do not use a computer for a while, or you have registered a new computer for use on Steam and go back to an old one, you may have to re-register by going through the email process again.
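The single-use link mechanism described above can be sketched roughly as follows (a toy, in-memory version, not the paper's actual implementation; names like `issue_registration_link` are invented, and a real deployment would persist tokens server-side and expire them after a short window):

```python
import secrets

# token -> {"user": ..., "used": ...}; a real system would store this in a database.
_tokens = {}

def issue_registration_link(user):
    """Email the user a link carrying a fresh, unguessable, single-use token."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = {"user": user, "used": False}
    return f"https://example.com/register?token={token}"

def redeem(token):
    """Register the machine presenting the token; each token is valid exactly once."""
    entry = _tokens.get(token)
    if entry is None or entry["used"]:
        return None  # unknown or already-spent token: the attack fails
    entry["used"] = True
    return entry["user"]
```

The safety comes from the combination of unguessability (the token is random) and single use: whichever machine redeems the link first is the one registered, and a replayed or phished copy of the link is worthless afterward.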
The paper presents an interesting user study that does a good job of simulating real-world conditions by working outside a laboratory and misleading subjects about its true purpose. Nuances like allowing users to change their PayPal accounts, and using 'real' money, helped with this simulation. However, one weakness of the paper is that it does not discuss the problems of its system. I argue that one weakness of this system that still exists, and is not touched upon by the paper, is the use of email as identification. While the system seems more secure, this only helps if your email is secure. Services like Steam keep records of your email. Should your Steam account be compromised (it has happened before), and if you use a universal password (true for many users), the whole system is compromised. Conversely, if your email is compromised, an attacker can sift through your emails, discover what accounts you have, and compromise them (again assuming a universal password, or an attacker monitoring you to obtain your credentials while you are unaware you are compromised). Finally, a weak link in the chain can break the system, as can be seen in the Sarah Palin email hack, where biographical information was looked up on the web to answer security questions, allowing an attacker access to her email. This system is still only as secure as the user makes it.
Shiry Ginosar - 11/28/2011 9:05:30
The Psychology of Security and Conditioned-safe Ceremonies present a rather grim yet true view of the world, in which users rarely want to pay the price of keeping their systems secure. While The Psychology of Security offers an overview of the reasons for this behavior, Conditioned-safe Ceremonies offers a novel way to bypass the lazy tendencies of the user and even use them to the advantage of the security measures.
It is interesting to read the Psychology of Security paper in light of recent Internet services. The main assumption in the paper is that the system designer has an interest in keeping the user's data secure. However, in many Internet services this is arguably no longer the case. Companies like Facebook and Google, for instance, make the majority of their revenues from advertisements that are more effective when targeted at specific users. As a consequence, the default settings on Facebook seem to be specifically designed to offer minimal security to the user, and moreover, they often change without proper notice. It would be interesting to rethink the question of user security from this perspective, and to come up with mechanisms that users themselves can deploy that will allow them to stay secure while relying on their built-in click-whirr behavior, as is suggested by the Conditioned-safe Ceremonies paper.