Usability Testing

From CS160 Spring 2013


Readings

Reading Responses

Bryan Pine - 3/9/2013 12:22:53

 For my heuristic evaluation, I reviewed Venmo, a mobile payment app.  Venmo allows users to input their account information and quickly make payments to other users from their phones.  It also has a social component that lets users stay updated on what other users are paying for.
 One of the major problems (severity 3) that I noticed in the heuristic evaluation was the poor visibility of the major functions on the homepage.  A lot of important functions were hidden on the profile page or relegated to the top and bottom edges of the screen to make room for the news feed that dominates the homepage.  This creates frustration, delay, confusion, and mistakes.  Although these problems are all different, they are somewhat related; a fix that solves one is likely to help solve the others.  Because of this, I feel that the usability of the home screen can effectively be approximated by time-to-use metrics for key functions.
 Assuming that I have a home screen design that I consider to be superior, my hypothesis would be that the new design is indeed superior to the old design in that NEW users can access the application's main functions (like sending money) faster than they could under the old design.  I am specifically limiting this hypothesis to new users because I suspect that experienced users' knowledge of the old system would skew the results.  The independent variable would be the two screen designs, and the dependent variables would be how long it takes to access key functions like sending money or updating the profile.  I would control for the testing environment (distractions would be a problem) and subject reaction speed (results could be scaled based on an independent measure of reflexes).  Although controlling the testing environment would make the results valid only for that particular environment, I would repeat the experiment in many different (controlled) environments and make sure that the same trends were observed in each case.  I just don't want to compare apples to oranges by having one person do a reaction-time test in a lab and another do it on a street corner.  Other factors, like users' experience with apps in general, would be random variables to ensure the results are robust.
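A minimal Python sketch of how this time-to-use comparison might be analyzed, assuming made-up timing data and the reaction-time scaling described above (the numbers and names are hypothetical, not from the original post):

    # Hypothetical analysis: compare time-to-access for a key function
    # (e.g. "send money") between the old and new home screen designs,
    # scaling each participant's task time by a reaction-time baseline.
    from scipy.stats import ttest_ind

    # (task_time_seconds, reaction_baseline_seconds) per participant -- made-up data
    old_design = [(9.8, 0.31), (12.4, 0.35), (8.9, 0.28), (11.1, 0.33)]
    new_design = [(6.2, 0.30), (7.5, 0.36), (5.9, 0.27), (6.8, 0.32)]

    def scaled_times(group):
        # Divide task time by the participant's reaction baseline so slow or fast
        # reflexes don't masquerade as a design effect.
        return [task / baseline for task, baseline in group]

    t, p = ttest_ind(scaled_times(old_design), scaled_times(new_design), equal_var=False)
    print(f"Welch's t = {t:.2f}, p = {p:.3f}")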


Colin Chang - 3/9/2013 18:28:15

Revisit the application you reviewed for your heuristic evaluation assignment. Remind us what the application's name is and what it does. Imagine you redesigned the application to address at least one of the major usability problems. Briefly describe how you would design an experiment to test whether your redesign improved usability.

I looked at the app Haze, a new weather app that features a colorful UI. One of the usability problems I wanted to focus on (despite not giving it a higher severity rating) is the correlative positioning of the rain-percentage circle on the far right screen. Recall that Haze has three screens (two of which we are concerned with): the middle screen shows the current temperature as a ball positioned such that the higher the temperature, the higher the ball. The rightmost screen shows the current chance of rain as a ball positioned such that the higher the percentage, the higher the ball. My confusion was that the ball looked much like a sun, which, when higher, suggested warmer weather - the exact opposite of what the rain view intended. In my heuristic evaluation, I suggested inverting the rain screen's ball indicator so that the higher the percent chance of rain, the lower the ball is positioned.

The experiment I'd like to run is just that - inverting the correlative positioning of the rain-percentage ball so that the higher the percent chance of rain (or, the worse the weather), the lower the position of the ball. We would assign one batch of users to the status quo and the others to my suggestion. To verify the users' mental model (and eventually evaluate which way of positioning the rain ball is best), we would ask them, after swiping around the app for a minute, what they thought the app was telling them about the weather. We would pick a mid-temperature day with a high chance of rain to dramatize the rain ball's positioning. We would tell them to ignore the current actual weather and that the app is just simulating a possible (again, not necessarily current) weather situation.

What would your hypothesis be? What would your independent variable(s) be? What would your dependent variable(s) be? Which variables would you control for and which would be random variables?

My hypothesis would be that, when asked what the app is telling them about the weather in the situation posed above, the non-status-quo group will be correct more often (that is, a higher percentage of them will be correct) than the status-quo group.

My independent variable: I have one, which is the correlative positioning of the rain-percentage ball. In one condition the percentage and the position of the ball are directly correlated, and in the other they are inversely correlated.

My dependent variable: the correctness of each participant's guess of what the app was telling them the weather was going to be like.

Control variables: the app's percent chance of rain, temperature, and whatever the third screen is. Basically, all information that the app will ever portray. Also, the positioning of the non-rain-percentage balls.

Random variables: we would randomly assign which participants would work with which version of the app: status-quo or not.
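A minimal sketch of how the correctness comparison between the two groups could be checked, assuming made-up counts of correct and incorrect interpretations (a chi-square test on the 2x2 table is one reasonable choice; nothing here is from the original post):

    # Hypothetical analysis: did the inverted rain-ball group interpret the
    # simulated weather correctly more often than the status-quo group?
    from scipy.stats import chi2_contingency

    # rows are [correct, incorrect] interpretations -- made-up counts
    status_quo = [11, 14]
    inverted = [21, 4]

    chi2, p, dof, expected = chi2_contingency([status_quo, inverted])
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")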


Alvin Yuan - 3/9/2013 19:50:58

I evaluated Spotify's Android app. It is a music listening app where users can create playlists, listen to radio, and explore artists and albums. One of the major usability problems I discussed was that users cannot view what was previously played in the radio or what they had listened to in the past. To construct an experiment, my redesign would change only the radio feature, for example so that users could now see a history of what was played on their radio. Then I would gather participants who regularly listen to music (what I imagine Spotify's target user to be). I would randomly assign those participants to two groups: one that uses Spotify's current app and one that uses my redesign. I would have them do a set of tasks within Spotify, such as create a playlist, find a song, play a song using radio and then find the song after it finishes playing, etc. Afterward they would fill out a survey that asked them questions about their experience with and opinions of the different features of Spotify, the important ones being about radio. The order of the tasks and the questions would be randomized for each participant.

My hypothesis would be that participants in the redesign group enjoyed the radio feature more and found it to be more user-friendly. My independent variable is the version of Spotify the participant uses, my redesign or the standard app. The dependent variable is the participant's responses in the survey, in particular their opinions on the radio feature. I would control the time of day, the day of the week, and the environment in which the experiment happens (physical surroundings, background noise, internet speed), as well as the rest of the Spotify app interface. Random variables include some of the participants' background information (age, music preference, amount of regular music listening, with some lower threshold) and environmental factors such as ongoing music-related news and the weather.
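A small sketch of the random assignment and the per-participant shuffling of task and question order described above; the participant IDs, tasks, and survey questions are placeholders, not from the original post:

    # Hypothetical setup: randomly assign participants to the two app versions
    # and randomize task and survey-question order independently per participant.
    import random

    participants = ["p01", "p02", "p03", "p04", "p05", "p06", "p07", "p08"]
    tasks = ["create a playlist", "find a song", "play radio, then find the last song"]
    questions = ["radio ease of use", "radio enjoyment", "overall satisfaction"]

    random.shuffle(participants)
    half = len(participants) // 2
    groups = {"current app": participants[:half], "radio-history redesign": participants[half:]}

    for version, members in groups.items():
        for p in members:
            task_order = random.sample(tasks, k=len(tasks))
            question_order = random.sample(questions, k=len(questions))
            print(p, version, task_order, question_order)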


Mukul Murthy - 3/9/2013 21:16:05

The Android application I reviewed for my heuristic evaluation assignment was AC Transit Alerts, which gives info on upcoming AC Transit buses. One of the issues I discussed is a consistency and standards problem - after a user selects a bus line, he must pick a direction, and some of the lines have duplicate direction listings, as in this picture: MukulMurthy-he-screen2.png This was an issue because when I was testing the app, I would always end up checking both to try to figure out a difference. I could never find a difference; if there was one, why not make it clear? And if there wasn't one, why have duplicate entries? The rest of this experiment description operates under the assumption that duplicate entries always mean the same thing (as they seem to whenever I try them), and the fix would be removing duplicates so that only one entry remains for any direction.

My experiment to test whether the redesign improves usability would involve giving each subject a bus line, direction, and stop, and asking them to use the app to discover with complete certainty when the next bus of the specified line and direction will arrive at the specified stop. I would measure the amount of time and the number of button presses it takes each person to give the correct response. My hypothesis would be that the people who had the redesigned app would give the correct answer in less time and with fewer button presses (because I believe many of the people with the current version would have to go back and end up checking all duplicates).

There would be three groups of 50 people each. One of them would have to find the time of the next bus for a fixed bus line and direction at a fixed stop, where the direction had duplicates (old design). The second group would have to find the time of the next bus for the same line, direction, and stop, where the direction had no duplicates (new design). The third group would have to find the time of the next bus of a line, direction, and stop chosen randomly, out of a pool of buses that have no duplicates (control group).

The independent variable in this experiment is the number of duplicate directions there are for the bus the user is given. For the new design and control groups, this would be zero; for the old design group, this would be one or more. The dependent variables are the total time it takes the user to answer the question and how many button presses they make. I would expect the new design and control groups to take less time and fewer button presses than the old design group. Some of the variables I would control for would be the given bus line, direction, and stop (for the old and new design groups). The task and question would always be phrased the same way, so that nothing in the way the task was given biases the experiment.

Random variables would include the gender and age of the person performing the task, their familiarity and experience with AC Transit, and their familiarity and experience with smartphones. These factors would all influence the time, but because people would be randomly distributed between the three groups, these factors would hopefully average out across the groups and be negligible overall.
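A minimal sketch of how the three groups' completion times might be compared, assuming made-up timing data (a one-way ANOVA is one option; button presses could be compared the same way; none of this is from the original post):

    # Hypothetical analysis: compare time-to-correct-answer across the three
    # groups (old design with duplicates, new design without, random control).
    from scipy.stats import f_oneway

    # seconds to give the correct answer -- made-up data
    old_design = [41, 55, 48, 62, 51]
    new_design = [29, 33, 27, 35, 31]
    control = [30, 36, 28, 34, 32]

    f_stat, p = f_oneway(old_design, new_design, control)
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3f}")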


Jeffery Butler - 3/9/2013 22:54:46

The application I reviewed for my heuristic evaluation was an app related to Gold's Gym. With this particular app, users can sign up for a Gold's Gym near them, make goals, find class schedules, and get pumped up before workouts with an annoying sound maker.

In my heuristic evaluation for this assignment I criticized the app for its lack of flexibility, in that goals are limited to only six months in advance. I believed this could be annoying for potential users who want to make New Year's resolution goals or goals more than six months in advance.

I hypothesize that users would want to set goals more than six months in advance. My independent variable in this experiment would be the length of time I allow participants to set goals in advance. My dependent variable would be the participants' selection of goal duration. In this experiment I would control who I selected to be participants, that is, Gold's Gym members, and the random variable would be their age. The reason age matters in this experiment is that younger individuals usually think in shorter terms than adults due to life experience.

This experiment would consist of selecting random participants who go to Gold's Gym and asking them one of two questions: 1. How far in the future is your current goal? or 2. If you could pick something you wanted to improve in yourself, how far in the future would the goal be set? From their responses I could see whether they fall below or above the six-month mark, and hence whether it would be worth fixing the app. If a goal was too far in advance, I would promptly reply: "Sorry, that's too far in advance; could you think more short term?" From there, I would see how they adapt to the adjustment and compare it with how users react to the limiting feature in the Gold's Gym app.



Christina Hang - 3/10/2013 15:54:35

I reviewed an application called “Grocery G”. This application helps its users manage their grocery shopping lists and gives easy access to a variety of coupons. One of the problems of the application was the lack of a guide or help manual. I would test how my newly added guide helps users better utilize the application. The experiment would test how the guide improves the time to get certain tasks done, like adding an item to the shopping list or adding a picture to an item description. My hypothesis would be that the help guide will decrease the time it takes the user to perform the tasks that were intended by the original application’s features. My independent variable would be the version of the application: one without the guide and another with the guide. The dependent variables would be the time it takes for the user to add five items to the shopping list and to get into a shopping group. So that the users do not waste time thinking of what items to add, a control variable would be the shopping list items; that is, the test users will have to add one item each from the set list of wheat bread, eggs, cream cheese, tomatoes, and Cheerios. The random variables would be the brand they choose and the group they choose to join.


Linda Cai - 3/10/2013 16:43:45

For my heuristic evaluation assignment, I evaluated Padmapper, an app that displays apartment listings directly on a map and allows you to look at the listing information within the app. One of the more severe usability problems was the text box that displayed information about the listing on the apartment listing information screen. This violated the aesthetic and minimalist design heuristic, since irrelevant information is still displayed when the user only needs to see the text. I would design an experiment to test the hypothesis: do smaller text box sizes hinder reading ability and understanding of the text? My independent variable would be text box size; I would have various levels, such as one line, the original size from the Padmapper app, a larger size that takes up almost the entire screen, and full screen. My dependent variable would be the time it takes to read and understand the apartment listing text. I would control the reading environment (noise level, temperature, tables and chairs, room), the monitor/screen size, and the time and day. By doing so, I could reduce confounding variables that may cause changes in reading or cognition speed and more or less ensure internal validity. Random variables include the reader's age, amount of sleep, education level, the weather, mood, and eyesight, among other things.


Kimberly White - 3/10/2013 19:18:15

For my heuristics evaluation, I reviewed a scheduling app. It allows tasks to be added, crossed off, and removed. A lot of the usability issues were either cosmetic, or had easy fixes. The biggest problem was having no easy way to "cancel" a keyboard on the main screen. Other issues involved insufficient help screens and confusing icons. The fixes for these issues would be adding a "done" option for the keyboard, and changing the help and tutorial screens to group all the help information together.

For testing, the simplest approach would probably be to give users a set of simple tasks to do and see how long it takes them to learn and complete each task. For example, they might be asked to create a new task group, add a task, and then delete a crossed-off task. The independent variable would be the interface; the dependent variable would be how long it takes to complete the tasks. A majority of the random variables would relate to the users themselves, such as whether they're used to Android conventions or what their own mental model of how a planner should work is.


Brian L. Chang - 3/10/2013 20:09:25

I reviewed the Flipboard app for my heuristic evaluation assignment. The Flipboard app collects trending articles from various sites and then displays them by topic. If I redesigned Flipboard, I would make it clearer how to refresh the first page and also create a help page that was more interactive and had a search feature. I would test each of these improvements separately to see the effectiveness of each, and if I had time, I would test the changes together. I would select people from different age groups and backgrounds reflective of Flipboard's current user group, but also only select users who have never used Flipboard, to make sure the results are not biased. Then, to test the redesign, I would time one group on how long it took to refresh the page and to find the help feature. Another group would do the same tasks but with the redesign.

    My hypothesis would be that people in the group that got to use the redesign would find things quicker because certain things would be more intuitive. My independent variable would be the design used: old or new. My dependent variable would be the speed at which they figure out how to use the Flipboard app. I would control for age, where people are from, whether they read at all, and whether they have used the app before. Most other variables I would let be random to make sure the experiment does not suffer from poor external validity. For example, I would let the variables of where they like to read, when they read, and what topics they like to read all be random.


Michael Flater - 3/10/2013 20:49:33

My reviewed app was called StubHub. It is an app that allows people to buy and sell event tickets, mostly sports but also concerts and plays.

The worst error I found in this app was the lack of a "help" feature. If I did add this feature, I would probably run an experiment in which the hypothesis would be "Adding a help feature to this application helps new users climb the learning curve and become fully functional faster."

My major independent variable is the "help" function itself. Since the app does not have it, I would randomly pick half the respondents to use the new app and half to use the original.

The dependent variable is the participant's understanding of the application. This can be measured by a short quiz, perhaps after the user has played with the application for a given period of time.

The random variable in this experiment is previous knowledge of technologies such as smartphone apps. There isn't much I can do about this, though, since steering away from it would bias my experiment. The control I can use would be to test participants for certain skills before the experiment is conducted and place them in a category based on their score. This way I have the ability to judge which results are more significant than others.


Aarthi Ravi - 3/10/2013 21:12:25

I had chosen the "WhatsApp" application for the heuristic evaluation assignment. This application allows you to send messages over the internet to your friends. In the individual chat screen of this application, if you send a message when you are not connected to the internet, the application does not give you any feedback about the status of the connection. No error symbol is displayed.

My design to provide feedback would be to use a warning symbol against the sent message, as well as to produce a different sound indicating that an error occurred.

Hypothesis: Providing a warning symbol and a different sound to indicate that the message was not delivered would immediately help the users know that a problem occurred during delivery of a message. Thus, it would prevent them from assuming that the message was sent when it wasn't.

Independent Variables: Warning Symbol and Sound to indicate Message Not Delivered Error.

Dependent Variables: Whether the user clicks the warning symbol or checks the internet connectivity of the device

Control Variables: The layout of the application and the size of the UI controls are kept the same

Random Variables: Participants; they can use the application in portrait or landscape mode, and the volume of the speakers is kept as is (could be silent, normal, or loud)


Eun Sun Shin - 3/10/2013 22:04:27

I reviewed Quora for my heuristic evaluation assignment. This application is the iPhone app version of the website Quora. On both the app and the website, users are able to ask any questions, get real answers from people within the Quora community (who are knowledgeable or have first-hand experience), and blog about what they know. One problem exists in the Question and Answer pages. The bottom menu bar disappears, getting rid of the "emergency exit" back to the home screen (top stories). Also, when a user goes from one question to another without ever going back to the top stories in between the series of actions, the user has to traverse backwards through every visited page to get back to the home screen. There is no option to immediately go back to the home screen. I would solve this problem by adding the menu bar to the Question/Answer pages for consistency and better navigation.

I will experiment with different kinds of buttons present in the menu bar to see what will improve usability, if the addition of the menu bar improves usability at all. Menu bar 1 is the original menu bar that shows up when users open the Quora app right now. It has buttons for "Top Stories," "Notifications," "Browse," and "Profile." Menu bar 2 is a new menu bar I will create, consisting of buttons for . To test whether my redesign improved usability, I would design an experiment that consists of observing multiple, random people using the application (varying from beginners to frequent Quora users).

My general hypothesis would be that the added menu bar at the bottom allows users to read through more questions and answers, thus increasing the amount of material covered in one session. My other hypothesis, assuming that my general hypothesis seems to be supported by evidence, is that menu bar 2 allows users to read through more questions and answers than menu bar 1 does. The two different menu bars will be independent variables (to some users menu bar 1 will appear, while to other users menu bar 2 will appear). The number of questions and answers clicked on by users will be the dependent variable. Participants will be randomly chosen among iPhone owners, and they will be randomly assigned to one of the three scenarios: menu bar 1, menu bar 2, or no menu bar. I can then observe the numbers of questions/answers viewed by participants for each of the three scenarios and compare the numbers. If one of the menu bars resulted in a significantly higher number of viewed questions/answers than no menu bar, then we know that adding a menu bar improved usability. If there is not much difference among the three numbers, then it can be assumed that adding a menu bar did not result in much improvement. If fewer questions/answers were viewed with the menu bars, then the addition of the menu bar on the question/answer screens may have worsened usability.


Soo Hyoung Cheong - 3/10/2013 22:18:38

For the heuristic evaluation assignment, I chose Instagram, which allows users to take photos and add filters to create "Instant" vintage type of photos that you can share with your friends.

The basic outline of the experiment would be to have the user post an "Instagram" of their choice. Then I would tell them to change the description of the photo or the location where the photo was taken. If the users are able to accomplish this without any help and show a consistently low number of clicks and amount of time to accomplish the task, it suggests that the redesign improved usability.

The hypothesis is that the user will be able to find a way to edit information about the photo immediately, without trying things out to figure out a way to accomplish the task. My independent variable would be each user's original familiarity with using Instagram. My dependent variables would be the number of clicks and the amount of time it takes for the user to edit photo information, starting from the moment the photo is uploaded. I would control each person's facility with the device that he or she will be using and make sure that their task is to make changes to the photo information regardless of whether the change is necessary or not, and the random variables would be each person's information-processing/learning rate.


Lauren Fratamico - 3/10/2013 22:31:36

The app I looked at was Endomondo - an exercise tracker/social app. I am going to address the usability problem that restricts you to exiting the app only from the main screen. I will be adding a menu that allows you to exit from all screens and will be doing an automated assessment. I will record a couple of things: the screen that people exit the app from and the total time people are using the app (from open to close). If people close the app from pages other than the home page, and if they are using the app for a shorter period of time than they are without the additional menu, then I will conclude my design improvement to be a success.

My hypotheses are that people will on average use the app for a slightly shorter time period with the ability to exit from all screens (because they don't have to navigate to the home page to exit) and that people will now exit the app from pages other than the home page. My independent variable will be whether or not people are allowed to exit from screens other than the home screen (i.e., whether that menu option is present on other pages or not). My dependent variables will be the screen that people exit from and the total time they are using the app. Control variables will include the type of user (e.g., a fixed and equal number of both frequent and first-time users of the app) and also the rest of the layout of the app (the only difference will be the addition of the one menu on all pages or not). Random variables will include age, gender, and other attributes of a person.


Kevin Liang - 3/10/2013 23:07:02

The app I reviewed is called ShopKick. I am going to look at the problem where the user must go through two screens to get to a store page. Unwrapping the unrolled page directs the user to a larger photo. Tapping the larger photo finally brings us to the store page. That is a problem with the Flexibility and Efficiency of Use heuristic. I would experiment by measuring the time spent on each view. If the user spends a significant amount of time on a view, then it is reasonably safe to conclude that it was the target page. If the store page was never loaded, then the large picture was the target view.

My hypothesis would be that the user is trying to get to the store page rather than the larger photo. My independent variable would be the page the user ends up on. My dependent variable is time. We can generally tell which page a user wants depending on how much time the user spends on it. For example, if the user spends 1-2 seconds on the large picture screen, chances are it wasn't the target page. I would use data speed as a control variable. Sometimes a user can be on a page for a long time because the page has not loaded, mainly due to slow data speeds. Now there may be a case where both pages are target views. This is our random variable. A user scrolling around may just have itchy fingers or may genuinely be looking at a view. Since we cannot control how the user reacts on a page - it depends entirely on what the user feels like doing in the current situation - we treat that as a random variable.
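A minimal sketch of the dwell-time logging this implies, assuming a made-up event log and a hypothetical pass-through threshold (not part of the original post): compute how long each view was on screen and flag very short dwells as likely pass-throughs rather than target pages.

    # Hypothetical instrumentation: derive per-view dwell times from a
    # timestamped log of screen-view events, flagging very short dwells.
    events = [  # (seconds since app open, view name) -- made-up log
        (0.0, "unrolled page"),
        (3.1, "large photo"),
        (4.6, "store page"),
        (41.9, "app closed"),
    ]

    PASS_THROUGH_THRESHOLD = 2.0  # seconds; an assumption, would be tuned from real data

    for (t_start, view), (t_next, _) in zip(events, events[1:]):
        dwell = t_next - t_start
        label = "pass-through" if dwell < PASS_THROUGH_THRESHOLD else "likely target"
        print(f"{view}: {dwell:.1f}s ({label})")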


Samir Makhani - 3/10/2013 23:10:13

The application I chose for my heuristic evaluation assignment was "Wake n Shake", an iPhone alarm that will not turn off until you shake the iPhone to some extent. Fortunately, the severity of all broken heuristics was less than or equal to 2, so there are no major changes required to improve the usability of the application. One heuristic I would definitely improve upon, mentioned in my individual heuristic evaluation, is User Control & Freedom. Many of the swipe and touch gestures in the interface are not intuitive and can definitely frustrate the user the first time they use the application.

Hypothesis: The more instruction provided the first time the user uses the app, the better the reviews will be regarding the interface as well as its usability.

Independent Variable: The instructions provided to the user when they open the application for the first time. There will be two variations: detailed instructions, or no instructions at all.

Dependent Variable: Time spent getting used to the interface and gestures, overall use of application. These variables are entirely based on how much instruction the application provides on using its interface.

Control: All users must be new to the application and have never interacted with it before (even on other users' devices). Half of them will receive thorough instructions on the interface; the other half will not. All must have the most up-to-date version of the application.

Random Variable: The way we present the instructions. Should we prevent the user from interacting with the application until they read the instructions and exit out of the instruction alerts? Or should we just display them in a corner and let the user click some area when they need help or have trouble with the interface?


Tiffany Lee - 3/10/2013 23:10:23

The application I evaluated was Evernote. Evernote is an application that helps you remember everything. It is a virtual notebook where you can add photos, voice recordings, and text. The goal of this app is to help users stay organized, save ideas, and improve productivity. I would redesign the button on the top left corner of the main page to look more like a button by adding a rectangular box around the text and icon. I would test this by measuring the average time it takes people in the control group and the treatment group to find out how to change the settings (which requires clicking that button). I would want to see which group can, on average, figure out the task the fastest. My hypothesis would be that the treatment group (the one with the rectangle around the button) would be able to do the task faster. The independent variable would be whether or not the button had a rectangle. The dependent variables would be whether or not they figure out how to do it and how long it took them if they did finish the task. The variables I would control would be prior experience (only people who have never used the app before), the time of day people use it, and ensuring people are reasonably well rested and alert. The random variables would be age, ethnicity, education level, gender, job, and location.


Alice Huynh - 3/11/2013 0:03:52

Revisit the application you reviewed for your heuristic evaluation assignment. Remind us what the application's name is and what it does. Imagine you redesigned the application to address at least one of the major usability problems. Briefly describe how you would design an experiment to test whether your redesign improved usability.

The application that I reviewed was the “Find Free Stuff” application, which searches through all the popular selling websites and gives the user a summary of what is free. One of the major usability problems that I addressed was the “flexibility and efficiency of use” of the “search results and favorites” page that lists all of the results the mobile application finds on the web.

What would your hypothesis be? What would your independent variable(s) be? What would your dependent variable(s) be? Which variables would you control for and which would be random variables?

My hypothesis would be: with numbered pages and the knowledge that all of the results are sorted in alphabetical order, the user can find the results for “Wagons” a lot faster than with the current results page, which only affords scrolling down the list of results.

The independent variable would be the numbered pages versus no numbered pages.

The dependent variable would be the time it takes for a user to find the first instance of such a category of free stuff. (“Wagon”)

The variables that I want to control are the number of results shown to all the users that I test with, ensuring that all of them see the exact same result set. I also want to control the order of the results; I want the results to be ordered alphabetically based on the item name. I also want to test users who are not near-sighted so that there aren’t any vision issues. I also want to ensure that all of the users use the same type of phone with the exact same screen size so that the same number of results show on the screen at once for all users.

One random variable would be the age of the user. I would get a random sample of the ages of users who have downloaded the mobile application before and use that to judge what ages I want the test subjects to be. This is a type of “randomization within constraints”, in that I use the age range of all users who have downloaded the “Free Stuff” application to pick my test subjects. Different user ages will account for differences in reaction time and the ability to scroll fast.


Joyce Liu - 3/11/2013 0:23:33

The app that I reviewed for my heuristic evaluation assignment was Yelp. It’s a restaurant/business review site. One of the major usability issues that I identified was the restaurant picture album. When you first arrive on a restaurant's profile, there is a "profile" picture at the top. Naturally, people will want to click there to see the pictures; however, once you click into it, you have to scroll one by one through the pictures. If there are hundreds of photos, that can become really annoying because the user might not have interest in certain photos.

I would redesign the system so that when you click on the profile picture, instead of showing one picture at a time, it shows a grid view of all of the photos. My hypothesis is that people will click through more photos when they are in grid view. My independent variable would be the view: 1) the original view or 2) the grid view. My dependent variable is the number of pictures that the user clicks through. The variables that I would control for would include the restaurant's profile (it should be the same restaurant), the number of pictures that the restaurant has (which would be uniform given that I'm choosing the same restaurant), and the layout of the rest of the restaurant profile. The random variables in this experiment could be the order in which the pictures are presented and the profile picture of the restaurant. The reading talked about how, when we are talking about a random variable, the word 'random' usually refers to random assignment of circumstances to the levels of the independent variable. We can also think about a random variable in this experiment as the random assignment of participants to either the original view or the grid view. By randomly assigning participants, we lower the risk of obtaining a biased result.


Ryan Rho - 3/11/2013 0:32:27

Revisit the application you reviewed for your heuristic evaluation assignment (Zipcar). Remind us what the application's name is and what it does. Imagine you redesigned the application to address at least one of the major usability problems. Briefly describe how you would design an experiment to test whether your redesign improved usability.

What would your hypothesis be? What would your independent variable(s) be? What would your dependent variable(s) be? Which variables would you control for and which would be random variables?

I reviewed the mobile app for Zipcar, a car rental service targeting users living in a city or on a campus where there are not many parking spots and parking is expensive. Users can search for rental cars at a location and rent a car by checking its availability. The cost is calculated hourly. Zipcar offers several deals, such as discounts early in the morning and cheaper insurance.

One of the major usability problems is that choosing a date and time for a car reservation is awkward. Even though a date & time in the past does not make sense, the app makes it look like such a date & time can be selected. In addition, you cannot cancel the action while you are choosing a date or time; the only button you can select is 'Done'. So the bottom line of the usability problem is that the app lacks feedback when there is an error in the date & time for a car reservation.

In order to improve this usability, I would design an experiment that asks users with diverse preferences to reserve a car. Diverse preferences are important because they would result in users choosing diverse dates & times, which surfaces more usability problems. When it comes to renting cars, the experiment can use real data so that the availability of rental cars is randomized. Thus, conducting the experiment with users in several different locations is important.

My hypothesis is that the user would like to rent a car sometime between the present and the future, so the past cannot be selected. The independent variable is the list of available cars, because it does not depend on the user's behavior. The dependent variable is the rental cost, because the user can select the time for the reservation: renting a car at night is cheaper, renting on a weekend is more expensive, and the number of rental hours also affects the cost.

Control variables can include the price, because the price could be changed so that users may change the duration or dates of the rental to fit their budget limit. Random variables are the availability of the rental cars and the condition of each car, because if there is a problem with a car, its availability is affected.


Cory Chen - 3/11/2013 1:11:08

I evaluated the iOS app Fantastical for the heuristic evaluation. It is a calendar app that allows the user to type in his event, and then it turns the natural language text into a calendar event automatically.

I would run an experiment on the "new event" screen change that I previously proposed (removing the show-details button and instead allowing the user to scroll down for details; this would improve flexibility and efficiency of use). For my experiment, I would randomly select people who have never used the app before, and I would randomly select people who are long-time users of the app. I would then give half of each group the updated version of the application and half of each group the old version of the application. Then I'd ask them to add a few specific new events to Fantastical's calendar (maybe 5 events) and time them during the process.

My hypothesis would be that the new version of the application does not affect the speed of new users but increases the speed of adding events for long-time users. The independent variables would be whether the event details were displayed by default or hidden behind a button, and whether the participants were new or old users (a 2x2 experiment). The dependent variable would be the speed at which they finish creating the 5 events. I would control the location and time of day just in case that would give some people an unfair advantage, and I'd control the device that they use Fantastical on (iPhone 5 only). Many other variables would be random variables, such as their familiarity with iOS and their alertness during the experiment.
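As a sketch of what this 2x2 layout implies for the analysis, here are hypothetical cell means and the predicted interaction; all timing numbers are invented, and a full analysis would use a two-way ANOVA rather than this eyeball comparison:

    # Hypothetical 2x2 design: app version (old/new) x user type (new/long-time),
    # with time to create 5 events as the outcome.
    from statistics import mean

    times = {  # seconds to create 5 events -- made-up data per cell
        ("old app", "new user"):       [310, 295, 330, 305],
        ("new app", "new user"):       [305, 300, 315, 298],
        ("old app", "long-time user"): [240, 250, 235, 245],
        ("new app", "long-time user"): [200, 195, 210, 205],
    }

    cell_means = {cell: mean(vals) for cell, vals in times.items()}
    for cell, m in cell_means.items():
        print(cell, f"{m:.0f}s")

    # The hypothesis predicts an interaction: a version effect for long-time
    # users but little or none for new users.
    effect_new = cell_means[("old app", "new user")] - cell_means[("new app", "new user")]
    effect_long = cell_means[("old app", "long-time user")] - cell_means[("new app", "long-time user")]
    print(f"version effect, new users: {effect_new:.0f}s; long-time users: {effect_long:.0f}s")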


Shujing Zhang - 3/11/2013 1:12:41

I analyzed an app called Sound Tracking. It is used to share what users are listening to, share and discover music together with friends and followers. In addition, it can auto-detect what’s playing on the device or what is playing in the environment.

I want to address one of the heuristic problems (Flexibility and Efficiency of Use), which I rated 3 in my evaluation. In the “Trending” function, there are a lot of music genres. However, the interface only displays different pictures of the albums, which don’t mean anything to me; I have to press each picture to see what the song is called and what genre it belongs to, which is not efficient. In addition, every time I switch to another function and then return to Trending, I have to scroll down again to find what I was looking at, and sometimes it is really hard to get back to the album.

My redesign one: I would add the name of the album, the genre, as well as the singer below each picture of the album. In this way, people will recognize their favorite album and singer instantly by just scrolling the “Trending” interface.

My redesign two: In addition, after the user has clicked on one of the album pictures, the app will save the state, so that when the user returns to the “Trending” interface, the app will show the position where the user stopped browsing last time.

My experiment:

My hypothesis one would be: the more descriptive the short text on the thumbnail picture of the album, the faster the user can find what they want to listen to.

My hypothesis two would be: saving the previous state of the “Trending” interface would save people time in discovering more interesting albums.

For redesign one:

Independent variable: the presence of the description (whether the thumbnail picture has a description) and the length of the description.

Dependent variable: the length of time from when people start searching for new songs in the “Trending” section until they click on a new song to listen to.

Control variable: the same person tries both versions of the app; the number of pictures displayed is constant.

Random variable: screen brightness, network speed, background music, and the time of day the user is using the app.

For redesign two: Independent variable: whether the app saves the previous state of the “Trending” interface when the user leaves it for another function.

Dependent variable: the length of time from when a person comes back to the “Trending” section until he/she finds another interesting song.

Control variable: the same person uses both versions of the application; the content displayed in the “Trending” section is the same.

Random variable: internet speed, screen brightness.


David Seeto - 3/11/2013 2:41:21

The application I reviewed for the heuristic evaluation assignment was the Dolphin Mobile Browser, an alternative to the native Android browser for your mobile device. It claims to simplify mobile browsing using add-ons and built-in features. To redesign the application to address the lack of feedback when taking either of two actions - minimizing or exiting the app - I will prompt users to confirm that they want to minimize the app and NOT delete user data, and also allow for an option to suppress this prompt in the future. This is motivated by the fact that users potentially minimize the app without realizing that it is not the same as hitting Exit in the options menu, and therefore does not clear user data off the phone, which is crucial to browsing security.

I will have two random groups of users use two versions of the app, one with a prompt when minimizing and one without the prompt. In this case, the existence of the prompt is the independent variable. Through their interaction with the browser and other apps, we will ask them various questions about their knowledge of specific features, one being the clearing of user data when exiting. The dependent variable in this case is whether or not the user learns the distinction between minimizing and exiting the app to clear user data. Comparing the responses of people who had a prompt versus those who didn't, we will check the validity of the following hypothesis: users prompted for confirmation that minimizing does NOT clear user data will understand the consequence of minimizing versus exiting. We control for what we prompt them to do and the app they use. We allow the users' previous experience to be random.


Zhaochen "JJ" Liu - 3/11/2013 2:57:12

Here Maps by Nokia

Feature to be redesigned: the number input field shows a full keyboard. When the user registers for an account and is asked to provide birthday information, the app presents a full keyboard that shows numbers, letters, and symbols.

Experiment: ask a group of randomly selected users to type their birthdays with different layouts of the virtual keyboard.

  • Hypothesis: users take longer to enter their birthdays correctly with a full keyboard, compared to a numeric keyboard
  • Independent variables: the keyboard type, whether it is a full keyboard or a numeric keyboard
  • Dependent variables: the time elapsed between a user starting to enter and completing entering his birthday
  • Control variables:
    • use the same phone, with the same screen size, processing speed
    • type the same lengths of birthday (e.g. 1990 01 23); input label and input box of the same size
    • subjects' posture (seated on a chair)
  • Random variables:
    • different birthday number combinations (e.g. 1999 11 11 or 1968 03 28)
    • different familiarity with smartphones’ virtual keyboard


Cong Chen - 3/11/2013 3:04:15

I reviewed the Google Maps application for the iPhone. This application helps people get from one place to another and displays the street view for various places. The major usability problem with the Google Maps app is that, in the middle of a trip, if the user moves away from the "current" default location of the cursor, the user cannot cancel or quit the trip without first "resuming" to the default location on the map. This was very confusing and could lead to a lot of issues.

Assuming that this is fixed, I would design an experiment as follows: get a large sample of people within driving age range (>16 and <70), with different ethnic backgrounds, experiences, jobs, etc., to improve the external validity of the experiment. The participants will be driving, walking, or traveling using the Google Maps app, and at some point will be asked to cancel their trip. The independent variable would be which app we give to the user: the one without the fix or the one with the fix. The dependent variable will be how long it takes them to figure out how to cancel their trip. My hypothesis is that the users who have the improved app will be able to cancel their trip faster. The control variables would be the type of car (assuming they are driving), the trip that each participant has to make, and the time the trip starts. Note that some participants will not travel by car. The random variables would be the condition of the traffic, the day, and other factors that are beyond our control. This will also help improve the external validity of our experiment.


Ben Dong - 3/11/2013 5:11:02

I reviewed the Piazza app for Android. For my experiment, I would complete a redesign to attempt to fix their search by tag, which currently doesn't list existing tags and has no form of autocomplete.

My independent variable would be the visibility of existing Piazza tags while searching. This would include no tags shown (current implementation), a list of existing tags shown, and autocomplete shown. My dependent variable would be the efficiency of searches, which could be measured by the number of searches attempted before a result is viewed.

My hypothesis is that as I improve the visibility of tags (no tags to tag list to autocomplete), search efficiency will increase and the number of searches attempted before a page view will decrease. I would control for the rest of the app being exactly the same. The random variables would be which classes users are enrolled in, as well as what searches they conduct.


Sihyun Park - 3/11/2013 5:38:05

For my heuristic evaluation assignment, I chose to assess the Tumblr app for Android. Tumblr is a microblogging social network where users share various kinds of posts - from text and photos to audio and video. If I were to redesign the application, I would fix the two severe usability issues that I found on Tumblr: Match between system and the real world, and Help and documentation. To address the former, I would allow the application to switch to a landscape UI when the device is rotated horizontally. For the latter, I would add a walkthrough when the user first opens the application, similar to the Summly app on iOS (which has a great walkthrough for the first-time user).

To test whether my redesign improved usability, I would design an experiment with:

1. Hypothesis - A user interface that switches to landscape when the device rotates to landscape will increase usability (i.e., how quickly the user can cycle through the posts on Tumblr).
2. Independent variable - A user interface that switches to landscape versus one that doesn't.
3. Dependent variable - The amount of time it takes for the user to go through 15 posts, which is an average number of posts a user would read in one cycle.
4. Control variables - The rest of the application (i.e., the features available in the application other than the orientation), the number of posts (15), and the posts that the users will read (all users will read the same posts).
5. Random variable - The sample population of testers, with varying familiarity with the application.


yunrui zhang - 3/11/2013 7:20:04

The application I reviewed in the heuristic evaluation assignment is an iOS app called "SWYPE". It provides a gesture-based input keyboard and allows users to input text using that keyboard on the display interface. One of the major usability problems is having two "ON" buttons on its main page, with no explanation of the difference between the two. It turns out that the "ON" on the bottom is to enable the swipe keyboard, and the "ON" on the top is to disable it when the keyboard is in use.

I would like to redesign the application such that the top "ON" button is only displayed when the keyboard is in use. The rationale is that it is only needed when the keyboard is in use.

I would design an experiment like this: randomly select a group of 20 people. I give each of them a task to perform: input "Hello" using the SWYPE keyboard and input "World" using the standard keyboard. Then I randomly select 10 of them to use the original design and 10 of them to use my redesign. Finally, I record the time they spend performing the task.

My hypothesis is that the time taken to complete the given task with my redesign is shorter than with the original design.

The independent variable is the design to use.

The dependent variable is time spent to perform the task.

I would control the words to input, the device to use, and the environment: temperature, light, and noise level.

Random variables are the people's traits, including but not limited to intelligence level, age, gender, ethnic background, and language.


Derek Lau - 3/11/2013 7:28:54

For my heuristic evaluation assignment, I reviewed the BBC News app. One usability problem that I would address is the lack of aesthetic and minimalist design in the app (specifically the main screen). In the experiment, I would use mockups of the current app as-is as a control group, since this is the original design that we are trying to test against. I would then draw up some mockups with icons and headlines that are less cluttered than the current app: one set of mockups with only icons on the main screen, one set with only headlines on the main screen, and one set with a small icon and headline as the current app has now, but larger, so that the screen displays no more than 9 icons at a time. I would then time users from each group to see how long it takes to read 10 main-page stories.

My hypothesis would be that a simpler front-page interface increases 1) ease of use and 2) content reached in the BBC News app. The independent variable is the amount of content displayed on the front page, measured by a few categories and subcategories: amount of text displayed, amount of images displayed, and number of total stories displayed. The dependent variable is the amount of time it takes for users to access content. Control variables would include: users' expertise with one given mobile platform and its navigation nuances (Android vs. iOS), users' ability to read and understand English, and users' familiarity with the app (they must not have used it before). Random variables would include: users' ages, users' bias towards/against BBC News, users' interest in reading the news, and users' opinions about aesthetics and design.


Eric Xiao - 3/11/2013 9:51:42

Buy Near Me - buy and sell items to other Cal students.

I would redesign the site to add a landing page/dashboard so the user knows what's happening and what actions can be taken. To test this improved usability, I would take a random sample of Cal students (our target audience), and measure how much time they spend on the page as well as how many items they view overall.

As another way to test for improved usability, I would measure the click-through rates on the site using A/B testing on the improved dashboard, looking at whether the addition/existence of an actual dashboard for a logged-in user would increase their time on the site and items viewed, and how many exits there are in the visitor flow on the dashboard page.

The hypothesis would be that the user would be more receptive to a landing page, know how to do more things, stay more informed, and use the site more. The dependent variables would be the increases in items viewed, items listed, and click-throughs by these users. The independent variable is the existence of a dashboard for each user. Random variables would be the demographics of Cal students, such as major, gender, race, socioeconomic background, etc.

Things we would keep constant (control variables) would be the rest of the website and layout, the items a person can view on the browse page, and a fixed period of time to use the site. It's also important to keep the feeling of being rushed constant (e.g., people should not feel like they can get out early or need to go somewhere after the interview, which would cause them to rush things).
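A minimal sketch of the A/B click-through comparison described above, using invented counts and a two-proportion z-test with a pooled proportion (not from the original post):

    # Hypothetical A/B analysis: compare click-through rate between users who
    # saw the dashboard and those who did not (made-up counts).
    from math import sqrt

    with_dashboard = (180, 1000)     # (clicks, visits)
    without_dashboard = (140, 1000)  # (clicks, visits)

    p1, n1 = with_dashboard[0] / with_dashboard[1], with_dashboard[1]
    p2, n2 = without_dashboard[0] / without_dashboard[1], without_dashboard[1]

    # Two-proportion z-test using the pooled proportion
    pooled = (with_dashboard[0] + without_dashboard[0]) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    print(f"CTR with dashboard: {p1:.1%}, without: {p2:.1%}, z = {z:.2f}")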


Brett Johnson - 3/11/2013 9:52:48

   For the heuristic evaluation, I reviewed the application Pulse, a blog/magazine/site reader for the iPhone. If I was redesigning Pulse, I think the one feature that I would redesign would be the help and documentation section. As it stands, when the user clicks for help, they have the option of either sending an email or going to a third party support forum. Instead of this setup, I would design an in-app tutorial that went over the major features of the application and the user could access at any time. 
   To test whether this system is really any better, I would randomly assign people to 2 groups, the first testing the current help system and the second testing the new one. My hypothesis would be that user satisfaction will be higher and/or user frustration will be lower with the new help system. This would be measured by a survey given to the user. In this experiment, the independent variable is the content of the help system. The dependent variable would be the level of satisfaction and/or the level of frustration the user reports at the end of testing. Aside from control variables such as lighting, temperature, and testing environment, most variables relating to the individual being tested would be left random. The users would not be split up based on socioeconomic status or education level. This would improve the external validity of the experiment, which is important for a reader app like this one that tries to capture a huge audience.
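For the between-subjects survey comparison described above, one reasonable analysis is an independent-samples t-test on the satisfaction ratings; the sketch below uses made-up ratings on a hypothetical 1-7 scale:

 # Compare post-task satisfaction between the old and new help systems.
 from scipy import stats
 
 old_help = [3, 4, 2, 5, 3, 4, 3, 2]   # placeholder 1-7 ratings, old help system
 new_help = [5, 6, 4, 6, 5, 7, 5, 6]   # placeholder 1-7 ratings, in-app tutorial
 
 # Welch's t-test: does mean satisfaction differ between the two groups?
 t_stat, p_value = stats.ttest_ind(new_help, old_help, equal_var=False)
 print(f"t = {t_stat:.2f}, p = {p_value:.3f}")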


Tiffany Jianto - 3/11/2013 10:24:21

I reviewed the Google Drive application for my heuristic evaluation assignment. The Google Drive application allows users to access documents, files, and other items they have stored on their Google Drive. One of the major usability problems is the lack of auto-correction and of any indication of misspelled words. To redesign this, I would place a small, squiggly red line under misspelled words, since this is something users are already familiar with, and they would know that something was misspelled. To test whether this redesign improved usability, I would compare the number of misspelled words before my redesign to the number after my redesign. This could take the form of a timed task asking users to type some text, one part with the previous design and one part with the redesign; this would show how many misspelled words there are and how aware users are of them.

My hypothesis would be that with the small, squiggly red line under misspelled words, users will be likely to leave fewer misspelled words. The independent variable would be the presence or absence of the squiggly red line; the dependent variable would be the number of misspelled words. A control variable would be the words that the users have to type; this way, it is easier to make comparisons. Random variables would be the population, since I want a general evaluation, and the form of text input being used, which could be T9, a smartphone keyboard, or a swipe keyboard, since I am testing for awareness of misspelled words with the squiggly red line.


Tenzin Nyima - 3/11/2013 10:27:48

For the heuristic evaluation assignment, I reviewed an app for making international calls at very cheap rates, called PennyTalk. If I were to redesign the app, I would first fix the Call Log page. As explained in the assignment, the order of the call logs (call history) was scrambled, which violated consistency and standards. The list of call logs should follow the platform convention of listing the most recent calls at the top and the least recent calls at the bottom; my redesign would do exactly that. To test the improvement in usability, I would first let users make some calls from the app with the old Call Log page and then ask them to call a number they had dialed just a couple of hours before, recording their experience. After that, I would repeat the same process with the newly redesigned Call Log page and record their experience again.

My hypothesis is that if the app follows the convention for listing call history, users will have a better experience calling a number they dialed before. My independent variable is the way the call history is listed (the old ordering vs. the conventional most-recent-first ordering). My dependent variable would be the users' experience when redialing a number they called before. I would control the way the phone numbers are presented (the size of the digits, etc.) and the way redialing works (tapping one of the numbers in the call log dials that number). Some people might call somebody and not want others to know about the call, so my redesigned Call Log page would also let users select a record from the call log and delete it. This would be my random variable - the ability to control the listing in the call log based on users' preference. This feature was missing from the app.


Yong Hoon Lee - 3/11/2013 11:05:53

My application was Yahoo! Sportacular, an app which allows the user to view sports scores and receive alerts when teams that they follow are in a game.

The interface modification I would make is to add a "Sports" option to the menu, which allows the user to change the sport displayed. Currently, the only way to accomplish this is to tap the icon showing the object that represents the sport (such as a baseball) located next to the menu icon, an unintuitive design that may be somewhat hard to pick up on, as there is no indication that the user can tap the ball icon.

The experiment would proceed as follows: I would ask the subject to change the sport from NFL football to baseball, and count the number of taps the user makes to do so.

My hypothesis would be that the modified interface would result in fewer taps, on average, as in the unmodified case, some users may tap the menu and have to tap the back button when they notice that the option is not in that menu.

The independent variable in this experiment is the interface design, namely the two different versions of the app, one with a "Sports" option and one without. The dependent variable is the number of taps the user makes.

I would control for a few variables, most importantly the device. The device used in the experiment would be uniform throughout, as it would be very difficult to utilize a method of random selection for the device. Furthermore, I would control for the lighting conditions, so that extra taps are not added by users who cannot see the screen clearly. I would also control for the two sports chosen, as the main behavior I want to test involves switching the sport, and removing the variable of which sport is chosen allows me to concentrate my findings around the switching of the sport (i.e. whether adding a menu option helps the user or not).

I would utilize random selection most notably in deciding the subjects to test. I would use controlled randomness to decide this, as I would like random subjects from the ages of 20 to 65 (so that those who are too old or too young are not tested). Furthermore, I would want a random distribution of sports fans, as well as a random distribution of familiarity with mobile devices. These goals may be tricky to achieve, and so I may have to control them in ways such as limiting to college students walking on campus (not necessarily selecting for prior smartphone usage, however, so as to maintain that random variable), which would also limit the age of my subjects.


Thomas Yun - 3/11/2013 11:18:31

The application I reviewed was the Astro File Manager. Basically, it allows users to copy, move, paste, delete, etc. the device's files. One of the usability problems was that the icons are hard to recognize for the functions they perform. Thus, in redesigning this, my experiment would test how much easier or faster it is to perform an action on a file.

My hypothesis would be: the easier it is to recognize the icon, the easier and faster it is for users to perform the action. My independent variable: icons versus plain words (or perhaps a different set of icons if I want to avoid words). Dependent variable: the speed with which the action is performed under each icon set. Control variable: it is the user's first time and they do not use the help screen. Random variable: the user decides what to do with the file.


kate gorman - 3/11/2013 11:33:50

App: Snapseed--> advanced photo editing app. Independent Variable: Main Interface

Dependent Variables: Progression through editing, retention to the app (what % of users return for another session within the month), # of photos uploaded, # of photos edited, time spent in app.

Hypothesis: an easier-to-use main interface in the FTUE (first-time user experience) will allow new users to understand the app, use the app more often, and modify more photos.

Control Variables: phone type, OS version, and user type (looking at new users only). Only compare apples to apples on phone type and OS version to ensure hardware does not impact decisions. Experiment setup: 3 groups: 15% see the new experience, 15% the old experience, 70% the old experience. You use the 2 groups of the same size to control for any network effects and to ensure that comparisons between those groups are equal. Also, having 2 control groups allows the variance between the control groups to be measured. Thus, if the difference between the new variant and the old experience does not exceed the difference between the controls, we can say it is likely statistically insignificant (we can also just measure this).
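A minimal sketch of this setup in Python could look like the following; the variant names, user IDs, and per-user retention flags are hypothetical placeholders, not Snapseed data:

 # Weighted assignment to 15% new / 15% old / 70% old, plus a control-vs-control noise check.
 import random
 from statistics import mean
 
 VARIANTS = ["new_15", "old_a_15", "old_b_70"]
 WEIGHTS = [0.15, 0.15, 0.70]
 
 def assign_variant(user_id: int) -> str:
     """Assign a new user to a variant at the intended weights."""
     rng = random.Random(user_id)            # seeded per user so the assignment is stable
     return rng.choices(VARIANTS, weights=WEIGHTS, k=1)[0]
 
 print("user 12345 ->", assign_variant(12345))
 
 # Later: compare some per-user metric (e.g., returned-within-a-month flag) per variant.
 retention = {"new_15": [1, 0, 1, 1, 0],
              "old_a_15": [0, 1, 0, 1, 0],
              "old_b_70": [1, 0, 0, 1, 0]}      # placeholder flags
 rates = {v: mean(flags) for v, flags in retention.items()}
 noise = abs(rates["old_a_15"] - rates["old_b_70"])   # variance between the two controls
 effect = abs(rates["new_15"] - rates["old_a_15"])    # same-size new-vs-old comparison
 print(f"control-vs-control difference = {noise:.2f}, new-vs-old difference = {effect:.2f}")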

Not controlled for (but may impact outcome): install source. Since we are considering first-time users, install source shows a large intent disparity. For example, users who installed via an ad vs. those who searched and downloaded from the app store have different intent levels and inherent interest in the app. We can slice the data by this, but will not control for it. It might be interesting to see if more hardcore users (app store downloaders, for example) are okay with the old interface, while more casual users are not as proficient with the old and excel with the new.

Random Variables: new users will be randomly assigned a variant in such a way to keep the variants at their intended percentages.


Elise McCallum - 3/11/2013 11:45:53

The application I reviewed for my heuristic evaluation assignment was Daily Schedule Planner, which split tasks into three categories (work, personal, daily) and allowed users to set alarms, mark tasks as done, edit text of the description, and switch tasks between categories. One of the major usability problems is that, from the main screen, there is no visible way to delete a task, only to add a task or mark a task as complete (which remains in the task queue). To redesign the application, I would add a small "X" or something else indicative of a delete sign next to each task so that the user could easily delete the task if they so desired. I would then design an experiment to test whether or not my redesign improved usability.

In this experiment, I would compare how users performed tasks on the old version of the application with how they performed tasks on the new version. My hypothesis would be that the time it takes users to delete a task would be shorter with the new design than with the old design. The independent variable would be the presence or absence of the delete symbol. This can also be extended to changing the size and position of the delete symbol to discover the optimal size and positioning for the fastest completion of the task. The dependent variable would be the reaction time, or the time it takes the user to figure out how to delete a task. The control variables would be the number of tasks presented on a page, the exact text of the task, and the tools with which a user can manipulate the interface (i.e. everyone must use their right index finger, a stylus, the tip of their nose, or whatever is most appropriate). The random variables would be the levels of the independent variable (size, color, and location of the delete symbol).


Raymond Lin - 3/11/2013 11:50:56

For my Heuristic Evaluation Assignment, I chose to review Flixster. This application is well known for being informative about current movies, upcoming releases, and serving as a database for many past movies. I determined that one of the usability issues was the lack of instructions on each screen. To redesign this, I would add a quick tutorial on the first use of the app, with an option for it tucked away in a help page as well. To test this, I would have 2 groups of people receive two different versions of the application (with a tutorial and without).

My hypothesis would be that the tutorial allows users to perform their desired action a lot quicker, by knowing where to go before actually using the app, than those who dive straight in. In this experiment, our independent variable is whether the user receives the tutorial or goes in blind, while our dependent variable is the time to actually perform the desired task. I can control whether they get the tutorial, but the random variables I would need to watch out for are how adept they are with mobile apps, whether they've encountered previous apps with a similar interface, and any lack of clarity in the tutorial (this should be minimal, though).


Timothy Ko - 3/11/2013 11:57:25

I reviewed Flixster, which was a mobile application that primarily looked up movie times and the theatres that were showing them.

To address the ambiguous “settings” instructions (these were the instructions that wanted you to edit your iPhone settings, not the Flixster settings), I would create an experiment that tests people’s perception of hierarchy.

To set it up, I would create a similar situation, where I would test whether people assume that instructions refer to the current application they are in or to the overall machine. The results would determine whether it’s worth the trouble to clarify the ambiguous Flixster instructions. The setup wouldn’t have to be as complicated: I would just set up a dummy application on a smartphone and provide instructions to access the settings option, which both the application and the smartphone OS have.

My hypothesis is that people will choose the settings option most immediately available to them, which is the application version, when given instructions that are ambiguous about which settings option to choose.

My independent variable would be the ambiguity of the instructions. There will be three conditions: the instructions will say either “access the smartphone settings option”, “access the application settings option”, or “access the settings option”. The last one is the most ambiguous. The dependent variable will be which settings option the user selects first. There are two choices here: the application settings option and the smartphone OS settings option.

One possible confounding variable is how experienced the user is with smartphones. Inexperienced users might be more inclined to just go for the application settings option, since it’s the more immediate choice. So we will limit our sample to just experienced users. This will be our control variable.

Another confounding variable could depend on what kind of smartphone experienced users owned. There are quite a few different kinds of smartphones to choose from, with their own operating systems (iOS, Android, Windows Phone). This could affect how often settings are actually accessed in overall use of the smartphone. However, we want our experiment’s results to be relevant to the general smartphone using population. So this would be our random variable. This way we can include all experienced smartphone users, but also reduce the confounding effects of what kind of smartphone they use.
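One way to analyze the first-tap choices across the three instruction conditions is a chi-square test of independence; the counts below are hypothetical placeholders for observed behavior:

 # Does instruction wording affect which settings screen is opened first?
 from scipy.stats import chi2_contingency
 
 #           app settings, OS settings   (first option tapped; placeholder counts)
 observed = [[18, 2],    # "access the application settings option"
             [6, 14],    # "access the smartphone settings option"
             [15, 5]]    # "access the settings option" (ambiguous)
 
 chi2, p_value, dof, expected = chi2_contingency(observed)
 print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")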


Sumer - 3/11/2013 12:22:47

The name of our application is the Produce Almanac; it reminds users what is in season so that they might pick up fruits or vegetables they need. One usability problem is recognition rather than recall. To design an experiment, I would have 10 users test our product to see if their search can lead them to a fruit or vegetable that they actually want to buy.

The hypothesis is that if a fairly rare fruit is in season, an avid grocery shopper might want to buy it. The independent variable is how interested the user is in healthy fruits and vegetables. The dependent variable is which fruits or vegetables are available. I would use produce that is available year-round (apples, bananas) as a control and compare how often in-season products are purchased versus these staples. The random variables are which in-season fruits and vegetables appear, because only some show up in any given season, not every rare fruit.


Brian Wong - 3/11/2013 12:32:51

The application I reviewed was Venmo. It is a mobile payment application that helps you pay small amounts to friends. One of the major usability problems was that it used the hidden menu on the bottom instead of a sliding menu on the top left.

I would design an experiment with two prototypes, one with the current hidden menu, and one where the menu button is in the top left corner. I would then ask users to change their username and see how long it takes them and how many button presses they go through. My hypothesis is that it will be substantially faster to change one's username when the menu is in the top left. The independent variable is the location of the menu. The dependent variable is the time taken to change a username. The variables that would be controlled include the phone model, the Android OS version, and the start screen the app is opened on and shown to the users. Random variables could include the users' smartphone background (because users are randomly selected) and their current phone usage.


Christine Loh - 3/11/2013 12:33:00

The application I reviewed for my heuristic evaluation assignment was called "Doodle Buddy." It is basically a painting app (similar to the drawing app we were required to program for our individual programming assignment #2). If I redesigned the application to address at least one of the major usability problems, I would design an experiment to test whether my redesign improved usability with the following characteristics:

Hypothesis:

A hypothesis is a statement about the expected nature of the relationship. In this experiment it's that the user will use the redo button at least once every 50 times they use the undo button.

Independent variables:

An independent variable is the circumstance of major interest in an experiment. In the case of my experiment, it would be the variable that I manipulate -- whether or not there is a redo button on the application.

Dependent variables:

A dependent variable is the measurement of participant's behavior in response to manipulation of the independent variable. In this case, it's the number of times the participant presses the redo button.

Control variables:

Control variables are other circumstances in the experiment that need to be accounted for and are controlled. We can control the difficulty of the drawing that the participants are asked to draw, the lighting conditions in the room, and whether the participants are right-handed.

Random variables:

If we don't want to control all the circumstances, the remaining circumstances can vary randomly as the random variables. The level of artistic skill the participants have can be a random variable.


Tananun Songdechakraiwut - 3/11/2013 12:45:03

Transit Maps is an iPhone app that stores transit maps of many cities in several countries. Users have to add the maps themselves, either via direct links provided by the app or via URL.

Redesign: I would implement a search function on the '+' screen so users don't have to go through the list line by line, but can add a particular map with ease.

Experiment design: The experiment will focus on people aged 18-60, who represent the majority of users (I believe those younger than 18 and older than 60 don't use the app much -- kids probably either take school buses or are driven to school by parents, and older people are retired, don't go out a lot, or find it inconvenient to take transit). I intentionally select subjects regardless of gender, and across ages within that range, in order to represent most users. The subjects are assigned to the two cases randomly in order to avoid bias during the experiment: they flip a coin, so their chance of ending up in either case is 50/50.

First case is the users interacting with the old app with no additional search engine. Second is the users using the new app with the search engine.

I then randomly choose a map 50 times for each user, and each time I record how long the user takes to find it. I average these times for each participant, and do the same for both cases.

Time is inversely related to usability: if the new app takes users significantly less time to find a map on average, then its usability is clearly better.

My hypothesis is that the search engine will significantly reduce an amount of time the users take to find a random map.

Independent Var: whether the search function is implemented. Dependent Var: average time to find a map. Control Var: limiting participants to ages 18-60. Random Vars: gender, age within 18-60, and which of the two cases a participant is randomly assigned to.
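A small sketch of the coin-flip assignment and the per-participant averaging described above (participant IDs and trial times are placeholders, and each list is truncated to three trials for brevity):

 # Coin-flip assignment, per-participant mean lookup time, and a between-group comparison.
 import random
 from statistics import mean
 from scipy import stats
 
 def assign_case() -> str:
     """50/50 coin flip between the old app and the app with search."""
     return "with_search" if random.random() < 0.5 else "no_search"
 
 print("example assignment:", assign_case())
 
 # trial_times[participant] = (assigned case, list of lookup times in seconds)
 trial_times = {
     "p1": ("no_search",   [34.1, 41.0, 29.5]),
     "p2": ("with_search", [12.3, 15.8, 11.1]),
     "p3": ("no_search",   [38.7, 33.2, 36.0]),
     "p4": ("with_search", [14.9, 13.4, 16.2]),
 }
 
 per_participant = {"no_search": [], "with_search": []}
 for case, times in trial_times.values():
     per_participant[case].append(mean(times))        # average over that user's trials
 
 t_stat, p_value = stats.ttest_ind(per_participant["no_search"],
                                   per_participant["with_search"])
 print(per_participant, f"t = {t_stat:.2f}, p = {p_value:.3f}")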


Zeeshan Javed - 3/11/2013 12:53:06

The name of our application was Gluten Free Diet. The target users for this application are gluten-intolerant individuals who are unable to adhere to a normal diet and who also show an interest in weight control or weight loss. A weight-loss application for gluten-intolerant individuals may also appeal to those who are going gluten-free for personal reasons and seek to lose weight. Even if a gluten-intolerant individual does not seek to lose weight, the application would still appeal because of other features, including a recipe archive, barcode scanner, and ingredient search. In designing an experiment to test usability, one could set up a simple testing environment with users of the same background and age. The independent variable would be the barcode scanner from the phone that they use in the application. The dependent variable would be their reaction to the event that occurs once the barcode registers properly and the application delivers feedback. This would be extremely useful in testing the usability of this crucial part of the application.

My hypothesis is that the tester will react in a way that demonstrates positive feedback upon using the application. The control condition could easily be set up by giving a single user a faulty barcode scanner that does not register or that feeds back incorrect information. I would control for any outside factors preventing or distracting the user from giving the application their full attention. Random variables may be conditions induced by the grocery store environment, including barcodes not being placed where the user can easily find them and the store not carrying gluten-free products in the first place.


Claire Tuna - 3/11/2013 12:56:52

Revisit the application you reviewed for your heuristic evaluation assignment. Remind us what the application's name is and what it does.

The application that I critiqued was Google Drive for Android that “gives you instant access to Google Docs, a suite of editing tools...” (Google Drive website: https://www.google.com/intl/en_US/drive/start/).

Imagine you redesigned the application to address at least one of the major usability problems. Briefly describe how you would design an experiment to test whether your redesign improved usability. What would your hypothesis be? What would your independent variable(s) be? What would your dependent variable(s) be? Which variables would you control for and which would be random variables?

Google drive for web has a piece of text above the document that says “Saving...” if the document is in the process of saving and “All changes saved to drive” when completed. I think this gives the user a feeling of comfort and security. Without these messages, it is unclear as to whether changes have been saved. I would transfer this feature to the mobile application. In my redesign, there would be a bar that said “saving...” or “all changes saved” that would be constantly visible.

I would test a random selection of Android users familiar with the web version of Google Drive (random with a constraint), because I want users who are comfortable with the way Google Drive autosaves, rather than users who are accustomed to clicking File > Save every time they close a document. I would be testing on an Android device, so I want users who are already used to the device, as iPhone users may have some confounding aggravation with using an unfamiliar system.

I would have users insert their basic information into a Google Drive Spreadsheet with labeled rows such as “Name”/”Address”/”Hometown”, whatever. After they complete the spreadsheet, I would ask them to close the document. The independent variable is the presence of the “saving”/”saved” notification: either it is present, as in the redesign or it isn’t. The dependent variables are the feelings toward the system measured by the survey.

Because this is not a matter of speed or efficiency, but more a human factor of comfort and understanding of the system, there is no precise way to measure the improvement. Perhaps I would make a survey to be completed after the task that asks questions like:

  • How often did Google Drive save your changes? (options: every couple of seconds, every 30 seconds, every few minutes, when I exited the document, never)
  • When you closed the document, you knew your changes had been saved. (feeling 1-5 ranging from “not at all” to “absolutely”)

With these survey questions in mind, maybe we can call the dependent variables “opinion of system responsiveness” and “trust in saved changes”.
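Since these survey items are ordinal Likert-style ratings, a rank-based comparison between the two groups is one reasonable choice; the sketch below uses made-up 1-5 responses:

 # Compare "trust in saved changes" ratings between the notification and no-notification groups.
 from scipy.stats import mannwhitneyu
 
 with_notification    = [5, 4, 5, 4, 5, 3, 4]   # placeholder 1-5 ratings
 without_notification = [3, 2, 4, 2, 3, 3, 2]   # placeholder 1-5 ratings
 
 u_stat, p_value = mannwhitneyu(with_notification, without_notification,
                                alternative="two-sided")
 print(f"U = {u_stat:.1f}, p = {p_value:.3f}")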

The task is a control variable (everyone is entering the same text). To avoid maturation during testing, I would test each user only once. I make this distinction because I think that if a user had seen the notifications first, he/she would feel like they were missing in the version without notifications, and feel perhaps even more uncomfortable. That is, the user is a random variable.

My hypothesis is that users of the redesigned app would think the system was more responsive than users of the old app and that they would have more trust in saved changes than users of the old app.


Avneesh Kohli - 3/11/2013 12:57:09

The application I reviewed for the heuristic evaluation assignment was ESPN Scorecenter, an application that essentially brings all of the information and features of ESPN’s website to mobile users. One of the usability issues I discovered was that adding new teams was buried within 4 levels of menus, so that not only is it difficult to manage your favorite teams, but some users might have a difficult time finding the feature in the first place. Given that this is an essential feature to the application, this is a serious problem. I would design an experiment that would evaluate whether or not my new design allows users to find and manage their favorite teams more quickly. My hypothesis would be that users should be able to accomplish the task in about half the time. The independent variable would be where in the navigation I place the ability to manage a user’s favorite teams, as the user would have no control over it. The dependent variable would be the time it takes the user to find this and add his or her favorite teams. I would control that the user will only be adding one team at a time. The random variable would be the team that the user is going to select, so it could be high or low in the alphabet, or in a division/conference that is presented earlier or later in a list.


Jin Ryu - 3/11/2013 12:57:47

The app I reviewed previously is called "Soup Recipes", and it functions as a virtual recipe book for various soups. I would redesign the application to have a working favorite-soup list, to improve the visibility of the titles that show which category the user is in, and to redefine or describe the categories so they are clearer to the user. To test whether these changes improved usability, I would design an experiment.

My hypothesis would be: the number of steps (clicks) it takes the user from starting the app to reading the recipe of a soup they'd like to prepare would be much the same, but would take less time in the redesigned application than in the original, because the user will spend less time pressing the 'back' button to start over. Furthermore, both the number of steps and the time to find the same soup again after restarting the application will be much lower in the improved application, because the user will be able to use the shortcuts provided by marking soups as favorites.

The independent variables are the original application and the redesigned application, where each would be tested against the same number of subjects in a blind. The dependent variables to be measured are how many steps (clicks) and how much time it takes to get from the home menu screen to opening the recipe of a specific soup (randomly given). We would also measure, to compare the favorites function, how many steps (clicks) and how much time it takes to get from the home menu screen to finding the same soup recipe again (after closing it). The control variables would be using the same type/target group of people, who are interested in cooking soups, and the setting in which the experiment occurs (probably the supermarket, where users might want to look for ingredients). Random variables would be the assignment of the independent variable (the version of the app) to a test subject. We can also incorporate a variety of ages, genders, backgrounds, and times of day at which subjects are tested.


Lishan Zhang - 3/11/2013 12:57:53

The application I reviewed for my heuristic evaluation assignment is “Craigslist”, a mobile version of a classified advertisements website.

Features to be redesigned:

The search field didn’t have auto-completion or error prevention, which made it easy for users to make mistakes and slow to type keywords.

Experiments:

Find a group of people to type a list of keywords like “apartment”, “photographer” in the search field with the virtual keyboard.

Hypothesis:

People take a longer time to type a long keyword on a virtual keyboard without any hint, and they also make more mistakes.

Independent variables:

Whether or not the search field has an auto-completion feature.

Dependent variables:

The frequency of errors users make and the time they take to type.

Control variables:

Use the same list of long keywords, and use the applications in the same situation, such as walking down the street.

Random variables:

The users may have different familiarity with virtual keyboard entry and they may use different mobile phones.


Edward Shi - 3/11/2013 13:03:42

The application that I reviewed in my heuristic evaluation assignment was the eBay Mobile application, which allows users to use eBay on their mobile phones in a way that should be easier than through the browser. One of the main problems that popped up a lot was the inconsistency of the button colors with the standard. The sign-in and register buttons are greyed out, which would normally indicate that the user cannot press the button. I would design an experiment that colors in the buttons to see if that makes it easier for people to log in or register. My hypothesis would be that if we change the buttons from grey to blue, users would be able to log in or register more quickly than if the buttons were grey. The independent variable for my experiment would be the color of the buttons (grey vs. blue). The dependent variable would be the time it takes users to find the login or register button and tap on it. My control variables would be using only right-handed users, so that handedness does not play a role, and keeping everything else in the application the same so that nothing else inadvertently affects my trial. I would also control the time of day, so that some users are not more tired than others, and would have users sleep for a certain allotted time. My random variable would be which users are assigned to which independent-variable group; I hope that the randomness will even out any spikes in experience, intelligence, and other unforeseen factors. I would also keep the tasks separate, as login and register are two different tasks. One problem I foresee with my experiment is that each user may only participate once: as soon as they have done it before, they will know that, regardless of color, both buttons can be tapped. Furthermore, if possible, I would restrict the pool to users with no experience with the application, so that there can be no prior-experience bias in the experiment.


Glenn Sugden - 3/11/2013 13:05:30

Drawling, my CS160 individual programming assignment on the Android platform.

I would redesign the "clear screen area" so that it: had a definite border, had an "undo" feature, and was a lot harder to accidentally hit.

My hypothesis is that: the "clear screen area" is vague and too easy to accidentally hit. This poor design could be solved with a few simple corrections.

Independent variable: Two versions of the application, one as it is and one with a well defined, undoable "clear screen" button. Randomized tasks given to the user that would require full screen interactions (as well as clearing the screen on purpose).

Dependent variables: Number of times the user accidentally struck the "clear screen" area/button.

Controlled variables: single device only (E.G. Nexus 7), constant levels of lighting, no stylus

Random variables: user holds tablet or sets the tablet down, angle of use (in lap, reclined in chair, etc.), randomized tasks to minimize task interference with user interaction flow, user's experience with tablet use: familiarity with device, finger size, finger pressure, etc.


Yuliang Guan - 3/11/2013 13:12:44

The mobile application I reviewed for the heuristic evaluation assignment is called “Eating Well”. It is a collection of hundreds of recipes that are both healthy and fast to cook. In this application, users can read recipes by category, search recipes using keywords, flag their favorites, and email a recipe to a friend. I would like to redesign the search function in this application as an example, since I believe fixing it has high priority.

After the redesign, in order to test whether it works, I would enter a keyword that is not included in any recipe title, such as “water”, to see whether the system responds. If it shows “no results found”, then my redesign has improved the usability; otherwise, if nothing comes up, I may need to redesign again. Of course, I would need to run the test many times, not just once, to make sure the result is accurate.

The hypothesis is that when the keyword entered in the search is not included in the title of any recipe, there should be feedback showing “no results found”. The independent variable is entering words that are not included in any recipe in this application. The dependent variable is how the application responds. Control variables may include the application itself, the person who runs the experiment, and the process of operation. The random variables are the word entered, whether the smartphone works well or dies suddenly, etc.


Eric Leung - 3/11/2013 13:13:18

I reviewed SnapTax, by TurboTax. It's a quick tax-filing app that handles the most basic and most common actions.

I would redesign the FAQ page so that it's not one long page, its questions and answers are searchable, and the color scheme fits with the rest of the app. The most important changes would be breaking up each question and answer so that it is on its own page, and making them searchable.

My hypothesis would be that this new search will allow users to more quickly and effectively find the solution to their questions. My independent variable would be whether the users are using the old page or the new page. Our dependent variable will be the accuracy and time it takes to find the answer to a question that is given to them. We might give them 5 questions to look up the answer for, and they need to find the answer or say that this question is not answered. We control the questions that we ask the users, but we might shuffle the order of the way that the questions appear, or even how the app page is designed. The app page could be ordered by a topic grouping or an alphabetical grouping.


Elizabeth Hartoog - 3/11/2013 13:15:17

I reviewed a recipe app for allthecooks.com on the android platform.

The first violation I detailed was about the conversion tool. I felt that it was confusing to use at first due to design choices with input fields that weren't actually input fields or input fields that had no effect on the tool.

What I would do is change the app so that the numeric value of the converter that cannot be changed (even though tapping it pops up the keyboard) has a different style of text box, making it clear that it cannot take input. I would also remove the text box at the top that seems to do nothing. This would be my improved app.

My hypothesis would be that new users of the conversion tool will be able to perform a conversion faster using the improved app than using the original app. My independent variable would be the app: whether it is the original or the improved version. The dependent variable would be the speed at which the user performs the given conversion task. The control variables would include the phone used and the environment (both supplied by the experimenter). The random variables would then be the user's experience with the Android platform and the user's experience with cooking.

The experiment would be conducted as follows. I would put each participant in the same room, which has a tablespoon of water in it. I would then hand them a phone with the conversion tool already launched and give them the instruction: "Figure out how many gallons of water are in 1 tablespoon of water." I would then time how long it takes each user. The users would be randomly assigned to either the original or the improved app.


Moshe Leon - 3/11/2013 13:23:17

The application I reviewed was the Android mobile version of LinkedIn. It is supposed to connect professional people according to their job experience, careers, and education. It is almost like a sub-Facebook network, with an ability to send emails, add friends, and meet new people. It is mainly used as a location to post your credentials for others to view, including your resume.

The issue I chose to redesign is the ADD CONTACT from within the ADD CONNECTIONS SCREEN. The problem was that there is no indication if the contact is already added, and if you add someone who is already added, it throws an error without any explanation of the error cause, or possible solution. The hard fix is to just eliminate the button of the ADD CONTACT once the contact has been added; however, I am sure that the company would go for an easy fix- which is to display an explanation within the error message explaining the problem. A simple message like- “contact is already added” would suffice.

The experiment would involve at least four people, who would have to get into the application and add people in a specific order from a list. They all have a similar knowledge level (education, work); two of them have never used the application before, while the other two have used it in the past. In each pair, one person is told to look out for an error message and one person is not. While they are told exactly where to navigate and to try to add contacts, the contacts have been set up in advance and are given to them as a written list. They must add the first 3 and the last 3 people in the order in which they are written, but they can add the rest (about 30 contacts) in any order. The first 3 contacts and the last 3 contacts have one user in common, forcing them to add the same person twice. Once that person is added twice, an error message appears, and I would like to see whether it is now clear and whether people understand what just happened.

My hypothesis is that new users would not expect to see any error and would be distracted even if they are told to look for the error message, while people who have been using the application for a while would be more aware of such an error and might not even go through the error-inducing pattern - and that, even when it is noticed, the error is now clear. The experiment, if successful, would indicate how reliable the solution is, and whether a more drastic measure needs to be taken, such as removing the button or making it unclickable.

The list I am providing them with is a controlled variable: most of it is random, and only the beginning and the end are pre-constructed to trigger the error. Choosing 4 professionals, of whom 2 have never used the application and 2 have, is also a control variable, as are the instructions telling them exactly where to go within the application. Gender, age, race, and workplace are all randomly assigned variables. I am trying to overcome a confounding variable, in the form of experienced vs. non-experienced users, so I am making two groups and also incorporating this into my hypothesis. The dependent variable is the user's attention to and understanding of the error message, if they cause it to appear. The independent variable is the middle of the list, where the user can add contacts in any order; the beginning and the end of the list are controlled. Also, the way the user gets to the test screen is entirely up to them and is another independent variable - there are many paths they can take to eventually reach the desired test location.


Monica To - 3/11/2013 13:26:32

The mobile application that I reviewed for the heuristic evaluation assignment is called Pocket (formerly known as Read It Later). This application allows users to save online articles, webpages, and other media such as video and photos to the application for future offline or online access. For example, if I am on my tablet and find interesting articles that I want to read later in my free time, I can save them to Pocket; later, all I need to do is open the application to access my collection of articles and online content.

One major usability problem I found was in the way a user first sets up the application for Safari. To be able to save content from Safari, a user has to go through a series of steps to set up the app. The main issue is that the app first directs the user to a webpage with the set-up instructions in Safari (it exits the app and opens the Safari app), and one of the major steps has the user copy and paste about 50 lines of JavaScript code into a Safari bookmark and save it. This not only exposes the user to complex, intimidating code, it also introduces huge room for error. I would redesign this in a way that hides this action from the user: I would provide a button to connect with Safari so that the user could simply press a button in the application instead of having to exit, open Safari, and copy and paste JavaScript code. This should be something that happens in the background; it's astonishing that the app requires the everyday user to do this.

My hypothesis would be that a cleaner UI (i.e., a one-button-press set-up) is not only more comprehensible for a user but also a faster process to complete. My independent variable would be the version of the app, with two levels: the original set-up flow and my redesigned set-up flow. Since I am testing comprehension and speed, my dependent variable is the time from when the user first opens the app until the user completes the Safari set-up. My control variables would be the environment in which the experiment takes place (indoors with consistent lighting), the time of day, and the type of user (specifically, since I'm testing the Safari set-up, I want to test only iPhone users). Variables that I would randomize are the testers' ages, the amount of sleep they've had, their average iPhone usage time, and the number of apps they use regularly (this would be defined more precisely).

I am testing for comprehension and speed, and I assume that the more comprehensible the process is, the faster a user will execute it. I want to test the original application against my redesign, so I would randomly assign my iPhone users (of different ages, expertise levels, amounts of sleep, etc.) to the different versions of the app and ask them to complete the Safari set-up. This is similar to the A/B testing method commonly used on websites: one random group is exposed to one version of the product and the other is assigned to another, and the dependent variables are measured and compared. I find this method to be most effective for this particular evaluation. However, there are a few confounding variables that may threaten the validity of my experiment. For one, history could be an issue; some of my testers could have been exposed to a similar type of set-up before and may be more familiar with the process, in which case they would perform the task faster than people who are new to it. Another confounding variable is the education level or expertise of my testers. Some could be computer scientists who are familiar with JavaScript and don't find it intimidating or unusual, while novice users who have never seen code before may just view the chunk of JavaScript as a block of gibberish.


Andrew Gealy - 3/11/2013 13:31:34

I did my heuristic evaluation on the Piazza iPhone app. I identified two major usability problems. One of them was the "good" feature, where users can mark posts as "good". I still have no idea what this does, so my solution would be simply to remove it, as it seems potentially redundant with favorites. The other major usability problem was the fact that private posts are not tagged private and are not easily filtered. My solution would be either to create a private main screen category like there is for pinned and favorited posts, or create a filter option like there is for unresolved or unread posts.

My hypothesis would be that making private posts more accessible in the aforementioned ways will increase the use of private posts on Piazza in general and decrease the time it takes for a user to locate private posts.

My independent variable would be the handling of private posts: with a main screen category, a filter category, or the current method (visual tag only), which is the control condition. These conditions would be randomly assigned to test subjects. All other variables relating to the interface would be controlled.

Dependent variables would be: 1. the time it takes a user to locate a private post given identical starting conditions but different private post handlings. 2. the number of private posts created by Piazza users.

The first dependent variable is probably best tested in private sessions. Users from each condition would be presented with identical home screens and asked to locate a particular private post. The time it takes would be measured. The second dependent variable may be best tested by a broad A/B release, where some regular users of Piazza would have the current version and some regular users would have one of the updated versions. Overall volume of private posts could be measured with a very large sample size.


Marco Grigolo - 3/11/2013 13:34:10

I reviewed the YouTube app for iOS.

Usability problem: missing the Watch Later button. I would redesign the app to resemble more closely the service offered on the web. That way, much of the knowledge most users have gained from the web service would transfer smoothly to the iOS app, and they would know exactly where to find the button. I would also make the Watch Later button in the video lists slightly more visible, since the smartphone screen is much smaller than a PC screen, so the button would not be as visible or intuitive as on a PC and would confuse new users.

As for the testing, I have two hypotheses: 1. The presence of a Watch Later button in the video lists is essential to implementing the Watch Later feature correctly. 2. A button bigger than the one in the web version would reduce the time the user needs to add a video, and reduce mistakes (trying to tap Watch Later and instead loading the video).

Independent variable: the Watch Later button, located in the lower right corner of each video in the video lists. It can be either big or small, depending on the hypothesis we are testing. Dependent variables: time to access the Watch Later feature, depending on the size and presence of the button; success rate, the proportion of people who can find the button and use the feature; and the error/success ratio, the percentage of times that, while using the Watch Later feature, we tap something else. Controlled variables: noise and light (constant light, no people/music/noise around the user) - access time should not be influenced by these, since they would affect usability in general rather than only the feature being tested, and since we want results about Watch Later only, we should isolate those variables. Random variables: it is very important to see how different types of users react to changes, so the testers should be a mix of users completely new to YouTube, new to the app but experienced with the web service, and experienced with both the app and the website. This way we can compare access time and error rate between the old and new versions for all users, and see whether we really made progress for new users without negatively affecting users who already have another way of using this feature.


Haotian Wang - 3/11/2013 13:36:33

The application that I reviewed for my heuristic evaluation was Smart Ride, which gives you bus-stop arrival information. The App has a map tab which shows stops on a map, a search-by-agency tab, and a nearby-stops tab.

The problem I had with the search tab is that it cannot search for bus stops by fine-grained location (like a city), but only by general areas (e.g., Northern California). The fix I would make is to allow fine-grained location search and approximate matches in wording.

I would run an experiment for this fix by comparing how users perform at finding bus-stop times before and after the fix. I would present each of my test subjects with the same path-finding task. For example, I would tell them where their current location is and a location to get to (such as a school), and ask them to use SmartRide to find the bus times to take them to school. My independent variable would be which app is being used: before the fix or after the fix. My dependent variables would be the amount of time taken to complete the task, plus perhaps a user rating on a numerical scale of how difficult it was to perform the task. My control variable would be the task that I choose; that is, each test subject performs the exact same task (getting to school in my example). I would randomly select from the population of bus riders and randomly assign them to use either the unfixed app or the fixed app.


Timothy Wu - 3/11/2013 13:38:24

The application I chose for the heuristic evaluation assignment was the Craigslist iPhone application, which allows users to browse and post listings on the Craigslist website. In the assignment, I mentioned that the persistence of expired items in ad listings was a major usability problem. I would therefore design an experiment to test whether removing expired items from classified ad listings improves usability. The experiment would involve letting one set of users use the old version of the application with expired listings, and a different set of users use the newer version without expired listings. The goal is to test how quickly users can find a listing relevant to their interest using each version of the app.

In designing the experiment, I would first state the hypothesis and then enumerate the variables. My hypothesis is that removing expired ads from the listings improves usability by decreasing the time it takes users to find listings relevant to whatever they are searching for. The independent variable is whether the application contains expired listings or not. The dependent variable is the amount of time it takes to find a listing relevant to the user's interest. I would control where the user uses the application to make the experiment feasible to carry out: it would take place in a laboratory-type setting instead of a real-world, on-the-go setting. I would also control the time of day of the experiment, holding it during the middle of the day when people are most alert.

Another important control would be to have a set database of listings that each user will have to search through. In addition, another control would be that the user will be given a set of criteria to look for in a listing, and then find the listing in the list that corresponds to these criteria. This way, each user would be searching for the same thing and the type of item being searched does not effect the results of the experiment.

Most importantly, each user group will only use one version of the app, to eliminate practice effects. A random variable will be the users, who will be randomly selected from a wide population of target users. A constrained random variable will be which users are assigned to which version of the application: each user will be randomly assigned, with the constraint that 50% of users use one version and 50% use the other. In all, the goal of the experiment is to show that one version of the application has better usability than the other by measuring the amount of time it takes users to find ads that fit a given set of criteria.
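A minimal sketch of this constrained (50/50) random assignment, with placeholder participant IDs:

 # Shuffle the recruited users and split them evenly between the two versions.
 import random
 
 participants = ["u01", "u02", "u03", "u04", "u05", "u06", "u07", "u08"]   # placeholders
 random.shuffle(participants)
 half = len(participants) // 2
 groups = {
     "with_expired":    participants[:half],   # old version, expired listings kept
     "without_expired": participants[half:],   # redesigned version, expired listings removed
 }
 print(groups)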


Jian-Yang Liu - 3/11/2013 13:38:50

The application I reviewed was Instagram. It's a mobile app that aims to be a fast, beautiful, and fun way to share photos with friends and family. A major usability problem I found with the app was that it didn't address the help and documentation heuristic very well; it could be much more helpful to people who wish to understand Instagram's various uses.

My hypothesis would be quite simple: the more information that's given about the app - not only the tutorial, but also documentation of its various photo-editing functions - the more popular it will be and the more it will be used by people of all ages. The independent variable is the help and documentation. We can have two levels: the first, no tutorial and a complete lack of help within the mobile app, and the second, a simple getting-started tutorial as well as detailed documentation of what the buttons do. The dependent variable is the number of people who use the app, since we wish to find out whether more people would be willing to use the app if they know how to use it well. I would restrict people from looking up help and documentation anywhere other than within the app itself, and perhaps limit the ages of the participants to between 10 and 45 (to make the study easier to run). Random variables: gender, whether or not they've used photo-taking apps before, and how interested they are in photo-taking apps.


Ben Goldberg - 3/11/2013 13:42:48

The app I reviewed was the BearWALK app, which you can use to schedule a pickup from BearWALK. A major usability problem of the app was that when you clicked on the "See Buses" button, you would be taken to a map of the bus routes, but then there was no way to get back to the BearWALK log-in page. I would redesign the bus routes page to have a "Back" button so you could go back.

My test would be to pick a set of random people and have them randomly test one of the two versions of this app. I would give them instructions to view the bus routes page and then go back to the log in page for BearWALK. I would record the time taken to solve this task and whatever version had the lowest average time would be the better version.

My hypothesis is that this redesign would improve usability significantly. I believe users would be able to complete the specified task in a fraction of the time. My independent variable is the two different versions of the app. My dependent variable is the time taken to finish the task. Both the users chosen and who uses what version of the app would be random. Other factors will be controlled, like the environment around the person. That way there are no distractions for the person testing the app.
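
Comparing the two groups' average times could be done with a simple two-sample t-test; a rough sketch with made-up timings (assuming SciPy is available):

 from scipy import stats

 # Hypothetical task-completion times (seconds) for each version of the app.
 old_times = [18.2, 22.5, 19.8, 25.1, 21.0]
 new_times = [6.3, 7.1, 5.9, 8.2, 6.8]

 t, p = stats.ttest_ind(old_times, new_times, equal_var=False)
 print("mean old = %.1f s, mean new = %.1f s, p = %.4f"
       % (sum(old_times) / len(old_times), sum(new_times) / len(new_times), p))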


Achal Dave - 3/11/2013 13:51:19

I reviewed the MarketLive app for Windows Phone. It is a stock watcher application, with some functionality for alerts.

One of the most annoying issues I would like to fix is the lack of intuitive icons for navigation buttons (no match between system and world). First (before the experiment), I would attempt to manually sketch out various different icons, and get some feedback from friends and other sources.

In reality, I would attempt to narrow this down to a couple choices for each of the four buttons, but for the sake of this experiment, I will test one particular combination of my icons.

Now, I would (theoretically) replace the buttons in that application with mine, and hypothesize that users would be able to do what they want faster with my version than with the original. Specifically, I claim that new users will press the "..." button (which gives information about what the icons/buttons do) less often with the new icons.

Hence, my independent variable is the set of icons used for buttons. The dependent variable is how often users press the ellipsis. I would need to control the amount of time users had spent with the app (since I care primarily about how new users view the icons, not experienced users who have memorized the old icons). However, I would randomly sample across user types (national origin, language, age) to make sure my icons are not biased towards a particular user group; this app is not for a particular country or people. I would also randomize the type of Windows Phone users had (manufacturer and model), since this should not change how users interact with the application.
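
Counting those presses amounts to a small event log in the instrumented app; a hypothetical sketch (the event names are made up):

 from collections import Counter

 # Hypothetical event log recorded by the instrumented app: (user, event) pairs.
 events = [("u1", "tap_ellipsis"), ("u1", "tap_refresh"),
           ("u2", "tap_ellipsis"), ("u2", "tap_ellipsis"),
           ("u3", "tap_alerts")]

 presses = Counter(user for user, event in events if event == "tap_ellipsis")
 for user in ("u1", "u2", "u3"):
     print(user, presses.get(user, 0))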


Oulun Zhao - 3/11/2013 13:53:13

The application I evaluated was called “7Day Sleep”. This application is used to help users improve their sleep quality by playing certain sound tracks and displaying special images before the users go to sleep. In addition, it can keep track of your sleeping time and give you feedback based on your usage of the application.

How to design an experiment to test whether the redesign has improved usability: after improving the volume adjustment usability, we can have a group of people use the sound track feature before they go to bed and evaluate the improved volume adjustment feature.

Hypothesis: People in a noisy environment using the application would want to make the volume relatively loud, while people in a quiet environment would want to adjust the volume to be relatively soft.

Independent variables: The noisiness of the sleeping environment. We can have five levels of noisiness: very noisy, relatively noisy, medium, relatively quiet, very quiet.

Dependent variables: The volume level that the participants adjust to.

Which variables should be control variables and which should be random variables:

Control variables: The sound track played will be restricted to one specific sound track out of the seven sound tracks in the application.

Random variables: The participants and their sleeping environments. Different people may have different preferences and psychological backgrounds; therefore, by randomly selecting participants, we can have more comprehensive testing.


Zach Burggraf - 3/11/2013 13:55:41

For my Heuristic Eval I reviewed iRSVP, a speed reading application that flashes words in a document on the screen one at a time in rapid sequence to allow users to read at a very high Words/Minute rate.

One of the major usability problems I encountered was that the main reader screen has a slider to adjust the WPM rate with no labels or feedback whatsoever. Furthermore, the slider is exponential in scale, so for higher WPM rates it is extremely difficult to fine-tune the setting using the slider.

In redesigning the app, I would eliminate the slider and put a label in the center of the screen that clearly shows how many words per minute the app is currently flashing words at. On either side I would have a +/- button that increments/decrements the WPM rate by a small but noticeable factor. This would provide the user feedback so it is clear that the input was successful and what it actually did, and would also make it easier to set the value to the user's liking.
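
The +/- behaviour described here amounts to a clamped increment; a minimal sketch (the default rate, step size, and bounds are assumptions):

 class WpmControl:
     # Words-per-minute setting adjusted with +/- buttons and clamped to a sane range.
     def __init__(self, wpm=300, step=25, low=100, high=1000):
         self.wpm, self.step, self.low, self.high = wpm, step, low, high

     def increment(self):
         self.wpm = min(self.high, self.wpm + self.step)
         return self.wpm

     def decrement(self):
         self.wpm = max(self.low, self.wpm - self.step)
         return self.wpm

 control = WpmControl()
 print(control.increment())   # 325
 print(control.decrement())   # 300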

In testing this change, my hypothesis would be that it takes less time and input to set the WPM to a comfortable reading speed, and that fewer users will give up as I did when using the slider. The independent variables would be the document being read (which should remain unchanged) and the font and colour scheme being used. The dependent variables would be the users and the two different ways of setting the WPM. All variables would be controlled; none would be random.


Brent Batas - 3/11/2013 13:56:37

1)

The application I reviewed was The Economist app, a mobile reader for the popular news magazine of the same name.

In order to test whether my redesign improved usability, I would use A/B testing in the form of a new patch, where a portion of the group gets a new version with a redesigned interface, while the majority of the group gets the original interface. This would allow me to have people test the interface without them realizing it—this is important so that they test the interface exactly as they would use it normally.

Along with the interface changes in the new patch, I would also have the app include code that reports which version (A or B) the person is using when they subscribe or perform other actions with the app that I am tracking.
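
As a rough sketch of that reporting (the event and field names here are hypothetical, not The Economist's actual analytics):

 import json
 import random

 # Pick the variant once per install and attach it to every tracked event.
 variant = random.choice(["A", "B"])

 def track(event_name):
     payload = {"event": event_name, "variant": variant}
     # A real app would POST this to an analytics endpoint; here it is just printed.
     print(json.dumps(payload))

 track("opened_preview_issue")
 track("subscribed")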

At the end of it, I would want to see whether or not the new interface translates to more subscriptions, with everything else remaining the same.

2)

My hypothesis would be that my new interface improves usability, which would translate to an increase in subscriptions.

My independent variables would be the changes that I’ve made in the interface. For example, redesigning the subscription process to make it mobile-device-friendly, as well as making changes to the preview issue screen to make it less misleading.

As for dependent variables, I would want to measure the % of subscriptions from the group using the new interface and compare it to the % of subscriptions from the group using the old interface. Ultimately, this is the result I care about in evaluating my hypothesis.

As for control variables, I would want to measure the usage of the app as a reader. People use the app to just read articles, and I would want this usage to stay the same, since it shouldn't have anything to do with the interface changes, which concern the subscription process.

As for random variables, these would be the article content. The articles change with normal usage of the application, and these shouldn’t affect my dependent variables, so the article content should be allowed to vary to improve the external validity of the experiment.


kayvan najafzadeh - 3/11/2013 13:57:02

I evaluated the Scientific Calculator application, a very powerful calculator with so many features that the UI is complex and most of the features are not visible to the user or are hard to find. If I were to redesign this application, I would allow text selection in the calculator's input box so the user can fix a small mistake easily and doesn't have to clear everything and start from the beginning. Hypothesis: allowing the user to select the input and modify it will make correcting errors much faster and easier. Independent variable: the version of the application, either the revised one or the current one. As the experimenter, I will randomly assign participants to use one of the two designs of the application. Dependent variables: the number of times users press the clear button in the old version or modify their entry in the new version, as well as the time it takes participants to modify the entry or re-enter the input. Control variables: the device being used, the calculation problems they should try, and the situation the participant will be in. Random variable: the assignment of each version to each participant.

The experiment will be that I will have both versions of the application (old design and new design), then I will assign one of the versions to each participant along with a set of arithmetic problems to solve. Then I'll count the number of times the user either presses the clear button and re-enters the input or selects a part of the input and modifies it, and measure the time it takes each participant to do the correction. This procedure should repeat until I have enough data to analyse (for example, 10 participants per group). If the average time it takes a participant to correct an entry, per number of modifications, is lower with the revised version, then we have verified the hypothesis.
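
Summarising that measurement (total correction time divided by the number of corrections, per group) could look like the rough sketch below, with made-up numbers:

 # Hypothetical per-participant data: (seconds spent correcting, number of corrections).
 old_design = [(42, 6), (55, 8), (38, 5)]
 new_design = [(20, 6), (25, 7), (18, 5)]

 def seconds_per_correction(group):
     return sum(t for t, _ in group) / sum(n for _, n in group)

 print("old design: %.1f s per correction" % seconds_per_correction(old_design))
 print("new design: %.1f s per correction" % seconds_per_correction(new_design))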


Soyeon Kim (Summer) - 3/11/2013 13:58:52

The application that I reviewed for my heuristic evaluation assignment was Dropbox. Any file that the user uploads to Dropbox is automatically saved to the user's computers, phones, and the Dropbox website so that the user can access their files anywhere. The application also allows the user to share folders and files with others.

Last time, I found a major usability issue: the list of folders and files can only be viewed in alphabetical order. This is problematic because it is not realistic to expect users to remember the name of the file/folder that they are looking for. Most of the time, users don't remember the exact name, so they just scroll up and down the list until they find the one they were looking for.

My hypothesis would be that if a more efficient viewing option, such as viewing the list of files/folders by date modified, is provided, it takes less time for users to find a particular file/folder that they are looking for.

My independent variable would be listing files/folders by date modified. My dependent variable would be the time that one takes to find the file/folder that one is looking for.

Because I want to test only the effectiveness of the alternative viewing option (date modified), I want to control the type of device the experiment subjects are using. Different interfaces, font sizes, and screen sizes may or may not affect the dependent variable, so it is best to control the type of device to prevent unexpected results.

The random variable would be X, which represents the time that the user took to find a desired file/folder. Specifically, X would be a continuous variable since X can take any positive real value.
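
The alternative view itself is just a different sort key; a minimal sketch with hypothetical file metadata:

 from datetime import datetime

 # Hypothetical file metadata: (name, last-modified date).
 files = [("report.pdf", datetime(2013, 3, 1)),
          ("notes.txt", datetime(2013, 3, 10)),
          ("budget.xls", datetime(2013, 2, 20))]

 by_name = sorted(files, key=lambda f: f[0].lower())
 by_date = sorted(files, key=lambda f: f[1], reverse=True)   # newest first

 print([name for name, _ in by_name])
 print([name for name, _ in by_date])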


Alysha Jivani - 3/11/2013 13:59:09

I evaluated “Embark iBART” for my heuristic evaluation assignment. It is basically a public transit application to assist people with planning trips via BART. The 5 main sections are: Trip Planner, System Map, Departures, Advisories, and More Info. Most of the heuristic violations I found had to do with lack of error prevention and lack of visibility, but they weren’t extremely critical. If I were to re-design the app to address a major usability problem, I would probably fix the fact that the app allows the user to enter invalid times for the “arrive-by” option. If the time that the user would like to arrive by has already passed, there should be some sort of error-check in place that prompts the user to enter in a new arrive-by time or the option to select “depart now”, instead of forcing the user to go back a screen and fix his/her error (after he/she realizes it’s an error).
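
The error check described above boils down to comparing the requested arrive-by time with the current time before running the search; a minimal sketch (the prompt wording is an assumption):

 from datetime import datetime

 def check_arrive_by(arrive_by, now):
     # Return None if the time is valid, otherwise a prompt for the user.
     if arrive_by <= now:
         return ("That arrive-by time has already passed. "
                 "Enter a new time or choose 'depart now'.")
     return None

 # A 9:00 arrive-by requested at 14:00 the same day should trigger the prompt.
 print(check_arrive_by(datetime(2013, 3, 11, 9, 0), datetime(2013, 3, 11, 14, 0)))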

In my experiment, the independent variable would be whether the user is using the current version of the application (control) or the newer version that integrates error-checking (perhaps there could be two different versions of the new interface that implements the error-check). The dependent variable would be the amount of time that it takes users to recover from their error (i.e. how long it takes for them to update/change their search so that the input times are valid). My hypothesis is that adding the extra error-checking feature (with the options for modifying the trip search and for selecting “depart now” at the same screen) will decrease the amount of time it takes for the user to recover from his/her error.

I think it might be difficult to create a staged and controlled environment in which users commit errors regarding time-entry without them being aware of it, so perhaps this would better be executed as a long-term experiment with A-B testing. I would conduct a simple random sample of iPhone users who use the BART and then randomly assign them to the A or B (or C) condition. In order to ensure that I get an equal number of users testing the A, B, and C versions of the app, I would apply the concept of randomization within constraints (i.e. I would make sure that there would be an equal number of test-users for each version). Ideally, by conducting an SRS and randomly assigning users to the two different conditions, I will be able to control for errors made due to variability in circumstances and/or personality traits. (For example, I think errors like entering the time incorrectly are more likely to occur when the user is in a rush. Perhaps this tendency to be in a rush has to do with personality traits like being very busy or being chronically tardy.)


Sangyoon Park - 3/11/2013 14:00:04

App: AC Transit - AnyStop. This app provides the user with information about when an AC Transit bus is arriving at a specific stop. A user can find a bus stop by searching for the closest stop to the user's location, manually typing the name of a bus stop, or finding a stop from a route. One of the major usability problems to fix immediately involved the 'All routes for AC Transit' button: after clicking it, the app stops working if the phone is not able to receive data for all the stops due to poor signal reception or lag from the server. I would redesign this. Instead of getting all the route/stop names from the server whenever a user clicks 'All routes for AC Transit,' the app could cache all the data it received (except the arrival times for stops, since those vary). When possible, it would update the cache in the background, without blocking the UI thread.

To test this, I would put the previous version of the app and the fixed version (caching applied) into two different situations, such as one in a basement with poor signal reception and one with wi-fi connected so it gets an immediate response from the server, and see what improves significantly. My hypothesis is that if the app caches all the route/stop information and updates it 'when it is possible' through background work, the app will be much faster at loading all the routes/stops when a user wants to see a list of them.

The independent variable should be the signal reception (it could have 5 levels, as an ordinary phone's signal reception bar shows). The dependent variable is which type of button is pressed; the size of the list the app shows depends on what kind of button is pressed. If the 'All...' button is pressed, the app should show all the possible routes/stops; if a specific route button is pressed, only related stops are shown. In the experiment, we should control the wi-fi connection: it should be fast enough that the poor phone signal is significantly slower than it. Phone signal reception could be a random variable, since it is really hard to control the exact speed of a cell phone's reception (as we see in normal life, signal reception varies even in the same place), so making it slow could be a difficult factor.
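
The caching idea (serve the last known routes immediately, refresh in the background when the network allows) could be sketched roughly like this; the fetch function and route names are placeholders, not the real AnyStop API:

 import threading

 cache = {"routes": ["51B", "6", "79"]}   # last route list received (placeholder data)

 def fetch_routes_from_server():
     # Placeholder for the real network call, which may be slow or fail on poor reception.
     return ["51B", "6", "79", "88"]

 def get_routes():
     # Serve the cached list immediately; refresh it in the background when possible.
     def refresh():
         try:
             cache["routes"] = fetch_routes_from_server()
         except Exception:
             pass   # keep the stale cache if the request fails
     threading.Thread(target=refresh, daemon=True).start()
     return cache["routes"]

 print(get_routes())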


Eric Wishart - 3/11/2013 14:00:54

The app I chose to review is called Biomed Allergy Translator. It is described as a simple tool that helps bridge the language barrier when ordering food in countries across the world and trying to communicate certain dietary needs.

I would remove the buttons at the bottom of the screen and give the back button on the phone the functionality that users assume it has.

I could test if my redesign works by having users perform a series of tasks, such as: pretend you have this allergy and would like to translate it to a specific language, and then change the language.

The independent variables are the user's actions.

The variables that I would control for are the series of actions that a user would take, and the random variable would be the number of times that they screw up going back to a previous page.

My dependent variable would be the number of times that a user fails.

My hypothesis would be that users would fail less using my implementation as compared to the current functionality, where failure is defined as pressing "back", leaving the application, and losing all progress.


Nadine Salter - 3/11/2013 14:01:55

The application I reviewed for my heuristic evaluation, Apple's Find My iPhone, helps you find missing Apple devices registered to your Apple iCloud account. I didn't find any major usability problems — my findings, of which there were several, were all minor.

If I had a significant redesign in mind, I'd presumably hypothesise that my redesign was a better solution to the specific problem articulated in heuristic evaluation. Shockingly, the independent variables would be whatever wasn't changed and the dependent variables would be what was changed. Control and random variables would present themselves depending on the redesign.


Harry Zhu - 3/11/2013 14:03:05

I reviewed the Unit Converter app, which is basically an app that helps you convert hundreds of units. The main issue I had with the app was visibility: it was hard to distinguish the buttons used to navigate the app in the UI. I would design an experiment that places the buttons in different places and see with which layout users can perform the action of converting some units the fastest. An independent variable would be the different UIs that I would create for the app, because I can control which UI is being used. A dependent variable would be how quickly the user could perform the action of converting units (similar to how fast the test subject pressed the button after seeing the light). A control variable would be making sure that the users are interacting with the app for the first time. This will be easy since the UIs will be totally new. Some random variables will be mainly the user's background, such as their age and experience with interacting with phone applications.


Weishu Xu - 3/11/2013 14:08:36

The application I reviewed for my heuristic evaluation assignment was MyFitnessPal, which is designed to help the user keep track of calories consumed and calories burned each day based on food eaten and exercise performed. One of the aspects that requires the most redesign work is the friend finder feature, which the application incorporates to create a social and viral environment. I would redesign the application to allow users to search for friends either by searching names or by uploading their phone contacts or Facebook contacts.

When designing the experiment, my hypothesis would be that incorporating a more convenient-to-use friend finder feature will help drive user engagement with the social feature. I will measure that by observing how many users try to add or find friends under the current system and under the new system. Currently, I predict many users will avoid the friend feature because they do not know their friends' usernames, nor do they want to spam them with emails. With the new feature, I predict that more users will try to add friends. The independent variable will be whether the user is given the current app or the redesigned app. The dependent variables I would measure would be how many users in each pool ultimately end up adding a friend with each app, and also how long the average user spends tinkering with the app before either figuring it out or giving up (in order to understand how complex users find it). I would control for user demographics: they would need to be of a similar age group and have similar interest levels in technology products and new mobile apps, as well as interest in using social products. The random variables would include familiarity with technology, interest in tracking fitness, and personality (how determined they are or how easily they tend to give up).
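
Comparing the fraction of users in each pool who end up adding a friend is a two-proportion comparison; a rough sketch with made-up counts:

 from math import sqrt

 # Hypothetical results: (users who added a friend, users in the pool).
 current_app = (12, 100)
 redesigned_app = (31, 100)

 p1 = current_app[0] / current_app[1]
 p2 = redesigned_app[0] / redesigned_app[1]
 pooled = (current_app[0] + redesigned_app[0]) / (current_app[1] + redesigned_app[1])
 se = sqrt(pooled * (1 - pooled) * (1 / current_app[1] + 1 / redesigned_app[1]))
 print("current = %.0f%%, redesigned = %.0f%%, z = %.2f" % (100 * p1, 100 * p2, (p2 - p1) / se))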


Juntao Mao - 3/11/2013 14:17:33

I reviewed the Piazza App, the mobile app for the Piazza forum, on the iPad for my heuristic evaluation assignment. One major usability problem is a lack of help and documentation. If I redesigned the app to address this problem, I would test with some randomly chosen Piazza users (randomly chosen here means from different classes, preferably different schools and age groups). My hypothesis would be that users who are having issues/questions about the Piazza app can find their answers in the help section. One test could be that I give them a problem they are having (independent variable) and ask them to look for the answer to that problem, pretending they don't know the answer. Their success in this quest is the dependent variable. Control variables may include where they are testing, limiting them to not asking friends, and the length of time they have to find the answer. Random variables may be the knowledge the user has beforehand and how accustomed the user is to app/iOS technology in general.


Elise McCallum - 3/11/2013 14:20:57

The application I reviewed for my heuristic evaluation assignment was Daily Schedule Planner, which split tasks into three categories (work, personal, daily) and allowed users to set alarms, mark tasks as done, edit text of the description, and switch tasks between categories. One of the major usability problems is that, from the main screen, there is no visible way to delete a task, only to add a task or mark a task as complete (which remains in the task queue). To redesign the application, I would add a small "X" or something else indicative of a delete sign next to each task so that the user could easily delete the task if they so desired. I would then design an experiment to test whether or not my redesign improved usability.

In this experiment, I would compare how users performed tasks on the old version of the application with how they performed tasks on the new version of the application. My hypothesis would be that the time it takes users to delete a task would be shorter with the new design than with the old design. The independent variable would be the presence or absence of the delete symbol. This can also be extended to changing the size and position of the delete symbol to discover the optimal size and positioning for the fastest completion of the task. The dependent variable would be the reaction time, or the time it takes the user to figure out how to delete a task. The control variables would be the number of tasks presented on a page, the exact text of the task, and the tools with which a user can manipulate the interface (i.e. everyone must use their right index finger, a stylus, the tip of their nose, or whatever is most appropriate). The random variables would be the levels of the independent variables (size of the delete symbol, color of the delete symbol, and location of the delete symbol).


John Sloan - 3/11/2013 14:22:52

The application I evaluated was Python for iOS, which is an app that allows you to code in Python right on your mobile device. The usability problem I would first address is the fact that within the page for looking up projects you have saved, there is no clear 'up a directory' button. Right now it is just a '..' folder with an arrow pointing as though it would go down a directory instead of up. The author was trying to mimic the '..' command from the terminal, but this is muddled by the fact that it's still a folder, so its natural affordances imply the opposite of its actual function. To redesign this I would drop the idea of using '..' and instead just use a back button.

As an experiment for this, I would use a reaction time test to see if users' initial reaction time when trying to move up a directory is improved with a back button.

For this, my hypothesis would be that using a back button would be much more efficient for initial reactions than the '..' folder. The independent variable is the two different levels I am choosing, meaning a back button or a '..' folder. My dependent variable is the reaction time, since it will depend on the user and which level they are using. For control, I would make sure that each app had the same stored project files, so that the reaction time is not affected by someone having more saved files than another user. It could also be worthwhile to make sure all aspects of the testing room are consistent so that the reaction time is not affected by the user being distracted. If there were certain aspects of the testing environment that I could not control, I would count them as random variables.


Arvind Ramesh - 3/11/2013 14:23:03

The application I reviewed was Flipboard, an app that takes news articles from several news sources and sorts them by category. I would redesign the app to allow users to "favorite" articles to come back to later. This is currently possible in Flipboard, but it requires a Twitter account. I would embed this functionality into the app itself.

My hypothesis would be that the addition of a feature to store selected news articles would increase the total number of articles read by the user. My independent variable would be an extra button that would replace the current buttons at the top of the screen. My new button would simply read "favorite", making it easy for the user to understand what it does. My dependent variable would be the number of news articles favorited per user. Obviously, if each user is favoriting a large number of articles, the new button would increase usage of the app as well as its ad revenue. I would control for current Twitter users (who already have this feature) and make all the people testing the new app non-Twitter users. My random variable would probably be the amount of news each user currently reads. It can be inferred that if a user already reads a lot of news, they will be more likely to favorite more articles. This could get in the way of the test and would have to be accounted for in the testing methodology.



Lemuel Daniel Wu - 3/11/2013 14:24:30

1. The application that I used was Skype, an application for calling and texting others that also helps you organize your contacts that have Skype into a separate contacts list. The app design was horrible, and even its released version was filled with bugs. If I were to redesign the application, I would want to fix the bug that labels phone numbers with a flag based on the first 3 digits, even if the number only contains 10 digits (not enough to make it an international call). The bug would appear because a number that starts with 510, for example, would be labeled as a Peruvian number, even though we know that 510 is also an area code for the Bay Area.

My experiment would be to take a large database of American phone numbers and have them typed into the Skype application. The test would alert me every time a different country's flag was displayed. Next, I would test with a large database of foreign phone numbers and make sure that the flags displayed accurately match the country of the phone number that I put into the phone.
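
As a sketch of how that check could be automated, the lookup table and guessing logic below are tiny hypothetical stand-ins, not Skype's actual implementation:

 # Tiny stand-in for a real country-code table.
 COUNTRY_CODES = {"1": "US", "44": "GB", "51": "PE"}

 def guess_country(number):
     # Only infer a country from the prefix when the number is long enough to carry one.
     digits = "".join(ch for ch in number if ch.isdigit())
     if len(digits) <= 10:            # a bare 10-digit number is treated as domestic
         return "US"
     for prefix, country in COUNTRY_CODES.items():
         if digits.startswith(prefix):
             return country
     return "unknown"

 # The old bug: "5105551234" matched the "51" prefix and was shown with a Peruvian flag.
 for number in ["5105551234", "+51 987 654 321"]:
     print(number, "->", guess_country(number))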

Thus, the numbers would be the independent variable, and the dependent variable would be which flag appeared in the application. Random variables would not really exist in this test, because who the user is and when he/she tests the application will not change the outcome. One variable I would need to control is that all testing users use my version of the application, because we know for sure that the previous version would fail the test (we showed this in the heuristic evaluation assignment). I would also need to control the test so that users are all American, are physically in the United States, and are using American SIM cards so that their service recognizes that they are in the US. This is because the test for local numbers is done with American numbers.

If I was to eventually expand this test, I would have more databases of local numbers (no country code) specific to other countries, and use SIM cards and users in those countries to test whether those countries' flags are displayed when the users type in these local phone numbers.


André Crabb - 3/11/2013 14:28:52

App: SoundHound. SoundHound is an app that listens to a song and tells you what it is. It's similar to Shazam but with more functionality.

If I redesigned one of the usability problems I discovered, I would probably want to fix the Aesthetic and Minimalist Design issue where there is a lot of information on the first screen the user sees. To design an experiment, I would look for images of the old version of SoundHound, which does not have the extra information. (If possible, I would look for the old version of the actual app, instead of just pictures.) Then, I would prototype on paper or with some prototyping software (such as MetaApp!) and ask friends and other people which version of the screens they preferred.

My hypothesis would be that if the main screen of SoundHound were simpler, users would have a better experience when tagging songs. The independent variable would be the new screen / user experience. The dependent variable would be the users' happiness / frustration. The control variables would be the other screens and the use of the app; those wouldn't change, so that I could focus on the main screen change. I'm not really sure what I would let be a random variable in this experiment.


Dennis Li - 3/11/2013 14:31:23

The app I reviewed was called pocket fridge. The application allows users to record what is in their refrigerator using both a list format and a refrigerator GUI. The main functionality that I would work to improve, if asked to redesign this app, is the process of adding and removing things from the fridge.

Hypothesis: By removing the GUI and adding more intuitive methods of entry (a + button), it would be easier for users to add products to their fridge. Additionally, by accelerating navigation through a search feature, users would more easily be able to find things they want to remove.

My independent variables would be the features that I am changing, i.e., the functionality that I am adding and removing. Specifically, I will be removing the fridge GUI and adding a more intuitive add button and a search bar.

The dependent variables are what the user experiences: the time it takes for them to complete a task such as adding or removing an item. Their feelings about these changes would also be a dependent variable.

I would make sure that the app is on the same device and that the user interface design elements are the same. The environment they use the app in would also be the same, and the types of users I am testing the changes on would be the same.


Alexander Javad - 3/11/2013 14:31:42

I actually did not do the heuristic evaluation assignment, but I will do my best to provide an intelligent response. Let's say I wanted to make my application's system status more visable. I can test if my changes increase usability by designing an experiemtn to do so! I would have a control group which receives the original version of my application, and an experimental group which receives the new version of my application. Participants will be randomly sampled from the population and randomly assigned to these groups. My hypothesis would be "The changes I've made to my application increase usability by increasing the visability of the system's status". My independent variable would be a rating the user gives on how easy it is to understand and use my application on a scale of 1-10 where 10 is extremely understandable and easy to use. My dependent variable is what I am varying, and the only thing I'm varying is the version of my application that I am assigning to the experimental and control group. I would control the amount of time each user gets to use my application for. I do not see there being any random variables in this experiment.