When we start a research project to understand our site visitors' behavior, we often forget that we bring our own biases and preconceived notions to the work.
I recently attended a seminar by user experience grandmaster Danielle Cooley where she spent some time exploring this topic.
Here are some of her tips to help you recognize your own biases and do a better job on your own research and usability projects.
Cooley breaks bias down into eight dimensions:
Selection bias. This is perhaps the best-known bias: the people who opt in to a particular research activity are inherently different from those who choose not to participate, and from the general population. Make sure you understand which segment of the population you are actually examining.
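To see how much an opt-in sample can distort a result, here is a toy Python simulation. All of the numbers are made up for illustration; the (hypothetical) assumption is simply that dissatisfied visitors are more motivated to answer a survey than satisfied ones.

```python
import random

random.seed(42)

# Toy model: each visitor has a "true" satisfaction score from 1 to 10.
population = [random.randint(1, 10) for _ in range(100_000)]

def opts_in(score):
    # Made-up assumption: unhappy visitors (score <= 5) respond 30% of
    # the time, happy visitors only 5% of the time.
    return random.random() < (0.30 if score <= 5 else 0.05)

respondents = [s for s in population if opts_in(s)]

pop_mean = sum(population) / len(population)
sample_mean = sum(respondents) / len(respondents)
print(f"Population mean satisfaction: {pop_mean:.2f}")
print(f"Opt-in sample mean:           {sample_mean:.2f}")
```

Under these assumptions the opt-in sample's average lands well below the true population average, even though every individual answer is honest; the distortion comes entirely from who chose to respond.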
Acquiescence bias. Respondents tend to want to agree with the interviewer, because they want to be liked and to earn their stipend for taking the survey. Cooley urges interviewers not to be too friendly when conducting in-person interviews, for just this reason.
Social desirability bias. People may not tell the truth when you ask sensitive questions, such as about their sexual preference or drug use. "They also tend to lie or at least alter their answers when asked about something where their answers would tend to make them look bad," Cooley says. As an example, she cites a survey that asked Bostonians whether they walked up the long flight of stairs from the city's deepest metro station or took the escalator. The results were far from the actual observed behavior: of course, most people won't attempt to climb 200 steps.
Central tendency bias. When people are surveyed with a range of answers (from 1 to 10, from approve to disapprove, etc.), they tend to reply with something in the middle of the range. This can be mitigated with questions that present both sides of an issue, for example: "Do you prefer or avoid websites that ask for your email address?" with answers that run from "always prefer" through "no opinion" to "always avoid."
Confirmation bias. "People tend to believe that a particular set of information supports their existing beliefs or biases," she says. Cooley cites one political study where both Democrats and Republicans confirmed separate and opposing implications from the same survey results.
Reverse fundamental attribution errors. "Traditionally, people blame external circumstances for their own negative behaviors but attribute others' negative behaviors to particular personality flaws," says Cooley. But in user experience research, we often see the opposite: subjects blame themselves for being unable to navigate a website or complete a given task, no matter how poor the site design.
Hawthorne effects. When subjects know they are being observed, they often change their behavior. The name comes from a phone company manufacturing plant in Hawthorne, Ill., that was the site of several behavioral studies in the 1920s. "Just remember, we aren't doing peer-reviewed science research here. We are just trying to figure out how to build a better website," she says.
Evaluator effects. Knowing all of the biases above doesn't guarantee consistency: two evaluators running the same survey or research project can still come away with different results.
Knowing these biases is a first step in improving your field research. Good luck with your own projects.