QDA Miner User’s Manual 167
disagree somewhat on what represents a religious metaphor, they may nevertheless
agree on how many of those metaphors they find in each speech. Since the research
question concerns comparing the frequency of metaphors among speakers, agreement
on the overall frequency per speech should be sufficient.
Code Importance - This option allows you to assess whether coders agree on the relative
importance of a code in the document. This kind of agreement is achieved by comparing, for
a given code, the percentage of words in the document that have been tagged as instances of
this code.
A researcher may hypothesize that when adolescents are asked to talk about a
love relationship, male adolescents devote more time than females to talking about the
other person's physical characteristics and less time to their own needs or feelings.
With this kind of hypothesis, the researcher needs to establish an agreement on the
relative importance of these topics, not necessarily on the frequency or location. Again,
coders may somewhat disagree on what specific segments represent instances where
the adolescent is talking about their needs or feelings, but they may nevertheless come
to a close agreement on the importance of each code.
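Agreement on code importance thus reduces to comparing, per coder, the share of a document's words covered by a code. The following sketch illustrates the idea only; the function name, the example code label, and the word counts are hypothetical, not part of QDA Miner:

```python
def code_importance(tagged_words, total_words):
    """Percentage of a document's words tagged with a given code."""
    return 100.0 * tagged_words / total_words

# Hypothetical 2,000-word interview: coder A tags 240 words with a
# "PHYSICAL" code, coder B tags 260 words. The specific segments may
# differ, yet both rate the code's importance at roughly 12-13%.
pct_a = code_importance(240, 2000)   # 12.0
pct_b = code_importance(260, 2000)   # 13.0
```

Even when the coders' individual segments do not match, these two percentages can be close enough to count as agreement on the code's relative importance.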
Code Overlap - This last level of agreement is the most stringent criterion of agreement
since it requires the coders to agree not only on the presence, frequency and spread of
specific codes, but also on their location. When this option is selected, you need to provide a
minimum overlap criterion, expressed as a percentage. By default, this criterion is set to 90%.
If the codes assigned by both users overlap each other on 90% of their length, then these
codes will be considered to be in agreement. If the percentage of overlap is lower than 90%,
the program will consider them to disagree.
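One way to picture the overlap test is as the shared portion of two coded segments relative to their combined extent. This is a minimal sketch under that assumption — the manual does not specify how QDA Miner computes the percentage internally, and segments are represented here as hypothetical (start, end) character positions:

```python
def overlap_pct(seg_a, seg_b):
    """Proportion of the two segments' combined span that both coders
    covered (intersection over union) -- one plausible reading of the
    'overlap on 90% of their length' rule."""
    start_a, end_a = seg_a
    start_b, end_b = seg_b
    inter = max(0, min(end_a, end_b) - max(start_a, start_b))
    union = (end_a - start_a) + (end_b - start_b) - inter
    return inter / union if union else 0.0

# Two coders tag nearly the same passage: 90 of 100 characters shared,
# which just meets the default 90% criterion.
print(overlap_pct((0, 100), (10, 100)) >= 0.90)
```

With the default criterion, the pair above would be counted as an agreement; shifting either boundary a few more characters would tip it into disagreement.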
Such a level of agreement would be interesting for several reasons. A researcher may
be interested in identifying specific manifestations of a phenomenon. Another
important reason is that examining agreements and disagreements at this level
helps achieve high agreement at any of the less stringent levels. In other
words, close examination of where agreements occur or do not occur should allow one
to diagnose the source of disagreement and take corrective actions to establish a shared
understanding of what each code means for every coder. In a sense, checking the
agreement at this level may be used in a formative way to train coders. Yet, the final
summative evaluation of agreement may still be performed on a less stringent level,
provided that this lower level matches the research hypothesis.
The first three criteria use documents as the unit of analysis and calculate, for each document, a single
numerical value per code and coder. This numerical value represents the occurrence or frequency
of the specific code, or the percentage of words in the document assigned to it. This
measure is then used to establish the level of agreement. For example, when assessing
agreement on code frequencies, if a user assigns a specific code to a document four times while another
one uses this code only once for the same document, then the program will add 0.25 to the total number
of agreements and 0.75 to the total number of disagreements. The absence of a code in the document is
considered to be an agreement. Since the unit of analysis is the document and the numerical value for
each code lies between 0 and 1, each document has an equal weight in the assessment of agreement, no
matter how many codes are found in these documents.
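The per-document scoring described above can be sketched as a ratio of the smaller to the larger value, which reproduces the 4-versus-1 example (0.25 agreement, 0.75 disagreement). This is an illustrative reconstruction, not the program's actual code:

```python
def frequency_agreement(freq_a, freq_b):
    """Score agreement on one code's value within one document.

    Returns a value between 0 and 1: identical values (including joint
    absence of the code) count as full agreement; otherwise the ratio
    of the smaller to the larger value is credited as agreement, and
    the remainder as disagreement.
    """
    if freq_a == freq_b:        # includes 0 vs 0: absence in both agrees
        return 1.0
    return min(freq_a, freq_b) / max(freq_a, freq_b)

# One coder applies a code four times, the other once:
print(frequency_agreement(4, 1))  # 0.25 agreement, 0.75 disagreement
```

Because each document's score for a code is bounded between 0 and 1, every document contributes equally to the overall agreement, regardless of how many codes it contains.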