Beyond the black box, or: understanding the difference between various statistical distance measures
One of the issues we noticed time and again when using the different stylometric approaches is the huge influence of settings and parameters on the results. This is not surprising, of course, and it does not call into question the validity of the method as such. Rather, it reflects the current historical state of stylometry: research on the best methods, algorithms, and settings for different languages, genres, and periods still needs to be pursued further, despite significant progress in this process of “fine-tuning” stylometric methods, particularly for “closed-game” attribution problems, but to some degree also for “open-game” ones.
This means two things. First, for any study involving texts for which no solid experience is available yet, appropriate algorithms and settings first need to be established on an unproblematic collection of texts used for fine-tuning. These settings can then be used for the analysis of the collection of texts actually under scrutiny, which should be as closely related to the fine-tuning collection as possible in terms of language, genre, and time. Second, it is quite important for the researcher to know what the differences between the various options for algorithms, distance measures, and parameter settings really are.
One essential aspect of this, more fundamental even than the “most frequent words” range, and more difficult to understand, is the set of statistical distance measures involved in comparing the texts. What these distance measures essentially do is take as input the feature frequency matrix for all texts involved in the comparison (where features can be words, letters, or word or letter n-grams). For each text, the feature frequency list can be considered a multi-dimensional vector. The relative distance or proximity between the texts is then a matter of calculating the distance between these multi-dimensional vectors, and several distance measures have been suggested for doing so.
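To make this concrete, here is a minimal sketch in Python of how such a feature frequency matrix might be built from pre-tokenized texts (all names here are hypothetical, not taken from any particular stylometry tool):

```python
from collections import Counter

def frequency_matrix(texts, n_features=100):
    """Turn each text into a point in an n_features-dimensional space.

    `texts` maps a text label to a list of word tokens; the function
    returns the shared feature list and one relative-frequency vector
    per text.
    """
    # Determine the most frequent words across the whole corpus.
    corpus_counts = Counter()
    for words in texts.values():
        corpus_counts.update(words)
    features = [w for w, _ in corpus_counts.most_common(n_features)]

    # Compute the relative frequency of each feature in each text.
    matrix = {}
    for label, words in texts.items():
        counts = Counter(words)
        matrix[label] = [counts[w] / len(words) for w in features]
    return features, matrix
```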
The politics of statistical distance measures
The day before yesterday, Maciej explained some of these distance measures to us, using a wonderful political analogy in which each of the distance measures is compared to a political system. This is the political system of stylometry in brief, as shown in Maciej’s slide:
The Euclidean distance, Maciej explained, is tyrannical, because it gives a voice only to the very top of the most frequent words list. Following the principles of Euclidean geometry, it is based on the square root of the sum of the squared differences between all vector coordinates. Because no weighting is applied, the usually large absolute differences between the few most frequent words (see Zipf’s law) have a massive influence on the results; for words further down the list, the differences are smaller and do not weigh in very much. For this reason, the Euclidean distance is not recommended in most cases.
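In code, the Euclidean distance between two of the frequency vectors sketched above is simply:

```python
import math

def euclidean(a, b):
    # Squaring the coordinate-wise differences means the large absolute
    # differences among the top few words dominate the total.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```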
The Manhattan distance corresponds to an oligarchy, because it is slightly less biased towards the very most frequent words. In contrast to the Euclidean distance, the Manhattan distance relies on the sum of the absolute differences of all vector coordinates, which somewhat reduces the influence of the very top most frequent words. But it still gives decisive importance to a very small group of words, in the range of maybe the 10 to 20 most frequent words (according to Maciej).
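The corresponding sketch, on the same vectors:

```python
def manhattan(a, b):
    # Absolute differences are summed without squaring, so the top
    # words still dominate, but somewhat less drastically.
    return sum(abs(x - y) for x, y in zip(a, b))
```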
Classic Delta by Burrows is entirely democratic: instead of comparing absolute frequencies of features, it relies on z-scores, and calculates the mean of the absolute differences between the z-scores. This means that it effectively applies a scaling or weighting so that the less frequent words count more in the overall distance score, and the most frequent ones count less. Each word gets an equal say in the distance measure, just as each person has a vote in a democracy.
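A sketch of both steps, standardization and the Delta score itself, again building on the hypothetical matrix from above (the corpus needs at least two texts for the standard deviation to be defined):

```python
import statistics

def z_score_matrix(matrix):
    # Standardize each feature across all texts: subtract the corpus
    # mean and divide by the corpus standard deviation, so that every
    # word counts on the same scale regardless of its raw frequency.
    labels = list(matrix)
    n_features = len(matrix[labels[0]])
    zs = {label: [] for label in labels}
    for i in range(n_features):
        column = [matrix[label][i] for label in labels]
        mu = statistics.mean(column)
        sigma = statistics.stdev(column)
        for label in labels:
            zs[label].append((matrix[label][i] - mu) / sigma)
    return zs

def classic_delta(za, zb):
    # Burrows' Delta: the mean absolute difference of the z-scores.
    return sum(abs(x - y) for x, y in zip(za, zb)) / len(za)
```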
Fortunately, stylometric democracy yields much better results; in fact, Classic Delta was a huge step forward for stylometry. (As a side-note: corruption in stylometry seems to apply either to the texts in the corpus itself, like dirty OCR, or to manifest itself in the cherry-picking of results, arbitrarily preferring some results over others, led by some pre-established ideology.)
Finally, Eder’s Delta is somewhat like a feudal monarchy, or maybe a parliamentary monarchy. It applies a specific weighting to each frequency, decreasing as the rank of the word in the frequency list increases, so that the importance of the less frequent words is raised without being made equal to that of the most frequent words. Basically, this means that it is a compromise between the Euclidean distance and Burrows’ Classic Delta.
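As a sketch, assuming a linear weighting factor of (n − r + 1)/n for the word of rank r (the exact weighting function is an assumption on my part), Eder’s Delta could look like this, with the vectors ordered by descending corpus frequency as produced above:

```python
def eder_delta(za, zb):
    # Like Classic Delta, but each word's z-score difference is scaled
    # by (n - r + 1) / n for rank r: the top-ranked word keeps its full
    # weight of 1, while the last word only gets a weight of 1/n.
    n = len(za)
    total = sum(abs(x - y) * (n - i) / n        # rank r = i + 1
                for i, (x, y) in enumerate(zip(za, zb)))
    return total / n
```

With these four functions side by side, the only thing that changes between the “political systems” is how much say each word gets in the final score.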
One relatively stable tendency Jan Rybicki and Maciej Eder have noted is that for relatively weakly inflected languages like English or German, Burrows’ Delta is a good distance measure, while for inflected languages like French and many others, Eder’s Delta generally seems to yield better results. However, this is of course only a rule of thumb and needs to be checked every time.
The politics of the black-box issue
Knowing such things permits us to go beyond treating stylometric methods as a black box into which we put some texts on one side and from which we get an attribution answer on the other. The black-box model falsely suggests a kind of objectivity: although the procedure is in principle entirely formalized, documentable, and reproducible, it does not produce objective truths, because the settings may be unsuitable for the case at hand. More importantly, we should not, as humanists, relinquish control over, and understanding of, what exactly goes on inside the black box and how settings affect our results. My question is how far we need to go, how deeply we need to peer into the black box, or how transparent it should be to us.
“Program or be programmed” is the poignant title of a disappointing book. In the case at hand, I would rather paraphrase it and say that, in order not to be manipulated by our tools, we should strive to become competent manipulators of our tools, where manipulation means sophisticated usage as well as deliberate modification of the tools that enable our research. I wonder, however, whether the somewhat metaphorical appreciation of the different distance measures I have just briefly laid out is really enough to understand stylometry: it is enough to understand the overall effect that using different distance measures has on the data, but not to really understand what is going on under the hood of the black box. I guess I will have to come back to this and understand the maths in a bit more detail.
To BerenikeH: this article does a thorough analysis of Delta:
Argamon, S. (2008). Interpreting Burrows’ Delta: Geometric and probabilistic foundations. Literary and Linguistic Computing, 23(2), 131–147.
Thank you, Joseph!
Thanks for this! I have a similar question: are you aware of any publication that explicitly addresses the black-box problem in stylometry? (maybe a text of your own? ;-)
Hi Joseph, this analogy was part of a lecture at the Workshop on Stylometry at the Leipzig European Summer School in July 2012; the title was “Distance Measures and Nearest Neighbor Classifications”. I don’t know of a published version of this analogy.
Thanks for the clear description — good analogies! Was the lecture you refer to by Maciej Eder in a course or at a conference? Has he published these comparisons to political systems?