Author or genre? Assessing the quality of cluster analysis graphs in two-dimensional classification problems
One of the most fundamental issues in stylometric classification tasks is that the data under scrutiny is usually messy in some way. And I don’t even mean dirty OCR here, which is a problem anyone even casually playing with Google’s N-Gram Viewer and interested in pre-1800s texts will have noticed (Mark Liberman, among others, wrote an interesting piece on this). What I mean, rather, is that the text collections we would like to classify according to one criterion, say by their author, their genre, or their author’s gender, usually contain texts which vary across all of the categories just mentioned, and usually some more.
How are we to know whether our beautiful distance measures and algorithms actually classify the texts according to the category we would like them to, or rather, how can we make sure they will? More likely than not, the distance measures pick up distinguishing cues that pertain to several categories at once, in a necessarily opaque fashion (if we knew which features distinguished authors rather than genres, we wouldn’t need stylometric tools in the first place). A typical text collection submitted to stylometric classification may vary by genre (comedy or tragedy), form (prose or verse), time (seventeenth or eighteenth century), theme (plenty of those, and messy in themselves) and of course author (you name them), and maybe also author’s gender. Any one of these categories may influence the classification we hope will follow just one of them.
One way to go about this is to try to come up with text collections which are less messy. That is certainly a good idea in many cases, and common practice: work only on early nineteenth-century novels, or only on seventeenth-century comedies, or only on twentieth-century crime fiction (research on the latter, by the way, is seriously hampered by copyright issues). But what if the classification problem you are interested in does not allow you to work with a more unified corpus? What if, for instance, the number (and/or length) of available texts becomes too small to allow reliable distance calculations?
One path which promises a way out of this is to find the right parameters, the ones that home in on the category you are interested in. First of all, you could choose the linguistic features accordingly (words, characters or parts of speech, taken singly or as n-grams); you could decide on the degree of preprocessing you need (lemmatize your wordlist or not, apply culling to it or not, to name just two options); you could also choose wisely which part of your frequency list to use (only the 20 most frequent features, the entire list, or anything in between); finally, you could select an appropriate statistical comparison method (vector-based distance measures or Principal Component Analysis, for example) as well as the algorithm which best suits your situation (Burrows’, Argamon’s, or Eder’s Delta, for example).
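To make the distance-measure option a bit more concrete, here is a minimal sketch of Classic (Burrows’) Delta: z-score each word’s relative frequency across the corpus, then average the absolute differences of the z-scores for each pair of texts. This is a schematic illustration, not the stylo implementation; the array shape is an assumption.

```python
import numpy as np

def burrows_delta(freqs):
    """Classic (Burrows') Delta, sketched: z-score each feature across
    the corpus, then take the mean absolute difference of z-scores for
    every pair of texts.

    freqs: array of shape (n_texts, n_features) holding relative word
    frequencies (assumes no zero-variance features).
    Returns an (n_texts, n_texts) distance matrix.
    """
    z = (freqs - freqs.mean(axis=0)) / freqs.std(axis=0)
    n = z.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist[i, j] = np.abs(z[i] - z[j]).mean()
    return dist
```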
But on what basis are we to make those wise, appropriate, suitable decisions? This is something I have complained about before, of course. Over the last couple of days I have been investigating one of those many fronts, and have run into an interesting problem. For testing and fine-tuning purposes, I used a text collection which is more or less homogeneous as to time of writing (all texts are from the second half of the seventeenth century) and as to form (all texts are plays written in verse). The texts vary with regard to two categories: authorship (Thomas Corneille or Pierre Corneille) and sub-genre (tragedy or comedy). The collection consists of 54 plays, 29 by Pierre Corneille (9 comedies, 20 tragedies) and 25 by Thomas Corneille (8 comedies, 17 tragedies). As usual, the texts come from the excellent “Théâtre classique” site by Paul Fièvre. The question at hand was at which settings (notably, at which range of words from the word frequency list) the classification would be dominated by the author category, and at which settings by the genre category.
I did a pretty large number of classification runs using, as usual, the ever-improving “stylo” script by Jan Rybicki and Maciej Eder. Keeping all other settings stable (Classic Delta, Cluster Analysis) and always using just 100 words from the word frequency list, I varied each run by the onset in the word frequency list, going all the way from an onset at the top of the list to an onset at the 2000th word. (I did this for single words and for word tri-grams as a first experiment.) Each time, the script produced a wonderful dendrogram, and now I finally get to the real issue of this post: with all of these dendrograms on my hands, how can I assess whether the clustering each dendrogram presents follows authorship or genre? To tell the truth, some exceptions apart, the dendrograms are all kind of messy, but how messy exactly? Which category dominates the classification, even if only slightly?
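For readers who would like to replicate the setup, here is a schematic version of the experimental loop, not the stylo script itself: slide a 100-word window down the frequency list, compute Classic Delta for each window, and run a cluster analysis on the resulting distance matrix. The step size, the stand-in frequency table, and the Ward linkage (one choice among several) are all assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Stand-in for the real data: 54 plays, relative frequencies of the
# 2100 most frequent words, columns sorted by corpus-wide rank.
freqs = np.random.rand(54, 2100)

window = 100
for onset in range(0, 2001, 125):  # the step size is illustrative
    segment = freqs[:, onset:onset + window]
    dist = burrows_delta(segment)  # the sketch from above
    # linkage() expects a condensed distance vector, hence squareform()
    tree = linkage(squareform(dist, checks=False), method="ward")
    # scipy.cluster.hierarchy.dendrogram(tree, labels=titles)
    # would then plot one dendrogram per onset
```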
Above is a fragment from one such dendrogram which may serve to illustrate the problem. The cluster analysis groups two plays whenever it can, or else adds a single play to a pair according to its distance; it then groups the pairs according to the same principle, until it reaches the top node (to the left). It is easy to see that some clusters are nice and form pairs of plays written by the same author and belonging to the same genre (in the middle, for instance, Le Festin de Pierre and Le Galant doublé are both by Thomas Corneille, and both are comedies). In other cases, the two plays in a pair match according to genre, but not according to author (Le Feint astrologue and Le Menteur, at the top of the graph, are both comedies, but one is by Thomas, one by Pierre Corneille). What about the plays which are not in pairs, like La Suite du Menteur (third from the top)? Has it been grouped with plays which have the same characteristics? To which surrounding plays should it be compared when deciding this? And what about the larger context? La Suite du Menteur is a play by Pierre Corneille, and there is another Pierre Corneille play close by, but the larger cluster is dominated by Thomas Corneille plays. How can all this be summarized in some convenient proportion of “author matches” and “genre matches”, simple enough to allow comparison of many dendrograms yet subtle enough to capture most nuances of each one?
I tried out two solutions. The first was to identify all low-level pairs that present a correct matching of both author and genre, of author only, of genre only, or no correct matching at all, treating single plays as pairing with their next neighbour and leaving larger clusters aside. I then counted these types of pairs for each dendrogram and drew up a chart comparing all dendrograms.
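In code, this first scheme could look roughly as follows: walk through the clustering tree, keep only the merges that join two single plays, and tally the four match types. This is a sketch against a scipy linkage matrix, not the exact procedure I used; for brevity it leaves out the rule that pairs single plays with their next neighbour.

```python
from collections import Counter

def count_pair_types(tree, authors, genres):
    """Tally low-level pairs in a scipy linkage matrix by match type.
    Only merges joining two original leaves are counted; larger clusters
    (and singletons attached to pairs) are left aside in this sketch."""
    n = len(authors)
    counts = Counter()
    for left, right, _, _ in tree:
        left, right = int(left), int(right)
        if left < n and right < n:  # both members are single plays
            same_author = authors[left] == authors[right]
            same_genre = genres[left] == genres[right]
            if same_author and same_genre:
                counts["author+genre"] += 1
            elif same_author:
                counts["author only"] += 1
            elif same_genre:
                counts["genre only"] += 1
            else:
                counts["no match"] += 1
    return counts
```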
Because this did not seem to do justice to the slightly more complex cases where isolated plays need to be matched with mixed pairs, I then devised a calculation scheme using the same kind of information but giving each play two numeric values between 0 and 1, one for genre matching and one for author matching, according to the degree of matching found inside the pair or with the closest pair. This is slightly more subtle and somewhat simplifies the graph, but it still does not account for the larger context of each pairing.
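Since I have described that scheme only loosely, here is one possible reading of it as code: each play receives, for author and for genre, the share of its partners (the other members of its pair, or of the nearest pair for a singleton) that agree with it. The actual weighting I used differs in detail, so treat this purely as a sketch.

```python
def graded_scores(play, cluster, authors, genres):
    """One reading of the graded scheme: a play's author score is the
    share of its partners (the other members of the pair, or of the
    nearest pair for a singleton) sharing its author; likewise for
    genre. Both scores fall between 0 and 1."""
    partners = [p for p in cluster if p != play]
    author_score = sum(authors[p] == authors[play] for p in partners) / len(partners)
    genre_score = sum(genres[p] == genres[play] for p in partners) / len(partners)
    return author_score, genre_score
```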
Anyway, the graph above, which is based on the initial method, does show a few interesting tendencies (tendencies which, however, would need to be confirmed using a better way of assessing the graphs). First of all, there is very little genre-based pairing in the dendrograms with a wordlist onset of 0 to 750, with the exception of 375 (I wish I knew what is going on there). This suggests that one way to avoid genre-based classification, if that is not what you are looking for, is to limit the range of words used for classification to the first 850 words or, even better, to the first 350 words.
This result confirms an observation others have made: that genre, most likely because it is strongly related to theme, is more likely to show up in the parts of the wordlist beyond the function words. Still, it is good to know that this also holds for French plays from the seventeenth century, and that the critical area of the word frequency list only starts at around 850 words. That goes well beyond the pure function words, of course, and leaves plenty of room and plenty of data for good distance calculations. If you want to classify your texts by genre and not by author, however, this graph is bad news: the author signal always somehow shows up along with genre.
Now, if any of you have comments or ideas on how to create a more systematic approach to this issue, on how to establish better counts of the various types of pairings in the dendrograms or, better still, on how to further automate the whole procedure, then don’t hesitate to drop me a line in the comments.
First of all, let me say it is really great to get such encouraging feedback on this post. Blogging is such an amazing thing!
@Ted Underwood: It really seems there is a lot more to be done with more advanced statistical procedures, and I really appreciate this way of looking at the bigger picture instead of just searching for small solutions or workarounds. I will definitely look into some of the things you suggest. I have been (literally) dreaming of a PCA in which, magically, authorship was one axis and genre the other. The reality is much messier, obviously.
Of course, I will also try to find help to develop a solid workaround/solution, too ;-) Both directions are a matter of going down the stylometry road all the way, instead of just halfway, and I’m all for pushing ahead. So many more things to learn!
@Joris: Yes, I have experimented with consensus trees, of course. But they seem to be harder to interpret when you want to see what is really happening: they summarize all of my dendrograms rather than tell me what each of them shows. However, I do use them to go back to my initial question about Molière and Corneille, with the settings that seemed most promising from the individual cluster analyses/dendrograms. Or how would you suggest using them?
@ Matthew Jockers: Thank you so much for your comment, glad you like the post. There is much more to be done! I am really looking forward to reading that book chapter, and the book, in fact. I really believe this kind of disentanglement of influences should be done for all kinds of languages, types of texts, and epochs; besides French, I’m also thinking of Spanish literature, of course; there does not seem to be a lot of work on that.
So, thanks again to all of you!
I’m delighted to see this work being done, thanks. I have done a similar experiment in chapter 6 of my soon (2013) to be published book. I use the same corpus we used in Quantitative Formalism (http://litlab.stanford.edu/LiteraryLabPamphlet1.pdf) and attempt to assess the “strength” or “pull” of five different facets: genre, author, time, text and gender. The book includes a graph similar to yours in which I “rate” or “rank” the pull of each facet on a set of high-frequency features. In all but a few cases, the “author” signal overpowers the genre signal, but not always (and these become interesting cases). I should note that in my experiment I’m using novelistic sub-genres (i.e. “national tale,” “gothic” and “bildungsroman,” not “novel,” “drama,” or even “comedy” or “tragedy”).
Very nice work, Christof! This way we’re getting closer and closer to some scheme for identifying which parts of the word frequency spectrum signify which quality of a text. For better counting/establishing consistency of types of pairing, I think you could maybe take a look at the subject of consensus trees, cf. http://bitly.com/SveTH2 (PDF)?
You’re exploring a problem that interests me a lot at the moment. Stylometry is one place it crops up (the tension between author and genre). The quick and dirty way to solve the problem, as you point out, has often been to select common features (which tend in practice to be associated with authorship rather than subject or genre). But Alison, Jockers, Witmore, Heuser, Moretti, et al. have shown that even common features may be associated with genre! And as you point out, the “authorship” signal is also strong, and hard to disentangle from genre.
Similar problems crop up, e.g., with publication date and genre. That’s actually the part of the problem that interests me most acutely.
One solution to the problem is feature selection. Instead of simply drawing a cut-off at some frequency level, separating common and uncommon words, it is possible to design a script that tests individual features for their power to discriminate authors and/or genres. This is called “scheme-specific attribute selection,” and is discussed in data mining textbooks. I’ve been doing a variant of it (using a Wilcoxon test to select features associated with specific genres) and I do find that it’s possible to select features that are associated specifically with genre rather than authorship. I get pretty good classification accuracy that way (in the 90-95% range).
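In case that sounds abstract, here is a minimal sketch of the idea; my actual code differs in its details, and the significance threshold here is arbitrary:

```python
import numpy as np
from scipy.stats import ranksums

def genre_features(freqs, genres, target, alpha=0.01):
    """For each feature, compare its frequencies in texts of the target
    genre against all other texts with a Wilcoxon rank-sum test, and
    keep the features whose distributions differ significantly."""
    mask = np.asarray(genres) == target
    selected = []
    for j in range(freqs.shape[1]):
        _, p = ranksums(freqs[mask, j], freqs[~mask, j])
        if p < alpha:
            selected.append(j)
    return selected
```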
But if the goal is to understand the way different factors are interrelated (e.g. genre may after all be *related* to chronology in some interesting ways) we’ll need more sophisticated analytical techniques. I think regression analysis is one viable tool; that would allow us to “control for” chronological differences, or separate chronology and genre as two different “axes” of discrimination.
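To make the “controlling for” idea concrete, one simple version is residualization: regress each word frequency on publication year and carry the residuals into the genre or authorship analysis. This is only a sketch of one way to do it, with illustrative variable names:

```python
import numpy as np

def residualize(freqs, years):
    """Regress each word frequency on publication year and return the
    residuals, i.e. the part of each feature that chronology does not
    explain; genre or authorship analysis can then run on these."""
    years = np.asarray(years, dtype=float)
    X = np.column_stack([np.ones_like(years), years])  # intercept + year
    resid = np.empty_like(freqs)
    for j in range(freqs.shape[1]):
        beta, *_ = np.linalg.lstsq(X, freqs[:, j], rcond=None)
        resid[:, j] = freqs[:, j] - X @ beta
    return resid
```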
More ambitiously — and I have to admit I don’t fully understand how this would work yet — I believe “Bayesian belief networks” might potentially offer a way to model the interaction of different attributes (like genre and chronology).
But, basically, I just want to say: good question, and there’s a lot of interesting research yet to be done on it!