My DH2013 program, possibly
The Digital Humanities Conference 2013 in Lincoln, Nebraska, will start very soon. The program is packed with long and short talks and panel sessions in six parallel tracks, and of course the whole breadth of DH is represented.
But computational text analysis seems to be one of the major topics again, after it was so prominent last year at DH2012 in Hamburg, Germany. Here is what my DH2013 program might look like if I concentrated only on the computational text analysis talks. This is actually quite unlikely; there are just too many other interesting things, for example plenty of talks about data visualization in various forms, about collaborative research environments, digital editions, and more! (I’ll link to the abstracts once they are available online.)
Wednesday morning, 8:30 to 10:00, short paper session 02, with, among other things, several papers on computational text analysis, by David Hoover (The Full Spectrum Text-Analysis Spreadsheet), myself (Fine-tuning our Stylometric Tools: Investigating Authorship and Genre in French Classical Theater), and Anna Jobin and Frederic Kaplan (Are Google’s linguistic prosthesis biased towards commercially more interesting expressions? A preliminary study on the linguistic effects of autocompletion algorithms).
Wednesday morning, 10:30 to 12:00, long paper session 04, focused on studies of citation and text reuse, with John Walsh and Cassidy Sugimoto (Victorian Paratextual Poetics and Citation Analysis), Chris Alen Sula (Citation studies in the humanities), as well as Jean-Gabriel Ganascia, Pierre Glaudes and Andrea DelLungo (Automatic Detection of Reuses and Citations in Literary Texts). The afternoon looks good too, with lots of talks about visualization in LP08 and SP06.
Thursday morning, 8:30 to 10:00, long paper session 10, chaired by Fotis Jannidis, with three talks all clearly focused on methodological advances in stylometry, by Maciej Eder (Bootstrapping Delta: a safety net in open-set authorship attribution), David Hoover (his second talk! Almost All the Way Through–All at Once) and Jan Rybicki, David Hoover and Mike Kestemont (Collaborative Authorship: Conrad, Ford and Rolling Delta).
Thursday morning, 10:30 to 12:00, will continue with stylometry and more specifically authorship or speaker/narrator attribution in long paper session 14, with talks by Mike Kestemont, Sara Moens and Jeroen Deploige (Stylometry and the Complex Authorship in Hildegard of Bingen’s Oeuvre), Karina van Dalen-Oskam (Epistolary voices. The case of Elisabeth Wolff and Agatha Deken) and Hsieh-Chang Tu and Jieh Hsiang (A Text-Mining Approach to the Authorship Attribution Problem of Dream of the Red Chamber).
Thursday afternoon, 1:30 to 3:00, long paper session 17 chaired by Elisabeth Burr, is devoted to social network analysis applied to (mostly) literary texts, with Caroline Suen, Laney Kuenzel and Sebastian Gil (Extraction and Analysis of Character Interaction Networks From Plays and Movies), Yannick Rochat, Cyril Bornet and Frédéric Kaplan (A social network analysis of Rousseau’s autobiography “Les Confessions”) and a mysterious third paper, by Michael Andrew Finegold and others (Six Degrees of Francis Bacon), quite likely also a statistical approach, though ;-).
Friday morning, 8:30 to 10:00, long paper session 20, is again devoted to social network analysis: David Bamman and others (Inferring Social Rank in an Old Assyrian Trade Network), David Michael Brown and Juan Luis Suárez (Preliminaries: The Social Networks of Literary Production in the Spanish Empire During the Administration of the Duke of Lerma, 1598-1618) and Graham Alexander Sack (Simulating Plot: Towards a Generative Model of Narrative Structure). This slot is tricky, because a parallel session, long paper session 21, offers talks on computational text analysis beyond stylometry, with Jack Elliott (Unsupervised Learning of Plot Structure: A Study in Category Romance), Mark Andrew Algee-Hewitt and Ryan Heuser (Tropes, Context and Computation: An approach to digital poetics) and another talk with a mysterious title by Marc Wilhelm Küster (Agents for Actors). To make things worse, there is yet another parallel session on digital editions and XML markup which I would also like to hear.
Friday morning, 10:30 to 12:00, short paper session 15, chaired by Maciej Eder, has interesting papers on encoding historical place names and on character encoding, as well as two stylometry papers on Japanese literary texts, with Mito Takahashi, Kana Tezuka, and Tamaki Yano (Identifying the author of the Noh play by considering a rhythmic structure – Validating the application of multivariate analysis) and Ayaka Uesaka and Masakatsu Murakami (Authorship problem of Japanese early modern literatures in Seventeenth Century).
To round things off, Friday afternoon, 1:30 to 3:00, has long paper session 28 with talks on document classification and the phonological analysis of poetry, with Noga Levy, Lior Wolf and Peter Stokes (Document classification based on what is there and what should be there) and Drayton Callen Benner (The Sounds of the Psalter: Computational Analysis of Phonological Parallelism in Biblical Hebrew Poetry).
Wow! I am really looking forward to hearing all of these papers. I guess it is natural that I am particularly eager to hear the papers I helped review (some of which are among the ones mentioned here). The dilemma when looking at this narrow selection as opposed to the whole range of papers at DH2013 is obvious: how can I deepen my understanding of computational text analysis while at the same time broadening my horizon of what the digital humanities are?