What the perfect repository for text analysis looks like (to me)
The longer I work with various collections of literary texts, available in various formats and for use with various tools, the more I would like to have a nice repository which I and others could use to ingest, store, transform, update and extract text collections. So what would this repository look like? Basically, I’m describing a use case, and would be very interested to hear from you out there, fellow text analysis practitioners, whether you have aspects to add to this, and how you are currently dealing with the issues outlined here.
Essentially, I see the following steps in my “repository for text analysis” use case, roughly from beginning to end:
- Ingest texts coming from various sources (Gutenberg, Wikisource, TextGrid, ABU, theatre-classique.fr, ebooksgratuits.com, you name it) and, hence, in various formats (txt, html, epub, doc, XML, TEI P4, TEI P5), into the repository.
- Transform all of these text formats into a central “master” format, essentially a fairly simple implementation of TEI, and make them valid against a schema defining that implementation.
- Add typological metadata to the TEI header, including things like: genre, sub-genre, author gender, literary epoch, narrative form, etc.
- Create various derivative files from the master files, especially plain text files split into pieces in various ways (by kilobytes or number of words, or by structural segments like chapters, paragraphs, scenes or acts); see the sketch after this list.
- Create such derivative files containing only specific parts of the master files, like only text from the “body”, or only “speeches”, or everything but quotations.
- Create collections of derivative files based on the typological metadata, such as a collection of all crime fiction novels from the 20th century, or all comedies in verse written by women (not that many yet).
- Use the typological metadata to flexibly generate filenames for the derivative files, for example “author_title” or “genre_year-title”.
- The master and/or derivative files can be further annotated linguistically (tokenization, lemmatization, POS-tagging) and otherwise (named entity recognition, speakers, etc.).
- The master files can be corrected, updated and versioned, and new sets of derivatives generated from the updated masters.
- The collections of derivative files can be published for documentation, reproducibility and reuse, including reuse by external analysis services.
- All data is stored safely and securely, and access can be more or less restricted, so that collaborative work on the collections becomes possible.
- And finally, of course, and because 12 is a nice number, use these collections of derivative text files for all kinds of computational text analysis.
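To make some of this more concrete, here is a minimal Python sketch of what items 4, 7 and 10 could look like in practice: extract chapter-level plain text derivatives from a TEI master and name them from header metadata. The paths, the assumption that chapters are marked as div elements of type “chapter”, the naming scheme, and the use of lxml are all choices made for illustration, not a description of an existing system.

```python
# Hypothetical sketch of items 4, 7 and 10: derive chapter-level plain text
# files from a TEI P5 master and name them from header metadata. The paths,
# the <div type="chapter"> convention and the naming scheme are assumptions.
from pathlib import Path
from lxml import etree

TEI = "{http://www.tei-c.org/ns/1.0}"

def derive_chapters(master_path, out_dir):
    tree = etree.parse(master_path)
    # A few header fields for building the derivative filenames ("author_title_NN.txt").
    author = tree.findtext(f".//{TEI}titleStmt/{TEI}author", default="unknown")
    title = tree.findtext(f".//{TEI}titleStmt/{TEI}title", default="untitled")
    stem = f"{author}_{title}".replace(" ", "-").lower()

    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    # Treat each <div type="chapter"> as one derivative file; only the element
    # text is kept, so other selections (speeches only, everything but
    # quotations) would be variations of this findall/itertext step.
    for i, div in enumerate(tree.findall(f".//{TEI}div[@type='chapter']"), 1):
        text = " ".join(div.itertext())
        (out_dir / f"{stem}_{i:02d}.txt").write_text(text, encoding="utf-8")

derive_chapters("masters/sample.xml", "derivatives/chapters")
```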
Don’t we all need this in some way or another, when doing computational text analysis? Currently, I am doing these things with a wild, somewhat functional, but ultimately clunky combination of a wide array of tools and services (I’m probably forgetting some):
- Calibre to create txt or rtf from epub and other formats (#1 above)
- Oxgarage to create TEI from various formats (#1 above)
- A little bit of XSLT in jEdit or oXygen, to derive selective content from TEI files and write txt files (#4 and #5)
- RegEx in jEdit or oXygen, to create or clean up TEI files (#1 and #3)
- RelaxNG in jEdit or oXygen, for validation (#3)
- Dropbox for storing everything and accessing it myself (#11)
- “cat”, “split” and “rename” on the command line to split and merge text files and to rename the files of specific text collections (#7; see the sketch after this list)
- TreeTagger via an R wrapper, for linguistic annotation (#8)
- “stylo” package for R, WEKA, Gephi, TXM, jEdit, for all kinds of queries and analyses (#12)
- do things manually, for good measure and difficult cases (all steps, even #12: I do read stuff in the good old way, too.)
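For illustration, the command-line splitting step above could equally be scripted. A minimal Python sketch might look like the following, with chunk size, paths and naming pattern as placeholder choices rather than part of my actual setup:

```python
# Hypothetical sketch of the splitting step otherwise done with "split" on the
# command line: cut a plain text file into numbered chunks of N words.
# Chunk size, paths and the naming pattern are placeholder choices.
from pathlib import Path

def split_by_words(txt_path, out_dir, chunk_size=5000):
    words = Path(txt_path).read_text(encoding="utf-8").split()
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(0, len(words), chunk_size):
        chunk = " ".join(words[i:i + chunk_size])
        name = f"{Path(txt_path).stem}_{i // chunk_size + 1:03d}.txt"
        (out_dir / name).write_text(chunk, encoding="utf-8")

split_by_words("derivatives/chapters/sample_01.txt", "derivatives/chunks")
```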
Notice that in the first list, #9 (update with versioning), #10 (publish for reuse) and #11 (storage with access control) are not really catered for with this bricolage setup. More importantly, however, the workflow involves too many tools which are not really connected with each other, so that the process of creating a given derivative of a text can neither be easily repeated by myself nor, just as importantly, by others.
Currently, here in Würzburg, we are exploring how to provide for most if not all of these things with TextGrid and the DARIAH infrastructure. I’m personally curious to hear how others do this, and it would also be of great interest to us, when further defining this use case, to learn how others deal with it: So, how do you do it? What is needed but not in the list? What works well for you? What doesn’t?
Of course, I have not really said what my perfect repository would look like: I’m not sure, but I do know that when I use it, I will be impressed and gratified by its elegance, flexibility and responsiveness. It will work beautifully.
Nice post.
Interestingly, TXM’s platform development roadmap includes a ‘source corpus workbench’ environment specification along similar lines, see: http://sourceforge.net/apps/mediawiki/txm/index.php?title=Corpus_workshop_design
Some services are already available in the software:
– various input formats (Unicode raw text, Word/ODT, Factiva-Europresse, XML, TEI, parallel – TMX, synchronized – TRS…)
– simple CSV-table-based metadata management
– XML-TXM, a TEI-based pivot format
– batch raw text file splitting/re-encoding/etc. scripts and macros (scripts with a GUI), written in Groovy (yet another Python-like, Java-based dynamic scripting language)
– a batch file renaming script that uses XPath access to XML element content and attribute values
– an integrated, full-featured text editor with RegEx query/replace and sub-pattern memorization
– OxGarage and other XSLT stylesheets and filters (e.g. convert from ODT, or filter out everything except the teiHeader)
– token/pos/lemma NLP services
– etc.
SVN and ownCloud/Dropbox clients are still missing, and it is interesting to read that they are wanted.
Hi Christof,
we were surprised not to read about the DFG-funded Deutsches Textarchiv (DTA), http://www.deutschestextarchiv.de/, here and would recommend having a look at it. We have developed an infrastructure and a search engine which allow for most of your desired features. In our extension module DTAE, http://www.deutschestextarchiv.de/dtae, we aim to enhance our core collection by integrating texts from various sources (Wikisource, Gutenberg, scholarly editions etc.) and formats (practically all you mentioned, i.e. txt, html, epub, doc, XML, TEI P4, TEI P5) into the repository. Some of this work is done in the context of CLARIN-D, http://www.clarin-d.de/de/ (cf. [1]), a great project for the integration, federated content search and linguistic analysis of text resources.
We will comment on your list against the background of our experience with the DTA core corpus and DTAE:
ad 1.: Integration is done by conversion into the TEI P5-based DTA ›base format‹ (DTABf), http://www.deutschestextarchiv.de/doku/basisformat, cf. [2]. It must be said (and cannot be stressed enough) that proper conversion always entails some manual work, due to the individual markup practices of different projects. Using OxGarage and other tools or out-of-the-box routines without double-checking did not lead to satisfactory results. However much you minimize the effort by developing and refining your own routines, manual corrections always remain necessary.
ad 2.: Indeed, a central “master” format, in which all texts are represented, is crucial. Ours is, see above, the DTABf. Of course, RNG (http://www.deutschestextarchiv.de/basisformat.rng) and ODD (http://www.deutschestextarchiv.de/basisformat.odd) schemas are available. The DTABf is an agreed best-practice model for the structuring of historical texts in CLARIN-D (cf. CLARIN-D User Guide, http://de.clarin.eu/en/language-resources/userguide, Ch. II, 6.).
ad 3.: Metadata is integrated from the respective sources, but almost always has to be enhanced or corrected. We developed a webform (based on the TEI-Header of the DTABf) for that: http://www.deutschestextarchiv.de/dtaq/dtae/submit/clarin. Metadata is available in various formats: TEI-Header, CMDI, DC.
ad 4.: You can download all texts of the DTA corpora as plain text, HTML, XML, and TCF.
ad 5.: Alternatively, use our search engine DDC to query certain parts of the documents, e.g. all quotations in Mendelssohn, Moses: Jerusalem oder über religiöse Macht und Judenthum. Berlin, 1783: http://www.deutschestextarchiv.de/search/ddc/search?q=%2F..%2F+with+%24con%3D%2Fcit%2F&book=mendelssohn_jerusalem_1783
ad 6.: Use metadata filters, e.g. to look for the term “Seuche” in works that were a) classified as genre “Wissenschaft :: Medizin” AND b) published between 1790 and 1820: http://deutschestextarchiv.de/search/ddc/search?fmt=html&corpus=core&ctx=&q=Seuche+%23has%5BtextClassDTA%2C%2FMedizin%2F%5D+%23less_by_date%5B1790%2C1820%5D&limit=15
ad 7.: Within the DTABf, the metadata are thoroughly structured so that the extraction of selected fields to generate file names for your individual collections is facilitated.
ad 8.: The whole corpus has been automatically analyzed with regard to linguistic features. All this information is stored in standoff fashion and is included in the DTA search engine, e.g.:
POS: search for “ehelichen” as a verb ($p=VVINF, http://deutschestextarchiv.de/search/ddc/search?fmt=html&corpus=core&ctx=&q=ehelichen+with+%24p%3DVVINF&limit=15) vs. “ehelichen” as an adjective ($p=ADJA, http://deutschestextarchiv.de/search/ddc/search?fmt=html&corpus=core&ctx=&q=ehelichen+with+%24p%3DADJA&limit=30)
tokenization: allows, for example, for proximity queries using #DISTANCE (e.g. search for “gefunden” and “werden” with max. 2 tokens in between: http://deutschestextarchiv.de/search/ddc/search?fmt=html&corpus=core&ctx=&q=%22%40gefunden+%232+%40werden%22+%23random&limit=10)
lemmatization: allows for lemma-based queries, e.g. search for tokens that are assigned the lemma “Erkenntnis”: http://deutschestextarchiv.de/search/ddc/search?fmt=html&corpus=core&ctx=&q=%24l%3DErkenntnis+%23random&limit=10
named entity recognition: is also done by the DTA, in a cooperation project (http://www.hab.de/de/home/wissenschaft/projekte/aedit-fruehe-neuzeit-archiv–editions–und-distributionsplattform-fuer-werke-der-fruehen-neuzeit.html), but for the time being as an internal service. We use a combination of automated normalization of the historical spelling and two specific taggers to get the best results, cf. [3].
ad 9.: Cf. our environment for quality assurance, DTAQ, where we correct and update our texts regularly: http://www.deutschestextarchiv.de/dtaq/about. You can download and ‘lock’ a file and upload the new version when you’re done. We are working on a web-based XML editor to allow collaborative correction and annotation on-site, which will be presented at the TEI-MM in Rome (Frank Wiegand: TEI/XML Editing for Everyone’s Needs, http://digilab2.let.uniroma1.it/teiconf2013/program/posters/155).
ad 10.: Almost all DTA texts are published under CC BY-NC, only some subcorpora have other licensing (i.e. CC BY-SA).
ad 11.: In DTAQ, you can restrict access to certain texts or collections to specific user groups. Most of the DTAQ texts are free, though, and the core corpus is available to everybody, anyway.
ad 12.: @Everybody: Feel free to do so. Contact us if you have any questions: dta@bbaw.de
Best, Christian Thomas and Susanne Haaf
References
* General Publications of the DTA team: http://deutschestextarchiv.de/doku/publikationen
* DTAE (Extensions of the core corpus): http://www.deutschestextarchiv.de/dtae
* DTA ›base format‹: http://www.deutschestextarchiv.de/doku/basisformat
[1] Christian Thomas, Frank Wiegand: Making great work even better. Appraisal and Digital Curation of widely dispersed Electronic Textual Resources (c. 15th–19th cent.) in CLARIN-D. Full Paper for the International Conference “Historical Corpora 2012”, December 6–9, 2012; Goethe University, Frankfurt, Germany. http://edoc.bbaw.de/volltexte/2012/2308/
[2] Alexander Geyken, Susanne Haaf, Frank Wiegand: The DTA ›base format‹: A TEI-Subset for the Compilation of Interoperable Corpora. In: 11th Conference on Natural Language Processing (KONVENS) – Empirical Methods in Natural Language Processing, Proceedings of the Conference. Edited by Jeremy Jancsary. Wien, 2012 (= Schriftenreihe der Österreichischen Gesellschaft für Artificial Intelligence 5). http://www.oegai.at/konvens2012/proceedings/57_geyken12w/57_geyken12w.pdf
[3] Christian Thomas, Bryan Jurish: Named Entity Recognition (NER) im Deutschen Textarchiv – Computerlinguistisch gestützte Identifikation von Personen- und Ortsnamen in den Korpora des DTA. Workshop „Mehr Personen – Mehr Daten – Mehr Repositorien“, 4.–6. März 2013 in der Berlin-Brandenburgischen Akademie der Wissenschaften. Slides: http://deutschestextarchiv.de/files/DTAE-NER_vortrag-2013-03-06.pdf
Unfortunately, the link in ad 6. was corrupted in the course of conversion (because it contains square brackets).
Just visit http://www.deutschestextarchiv.de/ and enter the following query into the search slot (activate query “im Korpus”):
Seuche #has[textClassDTA,/Medizin/] #less_by_date[1790,1820]
I run into these problems using the text files available at gallica.fr. Christof, have you used gallica data? If so, how do you work with them? I have had to use an approach similar to Ted’s but I wonder: a) is there a “best practice” for working with these files? and b) am I reinventing a wheel?
I have not actually used any Gallica full texts, because the last time I checked, the quality was relatively bad and you need to clean the texts up (running titles, etc.). Much better than Internet Archive texts, but still not really usable.
Apart from that, I would expect the issues to be more or less the same as for texts from other sources. I don’t know of any official “best practice” and it seems to me most people are reinventing their little wheels at this point. Let’s try and discuss this at DH2013!
Thanks for sharing your vision, Christof!
If I may play devil’s advocate for just a second, I’m not entirely sure I crave your text analysis environment quite as much as you seem to. We wouldn’t want a black box or any type of inflexibility, would we? ;-) I actually quite like the ability to plug and play different tools for different purposes (I’ve recently ditched one tool for three jobs for three tools for one job each), as long as results get better.
I think the main issue, and you hint at that, is the incompatibility of output formats of most of the tools I use, and this seems to me an area that still needs a lot of work. I use Perl, LibXML, and some of the wrapper modules available on CPAN for almost all normalization work I do to get these to play nicely together, but these are obviously ad-hoc solutions.
I see versioning as less of an issue; most institutional repositories, and indeed TextGrid, are getting there.
An interesting point to me seems the ability to publish and reuse also for the purpose of peer-review. I wonder if we need some sort of agreement on outputs, formats, storage, and procedures, never mind criteria and standards for evaluation?
I agree that the core of the issue is the compatibility problem, which is even less an issue of the texts themselves (most tools use plain text in the end, anyway, even if markup is used for preprocessing) and more a problem of the associated metadata, which you have to link to each text without having it in the actual text file, and connect to your analysis tool, each time in the way that tool or procedure demands it. And of course flexibility is key, which means that a modular approach and the possibility to set parameters or tweak scripts will be important.
But what I find unacceptable is that each scholar concerned with computational text analysis somehow puts together their own little system that serves just their purpose at just that time. And I wish, for myself and for many of my colleagues, that we could find a way to enhance our options and the range and subtlety of our analyses without having to set up more of these individual systems and without first having to learn Python or Perl to do so.
I agree with Christof’s reply here.
The links between texts and metadata really do need to be standardized.
Part of the problem is being solved by libraries (for my Anglophone research context, HathiTrust and WorldCat are doing a pretty good job of organizing volume-level metadata).
But there’s also metadata inside volumes, since many volumes mix verse and prose and publisher’s advertisements, etc. If we want to do reproducible research, we need a way of standardizing that kind of markup. My approach right now is to write a system that generates *very basic* page-level XML markup algorithmically for ~1 million books. (Let’s call it TEI very, very-lite.) Then I’ll send those marked-up texts and metadata back to HathiTrust, where they will become accessible to other scholars.
When people find problems with the markup conventions I used, that will also be fine: I’ll make my code available, so they can edit the code and re-generate the corpus themselves.
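For illustration only, a very rough sketch of what such minimal page-level markup could look like; this is not the actual generation code, and the element and attribute names are placeholders rather than a real schema:

```python
# Rough illustration (not the actual generation code) of "very, very-lite"
# page-level markup: wrap OCR pages in minimal XML with a predicted genre per
# page. Element and attribute names here are placeholders, not a real schema.
from xml.sax.saxutils import escape

def pages_to_xml(pages, volume_id):
    """pages: list of (page_text, predicted_genre) tuples."""
    parts = [f'<volume id="{escape(volume_id)}">']
    for n, (text, genre) in enumerate(pages, start=1):
        parts.append(f'  <page n="{n}" genre="{escape(genre)}">{escape(text)}</page>')
    parts.append("</volume>")
    return "\n".join(parts)

print(pages_to_xml([("CHAPTER I. It was a dark and stormy night...", "fiction")], "vol001"))
```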
I think you’re accurately describing a problem that we all confront. And I agree that we need some better, more coherent, more broadly shared solution.
I deal with these issues a little differently, essentially because I don’t use TEI. I use plain text, and when I need to extract, say, “verse” or “quotations” from the text, I have to come up with an algorithmic way to do that.
I’m also working on algorithmic ways to categorize genre. My assumption is that we won’t agree on a stable, shared set of genre categories any time in the near future. So instead of creating a single database of genre metadata, what we need is a reasonably transparent, reproducible way to explain how works were categorized for a particular project.
So for me, the workflow looks like this:
1. A plain text repository. I am relying only on HathiTrust now, because a single source of text reduces formatting problems for me later.
2. OCR correction and running header removal.
3. Tokenizing the files (solving end-of-line problems) and storing page-level word counts.
4. Algorithmically mapping genre both at the volume level and within a volume, so that I can separate “drama” from “fiction” or “tables of contents” from “body text.”
I’m doing 2, 3, and 4 mostly in Python, but I’m starting to use Java in a few places for reasons of scale (and, I hope, eventually, portability).
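As an illustration of steps 2 and 3, here is a minimal sketch of the kind of routine involved; it is not the actual pipeline, and both the regular expressions and the treatment of a page as a single string are simplifications:

```python
# A rough sketch (not the actual pipeline) of steps 2-3: rejoin words that were
# hyphenated across line breaks, tokenize, and store page-level word counts.
# Treating a "page" as a single string is a simplifying assumption here.
import re
from collections import Counter

def page_counts(page_text):
    # Rejoin end-of-line hyphenation, e.g. "experi-\nence" -> "experience".
    text = re.sub(r"-\s*\n\s*", "", page_text)
    tokens = re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())
    return Counter(tokens)

print(page_counts("running head-\nings removed, this is the experi-\nence of body text"))
```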
Thanks for this insight into how you do things like that. Of course, using one well-suited programming language (like indeed Python, or Perl, or Java) for everything is also quite a good solution; it solves the continuity problem I face with my multi-tool approach. Except that my Python skills are not sufficient (yet, I hope). Using TEI makes some things easier and others harder, I guess. In the long run, it should be quite powerful, I like to believe.
I agree that genre labels are a problem, although some of the categories are so large we may agree most of the time (drama, narrative or essay/other), and for the more fine-grained subgenres, I’m either thinking of relatively clear cases (crime fiction, even though in the late nineteenth century this is also a tricky category) or historically given labels, such as tragedy, opera, comedy, “opéra-comique” etc. And then see whether algorithmic clustering corresponds to these historical groupings or not.
Agreed. About the big categories (prose/verse, lyric/drama) we can probably come to some consensus.
And I fear Python doesn’t really make things easier. I’m still confronting the same problem you describe. Everything has to be done ad hoc.