Natural Language Morphology Queries in Perseus

1 comment
Natural language queries are now possible on Perseus under PhiloLogic. Previously, Richard had implemented searching for various parts of speech in various forms. For instance, as noted in the About page for Perseus, a search for 'pos:v*roa*' will return all instances of perfect active optative verbs in the selected corpus. Now, a search for 'form:could-I-please-have-some-perfect-active-optatives?' will return the same results. In fact, searching for 'form:perf-act-opt', 'form:perfect-active-optative', 'form:perfection-of-action-optimizations', or 'form:perfact-actovy-opts-pretty-please' will all accomplish the same task. Note that the dashes between the words are necessary: a search for plural nouns written as 'form:plural nouns' will actually search for any plural word followed by the word "nouns", which will fail. I carefully chose short forms of all the keywords, such as "impf" and "ind" for "imperfect" and "indicative", so that a search including any word starting with "ind" will match indicatives regardless of what follows the 'd'. Hopefully, there are no overlapping matches (such as using "im" to abbreviate "imperfect", which would also match "imperative"). If you do encounter any, please let me know. We could potentially post a list of acceptable abbreviations somewhere, although they are fairly straightforward, and typing the full term out is always a fail-safe method.

Basically, the modified crapser script translates a search beginning with "form:" into the corresponding "pos:" search. Using a hash of regular expressions and string matching, it simply returns the corresponding code. In the previous example, the search actually looks for "pos:....roa..". Notice that it fills the unspecified slots of the code with dots, allowing them to match anything. I implemented an alternative filler, the dash, so that when you search for something like "form:perf-act-opt-exact", you will actually be searching for "pos:----roa--" (and your search will fail, because no terms are only and exactly perfect active optative without other specifications).
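The translation step can be sketched in Python (the real crapser script is Perl, and the slot table below is a simplified guess at the encoding, so the exact dot positions may differ from what Perseus actually uses):

```python
import re

# Hypothetical 9-slot layout and abbreviation table; the real script
# defines its own hash of regular expressions.
SLOTS = [
    ("pos",    {"^verb": "v", "^noun": "n", "^adj": "a"}),
    ("person", {"^1st": "1", "^2nd": "2", "^3rd": "3"}),
    ("number", {"^sing": "s", "^plur": "p", "^dual": "d"}),
    ("tense",  {"^pres": "p", "^impf": "i", "^perf": "r", "^aor": "a", "^fut": "f"}),
    ("mood",   {"^ind": "i", "^subj": "s", "^opt": "o", "^imperat": "m", "^inf": "n"}),
    ("voice",  {"^act": "a", "^pass": "p", "^mid": "m"}),
    ("gender", {"^masc": "m", "^fem": "f", "^neut": "n"}),
    ("case",   {"^nom": "n", "^gen": "g", "^dat": "d", "^acc": "a"}),
    ("degree", {"^comp": "c", "^superl": "s"}),
]

def form_to_pos(query):
    """Translate e.g. 'form:perf-act-opt' into a dotted 'pos:' code."""
    words = query[len("form:"):].split("-")
    # A trailing '-exact' switches the filler from '.' to '-'.
    exact = bool(words) and words[-1] == "exact"
    if exact:
        words = words[:-1]
    code = ["-" if exact else "."] * len(SLOTS)
    for word in words:
        for i, (_, table) in enumerate(SLOTS):
            for pattern, letter in table.items():
                if re.match(pattern, word):
                    code[i] = letter
    return "pos:" + "".join(code)
```

Because the patterns are anchored prefixes, "perfect", "perf", and "perfection" all land in the same tense slot, exactly as described above.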

One limitation of this method of natural language querying is that it cannot match the versatility of the "pos:" searches. That is, because it selects either dots or dashes as fillers, you cannot get a mixture of them in one search. You cannot run a search such as "pos:v-.sroa---". However, this limitation will likely have little effect on the average user, and a user needing such a search can still run it using the "pos:" method. An alternative approach, using drop-down input boxes for each slot of the code, would enable the full power of the pos searches, but it would be more tedious to implement and potentially tedious to use as well. Such an input form would also require the user to know more about the encoding than the "form:" searching I implemented does. For example, a user would need to know that "verb" is required in the first slot, even if "aorist optative" makes that the only possibility, whereas searching for 'form:aorist-optative' works without the user ever needing to know that a 'v' is required in the first slot.
Read More

Encyclopédie: Similar Article Identification II

Leave a Comment
After a series of revisions described in my last post on this subject (link), I thought it might be helpful to provide an update. We have been interested in teasing out how the VSM handles small vs. large articles and in getting some sense of why various similar articles are selected. Over the weekend, I reran the vector space similarity function on 39,218 articles, taking some 29 hours. I excluded some 150 surface forms of words in a stopword list, all sequences of numbers (and Roman numerals), as well as features (in this case word stems) found in more than 1,568 or fewer than 35 articles. This last step removed features like blanch, entend, mort, and so on. In all, I removed some 600 features, leaving 10,157 features for the calculation. Here is the search form:

Headword: (e.g. tradition)
Author: (e.g. Holbach)
Classification: (e.g. Horlogerie)
English Class: (e.g. Clockmaking)
Size (words): (e.g. 250- or 250-1000)
Show Top: articles (e.g. 10 or 50)
The number of matching terms for small articles can, of course, be very small. For example, the article "Tout-Bec" (62 words) is left with four stems [amer 1|oiseau 2|ornith 1|bec 3]. The first of the most similar articles, Rhinoceros (Hist. nat. Ornith.) -- remember, only the main article here -- matches on three stems:
word               frq1     frq2
bec                 3        5
oiseau              2        2
ornith              1        1
Are these similar? Well, both very small articles refer to kinds of rare birds that are notable for their beaks, one with a very large beak and one that looks like it has two or more beaks. It is also worth noting that "ornith" (the class of knowledge) is picked up in both articles by this example. The next article down (Pipeliene) matches on:
amer                1        1
bec                 3        1
oiseau              2        2
The third most similar in this example is "Connoissance des Oiseaux par le bec & par les pattes.", a plate legend, with as you expect, lots of beaks. This matches on two stems, bec and oiseau.
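A pairwise comparison like the tables above just intersects the two articles' stem counts. A minimal Python sketch (the production code is Perl; the extra "corne" stem below is illustrative, not from the actual Rhinoceros article):

```python
def matching_stems(counts1, counts2):
    """Return shared stems with their frequencies in each article."""
    shared = sorted(set(counts1) & set(counts2))
    return {stem: (counts1[stem], counts2[stem]) for stem in shared}

# The "Tout-Bec" vs. Rhinoceros comparison from the post:
tout_bec = {"amer": 1, "oiseau": 2, "ornith": 1, "bec": 3}
rhinoceros = {"bec": 5, "oiseau": 2, "ornith": 1, "corne": 4}  # "corne" is a made-up non-matching stem
```

Calling `matching_stems(tout_bec, rhinoceros)` reproduces the three-stem table shown above.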

It seems that the size of the query article, now that I have eliminated many function words and other extraneous data, carries a significant impact. The larger the article, the more possible matches you will get (Zipf's Law applies). Longer articles will tend to be most similar to other longer articles, and shorter articles will match better to shorter ones. So, similarity would appear to be a function of the relative frequencies of common features and the lengths of the articles. We saw this in our original examination of the Encyclopédie and the Dictionnaire de Trévoux, and had built in some restrictions in terms of size, as well as comparing articles with the same first letter rather than all to all. As far as I can tell, the kind of feature pruning shown here does not have a significant impact on larger articles.

User feedback might be significant in determining just how many features and what kinds of features are required to get more interesting matches. For any pair, we could store the VSM score, the sizes, and the matching features along with the user rating of the match. That might generate some actionable data for future applications.

[Aside: In some cases, similar passages lead to possibly related plates and legends. Cadrature, for example, links to numerous plate legends dealing with clockmaking.]
Read More

Mapping Encyclopédie classes of knowledge to LDA generated topics

Leave a Comment
As was described in my previous blog entry, I've been working on comparing the results given by LDA generated topics with the classes of knowledge identified by the philosophes in the Encyclopédie. My initial experiment was to see whether, given 5,000 articles belonging to 100 classes of knowledge (50 articles per class), an LDA topic modeler would find those 100 topics. My conclusion was that it didn't find all of them, but it still found quite a few. Since then, I have played a bit more with this dataset and have come up with better results.
Since a topic modeler gives you the topic proportions per article (I just use the top three), what I tried this time was to draw up a table with each class of knowledge and the topics the topic modeler identified for that class. Before looking at this, it's important to keep in mind that the sample I used contains 50 articles per class of knowledge. Therefore, the closer the article count for the dominant topic in a class of knowledge gets to 50, the better the topic modeler has done at identifying the class of knowledge and reproducing the human classification.
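Building that table is straightforward bookkeeping; here is a sketch in Python (the names and the threshold parameter are mine, not the actual analysis code):

```python
from collections import Counter, defaultdict

def class_topic_table(assignments):
    """assignments: iterable of (class_of_knowledge, dominant_topic)
    pairs, one per article. Returns {class: Counter({topic: count})}."""
    table = defaultdict(Counter)
    for cls, topic in assignments:
        table[cls][topic] += 1
    return table

def well_identified(table, threshold=40):
    """Classes whose dominant topic covers >= threshold of the 50
    articles sampled per class."""
    return sorted(cls for cls, counts in table.items()
                  if counts.most_common(1)[0][1] >= threshold)
```

With a threshold of 40, `well_identified` picks out the classes the modeler reproduced with a high level of accuracy.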
Of course, the classification of articles in the Encyclopédie can be at times a little puzzling. The articles were written by a large number of people and therefore the classification is not always consistent. With that in mind, one should not expect to get perfect matches using a topic modeler. Moreover, since the topic modeler will assume that each article is about N number of topics, the calculation might be further off.
For my experiment, I settled on 107 topics, of which I eliminated 7 that were effectively stopword lists. Looking at the results, there are 41 classes of knowledge in which 40 or more articles are grouped under the same LDA topic. This means that 41% of the classes of knowledge were identified with a high level of accuracy. If we count topics that have more than 25 articles matching the same class of knowledge, we get up to 83 classes (or 83%).
Looking at those results, there are strange flaws: classes such as physique and divination don't seem to be identified at all. This might be due to a miscalculation, but I have yet to figure out what it could be. Highly specialized classes, such as corroyerie, poésie, or astronomie, get excellent matches, which is to be expected.
This experiment also gives us an idea of what percentage of LDA topics should be treated as stopword lists: between 5 and 10% of the topics should be discarded when using an LDA classifier.
Finally, we should remember that LDA generated topics do not systematically match human identified topics. An unsupervised model is bound to give different results; it would be interesting to see how well supervised LDA (sLDA) would do in our particular test case.

Read More

Index Design Notes 1: PhiloLogic Index Overview

Leave a Comment
I've been playing around with some Perl code in response to several questions about the structure of PhiloLogic's main word index--I'll post it soon, but in the meantime, I thought I'd try to give a conceptual overview of how the index works. As you may know, PhiloLogic's main index data structure is a hash table supporting O(1) lookup of any given keyword. You may also know that PhiloLogic only stores integers in the index: all text objects are represented as hierarchical addresses, something like a normalized, fixed-width XPointer.

Let's say we can represent the position of some occurrence of the word "cat" as
0 1 2 -1 1 12 7 135556 56
which could be interpreted as
document 0,
book 1,
chapter 2,
section -1 (i.e., no section),
paragraph 1,
sentence 12,
word 7,
byte 135556,
page 56, for example.

A structured, positional index allows us to evaluate phrase queries, positional queries, or metadata queries very efficiently. Unfortunately, storing each of these 9 numbers as a 32-bit integer would take 36 bytes of disk space for every occurrence of the word. In contrast, it's actually possible to encode all 9 of the above numbers in just 38 bits if we store them efficiently--roughly an 87% saving. The document field has the value 0, which we can store in a single bit, whereas the byte position, our most expensive field, can be stored in just 18 bits. The difficulty is that the simple array of integers becomes a single long bit string stored in a hash. First we encode each number in binary (low-order bit first), like so
0 1 01 11 1 0011 111 001000011000100001 000111

but this comes to only 38 bits, so we pad it with 2 extra bits to get an even byte alignment (40 bits, or 5 bytes), and then we can store it in our hash table under "cat".
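In Python (PhiloLogic's own packing code is generated C), the scheme might be sketched as follows, with each field written low-order bit first and the widths taken from the example:

```python
def pack_fields(values, widths):
    """Pack each value into its bit width (low-order bit first) and
    pad the result to a whole number of bytes."""
    bits = []
    for value, width in zip(values, widths):
        value &= (1 << width) - 1      # e.g. -1 in 2 bits -> '11'
        bits.append("".join(str((value >> i) & 1) for i in range(width)))
    packed = "".join(bits)
    padding = (-len(packed)) % 8       # bits needed for byte alignment
    return packed + "0" * padding, padding

# The "cat" occurrence from the example above:
packed, padding = pack_fields(
    [0, 1, 2, -1, 1, 12, 7, 135556, 56],
    [1, 1, 2, 2, 1, 4, 3, 18, 6])
```

Masking with `(1 << width) - 1` is what lets the -1 "no section" value be stored as all ones in its 2-bit slot.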

Now, suppose that we use something like this format to index a set of small documents with 10,000 words total. We can expect, among other things, a handful of occurrences of "cat", and probably somewhere around a few hundred occurrences of the word "the". In a GDBM table, duplicate keywords aren't permitted--there can be exactly one record for "cat". For a database this size, it would be feasible to append every occurrence into a single long bit string. Let's say our text structures require 50 bits to encode, and that we have 5 occurrences of "cat". We look up "cat" in GDBM and get a packed bit string 32 bytes, or 256 bits, long. We can divide that by the size of a single occurrence, so we know that we have 5 occurrences and 6 bits of padding.

"The", on the other hand, would take at least a few kilobytes, maybe more. 1 or 2K of memory is quite cheap on a modern machine, but as your database scales into the millions of words, you could have hundreds of thousands, even millions, of occurrences of the most frequent words. At some point, you will certainly not want to load megabytes of data into memory for each keyword lookup. Indeed, in a search for "the cat", you'd prefer not to read every occurrence of "the" in the first place.

Since PhiloLogic currently doesn't support updating a live database, and all word occurrences are kept in sorted order, it's relatively easy to devise an on-disk, cache-friendly data structure that meets our requirements. Let's divide the word occurrences into 2-kilobyte blocks and keep track of the first position in each block. Then we can rapidly skip hundreds of occurrences of a frequent word, like "the", when we know that the next occurrence of "cat" isn't in the same document!
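Here is a sketch of that idea in Python, blocking by occurrence count rather than by the 2-kilobyte byte blocks PhiloLogic actually uses:

```python
import bisect

class BlockedPostings:
    """Sorted word positions split into blocks, with each block's
    first position kept as an in-memory header for fast skipping."""
    def __init__(self, positions, block_size=512):
        self.blocks = [positions[i:i + block_size]
                       for i in range(0, len(positions), block_size)]
        self.heads = [block[0] for block in self.blocks]

    def next_at_or_after(self, target):
        """First occurrence >= target, skipping whole blocks whose
        headers show they start after the last candidate block."""
        start = max(bisect.bisect_right(self.heads, target) - 1, 0)
        for block in self.blocks[start:]:
            for position in block:
                if position >= target:
                    return position
        return None
```

In a "the cat" search, only the one block of "the" postings near the next "cat" position ever needs to be read and decompressed.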

Of course, to perform this optimization, we would need to know the frequency of all terms in a query before we scan through them, so we'll have to add that information to the main hash table. Finally, we'd prefer not to pay the overhead of an additional disk seek for low-frequency words, so we'll need a flag in each keyword entry to signal whether we have:
1) a low-frequency word, with all occurrences stored inline,
or
2) a high-frequency word, stored in the block tree.

Just like the actual positional parameters, the frequencies and tree headers can also be compressed to an optimal size on a per-database basis. In PhiloLogic, this is stored in databasedir/src/dbspecs.h, a C header file generated at the same time as the index and then compiled into a custom compression/decompression module for each loaded database, which the search engine can dynamically load and unload at run time.

In a later post, I'll provide some Perl code to unpack the indices, and try to think about what a clean search API would look like.
Read More

Encyclopédie: Similar Article Identification

6 comments
The Vector Space Model (VSM) is a classic approach to information retrieval. We integrated this as a standard function in PhiloMine and have used it for a number of specific research projects, such as identifying borrowings from the Dictionnaire de Trévoux in the Encyclopédie, which is described in our forthcoming paper "Plundering Philosophers" and related talks[1]. While originally developed by Gerard Salton[2] in 1975 as a model for classic information retrieval, where a user submits a query and gets results in a ranked relevancy list, the algorithm is also very useful for identifying similar blocks of text, such as encyclopedia articles or other delimited objects. Indeed, this kind of use of the VSM was proposed by Salton and Singhal[3] in a paper presented months before Salton's death. They demonstrated the use of VSM to produce links between parts of documents, forming a type of automatic hypertext:
The capability of generating weighted vectors for arbitrary texts also makes it possible to decompose individual documents into pieces and explore the relationships between these text pieces. [...] Such insights can be used for picking only the "good" parts of the document to be presented to the reader.
Salton and Singhal further argued that manual link creation would be impractical for huge amounts of text, but these conclusions may have had limited influence given the general interest at that time in human generated hypertext links on the WWW.

Based on earlier work using PhiloMine, we have seen a number of "interesting" -- and at times unexpected -- connections between articles in the Encyclopédie, often drawing connections between previously unrelated articles, if by unrelated we mean having different authors, classes of knowledge and few cross-references (renvois) between them. One might consider this kind of similarity measure between articles as a kind of intertextual discovery tool, where the system would propose articles possibly related to a specific article.

The Vector Space Model functions by comparing a query vector to all of the vectors in a corpus, making it an expensive calculation that is not always suitable for real-time use. In this experiment, I recast the VSM implementation in PhiloMine as a batch job to generate a database of the 20 most similar articles for each of 27,753 Encyclopédie articles (those with 100 or more words). To do this, I pruned features (word stems) found in more than 8,325 or fewer than 41 articles, resulting in a vector of 10,431 features. I used a standard French word stemmer to reduce lexical variation and a log normalization function to handle variations in article sizes. The task took about 17 hours to run.
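In outline, the calculation looks like this: a Python sketch with log-normalized weights and cosine similarity (the PhiloMine implementation is Perl, and the stemming and pruning steps are omitted here):

```python
import math
from collections import Counter

def log_norm_vector(stems):
    """Weight each stem as 1 + log(frequency) to damp article length."""
    return {s: 1.0 + math.log(n) for s, n in Counter(stems).items()}

def cosine(v1, v2):
    """Cosine similarity of two sparse vectors; 1.0 for an exact match."""
    dot = sum(w * v2[s] for s, w in v1.items() if s in v2)
    norm1 = math.sqrt(sum(w * w for w in v1.values()))
    norm2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

def most_similar(query_id, vectors, top=20):
    """Compare the query article's vector against every other vector."""
    query = vectors[query_id]
    scored = [(cosine(query, vec), aid)
              for aid, vec in vectors.items() if aid != query_id]
    return sorted(scored, reverse=True)[:top]
```

Since every query scans all vectors, running this for each of 27,753 articles is exactly the kind of expensive all-against-all batch job described above.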

Update (December 7): I have replaced the VSM build above with the same procedure run on 39,200 articles -- all articles with 60 or more words -- which took about 29 hours. I pruned features found in more than 11,200 documents or fewer than 50, leaving 9,710 features. This may change some results by adding more small articles. Note that this is about as large a VSM task as can be performed in memory using Perl hashes, since anything larger runs out of memory. If we want to go larger, we would probably store the vectors on disk and TIE them to Perl hashes.

The results for a query show the 20 most similar articles, ranked by similarity score, where an exact match equals 1. For example, the article OUESSANT (Modern Geography) -- based on 27,000 articles -- is related to the articles VERTU [0.274], Luxe [0.267], ECONOMIE ou OECONOMIE [0.265], POPULATION [0.263], CHRISTIANISME [0.261], SOCIÉTÉ [0.256], AVERTISSEMENT DES ÉDITEURS (suite) [0.255], MANICHÉISME [0.254], CYNIQUE, secte de philosophes anciens [0.254], Gout [0.250], EDUCATION [0.248], and so on. This reflects the discussion of the moral conditions of the inhabitants of the small island off the coast of Brittany.

You can give it a try using this form (again now for 39,200 articles):

Headword: (e.g. tradition)
Author: (e.g. Holbach)
Classification: (e.g. Horlogerie)
English Class: (e.g. Clockmaking)
Size (words): (e.g. 250- or 250-1000)
Show Top: articles (e.g. 10 or 50)

[Dec 9: I added word count info for each article. You can restrict searches to articles in ranges of size. Also, now storing 50 top matches, which you can limit. Showing matching articles which are smaller than source article. Dec 10: added function to display matching stems for any pairwise comparison for inspection.]

There are a number of other options that I might add to the VSM calculations, including TF-IDF as an alternative normalization weighting scheme and virtual normalization to further reduce lexical variation and improve the performance of the stemming algorithm. I have also thought of using Latent Semantic Analysis as another way to handle similarity weighting, but given that we have many query terms, it is not clear that LSA would help all that much.

In a real production environment, I think we will add a "similar article link" from articles in the Encyclopédie. We have talked about having users rank the quality of the similarity performance. The scores assigned are somewhat helpful in ranking, but not in assessing an absolute number, since they can vary by the size of the input article. VSM is an unsupervised learning model. It is not clear to me that we could integrate user evaluations in any systematic fashion, but this is certainly an interesting subject of further consideration.

As always, please let me know what you think. I have a couple of general queries. I have used main and sub articles (as well as plate legends, etc.) as units of similarity calculation. Should I use main entries only? I also limited this to articles with more than 100 words. At 50 words, we have some 43,000 articles. Should I do this for a full implementation?

References

[1] See Timothy Allen, Stéphane Douard, Charles Cooney, Russell Horton, Robert Morrissey, Mark Olsen, Glenn Roe, and Robert Voyer, "Plundering Philosophers: Identifying Sources of the Encyclopédie", Journal of the Association for History and Computing (forthcoming 2009). Also, see Ceglowski, Maciej. 2003: "Building a Vector Space Search Engine in Perl", Perl.com [http://www.perl.com/pub/a/2003/02/19/engine.html].

[2] Salton, G., A. Wong, and C. S. Yang. 1975: "A Vector Space Model for Automatic Indexing," Communications of the ACM 18/11: 613-620.

[3] Singhal, A. and Salton, G. 1995: "Automatic Text Browsing Using Vector Space Model" in Proceedings of the Dual-Use Technologies and Applications Conference 318-324.
Read More

Frequencies in the Greek and Latin texts

3 comments
Earlier this year Mark built a frequency query for the French texts (affectionately named wordcount.pl).
Kristin has now implemented this for our Greek and Latin texts. If you wonder what's new about this: word counts for individual documents have always been available in PhiloLogic loads, but the difference here is that you can see frequencies over the entire corpus, or a subset of works and authors.

You can find the forms here:
http://perseus.uchicago.edu/LatinFrequency.html
http://perseus.uchicago.edu/GreekFrequency.html

Update: Forms moved to the 'production site', perseus.uchicago.edu. You can now specify genre as well. Stay tuned for further stats, meant to provide a friendly reminder of Zipf's Law.

Note: the counts are raw frequency counts, without lemmatization.
I have edited the search form a tiny bit - let me know if you encounter any problems.
Read More

Do LDA generated topics match human identified topics?

1 comment
I've been experimenting lately with how well LDA generated topics match the Encyclopédie classes of knowledge. The experiment was conducted as follows:
- I chose 100 classes of knowledge in the Encyclopédie, and picked 50 articles of each.
- I then ran a first LDA topic trainer choosing 100 topics.
- I then proceeded to identify each generated topic and name it after the corresponding Encyclopédie class of knowledge.
- My plan was then to look at the topic proportions per article and see if the top topic would correspond to its class of knowledge. Would the computer manage to classify the articles in the same way the encyclopedists had?
I was not able to get that far with only 100 topics in my first LDA run. This is because LDA will always generate a couple of topics which aren't really topics at all, but just lists of very common words that happen to occur in the same documents. One should always disregard these topics and focus on the others. This meant I had to add a few more topics to my LDA run in order to get 100 identifiable topics. I settled on 103 topics; I found 3 distributions of words which were unidentifiable, so I dismissed them.
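Spotting these throwaway topics can be done mechanically. Here is a simple heuristic sketch in Python (the cutoff and the function-word list are illustrative assumptions, not what I actually used):

```python
def flag_stopword_topics(topics, common_words, cutoff=0.5):
    """topics: {topic_id: list of top words}. Flag topics whose top
    words are mostly very common function words."""
    common = set(common_words)
    return [tid for tid, words in sorted(topics.items())
            if len(common & set(words)) / len(words) >= cutoff]
```

Anything flagged this way would be discarded before matching topics against classes of knowledge.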
The results show that LDA topics and the Encyclopédie classes of knowledge do not match overall (see link to results below). Some do very well, like Artillerie, for which the corresponding distribution of words is:
canon piece poudre artillerie boulet fusil ligne calibre mortier bombe feu charge culasse livre met chambre pouce lumiere roue affut diametre coup batterie levier bouche ame flasque balle tourillon tire
Other distributions of words make sense in themselves but do not match any of the original classes of knowledge. For instance, there is no topic for 'teinture' or 'peinture'. What we get instead is a mixture of both classes of knowledge, which could be identified as colors:
couleur rouge blanc bleu tableau jaune verd peinture ombre teinture noir toile tableaux nuance papier etoffe bien teint peintre pinceau trait teinturier melange veut figure teindre feuille beau sert colle
Now, the topic modeler is not wrong here: it's telling us that these words tend to occur together, which is true. Another significant example is the one combining 'Boutonnier', 'Soie', and 'Rubanier':
soie fil rouet corde brin tour main bouton gauche longueur boutonnier droite attache bout fils tourner sert molette noeud cordon doigt piece emerillon moule broche ouvrage ruban rochet branche aiguille
What we get here is a topic about the art of making clothes, which is more general than 'Boutonnier' or 'Rubanier'.
For this to actually work, the philosophes would have had to be extremely rigorous in their choice of vocabulary, because that is what LDA expects. Another problem is that LDA considers each document to be a mixture of topics, not the product of a single topic. So if a document is exclusively focused on one topic, LDA will still try to extract a certain number of topics from it, in which case you will get topics that are mere subdivisions of the class of knowledge in that document. The reason our experiment broke down could be that the LDA topic trainer created new subdivisions of some classes of knowledge, or regrouped several classes of knowledge. These are all valid as topics, but they do not correspond to the human identified topics.

Link to results
Read More

Section Highlighting in Philologic

1 comment
In many of the Perseus texts currently loaded under PhiloLogic, the section labels would overlap and be unreadable. These labels come from the milestone tags in the XML text and are placed along the edge of the text. One particularly problematic text in this regard was the New Testament, since its sections are verses and are thus often very small stretches of text.

In order to fix the overlap, I wrote a little JavaScript to hide any label that would be placed in the same position as a previous one, and added a function to recalculate this when the window is resized. My main function is fairly simple:

function killOverlap() {
    var lastOffset = 0;
    $(".mstonecustom").each(function (i) {
        if (this.offsetTop == lastOffset) {
            // Same vertical position as the previous label: demote it.
            this.className = "mstonen2";
        } else {
            lastOffset = this.offsetTop;
        }
    });
}

I also added a function which highlights a section when you hover over its milestone label along the side of the text. This seems useful, since it often helps to know where a section starts and ends. This was a slightly more complex problem: I had to alter the citequery3.pl script to add a span tag and some ids in order to get the JavaScript to work. The JavaScript itself was then fairly simple:

function highlight() {
    $(".mstonecustom").hover(
        function () {
            var myid = jq("text" + $(this).attr('id'));
            $("w", myid).css({"font-weight": "bolder"});
        },
        function () {
            var myid = jq("text" + $(this).attr('id'));
            $("w", myid).css({"font-weight": "normal"});
        });
}

In order for it to work though, you have to alter the citequery3.pl script with this:

my $spanid = $citepoints{$offsets[$offset]};
$spanid =~ s/.*\.([0-9]+)\.([0-9]+)$/a$1b$2/;
#...
$tempstring =~ s/(^<[^>]+>)/$1<span class="mstonecustom" id="$spanid">$citepoints{$offsets[$offset]}<\/span>/;
#... {
$tempstring =~ s/<span class="mstonecustom" id="$spanid">$citepoints{$offsets[$offset]}<\/span>//;}

$milesubstrings[$offset] = "<span class=\"" . $citeunits{$offsets[$offset]} . "\" id=\"text\">" . $tempstring . "<\/span>";

That's about it. It may come in useful again someday. For an example, take a look at this.
Read More

Towards PhiloLogic4

Leave a Comment
Earlier this year I wrote a long discussion paper called "Renovating PhiloLogic" which provided an overview of the system architecture, a frank review of the strengths and (many) failings of the current implementation of the PhiloLogic 3 series, and proposed a general design model for what would effectively be a complete reimplementation of the system, retaining only selected portions of the existing code base. While we are still discussing this, often in great detail, a few general objectives for any future renovation have emerged, including:
  • service oriented architecture;
  • release of new system in perl module libraries;
  • multiple database query support, and,
  • options for advanced or extended indexing models.
I will be putting together a public version of this discussion draft in the near future and will blog it when I have something ready.

Before sallying forth to start working on a PhiloLogic4, there are a number of preliminary steps that Richard and I agree are required in order to 1) support the existing PhiloLogic3 series, and 2) clear the existing (messy) code base of some of the most egregious sections of the system, most notably the loader. Some of these are simply housekeeping and updates, some are patches and bug fixes, and others are clean-ups which should streamline the current system and help in any redevelopment.

We will start by retasking one of our current machines, a 32 bit OS-X installation, to be the primary PhiloLogic development machine. We will also get the Linux branch on a 32 bit Linux machine (flavor to be determined). There is a known 64 bit installation problem which we will address at the end of this initial process. When we reach the right step, we will install it all on 64 bit machines and fix it then, hopefully with much less effort on a streamlined version, while releasing upgraded 32 bit versions on the way. The other element for our consideration is the degree to which we can merge the OS-X and Linux branches of the system. Right now, we have two completely distinct branches. It would be much better to have one, which we think may be accomplished in a couple of different ways.

We are currently thinking of 4 distinct steps, which should each result in new maintenance releases of PhiloLogic3.

Step One

Apply the most recent OS-X Leopard patch kit to both the OS-X and Linux branches, as required and feasible. This is the patch kit that Richard and I assembled for the migration to our new servers; it has some nifty little extensions. We will also be updating the PhiloLogic code release site (Google Code) and retooling the new PhiloLogic site, which will then be referred to from the existing location (philologic.uchicago.edu). Maintenance release when done. [MVO]

Step Two

The PhiloLogic loader currently uses a GNU Makefile scheme to load databases. This made good sense many years ago, when loads could take many hours (or days), but it is probably no longer needed. There are also many places where we use various utilities (sed, gawk, gzip, etc.) which add complications and make the entire scheme more brittle. Our current thinking is to fold all of the Makefile functions into a revised version of philoload, but we may find a better way to proceed once we get into it. We're planning a maintenance release when this is done. [MVO]

Step Three

The current PhiloLogic loader performs a number of C compiles, many of which are no longer needed. For example, the system still compiles the search2 binaries, which were left in PhiloLogic3 for backwards compatibility. We do need to keep the ability to generate the correct pack and unpack libraries used by search3. Once we have cleared out all unnecessary C compiles, we will investigate a couple of known bugs in search3 and attempt to resolve them. Again, once done, we would do a maintenance release. [RW and MVO]

Step Four

As noted above, some users have reported a 64-bit compile problem on either installation or load. Once we have the loader streamlined, eliminating as many of the old C compiles as possible, we will investigate this problem. We're hoping that it will be easily remedied and, even better, could be resolved in a combined release that merges the current OS-X and Linux branches. This would be the terminal release of the PhiloLogic3 series; any future releases would be for bug fixes only.

We're hoping that these steps will result in a stable terminal release of the PhiloLogic3 series, which will be easier to install and use. It should also result in significant streamlining, which will help in any future PhiloLogic renovation or a new PhiloLogic4 series.

This is an initial plan, so please do post your comments, suggestions, and complaints.
Read More

Encyclopédie under KinoSearch

3 comments
One of the things that I have wanted to do for a while is to examine implementations of Lucene, both as a search tool to complement PhiloLogic and possibly as a model for future PhiloLogic renovations. Late this summer, Clovis identified a particularly nice open-source Perl implementation of Lucene called KinoSearch. This looks like it will fit both bills very nicely indeed. As a little experiment, I loaded 73,000 articles (and other objects) from the Encyclopédie and cooked up a super simple query script. This allows you to type in query words and get links to articles sorted by their relevancy to your query (the italicized number next to the headword). At this time, I am limiting results to the top 100 "hits". Words should be lower case, accents are required, and words should be separated by spaces. Try it:


Here are a couple of examples which you can block copy in:
artisan laboureur ouvrier paysan
malade symptome douleur estomac
peuple pays nation ancien république décadence

The first thing to notice is search speed. Lucene is known to be robust, massively scalable, and fast, and the KinoSearch implementation is certainly very fast. A six-term search returns in 0.35 seconds of real time and less than a tenth of a second of system time, measured with time on the command line. I did not time the indexing run, but it took roughly 10 minutes. [Addition: by reading 147 TEI files rather than 77,000 split files, the indexing time for the Encyclopédie falls to (using time) real 2m45.9s, user 2m33.8s, sys 0m11.1s.]


The KinoSearch developer, Marvin Humphrey, has a splendid slide show outlining how it works, with specific reference to the kinds of parameters, such as stemmers and stopwords, that one needs to consider, as well as an overview of the indexing scheme. Clovis and I thought this might be the easiest way to begin working with Lucene: since it is a Perl module with C components, it is easy to install and get running. Given the performance and utility of KinoSearch, I suspect that we will be using it extensively for projects where ranked relevancy results are of interest. These might include structured texts, such as newspaper and encyclopedia articles, and possibly large collections of uncorrected OCR materials which may not be suitable for the text analysis applications supported by PhiloLogic. Also, on first review, the code base is very nicely designed and, since it has many of the same kinds of functions as PhiloLogic, strikes me as a really fine model of how we might want to renovate PhiloLogic.

For this experiment, I took the articles as individual documents in TEI, which Clovis had prepared for other work. For each article, I grabbed the headword and PhiloLogic document id, which are loaded as fielded data. The rest of the article is stripped of all encoding and loaded in. It would be perfectly simple to read the data from our normal TEI files instead. We could simply add a script that would load source data from a PhiloLogic database build, providing a different kind of search, which would need its own search box/form.

I have not played at all with parameters, and I can imagine that we would want to perform some functions, such as simple normalization rules, on input, since it uses a stemmer package also by Marvin Humphrey. Please email me, post comments, or add a blog entry here if you see problems, particularly search oddities, have ideas about other use cases, or more general interface notions. I will be writing a more generalized loader and query script -- with paging, numbers of hits per page, filtering by minimum relevancy scores, and a version of the PhiloLogic object fetch that would try to highlight matching terms -- and moving that over to our main servers.
Read More

back to comparing similar documents

Leave a Comment
I mentioned a little while ago some work I did on comparing one document with the rest of the corpus it belongs to (the examples I used in that blog post will no longer give the same results, and the results might not be as good; I haven't optimized the new code for the Encyclopédie yet). The idea behind it was to use the topic proportions for each article generated from LDA, and come up with a set of calculations to decide which document(s) were closest to the original document. The reason I'm mentioning it here once more is that I've been through that code again, cleaned it up quite a bit, improved its performance, and tweaked the calculations. Basically, I made it usable for people other than myself. Last time I built a basic search form to use with Encyclopédie articles. This time I'm going to show the command line version, which has a couple more options than the web version.
In the web version, I was using both the top three topics in each document and their individual proportions within that document. For instance, Document A would have topics 1, 2, and 3 as its main topics: Topic 1 with a proportion of 0.36, Topic 2 with 0.12, and Topic 3 with 0.09. In the command line version, there's the option of using only the topics, without the proportions. The order of importance of each topic is of course still respected. Depending on the corpus you're looking at, you might want to use one model rather than the other; they do give different results. One could of course tweak this some more and decide to take only the proportion of the most prominent topic, thereby giving it more importance. There is definitely room for improvement.
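A minimal sketch of the two scoring modes described above (all names hypothetical; the actual calculations in the script are certainly more refined than this):

```python
def topic_signature(doc_topics, k=3):
    """Top-k (topic, proportion) pairs for a document, most prominent first."""
    return sorted(doc_topics.items(), key=lambda t: -t[1])[:k]

def similarity(doc_a, doc_b, use_proportions=True, k=3):
    """Score the overlap between two documents' top-k LDA topics.

    With use_proportions, each shared topic contributes the smaller of its
    two proportions; without, shared topics are weighted by rank alone.
    """
    sig_a = topic_signature(doc_a, k)
    sig_b = topic_signature(doc_b, k)
    props_b = dict(sig_b)
    ranks_b = {topic: rank for rank, (topic, _) in enumerate(sig_b)}
    score = 0.0
    for rank_a, (topic, prop_a) in enumerate(sig_a):
        if topic in ranks_b:
            if use_proportions:
                score += min(prop_a, props_b[topic])
            else:
                # shared topics count more when both documents rank them highly
                score += 1.0 / (1 + rank_a + ranks_b[topic])
    return score

# Document A from the example: topics 1, 2, 3 at 0.36, 0.12, 0.09
doc_a = {1: 0.36, 2: 0.12, 3: 0.09, 4: 0.02}
doc_b = {1: 0.40, 3: 0.20, 7: 0.10}
```

Ranking every other document in the corpus by a score like this against the query document, and dropping those below a tolerance threshold, gives the behavior described.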
There was also another option that was left out of the web version. By default, I set a tolerance level, that is, the minimum score a document needs in order to be returned as a result of the query. In the command line version, I made it possible to adjust this tolerance in order to get more or fewer results. This option currently works only with the refined model (the one with topic proportions). The code is currently living in
robespierre:/Users/clovis/LDA_scripts/
It's called compare_to_all.pl. There's some documentation in the header to explain how to use it. It's fairly simple. I might do some more work on it, and will update the script accordingly.
There are other applications of this script besides using it on a corpus made of well defined documents. One could very well imagine applying it to a corpus subdivided into chunks of text using a text segmentation algorithm. One could then try to find passages on the same topic(s) using a combination of LDA and this script. The Archives parlementaires could be a good test case.
Another option would be to run every document of a corpus against the whole corpus and store all the results in a SQL database. This would allow having a corpus where each document can be linked to various others according to the mixture of topics they are made of.
I will try to give more concrete results some time soon.
Read More

Supervised LDA: Preliminary Results on Homer

Leave a Comment
While Clovis has been running LDA tests on Encyclopédie texts using the Mallet code, I have been running some tests using the sLDA algorithm. After a few minor glitches, Richard and I managed to get the sLDA code, written by Chong Wang and David Blei, from Blei's website up and running.

Unlike LDA, sLDA (Supervised Latent Dirichlet Allocation) requires a training set of documents paired with corresponding class labels or responses. As Blei suggests, these can be categories, responses, ratings, counts, or many other things. In my experiments on Homeric texts, I used only two classes, corresponding to Homer's two major works: the Iliad and the Odyssey. Akin to LDA, topics are inferred from the given texts and a model is made of the data. This model, having seen the class labels of the texts it was trained on, can then be used to infer the class labels of previously unseen data.

For my experiments, I modified the XML versions of the Homer texts that we have on hand using a few simple Perl scripts. Getting the XML transformed into an acceptable format for Wang's code required a bit of finagling, but was not too terrible. My scripts first split the XML into books (the 24 books of the Iliad and likewise for the Odyssey), then stripped the XML tags from the text. Setting aside four books from each text for the inference step, I took the rest of the books and output the corresponding data file necessary as input to the algorithm (data format here).
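For reference, Blei's implementations read a bag-of-words format in which each document line gives the number of distinct terms followed by id:count pairs (in the sLDA release the class labels are supplied separately; see the data format link above). A sketch of producing such lines from stripped text, with all function names my own:

```python
from collections import Counter

def build_corpus(docs):
    """Assign each word an integer id and emit one LDA-C-style line per
    document: '<num_distinct_terms> <id>:<count> <id>:<count> ...'."""
    vocab = {}
    lines = []
    for text in docs:
        counts = Counter(text.split())
        pairs = []
        for word, n in counts.items():
            word_id = vocab.setdefault(word, len(vocab))
            pairs.append(f"{word_id}:{n}")
        lines.append(f"{len(counts)} " + " ".join(pairs))
    return vocab, lines

docs = ["sing goddess the wrath", "the wrath of achilles"]
vocab, lines = build_corpus(docs)
```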

I played around a bit with leaving out words that occurred extremely frequently or extremely rarely. For the results I am posting here, the English vocabulary was vast, and I cut it down to words occurring between 10 and 60 times. This probably cuts it down too much, though, so it would be good to try some variations. Richard has also suggested cutting out the proper nouns before running sLDA in order to focus more on the semantic topics. For the Greek vocabulary, I used the words occurring between 3 and 100 times, after stripping out the accents.
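The frequency trimming can be sketched like this (a hypothetical helper, not the actual script):

```python
from collections import Counter

def trim_vocabulary(docs, low=10, high=60):
    """Drop words whose total corpus frequency falls outside [low, high]."""
    totals = Counter(w for text in docs for w in text.split())
    keep = {w for w, n in totals.items() if low <= n <= high}
    return [" ".join(w for w in text.split() if w in keep) for text in docs]

# Toy corpus with thresholds scaled down to match its size
trimmed = trim_vocabulary(["a a b c", "a b d"], low=2, high=3)
```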

Running the inference part of sLDA on the 8 books that I had set aside seemed to work quite well: all 8 were correctly labeled as belonging to either the Iliad or the Odyssey. In a reverse run, the inference again achieved 100 percent accuracy, labeling the 40 books after having been trained on only the 8 remaining books.

The raw results of the trials give a matrix of betas, with a column for each word and a row for each topic. These betas thus give a log-based weighting of each word in each topic. Following this are the etas, with a column for each topic and a row for each class; the etas give the weightings of each topic in each class, as far as I understand it. Richard and I slightly altered the sLDA code to output an eta for each class, rather than one fewer than the number of classes as it was giving us. As far as we understand the algorithm as presented in Blei's paper, it should give an eta for each class. Our modification didn't seem to break anything, so we are assuming that it worked, as the results look reasonable. Using the final model data, I have a Perl script that outputs the top words in each topic along with the top topics in each class. These are the results that I am giving below.
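Reading off the top words per topic from the beta matrix amounts to sorting each row of log weights; a toy sketch (the real script handles the etas the same way, per class):

```python
def top_words(betas, vocab, n=5):
    """For each topic (a row of log word weights), return the n
    highest-weighted words, best first."""
    result = []
    for row in betas:
        ranked = sorted(range(len(row)), key=lambda j: -row[j])
        result.append([vocab[j] for j in ranked[:n]])
    return result

# One toy topic over a three-word vocabulary (weights are log-scale)
betas = [[-1.0, -3.0, -2.0]]
vocab = ["ship", "wine", "sea"]
```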


Results of my sLDA Experiments on Homer:

English Text: 10 Topics
Greek Text: 10 Topics

Also, samples of the output from Blei and Wang's code, corresponding to the English Text with 100 topics:

Final Model: gives the betas and the etas which I used to output my results
Likelihood: the likelihood of these documents, given the model
Gammas
Word-assignments

Inferred Labels: Iliad has label '0', Odyssey has label '1'.
Inferred Likelihood: the likelihood of the previously unseen texts
Inferred Gammas

I have not played around much with the gammas, but they seem to give a weighting of each topic in each document. Thus you could figure out for which book of the Iliad or the Odyssey a specific topic was the most prevalent. It would be interesting to see if this correctly pinpoints which book the cyclops appears in, for instance, as this is a fairly easily identifiable topic in most of the trials.


Read More

Encyclopédie Renvois Search/Linker

Leave a Comment
During the summer (2009), a user (UofC PhD, tenured elsewhere) wrote to ask if there was any way to search the Encyclopédie and "generate a list of all articles that cross-reference a given article". We went back and forth a bit, and I slapped a little toy together and let him play with it, to which his reply was "Oh, this is cool! Five minutes of playing with the search engine and I can tell you it shows fun stuff...". This is, of course, an excellent suggestion which we have talked about in the past, usually in the context of visualizing relationships of articles in various ways. At the highest level, visualizing the relationships of the renvois is what Gilles and I attempted to do in our general "cartography paper"[1] and, more recently, Robert and Glenn (et al.) tried, in a radically different way, to do in their work on "centroids"[2].

The current implementation of the Encyclopédie under PhiloLogic will allow users to follow renvois links (within operational limits to be outlined below), but does not support searching and navigating the renvois in any kind of systematic fashion. Since this is something I think warrants further consideration, I thought it might be helpful to document this toy, give some examples, let folks play with it, outline some of the current issues, and conclude with some ideas about what might be done going forward.

To construct this toy, I wrote a recognizer to extract metadata for each article in the Encyclopédie which has one or more renvois. As part of the original development of the Encyclopédie, each cross-reference was automatically detected from certain typographic and lexical clues. This resulted in roughly 61,000 cross-references; accordingly, the extracted database has 61,000 records. I loaded these into a simple MySQL database and used a standard script to support searching and reporting. The search parameters may include article headwords, authors, normalized and English classes of knowledge, as well as the term(s) being cross-referenced. For example, there are 39 cross-referenced article pairs for the headword estomac. As you can see from the output, I'm listing the headword, author, classes of knowledge, and the cross-referenced term. You can get the article of the cross-referenced term or the cross-references in that article. Thus, the second example shows the link to Digestion:

ESTOMAC, ventriculus (Tarin: Anatomie, Anatomy ) ==> Digestion || renvois
[The renvois of Digestion find 56 article pairs, including one to intestins]
DIGESTION (Venel: Economie animale, Animal economy ) ==> Intestins || renvois
Intestins (unknown: Anatomie, Anatomy ) ==> Chyle || renvois


and so on ==>lymphe==>sang==>ad nauseam. No, there is no ad nauseam, just how you might feel after going round and round.
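The backing store for a toy like this is quite simple; the sketch below uses SQLite as a stand-in for the actual MySQL database, and the column names are my own invention:

```python
import sqlite3

# Hypothetical schema: one row per (article, cross-referenced term) pair
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE renvois (
    headword TEXT, author TEXT, classe TEXT, eng_class TEXT, target TEXT)""")
rows = [
    ("ESTOMAC", "Tarin", "Anatomie", "Anatomy", "Digestion"),
    ("ESTOMAC", "Tarin", "Anatomie", "Anatomy", "Chyle"),
    ("DIGESTION", "Venel", "Economie animale", "Animal economy", "Intestins"),
]
con.executemany("INSERT INTO renvois VALUES (?,?,?,?,?)", rows)

# All cross-references leaving a given headword
hits = con.execute(
    "SELECT target FROM renvois WHERE headword = ?", ("ESTOMAC",)).fetchall()
```

Searching on any of the other fields (author, class of knowledge, target term) is just a different WHERE clause on the same table.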

Now, there are problems, but please go ahead and play with this now using the submit form, as long as you promise to come back, read through the rest of this, and let me know about any other problems.

Problems

As noted above, the renvois were identified automatically, and as with most of these things, it worked reasonably well. But you will see link errors and other things that indicate problems. Glenn reported these to me and I was going to eliminate them; on second thought, this little toy lets us consider the renvois rather more systematically. Where you see a link error, it is (probably) a recognizer error, which either failed to get a string to link or got confused by some typography. The linking mechanism itself is based on string searches. In other words, whenever you click on a renvois, you are in fact performing a search on the headwords. This simple heuristic works reasonably well, returning string-matched headwords. In some cases, you get nothing because there is no headword matching the renvois word(s), and at other times you will get quite a list of articles, which may or may not include what the authors/editors intended. It is, of course, well known that many renvois simply don't correspond to an article, and many others differ in various ways from the article headwords. I am also applying a few rules to renvois searching to try to improve recall and reduce noise, which adds another level of indirection.
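The linking heuristic amounts to something like the following sketch (hypothetical names; the actual rules for improving recall are more involved):

```python
def link_renvoi(term, headwords):
    """String-match linker: return candidate article headwords for a renvois.

    Tries an exact match first, then falls back to a prefix match, which may
    return several candidate articles, or none at all."""
    term = term.strip().lower()
    exact = [h for h in headwords if h.lower() == term]
    if exact:
        return exact
    return [h for h in headwords if h.lower().startswith(term)]

headwords = ["DIGESTION", "INTESTINS", "INTESTIN GRÊLE", "CHYLE"]
```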

Now, ideally, one would go through the entire database, examine each renvois, and build a direct link to the one article that the authors/editors intended. But we're talking about 60,000+ renvois against 72,000 (or so) articles, and it is not clear that humans could resolve this in many instances. When Gilles and I worked on this, we used a series of (long forgotten) heuristics to filter out noise and errors. So, this simple toy works within operational limits and gives us a way to more systematically identify possible errors and ways to improve it.

Future Work

Aside from being a quick and dirty way to get some notion of the errors in the renvois, we might be able to make this more presentable. Please feel free to play with it and suggest ways to think about it. In the long haul, I would love a totally cool visualization: a clickable directed graph, so you could click on a node and re-center it on another article, class of knowledge, or author. Maybe something like Tricot's representation of the classes of knowledge, or maybe something like DocuBurst. Marti Hearst's chapter on visualizing text analysis is a treasure-trove of great ideas.

For the immediate term, I would like to recast this simple model to allow the user to specify the number of steps. So, set the number of iterations to follow, and you would get something like:

ESTOMAC, ventriculus (Tarin: Anatomie, Anatomy ) ==> Digestion || renvois
DIGESTION (Venel: Economie animale, Animal economy ) ==> Intestins || renvois
Intestins (unknown: Anatomie, Anatomy ) ==> Viscere || renvois
ESTOMAC, ventriculus (Tarin: Anatomie, Anatomy ) ==> Chyle || renvois
CHYLE (Tarin: Anatomie | Physiologie, Anatomy. Physiology ) ==> Sanguification || renvois
SANGUIFICATION (unknown: Physiologie, Physiology ) ==> Respiration || renvois
RESPIRATION (unknown: Anatomie | Physiologie, Anatomy | Physiology ) ==> Air || renvois


Following these chains of renvois either until you run out or you hit the iteration limit. I will try to follow this up with the multi-iteration model, and see if I can recover some of what Liz tried to do using GraphViz to generate clickable directed graphs.
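The multi-iteration model could be sketched along these lines, treating the renvois as a directed graph and following the first cross-reference of each article until a dead end, a cycle, or the step limit (all names hypothetical):

```python
def follow_chain(start, renvois, max_steps=5):
    """Follow a chain of cross-references from `start`, taking the first
    renvois of each article, until a dead end, a loop, or the step limit."""
    chain = [start]
    seen = {start}
    for _ in range(max_steps):
        targets = renvois.get(chain[-1], [])
        if not targets or targets[0] in seen:
            break
        chain.append(targets[0])
        seen.add(targets[0])
    return chain

# Toy subgraph echoing the example above
renvois = {
    "ESTOMAC": ["DIGESTION", "CHYLE"],
    "DIGESTION": ["INTESTINS"],
    "INTESTINS": ["VISCERE"],
    "VISCERE": ["ESTOMAC"],  # loops back round and round
}
```

A real version would follow every renvois of each article (a breadth-first traversal) rather than just the first, which is what a clickable directed graph would display.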

References

[1] Gilles Blanchard et Mark Olsen, « Le système de renvoi dans l’Encyclopédie: Une cartographie des structures de connaissances au XVIIIe siècle », Recherches sur Diderot et sur l'Encyclopédie, numéro 31-32 L'Encyclopédie en ses nouveaux atours électroniques: vices et vertus du virtuel, (2002) [En ligne], mis en ligne le 16 mars 2008.

[2] Charles Cooney, Russell Horton, Robert Morrissey, Mark Olsen, Glenn Roe, and Robert Voyer, "Re-engineering the tree of knowledge: Vector space analysis and centroid-based clustering in the Encyclopédie", Digital Humanities 2008, University of Oulu, Oulu, Finland, June 25-29, 2008
Read More

Archives Parlementaires: lèse (more)

Leave a Comment
As I mentioned in my last in this thread, I was a bit surprised to see just how prevalent the construction lèse nation had become early in the Revolution. The following is a sorted KWIC of lEse in the AP, with the object type restricted to "cahiers", resulting in 38 occurrences. These are, of course, the complaints sent to the King, reflecting relatively early developments of Revolutionary discourse. Keeping in mind all of the caveats regarding this data, we can see some interesting and possibly contradictory uses:
CAHIER: (p.319)sent être, comme criminels de lèse-humanité au premier chef, et ils se joindront au
CAHIER GÉN...: (p.77)manière de juger, qui lèse les droits les plus sacrés des citoyens, doit av
CAHIER: (p.697)r individus, cette concession lèse les et avoir eu d'autre mo dre une r {La partie d
CAHIER: (p.108)e, excepté dans les crimes de lèse-majesté au premier chef. Art. 33. Qu'aucun jugem
CAHIER: (p.791) si ce n'est pour le crime de lèse-majesté au premier chef, et réduite aux seuls c
CAHIER: (p.448)té seulement pour le crime de lèse-majesté au premier chef ou pour celui de haute t
CAHIER: (p.409)s choses saintes, et crime de lèse-majesté, dans tous les cas spécifiés par l'ord
CAHIER: (p.260)istériels, sauf pour crime de lêse-majesté, de haute trahison et autres cas, qui se
CAHIER: (p.42)e, à l'exception des crimes de lèse-majesté, de péculat et de concussion; mais, dan
CAHIER: (p.780), si ce n'était pour crime de lèse-majesté divine et humaine. Art. 9. Qu'ii soit as
CAHIER: (p.476)ée, si ce n'est pour crime de lèse-majesté divine et humaine. Art. 8. Qu'il soit as
CAHIER: (p.584)our le meurtre et le crime de lèse-majesté divine ou humaine, et que hors de ce cas
CAHIER: (p.378)ont seuls juges des crimes de lèse-majesté et de lèse-nation. Art. 8. Le compte de
CAHIER: (p.42)re précise ce qui est crime de lèse-majesté. Et que l'on établisse quels sont les c
CAHIER.: (p.117)déclaré coupable du crime de lèse-majesté etnation. et comme tel, puni des peines
CAHIER GÉN...: (p.671) excepté le crime de lèse-majesté, le poison, l'incendie et assassinat sur
CAHIER: (p.660) les cas, excepté le crime de lèse majesté, le poison, l'incendie et assassinat sur
CAHIER: (p.532)hommes coupables elu crime de lèse-majesté nationale; l'exemple elu passé nous a m
CAHIER: (p.645)poursuivis comme criminels de lèse-majesté nationale; que visite soit faite dans le
CAHIER: (p.383)s par elle comme criminels de lèse-majesté, quand ils tromperont la confiance du so
CAHIER: (p.286)s crimes de lèse-nation ou de lèse-majesté seulement; et que, dans ce cas, l'accus
CAHIER GÉN...: (p.210)ni comme criminel de lèse-majesté; 4° Cette loi protectrice de la libert
CAHIER: (p.35)rrémissibles comme le crime de lese-majesté. 13° 'Qu'en matière civile comme en mat
CAHIER: (p.378) crimes de lèse-majesté et de lèse-nation. Art. 8. Le compte des finances imprimé a
CAHIER: (p.359) crimes de lèsemajesté, et de lèse-nation, ce qui comprend les crimes d'Etat. 7° En
CAHIER: (p.301)ort infâme, comme coupable de lèse-nation, celui qui sera convaincu d'avoir violé c
CAHIER.: (p.536) et punis comme coupables de lèse nation. 17" De demander 1 aliénation irrévocabl
CAHIER: (p.82)x, sera déclarée criminelle de lèse-nation et poursuivie comme telle, soit par les Et
CAHIER: (p.402)tte règle seront coupables de lèse-nation et poursuivis comme tels dès qu'ils auron
CAHIER: (p.285) patrie, coupable du crime de lèse-nation, et puniecomme telle par le tribunal qu'é
CAHIER: (p.544) coupables de rébellion et de lèse-nation, favoriser la violation de la constitution
CAHIER: (p.42)lisse quels sont les crimes de lèse-nation. Le vœu des bailliages est que les ressor
CAHIER: (p.285)n user que pour {es crimes de lèse-nation ou de lèse-majesté seulement; et que, da
CAHIER: (p.402)s généraux, comme coupable de lèse-nation; que les impositions seront réparties dan
CAHIER: (p.320)e défendre, c'est un crime de lèse-nation. Qui pourrait nier que dans la génératio
CAHIER: (p.388)-mêmes; déclarant criminel de lèse-nation tous ceux qui pourraient entreprendre dire
CAHIER.: (p.249)sions. Ce serait vu crime de lèse-patrie de ne pas correspondre à sa confiance pat
CAHIER GÉN...: (p.221)i serait un crime de lèse-patrie. 2° De demander l'abolition de la gabelle
These include "lèse-majesté nationale", "lèse-majesté et nation" (OCR error fixed), "crimes de lèse-majesté et de lèse-nation", and (my favorite) "crime de lèse-majesté divine et humaine". Kelly suggests that notions of royal authority had been trimmed over the 18th century, and with this reduction came a restriction of just what would constitute lèse-majesté and to what kinds of crimes it would apply. He argues that it was only in 1787, with the Assembly of Notables, that the idea of the nation "begins to take shape in a public glare", and further suggests that the decrees of September 1789 establishing the punishments for lèse-nation (and subsequent events) show the "confused and arbitrary genesis of lèse-nation".

See also the 11 entries in our Dictionnaires d'autrefois for lese which stress lèse-majesté through the entire period with lèse-nation being left as an after-thought, such as in the DAF (8th edition): "Il se joint quelquefois, par analogie, à d'autres noms féminins. Crime de lèse-humanité, de lèse-nation, de lèse-patrie." One should not construe this as excessively conservative, however, since lèse-majesté is, by far, the most common construction in the 19th and 20th centuries (at least as represented in ARTFL-Frantext).
Read More

Topic Based Text Segmentation Goodies

1 comment
As you may recall, Clovis ran some experiments this summer (2009) applying a Perl implementation of Marti Hearst's TextTiling algorithm to perform topic-based text segmentation on different French documents (see his blog post and related files). Clovis reasonably suggests that some types of literary documents, such as epistolary novels, may be more suitable candidates than others, because they do not have the same degree of structural cohesion. Now, as I mentioned in my first discussion of the Archives Parlementaires, I suspect that this collection may be particularly well suited to topic-based segmentation. At the end of his post, Clovis also suggests that we might be able to test how well a particular segmentation approach is working by using a clustering algorithm, such as LDA topic modeling, to see if the segments can be shown to be reasonably cohesive. Both topic segmentation and modeling are difficult to assess because human readers/evaluators can have rather different opinions, leading to problems of "inter-rater reliability", which is probably a more vexing problem in the humanities and related areas of textual studies than in other domains.
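For readers unfamiliar with the mechanics, the heart of TextTiling-style segmentation is comparing word-count vectors on either side of each candidate gap; a bare-bones sketch (not Hearst's full algorithm, which adds smoothing and depth scoring):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def gap_scores(sentences, window=2):
    """Score each gap between sentences by the lexical similarity of the
    `window` sentences before it vs. the `window` sentences after it.
    Low scores mark candidate topic boundaries."""
    scores = []
    for gap in range(window, len(sentences) - window + 1):
        left = Counter(w for s in sentences[gap - window:gap] for w in s.split())
        right = Counter(w for s in sentences[gap:gap + window] for w in s.split())
        scores.append(cosine(left, right))
    return scores

# Toy text: two sentences per topic; the boundary sits at the middle gap
sents = ["the king spoke", "the king ruled", "wine dark sea", "wine and sea"]
scores = gap_scores(sents, window=1)
```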

Earlier this year (and a bit last year), I also ran some experiments on some 18th century English materials, such as Hume's History of England and the Federalist Papers. Encouraged by these results, particularly on the Federalist Papers, I have accumulated a number of newer algorithms, packages, and papers which may be useful for future work in this area. These are on my machine (for ARTFL folks, let me know if you want to know where), but I will not redistribute here as a couple of packages require non-redistribution or other limitations. I am putting links to some of the source files, when I have them.

Since Hearst's original work, there have been a number of different approaches to topic-based text segmentation. Clovis and I have tried to make note of much of this work on our CiteULike references (segmentation). There is some overlap with Shlomo's list. In no particular order of preference or chronology, here is what I have so far. I will also try to provide some details on using these when I have a chance to run them up.

From the Columbia NLP group (http://www1.cs.columbia.edu/nlp/tools.cgi), we have both Min-Yen Kan's Segmenter and Michael Galley's LCSeg. These required signing a use agreement, which I have in my office. The release archives for both include papers and some test data.

I spent some time trying to track down Freddy Choi's C99 algorithm and implementation described in some papers in the early part of this decade. I finally tracked it all down on the WayBack Machine at Internet Archive (link, thank you!!), which also has some papers, software, data and implementations of TextTiling and other approaches from that period. It appears several of the packages below use C99 and some of the code from this.

I was going to reference Utiyama and Isihara's implementation (TextSeg), but in the few months since I assembled this list, the link has (also) gone dead:
http://www2.nict.go.jp/x/x161/members/mutiyama/software.html#textseg
This appears to be a combination of approaches.

Igor Malioutov's MinCut code (2006) is available from his page:
http://people.csail.mit.edu/igorm/acl06code.html

There appears to be some info on TextTiling in Simon Cozens (2006), "Advanced Perl Programming".

We also want to check out Beeferman et al. (link), since I recall that this group had done some interesting work. I have Beeferman's implementation of TextTiling in C, but don't think I have run across anything else.

If you run across anything useful, please blog it here or let me know. Papers should be noted on our CiteUlike. Thanks!!
Read More

Archives Parlementaires: lèse collocations

Leave a Comment
The collocation table function of PhiloLogic is a quick way to look at changes in word use. Lèse majesté, treason or injuries against the dignity of the sovereign or state, is a common expression. The collocation table below shows terms around "lese | leze | lèse | lèze | lése | léze" in ARTFL Frantext (550 documents, 1700-1787) with majesté being by far the most common.
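Under the hood, a collocation table is simple to compute: count every word occurring within a fixed span of the target forms. A toy sketch (PhiloLogic's actual implementation also handles filtering and sorting):

```python
from collections import Counter

def collocates(tokens, targets, span=5):
    """Count words occurring within `span` words of any target form."""
    table = Counter()
    targets = set(targets)
    for i, tok in enumerate(tokens):
        if tok in targets:
            for j in range(max(0, i - span), min(len(tokens), i + span + 1)):
                if j != i:
                    table[tokens[j]] += 1
    return table

text = "le crime de lèse majesté et le crime de lèse nation"
table = collocates(text.split(), {"lèse"}, span=2)
```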



It is interesting to note that the construction "lèse nation" does not appear once in this report. Searching for "lèse nation" before the Revolution in ARTFL-Frantext finds a single occurrence, in Mirabeau's [1780] Lettres écrites du donjon de Vincennes, where he complains that "toute invocation de lettre-de-cachet me paraît un crime de lèse-nation". The collocation table for lEse in the current sample of the Archives Parlementaires (there are no instances of lEze in this dataset) shows the lèse nation construction to be far more frequent.




There have been discussions* of the transition from lèse majesté to lèse nation, which is clearly shown here. Now, a reasonable objection to this is that this report covers the entire revolutionary period (or as much of it as we have at the moment). But we see roughly the same rates and rankings for lèse in 1789 alone.

It would appear -- I would not put too much credence in these numbers -- that the shift from majesty to nation, and all that this implies in terms of the way the state is envisaged, was well under way by 1789. Either this happened very quickly in the years leading up to the Revolution, since the construction appears just once in ARTFL-Frantext before then, or it was a development that took place in types of documents not found in the rather more literary/canonical sample in ARTFL-Frantext, such as journals, pamphlets, and other more ephemeral materials. I guess data entry projects will never end.

One other observation: I like the collocation cloud as a graphic. But if you examine the table, you may notice that the cloud does not really represent the frequency differences all that well. The second table -- all of the AP -- shows that nation occurs more than 6 times as frequently as majesté, but differences of that magnitude tend to be rather difficult to show in a cloud. So the compromise of providing both is probably the best approach.

* G. A. Kelly, "From Lèse Majesté to Lèse nation: Treason in 18th century France", Journal of the History of Ideas, 42 (1981): 269-286 (JStor).


Read More

Archives Parlementaires (I)

Leave a Comment
A couple of weeks ago, some ARTFL folks discussed the notion of outlining some research and/or development projects that we will be, or would like to be, working on in the coming months. We discussed a wide range of possibilities that could involve substantive work, using some of the systems we have already developed or are working on, or more purely technical work. Everyone came up with some pretty interesting projects and proposals, and we decided that it might be entertaining and useful for each of us to outline a specific project or two and write periodic entries here as things move forward. In the cold light of sobriety, this sounds like a pretty good idea. So, let me be the first to give this a whirl.

Our colleagues at the Stanford University Library have been digitizing the Archives Parlementaires using the DocWorks system. During a recent visit, Dan Edelstein was kind enough to deliver 43 volumes of OCRed text, which represents about half of the entire collection. Dan and I very hastily assembled an alpha text build of this sample under PhiloLogic. I converted the source data into a light TEI notation and attempted to identify probable sections in the data, such as "cahiers", "séances", and other plausible divisions, using an incredibly simple approach. Dan built a table identifying volumes and years, which we used to load the dataset in (hopefully) coherent order. This is a very alpha test build: uncorrected OCR (much of which is surprisingly good) without links to page images. The volumes are being scanned in no particular order, so we have volumes from a large swath of the collection. We are hoping to get the rest of the volumes from Stanford in the relatively near future and will be working up a more coherent and user-friendly site, with page images and the like. So, with these caveats, here is the PhiloLogic search form.

The Archives Parlementaires are the official, printed record of French legislative assemblies from the beginning of the Revolution (1787) through 1860. We are interested in the first part of the first series (82 volumes, out of copyright), ending in January 1794, which contains the records pertaining to the Constituent Assembly, the Legislative Assembly, and the Convention. The first seven volumes of the AP are the General Cahiers de doléances, which are organized by locality and estate (clergy, nobility, and third). The rest contain debates, speeches, draft legislation, reports, and many other kinds of materials, typically organized by legislative session, often twice daily (morning and evening).

There will be some general housekeeping required to start. Some of this will involve writing a better division recognizer, particularly for the Cahiers, which currently do not include the place name and estate. I will also need to decide how to handle annexes, editorial materials, notes, etc. I suspect that it may also be worth some effort to try to correct some of the errors automatically, by simple replacement rules and identification of impossible sequences. I am also thinking of using proximity measures to try to correct some proper names, such as Bobespierre, Kobespierre, etc. I would also want to concentrate some effort on terms that may reflect structural divisions. Dan has suggested identifying speakers, where possible, so one could search the speeches (full and in debates) of specific individuals like Robespierre, but this appears to be fairly problematic, since it is not clear how to identify just where these might stop.
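To make the proximity idea concrete, here is a minimal sketch (in Python, purely for illustration) of matching OCR variants like "Bobespierre" against a small gazetteer of canonical names using edit distance. The name list and the distance threshold are invented for the example; a production version would need a much larger gazetteer and more careful thresholds.

```python
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# illustrative gazetteer of canonical names (not a real resource)
CANONICAL = ["Robespierre", "Mirabeau", "Condorcet"]

def correct_name(token, max_dist=2):
    # return the closest canonical name if within max_dist edits, else keep the token
    best = min(CANONICAL, key=lambda name: edit_distance(token, name))
    return best if edit_distance(token, best) <= max_dist else token

print(correct_name("Bobespierre"))  # -> Robespierre
print(correct_name("Kobespierre"))  # -> Robespierre
print(correct_name("Lafayette"))    # -> Lafayette (no close canonical match)
```

The threshold matters: too low and variants slip through, too high and distinct names get merged.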

Loading this data, particularly the complete (or at least out of copyright) dataset will probably be of general utility to Revolutionary historians, particularly when linked to page images and given some other enhancements. This will be done in conjunction with our colleagues at Stanford and other researchers.

I have several rather distinct research efforts in mind. There are a series of technical enhancements which I think fit the nature of the data fairly well:
  • sequence alignment to identify borrowed passages from earlier works, such as Rousseau and Montesquieu,
  • topic-based text segmentation, to split individual sessions into parts, and
  • topic modeling or clustering to attempt to identify the topics of the parts identified by topic-based segmentation.
We have already run experiments using PhiloLine, the many-to-many sequence aligner which we are using for various other applications. As we have found, this works relatively well on uncorrected OCR. For example, Condorcet in the Séance du vendredi 3 septembre 1790 [note the OCR error below] borrows a passage from Voltaire's Épitres in his

Nouvelles réflexions sur le projet de payer la dette exigible en papier forcé, par M. GoNDORCET.

Un maudit Écossais, chassé de son pays, Vint changer tout en France et gâter nos esprits. L'espoir trompeur et vain, l'avarice au teint blême, Sous l'abbé Terrasson calculaient son système, Répandaient à grands flols les papiers imposteurs, Vidaient nos coffres-forts et corrompaient no s mœurs.

Un maudit écossais, chassé de son pays,
vint changer tout en France, et gâta nos esprits.
L'espoir trompeur et vain, l'avarice au teint blême,
sous l'abbé Terrasson calculant son système,
répandaient à grands flots leurs papiers imposteurs,
vidaient nos coffres-forts, et corrompaient nos
moeurs;
without specific reference to Voltaire (that I could find). This is generally pretty decent OCR. The alignments also work on poorer-quality OCR and where there are significant insertions or deletions. For example:

Rousseau, Jean-Jacques, [1758], Lettre à Mr. d'Alembert sur les spectacles:
autrui des accusations qu'elles croient fausses; tandis qu'en d'autres pays les femmes, également coupables par leur silence et par leurs discours, cachent, de peur de représailles, le mal qu'elles savent, et publient par vengeance celui qu'elles ont inventé. Combien de scandales publics ne retient pas la crainte de ces sévères observatrices? Elles font presque dans notre ville la fonction de censeurs. C'est ainsi que dans les beaux tems de Rome , les citoyens, surveillans les uns des autres, s'accusoient publiquement par zele pour la justice; mais quand Rome fut corrompue et qu'il ne resta plus rien à faire pour les bonnes moeurs que de cacher les mauvaises, la haine des vices qui les démasque en devint un. Aux citoyens zélés succéderent des délateurs infames; et au lieu qu'autrefois les bons accusoient les méchans, ils en furent accusés à leur tour . Grâce au ciel, nous sommes loin d'un terme si funeste. Nous ne sommes point réduits à nous cacher à nos propres yeux, de peur de nous faire horreur. Pour moi, je n'en aurai pas meilleure opinion des femmes, quand elles seront plus circonspectes: on se ménagera davantage, quand on
Séance publique du 30 avril 1793, l'an II de la:
son tribunal n'exerce pas, d'ailleurs, une autorité aussi 1 mu soire qu'on pourrait le croire ; il se fait J"_ tice d'une partie de la violation des lois «j ciales ; ses vengeances sont terribles p l'homme libre, puisque la censure o lst "°" la honte et le mépris : et combien cle st* § dales publics ne retient pas la crainte m. châtiments ? Dans les beaux temps cle n°*ji les citoyens, surveillants nés les uns a es» s'accusaient publiquement par zèle p % justice. Mais quand Rome fut corrompu^ citoyens zélés succédèrent des oeiai •„ t fâmes; au lieu qu'autrefois les bons accu- -^ les méchants, ils en furent accuses tour . -, rla méEn Egypte, la censure ssu_ v moire des morts ; la comédie eut o*" B^^ des un pouvoir plus étendu sur la rep vivants. „ •* i„ t-Ole niani^ 1 * L'esprit de l'homme est fait ae te ut rtr-c, encore plus du ridicule que d'un ,»ïl u
The Rousseau passage is found in a speech titled Nécessité d'établir une censure publique par J.-P. Picqué, which does not appear to mention the title and possibly not Rousseau at all (as far as I can tell). As you can see, this is fairly messy OCR and is significantly truncated. We have a preliminary database running and will probably release this once we have the entire set and experiment further with alignment parameters.
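For the curious, the core intuition behind this kind of passage matching can be sketched very crudely with shared word n-grams. This is nothing like a real many-to-many aligner, which handles gaps, insertions, and OCR noise far more cleverly; it is just a toy illustration in Python, using fragments of the quotations above:

```python
def shingles(text, n=3):
    # lowercase word n-grams; crude punctuation stripping helps a little with OCR noise
    words = [w.strip(".,;:!?'\"()").lower() for w in text.split()]
    words = [w for w in words if w]
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passage_score(a, b, n=3):
    # fraction of a's n-grams that also occur in b (0 = no overlap, 1 = fully shared)
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa) if sa else 0.0

voltaire = "Un maudit Écossais, chassé de son pays, vint changer tout en France"
condorcet = ("Un maudit Écossais, chassé de son pays, "
             "Vint changer tout en France et gâter nos esprits.")
unrelated = "Les citoyens surveillants les uns des autres s'accusaient publiquement"

print(shared_passage_score(voltaire, condorcet) > 0.9)  # -> True
print(shared_passage_score(voltaire, unrelated) < 0.1)  # -> True
```

A real aligner would also allow approximate n-gram matches so that OCR errors like "gâter"/"gâta" do not break a run.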

Preliminary work that I have done on topic-based text segmentation, which Clovis followed up on in more detail (link), suggests that the individual séances may be particularly good candidates for topic segmentation, since the topics can shift around radically. Running text without clear shifts in topics tends not to do as well. There are a number of newer approaches than the Hearst TextTiling implementation (which I will blog about when I run them up) that may be more effective.
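The TextTiling intuition is easy to sketch: compare the vocabulary of adjacent blocks and posit a boundary wherever the similarity drops. Here is a toy version in Python; the 0.4 threshold, the lack of stopword removal, and the lack of smoothing are all simplifications of what the real algorithm does with depth scores:

```python
import math
from collections import Counter

def cosine(c1, c2):
    # cosine similarity between two word-count vectors
    num = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

def boundaries(paragraphs, threshold=0.4):
    # posit a topic boundary wherever adjacent paragraphs share little vocabulary
    counts = [Counter(p.lower().split()) for p in paragraphs]
    return [i + 1 for i in range(len(counts) - 1)
            if cosine(counts[i], counts[i + 1]) < threshold]

paras = [
    "le roi et la cour discutent des finances du royaume",
    "les finances du royaume et la dette de la cour",
    "les sauvages de la louisiane coupent un arbre pour cueillir le fruit",
]
print(boundaries(paras))  # -> [2]
```

Note how French function words (le, la, de) inflate the similarity of unrelated paragraphs; this is one reason the massaging mentioned below matters.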

Finally, on the technical side, I want to experiment with LDA topic modeling. Again, Clovis' initial work on topic identification for the articles of the Echo de la fabrique indicates that, if one can get good topic segments, the modeling algorithm may be fairly effective. Oddly enough, I cannot recall anyone doing the "topic two-step", where one would apply topic modeling to parts of documents split up by a topic-based segmentation algorithm. Or, I may have missed some important papers. The idea behind all of this is to build the ability to search for relatively coherent topics, either for browsing or searching.
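Here is what the "topic two-step" pipeline looks like in miniature, as a purely illustrative Python sketch: blank-line splitting stands in for a real topic-based segmenter, and labeling a segment by its most frequent content word stands in for LDA.

```python
from collections import Counter

# tiny invented stopword list, just for the example
STOP = {"le", "la", "les", "de", "des", "du", "et", "un", "une", "en"}

def topic_label(segment):
    # stand-in for a real topic model: most frequent content word in the segment
    words = [w for w in segment.lower().split() if w not in STOP]
    return Counter(words).most_common(1)[0][0]

def topic_two_step(document):
    # step 1: segment (naively on blank lines; a real pipeline
    # would use a topic-based segmentation algorithm here)
    segments = [s.strip() for s in document.split("\n\n") if s.strip()]
    # step 2: assign each segment a topic label
    return [(topic_label(s), s) for s in segments]

doc = "la dette la dette et les finances\n\nla censure la censure des spectacles"
for label, seg in topic_two_step(doc):
    print(label)
# -> dette
# -> censure
```

The point is simply that the two stages compose: whatever the segmenter emits becomes the unit the topic model classifies.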

So far, I have been talking about some more technical experimentation to see if certain algorithms, or general approaches, might be effective on a large and fairly complex document space. While I used the AP for significant work when I was doing Revolutionary studies, my initial systematic interest was in the General Cahiers de doléances. For my dissertation, and some later articles ("The Language of Enlightened Politics: The Société de 1789 in the French Revolution" in Computers and the Humanities 23 (1989): 357-64), I keyboarded a small sample of the Cahiers (don't ever, ever do that as a poor graduate student :-) to serve as a baseline corpus to look at changes in Revolutionary discourse over time, with specific reference to the materials published by the Société de 1789. I suspect that a statistical analysis of the language in the cahiers may bring to light interesting differences between the Estates, urban/rural, and north/south. For this set of tasks, I am planning to use the comparative functions of PhiloMine to examine the degree to which these divisions can be identified using machine learning approaches and, if so, what kinds of lexical differences can be identified. It would be equally interesting to compare a more linguistic analysis to the content analysis results found in Gilbert Shapiro et al., Revolutionary demands: a content analysis of the Cahiers de doléances of 1789.

I will, as promised (or threatened) above, try to blog good results and failures -- remember, Edison is credited with saying, while trying to invent the lightbulb, “I have not failed. I've just found 10,000 ways that won't work.” -- of these efforts here so we can all consider them.

Epub to TEI Lite converter

This is just to let you know that we now have an epub to tei converter. It can be found here:
http://artfl.googlecode.com/files/epub_parser.tar
As you'll notice, there are three files in this archive. The first one is epub_parser.sh; it's the only one you need to edit. Specify the paths (where the epub files are and where you want your TEI files to go) without slashes and just execute epub_parser.sh. The second one is parser.pl, which is called by epub_parser.sh. The third one is entities.pl, which handles HTML entities and is also called by epub_parser.sh. Before running it, make sure all three scripts are in the same directory.
A sample philologic load can be found here:
http://artflx.uchicago.edu/philologic/epubtest.whizbang.form.html
Of course, this is just a proof of concept and will be used only for text search and machine learning purposes. Some things will have to be tuned up. Note that I put a div1 every ten pages, since there is no way to recognize chapters in the original epub files.
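For what it's worth, the div-every-ten-pages trick can be sketched like this. This is a Python illustration of the idea, not the actual parser.pl logic, and the page-break markup is simplified (real TEI <pb/> elements carry attributes):

```python
import re

def add_divs(tei_body, pages_per_div=10):
    # insert a <div1> break before every Nth <pb/> page marker,
    # a crude stand-in when the source epub has no chapter structure
    parts = re.split(r"(<pb/>)", tei_body)
    out, page = [], 0
    for part in parts:
        if part == "<pb/>":
            page += 1
            # start a new div at pages 11, 21, 31, ...
            if page % pages_per_div == 1 and page > 1:
                out.append("</div1><div1>")
        out.append(part)
    return "<div1>" + "".join(out) + "</div1>"

# 21 fake pages -> 3 divs (pages 1-10, 11-20, 21)
body = "".join(f"<pb/>page {i} " for i in range(1, 22))
print(add_divs(body).count("<div1>"))  # -> 3
```

The arbitrary cut points are exactly why this is fine for search and machine learning but not for presentation.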

Text segmentation code and usage


Here's a quick explanation of how to use the text segmentation Perl module called Lingua-FR-Segmenter. You can find it here: http://artfl.googlecode.com/files/Lingua-FR-Segmenter-0.1.tar.gz It's not available on CPAN, as it's just a hacked version of Lingua::EN::Segmenter::TextTiling made to work with French. The first thing to do before installing it is to install Lingua::EN::Segmenter::TextTiling, which will get you all the required dependencies (cpan -i Lingua::EN::Segmenter::TextTiling). When you install the French segmenter, make test will fail, so don't run it. That's normal, since I haven't changed the example, which is for the English version of the module. An example of how it can be used:

#!/usr/bin/perl
use strict;
use warnings;
use lib '.';    # must come before the module is loaded to affect the search path
use Lingua::FR::Segmenter::TextTiling qw(segments);

# slurp the input text from stdin or a file given on the command line
my $text;
while (<>) {
    $text .= $_;
}

# a safely large number so that we never run out of segment breaks
my $num_segment_breaks = 100_000;
my @segments = segments($num_segment_breaks, $text);

my $count = 0;
foreach (@segments) {
    $count++;
    print;
    print "\n----------SEGMENT_BREAK----------\n" if exists $segments[$count];
}

There are other possibilities, but this is the basic one, which will segment the text whenever there's a topic shift. Some massaging is necessary in order to get good results, and the changes needed differ from one text to the next. Basically, separate paragraphs with a newline.


Classifying the Echo de la Fabrique

I've been working lately on trying to classify the Echo de la Fabrique, a 19th-century newspaper, using LDA. The official website is located at http://echo-fabrique.ens-lsh.fr/. The installation I used is strictly meant for experimentation on topic modeling.
The dataset I used is significantly smaller than the Encyclopédie, which means that the algorithm has fewer articles with which to generate topics. This makes the whole process trickier, since choosing the right number of topics suddenly becomes more important. I suspect that adding more articles to this dataset will yield better results. I settled on 55 topics and found a name corresponding to the general idea conveyed by each distribution of words. I then added those topics to each TEI file and loaded them into PhiloLogic. I chose to include four topics per article, or fewer when a topic's weight did not reach 0.1.
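The selection rule (at most four topics per article, dropping anything whose weight is under 0.1) is simple enough to show. In this Python sketch the topic names and weights are invented for illustration:

```python
def pick_topics(doc_topics, k=4, threshold=0.1):
    # keep at most k topics per article, dropping any below the weight threshold
    ranked = sorted(doc_topics.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, w) for name, w in ranked[:k] if w >= threshold]

# invented document-topic distribution for one article
dist = {"justice": 0.42, "hygiène": 0.23, "petites annonces": 0.12,
        "misère ouvrière": 0.08, "undetermined1": 0.05}

print(pick_topics(dist))
# -> [('justice', 0.42), ('hygiène', 0.23), ('petites annonces', 0.12)]
```

Here the article gets only three topics, because the fourth-ranked topic falls below the 0.1 cutoff.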
The work I've done so far on LDA has already shown several things about its accuracy in generating meaningful topics and in properly classifying text. It tends to work really well with topics that are concept driven. For instance, in the Echo de la Fabrique, the topic 'justice' works really well. The same goes for 'Hygiène', associated with words like 'choléra' or 'eau'. On the other hand, there are some distributions of words which were not identifiable as topics. Those have been marked as 'Undetermined', with a number such as 'Undetermined1' to distinguish each undetermined topic. And then there are also topics like 'Petites annonces' or 'Misère ouvrière' which are not as concept driven, and therefore are subject to more inaccuracies. Once again, I believe that having more articles from the same source would partially alleviate this problem: more documents, more training for the topic modeler, reduced dependency on concepts.
Each topic has a number attached to it. This number represents the importance of the topic for each article. To get the most prominent topic, search for, e.g., 'justice 1'; 'justice 2' for the second topic, 'justice 3' for the third, and 'justice 4' for the fourth. If you want to search for all four, just type 'justice'. Note that the classification tends to be more accurate with the first topic than with the other three, but that's not always the case.
Anyway, without further ado, here is the search form:
https://artflsrv03.uchicago.edu/philologic4/echofabrique/ (Update: this is under PhiloLogic4 and has only Topics 1 enabled at this time.)
Please let me know if you have any comments or suggestions. Any feedback is much appreciated.

Some Classification Experiments

Since Clovis has been running some experiments to see how well topic modeling using LDA might be used to predict topics on unseen instances, I thought I would backtrack a bit and write about some experiments I ran last year which may be salient for future comparative experimentation, or even to begin thinking about putting some of our classification work into some level of production. I am presuming that you are basically familiar with some of the classifiers and the problems with the Encyclopédie ontology. These are described in varying levels of detail in some of our recent papers/talks and on the PhiloMine site.

The first set was a series of experiments classifying a number of 18th-century documents using a standalone Bayesian classifier, learning the ontology of the Encyclopédie and predicting the classes on chapters (divs) of selected documents. I have selected three for discussion here, since they are interesting and are segmented nicely into reasonably sized chunks. I ran these using the English classifications and did not exclude the particularly problematic classes, such as Modern Geography (which tends to be biographies about important folks, filed under where they were from) or Literature. Each document shows the chapter or article, which is linked to the text of the chapter, followed by one or more classifications assigned using the Multinomial Bayesian classifier. If I rerun these, I will simply pop the classification data right into each segment, for easier consultation. Right now, you will need to juggle between two windows:

Montesquieu, Esprit des Loix
Selected articles from Voltaire, Dictionnaire philosophique
Diderot, Elements de physiologie

PENDING: Discussion of some interesting examples and notable failures.

The second set of experiments compared a K-Nearest Neighbor (KNN) classifier to the Multinomial Bayesian classifier in two tests, the first being cross-classification of the Encyclopédie and the second being multiple classifications, again using the Encyclopédie ontology, to predict classes of knowledge in Montesquieu's Esprit des Loix. The reason for these experiments is to examine the performance of linear (Bayesian) and non-linear (KNN) classifications in the rather noisy information space that is the Encyclopédie ontology. By "noisy" I mean to suggest that it is not at all uniform in terms of the size of categories (which can range from several instances to several thousand), the size of articles processed, the degree of "abstractness," where some categories are very general and some are very specific, and a range of other considerations. We have debated, on and off, whether KNN or Bayesian (or other linear classifiers such as Support Vector Machines) classifiers are better suited to the kinds of noisy information spaces that one encounters in retrofitting historical resources such as the Encyclopédie. The distinction is not rigid. In fact, in a paper last year, on which Russ was the lead author, we argued that one could reasonably combine KNN and Bayesian classifiers by using a "meta-classifier" to determine which should be used to perform a classification task on a particular article in cases of a dispute (Cooney et al., "Hidden Roads and Twisted Paths: Intertextual Discovery using Clusters, Classifications, and Similarities", Digital Humanities 2008, University of Oulu, Oulu, Finland, June 25-29, 2008 [link]). We concluded that, for example, "KNN is most accurate when it classifies smaller articles into classes of knowledge with smaller membership".

Cross-classification of the classified articles in the Encyclopédie using MNB and KNN. I did a number of runs, varying the size of the training set and the set to be classified. The result files for each of these runs, on an article-by-article basis, are quite large (and I'm happy to send them along). So, I compiled the results into a summary table. I took 16,462 classified articles, excluding Modern Geography, and "trained" the classifiers on between 10% and 50% of the instances. I put "trained" in scare quotes because a KNN classifier is a lazy learner with no real training phase, so what you are really doing is selecting a subset of comparison vectors with their classes. The selection process resulted in 276 and 708 classes of knowledge in the information space. As is shown in the table, KNN significantly outperforms MNB in this task. We know from previous work, and general background, that MNB tends to flatten out distinctions among smaller classes, but has the advantage of being fast.
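For readers unfamiliar with the Multinomial Bayesian classifier, here is a bare-bones version in Python. This is not AI::Categorizer, just the underlying idea, with invented toy "articles" standing in for Encyclopédie classes:

```python
import math
from collections import Counter

class TinyMNB:
    # minimal multinomial naive Bayes with add-one (Laplace) smoothing
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        words = doc.lower().split()
        def log_prob(c):
            # log P(c) + sum of log P(w|c) with add-one smoothing
            total = sum(self.word_counts[c].values()) + len(self.vocab)
            lp = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for w in words:
                lp += math.log((self.word_counts[c][w] + 1) / total)
            return lp
        return max(self.classes, key=log_prob)

# invented toy training data, loosely evoking two Encyclopédie classes
train_docs = ["blason armes écu champ", "ville province fleuve région",
              "écu champ or azur blason", "ville rivière province pays"]
train_labels = ["CoatOfArms", "Geography", "CoatOfArms", "Geography"]

clf = TinyMNB().fit(train_docs, train_labels)
print(clf.predict("écu azur armes"))  # -> CoatOfArms
```

The smoothing is what keeps unseen words from zeroing out a class, and it is also part of why MNB flattens distinctions among small classes.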

The distinctions are at times fairly particular and many times the classifiers come up with quite reasonable predictions, even when they are wrong. A few examples (red shows a mis-classification):

Abaissé, Coat of arms (en terme de Blason)

KNN Best category = CoatOfArms
KNN All categories = CoatOfArms, ModernHistory
MNB Best category = ModernHistory
MNB All categories = ModernHistory, Geography
AGRÉMENS, Rufflemaker (Passement.)
KNN Best category = Ribbonmaker
KNN All categories = Ribbonmaker
MNB Best category = Geography
MNB All categories = Geography
TYPHON, Jaucourt: General physics (Physiq. générale)
KNN Best category = Geography
KNN All categories = Geography, GeneralPhysics, Navy, AncientGeography
MNB Best category = Geography
MNB All categories = Geography, AncientGeography
I applied the comparative classifiers in a number of runs using different parameters for Montesquieu, Esprit des Loix. All of the runs tended to give fairly similar results, so here is the last of the result sets. The results are all rather reasonable, within limits, given the significant variations in the size of chapters/sections in the EdL. The entire "section" 1:5:13 is
Idée du despotisme. Quand les sauvages de la Louisiane veulent avoir du fruit, ils coupent l'arbre au pied, et cueillent le fruit. Voilà le gouvernement despotique.
which gets classified as

KNN Best category = NaturalHistoryBotany
KNN All categories = NaturalHistoryBotany
MNB Best category = NaturalHistoryBotany
MNB All categories = NaturalHistoryBotany, Geography, Botany, ModernHistory

In certain other instances, KNN will pick classes like "Natural Law" or "Political Law" while the MNB will return the more general "Jurisprudence". I am particularly entertained by

PARTIE 2 LIVRE 12 CHAPITRE 5:
De certaines accusations qui ont particulièrement besoin de modération et de prudence
KNN Best category = Magic
KNN All categories =
MNB Best category = Jurisprudence
MNB All categories = Jurisprudence

Consulting the article, one finds a "Maxime importante: il faut être très circonspect dans la poursuite de la magie et de l'hérésie" and that the rest of the chapter is indeed a discussion of magic. While the differences are fun, and sometimes puzzling, one should also note the degree of agreement between the different classifiers, particularly if one discounts certain hard to determine differences between classes, such as Physiology and Medicine. The chapter "Combien les hommes sont différens dans les divers climats" (3:14:2) is classified by KNN as "Physiology" and MNB as "Medicine". Both clearly distinguish this chapter from others on Jurisprudence or Law.

I have tended to find KNN classifications to be rather more interesting than MNB. But I don't think the jury is in on that yet, and one can always perform the kinds of tests that Russ described in the Hidden Roads talk.

All of these experiments were run using Ken Williams' incredibly handy Perl module AI::Categorizer rather than PhiloMine (which also incorporates a number of Williams' modules), just because it was easier to construct and tinker with the modules. I will post some of these shortly, for future reference.

