Frequencies in the Greek and Latin texts

Earlier this year Mark built a frequency query for the French texts (affectionately named wordcount.pl).
Kristin has now implemented this for our Greek and Latin texts. If you wonder what's new here: word counts for individual documents have always been available in PhiloLogic loads, but the difference is that you can now see frequencies over the entire corpus, or over a subset of works or authors.

You can find the forms here:
http://perseus.uchicago.edu/LatinFrequency.html
http://perseus.uchicago.edu/GreekFrequency.html

Update: Forms moved to the 'production site', perseus.uchicago.edu. You can now specify genre as well. Stay tuned for further stats, meant to provide a friendly reminder of Zipf's Law.

Note: the counts are raw frequency counts, without lemmatization.
I have edited the search form a tiny bit - let me know if you encounter any problems.
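For anyone curious what "raw frequency counts" means in practice, here is a minimal sketch of the idea in Perl. This is not wordcount.pl itself, and the tokenization is deliberately naive; a real run over Greek or Latin (or accented French) text would need proper Unicode handling.

#!/usr/bin/perl
use strict;
use warnings;

# Count every surface form as-is, with no lemmatization, over a set of
# plain-text files passed on the command line (a stand-in for a corpus
# subset of works or authors).
my %freq;
for my $file (@ARGV) {
    open my $in, '<', $file or die "$file: $!";
    while ( my $line = <$in> ) {
        for my $word ( split /\W+/, lc $line ) {
            $freq{$word}++ if length $word;
        }
    }
}

# Report the forms in descending order of frequency.
for my $word ( sort { $freq{$b} <=> $freq{$a} } keys %freq ) {
    printf "%8d  %s\n", $freq{$word}, $word;
}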

Do LDA generated topics match human identified topics?

I've been experimenting lately with how well LDA-generated topics match the Encyclopédie classes of knowledge. The experiment was conducted in the following way:
- I chose 100 classes of knowledge in the Encyclopédie and picked 50 articles from each.
- I then ran a first LDA topic trainer with 100 topics.
- I then identified each generated topic and named it after the corresponding Encyclopédie class of knowledge.
- My plan was then to look at the topic proportions per article and see if the top topic corresponded to its class of knowledge. Would the computer manage to classify the articles the same way the encyclopedists had?
I was not able to get that far with 100 topics in my first LDA run. This is because LDA will always generate a couple of topics which aren't really topics at all, just lists of very common words that happen to be used in the same documents. One should always disregard these and focus on the others. What this means is that I had to add a few more topics to my LDA run in order to end up with 100 identifiable topics. I settled on 103 topics, found 3 distributions of words which were unidentifiable, and dismissed them.
The results show that, on the whole, LDA topics and the Encyclopédie classes of knowledge do not match (see link to results below). Some do very well, like Artillerie, for which the corresponding distribution of words is:
canon piece poudre artillerie boulet fusil ligne calibre mortier bombe feu charge culasse livre met chambre pouce lumiere roue affut diametre coup batterie levier bouche ame flasque balle tourillon tire
Other distributions of words make sense in themselves but do not match any of the original classes of knowledge. For instance, there is no single topic for 'teinture' or for 'peinture'. What we get instead is a mixture of both classes of knowledge, which could be identified as a topic about colors:
couleur rouge blanc bleu tableau jaune verd peinture ombre teinture noir toile tableaux nuance papier etoffe bien teint peintre pinceau trait teinturier melange veut figure teindre feuille beau sert colle
Now the topic modeler is not wrong here. It's telling us that these words tend to occur together, which is true. Another significant example is the one with 'Boutonnier', 'Soie', and 'Rubanier':
soie fil rouet corde brin tour main bouton gauche longueur boutonnier droite attache bout fils tourner sert molette noeud cordon doigt piece emerillon moule broche ouvrage ruban rochet branche aiguille
What we get here is a topic about the art of making clothes, which is more general than 'Boutonnier' or 'Rubanier'.
For this to actually work, the philosophes would have had to be extremely rigorous in their choice of vocabulary, because that is what LDA expects. Another problem is that LDA treats each document as a mixture of topics, not as the expression of a single topic. So if a document is exclusively focused on one topic, LDA will still try to extract a certain number of topics out of it, and in that case you are going to get some topics which are mere subdivisions of the class of knowledge in that document. The reason our experiment broke down could be that the LDA topic trainer created new subdivisions of some classes of knowledge, or regrouped several classes of knowledge into one topic. These are all valid as topics, but they do not correspond to the human-identified topics.
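To make the last step of the plan concrete, here is a rough sketch of the kind of check I had in mind, in Perl. The file names and tab-separated layouts are made up for illustration; the actual output of the topic trainer needs its own parsing, and the topic labels are the names I assigned by hand.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: does the highest-proportion LDA topic assigned to an
# article carry the same label as the article's Encyclopédie class of knowledge?
# Assumes two made-up tab-separated files:
#   classes.tsv      article_id <TAB> class_of_knowledge
#   proportions.tsv  article_id <TAB> topic_label <TAB> proportion (one row per topic)
my ( %class, %best_topic, %best_prop );

open my $cf, '<', 'classes.tsv' or die $!;
while (<$cf>) {
    chomp;
    my ( $id, $cok ) = split /\t/;
    $class{$id} = $cok;
}

open my $pf, '<', 'proportions.tsv' or die $!;
while (<$pf>) {
    chomp;
    my ( $id, $topic, $prop ) = split /\t/;
    if ( !defined $best_prop{$id} or $prop > $best_prop{$id} ) {
        $best_prop{$id}  = $prop;
        $best_topic{$id} = $topic;
    }
}

# Count how often the top topic and the class of knowledge agree.
my ( $match, $total ) = ( 0, 0 );
for my $id ( keys %class ) {
    next unless defined $best_topic{$id};
    $total++;
    $match++ if lc $best_topic{$id} eq lc $class{$id};
}
printf "%d of %d articles have their class of knowledge as their top topic\n",
    $match, $total;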

Link to results

Section Highlighting in Philologic

In many of the Perseus texts currently loaded under PhiloLogic, the section labels would overlap and become unreadable. These labels come from the milestone tags in the XML and are placed along the edge of the text. One particularly problematic text in this regard was the New Testament, since its sections are verses and thus often very short stretches of text.

To fix the overlapping issue, I wrote a little bit of JavaScript to hide any label that would be placed in the same position as a previous label. I also added a function to recalculate this if the window is resized. My main function is fairly simple:

function killOverlap() {
    // Hide any milestone label that would land at the same vertical offset
    // as the previous visible label by switching it to the alternate class.
    var lastOffset = 0;
    $(".mstonecustom").each(function (i) {
        if (this.offsetTop == lastOffset) {
            this.className = "mstonen2";
        } else {
            lastOffset = this.offsetTop;
        }
    });
}

I also added a function which highlights a section when you hover over its milestone label along the side of the text. This seems useful to me, as it is often helpful to know where a section starts and ends. This was a slightly more complex problem: I had to alter the citequery3.pl script to add a span tag and some ids before the JavaScript could work. The JavaScript itself was then fairly simple:

function highlight() {
    // Bold the words of a section while its milestone label is hovered,
    // and restore them when the pointer leaves.
    $(".mstonecustom").hover(
        function () {
            var myid = jq("text" + $(this).attr('id'));
            $("w", myid).css({"font-weight": "bolder"});
        },
        function () {
            var myid = jq("text" + $(this).attr('id'));
            $("w", myid).css({"font-weight": "normal"});
        });
}

For it to work, though, you have to alter the citequery3.pl script along these lines:

# Build an id for this milestone (e.g. "a3b16" from a citation ending in "3.16").
my $spanid = $citepoints{$offsets[$offset]};
$spanid =~ s/.*\.([0-9]+)\.([0-9]+)$/a$1b$2/;
#...
# Wrap the milestone label in a span carrying that id for the JavaScript to find.
$tempstring =~ s/(^<[^>]+>)/$1<span class="mstonecustom" id="$spanid">$citepoints{$offsets[$offset]}<\/span>/;
#... {
$tempstring =~ s/<span class="mstonecustom" id="$spanid">$citepoints{$offsets[$offset]}<\/span>//;}

# Wrap the section itself in a span whose id ("text" . $spanid) is what the
# highlight() function looks up; note that the quotes have to be escaped here.
$milesubstrings[$offset] = "<span class=\"" . $citeunits{$offsets[$offset]} . "\" id=\"text$spanid\">" . $tempstring . "<\/span>";

That's about it. It may come in useful again someday. For an example, take a look at this.

Towards PhiloLogic4

Earlier this year I wrote a long discussion paper called "Renovating PhiloLogic" which provided an overview of the system architecture, a frank review of the strengths and (many) failings of the current implementation of the PhiloLogic3 series, and proposed a general design model for what would effectively be a complete reimplementation of the system, retaining only selected portions of the existing code base. While we are still discussing this, often in great detail, a few general objectives for any future renovation have emerged, including:
  • a service-oriented architecture;
  • release of the new system as Perl module libraries;
  • multiple database query support; and
  • options for advanced or extended indexing models.
I will be putting together a public version of this discussion draft in the near future and will blog it when I have something ready.

Before sallying forth to start working on a PhiloLogic4, there are a number of preliminary steps that Richard and I agree are required in order to 1) support the existing PhiloLogic3 series, and 2) clear the existing (messy) code base of some of the most egregious sections of the system, most notably the loader. Some of these are simply housekeeping and updates, some are patches and bug fixes, and others are clean-ups which should streamline the current system and help in any redevelopment.

We will start by retasking one of our current machines, a 32 bit OS-X installation, to be the primary PhiloLogic development machine. We will also get the Linux branch on a 32 bit Linux machine (flavor to be determined). There is a known 64 bit installation problem which we will address at the end of this initial process. When we reach the right step, we will install it all on 64 bit machines and fix it then, hopefully with much less effort on a streamlined version, while releasing upgraded 32 bit versions on the way. The other element for our consideration is the degree to which we can merge the OS-X and Linux branches of the system. Right now, we have two completely distinct branches. It would be much better to have one, which we think may be accomplished in a couple of different ways.

We are currently thinking of 4 distinct steps, which should each result in new maintenance releases of PhiloLogic3.

Step One

Apply the most recent OS-X Leopard patch kit to both the OS-X and Linux branches as required and feasible. This is the patch kit that Richard and I assembled for the migration to our new servers, and it has some nifty little extensions. We will also be updating the PhiloLogic code release site (Google Code) and retooling the new PhiloLogic site, which will then be linked from the existing location (philologic.uchicago.edu). Maintenance release when done. [MVO]

Step Two

The PhiloLogic loader currently uses a GNU Makefile scheme to load databases. This made good sense many years ago, when loads could take many hours (or days), but it is probably no longer needed. There are also many places where we use various utilities (sed, gawk, gzip, etc.) which add complications and make the entire scheme more brittle. Our current thinking is to fold all of the Makefile functions into a revised version of philoload, but we may find a better way to proceed once we get into it. We're planning a maintenance release when this is done. [MVO]

Step Three

The current PhiloLogic loader performs a number of C compiles, many of which are no longer needed. For example, the system still compiles the search2 binaries, which were left in PhiloLogic3 for backwards compatibility. We do need to keep the ability to generate the correct pack and unpack libraries used by search3. Once we have cleared out all unnecessary C compiles, we will investigate a couple of known bugs in search3 and attempt to resolve them. Again, once done, we would do a maintenance release. [RW and MVO]

Step Four

As noted above, some users have reported 64 bit compile problems on either installation or load. Once we have the loader streamlined, eliminating as many of the old C compiles as possible, we will investigate this problem. We're hoping that it will be easily remedied and, even better, could be resolved in a combined release which would merge the current OS-X and Linux branches. This would be the terminal release of the PhiloLogic3 series; any future releases would be only for bug fixes.

We are hoping that these steps will result in a stable terminal release of the PhiloLogic3 series which is easier to install and use. They should also result in significant streamlining that will help in any future PhiloLogic renovation or a new PhiloLogic4 series.

This is an initial plan, so please do post your comments, suggestions, and complaints.

Encyclopédie under KinoSearch

One of the things that I have wanted to do for a while is to examine implementations of Lucene, both as a search tool to complement PhiloLogic and possibly as a model for future PhiloLogic renovations. Late this summer, Clovis identified a particularly nice open-source Perl implementation of Lucene called KinoSearch. This looks like it will fit both bills very nicely indeed. As a little experiment, I loaded 73,000 articles (and other objects) from the Encyclopédie and cooked up a super simple query script. This allows you to type in query words and get links to articles sorted by their relevancy to your query (the italicized number next to the headword). At this time, I am limiting results to the top 100 "hits". Words should be lower case, accents are required, and words should be separated by spaces. Try it:


Here are a couple of examples which you can block copy in:
artisan laboureur ouvrier paysan
malade symptome douleur estomac
peuple pays nation ancien république décadence

The first thing to notice is search speed. Lucene is known to be robust, massively scalable, and fast, and the KinoSearch implementation is certainly very fast. A six-term search returns in 0.35 seconds of real time and less than 1/10 of a second of system time, using time on the command line. I did not time the indexing run, but think it was 10 minutes or so. [Addition: by reading 147 TEI files rather than 77,000 split files, the indexing time for the Encyclopédie falls to (using time) real 2m45.9s, user 2m33.8s, sys 0m11.1s.]


The KinoSearch developer, Marvin Humphrey, has a splendid slide show outlining how it works, with specific reference to the kinds of parameters, such as stemmers and stopwords, that one needs to consider, as well as an overview of the indexing scheme. Clovis and I thought this might be the easiest way to begin working with Lucene: since it is a Perl module with C components, it is easy to install and get running. Given the performance and utility of KinoSearch, I suspect that we will be using it extensively for projects where ranked relevancy results are of interest. These might include structured texts, such as newspaper and encyclopedia articles, and possibly large collections of uncorrected OCR materials which may not be suitable for the text analysis applications supported by PhiloLogic. Also, on first review, the code base is very nicely designed and, since it has many of the same kinds of functions as PhiloLogic, strikes me as a really fine model of how we might want to renovate PhiloLogic.

For this experiment, I took the articles as individual documents in TEI, which Clovis had prepared for other work. For each article, I grabbed the headword and PhiloLogic document id, which are loaded as fielded data. The rest of the article is stripped of all encoding and loaded in. It would be perfectly simple to read the data from our normal TEI files, and we could envision adding a script that loads source data from a PhiloLogic database build in order to offer a different kind of search, which would need its own search box/form.
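For the curious, both the indexing and the query script follow the synopsis in the KinoSearch documentation fairly closely. The sketch below is written from memory of that synopsis rather than copied from my scripts; the paths, field names, and the %articles hash are made up, and the API has shifted between KinoSearch releases, so treat it as an outline of the shape of the thing.

#!/usr/bin/perl
use strict;
use warnings;
use KinoSearch::InvIndexer;
use KinoSearch::Analysis::PolyAnalyzer;
use KinoSearch::Searcher;

my $analyzer = KinoSearch::Analysis::PolyAnalyzer->new( language => 'fr' );

# Indexing: headword and PhiloLogic id as fields, stripped article text as body.
my %articles;    # philo_id => { headword => ..., text => ... } (stand-in for the real source)
my $invindexer = KinoSearch::InvIndexer->new(
    invindex => '/path/to/encyc_invindex',
    create   => 1,
    analyzer => $analyzer,
);
$invindexer->spec_field( name => 'headword' );
$invindexer->spec_field( name => 'philo_id' );
$invindexer->spec_field( name => 'bodytext' );

while ( my ( $philo_id, $article ) = each %articles ) {
    my $doc = $invindexer->new_doc;
    $doc->set_value( headword => $article->{headword} );
    $doc->set_value( philo_id => $philo_id );
    $doc->set_value( bodytext => $article->{text} );
    $invindexer->add_doc($doc);
}
$invindexer->finish;

# Searching: lower-case, accented, space-separated query words; report the
# 100 best-scoring articles with their relevancy scores.
my $searcher = KinoSearch::Searcher->new(
    invindex => '/path/to/encyc_invindex',
    analyzer => $analyzer,
);
my $hits = $searcher->search( query => 'artisan laboureur ouvrier paysan' );
$hits->seek( 0, 100 );
while ( my $hit = $hits->fetch_hit_hashref ) {
    printf "%.3f  %s (%s)\n", $hit->{score}, $hit->{headword}, $hit->{philo_id};
}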

I have not played at all with parameters, and I can imagine that we would want to perform some normalization on input using simple rules, since KinoSearch uses a stemmer package also by Marvin Humphrey. Please email me, post comments, or add a blog entry here if you see problems (particularly search oddities), have ideas about other use cases, or have more general interface notions. I will be writing a more generalized loader and query script -- with paging, numbers of hits per page, filtering by minimum relevancy scores, and a version of the PhiloLogic object fetch which would try to highlight matching terms -- and moving that over to our main servers.

back to comparing similar documents

I mentioned a little while ago some work I did on comparing one document with the rest of the corpus it belongs to (the examples I used in that blog post will not give the same results anymore, and the results might not be as good, since I haven't optimized the new code for the Encyclopédie yet). The idea was to use the topic proportions generated for each article by LDA and come up with a set of calculations to decide which document(s) were closest to the original document. The reason I'm mentioning it here once more is that I've been through that code again, cleaned it up quite a bit, improved its performance, and tweaked the calculations. Basically, I made it usable for people other than myself. Last time I built a basic search form to use with Encyclopédie articles; this time I'm going to show the command line version, which has a couple more options than the web version.
In the web version, I was using both the top three topics in each document and their individual proportions within that document. For instance, Document A would have topics 1, 2 and 3 as its main topics; Topic 1 would have a proportion of 0.36, Topic 2 0.12, and Topic 3 0.09. In the command line version, there is the option of using only the topics, without the proportions. The order of importance of each topic is of course still respected. Depending on the corpus you're looking at, you might want to use one model rather than the other; it does give different results. One could of course tweak this some more and decide to take only the proportion of the most prominent topic, thereby giving it more importance. There is definitely room for improvement.
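To make the difference between the two models concrete, here is a toy illustration in Perl. This is not the actual calculation in compare_to_all.pl (which also takes the order of importance of the topics into account); it just shows one way a closeness score could be computed from each document's top three topics, with or without the proportions.

#!/usr/bin/perl
use strict;
use warnings;

# Each document is reduced to its top three topics, in order of importance,
# paired with their proportions.
my %docs = (
    A => [ [ 1, 0.36 ], [ 2, 0.12 ], [ 3, 0.09 ] ],
    B => [ [ 1, 0.28 ], [ 3, 0.15 ], [ 7, 0.10 ] ],
);

# Score shared topics, either weighting them by both proportions
# (the refined model) or simply counting them (the topics-only model).
sub similarity {
    my ( $d1, $d2, $use_proportions ) = @_;
    my %p2 = map { $_->[0] => $_->[1] } @$d2;
    my $score = 0;
    for my $pair (@$d1) {
        my ( $topic, $prop ) = @$pair;
        next unless exists $p2{$topic};
        $score += $use_proportions ? $prop * $p2{$topic} : 1;
    }
    return $score;
}

printf "topics only:      %.3f\n", similarity( $docs{A}, $docs{B}, 0 );
printf "with proportions: %.3f\n", similarity( $docs{A}, $docs{B}, 1 );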
There is also another option that was left out of the web version. By default, I set a tolerance level, that is, the minimum score a document needs in order to be returned as a result of the query. In the command line version, I made it possible to adjust this tolerance in order to get more or fewer results. This option currently works only with the refined model (the one with topic proportions). The code is currently living in
robespierre:/Users/clovis/LDA_scripts/
It's called compare_to_all.pl. There's some documentation in the header to explain how to use it. It's fairly simple. I might do some more work on it, and will update the script accordingly.
There are other applications of this script besides using it on a corpus made of well-defined documents. One could very well imagine applying it to a corpus subdivided into chunks of text by a text segmentation algorithm. One could then try to find passages on the same topic(s) using a combination of LDA and this script. The Archives parlementaires could be a good test case.
Another option would be to run every document of a corpus against the whole corpus and store all the results in a SQL database. This would give us a corpus where each document can be linked to various others according to the mixture of topics they are made of.
I will try to give more concrete results some time soon.

Supervised LDA: Preliminary Results on Homer

While Clovis has been running LDA tests on Encyclopédie texts using the Mallet code, I have been running some tests using the sLDA algorithm. After a few minor glitches, Richard and I managed to get the sLDA code, written by Chong Wang and David Blei, from Blei's website up and running.

Unlike LDA, sLDA (Supervised Latent Dirichlet Allocation) requires a training set of documents paired with corresponding class labels or responses. As Blei suggests, these can be categories, responses, ratings, counts, or many other things. In my experiments on Homeric texts, I used only two classes, corresponding to Homer's two major works: the Iliad and the Odyssey. As in LDA, topics are inferred from the given texts and a model is made of the data. This model, having seen the class labels of the texts it was trained on, can then be used to infer the class labels of previously unseen data.

For my experiments, I modified the XML versions of the Homer texts that we have on hand using a few simple Perl scripts. Getting the XML transformed into an acceptable format for Wang's code required a bit of finagling, but it was not too terrible. My scripts first split the XML into books (the 24 books of the Iliad and likewise for the Odyssey), then stripped the XML tags from the text. Holding out four books from each text for the inference step, I took the rest of the books and output the corresponding data file needed as input to the algorithm (data format here).
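For those who want the gist of the conversion, here is a rough sketch, not the actual scripts. It writes one LDA-C style line per book ("<number of distinct terms> term_id:count term_id:count ...") plus a parallel file of integer class labels, which is the input format Wang's code expects as I recall its README; the file locations and the label test on the file name are made up.

#!/usr/bin/perl
use strict;
use warnings;

my %vocab;                             # word => integer id
my $next_id = 0;
my @books = sort glob('books/*.txt');  # one plain-text book per file, XML already stripped

open my $data,   '>', 'slda-data.txt'   or die $!;
open my $labels, '>', 'slda-labels.txt' or die $!;

for my $file (@books) {
    open my $in, '<', $file or die "$file: $!";
    my %counts;
    while ( my $line = <$in> ) {
        for my $word ( split /\W+/, lc $line ) {
            next unless length $word;
            $vocab{$word} = $next_id++ unless exists $vocab{$word};
            $counts{ $vocab{$word} }++;
        }
    }
    # One document per line: distinct term count, then term_id:count pairs.
    print {$data} scalar keys %counts;
    print {$data} " $_:$counts{$_}" for sort { $a <=> $b } keys %counts;
    print {$data} "\n";
    # Class label: 0 for the Iliad, 1 for the Odyssey.
    my $label = $file =~ /iliad/i ? 0 : 1;
    print {$labels} "$label\n";
}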

I played around a bit with leaving out words that occurred extremely frequently or extremely rarely. For the results I am posting here, the English vocabulary was vast and I cut it down to words that occurred between 10 and 60 times. This probably cuts it down too much though, so it would be good to try some variations. Richard has suggested also cutting out the proper nouns before running sLDA in order to focus more on the semantic topics. For the Greek vocabulary, I used the words occurring between 3 and 100 times, after stripping out the accents.

Running the inference part of sLDA on the 8 books that I had held out seemed to work quite well. It got all 8 correctly labeled as to whether they belonged to the Iliad or to the Odyssey. In a reverse run, the inference was again able to achieve 100 percent accuracy in labeling the 40 books after having been trained on only the 8 remaining books.

The raw results of the trials give a matrix of betas with a column for each word and a row for each topic. These betas thus give a log-based weighting of each word in each topic. Following this are the etas, with a column for each topic and a row for each class; these give the weightings of each topic in each class, as far as I understand it. Richard and I slightly altered the sLDA code to output etas for each class, rather than for one fewer than the number of classes, as it was doing. As far as we understand the algorithm presented in Blei's paper, it should be giving us an eta for each class. Our modification didn't seem to break anything, so we are assuming that it worked, as the results look nice. Using the final model data, I have a Perl script that outputs the top words in each topic along with the top topics in each class. These are the results given below.
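The post-processing script is nothing elaborate. The sketch below shows the general shape of it: read the matrix of betas (assumed here to be a plain whitespace-separated text file, one row per topic) along with the vocabulary list, and print the highest-weighted words for each topic. The file names and layout are assumptions; the actual files written by Wang's code may need different parsing.

#!/usr/bin/perl
use strict;
use warnings;

# Vocabulary: one word per line, in the same order as the beta columns.
open my $vf, '<', 'vocab.txt' or die $!;
chomp( my @vocab = <$vf> );

# Betas: one row per topic, one log weight per word.
open my $bf, '<', 'betas.txt' or die $!;
my $topic = 0;
while ( my $row = <$bf> ) {
    my @beta = split ' ', $row;
    # Indices of the ten largest betas in this topic.
    my @top = ( sort { $beta[$b] <=> $beta[$a] } 0 .. $#beta )[ 0 .. 9 ];
    print "topic $topic: ", join( ' ', @vocab[@top] ), "\n";
    $topic++;
}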


Results of my sLDA Experiments on Homer:

English Text: 10 Topics
Greek Text: 10 Topics

Also, samples of the output from Blei and Wang's code, corresponding to the English Text with 100 topics:

Final Model: gives the betas and the etas which I used to output my results
Likelihood: the likelihood of these documents, given the model
Gammas
Word-assignments

Inferred Labels: Iliad has label '0', Odyssey has label '1'.
Inferred Likelihood: the likelihood of the previously unseen texts
Inferred Gammas

I have not played around much with the gammas, but they seem to give a weighting of each topic in each document. Thus you could figure out for which book of the Iliad or the Odyssey a specific topic was the most prevalent. It would be interesting to see if this correctly pinpoints which book the Cyclops comes in, for instance, as this is a fairly easily identifiable topic in most of the trials.



Encyclopédie Renvois Search/Linker

During the summer (2009), a user (UofC PhD, tenured elsewhere) wrote to ask if there was any way to search the Encyclopédie and "generate a list of all articles that cross-reference a given article". We went back and forth a bit, and I slapped a little toy together and let him play with it, to which his reply was "Oh, this is cool! Five minutes of playing with the search engine and I can tell you it shows fun stuff...". This is, of course, an excellent suggestion which we have talked about in the past, usually in the context of visualizing the relationships of articles in various ways. At the highest level, visualizing the relationships of the renvois is what Gilles and I attempted in our general "cartography" paper [1] and, more recently, what Robert and Glenn (et al.) attempted, in a radically different way, in their work on "centroids" [2].

The current implementation of the Encyclopédie under PhiloLogic will allow users to follow renvois links (within operational limits to be outlined below), but does not support searching and navigating the renvois in any kind of systematic fashion. Since this is something I think warrants further consideration, I thought it might be helpful to document this toy, give some examples, let folks play with it, outline some of the current issues, and conclude with some ideas about what might be done going forward.

To construct this toy, I wrote a recognizer to extract metadata for each article in the Encyclopédie which has one or more renvois. As part of the original development of the Encyclopédie, each cross-reference was automatically detected from certain typographic and lexical clues. This resulted in roughly 61,000 cross-references, so the extracted database has 61,000 records. I loaded these into a simple MySQL database and used a standard script to support searching and reporting (a sketch of the kind of lookup involved appears after the example below). The search parameters may include article headwords, authors, and normalized and English classes of knowledge, as well as the term(s) being cross-referenced. For example, there are 39 cross-referenced article pairs for the headword estomac. As you can see from the output, I'm listing the headword, author, classes of knowledge, and the cross-referenced term. You can get the article of the cross-referenced term or the cross-references in that article. Thus, the second example shows the link to Digestion:

ESTOMAC, ventriculus (Tarin: Anatomie, Anatomy ) ==> Digestion || renvois
[The renvois of Digestion find 56 articles pairs, including one to intestins]
DIGESTION (Venel: Economie animale, Animal economy ) ==> Intestins || renvois
Intestins (unknown: Anatomie, Anatomy ) ==> Chyle || renvois


and so on ==> lymphe ==> sang ==> ad nauseam. No, there is no ad nauseam article; that is just how you might feel after going round and round.
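For the record, the search and report script behind all of this is nothing fancy. A rough sketch of the kind of lookup it performs, in Perl with DBI and a hypothetical table layout (not the actual schema), would be:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical table: one row per cross-referenced article pair, with the
# headword, author, class of knowledge, and the term being cross-referenced.
my $dbh = DBI->connect( "dbi:mysql:database=encyclopedie", "user", "password",
    { RaiseError => 1 } );

my $sth = $dbh->prepare(q{
    SELECT headword, author, classification, renvoi
    FROM   renvois
    WHERE  headword LIKE ?
    ORDER  BY headword
});
$sth->execute('estomac%');

while ( my ( $head, $author, $class, $renvoi ) = $sth->fetchrow_array ) {
    print "$head ($author: $class) ==> $renvoi\n";
}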

Now, there are problems, but please go ahead and play with this now using the submit form, as long as you promise to come back, read through the rest of this, and let me know about any other problems.

Problems

As noted above, the renvois were identified automatically, and as with most of these things, it worked reasonably well. But you will see link errors and other things which indicate problems. Glenn reported these to me and I was going to eliminate them; on second thought, this little toy lets us consider the renvois rather more systematically. Where you see a link error, it is (probably) a recognizer error, which either failed to get a string to link or got confused by some typography. The linking mechanism itself is based on string searches. In other words, whenever you click on a renvoi, you are in fact performing a search on the headwords. This simple heuristic works reasonably well, returning string-matched headwords. In some cases you get nothing, because there is no headword that matches the renvoi word(s), and at other times you will get quite a list of articles, which may or may not include what the authors/editors intended. It is, of course, well known that many renvois simply don't correspond to an article and many others differ in various ways from the article headwords. I am also applying a few rules to renvois searching to try to improve recall and reduce noise, so this adds another level of indirection.

Now, ideally, one would go through the entire database, examine each renvoi, and build a direct link to the one article that the authors/editors intended. But we're talking 60,000+ renvois against 72,000 (or so) articles, and it is not clear that humans could resolve this in many instances. When Gilles and I worked on this, we used a series of (long forgotten) heuristics to filter out noise and errors. So, this simple toy works within operational limits and gives us a way to more systematically identify possible errors and ways to improve it.

Future Work

Aside from being a quick and dirty way to get some notion of the errors in the renvois, we might be able to make this more presentable. Please feel free to play with it and suggest ways to think about it. In the long haul, I would love a totally cool visualization: a clickable directed graph, so you could click on a node and re-center it on another article, class of knowledge, or author. Maybe something like Tricot's representation of the classes of knowledge, or maybe something like DocuBurst. Marti Hearst's chapter on visualizing text analysis is a treasure-trove of great ideas.

For the immediate term, I would like to recast this simple model to allow the user to specify a number of steps: set the number of iterations to follow, and you would get something like:

ESTOMAC, ventriculus (Tarin: Anatomie, Anatomy ) ==> Digestion || renvois
DIGESTION (Venel: Economie animale, Animal economy ) ==> Intestins || renvois
Intestins (unknown: Anatomie, Anatomy ) ==> Viscere || renvois
ESTOMAC, ventriculus (Tarin: Anatomie, Anatomy ) ==> Chyle || renvois
CHYLE (Tarin: Anatomie | Physiologie, Anatomy. Physiology ) ==> Sanguification || renvois
SANGUIFICATION (unknown: Physiologie, Physiology ) ==> Respiration || renvois
RESPIRATION (unknown: Anatomie | Physiologie, Anatomy | Physiology ) ==> Air || renvois


This would follow chains of renvois either until they run out or until you hit the iteration limit. I will try to follow this up with the multi-iteration model and see if I can recover some of what Liz tried to do using GraphViz to generate clickable directed graphs.
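A minimal sketch of that multi-step idea, with a stub lookup built from the example above standing in for the real MySQL lookup; the %seen hash and the depth limit keep circular chains of cross-references from looping forever:

#!/usr/bin/perl
use strict;
use warnings;

# Stub: a few renvois taken from the example above (the real version would
# query the MySQL database sketched earlier).
my %renvois = (
    'estomac'   => [ 'Digestion', 'Chyle' ],
    'digestion' => ['Intestins'],
    'intestins' => ['Viscere'],
    'chyle'     => ['Sanguification'],
);

sub lookup_renvois {
    my $headword = shift;
    return @{ $renvois{ lc $headword } || [] };
}

# Follow chains of renvois until they run out, loop back, or hit the depth limit.
sub follow_chain {
    my ( $headword, $depth, $seen ) = @_;
    return if $depth <= 0 or $seen->{ lc $headword }++;
    for my $renvoi ( lookup_renvois($headword) ) {
        print "$headword ==> $renvoi\n";
        follow_chain( $renvoi, $depth - 1, $seen );
    }
}

follow_chain( 'ESTOMAC', 3, {} );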

References

[1] Gilles Blanchard et Mark Olsen, « Le système de renvoi dans l’Encyclopédie: Une cartographie des structures de connaissances au XVIIIe siècle », Recherches sur Diderot et sur l'Encyclopédie, numéro 31-32 L'Encyclopédie en ses nouveaux atours électroniques: vices et vertus du virtuel, (2002) [En ligne], mis en ligne le 16 mars 2008.

[2] Charles Cooney, Russell Horton, Robert Morrissey, Mark Olsen, Glenn Roe, and Robert Voyer, "Re-engineering the tree of knowledge: Vector space analysis and centroid-based clustering in the Encyclopédie", Digital Humanities 2008, University of Oulu, Oulu, Finland, June 25-29, 2008

Archives Parlementaires: lèse (more)

As I mentioned in my last post in this thread, I was a bit surprised to see just how prevalent the construction lèse nation had become early in the Revolution. The following is a sorted KWIC of lEse in the AP, with the object type restricted to "cahiers", resulting in 38 occurrences. These are, of course, the complaints sent to the King, reflecting relatively early developments in Revolutionary discourse. Keeping in mind all of the caveats regarding this data, we can see some interesting and possibly contradictory uses:
CAHIER: (p.319)sent être, comme criminels de lèse-humanité au premier chef, et ils se joindront au
CAHIER GÉN...: (p.77)manière de juger, qui lèse les droits les plus sacrés des citoyens, doit av
CAHIER: (p.697)r individus, cette concession lèse les et avoir eu d'autre mo dre une r {La partie d
CAHIER: (p.108)e, excepté dans les crimes de lèse-majesté au premier chef. Art. 33. Qu'aucun jugem
CAHIER: (p.791) si ce n'est pour le crime de lèse-majesté au premier chef, et réduite aux seuls c
CAHIER: (p.448)té seulement pour le crime de lèse-majesté au premier chef ou pour celui de haute t
CAHIER: (p.409)s choses saintes, et crime de lèse-majesté, dans tous les cas spécifiés par l'ord
CAHIER: (p.260)istériels, sauf pour crime de lêse-majesté, de haute trahison et autres cas, qui se
CAHIER: (p.42)e, à l'exception des crimes de lèse-majesté, de péculat et de concussion; mais, dan
CAHIER: (p.780), si ce n'était pour crime de lèse-majesté divine et humaine. Art. 9. Qu'ii soit as
CAHIER: (p.476)ée, si ce n'est pour crime de lèse-majesté divine et humaine. Art. 8. Qu'il soit as
CAHIER: (p.584)our le meurtre et le crime de lèse-majesté divine ou humaine, et que hors de ce cas
CAHIER: (p.378)ont seuls juges des crimes de lèse-majesté et de lèse-nation. Art. 8. Le compte de
CAHIER: (p.42)re précise ce qui est crime de lèse-majesté. Et que l'on établisse quels sont les c
CAHIER.: (p.117)déclaré coupable du crime de lèse-majesté etnation. et comme tel, puni des peines
CAHIER GÉN...: (p.671) excepté le crime de lèse-majesté, le poison, l'incendie et assassinat sur
CAHIER: (p.660) les cas, excepté le crime de lèse majesté, le poison, l'incendie et assassinat sur
CAHIER: (p.532)hommes coupables elu crime de lèse-majesté nationale; l'exemple elu passé nous a m
CAHIER: (p.645)poursuivis comme criminels de lèse-majesté nationale; que visite soit faite dans le
CAHIER: (p.383)s par elle comme criminels de lèse-majesté, quand ils tromperont la confiance du so
CAHIER: (p.286)s crimes de lèse-nation ou de lèse-majesté seulement; et que, dans ce cas, l'accus
CAHIER GÉN...: (p.210)ni comme criminel de lèse-majesté; 4° Cette loi protectrice de la libert
CAHIER: (p.35)rrémissibles comme le crime de lese-majesté. 13° 'Qu'en matière civile comme en mat
CAHIER: (p.378) crimes de lèse-majesté et de lèse-nation. Art. 8. Le compte des finances imprimé a
CAHIER: (p.359) crimes de lèsemajesté, et de lèse-nation, ce qui comprend les crimes d'Etat. 7° En
CAHIER: (p.301)ort infâme, comme coupable de lèse-nation, celui qui sera convaincu d'avoir violé c
CAHIER.: (p.536) et punis comme coupables de lèse nation. 17" De demander 1 aliénation irrévocabl
CAHIER: (p.82)x, sera déclarée criminelle de lèse-nation et poursuivie comme telle, soit par les Et
CAHIER: (p.402)tte règle seront coupables de lèse-nation et poursuivis comme tels dès qu'ils auron
CAHIER: (p.285) patrie, coupable du crime de lèse-nation, et puniecomme telle par le tribunal qu'é
CAHIER: (p.544) coupables de rébellion et de lèse-nation, favoriser la violation de la constitution
CAHIER: (p.42)lisse quels sont les crimes de lèse-nation. Le vœu des bailliages est que les ressor
CAHIER: (p.285)n user que pour {es crimes de lèse-nation ou de lèse-majesté seulement; et que, da
CAHIER: (p.402)s généraux, comme coupable de lèse-nation; que les impositions seront réparties dan
CAHIER: (p.320)e défendre, c'est un crime de lèse-nation. Qui pourrait nier que dans la génératio
CAHIER: (p.388)-mêmes; déclarant criminel de lèse-nation tous ceux qui pourraient entreprendre dire
CAHIER.: (p.249)sions. Ce serait vu crime de lèse-patrie de ne pas correspondre à sa confiance pat
CAHIER GÉN...: (p.221)i serait un crime de lèse-patrie. 2° De demander l'abolition de la gabelle
These include "lèse-majesté nationale", "lèse-majesté et nation" (OCR error fixed), "crimes de lèse-majesté et de lèse-nation", and (my favorite) "crime de lèse-majesté divine et humaine". Kelly suggests that notions of royal authority had been trimmed over the 18th century, and that with this reduction came a restriction of just what would constitute lèse-majesté and to what kinds of crimes it would apply. He argues that it was only in 1787, with the Assembly of Notables, that the idea of the nation "begins to take shape in a public glare", and further suggests that the decrees of September 1789 establishing the punishments for lèse-nation (and subsequent events) show the "confused and arbitrary genesis of lèse-nation".

See also the 11 entries in our Dictionnaires d'autrefois for lese, which stress lèse-majesté through the entire period, with lèse-nation left as an afterthought, as in the DAF (8th edition): "Il se joint quelquefois, par analogie, à d'autres noms féminins. Crime de lèse-humanité, de lèse-nation, de lèse-patrie." One should not construe this as excessively conservative, however, since lèse-majesté is, by far, the most common construction in the 19th and 20th centuries (at least as represented in ARTFL-Frantext).

Topic Based Text Segmentation Goodies

As you may recall, Clovis ran some experiments this summer (2009) applying a Perl implementation of Marti Hearst's TextTiling algorithm to perform topic-based text segmentation on different French documents (see his blog post and related files). Clovis reasonably suggests that some types of literary documents, such as epistolary novels, may be more suitable candidates than other types, because they do not have the same degree of structural cohesion. Now, as I mentioned in my first discussion of the Archives Parlementaires, I suspect that this collection may be particularly well suited to topic-based segmentation. At the end of his post, Clovis also suggests that we might be able to test how well a particular segmentation approach is working by using a clustering algorithm, such as LDA topic modeling, to see if the segments can be shown to be reasonably cohesive. Both topic segmentation and topic modeling are difficult to assess because human readers/evaluators can have rather different opinions, leading to problems with "inter-rater reliability", which is probably a more vexing problem in the humanities and related areas of textual studies than in other domains.

Earlier this year (and a bit last year), I also ran some experiments on some 18th century English materials, such as Hume's History of England and the Federalist Papers. Encouraged by these results, particularly on the Federalist Papers, I have accumulated a number of newer algorithms, packages, and papers which may be useful for future work in this area. These are on my machine (ARTFL folks, let me know if you want to know where), but I will not redistribute them here, as a couple of the packages carry non-redistribution or other limitations. I am putting in links to some of the source files when I have them.

Since Hearst's original work, there have been a number of different approaches to topic-based text segmentation. Clovis and I have tried to make note of much of this work in our CiteULike references (segmentation). There is some overlap with Shlomo's list. In no particular order of preference or chronology, here is what I have so far. I will also try to provide some details on using these when I have a chance to try them out.

From the Columbia NLP group (http://www1.cs.columbia.edu/nlp/tools.cgi), we have both Min-Yen Kan's Segmenter and Michael Galley's LCSeg. These required signing a use agreement, which I have in my office. The release archives for both include papers and some test data.

I spent some time trying to track down Freddy Choi's C99 algorithm and implementation, described in some papers from the early part of this decade. I finally tracked it all down on the Wayback Machine at the Internet Archive (link, thank you!!), which also has some papers, software, data, and implementations of TextTiling and other approaches from that period. It appears several of the packages below use C99 and some of its code.

I was going to reference Utiyama and Isahara's implementation (TextSeg), but in the few months since I assembled this list, the link has (also) gone dead:
http://www2.nict.go.jp/x/x161/members/mutiyama/software.html#textseg
This appears to be a combination of approaches.

Igor Malioutov's MinCut code (2006) is available from his page:
http://people.csail.mit.edu/igorm/acl06code.html

There appears to be some info on TextTiling in Simon Cozens (2006), "Advanced Perl Programming".

We also want to check out Beeferman et al. (link), since I recall that this group had done some interesting work. I have Beeferman's implementation of TextTiling in C, but I don't think I have run across anything else.

If you run across anything useful, please blog it here or let me know. Papers should be noted on our CiteUlike. Thanks!!