Natural Language Morphology Queries in Perseus

Natural language queries are now possible on Perseus under PhiloLogic. Previously, Richard had implemented searching for various parts of speech in various forms. For instance, as noted on the About page for Perseus, a search for 'pos:v*roa*' will return all instances of perfect active optative verbs in the selected corpus. Now, a search for 'form:could-I-please-have-some-perfect-active-optatives?' will return the same results. In fact, searching for 'form:perf-act-opt', 'form:perfect-active-optative', 'form:perfection-of-action-optimizations', or 'form:perfact-actovy-opts-pretty-please' will all accomplish the same task.

Note that the dashes between the words are necessary: a search for plural nouns written as 'form:plural nouns' will actually search for any plural word followed by the word "nouns", which will fail. I carefully chose short forms of all the keywords, such as "impf" and "ind" for "imperfect" and "indicative", so that a search including any word starting with "ind" will match indicatives regardless of what follows the "d". Hopefully, there are no overlapping matches (such as using "im" to abbreviate "imperfect", which would also match "imperative"); if you encounter any, please let me know. We could post a list of acceptable abbreviations somewhere, although they are fairly straightforward, and typing the full term out is always a fail-safe method.

Basically, the modified crapser script translates searches beginning with "form:" into the corresponding "pos:" search. Using a hash of regular expressions and string matching, it builds the corresponding code. In the previous example, the search actually looks for "pos:....roa..". Notice that it fills the unspecified slots of the code with dots, allowing them to match anything. I implemented an alternative filler, the dash, so that when you search for something like "form:perf-act-opt-exact", you will actually be searching for "pos:----roa--" (and your search will fail, because no term is only and exactly a perfect active optative with no other specifications).
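The mechanics are easy to sketch in perl. In the following, the slot positions, patterns, and code letters are illustrative stand-ins rather than the actual crapser table, and the filler character is passed in by the caller rather than inferred from a trailing "exact" keyword:

#!/usr/bin/perl
# Minimal sketch of a "form:" to "pos:" translation. The slot indices,
# patterns, and code letters below are illustrative, not the real table.
use strict;
use warnings;

# Each rule: regex matching a user keyword, slot index, code letter.
my @rules = (
    [ qr/^v/,    0, 'v' ],    # part of speech: verb
    [ qr/^perf/, 4, 'r' ],    # tense: perfect
    [ qr/^aor/,  4, 'a' ],    # tense: aorist
    [ qr/^opt/,  5, 'o' ],    # mood: optative
    [ qr/^ind/,  5, 'i' ],    # mood: indicative
    [ qr/^act/,  6, 'a' ],    # voice: active
);

sub form_to_pos {
    my ($query, $filler) = @_;      # filler: '.' (loose) or '-' (exact)
    my @code = ($filler) x 9;       # nine-slot morphology code
    for my $word (split /-/, lc $query) {
        for my $rule (@rules) {
            my ($pattern, $slot, $letter) = @$rule;
            $code[$slot] = $letter if $word =~ $pattern;
        }
    }
    return 'pos:' . join '', @code;
}

print form_to_pos('perf-act-opt',       '.'), "\n";   # pos:....roa..
print form_to_pos('perf-act-opt-exact', '-'), "\n";   # pos:----roa--

Words that match no rule (the "could-I-please-have-some" padding) simply fall through, which is what makes the chattier queries work.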

One limitation of this method of natural language querying is that it cannot match the versatility of the "pos:" searches. Because it selects either dots or dashes as fillers, you cannot get a mixture of the two in one search; you cannot run a search such as "pos:v-.sroa---". However, this limitation will likely have little effect on the average user, and a user needing such a search can still run it using the "pos:" method. An alternative approach involving drop-down input boxes for each slot of the code would expose the full power of the pos searches, but it would be more tedious to implement and potentially tedious to use as well. Such an input form would also require the user to know more about the encoding than the "form:" searching I implemented does. For example, a user would need to know that "verb" is required in the first slot, even if "aorist optative" makes that the only possibility, whereas searching for 'form:aorist-optative' works without the user ever needing to know that a 'v' belongs in the first slot.

Encyclopédie: Similar Article Identification II

After a series of revisions following my last post on this subject (link), I thought it might be helpful to provide an update. We have been interested in teasing out how the VSM handles small vs. large articles and in getting some sense of why particular similar articles are selected. Over the weekend, I reran the vector space similarity function on 39,218 articles, which took some 29 hours. I excluded some 150 surface forms of words in a stopword list and all sequences of numbers (including roman numerals), as well as features (in this case word stems) found in more than 1,568 or fewer than 35 articles. This last step removed features like blanch, entend, mort, and so on. In all, I removed some 600 features, leaving 10,157 features for the calculation. Here is the search form:

Headword: (e.g. tradition)
Author: (e.g. Holbach)
Classification: (e.g. Horlogerie)
English Class: (e.g. Clockmaking)
Size (words): (e.g. 250- or 250-1000)
Show Top: articles (e.g. 10 or 50)
The number of matching terms for small articles can, of course, be very small. For example, the article "Tout-Bec" (62 words) is left with four stems [amer 1|oiseau 2|ornith 1|bec 3]. The first of its most similar articles, Rhinoceros (Hist. nat. Ornith.) -- remember, only the main article here -- matches on three stems:
word               frq1     frq2
bec                 3        5
oiseau              2        2
ornith              1        1
Are these similar? Well, both very small articles refer to kinds of rare birds that are notable for their beaks, one with a very large beak and one that looks like it has two or more beaks. It is also worth noting that "ornith" (the class of knowledge) is picked up in both. The next article down (Pipeliene) matches on:
amer                1        1
bec                 3        1
oiseau              2        2
The third most similar in this example is "Connoissance des Oiseaux par le bec & par les pattes.", a plate legend with, as you would expect, lots of beaks. This one matches on two stems, bec and oiseau.
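The pairwise inspection function amounts to intersecting two stem-frequency hashes. Here is a minimal perl sketch, with made-up vectors standing in for the indexed data (the Rhinoceros stems beyond the shared ones are invented):

#!/usr/bin/perl
# Sketch: list the stems two articles share, with the frequency on each
# side. The vectors are stand-ins, not the actual indexed data.
use strict;
use warnings;

my %tout_bec   = (amer => 1, oiseau => 2, ornith => 1, bec => 3);
my %rhinoceros = (bec => 5, oiseau => 2, ornith => 1, corne => 4);

printf "%-10s %5s %5s\n", 'word', 'frq1', 'frq2';
for my $stem (sort keys %tout_bec) {
    next unless exists $rhinoceros{$stem};
    printf "%-10s %5d %5d\n", $stem, $tout_bec{$stem}, $rhinoceros{$stem};
}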

It seems that the size of the query article, now that I have eliminated many function words and other extraneous data, has a significant impact. The larger the article, the more possible matches you will get (Zipf's Law applies). Longer articles will tend to be most similar to other longer articles, and shorter articles will match better with shorter ones. Similarity would thus appear to be a function of the relative frequencies of common features and of the lengths of the articles. We saw this in our original examination of the Encyclopédie and the Dictionnaire de Trévoux, where we built in some restrictions on size and compared articles with the same first letter rather than all to all. As far as I can tell, the kind of feature pruning shown here does not have a significant impact on larger articles.

User feedback might be significant in determining just how many features and what kinds of features are required to get more interesting matches. For any pair, we could store the VSM score, the sizes, and the matching features along with the user rating of the match. That might generate some actionable data for future applications.

[Aside: In some cases, similar passages lead to possibly related plates and legends. Cadrature, for example, links to numerous plate legends dealing with clockmaking.]

Mapping Encyclopédie classes of knowledge to LDA generated topics

As described in my previous blog entry, I've been working on comparing the results given by LDA-generated topics with the classes of knowledge identified by the philosophes in the Encyclopédie. My initial experiment was to see whether, given 5,000 articles belonging to 100 classes of knowledge, with 50 articles per class, an LDA topic modeler would find those 100 classes as topics. My conclusion was that it didn't find all of them, but it still found quite a few. Since then, I have played a bit more with this dataset and have come up with better results.
Since a topic modeler gives you the topic proportions per article (I just use the top three), what I tried this time was to draw up a table showing, for each class of knowledge, which topics the topic modeler identified in its articles. Before looking at this, it is important to keep in mind that the sample contains 50 articles per class of knowledge. Therefore, the closer the count of articles sharing a dominant topic within a class gets to 50, the better the topic modeler has done at identifying that class of knowledge and reproducing the human classification.
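The bookkeeping behind such a table is simple to sketch. The following perl assumes the modeler's output has already been reduced to one dominant topic per article; the input format and the sample data are invented for illustration:

#!/usr/bin/perl
# Sketch: tally, for each class of knowledge, how many of its articles
# share the same dominant LDA topic. Input format and data are invented.
use strict;
use warnings;

# article id => [ class of knowledge, dominant topic number ]
my %articles = (
    a1 => [ 'Horlogerie', 12 ],
    a2 => [ 'Horlogerie', 12 ],
    a3 => [ 'Horlogerie', 47 ],
    a4 => [ 'Astronomie', 3 ],
    a5 => [ 'Astronomie', 3 ],
);

my %tally;    # class => topic => count of articles
for my $info (values %articles) {
    my ($class, $topic) = @$info;
    $tally{$class}{$topic}++;
}

# With the real data (50 articles per class), a dominant-topic count
# approaching 50 means the topic nearly reproduces the human class.
for my $class (sort keys %tally) {
    my ($best) = sort { $tally{$class}{$b} <=> $tally{$class}{$a} }
                 keys %{ $tally{$class} };
    printf "%-12s dominant topic %-3d %d article(s)\n",
           $class, $best, $tally{$class}{$best};
}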
Of course, the classification of articles in the Encyclopédie can at times be a little puzzling. The articles were written by a large number of people, and the classification is therefore not always consistent. With that in mind, one should not expect perfect matches from a topic modeler. Moreover, since the topic modeler assumes that each article is about some fixed number of topics, the calculation might be further off.
For my experiment, I settled on 107 topics, of which I eliminated 7 that were essentially stopword lists. Looking at the results, there are 41 classes of knowledge in which 40 or more articles are grouped within the same LDA topic, meaning that 41% of the classes of knowledge were identified with a high level of accuracy. If we lower the bar to topics with more than 25 articles matching the same class of knowledge, we get 83 classes (or 83%).
The results do show some strange gaps: classes such as physique and divination do not seem to be identified at all. This might be due to a miscalculation, but I have yet to figure out what it could be. Highly specialized classes, such as corroyerie, poésie, or astronomie, get excellent matches, which is to be expected.
This experiment also gave us an idea of what percentage of LDA topics should be treated as stopword lists: between 5 and 10% of the topics should be discarded when using an LDA classifier.
Finally, we should bear in mind that LDA-generated topics do not systematically match human-identified topics; an unsupervised model is bound to give different results. It would be interesting to see how well supervised LDA (sLDA) would do in our particular test case.


Index Design Notes 1: PhiloLogic Index Overview

I've been playing around with some perl code in response to several questions about the structure of PhiloLogic's main word index--I'll post it soon, but in the meantime, I thought I'd try to give a conceptual overview of how the index works. As you may know, PhiloLogic's main index data structure is a hash table supporting O(1) lookup of any given keyword. You may also know that PhiloLogic only stores integers in the index: all text objects are represented as hierarchical addresses, something like a normalized, fixed-width XPointer.

Let's say we can represent the position of some occurrence of the word "cat" as
0 1 2 -1 1 12 7 135556 56
which could be interpreted as
document 0,
book 1,
chapter 2,
section -1,
paragraph 1,
sentence 12,
word 7,
byte 135556,
page 56, for example.

A structured, positional index allows us to evaluate phrase queries, positional queries, and metadata queries very efficiently. Unfortunately, storing each of these 9 numbers as a 32-bit integer would take 36 bytes of disk space for every occurrence of the word. In contrast, it's actually possible to encode all 9 of the above numbers in just 38 bits if we store them efficiently--a saving of roughly 87%. The document field has the value 0, which we can store in a single bit, whereas the byte position, our most expensive field, can be stored in just 18 bits. The difficulty is that the simple array of integers becomes a single long bit string stored in a hash. First, we encode each number in binary, like so:
0 1 01 11 1 0011 111 001000011000100001 000111

This is only 38 bits, so we pad it with 2 extra bits to get an even byte alignment (40 bits, or 5 bytes), and then we can store it in our hash table under "cat".
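Here is a small perl sketch of that packing step. Note that in the bit string above each field appears least-significant bit first; the field widths follow this example, whereas a real index would derive optimal widths per database:

#!/usr/bin/perl
# Sketch: pack the nine positional fields of one occurrence into a
# byte-aligned bit string. Field widths follow the example above.
use strict;
use warnings;

# [ value, width in bits ] for document, book, chapter, section,
# paragraph, sentence, word, byte, page.
my @fields = ( [0,1], [1,1], [2,2], [-1,2], [1,1], [12,4], [7,3],
               [135556,18], [56,6] );

my $bits = '';
for my $f (@fields) {
    my ($value, $width) = @$f;
    $value &= (1 << $width) - 1;    # two's-complement mask (handles -1)
    # emit the field least-significant bit first, as in the text above
    $bits .= join '', map { ($value >> $_) & 1 } 0 .. $width - 1;
}
printf "%s (%d bits)\n", $bits, length $bits;

# pad to a byte boundary and pack into bytes for storage under "cat"
$bits .= '0' x ((8 - length($bits) % 8) % 8);
my $packed = pack 'B*', $bits;
printf "packed to %d bytes\n", length $packed;   # 40 bits = 5 bytes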

Now, suppose that we use something like this format to index a set of small documents with 10,000 words total. We can expect, among other things, a handful of occurrences of "cat", and probably somewhere around a few hundred occurrences of the word "the". In a GDBM table, duplicate keywords aren't permitted--there can be exactly one record for "cat". For a database this size, it would be feasible to append every occurrence onto a single long bit string. Let's say our text structures require 50 bits to encode, and that we have 5 occurrences of "cat". We look up "cat" in GDBM and get a packed bit string 32 bytes, or 256 bits, long. We can divide that by the size of a single occurrence, so we know that we have 5 occurrences and 6 bits of padding.

"The", on the other hand, would be at least on the order of few kilobytes, maybe more. 1 or 2 K of memory is quite cheap on a modern machine, but as your database scales into the millions of words, you could have hundreds of thousands, even millions of occurrences of the most frequent words. At some point, you will certainly not want to have to load megabytes of data into memory at once for each key-word lookup. Indeed, in a search for "the cat", you'd prefer not to read every occurrence of "the" in the first place.

Since PhiloLogic currently doesn't support updating a live database, and all word occurrences are kept in sorted order, it's relatively easy for us to devise an on-disk, cache-friendly data structure that meets our requirements. Let's divide the word occurrences into 2-kilobyte blocks and keep track of the first position in each block. Then we can rapidly skip hundreds of occurrences of a frequent word, like "the", when we know that the next occurrence of "cat" isn't in the same document!

Of course, to perform this optimization, we need to know the frequency of every term in a query before we scan through them, so we'll have to add that information to the main hash table. Finally, we'd prefer not to pay the overhead of an additional disk seek for low-frequency words, so we'll need a flag in each keyword entry to signal whether we have:
1) a low-frequency word, with all occurrences stored inline
or
2) a high-frequency word, stored in the block tree.
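A sketch of what that two-case lookup might look like follows. The record layout here (a flag plus either an inline list or a list of blocks) is invented for illustration; the real on-disk format is compressed and, as described below, compiled per database:

#!/usr/bin/perl
# Sketch of the two-case keyword lookup. Record layout is invented;
# occurrences are arrayrefs whose first element is the document number.
use strict;
use warnings;

# Return the first occurrence of a word at or after $target_doc.
sub next_occurrence {
    my ($entry, $target_doc) = @_;
    if (!$entry->{high_freq}) {
        # low-frequency word: every occurrence is stored inline
        for my $occ (@{ $entry->{inline} }) {
            return $occ if $occ->[0] >= $target_doc;
        }
        return;
    }
    # high-frequency word: each 2 KB block header records the first
    # position it holds, so whole blocks can be skipped without decoding
    my $blocks = $entry->{blocks};
    for my $i (0 .. $#$blocks) {
        next if $i < $#$blocks && $blocks->[$i + 1]{first_doc} <= $target_doc;
        for my $occ (@{ $blocks->[$i]{occurrences} }) {
            return $occ if $occ->[0] >= $target_doc;
        }
    }
    return;
}

In a search for "the cat", the scan over "the" would call something like this with the document number of the next "cat" occurrence, decoding only the blocks that could contain it.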

Just like the actual positional parameters, the frequencies and tree headers can be compressed to an optimal size on a per-database basis. In PhiloLogic, this is specified in databasedir/src/dbspecs.h, a C header file that is generated at the same time as the index and then compiled into a custom compression/decompression module for each loaded database, which the search engine can dynamically load and unload at run time.

In a later post, I'll provide some perl code to unpack the indices, and try to think about what a clean search API would look like.

Encyclopédie: Similar Article Identification

The Vector Space Model (VSM) is a classic approach to information retrieval. We integrated it as a standard function in PhiloMine and have used it for a number of specific research projects, such as identifying borrowings from the Dictionnaire de Trévoux in the Encyclopédie, which is described in our forthcoming paper "Plundering Philosophers" and related talks[1]. While originally developed by Gerard Salton[2] in 1975 as a model for classic information retrieval, where a user submits a query and gets results in a ranked relevancy list, the algorithm is also very useful for identifying similar blocks of text, such as encyclopedia articles or other delimited objects. Indeed, this use of the VSM was proposed by Salton and Singhal[3] in a paper presented months before Salton's death. They demonstrated the use of the VSM to produce links between parts of documents, forming a type of automatic hypertext:
The capability of generating weighted vectors for arbitrary texts also makes it possible to decompose individual documents into pieces and explore the relationships between these text pieces. [...] Such insights can be used for picking only the "good" parts of the document to be presented to the reader.
Salton and Singhal further argued that manual link creation would be impractical for huge amounts of text, but these conclusions may have had limited influence, given the general interest at the time in human-generated hypertext links on the WWW.

Based on earlier work using PhiloMine, we have seen a number of "interesting" -- and at times unexpected -- connections between articles in the Encyclopédie, often between previously unrelated articles, if by unrelated we mean articles having different authors, different classes of knowledge, and few cross-references (renvois) between them. One might consider this kind of similarity measure as a kind of intertextual discovery tool, where the system proposes articles possibly related to a specific article.

The Vector Space Model functions by comparing a query vector to all of the vectors in a corpus, making it an expensive calculation, not always suitable for real-time use. In this experiment, I recast the VSM implementation in PhiloMine to run as a batch job, generating a database of the 20 most similar articles for each of 27,753 Encyclopédie articles (those with 100 or more words). To do this, I pruned features (word stems) found in more than 8,325 or fewer than 41 articles, resulting in a vector size of 10,431 features. I used a standard French word stemmer to reduce lexical variation and a log normalization function to handle variations in article sizes. The task took about 17 hours to run.
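The core of the calculation is simple enough to sketch in a few lines of perl, in the spirit of the Ceglowski article cited below[1]. This version assumes stemming and feature pruning have already been applied, uses a log weighting in place of raw frequencies, and reduces each vector to unit length so that the cosine is just a dot product; the sample vectors are made up:

#!/usr/bin/perl
# Sketch of the core VSM comparison: log-weighted term vectors reduced
# to unit length, compared by cosine. Sample vectors are made up.
use strict;
use warnings;

# turn a stem => raw frequency hash into a log-weighted unit vector
sub make_vector {
    my ($freqs) = @_;
    my %v = map { $_ => 1 + log $freqs->{$_} } keys %$freqs;
    my $norm = 0;
    $norm += $_ ** 2 for values %v;
    $norm = sqrt $norm;
    $v{$_} /= $norm for keys %v;
    return \%v;
}

# cosine similarity of two unit vectors: an exact match scores 1
sub cosine {
    my ($u, $w) = @_;
    my $dot = 0;
    for my $stem (keys %$u) {
        $dot += $u->{$stem} * $w->{$stem} if exists $w->{$stem};
    }
    return $dot;
}

my $query = make_vector({ vertu => 4, moeur => 2, habitant => 1, ile => 3 });
my $doc   = make_vector({ vertu => 2, moeur => 1, luxe => 5 });
printf "similarity: %.3f\n", cosine($query, $doc);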

Update (December 7): I have replaced the VSM build above with the same calculation on 39,200 articles -- all articles with 60 or more words -- which took about 29 hours to run. I pruned features found in more than 11,200 or fewer than 50 documents, leaving 9,710 features. This may change some results by adding more small articles. Note that this is about as large a VSM task as can be performed in memory using perl hashes, since anything larger runs out of memory. If we want to go larger, we would probably store the vectors on disk and tie them to perl hashes.

The results for a query show the 20 most similar articles, ranked by similarity score, where an exact match is equal to 1. For example, the article OUESSANT (Modern Geography) -- based on the 27,000-article build -- is related to the articles VERTU [0.274], Luxe [0.267], ECONOMIE ou OECONOMIE [0.265], POPULATION [0.263], CHRISTIANISME [0.261], SOCIÉTÉ [0.256], AVERTISSEMENT DES ÉDITEURS (suite) [0.255], MANICHÉISME [0.254], CYNIQUE, secte de philosophes anciens [0.254], Gout [0.250], EDUCATION [0.248], and so on. This reflects the article's discussion of the moral condition of the inhabitants of this small island off the coast of Brittany.

You can give it a try using this form (again now for 39,200 articles):

Headword: (e.g. tradition)
Author: (e.g. Holbach)
Classification: (e.g. Horlogerie)
English Class: (e.g. Clockmaking)
Size (words): (e.g. 250- or 250-1000)
Show Top: articles (e.g. 10 or 50)

[Dec 9: I added word count info for each article, so you can restrict searches to articles in ranges of size. I am also now storing the top 50 matches, which you can limit, and showing matching articles that are smaller than the source article. Dec 10: added a function to display the matching stems for any pairwise comparison, for inspection.]

There are a number of other options that I might add to the VSM calculation, including TF-IDF as an alternative normalization and weighting scheme, and virtual normalization to further reduce lexical variation and improve the performance of the stemming algorithm. I have also thought of using Latent Semantic Analysis as another way to handle similarity weighting, but given that we have many query terms, it is not clear that LSA would help all that much.

In a real production environment, I think we will add a "similar article" link to articles in the Encyclopédie. We have talked about having users rank the quality of the similarity performance. The assigned scores are somewhat helpful for ranking, but not as an absolute measure, since they can vary with the size of the input article. The VSM is an unsupervised learning model, and it is not clear to me that we could integrate user evaluations in any systematic fashion, but this is certainly an interesting subject for further consideration.

As always, please let me know what you think. I have a couple of general queries. I have used main and sub-articles (as well as plate legends, etc.) as the units for the similarity calculation. Should I use main entries only? I also limited this to articles with more than 100 words. At 50 words, we would have some 43,000 articles. Should I do that for a full implementation?

References

[1] See Timothy Allen, Stéphane Douard, Charles Cooney, Russell Horton, Robert Morrissey, Mark Olsen, Glenn Roe, and Robert Voyer, "Plundering Philosophers: Identifying Sources of the Encyclopédie", Journal of the Association for History and Computing (forthcoming 2009). Also see Ceglowski, Maciej. 2003: "Building a Vector Space Search Engine in Perl", Perl.com [http://www.perl.com/pub/a/2003/02/19/engine.html].

[2] Salton, G., A. Wong, and C. S. Yang. 1975: "A Vector Space Model for Automatic Indexing," Communications of the ACM 18/11: 613-620.

[3] Singhal, A. and Salton, G. 1995: "Automatic Text Browsing Using Vector Space Model" in Proceedings of the Dual-Use Technologies and Applications Conference, 318-324.