- the original text content of the node
- the "type" of the node: text, StartTag, EndTag, or Markup (for DTDs, comments, etc.)
- any attributes the node has
- the absolute byte offset in the string or file
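A minimal sketch of how a shallow parse yielding the per-node fields listed above might be modeled in Python (this is illustrative only; shlax's actual API may differ):

```python
import re
from collections import namedtuple

# One record per node: content, type, attributes, byte offset.
Node = namedtuple("Node", "content type attributes offset")

# Shallow tokenizer: comments/DTDs, processing instructions, end tags,
# start tags, and runs of text, in that order of preference.
TOKEN = re.compile(r"<!--.*?-->|<![^>]*>|<\?[^>]*\?>|</[^>]+>|<[^>]+>|[^<]+", re.S)
ATTR = re.compile(r'([\w:-]+)\s*=\s*"([^"]*)"')

def shallow_parse(xml):
    """Lazily yield one Node per token, without building a tree."""
    for m in TOKEN.finditer(xml):
        tok = m.group(0)
        if tok.startswith("</"):
            yield Node(tok, "EndTag", {}, m.start())
        elif tok.startswith(("<!", "<?")):
            yield Node(tok, "Markup", {}, m.start())
        elif tok.startswith("<"):
            yield Node(tok, "StartTag", dict(ATTR.findall(tok)), m.start())
        else:
            yield Node(tok, "text", {}, m.start())
```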
shlax: a shallow, lazy XML parser in python
A Unified Index Construction Library
while (1) {
    // as long as we read lines from standard input.
    if (fgets(line, 511, stdin) == NULL) {
        hitbuffer_finish(hb);
        break;
    }
    // scan for hits in standard Philo3 format.
    state = sscanf(line,
                   "%s %d %d %d %d %d %d %d %d %s\n",
                   word, &hit[0],...);
    if (state == 10) {
        // if we read a valid hit
        if ((strcmp(word, hb->word))) {
            // if we have a new word...
            hitbuffer_finish(hb);     // write out the current buffer.
            hitbuffer_init(hb, word); // and reinitialize
            uniq_words += 1LLU;       // LLU for a 64-bit unsigned int.
        }
        hitbuffer_inc(hb, hit); // add the hit to whichever word you're on.
        totalhits += 1LLU;
    }
    else {
        fprintf(stderr, "Couldn't understand hit.\n");
    }
}
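The same bookkeeping can be sketched in Python, if only to make the control flow explicit (the hitbuffer machinery is replaced by a plain dictionary, so this is an illustration of the logic, not of the C data structures):

```python
import sys
from collections import defaultdict

def read_hits(stream):
    """Group Philo3-format hit lines ("word n1 ... n8 extra") by word,
    returning the grouped hits, the unique-word count, and the total."""
    buffers = defaultdict(list)
    total_hits = 0
    for line in stream:
        fields = line.split()
        if len(fields) != 10:  # a word, eight integers, a trailing string
            print("Couldn't understand hit.", file=sys.stderr)
            continue
        word, hit = fields[0], [int(f) for f in fields[1:9]]
        buffers[word].append(hit)
        total_hits += 1
    return buffers, len(buffers), total_hits
```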
Vector Processing for OHCO
The presentation is an attempt to describe, somewhat formally, how PhiloLogic is capable of performing as well as it does. This comes from spending three years learning how Leonid's search core works, and attempting to extend and elucidate whatever I can. It's also the intellectual framework that I'm using to plan new features, like search on line and meter position, metadata, joins, etc. Hopefully, I can get someone who's better at math than I am to help me tighten up the formalities.
Basically, I refer to the infamous OHCO thesis as a useful axiom for translating the features of a text into a set of numerical objects, and then compare the characteristics of this representation to XML or Relational approaches. I'd love to know how interesting/useful/comprehensible others find the presentation, or the concept. What needs more explanation? What gets tedious?
If you look at the speaker notes, you can see me derive a claim that PhiloLogic runs 866 times faster than a relational database for word search. Math is fun!
PhiloLogic proto-binding for Python
In an earlier post, I mentioned that I'd try to call the PhiloLogic C routines via ctypes, a Python foreign function interface library. I did, and it worked awesomely well! Ctypes lets you call C functions from Python without writing any glue at all in some cases, giving you access to high-performance C routines in a clean, modern programming language. We'd ultimately want a much more hand-crafted approach, but for prototyping interfaces, this is a very, very useful tool.
First, I had to compile the search engine as a shared library, rather than an executable:
gcc -dynamiclib -std=gnu99 search.o word.o retreive.o level.o gmap.o blockmap.o log.o out.o plugin/libindex.a db/db.o db/bitsvector.o db/unpack.o -lgdbm -o libphilo.dylib
All that refactoring certainly paid off. The search4 executable will now happily link against the shared library with no modification, and so can any other program that wants high-speed text object search:
#!/usr/bin/python
import sys,os
from ctypes import *
# First, we need to get the C standard library loaded in
# so that we can pass python's input on to the search engine.
stdlib = cdll.LoadLibrary("libc.dylib")
stdlib.fdopen.restype = c_void_p # keep the returned FILE* from being truncated to an int
stdin = stdlib.fdopen(sys.stdin.fileno(), "r")
# Honestly, that's an architectural error.
# I'd prefer to pass in strings, not a file handle
# Now load in philologic from a shared library
libphilo = cdll.LoadLibrary("./libphilo.dylib")
libphilo.init_dbh_folder.restype = c_void_p # these return pointers, not ints
libphilo.new_search.restype = c_void_p
# Give it a path to the database. The C routines parse the db definitions.
db = libphilo.init_dbh_folder("/var/lib/philologic/databases/mvotest5/")
# now initialize a new search object, with some reasonable defaults.
s = libphilo.new_search(db, "phrase", None, 1, 100000, 0, None)
# Read words from standard input.
libphilo.process_input(s,stdin)
# Then dump the results to standard output.
libphilo.search_pass(s,0)
# Done.
That was pretty easy, right? Notice that there weren't any boilerplate classes. I could hold pointers to arbitrary data in regular variables, and pass them directly into the C subroutines as void pointers. Not safe, but very, very convenient.
Of course, this opens us up for quite a bit more work: the C library really needs a lot more ways to get data in and out than a pair of input/output file descriptors, I would say. In all likelihood, after some more experiments, we'll eventually settle on a set of standard interfaces, and generate lower-level bindings with SWIG, which would allow us to call philo natively from Perl or PHP or Ruby or Java or LISP or Lua or...anything, really.
Ctypes still has some advantages over automatically-generated wrappers, however. In particular, it lets you pass python functions back into C, allowing us to write search operators in python, rather than C--for example, a metadata join, or a custom optimizer for part-of-speech searching. Neat!
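As a sketch of that callback mechanism, here is the classic ctypes example: passing a Python comparison function into libc's qsort. The same machinery is what would let a Python function serve as a search operator; nothing here is PhiloLogic-specific.

```python
from ctypes import CDLL, CFUNCTYPE, POINTER, c_int, sizeof
from ctypes.util import find_library

libc = CDLL(find_library("c") or "libc.so.6")

# A C-compatible function type: int (*)(const int *, const int *)
CMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))

def py_cmp(a, b):
    # an ordinary Python function, called back from inside libc's qsort
    return a[0] - b[0]

def qsort_ints(values):
    """Sort a list of ints with C's qsort, comparing in Python."""
    arr = (c_int * len(values))(*values)
    libc.qsort(arr, len(arr), sizeof(c_int), CMPFUNC(py_cmp))
    return list(arr)
```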
Unix Daemon Recipes
How do I create a long-running worker process that will neither:
1) terminate when its parent terminates, such as a terminal session or a CGI script, nor
2) create the dreaded "zombie" processes that clog process tables and eventually crash the system?
As it turns out, this is the same basic problem as any UNIX daemon program; this one just happens to be designed to, eventually, terminate. PhiloLogic needs processes of this nature in various places: most prominently, to allow the CGI interface to return preliminary results.
Currently, we use a lightweight Perl daemon process, called nserver.pl, to accept search requests from the CGI scripts, invoke the search engine, and then clean up the process after it terminates. Effective, but there's a simpler way, with a tricky UNIX idiom.
First, fork(). This allows you to return control to the terminal or CGI script. If you aren't going to exit immediately, you should ignore SIGCHLD as well, so that you don't get interrupted later.
Second, have the child process call setsid() to gain a new session, and thus detach from the parent. This prevents terminal hangups from killing the child process.
Third, call fork() again, then immediately exit the (original) child. The new "grandchild" process is now an "orphan", and detached from a terminal, so it will run to completion, and then be reaped by the system, so you can do whatever long-term analytics you like.
A command line example could go like this:
#!/usr/bin/perl
use POSIX qw(setsid);
my $word = $ARGV[0] or die "Usage: searchwork.pl word outfile\n";
my $outfile = $ARGV[1] or die "Usage: searchwork.pl word outfile\n";
print STDERR "starting worker process.\n";
&daemonize;
open(SEARCH, "| search4 --ascii --limit 1000000 /var/lib/philologic/somedb");
print SEARCH "$word\n";
close(SEARCH);
exit;

sub daemonize {
    open STDIN,  '/dev/null'   or die "Can't read /dev/null: $!";
    open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
    open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
    defined(my $childpid = fork) or die "Can't fork: $!";
    if ($childpid) {
        print STDERR "[parent process exiting]\n";
        exit;
    }
    setsid or die "Can't start a new session: $!";
    print STDERR "Child detached from terminal\n";
    defined(my $grandchildpid = fork) or die "Can't fork: $!";
    if ($grandchildpid) {
        print STDERR "[child process exiting]\n";
        exit;
    }
    umask 0;
}
The benefit is that a similar &daemonize subroutine could entirely replace nserver, and thus vastly simplify the installation process. There's clearly a lot more that could be done with routing and control, of course, but this is an exciting proof of concept, particularly for UNIX geeks like myself.
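The same idiom ports directly to Python; here is a minimal sketch of the double-fork recipe (my own rendering of the idiom, not code taken from PhiloLogic):

```python
import os
import signal

def daemonize():
    """Detach from the controlling terminal via the double-fork idiom."""
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # children are reaped automatically
    if os.fork() > 0:
        os._exit(0)   # parent returns control to the shell or CGI script
    os.setsid()       # new session: detached from the parent's terminal
    if os.fork() > 0:
        os._exit(0)   # first child exits; the grandchild is now an orphan
    os.umask(0)
    # point stdio at /dev/null so the old terminal can close cleanly
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
```

After `daemonize()` returns, the surviving process has no controlling terminal and will be reaped by the system when it exits.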
The Joy of Refactoring Legacy Code
PhiloLogic is commonly known as being a rather large Perl/CGI project, but all of the actual database interactions are done by our custom search engine, which is in highly optimized C. The flow of control in a typical Philo install looks something like this:
--CGI script search3t accepts user requests, and parses them.
--CGI passes requests off to a long-running Perl daemon process, called nserver.
--nserver spawns a long-running worker process search3 to evaluate the request
--the worker process loads in a compiled decompression module, at runtime, specific to the database.
--search3t watches the results of the worker process
--when the worker is finished, or outputs more than 50 results, search3t passes them off to a report generator.
This architecture is extremely efficient, but as PhiloLogic has accrued features over the years it has started to grow less flexible, and parts of the code base have started to decay. The command line arguments to search3, in particular, are arcane and undocumented. A typical example:
search3 -P:binary -E:L=1000000 -S:phrase -E:L=1000000 -C:1 /tmp/corpus.10866
I've long been advocating a much, much simpler interface to the search engine. The holy grail would be a single-point-of-entry that could be installed as a C library, and called from any scripting language with appropriate interfacing code. There are several obstacles, particularly with respect to caching and memory management, but the main one is organizational.
How do you take a 15-year-old C executable, in some state of disrepair, and reconfigure the "good parts" into a modern C library? Slowly and carefully. Modern debugging tools like Valgrind help, as does the collective C wisdom preserved by Google. A particular issue is imperative vs. object-oriented or functional style. Older C programs tend to use a few global variables to represent whatever global data structure they work upon--in effect, what modern OOP practices would call a "singleton" object, but in practice a real headache.
For example, PhiloLogic typically chooses to represent the database being searched as a global variable, often set in the OS's environment. But what if you want to search two databases at once? What if you don't have a UNIX system? An object-oriented representation of the large-scale constructs of a program allows the code to go above and beyond its original purpose.
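To make the contrast concrete, here's a minimal Python sketch of the two styles (names are illustrative, not PhiloLogic's actual API):

```python
import os

# Imperative style: one global database, fetched from the environment.
# Only one database can be "current" at a time.
def search_global(word):
    db_path = os.environ["PHILO_DB"]  # hidden global state
    return (db_path, word)

# Object style: the database is an explicit handle, so two can coexist.
class Database:
    def __init__(self, path):
        self.path = path

    def search(self, word):
        return (self.path, word)
```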
Reclassifying the Encyclopédie
At ARTFL we've been mining this rich vein for some time now. We presented Mining Eighteenth Century Ontologies: Machine Learning and Knowledge Classification in the Encyclopédie at Digital Humanities 2007, detailing our initial attempts at classification and the critical interpretation of machine learning results. We followed up at DH 2008 with Twisted Roads and Hidden Paths, in which we expanded our toolkit to include k-nearest-neighbor vector space classifications, and a meta-classifying decision tree. Where we had previously achieved around 72% accuracy in categorizing medium-length and long articles using Naive Bayes alone, using multiple classifiers combined in this way we were able to get similar rates of accuracy over the entire encyclopedia, including the very short articles, which are quite difficult to classify due to their dearth of distinctive content. This post describes an effort to productionize the results of that latter paper, in order to insert our new, machine-generated classifications into our public edition of the Encyclopédie.
For the impatient, jump ahead to the ARTFL Encyclopédie search form and start digging. The new machine generated classifications can be searched just as any other item of Philologic metadata, allowing very sophisticated queries to be constructed.
For instance, we can ask questions like "Are there any articles originally classified under Géographie that are reclassified as Philosophie?" In fact there are several, and it's interesting to peruse them and deduce why their original classifications and generated classifications fall as they do. The editors followed a policy of not including biographies in the Encyclopédie, but evidently could not restrain themselves in many cases. Instead of creating a biography class, however, they categorized such entries under the headword corresponding to the region of the notable person's birth, and assigned it the class Géographie. Thus the article JOPOLI contains a discussion of the philosopher Augustin Nyphus, born there in 1472, and hence is classified by our machine learner under Philosophie.
If any of this piques your interest, please do get in touch with us (artfl dawt project at gee-male dawt com should work). We'd love to discuss our work and possible future directions. Or come check us out in Virginia next week!
using the JSON perl mod
So, the Perl JSON module is actually very easy and nice to use; it will convert your Perl data structure into JSON without breaking a sweat.
Here's a quick example which I hope will be useful :
#!/usr/bin/perl
use strict;
use warnings;
use JSON;

my %hash;
foreach my $file (@ARGV) {    # list of files from the command line
    open(my $fh, '<', $file) or die "Can't open $file: $!";
    my @lines;
    while (<$fh>) {
        push(@lines, $_);
    }
    close($fh);
    $hash{$file} = \@lines;   # store array reference in hash
}
my $json = JSON->new->pretty; # pretty-printed output
print $json->encode(\%hash), "\n"; # convert Perl data structure to JSON
And done!
Lemma Collocations on Perseus
This post actually relates nicely to Mark's recent post. I have recently been working on lemmatized collocation tables for the Greek texts on Perseus. If you just want to see the results, skim to the end; the rest describes how I got there.
It is not so difficult to look up the lemma for each word surrounding a certain search hit, as for these texts the structure to do so is already in place and the information is stored in SQL tables. Efficiency in gathering this information is the main difficulty. Looking up the lemma in the tables now in place can take a couple different SQL queries, which each take up a small chunk of time. For a few lookups, or even a few hundred, this is not too big of a problem. However, for a collocation table spanning five words on either side, we need at least 10 lookups in the databases per hit. The time it takes to do that adds up quite quickly.
So, following a suggestion from Mark, I wrote a script and generated a file with a line for every word in the Perseus database. Basically, it takes each word id, starting with 1 on up to 5.5 million something, and looks up its lemma. This generated a 5.5 million line file with lines like this:
2334550 δέ
2334551 ὅς
2334552 ἕκαστος
2334553 ἵππος
2334554 nolemma
2334555 ὅς
2334556 δέ
2334557 πέτομαι
2334558 κονίω
2334559 πεδίον
Now, looking up the words on either side of a hit is much simpler - all you need to know is the "word id" of the hit and you can look at those around it. The "nolemma" entries are primarily punctuation and such which were given word tags at some point.
The size of this massive file however was another hindrance to efficient generation of collocation tables. Although we now know exactly what line we need to look at for the information we need, getting that line is still costly, as it could require reading in a couple million lines of a file to get the one we need. After playing around with it a bit, command-line grep searching seemed to be the fastest way to go, but even a grep search 10 times per hit adds up fast. So, I tried combining the searches into one massive egrep command to be read into a perl array. My searches looked something like:
egrep "23345[456][0-9]" lemmafile
This gives a window of 30 lines in the file, starting at 2334540 and ending at 2334569. This limited the searches to one per hit instead of 10, but it still wasn't fast enough. So, I combined all of the searches like so:
egrep "(2342[234]|33329[678]|...|829[567])[0-9]"
(A bit of accounting for numbers ending in 0 is needed so that a window around 400 doesn't include things in the 490's, but this is not too difficult.)
This looked nice and seemed to work until I tried running it on more hits. It was then coming up with such massive regular expressions to grep for that grep was complaining that they were too big. So, I broke them up into chunks of roughly 350 at a time. Fewer, and the time would go up due to the added number of grep searches; too many more, and grep would overflow again. I may not have hit on the exact time minimizing value, but it is close at least.
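A sketch, in Python, of how those window patterns and their chunks could be generated (the exact grouping into chunks of ~350 follows the text; the decade-boundary accounting the text mentions is flagged in a comment rather than fully handled):

```python
import re

def hit_prefix(word_id):
    """Pattern prefix for a 30-line window around one hit:
    e.g. 2334550 -> "23345[456]", covering ids 2334540..2334569."""
    decade = word_id // 10
    prefix = str(decade)[:-1]
    # assumes the three decades share a prefix; ids near a prefix
    # boundary (e.g. ...490s vs ...500s) need the extra accounting
    # mentioned in the text
    digits = "".join(str(d)[-1] for d in (decade - 1, decade, decade + 1))
    return f"{prefix}[{digits}]"

def chunked_patterns(word_ids, chunk=350):
    """Combine per-hit prefixes into a few big alternation patterns,
    so each chunk is one egrep invocation instead of hundreds."""
    prefixes = [hit_prefix(w) for w in word_ids]
    return ["(" + "|".join(prefixes[i:i + chunk]) + ")[0-9]"
            for i in range(0, len(prefixes), chunk)]
```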
Finally, here are some example searches:
Search for logoi.
Search for lemma:logos. (The time estimate is higher than actual load time).
Or, here is the search form:
Search Form.
Make sure that you choose Collocation Table and check the lemma button under the "Refined Search Results" Tab at the bottom.
It can handle most searches, except for very high frequency words. If anyone has ideas on how to make it faster, it could perhaps enable us to get results for all searches. Though perhaps this is not possible without somehow creating and saving those results somewhere.
UPDATE:
After talking to Mark, I altered the way the data is read from the file and now things should be running faster. The reason that all this discussion and speed streamlining for lemmatized collocation tables is necessary is the fact that the texts on Perseus do not have the lemmas embedded in the text. As Mark noted, many of the other databases would allow for much simpler and faster generation of the same data due to the fact that they do have lemmas in the text. However, for the purposes of Perseus, lemmas needed to be separated from the texts to allow them to be more dynamically updated, changed and maintained.
As for the speed, it should now be faster thanks to a handy function in Perl. I had investigated methods for reading a certain line of a file, since I happened to know exactly what lines I needed; finding none that did not read the whole contents of the file up to that line, I implemented the process described above. I had overlooked seek, dismissing it because it starts from a certain byte offset and not a certain line. Nevertheless, we can harness its power by simply padding each line with spaces to ensure every line in our file is the same byte length. With this pointer from Mark and some padding on the lines, knowing the line number and the number of bytes per line is enough to start reading from the exact location in the file that we desire.
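The padding-plus-seek scheme, sketched in Python (the helper names are my own; the actual implementation is in Perl):

```python
def pad_lines(src_path, dst_path):
    """Rewrite a file so every line occupies the same number of bytes."""
    with open(src_path, "rb") as f:
        lines = f.read().splitlines()
    width = max(len(l) for l in lines) + 1  # +1 for the newline
    with open(dst_path, "wb") as out:
        for l in lines:
            out.write(l.ljust(width - 1) + b"\n")
    return width

def get_line(path, lineno, width):
    """Fetch one line (1-based) by seeking straight to its byte offset."""
    with open(path, "rb") as f:
        f.seek((lineno - 1) * width)
        return f.read(width).rstrip().decode("utf-8")
```

Note that the padding is in bytes, not characters, which is exactly what matters here since Greek lemmas are multi-byte in UTF-8.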
MONK Data Under PhiloLogic: 1
As I'm sure you all know, the MONK Project (http://monkproject.org/), directed by Martin Mueller and John Unsworth, has generated a large collection of tagged data, some of which has been made public and some of which is limited to CIC or other institutions.
The following is a long post describing in some detail one approach to integrating this kind of information. Some of this will be deeply geeky and you can feel free to skip over sections. There is, towards the bottom of this post, a link to a standard PhiloLogic search form, so you can play with this proof-of-concept build yourself.
Richard and Helma have developed a mechanism to search for part of speech and lemma searching under PhiloLogic for their Greek and Latin databases (link). This is based on some truly inspired hacking by Richard and forms one model of how to handle this kind of functionality. My understanding of this, and Richard please correct me if I am wrong, is that it uses an undocumented feature in the index/search3 subsystem that allows us to have multiple index entries for each word position in the main index. This works and is certainly an approach to be considered as we think about a new series of PhiloLogic.
Build Notes
I have been experimenting with a somewhat different mechanism to handle this kind of problem, which is based on previous examples of mapping multiple word attributes to an index entry, using multiple field "crapser" entries. You may recall that this is the mechanism by which we merged Martin's virtual normalization data to very large collections of early modern English data and is currently running at Northwestern (link). My approach is to index not words, but pairs of surface forms and part of speech tags and to link these to an expanded (5 field) word database (called by crapser) containing the index form, surface form, lemma, part of speech and normalized forms. Here are some index entry forms (and frequencies):
  24 conquer:vvb
 445 conquer:vvi
 143 conquered:vvd
 414 conquered:vvn
These map to the word vector database which looks like:
idx              surf       pos    lem      normal
conquered:j      conquered  j      conquer  conquered
conquered:j-vvn  conquered  j-vvn  conquer  conquered
conquered:n-vvn  conquered  n-vvn  conquer  conquered
conquered:vvd    conquered  vvd    conquer  conquered
conquered:vvn    conquered  vvn    conquer  conquered
To build this I first reduced the fully verbose form of the data, in which each token is tagged:
<w eos="0" lem="country" pos="n1" reg="COUNTRY" spe="COUNTRY">
I eliminated all encoding that is redundant, just to make things easier to work with since the files are huge:
<w pos="n1">COUNTRY</w>
Where there is some additional information, I keep it in the encoded document:
<w lem="illustration" pos="n2">ILLUSTRATIONS</w>
I then loaded this data into a very slightly modified PhiloLogic textloader. This simply builds an index representation of the surface form of the word and the part of speech, by getting the PoS from encoding:
if ($thetag =~ /<w/) {
    $thepartofspeech = "";
    $thetag =~ m/pos="([^"]*)"/i;
    $thepartofspeech = $1;
}
and adding this to the index entry:
$theword = $theword . ":" . $thepartofspeech;
When loaded to this point, you have modified index entries. The next step is simply to build a multi-field word vector database (crapser). I did this by reading the input data and adding entries for lemmas or normalizations. This is simply an extension of what is already documented in the "virtual-normalize" directory in the "goodies" in the PhiloLogic release.
The next step was to slightly modify a version of Leonid's original "gimme". The "sequential" version of this function (in the standard Philologic distribution), maps a multi-field (tab delimited) query using regexp patterns in egrep. This is fast and simple. It allows naming of fields, so you can simply specify "lem=justice" and it will generate a regular expression pattern (where TAB = the tab character):
^[^TAB]*TAB[^TAB]*TAB[^TAB]*TABjusticeTAB[^TAB]*$
And you get, of course, full regular expressions. (Note, this renders with some odd spacing, there are no spaces). Swap in this version of crapser and it all appears to run without further modification.
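The pattern generation itself is mechanical; a Python sketch (the field names and their order are illustrative, based on the five-field word database described above):

```python
def field_pattern(field, value, fields=("idx", "surf", "pos", "lem", "nrm")):
    """Build an egrep pattern matching `value` in one named field of a
    tab-delimited word-vector line, leaving the other fields free."""
    parts = [value if f == field else "[^\t]*" for f in fields]
    return "^" + "\t".join(parts) + "$"
```

Because `value` is spliced in verbatim, full regular expressions like `conque.*` pass straight through to egrep.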
So, to summarize, the implementation does not require any modifications to core system components. It requires only slight modifications to a textloader, which we do all the time for specific databases, and a slightly modified "crapser" with a suitably built word vector database.
The Database
The database has 567 documents containing 38.5 million words (types) and 273,600 index entries. Recall that these are surface form words and part of speech tags and not normal types. The dataset has selections from various sources, including Documenting the American South as well as some British Early Modern texts. It should have full PhiloLogic search and reporting capabilities. You can query the words in the database as usual, simply by typing in words. You can force searches on lemmas, normalizations, and parts of speech by specifying (with examples):
lem=conquer
nrm=conquer
pos=pns32
And finally, if you want to get one surface form and part of speech, you can search the index entry directly, such as "conquered:vvd". Note that the part of speech is specified after a colon and you don't need to specify anything else. This is obviously not a real query interface, but it suggests how we can think about interfaces further along (e.g., pull-down menus, etc.). You can also use regular expressions, such as lem=conque.* Finally, you can combine these, such as "pos=po.* lem=enemy", which means find forms of enemy followed by possessive pronouns within three words.
After all of that, here is the search form. (Reloaded 7/28/11)
Now, before running off to play with this, there are some important notes following which describe how to use this in more detail.
Discussion
This is a proof-of-concept build. In a full implementation, I would need to add some search syntax to allow the user to indicate a set of combined criteria for a single word. I was having some problems coming up with a use case, but I suppose one might want to search for a particular lemma AND part of speech. It would all work with a little massaging. Aside from that, this simple model should support all of the standard PhiloLogic searching and reporting features. Do let me know if you find something that does not work.
This model supports disambiguating searches, such as finding dog when it is used as a verb: try "dog:vvi".
Now, this is all fine and dandy. Except that doing conjoined searches on parts of speech reveals a significant conceptual difficulty, which I believe also applies to Richard's implementation. Each part of speech generates thousands of surface form index entries. For example:
"pos=vvn pos=po3. pos=j pos=n1"
generates 81,000 unique terms (index entries) in 4 vectors. The evaluation then does a massive join at the level of index entry evaluation. So, it is SLOW and subject to possible memory buffer overflow or other problems. In fact, the system will begin to generate results of this type fairly quickly, due to PhiloLogic's lazy evaluation (start returning results as soon as you have a few). But it can take several minutes to complete the task. We would certainly not want to put this on a high traffic machine, since if you have many similar queries, it would bog it down. Obviously, we could simply test to make sure that users search criteria would not drag the whole system down or simply lock this database to one user at a time, or some other work around. If we got reasonable French NLP, this could be implemented quickly.
However, I believe we have bumped up against a conceptual problem. To find POS1 and POS2 and POS3, either in a row or within N words, requires an evaluation of word and/or part-of-speech positions in documents.
There are a couple of possible solutions, all of which would require consideration of distinct indexing structures. The first is simply to build another kind of NGRAM POS index, which would have sequences of parts of speech indexed and mapped to document regions. The second would be another kind of index which would look like a standard PhiloLogic index entry, except that it would be ONLY part of speech. This would reduce the size of the word vectors, but would not in itself improve the index evaluation to find those sequences that fit the query in the actual documents, since we still have to return to word positions in documents.
We might call this "The Conjoined Part of Speech Problem (CPSP)". It is, in my opinion, a highly specialized type of search and it is not clear just what the use cases might look like in relatively simple languages (English, French) as opposed to Greek, for which Helma makes a convincing case. So, there is a question of just how important this might be. In email communication, Martin makes the case that it would be and that researchers who want this kind of query would be willing to wait a few minutes.
It would be a trivial and useful experiment to run a load where I would index ONLY part of speech information. This would be a good test to see if evaluation speed for conjoined part of speech searches would be reasonable. In fact, Richard and I did a few quick experiments that suggest this would work. The idea would be to distinguish between simple queries -- and run them as usual -- and multiple PoS queries, which would be run on a dedicated index build. So, build parallel indices. Oddly enough, in the current architecture, I suspect that one could simply have a switch to say WHICH database to consult dynamically, simply by evaluating the query and then setting the database argument. That would be another one of my famous, award-winning, hall of shame hacks. But it could be made to work.
Martin has also pointed out another issue, which is searching, sorting, and counting of PoS, lemma, and other data. Now, that makes a lot of sense. I want to search for "country" and find distributions of particular parts of speech. Or, I want to do a collocation table searching on a lemma and counting the lemmas around the word. I think all of this is certainly doable -- the latter is something I wrote about some 15 years ago -- with hacks to the various reporting subsystems (not in 3, which is just too much of a mess). In an SOA model of PhiloLogic, this would be quite reasonably handled, ideally by other teams using PhiloLogic if not here at Chicago.
I think these are important issues to raise, but not necessarily resolve at this time, if (when?) we consider the architecture of any future PhiloLogic4 development effort. For example, the current models of report generators would have to know about lemmas, etc. And we would need to at least leave hooks in any future model to support different indexing schemes for things like these.
Finally, watch this space. I believe Richard is doing a build of this data using his model as well.
Please do play around with all of this and let me know what you think. One consideration would be implementing this for selected French collections. We would obviously need real virtual normalizers, lemmatizers and PoS identifiers for a broader range of French than we have now.
KWIC Modifications
The problem with KWIC for Greek texts is that Greek fonts do not come in single-width (monospaced) versions, which KWIC uses to align the results more cleanly. In addition, the title lines, which give the bibliographic information and link, can be different lengths, and this also causes problems aligning the search terms. See, for instance, this search.
To solve the word tag problem I just made a few modifications to the KwicFormat subroutine in philosubs. The main edit there was changing the line that stripped all tags into this:
$bf=~ s#<(?!(w |/w))[^>]*># #gi; #keep only word tags
For the alignment issue, things were more complicated. Keeping track of the length of the left side of the line doesn't allow for a consistent place on the page due to the differing widths of the letters. In the end, I modified artfl_kwic to chop the left side of the hit to a size as close to a certain length as possible without breaking any words. Previously, both the right and the left were chopped to a certain length regardless of breaking words and length including tags, often resulting in very little content. Now, only the length of the display string is accounted for and in addition the length is adjusted for the length of the bibliographic title.
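A Python sketch of the word-respecting left chop (a reimplementation of the idea; the actual change lives in the artfl_kwic Perl code):

```python
def trim_left_context(text, width):
    """Chop the left context to at most `width` characters without
    breaking a word: if the cut lands mid-word, drop the partial word."""
    if len(text) <= width:
        return text
    cut = text[-width:]
    # if the character just before the cut and the first kept character
    # are both non-space, the cut split a word; drop its remainder
    if not text[-width - 1].isspace() and not cut[0].isspace():
        space = cut.find(" ")
        cut = cut[space + 1:] if space >= 0 else cut
    return cut.lstrip()
```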
I also added a span around the left and right sides of the hit to allow for positioning and alignment using Javascript (and CSS). Then, adding the following lines to the Results Header, the search terms are all lined up in a neat line:
span.left  { right:46%;  position:absolute; }
span.right { left:54.5%; position:absolute;
             height:18px; overflow:hidden; }
The numbers may look a little messy, but they give nice results. I found that without the decimal, the two sides were a bit too far apart, but there may be another way around that.
The extra bits for formatting the right span are in place of trimming the content of the right side in perl as I did for the left side. I found that the overflow:hidden attribute is quite handy if you can get it to work (it is a bit temperamental). As long as it is found in an absolutely positioned object with restricted size, AND it is contained within an element with overflow set to auto, it should work. It simply hides any content that does not fit within the given boundaries. It gives a very clean look to the right side of the page and even adjusts to different window sizes so that the content never leaks to the next line.
Take a look at this page and play with the window size to see what I mean. Unfortunately, there is no such nice property for trimming the overflow off of the left side instead of the right side. That is why I did it in perl instead of Javascript. There is a function called clip in Javascript which is designed to clip an image, but again the way it works makes it much easier to trim from the right side than the left side. One could probably twist the clip function enough to make something similar happen for the left side (and make everything nicely adjustable and lovely), but for now, it is happening in perl. (I tried for a while, but my concoctions just seemed to slow things down and not add any exciting results.)
UPDATE: I couldn't resist playing a bit more with the javascript, and now it works like I wanted it to! Now, if you click on the link above, it won't illustrate what I said it would, because the javascript has been improved. I added this function:
function trimKwicLines() {
    var contentwidth = $(".content").width();
    $(".left").each(function (i) {
        var width = $(this).width();
        var leftoffset = contentwidth * .4 - width * 1 - 2;
        $(this).css("left", leftoffset);
    });
}
And changed things here and there in the CSS.