PhiloLogic4: The Big Picture

While Clovis and I continue to document various aspects of PhiloLogic4's architecture and design, it may be helpful to keep in mind a sort of top-level "bird's-eye view" of the system as a whole.  PhiloLogic does a huge number of different things at different times, and it can be very difficult to keep them all organized. My best attempt to convey it in a single diagram is below:

As with PhiloLogic3, the foundation of all PhiloLogic services is a set of C functions, which are now collected together in a library called "libphilo", contained in the main PhiloLogic4 github repository.  These provide the high-performance compression, indexing, and search algorithms that distinguish PhiloLogic from most other XML and database technologies.  

This C library is the building block upon which all of PhiloLogic4's python library classes are built.  The two most important are 
  1. the Loader class, which controls parsing and indexing TEI XML files, and 
  2. the DB class, which governs all access to a PhiloLogic database.  

These classes themselves make use of other classes, most of which appear in the diagram above; it's extremely important to note that the Loader and the DB share almost no behaviors or components.  

This separation is a point of departure from most other database systems: in PhiloLogic4, the set of components that produce a database is distinct from the set of components that query an existing database.  We refer to the time when XML documents are ingested and indexed as load-time, and the time when a user queries the database as run-time or query-time.

Although one of the original design goals of PhiloLogic4 was to focus on the development of a more generalized library for TEI processing, it became clear at some point that a set of general behaviors was not enough, and that pragmatic development required two additional components:
  1. a general-purpose document-ingesting script, capable of handling errors and ambiguity, and
  2. a readymade web application suitable for most purposes, and customizable for others

These components were built as applications making use of the standard library components, and allow a PhiloLogic developer to specify all text- and language-specific features without modification of any shared functions.

The load script has been described already in a previous post, but it is worth revisiting in this broader context.  The load script is responsible for three fundamental tasks:
  1. taking command-line arguments from the user, and passing all the supplied files into the loader class, along with additional parameters
  2. storing all system-specific configuration parameters: hostname, filesystem locations, etc.
  3. storing all text-specific configuration parameters: XPaths, tokenization regexes, special filters, etc.
When the load script has finished running, it moves the loaded database into an appropriate path in the web server's document tree, and creates a web application around it.  This is the very same web application described in Clovis's recent post.  It is created by copying a set of files stored elsewhere, typically in the PhiloLogic4 install directory, although specifying another set of files to "clone" from is possible.  It is important to note that, by convention, we refer to the web application together with the database that it accesses as a "database", as one almost never exists without the other, and this is reflected in the diagram above.  

The behavior of such a database/application is just as Clovis described it: all queries go to one of several "report generators", which interpret query parameters and access the database accordingly.  They produce a result object, a python object that maps very closely to a JSON object--that is, a single dictionary literal consisting of other literals, without functions, tuples, lambdas, objects, and other such structures that cannot be expressed in JSON.  This result object is then passed on to a Mako template file, which can transform the result into HTML viewable by a web browser, which is finally returned to the user--"finally" usually meaning under 100 milliseconds, of course.  
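To make that constraint concrete, here is a minimal sketch of such a result object (the field names are invented for illustration, not PhiloLogic4's actual schema): because it contains only dictionaries, lists, strings, and numbers, it survives a JSON round trip unchanged.

```python
import json

# A hypothetical concordance result object (field names invented for
# illustration): only dicts, lists, strings, and numbers appear --
# no functions, tuples, lambdas, or custom objects.
results = {
    "query": {"report": "concordance", "q": "liberte"},
    "results_length": 2,
    "results": [
        {"philo_id": "1 2 3", "author": "Rousseau", "context": "...liberte..."},
        {"philo_id": "4 5 6", "author": "Voltaire", "context": "...liberte..."},
    ],
}

# Because nothing in the object falls outside the JSON data model,
# it survives a serialization round trip unchanged.
roundtrip = json.loads(json.dumps(results))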

Over the coming months, Clovis and I will be describing many of these components in detail, and this post may be updated as this larger documentation project proceeds; but for now, I hope it serves as a helpful overview of PhiloLogic4.

General Overview of PhiloLogic4's Web Architecture

Very early in the development of PhiloLogic4, we decided to separate the core library (the C core and Python bindings) from the actual Web interface. While there is still a clear separation between the Web environment and the library code, the two are nevertheless interdependent, which is to say that one cannot function independently of the other (unless you intend to use the library functions on the command line...)

As such, the Web component of PhiloLogic4 was designed as a Web Application, and each database functions as its own individual Web App. This allows for greater flexibility and customization. With PhiloLogic4, we wanted the Web layer to be the only part of the code a database developer has to deal with. We even went so far as to offer configuration options that drastically change the behavior of our various utilities. Before I start diving into each individual component (in later posts), I wanted to give a general picture of the Web app, as well as an idea of its features and flexibility.

The application is at its core a Python WSGI app which handles (most) requests through a script that interprets queries and reroutes them to the relevant parts of the application. The results of requests are rendered in HTML thanks to the use of Mako, a powerful and easy-to-use template library. A description of the general layout of the Web App will give a better idea of how the PhiloLogic4 Web App functions.
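As a rough sketch of that dispatch pattern (the report names and functions here are invented stand-ins, not PhiloLogic4's actual code):

# Minimal WSGI dispatcher sketch: interpret the query string and
# reroute the request to the named report generator.
from urllib.parse import parse_qs

def concordance(params):
    # Stand-in for a real report generator.
    q = params.get("q", [""])[0]
    return "<html><body>concordance results for %s</body></html>" % q

reports = {"concordance": concordance}

def application(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    report_name = params.get("report", ["concordance"])[0]
    html = reports[report_name](params)
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [html.encode("utf-8")]
status = []
body = application({"QUERY_STRING": "report=concordance&q=test"}, lambda s, h: status.append(s))
assert status == ["200 OK"]
assert b"test" in body[0]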

There are four distinct sections (besides CSS and JS resources) inside the application:
  • The reports directory, which contains the major search reports which fetch data from the database by interfacing with the core library, and then return a specialized results report. These reports include concordance, KWIC (Key Word In Context), collocation, and time series. 
  • The functions directory, which contains all of the generic functions used by individual reports. These functions include parsing the query string, loading web configuration options, access control, etc. 
  • The scripts directory, which contains standalone CGI scripts that are called directly from Javascript code on the client side. These scripts bypass the dispatcher and have a very specialized purpose, such as returning the total number of hits for any given query, or switching from a concordance display to a KWIC display.
The first three directories contain all that is necessary to return initial results to the client. The CGI scripts contained in /scripts provide additional functionality made possible by the use of Javascript in our Web Client. Significant work has been done to provide a dynamic and interactive Web interface, and this was made possible via heavy use of Javascript throughout the application, something which I'll describe in greater detail in another post.

Another design decision we made, somewhat late in the development process, was to rely on a CSS/JS framework for the layout of our HTML. We decided to use Bootstrap for its flexibility and responsiveness. As a result, PhiloLogic4 should work on any screen, be it phone, tablet or computer, although some functionality (such as KWIC reports) is hidden on smaller screens due to the limited space available.

Finally-- and I will go into much further detail in a separate post--we've designed a RESTful API that provides access to the full functionality of our web app. This is made possible by delaying for as long as possible the process of choosing to render search results as HTML or JSON. Basically, we expose the same results object to the HTML renderer (the Mako templates) that we do to any potential client. This design feature has allowed us to build a PhiloReader Android client application, focused on reading, by calling the relevant APIs needed for such functionality.
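In sketch form, the idea is that format selection is a thin final step over one shared result object (the function and field names here are illustrative):

# The same result object backs both outputs; only the last step differs.
import json

def render_results(results, fmt="html"):
    if fmt == "json":
        return json.dumps(results)
    # Stand-in for the Mako template render step.
    items = "".join("<li>%s</li>" % hit["context"] for hit in results["results"])
    return "<ul>%s</ul>" % items

results = {"results": [{"context": "...liberty..."}]}
html = render_results(results)               # for the browser
api_response = render_results(results, fmt="json")  # for any API client
assert "<li>...liberty...</li>" in html
assert json.loads(api_response) == results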

In my next post on the Web Application, I will go through the various configuration options available. 

PhiloLogic4 Load Script Architecture

Clovis and I have been doing a great deal of work lately on PhiloLogic4's document-loading process, and I feel that it's matured enough to start documenting in detail.  The best place to start is with the standard PhiloLogic4 load script, which you can look at on github if you don't have one close at hand.

The load script works more or less like the old philoload script, with some important differences:

  1. The load script is not installed system-wide--you generally want to keep it near your data, with any other scripts. 
  2. The load script has no global configuration file--all configuration is kept separate in each copy of the script that you create.
  3. The PhiloLogic4 Parser class is fully configurable from the load script--you can change any XPaths you want, or even supply a replacement Parser class if you need to.
  4. The load script is designed to be short, and easy to understand and modify.
The most important pieces of information in any load script are the system setup variables at the top of the file.  These will give immediate errors if they aren't set up right.  

database_root is the filesystem path to the web-accessible directory where your PhiloLogic4 database will live, like /var/www/philologic/.  Your webserver process will need read access to it, and you will need write access to create the database.  Don't forget to keep the slash at the end of the directory, or you'll get errors.

url_root is the HTTP URL at which the database_root directory is accessible; the exact mapping will depend on your DNS setup, server configuration, and other hosting issues outside the scope of this document.

template_dir, which defaults to database_root + "/_system_dir/_install_dir/", is the directory containing all the scripts, reports, templates, and stylesheets that make up a PhiloLogic4 database application.  If you have customized behaviors or designs that you want reflected in all of the databases you build, you can keep those templates in a directory on their own where they won't get overwritten.  

(At the moment, you can't "clone" the templates from an existing database, because the actual database content can be very large, but we'd very much like to implement that feature in the future to allow for easy reloads.)
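Put together, the top of a load script might look like this (the paths and hostname are illustrative values, not defaults to copy blindly):

# Illustrative values -- substitute your own server layout.
database_root = "/var/www/philologic/"  # web-accessible; trailing slash required
url_root = ""  # how database_root is reached over HTTP

# Defaults to the templates shipped in the install directory; point this
# elsewhere to reuse your own customized templates across databases.
template_dir = database_root + "/_system_dir/_install_dir/"
assert database_root.endswith("/")
assert template_dir.startswith(database_root)
assert url_root.startswith("http")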

Most of the rest of the file is configuration for the Loader class, which does all of the real work, but the config is kept here, in the script, so you don't have to maintain custom classes for every database. 

For now, it's just important to know what options can be specified in the load script:
  1. default_object_level defines the type of object returned for the purpose of most navigation reports--for most databases, this will be "doc", but you might want to use "div1" for dictionary or encyclopedia databases.
  2. navigable_objects is a list of the object types stored in the database and available for searching, reporting, and navigation--("doc","div1","div2","div3") is the default, but you might want to append "para" if you are parsing interesting metadata on paragraphs, like in drama.  Pages are handled separately, and don't need to be included here.
  3. filters and post_filters are lists of modular loader functions, executed in order--their behavior and design will be documented separately, but they shouldn't be modified carelessly.
  4. plain_text_obj is a very useful option that generates flat text file representations of all objects of a given type, like "doc" or "div1", usually for data mining with Mallet or some other tool.
  5. extra_locals is a catch-all list of extra parameters to pass on to your database later, if you need to--think of it as a "swiss army knife" for passing data from the loader to the database at run-time.
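A load script might set these options like so (the values, and the exact shape of each option, are illustrative assumptions rather than guaranteed defaults):

# Illustrative loader options for a typical TEI load.
default_object_level = "doc"  # or "div1" for dictionaries and encyclopedias

# Pages are handled separately and don't belong here; append "para"
# if you parse interesting paragraph-level metadata, as in drama.
navigable_objects = ("doc", "div1", "div2", "div3")

# Ordered pipelines of modular loader functions -- extend with care,
# since later stages can depend on earlier ones.
filters = []       # populated with the default filter functions
post_filters = []  # populated with the default post-filter functions

plain_text_obj = ["div1"]  # dump flat text per div1, e.g. for Mallet
extra_locals = {}          # catch-all passed through to the database at run-time
assert default_object_level == "doc"
assert navigable_objects == ("doc", "div1", "div2", "div3")
assert extra_locals == {}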
The next section of the load script is setup for the XML Parser:

This is a bit complex, and will be explored in depth in a separate post, but the basic layout is this:
  1. xpaths is a list of 2-tuples that maps philologic object types to absolute XPaths--that is, XPaths evaluated where "." refers to the TEI document root element.  You can define multiple XPaths for the same type of object, but you will get much better and more consistent results if you do not.
  2. metadata_xpaths is a list of 3-tuples that map one or more XPaths to each metadata field defined on each object type.  These are evaluated relative to whatever XML element matched the XPath for the object type in question--so "." here refers to a doc, div1, or paragraph-level object somewhere in the xml.
  3. pseudo_empty_tags is a very obscure option for things that you want to treat as containers, even if they are encoded as self-closing tags.  
  4. suppress_tags is a list of tags in which you do not want to perform tokenization at all--that is, no words in them will be searchable via full-text search.  It does not prohibit extracting metadata from the content of those tags.
  5. word_regex and punct_regex are regular expression fragments that drive our tokenizer.  Each needs to consist of exactly one capturing subgroup so that our tokenizer can use them correctly. They are both fully unicode-aware--usually, the default \w class is fine for words, but in some cases you may need to add apostrophes and such to the word pattern.  Likewise, the punctuation regex pattern fully supports multi-byte utf-8 punctuation.  In both cases you should enter characters as unicode code points, not utf-8 byte strings.
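In the load script, that parser configuration might look roughly like this (the entries and tuple orderings are illustrative, not a documented schema):

import re

# Illustrative parser configuration.
xpaths = [
    ("doc", "."),         # the TEI root element is the document object
    ("div1", ".//div1"),  # one XPath per object type gives the most consistent results
]

metadata_xpaths = [
    # (object type, XPath relative to that object's element, field name)
    ("doc", ".//titleStmt/title", "title"),
    ("doc", ".//titleStmt/author", "author"),
]

pseudo_empty_tags = ["pb"]     # treat as containers despite self-closing encoding
suppress_tags = ["teiHeader"]  # no tokenization inside these

word_regex = r"([\w]+)"        # exactly one capturing subgroup
punct_regex = r"([\.,;:!\?])"

# sanity check: the tokenizer requires exactly one capturing subgroup each
assert re.compile(word_regex).groups == 1
assert re.compile(punct_regex).groups == 1
assert re.compile(word_regex).groups == 1
assert re.compile(punct_regex).groups == 1
assert re.match(word_regex, "liberté") is not None
assert ("doc", ".") in xpaths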
The next section consists of just a few scary incantations that shouldn't be modified:

But the following 2 sections are where all the work gets done, and an important place to perform modifications.   First, we construct the Loader object, passing it all the configuration variables we have constructed so far:

Then we operate the Loader object step-by-step:

And that's it!  

Usually, these load functions should all be executed in the same order, but it is worth paying special attention to the load_metadata variable that is constructed right before l.parse_files is called.  This variable controls the entire parsing process, and is incredibly powerful.  Not only does it let you define any order in which to load your files, but you can also supply any document-level metadata you wish, and change the xpaths, load_filters, or parser class used per file, which can be very useful on complex or heterogeneous data sets.  However, this often requires either some source of stand-off metadata or a pre-processing/parsing stage.
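For instance, a hand-built load_metadata might look like this (the field names and the one-dict-per-file shape are assumptions for illustration, not documented guarantees):

# One record per file; parsing follows list order.
load_metadata = [
    {"filename": "voltaire_candide.xml", "author": "Voltaire", "date": "1759"},
    {"filename": "rousseau_contrat.xml", "author": "Rousseau", "date": "1762"},
]

# Sorting the list controls load order:
load_metadata.sort(key=lambda record: record["date"])
assert [r["date"] for r in load_metadata] == ["1759", "1762"]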

For this purpose, we've added a powerful new Loader function called sort_by_metadata which integrates the functions of a PhiloLogic3 style metadata guesser and sorter, while still being modular enough to be replaced entirely when necessary.  We'll describe it in more detail in a later post, but for now, you can look at the new artfl_load_script to get a sense of how to construct a more robust, fault-tolerant loader using this new function.

Up next: the architecture of the PhiloLogic Loader class itself.

shlax and ElementTree

I've just pushed a few commits to the central philo4 repository;
mostly small bugfixes to the makefile and the parser, but I added a convenience method to the shlax XML parser.

As you may know, Python has a really nice XML library called ElementTree, but it has a few quirks:
1) it uses standard, "fussy" XML parsers that choke on the slightest flaw, and
2) it has a formally correct but incomprehensible approach to namespaces that is exceedingly impractical for day-to-day TEI hacking.

In this update, I've added a shlaxtree module to the philo4 distribution that hooks our fault-tolerant, namespace-agnostic XML parser up to ElementTree's XPath evaluator and serialization facilities. It generally prefers the 1.3 version of ElementTree, which is standard in python 2.7, but it is a simple install in 2.6 and 2.5.

Basically, the method philologic.shlaxtree.parse() will take in a file object, and return the root node of the xml document in the file, assuming it found one. You can use this to make a simple bibliographic extractor like so:

#!/usr/bin/env python
import philologic.shlaxtree as st
import sys
import codecs

for filename in sys.argv[1:]:
    file =, "r", "utf-8")
    root = st.parse(file)
    header = root.find("teiHeader")
    print header.findtext(".//titleStmt/title")
    print header.findtext(".//titleStmt/author")

Not bad for 10 lines, no? What's really cool is that you can modify trees, nodes, and fragments before writing them out, with neat recursive functions and what not. I've been using it for converting old SGML dictionaries to TEI--once you get the hang of it, it's much easier than regular expressions, and much easier to maintain and modify as well.

shlax: a shallow, lazy XML parser in python

Recently, I stumbled upon a paper from the dawn age of XML:

"REX: XML Shallow Parsing with Regular Expressions", Robert D. Cameron

It describes how to do something I'd never seen done before: parse the entirety of standard XML syntax in a single regular expression.

We've all written short regexes to find some particular feature in an XML document, but we've also all seen those fail because of oddities of whitespace, quoting, linebreaks, etc., that are perfectly legal, but hard to account for in a short, line-by-line regular expression.
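A small demonstration of the problem: the tag below is perfectly legal XML, but a naive line-oriented pattern misses it entirely.

# A legal start tag, split across lines and with loose attribute
# spacing -- exactly the sort of thing naive patterns miss.
import re

doc = '<div1\n    type = "chapter"\n>text</div1>'

naive = re.compile(r'<div1 type="chapter">')
# Accounting for whitespace and quoting makes even this tiny case ugly:
robust = re.compile(r'<div1\s+type\s*=\s*["\']chapter["\']\s*>')
assert naive.search(doc) is None
assert robust.search(doc) is not None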

Standard XML parsers, like expat, are fabulous, well maintained, and efficient. However, they have a common Achilles' heel: the XML standard's insistence that XML processors "MUST" report a fatal error if a document contains unbalanced tags. For working with HTML- or SGML-based documents, this is disastrous!

In contrast, Cameron's regex-based parser is extremely fault-tolerant--it extracts as much structure from the document as possible, and reports the rest as plain text. Further, it supports "round-tripping": the ability to exactly re-generate a document from parser output, which standard parsers typically lack. As a corollary of this property, it becomes possible to report absolute byte offsets, which is a "killer feature" for the purposes of indexing.

Because of all these benefits, I've opted to translate his source code from javascript to python. I call my modified implementation "shlax" [pronounced like "shellacs", sort of], a shallow, lazy XML parser. "Shallow" meaning that it doesn't check for well-formedness, and simply reports tokens, offsets, and attributes as best it can. "Lazy" meaning that it iterates over the input, and yields one object at a time--so you don't have to write 8 asynchronous event handlers to use it, as in a typical SAX-style parser. This is often called a "pull" parser, but "shpux" doesn't sound as good, does it?

If you're interested, you can look at the source at the libphilo github repo. The regular expression itself is built up over the course of about 30 expressions, to allow for maintainability and readability. I've made some further modifications to Cameron's code to fit our typical workflow. I've buffered the text input, which allows us to iterate over a file-handle, rather than a string--this saves vast amounts of memory for processing large XML files, in particular. And I return "node" objects, rather than strings, that contain several useful items of information:
  1. the original text content of the node
  2. the "type" of the node: text, StartTag, EndTag, or Markup [for DTDs, comments, etc.]
  3. any attributes the node has
  4. the absolute byte offset in the string or file
You don't need anything more than that to power PhiloLogic. If you'd like to see an example of how to use it, take a look at my DirtyParser class, which takes as input a set of xpaths to recognize for containers and metadata, and outputs a set of objects suitable for the index builder I wrote about last time.
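To illustrate the pull style in miniature (this is a toy, not shlax itself; the node fields simply mirror the list above):

# A generator that yields one node at a time, so the caller drives
# the loop -- no asynchronous event handlers required.
import re
from collections import namedtuple

Node = namedtuple("Node", "type content attributes offset")

def toy_parse(text):
    pattern = re.compile(r"<(/?)(\w+)([^>]*)>|([^<]+)")
    for m in pattern.finditer(text):
        if m.group(4):
            yield Node("text", m.group(4), {}, m.start())
        elif m.group(1):
            yield Node("EndTag", m.group(0), {}, m.start())
        else:
            attrs = dict(re.findall(r'(\w+)="([^"]*)"', m.group(3)))
            yield Node("StartTag", m.group(0), attrs, m.start())

nodes = list(toy_parse('<div1 type="chapter">hello</div1>'))
assert [n.type for n in nodes] == ["StartTag", "text", "EndTag"]
assert nodes[0].attributes == {"type": "chapter"}
assert nodes[1].content == "hello"
assert nodes[1].offset == 21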

Oh, and about performance: shlax is noticeably slower than Mark's perl loader. I've tried to mitigate that in a variety of ways, but in general, python's regex engine is not as fast as perl's. On the other hand, I've recently had a lot of success with running a load in parallel on an 8-core machine, which I'll write about when the code settles. That said, if efficiency is a concern, our best option would be to use well-formed XML with a standard parser.

So, my major development push now is to refactor the loader into a framework that can handle multiple parser backends, flexible metadata recognizers, and multiple simultaneous parser processes. I'll be posting about that as soon as it's ready.

A Unified Index Construction Library

I've spent the last two weeks replacing PhiloLogic's index-construction routines, following my prior work on the query and database interfaces.

The legacy index-packing code dates back to sometime before PhiloLogic 2, and is spread over 3 executable programs linked together by a Makefile and some obscure binary state files.

Unfortunately, the 3 programs all link to different versions of the same compression library, so they couldn't simply be refactored and recompiled as a single unit.

Instead, I worked backwards from the decompression routines I wrote last month, to write a new index construction library from scratch.

Thus, I had the luxury of being able to define an abstract, high-level interface that meets my four major goals:

1) simple, efficient operation
2) flexible enough for various index formats
3) easy to bind to other languages
4) fully compatible with 3-series PhiloLogic

The main loop is below. It's pretty clean. All the details are handled by a hit-buffer object named "hb" that does compression, memory management, and database interfacing.
while (1) {
    // as long as we read lines from standard input.
    if (fgets(line, 511, stdin) == NULL) {
        break;
    }
    // scan for hits in standard Philo3 format.
    state = sscanf(line,
                   "%s %d %d %d %d %d %d %d %d %s\n",
                   word, &hit[0], ...);

    if (state == 10) {
        // if we read a valid hit...
        if (strcmp(word, hb->word)) {
            // ...and we have a new word...
            hitbuffer_finish(hb);     // write out the current buffer.
            hitbuffer_init(hb, word); // and reinitialize.
            uniq_words += 1LLU;       // LLU for a 64-bit unsigned int.
        }
        hitbuffer_inc(hb, hit); // add the hit to whichever word we're on.
        totalhits += 1LLU;
    } else {
        fprintf(stderr, "Couldn't understand hit.\n");
    }
}
The code is publicly available on github, but I'm having some problems with their web interface. I'll post a link once it's sorted out.

Vector Processing for OHCO

I've posted an expanded version of my CI Days talk on Google docs. I'd recommend looking at the speaker notes (click "actions" on the bottom left) since I won't be narrating it in person.

The presentation is an attempt to describe, somewhat formally, how PhiloLogic is capable of performing as well as it does. This comes from spending three years learning how Leonid's search core works, and attempting to extend and elucidate whatever I can. It's also the intellectual framework that I'm using to plan new features, like search on line and meter position, metadata, joins, etc. Hopefully, I can get someone who's better at math than I am to help me tighten up the formalities.

Basically, I refer to the infamous OHCO thesis as a useful axiom for translating the features of a text into a set of numerical objects, and then compare the characteristics of this representation to XML or Relational approaches. I'd love to know how interesting/useful/comprehensible others find the presentation, or the concept. What needs more explanation? What gets tedious?
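The core of the idea can be shown in a few lines: treat each word occurrence as a vector of integers addressing its position in the object hierarchy (the five-level shape and the words here are a simplification for illustration):

# Each hit is a vector of ints: (document, div1, div2, paragraph, word).
index = {
    "liberte": [(1, 2, 1, 4, 7), (1, 3, 2, 1, 12), (2, 1, 1, 2, 3)],
    "egalite": [(1, 3, 2, 1, 13), (2, 5, 1, 1, 9)],
}

def hits_in_document(word, doc):
    # Because hits are sorted vectors, restricting to a corpus
    # is just a prefix test on each vector.
    return [hit for hit in index[word] if hit[0] == doc]
assert hits_in_document("liberte", 2) == [(2, 1, 1, 2, 3)]
assert len(index["egalite"]) == 2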

If you look at the speaker notes, you can see me derive a claim that PhiloLogic runs 866 times faster than a relational database for word search. Math is fun!

PhiloLogic proto-binding for Python


In an earlier post, I mentioned that I'd try to call the philologic C routines via ctypes, a Python Foreign Function Interface library. I did, and it worked awesomely well! Ctypes lets you call C functions from python without writing any glue at all in some cases, giving you access to high-performance C routines in a clean, modern programming language. We'd ultimately want a much more hand-crafted approach, but for prototyping interfaces, this is a very, very useful tool.

First, I had to compile the search engine as a shared library, rather than an executable:

gcc -dynamiclib -std=gnu99 search.o word.o retreive.o level.o gmap.o blockmap.o log.o out.o plugin/libindex.a db/db.o db/bitsvector.o db/unpack.o -lgdbm -o libphilo.dylib

All that refactoring certainly paid off. The search4 executable will now happily link against the shared library with no modification, and so can any other program that wants high-speed text object search:


import sys,os
from ctypes import *

# First, we need to get the C standard library loaded in
# so that we can pass python's input on to the search engine.
stdlib = CDLL(None)
stdin = stdlib.fdopen(sys.stdin.fileno(),"r")
# Honestly, that's an architectural error.
# I'd prefer to pass in strings, not a file handle

# Now load in philologic from a shared library
libphilo = cdll.LoadLibrary("./libphilo.dylib")

# Give it a path to the database. The C routines parse the db definitions.
db = libphilo.init_dbh_folder("/var/lib/philologic/databases/mvotest5/")

# now initialize a new search object, with some reasonable defaults.
s = libphilo.new_search(db,"phrase",None,1,100000,0,None)

# Read words from standard input.

# Then dump the results to standard output.
# Done.

That was pretty easy, right? Notice that there weren't any boilerplate classes. I could hold pointers to arbitrary data in regular variables, and pass them directly into the C subroutines as void pointers. Not safe, but very, very convenient.

Of course, this opens us up for quite a bit more work: the C library really needs a lot more ways to get data in and out than a pair of input/output file descriptors, I would say. In all likelihood, after some more experiments, we'll eventually settle on a set of standard interfaces, and generate lower-level bindings with SWIG, which would allow us to call philo natively from Perl or PHP or Ruby or Java or LISP or Lua or...anything, really.

Ctypes still has some advantages over automatically-generated wrappers, however. In particular, it lets you pass python functions back into C, allowing us to write search operators in python, rather than C--for example, a metadata join, or a custom optimizer for part-of-speech searching. Neat!
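As a concrete, self-contained instance of that callback feature, here is the classic example of passing a Python comparison function into C's qsort (this uses libc rather than libphilo, but the mechanism is identical):

# Pass a Python function into C via ctypes: CFUNCTYPE wraps the
# comparator so qsort can call back into the interpreter.
from ctypes import CDLL, CFUNCTYPE, POINTER, c_int, c_size_t, sizeof

libc = CDLL(None)  # the C standard library is already loaded into the process

CMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))

def py_cmp(a, b):
    # An ordinary Python function, invoked from inside C.
    return a[0] - b[0]

values = (c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(values, c_size_t(len(values)), c_size_t(sizeof(c_int)), CMPFUNC(py_cmp))
assert list(values) == [1, 2, 3, 4, 5]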


Unix Daemon Recipes

I was digging through some older UNIX folkways when I stumbled upon an answer to a long-standing PhiloLogic design question:

How do I create a long-running worker process that will neither:

1) terminate when its parent terminates, such as a terminal session or a CGI script, or
2) create the dreaded "zombie" processes that clog process tables and eventually crash the system.

As it turns out, this is the same basic problem as any UNIX daemon program; this one just happens to be designed to, eventually, terminate. PhiloLogic needs processes of this nature in various places: most prominently, to allow the CGI interface to return preliminary results.

Currently, we use a lightweight Perl daemon process, called nserver, to accept search requests from the CGI scripts, invoke the search engine, and then clean up the process after it terminates. Effective, but there's a simpler way, with a tricky UNIX idiom.

First, fork(). This allows you to return control to the terminal or CGI script. If you aren't going to exit immediately, you should ignore SIGCHLD as well, so that you don't get interrupted later.

Second, have the child process call setsid() to gain a new session, and thus detach from the parent. This prevents terminal hangups from killing the child process.

Third, call fork() again, then immediately exit the (original) child. The new "grandchild" process is now an "orphan", and detached from a terminal, so it will run to completion, and then be reaped by the system, so you can do whatever long-term analytics you like.

A command line example could go like this:

use POSIX qw(setsid);

my $word = $ARGV[0] or die " word outfile\n";
my $outfile = $ARGV[1] or die " word outfile\n";

print STDERR "starting worker process.\n";

daemonize();

# Run the search, sending its output to the file named on the command line.
open(SEARCH, "| search4 --ascii --limit 1000000 /var/lib/philologic/somedb > $outfile");
print SEARCH "$word\n";
close(SEARCH);

sub daemonize {
    open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
    open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
    open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
    defined(my $childpid = fork) or die "Can't fork: $!";
    if ($childpid) {
        print STDERR "[parent process exiting]\n";
        exit;
    }
    setsid or die "Can't start a new session: $!";
    print STDERR "Child detached from terminal\n";
    defined(my $grandchildpid = fork) or die "Can't fork: $!";
    if ($grandchildpid) {
        print STDERR "[child process exiting]\n";
        exit;
    }
    umask 0;
}

The benefit is that a similar &daemonize subroutine could entirely replace nserver, and thus vastly simplify the installation process. There's clearly a lot more that could be done with routing and control, of course, but this is an exciting proof of concept, particularly for UNIX geeks like myself.

The Joy of Refactoring Legacy Code

I've spent the last few weeks rehabbing PhiloLogic's low-level search engine, and I thought I'd write up the process a bit.

PhiloLogic is commonly known as being a rather large Perl/CGI project, but all of the actual database interactions are done by our custom search engine, which is in highly optimized C. The flow of control in a typical Philo install looks something like this:

--CGI script search3t accepts user requests, and parses them.
--CGI passes requests off to a long-running Perl daemon process, called nserver.
--nserver spawns a long-running worker process search3 to evaluate the request
--the worker process loads in a compiled decompression module, at runtime, specific to the database.
--search3t watches the results of the worker process
--when the worker is finished, or outputs more than 50 results, search3t passes them off to a report generator.

This architecture is extremely efficient, but as PhiloLogic has accrued features over the years it has started to grow less flexible, and parts of the code base have started to decay. The command line arguments to search3, in particular, are arcane and undocumented. A typical example:

export SYSTEM_DIR=/path/to/db
export LD_LIBRARY_PATH=/path/to/db/specific/decompression/lib/
search3 -P:binary -E:L=1000000 -S:phrase -E:L=1000000 -C:1 /tmp/corpus.10866 

The internals are quite a bit scarier.  Arguments are processed haphazardly in bizarre corners of the code, and many paths and filenames are hard-coded in.  And terrifying unsafe type-casts abound.  Casting a structure containing an array of ints into an array of ints?  Oh my.

I've long been advocating a much, much simpler interface to the search engine. The holy grail would be a single-point-of-entry that could be installed as a C library, and called from any scripting language with appropriate interfacing code. There are several obstacles, particularly with respect to caching and memory management, but the main one is organizational.

How do you take a 15-year-old C executable, in some state of disrepair, and reconfigure the "good parts" into a modern C library? Slowly and carefully. Modern debugging tools like Valgrind help, as does the collective C wisdom preserved by Google. A particular issue is imperative vs. object-oriented or functional style. Older C programs tend to use a few global variables to represent whatever global data structure they work upon--in effect, what modern OOP practices would call a "singleton" object, but in practice a real headache.

For example, PhiloLogic typically chooses to represent the database being searched as a global variable, often set in the OS's environment. But what if you want to search two databases at once? What if you don't have a UNIX system? An object-oriented representation of the large-scale constructs of a program allows the code to go above and beyond its original purpose.

Or maybe I'm just a neat freak--regardless, the [simplified] top-level architecture of 'search3.999' {an asymptotic approach to an as-yet unannounced product} should show the point of it all:

    static struct option long_options[] = {
        {"ascii", no_argument, 0, 'a'},
        {"corpussize", required_argument, 0, 'c'},
        {"corpusfile", required_argument, 0, 'f'},
        {"debug", required_argument, 0, 'd'},
        {"limit", required_argument, 0, 'l'},
        {0, 0, 0, 0}
    };

    //...process options with GNU getopt_long...

    db = init_dbh_folder(dbname);
    if (!method_set) {
        //...fall back to the default search method...
    }
    s = new_search(db, ...);
    status = process_input(s, stdin);

    //...print output...
    //...free memory...
    return 0;

An equivalent command-line call would be:
search3999 --ascii --limit 1000000 --corpussize 1 --corpusfile /tmp/corpus.10866 dbname search_method
which is definitely an improvement.  It can also print a help message.

Beyond organizational issues, I also ended up rewriting large portions of the decompression routines.  The database can now fully configure itself at runtime, which adds about 4 ms to each request, but with the benefit that database builds no longer require compilation.  TODO: The overhead can be eliminated if we store the database parameters as integers, rather than as formatted text files.

I think at this point the codebase is clean enough to try hooking up to python, via ctypes, and then experiment with other scripting language bindings.  Once I clean up the makefiles I'll put it up on our repository.

