[previous] [next] [top] [search] [index]

WN Utility Programs


The main utility program used by WN is wndex, which produces the index.cache files from index files. Its use is described in detail in the chapter on creating your data hierarchy. In this chapter we consider some other utilities, mostly perl scripts, which are useful in maintaining your server.

Digest

Digest is a perl script which can be found in the bin directory of the distribution. This program is designed to work with the range feature of the WN server and with list searches.

Here is how it works. The digest utility is executed with three arguments: two regular expressions and the name of a structured file, such as a mail file, news digest or address list. The first regular expression should match the section separator of the structured file and the second should match the beginning of the line to be used as the section title. For a mail digest, for example, these could be ^From and ^Subject: respectively. The third argument should be the name of the mail file. For example the command

     digest ^From ^Subject: foo
produces a file named foo.index.html which consists primarily of an unordered list. Each item in the list is an anchor referring to a line range in foo -- the ranges being delimited by lines which match the first regular expression argument. In this case that means each range will start with a line beginning with "From" which is the marker in a mail file designating the start of a new message. The title of each range is taken from the first line in the range which contains a match for the second regular expression and, in fact, the title will consist of everything on that line after the matched regular expression. In this case that means the title will be everything after the word "Subject:" on the message subject line.

The first line of each range or section is a line which matches the first regular expression, and the next matching line begins the next section. Normally the search for a match for the title regular expression begins with this first line. However, it is sometimes useful to skip this first line in the search for a title match. This can be done by starting the second regular expression with the character '$'. For example the command

     digest ^$ $^ foo
says to divide foo into sections (line ranges) separated by blank lines (the regular expression ^$ matches a blank line). To obtain a title for each section the blank line is skipped (since the second regular expression starts with $) and then everything on the next line is taken as the title (since ^ matches the beginning of that line). The regular expressions of this example would be useful for an address list foo consisting of multiline records separated by blank lines, with an individual's name on the first line of each record. The digest utility would then produce a foo.index.html file with an unordered list of anchors, one for each individual in the list. Selecting an anchor presents the record for that individual. Using a list search for this file would allow a form user to enter a name or regular expression and obtain a list of anchors for matching items.
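The splitting-and-titling logic just described can be sketched in Python. This is a hypothetical re-implementation for illustration, not the distributed perl script, and the ";lines=" form used in the anchor URLs is an assumption about WN's range syntax; adjust it to match your server.

```python
import html
import re

def digest(sep_pat, title_pat, fname):
    """Split fname into line ranges at lines matching sep_pat and return
    an HTML unordered list with one anchor per range, titled by the text
    following the first match of title_pat within that range."""
    # A leading '$' on the title pattern means: skip the separator line
    # itself when searching for the title.
    skip_first = title_pat.startswith('$')
    if skip_first:
        title_pat = title_pat[1:]
    sep = re.compile(sep_pat)
    title_re = re.compile(title_pat)

    with open(fname) as f:
        lines = f.readlines()

    # Each range starts at a separator match and ends just before the next.
    starts = [i for i, line in enumerate(lines) if sep.search(line)]
    ranges = [(s, starts[k + 1] - 1 if k + 1 < len(starts) else len(lines) - 1)
              for k, s in enumerate(starts)]

    items = []
    for start, end in ranges:
        first = start + 1 if skip_first else start
        title = "(untitled)"
        for line in lines[first:end + 1]:
            m = title_re.search(line)
            if m:
                # Title is everything on the line after the matched text.
                title = line[m.end():].strip()
                break
        # NOTE: ";lines=" is an assumed URL form for WN line ranges.
        items.append('<li><a href="%s;lines=%d-%d">%s</a></li>'
                     % (fname, start + 1, end + 1, html.escape(title)))
    return "<ul>\n" + "\n".join(items) + "\n</ul>\n"
```

Run with the same arguments as the examples above, e.g. digest(r'^From ', r'^Subject:', 'foo') for a mail file, or digest(r'^$', r'$^', 'foo') for blank-line-separated records.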

There are fancier tools than digest for displaying mail archives, but this utility has great flexibility for dealing with a wide variety of structured files.

PNUTS

PNUTS (pronounced "peanuts") is an acronym for previous, next, up, top, search. It is a perl script which takes as its argument the name of a file describing the hierarchical structure of a group of HTML files constituting a single virtual document. The pnuts program then searches these files for lines like
     <!-- pnuts -->
which it replaces with a sequence of anchors like

[previous] [next] [up] [top] [search] [index]

with links to the relevant files in the virtual document. Actually it replaces this line with a single line starting with <!-- pnuts -->, followed by the anchors. That way the next time it is run, say after inserting a new chapter in your document, the "pnuts" line will be replaced by a new one with the appropriate links.

The pnuts program is run with a command like

     pnuts -s dosearch.html -i docindex.html foo.pnuts

The argument -s dosearch.html is optional and supplies the URL to be anchored to [search]. Thus if just "dosearch.html" is used this will be an anchor linking to a relative URL. Instead you could use a full URL like "http://hostname/dir/file". If there is no -s argument then there will be no search item in the list of items inserted by pnuts. The optional argument -i docindex.html is similar to the -s option except that it provides the URL (relative or absolute) which should be anchored to [index]. This URL typically points to an HTML document created with indexmaker.

The file foo.pnuts contains the information by which pnuts knows which files to process and what the order of those files should be. It consists of a list of files relative to the current directory, one per line, in the order which should be reflected in the [previous] and [next] links. If a file is hierarchically one level lower than the previous file, this should be indicated by preceding its name with one more character (e.g. a tab) than the preceding file. Here is an example:

     top.html
     second.html
     <tab>firstsub.html
     <tab><tab>subsub.html
     <tab>secondsub.html
     third.html

If this list is supplied to pnuts it will insert anchors into all these files wherever <!-- pnuts --> occurs. All those named [top] will point to the file top.html. In firstsub.html and secondsub.html the [up] link will point to second.html. The [previous] and [next] links will reflect the order top.html, second.html, firstsub.html, subsub.html, secondsub.html, third.html.
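The link bookkeeping just described can be sketched as follows. This is a Python sketch of how the hierarchy in the .pnuts file determines each file's link targets, not the actual perl script; it computes the targets only and leaves the file rewriting aside.

```python
def pnuts_links(listing):
    """Given the lines of a .pnuts file, return a dict mapping each file
    name to its previous/next/up/top link targets.  Depth is the number
    of leading whitespace characters, one per hierarchy level."""
    entries = []  # (depth, name) in document order
    for line in listing:
        if not line.strip():
            continue
        depth = len(line) - len(line.lstrip())
        entries.append((depth, line.strip()))

    top = entries[0][1]
    links = {}
    for i, (depth, name) in enumerate(entries):
        # [previous] and [next] simply follow the listing order.
        prev_name = entries[i - 1][1] if i > 0 else None
        next_name = entries[i + 1][1] if i + 1 < len(entries) else None
        # [up] points to the nearest earlier entry one level shallower.
        up = None
        for d, n in reversed(entries[:i]):
            if d < depth:
                up = n
                break
        links[name] = {"previous": prev_name, "next": next_name,
                       "up": up, "top": top}
    return links
```

Feeding it the example listing above yields second.html as the [up] target of both firstsub.html and secondsub.html, firstsub.html as the [up] target of subsub.html, and top.html as [top] everywhere.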

Indexmaker

This is a perl script whose function is to produce an index (in the usual sense not the WN sense) for a virtual document consisting of a number of HTML files in a single directory. The index to this guide is a good example of how an index produced by indexmaker works. The indexmaker program is run with a command like
     indexmaker -d path  -t "Index Title" -o outputfile words

Here the -d, -t and -o arguments are optional. The -t option supplies the title for the HTML document produced. If no -t argument is given then "Index" is used as the title. The -o option provides a name for the output HTML file -- the default being docindex.html. The -d option supplies the directory containing the files being indexed. Its value should either begin with a '/' and be relative to the WN root directory, or not begin with a '/' and be relative to the directory which will contain the docindex.html file. If there is no -d option then the docindex.html file must reside in the same directory as the files being indexed. In that case it is a good idea to add "Attribute=nosearch" to the docindex.html record in the index file for the directory. Otherwise docindex.html will index itself in addition to the other files in the directory.

The final argument to indexmaker is the file words, a list of words or phrases, in alphabetical order, one per line, which you wish to appear in the index. One way to produce it is to use UNIX utilities to produce a list of all words in the files, then run sort -dfu on it and remove unsuitable words from the list.
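For example, a pipeline along these lines extracts and sorts the candidate word list (one possible approach; adjust the character class and file names to suit your files):

```shell
# Break the files into alphabetic runs, one word per line (tr -cs
# replaces runs of non-letters with single newlines), then sort in
# dictionary order (-d), folding case (-f), dropping duplicates (-u).
cat *.html | tr -cs 'A-Za-z' '\n' | sort -dfu > words
```

You would then edit words by hand to delete entries not worth indexing before handing it to indexmaker.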

The indexmaker program produces a long list of anchors, one for each word in the words file. Each word is linked to a context search for itself.
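In outline, the generation step might look like this. This is a Python sketch rather than the perl script itself, and the "search=context?word" href form is a placeholder assumption; substitute the actual search URL form your WN setup uses.

```python
import html
import urllib.parse

def indexmaker(words, directory="", title="Index"):
    """Emit an HTML index page: one anchor per word, each linking that
    word to a context search over the files in `directory`.
    ASSUMPTION: the "search=context?word" query form is hypothetical."""
    items = []
    for word in words:
        href = "%ssearch=context?%s" % (directory, urllib.parse.quote(word))
        items.append('<a href="%s">%s</a>' % (href, html.escape(word)))
    return ("<html><head><title>%s</title></head>\n<body>\n<h1>%s</h1>\n"
            "%s\n</body></html>\n"
            % (html.escape(title), html.escape(title), "<br>\n".join(items)))
```

The directory argument plays the role of the -d option: prepended to each link so that the searches resolve against the indexed directory rather than the one holding docindex.html.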


WN -- for those who think the Web should be more than a user friendly interface to ftp

John Franks <john@math.nwu.edu>