Summer vacation

Summer vacation is just about over 🙁

My last day at the Internet Archive was June 7, 2013. After five and a half years, it was time to move on to new things. Sad to go, but for me, a change of environment and a new set of challenges is needed every so often to force myself to grow and develop in new and (hopefully) interesting ways, both professionally and personally.

Fortunately, IA is letting me keep this blog, which I intend to continue to use to write about interesting things I work on, or just as a means to comment on goings-on in the industry.

Watch this space.

EDIT: After writing this, I noticed that I had two posts still in the draft stage from earlier this summer. So I went ahead and emptied the rain barrel:

Title linkers

We looked at a lot of different web graphs with Hadoop. Back in Feb. 2011, I wrote a Pig script to analyze the links between domains (websites) where the links had a specific property: the link text matched exactly the title of the linked-to page. For example:

<a href="">Aaron's Awesome Homepage!</a>

In this case, the link text “Aaron’s Awesome Homepage!” exactly matches the title of the linked-to page.

I hypothesized that for inter-site links, this property would indicate a tighter coupling between the pages compared to a link where the link text and page title did not match. In the end, I’m not so sure. I mean, I suspect that the prevalence of the titles matching is mostly due to webpage editing systems. If most of the pages in this experiment were created via content management systems, then the editing software probably aided in construction of the link and could have easily scraped the title off the linked-to page when the URL was entered by the human editor.

In any case, here we go.

The dataset was a sample of 850 million web pages from the Wayback Machine, circa 2004, focused primarily on news sites and blogs. As you would suspect, most of the archived web pages in the sample concerned the Iraq War.

The relevant portion of the Pig script is as follows. It’s conceptually simple: join a list of page metadata with a list of links based on URL, then keep only the rows where the link text matches the page title.

/* Load the link graph in the form the same as the example table above. */
meta  = LOAD '$METADATA' AS (url:chararray,title:chararray);
links = LOAD '$LINKS'    AS (from:chararray,to:chararray,anchor:chararray);
/* Eliminate empty titles and anchors */
meta  = FILTER meta  BY title  != '';
links = FILTER links BY from   != to AND anchor != '';
/* Generate domains for the from and to urls */
links = FOREACH links GENERATE from, DOMAIN( from ) as fromdomain, to, DOMAIN( to ) as todomain, anchor ;
/* Eliminate intra-domain links */
links = FILTER links BY fromdomain != todomain ;
/* Join the links to the metadata records by URL */
results = JOIN meta BY url, links BY to;
/* Now only keep those where the title matches the anchor text */
results = FILTER results BY meta::title == links::anchor;
/* Only keep the link info for the output */
results = FOREACH results GENERATE links::fromdomain, links::todomain ; 
results = GROUP results BY (links::fromdomain, links::todomain);
results = FOREACH results GENERATE group, COUNT( $1 ) as count;
results = ORDER results BY count DESCENDING;


And the top 20 domain pairs, ranked by the number of inter-site links with the link-text == page-title property, are:

From To Count 40,516,033 26,561,080 23,357,605 10,350,732 5,431,860 4,582,241 3,443,017 3,358,902 2,833,345 2,371,328 2,255,917 2,107,307 1,875,400 1,692,922 1,660,720 1,601,128 1,598,830 1,584,765 1,584,648 1,584,648

As you can see, treating a big blog-hosting domain as a single site for analysis purposes isn’t so hot. Although your browser considers it a single domain, it would be better to break out the results blog by blog, treating each blog’s subdomain as a separate site.


But, regardless of that wrinkle, a list/table isn’t as exciting as a graph, now is it?

The Pig script outputs a file of the form:

<from-domain, to-domain, count>

which I was able to convert to a GraphViz “.dot” file with a little shell script:

sort -k 3,3 -n -r pig-output.txt | awk 'BEGIN { print "digraph linkgraph { node [shape=none];" } { if ( $3 > 5000 ) print "\"" $1 "\" -> \"" $2 "\";" } END { print "}" }' >

The sort sorts from highest-to-lowest by count and the if statement drops any domain pair with fewer than 5000 links, just to remove the low-value connections in the graph.

Then, generate graphs using the different GraphViz layout engines:

$ for i in dot twopi fdp neato ; do ${i} -v -v -Tpng > links-${i}.png ; done

I tried playing around with various GraphViz DOT attributes and commands to try to make the resulting graphs look nice, especially with respect to overlapping nodes.

Man, what a pain. The set of options/commands is large and varies depending on which layout engine is used. After trying for hours and hours, I eventually gave up and just settled on the default ‘fdp’ layout as the best one.

In any case, here are the images (click for full-size):

The New Hotness

After re-reading Don’t Phrase Me, Bro!, it occurs to me that there’s plenty more I could have written about The New Hotness…such as: what is The New Hotness (TNH)?

And rather than writing it all here (again), let me just link to the wiki: The New Hotness.

There’s some neat stuff in there. Give it a read. But, if you don’t want to read the whole thing, at least read the section CollapsingCollector. That is the motivating purpose for TNH to exist at all.

Don’t phrase me, bro!

As Robert Muir nicely explains, phrase searching with Lucene can be expensive:

When using phrase queries (say “foo bar”) in the typical case where single words are in the index, phrase queries have to walk the inverted index for “foo” and for “bar” and find the documents that contain both terms, then walk their positions lists within each one of those documents to find the places where “foo” appeared right before “bar”.

To illustrate, consider the examples:

"Internet Archive"
"Sally sells seashells by the seashore"
"William Shakespeare"

These types of queries (searches) can take a long time to run because once we get the list of documents that contain those words, we have to retrieve the position of each word in the document and then match up the positions to find the ones that are adjacent to each other. And the more words there are in the phrase, the longer this takes.

For example, consider “Internet Archive”. And let’s suppose we have a full-text index of 100,000,000 documents. The search software first has to find all the documents containing the word “Internet”, then all the documents containing the word “Archive”. We then intersect those two lists to come up with the set of documents that contain both words.

That’s the first part. Then we have to examine each document to see at which positions the words appear, and keep only those documents where the word “Internet” appears in the position immediately before “Archive”. For example, what if we had the following document:

The Internet is a cool place to Archive your stuff!
 1      2     3 4   5    6    7    8      9    10

This document is found in the first step because it contains both words, but when we compare the position of the word “Internet” (2) with the position of the word “Archive” (8), we see that they are not next to each other in the right order.

However, for the document

My favorite website is Internet Archive.
 1    2        3     4    5        6

we get a match for the phrase since the positions of the words are 5 and 6.

Although, on the surface, retrieving the position information and comparing word positions might not seem expensive, when you have indexes of 100,000,000 documents or more, it certainly can be.
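
To make that position-matching step concrete, here is a minimal sketch in Java of the adjacency check described above. It is an illustration only, not Lucene’s actual implementation (Lucene streams positions from the index rather than materializing arrays), and the class and method names are mine.

import java.util.Arrays;

/**
 * Minimal sketch of the work a phrase query does once both terms are known
 * to occur in a document: walk the two position lists and look for a
 * position of the first word that is immediately followed by the second.
 */
public class PhraseMatchSketch {

  /** True if some position p of the first word has p+1 among the (sorted) positions of the second. */
  static boolean adjacent(int[] firstPositions, int[] secondPositions) {
    for (int p : firstPositions) {
      if (Arrays.binarySearch(secondPositions, p + 1) >= 0) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // "The Internet is a cool place to Archive your stuff!"
    //   "internet" at position 2, "archive" at position 8: no phrase match.
    System.out.println(adjacent(new int[]{ 2 }, new int[]{ 8 }));   // false

    // "My favorite website is Internet Archive."
    //   "internet" at position 5, "archive" at position 6: phrase match.
    System.out.println(adjacent(new int[]{ 5 }, new int[]{ 6 }));   // true
  }
}

The document-list intersection in the first step works much the same way, just over document IDs instead of word positions.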

Shingling and other stuff

One common approach to speeding up phrase queries is to combine adjacent terms into “shingles” and index the shingles as distinct terms. There’s lots of online explanation and discussion of that topic, which I won’t repeat here.
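
Just to make the idea concrete, here is a tiny sketch of my own (Lucene’s ShingleFilter does this, and much more, at analysis time) that turns a token list into two-word shingles:

import java.util.ArrayList;
import java.util.List;

/** Toy two-word shingler: each adjacent word pair becomes a single index term. */
public class ShingleSketch {

  static List<String> shingles(String[] tokens) {
    List<String> out = new ArrayList<String>();
    for (int i = 0; i + 1 < tokens.length; i++) {
      out.add(tokens[i] + " " + tokens[i + 1]);
    }
    return out;
  }

  public static void main(String[] args) {
    String[] tokens = { "my", "favorite", "website", "is", "internet", "archive" };
    // [my favorite, favorite website, website is, is internet, internet archive]
    System.out.println(shingles(tokens));
  }
}

With “internet archive” indexed as a single shingle term, the phrase becomes a plain term lookup, at the cost of a bigger index.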

Then there are these related topics, which I will also skip over for now:

  • stop words: drop words like “the”, “of”, “as”, “with” since they are too common to be of any use.
  • stemming: find words sharing a common root. User says “arrogant” but we found “arrogance”, go ahead and use it.
  • misspellings: very common nowadays, and very helpful to users. I search for “mispelling”, give me results for “misspelling”.
  • n-grams: heavily used for non-Western languages, which don’t use whitespace to separate words.

So, then, if I’m not talking about any of the above, what’s the point of this post really?

Query Translation

If you use a web search engine, you typically expect that your query will be matched against all the parts of the document: title, keywords, body content, etc. Suppose your query is

Mavericks surf contest

You want to see hits where, ideally, all three of those words appear together. But you’re probably also happy to see highly relevant documents where “Mavericks” is in the title and “surf contest” is in the page body. In general, we want to match our keywords against all the possible fields, and adjust our ranking depending on which fields they appear in.

Back in 2009, we were using NutchWAX, which was basically a bunch of ugly hacks and patches on top of Nutch. One of the unpleasant things we inherited was the Nutch query interpretation, which added phrase queries to everything.
For example, the query

Mavericks surf contest

would be expanded by Nutch into something along the lines of

title:"Mavericks surf contest"
OR content:"Mavericks surf content"
OR (title:mavericks AND title:surf AND title:contest)
OR (title:mavericks AND title:surf AND content:contest)

Every single query would have at least two phrase queries as part of it. Yikes!

Custom Query Parsing

So, to avoid attaching these expensive phrase queries to everything, I wrote a custom query parser that does a few nice things:

  • Applies terms to all the fields: title, keywords, body, etc.
  • Converts “smartquotes” into dumb ones
  • Supports + and - operators for “require” and “exclude”
  • Removes characters that are neither Unicode letters nor numbers
  • Removes accents from Latin characters

The parser translates the user query

Mavericks surf -piñata

into something along the lines of:

  (+url:mavericks^4.0 +url:surf^4.0)
  (+title:mavericks^3.0 +title:surf^3.0)
  (+boiled:mavericks^2.5 +boiled:surf^2.5)
  (+content:mavericks^1.5 +content:surf^1.5)
  (+title:mavericks +content:surf)
  (+title:surf +content:mavericks)

Overall, it seems to work pretty well and the per-field weights tend to help produce nicely ranked results. And without all that unnecessary phrasing.
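
For illustration, here is a rough sketch, using the Lucene 3.x/4.x-era API that was current at the time, of how a query of that shape can be assembled: the terms are required within each field, each per-field clause gets its own boost, and the per-field clauses are OR’ed together. The field names and weights mirror the example above, but the code is a simplified stand-in, not the actual parser.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

/** Simplified stand-in for the fielded query construction described above. */
public class FieldedQuerySketch {

  static Query build(String[] terms, String[] fields, float[] boosts) {
    BooleanQuery top = new BooleanQuery();
    for (int f = 0; f < fields.length; f++) {
      // All terms are required (+) within this field...
      BooleanQuery perField = new BooleanQuery();
      for (String t : terms) {
        perField.add(new TermQuery(new Term(fields[f], t)), BooleanClause.Occur.MUST);
      }
      // ...and the whole per-field clause carries the field's weight.
      perField.setBoost(boosts[f]);
      top.add(perField, BooleanClause.Occur.SHOULD);
    }
    return top;
  }

  public static void main(String[] args) {
    Query q = build(new String[] { "mavericks", "surf" },
                    new String[] { "url", "title", "boiled", "content" },
                    new float[]  { 4.0f, 3.0f, 2.5f, 1.5f });
    // Prints something like:
    // (+url:mavericks +url:surf)^4.0 (+title:mavericks +title:surf)^3.0 ...
    System.out.println(q);
  }
}

Boosting the whole per-field clause, rather than each term individually, has essentially the same effect on ranking.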

Parsing Petabox

As described in “Surfing with the elephants”, if you already have (W)ARC files in your Hadoop cluster, then waimea will run them through the full-text indexing pipeline, producing a Lucene index.

But what if your (W)ARC files are not in HDFS, but are stored in the Internet Archive’s Petabox system?

Thanks to my colleague Kenji Nagahashi — no problemo.

Kenji wrote a Hadoop FileSystem implementation called PetaboxFileSystem which gives Hadoop read-only access to files stored in Petabox. W00t!

Now, with a bit of command-line assembly, we can parse (W)ARC files directly from Petabox into HDFS:

$ cd waimea
$ hadoop jar jbs.jar \
   org.archive.jbs.Parse \
   -conf etc/job-parse.xml \
   -Dfs.petabox.impl=org.archive.crawler.hadoop.PetaboxFileSystem \
   -Dfs.petabox.user={your-ia-user-name} \
   -Dfs.petabox.sig={your-ia-login-cookie-value} \
   /search/{project}/parsed \

Once you have all the (W)ARCs parsed into HDFS, just run the waimea script to drive the rest of the workflow:

waimea$ ./bin/ /search/{project}

NARA hacks

Building the full-text index for U.S. Federal Agency Web Harvests required two hacks:

  1. Custom rules for determining a URL’s “site”
  2. Capture-specific boosting

Custom rules for determining a URL’s “site”

By default, our custom search server will only show the best/top result from any particular website. You can see this in action by changing the value of the URL parameter h on a search query: with that per-site collapsing turned off, all the results from each site are displayed, not just the best one, and we get a lot of hits from the same site, which is pretty annoying.

So, by default, only the top hit from any one website is displayed. But, in order to do so, we must first determine what the “site” is for each URL.

Normally we use the same rules your browser uses for determining the “site” for a URL. These rules are listed on publicsuffix.org and all the browser-makers bake them into their browsers. The rules govern whether or not one site can set a cookie for another one; for example, they are what tell your browser that www.example.com and example.com belong to the same site, while two different users’ sub-sites under a shared hosting domain do not.

In most cases, the notion of “site” or “domain” matches users’ expectations, and using that same publicsuffix list to determine a URL’s site for search results gives a pleasant user experience. However, this project is primarily concerned with US Congressional representatives, and many of their websites are hosted under just two parent domains: house.gov and senate.gov.

Consider several different members’ websites, all hosted under senate.gov: are they all the same “site”?

The publicsuffix rules, and your browser, will say “yes”, they are all the same “site” or domain.

But, for full-text search, we probably want to treat each of those as separate sites. If we perform a search which has results on several of those member sites, we’d like to see the best/top hit from each one; we don’t want to show only the best/top hit from everything under senate.gov.

The code

To determine the domain/site for a URL, I wrote some code which applies the rules from publicsuffix. It seems that there are many such implementations out there, but here’s why I like mine:

  1. The publicsuffix rules are verbatim inputs to the library, so that you can supply a newer version of the rules directly from the publicsuffix site if you like
  2. Custom rules can be added

The second item on that list is the important one here. When we indexed the web archives, we added two custom publicsuffix rules, one for house.gov and one for senate.gov, to our Hadoop configuration.


These rules force the individual *.house.gov and *.senate.gov sites to be treated as separate sites, while still ignoring a leading www, so that, for example, www.feinstein.senate.gov and feinstein.senate.gov count as the same site.
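
To illustrate the effect (this is a toy sketch of the idea, not the actual library described above, and the hostnames are just examples), here is a longest-suffix matcher where adding house.gov and senate.gov as rules splits the member sites apart while a leading www is still folded in:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Toy publicsuffix-style matcher: the "site" is the longest matching suffix rule plus one more label. */
public class SiteRulesSketch {

  static String site(String host, Set<String> suffixRules) {
    String[] labels = host.split("\\.");
    // Walk from the longest candidate suffix down to the shortest; the first
    // match wins, and the "site" is that suffix plus the label before it.
    for (int i = 1; i < labels.length; i++) {
      String suffix = String.join(".", Arrays.copyOfRange(labels, i, labels.length));
      if (suffixRules.contains(suffix)) {
        return String.join(".", Arrays.copyOfRange(labels, i - 1, labels.length));
      }
    }
    return host;
  }

  public static void main(String[] args) {
    Set<String> rules = new HashSet<String>(Arrays.asList("gov", "house.gov", "senate.gov"));
    System.out.println(site("www.feinstein.senate.gov", rules)); // feinstein.senate.gov
    System.out.println(site("feinstein.senate.gov", rules));     // feinstein.senate.gov
    System.out.println(site("www.usda.gov", rules));             // usda.gov
  }
}

Real publicsuffix matching also handles wildcard and exception rules, which this sketch ignores.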

Capture-specific boosting

The other hack was to boost specific captures so that the top result for a Congressperson’s name would likely be the same capture that we selected for the portal. For example, if you browse the pages of the 112th Congress, you’ll see that the specific capture we linked for Senator Dianne Feinstein is:


out of the 4 different captures we had of her website in the collection:

The point here is that when you perform a full-text search for “Dianne Feinstein”, your top hit from her site should link to the same capture that we linked on the portal page.

The code

There’s actually not much code required for this. The difficult part is extracting the links from the HTML portal pages. Once we have those, we can easily modify the Pig script which calculates the URL boosts to give an extra boost to those specific captures.

In waimea there is a general-purpose script for computing boost values for URLs based on inlink counts. We can simply add some code to incorporate this extra boost for the portal URLs:

-- Add the extras boost values
extras = LOAD '$EXTRAS' USING PigStorage(' ') AS ( url:chararray, digest:chararray, value:double );

-- We use a 'replicated' JOIN because we know the 'extras' relation is
-- pretty small, a few hundred records, and should easily fit into
-- memory.
data = JOIN data BY (url,digest) LEFT, extras BY (url,digest) USING 'replicated';

-- For inlink counts, convert null values to 1 so that the calls to LOG10() don't freak out.
-- For 'extra', convert null to 0.
data = FOREACH data GENERATE
         data::url AS url,
         data::digest AS digest,
         data::domain AS domain,
         (data::page_inlink_count   is null ? 1L  : data::page_inlink_count  ) AS page_inlink_count,
         (data::domain_inlink_count is null ? 1L  : data::domain_inlink_count) AS domain_inlink_count,
         (extras::value             is null ? 0.0 : extras::value            ) AS extra;

-- Calculate boost for each page.
data = FOREACH data GENERATE
         digest,
         ( 1.0 +
           (maximum( LOG10( (double) page_inlink_count   ), 0.0 ) /  2.0) +
           (maximum( LOG10( (double) domain_inlink_count ), 0.0 ) / 10.0) +
           extra
         ) AS boost:double;

That’s it.

Now, this extra boosting isn’t bulletproof. There’s no guarantee that the specific capture will be the top hit for a Congressperson’s name, but at least it is much more likely.

High-Availability Deployments

First off, thanks to my colleague Sam Stoller for suggesting this approach. Beers of gratitude are to be purchased and consumed.

Over the years, the web group at IA has done many one-off deployments for various projects. These one-offs typically consist of a set of static web pages (a.k.a. a web portal), an instance of the Wayback Machine, and a full-text search server. Since both Wayback and the full-text search are Java web applications, they can easily be hosted by a single Tomcat instance. And the static web pages are pretty small potatoes, so they can easily be served by Tomcat as well.

In the past, we would create the initial deployment and then just rsync the entire thing over to another machine to act as a backup. In the end we’d have two machines/VMs, named foo and foo-bu for some project foo. This worked satisfactorily for research projects and other low-usage deployments, but on occasion we did have a deployment where availability (as opposed to load/performance) was important.

In the foo/foo-bu setup, if there was a server or hardware problem, we could manually re-configure things to direct traffic to the “backup” server. The main drawback is the obvious need for human intervention and the corresponding service interruption. For a few of our public-facing deployments, requiring human intervention to recover from an otherwise trivial service disruption was seen as a large operational risk. In addition, regular maintenance that incurred service disruptions on these public services had to be scheduled at least a week in advance, per our service contracts.

Back in February of this year, I was working on a public-facing service which had some strict contractual obligations on availability and maintenance windows. I figured that there must be some tools in the IA kit that could be put to use: tools which were easy to deploy and run, and which didn’t require a lot of infrastructure. After brainstorming a bit with Sam, he sketched out on a whiteboard what became the following deployment:

[Diagram: high-availability deployment design mockup]

As sketched out in the diagram there are three main services:

  1. Linux HA to manage the external IP address
  2. HAProxy to load-balance and fail-over the servers
  3. A web server: in this case Tomcat, but it could be any back-end server stack

Linux HA

In this deployment, Linux HA has one job and one job only: manage the external IP address.

If you read the Linux HA documentation, you’ll see that it is a complete cluster-management system that can do pretty much everything under the sun. There are probably a whole bunch of snazzy features we could be using, but in this particular solution we only care about managing the external IP address for the deployment.

Linux HA is configured so that the two VMs discuss amongst themselves which one holds the conch, and that node assigns the external IP address to itself. If the holder of the external IP address crashes or is shut down, then the other will automatically self-assign the external IP address within a few seconds. The key point is that whenever at least one of the nodes is up and running, the external IP address will be assigned to a functioning node.

From the Linux HA perspective, that’s it.

HAProxy

Each node runs haproxy for two reasons:

  1. It makes a very nice front door even when all the back-end services are otherwise happy. We use haproxy in the Wayback Machine, and Sam pointed out that it is very good at metering incoming requests, dispensing them to multiple back-end servers, and balancing the load as back-end services fill up. Even with a single- or dual-server system (such as this), haproxy is a good gatekeeper to have. If a single-node system receives a sudden increase in requests, haproxy can be the difference between servicing some requests and a total server meltdown.
  2. Fail-over: haproxy can detect when one of the back-end servers becomes unavailable and will route all the requests to the other. This is very handy when performing maintenance, as well as when one of the VMs (or underlying hardware) decides to go for an extended smoke-break.

We configure the haproxy daemons to ping a specific URL on the back-end servers. This URL is not only something that will respond with an HTTP 200 when the back-end server is up and happy, but it can also be disabled by a human operator for maintenance purposes. Assuming that the back-end server supports serving static files, we just touch a file at the URL /haproxy_check/up. As long as this URL returns an HTTP 200, haproxy will see the back-end server as up and happy.

If anyone performs maintenance on the back-end server, they can simply rename the file to something like /haproxy_check/down-for-maintenance-by-aaron so that haproxy gets a 404 from the expected URL, marks the back-end server as “down”, and sends requests to the other server. Also, by using an obvious name, any other human who might get a Nagios alert will be able to log in to the node and see who is responsible for the maintenance, just by seeing that the up file is now named down-for-maintenance-by-aaron.
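
If a back-end ever couldn’t serve static files, the same marker-file convention could be implemented as a tiny health-check handler. This is a hypothetical sketch, not something we deployed; the class name and marker-file path are made up for illustration, and the /haproxy_check/* mapping would live in the usual web.xml servlet-mapping.

import java.io.File;
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical health-check servlet: 200 while the "up" marker file exists, 404 once it is renamed. */
public class HaproxyCheckServlet extends HttpServlet {

  // Assumed location of the marker file; touch it to go "up", rename it for maintenance.
  private static final File MARKER = new File("/var/local/haproxy_check/up");

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    if (MARKER.exists()) {
      // haproxy sees HTTP 200: this back-end is "up".
      resp.setStatus(HttpServletResponse.SC_OK);
      resp.getWriter().println("up");
    } else {
      // The marker was renamed: return 404 so haproxy marks this back-end "down".
      resp.sendError(HttpServletResponse.SC_NOT_FOUND);
    }
  }
}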

Web server

The back-end web server stack is pretty simple, and can pretty much be whatever you like. In my case, it’s just a Tomcat server with the Wayback and full-text search web apps and some static HTML pages. These back-end web servers are ignorant of the load-balanced front-end. They just service requests and serve content. Nothing fancy.

Use cases

Let’s consider some use cases!

Everyone is Happy

Normally, when everybody is happy, the external IP address remains fixed to one of the virtual machines and all requests are sent to it. Its haproxy receives all of the requests and then distributes them to the two back-end servers. The responses flow back out the way the requests came in. The end-users see a little performance boost since the requests are serviced by two back-end servers instead of one. Everyone is happy.

Virtual Machine X is down

If either Virtual Machine A or Virtual Machine B suddenly goes down for a dirt-nap, everyone remains happy.

[Diagram: high-availability deployment design mockup]

If Virtual Machine A dies, the haproxy on VM-B will notice and route all requests to the back-end server on VM-B. External user requests are serviced and everyone is happy.

If Virtual Machine B dies, then within a few seconds the Linux HA daemon on Virtual Machine A will notice, assign itself the external IP address, and start handling the incoming user requests. Those requests will be sent to the haproxy on VM-A, which will forward them on to the back-end server on VM-A, and all the user requests will be serviced within a few seconds of VM-B’s asplosion. Aside from that little hiccup, everyone is happy.

If both VMs are down at the same time…well, then no one is happy.

Scheduled Maintenance

Scheduled maintenance comes in two flavors.

In the first case, where we are twiddling with the underlying hardware, well that devolves to the case where VM-{A,B} is down. As above.

In the second, where we either need to tweak the back-end server software or modify some of the content, we can simply mark the back-end server as “down” by renaming the file that haproxy checks. Note that even though the back-end server is still “up”, by renaming that file we make the server unavailable to the outside world. This is a nice situation as we can now do whatever we want to the back-end server without worrying about end-users seeing it until we bring the server back “up” by renaming that file again.

Suppose the back-end server listens on a port that is not accessible outside the IA firewall. Then, while that server is “down” to the outside world, we can still access and test it via the private port.


Although this approach works very well in our situation, there are a few drawbacks:

  1. I assume that there is the possibility of “split-brain” among the Linux HA daemons.

    Suppose that our network routers go crazy and the Linux HA daemons cannot talk to each other, but both can communicate with the outside world. In that case, each one would see the other as “dead” and would self-assign the external IP address and try to service requests from the outside world. Everything would asplode pretty quickly. In fact, if this was happening, it’s very likely that our entire network would be hosed and this little 2-node deployment would be the least of our concerns.

  2. Duplicate, load-balanced deployments.

    This is a benefit, but there are corresponding costs too. Any and all changes made to one deployment must be carefully replicated to the other. Either the deployment engineers must be careful in making changes, or some additional IT infrastructure for replicating the changes is needed. The coin has two sides.

  3. Usage/access logs across the servers must be collated.

    Assuming we want to analyze service usage, we have to collate the access logs across the load-balanced servers. This is difficult to do with just these two nodes. It’s easier if we have some external service for log analysis.


For our small, one-off deployments, this design hits a “sweet spot”. Without adding too much complexity or effort, we are able to have automagic fail-over in situations where otherwise a human would have to wake up in the middle of the night and remotely administer the fail-over by hand. We also get fail-over for regular maintenance, as well as some load-balancing to boot.

Cheers Sam!

Domain discovery

When IA performs a domain crawl for a national library partner, we often start with a long list of known domains within their country-code. In some cases the partner provides us with a list from their national domain registrar, in other cases we extract a list from the Wayback Machine, and often some combination of both.

Another method to discover domains is to simply look at the link-graph from a previous domain crawl and compare the list of all the URLs linked to against the list of URLs that were crawled (or are already in the Wayback Machine). For example, if there’s a link to a page on some domain and we have no captures in the Wayback Machine for anything under that domain, then we would add that domain to our list for next time.

In some cases “next time” is while the current domain crawl is still in progress. We could perform this link-graph analysis every week for a multi-week domain crawl.

When building full-text search indexes with waimea, the following Pig script can be run against the link-graph datasets it produces.

-- Find domains where there are 1+ links to them, but we do not have
-- any parsed captures from that domain.
-- It's still possible that we have 404s or non-text (e.g. jpgs) from
-- a domain that is reported as unknown by this script.
-- The output of this script can be cross-referenced with the CDX
-- to find domains where we truly have no captures at all.
%default DOMAIN_SUFFIX ''

REGISTER lib/bacon.jar
REGISTER lib/json.jar
DEFINE FROMJSON org.archive.bacon.FromJSON();

-- Load the parsed captures and domain-level inlink counts. Only retain the domains from each.
inlink_counts_domain = LOAD '$INLINK_COUNTS_DOMAIN' USING AS ( key:chararray, value:chararray );
inlink_counts_domain = FOREACH inlink_counts_domain GENERATE FROMJSON( value ) AS m:[];
inlink_counts_domain = FOREACH inlink_counts_domain GENERATE m#'domain' AS domain:chararray;

parsed_captures = LOAD '$PARSED_CAPTURES' USING AS ( key:chararray, value:chararray );
parsed_captures = FOREACH parsed_captures GENERATE FROMJSON( value ) AS m:[];
parsed_captures = FOREACH parsed_captures GENERATE m#'domain' AS domain:chararray;

-- Also, uniq the parsed_capture domains, we don't need multiple instances of each.
parsed_captures = DISTINCT parsed_captures;

-- Find all domains that have 1+ links to it, but no page captures
unknown_domains = JOIN inlink_counts_domain BY domain LEFT, parsed_captures BY domain;
unknown_domains = FILTER unknown_domains BY parsed_captures::domain is null;
unknown_domains = FOREACH unknown_domains GENERATE inlink_counts_domain::domain as domain;

-- Filter unknown domains to just the given suffix (typically the TLD of interest, e.g. ".uk").
unknown_domains_tld = FILTER unknown_domains BY domain MATCHES '$DOMAIN_SUFFIX';

-- Output: domain:chararray
STORE unknown_domains INTO '$UNKNOWN_DOMAINS';
STORE unknown_domains_tld INTO '$UNKNOWN_DOMAINS_TLD';

The last little flourish is just to narrow things down to the top-level country code of interest. You could omit that and just examine the entire list of unknown domains if you prefer.

Surfing with the elephants

Today I added a new repo into our GitHub account: waimea

This small repo contains the Pig scripts and associated libraries that we use to build full-text indexes in Hadoop. It’s the repo I promised some months ago when I described the process to build full-text indexes.

Once the repo is cloned locally, the entire process can be run via a single command-line invocation:

$ git clone git://
$ cd waimea
waimea$ ./bin/ /search/petabox/wide00002

Suppose we have all the WARC files for the wide00002 crawl in HDFS under


then the script will run the Pig scripts to analyze the link-graph, perform deduplication, and finally produce a set of Lucene index shards:


Each directory is produced by the corresponding Pig script.

Why did I name this repo “waimea”? Well, some years back, when I was working on a Hadoop workflow for building full-text indexes for Archive-It, I was annoyed that every workflow was referred to as “the pipeline”. We had a “pipeline” for ingesting (W)ARC files into the repository, a “pipeline” for producing Wayback Machine indexes, and a “pipeline” for building full-text search indexes. I decided that mine was special, so I named it “Mavericks”.

Yes, the whole system for building full-text search indexes for Archive-It is called “Mavericks”, after the famous big-wave surf spot off Half Moon Bay. In the same spirit, the public repo for basically the same system (sans some Archive-It-specific hacks) is named after the original big-wave surf spot of Waimea Bay, Hawaii.

Hang trunk!
Surfing with elephants

Cluster merging addendum

After thinking more about the problem of the output parts of a cluster merge not being named the same as the inputs, I realized there’s another possible solution besides renaming the files.

One could take the same approach as used by MultiStorage in piggybank. It requires that one of the values in the output tuples contains the name of the file to write the tuple to. Then it just writes the value to that file.

I suppose we could take the same approach, but modify ClusterMergingLoader to return the name of the input file group as one of the tuple fields (like -tagsource in PigStorage), then pass that value along through the script and finally hand it off to ZipNumStorage along with the (key,value) pair.

Might be worth doing since it would remove a potentially confusing result if we forget to rename the part-* files after the cluster is built.