Wednesday, January 23, 2013

Putting term vectors on a diet

What are term vectors?

Term vectors are an interesting Lucene feature that lets you retrieve a single-document inverted index for any document ID in your index. This means that given any document ID, you can quickly list all of its unique terms in sorted order, and for every term you can quickly get its original positions and offsets. For example, if you indexed the following document:

Field name | Field value
text       | the quick brown fox jumps over the lazy dog

You would retrieve the following term vectors:

Term  | Frequency | Positions | Offsets
brown | 1         | 2         | [10,15]
dog   | 1         | 8         | [40,43]
fox   | 1         | 3         | [16,19]
jumps | 1         | 4         | [20,25]
lazy  | 1         | 7         | [35,39]
over  | 1         | 5         | [26,30]
quick | 1         | 1         | [4,9]
the   | 2         | 0, 6      | [0,3], [31,34]

For very small documents, it makes little sense to store term vectors given that they can be recomputed very quickly by re-analyzing a document’s stored fields. But if your documents are large or if your analysis pipeline is expensive, storing term vectors on disk can be much faster than computing them on the fly. So far, term vectors have mainly been used for highlighting and MoreLikeThis (searching for similar documents), but there is an interesting issue open in Lucene JIRA to use term vectors to perform partial document updates.
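If you want to experiment with this feature, term vectors are enabled on a per-field basis at indexing time and can then be read back through IndexReader. Here is a minimal sketch against the Lucene 4.x APIs of the time (the field name and the RAMDirectory are only there for illustration, and the TermsEnum iterator signature changed in later versions):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.Version;

public class TermVectorsExample {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_41, new StandardAnalyzer(Version.LUCENE_41)));

    // Enable term vectors, positions and offsets on the "text" field.
    FieldType type = new FieldType(TextField.TYPE_STORED);
    type.setStoreTermVectors(true);
    type.setStoreTermVectorPositions(true);
    type.setStoreTermVectorOffsets(true);

    Document doc = new Document();
    doc.add(new Field("text", "the quick brown fox jumps over the lazy dog", type));
    writer.addDocument(doc);
    writer.close();

    // Read back the single-document inverted index for document 0.
    IndexReader reader = DirectoryReader.open(dir);
    Terms vector = reader.getTermVector(0, "text");
    TermsEnum termsEnum = vector.iterator(null); // 4.x signature
    for (BytesRef term = termsEnum.next(); term != null; term = termsEnum.next()) {
      System.out.println(term.utf8ToString() + " freq=" + termsEnum.totalTermFreq());
    }
    reader.close();
  }
}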

However, term vectors come with a cost. They store a lot of information and often take up a lot of disk space. This is bad because it can make indexing and searching slower (especially if the index size grows beyond the size of your OS cache).

Term vectors compression

Having worked on stored fields compression over the past months, my first idea was to apply the same recipe: collect enough raw data to fill a 16 KB block, then compress it and flush it to disk. However, term vectors are more challenging to compress: terms are already unique, so it is rather hard for LZ codecs such as LZ4 to reach good compression ratios. Moreover, general-purpose compression algorithms are usually not very good at compressing numeric data (frequencies, term positions and offsets), so I needed something else.

After long hours of trial and error, I managed to write a new term vectors format based on LZ4 and bit-packing which efficiently compresses term vectors in various cases. Depending on the collection of documents, the compression ratio of the term vector files varied from 0.53 to 0.90. For example, indexing term vectors (with positions and offsets enabled) for 1M articles from the English Wikipedia database generates 5.9G of term vector files with the default codec from Lucene 4.0 or 4.1. By switching to this new term vectors format, the size of the term vector files decreased to 3.9G! More good news: the size reduction also made indexing faster. While indexing those 1M articles took 1038 seconds with the current term vectors format, it took only 870 seconds with the new compressed format (see the ingestion rate charts below).


Ingestion rate with the current default format.

Ingestion rate with the new compressed format.

Although this new format is still very experimental, I think it’s promising and would make a good candidate to become the new default term vectors format for a future version of Lucene. If you are interested in better understanding how it works and the compression ratio you can expect from this format, you can read more about it in Lucene Jira.

Wednesday, November 14, 2012

Stored fields compression in Lucene 4.1

Last time, I tried to explain how efficient stored fields compression can help when your index grows larger than your I/O cache. Indeed, magnetic disks are so slow that it is usually worth spending a few CPU cycles on compression in order to avoid disk seeks.

I have very good news for you: the stored fields format I used for these experiments will become the new default stored fields format as of Lucene 4.1! Here are the main highlights:

  • only one disk seek per document in the worst case (compared to two with the previous default stored fields format)
  • documents are compressed together in blocks of 16 KB or more using the blazing fast LZ4 compression algorithm

Over the last weeks, I’ve had the occasion to talk about this new stored fields format with various Lucene users and developers who raised interesting questions that I’ll try to answer:

  • What happens if my documents are larger than 16KB? This stored fields format prevents documents from spreading across chunks: if your documents are larger than 16KB, you will have larger chunks that contain only one document.
  • Is it configurable? Yes and no: the stored fields format that will be used by Lucene41Codec is not configurable. However, it is based on another format, CompressingStoredFieldsFormat, which allows you to configure the chunk size and the compression algorithm to use (LZ4, LZ4 HC or Deflate); see the sketch right after this list.
  • Are there limitations? Yes, there is one: individual documents cannot be larger than 2^32 - 2^16 bytes (a little less than 4 GB). But this should be fine for most (if not all) use-cases.
  • Can I disable compression? Of course you can, all you need to do is to write a new codec that uses a stored fields format which does not compress stored fields such as Lucene40StoredFieldsFormat.
  • My index is stored in memory / on a SSD, does it still make sense to compress stored fields? I think so:
    • it won’t slow down your search engine: on my very slow laptop (Core 2 Duo T6670), decompressing a 16 KB block of English text takes 80 µs on average, so even if your result pages have 50 documents, your queries will only be 4 ms slower (much less with faster hardware and/or smaller pages)
    • RAM and SSD are expensive, so thanks to stored fields compression you’ll be able to have larger indexes on the same hardware, or equivalent indexes on cheaper hardware
  • Can I plug in my own compression algorithm? Unfortunately you can’t, but if you really need to use a different compression algorithm, the code should be easy to adapt. However you should be aware of two optimizations of the LZ4 implementation in Lucene that you would almost certainly need to implement if you want to achieve similar performance:
    • it doesn’t compress to a temporary buffer before writing the compressed data to disk; instead, it writes directly to a Lucene DataOutput — this proved to be faster (with MMapDirectory at least)
    • it stops decompressing as soon as enough data has been decompressed: for example, if you need to retrieve the second document of a chunk, which is stored between offsets 1024 and 2048 of the chunk, Lucene will only decompress 2 KB of data.
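To make the configurability answer above concrete, here is a minimal sketch of a codec that plugs a differently configured CompressingStoredFieldsFormat into Lucene41Codec. The class and format names are made up, the chunk size and compression mode are arbitrary choices, and I am assuming the Lucene 4.1 constructor CompressingStoredFieldsFormat(formatName, compressionMode, chunkSize); double-check the exact signatures against the version you are using:

import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.StoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressionMode;
import org.apache.lucene.codecs.lucene41.Lucene41Codec;

// Hypothetical codec: identical to Lucene41Codec except that stored fields are
// compressed in 64 KB chunks with Deflate (HIGH_COMPRESSION) instead of 16 KB + LZ4.
public class BigChunksCodec extends FilterCodec {

  private final StoredFieldsFormat storedFields =
      new CompressingStoredFieldsFormat("BigChunks", CompressionMode.HIGH_COMPRESSION, 64 * 1024);

  public BigChunksCodec() {
    super("BigChunksCodec", new Lucene41Codec());
  }

  @Override
  public StoredFieldsFormat storedFieldsFormat() {
    return storedFields;
  }
}

Such a codec is set on IndexWriterConfig with setCodec, and it has to be registered through Java's SPI mechanism (a META-INF/services/org.apache.lucene.codecs.Codec file) so that the segments it wrote can be read back later.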

Many thanks to Robert Muir who helped me improve and fix this new stored fields format!

Tuesday, October 9, 2012

Efficient compressed stored fields with Lucene

Whatever you are storing on disk, everything usually goes perfectly well until your data becomes too large for your I/O cache. Until then, most disk accesses never actually touch the disk and are almost as fast as reading or writing to main memory. The problem arises when your data becomes too large: disk accesses that can’t be served by the I/O cache trigger an actual disk seek, and everything suddenly becomes much slower. Once data gets that large, there are three options: find techniques to reduce disk seeks (usually by loading some data into memory and/or relying more on sequential access), buy more RAM or better disks (an SSD?), or accept that performance will degrade as your data keeps growing.

If you have a Lucene index with some stored fields, I wouldn’t be surprised if most of the size of your index came from its .fdt files. For example, when indexing 10M documents from a Wikipedia dump that Mike McCandless uses for nightly benchmarks, the .fdt files account for 69.3% of the index size.

.fdt is one of the two file extensions that are used for stored fields in Lucene. You can read more about how they work in Lucene40StoredFieldsFormat’s docs. The important thing to know is that loading a document from disk requires two disk seeks:

  • one in the fields index file (.fdx),
  • one in the fields data file (.fdt).

The fields index file is usually small (~ 8 * maxDoc bytes), so the I/O cache should be able to serve most seeks in this file. However, the fields data file is often much larger (a little more than the original data), so a seek in this file is more likely to translate to an actual disk seek. In the worst case, where every seek in this file translates to an actual disk seek, if your search engine displays p results per page it won’t be able to handle more than 100/p requests per second, given that a disk seek on commodity hard drives takes ~ 10 ms: for example, at most 10 requests per second with 10 results per page. As a consequence, the hit rate of the I/O cache on this file is very important for your query throughput. One way to improve the I/O cache hit rate is to compress stored fields so that the fields data file is smaller overall.

Up to version 2.9, Lucene had an option to compress stored fields, but it was deprecated and then removed (see LUCENE-652 for more information). In newer versions, users can still compress documents, but this has to be done at the document level instead of the index level. However, the problem remains the same: if you are working with small fields, most compression algorithms are inefficient. To fix this, ElasticSearch 0.19.5 introduced store-level compression: it compresses large (64 KB) fixed-size blocks of data instead of single fields in order to improve the compression ratio. This is probably the best way to compress small documents with Lucene up to version 3.6.
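For the sake of illustration, document-level compression means the application deflates each value by hand and stores it as a binary field, along these lines (a sketch using java.util.zip; the field names are arbitrary). With values of only a few bytes, the Deflate header and the lack of shared context across documents make this mostly useless:

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.StoredField;

public class DocumentLevelCompression {

  // Deflate a field value and store the result as a binary stored field.
  static StoredField compressedField(String name, String value) {
    Deflater deflater = new Deflater();
    deflater.setInput(value.getBytes(StandardCharsets.UTF_8));
    deflater.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    while (!deflater.finished()) {
      out.write(buffer, 0, deflater.deflate(buffer));
    }
    deflater.end();
    return new StoredField(name, out.toByteArray());
  }

  public static void main(String[] args) {
    Document doc = new Document();
    // A tiny field compresses poorly: the Deflate overhead dominates the payload.
    doc.add(compressedField("title", "Lucene"));
  }
}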

Fortunately, Lucene 4.0 (which should be released very soon) introduces flexible indexing: it allows you to customize Lucene’s low-level behavior, in particular the index file formats. With LUCENE-4226, Lucene got a new StoredFieldsFormat that efficiently compresses stored fields. Handling compression at the codec level allows for several optimizations compared to ElasticSearch’s approach:

  • blocks can have variable size so that documents never spread across two blocks (loading a document from disk therefore never requires decompressing more than one block),
  • decompression can stop as soon as enough data has been decompressed,
  • less memory is required.

Lucene40StoredFieldsFormat vs. CompressingStoredFieldsFormat

In order to ensure that it is really a win to compress stored fields, I ran a few benchmarks on a large index:

  • 10M documents,
  • every document has 4 stored fields:
    • an ID (a few bytes),
    • a title (a few bytes),
    • a date (a few bytes),
    • a body (up to 1KB).

CompressingStoredFieldsFormat has been instantiated with the following parameters:

  • compressionMode = FAST (fast compression and fast decompression at the cost of a worse compression ratio; uses LZ4 under the hood),
  • chunkSize = 16K (meaning that data will be compressed into blocks of ~16 KB),
  • storedFieldsIndexFormat = MEMORY_CHUNK (the most compact fields index format, requires at most 12 bytes of memory per chunk of compressed documents).
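In code, this configuration looked roughly like the following. This is only a sketch against the 4.0-era compressing module: I am assuming a constructor that takes the format name, the compression mode, the chunk size and the fields index format (the CompressingStoredFieldsIndex enum), so check the exact signature of the release you are using:

import org.apache.lucene.codecs.StoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressingStoredFieldsIndex;
import org.apache.lucene.codecs.compressing.CompressionMode;

public class BenchmarkStoredFieldsFormat {

  // The configuration used for the benchmark: LZ4, ~16 KB chunks, in-memory chunk index.
  static StoredFieldsFormat create() {
    return new CompressingStoredFieldsFormat(
        "Benchmark",                                 // arbitrary format name
        CompressionMode.FAST,                        // LZ4 under the hood
        1 << 14,                                     // chunkSize = 16K
        CompressingStoredFieldsIndex.MEMORY_CHUNK);  // most compact fields index
  }
}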

Index size

  • Lucene40StoredFieldsFormat
    • Fields index: 76M
    • Field data: 9.4G
  • CompressingStoredFieldsFormat
    • Fields index: 1.7M
    • Field data: 5.7G

Indexing speed

Indexing took almost the same time with both StoredFieldsFormats (~ 37 minutes) and ingestion rates are very similar:

Document loading speed

I measured the average time to load a document from disk using random document identifiers in the [0 - maxDoc[ range. According to free, my I/O cache was ~ 5.2G when I ran these tests:

  • Lucene40StoredFieldsFormat: 11.5ms,
  • CompressingStoredFieldsFormat: 4.25ms.

In the case of Lucene40StoredFieldsFormat, the fields data file is much larger than the I/O cache, so many requests to load a document translated to an actual disk seek. On the contrary, CompressingStoredFieldsFormat's fields data file is only a little larger than the I/O cache, so most seeks are served by the I/O cache. This explains why loading documents from disk was more than 2x faster, even though it requires more CPU for decompression.

In that very particular case it would probably be even faster to switch to a more aggressive compression mode or a larger block size so that the whole .fdt file can fit into the I/O cache.
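For reference, the measurement above boils down to a loop along these lines (a sketch, not the actual benchmark code; the index path argument and the iteration count are placeholders):

import java.io.File;
import java.util.Random;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.FSDirectory;

public class LoadDocumentsBenchmark {
  public static void main(String[] args) throws Exception {
    IndexReader reader = DirectoryReader.open(FSDirectory.open(new File(args[0])));
    Random random = new Random();
    int iterations = 10000;
    long start = System.nanoTime();
    for (int i = 0; i < iterations; ++i) {
      // Load a document at a random ID in [0, maxDoc): the worst case for the I/O cache.
      reader.document(random.nextInt(reader.maxDoc()));
    }
    long elapsed = System.nanoTime() - start;
    System.out.println("Average load time: " + (elapsed / iterations / 1000000.0) + " ms");
    reader.close();
  }
}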

Conclusion

Unless your server has very fast I/O, it is usually faster to compress the fields data file so that most of it can fit into the I/O cache. Compared to Lucene40StoredFieldsFormat, CompressingStoredFieldsFormat allows for efficient stored fields compression and therefore better performance.

Monday, June 25, 2012

What is the theory behind Apache Lucene?

There is a recurring request from users to have more insight into Lucene internals.

Although most of the ideas behind Lucene are explained in any good book on Information Retrieval, Lucene also implements some advanced algorithms for specific tasks. In these cases, it is probably easier to read an article describing the idea than to reverse-engineer the code. This is why I started a wiki page to collect links to research papers and blog articles that explain some advanced ideas behind Lucene.

Feel free to help me improve this wiki page by sending me ideas of Lucene algorithms that would deserve an entry on it!

Thursday, June 21, 2012

How fast is bit packing?

One of the most anticipated changes in Lucene/Solr 4.0 is its improved memory efficiency. Indeed, according to several benchmarks, you could expect a 2/3 reduction in memory use for a Lucene-based application (such as Solr or ElasticSearch) compared to Lucene 3.x.

One of the techniques that Lucene uses to reduce its memory footprint is bit-packing. This means that integer array values, instead of having a fixed size (8, 16, 32 or 64 bits per value), can have any size in the [1-64] range. If you store 17-bit integers this way, this is a 47% reduction in the size of your array compared to an int[]!

Here is what the interface looks like:

interface Mutable {
  long get(int index);
  void set(int index, long value);
  int size();
}
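In Lucene this interface roughly corresponds to PackedInts.Mutable, and rather than instantiating a specific implementation you normally go through the PackedInts factory and tell it how much memory overhead you are willing to trade for speed. A minimal sketch, assuming the 4.0-era org.apache.lucene.util.packed API:

import org.apache.lucene.util.packed.PackedInts;

public class PackedIntsExample {
  public static void main(String[] args) {
    // 1M values of 17 bits each; PackedInts.COMPACT asks for the smallest possible
    // representation, while PackedInts.FASTEST trades memory for speed.
    PackedInts.Mutable values = PackedInts.getMutable(1000000, 17, PackedInts.COMPACT);
    values.set(42, 123456L);             // the value must fit in 17 bits
    System.out.println(values.get(42));  // prints 123456
  }
}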

Under the hood, this interface has 4 implementations that have different speed and memory efficiency:

  1. Direct8, Direct16, Direct32 and Direct64, which just wrap a byte[], a short[], an int[] or a long[],
  2. Packed64, which packs values contiguously in 64-bit (long) blocks,
  3. Packed64SingleBlock, which looks like Packed64 but uses padding bits to prevent values from spanning several blocks (32 bits per value at most),
  4. Packed8ThreeBlocks and Packed16ThreeBlocks, which store values in either 3 bytes (24 bits per value) or 3 shorts (48 bits per value).

In case you are interested, the code is available in the Lucene svn repository.

Direct{8,16,32,64}

The methods of these classes directly translate to operations on an array:

  • Direct8: byte[],
  • Direct16: short[],
  • Direct32: int[],
  • Direct64: long[].

Operations on these classes should be very fast given that they directly translate into array accesses. However, these implementations also have the same drawback as arrays: if you want to store 17-bit values, you will need to use a Direct32, which has an 88% memory overhead for 17-bit values.

Packed64

This implementation stores values contiguously in 64-bit blocks. This is the most compact implementation: if you want to store a million 17-bit values, it will require roughly 17 * 1000000 / 8 ~= 2MB of space. One pitfall is that some values may span two different blocks (whenever the number of bits per value is not a divisor of 64). As a consequence, to avoid costly CPU branches, the implementations of the get and set methods are a little tricky and always access 2 blocks with different shifts and masks.
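To make this concrete, here is a simplified, branchy version of what a get has to do with such a contiguous layout. This is an illustration of the idea only: Lucene's actual Packed64 uses a different bit order and avoids the branch altogether:

// Illustration only (not Lucene's code): read the index-th value of bitsPerValue
// bits (1 <= bitsPerValue < 64) from a long[] where values are packed back to back.
long get(long[] blocks, int bitsPerValue, int index) {
  final long mask = (1L << bitsPerValue) - 1;
  final long bitPos = (long) index * bitsPerValue;
  final int block = (int) (bitPos >>> 6); // bitPos / 64
  final int shift = (int) (bitPos & 63);  // bitPos % 64
  long value = blocks[block] >>> shift;
  final int bitsFromFirstBlock = 64 - shift;
  if (bitsFromFirstBlock < bitsPerValue) {
    // The value spans two blocks: fetch its remaining high bits from the next block.
    value |= blocks[block + 1] << bitsFromFirstBlock;
  }
  return value & mask;
}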

Packed64SingleBlock

This implementation is similar to Packed64 but does not allow values to span several blocks. If you want to store 21-bit values, every block will consist of three 21-bit values (using 3*21=63 bits) and 64-63=1 padding bit (2% space loss). Here are the different value sizes that this class accepts:

Bits per value | Values per block | Padding bits | Space loss
32             | 2                | 0            | 0%
21             | 3                | 1            | 2%
16             | 4                | 0            | 0%
12             | 5                | 4            | 6%
10             | 6                | 4            | 6%
9              | 7                | 1            | 2%
8              | 8                | 0            | 0%
7              | 9                | 1            | 2%
6              | 10               | 4            | 6%
5              | 12               | 4            | 6%
4              | 16               | 0            | 0%
3              | 21               | 1            | 2%
2              | 32               | 0            | 0%
1              | 64               | 0            | 0%

Packed{8,16}ThreeBlocks

These classes use 3 bytes or 3 shorts to store a single value. They are well suited to 24-bit and 48-bit values, but have a maximum size of Integer.MAX_VALUE/3 (so that the underlying array can be addressed by an int).
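The three-block trick is easy to picture in code. A simplified get for the byte-based variant could look like this (an illustration of the layout, not Lucene's exact implementation):

// Illustration only: each 24-bit value is stored as 3 consecutive bytes, so
// get and set are plain array accesses plus a few shifts.
long get(byte[] blocks, int index) {
  final int o = index * 3;
  return (blocks[o] & 0xFFL) << 16
      | (blocks[o + 1] & 0xFFL) << 8
      | (blocks[o + 2] & 0xFFL);
}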

How do they compare?

For every number of bits per value, there are 2 to 4 available implementations. One important criterion to select the one that best suits your needs is the memory overhead.

Here are the memory overheads for every number of bits per value and bit-packing scheme. The X-axis is the number of bits per value while the Y-axis is the memory overhead (space loss / actually used space).

For every bit-packing scheme, I only considered the most compact implementation. I could use a Direct64 to store 20-bit values, but it is very likely to have similar performance to a Direct32 (probably a little worse, since the CPU cache is less likely to help) although it requires twice as much space.

For example, there are 4 available implementations to store 20-bit values:

  • Direct32 (32 bits per value), which has 60% memory overhead
  • Packed64 (20 bits per value), which has 0% memory overhead
  • Packed64SingleBlock (21 bits + 1/3 padding bit per value), which has 7% memory overhead
  • Packed8ThreeBlocks (24 bits per value), which has 20% memory overhead

Even if we now know how compact the different implementations are, it is still very difficult to decide which one to use without knowing their relative performance characteristics. This is why I wrote a simple benchmark that, for every number of bits per value in [1,64]:

  • creates 2 to 4 packed integer arrays (one per implementation) of size 10,000,000
  • tests their random write performance (offsets are randomly chosen in the [0, 10000000[ range),
  • tests their random read performance.

The X-axis is the number of bits per value while the Y-axis is the number of read/written values per second.

The Direct* implementations are clearly faster than the packed implementations (~3x faster than Packed64 and 2x faster than Packed64SingleBlock). However, it is interesting to observe that the Packed*ThreeBlocks implementations are almost as fast as the Direct* implementations.

Packed64 and Packed64SingleBlock are much faster with small values (1 or 2 bits), due to the fact that the CPU caches can hold many more values at the same time, resulting in fewer cache misses when trying to access the data.

Now, how do read operations compare?

This time the results are very different. The fastest implementations are still the Direct* ones, but they are only ~18% faster than Packed64 and Packed*ThreeBlocks, and only ~8% faster than Packed64SingleBlock on average. This means that for read-only use cases, you could save a lot of memory by switching your arrays to a packed implementation while keeping performance at the same level.

Conclusion

Although bit-packing can help reduce memory use significantly, it is very rarely used in practice, probably because:

  • people usually don’t know how many bits per value they actually need,
  • 8, 16, 32 and 64-bit arrays are language built-ins, while packed arrays require some extra coding.

However, this experiment shows that you can achieve significant reductions in memory use by using packed integer arrays without sacrificing too much performance, since packed arrays can be almost as fast as raw arrays, especially for read operations.