Adrien Grand

Jun 09

Versatile sorting

Sorted data sets are very useful since they make a lot of things easier:

Java provides the ability to sort arrays and lists thanks to Arrays.sort and Collections.sort. These methods are flexible enough to accept custom comparators, so you can define whatever sort order suits your needs, but you cannot choose the sorting algorithm, even though different algorithms can have very different speed and memory characteristics. For the record, here is how the Oracle JVM 1.7 sorts data: Arrays.sort uses a dual-pivot quicksort for arrays of primitives and TimSort for arrays of objects, and Collections.sort delegates to the latter.

Although these APIs serve most use cases, you may sometimes want more control over the implementation, for example:

In order to have more control over sorting, Lucene initially imported CGlib’s SorterTemplate and improved it over time. But then arose the need to use TimSort to sort partially-sorted data, and it was very hard to fold this algorithm into SorterTemplate, since TimSort requires temporary storage (unlike the quicksort and in-place merge sort implemented by SorterTemplate). This is why I started a small GitHub project to refactor SorterTemplate so that it can support more sorting algorithms:

For example, here is some code which sorts two parallel arrays using introspective sort:
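Something along these lines, assuming the org.apache.lucene.util.IntroSorter API (swap and compare inherited from Sorter, plus setPivot and comparePivot); this is a minimal sketch rather than the exact snippet, and class names may differ slightly in the standalone GitHub project:

import org.apache.lucene.util.IntroSorter;

public class ParallelSortExample {

  /** Sort ids in ascending order while keeping labels[i] associated with ids[i]. */
  public static void sortParallel(final int[] ids, final String[] labels) {
    new IntroSorter() {
      int pivot; // value of ids at the pivot index

      @Override
      protected void swap(int i, int j) {
        // swap both arrays so they stay in sync
        int tmpId = ids[i]; ids[i] = ids[j]; ids[j] = tmpId;
        String tmpLabel = labels[i]; labels[i] = labels[j]; labels[j] = tmpLabel;
      }

      @Override
      protected int compare(int i, int j) {
        return Integer.compare(ids[i], ids[j]);
      }

      @Override
      protected void setPivot(int i) {
        pivot = ids[i];
      }

      @Override
      protected int comparePivot(int j) {
        return Integer.compare(pivot, ids[j]);
      }
    }.sort(0, ids.length);
  }
}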

These new classes are now being used in Lucene, but I think they could be very useful to other projects too, so feel free to use them! Feedback is very welcome.

Feb 10

lz4-java 1.1.0 is out

I’m happy to announce the release of lz4-java 1.1.0. Artifacts can be downloaded from Maven Central and javadocs can be found at jpountz.github.com/lz4-java/1.1.0/docs/.

Release highlights

Performance

In order to give a sense of the speed of lz4 and xxhash, I published some benchmarks. The lz4 compression/decompression benchmarks were computed using Ning’s jvm-compressor-benchmark framework, while the xxhash benchmark was computed with a Caliper benchmark:

I did my best to make these benchmarks unbiased, but the performance of these algorithms depends a lot on the kind of data being compressed or hashed (its length, for example), so whenever possible you should run benchmarks on your own data to decide which implementation to use.

Happy compressing and hashing!

Jan 23

Putting term vectors on a diet

What are term vectors?

Term vectors are an interesting Lucene feature which lets you retrieve a single-document inverted index for any document ID in your index. This means that given any document ID, you can quickly list all of its unique terms in sorted order, and for every term you can quickly access its original positions and offsets. For example, if you indexed the following document:

Field name | Field value
text       | the quick brown fox jumps over the lazy dog

You would retrieve the following term vectors:

Term  | Frequency | Positions | Offsets
brown | 1         | 2         | [10,15]
dog   | 1         | 8         | [40,43]
fox   | 1         | 3         | [16,19]
jumps | 1         | 4         | [20,25]
lazy  | 1         | 7         | [35,39]
over  | 1         | 5         | [26,30]
quick | 1         | 1         | [4,9]
the   | 2         | 0, 6      | [0,3], [31,34]
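To read these term vectors back programmatically, something along the lines of the sketch below should work; it assumes the Lucene 4.x term vectors API (IndexReader.getTermVector, TermsEnum, DocsAndPositionsEnum) and the hypothetical field name "text":

import java.io.IOException;

import org.apache.lucene.index.DocsAndPositionsEnum;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

public class TermVectorExample {

  /** Print the term vector of the "text" field for the given document ID. */
  public static void printTermVector(IndexReader reader, int docID) throws IOException {
    Terms vector = reader.getTermVector(docID, "text");
    if (vector == null) {
      return; // term vectors were not stored for this document/field
    }
    TermsEnum termsEnum = vector.iterator(null);
    BytesRef term;
    while ((term = termsEnum.next()) != null) { // terms come back in sorted order
      DocsAndPositionsEnum positions = termsEnum.docsAndPositions(null, null);
      if (positions == null) {
        continue; // positions were not stored
      }
      positions.nextDoc(); // the enum is limited to the current document
      int freq = positions.freq();
      StringBuilder line = new StringBuilder(term.utf8ToString()).append(" freq=").append(freq);
      for (int i = 0; i < freq; ++i) {
        int position = positions.nextPosition();
        line.append(' ').append(position)
            .append(":[").append(positions.startOffset())
            .append(',').append(positions.endOffset()).append(']');
      }
      System.out.println(line);
    }
  }
}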

For very small documents, it makes little sense to store term vectors given that they can be recomputed very quickly by re-analyzing a document’s stored fields. But if your documents are large or if your analysis pipeline is expensive, storing term vectors on disk can be much faster than computing them on the fly. So far, term vectors have mainly been used for highlighting and MoreLikeThis (searching for similar documents), but there is an interesting issue open in the Lucene JIRA to use term vectors to perform partial document updates.

However, term vectors come with a cost. They store a lot of information and often take up a lot of disk space. This is bad because it can make indexing and searching slower (especially if the index size grows beyond the size of your OS cache).

Term vectors compression

Having worked on stored fields compression over the past months, my first idea was to apply the same recipe: collect enough raw data to fill a 16 KB block, then compress it and flush it to disk. However, term vectors are more challenging to compress: terms are already unique, so it is rather hard for LZ codecs such as LZ4 to reach good compression ratios. Moreover, general-purpose compression algorithms are usually not very good at compressing numeric data (frequencies, term positions and offsets), so I needed something else.

After long hours of trial and error, I managed to write a new term vectors format based on LZ4 and bit-packing which efficiently compresses term vectors in various cases. Depending on the collection of documents, the compression ratio of the term vector files varied from 0.53 to 0.90. For example, indexing term vectors (with positions and offsets enabled) for 1M articles from the English Wikipedia database generates 5.9G of term vector files with the default codec from Lucene 4.0 or 4.1. By switching to this new term vectors format, the size of the term vector files decreased to 3.9G! Another piece of good news is that this size reduction made indexing faster: while indexing those 1M articles took 1038 seconds with the current term vectors format, it took only 870 seconds with the new compressed format (see the ingestion rate charts below).


Ingestion rate with the current default format.

Ingestion rate with the new compressed format.

Although this new format is still very experimental, I think it is promising and would make a good candidate to become the new default term vectors format in a future version of Lucene. If you are interested in better understanding how it works and the compression ratio you can expect from it, you can read more about it in the Lucene JIRA.

Jan 09

lz4-java 1.0.0 released

I am happy to announce that I released the first version of lz4-java, version 1.0.0.

lz4-java is a Java port of the lz4 compression library and the xxhash hashing library, which are both known for being blazing fast.

This release is based on lz4 r87 and xxhash r6. Artifacts have been pushed to Maven Central (net.jpountz.lz4:lz4:jar:1.0.0) and javadocs can be found at http://jpountz.github.com/lz4-java/1.0.0/docs/.

Examples

For those who would like to get started quickly, here are examples:
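For instance, a minimal compression/decompression and hashing sketch looks like this; it is written against the API of later lz4-java releases (LZ4Factory, LZ4Compressor, LZ4FastDecompressor, XXHashFactory), so a few names may differ slightly in 1.0.0:

import java.nio.charset.Charset;

import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4FastDecompressor;
import net.jpountz.xxhash.XXHashFactory;

public class Lz4Example {

  public static void main(String[] args) {
    byte[] data = "the quick brown fox jumps over the lazy dog".getBytes(Charset.forName("UTF-8"));

    LZ4Factory factory = LZ4Factory.fastestInstance(); // JNI, unsafe or safe Java, whichever is available

    // compression: the destination buffer must be sized with maxCompressedLength
    LZ4Compressor compressor = factory.fastCompressor();
    int maxCompressedLength = compressor.maxCompressedLength(data.length);
    byte[] compressed = new byte[maxCompressedLength];
    int compressedLength = compressor.compress(data, 0, data.length, compressed, 0, maxCompressedLength);

    // decompression: the fast decompressor needs to know the original length
    LZ4FastDecompressor decompressor = factory.fastDecompressor();
    byte[] restored = new byte[data.length];
    decompressor.decompress(compressed, 0, restored, 0, data.length);

    // hashing with xxhash (32-bit variant)
    int seed = 0x9747b28c; // arbitrary seed, for illustration only
    int hash = XXHashFactory.fastestInstance().hash32().hash(data, 0, data.length, seed);

    System.out.println("compressed " + data.length + " bytes into " + compressedLength
        + " bytes, xxhash32=" + Integer.toHexString(hash));
  }
}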

Happy compressing and hashing!

Nov 14

Stored fields compression in Lucene 4.1

Last time, I tried to explain how efficient stored fields compression can help when your index grows larger than your I/O cache. Indeed, magnetic disks are so slow that it is usually worth spending a few CPU cycles on compression in order to avoid disk seeks.

I have very good news for you: the stored fields format I used for these experiments will become the new default stored fields format as of Lucene 4.1! Here are the main highlights:

Over the last few weeks, I’ve had the occasion to talk about this new stored fields format with various Lucene users and developers, who raised interesting questions that I’ll try to answer:

Many thanks to Robert Muir who helped me improve and fix this new stored fields format!

Oct 09

Efficient compressed stored fields with Lucene

Whatever you are storing on disk, everything usually goes perfectly well until your data becomes too large for your I/O cache. Until then, most disk accesses never actually touch the disk and are almost as fast as reading from or writing to main memory. The problem arises when your data becomes too large: disk accesses that can’t be served by the I/O cache trigger an actual disk seek, and everything suddenly becomes much slower. Once data becomes that large, there are three options: find techniques to reduce disk seeks (usually by loading some data into memory and/or relying more on sequential access), buy more RAM or better disks (SSD?), or accept that performance will degrade as your data keeps growing.

If you have a Lucene index with some stored fields, I wouldn’t be surprised if most of your index size were due to its .fdt files. For example, when indexing 10M documents from a Wikipedia dump that Mike McCandless uses for nightly benchmarks, the .fdt files account for 69.3% of the index size.

.fdt is one of the two file extensions that are used for stored fields in Lucene. You can read more about how they work in Lucene40StoredFieldsFormat’s docs. The important thing to know is that loading a document from disk requires two disk seeks: one in the fields index file (.fdx) to find the document’s offset, and one in the fields data file (.fdt) to read the document itself.

Since the fields index file is usually small (~ 8 * maxDoc bytes), the I/O cache should be able to serve most seeks in this file. However, the fields data file is often much larger (a little more than the original data), so a seek in this file is more likely to translate into an actual disk seek. In the worst case, where all seeks in this file translate into actual disk seeks, if your search engine displays p results per page, it won’t be able to handle more than 100/p requests per second (given that a disk seek on commodity hard drives takes ~ 10ms): with p = 10, that is at most 10 requests per second. As a consequence, the hit rate of the I/O cache on this file is very important for your query throughput. One option to improve the I/O cache hit rate is to compress stored fields so that the fields data file is smaller overall.

Up to version 2.9, Lucene had an option to compress stored fields, but it was deprecated and then removed (see LUCENE-652 for more information). In newer versions, users can still compress documents, but this has to be done at the document level instead of the index level. However, the problem remains the same: if you are working with small fields, most compression algorithms are inefficient. To fix this, ElasticSearch 0.19.5 introduced store-level compression: it compresses large (64KB) fixed-size blocks of data instead of single fields in order to improve the compression ratio. This is probably the best way to compress small documents with Lucene up to version 3.6.

Fortunately, Lucene 4.0 (which should be released very soon) introduces flexible indexing: it allows you to customize Lucene’s low-level behavior, in particular the index file formats. With LUCENE-4226, Lucene got a new StoredFieldsFormat that efficiently compresses stored fields. Handling compression at the codec level allows for several optimizations compared to ElasticSearch’s approach:

Lucene40StoredFieldsFormat vs. CompressingStoredFieldsFormat

In order to ensure that it is really a win to compress stored fields, I ran a few benchmarks on a large index:

CompressingStoredFieldsFormat has been instantiated with the following parameters:
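As a generic illustration only (the values below are hypothetical, not necessarily the parameters used for this benchmark), instantiating such a format looks roughly like this, assuming the constructor shape that ships in Lucene 4.1 (a format name, a compression mode and a chunk size); the resulting format then has to be plugged into a custom Codec:

import org.apache.lucene.codecs.StoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressionMode;

public class CompressingStoredFieldsExample {

  // Hypothetical values for illustration: LZ4-based "fast" mode and 16 KB chunks.
  static StoredFieldsFormat newFormat() {
    return new CompressingStoredFieldsFormat("MyCompressingStoredFields",
        CompressionMode.FAST, 1 << 14 /* chunkSize in bytes */);
  }
}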

Index size

Indexing speed

Indexing took almost the same time with both StoredFieldsFormats (~ 37 minutes) and ingestion rates are very similar:

Document loading speed

I measured the average time to load a document from disk using random document identifiers in the [0 - maxDoc[ range. According to free, my I/O cache was ~ 5.2G when I ran these tests:

In the case of Lucene40StoredFieldsFormat, the fields data file is much larger than the I/O cache so many requests to load a document translated to an actual disk seek. On the contrary, CompressingStoredFieldsFormat's fields data file is only a little larger than the I/O cache, so most seeks are served by the I/O cache. This explains why loading documents from disk was more than 2x faster, although it requires more CPU because of uncompression.

In that very particular case it would probably be even faster to switch to a more aggressive compression mode or a larger block size so that the whole .fdt file can fit into the I/O cache.

Conclusion

Unless your server has very fast I/O, it is usually faster to compress the fields data file so that most of it can fit into the I/O cache. Compared to Lucene40StoredFieldsFormat, CompressingStoredFieldsFormat allows for efficient stored fields compression and therefore better performance.

Jul 27

Wow, LZ4 is fast!

I’ve been doing some experiments with LZ4 recently and I must admit that I am truly impressed. For those not familiar with LZ4, it is a compression format from the LZ77 family. Compared to other similar algorithms (such as Google’s Snappy), LZ4’s file format does not allow for very high compression ratios, since repeated sequences can only be referenced within a 64kb window (match offsets are encoded on 16 bits) and literal or match lengths greater than 15 require extra bytes to encode.

This might sound like a lot of lost space, but fortunately things are not that bad: there are generally plenty of opportunities to find repeated sequences in a 64kb block, and unless you are working with trivial inputs, you very rarely need to encode lengths greater than 15. In case you still doubt LZ4’s ability to achieve high compression ratios, the original implementation includes a high-compression algorithm that can easily achieve a 40% compression ratio on common ASCII text.

But this file format also allows you to write fast compressors and uncompressors, and this is really what LZ4 excels at: compression and uncompression speed. To measure how much faster LZ4 is than other well-known compression algorithms, I wrote three Java implementations of LZ4:

Then I modified Ning’s JVM compressor benchmark (kudos to Ning for sharing it!) to add my compressors and ran the Calgary compression benchmark.

The results are very impressive:

Compression

Uncompression

If you are curious about the compressors whose names start with “LZ4 chunks”, these are compressors implemented on top of the Java streams API which compress every 64kb block of the input data separately.

For the full Japex reports, see people.apache.org/~jpountz/lz4.

Jun 25

What is the theory behind Apache Lucene?

There is a recurring request from users to have more insight into Lucene internals. For example, see:

Although most of the ideas behind Lucene are explained in any good book on Information Retrieval, Lucene also implements some advanced algorithms for specific tasks. In these cases, it is probably easier to read an article describing the idea than to reverse-engineer the code. This is why I started a wiki page to collect links to research papers and blog articles that explain some advanced ideas behind Lucene.

Feel free to help me improve this wiki page by sending me ideas of Lucene algorithms that would deserve an entry on it!

Jun 21

How fast is bit packing?

One of the most anticipated changes in Lucene/Solr 4.0 is its improved memory efficiency. Indeed, according to several benchmarks, you could expect a 2/3 reduction in memory use for a Lucene-based application (such as Solr or ElasticSearch) compared to Lucene 3.x.

One of the techniques that Lucene uses to reduce its memory footprint is bit-packing. This means that integer array values, instead of having a fixed size (8, 16, 32 or 64 bits per value), can have any size in the [1-64] range. If you store 17-bit integers this way, this is a 47% reduction in the size of your array compared to an int[]!

Here is what the interface looks like:

interface Mutable {
  long get(int index);
  void set(int index, long value);
  int size();
}
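In Lucene, such packed arrays are obtained through a factory rather than instantiated directly; a minimal usage sketch, assuming the PackedInts.getMutable(valueCount, bitsPerValue, acceptableOverheadRatio) factory method and the PackedInts.COMPACT constant:

import org.apache.lucene.util.packed.PackedInts;

public class PackedIntsExample {

  public static void main(String[] args) {
    // one million 17-bit values, favoring memory over speed (COMPACT)
    PackedInts.Mutable values = PackedInts.getMutable(1000000, 17, PackedInts.COMPACT);
    values.set(42, 12345L);
    System.out.println("values[42]=" + values.get(42) + ", size=" + values.size());
  }
}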

Under the hood, this interface has 4 implementations that have different speed and memory efficiency:

  1. Direct8, Direct16, Direct32 and Direct64 that just wrap a byte[], a short[], an int[] or a long[],
  2. Packed64, which packs values contiguously in 64-bits (long) blocks,
  3. Packed64SingleBlock, which looks like Packed64 but uses padding bits to prevent values from spanning across several blocks (32 bits per value at most),
  4. Packed8ThreeBlocks and Packed16ThreeBlocks, that store values in either 3 bytes (24 bits per value) or 3 shorts (48 bits per value).

In case you are interested, the code is available in the Lucene svn repository.

Direct{8,16,32,64}

The methods of these classes directly translate to operations on an array:
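As a simplified sketch (not the actual Lucene code), a Direct32 is essentially this:

// Simplified sketch of a Direct32: values are stored in a plain int[],
// so get and set are single array accesses.
class Direct32Sketch implements Mutable {
  private final int[] values;

  Direct32Sketch(int valueCount) {
    values = new int[valueCount];
  }

  public long get(int index) {
    return values[index] & 0xFFFFFFFFL; // packed values are unsigned: zero-extend
  }

  public void set(int index, long value) {
    values[index] = (int) value; // keep the 32 low-order bits
  }

  public int size() {
    return values.length;
  }
}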

Operations on these classes should be very fast given that they directly translate into array accesses. However, these implementations have the same drawback as arrays: if you want to store 17-bit values, you will need a Direct32, which means an 88% memory overhead.

Packed64

This implementation stores values contiguously in 64-bit blocks. It is the most compact implementation: if you want to store a million 17-bit values, it will require roughly 17 * 1000000 / 8 ~= 2MB of space. One pitfall is that some values may span across two blocks (whenever the number of bits per value is not a divisor of 64). As a consequence, to avoid costly CPU branches, the implementations of the get and set methods are a little tricky and always update two blocks with different shifts and masks.
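To give an idea of the bit twiddling involved, here is a simplified sketch of a get; the actual Lucene code packs bits in a different order and, as said above, avoids the branch by always touching two blocks:

class Packed64Sketch {
  // Simplified sketch: read the index-th value out of bitsPerValue-bit values
  // packed contiguously (from the low-order bits) into 64-bit blocks.
  static long get(long[] blocks, int bitsPerValue, int index) {
    long bitPos = (long) index * bitsPerValue;
    int block = (int) (bitPos >>> 6);      // bitPos / 64
    int shift = (int) (bitPos & 63);       // bitPos % 64
    long mask = bitsPerValue == 64 ? ~0L : (1L << bitsPerValue) - 1;

    long value = blocks[block] >>> shift;  // low-order bits of the value
    int bitsRead = 64 - shift;
    if (bitsRead < bitsPerValue) {
      // the value spans two blocks: fetch its high-order bits from the next block
      value |= blocks[block + 1] << bitsRead;
    }
    return value & mask;
  }
}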

Packed64SingleBlock

This implementation is similar to Packed64 but does not allow values to span across several blocks. If you want to store 21-bit values, every block will consist of 3 21-bit values (using 3*21=63 bits) and 64-63=1 padding bit (2% space loss). Here are the different value sizes that this class accepts; a small sketch of a get under this layout follows the table.

Bits per value | Values per block | Padding bits | Space loss
32             | 2                | 0            | 0%
21             | 3                | 1            | 2%
16             | 4                | 0            | 0%
12             | 5                | 4            | 6%
10             | 6                | 4            | 6%
9              | 7                | 1            | 2%
8              | 8                | 0            | 0%
7              | 9                | 1            | 2%
6              | 10               | 4            | 6%
5              | 12               | 4            | 6%
4              | 16               | 0            | 0%
3              | 21               | 1            | 2%
2              | 32               | 0            | 0%
1              | 64               | 0            | 0%
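Under this layout, a get for 21-bit values boils down to a division, a shift and a mask; a simplified sketch (assuming values are packed from the low-order bits of each block, which may not match Lucene’s actual layout):

class Packed64SingleBlockSketch {
  // Simplified sketch for 21-bit values: 3 values per 64-bit block, 1 padding bit.
  static long get21(long[] blocks, int index) {
    int block = index / 3;  // which 64-bit block holds the value
    int slot = index % 3;   // position of the value inside the block
    return (blocks[block] >>> (slot * 21)) & ((1L << 21) - 1);
  }
}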

Packed{8,16}ThreeBlocks

These classes use 3 bytes or 3 shorts to store a single value. They are well-suited for 24-bit and 48-bit values, but are limited to Integer.MAX_VALUE/3 values (so that the underlying array can be addressed by an int).
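As a simplified sketch (not the actual Lucene code, and with an arbitrary byte order), a get in the Packed8ThreeBlocks case reassembles a 24-bit value from 3 consecutive bytes:

class Packed8ThreeBlocksSketch {
  // Simplified sketch: each 24-bit value is spread over 3 consecutive bytes.
  static long get(byte[] blocks, int index) {
    int o = index * 3;
    return ((blocks[o] & 0xFFL) << 16)
        | ((blocks[o + 1] & 0xFFL) << 8)
        | (blocks[o + 2] & 0xFFL);
  }
}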

How do they compare?

For every number of bits per value, there are 2 to 4 available implementations. One important criterion to select the one that best suits your needs is the memory overhead.

Here are the memory overheads for every number of bits per value and bit-packing scheme. The X-axis is the number of bits per value while the Y-axis is the memory overhead (space loss / actually used space).

For every bit-packing scheme, I only considered the most compact implementation. I could use a Direct64 to store 20-bit values, but it is very likely to perform similarly to a Direct32 (probably a little worse, since the CPU cache is less likely to help), although it requires twice as much space.

For example, there are 4 available implementations to store 20-bit values: Direct32 (32 bits per value), Packed8ThreeBlocks (24 bits per value), Packed64SingleBlock (21 bits per value, 3 values per block) and Packed64 (exactly 20 bits per value).

Even if we now know how compact the different implementations are, it is still very difficult to decide which implementation to use without knowing their relative performance characteristics. This is why I wrote a simple benchmark that, for every number of bits per value in [1,64], measures the speed of writing and then reading values with every available implementation.

The X-axis is the number of bits per value while the Y-axis is the number of read/written values per second.

The Direct* implementations are clearly faster than the packed implementations (~3x faster than Packed64 and 2x faster than Packed64SingleBlock). However, it is interesting to observe that the Packed*ThreeBlocks implementations are almost as fast as the Direct* implementations.

Packed64 and Packed64SingleBlock are much faster with small values (1 or 2 bits per value), because the CPU caches can hold many more values at the same time, resulting in fewer cache misses when accessing the data.

Now, how do read operations compare?

This time the results are very different. The fastest implementations are still the Direct* ones, but they are only ~18% faster than Packed64 and Packed*ThreeBlocks, and only ~8% faster than Packed64SingleBlock on average. This means that for read-only use cases, you could save a lot of memory by switching your arrays to a packed implementation while keeping performance at the same level.

Conclusion

Although bit-packing can help reduce memory use significantly, it is very rarely used in practice, probably because:

However, this experiment shows that you can achieve significant reductions in memory use by using packed integer arrays, without sacrificing performance too much since packed arrays can be almost as fast as raw arrays, especially for read operations.