Sunday, June 9, 2013

Versatile sorting

Sorted data sets are very useful since they make a lot of things easier:

  • checking for duplicates,
  • computing frequencies (the number of times each unique element appears),
  • compression (thanks to delta encoding and bit-packing for example),
  • searching, thanks to binary search.

Java provides the ability to sort arrays and lists thanks to Arrays.sort and Collections.sort. These methods are flexible enough to accept custom comparators, so you can define the sort order that fits your needs, but you have no choice of sorting algorithm, even though different algorithms can have very different speed and memory characteristics. Here is how the Oracle JVM 1.7 sorts data:

  • Arrays.sort(Object[]) uses TimSort,
  • Arrays.sort on native arrays uses a dual-pivot quicksort,
  • Collections.sort dumps the list into an Object[] array, sorts the array with TimSort, and then copies the sorted elements back into the list.
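
Customizing the sort order, on the other hand, only takes a Comparator. A quick JDK-only example, just to illustrate the point:

import java.util.Arrays;
import java.util.Comparator;

public class ComparatorSortExample {
  public static void main(String[] args) {
    String[] words = { "fox", "the", "quick", "brown" };
    // The order is configurable (here: by length, then alphabetically),
    // but the algorithm is not: Arrays.sort(Object[], Comparator) always uses TimSort.
    Arrays.sort(words, new Comparator<String>() {
      @Override
      public int compare(String a, String b) {
        int cmp = a.length() - b.length();
        return cmp != 0 ? cmp : a.compareTo(b);
      }
    });
    System.out.println(Arrays.toString(words)); // [fox, the, brown, quick]
  }
}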

Although these APIs serve most use-cases, sometimes you would want to have more control over the implementation, for example:

  • To sort parallel arrays: imagine you have an array of objects and an array of floats, where the float at offset i is the score of the object at the same offset in the other array. Now you want to sort objects by their score. It is doable by writing a list view on top of these two arrays, but since Collections.sort dumps data into an Object[] array, this would be very memory intensive.
  • To avoid unnecessary object allocations: to sort a random-access list (such as ArrayList, but not LinkedList), there is no need to dump all your elements into an array; you could instead sort the list in place and save memory.
  • To better fit your data: if your data is “almost” sorted, there is a good chance that TimSort would perform faster than Quicksort.
  • To better fit your constraints: if you want to sort a huge array which occupies a large part of your heap, then TimSort might not be the best algorithm since it requires up to n/2 temporary slots. You could instead use an in-place sorting algorithm.
  • To better reuse memory: say you want to sort 100,000 small Object[] arrays. For every array, Java’s Arrays.sort will create a new temporary array (TimSort needs temporary storage to perform merges). By having more control over the sorting implementation, the temporary storage could be reused.

In order to have more control over sorting, Lucene initially imported CGlib’s SorterTemplate and improved it over time. But then arose the need to use TimSort to sort partially-sorted data, and it was very hard to fold it into SorterTemplate since TimSort requires temporary storage (unlike the quicksort and in-place merge sort implemented by SorterTemplate). This is why I started a small GitHub project to refactor SorterTemplate so that it can support more sorting algorithms:

  • a modified TimSort, which allows you to configure the amount of temporary storage that can be allocated,
  • Merge-sort, with configurable memory overhead similar to TimSort,
  • Introspective sort, which is essentially an improved Quicksort,
  • Heap-sort, on both binary and ternary heaps.

For example, here is some code which sorts two parallel arrays using introspective sort:
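
A sketch of what such code can look like, assuming an IntroSorter base class that exposes compare, swap, setPivot and comparePivot hooks (these names match the org.apache.lucene.util version; the standalone project may differ slightly):

import org.apache.lucene.util.IntroSorter;

class ParallelArraysSorter {
  // Sorts objects and scores in parallel, by ascending score, without allocating
  // any temporary object or array.
  static void sortByScore(final Object[] objects, final float[] scores) {
    new IntroSorter() {
      float pivot;

      @Override
      protected void swap(int i, int j) {
        Object o = objects[i]; objects[i] = objects[j]; objects[j] = o;
        float s = scores[i]; scores[i] = scores[j]; scores[j] = s;
      }

      @Override
      protected int compare(int i, int j) {
        return Float.compare(scores[i], scores[j]);
      }

      @Override
      protected void setPivot(int i) {
        pivot = scores[i];
      }

      @Override
      protected int comparePivot(int j) {
        return Float.compare(pivot, scores[j]);
      }
    }.sort(0, objects.length);
  }
}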

These new classes are now being used in Lucene, but I think they could be very useful to other projects, so feel free to use them! Feedback is very welcome.

Sunday, February 10, 2013

lz4-java 1.1.0 is out

I’m happy to announce the release of lz4-java 1.1.0. Artifacts can be downloaded from Maven Central and javadocs can be found at jpountz.github.com/lz4-java/1.1.0/docs/.

Release highlights

  • lz4 has been upgraded from r87 to r88 (improves LZ4 HC compression speed).
  • Experimental streaming support: data is serialized into fixed-size blocks of compressed data. This can be useful for people who need to manipulate data using streams and want compression to be transparent.
  • The released artifact contains pre-compiled JNI bindings for some common platforms: win32/amd64, darwin/x86_64, linux/i386 and linux/amd64. Users of these platforms can now benefit from the speed of the JNI bindings without having to build from source.

Performance

In order to give a sense of the speed of lz4 and xxhash, I published some benchmarks. The lz4 compression/decompression benchmarks have been computed using Ning’s jvm-compressor-benchmark framework while the xxhash benchmark has been computed using a Caliper benchmark.

I did my best to make these benchmarks unbiased, but the performance of these algorithms depends a lot on the kind of data which is compressed/hashed (its length for example), so whenever possible you should run benchmarks on your own data to decide which implementation to use.

Happy compressing and hashing!

Wednesday, January 23, 2013

Putting term vectors on a diet

What are term vectors?

Term vectors are an interesting Lucene feature which allows you to retrieve a single-document inverted index for any document ID in your index. This means that given any document ID, you can quickly list all its unique terms in sorted order, and for every term you can quickly know its original positions and offsets. For example, if you indexed the following document:

Field name | Field value
text       | the quick brown fox jumps over the lazy dog

You would retrieve the following term vectors:

Term  | Frequency | Positions | Offsets
brown | 1         | 2         | [10,15]
dog   | 1         | 8         | [40,43]
fox   | 1         | 3         | [16,19]
jumps | 1         | 4         | [20,25]
lazy  | 1         | 7         | [35,39]
over  | 1         | 5         | [26,30]
quick | 1         | 1         | [4,9]
the   | 2         | 0, 6      | [0,3], [31,34]
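
To make this more concrete, here is a sketch of how such a field can be indexed and read back with the Lucene 4.x term vector APIs (exact signatures vary slightly across 4.x releases):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DocsAndPositionsEnum;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

class TermVectorsExample {
  static void index(IndexWriter writer) throws Exception {
    // Enable term vectors, with positions and offsets, on the field.
    FieldType type = new FieldType(TextField.TYPE_STORED);
    type.setStoreTermVectors(true);
    type.setStoreTermVectorPositions(true);
    type.setStoreTermVectorOffsets(true);
    Document doc = new Document();
    doc.add(new Field("text", "the quick brown fox jumps over the lazy dog", type));
    writer.addDocument(doc);
  }

  static void dump(IndexReader reader, int docId) throws Exception {
    // A term vector is a single-document inverted index.
    Terms vector = reader.getTermVector(docId, "text");
    TermsEnum termsEnum = vector.iterator(null);
    BytesRef term;
    while ((term = termsEnum.next()) != null) {
      DocsAndPositionsEnum positions = termsEnum.docsAndPositions(null, null);
      positions.nextDoc(); // the enum only contains this one document
      int freq = positions.freq();
      System.out.print(term.utf8ToString() + " freq=" + freq);
      for (int i = 0; i < freq; ++i) {
        int position = positions.nextPosition();
        System.out.print(" pos=" + position
            + " offsets=[" + positions.startOffset() + "," + positions.endOffset() + "]");
      }
      System.out.println(); // e.g. "brown freq=1 pos=2 offsets=[10,15]"
    }
  }
}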

For very small documents, it makes little sense to store term vectors given that they can be recomputed very quickly by re-analyzing a document’s stored fields. But if your documents are large or if your analysis pipeline is expensive, storing term vectors on disk can be much faster than computing them on the fly. So far, term vectors have been mainly used for highlighting and MoreLikeThis (searching for similar documents) but there is an interesting issue open in Lucene JIRA to use term vectors to perform partial document updates.

However, term vectors come with a cost. They store a lot of information and often take up a lot of disk space. This is bad because it can make indexing and searching slower (especially if the index size grows beyond the size of your OS cache).

Term vectors compression

Having worked on stored fields compression in the past months, my first idea was to apply the same recipe: collect enough raw data to fill a 16 KB block, then compress it and flush it to disk. However term vectors are more challenging to compress: terms are already unique so it is rather hard for LZ codecs such as LZ4 to reach good compression ratios. Moreover, general-purpose compression algorithms are usually not very good at compressing numeric data (frequencies, term positions and offsets) so I needed something else.

After long hours of trial and error, I managed to write a new term vectors format based on LZ4 and bit-packing which efficiently compresses term vectors in various cases. Depending on the collection of documents, the compression ratio of the term vector files varied from 0.53 to 0.90. For example, indexing term vectors (with positions and offsets enabled) for 1M articles from the English Wikipedia database generates 5.9G of term vector files with the default codec from Lucene 4.0 or 4.1. By switching to this new term vectors format, the size of the term vector files decreased to 3.9G! Another piece of good news is that this size reduction made indexing faster: while indexing those 1M articles took 1038 seconds with the current term vectors format, it took only 870 seconds with this new compressed format (see the ingestion rate charts below).


Ingestion rate with the current default format.

Ingestion rate with the new compressed format.

Although this new format is still very experimental, I think it’s promising and would make a good candidate to become the new default term vectors format for a future version of Lucene. If you are interested in better understanding how it works and the compression ratio you can expect from this format, you can read more about it in Lucene Jira.

Wednesday, January 9, 2013

lz4-java 1.0.0 released

I am happy to announce that I released the first version of lz4-java, version 1.0.0.

lz4-java is a Java port of the lz4 compression library and the xxhash hashing library, which are both known for being blazing fast.

This release is based on lz4 r87 and xxhash r6. Artifacts have been pushed to Maven Central (net.jpountz.lz4:lz4:jar:1.0.0) and javadocs can be found at http://jpountz.github.com/lz4-java/1.0.0/docs/.

Examples

For those who would like to get started quickly, here are examples:

  • lz4 compression and decompression
  • block hashing with xxhash
  • streaming hashing with xxhash.
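
In a nutshell, compression, decompression and block hashing look roughly like the sketch below. The class and method names follow later lz4-java releases, so the 1.0.0 API may differ slightly:

import java.nio.charset.Charset;

import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4FastDecompressor;
import net.jpountz.xxhash.XXHash32;
import net.jpountz.xxhash.XXHashFactory;

public class Lz4JavaExample {
  public static void main(String[] args) {
    byte[] data = "the quick brown fox jumps over the lazy dog".getBytes(Charset.forName("UTF-8"));

    // Compression: picks the JNI bindings if available, otherwise a pure Java implementation.
    LZ4Factory factory = LZ4Factory.fastestInstance();
    LZ4Compressor compressor = factory.fastCompressor();
    byte[] compressed = new byte[compressor.maxCompressedLength(data.length)];
    int compressedLength = compressor.compress(data, 0, data.length, compressed, 0, compressed.length);

    // Decompression: the original length must be known, it is not stored in the raw block.
    LZ4FastDecompressor decompressor = factory.fastDecompressor();
    byte[] restored = new byte[data.length];
    decompressor.decompress(compressed, 0, restored, 0, data.length);

    // Block hashing with xxhash.
    XXHashFactory hashFactory = XXHashFactory.fastestInstance();
    XXHash32 hash32 = hashFactory.hash32();
    int seed = 0x9747b28c; // arbitrary, but use the same seed to compare hashes
    int hash = hash32.hash(data, 0, data.length, seed);

    System.out.println(compressedLength + " compressed bytes, hash=" + Integer.toHexString(hash));
  }
}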

Happy compressing and hashing!

Wednesday, November 14, 2012

Stored fields compression in Lucene 4.1

Last time, I tried to explain how efficient stored fields compression can help when your index grows larger than your I/O cache. Indeed, magnetic disks are so slow that it is usually worth spending a few CPU cycles on compression in order to avoid disk seeks.

I have very good news for you: the stored fields format I used for these experiments will become the new default stored fields format as of Lucene 4.1! Here are the main highlights:

  • only one disk seek per document in the worst case (compared to two with the previous default stored fields format)
  • documents are compressed together in blocks of 16 KB or more using the blazing fast LZ4 compression algorithm

Over the last weeks, I’ve had the occasion to talk about this new stored fields format with various Lucene users and developers who raised interesting questions that I’ll try to answer:

  • What happens if my documents are larger than 16KB? This stored fields format prevents documents from spreading across chunks: if your documents are larger than 16KB, you will have larger chunks that contain only one document.
  • Is it configurable? Yes and no: the stored fields format that will be used by Lucene41Codec is not configurable. However, it is based on another format: CompressingStoredFieldsFormat, which allows you to configure the chunk size and the compression algorithm to use (LZ4, LZ4 HC or Deflate); a short code sketch follows this list.
  • Are there limitations? Yes, there is one: individual documents cannot be larger than 2^31 - 2^14 bytes (a little less than 2 GB). But this should be fine for most (if not all) use-cases.
  • Can I disable compression? Of course you can, all you need to do is to write a new codec that uses a stored fields format which does not compress stored fields such as Lucene40StoredFieldsFormat.
  • My index is stored in memory / on a SSD, does it still make sense to compress stored fields? I think so:
    • it won’t slow down your search engine: on my very slow laptop (Core 2 Duo T6670), decompressing a 16 KB block of English text takes 80µs on average, so even if your result pages have 50 documents, your queries will only be 4ms slower (much less with faster hardware and/or smaller pages)
    • RAM and SSD are expensive, so thanks to stored fields compression you’ll be able to have larger indexes on the same hardware, or equivalent indexes on cheaper hardware
  • Can I plug in my own compression algorithm? Unfortunately you can’t, but if you really need to use a different compression algorithm, the code should be easy to adapt. However you should be aware of two optimizations of the LZ4 implementation in Lucene that you would almost certainly need to implement if you want to achieve similar performance:
    • it doesn’t compress to a temporary buffer before writing the compressed data to disk, instead it writes directly to a Lucene DataOutput — this proved to be faster (with MMapDirectory at least)
    • it stops decompressing as soon as enough data has been decompressed: for example, if you need to retrieve the second document of a chunk, which is stored between offsets 1024 and 2048 of the chunk, Lucene will only decompress 2 KB of data.
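
To illustrate the configuration point above, wiring CompressingStoredFieldsFormat into your own codec could look roughly like the sketch below; constructor arguments and mode names are indicative and changed a little across 4.x releases:

import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.StoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressionMode;
import org.apache.lucene.codecs.lucene41.Lucene41Codec;

// A codec identical to Lucene41Codec except for its stored fields format:
// bigger chunks and a compression mode that favors compression ratio over speed.
public class HighCompressionCodec extends FilterCodec {

  public HighCompressionCodec() {
    super("HighCompressionCodec", new Lucene41Codec());
  }

  @Override
  public StoredFieldsFormat storedFieldsFormat() {
    return new CompressingStoredFieldsFormat(
        "HighCompressionStoredFields",     // format name
        CompressionMode.HIGH_COMPRESSION,  // other modes: FAST and FAST_DECOMPRESSION
        64 * 1024);                        // chunk size in bytes
  }
}

// Usage: indexWriterConfig.setCodec(new HighCompressionCodec());
// Note: custom codecs must also be registered through Java's SPI so that the
// index can be read back later.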

Many thanks to Robert Muir who helped me improve and fix this new stored fields format!

Tuesday, October 9, 2012

Efficient compressed stored fields with Lucene

Whatever you are storing on disk, everything usually goes perfectly well until your data becomes too large for your I/O cache. Until then, most disk accesses actually never touch disk and are almost as fast as reading or writing to main memory. The problem arises when your data becomes too large: disk accesses that can’t be served through the I/O cache will trigger an actual disk seek, and everything will suddenly become much slower. Once data becomes that large, there are three options: either you find techniques to reduce disk seeks (usually by loading some data in memory and/or relying more on sequential access), buy more RAM or better disks (SSD?), or performance will degrade as your data will keep growing.

If you have a Lucene index with some stored fields, I wouldn’t be surprised if most of the size of your index were due to its .fdt files. For example, when indexing 10M documents from a wikipedia dump that Mike McCandless uses for nightly benchmarks, the .fdt files amount to 69.3% of the index size.

.fdt is one of the two file extensions that are used for stored fields in Lucene. You can read more about how they work in Lucene40StoredFieldsFormat’s docs. The important thing to know is that loading a document from disk requires two disk seeks:

  • one in the fields index file (.fdx),
  • one in the fields data file (.fdt).

Since the fields index file is usually small (~ 8 * maxDoc bytes), the I/O cache should be able to serve most disk seeks in this file. However, the fields data file is often much larger (a little more than the original data), so the seek in this file is more likely to translate to an actual disk seek. In the worst case, where all seeks in this file translate to actual disk seeks, if your search engine displays p results per page, it won’t be able to handle more than 100/p requests per second (given that a disk seek on commodity hard drives takes ~ 10ms). As a consequence, the hit rate of the I/O cache on this file is very important for your query throughput. One option to improve the I/O cache hit rate is to compress stored fields so that the fields data file is smaller overall.

Up to version 2.9, Lucene had an option to compress stored fields, but it was deprecated and then removed (see LUCENE-652 for more information). In newer versions, users can still compress documents, but this has to be done at the document level instead of the index level. However, the problem is still the same: if you are working with small fields, most compression algorithms are inefficient on such small inputs. In order to fix this, ElasticSearch 0.19.5 introduced store-level compression: it compresses large (64KB) fixed-size blocks of data instead of single fields in order to improve the compression ratio. This is probably the best way to compress small docs with Lucene up to version 3.6.
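
For reference, the document-level approach mentioned above boils down to something like the following sketch (the application would also need to inflate the bytes when reading the document back):

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.StoredField;

class DocumentLevelCompression {
  // The application compresses the field value itself and stores the result as a
  // binary stored field. Every field is compressed independently, which is exactly
  // why compression ratios are poor on small fields.
  static void addCompressedBody(Document doc, byte[] body) {
    Deflater deflater = new Deflater();
    deflater.setInput(body);
    deflater.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    while (!deflater.finished()) {
      out.write(buffer, 0, deflater.deflate(buffer));
    }
    deflater.end();
    doc.add(new StoredField("body", out.toByteArray()));
  }
}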

Fortunately, Lucene 4.0 (which should be released very soon) introduces flexible indexing: it allows you to customize Lucene’s low-level behavior, in particular the index file formats. With LUCENE-4226, Lucene got a new StoredFieldsFormat that efficiently compresses stored fields. Handling compression at the codec level allows for several optimizations compared to ElasticSearch’s approach:

  • blocks can have variable size so that documents never spread across two blocks (so that loading a document from disk never requires uncompressing more than one block),
  • uncompression can stop as soon as enough data has been uncompressed,
  • less memory is required.

Lucene40StoredFieldsFormat vs. CompressingStoredFieldsFormat

In order to ensure that it is really a win to compress stored fields, I ran a few benchmarks on a large index:

  • 10M documents,
  • every document has 4 stored fields:
    • an ID (a few bytes),
    • a title (a few bytes),
    • a date (a few bytes),
    • a body (up to 1KB).

CompressingStoredFieldsFormat has been instantiated with the following parameters:

  • compressionMode = FAST (fast compression and fast uncompression, at the price of a somewhat worse compression ratio; uses LZ4 under the hood),
  • chunkSize = 16K (means that data will be compressed into blocks of ~16KB),
  • storedFieldsIndexFormat = MEMORY_CHUNK (the most compact fields index format, requires at most 12 bytes of memory per chunk of compressed documents).

Index size

  • Lucene40StoredFieldsFormat
    • Fields index: 76M
    • Field data: 9.4G
  • CompressingStoredFieldsFormat
    • Fields index: 1.7M
    • Field data: 5.7G

Indexing speed

Indexing took almost the same time with both StoredFieldsFormats (~ 37 minutes) and ingestion rates are very similar.

Document loading speed

I measured the average time to load a document from disk using random document identifiers in the [0 - maxDoc[ range. According to free, my I/O cache was ~ 5.2G when I ran these tests:

  • Lucene40StoredFieldsFormat: 11.5ms,
  • CompressingStoredFieldsFormat: 4.25ms.

In the case of Lucene40StoredFieldsFormat, the fields data file is much larger than the I/O cache so many requests to load a document translated to an actual disk seek. On the contrary, CompressingStoredFieldsFormat's fields data file is only a little larger than the I/O cache, so most seeks are served by the I/O cache. This explains why loading documents from disk was more than 2x faster, although it requires more CPU because of uncompression.

In that very particular case it would probably be even faster to switch to a more aggressive compression mode or a larger block size so that the whole .fdt file can fit into the I/O cache.

Conclusion

Unless your server has very fast I/O, it is usually faster to compress the fields data file so that most of it can fit into the I/O cache. Compared to Lucene40StoredFieldsFormat, CompressingStoredFieldsFormat allows for efficient stored fields compression and therefore better performance.

Friday, July 27, 2012

Wow, LZ4 is fast!

I’ve been doing some experiments with LZ4 recently and I must admit that I am truly impressed. For those not familiar with LZ4, it is a compression format from the LZ77 family. Compared to other similar algorithms (such as Google’s Snappy), LZ4’s file format does not allow for very high compression ratios since:

  • you cannot reference sequences which are more than 64kb backwards in the stream,
  • it encodes lengths with an algorithm that requires 1 + floor(n / 255) bytes to store an integer n instead of the 1 + floor(log(n) / log(2^7)) bytes that variable-length encoding would require.

This might sound like a lot of lost space, but fortunately things are not that bad: there are generally a lot of opportunities to find repeated sequences in a 64kb block, and unless you are working with trivial inputs, you very rarely need to encode lengths which are greater than 15. For example, even a length of 1,000 only costs 1 + floor(1000/255) = 4 bytes with LZ4’s scheme, versus 2 bytes with variable-length encoding. In case you still doubt LZ4’s ability to achieve high compression ratios, the original implementation includes a high compression algorithm that can easily achieve a 40% compression ratio on common ASCII text.

But this file format also allows you to write fast compressors and uncompressors, and this is really what LZ4 excels at: compression and uncompression speed. To measure how much faster LZ4 is compared to other well-known compression algorithms, I wrote three Java implementations of LZ4:

  • a JNI binding to the original C implementation (including the high compression algorithm),
  • a pure Java port, using the standard API,
  • a pure Java port that uses the sun.misc.Unsafe API to speed up (un)compression.

Then I modified Ning’s JVM compressor benchmark (kudos to Ning for sharing it!) to add my compressors and ran the Calgary compression benchmark.

The results are very impressive:

  • the JNI default compressor is the fastest one in all cases but one, and the JNI uncompressor is always the fastest one,
  • even when compressed with the high compression algorithm, data is still very fast to uncompress, which is great for read-only data,
  • the unsafe Java compressor/uncompressor is by far the fastest pure Java compressor/uncompressor,
  • the safe Java compressor/uncompressor has comparable performance to some compressors/uncompressors that use the sun.misc.Unsafe API (such as LZF).

Compression

Uncompression

If you are curious about the compressors whose names start with “LZ4 chunks”, these are compressors that are implemented on top of the Java streams API and compress every 64kb block of the input data separately.

For the full Japex reports, see people.apache.org/~jpountz/lz4.

Monday, June 25, 2012

What is the theory behind Apache Lucene?

There is a recurring request from users to have more insight into Lucene internals. For example, see:

Although most of the ideas behind Lucene are explained in any good book on Information Retrieval, Lucene also implements some advanced algorithms for specific tasks. In these cases, it is probably easier to read an article describing the idea than to reverse-engineer the code. This is why I started a wiki page to collect links to research papers and blog articles that explain some advanced ideas behind Lucene.

Feel free to help me improve this wiki page by sending me ideas of Lucene algorithms that would deserve an entry on it!

Thursday, June 21, 2012

How fast is bit packing?

One of the most anticipated changes in Lucene/Solr 4.0 is its improved memory efficiency. Indeed, according to several benchmarks, you could expect a 2/3 reduction in memory use for a Lucene-based application (such as Solr or ElasticSearch) compared to Lucene 3.x.

One of the techniques that Lucene uses to reduce its memory footprint is bit-packing. This means that integer array values, instead of being fixed-size (8, 16, 32 or 64 bits per value), can have any size in the [1-64] range. If you store 17-bits integers this way, this is a 47% reduction of the size of your array compared to an int[]!

Here is what the interface looks like:

interface Mutable {
  long get(int index);
  void set(int index, long value);
  int size();
}

Under the hood, this interface has 4 implementations that have different speed and memory efficiency:

  1. Direct8, Direct16, Direct32 and Direct64 that just wrap a byte[], a short[], an int[] or a long[],
  2. Packed64, which packs values contiguously in 64-bits (long) blocks,
  3. Packed64SingleBlock, which looks like Packed64 but uses padding bits to prevent values from spanning across several blocks (32 bits per value at most),
  4. Packed8ThreeBlocks and Packed16ThreeBlocks, that store values in either 3 bytes (24 bits per value) or 3 shorts (48 bits per value).

In case you are interested, the code is available in Lucene’s svn repository.
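
In practice you rarely pick an implementation by hand: a factory method chooses one based on the number of bits per value and on how much memory overhead you are willing to accept. A minimal sketch, assuming the PackedInts.getMutable(valueCount, bitsPerValue, acceptableOverheadRatio) factory from Lucene 4.x:

import org.apache.lucene.util.packed.PackedInts;

public class PackedIntsExample {
  public static void main(String[] args) {
    // 1M values of 17 bits each. PackedInts.COMPACT (no acceptable overhead) should pick
    // the most compact implementation; a larger ratio may select a faster, larger one.
    PackedInts.Mutable values = PackedInts.getMutable(1000000, 17, PackedInts.COMPACT);
    values.set(42, 131071);             // 131071 is the largest 17-bits value
    System.out.println(values.get(42)); // 131071
  }
}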

Direct{8,16,32,64}

The methods of these classes directly translate to operations on an array:

  • Direct8: byte[],
  • Direct16: short[],
  • Direct32: int[],
  • Direct64: long[].

Operations on these classes should be very fast given that they directly translate into array accesses. However, these implementations also have the same drawback as arrays: if you want to store 17-bits values, you will need to use a Direct32, which has an 88% memory overhead for 17-bits values.

Packed64

This implementation stores values contiguously in 64-bits blocks. This is the most compact implementation: if you want to store a million 17-bits values, it will require roughly 17 * 1000000 / 8 ~= 2MB of space. One pitfall is that some values may span across two different blocks (whenever the number of bits per value is not a divisor of 64). As a consequence, to avoid costly CPU branches, the implementations of the get and set methods are a little tricky and always read or write 2 blocks with different shifts and masks.
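
To give an idea of what this involves, here is a simplified (and branchy) version of such a get method; the actual Lucene implementation always touches two blocks with precomputed shifts and masks in order to stay branch-free:

class Packed64Sketch {
  // Simplified get(): the value at `index` may span two 64-bit blocks.
  // Assumes bitsPerValue < 64 and a layout where values start at the low bits of a block.
  static long get(long[] blocks, int bitsPerValue, int index) {
    long bitPos = (long) index * bitsPerValue;
    int block = (int) (bitPos >>> 6);  // bitPos / 64
    int shift = (int) (bitPos & 63);   // bitPos % 64
    long mask = (1L << bitsPerValue) - 1;
    long value = blocks[block] >>> shift;
    if (shift + bitsPerValue > 64) {   // the value continues in the next block
      value |= blocks[block + 1] << (64 - shift);
    }
    return value & mask;
  }
}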

Packed64SingleBlock

This implementation is similar to Packed64 but does not allow its values to span across several blocks. If you want to store 21-bits values, every block will consist of 3 21-bits values (using 3*21=63 bits) and 64-63=1 padding bit (2% space loss). Here are the different value sizes that this class accepts.

Bits per value | Values per block | Padding bits | Space loss
32             | 2                | 0            | 0%
21             | 3                | 1            | 2%
16             | 4                | 0            | 0%
12             | 5                | 4            | 6%
10             | 6                | 4            | 6%
9              | 7                | 1            | 2%
8              | 8                | 0            | 0%
7              | 9                | 1            | 2%
6              | 10               | 4            | 6%
5              | 12               | 4            | 6%
4              | 16               | 0            | 0%
3              | 21               | 1            | 2%
2              | 32               | 0            | 0%
1              | 64               | 0            | 0%

Packed{8,16}ThreeBlocks

These classes use 3 bytes or 3 shorts to store a single value. They are well-suited for 24 and 48-bits values, but have a maximum size of Integer.MAX_VALUE/3 (so that the underlying array can be addressed by an int).

How do they compare?

For every number of bits per value, there are 2 to 4 available implementations. One important criterion to select the one that best suits your needs is the memory overhead.

Here are the memory overheads for every number of bits per value and bit-packing scheme. The X-axis is the number of bits per value while the Y-axis is the memory overhead (space loss / actually used space).

For every bit-packing scheme, I only considered the most compact implementation. I could use a Direct64 to store 20-bits values, but it is very likely to have similar (probably a little worse, since the CPU cache is less likely to help) performance to a Direct32, although it requires twice as much space.

For example, there are 4 available implementations to store 20-bits values:

  • Direct32 (32 bits per value), which has 60% memory overhead
  • Packed64 (20 bits per value), which has 0% memory overhead
  • Packed64SingleBlock (21 bits + 1/3 padding bit per value), which has 7% memory overhead
  • Packed8ThreeBlocks (24 bits per value), which has 20% memory overhead

Even though we now know how compact the different implementations are, it is still very difficult to decide which implementation to use without knowing their relative performance characteristics. This is why I wrote a simple benchmark that, for every number of bits per value in [1,64]:

  • creates 2 to 4 packed integer arrays (one per implementation) of size 10,000,000
  • tests their random write performance (offsets are randomly chosen in the [0, 10000000[ range),
  • tests their random read performance.

The X-axis is the number of bits per value while the Y-axis is the number of read/written values per second.

The Direct* implementations are clearly faster than the packed implementations (~3x faster than Packed64 and 2x faster than Packed64SingleBlock). However, it is interesting to observe that the Packed*ThreeBlocks implementations are almost as fast as the Direct* implementations.

Packed64 and Packed64SingleBlock are much faster with small values (1 or 2 bits), due to the fact that the CPU caches can hold many more values at the same time, resulting in fewer cache misses when trying to access the data.

Now, how do read operations compare?

This time the results are very different. The fastest implementations are still the Direct* ones, but they are only ~18% faster than Packed64 and Packed*ThreeBlocks, and only ~8% faster than Packed64SingleBlock on average. This means that for read-only use cases, you could save a lot of memory by switching your arrays to a packed implementation while keeping performance at the same level.

Conclusion

Although bit-packing can help reduce memory use significantly, it is very rarely used in practice, probably because:

  • people usually don’t know how many bits per value they actually need,
  • 8, 16, 32 and 64-bits arrays are language built-ins, while packed arrays require some extra coding.

However, this experiment shows that you can achieve significant reductions in memory use by using packed integer arrays without sacrificing performance too much, since packed arrays can be almost as fast as raw arrays, especially for read operations.