Sunday, February 10, 2013

lz4-java 1.1.0 is out

I’m happy to announce the release of lz4-java 1.1.0. Artifacts can be downloaded from Maven Central and javadocs can be found at jpountz.github.com/lz4-java/1.1.0/docs/.

Release highlights

  • lz4 has been upgraded from r87 to r88 (improves LZ4 HC compression speed).
  • Experimental streaming support: data is serialized into fixed-size blocks of compressed data. This can be useful for people who need to manipulate data using streams and want compression to be transparent (see the sketch after this list).
  • The released artifact contains pre-compiled JNI bindings for some common platforms: win32/amd64, darwin/x86_64, linux/i386 and linux/amd64. Users of these platforms can now benefit from the speed of the JNI bindings without having to build from source.
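
To give an idea of what this streaming support looks like, here is a minimal sketch. It assumes the LZ4BlockOutputStream and LZ4BlockInputStream classes with the constructors found in recent lz4-java releases; the exact class names and signatures in 1.1.0 may differ, so treat this as an illustration rather than reference documentation.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.IOException;
    import net.jpountz.lz4.LZ4BlockInputStream;
    import net.jpountz.lz4.LZ4BlockOutputStream;

    public class StreamingExample {
      public static void main(String[] args) throws IOException {
        byte[] data = "some bytes to compress transparently".getBytes("UTF-8");

        // Write: data is buffered and flushed as fixed-size compressed blocks (64 KB here).
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        LZ4BlockOutputStream out = new LZ4BlockOutputStream(sink, 64 * 1024);
        out.write(data);
        out.close(); // flushes the last (possibly partial) block

        // Read: blocks are decompressed on the fly behind a regular InputStream.
        LZ4BlockInputStream in = new LZ4BlockInputStream(
            new ByteArrayInputStream(sink.toByteArray()));
        byte[] restored = new byte[data.length];
        new DataInputStream(in).readFully(restored);
        in.close();

        System.out.println(new String(restored, "UTF-8"));
      }
    }

Since each block is compressed independently, compression stays completely transparent to the caller: the reading side only sees a regular InputStream.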

Performance

In order to give a sense of the speed of lz4 and xxhash, I published some benchmarks. The lz4 compression/decompression benchmarks were computed using Ning’s jvm-compressor-benchmark framework, while the xxhash benchmark was run with Caliper.

I did my best to make these benchmarks unbiased, but the performance of these algorithms depends a lot on the kind of data being compressed or hashed (its length, for example), so whenever possible you should run benchmarks on your own data to decide which implementation to use.

Happy compressing and hashing!

Wednesday, January 9, 2013

lz4-java 1.0.0 released

I am happy to announce the first release of lz4-java, version 1.0.0.

lz4-java is a Java port of the lz4 compression library and the xxhash hashing library, which are both known for being blazing fast.

This release is based on lz4 r87 and xxhash r6. Artifacts have been pushed to Maven Central (net.jpountz.lz4:lz4:jar:1.0.0) and javadocs can be found at http://jpountz.github.com/lz4-java/1.0.0/docs/.

Examples

For those who would like to get started quickly, here are a few examples (a condensed sketch of the API follows this list):

  • lz4 compression and decompression
  • block hashing with xxhash
  • streaming hashing with xxhash.
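
For convenience, here is a minimal sketch condensing the three examples above into one program. Class and method names are those of current lz4-java releases (LZ4Factory, XXHashFactory, StreamingXXHash32); the 1.0.0 API may differ in places, so double-check against the 1.0.0 javadocs linked above.

    import net.jpountz.lz4.LZ4Compressor;
    import net.jpountz.lz4.LZ4Factory;
    import net.jpountz.lz4.LZ4SafeDecompressor;
    import net.jpountz.xxhash.StreamingXXHash32;
    import net.jpountz.xxhash.XXHashFactory;

    public class QuickStart {
      public static void main(String[] args) {
        byte[] data = "some bytes to compress and hash".getBytes();
        int seed = 0x9747b28c; // an arbitrary seed, reuse it to compare hashes

        // lz4 compression and decompression
        LZ4Factory lz4 = LZ4Factory.fastestInstance();
        LZ4Compressor compressor = lz4.fastCompressor();
        byte[] compressed = new byte[compressor.maxCompressedLength(data.length)];
        int compressedLen = compressor.compress(data, 0, data.length, compressed, 0, compressed.length);

        LZ4SafeDecompressor decompressor = lz4.safeDecompressor();
        byte[] restored = new byte[data.length];
        int restoredLen = decompressor.decompress(compressed, 0, compressedLen, restored, 0);

        // block hashing with xxhash
        XXHashFactory xxhash = XXHashFactory.fastestInstance();
        int blockHash = xxhash.hash32().hash(data, 0, data.length, seed);

        // streaming hashing with xxhash: feed the data incrementally
        StreamingXXHash32 streaming = xxhash.newStreamingHash32(seed);
        streaming.update(data, 0, data.length);
        int streamingHash = streaming.getValue();

        System.out.println(compressedLen + " " + restoredLen + " "
            + blockHash + " " + streamingHash);
      }
    }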

Happy compressing and hashing!

Wednesday, November 14, 2012

Stored fields compression in Lucene 4.1

Last time, I tried to explain how efficient stored fields compression can help when your index grows larger than your I/O cache. Indeed, magnetic disks are so slow that it is usually worth spending a few CPU cycles on compression in order to avoid disk seeks.

I have very good news for you: the stored fields format I used for these experiments will become the new default stored fields format as of Lucene 4.1! Here are the main highlights:

  • only one disk seek per document in the worst case (compared to two with the previous default stored fields format)
  • documents are compressed together in blocks of 16 KB or more using the blazing fast LZ4 compression algorithm

Over the last few weeks, I’ve had the occasion to talk about this new stored fields format with various Lucene users and developers who raised interesting questions that I’ll try to answer:

  • What happens if my documents are larger than 16KB? This stored fields format prevents documents from spreading across chunks: if your documents are larger than 16KB, you will have larger chunks that contain only one document.
  • Is it configurable? Yes and no: the stored fields format that will be used by Lucene41Codec is not configurable. However, it is based on another format, CompressingStoredFieldsFormat, which allows you to configure the chunk size and the compression algorithm to use (LZ4, LZ4 HC or Deflate); a sketch follows this list.
  • Are there limitations? Yes, there is one: individual documents cannot be larger than 2^32 - 2^16 bytes (a little less than 4 GB). But this should be fine for most (if not all) use-cases.
  • Can I disable compression? Of course you can: all you need to do is write a new codec that uses a stored fields format that does not compress stored fields, such as Lucene40StoredFieldsFormat.
  • My index is stored in memory / on a SSD, does it still make sense to compress stored fields? I think so:
    • it won’t slow down your search engine: on my very slow laptop (Core 2 Duo T6670), decompressing a 16 KB block of English text takes 80 µs on average, so even if your result pages have 50 documents, your queries will only be 4 ms slower (much less with faster hardware and/or smaller pages)
    • RAM and SSD are expensive, so thanks to stored fields compression you’ll be able to have larger indexes on the same hardware, or equivalent indexes on cheaper hardware
  • Can I plug in my own compression algorithm? Unfortunately you can’t, but if you really need to use a different compression algorithm, the code should be easy to adapt. However you should be aware of two optimizations of the LZ4 implementation in Lucene that you would almost certainly need to implement if you want to achieve similar performance:
    • it doesn’t compress to a temporary buffer before writing the compressed data to disk, instead it writes directly to a Lucene DataOutput — this proved to be faster (with MMapDirectory at least)
    • it stops decompressing as soon as enough data has been decompressed: for example, if you need to retrieve the second document of a chunk, which is stored between offsets 1024 and 2048 of the chunk, Lucene will only decompress 2 KB of data.
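
To make the configuration and “disable compression” answers above concrete, here is a sketch of what a custom codec could look like. It assumes Lucene 4.1’s FilterCodec, CompressingStoredFieldsFormat and CompressionMode classes with the constructors documented there; codec APIs have moved around between releases, so check the javadocs of your exact Lucene version before reusing it.

    import org.apache.lucene.codecs.FilterCodec;
    import org.apache.lucene.codecs.StoredFieldsFormat;
    import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
    import org.apache.lucene.codecs.compressing.CompressionMode;
    import org.apache.lucene.codecs.lucene41.Lucene41Codec;

    // Behaves like Lucene41Codec, except that stored fields are compressed in
    // 64 KB chunks with Deflate instead of 16 KB chunks with LZ4.
    // Note: the codec must be registered through Java's SPI mechanism
    // (META-INF/services/org.apache.lucene.codecs.Codec) so that the index can
    // be opened again later.
    public final class BigChunksCodec extends FilterCodec {

      public BigChunksCodec() {
        super("BigChunksCodec", new Lucene41Codec());
      }

      @Override
      public StoredFieldsFormat storedFieldsFormat() {
        return new CompressingStoredFieldsFormat(
            "BigChunksStoredFields",          // format name, written to the index
            CompressionMode.HIGH_COMPRESSION, // Deflate under the hood
            64 * 1024);                       // chunk size in bytes
      }
    }

Such a codec is then passed to IndexWriterConfig.setCodec; similarly, returning a Lucene40StoredFieldsFormat instance from storedFieldsFormat() would give you uncompressed stored fields.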

Many thanks to Robert Muir who helped me improve and fix this new stored fields format!

Tuesday, October 9, 2012

Efficient compressed stored fields with Lucene

Whatever you are storing on disk, everything usually goes perfectly well until your data becomes too large for your I/O cache. Until then, most disk accesses never actually touch disk and are almost as fast as reading or writing to main memory. The problem arises when your data becomes too large: disk accesses that can’t be served through the I/O cache will trigger an actual disk seek, and everything will suddenly become much slower. Once data becomes that large, there are three options: find techniques to reduce disk seeks (usually by loading some data in memory and/or relying more on sequential access), buy more RAM or better disks (SSD?), or accept that performance will degrade as your data keeps growing.

If you have a Lucene index with some stored fields, I wouldn’t be surprised if most of the size of your index were due to its .fdt files. For example, when indexing 10M documents from a wikipedia dump that Mike McCandless uses for nightly benchmarks, the .fdt files account for 69.3% of the index size.

.fdt is one of the two file extensions that are used for stored fields in Lucene. You can read more about how they work in Lucene40StoredFieldsFormat’s docs. The important thing to know is that loading a document from disk requires two disk seeks:

  • one in the fields index file (.fdx),
  • one in the fields data file (.fdt).

Since the fields index file is usually small (~ 8 * maxDoc bytes), the I/O cache should be able to serve most disk seeks in this file. However, the fields data file is often much larger (a little more than the original data), so a seek in this file is more likely to translate to an actual disk seek. In the worst case, where every seek in this file translates to an actual disk seek, if your search engine displays p results per page, it won’t be able to handle more than 100/p requests per second (given that a disk seek on commodity hard drives takes ~ 10ms). For example, with 10 results per page, that is 10 seeks, or roughly 100 ms per request, hence at most about 10 requests per second. As a consequence, the hit rate of the I/O cache on this file is very important for your query throughput. One option to improve the I/O cache hit rate is to compress stored fields so that the fields data file is smaller overall.

Up to version 2.9, Lucene had an option to compress stored fields but it has been deprecated and then removed (see LUCENE-652 for more information). In newer versions, users can still compress documents but this has to be done at the document level instead of the index level. However, the problem is still the same: if you are working with small fields, most compression algorithms are inefficient. In order to fix it, ElasticSearch 0.19.5 introduced store-level compression: it compresses large (64KB) fixed-size blocks of data instead of single fields in order to improve the compression ratio. This is probably the best way to compress small docs with Lucene up to version 3.6.

Fortunately, Lucene 4.0 (which should be released very soon) introduces flexible indexing: it allows you to customize Lucene’s low-level behavior, in particular the index file formats. With LUCENE-4226, Lucene got a new StoredFieldsFormat that efficiently compresses stored fields. Handling compression at the codec level allows for several optimizations compared to ElasticSearch’s approach:

  • blocks can have variable size so that documents never spread across two blocks (so that loading a document from disk never requires uncompressing more than one block),
  • uncompression can stop as soon as enough data has been uncompressed,
  • less memory is required.

Lucene40StoredFieldsFormat vs. CompressingStoredFieldsFormat

In order to ensure that it is really a win to compress stored fields, I ran a few benchmarks on a large index:

  • 10M documents,
  • every document has 4 stored fields:
    • an ID (a few bytes),
    • a title (a few bytes),
    • a date (a few bytes),
    • a body (up to 1KB).

CompressingStoredFieldsFormat has been instantiated with the following parameters:

  • compressionMode = FAST (fast compression and fast uncompression, but a lower compression ratio; uses LZ4 under the hood),
  • chunkSize = 16K (means that data will be compressed into blocks of ~16KB),
  • storedFieldsIndexFormat = MEMORY_CHUNK (the most compact fields index format, requires at most 12 bytes of memory per chunk of compressed documents).

Index size

  • Lucene40StoredFieldsFormat
    • Fields index: 76M
    • Field data: 9.4G
  • CompressingStoredFieldsFormat
    • Fields index: 1.7M
    • Field data: 5.7G

Indexing speed

Indexing took almost the same time with both StoredFieldsFormats (~ 37 minutes) and ingestion rates were very similar.

Document loading speed

I measured the average time to load a document from disk using random document identifiers in the [0 - maxDoc[ range. According to free, my I/O cache was ~ 5.2G when I ran these tests:

  • Lucene40StoredFieldsFormat: 11.5ms,
  • CompressingStoredFieldsFormat: 4.25ms.

In the case of Lucene40StoredFieldsFormat, the fields data file is much larger than the I/O cache so many requests to load a document translated to an actual disk seek. On the contrary, CompressingStoredFieldsFormat's fields data file is only a little larger than the I/O cache, so most seeks are served by the I/O cache. This explains why loading documents from disk was more than 2x faster, although it requires more CPU because of uncompression.

In that very particular case it would probably be even faster to switch to a more aggressive compression mode or a larger block size so that the whole .fdt file can fit into the I/O cache.

Conclusion

Unless your server has very fast I/O, it is usually faster to compress the fields data file so that most of it can fit into the I/O cache. Compared to Lucene40StoredFieldsFormat, CompressingStoredFieldsFormat allows for efficient stored fields compression and therefore better performance.

Friday, July 27, 2012

Wow, LZ4 is fast!

I’ve been doing some experiments with LZ4 recently and I must admit that I am truly impressed. For those not familiar with LZ4, it is a compression format from the LZ77 family. Compared to other similar algorithms (such as Google’s Snappy), LZ4’s file format does not allow for very high compression ratios since:

  • you cannot reference sequences which are more than 64kb backwards in the stream,
  • it encodes lengths with an algorithm that requires 1 + floor(n / 255) bytes to store an integer n instead of the 1 + floor(log(n) / log(2^7)) bytes that variable-length encoding would require.

This might sound like a lot of lost space, but fortunately things are not that bad: there are generally a lot of opportunities to find repeated sequences in a 64kb block, and unless you are working with trivial inputs, you very rarely need to encode lengths which are greater than 15. In case you still doubt LZ4’s ability to achieve high compression ratios, the original implementation includes a high compression algorithm that can easily achieve a 40% compression ratio on common ASCII text.
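
To make the two length encodings easier to compare, here is a small, self-contained illustration of the byte counts given by the two formulas above; this is not code from any actual LZ4 implementation.

    public class LengthEncodingCost {

      // LZ4-style length encoding: a run of 255-valued bytes followed by one
      // final byte smaller than 255, i.e. 1 + floor(n / 255) bytes.
      static int lz4LengthBytes(int n) {
        return 1 + n / 255;
      }

      // Classic variable-length encoding with 7 payload bits per byte,
      // i.e. 1 + floor(log(n) / log(2^7)) bytes for n > 0.
      static int varIntBytes(int n) {
        int bytes = 1;
        while ((n >>>= 7) != 0) {
          bytes++;
        }
        return bytes;
      }

      public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000, 100 * 1000}) {
          System.out.println(n + " -> lz4: " + lz4LengthBytes(n)
              + " byte(s), varint: " + varIntBytes(n) + " byte(s)");
        }
      }
    }

For the small values the format needs most of the time (lengths below 15 fit directly in the token), both schemes cost a single byte, which is why the loss is negligible in practice.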

But this file format also allows you to write fast compressors and uncompressors, and this is really what LZ4 excels at: compression and uncompression speed. To measure how much faster LZ4 is than other well-known compression algorithms, I wrote three Java implementations of LZ4 (a short sketch showing how to select each one follows the list):

  • a JNI binding to the original C implementation (including the high compression algorithm),
  • a pure Java port, using the standard API,
  • a pure Java port that uses the sun.misc.Unsafe API to speed up (un)compression.
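
These three implementations are what eventually shipped in lz4-java, where each of them can be requested explicitly. Here is a short sketch using the factory methods of the released library (which did not exist yet at the time of this post), so treat the method names as those of current releases:

    import net.jpountz.lz4.LZ4Compressor;
    import net.jpountz.lz4.LZ4Factory;

    public class PickImplementation {
      public static void main(String[] args) {
        // JNI bindings to the original C implementation
        LZ4Compressor jni = LZ4Factory.nativeInstance().fastCompressor();
        // pure Java port using only the standard API
        LZ4Compressor safe = LZ4Factory.safeInstance().fastCompressor();
        // pure Java port built on sun.misc.Unsafe
        LZ4Compressor unsafe = LZ4Factory.unsafeInstance().fastCompressor();
        // or simply let the library pick the fastest implementation available
        LZ4Compressor fastest = LZ4Factory.fastestInstance().fastCompressor();

        System.out.println(jni + " / " + safe + " / " + unsafe + " / " + fastest);
      }
    }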

Then I modified Ning’s JVM compressor benchmark (kudos to Ning for sharing it!) to add my compressors and ran the Calgary compression benchmark.

The results are very impressive:

  • the JNI default compressor is the fastest one in all cases but one, and the JNI uncompressor is always the fastest one,
  • even when compressed with the high compression algorithm, data is still very fast to uncompress, which is great for read-only data,
  • the unsafe Java compressor/uncompressor is by far the fastest pure Java compressor/uncompressor,
  • the safe Java compressor/uncompressor has comparable performance to some compressors/uncompressors that use the sun.misc.Unsafe API (such as LZF).

Compression

Uncompression

If you are curious about the compressors whose names start with “LZ4 chunks”, these are compressors implemented with the Java streams API that compress every 64kb block of the input data separately.

For the full Japex reports, see people.apache.org/~jpountz/lz4.