Over the years at Terracotta we have come across many scenarios where we had to write specialized data structures to meet our performance expectations. One such data structure that was fun to write was ObjectIDSet, a compressed set of longs.
In Terracotta, we identify all our shared objects by assigning each a unique Object Identifier (ID), which is essentially a class wrapping a long id. In many places we found ourselves dealing with large sets of IDs, either storing or processing them. For example, our Distributed Garbage Collector needs to know the set of all live objects in the system.
For huge datasets, storing these IDs in memory took a lot of space and increased GC pauses in the Terracotta Server Array. They also take space and time to store on disk and to send over the wire. So we came up with ObjectIDSet, a data structure that stores them in compressed form. It has undergone many changes and refactorings over the years.
Version 1: RangeObjectIDSet
IDs are assigned in blocks, so most of the time they are contiguous. The first version, called RangeObjectIDSet, internally creates Ranges of the IDs that exist in the set. A set of IDs like {2,3,4,5,8,9,10} is stored as a set of ranges {Range(2,5), Range(8,10)}. The Ranges themselves are stored in an AATreeSet, which is optimized for space (no parent pointers).
Now if we add, say, 6 to the set, all we have to do is update Range(2,5) to Range(2,6). If we then add 7, Range(2,6) and Range(8,10) are merged into one Range(2,10). Similarly, on remove we may have to split a range into two if the removed element is neither the first nor the last; if it is, we can just update the range.
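The add/merge/split bookkeeping described above can be sketched as follows. This is a simplified sketch using a TreeMap keyed by range start; the class and method names are invented for illustration, and the real RangeObjectIDSet stores Range objects in an AATreeSet instead.

```java
import java.util.Map;
import java.util.TreeMap;

// Simplified sketch of range-based ID storage: maps range start -> range end
// (both inclusive). All names here are illustrative, not the real API.
class RangeSetSketch {
    private final TreeMap<Long, Long> ranges = new TreeMap<>();

    void add(long id) {
        Map.Entry<Long, Long> floor = ranges.floorEntry(id);
        if (floor != null && floor.getValue() >= id) return; // already present
        long start = id;
        long end = id;
        // Extend the preceding range if it ends exactly at id - 1.
        if (floor != null && floor.getValue() == id - 1) start = floor.getKey();
        // Merge with the following range if it starts exactly at id + 1.
        Map.Entry<Long, Long> next = ranges.ceilingEntry(id + 1);
        if (next != null && next.getKey() == id + 1) {
            end = next.getValue();
            ranges.remove(next.getKey());
        }
        ranges.put(start, end); // overwrites the preceding range when extended
    }

    void remove(long id) {
        Map.Entry<Long, Long> floor = ranges.floorEntry(id);
        if (floor == null || floor.getValue() < id) return; // not present
        long start = floor.getKey();
        long end = floor.getValue();
        ranges.remove(start);
        if (start < id) ranges.put(start, id - 1); // keep the left piece
        if (end > id) ranges.put(id + 1, end);     // keep the right piece (split)
    }

    int rangeCount() { return ranges.size(); }
}
```

With this sketch, adding 6 to {Range(2,5), Range(8,10)} extends the first range, adding 7 afterwards merges the two ranges into one, and removing an interior element splits its range back into two.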
This worked well, saving space while performing comparably to a regular HashSet when the set contains contiguous blocks of IDs. But for some use cases (especially our caching use case) we found that it didn't work well. If the application removes alternate objects, making them garbage, you end up with a discontiguous set like {2,4,6,8,10} even though the allocation was contiguous. Such a set does not compress well with RangeObjectIDSet, since every ID ends up in its own single-element Range.
Version 2: BitSetObjectIDSet
So we came up with BitSetObjectIDSet. Here each Range covers a fixed span and is represented by two longs: the first is the start index, and the second is a bit set representing the next 64 numbers. So a Range(64, 0xFF00000000000000) means the set contains the IDs 64 through 71 (the top eight bits are set). When an ID is added, the bit corresponding to that number is set; when it is removed, that bit is unset. Since the range is fixed, you can always calculate the start index of the range where an ID belongs.
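The fixed-range bookkeeping can be sketched like this. All names are invented for the example, each range is aligned to a 64-ID block, and the most significant bit stands for the start ID itself; that bit order is an assumed convention, not necessarily what the real BitSetObjectIDSet does.

```java
import java.util.TreeMap;

// Simplified sketch of fixed-range bit-set storage: each range covers an
// aligned block of 64 IDs and is a (start, 64-bit mask) pair. Bit 63 (the
// most significant bit) stands for the start ID -- an assumed convention.
class BitSetRangeSketch {
    private final TreeMap<Long, Long> ranges = new TreeMap<>(); // start -> mask

    // Align the ID down to a multiple of 64 to find its range's start index.
    private static long startOf(long id) { return id & ~63L; }

    private static long bitFor(long id) { return 1L << (63 - (id - startOf(id))); }

    void add(long id) {
        long start = startOf(id);
        ranges.put(start, ranges.getOrDefault(start, 0L) | bitFor(id));
    }

    void remove(long id) {
        long start = startOf(id);
        Long bits = ranges.get(start);
        if (bits == null) return;
        long updated = bits & ~bitFor(id);
        if (updated == 0L) ranges.remove(start); // drop empty ranges entirely
        else ranges.put(start, updated);
    }

    boolean contains(long id) {
        Long bits = ranges.get(startOf(id));
        return bits != null && (bits & bitFor(id)) != 0L;
    }

    long maskFor(long start) { return ranges.getOrDefault(start, 0L); }
}
```

Note how this handles the worst case of the range-based approach: a discontiguous set like {2,4,6,8,10} still fits in a single two-long range instead of five separate Range objects.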
Of course, there are cases where RangeObjectIDSet performs better than BitSetObjectIDSet, but we found that even in those cases BitSetObjectIDSet didn't perform too badly, while the worst cases for RangeObjectIDSet are handled well by BitSetObjectIDSet.
By using these data structures, not only do we save space through compression, we also discard the ObjectID instance itself, which saves us another 16 bytes of memory per object. Overall we save hundreds of MB of heap by using this in many places in the product. Optimizations like these, along with BigMemory, keep GC pauses in check in the Terracotta Server Array.
One possible future improvement is a hybrid of RangeObjectIDSet and BitSetObjectIDSet that would allow both types of Ranges in the same set, depending on the entries. You can see the implementations of these classes here.