Interface AdvancedColumnFamilyOptionsInterface<T extends AdvancedColumnFamilyOptionsInterface<T>>
-
- All Known Subinterfaces:
ColumnFamilyOptionsInterface<T>
- All Known Implementing Classes:
ColumnFamilyOptions, Options
public interface AdvancedColumnFamilyOptionsInterface<T extends AdvancedColumnFamilyOptionsInterface<T>>

Advanced Column Family Options which are not mutable (i.e. not present in AdvancedMutableColumnFamilyOptionsInterface). Taken from include/rocksdb/advanced_options.h
-
-
Method Summary
All Methods, Instance Methods, Abstract Methods
- int bloomLocality(): Control locality of bloom filter probes to improve cache miss rate.
- CompactionOptionsFIFO compactionOptionsFIFO(): The options for FIFO compaction style.
- CompactionOptionsUniversal compactionOptionsUniversal(): The options needed to support Universal Style compactions.
- CompactionPriority compactionPriority(): Get the compaction priority used when level compaction is used for all levels.
- CompactionStyle compactionStyle(): Compaction style for DB.
- java.util.List<CompressionType> compressionPerLevel(): Return the currently set CompressionType per level.
- boolean forceConsistencyChecks(): In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile).
- boolean inplaceUpdateSupport(): Allows thread-safe in-place updates.
- boolean levelCompactionDynamicLevelBytes(): Return if LevelCompactionDynamicLevelBytes is enabled.
- long maxCompactionBytes(): Control the maximum size of each compaction (not guaranteed).
- int maxWriteBufferNumberToMaintain(): The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed.
- int minWriteBufferNumberToMerge(): The minimum number of write buffers that will be merged together before writing to storage.
- int numLevels(): If level-styled compaction is used, then this number determines the total number of levels.
- boolean optimizeFiltersForHits(): Returns the current state of the optimize_filters_for_hits setting.
- T setBloomLocality(int bloomLocality): Control locality of bloom filter probes to improve cache miss rate.
- T setCompactionOptionsFIFO(CompactionOptionsFIFO compactionOptionsFIFO): The options for FIFO compaction style.
- T setCompactionOptionsUniversal(CompactionOptionsUniversal compactionOptionsUniversal): Set the options needed to support Universal Style compactions.
- T setCompactionPriority(CompactionPriority compactionPriority): If compactionStyle() == CompactionStyle.LEVEL, for each level, which files are prioritized to be picked for compaction.
- ColumnFamilyOptionsInterface setCompactionStyle(CompactionStyle compactionStyle): Set compaction style for DB.
- T setCompressionPerLevel(java.util.List<CompressionType> compressionLevels): Different levels can have different compression policies.
- T setForceConsistencyChecks(boolean forceConsistencyChecks): In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile).
- T setInplaceUpdateSupport(boolean inplaceUpdateSupport): Allows thread-safe in-place updates.
- T setLevelCompactionDynamicLevelBytes(boolean enableLevelCompactionDynamicLevelBytes): If true, RocksDB will pick the target size of each level dynamically.
- T setMaxCompactionBytes(long maxCompactionBytes): Maximum size of each compaction (not guaranteed).
- T setMaxWriteBufferNumberToMaintain(int maxWriteBufferNumberToMaintain): The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed.
- T setMinWriteBufferNumberToMerge(int minWriteBufferNumberToMerge): The minimum number of write buffers that will be merged together before writing to storage.
- T setNumLevels(int numLevels): Set the number of levels for this database. If level-styled compaction is used, then this number determines the total number of levels.
- T setOptimizeFiltersForHits(boolean optimizeFiltersForHits): This flag specifies that the implementation should optimize the filters mainly for cases where keys are found rather than also optimizing for keys missed.
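For orientation, these setters are typically chained on a ColumnFamilyOptions instance, which implements this interface. A minimal sketch, assuming the org.rocksdb dependency is on the classpath and the native library is available; the chosen values are illustrative only, not recommendations:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionPriority;
import org.rocksdb.CompactionStyle;
import org.rocksdb.RocksDB;

public class AdvancedCfOptionsSketch {
    public static void main(String[] args) {
        RocksDB.loadLibrary(); // load the native RocksDB library
        // ColumnFamilyOptions is Closeable; try-with-resources frees native memory.
        try (ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()) {
            cfOpts.setNumLevels(6)                       // level-styled compaction depth
                  .setCompactionStyle(CompactionStyle.LEVEL)
                  .setCompactionPriority(CompactionPriority.ByCompensatedSize)
                  .setMinWriteBufferNumberToMerge(2)     // merge two memtables before flush
                  .setOptimizeFiltersForHits(true)
                  .setForceConsistencyChecks(true);
        }
    }
}
```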
-
-
-
Method Detail
-
setMinWriteBufferNumberToMerge
T setMinWriteBufferNumberToMerge(int minWriteBufferNumberToMerge)
The minimum number of write buffers that will be merged together before writing to storage. If set to 1, then all write buffers are flushed to L0 as individual files; this increases read amplification because a get request has to check all of these files. An in-memory merge may also result in writing less data to storage if there are duplicate records in each of these individual write buffers. Default: 1
- Parameters:
minWriteBufferNumberToMerge - the minimum number of write buffers that will be merged together.
- Returns:
- the reference to the current options.
-
minWriteBufferNumberToMerge
int minWriteBufferNumberToMerge()
The minimum number of write buffers that will be merged together before writing to storage. If set to 1, then all write buffers are flushed to L0 as individual files; this increases read amplification because a get request has to check all of these files. An in-memory merge may also result in writing less data to storage if there are duplicate records in each of these individual write buffers. Default: 1
- Returns:
- the minimum number of write buffers that will be merged together.
-
setMaxWriteBufferNumberToMaintain
T setMaxWriteBufferNumberToMaintain(int maxWriteBufferNumberToMaintain)
The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed. Unlike AdvancedMutableColumnFamilyOptionsInterface.maxWriteBufferNumber(), this parameter does not affect flushing. It controls the minimum amount of write history that will be available in memory for conflict checking when Transactions are used. When using an OptimisticTransactionDB: if this value is too low, some transactions may fail at commit time because it cannot be determined whether there were any write conflicts. When using a TransactionDB: if Transaction::SetSnapshot is used, TransactionDB will read either in-memory write buffers or SST files to do write-conflict checking. Increasing this value can reduce the number of reads to SST files done for conflict detection. Setting this value to 0 will cause write buffers to be freed immediately after they are flushed. If this value is set to -1, AdvancedMutableColumnFamilyOptionsInterface.maxWriteBufferNumber() will be used. Default: if using a TransactionDB/OptimisticTransactionDB, the default value will be set to the value of AdvancedMutableColumnFamilyOptionsInterface.maxWriteBufferNumber() if it is not explicitly set by the user. Otherwise, the default is 0.
- Parameters:
maxWriteBufferNumberToMaintain - The maximum number of write buffers to maintain.
- Returns:
- the reference to the current options.
-
maxWriteBufferNumberToMaintain
int maxWriteBufferNumberToMaintain()
The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed.
- Returns:
- the maximum number of write buffers to maintain.
-
setInplaceUpdateSupport
T setInplaceUpdateSupport(boolean inplaceUpdateSupport)
Allows thread-safe in-place updates. If the inplace_callback function is not set, Put(key, new_value) will update the existing_value in place iff:
- the key exists in the current memtable,
- sizeof(new_value) <= sizeof(existing_value), and
- the existing_value for that key is a Put, i.e. kTypeValue.
If the inplace_callback function is set, check the documentation for inplace_callback. Default: false.
- Parameters:
inplaceUpdateSupport - true if thread-safe in-place updates are allowed.
- Returns:
- the reference to the current options.
-
inplaceUpdateSupport
boolean inplaceUpdateSupport()
Allows thread-safe in-place updates. If the inplace_callback function is not set, Put(key, new_value) will update the existing_value in place iff:
- the key exists in the current memtable,
- sizeof(new_value) <= sizeof(existing_value), and
- the existing_value for that key is a Put, i.e. kTypeValue.
If the inplace_callback function is set, check the documentation for inplace_callback. Default: false.
- Returns:
- true if thread-safe in-place updates are allowed.
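The three conditions above can be captured as a small predicate. This is a hypothetical helper for illustration only (the name and signature are invented, not part of the RocksJava API):

```java
public class InplaceUpdateCheck {
    // Hypothetical predicate mirroring the documented in-place update
    // conditions; not part of RocksJava.
    public static boolean canUpdateInPlace(boolean keyInMemtable,
                                           int newValueSize,
                                           int existingValueSize,
                                           boolean existingIsPut) {
        return keyInMemtable
            && newValueSize <= existingValueSize // sizeof(new_value) <= sizeof(existing_value)
            && existingIsPut;                    // existing entry is kTypeValue (a Put)
    }

    public static void main(String[] args) {
        // Smaller value for an existing Put in the memtable: updatable in place.
        System.out.println(canUpdateInPlace(true, 4, 8, true));  // true
        // Larger replacement value cannot be updated in place.
        System.out.println(canUpdateInPlace(true, 16, 8, true)); // false
    }
}
```

If any condition fails, the write falls back to the normal memtable insert path.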
-
setBloomLocality
T setBloomLocality(int bloomLocality)
Control locality of bloom filter probes to improve cache miss rate. This option only applies to memtable prefix bloom and plain table prefix bloom. It essentially limits the maximum number of cache lines each bloom filter check can touch. This optimization is turned off when set to 0. The number should never be greater than the number of probes. This option can boost performance for in-memory workloads but should be used with care since it can cause a higher false positive rate. Default: 0
- Parameters:
bloomLocality - the level of locality of bloom-filter probes.
- Returns:
- the reference to the current options.
-
bloomLocality
int bloomLocality()
Control locality of bloom filter probes to improve cache miss rate. This option only applies to memtable prefix bloom and plain table prefix bloom. It essentially limits the maximum number of cache lines each bloom filter check can touch. This optimization is turned off when set to 0. The number should never be greater than the number of probes. This option can boost performance for in-memory workloads but should be used with care since it can cause a higher false positive rate. Default: 0
- Returns:
- the level of locality of bloom-filter probes.
- See Also:
setBloomLocality(int)
-
setCompressionPerLevel
T setCompressionPerLevel(java.util.List<CompressionType> compressionLevels)
Different levels can have different compression policies. There are cases where most lower levels would like to use quick compression algorithms while the higher levels (which have more data) use compression algorithms that have better compression but could be slower. This array, if non-empty, should have an entry for each level of the database; these override the value specified in the previous field 'compression'.
NOTICE: If level_compaction_dynamic_level_bytes=true, compression_per_level[0] still determines L0, but the other elements of the array are based on the base level (the level that L0 files are merged to), and may not match the level users see in the info log for metadata. If L0 files are merged to level n, then, for i>0, compression_per_level[i] determines the compression type for level n+i-1.
Example: if we have 5 levels, and we determine to merge L0 data to L4 (which means L1..L3 will be empty), then the new files going to L4 use compression type compression_per_level[1]. If L0 is instead merged to L2, data going to L2 will be compressed according to compression_per_level[1], L3 using compression_per_level[2] and L4 using compression_per_level[3]. The compression for each level can change as data grows. Default: empty
- Parameters:
compressionLevels - list of CompressionType instances.
- Returns:
- the reference to the current options.
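The index arithmetic in the note above can be made concrete with a tiny helper. This is a hypothetical function for illustration only, not part of RocksJava; it assumes level_compaction_dynamic_level_bytes=true and that L0 is merged into baseLevel:

```java
public class CompressionPerLevelIndex {
    // Hypothetical helper, not part of RocksJava: which entry of
    // compression_per_level applies to a given LSM level when L0 is
    // merged into baseLevel.
    public static int compressionIndexFor(int level, int baseLevel) {
        if (level == 0) {
            return 0; // compression_per_level[0] always determines L0
        }
        // From the note: level = baseLevel + i - 1, so i = level - baseLevel + 1.
        return level - baseLevel + 1;
    }

    public static void main(String[] args) {
        // 5 levels, L0 merged straight to L4: L4 uses entry 1.
        System.out.println(compressionIndexFor(4, 4)); // 1
        // L0 merged to L2: L2 -> entry 1, L3 -> entry 2, L4 -> entry 3.
        System.out.println(compressionIndexFor(3, 2)); // 2
    }
}
```

This reproduces both scenarios in the example: with base level 4, L4 reads entry 1; with base level 2, levels 2, 3 and 4 read entries 1, 2 and 3.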
-
compressionPerLevel
java.util.List<CompressionType> compressionPerLevel()
Return the currently set CompressionType per level.
- Returns:
- list of CompressionType instances.
-
setNumLevels
T setNumLevels(int numLevels)
Set the number of levels for this database. If level-styled compaction is used, then this number determines the total number of levels.
- Parameters:
numLevels - the number of levels.
- Returns:
- the reference to the current options.
-
numLevels
int numLevels()
If level-styled compaction is used, then this number determines the total number of levels.
- Returns:
- the number of levels.
-
setLevelCompactionDynamicLevelBytes
@Experimental("Turning this feature on or off for an existing DB can cause unexpected LSM tree structure so it's not recommended") T setLevelCompactionDynamicLevelBytes(boolean enableLevelCompactionDynamicLevelBytes)
If true, RocksDB will pick the target size of each level dynamically. We will pick a base level b >= 1. L0 will be merged directly into level b, instead of always into level 1. Levels 1 to b-1 need to be empty. We try to pick b and its target size so that
- the target size is in the range (max_bytes_for_level_base / max_bytes_for_level_multiplier, max_bytes_for_level_base]
- the target size of the last level (level num_levels-1) equals the actual size of the level.
At the same time max_bytes_for_level_multiplier and max_bytes_for_level_multiplier_additional are still satisfied.
With this option on, starting from an empty DB, we make the last level the base level, which means merging L0 data into the last level, until it exceeds max_bytes_for_level_base. Then we make the second-to-last level the base level, so that L0 data is merged there, with its target size being 1/max_bytes_for_level_multiplier of the last level's actual size. As more data accumulates we move the base level to the third-to-last level, and so on.
Example
For example, assume max_bytes_for_level_multiplier=10, num_levels=6, and max_bytes_for_level_base=10MB. The target sizes of levels 1 to 5 start as:
[- - - - 10MB] with the base level being level 5. Target sizes of levels 1 to 4 are not applicable because they will not be used. When the size of level 5 grows past 10MB, say to 11MB, we move the base target to level 4 and the targets become:
[- - - 1.1MB 11MB] As data accumulates, the size targets are tuned based on the actual data in level 5. When level 5 holds 50MB of data, the targets are:
[- - - 5MB 50MB] This holds until level 5's actual size exceeds 100MB, say at 101MB. If we kept level 4 as the base level, its target size would need to be 10.1MB, which does not satisfy the target size range. So we make level 3 the base level, and the target sizes become:
[- - 1.01MB 10.1MB 101MB] In the same way, as level 5 grows further, all levels' targets grow, for example:
[- - 5MB 50MB 500MB] Once level 5 exceeds 1000MB and reaches 1001MB, we make level 2 the base level and the target sizes become:
[- 1.001MB 10.01MB 100.1MB 1001MB] and so on.
By doing this, we give max_bytes_for_level_multiplier priority over max_bytes_for_level_base, for a more predictable LSM tree shape. It is useful for limiting worst-case space amplification. max_bytes_for_level_multiplier_additional is ignored with this flag on. Turning this feature on or off for an existing DB can cause an unexpected LSM tree structure, so it is not recommended.
Caution: this option is experimental
Default: false
- Parameters:
enableLevelCompactionDynamicLevelBytes - boolean value indicating if LevelCompactionDynamicLevelBytes shall be enabled.
- Returns:
- the reference to the current options.
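The target-size walkthrough above can be reproduced with a short calculation. This is an illustrative sketch of the documented picking rule (divide down from the last level's actual size until the target falls to max_bytes_for_level_base or below), not RocksDB's actual implementation:

```java
public class DynamicLevelTargets {
    // Illustrative sketch only, not RocksDB code: compute per-level target
    // sizes (in MB) given the last level's actual size. Levels above the
    // base level are left at 0.0 (unused).
    public static double[] targets(double lastLevelMb, int numLevels,
                                   double maxBytesForLevelBaseMb, int multiplier) {
        double[] t = new double[numLevels]; // t[i] = target size of level i
        double size = lastLevelMb;
        for (int level = numLevels - 1; level >= 1; level--) {
            t[level] = size;
            if (size <= maxBytesForLevelBaseMb) {
                break; // this level becomes the base level
            }
            size /= multiplier; // next level up targets 1/multiplier of this one
        }
        return t;
    }

    public static void main(String[] args) {
        // The "[- - - 5MB 50MB]" step from the example above:
        System.out.println(java.util.Arrays.toString(targets(50, 6, 10, 10)));
        // The "[- - - 1.1MB 11MB]" step:
        System.out.println(java.util.Arrays.toString(targets(11, 6, 10, 10)));
    }
}
```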
-
levelCompactionDynamicLevelBytes
@Experimental("Caution: this option is experimental") boolean levelCompactionDynamicLevelBytes()
Return if LevelCompactionDynamicLevelBytes is enabled.
For further information see setLevelCompactionDynamicLevelBytes(boolean).
- Returns:
- boolean value indicating if levelCompactionDynamicLevelBytes is enabled.
-
setMaxCompactionBytes
T setMaxCompactionBytes(long maxCompactionBytes)
Maximum size of each compaction (not guaranteed).
- Parameters:
maxCompactionBytes - the compaction size limit.
- Returns:
- the reference to the current options.
-
maxCompactionBytes
long maxCompactionBytes()
Control the maximum size of each compaction (not guaranteed).
- Returns:
- compaction size threshold
-
setCompactionStyle
ColumnFamilyOptionsInterface setCompactionStyle(CompactionStyle compactionStyle)
Set compaction style for DB. Default: LEVEL.
- Parameters:
compactionStyle - Compaction style.
- Returns:
- the reference to the current options.
-
compactionStyle
CompactionStyle compactionStyle()
Compaction style for DB.- Returns:
- Compaction style.
-
setCompactionPriority
T setCompactionPriority(CompactionPriority compactionPriority)
If compactionStyle() == CompactionStyle.LEVEL, this determines, for each level, which files are prioritized to be picked for compaction. Default: CompactionPriority.ByCompensatedSize
- Parameters:
compactionPriority - The compaction priority.
- Returns:
- the reference to the current options.
-
compactionPriority
CompactionPriority compactionPriority()
Get the compaction priority used when level compaction is used for all levels.
- Returns:
- The compaction priority
-
setCompactionOptionsUniversal
T setCompactionOptionsUniversal(CompactionOptionsUniversal compactionOptionsUniversal)
Set the options needed to support Universal Style compactions.
- Parameters:
compactionOptionsUniversal - The Universal Style compaction options.
- Returns:
- the reference to the current options.
-
compactionOptionsUniversal
CompactionOptionsUniversal compactionOptionsUniversal()
The options needed to support Universal Style compactions.
- Returns:
- The Universal Style compaction options
-
setCompactionOptionsFIFO
T setCompactionOptionsFIFO(CompactionOptionsFIFO compactionOptionsFIFO)
The options for FIFO compaction style.
- Parameters:
compactionOptionsFIFO - The FIFO compaction options.
- Returns:
- the reference to the current options.
-
compactionOptionsFIFO
CompactionOptionsFIFO compactionOptionsFIFO()
The options for FIFO compaction style.
- Returns:
- The FIFO compaction options
-
setOptimizeFiltersForHits
T setOptimizeFiltersForHits(boolean optimizeFiltersForHits)
This flag specifies that the implementation should optimize the filters mainly for cases where keys are found rather than also optimize for keys missed. This would be used in cases where the application knows that there are very few misses or the performance in the case of misses is not important.
For now, this flag allows us to not store filters for the last level, i.e. the largest level which contains data of the LSM store. For keys which are hits, the filters in this level are not useful because we will search for the data anyway.
NOTE: the filters in other levels are still useful even for key hits, because they tell us whether to look in that level or go to the higher level.
Default: false
- Parameters:
optimizeFiltersForHits - boolean value indicating if this flag is set.
- Returns:
- the reference to the current options.
-
optimizeFiltersForHits
boolean optimizeFiltersForHits()
Returns the current state of the optimize_filters_for_hits setting.
- Returns:
- boolean value indicating if the flag optimize_filters_for_hits was set.
-
setForceConsistencyChecks
T setForceConsistencyChecks(boolean forceConsistencyChecks)
In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile). These checks are disabled in release mode; use this option to enable them in release mode as well. Default: false
- Parameters:
forceConsistencyChecks - true to force consistency checks.
- Returns:
- the reference to the current options.
-
forceConsistencyChecks
boolean forceConsistencyChecks()
In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile). These checks are disabled in release mode.
- Returns:
- true if consistency checks are enforced
-
-