Package org.rocksdb
Interface AdvancedMutableColumnFamilyOptionsInterface<T extends AdvancedMutableColumnFamilyOptionsInterface<T>>
-
- All Known Subinterfaces:
MutableColumnFamilyOptionsInterface<T>
- All Known Implementing Classes:
ColumnFamilyOptions, MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder, Options
public interface AdvancedMutableColumnFamilyOptionsInterface<T extends AdvancedMutableColumnFamilyOptionsInterface<T>>
Advanced Column Family Options which are mutable. Taken from include/rocksdb/advanced_options.h and MutableCFOptions in util/cf_options.h.
-
-
Method Summary
All Methods  Instance Methods  Abstract Methods

Modifier and Type | Method | Description
long | arenaBlockSize() | The size of one block in arena memory allocation.
long | hardPendingCompactionBytesLimit() | All writes are stopped if the estimated bytes needed for compaction exceed this threshold.
long | inplaceUpdateNumLocks() | Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.
int | level0SlowdownWritesTrigger() | Soft limit on the number of level-0 files.
int | level0StopWritesTrigger() | Maximum number of level-0 files.
double | maxBytesForLevelMultiplier() | The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
int[] | maxBytesForLevelMultiplierAdditional() | Different max-size multipliers for different levels.
long | maxSequentialSkipInIterations() | An iteration->Next() sequentially skips over keys with the same user-key unless this option is set.
long | maxSuccessiveMerges() | Maximum number of successive merge operations on a key in the memtable.
int | maxWriteBufferNumber() | Returns the maximum number of write buffers.
long | memtableHugePageSize() | Page size for huge page TLB for the bloom filter in the memtable.
double | memtablePrefixBloomSizeRatio() | If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom filter for the memtable with a size of write_buffer_size * memtable_prefix_bloom_size_ratio.
boolean | paranoidFileChecks() | After writing every SST file, reopen it and read all the keys.
boolean | reportBgIoStats() | Determine whether IO stats in compactions and flushes are being measured.
T | setArenaBlockSize(long arenaBlockSize) | The size of one block in arena memory allocation.
T | setHardPendingCompactionBytesLimit(long hardPendingCompactionBytesLimit) | All writes are stopped if the estimated bytes needed for compaction exceed this threshold.
T | setInplaceUpdateNumLocks(long inplaceUpdateNumLocks) | Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.
T | setLevel0SlowdownWritesTrigger(int level0SlowdownWritesTrigger) | Soft limit on the number of level-0 files.
T | setLevel0StopWritesTrigger(int level0StopWritesTrigger) | Maximum number of level-0 files.
T | setMaxBytesForLevelMultiplier(double multiplier) | The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
T | setMaxBytesForLevelMultiplierAdditional(int[] maxBytesForLevelMultiplierAdditional) | Different max-size multipliers for different levels.
T | setMaxSequentialSkipInIterations(long maxSequentialSkipInIterations) | An iteration->Next() sequentially skips over keys with the same user-key unless this option is set.
T | setMaxSuccessiveMerges(long maxSuccessiveMerges) | Maximum number of successive merge operations on a key in the memtable.
T | setMaxWriteBufferNumber(int maxWriteBufferNumber) | The maximum number of write buffers that are built up in memory.
T | setMemtableHugePageSize(long memtableHugePageSize) | Page size for huge page TLB for the bloom filter in the memtable.
T | setMemtablePrefixBloomSizeRatio(double memtablePrefixBloomSizeRatio) | If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom filter for the memtable with a size of write_buffer_size * memtable_prefix_bloom_size_ratio.
T | setParanoidFileChecks(boolean paranoidFileChecks) | After writing every SST file, reopen it and read all the keys.
T | setReportBgIoStats(boolean reportBgIoStats) | Measure IO stats in compactions and flushes, if true.
T | setSoftPendingCompactionBytesLimit(long softPendingCompactionBytesLimit) | All writes will be slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold.
T | setTargetFileSizeBase(long targetFileSizeBase) | The target file size for compaction.
T | setTargetFileSizeMultiplier(int multiplier) | targetFileSizeMultiplier defines the size ratio between a level-(L+1) file and a level-L file.
T | setTtl(long ttl) | Non-bottom-level files older than TTL will go through the compaction process.
long | softPendingCompactionBytesLimit() | All writes will be slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold.
long | targetFileSizeBase() | The target file size for compaction.
int | targetFileSizeMultiplier() | targetFileSizeMultiplier defines the size ratio between a level-(L+1) file and a level-L file.
long | ttl() | Get the TTL for non-bottom-level files that will go through the compaction process.
-
-
-
Method Detail
-
setMaxWriteBufferNumber
T setMaxWriteBufferNumber(int maxWriteBufferNumber)
The maximum number of write buffers that are built up in memory. The default is 2, so that when one write buffer is being flushed to storage, new writes can continue to the other write buffer.
Default: 2
- Parameters:
maxWriteBufferNumber - maximum number of write buffers.
- Returns:
- the instance of the current options.
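Because every setter returns the options object itself (the T type parameter), these options compose as a fluent chain. A minimal configuration sketch, assuming the org.rocksdb artifact and its native library are available (the specific values are illustrative only):

```java
import org.rocksdb.ColumnFamilyOptions;

// Configuration sketch: the fluent setters of this interface return the
// options object, so several mutable column-family options can be set in
// one chained expression. Values here are examples, not recommendations.
public class WriteBufferConfig {
    public static void main(String[] args) {
        try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                .setMaxWriteBufferNumber(4)          // up to 4 in-memory write buffers
                .setLevel0SlowdownWritesTrigger(20)  // start slowing writes at 20 L0 files
                .setLevel0StopWritesTrigger(36)) {   // stop writes at 36 L0 files
            System.out.println(cfOpts.maxWriteBufferNumber()); // 4
        }
    }
}
```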
-
maxWriteBufferNumber
int maxWriteBufferNumber()
Returns the maximum number of write buffers.
- Returns:
- maximum number of write buffers.
- See Also:
setMaxWriteBufferNumber(int)
-
setInplaceUpdateNumLocks
T setInplaceUpdateNumLocks(long inplaceUpdateNumLocks)
Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.
- Parameters:
inplaceUpdateNumLocks - the number of locks used for inplace updates.
- Returns:
- the reference to the current options.
- Throws:
java.lang.IllegalArgumentException - thrown on 32-bit platforms when the value overflows the underlying platform-specific type.
-
inplaceUpdateNumLocks
long inplaceUpdateNumLocks()
Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.
- Returns:
- the number of locks used for inplace update.
-
setMemtablePrefixBloomSizeRatio
T setMemtablePrefixBloomSizeRatio(double memtablePrefixBloomSizeRatio)
If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom filter for the memtable with a size of write_buffer_size * memtable_prefix_bloom_size_ratio. If the ratio is larger than 0.25, it is sanitized to 0.25.
Default: 0 (disabled)
- Parameters:
memtablePrefixBloomSizeRatio - the ratio
- Returns:
- the reference to the current options.
-
memtablePrefixBloomSizeRatio
double memtablePrefixBloomSizeRatio()
If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom filter for the memtable with a size of write_buffer_size * memtable_prefix_bloom_size_ratio. If the ratio is larger than 0.25, it is sanitized to 0.25.
Default: 0 (disabled)
- Returns:
- the ratio
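The sizing rule above can be sketched in plain Java. This is an illustration only, not RocksDB code; `bloomBytes` is a hypothetical helper that mirrors the documented formula and the 0.25 sanitization:

```java
// Illustration of the memtable prefix bloom sizing rule described above.
// bloomBytes() is a hypothetical helper, not part of the RocksDB API.
public class PrefixBloomSize {
    static long bloomBytes(long writeBufferSize, double ratio) {
        if (ratio <= 0) {
            return 0; // a ratio of 0 disables the prefix bloom filter
        }
        double sanitized = Math.min(ratio, 0.25); // ratios above 0.25 are sanitized to 0.25
        return (long) (writeBufferSize * sanitized);
    }

    public static void main(String[] args) {
        long writeBufferSize = 64L * 1024 * 1024; // 64 MB write buffer
        System.out.println(bloomBytes(writeBufferSize, 0.1)); // 6710886
        System.out.println(bloomBytes(writeBufferSize, 0.5)); // 16777216 (clamped to 0.25)
        System.out.println(bloomBytes(writeBufferSize, 0.0)); // 0 (disabled)
    }
}
```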
-
setMemtableHugePageSize
T setMemtableHugePageSize(long memtableHugePageSize)
Page size for huge page TLB for the bloom filter in the memtable. If ≤ 0, do not allocate from the huge page TLB but from malloc. Huge pages need to be reserved for the allocation to succeed, for example: sysctl -w vm.nr_hugepages=20. See the Linux doc Documentation/vm/hugetlbpage.txt.
- Parameters:
memtableHugePageSize - the page size of the huge page TLB
- Returns:
- the reference to the current options.
-
memtableHugePageSize
long memtableHugePageSize()
Page size for huge page TLB for the bloom filter in the memtable. If ≤ 0, do not allocate from the huge page TLB but from malloc. Huge pages need to be reserved for the allocation to succeed, for example: sysctl -w vm.nr_hugepages=20. See the Linux doc Documentation/vm/hugetlbpage.txt.
- Returns:
- the page size of the huge page TLB
-
setArenaBlockSize
T setArenaBlockSize(long arenaBlockSize)
The size of one block in arena memory allocation. If ≤ 0, a proper value is automatically calculated (usually 1/10 of write_buffer_size). There are two additional restrictions on the specified size: (1) the size should be in the range [4096, 2 << 30], and (2) it should be a multiple of the CPU word size (which helps with memory alignment). We'll automatically check and adjust the size to make sure it conforms to the restrictions.
Default: 0
- Parameters:
arenaBlockSize - the size of an arena block
- Returns:
- the reference to the current options.
- Throws:
java.lang.IllegalArgumentException - thrown on 32-bit platforms when the value overflows the underlying platform-specific type.
-
arenaBlockSize
long arenaBlockSize()
The size of one block in arena memory allocation. If ≤ 0, a proper value is automatically calculated (usually 1/10 of write_buffer_size). There are two additional restrictions on the specified size: (1) the size should be in the range [4096, 2 << 30], and (2) it should be a multiple of the CPU word size (which helps with memory alignment). We'll automatically check and adjust the size to make sure it conforms to the restrictions.
Default: 0
- Returns:
- the size of an arena block
-
setLevel0SlowdownWritesTrigger
T setLevel0SlowdownWritesTrigger(int level0SlowdownWritesTrigger)
Soft limit on the number of level-0 files. We start slowing down writes at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.
- Parameters:
level0SlowdownWritesTrigger - the soft limit on the number of level-0 files
- Returns:
- the reference to the current options.
-
level0SlowdownWritesTrigger
int level0SlowdownWritesTrigger()
Soft limit on the number of level-0 files. We start slowing down writes at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.
- Returns:
- The soft limit on the number of level-0 files
-
setLevel0StopWritesTrigger
T setLevel0StopWritesTrigger(int level0StopWritesTrigger)
Maximum number of level-0 files. We stop writes at this point.
- Parameters:
level0StopWritesTrigger - the maximum number of level-0 files
- Returns:
- the reference to the current options.
-
level0StopWritesTrigger
int level0StopWritesTrigger()
Maximum number of level-0 files. We stop writes at this point.
- Returns:
- The maximum number of level-0 files
-
setTargetFileSizeBase
T setTargetFileSizeBase(long targetFileSizeBase)
The target file size for compaction. This targetFileSizeBase determines a level-1 file size. The target file size for level L can be calculated by targetFileSizeBase * (targetFileSizeMultiplier ^ (L-1)). For example, if targetFileSizeBase is 2MB and targetFileSizeMultiplier is 10, then each file on level-1 will be 2MB, each file on level-2 will be 20MB, and each file on level-3 will be 200MB. By default, targetFileSizeBase is 64MB.
- Parameters:
targetFileSizeBase - the target size of a level-1 file.
- Returns:
- the reference to the current options.
- See Also:
setTargetFileSizeMultiplier(int)
-
targetFileSizeBase
long targetFileSizeBase()
The target file size for compaction. This targetFileSizeBase determines a level-1 file size. The target file size for level L can be calculated by targetFileSizeBase * (targetFileSizeMultiplier ^ (L-1)). For example, if targetFileSizeBase is 2MB and targetFileSizeMultiplier is 10, then each file on level-1 will be 2MB, each file on level-2 will be 20MB, and each file on level-3 will be 200MB. By default, targetFileSizeBase is 64MB.
- Returns:
- the target size of a level-1 file.
- See Also:
targetFileSizeMultiplier()
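The per-level target formula above can be written out in plain Java. This is an illustration of the documented arithmetic only; `targetFileSize` is a hypothetical helper, not part of the RocksDB API:

```java
// Illustration of the level-target formula described above:
//   targetFileSize(L) = targetFileSizeBase * targetFileSizeMultiplier^(L-1)
// targetFileSize() is a hypothetical helper, not part of the RocksDB API.
public class TargetFileSize {
    static long targetFileSize(long base, int multiplier, int level) {
        long size = base;
        for (int l = 1; l < level; l++) {
            size *= multiplier; // each deeper level multiplies the target again
        }
        return size;
    }

    public static void main(String[] args) {
        long base = 2L * 1024 * 1024; // 2 MB base, as in the example above
        System.out.println(targetFileSize(base, 10, 1)); // 2097152   (2 MB, level-1)
        System.out.println(targetFileSize(base, 10, 2)); // 20971520  (20 MB, level-2)
        System.out.println(targetFileSize(base, 10, 3)); // 209715200 (200 MB, level-3)
    }
}
```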
-
setTargetFileSizeMultiplier
T setTargetFileSizeMultiplier(int multiplier)
targetFileSizeMultiplier defines the size ratio between a level-(L+1) file and a level-L file. By default targetFileSizeMultiplier is 1, meaning files in different levels have the same target size.
- Parameters:
multiplier - the size ratio between a level-(L+1) file and a level-L file.
- Returns:
- the reference to the current options.
-
targetFileSizeMultiplier
int targetFileSizeMultiplier()
targetFileSizeMultiplier defines the size ratio between a level-(L+1) file and a level-L file. By default targetFileSizeMultiplier is 1, meaning files in different levels have the same target size.
- Returns:
- the size ratio between a level-(L+1) file and level-L file.
-
setMaxBytesForLevelMultiplier
T setMaxBytesForLevelMultiplier(double multiplier)
The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
Default: 10
- Parameters:
multiplier - the ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
- Returns:
- the reference to the current options.
- See Also:
MutableColumnFamilyOptionsInterface.setMaxBytesForLevelBase(long)
-
maxBytesForLevelMultiplier
double maxBytesForLevelMultiplier()
The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
Default: 10
- Returns:
- the ratio between the total size of level-(L+1) files and
the total size of level-L files for all L.
- See Also:
MutableColumnFamilyOptionsInterface.maxBytesForLevelBase()
-
setMaxBytesForLevelMultiplierAdditional
T setMaxBytesForLevelMultiplierAdditional(int[] maxBytesForLevelMultiplierAdditional)
Different max-size multipliers for different levels. These are multiplied by max_bytes_for_level_multiplier to arrive at the max size of each level.
Default: 1
- Parameters:
maxBytesForLevelMultiplierAdditional - the max-size multipliers for each level
- Returns:
- the reference to the current options.
-
maxBytesForLevelMultiplierAdditional
int[] maxBytesForLevelMultiplierAdditional()
Different max-size multipliers for different levels. These are multiplied by max_bytes_for_level_multiplier to arrive at the max size of each level.
Default: 1
- Returns:
- The max-size multipliers for each level
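One plausible reading of how these multipliers compose can be sketched in plain Java. This is an assumption based on the description above (each level's cap is the previous level's cap times the base multiplier times that level's additional factor), not RocksDB's actual implementation, and `levelMaxBytes` is a hypothetical helper:

```java
// Sketch of how per-level max sizes could be derived from a level-1 base,
// max_bytes_for_level_multiplier, and the per-level additional multipliers.
// Assumption, not RocksDB code: each cap = previous cap * multiplier * additional.
public class LevelMaxBytes {
    static long[] levelMaxBytes(long base, double multiplier, int[] additional) {
        long[] maxBytes = new long[additional.length + 1];
        maxBytes[0] = base; // level-1 cap is the base
        for (int i = 1; i < maxBytes.length; i++) {
            maxBytes[i] = (long) (maxBytes[i - 1] * multiplier * additional[i - 1]);
        }
        return maxBytes;
    }

    public static void main(String[] args) {
        long base = 256L * 1024 * 1024; // 256 MB for level-1
        int[] additional = {1, 1, 2};   // extra factor of 2 at the deepest level
        for (long cap : levelMaxBytes(base, 10.0, additional)) {
            System.out.println(cap); // 268435456, 2684354560, 26843545600, 536870912000
        }
    }
}
```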
-
setSoftPendingCompactionBytesLimit
T setSoftPendingCompactionBytesLimit(long softPendingCompactionBytesLimit)
All writes will be slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold.
Default: 64GB
- Parameters:
softPendingCompactionBytesLimit - the soft limit to impose on compaction
- Returns:
- the reference to the current options.
-
softPendingCompactionBytesLimit
long softPendingCompactionBytesLimit()
All writes will be slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold.
Default: 64GB
- Returns:
- The soft limit to impose on compaction
-
setHardPendingCompactionBytesLimit
T setHardPendingCompactionBytesLimit(long hardPendingCompactionBytesLimit)
All writes are stopped if the estimated bytes needed for compaction exceed this threshold.
Default: 256GB
- Parameters:
hardPendingCompactionBytesLimit - the hard limit to impose on compaction
- Returns:
- the reference to the current options.
-
hardPendingCompactionBytesLimit
long hardPendingCompactionBytesLimit()
All writes are stopped if the estimated bytes needed for compaction exceed this threshold.
Default: 256GB
- Returns:
- The hard limit to impose on compaction
-
setMaxSequentialSkipInIterations
T setMaxSequentialSkipInIterations(long maxSequentialSkipInIterations)
An iteration->Next() sequentially skips over keys with the same user-key unless this option is set. This number specifies the number of keys (with the same user-key) that will be sequentially skipped before a reseek is issued.
Default: 8
- Parameters:
maxSequentialSkipInIterations - the number of keys that can be skipped in an iteration.
- Returns:
- the reference to the current options.
-
maxSequentialSkipInIterations
long maxSequentialSkipInIterations()
An iteration->Next() sequentially skips over keys with the same user-key unless this option is set. This number specifies the number of keys (with the same user-key) that will be sequentially skipped before a reseek is issued.
Default: 8
- Returns:
- the number of keys that can be skipped in an iteration.
-
setMaxSuccessiveMerges
T setMaxSuccessiveMerges(long maxSuccessiveMerges)
Maximum number of successive merge operations on a key in the memtable. When a merge operation is added to the memtable and the maximum number of successive merges is reached, the value of the key will be calculated and inserted into the memtable instead of the merge operation. This will ensure that there are never more than max_successive_merges merge operations in the memtable.
Default: 0 (disabled)
- Parameters:
maxSuccessiveMerges - the maximum number of successive merges.
- Returns:
- the reference to the current options.
- Throws:
java.lang.IllegalArgumentException - thrown on 32-bit platforms when the value overflows the underlying platform-specific type.
-
maxSuccessiveMerges
long maxSuccessiveMerges()
Maximum number of successive merge operations on a key in the memtable. When a merge operation is added to the memtable and the maximum number of successive merges is reached, the value of the key will be calculated and inserted into the memtable instead of the merge operation. This will ensure that there are never more than max_successive_merges merge operations in the memtable.
Default: 0 (disabled)
- Returns:
- the maximum number of successive merges.
-
setParanoidFileChecks
T setParanoidFileChecks(boolean paranoidFileChecks)
After writing every SST file, reopen it and read all the keys.
Default: false
- Parameters:
paranoidFileChecks - true to enable paranoid file checks
- Returns:
- the reference to the current options.
-
paranoidFileChecks
boolean paranoidFileChecks()
After writing every SST file, reopen it and read all the keys.
Default: false
- Returns:
- true if paranoid file checks are enabled
-
setReportBgIoStats
T setReportBgIoStats(boolean reportBgIoStats)
Measure IO stats in compactions and flushes, if true.
Default: false
- Parameters:
reportBgIoStats - true to enable reporting
- Returns:
- the reference to the current options.
-
reportBgIoStats
boolean reportBgIoStats()
Determine whether IO stats in compactions and flushes are being measured.
- Returns:
- true if reporting is enabled
-
setTtl
T setTtl(long ttl)
Non-bottom-level files older than TTL will go through the compaction process. This needs MutableDBOptionsInterface.maxOpenFiles() to be set to -1. Enabled only for level compaction for now.
Default: 0 (disabled)
Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).
- Parameters:
ttl - the time-to-live.
- Returns:
- the reference to the current options.
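Since this option is dynamically changeable, it can be updated on a live database via the RocksDB.setOptions entry point mentioned above. A usage sketch, assuming `db` and `cfHandle` refer to an already-open database and one of its column family handles, and that the TTL value is in seconds (as in RocksDB's C++ option docs):

```java
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.MutableColumnFamilyOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Usage sketch: change the TTL on a live column family. Per the docs above,
// TTL-based compaction also requires MutableDBOptionsInterface.maxOpenFiles()
// to be set to -1, and is only applied for level compaction.
public class DynamicTtl {
    static void enableTtlCompaction(final RocksDB db, final ColumnFamilyHandle cfHandle)
            throws RocksDBException {
        db.setOptions(cfHandle,
                MutableColumnFamilyOptions.builder()
                        .setTtl(60L * 60 * 24 * 30) // 30 days, assumed to be in seconds
                        .build());
    }
}
```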
-
ttl
long ttl()
Get the TTL for non-bottom-level files that will go through the compaction process. See setTtl(long).
- Returns:
- the time-to-live.
-
-