Interface ColumnFamilyOptionsInterface<T extends ColumnFamilyOptionsInterface<T>>
-
- All Superinterfaces:
AdvancedColumnFamilyOptionsInterface<T>
- All Known Implementing Classes:
ColumnFamilyOptions, Options
public interface ColumnFamilyOptionsInterface<T extends ColumnFamilyOptionsInterface<T>> extends AdvancedColumnFamilyOptionsInterface<T>
-
-
Field Summary
static long DEFAULT_COMPACTION_MEMTABLE_MEMORY_BUDGET
    Default memtable memory budget used with the following methods: optimizeLevelStyleCompaction(), optimizeUniversalStyleCompaction()
-
Method Summary
CompressionOptions bottommostCompressionOptions()
    Get the bottommost compression options.
CompressionType bottommostCompressionType()
    Compression algorithm that will be used for the bottommost level that contains files.
java.util.List<DbPath> cfPaths()
AbstractCompactionFilter<? extends AbstractSlice<?>> compactionFilter()
    Accessor for the CompactionFilter instance in use.
AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory()
    Accessor for the CompactionFilterFactory instance in use.
ConcurrentTaskLimiter compactionThreadLimiter()
    Get the compaction thread limiter.
CompressionOptions compressionOptions()
    Get the different options for compression algorithms.
int levelZeroFileNumCompactionTrigger()
    The number of files in level 0 to trigger compaction from level-0 to level-1.
int levelZeroSlowdownWritesTrigger()
    Soft limit on the number of level-0 files.
int levelZeroStopWritesTrigger()
    Maximum number of level-0 files.
double maxBytesForLevelMultiplier()
    The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
long maxTableFilesSizeFIFO()
    FIFO compaction option.
MemTableConfig memTableConfig()
    Get the config for mem-table.
java.lang.String memTableFactoryName()
    Returns the name of the current mem table representation.
T oldDefaults(int majorVersion, int minorVersion)
    The function recovers options to a previous version.
T optimizeForPointLookup(long blockCacheSizeMb)
    Use this if you don't need to keep the data sorted, i.e. you'll never use an iterator, only Put() and Get() API calls.
T optimizeForSmallDb()
    Use this if your DB is very small (like under 1GB) and you don't want to spend lots of memory for memtables.
T optimizeForSmallDb(Cache cache)
    Use this if your DB is very small (like under 1GB) and you don't want to spend lots of memory for memtables; an optional cache object is used as the block cache.
T optimizeLevelStyleCompaction()
    A starting point for tuning level style compaction; default values are not optimized for heavy workloads and big datasets.
T optimizeLevelStyleCompaction(long memtableMemoryBudget)
    A starting point for tuning level style compaction with an explicit memtable memory budget.
T optimizeUniversalStyleCompaction()
    A starting point for tuning universal style compaction; default values are not optimized for heavy workloads and big datasets.
T optimizeUniversalStyleCompaction(long memtableMemoryBudget)
    A starting point for tuning universal style compaction with an explicit memtable memory budget.
T setBottommostCompressionOptions(CompressionOptions compressionOptions)
    Set the options for compression algorithms used by bottommostCompressionType() if it is enabled.
T setBottommostCompressionType(CompressionType bottommostCompressionType)
    Compression algorithm that will be used for the bottommost level that contains files.
T setCfPaths(java.util.Collection<DbPath> paths)
    A list of paths where SST files for this column family can be put into, with its target size.
T setCompactionFilter(AbstractCompactionFilter<? extends AbstractSlice<?>> compactionFilter)
    A single CompactionFilter instance to call into during compaction.
T setCompactionFilterFactory(AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory)
    A factory that provides AbstractCompactionFilter objects which allow an application to modify/delete a key-value during background compaction.
T setCompactionThreadLimiter(ConcurrentTaskLimiter concurrentTaskLimiter)
    Compaction concurrent thread limiter for the column family.
T setComparator(AbstractComparator comparator)
    Use the specified comparator for key ordering.
T setComparator(BuiltinComparator builtinComparator)
    Set a BuiltinComparator to be used with RocksDB.
T setCompressionOptions(CompressionOptions compressionOptions)
    Set the different options for compression algorithms.
T setLevelZeroFileNumCompactionTrigger(int numFiles)
    Number of files to trigger level-0 compaction.
T setLevelZeroSlowdownWritesTrigger(int numFiles)
    Soft limit on the number of level-0 files.
T setLevelZeroStopWritesTrigger(int numFiles)
    Maximum number of level-0 files.
T setMaxBytesForLevelMultiplier(double multiplier)
    The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
T setMaxTableFilesSizeFIFO(long maxTableFilesSize)
    FIFO compaction option.
T setMemTableConfig(MemTableConfig memTableConfig)
    Set the config for mem-table.
T setMergeOperator(MergeOperator mergeOperator)
    Set the merge operator to be used for merging two different key/value pairs that share the same key.
T setMergeOperatorName(java.lang.String name)
    Set the merge operator to be used for merging two merge operands of the same key.
T setSstPartitionerFactory(SstPartitionerFactory factory)
    If non-null, use the specified factory for a function to determine the partitioning of SST files.
T setTableFormatConfig(TableFormatConfig config)
    Set the config for table format.
SstPartitionerFactory sstPartitionerFactory()
    Get the SST partitioner factory.
java.lang.String tableFactoryName()
TableFormatConfig tableFormatConfig()
    Get the config for table format.
T useCappedPrefixExtractor(int n)
    Same as the fixed length prefix extractor, except that when the slice is shorter than the fixed length, it will use the full key.
T useFixedLengthPrefixExtractor(int n)
    This prefix-extractor uses the first n bytes of a key as its prefix.
Methods inherited from interface org.rocksdb.AdvancedColumnFamilyOptionsInterface
bloomLocality, compactionOptionsFIFO, compactionOptionsUniversal, compactionPriority, compactionStyle, compressionPerLevel, forceConsistencyChecks, inplaceUpdateSupport, levelCompactionDynamicLevelBytes, maxCompactionBytes, maxWriteBufferNumberToMaintain, minWriteBufferNumberToMerge, numLevels, optimizeFiltersForHits, setBloomLocality, setCompactionOptionsFIFO, setCompactionOptionsUniversal, setCompactionPriority, setCompactionStyle, setCompressionPerLevel, setForceConsistencyChecks, setInplaceUpdateSupport, setLevelCompactionDynamicLevelBytes, setMaxCompactionBytes, setMaxWriteBufferNumberToMaintain, setMinWriteBufferNumberToMerge, setNumLevels, setOptimizeFiltersForHits
-
-
-
-
Field Detail
-
DEFAULT_COMPACTION_MEMTABLE_MEMORY_BUDGET
static final long DEFAULT_COMPACTION_MEMTABLE_MEMORY_BUDGET
Default memtable memory budget used with the following methods:
- optimizeLevelStyleCompaction()
- optimizeUniversalStyleCompaction()
- See Also:
- Constant Field Values
-
-
Method Detail
-
oldDefaults
T oldDefaults(int majorVersion, int minorVersion)
The function recovers options to a previous version. Only version 4.6 and later are supported.
- Returns:
- the instance of the current object.
-
optimizeForSmallDb
T optimizeForSmallDb()
Use this if your DB is very small (like under 1GB) and you don't want to spend lots of memory for memtables.
- Returns:
- the instance of the current object.
-
optimizeForSmallDb
T optimizeForSmallDb(Cache cache)
Use this if your DB is very small (like under 1GB) and you don't want to spend lots of memory for memtables. An optional cache object is passed in to be used as the block cache.
- Returns:
- the instance of the current object.
-
optimizeForPointLookup
T optimizeForPointLookup(long blockCacheSizeMb)
Use this if you don't need to keep the data sorted, i.e. you'll never use an iterator, only Put() and Get() API calls.
- Parameters:
blockCacheSizeMb - block cache size in MB
- Returns:
- the instance of the current object.
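As a sketch of how this call composes with normal options setup (the 64 MB cache size is an arbitrary illustration, not a recommended value):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class PointLookupExample {
    // Configure a column family for a pure Put()/Get() workload.
    static ColumnFamilyOptions pointLookupOptions() {
        RocksDB.loadLibrary(); // load the native library before using any options
        ColumnFamilyOptions opts = new ColumnFamilyOptions();
        // 64 MB block cache for point lookups; the method returns the
        // options object, so it chains like the other fluent setters.
        return opts.optimizeForPointLookup(64);
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = pointLookupOptions()) {
            System.out.println("memtable factory: " + opts.memTableFactoryName());
        }
    }
}
```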
-
optimizeLevelStyleCompaction
T optimizeLevelStyleCompaction()
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for level style compaction.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during periods of high write rates.
- Returns:
- the instance of the current object.
-
optimizeLevelStyleCompaction
T optimizeLevelStyleCompaction(long memtableMemoryBudget)
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for level style compaction.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during periods of high write rates.
- Parameters:
memtableMemoryBudget - memory budget in bytes
- Returns:
- the instance of the current object.
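A minimal sketch of applying the level-style defaults with an explicit budget (the 512 MB figure is a hypothetical choice; also remember to raise parallelism on the DBOptions side as the note above says):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class LevelStyleExample {
    static ColumnFamilyOptions levelStyleOptions(long budgetBytes) {
        RocksDB.loadLibrary();
        // Applies a set of level-compaction defaults (write buffer sizes,
        // level targets, etc.) derived from the given budget in bytes.
        return new ColumnFamilyOptions().optimizeLevelStyleCompaction(budgetBytes);
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = levelStyleOptions(512L * 1024 * 1024)) {
            System.out.println("min write buffers to merge: "
                + opts.minWriteBufferNumberToMerge());
        }
    }
}
```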
-
optimizeUniversalStyleCompaction
T optimizeUniversalStyleCompaction()
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for universal style compaction.
Universal style compaction is focused on reducing Write Amplification Factor for big data sets, but increases Space Amplification.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during periods of high write rates.
- Returns:
- the instance of the current object.
-
optimizeUniversalStyleCompaction
T optimizeUniversalStyleCompaction(long memtableMemoryBudget)
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for universal style compaction.
Universal style compaction is focused on reducing Write Amplification Factor for big data sets, but increases Space Amplification.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during periods of high write rates.
- Parameters:
memtableMemoryBudget - memory budget in bytes
- Returns:
- the instance of the current object.
-
setComparator
T setComparator(BuiltinComparator builtinComparator)
Set a BuiltinComparator to be used with RocksDB. Note: the comparator can be set only once, upon database creation. Default: BytewiseComparator.
- Parameters:
builtinComparator - a BuiltinComparator type.
- Returns:
- the instance of the current object.
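A short sketch of selecting a built-in comparator (reverse byte order here, purely as an example; the choice is fixed at database creation):

```java
import org.rocksdb.BuiltinComparator;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class ComparatorExample {
    static ColumnFamilyOptions reverseOrdered() {
        RocksDB.loadLibrary();
        // Keys will iterate in reverse byte order. This must be chosen
        // before the database is created and never changed afterwards.
        return new ColumnFamilyOptions()
            .setComparator(BuiltinComparator.REVERSE_BYTEWISE_COMPARATOR);
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = reverseOrdered()) {
            System.out.println("configured reverse bytewise comparator");
        }
    }
}
```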
-
setComparator
T setComparator(AbstractComparator comparator)
Use the specified comparator for key ordering. The comparator should not be disposed before the options instances that use it are disposed. If the dispose() function is not called, the comparator object will be GC'd automatically. A comparator instance can be re-used in multiple options instances.
- Parameters:
comparator - a Java instance of the comparator.
- Returns:
- the instance of the current object.
-
setMergeOperatorName
T setMergeOperatorName(java.lang.String name)
Set the merge operator to be used for merging two merge operands of the same key. The merge function is invoked during compaction and at lookup time, if multiple key/value pairs belonging to the same key are found in the database.
- Parameters:
name - the name of the merge function, as defined by the MergeOperators factory (see utilities/MergeOperators.h). The merge function is specified by name and must be one of the standard merge operators provided by RocksDB: "put", "uint64add", "stringappend" and "stringappendtest".
- Returns:
- the instance of the current object.
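For example, the built-in "uint64add" operator (one of the standard names listed above) turns values into merge-updatable counters:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class MergeByNameExample {
    static ColumnFamilyOptions uint64AddOptions() {
        RocksDB.loadLibrary();
        // "uint64add" treats values as 64-bit counters and adds merge
        // operands to the stored value during compaction and lookup.
        return new ColumnFamilyOptions().setMergeOperatorName("uint64add");
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = uint64AddOptions()) {
            System.out.println("merge operator configured by name");
        }
    }
}
```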
-
setMergeOperator
T setMergeOperator(MergeOperator mergeOperator)
Set the merge operator to be used for merging two different key/value pairs that share the same key. The merge function is invoked during compaction and at lookup time, if multiple key/value pairs belonging to the same key are found in the database.
- Parameters:
mergeOperator - a MergeOperator instance.
- Returns:
- the instance of the current object.
-
setCompactionFilter
T setCompactionFilter(AbstractCompactionFilter<? extends AbstractSlice<?>> compactionFilter)
A single CompactionFilter instance to call into during compaction. Allows an application to modify/delete a key-value during background compaction. If the client requires a new compaction filter to be used for different compaction runs, it can call setCompactionFilterFactory(AbstractCompactionFilterFactory) instead. The client should set only one of the two. setCompactionFilter(AbstractCompactionFilter) takes precedence over setCompactionFilterFactory(AbstractCompactionFilterFactory) if the client specifies both. If multithreaded compaction is being used, the supplied CompactionFilter instance may be used from different threads concurrently and so should be thread-safe.
- Parameters:
compactionFilter - an AbstractCompactionFilter instance.
- Returns:
- the instance of the current object.
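RocksJava ships one ready-made filter, RemoveEmptyValueCompactionFilter, which drops entries whose value is empty; it serves as a minimal sketch of wiring a filter in (a custom filter would instead subclass AbstractCompactionFilter and must be thread-safe, as noted above):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RemoveEmptyValueCompactionFilter;
import org.rocksdb.RocksDB;

public class CompactionFilterExample {
    static ColumnFamilyOptions withEmptyValueFilter() {
        RocksDB.loadLibrary();
        // Drops key-value pairs with empty values during background
        // compaction runs.
        return new ColumnFamilyOptions()
            .setCompactionFilter(new RemoveEmptyValueCompactionFilter());
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = withEmptyValueFilter()) {
            System.out.println("filter set: " + (opts.compactionFilter() != null));
        }
    }
}
```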
-
compactionFilter
AbstractCompactionFilter<? extends AbstractSlice<?>> compactionFilter()
Accessor for the CompactionFilter instance in use.
- Returns:
- Reference to the CompactionFilter, or null if one hasn't been set.
-
setCompactionFilterFactory
T setCompactionFilterFactory(AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory)
This is a factory that provides AbstractCompactionFilter objects which allow an application to modify/delete a key-value during background compaction. A new filter will be created on each compaction run. If multithreaded compaction is being used, each created CompactionFilter will only be used from a single thread and so does not need to be thread-safe.
- Parameters:
compactionFilterFactory - an AbstractCompactionFilterFactory instance.
- Returns:
- the instance of the current object.
-
compactionFilterFactory
AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory()
Accessor for the CompactionFilterFactory instance in use.
- Returns:
- Reference to the CompactionFilterFactory, or null if one hasn't been set.
-
useFixedLengthPrefixExtractor
T useFixedLengthPrefixExtractor(int n)
This prefix-extractor uses the first n bytes of a key as its prefix. In some hash-based memtable representations such as HashLinkedList and HashSkipList, prefixes are used to partition the keys into several buckets. The prefix extractor specifies how to extract the prefix from a given key.
- Parameters:
n - use the first n bytes of a key as its prefix.
- Returns:
- the reference to the current option.
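A sketch of pairing a fixed-length prefix extractor with a hash-based memtable, as the description suggests (the 8-byte prefix length is a hypothetical key layout):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.HashSkipListMemTableConfig;
import org.rocksdb.RocksDB;

public class PrefixExtractorExample {
    static ColumnFamilyOptions prefixOptions() {
        RocksDB.loadLibrary();
        // Keys are assumed to start with an 8-byte identifier; the hash
        // skip-list memtable buckets entries by that prefix.
        return new ColumnFamilyOptions()
            .useFixedLengthPrefixExtractor(8)
            .setMemTableConfig(new HashSkipListMemTableConfig());
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = prefixOptions()) {
            System.out.println(opts.memTableFactoryName());
        }
    }
}
```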
-
useCappedPrefixExtractor
T useCappedPrefixExtractor(int n)
Same as the fixed length prefix extractor, except that when the slice is shorter than the fixed length, it will use the full key.
- Parameters:
n - use the first n bytes of a key as its prefix.
- Returns:
- the reference to the current option.
-
setLevelZeroFileNumCompactionTrigger
T setLevelZeroFileNumCompactionTrigger(int numFiles)
Number of files to trigger level-0 compaction. A value < 0 means that level-0 compaction will not be triggered by the number of files at all. Default: 4
- Parameters:
numFiles - the number of files in level-0 to trigger compaction.
- Returns:
- the reference to the current option.
-
levelZeroFileNumCompactionTrigger
int levelZeroFileNumCompactionTrigger()
The number of files in level 0 to trigger compaction from level-0 to level-1. A value < 0 means that level-0 compaction will not be triggered by the number of files at all. Default: 4
- Returns:
- the number of files in level 0 to trigger compaction.
-
setLevelZeroSlowdownWritesTrigger
T setLevelZeroSlowdownWritesTrigger(int numFiles)
Soft limit on the number of level-0 files. Writes are slowed down at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.
- Parameters:
numFiles - soft limit on the number of level-0 files.
- Returns:
- the reference to the current option.
-
levelZeroSlowdownWritesTrigger
int levelZeroSlowdownWritesTrigger()
Soft limit on the number of level-0 files. Writes are slowed down at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.
- Returns:
- the soft limit on the number of level-0 files.
-
setLevelZeroStopWritesTrigger
T setLevelZeroStopWritesTrigger(int numFiles)
Maximum number of level-0 files. Writes are stopped at this point.
- Parameters:
numFiles - the hard limit on the number of level-0 files.
- Returns:
- the reference to the current option.
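The three level-0 triggers work together: compaction starts at the first threshold, writes are slowed at the second, and stopped at the third. A sketch with hypothetical values (keep trigger <= slowdown <= stop):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class Level0TriggerExample {
    static ColumnFamilyOptions l0Triggers() {
        RocksDB.loadLibrary();
        // Trigger L0->L1 compaction at 8 files, slow writes at 20 files,
        // and stop writes entirely at 36 files.
        return new ColumnFamilyOptions()
            .setLevelZeroFileNumCompactionTrigger(8)
            .setLevelZeroSlowdownWritesTrigger(20)
            .setLevelZeroStopWritesTrigger(36);
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = l0Triggers()) {
            System.out.println("compaction trigger: "
                + opts.levelZeroFileNumCompactionTrigger());
        }
    }
}
```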
-
levelZeroStopWritesTrigger
int levelZeroStopWritesTrigger()
Maximum number of level-0 files. Writes are stopped at this point.
- Returns:
- the hard limit on the number of level-0 files.
-
setMaxBytesForLevelMultiplier
T setMaxBytesForLevelMultiplier(double multiplier)
The ratio between the total size of level-(L+1) files and the total size of level-L files for all L. Default: 10
- Parameters:
multiplier - the ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
- Returns:
- the reference to the current option.
-
maxBytesForLevelMultiplier
double maxBytesForLevelMultiplier()
The ratio between the total size of level-(L+1) files and the total size of level-L files for all L. Default: 10
- Returns:
- the ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
-
setMaxTableFilesSizeFIFO
T setMaxTableFilesSizeFIFO(long maxTableFilesSize)
FIFO compaction option. The oldest table file will be deleted once the sum of table files reaches this size. The default value is 1GB (1 * 1024 * 1024 * 1024).
- Parameters:
maxTableFilesSize - the size limit of the total sum of table files.
- Returns:
- the instance of the current object.
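This option only takes effect with FIFO compaction, so it is normally paired with setCompactionStyle (inherited from AdvancedColumnFamilyOptionsInterface). A sketch with an arbitrary 512 MB cap:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionStyle;
import org.rocksdb.RocksDB;

public class FifoExample {
    static ColumnFamilyOptions fifoOptions() {
        RocksDB.loadLibrary();
        // Cap the column family at 512 MB of table files; the oldest
        // files are deleted once the total would exceed this size.
        return new ColumnFamilyOptions()
            .setCompactionStyle(CompactionStyle.FIFO)
            .setMaxTableFilesSizeFIFO(512L * 1024 * 1024);
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = fifoOptions()) {
            System.out.println(opts.maxTableFilesSizeFIFO());
        }
    }
}
```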
-
maxTableFilesSizeFIFO
long maxTableFilesSizeFIFO()
FIFO compaction option. The oldest table file will be deleted once the sum of table files reaches this size. The default value is 1GB (1 * 1024 * 1024 * 1024).
- Returns:
- the size limit of the total sum of table files.
-
memTableConfig
MemTableConfig memTableConfig()
Get the config for mem-table.
- Returns:
- the mem-table config.
-
setMemTableConfig
T setMemTableConfig(MemTableConfig memTableConfig)
Set the config for mem-table.
- Parameters:
memTableConfig - the mem-table config.
- Returns:
- the instance of the current object.
- Throws:
java.lang.IllegalArgumentException - thrown on 32-bit platforms when a value overflows the underlying platform-specific type.
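A sketch of swapping in a non-default memtable representation (VectorMemTableConfig is one of the MemTableConfig subclasses shipped with RocksJava):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.VectorMemTableConfig;

public class MemTableExample {
    static ColumnFamilyOptions vectorMemtable() {
        RocksDB.loadLibrary();
        // A vector memtable is append-only: cheap to write, expensive to
        // look up, so it mostly suits bulk-load phases.
        return new ColumnFamilyOptions()
            .setMemTableConfig(new VectorMemTableConfig());
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = vectorMemtable()) {
            System.out.println(opts.memTableFactoryName());
        }
    }
}
```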
-
memTableFactoryName
java.lang.String memTableFactoryName()
Returns the name of the current mem table representation. Memtable format can be set using setTableFormatConfig.
- Returns:
- the name of the currently-used memtable factory.
- See Also:
setTableFormatConfig(org.rocksdb.TableFormatConfig)
-
tableFormatConfig
TableFormatConfig tableFormatConfig()
Get the config for table format.
- Returns:
- the table format config.
-
setTableFormatConfig
T setTableFormatConfig(TableFormatConfig config)
Set the config for table format.
- Parameters:
config - the table format config.
- Returns:
- the reference of the current options.
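A sketch of the common case, a BlockBasedTableConfig with a tuned block size and a Bloom filter (setFilterPolicy is the newer method name; older RocksJava releases used setFilter, and the 16 KB / 10-bits-per-key values are arbitrary examples):

```java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class TableFormatExample {
    static ColumnFamilyOptions blockBasedOptions() {
        RocksDB.loadLibrary();
        BlockBasedTableConfig table = new BlockBasedTableConfig()
            .setBlockSize(16 * 1024)               // 16 KB data blocks
            .setFilterPolicy(new BloomFilter(10)); // ~10 bits per key
        return new ColumnFamilyOptions().setTableFormatConfig(table);
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = blockBasedOptions()) {
            System.out.println(opts.tableFactoryName());
        }
    }
}
```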
-
tableFactoryName
java.lang.String tableFactoryName()
- Returns:
- the name of the currently used table factory.
-
setCfPaths
T setCfPaths(java.util.Collection<DbPath> paths)
A list of paths where SST files for this column family can be put into, with its target size. Similar to db_paths, newer data is placed into paths specified earlier in the vector while older data gradually moves to paths specified later in the vector. Note that, if a path is supplied to multiple column families, it would have files and total size from all the column families combined. The user should provision for the total size (from all the column families) in such cases. If left empty, db_paths will be used. Default: empty
- Parameters:
paths - collection of paths for SST files.
- Returns:
- the reference of the current options.
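A sketch of the earlier-paths-hold-newer-data layout described above; the mount points and the 10 GB / 100 GB target sizes are hypothetical:

```java
import java.nio.file.Paths;
import java.util.Arrays;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DbPath;
import org.rocksdb.RocksDB;

public class CfPathsExample {
    static ColumnFamilyOptions tieredPaths() {
        RocksDB.loadLibrary();
        // Keep up to 10 GB of newer data on fast storage, and let older
        // files spill over to a larger, slower volume.
        return new ColumnFamilyOptions().setCfPaths(Arrays.asList(
            new DbPath(Paths.get("/mnt/fast-ssd/cf"), 10L * 1024 * 1024 * 1024),
            new DbPath(Paths.get("/mnt/big-hdd/cf"), 100L * 1024 * 1024 * 1024)));
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = tieredPaths()) {
            System.out.println(opts.cfPaths().size() + " cf paths configured");
        }
    }
}
```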
-
cfPaths
java.util.List<DbPath> cfPaths()
- Returns:
- collection of paths for SST files.
-
setBottommostCompressionType
T setBottommostCompressionType(CompressionType bottommostCompressionType)
Compression algorithm that will be used for the bottommost level that contains files. If level compaction is used, this option only affects levels after the base level. Default: CompressionType.DISABLE_COMPRESSION_OPTION
- Parameters:
bottommostCompressionType - the compression type to use for the bottommost level.
- Returns:
- the reference of the current options.
-
bottommostCompressionType
CompressionType bottommostCompressionType()
Compression algorithm that will be used for the bottommost level that contains files. If level compaction is used, this option only affects levels after the base level. Default: CompressionType.DISABLE_COMPRESSION_OPTION
- Returns:
- The compression type used for the bottommost level
-
setBottommostCompressionOptions
T setBottommostCompressionOptions(CompressionOptions compressionOptions)
Set the options for compression algorithms used by bottommostCompressionType() if it is enabled. To enable it, please see the definition of CompressionOptions.
- Parameters:
compressionOptions - the bottommost compression options.
- Returns:
- the reference of the current options.
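A sketch combining the two bottommost-compression calls: pick a codec for the coldest level and enable its options (ZSTD and the 16 KB dictionary size are illustrative choices):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompressionOptions;
import org.rocksdb.CompressionType;
import org.rocksdb.RocksDB;

public class BottommostCompressionExample {
    static ColumnFamilyOptions zstdBottom() {
        RocksDB.loadLibrary();
        // Compress only the bottommost (largest, coldest) level with ZSTD.
        CompressionOptions zstd = new CompressionOptions()
            .setEnabled(true)            // bottommost options are off by default
            .setMaxDictBytes(16 * 1024); // dictionary compression budget
        return new ColumnFamilyOptions()
            .setBottommostCompressionType(CompressionType.ZSTD_COMPRESSION)
            .setBottommostCompressionOptions(zstd);
    }

    public static void main(String[] args) {
        try (ColumnFamilyOptions opts = zstdBottom()) {
            System.out.println(opts.bottommostCompressionType());
        }
    }
}
```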
-
bottommostCompressionOptions
CompressionOptions bottommostCompressionOptions()
Get the bottommost compression options. See setBottommostCompressionOptions(CompressionOptions).
- Returns:
- the bottom most compression options.
-
setCompressionOptions
T setCompressionOptions(CompressionOptions compressionOptions)
Set the different options for compression algorithms.
- Parameters:
compressionOptions - the compression options.
- Returns:
- the reference of the current options.
-
compressionOptions
CompressionOptions compressionOptions()
Get the different options for compression algorithms.
- Returns:
- The compression options
-
setSstPartitionerFactory
@Experimental("Caution: this option is experimental") T setSstPartitionerFactory(SstPartitionerFactory factory)
If non-null, use the specified factory for a function to determine the partitioning of SST files. This helps compaction split the files on interesting boundaries (key prefixes) to make propagation of SST files less write-amplifying (covering the whole key space). Default: null
- Parameters:
factory - the factory reference.
- Returns:
- the reference of the current options.
-
sstPartitionerFactory
@Experimental("Caution: this option is experimental") SstPartitionerFactory sstPartitionerFactory()
Get the SST partitioner factory.
- Returns:
- SST partitioner factory
-
setCompactionThreadLimiter
T setCompactionThreadLimiter(ConcurrentTaskLimiter concurrentTaskLimiter)
Compaction concurrent thread limiter for the column family. If non-null, the given concurrent task limiter is used to control the maximum number of outstanding compaction tasks. The limiter can be shared by multiple column families across DB instances.
- Parameters:
concurrentTaskLimiter - the compaction thread limiter.
- Returns:
- the reference of the current options.
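A sketch of sharing one limiter (RocksJava's concrete ConcurrentTaskLimiterImpl, capped at 4 concurrent compactions here as an arbitrary example) with a column family:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.ConcurrentTaskLimiter;
import org.rocksdb.ConcurrentTaskLimiterImpl;
import org.rocksdb.RocksDB;

public class ThreadLimiterExample {
    static ColumnFamilyOptions limited(ConcurrentTaskLimiter limiter) {
        RocksDB.loadLibrary();
        // Cap this column family's outstanding compactions; the same
        // limiter object may be passed to other column families too.
        return new ColumnFamilyOptions().setCompactionThreadLimiter(limiter);
    }

    public static void main(String[] args) {
        RocksDB.loadLibrary();
        try (ConcurrentTaskLimiter limiter =
                 new ConcurrentTaskLimiterImpl("compaction-limit", 4);
             ColumnFamilyOptions opts = limited(limiter)) {
            System.out.println("limiter set: "
                + (opts.compactionThreadLimiter() != null));
        }
    }
}
```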
-
compactionThreadLimiter
ConcurrentTaskLimiter compactionThreadLimiter()
Get the compaction thread limiter.
- Returns:
- Compaction thread limiter
-
-