Package org.rocksdb

Interface ColumnFamilyOptionsInterface<T extends ColumnFamilyOptionsInterface<T>>

    • Method Detail

      • oldDefaults

        T oldDefaults​(int majorVersion,
                      int minorVersion)
        Resets the options to the defaults of an earlier RocksDB version, identified by major and minor version number. Only versions 4.6 and later are supported.
        Parameters:
        majorVersion - the major version whose defaults to restore.
        minorVersion - the minor version whose defaults to restore.
        Returns:
        the instance of the current object.
      • optimizeForSmallDb

        T optimizeForSmallDb()
        Use this if your DB is very small (like under 1GB) and you don't want to spend lots of memory for memtables.
        Returns:
        the instance of the current object.
      • optimizeForSmallDb

        T optimizeForSmallDb​(Cache cache)
        Use this if your DB is very small (like under 1GB) and you don't want to spend lots of memory for memtables. The supplied cache object is used as the block cache.
        Parameters:
        cache - an optional Cache object to use as the block cache.
        Returns:
        the instance of the current object.
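A minimal sketch of how the two overloads might be used, assuming the `org.rocksdb` JNI artifact is on the classpath and a recent RocksDB version that includes the `Cache` overload; the 16 MB cache size is purely illustrative:

```java
import org.rocksdb.Cache;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.LRUCache;
import org.rocksdb.RocksDB;

public class SmallDbExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // Without a cache: RocksDB picks conservative settings on its own.
        try (final ColumnFamilyOptions plain =
                 new ColumnFamilyOptions().optimizeForSmallDb();
             // With a cache: the same tuning, but block reads go through the
             // supplied LRU cache (16 MB here, an arbitrary example size).
             final Cache cache = new LRUCache(16 * 1024 * 1024);
             final ColumnFamilyOptions cached =
                 new ColumnFamilyOptions().optimizeForSmallDb(cache)) {
            // Both option objects are now tuned for DBs well under 1 GB.
        }
    }
}
```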
      • optimizeForPointLookup

        T optimizeForPointLookup​(long blockCacheSizeMb)
        Use this if you don't need to keep the data sorted, i.e. you'll never use an iterator and will only make Put() and Get() API calls.
        Parameters:
        blockCacheSizeMb - Block cache size in MB
        Returns:
        the instance of the current object.
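A sketch of a pure point-lookup configuration; the 64 MB block cache size is an arbitrary illustration:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class PointLookupExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // Tune for a workload that only calls Put() and Get(), never iterates.
        try (final ColumnFamilyOptions cfOpts =
                 new ColumnFamilyOptions().optimizeForPointLookup(64)) { // 64 MB block cache
            // cfOpts now carries an index/filter setup suited to point reads.
        }
    }
}
```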
      • optimizeLevelStyleCompaction

        T optimizeLevelStyleCompaction()

        Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for level style compaction.

        Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.

        Note: RocksDB may use more memory than memtable_memory_budget during periods of high write rate.

        Returns:
        the instance of the current object.
      • optimizeLevelStyleCompaction

        T optimizeLevelStyleCompaction​(long memtableMemoryBudget)

        Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for level style compaction.

        Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.

        Note: RocksDB may use more memory than memtable_memory_budget during periods of high write rate.

        Parameters:
        memtableMemoryBudget - memory budget in bytes
        Returns:
        the instance of the current object.
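As a sketch of the suggested starting point: on an `Options` object (which also implements the DB-level interface), the Java counterpart of `IncreaseParallelism()` is `setIncreaseParallelism(int)`. The 512 MB budget and thread count below are illustrative, not recommendations:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class LevelStyleExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        try (final Options options = new Options()
                .setCreateIfMissing(true)
                .setIncreaseParallelism(4) // Java counterpart of IncreaseParallelism()
                .optimizeLevelStyleCompaction(512L * 1024 * 1024)) { // 512 MB memtable budget
            // A starting point for level-style compaction under heavy writes.
        }
    }
}
```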
      • optimizeUniversalStyleCompaction

        T optimizeUniversalStyleCompaction()

        Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for universal style compaction.

        Universal style compaction is focused on reducing Write Amplification Factor for big data sets, but increases Space Amplification.

        Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.

        Note: RocksDB may use more memory than memtable_memory_budget during periods of high write rate.

        Returns:
        the instance of the current object.
      • optimizeUniversalStyleCompaction

        T optimizeUniversalStyleCompaction​(long memtableMemoryBudget)

        Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for universal style compaction.

        Universal style compaction is focused on reducing Write Amplification Factor for big data sets, but increases Space Amplification.

        Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.

        Note: RocksDB may use more memory than memtable_memory_budget during periods of high write rate.

        Parameters:
        memtableMemoryBudget - memory budget in bytes
        Returns:
        the instance of the current object.
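A sketch of a universal-compaction starting point; the compaction style is set explicitly here for clarity, and the 256 MB budget is illustrative:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionStyle;
import org.rocksdb.RocksDB;

public class UniversalStyleExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                .setCompactionStyle(CompactionStyle.UNIVERSAL) // select universal compaction
                .optimizeUniversalStyleCompaction(256L * 1024 * 1024)) { // 256 MB budget
            // Trades lower write amplification for higher space amplification.
        }
    }
}
```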
      • setComparator

        T setComparator​(BuiltinComparator builtinComparator)
        Set BuiltinComparator to be used with RocksDB. Note: Comparator can be set once upon database creation. Default: BytewiseComparator.
        Parameters:
        builtinComparator - a BuiltinComparator type.
        Returns:
        the instance of the current object.
      • setComparator

        T setComparator​(AbstractComparator comparator)
        Use the specified comparator for key ordering. The comparator should not be disposed before the options instances that use it are disposed. If the dispose() function is not called, the comparator object will be GC'd automatically. A comparator instance can be re-used across multiple options instances.
        Parameters:
        comparator - java instance.
        Returns:
        the instance of the current object.
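A sketch using the builtin-comparator overload; note that the comparator must be chosen at database creation time and kept for the lifetime of the database:

```java
import org.rocksdb.BuiltinComparator;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class ComparatorExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // Keys in this column family will sort in reverse byte order.
        try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                .setComparator(BuiltinComparator.REVERSE_BYTEWISE_COMPARATOR)) {
        }
    }
}
```

For a custom ordering, the `setComparator(AbstractComparator)` overload accepts a Java subclass of `AbstractComparator` instead of a builtin constant.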
      • setMergeOperatorName

        T setMergeOperatorName​(java.lang.String name)

        Set the merge operator to be used for merging two merge operands of the same key. The merge function is invoked during compaction and at lookup time, if multiple key/value pairs belonging to the same key are found in the database.

        Parameters:
        name - the name of the merge function, as defined by the MergeOperators factory (see utilities/MergeOperators.h). The merge function is specified by name and must be one of the standard merge operators provided by RocksDB. The available operators are "put", "uint64add", "stringappend" and "stringappendtest".
        Returns:
        the instance of the current object.
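A sketch of a merge-by-name workflow using the builtin "stringappend" operator, which joins merge operands with a delimiter (',' by default). The temporary directory is created just for the example:

```java
import java.nio.file.Files;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class MergeByNameExample {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        final String dir = Files.createTempDirectory("rocks-merge").toString();
        try (final Options options = new Options()
                .setCreateIfMissing(true)
                .setMergeOperatorName("stringappend");
             final RocksDB db = RocksDB.open(options, dir)) {
            db.put("k".getBytes(), "a".getBytes());
            db.merge("k".getBytes(), "b".getBytes());
            // "stringappend" concatenates operands, so the value is now "a,b".
            final String v = new String(db.get("k".getBytes()));
        }
    }
}
```

The object-based overload below, `setMergeOperator(new StringAppendOperator())`, configures the same operator via a `MergeOperator` instance.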
      • setMergeOperator

        T setMergeOperator​(MergeOperator mergeOperator)

        Set the merge operator to be used for merging two different key/value pairs that share the same key. The merge function is invoked during compaction and at lookup time, if multiple key/value pairs belonging to the same key are found in the database.

        Parameters:
        mergeOperator - MergeOperator instance.
        Returns:
        the instance of the current object.
      • compactionFilter

        AbstractCompactionFilter<? extends AbstractSlice<?>> compactionFilter()
        Accessor for the CompactionFilter instance in use.
        Returns:
        Reference to the CompactionFilter, or null if one hasn't been set.
      • setCompactionFilterFactory

        T setCompactionFilterFactory​(AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory)
        This is a factory that provides AbstractCompactionFilter objects which allow an application to modify/delete a key-value during background compaction. A new filter will be created on each compaction run. If multithreaded compaction is being used, each created CompactionFilter will only be used from a single thread and so does not need to be thread-safe.
        Parameters:
        compactionFilterFactory - AbstractCompactionFilterFactory instance.
        Returns:
        the instance of the current object.
      • useFixedLengthPrefixExtractor

        T useFixedLengthPrefixExtractor​(int n)
        This prefix-extractor uses the first n bytes of a key as its prefix. In some hash-based memtable representations, such as HashLinkedList and HashSkipList, prefixes are used to partition the keys into several buckets. The prefix extractor specifies how to extract the prefix from a given key.
        Parameters:
        n - use the first n bytes of a key as its prefix.
        Returns:
        the reference to the current option.
      • useCappedPrefixExtractor

        T useCappedPrefixExtractor​(int n)
        Same as fixed length prefix extractor, except that when slice is shorter than the fixed length, it will use the full key.
        Parameters:
        n - use the first n bytes of a key as its prefix.
        Returns:
        the reference to the current option.
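A sketch contrasting the two extractors; the 8-byte prefix length is illustrative. The fixed-length variant assumes every key is at least n bytes long, while the capped variant falls back to the whole key for shorter keys:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class PrefixExtractorExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // Fixed: assumes every key has at least 8 bytes.
        try (final ColumnFamilyOptions fixed =
                 new ColumnFamilyOptions().useFixedLengthPrefixExtractor(8);
             // Capped: keys shorter than 8 bytes use the full key as the prefix.
             final ColumnFamilyOptions capped =
                 new ColumnFamilyOptions().useCappedPrefixExtractor(8)) {
        }
    }
}
```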
      • setLevelZeroFileNumCompactionTrigger

        T setLevelZeroFileNumCompactionTrigger​(int numFiles)
        Number of files to trigger level-0 compaction. A value < 0 means that level-0 compaction will not be triggered by number of files at all. Default: 4
        Parameters:
        numFiles - the number of files in level-0 to trigger compaction.
        Returns:
        the reference to the current option.
      • levelZeroFileNumCompactionTrigger

        int levelZeroFileNumCompactionTrigger()
        The number of files in level 0 to trigger compaction from level-0 to level-1. A value < 0 means that level-0 compaction will not be triggered by number of files at all. Default: 4
        Returns:
        the number of files in level 0 to trigger compaction.
      • setLevelZeroSlowdownWritesTrigger

        T setLevelZeroSlowdownWritesTrigger​(int numFiles)
        Soft limit on the number of level-0 files. We start slowing down writes at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.
        Parameters:
        numFiles - soft limit on number of level-0 files.
        Returns:
        the reference to the current option.
      • levelZeroSlowdownWritesTrigger

        int levelZeroSlowdownWritesTrigger()
        Soft limit on the number of level-0 files. We start slowing down writes at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.
        Returns:
        the soft limit on the number of level-0 files.
      • setLevelZeroStopWritesTrigger

        T setLevelZeroStopWritesTrigger​(int numFiles)
        Maximum number of level-0 files. We stop writes at this point.
        Parameters:
        numFiles - the hard limit of the number of level-0 files.
        Returns:
        the reference to the current option.
      • levelZeroStopWritesTrigger

        int levelZeroStopWritesTrigger()
        Maximum number of level-0 files. We stop writes at this point.
        Returns:
        the hard limit of the number of level-0 files.
      • setMaxBytesForLevelMultiplier

        T setMaxBytesForLevelMultiplier​(double multiplier)
        The ratio between the total size of level-(L+1) files and the total size of level-L files for all L. DEFAULT: 10
        Parameters:
        multiplier - the ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
        Returns:
        the reference to the current option.
      • maxBytesForLevelMultiplier

        double maxBytesForLevelMultiplier()
        The ratio between the total size of level-(L+1) files and the total size of level-L files for all L. DEFAULT: 10
        Returns:
        the ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
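A sketch combining the three level-0 triggers with the level size multiplier. The three thresholds should be ordered so that compaction trigger ≤ slowdown trigger ≤ stop trigger; all values below are illustrative:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class LevelZeroTriggersExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                .setLevelZeroFileNumCompactionTrigger(8) // start compacting L0 at 8 files
                .setLevelZeroSlowdownWritesTrigger(20)   // soft limit: throttle writes
                .setLevelZeroStopWritesTrigger(36)       // hard limit: stop writes
                .setMaxBytesForLevelMultiplier(10.0)) {  // each level ~10x the previous
            // The getters report back the configured values.
            assert cfOpts.levelZeroFileNumCompactionTrigger() == 8;
        }
    }
}
```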
      • setMaxTableFilesSizeFIFO

        T setMaxTableFilesSizeFIFO​(long maxTableFilesSize)
        FIFO compaction option. The oldest table file will be deleted once the sum of table files reaches this size. The default value is 1GB (1 * 1024 * 1024 * 1024).
        Parameters:
        maxTableFilesSize - the size limit of the total sum of table files.
        Returns:
        the instance of the current object.
      • maxTableFilesSizeFIFO

        long maxTableFilesSizeFIFO()
        FIFO compaction option. The oldest table file will be deleted once the sum of table files reaches this size. The default value is 1GB (1 * 1024 * 1024 * 1024).
        Returns:
        the size limit of the total sum of table files.
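A sketch of a FIFO-compaction setup; the 512 MB cap is illustrative. FIFO compaction must be selected explicitly via the compaction style for this option to take effect:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionStyle;
import org.rocksdb.RocksDB;

public class FifoExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // FIFO compaction drops the oldest SST files once the total size
        // of all table files exceeds the configured cap.
        try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                .setCompactionStyle(CompactionStyle.FIFO)
                .setMaxTableFilesSizeFIFO(512L * 1024 * 1024)) { // 512 MB cap
        }
    }
}
```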
      • memTableConfig

        MemTableConfig memTableConfig()
        Get the config for mem-table.
        Returns:
        the mem-table config.
      • setMemTableConfig

        T setMemTableConfig​(MemTableConfig memTableConfig)
        Set the config for mem-table.
        Parameters:
        memTableConfig - the mem-table config.
        Returns:
        the instance of the current object.
        Throws:
        java.lang.IllegalArgumentException - thrown on 32-bit platforms when the value overflows the underlying platform-specific limit.
      • memTableFactoryName

        java.lang.String memTableFactoryName()
        Returns the name of the current mem-table representation. The memtable format can be set using setMemTableConfig.
        Returns:
        the name of the currently-used memtable factory.
        See Also:
        setMemTableConfig(org.rocksdb.MemTableConfig)
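A sketch of setting the memtable representation and reading back the factory name; the skip-list memtable is the default representation, and alternatives such as `HashLinkedListMemTableConfig` pair with a prefix extractor:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.SkipListMemTableConfig;

public class MemTableConfigExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                .setMemTableConfig(new SkipListMemTableConfig())) {
            // Reports the factory backing the memtable, e.g. "SkipListFactory".
            final String name = cfOpts.memTableFactoryName();
        }
    }
}
```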
      • tableFormatConfig

        TableFormatConfig tableFormatConfig()
        Get the config for table format.
        Returns:
        the table format config.
      • setTableFormatConfig

        T setTableFormatConfig​(TableFormatConfig config)
        Set the config for table format.
        Parameters:
        config - the table format config.
        Returns:
        the reference of the current options.
      • tableFactoryName

        java.lang.String tableFactoryName()
        Returns:
        the name of the currently used table factory.
      • setCfPaths

        T setCfPaths​(java.util.Collection<DbPath> paths)
        A list of paths where SST files for this column family can be put into, each with its target size. Similar to db_paths, newer data is placed into paths specified earlier in the vector while older data gradually moves to paths specified later in the vector. Note that, if a path is supplied to multiple column families, it would have files and total size from all the column families combined. Users should provision for the total size (from all the column families) in such cases. If left empty, db_paths will be used. Default: empty
        Parameters:
        paths - collection of paths for SST files.
        Returns:
        the reference of the current options.
      • cfPaths

        java.util.List<DbPath> cfPaths()
        Returns:
        collection of paths for SST files.
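A sketch of a tiered layout. The mount points here are hypothetical placeholders, and the 100 GB target size is illustrative; a target size of 0 on the last path means "everything that didn't fit earlier":

```java
import java.nio.file.Paths;
import java.util.Arrays;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DbPath;
import org.rocksdb.RocksDB;

public class CfPathsExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // Hypothetical mounts: newer data lands on the fast path, older data
        // migrates to the slow path once roughly 100 GB accumulates.
        try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                .setCfPaths(Arrays.asList(
                        new DbPath(Paths.get("/mnt/fast-ssd"), 100L * 1024 * 1024 * 1024),
                        new DbPath(Paths.get("/mnt/slow-hdd"), 0)))) {
        }
    }
}
```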
      • setBottommostCompressionType

        T setBottommostCompressionType​(CompressionType bottommostCompressionType)
        Compression algorithm that will be used for the bottommost level that contains files. If level-compaction is used, this option will only affect levels after the base level. Default: CompressionType.DISABLE_COMPRESSION_OPTION
        Parameters:
        bottommostCompressionType - The compression type to use for the bottommost level
        Returns:
        the reference of the current options.
      • bottommostCompressionType

        CompressionType bottommostCompressionType()
        Compression algorithm that will be used for the bottommost level that contains files. If level-compaction is used, this option will only affect levels after the base level. Default: CompressionType.DISABLE_COMPRESSION_OPTION
        Returns:
        The compression type used for the bottommost level
      • setBottommostCompressionOptions

        T setBottommostCompressionOptions​(CompressionOptions compressionOptions)
        Set the options for compression algorithms used by bottommostCompressionType() if it is enabled. To enable it, please see the definition of CompressionOptions.
        Parameters:
        compressionOptions - the bottommost compression options.
        Returns:
        the reference of the current options.
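A sketch of a common tiered-compression setup: a fast codec on the upper levels and a denser one on the bottommost level. The ZSTD level of 9 is illustrative; note that bottommost compression options take effect only when explicitly enabled:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompressionOptions;
import org.rocksdb.CompressionType;
import org.rocksdb.RocksDB;

public class BottommostCompressionExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        try (final CompressionOptions zstdOpts = new CompressionOptions()
                 .setEnabled(true) // required for bottommost compression options
                 .setLevel(9);     // illustrative compression level
             final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                 .setCompressionType(CompressionType.LZ4_COMPRESSION)            // fast, upper levels
                 .setBottommostCompressionType(CompressionType.ZSTD_COMPRESSION) // dense, last level
                 .setBottommostCompressionOptions(zstdOpts)) {
        }
    }
}
```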
      • setCompressionOptions

        T setCompressionOptions​(CompressionOptions compressionOptions)
        Set the different options for compression algorithms
        Parameters:
        compressionOptions - The compression options
        Returns:
        the reference of the current options.
      • compressionOptions

        CompressionOptions compressionOptions()
        Get the different options for compression algorithms
        Returns:
        The compression options
      • setSstPartitionerFactory

        @Experimental("Caution: this option is experimental")
        T setSstPartitionerFactory​(SstPartitionerFactory factory)
        If non-null, use the specified factory to determine the partitioning of sst files. This helps compaction split files on interesting boundaries (key prefixes) to make propagation of sst files less write-amplifying (covering the whole key space). Default: null
        Parameters:
        factory - The factory reference
        Returns:
        the reference of the current options.
      • sstPartitionerFactory

        @Experimental("Caution: this option is experimental")
        SstPartitionerFactory sstPartitionerFactory()
        Get SST partitioner factory
        Returns:
        SST partitioner factory
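A sketch using the built-in fixed-prefix partitioner, which splits SST files at 4-byte key-prefix boundaries here (the length is illustrative). As noted above, this API is experimental:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.SstPartitionerFactory;
import org.rocksdb.SstPartitionerFixedPrefixFactory;

public class SstPartitionerExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // Split SST files whenever the first 4 bytes of the key change.
        try (final SstPartitionerFactory factory =
                 new SstPartitionerFixedPrefixFactory(4);
             final ColumnFamilyOptions cfOpts =
                 new ColumnFamilyOptions().setSstPartitionerFactory(factory)) {
        }
    }
}
```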
      • setCompactionThreadLimiter

        T setCompactionThreadLimiter​(ConcurrentTaskLimiter concurrentTaskLimiter)
        Compaction concurrent thread limiter for the column family. If non-null, the given concurrent task limiter controls the maximum number of outstanding compaction tasks. A limiter can be shared by multiple column families across db instances.
        Parameters:
        concurrentTaskLimiter - The compaction thread limiter.
        Returns:
        the reference of the current options.
      • compactionThreadLimiter

        ConcurrentTaskLimiter compactionThreadLimiter()
        Get compaction thread limiter
        Returns:
        Compaction thread limiter
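A sketch of sharing one limiter between two column families, so that together they run a bounded number of compaction tasks; the limiter name and the limit of 4 are illustrative:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.ConcurrentTaskLimiter;
import org.rocksdb.ConcurrentTaskLimiterImpl;
import org.rocksdb.RocksDB;

public class ThreadLimiterExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // One limiter shared by two column families: at most 4 compaction
        // tasks may be outstanding across both at any time.
        try (final ConcurrentTaskLimiter limiter =
                 new ConcurrentTaskLimiterImpl("shared-compaction", 4);
             final ColumnFamilyOptions cfA =
                 new ColumnFamilyOptions().setCompactionThreadLimiter(limiter);
             final ColumnFamilyOptions cfB =
                 new ColumnFamilyOptions().setCompactionThreadLimiter(limiter)) {
        }
    }
}
```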