Interface RewriterConfigOrBuilder

All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
All Known Implementing Classes:
RewriterConfig, RewriterConfig.Builder

public interface RewriterConfigOrBuilder
extends com.google.protobuf.MessageOrBuilder
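
As with all protobuf-generated messages, both RewriterConfig and RewriterConfig.Builder implement this interface, so read-only helper code can accept either one without forcing a build(). The sketch below illustrates the general OrBuilder pattern with a small hypothetical stand-in (the real TensorFlow classes are not assumed on the classpath; the field mirrors disable_model_pruning):

```java
// Hypothetical miniature of the protobuf OrBuilder pattern: the immutable
// message and its mutable Builder both expose the same read-only interface.
public class OrBuilderDemo {
    public interface ConfigOrBuilder {
        boolean getDisableModelPruning();
    }

    public static final class Config implements ConfigOrBuilder {
        private final boolean disableModelPruning;
        Config(boolean v) { disableModelPruning = v; }
        public boolean getDisableModelPruning() { return disableModelPruning; }
    }

    public static final class Builder implements ConfigOrBuilder {
        private boolean disableModelPruning;
        public Builder setDisableModelPruning(boolean v) {
            disableModelPruning = v;
            return this;
        }
        public boolean getDisableModelPruning() { return disableModelPruning; }
        public Config build() { return new Config(disableModelPruning); }
    }

    // Accepts a built message or a builder alike.
    public static String describe(ConfigOrBuilder c) {
        return c.getDisableModelPruning() ? "pruning disabled" : "pruning enabled";
    }

    public static void main(String[] args) {
        Builder b = new Builder().setDisableModelPruning(true);
        System.out.println(describe(b));          // works on the builder
        System.out.println(describe(b.build()));  // and on the built message
    }
}
```
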
  • Method Details

    • getLayoutOptimizerValue

      int getLayoutOptimizerValue()
       Optimize tensor layouts (default is ON).
       For example, this will try to use the NCHW layout on GPU, which is faster.
       
      .tensorflow.RewriterConfig.Toggle layout_optimizer = 1;
      Returns:
      The enum numeric value on the wire for layoutOptimizer.
    • getLayoutOptimizer

      RewriterConfig.Toggle getLayoutOptimizer()
       Optimize tensor layouts (default is ON).
       For example, this will try to use the NCHW layout on GPU, which is faster.
       
      .tensorflow.RewriterConfig.Toggle layout_optimizer = 1;
      Returns:
      The layoutOptimizer.
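
Each Toggle field has a paired accessor: the typed getter maps the wire integer through the enum, while the *Value() getter returns the raw number, which matters when the wire carries a value this binary's schema does not know (proto3 generated enums report UNRECOGNIZED in that case). A hypothetical stand-in mirroring these semantics (the numeric values below are illustrative assumptions, not copied from the TensorFlow .proto):

```java
public class ToggleDemo {
    // Hypothetical stand-in for RewriterConfig.Toggle; the numbers are
    // illustrative, not taken from the TensorFlow source.
    public enum Toggle {
        DEFAULT(0), ON(1), OFF(2);

        private final int number;
        Toggle(int number) { this.number = number; }
        public int getNumber() { return number; }

        // proto3-generated enums resolve unknown wire numbers to null here,
        // and the typed message getter then reports UNRECOGNIZED.
        public static Toggle forNumber(int n) {
            for (Toggle t : values()) {
                if (t.number == n) return t;
            }
            return null;
        }
    }

    public static void main(String[] args) {
        int wireValue = 99; // e.g. a value added in a newer schema
        Toggle typed = Toggle.forNumber(wireValue);
        // getLayoutOptimizerValue() would still return 99 here, while
        // getLayoutOptimizer() could not map it to a known constant.
        System.out.println(typed == null ? "unknown toggle" : typed.name());
        System.out.println(Toggle.forNumber(1)); // ON
    }
}
```
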
    • getConstantFoldingValue

      int getConstantFoldingValue()
       Fold constants (default is ON)
       Statically infer the value of tensors when possible, and materialize the
       result using constants.
       
      .tensorflow.RewriterConfig.Toggle constant_folding = 3;
      Returns:
      The enum numeric value on the wire for constantFolding.
    • getConstantFolding

      RewriterConfig.Toggle getConstantFolding()
       Fold constants (default is ON)
       Statically infer the value of tensors when possible, and materialize the
       result using constants.
       
      .tensorflow.RewriterConfig.Toggle constant_folding = 3;
      Returns:
      The constantFolding.
    • getShapeOptimizationValue

      int getShapeOptimizationValue()
       Shape optimizations (default is ON)
       Simplify computations made on shapes.
       
      .tensorflow.RewriterConfig.Toggle shape_optimization = 13;
      Returns:
      The enum numeric value on the wire for shapeOptimization.
    • getShapeOptimization

      RewriterConfig.Toggle getShapeOptimization()
       Shape optimizations (default is ON)
       Simplify computations made on shapes.
       
      .tensorflow.RewriterConfig.Toggle shape_optimization = 13;
      Returns:
      The shapeOptimization.
    • getRemappingValue

      int getRemappingValue()
       Remapping (default is ON)
       Remap subgraphs onto more efficient implementations.
       
      .tensorflow.RewriterConfig.Toggle remapping = 14;
      Returns:
      The enum numeric value on the wire for remapping.
    • getRemapping

      RewriterConfig.Toggle getRemapping()
       Remapping (default is ON)
       Remap subgraphs onto more efficient implementations.
       
      .tensorflow.RewriterConfig.Toggle remapping = 14;
      Returns:
      The remapping.
    • getArithmeticOptimizationValue

      int getArithmeticOptimizationValue()
       Arithmetic optimizations (default is ON)
       For example, simplify arithmetic ops and merge ops with the same value (such as constants).
       
      .tensorflow.RewriterConfig.Toggle arithmetic_optimization = 7;
      Returns:
      The enum numeric value on the wire for arithmeticOptimization.
    • getArithmeticOptimization

      RewriterConfig.Toggle getArithmeticOptimization()
       Arithmetic optimizations (default is ON)
       For example, simplify arithmetic ops and merge ops with the same value (such as constants).
       
      .tensorflow.RewriterConfig.Toggle arithmetic_optimization = 7;
      Returns:
      The arithmeticOptimization.
    • getDependencyOptimizationValue

      int getDependencyOptimizationValue()
       Control dependency optimizations (default is ON).
       Remove redundant control dependencies, which may enable other optimizations.
       
      .tensorflow.RewriterConfig.Toggle dependency_optimization = 8;
      Returns:
      The enum numeric value on the wire for dependencyOptimization.
    • getDependencyOptimization

      RewriterConfig.Toggle getDependencyOptimization()
       Control dependency optimizations (default is ON).
       Remove redundant control dependencies, which may enable other optimizations.
       
      .tensorflow.RewriterConfig.Toggle dependency_optimization = 8;
      Returns:
      The dependencyOptimization.
    • getLoopOptimizationValue

      int getLoopOptimizationValue()
       Loop optimizations (default is ON).
       
      .tensorflow.RewriterConfig.Toggle loop_optimization = 9;
      Returns:
      The enum numeric value on the wire for loopOptimization.
    • getLoopOptimization

      RewriterConfig.Toggle getLoopOptimization()
       Loop optimizations (default is ON).
       
      .tensorflow.RewriterConfig.Toggle loop_optimization = 9;
      Returns:
      The loopOptimization.
    • getFunctionOptimizationValue

      int getFunctionOptimizationValue()
       Function optimizations (default is ON).
       
      .tensorflow.RewriterConfig.Toggle function_optimization = 10;
      Returns:
      The enum numeric value on the wire for functionOptimization.
    • getFunctionOptimization

      RewriterConfig.Toggle getFunctionOptimization()
       Function optimizations (default is ON).
       
      .tensorflow.RewriterConfig.Toggle function_optimization = 10;
      Returns:
      The functionOptimization.
    • getDebugStripperValue

      int getDebugStripperValue()
       Strips debug-related nodes from the graph (off by default).
       
      .tensorflow.RewriterConfig.Toggle debug_stripper = 11;
      Returns:
      The enum numeric value on the wire for debugStripper.
    • getDebugStripper

      RewriterConfig.Toggle getDebugStripper()
       Strips debug-related nodes from the graph (off by default).
       
      .tensorflow.RewriterConfig.Toggle debug_stripper = 11;
      Returns:
      The debugStripper.
    • getDisableModelPruning

      boolean getDisableModelPruning()
       If true, don't remove unnecessary ops from the graph.
       
      bool disable_model_pruning = 2;
      Returns:
      The disableModelPruning.
    • getScopedAllocatorOptimizationValue

      int getScopedAllocatorOptimizationValue()
       Try to allocate some independent Op outputs contiguously in order to
       merge or eliminate downstream Ops (off by default).
       
      .tensorflow.RewriterConfig.Toggle scoped_allocator_optimization = 15;
      Returns:
      The enum numeric value on the wire for scopedAllocatorOptimization.
    • getScopedAllocatorOptimization

      RewriterConfig.Toggle getScopedAllocatorOptimization()
       Try to allocate some independent Op outputs contiguously in order to
       merge or eliminate downstream Ops (off by default).
       
      .tensorflow.RewriterConfig.Toggle scoped_allocator_optimization = 15;
      Returns:
      The scopedAllocatorOptimization.
    • getPinToHostOptimizationValue

      int getPinToHostOptimizationValue()
       Force small ops onto the CPU (default is OFF).
       
      .tensorflow.RewriterConfig.Toggle pin_to_host_optimization = 18;
      Returns:
      The enum numeric value on the wire for pinToHostOptimization.
    • getPinToHostOptimization

      RewriterConfig.Toggle getPinToHostOptimization()
       Force small ops onto the CPU (default is OFF).
       
      .tensorflow.RewriterConfig.Toggle pin_to_host_optimization = 18;
      Returns:
      The pinToHostOptimization.
    • getImplementationSelectorValue

      int getImplementationSelectorValue()
       Enable swapping of kernel implementations based on device placement
       (default is ON).
       
      .tensorflow.RewriterConfig.Toggle implementation_selector = 22;
      Returns:
      The enum numeric value on the wire for implementationSelector.
    • getImplementationSelector

      RewriterConfig.Toggle getImplementationSelector()
       Enable swapping of kernel implementations based on device placement
       (default is ON).
       
      .tensorflow.RewriterConfig.Toggle implementation_selector = 22;
      Returns:
      The implementationSelector.
    • getAutoMixedPrecisionValue

      int getAutoMixedPrecisionValue()
       Optimize data types (default is OFF).
       For example, this will try to use float16 on GPU, which is faster.
       Note that this can change the numerical stability of the graph and may
       require the use of loss scaling to maintain model convergence.
       
      .tensorflow.RewriterConfig.Toggle auto_mixed_precision = 23;
      Returns:
      The enum numeric value on the wire for autoMixedPrecision.
    • getAutoMixedPrecision

      RewriterConfig.Toggle getAutoMixedPrecision()
       Optimize data types (default is OFF).
       For example, this will try to use float16 on GPU, which is faster.
       Note that this can change the numerical stability of the graph and may
       require the use of loss scaling to maintain model convergence.
       
      .tensorflow.RewriterConfig.Toggle auto_mixed_precision = 23;
      Returns:
      The autoMixedPrecision.
    • getDisableMetaOptimizer

      boolean getDisableMetaOptimizer()
       Disable the entire meta optimizer (off by default).
       
      bool disable_meta_optimizer = 19;
      Returns:
      The disableMetaOptimizer.
    • getMetaOptimizerIterationsValue

      int getMetaOptimizerIterationsValue()
       Controls how many times the optimizers are run in the meta optimizer
       (default is once).
       
      .tensorflow.RewriterConfig.NumIterationsType meta_optimizer_iterations = 12;
      Returns:
      The enum numeric value on the wire for metaOptimizerIterations.
    • getMetaOptimizerIterations

      RewriterConfig.NumIterationsType getMetaOptimizerIterations()
       Controls how many times the optimizers are run in the meta optimizer
       (default is once).
       
      .tensorflow.RewriterConfig.NumIterationsType meta_optimizer_iterations = 12;
      Returns:
      The metaOptimizerIterations.
    • getMinGraphNodes

      int getMinGraphNodes()
       The minimum number of nodes in a graph required to trigger optimization. For smaller graphs,
       optimization is skipped.
       0 means the system picks an appropriate number.
       < 0 means do not skip optimization.
       
      int32 min_graph_nodes = 17;
      Returns:
      The minGraphNodes.
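
The three-way contract of min_graph_nodes can be sketched as a small predicate. This is a hypothetical illustration of the semantics described in the field comment, not TensorFlow's actual implementation; the system-default threshold below is a made-up placeholder:

```java
// Sketch of the min_graph_nodes contract: < 0 never skips, 0 defers to a
// system-chosen threshold, any positive value is used directly.
public class MinGraphNodes {
    // Hypothetical placeholder; the real system-picked value is unspecified.
    public static final int SYSTEM_DEFAULT = 4;

    public static boolean shouldSkipOptimization(int minGraphNodes, int graphSize) {
        if (minGraphNodes < 0) {
            return false; // never skip optimization
        }
        int threshold = (minGraphNodes == 0) ? SYSTEM_DEFAULT : minGraphNodes;
        return graphSize < threshold; // skip graphs below the threshold
    }
}
```
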
    • getMemoryOptimizationValue

      int getMemoryOptimizationValue()
       Configures memory optimization passes through the meta-optimizer. Has no
       effect on manually requested memory optimization passes in the optimizers
       field.
       
      .tensorflow.RewriterConfig.MemOptType memory_optimization = 4;
      Returns:
      The enum numeric value on the wire for memoryOptimization.
    • getMemoryOptimization

      RewriterConfig.MemOptType getMemoryOptimization()
       Configures memory optimization passes through the meta-optimizer. Has no
       effect on manually requested memory optimization passes in the optimizers
       field.
       
      .tensorflow.RewriterConfig.MemOptType memory_optimization = 4;
      Returns:
      The memoryOptimization.
    • getMemoryOptimizerTargetNodeNameScope

      java.lang.String getMemoryOptimizerTargetNodeNameScope()
       A node name scope for node names which are valid outputs of recomputations.
       Inputs to nodes that match this scope may be recomputed (subject either to
       manual annotation of those input nodes or to manual annotation and
       heuristics depending on memory_optimization), but the nodes themselves will
       not be recomputed. This matches any sub-scope as well, meaning the scope
       need not appear only at the top level. For example, if the value is
       "gradients/" (the default), it will match the node names "gradients/foo" and
       "foo/gradients/bar", but not "foo_gradients/".
       
      string memory_optimizer_target_node_name_scope = 6;
      Returns:
      The memoryOptimizerTargetNodeNameScope.
    • getMemoryOptimizerTargetNodeNameScopeBytes

      com.google.protobuf.ByteString getMemoryOptimizerTargetNodeNameScopeBytes()
       A node name scope for node names which are valid outputs of recomputations.
       Inputs to nodes that match this scope may be recomputed (subject either to
       manual annotation of those input nodes or to manual annotation and
       heuristics depending on memory_optimization), but the nodes themselves will
       not be recomputed. This matches any sub-scope as well, meaning the scope
       need not appear only at the top level. For example, if the value is
       "gradients/" (the default), it will match the node names "gradients/foo" and
       "foo/gradients/bar", but not "foo_gradients/".
       
      string memory_optimizer_target_node_name_scope = 6;
      Returns:
      The bytes for memoryOptimizerTargetNodeNameScope.
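
The scope-matching rule above (top-level or sub-scope, but only as a whole path component) can be sketched as a predicate. This is a hypothetical illustration of the documented matching behavior, not code from TensorFlow:

```java
// Sketch of the scope-matching rule for memory_optimizer_target_node_name_scope:
// the scope may appear at the top level or after a '/', but "foo_gradients/"
// must not match the scope "gradients/".
public class ScopeMatch {
    public static boolean matchesScope(String nodeName, String scope) {
        // scope is assumed to end with '/', e.g. "gradients/"
        return nodeName.startsWith(scope) || nodeName.contains("/" + scope);
    }

    public static void main(String[] args) {
        System.out.println(matchesScope("gradients/foo", "gradients/"));      // true
        System.out.println(matchesScope("foo/gradients/bar", "gradients/"));  // true
        System.out.println(matchesScope("foo_gradients/", "gradients/"));     // false
    }
}
```
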
    • getMetaOptimizerTimeoutMs

      long getMetaOptimizerTimeoutMs()
       Maximum number of milliseconds to spend optimizing a single graph before
       timing out. If equal to 0, the system picks a default (currently 5 minutes).
       If less than 0, the optimizer will never time out.
       
      int64 meta_optimizer_timeout_ms = 20;
      Returns:
      The metaOptimizerTimeoutMs.
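
The timeout field likewise has three cases: negative means no timeout, zero means the documented 5-minute default, and a positive value is taken as-is. A hypothetical sketch of that interpretation (not TensorFlow's actual code):

```java
// Sketch of the meta_optimizer_timeout_ms contract described above.
public class MetaOptimizerTimeout {
    // Hypothetical sentinel for "never time out".
    public static final long NO_TIMEOUT = Long.MAX_VALUE;

    public static long effectiveTimeoutMs(long metaOptimizerTimeoutMs) {
        if (metaOptimizerTimeoutMs < 0) {
            return NO_TIMEOUT;          // never time out
        }
        if (metaOptimizerTimeoutMs == 0) {
            return 5 * 60 * 1000L;      // documented default: 5 minutes
        }
        return metaOptimizerTimeoutMs;  // explicit budget
    }
}
```
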
    • hasAutoParallel

      boolean hasAutoParallel()
       Configures AutoParallel optimization passes either through the
       meta-optimizer or when manually specified through the optimizers field.
       
      .tensorflow.AutoParallelOptions auto_parallel = 5;
      Returns:
      Whether the autoParallel field is set.
    • getAutoParallel

      AutoParallelOptions getAutoParallel()
       Configures AutoParallel optimization passes either through the
       meta-optimizer or when manually specified through the optimizers field.
       
      .tensorflow.AutoParallelOptions auto_parallel = 5;
      Returns:
      The autoParallel.
    • getAutoParallelOrBuilder

      AutoParallelOptionsOrBuilder getAutoParallelOrBuilder()
       Configures AutoParallel optimization passes either through the
       meta-optimizer or when manually specified through the optimizers field.
       
      .tensorflow.AutoParallelOptions auto_parallel = 5;
    • getFailOnOptimizerErrors

      boolean getFailOnOptimizerErrors()
       If true, any optimization pass that fails will cause the MetaOptimizer to
       stop with an error. By default, or when set to false, failing passes are
       silently skipped.
       
      bool fail_on_optimizer_errors = 21;
      Returns:
      The failOnOptimizerErrors.
    • hasScopedAllocatorOpts

      boolean hasScopedAllocatorOpts()
      .tensorflow.ScopedAllocatorOptions scoped_allocator_opts = 16;
      Returns:
      Whether the scopedAllocatorOpts field is set.
    • getScopedAllocatorOpts

      ScopedAllocatorOptions getScopedAllocatorOpts()
      .tensorflow.ScopedAllocatorOptions scoped_allocator_opts = 16;
      Returns:
      The scopedAllocatorOpts.
    • getScopedAllocatorOptsOrBuilder

      ScopedAllocatorOptionsOrBuilder getScopedAllocatorOptsOrBuilder()
      .tensorflow.ScopedAllocatorOptions scoped_allocator_opts = 16;
    • getOptimizersList

      java.util.List<java.lang.String> getOptimizersList()
       If non-empty, this is used as an alternative way to specify the list of
       optimizations to turn on and the order in which to apply them (replacing
       the meta-optimizer).
       Of the RewriterConfig options, only the AutoParallel configuration options
       (the auto_parallel field) apply to manually requested optimization passes
       ("autoparallel"). Memory optimization passes ("memory") invoked here are
       not configurable (in contrast to memory optimization passes through the
       meta-optimizer) and act only on manual op annotations.
       Custom optimizers (see custom_optimizers) that are not part of this
       schedule will be run afterward, in the order in which they were specified.
       
      repeated string optimizers = 100;
      Returns:
      A list containing the optimizers.
    • getOptimizersCount

      int getOptimizersCount()
       If non-empty, this is used as an alternative way to specify the list of
       optimizations to turn on and the order in which to apply them (replacing
       the meta-optimizer).
       Of the RewriterConfig options, only the AutoParallel configuration options
       (the auto_parallel field) apply to manually requested optimization passes
       ("autoparallel"). Memory optimization passes ("memory") invoked here are
       not configurable (in contrast to memory optimization passes through the
       meta-optimizer) and act only on manual op annotations.
       Custom optimizers (see custom_optimizers) that are not part of this
       schedule will be run afterward, in the order in which they were specified.
       
      repeated string optimizers = 100;
      Returns:
      The count of optimizers.
    • getOptimizers

      java.lang.String getOptimizers​(int index)
       If non-empty, this is used as an alternative way to specify the list of
       optimizations to turn on and the order in which to apply them (replacing
       the meta-optimizer).
       Of the RewriterConfig options, only the AutoParallel configuration options
       (the auto_parallel field) apply to manually requested optimization passes
       ("autoparallel"). Memory optimization passes ("memory") invoked here are
       not configurable (in contrast to memory optimization passes through the
       meta-optimizer) and act only on manual op annotations.
       Custom optimizers (see custom_optimizers) that are not part of this
       schedule will be run afterward, in the order in which they were specified.
       
      repeated string optimizers = 100;
      Parameters:
      index - The index of the element to return.
      Returns:
      The optimizers at the given index.
    • getOptimizersBytes

      com.google.protobuf.ByteString getOptimizersBytes​(int index)
       If non-empty, this is used as an alternative way to specify the list of
       optimizations to turn on and the order in which to apply them (replacing
       the meta-optimizer).
       Of the RewriterConfig options, only the AutoParallel configuration options
       (the auto_parallel field) apply to manually requested optimization passes
       ("autoparallel"). Memory optimization passes ("memory") invoked here are
       not configurable (in contrast to memory optimization passes through the
       meta-optimizer) and act only on manual op annotations.
       Custom optimizers (see custom_optimizers) that are not part of this
       schedule will be run afterward, in the order in which they were specified.
       
      repeated string optimizers = 100;
      Parameters:
      index - The index of the value to return.
      Returns:
      The bytes of the optimizers at the given index.
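
Repeated string fields like optimizers generate a trio of accessors: a list view, a count, and an indexed getter. A hypothetical stand-in showing the pattern (not the real TensorFlow class; "autoparallel" and "memory" are the pass names mentioned in the field comment):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Miniature of the repeated-string accessor trio generated for `optimizers`.
public class OptimizersDemo {
    private final List<String> optimizers = new ArrayList<>();

    public OptimizersDemo addOptimizer(String name) {
        optimizers.add(name);
        return this;
    }

    public List<String> getOptimizersList() {
        return Collections.unmodifiableList(optimizers);
    }

    public int getOptimizersCount() { return optimizers.size(); }

    public String getOptimizers(int index) { return optimizers.get(index); }

    public static void main(String[] args) {
        // Schedule order is preserved: passes run in the order listed.
        OptimizersDemo d = new OptimizersDemo()
                .addOptimizer("autoparallel")
                .addOptimizer("memory");
        System.out.println(d.getOptimizersCount()); // 2
        System.out.println(d.getOptimizers(0));     // autoparallel
    }
}
```
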
    • getCustomOptimizersList

      java.util.List<RewriterConfig.CustomGraphOptimizer> getCustomOptimizersList()
       List of CustomGraphOptimizers to apply.
       
      repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
    • getCustomOptimizers

      RewriterConfig.CustomGraphOptimizer getCustomOptimizers​(int index)
       List of CustomGraphOptimizers to apply.
       
      repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
    • getCustomOptimizersCount

      int getCustomOptimizersCount()
       List of CustomGraphOptimizers to apply.
       
      repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
    • getCustomOptimizersOrBuilderList

      java.util.List<? extends RewriterConfig.CustomGraphOptimizerOrBuilder> getCustomOptimizersOrBuilderList()
       List of CustomGraphOptimizers to apply.
       
      repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
    • getCustomOptimizersOrBuilder

      RewriterConfig.CustomGraphOptimizerOrBuilder getCustomOptimizersOrBuilder​(int index)
       List of CustomGraphOptimizers to apply.
       
      repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
    • hasInterOptimizerVerifierConfig

      boolean hasInterOptimizerVerifierConfig()
       VerifierConfig specifying the verifiers to be run after every optimizer.
       
      .tensorflow.VerifierConfig inter_optimizer_verifier_config = 300;
      Returns:
      Whether the interOptimizerVerifierConfig field is set.
    • getInterOptimizerVerifierConfig

      VerifierConfig getInterOptimizerVerifierConfig()
       VerifierConfig specifying the verifiers to be run after every optimizer.
       
      .tensorflow.VerifierConfig inter_optimizer_verifier_config = 300;
      Returns:
      The interOptimizerVerifierConfig.
    • getInterOptimizerVerifierConfigOrBuilder

      VerifierConfigOrBuilder getInterOptimizerVerifierConfigOrBuilder()
       VerifierConfig specifying the verifiers to be run after every optimizer.
       
      .tensorflow.VerifierConfig inter_optimizer_verifier_config = 300;
    • hasPostOptimizationVerifierConfig

      boolean hasPostOptimizationVerifierConfig()
       VerifierConfig specifying the verifiers to be run at the end, after all
       optimizers have run.
       
      .tensorflow.VerifierConfig post_optimization_verifier_config = 301;
      Returns:
      Whether the postOptimizationVerifierConfig field is set.
    • getPostOptimizationVerifierConfig

      VerifierConfig getPostOptimizationVerifierConfig()
       VerifierConfig specifying the verifiers to be run at the end, after all
       optimizers have run.
       
      .tensorflow.VerifierConfig post_optimization_verifier_config = 301;
      Returns:
      The postOptimizationVerifierConfig.
    • getPostOptimizationVerifierConfigOrBuilder

      VerifierConfigOrBuilder getPostOptimizationVerifierConfigOrBuilder()
       VerifierConfig specifying the verifiers to be run at the end, after all
       optimizers have run.
       
      .tensorflow.VerifierConfig post_optimization_verifier_config = 301;