Interface CallableOptionsOrBuilder

All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
All Known Implementing Classes:
CallableOptions, CallableOptions.Builder

public interface CallableOptionsOrBuilder
extends com.google.protobuf.MessageOrBuilder
  • Method Summary

    Modifier and Type Method Description
    boolean containsFeedDevices​(java.lang.String key)
    The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
    boolean containsFetchDevices​(java.lang.String key)
    map<string, string> fetch_devices = 7;
    java.lang.String getFeed​(int index)
    Tensors to be fed in the callable.
    com.google.protobuf.ByteString getFeedBytes​(int index)
    Tensors to be fed in the callable.
    int getFeedCount()
    Tensors to be fed in the callable.
    java.util.Map<java.lang.String,​java.lang.String> getFeedDevices()
    Deprecated.
    int getFeedDevicesCount()
    The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
    java.util.Map<java.lang.String,​java.lang.String> getFeedDevicesMap()
    The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
    java.lang.String getFeedDevicesOrDefault​(java.lang.String key, java.lang.String defaultValue)
    The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
    java.lang.String getFeedDevicesOrThrow​(java.lang.String key)
    The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
    java.util.List<java.lang.String> getFeedList()
    Tensors to be fed in the callable.
    java.lang.String getFetch​(int index)
    Fetches.
    com.google.protobuf.ByteString getFetchBytes​(int index)
    Fetches.
    int getFetchCount()
    Fetches.
    java.util.Map<java.lang.String,​java.lang.String> getFetchDevices()
    Deprecated.
    int getFetchDevicesCount()
    map<string, string> fetch_devices = 7;
    java.util.Map<java.lang.String,​java.lang.String> getFetchDevicesMap()
    map<string, string> fetch_devices = 7;
    java.lang.String getFetchDevicesOrDefault​(java.lang.String key, java.lang.String defaultValue)
    map<string, string> fetch_devices = 7;
    java.lang.String getFetchDevicesOrThrow​(java.lang.String key)
    map<string, string> fetch_devices = 7;
    java.util.List<java.lang.String> getFetchList()
    Fetches.
    boolean getFetchSkipSync()
    By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced.
    RunOptions getRunOptions()
    Options that will be applied to each run.
    RunOptionsOrBuilder getRunOptionsOrBuilder()
    Options that will be applied to each run.
    java.lang.String getTarget​(int index)
    Target Nodes.
    com.google.protobuf.ByteString getTargetBytes​(int index)
    Target Nodes.
    int getTargetCount()
    Target Nodes.
    java.util.List<java.lang.String> getTargetList()
    Target Nodes.
    TensorConnection getTensorConnection​(int index)
    Tensors to be connected in the callable.
    int getTensorConnectionCount()
    Tensors to be connected in the callable.
    java.util.List<TensorConnection> getTensorConnectionList()
    Tensors to be connected in the callable.
    TensorConnectionOrBuilder getTensorConnectionOrBuilder​(int index)
    Tensors to be connected in the callable.
    java.util.List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList()
    Tensors to be connected in the callable.
    boolean hasRunOptions()
    Options that will be applied to each run.

    Methods inherited from interface com.google.protobuf.MessageLiteOrBuilder

    isInitialized

    Methods inherited from interface com.google.protobuf.MessageOrBuilder

    findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
  • Method Details

    • getFeedList

      java.util.List<java.lang.String> getFeedList()
       Tensors to be fed in the callable. Each feed is the name of a tensor.
       
      repeated string feed = 1;
      Returns:
      A list containing the feed.
    • getFeedCount

      int getFeedCount()
       Tensors to be fed in the callable. Each feed is the name of a tensor.
       
      repeated string feed = 1;
      Returns:
      The count of feed.
    • getFeed

      java.lang.String getFeed​(int index)
       Tensors to be fed in the callable. Each feed is the name of a tensor.
       
      repeated string feed = 1;
      Parameters:
      index - The index of the element to return.
      Returns:
      The feed at the given index.
    • getFeedBytes

      com.google.protobuf.ByteString getFeedBytes​(int index)
       Tensors to be fed in the callable. Each feed is the name of a tensor.
       
      repeated string feed = 1;
      Parameters:
      index - The index of the value to return.
      Returns:
      The bytes of the feed at the given index.
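      The repeated feed accessors above are different views of the same underlying list. A minimal sketch, assuming the generated CallableOptions class is on the classpath (adjust the import to your TensorFlow Java artifact); the tensor names are illustrative:

      ```java
      // CallableOptions.Builder also implements CallableOptionsOrBuilder,
      // so these accessors work on both built messages and builders.
      CallableOptions options = CallableOptions.newBuilder()
          .addFeed("a:0")   // illustrative tensor names
          .addFeed("b:0")
          .build();

      // getFeedCount()/getFeed(i) index into the same data as getFeedList().
      for (int i = 0; i < options.getFeedCount(); i++) {
        System.out.println(options.getFeed(i));                   // a:0, then b:0
      }
      // getFeedBytes(i) returns the same value as a UTF-8 ByteString.
      System.out.println(options.getFeedBytes(0).toStringUtf8()); // a:0
      ```

      The fetch and target accessors below follow the same pattern.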
    • getFetchList

      java.util.List<java.lang.String> getFetchList()
       Fetches. A list of tensor names. The caller of the callable expects a
       tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
       order of specified fetches does not change the execution order.
       
      repeated string fetch = 2;
      Returns:
      A list containing the fetch.
    • getFetchCount

      int getFetchCount()
       Fetches. A list of tensor names. The caller of the callable expects a
       tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
       order of specified fetches does not change the execution order.
       
      repeated string fetch = 2;
      Returns:
      The count of fetch.
    • getFetch

      java.lang.String getFetch​(int index)
       Fetches. A list of tensor names. The caller of the callable expects a
       tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
       order of specified fetches does not change the execution order.
       
      repeated string fetch = 2;
      Parameters:
      index - The index of the element to return.
      Returns:
      The fetch at the given index.
    • getFetchBytes

      com.google.protobuf.ByteString getFetchBytes​(int index)
       Fetches. A list of tensor names. The caller of the callable expects a
       tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
       order of specified fetches does not change the execution order.
       
      repeated string fetch = 2;
      Parameters:
      index - The index of the value to return.
      Returns:
      The bytes of the fetch at the given index.
    • getTargetList

      java.util.List<java.lang.String> getTargetList()
       Target Nodes. A list of node names. The named nodes will be run by the
       callable but their outputs will not be returned.
       
      repeated string target = 3;
      Returns:
      A list containing the target.
    • getTargetCount

      int getTargetCount()
       Target Nodes. A list of node names. The named nodes will be run by the
       callable but their outputs will not be returned.
       
      repeated string target = 3;
      Returns:
      The count of target.
    • getTarget

      java.lang.String getTarget​(int index)
       Target Nodes. A list of node names. The named nodes will be run by the
       callable but their outputs will not be returned.
       
      repeated string target = 3;
      Parameters:
      index - The index of the element to return.
      Returns:
      The target at the given index.
    • getTargetBytes

      com.google.protobuf.ByteString getTargetBytes​(int index)
       Target Nodes. A list of node names. The named nodes will be run by the
       callable but their outputs will not be returned.
       
      repeated string target = 3;
      Parameters:
      index - The index of the value to return.
      Returns:
      The bytes of the target at the given index.
    • hasRunOptions

      boolean hasRunOptions()
       Options that will be applied to each run.
       
      .tensorflow.RunOptions run_options = 4;
      Returns:
      Whether the runOptions field is set.
    • getRunOptions

      RunOptions getRunOptions()
       Options that will be applied to each run.
       
      .tensorflow.RunOptions run_options = 4;
      Returns:
      The runOptions.
    • getRunOptionsOrBuilder

      RunOptionsOrBuilder getRunOptionsOrBuilder()
       Options that will be applied to each run.
       
      .tensorflow.RunOptions run_options = 4;
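      Because run_options is a singular message field, hasRunOptions() is how callers distinguish an explicitly set value from the default instance; the getter never returns null. A sketch, assuming the generated CallableOptions and RunOptions classes are on the classpath:

      ```java
      CallableOptions unset = CallableOptions.getDefaultInstance();
      // For an unset message field, the getter still returns the default
      // RunOptions instance rather than null.
      System.out.println(unset.hasRunOptions());               // false

      CallableOptions set = CallableOptions.newBuilder()
          .setRunOptions(RunOptions.newBuilder()
              .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE))
          .build();
      System.out.println(set.hasRunOptions());                 // true
      System.out.println(set.getRunOptions().getTraceLevel()); // FULL_TRACE
      ```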
    • getTensorConnectionList

      java.util.List<TensorConnection> getTensorConnectionList()
       Tensors to be connected in the callable. Each TensorConnection denotes
       a pair of tensors in the graph, between which an edge will be created
       in the callable.
       
      repeated .tensorflow.TensorConnection tensor_connection = 5;
    • getTensorConnection

      TensorConnection getTensorConnection​(int index)
       Tensors to be connected in the callable. Each TensorConnection denotes
       a pair of tensors in the graph, between which an edge will be created
       in the callable.
       
      repeated .tensorflow.TensorConnection tensor_connection = 5;
    • getTensorConnectionCount

      int getTensorConnectionCount()
       Tensors to be connected in the callable. Each TensorConnection denotes
       a pair of tensors in the graph, between which an edge will be created
       in the callable.
       
      repeated .tensorflow.TensorConnection tensor_connection = 5;
    • getTensorConnectionOrBuilderList

      java.util.List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList()
       Tensors to be connected in the callable. Each TensorConnection denotes
       a pair of tensors in the graph, between which an edge will be created
       in the callable.
       
      repeated .tensorflow.TensorConnection tensor_connection = 5;
    • getTensorConnectionOrBuilder

      TensorConnectionOrBuilder getTensorConnectionOrBuilder​(int index)
       Tensors to be connected in the callable. Each TensorConnection denotes
       a pair of tensors in the graph, between which an edge will be created
       in the callable.
       
      repeated .tensorflow.TensorConnection tensor_connection = 5;
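      A hedged sketch of adding and reading a connection, assuming the generated TensorConnection message exposes the from_tensor/to_tensor fields declared in config.proto; the tensor names are made up:

      ```java
      // Hypothetical names: route the output "v:0" into the input "w:0".
      CallableOptions options = CallableOptions.newBuilder()
          .addTensorConnection(TensorConnection.newBuilder()
              .setFromTensor("v:0")
              .setToTensor("w:0"))
          .build();

      // The OrBuilder list view avoids copying each element into a message.
      for (TensorConnectionOrBuilder tc : options.getTensorConnectionOrBuilderList()) {
        System.out.println(tc.getFromTensor() + " -> " + tc.getToTensor()); // v:0 -> w:0
      }
      ```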
    • getFeedDevicesCount

      int getFeedDevicesCount()
       The Tensor objects fed in the callable and fetched from the callable
       are expected to be backed by host (CPU) memory by default.
       The options below allow changing that - feeding tensors backed by
       device memory, or returning tensors that are backed by device memory.
       The maps below map the name of a feed/fetch tensor (which appears in
       'feed' or 'fetch' fields above), to the fully qualified name of the device
       owning the memory backing the contents of the tensor.
       For example, creating a callable with the following options:
       CallableOptions {
         feed: "a:0"
         feed: "b:0"
         fetch: "x:0"
         fetch: "y:0"
         feed_devices: {
           "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
         }
         fetch_devices: {
           "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
          }
       }
       means that the Callable expects:
       - The first argument ("a:0") is a Tensor backed by GPU memory.
       - The second argument ("b:0") is a Tensor backed by host memory.
       and of its return values:
       - The first output ("x:0") will be backed by host memory.
       - The second output ("y:0") will be backed by GPU memory.
       FEEDS:
       It is the responsibility of the caller to ensure that the memory of the fed
       tensors will be correctly initialized and synchronized before it is
       accessed by operations executed during the call to Session::RunCallable().
       This is typically ensured by using the TensorFlow memory allocators
       (Device::GetAllocator()) to create the Tensor to be fed.
       Alternatively, for CUDA-enabled GPU devices, this typically means that the
       operation that produced the contents of the tensor has completed, i.e., the
       CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
       cuStreamSynchronize()).
       
      map<string, string> feed_devices = 6;
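      The textproto example in the field comment translates directly to the generated builder API; map entries are added with putFeedDevices/putFetchDevices. A sketch, assuming the generated CallableOptions class is on the classpath (tensor and device names are illustrative):

      ```java
      CallableOptions options = CallableOptions.newBuilder()
          .addFeed("a:0")
          .addFeed("b:0")
          .addFetch("x:0")
          .addFetch("y:0")
          .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .build();

      // containsFeedDevices / getFeedDevicesOrDefault never throw on a missing
      // key; getFeedDevicesOrThrow raises IllegalArgumentException instead.
      System.out.println(options.containsFeedDevices("a:0"));             // true
      System.out.println(options.getFeedDevicesOrDefault("b:0", "host")); // host
      System.out.println(options.getFeedDevicesCount());                  // 1
      ```

      The fetch_devices accessors below behave identically for fetched tensors.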
    • containsFeedDevices

      boolean containsFeedDevices​(java.lang.String key)
       The Tensor objects fed in the callable and fetched from the callable
       are expected to be backed by host (CPU) memory by default.
       The options below allow changing that - feeding tensors backed by
       device memory, or returning tensors that are backed by device memory.
       The maps below map the name of a feed/fetch tensor (which appears in
       'feed' or 'fetch' fields above), to the fully qualified name of the device
       owning the memory backing the contents of the tensor.
       For example, creating a callable with the following options:
       CallableOptions {
         feed: "a:0"
         feed: "b:0"
         fetch: "x:0"
         fetch: "y:0"
         feed_devices: {
           "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
         }
         fetch_devices: {
           "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
          }
       }
       means that the Callable expects:
       - The first argument ("a:0") is a Tensor backed by GPU memory.
       - The second argument ("b:0") is a Tensor backed by host memory.
       and of its return values:
       - The first output ("x:0") will be backed by host memory.
       - The second output ("y:0") will be backed by GPU memory.
       FEEDS:
       It is the responsibility of the caller to ensure that the memory of the fed
       tensors will be correctly initialized and synchronized before it is
       accessed by operations executed during the call to Session::RunCallable().
       This is typically ensured by using the TensorFlow memory allocators
       (Device::GetAllocator()) to create the Tensor to be fed.
       Alternatively, for CUDA-enabled GPU devices, this typically means that the
       operation that produced the contents of the tensor has completed, i.e., the
       CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
       cuStreamSynchronize()).
       
      map<string, string> feed_devices = 6;
    • getFeedDevices

      @Deprecated java.util.Map<java.lang.String,​java.lang.String> getFeedDevices()
      Deprecated.
      Use getFeedDevicesMap() instead.
    • getFeedDevicesMap

      java.util.Map<java.lang.String,​java.lang.String> getFeedDevicesMap()
       The Tensor objects fed in the callable and fetched from the callable
       are expected to be backed by host (CPU) memory by default.
       The options below allow changing that - feeding tensors backed by
       device memory, or returning tensors that are backed by device memory.
       The maps below map the name of a feed/fetch tensor (which appears in
       'feed' or 'fetch' fields above), to the fully qualified name of the device
       owning the memory backing the contents of the tensor.
       For example, creating a callable with the following options:
       CallableOptions {
         feed: "a:0"
         feed: "b:0"
         fetch: "x:0"
         fetch: "y:0"
         feed_devices: {
           "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
         }
         fetch_devices: {
           "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
          }
       }
       means that the Callable expects:
       - The first argument ("a:0") is a Tensor backed by GPU memory.
       - The second argument ("b:0") is a Tensor backed by host memory.
       and of its return values:
       - The first output ("x:0") will be backed by host memory.
       - The second output ("y:0") will be backed by GPU memory.
       FEEDS:
       It is the responsibility of the caller to ensure that the memory of the fed
       tensors will be correctly initialized and synchronized before it is
       accessed by operations executed during the call to Session::RunCallable().
       This is typically ensured by using the TensorFlow memory allocators
       (Device::GetAllocator()) to create the Tensor to be fed.
       Alternatively, for CUDA-enabled GPU devices, this typically means that the
       operation that produced the contents of the tensor has completed, i.e., the
       CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
       cuStreamSynchronize()).
       
      map<string, string> feed_devices = 6;
    • getFeedDevicesOrDefault

      java.lang.String getFeedDevicesOrDefault​(java.lang.String key, java.lang.String defaultValue)
       The Tensor objects fed in the callable and fetched from the callable
       are expected to be backed by host (CPU) memory by default.
       The options below allow changing that - feeding tensors backed by
       device memory, or returning tensors that are backed by device memory.
       The maps below map the name of a feed/fetch tensor (which appears in
       'feed' or 'fetch' fields above), to the fully qualified name of the device
       owning the memory backing the contents of the tensor.
       For example, creating a callable with the following options:
       CallableOptions {
         feed: "a:0"
         feed: "b:0"
         fetch: "x:0"
         fetch: "y:0"
         feed_devices: {
           "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
         }
         fetch_devices: {
           "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
          }
       }
       means that the Callable expects:
       - The first argument ("a:0") is a Tensor backed by GPU memory.
       - The second argument ("b:0") is a Tensor backed by host memory.
       and of its return values:
       - The first output ("x:0") will be backed by host memory.
       - The second output ("y:0") will be backed by GPU memory.
       FEEDS:
       It is the responsibility of the caller to ensure that the memory of the fed
       tensors will be correctly initialized and synchronized before it is
       accessed by operations executed during the call to Session::RunCallable().
       This is typically ensured by using the TensorFlow memory allocators
       (Device::GetAllocator()) to create the Tensor to be fed.
       Alternatively, for CUDA-enabled GPU devices, this typically means that the
       operation that produced the contents of the tensor has completed, i.e., the
       CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
       cuStreamSynchronize()).
       
      map<string, string> feed_devices = 6;
    • getFeedDevicesOrThrow

      java.lang.String getFeedDevicesOrThrow​(java.lang.String key)
       The Tensor objects fed in the callable and fetched from the callable
       are expected to be backed by host (CPU) memory by default.
       The options below allow changing that - feeding tensors backed by
       device memory, or returning tensors that are backed by device memory.
       The maps below map the name of a feed/fetch tensor (which appears in
       'feed' or 'fetch' fields above), to the fully qualified name of the device
       owning the memory backing the contents of the tensor.
       For example, creating a callable with the following options:
       CallableOptions {
         feed: "a:0"
         feed: "b:0"
         fetch: "x:0"
         fetch: "y:0"
         feed_devices: {
           "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
         }
         fetch_devices: {
           "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
          }
       }
       means that the Callable expects:
       - The first argument ("a:0") is a Tensor backed by GPU memory.
       - The second argument ("b:0") is a Tensor backed by host memory.
       and of its return values:
       - The first output ("x:0") will be backed by host memory.
       - The second output ("y:0") will be backed by GPU memory.
       FEEDS:
       It is the responsibility of the caller to ensure that the memory of the fed
       tensors will be correctly initialized and synchronized before it is
       accessed by operations executed during the call to Session::RunCallable().
       This is typically ensured by using the TensorFlow memory allocators
       (Device::GetAllocator()) to create the Tensor to be fed.
       Alternatively, for CUDA-enabled GPU devices, this typically means that the
       operation that produced the contents of the tensor has completed, i.e., the
       CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
       cuStreamSynchronize()).
       
      map<string, string> feed_devices = 6;
    • getFetchDevicesCount

      int getFetchDevicesCount()
      map<string, string> fetch_devices = 7;
    • containsFetchDevices

      boolean containsFetchDevices​(java.lang.String key)
      map<string, string> fetch_devices = 7;
    • getFetchDevices

      @Deprecated java.util.Map<java.lang.String,​java.lang.String> getFetchDevices()
      Deprecated.
      Use getFetchDevicesMap() instead.
    • getFetchDevicesMap

      java.util.Map<java.lang.String,​java.lang.String> getFetchDevicesMap()
      map<string, string> fetch_devices = 7;
    • getFetchDevicesOrDefault

      java.lang.String getFetchDevicesOrDefault​(java.lang.String key, java.lang.String defaultValue)
      map<string, string> fetch_devices = 7;
    • getFetchDevicesOrThrow

      java.lang.String getFetchDevicesOrThrow​(java.lang.String key)
      map<string, string> fetch_devices = 7;
    • getFetchSkipSync

      boolean getFetchSkipSync()
       By default, RunCallable() will synchronize the GPU stream before returning
       fetched tensors on a GPU device, to ensure that the values in those tensors
       have been produced. This simplifies interacting with the tensors, but
       potentially incurs a performance hit.
       If this option is set to true, the caller is responsible for ensuring
       that the values in the fetched tensors have been produced before they are
       used. The caller can do this by invoking `Device::Sync()` on the underlying
       device(s), or by feeding the tensors back to the same Session using
       `feed_devices` with the same corresponding device name.
       
      bool fetch_skip_sync = 8;
      Returns:
      The fetchSkipSync.
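      A one-line sketch of opting out of the implicit synchronization, assuming the generated CallableOptions class is on the classpath:

      ```java
      // Skip the GPU stream sync on fetch; the caller must then synchronize the
      // device (e.g. Device::Sync() in the C++ runtime) before reading results.
      CallableOptions options = CallableOptions.newBuilder()
          .setFetchSkipSync(true)
          .build();
      System.out.println(options.getFetchSkipSync()); // true
      ```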