)]
-```
-
-There is also a deprecated way to call `from_generator`: with the
-`output_types` argument alone, or together with the `output_shapes`
-argument. In this case the output of the function will be assumed to
-consist of `tf.Tensor` objects with the types defined by `output_types`
-and with shapes which are either unknown or defined by `output_shapes`.
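-
-A minimal sketch of the deprecated calling convention (the
-`output_signature` argument is the preferred replacement):
-
-```
->>> def gen():
-...   for i in range(3):
-...     yield i
->>> ds = tf.data.Dataset.from_generator(
-...     gen, output_types=tf.int64, output_shapes=())
->>> list(ds.as_numpy_iterator())
-[0, 1, 2]
-```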
-
-Note: The current implementation of `Dataset.from_generator()` uses
-`tf.numpy_function` and inherits the same constraints. In particular, it
-requires the dataset and iterator related operations to be placed
-on a device in the same process as the Python program that called
-`Dataset.from_generator()`. The body of `generator` will not be
-serialized in a `GraphDef`, and you should not use this method if you
-need to serialize your model and restore it in a different environment.
-
-Note: If `generator` depends on mutable global variables or other external
-state, be aware that the runtime may invoke `generator` multiple times
-(in order to support repeating the `Dataset`) and at any time
-between the call to `Dataset.from_generator()` and the production of the
-first element from the generator. Mutating global variables or external
-state can cause undefined behavior, and we recommend that you explicitly
-cache any external state in `generator` before calling
-`Dataset.from_generator()`.
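-
-As a sketch of such explicit caching (the names are illustrative),
-snapshot the state before constructing the dataset so later mutations
-cannot change the generator's output:
-
-```
->>> state = [1, 2, 3]        # mutable external state
->>> snapshot = list(state)   # cache it explicitly
->>> ds = tf.data.Dataset.from_generator(
-...     lambda: iter(snapshot),
-...     output_signature=tf.TensorSpec(shape=(), dtype=tf.int32))
->>> state.append(4)          # does not affect the dataset
->>> list(ds.as_numpy_iterator())
-[1, 2, 3]
-```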
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `generator` | A callable object that returns an object that supports the `iter()` protocol. If `args` is not specified, `generator` must take no arguments; otherwise it must take as many arguments as there are values in `args`. |
-| `output_types` | (Optional.) A (nested) structure of `tf.DType` objects corresponding to each component of an element yielded by `generator`. |
-| `output_shapes` | (Optional.) A (nested) structure of `tf.TensorShape` objects corresponding to each component of an element yielded by `generator`. |
-| `args` | (Optional.) A tuple of `tf.Tensor` objects that will be evaluated and passed to `generator` as NumPy-array arguments. |
-| `output_signature` | (Optional.) A (nested) structure of `tf.TypeSpec` objects corresponding to each component of an element yielded by `generator`. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## from_sparse_tensor_slices
-
-```
-@staticmethod
-from_sparse_tensor_slices(
-    sparse_tensor
-)
-```
-
-
-Splits each rank-N `tf.sparse.SparseTensor` in this dataset row-wise. (deprecated)
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-Use `tf.data.Dataset.from_tensor_slices()`.
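-
-A sketch of the recommended replacement, slicing a
-`tf.sparse.SparseTensor` row-wise with `from_tensor_slices`:
-
-```
->>> st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 1]],
-...                             values=[1, 2], dense_shape=[2, 2])
->>> ds = tf.data.Dataset.from_tensor_slices(st)
->>> for elem in ds:
-...   print(tf.sparse.to_dense(elem).numpy())
-[1 0]
-[0 2]
-```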
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `sparse_tensor` | A `tf.sparse.SparseTensor`. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset` of rank-(N-1) sparse tensors. |
-
-## from_tensor_slices
-
-```
-@staticmethod
-from_tensor_slices(
-    tensors
-)
-```
-
-
-Creates a `Dataset` whose elements are slices of the given tensors.
-
-The given tensors are sliced along their first dimension. This operation
-preserves the structure of the input tensors, removing the first dimension
-of each tensor and using it as the dataset dimension. All input tensors
-must have the same size in their first dimensions.
-
-```
->>> # Slicing a 1D tensor produces scalar tensor elements.
->>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
->>> list(dataset.as_numpy_iterator())
-[1, 2, 3]
-```
-
-```
->>> # Slicing a 2D tensor produces 1D tensor elements.
->>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
->>> list(dataset.as_numpy_iterator())
-[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
-```
-
-```
->>> # Slicing a tuple of 1D tensors produces tuple elements containing
->>> # scalar tensors.
->>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
->>> list(dataset.as_numpy_iterator())
-[(1, 3, 5), (2, 4, 6)]
-```
-
-```
->>> # Dictionary structure is also preserved.
->>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
->>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
-... {'a': 2, 'b': 4}]
-True
-```
-
-```
->>> # Two tensors can be combined into one Dataset object.
->>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
->>> labels = tf.constant(['A', 'B', 'A']) # ==> 1D tensor of 3 elements
->>> dataset = Dataset.from_tensor_slices((features, labels))
->>> # Both the features and the labels tensors can be converted
->>> # to a Dataset object separately and combined after.
->>> features_dataset = Dataset.from_tensor_slices(features)
->>> labels_dataset = Dataset.from_tensor_slices(labels)
->>> dataset = Dataset.zip((features_dataset, labels_dataset))
->>> # A batched feature and label set can be converted to a Dataset
->>> # in similar fashion.
->>> batched_features = tf.constant([[[1, 3], [2, 3]],
-... [[2, 1], [1, 2]],
-... [[3, 3], [3, 2]]], shape=(3, 2, 2))
->>> batched_labels = tf.constant([['A', 'A'],
-... ['B', 'B'],
-... ['A', 'B']], shape=(3, 2, 1))
->>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
->>> for element in dataset.as_numpy_iterator():
-... print(element)
-(array([[1, 3],
- [2, 3]], dtype=int32), array([[b'A'],
- [b'A']], dtype=object))
-(array([[2, 1],
- [1, 2]], dtype=int32), array([[b'B'],
- [b'B']], dtype=object))
-(array([[3, 3],
- [3, 2]], dtype=int32), array([[b'A'],
- [b'B']], dtype=object))
-```
-
-Note that if `tensors` contains a NumPy array, and eager execution is not
-enabled, the values will be embedded in the graph as one or more
-`tf.constant` operations. For large datasets (> 1 GB), this can waste
-memory and run into byte limits of graph serialization. If `tensors`
-contains one or more large NumPy arrays, consider the alternative described
-in [this guide](
-https://tensorflow.org/guide/data#consuming_numpy_arrays).
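-
-One such alternative, sketched here, wraps the array in a generator so
-the values are not embedded in the graph (the array is illustrative):
-
-```
->>> big_array = np.zeros([4, 2])  # stands in for a large NumPy array
->>> ds = tf.data.Dataset.from_generator(
-...     lambda: iter(big_array),
-...     output_signature=tf.TensorSpec(shape=(2,), dtype=tf.float64))
->>> len(list(ds.as_numpy_iterator()))
-4
-```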
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `tensors` | A dataset element, whose components have the same first dimension. Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## from_tensors
-
-```
-@staticmethod
-from_tensors(
-    tensors
-)
-```
-
-
-Creates a `Dataset` with a single element, comprising the given tensors.
-
-`from_tensors` produces a dataset containing only a single element. To slice
-the input tensor into multiple elements, use `from_tensor_slices` instead.
-
-```
->>> dataset = tf.data.Dataset.from_tensors([1, 2, 3])
->>> list(dataset.as_numpy_iterator())
-[array([1, 2, 3], dtype=int32)]
->>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
->>> list(dataset.as_numpy_iterator())
-[(array([1, 2, 3], dtype=int32), b'A')]
-```
-
-```
->>> # You can use `from_tensors` to produce a dataset which repeats
->>> # the same example many times.
->>> example = tf.constant([1,2,3])
->>> dataset = tf.data.Dataset.from_tensors(example).repeat(2)
->>> list(dataset.as_numpy_iterator())
-[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
-```
-
-Note that if `tensors` contains a NumPy array, and eager execution is not
-enabled, the values will be embedded in the graph as one or more
-`tf.constant` operations. For large datasets (> 1 GB), this can waste
-memory and run into byte limits of graph serialization. If `tensors`
-contains one or more large NumPy arrays, consider the alternative described
-in [this
-guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `tensors` | A dataset "element". Supported values are documented [here](https://www.tensorflow.org/guide/data#dataset_structure). |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## get_single_element
-
-```
-get_single_element()
-```
-
-
-Returns the single element of the `dataset` as a nested structure of tensors.
-
-The function enables you to use a `tf.data.Dataset` in a stateless
-"tensor-in tensor-out" expression, without creating an iterator.
-This makes it easy to transform tensors using the optimized
-`tf.data.Dataset` abstraction on top of them.
-
-For example, let's consider a `preprocessing_fn` which takes the raw
-features as input and returns the processed features.
-
-```python
-def preprocessing_fn(raw_feature):
- # ... the raw_feature is preprocessed as per the use-case
- return feature
-
-raw_features = ... # input batch of BATCH_SIZE elements.
-dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
- .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
- .batch(BATCH_SIZE))
-
-processed_features = dataset.get_single_element()
-```
-
-In the above example, the `raw_features` tensor of length `BATCH_SIZE`
-was converted to a `tf.data.Dataset`. Next, each `raw_feature` was
-mapped using the `preprocessing_fn`, and the processed features were
-grouped into a single batch. The final `dataset` contains only one
-element, which is a batch of all the processed features.
-
-NOTE: The `dataset` should contain only one element.
-
-Now, instead of creating an iterator for the `dataset` and retrieving the
-batch of features, the `dataset.get_single_element()` method is used
-to skip the iterator creation process and directly output the batch of
-features.
-
-This can be particularly useful when your tensor transformations are
-expressed as `tf.data.Dataset` operations, and you want to use those
-transformations while serving your model.
-
-#### Keras
-
-```python
-
-model = ... # A pre-built or custom model
-
-class PreprocessingModel(tf.keras.Model):
- def __init__(self, model):
-    super().__init__()
- self.model = model
-
- @tf.function(input_signature=[...])
- def serving_fn(self, data):
- ds = tf.data.Dataset.from_tensor_slices(data)
- ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
- ds = ds.batch(batch_size=BATCH_SIZE)
- return tf.argmax(self.model(ds.get_single_element()), axis=-1)
-
-preprocessing_model = PreprocessingModel(model)
-your_exported_model_dir = ... # save the model to this path.
-tf.saved_model.save(preprocessing_model, your_exported_model_dir,
- signatures={'serving_default': preprocessing_model.serving_fn}
- )
-```
-
-#### Estimator
-
-In the case of estimators, you generally need to define a `serving_input_fn`
-which processes the features that the model requires at inference time.
-
-```python
-def serving_input_fn():
-
-  raw_feature_spec = ... # Spec for the raw_features
-  input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
-      raw_feature_spec, default_batch_size=None)
- serving_input_receiver = input_fn()
- raw_features = serving_input_receiver.features
-
- def preprocessing_fn(raw_feature):
- # ... the raw_feature is preprocessed as per the use-case
- return feature
-
- dataset = (tf.data.Dataset.from_tensor_slices(raw_features)
- .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
- .batch(BATCH_SIZE))
-
- processed_features = dataset.get_single_element()
-
- # Please note that the value of `BATCH_SIZE` should be equal to
- # the size of the leading dimension of `raw_features`. This ensures
-  # that `dataset` has only one element, which is a prerequisite for
- # using `dataset.get_single_element()`.
-
- return tf.estimator.export.ServingInputReceiver(
- processed_features, serving_input_receiver.receiver_tensors)
-
-estimator = ... # A pre-built or custom estimator
-estimator.export_saved_model(your_exported_model_dir, serving_input_fn)
-```
-
-
-
-
-| Returns |
-| :--- |
-| A nested structure of `tf.Tensor` objects, corresponding to the single element of `dataset`. |
-
-| Raises | |
-| :--- | :--- |
-| `InvalidArgumentError` | (at runtime) if `dataset` does not contain exactly one element. |
-
-## group_by_window
-
-```
-group_by_window(
-    key_func, reduce_func, window_size=None, window_size_func=None
-)
-```
-
-
-Groups windows of elements by key and reduces them.
-
-This transformation maps each consecutive element in a dataset to a key
-using `key_func` and groups the elements by key. It then applies
-`reduce_func` to at most `window_size_func(key)` elements matching the same
-key. All except the final window for each key will contain
-`window_size_func(key)` elements; the final window may be smaller.
-
-You may provide either a constant `window_size` or a window size determined
-by the key through `window_size_func`.
-
-```
->>> dataset = tf.data.Dataset.range(10)
->>> window_size = 5
->>> key_func = lambda x: x%2
->>> reduce_func = lambda key, dataset: dataset.batch(window_size)
->>> dataset = dataset.group_by_window(
-... key_func=key_func,
-... reduce_func=reduce_func,
-... window_size=window_size)
->>> for elem in dataset.as_numpy_iterator():
-... print(elem)
-[0 2 4 6 8]
-[1 3 5 7 9]
-```
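-
-The same grouping expressed with a key-dependent window size, as a
-minimal sketch (here the function returns the same size for every key,
-purely for illustration):
-
-```
->>> dataset = tf.data.Dataset.range(10)
->>> dataset = dataset.group_by_window(
-...     key_func=lambda x: x % 2,
-...     reduce_func=lambda key, ds: ds.batch(5),
-...     window_size_func=lambda key: tf.constant(5, dtype=tf.int64))
->>> for elem in dataset.as_numpy_iterator():
-...   print(elem)
-[0 2 4 6 8]
-[1 3 5 7 9]
-```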
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `key_func` | A function mapping a nested structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to a scalar `tf.int64` tensor. |
-| `reduce_func` | A function mapping a key and a dataset of up to `window_size` consecutive elements matching that key to another dataset. |
-| `window_size` | A `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size_func`. |
-| `window_size_func` | A function mapping a key to a `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements matching the same key to combine in a single batch, which will be passed to `reduce_func`. Mutually exclusive with `window_size`. |
-
-| Returns |
-| :--- |
-| A `tf.data.Dataset` |
-
-| Raises | |
-| :--- | :--- |
-| `ValueError` | if neither or both of {`window_size`, `window_size_func`} are passed. |
-
-## interleave
-
-```
-interleave(
-    map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
-    deterministic=None
-)
-```
-
-
-Maps `map_func` across this dataset, and interleaves the results.
-
-For example, you can use `Dataset.interleave()` to process many input files
-concurrently:
-
-```
->>> # Preprocess 4 files concurrently, and interleave blocks of 16 records
->>> # from each file.
->>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
-... "/var/data/file3.txt", "/var/data/file4.txt"]
->>> dataset = tf.data.Dataset.from_tensor_slices(filenames)
->>> def parse_fn(filename):
-... return tf.data.Dataset.range(10)
->>> dataset = dataset.interleave(lambda x:
-... tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
-... cycle_length=4, block_length=16)
-```
-
-The `cycle_length` and `block_length` arguments control the order in which
-elements are produced. `cycle_length` controls the number of input elements
-that are processed concurrently. If you set `cycle_length` to 1, this
-transformation will handle one input element at a time, and will produce
-identical results to `tf.data.Dataset.flat_map`. In general,
-this transformation will apply `map_func` to `cycle_length` input elements,
-open iterators on the returned `Dataset` objects, and cycle through them
-producing `block_length` consecutive elements from each iterator, and
-consuming the next input element each time it reaches the end of an
-iterator.
-
-#### For example:
-
-
-
-```
->>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
->>> # NOTE: New lines indicate "block" boundaries.
->>> dataset = dataset.interleave(
-... lambda x: Dataset.from_tensors(x).repeat(6),
-... cycle_length=2, block_length=4)
->>> list(dataset.as_numpy_iterator())
-[1, 1, 1, 1,
- 2, 2, 2, 2,
- 1, 1,
- 2, 2,
- 3, 3, 3, 3,
- 4, 4, 4, 4,
- 3, 3,
- 4, 4,
- 5, 5, 5, 5,
- 5, 5]
-```
-
-Note: The order of elements yielded by this transformation is
-deterministic, as long as `map_func` is a pure function and
-`deterministic=True`. If `map_func` contains any stateful operations, the
-order in which that state is accessed is undefined.
-
-Performance can often be improved by setting `num_parallel_calls` so that
-`interleave` will use multiple threads to fetch elements. If determinism
-isn't required, it can also improve performance to set
-`deterministic=False`.
-
-```
->>> filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
-... "/var/data/file3.txt", "/var/data/file4.txt"]
->>> dataset = tf.data.Dataset.from_tensor_slices(filenames)
->>> dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
-... cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
-... deterministic=False)
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `map_func` | A function mapping a dataset element to a dataset. |
-| `cycle_length` | (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If `num_parallel_calls` is set to `tf.data.AUTOTUNE`, the `cycle_length` argument identifies the maximum degree of parallelism. |
-| `block_length` | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. |
-| `num_parallel_calls` | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value `tf.data.AUTOTUNE` is used, then the number of parallel calls is set dynamically based on available CPU. |
-| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the `tf.data.Options.experimental_deterministic` option (`True` by default) controls the behavior. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## list_files
-
-```
-@staticmethod
-list_files(
-    file_pattern, shuffle=None, seed=None
-)
-```
-
-
-A dataset of all files matching one or more glob patterns.
-
-The `file_pattern` argument should be a small number of glob patterns.
-If your filenames have already been globbed, use
-`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every
-filename with `list_files` may result in poor performance with remote
-storage systems.
-
-Note: The default behavior of this method is to return filenames in
-a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False`
-to get results in a deterministic order.
-
-#### Example:
-
-If we had the following files on our filesystem:
-
- - /path/to/dir/a.txt
- - /path/to/dir/b.py
- - /path/to/dir/c.py
-
-If we pass "/path/to/dir/*.py" as the `file_pattern`, the dataset
-would produce:
-
- - /path/to/dir/b.py
- - /path/to/dir/c.py
-
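-A sketch of this call (the paths are illustrative):
-
-```
->>> dataset = tf.data.Dataset.list_files("/path/to/dir/*.py",
-...                                      shuffle=False)
->>> list(dataset.as_numpy_iterator())
-[b'/path/to/dir/b.py', b'/path/to/dir/c.py']
-```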
-
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `file_pattern` | A string, a list of strings, or a `tf.Tensor` of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. |
-| `shuffle` | (Optional.) If `True`, the file names will be shuffled randomly. Defaults to `True`. |
-| `seed` | (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random seed that will be used to create the distribution. See `tf.random.set_seed` for behavior. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset` of strings corresponding to file names. |
-
-## make_initializable_iterator
-
-```
-make_initializable_iterator(
-    shared_name=None
-)
-```
-
-
-Creates an iterator for elements of this dataset. (deprecated)
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_initializable_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
-
-Note: The returned iterator will be in an uninitialized state,
-and you must run the `iterator.initializer` operation before using it:
-
-```python
-# Building graph ...
-dataset = ...
-iterator = dataset.make_initializable_iterator()
-next_value = iterator.get_next() # This is a Tensor.
-
-# ... from within a session ...
-sess.run(iterator.initializer)
-try:
- while True:
- value = sess.run(next_value)
- ...
-except tf.errors.OutOfRangeError:
- pass
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `shared_name` | (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). |
-
-| Returns |
-| :--- |
-| A `tf.data.Iterator` for elements of this dataset. |
-
-| Raises | |
-| :--- | :--- |
-| `RuntimeError` | If eager execution is enabled. |
-
-## make_one_shot_iterator
-
-```
-make_one_shot_iterator()
-```
-
-
-Creates an iterator for elements of this dataset. (deprecated)
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
-
-Note: The returned iterator will be initialized automatically.
-A "one-shot" iterator does not currently support re-initialization. For
-that see `make_initializable_iterator`.
-
-#### Example:
-
-
-
-```python
-# Building graph ...
-dataset = ...
-next_value = dataset.make_one_shot_iterator().get_next()
-
-# ... from within a session ...
-try:
- while True:
- value = sess.run(next_value)
- ...
-except tf.errors.OutOfRangeError:
- pass
-```
-
-
-
-
-| Returns |
-| :--- |
-| A `tf.data.Iterator` for elements of this dataset. |
-
-## map
-
-```
-map(
-    map_func, num_parallel_calls=None, deterministic=None
-)
-```
-
-
-Maps `map_func` across the elements of this dataset.
-
-This transformation applies `map_func` to each element of this dataset, and
-returns a new dataset containing the transformed elements, in the same
-order as they appeared in the input. `map_func` can be used to change both
-the values and the structure of a dataset's elements. Supported structure
-constructs are documented
-[here](https://www.tensorflow.org/guide/data#dataset_structure).
-
-For example, `map` can be used for adding 1 to each element, or projecting a
-subset of element components.
-
-```
->>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
->>> dataset = dataset.map(lambda x: x + 1)
->>> list(dataset.as_numpy_iterator())
-[2, 3, 4, 5, 6]
-```
-
-The input signature of `map_func` is determined by the structure of each
-element in this dataset.
-
-```
->>> dataset = Dataset.range(5)
->>> # `map_func` takes a single argument of type `tf.Tensor` with the same
->>> # shape and dtype.
->>> result = dataset.map(lambda x: x + 1)
-```
-
-```
->>> # Each element is a tuple containing two `tf.Tensor` objects.
->>> elements = [(1, "foo"), (2, "bar"), (3, "baz")]
->>> dataset = tf.data.Dataset.from_generator(
-... lambda: elements, (tf.int32, tf.string))
->>> # `map_func` takes two arguments of type `tf.Tensor`. This function
->>> # projects out just the first component.
->>> result = dataset.map(lambda x_int, y_str: x_int)
->>> list(result.as_numpy_iterator())
-[1, 2, 3]
-```
-
-```
->>> # Each element is a dictionary mapping strings to `tf.Tensor` objects.
->>> elements = ([{"a": 1, "b": "foo"},
-... {"a": 2, "b": "bar"},
-... {"a": 3, "b": "baz"}])
->>> dataset = tf.data.Dataset.from_generator(
-... lambda: elements, {"a": tf.int32, "b": tf.string})
->>> # `map_func` takes a single argument of type `dict` with the same keys
->>> # as the elements.
->>> result = dataset.map(
-...     lambda d: tf.strings.join([tf.strings.as_string(d["a"]), d["b"]]))
-```
-
-The value or values returned by `map_func` determine the structure of each
-element in the returned dataset.
-
-```
->>> dataset = tf.data.Dataset.range(3)
->>> # `map_func` returns two `tf.Tensor` objects.
->>> def g(x):
-... return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
->>> result = dataset.map(g)
->>> result.element_spec
-(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
->>> # Python primitives, lists, and NumPy arrays are implicitly converted to
->>> # `tf.Tensor`.
->>> def h(x):
-... return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
->>> result = dataset.map(h)
->>> result.element_spec
-(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
->>> # `map_func` can return nested structures.
->>> def i(x):
-... return (37.0, [42, 16]), "foo"
->>> result = dataset.map(i)
->>> result.element_spec
-((TensorSpec(shape=(), dtype=tf.float32, name=None),
- TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
- TensorSpec(shape=(), dtype=tf.string, name=None))
-```
-
-`map_func` can accept as arguments and return any type of dataset element.
-
-Note that irrespective of the context in which `map_func` is defined (eager
-vs. graph), tf.data traces the function and executes it as a graph. To use
-Python code inside of the function you have a few options:
-
-1) Rely on AutoGraph to convert Python code into an equivalent graph
-computation. The downside of this approach is that AutoGraph can convert
-some but not all Python code.
-
-2) Use `tf.py_function`, which allows you to write arbitrary Python code but
-will generally result in worse performance than 1). For example:
-
-```
->>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
->>> # transform a string tensor to upper case string using a Python function
->>> def upper_case_fn(t: tf.Tensor):
-... return t.numpy().decode('utf-8').upper()
->>> d = d.map(lambda x: tf.py_function(func=upper_case_fn,
-... inp=[x], Tout=tf.string))
->>> list(d.as_numpy_iterator())
-[b'HELLO', b'WORLD']
-```
-
-3) Use `tf.numpy_function`, which also allows you to write arbitrary
-Python code. Note that `tf.py_function` accepts `tf.Tensor` whereas
-`tf.numpy_function` accepts numpy arrays and returns only numpy arrays.
-For example:
-
-```
->>> d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
->>> def upper_case_fn(t: np.ndarray):
-... return t.decode('utf-8').upper()
->>> d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
-... inp=[x], Tout=tf.string))
->>> list(d.as_numpy_iterator())
-[b'HELLO', b'WORLD']
-```
-
-Note that the use of `tf.numpy_function` and `tf.py_function`
-in general precludes the possibility of executing user-defined
-transformations in parallel (because of the Python GIL).
-
-Performance can often be improved by setting `num_parallel_calls` so that
-`map` will use multiple threads to process elements. If deterministic order
-isn't required, it can also improve performance to set
-`deterministic=False`.
-
-```
->>> dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
->>> dataset = dataset.map(lambda x: x + 1,
-... num_parallel_calls=tf.data.AUTOTUNE,
-... deterministic=False)
-```
-
-The order of elements yielded by this transformation is deterministic if
-`deterministic=True`. If `map_func` contains stateful operations and
-`num_parallel_calls > 1`, the order in which that state is accessed is
-undefined, so the values of output elements may not be deterministic
-regardless of the `deterministic` flag value.
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `map_func` | A function mapping a dataset element to another dataset element. |
-| `num_parallel_calls` | (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value `tf.data.AUTOTUNE` is used, then the number of parallel calls is set dynamically based on available CPU. |
-| `deterministic` | (Optional.) When `num_parallel_calls` is specified, if this boolean is specified (`True` or `False`), it controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the `tf.data.Options.experimental_deterministic` option (`True` by default) controls the behavior. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## map_with_legacy_function
-
-```
-map_with_legacy_function(
-    map_func, num_parallel_calls=None, deterministic=None
-)
-```
-
-
-Maps `map_func` across the elements of this dataset. (deprecated)
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-Use `tf.data.Dataset.map()`.
-
-Note: This is an escape hatch for existing uses of `map` that do not work
-with V2 functions. New uses are strongly discouraged and existing uses
-should migrate to `map` as this method will be removed in V2.
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `map_func` | A function mapping a (nested) structure of tensors (having shapes and types defined by `self.output_shapes` and `self.output_types`) to another (nested) structure of tensors. |
-| `num_parallel_calls` | (Optional.) A `tf.int32` scalar `tf.Tensor`, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value `tf.data.AUTOTUNE` is used, then the number of parallel calls is set dynamically based on available CPU. |
-| `deterministic` | (Optional.) When `num_parallel_calls` is specified, this boolean controls the order in which the transformation produces elements. If set to `False`, the transformation is allowed to yield elements out of order to trade determinism for performance. If not specified, the `tf.data.Options.experimental_deterministic` option (`True` by default) controls the behavior. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## options
-
-```
-options()
-```
-
-
-Returns the options for this dataset and its inputs.
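-
-A minimal sketch:
-
-```
->>> options = tf.data.Dataset.range(3).options()
->>> isinstance(options, tf.data.Options)
-True
-```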
-
-
-
-
-
-| Returns |
-| :--- |
-| A `tf.data.Options` object representing the dataset options. |
-
-## padded_batch
-
-```
-padded_batch(
-    batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
-)
-```
-
-
-Combines consecutive elements of this dataset into padded batches.
-
-This transformation combines multiple consecutive elements of the input
-dataset into a single element.
-
-Like `tf.data.Dataset.batch`, the components of the resulting element will
-have an additional outer dimension, which will be `batch_size` (or
-`N % batch_size` for the last element if `batch_size` does not divide the
-number of input elements `N` evenly and `drop_remainder` is `False`). If
-your program depends on the batches having the same outer dimension, you
-should set the `drop_remainder` argument to `True` to prevent the smaller
-batch from being produced.
-
-Unlike `tf.data.Dataset.batch`, the input elements to be batched may have
-different shapes, and this transformation will pad each component to the
-respective shape in `padded_shapes`. The `padded_shapes` argument
-determines the resulting shape for each dimension of each component in an
-output element:
-
-* If the dimension is a constant, the component will be padded out to that
- length in that dimension.
-* If the dimension is unknown, the component will be padded out to the
- maximum length of all elements in that dimension.
-
-```
->>> A = (tf.data.Dataset
-... .range(1, 5, output_type=tf.int32)
-... .map(lambda x: tf.fill([x], x)))
->>> # Pad to the smallest per-batch size that fits all elements.
->>> B = A.padded_batch(2)
->>> for element in B.as_numpy_iterator():
-... print(element)
-[[1 0]
- [2 2]]
-[[3 3 3 0]
- [4 4 4 4]]
->>> # Pad to a fixed size.
->>> C = A.padded_batch(2, padded_shapes=5)
->>> for element in C.as_numpy_iterator():
-... print(element)
-[[1 0 0 0 0]
- [2 2 0 0 0]]
-[[3 3 3 0 0]
- [4 4 4 4 0]]
->>> # Pad with a custom value.
->>> D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
->>> for element in D.as_numpy_iterator():
-... print(element)
-[[ 1 -1 -1 -1 -1]
- [ 2 2 -1 -1 -1]]
-[[ 3 3 3 -1 -1]
- [ 4 4 4 4 -1]]
->>> # Components of nested elements can be padded independently.
->>> elements = [([1, 2, 3], [10]),
-... ([4, 5], [11, 12])]
->>> dataset = tf.data.Dataset.from_generator(
-... lambda: iter(elements), (tf.int32, tf.int32))
->>> # Pad the first component of the tuple to length 4, and the second
->>> # component to the smallest size that fits.
->>> dataset = dataset.padded_batch(2,
-... padded_shapes=([4], [None]),
-... padding_values=(-1, 100))
->>> list(dataset.as_numpy_iterator())
-[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
- array([[ 10, 100], [ 11, 12]], dtype=int32))]
->>> # Pad with a single value and multiple components.
->>> E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
->>> for element in E.as_numpy_iterator():
-... print(element)
-(array([[ 1, -1],
- [ 2, 2]], dtype=int32), array([[ 1, -1],
- [ 2, 2]], dtype=int32))
-(array([[ 3, 3, 3, -1],
- [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
- [ 4, 4, 4, 4]], dtype=int32))
-```
-
-See also `tf.data.experimental.dense_to_sparse_batch`, which combines
-elements that may have different shapes into a `tf.sparse.SparseTensor`.
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `batch_size` | A `tf.int64` scalar `tf.Tensor`, representing the number of consecutive elements of this dataset to combine in a single batch. |
-| `padded_shapes` | (Optional.) A (nested) structure of `tf.TensorShape` or `tf.int64` vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. `padded_shapes` must be set if any component has an unknown rank. |
-| `padding_values` | (Optional.) A (nested) structure of scalar-shaped `tf.Tensor`, representing the padding values to use for the respective components. None represents that the (nested) structure should be padded with default values. Defaults are `0` for numeric types and the empty string for string types. The `padding_values` should have the same (nested) structure as the input dataset. If `padding_values` is a single element and the input dataset has multiple components, then the same `padding_values` will be used to pad every component of the dataset. If `padding_values` is a scalar, then its value will be broadcasted to match the shape of each component. |
-| `drop_remainder` | (Optional.) A `tf.bool` scalar `tf.Tensor`, representing whether the last batch should be dropped in the case it has fewer than `batch_size` elements; the default behavior is not to drop the smaller batch. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-| Raises | |
-| :--- | :--- |
-| `ValueError` | If a component has an unknown rank, and the `padded_shapes` argument is not set. |
-
-## prefetch
-
-```
-prefetch(
-    buffer_size
-)
-```
-
-
-Creates a `Dataset` that prefetches elements from this dataset.
-
-Most dataset input pipelines should end with a call to `prefetch`. This
-allows later elements to be prepared while the current element is being
-processed. This often improves latency and throughput, at the cost of
-using additional memory to store prefetched elements.
-
-Note: Like other `Dataset` methods, prefetch operates on the
-elements of the input dataset. It has no concept of examples vs. batches.
-`examples.prefetch(2)` will prefetch two elements (2 examples),
-while `examples.batch(20).prefetch(2)` will prefetch 2 elements
-(2 batches, of 20 examples each).
-
-```
->>> dataset = tf.data.Dataset.range(3)
->>> dataset = dataset.prefetch(2)
->>> list(dataset.as_numpy_iterator())
-[0, 1, 2]
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `buffer_size` | A `tf.int64` scalar `tf.Tensor`, representing the maximum number of elements that will be buffered when prefetching. If the value `tf.data.AUTOTUNE` is used, then the buffer size is dynamically tuned. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## range
-
-```
-@staticmethod
-range(
-    *args, **kwargs
-)
-```
-
-
-Creates a `Dataset` of a step-separated range of values.
-
-```
->>> list(Dataset.range(5).as_numpy_iterator())
-[0, 1, 2, 3, 4]
->>> list(Dataset.range(2, 5).as_numpy_iterator())
-[2, 3, 4]
->>> list(Dataset.range(1, 5, 2).as_numpy_iterator())
-[1, 3]
->>> list(Dataset.range(1, 5, -2).as_numpy_iterator())
-[]
->>> list(Dataset.range(5, 1).as_numpy_iterator())
-[]
->>> list(Dataset.range(5, 1, -2).as_numpy_iterator())
-[5, 3]
->>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
-[2, 3, 4]
->>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
-[1.0, 3.0]
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `*args` | Follows the same semantics as Python's built-in `range`: len(args) == 1 -> start = 0, stop = args[0], step = 1; len(args) == 2 -> start = args[0], stop = args[1], step = 1; len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. |
-| `**kwargs` | output_type: The expected dtype. (Optional, default: `tf.int64`.) |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `RangeDataset`. |
-
-| Raises | |
-| :--- | :--- |
-| `ValueError` | if len(args) == 0. |
-
-## reduce
-
-```
-reduce(
-    initial_state, reduce_func
-)
-```
-
-
-Reduces the input dataset to a single element.
-
-The transformation calls `reduce_func` successively on every element of
-the input dataset until the dataset is exhausted, aggregating information in
-its internal state. The `initial_state` argument is used for the initial
-state and the final state is returned as the result.
-
-```
->>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
-5
->>> tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
-10
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `initial_state` | An element representing the initial state of the transformation. |
-| `reduce_func` | A function that maps `(old_state, input_element)` to `new_state`. It must take two arguments and return a new element. The structure of `new_state` must match the structure of `initial_state`. |
-
-| Returns |
-| :--- |
-| A dataset element corresponding to the final state of the transformation. |
-
-## repeat
-
-```
-repeat(
-    count=None
-)
-```
-
-
-Repeats this dataset so each original value is seen `count` times.
-
-```
->>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
->>> dataset = dataset.repeat(3)
->>> list(dataset.as_numpy_iterator())
-[1, 2, 3, 1, 2, 3, 1, 2, 3]
-```
-
-Note: If this dataset is a function of global state (e.g. a random number
-generator), then different repetitions may produce different elements.
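-
-A sketch of that interaction, using `shuffle` as the source of
-randomness (only the element count is deterministic here):
-
-```
->>> ds = tf.data.Dataset.range(3).shuffle(3).repeat(2)
->>> len(list(ds.as_numpy_iterator()))
-6
-```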
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `count` | (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number of times the dataset should be repeated. The default behavior (if `count` is `None` or `-1`) is for the dataset to be repeated indefinitely. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## shard
-
-```
-shard(
-    num_shards, index
-)
-```
-
-
-Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
-
-`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will
-contain all elements of A whose index mod n = i.
-
-```
->>> A = tf.data.Dataset.range(10)
->>> B = A.shard(num_shards=3, index=0)
->>> list(B.as_numpy_iterator())
-[0, 3, 6, 9]
->>> C = A.shard(num_shards=3, index=1)
->>> list(C.as_numpy_iterator())
-[1, 4, 7]
->>> D = A.shard(num_shards=3, index=2)
->>> list(D.as_numpy_iterator())
-[2, 5, 8]
-```
-
-This dataset operator is very useful when running distributed training, as
-it allows each worker to read a unique subset.
-
-When reading a single input file, you can shard elements as follows:
-
-```python
-d = tf.data.TFRecordDataset(input_file)
-d = d.shard(num_workers, worker_index)
-d = d.repeat(num_epochs)
-d = d.shuffle(shuffle_buffer_size)
-d = d.map(parser_fn, num_parallel_calls=num_map_threads)
-```
-
-#### Important caveats:
-
-
-
-- Be sure to shard before you use any randomizing operator (such as
- shuffle).
-- Generally it is best if the shard operator is used early in the dataset
- pipeline. For example, when reading from a set of TFRecord files, shard
- before converting the dataset to input samples. This avoids reading every
- file on every worker. The following is an example of an efficient
- sharding strategy within a complete pipeline:
-
-```python
-d = Dataset.list_files(pattern)
-d = d.shard(num_workers, worker_index)
-d = d.repeat(num_epochs)
-d = d.shuffle(shuffle_buffer_size)
-d = d.interleave(tf.data.TFRecordDataset,
- cycle_length=num_readers, block_length=1)
-d = d.map(parser_fn, num_parallel_calls=num_map_threads)
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `num_shards` | A `tf.int64` scalar `tf.Tensor`, representing the number of shards operating in parallel. |
-| `index` | A `tf.int64` scalar `tf.Tensor`, representing the worker index. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-| Raises | |
-| :--- | :--- |
-| `InvalidArgumentError` | if `num_shards` or `index` are illegal values. Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. feeding a placeholder tensor bypasses the early checking, and will instead result in an error during a `session.run` call.) |
-
-## shuffle
-
-```
-shuffle(
-    buffer_size, seed=None, reshuffle_each_iteration=None
-)
-```
-
-
-Randomly shuffles the elements of this dataset.
-
-This dataset fills a buffer with `buffer_size` elements, then randomly
-samples elements from this buffer, replacing the selected elements with new
-elements. For perfect shuffling, a buffer size greater than or equal to the
-full size of the dataset is required.
-
-For instance, if your dataset contains 10,000 elements but `buffer_size` is
-set to 1,000, then `shuffle` will initially select a random element from
-only the first 1,000 elements in the buffer. Once an element is selected,
-its space in the buffer is replaced by the next (i.e. the 1,001st) element,
-maintaining the 1,000 element buffer.
-
-`reshuffle_each_iteration` controls whether the shuffle order should be
-different for each epoch. In TF 1.X, the idiomatic way to create epochs
-was through the `repeat` transformation:
-
-```python
-dataset = tf.data.Dataset.range(3)
-dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
-dataset = dataset.repeat(2)
-# [1, 0, 2, 1, 2, 0]
-
-dataset = tf.data.Dataset.range(3)
-dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
-dataset = dataset.repeat(2)
-# [1, 0, 2, 1, 0, 2]
-```
-
-In TF 2.0, `tf.data.Dataset` objects are Python iterables which makes it
-possible to also create epochs through Python iteration:
-
-```python
-dataset = tf.data.Dataset.range(3)
-dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
-list(dataset.as_numpy_iterator())
-# [1, 0, 2]
-list(dataset.as_numpy_iterator())
-# [1, 2, 0]
-```
-
-```python
-dataset = tf.data.Dataset.range(3)
-dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
-list(dataset.as_numpy_iterator())
-# [1, 0, 2]
-list(dataset.as_numpy_iterator())
-# [1, 0, 2]
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `buffer_size` | A `tf.int64` scalar `tf.Tensor`, representing the number of elements from this dataset from which the new dataset will sample. |
-| `seed` | (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random seed that will be used to create the distribution. See `tf.random.set_seed` for behavior. |
-| `reshuffle_each_iteration` | (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to `True`.) |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## skip
-
-```
-skip(
-    count
-)
-```
-
-
-Creates a `Dataset` that skips `count` elements from this dataset.
-
-```
->>> dataset = tf.data.Dataset.range(10)
->>> dataset = dataset.skip(7)
->>> list(dataset.as_numpy_iterator())
-[7, 8, 9]
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `count` | A `tf.int64` scalar `tf.Tensor`, representing the number of elements of this dataset that should be skipped to form the new dataset. If `count` is greater than the size of this dataset, the new dataset will contain no elements. If `count` is -1, skips the entire dataset. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## take
-
-```
-take(
-    count
-)
-```
-
-
-Creates a `Dataset` with at most `count` elements from this dataset.
-
-```
->>> dataset = tf.data.Dataset.range(10)
->>> dataset = dataset.take(3)
->>> list(dataset.as_numpy_iterator())
-[0, 1, 2]
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `count` | A `tf.int64` scalar `tf.Tensor`, representing the number of elements of this dataset that should be taken to form the new dataset. If `count` is -1, or if `count` is greater than the size of this dataset, the new dataset will contain all elements of this dataset. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## unbatch
-
-```
-unbatch()
-```
-
-
-Splits elements of a dataset into multiple elements.
-
-For example, if elements of the dataset are shaped `[B, a0, a1, ...]`,
-where `B` may vary for each input element, then for each element in the
-dataset, the unbatched dataset will contain `B` consecutive elements
-of shape `[a0, a1, ...]`.
-
-```
->>> elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
->>> dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
->>> dataset = dataset.unbatch()
->>> list(dataset.as_numpy_iterator())
-[1, 2, 3, 1, 2, 1, 2, 3, 4]
-```
-
-Note: `unbatch` requires a data copy to slice up the batched tensor into
-smaller, unbatched tensors. When optimizing performance, try to avoid
-unnecessary usage of `unbatch`.
-
-
-
-
-| Returns |
-| :--- |
-| A `Dataset`. |
-
-## window
-
-```
-window(
-    size, shift=None, stride=1, drop_remainder=False
-)
-```
-
-
-Combines (nests of) input elements into a dataset of (nests of) windows.
-
-A "window" is a finite dataset of flat elements of size `size` (or possibly
-fewer if there are not enough input elements to fill the window and
-`drop_remainder` evaluates to `False`).
-
-The `shift` argument determines the number of input elements by which the
-window moves on each iteration. If windows and elements are both numbered
-starting at 0, the first element in window `k` will be element `k * shift`
-of the input dataset. In particular, the first element of the first window
-will always be the first element of the input dataset.
-
-The `stride` argument determines the stride between input elements
-within a window, and the `shift` argument determines the shift between
-consecutive windows.
-
-#### For example:
-
-
-
-```
->>> dataset = tf.data.Dataset.range(7).window(2)
->>> for window in dataset:
-... print(list(window.as_numpy_iterator()))
-[0, 1]
-[2, 3]
-[4, 5]
-[6]
->>> dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
->>> for window in dataset:
-... print(list(window.as_numpy_iterator()))
-[0, 1, 2]
-[2, 3, 4]
-[4, 5, 6]
->>> dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
->>> for window in dataset:
-... print(list(window.as_numpy_iterator()))
-[0, 2, 4]
-[1, 3, 5]
-[2, 4, 6]
-```
-
-Note that when the `window` transformation is applied to a dataset of
-nested elements, it produces a dataset of nested windows.
-
-```
->>> nested = ([1, 2, 3, 4], [5, 6, 7, 8])
->>> dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
->>> for window in dataset:
-... def to_numpy(ds):
-... return list(ds.as_numpy_iterator())
-... print(tuple(to_numpy(component) for component in window))
-([1, 2], [5, 6])
-([3, 4], [7, 8])
-```
-
-```
->>> dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
->>> dataset = dataset.window(2)
->>> for window in dataset:
-... def to_numpy(ds):
-... return list(ds.as_numpy_iterator())
-... print({'a': to_numpy(window['a'])})
-{'a': [1, 2]}
-{'a': [3, 4]}
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `size` | A `tf.int64` scalar `tf.Tensor`, representing the number of elements of the input dataset to combine into a window. Must be positive. |
-| `shift` | (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number of input elements by which the window moves in each iteration. Defaults to `size`. Must be positive. |
-| `stride` | (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". |
-| `drop_remainder` | (Optional.) A `tf.bool` scalar `tf.Tensor`, representing whether the last windows should be dropped if their size is smaller than `size`. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset` of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. |
-
-## with_options
-
-```
-with_options(
-    options
-)
-```
-
-
-Returns a new `tf.data.Dataset` with the given options set.
-
-The options are "global" in the sense that they apply to the entire
-dataset. If options are set multiple times, they are merged as long as
-the same option is not set to different non-default values.
-
-```
->>> ds = tf.data.Dataset.range(5)
->>> ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
-... cycle_length=3,
-... num_parallel_calls=3)
->>> options = tf.data.Options()
->>> # This will make the interleave order non-deterministic.
->>> options.experimental_deterministic = False
->>> ds = ds.with_options(options)
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `options` | A `tf.data.Options` that identifies the options to use. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset` with the given options. |
-
-| Raises | |
-| :--- | :--- |
-| `ValueError` | when an option is set more than once to a non-default value. |
-
-## zip
-
-```
-@staticmethod
-zip(
-    datasets
-)
-```
-
-
-Creates a `Dataset` by zipping together the given datasets.
-
-This method has similar semantics to the built-in `zip()` function
-in Python, with the main difference being that the `datasets`
-argument can be a (nested) structure of `Dataset` objects. The supported
-nesting mechanisms are documented
-[here](https://www.tensorflow.org/guide/data#dataset_structure).
-
-```
->>> # The nested structure of the `datasets` argument determines the
->>> # structure of elements in the resulting dataset.
->>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
->>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
->>> ds = tf.data.Dataset.zip((a, b))
->>> list(ds.as_numpy_iterator())
-[(1, 4), (2, 5), (3, 6)]
->>> ds = tf.data.Dataset.zip((b, a))
->>> list(ds.as_numpy_iterator())
-[(4, 1), (5, 2), (6, 3)]
->>>
->>> # The `datasets` argument may contain an arbitrary number of datasets.
->>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
-... # [9, 10],
-... # [11, 12] ]
->>> ds = tf.data.Dataset.zip((a, b, c))
->>> for element in ds.as_numpy_iterator():
-... print(element)
-(1, 4, array([7, 8]))
-(2, 5, array([ 9, 10]))
-(3, 6, array([11, 12]))
->>>
->>> # The number of elements in the resulting dataset is the same as
->>> # the size of the smallest dataset in `datasets`.
->>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
->>> ds = tf.data.Dataset.zip((a, d))
->>> list(ds.as_numpy_iterator())
-[(1, 13), (2, 14)]
-```
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `datasets` | A (nested) structure of datasets. |
-
-| Returns | |
-| :--- | :--- |
-| `Dataset` | A `Dataset`. |
-
-## __bool__
-
-```
-__bool__()
-```
-
-## __iter__
-
-```
-__iter__()
-```
-
-
-Creates an iterator for elements of this dataset.
-
-The returned iterator implements the Python Iterator protocol.
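-
-A minimal sketch:
-
-```
->>> it = iter(tf.data.Dataset.range(3))
->>> next(it).numpy()
-0
->>> next(it).numpy()
-1
-```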
-
-
-
-
-| Returns |
-| :--- |
-| A `tf.data.Iterator` for the elements of this dataset. |
-
-| Raises | |
-| :--- | :--- |
-| `RuntimeError` | If not inside of tf.function and not executing eagerly. |
-
-## __len__
-
-```
-__len__()
-```
-
-
-Returns the length of the dataset if it is known and finite.
-
-This method requires that you are running in eager mode, and that the
-length of the dataset is known and non-infinite. When the length may be
-unknown or infinite, or if you are running in graph mode, use
-`tf.data.Dataset.cardinality` instead.
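-
-A minimal sketch:
-
-```
->>> len(tf.data.Dataset.range(5))
-5
->>> len(tf.data.Dataset.range(5).batch(2))
-3
-```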
-
-
-
-
-| Returns |
-| :--- |
-| An integer representing the length of the dataset. |
-
-| Raises | |
-| :--- | :--- |
-| `RuntimeError` | If the dataset length is unknown or infinite, or if eager execution is not enabled. |
-
-## __nonzero__
-
-```
-__nonzero__()
-```
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/parquet/calculate_parquet_values.md b/g3doc/api_docs/python/expression_impl/parquet/calculate_parquet_values.md
deleted file mode 100644
index 393b3b9..0000000
--- a/g3doc/api_docs/python/expression_impl/parquet/calculate_parquet_values.md
+++ /dev/null
@@ -1,96 +0,0 @@
-description: Calculates expressions and returns a parquet dataset.
-
-
-
-
-
-
-# expression_impl.parquet.calculate_parquet_values
-
-
-
-
-
-
-
-Calculates expressions and returns a parquet dataset.
-
-
-```
-expression_impl.parquet.calculate_parquet_values(
-    expressions: List[expression.Expression],
-    root_exp: placeholder._PlaceholderRootExpression,
-    filenames: List[str],
-    batch_size: int,
-    options: Optional[calculate_options.Options] = None
-)
-```
-
-
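-A usage sketch, assuming a parquet file and the sibling helpers
-documented in this module (the file name and field path are
-illustrative, and `get_descendant_or_error` is one way to select the
-expressions to calculate):
-
-```
-filenames = ["/path/to/data.parquet"]  # illustrative path
-root = expression_impl.parquet.create_expression_from_parquet_file(filenames)
-exprs = [root.get_descendant_or_error(path.Path(["DocId"]))]  # illustrative field
-ds = expression_impl.parquet.calculate_parquet_values(
-    exprs, root, filenames, batch_size=16)
-```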
-
-
-
-
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `expressions` | A list of expressions to calculate. |
-| `root_exp` | The root placeholder expression to use as the feed dict. |
-| `filenames` | A list of parquet files. |
-| `batch_size` | The number of messages to batch. |
-| `options` | calculate options. |
-
-| Returns |
-| :--- |
-| A parquet dataset. |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/parquet/create_expression_from_parquet_file.md b/g3doc/api_docs/python/expression_impl/parquet/create_expression_from_parquet_file.md
deleted file mode 100644
index d8f3f9e..0000000
--- a/g3doc/api_docs/python/expression_impl/parquet/create_expression_from_parquet_file.md
+++ /dev/null
@@ -1,65 +0,0 @@
-description: Creates a placeholder expression from a parquet file.
-
-
-
-
-
-
-# expression_impl.parquet.create_expression_from_parquet_file
-
-
-
-
-
-
-
-Creates a placeholder expression from a parquet file.
-
-
-```
-expression_impl.parquet.create_expression_from_parquet_file(
-    filenames: List[str]
-) -> placeholder._PlaceholderRootExpression
-```
-
-
-
-
-
-
-
-
-
-
-| Args | |
-| :--- | :--- |
-| `filenames` | A list of parquet files. |
-
-| Returns |
-| :--- |
-| A PlaceholderRootExpression that should be used as the root of an expression graph. |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/placeholder.md b/g3doc/api_docs/python/expression_impl/placeholder.md
deleted file mode 100644
index a39e524..0000000
--- a/g3doc/api_docs/python/expression_impl/placeholder.md
+++ /dev/null
@@ -1,50 +0,0 @@
-description: Placeholder expression.
-
-
-
-
-
-
-# Module: expression_impl.placeholder
-
-
-
-
-
-
-
-Placeholder expression.
-
-
-A placeholder expression represents prensor nodes; however, a prensor is not
-needed until calculate is called. This allows the user to apply expression
-queries to a placeholder expression before having an actual prensor object.
-When calculate is called on a placeholder expression (or a descendant of a
-placeholder expression), the feed_dict will need to be passed in. calculate
-will then bind the prensor to the appropriate placeholder expression.
-
-#### Sample usage:
-
-
-
-```
-placeholder_exp = placeholder.create_expression_from_schema(schema)
-new_exp = expression_queries(placeholder_exp, ..)
-result = calculate.calculate_values([new_exp],
- feed_dict={placeholder_exp: pren})
-# placeholder_exp requires a feed_dict to be passed in when calculating
-```
-
-## Functions
-
-[`create_expression_from_schema(...)`](../expression_impl/placeholder/create_expression_from_schema.md): Creates a placeholder expression from a parquet schema.
-
-[`get_placeholder_paths_from_graph(...)`](../expression_impl/placeholder/get_placeholder_paths_from_graph.md): Gets all placeholder paths from an expression graph.
-
diff --git a/g3doc/api_docs/python/expression_impl/placeholder/create_expression_from_schema.md b/g3doc/api_docs/python/expression_impl/placeholder/create_expression_from_schema.md
deleted file mode 100644
index bf03a42..0000000
--- a/g3doc/api_docs/python/expression_impl/placeholder/create_expression_from_schema.md
+++ /dev/null
@@ -1,66 +0,0 @@
-description: Creates a placeholder expression from a parquet schema.
-
-
-
-
-
-
-# expression_impl.placeholder.create_expression_from_schema
-
-
-
-
-
-
-
-Creates a placeholder expression from a parquet schema.
-
-
-expression_impl.placeholder.create_expression_from_schema(
- schema: expression_impl.map_prensor_to_prensor.Schema
-) -> "_PlaceholderRootExpression"
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`schema`
- |
-
-The schema that describes the prensor tree that this placeholder
-represents.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A PlaceholderRootExpression that should be used as the root of an expression
-graph.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/placeholder/get_placeholder_paths_from_graph.md b/g3doc/api_docs/python/expression_impl/placeholder/get_placeholder_paths_from_graph.md
deleted file mode 100644
index a7f008e..0000000
--- a/g3doc/api_docs/python/expression_impl/placeholder/get_placeholder_paths_from_graph.md
+++ /dev/null
@@ -1,66 +0,0 @@
-description: Gets all placeholder paths from an expression graph.
-
-
-
-
-
-
-# expression_impl.placeholder.get_placeholder_paths_from_graph
-
-
-
-
-
-
-
-Gets all placeholder paths from an expression graph.
-
-
-expression_impl.placeholder.get_placeholder_paths_from_graph(
- graph: calculate.ExpressionGraph
-) -> List[path.Path]
-
-
-
-
-
-
-This finds all leaf placeholder expressions in an expression graph, and gets
-the path of these expressions.
-
-
-
-
-Args |
-
-
-|
-`graph`
- |
-
-expression graph
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-a list of paths of placeholder expressions
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/project.md b/g3doc/api_docs/python/expression_impl/project.md
deleted file mode 100644
index 8fe4c1a..0000000
--- a/g3doc/api_docs/python/expression_impl/project.md
+++ /dev/null
@@ -1,44 +0,0 @@
-description: project selects a subtree of an expression.
-
-
-
-
-
-
-# Module: expression_impl.project
-
-
-
-
-
-
-
-project selects a subtree of an expression.
-
-
-project is often used right before calculating the value.
-
-#### Example:
-
-
-
-```
-expr = ...
-new_expr = project.project(expr, [path.Path(["foo","bar"]),
- path.Path(["x", "y"])])
-[prensor_result] = calculate.calculate_prensors([new_expr])
-```
-
-prensor_result now has two paths, "foo.bar" and "x.y".
-
-## Functions
-
-[`project(...)`](../expression_impl/project/project.md): Select a subtree.
-
diff --git a/g3doc/api_docs/python/expression_impl/project/project.md b/g3doc/api_docs/python/expression_impl/project/project.md
deleted file mode 100644
index f8ac3e2..0000000
--- a/g3doc/api_docs/python/expression_impl/project/project.md
+++ /dev/null
@@ -1,75 +0,0 @@
-description: Select a subtree.
-
-
-
-
-
-
-# expression_impl.project.project
-
-
-
-
-
-
-
-Select a subtree.
-
-
-expression_impl.project.project(
- expr: expression.Expression,
- paths: Sequence[path.Path]
-) -> expression.Expression
-
-
-
-
-
-
-Paths not selected are removed.
-Paths that are selected are "known", such that if calculate_prensors is
-called, they will be in the result.
-
-
-
-
-Args |
-
-
-|
-`expr`
- |
-
-the original expression.
- |
-
-|
-`paths`
- |
-
-the paths to include.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A projected expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/promote.md b/g3doc/api_docs/python/expression_impl/promote.md
deleted file mode 100644
index 7053010..0000000
--- a/g3doc/api_docs/python/expression_impl/promote.md
+++ /dev/null
@@ -1,118 +0,0 @@
-description: Promote an expression to be a child of its grandparent.
-
-
-
-
-
-
-# Module: expression_impl.promote
-
-
-
-
-
-
-
-Promote an expression to be a child of its grandparent.
-
-
-Promote is part of the standard flattening of data, promote_and_broadcast,
-which takes structured data and flattens it. By directly accessing promote,
-one can perform simpler operations.
-
-For example, suppose an expr represents:
-
-```
-+
-|
-+-session* (stars indicate repeated)
- |
- +-event*
- |
- +-val*-int64
-
-session: {
- event: {
- val: 111
- }
- event: {
- val: 121
- val: 122
- }
-}
-
-session: {
- event: {
- val: 10
- val: 7
- }
- event: {
- val: 1
- }
-}
-
-```
-
-```
-promote.promote(expr, path.Path(["session", "event", "val"]), "nval")
-```
-
-produces:
-
-```
-+
-|
-+-session* (stars indicate repeated)
- |
- +-event*
- | |
- | +-val*-int64
- |
- +-nval*-int64
-
-session: {
- event: {
- val: 111
- }
- event: {
- val: 121
- val: 122
- }
- nval: 111
- nval: 121
- nval: 122
-}
-
-session: {
- event: {
- val: 10
- val: 7
- }
- event: {
- val: 1
- }
- nval: 10
- nval: 7
- nval: 1
-}
-```
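-
-A minimal end-to-end sketch (the import paths are assumptions based on this
-documentation; `expr` is an expression with the structure above):
-
-```
-from struct2tensor import calculate, path
-from struct2tensor.expression_impl import project, promote
-
-# Promote session.event.val to a new repeated field session.nval.
-new_expr = promote.promote(
-    expr, path.Path(["session", "event", "val"]), "nval")
-
-# Project the promoted path and materialize it as a prensor.
-projected = project.project(new_expr, [path.Path(["session", "nval"])])
-[prensor_result] = calculate.calculate_prensors([projected])
-```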
-
-## Classes
-
-[`class PromoteChildExpression`](../expression_impl/promote/PromoteChildExpression.md): The root of the promoted subtree.
-
-[`class PromoteExpression`](../expression_impl/promote/PromoteExpression.md): A promoted leaf.
-
-## Functions
-
-[`promote(...)`](../expression_impl/promote/promote.md): Promote a path to be a child of its grandparent, and give it a name.
-
-[`promote_anonymous(...)`](../expression_impl/promote/promote_anonymous.md): Promote a path to be a new anonymous child of its grandparent.
-
diff --git a/g3doc/api_docs/python/expression_impl/promote/PromoteChildExpression.md b/g3doc/api_docs/python/expression_impl/promote/PromoteChildExpression.md
deleted file mode 100644
index 540f290..0000000
--- a/g3doc/api_docs/python/expression_impl/promote/PromoteChildExpression.md
+++ /dev/null
@@ -1,1044 +0,0 @@
-description: The root of the promoted subtree.
-
-
-# expression_impl.promote.PromoteChildExpression
-
-
-
-
-
-
-
-The root of the promoted subtree.
-
-
-expression_impl.promote.PromoteChildExpression(
- origin: expression.Expression,
- origin_parent: expression.Expression
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`is_repeated`
- |
-
-if the expression is repeated.
- |
-
-|
-`my_type`
- |
-
-the DType of a field, or None for an internal node.
- |
-
-|
-`schema_feature`
- |
-
-the local schema (StructDomain information should not be
-present).
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_leaf`
- |
-
-True iff the node tensor is a LeafNodeTensor.
- |
-
-|
-`is_repeated`
- |
-
-True iff the same parent value can have multiple children values.
- |
-
-|
-`schema_feature`
- |
-
-Return the schema of the field.
- |
-
-|
-`type`
- |
-
-dtype of the expression, or None if not a leaf expression.
- |
-
-
-
-
-
-## Methods
-
-apply
-
-
-apply(
- transform: Callable[['Expression'], 'Expression']
-) -> "Expression"
-
-
-
-
-
-apply_schema
-
-
-apply_schema(
- schema: schema_pb2.Schema
-) -> "Expression"
-
-
-
-
-
-broadcast
-
-
-broadcast(
- source_path: CoercableToPath,
- sibling_field: path.Step,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Broadcasts the existing field at source_path to the sibling_field.
-
-
-calculate
-
-View source
-
-
-calculate(
- sources: Sequence[prensor.NodeTensor],
- destinations: Sequence[expression.Expression],
- options: calculate_options.Options,
- side_info: Optional[prensor.Prensor] = None
-) -> prensor.NodeTensor
-
-
-Calculates the node tensor of the expression.
-
-The node tensor must be a function of the properties of the expression
-and the node tensors of the expressions from get_source_expressions().
-
-If is_leaf, then calculate must return a LeafNodeTensor.
-Otherwise, it must return a ChildNodeTensor or RootNodeTensor.
-
-If calculation_is_identity() is true, then this must return sources[0].
-
-Sometimes, for operations such as parsing the proto, calculate will return
-additional information. For example, calculate() for the root of the
-proto expression also parses out the tensors required to calculate the
-tensors of the children. This is why destinations are required.
-
-For a reference implementation, see calculate_value_slowly(...).
-
-
-
-
-| Args |
-
-
-|
-`sources`
- |
-
-The node tensors of the expressions in
-get_source_expressions().
- |
-
-|
-`destinations`
- |
-
-The expressions that will use the output of this method.
- |
-
-|
-`options`
- |
-
-Options for the calculation.
- |
-
-|
-`side_info`
- |
-
-An optional prensor that is used to bind to a placeholder
-expression.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A NodeTensor representing the output of this expression.
- |
-
-
-
-
-
-
-calculation_equal
-
-View source
-
-
-calculation_equal(
- expr: expression.Expression
-) -> bool
-
-
-self.calculate is equal to another expression.calculate.
-
-Given the same source node tensors, self.calculate(...) and
-expression.calculate(...) will have the same result.
-
-Note that this does not check that the source expressions of the two
-expressions are the same. Therefore, two operations can have the same
-calculation, but not the same output, because their sources are different.
-For example, if a.calculation_is_identity() is True and
-b.calculation_is_identity() is True, then a.calculation_equal(b) is True.
-However, unless a and b have the same source, the expressions themselves are
-not equal.
-
-
-
-
-| Args |
-
-
-|
-`expr`
- |
-
-The expression to compare to.
- |
-
-
-
-
-
-calculation_is_identity
-
-View source
-
-
-calculation_is_identity() -> bool
-
-
-True iff self.calculate is the identity.
-
-There is exactly one source, and the output of self.calculate(...) is the
-node tensor of this source.
-
-cogroup_by_index
-
-
-cogroup_by_index(
- source_path: CoercableToPath,
- left_name: path.Step,
- right_name: path.Step,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a cogroup of left_name and right_name at new_field_name.
-
-
-create_has_field
-
-
-create_has_field(
- source_path: CoercableToPath,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a field that is the presence of the source path.
-
-
-create_proto_index
-
-
-create_proto_index(
- field_name: path.Step
-) -> "Expression"
-
-
-Creates a proto index field as a direct child of the current root.
-
-The proto index maps each root element to the original batch index.
-For example: [0, 2] means the first element came from the first proto
-in the original input tensor and the second element came from the third
-proto. The created field is always "dense" -- it has the same valency as
-the current root.
-
-
-
-
-| Args |
-
-
-|
-`field_name`
- |
-
-the name of the field to be created.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-create_size_field
-
-
-create_size_field(
- source_path: CoercableToPath,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a field that is the size of the source path.
-
-
-get_child
-
-
-get_child(
- field_name: path.Step
-) -> Optional['Expression']
-
-
-Gets a named child.
-
-
-get_child_or_error
-
-
-get_child_or_error(
- field_name: path.Step
-) -> "Expression"
-
-
-Gets a named child.
-
-
-get_descendant
-
-
-get_descendant(
- p: path.Path
-) -> Optional['Expression']
-
-
-Finds the descendant at the path.
-
-
-get_descendant_or_error
-
-
-get_descendant_or_error(
- p: path.Path
-) -> "Expression"
-
-
-Finds the descendant at the path.
-
-
-get_known_children
-
-
-get_known_children() -> Mapping[path.Step, 'Expression']
-
-
-
-
-
-get_known_descendants
-
-
-get_known_descendants() -> Mapping[path.Path, 'Expression']
-
-
-Gets a mapping from known paths to subexpressions.
-
-The difference between this and get_descendants in Prensor is that
-all paths in a Prensor are realized, thus all known. But an Expression's
-descendants might not all be known at the point this method is called,
-because an expression may have an infinite number of children.
-
-
-
-
-| Returns |
-
-|
-A mapping from paths (relative to the root of the subexpression) to
-expressions.
- |
-
-
-
-
-
-
-get_paths_with_schema
-
-
-get_paths_with_schema() -> List[path.Path]
-
-
-Extract only paths that contain schema information.
-
-
-get_schema
-
-
-get_schema(
- create_schema_features=True
-) -> schema_pb2.Schema
-
-
-Returns a schema for the entire tree.
-
-
-
-
-
-| Args |
-
-
-|
-`create_schema_features`
- |
-
-If True, schema features are added for all
-children and a schema entry is created if not available on the child. If
-False, features are left off of the returned schema if there is no
-schema_feature on the child.
- |
-
-
-
-
-
-get_source_expressions
-
-View source
-
-
-get_source_expressions() -> Sequence[expression.Expression]
-
-
-Gets the sources of this expression.
-
-The node tensors of the source expressions must be sufficient to
-calculate the node tensor of this expression
-(see calculate and calculate_value_slowly).
-
-
-
-
-| Returns |
-
-|
-The sources of this expression.
- |
-
-
-
-
-
-
-known_field_names
-
-View source
-
-
-known_field_names() -> FrozenSet[path.Step]
-
-
-Returns known field names of the expression.
-
-
-Known field names of a parsed proto correspond to the fields declared in
-the message. Examples of "unknown" fields are extensions and explicit casts
-in an any field. The only way to know if an unknown field "(foo.bar)" is
-present in an expression expr is to call (expr["(foo.bar)"] is not None).
-
-Notice that simply accessing a field does not make it "known". However,
-setting a field (or setting a descendant of a field) will make it known.
-
-project(...) returns an expression where the known field names are the only
-field names. In general, if you want to depend upon known_field_names
-(e.g., if you want to compile an expression), then the best approach is to
-project() the expression first.
-
-
-
-
-| Returns |
-
-|
-An immutable set of field names.
- |
-
-
-
-
-
-
-map_field_values
-
-
-map_field_values(
- source_path: CoercableToPath,
- operator: Callable[[tf.Tensor], tf.Tensor],
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Map a primitive field to create a new primitive field.
-
-Note: the `dtype` argument was added after the v1 API.
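-
-A hedged sketch (assuming `expr` has an int64 leaf at "foo.bar"; `path` is
-the struct2tensor path module):
-
-```
-import tensorflow as tf
-
-# Create foo.bar_doubled, an int64 sibling holding 2 * foo.bar.
-new_root = expr.map_field_values(
-    path.Path(["foo", "bar"]), lambda v: v * 2, tf.int64, "bar_doubled")
-```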
-
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the origin path.
- |
-
-|
-`operator`
- |
-
-an element-wise operator that takes a 1-dimensional vector.
- |
-
-|
-`dtype`
- |
-
-the type of the output.
- |
-
-|
-`new_field_name`
- |
-
-the name of a new sibling of source_path.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-the resulting root expression.
- |
-
-
-
-
-
-
-map_ragged_tensors
-
-
-map_ragged_tensors(
- parent_path: CoercableToPath,
- source_fields: Sequence[path.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you, to some degree, to reshape
-the field. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-map_sparse_tensors
-
-
-map_sparse_tensors(
- parent_path: CoercableToPath,
- source_fields: Sequence[path.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you, to some degree, to reshape
-the field. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-project
-
-
-project(
- path_list: Sequence[CoercableToPath]
-) -> "Expression"
-
-
-Constrains the paths to those listed.
-
-
-
-
-
-promote(
- source_path: CoercableToPath,
- new_field_name: path.Step
-)
-
-
-Promotes source_path to be a field new_field_name in its grandparent.
-
-
-
-
-
-promote_and_broadcast(
- path_dictionary: Mapping[path.Step, CoercableToPath],
- dest_path_parent: CoercableToPath
-) -> "Expression"
-
-
-
-
-
-reroot
-
-
-reroot(
- new_root: CoercableToPath
-) -> "Expression"
-
-
-Returns a new list of protocol buffers available at new_root.
-
-
-schema_string
-
-
-schema_string(
- limit: Optional[int] = None
-) -> str
-
-
-Returns a schema for the expression.
-
-E.g.
-
-```
-repeated root:
-  optional int32 foo
-  optional bar:
-    optional string baz
-    optional int64 bak
-```
-
-Note that unknown fields and subexpressions are not displayed.
-
-
-
-
-| Args |
-
-
-|
-`limit`
- |
-
-if present, limit the recursion.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A string, describing (a part of) the schema.
- |
-
-
-
-
-
-
-slice
-
-
-slice(
- source_path: CoercableToPath,
- new_field_name: path.Step,
- begin: Optional[IndexValue] = None,
- end: Optional[IndexValue] = None
-) -> "Expression"
-
-
-Creates a slice copy of source_path at new_field_name.
-
-Note that if begin or end is negative, it is considered relative to
-the size of the array. E.g., slice(..., begin=-1) will get the last
-element of every array.
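-
-For instance (an illustrative sketch, assuming "foo.bar" is a repeated field):
-
-```
-# Copy the last element of every foo.bar array into foo.bar_last.
-new_root = expr.slice(path.Path(["foo", "bar"]), "bar_last", begin=-1)
-```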
-
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the source of the slice.
- |
-
-|
-`new_field_name`
- |
-
-the new field that is generated.
- |
-
-|
-`begin`
- |
-
-the beginning of the slice (inclusive).
- |
-
-|
-`end`
- |
-
-the end of the slice (exclusive).
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-truncate
-
-
-truncate(
- source_path: CoercableToPath,
- limit: Union[int, tf.Tensor],
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a truncated copy of source_path at new_field_name.
-
-
-__eq__
-
-
-__eq__(
- expr: "Expression"
-) -> bool
-
-
-If hash(expr1) == hash(expr2), then expr1 == expr2.
-
-Do not override this method.
-Args:
-  expr: The expression to check equality against.
-
-
-
-
-| Returns |
-
-|
-Boolean of equality of two expressions
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/promote/PromoteExpression.md b/g3doc/api_docs/python/expression_impl/promote/PromoteExpression.md
deleted file mode 100644
index b1f8e32..0000000
--- a/g3doc/api_docs/python/expression_impl/promote/PromoteExpression.md
+++ /dev/null
@@ -1,1041 +0,0 @@
-description: A promoted leaf.
-
-
-# expression_impl.promote.PromoteExpression
-
-
-
-
-
-
-
-A promoted leaf.
-
-
-expression_impl.promote.PromoteExpression(
- origin: expression.Expression,
- origin_parent: expression.Expression
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`is_repeated`
- |
-
-if the expression is repeated.
- |
-
-|
-`my_type`
- |
-
-the DType of the field.
- |
-
-|
-`schema_feature`
- |
-
-schema information about the field.
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_leaf`
- |
-
-True iff the node tensor is a LeafNodeTensor.
- |
-
-|
-`is_repeated`
- |
-
-True iff the same parent value can have multiple children values.
- |
-
-|
-`schema_feature`
- |
-
-Return the schema of the field.
- |
-
-|
-`type`
- |
-
-dtype of the expression, or None if not a leaf expression.
- |
-
-
-
-
-
-## Methods
-
-apply
-
-
-apply(
- transform: Callable[['Expression'], 'Expression']
-) -> "Expression"
-
-
-
-
-
-apply_schema
-
-
-apply_schema(
- schema: schema_pb2.Schema
-) -> "Expression"
-
-
-
-
-
-broadcast
-
-
-broadcast(
- source_path: CoercableToPath,
- sibling_field: path.Step,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Broadcasts the existing field at source_path to the sibling_field.
-
-
-calculate
-
-View source
-
-
-calculate(
- sources: Sequence[prensor.NodeTensor],
- destinations: Sequence[expression.Expression],
- options: calculate_options.Options,
- side_info: Optional[prensor.Prensor] = None
-) -> prensor.NodeTensor
-
-
-Calculates the node tensor of the expression.
-
-The node tensor must be a function of the properties of the expression
-and the node tensors of the expressions from get_source_expressions().
-
-If is_leaf, then calculate must return a LeafNodeTensor.
-Otherwise, it must return a ChildNodeTensor or RootNodeTensor.
-
-If calculation_is_identity() is true, then this must return sources[0].
-
-Sometimes, for operations such as parsing the proto, calculate will return
-additional information. For example, calculate() for the root of the
-proto expression also parses out the tensors required to calculate the
-tensors of the children. This is why destinations are required.
-
-For a reference implementation, see calculate_value_slowly(...).
-
-
-
-
-| Args |
-
-
-|
-`sources`
- |
-
-The node tensors of the expressions in
-get_source_expressions().
- |
-
-|
-`destinations`
- |
-
-The expressions that will use the output of this method.
- |
-
-|
-`options`
- |
-
-Options for the calculation.
- |
-
-|
-`side_info`
- |
-
-An optional prensor that is used to bind to a placeholder
-expression.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A NodeTensor representing the output of this expression.
- |
-
-
-
-
-
-
-calculation_equal
-
-View source
-
-
-calculation_equal(
- expr: expression.Expression
-) -> bool
-
-
-self.calculate is equal to another expression.calculate.
-
-Given the same source node tensors, self.calculate(...) and
-expression.calculate(...) will have the same result.
-
-Note that this does not check that the source expressions of the two
-expressions are the same. Therefore, two operations can have the same
-calculation, but not the same output, because their sources are different.
-For example, if a.calculation_is_identity() is True and
-b.calculation_is_identity() is True, then a.calculation_equal(b) is True.
-However, unless a and b have the same source, the expressions themselves are
-not equal.
-
-
-
-
-| Args |
-
-
-|
-`expr`
- |
-
-The expression to compare to.
- |
-
-
-
-
-
-calculation_is_identity
-
-View source
-
-
-calculation_is_identity() -> bool
-
-
-True iff self.calculate is the identity.
-
-There is exactly one source, and the output of self.calculate(...) is the
-node tensor of this source.
-
-cogroup_by_index
-
-
-cogroup_by_index(
- source_path: CoercableToPath,
- left_name: path.Step,
- right_name: path.Step,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a cogroup of left_name and right_name at new_field_name.
-
-
-create_has_field
-
-
-create_has_field(
- source_path: CoercableToPath,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a field that is the presence of the source path.
-
-
-create_proto_index
-
-
-create_proto_index(
- field_name: path.Step
-) -> "Expression"
-
-
-Creates a proto index field as a direct child of the current root.
-
-The proto index maps each root element to the original batch index.
-For example: [0, 2] means the first element came from the first proto
-in the original input tensor and the second element came from the third
-proto. The created field is always "dense" -- it has the same valency as
-the current root.
-
-
-
-
-| Args |
-
-
-|
-`field_name`
- |
-
-the name of the field to be created.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-create_size_field
-
-
-create_size_field(
- source_path: CoercableToPath,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a field that is the size of the source path.
-
-
-get_child
-
-
-get_child(
- field_name: path.Step
-) -> Optional['Expression']
-
-
-Gets a named child.
-
-
-get_child_or_error
-
-
-get_child_or_error(
- field_name: path.Step
-) -> "Expression"
-
-
-Gets a named child.
-
-
-get_descendant
-
-
-get_descendant(
- p: path.Path
-) -> Optional['Expression']
-
-
-Finds the descendant at the path.
-
-
-get_descendant_or_error
-
-
-get_descendant_or_error(
- p: path.Path
-) -> "Expression"
-
-
-Finds the descendant at the path.
-
-
-get_known_children
-
-
-get_known_children() -> Mapping[path.Step, 'Expression']
-
-
-
-
-
-get_known_descendants
-
-
-get_known_descendants() -> Mapping[path.Path, 'Expression']
-
-
-Gets a mapping from known paths to subexpressions.
-
-The difference between this and get_descendants in Prensor is that
-all paths in a Prensor are realized, thus all known. But an Expression's
-descendants might not all be known at the point this method is called,
-because an expression may have an infinite number of children.
-
-
-
-
-| Returns |
-
-|
-A mapping from paths (relative to the root of the subexpression) to
-expressions.
- |
-
-
-
-
-
-
-get_paths_with_schema
-
-
-get_paths_with_schema() -> List[path.Path]
-
-
-Extract only paths that contain schema information.
-
-
-get_schema
-
-
-get_schema(
- create_schema_features=True
-) -> schema_pb2.Schema
-
-
-Returns a schema for the entire tree.
-
-
-
-
-
-| Args |
-
-
-|
-`create_schema_features`
- |
-
-If True, schema features are added for all
-children and a schema entry is created if not available on the child. If
-False, features are left off of the returned schema if there is no
-schema_feature on the child.
- |
-
-
-
-
-
-get_source_expressions
-
-View source
-
-
-get_source_expressions() -> Sequence[expression.Expression]
-
-
-Gets the sources of this expression.
-
-The node tensors of the source expressions must be sufficient to
-calculate the node tensor of this expression
-(see calculate and calculate_value_slowly).
-
-
-
-
-| Returns |
-
-|
-The sources of this expression.
- |
-
-
-
-
-
-
-known_field_names
-
-
-known_field_names() -> FrozenSet[path.Step]
-
-
-Returns known field names of the expression.
-
-
-Known field names of a parsed proto correspond to the fields declared in
-the message. Examples of "unknown" fields are extensions and explicit casts
-in an any field. The only way to know if an unknown field "(foo.bar)" is
-present in an expression expr is to call (expr["(foo.bar)"] is not None).
-
-Notice that simply accessing a field does not make it "known". However,
-setting a field (or setting a descendant of a field) will make it known.
-
-project(...) returns an expression where the known field names are the only
-field names. In general, if you want to depend upon known_field_names
-(e.g., if you want to compile an expression), then the best approach is to
-project() the expression first.
-
-
-
-
-| Returns |
-
-|
-An immutable set of field names.
- |
-
-
-
-
-
-
-map_field_values
-
-
-map_field_values(
- source_path: CoercableToPath,
- operator: Callable[[tf.Tensor], tf.Tensor],
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Map a primitive field to create a new primitive field.
-
-Note: the `dtype` argument was added after the v1 API.
-
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the origin path.
- |
-
-|
-`operator`
- |
-
-an element-wise operator that takes a 1-dimensional vector.
- |
-
-|
-`dtype`
- |
-
-the type of the output.
- |
-
-|
-`new_field_name`
- |
-
-the name of a new sibling of source_path.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-the resulting root expression.
- |
-
-
-
-
-
-
-map_ragged_tensors
-
-
-map_ragged_tensors(
- parent_path: CoercableToPath,
- source_fields: Sequence[path.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you, to some degree, to reshape
-the field. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-map_sparse_tensors
-
-
-map_sparse_tensors(
- parent_path: CoercableToPath,
- source_fields: Sequence[path.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you, to some degree, to reshape
-the field. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-project
-
-
-project(
- path_list: Sequence[CoercableToPath]
-) -> "Expression"
-
-
-Constrains the paths to those listed.
-
-
-
-
-
-promote(
- source_path: CoercableToPath,
- new_field_name: path.Step
-)
-
-
-Promotes source_path to be a field new_field_name in its grandparent.
-
-
-
-
-
-promote_and_broadcast(
- path_dictionary: Mapping[path.Step, CoercableToPath],
- dest_path_parent: CoercableToPath
-) -> "Expression"
-
-
-
-
-
-reroot
-
-
-reroot(
- new_root: CoercableToPath
-) -> "Expression"
-
-
-Returns a new list of protocol buffers available at new_root.
-
-
-schema_string
-
-
-schema_string(
- limit: Optional[int] = None
-) -> str
-
-
-Returns a schema for the expression.
-
-E.g.
-
-```
-repeated root:
-  optional int32 foo
-  optional bar:
-    optional string baz
-    optional int64 bak
-```
-
-Note that unknown fields and subexpressions are not displayed.
-
-
-
-
-| Args |
-
-
-|
-`limit`
- |
-
-if present, limit the recursion.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A string, describing (a part of) the schema.
- |
-
-
-
-
-
-
-slice
-
-
-slice(
- source_path: CoercableToPath,
- new_field_name: path.Step,
- begin: Optional[IndexValue] = None,
- end: Optional[IndexValue] = None
-) -> "Expression"
-
-
-Creates a slice copy of source_path at new_field_name.
-
-Note that if begin or end is negative, it is considered relative to
-the size of the array. E.g., slice(..., begin=-1) will get the last
-element of every array.
-
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the source of the slice.
- |
-
-|
-`new_field_name`
- |
-
-the new field that is generated.
- |
-
-|
-`begin`
- |
-
-the beginning of the slice (inclusive).
- |
-
-|
-`end`
- |
-
-the end of the slice (exclusive).
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-truncate
-
-
-truncate(
- source_path: CoercableToPath,
- limit: Union[int, tf.Tensor],
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a truncated copy of source_path at new_field_name.
-
-
-__eq__
-
-
-__eq__(
- expr: "Expression"
-) -> bool
-
-
-If hash(expr1) == hash(expr2), then expr1 == expr2.
-
-Do not override this method.
-Args:
-  expr: The expression to check equality against.
-
-
-
-
-| Returns |
-
-|
-Boolean of equality of two expressions
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/promote/promote.md b/g3doc/api_docs/python/expression_impl/promote/promote.md
deleted file mode 100644
index 833064c..0000000
--- a/g3doc/api_docs/python/expression_impl/promote/promote.md
+++ /dev/null
@@ -1,35 +0,0 @@
-description: Promote a path to be a child of its grandparent, and give it a name.
-
-
-
-
-
-
-# expression_impl.promote.promote
-
-
-
-
-
-
-
-Promote a path to be a child of its grandparent, and give it a name.
-
-
-expression_impl.promote.promote(
- root: expression.Expression,
- p: path.Path,
- new_field_name: path.Step
-) -> expression.Expression
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/promote/promote_anonymous.md b/g3doc/api_docs/python/expression_impl/promote/promote_anonymous.md
deleted file mode 100644
index 06c3490..0000000
--- a/g3doc/api_docs/python/expression_impl/promote/promote_anonymous.md
+++ /dev/null
@@ -1,34 +0,0 @@
-description: Promote a path to be a new anonymous child of its grandparent.
-
-
-
-
-
-
-# expression_impl.promote.promote_anonymous
-
-
-
-
-
-
-
-Promote a path to be a new anonymous child of its grandparent.
-
-
-expression_impl.promote.promote_anonymous(
- root: expression.Expression,
- p: path.Path
-) -> Tuple[expression.Expression, path.Path]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/promote_and_broadcast.md b/g3doc/api_docs/python/expression_impl/promote_and_broadcast.md
deleted file mode 100644
index ec7c211..0000000
--- a/g3doc/api_docs/python/expression_impl/promote_and_broadcast.md
+++ /dev/null
@@ -1,126 +0,0 @@
-description: promote_and_broadcast a set of nodes.
-
-
-
-
-
-
-# Module: expression_impl.promote_and_broadcast
-
-
-
-
-
-
-
-promote_and_broadcast a set of nodes.
-
-
-For example, suppose an expr represents:
-
-```
-+
-|
-+-session* (stars indicate repeated)
- |
- +-event*
- | |
- | +-val*-int64
- |
- +-user_info? (question mark indicates optional)
- |
- +-age? int64
-
-session: {
- event: {
- val: 1
- }
- event: {
- val: 4
- val: 5
- }
- user_info: {
- age: 25
- }
-}
-
-session: {
- event: {
- val: 7
- }
- event: {
- val: 8
- val: 9
- }
- user_info: {
- age: 20
- }
-}
-```
-
-```
-promote_and_broadcast.promote_and_broadcast(
-    expr, {"nage": path.Path(["user_info", "age"])}, path.Path(["event"]))
-```
-
-creates:
-
-```
-+
-|
-+-session* (stars indicate repeated)
- |
- +-event*
- | |
- | +-val*-int64
- | |
- | +-nage*-int64
- |
- +-user_info? (question mark indicates optional)
- |
- +-age? int64
-
-session: {
- event: {
- nage: 25
- val: 1
- }
- event: {
- nage: 25
- val: 4
- val: 5
- }
- user_info: {
- age: 25
- }
-}
-
-session: {
- event: {
- nage: 20
- val: 7
- }
- event: {
- nage: 20
- val: 8
- val: 9
- }
- user_info: {
- age: 20
- }
-}
-```
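-
-A hedged sketch of materializing the result (import paths are assumptions
-based on this documentation; `expr` is the expression above):
-
-```
-from struct2tensor import calculate, path
-from struct2tensor.expression_impl import project, promote_and_broadcast
-
-new_expr = promote_and_broadcast.promote_and_broadcast(
-    expr, {"nage": path.Path(["user_info", "age"])}, path.Path(["event"]))
-projected = project.project(new_expr, [path.Path(["event", "nage"])])
-[prensor_result] = calculate.calculate_prensors([projected])
-```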
-
-## Functions
-
-[`promote_and_broadcast(...)`](../expression_impl/promote_and_broadcast/promote_and_broadcast.md): Promote and broadcast a set of paths to a particular location.
-
-[`promote_and_broadcast_anonymous(...)`](../expression_impl/promote_and_broadcast/promote_and_broadcast_anonymous.md): Promotes then broadcasts the origin until its parent is new_parent.
-
diff --git a/g3doc/api_docs/python/expression_impl/promote_and_broadcast/promote_and_broadcast.md b/g3doc/api_docs/python/expression_impl/promote_and_broadcast/promote_and_broadcast.md
deleted file mode 100644
index 212fe1f..0000000
--- a/g3doc/api_docs/python/expression_impl/promote_and_broadcast/promote_and_broadcast.md
+++ /dev/null
@@ -1,81 +0,0 @@
-description: Promote and broadcast a set of paths to a particular location.
-
-
-
-
-
-
-# expression_impl.promote_and_broadcast.promote_and_broadcast
-
-
-
-
-
-
-
-Promote and broadcast a set of paths to a particular location.
-
-
-expression_impl.promote_and_broadcast.promote_and_broadcast(
- root: expression.Expression,
- path_dictionary: Mapping[path.Step, path.Path],
- dest_path_parent: path.Path
-) -> expression.Expression
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`root`
- |
-
-the original expression.
- |
-
-|
-`path_dictionary`
- |
-
-a map from destination fields to origin paths.
- |
-
-|
-`dest_path_parent`
- |
-
-the parent path under which the new fields are created.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A new expression, where all the origin paths are promoted and broadcast
-until they are children of dest_path_parent.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/promote_and_broadcast/promote_and_broadcast_anonymous.md b/g3doc/api_docs/python/expression_impl/promote_and_broadcast/promote_and_broadcast_anonymous.md
deleted file mode 100644
index 1781907..0000000
--- a/g3doc/api_docs/python/expression_impl/promote_and_broadcast/promote_and_broadcast_anonymous.md
+++ /dev/null
@@ -1,35 +0,0 @@
-description: Promotes then broadcasts the origin until its parent is new_parent.
-
-
-
-
-
-
-# expression_impl.promote_and_broadcast.promote_and_broadcast_anonymous
-
-
-
-
-
-
-
-Promotes then broadcasts the origin until its parent is new_parent.
-
-
-expression_impl.promote_and_broadcast.promote_and_broadcast_anonymous(
- root: expression.Expression,
- origin: path.Path,
- new_parent: path.Path
-) -> Tuple[expression.Expression, path.Path]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto.md b/g3doc/api_docs/python/expression_impl/proto.md
deleted file mode 100644
index 9d0016b..0000000
--- a/g3doc/api_docs/python/expression_impl/proto.md
+++ /dev/null
@@ -1,51 +0,0 @@
-description: Expressions to parse a proto.
-
-
-
-
-
-
-# Module: expression_impl.proto
-
-
-
-
-
-
-
-Expressions to parse a proto.
-
-
-These expressions return values with more information than standard node values.
-Specifically, each node calculates additional tensors that are used as inputs
-for its children.
-
-## Classes
-
-[`class DescriptorPool`](../expression_impl/proto/DescriptorPool.md): A collection of protobufs dynamically constructed by descriptor protos.
-
-[`class FileDescriptorSet`](../expression_impl/proto/FileDescriptorSet.md): A ProtocolMessage
-
-## Functions
-
-[`create_expression_from_file_descriptor_set(...)`](../expression_impl/proto/create_expression_from_file_descriptor_set.md): Create an expression from a 1D tensor of serialized protos.
-
-[`create_expression_from_proto(...)`](../expression_impl/proto/create_expression_from_proto.md): Create an expression from a 1D tensor of serialized protos.
-
-[`create_transformed_field(...)`](../expression_impl/proto/create_transformed_field.md): Create an expression that transforms serialized proto tensors.
-
-[`is_proto_expression(...)`](../expression_impl/proto/is_proto_expression.md): Returns true if an expression is a ProtoExpression.
-
-## Type Aliases
-
-[`ProtoExpression`](../expression_impl/proto/ProtoExpression.md)
-
-[`TransformFn`](../expression_impl/proto/TransformFn.md)
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/DescriptorPool.md b/g3doc/api_docs/python/expression_impl/proto/DescriptorPool.md
deleted file mode 100644
index f5e36b6..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/DescriptorPool.md
+++ /dev/null
@@ -1,813 +0,0 @@
-description: A collection of protobufs dynamically constructed by descriptor protos.
-
-
-# expression_impl.proto.DescriptorPool
-
-
-
-
-
-
-
-A collection of protobufs dynamically constructed by descriptor protos.
-
-
-expression_impl.proto.DescriptorPool(
- descriptor_db=None
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`descriptor_db`
- |
-
-A secondary source of file descriptors.
- |
-
-
-
-
-
-## Methods
-
-Add
-
-
-Add(
- file_desc_proto
-)
-
-
-Adds the FileDescriptorProto and its types to this pool.
-
-
-
-
-
-| Args |
-
-|
-file_desc_proto (FileDescriptorProto): The file descriptor to add.
- |
-
-
-
-
-
-
-AddDescriptor
-
-
-AddDescriptor(
- *args, **kwargs
-)
-
-
-
-
-
-AddEnumDescriptor
-
-
-AddEnumDescriptor(
- *args, **kwargs
-)
-
-
-
-
-
-AddExtensionDescriptor
-
-
-AddExtensionDescriptor(
- *args, **kwargs
-)
-
-
-
-
-
-AddFileDescriptor
-
-
-AddFileDescriptor(
- *args, **kwargs
-)
-
-
-
-
-
-AddSerializedFile
-
-
-AddSerializedFile(
- serialized_file_desc_proto
-)
-
-
-Adds the FileDescriptorProto and its types to this pool.
-
-
-
-
-
-| Args |
-
-|
-serialized_file_desc_proto (bytes): A bytes string, serialization of the
-:class:`FileDescriptorProto` to add.
- |
-
-
-
-
-
-
-AddServiceDescriptor
-
-
-AddServiceDescriptor(
- *args, **kwargs
-)
-
-
-
-
-
-FindAllExtensions
-
-
-FindAllExtensions(
- message_descriptor
-)
-
-
-Gets all the known extensions of a given message.
-
-Extensions have to be registered to this pool by calling
-:func:`Add` or :func:`AddExtensionDescriptor`.
-
-
-
-
-| Args |
-
-|
-message_descriptor (Descriptor): Descriptor of the extended message.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-list[FieldDescriptor]: Field descriptors describing the extensions.
- |
-
-
-
-
-
-
-FindEnumTypeByName
-
-
-FindEnumTypeByName(
- full_name
-)
-
-
-Loads the named enum descriptor from the pool.
-
-
-
-
-
-| Args |
-
-|
-full_name (str): The full name of the enum descriptor to load.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`EnumDescriptor`
- |
-
-The enum descriptor for the named type.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the enum cannot be found in the pool.
- |
-
-
-
-
-
-FindExtensionByName
-
-
-FindExtensionByName(
- full_name
-)
-
-
-Loads the named extension descriptor from the pool.
-
-
-
-
-
-| Args |
-
-|
-full_name (str): The full name of the extension descriptor to load.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`FieldDescriptor`
- |
-
-The field descriptor for the named extension.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the extension cannot be found in the pool.
- |
-
-
-
-
-
-FindExtensionByNumber
-
-
-FindExtensionByNumber(
- message_descriptor, number
-)
-
-
-Gets the extension of the specified message with the specified number.
-
-Extensions have to be registered to this pool by calling :func:`Add` or
-:func:`AddExtensionDescriptor`.
-
-
-
-
-| Args |
-
-|
-message_descriptor (Descriptor): Descriptor of the extended message.
-number (int): Number of the extension field.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`FieldDescriptor`
- |
-
-The descriptor for the extension.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-when no extension with the given number is known for the
-specified message.
- |
-
-
-
-
-
-FindFieldByName
-
-
-FindFieldByName(
- full_name
-)
-
-
-Loads the named field descriptor from the pool.
-
-
-
-
-
-| Args |
-
-|
-full_name (str): The full name of the field descriptor to load.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`FieldDescriptor`
- |
-
-The field descriptor for the named field.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the field cannot be found in the pool.
- |
-
-
-
-
-
-FindFileByName
-
-
-FindFileByName(
- file_name
-)
-
-
-Gets a FileDescriptor by file name.
-
-
-
-
-
-| Args |
-
-|
-file_name (str): The path to the file to get a descriptor for.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`FileDescriptor`
- |
-
-The descriptor for the named file.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the file cannot be found in the pool.
- |
-
-
-
-
-
-FindFileContainingSymbol
-
-
-FindFileContainingSymbol(
- symbol
-)
-
-
-Gets the FileDescriptor for the file containing the specified symbol.
-
-
-
-
-
-| Args |
-
-|
-symbol (str): The name of the symbol to search for.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`FileDescriptor`
- |
-
-Descriptor for the file that contains the specified
-symbol.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the file cannot be found in the pool.
- |
-
-
-
-
-
-FindMessageTypeByName
-
-
-FindMessageTypeByName(
- full_name
-)
-
-
-Loads the named descriptor from the pool.
-
-
-
-
-
-| Args |
-
-|
-full_name (str): The full name of the descriptor to load.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`Descriptor`
- |
-
-The descriptor for the named type.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the message cannot be found in the pool.
- |
-
-
-
-
-
-FindMethodByName
-
-
-FindMethodByName(
- full_name
-)
-
-
-Loads the named service method descriptor from the pool.
-
-
-
-
-
-| Args |
-
-|
-full_name (str): The full name of the method descriptor to load.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`MethodDescriptor`
- |
-
-The method descriptor for the service method.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the method cannot be found in the pool.
- |
-
-
-
-
-
-FindOneofByName
-
-
-FindOneofByName(
- full_name
-)
-
-
-Loads the named oneof descriptor from the pool.
-
-
-
-
-
-| Args |
-
-|
-full_name (str): The full name of the oneof descriptor to load.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`OneofDescriptor`
- |
-
-The oneof descriptor for the named oneof.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the oneof cannot be found in the pool.
- |
-
-
-
-
-
-FindServiceByName
-
-
-FindServiceByName(
- full_name
-)
-
-
-Loads the named service descriptor from the pool.
-
-
-
-
-
-| Args |
-
-|
-full_name (str): The full name of the service descriptor to load.
- |
-
-
-
-
-
-
-
-
-
-| Returns |
-
-
-|
-`ServiceDescriptor`
- |
-
-The service descriptor for the named service.
- |
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`KeyError`
- |
-
-if the service cannot be found in the pool.
- |
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/FileDescriptorSet.md b/g3doc/api_docs/python/expression_impl/proto/FileDescriptorSet.md
deleted file mode 100644
index 373d76c..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/FileDescriptorSet.md
+++ /dev/null
@@ -1,41 +0,0 @@
-description: A ProtocolMessage
-
-
-
-
-
-
-# expression_impl.proto.FileDescriptorSet
-
-
-
-
-
-
-
-A ProtocolMessage
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`file`
- |
-
-`repeated FileDescriptorProto file`
- |
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/ProtoExpression.md b/g3doc/api_docs/python/expression_impl/proto/ProtoExpression.md
deleted file mode 100644
index acd848c..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/ProtoExpression.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-
-
-
-# expression_impl.proto.ProtoExpression
-
-
-This symbol is a **type alias**.
-
-
-
-#### Source:
-
-
-ProtoExpression = Union[
- struct2tensor.expression_impl.proto._ProtoRootExpression,
- struct2tensor.expression_impl.proto._ProtoChildExpression,
- struct2tensor.expression_impl.proto._ProtoLeafExpression
-]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/TransformFn.md b/g3doc/api_docs/python/expression_impl/proto/TransformFn.md
deleted file mode 100644
index 236fbc0..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/TransformFn.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-
-
-
-# expression_impl.proto.TransformFn
-
-
-This symbol is a **type alias**.
-
-
-
-#### Source:
-
-
-TransformFn = Callable[
-    [tensorflow.python.framework.ops.Tensor,
-     tensorflow.python.framework.ops.Tensor],
- Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor]
-]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/create_expression_from_file_descriptor_set.md b/g3doc/api_docs/python/expression_impl/proto/create_expression_from_file_descriptor_set.md
deleted file mode 100644
index f0c2413..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/create_expression_from_file_descriptor_set.md
+++ /dev/null
@@ -1,93 +0,0 @@
-description: Create an expression from a 1D tensor of serialized protos.
-
-
-
-
-
-
-# expression_impl.proto.create_expression_from_file_descriptor_set
-
-
-
-
-
-
-
-Create an expression from a 1D tensor of serialized protos.
-
-
-expression_impl.proto.create_expression_from_file_descriptor_set(
- tensor_of_protos: tf.Tensor,
- proto_name: ProtoFullName,
- file_descriptor_set: expression_impl.proto.FileDescriptorSet,
- message_format: str = 'binary'
-) -> expression.Expression
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`tensor_of_protos`
- |
-
-1D tensor of serialized protos.
- |
-
-|
-`proto_name`
- |
-
-fully qualified name (e.g. "some.package.SomeProto") of the
-proto in `tensor_of_protos`.
- |
-
-|
-`file_descriptor_set`
- |
-
-The FileDescriptorSet proto containing `proto_name`'s
-and all its dependencies' FileDescriptorProto. Note that if file1 imports
-file2, then file2's FileDescriptorProto must precede file1's in
-file_descriptor_set.file.
- |
-
-|
-`message_format`
- |
-
-Indicates the format of the protocol buffer: one of
-'text' or 'binary'.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-An expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/create_expression_from_proto.md b/g3doc/api_docs/python/expression_impl/proto/create_expression_from_proto.md
deleted file mode 100644
index 4a2744c..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/create_expression_from_proto.md
+++ /dev/null
@@ -1,81 +0,0 @@
-description: Create an expression from a 1D tensor of serialized protos.
-
-
-
-
-
-
-# expression_impl.proto.create_expression_from_proto
-
-
-
-
-
-
-
-Create an expression from a 1D tensor of serialized protos.
-
-
-expression_impl.proto.create_expression_from_proto(
- tensor_of_protos: tf.Tensor,
- desc: descriptor.Descriptor,
- message_format: str = 'binary'
-) -> expression.Expression
-
-
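-
-A hedged usage sketch (`MyProto` is a hypothetical generated proto class):
-
-```
-import tensorflow as tf
-from struct2tensor.expression_impl import proto
-
-# Serialize a batch of protos into a 1D string tensor.
-tensor_of_protos = tf.constant([MyProto(x=1).SerializeToString(),
-                                MyProto(x=2).SerializeToString()])
-expr = proto.create_expression_from_proto(
-    tensor_of_protos, MyProto.DESCRIPTOR)
-```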
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`tensor_of_protos`
- |
-
-1D tensor of serialized protos.
- |
-
-|
-`desc`
- |
-
-a descriptor of the protos in `tensor_of_protos`.
- |
-
-|
-`message_format`
- |
-
-Indicates the format of the protocol buffer: one of
-'text' or 'binary'.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-An expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/create_transformed_field.md b/g3doc/api_docs/python/expression_impl/proto/create_transformed_field.md
deleted file mode 100644
index 7555fb9..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/create_transformed_field.md
+++ /dev/null
@@ -1,126 +0,0 @@
-description: Create an expression that transforms serialized proto tensors.
-
-
-
-
-
-
-# expression_impl.proto.create_transformed_field
-
-
-
-
-
-
-
-Create an expression that transforms serialized proto tensors.
-
-
-expression_impl.proto.create_transformed_field(
- expr: expression.Expression,
- source_path: path.CoercableToPath,
- dest_field: StrStep,
- transform_fn: expression_impl.proto.TransformFn
-) -> expression.Expression
-
-
-
-
-
-
-The transform_fn argument should take the form:
-
-```
-def transform_fn(parent_indices, values):
-  ...
-  return (transformed_parent_indices, transformed_values)
-```
-
-#### Given:
-
-
-- parent_indices: an int64 vector of non-decreasing parent message indices.
-- values: a string vector of serialized protos having the same shape as
- `parent_indices`.
-`transform_fn` must return new parent indices and serialized values encoding
-the same proto message as the passed-in `values`. These two vectors must
-have the same size as each other, but that size need not match the inputs'.
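-
-As a hedged illustration of a conforming transform (standard TensorFlow ops
-only; not part of the original API), here is a transform_fn that keeps only
-the first proto of each parent:
-
-```
-import tensorflow as tf
-
-def keep_first_per_parent(parent_indices, values):
-  # parent_indices is non-decreasing, so an element starts a new parent
-  # group exactly when it differs from its predecessor.
-  n = tf.size(parent_indices)
-  is_first = tf.concat(
-      [tf.ones([tf.minimum(n, 1)], tf.bool),
-       tf.not_equal(parent_indices[1:], parent_indices[:-1])], axis=0)
-  return (tf.boolean_mask(parent_indices, is_first),
-          tf.boolean_mask(values, is_first))
-```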
-
-
-
-
-Args |
-
-
-|
-`expr`
- |
-
-a source expression containing `source_path`.
- |
-
-|
-`source_path`
- |
-
-the path to the field to transform.
- |
-
-|
-`dest_field`
- |
-
-the name of the newly created field. This field will be a
-sibling of the field identified by `source_path`.
- |
-
-|
-`transform_fn`
- |
-
-a callable that accepts parent_indices and serialized proto
-values and returns a possibly modified parent_indices and values. Note that
-when CalculateOptions.use_string_view is set, transform_fn should not have
-any stateful side-effecting uses of the serialized proto inputs. Doing so
-could cause segfaults, as the lifetime of the backing string tensor is not
-guaranteed when the side-effecting operations run.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-An expression.
- |
-
-
-
-
-
-
-
-
-
-Raises |
-
-
-|
-`ValueError`
- |
-
-if the source path is not a proto message field.
- |
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/proto/is_proto_expression.md b/g3doc/api_docs/python/expression_impl/proto/is_proto_expression.md
deleted file mode 100644
index 60cc1d3..0000000
--- a/g3doc/api_docs/python/expression_impl/proto/is_proto_expression.md
+++ /dev/null
@@ -1,33 +0,0 @@
-description: Returns true if an expression is a ProtoExpression.
-
-
-
-
-
-
-# expression_impl.proto.is_proto_expression
-
-
-
-
-
-
-
-Returns true if an expression is a ProtoExpression.
-
-
-expression_impl.proto.is_proto_expression(
- expr: expression.Expression
-) -> bool
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/reroot.md b/g3doc/api_docs/python/expression_impl/reroot.md
deleted file mode 100644
index e1bf645..0000000
--- a/g3doc/api_docs/python/expression_impl/reroot.md
+++ /dev/null
@@ -1,35 +0,0 @@
-description: Reroot to a subtree, maintaining an input proto index.
-
-
-
-
-
-
-# Module: expression_impl.reroot
-
-
-
-
-
-
-
-Reroot to a subtree, maintaining an input proto index.
-
-
-reroot is similar to get_descendant_or_error. However, this method allows
-you to call create_proto_index(...) later on, which gives you a reference to the
-original proto.
-
-## Functions
-
-[`create_proto_index_field(...)`](../expression_impl/reroot/create_proto_index_field.md)
-
-[`reroot(...)`](../expression_impl/reroot/reroot.md): Reroot to a new path, maintaining an input proto index.
-
diff --git a/g3doc/api_docs/python/expression_impl/reroot/create_proto_index_field.md b/g3doc/api_docs/python/expression_impl/reroot/create_proto_index_field.md
deleted file mode 100644
index 0067663..0000000
--- a/g3doc/api_docs/python/expression_impl/reroot/create_proto_index_field.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-
-
-
-# expression_impl.reroot.create_proto_index_field
-
-
-
-
-
-
-
-
-
-
-expression_impl.reroot.create_proto_index_field(
- root: expression.Expression,
- new_field_name: path.Step
-) -> expression.Expression
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/reroot/reroot.md b/g3doc/api_docs/python/expression_impl/reroot/reroot.md
deleted file mode 100644
index 7b99d93..0000000
--- a/g3doc/api_docs/python/expression_impl/reroot/reroot.md
+++ /dev/null
@@ -1,74 +0,0 @@
-description: Reroot to a new path, maintaining an input proto index.
-
-
-
-
-
-
-# expression_impl.reroot.reroot
-
-
-
-
-
-
-
-Reroot to a new path, maintaining an input proto index.
-
-
-expression_impl.reroot.reroot(
- root: expression.Expression,
- source_path: path.Path
-) -> expression.Expression
-
-
-
-
-
-
-Similar to root.get_descendant_or_error(source_path); however, this
-method retains the ability to get a map back to the original proto index.
-
-
-
-
-Args |
-
-
-|
-`root`
- |
-
-the original root.
- |
-
-|
-`source_path`
- |
-
-the path to the new root.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-the new root.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/size.md b/g3doc/api_docs/python/expression_impl/size.md
deleted file mode 100644
index 94ec2da..0000000
--- a/g3doc/api_docs/python/expression_impl/size.md
+++ /dev/null
@@ -1,53 +0,0 @@
-description: Functions for creating new size or has expressions.
-
-
-
-
-
-
-# Module: expression_impl.size
-
-
-
-
-
-
-
-Functions for creating new size or has expressions.
-
-
-Given a field "foo.bar",
-
-```
-root = size(expr, path.Path(["foo","bar"]), "bar_size")
-```
-
-creates a new expression root that has an optional field "foo.bar_size", which
-is always present, and contains the number of bar values in a particular foo.
-
-```
-root_2 = has(expr, path.Path(["foo","bar"]), "bar_has")
-```
-
-creates a new expression root that has an optional field "foo.bar_has", which
-is always present, and is true if there are one or more bar values in foo.
-
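-As a related sketch, size_anonymous returns both the new root and the path
-of the anonymous field it created (assuming `expr` and the "foo.bar" field
-from above):
-
-```
-from struct2tensor import path
-from struct2tensor.expression_impl import size
-
-new_root, size_path = size.size_anonymous(expr, path.Path(["foo", "bar"]))
-```
-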
-## Classes
-
-[`class SizeExpression`](../expression_impl/size/SizeExpression.md): Size of the given expression.
-
-## Functions
-
-[`has(...)`](../expression_impl/size/has.md): Get the presence ("has") of a field as a new sibling field.
-
-[`size(...)`](../expression_impl/size/size.md): Get the size of a field as a new sibling field.
-
-[`size_anonymous(...)`](../expression_impl/size/size_anonymous.md): Calculate the size of a field, and store it as an anonymous sibling.
-
diff --git a/g3doc/api_docs/python/expression_impl/size/SizeExpression.md b/g3doc/api_docs/python/expression_impl/size/SizeExpression.md
deleted file mode 100644
index 3adb5e0..0000000
--- a/g3doc/api_docs/python/expression_impl/size/SizeExpression.md
+++ /dev/null
@@ -1,1043 +0,0 @@
-description: Size of the given expression.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-# expression_impl.size.SizeExpression
-
-
-
-
-
-
-
-Size of the given expression.
-
-
-expression_impl.size.SizeExpression(
- origin: expression.Expression,
- origin_parent: expression.Expression
-)
-
-
-
-
-
-
-SizeExpression is intended to be a sibling of origin.
-origin_parent should be the parent of origin.
-
-
-
-
-Args |
-
-
-|
-`is_repeated`
- |
-
-if the expression is repeated.
- |
-
-|
-`my_type`
- |
-
-the DType of the field.
- |
-
-|
-`schema_feature`
- |
-
-schema information about the field.
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_leaf`
- |
-
-True iff the node tensor is a LeafNodeTensor.
- |
-
-|
-`is_repeated`
- |
-
-True iff the same parent value can have multiple child values.
- |
-
-|
-`schema_feature`
- |
-
-Return the schema of the field.
- |
-
-|
-`type`
- |
-
-dtype of the expression, or None if not a leaf expression.
- |
-
-
-
-
-
-## Methods
-
-apply
-
-
-apply(
- transform: Callable[['Expression'], 'Expression']
-) -> "Expression"
-
-
-
-
-
-apply_schema
-
-
-apply_schema(
- schema: schema_pb2.Schema
-) -> "Expression"
-
-
-
-
-
-broadcast
-
-
-broadcast(
- source_path: CoercableToPath,
- sibling_field: path.Step,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Broadcasts the existing field at source_path to the sibling_field.
-
-
-calculate
-
-View source
-
-
-calculate(
- sources: Sequence[prensor.NodeTensor],
- destinations: Sequence[expression.Expression],
- options: calculate_options.Options,
- side_info: Optional[prensor.Prensor] = None
-) -> prensor.NodeTensor
-
-
-Calculates the node tensor of the expression.
-
-The node tensor must be a function of the properties of the expression
-and the node tensors of the expressions from get_source_expressions().
-
-If is_leaf, then calculate must return a LeafNodeTensor.
-Otherwise, it must return a ChildNodeTensor or RootNodeTensor.
-
-If calculation_is_identity is true, then this must return sources[0].
-
-Sometimes, for operations such as parsing the proto, calculate will return
-additional information. For example, calculate() for the root of the
-proto expression also parses out the tensors required to calculate the
-tensors of the children. This is why destinations are required.
-
-For a reference implementation, see calculate_value_slowly(...) below.
-
-
-
-
-| Args |
-
-
-|
-`sources`
- |
-
-The node tensors of the expressions in
-get_source_expressions().
- |
-
-|
-`destinations`
- |
-
-The expressions that will use the output of this method.
- |
-
-|
-`options`
- |
-
-Options for the calculation.
- |
-
-|
-`side_info`
- |
-
-An optional prensor that is used to bind to a placeholder
-expression.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A NodeTensor representing the output of this expression.
- |
-
-
-
-
-
-
-calculation_equal
-
-View source
-
-
-calculation_equal(
- expr: expression.Expression
-) -> bool
-
-
-Whether self.calculate is equal to another expression's calculate.
-
-Given the same source node tensors, self.calculate(...) and
-expression.calculate(...) will have the same result.
-
-Note that this does not check that the source expressions of the two
-expressions are the same. Therefore, two operations can have the same
-calculation, but not the same output, because their sources are different.
-For example, if a.calculation_is_identity() is True and
-b.calculation_is_identity() is True, then a.calculation_equal(b) is True.
-However, unless a and b have the same source, the expressions themselves are
-not equal.
-
-
-
-
-| Args |
-
-
-|
-`expression`
- |
-
-The expression to compare to.
- |
-
-
-
-
-
-calculation_is_identity
-
-View source
-
-
-calculation_is_identity() -> bool
-
-
-True iff the self.calculate is the identity.
-
-There is exactly one source, and the output of self.calculate(...) is the
-node tensor of this source.
-
-cogroup_by_index
-
-
-cogroup_by_index(
- source_path: CoercableToPath,
- left_name: path.Step,
- right_name: path.Step,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a cogroup of left_name and right_name at new_field_name.
-
-
-create_has_field
-
-
-create_has_field(
- source_path: CoercableToPath,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a field that is the presence of the source path.
-
-
-create_proto_index
-
-
-create_proto_index(
- field_name: path.Step
-) -> "Expression"
-
-
-Creates a proto index field as a direct child of the current root.
-
-The proto index maps each root element to the original batch index.
-For example: [0, 2] means the first element came from the first proto
-in the original input tensor and the second element came from the third
-proto. The created field is always "dense" -- it has the same valency as
-the current root.
-
-
-
-
-| Args |
-
-
-|
-`field_name`
- |
-
-the name of the field to be created.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-create_size_field
-
-
-create_size_field(
- source_path: CoercableToPath,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a field that is the size of the source path.
-
-
-get_child
-
-
-get_child(
- field_name: path.Step
-) -> Optional['Expression']
-
-
-Gets a named child.
-
-
-get_child_or_error
-
-
-get_child_or_error(
- field_name: path.Step
-) -> "Expression"
-
-
-Gets a named child.
-
-
-get_descendant
-
-
-get_descendant(
- p: path.Path
-) -> Optional['Expression']
-
-
-Finds the descendant at the path.
-
-
-get_descendant_or_error
-
-
-get_descendant_or_error(
- p: path.Path
-) -> "Expression"
-
-
-Finds the descendant at the path.
-
-
-get_known_children
-
-
-get_known_children() -> Mapping[path.Step, 'Expression']
-
-
-
-
-
-get_known_descendants
-
-
-get_known_descendants() -> Mapping[path.Path, 'Expression']
-
-
-Gets a mapping from known paths to subexpressions.
-
-The difference between this and get_descendants in Prensor is that
-all paths in a Prensor are realized, thus all known. But an Expression's
-descendants might not all be known at the point this method is called,
-because an expression may have an infinite number of children.
-
-
-
-
-| Returns |
-
-|
-A mapping from paths (relative to the root of the subexpression) to
-expressions.
- |
-
-
-
-
-
-
-get_paths_with_schema
-
-
-get_paths_with_schema() -> List[path.Path]
-
-
-Extract only paths that contain schema information.
-
-
-get_schema
-
-
-get_schema(
- create_schema_features=True
-) -> schema_pb2.Schema
-
-
-Returns a schema for the entire tree.
-
-
-
-
-
-| Args |
-
-
-|
-`create_schema_features`
- |
-
-If True, schema features are added for all
-children and a schema entry is created if not available on the child. If
-False, features are left off of the returned schema if there is no
-schema_feature on the child.
- |
-
-
-
-
-
-get_source_expressions
-
-View source
-
-
-get_source_expressions() -> Sequence[expression.Expression]
-
-
-Gets the sources of this expression.
-
-The node tensors of the source expressions must be sufficient to
-calculate the node tensor of this expression
-(see calculate and calculate_value_slowly).
-
-
-
-
-| Returns |
-
-|
-The sources of this expression.
- |
-
-
-
-
-
-
-known_field_names
-
-
-known_field_names() -> FrozenSet[path.Step]
-
-
-Returns known field names of the expression.
-
-
-Known field names of a parsed proto correspond to the fields declared in
-the message. Examples of "unknown" fields are extensions and explicit casts
-in an Any field. The only way to know if an unknown field "(foo.bar)" is
-present in an expression expr is to call (expr["(foo.bar)"] is not None).
-
-Notice that simply accessing a field does not make it "known". However,
-setting a field (or setting a descendant of a field) will make it known.
-
-project(...) returns an expression where the known field names are the only
-field names. In general, if you want to depend upon known_field_names
-(e.g., if you want to compile an expression), then the best approach is to
-project() the expression first.
-
-
-
-
-| Returns |
-
-|
-An immutable set of field names.
- |
-
-
-
-
-
-
-map_field_values
-
-
-map_field_values(
- source_path: CoercableToPath,
- operator: Callable[[tf.Tensor], tf.Tensor],
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Map a primitive field to create a new primitive field.
-
-Note: the dtype argument was added after the v1 API.
-
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the origin path.
- |
-
-|
-`operator`
- |
-
-an element-wise operator that takes a 1-dimensional vector.
- |
-
-|
-`dtype`
- |
-
-the type of the output.
- |
-
-|
-`new_field_name`
- |
-
-the name of a new sibling of source_path.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-the resulting root expression.
- |
-
-
-
-
-
-
-map_ragged_tensors
-
-
-map_ragged_tensors(
- parent_path: CoercableToPath,
- source_fields: Sequence[path.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you to reshape the field to
-some degree. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-map_sparse_tensors
-
-
-map_sparse_tensors(
- parent_path: CoercableToPath,
- source_fields: Sequence[path.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: path.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you to reshape the field to
-some degree. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-project
-
-
-project(
- path_list: Sequence[CoercableToPath]
-) -> "Expression"
-
-
-Constrains the paths to those listed.
-
-
-
-
-
-promote(
- source_path: CoercableToPath,
- new_field_name: path.Step
-)
-
-
-Promotes source_path to be a field new_field_name in its grandparent.
-
-
-
-
-
-promote_and_broadcast(
- path_dictionary: Mapping[path.Step, CoercableToPath],
- dest_path_parent: CoercableToPath
-) -> "Expression"
-
-
-
-
-
-reroot
-
-
-reroot(
- new_root: CoercableToPath
-) -> "Expression"
-
-
-Returns a new list of protocol buffers available at new_root.
-
-
-schema_string
-
-
-schema_string(
- limit: Optional[int] = None
-) -> str
-
-
-Returns a schema for the expression.
-
-E.g.
-
-repeated root:
- optional int32 foo
- optional bar:
- optional string baz
- optional int64 bak
-
-Note that unknown fields and subexpressions are not displayed.
-
-
-
-
-| Args |
-
-
-|
-`limit`
- |
-
-if present, limit the recursion.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A string, describing (a part of) the schema.
- |
-
-
-
-
-
-
-slice
-
-
-slice(
- source_path: CoercableToPath,
- new_field_name: path.Step,
- begin: Optional[IndexValue] = None,
- end: Optional[IndexValue] = None
-) -> "Expression"
-
-
-Creates a slice copy of source_path at new_field_name.
-
-Note that if begin or end is negative, it is considered relative to
-the size of the array. e.g., slice(...,begin=-1) will get the last
-element of every array.
-
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the source of the slice.
- |
-
-|
-`new_field_name`
- |
-
-the new field that is generated.
- |
-
-|
-`begin`
- |
-
-the beginning of the slice (inclusive).
- |
-
-|
-`end`
- |
-
-the end of the slice (exclusive).
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-truncate
-
-
-truncate(
- source_path: CoercableToPath,
- limit: Union[int, tf.Tensor],
- new_field_name: path.Step
-) -> "Expression"
-
-
-Creates a truncated copy of source_path at new_field_name.
-
-
-__eq__
-
-
-__eq__(
- expr: "Expression"
-) -> bool
-
-
-If hash(expr1) == hash(expr2), then expr1 == expr2.
-
-Do not override this method.
-Args:
- expr: The expression to check equality against
-
-
-
-
-| Returns |
-
-|
-Boolean of equality of two expressions
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/size/has.md b/g3doc/api_docs/python/expression_impl/size/has.md
deleted file mode 100644
index 3dc9d63..0000000
--- a/g3doc/api_docs/python/expression_impl/size/has.md
+++ /dev/null
@@ -1,80 +0,0 @@
-description: Get the presence ("has") of a field as a new sibling field.
-
-
-
-
-
-
-# expression_impl.size.has
-
-
-
-
-
-
-
-Get the presence ("has") of a field as a new sibling field.
-
-
-expression_impl.size.has(
- root: expression.Expression,
- source_path: path.Path,
- new_field_name: path.Step
-) -> expression.Expression
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`root`
- |
-
-the original expression.
- |
-
-|
-`source_path`
- |
-
-the source path to measure. Cannot be root.
- |
-
-|
-`new_field_name`
- |
-
-the name of the sibling field.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-The new expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/size/size.md b/g3doc/api_docs/python/expression_impl/size/size.md
deleted file mode 100644
index 819c9e3..0000000
--- a/g3doc/api_docs/python/expression_impl/size/size.md
+++ /dev/null
@@ -1,80 +0,0 @@
-description: Get the size of a field as a new sibling field.
-
-
-
-
-
-
-# expression_impl.size.size
-
-
-
-
-
-
-
-Get the size of a field as a new sibling field.
-
-
-expression_impl.size.size(
- root: expression.Expression,
- source_path: path.Path,
- new_field_name: path.Step
-) -> expression.Expression
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`root`
- |
-
-the original expression.
- |
-
-|
-`source_path`
- |
-
-the source path to measure. Cannot be root.
- |
-
-|
-`new_field_name`
- |
-
-the name of the sibling field.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-The new expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/size/size_anonymous.md b/g3doc/api_docs/python/expression_impl/size/size_anonymous.md
deleted file mode 100644
index 792d371..0000000
--- a/g3doc/api_docs/python/expression_impl/size/size_anonymous.md
+++ /dev/null
@@ -1,72 +0,0 @@
-description: Calculate the size of a field, and store it as an anonymous sibling.
-
-
-
-
-
-
-# expression_impl.size.size_anonymous
-
-
-
-
-
-
-
-Calculate the size of a field, and store it as an anonymous sibling.
-
-
-expression_impl.size.size_anonymous(
- root: expression.Expression,
- source_path: path.Path
-) -> Tuple[expression.Expression, path.Path]
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`root`
- |
-
-the original expression.
- |
-
-|
-`source_path`
- |
-
-the source path to measure. Cannot be root.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-The new expression and the path of the new anonymous field, as a pair.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/slice_expression.md b/g3doc/api_docs/python/expression_impl/slice_expression.md
deleted file mode 100644
index 9ba697d..0000000
--- a/g3doc/api_docs/python/expression_impl/slice_expression.md
+++ /dev/null
@@ -1,135 +0,0 @@
-description: Implementation of slice.
-
-
-
-
-
-
-# Module: expression_impl.slice_expression
-
-
-
-
-
-
-
-Implementation of slice.
-
-
-
-The slice operation is meant to replicate the slicing of a list in python.
-
-Slicing a list in python is done by specifying a beginning and ending.
-The resulting list consists of all elements in the range.
-
-#### For example:
-
-
-
-```
->>> x = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
->>> print(x[2:5]) # all elements between index 2 inclusive and index 5 exclusive
-['c', 'd', 'e']
->>> print(x[2:]) # all elements between index 2 and the end.
-['c', 'd', 'e', 'f', 'g']
->>> print(x[:4]) # all elements between the beginning and index 4 (exclusive).
-['a', 'b', 'c', 'd']
->>> print(x[-3:-1]) # all elements starting three from the end.
->>> # until one from the end (exclusive).
-['e', 'f']
->>> print(x[-3:6]) # all elements starting three from the end
->>> # until index 6 exclusive.
-['e', 'f', 'g']
-```
-
-
-Python's slice also supports a third step argument that lets you skip
-over the elements (e.g. x[2:6:2] == ['c', 'e'], giving you every other
-element). This is not implemented.
-
-
-A prensor can be considered to be interleaved lists and dictionaries.
-E.g.:
-
-```
-my_expression = [{
-  "foo": [
-    {"bar": [
-      {"baz": ["a", "b", "c", "d"]},
-      {"baz": ["d", "e", "f"]}
-    ]},
-    {"bar": [
-      {"baz": ["g", "h", "i"]},
-      {"baz": ["j", "k", "l"]},
-      {"baz": ["m"]}
-    ]}
-  ]
-}]
-```
-
-```
-result_1 = slice_expression.slice_expression(
- my_expression, "foo.bar", "new_bar",begin=1, end=3)
-
-result_1 = [{
-  "foo": [
-    {"bar": [
-       {"baz": ["a", "b", "c", "d"]},
-       {"baz": ["d", "e", "f"]}
-     ],
-     "new_bar": [
-       {"baz": ["d", "e", "f"]}
-     ]},
-    {"bar": [
-       {"baz": ["g", "h", "i"]},
-       {"baz": ["j", "k", "l"]},
-       {"baz": ["m"]}
-     ],
-     "new_bar": [
-       {"baz": ["j", "k", "l"]},
-       {"baz": ["m"]}
-     ]}
-  ]
-}]
-```
-
-```
-result_2 = slice_expression.slice_expression(
- my_expression, "foo.bar.baz", "new_baz",begin=1, end=3)
-
-result_2 = [{
-  "foo": [
-    {"bar": [
-      {"baz": ["a", "b", "c", "d"], "new_baz": ["b", "c"]},
-      {"baz": ["d", "e", "f"], "new_baz": ["e", "f"]}
-    ]},
-    {"bar": [
-      {"baz": ["g", "h", "i"], "new_baz": ["h", "i"]},
-      {"baz": ["j", "k", "l"], "new_baz": ["k", "l"]},
-      {"baz": ["m"]}
-    ]}
-  ]
-}]
-```
-
-## Functions
-
-[`slice_expression(...)`](../expression_impl/slice_expression/slice_expression.md): Creates a new subtree with a sliced expression.
-
-## Type Aliases
-
-[`IndexValue`](../expression_impl/slice_expression/IndexValue.md)
-
diff --git a/g3doc/api_docs/python/expression_impl/slice_expression/IndexValue.md b/g3doc/api_docs/python/expression_impl/slice_expression/IndexValue.md
deleted file mode 100644
index 952d132..0000000
--- a/g3doc/api_docs/python/expression_impl/slice_expression/IndexValue.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-
-
-
-# expression_impl.slice_expression.IndexValue
-
-
-This symbol is a **type alias**.
-
-
-
-#### Source:
-
-
-IndexValue = Union[
- int,
- tensorflow.python.framework.ops.Tensor,
- tensorflow.python.ops.variables.Variable
-]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/expression_impl/slice_expression/slice_expression.md b/g3doc/api_docs/python/expression_impl/slice_expression/slice_expression.md
deleted file mode 100644
index 9de0d23..0000000
--- a/g3doc/api_docs/python/expression_impl/slice_expression/slice_expression.md
+++ /dev/null
@@ -1,98 +0,0 @@
-description: Creates a new subtree with a sliced expression.
-
-
-
-
-
-
-# expression_impl.slice_expression.slice_expression
-
-
-
-
-
-
-
-Creates a new subtree with a sliced expression.
-
-
-expression_impl.slice_expression.slice_expression(
- expr: expression.Expression,
- p: path.Path,
- new_field_name: path.Step,
- begin: Optional[IndexValue],
- end: Optional[IndexValue]
-) -> expression.Expression
-
-
-
-
-
-
-This follows the pattern of python slice() method.
-See module-level comments for examples.
-
-
-
-
-Args |
-
-
-|
-`expr`
- |
-
-the original root expression
- |
-
-|
-`p`
- |
-
-the path to the source to be sliced.
- |
-
-|
-`new_field_name`
- |
-
-the name of the new subtree.
- |
-
-|
-`begin`
- |
-
-beginning index
- |
-
-|
-`end`
- |
-
-end index.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A new root expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t.md b/g3doc/api_docs/python/s2t.md
deleted file mode 100644
index 6f5e8c6..0000000
--- a/g3doc/api_docs/python/s2t.md
+++ /dev/null
@@ -1,78 +0,0 @@
-description: Import core names for struct2tensor.
-
-
-
-
-
-
-# Module: s2t
-
-
-
-
-
-
-
-Import core names for struct2tensor.
-
-
-
-## Classes
-
-[`class ChildNodeTensor`](./s2t/ChildNodeTensor.md): The value of an intermediate node.
-
-[`class Expression`](./s2t/Expression.md): An expression represents the calculation of a prensor object.
-
-[`class LeafNodeTensor`](./s2t/LeafNodeTensor.md): The value of a leaf node.
-
-[`class Path`](./s2t/Path.md): A representation of a path in the expression.
-
-[`class Prensor`](./s2t/Prensor.md): An expression of NodeTensor objects.
-
-[`class RootNodeTensor`](./s2t/RootNodeTensor.md): The value of the root.
-
-## Functions
-
-[`calculate_prensors(...)`](./s2t/calculate_prensors.md): Gets the prensor value of the expressions.
-
-[`calculate_prensors_with_graph(...)`](./s2t/calculate_prensors_with_graph.md): Gets the prensor value of the expressions and the graph used.
-
-[`calculate_prensors_with_source_paths(...)`](./s2t/calculate_prensors_with_source_paths.md): Returns a list of prensor trees, and proto summaries.
-
-[`create_expression_from_file_descriptor_set(...)`](./s2t/create_expression_from_file_descriptor_set.md): Create an expression from a 1D tensor of serialized protos.
-
-[`create_expression_from_prensor(...)`](./s2t/create_expression_from_prensor.md): Gets an expression representing the prensor.
-
-[`create_expression_from_proto(...)`](./s2t/create_expression_from_proto.md): Create an expression from a 1D tensor of serialized protos.
-
-[`create_path(...)`](./s2t/create_path.md): Create a path from an object.
-
-[`create_prensor_from_descendant_nodes(...)`](./s2t/create_prensor_from_descendant_nodes.md): Create a prensor from a map of paths to NodeTensor.
-
-[`create_prensor_from_root_and_children(...)`](./s2t/create_prensor_from_root_and_children.md)
-
-[`get_default_options(...)`](./s2t/get_default_options.md): Get the default options.
-
-[`get_options_with_minimal_checks(...)`](./s2t/get_options_with_minimal_checks.md): Options for calculation with minimal runtime checks.
-
-[`get_ragged_tensor(...)`](./s2t/get_ragged_tensor.md): Get a ragged tensor for a path. (deprecated)
-
-[`get_ragged_tensors(...)`](./s2t/get_ragged_tensors.md): Gets ragged tensors for all the leaves of the prensor expression. (deprecated)
-
-[`get_sparse_tensor(...)`](./s2t/get_sparse_tensor.md): Gets a sparse tensor for path p. (deprecated)
-
-[`get_sparse_tensors(...)`](./s2t/get_sparse_tensors.md): Gets sparse tensors for all the leaves of the prensor expression. (deprecated)
-
-## Type Aliases
-
-[`NodeTensor`](./s2t/NodeTensor.md)
-
-[`Step`](./s2t/Step.md)
-
diff --git a/g3doc/api_docs/python/s2t/ChildNodeTensor.md b/g3doc/api_docs/python/s2t/ChildNodeTensor.md
deleted file mode 100644
index cc3d3c1..0000000
--- a/g3doc/api_docs/python/s2t/ChildNodeTensor.md
+++ /dev/null
@@ -1,139 +0,0 @@
-description: The value of an intermediate node.
-
-
-
-
-
-
-
-
-# s2t.ChildNodeTensor
-
-
-
-
-
-
-
-The value of an intermediate node.
-
-
-s2t.ChildNodeTensor(
- parent_index: tf.Tensor,
- is_repeated: bool
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`parent_index`
- |
-
-a 1-D int64 tensor where parent_index[i] represents the
-parent index of the ith child.
- |
-
-|
-`is_repeated`
- |
-
-a bool indicating if there can be more than one child per
-parent.
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_repeated`
- |
-
-
- |
-
-|
-`parent_index`
- |
-
-
- |
-
-|
-`size`
- |
-
-Returns the size, as if this were the root prensor.
- |
-
-
-
-
-
-## Methods
-
-get_positional_index
-
-View source
-
-
-get_positional_index() -> tf.Tensor
-
-
-Gets the positional index for this ChildNodeTensor.
-
-The positional index tells us which index of the parent an element is.
-
-For example, with the following parent indices: [0, 0, 2]
-we would have positional index:
-[
- 0, # The 0th element of the 0th parent.
- 1, # The 1st element of the 0th parent.
- 0 # The 0th element of the 2nd parent.
-].
-
-For more information, view ops/run_length_before_op.cc
-
-This is the same for Leaf NodeTensors.
-
-
-
-
-| Returns |
-
-|
-A tensor of positional indices.
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/Expression.md b/g3doc/api_docs/python/s2t/Expression.md
deleted file mode 100644
index dd32541..0000000
--- a/g3doc/api_docs/python/s2t/Expression.md
+++ /dev/null
@@ -1,1102 +0,0 @@
-description: An expression represents the calculation of a prensor object.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-# s2t.Expression
-
-
-
-
-
-
-
-An expression represents the calculation of a prensor object.
-
-
-s2t.Expression(
- is_repeated: bool,
- my_type: Optional[tf.DType],
- schema_feature: Optional[schema_pb2.Feature] = None
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`is_repeated`
- |
-
-if the expression is repeated.
- |
-
-|
-`my_type`
- |
-
-the DType of a field, or None for an internal node.
- |
-
-|
-`schema_feature`
- |
-
-the local schema (StructDomain information should not be
-present).
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_leaf`
- |
-
-True iff the node tensor is a LeafNodeTensor.
- |
-
-|
-`is_repeated`
- |
-
-True iff the same parent value can have multiple child values.
- |
-
-|
-`schema_feature`
- |
-
-Return the schema of the field.
- |
-
-|
-`type`
- |
-
-dtype of the expression, or None if not a leaf expression.
- |
-
-
-
-
-
-## Methods
-
-apply
-
-View source
-
-
-apply(
- transform: Callable[['Expression'], 'Expression']
-) -> "Expression"
-
-
-
-
-
-apply_schema
-
-View source
-
-
-apply_schema(
- schema: schema_pb2.Schema
-) -> "Expression"
-
-
-
-
-
-broadcast
-
-View source
-
-
-broadcast(
- source_path: s2t.Path,
- sibling_field: s2t.Step,
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Broadcasts the existing field at source_path to the sibling_field.
-
-
-calculate
-
-View source
-
-
-@abc.abstractmethod
-calculate(
- source_tensors: Sequence[s2t.NodeTensor],
- destinations: Sequence['Expression'],
- options: calculate_options.Options,
- side_info: Optional[s2t.Prensor] = None
-) -> s2t.NodeTensor
-
-
-Calculates the node tensor of the expression.
-
-The node tensor must be a function of the properties of the expression
-and the node tensors of the expressions from get_source_expressions().
-
-If is_leaf, then calculate must return a LeafNodeTensor.
-Otherwise, it must return a ChildNodeTensor or RootNodeTensor.
-
-If calculation_is_identity is true, then this must return source_tensors[0].
-
-Sometimes, for operations such as parsing the proto, calculate will return
-additional information. For example, calculate() for the root of the
-proto expression also parses out the tensors required to calculate the
-tensors of the children. This is why destinations are required.
-
-For a reference implementation, see calculate_value_slowly(...) below.
-
-
-
-
-| Args |
-
-
-|
-`source_tensors`
- |
-
-The node tensors of the expressions in
-get_source_expressions().
- |
-
-|
-`destinations`
- |
-
-The expressions that will use the output of this method.
- |
-
-|
-`options`
- |
-
-Options for the calculation.
- |
-
-|
-`side_info`
- |
-
-An optional prensor that is used to bind to a placeholder
-expression.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A NodeTensor representing the output of this expression.
- |
-
-
-
-
-
-
-calculation_equal
-
-View source
-
-
-@abc.abstractmethod
-calculation_equal(
- expression: "Expression"
-) -> bool
-
-
-Whether self.calculate is equal to another expression's calculate.
-
-Given the same source node tensors, self.calculate(...) and
-expression.calculate(...) will have the same result.
-
-Note that this does not check that the source expressions of the two
-expressions are the same. Therefore, two operations can have the same
-calculation, but not the same output, because their sources are different.
-For example, if a.calculation_is_identity() is True and
-b.calculation_is_identity() is True, then a.calculation_equal(b) is True.
-However, unless a and b have the same source, the expressions themselves are
-not equal.
-
-
-
-
-| Args |
-
-
-|
-`expression`
- |
-
-The expression to compare to.
- |
-
-
-
-
-
-calculation_is_identity
-
-View source
-
-
-@abc.abstractmethod
-calculation_is_identity() -> bool
-
-
-True iff the self.calculate is the identity.
-
-There is exactly one source, and the output of self.calculate(...) is the
-node tensor of this source.
-
-cogroup_by_index
-
-View source
-
-
-cogroup_by_index(
- source_path: s2t.Path,
- left_name: s2t.Step,
- right_name: s2t.Step,
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Creates a cogroup of left_name and right_name at new_field_name.
-
-
-create_has_field
-
-View source
-
-
-create_has_field(
- source_path: s2t.Path,
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Creates a field that is the presence of the source path.
-
-
-create_proto_index
-
-View source
-
-
-create_proto_index(
- field_name: s2t.Step
-) -> "Expression"
-
-
-Creates a proto index field as a direct child of the current root.
-
-The proto index maps each root element to the original batch index.
-For example: [0, 2] means the first element came from the first proto
-in the original input tensor and the second element came from the third
-proto. The created field is always "dense" -- it has the same valency as
-the current root.
-
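-For example, a minimal sketch (the field name is hypothetical):
-
-```
-# Adds a dense "proto_index" field mapping each root element to the
-# index of the proto it came from in the input batch.
-indexed_expr = expr.create_proto_index("proto_index")
-```
-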
-
-
-
-| Args |
-
-
-|
-`field_name`
- |
-
-the name of the field to be created.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-create_size_field
-
-View source
-
-
-create_size_field(
- source_path: s2t.Path,
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Creates a field that is the size of the source path.
-
-
-get_child
-
-View source
-
-
-get_child(
- field_name: s2t.Step
-) -> Optional['Expression']
-
-
-Gets a named child.
-
-
-get_child_or_error
-
-View source
-
-
-get_child_or_error(
- field_name: s2t.Step
-) -> "Expression"
-
-
-Gets a named child.
-
-
-get_descendant
-
-View source
-
-
-get_descendant(
- p: s2t.Path
-) -> Optional['Expression']
-
-
-Finds the descendant at the path.
-
-
-get_descendant_or_error
-
-View source
-
-
-get_descendant_or_error(
- p: s2t.Path
-) -> "Expression"
-
-
-Finds the descendant at the path.
-
-
-get_known_children
-
-View source
-
-
-get_known_children() -> Mapping[path.Step, 'Expression']
-
-
-
-
-
-get_known_descendants
-
-View source
-
-
-get_known_descendants() -> Mapping[path.Path, 'Expression']
-
-
-Gets a mapping from known paths to subexpressions.
-
-The difference between this and get_descendants in Prensor is that
-all paths in a Prensor are realized, thus all known. But an Expression's
-descendants might not all be known at the point this method is called,
-because an expression may have an infinite number of children.
-
-
-
-
-| Returns |
-
-|
-A mapping from paths (relative to the root of the subexpression) to
-expressions.
- |
-
-
-
-
-
-
-get_paths_with_schema
-
-View source
-
-
-get_paths_with_schema() -> List[s2t.Path]
-
-
-Extract only paths that contain schema information.
-
-
-get_schema
-
-View source
-
-
-get_schema(
- create_schema_features=True
-) -> schema_pb2.Schema
-
-
-Returns a schema for the entire tree.
-
-
-
-
-
-| Args |
-
-
-|
-`create_schema_features`
- |
-
-If True, schema features are added for all
-children and a schema entry is created if not available on the child. If
-False, features are left off of the returned schema if there is no
-schema_feature on the child.
- |
-
-
-
-
-
-get_source_expressions
-
-View source
-
-
-@abc.abstractmethod
-get_source_expressions() -> Sequence['Expression']
-
-
-Gets the sources of this expression.
-
-The node tensors of the source expressions must be sufficient to
-calculate the node tensor of this expression
-(see calculate and calculate_value_slowly).
-
-
-
-
-| Returns |
-
-|
-The sources of this expression.
- |
-
-
-
-
-
-
-known_field_names
-
-View source
-
-
-@abc.abstractmethod
-known_field_names() -> FrozenSet[s2t.Step]
-
-
-Returns known field names of the expression.
-
-
-Known field names of a parsed proto correspond to the fields declared in
-the message. Examples of "unknown" fields are extensions and explicit casts
-in an Any field. The only way to know if an unknown field "(foo.bar)" is
-present in an expression expr is to call (expr["(foo.bar)"] is not None).
-
-Notice that simply accessing a field does not make it "known". However,
-setting a field (or setting a descendant of a field) will make it known.
-
-project(...) returns an expression where the known field names are the only
-field names. In general, if you want to depend upon known_field_names
-(e.g., if you want to compile an expression), then the best approach is to
-project() the expression first.
-
-
-
-
-| Returns |
-
-|
-An immutable set of field names.
- |
-
-
-
-
-
-
-map_field_values
-
-View source
-
-
-map_field_values(
- source_path: s2t.Path,
- operator: Callable[[tf.Tensor], tf.Tensor],
- dtype: tf.DType,
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Map a primitive field to create a new primitive field.
-
-Note: the dtype argument was added after the v1 API.
-
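-A short illustrative sketch (names hypothetical; `expr` is assumed to be an
-expression with an int64 leaf at "foo.bar"):
-
-```
-import struct2tensor as s2t
-import tensorflow as tf
-
-# Creates "foo.bar_doubled" as a sibling of "foo.bar".
-new_root = expr.map_field_values(
-    s2t.create_path("foo.bar"), lambda values: values * 2,
-    tf.int64, "bar_doubled")
-```
-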
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the origin path.
- |
-
-|
-`operator`
- |
-
-an element-wise operator that takes a 1-dimensional vector.
- |
-
-|
-`dtype`
- |
-
-the type of the output.
- |
-
-|
-`new_field_name`
- |
-
-the name of a new sibling of source_path.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-the resulting root expression.
- |
-
-
-
-
-
-
-map_ragged_tensors
-
-View source
-
-
-map_ragged_tensors(
- parent_path: s2t.Path,
- source_fields: Sequence[s2t.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you to reshape the field to
-some degree. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-map_sparse_tensors
-
-View source
-
-
-map_sparse_tensors(
- parent_path: s2t.Path,
- source_fields: Sequence[s2t.Step],
- operator: Callable[..., tf.SparseTensor],
- is_repeated: bool,
- dtype: tf.DType,
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Maps a set of primitive fields of a message to a new field.
-
-Unlike map_field_values, this operation allows you to reshape the field to
-some degree. For instance, you can take two optional fields and create a
-repeated field, or perform a reduce_sum on the last dimension of a repeated
-field and create an optional field. The key constraint is that the operator
-must return a sparse tensor of the correct dimension: i.e., a
-2D sparse tensor if is_repeated is true, or a 1D sparse tensor if
-is_repeated is false. Moreover, the first dimension of the sparse tensor
-must be equal to the first dimension of the input tensor.
-
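-A hedged sketch (field names hypothetical) that sums two optional int64
-fields into a new optional sibling field:
-
-```
-import struct2tensor as s2t
-import tensorflow as tf
-
-def add_fields(x, y):
-  # Element-wise sum; the result stays 1-D because is_repeated is False.
-  return tf.sparse.add(x, y)
-
-new_root = expr.map_sparse_tensors(
-    s2t.create_path("foo"), ["bar", "baz"], add_fields,
-    is_repeated=False, dtype=tf.int64, new_field_name="bar_plus_baz")
-```
-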
-
-
-
-| Args |
-
-
-|
-`parent_path`
- |
-
-the parent of the input and output fields.
- |
-
-|
-`source_fields`
- |
-
-the nonempty list of names of the source fields.
- |
-
-|
-`operator`
- |
-
-an operator that takes len(source_fields) sparse tensors and
-returns a sparse tensor of the appropriate shape.
- |
-
-|
-`is_repeated`
- |
-
-whether the output is repeated.
- |
-
-|
-`dtype`
- |
-
-the dtype of the result.
- |
-
-|
-`new_field_name`
- |
-
-the name of the resulting field.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A new query.
- |
-
-
-
-
-
-
-project
-
-View source
-
-
-project(
- path_list: Sequence[CoercableToPath]
-) -> "Expression"
-
-
-Constrains the paths to those listed.
-
-
-
-
-View source
-
-
-promote(
- source_path: s2t.Path,
- new_field_name: s2t.Step
-)
-
-
-Promotes source_path to be a field new_field_name in its grandparent.
-
-
-
-
-View source
-
-
-promote_and_broadcast(
- path_dictionary: Mapping[path.Step, CoercableToPath],
- dest_path_parent: s2t.Path
-) -> "Expression"
-
-
-
-
-
-reroot
-
-View source
-
-
-reroot(
- new_root: s2t.Path
-) -> "Expression"
-
-
-Returns a new list of protocol buffers available at new_root.
-
-
-schema_string
-
-View source
-
-
-schema_string(
- limit: Optional[int] = None
-) -> str
-
-
-Returns a schema for the expression.
-
-E.g.
-
-repeated root:
- optional int32 foo
- optional bar:
- optional string baz
- optional int64 bak
-
-Note that unknown fields and subexpressions are not displayed.
-
-
-
-
-| Args |
-
-
-|
-`limit`
- |
-
-if present, limit the recursion.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A string, describing (a part of) the schema.
- |
-
-
-
-
-
-
-slice
-
-View source
-
-
-slice(
- source_path: s2t.Path,
- new_field_name: s2t.Step,
- begin: Optional[IndexValue] = None,
- end: Optional[IndexValue] = None
-) -> "Expression"
-
-
-Creates a slice copy of source_path at new_field_name.
-
-Note that if begin or end is negative, it is considered relative to
-the size of the array. e.g., slice(...,begin=-1) will get the last
-element of every array.
-
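-For example, a hedged sketch (paths hypothetical):
-
-```
-import struct2tensor as s2t
-
-# "foo.last_bar" holds the last element of every "foo.bar" list.
-sliced = expr.slice(s2t.create_path("foo.bar"), "last_bar", begin=-1)
-```
-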
-
-
-
-| Args |
-
-
-|
-`source_path`
- |
-
-the source of the slice.
- |
-
-|
-`new_field_name`
- |
-
-the new field that is generated.
- |
-
-|
-`begin`
- |
-
-the beginning of the slice (inclusive).
- |
-
-|
-`end`
- |
-
-the end of the slice (exclusive).
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-An Expression object representing the result of the operation.
- |
-
-
-
-
-
-
-truncate
-
-View source
-
-
-truncate(
- source_path: s2t.Path,
- limit: Union[int, tf.Tensor],
- new_field_name: s2t.Step
-) -> "Expression"
-
-
-Creates a truncated copy of source_path at new_field_name.
-
-
-__eq__
-
-View source
-
-
-__eq__(
- expr: "Expression"
-) -> bool
-
-
-If hash(expr1) == hash(expr2), then expr1 == expr2.
-
-Do not override this method.
-Args:
- expr: The expression to check equality against
-
-
-
-
-| Returns |
-
-|
-Boolean of equality of two expressions
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/LeafNodeTensor.md b/g3doc/api_docs/python/s2t/LeafNodeTensor.md
deleted file mode 100644
index eb7bbb0..0000000
--- a/g3doc/api_docs/python/s2t/LeafNodeTensor.md
+++ /dev/null
@@ -1,147 +0,0 @@
-description: The value of a leaf node.
-
-
-
-
-
-
-
-
-# s2t.LeafNodeTensor
-
-
-
-
-
-
-
-The value of a leaf node.
-
-
-s2t.LeafNodeTensor(
- parent_index: tf.Tensor,
- values: tf.Tensor,
- is_repeated: bool
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`parent_index`
- |
-
-a 1-D int64 tensor where parent_index[i] represents the
-parent index of values[i]
- |
-
-|
-`values`
- |
-
-a 1-D tensor of equal length to parent_index.
- |
-
-|
-`is_repeated`
- |
-
-a bool indicating if there can be more than one child per
-parent.
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_repeated`
- |
-
-
- |
-
-|
-`parent_index`
- |
-
-
- |
-
-|
-`values`
- |
-
-
- |
-
-
-
-
-
-## Methods
-
-get_positional_index
-
-View source
-
-
-get_positional_index() -> tf.Tensor
-
-
-Gets the positional index for this LeafNodeTensor.
-
-The positional index tells us which index of the parent an element is.
-
-For example, with the following parent indices: [0, 0, 2]
-we would have positional index:
-[
- 0, # The 0th element of the 0th parent.
- 1, # The 1st element of the 0th parent.
- 0 # The 0th element of the 2nd parent.
-].
-
-For more information, view ops/run_length_before_op.cc
-
-This is the same for Child NodeTensors.
-
-
-
-
-| Returns |
-
-|
-A tensor of positional indices.
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/NodeTensor.md b/g3doc/api_docs/python/s2t/NodeTensor.md
deleted file mode 100644
index 471455e..0000000
--- a/g3doc/api_docs/python/s2t/NodeTensor.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-
-
-
-# s2t.NodeTensor
-
-
-This symbol is a **type alias**.
-
-
-
-#### Source:
-
-
-NodeTensor = Union[
- s2t.LeafNodeTensor,
- s2t.ChildNodeTensor,
- s2t.RootNodeTensor
-]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/Path.md b/g3doc/api_docs/python/s2t/Path.md
deleted file mode 100644
index 6fa08b7..0000000
--- a/g3doc/api_docs/python/s2t/Path.md
+++ /dev/null
@@ -1,340 +0,0 @@
-description: A representation of a path in the expression.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-# s2t.Path
-
-
-
-
-
-
-
-A representation of a path in the expression.
-
-
-s2t.Path(
- field_list: Sequence[s2t.Step]
-)
-
-
-
-
-
-
-Do not implement __nonzero__, __eq__, __ne__, et cetera as these are
-implicitly defined by __cmp__ and __len__.
-
-
-
-
-Args |
-
-
-|
-`field_list`
- |
-
-a list or tuple of fields leading from one node to another.
- |
-
-
-
-
-
-
-
-
-Raises |
-
-
-|
-`ValueError`
- |
-
-if any field is not a valid step (see is_valid_step).
- |
-
-
-
-
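-A small usage sketch (assuming the `s2t` import name used throughout these
-docs):
-
-```
-import struct2tensor as s2t
-
-p = s2t.Path(["foo", "bar"])
-parent = p.get_parent()        # s2t.Path(["foo"])
-child = p.get_child("baz")     # s2t.Path(["foo", "bar", "baz"])
-assert p.is_ancestor(child)
-```
-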
-
-## Methods
-
-as_proto
-
-View source
-
-
-as_proto()
-
-
-Serialize a path as a proto.
-
-This fails if there are any anonymous fields.
-
-
-
-
-| Returns |
-
-|
-a Path proto.
- |
-
-
-
-
-
-
-concat
-
-View source
-
-
-concat(
- other_path: "Path"
-) -> "Path"
-
-
-
-
-
-get_child
-
-View source
-
-
-get_child(
- field_name: s2t.Step
-) -> "Path"
-
-
-Get the child path.
-
-
-get_least_common_ancestor
-
-View source
-
-
-get_least_common_ancestor(
- other: "Path"
-) -> "Path"
-
-
-Get the least common ancestor, the longest shared prefix.
-
-
-get_parent
-
-View source
-
-
-get_parent() -> "Path"
-
-
-Get the parent path.
-
-
-
-
-
-| Returns |
-
-|
-The parent path.
- |
-
-
-
-
-
-
-
-
-
-| Raises |
-
-
-|
-`ValueError`
- |
-
-If this is the root path.
- |
-
-
-
-
-
-is_ancestor
-
-View source
-
-
-is_ancestor(
- other: "Path"
-) -> bool
-
-
-True if self is an ancestor of other (i.e. a prefix).
-
-
-prefix
-
-View source
-
-
-prefix(
- ending_index: int
-) -> "Path"
-
-
-
-
-
-suffix
-
-View source
-
-
-suffix(
- starting_index: int
-) -> "Path"
-
-
-
-
-
-__add__
-
-View source
-
-
-__add__(
- other: Union['Path', str]
-) -> "Path"
-
-
-
-
-
-__eq__
-
-View source
-
-
-__eq__(
- other: "Path"
-) -> bool
-
-
-Return self==value.
-
-
-__ge__
-
-View source
-
-
-__ge__(
- other: "Path"
-) -> bool
-
-
-Return self>=value.
-
-
-__gt__
-
-View source
-
-
-__gt__(
- other: "Path"
-) -> bool
-
-
-Return self>value.
-
-
-__le__
-
-View source
-
-
-__le__(
- other: "Path"
-) -> bool
-
-
-Return self<=value.
-
-
-__len__
-
-View source
-
-
-__len__() -> int
-
-
-
-
-
-__lt__
-
-View source
-
-
-__lt__(
- other: "Path"
-) -> bool
-
-
-Return self<value.
-
-
-__ne__
-
-View source
-
-
-__ne__(
- other: "Path"
-) -> bool
-
-
-Return self!=value.
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/Prensor.md b/g3doc/api_docs/python/s2t/Prensor.md
deleted file mode 100644
index e26a368..0000000
--- a/g3doc/api_docs/python/s2t/Prensor.md
+++ /dev/null
@@ -1,384 +0,0 @@
-description: An expression of NodeTensor objects.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-# s2t.Prensor
-
-
-
-
-
-
-
-An expression of NodeTensor objects.
-
-
-s2t.Prensor(
- node: s2t.NodeTensor,
- children: "collections.OrderedDict[path.Step, Prensor]"
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`node`
- |
-
-the NodeTensor of the root.
- |
-
-|
-`children`
- |
-
-a map from edge to subexpression.
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_leaf`
- |
-
-True iff the node value is a LeafNodeTensor.
- |
-
-|
-`node`
- |
-
-The node of the root of the subtree.
- |
-
-
-
-
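-A hedged end-to-end sketch (assuming `expr` is an expression with a leaf at
-"foo.bar"):
-
-```
-import struct2tensor as s2t
-
-# Materialize the projected expression into a prensor, then read the leaf
-# back out as a ragged tensor.
-prensor = s2t.calculate_prensors([expr.project(["foo.bar"])])[0]
-ragged = prensor.get_ragged_tensor(s2t.create_path("foo.bar"))
-```
-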
-
-## Methods
-
-field_names
-
-View source
-
-
-field_names() -> FrozenSet[s2t.Step]
-
-
-Returns the field names of the children.
-
-
-get_child
-
-View source
-
-
-get_child(
- field_name: s2t.Step
-) -> Optional['Prensor']
-
-
-Gets the child at field_name.
-
-
-get_child_or_error
-
-View source
-
-
-get_child_or_error(
- field_name: s2t.Step
-) -> "Prensor"
-
-
-Gets the child at field_name.
-
-
-get_children
-
-View source
-
-
-get_children() -> "collections.OrderedDict[path.Step, Prensor]"
-
-
-A map from field name to subexpression.
-
-
-get_descendant
-
-View source
-
-
-get_descendant(
- p: s2t.Path
-) -> Optional['Prensor']
-
-
-Finds the descendant at the path.
-
-
-get_descendant_or_error
-
-View source
-
-
-get_descendant_or_error(
- p: s2t.Path
-) -> "Prensor"
-
-
-Finds the descendant at the path.
-
-
-get_descendants
-
-View source
-
-
-get_descendants() -> Mapping[path.Path, 'Prensor']
-
-
-A map from paths to all subexpressions.
-
-
-get_ragged_tensor
-
-View source
-
-
-get_ragged_tensor(
- p: s2t.Path,
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> tf.RaggedTensor
-
-
-Get a ragged tensor for a path.
-
-All steps are represented in the ragged tensor.
-
-
-
-
-| Args |
-
-
-|
-`p`
- |
-
-the path to a leaf node in `t`.
- |
-
-|
-`options`
- |
-
-Options for calculating ragged tensors.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A ragged tensor containing values of the leaf node, preserving the
-structure along the path. Raises an error if the path is not found.
- |
-
-
-
-
-
-
-get_ragged_tensors
-
-View source
-
-
-get_ragged_tensors(
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> Mapping[s2t.Path, tf.RaggedTensor]
-
-
-Gets ragged tensors for all the leaves of the prensor expression.
-
-
-
-
-
-| Args |
-
-
-|
-`options`
- |
-
-Options for calculating ragged tensors.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A map from paths to ragged tensors.
- |
-
-
-
-
-
-
-get_sparse_tensor
-
-View source
-
-
-get_sparse_tensor(
- p: s2t.Path,
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> tf.SparseTensor
-
-
-Gets a sparse tensor for path p.
-
-Note that any optional fields are not registered as dimensions, as they
-can't be represented in a sparse tensor.
-
-
-
-
-| Args |
-
-
-|
-`p`
- |
-
-The path to a leaf node in `t`.
- |
-
-|
-`options`
- |
-
-Currently unused.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A sparse tensor containing values of the leaf node, preserving the
-structure along the path. Raises an error if the path is not found.
- |
-
-
-
-
-
-
-get_sparse_tensors
-
-View source
-
-
-get_sparse_tensors(
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> Mapping[s2t.Path, tf.SparseTensor]
-
-
-Gets sparse tensors for all the leaves of the prensor expression.
-
-
-
-
-
-| Args |
-
-
-|
-`options`
- |
-
-Currently unused.
- |
-
-
-
-
-
-
-
-
-| Returns |
-
-|
-A map from paths to sparse tensors.
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/RootNodeTensor.md b/g3doc/api_docs/python/s2t/RootNodeTensor.md
deleted file mode 100644
index c38c929..0000000
--- a/g3doc/api_docs/python/s2t/RootNodeTensor.md
+++ /dev/null
@@ -1,111 +0,0 @@
-description: The value of the root.
-
-
-
-
-
-
-
-
-# s2t.RootNodeTensor
-
-
-
-
-
-
-
-The value of the root.
-
-
-s2t.RootNodeTensor(
- size: tf.Tensor
-)
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`size`
- |
-
-A scalar int64 tensor saying how many root objects there are.
- |
-
-
-
-
-
-
-
-
-
-
-Attributes |
-
-
-|
-`is_repeated`
- |
-
-
- |
-
-|
-`size`
- |
-
-
- |
-
-
-
-
-
-## Methods
-
-get_positional_index
-
-View source
-
-
-get_positional_index() -> tf.Tensor
-
-
-Gets the positional index for this RootNodeTensor.
-
-The positional index is relative to the node's parent, and is thus always
-monotonically increasing with step size 1 for a RootNodeTensor.
-
-
-
-
-| Returns |
-
-|
-A tensor of positional indices.
- |
-
-
-
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/Step.md b/g3doc/api_docs/python/s2t/Step.md
deleted file mode 100644
index 362604a..0000000
--- a/g3doc/api_docs/python/s2t/Step.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-
-
-
-# s2t.Step
-
-
-This symbol is a **type alias**.
-
-
-
-#### Source:
-
-
-Step = Union[
- int,
- str
-]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/_api_cache.json b/g3doc/api_docs/python/s2t/_api_cache.json
deleted file mode 100644
index f680f60..0000000
--- a/g3doc/api_docs/python/s2t/_api_cache.json
+++ /dev/null
@@ -1,178 +0,0 @@
-{
- "duplicate_of": {
- "s2t.Expression.__ge__": "s2t.ChildNodeTensor.__ge__",
- "s2t.Expression.__gt__": "s2t.ChildNodeTensor.__gt__",
- "s2t.Expression.__le__": "s2t.ChildNodeTensor.__le__",
- "s2t.Expression.__lt__": "s2t.ChildNodeTensor.__lt__",
- "s2t.Expression.__ne__": "s2t.ChildNodeTensor.__ne__",
- "s2t.Expression.__new__": "s2t.ChildNodeTensor.__new__",
- "s2t.LeafNodeTensor.__eq__": "s2t.ChildNodeTensor.__eq__",
- "s2t.LeafNodeTensor.__ge__": "s2t.ChildNodeTensor.__ge__",
- "s2t.LeafNodeTensor.__gt__": "s2t.ChildNodeTensor.__gt__",
- "s2t.LeafNodeTensor.__le__": "s2t.ChildNodeTensor.__le__",
- "s2t.LeafNodeTensor.__lt__": "s2t.ChildNodeTensor.__lt__",
- "s2t.LeafNodeTensor.__ne__": "s2t.ChildNodeTensor.__ne__",
- "s2t.LeafNodeTensor.__new__": "s2t.ChildNodeTensor.__new__",
- "s2t.Path.__new__": "s2t.ChildNodeTensor.__new__",
- "s2t.Prensor.__eq__": "s2t.ChildNodeTensor.__eq__",
- "s2t.Prensor.__ge__": "s2t.ChildNodeTensor.__ge__",
- "s2t.Prensor.__gt__": "s2t.ChildNodeTensor.__gt__",
- "s2t.Prensor.__le__": "s2t.ChildNodeTensor.__le__",
- "s2t.Prensor.__lt__": "s2t.ChildNodeTensor.__lt__",
- "s2t.Prensor.__ne__": "s2t.ChildNodeTensor.__ne__",
- "s2t.Prensor.__new__": "s2t.ChildNodeTensor.__new__",
- "s2t.RootNodeTensor.__eq__": "s2t.ChildNodeTensor.__eq__",
- "s2t.RootNodeTensor.__ge__": "s2t.ChildNodeTensor.__ge__",
- "s2t.RootNodeTensor.__gt__": "s2t.ChildNodeTensor.__gt__",
- "s2t.RootNodeTensor.__le__": "s2t.ChildNodeTensor.__le__",
- "s2t.RootNodeTensor.__lt__": "s2t.ChildNodeTensor.__lt__",
- "s2t.RootNodeTensor.__ne__": "s2t.ChildNodeTensor.__ne__",
- "s2t.RootNodeTensor.__new__": "s2t.ChildNodeTensor.__new__"
- },
- "is_fragment": {
- "s2t": false,
- "s2t.ChildNodeTensor": false,
- "s2t.ChildNodeTensor.__eq__": true,
- "s2t.ChildNodeTensor.__ge__": true,
- "s2t.ChildNodeTensor.__gt__": true,
- "s2t.ChildNodeTensor.__init__": true,
- "s2t.ChildNodeTensor.__le__": true,
- "s2t.ChildNodeTensor.__lt__": true,
- "s2t.ChildNodeTensor.__ne__": true,
- "s2t.ChildNodeTensor.__new__": true,
- "s2t.ChildNodeTensor.get_positional_index": true,
- "s2t.ChildNodeTensor.is_repeated": true,
- "s2t.ChildNodeTensor.parent_index": true,
- "s2t.ChildNodeTensor.size": true,
- "s2t.Expression": false,
- "s2t.Expression.__eq__": true,
- "s2t.Expression.__ge__": true,
- "s2t.Expression.__gt__": true,
- "s2t.Expression.__init__": true,
- "s2t.Expression.__le__": true,
- "s2t.Expression.__lt__": true,
- "s2t.Expression.__ne__": true,
- "s2t.Expression.__new__": true,
- "s2t.Expression.apply": true,
- "s2t.Expression.apply_schema": true,
- "s2t.Expression.broadcast": true,
- "s2t.Expression.calculate": true,
- "s2t.Expression.calculation_equal": true,
- "s2t.Expression.calculation_is_identity": true,
- "s2t.Expression.cogroup_by_index": true,
- "s2t.Expression.create_has_field": true,
- "s2t.Expression.create_proto_index": true,
- "s2t.Expression.create_size_field": true,
- "s2t.Expression.get_child": true,
- "s2t.Expression.get_child_or_error": true,
- "s2t.Expression.get_descendant": true,
- "s2t.Expression.get_descendant_or_error": true,
- "s2t.Expression.get_known_children": true,
- "s2t.Expression.get_known_descendants": true,
- "s2t.Expression.get_paths_with_schema": true,
- "s2t.Expression.get_schema": true,
- "s2t.Expression.get_source_expressions": true,
- "s2t.Expression.is_leaf": true,
- "s2t.Expression.is_repeated": true,
- "s2t.Expression.known_field_names": true,
- "s2t.Expression.map_field_values": true,
- "s2t.Expression.map_ragged_tensors": true,
- "s2t.Expression.map_sparse_tensors": true,
- "s2t.Expression.project": true,
- "s2t.Expression.promote": true,
- "s2t.Expression.promote_and_broadcast": true,
- "s2t.Expression.reroot": true,
- "s2t.Expression.schema_feature": true,
- "s2t.Expression.schema_string": true,
- "s2t.Expression.slice": true,
- "s2t.Expression.truncate": true,
- "s2t.Expression.type": true,
- "s2t.LeafNodeTensor": false,
- "s2t.LeafNodeTensor.__eq__": true,
- "s2t.LeafNodeTensor.__ge__": true,
- "s2t.LeafNodeTensor.__gt__": true,
- "s2t.LeafNodeTensor.__init__": true,
- "s2t.LeafNodeTensor.__le__": true,
- "s2t.LeafNodeTensor.__lt__": true,
- "s2t.LeafNodeTensor.__ne__": true,
- "s2t.LeafNodeTensor.__new__": true,
- "s2t.LeafNodeTensor.get_positional_index": true,
- "s2t.LeafNodeTensor.is_repeated": true,
- "s2t.LeafNodeTensor.parent_index": true,
- "s2t.LeafNodeTensor.values": true,
- "s2t.NodeTensor": false,
- "s2t.Path": false,
- "s2t.Path.__add__": true,
- "s2t.Path.__eq__": true,
- "s2t.Path.__ge__": true,
- "s2t.Path.__gt__": true,
- "s2t.Path.__init__": true,
- "s2t.Path.__le__": true,
- "s2t.Path.__len__": true,
- "s2t.Path.__lt__": true,
- "s2t.Path.__ne__": true,
- "s2t.Path.__new__": true,
- "s2t.Path.as_proto": true,
- "s2t.Path.concat": true,
- "s2t.Path.get_child": true,
- "s2t.Path.get_least_common_ancestor": true,
- "s2t.Path.get_parent": true,
- "s2t.Path.is_ancestor": true,
- "s2t.Path.prefix": true,
- "s2t.Path.suffix": true,
- "s2t.Prensor": false,
- "s2t.Prensor.__eq__": true,
- "s2t.Prensor.__ge__": true,
- "s2t.Prensor.__gt__": true,
- "s2t.Prensor.__init__": true,
- "s2t.Prensor.__le__": true,
- "s2t.Prensor.__lt__": true,
- "s2t.Prensor.__ne__": true,
- "s2t.Prensor.__new__": true,
- "s2t.Prensor.field_names": true,
- "s2t.Prensor.get_child": true,
- "s2t.Prensor.get_child_or_error": true,
- "s2t.Prensor.get_children": true,
- "s2t.Prensor.get_descendant": true,
- "s2t.Prensor.get_descendant_or_error": true,
- "s2t.Prensor.get_descendants": true,
- "s2t.Prensor.get_ragged_tensor": true,
- "s2t.Prensor.get_ragged_tensors": true,
- "s2t.Prensor.get_sparse_tensor": true,
- "s2t.Prensor.get_sparse_tensors": true,
- "s2t.Prensor.is_leaf": true,
- "s2t.Prensor.node": true,
- "s2t.RootNodeTensor": false,
- "s2t.RootNodeTensor.__eq__": true,
- "s2t.RootNodeTensor.__ge__": true,
- "s2t.RootNodeTensor.__gt__": true,
- "s2t.RootNodeTensor.__init__": true,
- "s2t.RootNodeTensor.__le__": true,
- "s2t.RootNodeTensor.__lt__": true,
- "s2t.RootNodeTensor.__ne__": true,
- "s2t.RootNodeTensor.__new__": true,
- "s2t.RootNodeTensor.get_positional_index": true,
- "s2t.RootNodeTensor.is_repeated": true,
- "s2t.RootNodeTensor.size": true,
- "s2t.Step": false,
- "s2t.calculate_prensors": false,
- "s2t.calculate_prensors_with_graph": false,
- "s2t.calculate_prensors_with_source_paths": false,
- "s2t.create_expression_from_file_descriptor_set": false,
- "s2t.create_expression_from_prensor": false,
- "s2t.create_expression_from_proto": false,
- "s2t.create_path": false,
- "s2t.create_prensor_from_descendant_nodes": false,
- "s2t.create_prensor_from_root_and_children": false,
- "s2t.get_default_options": false,
- "s2t.get_options_with_minimal_checks": false,
- "s2t.get_ragged_tensor": false,
- "s2t.get_ragged_tensors": false,
- "s2t.get_sparse_tensor": false,
- "s2t.get_sparse_tensors": false
- },
- "py_module_names": [
- "s2t"
- ],
- "site_link": null
-}
diff --git a/g3doc/api_docs/python/s2t/_toc.yaml b/g3doc/api_docs/python/s2t/_toc.yaml
deleted file mode 100644
index 04c05ce..0000000
--- a/g3doc/api_docs/python/s2t/_toc.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
-toc:
-- title: s2t
- section:
- - title: Overview
- path: /api_docs/python/s2t
- - title: ChildNodeTensor
- path: /api_docs/python/s2t/ChildNodeTensor
- - title: Expression
- path: /api_docs/python/s2t/Expression
- - title: LeafNodeTensor
- path: /api_docs/python/s2t/LeafNodeTensor
- - title: NodeTensor
- path: /api_docs/python/s2t/NodeTensor
- - title: Path
- path: /api_docs/python/s2t/Path
- - title: Prensor
- path: /api_docs/python/s2t/Prensor
- - title: RootNodeTensor
- path: /api_docs/python/s2t/RootNodeTensor
- - title: Step
- path: /api_docs/python/s2t/Step
- - title: calculate_prensors
- path: /api_docs/python/s2t/calculate_prensors
- - title: calculate_prensors_with_graph
- path: /api_docs/python/s2t/calculate_prensors_with_graph
- - title: calculate_prensors_with_source_paths
- path: /api_docs/python/s2t/calculate_prensors_with_source_paths
- - title: create_expression_from_file_descriptor_set
- path: /api_docs/python/s2t/create_expression_from_file_descriptor_set
- - title: create_expression_from_prensor
- path: /api_docs/python/s2t/create_expression_from_prensor
- - title: create_expression_from_proto
- path: /api_docs/python/s2t/create_expression_from_proto
- - title: create_path
- path: /api_docs/python/s2t/create_path
- - title: create_prensor_from_descendant_nodes
- path: /api_docs/python/s2t/create_prensor_from_descendant_nodes
- - title: create_prensor_from_root_and_children
- path: /api_docs/python/s2t/create_prensor_from_root_and_children
- - title: get_default_options
- path: /api_docs/python/s2t/get_default_options
- - title: get_options_with_minimal_checks
- path: /api_docs/python/s2t/get_options_with_minimal_checks
- - title: get_ragged_tensor
- status: deprecated
- path: /api_docs/python/s2t/get_ragged_tensor
- - title: get_ragged_tensors
- status: deprecated
- path: /api_docs/python/s2t/get_ragged_tensors
- - title: get_sparse_tensor
- status: deprecated
- path: /api_docs/python/s2t/get_sparse_tensor
- - title: get_sparse_tensors
- status: deprecated
- path: /api_docs/python/s2t/get_sparse_tensors
diff --git a/g3doc/api_docs/python/s2t/all_symbols.md b/g3doc/api_docs/python/s2t/all_symbols.md
deleted file mode 100644
index bd052bb..0000000
--- a/g3doc/api_docs/python/s2t/all_symbols.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# All symbols in Struct2Tensor
-
-
-
-## Primary symbols
-* s2t
-* s2t.ChildNodeTensor
-* s2t.Expression
-* s2t.LeafNodeTensor
-* s2t.NodeTensor
-* s2t.Path
-* s2t.Prensor
-* s2t.RootNodeTensor
-* s2t.Step
-* s2t.calculate_prensors
-* s2t.calculate_prensors_with_graph
-* s2t.calculate_prensors_with_source_paths
-* s2t.create_expression_from_file_descriptor_set
-* s2t.create_expression_from_prensor
-* s2t.create_expression_from_proto
-* s2t.create_path
-* s2t.create_prensor_from_descendant_nodes
-* s2t.create_prensor_from_root_and_children
-* s2t.get_default_options
-* s2t.get_options_with_minimal_checks
-* s2t.get_ragged_tensor
-* s2t.get_ragged_tensors
-* s2t.get_sparse_tensor
-* s2t.get_sparse_tensors
\ No newline at end of file
diff --git a/g3doc/api_docs/python/s2t/calculate_prensors.md b/g3doc/api_docs/python/s2t/calculate_prensors.md
deleted file mode 100644
index ee30dea..0000000
--- a/g3doc/api_docs/python/s2t/calculate_prensors.md
+++ /dev/null
@@ -1,81 +0,0 @@
-description: Gets the prensor value of the expressions.
-
-
-
-
-
-
-# s2t.calculate_prensors
-
-
-
-
-
-
-
-Gets the prensor value of the expressions.
-
-
-s2t.calculate_prensors(
- expressions: Sequence[s2t.Expression],
- options: Optional[calculate_options.Options] = None,
- feed_dict: Optional[Dict[expression.Expression, prensor.Prensor]] = None
-) -> Sequence[s2t.Prensor]
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`expressions`
- |
-
-expressions to calculate prensors for.
- |
-
-|
-`options`
- |
-
-options for calculate(...).
- |
-
-|
-`feed_dict`
- |
-
-a dictionary, mapping expression to prensor that will be used
-as the initial expression in the expression graph.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-a list of prensors.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/calculate_prensors_with_graph.md b/g3doc/api_docs/python/s2t/calculate_prensors_with_graph.md
deleted file mode 100644
index e1a05b3..0000000
--- a/g3doc/api_docs/python/s2t/calculate_prensors_with_graph.md
+++ /dev/null
@@ -1,83 +0,0 @@
-description: Gets the prensor value of the expressions and the graph used.
-
-
-
-
-
-
-# s2t.calculate_prensors_with_graph
-
-
-
-
-
-
-
-Gets the prensor value of the expressions and the graph used.
-
-
-s2t.calculate_prensors_with_graph(
- expressions: Sequence[s2t.Expression],
- options: Optional[calculate_options.Options] = None,
- feed_dict: Optional[Dict[expression.Expression, prensor.Prensor]] = None
-) -> Tuple[Sequence[prensor.Prensor], 'ExpressionGraph']
-
-
-
-
-
-
-This method is useful for getting information like the protobuf fields parsed
-to create an expression.
-
-
-
-
-Args |
-
-
-|
-`expressions`
- |
-
-expressions to calculate prensors for.
- |
-
-|
-`options`
- |
-
-options for calculate(...) methods.
- |
-
-|
-`feed_dict`
- |
-
-a dictionary, mapping expression to prensor that will be used
-as the initial expression in the expression graph.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-a list of prensors, and the graph used to calculate them.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/calculate_prensors_with_source_paths.md b/g3doc/api_docs/python/s2t/calculate_prensors_with_source_paths.md
deleted file mode 100644
index 068d369..0000000
--- a/g3doc/api_docs/python/s2t/calculate_prensors_with_source_paths.md
+++ /dev/null
@@ -1,34 +0,0 @@
-description: Returns a list of prensor trees, and proto summaries.
-
-
-
-
-
-
-# s2t.calculate_prensors_with_source_paths
-
-
-
-
-
-
-
-Returns a list of prensor trees, and proto summaries.
-
-
-s2t.calculate_prensors_with_source_paths(
- trees: Sequence[s2t.Expression],
- options: Optional[calculate_options.Options] = None
-) -> Tuple[Sequence[prensor.Prensor], Sequence[ProtoRequirements]]
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/create_expression_from_file_descriptor_set.md b/g3doc/api_docs/python/s2t/create_expression_from_file_descriptor_set.md
deleted file mode 100644
index 77c80eb..0000000
--- a/g3doc/api_docs/python/s2t/create_expression_from_file_descriptor_set.md
+++ /dev/null
@@ -1,93 +0,0 @@
-description: Create an expression from a 1D tensor of serialized protos.
-
-
-
-
-
-
-# s2t.create_expression_from_file_descriptor_set
-
-
-
-
-
-
-
-Create an expression from a 1D tensor of serialized protos.
-
-
-s2t.create_expression_from_file_descriptor_set(
- tensor_of_protos: tf.Tensor,
- proto_name: ProtoFullName,
- file_descriptor_set: FileDescriptorSet,
- message_format: str = 'binary'
-) -> s2t.Expression
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`tensor_of_protos`
- |
-
-1D tensor of serialized protos.
- |
-
-|
-`proto_name`
- |
-
-fully qualified name (e.g. "some.package.SomeProto") of the
-proto in `tensor_of_protos`.
- |
-
-|
-`file_descriptor_set`
- |
-
-The FileDescriptorSet proto containing `proto_name`'s
-and all its dependencies' FileDescriptorProto. Note that if file1 imports
-file2, then file2's FileDescriptorProto must precede file1's in
-file_descriptor_set.file.
- |
-
-|
-`message_format`
- |
-
-Indicates the format of the protocol buffer: is one of
-'text' or 'binary'.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-An expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/create_expression_from_prensor.md b/g3doc/api_docs/python/s2t/create_expression_from_prensor.md
deleted file mode 100644
index f72995e..0000000
--- a/g3doc/api_docs/python/s2t/create_expression_from_prensor.md
+++ /dev/null
@@ -1,64 +0,0 @@
-description: Gets an expression representing the prensor.
-
-
-
-
-
-
-# s2t.create_expression_from_prensor
-
-
-
-
-
-
-
-Gets an expression representing the prensor.
-
-
-s2t.create_expression_from_prensor(
- t: s2t.Prensor
-) -> s2t.Expression
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`t`
- |
-
-The prensor to represent.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-An expression representing the prensor.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/create_expression_from_proto.md b/g3doc/api_docs/python/s2t/create_expression_from_proto.md
deleted file mode 100644
index 4a720e8..0000000
--- a/g3doc/api_docs/python/s2t/create_expression_from_proto.md
+++ /dev/null
@@ -1,81 +0,0 @@
-description: Create an expression from a 1D tensor of serialized protos.
-
-
-
-
-
-
-# s2t.create_expression_from_proto
-
-
-
-
-
-
-
-Create an expression from a 1D tensor of serialized protos.
-
-
-s2t.create_expression_from_proto(
- tensor_of_protos: tf.Tensor,
- desc: descriptor.Descriptor,
- message_format: str = 'binary'
-) -> s2t.Expression
-
-
-
-
-
-
-
-
-
-
-Args |
-
-
-|
-`tensor_of_protos`
- |
-
-1D tensor of serialized protos.
- |
-
-|
-`desc`
- |
-
-a descriptor of protos in tensor of protos.
- |
-
-|
-`message_format`
- |
-
-Indicates the format of the protocol buffer: is one of
-'text' or 'binary'.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-An expression.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/create_path.md b/g3doc/api_docs/python/s2t/create_path.md
deleted file mode 100644
index 65ba841..0000000
--- a/g3doc/api_docs/python/s2t/create_path.md
+++ /dev/null
@@ -1,94 +0,0 @@
-description: Create a path from an object.
-
-
-
-
-
-
-# s2t.create_path
-
-
-
-
-
-
-
-Create a path from an object.
-
-
-s2t.create_path(
- path_source: s2t.Path
-) -> s2t.Path
-
-
-
-
-
-
-
-#### The BNF for a path is:
-
-
-letter := [A-Za-z]
-digit := [0-9]
- := "_"|"-"| | letter | digit
- := +
- := "(" ( ".")* ")"
- := |
- := (( ".") * )?
-
-
-
-
-
-
-Args |
-
-
-|
-`path_source`
- |
-
-a string or a Path object.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A Path.
- |
-
-
-
-
-
-
-
-
-
-Raises |
-
-
-|
-`ValueError`
- |
-
-if this is not a valid path.
- |
-
-
-
diff --git a/g3doc/api_docs/python/s2t/create_prensor_from_descendant_nodes.md b/g3doc/api_docs/python/s2t/create_prensor_from_descendant_nodes.md
deleted file mode 100644
index 4222a52..0000000
--- a/g3doc/api_docs/python/s2t/create_prensor_from_descendant_nodes.md
+++ /dev/null
@@ -1,82 +0,0 @@
-description: Create a prensor from a map of paths to NodeTensor.
-
-
-
-
-
-
-# s2t.create_prensor_from_descendant_nodes
-
-
-
-
-
-
-
-Create a prensor from a map of paths to NodeTensor.
-
-
-s2t.create_prensor_from_descendant_nodes(
- nodes: Mapping[s2t.Path, s2t.NodeTensor]
-) -> "Prensor"
-
-
-
-
-
-
-If a path is a key in the map, all prefixes of that path must be present.
-
-
-
-
-Args |
-
-
-|
-`nodes`
- |
-
-A map from paths to NodeTensors.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A Prensor.
- |
-
-
-
-
-
-
-
-
-
-Raises |
-
-
-|
-`ValueError`
- |
-
-if there is a prefix of a path missing.
- |
-
-
-
diff --git a/g3doc/api_docs/python/s2t/create_prensor_from_root_and_children.md b/g3doc/api_docs/python/s2t/create_prensor_from_root_and_children.md
deleted file mode 100644
index faabc20..0000000
--- a/g3doc/api_docs/python/s2t/create_prensor_from_root_and_children.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-
-
-
-# s2t.create_prensor_from_root_and_children
-
-
-
-
-
-
-
-
-
-
-s2t.create_prensor_from_root_and_children(
- root: s2t.NodeTensor,
- children: Mapping[s2t.Step, s2t.Prensor]
-) -> s2t.Prensor
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/get_default_options.md b/g3doc/api_docs/python/s2t/get_default_options.md
deleted file mode 100644
index 9eee444..0000000
--- a/g3doc/api_docs/python/s2t/get_default_options.md
+++ /dev/null
@@ -1,31 +0,0 @@
-description: Get the default options.
-
-
-
-
-
-
-# s2t.get_default_options
-
-
-
-
-
-
-
-Get the default options.
-
-
-s2t.get_default_options() -> Options
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/get_options_with_minimal_checks.md b/g3doc/api_docs/python/s2t/get_options_with_minimal_checks.md
deleted file mode 100644
index 644c0cc..0000000
--- a/g3doc/api_docs/python/s2t/get_options_with_minimal_checks.md
+++ /dev/null
@@ -1,31 +0,0 @@
-description: Options for calculation with minimal runtime checks.
-
-
-
-
-
-
-# s2t.get_options_with_minimal_checks
-
-
-
-
-
-
-
-Options for calculation with minimal runtime checks.
-
-
-s2t.get_options_with_minimal_checks() -> Options
-
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/get_ragged_tensor.md b/g3doc/api_docs/python/s2t/get_ragged_tensor.md
deleted file mode 100644
index 04eaeaf..0000000
--- a/g3doc/api_docs/python/s2t/get_ragged_tensor.md
+++ /dev/null
@@ -1,86 +0,0 @@
-description: Get a ragged tensor for a path. (deprecated)
-
-
-
-
-
-
-# s2t.get_ragged_tensor
-
-
-
-
-
-
-
-Get a ragged tensor for a path. (deprecated)
-
-
-s2t.get_ragged_tensor(
- t: s2t.Prensor,
- p: s2t.Path,
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> tf.RaggedTensor
-
-
-
-
-
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-Use the Prensor class method instead.
-
-All steps are represented in the ragged tensor.
-
-
-
-
-Args |
-
-
-|
-`t`
- |
-
-The Prensor to extract tensors from.
- |
-
-|
-`p`
- |
-
-the path to a leaf node in `t`.
- |
-
-|
-`options`
- |
-
-used to pass options for calculating ragged tensors.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A ragged tensor containing values of the leaf node, preserving the
-structure along the path. Raises an error if the path is not found.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/get_ragged_tensors.md b/g3doc/api_docs/python/s2t/get_ragged_tensors.md
deleted file mode 100644
index 3bf2171..0000000
--- a/g3doc/api_docs/python/s2t/get_ragged_tensors.md
+++ /dev/null
@@ -1,75 +0,0 @@
-description: Gets ragged tensors for all the leaves of the prensor expression. (deprecated)
-
-
-
-
-
-
-# s2t.get_ragged_tensors
-
-
-
-
-
-
-
-Gets ragged tensors for all the leaves of the prensor expression. (deprecated)
-
-
-s2t.get_ragged_tensors(
- t: s2t.Prensor,
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> Mapping[s2t.Path, tf.RaggedTensor]
-
-
-
-
-
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-Use the Prensor class method instead.
-
-
-
-
-Args |
-
-
-|
-`t`
- |
-
-The Prensor to extract tensors from.
- |
-
-|
-`options`
- |
-
-used to pass options for calculating ragged tensors.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A map from paths to ragged tensors.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/get_sparse_tensor.md b/g3doc/api_docs/python/s2t/get_sparse_tensor.md
deleted file mode 100644
index b6ec8ab..0000000
--- a/g3doc/api_docs/python/s2t/get_sparse_tensor.md
+++ /dev/null
@@ -1,87 +0,0 @@
-description: Gets a sparse tensor for path p. (deprecated)
-
-
-
-
-
-
-# s2t.get_sparse_tensor
-
-
-
-
-
-
-
-Gets a sparse tensor for path p. (deprecated)
-
-
-s2t.get_sparse_tensor(
- t: s2t.Prensor,
- p: s2t.Path,
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> tf.SparseTensor
-
-
-
-
-
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-Use the Prensor class method instead.
-
-Note that any optional fields are not registered as dimensions, as they can't
-be represented in a sparse tensor.
-
-
-
-
-Args |
-
-
-|
-`t`
- |
-
-The Prensor to extract tensors from.
- |
-
-|
-`p`
- |
-
-The path to a leaf node in `t`.
- |
-
-|
-`options`
- |
-
-Currently unused.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A sparse tensor containing values of the leaf node, preserving the
-structure along the path. Raises an error if the path is not found.
- |
-
-
-
-
diff --git a/g3doc/api_docs/python/s2t/get_sparse_tensors.md b/g3doc/api_docs/python/s2t/get_sparse_tensors.md
deleted file mode 100644
index 4c50fb7..0000000
--- a/g3doc/api_docs/python/s2t/get_sparse_tensors.md
+++ /dev/null
@@ -1,75 +0,0 @@
-description: Gets sparse tensors for all the leaves of the prensor expression. (deprecated)
-
-
-
-
-
-
-# s2t.get_sparse_tensors
-
-
-
-
-
-
-
-Gets sparse tensors for all the leaves of the prensor expression. (deprecated)
-
-
-s2t.get_sparse_tensors(
- t: s2t.Prensor,
- options: calculate_options.Options = calculate_options.get_default_options()
-) -> Mapping[s2t.Path, tf.SparseTensor]
-
-
-
-
-
-
-Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
-Instructions for updating:
-Use the Prensor class method instead.
-
-
-
-
-Args |
-
-
-|
-`t`
- |
-
-The Prensor to extract tensors from.
- |
-
-|
-`options`
- |
-
-Currently unused.
- |
-
-
-
-
-
-
-
-
-Returns |
-
-|
-A map from paths to sparse tensors.
- |
-
-
-
-
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 0000000..67a6e4f
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,129 @@
+site_name: Struct2Tensor
+repo_name: "Struct2Tensor"
+repo_url: https://github.com/google/struct2tensor
+
+theme:
+ name: material
+ palette:
+ # Palette toggle for automatic mode
+ - media: "(prefers-color-scheme)"
+ primary: custom
+ accent: custom
+ toggle:
+ icon: material/brightness-auto
+ name: Switch to light mode
+
+ # Palette toggle for light mode
+ - media: "(prefers-color-scheme: light)"
+ primary: custom
+ accent: custom
+ scheme: default
+ toggle:
+ icon: material/brightness-7
+ name: Switch to dark mode
+
+ # Palette toggle for dark mode
+ - media: "(prefers-color-scheme: dark)"
+ primary: custom
+ accent: custom
+ scheme: slate
+ toggle:
+ icon: material/brightness-4
+ name: Switch to system preference
+ favicon: assets/favicon.png
+
+ features:
+ - content.code.copy
+ - content.code.select
+ - content.action.edit
+plugins:
+ - search
+ - autorefs
+ - mkdocstrings:
+ default_handler: python
+ handlers:
+ python:
+ options:
+ show_source: true
+ show_root_heading: true
+ unwrap_annotated: true
+ show_symbol_type_toc: true
+ show_symbol_type_heading: true
+ merge_init_into_class: true
+ show_signature_annotations: true
+ separate_signature: true
+ signature_crossrefs: true
+ group_by_category: true
+ show_category_heading: true
+ inherited_members: true
+ show_submodules: true
+ show_object_full_path: false
+ show_root_full_path: true
+ docstring_section_style: "spacy"
+ show_if_no_docstring: true
+ summary: true
+ filters:
+ - "!^_"
+ - "^__init__$"
+ - "^__call__$"
+ - "!^logger"
+ - "!_test$"
+ - "!_test_util$"
+ extensions:
+ - griffe_inherited_docstrings
+ import:
+ - https://docs.python.org/3/objects.inv
+ - mkdocs-jupyter:
+ execute: false
+ - caption:
+ figure:
+ ignore_alt: true
+
+markdown_extensions:
+ - admonition
+ - attr_list
+ - def_list
+ - tables
+ - toc:
+ permalink: true
+ - pymdownx.highlight:
+ anchor_linenums: true
+ linenums: false
+ line_spans: __span
+ pygments_lang_class: true
+ - pymdownx.inlinehilite
+ - pymdownx.snippets
+ - pymdownx.superfences
+ - pymdownx.arithmatex:
+ generic: true
+ - pymdownx.critic
+ - pymdownx.caret
+ - pymdownx.keys
+ - pymdownx.mark
+ - pymdownx.tilde
+ - markdown_grid_tables
+ - md_in_html
+ - pymdownx.emoji:
+ emoji_index: !!python/name:material.extensions.emoji.twemoji
+ emoji_generator: !!python/name:material.extensions.emoji.to_svg
+
+extra_css:
+ - stylesheets/extra.css
+
+extra_javascript:
+ - javascripts/mathjax.js
+ - https://unpkg.com/mathjax@3/es5/tex-mml-chtml.js
+
+watch:
+ - struct2tensor
+nav:
+ - Examples:
+ - "Your structured data into Tensorflow": examples/prensor_playground
+
+ - API Docs:
+ - "s2t":
+ "Overview": api_docs/python/s2t
+ "s2t": api_docs/python/s2t/s2t
+ - "expression_impl":
+ "Overview": api_docs/python/expression_impl
+ "expression_impl": api_docs/python/expression_impl/expression_impl
diff --git a/requirements-docs.txt b/requirements-docs.txt
new file mode 100644
index 0000000..bf02e12
--- /dev/null
+++ b/requirements-docs.txt
@@ -0,0 +1,9 @@
+mkdocs
+mkdocs-material
+mkdocstrings[python]
+griffe-inherited-docstrings
+mkdocs-autorefs
+mkdocs-jupyter
+mkdocs-caption
+markdown-grid-tables
+black
diff --git a/setup.py b/setup.py
index f072d74..4774fc7 100644
--- a/setup.py
+++ b/setup.py
@@ -64,6 +64,11 @@ def select_constraint(default, nightly=None, git_master=None):
exec(fp.read(), globals_dict) # pylint: disable=exec-used
__version__ = globals_dict['__version__']
+# Get documentation build requirements
+with open("requirements-docs.txt", "r") as fp:
+ docs_reqs = fp.readlines()
+docs_reqs = [req.replace("\n", "") for req in docs_reqs]
+
setup(
name='struct2tensor',
version=__version__,
@@ -90,6 +95,7 @@ def select_constraint(default, nightly=None, git_master=None):
),
'pyarrow>=10,<11',
],
+ extras_require={"docs": docs_reqs},
# Add in any packaged data.
include_package_data=True,
package_data={'': ['*.lib', '*.so']},
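With the `extras_require` wiring above, the docs toolchain should install via the `docs` extra (`pip install "struct2tensor[docs]"`), after which `mkdocs build` or `mkdocs serve` can be run at the repo root. A small sanity check, assuming a package built with this `setup.py` change is installed:

```python
from importlib.metadata import metadata

# Requires an installed struct2tensor build that includes this setup.py change.
meta = metadata("struct2tensor")
assert "docs" in (meta.get_all("Provides-Extra") or [])
```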
diff --git a/struct2tensor/__init__.py b/struct2tensor/__init__.py
index dbf5aac..80db160 100644
--- a/struct2tensor/__init__.py
+++ b/struct2tensor/__init__.py
@@ -18,14 +18,18 @@
from struct2tensor.calculate import calculate_prensors_with_graph
from struct2tensor.calculate_options import get_default_options
from struct2tensor.calculate_options import get_options_with_minimal_checks
-from struct2tensor.calculate_with_source_paths import calculate_prensors_with_source_paths
+from struct2tensor.calculate_with_source_paths import (
+ calculate_prensors_with_source_paths,
+)
# Import expressions API.
from struct2tensor.create_expression import create_expression_from_prensor
from struct2tensor.expression import Expression
# Import expression queries API
-from struct2tensor.expression_impl.proto import create_expression_from_file_descriptor_set
+from struct2tensor.expression_impl.proto import (
+ create_expression_from_file_descriptor_set,
+)
from struct2tensor.expression_impl.proto import create_expression_from_proto
# Import path API
@@ -52,3 +56,30 @@
# tf.compat.v1.Session.run() will be able to take a Prensor and return a
# PrensorValue.
import struct2tensor.prensor_value
+
+__all__ = [
+ "s2t",
+ "calculate_prensors",
+ "calculate_prensors_with_graph",
+ "calculate_prensors_with_source_paths",
+ "ChildNodeTensor",
+ "create_expression_from_file_descriptor_set",
+ "create_expression_from_prensor",
+ "create_expression_from_proto",
+ "create_path",
+ "create_prensor_from_descendant_nodes",
+ "create_prensor_from_root_and_children",
+ "Expression",
+ "get_default_options",
+ "get_options_with_minimal_checks",
+ "get_ragged_tensor",
+ "get_ragged_tensors",
+ "get_sparse_tensor",
+ "get_sparse_tensors",
+ "LeafNodeTensor",
+ "NodeTensor",
+ "Path",
+ "Prensor",
+ "RootNodeTensor",
+ "Step",
+]
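A quick smoke test of the surface exported above (hedged: `"s2t"` appears in `__all__` but is the conventional import alias, not an attribute of the package, so it is skipped here):

```python
import struct2tensor as s2t

p = s2t.create_path("foo.bar")
assert len(p) == 2  # Path supports len(); the steps are "foo" and "bar"

# Every other name in __all__ should resolve on the package.
missing = [n for n in s2t.__all__ if n != "s2t" and not hasattr(s2t, n)]
assert not missing, missing
```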
diff --git a/struct2tensor/expression.py b/struct2tensor/expression.py
index c3b7665..33987bc 100644
--- a/struct2tensor/expression.py
+++ b/struct2tensor/expression.py
@@ -300,7 +300,7 @@ def get_known_descendants(self) -> Mapping[path.Path, "Expression"]:
Returns:
A mapping from paths (relative to the root of the subexpression) to
- expressions.
+ expressions.
"""
known_subexpressions = {
k: v.get_known_descendants()
@@ -478,7 +478,8 @@ def map_field_values(self, source_path: CoercableToPath,
new_field_name: path.Step) -> "Expression":
"""Map a primitive field to create a new primitive field.
- Note: the dtype argument is added since the v1 API.
+ !!! Note
+  The dtype argument is new since the v1 API.
Args:
source_path: the origin path.
@@ -601,13 +602,14 @@ def get_schema(self, create_schema_features=True) -> schema_pb2.Schema:
def schema_string(self, limit: Optional[int] = None) -> str:
"""Returns a schema for the expression.
- E.g.
-
+  For example:
+ ```
repeated root:
optional int32 foo
optional bar:
optional string baz
optional int64 bak
+ ```
Note that unknown fields and subexpressions are not displayed.
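A usage sketch for `schema_string` (hypothetical setup: `serialized` and `MyProto` stand in for a real 1-D string tensor of protos and a real message type):

```python
import struct2tensor as s2t

# serialized: 1-D string tensor of MyProto messages (placeholder names).
expr = s2t.create_expression_from_proto(serialized, MyProto.DESCRIPTOR)
print(expr.schema_string(limit=10))  # limit caps how much of the tree is printed
```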
diff --git a/struct2tensor/expression_impl/__init__.py b/struct2tensor/expression_impl/__init__.py
index acd29ae..df7582a 100644
--- a/struct2tensor/expression_impl/__init__.py
+++ b/struct2tensor/expression_impl/__init__.py
@@ -16,7 +16,7 @@
The modules in this file should be accessed like the following:
-```
+```python
import struct2tensor as s2t
from struct2tensor import expression_impl
@@ -41,3 +41,68 @@
from struct2tensor.expression_impl import reroot
from struct2tensor.expression_impl import size
from struct2tensor.expression_impl import slice_expression
+
+
+__all__ = [
+ "apply_schema",
+ "apply_schema.apply_schema",
+ "broadcast",
+ "broadcast.broadcast",
+ "broadcast.broadcast_anonymous",
+ "depth_limit",
+ "depth_limit.limit_depth",
+ "filter_expression",
+ "filter_expression.filter_by_child",
+ "filter_expression.filter_by_sibling",
+ "index",
+ "index.get_index_from_end",
+ "index.get_positional_index",
+ "map_prensor",
+ "map_prensor.map_ragged_tensor",
+ "map_prensor.map_sparse_tensor",
+ "map_prensor_to_prensor",
+ "map_prensor_to_prensor.create_schema",
+ "map_prensor_to_prensor.map_prensor_to_prensor",
+ "map_prensor_to_prensor.Schema",
+ "map_values",
+ "map_values.map_many_values",
+ "map_values.map_values",
+ "map_values.map_values_anonymous",
+ "parquet",
+ "parquet.calculate_parquet_values",
+ "parquet.create_expression_from_parquet_file",
+ "parquet.ParquetDataset",
+ "placeholder",
+ "placeholder.create_expression_from_schema",
+ "placeholder.get_placeholder_paths_from_graph",
+ "project",
+ "project.project",
+ "promote",
+ "promote_and_broadcast",
+ "promote_and_broadcast.promote_and_broadcast",
+ "promote_and_broadcast.promote_and_broadcast_anonymous",
+ "promote.promote",
+ "promote.promote_anonymous",
+ "promote.PromoteChildExpression",
+ "promote.PromoteExpression",
+ "proto",
+ "proto.create_expression_from_file_descriptor_set",
+ "proto.create_expression_from_proto",
+ "proto.create_transformed_field",
+ "proto.DescriptorPool",
+ "proto.FileDescriptorSet",
+ "proto.is_proto_expression",
+ "proto.ProtoExpression",
+ "proto.TransformFn",
+ "reroot",
+ "reroot.create_proto_index_field",
+ "reroot.reroot",
+ "size",
+ "size.has",
+ "size.size",
+ "size.size_anonymous",
+ "size.SizeExpression",
+ "slice_expression",
+ "slice_expression.IndexValue",
+ "slice_expression.slice_expression",
+]
diff --git a/struct2tensor/expression_impl/apply_schema.py b/struct2tensor/expression_impl/apply_schema.py
index edcfb7c..0787ac5 100644
--- a/struct2tensor/expression_impl/apply_schema.py
+++ b/struct2tensor/expression_impl/apply_schema.py
@@ -32,13 +32,15 @@
This does not filter out fields not in the schema.
-
+```python
my_expr = ...
-my_schema = ...schema here...
+my_schema = ...  # schema here
my_new_schema = my_expr.apply_schema(my_schema).get_schema()
-my_new_schema has semantically identical information on the fields as my_schema.
+# my_new_schema carries field information semantically identical to my_schema.
+```
TODO(martinz): Add utilities to:
+
1. Get the (non-deprecated) paths from a schema.
2. Check if any paths in the schema are not in the expression.
3. Check if any paths in the expression are not in the schema.
diff --git a/struct2tensor/expression_impl/broadcast.py b/struct2tensor/expression_impl/broadcast.py
index 2eb0873..78f2798 100644
--- a/struct2tensor/expression_impl/broadcast.py
+++ b/struct2tensor/expression_impl/broadcast.py
@@ -26,7 +26,9 @@
+-event*
|
+-val*-int64
+```
+```json
session: {
event: {}
event: {}
@@ -42,7 +44,7 @@
Then:
-```
+```python
broadcast.broadcast(expr, path.Path(["session","val"]), "event", "nv")
```
@@ -58,7 +60,9 @@
| +---nv*-int64
|
+-val*-int64
+```
+```json
session: {
event: {
nv: 10
diff --git a/struct2tensor/expression_impl/filter_expression.py b/struct2tensor/expression_impl/filter_expression.py
index a791f05..5cca1d1 100644
--- a/struct2tensor/expression_impl/filter_expression.py
+++ b/struct2tensor/expression_impl/filter_expression.py
@@ -53,7 +53,7 @@
The following call will have the same effect as above:
-```
+```python
root_2 = filter_expression.filter_by_child(
root, path.create_path("doc"), "keep_me", "new_doc")
```
diff --git a/struct2tensor/expression_impl/index.py b/struct2tensor/expression_impl/index.py
index a18d457..63436f1 100644
--- a/struct2tensor/expression_impl/index.py
+++ b/struct2tensor/expression_impl/index.py
@@ -19,7 +19,7 @@
Given:
-```
+```json
session: {
event: {
val: 111
@@ -41,13 +41,13 @@
}
```
-```
+```python
get_positional_index(expr, path.Path(["event","val"]), "val_index")
```
yields:
-```
+```json
session: {
event: {
val: 111
@@ -75,12 +75,12 @@
}
```
-```
+```python
get_index_from_end(expr, path.Path(["event","val"]), "neg_val_index")
```
yields:
-```
+```json
session: {
event: {
val: 111
diff --git a/struct2tensor/expression_impl/map_prensor.py b/struct2tensor/expression_impl/map_prensor.py
index cec43b1..2ed572e 100644
--- a/struct2tensor/expression_impl/map_prensor.py
+++ b/struct2tensor/expression_impl/map_prensor.py
@@ -18,7 +18,7 @@
Assume expr is:
-```
+```json
session: {
event: {
val_a: 10
@@ -45,7 +45,7 @@
map_sparse_tensor converts val_a and val_b to sparse tensors,
and then add them to produce val_sum.
-```
+```python
new_root = map_prensor.map_sparse_tensor(
expr,
path.Path(["event"]),
@@ -59,7 +59,7 @@
map_ragged_tensor converts val_a and val_b to ragged tensors,
and then add them to produce val_sum.
-```
+```python
new_root = map_prensor.map_ragged_tensor(
expr,
path.Path(["event"]),
@@ -72,7 +72,7 @@
The result of either is:
-```
+```json
session: {
event: {
val_a: 10
@@ -130,7 +130,7 @@ def map_sparse_tensor(root: expression.Expression, root_path: path.Path,
Returns:
A new root expression containing the old root expression plus the new path,
- root_path.get_child(new_field_name), with the result of the operation.
+ root_path.get_child(new_field_name), with the result of the operation.
"""
return _map_sparse_tensor_impl(root, root_path, paths, operation, is_repeated,
@@ -157,7 +157,7 @@ def map_ragged_tensor(root: expression.Expression, root_path: path.Path,
Returns:
A new root expression containing the old root expression plus the new path,
- root_path.get_child(new_field_name), with the result of the operation.
+ root_path.get_child(new_field_name), with the result of the operation.
"""
return _map_ragged_tensor_impl(root, root_path, paths, operation, is_repeated,
dtype, new_field_name)[0]
@@ -353,8 +353,8 @@ def _map_ragged_tensor_impl(root: expression.Expression, root_path: path.Path,
Returns:
An expression/path pair (expr,p) with a new root expression containing
- the old root expression plus the new path,
- root_path.get_child(new_field_name), with the result of the operation.
+ the old root expression plus the new path,
+ root_path.get_child(new_field_name), with the result of the operation.
"""
def new_op(tree: prensor.Prensor,
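For reference, the `operation` passed to `map_sparse_tensor` above receives one `tf.SparseTensor` per entry in `paths` and must return a single `tf.SparseTensor` for the new field. A minimal sketch matching the val_a/val_b example in the module docstring:

```python
import tensorflow as tf

def add_vals(val_a: tf.SparseTensor, val_b: tf.SparseTensor) -> tf.SparseTensor:
    # Element-wise sum of the two leaf tensors, producing val_sum.
    return tf.sparse.add(val_a, val_b)
```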
diff --git a/struct2tensor/expression_impl/map_prensor_to_prensor.py b/struct2tensor/expression_impl/map_prensor_to_prensor.py
index ce3246b..d54ab0b 100644
--- a/struct2tensor/expression_impl/map_prensor_to_prensor.py
+++ b/struct2tensor/expression_impl/map_prensor_to_prensor.py
@@ -34,7 +34,7 @@
foo2 bar2
```
-```
+```python
my_result_schema = create_schema(
is_repeated=True,
children={"foo2":{is_repeated:True, dtype:tf.int64},
@@ -49,7 +49,9 @@
event
/ \
foo bar
+```
+```python
result = map_prensor_to_prensor(
original,
path.Path(["session","event"]),
@@ -155,11 +157,13 @@ def create_schema(is_repeated: bool = True,
children: Optional[Dict[path.Step, Any]] = None) -> Schema:
"""Create a schema recursively.
- Example:
- my_result_schema = create_schema(
- is_repeated=True,
- children={"foo2":{is_repeated=True, dtype=tf.int64},
- "bar2":{is_repeated=False, dtype=tf.int64}})
+ !!! Example
+ ```python
+ my_result_schema = create_schema(
+ is_repeated=True,
+ children={"foo2":{is_repeated=True, dtype=tf.int64},
+ "bar2":{is_repeated=False, dtype=tf.int64}})
+ ```
Args:
is_repeated: whether the root is repeated.
@@ -211,37 +215,46 @@ def map_prensor_to_prensor(
For example, suppose you have an op my_op, that takes a prensor of the form:
+ ```
event
- / \
- foo bar
+ / \
+ foo bar
+ ```
and produces a prensor of the form my_result_schema:
- event
- / \
- foo2 bar2
+ ```
+ event
+ / \
+ foo2 bar2
+ ```
If you give it an expression original with the schema:
+ ```
session
|
event
/ \
foo bar
-
+ ```
+ ```python
result = map_prensor_to_prensor(
original,
path.Path(["session","event"]),
my_op,
my_output_schema)
+ ```
Result will have the schema:
+ ```
session
|
event--------
/ \ \ \
foo bar foo2 bar2
+ ```
Args:
root_expr: the root expression
diff --git a/struct2tensor/expression_impl/parquet.py b/struct2tensor/expression_impl/parquet.py
index ff66324..07b6b1e 100644
--- a/struct2tensor/expression_impl/parquet.py
+++ b/struct2tensor/expression_impl/parquet.py
@@ -13,17 +13,16 @@
# limitations under the License.
"""Apache Parquet Dataset.
-Example usage:
+!!! Example "Example Usage"
+ ```python
+ exp = create_expression_from_parquet_file(filenames)
+ docid_project_exp = project.project(exp, [path.Path(["DocId"])])
+ pqds = parquet_dataset.calculate_parquet_values([docid_project_exp], exp,
+ filenames, batch_size)
-```
- exp = create_expression_from_parquet_file(filenames)
- docid_project_exp = project.project(exp, [path.Path(["DocId"])])
- pqds = parquet_dataset.calculate_parquet_values([docid_project_exp], exp,
- filenames, batch_size)
-
- for prensors in pqds:
- doc_id_prensor = prensors[0]
-```
+ for prensors in pqds:
+ doc_id_prensor = prensors[0]
+ ```
"""
@@ -52,7 +51,7 @@ def create_expression_from_parquet_file(
Returns:
A PlaceholderRootExpression that should be used as the root of an expression
- graph.
+ graph.
"""
metadata = pq.ParquetFile(filenames[0]).metadata
@@ -220,14 +219,18 @@ class ParquetDataset(_RawParquetDataset):
The prensor will have a PrensorTypeSpec, which is created based on
value_paths.
- Note: In tensorflow v1 this dataset will not return a prensor. The output will
- be the same format as _RawParquetDataset's output (a vector of tensors).
- The following is a workaround in v1:
- pq_ds = ParquetDataset(...)
- type_spec = pq_ds.element_spec
- tensors = pq_ds.make_one_shot_iterator().get_next()
- prensor = type_spec.from_components(tensors)
- session.run(prensor)
+ !!! Note
+    In TensorFlow v1 this dataset will not return a prensor. The output will
+ be the same format as _RawParquetDataset's output (a vector of tensors).
+ The following is a workaround in v1:
+
+ ```python
+ pq_ds = ParquetDataset(...)
+ type_spec = pq_ds.element_spec
+ tensors = pq_ds.make_one_shot_iterator().get_next()
+ prensor = type_spec.from_components(tensors)
+ session.run(prensor)
+ ```
"""
def __init__(self, filenames: List[str], value_paths: List[str],
diff --git a/struct2tensor/expression_impl/parse_message_level_ex.py b/struct2tensor/expression_impl/parse_message_level_ex.py
index cba7895..2ef6aac 100644
--- a/struct2tensor/expression_impl/parse_message_level_ex.py
+++ b/struct2tensor/expression_impl/parse_message_level_ex.py
@@ -25,7 +25,7 @@
Specifically, consider google.protobuf.Any and proto maps:
-```
+```python
package foo.bar;
message MyMessage {
@@ -53,7 +53,7 @@
Thus, we can run:
-```
+```python
my_message_serialized_tensor = ...
my_message_parsed = parse_message_level_ex(
diff --git a/struct2tensor/expression_impl/placeholder.py b/struct2tensor/expression_impl/placeholder.py
index 782e9a7..ee0e1d9 100644
--- a/struct2tensor/expression_impl/placeholder.py
+++ b/struct2tensor/expression_impl/placeholder.py
@@ -22,7 +22,7 @@
Sample usage:
-```
+```python
placeholder_exp = placeholder.create_expression_from_schema(schema)
new_exp = expression_queries(placeholder_exp, ..)
result = calculate.calculate_values([new_exp],
@@ -53,7 +53,7 @@ def create_expression_from_schema(
Returns:
A PlaceholderRootExpression that should be used as the root of an expression
- graph.
+ graph.
"""
return _PlaceholderRootExpression(schema)
diff --git a/struct2tensor/expression_impl/project.py b/struct2tensor/expression_impl/project.py
index 1983698..de1cb2d 100644
--- a/struct2tensor/expression_impl/project.py
+++ b/struct2tensor/expression_impl/project.py
@@ -15,16 +15,15 @@
project is often used right before calculating the value.
-Example:
-
-```
-expr = ...
-new_expr = project.project(expr, [path.Path(["foo","bar"]),
- path.Path(["x", "y"])])
-[prensor_result] = calculate.calculate_prensors([new_expr])
-```
-
-prensor_result now has two paths, "foo.bar" and "x.y".
+!!! Example
+ ```python
+ expr = ...
+ new_expr = project.project(expr, [path.Path(["foo","bar"]),
+ path.Path(["x", "y"])])
+ [prensor_result] = calculate.calculate_prensors([new_expr])
+ ```
+
+ prensor_result now has two paths, "foo.bar" and "x.y".
"""
diff --git a/struct2tensor/expression_impl/promote.py b/struct2tensor/expression_impl/promote.py
index 3f14a19..b3ea8a7 100644
--- a/struct2tensor/expression_impl/promote.py
+++ b/struct2tensor/expression_impl/promote.py
@@ -27,7 +27,9 @@
+-event*
|
+-val*-int64
+```
+```json
session: {
event: {
val: 111
@@ -50,7 +52,7 @@
```
-```
+```python
promote.promote(expr, path.Path(["session", "event", "val"]), "nval")
```
@@ -66,7 +68,9 @@
| +-val*-int64
|
+-nval*-int64
+```
+```json
session: {
event: {
val: 111
diff --git a/struct2tensor/expression_impl/promote_and_broadcast.py b/struct2tensor/expression_impl/promote_and_broadcast.py
index c916ccd..1874627 100644
--- a/struct2tensor/expression_impl/promote_and_broadcast.py
+++ b/struct2tensor/expression_impl/promote_and_broadcast.py
@@ -27,7 +27,9 @@
+-user_info? (question mark indicates optional)
|
+-age? int64
+```
+```json
session: {
event: {
val: 1
@@ -55,7 +57,7 @@
}
```
-```
+```python
promote_and_broadcast.promote_and_broadcast(
path.Path(["event"]),{"nage":path.Path(["user_info","age"])})
```
@@ -76,7 +78,9 @@
+-user_info? (question mark indicates optional)
|
+-age? int64
+```
+```json
session: {
event: {
nage: 25
@@ -159,7 +163,7 @@ def promote_and_broadcast(root: expression.Expression,
Returns:
A new expression, where all the origin paths are promoted and broadcast
- until they are children of dest_path_parent.
+ until they are children of dest_path_parent.
"""
result_paths = {}
diff --git a/struct2tensor/expression_impl/proto.py b/struct2tensor/expression_impl/proto.py
index ce61c9d..f6f4b02 100644
--- a/struct2tensor/expression_impl/proto.py
+++ b/struct2tensor/expression_impl/proto.py
@@ -125,30 +125,32 @@ def transform_fn(parent_indices, values):
return (transformed_parent_indices, transformed_values)
Given:
+
- parent_indices: an int64 vector of non-decreasing parent message indices.
- values: a string vector of serialized protos having the same shape as
`parent_indices`.
+
`transform_fn` must return new parent indices and serialized values encoding
the same proto message as the passed in `values`. These two vectors must
have the same size, but it need not be the same as the input arguments.
- Note:
- If CalculateOptions.use_string_view (set at calculate time, thus this
- Expression cannot know beforehand) is True, `values` passed to
- `transform_fn` are string views pointing all the way back to the original
- input tensor (of serialized root protos). And `transform_fn` must maintain
- such views and avoid creating new values that are either not string views
- into the root protos or self-owned strings. This is because downstream
- decoding ops will still produce string views referring into its input
- (which are string views into the root proto) and they will only hold a
- reference to the original, root proto tensor, keeping it alive. So the input
- tensor may get destroyed after the decoding op.
+ !!! Note
+ If CalculateOptions.use_string_view (set at calculate time, thus this
+ Expression cannot know beforehand) is True, `values` passed to
+ `transform_fn` are string views pointing all the way back to the original
+ input tensor (of serialized root protos). And `transform_fn` must maintain
+ such views and avoid creating new values that are either not string views
+ into the root protos or self-owned strings. This is because downstream
+ decoding ops will still produce string views referring into its input
+ (which are string views into the root proto) and they will only hold a
+ reference to the original, root proto tensor, keeping it alive. So the input
+ tensor may get destroyed after the decoding op.
- In short, you can do element-wise transforms to `values`, but can't mutate
- the contents of elements in `values` or create new elements.
+ In short, you can do element-wise transforms to `values`, but can't mutate
+ the contents of elements in `values` or create new elements.
- To lift this restriction, a decoding op must be told to hold a reference
- of the input tensors of all its upstream decoding ops.
+ To lift this restriction, a decoding op must be told to hold a reference
+ of the input tensors of all its upstream decoding ops.
Args:
@@ -233,6 +235,7 @@ class _ProtoChildNodeTensor(prensor.ChildNodeTensor):
information needed by its children.
In particular:
+
1. Any needed regular fields are included.
2. Any needed extended fields are included.
3. Any needed map fields are included.
@@ -365,11 +368,12 @@ class _ProtoChildExpression(_AbstractProtoChildExpression):
"""An expression representing a proto submessage.
Supports:
- A standard submessage.
- An extension submessage.
- A protobuf.Any submessage.
- A proto map submessage.
- Also supports having fields of the above types.
+
+ - A standard submessage.
+ - An extension submessage.
+ - A protobuf.Any submessage.
+ - A proto map submessage.
+ - Also supports having fields of the above types.
"""
def __init__(self, parent: "_ParentProtoExpression",
@@ -680,10 +684,10 @@ def _get_child(
"""Get a child expression.
This will get one of the following:
- A regular field.
- An extension.
- An Any filtered by value.
- A map field.
+ - A regular field.
+ - An extension.
+ - An Any filtered by value.
+ - A map field.
Args:
parent: The parent expression.
diff --git a/struct2tensor/expression_impl/size.py b/struct2tensor/expression_impl/size.py
index aa430be..c087d48 100644
--- a/struct2tensor/expression_impl/size.py
+++ b/struct2tensor/expression_impl/size.py
@@ -15,14 +15,14 @@
Given a field "foo.bar",
-```
+```python
root = size(expr, path.Path(["foo","bar"]), "bar_size")
```
creates a new expression root that has an optional field "foo.bar_size", which
is always present, and contains the number of bar in a particular foo.
-```
+```python
root_2 = has(expr, path.Path(["foo","bar"]), "bar_has")
```
diff --git a/struct2tensor/expression_impl/slice_expression.py b/struct2tensor/expression_impl/slice_expression.py
index 92ed808..b022374 100644
--- a/struct2tensor/expression_impl/slice_expression.py
+++ b/struct2tensor/expression_impl/slice_expression.py
@@ -21,7 +21,7 @@
For example:
-```
+```python
>>> x = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
>>> print(x[2:5]) # all elements between index 2 inclusive and index 5 exclusive
['c', 'd', 'e']
@@ -45,7 +45,7 @@
A prensor can be considered to be interleaved lists and dictionaries.
E.g.:
-```
+```python
my_expression = [{
"foo":[
{"bar":[
@@ -62,7 +62,7 @@
}]
```
-```
+```python
result_1 = slice_expression.slice_expression(
my_expression, "foo.bar", "new_bar",begin=1, end=3)
@@ -89,7 +89,7 @@
}]
```
-```
+```python
result_2 = slice_expression.slice_expression(
my_expression, "foo.bar.baz", "new_baz",begin=1, end=3)
@@ -234,12 +234,14 @@ def _get_slice_mask(
For example, given:
an index with respect to its parent
The range is specified with beginning and an end.
+
1. If begin is not present, begin_index is implied to be zero.
2. If begin is negative, begin_index is the size of a particular
list + begin
3. If end is not present, end_index is the length of the list + 1.
4. If end is negative, end_index is the length of the list + end
5. If end is non-negative, end_index is end.
+
The mask is positive for all elements in range(begin_index, end_index), and
negative elsewhere.
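The begin/end rules listed in the hunk above resolve to concrete indices much like Python slicing, except for the `end` default. A pure-Python sketch of just that resolution step (not the library's `_get_slice_mask`):

```python
def resolve_range(size, begin=None, end=None):
    # Rules 1-2: begin defaults to 0; a negative begin counts from the list size.
    begin_index = 0 if begin is None else (size + begin if begin < 0 else begin)
    # Rules 3-5: a missing end means size + 1; a negative end counts from the size.
    end_index = size + 1 if end is None else (size + end if end < 0 else end)
    return begin_index, end_index

# The mask is then positive exactly for indices in range(begin_index, end_index).
assert resolve_range(7, begin=2, end=5) == (2, 5)
assert resolve_range(7, begin=-2) == (5, 8)
```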
diff --git a/struct2tensor/path.py b/struct2tensor/path.py
index 584e185..1a36e9c 100644
--- a/struct2tensor/path.py
+++ b/struct2tensor/path.py
@@ -317,6 +317,7 @@ def create_path(path_source: CoercableToPath) -> Path:
"""Create a path from an object.
The BNF for a path is:
+ ```
letter := [A-Za-z]
digit := [0-9]
:= "_"|"-"| | letter | digit
@@ -324,6 +325,7 @@ def create_path(path_source: CoercableToPath) -> Path:
:= "(" ( ".")* ")"
:= |
:= (( ".") * )?
+ ```
TODO(martinz): consider removing dash. This would break YouTube WatchNext.
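Hedged usage examples for the grammar above (the nonterminal names are reconstructed from a garbled source; the parenthesized form follows the extension-step rule):

```python
from struct2tensor import path

p = path.create_path("foo.bar")          # two simple steps
q = path.create_path("foo.(a.b.MyExt)")  # a simple step, then an extension step
assert len(p) == 2
```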
diff --git a/struct2tensor/prensor.py b/struct2tensor/prensor.py
index d80d96d..b41e057 100644
--- a/struct2tensor/prensor.py
+++ b/struct2tensor/prensor.py
@@ -122,13 +122,16 @@ def get_positional_index(self) -> tf.Tensor:
The positional index tells us which index of the parent an element is.
- For example, with the following parent indices: [0, 0, 2]
+ For example, with the following parent indices: `[0, 0, 2]`
we would have positional index:
+
+ ```python
[
0, # The 0th element of the 0th parent.
1, # The 1st element of the 0th parent.
0 # The 0th element of the 2nd parent.
- ].
+ ]
+ ```
For more information, view ops/run_length_before_op.cc
@@ -183,13 +186,16 @@ def get_positional_index(self) -> tf.Tensor:
The positional index tells us which index of the parent an element is.
- For example, with the following parent indices: [0, 0, 2]
+ For example, with the following parent indices: `[0, 0, 2]`
we would have positional index:
+
+ ```python
[
0, # The 0th element of the 0th parent.
1, # The 1st element of the 0th parent.
0 # The 0th element of the 2nd parent.
- ].
+ ]
+ ```
For more information, view ops/run_length_before_op.cc
@@ -455,7 +461,7 @@ def get_ragged_tensor(
Returns:
A ragged tensor containing values of the leaf node, preserving the
- structure along the path. Raises an error if the path is not found.
+ structure along the path. Raises an error if the path is not found.
"""
return _get_ragged_tensor(self, p, options=options)
@@ -476,7 +482,7 @@ def get_sparse_tensor(
Returns:
A sparse tensor containing values of the leaf node, preserving the
- structure along the path. Raises an error if the path is not found.
+ structure along the path. Raises an error if the path is not found.
"""
return _get_sparse_tensor(self, p, options=options)
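A pure-Python model of the positional index described in both docstrings above (the real kernel lives in ops/run_length_before_op.cc; this sketch only illustrates the semantics):

```python
def positional_index(parent_index):
    seen = {}
    out = []
    for parent in parent_index:
        out.append(seen.get(parent, 0))  # count of prior elements sharing this parent
        seen[parent] = seen.get(parent, 0) + 1
    return out

assert positional_index([0, 0, 2]) == [0, 1, 0]
```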
diff --git a/struct2tensor/prensor_util.py b/struct2tensor/prensor_util.py
index 234e6d6..54cee07 100644
--- a/struct2tensor/prensor_util.py
+++ b/struct2tensor/prensor_util.py
@@ -47,7 +47,7 @@ def get_sparse_tensor(
Returns:
A sparse tensor containing values of the leaf node, preserving the
- structure along the path. Raises an error if the path is not found.
+ structure along the path. Raises an error if the path is not found.
"""
return t.get_sparse_tensor(p, options)
@@ -88,7 +88,7 @@ def get_ragged_tensor(
Returns:
A ragged tensor containing values of the leaf node, preserving the
- structure along the path. Raises an error if the path is not found.
+ structure along the path. Raises an error if the path is not found.
"""
return t.get_ragged_tensor(p, options)