docs
amaiya committed Mar 30, 2023
1 parent 360462d commit e40bb02
Showing 13 changed files with 352 additions and 122 deletions.
10 changes: 6 additions & 4 deletions docs/dataset.html
@@ -180,8 +180,9 @@ <h1 class="title">Module <code>ktrain.dataset</code></h1>
def __init__(self, x, y, batch_size=32, shuffle=True):
# error checks
err = False
if type(x) == np.ndarray and len(x.shape) != 2:
err = True
if type(x) == np.ndarray:
if len(x.shape) != 2:
err = True
elif type(x) == list:
for d in x:
if type(d) != np.ndarray or len(d.shape) != 2:
@@ -473,8 +474,9 @@ <h3>Methods</h3>
def __init__(self, x, y, batch_size=32, shuffle=True):
# error checks
err = False
if type(x) == np.ndarray and len(x.shape) != 2:
err = True
if type(x) == np.ndarray:
if len(x.shape) != 2:
err = True
elif type(x) == list:
for d in x:
if type(d) != np.ndarray or len(d.shape) != 2:
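For reference, the dataset.html hunk above replaces a single combined shape check on `np.ndarray` inputs with a nested conditional. A minimal standalone sketch of the same validation logic (the helper name is hypothetical, not part of ktrain's API), assuming `x` must be either a 2-D NumPy array or a list of 2-D NumPy arrays:

```python
import numpy as np

def _validate_features(x):
    """Raise ValueError unless x is a 2-D array or a list of 2-D arrays."""
    err = False
    if type(x) == np.ndarray:
        if len(x.shape) != 2:
            err = True
    elif type(x) == list:
        for d in x:
            if type(d) != np.ndarray or len(d.shape) != 2:
                err = True
    else:
        err = True
    if err:
        raise ValueError("x must be a 2-D numpy array or a list of 2-D numpy arrays")

_validate_features(np.zeros((10, 4)))        # passes
_validate_features([np.zeros((10, 4))] * 2)  # passes
```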
32 changes: 15 additions & 17 deletions docs/lroptimize/optimization.html
@@ -558,7 +558,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
# Simple example, double the gradients.
return [(2. * g, v) for g, v in grads_and_vars]

optimizer = tf.keras.optimizers.legacy.SGD(
optimizer = tf.keras.optimizers.SGD(
1e-3, gradient_transformers=[my_gradient_transformer])
</code></pre>
<h2 id="args">Args</h2>
@@ -568,9 +568,9 @@ <h2 id="args">Args</h2>
by the optimizer.</dd>
<dt><strong><code>gradient_aggregator</code></strong></dt>
<dd>The function to use to aggregate gradients across
devices (when using <code>tf.distribute.Strategy</code>). If <code>None</code>, defaults
to summing the gradients across devices. The function should accept
and return a list of <code>(gradient, variable)</code> tuples.</dd>
devices (when using <code>tf.distribute.Strategy</code>). If <code>None</code>, defaults to
summing the gradients across devices. The function should accept and
return a list of <code>(gradient, variable)</code> tuples.</dd>
<dt><strong><code>gradient_transformers</code></strong></dt>
<dd>Optional. List of functions to use to transform
gradients before applying updates to Variables. The functions are
@@ -582,10 +582,9 @@ <h2 id="args">Args</h2>
If <code>clipvalue</code> (float) is set, the gradient of each weight
is clipped to be no higher than this value.
If <code>clipnorm</code> (float) is set, the gradient of each weight
is individually clipped so that its norm is no higher than this
value. If <code>global_clipnorm</code> (float) is set the gradient of all
weights is clipped so that their global norm is no higher than this
value.</dd>
is individually clipped so that its norm is no higher than this value.
If <code>global_clipnorm</code> (float) is set the gradient of all weights is
clipped so that their global norm is no higher than this value.</dd>
</dl>
<h2 id="raises">Raises</h2>
<dl>
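The `gradient_transformers` example and the clipping arguments documented in the hunks above are all constructor keyword arguments. A hedged sketch, assuming an OptimizerV2-style optimizer (on newer TensorFlow releases the same arguments may only be accepted by `tf.keras.optimizers.legacy.SGD`):

```python
import tensorflow as tf

def double_gradients(grads_and_vars):
    # A gradient_transformers entry receives and returns (gradient, variable) pairs.
    return [(2.0 * g, v) for g, v in grads_and_vars]

# Gradients are doubled before being applied to the variables.
opt_transform = tf.keras.optimizers.SGD(
    1e-3, gradient_transformers=[double_gradients])

# Each weight's gradient is clipped so its norm is no higher than 1.0.
opt_clipped = tf.keras.optimizers.SGD(1e-3, clipnorm=1.0)
```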
@@ -744,7 +743,7 @@ <h3>Ancestors</h3>
<ul class="hlist">
<li>keras.optimizers.optimizer_v2.adam.Adam</li>
<li>keras.optimizers.optimizer_v2.optimizer_v2.OptimizerV2</li>
<li>tensorflow.python.trackable.base.Trackable</li>
<li>tensorflow.python.training.tracking.base.Trackable</li>
</ul>
<h3>Static methods</h3>
<dl>
@@ -777,8 +776,8 @@ <h3>Methods</h3>
<p>This is the second part of <code>minimize()</code>. It returns an <code>Operation</code> that
applies gradients.</p>
<p>The method sums gradients from all replicas in the presence of
<code>tf.distribute.Strategy</code> by default. You can aggregate gradients
yourself by passing <code>experimental_aggregate_gradients=False</code>.</p>
<code>tf.distribute.Strategy</code> by default. You can aggregate gradients yourself by
passing <code>experimental_aggregate_gradients=False</code>.</p>
<p>Example:</p>
<pre><code class="language-python">grads = tape.gradient(loss, vars)
grads = tf.distribute.get_replica_context().all_reduce('sum', grads)
@@ -792,13 +791,12 @@ <h2 id="args">Args</h2>
<dt><strong><code>grads_and_vars</code></strong></dt>
<dd>List of (gradient, variable) pairs.</dd>
<dt><strong><code>name</code></strong></dt>
<dd>Optional name for the returned operation. Default to the name
passed to the <code>Optimizer</code> constructor.</dd>
<dd>Optional name for the returned operation. Default to the name passed
to the <code>Optimizer</code> constructor.</dd>
<dt><strong><code>experimental_aggregate_gradients</code></strong></dt>
<dd>Whether to sum gradients from
different replicas in the presence of <code>tf.distribute.Strategy</code>. If
False, it's user responsibility to aggregate the gradients. Default
to True.</dd>
<dd>Whether to sum gradients from different
replicas in the presence of <code>tf.distribute.Strategy</code>. If False, it's
user responsibility to aggregate the gradients. Default to True.</dd>
</dl>
<h2 id="returns">Returns</h2>
<p>An <code>Operation</code> that applies the specified gradients. The <code>iterations</code>
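A short sketch of the `apply_gradients` pattern documented above. The model, data, and loss are placeholder assumptions, and the manual-aggregation lines are left as comments because they only apply inside a `tf.distribute.Strategy` replica context:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD(1e-3)
x, y = tf.constant([[1.0]]), tf.constant([[2.0]])

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))

grads = tape.gradient(loss, model.trainable_variables)
# Inside a tf.distribute.Strategy replica context, aggregation could be done manually:
#   grads = tf.distribute.get_replica_context().all_reduce('sum', grads)
#   optimizer.apply_gradients(zip(grads, model.trainable_variables),
#                             experimental_aggregate_gradients=False)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```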
18 changes: 9 additions & 9 deletions docs/lroptimize/triangular.html
@@ -761,19 +761,19 @@ <h3>Methods</h3>
</code></dt>
<dd>
<div class="desc"><p>Called at the end of an epoch.</p>
<p>Subclasses should override for any actions to run. This function should
only be called during TRAIN mode.</p>
<p>Subclasses should override for any actions to run. This function should only
be called during TRAIN mode.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>epoch</code></strong></dt>
<dd>Integer, index of epoch.</dd>
<dt><strong><code>logs</code></strong></dt>
<dd>Dict, metric results for this training epoch, and for the
validation epoch if validation is performed. Validation result
keys are prefixed with <code>val_</code>. For training epoch, the values of
the <code>Model</code>'s metrics are returned. Example:
<code>{'loss': 0.2, 'accuracy': 0.7}</code>.</dd>
</dl></div>
validation epoch if validation is performed. Validation result keys
are prefixed with <code>val_</code>. For training epoch, the values of the</dd>
</dl>
<p><code>Model</code>'s metrics are returned. Example : <code>{'loss': 0.2, 'accuracy':
0.7}</code>.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
@@ -839,8 +839,8 @@ <h2 id="args">Args</h2>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>logs</code></strong></dt>
<dd>Dict. Currently no data is passed to this argument for this
method but that may change in the future.</dd>
<dd>Dict. Currently no data is passed to this argument for this method
but that may change in the future.</dd>
</dl></div>
<details class="source">
<summary>
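The `on_epoch_end` docstring reflowed in this hunk describes the `logs` dict handed to callbacks. A small illustrative callback (hypothetical, not part of ktrain) that reads the documented keys, with validation entries prefixed by `val_`:

```python
import tensorflow as tf

class LossLogger(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # logs holds this epoch's metrics, e.g. {'loss': 0.2, 'accuracy': 0.7},
        # plus 'val_'-prefixed entries when validation data is supplied.
        logs = logs or {}
        print(f"epoch {epoch}: "
              f"loss={logs.get('loss', float('nan')):.4f}, "
              f"val_loss={logs.get('val_loss', float('nan')):.4f}")
```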
6 changes: 3 additions & 3 deletions docs/text/index.html
@@ -5940,7 +5940,7 @@ <h3>Methods</h3>
self.torch_device
)

def summarize(self, doc):
def summarize(self, doc, **kwargs):
&#34;&#34;&#34;
```
summarize document text
@@ -5977,7 +5977,7 @@ <h3>Ancestors</h3>
<h3>Methods</h3>
<dl>
<dt id="ktrain.text.TransformerSummarizer.summarize"><code class="name flex">
<span>def <span class="ident">summarize</span></span>(<span>self, doc)</span>
<span>def <span class="ident">summarize</span></span>(<span>self, doc, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><pre><code>summarize document text
@@ -5990,7 +5990,7 @@ <h3>Methods</h3>
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def summarize(self, doc):
<pre><code class="python">def summarize(self, doc, **kwargs):
&#34;&#34;&#34;
```
summarize document text
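The change above threads `**kwargs` through `TransformerSummarizer.summarize`. A hedged usage sketch; the no-argument constructor default and the forwarded `max_length` option are assumptions, not shown in this diff:

```python
from ktrain.text import TransformerSummarizer

ts = TransformerSummarizer()  # assumes a default pretrained summarization model
doc = "Some long article text that should be condensed into a short abstract ..."

print(ts.summarize(doc))
# With the **kwargs pass-through added here, extra generation options can
# presumably be forwarded, e.g.:
# print(ts.summarize(doc, max_length=60))
```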
52 changes: 26 additions & 26 deletions docs/text/ner/anago/callbacks.html
@@ -89,17 +89,16 @@ <h2 class="section-title" id="header-classes">Classes</h2>
<p>Callbacks can be passed to keras methods such as <code>fit</code>, <code>evaluate</code>, and
<code>predict</code> in order to hook into the various stages of the model training and
inference lifecycle.</p>
<p>To create a custom callback, subclass <code>keras.callbacks.Callback</code> and
override the method associated with the stage of interest. See
<p>To create a custom callback, subclass <code>keras.callbacks.Callback</code> and override
the method associated with the stage of interest. See
<a href="https://www.tensorflow.org/guide/keras/custom_callback">https://www.tensorflow.org/guide/keras/custom_callback</a> for more information.</p>
<p>Example:</p>
<pre><code class="language-python-repl">&gt;&gt;&gt; training_finished = False
&gt;&gt;&gt; class MyCallback(tf.keras.callbacks.Callback):
... def on_train_end(self, logs=None):
... global training_finished
... training_finished = True
&gt;&gt;&gt; model = tf.keras.Sequential([
... tf.keras.layers.Dense(1, input_shape=(1,))])
&gt;&gt;&gt; model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
&gt;&gt;&gt; model.compile(loss='mean_squared_error')
&gt;&gt;&gt; model.fit(tf.constant([[1.0]]), tf.constant([[1.0]]),
... callbacks=[MyCallback()])
@@ -112,21 +111,22 @@ <h2 class="section-title" id="header-classes">Classes</h2>
<li>You will need to manually call all the <code>on_*</code> methods at the appropriate
locations in your loop. Like this:</li>
</ol>
<p>Example:</p>
<pre><code class="language-python"> callbacks = tf.keras.callbacks.CallbackList([...])
callbacks.append(...)
callbacks.on_train_begin(...)
for epoch in range(EPOCHS):
callbacks.on_epoch_begin(epoch)
for i, data in dataset.enumerate():
callbacks.on_train_batch_begin(i)
batch_logs = model.train_step(data)
callbacks.on_train_batch_end(i, batch_logs)
epoch_logs = ...
callbacks.on_epoch_end(epoch, epoch_logs)
final_logs=...
callbacks.on_train_end(final_logs)
</code></pre>
<p>```
callbacks =
tf.keras.callbacks.CallbackList([&hellip;])
callbacks.append(&hellip;)</p>
<p>callbacks.on_train_begin(&hellip;)
for epoch in range(EPOCHS):
callbacks.on_epoch_begin(epoch)
for i, data in dataset.enumerate():
callbacks.on_train_batch_begin(i)
batch_logs = model.train_step(data)
callbacks.on_train_batch_end(i, batch_logs)
epoch_logs = &hellip;
callbacks.on_epoch_end(epoch, epoch_logs)
final_logs=&hellip;
callbacks.on_train_end(final_logs)
```</p>
<h2 id="attributes">Attributes</h2>
<dl>
<dt><strong><code>params</code></strong></dt>
@@ -211,19 +211,19 @@ <h3>Methods</h3>
</code></dt>
<dd>
<div class="desc"><p>Called at the end of an epoch.</p>
<p>Subclasses should override for any actions to run. This function should
only be called during TRAIN mode.</p>
<p>Subclasses should override for any actions to run. This function should only
be called during TRAIN mode.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>epoch</code></strong></dt>
<dd>Integer, index of epoch.</dd>
<dt><strong><code>logs</code></strong></dt>
<dd>Dict, metric results for this training epoch, and for the
validation epoch if validation is performed. Validation result
keys are prefixed with <code>val_</code>. For training epoch, the values of
the <code>Model</code>'s metrics are returned. Example:
<code>{'loss': 0.2, 'accuracy': 0.7}</code>.</dd>
</dl></div>
validation epoch if validation is performed. Validation result keys
are prefixed with <code>val_</code>. For training epoch, the values of the</dd>
</dl>
<p><code>Model</code>'s metrics are returned. Example : <code>{'loss': 0.2, 'accuracy':
0.7}</code>.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
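The `CallbackList` listing whose formatting changed earlier in this file's diff describes driving callbacks from a manual training loop. A runnable sketch of that pattern, assuming an eagerly executed compiled `tf.keras` model; the model, dataset, and epoch-log construction are stand-ins, not part of the commit:

```python
import tensorflow as tf

EPOCHS = 2
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mean_squared_error")
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.constant([[1.0], [2.0]]), tf.constant([[2.0], [4.0]]))).batch(1)

callbacks = tf.keras.callbacks.CallbackList(
    [tf.keras.callbacks.History()], model=model)

callbacks.on_train_begin()
for epoch in range(EPOCHS):
    callbacks.on_epoch_begin(epoch)
    for i, data in dataset.enumerate():
        callbacks.on_train_batch_begin(int(i))
        batch_logs = model.train_step(data)   # dict of metric tensors
        callbacks.on_train_batch_end(int(i), batch_logs)
    epoch_logs = {k: float(v) for k, v in batch_logs.items()}
    callbacks.on_epoch_end(epoch, epoch_logs)
callbacks.on_train_end(epoch_logs)
```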
43 changes: 18 additions & 25 deletions docs/text/ner/anago/layers.html
@@ -1786,8 +1786,8 @@ <h3>Ancestors</h3>
<ul class="hlist">
<li>keras.engine.base_layer.Layer</li>
<li>tensorflow.python.module.module.Module</li>
<li>tensorflow.python.trackable.autotrackable.AutoTrackable</li>
<li>tensorflow.python.trackable.base.Trackable</li>
<li>tensorflow.python.training.tracking.autotrackable.AutoTrackable</li>
<li>tensorflow.python.training.tracking.base.Trackable</li>
<li>keras.utils.version_utils.LayerVersionSelector</li>
</ul>
<h3>Static methods</h3>
@@ -2020,13 +2020,10 @@ <h2 id="args">Args</h2>
</code></dt>
<dd>
<div class="desc"><p>This is where the layer's logic lives.</p>
<p>The <code>call()</code> method may not create state (except in its first
invocation, wrapping the creation of variables or other resources in
<code>tf.init_scope()</code>).
It is recommended to create state, including
<code>tf.Variable</code> instances and nested <code>Layer</code> instances,
in <code>__init__()</code>, or in the <code>build()</code> method that is
called automatically before <code>call()</code> executes for the first time.</p>
<p>The <code>call()</code> method may not create state (except in its first invocation,
wrapping the creation of variables or other resources in <code>tf.init_scope()</code>).
It is recommended to create state in <code>__init__()</code>, or the <code>build()</code> method
that is called automatically before <code>call()</code> executes the first time.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>inputs</code></strong></dt>
@@ -2035,17 +2032,15 @@ <h2 id="args">Args</h2>
- <code>inputs</code> must be explicitly passed. A layer cannot have zero
arguments, and <code>inputs</code> cannot be provided via the default value
of a keyword argument.
- NumPy array or Python scalar values in <code>inputs</code> get cast as
tensors.
- NumPy array or Python scalar values in <code>inputs</code> get cast as tensors.
- Keras mask metadata is only collected from <code>inputs</code>.
- Layers are built (<code>build(input_shape)</code> method)
using shape info from <code>inputs</code> only.
- <code>input_spec</code> compatibility is only checked against <code>inputs</code>.
- Mixed precision input casting is only applied to <code>inputs</code>.
If a layer has tensor arguments in <code>*args</code> or <code>**kwargs</code>, their
casting behavior in mixed precision should be handled manually.
- The SavedModel input specification is generated using <code>inputs</code>
only.
- The SavedModel input specification is generated using <code>inputs</code> only.
- Integration with various ecosystem packages like TFMOT, TFLite,
TF.js, etc is only supported for <code>inputs</code> and not for tensors in
positional and keyword arguments.</dd>
@@ -2059,10 +2054,10 @@ <h2 id="args">Args</h2>
- <code>training</code>: Boolean scalar tensor of Python boolean indicating
whether the <code>call</code> is meant for training or inference.
- <code>mask</code>: Boolean input mask. If the layer's <code>call()</code> method takes a
<code>mask</code> argument, its default value will be set to the mask
generated for <code>inputs</code> by the previous layer (if <code>input</code> did come
from a layer that generated a corresponding mask, i.e. if it came
from a Keras layer with masking support).</dd>
<code>mask</code> argument, its default value will be set to the mask generated
for <code>inputs</code> by the previous layer (if <code>input</code> did come from a layer
that generated a corresponding mask, i.e. if it came from a Keras
layer with masking support).</dd>
</dl>
<h2 id="returns">Returns</h2>
<p>A tensor or list/tuple of tensors.</p></div>
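The `training` and `mask` entries described above are the two keyword arguments Keras may populate automatically when it invokes `call()`. A hypothetical layer sketch showing the usual branch on `training` (the noise injection is only an illustration):

```python
import tensorflow as tf

class NoisyIdentity(tf.keras.layers.Layer):
    """Adds Gaussian noise during training only; identity at inference."""

    def call(self, inputs, training=None):
        # Keras sets `training` automatically inside fit()/evaluate()/predict().
        if training:
            return inputs + tf.random.normal(tf.shape(inputs), stddev=0.1)
        return inputs

layer = NoisyIdentity()
print(layer(tf.ones((1, 3))))                  # inference: unchanged
print(layer(tf.ones((1, 3)), training=True))   # training: noise added
```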
@@ -2128,15 +2123,13 @@ <h2 id="returns">Returns</h2>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>input_shape</code></strong></dt>
<dd>Shape tuple (tuple of integers) or <code>tf.TensorShape</code>,
or structure of shape tuples / <code>tf.TensorShape</code> instances
(one per output tensor of the layer).
<dd>Shape tuple (tuple of integers)
or list of shape tuples (one per output tensor of the layer).
Shape tuples can include None for free dimensions,
instead of an integer.</dd>
</dl>
<h2 id="returns">Returns</h2>
<p>A <code>tf.TensorShape</code> instance
or structure of <code>tf.TensorShape</code> instances.</p></div>
<p>An input shape tuple.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
@@ -2170,9 +2163,9 @@ <h2 id="returns">Returns</h2>
<p>The config of a layer does not include connectivity
information, nor the layer class name. These are handled
by <code>Network</code> (one layer of abstraction above).</p>
<p>Note that <code>get_config()</code> does not guarantee to return a fresh copy of
dict every time it is called. The callers should make a copy of the
returned dict if they want to modify it.</p>
<p>Note that <code>get_config()</code> does not guarantee to return a fresh copy of dict
every time it is called. The callers should make a copy of the returned dict
if they want to modify it.</p>
<h2 id="returns">Returns</h2>
<p>Python dictionary.</p></div>
<details class="source">
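The `call()`, `compute_output_shape()`, and `get_config()` docstrings touched in this file belong to the Keras `Layer` base class of the CRF layer. A minimal custom-layer sketch (hypothetical, not ktrain's CRF) illustrating the three documented contracts:

```python
import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
    """Multiplies its input by a trainable scalar."""

    def __init__(self, initial_scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.initial_scale = initial_scale

    def build(self, input_shape):
        # State (variables) is created here, not in call().
        self.scale = self.add_weight(
            name="scale", shape=(),
            initializer=tf.keras.initializers.Constant(self.initial_scale))

    def call(self, inputs):
        # The layer's logic; no new state is created on any invocation.
        return inputs * self.scale

    def compute_output_shape(self, input_shape):
        # Output shape matches the input shape for this layer.
        return input_shape

    def get_config(self):
        # Serializable config; callers should copy before modifying.
        config = super().get_config()
        config.update({"initial_scale": self.initial_scale})
        return config

layer = ScaleLayer(2.0)
print(layer(tf.constant([[1.0, 3.0]])))  # [[2. 6.]]
```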
