Commit: docs

amaiya committed Apr 21, 2023
1 parent 05a6109 commit 9dc21d1
Showing 16 changed files with 504 additions and 103 deletions.
6 changes: 2 additions & 4 deletions README.md
@@ -13,7 +13,7 @@

### News and Announcements
- **2023-04-21**
- **ktrain 0.36.x** is released and supports **Sentiment Analysis**. See the [example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/sentiment_analysis_example.ipynb) for more information.
- **ktrain 0.36.x** is released and includes a simple wrapper for **Sentiment Analysis**. See the [example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/sentiment_analysis_example.ipynb) for more information.
```python
# Example: Sentiment Analysis
from ktrain.text.sentiment import SentimentAnalyzer
@@ -42,9 +42,6 @@ print(model.execute(prompt))
```
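The diff view truncates the examples above. As a minimal usage sketch of the new sentiment wrapper (the `predict()` call and its return value are assumptions based on the linked notebook, not shown in this diff):
```python
# Sketch only: assumes SentimentAnalyzer exposes a predict() method that
# accepts a string (or list of strings) and returns label/score results,
# as demonstrated in the linked example notebook.
from ktrain.text.sentiment import SentimentAnalyzer

classifier = SentimentAnalyzer()
result = classifier.predict("I got a promotion today!")
print(result)
```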
- **2023-03-30**
- **ktrain 0.34.x** is released and supports fast LexRank-based text summarization.
- **2023-01-14**
- **ktrain 0.33.x** is released and includes fixes to support the latest version of Hugging Face `transformers`. Note that `transformers<=4.25.1` [has a bug](https://github.com/huggingface/transformers/issues/20750) related to TensorFlow 2.11. You can downgrade TensorFlow to 2.10 if you receive an error that says *"has no attribute 'expand_1d'"* (or upgrade to `transformers>4.25.1` if available).

----

### Overview
@@ -70,6 +67,7 @@ print(model.execute(prompt))
- **Speech Transcription**: Extract text from audio files <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/speech_transcription_example.ipynb)]</sup></sub>
- **Universal Information Extraction**: extract any kind of information from documents by simply phrasing it in the form of a question <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/qa_information_extraction.ipynb)]</sup></sub>
- **Keyphrase Extraction**: extract keywords from documents <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/keyword_extraction_example.ipynb)]</sup></sub>
- **Sentiment Analysis**: easy-to-use wrapper for pretrained sentiment analysis <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/sentiment_analysis_example.ipynb)]</sup></sub>
- **Generative AI with GPT**: Provide instructions to a lightweight ChatGPT-like model running on your own machine to solve various tasks. Model was fine-tuned on the [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) instruction dataset ([CC By NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en_GB)) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/generative_ai_example.ipynb)]</sup></sub>
- `vision` data:
- **image classification** (e.g., [ResNet](https://arxiv.org/abs/1512.03385), [Wide ResNet](https://arxiv.org/abs/1605.07146), [Inception](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf)) <sub><sup>[[example notebook](https://colab.research.google.com/drive/1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)]</sup></sub>
32 changes: 17 additions & 15 deletions docs/lroptimize/optimization.html
@@ -558,7 +558,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
# Simple example, double the gradients.
return [(2. * g, v) for g, v in grads_and_vars]

optimizer = tf.keras.optimizers.SGD(
optimizer = tf.keras.optimizers.legacy.SGD(
1e-3, gradient_transformers=[my_gradient_transformer])
</code></pre>
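For context, a self-contained version of the docstring fragment above; a sketch assuming TensorFlow 2.11+, where the `legacy` optimizer namespace referenced in this diff exists:
```python
import tensorflow as tf

def my_gradient_transformer(grads_and_vars):
    # Simple example: double every gradient before it is applied.
    return [(2.0 * g, v) for g, v in grads_and_vars]

optimizer = tf.keras.optimizers.legacy.SGD(
    1e-3, gradient_transformers=[my_gradient_transformer])

var = tf.Variable(1.0)
with tf.GradientTape() as tape:
    loss = var ** 2          # d(loss)/d(var) = 2 * var = 2.0
grads = tape.gradient(loss, [var])
optimizer.apply_gradients(zip(grads, [var]))
print(var.numpy())           # 1.0 - 1e-3 * (2 * 2.0) = 0.996
```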
<h2 id="args">Args</h2>
@@ -568,9 +568,9 @@ <h2 id="args">Args</h2>
by the optimizer.</dd>
<dt><strong><code>gradient_aggregator</code></strong></dt>
<dd>The function to use to aggregate gradients across
devices (when using <code>tf.distribute.Strategy</code>). If <code>None</code>, defaults to
summing the gradients across devices. The function should accept and
return a list of <code>(gradient, variable)</code> tuples.</dd>
devices (when using <code>tf.distribute.Strategy</code>). If <code>None</code>, defaults
to summing the gradients across devices. The function should accept
and return a list of <code>(gradient, variable)</code> tuples.</dd>
<dt><strong><code>gradient_transformers</code></strong></dt>
<dd>Optional. List of functions to use to transform
gradients before applying updates to Variables. The functions are
@@ -582,9 +582,10 @@ <h2 id="args">Args</h2>
If <code>clipvalue</code> (float) is set, the gradient of each weight
is clipped to be no higher than this value.
If <code>clipnorm</code> (float) is set, the gradient of each weight
is individually clipped so that its norm is no higher than this value.
If <code>global_clipnorm</code> (float) is set the gradient of all weights is
clipped so that their global norm is no higher than this value.</dd>
is individually clipped so that its norm is no higher than this
value. If <code>global_clipnorm</code> (float) is set the gradient of all
weights is clipped so that their global norm is no higher than this
value.</dd>
</dl>
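The three clipping arguments described above are normally used one at a time; a minimal sketch (again assuming the TF 2.11+ `legacy` namespace):
```python
import tensorflow as tf

# clip each gradient element to the range [-0.5, 0.5]
opt_value = tf.keras.optimizers.legacy.SGD(1e-3, clipvalue=0.5)
# clip each gradient tensor so its own L2 norm is at most 1.0
opt_norm = tf.keras.optimizers.legacy.SGD(1e-3, clipnorm=1.0)
# rescale all gradients together so their global norm is at most 1.0
opt_global = tf.keras.optimizers.legacy.SGD(1e-3, global_clipnorm=1.0)
```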
<h2 id="raises">Raises</h2>
<dl>
@@ -743,7 +744,7 @@ <h3>Ancestors</h3>
<ul class="hlist">
<li>keras.optimizers.optimizer_v2.adam.Adam</li>
<li>keras.optimizers.optimizer_v2.optimizer_v2.OptimizerV2</li>
<li>tensorflow.python.training.tracking.base.Trackable</li>
<li>tensorflow.python.trackable.base.Trackable</li>
</ul>
<h3>Static methods</h3>
<dl>
@@ -776,8 +777,8 @@ <h3>Methods</h3>
<p>This is the second part of <code>minimize()</code>. It returns an <code>Operation</code> that
applies gradients.</p>
<p>The method sums gradients from all replicas in the presence of
<code>tf.distribute.Strategy</code> by default. You can aggregate gradients yourself by
passing <code>experimental_aggregate_gradients=False</code>.</p>
<code>tf.distribute.Strategy</code> by default. You can aggregate gradients
yourself by passing <code>experimental_aggregate_gradients=False</code>.</p>
<p>Example:</p>
<pre><code class="language-python">grads = tape.gradient(loss, vars)
grads = tf.distribute.get_replica_context().all_reduce('sum', grads)
@@ -791,12 +792,13 @@ <h2 id="args">Args</h2>
<dt><strong><code>grads_and_vars</code></strong></dt>
<dd>List of (gradient, variable) pairs.</dd>
<dt><strong><code>name</code></strong></dt>
<dd>Optional name for the returned operation. Default to the name passed
to the <code>Optimizer</code> constructor.</dd>
<dd>Optional name for the returned operation. Default to the name
passed to the <code>Optimizer</code> constructor.</dd>
<dt><strong><code>experimental_aggregate_gradients</code></strong></dt>
<dd>Whether to sum gradients from different
replicas in the presence of <code>tf.distribute.Strategy</code>. If False, it's
user responsibility to aggregate the gradients. Default to True.</dd>
<dd>Whether to sum gradients from
different replicas in the presence of <code>tf.distribute.Strategy</code>. If
False, it is the user's responsibility to aggregate the gradients. Defaults
to True.</dd>
</dl>
<h2 id="returns">Returns</h2>
<p>An <code>Operation</code> that applies the specified gradients. The <code>iterations</code>
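A single-replica sketch of the two-step `minimize()` flow documented above (distribution-strategy aggregation is only noted in comments; TF 2.11+ assumed):
```python
import tensorflow as tf

optimizer = tf.keras.optimizers.legacy.Adam(1e-3)
w = tf.Variable([2.0, 3.0])

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(w ** 2)
grads = tape.gradient(loss, [w])

# Under tf.distribute.Strategy, gradients are summed across replicas by
# default; pass experimental_aggregate_gradients=False to aggregate them
# yourself (e.g., via all_reduce) as in the truncated example above.
optimizer.apply_gradients(zip(grads, [w]))
print(optimizer.iterations.numpy())  # incremented to 1 by apply_gradients
```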
18 changes: 9 additions & 9 deletions docs/lroptimize/triangular.html
@@ -761,19 +761,19 @@ <h3>Methods</h3>
</code></dt>
<dd>
<div class="desc"><p>Called at the end of an epoch.</p>
<p>Subclasses should override for any actions to run. This function should only
be called during TRAIN mode.</p>
<p>Subclasses should override for any actions to run. This function should
only be called during TRAIN mode.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>epoch</code></strong></dt>
<dd>Integer, index of epoch.</dd>
<dt><strong><code>logs</code></strong></dt>
<dd>Dict, metric results for this training epoch, and for the
validation epoch if validation is performed. Validation result keys
are prefixed with <code>val_</code>. For training epoch, the values of the</dd>
</dl>
<p><code>Model</code>'s metrics are returned. Example : <code>{'loss': 0.2, 'accuracy':
0.7}</code>.</p></div>
validation epoch if validation is performed. Validation result
keys are prefixed with <code>val_</code>. For training epoch, the values of
the <code>Model</code>'s metrics are returned. Example:
<code>{'loss': 0.2, 'accuracy': 0.7}</code>.</dd>
</dl></div>
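As a companion to the corrected docstring, a minimal callback sketch:
```python
import tensorflow as tf

class EpochLogger(tf.keras.callbacks.Callback):
    """Minimal on_epoch_end override, printing the logs dict described above."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # logs holds e.g. {'loss': 0.2, 'accuracy': 0.7}, plus val_-prefixed
        # keys when validation data is passed to fit().
        print(f"epoch {epoch}: {logs}")

# usage: model.fit(x, y, epochs=3, callbacks=[EpochLogger()])
```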
<details class="source">
<summary>
<span>Expand source code</span>
@@ -839,8 +839,8 @@ <h2 id="args">Args</h2>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>logs</code></strong></dt>
<dd>Dict. Currently no data is passed to this argument for this method
but that may change in the future.</dd>
<dd>Dict. Currently no data is passed to this argument for this
method but that may change in the future.</dd>
</dl></div>
<details class="source">
<summary>
5 changes: 5 additions & 0 deletions docs/text/index.html
@@ -157,6 +157,10 @@ <h2 class="section-title" id="header-submodules">Sub-modules</h2>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="ktrain.text.sentiment" href="sentiment/index.html">ktrain.text.sentiment</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="ktrain.text.shallownlp" href="shallownlp/index.html">ktrain.text.shallownlp</a></code></dt>
<dd>
<div class="desc"></div>
@@ -6624,6 +6628,7 @@ <h1>Index</h1>
<li><code><a title="ktrain.text.predictor" href="predictor.html">ktrain.text.predictor</a></code></li>
<li><code><a title="ktrain.text.preprocessor" href="preprocessor.html">ktrain.text.preprocessor</a></code></li>
<li><code><a title="ktrain.text.qa" href="qa/index.html">ktrain.text.qa</a></code></li>
<li><code><a title="ktrain.text.sentiment" href="sentiment/index.html">ktrain.text.sentiment</a></code></li>
<li><code><a title="ktrain.text.shallownlp" href="shallownlp/index.html">ktrain.text.shallownlp</a></code></li>
<li><code><a title="ktrain.text.speech" href="speech/index.html">ktrain.text.speech</a></code></li>
<li><code><a title="ktrain.text.summarization" href="summarization/index.html">ktrain.text.summarization</a></code></li>
52 changes: 26 additions & 26 deletions docs/text/ner/anago/callbacks.html
@@ -89,16 +89,17 @@ <h2 class="section-title" id="header-classes">Classes</h2>
<p>Callbacks can be passed to keras methods such as <code>fit</code>, <code>evaluate</code>, and
<code>predict</code> in order to hook into the various stages of the model training and
inference lifecycle.</p>
<p>To create a custom callback, subclass <code>keras.callbacks.Callback</code> and override
the method associated with the stage of interest. See
<p>To create a custom callback, subclass <code>keras.callbacks.Callback</code> and
override the method associated with the stage of interest. See
<a href="https://www.tensorflow.org/guide/keras/custom_callback">https://www.tensorflow.org/guide/keras/custom_callback</a> for more information.</p>
<p>Example:</p>
<pre><code class="language-python-repl">&gt;&gt;&gt; training_finished = False
&gt;&gt;&gt; class MyCallback(tf.keras.callbacks.Callback):
... def on_train_end(self, logs=None):
... global training_finished
... training_finished = True
&gt;&gt;&gt; model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
&gt;&gt;&gt; model = tf.keras.Sequential([
... tf.keras.layers.Dense(1, input_shape=(1,))])
&gt;&gt;&gt; model.compile(loss='mean_squared_error')
&gt;&gt;&gt; model.fit(tf.constant([[1.0]]), tf.constant([[1.0]]),
... callbacks=[MyCallback()])
@@ -111,22 +112,21 @@
<li>You will need to manually call all the <code>on_*</code> methods at the appropriate
locations in your loop. Like this:</li>
</ol>
<p>```
callbacks =
tf.keras.callbacks.CallbackList([&hellip;])
callbacks.append(&hellip;)</p>
<p>callbacks.on_train_begin(&hellip;)
for epoch in range(EPOCHS):
callbacks.on_epoch_begin(epoch)
for i, data in dataset.enumerate():
callbacks.on_train_batch_begin(i)
batch_logs = model.train_step(data)
callbacks.on_train_batch_end(i, batch_logs)
epoch_logs = &hellip;
callbacks.on_epoch_end(epoch, epoch_logs)
final_logs=&hellip;
callbacks.on_train_end(final_logs)
```</p>
<p>Example:</p>
<pre><code class="language-python"> callbacks = tf.keras.callbacks.CallbackList([...])
callbacks.append(...)
callbacks.on_train_begin(...)
for epoch in range(EPOCHS):
callbacks.on_epoch_begin(epoch)
for i, data in dataset.enumerate():
callbacks.on_train_batch_begin(i)
batch_logs = model.train_step(data)
callbacks.on_train_batch_end(i, batch_logs)
epoch_logs = ...
callbacks.on_epoch_end(epoch, epoch_logs)
final_logs=...
callbacks.on_train_end(final_logs)
</code></pre>
<h2 id="attributes">Attributes</h2>
<dl>
<dt><strong><code>params</code></strong></dt>
@@ -211,19 +211,19 @@ <h3>Methods</h3>
</code></dt>
<dd>
<div class="desc"><p>Called at the end of an epoch.</p>
<p>Subclasses should override for any actions to run. This function should only
be called during TRAIN mode.</p>
<p>Subclasses should override for any actions to run. This function should
only be called during TRAIN mode.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>epoch</code></strong></dt>
<dd>Integer, index of epoch.</dd>
<dt><strong><code>logs</code></strong></dt>
<dd>Dict, metric results for this training epoch, and for the
validation epoch if validation is performed. Validation result keys
are prefixed with <code>val_</code>. For training epoch, the values of the</dd>
</dl>
<p><code>Model</code>'s metrics are returned. Example : <code>{'loss': 0.2, 'accuracy':
0.7}</code>.</p></div>
validation epoch if validation is performed. Validation result
keys are prefixed with <code>val_</code>. For training epoch, the values of
the <code>Model</code>'s metrics are returned. Example:
<code>{'loss': 0.2, 'accuracy': 0.7}</code>.</dd>
</dl></div>
<details class="source">
<summary>
<span>Expand source code</span>
43 changes: 25 additions & 18 deletions docs/text/ner/anago/layers.html
@@ -1786,8 +1786,8 @@ <h3>Ancestors</h3>
<ul class="hlist">
<li>keras.engine.base_layer.Layer</li>
<li>tensorflow.python.module.module.Module</li>
<li>tensorflow.python.training.tracking.autotrackable.AutoTrackable</li>
<li>tensorflow.python.training.tracking.base.Trackable</li>
<li>tensorflow.python.trackable.autotrackable.AutoTrackable</li>
<li>tensorflow.python.trackable.base.Trackable</li>
<li>keras.utils.version_utils.LayerVersionSelector</li>
</ul>
<h3>Static methods</h3>
@@ -2020,10 +2020,13 @@ <h2 id="args">Args</h2>
</code></dt>
<dd>
<div class="desc"><p>This is where the layer's logic lives.</p>
<p>The <code>call()</code> method may not create state (except in its first invocation,
wrapping the creation of variables or other resources in <code>tf.init_scope()</code>).
It is recommended to create state in <code>__init__()</code>, or the <code>build()</code> method
that is called automatically before <code>call()</code> executes the first time.</p>
<p>The <code>call()</code> method may not create state (except in its first
invocation, wrapping the creation of variables or other resources in
<code>tf.init_scope()</code>).
It is recommended to create state, including
<code>tf.Variable</code> instances and nested <code>Layer</code> instances,
in <code>__init__()</code>, or in the <code>build()</code> method that is
called automatically before <code>call()</code> executes for the first time.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>inputs</code></strong></dt>
@@ -2032,15 +2035,17 @@ <h2 id="args">Args</h2>
- <code>inputs</code> must be explicitly passed. A layer cannot have zero
arguments, and <code>inputs</code> cannot be provided via the default value
of a keyword argument.
- NumPy array or Python scalar values in <code>inputs</code> get cast as tensors.
- NumPy array or Python scalar values in <code>inputs</code> get cast as
tensors.
- Keras mask metadata is only collected from <code>inputs</code>.
- Layers are built (<code>build(input_shape)</code> method)
using shape info from <code>inputs</code> only.
- <code>input_spec</code> compatibility is only checked against <code>inputs</code>.
- Mixed precision input casting is only applied to <code>inputs</code>.
If a layer has tensor arguments in <code>*args</code> or <code>**kwargs</code>, their
casting behavior in mixed precision should be handled manually.
- The SavedModel input specification is generated using <code>inputs</code> only.
- The SavedModel input specification is generated using <code>inputs</code>
only.
- Integration with various ecosystem packages like TFMOT, TFLite,
TF.js, etc is only supported for <code>inputs</code> and not for tensors in
positional and keyword arguments.</dd>
Expand All @@ -2054,10 +2059,10 @@ <h2 id="args">Args</h2>
- <code>training</code>: Boolean scalar tensor of Python boolean indicating
whether the <code>call</code> is meant for training or inference.
- <code>mask</code>: Boolean input mask. If the layer's <code>call()</code> method takes a
<code>mask</code> argument, its default value will be set to the mask generated
for <code>inputs</code> by the previous layer (if <code>input</code> did come from a layer
that generated a corresponding mask, i.e. if it came from a Keras
layer with masking support).</dd>
<code>mask</code> argument, its default value will be set to the mask
generated for <code>inputs</code> by the previous layer (if <code>input</code> did come
from a layer that generated a corresponding mask, i.e. if it came
from a Keras layer with masking support).</dd>
</dl>
<h2 id="returns">Returns</h2>
<p>A tensor or list/tuple of tensors.</p></div>
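A toy custom layer illustrating the contract above, with state in `build()` and logic in `call()`; a sketch for illustration, not code from this repository:
```python
import tensorflow as tf

class Scale(tf.keras.layers.Layer):
    def build(self, input_shape):
        # State (a tf.Variable) is created here, not inside call().
        self.alpha = self.add_weight(name="alpha", shape=(), initializer="ones")

    def call(self, inputs, training=None):
        # `training` lets the layer branch between fit() and inference paths.
        return inputs * self.alpha

print(Scale()(tf.constant([[1.0, 2.0]])))  # build() runs on the first call
```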
@@ -2123,13 +2128,15 @@ <h2 id="returns">Returns</h2>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>input_shape</code></strong></dt>
<dd>Shape tuple (tuple of integers)
or list of shape tuples (one per output tensor of the layer).
<dd>Shape tuple (tuple of integers) or <code>tf.TensorShape</code>,
or structure of shape tuples / <code>tf.TensorShape</code> instances
(one per output tensor of the layer).
Shape tuples can include None for free dimensions,
instead of an integer.</dd>
</dl>
<h2 id="returns">Returns</h2>
<p>An input shape tuple.</p></div>
<p>A <code>tf.TensorShape</code> instance
or structure of <code>tf.TensorShape</code> instances.</p></div>
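A sketch of an override matching this signature; `Pad1` is a hypothetical layer that appends one feature column:
```python
import tensorflow as tf

class Pad1(tf.keras.layers.Layer):
    def call(self, inputs):
        # append one zero-valued feature column to the last axis
        return tf.pad(inputs, [[0, 0], [0, 1]])

    def compute_output_shape(self, input_shape):
        # Accept a tuple or tf.TensorShape; None marks a free dimension.
        shape = tf.TensorShape(input_shape).as_list()
        if shape[-1] is not None:
            shape[-1] += 1
        return tf.TensorShape(shape)

print(Pad1().compute_output_shape((None, 4)))  # (None, 5)
```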
<details class="source">
<summary>
<span>Expand source code</span>
@@ -2163,9 +2170,9 @@ <h2 id="returns">Returns</h2>
<p>The config of a layer does not include connectivity
information, nor the layer class name. These are handled
by <code>Network</code> (one layer of abstraction above).</p>
<p>Note that <code>get_config()</code> does not guarantee to return a fresh copy of dict
every time it is called. The callers should make a copy of the returned dict
if they want to modify it.</p>
<p>Note that <code>get_config()</code> does not guarantee to return a fresh copy of
dict every time it is called. The callers should make a copy of the
returned dict if they want to modify it.</p>
<h2 id="returns">Returns</h2>
<p>Python dictionary.</p></div>
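A sketch of the round-trip this docstring implies; `factor` is a hypothetical constructor argument:
```python
import tensorflow as tf

class ScaleBy(tf.keras.layers.Layer):
    def __init__(self, factor=2.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):
        # Extend the base config. Per the note above, copy before mutating:
        # a fresh dict is not guaranteed on every call.
        config = super().get_config()
        config.update({"factor": self.factor})
        return config

layer = ScaleBy.from_config(ScaleBy(3.0).get_config())  # reconstructed layer
```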
<details class="source">
