
Commit

[no ci] docs update
amaiya committed Jun 2, 2023
1 parent e5aadf1 commit ff90bb5
Showing 3 changed files with 109 additions and 63 deletions.
44 changes: 20 additions & 24 deletions docs/core.html
Original file line number Diff line number Diff line change
@@ -691,15 +691,14 @@ <h1 class="title">Module <code>ktrain.core</code></h1>
&#34;&#34;&#34;
```
Return numerical estimates of lr using three different methods:
1. learning rate associated with minimum numerical gradient
2. learning rate associated with minimum loss divided by 10
Since neither of these methods are fool-proof and can
1. lr associated with minimum numerical gradient (None if gradient computation fails)
2. lr associated with minimum loss divided by 10
3. lr associated with longest valley
Since none of these methods is foolproof and each can
potentially return bad estimates, it is recommended that you
examine the plot generated by lr_plot to estimate the learning rate.
Returns:
tuple: tuple of the form (float, float), where
First element is lr associated with minimum numerical gradient (None if gradient computation fails).
Second element is lr associated with minimum loss divided by 10.
tuple: tuple of the form (float, float)
```
&#34;&#34;&#34;
if self.lr_finder is None or not self.lr_finder.find_called():
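The three heuristics in the updated docstring can be sketched in plain NumPy on a synthetic loss-vs-lr curve. This is a simplified illustration, not ktrain's actual implementation; the function name `estimate_lrs` and the exact valley/run bookkeeping are assumptions made for the sketch:

```python
import numpy as np

def estimate_lrs(lrs, losses):
    """Sketch of the three lr-estimation heuristics (illustrative only)."""
    lrs = np.asarray(lrs, dtype=float)
    losses = np.asarray(losses, dtype=float)

    # 1. lr at the steepest descent (minimum numerical gradient of the loss);
    #    None if the gradient computation fails, mirroring the docstring.
    try:
        lr_min_grad = float(lrs[np.argmin(np.gradient(losses))])
    except Exception:
        lr_min_grad = None

    # 2. lr at the minimum loss, divided by 10
    lr_min_loss = float(lrs[np.argmin(losses)] / 10.0)

    # 3. lr at the midpoint of the longest strictly decreasing run ("longest valley")
    best_start, best_len, start, run = 0, 1, 0, 1
    for i in range(1, len(losses)):
        if losses[i] < losses[i - 1]:
            run += 1
        else:
            start, run = i, 1
        if run > best_len:
            best_start, best_len = start, run
    lr_valley = float(lrs[best_start + best_len // 2])

    return lr_min_grad, lr_min_loss, lr_valley

# Synthetic curve: flat plateau, long descent, then divergence
lrs = np.logspace(-6, 0, 61)
losses = np.concatenate(
    [np.full(20, 1.0), np.linspace(1.0, 0.2, 25), np.linspace(0.2, 3.0, 16)]
)
est = estimate_lrs(lrs, losses)
print(est)
```

On a real learner you would still call lr_plot and eyeball the curve, since (as the docstring warns) all three estimates can be misleading on noisy losses.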
@@ -4074,15 +4073,14 @@ <h3>Inherited members</h3>
&#34;&#34;&#34;
```
Return numerical estimates of lr using three different methods:
1. learning rate associated with minimum numerical gradient
2. learning rate associated with minimum loss divided by 10
Since neither of these methods are fool-proof and can
1. lr associated with minimum numerical gradient (None if gradient computation fails)
2. lr associated with minimum loss divided by 10
3. lr associated with longest valley
Since none of these methods is foolproof and each can
potentially return bad estimates, it is recommended that you
examine the plot generated by lr_plot to estimate the learning rate.
Returns:
tuple: tuple of the form (float, float), where
First element is lr associated with minimum numerical gradient (None if gradient computation fails).
Second element is lr associated with minimum loss divided by 10.
tuple: tuple of the form (float, float)
```
&#34;&#34;&#34;
if self.lr_finder is None or not self.lr_finder.find_called():
@@ -5318,15 +5316,14 @@ <h3>Methods</h3>
</code></dt>
<dd>
<div class="desc"><pre><code>Return numerical estimates of lr using three different methods:
1. learning rate associated with minimum numerical gradient
2. learning rate associated with minimum loss divided by 10
Since neither of these methods are fool-proof and can
1. lr associated with minimum numerical gradient (None if gradient computation fails)
2. lr associated with minimum loss divided by 10
3. lr associated with longest valley
Since none of these methods is foolproof and each can
potentially return bad estimates, it is recommended that you
examine the plot generated by lr_plot to estimate the learning rate.
Returns:
tuple: tuple of the form (float, float), where
First element is lr associated with minimum numerical gradient (None if gradient computation fails).
Second element is lr associated with minimum loss divided by 10.
tuple: tuple of the form (float, float)
</code></pre></div>
<details class="source">
<summary>
@@ -5336,15 +5333,14 @@ <h3>Methods</h3>
&#34;&#34;&#34;
```
Return numerical estimates of lr using three different methods:
1. learning rate associated with minimum numerical gradient
2. learning rate associated with minimum loss divided by 10
Since neither of these methods are fool-proof and can
1. lr associated with minimum numerical gradient (None if gradient computation fails)
2. lr associated with minimum loss divided by 10
3. lr associated with longest valley
Since none of these methods is foolproof and each can
potentially return bad estimates, it is recommended that you
examine the plot generated by lr_plot to estimate the learning rate.
Returns:
tuple: tuple of the form (float, float), where
First element is lr associated with minimum numerical gradient (None if gradient computation fails).
Second element is lr associated with minimum loss divided by 10.
tuple: tuple of the form (float, float)
```
&#34;&#34;&#34;
if self.lr_finder is None or not self.lr_finder.find_called():
52 changes: 36 additions & 16 deletions docs/text/index.html
Original file line number Diff line number Diff line change
@@ -5977,10 +5977,19 @@ <h3>Methods</h3>
self.torch_device
)

def summarize(self, doc, **kwargs):
def summarize(
self,
doc,
max_length=150,
min_length=56,
no_repeat_ngram_size=3,
length_penalty=2.0,
num_beams=4,
**kwargs,
):
&#34;&#34;&#34;
```
summarize document text
Summarize document text. Extra arguments are passed to the model's generate method.
Args:
doc(str): text of document
Returns:
@@ -5995,11 +6004,12 @@ <h3>Methods</h3>
)[&#34;input_ids&#34;].to(self.torch_device)
summary_ids = self.model.generate(
answers_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=142,
min_length=56,
no_repeat_ngram_size=3,
num_beams=num_beams,
length_penalty=length_penalty,
max_length=max_length,
min_length=min_length,
no_repeat_ngram_size=no_repeat_ngram_size,
**kwargs,
)

exec_sum = self.tokenizer.decode(
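The change above turns hard-coded generation settings into overridable keyword defaults forwarded to generate (note the old hard-coded max_length was 142, while the new default is 150). A minimal stub shows how the defaults and any extra keyword arguments flow through; `TinySummarizer` and `EchoModel` are hypothetical names for this sketch, not ktrain classes:

```python
class TinySummarizer:
    """Illustrates the refactor: generation knobs become keyword defaults."""

    def __init__(self, model):
        self.model = model  # anything with a generate(doc, **kwargs) method

    def summarize(self, doc, max_length=150, min_length=56,
                  no_repeat_ngram_size=3, length_penalty=2.0,
                  num_beams=4, **kwargs):
        # Every knob, plus any extra kwargs, is forwarded to generate()
        return self.model.generate(
            doc,
            num_beams=num_beams,
            length_penalty=length_penalty,
            max_length=max_length,
            min_length=min_length,
            no_repeat_ngram_size=no_repeat_ngram_size,
            **kwargs,
        )

class EchoModel:
    def generate(self, doc, **kwargs):
        return kwargs  # echo the settings back so we can inspect them

ts = TinySummarizer(EchoModel())
settings = ts.summarize("some text", num_beams=2, do_sample=True)
print(settings)
```

With the real TransformerSummarizer the same pattern means a call like summarize(doc, max_length=100) now overrides what was previously fixed inside the method.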
@@ -6014,10 +6024,10 @@ <h3>Ancestors</h3>
<h3>Methods</h3>
<dl>
<dt id="ktrain.text.TransformerSummarizer.summarize"><code class="name flex">
<span>def <span class="ident">summarize</span></span>(<span>self, doc, **kwargs)</span>
<span>def <span class="ident">summarize</span></span>(<span>self, doc, max_length=150, min_length=56, no_repeat_ngram_size=3, length_penalty=2.0, num_beams=4, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><pre><code>summarize document text
<div class="desc"><pre><code>Summarize document text. Extra arguments are passed to the model's generate method.
Args:
doc(str): text of document
Returns:
@@ -6027,10 +6037,19 @@ <h3>Methods</h3>
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def summarize(self, doc, **kwargs):
<pre><code class="python">def summarize(
self,
doc,
max_length=150,
min_length=56,
no_repeat_ngram_size=3,
length_penalty=2.0,
num_beams=4,
**kwargs,
):
&#34;&#34;&#34;
```
summarize document text
Summarize document text. Extra arguments are passed to the model's generate method.
Args:
doc(str): text of document
Returns:
@@ -6045,11 +6064,12 @@ <h3>Methods</h3>
)[&#34;input_ids&#34;].to(self.torch_device)
summary_ids = self.model.generate(
answers_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=142,
min_length=56,
no_repeat_ngram_size=3,
num_beams=num_beams,
length_penalty=length_penalty,
max_length=max_length,
min_length=min_length,
no_repeat_ngram_size=no_repeat_ngram_size,
**kwargs,
)

exec_sum = self.tokenizer.decode(
76 changes: 53 additions & 23 deletions docs/text/summarization/core.html
Original file line number Diff line number Diff line change
@@ -54,10 +54,19 @@ <h1 class="title">Module <code>ktrain.text.summarization.core</code></h1>
self.torch_device
)

def summarize(self, doc, **kwargs):
def summarize(
self,
doc,
max_length=150,
min_length=56,
no_repeat_ngram_size=3,
length_penalty=2.0,
num_beams=4,
**kwargs,
):
&#34;&#34;&#34;
```
summarize document text
Summarize document text. Extra arguments are passed to the model's generate method.
Args:
doc(str): text of document
Returns:
@@ -72,11 +81,12 @@ <h1 class="title">Module <code>ktrain.text.summarization.core</code></h1>
)[&#34;input_ids&#34;].to(self.torch_device)
summary_ids = self.model.generate(
answers_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=142,
min_length=56,
no_repeat_ngram_size=3,
num_beams=num_beams,
length_penalty=length_penalty,
max_length=max_length,
min_length=min_length,
no_repeat_ngram_size=no_repeat_ngram_size,
**kwargs,
)

exec_sum = self.tokenizer.decode(
@@ -373,10 +383,19 @@ <h3>Methods</h3>
self.torch_device
)

def summarize(self, doc, **kwargs):
def summarize(
self,
doc,
max_length=150,
min_length=56,
no_repeat_ngram_size=3,
length_penalty=2.0,
num_beams=4,
**kwargs,
):
&#34;&#34;&#34;
```
summarize document text
Summarize document text. Extra arguments are passed to the model's generate method.
Args:
doc(str): text of document
Returns:
@@ -391,11 +410,12 @@ <h3>Methods</h3>
)[&#34;input_ids&#34;].to(self.torch_device)
summary_ids = self.model.generate(
answers_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=142,
min_length=56,
no_repeat_ngram_size=3,
num_beams=num_beams,
length_penalty=length_penalty,
max_length=max_length,
min_length=min_length,
no_repeat_ngram_size=no_repeat_ngram_size,
**kwargs,
)

exec_sum = self.tokenizer.decode(
@@ -410,10 +430,10 @@ <h3>Ancestors</h3>
<h3>Methods</h3>
<dl>
<dt id="ktrain.text.summarization.core.TransformerSummarizer.summarize"><code class="name flex">
<span>def <span class="ident">summarize</span></span>(<span>self, doc, **kwargs)</span>
<span>def <span class="ident">summarize</span></span>(<span>self, doc, max_length=150, min_length=56, no_repeat_ngram_size=3, length_penalty=2.0, num_beams=4, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><pre><code>summarize document text
<div class="desc"><pre><code>Summarize document text. Extra arguments are fed to generate method
Args:
doc(str): text of document
Returns:
@@ -423,10 +443,19 @@ <h3>Methods</h3>
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def summarize(self, doc, **kwargs):
<pre><code class="python">def summarize(
self,
doc,
max_length=150,
min_length=56,
no_repeat_ngram_size=3,
length_penalty=2.0,
num_beams=4,
**kwargs,
):
&#34;&#34;&#34;
```
summarize document text
Summarize document text. Extra arguments are passed to the model's generate method.
Args:
doc(str): text of document
Returns:
@@ -441,11 +470,12 @@ <h3>Methods</h3>
)[&#34;input_ids&#34;].to(self.torch_device)
summary_ids = self.model.generate(
answers_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=142,
min_length=56,
no_repeat_ngram_size=3,
num_beams=num_beams,
length_penalty=length_penalty,
max_length=max_length,
min_length=min_length,
no_repeat_ngram_size=no_repeat_ngram_size,
**kwargs,
)

exec_sum = self.tokenizer.decode(
