Commit

Deployed d8d2a53 with MkDocs version: 1.6.0
Unknown committed Jun 4, 2024
1 parent edb9041 commit 8e8f0be
Showing 7 changed files with 1,681 additions and 1,654 deletions.
2,844 changes: 1,422 additions & 1,422 deletions client_api/Client/index.html

Large diffs are not rendered by default.

465 changes: 246 additions & 219 deletions client_api/Model/index.html

Large diffs are not rendered by default.

16 changes: 8 additions & 8 deletions client_api/Schemas/Evaluation/EvaluationParameters/index.html
@@ -538,13 +538,13 @@ valor.schemas.evaluation.EvaluationParameters (parameters table)
- compute_pr_curves : bool
-     A boolean which determines whether we calculate precision-recall curves or not.
+ metrics : List[str], optional
+     The list of metrics to compute, store, and return to the user.
@@ -555,7 +555,7 @@ valor.schemas.evaluation.EvaluationParameters (pr_curve_iou_threshold description)
- The IOU threshold to use when calculating precision-recall curves for object detection tasks. Defaults to 0.5. Does nothing when compute_pr_curves is set to False or None.
+ The IOU threshold to use when calculating precision-recall curves for object detection tasks. Defaults to 0.5.
@@ -609,10 +609,10 @@ valor.schemas.evaluation.EvaluationParameters (docstring)
          Optional mapping of individual labels to a grouper label. Useful when you need to evaluate performance using labels that differ across datasets and models.
      recall_score_threshold: float, default=0
          The confidence score threshold for use when determining whether to count a prediction as a true positive or not while calculating Average Recall.
-     compute_pr_curves: bool
-         A boolean which determines whether we calculate precision-recall curves or not.
+     metrics: List[str], optional
+         The list of metrics to compute, store, and return to the user.
      pr_curve_iou_threshold: float, optional
-         The IOU threshold to use when calculating precision-recall curves for object detection tasks. Defaults to 0.5. Does nothing when compute_pr_curves is set to False or None.
+         The IOU threshold to use when calculating precision-recall curves for object detection tasks. Defaults to 0.5.
      """
@@ -624,7 +624,7 @@ valor.schemas.evaluation.EvaluationParameters (dataclass fields)
      iou_thresholds_to_return: Optional[List[float]] = None
      label_map: Optional[List[List[List[str]]]] = None
      recall_score_threshold: float = 0
-     compute_pr_curves: bool = False
+     metrics_to_return: Optional[List[str]] = None
      pr_curve_iou_threshold: float = 0.5
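Taken together, these hunks replace the boolean `compute_pr_curves` flag with an opt-in list of metric names. A minimal migration sketch, assuming the Valor Python client and an already-finalized `model`/`dataset` pair — the `evaluate_detection` entry point, the polling helper, and the metric names other than `PrecisionRecallCurve` are illustrative assumptions, not taken from this diff:

```python
# Sketch only: assumes an existing Valor Model/Dataset pair named model/dataset.
# The parameter names metrics_to_return and pr_curve_iou_threshold come from the
# diff above; the method and metric names "AP"/"AR" are assumptions.

# Before this commit, PR curves were toggled with a boolean:
#     evaluation = model.evaluate_detection(dataset, compute_pr_curves=True)

# After this commit, metrics are requested by name:
evaluation = model.evaluate_detection(
    dataset,
    metrics_to_return=["AP", "AR", "PrecisionRecallCurve"],
    pr_curve_iou_threshold=0.5,  # IOU used only when computing the PR-curve metric
)
evaluation.wait_for_completion()  # assumed polling helper
print(evaluation.metrics)
```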
6 changes: 3 additions & 3 deletions metrics/index.html
@@ -501,7 +501,7 @@ Classification Metrics (metrics table)
  Precision-Recall Curves
- Outputs a nested dictionary containing the true positives, false positives, true negatives, false negatives, precision, recall, and F1 score for each (label key, label value, confidence threshold) combination. Computing this output requires setting the `compute_pr_curves` argument to `True` at evaluation time.
+ Outputs a nested dictionary containing the true positives, false positives, true negatives, false negatives, precision, recall, and F1 score for each (label key, label value, confidence threshold) combination. Computing this metric requires passing `PrecisionRecallCurve` into the list of `metrics_to_return` at evaluation time.
  See precision-recall curve methods
@@ -548,7 +548,7 @@ Object Detection and Instance Segmentation Metrics (metrics table)
  Precision-Recall Curves
- Outputs a nested dictionary containing the true positives, false positives, true negatives, false negatives, precision, recall, and F1 score for each (label key, label value, confidence threshold) combination. Computing this output requires setting the `compute_pr_curves` argument to `True` at evaluation time. These curves are calculated using a default IOU threshold of 0.5; you can set your own threshold by passing a float between 0 and 1 to the `pr_curve_iou_threshold` parameter at evaluation time.
+ Outputs a nested dictionary containing the true positives, false positives, true negatives, false negatives, precision, recall, and F1 score for each (label key, label value, confidence threshold) combination. Computing this metric requires passing `PrecisionRecallCurve` into the list of `metrics_to_return` at evaluation time. These curves are calculated using a default IOU threshold of 0.5; you can set your own threshold by passing a float between 0 and 1 to the `pr_curve_iou_threshold` parameter at evaluation time.
  See precision-recall curve methods
@@ -720,7 +720,7 @@ Average Recall (AR)
  - COCO calculates three different AR metrics (AR@1, AR@5, AR@100) by considering only the top 1/5/100 most confident predictions during the matching process. Valor, on the other hand, allows users to input a `recall_score_threshold` value that will prevent low-confidence predictions from being counted as true positives when calculating AR.

  Precision-Recall Curves

- Precision-recall curves offer insight into which confidence threshold you should pick for your production pipeline. To compute these curves for your classification or object detection workflow, simply set the `compute_pr_curves` parameter to `True` when initiating your evaluation. Valor will then tabulate the true positives, false positives, true negatives, false negatives, precision, recall, and F1 score for each (label key, label value, confidence threshold) combination, and store them in a nested dictionary for your use. When using the Valor Python client, the output will be formatted as follows:
+ Precision-recall curves offer insight into which confidence threshold you should pick for your production pipeline. To compute these curves for your classification or object detection workflow, simply pass `PrecisionRecallCurve` into the list of `metrics_to_return` when initiating your evaluation. Valor will then tabulate the true positives, false positives, true negatives, false negatives, precision, recall, and F1 score for each (label key, label value, confidence threshold) combination, and store them in a nested dictionary for your use. When using the Valor Python client, the output will be formatted as follows:
  {
      "type": "PrecisionRecallCurve",
      "parameters": {
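The snippet above is truncated after the `parameters` key. As a hypothetical sketch of consuming one such metric dictionary — the nested `value` layout (label value, then confidence threshold, then per-threshold stats) is an assumption inferred from the prose description, not shown in this diff:

```python
# Hypothetical walk over one PrecisionRecallCurve metric dict returned by Valor.
# Only the "type" and "parameters" keys appear in the truncated snippet above;
# the "value" nesting and the stat key names are assumed for illustration.
def summarize_pr_curve(metric: dict) -> None:
    if metric.get("type") != "PrecisionRecallCurve":
        return
    for label_value, by_threshold in metric.get("value", {}).items():
        for threshold, stats in sorted(by_threshold.items()):
            print(
                f"{label_value} @ confidence >= {threshold}: "
                f"precision={stats.get('precision')}, "
                f"recall={stats.get('recall')}, f1={stats.get('f1_score')}"
            )
```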
2 changes: 1 addition & 1 deletion search/search_index.json

Large diffs are not rendered by default.

Binary file modified sitemap.xml.gz
Binary file not shown.
2 changes: 1 addition & 1 deletion static/openapi.json

Large diffs are not rendered by default.
