
Commit

docs: clean up readme and ndcg (#359)
bwanglzu committed Jan 27, 2022
1 parent 79581b4 commit bb8e974
Showing 2 changed files with 8 additions and 8 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -23,11 +23,11 @@ applications.
🎛 **Designed for finetuning**: a human-in-the-loop deep learning tool for leveling up your pretrained models in domain-specific neural search applications.

🔱 **Powerful yet intuitive**: all you need is `finetuner.fit()` - a one-liner that unlocks rich features such as
-siamese/triplet network, interactive labeling, layer pruning, weights freezing, dimensionality reduction.
+siamese/triplet network, metric learning, self-supervised pretraining, layer pruning, weights freezing, dimensionality reduction.

⚛️ **Framework-agnostic**: promise an identical API & user experience on PyTorch, Tensorflow/Keras and PaddlePaddle deep learning backends.

-🧈 **Jina integration**: buttery smooth integration with Jina, reducing the cost of context-switch between experiment
+🧈 **DocArray integration**: buttery smooth integration with DocArray, reducing the cost of context-switch between experiment
and production.

<!-- end elevator-pitch -->
12 changes: 6 additions & 6 deletions docs/get-started/3d-mesh/index.md
@@ -207,7 +207,7 @@ We'll use two metrics to evaluate our two models (pretrained, straight out of th

**mAP@k**: We'll calculate the average precision at 1, 5 and 10, then we'll calculate the mean of those average precisions for all documents in our test data (these are our queries), because we care about how accurate and precise our retrieved 3D objects are.
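As a rough sketch of what this metric computes (illustrative only — the tutorial's actual numbers come from Finetuner's built-in evaluator, and the binary relevance lists below are made up):

```python
def average_precision_at_k(relevances, k):
    """AP@k: mean of precision@i over the ranks i (1-based) where the
    retrieved item is relevant, looking only at the top-k results."""
    hits, score = 0, 0.0
    for i, rel in enumerate(relevances[:k], start=1):
        if rel:
            hits += 1
            score += hits / i  # precision at this rank
    return score / hits if hits else 0.0

def mean_ap_at_k(all_relevances, k):
    """mAP@k: average AP@k over every query in the test set."""
    return sum(average_precision_at_k(r, k) for r in all_relevances) / len(all_relevances)

# One 0/1 relevance list per query, in retrieval order.
queries = [[1, 0, 1], [0, 1, 1]]
map_at_3 = mean_ap_at_k(queries, 3)
```

Conventions for the AP@k denominator vary (some definitions divide by `min(k, total relevant)` instead of the number of hits), so treat this as one common variant rather than the exact formula used here.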

-**mNDCG@k**: We'll calculate NDCG at 1, 5 and 10, then we'll calculate the mean of those for all documents in our test data because we care about the order of the retrieved 3D objects.
+**nDCG@k**: We'll calculate nDCG at 1, 5 and 10, then we'll calculate the mean of those for all documents in our test data because we care about the order of the retrieved 3D objects.
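A minimal sketch of nDCG@k, using the standard log2 discount (again illustrative — the relevance lists are invented, and Finetuner's evaluator may use a different gain or discount variant):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k results:
    relevance at rank i (0-based) is discounted by log2(i + 2)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """Normalize DCG by the ideal DCG (same relevances, best ordering)."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# One relevance list per query; the reported mNDCG@k is the mean over queries.
queries = [[1, 0, 1, 1, 0], [0, 1, 0, 0, 1]]
mean_ndcg = sum(ndcg_at_k(r, 5) for r in queries) / len(queries)
```

Because the discount shrinks with rank, placing relevant objects earlier raises the score — which is exactly why this metric captures retrieval *order*, not just hit rate.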

````{dropdown} Complete source code
@@ -244,11 +244,11 @@ The difference is shown in the tables below:
| mAP@k | pre-trained | fine-tuned |
|--------|-------------|------------|
| mAP@5 | 0.113 | 0.697 |
| mAP@10 | 0.100 | 0.686 |

-| mNDCG@k | pre-trained | fine-tuned |
-|---------|-------------|------------|
-| mNDCG@1 | 0.563 | 0.927 |
-| mNDCG@5 | 0.617 | 0.931 |
-| mNDCG@10 | 0.647 | 0.935 |
+| nDCG@k | pre-trained | fine-tuned |
+|--------|-------------|------------|
+| nDCG@1 | 0.563 | 0.927 |
+| nDCG@5 | 0.617 | 0.931 |
+| nDCG@10 | 0.647 | 0.935 |

Now let's run some queries ourselves and check the visualizations.

