.Net: Fixed warning in release pipeline about Docker base image in examples (microsoft#6340)
### Motivation and Context
<!-- Thank you for your contribution to the semantic-kernel repo!
Please help reviewers and future users by providing the following
information:
1. Why is this change required?
2. What problem does it solve?
3. What scenario does it contribute to?
4. If it fixes an open issue, please link to the issue here.
-->
There is a warning about using the `python:3.12` base image in the release
pipeline for one of our demo applications. I moved the content of the
Dockerfile into the README so it can be used as an example.
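For illustration, a README-hosted Dockerfile for a `python:3.12`-based server might look like the sketch below. This is not the demo's actual content; the file names, port, and start command are assumptions.

```dockerfile
# Illustrative sketch only - not the demo's actual Dockerfile content.
# Assumes a Python web server with dependencies listed in requirements.txt.
FROM python:3.12

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the server code and run it
COPY . .
EXPOSE 8000
CMD ["python", "main.py"]
```

Keeping the snippet in the README documents the setup without shipping a Dockerfile that the release pipeline's image checks would flag.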
### Contribution Checklist
<!-- Before submitting this PR, please make sure: -->
- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄
dotnet/samples/Demos/QualityCheck/README.md (34 additions, 4 deletions)
@@ -3,6 +3,7 @@
This sample provides a practical demonstration of how to perform quality checks on LLM results for tasks such as text summarization and translation with Semantic Kernel Filters.

Metrics used in this example:

- [BERTScore](https://github.com/Tiiiger/bert_score) - leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity.
- [BLEU](https://en.wikipedia.org/wiki/BLEU) (BiLingual Evaluation Understudy) - evaluates the quality of text which has been machine-translated from one natural language to another.
- [METEOR](https://en.wikipedia.org/wiki/METEOR) (Metric for Evaluation of Translation with Explicit ORdering) - evaluates the similarity between the generated summary and the reference summary, taking into account grammar and semantics.
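For context, all three metrics listed above are available as off-the-shelf Python packages. The following is a rough, illustrative sketch of computing them for a single candidate/reference pair; it is not the demo's actual server code and assumes the `bert-score` and `nltk` packages are installed.

```python
# Illustrative sketch - not the QualityCheck demo's actual implementation.
# Assumes `pip install bert-score nltk` has been run.
import nltk
from bert_score import score as bert_score
from nltk.translate.bleu_score import sentence_bleu
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet")  # METEOR relies on WordNet synonym data

reference = "The cat sat on the mat."
candidate = "A cat was sitting on the mat."

# BERTScore: cosine similarity between contextual BERT embeddings
_, _, f1 = bert_score([candidate], [reference], lang="en")

# BLEU and METEOR operate on tokenized text
ref_tokens, cand_tokens = reference.split(), candidate.split()
bleu = sentence_bleu([ref_tokens], cand_tokens)
meteor = meteor_score([ref_tokens], cand_tokens)

print(f"BERTScore F1: {f1.item():.3f}  BLEU: {bleu:.3f}  METEOR: {meteor:.3f}")
```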
@@ -14,7 +15,7 @@ In this example, SK Filters call dedicated [server](./python-server/) which is r