2 changes: 1 addition & 1 deletion README.md
@@ -1,6 +1,6 @@
![Banner](https://raw.githubusercontent.com/pavanjava/bootstrap-rag/refs/heads/main/assets/bootstrap-rag.png)
# bootstrap-rag
this project will bootstrap and scaffold the projects for specific semantic search and RAG applications along with regular boiler plate code.
This project will bootstrap and scaffold the projects for specific semantic search and RAG applications along with regular boilerplate code.

### Architecture
![Arch](assets/architecture.png)
23 changes: 22 additions & 1 deletion bootstraprag/templates/evaluations/phoenix_evals/readme.md
@@ -1,2 +1,23 @@
## Phoenix Evaluations
- Under development

This repository provides a script for evaluating model-generated responses using Phoenix's `HallucinationEvaluator` and `QAEvaluator`.
To start evaluating, run `bootstraprag create phoenix_evals` and select the specific template shown on the CLI:

```text
? Which technology would you like to use? standalone-evaluations
? Which template would you like to use?
deep-evals
mlflow-evals
❯ phoenix-evals
ragas-evals
```

Just replace `input_data.csv` with your own data; the file has the following columns:
`id,reference,query,response`.
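
For orientation, here is a minimal sketch of what such an evaluation run might look like with the `phoenix.evals` API (`run_evals` over `HallucinationEvaluator` and `QAEvaluator`). The column mapping from `query`/`response` to `input`/`output` and the model choice are assumptions; the shipped `basic_evaluations.py` may differ.

```python
# Minimal sketch, not the template's actual basic_evaluations.py.
import pandas as pd
from phoenix.evals import HallucinationEvaluator, QAEvaluator, OpenAIModel, run_evals

# Load the input data and rename columns to the names the evaluators expect
# (assumed mapping: query -> input, response -> output).
df = pd.read_csv("input_data.csv").rename(
    columns={"query": "input", "response": "output"}
)

model = OpenAIModel(model="gpt-4o-mini")  # hypothetical model choice

# run_evals returns one results DataFrame per evaluator, in order.
hallucination_df, qa_df = run_evals(
    dataframe=df,
    evaluators=[HallucinationEvaluator(model), QAEvaluator(model)],
    provide_explanation=True,
)

# Keep the per-evaluator results alongside the original rows and write a report.
report = pd.concat(
    [df, hallucination_df.add_prefix("hallucination_"), qa_df.add_prefix("qa_")],
    axis=1,
)
report.to_csv("evaluation_report.csv", index=False)
```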

### How to execute?
- Run `python basic_evaluations.py`

### What to expect?
- At the end of the process, an `evaluation_report.csv` file is created in the parent folder, where you can see different aspects of the evaluations carried out on your input data.
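
Once the run finishes, the report can be inspected with pandas, for example:

```python
import pandas as pd

# Load the generated report; exact column names depend on the template,
# so just list them and preview a few rows.
report = pd.read_csv("evaluation_report.csv")
print(report.columns.tolist())
print(report.head())
```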