`docs/VALIDATOR.md` (279 changes: 155 additions & 124 deletions)

@@ -2,84 +2,47 @@

## Overview

The Validator is responsible for generating challenges for the Miner to solve. It receives solutions from Miners, evaluates them, and rewards Miners based on the correctness and quality of their answers, with a penalty applied for late responses.

**Protocol**: `LogicSynapse`

- **Validator Prepares**:
  - `raw_logic_question`: A math problem generated by MathGenerator.
  - `logic_question`: The challenge sent to the Miner, produced by rewriting `raw_logic_question` with an LLM and adding personalization noise.
- **Miner Receives**:
- `logic_question`: The challenge to solve.
- **Miner Submits**:
- `logic_reasoning`: Step-by-step reasoning to solve the challenge.
  - `logic_answer`: The final answer to the challenge, expressed as a short sentence (see the field sketch below).
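
For orientation, here is a minimal sketch of the fields described above, written as a plain Python dataclass. This is an illustration only: the real `LogicSynapse` is a `bittensor` Synapse subclass, and only the field names are taken from the protocol description above.

```python
from dataclasses import dataclass

# Illustrative stand-in for the real LogicSynapse (a bittensor Synapse
# subclass); field names follow the protocol description above.
@dataclass
class LogicSynapseSketch:
    # Prepared by the Validator
    raw_logic_question: str = ""  # math problem from MathGenerator
    logic_question: str = ""      # LLM-rewritten, personalized challenge
    # Filled in by the Miner
    logic_reasoning: str = ""     # step-by-step reasoning
    logic_answer: str = ""        # short final answer
```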

**Reward Structure**:

- `correctness (bool)`: The Validator asks the LLM to check whether `logic_answer` matches the ground truth.
- `similarity (float)`: The Validator computes the cosine similarity between `logic_reasoning` and its own reasoning.
- `time_penalty (float)`: A penalty for late responses, calculated as `process_time / timeout * MAX_PENALTY` (see the sketch below).
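
As an illustration of how these three signals can combine into a single score, here is a minimal Python sketch. The weighting between correctness and similarity, the `MAX_PENALTY` value, and the clamping to zero are assumptions made for this example, not the actual LogicNet reward code; only the `time_penalty` formula comes from the description above.

```python
MAX_PENALTY = 0.5  # assumed cap on the lateness penalty

def score_response(correct: bool, similarity: float,
                   process_time: float, timeout: float) -> float:
    """Combine correctness, reasoning similarity, and a lateness penalty."""
    time_penalty = (process_time / timeout) * MAX_PENALTY  # formula from above
    base = 0.8 * float(correct) + 0.2 * similarity         # assumed weighting
    return max(0.0, base - time_penalty)                   # assumed floor at 0
```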

## Setup for Validator

Follow the steps below to configure and run the Validator.

There are two ways to run the Validator:

1. [Running the Validator via Together.AI](#method-1-running-the-validator-via-togetherai)
2. [Running the Validator Locally Using vLLM](#method-2-running-the-validator-locally-using-vllm)

---

### METHOD 1: Running the Validator via Together.AI

We recommend using Together.AI to run the Validator, as it simplifies setup and reduces local resource requirements.

#### Prerequisites:

- **Account on Together.AI**: [Sign up here](https://together.ai/).
- **Account on Hugging Face**: [Sign up here](https://huggingface.co/).
- **API Key**: Obtain from the Together.AI dashboard.
- **Python 3.10**
- **PM2 Process Manager** (optional): For running and managing the Validator process.

#### Steps:

1. **Clone the Repository**

@@ -94,122 +57,190 @@

   ```bash
bash install.sh
```
   *Or manually install the requirements:*
```bash
pip install -e .
pip uninstall uvloop -y
pip install git+https://github.com/lukew3/mathgenerator.git
```

3. **Register and Obtain API Key**
- Visit [Together.AI](https://together.ai/) and sign up.
- Obtain your API key from the dashboard.

4. **Set Up the `.env` File**
```bash
echo "TOGETHERAI_API_KEY=your_together_ai_api_key" > .env
echo "OPENAI_API_KEY=your_openai_api_key" >> .env
echo "HF_TOKEN=your_hugging_face_token" >> .env (needed for some vLLM model)
echo "TOGETHER_API_KEY=your_together_ai_api_key" > .env
echo "HF_TOKEN=your_hugging_face_token" >> .env
```

5. **Select a Model**
Choose a suitable chat or language model from Together.AI:

| Model Name | Model ID | Pricing (per 1M tokens) |
|---------------------------------|------------------------------------------|-------------------------|
| **Qwen 2 Instruct (72B)** | `Qwen/Qwen2-Instruct-72B` | $0.90 |
| **LLaMA-2 Chat (13B)** | `meta-llama/Llama-2-13b-chat-hf` | $0.22 |
| **MythoMax-L2 (13B)** | `Gryphe/MythoMax-L2-13B` | $0.30 |
| **Mistral (7B) Instruct v0.3** | `mistralai/Mistral-7B-Instruct-v0.3` | $0.20 |
| **LLaMA-2 Chat (7B)** | `meta-llama/Llama-2-7b-chat-hf` | $0.20 |
| **Mistral (7B) Instruct** | `mistralai/Mistral-7B-Instruct` | $0.20 |
| **Qwen 1.5 Chat (72B)** | `Qwen/Qwen-1.5-Chat-72B` | $0.90 |
| **Mistral (7B) Instruct v0.2** | `mistralai/Mistral-7B-Instruct-v0.2` | $0.20 |

More models are available here: [Together.AI Models](https://api.together.ai/models)
> *Note: Choose models labeled as `chat` or `language`. Avoid image models.*
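
If you prefer to check model IDs programmatically, Together.AI also exposes an OpenAI-compatible model listing endpoint. A quick check, assuming `TOGETHER_API_KEY` is already exported in your shell:

```bash
curl -s https://api.together.xyz/v1/models \
  -H "Authorization: Bearer $TOGETHER_API_KEY" | head
```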

6. **Install PM2 for Process Management**
```bash
sudo apt update && sudo apt install jq npm -y
sudo npm install pm2 -g
pm2 update
```

7. **Run the Validator**
- **Activate Virtual Environment**:
```bash
. main/bin/activate
```
- **Source the `.env` File**:
```bash
source .env
```
- **Start the Validator**:
```bash
pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
--netuid 35 \
--wallet.name "your-wallet-name" \
--wallet.hotkey "your-hotkey-name" \
--subtensor.network finney \
--llm_client.base_url https://api.together.xyz/v1 \
--llm_client.model "model_id_from_list" \
--llm_client.key $TOGETHER_API_KEY \
--logging.debug
```
> Replace `"model_id_from_list"` with the **Model ID** you selected (e.g., `Qwen/Qwen2-Instruct-72B`).

8. **(Optional) Enable Public Access**
Add the following flag to enable a validator proxy with your public port:
```bash
--axon.port "your-public-open-port"
```
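
Once the Validator is running, you can manage it with PM2's standard commands, for example:

```bash
pm2 list                    # show all PM2-managed processes and their status
pm2 logs sn35-validator     # tail the Validator logs
pm2 restart sn35-validator  # restart after changing flags or the .env file
```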

**Notes**:

- Ensure your `TOGETHER_API_KEY` and `HF_TOKEN` are correctly set and sourced (the snippet below checks both):
  - Check the `.env` file: `cat .env`
  - Verify the values are loaded: `echo $TOGETHER_API_KEY` and `echo $HF_TOKEN`
- The `--llm_client.base_url` should be `https://api.together.xyz/v1`.
- Match `--llm_client.model` with the **Model ID** from Together.AI.
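
For example, this small shell check (an illustration, not part of the repository) fails fast if either variable is missing:

```bash
# Load the .env file, then assert both variables are set
source .env
echo "${TOGETHER_API_KEY:?TOGETHER_API_KEY is not set}"
echo "${HF_TOKEN:?HF_TOKEN is not set}"
```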

### Additional Information

- **API Documentation**: [Together.AI Docs](https://docs.together.ai/)
- **Support**: If you encounter issues, check the validator logs or contact the LogicNet support team.

---

### METHOD 2: Running the Validator Locally Using vLLM

This method involves self-hosting a vLLM server to run the Validator locally. It requires more resources but provides more control over the environment.

#### Minimum Compute Requirements:

- **GPU**: 1x GPU with 24GB VRAM (e.g., RTX 4090, A100, A6000)
- **Storage**: 100GB
- **Python**: 3.10

#### Steps:

1. **Set Up vLLM Environment**
```bash
python -m venv vllm
. vllm/bin/activate
pip install vllm
```

2. **Install PM2 for Process Management**
```bash
sudo apt update && sudo apt install jq npm -y
sudo npm install pm2 -g
pm2 update
```

3. **Select a Model**

   The list of supported vLLM models can be found here: [vLLM Models](https://docs.vllm.ai/en/latest/models/supported_models.html)

4. **Start the vLLM Server**
```bash
. vllm/bin/activate
pm2 start "vllm Qwen/Qwen2-7B-Instruct --port 8000 --host 0.0.0.0" --name "sn35-vllm"
```
*Adjust the model, port, and host as needed.*
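
Before starting the Validator, you can confirm the server is reachable through vLLM's OpenAI-compatible API. The model name must match the one you launched; the prompt is just a smoke test:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2-7B-Instruct", "messages": [{"role": "user", "content": "What is 2+2?"}]}'
```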

5. **Set Up the `.env` File**
```bash
echo "HF_TOKEN=your_hugging_face_token" > .env
```
6. **Run the Validator with Self-Hosted LLM**
- **Activate Virtual Environment**:
```bash
. main/bin/activate
```
- **Start the Validator**:
```bash
pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
--netuid 35 \
--wallet.name "your-wallet-name" \
--wallet.hotkey "your-hotkey-name" \
--subtensor.network finney \
--llm_client.base_url http://localhost:8000/v1 \
--llm_client.model Qwen/Qwen2-7B-Instruct \
--logging.debug
```

7. **(Optional) Enable Public Access**
   Add the following flag to enable a validator proxy with your public port:
```bash
--axon.port "your-public-open-port"
```

---

### Wandb

Configure Wandb to track and analyze Validator performance. Wandb is optional, but recommended for better tracking and analysis of the Validator and the Miner.

- Configure the Wandb API key in the `.env` file:
  ```bash
  echo "WANDB_API_KEY=your_wandb_api_key" >> .env
  ```
- To run the Validator with Wandb on the mainnet, use the commands provided above.
- To run the Validator with Wandb on the testnet, append the following arguments to the commands above:
  ```bash
  --wandb.project_name logicnet-testnet \
  --wandb.entity ait-ai
  ```

---

### Troubleshooting & Support

- **Logs**: Use PM2 to check logs if you encounter issues.
```bash
pm2 logs sn35-validator
```
  - Wandb runs for mainnet: https://wandb.ai/ait-ai/logicnet-mainnet/runs
  - Wandb runs for testnet: https://wandb.ai/ait-ai/logicnet-testnet/runs

- **Common Issues**:
  - **API Key Not Found**: Ensure `.env` is sourced and `TOGETHER_API_KEY` is set.
  - **HF Token Not Found**: Ensure `.env` is sourced and `HF_TOKEN` is set.
  - **Model ID Incorrect**: Verify that `--llm_client.model` matches the Together.AI Model ID.
  - **Connection Errors**: Check internet connectivity and Together.AI service status.

- **Contact Support**: Reach out to the LogicNet support team for assistance.

---

`logicnet/utils/config.py` (10 changes: 5 additions & 5 deletions)

```diff
@@ -169,21 +169,21 @@ def add_args(cls, parser):
         )

         parser.add_argument(
-            "--llm_client.base_urls",
+            "--llm_client.base_url",
             type=str,
             help="The base url for the LLM client",
-            default="http://localhost:8000/v1,https://api.openai.com/v1,https://api.together.xyz/v1",
+            default="http://localhost:8000/v1",
         )

         parser.add_argument(
-            "--llm_client.models",
+            "--llm_client.model",
             type=str,
             help="The model for the LLM client",
-            default="Qwen/Qwen2-7B-Instruct,gpt-4o-mini,meta-llama/Llama-2-7b-chat-hf",
+            default="Qwen/Qwen2-7B-Instruct",
         )

         parser.add_argument(
-            "--llm_client.keys",
+            "--llm_client.key",
             type=str,
             help="The key for the LLM client",
             default="xyz",
```