diff --git a/docs/VALIDATOR.md b/docs/VALIDATOR.md
index fc6bc725..3df37aa4 100644
--- a/docs/VALIDATOR.md
+++ b/docs/VALIDATOR.md
@@ -2,76 +2,39 @@
 ## Overview
 
-The Validator is responsible for generating challenges for the Miner to solve. It evaluates solutions submitted by Miners and rewards them based on the quality and correctness of their answers. Additionally, it incorporates penalties for late responses.
+The Validator is responsible for generating challenges for the Miner to solve. It receives solutions from Miners, evaluates them, and rewards Miners based on the correctness and quality of those solutions, with a penalty applied to late responses.
 
 **Protocol**: `LogicSynapse`
 
 - **Validator Prepares**:
-  - `raw_logic_question`: A math problem generated using MathGenerator.
-  - `logic_question`: A personalized challenge created by refining `raw_logic_question` with an LLM.
+  - `raw_logic_question`: The math problem generated by MathGenerator.
+  - `logic_question`: The challenge sent to Miners, produced by rewriting `raw_logic_question` with an LLM and adding personalization noise.
 - **Miner Receives**:
   - `logic_question`: The challenge to solve.
 - **Miner Submits**:
   - `logic_reasoning`: Step-by-step reasoning to solve the challenge.
-  - `logic_answer`: The final answer to the challenge, expressed as a short sentence.
+  - `logic_answer`: The final answer to the challenge as a short sentence.
 
-### Reward Structure
+**Reward Structure**:
 
-- **Correctness (`bool`)**: Checks if `logic_answer` matches the ground truth.
-- **Similarity (`float`)**: Measures cosine similarity between `logic_reasoning` and the Validator’s reasoning.
-- **Time Penalty (`float`)**: Applies a penalty for delayed responses based on the formula:
-
-  ```
-  time_penalty = (process_time / timeout) * MAX_PENALTY
-  ```
+- `correctness (float)`: How closely `logic_answer` matches the ground truth, scored from 0 to 1 by numerical comparison or an LLM check.
+- `similarity (float)`: Cosine similarity between `logic_reasoning` and the Validator's own reasoning.
+- `time_penalty (float)`: Penalty for late responses, calculated as `process_time / timeout * MAX_PENALTY`.
 
 ## Setup for Validator
 
-Follow the steps below to configure and run the Validator.
-
-### Step 1: Configure for vLLM
-
-This setup allows you to run the Validator locally by hosting a vLLM server. While it requires significant resources, it offers full control over the environment.
-
-#### Minimum Compute Requirements
-
-- **GPU**: 1x GPU with 24GB VRAM (e.g., RTX 4090, A100, A6000)
-- **Storage**: 100GB
-- **Python**: 3.10
-
-#### Steps
-
-1. **Set Up vLLM Environment**
-   ```bash
-   python -m venv vllm
-   . vllm/bin/activate
-   pip install vllm
-   ```
-
-2. **Install PM2 for Process Management**
-   ```bash
-   sudo apt update && sudo apt install jq npm -y
-   sudo npm install pm2 -g
-   pm2 update
-   ```
-
-3. **Select a Model**
-   Supported models are listed [here](https://docs.vllm.ai/en/latest/models/supported_models.html).
+There are two ways to run the Validator:
 
-4. **Start the vLLM Server**
-   ```bash
-   . vllm/bin/activate
-   pm2 start "vllm Qwen/Qwen2-7B-Instruct --port 8000 --host 0.0.0.0" --name "sn35-vllm"
-   ```
-   *Adjust the model, port, and host as needed.*
+1. [Running the Validator via Together.AI](#method-1-running-the-validator-via-togetherai)
+2. 
[Running the Validator Locally Using vLLM](#method-2-running-the-validator-locally-using-vllm) --- -### Step 2: Configure for Together AI and Open AI +### METHOD 1: Running the Validator via Together.AI -Using Together AI and Open AI simplifies setup and reduces local resource requirements. At least one of these platforms must be configured. +We recommend using Together.AI to run the Validator, as it simplifies setup and reduces local resource requirements. -#### Prerequisites +#### Prerequisites: - **Account on Together.AI**: [Sign up here](https://together.ai/). - **Account on Hugging Face**: [Sign up here](https://huggingface.co/). @@ -79,7 +42,7 @@ Using Together AI and Open AI simplifies setup and reduces local resource requir - **Python 3.10** - **PM2 Process Manager**: For running and managing the Validator process. *OPTIONAL* -#### Steps +#### Steps: 1. **Clone the Repository** ```bash @@ -94,122 +57,190 @@ Using Together AI and Open AI simplifies setup and reduces local resource requir bash install.sh ``` - Alternatively, install manually: + *Or manually install the requirements:* ```bash pip install -e . pip uninstall uvloop -y pip install git+https://github.com/lukew3/mathgenerator.git ``` -3. **Set Up the `.env` File** +3. **Register and Obtain API Key** + - Visit [Together.AI](https://together.ai/) and sign up. + - Obtain your API key from the dashboard. + +4. **Set Up the `.env` File** ```bash - echo "TOGETHERAI_API_KEY=your_together_ai_api_key" > .env - echo "OPENAI_API_KEY=your_openai_api_key" >> .env - echo "HF_TOKEN=your_hugging_face_token" >> .env (needed for some vLLM model) + echo "TOGETHER_API_KEY=your_together_ai_api_key" > .env + echo "HF_TOKEN=your_hugging_face_token" >> .env ``` -4. **Select a Model** - Choose from the following models: - - **Together AI Models**: +5. **Select a Model** + Choose a suitable chat or language model from Together.AI: | Model Name | Model ID | Pricing (per 1M tokens) | |---------------------------------|------------------------------------------|-------------------------| - | Qwen 2 Instruct (72B) | `Qwen/Qwen2-Instruct-72B` | $0.90 | - | LLaMA-2 Chat (13B) | `meta-llama/Llama-2-13b-chat-hf` | $0.22 | - | MythoMax-L2 (13B) | `Gryphe/MythoMax-L2-13B` | $0.30 | - | Mistral (7B) Instruct v0.3 | `mistralai/Mistral-7B-Instruct-v0.3` | $0.20 | + | **Qwen 2 Instruct (72B)** | `Qwen/Qwen2-Instruct-72B` | $0.90 | + | **LLaMA-2 Chat (13B)** | `meta-llama/Llama-2-13b-chat-hf` | $0.22 | + | **MythoMax-L2 (13B)** | `Gryphe/MythoMax-L2-13B` | $0.30 | + | **Mistral (7B) Instruct v0.3** | `mistralai/Mistral-7B-Instruct-v0.3` | $0.20 | + | **LLaMA-2 Chat (7B)** | `meta-llama/Llama-2-7b-chat-hf` | $0.20 | + | **Mistral (7B) Instruct** | `mistralai/Mistral-7B-Instruct` | $0.20 | + | **Qwen 1.5 Chat (72B)** | `Qwen/Qwen-1.5-Chat-72B` | $0.90 | + | **Mistral (7B) Instruct v0.2** | `mistralai/Mistral-7B-Instruct-v0.2` | $0.20 | - **Open AI Models**: + More models are available here: [Together.AI Models](https://api.together.ai/models) + > *Note: Choose models labeled as `chat` or `language`. Avoid image models.* - | Model Name | Model ID | Pricing (per 1M tokens) | - |---------------------------------|------------------------------------------|-------------------------| - | GPT-4o | `gpt-4o` | $10.00 | - | GPT-4o Mini | `gpt-4o-mini` | $1.00 | - | GPT-4o Turbo | `gpt-4o-turbo` | $15.00 | - > *Refer to [Together AI Models](https://api.together.ai/models) and [Open AI Models](https://platform.openai.com/docs/models) for more options.* +6. 
**Install PM2 for Process Management**
+   ```bash
+   sudo apt update && sudo apt install jq npm -y
+   sudo npm install pm2 -g
+   pm2 update
+   ```
+
+7. **Run the Validator**
+   - **Activate Virtual Environment**:
+     ```bash
+     . main/bin/activate
+     ```
+   - **Source the `.env` File**:
+     ```bash
+     source .env
+     ```
+   - **Start the Validator**:
+     ```bash
+     pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
+     --netuid 35 \
+     --wallet.name "your-wallet-name" \
+     --wallet.hotkey "your-hotkey-name" \
+     --subtensor.network finney \
+     --llm_client.base_url https://api.together.xyz/v1 \
+     --llm_client.model "model_id_from_list" \
+     --llm_client.key $TOGETHER_API_KEY \
+     --logging.debug
+     ```
+     > Replace `"model_id_from_list"` with the **Model ID** you selected (e.g., `Qwen/Qwen2-Instruct-72B`).
+
+8. **(Optional) Enable Public Access**
+   Add the following flag to enable a validator proxy with your public port:
+   ```bash
+   --axon.port "your-public-open-port"
+   ```
+
+**Notes**:
+
+- Ensure your `TOGETHER_API_KEY` is correctly set and sourced:
+  - Check the `.env` file: `cat .env`
+  - Verify the API key is loaded: `echo $TOGETHER_API_KEY`
+- Ensure your `HF_TOKEN` is correctly set and sourced:
+  - Check the `.env` file: `cat .env`
+  - Verify the token is loaded: `echo $HF_TOKEN`
+- The `--llm_client.base_url` should be `https://api.together.xyz/v1`.
+- Match `--llm_client.model` with the **Model ID** from Together.AI.
+
+### Additional Information
+
+- **API Documentation**: [Together.AI Docs](https://docs.together.ai/)
+- **Support**: If you encounter issues, check the validator logs or contact the LogicNet support team.
 
 ---
 
-### Step 3: Run the Validator
+### METHOD 2: Running the Validator Locally Using vLLM
 
-1. **Activate Virtual Environment**
+This method involves self-hosting a vLLM server to run the Validator locally. It requires more resources but provides more control over the environment.
+
+#### Minimum Compute Requirements:
+
+- **GPU**: 1x GPU with 24GB VRAM (e.g., RTX 4090, A100, A6000)
+- **Storage**: 100GB
+- **Python**: 3.10
+
+#### Steps:
+
+1. **Set Up vLLM Environment**
    ```bash
-    . main/bin/activate
+   python -m venv vllm
+   . vllm/bin/activate
+   pip install vllm
+   ```
+
+2. **Install PM2 for Process Management**
+   ```bash
+   sudo apt update && sudo apt install jq npm -y
+   sudo npm install pm2 -g
+   pm2 update
    ```
 
-2. **Source the `.env` File**
+3. **Select a Model**
+
+   A list of supported vLLM models can be found here: [vLLM Models](https://docs.vllm.ai/en/latest/models/supported_models.html)
+4. **Start the vLLM Server**
   ```bash
-    source .env
+   . vllm/bin/activate
+   pm2 start "vllm Qwen/Qwen2-7B-Instruct --port 8000 --host 0.0.0.0" --name "sn35-vllm"
   ```
+   *Adjust the model, port, and host as needed.*
 
-3. **Start the Validator**
+5. **Set Up the `.env` File**
   ```bash
-    pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
-    --netuid 35 \
-    --wallet.name "your-wallet-name" \
-    --wallet.hotkey "your-hotkey-name" \
-    --subtensor.network finney \
-    --llm_client.base_urls "vllm_base_url,openai_base_url,together_base_url" \
-    --llm_client.models "vllm_model,openai_model,together_model" \
-    --neuron_type validator \
-    --logging.debug
+    echo "HF_TOKEN=your_hugging_face_token" > .env
   ```
-   Replace the placeholders with actual values just like the example.
-   - "vllm_base_url" with http://localhost:8000/v1.
-   - "openai_base_url" with https://api.openai.com/v1.
-   - "together_base_url" with https://api.together.xyz/v1. 
-   - "vllm_model" with Qwen/Qwen2-7B-Instruct.
-   - "openai_model" with gpt-4o-mini.
-   - "together_model" with meta-llama/Llama-2-7b-chat-hf.
-
-   *If you want to run either Together AI or Open AI, you can set the other to 'null'.*
-
-4. **Enable Public Access (Optional)**
-   Add this flag to enable proxy:
+
+6. **Run the Validator with Self-Hosted LLM**
+   - **Activate Virtual Environment**:
+     ```bash
+     . main/bin/activate
+     ```
+   - **Start the Validator**:
+     ```bash
+     pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
+     --netuid 35 \
+     --wallet.name "your-wallet-name" \
+     --wallet.hotkey "your-hotkey-name" \
+     --subtensor.network finney \
+     --llm_client.base_url http://localhost:8000/v1 \
+     --llm_client.model Qwen/Qwen2-7B-Instruct \
+     --logging.debug
+     ```
+
+7. **(Optional) Enable Public Access**
   ```bash
   --axon.port "your-public-open-port"
   ```
 
 ---
 
-### Additional Features
-
-#### Wandb Integration
+### Wandb
 
-Configure Wandb to track and analyze Validator performance.
-
-1. Add Wandb API key to `.env`:
-   ```bash
-   echo "WANDB_API_KEY=your_wandb_api_key" >> .env
-   ```
-2. It's already configured for mainnet as default.
-3. Run Validator with Wandb on Testnet:
-   ```bash
-   --wandb.project_name logicnet-testnet \
-   --wandb.entity ait-ai
-   ```
+*Wandb is optional but recommended for better tracking and analysis of the Validator and Miner.*
+- Configure the Wandb API key within the `.env` file:
+  ```bash
+  echo "WANDB_API_KEY=your_wandb_api_key" >> .env
+  ```
+- To run the Validator with Wandb on the mainnet, use the commands provided above.
+- To run the Validator with Wandb on the testnet, append the following arguments to the commands above:
+  ```bash
+  --wandb.project_name logicnet-testnet \
+  --wandb.entity ait-ai
+  ```
 
 ---
 
 ### Troubleshooting & Support
 
-- **Logs**:
-  - Please see the logs for more details using the following command.
+- **Logs**: Use PM2 to check logs if you encounter issues.
   ```bash
   pm2 logs sn35-validator
   ```
-  - Please check the logs for more details on wandb for mainnet.
-    https://wandb.ai/ait-ai/logicnet-mainnet/runs
-  - Please check the logs for more details on wandb for testnet.
-    https://wandb.ai/ait-ai/logicnet-testnet/runs
 - **Common Issues**:
-  - Missing API keys.
-  - Incorrect model IDs.
-  - Connectivity problems.
-- **Contact Support**: Reach out to the LogicNet team for assistance.
+  - **API Key Not Found**: Ensure `.env` is sourced and `TOGETHER_API_KEY` is set.
+  - **HF Token Not Found**: Ensure `.env` is sourced and `HF_TOKEN` is set.
+  - **Model ID Incorrect**: Verify that `--llm_client.model` matches the Together.AI Model ID.
+  - **Connection Errors**: Check internet connectivity and Together.AI service status.
+
+- **Contact Support**: Reach out to the LogicNet support team for assistance. 
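+
+- **Quick Endpoint Check**: The snippet below is an optional, illustrative sketch (not part of the validator code) for confirming that your base URL, API key, and model ID work before starting the Validator. It assumes the `openai` package installed by `install.sh` and the variables from `.env`; the file name is hypothetical:
+  ```python
+  # check_llm_endpoint.py -- hypothetical helper; run with: python check_llm_endpoint.py
+  import os
+  import openai
+
+  # Use the same values you pass to --llm_client.base_url / --llm_client.model / --llm_client.key.
+  client = openai.OpenAI(
+      base_url="http://localhost:8000/v1",  # or https://api.together.xyz/v1 for Together.AI
+      api_key=os.environ.get("TOGETHER_API_KEY", "xyz"),  # vLLM ignores the key; Together.AI needs a real one
+  )
+  response = client.chat.completions.create(
+      model="Qwen/Qwen2-7B-Instruct",  # replace with the Model ID you selected
+      messages=[{"role": "user", "content": "Reply with the single word: ok"}],
+      max_tokens=5,
+      temperature=0,
+  )
+  print(response.choices[0].message.content)  # any reply means the endpoint and key are working
+  ```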
--- diff --git a/logicnet/utils/config.py b/logicnet/utils/config.py index b1ffa2ed..00a4ccf4 100644 --- a/logicnet/utils/config.py +++ b/logicnet/utils/config.py @@ -169,21 +169,21 @@ def add_args(cls, parser): ) parser.add_argument( - "--llm_client.base_urls", + "--llm_client.base_url", type=str, help="The base url for the LLM client", - default="http://localhost:8000/v1,https://api.openai.com/v1,https://api.together.xyz/v1", + default="http://localhost:8000/v1", ) parser.add_argument( - "--llm_client.models", + "--llm_client.model", type=str, help="The model for the LLM client", - default="Qwen/Qwen2-7B-Instruct,gpt-4o-mini,meta-llama/Llama-2-7b-chat-hf", + default="Qwen/Qwen2-7B-Instruct", ) parser.add_argument( - "--llm_client.keys", + "--llm_client.key", type=str, help="The key for the LLM client", default="xyz", diff --git a/logicnet/utils/model_selector.py b/logicnet/utils/model_selector.py deleted file mode 100644 index 8050a84a..00000000 --- a/logicnet/utils/model_selector.py +++ /dev/null @@ -1,12 +0,0 @@ -import random - -def model_selector(model_rotation_pool): - # Filter out entries with "no use" or where the model is "null" - valid_models = {k: v for k, v in model_rotation_pool.items() if v != "no use" and v[2] != "null"} - - # Select a random model from the valid ones - model_key = random.choice(list(valid_models.keys())) - base_url, api_key, model = valid_models[model_key] - - # Return the selected model details - return model, base_url, api_key \ No newline at end of file diff --git a/logicnet/validator/challenger/challenger.py b/logicnet/validator/challenger/challenger.py index c34310d7..3cc89495 100644 --- a/logicnet/validator/challenger/challenger.py +++ b/logicnet/validator/challenger/challenger.py @@ -2,20 +2,23 @@ import os import openai import random -import mathgenerator -import bittensor as bt from logicnet.protocol import LogicSynapse +import bittensor as bt from .human_noise import get_condition from .math_generator.topics import TOPICS as topics -from logicnet.utils.model_selector import model_selector import mathgenerator from datasets import load_dataset DATASET_WEIGHT = [40,10,10,10,10,10,10] class LogicChallenger: - def __init__(self, model_rotation_pool: dict): - self.model_rotation_pool = model_rotation_pool + def __init__(self, base_url: str, api_key: str, model: str, dataset_weight: list): + bt.logging.info( + f"Initializing Logic Challenger with model: {model}, base URL: {base_url}." 
+ ) + self.model = model + self.openai_client = openai.OpenAI(base_url=base_url, api_key=api_key) + self.dataset_weight = [float(weight) for weight in dataset_weight.split(',')] def __call__(self, synapse: LogicSynapse) -> LogicSynapse: self.get_challenge(synapse) @@ -181,29 +184,21 @@ def get_revised_logic_question(self, logic_question: str, conditions: dict) -> s {"role": "user", "content": prompt}, ] - max_attempts = 3 - - for attempt in range(max_attempts): - model, base_url, api_key = model_selector(self.model_rotation_pool) - if not model or not base_url or not api_key: - raise ValueError("Model configuration is incomplete.") - - openai_client = openai.OpenAI(base_url=base_url, api_key=api_key) - bt.logging.debug(f"Initiating request with model '{model}' at base URL '{base_url}'.") - - try: - response = openai_client.chat.completions.create( - model=model, - messages=messages, - max_tokens=256, - temperature=0.7, - ) - revised_question = response.choices[0].message.content.strip() - bt.logging.debug(f"Generated revised math question: {revised_question}") - return revised_question - - except openai.error.OpenAIError as e: - bt.logging.error(f"OpenAI API request failed (attempt {attempt + 1}): {e}") - if attempt == max_attempts - 1: - raise RuntimeError("Failed to get a response after multiple attempts with different models.") - bt.logging.info("Switching to a different model configuration.") + response = self.openai_client.chat.completions.create( + model=self.model, + messages=messages, + max_tokens=256, + temperature=0.7, + ) + + response = response.choices[0].message.content.strip() + return response + + + def get_answer_value(self, possible_answers, answer): + # Get the value of the answer from the possible answers + options = possible_answers.split() + for i, option in enumerate(options): + if option.startswith(answer + ")"): + return options[i + 1] + return None # Return None if the answer is not found \ No newline at end of file diff --git a/logicnet/validator/rewarder.py b/logicnet/validator/rewarder.py index 71d8abd6..21e42d5f 100644 --- a/logicnet/validator/rewarder.py +++ b/logicnet/validator/rewarder.py @@ -1,11 +1,10 @@ import torch import openai -import sympy -import bittensor as bt -from concurrent import futures from logicnet.protocol import LogicSynapse from sentence_transformers import SentenceTransformer -from logicnet.utils.model_selector import model_selector +import bittensor as bt +from concurrent import futures +import sympy SIMILARITY_WEIGHT = 0.2 CORRECTNESS_WEIGHT = 0.8 @@ -37,11 +36,15 @@ Correctness Score (a number between 0 and 1, output only the number):""" class LogicRewarder: - def __init__(self, model_rotation_pool: dict): + def __init__(self, base_url: str, api_key: str, model: str): """ READ HERE TO LEARN HOW VALIDATOR REWARD THE MINER """ - self.model_rotation_pool = model_rotation_pool + bt.logging.info( + f"Logic Rewarder initialized with model: {model}, base_url: {base_url}" + ) + self.openai_client = openai.OpenAI(base_url=base_url, api_key=api_key) + self.model = model self.embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2") def __call__(self, uids, responses: list[LogicSynapse], base_synapse: LogicSynapse): @@ -115,17 +118,6 @@ def _get_correctness( Returns: list[float]: List of correctness scores for each response (float between 0 and 1). 
""" - model, base_url, api_key = model_selector(self.model_rotation_pool) - if not model: - raise ValueError("Model ID is not valid or not provided.") - if not base_url: - raise ValueError("Base URL is not valid or not provided.") - if not api_key: - raise ValueError("API key is not valid or not provided.") - - openai_client = openai.OpenAI(base_url=base_url, api_key=api_key) - bt.logging.debug(f"Initiating request with model '{model}' at base URL '{base_url}'.") - ground_truth_answer = base_synapse.ground_truth_answer bt.logging.debug(f"[CORRECTNESS] Ground truth: {ground_truth_answer}") correctness = [] @@ -158,66 +150,27 @@ def _get_correctness( if batch_messages: with futures.ThreadPoolExecutor() as executor: - for attempt in range(3): # Retry up to 3 times + results = executor.map( + lambda messages: self.openai_client.chat.completions.create( + model=self.model, + messages=messages, + max_tokens=5, + temperature=0, + ), + batch_messages, + ) + for idx, result in zip(indices_for_llm, results): + response_str = result.choices[0].message.content.strip().lower() + bt.logging.debug(f"[CORRECTNESS] Rating: {response_str}") try: - results = executor.map( - lambda messages: openai_client.chat.completions.create( - model=model, - messages=messages, - max_tokens=5, - temperature=0, - ), - batch_messages, - ) - for idx, result in zip(indices_for_llm, results): - response_str = result.choices[0].message.content.strip().lower() - bt.logging.debug(f"[CORRECTNESS] Rating: {response_str}") - try: - correctness_score = float(response_str) - correctness[idx] = min(max(correctness_score, 0.0), 1.0) - except ValueError: - default_score = 0.5 - bt.logging.warning(f"Failed to parse correctness score for response {idx}. Assigning default score of {default_score}.") - correctness[idx] = default_score - break - - except openai.error.OpenAIError as e: - bt.logging.error(f"API request failed: {e}") - if attempt == 2: # Last attempt - # Switch to another model, base URL, and API key - model, base_url, api_key = model_selector(self.model_rotation_pool) - if not model or not base_url or not api_key: - bt.logging.error("No alternative model, base URL, or API key available.") - for idx in indices_for_llm: - correctness[idx] = 0.5 - else: - openai_client = openai.OpenAI(base_url=base_url, api_key=api_key) - bt.logging.debug(f"Initiating request with model '{model}' at base URL '{base_url}'.") - try: - results = executor.map( - lambda messages: openai_client.chat.completions.create( - model=model, - messages=messages, - max_tokens=5, - temperature=0, - ), - batch_messages, - ) - for idx, result in zip(indices_for_llm, results): - response_str = result.choices[0].message.content.strip().lower() - bt.logging.debug(f"[CORRECTNESS] Rating: {response_str}") - try: - correctness_score = float(response_str) - correctness[idx] = min(max(correctness_score, 0.0), 1.0) - except ValueError: - default_score = 0.5 - bt.logging.warning(f"Failed to parse correctness score for response {idx}. Assigning default score of {default_score}.") - correctness[idx] = default_score - break - except openai.error.OpenAIError as e: - bt.logging.error(f"API request failed after switching: {e}") - for idx in indices_for_llm: - correctness[idx] = 0.5 + correctness_score = float(response_str) + correctness[idx] = min(max(correctness_score, 0.0), 1.0) + except ValueError: + # If parsing fails, assign a default score + default_score = 0.5 + bt.logging.warning(f"Failed to parse correctness score for response {idx}. 
Assigning default score of {default_score}.") + correctness[idx] = default_score + return correctness def _compare_numerical_answers(self, ground_truth: str, miner_answer: str): @@ -236,8 +189,6 @@ def _compare_numerical_answers(self, ground_truth: str, miner_answer: str): gt_abs = abs(gt_value) + epsilon relative_error = abs_difference / gt_abs - # Logs for debugging - bt.logging.debug(f"[CORRECTNESS DEBUG FOR NUMERICAL COMPARISON] Ground truth: {gt_value}, Miner answer: {miner_value}, Absolute difference: {abs_difference}, Relative error: {relative_error}") # Map relative error to correctness score between 0 and 1 # Assuming that a relative error of 0 corresponds to correctness 1 @@ -288,52 +239,12 @@ def _get_ground_truth(self, question: str): messages = [ {"role": "user", "content": question}, ] - model, base_url, api_key = model_selector(self.model_rotation_pool) - if not model: - raise ValueError("Model ID is not valid or not provided.") - if not base_url: - raise ValueError("Base URL is not valid or not provided.") - if not api_key: - raise ValueError("API key is not valid or not provided.") - - openai_client = openai.OpenAI(base_url=base_url, api_key=api_key) - bt.logging.debug(f"Initiating request with model '{model}' at base URL '{base_url}'.") - - response = "" - for attempt in range(3): # Retry up to 3 times - try: - response = openai_client.chat.completions.create( - model=model, - messages=messages, - max_tokens=1024, - temperature=0.7, - ) - response = response.choices[0].message.content - bt.logging.debug(f"[SIMILARITY] Self-generated ground truth: {response}") - return response # Return response if successful - - except openai.error.OpenAIError as e: - bt.logging.error(f"API request failed on attempt {attempt + 1}: {e}") - if attempt == 2: # Last attempt - # Switch to another model, base URL, and API key - model, base_url, api_key = model_selector(self.model_rotation_pool) - if not model or not base_url or not api_key: - bt.logging.error("No alternative model, base URL, or API key available.") - - else: - openai_client = openai.OpenAI(base_url=base_url, api_key=api_key) - bt.logging.debug(f"Initiating request with model '{model}' at base URL '{base_url}'.") - try: - response = openai_client.chat.completions.create( - model=model, - messages=messages, - max_tokens=1024, - temperature=0.7, - ) - response = response.choices[0].message.content - bt.logging.debug(f"[SIMILARITY] Self-generated ground truth: {response}") - return response - except openai.error.OpenAIError as e: - bt.logging.error(f"API request failed after switching: {e}") - + response = self.openai_client.chat.completions.create( + model=self.model, + messages=messages, + max_tokens=1024, + temperature=0.7, + ) + response = response.choices[0].message.content + bt.logging.debug(f"[SIMILARITY] Self-generated ground truth: {response}") return response diff --git a/neurons/validator/validator.py b/neurons/validator/validator.py index 29c2aee4..ebf13d06 100644 --- a/neurons/validator/validator.py +++ b/neurons/validator/validator.py @@ -1,18 +1,17 @@ -import os import time -import threading import datetime +import bittensor as bt import random -import traceback import torch -import requests -import bittensor as bt -import logicnet as ln -from neurons.validator.validator_proxy import ValidatorProxy from logicnet.base.validator import BaseValidatorNeuron +from neurons.validator.validator_proxy import ValidatorProxy +import logicnet as ln from logicnet.validator import MinerManager, LogicChallenger, LogicRewarder, 
MinerInfo from logicnet.utils.wandb_manager import WandbManager +import traceback +import threading from neurons.validator.core.serving_queue import QueryQueue +import requests def init_category(config=None): @@ -20,8 +19,17 @@ def init_category(config=None): "Logic": { "synapse_type": ln.protocol.LogicSynapse, "incentive_weight": 1.0, - "challenger": LogicChallenger(config), - "rewarder": LogicRewarder(config), + "challenger": LogicChallenger( + config.llm_client.base_url, + config.llm_client.key, + config.llm_client.model, + config.dataset_weight, + ), + "rewarder": LogicRewarder( + config.llm_client.base_url, + config.llm_client.key, + config.llm_client.model, + ), "timeout": 64, } } @@ -35,53 +43,7 @@ def __init__(self, config=None): """ super(Validator, self).__init__(config=config) bt.logging.info("\033[1;32m🧠 load_state()\033[0m") - - ### Initialize model rotation pool ### - self.model_rotation_pool = {} - openai_key = os.getenv("OPENAI_API_KEY") - togetherai_key = os.getenv("TOGETHERAI_API_KEY") - if not openai_key and not togetherai_key: - bt.logging.warning("OPENAI_API_KEY or TOGETHERAI_API_KEY is not set. Please set it to use OpenAI or TogetherAI.") - raise ValueError("OPENAI_API_KEY or TOGETHERAI_API_KEY is not set. Please set it to use OpenAI or TogetherAI and restart the validator.") - - base_urls = self.config.llm_client.base_urls.split(",") - models = self.config.llm_client.models.split(",") - - # Ensure the lists have enough elements - if len(base_urls) < 3 or len(models) < 3: - bt.logging.warning("base_urls or models configuration is incomplete. Please ensure they have just 3 entries.") - raise ValueError("base_urls or models configuration is incomplete. Please ensure they have just 3 entries.") - - self.model_rotation_pool = { - "vllm": [base_urls[0].strip(), "xyz", models[0]], - "openai": [base_urls[1].strip(), openai_key, models[1]], - "togetherai": [base_urls[2].strip(), togetherai_key, models[2]], - } - - # Check if 'null' is at the same index in both cli lsts - for i in range(3): - if base_urls[i].strip() == 'null' or models[i].strip() == 'null': - if i == 0: - self.model_rotation_pool["vllm"] = "no use" - elif i == 1: - self.model_rotation_pool["openai"] = "no use" - elif i == 2: - self.model_rotation_pool["togetherai"] = "no use" - - # Check if all models are set to "no use" - if all(value == "no use" for value in self.model_rotation_pool.values()): - bt.logging.warning("All models are set to 'no use'. Validator cannot proceed.") - raise ValueError("All models are set to 'no use'. Please configure at least one model and restart the validator.") - - # Create a model_rotation_pool_without_keys - model_rotation_pool_without_keys = { - key: "no use" if value == "no use" else [value[0], "Not allowed to see.", value[2]] - if key in ["openai", "togetherai"] else value - for key, value in self.model_rotation_pool.items() - } - bt.logging.info(f"Model rotation pool without keys: {model_rotation_pool_without_keys}") - - self.categories = init_category(self.model_rotation_pool) + self.categories = init_category(self.config) self.miner_manager = MinerManager(self) self.load_state() self.update_scores_on_chain()
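
For reference, the `get_answer_value` helper added to `LogicChallenger` in `challenger.py` above treats the multiple-choice block as a single whitespace-separated string and returns the token that follows the chosen label. The sketch below is a standalone copy of that logic, shown only to illustrate the expected input format:

```python
# Standalone copy of the new LogicChallenger.get_answer_value logic (illustration only).
def get_answer_value(possible_answers: str, answer: str):
    """Return the value that follows the chosen option label, e.g. 'B' -> '15'."""
    options = possible_answers.split()
    for i, option in enumerate(options):
        if option.startswith(answer + ")"):
            return options[i + 1]
    return None  # label not found


print(get_answer_value("A) 12 B) 15 C) 18 D) 21", "B"))  # prints: 15
```

Because the helper splits on whitespace, it returns only the first token of a multi-word answer value.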