Merged

24 commits
2097017
update documentation with together.ai guide
LVH-Tony Oct 9, 2024
a8a2deb
♻️ Improve prompt engineering for paraphrase generation
LVH-Tony Oct 9, 2024
18cca0f
♻️ adjust scoring weights
LVH-Tony Oct 9, 2024
5ad51c4
adjust documenation to new weights
LVH-Tony Oct 9, 2024
b3c8eec
Merge pull request #24 from LogicNet-Subnet/dev_tony
LVH-Tony Oct 9, 2024
b3f2526
revamp accuracy calculation
LVH-Tony Oct 9, 2024
e268a57
add debug logging for programmatic comparison
LVH-Tony Oct 9, 2024
3b0c1d3
Merge branch 'staging' into dev_tony
LVH-Tony Oct 10, 2024
99802a4
Fix Max Request Not Updating in Validator Check Limit Function
LVH-Tony Oct 10, 2024
98fee99
test access to terminal logs debug
LVH-Tony Oct 10, 2024
f4ebaac
test access to terminal logs debug 2
LVH-Tony Oct 10, 2024
431491b
include more info for debug logging on processing time
LVH-Tony Oct 10, 2024
b5ba5bd
include more info for debug logging on processing time 2
LVH-Tony Oct 10, 2024
562518f
include more info for debug logging on processing time 3
LVH-Tony Oct 10, 2024
42f57f4
include more info for debug logging on processing time 3
LVH-Tony Oct 10, 2024
baee68a
include logging in base miner
LVH-Tony Oct 10, 2024
4162096
improve logging for validator
LVH-Tony Oct 10, 2024
de3d662
reapplying LogicRequest protocol & remove TerminalInFo
LVH-Tony Oct 10, 2024
a13f26e
remove it cuz it make no sense lol
LVH-Tony Oct 10, 2024
e144a06
Update VALIDATOR.md
LVH-Tony Oct 11, 2024
ff1da5f
Merge pull request #28 from LogicNet-Subnet/dev_tony-1
LVH-Tony Oct 11, 2024
de2a32f
Merge pull request #27 from LogicNet-Subnet/dev_tony
LVH-Tony Oct 11, 2024
9cf4bb6
update version to 1.1.1
LVH-Tony Oct 11, 2024
5dc27ef
Merge pull request #30 from LogicNet-Subnet/dev_tony
LVH-Tony Oct 11, 2024
3 changes: 2 additions & 1 deletion .gitignore
@@ -174,4 +174,5 @@ main/*
*.bin
*.1
*.onnx
example_env.env
example_env.env
bittensor/
2 changes: 1 addition & 1 deletion README.md
@@ -23,7 +23,7 @@ Our goal is to develop an open-source AI model capable of complex mathematics an

- **Initial Score Calculation**:
- Each miner's response is evaluated to calculate an initial score using a weighted sum:
- `score = (0.4 * similarity_score) + (0.6 * correctness_score) - 0.1 * time_penalty`
- `score = (0.2 * similarity_score) + (0.8 * correctness_score) - 0.1 * time_penalty`
- **Similarity Score**: Calculated based on the cosine similarity between the miner's reasoning and the self-generated ground truth answer.
- **Correctness Score**: Determined by an LLM that assesses whether the miner's answer is correct based on the question and ground truth.
- **Time Penalty**: Derived from the processing time of the response relative to the specified timeout.
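
For intuition, here is a minimal sketch of this weighting (illustrative only; the function and argument names are not the repository's, and all inputs are assumed pre-normalized to [0, 1]):

```python
# Minimal sketch of the initial-score weighting; illustrative, not the repo's code.
def initial_score(similarity_score: float, correctness_score: float,
                  process_time: float, timeout: float) -> float:
    time_penalty = process_time / timeout  # fraction of the allowed time used (assumed form)
    return 0.2 * similarity_score + 0.8 * correctness_score - 0.1 * time_penalty

# Example: a correct answer with moderate similarity, returned well within the timeout.
print(initial_score(similarity_score=0.7, correctness_score=1.0,
                    process_time=12.0, timeout=64.0))  # ≈ 0.921
```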
134 changes: 103 additions & 31 deletions docs/MINER.md
@@ -1,6 +1,6 @@
# LogicNet: Miner documentation

### Overview
## Overview

The Miner is responsible for solving the challenges generated by the Validator. The Miner will receive the challenges from the Validator, solve them, and submit the solutions back to the Validator. The Miner will be rewarded based on the number of challenges solved and the quality of the solutions.

@@ -18,28 +18,6 @@ The Miner is responsible for solving the challenges generated by the Validator.
- `similarity (float)`: The validator computes the cosine similarity between `logic_reasoning` and the validator's own reasoning.
- `time_penalty (float)`: Penalty for late submission, computed as `process_time / timeout * MAX_PENALTY`.
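
For intuition, here is a hedged sketch of how these two quantities could be computed (the embedding step and the `MAX_PENALTY` value are assumptions for illustration, not taken from the repository):

```python
import numpy as np

MAX_PENALTY = 0.1  # assumed cap; the real constant lives in the validator's configuration

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two reasoning embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def time_penalty(process_time: float, timeout: float) -> float:
    """Linear late-submission penalty: process_time / timeout * MAX_PENALTY."""
    return min(process_time / timeout, 1.0) * MAX_PENALTY
```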

### Minimum Compute Requirements
- 1x GPU 24GB VRAM (RTX 4090, A100, A6000, etc)
- Storage: 100GB
- Python 3.10

Here's the revised chart sorted by ascending GPU footprint, including the model Qwen/Qwen2-7B-Instruct. Additionally, I've included a section on how to run larger models with lower VRAM using techniques such as adjusting `--gpu_memory_utilization`.

### Model to Run
Here are some model examples that could be leveraged, sorted by GPU footprint:

| Model Name | Model ID | Default GPU Footprint | Specialization |
| --- | --- | --- | --- |
| Qwen2-7B-Instruct | Qwen/Qwen2-7B-Instruct | 24 GB | Instruction-following, suitable for logic and structured reasoning |
| Mistral-7B-Instruct | mistralai/Mistral-7B-Instruct-v0.1 | 24 GB | High-performance, excellent for logical tasks |
| Qwen-7B-Chat | Qwen/Qwen-7B-Chat | 24 GB | Conversational logic and problem-solving |
| Baichuan2-13B-Chat | baichuan-inc/Baichuan2-13B-Chat | 32 GB | Versatile in language understanding, suitable for logic and math |
| Llama-2-13b-chat | meta-llama/Llama-2-13b-hf | 32 GB | Strong in conversational tasks, good for logic and structured reasoning |
| Falcon-40B | tiiuae/falcon-40b | 75 GB* | Advanced model, handles complex reasoning and logic efficiently |
| Mixtral-8x7B | mistralai/Mixtral-8x7B-Instruct-v0.1 | 92 GB* | Advanced model, handles complex reasoning and logic efficiently |

> \* Big models such as mixtral are very costly to run and optimize, so always bear in mind the trade-offs between model speed, model quality and infra cost.

### Setup for Miner
1. Clone the repository
```bash
@@ -60,18 +38,107 @@ pip install -e .
pip uninstall uvloop -y
pip install git+https://github.com/lukew3/mathgenerator.git
```
3. Install PM2 (process manager)

- For ease of use, you can run the scripts with PM2. To install PM2:
```bash
sudo apt update && sudo apt install jq && sudo apt install npm && sudo npm install pm2 -g && pm2 update
```

## There are two ways to run the Miner:
1. [Running Model via Together.AI API](#method-1-running-model-via-togetherai)
2. [Running Model **Locally** using vLLM](#method-2-running-model-locally-using-vllm)
---

### METHOD 1: Running Model via Together.AI

You can use together.ai's API to access various language models without hosting them locally.

**Note:** You need to register an account with together.ai, obtain an API key, and set the API key in a `.env` file.

1. **Register and Obtain API Key**

- Visit [together.ai](https://together.ai/) and sign up for an account.
   - Obtain your API key from the together.ai dashboard.

2. **Set Up the `.env` File**

Create a `.env` file in your project directory and add your together.ai API key: `TOGETHER_API_KEY=your_together_ai_api_key`

You can do this in one command:
```bash
echo "TOGETHER_API_KEY=your_together_ai_api_key" > .env
```

3. **Select a Model**

Together.ai provides access to various models. Please select a suitable chat/language model from the list below:

| Model Name | Model ID | Pricing (per 1M tokens) |
|-----------------------------|----------------------------------------------|-------------------------|
| **Qwen 1.5 Chat (72B)** | `qwen/Qwen-1.5-Chat-72B` | $0.90 |
| **Qwen 2 Instruct (72B)** | `Qwen/Qwen2-Instruct-72B` | $0.90 |
| **LLaMA-2 Chat (13B)** | `meta-llama/Llama-2-13b-chat-hf` | $0.22 |
| **LLaMA-2 Chat (7B)** | `meta-llama/Llama-2-7b-chat-hf` | $0.20 |
| **MythoMax-L2 (13B)** | `Gryphe/MythoMax-L2-13B` | $0.30 |
| **Mistral (7B) Instruct v0.3** | `mistralai/Mistral-7B-Instruct-v0.3` | $0.20 |
| **Mistral (7B) Instruct v0.2** | `mistralai/Mistral-7B-Instruct-v0.2` | $0.20 |
| **Mistral (7B) Instruct** | `mistralai/Mistral-7B-Instruct` | $0.20 |

More models are available on the together.ai platform: [together.ai models](https://api.together.ai/models)
> *Note: Do not choose image models; choose a chat or language model.*

4. **Run the Miner with together.ai**

Activate your virtual environment:
```bash
. main/bin/activate
```

Source the `.env` file:
```bash
source .env
```

Start the miner using the following command, replacing placeholders with your actual values:
```bash
pm2 start python --name "sn35-miner" -- neurons/miner/miner.py \
--netuid 35 \
--wallet.name "your-wallet-name" \
--wallet.hotkey "your-hotkey-name" \
--subtensor.network finney \
--axon.port "your-open-port" \
--miner.category Logic \
--miner.epoch_volume 200 \
--miner.llm_client.base_url https://api.together.xyz/v1 \
--miner.llm_client.model "model_id_from_list" \
--llm_client.key $TOGETHER_API_KEY \
--logging.debug
```
Replace `"model_id_from_list"` with the **Model ID** you have chosen from the together.ai model list. For example, `Qwen/Qwen2-Instruct-72B`.

**Notes:**

- Ensure your `TOGETHER_API_KEY` is correctly set in the `.env` file and sourced before running the command. You can inspect the file with `cat .env` and confirm the variable is exported with `echo $TOGETHER_API_KEY`. A quick end-to-end check is sketched after this list.
- The `--miner.llm_client.base_url` should point to the together.ai API endpoint: `https://api.together.xyz/v1`
- Make sure your `--miner.llm_client.model` matches the **Model ID** provided by together.ai.
- For more details on the together.ai API, refer to their [documentation](https://docs.together.ai/).
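
Before launching the miner, you can confirm the key and model work end-to-end. Below is a minimal sketch using the OpenAI-compatible Python client (an assumption for illustration: `pip install openai`; the model ID shown is one example from the table above):

```python
# Quick sanity check of a together.ai key and model; illustrative, not part of this repo.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # together.ai's OpenAI-compatible endpoint
    api_key=os.environ["TOGETHER_API_KEY"],  # exported via `source .env`
)
resp = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # replace with your chosen Model ID
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    max_tokens=32,
)
print(resp.choices[0].message.content)  # a healthy setup should answer 408
```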

---

### METHOD 2: Running Model locally using vLLM

#### Minimum Compute Requirements:
- 1x GPU 24GB VRAM (RTX 4090, A100, A6000, L4, etc.)
- Storage: 100GB
- Python 3.10
1. Create env for vLLM
```bash
python -m venv vllm
. vllm/bin/activate
pip install vllm
```
2. Setup LLM Configuration
- Self host a vLLM server
```bash
. vllm/bin/activate
# Single-GPU serving:
pm2 start "vllm serve Qwen/Qwen2-7B-Instruct --port 8000 --host 0.0.0.0" --name "sn35-vllm"
# Multi-GPU serving (tensor parallelism across 2 GPUs):
pm2 start "vllm serve Qwen/Qwen2-7B-Instruct --tensor-parallel-size 2 --port 8000 --host 0.0.0.0" --name "sn35-vllm"
```
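
Once the server is up, you can verify it responds before pointing the miner at it. A minimal sketch (assumes `pip install openai`; vLLM exposes an OpenAI-compatible API on the chosen port):

```python
# Sanity check for the local vLLM server; illustrative, not part of this repo.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key by default
resp = client.chat.completions.create(
    model="Qwen/Qwen2-7B-Instruct",  # must match the model passed to `vllm serve`
    messages=[{"role": "user", "content": "Say OK."}],
    max_tokens=8,
)
print(resp.choices[0].message.content)
```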

4. Run the following command to start mining
3. Run the following command to start mining
```bash
. main/bin/activate
pm2 start python --name "sn35-miner" -- neurons/miner/miner.py \
    # ... remaining flags collapsed in this diff view ...
--miner.llm_client.model Qwen/Qwen2-7B-Instruct \ # vLLM model name
--logging.debug \ # Optional: Enable debug logging
```

---

### If you encounter any issues, check the miner logs or contact the LogicNet support team.
Happy Mining!