SN1 validator containerization #720
Merged: bkb2135 merged 5 commits into macrocosm-os:staging from backend-developers-ltd:SN1-Validator-Containerization on May 29, 2025.
Commits (5):

- 0dd9cb2: Docker containerization for ReproducibleVLLM Validator (chrisu-inigra)
- 423fd30: Use build-context (chrisu-inigra)
- 09a73c9: Container building script for sn1-validator-api (chrisu-inigra)
- 797968d: Fixes for formatters and linters (chrisu-inigra)
- 60a8f59: Fixes from pre-commit hooks (chrisu-inigra)
Dockerfile (new file, 24 lines):

```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Build tooling for installing Python dependencies (some require git or compilation).
RUN apt-get update && apt-get install -y \
    git build-essential \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY download_model.py .

# Bake the model weights into the image at build time.
ARG LLM_MODEL
ENV MODEL_PATH=./downloaded_model

RUN python download_model.py --model-name "$LLM_MODEL" --model-path "$MODEL_PATH"

COPY . .
# Copied from the extra build context wired up by the build script (BuildKit --build-context).
COPY --from=external_context /vllm_llm.py .

EXPOSE 8000

CMD ["python", "app.py"]
```
app.py (new file, 50 lines):

```python
import os

import uvicorn
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from schema import ChatRequest, LogitsRequest
from vllm_llm import ReproducibleVLLM

# Set by the Dockerfile (ENV MODEL_PATH=./downloaded_model).
MODEL_PATH = os.getenv("MODEL_PATH")


class ReproducibleVllmApp:
    def __init__(self):
        self.llm = ReproducibleVLLM(model_id=MODEL_PATH)
        self.app = FastAPI()
        self.app.post("/generate")(self.generate)
        self.app.post("/generate_logits")(self.generate_logits)

    async def generate(self, request: ChatRequest):
        try:
            result = await self.llm.generate(
                messages=[m.dict() for m in request.messages],
                sampling_params=request.sampling_parameters.dict(),
                seed=request.seed,
                continue_last_message=request.continue_last_message,
            )
            return {"result": result}
        except Exception as e:
            return JSONResponse(status_code=500, content={"error": str(e)})

    async def generate_logits(self, request: LogitsRequest):
        try:
            logits, prompt = await self.llm.generate_logits(
                messages=[m.dict() for m in request.messages],
                top_logprobs=request.top_logprobs,
                sampling_params=request.sampling_parameters.dict(),
                seed=request.seed,
                continue_last_message=request.continue_last_message,
            )
            return {"logits": logits, "prompt": prompt}
        except Exception as e:
            return JSONResponse(status_code=500, content={"error": str(e)})

    def run(self):
        uvicorn.run(self.app, host="0.0.0.0", port=8000)


if __name__ == "__main__":
    server = ReproducibleVllmApp()
    server.run()
```

CodeQL code-scanning warning (raised on the `generate_logits` exception handler): Information exposure through an exception. Stack trace information flows to this location and may be exposed to an external user, since `str(e)` is returned verbatim in the 500 response body.
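For reference, a minimal client sketch against this service (not part of the PR; it assumes the container is already running and reachable on localhost:8000, and uses the third-party `requests` library). Payload fields mirror ChatRequest in schema.py, shown further below:

```python
# Hypothetical client for the containerized validator API (sketch, not PR code).
import requests

payload = {
    "messages": [{"role": "user", "content": "Say hello."}],
    "seed": 42,  # ChatRequest declares seed without a default, so it must be sent
    "sampling_parameters": {"temperature": 0.0, "max_tokens": 32},
    "continue_last_message": False,
}

resp = requests.post("http://localhost:8000/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["result"])
```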
Image build script (new file, 10 lines):

```bash
#!/bin/bash

IMAGE_NAME="sn1-validator-api"
MODEL_NAME="mrfakename/mistral-small-3.1-24b-instruct-2503-hf"

# BuildKit's --build-context makes ../prompting/llms available to the build as
# "external_context", which the Dockerfile consumes via COPY --from=external_context.
DOCKER_BUILDKIT=1 docker build \
    --build-arg LLM_MODEL="$MODEL_NAME" \
    -t "$IMAGE_NAME" \
    --build-context external_context=../prompting/llms \
    .
```
download_model.py (new file, 24 lines):

```python
import argparse

from huggingface_hub import snapshot_download

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Download model files")
    parser.add_argument(
        "--model-name",
        type=str,
        help="Model name to use",
    )
    parser.add_argument(
        "--model-path",
        type=str,
        help="Path to save the model files",
    )

    args = parser.parse_args()

    print(f"Downloading model {args.model_name} to {args.model_path}")

    snapshot_download(repo_id=args.model_name, local_dir=args.model_path)

    print(f"Model files downloaded to {args.model_path}")
```
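Since reproducibility is the point of ReproducibleVLLM, one hedged observation: `snapshot_download` also accepts a `revision` argument. A sketch of the same call with the weights pinned follows (a suggestion, not something this PR does; the revision value is a placeholder):

```python
# Sketch only: the script's download call with the model revision pinned,
# so that rebuilt images fetch identical weights. Not part of the PR.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mrfakename/mistral-small-3.1-24b-instruct-2503-hf",  # from the build script
    local_dir="./downloaded_model",  # matches the Dockerfile's MODEL_PATH
    revision="main",  # placeholder: pin a commit SHA here for reproducible builds
)
```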
requirements.txt (new file, 8 lines):

```
fastapi==0.115.0
uvicorn==0.23.2
pydantic==2.9.0
vllm==0.8.3
torch==2.6.0
numpy==1.26.4
loguru==0.7.2
huggingface-hub==0.30.0
```
schema.py (new file, 29 lines):

```python
from typing import List, Literal, Optional

from pydantic import BaseModel


class ChatMessage(BaseModel):
    content: str
    role: Literal["user", "assistant", "system"]


class SamplingParameters(BaseModel):
    temperature: Optional[float] = 1.0
    top_p: Optional[float] = 1.0
    max_tokens: Optional[int] = 512
    presence_penalty: Optional[float] = 0.0
    frequency_penalty: Optional[float] = 0.0
    top_k: Optional[int] = -1
    logprobs: Optional[int] = None


class ChatRequest(BaseModel):
    messages: List[ChatMessage]
    seed: Optional[int]
    sampling_parameters: Optional[SamplingParameters] = SamplingParameters()
    continue_last_message: Optional[bool] = False


class LogitsRequest(ChatRequest):
    top_logprobs: Optional[int] = 10
```
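To illustrate the request shape these models define, a small sketch (not part of the PR; the values are arbitrary):

```python
# Sketch: construct a LogitsRequest and dump the JSON body the API expects.
from schema import ChatMessage, LogitsRequest, SamplingParameters

req = LogitsRequest(
    messages=[ChatMessage(role="user", content="Hello")],
    seed=42,  # required: declared Optional[int] but with no default value
    sampling_parameters=SamplingParameters(temperature=0.0, max_tokens=64),
    top_logprobs=10,
)

# pydantic v2 API; the handlers above use the deprecated .dict() alias,
# which still works under the pinned pydantic==2.9.0.
print(req.model_dump_json(indent=2))
```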