This guide walks you through setting up a Python environment, installing dependencies, configuring GPU usage, and running a transformer model with LangChain.
Creating a virtual environment helps isolate dependencies and prevents conflicts with other Python projects.
**Windows:**

```shell
python -m venv langchain-env
langchain-env\Scripts\activate
```

**macOS/Linux:**

```shell
python -m venv langchain-env
source langchain-env/bin/activate
```
Once the virtual environment is activated, install the required dependencies.
```shell
pip install langchain transformers langchain-huggingface
```
If you have an NVIDIA GPU, install the CUDA-enabled version of PyTorch.
Run the following command, replacing `cu126` with your CUDA version:

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
```
To check which CUDA version you have installed, run:
```shell
nvcc --version
```
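If `nvcc` isn't on your PATH, you can also ask PyTorch which CUDA toolkit its installed wheel was built against. A minimal sketch (this reports the toolkit version the wheel was compiled with, not the driver version, and prints `None` for CPU-only builds):

```python
import torch

# CUDA toolkit version this PyTorch build was compiled against
# (None means a CPU-only build of PyTorch)
print(torch.version.cuda)
```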
If you don’t have CUDA installed, follow NVIDIA’s official CUDA Installation Guide.
Run the following Python code to verify that your GPU is available:
```python
import torch

# Check if a CUDA-capable GPU is available
gpu_available = torch.cuda.is_available()
device_name = torch.cuda.get_device_name(0) if gpu_available else "No GPU found"

print(f"GPU Available: {gpu_available}")
print(f"GPU Name: {device_name}")
```
If `torch.cuda.is_available()` returns `False`, ensure that:
- You have an NVIDIA GPU.
- The correct version of CUDA is installed.
- You installed the CUDA-enabled version of PyTorch.
Once GPU availability is confirmed, specify the device in the transformer pipeline.
```python
from transformers import pipeline

# Load the model and run it on the first GPU (device=0)
model = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device=0,  # 0 refers to the first GPU
)

# Generate text
output = model("What is LangChain?")
print(output)
```
If using a CPU instead of a GPU, change `device=0` to `device=-1`.
| Step | Command / Code |
|---|---|
| Create a Virtual Env | `python -m venv langchain-env && source langchain-env/bin/activate` |
| Install Requirements | `pip install langchain transformers langchain-huggingface` |
| Install GPU Support | `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126` |
| Check GPU Availability | `print(torch.cuda.is_available())` |
| Run Model on GPU | `pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.1", device=0)` |
Now you’re ready to use LangChain and Transformers with GPU acceleration! 🚀