This repository provides a GPU-enabled Docker environment containing multiple isolated Python environments for different AI workflows.
The container is based on CUDA 12.1 + Ubuntu 22.04 and includes environments for:
- TensorFlow research
- PyTorch deep learning
- Data science workflows
- Resume OCR + ATS scoring with Transformers
The container contains four virtual environments, located under /opt/envs/:

| Environment | Path | Purpose |
|---|---|---|
| TensorFlow | /opt/envs/tf | TensorFlow experiments and research |
| PyTorch | /opt/envs/torch | Deep learning, computer vision, YOLO |
| Data Science | /opt/envs/ds | Classical ML and analytics |
| ATS Resume AI | /opt/envs/ats | OCR, NLP, resume parsing, ATS scoring |
Build the image by running the following command from the project directory:

```bash
docker build -t ai-lab:latest .
```

To start the container with GPU support:

```bash
docker run -it --gpus all \
  -v $(pwd):/workspace \
  ai-lab:latest
```

This mounts your current project folder to /workspace inside the container.
Each environment can be activated manually using `source`.

Activate the TensorFlow environment:

```bash
source /opt/envs/tf/bin/activate
```

Verify:

```bash
python -c "import tensorflow as tf; print(tf.__version__)"
```

Deactivate:

```bash
deactivate
```

Activate the PyTorch environment:

```bash
source /opt/envs/torch/bin/activate
```

Verify GPU access:

```bash
python -c "import torch; print(torch.cuda.is_available())"
```

Deactivate:

```bash
deactivate
```

Activate the Data Science environment:

```bash
source /opt/envs/ds/bin/activate
```

Test the core packages:

```bash
python -c "import pandas, sklearn, xgboost"
```

Deactivate:

```bash
deactivate
```

The ATS Resume AI environment is designed for:
- Resume OCR
- EasyOCR
- Transformers
- Sentence embeddings
- ATS scoring
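In practice this environment pairs OCR output with embedding-based similarity (e.g. Sentence Transformers). As a dependency-free illustration of the scoring idea only, here is a minimal keyword-overlap sketch; the function name and scoring rule are hypothetical, not the container's actual pipeline:

```python
# Hypothetical ATS keyword-overlap score (illustrative only):
# fraction of job-description keywords that also appear in the resume.
def ats_score(resume_text: str, job_text: str) -> float:
    resume_words = set(resume_text.lower().split())
    job_words = set(job_text.lower().split())
    if not job_words:
        return 0.0
    return len(resume_words & job_words) / len(job_words)

resume = "python pytorch docker nlp transformers"
job = "python nlp transformers kubernetes"
print(round(ats_score(resume, job), 2))  # 3 of 4 keywords match -> 0.75
```

A real scorer would compare dense sentence embeddings rather than raw token sets, which tolerates paraphrasing and synonyms.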
Activate the ATS environment:

```bash
source /opt/envs/ats/bin/activate
```

Test OCR:

```bash
python -c "import easyocr; print('EasyOCR ready')"
```

Deactivate:

```bash
deactivate
```

You can also call an environment's Python interpreter directly:

```bash
/opt/envs/torch/bin/python script.py
```

Example:

```bash
/opt/envs/ats/bin/python resume_parser.py
```

If using JupyterLab inside the container, the following kernels are available:
- Python (TensorFlow)
- Python (PyTorch)
- Python (Data Science)
- Python (ATS Resume AI)
Start JupyterLab:

```bash
/opt/envs/ds/bin/jupyter lab --ip=0.0.0.0 --port=8888 --allow-root
```

The container uses /workspace as its main working directory.
Any files in your host project directory will appear here inside the container.
The container supports CUDA-enabled GPUs.

Verify inside the container:

```bash
nvidia-smi
```

or:

```bash
python -c "import torch; print(torch.cuda.is_available())"
```

The container includes support for:
- TensorFlow
- PyTorch
- HuggingFace Transformers
- Sentence Transformers
- EasyOCR
- Tesseract OCR
- OpenCV
- YOLOv8
- Streamlit / Gradio / FastAPI
- Pandas / Scikit-learn / XGBoost
- JupyterLab
The container supports running Streamlit applications for building interactive ML demos.
Activate the desired environment (example: Data Science):

```bash
source /opt/envs/ds/bin/activate
```

Run the app:

```bash
streamlit run app.py --server.address 0.0.0.0 --server.port 8501
```

Access it:

- Local: http://localhost:8501
- Remote (Coder / VM / Cloud): forward port 8501, then open the forwarded URL

Notes:

- `--server.address 0.0.0.0` is required for Docker/remote environments
- The default port is 8501
- Change the port if needed:

```bash
streamlit run app.py --server.address 0.0.0.0 --server.port 8080
```

MIT License