Linux Mint 22.2 CUDA/cuDNN Docker solution #56

@syaofox

Description

A one-stop Docker setup, tailored for Linux Mint 22.2 + RTX 3060:

Goals

  • Bypass the host's CUDA/cuDNN entirely (nothing installed system-wide)
  • Use the latest CUDA 12.8 + cuDNN 9.14 directly
  • GPU passthrough via nvidia-container-toolkit
  • Start PyTorch/TensorFlow/Jupyter etc. with a single command

Ⅰ. Install only 3 things on the host (about 5 minutes)

# 1. Install the latest NVIDIA driver (≥560; 570+ recommended)
sudo ubuntu-drivers install

# 2. Install nvidia-container-toolkit (lets Docker see the GPU)
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# 3. Reboot, then verify
nvidia-smi
# You should see the RTX 3060 and the driver version
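
If apt cannot find nvidia-container-toolkit, the package lives in NVIDIA's own repository, not in the stock Mint/Ubuntu repos. The commands below are a sketch of the repository setup from NVIDIA's container-toolkit install guide; verify against the current docs before running:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update

A quick smoke test that Docker itself can reach the GPU, before pulling any framework image (the exact nvidia/cuda tag here is an assumption; any current base tag works):

docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi
# Same table as on the host → GPU passthrough is working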

Ⅱ. Start the latest CUDA container with a single command

Recommended images (official upstream builds, always tracking the latest)

Purpose      Image                              Launch command
PyTorch      pytorch/pytorch:latest             below
TensorFlow   tensorflow/tensorflow:latest-gpu   below
Jupyter      pytorch/pytorch:latest + Jupyter   below

1. PyTorch (CUDA 12.8 + cuDNN 9.14)

docker run --gpus all -it --rm \
  --shm-size=8g \
  -v $(pwd)/work:/workspace \
  -p 8888:8888 \
  pytorch/pytorch:latest \
  bash -c "pip install jupyterlab && jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root"

Open http://localhost:8888 in a browser → enter the token printed in the terminal
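
Once JupyterLab is open, a minimal sanity check to paste into a notebook cell (plain torch calls only, nothing specific to this image):

import torch

# Version info: torch build, the CUDA it was built against, the cuDNN it loads
print(torch.__version__, torch.version.cuda, torch.backends.cudnn.version())

# Confirm the RTX 3060 is visible and actually do work on it
print(torch.cuda.is_available(), torch.cuda.get_device_name(0))
x = torch.randn(2048, 2048, device="cuda")
print((x @ x).sum().item())   # forces a real kernel launch on the GPU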


2. TensorFlow(CUDA 12.8 + cuDNN 9)

docker run --gpus all -it --rm \
  -v $(pwd)/work:/workspace \
  -p 8888:8888 \
  tensorflow/tensorflow:latest-gpu-jupyter \
  bash -c "jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser --allow-root"
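
To confirm TensorFlow sees the GPU, run this inside the container (or in a notebook cell); tf.config.list_physical_devices is the standard check:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# Expected: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]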

3. Pure CUDA development environment (nvcc + gcc)

docker run --gpus all -it --rm \
  -v $(pwd):/app \
  nvidia/cuda:12.8.0-devel-ubuntu24.04 \
  bash

Inside the container, nvcc --version → release 12.8.
Note: this devel image ships the CUDA toolkit only; for the cuDNN headers (/usr/include/cudnn_version.h) use a cuDNN devel variant such as the one listed in section Ⅳ.
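
A minimal compile-and-run check for the toolchain, executed inside the container (hello.cu is just an example file name; any recent nvcc handles device-side printf):

cat > hello.cu <<'EOF'
#include <cstdio>

// Each GPU thread prints its index
__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // 1 block, 4 threads
    cudaDeviceSynchronize();  // wait for the kernel (and its printf) to finish
    return 0;
}
EOF
nvcc hello.cu -o hello && ./hello
# Expected: four "Hello from GPU thread N" lines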


Ⅲ. Verify the GPU works (run inside the container)

# Run this after entering any of the containers above
nvidia-smi
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"

Expected output:

True NVIDIA GeForce RTX 3060

Ⅳ. Image version quick reference (2025-10-29)

Image                                         CUDA    cuDNN   Python  Notes
nvidia/cuda:12.8.0-cudnn9-devel-ubuntu24.04   12.8.0  9.14.0  -       pure development
pytorch/pytorch:latest                        12.8    9.14    3.12    includes torch 2.5+
tensorflow/tensorflow:latest-gpu              12.8    9.14    3.11    includes TF 2.19+
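
The table reflects the tags on 2025-10-29; since :latest tags move, it is safer to print the versions from inside the image you actually pulled (standard torch/TF introspection, nothing image-specific):

docker run --rm --gpus all pytorch/pytorch:latest \
  python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.backends.cudnn.version())"
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python3 -c "import tensorflow as tf; info = tf.sysconfig.get_build_info(); print(tf.__version__, info['cuda_version'], info['cudnn_version'])"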

Ⅴ. Recommended: write a docker-compose.yml (one-command startup)

# File: docker-compose.yml
services:
  pytorch:
    image: pytorch/pytorch:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    volumes:
      - ./work:/workspace
    ports:
      - "8888:8888"
    command: >
      bash -c "pip install jupyterlab && 
               jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root"
    shm_size: '8gb'

Start it:

docker compose up -d
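
When running detached, the Jupyter token ends up in the container logs; to grab it and to shut the stack down (standard docker compose subcommands, service name pytorch from the file above):

docker compose logs -f pytorch   # the access URL with ?token=... is printed here
docker compose down              # stop and remove the container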

Summary: all you need now is

# 1. Install the driver + toolkit
sudo ubuntu-drivers install
sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# 2. Start PyTorch (recommended)
mkdir work && cd work
docker run --gpus all -it --rm -v $(pwd):/workspace -p 8888:8888 pytorch/pytorch:latest \
  bash -c "pip install jupyterlab && jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root"

Open localhost:8888 in the browser → CUDA 12.8 + the RTX 3060, ready to use
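
If you would rather keep a long-lived container instead of the throwaway --rm run above, a sketch of the same command as a named, auto-restarting service (the container name torch-lab is just an example):

docker run -d --gpus all --name torch-lab \
  --restart unless-stopped --shm-size=8g \
  -v $(pwd):/workspace -p 8888:8888 \
  pytorch/pytorch:latest \
  bash -c "pip install jupyterlab && jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root"
docker logs torch-lab            # shows the Jupyter token
docker exec -it torch-lab bash   # shell into the running container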
