diff --git a/bare-metal/elastic-metal/reference-content/shared-responsibility-model.mdx b/bare-metal/elastic-metal/reference-content/shared-responsibility-model.mdx index 250db4fdb7..9963611778 100644 --- a/bare-metal/elastic-metal/reference-content/shared-responsibility-model.mdx +++ b/bare-metal/elastic-metal/reference-content/shared-responsibility-model.mdx @@ -7,7 +7,7 @@ content: paragraph: Learn about the shared responsibility model for Scaleway Bare Metal services, outlining the roles of Scaleway and users in managing server security, backups, and compliance. tags: bare metal shared responsibility dates: - validation: 2024-07-18 + validation: 2025-01-20 posted: 2024-07-18 categories: - bare-metal diff --git a/compute/gpu/how-to/use-gpu-with-docker.mdx b/compute/gpu/how-to/use-gpu-with-docker.mdx index 553ebe0bab..7b1f82e42d 100644 --- a/compute/gpu/how-to/use-gpu-with-docker.mdx +++ b/compute/gpu/how-to/use-gpu-with-docker.mdx @@ -7,7 +7,7 @@ content: paragraph: Learn how to efficiently access and use GPUs with Docker on Scaleway GPU Instances. tags: gpu docker dates: - validation: 2024-07-16 + validation: 2025-01-20 posted: 2022-03-25 categories: - compute @@ -49,7 +49,7 @@ We recommend that you map volumes from your GPU Instance to your Docker container You can map directories from your GPU Instance's Local Storage to your Docker container, using the `-v <local_directory>:<container_directory>` flag. See the example command below: -``` +```bash docker run -it --rm -v /root/mydata/:/workspace nvidia/cuda:11.2.1-runtime-ubuntu20.04 # use the `exit` command to exit this Docker container @@ -65,7 +65,7 @@ In the above example, everything in the `/root/mydata` directory on the Instance In the following example, we create a directory called `my-data`, create a "Hello World" text file inside that directory, then use the `chown` command to set appropriate ownership for the directory before running the Docker container and specifying the mapped directories.
The "Hello World" file is then available inside the Docker container: - ``` + ```bash mkdir -p /root/my-data/ echo "Hello World" > /root/my-data/hello.txt chown -R 1000:100 /root/my-data @@ -153,22 +153,22 @@ The possible values of the `NVIDIA_VISIBLE_DEVICES` variable are: ### Example commands * Starting a GPU-enabled CUDA container (using `--gpus`) - ```sh + ```bash docker run --runtime=nvidia -it --rm --gpus all nvidia/cuda:11.2.1-runtime-ubuntu20.04 nvidia-smi ``` * Starting a GPU-enabled container using `NVIDIA_VISIBLE_DEVICES` and specifying the NVIDIA runtime - ``` + ```bash docker run --runtime=nvidia -it --rm -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.2.1-runtime-ubuntu20.04 nvidia-smi ``` * Starting a GPU-enabled [Tensorflow](https://www.tensorflow.org/) container with a Jupyter notebook using `NVIDIA_VISIBLE_DEVICES` and mapping port `8888` to access the web GUI: - ``` + ```bash docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -it --rm -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter ``` * Querying the GPU UUID of the first GPU using `nvidia-smi` and then passing it to the container: - ``` + ```bash nvidia-smi -i 0 --query-gpu=uuid --format=csv uuid GPU-18a3e86f-4c0e-cd9f-59c3-55488c4b0c24 diff --git a/compute/gpu/how-to/use-pipenv.mdx b/compute/gpu/how-to/use-pipenv.mdx index 315c1e1bc7..1b82dda148 100644 --- a/compute/gpu/how-to/use-pipenv.mdx +++ b/compute/gpu/how-to/use-pipenv.mdx @@ -7,7 +7,7 @@ content: paragraph: This guide explains how to use Pipenv to create and manage virtual environments for Python projects on Scaleway GPU Instances.
tags: Pipenv, virtual environment, GPU, Python dates: - validation: 2024-07-17 + validation: 2025-01-20 posted: 2022-03-25 categories: - compute diff --git a/compute/gpu/reference-content/docker-images.mdx b/compute/gpu/reference-content/docker-images.mdx index 4041de4704..5164ae3fe4 100644 --- a/compute/gpu/reference-content/docker-images.mdx +++ b/compute/gpu/reference-content/docker-images.mdx @@ -7,7 +7,7 @@ content: paragraph: Discover detailed information about Scaleway's Docker images for AI development. tags: docker docker-image tensorflow pytorch jax rapids dates: - validation: 2024-07-16 + validation: 2025-01-20 posted: 2022-03-25 categories: - compute