runtime dockerfile and devel dockerfile #1619

Closed
pineking opened this issue May 23, 2017 · 4 comments
Labels
proposal accepted: The core team has reviewed the feature request and agreed it would be a useful addition to PyTorch.
todo: Not as important as medium or high priority tasks, but we will work on these.

Comments

@pineking

The Docker image built from the current Dockerfile is too large: more than 5 GB.
It would be better to have two separate Dockerfiles, a runtime one and a devel one,
so that the runtime image can be much smaller.

@chenzhekl

Virtualenv with pip provides a good enough isolated environment for runtime. I don't see a necessity for a runtime Dockerfile.
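
As a rough illustration of that suggestion, here is a minimal sketch of the virtualenv-plus-pip workflow; the environment name and the plain `pip install torch torchvision` command are assumptions, since the exact install command depends on the Python/CUDA combination (see pytorch.org):

```bash
# Create an isolated Python environment on the host instead of a runtime image.
python3 -m venv ~/pytorch-env
source ~/pytorch-env/bin/activate

# Install PyTorch and torchvision into the isolated environment.
pip install torch torchvision

# Quick check that the install works.
python -c "import torch; print(torch.__version__)"
```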

@soumith added the proposal accepted and todo labels on Jun 3, 2017
@ngimel
Collaborator

ngimel commented Jun 5, 2017

The following runtime Dockerfile produces a 3 GB image, compared to the ~3.5 GB image produced by the devel Dockerfile currently in the tree. If this reduction is worth it, I can submit a PR.

FROM ubuntu:16.04

LABEL com.nvidia.volumes.needed="nvidia_driver"

# System packages needed at runtime (and for building Python extensions).
RUN apt-get update && apt-get install -y --no-install-recommends \
         build-essential \
         git \
         curl \
         ca-certificates \
         libjpeg-dev \
         libpng-dev && \
     rm -rf /var/lib/apt/lists/*

# Install Miniconda and create a Python 3.5 environment with the scientific stack.
RUN curl -o ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && \
     chmod +x ~/miniconda.sh && \
     ~/miniconda.sh -b -p /opt/conda && \
     rm ~/miniconda.sh && \
     /opt/conda/bin/conda install conda-build && \
     /opt/conda/bin/conda create -y --name pytorch-py35 python=3.5.2 numpy pyyaml scipy ipython mkl && \
     /opt/conda/bin/conda clean -ya
ENV PATH /opt/conda/envs/pytorch-py35/bin:$PATH

# Install the prebuilt PyTorch, torchvision, and CUDA 8.0 packages from the soumith channel.
RUN conda install --name pytorch-py35 -c soumith magma-cuda80 && /opt/conda/bin/conda clean -ya
RUN conda install --name pytorch-py35 pytorch torchvision cuda80 -c soumith && /opt/conda/bin/conda clean -ya

# Point the dynamic linker at the NVIDIA driver libraries mounted by nvidia-docker.
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64

WORKDIR /workspace
RUN chmod -R a+w /workspace
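
For reference, a minimal sketch of how an image like this would be built and run; the image name, the file name, and the nvidia-docker v1 invocation are assumptions, not part of the proposal above:

```bash
# Build the runtime image from the Dockerfile above (saved here as Dockerfile_runtime).
docker build -t pytorch-runtime -f Dockerfile_runtime .

# Run with GPU access via nvidia-docker, which mounts the host driver into the
# volume declared by the com.nvidia.volumes.needed label.
nvidia-docker run -it --rm pytorch-runtime \
    python -c "import torch; print(torch.cuda.is_available())"
```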

@soumith
Member

soumith commented Jun 5, 2017

sure, why not!

@soumith
Member

soumith commented Jun 5, 2017

fixed in #1732

@soumith soumith closed this as completed Jun 5, 2017
facebook-github-bot pushed a commit that referenced this issue May 17, 2021
Summary:
Related to the effort to upgrade the Ubuntu base images (#58309), this PR removes the unused tools/docker/Dockerfile_runtime

It was introduced in #1619, #1732

- No code references in pytorch github org https://github.com/search?q=org%3Apytorch+Dockerfile_runtime&type=code
- Runtime images are available https://hub.docker.com/r/pytorch/pytorch/tags?page=1&ordering=last_updated&name=runtime (~2GB image size)

One less thing to maintain...

Pull Request resolved: #58333

Reviewed By: samestep

Differential Revision: D28457139

Pulled By: zhouzhuojie

fbshipit-source-id: 3c7034c52eb71463ac284dc48f0f9bbbf3af1312
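
For anyone reading this issue today, the published runtime images mentioned above can be used directly; a quick sketch (the tag shown is illustrative, check the Docker Hub link above for current tags):

```bash
# Pull a prebuilt runtime image from Docker Hub (tag is illustrative).
docker pull pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime

# Run it with GPU access using the modern NVIDIA container runtime.
docker run --gpus all -it --rm pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime \
    python -c "import torch; print(torch.cuda.is_available())"
```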
krshrimali pushed a commit to krshrimali/pytorch that referenced this issue May 19, 2021
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this issue May 24, 2022
* Initial support for cp.async on ampere
malfet pushed a commit that referenced this issue Jun 8, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

A few bigger updates:
1. Initial support of cp.async and cp.async.wait: csarofeen#1619
2. Emulate Ampere's mma 16816 with Turing's mma 1688, for a unified interface: csarofeen#1643
3. Extend the infrastructure to support mma operators on Turing and Ampere architectures: csarofeen#1440

Commits actually included in this PR from the csarofeen branch:
```
* dd23252 (csarofeen/devel) Fusion Segmenter: Unify single kernel and multi-kernel runtime path (#1710)
* b3d1c3f Fix missing cooperative launch (#1726)
* dc670a2 Async gmem copy support on sm80+ (#1619)
* 5e6a8da Add turing mma support and test (#1643)
* d6d6b7d Fix rFactor when there are indirect root domain(s), and refactor (#1723)
* 7093e39 Mma op integration on ampere (#1440)
* fade8da patch python test for bfloat16 (#1724)
* 8fbd0b1 Fine-grained kernel profiling (#1720)
* 77c1b4f Adding dry run mode to skip arch dependent checks (#1702)
* 151d95b More precise concretization analysis (#1719)
* f4d3630 Enable complex python tests (#1667)
* 4ceeee5 Minor bugfix in transform_rfactor.cpp (#1715)
* 3675c70 Separate root domain and rfactor domain in TransformPrinter (#1716)
* f68b830 Fix scheduling with polymorphic broadcast (#1714)
* 4ab5ef7 updating_ci_machine (#1718)
* 56585c5 Merge pull request #1711 from csarofeen/upstream_master_bump_0517
* 174d453 Allow using nvFuser on CUDA extension (#1701)
* 18bee67 Validate LOOP concrete IDs have complete IterDomains (#1676)
```
Pull Request resolved: #78244
Approved by: https://github.com/csarofeen, https://github.com/malfet
facebook-github-bot pushed a commit that referenced this issue Jun 8, 2022

Pull Request resolved: #78244

Reviewed By: ejguan

Differential Revision: D36678948

Pulled By: davidberard98

fbshipit-source-id: 0ccde965acbd31da67d99c6adb2eaaa888948105