
Proposal: show files of running containers #2333

Closed
philliphoff opened this issue Sep 18, 2020 · 20 comments · Fixed by #2463

Comments

@philliphoff
Contributor

philliphoff commented Sep 18, 2020

Proposal

The Containers tool window (CTW) in Visual Studio allows users to browse the file system of running containers, and users seem to really like this feature. Should we do the same for Visual Studio Code, something along the lines of the mockup below?

[Screenshot 2020-09-18 at 10 53 07: mockup of the proposed files view]

As a related item, it was suggested that, when opening a container file, if that container file actually maps to a local file that's been volume-mounted into the container, that the local file be opened instead. (This is something that CTW does not currently do, but the idea seems good in both places.)

@bwateratmsft
Contributor

bwateratmsft commented Sep 18, 2020

I think that this functionality overlaps too much with Remote - Containers, TBH. Remote - Containers offers the ability to attach to both containers and volumes and actually do things with the filesystem, not just see it.

@philliphoff
Contributor Author

While I agree that you can use Remote - Containers for similar purposes, it's much more invasive when connecting (e.g. installing the VS Code server, container-side extensions, etc.) as it tries to make that container a "dev" container. If you just want to poke around the file system and look at a couple of files, it seems very heavy-handed.

@karolz-ms
Contributor

I agree with @philliphoff. Being able to see where the files are and what they contain is what most users need. Injecting the VS Code backend into the container for that purpose is more than necessary, and also not intuitive or discoverable.

@bwateratmsft
Contributor

That's fair, although I wouldn't say I'm quite convinced. If we do add this, we should use the API instead of the CLI; the slow performance of the CLI would make this feature pretty frustrating.

@PavelSosin-320

This is a great idea, with one big BUT: existing Docker volume drivers don't support concurrent data access. See the discussion: Concurrency managed by volumes. The prerequisite is a PR to develop a Docker volume driver that supports concurrent data access without relying on inter-container IPC for synchronization. Maybe it is possible if the container is stopped while somebody looks into its volumes and all mounts are read-only except one.

@PavelSosin-320

This post explains how and why to share data between containers in Docker: sharing data between containers: how and why. Regarding the architecture proposal, you can ask Erich Gamma.

@dbreshears dbreshears added the P3 label Sep 23, 2020
@dbreshears dbreshears added this to the 1.8.0 milestone Sep 23, 2020
@PavelSosin-320

Why is it so hard to implement?
Say we have Container1, a writer, which populates the volume mounted by that container. For example, run the first container with the -it option and mount Volume1 as shared.
cd to the volume mount point inside the first container and run git clone ..... &.
Run Container2, the reader, in a similar way: cd to Volume1's mount point in the second container and do ls. The result depends: the default overlay2 Docker storage driver doesn't ensure any concurrency. Do ls again a few seconds later to refresh, and now you will see Volume1 populated. If you do it in the same filesystem you can process filesystem notifications or use flock for synchronization, but in this scenario you have to send a notification from Container1 to Container2 to say that Volume1 has been populated and can be read.
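The flock coordination mentioned above can be sketched roughly as follows. This is a minimal sketch assuming both processes see the same underlying filesystem (which is exactly what isn't guaranteed across containers); the paths are hypothetical:

```python
import fcntl

# Hypothetical path: the lock file would live on the shared volume itself.
LOCK_PATH = "/tmp/volume1.lock"

def write_with_lock(path: str, data: bytes) -> None:
    """Writer: hold an exclusive lock while populating the shared volume."""
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until no other lock is held
        try:
            with open(path, "wb") as f:
                f.write(data)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

def read_with_lock(path: str) -> bytes:
    """Reader: take a shared lock so a half-written file is never observed."""
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_SH)  # shared: many readers, no writer
        try:
            with open(path, "rb") as f:
                return f.read()
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

This is why the scenario described above needs inter-container signaling: flock only works when both sides agree on one filesystem's locking semantics.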

@karolz-ms
Contributor

Note: in addition to what @bwateratmsft said about performance, what Visual Studio currently does (executing an ls command inside the container and parsing the output) will not work for containers built from scratch, because they don't have any shell binaries. See this discussion on StackOverflow for some ideas.

@philliphoff
Contributor Author

philliphoff commented Sep 24, 2020

@karolz-ms That's a known limitation and I've seen such containers here and there when using CTW, but I'm not sure there's a "no touch" option. We could copy in a shell, execute it, and then remove the shell; but there will be the potential for leaving bits behind. That might be ok for such containers, but I think we'd want to warn users that we're making modifications to the running container in such cases.

@karolz-ms
Contributor

@philliphoff if only volumes could be mounted dynamically... ☺️

@bwateratmsft
Contributor

bwateratmsft commented Sep 29, 2020

The API has a way to get file contents (and presumably write them, though I haven't validated that): https://docs.docker.com/engine/api/v1.40/#operation/ContainerArchive

It almost worked for listing directory contents but the output of HEAD /containers/{id}/archive is more or less useless:

{
    "name": "/",
    "size": 4096,
    "mode": 2147484141,
    "mtime": "2020-09-29T15:27:33.82Z",
    "linkTarget": ""
}

Nevertheless, the GET / PUT could be helpful if we want to read / write files. Also, we should probably see if this whole thing can be implemented as a FileSystemProvider. And by that I mean we should definitely implement it as such.
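For what it's worth, the stat shown above arrives in the X-Docker-Container-Path-Stat response header of HEAD /containers/{id}/archive as base64-encoded JSON, and the mode field is Go's os.FileMode, whose high bit marks a directory, so the path can at least be classified even if its children can't be listed. A sketch of the decoding:

```python
import base64
import json

def decode_path_stat(header_value: str) -> dict:
    """Decode the base64 JSON carried in the X-Docker-Container-Path-Stat
    header of a HEAD /containers/{id}/archive?path=... response."""
    return json.loads(base64.b64decode(header_value))

def is_directory(stat: dict) -> bool:
    # "mode" is Go's os.FileMode; bit 31 (os.ModeDir) marks a directory.
    return bool(stat["mode"] & 0x80000000)

def permission_bits(stat: dict) -> int:
    # The low 9 bits are the usual Unix permission bits.
    return stat["mode"] & 0o777
```

Decoding the sample above this way shows mode 2147484141 is just ModeDir plus 0755, so the "useless" HEAD output at least distinguishes files from directories.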

@philliphoff
Contributor Author

philliphoff commented Sep 29, 2020

I assume that API is the basis of the docker cp CLI command. Unfortunately, that command isn't supported on Docker hosts configured to run Windows containers. In that case, VS uses an ugly hack to stream the file out of the container via docker exec.

@bwateratmsft
Contributor

bwateratmsft commented Sep 29, 2020

I don't know for sure about docker cp, but it is the basis of docker export.

UPDATE: Yes, it's also the basis for docker cp. I'm not too concerned about lack of support for Windows containers. IMO we could start the implementation for Linux containers; if we actually get interest we can add it for Windows containers using the CLI even though it would be slower. This way Linux containers stick with the much faster API.
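Since GET /containers/{id}/archive returns a tar stream (the same mechanism backing docker cp and docker export), reading files out of it could look roughly like this. A sketch: an in-memory stream stands in for the HTTP response body:

```python
import io
import tarfile
from typing import Dict

def read_archive_stream(stream) -> Dict[str, bytes]:
    """Extract regular files from a tar stream (e.g. the body of
    GET /containers/{id}/archive?path=...) into a {name: contents} dict."""
    files: Dict[str, bytes] = {}
    # "r|*" reads the archive as a non-seekable stream, matching an HTTP body.
    with tarfile.open(fileobj=stream, mode="r|*") as tar:
        for member in tar:
            if member.isfile():
                extracted = tar.extractfile(member)
                if extracted is not None:
                    files[member.name] = extracted.read()
    return files
```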

@philliphoff
Contributor Author

I agree that a FileSystemProvider would be a nice way to expose container files, both for consumption by our extension as well as others that find a use.

@bwateratmsft
Contributor

This feature has now been released in 1.8.0.

@benpinchas

benpinchas commented Nov 22, 2020

Hey 👋
Is there a way to disable this feature in release 1.8.0?

@PavelSosin-320

@bwateratmsft Clarification: "what are the files of a container" and "how fast can the list of container files be produced" are questions without a single answer, because according to the OCI container format (Docker, runc, Podman, Kubernetes, OpenShift, ...) every container has more than one filesystem:
1. The root FS, served by the overlay2 driver, created in memory when the image is loaded and the container is created, and shared among all running containers on an image-layer basis. It is designed to be very slim and static so that big Cloud clusters stay scalable. More than one team is working to improve these figures, but nobody cares much about the root FS, and overlay2 is about the maximum that has been achieved.
2. Mounts, binds, and volumes, served by the storage drivers of Docker, Podman, and the underlying Cloud infrastructure. Scores of developers are employed to improve storage driver and storage service performance to make big-data applications and video streaming possible; the expected performance is well above 1 terabyte per second for certain drivers and infrastructure.
So, ls / and ls mount-point are absolutely different questions:
ls / is the image layer structure plus changed files (i.e. docker diff), plus ls of all the mount points.

@karolz-ms
Contributor

@benpinchas why would you like to disable the feature? It does not incur any cost if you do not expand the Files node...

@benpinchas

benpinchas commented Nov 29, 2020

@karolz-ms
Maybe it's just me 🤷‍♂️
But...

First, I find this toggle mechanism annoying and much dirtier. (Attaching before and after.)

Second, let's say you have many microservices, and you want to deal with one of them.

Before this feature, you would just click one of them to gain focus, start typing part of the microservice name, and VS Code would mark any matching microservices (elements) for you.

Now, with this feature, clicking one of the elements to gain focus toggles it, resulting in an unwanted reveal of its files :(
A real pain in the neck.

Edit:
I've now found that you don't have to click one of the elements for "filter-by-type"; you can also click the scroll bar or the dead area beneath the list.

Before (much cleaner):
[screenshot: before]

After:
[screenshot: after]

Still one of the most helpful and powerful extensions!

@karolz-ms
Contributor

@benpinchas glad you found a workaround

Just FYI the extension now groups container nodes by Compose "project" by default (that is why there is a "docker" top-level node in the new tree layout), but that can be turned off via settings if you don't like that

@vscodebot vscodebot bot locked and limited conversation to collaborators Dec 20, 2020