
Conversation

@giuseppe (Collaborator) commented Nov 9, 2017

Fix "atomic mount" for system containers when http: is added to the image name

/cc @peterbaouoft

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@peterbaouoft (Contributor) left a comment

I tend to pay a lot more attention to details and minor issues at the beginning of my journey into 'atomic'; I feel this helps me understand the codebase better, so hopefully you don't mind it :p.

Below are nits and questions. =)

# If ostree storage was explicitly requested, then we have to
# error out if the container/image could not be mounted.
if not res:
    raise ValueError("Could not mount {}".format(self.image))
@peterbaouoft (Contributor)

Hey, a minor question here. I noticed that in _try_ostree_mount(), the only case where False is returned seems to be when neither an image nor a container is found via the syscontainers function calls (has_container and has_image are both False). These are the lines:
https://github.com/projectatomic/atomic/pull/1127/files#diff-ce5473cd96a720a53d0834698c9b2eaaR865 and
https://github.com/projectatomic/atomic/pull/1127/files#diff-ce5473cd96a720a53d0834698c9b2eaaR166

The other cases all seem to raise an error already upon mounting failure.
Case 1: https://github.com/projectatomic/atomic/blob/master/Atomic/mount.py#L289
Case 2: https://github.com/projectatomic/atomic/pull/1127/files#diff-ce5473cd96a720a53d0834698c9b2eaaR857

If that is the case (I hope I'm right, but ignore this if I'm wrong :p), do we need to change the error description to "Image not found" or something similar? But that would overlap with _no_such_image() below, so... thoughts?
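
For reference, here is a minimal sketch of that control flow as I read it from the diff; the names (backend, has_container, has_image, remove_skopeo_prefixes) follow this discussion, and everything else is illustrative rather than the actual mount.py code:

def try_ostree_mount(backend, image, ostree_present=True):
    if not ostree_present:
        return False

    identifier = remove_skopeo_prefixes(image)  # strips 'http:' etc.

    # The only other False return: the identifier matches neither a
    # system container nor an image in the OSTree backend.
    if not backend.has_container(identifier) and not backend.has_image(identifier):
        return False

    # From here on, a mount failure raises instead of returning False.
    backend.mount(identifier)
    return True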

========== Another question, about GLib.Error ==========
I can't seem to find the place where GLib.Error is raised in this code section:

except GLib.Error:  # pylint: disable=catching-non-exception
    self._no_such_image()

Since I am relatively new to the codebase, can you remind me where it is used? =)

@giuseppe (Collaborator, Author)

That is the exception type that can be raised from the OSTree bindings. In this case OSTreeMount, both as a fallback and when the user != root, tries a checkout from OSTree, and that can fail with this exception.

We cannot really use "Image not found", as the same mount is used to mount a container (which might be using an already deleted image from the repository), so I preferred the more generic "Cannot mount X", where X can be either an image or a container.
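
As a minimal sketch of where such an exception comes from (the repository path here is illustrative, not the one atomic actually uses):

import gi
gi.require_version("OSTree", "1.0")
from gi.repository import GLib, Gio, OSTree

repo = OSTree.Repo.new(Gio.File.new_for_path("/ostree/repo"))
try:
    repo.open(None)  # any failing OSTree binding call raises GLib.Error
except GLib.Error:  # pylint: disable=catching-non-exception
    # in Atomic this path ends in self._no_such_image()
    print("OSTree operation failed")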

@peterbaouoft (Contributor)

that is the exception type that can be raised from the OSTree bindings

Ah, I see; I think it makes sense to me now =).

We cannot really use "Image not found" as the same mount is used to mount a container

Sorry for not being clear the first time =(. I was trying to say that the source of this failure (False) is that the identifier (name) was found neither in the containers nor in the image backend, right? "Cannot mount X" suggests to me that the failure was caused by a mounting error. But it is really a minor nit =), so I think it is OK to keep the description.

if not OSTREE_PRESENT:
    return False

identifier = util.remove_skopeo_prefixes(identifier)
@peterbaouoft (Contributor)

Hmm, since we add this removal for the ostree prefix, I have a question: do we also need to worry about something like the atomic pull --storage=docker http:${NAME} case? In the docker handling part, do we also want to strip the prefix (http:, oci:, or https:) off?

I understand this PR is mainly for the ostree case, but I am unsure where to put a related question like this, so if this is the wrong place, please correct me =).
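
For context, here is my guess at what util.remove_skopeo_prefixes does; the prefix list ('http:', 'https:', 'oci:') is inferred from this discussion, not copied from Atomic/util.py:

def remove_skopeo_prefixes(image):
    for prefix in ("http:", "https:", "oci:"):
        if image.startswith(prefix):
            return image[len(prefix):]
    return image

# e.g. remove_skopeo_prefixes("http:registry.example.com/busybox")
# -> "registry.example.com/busybox"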

@giuseppe (Collaborator, Author)

The Docker backend doesn't use the "http:" prefix notation, since insecure registries are configured in the Docker daemon configuration (i.e. --insecure-registry).

@peterbaouoft (Contributor)

the Docker backend doesn't use the "http:" prefix notation

I think I get it now =). So we are assuming that when using docker as storage (atomic pull --storage=docker $image), we do not need to worry about images starting with an ostree prefix (http:, oci:, or https:), right?

@giuseppe (Collaborator, Author)

Yes, correct; there won't be such prefixes for Docker images.

-# 4. Check that --shared works
-${ATOMIC} mount --shared ${NAME} ${WORK_DIR}/mount
+# 4. Check that --shared works and that 'http:' is dropped
+${ATOMIC} mount --shared http:${NAME} ${WORK_DIR}/mount
@peterbaouoft (Contributor)

Is the --shared option handled in the OSTree mount in mount.py? I did not find any usage of it, but I do find --shared used for DockerMount.

The reason I ask is that this line will default to the OSTree mount first, right? Shown below in mount.py:

if not self.storage:
    if len(self.beu.available_backends) > 1:
        if self.is_duplicate_image(self.image):
            raise ValueError("Found more than one Image with name {}; "
                             "please specify with --storage.".format(self.image))
    try:
        if self._try_ostree_mount(best_mountpoint_for_storage):
            return

But the ostree backend does not seem to handle the --shared option?

@giuseppe (Collaborator, Author)

The OSTree storage doesn't use a different SELinux mount label, since system containers do not require a new file system when used. We do not create any new mount point; we only check out the files to the file system (/var/lib/containers/atomic/$ID.0), so they all have the same mount label. Since it is shared in any case, the --shared option is simply ignored.
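
A minimal sketch of that design; the names here (ostree_checkout, ATOMIC_VAR_LIB) are illustrative, not the real identifiers:

import os

ATOMIC_VAR_LIB = "/var/lib/containers/atomic"  # checkout root mentioned above

def ostree_checkout(identifier, deployment=0, shared=False):
    # `shared` is accepted for symmetry with DockerMount but ignored:
    # no new file system is mounted, so there is no per-mount SELinux
    # label to set; the files are simply checked out in place.
    destination = os.path.join(ATOMIC_VAR_LIB, "{}.{}".format(identifier, deployment))
    # ... the actual OSTree checkout of the container's files goes here ...
    return destination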

@peterbaouoft (Contributor)

LGTM, though since I am new to the codebase, I feel it might be better if other reviewers take a look at it as well =)

@ashcrow (Contributor) commented Nov 14, 2017

@rh-atomic-bot r+ ea9b498

@rh-atomic-bot

⌛ Testing commit ea9b498 with merge 0999b88...

rh-atomic-bot pushed a commit that referenced this pull request Nov 14, 2017
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

Closes: #1127
Approved by: ashcrow
@TomSweeneyRedHat

LGTM, assuming happy tests.

@rh-atomic-bot

☀️ Test successful - status-papr
Approved by: ashcrow
Pushing 0999b88 to master...

eyusupov pushed a commit to eyusupov/atomic that referenced this pull request Mar 10, 2018
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

Closes: projectatomic#1127
Approved by: ashcrow