Not using downloaded repomd.xml because it is older than what we have #96

Closed
mconigliaro opened this issue Oct 7, 2014 · 11 comments

@mconigliaro

yum makecache fails inside the yum_repository provider when you point to a mirror with an older repomd.xml than what has already been downloaded. This could happen if you start out pointing to an official mirror, then reconfigure yum to point to your own local mirror (which may not have had a chance to sync up yet):

Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of yum -q makecache --disablerepo=* --enablerepo=foo ----
STDOUT:
STDERR: Not using downloaded repomd.xml because it is older than what we have:
 Current  : Tue Oct  7 15:14:53 2014
 Downloaded: Thu Oct  2 15:00:04 2014
http://example.com/repodata/filelists.xml.gz: [Errno -1] Metadata file does not match checksum
Trying other mirror.
Error: failure: repodata/filelists.xml.gz from foo: [Errno 256] No more mirrors to try.
---- End output of yum -q makecache --disablerepo=* --enablerepo=foo ----
Ran yum -q makecache --disablerepo=* --enablerepo=foo returned 1

I think the solution is to run a yum clean before running yum makecache. Unless there's some flag I'm missing to force yum to use the older repomd.xml.
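For context, the repointing looks something like this in a recipe. This is only a sketch; the repo id "foo" and the internal mirror URL are made up, but the :create action is what ends up running yum makecache against the not-yet-synced mirror:

    # Hypothetical repo id and URL, for illustration only.
    # The node originally pointed at an upstream mirror; a later converge
    # rewrites the same repo to point at a local mirror that may not have
    # synced yet, so its repomd.xml can be older than the cached metadata.
    yum_repository 'foo' do
      description 'Foo packages (local mirror)'
      baseurl 'http://yum.internal.example.com/foo/$releasever/$basearch'
      gpgcheck false
      action :create
    end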

@juliandunn
Contributor

I dunno, seems kind of like a real corner case; if you do this "repointing" (presumably with Chef), wouldn't you do a yum clean in that recipe?

yum clean is a pretty big hammer to throw at this unconditionally from within this provider.

@mconigliaro
Author

Don't get me wrong, this seems like dumb yum behavior. Ideally yum would provide a flag (--force?) to say "I don't care that the new repomd.xml is older than what I had before," and I'd expect there to be a way for yum_repository to use that flag (i.e. "I don't care what you need to do. Just make it so"). But since yum doesn't seem to have a flag to enable that behavior, I'm suggesting that yum_repository should blow away the existing cache before building a new one, since there seems to be no other way to get around this problem.

Yes, we could trigger a yum clean ourselves every time we use yum_repository. But why aren't we forced to manually call yum makecache ourselves too? The answer is that it's obvious that you need to do that every time you reconfigure your repository, so yum_repository does it for us.

Just to be clear, I'm not advocating calling yum clean indiscriminately on all repositories. I'm advocating calling it on a single repository (using the --disablerepo and --enablerepo flags), exactly like yum makecache is called today, and only when the repository configuration is updated. I don't see why that's such a big hammer. If we have to rebuild the cache anyway (because we've pointed to a new repository which may have different packages, etc.), doesn't that imply that we're throwing the old cache away? What purpose do the old cache files serve at that point?

For what it's worth, I just did the following test and confirmed that it works:

  1. Edit /var/cache/yum/x86_64/6/foo/repomd.xml and change every value inside <timestamp></timestamp> to some date in the future
  2. Run yum makecache --disablerepo=* --enablerepo=foo to produce the repomd.xml error above
  3. Run yum clean metadata --disablerepo=* --enablerepo=foo to remove all repository metadata files for the foo repository
  4. Re-run yum makecache --disablerepo=* --enablerepo=foo to confirm that the yum clean metadata fix worked

@mconigliaro
Author

A colleague of mine just pointed out that we actually can't trigger a yum clean ourselves because yum_repository calls yum makecache first and will fail before we have a chance.

@someara

someara commented Apr 22, 2015

What is the lightest test you can think of to detect this condition?

@Gazzonyx

I figured out how we're hitting this bug, and I've got a simple-ish test case. Basically, we've got a race condition between our local yum repos and the upstream yum repos.

Our workflow is to automatically provision a hardware appliance with cobbler, reboot, run the first-run setup to set the hostname (we use chef-vault, so the node has to be registered with a known name), then reboot and bootstrap the chef-client.

When we run our Chef converge, usually within five to ten minutes of when the appliance was first provisioned, our cookbook changes the location of the repositories to our local mirrors, and sometimes those mirrors are behind the upstream yum repositories by up to an hour.

@mconigliaro
Author

It's been a while since I've looked at this, but to summarize my comments above, I believe the solution is to run yum clean metadata --disablerepo=* --enablerepo=foo just before running yum makecache --disablerepo=* --enablerepo=foo (presumably every time the foo repository config file is changed).
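Outside the provider, that ordering would amount to something like this (a sketch only, not the cookbook's actual code; "foo" and the template source are placeholders):

    # Rebuild the repo's metadata cache from scratch whenever its config
    # file changes. Both execute resources stand in for what the provider
    # would run internally.
    execute 'yum-clean-metadata-foo' do
      command 'yum clean metadata --disablerepo=* --enablerepo=foo'
      action :nothing
    end

    execute 'yum-makecache-foo' do
      command 'yum -q makecache --disablerepo=* --enablerepo=foo'
      action :nothing
    end

    template '/etc/yum.repos.d/foo.repo' do
      source 'foo.repo.erb'
      # Immediate notifications fire in the order they are declared,
      # so the clean runs before the makecache.
      notifies :run, 'execute[yum-clean-metadata-foo]', :immediately
      notifies :run, 'execute[yum-makecache-foo]', :immediately
    end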

@mconigliaro
Author

@someara I don't think this is something you'd want to "detect" per se. What I'm suggesting is that the yum makecache this cookbook currently does is only half of what needs to be done. I think you need to run yum clean metadata first in order to delete the old cache files.

@someara

someara commented Apr 23, 2015

I've added a "yum clean" before the "yum makecache" in the :create action.
Released as 3.6.0
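If you're pulling this in through a wrapper cookbook, a constraint along these lines in the wrapper's metadata.rb (hypothetical, but 3.6.0 is the release mentioned above) makes sure you pick up that change:

    # Wrapper cookbook metadata.rb -- require the release with the fix.
    depends 'yum', '>= 3.6.0'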

@someara closed this as completed Apr 23, 2015
@mconigliaro
Author

👍

@Gazzonyx

Awesome! Thanks @someara!

@kplimack

I'm seeing this on Oracle Linux 6.5, Chef 12.4.1.
