riscv64: bump timeouts #8808

Merged
merged 4 commits into from Jun 10, 2020

Conversation

xnox
Collaborator

@xnox xnox commented Jun 3, 2020

No description provided.

@xnox xnox marked this pull request as draft June 3, 2020 23:44
Collaborator

@zyga zyga left a comment


+1 on the intent, but perhaps the way this is organized could improve.
We could have a helper in the testutil package that contains this logic, and the various test modules could call the function and assign the result to a private variable in their init() function. This would reduce the diff tremendously as well.

@xnox
Collaborator Author

xnox commented Jun 4, 2020

I hope it was very clear I have no idea how to golang =)

@xnox
Collaborator Author

xnox commented Jun 4, 2020

Also, this is still a draft until I can confirm that it builds and helps complete the unit tests on riscv64 — because maybe 6x is not enough.

@xnox xnox force-pushed the riscv64-bump-timeouts branch 2 times, most recently from e2be7b0 to 21653ff Compare June 5, 2020 17:24
@xnox
Collaborator Author

xnox commented Jun 5, 2020

Why did ubuntu-core-20-64 fail?

Does it need a rerun?

@pedronis pedronis self-requested a review June 8, 2020 14:32
@pedronis
Collaborator

pedronis commented Jun 8, 2020

@xnox some timeouts are local to single tests, e.g.

./overlord/managers_test.go:3436:	err = s.o.Settle(3 * time.Second)

we'll need to bump these too?

@xnox
Collaborator Author

xnox commented Jun 9, 2020

@xnox some timeouts are local to single tests, e.g.

./overlord/managers_test.go:3436:	err = s.o.Settle(3 * time.Second)

we'll need to bump these too?

I am not sure. At the moment we do not have real hardware, so all our builds are subject to noisy neighbours. As I mentioned, I did get a clean build out of this config. I've tried rebuilding today, and another place timed out now as well =/

----------------------------------------------------------------------
FAIL: overlord_test.go:153: overlordSuite.TestNewWithGoodState

overlord_test.go:178:
    c.Check(got, DeepEquals, expected)
... obtained map[string]interface {} = map[string]interface {}{"changes":interface {}(nil), "data":map[string]interface {}{"patch-level":6, "patch-sublevel":3, "patch-sublevel-last-version":"2.45.1+git1958.d3cccd8~ubuntu20.10.1", "refresh-privacy-key":"0123456789ABCDEF", "some":"data", "timings":[]interface {}{map[string]interface {}{"start-time":"2020-06-08T17:30:39.54362265Z", "stop-time":"2020-06-08T17:30:39.550235351Z", "tags":map[string]interface {}{"startup":"load-state"}, "timings":[]interface {}{map[string]interface {}{"duration":6.608801e+06, "label":"read-state", "summary":"read snapd state from disk"}}}}}, "last-change-id":0, "last-lane-id":0, "last-task-id":0, "tasks":interface {}(nil)}
... expected map[string]interface {} = map[string]interface {}{"changes":interface {}(nil), "data":map[string]interface {}{"patch-level":6, "patch-sublevel":3, "patch-sublevel-last-version":"2.45.1+git1958.d3cccd8~ubuntu20.10.1", "refresh-privacy-key":"0123456789ABCDEF", "some":"data"}, "last-change-id":0, "last-lane-id":0, "last-task-id":0, "tasks":interface {}(nil)}
... Difference:
...     ["data"]["timings"]: [map["start-time":"2020-06-08T17:30:39.54362265Z" "stop-time":"2020-06-08T17:30:39.550235351Z" "tags":map["startup":"load-state"] "timings":[map["duration":%!q(float64=6.608801e+06) "label":"read-state" "summary":"read snapd state from disk"]]]] != (missing)


OOPS: 100 passed, 1 FAILED
--- FAIL: TestOverlord (479.31s)
FAIL
FAIL	github.com/snapcore/snapd/overlord	488.904s

The proposed bumps do make the tests in question pass. However, I'm not building against the latest master, as the snapd-vendor imports seem to be lagging a bit.

I guess it would be great if there were a universal multiplier for "slow architectures" applied across all timeouts, but I don't know whether some of the timeouts are expected to stay fixed.

It's a bit of a game of whack-a-mole at the moment. I expect we will need a few of these bumps before we see stable riscv64 builds on a regular basis.

@xnox xnox marked this pull request as ready for review June 9, 2020 00:44
@xnox
Collaborator Author

xnox commented Jun 9, 2020

Why is https://code.launchpad.net/~snappy-dev/snapd-vendor/+git/snapd-vendor not updating? I.e. there have been many commits since 5th of June, yet they are not in lp:snapd-vendor.

@mvo5
Collaborator

mvo5 commented Jun 9, 2020

@xnox That's strange indeed, we are looking into this. We want to phase this out by moving everyone from core to the snapd snap but we are not there yet on classic. But all new classic systems (20.04+) already have the snapd snap by default now.

@pedronis
Collaborator

pedronis commented Jun 9, 2020

@mvo5 I think we should do something along the lines of what @zyga suggested:

testutil.HostScaledTimeout(time.Duration) time.Duration

@pedronis
Collaborator

pedronis commented Jun 9, 2020

@mvo5 also, if we need to apply it, it's best to move any constants (they tend to repeat) to the top of the test files in a var stanza like:

var (
	shortSettleTimeout = testutil.HostScaledTimeout(5 * time.Second)
)

etc.

@xnox
Collaborator Author

xnox commented Jun 9, 2020

@xnox That's strange indeed, we are looking into this. We want to phase this out by moving everyone from core to the snapd snap but we are not there yet on classic. But all new classic systems (20.04+) already have the snapd snap by default now.

I am not sure what you mean here. Core systems run snapd from the snapd snap's systemd service, but no such thing exists for classic systems. Hence classic systems currently need a traditional package with all the hooks in the right dirs, plus snapd itself. My understanding was that snapd-vendor is just the automated tarball creation for deb builds, with all the govendor stuff synced in.

Collaborator

@pedronis pedronis left a comment


lgtm, we might have to do more of this but it sets a not-too-horrible pattern

@mvo5
Collaborator

mvo5 commented Jun 9, 2020

Agreed, let's start with this one and as we need to add more we keep repeating this pattern :)

@anonymouse64
Member

Core systems run snapd from the snapd snap's systemd service.

This is not 100% true. UC systems run snapd from either the snapd snap or the core snap; the latter is the case with UC16, for example, although UC16 systems can now be transitioned to the snapd snap manually by installing it.

But no such thing exists for classic systems.

The classic world is more complicated. Here for a generic classic system the snapd daemon can come from 3 different places:

  • the snapd distro pkg, deb, rpm, etc.
  • the core snap
  • the snapd snap

On newish Ubuntu installs (18.04.1+, I think) we defaulted to installing the snapd snap first when a snap was requested to be installed (i.e. the version of snapd from the deb started preferring the snapd snap over the core snap), so those installs are likely to have the snapd daemon come from the snapd snap. However, on a fresh system with no snaps, you could still get the snapd daemon from the core snap if you install the core snap first.

On older installs and other distros, the first snap installed will usually be core (or the system may simply predate the snapd snap), so they will not get the snapd snap automatically until we transition folks from the core snap to the snapd snap. AFAIK this transition is desired but not planned yet.

My understanding was that snapd-vendor is just the automated tarball creation for deb builds. With all the govendor stuff synced in.

I think your understanding is correct. However, the only snap that uses this snapd-vendor project / snapd deb as a source is the edge build of the core snap specifically, so your changes to snapd-vendor won't ever feed into building a snapd snap somewhere on LP. For example, the snapd-vendor project has this recipe: https://code.launchpad.net/~snappy-dev/+recipe/snapd-vendor-daily, which feeds the snapd-vendor deb into the snappy-edge PPA, which in turn is used as the source archive for this snap build: https://launchpad.net/~snappy-dev/+snap/core, which is what results in the core snap being built on edge. This is why the version numbers of the core snap do not carry git commits corresponding to snapd commits; the git commits there correspond to snapd-vendor commits. I do not know the history behind why it was set up like this.

The snapd snap build is much simpler: the snapd LP project has a snap directly attached to it which builds from the project source, so there is no snapd-vendor equivalent for the snapd snap.

@xnox
Collaborator Author

xnox commented Jun 9, 2020

Core systems run snapd from the snapd snap's systemd service.

This is not 100% true. UC systems run snapd from either the snapd snap or the core snap; the latter is the case with UC16, for example, although UC16 systems can now be transitioned to the snapd snap manually by installing it.

But no such thing exists for classic systems.

The classic world is more complicated. Here for a generic classic system the snapd daemon can come from 3 different places:

  • the snapd distro pkg, deb, rpm, etc.
  • the core snap
  • the snapd snap

The way I see it, it is not quite like that. On Ubuntu classic systems the snapd .deb is always installed. On boot, snapd.service is started using snapd from the .deb; it may then choose to mount the core or snapd snap and re-exec from there.

But the snapd .deb is always installed and is used to launch snapd.

This is different from Ubuntu Core systems: instead of executing snapd from a .deb, they have a service unit that mounts the core|snapd snap and execs snapd from there.

A classic Ubuntu system cannot do this at the moment, as it has no way to run snapd from the snapd snap alone, without the snapd deb present on the system at all.

Can we have classic Ubuntu systems with just the core18 & snapd snaps preseeded, and without the snapd deb installed, like on core systems?

@anonymouse64
Member

anonymouse64 commented Jun 9, 2020 via email

@xnox
Collaborator Author

xnox commented Jun 9, 2020

@mvo5 I had the requested improvements in progress locally, but thanks for your changes.

Switched from runtime.GOARCH to using snapd/arch, and also added tests, as snapd/arch is easy to mock in unit tests.

I do wonder whether at some point this might graduate from testutil to the main utils: if there are any runtime timeouts, it might make sense for them to differ on slow networks / slow hardware too.

@xnox xnox requested a review from zyga June 9, 2020 17:15
testutil/timeouts.go (review thread, now outdated and resolved)
This commit avoids extra imports in timeouts.go and tweaks
the TimeoutTestSuite to follow the testutil_test pattern and
adds a MockRuntimeARCH() helper.
Collaborator

@pedronis pedronis left a comment


thanks

@mvo5 mvo5 merged commit 3a3248a into snapcore:master Jun 10, 2020