Allow extraction of logs from running lnd instances in Travis integration tests #302
Keen to look into this if it hasn't already been tackled; it looks like a good way to get my feet wet!
@samvrlewis this issue hasn't been picked up yet. Solving it would greatly improve our ability to stamp out the remaining flakes we occasionally run into on Travis!
Have looked into this and have a few options that I'd be interested in hearing others' thoughts on:

1. Append the logs to the Travis build log
2. AWS/GCloud
3. Bintray
4. GitHub repo/gists

Do you have any thoughts, @Roasbeef or others? I suppose the AWS/GCloud option would probably be the neatest to pursue, however I'm not sure how to resolve the issue of account ownership for the AWS/GCloud account, particularly as both services require a credit card to be registered to the account. How do open source projects typically register accounts for services like AWS? Also, I'm new to building Golang applications and I'm having a bit of trouble successfully building my fork of the lnd repo, because the packages all refer to the upstream `github.com/lightningnetwork/lnd` import path.
Hi @samvrlewis, thanks for looking into this! I appreciate the breakdown of the distinct directions we can pursue to implement this much-needed testing/debugging feature. I like the gist option, as it doesn't require an individual to manage an account hooked up to a credit card, or one that requires an API key for authentication. Taking it a bit further, perhaps we can use a zerobin-type paste service that innately has "burn after reading" or time-based paste expiry. This seems like a rather seamless path, as the Travis script can pipe the logs to another program that itself uses an HTTP API to create the paste, printing out the URL of the created paste. Any thoughts?
The way I usually do this is to first fork the repo itself on GitHub. Once you have a new remote URI for your fork of lnd, you'll then go to the location of `$GOPATH/src/github.com/lightningnetwork/lnd` and add your fork as an additional git remote.
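The two-remote layout this describes can be sketched as follows. This is a throwaway demonstration in a temp directory, not an exact transcript of the suggested commands; the fork URL (`yourname`) is a placeholder, and it assumes the pre-modules GOPATH convention where the working copy must sit at the canonical import path:

```shell
set -e
# Demo of the two-remote setup: the checkout keeps the upstream identity,
# while a second remote points at your personal fork for pushing branches.
workdir=$(mktemp -d)
git init -q "$workdir/lnd"
cd "$workdir/lnd"
# "origin" stays pointed at upstream so the tree matches the import path...
git remote add origin https://github.com/lightningnetwork/lnd
# ...and feature branches get pushed to the fork ("yourname" is a placeholder).
git remote add fork https://github.com/yourname/lnd
git remote -v
```

In a real checkout you would clone upstream into `$GOPATH/src/github.com/lightningnetwork/lnd` first; the point is only that builds happen from the upstream path while pushes go to `fork`.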
Thanks for the thoughts @Roasbeef. I actually came across transfer.sh the other day which looks perfect for this, so will push on using that.
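transfer.sh accepts plain HTTP uploads via curl and returns the download link as the response body, so the Travis script could hand off each log and print the resulting URL. A minimal sketch; the `/tmp/lnd-itest` log directory is an assumption for illustration, not a path the thread specifies:

```shell
# Upload a single log file to transfer.sh; the response body is the URL
# where the file can be fetched.
upload_log() {
    curl -sS --upload-file "$1" "https://transfer.sh/$(basename "$1")"
}

# Ship every harness log and echo where it ended up.
for f in /tmp/lnd-itest/*.log; do
    [ -e "$f" ] || continue   # glob matched nothing: clean run, nothing to ship
    echo "$(basename "$f") -> $(upload_log "$f")"
done
```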
I think this is the same method I tried, but I wasn't able to get TravisCI to build my fork properly, as the package names in lnd refer to the upstream import path. As an example of a Travis build from my fork (nothing modified): https://travis-ci.org/samvrlewis/lnd/builds/285520121 . It fails during the build; see the linked log for the error.
Aha, figured out an easy way to get builds of the forks to work (from cloudfoundry/cli#286): add the following to the `.travis.yml`.
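The workaround in the linked cloudfoundry/cli issue is Travis's `go_import_path` key, which makes Travis check the fork out under the canonical import path regardless of which account owns the repo. The snippet was presumably along these lines:

```yaml
# .travis.yml: check forks out under the canonical path so that
# intra-project imports of github.com/lightningnetwork/lnd still resolve.
language: go
go_import_path: github.com/lightningnetwork/lnd
```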
If this is a useful feature for anyone else, I'm happy to open it as a separate PR; otherwise I'll leave it here for posterity.
Makes use of the "logOutput" option defined in networktest.go to allow CI builds to save output logs. As part of issue lightningnetwork#302.
Issue
We currently use Travis (like many other projects) for automated Continuous Integration within the project. We run a few batches of tests across the versions of Go that we officially support, toggling on the race condition detector for the project-wide unit tests, and also using our integration testing framework to exercise various real-world node scenarios.
Occasionally, we run into flakes on Travis, either due to timeouts in the integration tests being too tight or due to a newly uncovered bug. At times it's difficult for developers to reproduce the issues locally, as we can't exactly reproduce the conditions on the instances that Travis executes the tests on. If we can create a pipeline to export the logs of all running `lnd` instances within the integration tests, that would greatly aid our efforts in tracking down certain classes of flakes triggered within the integration tests.

Steps to Completion
- Upload the logs (either by capturing the `stdout` of each of the running `lnd` instances, or by reading directly from disk) to an existing storage system. This system might be S3, gcloud buckets, or simply just a pastebin. This should be done in an automated fashion, and it should be easy to locate the logs after test execution. The medium we use to upload the logs should likely be ephemeral so we don't consume a ton of space on whichever system is chosen.
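Wiring the upload into Travis's build lifecycle would satisfy the "automated fashion" requirement. One hedged sketch, assuming a transfer.sh-style upload endpoint and a hypothetical `/tmp/lnd-itest` log directory (neither is specified by the issue):

```yaml
# .travis.yml fragment: only ship logs when a job fails, and let the
# paste service's retention policy handle the "ephemeral" requirement.
after_failure:
  - |
    for f in /tmp/lnd-itest/*.log; do
      [ -e "$f" ] || continue
      echo "$f -> $(curl -sS --upload-file "$f" "https://transfer.sh/$(basename "$f")")"
    done
```

Using `after_failure` (rather than `after_script`) keeps passing builds quiet while still surfacing a download URL in the build log whenever a flake strikes.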