LWT - Data population takes significantly longer than before #9331
One observation we had from the logs is a huge amount of semaphore_timeouts, as described in #7779 (comment) |
I updated in the original description the link to 4.4 monitor |
@avikivity ping - please look at the 4.4 / 4.5 monitors. Do you see anything pointing to the possible issue? |
@roydahan - let's run on 4.4.4 (latest 4.4 version) and see if it reproduces on that. |
Looking at the logs, the timeouts are happening mostly on node-1.
|
Here's an example of a burst of errors from the log:
@kostja @elcallio @raphaelsc I wonder if lwt might be creating lots of small sstables causing the read timeouts, until they are compacted. |
The test on 4.4.4 looks good, the throughput is normal, the population phase finished in several hours and nemesis started to run. |
If we are generating more sstables, it should be visible in the counters, both for sstables and for memtable flush. Let's check this out. |
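The small-sstables hypothesis above can be sanity-checked with back-of-the-envelope arithmetic: for a fixed data volume, halving the average flush size doubles the number of L0 sstables a read may have to touch before compaction catches up. The numbers below are hypothetical, purely for illustration:

```python
# Back-of-the-envelope sketch: smaller memtable flushes inflate the sstable
# count (and thus read amplification) for the same data volume.
# All figures here are hypothetical, not measurements from this issue.

def sstables_produced(data_written_gb: float, avg_flush_size_mb: float) -> int:
    """Rough number of sstables created for a given write volume."""
    return int(data_written_gb * 1024 / avg_flush_size_mb)

# Normal-sized flushes, e.g. ~160 MB each:
normal = sstables_produced(data_written_gb=100, avg_flush_size_mb=160)
# Partial/early flushes, e.g. ~16 MB each:
partial = sstables_produced(data_written_gb=100, avg_flush_size_mb=16)

print(normal, partial)
assert partial == 10 * normal  # 10x more sstables for the same data
```

If this effect were in play, both the sstable-count and memtable-flush counters should show the jump.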
This exception is ignored by design, but if it's left unhandled, it generates `Exceptional future ignored` warnings, like the following. Also, ignore f2 if f1 failed since we return early in this case. ``` [shard 5] seastar - Exceptional future ignored: seastar::named_semaphore_timed_out (Semaphore timed out: _read_concurrency_sem), backtrace: 0x431689e 0x4316d40 0x43170e8 0x3f35486 0x218d14a 0x3f8002f 0x3f81217 0x3f9f868 0x3f4b76a /opt/scylladb/libreloc/libpthread.so.0+0x93f8 /opt/scylladb/libreloc/libc.so.6+0x101902#012 N7seastar12continuationINS_8internal22promise_base_with_typeISt7variantIJN5utils4UUIDEN7service5paxos7promiseEEEEEZZZZNS7_11paxos_state7prepareEN7tracing15trace_state_ptrENS_13lw_shared_ptrIK6schemaEERKN5query12read_commandERK13partition_keyS5_bNSI_16digest_algorithmENSt6chrono10time_pointINS_12lowres_clockENSQ_8durationIlSt5ratioILl1ELl1000EEEEEEENK3$_0clEvENUlvE_clEvENKUlSB_E_clESB_EUlT_E_ZNS_6futureISt5tupleIJNS13_IvEENS13_IS14_IJNSE_INSI_6resultEEE17cache_temperatureEEEEEEE14then_impl_nrvoIS12_NS13_IS9_EEEET0_OS11_EUlOSA_RS12_ONS_12future_stateIS1B_EEE_S1B_EE#012 seastar::continuation<seastar::internal::promise_base_with_type<std::variant<utils::UUID, service::paxos::promise> >, seastar::future<std::variant<utils::UUID, service::paxos::promise> >::finally_body<seastar::with_semaphore<seastar::semaphore_default_exception_factory, seastar::lowres_clock, service::paxos::paxos_state::prepare(tracing::trace_state_ptr, seastar::lw_shared_ptr<schema const>, query::read_command const&, partition_key const&, utils::UUID, bool, query::digest_algorithm, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::$_0::operator()() const::{lambda()#1}>(seastar::basic_semaphore<seastar::semaphore_default_exception_factory, seastar::lowres_clock>&, unsigned long, seastar::lowres_clock::duration, 
std::result_of&&)::{lambda(seastar::basic_semaphore)#1}::operator()<seastar::semaphore_units<seastar::semaphore_default_exception_factory, seastar::lowres_clock> >(seastar::basic_semaphore)::{lambda()#1}, false>, seastar::future<std::variant<utils::UUID, service::paxos::promise> >::then_wrapped_nrvo<seastar::future<std::variant<utils::UUID, service::paxos::promise> >, seastar::semaphore_units<seastar::semaphore_default_exception_factory, seastar::lowres_clock> >(seastar::future<std::variant<utils::UUID, service::paxos::promise> >&&)::{lambda(seastar::internal::promise_base_with_type<std::variant<utils::UUID, service::paxos::promise> >&&, seastar::semaphore_units<seastar::semaphore_default_exception_factory, seastar::lowres_clock>&, seastar::future_state<std::variant<utils::UUID, service::paxos::promise> >&&)#1}, std::variant<utils::UUID, service::paxos::promise> >#12 seastar::continuation<seastar::internal::promise_base_with_type<std::variant<utils::UUID, service::paxos::promise> >, seastar::future<std::variant<utils::UUID, service::paxos::promise> >::finally_body<service::paxos::paxos_state::key_lock_map::with_locked_key<service::paxos::paxos_state::prepare(tracing::trace_state_ptr, seastar::lw_shared_ptr<schema const>, query::read_command const&, partition_key const&, utils::UUID, bool, query::digest_algorithm, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::$_0::operator()() const::{lambda()#1}>(dht::token const&, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, std::result_of)::{lambda()#1}, false>, seastar::future<std::variant<utils::UUID, service::paxos::promise> >::then_wrapped_nrvo<seastar::future<std::variant<utils::UUID, service::paxos::promise> >, {lambda()#1}>({lambda()#1}&&)::{lambda(seastar::internal::promise_base_with_type<std::variant<utils::UUID, service::paxos::promise> >&&, {lambda()#1}&, seastar::future_state<std::variant<utils::UUID, 
service::paxos::promise> >&&)#1}, std::variant<utils::UUID, service::paxos::promise> >#12 seastar::continuation<seastar::internal::promise_base_with_type<std::variant<utils::UUID, service::paxos::promise> >, seastar::future<std::variant<utils::UUID, service::paxos::promise> >::finally_body<service::paxos::paxos_state::prepare(tracing::trace_state_ptr, seastar::lw_shared_ptr<schema const>, query::read_command const&, partition_key const&, utils::UUID, bool, query::digest_algorithm, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::$_0::operator()() const::{lambda()#2}, false>, seastar::future<std::variant<utils::UUID, service::paxos::promise> >::then_wrapped_nrvo<seastar::future<std::variant<utils::UUID, service::paxos::promise> >, service::paxos::paxos_state::prepare(tracing::trace_state_ptr, seastar::lw_shared_ptr<schema const>, query::read_command const&, partition_key const&, utils::UUID, bool, query::digest_algorithm, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::$_0::operator()() const::{lambda()#2}>(service::paxos::paxos_state::prepare(tracing::trace_state_ptr, seastar::lw_shared_ptr<schema const>, query::read_command const&, partition_key const&, utils::UUID, bool, query::digest_algorithm, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::$_0::operator()() const::{lambda()#2}&&)::{lambda(seastar::internal::promise_base_with_type<std::variant<utils::UUID, service::paxos::promise> >&&, service::paxos::paxos_state::prepare(tracing::trace_state_ptr, seastar::lw_shared_ptr<schema const>, query::read_command const&, partition_key const&, utils::UUID, bool, query::digest_algorithm, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::$_0::operator()() const::{lambda()#2}&, seastar::future_state<std::variant<utils::UUID, service::paxos::promise> >&&)#1}, 
std::variant<utils::UUID, service::paxos::promise> >#12 seastar::continuation<seastar::internal::promise_base_with_type<seastar::foreign_ptr<std::unique_ptr<std::variant<utils::UUID, service::paxos::promise>, std::default_delete<std::variant<utils::UUID, service::paxos::promise> > > > >, service::storage_proxy::init_messaging_service()::$_51::operator()(seastar::rpc::client_info const&, seastar::rpc::opt_time_point, query::read_command, partition_key, utils::UUID, bool, query::digest_algorithm, std::optional<tracing::trace_info>) const::{lambda(seastar::lw_shared_ptr<schema const>)#1}::operator()(seastar::lw_shared_ptr<schema const>)::{lambda()#1}::operator()() const::{lambda(std::variant<utils::UUID, service::paxos::promise>)#1}, seastar::future<std::variant<utils::UUID, service::paxos::promise> >::then_impl_nrvo<{lambda()#1}, {lambda()#1}<seastar::foreign_ptr<std::unique_ptr<std::variant<utils::UUID, service::paxos::promise>, std::default_delete<std::variant<utils::UUID, service::paxos::promise> > > > > >({lambda()#1}&&)::{lambda(seastar::internal::promise_base_with_type<seastar::foreign_ptr<std::unique_ptr<std::variant<utils::UUID, service::paxos::promise>, std::default_delete<std::variant<utils::UUID, service::paxos::promise> > > > >&&, {lambda()#1}&, seastar::future_state<std::variant<utils::UUID, service::paxos::promise> >&&)#1}, std::variant<utils::UUID, service::paxos::promise> >#12 seastar::continuation<seastar::internal::promise_base_with_type<seastar::foreign_ptr<std::unique_ptr<std::variant<utils::UUID, service::paxos::promise>, std::default_delete<std::variant<utils::UUID, service::paxos::promise> > > > >, seastar::future<seastar::foreign_ptr<std::unique_ptr<std::variant<utils::UUID, service::paxos::promise>, std::default_delete<std::variant<utils::UUID, service::paxos::promise> > > > >::finally_body<seastar::smp::submit_to<service::storage_proxy::init_messaging_service()::$_51::operator()(seastar::rpc::client_info const&, seastar::rpc::opt_time_point, 
query::read_command, partition_key, utils::UUID, bool, query::digest_algorithm, std::optional<tracing::trace_info>) const::{lambda(seastar::lw_shared_ptr<schema const>)#1}::operator()(seastar::lw_shared_ptr<schema const>)::{lambda()#1}>(unsigned int, se ``` Refs #7779 Refs #9331 Signed-off-by: Benny Halevy <bhalevy@scylladb.com> Message-Id: <20210919053007.13960-1-bhalevy@scylladb.com>
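The pattern in the commit above ("ignore f2 if f1 failed since we return early") has a rough asyncio analogue: on the early-return path, the secondary future's exception must still be retrieved, or the runtime complains about an ignored exceptional future. This is an illustrative sketch, not Seastar code; all names are hypothetical:

```python
import asyncio

# Illustrative analogue of the commit's fix: when the primary future fails
# and we return early, explicitly consume the secondary future's exception
# so it is not reported as never retrieved (the asyncio counterpart of
# Seastar's "Exceptional future ignored" warning).

async def f1():
    raise TimeoutError("semaphore timed out")      # primary path fails fast

async def f2():
    await asyncio.sleep(0.01)
    raise TimeoutError("also timed out")           # secondary fails later

async def prepare():
    t1 = asyncio.ensure_future(f1())
    t2 = asyncio.ensure_future(f2())
    try:
        await t1
    except TimeoutError:
        try:
            await t2    # consume f2's failure on the early-return path
        except TimeoutError:
            pass        # ignored by design, but explicitly retrieved
        return "timed out"
    return await t2

print(asyncio.run(prepare()))  # timed out
```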
@kostja please look into this issue asap (it's blocking 4.5) |
@elcallio can you please also look at the metrics: can you see anything suggesting writes are blocked on the commitlog in some form? The run is with the commitlog hard limit disabled. |
@kostja / @elcallio - to make it easier to find
|
@slivne - I can't see anything from the first monitor, and the second is dead (?). In any case, we never show any commitlog stalls in these - I am starting to wonder if the Grafana counter is wired up properly? Note that the partial flush could be a reason for perhaps smaller sstables being written (initially). |
@bentsi can you please fix the monitors / update which ones Calle should use, and ping him; we are stuck on this. Calle, once they are available, please try to explain your theory via metrics and screenshots (from both versions).
…On Mon, Sep 20, 2021, 12:36 Calle Wilund wrote:
I can't see any significant changes between 4.4 and the current 4.5 version, other than 5fcc206, where we added multi-entry support. Also ab55a1b (partial flush/threshold) and improved (?) pre-alloc 48ca01c.
|
@slivne - I don't think fixing monitors really helps any. As I said, the CL blocked/write counters never seem to contain anything, which makes me think they are either not read properly, or not presented properly (or don't lend themselves to presenting). |
We have metrics for memtable flush and fsync at the Prometheus level; can we check them out?
…On Tue, Sep 21, 2021, 09:52 Calle Wilund wrote:
I have no theories, esp. since no one seems to be very certain *what* it is that blocks...
|
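Checking the flush/fsync counters suggested above amounts to scraping each node's Prometheus endpoint and filtering the exposition output. A minimal sketch, assuming the exposition text has been fetched already (normally from `http://<node>:9180/metrics`); the metric names used below are illustrative placeholders, not verified Scylla metric names, and label values containing spaces are not handled:

```python
# Sketch: extract per-shard counters from Prometheus exposition-format text.
# Metric names here are assumptions for illustration; check the names the
# Scylla version under test actually exports.

def parse_counters(exposition_text: str, prefixes: tuple) -> dict:
    counters = {}
    for line in exposition_text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blank lines
        name_and_labels, _, value = line.rpartition(" ")
        if name_and_labels.startswith(prefixes):
            counters[name_and_labels] = float(value)
    return counters

sample = """\
# TYPE scylla_memtables_pending_flushes gauge
scylla_memtables_pending_flushes{shard="0"} 2
scylla_memtables_pending_flushes{shard="1"} 1
scylla_commitlog_flush_count{shard="0"} 123456
"""
flushes = parse_counters(sample, ("scylla_memtables_", "scylla_commitlog_"))
print(flushes)
```

Summing such counters across shards and plotting them over the run would show whether flush activity correlates with the throughput drop.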
Which monitors do you need? |
If I read the metric correctly, we only had "many" memtable flushes at the beginning of the run (first 3-4h or so), way before any of the log messages above... Not sure if that means anything, though. |
I've been running this for a few hours locally with --commitlog-use-hard-size-limit=true; I did not observe a performance drop that would be unusual. Now running without the option. |
Left the test running overnight with --commitlog-use-hard-size-limit=false, did not observe a performance drop. 131GB of data in blogposts is stored in ~6000 files on disk, which seems to be adequate (~400 files were used for 6GB) |
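The "seems adequate" judgment above can be checked with quick arithmetic on the quoted figures (rounded, for illustration only): both runs come out at a comparable average file size, i.e. no explosion of tiny sstables.

```python
# Sanity check of the on-disk numbers quoted above:
# 131 GB in ~6000 files vs ~400 files for 6 GB.

def avg_file_mb(total_gb: float, n_files: int) -> float:
    """Average file size in MB for a given total volume and file count."""
    return total_gb * 1024 / n_files

big_run = avg_file_mb(131, 6000)   # ~22 MB per file
small_run = avg_file_mb(6, 400)    # ~15 MB per file

print(round(big_run, 1), round(small_run, 1))  # 22.4 15.4
```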
@kostja just to make sure, are you testing master or 4.5? |
master |
the issue was found and reported against 4.5 |
@fgelcer I'm sorry but this is not the issue. The issue is not found and I am trying to find it. An issue contains a test case that I as an engineer can easily reproduce. A bunch of monitors left around after a 3-day run help me improve my telepathy skills but are a very inefficient way to nail down the problem. |
@kostja I don't know how you ran it and for how long, but the test should write 500GB of data in 8 hours. |
@kostja - QA has a specific use case that fails on a specific setup using a specific version; the issue reproduces in that environment and use case. Let's use that to first understand where the issue is, and then try to minimize / change / reproduce on a smaller setup, etc. The bug exists and it reproduces. |
yes |
Unfortunately, I could still not get any info out of the image, neither from the booted-up Ubuntu, nor by copying the scylla executable to another machine. No idea why the Ubuntu addr2line fails, but it does... |
I will try to help with this. |
The stack trace for the reactor stall in #9331 (comment):
@elcallio could you analyze it and determine whether it is commitlog (more specifically, hard limit) related or not?
For completeness and so you can do a one to one match for the addresses, the scylla_stack file content:
|
@elcallio BTW I am leaving the machine on for a few hours in case you want to experiment with it. |
FWIW, the trace looks wholly unrelated to anything commit log. |
@roydahan the stall doesn't seem to come from commitlog or more specifically hard limit. |
If you think it's ready, I think we can merge. |
There will be unbounded growth of pending tasks if they are submitted faster than they are retired. That can potentially happen if memtables are frequently flushed too early. It was observed that this unbounded growth caused task queue violations, as the queue will be filled with tons of tasks being reevaluated. By avoiding duplication in the pending task list for a given table T, growth is no longer unbounded and consequently reevaluation is no longer aggressive. Refs scylladb#9331. Scylla ent: Refs scylladb#2111. Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com> Message-Id: <20210930125718.41243-1-raphaelsc@scylladb.com> (cherry picked from commit 52302c3) Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com> Message-Id: <20220215162147.61628-1-raphaelsc@scylladb.com>
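The fix described in that commit can be sketched as a set-backed pending list that refuses duplicate submissions for the same table. This is a hypothetical simplification of the actual C++ logic, for illustration only:

```python
from collections import deque

class PendingReevaluations:
    """Sketch of deduplicated pending-task submission per table.

    Without the membership check in submit(), resubmitting the same
    table faster than tasks retire grows the queue without bound.
    """
    def __init__(self):
        self._queue = deque()
        self._pending = set()

    def submit(self, table: str) -> bool:
        if table in self._pending:
            return False        # already queued: no duplicate task
        self._pending.add(table)
        self._queue.append(table)
        return True

    def retire(self) -> str:
        table = self._queue.popleft()
        self._pending.discard(table)
        return table

p = PendingReevaluations()
for _ in range(1000):           # a hot table resubmitted repeatedly
    p.submit("blogposts")
print(len(p._queue))            # stays at 1, not 1000
```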
I am doing a comparative test now: |
After running a build without hardlimit I found a similar amount of stalls: All in all it is comparable. Histograms of the stalls looks similar which also the mean stall time is similar (+/- 10%): |
@aleksbykov the last thing we are left to do according to @roydahan comment #9331 (comment) is to run write throughput test and |
@eliransin can this be closed? |
@eliransin / @wmitros what are we tracking here ATM? |
…fault' from Eliran Sinvani This miniset completes the prerequisites for enabling the commitlog hard limit on by default. Namely, start flushing and evacuating segments halfway to the limit in order to never hit it under normal circumstances. It is worth mentioning that hitting the limit is an exceptional condition whose root cause needs to be resolved; however, once we do hit the limit, the performance impact inflicted as a result of this enforcement is irrelevant. Tests: unit tests. LWT write test (#9331). Whitebox testing was performed by @wmitros; the test aimed at putting as much pressure as possible on the commitlog segments by using a write pattern that rewrites the partitions in the memtable, keeping it at ~85% occupancy so the dirty memory manager does not kick in. The test compared 3 configurations: 1. The default configuration. 2. Hard limit on (without changing the flush threshold). 3. The changes in this PR applied. The last exhibited the "best" behavior in terms of metrics: the graphs were the flattest and least jagged of the three. Closes #10974 * github.com:scylladb/scylladb: commitlog: enforce commitlog size hard limit by default commitlog: set flush threshold to half of the limit size commitlog: unfold flush threshold assignment Fixes #9625
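The policy in that merge ("set flush threshold to half of the limit size") boils down to a simple rule; a minimal sketch with illustrative names and sizes, not the actual implementation:

```python
# Sketch of the policy from the merge message: begin flushing and evacuating
# commitlog segments once usage crosses half of the hard limit, so that the
# limit itself is (almost) never hit under normal operation.

def flush_threshold(hard_limit_bytes: int) -> int:
    return hard_limit_bytes // 2

def should_flush(current_usage_bytes: int, hard_limit_bytes: int) -> bool:
    return current_usage_bytes >= flush_threshold(hard_limit_bytes)

limit = 8 * 1024**3                         # e.g. an 8 GiB hard limit
print(should_flush(3 * 1024**3, limit))     # False: below half, no action
print(should_flush(5 * 1024**3, limit))     # True: past half, start flushing
```

Flushing early keeps headroom so the expensive hard-limit enforcement path stays cold.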
@eliransin / @kostja ? |
I think the creator of this issue (@roydahan), or one of the people who made the 257 comments in this issue who were ever able to reproduce it (are there such people?), needs to say whether this issue can still be reproduced. If it can't be reproduced, we should close it. If it still can be reproduced, it's still a bug and definitely shouldn't be closed! |
@mykaul I agree with @nyh's comment. There has never been a clear success criterion for this issue. The description and the title are by now vastly obsolete. We used it to track the commitlog hard limit work for a while, so it hangs on the commitlog hard limit still not being the default. I don't think this is the proper way to use the issue database, so I will take the liberty of closing it; whatever issues are still out there after 4 years deserve to be tracked in a new ticket. |
2 + epsilon. let's not exaggerate. |
Installation details
Kernel version:
5.4.0-1035-aws
Scylla version (or git commit hash):
4.5.rc7-0.20210906.edead1caf
Cluster size: 4 nodes (i3.4xlarge)
Scylla running with shards number (live nodes):
longevity-lwt-500G-3d-4-5-db-node-2447519d-1 (63.33.190.129 | 10.0.0.59): 14 shards
longevity-lwt-500G-3d-4-5-db-node-2447519d-2 (34.243.2.11 | 10.0.3.58): 14 shards
longevity-lwt-500G-3d-4-5-db-node-2447519d-3 (54.74.79.228 | 10.0.1.88): 14 shards
longevity-lwt-500G-3d-4-5-db-node-2447519d-4 (54.216.64.60 | 10.0.3.78): 14 shards
OS (RHEL/CentOS/Ubuntu/AWS AMI):
ami-0e0abb5a45374673b
(aws: eu-west-1)
Test: longevity-lwt-500G-3d-test
Test name:
longevity_lwt_test.LWTLongevityTest.test_lwt_longevity
Test config file(s):
Test id:
2447519d-8e50-4bcc-9f14-c9046cb649f1
Issue description
====================================
This long LWT longevity test tries to write 400,000,000 keys (about 500 GB of data) using a c-s "user profile" (see below).
The tool is run from 3 different loaders, each writing one third of the dataset.
Writing the entire dataset should take around 8 hours (based on previous runs of this test).
In this case, it seems that the data population was not able to complete during the entire test (3 days).
The write throughput decreases consistently during the first few hours until it reaches a very low number of ops/s.
====================================
Live monitor: http://34.252.214.230:3000/d/dGxHpZSnz/longevity-lwt-500g-3d-test-scylla-per-server-metrics-nemesis-master?orgId=1&from=1631032607724&to=1631169036853
(Restore Monitor Stack command:
$ hydra investigate show-monitor 2447519d-8e50-4bcc-9f14-c9046cb649f1
)
Screenshot: https://snapshot.raintank.io/dashboard/snapshot/yLJrkc9UvaQGg1fUTDB5IReSWskowZ3v
A monitor for 4.4 run: http://44.193.79.222:3000/d/KWzIAbI7k/longevity-lwt-500g-3d-test-scylla-per-server-metrics-nemesis-master?orgId=1&from=1615474723261&to=1615524652826
(Restore Monitor Stack command:
$ hydra investigate show-monitor 79499b7f-2c8f-45ad-8292-3f137c85e025
)
Screenshot: https://cloudius-jenkins-test.s3.amazonaws.com/79499b7f-2c8f-45ad-8292-3f137c85e025/20210312_030749/grafana-screenshot-longevity-lwt-500g-3d-test-scylla-per-server-metrics-nemesis-20210312_031129-longevity-lwt-500G-3d-4-4-monitor-node-79499b7f-1.png
Logs:
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/2447519d-8e50-4bcc-9f14-c9046cb649f1/20210910_191248/grafana-screenshot-longevity-lwt-500g-3d-test-scylla-per-server-metrics-nemesis-20210910_191711-longevity-lwt-500G-3d-4-5-monitor-node-2447519d-1.png
db-cluster - https://cloudius-jenkins-test.s3.amazonaws.com/2447519d-8e50-4bcc-9f14-c9046cb649f1/20210910_192216/db-cluster-2447519d.tar.gz
events - https://cloudius-jenkins-test.s3.amazonaws.com/2447519d-8e50-4bcc-9f14-c9046cb649f1/20210910_192216/events.log.tar.gz
loader-set - https://cloudius-jenkins-test.s3.amazonaws.com/2447519d-8e50-4bcc-9f14-c9046cb649f1/20210910_192216/loader-set-2447519d.tar.gz
monitor-set - https://cloudius-jenkins-test.s3.amazonaws.com/2447519d-8e50-4bcc-9f14-c9046cb649f1/20210910_192216/monitor-set-2447519d.tar.gz
events - https://cloudius-jenkins-test.s3.amazonaws.com/2447519d-8e50-4bcc-9f14-c9046cb649f1/20210910_192216/raw_events.log.tar.gz
Show all stored logs command:
$ hydra investigate show-logs 2447519d-8e50-4bcc-9f14-c9046cb649f1
Jenkins job URL
the load is defined by: