Load queries for dashboard page from new system.dashboards table #56771

Merged
merged 7 commits into master from dashboards-table on Nov 23, 2023

Conversation

@serxa (Member) commented Nov 14, 2023

Changes:

  1. Added a system.dashboards table that holds all the queries along with title String and dashboard String columns.
  2. Reworked dashboard.html to SELECT queries from this system table instead of a hard-coded query list (see the sketch after this list).
  3. Added a new input field on the page to modify that SELECT query, so different dashboards can be rendered just by combining what we have in system.dashboards or other user-defined tables.
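
For illustration, a minimal sketch of the kind of SELECT the reworked page can issue to load its charts. The exact default query text, the 'overview' dashboard name, and the query column name are assumptions based on the description above, not text taken verbatim from the PR:

    -- Hypothetical example: fetch the chart list for a single dashboard.
    -- Assumes a `query` column alongside `title` and `dashboard`.
    SELECT title, query
    FROM system.dashboards
    WHERE dashboard = 'overview';

The page would then run each returned query to render one chart per row; the chart queries themselves can use parameters such as {rounding:UInt32}, as seen in the quoted code later in this thread.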

Benefits:

  1. We will have a system table with queries. It can be useful w/o dashboards.
  2. Easy way to add new queries to dashboards.
  3. A way to create specialized dashboards for a specific area (cache/fs/query type/etc.).
  4. No need to touch JavaScript to add a query to a dashboard.
  5. User-defined dashboards just by using a custom table instead of system.dashboards (see the sketch after this list).
  6. The same query can be reused in multiple dashboards.
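
As a hedged illustration of benefits 5 and 6, a user-defined dashboard source can be an ordinary table with the same layout as system.dashboards; the table name, engine, and query bodies below are made up for this example and are not part of the PR:

    -- Hypothetical user-defined dashboard table mirroring system.dashboards.
    CREATE TABLE default.my_dashboards
    (
        dashboard String,
        title String,
        query String
    )
    ENGINE = MergeTree
    ORDER BY (dashboard, title);

    -- The same query text can appear on several dashboards (benefit 6).
    INSERT INTO default.my_dashboards VALUES
        ('caches', 'Mark cache hits', 'SELECT ...'),
        ('overview-extended', 'Mark cache hits', 'SELECT ...');

Pointing the page's new input field at default.my_dashboards instead of system.dashboards would then render these user-defined charts.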

Changelog category (leave one):

  • New Feature

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Configurable dashboards. Queries for charts are now loaded using a query, which by default uses a new system.dashboards table.

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)
(Screenshot of the reworked dashboard page, 2023-11-16.)

@robot-clickhouse-ci-2 added the pr-feature (Pull request with new product feature) label on Nov 14, 2023
@robot-clickhouse-ci-2 (Contributor) commented Nov 14, 2023

This is an automated comment for commit 663c8cd with a description of the existing statuses. It is updated for the latest CI run.


Successful checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success
CI running | A meta-check that indicates the running CI. Normally it is in a success or pending state. A failed status indicates some problem with the PR | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docs Check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests are in square brackets | ✅ success
Mergeable Check | Checks if all other necessary checks are successful | ✅ success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are in square brackets | ✅ success
Push to Dockerhub | The check for building and pushing the CI-related docker images to Docker Hub | ✅ success
SQLTest | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success
Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ✅ success
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ✅ success
Style Check | Runs a set of checks to keep the code style clean. If some of the tests fail, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on a server built from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts | ✅ success

Failed checks

Check name | Description | Status
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ❌ failure

@serxa marked this pull request as ready for review on November 16, 2023, 12:48
@serxa (Member, Author) commented Nov 17, 2023

Broken tests are unrelated

@serxa (Member, Author) commented Nov 19, 2023

Development patch to avoid rebuilding and restarting the clickhouse server on dashboard.html changes: https://pastila.nl/?0030c045/ad7a323b7a20438f9496de223b43cb63#XJFhTRYYrbf96IhbLuo5vQ==

@serxa self-assigned this on Nov 23, 2023
@serxa merged commit 9436ae6 into master on Nov 23, 2023
346 of 347 checks passed
@serxa deleted the dashboards-table branch on November 23, 2023, 18:07
Review thread on the built-in dashboard query definitions (quoted code context; the snippet closes one of the embedded chart queries):

    GROUP BY t
    ORDER BY t WITH FILL STEP {rounding:UInt32}
    )EOQ") }
    }
Collaborator:
Interesting idea. What do you think about moving this into config.xml? That way, the user can customize it.

Member (Author):
Yes, interesting. I was thinking about adding another writable table, system.custom_dashboards, to make it changeable even at runtime... but that might be overkill. I like your idea.

Member (Author):
I wonder how we would migrate to this... Releasing a version that reads charts from config.xml would make system.dashboards an empty table unless config.xml is also updated in sync...

Collaborator:
We can add these predefined charts only if nothing has been configured by the user in the config. That way it will be compatible.

It also has one more bonus: the charts stay available even after an upgrade, since not all users sync config.xml during upgrades (they ship it with ansible or similar tools), and without defaults in the code they would see nothing.

Member (Author):
This way we would need to keep the same charts both in some .xml file and in a .cpp file (unless we use some magic to embed the .xml file contents in our binary, as is done e.g. for dashboard.html).

Also, if we do this transition, it would be hard to add new charts just by releasing a clickhouse binary. If a user does NOT define their own charts, they will be auto-updated. If a user does define charts, they have to look for updates and deploy newer charts with each release. So the deploy procedure differs with and without custom charts. I do not like this.

Maybe the solution is just to combine the built-in charts in .cpp with the charts in .xml instead of replacing them?

Collaborator:
> This way we would need to keep the same charts both in some .xml file and in a .cpp file (unless we use some magic to embed the .xml file contents in our binary, as is done e.g. for dashboard.html).

Not really. I mean that if something is configured by the user, we just consider that to be all they want and do not show the built-ins.

> Also, if we do this transition, it would be hard to add new charts just by releasing a clickhouse binary. If a user does NOT define their own charts, they will be auto-updated. If a user does define charts, they have to look for updates and deploy newer charts with each release. So the deploy procedure differs with and without custom charts. I do not like this.

If someone changed them, they likely know what they are doing (I guess <1% will do this).
For example, I would likely configure them myself, and I may not be interested in some of the default charts either.

> Maybe the solution is just to combine the built-in charts in .cpp with the charts in .xml instead of replacing them?

That way you cannot remove some charts...
Personally, I want this ability.

@serxa mentioned this pull request on May 2, 2024