update BYO kafka and postgres document #614
Conversation
Signed-off-by: clyang82 <chuyang@redhat.com>
If you have your own Postgres, you can use it as the storage for multicluster global hub. You need to create a secret `multicluster-global-hub-storage` in the `multicluster-global-hub` namespace. The secret contains the following fields:

- The `database_uri` format is like `postgres://<user>:<password>@<host>:<port>/<database>?sslmode=<mode>`. It is used to create the database and insert data.
- The `database_uri_with_readonlyuser` format is like `postgres://<user>:<password>@<host>:<port>/<database>?sslmode=<mode>`. It is used to query data by the global hub Grafana. It is optional.
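As a minimal sketch, the secret described above could be created with `kubectl` like this (the host, credentials, and database name are placeholders, not values from this PR):

```
# Hypothetical example: create the storage secret for multicluster global hub.
# Host, user, password, and database name are placeholders for your own Postgres.
kubectl create secret generic multicluster-global-hub-storage \
  -n multicluster-global-hub \
  --from-literal=database_uri='postgres://hoh-user:example-pass@postgres.example.com:5432/hoh?sslmode=verify-ca' \
  --from-literal=database_uri_with_readonlyuser='postgres://hoh-readonly:example-pass@postgres.example.com:5432/hoh?sslmode=verify-ca'
```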
Since this is optional, what happens when the secret does not include this value? Is there any impact on the global hub Grafana?
If no readonly user is specified, `database_uri` is used instead. There is no impact on Grafana, only a security concern: for example, a connection using `database_uri` can modify the database data.
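The read-only user discussed above could be provisioned with a sketch like the following (the user, password, host, and database names are placeholders, and the exact grants depend on your schema):

```
# Hypothetical example: create a read-only Postgres user for the Grafana datasource.
# All names and passwords below are placeholders; adapt them to your own instance.
psql 'postgres://postgres:admin-pass@postgres.example.com:5432/hoh' <<'SQL'
CREATE USER hoh_readonly WITH PASSWORD 'example-pass';
GRANT CONNECT ON DATABASE hoh TO hoh_readonly;
GRANT USAGE ON SCHEMA public TO hoh_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO hoh_readonly;
SQL
```

A user created this way can be used in `database_uri_with_readonlyuser`, so Grafana queries cannot modify data.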
\ 1/1 Running 0 11s\n```\n"
description: |
The Multicluster Global Hub Operator contains the components of multicluster global hub. The Operator deploys all of the required components for global multicluster management. The components include `multicluster-global-hub-manager` and `multicluster-global-hub-grafana` in the global hub cluster and `multicluster-global-hub-agent` in the managed hub clusters.
The Operator also deploys the Strimzi Kafka and Crunchy Postgres if you do not bring your own Kafka and Postgres.
Should we add the source of Kafka and Postgres here? For example, that we installed those two operators from the community catalog?
It depends on upstream or downstream. If it is upstream, the operators come from the community. If it is downstream, the operators come from Red Hat (AMQ Streams) and a certified source (Crunchy operator). We want to keep the CSV the same for both upstream and downstream; that is why I did not include the source of the operators here.
strimzi-cluster-operator-v0.36.1-684d7cccb7-zjbf9 1/1 Running 0 21h
```
##Documentation
Should be `## Documentation`? There's a space missing.
doc/byo.md
Outdated
```
Please note that:
- three topics `spec` `status` and `event` are needed. If your Kafka is configured to allow creating topics automatically, you can skip this step. Otherwise, you need to create the topics manually. And ensure that the above kafka user has the permission to read data from the topics and write data to the topics.
- kakfa 3.3 or later is tested.
Typo: should be `kafka`.
doc/byo.md
Outdated
--from-file=client.key=<Client-key-for-kafka-server>
```
Please note that:
- three topics `spec` `status` and `event` are needed. If your Kafka is configured to allow creating topics automatically, you can skip this step. Otherwise, you need to create the topics manually. And ensure that the above kafka user has the permission to read data from the topics and write data to the topics.
Kafka also has a storage requirement?
In our sample, we propose having storage for Kafka to avoid missed data. For BYO, it depends on the customer; global hub does not require that Kafka have storage. But I think I can mention here that it is better to have storage-backed Kafka.
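If the topics are not created automatically, the three topics discussed in the hunks above could be created manually with a sketch like this (the bootstrap server address, partition count, and replication factor are placeholders for your own cluster):

```
# Hypothetical example: manually create the three topics global hub needs.
# The bootstrap server, partitions, and replication factor are placeholders.
for topic in spec status event; do
  kafka-topics.sh --create \
    --bootstrap-server kafka.example.com:9092 \
    --topic "$topic" \
    --partitions 1 \
    --replication-factor 3
done
```

A replication factor above 1, combined with persistent broker storage, is what guards against the missed-data case mentioned above.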
Signed-off-by: clyang82 <chuyang@redhat.com>
Kudos, SonarCloud Quality Gate passed!
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: clyang82, yanmxa.