
update BYO kafka and postgres document #614

Merged
merged 2 commits into from
Sep 6, 2023

Conversation

clyang82 (Contributor) commented Sep 5, 2023

No description provided.

Signed-off-by: clyang82 <chuyang@redhat.com>
yanmxa (Member) commented Sep 5, 2023

Overall it looks good to me!
Since you have pulled that PR into this one, I think it should be closed now.

/cc @hchenxa @ldpliu I will keep this PR open until 12:00 AM tomorrow. If you have no other comments, I will merge it.

If you have your own Postgres, you can use it as the storage for multicluster global hub. You need to create a secret `multicluster-global-hub-storage` in the `multicluster-global-hub` namespace. The secret contains the following fields:

- The `database_uri` format is like `postgres://<user>:<password>@<host>:<port>/<database>?sslmode=<mode>`. It is used to create the database and insert data.
- The `database_uri_with_readonlyuser` format is like `postgres://<user>:<password>@<host>:<port>/<database>?sslmode=<mode>`. It is used by the global hub Grafana to query data. It is optional.
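As a rough sketch, the secret described above could be created with a command like the following. The host, credentials, and database name are hypothetical placeholder values, not from this PR:

```shell
# Create the storage secret for multicluster global hub.
# All connection values below are hypothetical examples;
# substitute your own Postgres endpoint and credentials.
kubectl create secret generic multicluster-global-hub-storage \
  -n multicluster-global-hub \
  --from-literal=database_uri="postgres://hoh:password@my-postgres.example.com:5432/hoh?sslmode=verify-ca" \
  --from-literal=database_uri_with_readonlyuser="postgres://hoh-readonly:password@my-postgres.example.com:5432/hoh?sslmode=verify-ca"
```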
Reviewer (Contributor):

Since this is optional, what happens when the secret does not include this value? Is there any impact on the global hub Grafana?

clyang82 (Author):

If no readonly user is specified, `database_uri` is used instead. There is no impact on Grafana; it is just a security concern. For example, a connection using `database_uri` can modify the database data.

description: |
The Multicluster Global Hub Operator contains the components of multicluster global hub. The Operator deploys all of the required components for global multicluster management. The components include `multicluster-global-hub-manager` and `multicluster-global-hub-grafana` in the global hub cluster and `multicluster-global-hub-agent` in the managed hub clusters.
The Operator also deploys Strimzi Kafka and Crunchy Postgres if you do not bring your own Kafka and Postgres.
Reviewer (Contributor):

Should we add the source of Kafka and Postgres here? For example, that we installed those two operators from the community catalog?

clyang82 (Author):

It depends on whether it is upstream or downstream. If upstream, the operators come from the community; if downstream, they come from Red Hat (AMQ Streams) and a certified source (the Crunchy operator). We want to keep the CSV the same for both upstream and downstream, which is why I did not include the source of the operators here.

strimzi-cluster-operator-v0.36.1-684d7cccb7-zjbf9 1/1 Running 0 21h
```

##Documentation
Reviewer (Contributor):

Should this be `## Documentation`? There's a space missing.

clyang82 (Author):

It seems it doesn't impact the final rendered result.

doc/byo.md Outdated
```
Please note that:
- three topics `spec` `status` and `event` are needed. If your Kafka is configured to allow creating topics automatically, you can skip this step. Otherwise, you need to create the topics manually. And ensure that the above kafka user has the permission to read data from the topics and write data to the topics.
- kakfa 3.3 or later is tested.
Reviewer (Contributor):

Typo: should be kafka.
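The quoted doc says the three topics can be created manually when auto-creation is disabled. A minimal sketch with Kafka's bundled `kafka-topics.sh` tool might look like this; the bootstrap server address and partition/replication settings are hypothetical, not from this PR:

```shell
# Create the three topics required by multicluster global hub.
# The bootstrap server and sizing values below are hypothetical examples;
# adjust them to match your own Kafka cluster.
for topic in spec status event; do
  kafka-topics.sh --create \
    --bootstrap-server my-kafka.example.com:9092 \
    --topic "$topic" \
    --partitions 1 \
    --replication-factor 3
done
```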

doc/byo.md Outdated
--from-file=client.key=<Client-key-for-kafka-server>
```
Please note that:
- three topics `spec` `status` and `event` are needed. If your Kafka is configured to allow creating topics automatically, you can skip this step. Otherwise, you need to create the topics manually. And ensure that the above kafka user has the permission to read data from the topics and write data to the topics.
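The hunk above is truncated and ends mid-command with a `--from-file` flag. A complete secret-creation command might look like the sketch below; the secret name, namespace, and local file paths are assumptions for illustration, not confirmed by this PR:

```shell
# Sketch: create a transport secret holding the Kafka client TLS files.
# The secret name, namespace, and file paths are hypothetical examples.
kubectl create secret generic multicluster-global-hub-transport \
  -n multicluster-global-hub \
  --from-file=ca.crt=./ca.crt \
  --from-file=client.crt=./client.crt \
  --from-file=client.key=./client.key
```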
Reviewer (Contributor):

Kafka also has a storage requirement.

clyang82 (Author):

In our sample, we propose having storage for Kafka to avoid losing data. For BYO, it depends on the customer; global hub does not require that Kafka have storage. But I can mention here that it is better to have a storage-backed Kafka.

Signed-off-by: clyang82 <chuyang@redhat.com>
sonarcloud bot commented Sep 6, 2023

Kudos, SonarCloud Quality Gate passed!

- Bugs: 0 (A)
- Vulnerabilities: 0 (A)
- Security Hotspots: 0 (A)
- Code Smells: 0 (A)
- No coverage information
- No duplication information

yanmxa (Member) commented Sep 6, 2023

/lgtm

openshift-ci bot commented Sep 6, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: clyang82, yanmxa

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit d592098 into stolostron:main Sep 6, 2023
@clyang82 clyang82 deleted the doc_update branch September 6, 2023 03:34