Hawk apiserver monitoring #133

Merged: 16 commits, Aug 30, 2019

Conversation

ayoub-belarbi (Contributor)

No description provided.

@MalloZup (Contributor) left a comment:

Looks good so far. I'm not really convinced by the name; perhaps we can find a more useful one.

salt/hana_node/cluster_packages.sls (outdated, resolved)
salt/pre_installation/ha_repos.sls (outdated, resolved)
@ayoub-belarbi force-pushed the hawk-apiserver-monitoring branch 2 times, most recently from b8127aa to 52987c4 on August 28, 2019 at 19:12
@MalloZup (Contributor) left a comment:

Nice PR, thanks @ayoub! Just some minor changes:

  • rebase also with master
  • remove the WIP from the title 🐶

@ayoub-belarbi changed the title from [WIP] Hawk apiserver monitoring to Hawk apiserver monitoring on Aug 29, 2019
@MalloZup (Contributor) left a comment:

OK to me.

Here an example:

```
cluster:
```
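(The example above is truncated.) A minimal sketch of what such a pillar entry could look like, using only keys that appear in the diffs discussed later in this PR; purely illustrative:

```
# Hypothetical pillar sketch; keys are taken from the cluster pillar
# diffs below, and any values here are placeholders.
cluster:
  resource_agents:
    - SAPHanaSR
```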
Contributor:

Perhaps we could create an example file

@ayoub-belarbi dismissed diegoakechi's stale review on August 29, 2019 at 12:57:

Diego is on vacation and the issue has been resolved.

@arbulu89 (Collaborator) left a comment:

I don't know if I have missed some discussions about this implementation, but I find many details that I don't really understand.

I think that with the changes in this PR the deployment makes too many assumptions:

  1. The exporters are always deployed.
  2. The ports are fixed.
  3. We add a development mode (which I don't know if it's necessary).

I'm not saying that we have to change them; I would only like to understand them.

```
{% endfor %}
- targets:
{% for ip in grains['monitored_services'] %}
- "{{ ip }}:8001" # 8801: hanadb exporter port
{% endfor %}
```
Collaborator:

I don't like having fixed ports. Actually, the HANA instance deployed in the second VM is not using this port.

More than that, the name wouldn't make any sense anymore; we are referring to machines here, not services...

I think we need to consider many things here.
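For context, this is roughly what the Jinja loop quoted above renders to with two entries in grains['monitored_services']; the addresses are made up:

```
# Hypothetical rendered output of the template above; the two IPs are
# placeholders. Note the port is the same for every entry.
- targets:
  - "192.168.110.19:8001"
  - "192.168.110.20:8001"
```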

Contributor:

I think for the moment we can keep the old service part with the port number.

Contributor (author):

I think for now it's best if we enable the monitoring on both nodes by default and avoid specifying the IPs of the hosts.

Contributor:

If we add this, we should rename monitored_services to monitored_ips.

I also think that in the future this solution is really less flexible: imagine we have 4 IPs; for all four IPs we will monitor the same services.

Before, it was more fine-grained. We get more duplication but more control, which imho is better.
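To make that trade-off concrete, a sketch of the two variable shapes being compared (values are placeholders; the old form follows the tfvars example further down):

```
# Fine-grained (before this PR): each HOST:PORT pair is listed, so the
# monitored services can differ per host.
monitored_services = ["192.168.110.19:8001", "192.168.110.19:9100"]

# Host-based (this PR): only hosts are listed; the same exporters and
# ports are assumed on every host.
monitored_hosts = ["192.168.110.19", "192.168.110.20"]
```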

libvirt/doc/monitoring.md (outdated, resolved)
libvirt/doc/monitoring.md (resolved)
.gitignore (outdated, resolved)
libvirt/README.md (outdated, resolved)
- "{{ ip }}:8001" # 8801: hanadb exporter port
{% endfor %}

- job_name: node
Contributor:

use hana node

```
{% endfor %}
- targets:
{% for ip in grains['monitored_services'] %}
- "{{ ip }}:8001" # 8801: hanadb exporter port
{% endfor %}
```
@MalloZup (Contributor), Aug 30, 2019:

Btw, there is a typo in the comment .. 😁 8801 != 8001 port:

```
- "{{ ip }}:8001" # 8801: hanadb exporter port
```

@arbulu89 (Collaborator) left a comment:

Thanks @ayoub-belarbi. More comments, questions, and suggestions below.

```
@@ -118,7 +118,7 @@ variable "pool" {
 }


-variable "monitored_services" {
+variable "monitored_hosts" {
   description = "HOST:PORT of service you want to monitor, it can contain same host with different ports number (diff services)"
```
Collaborator:

We would need to update this doc

Contributor (author):

Nice catch!
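A possible shape for the updated declaration, reusing the wording of the tfvars comment in the next diff (a sketch, not the merged text):

```
# Sketch only; the description wording is borrowed from the tfvars
# comment below, and the type syntax follows the Terraform 0.11 style
# already used in this file.
variable "monitored_hosts" {
  description = "IP addresses of the hosts to be monitored"
  type        = "list"
}
```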

```
@@ -36,8 +41,8 @@ ha_sap_deployment_repo = ""
 #background = true

 # Monitoring:
-# add here HOST:PORT. In this example we monitor on same hosts different services
-monitored_services = ["192.168.110.X:8001", "192.168.110.X+1:8001", "192.168.110.X:9100", "192.168.110.X+1:9100"]
+# Add here IP addresses of the hosts to be monitored:
```
Collaborator:

Maybe the last colon is not needed here.

Contributor (author):

What do you mean exactly?

Collaborator:

Just that we might remove the last :. Never mind... XD

```
@@ -26,6 +26,10 @@ cluster:
 {% endif %}
 resource_agents:
   - SAPHanaSR
+{% if grains.get('devel_mode') %}
```
Collaborator:

As a temporary solution it might work, but eventually we will need to update this

```
@@ -97,6 +97,11 @@ variable "ha_sap_deployment_repo" {
   type = "string"
 }

+variable "devel_mode" {
```
Collaborator:

And now we have the devel_mode and qa_mode variables.
Could we just have mode or something similar?
Are they mutually exclusive?

Contributor:

I think that devel and qa mode are incompatible.. :(

Contributor (author):

Yes, now we have both, but I'm not sure about the qa mode scope, to be honest.

Collaborator:

OK, this is something we will need to manage... Maybe unify them into a single variable.

We can work on it in a future PR.
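As a sketch of that follow-up idea (not part of this PR), the two flags could collapse into one variable:

```
# Hypothetical unification; the variable name, allowed values, and
# default are made up for illustration.
variable "mode" {
  description = "Deployment mode: 'normal', 'devel' or 'qa'"
  type        = "string"
  default     = "normal"
}
```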

```
@@ -1,3 +1,12 @@
+{% if grains.get('devel_mode') %}
```
Collaborator:

Couldn't we just add this operation in the pre_installation states?
I mean, they do the same...
Just add the if condition there and, depending on the path, set one priority or the other.

Contributor (author):

That caused some problems during the SAP pattern installation, if I recall correctly, since it happens before the HA one.

Collaborator:

Mmm, OK, it could be.
Anyway, the devel_mode is kind of strange in that case, as it only affects a single package.

```
@@ -4,6 +4,7 @@ ha-factory-repo:
   - baseurl: {{ grains['ha_sap_deployment_repo'] }}
   - gpgautoimport: True
+  - priority: 110
```
Collaborator:

As mentioned in the other comment, wouldn't it be better to manage the priority (and the new devel_mode variable) here?

Contributor (author):

That caused some problems during the SAP pattern installation, if I recall correctly, since it happens before the HA one.
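A sketch of the suggestion above, moving the devel_mode handling into the repo state itself (the priority value 98 is made up; in zypper, lower numbers take precedence):

```
# Hypothetical: choose the repo priority in pre_installation depending
# on devel_mode, instead of a separate devel-only state.
ha-factory-repo:
  pkgrepo.managed:
    - baseurl: {{ grains['ha_sap_deployment_repo'] }}
    - gpgautoimport: True
{% if grains.get('devel_mode') %}
    - priority: 98
{% else %}
    - priority: 110
{% endif %}
```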

@arbulu89 (Collaborator) left a comment:

I think that there are still several things pending to be improved in this PR, but for the sake of progress, I'm OK with the changes, and I think we can work on them in a future PR.

@MalloZup (Contributor) commented Aug 30, 2019:

@arbulu89 @ayoub-belarbi please open issues in GitHub as follow-ups (if any) and reference this PR so we can remember and keep track of them.
