Unable to have a functioning environment after cloning #33

Closed
pierre-moire opened this issue Jul 27, 2019 · 7 comments

Comments

@pierre-moire

When I set up a MariaDB master-master cluster with ProxySQL and then clone the environment, the cluster does not function properly.

The application is unable to connect to ProxySQL, and the referenced cluster nodes are still the ones from the initial environment that was cloned.

Is there a way to handle this? Otherwise, can you adapt the scripts to allow cloning without additional configuration?

Thanks!
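
For anyone who wants to poke at this in the meantime, here is a rough, hedged sketch of how the cloned ProxySQL backends could be re-pointed by hand through the ProxySQL admin interface. The admin port 6032, the credentials and all IP addresses below are placeholders/assumptions, not values configured by this package:

```python
# Sketch: re-point ProxySQL backends after a clone.
# Admin port 6032, credentials and all IPs below are placeholders, not
# values configured by this package.
import pymysql

# ProxySQL's admin interface speaks the MySQL protocol, usually on port 6032.
admin = pymysql.connect(host="10.101.1.10", port=6032,
                        user="admin", password="admin", autocommit=True)

# Old backend IPs (source environment) -> new IPs (cloned environment).
remap = {
    "10.101.0.11": "10.101.1.11",
    "10.101.0.12": "10.101.1.12",
}

with admin.cursor() as cur:
    for old_ip, new_ip in remap.items():
        cur.execute("UPDATE mysql_servers SET hostname = %s WHERE hostname = %s",
                    (new_ip, old_ip))
    # Push the new backend list to the running config and persist it.
    cur.execute("LOAD MYSQL SERVERS TO RUNTIME")
    cur.execute("SAVE MYSQL SERVERS TO DISK")
admin.close()
```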

@sych74
Collaborator

sych74 commented Jul 29, 2019

Hello!
Cloning support will be integrated into the JPS package in the near future.

Thanks!

@bertho-zero
Contributor

@pierre-moire I don't know if the problem you are having is the same as mine, but I had to add a fixed IP for ProxySQL.

Without it, I couldn't connect with the info received by email.

@sych74
Collaborator

sych74 commented Aug 7, 2020

Hi, could you please describe the issue in more detail? Also, please tell us which hosting provider it is reproduced on.

@bertho-zero
Contributor

bertho-zero commented Aug 7, 2020

I create an environment with the default parameters (MySQL and ProxySQL) and receive the email with the login credentials for the proxy, phpMyAdmin, and Orchestrator, but unless I manually add a fixed IP to the ProxySQL node, it is not possible to connect to it.

The other thing is that I can't see how the subdomain proxy.${env.domain}:3306 (from https://github.com/jelastic-jps/mysql-cluster/blob/master/texts/proxy-entrypoint.md) is created. I imagine it is a link but I don't see it anywhere in the topology of the environment or in the links of the node.

I am at Infomaniak.

EDIT: proxy.${env.domain}:3306 is created from the nodeGroup of https://github.com/jelastic-jps/mysql-cluster/blob/e3b9b7d1d0e0fd06051ea1c82a76bd2059b33d41/addons/auto-clustering/scripts/auto-clustering-logic.jps#L79
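
As a side note, here is a quick hedged sketch for checking whether that entry point resolves and accepts connections from wherever you are connecting; the hostname below is just a placeholder for proxy.${env.domain}:

```python
# Diagnostic sketch: does the ProxySQL entry point resolve and accept TCP
# connections from here? (hostname is a placeholder for proxy.${env.domain})
import socket

host = "proxy.env-1234567.example-hoster.com"  # placeholder
port = 3306

try:
    addr = socket.gethostbyname(host)
    print(f"{host} resolves to {addr}")
    with socket.create_connection((addr, port), timeout=5):
        print(f"{host}:{port} is reachable")
except OSError as err:
    print(f"cannot reach {host}:{port}: {err}")
```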

@sych74
Collaborator

sych74 commented Aug 10, 2020

The entry point for connecting to the database cluster via ProxySQL works only inside the internal network. To connect from outside, you need to use a public IP attached to the ProxySQL node, or an endpoint that maps an external port to port 3306.
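
To illustrate the two paths, a minimal sketch assuming pymysql; every hostname, IP, and credential below is a placeholder, not something the package generates:

```python
# Sketch of the two connection paths (all hosts/credentials are placeholders).
import pymysql

# Inside the environment: the internal ProxySQL entry point on port 3306.
internal = pymysql.connect(host="proxy.env-1234567.example-hoster.com",
                           port=3306, user="app_user", password="app_pass")

# From outside: a public IP attached to the ProxySQL node, or an endpoint
# that maps an external port to 3306 on that node.
external = pymysql.connect(host="203.0.113.10", port=3306,
                           user="app_user", password="app_pass")

for name, conn in (("internal", internal), ("external", external)):
    with conn.cursor() as cur:
        cur.execute("SELECT @@hostname, @@version")  # answered by a backend via ProxySQL
        print(name, cur.fetchone())
    conn.close()
```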

@bertho-zero
Contributor

bertho-zero commented Aug 10, 2020

Thanks!

I see a server-id in the my.cnf file of the MySQL servers, and it is overridden by /etc/mysql/conf.d/master.cnf or /etc/mysql/conf.d/slave.cnf. Is it necessary to keep it in two places?

It gives the impression that you have to modify the my.cnf file on each instance and that you cannot use the "Save for all instances" button.

@sych74
Collaborator

sych74 commented Aug 12, 2020

All custom configuration is located in the /etc/mysql/conf.d/ directory and overrides parameters from my.cnf.
The my.cnf file contains the general configuration and is used by all DB instances, so you can apply "Save for all instances" after making changes.
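
As a quick way to confirm this on a running cluster, here is a hedged sketch (node IPs and credentials are placeholders) that reads the effective values on each node; server_id and read_only come from the per-node files in /etc/mysql/conf.d/, which are read after my.cnf and therefore win:

```python
# Sketch: confirm the effective per-node values come from /etc/mysql/conf.d/
# overrides, so the shared my.cnf can stay identical on every instance.
# Node IPs and credentials are placeholders.
import pymysql

nodes = ["10.101.1.11", "10.101.1.12"]  # e.g. one master and one slave

for host in nodes:
    conn = pymysql.connect(host=host, port=3306,
                           user="root", password="root_pass")
    with conn.cursor() as cur:
        cur.execute("SELECT @@server_id, @@read_only")
        print(host, cur.fetchone())
    conn.close()
```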

sych74 closed this as completed Aug 19, 2020