[BUG/HELP] dialog replication. #2164
Comments
Hi @volga629-1 , when enabling dialog replication, each node expects to have its own, separate database/table. In this case, the DB is an alternative to the cluster-based restart persistency.
Do check the paragraph about DB usage in this documentation section.
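As a rough illustration of the setup being discussed, a per-node configuration might look like the sketch below. This is an assumption-laden example, not taken from the thread: the DB host, credentials, table names, and cluster id are all made up, and parameter availability should be checked against the 3.1 dialog/clusterer module docs.

```cfg
# Hypothetical per-node snippet (illustrative values only). Each node points
# the dialog module at its own table in the shared PgSQL instance, while
# dialog state is replicated through the clusterer.

loadmodule "clusterer.so"
loadmodule "dialog.so"

# realtime DB persistency; node 1 would use "dialog_node1",
# node 2 "dialog_node2", and so on
modparam("dialog", "db_mode", 1)
modparam("dialog", "db_url", "postgres://opensips:pass@db-host/opensips")
modparam("dialog", "table_name", "dialog_node1")

# replicate dialogs within cluster 1
modparam("dialog", "dialog_replication_cluster", 1)
```

The point of the per-node `table_name` is exactly what the maintainer describes: each node treats the DB as its own restart-persistency store, so two nodes writing the same dialogs into one shared table would collide.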
@rvlad-patrascu Thank you for the reply. Right now the cluster uses a shared PgSQL DB cluster, so we don't need to replicate dialogs?
If you want to use the sharing-tags mechanism, you do need to replicate the dialogs through the clusterer. When you say the docs do not match the 3.1 version, what exactly are you referring to?
The section points to a tutorial. It is really unclear how to use sharing tags in a cluster with a shared DB in version 3.1-dev.
Again, the sharing tags (and the overall dialog replication through the clusterer mechanism) are not meant to be used with a shared DB, so there is not much I can assist you with if you want to use it this way.
The cluster is fully enabled in a setup with a shared DB. The only option I see is to point the dialog module on each node to a separate database table.
Yes, this is perfectly fine. I just meant that you should not use the same table for both nodes.
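Creating the extra per-node tables in the shared PostgreSQL database can be as simple as cloning the stock `dialog` table's structure. A sketch, assuming the stock table is named `dialog` and using made-up `dialog_node1`/`dialog_node2` names for illustration:

```sql
-- Clone the standard dialog table structure (columns, defaults, indexes)
-- once per cluster node; each node's table_name then points at its own copy.
CREATE TABLE dialog_node1 (LIKE dialog INCLUDING ALL);
CREATE TABLE dialog_node2 (LIKE dialog INCLUDING ALL);
```

`LIKE ... INCLUDING ALL` copies the column definitions along with defaults, constraints, and indexes, so the clones stay schema-compatible with whatever the OpenSIPS DB scripts created.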
@rvlad-patrascu Ideally it would be nice to have a shared-DB flag in the parameters to help identify the nodes. Adding two more dialog tables is not an issue, just some management overhead when cluster nodes need to be added or removed.
Any updates here? No progress has been made in the last 15 days; marking as stale. Will close this issue if no further updates are made in the next 30 days.
Work in progress |
Hi @volga629-1 , for the moment we do not see clear advantages in implementing support for using the same table for multiple nodes when replicating dialogs. This goes against the idea that each node handles its own DB data, and it might create more confusion for people. After all, the recommended scenario is to have a local database for each clustered node. As such, if the DB is in fact shared, we believe that the extra effort of creating a few more dialog tables is not an issue.
@rvlad-patrascu I understand; the only thing that might need clarifying is the documentation, which could describe the shared-DB setup.
OpenSIPS 3.1-dev
With the PgSQL dialog module in a cluster (latest master 3.1-dev, active/active setup), each node tries to insert duplicate data.
This setup contains 3 VIPs for each node on the LAN and WAN sides.
Each node has the same configuration in terms of routing logic.
It might be a misconfiguration or a bug, but I can't find the issue.
Any help appreciated, thank you, volga629