Want to give you credit for the initial repo, or at least get the benefit of my changes #1

Merged: 11 commits, May 30, 2012
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
.vagrant
7 changes: 5 additions & 2 deletions Readme
@@ -15,9 +15,12 @@ vagrant up percona3
And that's all: you now have 3 servers running Percona XtraDB Cluster!


These recipes also install GLB (Galera Load Balancer) on percona1.
These recipes also install HAProxy on percona1, with two listeners: one for writes (a single cluster node) and one for reads (round-robin across all cluster nodes).

You can read more in shinguz's post: http://www.fromdual.com/mysql-and-galera-load-balancer
See cluster status: http://192.168.70.2:9999/

Writes: 192.168.70.2:4306
Reads: 192.168.70.2:5306


Note: SELinux is disabled. An SELinux policy file is included that allows the server to start, but the other nodes aren't yet able to sync; I still need to investigate.
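As a quick smoke test of the two listeners above, point any MySQL client at the HAProxy ports (a sketch; it assumes the root account is reachable from the host, which may need extra grants):

    # writes go through the single-node listener
    mysql -h 192.168.70.2 -P 4306 -u root -e "SELECT @@hostname"
    # reads round-robin across all three nodes; repeat this and the
    # reported hostname should cycle through percona1..percona3
    mysql -h 192.168.70.2 -P 5306 -u root -e "SELECT @@hostname"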
17 changes: 12 additions & 5 deletions manifests/site.pp
@@ -1,30 +1,37 @@
node percona1 {
  include percona::repository
  include percona::cluster
  include percona::toolkit
  include xinet
  include myhosts
  include haproxy

  Class['percona::repository'] -> Class['percona::cluster'] -> Class['galera::glb']
  Class['percona::repository'] -> Class['percona::cluster']
  Class['percona::repository'] -> Class['percona::toolkit']

  class {
    'galera::glb':
      glb_list_backend => "192.168.70.2:3306:1 192.168.70.3:3306:1 192.168.70.4:3306"
  }
}

node percona2 {
  include percona::repository
  include percona::cluster
  include percona::toolkit
  include xinet
  include myhosts

  Class['percona::repository'] -> Class['percona::cluster']
  Class['percona::repository'] -> Class['percona::toolkit']
}

node percona3 {
  include percona::repository
  include percona::cluster
  include percona::toolkit
  include xinet
  include myhosts

  Class['percona::repository'] -> Class['percona::cluster']
  Class['percona::repository'] -> Class['percona::toolkit']
}
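To pick up edits to these node definitions on running VMs, re-running the provisioner is enough (a sketch; assumes the stock Vagrant Puppet provisioner pointed at manifests/site.pp):

    vagrant provision percona1
    # or, from inside a node:
    # puppet apply --modulepath=/vagrant/modules /vagrant/manifests/site.pp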

2 changes: 1 addition & 1 deletion modules/galera/manifests/glb/service.pp
@@ -2,7 +2,7 @@

exec {
  "run_glb":
    command => "glbd --daemon --threads $glb_threads --control $glb_ip_control:4444 $glb_ip_loadbalancer:3306 $glb_list_backend",
    command => "glbd --daemon --threads $glb_threads --control $glb_ip_control:4445 $glb_ip_loadbalancer:3306 $glb_list_backend",
    path => ["/usr/sbin", "/bin" ],
    require => Class['Galera::Glb::Packages'],
    unless => "ps ax | grep [g]lbd 2> /dev/null",
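The control port moves from 4444 to 4445, presumably because 4444 is the default port an rsync/xtrabackup SST listens on (this PR switches wsrep_sst_method to xtrabackup below), so the two would collide on a node doing both jobs. If glbd is running, its control socket answers simple text commands (a sketch; assumes glbd was started with --control on 192.168.70.2:4445 as above):

    echo getinfo | nc 192.168.70.2 4445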
47 changes: 47 additions & 0 deletions modules/haproxy/files/xtradb_cluster.cfg
@@ -0,0 +1,47 @@
# this config needs haproxy-1.4.19

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    uid 99
    gid 99
    daemon
    # debug
    #quiet

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen cluster-writes 0.0.0.0:4306
    mode tcp
    balance roundrobin
    option httpchk

    server percona1 192.168.70.2:3306 check port 9200 inter 12000 rise 3 fall 3 backup
    server percona2 192.168.70.3:3306 check port 9200 inter 12000 rise 3 fall 3 backup
    server percona3 192.168.70.4:3306 check port 9200 inter 12000 rise 3 fall 3 backup

listen cluster-reads 0.0.0.0:5306
    mode tcp
    balance roundrobin
    option httpchk

    server percona1 192.168.70.2:3306 check port 9200 inter 12000 rise 3 fall 3
    server percona2 192.168.70.3:3306 check port 9200 inter 12000 rise 3 fall 3
    server percona3 192.168.70.4:3306 check port 9200 inter 12000 rise 3 fall 3

listen admin_page 0.0.0.0:9999
    mode http
    balance roundrobin
    stats uri /
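Two details in this config are easy to miss: every server in cluster-writes carries the backup flag, so HAProxy sends traffic only to the first healthy one at a time, which is what keeps writes on a single node; and "option httpchk" with "check port 9200" makes HAProxy probe the clustercheck service (wired up under modules/xinet below) rather than the MySQL port itself. The same probe can be issued by hand (a sketch; assumes the xinetd service from this PR is running on the node):

    curl -i http://192.168.70.2:9200/
    # a synced node answers HTTP 200 with a body along the lines of
    # "Percona XtraDB Cluster Node is synced." (wording varies by
    # clustercheck version)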
8 changes: 8 additions & 0 deletions modules/haproxy/manifests/config.pp
@@ -0,0 +1,8 @@
class haproxy::config {
  file {
    "/etc/haproxy/haproxy.cfg":
      ensure => present,
      require => Class['haproxy::packages'],
      source => "/vagrant/modules/haproxy/files/xtradb_cluster.cfg";
  }
}
9 changes: 9 additions & 0 deletions modules/haproxy/manifests/init.pp
@@ -0,0 +1,9 @@
class haproxy {

  include haproxy::packages
  include haproxy::config
  include haproxy::service

  Class['haproxy::packages'] -> Class['haproxy::config'] -> Class['haproxy::service']

}
6 changes: 6 additions & 0 deletions modules/haproxy/manifests/packages.pp
@@ -0,0 +1,6 @@
class haproxy::packages {
  package {
    "haproxy": ensure => installed;
  }
}

10 changes: 10 additions & 0 deletions modules/haproxy/manifests/service.pp
@@ -0,0 +1,10 @@
class haproxy::service {

  service {
    "haproxy":
      ensure => 'running',
      require => Class['haproxy::config'],
      subscribe => File['/etc/haproxy/haproxy.cfg']
  }

}
3 changes: 3 additions & 0 deletions modules/percona/manifests/cluster/config.pp
@@ -1,8 +1,11 @@
class percona::cluster::config {

  if $hostname == "percona1" {
    # percona1 can't join itself, so if this node gets whacked out, it tries to talk to percona2
    # $joinip = "192.168.70.3"
    $joinip = " "
  } else {
    # All other nodes join percona1
    $joinip = "192.168.70.2"
  }
  file {
10 changes: 4 additions & 6 deletions modules/percona/manifests/toolkit.pp
@@ -8,10 +8,8 @@
    "perl-DBD-MySQL":
      ensure => installed,
      require => Package['Percona-Server-shared-compat'];
    "percona-toolkit":
      provider => rpm,
      ensure => installed,
      require => [ Package['perl-Time-HiRes'], Package['perl-TermReadKey'], Package['perl-DBD-MySQL'] ],
      source => "http://www.percona.com/redir/downloads/percona-toolkit/percona-toolkit-1.0.1-1.noarch.rpm";
  }
    "percona-toolkit":
      ensure => installed,
      require => [ Package['perl-Time-HiRes'], Package['perl-TermReadKey'], Package['perl-DBD-MySQL'] ];
  }
}
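Dropping the pinned rpm in favour of the package from the repository set up by percona::repository keeps the toolkit upgradable through yum. A quick check after provisioning (a sketch):

    rpm -q percona-toolkit
    pt-query-digest --version   # any pt-* tool works as a smoke test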
8 changes: 6 additions & 2 deletions modules/percona/templates/cluster/my.cnf.erb
@@ -1,20 +1,24 @@
[mysqld_safe]
wsrep_urls=gcomm://192.168.70.2:4567,gcomm://192.168.70.3:4567,gcomm://192.168.70.4:4567,gcomm://

[mysqld]
datadir=/var/lib/mysql
user=mysql
log_error=error.log
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://<%= joinip %>
wsrep_sst_receive_address=<%= ipaddress_eth1 %>
wsrep_node_incoming_address=<%= ipaddress_eth1 %>
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
#wsrep_sst_method=rsync
wsrep_sst_method=xtrabackup
wsrep_node_name=<%= hostname %>
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
innodb_log_file_size=64M
bind-address=<%= ipaddress_eth1 %>

[mysql]
user=root
prompt="<%=hostname %> mysql> "
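Once a node has provisioned with this template, the standard Galera status variables confirm it joined and synced (a sketch, run on any node):

    mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size'"           # expect 3
    mysql -u root -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"    # expect Synced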
19 changes: 19 additions & 0 deletions modules/xinet/files/mysqlchk
@@ -0,0 +1,19 @@
# default: on
# description: mysqlchk
service mysqlchk
{
    # this is a config for xinetd; place it in /etc/xinetd.d/
    disable         = no
    type            = UNLISTED
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    log_on_failure  += USERID
    # recommended: restrict this to only the IPs that need to connect (for security)
    only_from       = 192.168.70.0/24
    per_source      = UNLIMITED
}
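The wrapped script can also be run by hand on a node, which helps when a health check misbehaves (a sketch; the stock clustercheck script takes the user and password as its first two arguments, matching the grant in modules/xinet/manifests/config.pp below):

    /usr/bin/clustercheck clustercheckuser 'clustercheckpassword!'
    echo $?    # 0 when the node reports itself synced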
16 changes: 16 additions & 0 deletions modules/xinet/manifests/config.pp
@@ -0,0 +1,16 @@
class xinet::config {
  file {
    "/etc/xinetd.d/mysqlchk":
      ensure => present,
      owner => root, group => root,
      require => [Class['xinet::packages'], Class['percona::cluster::config']],
      source => "/vagrant/modules/xinet/files/mysqlchk";
  }

  mysql::rights { "clustercheck":
    user => "clustercheckuser",
    password => "clustercheckpassword!",
    database => "mysql",
    priv => ["process_priv"]
  }
}
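mysql::rights is assumed to come from an external MySQL Puppet module that is not part of this diff; the grant it declares should be roughly equivalent to the following (a sketch; double quotes are used inside the statement so the shell leaves the ! alone):

    mysql -u root -e 'GRANT PROCESS ON *.* TO "clustercheckuser"@"localhost" IDENTIFIED BY "clustercheckpassword!"'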
9 changes: 9 additions & 0 deletions modules/xinet/manifests/init.pp
@@ -0,0 +1,9 @@
class xinet {

  include xinet::packages
  include xinet::config
  include xinet::service

  Class['xinet::packages'] -> Class['xinet::config'] -> Class['xinet::service']

}
5 changes: 5 additions & 0 deletions modules/xinet/manifests/packages.pp
@@ -0,0 +1,5 @@
class xinet::packages {
  package {
    "xinetd": ensure => installed;
  }
}
8 changes: 8 additions & 0 deletions modules/xinet/manifests/service.pp
@@ -0,0 +1,8 @@
class xinet::service {
  service {
    "xinetd":
      ensure => 'running',
      require => Class['xinet::packages'],
      subscribe => File['/etc/xinetd.d/mysqlchk'];
  }
}
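After provisioning percona1, the whole health-check chain can be verified bottom-up (a sketch):

    service xinetd status
    netstat -ltnp | grep 9200             # xinetd listening for clustercheck
    curl -s http://192.168.70.2:9999/     # HAProxy stats page should render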