
Merge pull request #1 from zznate/simplied-precise64

Simplied precise64
2 parents d398932 + a509cd1 commit 6d2810d11f80019d49df0648ac88385e397cf790 @zznate committed
Showing 31 changed files with 55 additions and 2,495 deletions.
  1. +1 −0 .gitignore
  2. +7 −0 README.md
  3. +0 −134 Vagrantfile
  4. +0 −24 manifests/default.pp
  5. +0 −1 modules/cassandra
  6. +0 −12 modules/gini-cassandra-0.3.0/Gemfile
  7. +0 −202 modules/gini-cassandra-0.3.0/LICENSE-2.0.txt
  8. +0 −10 modules/gini-cassandra-0.3.0/Modulefile
  9. +0 −40 modules/gini-cassandra-0.3.0/README.md
  10. +0 −4 modules/gini-cassandra-0.3.0/Rakefile
  11. +0 −64 modules/gini-cassandra-0.3.0/manifests/config.pp
  12. +0 −169 modules/gini-cassandra-0.3.0/manifests/init.pp
  13. +0 −50 modules/gini-cassandra-0.3.0/manifests/install.pp
  14. +0 −249 modules/gini-cassandra-0.3.0/manifests/params.pp
  15. +0 −35 modules/gini-cassandra-0.3.0/manifests/repo.pp
  16. +0 −18 modules/gini-cassandra-0.3.0/manifests/repo/debian.pp
  17. +0 −15 modules/gini-cassandra-0.3.0/manifests/repo/redhat.pp
  18. +0 −10 modules/gini-cassandra-0.3.0/manifests/service.pp
  19. +0 −45 modules/gini-cassandra-0.3.0/metadata.json
  20. +0 −157 modules/gini-cassandra-0.3.0/spec/classes/cassandra_config_spec.rb
  21. +0 −78 modules/gini-cassandra-0.3.0/spec/classes/cassandra_repo_spec.rb
  22. +0 −204 modules/gini-cassandra-0.3.0/spec/classes/cassandra_spec.rb
  23. +0 −1 modules/gini-cassandra-0.3.0/spec/spec_helper.rb
  24. +0 −254 modules/gini-cassandra-0.3.0/templates/cassandra-env.sh.erb
  25. +0 −690 modules/gini-cassandra-0.3.0/templates/cassandra.yaml.erb
  26. +0 −6 modules/gini-cassandra-0.3.0/tests/init.pp
  27. +0 −12 modules/my-cass/manifests/init.pp
  28. +0 −4 modules/my-cass/templates/cass1.yaml.erb
  29. +21 −0 precise64-cassandra-ccm/README.md
  30. +26 −0 precise64-cassandra-ccm/Vagrantfile
  31. +0 −7 provision.sh
1 .gitignore
@@ -1,4 +1,5 @@
.idea
+*.box
.vagrant
.DS_Store
hector.iml
7 README.md
@@ -0,0 +1,7 @@
+# Useful Vagrant Boxes for Cassandra
+
+Currently, the only one here uses Cassandra Cluster Manager (CCM) for running a cluster on a single image.
+
+### Coming Soon
+
+A three-node cluster in a Vagrantfile
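For context on the CCM-on-one-box approach the new README describes, here is a minimal Vagrantfile sketch. This is not the `precise64-cassandra-ccm/Vagrantfile` added in this PR (that file is not shown above); the memory size, package names, Cassandra version, and CCM invocation are illustrative assumptions.

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Illustrative sketch only -- not the Vagrantfile added by this PR.
Vagrant.configure("2") do |config|
  config.vm.box     = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"

  config.vm.provider :virtualbox do |vb|
    # Three Cassandra JVMs on one guest need some headroom.
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end

  # Install CCM, then create and start a three-node cluster on the
  # single guest. Cluster name and version below are illustrative.
  config.vm.provision "shell", inline: <<-SCRIPT
    apt-get update
    apt-get install -y openjdk-7-jre-headless python-pip ant git
    pip install ccm
    su - vagrant -c "ccm create test -v 1.2.19 -n 3 -s"
  SCRIPT
end
```

With a sketch like this, `vagrant up` provisions the guest, after which `vagrant ssh` followed by `ccm status` should list the three nodes, assuming the box URL and package names above are still valid.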
134 Vagrantfile
@@ -1,134 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-Vagrant.configure("2") do |config|
- # All Vagrant configuration is done here. The most common configuration
- # options are documented and commented below. For a complete reference,
- # please see the online documentation at vagrantup.com.
-
- # Every Vagrant virtual environment requires a box to build off of.
- #config.vm.box = "precise64"
- config.vm.box = "precise64"
- #config.vm.box_url = "./package.box"
-
- # The url from where the 'config.vm.box' box will be fetched if it
- # doesn't already exist on the user's system.
- config.vm.box_url = "http://files.vagrantup.com/precise64.box"
-
- # Create a forwarded port mapping which allows access to a specific port
- # within the machine from a port on the host machine. In the example below,
- # accessing "localhost:8080" will access port 80 on the guest machine.
-
- # zznate: custom addition testing port forwarding
- # config.vm.network :forwarded_port, guest: 80, host: 8080
- config.vm.define "cass1" do |cass1|
- cass1.vm.hostname = "cass1"
- cass1.vm.network :private_network, ip: "192.168.33.10"
- end
-
- #config.vm.define "cass2" do |cass2|
- # cass2.vm.hostname = "cass2"
- # cass2.vm.network :private_network, ip: "192.168.33.11"
- #end
-
-
- # Create a private network, which allows host-only access to the machine
- # using a specific IP.
- # config.vm.network :private_network, ip: "192.168.33.10"
-
- # Create a public network, which generally matched to bridged network.
- # Bridged networks make the machine appear as another physical device on
- # your network.
- # config.vm.network :public_network
-
- # If true, then any SSH connections made will enable agent forwarding.
- # Default value: false
- # config.ssh.forward_agent = true
-
- # Share an additional folder to the guest VM. The first argument is
- # the path on the host to the actual folder. The second argument is
- # the path on the guest to mount the folder. And the optional third
- # argument is a set of non-required options.
- # config.vm.synced_folder "../data", "/vagrant_data"
-
- # Provider-specific configuration so you can fine-tune various
- # backing providers for Vagrant. These expose provider-specific options.
- # Example for VirtualBox:
- #
- config.vm.provider :virtualbox do |vb|
- # # Don't boot with headless mode
- # vb.gui = true
- #
- # # Use VBoxManage to customize the VM. For example to change memory:
- vb.customize ["modifyvm", :id, "--memory", "1024"]
- end
- #
- # View the documentation for the provider you're using for more
- # information on available options.
-
- # Enable provisioning with Puppet stand alone. Puppet manifests
- # are contained in a directory path relative to this Vagrantfile.
- # You will need to create the manifests directory and a manifest in
- # the file precise64.pp in the manifests_path directory.
- #
- # An example Puppet manifest to provision the message of the day:
- #
- # # group { "puppet":
- # # ensure => "present",
- # # }
- # #
- # # File { owner => 0, group => 0, mode => 0644 }
- # #
- # # file { '/etc/motd':
- # # content => "Welcome to your Vagrant-built virtual machine!
- # # Managed by Puppet.\n"
- # # }
- #
- config.vm.provision :puppet do |puppet|
- puppet.manifests_path = "manifests"
- puppet.manifest_file = "default.pp"
- puppet.module_path = "modules"
- end
-
- # Enable provisioning with chef solo, specifying a cookbooks path, roles
- # path, and data_bags path (all relative to this Vagrantfile), and adding
- # some recipes and/or roles.
- #
- # config.vm.provision :chef_solo do |chef|
- # chef.cookbooks_path = "../my-recipes/cookbooks"
- # chef.roles_path = "../my-recipes/roles"
- # chef.data_bags_path = "../my-recipes/data_bags"
- # chef.add_recipe "mysql"
- # chef.add_role "web"
- #
- # # You may also specify custom JSON attributes:
- # chef.json = { :mysql_password => "foo" }
- # end
-
- # Enable provisioning with chef server, specifying the chef server URL,
- # and the path to the validation key (relative to this Vagrantfile).
- #
- # The Opscode Platform uses HTTPS. Substitute your organization for
- # ORGNAME in the URL and validation key.
- #
- # If you have your own Chef Server, use the appropriate URL, which may be
- # HTTP instead of HTTPS depending on your configuration. Also change the
- # validation key to validation.pem.
- #
- # config.vm.provision :chef_client do |chef|
- # chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
- # chef.validation_key_path = "ORGNAME-validator.pem"
- # end
- #
- # If you're using the Opscode platform, your validator client is
- # ORGNAME-validator, replacing ORGNAME with your organization name.
- #
- # If you have your own Chef Server, the default validation client name is
- # chef-validator, unless you changed the configuration.
- #
- # chef.validation_client_name = "ORGNAME-validator"
-
- # zznate - added to test provisioner script
- # config.vm.provision "shell", path: "provision.sh"
-
-end
24 manifests/default.pp
@@ -1,24 +0,0 @@
-#include cassandra
-#include apache2
-
-
-#exec { "apt-get update" :
-# command => "/usr/bin/apt-get update",
-#}
-
-#package { "apache2" :
-# require => Exec["apt-get update"],
-#}
-
-file { "/var/www" :
- ensure => link,
- target => "/vagrant",
- force => true,
-}
-include cassandra
-
-#file { "/etc/cassandra/cassandra.yaml" :
-# ensure => file,
-# content => template("cassandra/templates/cassandra.yaml.erb"),
-#}
-
1 modules/cassandra
12 modules/gini-cassandra-0.3.0/Gemfile
@@ -1,12 +0,0 @@
-source 'https://rubygems.org'
-puppetversion = ENV.key?('PUPPET_VERSION') ? "= #{ENV['PUPPET_VERSION']}" : ['>= 2.7']
-
-gem 'puppet', puppetversion
-
-group :test do
- gem 'rake', '>= 0.9.0'
- gem 'rspec', '>= 2.8.0'
- gem 'rspec-puppet', '>= 0.1.1'
- gem 'puppet-lint'
- gem 'puppetlabs_spec_helper'
-end
202 modules/gini-cassandra-0.3.0/LICENSE-2.0.txt
@@ -1,202 +0,0 @@
-
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
10 modules/gini-cassandra-0.3.0/Modulefile
@@ -1,10 +0,0 @@
-name 'gini-cassandra'
-version '0.3.0'
-author 'Jochen Schalanda'
-license 'Apache 2.0'
-project_page 'https://github.com/gini/puppet-cassandra'
-source 'https://github.com/gini/puppet-cassandra'
-summary 'Puppet module to install Apache Cassandra from the DataStax distribution'
-description 'A Puppet module to install the DataStax Community edition using the official repositories.'
-dependency 'puppetlabs/stdlib', '>= 3.0.0'
-dependency 'puppetlabs/apt', '1.x'
40 modules/gini-cassandra-0.3.0/README.md
@@ -1,40 +0,0 @@
-Puppet Cassandra module (DataStax edition)
-==========================================
-
-[![Build Status](https://secure.travis-ci.org/gini/puppet-cassandra.png)](http://travis-ci.org/gini/puppet-cassandra)
-
-Overview
---------
-
-Install Apache Cassandra from the [DataStax](http://www.datastax.com/) distribution.
-
-Usage
------
-
-Example:
-
- class { 'cassandra':
- cluster_name => 'YourCassandraCluster',
- seeds => [ '192.0.2.5', '192.0.2.23', '192.0.2.42', ],
- }
-
-Supported Platforms
--------------------
-
-The module has been tested on the following operating systems. Testing and patches for other platforms are welcomed.
-
-* Debian 6.0 (Squeeze) and Debian 7.0 (Wheezy)
-
-License
--------
-
-Copyright (c) 2012-2013 smarchive GmbH, 2013 Gini GmbH
-
-This script is licensed under the Apache License, Version 2.0.
-
-See http://www.apache.org/licenses/LICENSE-2.0.html for the full license text.
-
-Support
--------
-
-Please log tickets and issues at our [project site](https://github.com/gini/puppet-cassandra/issues).
4 modules/gini-cassandra-0.3.0/Rakefile
@@ -1,4 +0,0 @@
-require 'rubygems'
-require 'puppetlabs_spec_helper/rake_tasks'
-
-task :default => [:spec, :lint]
64 modules/gini-cassandra-0.3.0/manifests/config.pp
@@ -1,64 +0,0 @@
-class cassandra::config(
- $config_path,
- $max_heap_size,
- $heap_newsize,
- $jmx_port,
- $additional_jvm_opts,
- $cluster_name,
- $start_native_transport,
- $start_rpc,
- $listen_address,
- $rpc_address,
- $rpc_port,
- $rpc_server_type,
- $native_transport_port,
- $storage_port,
- $partitioner,
- $data_file_directories,
- $commitlog_directory,
- $saved_caches_directory,
- $initial_token,
- $num_tokens,
- $seeds,
- $concurrent_reads,
- $concurrent_writes,
- $incremental_backups,
- $snapshot_before_compaction,
- $auto_snapshot,
- $multithreaded_compaction,
- $endpoint_snitch,
- $internode_compression,
- $disk_failure_policy,
- $thread_stack_size,
-) {
- group { 'cassandra':
- ensure => present,
- require => Class['cassandra::install'],
- }
-
- user { 'cassandra':
- ensure => present,
- require => Group['cassandra'],
- }
-
- File {
- owner => 'cassandra',
- group => 'cassandra',
- mode => '0644',
- require => Class['cassandra::install'],
- }
-
- file { $data_file_directories:
- ensure => directory,
- }
-
- file { "${config_path}/cassandra-env.sh":
- ensure => file,
- content => template("${module_name}/cassandra-env.sh.erb"),
- }
-
- file { "${config_path}/cassandra.yaml":
- ensure => file,
- content => template("${module_name}/cassandra.yaml.erb"),
- }
-}
169 modules/gini-cassandra-0.3.0/manifests/init.pp
@@ -1,169 +0,0 @@
-class cassandra(
- $package_name = $cassandra::params::package_name,
- $version = $cassandra::params::version,
- $service_name = $cassandra::params::service_name,
- $config_path = $cassandra::params::config_path,
- $include_repo = $cassandra::params::include_repo,
- $repo_name = $cassandra::params::repo_name,
- $repo_baseurl = $cassandra::params::repo_baseurl,
- $repo_gpgkey = $cassandra::params::repo_gpgkey,
- $repo_repos = $cassandra::params::repo_repos,
- $repo_release = $cassandra::params::repo_release,
- $repo_pin = $cassandra::params::repo_pin,
- $repo_gpgcheck = $cassandra::params::repo_gpgcheck,
- $repo_enabled = $cassandra::params::repo_enabled,
- $max_heap_size = $cassandra::params::max_heap_size,
- $heap_newsize = $cassandra::params::heap_newsize,
- $jmx_port = $cassandra::params::jmx_port,
- $additional_jvm_opts = $cassandra::params::additional_jvm_opts,
- $cluster_name = $cassandra::params::cluster_name,
- $listen_address = $cassandra::params::listen_address,
- $start_native_transport = $cassandra::params::start_native_transport,
- $start_rpc = $cassandra::params::start_rpc,
- $rpc_address = $cassandra::params::rpc_address,
- $rpc_port = $cassandra::params::rpc_port,
- $rpc_server_type = $cassandra::params::rpc_server_type,
- $native_transport_port = $cassandra::params::native_transport_port,
- $storage_port = $cassandra::params::storage_port,
- $partitioner = $cassandra::params::partitioner,
- $data_file_directories = $cassandra::params::data_file_directories,
- $commitlog_directory = $cassandra::params::commitlog_directory,
- $saved_caches_directory = $cassandra::params::saved_caches_directory,
- $initial_token = $cassandra::params::initial_token,
- $num_tokens = $cassandra::params::num_tokens,
- $seeds = $cassandra::params::seeds,
- $concurrent_reads = $cassandra::params::concurrent_reads,
- $concurrent_writes = $cassandra::params::concurrent_writes,
- $incremental_backups = $cassandra::params::incremental_backups,
- $snapshot_before_compaction = $cassandra::params::snapshot_before_compaction,
- $auto_snapshot = $cassandra::params::auto_snapshot,
- $multithreaded_compaction = $cassandra::params::multithreaded_compaction,
- $endpoint_snitch = $cassandra::params::endpoint_snitch,
- $internode_compression = $cassandra::params::internode_compression,
- $disk_failure_policy = $cassandra::params::disk_failure_policy,
- $thread_stack_size = $cassandra::params::thread_stack_size
-) inherits cassandra::params {
- # Validate input parameters
- validate_bool($include_repo)
-
- validate_absolute_path($commitlog_directory)
- validate_absolute_path($saved_caches_directory)
-
- validate_string($cluster_name)
- validate_string($partitioner)
- validate_string($initial_token)
- validate_string($endpoint_snitch)
-
- validate_re($start_rpc, '^(true|false)$')
- validate_re($start_native_transport, '^(true|false)$')
- validate_re($rpc_server_type, '^(hsha|sync|async)$')
- validate_re($incremental_backups, '^(true|false)$')
- validate_re($snapshot_before_compaction, '^(true|false)$')
- validate_re($auto_snapshot, '^(true|false)$')
- validate_re($multithreaded_compaction, '^(true|false)$')
- validate_re("${concurrent_reads}", '^[0-9]+$')
- validate_re("${concurrent_writes}", '^[0-9]+$')
- validate_re("${num_tokens}", '^[0-9]+$')
- validate_re($internode_compression, '^(all|dc|none)$')
- validate_re($disk_failure_policy, '^(stop|best_effort|ignore)$')
- validate_re("${thread_stack_size}", '^[0-9]+$')
-
- validate_array($additional_jvm_opts)
- validate_array($seeds)
- validate_array($data_file_directories)
-
- if(!is_integer($jmx_port)) {
- fail('jmx_port must be a port number between 1 and 65535')
- }
-
- if(!is_ip_address($listen_address)) {
- fail('listen_address must be an IP address')
- }
-
- if(!is_ip_address($rpc_address)) {
- fail('rpc_address must be an IP address')
- }
-
- if(!is_integer($rpc_port)) {
- fail('rpc_port must be a port number between 1 and 65535')
- }
-
- if(!is_integer($native_transport_port)) {
- fail('native_transport_port must be a port number between 1 and 65535')
- }
-
- if(!is_integer($storage_port)) {
- fail('storage_port must be a port number between 1 and 65535')
- }
-
- if(empty($seeds)) {
- fail('seeds must not be empty')
- }
-
- if(empty($data_file_directories)) {
- fail('data_file_directories must not be empty')
- }
-
- if(!empty($initial_token)) {
- notice("Starting with Cassandra 1.2 you shouldn't set an initial_token but set num_tokens accordingly.")
- }
-
- # Anchors for containing the implementation class
- anchor { 'cassandra::begin': }
-
- if($include_repo) {
- class { 'cassandra::repo':
- repo_name => $repo_name,
- baseurl => $repo_baseurl,
- gpgkey => $repo_gpgkey,
- repos => $repo_repos,
- release => $repo_release,
- pin => $repo_pin,
- gpgcheck => $repo_gpgcheck,
- enabled => $repo_enabled,
- }
- Class['cassandra::repo'] -> Class['cassandra::install']
- }
-
- include cassandra::install
-
- class { 'cassandra::config':
- config_path => $config_path,
- max_heap_size => $max_heap_size,
- heap_newsize => $heap_newsize,
- jmx_port => $jmx_port,
- additional_jvm_opts => $additional_jvm_opts,
- cluster_name => $cluster_name,
- start_native_transport => $start_native_transport,
- start_rpc => $start_rpc,
- listen_address => $listen_address,
- rpc_address => $rpc_address,
- rpc_port => $rpc_port,
- rpc_server_type => $rpc_server_type,
- native_transport_port => $native_transport_port,
- storage_port => $storage_port,
- partitioner => $partitioner,
- data_file_directories => $data_file_directories,
- commitlog_directory => $commitlog_directory,
- saved_caches_directory => $saved_caches_directory,
- initial_token => $initial_token,
- num_tokens => $num_tokens,
- seeds => $seeds,
- concurrent_reads => $concurrent_reads,
- concurrent_writes => $concurrent_writes,
- incremental_backups => $incremental_backups,
- snapshot_before_compaction => $snapshot_before_compaction,
- auto_snapshot => $auto_snapshot,
- multithreaded_compaction => $multithreaded_compaction,
- endpoint_snitch => $endpoint_snitch,
- internode_compression => $internode_compression,
- disk_failure_policy => $disk_failure_policy,
- thread_stack_size => $thread_stack_size,
- }
-
- include cassandra::service
-
- anchor { 'cassandra::end': }
-
- Anchor['cassandra::begin'] -> Class['cassandra::install'] -> Class['cassandra::config'] ~> Class['cassandra::service'] -> Anchor['cassandra::end']
-}
50 modules/gini-cassandra-0.3.0/manifests/install.pp
@@ -1,50 +0,0 @@
-class cassandra::install {
- package { 'dsc':
- ensure => $cassandra::version,
- name => $cassandra::package_name,
- }
-
- $python_cql_name = $::osfamily ? {
- 'Debian' => 'python-cql',
- 'RedHat' => 'python26-cql',
- default => 'python-cql',
- }
-
- package { $python_cql_name:
- ensure => installed,
- }
-
- if ($::osfamily == 'Debian') {
- file { 'CASSANDRA-2356 /etc/cassandra':
- ensure => directory,
- path => '/etc/cassandra',
- owner => 'root',
- group => 'root',
- mode => '0755',
- }
-
- exec { 'CASSANDRA-2356 Workaround':
- path => ['/sbin', '/bin', '/usr/sbin', '/usr/bin'],
- command => '/etc/init.d/cassandra stop && rm -rf /var/lib/cassandra/*',
- creates => '/etc/cassandra/CASSANDRA-2356',
- user => 'root',
- require => [
- Package['dsc'],
- File['CASSANDRA-2356 /etc/cassandra'],
- ],
- }
-
- file { 'CASSANDRA-2356 marker file':
- ensure => file,
- path => '/etc/cassandra/CASSANDRA-2356',
- owner => 'root',
- group => 'root',
- mode => '0644',
- content => '# Workaround for CASSANDRA-2356',
- require => [
- File['CASSANDRA-2356 /etc/cassandra'],
- Exec['CASSANDRA-2356 Workaround'],
- ],
- }
- }
-}
249 modules/gini-cassandra-0.3.0/manifests/params.pp
@@ -1,249 +0,0 @@
-class cassandra::params {
- $include_repo = $::cassandra_include_repo ? {
- undef => true,
- default => $::cassandra_include_repo
- }
-
- $repo_name = $::cassandra_repo_name ? {
- undef => 'datastax',
- default => $::cassandra_repo_name
- }
-
- $repo_baseurl = $::cassandra_repo_baseurl ? {
- undef => $::osfamily ? {
- 'Debian' => 'http://debian.datastax.com/community',
- 'RedHat' => 'http://rpm.datastax.com/community/',
- default => undef,
- },
- default => $::cassandra_repo_baseurl
- }
-
- $repo_gpgkey = $::cassandra_repo_gpgkey ? {
- undef => $::osfamily ? {
- 'Debian' => 'http://debian.datastax.com/debian/repo_key',
- 'RedHat' => 'http://rpm.datastax.com/rpm/repo_key',
- default => undef,
- },
- default => $::cassandra_repo_gpgkey
- }
-
- $repo_repos = $::cassandra_repo_repos ? {
- undef => 'main',
- default => $::cassandra_repo_release
- }
-
- $repo_release = $::cassandra_repo_release ? {
- undef => 'stable',
- default => $::cassandra_repo_release
- }
-
- $repo_pin = $::cassandra_repo_pin ? {
- undef => 200,
- default => $::cassandra_repo_release
- }
-
- $repo_gpgcheck = $::cassandra_repo_gpgcheck ? {
- undef => 0,
- default => $::cassandra_repo_gpgcheck
- }
-
- $repo_enabled = $::cassandra_repo_enabled ? {
- undef => 1,
- default => $::cassandra_repo_enabled
- }
-
- case $::osfamily {
- 'Debian': {
- $package_name = $::cassandra_package_name ? {
- undef => 'dsc12',
- default => $::cassandra_package_name,
- }
-
- $service_name = $::cassandra_service_name ? {
- undef => 'cassandra',
- default => $::cassandra_service_name,
- }
-
- $config_path = $::cassandra_config_path ? {
- undef => '/etc/cassandra',
- default => $::cassandra_config_path,
- }
- }
- 'RedHat': {
- $package_name = $::cassandra_package_name ? {
- undef => 'dsc12',
- default => $::cassandra_package_name,
- }
-
- $service_name = $::cassandra_service_name ? {
- undef => 'cassandra',
- default => $::cassandra_service_name,
- }
-
- $config_path = $::cassandra_config_path ? {
- undef => '/etc/cassandra/conf',
- default => $::cassandra_config_path,
- }
- }
- default: {
- fail("Unsupported osfamily: ${::osfamily}, operatingsystem: ${::operatingsystem}, module ${module_name} only supports osfamily Debian")
- }
- }
-
- $version = $::cassandra_version ? {
- undef => 'installed',
- default => $::cassandra_version,
- }
-
- $max_heap_size = $::cassandra_max_heap_size ? {
- undef => '',
- default => $::cassandra_max_heap_size,
- }
-
- $heap_newsize = $::cassandra_heap_newsize ? {
- undef => '',
- default => $::cassandra_heap_newsize,
- }
-
- $jmx_port = $::cassandra_jmx_port ? {
- undef => 7199,
- default => $::cassandra_jmx_port,
- }
-
- $additional_jvm_opts = $::cassandra_additional_jvm_opts ? {
- undef => [],
- default => $::cassandra_additional_jvm_opts,
- }
-
- $cluster_name = $::cassandra_cluster_name ? {
- undef => 'Cassandra',
- default => $::cassandra_cluster_name,
- }
-
- $listen_address = $::cassandra_listen_address ? {
- undef => $::ipaddress,
- default => $::cassandra_listen_address,
- }
-
- $rpc_address = $::cassandra_rpc_address ? {
- undef => '0.0.0.0',
- default => $::cassandra_rpc_address,
- }
-
- $rpc_port = $::cassandra_rpc_port ? {
- undef => 9160,
- default => $::cassandra_rpc_port,
- }
-
- $rpc_server_type = $::cassandra_rpc_server_type ? {
- undef => 'hsha',
- default => $::cassandra_rpc_server_type,
- }
-
- $storage_port = $::cassandra_storage_port ? {
- undef => 7000,
- default => $::cassandra_storage_port,
- }
-
- $partitioner = $::cassandra_partitioner ? {
- undef => 'org.apache.cassandra.dht.Murmur3Partitioner',
- default => $::cassandra_partitioner,
- }
-
- $data_file_directories = $::cassandra_data_file_directories ? {
- undef => ['/var/lib/cassandra/data'],
- default => $::cassandra_data_file_directories,
- }
-
- $commitlog_directory = $::cassandra_commitlog_directory ? {
- undef => '/var/lib/cassandra/commitlog',
- default => $::cassandra_commitlog_directory,
- }
-
- $saved_caches_directory = $::cassandra_saved_caches_directory ? {
- undef => '/var/lib/cassandra/saved_caches',
- default => $::cassandra_saved_caches_directory,
- }
-
- $initial_token = $::cassandra_initial_token ? {
- undef => '',
- default => $::cassandra_initial_token,
- }
-
- $seeds = $::cassandra_seeds ? {
- undef => [],
- default => $::cassandra_seeds,
- }
-
- $default_concurrent_reads = $::processorcount * 8
- $concurrent_reads = $::cassandra_concurrent_reads ? {
- undef => $default_concurrent_reads,
- default => $::cassandra_concurrent_reads,
- }
-
- $default_concurrent_writes = $::processorcount * 8
- $concurrent_writes = $::cassandra_concurrent_writes ? {
- undef => $default_concurrent_writes,
- default => $::cassandra_concurrent_writes,
- }
-
- $incremental_backups = $::cassandra_incremental_backups ? {
- undef => 'false',
- default => $::cassandra_incremental_backups,
- }
-
- $snapshot_before_compaction = $::cassandra_snapshot_before_compaction ? {
- undef => 'false',
- default => $::cassandra_snapshot_before_compaction,
- }
-
- $auto_snapshot = $::cassandra_auto_snapshot ? {
- undef => 'true',
- default => $::cassandra_auto_snapshot,
- }
-
- $multithreaded_compaction = $::cassandra_multithreaded_compaction ? {
- undef => 'false',
- default => $::cassandra_multithreaded_compaction,
- }
-
- $endpoint_snitch = $::cassandra_endpoint_snitch ? {
- undef => 'SimpleSnitch',
- default => $::cassandra_endpoint_snitch,
- }
-
- $internode_compression = $::cassandra_internode_compression ? {
- undef => 'all',
- default => $::cassandra_internode_compression,
- }
-
- $disk_failure_policy = $::cassandra_disk_failure_policy ? {
- undef => 'stop',
- default => $::cassandra_disk_failure_policy,
- }
-
- $start_native_transport = $::cassandra_start_native_transport ? {
- undef => 'false',
- default => $::cassandra_start_native_transport,
- }
-
- $native_transport_port = $::cassandra_native_transport_port ? {
- undef => 9042,
- default => $::cassandra_native_transport_port,
- }
-
- $start_rpc = $::cassandra_start_rpc ? {
- undef => 'true',
- default => $::cassandra_start_rpc,
- }
-
- $num_tokens = $::cassandra_num_tokens ? {
- undef => 256,
- default => $::cassandra_num_tokens,
- }
-
- $thread_stack_size = $::cassandra_thread_stack_size ? {
- undef => 180,
- default => $::cassandra_thread_stack_size,
- }
-}
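Every parameter in the deleted params.pp above follows the same selector idiom: take a top-scope `$::cassandra_*` fact when it is set, otherwise fall back to a module default. A minimal Python sketch of that resolution logic (names and values are illustrative, not part of the module):

```python
# Sketch of the params.pp selector idiom:
#   $x = $::cassandra_x ? { undef => <default>, default => $::cassandra_x }
# An unset fact resolves to the module default; a set fact wins.

DEFAULTS = {
    "cluster_name": "Cassandra",
    "jmx_port": 7199,
    "rpc_server_type": "hsha",
}

def resolve(facts, name):
    """Return the fact override when present, else the module default."""
    value = facts.get("cassandra_" + name)
    return DEFAULTS[name] if value is None else value

facts = {"cassandra_jmx_port": 7299}  # only one fact overridden
print(resolve(facts, "jmx_port"))      # overridden value
print(resolve(facts, "cluster_name"))  # module default
```

The two bugs fixed above (`$repo_repos` and `$repo_pin` falling back to `$::cassandra_repo_release`) are exactly the failure mode this idiom invites: the `default =>` arm must repeat the same fact name as the selector expression.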
35 modules/gini-cassandra-0.3.0/manifests/repo.pp
@@ -1,35 +0,0 @@
-class cassandra::repo (
- $repo_name,
- $baseurl,
- $gpgkey,
- $repos,
- $release,
- $pin,
- $gpgcheck,
- $enabled
-){
- case $::osfamily {
- 'Debian': {
- class { 'cassandra::repo::debian':
- repo_name => $repo_name,
- location => $baseurl,
- repos => $repos,
- release => $release,
- key_source => $gpgkey,
- pin => $pin,
- }
- }
- 'RedHat': {
- class { 'cassandra::repo::redhat':
- repo_name => $repo_name,
- baseurl => $baseurl,
- gpgkey => $gpgkey,
- gpgcheck => $gpgcheck,
- enabled => $enabled,
- }
- }
- default: {
- fail("OS family ${::osfamily} not supported")
- }
- }
-}
18 modules/gini-cassandra-0.3.0/manifests/repo/debian.pp
@@ -1,18 +0,0 @@
-class cassandra::repo::debian(
- $repo_name,
- $location,
- $repos,
- $release,
- $key_source,
- $pin
-) {
- apt::source { $repo_name:
- location => $location,
- release => $release,
- repos => $repos,
- key => $repo_name,
- key_source => $key_source,
- pin => $pin,
- include_src => false,
- }
-}
15 modules/gini-cassandra-0.3.0/manifests/repo/redhat.pp
@@ -1,15 +0,0 @@
-class cassandra::repo::redhat(
- $repo_name,
- $baseurl,
- $gpgkey,
- $gpgcheck,
- $enabled
-) {
- yumrepo { $repo_name:
- descr => 'DataStax Distribution for Cassandra',
- baseurl => $baseurl,
- gpgkey => $gpgkey,
- gpgcheck => $gpgcheck,
- enabled => $enabled,
- }
-}
10 modules/gini-cassandra-0.3.0/manifests/service.pp
@@ -1,10 +0,0 @@
-class cassandra::service {
- service { $cassandra::service_name:
- ensure => running,
- enable => true,
- hasstatus => true,
- hasrestart => true,
- subscribe => Class['cassandra::config'],
- require => Class['cassandra::config'],
- }
-}
45 modules/gini-cassandra-0.3.0/metadata.json
@@ -1,45 +0,0 @@
-{
- "name": "gini-cassandra",
- "version": "0.3.0",
- "source": "https://github.com/gini/puppet-cassandra",
- "author": "Jochen Schalanda",
- "license": "Apache 2.0",
- "summary": "Puppet module to install Apache Cassandra from the DataStax distribution",
- "description": "A Puppet module to install the DataStax Community edition using the official repositories.",
- "project_page": "https://github.com/gini/puppet-cassandra",
- "dependencies": [
- {
- "name": "puppetlabs/stdlib",
- "version_requirement": ">= 3.0.0"
- },
- {
- "name": "puppetlabs/apt",
- "version_requirement": "1.x"
- }
- ],
- "types": [
-
- ],
- "checksums": {
- "Gemfile": "296418e7800cf4ce40bd039654afa3cb",
- "LICENSE-2.0.txt": "3b83ef96387f14655fc854ddc3c6bd57",
- "Modulefile": "432c167ed2f0b5e4d598bffbe9ae2d85",
- "README.md": "0014c3791a1ca0c1407970bc3d5d08e9",
- "Rakefile": "d0a7bcdb74658217895c7f13d3f4b1de",
- "manifests/config.pp": "fc9a3314414b9fe9b3d1147f3674a8c0",
- "manifests/init.pp": "2e8ade0b446a34eecfa59440a1543307",
- "manifests/install.pp": "4bfa7f40cfb2f556894d3b0ef49787e0",
- "manifests/params.pp": "25cc5d957bd39f17594ab7f5d7c614ca",
- "manifests/repo/debian.pp": "e70157415e69e84e9a68828bc085839f",
- "manifests/repo/redhat.pp": "9dbb641cc76955e9a155066c42105bc1",
- "manifests/repo.pp": "ef942dc79cbc8c2596b93d3ac4df6b39",
- "manifests/service.pp": "26d9761ae4c25299460eab0664a82958",
- "spec/classes/cassandra_config_spec.rb": "72a860000639d6ce3b5820c624af5ff6",
- "spec/classes/cassandra_repo_spec.rb": "d2fde60f9b8a1969a07ef337319a78dd",
- "spec/classes/cassandra_spec.rb": "7c541ad490d551e37e361ac23b6b26de",
- "spec/spec_helper.rb": "31443cbb31de764aac79186cbc17689f",
- "templates/cassandra-env.sh.erb": "a815a0b852e8cbcc7e91de9c2b499436",
- "templates/cassandra.yaml.erb": "fce2d090fffba9dbbd4db7e805e97e67",
- "tests/init.pp": "04b020e5355f0a137fef76b6aaa8935b"
- }
-}
157 modules/gini-cassandra-0.3.0/spec/classes/cassandra_config_spec.rb
@@ -1,157 +0,0 @@
-require 'spec_helper'
-
-describe 'cassandra::config' do
- describe 'with supported os Debian' do
- let :facts do
- {
- :osfamily => 'Debian'
- }
- end
- let(:params) do
- {
- :config_path => '/etc/cassandra',
- :max_heap_size => '',
- :heap_newsize => '',
- :jmx_port => 7199,
- :additional_jvm_opts => [],
- :cluster_name => 'Cassandra',
- :listen_address => '1.2.3.4',
- :rpc_address => '0.0.0.0',
- :rpc_port => 9160,
- :rpc_server_type => 'hsha',
- :storage_port => 7000,
- :partitioner => 'org.apache.cassandra.dht.Murmur3Partitioner',
- :data_file_directories => ['/var/lib/cassandra/data'],
- :commitlog_directory => '/var/lib/cassandra/commitlog',
- :saved_caches_directory => '/var/lib/cassandra/saved_caches',
- :initial_token => '',
- :seeds => ['1.2.3.4'],
- :concurrent_reads => 32,
- :concurrent_writes => 32,
- :incremental_backups => 'false',
- :snapshot_before_compaction => 'false',
- :auto_snapshot => 'true',
- :multithreaded_compaction => 'false',
- :endpoint_snitch => 'SimpleSnitch',
- :internode_compression => 'all',
- :disk_failure_policy => 'stop',
- :start_native_transport => 'false',
- :start_rpc => 'true',
- :native_transport_port => 9042,
- :num_tokens => 256,
- :thread_stack_size => 180,
- }
- end
-
- it 'does contain group cassandra' do
- should contain_group('cassandra').with({
- :ensure => 'present',
- :require => 'Class[Cassandra::Install]',
- })
- end
-
- it 'does contain user cassandra' do
- should contain_user('cassandra').with({
- :ensure => 'present',
- :require => 'Group[cassandra]',
- })
- end
-
- it 'does contain file /etc/cassandra/cassandra-env.sh' do
- should contain_file('/etc/cassandra/cassandra-env.sh').with({
- :ensure => 'file',
- :owner => 'cassandra',
- :group => 'cassandra',
- :mode => '0644',
- :content => /MAX_HEAP_SIZE/,
- })
- end
-
- it 'does contain file /etc/cassandra/cassandra.yaml' do
- should contain_file('/etc/cassandra/cassandra.yaml').with({
- :ensure => 'file',
- :owner => 'cassandra',
- :group => 'cassandra',
- :mode => '0644',
- :content => /cluster_name: 'Cassandra'/,
- })
- end
- end
-
- describe 'with supported os RedHat' do
- let :facts do
- {
- :osfamily => 'RedHat'
- }
- end
- let(:params) do
- {
- :config_path => '/etc/cassandra/conf',
- :max_heap_size => '',
- :heap_newsize => '',
- :jmx_port => 7199,
- :additional_jvm_opts => [],
- :cluster_name => 'Cassandra',
- :listen_address => '1.2.3.4',
- :rpc_address => '0.0.0.0',
- :rpc_port => 9160,
- :rpc_server_type => 'hsha',
- :storage_port => 7000,
- :partitioner => 'org.apache.cassandra.dht.Murmur3Partitioner',
- :data_file_directories => ['/var/lib/cassandra/data'],
- :commitlog_directory => '/var/lib/cassandra/commitlog',
- :saved_caches_directory => '/var/lib/cassandra/saved_caches',
- :initial_token => '',
- :seeds => ['1.2.3.4'],
- :concurrent_reads => 32,
- :concurrent_writes => 32,
- :incremental_backups => 'false',
- :snapshot_before_compaction => 'false',
- :auto_snapshot => 'true',
- :multithreaded_compaction => 'false',
- :endpoint_snitch => 'SimpleSnitch',
- :internode_compression => 'all',
- :disk_failure_policy => 'stop',
- :start_native_transport => 'false',
- :start_rpc => 'true',
- :native_transport_port => 9042,
- :num_tokens => 256,
- :thread_stack_size => 128,
- }
- end
- it 'does contain group cassandra' do
- should contain_group('cassandra').with({
- :ensure => 'present',
- :require => 'Class[Cassandra::Install]',
- })
- end
-
- it 'does contain user cassandra' do
- should contain_user('cassandra').with({
- :ensure => 'present',
- :require => 'Group[cassandra]',
- })
- end
-
- it 'does contain file /etc/cassandra/conf/cassandra-env.sh' do
- should contain_file('/etc/cassandra/conf/cassandra-env.sh').with({
- :ensure => 'file',
- :owner => 'cassandra',
- :group => 'cassandra',
- :mode => '0644',
- :content => /MAX_HEAP_SIZE/,
- })
- end
-
- it 'does contain file /etc/cassandra/conf/cassandra.yaml' do
- should contain_file('/etc/cassandra/conf/cassandra.yaml').with({
- :ensure => 'file',
- :owner => 'cassandra',
- :group => 'cassandra',
- :mode => '0644',
- :content => /cluster_name: 'Cassandra'/,
- })
- end
- end
-
-end
78 modules/gini-cassandra-0.3.0/spec/classes/cassandra_repo_spec.rb
@@ -1,78 +0,0 @@
-require 'spec_helper'
-
-describe 'cassandra::repo' do
-
- let(:params) do
-
- { :repo_name => 'rspec_repo',
- :baseurl => 'http://cassandra.repo.com/',
- :gpgkey => 'http://cassandra.repo.com/repo_key',
- :repos => 'main',
- :release => 'stable',
- :pin => 42,
- :gpgcheck => 0,
- :enabled => 1,
- }
- end
-
- context 'on Debian' do
-
- let(:facts) {{ :osfamily => 'Debian' }}
-
- it 'does contain class cassandra::repo::debian' do
- should contain_class('cassandra::repo::debian').with({
- :repo_name => 'rspec_repo',
- :location => 'http://cassandra.repo.com/',
- :repos => 'main',
- :release => 'stable',
- :key_source => 'http://cassandra.repo.com/repo_key',
- :pin => 42,
- })
- end
-
- it 'does contain apt::source' do
- should contain_apt__source('rspec_repo').with({
- :location => 'http://cassandra.repo.com/',
- :repos => 'main',
- :release => 'stable',
- :key_source => 'http://cassandra.repo.com/repo_key',
- :pin => 42,
- })
- end
- end
-
- context 'on RedHat' do
-
- let(:facts) {{ :osfamily => 'RedHat' }}
-
- it 'does contain class cassandra::repo::redhat' do
- should contain_class('cassandra::repo::redhat').with({
- :repo_name => 'rspec_repo',
- :baseurl => 'http://cassandra.repo.com/',
- :gpgkey => 'http://cassandra.repo.com/repo_key',
- :gpgcheck => 0,
- :enabled => 1,
- })
- end
-
- it 'does contain yumrepo' do
- should contain_yumrepo('rspec_repo').with({
- :baseurl => 'http://cassandra.repo.com/',
- :gpgkey => 'http://cassandra.repo.com/repo_key',
- :gpgcheck => 0,
- :enabled => 1,
- })
- end
- end
-
- context 'on some other OS' do
-
- let(:facts) {{ :osfamily => 'Gentoo' }}
-
- it 'fails' do
- expect {
- should contain_class('cassandra::repo::gentoo')
- }.to raise_error(Puppet::Error)
- end
- end
-end
204 modules/gini-cassandra-0.3.0/spec/classes/cassandra_spec.rb
@@ -1,204 +0,0 @@
-require 'spec_helper'
-
-describe 'cassandra' do
-
- let(:facts) do
- { :osfamily => 'Debian',
- :processorcount => 4,
- :lsbdistcodename => 'squeeze',
- :ipaddress => '1.2.3.4'
- }
- end
-
- let(:params) {{ :seeds => ['1.2.3.4'] }}
-
- context 'verify module' do
-
- it 'does contain anchor cassandra::begin ' do
- should contain_anchor('cassandra::begin')
- end
-
- it 'does contain class cassandra::repo' do
- ## Default params from cassandra::params
- should contain_class('cassandra::repo').with({
- :repo_name => 'datastax',
- :baseurl => 'http://debian.datastax.com/community',
- :gpgkey => 'http://debian.datastax.com/debian/repo_key',
- :repos => 'main',
- :release => 'stable',
- :pin => '200',
- :gpgcheck => 0,
- :enabled => 1,
- })
- end
-
- ## Install related resources. No dedicated spec file as it references variables
- ## from cassandra
- it 'does contain class cassandra::install' do
- should contain_class('cassandra::install')
- end
-
- it 'does contain package dsc' do
- should contain_package('dsc').with({
- :ensure => 'installed',
- :name => 'dsc12',
- })
- end
-
- it 'does contain package python-cql' do
- should contain_package('python-cql').with({
- :ensure => 'installed',
- })
- end
-
- it 'does contain directory /etc/cassandra' do
- should contain_file('CASSANDRA-2356 /etc/cassandra').with({
- :ensure => 'directory',
- :path => '/etc/cassandra',
- :owner => 'root',
- :group => 'root',
- :mode => '0755',
- })
- end
-
- it 'does contain file /etc/cassandra/CASSANDRA-2356' do
- should contain_file('CASSANDRA-2356 marker file').with({
- :ensure => 'file',
- :path => '/etc/cassandra/CASSANDRA-2356',
- :owner => 'root',
- :group => 'root',
- :mode => '0644',
- :require => ['File[CASSANDRA-2356 /etc/cassandra]', 'Exec[CASSANDRA-2356 Workaround]'],
- })
- end
-
- it 'does contain exec CASSANDRA-2356 Workaround' do
- should contain_exec('CASSANDRA-2356 Workaround').with({
- :command => '/etc/init.d/cassandra stop && rm -rf /var/lib/cassandra/*',
- :creates => '/etc/cassandra/CASSANDRA-2356',
- :require => ['Package[dsc]', 'File[CASSANDRA-2356 /etc/cassandra]'],
- })
- end
- ## /Finished install resources
-
- it 'does contain class cassandra::config' do
- should contain_class('cassandra::config').with({
- :max_heap_size => '',
- :heap_newsize => '',
- :jmx_port => 7199,
- :additional_jvm_opts => [],
- :cluster_name => 'Cassandra',
- :listen_address => '1.2.3.4',
- :rpc_address => '0.0.0.0',
- :rpc_port => 9160,
- :rpc_server_type => 'hsha',
- :storage_port => 7000,
- :partitioner => 'org.apache.cassandra.dht.Murmur3Partitioner',
- :data_file_directories => ['/var/lib/cassandra/data'],
- :commitlog_directory => '/var/lib/cassandra/commitlog',
- :saved_caches_directory => '/var/lib/cassandra/saved_caches',
- :initial_token => '',
- :seeds => ['1.2.3.4'],
- :concurrent_reads => 32,
- :concurrent_writes => 32,
- :incremental_backups => 'false',
- :snapshot_before_compaction => 'false',
- :auto_snapshot => 'true',
- :multithreaded_compaction => 'false',
- :endpoint_snitch => 'SimpleSnitch',
- :internode_compression => 'all',
- :disk_failure_policy => 'stop',
- :start_native_transport => 'false',
- :start_rpc => 'true',
- :native_transport_port => 9042,
- :num_tokens => 256,
- })
- end
-
- ## Service related resources. No dedicated spec file as it references variables
- ## from cassandra
- it 'does contain class cassandra::service' do
- should contain_class('cassandra::service')
- end
-
- it 'does contain service cassandra' do
- should contain_service('cassandra').with({
- :ensure => 'running',
- :enable => 'true',
- :hasstatus => 'true',
- :hasrestart => 'true',
- :subscribe => 'Class[Cassandra::Config]',
- :require => 'Class[Cassandra::Config]',
- })
- end
- ## /Finished service resources
-
- it 'does contain anchor cassandra::end ' do
- should contain_anchor('cassandra::end')
- end
- end
-
- context 'verify parameter' do
-
- ## Array of arrays: {parameter => [[valid], [invalid]]}
- test_pattern = {
- :include_repo => [[true, false], ['bozo']],
- :commitlog_directory => [['/tmp/test'], ['test/']],
- :saved_caches_directory => [['/tmp/test'], ['test/']],
- :cluster_name => [['bozo'], [true]],
- :partitioner => [['bozo'], [true]],
- :initial_token => [['bozo'], [true]],
- :endpoint_snitch => [['bozo'], [true]],
- :rpc_server_type => [['hsha', 'sync', 'async'], [9, 'bozo', true]],
- :incremental_backups => [['true', 'false'], [9, 'bozo']],
- :snapshot_before_compaction => [['true', 'false'], [9, 'bozo']],
- :auto_snapshot => [['true', 'false'], [9, 'bozo']],
- :multithreaded_compaction => [['true', 'false'], [9, 'bozo']],
- :concurrent_reads => [[1, 256, 42], ['bozo', 0.5, true]],
- :concurrent_writes => [[1, 256, 42], ['bozo', 0.5, true]],
- :additional_jvm_opts => [[['a', 'b']], ['bozo']],
- :seeds => [[['a', 'b']], ['bozo', []]],
- :data_file_directories => [[['a', 'b']], ['bozo', '']],
- :jmx_port => [[1, 65535], [420000, true]],
- :listen_address => [['1.2.3.4'], ['4.5.6']],
- :rpc_address => [['1.2.3.4'], ['4.5.6']],
- :rpc_port => [[1, 65535], [420000, true]],
- :storage_port => [[1, 65535], [420000, true]],
- :internode_compression => [['all', 'dc' ,'none'], [9, 'bozo', true]],
- :disk_failure_policy => [['stop', 'best_effort', 'ignore'], [9, 'bozo', true]],
- :start_native_transport => [['true', 'false'], [9, 'bozo']],
- :start_rpc => [['true', 'false'], [9, 'bozo']],
- :native_transport_port => [[1, 65535], [420000, true]],
- :num_tokens => [[1, 100000], [-1, true, 'bozo']],
- }
-
- test_pattern.each do |param, pattern|
-
- describe "#{param} " do
-
- pattern[0].each do |p|
-
- let(:params) {{ :seeds => ['1.2.3.4'], param => p }}
-
- it "succeeds with #{p}" do
- should contain_class('cassandra::install')
- end
- end
- end
-
- describe "#{param}" do
-
- pattern[1].each do |p|
-
- let(:params) {{ :seeds => ['1.2.3.4'], param => p }}
-
- it "fails with #{p}" do
- expect {
- should contain_class('cassandra::install')
- }.to raise_error(Puppet::Error)
- end
- end
- end
- end
- end
-end
1 modules/gini-cassandra-0.3.0/spec/spec_helper.rb
@@ -1 +0,0 @@
-require 'puppetlabs_spec_helper/module_spec_helper'
254 modules/gini-cassandra-0.3.0/templates/cassandra-env.sh.erb
@@ -1,254 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-calculate_heap_sizes()
-{
- case "`uname`" in
- Linux)
- system_memory_in_mb=`free -m | awk '/Mem:/ {print $2}'`
- system_cpu_cores=`egrep -c 'processor([[:space:]]+):.*' /proc/cpuinfo`
- ;;
- FreeBSD)
- system_memory_in_bytes=`sysctl hw.physmem | awk '{print $2}'`
- system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
- system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
- ;;
- SunOS)
- system_memory_in_mb=`prtconf | awk '/Memory size:/ {print $3}'`
- system_cpu_cores=`psrinfo | wc -l`
- ;;
- Darwin)
- system_memory_in_bytes=`sysctl hw.memsize | awk '{print $2}'`
- system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
- system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
- ;;
- *)
- # assume reasonable defaults for e.g. a modern desktop or
- # cheap server
- system_memory_in_mb="2048"
- system_cpu_cores="2"
- ;;
- esac
-
- # some systems like the raspberry pi don't report cores, use at least 1
- if [ "$system_cpu_cores" -lt "1" ]
- then
- system_cpu_cores="1"
- fi
-
- # set max heap size based on the following
- # max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
- # calculate 1/2 ram and cap to 1024MB
- # calculate 1/4 ram and cap to 8192MB
- # pick the max
- half_system_memory_in_mb=`expr $system_memory_in_mb / 2`
- quarter_system_memory_in_mb=`expr $half_system_memory_in_mb / 2`
- if [ "$half_system_memory_in_mb" -gt "1024" ]
- then
- half_system_memory_in_mb="1024"
- fi
- if [ "$quarter_system_memory_in_mb" -gt "8192" ]
- then
- quarter_system_memory_in_mb="8192"
- fi
- if [ "$half_system_memory_in_mb" -gt "$quarter_system_memory_in_mb" ]
- then
- max_heap_size_in_mb="$half_system_memory_in_mb"
- else
- max_heap_size_in_mb="$quarter_system_memory_in_mb"
- fi
- MAX_HEAP_SIZE="${max_heap_size_in_mb}M"
-
- # Young gen: min(max_sensible_per_modern_cpu_core * num_cores, 1/4 * heap size)
- max_sensible_yg_per_core_in_mb="100"
- max_sensible_yg_in_mb=`expr $max_sensible_yg_per_core_in_mb "*" $system_cpu_cores`
-
- desired_yg_in_mb=`expr $max_heap_size_in_mb / 4`
-
- if [ "$desired_yg_in_mb" -gt "$max_sensible_yg_in_mb" ]
- then
- HEAP_NEWSIZE="${max_sensible_yg_in_mb}M"
- else
- HEAP_NEWSIZE="${desired_yg_in_mb}M"
- fi
-}
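The sizing rule encoded in `calculate_heap_sizes()` above is max(min(½·RAM, 1024 MB), min(¼·RAM, 8192 MB)) for the max heap, and min(100 MB per core, ¼ of the max heap) for the young generation. A small Python restatement of the same integer arithmetic (illustration only, not part of the template):

```python
def max_heap_size_mb(system_memory_in_mb):
    """Mirror calculate_heap_sizes(): max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))."""
    half = min(system_memory_in_mb // 2, 1024)
    quarter = min(system_memory_in_mb // 4, 8192)
    return max(half, quarter)

def heap_newsize_mb(max_heap_mb, cpu_cores):
    """Young gen: min(100MB per core, 1/4 of the max heap)."""
    return min(100 * cpu_cores, max_heap_mb // 4)

print(max_heap_size_mb(2048))   # small box: half of RAM, capped at 1 GB
print(max_heap_size_mb(65536))  # big box: quarter of RAM, capped at 8 GB
print(heap_newsize_mb(8192, 8))
```

Note the shape of the rule: on small machines the ½-RAM branch dominates (capped at 1 GB), while on large machines the ¼-RAM branch takes over until it hits the 8 GB ceiling.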
-
-# Determine the sort of JVM we'll be running on.
-
-java_ver_output=`"${JAVA:-java}" -version 2>&1`
-
-jvmver=`echo "$java_ver_output" | awk -F'"' 'NR==1 {print $2}'`
-JVM_VERSION=${jvmver%_*}
-JVM_PATCH_VERSION=${jvmver#*_}
-
-jvm=`echo "$java_ver_output" | awk 'NR==2 {print $1}'`
-case "$jvm" in
- OpenJDK)
- JVM_VENDOR=OpenJDK
- # this will be "64-Bit" or "32-Bit"
- JVM_ARCH=`echo "$java_ver_output" | awk 'NR==3 {print $2}'`
- ;;
- "Java(TM)")
- JVM_VENDOR=Oracle
- # this will be "64-Bit" or "32-Bit"
- JVM_ARCH=`echo "$java_ver_output" | awk 'NR==3 {print $3}'`
- ;;
- *)
- # Help fill in other JVM values
- JVM_VENDOR=other
- JVM_ARCH=unknown
- ;;
-esac
-
-
-# Override these to set the amount of memory to allocate to the JVM at
-# start-up. For production use you may wish to adjust this for your
-# environment. MAX_HEAP_SIZE is the total amount of memory dedicated
-# to the Java heap; HEAP_NEWSIZE refers to the size of the young
-# generation. Both MAX_HEAP_SIZE and HEAP_NEWSIZE should be either set
-# or not (if you set one, set the other).
-#
-# The main trade-off for the young generation is that the larger it
-# is, the longer GC pause times will be. The shorter it is, the more
-# expensive GC will be (usually).
-#
-# The example HEAP_NEWSIZE assumes a modern 8-core+ machine for decent pause
-# times. If in doubt, and if you do not particularly want to tweak, go with
-# 100 MB per physical CPU core.
-
-<% unless @max_heap_size.empty? -%>
-MAX_HEAP_SIZE="<%= @max_heap_size %>"
-<% end -%>
-<% unless @heap_newsize.empty? -%>
-HEAP_NEWSIZE="<%= @heap_newsize %>"
-<% end -%>
-
-if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" ]; then
- calculate_heap_sizes
-else
- if [ "x$MAX_HEAP_SIZE" = "x" ] || [ "x$HEAP_NEWSIZE" = "x" ]; then
- echo "please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs (see cassandra-env.sh)"
- exit 1
- fi
-fi
-
-# Specifies the default port over which Cassandra will be available for
-# JMX connections.
-JMX_PORT="<%= @jmx_port %>"
-
-
-# Here we create the arguments that will get passed to the jvm when
-# starting cassandra.
-
-# enable assertions. disabling this in production will give a modest
-# performance benefit (around 5%).
-JVM_OPTS="$JVM_OPTS -ea"
-
-# add the jamm javaagent
-if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
- || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
-then
- JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
-fi
-
-# enable thread priorities, primarily so we can give periodic tasks
-# a lower priority to avoid interfering with client workload
-JVM_OPTS="$JVM_OPTS -XX:+UseThreadPriorities"
-# allows lowering thread priority without being root. see
-# http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workaround.html
-JVM_OPTS="$JVM_OPTS -XX:ThreadPriorityPolicy=42"
-
-# min and max heap sizes should be set to the same value to avoid
-# stop-the-world GC pauses during resize, and so that we can lock the
-# heap in memory on startup to prevent any of it from being swapped
-# out.
-JVM_OPTS="$JVM_OPTS -Xms${MAX_HEAP_SIZE}"
-JVM_OPTS="$JVM_OPTS -Xmx${MAX_HEAP_SIZE}"
-JVM_OPTS="$JVM_OPTS -Xmn${HEAP_NEWSIZE}"
-JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
-
-# set jvm HeapDumpPath with CASSANDRA_HEAPDUMP_DIR
-if [ "x$CASSANDRA_HEAPDUMP_DIR" != "x" ]; then
- JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=$CASSANDRA_HEAPDUMP_DIR/cassandra-`date +%s`-pid$$.hprof"
-fi
-
-
-startswith() { [ "${1#$2}" != "$1" ]; }
-
-if [ "`uname`" = "Linux" ] ; then
- # reduce the per-thread stack size to minimize the impact of Thrift
- # thread-per-client. (Best practice is for client connections to
- # be pooled anyway.) Only do so on Linux where it is known to be
- # supported.
- # u34 and greater need 180k
- JVM_OPTS="$JVM_OPTS -Xss<%= @thread_stack_size %>k"
-fi
-
-# GC tuning options
-JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
-JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
-JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
-JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
-JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
-JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
-JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
-JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
-# note: bash evals '1.7.x' as > '1.7' so this is really a >= 1.7 jvm check
-if [ "$JVM_VERSION" \> "1.7" ] ; then
- JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
-fi
-
-# GC logging options -- uncomment to enable
-# JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
-# JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
-# JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
-# JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
-# JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
-# JVM_OPTS="$JVM_OPTS -XX:+PrintPromotionFailure"
-# JVM_OPTS="$JVM_OPTS -XX:PrintFLSStatistics=1"
-# JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc-`date +%s`.log"
-# If you are using JDK 6u34 7u2 or later you can enable GC log rotation
-# don't stick the date in the log name if rotation is on.
-# JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
-# JVM_OPTS="$JVM_OPTS -XX:+UseGCLogFileRotation"
-# JVM_OPTS="$JVM_OPTS -XX:NumberOfGCLogFiles=10"
-# JVM_OPTS="$JVM_OPTS -XX:GCLogFileSize=10M"
-
-# uncomment to have Cassandra JVM listen for remote debuggers/profilers on port 1414
-# JVM_OPTS="$JVM_OPTS -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1414"
-
-# Prefer binding to IPv4 network interfaces (when net.ipv6.bindv6only=1). See
-# http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6342561 (short version:
-# comment out this entry to enable IPv6 support).
-JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
-
-# jmx: metrics and administration interface
-#
-# add this if you're having trouble connecting:
-# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public name>"
-#
-# see
-# https://blogs.oracle.com/jmxetc/entry/troubleshooting_connection_problems_in_jconsole
-# for more on configuring JMX through firewalls, etc. (Short version:
-# get it working with no firewall first.)
-JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
-JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
-JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
-JVM_OPTS="$JVM_OPTS $JVM_EXTRA_OPTS"
-
-<% @additional_jvm_opts.each do |jvm_opt| %>
-JVM_OPTS="$JVM_OPTS <%= jvm_opt %>"
-<% end %>
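The JVM version checks in the template above (e.g. `[ "$JVM_VERSION" \> "1.7" ]`) rely on the shell's lexicographic string comparison, as the template's own comment notes. A quick Python illustration of why any "1.7.x" string sorts after "1.7" (and of the usual caveat with this trick):

```python
# Shell `[ "$a" \> "$b" ]` compares strings lexicographically; Python's `>`
# on str does the same, so it illustrates the template's version checks.
def newer_than(version, floor):
    """Lexicographic comparison, as the shell test operator performs it."""
    return version > floor

print(newer_than("1.7.0_45", "1.7"))  # a "1.7.x" string sorts after "1.7"
print(newer_than("1.6.0_31", "1.7"))  # an older version sorts before it
print(newer_than("1.10", "1.9"))      # caveat: "1.10" sorts BEFORE "1.9"
```

The last line shows why this only works while version components stay single-digit; for these JDK version strings that assumption held.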
690 modules/gini-cassandra-0.3.0/templates/cassandra.yaml.erb
@@ -1,690 +0,0 @@
-# Cassandra storage config YAML
-
-# NOTE:
-# See http://wiki.apache.org/cassandra/StorageConfiguration for
-# full explanations of configuration directives
-# /NOTE
-
-# The name of the cluster. This is mainly used to prevent machines in
-# one logical cluster from joining another.
-cluster_name: '<%= @cluster_name %>'
-
-# This defines the number of tokens randomly assigned to this node on the ring
-# The more tokens, relative to other nodes, the larger the proportion of data
-# that this node will store. You probably want all nodes to have the same number
-# of tokens assuming they have equal hardware capability.
-#
-# If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility,
-# and will use the initial_token as described below.
-#
-# Specifying initial_token will override this setting.
-#
-# If you already have a cluster with 1 token per node, and wish to migrate to
-# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
-num_tokens: <%= @num_tokens %>
-
-# If you haven't specified num_tokens, or have set it to the default of 1 then
-# you should always specify InitialToken when setting up a production
-# cluster for the first time, and often when adding capacity later.
-# The principle is that each node should be given an equal slice of
-# the token ring; see http://wiki.apache.org/cassandra/Operations
-# for more details.
-#
-# If blank, Cassandra will request a token bisecting the range of
-# the heaviest-loaded existing node. If there is no load information
-# available, such as is the case with a new cluster, it will pick
-# a random token, which will lead to hot spots.
-initial_token: <%= @initial_token %>
-
-# See http://wiki.apache.org/cassandra/HintedHandoff
-hinted_handoff_enabled: true
-# this defines the maximum amount of time a dead host will have hints
-# generated. After it has been dead this long, new hints for it will not be
-# created until it has been seen alive and gone down again.
-max_hint_window_in_ms: 10800000 # 3 hours
-# throttle in KB's per second, per delivery thread
-hinted_handoff_throttle_in_kb: 1024
-# Number of threads with which to deliver hints;
-# Consider increasing this number when you have multi-dc deployments, since
-# cross-dc handoff tends to be slower
-max_hints_delivery_threads: 2
-
-# The following setting populates the page cache on memtable flush and compaction
-# WARNING: Enable this setting only when the whole node's data fits in memory.
-# Defaults to: false
-# populate_io_cache_on_flush: false
-
-# Authentication backend, implementing IAuthenticator; used to identify users
-# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator,
-# PasswordAuthenticator}.
-#
-# - AllowAllAuthenticator performs no checks - set it to disable authentication.
-# - PasswordAuthenticator relies on username/password pairs to authenticate
-# users. It keeps usernames and hashed passwords in system_auth.credentials table.
-# Please increase system_auth keyspace replication factor if you use this authenticator.
-authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
-
-# Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
-# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer,
-# CassandraAuthorizer}.
-#
-# - AllowAllAuthorizer allows any action to any user - set it to disable authorization.
-# - CassandraAuthorizer stores permissions in system_auth.permissions table. Please
-# increase system_auth keyspace replication factor if you use this authorizer.
-authorizer: org.apache.cassandra.auth.AllowAllAuthorizer
-
-# Validity period for permissions cache (fetching permissions can be an
-# expensive operation depending on the authorizer, CassandraAuthorizer is
-# one example). Defaults to 2000, set to 0 to disable.
-# Will be disabled automatically for AllowAllAuthorizer.
-permissions_validity_in_ms: 2000
-
-# The partitioner is responsible for distributing rows (by key) across
-# nodes in the cluster. Any IPartitioner may be used, including your
-# own as long as it is on the classpath. Out of the box, Cassandra
-# provides org.apache.cassandra.dht.{Murmur3Partitioner, RandomPartitioner
-# ByteOrderedPartitioner, OrderPreservingPartitioner (deprecated)}.
-#
-# - RandomPartitioner distributes rows across the cluster evenly by md5.
-# This is the default prior to 1.2 and is retained for compatibility.
-# - Murmur3Partitioner is similar to RandomPartitioner but uses the Murmur3_128
-#   hash function instead of md5. When in doubt, this is the best option.
-# - ByteOrderedPartitioner orders rows lexically by key bytes. BOP allows
-# scanning rows in key order, but the ordering can generate hot spots
-# for sequential insertion workloads.
-# - OrderPreservingPartitioner is an obsolete form of BOP that stores
-#   keys in a less-efficient format and only works with keys that are
-#   UTF8-encoded Strings.
-# - CollatingOPP collates according to EN,US rules rather than lexical byte
-#   ordering. Use this as an example if you need custom collation.
-#
-# See http://wiki.apache.org/cassandra/Operations for more on
-# partitioners and token selection.
-partitioner: <%= @partitioner %>
-
-# directories where Cassandra should store data on disk.
-data_file_directories:<% @data_file_directories.each do |data_file_directory| %>
- - <%= data_file_directory %>
-<% end %>
-
-# commit log
-commitlog_directory: <%= @commitlog_directory %>
-
-# policy for data disk failures:
-# stop: shut down gossip and Thrift, leaving the node effectively dead, but
-# still inspectable via JMX.
-# best_effort: stop using the failed disk and respond to requests based on
-# remaining available sstables. This means you WILL see obsolete
-# data at CL.ONE!
-# ignore: ignore fatal errors and let requests fail, as in pre-1.2 Cassandra
-disk_failure_policy: <%= @disk_failure_policy %>
-
-# Maximum size of the key cache in memory.
-#
-# Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the
-# minimum, sometimes more. The key cache is fairly tiny for the amount of
-# time it saves, so it's worthwhile to use it at large numbers.
-# The row cache saves even more time, but must store the whole values of
-# its rows, so it is extremely space-intensive. It's best to only use the
-# row cache if you have hot rows or static rows.
-#
-# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
-#
-# Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache.
-key_cache_size_in_mb:
-
-# Duration in seconds after which Cassandra should
-# save the key cache. Caches are saved to saved_caches_directory as
-# specified in this configuration file.
-#
-# Saved caches greatly improve cold-start speeds; saving the key cache is
-# relatively cheap in terms of I/O, while row cache saving is much more
-# expensive and has limited use.
-#
-# Default is 14400 or 4 hours.
-key_cache_save_period: 14400
-
-# Number of keys from the key cache to save
-# Disabled by default, meaning all keys are going to be saved
-# key_cache_keys_to_save: 100
-
-# Maximum size of the row cache in memory.
-# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
-#
-# Default value is 0, to disable row caching.
-row_cache_size_in_mb: 0
-
-# Duration in seconds after which Cassandra should
-# save the row cache. Caches are saved to saved_caches_directory as specified
-# in this configuration file.
-#
-# Saved caches greatly improve cold-start speeds; saving the key cache is
-# relatively cheap in terms of I/O, while row cache saving is much more
-# expensive and has limited use.
-#
-# Default is 0 to disable saving the row cache.
-row_cache_save_period: 0
-
-# Number of keys from the row cache to save
-# Disabled by default, meaning all keys are going to be saved
-# row_cache_keys_to_save: 100
-
-# The provider for the row cache to use.
-#
-# Supported values are: ConcurrentLinkedHashCacheProvider, SerializingCacheProvider
-#
-# SerializingCacheProvider serialises the contents of the row and stores
-# it in native memory, i.e., off the JVM Heap. Serialized rows take
-# significantly less memory than "live" rows in the JVM, so you can cache
-# more rows in a given memory footprint. And storing the cache off-heap
-# means you can use smaller heap sizes, reducing the impact of GC pauses.
-#
-# It is also valid to specify the fully-qualified class name to a class
-# that implements org.apache.cassandra.cache.IRowCacheProvider.
-#
-# Defaults to SerializingCacheProvider
-row_cache_provider: SerializingCacheProvider
-
-# saved caches
-saved_caches_directory: <%= @saved_caches_directory %>
-
-# commitlog_sync may be either "periodic" or "batch."
-# When in batch mode, Cassandra won't ack writes until the commit log
-# has been fsynced to disk. It will wait up to
-# commitlog_sync_batch_window_in_ms milliseconds for other writes, before
-# performing the sync.
-#
-# commitlog_sync: batch
-# commitlog_sync_batch_window_in_ms: 50
-#
-# the other option is "periodic" where writes may be acked immediately
-# and the CommitLog is simply synced every commitlog_sync_period_in_ms
-# milliseconds.
-commitlog_sync: periodic
-commitlog_sync_period_in_ms: 10000
-
-# The size of the individual commitlog file segments. A commitlog
-# segment may be archived, deleted, or recycled once all the data
-# in it (potentially from each columnfamily in the system) has been
-# flushed to sstables.
-#
-# The default size is 32, which is almost always fine, but if you are
-# archiving commitlog segments (see commitlog_archiving.properties),
-# then you probably want a finer granularity of archiving; 8 or 16 MB
-# is reasonable.
-commitlog_segment_size_in_mb: 32
-
-# any class that implements the SeedProvider interface and has a
-# constructor that takes a Map<String, String> of parameters will do.
-seed_provider:
- # Addresses of hosts that are deemed contact points.
- # Cassandra nodes use this list of hosts to find each other and learn
- # the topology of the ring. You must change this if you are running
- # multiple nodes!
- - class_name: org.apache.cassandra.locator.SimpleSeedProvider
- parameters:
- # seeds is actually a comma-delimited list of addresses.
- # Ex: "<ip1>,<ip2>,<ip3>"
- - seeds: <%= @seeds.join(',') %>
-
-# emergency pressure valve: each time heap usage after a full (CMS)
-# garbage collection is above this fraction of the max, Cassandra will
-# flush the largest memtables.
-#
-# Set to 1.0 to disable. Setting this lower than
-# CMSInitiatingOccupancyFraction is not likely to be useful.
-#
-# RELYING ON THIS AS YOUR PRIMARY TUNING MECHANISM WILL WORK POORLY:
-# it is most effective under light to moderate load, or read-heavy
-# workloads; under truly massive write load, it will often be too
-# little, too late.
-flush_largest_memtables_at: 0.75
-
-# emergency pressure valve #2: the first time heap usage after a full
-# (CMS) garbage collection is above this fraction of the max,
-# Cassandra will reduce cache maximum _capacity_ to the given fraction
-# of the current _size_. Should usually be set substantially above
-# flush_largest_memtables_at, since that will have less long-term
-# impact on the system.
-#
-# Set to 1.0 to disable. Setting this lower than
-# CMSInitiatingOccupancyFraction is not likely to be useful.
-reduce_cache_sizes_at: 0.85
-reduce_cache_capacity_to: 0.6
-
-# For workloads with more data than can fit in memory, Cassandra's
-# bottleneck will be reads that need to fetch data from
-# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
-# order to allow the operations to enqueue low enough in the stack
-# that the OS and drives can reorder them.
-#
-# On the other hand, since writes are almost never IO bound, the ideal
-# number of "concurrent_writes" is dependent on the number of cores in
-# your system; (8 * number_of_cores) is a good rule of thumb.
-concurrent_reads: <%= @concurrent_reads %>
-concurrent_writes: <%= @concurrent_writes %>
-
-# Total memory to use for memtables. Cassandra will flush the largest
-# memtable when this much memory is used.
-# If omitted, Cassandra will set it to 1/3 of the heap.
-# memtable_total_space_in_mb: 2048
-
-# Total space to use for commitlogs. Since commitlog segments are
-# mmapped, and hence use up address space, the default size is 32
-# on 32-bit JVMs, and 1024 on 64-bit JVMs.
-#
-# If space gets above this value (rounded up to the nearest segment
-# multiple), Cassandra will flush every dirty CF in the oldest
-# segment and remove it. So a small total commitlog space will tend
-# to cause more flush activity on less-active columnfamilies.
-# commitlog_total_space_in_mb: 4096
-
-# This sets the number of memtable flush writer threads. These will
-# be blocked by disk I/O, and each one will hold a memtable in memory
-# while blocked. If you have a large heap and many data directories,
-# you can increase this value for better flush performance.
-# By default this will be set to the number of data directories defined.
-#memtable_flush_writers: 1
-
-# the number of full memtables to allow pending flush, that is,
-# waiting for a writer thread. At a minimum, this should be set to
-# the maximum number of secondary indexes created on a single CF.
-memtable_flush_queue_size: 4
-
-# Whether to, when doing sequential writing, fsync() at intervals in
-# order to force the operating system to flush the dirty
-# buffers. Enable this to avoid sudden dirty buffer flushing from
-# impacting read latencies. Almost always a good idea on SSDs; not
-# necessarily on platters.
-trickle_fsync: false
-trickle_fsync_interval_in_kb: 10240
-
-# TCP port, for commands and data
-storage_port: <%= @storage_port %>
-
-# SSL port, for encrypted communication. Unused unless enabled in
-# encryption_options
-ssl_storage_port: 7001
-
-# Address to bind to and tell other Cassandra nodes to connect to. You
-# _must_ change this if you want multiple nodes to be able to
-# communicate!
-#
-# Leaving it blank leaves it up to InetAddress.getLocalHost(). This
-# will always do the Right Thing *if* the node is properly configured
-# (hostname, name resolution, etc), and the Right Thing is to use the
-# address associated with the hostname (it might not be).
-#
-# Setting this to 0.0.0.0 is always wrong.
-listen_address: <%= @listen_address %>
-
-# Address to broadcast to other Cassandra nodes
-# Leaving this blank will set it to the same value as listen_address
-# broadcast_address: 1.2.3.4
-
-# Internode authentication backend, implementing IInternodeAuthenticator;
-# used to allow/disallow connections from peer nodes.
-# internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator
-
-# Whether to start the native transport server.
-# Currently, only the thrift server is started by default because the native
-# transport is considered beta.
-# Please note that the address on which the native transport is bound is the
-# same as the rpc_address. The port however is different and specified below.
-start_native_transport: <%= @start_native_transport %>
-# port for the CQL native transport to listen for clients on
-native_transport_port: <%= @native_transport_port %>
-# The minimum and maximum threads for handling requests when the native
-# transport is used. The meaning of these is similar to that of
-# rpc_min_threads and rpc_max_threads, though the defaults differ slightly
-# and are given below:
-# native_transport_min_threads: 16
-# native_transport_max_threads: 128
-
-# Whether to start the thrift rpc server.
-start_rpc: <%= @start_rpc %>
-
-# The address to bind the Thrift RPC service to -- clients connect
-# here. Unlike ListenAddress above, you *can* specify 0.0.0.0 here if
-# you want Thrift to listen on all interfaces.
-#
-# Leaving this blank has the same effect it does for ListenAddress,
-# (i.e. it will be based on the configured hostname of the node).
-rpc_address: <%= @rpc_address %>
-# port for Thrift to listen for clients on
-rpc_port: <%= @rpc_port %>
-
-# enable or disable keepalive on rpc connections
-rpc_keepalive: true
-
-# Cassandra provides three out-of-the-box options for the RPC Server:
-#
-# sync -> One thread per thrift connection. For a very large number of clients, memory
-# will be your limiting factor. On a 64 bit JVM, 128KB is the minimum stack size
-# per thread, and that will correspond to your use of virtual memory (but physical memory
-# may be limited depending on use of stack space).
-#
-# hsha -> Stands for "half synchronous, half asynchronous." All thrift clients are handled
-# asynchronously using a small number of threads that does not vary with the number
-# of thrift clients (and thus scales well to many clients). The rpc requests are still
-# synchronous (one thread per active request).
-#
-# The default is sync because on Windows hsha is about 30% slower. On Linux,
-# sync/hsha performance is about the same, with hsha of course using less memory.
-#
-# Alternatively, you can provide your own RPC server by supplying the fully-qualified
-# class name of an o.a.c.t.TServerFactory that can create an instance of it.
-rpc_server_type: <%= @rpc_server_type %>
-
-# Uncomment rpc_min|max_thread to set request pool size limits.
-#
-# Regardless of your choice of RPC server (see above), the number of maximum requests in the
-# RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync
-# RPC server, it also dictates the number of clients that can be connected at all).
-#
-# The default is unlimited and thus provides no protection against clients overwhelming the server. You are
-# encouraged to set a maximum that makes sense for you in production, but do keep in mind that
-# rpc_max_threads represents the maximum number of client requests this server may execute concurrently.
-#
-# rpc_min_threads: 16
-# rpc_max_threads: 2048
-
-# uncomment to set socket buffer sizes on rpc connections
-# rpc_send_buff_size_in_bytes:
-# rpc_recv_buff_size_in_bytes:
-
-# Uncomment to set socket buffer size for internode communication
-# Note that when setting this, the buffer size is limited by net.core.wmem_max
-# and when it is not set, it is defined by net.ipv4.tcp_wmem
-# See:
-# /proc/sys/net/core/wmem_max
-# /proc/sys/net/core/rmem_max
-# /proc/sys/net/ipv4/tcp_wmem
-# /proc/sys/net/ipv4/tcp_rmem
-# and: man tcp
-# internode_send_buff_size_in_bytes:
-# internode_recv_buff_size_in_bytes:
-
-# Frame size for thrift (maximum field length).
-thrift_framed_transport_size_in_mb: 15
-
-# The max length of a thrift message, including all fields and
-# internal thrift overhead.
-thrift_max_message_length_in_mb: 16
-
-# Set to true to have Cassandra create a hard link to each sstable
-# flushed or streamed locally in a backups/ subdirectory of the
-# Keyspace data. Removing these links is the operator's
-# responsibility.
-incremental_backups: <%= @incremental_backups %>
-
-# Whether or not to take a snapshot before each compaction. Be
-# careful using this option, since Cassandra won't clean up the
-# snapshots for you. Mostly useful if you're paranoid when there
-# is a data format change.
-snapshot_before_compaction: <%= @snapshot_before_compaction %>
-
-# Whether or not a snapshot is taken of the data before keyspace truncation
-# or dropping of column families. The STRONGLY advised default of true
-# should be used to provide data safety. If you set this flag to false, you will
-# lose data on truncation or drop.
-auto_snapshot: <%= @auto_snapshot %>
-
-# Add column indexes to a row after its contents reach this size.
-# Increase if your column values are large, or if you have a very large
-# number of columns. The competing goals are: Cassandra has to
-# deserialize this much of the row to read a single column, so you want
-# it to be small (at least if you do many partial-row reads), but all
-# the index data is read for each access, so you don't want to generate
-# that wastefully either.
-column_index_size_in_kb: 64
-
-# Size limit for rows being compacted in memory. Larger rows will spill
-# over to disk and use a slower two-pass compaction process. A message
-# will be logged specifying the row key.
-in_memory_compaction_limit_in_mb: 64
-
-# Number of simultaneous compactions to allow, NOT including
-# validation "compactions" for anti-entropy repair. Simultaneous
-# compactions can help preserve read performance in a mixed read/write
-# workload, by mitigating the tendency of small sstables to accumulate
-# during a single long-running compaction. The default is usually
-# fine and if you experience problems with compaction running too
-# slowly or too fast, you should look at
-# compaction_throughput_mb_per_sec first.
-#
-# concurrent_compactors defaults to the number of cores.
-# Uncomment to make compaction mono-threaded, the pre-0.8 default.
-#concurrent_compactors: 1
-
-# Multi-threaded compaction. When enabled, each compaction will use
-# up to one thread per core, plus one thread per sstable being merged.
-# This is usually only useful for SSD-based hardware: otherwise,
-# your concern is usually to get compaction to do LESS i/o (see:
-# compaction_throughput_mb_per_sec), not more.
-multithreaded_compaction: <%= @multithreaded_compaction %>
-
-# Throttles compaction to the given total throughput across the entire
-# system. The faster you insert data, the faster you need to compact in
-# order to keep the sstable count down, but in general, setting this to
-# 16 to 32 times the rate you are inserting data is more than sufficient.
-# Setting this to 0 disables throttling. Note that this accounts for all types
-# of compaction, including validation compaction.
-compaction_throughput_mb_per_sec: 16
-
-# Track cached row keys during compaction, and re-cache their new
-# positions in the compacted sstable. Disable if you use really large
-# key caches.
-compaction_preheat_key_cache: true
-
-# Throttles all outbound streaming file transfers on this node to the
-# given total throughput in Mbps. This is necessary because Cassandra does
-# mostly sequential IO when streaming data during bootstrap or repair, which
-# can lead to saturating the network connection and degrading rpc performance.
-# When unset, the default is 200 Mbps or 25 MB/s.
-# stream_throughput_outbound_megabits_per_sec: 200
-
-# How long the coordinator should wait for read operations to complete
-read_request_timeout_in_ms: 10000
-# How long the coordinator should wait for seq or index scans to complete
-range_request_timeout_in_ms: 10000
-# How long the coordinator should wait for writes to complete
-write_request_timeout_in_ms: 10000
-# How long the coordinator should wait for truncates to complete
-# (This can be much longer, because unless auto_snapshot is disabled
-# we need to flush first so we can snapshot before removing the data.)
-truncate_request_timeout_in_ms: 60000
-# The default timeout for other, miscellaneous operations
-request_timeout_in_ms: 10000
-
-# Enable operation timeout information exchange between nodes to accurately
-# measure request timeouts. If disabled, Cassandra will assume the request
-# was forwarded to the replica instantly by the coordinator.
-#
-# Warning: before enabling this property make sure NTP is installed
-# and the clocks are synchronized between the nodes.
-cross_node_timeout: false
-
-# Enable socket timeout for streaming operation.
-# When a timeout occurs during streaming, streaming is retried from the start
-# of the current file. This *can* involve re-streaming a significant amount of
-# data, so you should avoid setting the value too low.
-# Default value is 0, which means streams never time out.
-# streaming_socket_timeout_in_ms: 0
-
-# phi value that must be reached for a host to be marked down.
-# most users should never need to adjust this.
-# phi_convict_threshold: 8
-
-# endpoint_snitch -- Set this to a class that implements
-# IEndpointSnitch. The snitch has two functions:
-# - it teaches Cassandra enough about your network topology to route
-# requests efficiently
-# - it allows Cassandra to spread replicas around your cluster to avoid
-# correlated failures. It does this by grouping machines into
-# "datacenters" and "racks." Cassandra will do its best not to have
-# more than one replica on the same "rack" (which may not actually
-# be a physical location)
-#
-# IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER,
-# YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS
-# ARE PLACED.
-#
-# Out of the box, Cassandra provides
-# - SimpleSnitch:
-# Treats Strategy order as proximity. This improves cache locality
-# when disabling read repair, which can further improve throughput.
-# Only appropriate for single-datacenter deployments.
-# - PropertyFileSnitch:
-# Proximity is determined by rack and data center, which are
-# explicitly configured in cassandra-topology.properties.
-# - GossipingPropertyFileSnitch
-# The rack and datacenter for the local node are defined in
-# cassandra-rackdc.properties and propagated to other nodes via gossip. If
-# cassandra-topology.properties exists, it is used as a fallback, allowing
-# migration from the PropertyFileSnitch.
-# - RackInferringSnitch:
-# Proximity is determined by rack and data center, which are
-# assumed to correspond to the 3rd and 2nd octet of each node's
-# IP address, respectively. Unless this happens to match your
-# deployment conventions (as it did at Facebook), this is best used
-# as an example of writing a custom Snitch class.
-# - Ec2Snitch: