Quick-Start-Guide: add ansible reference #379
@@ -152,12 +152,16 @@ To submit your change for review, run the rfc.sh script,

 $ ./rfc.sh

 The script will ask you to enter a bugzilla bug id. Every change
Member
The script will prompt for a valid bug number in the Red Hat Bugzilla
-submitted to GlusterFS needs a bugzilla entry to be accepted. If you do
-not already have a bug id, file a new bug at [Red Hat
+submitted to GlusterFS needs a reference ID to be accepted. If you do
+not already have a reference id, file a new bug at [Red Hat
 Bugzilla](https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS).
 If the patch is submitted for review, the rfc.sh script will return the
 gerrit url for the review request.
+
+Also note, all the feature development of GlusterFS is tracked using
+[github issues](https://github.com/gluster/glusterfs/issues). Github
+Issue number can also serve as a reference ID while submitting the patch.

 More details on the rfc.sh script are available at
 [Development Work Flow - rfc.sh](./Development-Workflow.md#rfc.sh).
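For orientation, the surrounding workflow (not shown in this hunk) is the usual git one; the branch name and commit message below are placeholders, and rfc.sh itself prompts for the reference ID before pushing the change to Gerrit, as described above.

    git checkout -b quick-start-ansible         # placeholder branch name
    git commit -s -a -m "doc: describe change"  # sign off your commit
    ./rfc.sh                                    # prompts for a reference ID, then pushes to Gerrit for review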
@@ -2,11 +2,11 @@ Installing GlusterFS - a Quick Start Guide
 -------

 #### Purpose of this document

 This document is intended to give you a step by step guide to setting up
 GlusterFS for the first time. For this tutorial, we will assume you are
 using Fedora 26 (or later) virtual machines.
 We also do not explain the steps in detail here as this guide is just to help
 you get it up and running as soon as possible.
Member
This document is intended to provide a step-by-step guide to setting up GlusterFS for the first time. For the purposes of this guide, it is required to use Fedora 26 (or, higher) virtual machine instances.
 After you deploy GlusterFS by following these steps,
 we recommend that you read the GlusterFS Admin Guide to learn how to
 administer GlusterFS and how to select a volume type that fits your

@@ -19,32 +19,43 @@ installing using different methods (in local virtual machines, EC2 and
 baremetal) and different distributions, then have a look at the Install
 guide.
+#### Using Ansible to deploy and manage GlusterFS
+
+If you are already an ansible user, and are more comfortable with setting
Member
Ansible
+up distributed systems with Ansible, we recommend you to skip all these and
+move over to [gluster-ansible](https://github.com/gluster/gluster-ansible) repository, which gives most of the details to get the systems running faster.
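If you do take the Ansible route, the overall shape is the standard role-plus-playbook workflow sketched below; the role and playbook names are assumptions for illustration only, so check the gluster-ansible README for the ones that actually exist.

    # names below are assumptions, not the repository's documented interface
    ansible-galaxy install gluster.infra
    ansible-playbook -i inventory deploy-gluster.yml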

 #### Automatically deploying GlusterFS with Puppet-Gluster+Vagrant

 If you'd like to deploy GlusterFS automatically using
Member
To deploy GlusterFS using scripted methods, please read this ...
 Puppet-Gluster+Vagrant, have a look at [this
 article](https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/).
-### Step 1 – Have at least two nodes
+### Step 1 – Have at least three nodes

-- Fedora 22 (or later) on two nodes named "server1" and "server2"
+- Fedora 26 (or later) on 3 nodes named "server1", "server2" and "server3"
 - A working network connection
Member
Is it required to add a line about functional DNS (and/or, name resolution)?

Member
Also, what about NTP?
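If those two suggestions are adopted, one minimal way to cover name resolution and time synchronisation on a small Fedora test setup would be static /etc/hosts entries plus chronyd; the addresses below are placeholders.

    # placeholder addresses; adjust to your own network
    echo "192.168.0.11 server1" >> /etc/hosts
    echo "192.168.0.12 server2" >> /etc/hosts
    echo "192.168.0.13 server3" >> /etc/hosts
    systemctl enable --now chronyd   # Fedora's default NTP client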

 - At least two virtual disks, one for the OS installation, and one to be
-used to serve GlusterFS storage (sdb). This will emulate a real-
-world deployment, where you would want to separate GlusterFS storage
-from the OS install.
-- Note: GlusterFS stores its dynamically generated configuration files
-at /var/lib/glusterd. If at any point in time GlusterFS is unable to
+used to serve GlusterFS storage (sdb), on each of these VMs. This will
+emulate a real-world deployment, where you would want to separate
+GlusterFS storage from the OS install.
+
+**Note**: GlusterFS stores its dynamically generated configuration files
+at `/var/lib/glusterd`. If at any point in time GlusterFS is unable to
 write to these files (for example, when the backing filesystem is full),
 it will at minimum cause erratic behavior for your system; or worse,
 take your system offline completely. It is advisable to create separate
Member
It is recommended to create separate partitions for directories such as /var/log to reduce the chances of this happening.
-partitions for directories such as /var/log to ensure this does not happen.
+partitions for directories such as `/var/log` to ensure this does not
+happen.
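As a quick check related to this note (not part of the guide itself), you can confirm how much space backs those directories on each node:

    df -h /var/lib/glusterd /var/log   # verify the backing filesystems have free space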

 ### Step 2 - Format and mount the bricks

-(on both nodes): Note: We are going to use the XFS filesystem for the backend bricks.
+Perform this step on all the nodes, "server{1,2,3}"

+**Note**: We are going to use the XFS filesystem for the backend bricks. But Gluster is designed to work on top of any filesystem, which supports extended attributes.

 These examples are going to assume the brick is going to reside on /dev/sdb1.
Member
The following examples assume that the brick will be residing on /dev/sdb1

 mkfs.xfs -i size=512 /dev/sdb1
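The mount steps themselves sit outside this hunk; for completeness, the rest of the sequence on each node looks roughly like this, assuming the /data/brick1 mount point used later in the guide.

    mkdir -p /data/brick1
    echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
    mount -a && mount   # sdb1 should now appear mounted at /data/brick1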
@@ -57,7 +68,7 @@ You should now see sdb1 mounted at /data/brick1

 ### Step 3 - Installing GlusterFS

-(on both nodes) Install the software
+Install the software

 yum install glusterfs-server
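The hunk ends at the install command; the management daemon still needs to be enabled and started on every node before peers can be probed, for example:

    systemctl enable glusterd
    systemctl start glusterd
    systemctl status glusterd   # confirm the daemon is running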
@@ -91,6 +102,7 @@ where ip-address is the address of the other node.
 From "server1"

 gluster peer probe server2
+gluster peer probe server3

 Note: When using hostnames, the first server needs to be probed from
 ***one*** other server to set its hostname.
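To make that note concrete: once server1 has probed the others, run a single probe in the reverse direction, e.g. from server2:

    gluster peer probe server1   # run once, from any one of the other servers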
@@ -109,22 +121,26 @@ Check the peer status on server1

 You should see something like this (the UUID will differ)

-Number of Peers: 1
+Number of Peers: 2

 Hostname: server2
 Uuid: f0e7b138-4874-4bc0-ab91-54f20c7068b4
 State: Peer in Cluster (Connected)
+
+Hostname: server3
+Uuid: f0e7b138-4532-4bc0-ab91-54f20c701241
+State: Peer in Cluster (Connected)
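For reference, the output above is what the peer status query prints on server1 once both probes have succeeded:

    gluster peer status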

 ### Step 6 - Set up a GlusterFS volume

-On both server1 and server2:
+On all servers:

 mkdir -p /data/brick1/gv0

 From any single server:

-gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
+gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
 gluster volume start gv0

 Confirm that the volume shows "Started":
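The output shown in the next hunk comes from the volume info query, run from any of the servers:

    gluster volume info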
@@ -139,17 +155,18 @@ You should see something like this (the Volume ID will differ):
 Volume ID: f25cc3d8-631f-41bd-96e1-3e22a4c6f71f
 Status: Started
 Snapshot Count: 0
-Number of Bricks: 1 x 2 = 2
+Number of Bricks: 1 x 3 = 3
 Transport-type: tcp
 Bricks:
 Brick1: server1:/data/brick1/gv0
 Brick2: server2:/data/brick1/gv0
+Brick3: server3:/data/brick1/gv0
 Options Reconfigured:
 transport.address-family: inet

 Note: If the volume is not started, clues as to what went wrong will be
Member
If the volume does not show "Started", the files under /var/log/glusterfs/glusterd.log should be checked in order to debug and diagnose the situation. These logs can be looked at on one or, all the servers configured.
-in log files under /var/log/glusterfs/glusterd.log on one or both of the servers.
+in log files under `/var/log/glusterfs/glusterd.log` on one or all of the servers.
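A simple way to act on that note when a volume refuses to start is to inspect the end of the glusterd log on each server; the line count here is arbitrary.

    tail -n 50 /var/log/glusterfs/glusterd.log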
 ### Step 7 - Testing the GlusterFS volume

@@ -174,5 +191,5 @@ points on each server:
 You should see 100 files on each server using the method we listed here.
 Without replication, in a distribute only volume (not detailed here), you
-should see about 50 files on each one.
+should see about 33 files on each one.
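The "method we listed here" lives outside this hunk; as a reminder, it is roughly the following, with /mnt used as the client mount point as elsewhere in the guide.

    mount -t glusterfs server1:/gv0 /mnt
    for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
    ls -lA /data/brick1/gv0/copy-test-* | wc -l   # on each server, counts the replicated copies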

it should either be Reference ID or, reference id - can't be both.