9 changes: 5 additions & 4 deletions Developer-guide/Development-Workflow.md
@@ -28,6 +28,7 @@ mirrors.
A good introduction to Git can be found at
<http://www-cs-students.stanford.edu/~blynn/gitmagic/>.


### Gerrit

Gerrit is an excellent code review system which is developed with a git
@@ -306,13 +307,13 @@ This script does the following:
- Rebase your commit against the latest upstream HEAD. This rebase
  also causes your commits to be amended by the freshly downloaded
  commit-msg hook.
- Prompt for a Bug Id for each commit (if it was not already provided)
and include it as a "BUG:" tag in the commit log. You can just hit
- Prompt for a reference ID for each commit (if it was not already provided)
and include it as a "fixes: #n" tag in the commit log. You can just hit
<enter> at this prompt if your submission is purely for review
purposes.
- Push the changes to review.gluster.org for review. If you had
provided a bug id, it assigns the topic of the change as "bug-XYZ".
If not it sets the topic as "rfc".
  provided a reference ID, it assigns the topic of the change as

It should be either "Reference ID" or "reference id"; it can't be both.

"ref-XYZ". If not it sets the topic as "rfc".

On a successful push, you will see a URL pointing to the change in
review.gluster.org
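
For illustration, a commit that already carries a reference ID might look like the following sketch; the subject line and issue number here are made up:

    $ git log -n 1 --pretty=%B
    cli: fix typo in volume status output

    fixes: #1234

    $ ./rfc.sh

With a reference ID like this, the change would be pushed under the topic "ref-1234"; without one, the topic would simply be "rfc".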
8 changes: 6 additions & 2 deletions Developer-guide/Simplified-Development-Workflow.md
@@ -152,12 +152,16 @@ To submit your change for review, run the rfc.sh script,
$ ./rfc.sh

The script will ask you to enter a bugzilla bug id. Every change

The script will prompt for a valid bug number in the Red Hat Bugzilla

submitted to GlusterFS needs a bugzilla entry to be accepted. If you do
not already have a bug id, file a new bug at [Red Hat
submitted to GlusterFS needs a reference ID to be accepted. If you do
not already have a reference ID, file a new bug at [Red Hat
Bugzilla](https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS).
If the patch is submitted for review, the rfc.sh script will return the
Gerrit URL for the review request.

Also note that all feature development in GlusterFS is tracked using
[GitHub issues](https://github.com/gluster/glusterfs/issues). A GitHub
issue number can also serve as a reference ID while submitting the patch.

More details on the rfc.sh script are available at
[Development Work Flow - rfc.sh](./Development-Workflow.md#rfc.sh).
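
As a rough sketch of the submission step (the exact prompts printed by rfc.sh may differ):

    # with your change committed on a local branch of the glusterfs repo
    $ ./rfc.sh
    # when prompted, enter the Bugzilla bug number or GitHub issue number
    # to use as the reference ID, or just press <enter> if the submission
    # is purely for review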

53 changes: 35 additions & 18 deletions Quick-Start-Guide/Quickstart.md
@@ -2,11 +2,11 @@ Installing GlusterFS - a Quick Start Guide
-------

#### Purpose of this document

This document is intended to give you a step by step guide to setting up
GlusterFS for the first time. For this tutorial, we will assume you are
using Fedora 26 (or later) virtual machines.
We also do not explain the steps in detail here as this guide is just to help
you get it up and running as soon as possible.

This document is intended to provide a step-by-step guide to setting up GlusterFS for the first time. For the purposes of this guide, it is required to use Fedora 26 (or higher) virtual machine instances.

After you deploy GlusterFS by following these steps,
we recommend that you read the GlusterFS Admin Guide to learn how to
administer GlusterFS and how to select a volume type that fits your
@@ -19,32 +19,43 @@ installing using different methods (in local virtual machines, EC2 and
baremetal) and different distributions, then have a look at the Install
guide.

#### Using Ansible to deploy and manage GlusterFS

If you are already an Ansible user, and are more comfortable with setting

Ansible

up distributed systems with Ansible, we recommend you skip the steps below and
move over to the [gluster-ansible](https://github.com/gluster/gluster-ansible) repository, which provides most of the details needed to get the systems running quickly.

#### Automatically deploying GlusterFS with Puppet-Gluster+Vagrant

If you'd like to deploy GlusterFS automatically using

To deploy GlusterFS using scripted methods, please read this ...

Puppet-Gluster+Vagrant, have a look at [this
article](https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/).


### Step 1 – Have at least two nodes
### Step 1 – Have at least three nodes

- Fedora 22 (or later) on two nodes named "server1" and "server2"
- Fedora 26 (or later) on three nodes named "server1", "server2" and "server3"
- A working network connection

Is it required to add a line about functional DNS (and/or name resolution)?

Also, what about NTP?

- At least two virtual disks, one for the OS installation, and one to be
used to serve GlusterFS storage (sdb). This will emulate a real-
world deployment, where you would want to separate GlusterFS storage
from the OS install.
- Note: GlusterFS stores its dynamically generated configuration files
at /var/lib/glusterd. If at any point in time GlusterFS is unable to
used to serve GlusterFS storage (sdb), on each of these VMs. This will
emulate a real-world deployment, where you would want to separate
GlusterFS storage from the OS install.

**Note**: GlusterFS stores its dynamically generated configuration files
at `/var/lib/glusterd`. If at any point in time GlusterFS is unable to
write to these files (for example, when the backing filesystem is full),
it will at minimum cause erratic behavior for your system; or worse,
take your system offline completely. It is advisable to create separate

It is recommended to create separate partitions for directories such as /var/log to reduce the chances of this happening.

partitions for directories such as /var/log to ensure this does not happen.
partitions for directories such as `/var/log` to reduce the chances of
this happening.
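
As a quick sanity check, you can confirm that the filesystems backing these paths have headroom; a minimal example, assuming the default locations:

    df -h /var/lib/glusterd /var/log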


### Step 2 - Format and mount the bricks

(on both nodes): Note: We are going to use the XFS filesystem for the backend bricks.
Perform this step on all the nodes: "server{1,2,3}".

**Note**: We are going to use the XFS filesystem for the backend bricks, but Gluster is designed to work on top of any filesystem that supports extended attributes.

These examples assume that the brick will reside on /dev/sdb1.

The following examples assume that the brick will be residing on /dev/sdb1


mkfs.xfs -i size=512 /dev/sdb1
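
A minimal sketch of the rest of the format-and-mount sequence, assuming the /data/brick1 mount point referenced below:

    mkdir -p /data/brick1
    echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
    mount -a && mount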
@@ -57,7 +68,7 @@ You should now see sdb1 mounted at /data/brick1

### Step 3 - Installing GlusterFS

(on both nodes) Install the software
Install the software (on all the nodes)

yum install glusterfs-server
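
On Fedora and other systemd-based distributions, the management daemon can then be started and enabled roughly as follows (a sketch, assuming the glusterd service name shipped with the glusterfs-server package):

    systemctl enable glusterd
    systemctl start glusterd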

@@ -91,6 +102,7 @@ where ip-address is the address of the other node.
From "server1"

gluster peer probe server2
gluster peer probe server3

Note: When using hostnames, the first server needs to be probed from
***one*** other server to set its hostname.
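
In other words, when hostnames are used, one probe in the reverse direction is also needed so that server1 is known by name rather than by IP; a sketch of that step:

    # run this once from server2 (or server3)
    gluster peer probe server1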
@@ -109,22 +121,26 @@ Check the peer status on server1

You should see something like this (the UUID will differ)

Number of Peers: 1
Number of Peers: 2

Hostname: server2
Uuid: f0e7b138-4874-4bc0-ab91-54f20c7068b4
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: f0e7b138-4532-4bc0-ab91-54f20c701241
State: Peer in Cluster (Connected)


### Step 6 - Set up a GlusterFS volume

On both server1 and server2:
On all servers:

mkdir -p /data/brick1/gv0

From any single server:

gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
gluster volume start gv0

Confirm that the volume shows "Started":
@@ -139,17 +155,18 @@ You should see something like this (the Volume ID will differ):
Volume ID: f25cc3d8-631f-41bd-96e1-3e22a4c6f71f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1/gv0
Brick2: server2:/data/brick1/gv0
Brick3: server3:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet


Note: If the volume is not started, clues as to what went wrong will be

If the volume does not show "Started", the files under /var/log/glusterfs/glusterd.log should be checked in order to debug and diagnose the situation. These logs can be looked at on one or all of the configured servers.

in log files under /var/log/glusterfs/glusterd.log on one or both of the servers.
in log files under `/var/log/glusterfs/glusterd.log` on one or all of the servers.
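
For example, the tail of that log is often enough to see why a volume failed to start; assuming the default log location:

    tail -n 50 /var/log/glusterfs/glusterd.log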


### Step 7 - Testing the GlusterFS volume
@@ -174,5 +191,5 @@ points on each server:

You should see 100 files on each server using the method we listed here.
Without replication, in a distribute-only volume (not detailed here), you
should see about 50 files on each one.
should see about 33 files on each one.
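
A quick way to perform that check is to count the entries directly in each server's brick directory; a sketch, assuming the brick path used earlier in this guide:

    # run on each server
    ls /data/brick1/gv0 | wc -l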