
docs: improvements for droplet, jobdefs #1581

Merged
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -11,6 +11,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Removed
- plugins: remove old deprecated postgres plugin [PR #1606]

### Documentation
- docs: improvements for droplet, jobdefs [PR #1581]

[PR #1581]: https://github.com/bareos/bareos/pull/1581
[PR #1589]: https://github.com/bareos/bareos/pull/1589
[PR #1606]: https://github.com/bareos/bareos/pull/1606
[unreleased]: https://github.com/bareos/bareos/tree/master
@@ -8,4 +8,5 @@ Job {
Pool = Incremental
Messages = Standard
Where = /tmp/bareos-restores
Maximum Concurrent Jobs = 10
}
14 changes: 11 additions & 3 deletions docs/manuals/source/Configuration/Director.rst
@@ -123,16 +123,24 @@ The following is an example of a valid Job resource definition:
JobDefs Resource
----------------

:index:`\ <single: Job; JobDefs Resource>`\ :index:`\ <single: Resource; JobDefs>`\
.. index::
single: Job; JobDefs Resource
single: Resource; JobDefs

The JobDefs resource permits all the same directives that can appear in a Job resource. However, a JobDefs resource does not create a Job, rather it can be referenced within a Job to provide defaults for that Job. This permits you to concisely define several nearly identical Jobs, each one referencing a JobDefs resource which contains the defaults. Only the changes from the defaults need to be mentioned in each Job.
The JobDefs resource permits all the same directives that can appear in a Job resource.
However, a JobDefs resource does not create a Job; rather, it can be referenced within
a Job to provide defaults for that Job. This permits you to concisely define several nearly
identical Jobs, each one referencing a JobDefs resource which contains the defaults.
Only the changes from the defaults need to be mentioned in each Job.
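
For illustration, a minimal sketch of this pattern could look as follows (resource names and values are placeholders, not taken from a shipped configuration):

.. code-block:: bareosconfig
   :caption: sketch: Job referencing a JobDefs resource (illustrative)

   JobDefs {
     Name = "DefaultJob"
     Type = Backup
     Level = Incremental
     FileSet = "SelfTest"
     Schedule = "WeeklyCycle"
     Storage = File
     Messages = Standard
     Pool = Incremental
   }

   Job {
     Name = "backup-client1"
     JobDefs = "DefaultJob"
     Client = "client1-fd"   # only the deviation from the defaults needs to be specified
   }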

.. _DirectorResourceSchedule:

Schedule Resource
-----------------

:index:`\ <single: Resource; Schedule>`\ :index:`\ <single: Schedule; Resource>`\
.. index::
single: Resource; Schedule
single: Schedule; Resource

The Schedule resource provides a means of automatically scheduling a Job as well as the ability to override the default Level, Pool, Storage and Messages resources. If a Schedule resource is not referenced in a Job, the Job can only be run manually. In general, you specify an action to be taken and when.
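
A minimal Schedule resource, shown here as a sketch with an illustrative name and times, could look like this:

.. code-block:: bareosconfig
   :caption: sketch: Schedule resource (illustrative values)

   Schedule {
     Name = "WeeklyCycle"
     Run = Full 1st sun at 23:05
     Run = Incremental mon-sat at 23:05
   }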

14 changes: 11 additions & 3 deletions docs/manuals/source/TasksAndConcepts/MigrationAndCopy.rst
@@ -3,7 +3,9 @@
Migration and Copy
==================

:index:`\ <single: Migration>`\ :index:`\ <single: Copy>`\
.. index::
single: Migration
single: Copy

The term Migration, as used in the context of Bareos, means moving data from one Volume to another. In particular it refers to a Job (similar to a backup job) that reads data that was previously backed up to a Volume and writes it to another Volume. As part of this process, the File catalog records associated with the first backup job are purged. In other words, Migration moves Bareos Job data from one Volume to another by reading the Job data from the Volume it is stored on, writing it to a
different Volume in a different Pool, and then purging the database records for the first Job.
@@ -54,7 +56,8 @@ If the migration/copy control job finds more than one existing job to migrate, i
Important migration Considerations
----------------------------------

:index:`\ <single: migration; Important migration Considerations>`\
.. index::
single: migration; Important migration Considerations

- Each Pool into which you migrate Jobs or Volumes must contain Volumes of only one :config:option:`dir/storage/MediaType`\ .

@@ -103,6 +106,10 @@ Job Resource

:config:option:`dir/job/PurgeMigrationJob`\

-

:config:option:`dir/job/MaximumConcurrentJobs`\ > 1 is needed if you want to have multiple migrate/copy jobs running at the same time (see the sketch below).
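
A sketch of such a migration control job, with an illustrative pool name and selection pattern, could look like this:

.. code-block:: bareosconfig
   :caption: sketch: migration job allowing concurrency (illustrative)

   Job {
     Name = "migrate-volume"
     Type = Migrate
     Messages = Standard
     Pool = Default                  # pool to migrate jobs from
     Selection Type = Volume
     Selection Pattern = "File"
     Maximum Concurrent Jobs = 4     # allow up to four migration jobs in parallel
   }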

Pool Resource
'''''''''''''

@@ -124,7 +131,8 @@
Example Migration Jobs
~~~~~~~~~~~~~~~~~~~~~~

:index:`\ <single: Example; Migration Jobs>`\
.. index::
single: Example; Migration Jobs

Assume a simple configuration with a single backup job as described below.

@@ -4,6 +4,7 @@ LDAP Plugin
.. index::
single: Plugin; ldap

.. deprecated:: 20.0.0

This plugin is intended to backup (and restore) the contents of a LDAP server. It uses normal LDAP operation for this. The package **bareos-filedaemon-ldap-python-plugin** (:sinceVersion:`15.2.0: LDAP Plugin`) contains an example configuration file, that must be adapted to your environment.
This plugin is intended to back up (and restore) the contents of an LDAP server.
It uses normal LDAP operations for this. The package **bareos-filedaemon-ldap-python-plugin**
(:sinceVersion:`15.2.0: LDAP Plugin`) contains an example configuration file,
that must be adapted to your environment.
87 changes: 62 additions & 25 deletions docs/manuals/source/TasksAndConcepts/StorageBackends.rst
@@ -24,11 +24,14 @@ A Bareos Storage Daemon can use various storage backends:
Droplet Storage Backend
-----------------------

:index:`\ <single: Backend; Droplet>`
:index:`\ <single: Backend; Droplet; S3>`
:index:`\ <single: Backend; S3|see {Backend; Droplet}>`
.. index::
single: Backend; Droplet
single: Backend; Droplet; S3
single: Backend; S3|see {Backend; Droplet}

The **bareos-storage-droplet** backend (:sinceVersion:`17.2.7: Droplet`) can be used to access Object Storage through **libdroplet**. Droplet support a number of backends, most notably S3. For details about Droplet itself see https://github.com/scality/Droplet.
The **bareos-storage-droplet** backend (:sinceVersion:`17.2.7: Droplet`) can be used to
access Object Storage through **libdroplet**. Droplet supports a number of backends, most
notably S3. For details about Droplet itself see https://github.com/scality/Droplet.

Requirements
~~~~~~~~~~~~
@@ -37,17 +40,22 @@

- Droplet S3:

- The droplet S3 backend can only be used with virtual-hosted-style buckets like http://bucket.s3_server/object. Path-style buckets are not supported. It has been tested successfully with AWS S3 and CEPH Object Gateway S3.
- The droplet S3 backend can only be used with virtual-hosted-style buckets like `http://bucket.s3_server/object`.
Path-style buckets are not supported. It has been tested successfully with AWS S3
and CEPH Object Gateway S3.

Installation
~~~~~~~~~~~~

Install the package **bareos-storage-droplet** by using an appropriate package management tool (eg. :command:`yum`, :command:`zypper`).
Install the package **bareos-storage-droplet** by using an appropriate package management
tool (e.g. :command:`dnf`, :command:`zypper`, :command:`apt`).

Configuration
~~~~~~~~~~~~~

The droplet backend requires a |dir| :ref:`DirectorResourceStorage`, a |sd| :ref:`StorageResourceDevice` as well as a Droplet profile file where your access– and secret–keys and other parameters for the connection to your object storage are stored.
The droplet backend requires a |dir| :ref:`DirectorResourceStorage`, a |sd| :ref:`StorageResourceDevice`,
as well as a Droplet profile file where your access key, secret key, and other parameters for
the connection to your object storage are stored.

.. _section-DropletAwsS3:

@@ -63,9 +71,12 @@

- choose the name :config:option:`Dir/Storage = S3_Object`\ .

- choose :config:option:`dir/storage/MediaType = S3_Object1`\ . We name it this way, in case we later add more separated Object Storages that don’t have access to the same volumes.
- choose :config:option:`dir/storage/MediaType = S3_Object1`\ . We name it this way in case
we later add more separated Object Storages that don’t have access to the same volumes.

- assume the |sd| is located on the host :strong:`bareos-sd.example.com` and will offers the :ref:`StorageResourceDevice` :config:option:`Sd/Device = S3_ObjectStorage`\ (to be configured in the next section).
- assume the |sd| is located on the host :strong:`bareos-sd.example.com` and offers
the :ref:`StorageResourceDevice` :config:option:`Sd/Device = S3_ObjectStorage`\
(to be configured in the next section).

.. code-block:: bareosconfig
:caption: bareos-dir.d/storage/S3\_Object.conf
@@ -78,7 +89,8 @@
Media Type = "S3_Object1"
}

These credentials are only used to connect to the |sd|. The credentials to access the object store (e.g. S3) are stored in the |sd| Droplet Profile.
These credentials are only used to connect to the |sd|. The credentials to access the object store
(e.g. S3) are stored in the |sd| Droplet Profile.

Storage Daemon
''''''''''''''
@@ -93,8 +105,10 @@ The name and media type must correspond to those settings in the |dir| :ref:`Dir

.. limitation:: Droplet Backend does not support block interleaving

The current implementation has a known Bug that may lead to bogus data on your S3 volumes when you set :config:option:`sd/device/MaximumConcurrentJobs` to a value other than 1.
Because of this the default for a backend of type Droplet is set to 1 and the |sd| will refuse to start if you set it to a value greater than 1.
The current implementation has a known bug that may lead to bogus data on your S3 volumes
when you set :config:option:`sd/device/MaximumConcurrentJobs` to a value other than 1.
Because of this, the default for a backend of type Droplet is set to 1 and the |sd| will
refuse to start if you set it to a value greater than 1.


A device for the usage of AWS S3 object storage with a bucket named :file:`backup-bareos` located in EU Central 1 (Frankfurt, Germany) would look like this:
@@ -122,7 +136,8 @@ files, so every append operation could result in reading and writing the full vo
The following :config:option:`sd/device/DeviceOptions`\ settings are possible:

profile
Droplet profile path (e.g. /etc/bareos/bareos-sd.d/device/droplet/droplet.profile). Make sure the profile file is readable for user **bareos**.
Droplet profile path (e.g. /etc/bareos/bareos-sd.d/device/droplet/droplet.profile).
Make sure the profile file is readable for user **bareos**.

acl
Canned ACL
@@ -134,7 +149,7 @@
Bucket to store objects in.

chunksize
Size of Volume Chunks (default = 10 Mb).
Size of Volume Chunks (default = 10 Mb). See the limitation regarding Maximum Volume Size below.

iothreads
Number of IO-threads to use for uploads (if not set, blocking uploads are used)
@@ -158,18 +173,24 @@
.. code-block:: cfg
:caption: aws.profile

host = s3.amazonaws.com # This parameter is only used as baseurl and will be prepended with bucket and location set in device resource to form correct url
host = s3.amazonaws.com
use_https = true
access_key = myaccesskey
secret_key = mysecretkey
pricing_dir = "" # If not empty, an droplet.csv file will be created which will record all S3 operations.
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4 # Currently, AWS S3 uses version 4. The Ceph S3 gateway uses version 2.
aws_auth_sign_version = 4
aws_region = eu-central-1

More arguments and the SSL parameters can be found in the documentation of the droplet library: \externalReferenceDropletDocConfigurationFile

While parameters have been explained in the :ref:`section-DropletAwsS3` section, this gives an example about how to backup to a CEPH Object Gateway S3.
.. limitation:: Droplet does not support comments in the profile configuration file.

Keep the `*.profile` file free of any comments.


While parameters have been explained in the :ref:`section-DropletAwsS3` section, this
gives an example of how to back up to a CEPH Object Gateway S3.

.. code-block:: bareosconfig
:caption: bareos-dir.d/storage/S3\_Object.conf
@@ -201,7 +222,7 @@ A device for CEPH object storage could look like this:
Maximum Concurrent Jobs = 1
}

The correspondig Droplet profile looks like this:
The corresponding Droplet profile looks like this:

.. code-block:: cfg
:caption: ceph-rados-gateway.profile
@@ -216,6 +237,14 @@

The main differences are that :file:`aws_region` is not required and that :file:`aws_auth_sign_version = 2` is used instead of 4.

.. limitation:: Maximum of 9'999 chunks

You have to make sure that your :config:option:`dir/pool/MaximumVolumeBytes` divided
by the `chunk size` doesn't exceed 9'999.

Example: Maximum Volume Bytes = 300 GB and chunk size = 100 MB results in 3'000 chunks, which is fine.
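
As a sketch, using the illustrative values from the example above, the relevant pool directive could be set like this:

.. code-block:: bareosconfig
   :caption: sketch: keeping the chunk count below the limit (illustrative values)

   Pool {
     Name = "S3_Full"
     Pool Type = Backup
     Maximum Volume Bytes = 300 GB   # with chunksize = 100 MB this yields 3'000 chunks per volume
   }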


Troubleshooting
~~~~~~~~~~~~~~~

@@ -284,11 +313,14 @@ For performance, :config:option:`sd/device/DeviceOptions`\ should be configured
New AWS S3 Buckets
^^^^^^^^^^^^^^^^^^

As AWS S3 buckets are accessed via virtual-hosted-style buckets (like http://bucket.s3_server/object) creating a new bucket results in a new DNS entry.
As AWS S3 buckets are accessed via virtual-hosted-style buckets (like http://bucket.s3_server/object),
creating a new bucket results in a new DNS entry.

As a new DNS entry is not available immediatly, Amazon solves this by using HTTP temporary redirects (code: 307) to redirect to the correct host. Unfortenatly, the Droplet library does not support HTTP redirects.
As a new DNS entry is not available immediately, Amazon solves this by using HTTP temporary
redirects (code: 307) to redirect to the correct host. Unfortunately, the Droplet library
does not support HTTP redirects.

Requesting the device status only resturn a unspecific error:
Requesting the device status only returns an unspecific error:

.. code-block:: bconsole
:caption: status storage
@@ -316,7 +348,9 @@ Workaround:
AWS S3 Logging
^^^^^^^^^^^^^^

If you use AWS S3 object storage and want to debug your bareos setup, it is recommended to turn on the server access logging in your bucket properties. You will see if bareos gets to try writing into your bucket or not.
If you use AWS S3 object storage and want to debug your Bareos setup, it is recommended
to turn on server access logging in your bucket properties.
This will allow you to determine whether Bareos attempted to write to your bucket or not.

.. _SdBackendGfapi:

@@ -325,8 +359,11 @@

**GFAPI** (GlusterFS)

A GlusterFS Storage can be used as Storage backend of Bareos. Prerequistes are a working GlusterFS storage system and the package **bareos-storage-glusterfs**. See https://www.gluster.org/ for more information regarding GlusterFS installation and configuration and specifically `https://docs.gluster.org/en/latest/Administrator-Guide/Bareos/ <https://docs.gluster.org/en/latest/Administrator-Guide/Bareos/>`__ for Bareos integration. You can use following snippet to
configure it as storage device:
A GlusterFS Storage can be used as a Storage backend of Bareos. Prerequisites are a working
GlusterFS storage system and the package **bareos-storage-glusterfs**.
See https://www.gluster.org/ for more information regarding GlusterFS installation and
configuration and specifically `https://docs.gluster.org/en/latest/Administrator-Guide/Bareos/ <https://docs.gluster.org/en/latest/Administrator-Guide/Bareos/>`__
for Bareos integration.
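
A corresponding device resource could be configured along these lines; this is a sketch, assuming a Gluster volume named :file:`volumename` on the host :strong:`server.example.com` with a :file:`bareos` subdirectory (adjust the URI and options to your setup):

.. code-block:: bareosconfig
   :caption: sketch: gfapi device resource (illustrative values)

   Device {
     Name = GlusterStorage
     Archive Device = "Gluster Device"
     Device Options = "uri=gluster://server.example.com/volumename/bareos"
     Device Type = gfapi
     Media Type = GlusterFile
     Label Media = yes
     Random Access = yes
     Automatic Mount = yes
     Removable Media = no
     Always Open = no
   }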



@@ -1,2 +1,5 @@
If a :ref:`Job Defs <DirectorResourceJobDefs>` resource name is specified, all the values contained in the named :ref:`Job Defs <DirectorResourceJobDefs>` resource will be used as the defaults for the current Job. Any value that you explicitly define in the current Job resource, will override any defaults specified in the :ref:`Job Defs <DirectorResourceJobDefs>` resource. The use of this directive permits writing much more compact Job resources where the
bulk of the directives are defined in one or more :ref:`Job Defs <DirectorResourceJobDefs>`. This is particularly useful if you have many similar Jobs but with minor variations such as different Clients. To structure the configuration even more, :ref:`Job Defs <DirectorResourceJobDefs>` themselves can also refer to other :ref:`Job Defs <DirectorResourceJobDefs>`.

.. warning::
If a directive that can be specified multiple times, such as RunScript, is defined in both the JobDefs and the Job, the configurations are added together instead of overridden as described above. Therefore, if one RunScript is defined in the JobDefs and another in the Job, both will be executed.
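
As a sketch (script paths and resource names are hypothetical), the accumulation behaviour described in the warning can be illustrated like this:

.. code-block:: bareosconfig
   :caption: sketch: RunScript defined in both JobDefs and Job (illustrative)

   JobDefs {
     Name = "DefaultJob"
     Type = Backup
     Messages = Standard
     RunScript {
       RunsWhen = Before
       RunsOnClient = No
       Command = "/usr/local/bin/notify-start.sh"     # hypothetical script from the JobDefs
     }
   }

   Job {
     Name = "backup-client1"
     JobDefs = "DefaultJob"
     Client = "client1-fd"
     FileSet = "SelfTest"
     Pool = Incremental
     Storage = File
     RunScript {
       RunsWhen = Before
       RunsOnClient = No
       Command = "/usr/local/bin/notify-client1.sh"   # hypothetical; both scripts are executed
     }
   }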