diff --git a/docs/manuals/en/new_main_reference/source/chapter20/storage-backends.rst b/docs/manuals/en/new_main_reference/source/chapter20/storage-backends.rst
index 8ca799a183f..82c1f854d74 100644
--- a/docs/manuals/en/new_main_reference/source/chapter20/storage-backends.rst
+++ b/docs/manuals/en/new_main_reference/source/chapter20/storage-backends.rst
@@ -24,14 +24,12 @@ A Bareos Storage Daemon can use various storage backends:
 
 **Rados** (Ceph Object Store) is used to access a Ceph object store.
 
-Droplet Storage Backend
------------------------
-
-:index:`[TAG=Backend->Droplet] ` :index:`[TAG=Backend->Droplet->S3] ` :index:`[TAG=Backend->S3|see {Backend->Droplet}] `
-
 .. _SdBackendDroplet:
 
+Droplet Storage Backend
+-----------------------
+:index:`[TAG=Backend->Droplet] ` :index:`[TAG=Backend->Droplet->S3] ` :index:`[TAG=Backend->S3|see {Backend->Droplet}] `
 
 The **bareos-storage-droplet** backend (:index:`Version >= 17.2.7 `) can be used to access Object Storage through **libdroplet**. Droplet supports a number of backends, most notably S3. For details about Droplet itself see ``_.
 
@@ -42,7 +40,7 @@ Requirements
 
 - Droplet S3:
 
-  - The droplet S3 backend can only be used with virtual-hosted-style buckets like `http://./object ./object>`__. Path-style buckets are not supported. It has been tested successfully with AWS S3 and CEPH Object Gateway S3.
+  - The droplet S3 backend can only be used with virtual-hosted-style buckets like http://bucket.s3_server/object. Path-style buckets are not supported. It has been tested successfully with AWS S3 and CEPH Object Gateway S3.
 
 Installation
 ~~~~~~~~~~~~
@@ -288,7 +286,7 @@ For performance, **Device Options**:sup:`Sd`:sub:`Device`\ should be configured
 
 New AWS S3 Buckets
 ^^^^^^^^^^^^^^^^^^
 
-As AWS S3 buckets are accessed via virtual-hosted-style buckets (like `http://./object ./object>`__) creating a new bucket results in a new DNS entry.
+As AWS S3 buckets are accessed via virtual-hosted-style buckets (like http://bucket.s3_server/object) creating a new bucket results in a new DNS entry.
 As a new DNS entry is not available immediately, Amazon solves this by using HTTP temporary redirects (code: 307) to redirect to the correct host. Unfortunately, the Droplet library does not support HTTP redirects.
diff --git a/docs/manuals/en/new_main_reference/source/chapter24/basejob.rst b/docs/manuals/en/new_main_reference/source/chapter24/basejob.rst
index 822a5469f05..b1ab06be40e 100644
--- a/docs/manuals/en/new_main_reference/source/chapter24/basejob.rst
+++ b/docs/manuals/en/new_main_reference/source/chapter24/basejob.rst
@@ -1,15 +1,15 @@
 .. ATTENTION do not edit this file manually. It was automatically converted from the corresponding .tex file
 
+.. _basejobs:
+
 File Deduplication using Base Jobs
 ==================================
 
-:index:`[TAG=Base Jobs] ` :index:`[TAG=File Deduplication] `
-
-.. _basejobs:
+:index:`[TAG=Base Jobs] ` :index:`[TAG=File Deduplication] `
 
-A base job is sort of like a Full save except that you will want the FileSet to contain only files that are unlikely to change in the future (i.e. a snapshot of most of your system after installing it). After the base job has been run, when you are doing a Full save, you specify one or more Base jobs to be used. All files that have been backed up in the Base job/jobs but not
-modified will then be excluded from the backup. During a restore, the Base jobs will be automatically pulled in where necessary.
+A base job is sort of like a Full save except that you will want the FileSet to contain only files that are unlikely to change in the future (i.e. a snapshot of most of your system after installing it). After the base job has been run, when you are doing a Full save, you specify one or more Base jobs to be used. All files that have been backed up in the Base job/jobs but not modified will then be excluded from the backup.
+During a restore, the Base jobs will be automatically pulled in where
+necessary.
 
 Imagine having 100 nearly identical Windows or Linux machines containing the OS and user files. Now for the OS part, a Base job will be backed up once, and rather than making 100 copies of the OS, there will be only one. If one or more of the systems have some files updated, no problem, they will be automatically backed up.
diff --git a/docs/manuals/en/new_main_reference/source/chapter32/catmaintenance.rst b/docs/manuals/en/new_main_reference/source/chapter32/catmaintenance.rst
index 13beeed8999..9ebdb7c2149 100644
--- a/docs/manuals/en/new_main_reference/source/chapter32/catmaintenance.rst
+++ b/docs/manuals/en/new_main_reference/source/chapter32/catmaintenance.rst
@@ -635,86 +635,39 @@ You will note that the File table (containing the file attributes) make up the l
 
 Without proper setup and maintenance, your Catalog may continue to grow indefinitely as you run Jobs and back up Files, and/or it may become very inefficient and slow. How fast the size of your Catalog grows depends on the number of Jobs you run and how many files they back up. By deleting records within the database, you can make space available for the new records that will be added during the next Job. By constantly deleting old expired records (dates older than the Retention period), your database size will remain constant.
 
-Setting Retention Periods
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-:index:`[TAG=Setting Retention Periods] ` :index:`[TAG=Periods->Setting Retention] `
-
 .. _Retention:
 
+Setting Retention Periods
+~~~~~~~~~~~~~~~~~~~~~~~~~
+:index:`[TAG=Setting Retention Periods] ` :index:`[TAG=Periods->Setting Retention] `
 
 Bareos uses three Retention periods: the File Retention period, the Job Retention period, and the Volume Retention period. Of these three, the File Retention period is by far the most important in determining how large your database will become.
 
 The File Retention and the Job Retention are specified in each Client resource as is shown below. The Volume Retention period is specified in the Pool resource, and the details are given in the next chapter of this manual.
The File Retention and the Job Retention are specified in each Client resource as is shown below. The Volume Retention period is specified in the Pool resource, and the details are given in the next chapter of this manual. -\begin{description} - - \item [File Retention = ] - :index:`[TAG=File Retention] ` - :index:`[TAG=Retention->File] ` - The File Retention record defines the length of time that Bareos will keep - File records in the Catalog database. When this time period expires, and if - {\bf AutoPrune} is set to {\bf yes}, Bareos will prune (remove) File records - that are older than the specified File Retention period. The pruning will - occur at the end of a backup Job for the given Client. Note that the Client - database record contains a copy of the File and Job retention periods, but - Bareos uses the current values found in the Director's Client resource to do - the pruning. - - Since File records in the database account for probably 80 percent of the - size of the database, you should carefully determine exactly what File - Retention period you need. Once the File records have been removed from - the database, you will no longer be able to restore individual files - in a Job. However, as long as the - Job record still exists, you will be able to restore all files in the - job. - - Retention periods are specified in seconds, but as a convenience, there are - a number of modifiers that permit easy specification in terms of minutes, - hours, days, weeks, months, quarters, or years on the record. See the - :ref:`Configuration chapter