diff --git a/manuals/en/main/bareos-fd-resource-client-definitions.tex b/manuals/en/main/bareos-fd-resource-client-definitions.tex index dad5267..40cc9ce 100644 --- a/manuals/en/main/bareos-fd-resource-client-definitions.tex +++ b/manuals/en/main/bareos-fd-resource-client-definitions.tex @@ -45,7 +45,7 @@ If runscripts are not needed it would be recommended as a security measure to disable running those or only allow the commands that you really want to be used. -Runscripts are particularly a problem as they allow the filedaemon to run +Runscripts are particularly a problem as they allow the \bareosFd to run arbitrary commands. You may also look into the \linkResourceDirective{Fd}{Client}{Allowed Script Dir} keyword to limit the impact of the runscript command. } @@ -115,10 +115,8 @@ } \defDirective{Fd}{Client}{FD Source Address}{}{}{% -This record is optional, and if it is specified, it will cause the File -daemon server (for Storage connections) to bind to the specified {\bf -IP-Address}, which is either a domain name or an IP address specified as a -dotted quadruple. If this record is not specified, the kernel will choose +If specified, the \bareosFd will bind to the specified address when creating outbound connections. +If this record is not specified, the kernel will choose the best address according to the routing table (the default). } @@ -133,23 +131,6 @@ that does not follow Internet standards and times out a valid connection after a short duration despite the fact that keepalive is set. This usually results in a broken pipe error message. - -% If you continue getting broken pipe error messages despite using the -% Heartbeat Interval, and you are using Windows, you should consider -% upgrading your ethernet driver. 
This is a known problem with NVidia -% NForce 3 drivers (4.4.2 17/05/2004), or try the following workaround -% suggested by Thomas Simmons for Win32 machines: -% -% Browse to: -% Start {\textgreater} Control Panel {\textgreater} Network Connections -% -% Right click the connection for the nvidia adapter and select properties. -% Under the General tab, click "Configure...". Under the Advanced tab set -% "Checksum Offload" to disabled and click OK to save the change. - -% Lack of communications, or communications that get interrupted can -% also be caused by Linux firewalls where you have a rule that throttles -% connections or traffic. } \defDirective{Fd}{Client}{LMDB Threshold}{}{}{% @@ -157,20 +138,21 @@ \defDirective{Fd}{Client}{Maximum Bandwidth Per Job}{}{}{% The speed parameter specifies the maximum allowed bandwidth that a job may -use. The speed parameter should be specified in k/s, kb/s, m/s or mb/s. +use. } \defDirective{Fd}{Client}{Maximum Concurrent Jobs}{}{}{% -where {\textless}number{\textgreater} is the maximum number of Jobs that should run +This directive specifies the maximum number of Jobs that should run concurrently. Each contact from the Director (e.g. status request, job start -request) is considered as a Job, so if you want to be able to do a {\bf -status} request in the console at the same time as a Job is running, you +request) is considered as a Job, +so if you want to be able to do a \bcommand{status}{} request in the console +at the same time as a Job is running, you will need to set this value greater than 1. } \defDirective{Fd}{Client}{Maximum Network Buffer Size}{}{}{% -where {\textless}bytes{\textgreater} specifies the initial network buffer size to use with -the File daemon. This size will be adjusted down if it is too large until it +This directive specifies the initial network buffer size to use. +This size will be adjusted down if it is too large until it is accepted by the OS.
Please use care in setting this value since if it is too large, it will be trimmed by 512 bytes until the OS is happy, which may require a large number of system calls. The default value is 65,536 bytes. diff --git a/manuals/en/main/bareos-manual-main-reference.tex b/manuals/en/main/bareos-manual-main-reference.tex index 02e362d..0e0ffc1 100644 --- a/manuals/en/main/bareos-manual-main-reference.tex +++ b/manuals/en/main/bareos-manual-main-reference.tex @@ -187,7 +187,7 @@ \part{Tasks and Concepts} \include{restore} \chapter{Volume Management} - \include{disk} + \input{disk} \include{recycling} \include{pools} diff --git a/manuals/en/main/bareos-sd-resource-storage-definitions.tex b/manuals/en/main/bareos-sd-resource-storage-definitions.tex index c23dfeb..0b6be58 100644 --- a/manuals/en/main/bareos-sd-resource-storage-definitions.tex +++ b/manuals/en/main/bareos-sd-resource-storage-definitions.tex @@ -69,17 +69,15 @@ } \defDirective{Sd}{Storage}{Maximum Concurrent Jobs}{}{}{% -where {\textless}number{\textgreater} is the maximum number of Jobs that may run -concurrently. The default is set to 10, but you may set it to a larger -number. Each contact from the Director (e.g. status request, job start -request) is considered as a Job, so if you want to be able to do a {\bf -status} request in the console at the same time as a Job is running, you +This directive specifies the maximum number of Jobs that may run +concurrently. Each contact from the Director (e.g. status request, job start +request) is considered as a Job, so if you want to be able to do a \bcommand{status}{} +request in the console at the same time as a Job is running, you will need to set this value greater than 1. To run simultaneous Jobs, you will need to set a number of other directives in the Director's configuration file. 
Which ones you set depend on what you want, but you -will almost certainly need to set the {\bf Maximum Concurrent Jobs} in -the Storage resource in the Director's configuration file and possibly -those in the Job and Client resources. +will almost certainly need to set the \linkResourceDirective{Dir}{Storage}{Maximum Concurrent Jobs}. +Please refer to the \nameref{ConcurrentJobs} chapter. } \defDirective{Sd}{Storage}{Maximum Network Buffer Size}{}{}{% diff --git a/manuals/en/main/bareos.sty b/manuals/en/main/bareos.sty index 1ad9cb5..dc04a11 100644 --- a/manuals/en/main/bareos.sty +++ b/manuals/en/main/bareos.sty @@ -182,12 +182,19 @@ } \newcommand{\parameter}[1]{\path|#1|} \newcommand{\pluginevent}[1]{\path|#1|} -\newcommand{\pool}[1]{\path|#1|} +\newcommand{\pool}[1]{\resourcename{Dir}{Pool}{#1}} \newcommand{\argument}[1]{\textit{#1}} -\newcommand{\resourcename}[1]{\path|#1|} +\newcommand{\resourcetype}[2]{\path|#2|$^{\mbox{\tiny #1}}$} +\newcommand{\resourcename}[3]{\path|#3|$^{\mbox{\tiny #1}}_{\mbox{\tiny #2}}$} \newcommand{\registrykey}[1]{\path|#1|} \newcommand{\variable}[1]{\path|#1|} \newcommand{\volume}[1]{\path|#1|} +\newcommand{\volumestatus}[1]{\path|#1|} +\newcommand{\volumeparameter}[2]{\ifthenelse{\isempty{#2}}{% + \path|#1|% +}{% + \path|#1 = #2|% +}} \newcommand{\os}[2]{\ifthenelse{\isempty{#2}}{% \path|#1|\index[general]{Platform!#1}% }{% diff --git a/manuals/en/main/bconsole.tex b/manuals/en/main/bconsole.tex index c4af67c..b955e72 100644 --- a/manuals/en/main/bconsole.tex +++ b/manuals/en/main/bconsole.tex @@ -326,7 +326,6 @@ \section{Console Keywords} \end{description} \section{Console Commands} -\index[general]{Console!Commands} \label{sec:ConsoleCommands} The following commands are currently implemented: @@ -1687,8 +1686,7 @@ \section{Console Commands} \subsection{Special dot (.) Commands} \label{dotcommands} -\index[general]{Console!Command!Special .} -\index[general]{Console!Command!. Commands} +\index[general]{Console!Command!. 
commands} There is a list of commands that are prefixed with a period (.). These commands are intended to be used either by batch programs or graphical user @@ -1712,8 +1710,8 @@ \subsection{Special At (@) Commands} \index[general]{Console!Command!\at{}input {\textless}filename{\textgreater}} Read and execute the commands contained in the file specified. -\item [@output {\textless}filename{\textgreater} w/a] - \index[general]{Console!Command!\at{}output {\textless}filename{\textgreater} w/a} +\item [@output {\textless}filename{\textgreater} {\textless}w{\textbar}a{\textgreater}] + \index[general]{Console!Command!\at{}output {\textless}filename{\textgreater} {\textless}w{\textbar}a{\textgreater}} Send all following output to the filename specified either overwriting the file (w) or appending to the file (a). To redirect the output to the terminal, simply enter {\bf @output} without a filename specification. @@ -1725,12 +1723,11 @@ \subsection{Special At (@) Commands} @output /dev/null commands ... @output - \end{verbatim} \normalsize -\item [@tee {\textless}filename{\textgreater} w/a] - \index[general]{Console!Command!\at{}tee {\textless}filename{\textgreater} w/a} +\item [@tee {\textless}filename{\textgreater} {\textless}w{\textbar}a{\textgreater}] + \index[general]{Console!Command!\at{}tee {\textless}filename{\textgreater} {\textless}w{\textbar}a{\textgreater}} Send all subsequent output to both the specified file and the terminal. It is turned off by specifying {\bf @tee} or {\bf @output} without a filename. 
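The write/append distinction for {\bf @output} and {\bf @tee} can be illustrated with a short console session (the file name below is hypothetical):

```latex
\footnotesize
\begin{verbatim}
@tee /tmp/status.log w
status dir
@tee
\end{verbatim}
\normalsize
```

Here the first command truncates the file ({\bf w}) and mirrors all subsequent output to it; the bare {\bf @tee} turns the redirection off again. Using {\bf a} instead of {\bf w} would append to an existing file.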
diff --git a/manuals/en/main/director-resource-client-definitions.tex b/manuals/en/main/director-resource-client-definitions.tex index 651a588..ee1d126 100644 --- a/manuals/en/main/director-resource-client-definitions.tex +++ b/manuals/en/main/director-resource-client-definitions.tex @@ -91,7 +91,7 @@ \defDirective{Dir}{Client}{Job Retention}{}{}{% The Job Retention directive defines the length of time that Bareos will keep Job records in the Catalog database after the Job End time. When -this time period expires, and if {\bf AutoPrune} is set to {\bf yes} +this time period expires, and if \linkResourceDirective{Dir}{Client}{Auto Prune} is set to {\bf yes}, Bareos will prune (remove) Job records that are older than the specified File Retention period. As with the other retention periods, this affects only records in the catalog and not data in your archive backup. @@ -100,14 +100,14 @@ records will also be pruned regardless of the File Retention period set. As a consequence, you normally will set the File retention period to be less than the Job retention period. The Job retention period can actually -be less than the value you specify here if you set the {\bf Volume -Retention} directive in the Pool resource to a smaller duration. This is +be less than the value you specify here if you set the \linkResourceDirective{Dir}{Pool}{Volume +Retention} directive to a smaller duration. This is because the Job retention period and the Volume retention period are independently applied, so the smaller of the two takes precedence. The Job retention period is specified as seconds, minutes, hours, days, weeks, months, quarters, or years. See the -\ilink{ Configuration chapter}{Time} of this manual for +\ilink{Configuration chapter}{Time} of this manual for additional details of time specification. The default is 180 days.
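As a sketch of the interplay described above (all names and durations are hypothetical): with the configuration below, Job records are effectively pruned after 30 days, because the shorter Volume Retention takes precedence over the 180 day Job Retention.

```latex
\begin{bconfig}{Job vs. Volume Retention (sketch)}
Client {
  Name = client1-fd
  Address = client1.example.com
  Password = "secret"
  File Retention = 60 days
  Job Retention = 180 days
  Auto Prune = yes
}

Pool {
  Name = File
  Pool Type = Backup
  Volume Retention = 30 days
  Auto Prune = yes
  Recycle = yes
}
\end{bconfig}
```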
@@ -120,12 +120,11 @@ } \defDirective{Dir}{Client}{Maximum Concurrent Jobs}{}{}{% -where {\textless}number{\textgreater} is the maximum number of Jobs with the current Client +This directive specifies the maximum number of Jobs with the current Client that can run concurrently. Note, this directive limits only Jobs for Clients with the same name as the resource in which it appears. Any other -restrictions on the maximum concurrent jobs such as in the Director, Job, or +restrictions on the maximum concurrent jobs such as in the Director, Job or Storage resources will also apply in addition to any limit specified here. -The default is set to 1, but you may set it to a larger number. } \defDirective{Dir}{Client}{Name}{}{}{% diff --git a/manuals/en/main/director-resource-director-definitions.tex b/manuals/en/main/director-resource-director-definitions.tex index 8a4c908..31d155b 100644 --- a/manuals/en/main/director-resource-director-definitions.tex +++ b/manuals/en/main/director-resource-director-definitions.tex @@ -85,12 +85,10 @@ } \defDirective{Dir}{Director}{Maximum Concurrent Jobs}{}{}{% -\label{DirMaxConJobs}% \index[general]{Simultaneous Jobs}% \index[general]{Concurrent Jobs}% -where {\textless}number{\textgreater} is the maximum number of total Director Jobs that -should run concurrently. The default is set to 1, but you may set it to a -larger number. +This directive specifies the maximum number of total Director Jobs that +should run concurrently. The Volume format becomes more complicated with multiple simultaneous jobs, consequently, restores may take longer if @@ -104,9 +102,8 @@ } \defDirective{Dir}{Director}{Maximum Console Connections}{}{}{% -where \parameter{number} is the maximum number of Console Connections that -could run concurrently. The default is set to 20, but you may set it to a -larger number. +This directive specifies the maximum number of Console Connections that +could run concurrently. 
} \defDirective{Dir}{Director}{Messages}{}{}{% diff --git a/manuals/en/main/director-resource-job-definitions.tex b/manuals/en/main/director-resource-job-definitions.tex index a50941c..5aba74e 100644 --- a/manuals/en/main/director-resource-job-definitions.tex +++ b/manuals/en/main/director-resource-job-definitions.tex @@ -560,8 +560,8 @@ from when the job starts, ({\bf not} necessarily the same as when the job was scheduled). -By default, the the watchdog thread will kill any Job that has run more -than 6 days. The maximum watchdog timeout is independent of MaxRunTime +By default, the watchdog thread will kill any Job that has run more +than 6 days. The maximum watchdog timeout is independent of \configdirective{Max Run Time} and cannot be changed. } @@ -610,10 +610,10 @@ Job resource that can run concurrently. Note, this directive limits only Jobs with the same name as the resource in which it appears. Any other restrictions on the maximum concurrent jobs such as in the -Director, Client, or Storage resources will also apply in addition to -the limit specified here. The default is set to 1, but you may set it -to a larger number. We strongly recommend that you read the WARNING -documented under \nameref{DirMaxConJobs}. +Director, Client or Storage resources will also apply in addition to +the limit specified here. + +For details, see the \nameref{ConcurrentJobs} chapter. } \defDirective{Dir}{Job}{Maxrun Sched Time}{}{}{% diff --git a/manuals/en/main/director-resource-pool-definitions.tex b/manuals/en/main/director-resource-pool-definitions.tex index 6794592..4974f5c 100644 --- a/manuals/en/main/director-resource-pool-definitions.tex +++ b/manuals/en/main/director-resource-pool-definitions.tex @@ -121,8 +121,8 @@ \configdirective{Label Format = "File-"}, the first volumes will be named \volume{File-0001}, \volume{File-0002}, ... 
-With the exception of Job specific variables, you can test your -\configdirective{Label Format} +With the exception of Job specific variables, you can test your +\configdirective{Label Format} by using the \ilink{var command}{var} the Console Chapter of this manual. @@ -300,17 +300,17 @@ proper retention periods. However, by using this option you risk losing valuable data. -Please be aware that {\bf Purge Oldest Volume} disregards all retention +\warning{ +Be aware that \configdirective{Purge Oldest Volume} disregards all retention periods. If you have only a single Volume defined and you turn this variable on, that Volume will always be immediately overwritten when it fills! So at a minimum, ensure that you have a decent number of Volumes in your Pool before running any jobs. If you want retention periods to -apply do not use this directive. To specify a retention period, use the -{\bf Volume Retention} directive (see above). - -We {\bf highly} recommend against using this directive, because it is -sure that some day, Bareos will recycle a Volume that contains current -data. The default is {\bf no}. +apply, do not use this directive.\\ +We \textbf{highly} recommend against using this directive, because it is +certain that some day Bareos will purge a Volume that contains current +data. +} } \defDirective{Dir}{Pool}{Recycle}{}{}{% diff --git a/manuals/en/main/director-resource-storage-definitions.tex b/manuals/en/main/director-resource-storage-definitions.tex index ab03c7f..967ffb1 100644 --- a/manuals/en/main/director-resource-storage-definitions.tex +++ b/manuals/en/main/director-resource-storage-definitions.tex @@ -87,21 +87,19 @@ } \defDirective{Dir}{Storage}{Maximum Concurrent Jobs}{}{}{% -where {\textless}number{\textgreater} is the maximum number of Jobs with the current +This directive specifies the maximum number of Jobs with the current Storage resource that can run concurrently. Note, this directive limits only Jobs for Jobs using this Storage daemon.
Any other restrictions on -the maximum concurrent jobs such as in the Director, Job, or Client -resources will also apply in addition to any limit specified here. The -default is set to 1, but you may set it to a larger number. However, if -you set the Storage daemon's number of concurrent jobs greater than one, -we recommend that you read the waring documented under \ilink{Maximum -Concurrent Jobs}{DirMaxConJobs} in the Director's resource or simply -turn data spooling on as documented in the \ilink{Data -Spooling}{SpoolingChapter} chapter of this manual. +the maximum concurrent jobs such as in the Director, Job or Client +resources will also apply in addition to any limit specified here. + +If you set the Storage daemon's number of concurrent jobs greater than one, +we recommend that you read \nameref{ConcurrentJobs} and/or +turn data spooling on as documented in \nameref{SpoolingChapter}. } \defDirective{Dir}{Storage}{Maximum Concurrent Read Jobs}{}{}{% -where {\textless}number{\textgreater} is the maximum number of Jobs with the current +This directive specifies the maximum number of Jobs with the current Storage resource that can read concurrently. } diff --git a/manuals/en/main/disk.tex b/manuals/en/main/disk.tex index 07f4f1f..9a1f332 100644 --- a/manuals/en/main/disk.tex +++ b/manuals/en/main/disk.tex @@ -18,14 +18,12 @@ give you some of the options that are available to you so that you can manage either disk or tape volumes. -\label{Concepts} \section{Key Concepts and Resource Records} \index[general]{Volume!Management!Key Concepts and Resource Records} -\index[general]{Key Concepts and Resource Records} Getting Bareos to write to disk rather than tape in the simplest case is rather easy. In the Storage daemon's configuration file, you simply define an -{\bf Archive Device} to be a directory. +\linkResourceDirective{Sd}{Device}{Archive Device} to be a directory. 
The default directory to store backups on disk is \path|/var/lib/bareos/storage|: \footnotesize @@ -42,7 +40,7 @@ \section{Key Concepts and Resource Records} \end{verbatim} \normalsize -Assuming you have the appropriate {\bf Storage} resource in your Director's +Assuming you have the appropriate \configresource{Storage} resource in your Director's configuration file that references the above Device resource, \footnotesize @@ -75,43 +73,23 @@ \section{Key Concepts and Resource Records} In addition, if you want to use concurrent jobs that write to several different volumes at the same time, you will need to understand a number of other details. An example of such a configuration is given -at the end of this chapter under \ilink{Concurrent Disk -Jobs}{ConcurrentDiskJobs}. +at the end of this chapter under \nameref{ConcurrentDiskJobs}. \subsection{Pool Options to Limit the Volume Usage} -\index[general]{Usage!Pool Options to Limit the Volume} -\index[general]{Pool Options to Limit the Volume Usage} +\index[general]{Pool!Options to Limit the Volume Usage} Some of the options you have, all of which are specified in the Pool record, are: \begin{itemize} -\item To write each Volume only once (i.e. one Job per Volume or file in this - case), use: - -{\bf UseVolumeOnce = yes}. - -\item To write nnn Jobs to each Volume, use: - - {\bf Maximum Volume Jobs = nnn}. - -\item To limit the maximum size of each Volume, use: - - {\bf Maximum Volume Bytes = mmmm}. +\item \linkResourceDirective{Dir}{Pool}{Maximum Volume Jobs}: write only the specified number of jobs on each Volume. +\item \linkResourceDirective{Dir}{Pool}{Maximum Volume Bytes}: limit the maximum size of each Volume. Note, if you use disk volumes you should probably limit the Volume size to some reasonable - value such as say 5GB. 
This is because during a restore, Bareos is - currently unable to seek to the proper place in a disk volume to restore - a file, which means that it must read all records up to where the - restore begins. If your Volumes are 50GB, reading half or more of the - volume could take quite a bit of time. Also, if you ever have a partial + value. If you ever have a partial hard disk failure, you are more likely to be able to recover more data if they are in smaller Volumes. - -\item To limit the use time (i.e. write the Volume for a maximum of five days), - use: - -{\bf Volume Use Duration = ttt}. +\item \linkResourceDirective{Dir}{Pool}{Volume Use Duration}: restrict the time between first and last data written to Volume. \end{itemize} Note that although you probably would not want to limit the number of bytes on @@ -121,25 +99,24 @@ \subsection{Pool Options to Limit the Volume Usage} through a set of daily Volumes if you wish. As mentioned above, each of those directives is specified in the Pool or -Pools that you use for your Volumes. In the case of {\bf Maximum Volume Job}, -{\bf Maximum Volume Bytes}, and {\bf Volume Use Duration}, you can actually +Pools that you use for your Volumes. In the case of \linkResourceDirective{Dir}{Pool}{Maximum Volume Jobs}, +\linkResourceDirective{Dir}{Pool}{Maximum Volume Bytes} and \linkResourceDirective{Dir}{Pool}{Volume Use Duration}, +you can actually specify the desired value on a Volume by Volume basis. The value specified in the Pool record becomes the default when labeling new Volumes. Once a Volume has been created, it gets its own copy of the Pool defaults, and subsequently changing the Pool will have no effect on existing Volumes. You can either manually change the Volume values, or refresh them from the Pool defaults using -the {\bf update volume} command in the Console. As an example +the \bcommand{update}{volume} command in the Console. 
As an example of the use of one of the above, suppose your Pool resource contains: -\footnotesize -\begin{verbatim} +\begin{bconfig}{Volume Use Duration} Pool { Name = File Pool Type = Backup Volume Use Duration = 23h } -\end{verbatim} -\normalsize +\end{bconfig} then if you run a backup once a day (every 24 hours), Bareos will use a new Volume for each backup, because each Volume it writes can only be used for 23 hours @@ -148,10 +125,10 @@ \subsection{Pool Options to Limit the Volume Usage} because Bareos will want a new Volume and no one will be present to mount it, so no weekend backups will be done until Monday morning. -\label{AutomaticLabeling} \subsection{Automatic Volume Labeling} -\index[general]{Automatic!Volume Labeling} +\label{AutomaticLabeling} \index[general]{Label!Automatic Volume Labeling} +\index[general]{Volume!Labeling!Automatic} Use of the above records brings up another problem -- that of labeling your Volumes. For automated disk backup, you can either manually label each of your @@ -163,26 +140,24 @@ \subsection{Automatic Volume Labeling} requires some user interaction. Automatic labeling from templates does NOT work with autochangers since Bareos will not access unknown slots. There are several methods of labeling all volumes in an autochanger magazine. -For more information on this, please see the \ilink{Autochanger}{AutochangersChapter} chapter of this manual. +For more information on this, please see the \nameref{AutochangersChapter} chapter. -Automatic Volume labeling is enabled by making a change to both the Pool -resource (Director) and to the Device resource (Storage daemon) shown above. +Automatic Volume labeling is enabled by making a change to both the \resourcetype{Dir}{Pool} +resource and to the \resourcetype{Sd}{Device} resource shown above. In the case of the Pool resource, you must provide Bareos with a label format that it will use to create new names. 
In the simplest form, the label format is simply the Volume name, to which Bareos will append a four digit number. This number starts at 0001 and is incremented for each Volume the catalog contains. Thus if you modify your Pool resource to be: -\footnotesize -\begin{verbatim} +\begin{bconfig}{Label Format} Pool { Name = File Pool Type = Backup Volume Use Duration = 23h - LabelFormat = "Vol" + Label Format = "Vol" } -\end{verbatim} -\normalsize +\end{bconfig} Bareos will create Volume names Vol0001, Vol0002, and so on when new Volumes are needed. Much more complex and elaborate labels can be created using @@ -191,28 +166,24 @@ \subsection{Automatic Volume Labeling} The second change that is necessary to make automatic labeling work is to give the Storage daemon permission to automatically label Volumes. Do so by adding -{\bf LabelMedia = yes} to the Device resource as follows: +\linkResourceDirective{Sd}{Device}{Label Media} = yes to the \configresource{Device} resource as follows: -\footnotesize -\begin{verbatim} +\begin{bconfig}{Label Media = yes} Device { Name = File Media Type = File - Archive Device = /home/bareos/backups - Random Access = Yes; - AutomaticMount = yes; - RemovableMedia = no; - AlwaysOpen = no; - LabelMedia = yes + Archive Device = /var/lib/bareos/storage/ + Random Access = yes + Automatic Mount = yes + Removable Media = no + Always Open = no + Label Media = yes } -\end{verbatim} -\normalsize +\end{bconfig} + +See \linkResourceDirective{Dir}{Pool}{Label Format} for details about the labeling format. -You can find more details of the {\bf Label Format} Pool record in -\linkResourceDirective{Dir}{Pool}{Label Format} description of the Pool resource -records. 
-\label{Recycling1} \subsection{Restricting the Number of Volumes and Recycling} \index[general]{Recycling!Restricting the Number of Volumes and Recycling} \index[general]{Restricting the Number of Volumes and Recycling} @@ -226,70 +197,53 @@ \subsection{Restricting the Number of Volumes and Recycling} The tools Bareos gives you to help automatically manage these problems are the following: -\begin{enumerate} -\item Catalog file record retention periods, the - \linkResourceDirective{Dir}{Client}{File Retention} record in the Client - resource. -\item Catalog job record retention periods, the - \linkResourceDirective{Dir}{Client}{Job Retention} record in the Client - resource. -\item The - \linkResourceDirective{Dir}{Client}{Auto Prune} = yes record in the Client resource - to permit application of the above two retention periods. -\item The - \linkResourceDirective{Dir}{Pool}{Volume Retention} record in the Pool - resource. -\item The - \linkResourceDirective{Dir}{Pool}{Auto Prune} = yes record in the Pool - resource to permit application of the Volume retention period. -\item The - \linkResourceDirective{Dir}{Pool}{Recycle} = yes record in the Pool resource - to permit automatic recycling of Volumes whose Volume retention period has +\begin{itemize} +\item \linkResourceDirective{Dir}{Client}{File Retention}: catalog file record retention period. +\item \linkResourceDirective{Dir}{Client}{Job Retention}: catalog job record retention period. +\item \linkResourceDirective{Dir}{Client}{Auto Prune} = yes: permit the application of the above two retention periods. +\item \linkResourceDirective{Dir}{Pool}{Volume Retention} +\item \linkResourceDirective{Dir}{Pool}{Auto Prune} = yes: permit the application of the \linkResourceDirective{Dir}{Pool}{Volume Retention} period. +\item \linkResourceDirective{Dir}{Pool}{Recycle} = yes: permit automatic recycling of Volumes whose Volume retention period has expired. 
-\item The -  \linkResourceDirective{Dir}{Pool}{Recycle Oldest Volume} = yes record in the -  Pool resource tells Bareos to Prune the oldest volume in the Pool, and if all -  files were pruned to recycle this volume and use it. -\item The -  \linkResourceDirective{Dir}{Pool}{Recycle Current Volume} = yes record in -  the Pool resource tells Bareos to Prune the currently mounted volume in the -  Pool, and if all files were pruned to recycle this volume and use it. -\item The -  \linkResourceDirective{Dir}{Pool}{Purge Oldest Volume} = yes record in the -  Pool resource permits a forced recycling of the oldest Volume when a new one +\item \linkResourceDirective{Dir}{Pool}{Recycle Oldest Volume} = yes: prune the oldest volume in the Pool, and if all + files were pruned, recycle this volume and use it. +\item \linkResourceDirective{Dir}{Pool}{Recycle Current Volume} = yes: prune the currently mounted volume in the + Pool, and if all files were pruned, recycle this volume and use it. +\item \linkResourceDirective{Dir}{Pool}{Purge Oldest Volume} = yes: permits a forced recycling of the oldest Volume when a new one is needed.\\ \warning{This record ignores retention periods! We highly recommend not to use this record, but instead use \linkResourceDirective{Dir}{Pool}{Recycle Oldest Volume}.} -\item The - \linkResourceDirective{Dir}{Pool}{Maximum Volumes} = nnn record in the Pool - resource to limit the number of Volumes that can be created. -\end{enumerate} +\item \linkResourceDirective{Dir}{Pool}{Maximum Volumes}: limit the number of Volumes that can be created.
+\end{itemize} -The first three records (File Retention, Job Retention, and AutoPrune) +The first three records +(\linkResourceDirective{Dir}{Client}{File Retention}, \linkResourceDirective{Dir}{Client}{Job Retention} and \linkResourceDirective{Dir}{Client}{Auto Prune}) determine the amount of time that Job and File records will remain in your -Catalog, and they are discussed in detail in the -\ilink{Automatic Volume Recycling}{RecyclingChapter} chapter of -this manual. - -Volume Retention, AutoPrune, and Recycle determine how long Bareos will keep -your Volumes before reusing them, and they are also discussed in detail in the -\ilink{Automatic Volume Recycling}{RecyclingChapter} chapter of -this manual. - -The Maximum Volumes record can also be used in conjunction with the Volume -Retention period to limit the total number of archive Volumes (files) that -Bareos will create. By setting an appropriate Volume Retention period, a -Volume will be purged just before it is needed and thus Bareos can cycle +Catalog and they are discussed in detail in the +\ilink{Automatic Volume Recycling}{RecyclingChapter} chapter. + +\linkResourceDirective{Dir}{Pool}{Volume Retention}, \linkResourceDirective{Dir}{Pool}{Auto Prune} and \linkResourceDirective{Dir}{Pool}{Recycle} +determine how long Bareos will keep +your Volumes before reusing them and they are also discussed in detail in the +\ilink{Automatic Volume Recycling}{RecyclingChapter} chapter. + +The \linkResourceDirective{Dir}{Pool}{Maximum Volumes} record +can also be used in conjunction with the \linkResourceDirective{Dir}{Pool}{Volume Retention} period +to limit the total number of archive Volumes that +Bareos will create. +By setting an appropriate \linkResourceDirective{Dir}{Pool}{Volume Retention} period, +a Volume will be purged just before it is needed and thus Bareos can cycle through a fixed set of Volumes. 
Cycling through a fixed set of Volumes can -also be done by setting {\bf Recycle Oldest Volume = yes} or {\bf Recycle -Current Volume = yes}. In this case, when Bareos needs a new Volume, it will +also be done by setting +\linkResourceDirective{Dir}{Pool}{Recycle Oldest Volume} = yes or \linkResourceDirective{Dir}{Pool}{Recycle Current Volume} = yes. +In this case, when Bareos needs a new Volume, it will prune the specified volume. -\label{ConcurrentDiskJobs} \section{Concurrent Disk Jobs} \index[general]{Concurrent Disk Jobs} -Above, we discussed how you could have a single device named {\bf -FileBackup} that writes to volumes in {\bf /home/bareos/backups}. +\label{ConcurrentDiskJobs} +Above, we discussed how you could have a single device named +\resourcename{Sd}{Device}{FileBackup} that writes to volumes in \fileStoragePath. You can, in fact, run multiple concurrent jobs using the Storage definition given with this example, and all the jobs will simultaneously write into the Volume that is being written. @@ -297,58 +251,59 @@ \section{Concurrent Disk Jobs} Now suppose you want to use multiple Pools, which means multiple Volumes, or suppose you want each client to have its own Volume and perhaps its own directory such as {\bf /home/bareos/client1} -and {\bf /home/bareos/client2} ... . +With the single Storage and Device definition above, neither of these two is possible. Why? Because Bareos disk storage follows the same rules as tape devices. Only one Volume can be mounted on any Device at any time. If you want to simultaneously write multiple Volumes, you will need multiple -Device resources in your bareos-sd.conf file, and thus multiple -Storage resources in your bareos-dir.conf. +Device resources in your \bareosSd configuration and thus multiple +Storage resources in your \bareosDir configuration.
-OK, so now you should understand that you need multiple Device definitions
+Okay, so now you should understand that you need multiple Device definitions
in the case of different directories or different Pools, but you also need
to know that the catalog data that Bareos keeps contains only the Media
Type and not the specific storage device. This permits a tape, for example,
to be re-read on any compatible tape drive. The compatibility
-being determined by the Media Type. The same applies to disk storage.
-Since a volume that is written by a Device in say directory {\bf
-/home/bareos/backups} cannot be read by a Device with an Archive Device
-definition of {\bf /home/bareos/client1}, you will not be able to
-restore all your files if you give both those devices
-{\bf Media Type = File}. During the restore, Bareos will simply choose
+is determined by the
+Media Type (\linkResourceDirective{Dir}{Storage}{Media Type} and \linkResourceDirective{Sd}{Device}{Media Type}).
+The same applies to disk storage.
+Since a volume that is written by a Device in say directory
+\path|/home/bareos/backups| cannot be read by a Device with an
+\linkResourceDirective{Sd}{Device}{Archive Device} = \path|/home/bareos/client1|,
+you will not be able to restore all your files if you give both those devices
+\linkResourceDirective{Sd}{Device}{Media Type} = File.
+During the restore, Bareos will simply choose
the first available device, which may not be the correct one. If this
is confusing, just remember that the Director has only the Media Type
-and the Volume name. It does not know the {\bf Archive Device} (or the
-full path) that is specified in the Storage daemon. Thus you must
+and the Volume name. It does not know the \linkResourceDirective{Sd}{Device}{Archive Device} (or the
+full path) that is specified in the \bareosSd. Thus you must
explicitly tie your Volumes to the correct Device by using the Media Type.
-The example shown below shows a case where there are two clients, each
-using its own Pool and storing their Volumes in different directories.
-
-\subsection{An Example}
+\subsection{Example for two clients, separate devices and recycling}

The following example is not very practical, but can be used to demonstrate
-the proof of concept in a relatively short period of time. The example
-consists of a two clients that are backed up to a set of 12 archive files
-(Volumes) for each client into different directories on the Storage
+the proof of concept in a relatively short period of time.
+
+The example
+consists of two clients that are backed up to a set of 12 Volumes for each client
+into different directories on the Storage
machine. Each Volume is used (written) only once, and there are four Full
saves done every hour (so the whole thing cycles around after three
hours).

-What is key here is that each physical device on the Storage daemon
+What is key here is that each physical device on the \bareosSd
has a different Media Type. This allows the Director to choose the
-correct device for restores ...
+correct device for restores.
-The Director's configuration file is as follows:
+The \bareosDir configuration is as follows:

\begin{bconfig}{}
Director {
-  Name = my-dir
-  QueryFile = "~/bareos/bin/query.sql"
-  PidDirectory = "~/bareos/working"
-  WorkingDirectory = "~/bareos/working"
-  Password = dir_password
+  Name = bareos-dir
+  QueryFile = "/usr/lib/bareos/scripts/query.sql"
+  Password = ""
}
+
Schedule {
  Name = "FourPerHour"
  Run = Level=Full hourly at 0:05
@@ -356,11 +311,23 @@ \subsection{An Example}
  Run = Level=Full hourly at 0:35
  Run = Level=Full hourly at 0:50
}
+
+FileSet {
+  Name = "Example FileSet"
+  Include {
+    Options {
+      compression=GZIP
+      signature=SHA1
+    }
+    File = /etc
+  }
+}
+
Job {
  Name = "RecycleExample"
  Type = Backup
  Level = Full
-  Client = Rufus
+  Client = client1-fd
  FileSet= "Example FileSet"
  Messages = Standard
  Storage = FileStorage
@@ -372,100 +339,88 @@ \subsection{An Example}
  Name = "RecycleExample2"
  Type = Backup
  Level = Full
-  Client = Roxie
+  Client = client2-fd
  FileSet= "Example FileSet"
  Messages = Standard
-  Storage = FileStorage1
-  Pool = Recycle1
+  Storage = FileStorage2
+  Pool = Recycle2
  Schedule = FourPerHour
}

-FileSet {
-  Name = "Example FileSet"
-  Include {
-    Options {
-      compression=GZIP
-      signature=SHA1
-    }
-    File = /home/user/bareos/bin
-  }
-}
-
Client {
-  Name = Rufus
-  Address = rufus
-  Catalog = BackupDB
-  Password = client_password
+  Name = client1-fd
+  Address = client1.example.com
+  Password = client1_password
}

Client {
-  Name = Roxie
-  Address = roxie
-  Catalog = BackupDB
-  Password = client1_password
+  Name = client2-fd
+  Address = client2.example.com
+  Password = client2_password
}

Storage {
  Name = FileStorage
-  Address = rufus
+  Address = bareos-sd.example.com
  Password = local_storage_password
  Device = RecycleDir
  Media Type = File
}

Storage {
-  Name = FileStorage1
-  Address = rufus
+  Name = FileStorage2
+  Address = bareos-sd.example.com
  Password = local_storage_password
-  Device = RecycleDir1
-  Media Type = File1
+  Device = RecycleDir2
+  Media Type = File2
}

Catalog
{ - Name = BackupDB - dbname = bareos; user = bareos; password = "" + Name = MyCatalog + ... } + Messages { Name = Standard ... } + Pool { Name = Recycle - Use Volume Once = yes Pool Type = Backup - LabelFormat = "Recycle-" - AutoPrune = yes - VolumeRetention = 2h + Label Format = "Recycle-" + Auto Prune = yes + Use Volume Once = yes + Volume Retention = 2h Maximum Volumes = 12 Recycle = yes } Pool { - Name = Recycle1 - Use Volume Once = yes + Name = Recycle2 Pool Type = Backup - LabelFormat = "Recycle1-" - AutoPrune = yes - VolumeRetention = 2h + Label Format = "Recycle2-" + Auto Prune = yes + Use Volume Once = yes + Volume Retention = 2h Maximum Volumes = 12 Recycle = yes } - \end{bconfig} -and the Storage daemon's configuration file is: +and the \bareosSd configuration is: \begin{bconfig}{} Storage { - Name = my-sd - WorkingDirectory = "~/bareos/working" - Pid Directory = "~/bareos/working" - MaximumConcurrentJobs = 10 + Name = bareos-sd + Maximum Concurrent Jobs = 10 } + Director { - Name = my-dir + Name = bareos-dir Password = local_storage_password } + Device { Name = RecycleDir Media Type = File @@ -478,9 +433,9 @@ \subsection{An Example} } Device { - Name = RecycleDir1 - Media Type = File1 - Archive Device = /home/bareos/backups1 + Name = RecycleDir2 + Media Type = File2 + Archive Device = /home/bareos/backups2 LabelMedia = yes; Random Access = Yes; AutomaticMount = yes; @@ -490,293 +445,117 @@ \subsection{An Example} Messages { Name = Standard - director = my-dir = all + director = bareos-dir = all } \end{bconfig} With a little bit of work, you can change the above example into a weekly or monthly cycle (take care about the amount of archive disk space used). 
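+
+To sketch such a variant: a weekly cycle needs the pool to hold roughly one
+week of backups before volumes are reused. The values below are illustrative
+only; they assume the Schedule is also reduced to one Full backup per day:
+
+\begin{bconfig}{hypothetical weekly variant of the Recycle pool}
+Pool {
+  Name = Recycle
+  Pool Type = Backup
+  Label Format = "Recycle-"
+  Auto Prune = yes
+  Use Volume Once = yes
+  Volume Retention = 6d
+  Maximum Volumes = 8
+  Recycle = yes
+}
+\end{bconfig}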
-\section{Backing up to Multiple Disks} -\label{MultipleDisks} -\index[general]{Disks!Backing up to Multiple} -\index[general]{Backup!to Multiple Disks} - -Bareos can, of course, use multiple disks, but in general, each disk must be a -separate Device specification in the Storage daemon's conf file, and you must -then select what clients to backup to each disk. You will also want to -give each Device specification a different Media Type so that during -a restore, Bareos will be able to find the appropriate drive. - -The situation is a bit more complicated if you want to treat two different -physical disk drives (or partitions) logically as a single drive, which -Bareos does not directly support. However, it is possible to back up your -data to multiple disks as if they were a single drive by linking the -Volumes from the first disk to the second disk. -For example, assume that you have two disks named {\bf /disk1} and {\bf -/disk2}. If you then create a standard Storage daemon Device resource for -backing up to the first disk, it will look like the following: -\footnotesize -\begin{verbatim} -Device { - Name = client1 - Media Type = File - Archive Device = /disk1 - LabelMedia = yes; - Random Access = Yes; - AutomaticMount = yes; - RemovableMedia = no; - AlwaysOpen = no; -} -\end{verbatim} -\normalsize - -Since there is no way to get the above Device resource to reference both {\bf -/disk1} and {\bf /disk2} we do it by pre-creating Volumes on /disk2 with the -following: - -\footnotesize -\begin{verbatim} -ln -s /disk2/Disk2-vol001 /disk1/Disk2-vol001 -ln -s /disk2/Disk2-vol002 /disk1/Disk2-vol002 -ln -s /disk2/Disk2-vol003 /disk1/Disk2-vol003 -... -\end{verbatim} -\normalsize - -At this point, you can label the Volumes as Volume {\bf Disk2-vol001}, {\bf -Disk2-vol002}, ... and Bareos will use them as if they were on /disk1 but -actually write the data to /disk2. 
The only minor inconvenience with this -method is that you must explicitly name the disks and cannot use automatic -labeling unless you arrange to have the labels exactly match the links you -have created. - -An important thing to know is that Bareos treats disks like tape drives -as much as it can. This means that you can only have a single Volume -mounted at one time on a disk as defined in your Device resource in -the Storage daemon's conf file. You can have multiple concurrent -jobs running that all write to the one Volume that is being used, but -if you want to have multiple concurrent jobs that are writing to -separate disks drives (or partitions), you will need to define -separate Device resources for each one, exactly as you would do for -two different tape drives. There is one fundamental difference, however. -The Volumes that you create on the two drives cannot be easily exchanged -as they can for a tape drive, because they are physically resident (already -mounted in a sense) on the particular drive. As a consequence, you will -probably want to give them different Media Types so that Bareos can -distinguish what Device resource to use during a restore. -An example would be the following: - -\footnotesize -\begin{verbatim} -Device { - Name = Disk1 - Media Type = File1 - Archive Device = /disk1 - LabelMedia = yes; - Random Access = Yes; - AutomaticMount = yes; - RemovableMedia = no; - AlwaysOpen = no; -} - -Device { - Name = Disk2 - Media Type = File2 - Archive Device = /disk2 - LabelMedia = yes; - Random Access = Yes; - AutomaticMount = yes; - RemovableMedia = no; - AlwaysOpen = no; -} -\end{verbatim} -\normalsize +\subsection{Using Multiple Storage Devices} -With the above device definitions, you can run two concurrent -jobs each writing at the same time, one to {\bf /disk1} and the -other to {\bf /disk2}. 
The fact that you have given them different
-Media Types will allow Bareos to quickly choose the correct
-Storage resource in the Director when doing a restore.
+Bareos treats disk volumes like tape volumes as much as it can.
+This means that you can only have a single Volume mounted at one time on a disk as defined in your \resourcetype{Sd}{Device} resource.

-\label{MultipleClients}
-\section{Considerations for Multiple Clients}
-\index[general]{Clients!Considerations for Multiple}
-\index[general]{Multiple Clients}
+If you use Bareos without \nameref{sec:DataSpooling},
+multiple concurrent backup jobs can be written to a Volume using interleaving.
+However, interleaving has disadvantages, see \nameref{sec:Interleaving}.

-If we take the above example and add a second Client, here are a few
-considerations:
+Also, the \resourcetype{Sd}{Device} will be in use. If other jobs request other Volumes,
+these jobs have to wait.

-\begin{itemize}
-\item Although the second client can write to the same set of Volumes, you
-   will probably want to write to a different set.
-\item You can write to a different set of Volumes by defining a second Pool,
-   which has a different name and a different {\bf LabelFormat}.
-\item If you wish the Volumes for the second client to go into a different
-   directory (perhaps even on a different filesystem to spread the load), you
-   would do so by defining a second Device resource in the Storage daemon. The
-{\bf Name} must be different, and the {\bf Archive Device} could be
-different. To ensure that Volumes are never mixed from one pool to another,
-you might also define a different MediaType (e.g. {\bf File1}).
-\end{itemize}
+On a tape (or autochanger), this is a physical limitation of the hardware.
+However, when using disk storage, this is only a limitation of the software.

-In this example, we have two clients, each with a different Pool and a
-different number of archive files retained.
They also write to different
-directories with different Volume labeling.
+To enable Bareos to run concurrent jobs (on disk storage), define as many \resourcetype{Sd}{Device}s as there are jobs that should run concurrently.
+All these \resourcetype{Sd}{Device}s can use the same \linkResourceDirective{Sd}{Device}{Archive Device} directory. Set \linkResourceDirective{Sd}{Device}{Maximum Concurrent Jobs} = 1 for all these devices.

-The Director's configuration file is as follows:
+\subsubsection{Example: use four storage devices pointing to the same directory}

-\footnotesize
-\begin{verbatim}
+\begin{bconfig}{\bareosDir configuration: using 4 storage devices}
Director {
-  Name = my-dir
-  QueryFile = "~/bareos/bin/query.sql"
-  PidDirectory = "~/bareos/working"
-  WorkingDirectory = "~/bareos/working"
-  Password = dir_password
-}
-# Basic weekly schedule
-Schedule {
-  Name = "WeeklySchedule"
-  Run = Level=Full fri at 1:30
-  Run = Level=Incremental sat-thu at 1:30
-}
-FileSet {
-  Name = "Example FileSet"
-  Include {
-    Options {
-      compression=GZIP
-      signature=SHA1
-    }
-    File = /home/user/bareos/bin
-  }
-}
-Job {
-  Name = "Backup-client1"
-  Type = Backup
-  Level = Full
-  Client = client1
-  FileSet= "Example FileSet"
-  Messages = Standard
-  Storage = File1
-  Pool = client1
-  Schedule = "WeeklySchedule"
-}
-Job {
-  Name = "Backup-client2"
-  Type = Backup
-  Level = Full
-  Client = client2
-  FileSet= "Example FileSet"
-  Messages = Standard
-  Storage = File2
-  Pool = client2
-  Schedule = "WeeklySchedule"
-}
-Client {
-  Name = client1
-  Address = client1
-  Catalog = BackupDB
-  Password = client1_password
-  File Retention = 7d
-}
-Client {
-  Name = client2
-  Address = client2
-  Catalog = BackupDB
-  Password = client2_password
-}
-# Two Storage definitions with different Media Types
-# permits different directories
-Storage {
-  Name = File1
-  Address = rufus
-  Password = local_storage_password
-  Device = client1
-  Media Type = File1
+  Name = bareos-dir.example.com
+  QueryFile =
"/usr/lib/bareos/scripts/query.sql"
+  Maximum Concurrent Jobs = 10
+  Password = ""
}
+
Storage {
-  Name = File2
-  Address = rufus
-  Password = local_storage_password
-  Device = client2
-  Media Type = File2
-}
-Catalog {
-  Name = BackupDB
-  dbname = bareos; user = bareos; password = ""
-}
-Messages {
-  Name = Standard
-  ...
-}
-# Two pools permits different cycling periods and Volume names
-# Cycle through 15 Volumes (two weeks)
-Pool {
-  Name = client1
-  Use Volume Once = yes
-  Pool Type = Backup
-  LabelFormat = "Client1-"
-  AutoPrune = yes
-  VolumeRetention = 13d
-  Maximum Volumes = 15
-  Recycle = yes
-}
-# Cycle through 8 Volumes (1 week)
-Pool {
-  Name = client2
-  Use Volume Once = yes
-  Pool Type = Backup
-  LabelFormat = "Client2-"
-  AutoPrune = yes
-  VolumeRetention = 6d
-  Maximum Volumes = 8
-  Recycle = yes
+  Name = File
+  Address = bareos-sd.example.com
+  Password = ""
+  Device = FileStorage1
+  Device = FileStorage2
+  Device = FileStorage3
+  Device = FileStorage4
+  # number of devices = Maximum Concurrent Jobs
+  Maximum Concurrent Jobs = 4
+  Media Type = File
}

+[...]
+\end{bconfig}

+
+\begin{bconfig}{\bareosSd configuration: using 4 storage devices}
Storage {
-  Name = my-sd
-  WorkingDirectory = "~/bareos/working"
-  Pid Directory = "~/bareos/working"
-  MaximumConcurrentJobs = 10
+  Name = bareos-sd.example.com
+  # any number >= 4
+  Maximum Concurrent Jobs = 20
}
+
Director {
-  Name = my-dir
-  Password = local_storage_password
+  Name = bareos-dir.example.com
+  Password = ""
}
-# Archive directory for Client1
+
Device {
-  Name = client1
-  Media Type = File1
-  Archive Device = /home/bareos/client1
-  LabelMedia = yes;
-  Random Access = Yes;
-  AutomaticMount = yes;
-  RemovableMedia = no;
-  AlwaysOpen = no;
+  Name = FileStorage1
+  Media Type = File
+  Archive Device = /var/lib/bareos/storage
+  LabelMedia = yes
+  Random Access = yes
+  AutomaticMount = yes
+  RemovableMedia = no
+  AlwaysOpen = no
+  Maximum Concurrent Jobs = 1
}
-# Archive directory for Client2
+
Device {
-  Name = client2
-  Media Type = File2
-  Archive Device = /home/bareos/client2
-  LabelMedia = yes;
-  Random Access = Yes;
-  AutomaticMount = yes;
-  RemovableMedia = no;
-  AlwaysOpen = no;
+  Name = FileStorage2
+  Media Type = File
+  Archive Device = /var/lib/bareos/storage
+  LabelMedia = yes
+  Random Access = yes
+  AutomaticMount = yes
+  RemovableMedia = no
+  AlwaysOpen = no
+  Maximum Concurrent Jobs = 1
}
-Messages {
-  Name = Standard
-  director = my-dir = all
+
+Device {
+  Name = FileStorage3
+  Media Type = File
+  Archive Device = /var/lib/bareos/storage
+  LabelMedia = yes
+  Random Access = yes
+  AutomaticMount = yes
+  RemovableMedia = no
+  AlwaysOpen = no
+  Maximum Concurrent Jobs = 1
}
+
+Device {
+  Name = FileStorage4
+  Media Type = File
+  Archive Device = /var/lib/bareos/storage
+  LabelMedia = yes
+  Random Access = yes
+  AutomaticMount = yes
+  RemovableMedia = no
+  AlwaysOpen = no
+  Maximum Concurrent Jobs = 1
+}
\end{bconfig}
diff --git a/manuals/en/main/migration.tex b/manuals/en/main/migration.tex
index
6bf24c7..83aba93 100644
--- a/manuals/en/main/migration.tex
+++ b/manuals/en/main/migration.tex
@@ -237,15 +237,15 @@ \subsection{Example Migration Jobs}
\end{bconfig}

Note that the backup job writes to the \pool{Default} pool, which
-corresponds to \resourcename{File} storage. There is no
+corresponds to \resourcename{Dir}{Storage}{File} storage. There is no
\linkResourceDirective{Dir}{Pool}{Storage} directive
-in the Job resource while the two \configresource{Pool} resources contain
+in the Job resource while the two \resourcetype{Dir}{Pool} resources contain
different \linkResourceDirective{Dir}{Pool}{Storage} directives.
Moreover, the \pool{Default} pool contains a
\linkResourceDirective{Dir}{Pool}{Next Pool}
directive that refers to the \pool{Tape} pool.

-In order to migrate jobs from the \pool{Default} pool to the \pool{Tape} pool
+In order to migrate jobs from the \resourcename{Dir}{Pool}{Default} pool to the \resourcename{Dir}{Pool}{Tape} pool
we add the following Job resource:

\begin{bconfig}{migrate all volumes of a pool}
diff --git a/manuals/en/main/ndmp.tex b/manuals/en/main/ndmp.tex
index a189d15..8f9b624 100644
--- a/manuals/en/main/ndmp.tex
+++ b/manuals/en/main/ndmp.tex
@@ -213,7 +213,7 @@ \subsubsection{Add a NDMP resource}
\subsection{Bareos Director: Configure a Paired Storage}

For NDMP Backups, we always need two storages that are paired together.
-The default configuration already has a Storage \resourcename{File} defined:
+The default configuration already has a Storage \resourcename{Dir}{Storage}{File} defined:

\begin{bconfig}{}
Storage {
@@ -225,7 +225,7 @@ \subsection{Bareos Director: Configure a Paired Storage}
}
\end{bconfig}

-We now add a paired storage to the already existing \resourcename{File} storage:
+We now add a paired storage to the already existing \resourcename{Dir}{Storage}{File} storage:
\begin{bconfig}{}
#
# Same storage daemon but via NDMP protocol.
@@ -720,12 +720,12 @@ \section{NDMP Copy Jobs}
\index[general]{NDMP!Copy jobs}

To be able to do copy jobs, we need to have a second storage resource where we can copy the data to.
-Depending on your requirements, this resource can be added to the existing \bareosSd (e.g. \resourcename{autochanger-0} for tape based backups) or to an additional \bareosSd.
+Depending on your requirements, this resource can be added to the existing \bareosSd (e.g. \resourcename{Sd}{Device}{autochanger-0} for tape based backups) or to an additional \bareosSd.

We set up an additional \bareosSd on a host named \host{bareos-sd2.example.com}
-with the default \resourcename{FileStorage} device.
+with the default \resourcename{Sd}{Device}{FileStorage} device.

-When this is done, add a second storage resource \resourcename{File2} to the \file{bareos-dir.conf}:
+When this is done, add a second storage resource \resourcename{Dir}{Storage}{File2} to the \file{bareos-dir.conf}:
\begin{bconfig}{Storage resource File2}
Storage {
  Name = File2
@@ -760,7 +760,7 @@ \section{NDMP Copy Jobs}

Then we need to set the newly defined pool as the
\linkResourceDirective{Dir}{Pool}{Next Pool}
of the pool that actually holds the data to be copied.
-In our case this is the \resourcename{Full} Pool: +In our case this is the \resourcename{Dir}{Pool}{Full} Pool: \begin{bconfig}{add Next Pool setting} # # Full Pool definition @@ -773,10 +773,10 @@ \section{NDMP Copy Jobs} \end{bconfig} -Finally, we need to define a Copy Job that will select the jobs that are in the \resourcename{Full} pool -and copy them over to the \resourcename{Copy} pool -reading the data via the \resourcename{File} Storage -and writing the data via the \resourcename{File2} Storage: +Finally, we need to define a Copy Job that will select the jobs that are in the \resourcename{Dir}{Pool}{Full} pool +and copy them over to the \resourcename{Dir}{Pool}{Copy} pool +reading the data via the \resourcename{Dir}{Storage}{File} Storage +and writing the data via the \resourcename{Dir}{Storage}{File2} Storage: \begin{bconfig}{NDMP copy job} Job { @@ -883,7 +883,7 @@ \subsection{Restore to NDMP Primary Storage System} To be able to do NDMP operations from the storage that was used to store the copies, we need to define a NDMP storage that is paired with it. -The definition is very similar to our \resourcename{NDMPFile} Storage, +The definition is very similar to our \resourcename{Dir}{Storage}{NDMPFile} Storage, as we want to restore the data to the same NDMP Storage system: \begin{bconfig}{add paired Storage resource for File2} diff --git a/manuals/en/main/recycling.tex b/manuals/en/main/recycling.tex index cd939db..fc636c6 100644 --- a/manuals/en/main/recycling.tex +++ b/manuals/en/main/recycling.tex @@ -1,24 +1,19 @@ -%% -%% - \section{Automatic Volume Recycling} \label{RecyclingChapter} \index[general]{Recycle!Automatic Volume} -\index[general]{Automatic!Volume Recycling} +\index[general]{Volume!Recycle!Automatic} By default, once Bareos starts writing a Volume, it can append to the volume, but it will not overwrite the existing data thus destroying it. 
-However when Bareos {\bf recycles} a Volume, the Volume becomes available -for being reused, and Bareos can at some later time overwrite the previous +However when Bareos recycles a Volume, the Volume becomes available +for being reused and Bareos can at some later time overwrite the previous contents of that Volume. Thus all previous data will be lost. If the Volume is a tape, the tape will be rewritten from the beginning. If the Volume is a disk file, the file will be truncated before being rewritten. You may not want Bareos to automatically recycle (reuse) tapes. This would require a large number of tapes though, and in such a case, it is possible -to manually recycle tapes. For more on manual recycling, see the section -entitled \ilink{ Manually Recycling Volumes}{manualrecycling} below in this -chapter. +to manually recycle tapes. For more on manual recycling, see the \nameref{manualrecycling} chapter. Most people prefer to have a Pool of tapes that are used for daily backups and recycled once a week, another Pool of tapes that are used for Full backups @@ -29,46 +24,50 @@ \section{Automatic Volume Recycling} By properly defining your Volume Pools with appropriate Retention periods, Bareos can manage the recycling (such as defined above) automatically. -Automatic recycling of Volumes is controlled by four records in the {\bf -Pool} resource definition in the Director's configuration file. These four -records are: +Automatic recycling of Volumes is controlled by four records in the \resourcetype{Dir}{Pool} +resource definition. 
+These four records are:

\begin{itemize}
-\item AutoPrune = yes
-\item VolumeRetention = {\textless}time{\textgreater}
-\item Recycle = yes
-\item RecyclePool = {\textless}APool{\textgreater}
+\item \linkResourceDirective{Dir}{Pool}{Auto Prune} = yes
+\item \linkResourceDirective{Dir}{Pool}{Volume Retention}
+\item \linkResourceDirective{Dir}{Pool}{Recycle} = yes
+\item \linkResourceDirective{Dir}{Pool}{Recycle Pool}
\end{itemize}

The first three directives are all you need assuming that you fill
each of your Volumes and then wait the Volume Retention period before reusing
them. If you want Bareos to stop using a Volume and recycle
-it before it is full, you will need to use one or more additional
+it before it is full, you can use one or more additional
directives such as:

\begin{itemize}
-\item Use Volume Once = yes
-\item Volume Use Duration = ttt
-\item Maximum Volume Jobs = nnn
-\item Maximum Volume Bytes = mmm
+\item \linkResourceDirective{Dir}{Pool}{Volume Use Duration}
+\item \linkResourceDirective{Dir}{Pool}{Maximum Volume Jobs}
+\item \linkResourceDirective{Dir}{Pool}{Maximum Volume Bytes}
\end{itemize}

Please see below and the \ilink{Basic Volume Management}{DiskChapter} chapter
-of this manual for more complete examples.
+of this manual for complete examples.

Automatic recycling of Volumes is performed by Bareos only when it wants a
new Volume and no appendable Volumes are available in the Pool. It will then
search the Pool for any Volumes with the {\bf Recycle} flag set and the
-Volume Status is {\bf Purged}. At that point, it will choose the oldest
+Volume Status is \volumestatus{Purged}. At that point, it will choose the oldest
purged volume and recycle it.

-If there are no volumes with Status {\bf Purged}, then
+If there are no volumes with status \volumestatus{Purged}, then
the recycling occurs in two steps:
-The first is that the Catalog for a Volume must be pruned of all Jobs (i.e.
-Purged).
Files contained on that Volume, and the second step is the actual
-recycling of the Volume. Only Volumes marked {\bf Full} or {\bf Used} will
-be considerd for pruning. The Volume will be purged if the VolumeRetention
-period has expired. When a Volume is marked as Purged, it means that no
-Catalog records reference that Volume, and the Volume can be recycled.
+\begin{enumerate}
+  \item The Catalog for a Volume must be pruned of all Jobs and Files
+        contained on that Volume (i.e. the Volume gets \volumestatus{Purged}).
+  \item The actual recycling of the Volume.
+\end{enumerate}
+
+Only Volumes marked \volumestatus{Full} or \volumestatus{Used} will
+be considered for pruning. The Volume will be purged if the \volumeparameter{Volume Retention}{}
+period has expired. When a Volume is marked as \volumestatus{Purged}, it means that no
+Catalog records reference that Volume and the Volume can be recycled.
+
Until recycling actually occurs, the Volume data remains intact. If no
Volumes can be found for recycling for any of the reasons stated above,
Bareos will request operator intervention (i.e. it will ask you to label a
@@ -76,16 +75,17 @@ \section{Automatic Volume Recycling}
A key point mentioned above, that can be a source of frustration, is that
Bareos will only recycle purged Volumes if there is no other appendable Volume
-available, otherwise, it will always write to an appendable Volume before
+available.
+Otherwise, it will always write to an appendable Volume before
recycling even if there are Volumes marked as Purged. This preserves your data
-as long as possible. So, if you wish to "force" Bareos to use a purged
+as long as possible. So, if you wish to \bquote{force} Bareos to use a purged
Volume, you must first ensure that no other Volume in the Pool is marked {\bf
Append}. If necessary, you can manually set a volume to {\bf Full}. The
reason for this is that Bareos wants to preserve the data on your old tapes
(even though purged from the catalog) as long as absolutely possible before
overwriting it.
There are also a number of directives such as
-
There are also a number of directives such as -{\bf Volume Use Duration} that will automatically mark a volume as {\bf -Used} and thus no longer appendable. +\volumeparameter{Volume Use Duration}{} that will automatically mark a volume as \volumestatus{Used} +and thus no longer appendable. \subsection{Automatic Pruning} \label{AutoPruning} @@ -101,17 +101,21 @@ \subsection{Automatic Pruning} Bareos's process for removing entries from the catalog is called Pruning. The default is Automatic Pruning, which means that once an entry reaches a certain -age (e.g. 30 days old) it is removed from the catalog. Note that Job records -that are required for current restore won't be removed automatically, and File -records are needed for VirtualFull and Accurate backups. Once a job has been +age (e.g. 30 days old) it is removed from the catalog. Note that +Job records that are required for current restore and +File records are needed for VirtualFull and Accurate backups +won't be removed automatically. + +Once a job has been pruned, you can still restore it from the backup tape, but one additional step -is required: scanning the volume with bscan. The alternative to Automatic -Pruning is Manual Pruning, in which you explicitly tell Bareos to erase the +is required: scanning the volume with \command{bscan}. + +The alternative to Automatic Pruning is Manual Pruning, +in which you explicitly tell Bareos to erase the catalog entries for a volume. You'd usually do this when you want to reuse a Bareos volume, because there's no point in keeping a list of files that USED TO BE on a tape. Or, if the catalog is starting to get too big, you could prune -the oldest jobs to save space. Manual pruning is done with the \ilink{prune - command}{ManualPruning} in the console. +the oldest jobs to save space. Manual pruning is done with the \ilink{prune command}{ManualPruning} in the console. 
\subsection{Pruning Directives}
\index[general]{Pruning!Directives}
@@ -119,7 +123,7 @@ \subsection{Pruning Directives}

There are three pruning durations. All apply to catalog database records and
not to the actual data in a Volume. The pruning (or retention) durations are
for: Volumes (Media records), Jobs (Job records), and Files (File records).
-The durations inter-depend a bit because if Bareos prunes a Volume, it
+The durations inter-depend because if Bareos prunes a Volume, it
automatically removes all the Job records, and all the File records. Also when
a Job record is pruned, all the File records for that Job are also pruned
(deleted) from the catalog.
@@ -132,7 +136,7 @@ \subsection{Pruning Directives}
cannot use the Console restore command to restore the files.

When a Job record is pruned, the Volume (Media record) for that Job can still
-remain in the database, and if you do a "list volumes", you will see the
+remain in the database, and if you do a \bcommand{list}{volumes}, you will see the
volume information, but the Job records (and their File records) will no longer
be available.
@@ -140,132 +144,123 @@ \subsection{Pruning Directives}
also prevents the catalog from growing to be too large. You choose the
retention periods depending on how many files you are backing up, the
time periods you want to keep those records online, and the size of the
-database. You can always re-insert the records (with 98\% of the original data)
-by using "bscan" to scan in a whole Volume or any part of the volume that
+database.
+It is possible to re-insert the records (with 98\% of the original data)
+by using \command{bscan} to scan in a whole Volume or any part of the volume that
you want.

-By setting {\bf AutoPrune} to {\bf yes} you will permit {\bf Bareos} to
+By setting \linkResourceDirective{Dir}{Pool}{Auto Prune} = yes you will permit
+the \bareosDir to
automatically prune all Volumes in the Pool when a Job needs another
Volume.
Volume pruning means removing records from the catalog. It does not shrink the size of the Volume or affect the Volume data until the Volume gets overwritten. When a Job requests another volume and there are no Volumes with -Volume Status {\bf Append} available, Bareos will begin volume pruning. This -means that all Jobs that are older than the {\bf VolumeRetention} period will -be pruned from every Volume that has Volume Status {\bf Full} or {\bf Used} -and has Recycle set to {\bf yes}. Pruning consists of deleting the +Volume status \volumestatus{Append} available, Bareos will begin volume pruning. This +means that all Jobs that are older than the \volumeparameter{Volume Retention}{} period will +be pruned from every Volume that has Volume status \volumestatus{Full} or \volumestatus{Used} +and has \volumeparameter{Recycle}{yes}. Pruning consists of deleting the corresponding Job, File, and JobMedia records from the catalog database. No change to the physical data on the Volume occurs during the pruning process. When all files are pruned from a Volume (i.e. no records in the catalog), the -Volume will be marked as {\bf Purged} implying that no Jobs remain on the +Volume will be marked as \volumestatus{Purged} implying that no Jobs remain on the volume. The Pool records that control the pruning are described below. \begin{description} -\item [AutoPrune = {\textless}yes|no{\textgreater}] - \index[dir]{AutoPrune} - If AutoPrune is set to {\bf yes} (default), Bareos - will automatically apply the Volume retention period when running a Job and - it needs a new Volume but no appendable volumes are available. At that point, - Bareos will prune all Volumes that can be pruned (i.e. AutoPrune set) in an +\item \linkResourceDirective{Dir}{Pool}{Auto Prune} = yes: + when a running Job needs a new Volume but no appendable volumes are available, the Volume retention period is applied.
+ At that point, + Bareos will prune all Volumes that can be pruned in an attempt to find a usable volume. If during the autoprune, all files are - pruned from the Volume, it will be marked with VolStatus {\bf Purged}. The - default is {\bf yes}. Note, that although the File and Job records may be - pruned from the catalog, a Volume will be marked Purged (and hence - ready for recycling) if the Volume status is Append, Full, Used, or Error. - If the Volume has another status, such as Archive, Read-Only, Disabled, - Busy, or Cleaning, the Volume status will not be changed to Purged. - -\item [Volume Retention = {\textless}time-period-specification{\textgreater}] - \index[dir]{Volume Retention} - The Volume Retention record defines the length of time that Bareos will + pruned from the Volume, it will be marked with Volume status \volumestatus{Purged}. + + Note that although the File and Job records may be + pruned from the catalog, a Volume will only be marked \volumestatus{Purged} (and hence + ready for recycling) if the Volume status is \volumestatus{Append}, \volumestatus{Full}, \volumestatus{Used}, or \volumestatus{Error}. + If the Volume has another status, such as \volumestatus{Archive}, \volumestatus{Read-Only}, \volumestatus{Disabled}, + \volumestatus{Busy}, or \volumestatus{Cleaning}, the Volume status will not be changed to \volumestatus{Purged}. + +\item \linkResourceDirective{Dir}{Pool}{Volume Retention} + defines the length of time that Bareos will guarantee that the Volume is not reused counting from the time the last job stored on the Volume terminated. A key point is that this time period is not even considered as long as the Volume remains appendable. - The Volume Retention period count down begins only when the Append - status has been changed to some othe status (Full, Used, Purged, ...).
+ The Volume Retention period countdown begins only when the \volumestatus{Append} + status has been changed to some other status (\volumestatus{Full}, \volumestatus{Used}, \volumestatus{Purged}, ...). - When this time period expires, and if {\bf AutoPrune} is set to {\bf - yes}, and a new Volume is needed, but no appendable Volume is available, + When this time period expires, if \linkResourceDirective{Dir}{Pool}{Auto Prune} = yes + and a new Volume is needed but no appendable Volume is available, Bareos will prune (remove) Job records that are older than the specified - Volume Retention period. + \volumeparameter{Volume Retention}{} period. - The Volume Retention period takes precedence over any Job Retention + The \volumeparameter{Volume Retention}{} period takes precedence over any \linkResourceDirective{Dir}{Client}{Job Retention} period you have specified in the Client resource. It should also be - noted, that the Volume Retention period is obtained by reading the + noted that the \volumeparameter{Volume Retention}{} period is obtained by reading the Catalog Database Media record rather than the Pool resource record. - This means that if you change the VolumeRetention in the Pool resource + This means that if you change the \linkResourceDirective{Dir}{Pool}{Volume Retention} in the Pool resource record, you must ensure that the corresponding change is made in the - catalog by using the {\bf update pool} command. Doing so will insure - that any new Volumes will be created with the changed Volume Retention - period. Any existing Volumes will have their own copy of the Volume - Retention period that can only be changed on a Volume by Volume basis - using the {\bf update volume} command. - - When all file catalog entries are removed from the volume, its VolStatus is - set to {\bf Purged}. The files remain physically on the Volume until the + catalog by using the \bcommand{update}{pool} command.
Doing so will ensure + that any new Volumes will be created with the changed \volumeparameter{Volume Retention}{} + period. Any existing Volumes will have their own copy of the \volumeparameter{Volume Retention}{} + period that can only be changed on a Volume by Volume basis + using the \bcommand{update}{volume} command. + + When all file catalog entries are removed from the volume, its Volume status is + set to \volumestatus{Purged}. The files remain physically on the Volume until the volume is overwritten. - Retention periods are specified in seconds, minutes, hours, days, weeks, - months, quarters, or years on the record. See the - \ilink{Configuration chapter}{Time} of this manual for - additional details of time specification. - -The default is 1 year. -% TODO: if that is the format, should it be in quotes? decide on a style - -\item [Recycle = {\textless}yes|no{\textgreater}] - \index[dir]{Recycle} - This statement tells Bareos whether or not the particular Volume can be - recycled (i.e. rewritten). If Recycle is set to {\bf no} (the - default), then even if Bareos prunes all the Jobs on the volume and it - is marked {\bf Purged}, it will not consider the tape for recycling. If - Recycle is set to {\bf yes} and all Jobs have been pruned, the volume - status will be set to {\bf Purged} and the volume may then be reused +\item \linkResourceDirective{Dir}{Pool}{Recycle} + defines whether or not the particular Volume can be + recycled (i.e. rewritten). If Recycle is set to \parameter{no}, + then even if Bareos prunes all the Jobs on the volume and it + is marked \volumestatus{Purged}, it will not consider the tape for recycling. If + Recycle is set to \parameter{yes} and all Jobs have been pruned, the volume + status will be set to \volumestatus{Purged} and the volume may then be reused when another volume is needed. If the volume is reused, it is relabeled with the same Volume Name, however all previous data will be lost.
- \end{description} +\end{description} - It is also possible to "force" pruning of all Volumes in the Pool - associated with a Job by adding {\bf Prune Files = yes} to the Job resource. +% It is also possible to force pruning of all Volumes in the Pool +% associated with a Job by adding {\bf Prune Files = yes} to the Job resource. \subsection{Recycling Algorithm} \index[general]{Algorithm!Recycling} -\index[general]{Recycling Algorithm} +\index[general]{Recycle!Algorithm} \label{RecyclingAlgorithm} \label{Recycling} After all Volumes of a Pool have been pruned (as mentioned above, this happens when a Job needs a new Volume and no appendable Volumes are available), Bareos -will look for the oldest Volume that is Purged (all Jobs and Files expired), -and if the {\bf Recycle} flag is on (Recycle=yes) for that Volume, Bareos will +will look for the oldest Volume that is \volumestatus{Purged} (all Jobs and Files expired), +and if \volumeparameter{Recycle}{yes} is set for that Volume, Bareos will relabel it and write new data on it. As mentioned above, there are two key points for getting a Volume -to be recycled. First, the Volume must no longer be marked Append (there +to be recycled. First, the Volume must no longer be marked \volumestatus{Append} (there are a number of directives to automatically make this change), and second since the last write on the Volume, one or more of the Retention periods must have expired so that there are no more catalog backup job records that reference that Volume. Once both those conditions are satisfied, -the volume can be marked \volumestatus{Purged} and hence recycled. +the volume can be marked \volumestatus{Purged} and hence recycled.
The full algorithm that Bareos uses when it needs a new Volume is: \index[general]{New Volume Algorithm} \index[general]{Algorithm!New Volume} -The algorithm described below assumes that AutoPrune is enabled, +The algorithm described below assumes that \configdirective{Auto Prune} is enabled, that Recycling is turned on, and that you have defined -appropriate Retention periods, or used the defaults for all these +appropriate Retention periods or used the defaults for all these items. -\begin{itemize} +\begin{enumerate} \item If the request is for an Autochanger device, look only for Volumes in the Autochanger (i.e. with InChanger set and that have the correct Storage device). -\item Search the Pool for a Volume with VolStatus=Append (if there is more +\item Search the Pool for a Volume with Volume status=\volumestatus{Append} (if there is more than one, the Volume with the oldest date last written is chosen. If two have the same date then the one with the lowest MediaId is chosen). -\item Search the Pool for a Volume with VolStatus=Recycle and the InChanger +\item Search the Pool for a Volume with Volume status=\volumestatus{Recycle} and the InChanger flag is set true (if there is more than one, the Volume with the oldest date last written is chosen. If two have the same date then the one with the lowest MediaId is chosen). @@ -275,46 +270,44 @@ \subsection{Recycling Algorithm} records are pruned from a Volume, the Volume will not be marked Purged until the Volume retention period expires. \item Search the Pool for a Volume with VolStatus=Purged -\item If a Pool named "Scratch" exists, search for a Volume and if found +\item If a Pool named \pool{Scratch} exists, search for a Volume and if found move it to the current Pool for the Job and use it. Note, when the Scratch Volume is moved into the current Pool, the basic Pool defaults are applied as if it is a newly labeled Volume - (equivalent to an {\bf update volume from pool} command). 
+ (equivalent to an \bcommand{update}{volume from pool} command). \item If we were looking for Volumes in the Autochanger, go back to step 2 above, but this time, look for any Volume whether or not it is in the Autochanger. -\item Attempt to create a new Volume if automatic labeling enabled - If Python is enabled, a Python NewVolume event is generated before - the Label Format directve is used. If the maximum number of Volumes - specified for the pool is reached, a new Volume will not be created. -\item Prune the oldest Volume if RecycleOldestVolume=yes (the Volume with the +\item Attempt to create a new Volume if automatic labeling is enabled. + If the maximum number of Volumes + specified for the pool is reached, no new Volume will be created. +\item Prune the oldest Volume if \linkResourceDirective{Dir}{Pool}{Recycle Oldest Volume}=yes (the Volume with the oldest LastWritten date and VolStatus equal to Full, Recycle, Purged, Used, or Append is chosen). This record ensures that all retention periods are properly respected. -\item Purge the oldest Volume if PurgeOldestVolume=yes (the Volume with the +\item Purge the oldest Volume if \linkResourceDirective{Dir}{Pool}{Purge Oldest Volume}=yes (the Volume with the oldest LastWritten date and VolStatus equal to Full, Recycle, Purged, Used, - or Append is chosen). We strongly recommend against the use of {\bf - PurgeOldestVolume} as it can quite easily lead to loss of current backup - data. + or Append is chosen). + \warning{We strongly recommend against the use of \configdirective{Purge Oldest Volume} as it can quite easily lead to loss of current backup + data.} \item Give up and ask operator. -\end{itemize} +\end{enumerate} The above occurs when Bareos has finished writing a Volume or when no Volume is present in the drive. On the other hand, if you have inserted a different Volume after the last job, and Bareos recognizes the Volume as valid, it will request authorization from
In this case, if you have set {\bf Recycle -Current Volume = yes} and the Volume is marked as Used or Full, Bareos will +the Director to use this Volume. In this case, if you have set +\linkResourceDirective{Dir}{Pool}{Recycle Current Volume} = yes and the Volume is marked as \volumestatus{Used} or \volumestatus{Full}, Bareos will prune the volume and if all jobs were removed during the pruning (respecting the retention periods), the Volume will be recycled and used. The recycling algorithm in this case is: \begin{itemize} -\item If the VolStatus is {\bf Append} or {\bf Recycle} - is set, the volume will be used. -\item If {\bf Recycle Current Volume} is set and the volume is marked {\bf - Full} or {\bf Used}, Bareos will prune the volume (applying the retention +\item If the Volume status is \volumestatus{Append} or \volumestatus{Recycle}, the volume will be used. +\item If \linkResourceDirective{Dir}{Pool}{Recycle Current Volume} = yes and the volume is + marked \volumestatus{Full} or \volumestatus{Used}, Bareos will prune the volume (applying the retention period). If all Jobs are pruned from the volume, it will be recycled. \end{itemize} @@ -324,18 +317,14 @@ \subsection{Recycling Algorithm} A few points from Alan Brown to keep in mind: -\begin{enumerate} -\item If a pool doesn't have maximum volumes defined then Bareos will prefer to +\begin{itemize} +\item If \linkResourceDirective{Dir}{Pool}{Maximum Volumes} is not set, Bareos will prefer to demand new volumes over forcibly purging older volumes. \item If volumes become free through pruning and the Volume retention period has - expired, then they get marked as "purged" and are immediately available for + expired, then they get marked as \volumestatus{Purged} and are immediately available for recycling - these will be used in preference to creating new volumes.
- -\item If the Job, File, and Volume retention periods are different, then - it's common to see a tape with no files or jobs listed in the database, - but which is still not marked as "purged". -\end{enumerate} +\end{itemize} \subsection{Recycle Status} @@ -407,7 +396,7 @@ \subsection{Recycle Status} A typical volume life cycle is like this: because job count or size limit exceeded - Append ----------------------------------------> Used + Append --------------------------------------> Used/Full ^ | | First Job writes to Retention time passed | | the volume and recycling takes | @@ -420,41 +409,7 @@ \subsection{Recycle Status} \normalsize -\subsection{Making Bareos Use a Single Tape} -\label{singletape} -\index[general]{Tape!Making Bareos Use a Single} - -Most people will want Bareos to fill a tape and when it is full, a new tape -will be mounted, and so on. However, as an extreme example, it is possible for -Bareos to write on a single tape, and every night to rewrite it. To get this -to work, you must do two things: first, set the VolumeRetention to less than -your save period (one day), and the second item is to make Bareos mark the -tape as full after using it once. This is done using {\bf UseVolumeOnce = -yes}. If this latter record is not used and the tape is not full after the -first time it is written, Bareos will simply append to the tape and eventually -request another volume. Using the tape only once, forces the tape to be marked -{\bf Full} after each use, and the next time {\bf Bareos} runs, it will -recycle the tape. 
- -An example Pool resource that does this is: - -\footnotesize -\begin{verbatim} -Pool { - Name = DDS-4 - Use Volume Once = yes - Pool Type = Backup - AutoPrune = yes - VolumeRetention = 12h # expire after 12 hours - Recycle = yes -} -\end{verbatim} -\normalsize - \subsection{Daily, Weekly, Monthly Tape Usage Example} -\label{usageexample} -\index[general]{Daily, Weekly, Monthly Tape Usage Example} -\index[general]{Example!Daily Weekly Monthly Tape Usage} This example is meant to show you how one could define a fixed set of volumes that Bareos will rotate through on a regular schedule. There are an infinite @@ -669,11 +624,9 @@ \subsection{Automatic Pruning and Recycling Example} \subsection{Manually Recycling Volumes} \label{manualrecycling} -\index[general]{Volumes!Manually Recycling} -\index[general]{Manually Recycling Volumes} +\index[general]{Volume!Recycle!Manual} \index[general]{Recycle!Manual} - Although automatic recycling of Volumes is implemented (see the \nameref{RecyclingChapter} chapter of this manual), you may want to manually force reuse (recycling) of a Volume. @@ -682,50 +635,18 @@ \subsection{Manually Recycling Volumes} new data on the tape, the steps to take are: \begin{itemize} -\item Use the {\bf update volume} command in the Console to ensure that the - {\bf Recycle} field is set to {\bf 1} -\item Use the {\bf purge jobs volume} command in the Console to mark the - Volume as {\bf Purged}. Check by using {\bf list volumes}. +\item Use the \bcommand{update}{volume} command in the Console to ensure that + \volumeparameter{Recycle}{yes} is set. +\item Use the \bcommand{purge}{jobs volume} command in the Console to mark the + Volume as \volumestatus{Purged}. Check by using \bcommand{list}{volumes}. \end{itemize} Once the Volume is marked Purged, it will be recycled the next time a Volume is needed.
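+Put together, a manual recycling session in the console could look like this sketch (the volume name \parameter{Full-0001} is a hypothetical example; the exact prompts vary by version):
+
+\footnotesize
+\begin{verbatim}
+*update volume=Full-0001 recycle=yes
+*purge jobs volume=Full-0001
+*list volumes
+\end{verbatim}
+\normalsize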
-If you wish to reuse the tape by giving it a new name, follow the following -steps: - -\begin{itemize} -\item Use the \bcommand{purge jobs volume}{} command in the Console to mark the - Volume as {\bf Purged}. Check by using \bcommand{list volumes}{}. -\item Use the Console \bcommand{relabel}{} command to relabel the Volume. -\end{itemize} - -Please note that the relabel command applies only to tape Volumes. - -For Bareos versions prior to 1.30 or to manually relabel the Volume, use the -instructions below: - -\begin{itemize} -\item Use the {\bf delete volume} command in the Console to delete the Volume - from the Catalog. -\item If a different tape is mounted, use the {\bf unmount} command, - remove the tape, and insert the tape to be renamed. -\item Write an EOF mark in the tape using the following commands: - -\footnotesize -\begin{verbatim} - mt -f /dev/nst0 rewind - mt -f /dev/nst0 weof -\end{verbatim} -\normalsize - -where you replace {\bf /dev/nst0} with the appropriate device name on your -system. -\item Use the \bcommand{label}{} command to write a new label to the tape and to - enter it in the catalog. -\end{itemize} +If you wish to reuse the tape by giving it a new name, use the \bcommand{relabel}{} command instead of the \bcommand{purge}{} command. -Please be aware that the {\bf delete} command can be dangerous. Once it is +\warning{The \bcommand{delete}{} command can be dangerous. Once it is done, to recover the File records, you must either restore your database as it -was before the {\bf delete} command, or use the {\bf bscan} utility program to -scan the tape and recreate the database entries.
+was before the \bcommand{delete}{} command, or use the \nameref{bscan} utility program to +scan the tape and recreate the database entries.} diff --git a/manuals/en/main/spooling.tex b/manuals/en/main/spooling.tex index fd7df26..e96bcf8 100644 --- a/manuals/en/main/spooling.tex +++ b/manuals/en/main/spooling.tex @@ -4,6 +4,7 @@ \chapter{Data Spooling} \label{SpoolingChapter} \label{sec:spooling} +\label{sec:DataSpooling} \index[general]{Data Spooling} \index[general]{Spooling!Data} diff --git a/manuals/en/main/troubleshooting.tex b/manuals/en/main/troubleshooting.tex index 57b971a..c0582d7 100644 --- a/manuals/en/main/troubleshooting.tex +++ b/manuals/en/main/troubleshooting.tex @@ -59,12 +59,6 @@ \subsection{Authorization Errors} this manual. You will run a backup to disk and a restore. Only when that works, should you begin customization of the configuration files. - Another reason that you can get authentication errors is if you are - running Multiple Concurrent Jobs in the Director, but you have not set - them in the File daemon or the Storage daemon. Once you reach their - limit, they will reject the connection producing authentication (or - connection) errors. - Some users report that authentication fails if there is not a proper reverse DNS lookup entry for the machine. This seems to be a requirement of gethostbyname(), which is what Bareos uses to translate @@ -80,13 +74,14 @@ \subsection{Authorization Errors} \end{center} In the left column, you will find the Director, Storage, and Client - resources, with their names and passwords -- these are all in {\bf - bareos-dir.conf}. The right column is where the corresponding values + resources, with their names and passwords -- these are all in the + \bareosDir configuration. + The right column is where the corresponding values should be found in the Console, Storage daemon (SD), and File daemon (FD) configuration files.
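+ As an illustration of that pairing (all names and passwords here are placeholders), the password in the Director's Client resource must match the one in the File Daemon's Director resource:
+
+\footnotesize
+\begin{verbatim}
+# Director configuration (e.g. bareos-dir.conf)
+Client {
+  Name = client1-fd
+  Address = client1.example.com
+  Password = "MySecretPassword"  # must match the FD's Director password
+}
+
+# File Daemon configuration (e.g. bareos-fd.conf)
+Director {
+  Name = bareos-dir
+  Password = "MySecretPassword"  # must match the Director's Client password
+}
+\end{verbatim}
+\normalsize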
Another thing to check is to ensure that the Bareos component you are - trying to access has {\bf Maximum Concurrent Jobs} set large enough to + trying to access has \configdirective{Maximum Concurrent Jobs} set large enough to handle each of the Jobs and the Console that want to connect simultaneously. Once the maximum connections has been reached, each Bareos component will reject all new connections. @@ -97,33 +92,46 @@ \section{Concurrent Jobs} \index[general]{Running Concurrent Jobs} \index[general]{Concurrent Jobs} -Bareos can run multiple concurrent jobs. -Using the {\bf Maximum Concurrent Jobs} directive, you -can configure how many and which jobs can be run simultaneously. -The Director's default value for {\bf Maximum Concurrent Jobs} is "1". - -To initially setup concurrent jobs you need to define {\bf Maximum Concurrent Jobs} in -the Director's configuration file (bareos-dir.conf) in the -Director, Job, Client, and Storage resources. - -Additionally the File daemon, and the Storage daemon each have their own -{\bf Maximum Concurrent Jobs} directive that sets the overall maximum -number of concurrent jobs the daemon will run. The default for both the -File daemon and the Storage daemon is "20". +Bareos can run multiple concurrent jobs. 
Using the \configdirective{Maximum Concurrent Jobs} directives, you +can configure how many and which jobs can be run simultaneously: +\begin{description} + \item[\bareosDir] \hfill\\ + \begin{itemize} + \item \linkResourceDirective{Dir}{Director}{Maximum Concurrent Jobs} + \item \linkResourceDirective{Dir}{Client}{Maximum Concurrent Jobs} + \item \linkResourceDirective{Dir}{Job}{Maximum Concurrent Jobs} + \item \linkResourceDirective{Dir}{Storage}{Maximum Concurrent Jobs} + \end{itemize} + \item[\bareosSd] \hfill\\ + \begin{itemize} + \item \linkResourceDirective{Sd}{Storage}{Maximum Concurrent Jobs} + \item \linkResourceDirective{Sd}{Device}{Maximum Concurrent Jobs} + \end{itemize} + \item[\bareosFd] \hfill\\ + \begin{itemize} + \item \linkResourceDirective{Fd}{Client}{Maximum Concurrent Jobs} + \end{itemize} +\end{description} For example, if you want two different jobs to run simultaneously backing up the same Client to the same Storage device, they will run concurrently only if -you have set {\bf Maximum Concurrent Jobs} greater than one in the Director -resource, the Client resource, and the Storage resource in bareos-dir.conf. - -We recommend that you read the \ilink{Data -Spooling}{SpoolingChapter} of this manual first, then test your multiple -concurrent backup including restore testing before you put it into -production. - -Below is a super stripped down bareos-dir.conf file showing you the four -places where the the file must be modified to allow the same job {\bf -NightlySave} to run up to four times concurrently. The change to the Job +you have set \configdirective{Maximum Concurrent Jobs} greater than one in the \configresource{Director} +resource, the \configresource{Client} resource, and the \configresource{Storage} resource in the \bareosDir configuration. + +% TODO: is there a better explanation for interleaving? Then more label to that place.
+\label{sec:Interleaving} +When running concurrent jobs without \nameref{sec:DataSpooling}, the volume format becomes more complicated; +consequently, restores may take longer if Bareos must sort through interleaved volume blocks from multiple simultaneous +jobs. This can be avoided by having each simultaneous job write to +a different volume or by using data spooling. + +We recommend that you read the \nameref{sec:DataSpooling} chapter of this manual first, then test your multiple +concurrent backup including restore testing before you put it into production. + +When using random access media as backup space (e.g. disk), you should also read the chapter about \nameref{ConcurrentDiskJobs}. + +Below is a super stripped down \file{bareos-dir.conf} file showing you the four +places where the file must be modified to allow the same job \resourcename{Dir}{Job}{NightlySave} +to run up to four times concurrently. The change to the Job resource is not necessary if you want different Jobs to run at the same time, which is the normal case.