% director-resource-job-definitions.tex
\defDirective{Dir}{Job}{Accurate}{}{}{%
In accurate mode, the File daemon knows exactly which files were present
after the last backup, so it is able to handle deleted or renamed files.
When restoring a FileSet for a specified date (including "most
recent"), Bareos is able to restore exactly the files and
directories that existed at the time of the last backup prior to
that date, including ensuring that deleted files are actually deleted
and renamed directories are restored properly.
In this mode, the File daemon must keep data concerning all files in
memory, so if you do not have sufficient memory, the backup may
either be terribly slow or fail.
%% $$ memory = \sum_{i=1}^{n}(strlen(path_i + file_i) + sizeof(CurFile))$$
For 500,000 files (a typical desktop Linux system), the File daemon
will require approximately 64 megabytes of RAM to hold the
required information.
}
\defDirective{Dir}{Job}{Add Prefix}{}{}{%
This directive applies only to a Restore job and specifies a prefix for the
directory name of all files being restored. It uses the \ilink{File
Relocation}{filerelocation} feature.
}
\defDirective{Dir}{Job}{Add Suffix}{}{}{%
This directive applies only to a Restore job and specifies a suffix for all
files being restored. It uses the \ilink{File Relocation}{filerelocation}
feature.
With \texttt{Add Suffix = .old}, \texttt{/etc/passwd} will be restored as
\texttt{/etc/passwd.old}
}
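For illustration, a minimal Restore job using this directive might look as
follows (the job, client, and storage names are only examples):
\begin{verbatim}
Job {
  Name = "RestoreFiles"     # example name
  Type = Restore
  Client = client1-fd       # example client
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Add Suffix = ".old"       # /etc/passwd is restored as /etc/passwd.old
}
\end{verbatim}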
\defDirective{Dir}{Job}{Allow Duplicate Jobs}{}{}{%
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{\idir duplicate-real}
\caption{Allow Duplicate Jobs usage}
\label{fig:allowduplicatejobs}
\end{figure}
A duplicate job, in the sense used here, is a second or subsequent job
with the same name that starts while the first is still running. This
happens most frequently when the first job runs longer than expected,
for example because no tapes are available.
If this directive is enabled, duplicate jobs will be allowed to run. If
it is set to {\bf no}, then only one job of a given name
may run at one time, and the action Bareos takes to ensure that only
one job runs is determined by the other directives (see below).
If {\bf Allow Duplicate Jobs} is set to {\bf no}, two jobs
are present, and none of the three directives given below permits
cancelling a job, then the current job (the second one started)
will be cancelled.
}
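As a sketch, to allow only one instance of a job at a time and cancel any
newly queued duplicate (names and the omitted remainder of the resource are
illustrative):
\begin{verbatim}
Job {
  Name = "NightlyBackup"            # example name
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes    # a queued duplicate will be canceled
  ...
}
\end{verbatim}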
\defDirective{Dir}{Job}{Allow Higher Duplicates}{}{}{%
}
\defDirective{Dir}{Job}{Allow Mixed Priority}{}{}{%
When
set to {\bf yes} (default {\bf no}), this job may run even if lower
priority jobs are already running. This means a high priority job
will not have to wait for other jobs to finish before starting.
The scheduler will only mix priorities when all running jobs have
this set to true.
Note that only higher priority jobs will start early. Suppose the
director will allow two concurrent jobs, and that two jobs with
priority 10 are running, with two more in the queue. If a job with
priority 5 is added to the queue, it will be run as soon as one of
the running jobs finishes. However, new priority 10 jobs will not
be run until the priority 5 job has finished.
}
\defDirective{Dir}{Job}{Backup Format}{}{}{%
The backup format used for protocols which support multiple formats.
By default, it uses the Bareos \parameter{Native} Backup format.
Other protocols, such as NDMP, support different backup formats, for instance:
\begin{itemize}
\item Dump
\item Tar
\item SMTape
\end{itemize}
}
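For example, an NDMP backup job might select the dump format (the job skeleton
is only a sketch; the remaining NDMP-specific directives are omitted):
\begin{verbatim}
Job {
  Name = "ndmp-backup"     # example name
  Type = Backup
  Backup Format = dump     # one of the NDMP formats listed above
  ...
}
\end{verbatim}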
\defDirective{Dir}{Job}{Base}{}{}{%
The Base directive permits you to specify a list of jobs to be used as the
base for a Full backup. This directive is optional. See the \ilink{Base Job
chapter}{basejobs} for more information.
}
\defDirective{Dir}{Job}{Bootstrap}{}{}{%
The Bootstrap directive specifies a bootstrap file that, if provided,
will be used during {\bf Restore} Jobs and is ignored in other Job
types. The {\bf bootstrap} file contains the list of tapes to be used
in a restore Job as well as which files are to be restored.
Specification of this directive is optional, and if specified, it is
used only for a restore job. In addition, when running a Restore job
from the console, this value can be changed.
If you use the {\bf Restore} command in the Console program, to start a
restore job, the {\bf bootstrap} file will be created automatically from
the files you select to be restored.
For additional details of the {\bf bootstrap} file, please see
\ilink{Restoring Files with the Bootstrap File}{BootstrapChapter} chapter
of this manual.
}
\defDirective{Dir}{Job}{Cancel Lower Level Duplicates}{}{}{%
If \linkResourceDirective{Dir}{Job}{Allow Duplicate Jobs} is set to \parameter{no} and this
directive is set to \parameter{yes}, Bareos will choose, among the duplicated
jobs, the one with the highest level. For example, it will cancel a
previous Incremental to run a Full backup. This works only for Backup
jobs.
If the levels of the duplicated
jobs are the same, nothing is done and the directives
\linkResourceDirective{Dir}{Job}{Cancel Queued Duplicates} and \linkResourceDirective{Dir}{Job}{Cancel Running Duplicates}
will be examined.
}
\defDirective{Dir}{Job}{Cancel Queued Duplicates}{}{}{%
If \linkResourceDirective{Dir}{Job}{Allow Duplicate Jobs} is set to \parameter{no} and
if this directive is set to \parameter{yes} any job that is
already queued to run but not yet running will be canceled.
}
\defDirective{Dir}{Job}{Cancel Running Duplicates}{}{}{%
If \linkResourceDirective{Dir}{Job}{Allow Duplicate Jobs} is set to \parameter{no} and
if this directive is set to \parameter{yes} any job that is already running
will be canceled.
}
\defDirective{Dir}{Job}{Catalog}{}{13.4.0}{%
This specifies the name of the catalog resource to be used for this Job.
When a catalog is defined in a Job it will override the definition in
the client.
}
\defDirective{Dir}{Job}{Client}{}{}{%
The Client directive specifies the Client (File daemon) that will be used in
the current Job. Only a single Client may be specified in any one Job. The
Client runs on the machine to be backed up, and sends the requested files to
the Storage daemon for backup, or receives them when restoring. For
additional details, see the
\nameref{DirectorResourceClient} of this chapter.
For versions before 13.3.0, this directive is required for all Job types.
Since \sinceVersion{dir}{Director Job Resource isn't required for Copy or Migrate jobs}{13.3.0},
it is required for all Job types except Copy and Migrate jobs.
}
\defDirective{Dir}{Job}{Client Run After Job}{}{}{%
The specified {\bf command} is run on the client machine as soon
as data spooling is complete, in order to allow restarting applications
on the client as soon as possible.
Note, please see the notes above in {\bf RunScript}
concerning Windows clients.
}
\defDirective{Dir}{Job}{Client Run Before Job}{}{}{%
This directive is the same as {\bf Run Before Job} except that the
program is run on the client machine. The same restrictions apply to
Unix systems as noted above for the {\bf RunScript}.
}
\defDirective{Dir}{Job}{Description}{}{}{
}
\defDirective{Dir}{Job}{Differential Backup Pool}{}{}{%
The {\bf Differential Backup Pool} specifies a Pool to be used for
Differential backups. It will override any Pool specification during a
Differential backup. This directive is optional.
}
\defDirective{Dir}{Job}{Differential Max Runtime}{}{}{%
The time specifies the maximum allowed time that a Differential backup job may
run, counted from when the job starts, ({\bf not} necessarily the same as when
the job was scheduled).
}
\defDirective{Dir}{Job}{Differential Max Wait Time}{}{}{%
This directive has been deprecated in favor of
\linkResourceDirective{Dir}{Job}{Differential Max Runtime}.
}
\defDirective{Dir}{Job}{Enabled}{}{}{%
This directive allows you to enable or disable automatic execution
via the scheduler of a Job.
}
\defDirective{Dir}{Job}{Fileset}{}{}{%
The FileSet directive specifies the FileSet that will be used in the
current Job. The FileSet specifies which directories (or files) are to
be backed up, and what options to use (e.g. compression, ...). Only a
single FileSet resource may be specified in any one Job. For additional
details, see the \ilink{FileSet Resource section}{FileSetResource} of
this chapter.
This directive is required (for versions before 13.3.0 for all Job types;
from 13.3.0 onward for all Job types except Copy and Migrate).
}
\defDirective{Dir}{Job}{Full Backup Pool}{}{}{%
The {\bf Full Backup Pool} specifies a Pool to be used for Full backups.
It will override any Pool specification during a Full backup. This
directive is optional.
}
\defDirective{Dir}{Job}{Full Max Runtime}{}{}{
}
\defDirective{Dir}{Job}{Full Max Wait Time}{}{}{
}
\defDirective{Dir}{Job}{Incremental Backup Pool}{}{}{%
The {\bf Incremental Backup Pool} specifies a Pool to be used for
Incremental backups. It will override any Pool specification during an
Incremental backup. This directive is optional.
}
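These three level-specific pool overrides are typically combined in a single
backup job (the pool names and the omitted remainder are only examples):
\begin{verbatim}
Job {
  Name = "client1-backup"              # example name
  Type = Backup
  Pool = Default                       # used when no override matches
  Full Backup Pool = FullPool          # example pool names
  Differential Backup Pool = DiffPool
  Incremental Backup Pool = IncPool
  ...
}
\end{verbatim}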
\defDirective{Dir}{Job}{Incremental Max Runtime}{}{}{%
The time specifies the maximum allowed time that an Incremental backup job may
run, counted from when the job starts, ({\bf not} necessarily the same as when
the job was scheduled).
}
\defDirective{Dir}{Job}{Incremental Max Wait Time}{}{}{%
This directive has been deprecated in favor of
\linkResourceDirective{Dir}{Job}{Incremental Max Runtime}
}
\defDirective{Dir}{Job}{Job Defs}{}{}{
If a Job Defs resource name is specified, all the values contained in the
named JobDefs resource will be used as the defaults for the current Job.
Any value that you explicitly define in the current Job resource, will
override any defaults specified in the JobDefs resource. The use of
this directive permits writing much more compact Job resources where the
bulk of the directives are defined in one or more JobDefs. This is
particularly useful if you have many similar Jobs but with minor
variations such as different Clients.
}
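As a sketch, two similar jobs can share their common directives through a
JobDefs resource (all names are illustrative):
\begin{verbatim}
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = Default
}

Job {
  Name = "client1-backup"
  Client = client1-fd
  JobDefs = "DefaultJob"   # inherits all defaults above
}

Job {
  Name = "client2-backup"
  Client = client2-fd
  JobDefs = "DefaultJob"
  Level = Full             # overrides the JobDefs default
}
\end{verbatim}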
\defDirective{Dir}{Job}{Job To Verify}{}{}{
}
\defDirective{Dir}{Job}{Level}{}{}{%
The Level directive specifies the default Job level to be run. Each
different Job Type (Backup, Restore, Verify, ...) has a different set of Levels
that can be specified. The Level is normally overridden by a different
value that is specified in the {\bf Schedule} resource. This directive
is not required, but must be specified either by a {\bf Level} directive
or as an override specified in the {\bf Schedule} resource.
\begin{description}
\item [Backup] \hfill \\
For a {\bf Backup} Job, the Level may be one of the following:
\begin{description}
\item [Full] \hfill \\
\index[dir]{Full}
When the Level is set to Full, all files in the FileSet will be backed up,
whether or not they have changed.
\item [Incremental] \hfill \\
\index[dir]{Incremental}
When the Level is set to Incremental, all files specified in the FileSet
that have changed since the last successful backup of the same Job
(using the same FileSet and Client) will be backed up. If the Director
cannot find a previous valid Full backup, then the job will be upgraded
to a Full backup. When the Director looks for a valid backup record
in the catalog database, it looks for a previous Job with:
\begin{itemize}
\item The same Job name.
\item The same Client name.
\item The same FileSet (any change to the definition of the FileSet, such as
adding or deleting a file in the Include or Exclude sections, constitutes a
different FileSet).
\item The Job was a Full, Differential, or Incremental backup.
\item The Job terminated normally (i.e. did not fail or was not canceled).
\item The Job started no longer ago than {\bf Max Full Interval}.
\end{itemize}
If any of the above conditions is not met, the Director will upgrade the
Incremental to a Full save. Otherwise, the Incremental backup will be
performed as requested.
The File daemon (Client) decides which files to backup for an
Incremental backup by comparing start time of the prior Job (Full,
Differential, or Incremental) against the time each file was last
"modified" (st\_mtime) and the time its attributes were last
"changed" (st\_ctime). If the file was modified or its attributes
changed on or after this start time, it will then be backed up.
Some virus scanning software may change st\_ctime while
doing the scan. For example, if the virus scanning program attempts to
reset the access time (st\_atime), which Bareos does not use, it will
cause st\_ctime to change and hence Bareos will backup the file during
an Incremental or Differential backup. In the case of Sophos virus
scanning, you can prevent it from resetting the access time (st\_atime)
and hence changing st\_ctime by using the \parameter{--no-reset-atime}
option. For other software, please see their manual.
When Bareos does an Incremental backup, all modified files that are
still on the system are backed up. However, any file that has been
deleted since the last Full backup remains in the Bareos catalog,
which means that if between a Full save and the time you do a
restore, some files are deleted, those deleted files will also be
restored. The deleted files will no longer appear in the catalog
after doing another Full save.
In addition, if you move a directory rather than copy it, the files in
it do not have their modification time (st\_mtime) or their attribute
change time (st\_ctime) changed. As a consequence, those files will
probably not be backed up by an Incremental or Differential backup which
depend solely on these time stamps. If you move a directory, and wish
it to be properly backed up, it is generally preferable to copy it, then
delete the original.
However, to handle deleted files or directory changes in the
catalog during an Incremental backup, you can use \ilink{accurate mode}{accuratemode}.
Note that accurate mode is quite memory-consuming.
\item [Differential] \hfill \\
\index[dir]{Differential}
When the Level is set to Differential
all files specified in the FileSet that have changed since the last
successful Full backup of the same Job will be backed up.
If the Director cannot find a
valid previous Full backup for the same Job, FileSet, and Client,
then the Differential job will be upgraded to a Full backup.
When the Director looks for a valid Full backup record in the catalog
database, it looks for a previous Job with:
\begin{itemize}
\item The same Job name.
\item The same Client name.
\item The same FileSet (any change to the definition of the FileSet, such as
adding or deleting a file in the Include or Exclude sections, constitutes a
different FileSet).
\item The Job was a FULL backup.
\item The Job terminated normally (i.e. did not fail or was not canceled).
\item The Job started no longer ago than {\bf Max Full Interval}.
\end{itemize}
If any of the above conditions is not met, the Director will upgrade the
Differential to a Full save. Otherwise, the Differential backup will be
performed as requested.
The File daemon (Client) decides which files to backup for a
differential backup by comparing the start time of the prior Full backup
Job against the time each file was last "modified" (st\_mtime) and the
time its attributes were last "changed" (st\_ctime). If the file was
modified or its attributes were changed on or after this start time, it
will then be backed up. The start time used is displayed after the {\bf
Since} on the Job report. In rare cases, using the start time of the
prior backup may cause some files to be backed up twice, but it ensures
that no change is missed. As with the Incremental option, you should
ensure that the clocks on your server and client are synchronized or as
close as possible to avoid the possibility of a file being skipped.
Note, on versions 1.33 or greater Bareos automatically makes the
necessary adjustments to the time between the server and the client so
that the times Bareos uses are synchronized.
When Bareos does a Differential backup, all modified files that are
still on the system are backed up. However, any file that has been
deleted since the last Full backup remains in the Bareos catalog, which
means that if between a Full save and the time you do a restore, some
files are deleted, those deleted files will also be restored. The
deleted files will no longer appear in the catalog after doing another
Full save. However, to remove deleted files from the catalog during a
Differential backup is quite a time consuming process and not currently
implemented in Bareos. It is, however, a planned future feature.
As noted above, if you move a directory rather than copy it, the
files in it do not have their modification time (st\_mtime) or
their attribute change time (st\_ctime) changed. As a
consequence, those files will probably not be backed up by an
Incremental or Differential backup which depend solely on these
time stamps. If you move a directory, and wish it to be
properly backed up, it is generally preferable to copy it, then
delete the original. Alternatively, you can move the directory, then
use the {\bf touch} program to update the timestamps.
%% TODO: merge this with incremental
However, to handle deleted files or directory changes in the
catalog during a Differential backup, you can use \ilink{accurate mode}{accuratemode}.
Note that accurate mode is quite memory-consuming.
Every once in a while, someone asks why we need Differential
backups as long as Incremental backups pick up all changed files.
There are possibly many answers to this question, but the one
that is the most important for me is that a Differential backup
effectively merges
all the Incremental and Differential backups since the last Full backup
into a single Differential backup. This has two effects: 1. It gives
some redundancy, since the old backups could be used if the merged backup
cannot be read. 2. More importantly, it reduces the number of Volumes
needed for a restore, effectively eliminating the need to read
all the Volumes on which the preceding Incremental and Differential
backups since the last Full were written.
\end{description}
\item [Restore] \hfill \\
For a {\bf Restore} Job, no level needs to be specified.
\item [Verify] \hfill \\
For a {\bf Verify} Job, the Level may be one of the following:
\begin{description}
\item [InitCatalog] \hfill \\
\index[dir]{InitCatalog}
does a scan of the specified {\bf FileSet} and stores the file
attributes in the Catalog database. Since no file data is saved, you
might ask why you would want to do this. It turns out to be a very
simple and easy way to have a {\bf Tripwire} like feature using {\bf
Bareos}. In other words, it allows you to save the state of a set of
files defined by the {\bf FileSet} and later check to see if those files
have been modified or deleted and if any new files have been added.
This can be used to detect system intrusion. Typically you would
specify a {\bf FileSet} that contains the set of system files that
should not change (e.g. /sbin, /boot, /lib, /bin, ...). Normally, you
run the {\bf InitCatalog} level verify one time when your system is
first setup, and then once again after each modification (upgrade) to
your system. Thereafter, when you want to check the state of your
system files, you use a {\bf Verify} job with {\bf Level = Catalog}. This
compares the results of your {\bf InitCatalog} with the current state of
the files.
\item [Catalog] \hfill \\
\index[dir]{Catalog}
Compares the current state of the files against the state previously
saved during an {\bf InitCatalog}. Any discrepancies are reported. The
items reported are determined by the {\bf verify} options specified on
the {\bf Include} directive in the specified {\bf FileSet} (see the {\bf
FileSet} resource below for more details). Typically this command will
be run once a day (or night) to check for any changes to your system
files.
\warning{If you run two Verify Catalog jobs on the same client at
the same time, the results will certainly be incorrect. This is because
Verify Catalog modifies the Catalog database while running in order to
track new files.}
\item [VolumeToCatalog] \hfill \\
\index[dir]{VolumeToCatalog}
This level causes Bareos to read the file attribute data written to the
Volume from the last backup Job for the job specified on the {\bf VerifyJob}
directive. The file attribute data are compared to the
values saved in the Catalog database and any differences are reported.
This is similar to the {\bf DiskToCatalog} level except that instead of
comparing the disk file attributes to the catalog database, the
attribute data written to the Volume is read and compared to the catalog
database. Although the attribute data including the signatures (MD5 or
SHA1) are compared, the actual file data is not compared (it is not in
the catalog).
\warning{If you run two Verify VolumeToCatalog jobs on the same
client at the same time, the results will certainly be incorrect. This
is because the Verify VolumeToCatalog modifies the Catalog database
while running.}
\item [DiskToCatalog] \hfill \\
\index[dir]{DiskToCatalog}
This level causes Bareos to read the files as they currently are on
disk, and to compare the current file attributes with the attributes
saved in the catalog from the last backup for the job specified on the
{\bf VerifyJob} directive. This level differs from the {\bf VolumeToCatalog}
level described above by the fact that it doesn't compare against a
previous Verify job but against a previous backup. When you run this
level, you must supply the verify options on your Include statements.
Those options determine what attribute fields are compared.
This command can be very useful if you have disk problems, because it
will compare the current state of your disk against the last successful
backup, which may be several jobs back.
Note, the current implementation does not identify files that
have been deleted.
\end{description}
\end{description}
}
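A common pattern is to set a default Level in the Job resource and override it
per run in the Schedule (names and times are illustrative):
\begin{verbatim}
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}

Job {
  Name = "client1-backup"
  Type = Backup
  Level = Incremental      # default, overridden by the Schedule
  Schedule = "WeeklyCycle"
  ...
}
\end{verbatim}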
\defDirective{Dir}{Job}{Max Diff Interval}{}{}{
}
\defDirective{Dir}{Job}{Max Full Interval}{}{}{%
The time specifies the maximum allowed age (counting from start time) of
the most recent successful Full backup that is required in order to run
Incremental or Differential backup jobs. If the most recent Full backup
is older than this interval, Incremental and Differential backups will be
upgraded to Full backups automatically. If this directive is not present,
or specified as 0, then the age of the previous Full backup is not
considered.
}
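For instance, to force an automatic upgrade to a Full backup whenever the most
recent Full backup is older than 30 days (the interval is only an example):
\begin{verbatim}
Job {
  Name = "client1-backup"     # example name
  ...
  Max Full Interval = 30 days
}
\end{verbatim}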
\defDirective{Dir}{Job}{Max Run Time}{}{}{%
The time specifies the maximum allowed time that a job may run, counted
from when the job starts, ({\bf not} necessarily the same as when the
job was scheduled).
By default, the watchdog thread will kill any Job that has run more
than 6 days. The maximum watchdog timeout is independent of MaxRunTime
and cannot be changed.
}
\defDirective{Dir}{Job}{Max Start Delay}{}{}{%
The time specifies the maximum delay between the scheduled time and the
actual start time for the Job. For example, a job can be scheduled to
run at 1:00am, but because other jobs are running, it may wait to run.
If the delay is set to 3600 (one hour) and the job has not begun to run
by 2:00am, the job will be canceled. This can be useful, for example,
to prevent jobs from running during day time hours. The default is 0
which indicates no limit.
}
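For example, to cancel a job scheduled for 1:00am if it has not started by
2:00am:
\begin{verbatim}
Job {
  Name = "night-backup"     # example name
  ...
  Max Start Delay = 1 hour  # may also be given in seconds, e.g. 3600
}
\end{verbatim}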
\defDirective{Dir}{Job}{Max Wait Time}{}{}{%
The time specifies the maximum allowed time that a job may block waiting
for a resource (such as waiting for a tape to be mounted, or waiting for
the storage or file daemons to perform their duties), counted from
when the job starts ({\bf not} necessarily the same as when the job was
scheduled).
\begin{figure}[htbp]
\centering
\includegraphics[width=13cm]{\idir different_time}
\caption{Job time control directives}
\label{fig:differenttime}
\end{figure}
}
\defDirective{Dir}{Job}{Maximum Bandwidth}{}{}{%
The speed parameter specifies the maximum allowed bandwidth that a job may
use.
}
\defDirective{Dir}{Job}{Maximum Concurrent Jobs}{}{}{%
where {\textless}number{\textgreater} is the maximum number of Jobs from the current
Job resource that can run concurrently. Note, this directive limits
only Jobs with the same name as the resource in which it appears. Any
other restrictions on the maximum concurrent jobs such as in the
Director, Client, or Storage resources will also apply in addition to
the limit specified here. The default is set to 1, but you may set it
to a larger number. We strongly recommend that you read the WARNING
documented under \ilink{ Maximum Concurrent Jobs}{DirMaxConJobs} in the
Director's resource.
}
\defDirective{Dir}{Job}{Maxrun Sched Time}{}{}{%
The time specifies the maximum allowed time that a job may run, counted from
when the job was scheduled. This can be useful to prevent jobs from running
during working hours. It can be thought of as \texttt{Max Start Delay + Max Run
Time}.
}
\defDirective{Dir}{Job}{Messages}{}{}{%
The Messages directive defines what Messages resource should be used for
this job, and thus how and where the various messages are to be
delivered. For example, you can direct some messages to a log file, and
others can be sent by email. For additional details, see the
\ilink{Messages Resource}{MessagesChapter} Chapter of this manual. This
directive is required.
}
\defDirective{Dir}{Job}{Name}{}{}{%
The Job name. This name can be specified on the {\bf Run} command in the
console program to start a job. If the name contains spaces, it must be
specified between quotes. It is generally a good idea to give your job the
same name as the Client that it will backup. This permits easy
identification of jobs.
When the job actually runs, the unique Job Name will consist of the name you
specify here followed by the date and time the job was scheduled for
execution. This directive is required.
}
\defDirective{Dir}{Job}{Next Pool}{}{}{%
A Next Pool override used for Migration/Copy and Virtual Backup Jobs.
}
\defDirective{Dir}{Job}{Plugin Options}{}{}{
}
\defDirective{Dir}{Job}{Pool}{}{}{%
The Pool directive defines the pool of Volumes where your data can be
backed up. Many Bareos installations will use only the {\bf Default}
pool. However, if you want to specify a different set of Volumes for
different Clients or different Jobs, you will probably want to use
Pools. For additional details, see the \nameref{DirectorResourcePool}
of this chapter. This directive is required.
}
\defDirective{Dir}{Job}{Prefer Mounted Volumes}{}{}{%
If the Prefer Mounted Volumes directive is set to {\bf yes},
the Storage daemon is requested to select either an Autochanger or
a drive with a valid Volume already mounted in preference to a drive
that is not ready. This means that all jobs will attempt to append
to the same Volume (providing the Volume is appropriate -- right Pool,
... for that job), unless you are using multiple pools.
If no drive with a suitable Volume is available, it
will select the first available drive. Note that any Volume that has
been requested to be mounted will be considered a valid mounted
volume by another job. Thus, if multiple jobs start at the same time
and they all prefer mounted volumes, the first job will request the
mount, and the other jobs will use the same volume.
If the directive is set to {\bf no}, the Storage daemon will prefer
finding an unused drive, otherwise, each job started will append to the
same Volume (assuming the Pool is the same for all jobs). Setting
Prefer Mounted Volumes to no can be useful for those sites
with multiple drive autochangers that prefer to maximize backup
throughput at the expense of using additional drives and Volumes.
This means that the job will prefer to use an unused drive rather
than use a drive that is already in use.
Despite the above, we recommend against setting this directive to
{\bf no} since
it tends to add a lot of swapping of Volumes between the different
drives and can easily lead to deadlock situations in the Storage
daemon. We will accept bug reports against it, but we cannot guarantee
that we will be able to fix the problem in a reasonable time.
A better alternative for using multiple drives is to use multiple
pools so that Bareos will be forced to mount Volumes from those Pools
on different drives.
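As a sketch of that alternative (the job and pool names here are illustrative, not prescriptive), two jobs writing to different pools will be forced onto different Volumes and thus different drives:
\begin{bconfig}{Separate pools to spread jobs over drives (example names)}^^J
Job \{^^J
\ Name = "BackupServerA"^^J
\ Pool = "PoolA"^^J
\ ...^^J
\}^^J
Job \{^^J
\ Name = "BackupServerB"^^J
\ Pool = "PoolB"^^J
\ ...^^J
\}^^J
\end{bconfig}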
}
\defDirective{Dir}{Job}{Prefix Links}{}{}{%
If a {\bf Where} path prefix is specified for a recovery job, apply it
to absolute links as well. The default is {\bf No}. When set to {\bf
Yes} then while restoring files to an alternate directory, any absolute
soft links will also be modified to point to the new alternate
directory. Normally this is what is desired -- i.e. everything is self
consistent. However, if you wish to later move the files to their
original locations, all files linked with absolute names will be broken.
}
\defDirective{Dir}{Job}{Priority}{}{}{%
This directive permits you to control the order in which your jobs will
be run by specifying a positive non-zero number. The higher the number,
the lower the job priority. Assuming you are not running concurrent jobs,
all queued jobs of priority 1 will run before queued jobs of priority 2
and so on, regardless of the original scheduling order.
The priority only affects waiting jobs that are queued to run, not jobs
that are already running. If one or more jobs of priority 2 are already
running, and a new job is scheduled with priority 1, the currently
running priority 2 jobs must complete before the priority 1 job is
run, unless Allow Mixed Priority is set.
If you want to run concurrent jobs you should
keep these points in mind:
\begin{itemize}
\item See \nameref{ConcurrentJobs} on how to setup
concurrent jobs.
\item Bareos concurrently runs jobs of only one priority at a time. It
will not simultaneously run a priority 1 and a priority 2 job.
\item If Bareos is running a priority 2 job and a new priority 1 job is
scheduled, it will wait until the running priority 2 job terminates even
if the Maximum Concurrent Jobs settings would otherwise allow two jobs
to run simultaneously.
\item Suppose that Bareos is running a priority 2 job and a new priority 1
job is scheduled and queued waiting for the running priority 2 job to
terminate. If you then start a second priority 2 job, the waiting
priority 1 job will prevent the new priority 2 job from running
concurrently with the running priority 2 job. That is: as long as there
is a higher priority job waiting to run, no new lower priority jobs will
start even if the Maximum Concurrent Jobs settings would normally allow
them to run. This ensures that higher priority jobs will be run as soon
as possible.
\end{itemize}
If you have several jobs of different priority, it is best not to start
them at exactly the same time, because Bareos must examine them one at a
time. If Bareos starts a lower priority job first, then it will run
before your higher priority jobs. If you experience this problem, you can
avoid it by starting any higher priority jobs a few seconds before the
lower priority ones. This ensures that Bareos will examine the jobs in the
correct order, and that your priority scheme will be respected.
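For example (the job names are illustrative), a catalog backup can be given a higher priority number so that it only runs after the regular backups have finished:
\begin{bconfig}{Priority example (illustrative job names)}^^J
Job \{^^J
\ Name = "BackupClient1"^^J
\ Priority = 10^^J
\ ...^^J
\}^^J
Job \{^^J
\ Name = "BackupCatalog"^^J
\ Priority = 11 \# waits until all priority 10 jobs have finished^^J
\ ...^^J
\}^^J
\end{bconfig}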
}
\defDirective{Dir}{Job}{Protocol}{}{}{%
The backup protocol to use to run the Job. If not set, it defaults
to {\bf Native}. Currently the Director understands the following protocols:
\begin{enumerate}
\item Native - The native Bareos protocol
\item NDMP - The NDMP protocol
\end{enumerate}
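For instance (the job name is illustrative), an NDMP backup job would set:
\begin{bconfig}{Protocol example (illustrative job name)}^^J
Job \{^^J
\ Name = "NDMPBackup"^^J
\ Protocol = NDMP^^J
\ ...^^J
\}^^J
\end{bconfig}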
}
\defDirective{Dir}{Job}{Prune Files}{}{}{%
Normally, pruning of Files from the Catalog is specified on a Client by
Client basis in the Client resource with the {\bf AutoPrune} directive.
If this directive is specified (not normally) and the value is {\bf
yes}, it will override the value specified in the Client resource.
}
\defDirective{Dir}{Job}{Prune Jobs}{}{}{%
Normally, pruning of Jobs from the Catalog is specified on a Client by
Client basis in the Client resource with the {\bf AutoPrune} directive.
If this directive is specified (not normally) and the value is {\bf
yes}, it will override the value specified in the Client resource.
}
\defDirective{Dir}{Job}{Prune Volumes}{}{}{%
Normally, pruning of Volumes from the Catalog is specified on a Pool by
Pool basis in the Pool resource with the {\bf AutoPrune} directive.
Note, this is different from File and Job pruning which is done on a
Client by Client basis. If this directive is specified (not normally)
and the value is {\bf yes}, it will override the value specified in the
Pool resource.
}
\defDirective{Dir}{Job}{Purge Migration Job}{}{}{%
If set to {\bf yes}, the job that was migrated is purged from the catalog
when the migration job terminates successfully. The default is {\bf no}.
}
\defDirective{Dir}{Job}{Regex Where}{}{}{%
This directive applies only to a Restore job and specifies a regex filename
manipulation of all files being restored. It uses the \ilink{File
Relocation}{filerelocation} feature.
For more information about how to use this option, see
\nameref{regexwhere}.
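As an illustrative sketch (the paths are placeholders and this assumes the sed-like \texttt{!from!to!} form described in \nameref{regexwhere}), the following relocates restored files from one directory to another:
\begin{bconfig}{Regex Where example (placeholder paths)}^^J
Regex Where = "!/prod!/rescue!"^^J
\end{bconfig}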
}
\defDirective{Dir}{Job}{Replace}{}{}{%
This directive applies only to a Restore job and specifies what happens
when Bareos wants to restore a file or directory that already exists.
You have the following options for {\bf replace-option}:
\begin{description}
\item [always]
\index[dir]{always}
when the file to be restored already exists, it is deleted and then
replaced by the copy that was backed up. This is the default value.
\item [ifnewer]
\index[dir]{ifnewer}
if the backed up file (on tape) is newer than the existing file, the
existing file is deleted and replaced by the backup.
\item [ifolder]
\index[dir]{ifolder}
if the backed up file (on tape) is older than the existing file, the
existing file is deleted and replaced by the backup.
\item [never]
\index[dir]{never}
if the backed up file already exists, Bareos skips restoring this file.
\end{description}
}
\defDirective{Dir}{Job}{Rerun Failed Levels}{}{}{%
If this directive is set to {\bf yes} (default no), and Bareos detects that
a previous job at a higher level (i.e. Full or Differential) has failed,
the current job level will be upgraded to the higher level. This is
particularly useful for Laptops where they may often be unreachable, and if
a prior Full save has failed, you wish the very next backup to be a Full
save rather than whatever level it is started as.
There are several points that must be taken into account when using this
directive: first, a failed job is defined as one that has not terminated
normally, which includes any running job of the same name (you need to
ensure that two jobs of the same name do not run simultaneously);
secondly, the {\bf Ignore FileSet Changes} directive is not considered
when checking for failed levels, which means that any FileSet change will
trigger a rerun.
}
\defDirective{Dir}{Job}{Reschedule Interval}{}{}{%
If you have specified {\bf Reschedule On Error = yes} and the job
terminates in error, it will be rescheduled after the interval of time
specified by {\bf time-specification}. See \ilink{the time
specification formats}{Time} in the Configure chapter for details of
time specifications. If no interval is specified, the job will not be
rescheduled on error.
}
\defDirective{Dir}{Job}{Reschedule On Error}{}{}{%
If this directive is enabled, and the job terminates in error, the job
will be rescheduled as determined by the {\bf Reschedule Interval} and
{\bf Reschedule Times} directives. If you cancel the job, it will not
be rescheduled. The default is {\bf no} (i.e. the job will not be
rescheduled).
This specification can be useful for portables, laptops, or other
machines that are not always connected to the network or switched on.
}
\defDirective{Dir}{Job}{Reschedule Times}{}{}{%
This directive specifies the maximum number of times to reschedule the
job. If it is set to zero (the default) the job will be rescheduled an
indefinite number of times.
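The three Reschedule directives are typically combined; a minimal sketch (the job name, interval and count are example values only):
\begin{bconfig}{Rescheduling a failed job (example values)}^^J
Job \{^^J
\ Name = "LaptopBackup"^^J
\ Reschedule On Error = yes^^J
\ Reschedule Interval = 30 minutes^^J
\ Reschedule Times = 5^^J
\ ...^^J
\}^^J
\end{bconfig}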
}
\defDirective{Dir}{Job}{Run}{}{}{%
\index[dir]{Clone a Job}%
The Run directive (not to be confused with the Run option in a
Schedule) allows you to start other jobs or to clone jobs. By using the
cloning keywords (see below), you can backup
the same data (or almost the same data) to two or more drives
at the same time. The {\bf job-name} is normally the same name
as the current Job resource (thus creating a clone). However, it
may be any Job name, so one job may start other related jobs.
The part after the equal sign must be enclosed in double quotes,
and can contain any string or set of options (overrides) that you
can specify when entering the Run command from the console. For
example {\bf storage=DDS-4 ...}. In addition, there are two special
keywords that permit you to clone the current job. They are {\bf level=\%l}
and {\bf since=\%s}. The \%l in the level keyword permits
entering the actual level of the current job and the \%s in the since
keyword permits putting the same time for comparison as used on the
current job. Note, in the case of the since keyword, the \%s must be
enclosed in double quotes, and thus they must be preceded by a backslash
since they are already inside quotes. For example:
\begin{bconfig}{}^^J
run = "Nightly-backup level=\%l since=\\"\%s\\" storage=DDS-4"^^J
\end{bconfig}
A cloned job will not start additional clones, so it is not
possible to recurse.
Please note that all cloned jobs, as specified in the Run directives are
submitted for running before the original job is run (while it is being
initialized). This means that any clone job will actually start before
the original job, and may even block the original job from starting
until the original job finishes unless you allow multiple simultaneous
jobs. Even if you set a lower priority on the clone job, if no other
jobs are running, it will start before the original job.
If you are trying to prioritize jobs by using the clone feature (Run
directive), you will find it much easier to do using a \linkResourceDirective{Dir}{Job}{Run Script}
resource, or a \linkResourceDirective{Dir}{Job}{Run Before Job} directive.
}
\defDirective{Dir}{Job}{Run After Failed Job}{}{}{
The specified command is run as an external program after the current
job terminates with any error status. This directive is not required. The
command string must be a valid program name or name of a shell script. If
the exit code of the program run is non-zero, Bareos will print a
warning message. Before submitting the specified command to the
operating system, Bareos performs character substitution as described above
for the {\bf RunScript} directive. Note: if you want your script
to run regardless of the exit status of the Job, you can use the following:
\begin{bconfig}{Run Script that runs after all jobs (successful and failed)}^^J
Run Script \{^^J
\ Command = "echo test"^^J
\ Runs When = After^^J
\ Runs On Failure = yes^^J
\ Runs On Client = no^^J
\ Runs On Success = yes \# default, you can drop this line^^J
\}^^J
\end{bconfig}
}
\defDirective{Dir}{Job}{Run After Job}{}{}{%
The specified {\bf command} is run as an external program if the current
job terminates normally (without error or without being canceled). This
directive is not required. If the exit code of the program run is
non-zero, Bareos will print a warning message. Before submitting the
specified command to the operating system, Bareos performs character
substitution as described above for the {\bf RunScript} directive.
%An example of the use of this directive is given in the
%\ilink{Tips Chapter}{JobNotification} of this manual.
See the \linkResourceDirective{Dir}{Job}{Run After Failed Job} if you
want to run a script after the job has terminated with any
non-normal status.
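A minimal sketch (the script path is a placeholder; \%c (client name) and \%e (exit status) are subject to the character substitution described for the {\bf RunScript} directive):
\begin{bconfig}{Run After Job example (placeholder path)}^^J
Run After Job = "/usr/local/scripts/notify.sh \%c \%e"^^J
\end{bconfig}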
}
\defDirective{Dir}{Job}{Run Before Job}{}{}{%
The specified command is run as an external program prior to running the
current Job. This directive is not required, but if it is defined, and if the
exit code of the program run is non-zero, the current Bareos job will be
canceled.
\begin{bconfig}{}^^J
Run Before Job = "echo test"^^J
\end{bconfig}
is equivalent to:
\begin{bconfig}{}^^J
RunScript \{^^J
\ Command = "echo test"^^J
\ Runs On Client = No^^J
\ Runs When = Before^^J
\}^^J
\end{bconfig}
%
% Lutz Kittler has pointed out that using the RunBeforeJob directive can be a
% simple way to modify your schedules during a holiday. For example, suppose
% that you normally do Full backups on Fridays, but Thursday and Friday are
% holidays. To avoid having to change tapes between Thursday and Friday when
% no one is in the office, you can create a RunBeforeJob that returns a
% non-zero status on Thursday and zero on all other days. That way, the
% Thursday job will not run, and on Friday the tape you inserted on Wednesday
% before leaving will be used.
}
\defDirective{Dir}{Job}{Run Script}{}{}{
The RunScript directive behaves like a resource in that it
requires opening and closing braces around a number of directives
that make up the body of the runscript.
The specified {\bf Command} (see below for details) is run as an external
program before or after the current Job. This directive is optional. By default,
the program is executed on the Client side, as with \texttt{ClientRunXXXJob}.
\textbf{Console} options are special commands that are sent to the Director instead
of the OS. At this time, console command outputs are redirected to the log with
jobid 0.
You can use the following console commands: \texttt{delete}, \texttt{disable},
\texttt{enable}, \texttt{estimate}, \texttt{list}, \texttt{llist},
\texttt{memory}, \texttt{prune}, \texttt{purge}, \texttt{reload},
\texttt{status}, \texttt{setdebug}, \texttt{show}, \texttt{time},
\texttt{trace}, \texttt{update}, \texttt{version}, \texttt{.client},
\texttt{.jobs}, \texttt{.pool}, \texttt{.storage}. See the console chapter for
more information. You must supply all required arguments on the command line,
as nothing will be prompted for interactively. Example:
\begin{bconfig}{Run Script Console commands}^^J
Console = "prune files client=\%c"^^J
Console = "update stats age=3"^^J
\end{bconfig}
You can specify more than one Command or Console option per RunScript.
The following options may be specified in the body
of the RunScript:\\
\begin{tabular}{|c|c|c|l}
\hline
Options & Value & Default & Information \\
\hline
\hline
Runs On Success & Yes{\textbar}No & {\bf Yes} & Run command if JobStatus is successful\\
\hline
Runs On Failure & Yes{\textbar}No & {\bf No} & Run command if JobStatus isn't successful\\
\hline
Runs On Client & Yes{\textbar}No & {\bf Yes} & Run command on client\\
\hline
Runs When & Before{\textbar}After{\textbar}Always{\textbar}\textsl{AfterVSS} & {\bf Never} & When to run commands\\
\hline
Fail Job On Error & Yes{\textbar}No & {\bf Yes} & Fail the job if the script
returns a non-zero exit code \\
\hline
Command & & & Path to your script\\
\hline
Console & & & Console command\\
\hline
\end{tabular}
\\
Any output sent by the command to standard output will be included in the
Bareos job report. The command string must be a valid program name or name
of a shell script.
In addition, the command string is parsed and then fed to the OS,
which means that the path will be searched to execute your specified
command, but there is no shell interpretation. As a consequence, if you
invoke complicated commands or want any shell features such as redirection
or piping, you must call a shell script and do it inside that script.
Before submitting the specified command to the operating system, Bareos
performs character substitution of the following characters:
\label{character substitution}
\footnotesize
\begin{longtable}{ l l }
\%\% & \% \\
\%b & Job Bytes \\
\%c & Client's name \\
\%d & Daemon's name (Such as host-dir or host-fd) \\
\%D & Director's name (Also valid on file daemon) \\
\%e & Job Exit Status \\
\%f & Job FileSet (Only on director side) \\
\%F & Job Files \\