\chapter{Director Configuration}
\label{DirectorChapter}
\index[general]{Director!Configuring the}
\index[general]{Configuring the Director}
Of all the configuration files needed to run {\bf Bareos}, the Director's is
the most complicated, and the one that you will need to modify the most often
as you add clients or modify the FileSets.
For a general discussion of configuration files and resources, including the
data types recognized by {\bf Bareos}, please see the
\ilink{Configuration}{ConfigureChapter} chapter of this manual.
%\section{Director Resource Types}
\index[general]{Types!Director Resource}
\index[general]{Director Resource Types}
The Director resource type may be one of the following:
Job, JobDefs, Client, Storage, Catalog, Schedule, FileSet, Pool, Director, or
Messages. We present them here in the most logical order for defining them.
Note that everything revolves around a job and is tied to a job in one
way or another.
\begin{itemize}
\item
\ilink{Director}{DirectorResource} -- to define the Director's
name and its access password used for authenticating the Console program.
Only a single Director resource definition may appear in the Director's
configuration file. If you have either {\bf /dev/random} or {\bf bc} on your
machine, Bareos will generate a random password during the configuration
process, otherwise it will be left blank.
\item
\ilink{Job}{JobResource} -- to define the backup/restore Jobs
and to tie together the Client, FileSet and Schedule resources to be used
for each Job. Normally, you will have Jobs of different names corresponding
to each client (i.e. one Job per client, each with a different name).
\item
\ilink{JobDefs}{JobDefsResource} -- optional resource for
providing defaults for Job resources.
\item
\ilink{Schedule}{ScheduleResource} -- to define when a Job is to
be automatically run by {\bf Bareos's} internal scheduler. You
may have any number of Schedules, but each job will reference only
one.
\item
\ilink{FileSet}{FileSetResource} -- to define the set of files
to be backed up for each Client. You may have any number of
FileSets but each Job will reference only one.
\item
\ilink{Client}{ClientResource2} -- to define what Client is to be
backed up. You will generally have multiple Client definitions. Each
Job will reference only a single client.
\item
\ilink{Storage}{StorageResource2} -- to define on what physical
device the Volumes should be mounted. You may have one or
more Storage definitions.
\item
\ilink{Pool}{PoolResource} -- to define the pool of Volumes
that can be used for a particular Job. Most people use a
single default Pool. However, if you have a large number
of clients or volumes, you may want to have multiple Pools.
Pools allow you to restrict a Job (or a Client) to use
only a particular set of Volumes.
\item
\ilink{Catalog}{CatalogResource} -- to define in what database to
keep the list of files and the Volume names where they are backed up.
Most people only use a single catalog. However, if you want to
scale the Director to many clients, multiple catalogs can be helpful.
Multiple catalogs require a bit more management because in general
you must know what catalog contains what data. Currently, all
Pools are defined in each catalog. This restriction will be removed
in a later release.
\item
\ilink{Messages}{MessagesChapter} -- to define where error and
information messages are to be sent or logged. You may define
multiple different message resources and hence direct particular
classes of messages to different users or locations (files, ...).
\end{itemize}
\section{Director Resource}
\label{DirectorResource4}
\index[general]{Director Resource}
\index[general]{Resource!Director}
The Director resource defines the attributes of the Directors running on the
network. In the current implementation, there is only a single Director
resource, but the final design will contain multiple Directors to maintain
index and media database redundancy.
\begin{description}
\item [Director] \hfill \\
\index[dir]{Director}
Start of the Director resource. One and only one director resource must be
supplied.
\item [Name = {\textless}name{\textgreater}] \hfill \\
\index[dir]{Name}
\index[dir]{Directive!Name}
The director name used by the system administrator. This directive is
required.
\item [Description = {\textless}text{\textgreater}] \hfill \\
\index[dir]{Description}
\index[dir]{Directive!Description}
The text field contains a description of the Director that will be displayed
in the graphical user interface. This directive is optional.
\item [Password = {\textless}UA-password{\textgreater}] \hfill \\
\index[dir]{Password}
\index[dir]{Directive!Password}
Specifies the password that must be supplied for the default Bareos
Console to be authorized. The same password must appear in the {\bf
Director} resource of the Console configuration file. For added
security, the password is never passed across the network but instead a
challenge response hash code created with the password. This directive
is required. If you have either {\bf /dev/random} or {\bf bc} on your
machine, Bareos will generate a random password during the configuration
process, otherwise it will be left blank and you must manually supply
it.
The password is plain text. It is not generated through any special
process but as noted above, it is better to use random text for
security reasons.
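For illustration, a matching {\bf Director} resource in the Console
configuration file might look as follows (the name, address, and password
shown here are placeholders):
\footnotesize
\begin{verbatim}
# bconsole.conf -- the Password must match the Director's Password
Director {
  Name = bareos-dir
  DirPort = 9101
  Address = localhost
  Password = "UA_password"
}
\end{verbatim}
\normalsize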
\item [Messages = {\textless}Messages-resource-name{\textgreater}] \hfill \\
\index[dir]{Messages}
\index[dir]{Directive!Messages}
The messages resource specifies where to deliver Director messages that are
not associated with a specific Job. Most messages are specific to a job and
will be directed to the Messages resource specified by the job. However,
there are a few messages that can occur when no job is running. This
directive is required.
\item [Working Directory = {\textless}Directory{\textgreater}] \hfill \\
\index[dir]{Working Directory}
\index[dir]{Directive!Working Directory}
This directive is optional and specifies a directory in which the Director
may put its status files. This directory should be used only by Bareos but
may be shared by other Bareos daemons. However, please note, if this
directory is shared with other Bareos daemons (the File daemon and Storage
daemon), you must ensure that the {\bf Name} given to each daemon is
unique so that the temporary filenames used do not collide.
By default
the Bareos configure process creates unique daemon names by postfixing them
with -dir, -fd, and -sd.
Standard shell expansion of the {\bf
Directory} is done when the configuration file is read so that values such
as {\bf \$HOME} will be properly expanded.
The working directory specified must already exist and be
readable and writable by the Bareos daemon referencing it.
% If you have specified a Director user and/or a Director group on your
% ./configure line with {\bf {-}{-}with-dir-user} and/or
% {\bf {-}{-}with-dir-group} the Working Directory owner and group will
% be set to those values.
\item [Pid Directory = {\textless}Directory{\textgreater}] \hfill \\
\index[dir]{Pid Directory}
\index[dir]{Directive!Pid Directory}
This directive is optional and specifies a directory in which the Director
may put its process Id file. The process Id file is used to shutdown
Bareos and to prevent multiple copies of Bareos from running simultaneously.
Standard shell expansion of the {\bf Directory} is done when the
configuration file is read so that values such as {\bf \$HOME} will be
properly expanded.
The PID directory specified must already exist and be
readable and writable by the Bareos daemon referencing it.
Typically on Linux systems, you will set this to: {\bf /var/run}. If you are
not installing Bareos in the system directories, you can use the {\bf Working
Directory} as defined above.
\item [Scripts Directory = {\textless}Directory{\textgreater}] \hfill \\
\index[dir]{Scripts Directory}
\index[dir]{Directive!Scripts Directory}
This directive is currently unused.
\item [QueryFile = {\textless}Path{\textgreater}] \hfill \\
\index[dir]{QueryFile}
\index[dir]{Directive!QueryFile}
This directive is required and specifies a directory and file in which
the Director can find the canned SQL statements for the {\bf Query}
command of the Console.
%Standard shell expansion of the {\bf Path} is
%done when the configuration file is read so that values such as {\bf
%\$HOME} will be properly expanded.
\item [Heartbeat Interval = {\textless}time-interval{\textgreater}] \hfill \\
\index[dir]{Heartbeat Interval}
\index[dir]{Directive!Heartbeat}
This directive is optional and if specified will cause the Director to
set a keepalive interval (heartbeat) in seconds on each of the sockets
it opens for the Client resource. This value will override any
specified at the Director level. It is implemented only on systems
that provide the {\bf setsockopt} TCP\_KEEPIDLE function (Linux, ...).
The default value is zero, which means no change is made to the socket.
\directive{dir}{Maximum Concurrent Jobs}{number}{}{1}
\label{DirMaxConJobs}
%\item [Maximum Concurrent Jobs = {\textless}number{\textgreater}] \hfill \\
\index[dir]{Maximum Concurrent Jobs}
\index[dir]{Directive!Maximum Concurrent Jobs}
\index[general]{Simultaneous Jobs}
\index[general]{Concurrent Jobs}
where {\textless}number{\textgreater} is the maximum number of total Director Jobs that
should run concurrently. The default is set to 1, but you may set it to a
larger number.
The Volume format becomes more complicated with multiple simultaneous jobs;
consequently, restores may take longer if Bareos must sort through interleaved
volume blocks from multiple simultaneous jobs. This can be avoided by having
each simultaneous job write to a different volume or by using data spooling,
which will first spool the data to disk simultaneously, then write one spool
file at a time to the volume, thus avoiding excessive interleaving of the
different job blocks.
See also the section about \ilink{Concurrent Jobs}{ConcurrentJobs}.
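As a sketch, allowing up to ten concurrent Jobs in the Director resource
(the value is illustrative):
\footnotesize
\begin{verbatim}
Director {
  ...
  Maximum Concurrent Jobs = 10
}
\end{verbatim}
\normalsize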
\item [FD Connect Timeout = {\textless}time{\textgreater}] \hfill \\
\index[dir]{FD Connect Timeout}
\index[dir]{Directive!FD Connect Timeout}
where {\bf time} is the time that the Director should continue
attempting to contact the File daemon to start a job, and after which
the Director will cancel the job. The default is 30 minutes.
\item [SD Connect Timeout = {\textless}time{\textgreater}] \hfill \\
\index[dir]{SD Connect Timeout}
\index[dir]{Directive!SD Connect Timeout}
where {\bf time} is the time that the Director should continue
attempting to contact the Storage daemon to start a job, and after which
the Director will cancel the job. The default is 30 minutes.
\item [DirAddresses = {\textless}IP-address-specification{\textgreater}] \hfill \\
\index[dir]{DirAddresses}
\index[dir]{Address}
\index[general]{Address}
\index[dir]{Directive!DirAddresses}
Specify the ports and addresses on which the Director daemon will listen
for Bareos Console connections. Probably the simplest way to explain
this is to show an example:
\footnotesize
\begin{verbatim}
DirAddresses = {
ip = { addr = 1.2.3.4; port = 1205;}
ipv4 = {
addr = 1.2.3.4; port = http;}
ipv6 = {
addr = 1.2.3.4;
port = 1205;
}
ip = {
addr = 1.2.3.4
port = 1205
}
ip = { addr = 1.2.3.4 }
ip = { addr = 201:220:222::2 }
ip = {
addr = bluedot.thun.net
}
}
\end{verbatim}
\normalsize
where ip, ipv4, ipv6, addr, and port are all keywords. Note that the address
can be specified as either a dotted quadruple, in IPv6 colon notation, or as
a symbolic name (only in the ip specification). Also, the port can be specified
as a number or as the mnemonic value from the /etc/services file. If a port
is not specified, the default will be used. If an ip section is specified,
the resolution can be made either by IPv4 or IPv6. If ipv4 is specified, then
only IPv4 resolutions will be permitted, and likewise with ipv6.
Please note that if you use the DirAddresses directive, you must
not use either a DirPort or a DirAddress directive in the same
resource.
\item [DirPort = {\textless}port-number{\textgreater}] \hfill \\
\index[dir]{DirPort}
\index[dir]{Directive!DirPort}
Specify the port (a positive integer) on which the Director daemon will
listen for Bareos Console connections. This same port number must be
specified in the Director resource of the Console configuration file. The
default is 9101, so normally this directive need not be specified. This
directive should not be used if you specify DirAddresses (N.B plural)
directive.
\item [DirAddress = {\textless}IP-Address{\textgreater}] \hfill \\
\index[dir]{DirAddress}
\index[dir]{Directive!DirAddress}
This directive is optional, but if it is specified, it will cause the
Director server (for the Console program) to bind to the specified
{\bf IP-Address}, which is either a domain name or an IP address specified as a
dotted quadruple in string or quoted string format. If this directive is
not specified, the Director will bind to any available address (the
default). Note, unlike the DirAddresses specification noted above, this
directive only permits a single address to be specified. This directive
should not be used if you specify a DirAddresses (N.B. plural) directive.
\item [DirSourceAddress = {\textless}IP-Address{\textgreater}] \hfill \\
\index[fd]{DirSourceAddress}
\index[fd]{Directive!DirSourceAddress}
This record is optional, and if it is specified, it will cause the Director
server (when initiating connections to a storage or file daemon) to source
its connections from the specified address. Only a single IP address may be
specified. If this record is not specified, the Director server will source
its outgoing connections according to the system routing table (the default).
\item [Statistics Retention = {\textless}time{\textgreater}] \hfill \\
\index[dir]{StatisticsRetention}
\index[dir]{Directive!StatisticsRetention}
\label{PruneStatistics}
The \texttt{Statistics Retention} directive defines the length of time that
Bareos will keep statistics job records in the Catalog database after the
Job End time (in the \texttt{JobHistory} table). When this time period expires,
and if the user runs the \texttt{prune stats} command, Bareos will prune (remove)
Job records that are older than the specified period.
These statistics records are not used for restore purposes, but mainly for
capacity planning, billing, etc.
See chapter about \ilink{how to extract information from the catalog}{UseBareosCatalogToExtractInformationChapter}
for additional information.
See the \ilink{Configuration chapter}{Time} of this manual for additional
details of time specification.
The default is 5 years.
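For example, to keep statistics records for two years (an illustrative value)
and prune the expired records manually from the console:
\footnotesize
\begin{verbatim}
Director {
  ...
  Statistics Retention = 2 years
}
\end{verbatim}
\normalsize
The expired records are then removed by running the \texttt{prune stats}
command in the console.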
\item [VerId = {\textless}string{\textgreater}] \hfill \\
\index[dir]{Directive!VerId}
where {\textless}string{\textgreater} is an identifier which can be used for support purposes.
This string is displayed using the \texttt{version} command.
\item [MaximumConsoleConnections = {\textless}number{\textgreater}] \hfill \\
\index[dir]{MaximumConsoleConnections}
\index[dir]{Directive!MaximumConsoleConnections}
\index[dir]{Console}
where {\textless}number{\textgreater} is the maximum number of Console connections that
can run concurrently. The default is set to 20, but you may set it to a
larger number.
\item [Optimize For Size = {\textless}yes{\textbar}no{\textgreater}] \hfill \\
\index[dir]{Optimize For Size}
\index[dir]{Directive!Optimize For Size}
If {\bf Yes}, which is the default, the Director will optimize for memory
usage over speed, i.e. it will try to use less memory, which may lead to a
somewhat lower speed. This is currently mostly used for keeping all hardlinks
in memory.
\item [Optimize For Speed = {\textless}yes{\textbar}no{\textgreater}] \hfill \\
\index[dir]{Optimize For Speed}
\index[dir]{Directive!Optimize For Speed}
If {\bf Yes}, which is {\bf not} the default, the Director will optimize for
speed over memory usage, i.e. it will try to use more memory, which leads to a
somewhat higher speed. This is currently mostly used for keeping all hardlinks
in memory. This directive is related to the {\bf Optimize For Size} option:
set at most one of the two to {\bf Yes}, as they are mutually exclusive.
\item [Key Encryption Key = {\textless}KeyEncryptionKey{\textgreater}] \hfill \\
\index[dir]{Key Encryption Key}
\index[dir]{Directive!Key Encryption Key}
This key is used to encrypt the Security Key that is exchanged between
the Director and the Storage Daemon for supporting Application Managed
Encryption (AME). For security reasons each Director should have a
different Key Encryption Key.
\item [NDMP Snooping = {\textless}yes{\textbar}no{\textgreater}] \hfill \\
\index[dir]{NDMP Snooping}
\index[dir]{Directive!NDMP Snooping}
This directive enables snooping and pretty-printing of NDMP protocol
information in debugging mode.
\item [NDMP Loglevel = {\textless}level{\textgreater}] \hfill \\
\index[dir]{NDMP Loglevel}
\index[dir]{Directive!NDMP Loglevel}
This directive sets the loglevel for the NDMP protocol library.
\end{description}
The following is an example of a valid Director resource definition:
\footnotesize
\begin{verbatim}
Director {
Name = HeadMan
WorkingDirectory = "$HOME/bareos/bin/working"
Password = UA_password
PidDirectory = "$HOME/bareos/bin/working"
QueryFile = "$HOME/bareos/bin/query.sql"
Messages = Standard
}
\end{verbatim}
\normalsize
\section{Job Resource}
\label{JobResource}
\index[general]{Resource!Job}
\index[general]{Job Resource}
The Job resource defines a Job (Backup, Restore, ...) that Bareos must
perform. Each Job resource definition contains the name of a Client and
a FileSet to backup, the Schedule for the Job, where the data
are to be stored, and what media Pool can be used. In effect, each Job
resource must specify What, Where, How, and When or FileSet, Storage,
Backup/Restore/Level, and Schedule respectively. Note, the FileSet must
be specified for a restore job for historical reasons, but it is no longer used.
Only a single type ({\bf Backup}, {\bf Restore}, ...) can be specified for any
job. If you want to backup multiple FileSets on the same Client or multiple
Clients, you must define a Job for each one.
Note, you define only a single Job to do the Full, Differential, and
Incremental backups since the different backup levels are tied together by
a unique Job name. Normally, you will have only one Job per Client, but
if a client has a really huge number of files (more than several million),
you might want to split it into two Jobs, each with a different FileSet
covering only part of the total files.
Multiple Storage daemons are not currently supported for Jobs, so if
you do want to use multiple storage daemons, you will need to create
a different Job and ensure that for each Job the combination of
Client and FileSet is unique. The Client and FileSet are what Bareos
uses to restore a client, so if there are multiple Jobs with the same
Client and FileSet or multiple Storage daemons that are used, the
restore will not work. This problem can be resolved by defining multiple
FileSet definitions (the names must be different, but the contents of
the FileSets may be the same).
\begin{description}
\item [Job]
\index[dir]{Job}
\index[dir]{Directive!Job}
Start of the Job resource. At least one Job resource is required.
\item [Name = {\textless}name{\textgreater}] \hfill \\
\index[dir]{Name}
\index[dir]{Directive!Name}
The Job name. This name can be specified on the {\bf Run} command in the
console program to start a job. If the name contains spaces, it must be
specified between quotes. It is generally a good idea to give your job the
same name as the Client that it will backup. This permits easy
identification of jobs.
When the job actually runs, the unique Job Name will consist of the name you
specify here followed by the date and time the job was scheduled for
execution. This directive is required.
\item [Enabled = {\textless}yes{\textbar}no{\textgreater}] \hfill \\
\index[dir]{Enable}
\index[dir]{Directive!Enable}
This directive allows you to enable or disable automatic execution
of a Job via the scheduler.
\item [Type = {\textless}job-type{\textgreater}] \hfill \\
\index[dir]{Type}
\index[dir]{Directive!Type}
The {\bf Type} directive specifies the Job type, which may be one of the
following: {\bf Backup}, {\bf Restore}, {\bf Verify}, or {\bf Admin}. This
directive is required. Within a particular Job Type, there are also Levels
as discussed in the next item.
\begin{description}
\item [Backup] \hfill \\
\index[dir]{Backup}
Run a backup Job. Normally you will have at least one Backup job for each
client you want to save. Normally, unless you turn off cataloging, almost all
the important statistics and data concerning files backed up will be placed
in the catalog.
\item [Restore] \hfill \\
\index[dir]{Restore}
Run a restore Job. Normally, you will specify only one Restore job
which acts as a sort of prototype that you will modify using the console
program in order to perform restores. Although certain basic
information from a Restore job is saved in the catalog, it is very
minimal compared to the information stored for a Backup job -- for
example, no File database entries are generated since no Files are
saved.
{\bf Restore} jobs cannot be
automatically started by the scheduler as is the case for Backup, Verify
and Admin jobs. To restore files, you must use the {\bf restore} command
in the console.
\item [Verify] \hfill \\
\index[dir]{Verify}
Run a verify Job. In general, {\bf verify} jobs permit you to compare the
contents of the catalog to the file system, or to what was backed up. In
addition to verifying that a tape that was written can be read, you can
also use {\bf verify} as a sort of tripwire intrusion detection.
\item [Admin] \hfill \\
\index[dir]{Admin}
Run an admin Job. An {\bf Admin} job can be used to periodically run catalog
pruning, if you do not want to do it at the end of each {\bf Backup} Job.
Although an Admin job is recorded in the catalog, very little data is saved.
\end{description}
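As a sketch, a minimal {\bf Backup} Job tying together a Client, FileSet,
Schedule, Storage, and Pool might look as follows (all resource names are
placeholders that must match resources defined elsewhere in the
configuration):
\footnotesize
\begin{verbatim}
Job {
  Name = "client1-backup"
  Type = Backup
  Level = Incremental
  Client = client1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Pool = Default
  Messages = Standard
}
\end{verbatim}
\normalsize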
\item [Protocol = {\textless}protocolname{\textgreater}] \hfill \\
\index[dir]{Protocol}
\index[dir]{Directive!Protocol}
The backup protocol to use to run the Job. If not set, it defaults
to {\bf Native}. Currently, the Director understands the following protocols:
\begin{enumerate}
\item Native - The native Bareos protocol
\item NDMP - The NDMP protocol
\end{enumerate}
\item [Backup Format = {\textless}backup format{\textgreater}] \hfill \\
\index[dir]{Backup Format}
\index[dir]{Directive!Backup Format}
The backup format used for protocols which support multiple formats. A
protocol like NDMP, for instance, supports the following backup formats:
\begin{enumerate}
\item Dump
\item Tar
\item SMTape
\end{enumerate}
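For instance, an NDMP Job might select the protocol and format as follows
(a sketch; the Job name is a placeholder and the remaining directives are
elided):
\footnotesize
\begin{verbatim}
Job {
  Name = "ndmp-backup"
  Type = Backup
  Protocol = NDMP
  Backup Format = dump
  ...
}
\end{verbatim}
\normalsize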
\label{Level}
\item [Level = {\textless}job-level{\textgreater}] \hfill \\
\index[dir]{Level}
\index[dir]{Directive!Level}
The Level directive specifies the default Job level to be run. Each
different Job Type (Backup, Restore, ...) has a different set of Levels
that can be specified. The Level is normally overridden by a different
value that is specified in the {\bf Schedule} resource. This directive
is not required, but must be specified either by a {\bf Level} directive
or as an override specified in the {\bf Schedule} resource.
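For example, a Schedule can override the Job's default Level for each run
(a typical weekly cycle; the name and times are illustrative):
\footnotesize
\begin{verbatim}
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}
\end{verbatim}
\normalsize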
For a {\bf Backup} Job, the Level may be one of the following:
\begin{description}
\item [Full] \hfill \\
\index[dir]{Full}
When the Level is set to Full all files in the FileSet whether or not
they have changed will be backed up.
\item [Incremental] \hfill \\
\index[dir]{Incremental}
When the Level is set to Incremental all files specified in the FileSet
that have changed since the last successful backup of the same Job
using the same FileSet and Client, will be backed up. If the Director
cannot find a previous valid Full backup then the job will be upgraded
into a Full backup. When the Director looks for a valid backup record
in the catalog database, it looks for a previous Job with:
\begin{itemize}
\item The same Job name.
\item The same Client name.
\item The same FileSet (any change to the definition of the FileSet, such as
adding or deleting a file in the Include or Exclude sections, constitutes a
different FileSet).
\item The Job was a Full, Differential, or Incremental backup.
\item The Job terminated normally (i.e. did not fail or was not canceled).
\item The Job started no longer ago than {\bf Max Full Interval}.
\end{itemize}
If any of the above conditions is not met, the Director will upgrade the
Incremental to a Full save. Otherwise, the Incremental backup will be
performed as requested.
The File daemon (Client) decides which files to backup for an
Incremental backup by comparing start time of the prior Job (Full,
Differential, or Incremental) against the time each file was last
"modified" (st\_mtime) and the time its attributes were last
"changed"(st\_ctime). If the file was modified or its attributes
changed on or after this start time, it will then be backed up.
Some virus scanning software may change st\_ctime while
doing the scan. For example, if the virus scanning program attempts to
reset the access time (st\_atime), which Bareos does not use, it will
cause st\_ctime to change and hence Bareos will backup the file during
an Incremental or Differential backup. In the case of Sophos virus
scanning, you can prevent it from resetting the access time (st\_atime)
and hence changing st\_ctime by using the {\bf \verb:--:no-reset-atime}
option. For other software, please see their manual.
When Bareos does an Incremental backup, all modified files that are
still on the system are backed up. However, any file that has been
deleted since the last Full backup remains in the Bareos catalog,
which means that if between a Full save and the time you do a
restore, some files are deleted, those deleted files will also be
restored. The deleted files will no longer appear in the catalog
after doing another Full save.
In addition, if you move a directory rather than copy it, the files in
it do not have their modification time (st\_mtime) or their attribute
change time (st\_ctime) changed. As a consequence, those files will
probably not be backed up by an Incremental or Differential backup which
depend solely on these time stamps. If you move a directory, and wish
it to be properly backed up, it is generally preferable to copy it, then
delete the original.
However, to track deleted files or directory changes in the
catalog during an Incremental backup, you can use \texttt{accurate}
mode. This is a quite memory-consuming process.
See \ilink{Accurate mode}{accuratemode} for more details.
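A Job can request this behavior with the \texttt{Accurate} directive, e.g.
(remaining Job directives elided):
\footnotesize
\begin{verbatim}
Job {
  ...
  Accurate = yes
}
\end{verbatim}
\normalsize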
\item [Differential] \hfill \\
\index[dir]{Differential}
When the Level is set to Differential
all files specified in the FileSet that have changed since the last
successful Full backup of the same Job will be backed up.
If the Director cannot find a
valid previous Full backup for the same Job, FileSet, and Client,
then the Differential job will be upgraded into a Full backup.
When the Director looks for a valid Full backup record in the catalog
database, it looks for a previous Job with:
\begin{itemize}
\item The same Job name.
\item The same Client name.
\item The same FileSet (any change to the definition of the FileSet, such as
adding or deleting a file in the Include or Exclude sections, constitutes a
different FileSet).
\item The Job was a FULL backup.
\item The Job terminated normally (i.e. did not fail or was not canceled).
\item The Job started no longer ago than {\bf Max Full Interval}.
\end{itemize}
If any of the above conditions is not met, the Director will upgrade the
Differential to a Full save. Otherwise, the Differential backup will be
performed as requested.
The File daemon (Client) decides which files to backup for a
differential backup by comparing the start time of the prior Full backup
Job against the time each file was last "modified" (st\_mtime) and the
time its attributes were last "changed" (st\_ctime). If the file was
modified or its attributes were changed on or after this start time, it
will then be backed up. The start time used is displayed after the {\bf
Since} on the Job report. In rare cases, using the start time of the
prior backup may cause some files to be backed up twice, but it ensures
that no change is missed. As with the Incremental option, you should
ensure that the clocks on your server and client are synchronized or as
close as possible to avoid the possibility of a file being skipped.
Note, on versions 1.33 or greater Bareos automatically makes the
necessary adjustments to the time between the server and the client so
that the times Bareos uses are synchronized.
When Bareos does a Differential backup, all modified files that are
still on the system are backed up. However, any file that has been
deleted since the last Full backup remains in the Bareos catalog, which
means that if between a Full save and the time you do a restore, some
files are deleted, those deleted files will also be restored. The
deleted files will no longer appear in the catalog after doing another
Full save. Removing deleted files from the catalog during a
Differential backup would be quite a time-consuming process and is not
currently implemented in Bareos. It is, however, a planned future feature.
As noted above, if you move a directory rather than copy it, the
files in it do not have their modification time (st\_mtime) or
their attribute change time (st\_ctime) changed. As a
consequence, those files will probably not be backed up by an
Incremental or Differential backup, which depends solely on these
time stamps. If you move a directory and wish it to be
properly backed up, it is generally preferable to copy it, then
delete the original. Alternatively, you can move the directory, then
use the {\bf touch} program to update the timestamps.
%% TODO: merge this with incremental
However, to have deleted files or directory changes reflected in the
catalog during a Differential backup, you can use \texttt{accurate}
mode. Note that this is a quite memory-consuming process. See \ilink{Accurate
mode}{accuratemode} for more details.
Every once in a while, someone asks why we need Differential
backups as long as Incremental backups pick up all changed files.
There are possibly many answers to this question, but the one
that is the most important for me is that a Differential backup
effectively merges all the Incremental and Differential backups since
the last Full backup into a single Differential backup. This has two
effects: 1. It gives some redundancy, since the old backups could be
used if the merged backup cannot be read. 2. More importantly, it
reduces the number of Volumes that are needed to do a restore,
effectively eliminating the need to read all the volumes on which the
Incremental and Differential backups since the last Full were written.
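The Full/Differential/Incremental rotation described above is typically
expressed in a Schedule resource. The following sketch (the resource name
and times are illustrative) runs a monthly Full, a weekly Differential,
and daily Incrementals:
\begin{verbatim}
Schedule {
  Name = "MonthlyCycle"                   # illustrative name
  Run = Level=Full 1st sun at 23:05
  Run = Level=Differential 2nd-5th sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}
\end{verbatim}
With such a cycle, a restore needs at most the last Full, the last
Differential, and the Incrementals taken since that Differential.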
\end{description}
For a {\bf Restore} Job, no level needs to be specified.
For a {\bf Verify} Job, the Level may be one of the following:
\begin{description}
\item [InitCatalog] \hfill \\
\index[dir]{InitCatalog}
does a scan of the specified {\bf FileSet} and stores the file
attributes in the Catalog database. Since no file data is saved, you
might ask why you would want to do this. It turns out to be a very
simple and easy way to have a {\bf Tripwire}-like feature using {\bf
Bareos}. In other words, it allows you to save the state of a set of
files defined by the {\bf FileSet} and later check to see if those files
have been modified or deleted and if any new files have been added.
This can be used to detect system intrusion. Typically you would
specify a {\bf FileSet} that contains the set of system files that
should not change (e.g. /sbin, /boot, /lib, /bin, ...). Normally, you
run the {\bf InitCatalog} level verify one time when your system is
first setup, and then once again after each modification (upgrade) to
your system. Thereafter, when you want to check the state of your
system files, you use a {\bf Verify} {\bf level = Catalog}. This
compares the results of your {\bf InitCatalog} with the current state of
the files.
\item [Catalog] \hfill \\
\index[dir]{Catalog}
Compares the current state of the files against the state previously
saved during an {\bf InitCatalog}. Any discrepancies are reported. The
items reported are determined by the {\bf verify} options specified on
the {\bf Include} directive in the specified {\bf FileSet} (see the {\bf
FileSet} resource below for more details). Typically this command will
be run once a day (or night) to check for any changes to your system
files.
\warning{If you run two Verify Catalog jobs on the same client at
the same time, the results will certainly be incorrect. This is because
Verify Catalog modifies the Catalog database while running in order to
track new files.}
\item [VolumeToCatalog] \hfill \\
\index[dir]{VolumeToCatalog}
This level causes Bareos to read the file attribute data written to the
Volume from the last backup Job for the job specified on the {\bf VerifyJob}
directive. The file attribute data are compared to the
values saved in the Catalog database and any differences are reported.
This is similar to the {\bf DiskToCatalog} level except that instead of
comparing the disk file attributes to the catalog database, the
attribute data written to the Volume is read and compared to the catalog
database. Although the attribute data including the signatures (MD5 or
SHA1) are compared, the actual file data is not compared (it is not in
the catalog).
\warning{If you run two Verify VolumeToCatalog jobs on the same
client at the same time, the results will certainly be incorrect. This
is because the Verify VolumeToCatalog modifies the Catalog database
while running.}
\item [DiskToCatalog] \hfill \\
\index[dir]{DiskToCatalog}
This level causes Bareos to read the files as they currently are on
disk, and to compare the current file attributes with the attributes
saved in the catalog from the last backup for the job specified on the
{\bf VerifyJob} directive. This level differs from the {\bf VolumeToCatalog}
level described above by the fact that it doesn't compare against a
previous Verify job but against a previous backup. When you run this
level, you must supply the verify options on your Include statements.
Those options determine what attribute fields are compared.
This command can be very useful if you have disk problems because it
will compare the current state of your disk against the last successful
backup, which may be several jobs old.
Note, the current implementation does not identify files that
have been deleted.
\end{description}
\item [Accurate = {\textless}yes{\textbar}no{\textgreater}] \hfill \\
\index[dir]{Accurate}
In accurate mode, the File daemon knows exactly which files were present
after the last backup, so it is able to handle deleted or renamed files.
When restoring a FileSet for a specified date (including "most
recent"), Bareos is able to restore exactly the files and
directories that existed at the time of the last backup prior to
that date including ensuring that deleted files are actually deleted,
and renamed directories are restored properly.
In this mode, the File daemon must keep data concerning all files in
memory, so if you do not have sufficient memory, the backup may
either be terribly slow or fail.
%% $$ memory = \sum_{i=1}^{n}(strlen(path_i + file_i) + sizeof(CurFile))$$
For 500,000 files (a typical desktop Linux system), it will require
approximately 64 megabytes of RAM on your File daemon to hold the
required information.
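For example, accurate mode can be enabled directly in a Job resource
(the job and resource names below are illustrative):
\begin{verbatim}
Job {
  Name = "client1-backup"                 # illustrative name
  Type = Backup
  Client = client1-fd
  FileSet = "Full Set"
  Accurate = yes
  ...
}
\end{verbatim}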
\item [Verify Job = {\textless}Job-Resource-Name{\textgreater}] \hfill \\
\index[dir]{Verify Job}
\index[dir]{Directive!Verify Job}
If you run a verify job without this directive, the last job run will be
compared with the catalog, which means that you must immediately follow
a backup by a verify command. If you specify a {\bf Verify Job}, Bareos
will find the last job with that name that ran. This permits you to run
all your backups, then run Verify jobs on those that you wish to be
verified (most often a {\bf VolumeToCatalog}) so that the tape just
written is re-read.
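A Verify job that re-reads the Volume written by a particular backup job
might be sketched as follows (all names are illustrative):
\begin{verbatim}
Job {
  Name = "VerifyClient1"                  # illustrative name
  Type = Verify
  Level = VolumeToCatalog
  Verify Job = "client1-backup"           # the backup job to check
  Client = client1-fd
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
}
\end{verbatim}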
\item [Catalog = {\textless}Catalog-resource-name{\textgreater}] \hfill \\
\index[dir]{Catalog}
\index[dir]{Directive!Catalog}
This specifies the name of the catalog resource to be used for this Job.
When a catalog is defined in a Job it will override the definition in
the client. (This keyword was introduced in Bareos 13.4.0)
\item [JobDefs = {\textless}JobDefs-Resource-Name{\textgreater}] \hfill \\
\index[dir]{JobDefs}
\index[dir]{Directive!JobDefs}
If a JobDefs-Resource-Name is specified, all the values contained in the
named JobDefs resource will be used as the defaults for the current Job.
Any value that you explicitly define in the current Job resource, will
override any defaults specified in the JobDefs resource. The use of
this directive permits writing much more compact Job resources where the
bulk of the directives are defined in one or more JobDefs. This is
particularly useful if you have many similar Jobs but with minor
variations such as different Clients. A simple example of the use of
JobDefs is provided in the default bareos-dir.conf file.
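As a sketch, a JobDefs resource holding the common directives and a
compact Job referring to it might look like this (names are illustrative):
\begin{verbatim}
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = Default
}

Job {
  Name = "client1-backup"                 # only the differences
  Client = client1-fd
  JobDefs = "DefaultJob"
}
\end{verbatim}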
\item [Bootstrap = {\textless}bootstrap-file{\textgreater}] \hfill \\
\index[dir]{Bootstrap}
\index[dir]{Directive!Bootstrap}
The Bootstrap directive specifies a bootstrap file that, if provided,
will be used during {\bf Restore} Jobs and is ignored in other Job
types. The {\bf bootstrap} file contains the list of tapes to be used
in a restore Job as well as which files are to be restored.
Specification of this directive is optional, and if specified, it is
used only for a restore job. In addition, when running a Restore job
from the console, this value can be changed.
If you use the {\bf Restore} command in the Console program, to start a
restore job, the {\bf bootstrap} file will be created automatically from
the files you select to be restored.
For additional details of the {\bf bootstrap} file, please see
\ilink{Restoring Files with the Bootstrap File}{BootstrapChapter} chapter
of this manual.
\label{writebootstrap}
\item [Write Bootstrap = {\textless}bootstrap-file-specification{\textgreater}] \hfill \\
\index[dir]{Write Bootstrap}
\index[dir]{Directive!Write Bootstrap}
The {\bf writebootstrap} directive specifies a file name where Bareos
will write a {\bf bootstrap} file for each Backup job run. This
directive applies only to Backup Jobs. If the Backup job is a Full
save, Bareos will erase any current contents of the specified file
before writing the bootstrap records. If the Job is an Incremental
or Differential
save, Bareos will append the current bootstrap record to the end of the
file.
Using this feature permits you to constantly have a bootstrap file that
can recover the current state of your system. Normally, the file
specified should be a mounted drive on another machine, so that if your
hard disk is lost, you will immediately have a bootstrap record
available. Alternatively, you should copy the bootstrap file to another
machine after it is updated. Note, it is a good idea to write a separate
bootstrap file for each Job backed up including the job that backs up
your catalog database.
If the {\bf bootstrap-file-specification} begins with a vertical bar
(\verb+|+), Bareos will use the specification as the name of a program to which
it will pipe the bootstrap record. It could for example be a shell
script that emails you the bootstrap record.
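For example, a hypothetical script that mails the bootstrap record could
be hooked in like this (the script path is illustrative, not part of
Bareos):
\begin{verbatim}
Job {
  ...
  # pipe the bootstrap record to a hypothetical mailer script
  Write Bootstrap = "|/usr/local/bin/mail-bootstrap.sh"
}
\end{verbatim}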
Before opening the file or executing the
specified command, Bareos performs
\ilink{character substitution}{character substitution} like in RunScript
directive. To automatically manage your bootstrap files, you can use
this in your {\bf JobDefs} resources:
\begin{verbatim}
JobDefs {
Write Bootstrap = "%c_%n.bsr"
...
}
\end{verbatim}
For more details on using this file, please see the chapter entitled
\ilink{The Bootstrap File}{BootstrapChapter} of this manual.
\item [Client = {\textless}client-resource-name{\textgreater}] \hfill \\
\index[dir]{Client}
\index[dir]{Directive!Client}
The Client directive specifies the Client (File daemon) that will be used in
the current Job. Only a single Client may be specified in any one Job. The
Client runs on the machine to be backed up, and sends the requested files to
the Storage daemon for backup, or receives them when restoring. For
additional details, see the
\ilink{Client Resource section}{ClientResource2} of this chapter.
This directive is required (before version 13.3.0 for all Job types;
from version 13.3.0 on, for all Job types except Copy and Migrate).
\item [FileSet = {\textless}FileSet-resource-name{\textgreater}] \hfill \\
\index[dir]{FileSet}
\index[dir]{Directive!FileSet}
The FileSet directive specifies the FileSet that will be used in the
current Job. The FileSet specifies which directories (or files) are to
be backed up, and what options to use (e.g. compression, ...). Only a
single FileSet resource may be specified in any one Job. For additional
details, see the \ilink{FileSet Resource section}{FileSetResource} of
this chapter.
This directive is required (before version 13.3.0 for all Job types;
from version 13.3.0 on, for all Job types except Copy and Migrate).
\item [Base = {\textless}job-resource-name, ...{\textgreater}] \hfill \\
\index[dir]{Base}
\index[dir]{Directive!Base}
The Base directive permits you to specify a list of jobs to be used as
the base during a Full backup. This directive is optional. See the \ilink{Base Job
chapter}{basejobs} for more information.
\item [Messages = {\textless}messages-resource-name{\textgreater}] \hfill \\
\index[dir]{Messages}
\index[dir]{Directive!Messages}
The Messages directive defines what Messages resource should be used for
this job, and thus how and where the various messages are to be
delivered. For example, you can direct some messages to a log file, and
others can be sent by email. For additional details, see the
\ilink{Messages Resource}{MessagesChapter} Chapter of this manual. This
directive is required.
\item [Pool = {\textless}pool-resource-name{\textgreater}] \hfill \\
\index[dir]{Pool}
\index[dir]{Directive!Pool}
The Pool directive defines the pool of Volumes where your data can be
backed up. Many Bareos installations will use only the {\bf Default}
pool. However, if you want to specify a different set of Volumes for
different Clients or different Jobs, you will probably want to use
Pools. For additional details, see the \ilink{Pool Resource
section}{PoolResource} of this chapter. This directive is required.
\item [Full Backup Pool = {\textless}pool-resource-name{\textgreater}] \hfill \\
\index[dir]{Full Backup Pool}
\index[dir]{Directive!Full Backup Pool}
The {\bf Full Backup Pool} specifies a Pool to be used for Full backups.
It will override any Pool specification during a Full backup. This
directive is optional.
\item [Differential Backup Pool = {\textless}pool-resource-name{\textgreater}] \hfill \\
\index[dir]{Differential Backup Pool}
\index[dir]{Directive!Differential Backup Pool}
The {\bf Differential Backup Pool} specifies a Pool to be used for
Differential backups. It will override any Pool specification during a
Differential backup. This directive is optional.
\item [Incremental Backup Pool = {\textless}pool-resource-name{\textgreater}] \hfill \\
\index[dir]{Incremental Backup Pool}
\index[dir]{Directive!Incremental Backup Pool}
The {\bf Incremental Backup Pool} specifies a Pool to be used for
Incremental backups. It will override any Pool specification during an
Incremental backup. This directive is optional.
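Taken together, the three pool override directives might be used as
follows (the pool names are illustrative):
\begin{verbatim}
Job {
  ...
  Pool = Default                          # fallback pool
  Full Backup Pool = FullPool
  Differential Backup Pool = DiffPool
  Incremental Backup Pool = IncPool
}
\end{verbatim}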
\item [Next Pool = {\textless}pool-resource-name{\textgreater}] \hfill \\
\index[dir]{Next Pool}
\index[dir]{Directive!Next Pool}
The {\bf Next Pool} directive specifies a Pool override used for Migration/Copy and Virtual Backup Jobs.
\item [Schedule = {\textless}schedule-name{\textgreater}] \hfill \\
\index[dir]{Schedule}
\index[dir]{Directive!Schedule}
The Schedule directive defines what schedule is to be used for the Job.
The schedule in turn determines when the Job will be automatically
started and what Job level (i.e. Full, Incremental, ...) is to be run.
This directive is optional, and if left out, the Job can only be started
manually using the Console program. Although you may specify only a
single Schedule resource for any one job, the Schedule resource may
contain multiple {\bf Run} directives, which allow you to run the Job at
many different times, and each {\bf run} directive permits overriding
the default Job Level, Pool, Storage, and Messages resources. This gives
considerable flexibility in what can be done with a single Job. For
additional details, see the \ilink{Schedule Resource
Chapter}{ScheduleResource} of this manual.
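As noted above, each {\bf Run} directive may override the default Level
and Pool. A sketch of such a Schedule (resource and pool names are
illustrative):
\begin{verbatim}
Schedule {
  Name = "WeeklyCycle"                    # illustrative name
  Run = Level=Full Pool=FullPool 1st sun at 23:05
  Run = Level=Incremental Pool=IncPool mon-sat at 23:05
}
\end{verbatim}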
\item [Storage = {\textless}storage-resource-name{\textgreater}] \hfill \\
\index[dir]{Storage}
\index[dir]{Directive!Storage}
The Storage directive defines the name of the storage services where you
want to backup the FileSet data. For additional details, see the
\ilink{Storage Resource Chapter}{StorageResource2} of this manual.
The Storage resource may also be specified in the Job's Pool resource,
in which case the value in the Pool resource overrides any value
in the Job. A Storage resource definition is not required in either
the Job resource or the Pool resource, but it must be specified in
one or the other; if not, an error will result.
\item [Max Start Delay = {\textless}time{\textgreater}] \hfill \\
\index[dir]{Max Start Delay}
\index[dir]{Directive!Max Start Delay}
The time specifies the maximum delay between the scheduled time and the
actual start time for the Job. For example, a job can be scheduled to
run at 1:00am, but because other jobs are running, it may wait to run.
If the delay is set to 3600 (one hour) and the job has not begun to run
by 2:00am, the job will be canceled. This can be useful, for example,
to prevent jobs from running during day time hours. The default is 0
which indicates no limit.
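For the one-hour example above, the directive would be written as
follows (the time value uses Bareos' usual time specification):
\begin{verbatim}
Job {
  ...
  Max Start Delay = 1 hour
}
\end{verbatim}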
\item [Max Run Time = {\textless}time{\textgreater}] \hfill \\
\index[dir]{Max Run Time}