\chapter{Director Configuration}
\label{DirectorChapter}
\label{DirectorConfChapter}
\index[general]{Director!Configuring the}
\index[general]{Configuring the Director}
Of all the configuration files needed to run {Bareos}, the Director's is
the most complicated and the one that you will need to modify the most often
as you add clients or modify the FileSets.
For a general discussion of configuration files and resources including the
recognized data types see \nameref{ConfigureChapter}.
%\section{Director Resource Types}
\index[general]{Types!Director Resource}
\index[general]{Director!Resource Types}
\index[dir]{Resource Types}
Everything revolves around a job and is tied to a job in one
way or another.
The \bareosDir knows about the following resource types:
\begin{itemize}
\item
\nameref{DirectorResourceDirector} -- to define the Director's
name and its access password used for authenticating the Console program.
Only a single Director resource definition may appear in the Director's
configuration file.
\item
\nameref{DirectorResourceJob} -- to define the backup/restore Jobs
and to tie together the Client, FileSet and Schedule resources to be used
for each Job. Normally, you will define Jobs with different names, one
per Client (i.e. one Job per Client, each with a different name).
\item
\nameref{DirectorResourceJobDefs} -- optional resource for
providing defaults for Job resources.
\item
\nameref{DirectorResourceSchedule} -- to define when a Job has to
run. You may have any number of Schedules, but each job will reference only
one.
\item
\nameref{DirectorResourceFileSet} -- to define the set of files
to be backed up for each Client. You may have any number of
FileSets but each Job will reference only one.
\item
\nameref{DirectorResourceClient} -- to define what Client is to be
backed up. You will generally have multiple Client definitions. Each
Job will reference only a single client.
\item
\nameref{DirectorResourceStorage} -- to define on what physical
device the Volumes should be mounted. You may have one or
more Storage definitions.
\item
\nameref{DirectorResourcePool} -- to define the pool of Volumes
that can be used for a particular Job. Most people use a
single default Pool. However, if you have a large number
of clients or volumes, you may want to have multiple Pools.
Pools allow you to restrict a Job (or a Client) to use
only a particular set of Volumes.
\item
\nameref{DirectorResourceCatalog} -- to define in what database to
keep the list of files and the Volume names where they are backed up.
Most people only use a single catalog.
Using multiple catalogs is possible, but neither advised nor supported;
see \nameref{MultipleCatalogs}.
\item
\nameref{DirectorResourceMessages} -- to define where error and
information messages are to be sent or logged. You may define
multiple different message resources and hence direct particular
classes of messages to different users or locations (files, ...).
\end{itemize}
\section{Director Resource}
\label{DirectorResourceDirector}
\index[general]{Director Resource}
\index[general]{Resource!Director}
The Director resource defines the attributes of the Director running on the
network. Only a single Director
resource is allowed.
The following is an example of a valid Director resource definition:
\begin{bconfig}{Director Resource example}
Director {
  Name = bareos-dir
  Password = secretpassword
  QueryFile = "/etc/bareos/query.sql"
  Maximum Concurrent Jobs = 10
  Messages = Daemon
}
\end{bconfig}
\input{autogenerated/bareos-dir-resource-director-table.tex}
\input{director-resource-director-definitions.tex}
\input{autogenerated/bareos-dir-resource-director-description.tex}
\section{Job Resource}
\label{DirectorResourceJob}
\label{JobResource}
\index[general]{Resource!Job}
\index[general]{Job!Resource}
The Job resource defines a Job (Backup, Restore, ...) that Bareos must
perform. Each Job resource definition contains the name of a Client and
a FileSet to backup, the Schedule for the Job, where the data
are to be stored, and what media Pool can be used. In effect, each Job
resource must specify What, Where, How, and When or FileSet, Storage,
Backup/Restore/Level, and Schedule respectively. Note, the FileSet must
be specified for a restore job for historical reasons, but it is no longer used.
Only a single type ({\bf Backup}, {\bf Restore}, ...) can be specified for any
job. If you want to backup multiple FileSets on the same Client or multiple
Clients, you must define a Job for each one.
Note, you define only a single Job to do the Full, Differential, and
Incremental backups since the different backup levels are tied together by
a unique Job name. Normally, you will have only one Job per Client, but
if a client has a really huge number of files (more than several million),
you might want to split it into two Jobs each with a different FileSet
covering only part of the total files.
Multiple Storage daemons are not currently supported for Jobs, so if
you do want to use multiple storage daemons, you will need to create
a different Job and ensure that for each Job the combination of
Client and FileSet is unique. The Client and FileSet are what Bareos
uses to restore a client, so if there are multiple Jobs with the same
Client and FileSet or multiple Storage daemons that are used, the
restore will not work. This problem can be resolved by defining multiple
FileSet definitions (the names must be different, but the contents of
the FileSets may be the same).
\input{autogenerated/bareos-dir-resource-job-table.tex}
\input{director-resource-job-definitions.tex}
\input{autogenerated/bareos-dir-resource-job-description.tex}
The following is an example of a valid Job resource definition:
\begin{bconfig}{Job Resource Example}
Job {
  Name = "Minou"
  Type = Backup
  Level = Incremental   # default
  Client = Minou
  FileSet = "Minou Full Set"
  Storage = DLTDrive
  Pool = Default
  Schedule = "MinouWeeklyCycle"
  Messages = Standard
}
\end{bconfig}
\section{JobDefs Resource}
\label{DirectorResourceJobDefs}
\index[general]{Job!JobDefs Resource}
\index[general]{Resource!JobDefs}
The JobDefs resource permits all the same directives that can appear in a Job
resource. However, a JobDefs resource does not create a Job, rather it can be
referenced within a Job to provide defaults for that Job. This permits you to
concisely define several nearly identical Jobs, each one referencing a JobDefs
resource which contains the defaults. Only the changes from the defaults need to
be mentioned in each Job.
% \input{autogenerated/bareos-dir-resource-jobdefs-table.tex}
% \input{director-resource-jobdefs-definitions.tex}
% \input{autogenerated/bareos-dir-resource-jobdefs-description.tex}
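For example, a JobDefs resource holding common defaults and a Job
referencing it might look like this (a sketch; the resource names are
illustrative):

\begin{bconfig}{JobDefs Resource Example}
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Pool = Default
  Messages = Standard
}

Job {
  Name = "client1-backup"
  Client = client1-fd
  JobDefs = "DefaultJob"
}
\end{bconfig}

The Job states only what differs from the defaults; everything else is
taken from the referenced JobDefs resource.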
\section{Schedule Resource}
\label{DirectorResourceSchedule}
\index[general]{Resource!Schedule}
\index[general]{Schedule!Resource}
The Schedule resource provides a means of automatically scheduling a Job as
well as the ability to override the default Level, Pool, Storage and Messages
resources. If a Schedule resource is not referenced in a Job, the Job can only
be run manually. In general, you specify an action to be taken and when.
\input{autogenerated/bareos-dir-resource-schedule-table.tex}
\input{director-resource-schedule-definitions.tex}
\input{autogenerated/bareos-dir-resource-schedule-description.tex}
Note, the Week of Year specification wnn follows the ISO standard definition
of the week of the year, where Week 1 is the week in which the first Thursday
of the year occurs, or alternatively, the week which contains the 4th of
January. Weeks are numbered w01 to w53. w00 for Bareos is the week that
precedes the first ISO week (i.e. has the first few days of the year if any
occur before Thursday). w00 is not defined by the ISO specification. A week
starts with Monday and ends with Sunday.
According to the NIST (US National Institute of Standards and Technology),
12am and 12pm are ambiguous and can be defined to mean anything. However,
12:01am is the same as 00:01 and 12:01pm is the same as 12:01, so Bareos
defines 12am as 00:00 (midnight) and 12pm as 12:00 (noon). You can avoid
this ambiguity (confusion) by using 24 hour time specifications (i.e. no
am/pm).
An example schedule resource that is named {\bf WeeklyCycle} and runs a job
with level full each Sunday at 2:05am and an incremental job Monday through
Saturday at 2:05am is:
\begin{bconfig}{Schedule Example}
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full sun at 2:05
  Run = Level=Incremental mon-sat at 2:05
}
\end{bconfig}
An example of a possible monthly cycle is as follows:
\begin{bconfig}{}
Schedule {
  Name = "MonthlyCycle"
  Run = Level=Full Pool=Monthly 1st sun at 2:05
  Run = Level=Differential 2nd-5th sun at 2:05
  Run = Level=Incremental Pool=Daily mon-sat at 2:05
}
\end{bconfig}
The first of every month:
\begin{bconfig}{}
Schedule {
  Name = "First"
  Run = Level=Full on 1 at 2:05
  Run = Level=Incremental on 2-31 at 2:05
}
\end{bconfig}
The last Friday of the month (i.e. the last Friday in the last week of the month):
\begin{bconfig}{}
Schedule {
  Name = "Last Friday"
  Run = Level=Full last fri at 21:00
}
\end{bconfig}
Every 10 minutes:
\begin{bconfig}{}
Schedule {
  Name = "TenMinutes"
  Run = Level=Full hourly at 0:05
  Run = Level=Full hourly at 0:15
  Run = Level=Full hourly at 0:25
  Run = Level=Full hourly at 0:35
  Run = Level=Full hourly at 0:45
  Run = Level=Full hourly at 0:55
}
\end{bconfig}
The {\bf modulo scheduler} makes it easy to specify schedules such as odd or even days/weeks, or more generally every n days or weeks. It is called the modulo scheduler because it uses the modulo operation to determine whether the schedule must run. The second value after the slash determines the length of the day/week cycle; the first value determines on which day/week of that cycle the job runs first. E.g. to run a backup in a 5-week cycle, starting on week 3, specify w03/w05.
\begin{bconfig}{Schedule Examples: modulo}
Schedule {
  Name = "Odd Days"
  Run = 1/2 at 23:10
}

Schedule {
  Name = "Even Days"
  Run = 2/2 at 23:10
}

Schedule {
  Name = "On the 3rd week in a 5-week-cycle"
  Run = w03/w05 at 23:10
}

Schedule {
  Name = "Odd Weeks"
  Run = w01/w02 at 23:10
}

Schedule {
  Name = "Even Weeks"
  Run = w02/w02 at 23:10
}
\end{bconfig}
\subsection{Technical Notes on Schedules}
\index[general]{Schedule!Technical Notes on Schedules}
Internally Bareos keeps a schedule as a bit mask. There are six masks and a
minute field to each schedule. The masks are hour, day of the month (mday),
month, day of the week (wday), week of the month (wom), and week of the year
(woy). The schedule is initialized to have the bits of each of these masks
set, which means that at the beginning of every hour, the job will run. When
you specify a month for the first time, the mask will be cleared and the bit
corresponding to your selected month will be set. If you specify a second
month, its bit will also be added to the mask. Thus when
Bareos checks the masks to see if the bits are set corresponding to the
current time, your job will run only in the two months you have set. Likewise,
if you set a time (hour), the hour mask will be cleared, and the hour you
specify will be set in the bit mask and the minutes will be stored in the
minute field.
For any schedule you have defined, you can see how these bits are set by doing
a {\bf show schedules} command in the Console program. Please note that the
bit mask is zero based, and Sunday is the first day of the week (bit zero).
\section{FileSet Resource}
\label{DirectorResourceFileSet}
\label{FileSetResource}
\index[general]{Resource!FileSet}
\index[general]{FileSet!Resource}
The FileSet resource defines what files are to be included or excluded in a
backup job. A {\bf FileSet} resource is required for each backup Job. It
consists of a list of files or directories to be included, a list of files
or directories to be excluded and the various backup options such as
compression, encryption, and signatures that are to be applied to each
file.
Any change to the list of the included files will cause Bareos to
automatically create a new FileSet (defined by the name and an MD5 checksum
of the Include/Exclude contents). Each time a new FileSet is created,
Bareos will ensure that the next backup is always a Full save.
\input{autogenerated/bareos-dir-resource-fileset-table.tex}
\input{director-resource-fileset-definitions.tex}
\input{autogenerated/bareos-dir-resource-fileset-description.tex}
\input{dirdconf-fileset}
\section{Client Resource}
\label{DirectorResourceClient}
\index[general]{Resource!Client}
\index[general]{Client Resource}
The Client (or FileDaemon) resource defines the attributes of the Clients that are served by
this Director, that is, the machines that are to be backed up. You will need
one Client resource definition for each machine to be backed up.
\input{autogenerated/bareos-dir-resource-client-table.tex}
\input{director-resource-client-definitions.tex}
\input{autogenerated/bareos-dir-resource-client-description.tex}
The following is an example of a valid Client resource definition:
\begin{bconfig}{Minimal client resource definition in bareos-dir.conf}
Client {
  Name = client1-fd
  Address = client1.example.com
  Password = "secret"
}
\end{bconfig}
The following is an example of a Quota Configuration in Client resource:
\begin{bconfig}{Quota Configuration in Client resource}
Client {
  Name = client1-fd
  Address = client1.example.com
  Password = "secret"

  # Quota
  Soft Quota = 50 mb
  Soft Quota Grace Period = 2 days
  Strict Quotas = Yes
  Hard Quota = 150 mb
  Quota Include Failed Jobs = yes
}
\end{bconfig}
\section{Storage Resource}
\label{DirectorResourceStorage}
\index[general]{Resource!Storage}
\index[general]{Storage Resource}
The Storage resource defines which Storage daemons are available for use by
the Director.
\input{autogenerated/bareos-dir-resource-storage-table.tex}
\input{director-resource-storage-definitions.tex}
\input{autogenerated/bareos-dir-resource-storage-description.tex}
The following is an example of a valid Storage resource definition:
\begin{bconfig}{Storage resource (tape) example}
Storage {
  Name = DLTDrive
  Address = lpmatou
  Password = storage_password  # password for Storage daemon
  Device = "HP DLT 80"         # same as Device in Storage daemon
  Media Type = DLT8000         # same as MediaType in Storage daemon
}
\end{bconfig}
\section{Pool Resource}
\label{DirectorResourcePool}
\index[general]{Resource!Pool}
\index[general]{Pool Resource}
The Pool resource defines the set of storage Volumes (tapes or files) to be
used by Bareos to write the data. By configuring different Pools, you can
determine which set of Volumes (media) receives the backup data. This permits,
for example, to store all full backup data on one set of Volumes and all
incremental backups on another set of Volumes. Alternatively, you could assign
a different set of Volumes to each machine that you backup. This is most
easily done by defining multiple Pools.
Another important aspect of a Pool is that it contains the default attributes
(Maximum Jobs, Retention Period, Recycle flag, ...) that will be given to a
Volume when it is created. This avoids the need for you to answer a large
number of questions when labeling a new Volume. Each of these attributes can
later be changed on a Volume by Volume basis using the {\bf update} command in
the console program. Note that you must explicitly specify which Pool Bareos
is to use with each Job. Bareos will not automatically search for the correct
Pool.
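As a sketch of such default Volume attributes (the directive values are
illustrative; see the Pool directive table for details):

\begin{bconfig}{Pool with Volume defaults (sketch)}
Pool {
  Name = Full
  Pool Type = Backup
  Recycle = yes               # reuse Volumes after retention expires
  Auto Prune = yes            # prune expired Volumes automatically
  Volume Retention = 365 days # keep Volume data for one year
  Maximum Volume Jobs = 1     # write only one Job per Volume
}
\end{bconfig}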
Most often in Bareos installations all backups for all machines (Clients) go
to a single set of Volumes. In this case, you will probably only use the {\bf
Default} Pool. If your backup strategy calls for you to mount a different tape
each day, you will probably want to define a separate Pool for each day. For
more information on this subject, please see the
\ilink{Backup Strategies}{StrategiesChapter} chapter of this
manual.
To use a Pool, there are three distinct steps. First the Pool must be defined
in the Director's configuration file. Then the Pool must be written to the
Catalog database. This is done automatically by the Director each time that it
starts, or alternatively can be done using the {\bf create} command in the
console program. Finally, if you change the Pool definition in the Director's
configuration file and restart Bareos, the Pool will be updated; alternatively,
you can use the {\bf update pool} console command to refresh the database
image. It is this database image rather than the Director's resource image
that is used for the default Volume attributes. Note, for the pool to be
automatically created or updated, it must be explicitly referenced by a Job
resource.
Next the physical media must be labeled. The labeling can either be done with
the {\bf label} command in the {\bf console} program or using the {\bf btape}
program. The preferred method is to use the {\bf label} command in the {\bf
console} program.
Finally, you must add Volume names (and their attributes) to the Pool. For
Volumes to be used by Bareos they must be of the same {\bf Media Type} as the
archive device specified for the job (i.e. if you are going to back up to a
DLT device, the Pool must have DLT volumes defined since 8mm volumes cannot be
mounted on a DLT drive). The {\bf Media Type} has particular importance if you
are backing up to files. When running a Job, you must explicitly specify which
Pool to use. Bareos will then automatically select the next Volume to use from
the Pool, but it will ensure that the {\bf Media Type} of any Volume selected
from the Pool is identical to that required by the Storage resource you have
specified for the Job.
If you use the {\bf label} command in the console program to label the
Volumes, they will automatically be added to the Pool, so this last step is
not normally required.
It is also possible to add Volumes to the database without explicitly labeling
the physical volume. This is done with the {\bf add} console command.
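In the console program, this might look as follows (a sketch; the storage
and pool names are illustrative):

\begin{bconfig}{Labeling and adding Volumes (sketch)}
* label storage=File volume=Full-0001 pool=Full
* add pool=Full storage=File
\end{bconfig}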
As previously mentioned, each time Bareos starts, it scans all the Pools
associated with each Catalog, and if the database record does not already
exist, it will be created from the Pool Resource definition. {\bf Bareos}
probably should do an {\bf update pool} if you change the Pool definition, but
currently, you must do this manually using the {\bf update pool} command in
the Console program.
The Pool Resource defined in the Director's configuration file
(\file{bareos-dir.conf}) may contain the following directives:
\input{autogenerated/bareos-dir-resource-pool-table.tex}
\input{director-resource-pool-definitions.tex}
\input{autogenerated/bareos-dir-resource-pool-description.tex}
In order for a Pool to be used during a Backup Job, the Pool must have at
least one Volume associated with it. Volumes are created for a Pool using
the {\bf label} or the {\bf add} commands in the {\bf Bareos Console}
program. In addition to adding Volumes to the Pool (i.e. putting the
Volume names in the Catalog database), the physical Volume must be labeled
with a valid Bareos software volume label before {\bf Bareos} will accept
the Volume. This will be automatically done if you use the {\bf label}
command. Bareos can automatically label Volumes if instructed to do so,
but this feature is not yet fully implemented.
The following is an example of a valid Pool resource definition:
\begin{bconfig}{Pool resource example}
Pool {
  Name = Default
  Pool Type = Backup
}
\end{bconfig}
\subsection{Scratch Pool}
\label{TheScratchPool}
\index[general]{Scratch Pool}
\index[general]{Pool!Scratch}
In general, you can give your Pools any name you wish, but there is one
important restriction: the Pool named {\bf Scratch}, if it exists, behaves
like a scratch pool of Volumes: when Bareos needs a new Volume for
writing and cannot find one, it will look in the Scratch pool, and if
it finds an available Volume, it will move it out of the Scratch pool into
the Pool currently being used by the job.
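A Scratch pool is defined like any other Pool; only the reserved name
matters. A minimal definition:

\begin{bconfig}{Scratch pool example}
Pool {
  Name = Scratch
  Pool Type = Backup
}
\end{bconfig}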
\section{Catalog Resource}
\label{DirectorResourceCatalog}
\index[general]{Resource!Catalog}
\index[general]{Catalog Resource}
The Catalog Resource defines what catalog to use for the current job.
Currently, Bareos can only handle a single database server (SQLite, MySQL,
PostgreSQL) that is defined when configuring {\bf Bareos}. However, there
may be as many Catalogs (databases) defined as you wish. For example, you
may want each Client to have its own Catalog database, or you may want
backup jobs to use one database and verify or restore jobs to use another
database.
Since SQLite is compiled in, it always runs on the same machine
as the Director and the database must be directly accessible (mounted) from
the Director. However, since both MySQL and PostgreSQL are networked
databases, they may reside either on the same machine as the Director
or on a different machine on the network. See below for more details.
\input{autogenerated/bareos-dir-resource-catalog-table.tex}
\input{director-resource-catalog-definitions.tex}
\input{autogenerated/bareos-dir-resource-catalog-description.tex}
The following is an example of a valid Catalog resource definition:
\begin{bconfig}{Catalog Resource for Sqlite}
Catalog {
  Name = SQLite
  DB Driver = sqlite
  DB Name = bareos
  DB User = bareos
  DB Password = ""
}
\end{bconfig}
or for a Catalog on another machine:
\begin{bconfig}{Catalog Resource for remote MySQL}
Catalog {
  Name = MySQL
  DB Driver = mysql
  DB Name = bareos
  DB User = bareos
  DB Password = "secret"
  DB Address = remote.example.com
  DB Port = 1234
}
\end{bconfig}
\section{Messages Resource}
\label{DirectorResourceMessages}
\index[general]{Resource!Messages}
\index[general]{Messages Resource}
For the details of the Messages Resource, please see the
\nameref{MessagesChapter} of this manual.
\section{Console Resource}
\label{DirectorResourceConsole}
\index[general]{Console Resource}
\index[general]{Resource!Console}
There are three different kinds of consoles, which the administrator or
user can use to interact with the Director. These three kinds of consoles
comprise three different security levels.
\begin{description}
\item[Default Console] \index[dir]{Console!Default Console}
the first console type is an \bquote{anonymous} or \bquote{default} console,
which has full privileges. There is no console resource necessary for
this type since the password is specified in the Director's resource and
consequently such consoles do not have a name as defined on a \configdirective{Name} directive.
Typically you would use it only for administrators.
\item[Named Console] \index[dir]{Named Console} \index[dir]{Console!Named Console} \index[dir]{Console!Restricted Console}
the second type of console is a
\bquote{named} console (also called \bquote{Restricted Console}), defined within a Console resource in both the Director's
configuration file and in the Console's configuration file. Both the
names and the passwords in these two entries must match, much as is the
case for Client programs.
This second type of console begins with absolutely no privileges except
those explicitly specified in the Director's Console resource. Thus you
can have multiple Consoles with different names and passwords, sort of
like multiple users, each with different privileges. As a default,
these consoles can do absolutely nothing -- no commands whatsoever. You
give them privileges or rather access to commands and resources by
specifying access control lists in the Director's Console resource. The
ACLs are specified by a directive followed by a list of access names.
Examples of this are shown below.
\item[Named Console with SetIP] \index[dir]{Console!Named Console with SetIP}
the third type of console is similar to the named console in that
it requires a Console resource definition in both the Director and the
Console. In addition, if the console name, provided on the
\linkResourceDirective{Dir}{Console}{Name} directive,
is the same as a Client name, that console is permitted to
use the \bcommand{SetIP}{} command to change the Address directive in the
Director's Client resource to the IP address of the Console. This
permits portables or other machines using DHCP (non-fixed IP addresses)
to \bquote{notify} the Director of their current IP address.
\end{description}
The Console resource is optional and need not be specified. The following
directives are permitted within these resources:
\input{autogenerated/bareos-dir-resource-console-table.tex}
\input{director-resource-console-definitions.tex}
\input{autogenerated/bareos-dir-resource-console-description.tex}
The example at \nameref{sec:ConsoleAccessExample} shows how to use a console resource for a connection from a client like \command{bconsole}.
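As a sketch of a restricted named console (the ACL directive names follow
the Director's Console resource; the job, client, and command names are
illustrative):

\begin{bconfig}{Restricted named console (sketch)}
Console {
  Name = "operator-console"
  Password = "secret"
  Job ACL = "client1-backup"
  Client ACL = client1-fd
  Command ACL = run, status, messages, quit
  Catalog ACL = *all*
}
\end{bconfig}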
\section{Profile Resource}
\label{DirectorResourceProfile}
\index[general]{Profile Resource}
\index[general]{Resource!Profile}
The Profile Resource defines a set of ACLs. \nameref{DirectorResourceConsole}s can be tied to one or more profiles (\linkResourceDirective{Dir}{Console}{Profile}),
making it easier to use a common set of ACLs.
\input{autogenerated/bareos-dir-resource-profile-table.tex}
\input{bareos-dir-resource-profile-definitions.tex}
\input{autogenerated/bareos-dir-resource-profile-description.tex}
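A minimal sketch of a Profile and a Console referencing it (the resource
names are illustrative):

\begin{bconfig}{Profile resource (sketch)}
Profile {
  Name = "operator"
  Command ACL = run, restore, status, messages, quit
  Job ACL = *all*
}

Console {
  Name = "operator-console"
  Password = "secret"
  Profile = "operator"
}
\end{bconfig}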
\section{Counter Resource}
\label{DirectorResourceCounter}
\index[general]{Resource!Counter}
\index[general]{Counter Resource}
The Counter Resource defines a counter variable that can be accessed by
variable expansion used for creating Volume labels with the \linkResourceDirective{Dir}{Pool}{Label Format}
directive.
\input{autogenerated/bareos-dir-resource-counter-table.tex}
\input{director-resource-counter-definitions.tex}
\input{autogenerated/bareos-dir-resource-counter-description.tex}
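A sketch of a Counter used to number Volumes via
\linkResourceDirective{Dir}{Pool}{Label Format} (the variable expansion
syntax shown is an assumption; verify it against the Label Format
documentation):

\begin{bconfig}{Counter resource (sketch)}
Counter {
  Name = FileCounter
  Minimum = 1
  Maximum = 9999
  Catalog = MyCatalog   # store the counter persistently in this catalog
}

Pool {
  Name = Files
  Pool Type = Backup
  Label Format = "File-${FileCounter+}"  # assumed: "+" increments the counter
}
\end{bconfig}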
\section{Example Director Configuration File}
\label{SampleDirectorConfiguration}
\index[general]{Configuration!Director!Example}
\index[dir]{Configuration File Example}
See below an example of a full Director configuration file:
\bconfigInput{bareos-dir.conf.in}