XFS tests source tree
README

_______________________
BUILDING THE FSQA SUITE
_______________________

Building for Linux:
	- cd into the xfstests directory and run make.

Building for IRIX:
	- cd into the xfstests directory
	- set the ROOT and TOOLROOT environment variables appropriately for IRIX
	- run ./make_irix

______________________
USING THE FSQA SUITE
______________________

Preparing system for tests (IRIX and Linux):

    - compile XFS into your kernel or load XFS modules
    - install user tools including mkfs.xfs, xfs_db & xfs_bmap
    - If you wish to run the UDF components of the suite, install
      mkfs_udf and udf_db for IRIX, or mkudffs for Linux.  Also download
      and build the Philips UDF Verification Software from
      http://www.extra.research.philips.com/udf/, then copy the udf_test
      binary to xfstests/src/.  To disable the UDF verification tests,
      set the environment variable DISABLE_UDF_TEST to 1.

    - create one or two partitions to use for testing
        - one TEST partition
            - format as XFS, mount & optionally populate with 
              NON-IMPORTANT stuff
        - one SCRATCH partition (optional)
            - leave empty and expect this partition to be clobbered
              by some tests.  If this is not provided, many tests will
              not be run.
              
        (these must be two DIFFERENT partitions)
              
    - setup your environment
        - setenv TEST_DEV "device containing TEST PARTITION"
        - setenv TEST_DIR "mount point of TEST PARTITION"
        - optionally:
             - setenv SCRATCH_DEV "device containing SCRATCH PARTITION"
             - setenv SCRATCH_MNT "mount point for SCRATCH PARTITION"
             - setenv TAPE_DEV "tape device for testing xfsdump"
             - setenv RMT_TAPE_DEV "remote tape device for testing xfsdump"
             - setenv RMT_IRIXTAPE_DEV "remote IRIX tape device for testing xfsdump"
             - setenv SCRATCH_LOGDEV "device for scratch-fs external log"
             - setenv SCRATCH_RTDEV "device for scratch-fs realtime data"
             - setenv TEST_LOGDEV "device for test-fs external log"
             - setenv TEST_RTDEV "device for test-fs realtime data"
             - if TEST_LOGDEV and/or TEST_RTDEV are set, they will always
               be used.
             - if SCRATCH_LOGDEV and/or SCRATCH_RTDEV are set, they are
               used only when the USE_EXTERNAL environment variable is
               set to "yes".
        - or add a case to the switch in common.config assigning
          these variables based on the hostname of your test
          machine
        - or add these variables to a file called local.config and keep
          that file in your work area.
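For the local.config route, a minimal sketch might look like the following. The device paths are placeholders, and the file is shown in Bourne-shell export syntax on the assumption that the harness sources it as an sh fragment; point the paths at real partitions on your test machine:

```shell
# Hypothetical local.config (placeholder device names for illustration).
export TEST_DEV=/dev/sdb1        # device containing the TEST partition
export TEST_DIR=/mnt/test        # mount point of the TEST partition
export SCRATCH_DEV=/dev/sdb2     # optional: expect this to be clobbered
export SCRATCH_MNT=/mnt/scratch  # mount point for the SCRATCH partition
```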

    - if testing xfsdump, make sure the tape devices have a
      tape which can be overwritten.
          
    - make sure $TEST_DEV is a mounted XFS partition
    - make sure that $SCRATCH_DEV contains nothing useful
    
Running tests:

    - cd xfstests
    - By default the test suite will run XFS tests:
    - ./check 001 002 003 ... or you can explicitly select a filesystem:
      ./check -xfs [test(s)]
    - You can run a range of tests: ./check 067-078
    - Groups of tests may be run with: ./check -g [group(s)]
      See the 'group' file for details on groups
    - for UDF tests: ./check -udf [test(s)]
      To run all the UDF tests: ./check -udf -g udf
    - for NFS tests: ./check -nfs [test(s)]
    - To randomize the test order: ./check -r [test(s)]

    
    The check script tests the return value of each script, and
    compares the output against the expected output. If the output
    is not as expected, a diff will be output and an .out.bad file
    will be produced for the failing test.
    
    Unexpected console messages, crashes and hangs may be considered
    to be failures but are not necessarily detected by the QA system.
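The output comparison described above can be sketched roughly as follows. The .out and .out.bad naming comes from the text; everything else (the temporary directory, file contents) is illustrative, not the actual check implementation:

```shell
# Rough sketch (assumed mechanics) of how check compares one test's output
# against its verified .out file, leaving a .out.bad behind on mismatch.
seq=007
demo=$(mktemp -d)                        # throwaway directory for the demo
printf 'hello world\n' > "$demo/$seq.out"     # verified output for the test
printf 'hello world\n' > "$demo/$seq.actual"  # output captured from this run
if diff "$demo/$seq.out" "$demo/$seq.actual" > /dev/null; then
    echo "$seq pass"
else
    mv "$demo/$seq.actual" "$demo/$seq.out.bad"  # keep bad output around
    diff "$demo/$seq.out" "$demo/$seq.out.bad"
    echo "$seq fail"
fi
```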

__________________________ 
ADDING TO THE FSQA SUITE
__________________________


Creating new tests scripts:

    Use the "new" script.

Test script environment:

    When developing a new test script keep the following things in
    mind.  All of the environment variables and shell procedures are
    available to the script once the "common.rc" file has been
    sourced.

     1. The tests are run from an arbitrary directory.  If you want to
	do operations on an XFS filesystem (good idea, eh?), then do
	one of the following:

	(a) Create directories and files at will in the directory
	    $TEST_DIR ... this is within an XFS filesystem and world
	    writeable.  You should cleanup when your test is done,
	    e.g. use a _cleanup shell procedure in the trap ... see
	    001 for an example.  If you need to know, the $TEST_DIR
	    directory is within the filesystem on the block device
	    $TEST_DEV.

	(b) mkfs a new XFS filesystem on $SCRATCH_DEV, and mount this
	    on $SCRATCH_MNT. Call the _require_scratch function
            on startup if you require use of the scratch partition.
            _require_scratch does some checks on $SCRATCH_DEV &
            $SCRATCH_MNT and makes sure they're unmounted. You should
            cleanup when your test is done, and in particular unmount
            $SCRATCH_MNT.
	    Tests can make use of $SCRATCH_LOGDEV and $SCRATCH_RTDEV
	    for testing external log and realtime volumes - however,
	    these tests need to simply "pass" (e.g. cat $seq.out; exit
	    - or default to an internal log) in the common case where
	    these variables are not set.

     2. You can safely create temporary files that are not part of the
	filesystem tests (e.g. to catch output, prepare lists of things
	to do, etc.) in files named $tmp.<anything>.  The standard test
	script framework created by "new" will initialize $tmp and
	cleanup on exit.

     3. By default, tests are run as the same uid as the person
	executing the control script "check" that runs the test scripts.

	If you need to be root, add a call to the shell procedure
	_need_to_be_root ... this will do nothing or exit with an
	error message depending on your current uid.

     4. Some other useful shell procedures:

	_get_fqdn		- echo the host's fully qualified
				  domain name

	_get_pids_by_name	- one argument is a process name, and
				  return all of the matching pids on
				  standard output

	_within_tolerance	- fancy numerical "close enough is good
				  enough" filter for deterministic
				  output ... see comments in
				  common.filter for an explanation

	_filter_date		- turn ctime(3) format dates into the
				  string DATE for deterministic
				  output

	_cat_passwd,		- dump the content of the password
	_cat_group		  or group file (both the local file
				  and the content of the NIS database
				  if it is likely to be present)

     5. General recommendations, usage conventions, etc.:
	- When the content of the password or group file is
	  required, get it using the _cat_passwd and _cat_group
	  functions, to ensure NIS information is included if NIS
	  is active.
	- When calling getfacl in a test, pass the "-n" argument so
	  that numeric rather than symbolic identifiers are used in
	  the output.
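Taken together, the conventions above suggest a skeleton like the following. This is an illustrative sketch only: the real template comes from the "new" script, common.rc is not sourced here, and the body is a stub, so it only shows the $tmp / trap / status pattern:

```shell
# Hypothetical skeleton of a test script (run in a throwaway subshell so
# the exit trap does not affect the calling shell).
sh -c '
tmp=/tmp/$$
status=1        # failure is the default; set status=0 only on success

_cleanup()
{
    cd /
    rm -f $tmp.*
}
trap "_cleanup; exit \$status" 0 1 2 3 15

# ... real work would happen here, with output filtered for determinism ...
echo "test body ran"

status=0
exit
'
echo "skeleton exit code: $?"
```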

Verified output:

    Each test script has a numerical name, e.g. 007, and an associated
    verified output, e.g. 007.out.

    It is important that the verified output is deterministic, and
    part of the job of the test script is to filter the output to
    make this so.  Examples of the sort of things that need filtering:

    - dates
    - pids
    - hostnames
    - filesystem names
    - timezones
    - variable directory contents
    - imprecise numbers, especially sizes and times

    Use the "remake" script to recreate the verified output for one
    or more tests.
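For instance, a date filter in the spirit of _filter_date might look like this. The real filter lives in common.filter; this sed expression is an assumption made for the sketch:

```shell
# Illustrative filter: rewrite a ctime(3)-style date to the literal
# string DATE so the verified output stays deterministic.
line="created Fri Jan  2 15:04:05 2006 ok"
filtered=$(echo "$line" | \
    sed -E 's/[A-Z][a-z]{2} [A-Z][a-z]{2} +[0-9]+ [0-9]{2}:[0-9]{2}:[0-9]{2} [0-9]{4}/DATE/')
echo "$filtered"    # -> created DATE ok
```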

Pass/failure:

    The script "check" may be used to run one or more tests.

    Test number $seq is deemed to "pass" when:
    (a) no "core" file is created,
    (b) the file $seq.notrun is not created,
    (c) the exit status is 0, and
    (d) the output matches the verified output.

    In the "not run" case (b), the $seq.notrun file should contain a
    short one-line summary of why the test was not run.  The standard
    output is not checked, so this can be used for a more verbose
    explanation and to provide feedback when the QA test is run
    interactively.


    To force a non-zero exit status use:
	status=1
	exit

    Note that:
	exit 1
    won't have the desired effect because of the way the exit trap
    works.
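The effect of the exit trap can be demonstrated in isolation (assumed trap mechanics, using a throwaway subshell):

```shell
# The exit trap re-exits with $status, overriding the argument to exit:
# here "exit 1" inside the subshell still yields an exit code of 0.
rc=0
sh -c 'status=0; trap "exit \$status" 0; exit 1' || rc=$?
echo "final exit code: $rc"
```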

    The recent pass/fail history is maintained in the file "check.log".
    The elapsed time for the most recent pass for each test is kept
    in "check.time".