=head1 NAME
xl.cfg - xl domain configuration file syntax
=head1 SYNOPSIS
/etc/xen/xldomain
=head1 DESCRIPTION
Creating a VM (a domain in Xen terminology, sometimes called a guest)
with xl requires the provision of a domain configuration file. Typically,
these live in F</etc/xen/DOMAIN.cfg>, where DOMAIN is the name of the
domain.
=head1 SYNTAX
A domain configuration file consists of a series of options, specified by
using C<KEY=VALUE> pairs.
Some C<KEY>s are mandatory, some are general options which apply to
any guest type, while others relate only to specific guest types
(e.g. PV or HVM guests).
A C<VALUE> can be one of:
=over 4
=item B<"STRING">
A string, surrounded by either single or double quotes. But if the
STRING is part of a SPEC_STRING, the quotes should be omitted.
=item B<NUMBER>
A number, in either decimal, octal (using a C<0> prefix) or
hexadecimal (using a C<0x> prefix) format.
=item B<BOOLEAN>
A C<NUMBER> interpreted as C<False> (C<0>) or C<True> (any other
value).
=item B<[ VALUE, VALUE, ... ]>
A list of C<VALUE>s of the above types. Lists can be heterogeneous and
nested.
=back
The semantics of each C<KEY> defines which type of C<VALUE> is required.
Pairs may be separated either by a newline or a semicolon. Both
of the following are valid:
    name="h0"
    type="hvm"

    name="h0"; type="hvm"
=head1 OPTIONS
=head2 Mandatory Configuration Items
The following key is mandatory for any guest type.
=over 4
=item B<name="NAME">
Specifies the name of the domain. Names of domains existing on a
single host must be unique.
=back
=head2 Selecting Guest Type
=over 4
=item B<type="pv">
Specifies that this is to be a PV domain, suitable for hosting Xen-aware
guest operating systems. This is the default on x86.
=item B<type="pvh">
Specifies that this is to be a PVH domain. That is, a lightweight HVM-like
guest without a device model and without many of the emulated devices
available to HVM guests. Note that this mode requires a PVH-aware kernel on
x86. This is the default on Arm.
=item B<type="hvm">
Specifies that this is to be an HVM domain. That is, a fully virtualised
computer with emulated BIOS, disk and network peripherals, etc.
=back
=head3 Deprecated guest type selection
Note that the builder option is being deprecated in favor of the type
option.
=over 4
=item B<builder="generic">
Specifies that this is to be a PV domain, suitable for hosting Xen-aware guest
operating systems. This is the default.
=item B<builder="hvm">
Specifies that this is to be an HVM domain. That is, a fully
virtualised computer with emulated BIOS, disk and network peripherals,
etc.
=back
=head2 General Options
The following options apply to guests of any type.
=head3 CPU Allocation
=over 4
=item B<pool="CPUPOOLNAME">
Put the guest's vCPUs into the named CPU pool.
=item B<vcpus=N>
Start the guest with N vCPUs initially online.
=item B<maxvcpus=M>
Allow the guest to bring up a maximum of M vCPUs. When starting the guest, if
B<vcpus=N> is less than B<maxvcpus=M> then the first B<N> vCPUs will be
created online and the remainder will be created offline.
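For example, a guest could be started with 2 vCPUs online while allowing up
to 4 to be brought online at runtime (the values are purely illustrative):

    vcpus = 2
    maxvcpus = 4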
=item B<cpus="CPULIST">
List of host CPUs the guest is allowed to use. Default is no pinning at
all (more on this below). A C<CPULIST> may be specified as follows:
=over 4
=item "all"
To allow all the vCPUs of the guest to run on all the CPUs on the host.
=item "0-3,5,^1"
To allow all the vCPUs of the guest to run on CPUs 0,2,3,5. It is possible to
combine this with "all", meaning "all,^7" results in all the vCPUs
of the guest being allowed to run on all the CPUs of the host except CPU 7.
=item "nodes:0-3,^node:2"
To allow all the vCPUs of the guest to run on the CPUs from NUMA nodes
0,1,3 of the host. So, if CPUs 0-3 belong to node 0, CPUs 4-7 belong
to node 1, CPUs 8-11 to node 2 and CPUs 12-15 to node 3, the above would mean
all the vCPUs of the guest would be allowed to run on CPUs 0-7,12-15.
Combining this notation with the one above is possible. For instance,
"1,node:1,^6", means all the vCPUs of the guest will run on CPU 1 and
on all the CPUs of NUMA node 1, but not on CPU 6. Following the same
example as above, that would be CPUs 1,4,5,7.
Combining this with "all" is also possible, meaning "all,^node:1"
results in all the vCPUs of the guest running on all the CPUs on the
host, except for the CPUs belonging to the host NUMA node 1.
=item ["2", "3-8,^5"]
To ask for specific vCPU mapping. That means (in this example), vCPU 0
of the guest will run on CPU 2 of the host and vCPU 1 of the guest will
run on CPUs 3,4,6,7,8 of the host (excluding CPU 5).
More complex notation can be also used, exactly as described above. So
"all,^5-8", or just "all", or "node:0,node:2,^9-11,18-20" are all legal,
for each element of the list.
=back
If this option is not specified, no vCPU to CPU pinning is established,
and the vCPUs of the guest can run on all the CPUs of the host. If this
option is specified, the intersection of the vCPU pinning mask, provided
here, and the soft affinity mask, if provided via B<cpus_soft=>,
is utilized to compute the domain node-affinity for driving memory
allocations.
=item B<cpus_soft="CPULIST">
Exactly as B<cpus=>, but specifies soft affinity, rather than pinning
(hard affinity). When using the credit scheduler, this means what CPUs
the vCPUs of the domain prefer.
A C<CPULIST> is specified exactly as for B<cpus=>, detailed earlier in the
manual.
If this option is not specified, the vCPUs of the guest will not have
any preference regarding host CPUs. If this option is specified,
the intersection of the soft affinity mask, provided here, and the vCPU
pinning, if provided via B<cpus=>, is utilized to compute the
domain node-affinity for driving memory allocations.
If this option is not specified (and B<cpus=> is not specified either),
libxl automatically tries to place the guest on the least possible
number of nodes. A heuristic approach is used for choosing the best
node (or set of nodes), with the goal of maximizing performance for
the guest and, at the same time, achieving efficient utilization of
host CPUs and memory. In that case, the soft affinity of all the vCPUs
of the domain will be set to host CPUs belonging to NUMA nodes
chosen during placement.
For more details, see L<xl-numa-placement(7)>.
=back
=head3 CPU Scheduling
=over 4
=item B<cpu_weight=WEIGHT>
A domain with a weight of 512 will get twice as much CPU as a domain
with a weight of 256 on a contended host.
Legal weights range from 1 to 65535 and the default is 256.
Honoured by the credit and credit2 schedulers.
=item B<cap=N>
The cap optionally fixes the maximum amount of CPU a domain will be
able to consume, even if the host system has idle CPU cycles.
The cap is expressed as a percentage of one physical CPU:
100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc.
The default, 0, means there is no cap.
Honoured by the credit and credit2 schedulers.
B<NOTE>: Many systems have features that will scale down the computing
power of a CPU that is not 100% utilized. This can be done in the
operating system, but can also sometimes be done below the operating system,
in the BIOS. If you set a cap such that individual cores are running
at less than 100%, this may have an impact on the performance of your
workload over and above the impact of the cap. For example, if your
processor runs at 2GHz, and you cap a VM at 50%, the power management
system may also reduce the clock speed to 1GHz; the effect will be
that your VM gets 25% of the available power (50% of 1GHz) rather than
50% (50% of 2GHz). If you are not getting the performance you expect,
look at performance and CPU frequency options in your operating system and
your BIOS.
=back
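As an illustration, the following gives a domain twice the default weight
while capping it at half of one physical CPU (both values are only
examples):

    cpu_weight = 512
    cap = 50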
=head3 Memory Allocation
=over 4
=item B<memory=MBYTES>
Start the guest with MBYTES megabytes of RAM.
=item B<maxmem=MBYTES>
Specifies the maximum amount of memory a guest can ever see.
The value of B<maxmem=> must be equal to or greater than that of B<memory=>.
In combination with B<memory=> it will start the guest "pre-ballooned",
if the values of B<memory=> and B<maxmem=> differ.
A "pre-ballooned" HVM guest needs a balloon driver, without a balloon driver
it will crash.
B<NOTE>: Because of the way ballooning works, the guest has to allocate
memory to keep track of maxmem pages, regardless of how much memory it
actually has available to it. A guest with maxmem=262144 and
memory=8096 will report significantly less memory available for use
than a system with maxmem=8096 memory=8096 due to the memory overhead
of having to track the unused pages.
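For example, the following starts a guest "pre-ballooned" with 1024 MB of
usable memory out of a 2048 MB maximum (illustrative values):

    memory = 1024
    maxmem = 2048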
=item B<static_shm=[ "SSHM_SPEC", "SSHM_SPEC", ... ]>
Specifies the static shared memory regions of this guest. Static shared
memory regions enables guests to communicate with each other through
one or more shared memory regions, even without grant table support.
Currently, this only works on ARM guests.
See L<xl-static-shm-configuration(5)> for more details.
=back
=head3 Guest Virtual NUMA Configuration
=over 4
=item B<vnuma=[ VNODE_SPEC, VNODE_SPEC, ... ]>
Specify virtual NUMA configuration with positional arguments. The
nth B<VNODE_SPEC> in the list specifies the configuration of the nth
virtual node.
Note that virtual NUMA is not supported for PV guests yet, because
there is an issue with the CPUID instruction handling that affects PV virtual
NUMA. Furthermore, guests with virtual NUMA cannot be saved or migrated
because the migration stream does not preserve node information.
Each B<VNODE_SPEC> is a list, which has a form of
"[VNODE_CONFIG_OPTION, VNODE_CONFIG_OPTION, ... ]" (without the quotes).
For example, vnuma = [ ["pnode=0","size=512","vcpus=0-4","vdistances=10,20"] ]
means vnode 0 is mapped to pnode 0, has 512MB ram, has vcpus 0 to 4, the
distance to itself is 10 and the distance to vnode 1 is 20.
Each B<VNODE_CONFIG_OPTION> is a quoted C<KEY=VALUE> pair. Supported
B<VNODE_CONFIG_OPTION>s are (they are all mandatory at the moment):
=over 4
=item B<pnode=NUMBER>
Specifies which physical node this virtual node maps to.
=item B<size=MBYTES>
Specifies the size of this virtual node. The sum of memory sizes of all
vnodes will become B<maxmem=>. If B<maxmem=> is specified separately,
a check is performed to make sure the sum of all vnode memory matches
B<maxmem=>.
=item B<vcpus="CPUSTRING">
Specifies which vCPUs belong to this node. B<"CPUSTRING"> is a string of numerical
values separated by a comma. You can specify a range and/or a single CPU.
An example would be "vcpus=0-5,8", which means you specified vCPU 0 to vCPU 5,
and vCPU 8.
=item B<vdistances=NUMBER, NUMBER, ... >
Specifies the virtual distance from this node to all nodes (including
itself) with positional arguments. For example, "vdistances=10,20"
for vnode 0 means the distance from vnode 0 to vnode 0 is 10, from
vnode 0 to vnode 1 is 20. The number of arguments supplied must match
the total number of vnodes.
Normally you can use the values from B<xl info -n> or B<numactl
--hardware> to fill the vdistances list.
=back
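Putting the options together, a two-node virtual NUMA configuration might
look like the following (the node sizes, vCPU assignments and distances are
purely illustrative):

    vnuma = [ ["pnode=0","size=512","vcpus=0-3","vdistances=10,20"],
              ["pnode=1","size=512","vcpus=4-7","vdistances=20,10"] ]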
=back
=head3 Event Actions
=over 4
=item B<on_poweroff="ACTION">
Specifies what should be done with the domain if it shuts itself down.
The B<ACTION>s are:
=over 4
=item B<destroy>
destroy the domain
=item B<restart>
destroy the domain and immediately create a new domain with the same
configuration
=item B<rename-restart>
rename the domain which terminated, and then immediately create a new
domain with the same configuration as the original
=item B<preserve>
keep the domain. It can be examined, and later destroyed with B<xl destroy>.
=item B<coredump-destroy>
write a "coredump" of the domain to F<@XEN_DUMP_DIR@/NAME> and then
destroy the domain.
=item B<coredump-restart>
write a "coredump" of the domain to F<@XEN_DUMP_DIR@/NAME> and then
restart the domain.
=item B<soft-reset>
Reset all Xen specific interfaces for the Xen-aware HVM domain allowing
it to reestablish these interfaces and continue executing the domain. PV
and non-Xen-aware HVM guests are not supported.
=back
The default for B<on_poweroff> is B<destroy>.
=item B<on_reboot="ACTION">
Action to take if the domain shuts down with a reason code requesting
a reboot. Default is B<restart>.
=item B<on_watchdog="ACTION">
Action to take if the domain shuts down due to a Xen watchdog timeout.
Default is B<destroy>.
=item B<on_crash="ACTION">
Action to take if the domain crashes. Default is B<destroy>.
=item B<on_soft_reset="ACTION">
Action to take if the domain performs a 'soft reset' (e.g. does B<kexec>).
Default is B<soft-reset>.
=back
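For instance, a configuration could request that a crashed guest leave a
coredump behind and then restart, while a normal shutdown destroys it (these
are only example choices):

    on_poweroff = "destroy"
    on_reboot   = "restart"
    on_crash    = "coredump-restart"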
=head3 Direct Kernel Boot
Direct kernel boot allows booting guests with a kernel and an initrd
stored on a filesystem available to the host physical machine, allowing
command line arguments to be passed directly. PV guest direct kernel boot is
supported. HVM guest direct kernel boot is supported with some limitations
(it's supported when using B<qemu-xen> and the default BIOS 'seabios',
but not when using B<stubdom-dm> or the old 'rombios').
=over 4
=item B<kernel="PATHNAME">
Load the specified file as the kernel image.
=item B<ramdisk="PATHNAME">
Load the specified file as the ramdisk.
=item B<cmdline="STRING">
Append B<STRING> to the kernel command line. (Note: the meaning of
this is guest specific). It can replace B<root="STRING">
along with B<extra="STRING"> and is preferred. When B<cmdline="STRING"> is set,
B<root="STRING"> and B<extra="STRING"> will be ignored.
=item B<root="STRING">
Append B<root=STRING> to the kernel command line (Note: the meaning of this
is guest specific).
=item B<extra="STRING">
Append B<STRING> to the kernel command line. (Note: the meaning of this
is guest specific).
=back
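A typical PV direct kernel boot fragment might look like the following (the
file paths and root device here are placeholders):

    kernel  = "/boot/vmlinuz-guest"
    ramdisk = "/boot/initrd-guest"
    cmdline = "root=/dev/xvda1 ro console=hvc0"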
=head3 Non direct Kernel Boot
Non direct kernel boot allows booting guests with a firmware. This can be
used by all types of guests, although the selection of options is different
depending on the guest type.
This option provides the flexibility of letting the guest decide which
kernel it wants to boot, while avoiding the need for the toolstack domain
to poke at the guest file system.
=head4 PV guest options
=over 4
=item B<firmware="pvgrub32|pvgrub64">
Boots a guest using a para-virtualized version of grub that runs inside
the guest. The bitness of the guest needs to be known, so that the right
version of pvgrub can be selected.
Note that xl expects to find the pvgrub32.bin and pvgrub64.bin binaries in
F<@XENFIRMWAREDIR@>.
=back
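For example, a 64-bit PV guest booted via pvgrub (illustrative):

    type = "pv"
    firmware = "pvgrub64"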
=head4 HVM guest options
=over 4
=item B<firmware="bios">
Boot the guest using the default BIOS firmware, which depends on the
chosen device model.
=item B<firmware="uefi">
Boot the guest using the default UEFI firmware, currently OVMF.
=item B<firmware="seabios">
Boot the guest using the SeaBIOS BIOS firmware.
=item B<firmware="rombios">
Boot the guest using the ROMBIOS BIOS firmware.
=item B<firmware="ovmf">
Boot the guest using the OVMF UEFI firmware.
=item B<firmware="PATH">
Load the specified file as firmware for the guest.
=back
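For example, to boot an HVM guest with the default UEFI firmware
(illustrative):

    type = "hvm"
    firmware = "uefi"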
=head4 PVH guest options
Currently there's no firmware available for PVH guests, they should be
booted using the B<Direct Kernel Boot> method or the B<bootloader> option.
=over 4
=item B<pvshim=BOOLEAN>
Whether to boot this guest as a PV guest within a PVH container.
Ie, the guest will experience a PV environment,
but
processor hardware extensions are used to
separate its address space
to mitigate the Meltdown attack (CVE-2017-5754).
Default is false.
=item B<pvshim_path="PATH">
The PV shim is a specially-built firmware-like executable
constructed from the hypervisor source tree.
This option specifies to use a non-default shim.
Ignored if B<pvshim> is false.
=item B<pvshim_cmdline="STRING">
Command line for the shim.
Default is "pv-shim console=xen,pv".
Ignored if B<pvshim> is false.
=item B<pvshim_extra="STRING">
Extra command line arguments for the shim.
If supplied, appended to the value for pvshim_cmdline.
Default is empty.
Ignored if B<pvshim> is false.
=back
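A PV guest run inside a PVH container via the shim might be configured as
follows (the kernel path is a placeholder):

    type = "pvh"
    pvshim = 1
    kernel = "/boot/vmlinuz-guest"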
=head3 Other Options
=over 4
=item B<uuid="UUID">
Specifies the UUID of the domain. If not specified, a fresh unique
UUID will be generated.
=item B<seclabel="LABEL">
Assign an XSM security label to this domain.
=item B<init_seclabel="LABEL">
Specify an XSM security label used for this domain temporarily during
its build. The domain's XSM label will be changed to the execution
seclabel (specified by B<seclabel>) once the build is complete, prior to
unpausing the domain. With a properly constructed security policy (such
as nomigrate_t in the example policy), this can be used to build a
domain whose memory is not accessible to the toolstack domain.
=item B<max_grant_frames=NUMBER>
Specify the maximum number of grant frames the domain is allowed to have.
This value controls how many pages the domain is able to grant access to for
other domains, needed e.g. for the operation of paravirtualized devices.
The default is settable via L<xl.conf(5)>.
=item B<max_maptrack_frames=NUMBER>
Specify the maximum number of grant maptrack frames the domain is allowed
to have. This value controls how many pages of foreign domains can be accessed
via the grant mechanism by this domain. The default value is settable via
L<xl.conf(5)>.
=item B<nomigrate=BOOLEAN>
Disable migration of this domain. This enables certain other features
which are incompatible with migration. Currently this is limited to
enabling the invariant TSC feature flag in CPUID results when TSC is
not emulated.
=item B<driver_domain=BOOLEAN>
Specify that this domain is a driver domain. This enables certain
features needed in order to run a driver domain.
=item B<device_tree=PATH>
Specify a partial device tree (compiled via the Device Tree Compiler).
Everything under the node "/passthrough" will be copied into the guest
device tree. For convenience, the node "/aliases" is also copied to allow
the user to define aliases which can be used by the guest kernel.
Given the complexity of verifying the validity of a device tree, this
option should only be used with a trusted device tree.
Note that the partial device tree should avoid using the phandle 65000
which is reserved by the toolstack.
=item B<passthrough="STRING">
Specify whether IOMMU mappings are enabled for the domain and hence whether
it will be enabled for passthrough hardware. Valid values for this option
are:
=over 4
=item B<disabled>
IOMMU mappings are disabled for the domain and so hardware may not be
passed through.
This option is the default if no passthrough hardware is specified in the
domain's configuration.
=item B<enabled>
This option enables IOMMU mappings and selects an appropriate default
operating mode (see below for details of the operating modes). For HVM/PVH
domains running on platforms where the option is available, this is
equivalent to B<share_pt>. Otherwise, and also for PV domains, this
option is equivalent to B<sync_pt>.
This option is the default if passthrough hardware is specified in the
domain's configuration.
=item B<sync_pt>
This option means that IOMMU mappings will be synchronized with the
domain's P2M table as follows:
For a PV domain, all writable pages assigned to the domain are identity
mapped by MFN in the IOMMU page table. Thus a device driver running in the
domain may program passthrough hardware for DMA using MFN values
(i.e. host/machine frame numbers) looked up in its P2M.
For an HVM/PVH domain, all non-foreign RAM pages present in its P2M will be
mapped by GFN in the IOMMU page table. Thus a device driver running in the
domain may program passthrough hardware using GFN values (i.e. guest
physical frame numbers) without any further translation.
This option is not currently available on Arm.
=item B<share_pt>
This option is unavailable for a PV domain. For an HVM/PVH domain, this
option means that the IOMMU will be programmed to directly reference the
domain's P2M table as its page table. From the point of view of a device
driver running in the domain this is functionally equivalent to B<sync_pt>
but places less load on the hypervisor and so should generally be selected
in preference. However, the availability of this option is hardware
specific. If B<xl info> reports B<virt_caps> containing
B<iommu_hap_pt_share> then this option may be used.
=item B<default>
The default, which chooses between B<disabled> and B<enabled>
according to whether passthrough devices are enabled in the config
file.
=back
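For example, to explicitly request that the IOMMU share the domain's P2M
table on capable hardware (illustrative; check B<xl info> B<virt_caps>
first):

    passthrough = "share_pt"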
=back
=head2 Devices
The following options define the paravirtual, emulated and physical
devices which the guest will contain.
=over 4
=item B<disk=[ "DISK_SPEC_STRING", "DISK_SPEC_STRING", ...]>
Specifies the disks (both emulated disks and Xen virtual block
devices) which are to be provided to the guest, and what objects on
the host they should map to. See L<xl-disk-configuration(5)> for more
details.
=item B<vif=[ "NET_SPEC_STRING", "NET_SPEC_STRING", ...]>
Specifies the network interfaces (both emulated network adapters,
and Xen virtual interfaces) which are to be provided to the guest. See
L<xl-network-configuration(5)> for more details.
=item B<vtpm=[ "VTPM_SPEC_STRING", "VTPM_SPEC_STRING", ...]>
Specifies the Virtual Trusted Platform module to be
provided to the guest. See L<xen-vtpm(7)> for more details.
Each B<VTPM_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
settings from the following list:
=over 4
=item B<backend=domain-id>
Specifies the backend domain name or id. B<This value is required!>
If this domain is a guest, the backend should be set to the
vTPM domain name. If this domain is a vTPM, the
backend should be set to the vTPM manager domain name.
=item B<uuid=UUID>
Specifies the UUID of this vTPM device. The UUID is used to uniquely
identify the vTPM device. You can create one using the B<uuidgen(1)>
program on unix systems. If left unspecified, a new UUID
will be randomly generated every time the domain boots.
If this is a vTPM domain, you should specify a value. The
value is optional if this is a guest domain.
=back
=item B<p9=[ "9PFS_SPEC_STRING", "9PFS_SPEC_STRING", ...]>
Creates a Xen 9pfs connection to share a filesystem from the backend to the
frontend.
Each B<9PFS_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
settings, from the following list:
=over 4
=item B<tag=STRING>
9pfs tag to identify the filesystem share. The tag is needed on the
guest side to mount it.
=item B<security_model="none">
Only "none" is supported today, which means that the files are stored using
the same credentials as those they have in the guest (no user ownership
squash or remap).
=item B<path=STRING>
Filesystem path on the backend to export.
=item B<backend=domain-id>
Specify the backend domain name or id, defaults to dom0.
=back
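For example, exporting a host directory to the guest under the tag
C<share0> (the tag and path are placeholders):

    p9 = [ "tag=share0,security_model=none,path=/srv/guest-share" ]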
=item B<pvcalls=[ "backend=domain-id", ... ]>
Creates a Xen pvcalls connection to handle pvcalls requests from
frontend to backend. It can be used as an alternative networking model.
For more information about the protocol, see
https://xenbits.xenproject.org/docs/unstable/misc/pvcalls.html.
=item B<vfb=[ "VFB_SPEC_STRING", "VFB_SPEC_STRING", ...]>
Specifies the paravirtual framebuffer devices which should be supplied
to the domain.
This option does not control the emulated graphics card presented to
an HVM guest. See B<Emulated VGA Graphics Device> below for how to
configure the emulated device. If B<Emulated VGA Graphics Device> options
are used in a PV guest configuration, B<xl> will pick up B<vnc>, B<vnclisten>,
B<vncpasswd>, B<vncdisplay>, B<vncunused>, B<sdl>, B<opengl> and
B<keymap> to construct the paravirtual framebuffer device for the guest.
Each B<VFB_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
settings, from the following list:
=over 4
=item B<vnc=BOOLEAN>
Allow access to the display via the VNC protocol. This enables the
other VNC-related settings. Default is 1 (enabled).
=item B<vnclisten=ADDRESS[:DISPLAYNUM]>
Specifies the IP address, and optionally the VNC display number, to use.
Note: if you specify the display number here, you should not use
the B<vncdisplay> option.
=item B<vncdisplay=DISPLAYNUM>
Specifies the VNC display number to use. The actual TCP port number
will be DISPLAYNUM+5900.
Note: you should not use this option if you set the DISPLAYNUM in the
B<vnclisten> option.
=item B<vncunused=BOOLEAN>
Requests that the VNC display setup searches for a free TCP port to use.
The actual display used can be accessed with B<xl vncviewer>.
=item B<vncpasswd=PASSWORD>
Specifies the password for the VNC server. If the password is set to an
empty string, authentication on the VNC server will be disabled,
allowing any user to connect.
=item B<sdl=BOOLEAN>
Specifies that the display should be presented via an X window (using
Simple DirectMedia Layer). The default is 0 (not enabled).
=item B<display=DISPLAY>
Specifies the X Window display that should be used when the B<sdl> option
is used.
=item B<xauthority=XAUTHORITY>
Specifies the path to the X authority file that should be used to
connect to the X server when the B<sdl> option is used.
=item B<opengl=BOOLEAN>
Enable OpenGL acceleration of the SDL display. This only affects machines
using B<device_model_version="qemu-xen-traditional"> and only if the
device-model was compiled with OpenGL support. The default is 0 (disabled).
=item B<keymap=LANG>
Configure the keymap to use for the keyboard associated with this
display. If the input method does not easily support raw keycodes
(e.g. this is often the case when using VNC) then this allows us to
correctly map the input keys into keycodes seen by the guest. The
specific values which are accepted are defined by the version of the
device-model which you are using. See B<Keymaps> below or consult the
B<qemu(1)> manpage. The default is B<en-us>.
=back
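For example, a VNC-accessible paravirtual framebuffer listening on the
loopback address with a fixed password (the address and password are
placeholders):

    vfb = [ "vnc=1,vnclisten=127.0.0.1,vncpasswd=mypass" ]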
=item B<channel=[ "CHANNEL_SPEC_STRING", "CHANNEL_SPEC_STRING", ...]>
Specifies the virtual channels to be provided to the guest. A
channel is a low-bandwidth, bidirectional byte stream, which resembles
a serial link. Typical uses for channels include transmitting VM
configuration after boot and signalling to in-guest agents. Please see
L<xen-pv-channel(7)> for more details.
Each B<CHANNEL_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
settings. Leading and trailing whitespace is ignored in both KEY and
VALUE. Neither KEY nor VALUE may contain ',', '=' or '"'. Defined values
are:
=over 4
=item B<backend=domain-id>
Specifies the backend domain name or id. This parameter is optional. If
this parameter is omitted then the toolstack domain will be assumed.
=item B<name=NAME>
Specifies the name for this device. B<This parameter is mandatory!>
This should be a well-known name for a specific application (e.g.
guest agent) and should be used by the frontend to connect the
application to the right channel device. There is no formal registry
of channel names, so application authors are encouraged to make their
names unique by including the domain name and a version number in the string
(e.g. org.mydomain.guestagent.1).
=item B<connection=CONNECTION>
Specifies how the backend will be implemented. The following options are
available:
=over 4
=item B<SOCKET>
The backend will bind a Unix domain socket (at the path given by
B<path=PATH>), listen for and accept connections. The backend will proxy
data between the channel and the connected socket.
=item B<PTY>
The backend will create a pty and proxy data between the channel and the
master device. The command B<xl channel-list> can be used to discover the
assigned slave device.
=back
=back
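For example, a socket-backed channel for a guest agent (the name and socket
path are illustrative):

    channel = [ "connection=socket,name=org.example.guestagent.1,path=/var/run/xen/guest-agent.sock" ]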
=item B<rdm="RDM_RESERVATION_STRING">
B<HVM/x86 only!> Specifies information about Reserved Device Memory (RDM),
which is necessary to enable robust device passthrough. One example of RDM
is reporting through the ACPI Reserved Memory Region Reporting (RMRR) structure
on the x86 platform.
B<RDM_RESERVATION_STRING> is a comma separated list of C<KEY=VALUE> settings,
from the following list:
=over 4
=item B<strategy=STRING>
Currently there is only one valid type, and that is "host".
=over 4
=item B<host>
If set to "host" it means all reserved device memory on this platform should
be checked to reserve regions in this VM's address space. This global RDM
parameter allows the user to specify reserved regions explicitly, and using
"host" includes all reserved regions reported on this platform, which is
useful when doing hotplug.
By default this isn't set, so we don't check all RDMs. Instead, we just
check the RDM specific to a given device when assigning that device.
Note: this option is not recommended unless you can make sure that no
conflicts exist.
For example, you're trying to set "memory = 2800" to allocate memory to one
given VM but the platform owns two RDM regions like:
    Device A [sbdf_A]: RMRR region_A: base_addr ac6d3000 end_address ac6e6fff
    Device B [sbdf_B]: RMRR region_B: base_addr ad800000 end_address afffffff
In this conflict case,
#1. If B<strategy> is set to "host", for example:
    rdm = "strategy=host,policy=strict" or rdm = "strategy=host,policy=relaxed"
it means all conflicts will be handled according to the policy
introduced by B<policy> as described below.
#2. If B<strategy> is not set at all, but
    pci = [ 'sbdf_A, rdm_policy=xxxxx' ]
it means only one conflict of region_A will be handled according to the policy
introduced by B<rdm_policy=STRING> as described inside B<pci> options.
=back
=item B<policy=STRING>
Specifies how to deal with conflicts when reserving already reserved device
memory in the guest address space.
=over 4
=item B<strict>
Specifies that in case of an unresolved conflict the VM can't be created,
or the associated device can't be attached in the case of hotplug.
=item B<relaxed>
Specifies that in case of an unresolved conflict the VM is allowed to be
created but may cause the VM to crash if a pass-through device accesses RDM.
For example, the Windows IGD GFX driver always accesses RDM regions so it
leads to a VM crash.
Note: this may be overridden by the B<rdm_policy> option in the B<pci>
device configuration.
=back
=back
=item B<usbctrl=[ "USBCTRL_SPEC_STRING", "USBCTRL_SPEC_STRING", ...]>
Specifies the USB controllers created for this guest.
Each B<USBCTRL_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
settings, from the following list:
=over 4
=item B<type=TYPE>
Specifies the usb controller type.
=over 4
=item B<pv>