Commits
spapr-tpm-hcal…
Commits on Jul 17, 2019
-
spapr: initial implementation for H_TPM_COMM/spapr-tpm-proxy
This implements the H_TPM_COMM hypercall, which is used by an Ultravisor to pass TPM commands directly to the host's TPM device, or a TPM Resource Manager associated with the device.

This also introduces a new virtual device, spapr-tpm-proxy, which is used to configure the host TPM path to be used to service requests sent by H_TPM_COMM hcalls, for example:

  -device spapr-tpm-proxy,id=tpmp0,host-path=/dev/tpmrm0

By default, no spapr-tpm-proxy will be created, and hcalls will return H_FUNCTION. The full specification for this hypercall can be found in docs/specs/ppc-spapr-uv-hcalls.txt. Since SVM-related hcalls like H_TPM_COMM use a reserved range of 0xEF00-0xEF80, we introduce a separate hcall table here to handle them.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
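A minimal, self-contained C sketch of the separate hcall table idea described above (the table layout, H_FUNCTION value, and all names here are illustrative assumptions, not QEMU's actual code):

    #include <stdint.h>

    #define H_FUNCTION      (-2)      /* "not supported" return, for this sketch */
    #define SVM_HCALL_BASE  0xEF00
    #define SVM_HCALL_MAX   0xEF80

    typedef int64_t (*hcall_fn)(void *cpu, uint64_t *args);

    /* separate table for the reserved SVM range, assuming hcall numbers
     * are spaced 4 apart as for regular PAPR hcalls */
    static hcall_fn svm_hcall_table[(SVM_HCALL_MAX - SVM_HCALL_BASE) / 4 + 1];

    static int64_t svm_hypercall(uint64_t opcode, void *cpu, uint64_t *args)
    {
        if (opcode >= SVM_HCALL_BASE && opcode <= SVM_HCALL_MAX) {
            hcall_fn fn = svm_hcall_table[(opcode - SVM_HCALL_BASE) / 4];
            if (fn) {
                return fn(cpu, args);
            }
        }
        return H_FUNCTION;   /* e.g. no spapr-tpm-proxy configured */
    }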
-
docs/specs: initial spec summary for Ultravisor-related hcalls
For now this only covers hcalls relating to TPM communication, since that is the only one particularly important from a QEMU perspective at the moment, but others can be added here where it makes sense. The full specification for all hcalls/ucalls will eventually be made available in the public/OpenPower version of the PAPR specification. Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Commits on Jul 16, 2019
-
Valgrind showed some memory leaks while running qemu-system-ppc64; fix them in this patch. Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com> Message-Id: <156326547324.51324.17260886524634041718.stgit@lep8c.aus.stglabs.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
target/ppc: Optimize emulation of vsl and vsr instructions
Optimization of the Altivec instructions vsl and vsr (Vector Shift Left/Right). Perform the shift operation (left and right respectively) on the 128-bit value of register vA by the value specified in bits 125-127 of register vB. The lowest 3 bits in each byte element of register vB must be identical or the result is undefined.

For the vsl instruction, the first step is that bits 125-127 of register vB are saved in variable sh. Then, the highest sh bits of the lower doubleword element of register vA are saved in variable shifted, in order not to lose those bits when the shift operation is performed on the lower doubleword element of register vA, which is the next step. After shifting the lower doubleword element, the shift operation is performed on the higher doubleword element of vA, replacing the lowest sh bits (which are now 0) with the bits saved in shifted.

For the vsr instruction, firstly, bits 125-127 of register vB are saved in variable sh. Then, the lowest sh bits of the higher doubleword element of register vA are saved in variable shifted, in order not to lose those bits when the shift operation is performed on the higher doubleword element of register vA, which is the next step. After shifting the higher doubleword element, the shift operation is performed on the lower doubleword element of vA, replacing the highest sh bits (which are now 0) with the bits saved in shifted.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <1563200574-11098-3-git-send-email-stefan.brankovic@rt-rk.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
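As a rough C illustration of the doubleword handling described above (not the actual TCG translation, just the bit manipulation on two 64-bit halves):

    #include <stdint.h>

    /* Shift a 128-bit value, held as two 64-bit doublewords, by sh (1-7)
     * bits without losing the bits that cross the doubleword boundary. */
    static void shift128_left(uint64_t *hi, uint64_t *lo, unsigned sh)
    {
        if (sh == 0) {
            return;                           /* avoid the undefined 64-bit shift */
        }
        uint64_t shifted = *lo >> (64 - sh);  /* highest sh bits of the low dword */
        *lo <<= sh;
        *hi = (*hi << sh) | shifted;          /* re-insert them as the lowest bits */
    }

    static void shift128_right(uint64_t *hi, uint64_t *lo, unsigned sh)
    {
        if (sh == 0) {
            return;
        }
        uint64_t shifted = *hi << (64 - sh);  /* lowest sh bits of the high dword */
        *hi >>= sh;
        *lo = (*lo >> sh) | shifted;          /* re-insert them as the highest bits */
    }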
-
target/ppc: Optimize emulation of lvsl and lvsr instructions
Add a simple macro that calls the tcg implementation of the appropriate instruction if Altivec support is active.

Optimization of the Altivec instruction lvsl (Load Vector for Shift Left). Place bytes sh:sh+15 of the value 0x00 || 0x01 || 0x02 || ... || 0x1E || 0x1F in the destination register. Sh is calculated by adding the 2 source registers and taking bits 60-63 of the result. First, bits [28-31] of EA are placed in variable sh. After that, the bytes are created in the following way: bytes sh:(sh+7) of X (from the description) by multiplying sh by 0x0101010101010101 and adding 0x0001020304050607 to the result. The value obtained is placed in the higher doubleword element of vD. Bytes (sh+8):(sh+15) by adding 0x08090a0b0c0d0e0f to the result of the previous multiplication. The value obtained is placed in the lower doubleword element of vD.

Optimization of the Altivec instruction lvsr (Load Vector for Shift Right). Place bytes 16-sh:31-sh of the value 0x00 || 0x01 || 0x02 || ... || 0x1E || 0x1F in the destination register. Sh is calculated by adding the 2 source registers and taking bits 60-63 of the result. First, bits [28-31] of EA are placed in variable sh. After that, the bytes are created in the following way: bytes sh:(sh+7) of X (from the description) by multiplying sh by 0x0101010101010101 and subtracting the result from 0x1011121314151617. The value obtained is placed in the higher doubleword element of vD. Bytes (sh+8):(sh+15) by subtracting the result of the previous multiplication from 0x18191a1b1c1d1e1f. The value obtained is placed in the lower doubleword element of vD.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <1563200574-11098-2-git-send-email-stefan.brankovic@rt-rk.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
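A hedged C sketch of the computation described above (plain host code, not the TCG ops; it assumes sh is simply the low 4 bits of the effective address):

    #include <stdint.h>

    /* lvsl: result bytes are sh, sh+1, ..., sh+15 */
    static void lvsl_result(uint64_t ea, uint64_t *hi, uint64_t *lo)
    {
        uint64_t sh = ea & 0xf;                      /* bits 60-63 of EA */
        uint64_t t  = sh * 0x0101010101010101ULL;    /* replicate sh into every byte */
        *hi = t + 0x0001020304050607ULL;             /* bytes sh .. sh+7 */
        *lo = t + 0x08090a0b0c0d0e0fULL;             /* bytes sh+8 .. sh+15 */
    }

    /* lvsr: result bytes are 16-sh, 17-sh, ..., 31-sh */
    static void lvsr_result(uint64_t ea, uint64_t *hi, uint64_t *lo)
    {
        uint64_t sh = ea & 0xf;
        uint64_t t  = sh * 0x0101010101010101ULL;
        *hi = 0x1011121314151617ULL - t;             /* bytes 16-sh .. 23-sh */
        *lo = 0x18191a1b1c1d1e1fULL - t;             /* bytes 24-sh .. 31-sh */
    }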
-
migration: Do not re-read the clock on pre_save in case of paused guest
Re-reading the timebase before migration was ported from x86 commit 6053a86 ("kvmclock: reduce kvmclock difference on migration"). The clock update makes the guest aware of the time that passed while it was paused between the stop and migrate commands. This is an issue for an already-paused VM because some side effects, like process stalls, could happen after migration. So, this patch checks the runstate of the guest in the pre_save handler and does not re-read the timebase in case of a paused state (cold migration). Signed-off-by: Maxiwell S. Garcia <maxiwell@linux.ibm.com> Message-Id: <20190711194702.26598-1-maxiwell@linux.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
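A simplified sketch of the pre_save logic described above (the runstate flag and helper here are stand-ins, not QEMU's actual functions):

    #include <stdbool.h>

    static bool runstate_running;                  /* stand-in: set by the run loop */
    static void refresh_timebase(void) { /* re-read host clock / timebase offset */ }

    /* Only refresh the saved timebase when the guest is running; a paused
     * guest (cold migration) keeps the value captured when it was stopped,
     * so the paused wall-clock time does not leak into the guest. */
    static int timebase_pre_save_sketch(void *opaque)
    {
        (void)opaque;
        if (runstate_running) {
            refresh_timebase();
        }
        return 0;
    }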
-
Commits on Jul 15, 2019
-
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2019-0…
…7-15' into staging

Block patches for 4.1-rc1:
- Fixes for the NVMe block driver, the gluster block driver, and for running multiple block jobs concurrently on a single chain

# gpg: Signature made Mon 15 Jul 2019 14:51:43 BST
# gpg: using RSA key 91BEB60A30DB3E8857D11829F407DB0061D5CF40
# gpg: issuer "mreitz@redhat.com"
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40

* remotes/maxreitz/tags/pull-block-2019-07-15:
  gluster: fix .bdrv_reopen_prepare when backing file is a JSON object
  iotests: Add read-only test case to 030
  iotests: Add new case to 030
  iotests: Add @use_log to VM.run_job()
  iotests: Compare error messages in 030
  iotests: Fix throttling in 030
  block: Deep-clear inherits_from
  block/stream: Swap backing file change order
  block/stream: Fix error path
  block: Add BDS.never_freeze
  nvme: Set number of queues later in nvme_init()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Merge remote-tracking branch 'remotes/juanquintela/tags/migration-pul…
…l-request' into staging

Pull request

# gpg: Signature made Mon 15 Jul 2019 14:49:41 BST
# gpg: using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [full]
# gpg: aka "Juan Quintela <quintela@trasno.org>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723

* remotes/juanquintela/tags/migration-pull-request: (21 commits)
  migration: always initial RAMBlock.bmap to 1 for new migration
  migration/postcopy: remove redundant cpu_synchronize_all_post_init
  migration/postcopy: fix document of postcopy_send_discard_bm_ram()
  migration: allow private destination ram with x-ignore-shared
  migration: Split log_clear() into smaller chunks
  kvm: Support KVM_CLEAR_DIRTY_LOG
  kvm: Introduce slots lock for memory listener
  kvm: Persistent per kvmslot dirty bitmap
  kvm: Update comments for sync_dirty_bitmap
  memory: Introduce memory listener hook log_clear()
  memory: Pass mr into snapshot_and_clear_dirty
  bitmap: Add bitmap_copy_with_{src|dst}_offset()
  memory: Don't set migration bitmap when without migration
  migration: No need to take rcu during sync_dirty_bitmap
  migration/ram.c: reset complete_round when we gets a queued page
  migration/multifd: sync packet_num after all thread are done
  cutils: remove one unnecessary pointer operation
  migration/xbzrle: update cache and current_data in one place
  migration/multifd: call multifd_send_sync_main when sending RAM_SAVE_FLAG_EOS
  migration-test: rename parameter to parameter_int
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
gluster: fix .bdrv_reopen_prepare when backing file is a JSON object
When the backing_file is specified as a JSON object, the qemu_gluster_reopen_prepare() fails with this message: invalid URI json:{"server.0.host": ...} In this case, we should call qemu_gluster_init() using the QDict 'state->options' that contains the JSON parameters already parsed. Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1542445 Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Message-id: 20190715132844.506584-1-sgarzare@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com> -
iotests: Add read-only test case to 030
This tests that the stream job exits cleanly (without abort) when the top node is read-only and cannot be reopened read/write. Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190703172813.6868-12-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
-
iotests: Add new case to 030
We recently removed the dependency of the stream job on its base node. That makes it OK to use a commit filter node there. Test that. Signed-off-by: Max Reitz <mreitz@redhat.com> Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-id: 20190703172813.6868-11-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
-
iotests: Add @use_log to VM.run_job()
unittest-style tests generally do not use the log file, but VM.run_job() can still be useful to them. Add a parameter to it that hides its output from the log file. Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190703172813.6868-10-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
-
iotests: Compare error messages in 030
Currently, 030 just compares the error class, which does not say anything. Before HEAD^ added throttling to test_overlapping_4, that test actually usually failed because node2 was already gone, not because the commit and stream jobs were not allowed to overlap. Prevent such problems in the future by comparing the error description instead. Signed-off-by: Max Reitz <mreitz@redhat.com> Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-id: 20190703172813.6868-9-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
-
iotests: Fix throttling in 030
Currently, TestParallelOps in 030 creates images that are too small for job throttling to be effective. This is reflected by the fact that it never undoes the throttling. Increase the image size and undo the throttling when the job should be completed. Also, add throttling in test_overlapping_4, or the jobs may not be so overlapping after all. In fact, the error usually emitted here is that node2 simply does not exist, not that overlapping jobs are not allowed -- the fact that this test ignores the exact error messages and just checks the error class is something that should be fixed in a follow-up patch. Signed-off-by: Max Reitz <mreitz@redhat.com> Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-id: 20190703172813.6868-8-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
-
block: Deep-clear inherits_from
BDS.inherits_from does not always point to an immediate parent node. When launching a block job with a filter node, for example, the node directly below the filter will not point to the filter, but keep its old pointee (above the filter). If that pointee goes away while the job is still running, the node's inherits_from will not be updated and thus point to garbage. To fix this, bdrv_unref_child() has to check not only the parent node's immediate children for nodes whose inherits_from needs to be cleared, but its whole subtree. Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190703172813.6868-7-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
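A toy C sketch of the "whole subtree" clearing described above (simplified stand-in types, not QEMU's BlockDriverState or bdrv_unref_child()):

    #include <stddef.h>

    struct bds {
        struct bds *inherits_from;
        struct bds *children[8];
        int nb_children;
    };

    /* Clear every inherits_from in the subtree below 'node' that still
     * points at the node going away, not just in its immediate children. */
    static void deep_clear_inherits_from(struct bds *dying, struct bds *node)
    {
        if (node->inherits_from == dying) {
            node->inherits_from = NULL;
        }
        for (int i = 0; i < node->nb_children; i++) {
            deep_clear_inherits_from(dying, node->children[i]);
        }
    }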
-
block/stream: Swap backing file change order
bdrv_change_backing_file() can result in yields. Therefore, @base may no longer be the backing_bs() of s->bottom afterwards. Just swap the order of the two calls to fix this. Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20190703172813.6868-4-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
-
block: Add BDS.never_freeze
The commit and the mirror block job must be able to drop their filter node at any point. However, this will not be possible if any of the BdrvChild links to them is frozen. Therefore, we need to prevent them from ever becoming frozen. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-id: 20190703172813.6868-2-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
-
nvme: Set number of queues later in nvme_init()
When creating the admin queue in nvme_init() the variable that holds the number of queues created is modified before actual queue creation. This is a problem because if creating the queue fails then the variable is left in an inconsistent state. This was actually observed when I tried to hotplug an NVMe disk. The control got to nvme_file_open() which called nvme_init() which failed and thus nvme_close() was called which in turn called nvme_free_queue_pair() with queue being NULL. This led to an instant crash:

#0 0x000055d9507ec211 in nvme_free_queue_pair (bs=0x55d952ddb880, q=0x0) at block/nvme.c:164
#1 0x000055d9507ee180 in nvme_close (bs=0x55d952ddb880) at block/nvme.c:729
#2 0x000055d9507ee3d5 in nvme_file_open (bs=0x55d952ddb880, options=0x55d952bb1410, flags=147456, errp=0x7ffd8e19e200) at block/nvme.c:781
#3 0x000055d9507629f3 in bdrv_open_driver (bs=0x55d952ddb880, drv=0x55d95109c1e0 <bdrv_nvme>, node_name=0x0, options=0x55d952bb1410, open_flags=147456, errp=0x7ffd8e19e310) at block.c:1291
#4 0x000055d9507633d6 in bdrv_open_common (bs=0x55d952ddb880, file=0x0, options=0x55d952bb1410, errp=0x7ffd8e19e310) at block.c:1551
#5 0x000055d950766881 in bdrv_open_inherit (filename=0x0, reference=0x0, options=0x55d952bb1410, flags=32768, parent=0x55d9538ce420, child_role=0x55d950eaade0 <child_file>, errp=0x7ffd8e19e510) at block.c:3063
#6 0x000055d950765ae4 in bdrv_open_child_bs (filename=0x0, options=0x55d9541cdff0, bdref_key=0x55d950af33aa "file", parent=0x55d9538ce420, child_role=0x55d950eaade0 <child_file>, allow_none=true, errp=0x7ffd8e19e510) at block.c:2712
#7 0x000055d950766633 in bdrv_open_inherit (filename=0x0, reference=0x0, options=0x55d9541cdff0, flags=0, parent=0x0, child_role=0x0, errp=0x7ffd8e19e908) at block.c:3011
#8 0x000055d950766dba in bdrv_open (filename=0x0, reference=0x0, options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at block.c:3156
#9 0x000055d9507cb635 in blk_new_open (filename=0x0, reference=0x0, options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at block/block-backend.c:389
#10 0x000055d950465ec5 in blockdev_init (file=0x0, bs_opts=0x55d953d00390, errp=0x7ffd8e19e908) at blockdev.c:602

Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Message-id: 927aae40b617ba7d4b6c7ffe74e6d7a2595f8e86.1562770546.git.mprivozn@redhat.com Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
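A condensed sketch of the ordering fix (stand-in types and names; the point is only that the counter is bumped after the queue exists):

    #include <stdlib.h>

    struct ctx { void *queues[64]; int nqueues; };

    static void *create_queue(struct ctx *s) { (void)s; return malloc(16); /* may fail */ }

    static int init_admin_queue_sketch(struct ctx *s)
    {
        void *q = create_queue(s);
        if (!q) {
            return -1;               /* nqueues still matches queues[], so the
                                        cleanup path never sees a NULL queue */
        }
        s->queues[s->nqueues++] = q; /* count incremented only on success */
        return 0;
    }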
-
migration: always initial RAMBlock.bmap to 1 for new migration
Reproduce the problem:
  migrate
  migrate_cancel
  migrate
An error happens for memory migration. The reason is as follows:
1. qemu starts; ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is all set to 1 by a series of cpu_physical_memory_set_dirty_range calls
2. migration starts: ram_init_bitmaps
   - memory_global_dirty_log_start: begin logging dirty pages
   - memory_global_dirty_log_sync: sync dirty bitmap to ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
   - migration_bitmap_sync_range: sync ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] to RAMBlock.bmap, and ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to zero
3. migration data...
4. migrate_cancel, which stops dirty logging
5. migration starts again: ram_init_bitmaps
   - memory_global_dirty_log_start: begin logging dirty pages
   - memory_global_dirty_log_sync: sync dirty bitmap to ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
   - migration_bitmap_sync_range: sync ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] to RAMBlock.bmap, and ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to zero
Here RAMBlock.bmap only contains the newly logged dirty pages, not the whole of guest memory. Signed-off-by: Ivan Ren <ivanren@tencent.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Message-Id: <1563115879-2715-1-git-send-email-ivanren@tencent.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
migration/postcopy: remove redundant cpu_synchronize_all_post_init
cpu_synchronize_all_post_init() is called twice in loadvm_postcopy_handle_run_bh(), so remove one redundant call. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190715080751.24304-1-richardw.yang@linux.intel.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
migration/postcopy: fix document of postcopy_send_discard_bm_ram()
Commit 6b6712e ('ram: Split dirty bitmap by RAMBlock') changed the parameters of postcopy_send_discard_bm_ram() but left the documentation untouched. This patch corrects the documentation and fixes two typos by hand. Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190715020549.15018-1-richardw.yang@linux.intel.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-2…
…0190715' into staging

target-arm queue:
* report ARMv8-A FP support for AArch32 -cpu max
* hw/ssi/xilinx_spips: Avoid AXI writes to the LQSPI linear memory
* hw/ssi/xilinx_spips: Avoid out-of-bound access to lqspi_buf[]
* hw/ssi/mss-spi: Avoid crash when reading empty RX FIFO
* hw/display/xlnx_dp: Avoid crash when reading empty RX FIFO
* hw/arm/virt: Fix non-secure flash mode
* pl031: Correctly migrate state when using -rtc clock=host
* fix regression that meant arm926 and arm1026 lost VFP double-precision support
* v8M: NS BusFault on vector table fetch escalates to NS HardFault

# gpg: Signature made Mon 15 Jul 2019 14:41:25 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190715:
  target/arm: NS BusFault on vector table fetch escalates to NS HardFault
  target/arm: Set VFP-related MVFR0 fields for arm926 and arm1026
  pl031: Correctly migrate state when using -rtc clock=host
  hw/arm/virt: Fix non-secure flash mode
  hw/display/xlnx_dp: Avoid crash when reading empty RX FIFO
  hw/ssi/mss-spi: Avoid crash when reading empty RX FIFO
  hw/ssi/xilinx_spips: Avoid out-of-bound access to lqspi_buf[]
  hw/ssi/xilinx_spips: Avoid AXI writes to the LQSPI linear memory
  hw/ssi/xilinx_spips: Convert lqspi_read() to read_with_attrs
  target/arm: report ARMv8-A FP support for AArch32 -cpu max

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
migration: allow private destination ram with x-ignore-shared
By removing the shared RAM check, qemu is able to migrate to private destination RAM when the x-ignore-shared capability is on. Then we can create multiple destination VMs based on the same source VM. This changes the x-ignore-shared migration capability to work similarly to Lai's original bypass-shared-memory work (https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg00003.html), which enables Kata Containers (https://katacontainers.io) to implement the VM templating feature.

An example usage in Kata Containers (https://katacontainers.io):
1. Start the source VM:
   qemu-system-x86 -m 2G \
     -object memory-backend-file,id=mem0,size=2G,share=on,mem-path=/tmpfs/template-memory \
     -numa node,memdev=mem0
2. Stop the template VM, set the migration x-ignore-shared capability, migrate "exec:cat>/tmpfs/state", quit it
3. Start the target VM:
   qemu-system-x86 -m 2G \
     -object memory-backend-file,id=mem0,size=2G,share=off,mem-path=/tmpfs/template-memory \
     -numa node,memdev=mem0 \
     -incoming defer
4. Connect to the target VM qmp, set the migration x-ignore-shared capability, migrate_incoming "exec:cat /tmpfs/state"
5. Create more target VMs by repeating 3 and 4

Cc: Dr. David Alan Gilbert <dgilbert@redhat.com> Cc: Yury Kotov <yury-kotov@yandex-team.ru> Cc: Jiangshan Lai <laijs@hyper.sh> Cc: Xu Wang <xu@hyper.sh> Signed-off-by: Peng Tao <tao.peng@linux.alibaba.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <1560494113-1141-1-git-send-email-tao.peng@linux.alibaba.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
migration: Split log_clear() into smaller chunks
Currently we are doing log_clear() right after log_sync(), which mostly keeps the old behavior from when log_clear() was still part of log_sync(). This patch tries to further optimize the migration log_clear() code path by splitting huge log_clear()s into smaller chunks.

We do this by splitting the whole guest memory region into memory chunks whose size is decided by MigrationState.clear_bitmap_shift (an example is given below). With that, we don't do the dirty bitmap clear operation on the remote node (e.g., KVM) when we fetch the dirty bitmap; instead we explicitly clear the dirty bitmap for a memory chunk the first time we send a page in that chunk.

Here comes an example. Assuming the guest has 64G of memory, before this patch the KVM ioctl KVM_CLEAR_DIRTY_LOG will be a single one covering 64G of memory. After the patch, assuming the clear bitmap shift is 18, the memory chunk size on x86_64 will be 1UL<<18 * 4K = 1GB. Then instead of sending a big 64G ioctl, we'll send 64 small ioctls, each covering 1G of guest memory. For each of the 64 small ioctls, we'll only send it if any page in that small chunk is about to be sent right away.

Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190603065056.25211-12-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
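The arithmetic in the example above, spelled out as a tiny standalone C program (numbers only, not the migration code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t page_size  = 4096;            /* x86_64 target page size */
        unsigned shift      = 18;              /* MigrationState.clear_bitmap_shift */
        uint64_t guest_mem  = 64ULL << 30;     /* 64G guest */

        uint64_t chunk_bytes = (1ULL << shift) * page_size;  /* 1G per chunk */
        uint64_t nr_ioctls   = guest_mem / chunk_bytes;      /* 64 small ioctls */

        printf("chunk = %llu MiB, ioctls = %llu\n",
               (unsigned long long)(chunk_bytes >> 20),
               (unsigned long long)nr_ioctls);
        return 0;
    }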
-
kvm: Support KVM_CLEAR_DIRTY_LOG
Firstly, detect the interface using KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 and mark it. When we fail to enable the new feature we'll fall back to the old sync. Provide the log_clear() hook for the memory listeners for both address spaces of KVM (normal system memory, and SMM) and deliver the clear message to the kernel. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190603065056.25211-11-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
kvm: Introduce slots lock for memory listener
Introduce KVMMemoryListener.slots_lock to protect the slots inside the kvm memory listener. Currently it is close to useless because all the KVM code paths are always protected by the BQL. But it'll start to make sense in follow-up patches where we might do remote dirty bitmap clear and also update the per-slot cached dirty bitmap even without the BQL. So let's prepare for it. We could also use a per-slot lock for the above reason, but it seems to be overkill. Let's just use this bigger one (which covers all the slots of a single address space); it is anyway still much smaller than the BQL. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190603065056.25211-10-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
kvm: Persistent per kvmslot dirty bitmap
When synchronizing the dirty bitmap from kernel KVM we do it in a per-kvmslot fashion and we allocate the userspace bitmap for each ioctl. This patch instead makes the bitmap cache persistent so that we don't need to g_malloc0() every time. More importantly, the cached per-kvmslot dirty bitmap will be further used when we add support for KVM_CLEAR_DIRTY_LOG, and this cached bitmap will be used to guarantee we won't clear any unknown dirty bits, which otherwise could be a severe data loss issue for the migration code. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20190603065056.25211-9-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
kvm: Update comments for sync_dirty_bitmap
The comments are obviously obsolete; update them. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190603065056.25211-8-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
memory: Introduce memory listener hook log_clear()
Introduce a new memory region listener hook, log_clear(), to allow the listeners to hook onto the points where the dirty bitmap is cleared by the bitmap users.

Previously log_sync() contained two operations:
- dirty bitmap collection, and,
- dirty bitmap clear on the remote site.

Let's take KVM as an example: log_sync() for KVM will first copy the kernel dirty bitmap to userspace, and at the same time we'll clear the dirty bitmap there along with re-protecting all the guest pages again. We add this new log_clear() interface only to split the old log_sync() into two separate procedures:
- use log_sync() to do the collection only, and,
- use log_clear() to clear the remote dirty bitmap.

With the new interface, the memory listener users will still be able to decide how to implement the log synchronization procedure, e.g., they can still provide only the log_sync() method and put both procedures within log_sync() (that's how the old KVM works before KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is introduced). However, with this new interface the memory listener users will start to have a chance to postpone the log clear operation explicitly if the module supports it. That can really benefit users like KVM, at least for host kernels that support KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2.

There are three places that can clear dirty bits in any one of the dirty bitmaps in the ram_list.dirty_memory[3] array:
  cpu_physical_memory_snapshot_and_clear_dirty
  cpu_physical_memory_test_and_clear_dirty
  cpu_physical_memory_sync_dirty_bitmap
Currently we hook directly into each of the functions to notify about the log_clear().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190603065056.25211-7-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
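A schematic C sketch of the log_sync()/log_clear() split (simplified stand-in types, not QEMU's MemoryListener):

    struct section { unsigned long start, size; };   /* stand-in for a memory region section */

    struct listener_sketch {
        void (*log_sync)(struct section *s);   /* collect: copy kernel bitmap to userspace */
        void (*log_clear)(struct section *s);  /* clear + re-protect, possibly much later */
    };

    static void kvm_log_sync_sketch(struct section *s)  { (void)s; /* fetch bitmap only */ }
    static void kvm_log_clear_sketch(struct section *s) { (void)s; /* issue clear here  */ }

    /* A listener that supports the new hook provides both callbacks; an old
     * one may keep doing both steps inside log_sync() and omit log_clear(). */
    static struct listener_sketch kvm_listener_sketch = {
        .log_sync  = kvm_log_sync_sketch,
        .log_clear = kvm_log_clear_sketch,
    };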
memory: Pass mr into snapshot_and_clear_dirty
Also, change its 2nd parameter to be the relative offset within the memory region. This is to be used in follow-up patches. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190603065056.25211-6-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
bitmap: Add bitmap_copy_with_{src|dst}_offset()
These helpers copy the source bitmap to the destination bitmap with a shift either on the src or the dst bitmap. Meanwhile, we have never had bitmap tests, but we should. This patch also introduces the initial test cases for utils/bitmap.c, though it only tests the newly introduced functions. Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-Id: <20190603065056.25211-5-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> --- The bitmap test used sizeof(unsigned long) instead of BITS_PER_LONG.
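A bit-by-bit reference sketch of what a copy-with-source-offset helper does (the real helpers work word-wise; the name and signature here are illustrative):

    #include <limits.h>

    #define BIT_WORD(n)  ((n) / (sizeof(unsigned long) * CHAR_BIT))
    #define BIT_MASK(n)  (1UL << ((n) % (sizeof(unsigned long) * CHAR_BIT)))

    /* dst[i] = src[offset + i] for i in [0, nbits) */
    static void copy_with_src_offset_sketch(unsigned long *dst,
                                            const unsigned long *src,
                                            unsigned long offset,
                                            unsigned long nbits)
    {
        for (unsigned long i = 0; i < nbits; i++) {
            unsigned long b = offset + i;
            if (src[BIT_WORD(b)] & BIT_MASK(b)) {
                dst[BIT_WORD(i)] |= BIT_MASK(i);
            } else {
                dst[BIT_WORD(i)] &= ~BIT_MASK(i);
            }
        }
    }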
-
memory: Don't set migration bitmap when without migration
Similar to 9460dee ("memory: do not touch code dirty bitmap unless TCG is enabled", 2015-06-05) but for the migration bitmap - we can skip the MIGRATION bitmap update if migration is not enabled. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190603065056.25211-4-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
-
migration: No need to take rcu during sync_dirty_bitmap
cpu_physical_memory_sync_dirty_bitmap() has one RAMBlock* as parameter, which means that it must already be called with the RCU read lock held. Taking it again inside is redundant; remove it and instead document the RCU read lock requirement on the functions. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190603065056.25211-2-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>