
These backports are for the CVE-2021-47110 fix #6

Closed
wants to merge 3 commits

Conversation

@mngyadam commented Jun 7, 2024

Hello,

These three patches fix CVE-2021-47110; they were backported upstream to 5.10 and 5.4 as well. It would be good to have them applied here.

Thanks,
MNAdam

commit 8b79feffeca28c5459458fe78676b081e87c93a4 upstream.

Various PV features (Async PF, PV EOI, steal time) work through memory
shared with the hypervisor, and when we restore from hibernation we must
properly tear down all these features to make sure the hypervisor doesn't
write to stale locations after we jump to the previously hibernated kernel
(which can try to place anything there). For secondary CPUs the job is
already done by kvm_cpu_down_prepare(); register syscore ops to do
the same for the boot CPU.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210414123544.1060604-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Mahmoud Adam <mngyadam@amazon.com>
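To make the mechanism concrete, here is a minimal sketch (not the literal patch; the helper names kvm_pv_suspend/kvm_pv_resume and the teardown/re-init calls are illustrative) of registering syscore ops so the boot CPU gets the same PV-feature teardown around hibernation that secondary CPUs already get from CPU hotplug:

```c
#include <linux/syscore_ops.h>

/* Illustrative stand-ins for the real per-CPU teardown/re-init of the
 * shared-memory PV features (Async PF, PV EOI, steal time). */
static int kvm_pv_suspend(void)
{
	kvm_guest_cpu_offline();  /* stop hypervisor writes to shared pages */
	return 0;
}

static void kvm_pv_resume(void)
{
	kvm_guest_cpu_init();     /* re-register shared pages after restore */
}

static struct syscore_ops kvm_pv_syscore_ops = {
	.suspend = kvm_pv_suspend,
	.resume  = kvm_pv_resume,
};

/* Guest init code would then call register_syscore_ops(&kvm_pv_syscore_ops)
 * once; syscore callbacks run on a single CPU with interrupts disabled,
 * which is exactly the boot-CPU case the commit message describes. */
```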
commit c02027b5742b5aa804ef08a4a9db433295533046 upstream.

Currently, we disable kvmclock from the machine_shutdown() hook, and this
only happens for the boot CPU. We need to disable it for all CPUs to
guard against memory corruption, e.g. on restore from hibernation.

Note, writing '0' to the kvmclock MSR doesn't clear the memory location; it
just prevents the hypervisor from updating the location. So for the short
while after the write, while the CPU is still alive, the clock remains
usable and correct, and we don't need to switch to some other clocksource.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210414123544.1060604-4-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Mahmoud Adam <mngyadam@amazon.com>
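As a sketch of why this is cheap enough to do on every CPU (assuming the MSR bookkeeping in kvmclock.c; not the verbatim diff), disabling the clock is a single MSR write on the CPU being taken down:

```c
/* Writing 0 to the per-CPU system-time MSR tells the hypervisor to stop
 * updating the shared pvclock page. The page keeps its last contents,
 * which is why the clock stays usable and correct until the CPU dies
 * and no clocksource switch is needed. */
static void kvmclock_disable(void)
{
	native_write_msr(msr_kvm_system_time, 0, 0);
}
```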
commit 3d6b84132d2a57b5a74100f6923a8feb679ac2ce upstream.

The crash shutdown handler only disables kvmclock and steal time; other PV
features remain active, so we risk corrupting memory or getting
side effects in the kdump kernel. Move the crash handler to kvm.c and unify
it with CPU offline.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210414123544.1060604-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Mahmoud Adam <mngyadam@amazon.com>
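Schematically (illustrative names, not the exact diff), the unification makes the crash path run the full per-CPU PV teardown instead of a partial one:

```c
/* Previously the crash handler disabled only kvmclock and steal time;
 * after the change it reuses the same teardown as CPU offline, so all
 * shared-memory PV features are quiesced before kdump takes over. */
static void kvm_crash_shutdown(struct pt_regs *regs)
{
	kvm_guest_cpu_offline();  /* all PV features, not just two of them */
	native_machine_crash_shutdown(regs);
}
```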
@harshimogalapalli

Hi @mngyadam,

Thanks for contributing. If these were backported to 4.19.y, they would land directly on the target list of commits to backport in upcoming releases. Do you have any plans to backport them to 4.19.y stable?


@mngyadam (Author) commented Jul 2, 2024

I understand the concern here. Unfortunately, we don't maintain 4.19, so we wouldn't be able to verify or test the backports on that kernel version.

@harshimogalapalli

Hi Adam,

Thanks for letting me know. We will take your backports for the next tag. Thanks for your contributions :)

Thanks,
Harshit

vegard pushed a commit that referenced this pull request Jul 17, 2024
commit d3b17c6d9dddc2db3670bc9be628b122416a3d26 upstream.

Using completion_done to determine whether the caller has gone
away only works after a complete() call.  Furthermore, it's still
possible that the caller has not yet called wait_for_completion,
resulting in another potential UAF.

Fix this by making the caller use cancel_work_sync and then freeing
the memory safely.

Fixes: 7d42e097607c ("crypto: qat - resolve race condition during AER recovery")
Cc: <stable@vger.kernel.org> #6.8+
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 0ce5964b82f212f4df6a9813f09a0b5de15bd9c8)
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
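The shape of the fix, sketched with illustrative names (reset_data and its fields are assumptions standing in for the driver's reset context, not the exact qat code): the synchronous caller stops inferring liveness from completion_done() and instead cancels the work before freeing:

```c
/* Synchronous reset path: wait with a timeout, and on timeout make sure
 * the worker can no longer run before freeing its context, which closes
 * the use-after-free window. */
if (!async) {
	unsigned long left;

	left = wait_for_completion_timeout(&reset_data->compl, wait_time);
	if (!left) {
		cancel_work_sync(&reset_data->reset_work);
		ret = -EFAULT;
	}
	kfree(reset_data);  /* safe: no worker can touch it anymore */
}
```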
@harshimogalapalli

Hi @mngyadam

These three fixes were taken in for 4.14.349-openela (released 6 hours ago).

Thanks for your contribution.
