Support SMM Relocated SmBase handling #3884

Closed
wants to merge 5 commits

Conversation

@jiaxinwu (Member) commented Jan 11, 2023

The default SMBASE for x86 processors is 0x30000. When an SMI occurs, the CPU
starts executing the SMI handler at SMBASE+0x8000, and the SMM save state area
lies within the region below SMBASE+0x10000.

One step of SMM initialization, from the CPU's perspective, is to relocate
and program the new SMBASE (within the TSEG range) for each CPU thread. When
the SMBASE relocation happens in a PEI module, that module shall produce the
SMM_BASE_HOB in the HOB database, which tells the PiSmmCpuDxeSmm driver
(running in a later phase) the new SMBASE for each CPU thread. The
PiSmmCpuDxeSmm driver then installs the SMI handler at
SMM_BASE_HOB.SmBase[Index]+0x8000 for CPU thread Index. When the HOB doesn't
exist, the PiSmmCpuDxeSmm driver shall relocate and program the new SMBASE
itself.
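
A minimal sketch of how a consumer such as PiSmmCpuDxeSmm might locate the HOB
and derive each thread's SMI entry point; the HOB layout and GUID name here are
illustrative assumptions, not a copy of the patch:

```c
#include <PiDxe.h>
#include <Library/HobLib.h>
#include <Library/DebugLib.h>

//
// Assumed HOB payload: one SmBase entry per CPU thread (illustrative only).
//
typedef struct {
  UINT64    NumberOfProcessors;
  UINT64    SmBase[1];            // Relocated SMBASE for each CPU thread
} SMM_BASE_HOB_DATA;

extern EFI_GUID  gSmmBaseHobGuid;  // GUID assumed to be published by this series

#define SMM_HANDLER_OFFSET  0x8000

/**
  Return the SMI handler entry address for the given CPU thread.
  Falls back to the default SMBASE when no HOB was produced in PEI.
**/
UINT64
GetSmiEntryForThread (
  IN UINTN  Index
  )
{
  EFI_HOB_GUID_TYPE  *GuidHob;
  SMM_BASE_HOB_DATA  *Data;

  GuidHob = GetFirstGuidHob (&gSmmBaseHobGuid);
  if (GuidHob == NULL) {
    //
    // No HOB: PiSmmCpuDxeSmm must relocate SMBASE itself, starting
    // from the power-on default of 0x30000.
    //
    return 0x30000 + SMM_HANDLER_OFFSET;
  }

  Data = GET_GUID_HOB_DATA (GuidHob);
  ASSERT (Index < Data->NumberOfProcessors);
  return Data->SmBase[Index] + SMM_HANDLER_OFFSET;
}
```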

These patches add the SMM Base HOB, which can be produced by any PEI module
that performs SmBase relocation ahead of the PiSmmCpuDxeSmm driver and stores
the relocated SmBase address of each processor in an array. PiSmmCpuDxeSmm and
SmmCpuFeaturesLib consume the HOB to simplify the SMM relocation process.
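
For example, a PEI module that performs the relocation could publish the HOB
roughly as follows (a hedged sketch: the GUID, the structure layout, and the
PublishSmmBaseHob helper are assumptions for illustration):

```c
#include <PiPei.h>
#include <Library/HobLib.h>

extern EFI_GUID  gSmmBaseHobGuid;   // Assumed GUID for the SMM Base HOB

typedef struct {
  UINT64    NumberOfProcessors;
  UINT64    SmBase[1];              // Illustrative layout, as in the sketch above
} SMM_BASE_HOB_DATA;

VOID
PublishSmmBaseHob (
  IN UINTN         NumberOfProcessors,
  IN CONST UINT64  *RelocatedSmBase   // Per-thread SMBASE already programmed in TSEG
  )
{
  SMM_BASE_HOB_DATA  *Data;
  UINTN              Index;

  Data = BuildGuidHob (
           &gSmmBaseHobGuid,
           sizeof (UINT64) + NumberOfProcessors * sizeof (UINT64)
           );
  if (Data == NULL) {
    return;
  }

  Data->NumberOfProcessors = NumberOfProcessors;
  for (Index = 0; Index < NumberOfProcessors; Index++) {
    Data->SmBase[Index] = RelocatedSmBase[Index];
  }
}
```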

With the SMM Base HOB, PiSmmCpuDxeSmm no longer needs the RSM instruction to
reload the SMBASE register with the newly allocated SMBASE each time it exits
SMM: the SMBASE register of every processor has already been programmed, and
all SMBASE addresses are recorded in the SMM Base HOB. Since the shared default
SMBASE address (0x30000) is no longer used, CPUs cannot overwrite each other's
SMM save state area in the PiSmmCpuDxeSmm driver. This allows the first SMI
initialization to run in parallel and saves boot time on multi-core systems.

@jiaxinwu force-pushed the master branch 3 times, most recently from 0c78904 to 8407251 on January 13, 2023 06:31
@jiaxinwu changed the title from "UefiCpuPkg: Support SMM Relocated SmBase handling" to "Support SMM Relocated SmBase handling" on Jan 13, 2023
@jiaxinwu force-pushed the master branch 6 times, most recently from 566621a to 814d96e on January 16, 2023 09:01
@jiaxinwu force-pushed the master branch 6 times, most recently from ac070e2 to 97c9e92 on February 17, 2023 01:46
mergify bot commented Mar 14, 2023

PR can not be merged due to conflict. Please rebase and resubmit

Background:
For the X64 arch, the system enables a page table in SPI flash covering the
0-512GB range via the CR4.PAE, MSR.LME, CR0.PG and CR3 settings (see the
ResetVector code). The existing code does not cover accesses to addresses above
512GB before the memory-discovered callback. That is a potential problem if the
system accesses such higher addresses after the transition from temporary RAM
to permanent memory.

Solution:
This patch migrates the page table to permanent memory, mapping the entire
physical address space, if CR0.PG is set during Temporary RAM Done.

Change-Id: I29bdb078ef567ed9455d1328cb007f4f60a617a2
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
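
A rough sketch of the migration step this commit describes, run at the
Temporary RAM Done callback. BuildPermanentIdentityMap is a hypothetical
helper standing in for the actual page-table construction; only the
control-register accessors are real BaseLib calls:

```c
#include <PiPei.h>
#include <Library/BaseLib.h>

#define CR0_PG_BIT  BIT31   // CR0.PG: paging enabled

//
// Hypothetical helper: builds an identity-mapped page table in permanent
// memory covering the entire physical address space and returns its root.
//
UINTN
BuildPermanentIdentityMap (
  VOID
  );

VOID
MigratePageTableOnTemporaryRamDone (
  VOID
  )
{
  UINTN  NewPageTable;

  if ((AsmReadCr0 () & CR0_PG_BIT) == 0) {
    //
    // Paging is not enabled yet; nothing to migrate here.
    //
    return;
  }

  //
  // The current page table still lives in SPI flash (built by the
  // ResetVector) and only maps 0-512GB. Rebuild it in permanent memory
  // so accesses above 512GB are covered, then switch CR3 over.
  //
  NewPageTable = BuildPermanentIdentityMap ();
  AsmWriteCr3 (NewPageTable);
}
```
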
Some security features depend on paging being enabled. So this patch enables
the page table if it has not already been enabled during the transition from
temporary RAM to permanent RAM.

Note: if paging is not enabled before this point, IA-32e mode is not active,
because on Intel 64 processors IA-32e mode operation requires physical address
extensions with 4 or 5 levels of paging structures (see SDM Section 4.5,
"4-Level Paging and 5-Level Paging", and Section 9.8, "Initializing IA-32e
Mode"). So a PAE page table is enabled when CR0.PG is not set.

Change-Id: Ibfbfdace1fe7e29ab94463629d8d2f539a43f1b9
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
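
The check this commit describes could look roughly like the following (a
sketch under stated assumptions; BuildPaePageTable is a hypothetical helper
that allocates and fills a PAE paging hierarchy):

```c
#include <PiPei.h>
#include <Library/BaseLib.h>

#define CR0_PG_BIT   BIT31   // CR0.PG: paging enabled
#define CR4_PAE_BIT  BIT5    // CR4.PAE: physical address extension

//
// Hypothetical helper: builds a PAE (non-IA-32e) page table in permanent
// memory and returns the physical address of the PDPT for CR3.
//
UINT32
BuildPaePageTable (
  VOID
  );

VOID
EnablePagingIfNotEnabled (
  VOID
  )
{
  if ((AsmReadCr0 () & CR0_PG_BIT) != 0) {
    //
    // Paging already on: the system is in IA-32e mode with 4- or 5-level
    // paging, so only the migration path (previous patch) applies.
    //
    return;
  }

  //
  // CR0.PG clear implies IA-32e mode is not active, so a PAE page table
  // is sufficient: set CR4.PAE, load CR3, then set CR0.PG.
  //
  AsmWriteCr4 (AsmReadCr4 () | CR4_PAE_BIT);
  AsmWriteCr3 (BuildPaePageTable ());
  AsmWriteCr0 (AsmReadCr0 () | CR0_PG_BIT);
}
```
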
Whether 5-level paging is enabled can be checked via CR4.LA57; for X64 mode,
the system's preferred page table level (PcdUse5LevelPageTable) must align
with the level used in the previous phase.

This patch adds that check:
If the CPU already runs in X64 PEI, the page table level in DXE must align
with the level used in PEI.
If the CPU runs in IA32 PEI, the page table level in DXE is decided by the PCD
and the feature capability.

Change-Id: Ia7f7e365c7354cc49f971209bfcbc5af5aded062
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
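
The alignment check described above could be sketched as follows
(illustrative only; Want5LevelPaging stands in for the PcdUse5LevelPageTable
setting combined with the CPU capability, which is evaluated elsewhere):

```c
#include <PiPei.h>
#include <Library/BaseLib.h>

#define CR4_LA57_BIT  BIT12   // CR4.LA57: 5-level paging active

/**
  Decide whether DXE should use 5-level page tables.

  @param[in] Want5LevelPaging  Preference from PcdUse5LevelPageTable plus
                               CPU capability (evaluated by the caller).
**/
BOOLEAN
Use5LevelPageTable (
  IN BOOLEAN  Want5LevelPaging
  )
{
 #if defined (MDE_CPU_X64)
  //
  // X64 PEI: paging is already active, so DXE must keep the same level
  // that PEI set up, which CR4.LA57 reports directly.
  //
  return (AsmReadCr4 () & CR4_LA57_BIT) != 0;
 #else
  //
  // IA32 PEI: IA-32e paging has not been set up yet, so the PCD and the
  // CPU capability decide the level DXE will use.
  //
  return Want5LevelPaging;
 #endif
}
```
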
Add CpuPageTableLib required by SecCore & CpuMpPei in OvmfPkg.

Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Add CpuPageTableLib required by SecCore & CpuMpPei in UefiPayloadPkg.

Cc: Guo Dong <guo.dong@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>