[DRAFT] Combined prototype of PCIe emulation with the NVMe emulator as a sample use case #1976
Conversation
…d downstream ports
```rust
/// controller response.
pub subsystem_id: Guid,
/// The controller ID, used in the identify controller response.
pub controller_id: u16,
```
This is not PCIe-specific, but Linux was refusing to enumerate multiple controllers with the same controller ID on the same NVM subsystem.
We discussed this some more offline. While this is fine, an easier short-term mitigation would be to use a different NVM subsystem ID for each controller, because the current CLI/config interface doesn't really allow NVM subsystem sharing.
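A minimal sketch of the mitigation discussed above (the `Guid` and config struct here are simplified stand-ins, not the emulator's actual types): give each emulated controller a unique `(subsystem_id, controller_id)` pair so the guest never sees two controllers with the same controller ID inside the same NVM subsystem.

```rust
use std::collections::HashSet;

// Simplified stand-in for the emulator's GUID type.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct Guid(u128);

// Hypothetical simplified controller configuration.
#[derive(Debug)]
struct NvmeControllerConfig {
    subsystem_id: Guid,
    controller_id: u16,
}

fn main() {
    // Two controllers reuse controller_id 0, but each lives in its own
    // NVM subsystem, so Linux sees no duplicate IDs per subsystem.
    let controllers = vec![
        NvmeControllerConfig { subsystem_id: Guid(1), controller_id: 0 },
        NvmeControllerConfig { subsystem_id: Guid(2), controller_id: 0 },
    ];
    let unique: HashSet<_> = controllers
        .iter()
        .map(|c| (c.subsystem_id, c.controller_id))
        .collect();
    // Every (subsystem, controller) pair is distinct.
    assert_eq!(unique.len(), controllers.len());
    println!("ok");
}
```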
```rust
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum PcieEnumerator {}

#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum PcieDownstreamPort {}
```
`PcieEnumerator` represents the internal "bus" of a root complex or switch (within a single chipset device).
`PcieDownstreamPort` represents a downstream port that a different chipset device can be plugged into.
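A sketch of why uninhabited enums work as type tags here (the `BusId` struct below is a hypothetical simplification, not the actual `vmotherboard` type): the marker type parameter distinguishes enumerator IDs from downstream-port IDs at compile time, while the runtime representation is just a name.

```rust
use std::marker::PhantomData;

// Hypothetical simplified typed bus ID: the zero-sized marker type
// parameter makes BusId<PcieEnumerator> and BusId<PcieDownstreamPort>
// distinct, incompatible types.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct BusId<T> {
    name: String,
    _marker: PhantomData<T>,
}

impl<T> BusId<T> {
    fn new(name: &str) -> Self {
        BusId { name: name.to_string(), _marker: PhantomData }
    }
}

// Uninhabited marker enums: never instantiated, only used as type tags.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
enum PcieEnumerator {}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
enum PcieDownstreamPort {}

fn main() {
    let enumerator: BusId<PcieEnumerator> = BusId::new("rc0");
    let port: BusId<PcieDownstreamPort> = BusId::new("rc0:port0");
    // `enumerator == port` would not compile: the types differ even
    // though both are just names at runtime.
    println!("{} {}", enumerator.name, port.name);
}
```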
```rust
processor_topology: &processor_topology,
mem_layout: &mem_layout,
cache_topology: None,
pcie_host_bridges: &Vec::new(),
```
I have learned that "host bridges" is probably not the right term for this. What I intend to convey is the CPU's view of the PCIe root and what is exposed about the root through ACPI, but "host bridge" means different things on different platforms (e.g. on Intel it means a PCIe function at address 0.0 on the internal bus of the root complex).
This change adds currently unused MCFG definitions and parsing to the `acpi_spec` crate. The MCFG table describes the memory-mapped configuration space base address for each PCIe root complex device on a system. Future changes will actually utilize these MCFG definitions to expose enhanced configuration access mechanism (ECAM) addresses to guests. See #1976 for a preview.
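To make the MCFG/ECAM relationship concrete, here is a sketch of one MCFG configuration-space allocation entry and the standard ECAM address computation it enables. The struct below follows the PCI Firmware Specification layout but is an illustrative stand-in, not the `acpi_spec` crate's actual definitions.

```rust
// One MCFG configuration-space base address allocation entry, per the
// PCI Firmware Specification (16 bytes on disk; 4 reserved bytes follow
// the fields shown here).
#[derive(Debug, Clone, Copy)]
struct McfgAllocation {
    base_address: u64, // ECAM base for this segment / bus range
    segment: u16,      // PCI segment group number
    start_bus: u8,
    end_bus: u8,
}

// ECAM gives each function 4 KiB of config space, addressed as
// base + (bus << 20 | device << 15 | function << 12).
fn ecam_address(alloc: &McfgAllocation, bus: u8, device: u8, function: u8) -> u64 {
    assert!(device < 32 && function < 8);
    alloc.base_address
        + ((bus as u64) << 20)
        + ((device as u64) << 15)
        + ((function as u64) << 12)
}

fn main() {
    let alloc = McfgAllocation {
        base_address: 0xE000_0000, // hypothetical ECAM base
        segment: 0,
        start_bus: 0,
        end_bus: 255,
    };
    // Bus 0, device 1, function 0 -> 0xE000_0000 + (1 << 15)
    println!("{:#x}", ecam_address(&alloc, 0, 1, 0));
}
```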
This change introduces an unused SSDT generator to the `acpi` crate, intended to expose PCIe root complexes (`PNP0A08` devices) to guests. Since the structure of SSDTs is so similar to that of DSDTs, this change extracts significant code from the DSDT generator into a shared `aml` module. Future changes will actually utilize this SSDT generator, see #1976 for a preview.
…eneration (#2027) This change enables the configuration of PCIe root complexes (with their associated ports) on OpenVMM VMs. Root complexes are presented to the guest through ACPI, but the probing of ports will come in a subsequent emulation change.

- Adds command line parameters `--pcie-root-complex` and `--pcie-root-port` and corresponding mesh configurations
- In the VM worker, translates configuration into `vm_topology::pcie::PcieHostBridge` objects representing the root complexes, using a naive MMIO allocation algorithm for ECAM and CRS ranges
- Generates SSDT and MCFG tables in ACPI when root complexes are configured for the VM

For an end-to-end view of how this fits together with previous and subsequent PCIe changes, see the combined draft PR #1976
…d downstream ports (#2122) This change introduces `vmotherboard` support in OpenVMM for wiring PCIe topology components together. This PR builds on previous changes to introduce root complex and root port emulators, and in subsequent PRs will be used to wire endpoint devices up to their parent devices. For a view of how this fits together, see #1976

Specifically, this change:

- Introduces two `vmotherboard::BusId` types, which are separate because they have distinct wiring requirements.
  - The first type represents a multi-port upstream device such as a root complex or, in the future, a switch. These devices represent an internal bus with multiple downstream ports.
  - The second type represents each downstream port of the multi-port devices; at most one downstream device can be attached to each downstream port.
- Adds a bus resolver to the motherboard, with associated error handling types, to facilitate the motherboard connecting upstream and downstream devices based on port names.
  - These are modeled on the legacy PCI bus resolution architecture, though they had to be separate. The legacy PCI bus infrastructure models a multi-device bus where BDFs are statically defined at VM start, whereas the PCIe bus infrastructure models both multi-port semantics for upstream devices and single-parent semantics for the PCIe links, and resolution is done based on string identifiers, not BDFs.
- When an enumerator is registered with the motherboard, the motherboard queries the downstream ports of the enumerator and tracks them internally for resolution.
- Updates the VM worker to register PCIe root complexes with this new resolution architecture.
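The resolution flow described above can be sketched roughly as follows. All names here (`PcieResolver`, `register_enumerator`, `attach`, `WireError`) are hypothetical simplifications of the `vmotherboard` design, not the real API: an enumerator's downstream ports are tracked by name, and each port accepts at most one downstream device.

```rust
use std::collections::HashMap;

// Hypothetical error types for name-based port resolution.
#[derive(Debug, PartialEq)]
enum WireError {
    UnknownPort(String),
    PortOccupied(String),
}

// Hypothetical sketch of the motherboard's PCIe bus resolver.
#[derive(Default)]
struct PcieResolver {
    // port name -> attached device name (None while the port is empty)
    ports: HashMap<String, Option<String>>,
}

impl PcieResolver {
    // Called when an enumerator (root complex, or later a switch) is
    // registered: the motherboard queries its downstream ports and
    // tracks them internally for resolution.
    fn register_enumerator(&mut self, port_names: &[&str]) {
        for name in port_names {
            self.ports.insert(name.to_string(), None);
        }
    }

    // Attach a downstream device to a named port, enforcing the
    // single-parent semantics of a PCIe link.
    fn attach(&mut self, port: &str, device: &str) -> Result<(), WireError> {
        match self.ports.get_mut(port) {
            None => Err(WireError::UnknownPort(port.to_string())),
            Some(slot) => {
                if slot.is_some() {
                    Err(WireError::PortOccupied(port.to_string()))
                } else {
                    *slot = Some(device.to_string());
                    Ok(())
                }
            }
        }
    }
}

fn main() {
    let mut resolver = PcieResolver::default();
    resolver.register_enumerator(&["rc0:port0", "rc0:port1"]);
    assert!(resolver.attach("rc0:port0", "nvme0").is_ok());
    // A second device on the same port is rejected.
    assert_eq!(
        resolver.attach("rc0:port0", "nvme1"),
        Err(WireError::PortOccupied("rc0:port0".to_string()))
    );
    println!("ok");
}
```

Note the contrast with legacy PCI resolution: here nothing is keyed by BDF; ports are resolved purely by string identifier, and occupancy is a per-port invariant rather than a per-bus slot table.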
This is all in!
This draft PR lays out a combined set of changes to enable basic PCIe emulation in OpenVMM. The changes are split into independent commits based on functionality. I tried to make these as small as possible, but some are still pretty large; let me know if you have suggestions for further splitting.
There are still a lot of rough patches in this code. To name a few:
`pci_core`. To get something working, this change basically hard-codes the type 1 root port config space.

Note: interrupts do not work. If you set the Linux admin/IO timeouts low enough, it will eventually enumerate controller(s), but this is still a work in progress.