This repository has been archived by the owner on Jan 28, 2023. It is now read-only.

[Draft] GPA Protection Implementation #35

Open

Taogle2018 wants to merge 2 commits into master
Conversation

Taogle2018 (Contributor)

Hi, Ning,

This is the latest implementation of the GPA_PROT call. The implementation is split into two parts: one covers protection using EPT, while the other covers protection against accesses from the driver itself. The second part is not tested.
It could be used as a base for discussion.
BTW: I did not finish the Mac part, but that does not sound like a road blocker now.

Haitao

@Taogle2018 (Author)

https://android-review.googlesource.com/q/topic:%22On-demand+Snapshot+Windows%22+(status:open%20OR%20status:merged)
This is the emulator-side patch. However, it is not up to date; at minimum, the GPA protection bit definition needs to be updated to make it work with the driver-side changes.

raphaelning (Contributor) left a comment

Thanks, Haitao! In general this patch looks pretty clean. So far my comments are mostly about cosmetic issues. But there are also a couple of more important issues that I'd like to give more thought to. If you can explain your design decisions, that would be great.

@@ -109,7 +109,8 @@ enum exit_status {
     HAX_EXIT_HLT,
     HAX_EXIT_STATECHANGE,
     HAX_EXIT_PAUSED,
-    HAX_EXIT_FAST_MMIO
+    HAX_EXIT_FAST_MMIO,
+    HAX_EXIT_GPAPROT
raphaelning (Contributor)

How about HAX_EXIT_PAGEFAULT? I think that sounds more intuitive and corresponds to HVF's EXIT_REASON_EPT_FAULT. One could argue that we don't allow user space to handle every EPT page fault, but HVF probably doesn't either.

Taogle2018 (Author)

Sounds good!

@@ -169,6 +174,7 @@ struct hax_module_version {
 #define HAX_CAP_64BIT_RAMBLOCK (1 << 3)
 #define HAX_CAP_64BIT_SETRAM (1 << 4)
 #define HAX_CAP_TUNNEL_PAGE (1 << 5)
+#define HAX_CAP_GPA_PROTECTION (1 << 6)
raphaelning (Contributor)

How about HAX_CAP_RAM_PROTECTION? We use RAM to refer to the guest RAM in many other places, and so does QEMU.

Taogle2018 (Author)

OK.
The name came from my original design thinking about the API. At the time, I was struggling over which to use as the protection index, HVA or GPA. Later I decided to use GPA, even though QEMU then needs to maintain slot information. But this sounds like the right direction to go.

raphaelning (Contributor)

We could actually return both the GPA and HVA to QEMU at every HAX_EXIT_PAGEFAULT, and also allow the new IOCTL to be called either with a GPA range or HVA range (e.g. distinguished by a bit in the flags parameter). That way, we wouldn't even need the gpa2hva or hva2gpa conversion functions in QEMU, but it would require some API changes to the current snapshot support code in Android Emulator...

@@ -236,6 +242,14 @@ struct hax_set_ram_info2 {
     uint64_t reserved2;
 } PACKED;
 
+#define HAX_GPA_PROT_MASK 0x7 // one bit each for r/w/e
+#define HAX_GPA_PROT_ALL  0x7 // disable r/w/e all
raphaelning (Contributor)

I was confused at first whether "disable all" means to disallow or allow all accesses. So how about

// No access (R/W/X) is allowed
#define HAX_RAM_PERM_NONE 0x0
// All accesses (R/W/X) are allowed
#define HAX_RAM_PERM_RWX  0x7
#define HAX_RAM_PERM_MASK 0x7

Taogle2018 (Author)

Agree.

@@ -236,6 +242,14 @@ struct hax_set_ram_info2 {
     uint64_t reserved2;
 } PACKED;
 
+#define HAX_GPA_PROT_MASK 0x7 // one bit each for r/w/e
+#define HAX_GPA_PROT_ALL  0x7 // disable r/w/e all
+struct hax_gpa_prot_info {
raphaelning (Contributor)

hax_ram_prot_info?

Taogle2018 (Author)

OK.

@@ -161,4 +161,8 @@ extern PDRIVER_OBJECT HaxDriverObject;
 #define HAX_VM_IOCTL_NOTIFY_QEMU_VERSION \
     CTL_CODE(HAX_DEVICE_TYPE, 0x910, METHOD_BUFFERED, FILE_ANY_ACCESS)
 
+/* API version 3.0 */
raphaelning (Contributor)

The current HAXM API version is actually 4. For backward compatibility, since there's already a new capability flag that QEMU can check, do we need to upgrade the API version this time?

Taogle2018 (Author)

I forget why I added this. :( I actually do not know how the HAXM API version is defined and used.
Perhaps it is something I forgot to clean up. Let me remove it.

int gpa_space_adjust_prot_bitmap(struct hax_gpa_space *gpa_space,
uint64 max_gpfn);

int gpa_space_test_prot_bitmap(struct hax_gpa_space *gpa_space, uint64 gfn);
raphaelning (Contributor)

  1. Return type: int => bool, since it returns either true or false.
  2. How about renaming it to gpa_space_is_page_protected? If we ever implement fine-grained protection, we can add is_page_readable, is_page_writable, etc.

Taogle2018 (Author)

Nice!

uint64 max_gpfn);

int gpa_space_test_prot_bitmap(struct hax_gpa_space *gpa_space, uint64 gfn);
int gpa_space_chunk_protected(struct hax_gpa_space *gpa_space, uint64 gfn,
raphaelning (Contributor)

Return type: int => bool

@@ -116,10 +116,10 @@ static inline void hax_mutex_free(hax_mutex lock)
 }
 
 /* Return true if the bit is set already */
-static int hax_test_and_set_bit(int bit, uint64_t *memory)
+static int hax_test_and_set_bit(uint64 bit, uint64_t *memory)
raphaelning (Contributor)

I have concerns about this change. Is it safe to use a 64-bit integer as an array index (base) on a 32-bit OS? (We still support 32-bit Windows.) We could change int to long, but long is 32 bits on Windows. If the caller could adapt to the current interface, that would be great. After all, the protection bitmap shouldn't be very long, right? If max_gpa is 8GB, the bitmap size will be 256KB.
Anyway, since this change is self-contained, if it's really necessary, we should make a separate patch for it.

Taogle2018 (Author)

Removed.

return hax_test_bit(gfn, (uint64 *)pbm->bitmap);
}

int gpa_space_chunk_protected(struct hax_gpa_space *gpa_space, uint64 gfn,
raphaelning (Contributor)

I see the QEMU patch makes use of HAX_CHUNK_SIZE, which is currently set to 2MB. When we introduced chunks with EPT2, we considered them an internal implementation detail and didn't expect to expose them to user space. If it's absolutely necessary, we should add an API to allow user space to query the chunk size.
My question is, is making QEMU aware of HAXM chunks a necessity or an optimization? Admittedly, right now, at each non-MMIO EPT violation, we pin a whole chunk (instead of just the faulting page) in host RAM, and set up multiple EPT entries, so as to reduce the number of future EPT violations. But it's possible to make ept_tree_create_entries() skip the GFNs that are protected, so if the guest accesses one of them, we'll still get a VM exit. If we do that, is there any more reason for QEMU to know about chunks?

Taogle2018 (Author)

In theory (I mean, I did not test it), QEMU does not need to know the chunk size. It can resolve just one page at a time, or 1MB, etc. It simply means that in that case, QEMU would receive more EPT_FAULT exits at the same place, each reporting a different GPA. I did not design it in a way that requires user space to rely on the exact chunk size.
For the QEMU-side patch, I have to admit I wrote it that way because I have knowledge of HAXM internals.
I think as long as the driver does not make assumptions about the user-space implementation, it should be fine.

raphaelning (Contributor)

So basically it's an optimization. I'd prefer that we process/fill one guest page per exit to QEMU, which is the same page that hardware (EPT violation) gave us, because

  1. Having user space depend on a driver implementation detail is not a good practice, since the interface is not stable, although it's not as bad as having the driver depend on user space.
  2. Chunks are created in HVA space (by dividing a RAM block), not GPA space. So technically, rounding a GPA down to the nearest 2MB boundary is not guaranteed to give you a GPA that maps to the same chunk.

raphaelning (Contributor)

  1. Some chunks are smaller than 2MB, e.g. in case of a tiny RAM block, or the last chunk in a RAM block, so __gpa < gpa + 0x200000 doesn't always work.
  2. HVF also fills one page at a time and takes this many VM exits. If we fill too many pages at a time, we may be doing kind of eager loading rather than on-demand loading (there's a trade-off).

If the performance is not good enough, we could add the chunk-based optimization in a later patch.

@@ -70,7 +70,8 @@ void ept_handle_mapping_changed(hax_gpa_space_listener *listener,
 }
 
 int ept_handle_access_violation(hax_gpa_space *gpa_space, hax_ept_tree *tree,
-                                exit_qualification_t qual, uint64 gpa)
+                                exit_qualification_t qual, uint64 gpa,
+                                uint64 *fault_gfn)
raphaelning (Contributor)

So this is the intrusive change you were talking about :) It didn't look very intrusive to me until I peeked at the second patch. I need to spend more time tomorrow to understand the rationale better and think about possible alternatives. But maybe I can ask you this for now: if we don't have to worry about chunks (see my other long comment below), do we still need this extra parameter?

Taogle2018 (Author)

Ah, yes.
I also thought about something like:

__try {
    hax_vcpu_run();
    /* --> call RaiseException when we fail the protection check */
} __except (xxxx) {
    /* fill the related htun and exit */
}

But I have two concerns, and I do not know the answers:
1. It seems too Windows-specific. Is something similar possible on Mac? (I have not worked on Mac so far.) I believe it is OK from Google's point of view not to implement this on Mac.
2. I am not sure which exception I should raise when the protection bitmap check fails. My concern is that if I choose an existing exception code (page fault, for example) and later a real page fault happens due to a bug in the code, it could be misinterpreted as our RAM protection fault. And I do not know whether a user-defined exception code is acceptable.

raphaelning (Contributor)

My question was probably not clear enough, so you must have mistaken it for something else - but thanks to that, I get to see this very interesting idea in your answer :)

I totally understand that Google's priority is to bring on-demand snapshot loading to Windows, so it's fine to leave out Mac for the QEMU (Android Emulator) side patch. But I still hope the new API we add this time works on both Windows and Mac, with the same behavior, as all the existing HAXM APIs do. So I agree with you that any Windows-specific API design should be avoided.

Back to my original question: I know this new fault_gfn parameter is a result of the call to gpa_space_chunk_protected(). My point is, if we don't worry about chunks and just stick to processing (filling) one page per exit to QEMU (via HAX_EXIT_PAGEFAULT), the pagefault GPA we return to QEMU will be the same as gpa, so there's no need to call gpa_space_chunk_protected() just because it might return a different GPA.

raphaelning (Contributor)

By the way, I've thought about handling HAXM's own accesses to guest RAM, and your "intrusive" approach makes more sense to me. But I also came to realize how complicated the problem is - for example, since it's possible for HAXM to access a protected GPA in the middle of its MMIO or PIO handler (instruction emulation) code, we have to make sure the code before the GPA access does nothing "irreversible", because as soon as we realize the GPA is protected, we'll abort the handler and return to QEMU with the new exit reason (instead of HAX_EXIT_FAST_MMIO or HAX_EXIT_IO), and must be able to emulate the same MMIO/PIO instruction again when QEMU re-executes it.

@intel deleted a comment from xianxiaoyin Aug 23, 2019
@intel deleted a comment from yiqisiben Aug 23, 2019
@intel deleted a comment from yiqisiben Aug 23, 2019
@intel deleted a comment from xianxiaoyin Aug 23, 2019

2 participants