A question about the computation of total number of blocks #12
I believe that this accurately describes our implementation. The implementation of PathORAM in this repo is generic over an ORAMStorage object which stores the entire tree. The PathORAM object only cares that it can check out any branch that it wants. This is why you cannot find a single container being allocated with the obvious total size. However, we can see the checked-out branch here: https://github.com/mobilecoinofficial/mc-oblivious/blob/e9be7af384654c5467fb28f4396eed6c255f0e15/mc-oblivious-ram/src/path_oram/mod.rs#L250

It contains a vector of byte chunks, one per level, so if this is a branch in the tree, its length tells us the height of the tree.

The ORAMStorage object that we actually use in the enclaves does two things; the logic for its encryption / authentication scheme is in the trusted crate. The allocation occurs here: https://github.com/mobilecoinfoundation/mobilecoin/blob/21aabfb46cf750817c35696fee3a42e505ca49a1/fog/ocall_oram_storage/untrusted/src/lib.rs#L105

There are a bunch of additional considerations about exactly how big the tree needs to be. There's at least one important optimization over the conference paper, given by Gentry et al.: https://eprint.iacr.org/2013/239

The tree-top caching that we did can be roughly compared with the ZeroTrace implementation here: https://github.com/sshsshy/ZeroTrace/blob/master/ZT_Enclave/ORAMTree.cpp. It's not exactly the same, but it's similar in spirit.

Please let me know if you still think there is an inconsistency or if you have any other questions; I'm happy to share my thoughts.
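To make the branch-length observation concrete, here is a minimal sketch (my own toy code, not the mc-oblivious API): a checked-out branch holds one bucket per level from a leaf up to the root, so the number of chunks in the branch alone reveals the height of the tree.

```rust
// Hypothetical sketch, not the actual mc-oblivious API: a checked-out
// branch is a vector with one bucket per level, leaf to root.
fn tree_height_from_branch_len(branch_len: usize) -> usize {
    // A branch from a leaf to the root in a tree of height h touches
    // h + 1 nodes, so the height is the branch length minus one.
    branch_len - 1
}

fn main() {
    // A tree of height 3 yields branches of 4 buckets:
    // the leaf, two internal nodes, and the root.
    assert_eq!(tree_height_from_branch_len(4), 3);
}
```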
Thank you for your comprehensive explanation. Yeah, I'm reading your code and the ZeroTrace code simultaneously, and the spirits of both are indeed similar, as you said. According to my understanding, though, your implementation computes the total number of blocks differently from the paper. Moreover, I plan to adapt RingORAM to your framework, and the way you compute the total number of blocks seems not compatible with RingORAM.

By the way, your implementation is concise, beautiful, and powerful. I like it.
I see what you are saying. I think I was trying to implement the optimization described in Gentry et al. in section 3: https://eprint.iacr.org/2013/239
But it's possible that I messed it up. I didn't remember this part.
But nevertheless the implementation seems to work: we create trees with thousands of items and exercise them millions of times with the Z=4 ORAM. It might be that there is a more correct way to think about it; I need to think more about this and read these papers again, sorry.
I agree it does seem unsafe. There is the stash as well, although that should not really matter if the entire tree is full. In this test, we are making an n=8192 ORAM and exercising it 20,000 times. If there were really no space for dummy blocks, then I would expect that after 20,000 accesses we would see overflow. So I am missing something here. I will write again later. Thanks for your questions!
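A back-of-the-envelope check of the worry above, with my own arithmetic rather than code from the repo: if the tree offered exactly as many block slots as there are real items, there would be zero slots free for dummy blocks, whereas the paper's formula Z * 2^L leaves most slots free.

```rust
// My own arithmetic, not code from the repo: fraction of block slots
// left over for dummy blocks once all real items are stored.
fn dummy_slot_fraction(total_slots: u64, real_items: u64) -> f64 {
    total_slots.saturating_sub(real_items) as f64 / total_slots as f64
}

fn main() {
    // n = 8192 items in exactly 8192 slots: no room for dummies at all,
    // which is the scenario where stash overflow would be expected.
    assert_eq!(dummy_slot_fraction(8192, 8192), 0.0);
    // The formula Z * 2^L = 4 * 2^13 = 32768 slots leaves 75% free.
    assert_eq!(dummy_slot_fraction(32768, 8192), 0.75);
}
```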
Thank you for your kind words!
That sounds like a really interesting project!
Yeah, it is confusing that you observe no overflow. But I notice that you have observed it when Z=2.
Thank you for your patience. I'm looking forward to your explanation.
This adds a version of exercise_path_oram that queries consecutive locations over and over, rather than using the "progressive" strategy. The progressive strategy was good initially because it would find bugs quickly when items were queried again, and it's also good for testing very large ORAMs in an interesting way. The new strategy helps to answer questions posed in issue #12, like: is it the case that when all items exist in the tree, there is no space for any dummy blocks, due to our choice of parameters? If that were the case, we would expect this test to fail when the number of rounds is significantly larger than the size of the ORAM, because with high probability some item would be mapped where there is no space on the branch that it selects or in the stash.
It occurred to me that the test I mentioned was using a "progressive" probing strategy which doesn't actually access all locations in the ORAM, so it's conceivable that the ORAM has that problem and the test still passes. I have added some more tests that always probe consecutive locations, and they seem to be passing locally. This just helps give me confidence that the stuff in prod is not broken, though; it doesn't yet explain why it's not broken.
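The consecutive probing idea can be sketched like this; `ToyOram` below is a hypothetical in-memory stand-in for a real PathORAM instance, just to show the access pattern and the read-back check.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for an ORAM: a plain map, no obliviousness.
struct ToyOram {
    store: HashMap<u64, u64>,
    size: u64,
}

impl ToyOram {
    fn new(size: u64) -> Self {
        ToyOram { store: HashMap::new(), size }
    }
    // Write `val` at `idx`, returning the previous value (0 if unset).
    fn write(&mut self, idx: u64, val: u64) -> u64 {
        self.store.insert(idx, val).unwrap_or(0)
    }
}

// Query consecutive locations over and over, wrapping around so that
// every location is touched, and check each read-back value.
fn exercise_consecutive(oram: &mut ToyOram, rounds: u64) {
    for r in 0..rounds {
        let idx = r % oram.size;
        // The previous write to this index happened `size` rounds ago.
        let expected = if r < oram.size { 0 } else { r - oram.size };
        let old = oram.write(idx, r);
        assert_eq!(old, expected, "read back an unexpected value");
    }
}

fn main() {
    // Many more rounds than locations, as in the overflow question.
    let mut oram = ToyOram::new(8);
    exercise_consecutive(&mut oram, 1000);
}
```

Against a real PathORAM, the interesting failure mode would be a stash overflow panic rather than a wrong read-back, but the driving loop is the same shape.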
Ok, I think the reason is here: we are actually creating the ORAM storage object with fewer buckets than the paper's formula would suggest. The code comment above this explains why.
So I think we have effectively done Gentry et al.'s optimization: instead of 2^n nodes, we have 2 * 2^n / Z nodes. Another problem I remember now is that what I have called the "height" of a node in all of the comments is actually called the "depth" in the computer science literature, and I really should fix that.
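My reading of the bucket-count comparison above, as a sketch with assumed names: for a capacity of `size` item slots with bucket size Z, the optimized tree uses about 2 * size / Z buckets instead of `size`.

```rust
// Sketch of the node-count comparison stated above; the function names
// are my own, not identifiers from the repo.
fn optimized_bucket_count(size: u64, z: u64) -> u64 {
    // Instead of 2^n nodes for size = 2^n, use 2 * 2^n / Z nodes.
    2 * size / z
}

fn main() {
    // With size = 2^13 = 8192 and Z = 4: 2 * 8192 / 4 = 4096 buckets,
    // i.e. half as many buckets as item slots, yet each bucket still
    // holds Z blocks, so there is room for dummies after all.
    assert_eq!(optimized_bucket_count(8192, 4), 4096);
}
```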
Understood! Thank you for your explanation! I've learned a lot.
Hi, I have a question about the implementation of PathORAM. In the conference paper, N is the working set, i.e., the number of distinct data blocks stored in the ORAM. The derivation of the total number of blocks is also given in the journal version: the height is L = ceil(log_2(N)), and the total server storage is Z * 2^L blocks. This derivation seems inconsistent with your implementation. What are the considerations here?
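For reference, the journal-version formula as stated above can be computed directly; this is a sketch with hypothetical function names, not code from either paper or repo.

```rust
// Journal formula: height L = ceil(log2(N)), total storage Z * 2^L.
fn paper_total_blocks(n: u64, z: u64) -> u64 {
    assert!(n > 1);
    // ceil(log2(n)) via the bit length of n - 1.
    let l = 64 - (n - 1).leading_zeros() as u64;
    z * (1u64 << l)
}

fn main() {
    // e.g. N = 1000, Z = 4: L = 10, total = 4 * 1024 = 4096 blocks.
    assert_eq!(paper_total_blocks(1000, 4), 4096);
}
```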