
Fine-tuning stage process and filtering sample adaptively #8

Open
dogyoonlee opened this issue Jun 7, 2023 · 0 comments

Hello,

I'm implementing your method in pure PyTorch, and it works up to the fine-tuning stage, including the sample-importance learning.

However, I have additional questions about the adaptive sampling and fine-tuning stage process.

Could you let me know exactly where adaptive sampling comes into play during the fine-tuning stage?

I implemented adaptive sampling based on the learned sample importance, using a top-k algorithm after masking for importance values that exceed the adaptive threshold.

Because of the batch-wise data format, the algorithm I designed sets the remaining importance values to zero in the cases you mentioned in the paper.
[image: excerpt from the paper describing these cases]

In addition, I'm confused about the actual meaning of this sentence in the paper (Section 3.2, Fine-tuning):
Note that this phase results in separate shading networks for each maximum sample count, while all rely on the same sampling network.
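To make sure I understand, here is my current reading of that sentence as a sketch of hypothetical PyTorch modules (the class, layer sizes, and sample counts are my own placeholders, not from the paper):

```python
import torch
import torch.nn as nn

class AdaptiveNeRF(nn.Module):
    """My reading: a single sampling network shared by all configurations,
    plus a separate shading network fine-tuned per maximum sample count."""
    def __init__(self, max_sample_counts=(2, 4, 8, 16), hidden=64):
        super().__init__()
        # One sampling network, shared across every maximum sample count
        self.sampling_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 128))
        # One shading network per maximum sample count
        self.shading_nets = nn.ModuleDict({
            str(k): nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 4))
            for k in max_sample_counts
        })

    def forward(self, rays, max_samples):
        # Shared sampling network predicts per-sample importance
        importance = self.sampling_net(rays)
        # ...adaptive sample selection based on `importance` goes here...
        # Pick the shading network fine-tuned for this maximum sample count
        shading = self.shading_nets[str(max_samples)]
        return importance, shading
```

In other words, fine-tuning would produce one shading network per maximum sample count, all driven by the same sampling network. Is that the intended interpretation, or am I misreading it?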

However, my implementation does not work, and I'm still struggling to fix it.
Could you explain this point in detail?

(I've attached my implementation code below to clarify my approach.)
[image: screenshot of the implementation code]
