Hello,

I'm implementing your method in pure PyTorch, and it works up through the sample-importance learning, i.e. everything before the fine-tuning stage. However, I have some additional questions about the adaptive sampling and the fine-tuning process.

Could you point me to exactly where adaptive sampling comes into play during the fine-tuning stage?
I implemented adaptive sampling from the learned sample importance using a top-k selection, applied after masking in only the importance values that exceed the adaptive threshold. Because the data is batched, my algorithm zeroes out the remaining importance values in the cases you mention in the paper.
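To make the step above concrete, here is a minimal sketch of my selection logic. The function name, tensor layout `(batch, n_candidates)`, and threshold handling are my own assumptions, not taken from your code or the paper:

```python
import torch

def adaptive_sample_mask(importance, max_samples, threshold):
    """Sketch of my adaptive selection: keep the per-batch top-k
    importance values that exceed the adaptive threshold, and set
    every other importance value to zero (batched data format)."""
    # Mask: zero out candidates whose importance is at or below the threshold.
    masked = torch.where(importance > threshold, importance,
                         torch.zeros_like(importance))
    # Per batch element, keep at most `max_samples` of the surviving candidates.
    k = min(max_samples, importance.shape[1])
    topk_vals, topk_idx = torch.topk(masked, k, dim=1)
    # Scatter the kept values back; everything else stays zero.
    selected = torch.zeros_like(importance)
    selected.scatter_(1, topk_idx, topk_vals)
    return selected

importance = torch.tensor([[0.9, 0.1, 0.5, 0.3]])
print(adaptive_sample_mask(importance, max_samples=2, threshold=0.2))
```

If fewer than `max_samples` candidates exceed the threshold, the extra top-k slots carry zeros and scatter back as zeros, so the budget is simply underused for that batch element.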
In addition, I'm confused about the meaning of this sentence in the paper (Section 3.2, Fine-tuning): "Note that this phase results in separate shading networks for each maximum sample count, while all rely on the same sampling network."

My fine-tuning stage does not work, however, and I'm still struggling to fix it. Could you explain this point in more detail?
(I've attached my implementation code to help clarify what I'm doing.)
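For reference, this is how I currently read that sentence, as a minimal sketch: one shared sampling network, plus an independently fine-tuned shading network per maximum sample count. The layer shapes and names here are placeholders of my own, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SamplingNet(nn.Module):
    """Predicts per-pixel sample importance; shared across all budgets."""
    def __init__(self, in_ch=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Softplus(),  # importance >= 0
        )
    def forward(self, x):
        return self.net(x)

class ShadingNet(nn.Module):
    """Reconstructs radiance; one instance fine-tuned per max sample count."""
    def __init__(self, in_ch=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

sampling_net = SamplingNet()                          # one shared sampler
shading_nets = {n: ShadingNet() for n in (2, 4, 8)}   # one net per budget
```

Is this the intended setup, i.e. during fine-tuning the sampling network stays fixed while each `shading_nets[n]` is trained separately for its sample budget?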