"New York scene" looks worse than output from Adobe Camera RAW #18
It all depends on your definition of "good" and "bad". Some people prefer noisy but more detailed; some prefer less noise but smoother. There is no mathematical answer for a "better" or "worse" comparison. Personal impression is far too important in this regard for an objective answer to be possible.
I've been tearing my hair out looking for it here, but the supplement section can be found here, with 6 additional pages of detail. Thanks for pointing me in the right direction!

I have a Pocophone F1 with GCam and OpenCamera, so it will be interesting to compare GCam against ImageStackAlignator + raws from OpenCamera. I also plan to do comparisons with the HDR+ dataset. Hopefully I can tune the parameters to resolve more detail from the New York scene as well :)

The way I see it, your software brings state-of-the-art computational photography to any camera, not constrained by the weak processing power of a mobile device or by short processing times.

On a side note, I'd love to donate to the continued development of this and to further user-friendliness. Is it possible to work with Nvidia to get around the 2GB limit?
Now that I think of it, the 2GB limit actually comes from .NET and not from NPPi: .NET arrays are definitely limited to 2GB in size in .NET Framework 4.6. Not sure how .NET Core handles this.
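For context, .NET Framework (on 64-bit, from version 4.5 onward) can be told to allow objects larger than 2GB via a runtime configuration switch; a sketch of the relevant `App.config` fragment would look like this (note that even with the switch enabled, a single array dimension is still capped at roughly 2^31 elements, so very large buffers may need restructuring regardless):

```xml
<configuration>
  <runtime>
    <!-- Opt in to objects larger than 2 GB on 64-bit .NET Framework 4.5+.
         Without this, any single array allocation above 2 GB throws
         OutOfMemoryException regardless of available RAM. -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```

As far as I know, .NET Core enables large-object allocations by default, which may be why the limit only shows up under .NET Framework.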
There's less noise, but it looks like everything has been smoothed out. It is significantly harder to make out fine detail such as pavement lines, borders between windows, etc. Is this just a case where there aren't enough frames, or is the SNR simply too low? (In the paper, figure 21 is the only direct comparison between classic demosaicing and their method, but it's quite a bit brighter than the "New York scene".) Of course, it's unlikely this algorithm is better in all cases, but I'm still curious as to why.
On a side note, can you explain more about Dth, Dtr, kDetail, and kDenoise? I don't actually see them in the paper...