
5 merged "New York scene" looks worse than output from Adobe Camera RAW #18

boogerlad opened this issue Mar 7, 2021 · 3 comments


boogerlad commented Mar 7, 2021

There's less noise, but it looks like everything has been smoothed out. It is significantly harder to make out fine detail such as pavement lines, borders between windows, etc. Is this just a case where there aren't enough frames, or where the SNR is simply too low? (In the paper, figure 21 is the only direct comparison between classic demosaicing and their method, but it's quite a bit brighter than the "New York scene".) Of course, it's unlikely this algorithm is better in all cases, but I'm still curious as to why.

On a side note, can you explain more about Dth, Dtr, kDetail, and kDenoise? I actually don't see them in the paper...


kunzmi commented Mar 7, 2021

It all depends on your definition of "good" and "bad". Some people prefer noisy but more detailed results, others prefer less noise at the cost of smoothing; there is no mathematical answer to a "better" or "worse" comparison. Personal impression is far too important in this regard for an objective answer to be possible.
As for the technical aspects: this algorithm gives you a lot of control over the actual outcome, as you can adjust many parameters. The smoothness of the result in particular is controlled by the kernel parameters kDetail and kDenoise (these are described in the paper's supplement). With them you basically define the amount of smoothing to apply. Less smoothing of course means more noise; you have to find a trade-off that suits your needs.
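To illustrate the role of the four parameters, here is a rough C# sketch of the kernel shaping as I read the supplement; the names follow the paper, but the exact clamps and constants may differ from what this repository actually does:

```csharp
using System;

static class MergeKernelSketch
{
    // Shape the anisotropic merge kernel from the local structure tensor
    // eigenvalues (lambda1 >= lambda2). Returns the squared kernel radii
    // k1, k2 along the two eigenvectors.
    public static void KernelRadii(
        float lambda1, float lambda2,   // local gradient structure
        float kDetail, float kDenoise,  // user-tunable detail/denoise trade-off
        float Dth, float Dtr,           // denoising threshold and transition
        float kStretch, float kShrink,  // anisotropy gains along/across edges
        out float k1, out float k2)
    {
        // Local anisotropy: large on edges (lambda1 >> lambda2), ~2 in flat areas.
        float A = 1.0f + (float)Math.Sqrt(lambda1 / Math.Max(lambda2, 1e-8f));

        // Denoising weight: D -> 1 in flat, low-contrast regions (smooth more),
        // D -> 0 on strong edges (keep detail). Dth shifts and Dtr scales it.
        float D = Clamp(1.0f - (float)Math.Sqrt(lambda1) / Dtr + Dth, 0.0f, 1.0f);

        // Edge-adapted radii: stretched along the edge, shrunk across it.
        float k1Hat = kDetail * kStretch * A;
        float k2Hat = kDetail / (kShrink * A);

        // Blend towards an isotropic kernel of radius kDetail * kDenoise:
        // larger kDenoise means stronger smoothing wherever D is high.
        k1 = Sqr((1.0f - D) * k1Hat + D * kDetail * kDenoise);
        k2 = Sqr((1.0f - D) * k2Hat + D * kDetail * kDenoise);
    }

    static float Clamp(float x, float lo, float hi) => Math.Max(lo, Math.Min(hi, x));
    static float Sqr(float x) => x * x;
}
```

k1 and k2 then form the covariance of the Gaussian that weights each frame's samples during merging, so raising kDenoise (or Dth) directly trades fine detail for smoothness in low-SNR areas like the New York scene.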
As for the comparison with the results from the paper: don't forget that the paper contains so many errors and mistakes that I can only assume the method as I implemented it is also the method Google used. I have no knowledge of what they actually implemented in their software, which is why I didn't try to compare my results to theirs. I wanted to understand the method, I got it working, and that's where my motivation ended ;-)

boogerlad (Author) commented

I've been tearing my hair out looking for it, but the supplement section can indeed be found here, with 6 additional pages of detail. Thanks for pointing me in the right direction! I have a Pocophone F1 with GCam and Open Camera installed, so it will be interesting to compare GCam against ImageStackAlignator fed with raws from Open Camera. I also plan to do comparisons with the HDR+ dataset. Hopefully I can tune the parameters to resolve more detail from the New York scene as well :)

The way I see it, your software brings state-of-the-art computational photography to any camera, unconstrained by the weak processing power of a mobile device or by short processing times. On a side note, I'd love to donate to the continued development of this and to further user-friendliness. Is it possible to work with Nvidia to get around the following?

> I heavily make use of the NPPi library coming with CUDA, and there the maximum image size seems to be restricted to 2 Gigabytes.


kunzmi commented Mar 8, 2021

Now that I think of it, the 2GB limit should actually come from .NET and not from NPPi: .NET arrays are definitely limited to 2GB in size in .NET Framework 4.6. Not sure how .NET Core handles this.
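A minimal repro of that limit on .NET Framework (the array size here is just an arbitrary value above 2GB):

```csharp
using System;

class LargeArrayTest
{
    static void Main()
    {
        try
        {
            // On .NET Framework this throws OutOfMemoryException by default:
            // a single array object may not exceed 2GB in total size,
            // no matter how much RAM is available.
            var pixels = new float[600_000_000]; // ~2.4GB
            Console.WriteLine($"Allocated {pixels.LongLength:N0} floats");
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Hit the 2GB per-object limit");
        }
    }
}
```

Since .NET Framework 4.5 the limit can be lifted with `<gcAllowVeryLargeObjects enabled="true" />` in the app.config (each array dimension is then still capped at roughly 2^31 elements), and as far as I know .NET Core allows such arrays by default. Note that NPPi describes image sizes and pitches with int-based structs, so a single allocation beyond 2GB would need extra care on the NPP side as well.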
