
Branch newwavelet - add special effects "style" #3264

Open
Desmis opened this issue Apr 26, 2016 · 20 comments
Labels
type: enhancement

Comments

@Desmis
Collaborator

Desmis commented Apr 26, 2016

I just pushed a change to "newwavelet".

You can now merge 2 files to improve the effects (style) of an image, e.g. with an artistic painting such as "Starry Night", or another raw or TIFF file, which must be substantially the same size as the current image.

I don't use a "neural network", only wavelets...

This algorithm requires significant resources, due to the size of the images and because I open 2 wavelet decompositions, roughly:

  • memory × 2
  • time (output JPG/TIFF) ==> about 10 seconds on my computer with 2 files of about 4000×3000

I have updated Rawpedia in French :)
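For readers unfamiliar with the approach, the idea of merging two files through their wavelet decompositions can be sketched in miniature. The following is a hypothetical 1-D illustration (a one-level Haar transform on plain lists of numbers); all names and the blend rule are invented for the example, and this is not RawTherapee's actual code.

```python
# Hypothetical 1-D sketch of a wavelet "style" merge: decompose both
# signals with a one-level Haar transform, blend the detail
# coefficients, and reconstruct. Invented names, for illustration only.

def haar_decompose(signal):
    """One-level Haar transform: returns (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def style_merge(current, style, strength=0.5):
    """Keep the current signal's coarse structure (approximation)
    but mix in the style signal's detail coefficients."""
    a_cur, d_cur = haar_decompose(current)
    _, d_sty = haar_decompose(style)
    d_mix = [(1 - strength) * dc + strength * ds
             for dc, ds in zip(d_cur, d_sty)]
    return haar_reconstruct(a_cur, d_mix)

# strength=0 leaves the current signal unchanged;
# strength=1 takes all fine detail from the style signal.
print(style_merge([1.0, 3.0, 2.0, 4.0], [5.0, 1.0, 6.0, 2.0], strength=1.0))
```

A real implementation would work on 2-D image planes with several decomposition levels, which is also why the memory and time costs roughly double: two full pyramids are held at once.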

@Beep6581 added the "type: enhancement" and "Component-Tool" labels Apr 26, 2016
@sguyader

Hello Jacques,

Today I tried your new "style" effect, following your guide on Rawpedia. I think it's not too heavy on computer resources; it runs reasonably fast on my 2.5 GHz i5 / 4 GB RAM laptop.
Regarding the effect itself, I don't find the results as close to those of the neural-network-based programs as I had hoped.
In your implementation, I still see most of the first image quite clearly through the second one. I tried your "Starry Night" first image on 2 different second images, and I tried playing with the different settings; here are some results:

The original 2nd image:

Style from "Starry Night" with default settings:

Other settings:

Yet other settings:

But don't take it the wrong way, Jacques: it's very interesting, and hopefully it will develop into something even more interesting.

@Desmis
Collaborator Author

Desmis commented Apr 28, 2016

Hi Sébastien,

Thanks for this "test" :)
Obviously the algorithm I used is less discriminating than the neural one, but it is considerably less demanding in resources. I don't think one can process 7000×5000 images with neural networks in reasonable processing times... (we'll see!!)

The system (mine) is very sensitive to the contrast of the current image...

Could you post your raw and your various pp3 files?

Thanks again for your test, and for this beautiful iguana from beautiful Guadeloupe :)

Jacques

@Desmis
Collaborator Author

Desmis commented Apr 30, 2016

Sebastien
I added a little code, then tried it with your "jpg" (iguana, 1000×650)...

@Desmis
Collaborator Author

Desmis commented Apr 30, 2016

@sguyader

Jacques, thanks for your work. I have to clarify something though: my photograph is not of an iguana from Guadeloupe. The image was taken at the North Carolina Zoo, in the USA. I don't remember whether the animal is a lizard or an iguana, though... But this is off-topic; I'll try to send you the raw file if you still want it.

@Desmis
Collaborator Author

Desmis commented Apr 30, 2016

It looks like an iguana, but not an Antilles iguana; probably a green iguana... but that is not the problem.

Sure, try to send the raw and pp3, because this "iguana" has a special, multicolor skin :)

Thanks again

@sguyader

sguyader commented May 1, 2016

Hey Jacques,

Sorry for the delay; you can find the raw file and .pp3 here: http://filebin.net/6nkalvs9iv
By the way, I did not use the raw file for my tests, only a developed jpg. Let's see what you can do with the raw file ;)

Sebastien

PS. It looks like this reptile is actually a lizard, the Common Collared Lizard: https://en.wikipedia.org/wiki/Common_collared_lizard

@Desmis
Collaborator Author

Desmis commented May 2, 2016

Sebastien

With your DNG, I developed this assembly, which is arbitrary. It is different from the previous one made from the JPG (1000×600), for several reasons:

  • the wavelet reacts differently owing to the size
  • I deliberately activated multiple wavelet settings to show their impact (see pp3)

The result is a matter of taste, because in any case we are very far from reality in this type of arrangement.

Jacques

http://filebin.net/84hzn4jjn2

@Desmis
Collaborator Author

Desmis commented May 7, 2016

Hello all,
I pushed a new change to "newwavelet".
This change is "merge HDR++", a complex algorithm similar to "style", for merging 2 files: the "first" image overexposed, and the current one "underexposed".

I updated Rawpedia in French :)
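As a toy illustration of the general idea of merging an underexposed and an overexposed rendering, one can weight each pixel by how well exposed it is and then average. This is only a generic exposure-fusion sketch with invented names, not the branch's wavelet-level HDR++ algorithm.

```python
# Toy exposure-fusion sketch (invented names): weight each pixel of
# the two exposures by how close it is to mid-grey, then average.
from math import exp

def well_exposedness(v, mid=0.5, sigma=0.2):
    """Gaussian weight peaking at mid-grey; v is in [0, 1]."""
    return exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def merge_hdr(under, over):
    """Per-pixel weighted average of an under- and an overexposed frame."""
    merged = []
    for u, o in zip(under, over):
        wu, wo = well_exposedness(u), well_exposedness(o)
        merged.append((wu * u + wo * o) / (wu + wo))
    return merged

# A deep shadow pixel (0.05 vs 0.6) is pulled towards the overexposed
# frame; a bright pixel (0.4 vs 0.98) towards the underexposed one.
print(merge_hdr([0.05, 0.4], [0.6, 0.98]))
```

Doing this weighting per wavelet level rather than per pixel, as the branch apparently does, lets coarse tonality and fine detail be balanced independently.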

@Desmis
Collaborator Author

Desmis commented May 11, 2016

I just pushed a new change for HDR++.
Now you can align or misalign the pixels of the first and second image.
I also reviewed the tooltips :)

@sguyader

sguyader commented May 13, 2016

@Desmis Jacques, here's your merge HDR++ algorithm in action (not great, because I started from a single TIFF image to get one dark, one light): https://discuss.pixls.us/t/hdr-postprocessing/1403/15

@Desmis
Collaborator Author

Desmis commented May 13, 2016

@sguyader

Hi Sébastien,
I'm writing in French because it's simpler... If you can translate for the others, that would be nice. Thanks in advance... I'll owe you a Ti Punch!

Indeed, you can merge the same file, once with a dark-image setting (which brings out the highlights) and once with a light-image setting (which brings out the shadows). Of course that works, but it would be much better to do it from the 2 original raw files (overexposed and underexposed).
I didn't mention it in the documentation, but I can add it without any problem; what do you think?

With "suitable" raw images, the HDR++ merge works correctly, even very correctly... you can do pretty much whatever you want. Admittedly, it's complex (both in development and in use), but I think that, at least in some cases, it is superior to HDRmerge of "raw" files from the raw images. One detail, though: it requires a lot of resources.
Only one big (even very big) improvement remains: local control... I will try, with no guarantee, to do something...
Of course, it needs to be tested, improved, debugged, validated...

Thanks again,
Jacques

@sguyader

sguyader commented May 13, 2016

@Desmis Jacques, unfortunately I don't have exposure-bracketed raw files of my own. Yesterday I downloaded 2 jpegs developed from separate raw files, and indeed the result was already very good. What I did today (the link I posted above) was just to show the potential, what RT can already do in your experimental branch, from only a single image. I know it's not optimal at all, but I found it interesting.

I'll go on and try to translate what Jacques wrote in his message above:

Indeed, one can merge the same file, once with the image developed for highlight details, the second for shadow details. Of course this works, but it would be much better to merge from 2 separate raw files (one overexposed, one underexposed).
I didn't mention this in the documentation, but I can if needed, what do you think?

With relevant raw files, merge HDR++ works correctly, even more than correctly... one can do pretty much anything. Of course, it's complex (both in terms of development and use), but I think that at least in some cases it can prove superior to the HDRmerge of RAW files from raw images. However, it requires a lot of resources.
One big (even huge) improvement will be the local control... I will try, with no guarantee of success, to do something about it...
Of course, it's necessary to test, improve, debug, validate...

Thanks again,
Jacques

@Beep6581
Owner

Beep6581 commented Jun 3, 2016

Regarding the newwavelet branch, is there a need to keep this issue open, or do we use #2982, or do we close #2982 and use this?

@Desmis
Collaborator Author

Desmis commented Jun 4, 2016

@Beep6581

I think we can now use only one issue :)

In summary, newwavelet is a very big "branch":

  1. sharp mask and clarity
  2. retinex in wavelet
  3. merge files and watermark, for:
    a) merge for HDR++ - uses 2 wavelet decompositions to merge 2 images, over/under exposed
    b) merge for style - uses 2 wavelet decompositions to merge 2 images (one with style)
    c) merge HDR - simple process
    d) watermark... it works, even though we cannot build the "pattern watermark" with RT (that is another problem)!

Jacques

@Desmis
Collaborator Author

Desmis commented Jun 18, 2016

I have added some improvements to newwavelet:

  1. changed the default values for "style"
  2. added a "shape and texture method" to improve texture detection.
    It may look like "neural networks", but it is not. Automatic treatment with "feedback learning" leads to huge processing times (more than 30 seconds, or even 1 or 2 minutes...). So instead you can choose, with a slider, a manual "neural" composition of 3 or 4 levels for each level...

I updated Rawpedia (in French, as always)

:)

@heckflosse
Collaborator

@Desmis Jacques, may I make some changes to newwavelet or are you currently working on it?

Ingo

@Desmis
Collaborator Author

Desmis commented Jun 18, 2016

@heckflosse
Ingo: no problem, you can make some changes :)

Jacques

@heckflosse
Collaborator

@Desmis Jacques, where is that "feedback learning" option in the GUI? I didn't find it.

Ingo

@Desmis
Collaborator Author

Desmis commented Jun 19, 2016

@heckflosse
Ingo, there is no "feedback learning" option in the GUI.
When I began the "neural" work, I thought of representing "texture" with the function "void Textur", where "textur > mean - sigma" and "textur > mean + 2*sigma"...
Then I looked for the maximum of the texture measure by varying the coefficients of the neural network for each level.

With "crop" (dcrop.cc), and only with "neurthree", it already takes a lot of time (more than 5 or 10 seconds)... which suggests that with the full image and 4 or 5 neural levels (neurfour... neurfive...) the time would be about 1 or 2 minutes.
Therefore, I replaced the calculation with a slider, which is considerably faster but of course manual... and the result is not as good as I had hoped!
In addition, it should be possible to remove the call to "tempneur***" and replace it with a simple variable...

Jacques
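The mean/sigma texture test described above can be sketched as follows. This is a hypothetical reconstruction for illustration only; the function name is invented, and the two thresholds simply mirror the comment, not the actual "void Textur" code.

```python
# Hypothetical sketch of the mean/sigma texture test: flag detail
# coefficients whose magnitude exceeds mean - sigma ("weak" texture)
# or mean + 2*sigma ("strong" texture). Invented names, for
# illustration only.
from math import sqrt

def texture_bands(coeffs):
    """Return (weak, strong) boolean masks over the coefficients."""
    mags = [abs(c) for c in coeffs]
    mean = sum(mags) / len(mags)
    sigma = sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    weak = [m > mean - sigma for m in mags]
    strong = [m > mean + 2 * sigma for m in mags]
    return weak, strong

# A single large coefficient among small ones is flagged as strong texture.
weak, strong = texture_bands([1.0] * 8 + [20.0])
print(strong)
```

Because the automatic search over such thresholds at every wavelet level was too slow, the branch exposes a slider instead, as Jacques explains above.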
