pixelBuffer normalization function? #6
Why do you need to do this? It's better to let Core ML handle this for you.
New to this arena, so excuse my lack of understanding. If my model was trained on normalized images, must input images be similarly normalized when used via Core ML? What is the best/easiest way to normalize a UIImage?
Brian
When you convert your model to Core ML you tell it how to normalize the image (by passing in the appropriate parameters). Then in your app you just use a regular CVPixelBuffer and Core ML takes care of the normalization. Even easier is using Vision, which also resizes the image if necessary.
Hi @hollance, Google just showed me this thread while searching for image normalization and Core ML. Right now I'm in the process of converting a PyTorch model through ONNX. The original model normalizes each channel individually by subtracting the mean and dividing by the standard deviation.
The bias is for subtracting, and you can provide a separate bias for each of the 3 channels. The scale is for dividing by the standard deviation. See also: http://machinethink.net/blog/help-core-ml-gives-wrong-output/
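The mapping from mean/std normalization to Core ML's scale-and-bias preprocessing can be sketched in a few lines. This assumes a single shared std (the image scaler takes only one scale value) and uses the common ImageNet means as stand-in constants:

```python
import numpy as np

# Core ML's image preprocessing computes: scale * pixel + bias_c (per channel).
# To reproduce (pixel/255 - mean_c) / std with a single shared std:
#   scale  = 1 / (255 * std)
#   bias_c = -mean_c / std
mean = np.array([0.485, 0.456, 0.406])  # common ImageNet means (assumption)
std = 0.226                             # a single shared std (assumption)

scale = 1.0 / (255.0 * std)
bias = -mean / std

# Fake CHW pixel data in 0..255, standing in for a CVPixelBuffer's contents.
pixels = np.random.randint(0, 256, size=(3, 4, 4)).astype(np.float64)

core_ml_style = scale * pixels + bias[:, None, None]
reference = (pixels / 255.0 - mean[:, None, None]) / std

# The two formulations agree exactly.
assert np.allclose(core_ml_style, reference)
```

The algebra is exact, so as long as the std really is shared across channels, no separate normalization step is needed in the app.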
OK, but the model I'm using has a standard deviation per channel: IMAGE_NET_MEAN = [0.485, 0.456, 0.406] and IMAGE_NET_STD = [0.229, 0.224, 0.225]. This means that a single scale value can't cover all three channels.
Wait. This makes no sense. |
Yeah it gets a bit trickier. But really I would just use 0.225 for all of them since they're so similar it probably doesn't matter.
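How much error does sharing a single std actually introduce? A quick check (using the common ImageNet per-channel constants as an assumption) puts a number on it:

```python
import numpy as np

# Common ImageNet per-channel statistics (assumption; the thread quotes the means).
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
shared_std = 0.225  # the single value suggested above

# Worst-case difference over the full pixel range [0, 1], per channel.
x = np.linspace(0.0, 1.0, 1001)[None, :]          # shape (1, 1001)
exact = (x - mean[:, None]) / std[:, None]        # per-channel std
approx = (x - mean[:, None]) / shared_std         # shared std
max_err = np.abs(exact - approx).max()
print(max_err)  # roughly 0.04 in normalized units, driven by the red channel
```

Whether an error of that size matters depends on the model, which is exactly the disagreement in the comments above.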
That is a dangerous assumption... |
You're going to lose more precision due to 16-bit floating-point precision issues than because of a difference of 0.001 or 0.004 in the standard deviation. So, I wouldn't lose any sleep over it. You could probably achieve what you want by adding a ScaleLayerParams as the first layer in the model, although I'm not 100% sure that it accepts a different scale factor per channel.
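One way to see why a scale layer after the preprocessing would work: let the built-in preprocessing compute `x/255 - mean_c` (scale = 1/255, per-channel biases), then have the extra layer multiply each channel by `1/std_c`. A NumPy sketch of the arithmetic (the per-channel-scale-layer idea comes from the comment above; the constants are the usual ImageNet ones, an assumption):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])  # assumption: ImageNet means
std = np.array([0.229, 0.224, 0.225])   # assumption: ImageNet stds

# Fake CHW pixel data in 0..255, standing in for the model's image input.
pixels = np.random.randint(0, 256, size=(3, 8, 8)).astype(np.float64)

# Step 1: Core ML image preprocessing with scale = 1/255 and bias_c = -mean_c.
preprocessed = pixels / 255.0 + (-mean)[:, None, None]

# Step 2: a first layer that multiplies channel c by 1/std_c
# (what a per-channel scale layer would compute).
scaled = preprocessed * (1.0 / std)[:, None, None]

# This matches the full per-channel normalization exactly.
reference = (pixels / 255.0 - mean[:, None, None]) / std[:, None, None]
assert np.allclose(scaled, reference)
```

With this factoring, the single-value preprocessing scale is enough; all the per-channel work moves into the bias and the scale layer.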
@hollance and folks reading this. Cheers
Nice, thanks for finding this! |
This was created a few days ago and added to master yesterday. Here you can find the flow
Matt,
Thanks much for the ML library. You always write such elegant code.
Let's assume I have the following, though (as the comment shows), I want to normalize all samples of the pixelBuffer. What is the best means to accomplish this pixel normalization?
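The code snippet referred to here didn't survive in this thread, but the computation being asked about (taking 8-bit pixel samples and producing mean/std-normalized floats) can be sketched like this. `normalize_pixels` is a hypothetical NumPy stand-in for operating on a CVPixelBuffer's contents, and the constants are the ImageNet ones discussed above, an assumption:

```python
import numpy as np

def normalize_pixels(rgb_uint8, mean, std):
    """Map an HxWx3 uint8 buffer to normalized float32, channels-first.

    Computes (pixel/255 - mean_c) / std_c per channel, the same
    normalization discussed in this thread.
    """
    floats = rgb_uint8.astype(np.float32) / 255.0  # HWC, values in 0..1
    normalized = (floats - mean) / std             # broadcasts over the channel axis
    return np.transpose(normalized, (2, 0, 1))     # CHW, as most models expect

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumption
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)   # assumption

image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
out = normalize_pixels(image, mean, std)
assert out.shape == (3, 4, 4)
```

As the replies above note, doing this by hand is usually unnecessary: passing the right scale/bias parameters at conversion time lets Core ML perform the same arithmetic on the raw pixel buffer.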