Hi! Thanks for publishing this work, it's a great reference.
I'm trying to integrate a couple of different systems, and I need the model encodings to match. So far, I haven't been able to make that work:
Given this Python:
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
image = preprocess(Image.open("image_1.png")).unsqueeze(0).float().to(device)
text = clip.tokenize(["a face", "a dog", "a cat"]).to(device)
with torch.no_grad():
image_features = model.encode_image(image)
print(image_features.tolist()[0])
I'm trying to get the same array of floats out using Clip.mm's - (NSArray<NSNumber*>*)test_uiimagetomat:(UIImage*)image function. Try as I might, they always differ, and I'm not sure where the difference comes from. As far as I can tell, the cvt methods perform the same steps as the image preprocess, followed by normalisation with the values from CLIP.
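For reference, here's what I'm assuming the preprocessing has to be on the iOS side - a sketch of what clip.load("ViT-B/32") returns as preprocess, as far as I can read it from the CLIP repo (the manual_preprocess name is just mine):

from torchvision import transforms

# CLIP's per-channel normalisation constants.
clip_mean = (0.48145466, 0.4578275, 0.40821073)
clip_std = (0.26862954, 0.26130258, 0.27577711)

# Resize the short side to 224 with bicubic interpolation, center-crop to
# 224x224, convert to RGB, scale to [0, 1], then normalise per channel.
manual_preprocess = transforms.Compose([
    transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(224),
    transforms.Lambda(lambda im: im.convert("RGB")),
    transforms.ToTensor(),
    transforms.Normalize(clip_mean, clip_std),
])

One thing I'm wondering about is the resize interpolation: the Python side uses bicubic, and if the cvt path resizes with a different filter (bilinear, for example) I'd expect small per-pixel differences that could carry through into the embedding, but I haven't confirmed that.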
Here's some of the initial values from the Python code above:

And from the Swift:
While debugging the iOS code I used the Quick Look preview to save the image from the UIImage, to make sure the same image is being used on both sides. In both cases I'm using the original ViT-B/32 CLIP image encoder. Strangely, the numbers above are kind of similar, but I'm not sure if that's coincidental.
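In case it helps narrow things down, this is the comparison I've been doing (continuing from the Python above; ios_features is a placeholder for the 512 floats that test_uiimagetomat: returns, so the exact values here are hypothetical):

import torch
from PIL import Image

# Dump a few of the normalised pixel values that go into the encoder, so they
# can be compared element-wise with what the iOS side feeds the model.
pre = preprocess(Image.open("image_1.png")).unsqueeze(0)  # [1, 3, 224, 224]
print(pre[0, :, 0, :5])

with torch.no_grad():
    features = model.encode_image(pre.to(device).float())

# Placeholder for the values from Clip.mm - paste them in or load from a file.
ios_features = torch.zeros(512)
cosine = torch.nn.functional.cosine_similarity(features.cpu().float(), ios_features.unsqueeze(0))
print(cosine.item())

If the pixel dumps already differ, the problem is in the preprocessing rather than in the model itself.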
Any advice?