
About the proposals feature cluster visualization #3

Closed
chengyu0910 opened this issue Mar 18, 2021 · 8 comments

Comments

@chengyu0910

Hi, I'd like to know what preprocessing (such as normalization) should be applied to the proposal features before visualizing them with t-SNE. Could you also share the code for visualizing the proposal feature clusters?

@Chauncy-Cai

Chauncy-Cai commented Mar 19, 2021

You can use the code below to visualize the features.

import random

import matplotlib.pyplot as plt
import numpy as np
import torch

def plot_embedding(data, label, title, show=None):
    # param data: (n, 2) array of embedded features
    # param label: class label of each sample
    # param title: title of the output figure
    # param show: (int) if there are too many proposals to draw, draw a random subset of this size
    # return: t-SNE figure

    if show is not None:
        idx = list(range(len(data)))
        random.shuffle(idx)
        data = data[idx][:show]
        label = torch.as_tensor(label)[idx][:show]

    x_min, x_max = np.min(data, 0), np.max(data, 0)
    data = (data - x_min) / (x_max - x_min)  # normalize each dimension to [0, 1]
    fig = plt.figure()

    # go through all the samples
    data = data.tolist()
    label = torch.as_tensor(label).squeeze().tolist()

    for i in range(len(data)):
        plt.text(data[i][0], data[i][1], ".", fontsize=18,
                 color=plt.cm.tab20(label[i] / 20))
    plt.title(title, fontsize=14)
    return fig

# weight: (n_proposals, 1024) features at the input of the classifier
# label: the labels of the proposals / ground truth
# we only select foreground proposals to visualize
# you can visualize the features of different classes by extracting them during the training or testing stage

from sklearn.manifold import TSNE

ts = TSNE(n_components=2, init='pca', random_state=0)
weight = ts.fit_transform(weight)
fig = plot_embedding(weight, label, 't-SNE feature child')
plt.show()

@Chauncy-Cai

(1) weight is the feature extracted at the input of the classifier (1024-dimensional). (I think it would be better named feature.)
(2) weight and label are torch.Tensor; if you use another type, remember to convert them.
(3) TSNE is imported from sklearn.manifold; here is the guide: https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html. Moreover, perplexity may need to be tuned to get a better result.
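To make the pipeline above concrete, here is a minimal self-contained sketch using synthetic data in place of real proposal features. The shapes, class count, and blob construction are made up for illustration; in the real setup, weight comes from the classifier input and label from the foreground proposal labels.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 50, 1024

# fake "proposal features": one Gaussian blob per class in 1024-D,
# standing in for the 1024-D classifier-input features
weight = np.concatenate(
    [rng.normal(loc=3.0 * c, scale=1.0, size=(per_class, dim))
     for c in range(n_classes)]
)
label = np.repeat(np.arange(n_classes), per_class)  # one class id per sample

# embed to 2-D exactly as in the snippet above; perplexity=30 is the default
ts = TSNE(n_components=2, init='pca', random_state=0, perplexity=30)
emb = ts.fit_transform(weight)
print(emb.shape)  # (150, 2) -- ready to pass to plot_embedding(emb, label, ...)
```

With real features, emb and label would then be passed to plot_embedding as shown above; well-separated classes should appear as distinct clusters.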

@chengyu0910


Is the perplexity the same for features learned with and without the CPE loss?

@Chauncy-Cai

Chauncy-Cai commented Mar 19, 2021

Yes.
We set perplexity=30, which is the default value recommended by the guide. You can just leave this parameter at its default.

@chengyu0910


Got it. Thanks a lot!

@MrCrightH

Hello, where should this code be added?

@hjfdsssdg

@chengyu0910
May I ask a question: I won't have ground-truth labels when testing the model, so how should I pass the label parameter when drawing the t-SNE plot?

@zzzjoey

zzzjoey commented Mar 4, 2023


You can use the labels predicted by the model, since the model's predictions are relatively confident at test time.
Alternatively, you can extract features and draw the t-SNE figure during training, where each sample's true label can be determined by IoU matching with the ground-truth bounding-box annotations.
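The IoU-matching step mentioned above can be sketched as follows. This is an illustrative assumption, not the repo's exact code: boxes are [x1, y1, x2, y2], the 0.5 threshold and the background label -1 are placeholders, and the helper name iou_match_labels is made up.

```python
import numpy as np

def iou_match_labels(proposals, gt_boxes, gt_labels, iou_thresh=0.5):
    # pairwise intersection rectangle between every proposal and every GT box
    x1 = np.maximum(proposals[:, None, 0], gt_boxes[None, :, 0])
    y1 = np.maximum(proposals[:, None, 1], gt_boxes[None, :, 1])
    x2 = np.minimum(proposals[:, None, 2], gt_boxes[None, :, 2])
    y2 = np.minimum(proposals[:, None, 3], gt_boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)

    area_p = (proposals[:, 2] - proposals[:, 0]) * (proposals[:, 3] - proposals[:, 1])
    area_g = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    iou = inter / (area_p[:, None] + area_g[None, :] - inter)

    best = iou.argmax(axis=1)                  # best-matching GT box per proposal
    labels = gt_labels[best].copy()
    labels[iou.max(axis=1) < iou_thresh] = -1  # below threshold -> background
    return labels

proposals = np.array([[0, 0, 10, 10], [50, 50, 60, 60], [0, 0, 2, 2]], dtype=float)
gt_boxes = np.array([[0, 0, 10, 10], [48, 48, 62, 62]], dtype=float)
gt_labels = np.array([3, 7])
print(iou_match_labels(proposals, gt_boxes, gt_labels))  # [ 3  7 -1]
```

Proposals labeled -1 (background) here would simply be excluded before visualization, since the thread only plots foreground proposals.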
