CGNN results question #63
Comments
Hello, the results seem good, so the accuracy should follow. Could you attach a sample of your predictions? Best,
I have attached the CSV file that comes out after training the model. Thanks for your help! Best,
Here is also the way I loaded the data; I added this in cdt/data/loader.py:
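The snippet itself didn't come through in this thread, but a loader along these lines would fit; note that `load_ce_gauss` and the file names are hypothetical illustrations, not part of cdt's API:

```python
import pandas as pd

def load_ce_gauss(data_csv, target_csv):
    """Load a cause-effect pair dataset and its ground-truth directions.

    Both paths are placeholder examples; adjust them to the files from
    the Harvard Dataverse download (e.g. a pairs CSV alongside
    'CE-Gauss_targets.csv').
    """
    data = pd.read_csv(data_csv)       # one variable pair per row
    targets = pd.read_csv(target_csv)  # 'Target' in {-1, 1}: causal direction
    return data, targets
```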
Hello,
Oh yes, I have the targets file, which I downloaded from https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/3757KX. Thanks for the reply. Best,
Thanks for getting back to me quickly. There seems to be an issue with your accuracy computation; I got an accuracy of 0.72 on this dataset:

```python
import pandas as pd
from sklearn.metrics import average_precision_score, accuracy_score

preds = pd.read_csv('res2_gauss.csv')
labels = pd.read_csv('CE-Gauss_targets.csv')

print(labels.shape, preds.shape)
# (300, 2) (300, 2)
print(labels.columns, preds.columns)
# Index(['SampleID', 'Target'], dtype='object') Index(['SampleID', 'Predictions'], dtype='object')

# Area under the precision-recall curve (AUPR)
average_precision_score(labels.Target, preds.Predictions)
# 0.8027886920926466

# Threshold the scores at zero to get hard direction predictions
preds.loc[preds.Predictions > 0, 'Predictions'] = 1
preds.loc[preds.Predictions < 0, 'Predictions'] = -1
accuracy_score(labels.Target, preds.Predictions)
# 0.7233333333333334
```
From my point of view, however, accuracy might not be the best metric for evaluating causal algorithms: the confidence of an algorithm has to be taken into account, which gives it the possibility of not committing to a prediction when it is uncertain (not answering is better than giving a wrong causal direction). Best regards,
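To make that concrete, here is a small sketch of an accuracy with abstention (my own illustration, not part of cdt): only pairs whose score magnitude exceeds a confidence threshold are committed to, and the fraction of answered pairs is reported alongside.

```python
import numpy as np

def accuracy_with_abstention(targets, scores, threshold=0.1):
    """Accuracy on committed predictions only.

    Pairs with |score| <= threshold are treated as 'no answer' and
    excluded from the accuracy; also returns the fraction answered.
    The threshold value is an arbitrary example.
    """
    targets = np.asarray(targets)
    scores = np.asarray(scores)
    decided = np.abs(scores) > threshold  # confident predictions only
    if not decided.any():
        return float("nan"), 0.0
    acc = np.mean(np.sign(scores[decided]) == targets[decided])
    return float(acc), float(decided.mean())

# Example: two confident correct calls, one confident wrong call,
# and one abstention (|0.02| below the threshold)
acc, answered = accuracy_with_abstention([1, -1, 1, -1], [0.9, -0.8, -0.5, 0.02])
# acc = 2/3 on the decided pairs, answered = 0.75
```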
Thanks a lot for your help, and sorry for the inconvenience!
No problem, glad I could help you!
Hi,
So I have tried to rerun the CGNN pairwise experiments.
And I can confirm that I get the same results for the Multi, Gauss, Net, and Tueb datasets in terms of AUPR (ensembling 12 different runs):
AUPR: 0.95 MULTI
AUPR: 0.80 GAUSS
AUPR: 0.90 NET
However, when I look at the accuracy, i.e. predicting the actual direction, I get:
0.43, 0.46, and 0.49, respectively.
I compute the accuracy by thresholding the score at zero, as above.
This method also gives me around 74% unweighted accuracy on the Tueb dataset.
So my question is whether this is expected, whether I should be computing the accuracy differently, or whether maybe the accuracy doesn't even matter?
Thanks for the clarification in advance.
Best
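On whether accuracy can lag AUPR: AUPR only measures the ranking of the scores, so a global sign or calibration problem can leave sign-thresholded accuracy near chance while AUPR stays high. A toy illustration with synthetic scores (not your actual results):

```python
import numpy as np
from sklearn.metrics import average_precision_score, accuracy_score

# Ground truth: first four pairs are X->Y (1), last four are Y->X (-1)
y_true = np.array([1, 1, 1, 1, -1, -1, -1, -1])

# The scores rank every X->Y pair above every Y->X pair (perfect ranking),
# but all of them are negative, so thresholding at zero predicts -1 everywhere.
scores = np.array([-0.1, -0.2, -0.3, -0.4, -0.6, -0.7, -0.8, -0.9])

print(average_precision_score(y_true, scores))  # 1.0: the ranking is perfect
print(accuracy_score(y_true, np.sign(scores)))  # 0.5: chance-level directions
```

So a high AUPR with ~0.5 accuracy suggests the scores rank the confident pairs well but are systematically miscalibrated around zero, rather than being uninformative.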