Chromatin Preprocessing #43

Open
lhendre opened this issue Jan 10, 2024 · 10 comments

lhendre commented Jan 10, 2024

I'm working with the chromatin portion of the project; however, I am running into issues with the preprocessing. Can you provide any more details on how you generated the initial data? I have followed Sei and DeepSEA, but I seem to be missing train_hg38_coords_targets.csv. There have been several versions of Sei and DeepSEA, so I'm currently going through the old ones, but when I follow the previous steps that file never appears.


cbirchsy commented Jan 16, 2024

Hi @lhendre, I believe the original DeepSEA data can be downloaded from this link, and you may also find this repo useful if you want to recreate it yourself.


lhendre commented Jan 18, 2024

Hi, I will try that shortly and get back to you!


lhendre commented Jan 18, 2024

So I'm going to try recreating the data, or using that data and renaming it. But to verify: the code is specifically looking for a file with the naming convention {self.data_path}/{split}_{self.ref_genome_version}_coords_targets.csv (link to the code below), and that file doesn't appear to exist or be created anywhere in the repo. That being said, I'll try the repo and rename its output to match that convention instead.

https://github.com/search?q=org%3AHazyResearch+coords_target&type=code
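
For reference, a minimal sketch of the renaming I have in mind. The data_path value, the hg38 genome version, and the source filenames are all my assumptions, not from the repo; adjust them to whatever the build actually produces.

import shutil

# Hypothetical sketch: map generated files onto the expected
# {data_path}/{split}_{ref_genome_version}_coords_targets.csv convention.
# data_path and ref_genome_version below are assumptions, not from the repo.
data_path = "data/chromatin_profile"
ref_genome_version = "hg38"

for split in ("train", "val", "test"):
    src = f"{split}_coords_targets.csv"  # filename as generated (assumed)
    dst = f"{data_path}/{split}_{ref_genome_version}_coords_targets.csv"
    shutil.copy(src, dst)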


lhendre commented Jan 22, 2024

Hi,
So I have run the above code to generate the data (specifically using the repo you suggested). The issue is still the same: I'm not getting a coords_targets.csv file. I have been experimenting with converting the existing tsv, npy, and mat files to csv to see whether that file is produced from one of them, but I haven't had much luck.

cbirchsy commented

Hi @lhendre, apologies for the delay. The coords_targets.csv files are just the debug_{train|valid|test}.tsv files with some small modifications (drop the sequence column, add a prefix to the target column names, remove duplicates, save as csv), which can be done with this snippet:

import pandas as pd

def create_coord_target_files(file, name):
    # Target column names come from the metadata file in the
    # build-deepsea-training-dataset repo.
    target_cols = pd.read_csv('data/deepsea_metadata.tsv', sep='\t')['File accession'].tolist()
    colnames = target_cols + ['Chr_No', 'Start', 'End']
    # Reading only these columns drops the sequence column.
    # Note: pass sep='\t' here too if your debug_*.tsv files are tab-separated.
    df = pd.read_csv(file, usecols=colnames, header=0)
    df.drop_duplicates(inplace=True)
    df.reset_index(drop=True, inplace=True)
    # Prefix the target columns with y_ so the loader can identify them.
    df.rename(columns={k: f'y_{k}' for k in target_cols}, inplace=True)
    df.to_csv(f'{name}_coords_targets.csv')

create_coord_target_files('debug_valid.tsv', 'val')
create_coord_target_files('debug_test.tsv', 'test')
create_coord_target_files('debug_train.tsv', 'train')


lhendre commented Jan 25, 2024

Hi @cbirchsy, that appears to answer my questions! I'm testing it now, but this fills in the missing gaps. If this works, would it make sense for me to open a PR to update the instructions, and potentially add a small script for create_coord_target_files?


lhendre commented Feb 18, 2024

Hi, this appears to get the preprocessing working, so do let me know if it would make sense for me to add it via a PR. We are now trying to replicate the paper's results on chromatin and are still running into some issues, so if there are any additional steps let me know; we are still digging in on our side.

jimmylihui commented

> Hi, this appears to get the preprocessing working, so do let me know if it would make sense for me to add it via a PR. We are now trying to replicate the paper's results on chromatin and are still running into some issues, so if there are any additional steps let me know; we are still digging in on our side.

I checked your code, and isn't the output file out/train.tsv instead of train.mat, since your coordinate file is read as a .csv?


lhendre commented May 20, 2024

@jimmylihui, are you referring to step 4?
Sorry for the delay; I ended up switching jobs. I'll go back and double-check this, but that step uses a third-party library to build the dataset, and the instructions are here. The specific thing to notice is that the save_debug_info flag is set to true.

If you dig into the build file, you'll see it then creates the tsv file that is used by the coordinate-reading code.

I will go back and double-check this, but I can also leave a note to make sure there is no confusion, if that is helpful.
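
As a quick sanity check (my own sketch, not something from the repo), you can confirm the debug TSVs produced with save_debug_info set to true exist and contain the coordinate columns that create_coord_target_files expects:

import pandas as pd

# Sanity-check sketch: verify the debug TSVs exist and expose the
# Chr_No/Start/End coordinate columns. The paths and the tab separator
# are assumptions; adjust them to your build output.
for split in ("train", "valid", "test"):
    path = f"debug_{split}.tsv"
    header = pd.read_csv(path, sep="\t", nrows=0)  # read column names only
    missing = {"Chr_No", "Start", "End"} - set(header.columns)
    print(path, "OK" if not missing else f"missing columns: {missing}")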


rubensolozabal commented Oct 7, 2024

Hello, I can confirm that @cbirchsy's script works:

(the create_coord_target_files snippet above)

However, I tried to reproduce the results on the 7M training for 50 epochs and could only get an AUROC of 0.8 with:

d_model: 256
n_layer: 8
l_max: 1024

P.S. I am training from scratch, as suggested on GitHub (I could not find an 8-layer model pretrained on 1k).
Thank you.


Update: I solved my issue and can now correctly reproduce the results on the chromatin prediction task.

At inference, one needs to comment out the following in chromatin_profile.yaml:

# for loading backbone and not head, requires both of these flags below
# pretrained_model_state_hook:
#   name: load_backbone
#   freeze_backbone: false # seems to work much better if false (i.e. finetune entire model)
