
adapt MULTILAYER according to feedback. #8

Open
multinetlab opened this issue Dec 10, 2021 · 2 comments · May be fixed by #4
@multinetlab (Collaborator)
No description provided.

@multinetlab changed the title from "adapt multilayer according to Marike's feedback. Resp: @lcbreedt, @fnobregasantos. Project: Multilayer" to "adapt multilayer according to Marike's feedback." on Dec 10, 2021
@multinetlab changed the title from "adapt multilayer according to Marike's feedback." to "adapt MULTILAYER according to Marike's feedback." on Dec 10, 2021
@multinetlab linked a pull request on Dec 23, 2021 that will close this issue
@multinetlab (Collaborator, Author)

Feedback:
A few tips to implement in the 'Multilayer_Main_code.py' readme to make it easier for newcomers who want to use it:

  1. Creating layer tags: you could mention that the first layer is always indexed 0, not 1 (unlike other programs such as Matlab).

  2. Loading the matrices: consider adding an example (like mine: supra_mst = scipy.io.loadmat('/mnt/resource/m.vanlingen/m2b/matlab_output_final_okt2021/supra_mst_full.mat')). That makes it clear that you must include the full path to your input file, even though this is also stated in the 'settings' paragraph above.

  3. If you want to calculate a measure within a specific brain network (as I did for the FPN), it should be clear that you need to subtract 1 from the region numbers, since Python indexing starts at 0. Otherwise you are looking at completely different areas. Maybe include a short section on this in the readme or in the script itself. The script already says "sub_net = list(range(0, N))", so it does start at 0, but an extra reminder to subtract 1 would be helpful (and crucial) for the analysis!

  4. When you run the final command, the readme should indicate that you must specify the path of the output file. I didn't at first, and the output was apparently saved in a different folder than the one I opened Python from, so I kept rerunning the command because I couldn't find the output file anywhere. This is what I ran in the end, and it worked: function_output(group_eigenvector_centrality, supra_mst, '/mnt/resource/m.vanlingen/m2b/matlab_output_final_okt2021/whole_multilayer_EC_mean', 'EC', list(range(6)))

  5. A description of the output CSV file is missing (as far as I could see). It is just a CSV file with many, many numbers, and you cannot tell whether the order is per subject, per region, per node, etc. Fernando and I figured out that it is: subject 1 all FPN nodes, subject 2 all FPN nodes, and so on.
    An idea would be to include extra columns in the CSV file stating the subject number and/or FPN region, so there can be no mistakes in interpreting the data.

  6. I now have the EC values of each FPN node for all subjects (12 regions in the FPN --> 12 values per subject). In the end I want the average EC of the whole FPN per person. If there is a way to calculate this in Python, that would be nice. For now I calculated it using part of Lucas's Matlab script (which also works fine, but importing the Python output back into Matlab is an extra step).
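Items 5 and 6 above could be handled in Python directly, for instance with pandas. A minimal sketch with made-up values (the column names, node count, and file name are illustrative, not taken from the script):

```python
import numpy as np
import pandas as pd

# Hypothetical output: EC values for 12 FPN nodes across 3 subjects, flattened
# in the order the script writes them (subject 1 all nodes, subject 2 all nodes, ...).
n_subjects, n_nodes = 3, 12
ec_values = np.random.default_rng(0).random(n_subjects * n_nodes)

# Attach explicit subject/node labels so the CSV is self-describing (item 5).
df = pd.DataFrame({
    "subject": np.repeat(np.arange(1, n_subjects + 1), n_nodes),
    "fpn_node": np.tile(np.arange(n_nodes), n_subjects),  # 0-based, as in the script
    "ec": ec_values,
})
df.to_csv("whole_multilayer_EC_labeled.csv", index=False)

# Average EC over the whole FPN per subject (item 6), no Matlab round-trip needed.
mean_ec = df.groupby("subject")["ec"].mean()
print(mean_ec)
```

This keeps the per-node values and the per-subject averages in one place, with no ambiguity about row order.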

@multinetlab changed the title from "adapt MULTILAYER according to Marike's feedback." to "adapt MULTILAYER according to feedback." on Jan 21, 2022

@multinetlab (Collaborator, Author) commented Jan 21, 2022

New feedback:

  1. Internal: I recreated the Multilayer environment and think the installation instructions should make it clearer that multinetx must be installed from GitHub, not from pip. Moreover, the package versions in the script/readme should be updated; they are not the ones we are using on the server. I was able to install and run with this requirements.txt, but please double-check. Once checked, we can make the requirements available here in the repo.
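A sketch of the installation step described above (the GitHub URL assumes the upstream nkoub/multinetx repository; adjust it if the lab uses a fork):

```shell
# multinetx should be installed from GitHub, not from pip's index.
pip install git+https://github.com/nkoub/multinetx.git

# The remaining packages can then come from the pinned requirements file.
pip install -r requirements.txt
```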

requirements.txt

It's possible that the list can be reduced to the main packages (multinetx, networkx, scipy, numpy, cython, scikit-learn, matplotlib, pyreadstat, pandas, spyder-kernels).
By the way, the list in the readme could be simplified to just the package names, without the modules you are using from each package.

Finally, the script imports some packages that are never used; it would be good to double-check and remove them if they are not necessary.

[image: editor screenshot; the unused imports are the ones with an alert next to them]
