
Dataset list refactor #724

Merged
merged 44 commits into from Jan 23, 2020

Conversation

adelavega
Copy link
Collaborator

Closes #689

Let's also take care of #707 and #692 here.

For now I just made these small backend changes:

  • Mimetypes for datasets are now sorted by id, so that the oldest stimulus for each dataset goes first. Let's display the mimetype for only that one, since it is the original stimulus.
  • TR is now shown for tasks under the datasets route.
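A minimal sketch of the first change, picking the original stimulus by lowest id (the dict field names here are illustrative, not the actual neuroscout schema, which sorts via the ORM):

```python
def original_mimetype(stimuli):
    """Return the mimetype of the oldest stimulus (lowest id),
    i.e. the original stimulus for a dataset.

    `stimuli` is a list of dicts with 'id' and 'mimetype' keys
    (hypothetical shape for illustration only).
    """
    if not stimuli:
        return None
    oldest = min(stimuli, key=lambda s: s["id"])
    return oldest["mimetype"]
```

In the actual backend this would more likely be an `order_by` on the relationship, but the selection logic is the same.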

@adelavega
Copy link
Collaborator Author

adelavega commented Jan 16, 2020

We could also add a manually curated long_description for each dataset, or task, or both.

@codecov-io
Copy link

codecov-io commented Jan 16, 2020

Codecov Report

Merging #724 into master will decrease coverage by 0.02%.
The diff coverage is 83.33%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #724      +/-   ##
==========================================
- Coverage    85.8%   85.77%   -0.03%     
==========================================
  Files          66       66              
  Lines        2965     2995      +30     
==========================================
+ Hits         2544     2569      +25     
- Misses        421      426       +5
Impacted Files Coverage Δ
neuroscout/populate/json.py 96% <ø> (ø) ⬆️
neuroscout/tasks/upload.py 54.92% <0%> (-2.43%) ⬇️
neuroscout/schemas/task.py 100% <100%> (ø) ⬆️
neuroscout/schemas/dataset.py 100% <100%> (ø) ⬆️
neuroscout/populate/ingest.py 92.64% <100%> (+0.05%) ⬆️
neuroscout/models/task.py 95.65% <100%> (+1.53%) ⬆️
neuroscout/models/dataset.py 92.1% <88.23%> (-3.55%) ⬇️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f2618a8...748714d. Read the comment docs.

@adelavega
Copy link
Collaborator Author

The author list was sometimes one long string. I manually updated these entries, as these are technically BIDS errors that need to be corrected prior to ingestion. In the future, we should validate this field prior to ingestion.

@rwblair
Copy link
Collaborator

rwblair commented Jan 17, 2020

  • Length of stimuli for display.
  • Runs per subject on task level
  • Average length of run per task
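The last two stats above can be derived from run records; a rough sketch, assuming runs carry a subject label and a duration (field names are hypothetical, not the actual neuroscout models):

```python
from collections import defaultdict

def task_summary(runs):
    """Compute runs-per-subject and mean run duration for a task.

    `runs` is a list of dicts with 'subject' and 'duration' keys
    (illustrative shape; the real Task model computes these via
    the database).
    """
    if not runs:
        return {"n_runs_subject": 0, "avg_run_duration": None}
    by_subject = defaultdict(int)
    for r in runs:
        by_subject[r["subject"]] += 1
    # Average number of runs per subject, rounded to a whole run
    n_runs_subject = round(sum(by_subject.values()) / len(by_subject))
    # Mean duration across all runs
    avg_run_duration = sum(r["duration"] for r in runs) / len(runs)
    return {"n_runs_subject": n_runs_subject,
            "avg_run_duration": avg_run_duration}
```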

…uns_subject and avg_run_duration to Task model/schema
@adelavega
Copy link
Collaborator Author

@rwblair fyi the server is live with the changes in this branch

@adelavega
Copy link
Collaborator Author

The API, I mean.

… issue with number of selected runs when there are multiple tasks per dataset
@rwblair
Copy link
Collaborator

rwblair commented Jan 21, 2020

I didn't end up putting any of the task information in the dataset expansion; I just extended the task table to include it.

There's still no real formatting in the expansion; it will only show mean age, percent female, and known issues, if present.

…ne task, since the task and runs won't be auto selected then. fixes #707
@adelavega
Copy link
Collaborator Author

adelavega commented Jan 21, 2020

Now that I've looked at it, I actually think the Task information might be good where it is.

My main gripes now will be aesthetic ones, but I may play around with it myself and see if I can think of other ideas. Mostly I just want to keep the display as clean as possible, while still providing additional information.

@adelavega
Copy link
Collaborator Author

  • For "Average Run Length", let's just display "s" instead of "seconds".
  • Remove the (N) after subject.

@adelavega
Copy link
Collaborator Author

I think I'm going to change description to summary and shorten it, then add a longer description for each dataset. This will include known_issues, so I will get rid of that field. The description will be displayed under the + button, and the summary where the description is now.

I think for missing demographics we should put n/a, so that it doesn't look too empty. We could potentially put these in a box that says demographics.

Let's also put a complete author listing under the + button (or maybe what we were doing before, which showed ~5 others with ... in between).
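The abbreviated author display could look something like this (a hypothetical helper for illustration; the real frontend logic may differ):

```python
def abbreviate_authors(authors, head=2, tail=1):
    """Join authors, showing the first `head` and last `tail`
    with '...' in between when the list is long.
    """
    if len(authors) <= head + tail:
        return ", ".join(authors)
    return ", ".join(authors[:head]) + ", ... " + ", ".join(authors[-tail:])
```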

Let's preface the external link with something like References and Links:.

Curation-wise, I'll do the following:

  • Put all author lists in the same format.
  • Change the descriptions/summaries.
  • Add long descriptions for all.
  • Add missing demographics for HBN, other tasks...

Other ideas for additional fields (as in OpenNeuro, for example):

  • Date added (not sure if I even have that)
  • Dataset DOI
  • Funding

@adelavega
Copy link
Collaborator Author

Snuck in a patch for #717

@adelavega
Copy link
Collaborator Author

Okay @rwblair I think the fixed width looks good!

@rbroc will you take a look at the final descriptions and propose changes as you see fit?

@adelavega
Copy link
Collaborator Author

@rwblair one minor thing here

Looks like when I unselect all runs, then select all again (in SherlockMerlin, with a task selected), the following happens:

[screenshot of the run-selection issue]

@adelavega
Copy link
Collaborator Author

Otherwise, everything else LGTM.

@rbroc
Copy link
Collaborator

rbroc commented Jan 23, 2020

> Okay @rwblair I think the fixed width looks good!
>
> @rbroc will you take a look at the final descriptions and propose changes as you see fit?

Looks great!
Really minor things if one wants to be super picky:

  • For HBN, maybe I wouldn't say that only a "small" subset of participants is available (we have 142 subjects, after all).
  • For schematic narratives, do we also have the data from the audio-only runs? If not, I would make that clear in the short description. If yes, I would fix "description" in the task tab, which right now says "audiovisual watching".
  • Typo in Forrest: it should be "movie being presented in German", I guess.

Everything else looking awesome!

@adelavega
Copy link
Collaborator Author

@rbroc I think in the Schematic Narratives, the audio-only clips are played during the video stimulation within a run, so we're good.

@adelavega adelavega merged commit 48f66d5 into master Jan 23, 2020
@adelavega adelavega deleted the dataset_list_rf branch January 23, 2020 16:26