Merged

Dev #114

File renamed without changes.
2,876 changes: 1,438 additions & 1,438 deletions jupyter_notebook_config.py → .binder/jupyter_notebook_config.py

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions .binder/postBuild
@@ -0,0 +1,3 @@
#!/bin/bash
mkdir ~/.jupyter
cp .binder/jupyter_notebook_config.py ~/.jupyter/
File renamed without changes.
8 changes: 4 additions & 4 deletions docs/basics/download_nwb.ipynb
@@ -86,7 +86,7 @@
"metadata": {},
"source": [
"### Downloading Just One File\n",
"Set `filepath` to the path of the file you want to download within the dandiset. You can get this by navigating to the file you want to download on the DANDI Archive website and pressing on the `i` icon. There, you can copy the filepath from the field labeled `path`. Don't include a leading `/`."
"Set `dandi_filepath` to the path of the file you want to download within the dandiset. You can get this by navigating to the file you want to download on the DANDI Archive website and pressing on the `i` icon. There, you can copy the filepath from the field labeled `path`. Don't include a leading `/`."
]
},
{
@@ -96,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"filepath = \"sub-699733573/sub-699733573_ses-715093703.nwb\""
"dandi_filepath = \"sub-699733573/sub-699733573_ses-715093703.nwb\""
]
},
{
@@ -114,8 +114,8 @@
}
],
"source": [
"filename = filepath.split(\"/\")[-1]\n",
"file = my_dandiset.get_asset_by_path(filepath)\n",
"filename = dandi_filepath.split(\"/\")[-1]\n",
"file = my_dandiset.get_asset_by_path(dandi_filepath)\n",
"# this may take awhile, especially if the file to download is large\n",
"file.download(f\"{download_loc}/{filename}\")\n",
"\n",
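Taken together, the renamed variable threads through the download flow roughly as follows; this is a minimal sketch assuming the same dandiset (`000021`) and session file used in the cells above:

    from dandi import dandiapi

    dandiset_id = "000021"
    dandi_filepath = "sub-699733573/sub-699733573_ses-715093703.nwb"
    download_loc = "."

    # resolve the asset within the dandiset by its path
    client = dandiapi.DandiAPIClient()
    my_dandiset = client.get_dandiset(dandiset_id)
    file = my_dandiset.get_asset_by_path(dandi_filepath)

    # download to a local file named after the asset; this may take a while for large files
    filename = dandi_filepath.split("/")[-1]
    file.download(f"{download_loc}/{filename}")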
140 changes: 50 additions & 90 deletions docs/basics/read_nwb.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -6,7 +6,7 @@
"metadata": {},
"source": [
"# Reading an NWB File\n",
"After downloading an NWB file, you *may* want to view the data inside. You can do basic reading of the file with [PyNWB](https://github.com/NeurodataWithoutBorders/pynwb). This is a package designed to utilize, modify, and process NWB files. The basic read functionality of PyNWB is shown below. This notebook is intended to explore the NWB file which was specified for download in *Downloading an NWB file*. If you choose a different file, make sure it's already downloaded!"
"After downloading an NWB file, you *may* want to view the data inside. You can do basic reading of the file with [PyNWB](https://github.com/NeurodataWithoutBorders/pynwb). This is a package designed to utilize, modify, and process NWB files. The basic read functionality of PyNWB is shown below."
]
},
{
@@ -19,120 +19,80 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "d76f08a6",
"metadata": {},
"outputs": [],
"source": [
"from dandi import dandiapi\n",
"from pynwb import NWBHDF5IO"
]
},
{
"cell_type": "markdown",
"id": "0b6b0571",
"id": "4687824b",
"metadata": {},
"source": [
"### Reading an NWB File\n",
"You can read in a PyNWB file with `NWBHDF5IO` to retrieve an io object. You can use the `.read` method to actually read it in. From there, you can see the raw data of the NWB file and can print the fields you are interested in. "
"### Downloading an NWB File\n",
"To read an NWB File, it must first be downloaded. `dandiset_id` and `filepath` may be changed to select a different file off of DANDI. If the file of interest already downloaded, you don't need to run the download cell again. When trying to download an embargoed file, refer to the code from the [Downloading an NWB File](./download_nwb.ipynb) notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7fbd4d6c",
"metadata": {},
"outputs": [],
"source": [
"dandiset_id = \"000021\"\n",
"dandi_filepath = \"sub-699733573/sub-699733573_ses-715093703.nwb\"\n",
"download_loc = \".\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9da13c50",
"metadata": {},
"outputs": [],
"source": [
"filename = dandi_filepath.split(\"/\")[-1]\n",
"filepath = f\"{download_loc}/{filename}\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "400c411d",
"execution_count": null,
"id": "da61049a",
"metadata": {},
"outputs": [],
"source": [
"nwb_filepath = \"./sub-699733573_ses-715093703.nwb\""
"client = dandiapi.DandiAPIClient()\n",
"my_dandiset = client.get_dandiset(dandiset_id)\n",
"file = my_dandiset.get_asset_by_path(dandi_filepath)\n",
"# this may take awhile, especially if the file to download is large\n",
"file.download(filepath)\n",
"\n",
"print(f\"Downloaded file to {filepath}\")"
]
},
{
"cell_type": "markdown",
"id": "0b6b0571",
"metadata": {},
"source": [
"### Reading an NWB File\n",
"You can read in a PyNWB file with `NWBHDF5IO` to retrieve an io object. You can use the `.read` method to actually read it in. From there, you can see the raw data of the NWB file and can print the fields you are interested in. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "7628e758",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\carter.peene\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\hdmf\\spec\\namespace.py:531: UserWarning: Ignoring cached namespace 'hdmf-common' version 1.1.3 because version 1.5.1 is already loaded.\n",
" warn(\"Ignoring cached namespace '%s' version %s because version %s is already loaded.\"\n",
"C:\\Users\\carter.peene\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\hdmf\\spec\\namespace.py:531: UserWarning: Ignoring cached namespace 'core' version 2.2.2 because version 2.5.0 is already loaded.\n",
" warn(\"Ignoring cached namespace '%s' version %s because version %s is already loaded.\"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"root pynwb.file.NWBFile at 0x2228950120912\n",
"Fields:\n",
" acquisition: {\n",
" raw_running_wheel_rotation <class 'pynwb.base.TimeSeries'>,\n",
" running_wheel_signal_voltage <class 'pynwb.base.TimeSeries'>,\n",
" running_wheel_supply_voltage <class 'pynwb.base.TimeSeries'>\n",
" }\n",
" devices: {\n",
" probeA <class 'abc.EcephysProbe'>,\n",
" probeB <class 'abc.EcephysProbe'>,\n",
" probeC <class 'abc.EcephysProbe'>,\n",
" probeD <class 'abc.EcephysProbe'>,\n",
" probeE <class 'abc.EcephysProbe'>,\n",
" probeF <class 'abc.EcephysProbe'>\n",
" }\n",
" electrode_groups: {\n",
" probeA <class 'abc.EcephysElectrodeGroup'>,\n",
" probeB <class 'abc.EcephysElectrodeGroup'>,\n",
" probeC <class 'abc.EcephysElectrodeGroup'>,\n",
" probeD <class 'abc.EcephysElectrodeGroup'>,\n",
" probeE <class 'abc.EcephysElectrodeGroup'>,\n",
" probeF <class 'abc.EcephysElectrodeGroup'>\n",
" }\n",
" electrodes: electrodes <class 'hdmf.common.table.DynamicTable'>\n",
" file_create_date: [datetime.datetime(2020, 5, 26, 0, 53, 26, 986608, tzinfo=tzoffset(None, -25200))]\n",
" identifier: 715093703\n",
" institution: Allen Institute for Brain Science\n",
" intervals: {\n",
" drifting_gratings_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" flashes_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" gabors_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" invalid_times <class 'pynwb.epoch.TimeIntervals'>,\n",
" natural_movie_one_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" natural_movie_three_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" natural_scenes_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" spontaneous_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" static_gratings_presentations <class 'pynwb.epoch.TimeIntervals'>\n",
" }\n",
" invalid_times: invalid_times <class 'pynwb.epoch.TimeIntervals'>\n",
" processing: {\n",
" eye_tracking_rig_metadata <class 'pynwb.base.ProcessingModule'>,\n",
" optotagging <class 'pynwb.base.ProcessingModule'>,\n",
" running <class 'pynwb.base.ProcessingModule'>,\n",
" stimulus <class 'pynwb.base.ProcessingModule'>\n",
" }\n",
" session_description: Data and metadata for an Ecephys session\n",
" session_id: 715093703\n",
" session_start_time: 2019-01-19 00:54:18-08:00\n",
" stimulus_notes: brain_observatory_1.1\n",
" subject: subject abc.EcephysSpecimen at 0x2228949633056\n",
"Fields:\n",
" age: P118D\n",
" age_in_days: 118.0\n",
" genotype: Sst-IRES-Cre/wt;Ai32(RCL-ChR2(H134R)_EYFP)/wt\n",
" sex: M\n",
" species: Mus musculus\n",
" specimen_name: Sst-IRES-Cre;Ai32-386129\n",
" subject_id: 699733573\n",
"\n",
" timestamps_reference_time: 2019-01-19 00:54:18-08:00\n",
" units: units <class 'pynwb.misc.Units'>\n",
"\n"
]
}
],
"outputs": [],
"source": [
"io = NWBHDF5IO(nwb_filepath, mode=\"r\", load_namespaces=True)\n",
"io = NWBHDF5IO(f\"{download_loc}/{filename}\", mode=\"r\", load_namespaces=True)\n",
"nwb = io.read()\n",
"\n",
"print(nwb)"
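Once the file is on disk, pulling individual fields off the returned `nwb` object follows directly from the printed structure; a minimal sketch, assuming the session file above has already been downloaded to the working directory:

    from pynwb import NWBHDF5IO

    # open the downloaded file read-only; load_namespaces picks up extension types cached in the file
    io = NWBHDF5IO("./sub-699733573_ses-715093703.nwb", mode="r", load_namespaces=True)
    nwb = io.read()

    # a few of the fields visible in the printed structure
    print(nwb.identifier)
    print(nwb.session_start_time)
    print(list(nwb.acquisition.keys()))

    io.close()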
4 changes: 2 additions & 2 deletions docs/basics/stream_nwb.ipynb
@@ -54,7 +54,7 @@
"outputs": [],
"source": [
"dandiset_id = \"000021\"\n",
"filepath = \"sub-699733573/sub-699733573_ses-715093703.nwb\"\n",
"dandi_filepath = \"sub-699733573/sub-699733573_ses-715093703.nwb\"\n",
"authenticate = False\n",
"dandi_api_key = \"\""
]
@@ -105,7 +105,7 @@
}
],
"source": [
"file = my_dandiset.get_asset_by_path(filepath)\n",
"file = my_dandiset.get_asset_by_path(dandi_filepath)\n",
"base_url = file.client.session.head(file.base_download_url)\n",
"file_url = base_url.headers['Location']\n",
"\n",
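For streaming, the renamed variable is only used to resolve the asset to a direct URL; the rest of the streaming setup falls outside the rendered hunk. A minimal sketch of the URL resolution shown above, assuming no authentication is needed:

    from dandi import dandiapi

    dandiset_id = "000021"
    dandi_filepath = "sub-699733573/sub-699733573_ses-715093703.nwb"

    client = dandiapi.DandiAPIClient()
    my_dandiset = client.get_dandiset(dandiset_id)

    # follow the asset's download URL redirect to get a direct URL that can be streamed
    file = my_dandiset.get_asset_by_path(dandi_filepath)
    base_url = file.client.session.head(file.base_download_url)
    file_url = base_url.headers["Location"]
    print(file_url)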