Merged
60 commits
ca0ad54
Add averaging of event windows
rcpeene Nov 3, 2022
88361e0
Cleaned up/Organized lfp notebook. Still plotting to do
rcpeene Nov 3, 2022
bbe114c
Added completed LFP visualization book
rcpeene Nov 7, 2022
87e25a7
Capitalized subsection titles
rcpeene Nov 7, 2022
54d5e32
Merge pull request #58 from AllenInstitute/visualize_lfp
rcpeene Nov 7, 2022
f857e15
Merge branch 'main' of https://github.com/AllenInstitute/openscope_da…
rcpeene Nov 7, 2022
414b09a
Added beginnings of 2p raw notebook
rcpeene Nov 7, 2022
116e125
Merge branch 'main' of https://github.com/AllenInstitute/openscope_da…
rcpeene Nov 7, 2022
aa815dd
Added basic visualization
rcpeene Nov 8, 2022
1f98833
Reorganized script
rcpeene Nov 8, 2022
1f705a4
Erased output and corrected filenames
rcpeene Nov 8, 2022
256967d
Merge branch 'dev' of https://github.com/AllenInstitute/openscope_dat…
rcpeene Nov 8, 2022
6050fe4
Incremental changes, extracting ophys data from NWB file
rcpeene Nov 8, 2022
3a55666
Merge branch 'main' of https://github.com/AllenInstitute/openscope_da…
rcpeene Nov 8, 2022
ea034bf
Ensure consistent formatting and fix typos
rcpeene Nov 8, 2022
eeba739
Added download nwb cells to visualization notebooks. Formatting and o…
rcpeene Nov 9, 2022
18a716a
Added ability to set Hz of interpolated lfp data
rcpeene Nov 9, 2022
8482ddc
Merge branch 'main' of https://github.com/AllenInstitute/openscope_da…
rcpeene Nov 9, 2022
725e7e3
Changed filename
rcpeene Nov 10, 2022
b91ab90
Added download cell with example file. Included all output saved
rcpeene Nov 11, 2022
83b6eee
Added printing of ROI mask
rcpeene Nov 11, 2022
69237fd
Overlaid ROI masks onto movie
rcpeene Nov 11, 2022
dcf8b56
Added download cell and included output in neuropixel probe visualiza…
rcpeene Nov 11, 2022
9bdd3f0
Added fluorescence trace
rcpeene Nov 15, 2022
153efd5
adjust stat description for accuracy
jeromelecoq Nov 15, 2022
607e397
Update visualize_neuropixel_probes.ipynb
rcpeene Nov 15, 2022
4984992
Labeled axes, minor formatting and markdown changes
rcpeene Nov 15, 2022
9a5c7e4
Removed output and set id
rcpeene Nov 15, 2022
9c9a9aa
Fixed typos with filename variables
rcpeene Nov 15, 2022
91c7d8b
Added color to lfp track stack, fixed time window units, added axis l…
rcpeene Nov 16, 2022
303be7b
Final changes to LFP visualization
rcpeene Nov 16, 2022
6487d4a
Completed 2p raw visualization without output or demo files
rcpeene Nov 16, 2022
84dfff0
Fixed bugs with formatting and plotting
rcpeene Nov 16, 2022
66c7f51
Completed 2p raw visualization with output
rcpeene Nov 16, 2022
e86e06d
Update brainrender version with embedded window bugfix
rcpeene Nov 17, 2022
5099dd0
Improved colorbar label
rcpeene Nov 17, 2022
0250f14
Merge branch 'dev' of https://github.com/AllenInstitute/openscope_dat…
rcpeene Nov 17, 2022
db375c3
Fixed filepath
rcpeene Nov 18, 2022
7b02275
Removed 'nwb-cache' from file streaming
rcpeene Nov 18, 2022
615f065
Moved 'filepath' definition in notebooks for convenience
rcpeene Nov 18, 2022
4d63ff3
Merge branch 'dev' into visualize_2p_raw
rcpeene Nov 18, 2022
9df5f9f
Added selection of 'period' within LFP data
rcpeene Nov 19, 2022
877b2a8
various typos and units correction
jeromelecoq Nov 22, 2022
76223d4
Optimized interpolation step to use less memory
rcpeene Nov 22, 2022
72af77b
Changed filepath
rcpeene Nov 22, 2022
6d59fa3
Lowered setting values to conserve memory for usage on binder
rcpeene Nov 22, 2022
8326068
Merge pull request #83 from AllenInstitute/visualize_2p_raw
rcpeene Nov 23, 2022
387a0b4
Merge branch 'dev' of https://github.com/AllenInstitute/openscope_dat…
rcpeene Nov 29, 2022
57c8601
Added 'start' script for binder
rcpeene Nov 29, 2022
aeff8d9
Added line to ensure start file works
rcpeene Nov 29, 2022
d23134e
Changed download location and period selection parameter
rcpeene Nov 29, 2022
8ae3f42
Delete start
rcpeene Nov 29, 2022
a04304d
Fixed confidence interval calculation
rcpeene Nov 30, 2022
89c091a
Merge pull request #85 from AllenInstitute/lfp_responses
rcpeene Nov 30, 2022
aa423c6
Update requirements.txt
rcpeene Nov 30, 2022
c67747c
updated itk-meshtopolydata version
rcpeene Nov 30, 2022
f3c5258
Update requirements.txt
rcpeene Nov 30, 2022
9bc0f67
Rebuilt requirements for LFP and neuropixel notebooks
rcpeene Nov 30, 2022
529b7f7
Resolved merge of requirements
rcpeene Nov 30, 2022
6a2b1d0
Adjusted intro notebooks to run with example nwb
rcpeene Nov 30, 2022
57 changes: 40 additions & 17 deletions docs/basics/download_nwb.ipynb
@@ -5,7 +5,7 @@
"id": "92b86eca",
"metadata": {},
"source": [
"# Download an NWB File\n",
"# Downloading an NWB File\n",
"In order to analyze some data, you'll need to have some data. The [DANDI Archive](https://dandiarchive.org/) is used to store NWB files in datasets called **dandisets**. Typically, an NWB file contains the data for just one experimental session, while a dandiset contains all the related data files yielded from a project. This notebook allows you to download from public dandisets or private dandisets (called **embargoed** dandisets) via the [DANDI Python API](https://dandi.readthedocs.io/en/latest/modref/index.html). To download embargoed dandisets from DANDI, you will need to make an account on the DANDI Archive and must be given access by the owner of the dandiset."
]
},
@@ -19,7 +19,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "8f34eecf",
"metadata": {},
"outputs": [],
@@ -38,23 +38,38 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "67536d37",
"metadata": {},
"outputs": [],
"source": [
"dandiset_id = \"000021\"\n",
"data_loc = \"~/data\"\n",
"download_loc = \".\"\n",
"authenticate = False\n",
"dandi_api_key = \"your_api_key_here\""
"dandi_api_key = \"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "a309c067",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"A newer version (0.46.6) of dandi/dandi-cli is available. You are using 0.46.3\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Got dandiset DANDI:000021/draft\n"
]
}
],
"source": [
"if authenticate:\n",
" client = dandiapi.DandiAPIClient(token=dandi_api_key)\n",
@@ -70,13 +85,13 @@
"id": "420ef8ac",
"metadata": {},
"source": [
"### Download Just One File\n",
"### Downloading Just One File\n",
"Set `filepath` to the path of the file you want to download within the dandiset. You can get this by navigating to the file you want to download on the DANDI Archive website and pressing on the `i` icon. There, you can copy the filepath from the field labeled `path`. Don't include a leading `/`."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "fe9aa40c",
"metadata": {},
"outputs": [],
@@ -86,25 +101,33 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "a110beeb",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloaded file to ./sub-699733573_ses-715093703.nwb\n"
]
}
],
"source": [
"file = my_dandiset.get_asset_by_path(filepath)\n",
"filename = filepath.split(\"/\")[-1]\n",
"file = my_dandiset.get_asset_by_path(filepath)\n",
"# this may take awhile, especially if the file to download is large\n",
"file.download(f\"{data_loc}/{filename}\")\n",
"file.download(f\"{download_loc}/{filename}\")\n",
"\n",
"print(f\"Downloaded file to {data_loc}/{filename}\")"
"print(f\"Downloaded file to {download_loc}/{filename}\")"
]
},
{
"cell_type": "markdown",
"id": "7a85a038",
"metadata": {},
"source": [
"### Download Entire Dandiset\n",
"### Downloading Entire Dandiset\n",
"If you'd like to do a lot of work with the files in a dandiset, you might want to download the entire thing or some portion of the dandiset. Be prepared, though; This could take a significant amount of space on your drive and a significant amount of time. If you want to just download all the files within a directory of the dandiset, you can set the first argument of `download_directory` below to a more specific path within the dandiset."
]
},
@@ -116,9 +139,9 @@
"outputs": [],
"source": [
"# patience isn't just a virtue, it's a requirement\n",
"my_dandiset.download_directory(\"./\", f\"{data_loc}/{dandiset_id}\")\n",
"my_dandiset.download_directory(\"./\", f\"{download_loc}/{dandiset_id}\")\n",
"\n",
"print(f\"Downloaded directory to {data_loc}/{dandiset_id}\")"
"print(f\"Downloaded directory to {download_loc}/{dandiset_id}\")"
]
}
],
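For reviewers following along, the updated `download_nwb.ipynb` cells reduce to roughly the flow below. This is a sketch assembled from the visible hunks, not a verbatim copy of the notebook: the dandiset-resolution call and the exact `filepath` value sit in collapsed hunks, so those lines are assumptions and are marked as such in the comments.

```python
from dandi import dandiapi

dandiset_id = "000021"
download_loc = "."      # changed in this PR: downloads land beside the notebook instead of ~/data
authenticate = False    # set True and paste an API key only for embargoed dandisets
dandi_api_key = ""      # changed in this PR: no placeholder key committed

if authenticate:
    client = dandiapi.DandiAPIClient(token=dandi_api_key)
else:
    client = dandiapi.DandiAPIClient()          # assumed: the else-branch is collapsed in the diff
my_dandiset = client.get_dandiset(dandiset_id)  # assumed: produces "Got dandiset DANDI:000021/draft" in the output

# download a single file by its path within the dandiset
filepath = "sub-699733573/sub-699733573_ses-715093703.nwb"  # illustrative; copy the real path from the file's "i" panel on DANDI
filename = filepath.split("/")[-1]
file = my_dandiset.get_asset_by_path(filepath)
file.download(f"{download_loc}/{filename}")     # may take a while for large files
print(f"Downloaded file to {download_loc}/{filename}")

# or download the whole dandiset (or a subdirectory of it)
my_dandiset.download_directory("./", f"{download_loc}/{dandiset_id}")
```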
89 changes: 84 additions & 5 deletions docs/basics/read_nwb.ipynb
@@ -19,7 +19,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "d76f08a6",
"metadata": {},
"outputs": [],
@@ -38,20 +38,99 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "400c411d",
"metadata": {},
"outputs": [],
"source": [
"nwb_filepath = \"~/data/sub-699733573_ses-715093703.nwb\""
"nwb_filepath = \"./sub-699733573_ses-715093703.nwb\""
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "7628e758",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\carter.peene\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\hdmf\\spec\\namespace.py:531: UserWarning: Ignoring cached namespace 'hdmf-common' version 1.1.3 because version 1.5.1 is already loaded.\n",
" warn(\"Ignoring cached namespace '%s' version %s because version %s is already loaded.\"\n",
"C:\\Users\\carter.peene\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\hdmf\\spec\\namespace.py:531: UserWarning: Ignoring cached namespace 'core' version 2.2.2 because version 2.5.0 is already loaded.\n",
" warn(\"Ignoring cached namespace '%s' version %s because version %s is already loaded.\"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"root pynwb.file.NWBFile at 0x2647013427664\n",
"Fields:\n",
" acquisition: {\n",
" raw_running_wheel_rotation <class 'pynwb.base.TimeSeries'>,\n",
" running_wheel_signal_voltage <class 'pynwb.base.TimeSeries'>,\n",
" running_wheel_supply_voltage <class 'pynwb.base.TimeSeries'>\n",
" }\n",
" devices: {\n",
" probeA <class 'abc.EcephysProbe'>,\n",
" probeB <class 'abc.EcephysProbe'>,\n",
" probeC <class 'abc.EcephysProbe'>,\n",
" probeD <class 'abc.EcephysProbe'>,\n",
" probeE <class 'abc.EcephysProbe'>,\n",
" probeF <class 'abc.EcephysProbe'>\n",
" }\n",
" electrode_groups: {\n",
" probeA <class 'abc.EcephysElectrodeGroup'>,\n",
" probeB <class 'abc.EcephysElectrodeGroup'>,\n",
" probeC <class 'abc.EcephysElectrodeGroup'>,\n",
" probeD <class 'abc.EcephysElectrodeGroup'>,\n",
" probeE <class 'abc.EcephysElectrodeGroup'>,\n",
" probeF <class 'abc.EcephysElectrodeGroup'>\n",
" }\n",
" electrodes: electrodes <class 'hdmf.common.table.DynamicTable'>\n",
" file_create_date: [datetime.datetime(2020, 5, 26, 0, 53, 26, 986608, tzinfo=tzoffset(None, -25200))]\n",
" identifier: 715093703\n",
" institution: Allen Institute for Brain Science\n",
" intervals: {\n",
" drifting_gratings_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" flashes_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" gabors_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" invalid_times <class 'pynwb.epoch.TimeIntervals'>,\n",
" natural_movie_one_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" natural_movie_three_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" natural_scenes_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" spontaneous_presentations <class 'pynwb.epoch.TimeIntervals'>,\n",
" static_gratings_presentations <class 'pynwb.epoch.TimeIntervals'>\n",
" }\n",
" invalid_times: invalid_times <class 'pynwb.epoch.TimeIntervals'>\n",
" processing: {\n",
" eye_tracking_rig_metadata <class 'pynwb.base.ProcessingModule'>,\n",
" optotagging <class 'pynwb.base.ProcessingModule'>,\n",
" running <class 'pynwb.base.ProcessingModule'>,\n",
" stimulus <class 'pynwb.base.ProcessingModule'>\n",
" }\n",
" session_description: Data and metadata for an Ecephys session\n",
" session_id: 715093703\n",
" session_start_time: 2019-01-19 00:54:18-08:00\n",
" stimulus_notes: brain_observatory_1.1\n",
" subject: subject abc.EcephysSpecimen at 0x2647012943904\n",
"Fields:\n",
" age: P118D\n",
" age_in_days: 118.0\n",
" genotype: Sst-IRES-Cre/wt;Ai32(RCL-ChR2(H134R)_EYFP)/wt\n",
" sex: M\n",
" species: Mus musculus\n",
" specimen_name: Sst-IRES-Cre;Ai32-386129\n",
" subject_id: 699733573\n",
"\n",
" timestamps_reference_time: 2019-01-19 00:54:18-08:00\n",
" units: units <class 'pynwb.misc.Units'>\n",
"\n"
]
}
],
"source": [
"io = NWBHDF5IO(nwb_filepath, mode=\"r\", load_namespaces=True)\n",
"nwb = io.read()\n",
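The `read_nwb.ipynb` change is mostly the new local filepath plus captured output. A minimal sketch of the resulting read, using a few of the fields that appear in the printed `NWBFile` summary above:

```python
from pynwb import NWBHDF5IO

# path updated in this PR: the example file now sits next to the notebook
nwb_filepath = "./sub-699733573_ses-715093703.nwb"

io = NWBHDF5IO(nwb_filepath, mode="r", load_namespaces=True)
nwb = io.read()

# a few of the fields shown in the captured output
print(nwb.session_id)              # 715093703
print(nwb.subject.genotype)        # Sst-IRES-Cre/wt;Ai32(RCL-ChR2(H134R)_EYFP)/wt
print(list(nwb.intervals.keys()))  # stimulus presentation tables
print(nwb.units)                   # spike-sorted units table

io.close()
```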
78 changes: 63 additions & 15 deletions docs/basics/stream_nwb.ipynb
@@ -5,7 +5,7 @@
"id": "d84ee614",
"metadata": {},
"source": [
"# Stream an NWB File with fsspec\n",
"# Streaming an NWB File with fsspec\n",
"As you might have realized, NWB files are large. They take a lot of time to download and a lot of space on your drive. A convenient tool to mitigate this is **fsspec**. Fsspec allows you to *stream* the information from a file remotely without having to download it. This can be more efficient if you are only wanting to quickly examine a file or just need access to a portion of the file's contents. For more exensive analysis, it is still recommended that you download the file."
]
},
@@ -19,7 +19,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "df1c4cce",
"metadata": {},
"outputs": [],
@@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "f3f97f13",
"metadata": {},
"outputs": [],
@@ -60,10 +60,25 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "a51caf90",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"A newer version (0.46.6) of dandi/dandi-cli is available. You are using 0.46.3\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Got dandiset DANDI:000021/draft\n"
]
}
],
"source": [
"if authenticate:\n",
" client = dandiapi.DandiAPIClient(token=dandi_api_key)\n",
@@ -76,10 +91,18 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "d131ad56",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Retrieved file url https://dandiarchive.s3.amazonaws.com/blobs/f5f/175/f5f1752f-5227-47d5-8f75-cd71937878aa?response-content-disposition=attachment%3B%20filename%3D%22sub-699733573_ses-715093703.nwb%22&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAUBRWC5GAEKH3223E%2F20221130%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20221130T215329Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=9edb7bb8f4263cd27cdcc37b27cc23cc7e79b8dbc3a0c9d6b4ae2a74da30c63a\n"
]
}
],
"source": [
"file = my_dandiset.get_asset_by_path(filepath)\n",
"base_url = file.client.session.head(file.base_download_url)\n",
@@ -93,22 +116,32 @@
"id": "3df13a24",
"metadata": {},
"source": [
"### Stream Your File\n",
"### Streaming a File\n",
"First, this creates a virtual filesystem based on the http protocol and specifies using caching to save accessed data to RAM. Then it opens the file remotely through the virtual filesystem."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "d15db3bb",
"metadata": {
"scrolled": true
},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\carter.peene\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\hdmf\\spec\\namespace.py:531: UserWarning: Ignoring cached namespace 'hdmf-common' version 1.1.3 because version 1.5.1 is already loaded.\n",
" warn(\"Ignoring cached namespace '%s' version %s because version %s is already loaded.\"\n",
"C:\\Users\\carter.peene\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\hdmf\\spec\\namespace.py:531: UserWarning: Ignoring cached namespace 'core' version 2.2.2 because version 2.5.0 is already loaded.\n",
" warn(\"Ignoring cached namespace '%s' version %s because version %s is already loaded.\"\n"
]
}
],
"source": [
"fs = CachingFileSystem(\n",
" fs=fsspec.filesystem(\"http\"),\n",
" cache_storage=\"nwb-cache\", # Local folder for the cache\n",
" fs=fsspec.filesystem(\"http\")\n",
")\n",
"\n",
"f = fs.open(file_url, \"rb\")\n",
@@ -128,7 +161,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "91031da2",
"metadata": {},
"outputs": [],
@@ -145,10 +178,25 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"id": "3e06b964",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "682abdb6aeef48f4a3164f618072cbc1",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(HBox(children=(Label(value='session_description:', layout=Layout(max_height='40px', max_width='…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"nwb2widget(nwb)"
]
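Finally, the streaming notebook, after dropping the explicit `cache_storage="nwb-cache"` folder, comes down to roughly the sketch below. The asset lookup mirrors the visible hunks; the redirect resolution, the example `filepath`, and the h5py/pynwb steps are collapsed in this view and are assumed here, marked in the comments.

```python
import fsspec
import h5py
from dandi import dandiapi
from fsspec.implementations.cached import CachingFileSystem
from pynwb import NWBHDF5IO

# resolve the S3 URL for the asset (dandiset/filepath setup mirrors the earlier cells)
client = dandiapi.DandiAPIClient()
my_dandiset = client.get_dandiset("000021")                  # assumed: collapsed in the diff
filepath = "sub-699733573/sub-699733573_ses-715093703.nwb"   # illustrative path
file = my_dandiset.get_asset_by_path(filepath)
base_url = file.client.session.head(file.base_download_url)
file_url = base_url.headers["Location"]                      # assumed: the exact line is collapsed in the diff

# changed in this PR: no cache_storage="nwb-cache" argument, so fsspec's default cache location is used
fs = CachingFileSystem(
    fs=fsspec.filesystem("http"),
)

f = fs.open(file_url, "rb")

# assumed remainder of the cell (the standard pynwb streaming pattern)
h5 = h5py.File(f)
io = NWBHDF5IO(file=h5, load_namespaces=True)
nwb = io.read()
```

From there the notebook hands `nwb` to `nwb2widget` for interactive browsing, as shown in the last visible hunk.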