
Commit

BioRxiv link, fix to relative path in GUIs
AlexEMG committed Nov 26, 2018
1 parent 751255c commit c5c5d97
Showing 15 changed files with 58 additions and 23 deletions.
33 changes: 24 additions & 9 deletions README.md
@@ -13,9 +13,9 @@ This package includes graphical user interfaces to label your data, and take you
VERSION 1.0: The initial, Nature Neuroscience version of **DeepLabCut** can be found in the git history; the latest 1.x release is here: https://github.com/AlexEMG/DeepLabCut/releases/tag/1.11

<p align="center">
<img src="docs/images/MATHIS_2018_odortrail.gif" width="36.4%">
<img src="docs/images/rat-grasp.gif" width="24.95%">
<img src="docs/images/MATHIS_2018_fly.gif" width="31.5%">
<img src="http://www.people.fas.harvard.edu/~amathis/dlc/MATHIS_2018_odortrail.gif" height="220">
<img src="docs/images/rat-grasp.gif" width="24.89%">
<img src="http://www.people.fas.harvard.edu/~amathis/dlc/MATHIS_2018_fly.gif" height="220">
</p>

Please check out [www.mousemotorlab.org/deeplabcut](https://www.mousemotorlab.org/deeplabcut/) for more video demonstrations of automated tracking. Above: courtesy of the Murthy (mouse), Leventhal (rat), and Axel (fly) labs!
@@ -29,10 +29,11 @@ Please check out [www.mousemotorlab.org/deeplabcut](https://www.mousemotorlab.or
</p>

# [DEMO the code](/examples)
We provide several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another Notebook to run DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker and on Google Colab.
We provide several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another Notebook to run DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker and on Google Colab. Please also read the [user-guide](https://www.biorxiv.org/content/early/2018/10/30/457242).

# News (and in the news):

- Nov 2018: We posted a detailed guide for DeepLabCut 2.0 on [BioRxiv](https://www.biorxiv.org/content/early/2018/10/30/457242). It also contains a case study for 3D pose estimation in cheetahs.
- Nov 2018: Various (post-hoc) analysis scripts contributed by users (and us) will be gathered at [DLCutils](https://github.com/AlexEMG/DLCutils). Feel free to contribute! In particular, there is a script guiding you through importing a project into the new data format for DLC 2.0.
- Oct 2018: new pre-print on the inference speed and video-compression robustness of DeepLabCut on [BioRxiv](https://www.biorxiv.org/content/early/2018/10/30/457242)
@@ -49,20 +50,23 @@ importing a project into the new data format for DLC 2.0

- Top Right: Video analysis is fast (see [Mathis/Warren](https://www.biorxiv.org/content/early/2018/10/30/457242) for details)

- Bottom Left: The feature detectors are robust to video compression (see [Mathis/Warren](https://www.biorxiv.org/content/early/2018/10/30/457242) for details)
- Mid Left: The feature detectors are robust to video compression (see [Mathis/Warren](https://www.biorxiv.org/content/early/2018/10/30/457242) for details)

- Bottom Right: It allows 3D pose estimation with a single network and camera (see [Mathis/Warren](https://www.biorxiv.org/content/early/2018/10/30/457242) for details)
- Mid Right: It allows 3D pose estimation with a single network and camera (see [Mathis/Warren](https://www.biorxiv.org/content/early/2018/10/30/457242) for details)

- Bottom: It allows 3D pose estimation with a single network trained on data from multiple cameras together with standard triangulation methods (see [Nath* and Mathis* et al.](https://www.biorxiv.org/content/early/2018/11/24/476531) for details)

<p align="center">
<img src="docs/images/ErrorvsTrainingsetSize.png" width="50%">
<img src="docs/images/inferencespeed.png" width="30%">
<img src="docs/images/compressionrobustness.png" width="40%">
<img src="docs/images/MouseLocomotion_warren.gif" width="30%">
<img src="docs/images/MouseLocomotion_warren.gif" width="30%">
<img src="docs/images/cheetah.jpg" width="75%">
</p>

## Code contributors:

[Alexander Mathis](https://github.com/AlexEMG), [Tanmay Nath](http://www.mousemotorlab.org/team), [Mackenzie Mathis](https://github.com/MMathisLab), and especially the authors of DeeperCut for the feature detector code. The feature detector code is based on Eldar Insafutdinov's TensorFlow implementation of [DeeperCut](https://github.com/eldar/pose-tensorflow). DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including Richard Warren, Ronny Eichler, Jonas Rauber, Hao Wu, Taiga Abe, and Jonny Saunders. In particular, the authors thank Ronny Eichler for input on the modularized version. We are also grateful to all the beta testers!
[Alexander Mathis](https://github.com/AlexEMG), [Tanmay Nath](http://www.mousemotorlab.org/team), [Mackenzie Mathis](https://github.com/MMathisLab), and especially the authors of DeeperCut for the feature detector code. The feature detector code is based on Eldar Insafutdinov's TensorFlow implementation of [DeeperCut](https://github.com/eldar/pose-tensorflow). DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including Richard Warren, Ronny Eichler, Jonas Rauber, Hao Wu, Federico Claudi, Taiga Abe, and Jonny Saunders as well as the [contributors](https://github.com/AlexEMG/DeepLabCut/graphs/contributors). In particular, the authors thank Ronny Eichler for input on the modularized version. We are also grateful to all the beta testers!

This is an actively developed package and we welcome community development and involvement! If you would like to join the [DeepLabCut Slack group](https://deeplabcut.slack.com), please drop us a note to be invited by emailing: mackenzie@post.harvard.edu

@@ -85,7 +89,7 @@ Please check out the following references for more details:
url = {http://arxiv.org/abs/1605.03170}
}

Our open source pre-prints:
Our open-access pre-prints:

@article{mathis2018markerless,
title={Markerless tracking of user-defined features with deep learning},
@@ -94,6 +98,17 @@ Our open source pre-prints:
year={2018}
}

@article {NathMathis2018,
author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
year = {2018},
doi = {10.1101/476531},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2018/11/24/476531},
eprint = {https://www.biorxiv.org/content/early/2018/11/24/476531.full.pdf},
journal = {bioRxiv}
}

@article {MathisWarren2018speed,
author = {Mathis, Alexander and Warren, Richard A.},
title = {On the inference speed and video-compression robustness of DeepLabCut},
8 changes: 3 additions & 5 deletions deeplabcut/generate_training_dataset/labeling_toolbox.py
@@ -56,9 +56,6 @@ def __init__(self, parent, config,Screens,scale_w,scale_h, winHack, img_scale):

wx.Frame.__init__(self, None, title="DeepLabCut2.0 - Labeling GUI", size=(self.gui_width*winHack, self.gui_height*winHack), style= wx.DEFAULT_FRAME_STYLE)




self.statusbar = self.CreateStatusBar()
self.statusbar.SetStatusText("")
self.Bind(wx.EVT_CHAR_HOOK, self.OnKeyPressed)
@@ -230,7 +227,9 @@ def browseDir(self, event):
self.index = glob.glob(os.path.join(self.dir,'*.png'))
print('Working on folder: {}'.format(os.path.split(str(self.dir))[-1]))

self.relativeimagenames=self.index ##[n.split(self.project_path+'/')[1] for n in self.index]
#self.relativeimagenames=self.index ##[n.split(self.project_path+'/')[1] for n in self.index]
#self.relativeimagenames=[n.split(self.project_path+'/')[1] for n in self.index]
self.relativeimagenames=['labeled'+n.split('labeled')[1] for n in self.index]

self.fig1, (self.ax1f1) = plt.subplots(figsize=self.img_size,facecolor = "None")
self.iter = 0
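
The new list comprehension implements the relative-path part of this commit: instead of indexing the labels by the absolute paths returned by `glob`, the image keys now start at the `labeled-data` folder, so a project can be moved between machines. A minimal sketch of the idea, assuming the standard DeepLabCut `labeled-data/<video>/` layout (the example paths themselves are hypothetical):

```python
# Sketch of the relative-path conversion above (hypothetical example paths).
# Absolute paths as returned by glob.glob(os.path.join(self.dir, '*.png')):
index = [
    '/home/alex/Reaching-Mackenzie-2018-08-30/labeled-data/reachingvideo1/img005.png',
    '/home/alex/Reaching-Mackenzie-2018-08-30/labeled-data/reachingvideo1/img010.png',
]

# Keep everything from 'labeled-data' onward, so the DataFrame index no longer
# depends on where the project lives on disk (assumes 'labeled' occurs only
# once in the path, as in the standard project layout):
relativeimagenames = ['labeled' + n.split('labeled')[1] for n in index]

print(relativeimagenames)
# ['labeled-data/reachingvideo1/img005.png', 'labeled-data/reachingvideo1/img010.png']
```
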
@@ -342,7 +341,6 @@ def saveEachImage(self):
plt.close(self.fig1)

for idx, bp in enumerate(self.updatedCoords):

self.dataFrame.loc[self.relativeimagenames[self.iter]][self.scorer, bp[0][-2],'x' ] = bp[-1][0]
self.dataFrame.loc[self.relativeimagenames[self.iter]][self.scorer, bp[0][-2],'y' ] = bp[-1][1]

13 changes: 12 additions & 1 deletion deeplabcut/refine_training_dataset/refinement.py
@@ -458,7 +458,18 @@ def save(self, event):
if Path(self.dir,'CollectedData_'+self.humanscorer+'.h5').is_file():
print("A training dataset file is already found for this video. The refined machine labels are merged to this data!")
DataU1 = pd.read_hdf(os.path.join(self.dir,'CollectedData_'+self.humanscorer+'.h5'), 'df_with_missing')
DataCombined = pd.concat([DataU1, self.Dataframe])
#combine datasets Original Col. + corrected machinefiles:
DataCombined = pd.concat([self.Dataframe,DataU1])
# Now drop redundant ones keeping the first one [this will make sure that the refined machine file gets preference]
DataCombined = DataCombined[~DataCombined.index.duplicated(keep='first')]
'''
if len(self.droppedframes)>0: #i.e. frames were dropped/corrupt. also remove them from original file (if they exist!)
for fn in self.droppedframes:
try:
DataCombined.drop(fn,inplace=True)
except KeyError:
pass
'''
DataCombined.to_hdf(os.path.join(self.dir,'CollectedData_'+ self.humanscorer +'.h5'), key='df_with_missing', mode='w')
DataCombined.to_csv(os.path.join(self.dir,'CollectedData_'+ self.humanscorer +'.csv'))
else:
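
The reordered `pd.concat` plus the duplicated-index filter is what gives the refined machine labels precedence over the previously saved human labels when both contain the same image. A small self-contained sketch of that pattern (frame names and coordinate values are hypothetical, and the real DataFrames use a multi-level column index):

```python
import pandas as pd

# Hypothetical frames: img001 appears both in the refined machine labels
# and in the previously saved human-annotated data.
refined = pd.DataFrame({'x': [10.0, 20.0]},
                       index=['labeled-data/vid1/img001.png',
                              'labeled-data/vid1/img002.png'])
human = pd.DataFrame({'x': [99.0, 30.0]},
                     index=['labeled-data/vid1/img001.png',
                            'labeled-data/vid1/img003.png'])

# Refined labels first, then the old data ...
combined = pd.concat([refined, human])
# ... and keep only the first occurrence of each index, so the refined
# value (x = 10.0) wins for img001.png.
combined = combined[~combined.index.duplicated(keep='first')]

print(combined)
#                                  x
# labeled-data/vid1/img001.png  10.0
# labeled-data/vid1/img002.png  20.0
# labeled-data/vid1/img003.png  30.0
```
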
2 changes: 1 addition & 1 deletion deeplabcut/version.py
@@ -7,5 +7,5 @@
"""

__version__ = '2.0.0'
__version__ = '2.0.1'
VERSION = __version__
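
After upgrading, the new version string can be checked directly against the module shown above:

```python
# Reads the version defined in deeplabcut/version.py (shown in this diff).
from deeplabcut.version import __version__

print(__version__)  # expected: '2.0.1' after this commit
```
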
9 changes: 6 additions & 3 deletions docs/UseOverviewGuide.md
@@ -44,7 +44,7 @@ TIP: for every function there is an associated help document that can be viewed b
**mini-demo:** create project and edit the yaml file

<p align="center">
<img src="/docs/images/startdeeplabcut.gif" width="90%">
<img src="http://www.people.fas.harvard.edu/~amathis/dlc/startdeeplabcut.gif" width="90%">
</p>

### Select Frames to Label:
Expand All @@ -62,7 +62,7 @@ TIP: for every function there is a associated help document that can be viewed b
**mini-demo:** using the GUI to label

<p align="center">
<img src="/docs/images/guiexample.gif" width="90%">
<img src="http://www.people.fas.harvard.edu/~amathis/dlc/guiexample.gif" width="90%">
</p>

### Check Annotated Frames:
Expand Down Expand Up @@ -113,7 +113,7 @@ TIP: for every function there is a associated help document that can be viewed b
**mini-demo:** using the refinement GUI, a user can load the file then zoom, pan, and edit and/or remove points:

<p align="center">
<img src="/docs/images/refinelabels.gif" width="90%">
<img src="http://www.people.fas.harvard.edu/~amathis/dlc/refinelabels.gif" width="90%">
</p>

When done editing the labels, merge:
@@ -132,3 +132,6 @@ In ipython/Jupyter notebook:
In Python:

``help(deeplabcut.nameofthefunction)``
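
For example, with `extract_frames` (one of the package's functions, used here purely as an illustration):

```python
import deeplabcut

# Plain Python: print the docstring of any DeepLabCut function
help(deeplabcut.extract_frames)

# In IPython / Jupyter the same information is available via:
#   deeplabcut.extract_frames?
```
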


Return to [readme](../README.md).
5 changes: 5 additions & 0 deletions docs/functionDetails.md
@@ -381,3 +381,8 @@ labeled data. The example project, named as Reaching-Mackenzie-2018-08-30 consis
with default parameters and 20 images, which are cropped around the region of interest as an example dataset. These
images are extracted from a video, which was recorded in a study of skilled motor control in mice. Some example
labels for these images are also provided. See more details [here](/examples).


Return to [User guide overview](UseOverviewGuide.md).

Return to [readme](../README.md).
Binary file removed docs/images/MATHIS_2018_fly.gif
Binary file not shown.
Binary file removed docs/images/MATHIS_2018_odortrail.gif
Binary file not shown.
Binary file added docs/images/cheetah.jpg
Binary file removed docs/images/guiexample.gif
Binary file not shown.
Binary file removed docs/images/refinelabels.gif
Binary file not shown.
Binary file removed docs/images/startdeeplabcut.gif
Binary file not shown.
7 changes: 5 additions & 2 deletions docs/installation.md
@@ -39,11 +39,14 @@ conda create -n <nameyourenvironment> python=3.6
source activate <nameyourenvironment>
```
**Windows:**

- We also provide [environment files for Windows](https://github.com/AlexEMG/DeepLabCut/tree/master/conda-environments). They can be installed by typing (in this folder): ```conda env create -f dlc-windowsCPU.yaml``` or ```conda env create -f dlc-windowsGPU.yaml``` for the GPU version. See further details in this [issue](https://github.com/AlexEMG/DeepLabCut/issues/112).

- Alternatively, you can create your tailored environment:
```
conda create -n <nameyourenvironment> python=3.6
activate <nameyourenvironment>
```
- here are some additional/alternative [tips for Anaconda + Windows](https://github.com/AlexEMG/DeepLabCut/issues/20#issuecomment-438661814)

Once the environment is activated, you can install DeepLabCut. In the terminal, type:
```
@@ -137,5 +140,5 @@ If you perform the system wide installation, and the computer has other Python p

Now you can use Jupyter Notebooks and Spyder; to train, just use the terminal to run all the code!

Return to [readme](../README.md).
Return to [readme](../README.md).

2 changes: 1 addition & 1 deletion reinstall.sh
@@ -1,4 +1,4 @@
pip uninstall deeplabcut
python3 setup.py sdist bdist_wheel
pip install dist/deeplabcut-2.0.0-py3-none-any.whl
pip install dist/deeplabcut-2.0.1-py3-none-any.whl

2 changes: 1 addition & 1 deletion setup.py
@@ -15,7 +15,7 @@

setuptools.setup(
name="deeplabcut",
version="2.0.0",
version="2.0.1",
author="Alexander Mathis, Tanmay Nath, Mackenzie Mathis",
author_email="alexander.mathis@bethgelab.org",
description="Markerless pose-estimation of user-defined features with deep learning",
