A library built on top of TensorFlow, Sonnet, and Sacred to facilitate deep learning research.
Version: 0.1.0.dev1
- Python 3.6
- virtualenv is strongly recommended
- CUDA 9.0 and cuDNN (if enabling GPU)
Clone the repository, cd into the root directory, activate a virtual environment (optional), and run
pip install -r requirements.txt
pip uninstall tensorflow && pip install tensorflow-gpu # run this line if you want to enable GPU
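Since a virtual environment is strongly recommended above, the isolation step can be sketched like this (a sketch only; the `.venv` directory name is an assumption, and the stdlib `venv` module is used here as a stand-in for `virtualenv`):

```shell
# Create an isolated environment (virtualenv or the stdlib venv module both work)
python3 -m venv .venv
# Activate it; subsequent pip installs stay inside .venv
. .venv/bin/activate
```

With the environment active, the `pip install` lines above affect only `.venv`, so the GPU/CPU TensorFlow swap cannot disturb a system-wide install.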
Setup is more complicated than this on Crane, but talk to Ellie directly about it because we don't have an automated process nailed down yet.
Documentation on master can be found at sounds-deep.readthedocs.io or can be built by running
./docs/build_script.sh
and pointing a browser at ./docs_build/index.html
Because use of this package is expected to stay within the lab for now, you can find me in person or on Slack with any questions.
import sounds_deep as sd
- `contrib.data`: Easy downloading of standard datasets and loading for TensorFlow
- `contrib.distributions`: Handles distributions in ways not done in `tf.contrib.distributions` (use `tfd` when possible)
- `contrib.experiments`: Executable files with command line interfaces which train a model
- `contrib.models`: Sonnet modules implementing entire model frameworks
- `contrib.parameterized_distributions`: Distributions with parameters baked in
- `contrib.sacred_ingredients`: Classes inheriting from `Sacred.Ingredient`
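The idea behind `contrib.parameterized_distributions`, fixing a distribution's parameters once at construction so callers never pass them at score time, can be illustrated with a small standalone sketch (the `FixedNormal` class and its method names are hypothetical illustrations, not the sounds_deep API):

```python
import math


class FixedNormal:
    """A normal distribution with mean and stddev baked in at construction.
    (Hypothetical illustration of the parameters-baked-in pattern; not the
    actual sounds_deep API.)"""

    def __init__(self, mean=0.0, stddev=1.0):
        self.mean = mean
        self.stddev = stddev

    def log_prob(self, x):
        # log N(x; mean, stddev^2), computed with the baked-in parameters
        var = self.stddev ** 2
        return -0.5 * math.log(2 * math.pi * var) - (x - self.mean) ** 2 / (2 * var)


standard = FixedNormal()                  # parameters fixed once here...
print(round(standard.log_prob(0.0), 4))   # ...then used everywhere: -0.9189
```

Baking parameters in this way lets model code treat a distribution as a single object rather than threading mean/stddev arguments through every call site.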
Everyone in the lab is invited to contribute code pertaining to Sonnet, TensorFlow, or deep learning/machine learning with Python.
Eleanor Quint, a Ph.D. student in the computer science and engineering department at the University of Nebraska-Lincoln