Releases: alan-turing-institute/ThermodynamicAnalyticsToolkit
Version 0.9.5
ChangeLog update
- Added jupyter notebooks with a guided tour through TATi.
- Added Dockerfile for putting TATi in docker container.
- Added analysis capability for ensemble averaging.
- FIX: SGLD formula in userguide was incorrect.
- Added GradientDescent optimizer with Barzilai-Borwein learning rate picker.
- Accumulation of norms of gradients, noise, momentum, ... is made optional and can be fully switched off for performance reasons.
- Optimizers have been moved into a distinct module/folder.
- several smaller fixes preventing the distributed tarball from compiling.
- small runtime improvements.
- FIX: time_per_nth_step column in run info file/dataframe was wrong when
using HMC.
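The Barzilai-Borwein learning-rate picker mentioned above chooses the step length from the last two iterates and gradients rather than from a fixed schedule. TATi's GradientDescent optimizer operates on TensorFlow graphs; the following is only a minimal, library-free sketch of the underlying BB1 rule (all function names here are illustrative, not TATi's API):

```python
def bb_step_size(x_prev, x_curr, g_prev, g_curr, fallback=1e-3):
    """Barzilai-Borwein (BB1) step size from two successive iterates/gradients."""
    s = [a - b for a, b in zip(x_curr, x_prev)]   # parameter difference
    y = [a - b for a, b in zip(g_curr, g_prev)]   # gradient difference
    sy = sum(si * yi for si, yi in zip(s, y))
    if sy <= 0.0:  # curvature estimate unusable, fall back to a fixed rate
        return fallback
    return sum(si * si for si in s) / sy

def gradient_descent_bb(grad, x0, eta0=0.1, steps=50):
    """Gradient descent where each step length is picked by the BB1 rule."""
    x = list(x0)
    g = grad(x)
    x_new = [xi - eta0 * gi for xi, gi in zip(x, g)]  # bootstrap first step
    for _ in range(steps):
        g_new = grad(x_new)
        eta = bb_step_size(x, x_new, g, g_new)
        x, g = x_new, g_new
        x_new = [xi - eta * gi for xi, gi in zip(x, g)]
    return x_new
```

On a simple quadratic such as f(x) = x0^2 + 5*x1^2 this converges in a handful of steps, which is the appeal of the BB rule over a hand-tuned fixed learning rate.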
Version 0.9.4
ChangeLog update
- Simulation has get_losses() and get_activations() with list of valid names.
- TrajectoryData's dataframes' columns now all have correct dtype.
- TATiLossFunctionSampler may resample trajectory in subspace.
- Refactored TATiLossFunctionSampler extensively, introduced SamplingModes.
- TATi can now be installed as a PyPI (wheel) package, i.e. via "pip install tati".
- smaller fixes to userguide.
- FIX: Rewrite of Model class broke the check for a present neural network in Simulation.
Version 0.9.3
ChangeLog update
- DOCU: Improved and updated userguide.
- DOCU: Code documentation now follows Google Style (no longer ReST) and has API documentation.
- Introduced a general test threshold to account for numerical inaccuracy of parallel reduction on GPU-assisted hardware.
- Fully refactored model class: ModelState, MultiLayerPerceptron, InputPipelineFactory.
- Configure now checks presence of required python packages.
- added full copyright notes, added code check tests on this.
- extracted grid-based sampling from TATiLossFunctionSampler.
- TATiAnalyser can perform covariance and Integrated Autocorrelation Time (IAT) analysis.
- Refactored TATiAnalyser into several operation modes such that these are easy to use and accessible from Python interfaces.
Version 0.9.2
ChangeLog update
- added fully tested Hamiltonian Monte Carlo method with first and second order time integrators (Euler and Leapfrog), following [Neal, 2011].
- added Ensemble Quasi Newton scheme for all samplers, tested on simple Gaussian mixture model and MNIST single-layer perceptron.
- for checking virial theorem, average moment of inertia is written to averages file.
- allowing "0" in option hidden_dimension.
- trajectory can be written w.r.t. the subspace spanned by the vectors in the new option directions_file.
- summaries now write memory and cputime usage for debugging bottlenecks.
- DOCU: re-added accidentally dropped reference section on simulation module.
- FIX: TATiOptimizer always writes last step to files.
- FIX: update of parameters caused no update of simulation's evaluation cache.
- FIX: time_per_nth_step was using process_time instead of time, i.e. accumulated time over all processes.
- FIX: Boolean parameters were not used from cmd-line.
- FIX: Assigning weights from dataframes took ages. Now works also for multiple walkers.
- FIX: parse_parameters_file was not working in TATi.simulation.
- tested on TF version up to 1.10.
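The Leapfrog variant of the HMC sampler above alternates momentum half-kicks with position drifts and then accepts or rejects the proposal on the total energy, as described in [Neal, 2011]. A minimal one-dimensional sketch with unit mass (illustrative only; TATi's sampler runs on TensorFlow graphs):

```python
import math
import random

def hmc_step(x, grad_U, U, eps=0.1, n_leapfrog=10):
    """One HMC proposal via leapfrog integration plus a Metropolis test."""
    p = random.gauss(0.0, 1.0)            # resample momentum
    x_new, p_new = x, p
    # leapfrog: half-kick, (n-1) x (drift, kick), final drift and half-kick
    p_new -= 0.5 * eps * grad_U(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new
        p_new -= eps * grad_U(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)
    # Metropolis accept/reject on total energy H = U(x) + p^2/2
    dH = (U(x_new) + 0.5 * p_new ** 2) - (U(x) + 0.5 * p ** 2)
    if random.random() < math.exp(min(0.0, -dH)):
        return x_new
    return x
```

Iterating this step on the harmonic potential U(x) = x^2/2 reproduces a standard normal distribution in x, which makes a convenient smoke test.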
sha256: 182418765305da71563bb737f68d2a3ce7bcd0744af11e8ea3506ec9cf0f06b0 thermodynamicanalyticstoolkit-0.9.tar.bz2
Version 0.9
ChangeLog update
- added simulation module, an easy-to-use python interface to loss manifold sampling for neural networks.
- large rewrite of userguide, now in asciidoc.
- added programmer's guide
- added roadmap.
- option types are checked in python interface.
- tensorflow up to 1.8 supported.
- improved input pipeline (and thereby overall) performance.
- several smaller fixes.
- HMC is removed temporarily until fully validated.
sha1: a25214752556e4699b7af9c011cdaa23bdd47286 thermodynamicanalyticstoolkit-0.9.tar.bz2
Version 0.8
ChangeLog update
- introducing replicated neural networks to allow for multiple walkers that proceed in parallel on individual trajectories with the ability to exchange information, e.g. for the Ensemble Quasi Newton method.
- Updated package dependencies and funding notes in README.
- Added python interface that allows using the neural network as a general function depending on its parameters and with a gradient.
- FIX: TATiExplorer could still experience dead-locks.
- Docbook now also supports non-standard fop and xsltproc installation paths.
- FIX: scipy.sparse's linalg module was not loaded correctly for certain scipy versions.
- added Covariance Controlled Adaptive Langevin (CCAdL) as sampler, untested.
- added option burnin to drop initial set of steps from accumulated averages
- added option progress to display a progress bar with time estimate
- added option summaries_path to write summaries for TensorBoard on demand
- FIX: accuracy was not calculated correctly for multi class classification.
- added testsuite section on tensorflow (non-)capabilities.
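The new burnin option simply excludes the first steps from any accumulated average, since the sampler has not yet equilibrated there. A sketch of the idea on a plain list of per-step values (names here are illustrative, not TATi's option handling):

```python
def burnin_average(values, burnin=0):
    """Average per-step values, dropping the first `burnin` equilibration steps."""
    tail = values[burnin:]
    if not tail:
        raise ValueError("burnin would drop all steps")
    return sum(tail) / len(tail)
```

Dropping the transient matters: a single far-from-equilibrium initial value can dominate an otherwise converged running average.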
sha1: 80fa208ddae2b274aebed065daccba3c39dfd51f thermodynamicanalyticstoolkit-0.8.tar.bz2
Version 0.7
ChangeLog update
- renamed from DataDrivenSampler (DDS) to Thermodynamic Analytics Toolkit (TATi)
- added (vectorized) hessian and gradient nodes to allow easy access through numpy arrays
- sampler, optimizer, lossfunctionsampler, and inputspacesampler may now parse parameters from a given CSV file through a single cmd-line option
- Explorer can now run parallel processes, each sampling or training along an independent trajectory
- FIX: Sampler module's names were inconsistent
- FIX: sqlite3 presence check was broken
sha1: b3eca5fe66b71c644935b1c2cd75308dc4d857b3 thermodynamicanalyticstoolkit-0.7.tar.bz2
Version 0.6
ChangeLog update
- supporting now up to tensorflow 1.6
- added DDSExplorer for exploring loss landscapes, picking minima along the way
- prints replaced by logging expressions and a verbose cmdline statement is supported
- FIX: rejection_rate in HMC fixed
- tensorflow computations can now be done with a given basetype.
- returned to default tf.float32 as tensorflow basetype. tf.float64 seems to be broken to some extent, as suggested by the sampler's convergence plots
- FIX: SGLD was not resetting aggregated values in run info
- LossFunctionSampler can now fix a partial set of parameters to values obtained from a (minima, trajectory) file
- FIX: Picking input columns (e.g. "sin(x1)") was broken to some extent
- LossFunctionSampler and InputSpaceSampler can now interpret CSV files of arbitrary type (they pick out the columns they need)
- updated userguide significantly
- some fixes to changed dependent python packages related to Ubuntu 16.04
sha1: 346348be9678b5b5cf43fa96d7a81bb4249f3a7d datadrivensampler-0.6.tar.bz2
Version 0.5
ChangeLog update
- may read TFRecords (as well as CSV) files
- added example for MNIST optimization
- input pipelines now depend on the tf.Dataset framework, either in-memory for smaller datasets or file-based
- priors have been added for BAOAB and HMC
- version now always gives a git hash (of the commit)
- Python API can feed its own in-memory dataset for sampling or optimization
sha1: 2fb81ff09fb1ca6f2181fd349953589b22fc289d datadrivensampler-0.5.tar.bz2
Version 0.3
ChangeLog update
- added HamiltonianMonteCarlo sampler
- added BAOAB sampler
- parameters can be fixed in optimization or sampling (also loss function sampling)
- FIX: trajectories are now correctly written also for networks with hidden layers
- diffusion map analysis can now use python package pydiffmap (and is recommended due to optimal epsilon choice)
- all cmdline examples in userguide are also fully tested
- added input space sampler to see classification boundaries of network
- we no longer generate datasets in memory but parse from CSV files
- added DatasetWriter to produce CSV for old in-memory datasets
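The BAOAB sampler added above splits one Langevin step into a half kick (B), a half drift (A), an exact Ornstein-Uhlenbeck refresh of the momentum (O), and the mirrored A and B parts. A minimal one-dimensional sketch with unit mass (illustrative only, not TATi's TensorFlow implementation):

```python
import math
import random

def baoab_step(x, p, grad_U, h=0.1, gamma=1.0, kT=1.0):
    """One BAOAB step of underdamped Langevin dynamics (mass = 1)."""
    p -= 0.5 * h * grad_U(x)                 # B: half kick
    x += 0.5 * h * p                         # A: half drift
    c = math.exp(-gamma * h)                 # O: exact OU momentum update
    p = c * p + math.sqrt((1.0 - c * c) * kT) * random.gauss(0.0, 1.0)
    x += 0.5 * h * p                         # A: half drift
    p -= 0.5 * h * grad_U(x)                 # B: half kick
    return x, p
```

Iterated on the harmonic potential U(x) = x^2/2 at kT = 1 this reproduces the target variance of 1 in x up to a small O(h^2) bias, which is the property BAOAB is prized for.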
sha1: c1a663adb4ead4e5989cff57c16e3bf530b0ecc4 datadrivensampler-0.3.tar.bz2