Releases: mlpack/mlpack
mlpack 3.3.1
Released April 29th, 2020.
- Minor Julia and Python documentation fixes (#2373).
- Updated terminal state and fixed bugs for Pendulum environment (#2354, #2369).
- Added ELiSH activation function (#2323).
- Add L1 Loss function (#2203).
- Pass CMAKE_CXX_FLAGS (compilation options) correctly to Python build (#2367).
- Expose ensmallen Callbacks for sparse autoencoder (#2198).
- Bugfix for LARS class causing invalid read (#2374).
- Add serialization support from Julia; use mlpack.serialize() and mlpack.deserialize() to save and load from IOBuffers.
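The ELiSH activation added in this release combines the ELU and Swish shapes. As a conceptual illustration of the function's definition (plain Python, not mlpack's implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def elish(x):
    # ELiSH: x * sigmoid(x) for x >= 0, (exp(x) - 1) * sigmoid(x) for x < 0.
    # Both branches agree at x = 0, so the function is continuous there.
    if x >= 0:
        return x * sigmoid(x)
    return (math.exp(x) - 1.0) * sigmoid(x)
```

For x >= 0 this is exactly Swish; for x < 0 the ELU-style (exp(x) - 1) factor keeps outputs bounded below.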
mlpack 3.3.0
Released April 7th, 2020.
- Templated return type of Forward function of loss functions (#2339).
- Added R2 Score regression metric (#2323).
- Added mean squared logarithmic error loss function for neural networks (#2210).
- Added mean bias loss function for neural networks (#2210).
- The DecisionStump class has been marked deprecated; use the DecisionTree class with NoRecursion=true or use ID3DecisionStump instead (#2099).
- Added probabilities_file parameter to get the probabilities matrix of AdaBoost classifier (#2050).
- Fix STB header search paths (#2104).
- Add DISABLE_DOWNLOADS CMake configuration option (#2104).
- Add padding layer in TransposedConvolutionLayer (#2082).
- Fix pkgconfig generation on non-Linux systems (#2101).
- Use log-space to represent HMM initial state and transition probabilities (#2081).
- Add functions to access parameters of Convolution and AtrousConvolution layers (#1985).
- Add a ComputeError function to LARS regression and change the Train function to return the computed error (#2139).
- Add Julia bindings (#1949). Build settings can be controlled with the BUILD_JULIA_BINDINGS=(ON/OFF) and JULIA_EXECUTABLE=/path/to/julia CMake parameters.
- CMake fix for finding STB include directory (#2145).
- Add bindings for loading and saving images (#2019); mlpack_image_converter from the command line, mlpack.image_converter() from Python.
- Add normalization support for CF binding (#2136).
- Add Mish activation function (#2158).
- Update init_rules in AMF to allow users to merge two initialization rules (#2151).
- Add GELU activation function (#2183).
- Better error handling of eigendecompositions and Cholesky decompositions (#2088, #1840).
- Add LiSHT activation function (#2182).
- Add Valid and Same Padding for Transposed Convolution layer (#2163).
- Add CELU activation function (#2191).
- Add Log-Hyperbolic-Cosine Loss function (#2207).
- Change neural network types to avoid unnecessary use of rvalue references (#2259).
- Bump minimum Boost version to 1.58 (#2305).
- Refactor STB support so the HAS_STB macro is not needed when compiling against mlpack (#2312).
- Add Hard Shrink Activation Function (#2186).
- Add Soft Shrink Activation Function (#2174).
- Add Hinge Embedding Loss Function (#2229).
- Add Cosine Embedding Loss Function (#2209).
- Add Margin Ranking Loss Function (#2264).
- Bugfix for incorrect parameter vector sizes in logistic regression and softmax regression (#2359).
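The log-space change for HMM probabilities (#2081) follows a standard numerical pattern: store log-probabilities and combine them with log-sum-exp, so long products of small transition probabilities never underflow to zero. A minimal sketch of the idea (not mlpack's code):

```python
import math

def log_sum_exp(log_values):
    # Numerically stable log(sum(exp(v) for v in log_values)):
    # factor out the maximum before exponentiating.
    m = max(log_values)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(v - m) for v in log_values))

# Multiplying probabilities becomes adding log-probabilities;
# summing over states uses log_sum_exp instead of a plain sum.
log_p = [math.log(0.5), math.log(0.25), math.log(0.25)]
total = log_sum_exp(log_p)  # log(0.5 + 0.25 + 0.25) = log(1) = 0
```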
mlpack 3.2.1
mlpack 3.2.0
Released Sept. 25, 2019.
- Fix occasionally-failing RADICAL test (#1924).
- Fix gcc 9 OpenMP compilation issue (#1970).
- Added support for loading and saving of images (#1903).
- Added functionality for scaling of data (#1876); see the command-line binding mlpack_preprocess_scale or Python binding preprocess_scale().
- Add new parameter maximum_depth to decision tree and random forest bindings (#1916).
- Fix prediction output of softmax regression when test set accuracy is calculated (#1922).
- Pendulum environment now checks for termination. All RL environments now have an option to terminate after a set number of time steps (no limit by default) (#1941).
- Add support for probabilistic KDE (kernel density estimation) error bounds when using the Gaussian kernel (#1934).
- Fix negative distances for cover tree computation (#1979).
- Fix cover tree building when all pairwise distances are 0 (#1986).
- Improve KDE pruning by reclaiming unused error tolerance (#1954, #1984).
- Optimizations for sparse matrix accesses in z-score normalization for CF (#1989).
- Add kmeans_max_iterations option to the GMM training binding gmm_train_main.
- Bump minimum Armadillo version to 8.400.0 due to ensmallen dependency requirement (#2015).
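The data scaling added in #1876 (and the z-score normalization used by CF) centers each feature and divides by its standard deviation. A conceptual sketch of standard scaling in plain Python, not the mlpack_preprocess_scale binding itself:

```python
import math

def standard_scale(values):
    # Z-score scaling: subtract the mean, divide by the (population)
    # standard deviation, so the result has mean 0 and variance 1.
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(variance)
    return [(v - mean) / std for v in values]
```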
mlpack 3.1.1
Released May 26, 2019.
- Fix random forest bug for numerical-only data (#1887).
- Significant speedups for random forest (#1887).
- Random forest now has minimum_gain_split and subspace_dim parameters (#1887).
- Decision tree parameter print_training_error deprecated in favor of print_training_accuracy.
- output option changed to predictions for adaboost and perceptron bindings. Old options are now deprecated and will be preserved until mlpack 4.0.0 (#1882).
- Concatenated ReLU layer (#1843).
- Accelerate NormalizeLabels function using hashing instead of linear search (see src/mlpack/core/data/normalize_labels_impl.hpp) (#1780).
- Add ConfusionMatrix() function for checking performance of classifiers (#1798).
- Install ensmallen headers when it is downloaded during build (#1900).
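A confusion matrix of the kind ConfusionMatrix() computes can be sketched in a few lines of Python; this is the general concept, not mlpack's C++ API:

```python
def confusion_matrix(actual, predicted, num_classes):
    # matrix[i][j] counts points whose true class is i and which
    # were predicted as class j; the diagonal holds correct predictions.
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for a, p in zip(actual, predicted):
        matrix[a][p] += 1
    return matrix
```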
mlpack 3.1.0
Released April 25, 2019.
- Add DiagonalGaussianDistribution and DiagonalGMM classes to speed up the diagonal covariance computation and deprecate DiagonalConstraint (#1666).
- Add kernel density estimation (KDE) implementation with bindings to other languages (#1301).
- Where relevant, all models with a Train() method now return a double value representing the goodness of fit (i.e., final objective value, error, etc.) (#1678).
- Add implementation for linear support vector machine (see src/mlpack/methods/linear_svm).
- Change DBSCAN to use PointSelectionPolicy and add OrderedPointSelection (#1625).
- Residual block support (#1594).
- Bidirectional RNN (#1626).
- Dice loss layer (#1674, #1714) and hard sigmoid layer (#1776).
- output option changed to predictions and output_probabilities to probabilities for Naive Bayes binding (mlpack_nbc/nbc()). Old options are now deprecated and will be preserved until mlpack 4.0.0 (#1616).
- Add support for Diagonal GMMs to HMM code (#1658, #1666). This can provide a large speedup when a diagonal GMM is acceptable as an emission probability distribution.
- Python binding improvements: check parameter type (#1717), avoid copying Pandas dataframes (#1711), handle Pandas Series objects (#1700).
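The speedup behind DiagonalGaussianDistribution comes from the covariance matrix being diagonal: the multivariate density factors into d independent one-dimensional Gaussians, so evaluating the log-density is O(d) with no Cholesky factorization or matrix solve. A sketch of that idea (not mlpack's implementation):

```python
import math

def diag_gaussian_log_pdf(x, mean, variances):
    # With a diagonal covariance, the log-density is just a sum over
    # dimensions of one-dimensional Gaussian log-densities.
    log_p = 0.0
    for xi, mi, vi in zip(x, mean, variances):
        log_p += -0.5 * (math.log(2.0 * math.pi * vi) + (xi - mi) ** 2 / vi)
    return log_p
```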
mlpack 3.0.4
mlpack 3.0.3
Released July 27th, 2018.
- Fix Visual Studio compilation issue (#1443).
- Allow running local_coordinate_coding binding with no initial_dictionary parameter when input_model is not specified (#1457).
- Make use of OpenMP optional via the CMake USE_OPENMP configuration variable (#1474).
- Accelerate FNN training by 20-30% by avoiding redundant calculations (#1467).
- Fix math::RandomSeed() usage in tests (#1462, #1440).
- Generate better Python setup.py with documentation (#1460).
mlpack 3.0.2
Released June 8th, 2018.
- Documentation generation fixes for Python bindings (#1421).
- Fix build error for man pages if command-line bindings are not being built (#1424).
- Add shuffle parameter and Shuffle() method to KFoldCV (#1412). This will shuffle the data when the object is constructed, or when Shuffle() is called.
- Added neural network layers: AtrousConvolution (#1390), Embedding (#1401), and LayerNorm (layer normalization) (#1389).
- Add Pendulum environment for reinforcement learning (#1388) and update Mountain Car environment (#1394).
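The Shuffle() behavior added to KFoldCV (#1412) amounts to permuting the data once, then cutting it into k folds. A small index-based sketch of that scheme (conceptual; mlpack's KFoldCV is a C++ class):

```python
import random

def k_fold_indices(n, k, seed=0):
    # Shuffle the indices once, then cut them into k equal folds,
    # mirroring what a k-fold splitter does when shuffling is enabled.
    # (Assumes n is divisible by k; leftover points are dropped here.)
    indices = list(range(n))
    random.Random(seed).shuffle(indices)
    fold_size = n // k
    return [indices[i * fold_size:(i + 1) * fold_size] for i in range(k)]
```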
mlpack 3.0.1
Released May 10th, 2018.
- Fix intermittently failing tests (#1387).
- Add Big-Batch SGD (BBSGD) optimizer in src/mlpack/core/optimizers/bigbatch_sgd (#1131).
- Fix simple compiler warnings (#1380, #1373).
- Simplify NeighborSearch constructor and Train() overloads (#1378).
- Add warning for OpenMP setting differences (#1358/#1382). When mlpack is compiled with OpenMP but another application linking against mlpack is not (or vice versa), a compilation warning will now be issued.
- Restructured loss functions in src/mlpack/methods/ann/ (#1365).
- Add environments for reinforcement learning tests (#1368, #1370, #1329).
- Allow single outputs for multiple timestep inputs for recurrent neural networks (#1348).
- Neural networks: add He and LeCun normal initializations (#1342), add FReLU and SELU activation functions (#1346, #1341), add alpha-dropout (#1349).