
Releases: ppdebreuck/modnet

v0.4.4

07 May 14:09

What's Changed

  • Usability tweaks & new featurizer preset by @ml-evs in #215

Full Changelog: v0.4.3...v0.4.4

v0.4.3

05 Apr 10:42

What's Changed

New Contributors

Full Changelog: v0.4.2...v0.4.3

v0.4.2

02 Apr 15:47

What's Changed

  • Deprecated BayesianMODNetModel and update deps by @ml-evs in #182
  • Fix issue with fit_preset invoking fit incorrectly during refit by @ml-evs in #181
  • 3.10 compatibility by @ppdebreuck in #198
  • Improve evaluate (custom loss, ...) by @ppdebreuck in #194
  • Drop Python 3.8 and update other deps by @ml-evs in #201
  • Bump matminer version by @ml-evs in #199
  • Attempt at bumping pymatgen and matminer by @ml-evs in #203
  • Backwards compatibility of test data with pymatgen by @ml-evs in #206
  • Properly handle Bayesian model import failure by @ml-evs in #207

Full Changelog: v0.4.1...v0.4.2

v0.4.1

29 Jul 11:00

What's Changed

  • Fixed refit=0 in FitGenetic: it behaves as before, i.e. an ensemble of the 10 best architectures, each ensembled over the nested folds (5 by default). A usage sketch follows this list.
  • Bump pymatgen from 2023.1.30 to 2023.7.20, compatible with cython 3
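
A minimal sketch of the fixed refit=0 behaviour, assuming a pre-featurized, feature-selected MODData saved to disk (the file name is hypothetical, and run() accepts further options not shown here):

```python
# Sketch of FitGenetic with refit=0 (the behaviour fixed in this release).
from modnet.preprocessing import MODData
from modnet.hyper_opt import FitGenetic

train_data = MODData.load("train_moddata.pkl.gz")  # assumed pre-featurized & feature-selected

ga = FitGenetic(train_data)
# refit=0: instead of refitting a single model, return an ensemble of the
# 10 best architectures, each ensembled over the nested (default 5) folds.
model = ga.run(refit=0)
```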

v0.4.0

17 Jul 13:38

What's Changed

  • /!\ New default model architecture
    v0.4.0 changes the default architecture of all MODNet models. Models can now predict vector targets directly while keeping the joint-learning architecture; previously, each component had to be learned as an individual joint property, which can be slow when the output dimensionality is high. In essence, the architecture moves to joint learning on vectors. A usage sketch follows this list.
    Previously saved models are still compatible and will be loaded with the old architecture. Please consider retraining your saved models in the near future, as modnet will transition to v1.0 without support for the old architecture.
    See #89 and #155 by @ppdebreuck

  • Option to remove (or keep) features that are entirely NaN
    by @gbrunin in #157
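
A minimal sketch of training on a vector target under the new architecture (the property name and file are hypothetical, and the exact way vector targets are declared is detailed in #89/#155):

```python
# Sketch of joint learning on a vector-valued target (new in v0.4.0).
from modnet.preprocessing import MODData
from modnet.models import MODNetModel

# Assumed: a featurized MODData whose target has several components.
data = MODData.load("vector_target_moddata.pkl.gz")

# The nested-list syntax groups properties that share the deeper layers
# of the network; here a single vector-valued property is learned jointly.
model = MODNetModel([[["dielectric_tensor"]]], weights={"dielectric_tensor": 1.0})
model.fit(data)
predictions = model.predict(data)  # DataFrame with the predicted components
```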

Full Changelog: v0.3.1...v0.4.0

v0.3.1

11 Jul 14:38

What's Changed

Full Changelog: v0.3.0...v0.3.1

v0.3.0

01 Jun 14:32

What's Changed

  1. Impute missing values by @gbrunin in #149
    After featurization, NaNs are no longer replaced by 0, and infinite values are replaced by NaNs. The remaining NaNs are handled when fitting the model by a SimpleImputer, which can be chosen by the user; it is stored as an attribute of the model and re-used when predicting new values. The scaler can also be chosen (StandardScaler or MinMaxScaler), and the user can decide whether to impute first and then scale, or to scale first and then impute. Both orders can be argued for: do we want to keep the same distribution as the initial feature, or to change it by moving the imputed values outside the distribution? A sketch of both orderings follows this list.

  2. New featurizer presets by @gbrunin in #150
    The full list of featurizer presets is now (a usage sketch follows this list):

  • DeBreuck2020Featurizer
  • CompositionOnlyFeaturizer
  • Matminer2023Featurizer
  • MatminerAll2023Featurizer
  • CompositionOnlyMatminer2023Featurizer
  • CompositionOnlyMatminerAll2023Featurizer
    This release also adds the possibility to use only features that are continuous with respect to the composition. Some features are by nature not continuous, which can lead to unphysical discontinuities when predicting a property as a function of the material's composition.
  3. Introducing better customisation by @ppdebreuck in #148 (a sketch follows this list)
  • Feature selection can now be run on only a subset of the properties present in the MODData: feature_selection() enables this with ignore_names.
  • By default, FitGenetic proceeds by joint learning when multiple targets are given in the MODData. This can now be avoided by using ignore_names in FitGenetic().
  • MODNetModel.fit() can take optional fit_params that are passed through to the Keras model.fit().
  • fit_params can also be passed to FitGenetic.run().
  • MODNetModel.fit() can take a custom loss function.
  • FitGenetic() can take a custom loss function.
  • Custom data can be passed through MODNetModel.fit(); it is appended to the targets (axis=-1), which can be useful for defining custom loss functions.
  • Any property called custom_data in FitGenetic is ignored and appended to the targets (axis=-1), again useful for defining custom loss functions.
  4. Add get_params and set_params to the MODNet model by @gbrunin in #151
    This includes renaming the EnsembleMODNetModel `modnet_models` argument to `models`.
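
For item 1, a minimal sketch of the two impute/scale orderings using the scikit-learn components named above (the MODNet-side keyword names that select these options are not shown here and may differ):

```python
# Sketch of "impute then scale" vs "scale then impute" (item 1).
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, np.inf], [2.0, 4.0], [3.0, np.nan]])
X[~np.isfinite(X)] = np.nan  # infinite values become NaNs, as in #149

imputer = SimpleImputer(strategy="constant", fill_value=-1.0)
scaler = StandardScaler()  # MinMaxScaler is the other supported choice

# Impute then scale: the fill value is scaled together with the real data,
# so it stays inside the transformed feature distribution.
X_impute_first = scaler.fit_transform(imputer.fit_transform(X))

# Scale then impute: the scaler ignores NaNs, which are then filled with
# a raw constant sitting outside the scaled distribution.
X_scale_first = imputer.fit_transform(scaler.fit_transform(X))
```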
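For item 2, a sketch of selecting one of the new presets when building a MODData (assuming the 2023 presets are exported from modnet.featurizers.presets like the existing ones; compositions and target values are placeholders):

```python
# Sketch of using a new featurizer preset (item 2).
from modnet.featurizers.presets import CompositionOnlyMatminer2023Featurizer
from modnet.preprocessing import MODData
from pymatgen.core import Composition

compositions = [Composition("SiO2"), Composition("GaAs")]
band_gaps = [[8.9], [1.4]]  # placeholder target values (eV)

data = MODData(
    materials=compositions,
    targets=band_gaps,
    target_names=["band_gap"],
    featurizer=CompositionOnlyMatminer2023Featurizer(),
)
data.featurize()  # runs only the composition-based featurizers of the preset
```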
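For item 3, a sketch combining ignore_names, a custom loss, and fit_params (the keyword names follow the descriptions above and may differ slightly from the released signatures; the file and property names are hypothetical):

```python
# Sketch of the new customisation hooks (item 3).
import tensorflow as tf
from modnet.preprocessing import MODData
from modnet.models import MODNetModel

data = MODData.load("train_moddata.pkl.gz")  # assumed featurized MODData

# Run feature selection on a subset of the properties only.
data.feature_selection(n=200, ignore_names=["aux_property"])

def my_loss(y_true, y_pred):
    # Hypothetical custom loss: a hand-written mean absolute error.
    return tf.reduce_mean(tf.abs(y_true - y_pred))

model = MODNetModel([[["band_gap"]]], weights={"band_gap": 1.0})
model.fit(
    data,
    loss=my_loss,                  # custom loss function
    fit_params={"shuffle": True},  # forwarded to Keras model.fit()
)
```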

New Contributors 🎉

  • @gbrunin made their first contribution in #149
    Thanks, Guillaume Brunin!

Full Changelog: v0.2.1...v0.3.0

v0.2.1

09 Feb 15:37

What's Changed

  • Add support for Python 3.9 and 3.10 by @ml-evs in #119
  • Update dependency pins, compatibility with TF 2.11 and add install notes by @ml-evs in #122
  • Transfer some info from README into docs by @ml-evs in #136

Full Changelog: v0.2.0...v0.2.1

v0.2.0

07 Feb 16:37

What's Changed

  • Add new default feature preset and updates for new matminer & pymatgen versions by @ml-evs in #101
  • Bump tensorflow from 2.10.0 to 2.10.1 by @dependabot in #112
  • Fix verbosity by @ppdebreuck in #128
  • Replace deprecated NumPy and Tensorflow calls by @ml-evs in #123
  • Add mode where each featurizer is applied individually by @ml-evs in #127

Full Changelog: v0.1.13...v0.2.0

v0.1.13

04 Nov 15:44

What's Changed

  • Add pinned requirements file by @ml-evs in #94
  • Make sure new deps do not get overwritten by CI by @ml-evs in #99
  • Add instructions for installing pinned requirements and prepare release by @ml-evs in #108

Full Changelog: v0.1.12...v0.1.13