Merging refers to the ensemble-learning idea of combining multiple models to create a single stronger, more robust one (aka model merging). Commonly implemented methods include the following:

  • Ensemble learning: This method involves training multiple models separately and then combining their predictions. The combination can be done in various ways, such as averaging, weighted averaging, or voting (see the sketch after this list).
  • Model stacking (aka meta-learning): The predictions of multiple models are used as input features for a new model, the meta-learner, which produces the final prediction.
  • Model blending: Similar to model stacking, blending combines the predictions of multiple models. However, instead of using a meta-learner, blending typically involves a simpler approach such as taking the average of predictions.
  • Bayesian Model Combination: This advanced technique involves using Bayesian methods to combine models. It takes into account the uncertainty in the predictions of each model and can be more effective than simple averaging or voting in certain cases.
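As a minimal sketch of the first three approaches, here is one way to express them with scikit-learn; the dataset, base models, and hyperparameters are all illustrative choices, not prescribed by this document. Bayesian model combination is omitted, since it additionally requires estimating posterior weights over the models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [
    ("rf", RandomForestClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
    ("svc", SVC(probability=True, random_state=0)),
]

# Ensemble learning: soft voting averages the predicted class probabilities.
voting = VotingClassifier(estimators=base_models, voting="soft").fit(X_train, y_train)

# Model stacking: base-model predictions become inputs to a meta-learner.
stacking = StackingClassifier(
    estimators=base_models, final_estimator=LogisticRegression()
).fit(X_train, y_train)

# Model blending: no meta-learner; simply average the models' probability
# predictions and take the most likely class.
fitted = [model.fit(X_train, y_train) for _, model in base_models]
blend_proba = np.mean([model.predict_proba(X_test) for model in fitted], axis=0)
blend_pred = blend_proba.argmax(axis=1)

print(f"voting:   {voting.score(X_test, y_test):.3f}")
print(f"stacking: {stacking.score(X_test, y_test):.3f}")
print(f"blending: {(blend_pred == y_test).mean():.3f}")
```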

Not in scope:

  • Feature union: This technique is used primarily in data preprocessing, where features generated by different models or transformations are combined into a single feature set.
  • Cascade generalization: This method involves using the predictions of one model as an input feature for another model. Unlike stacking, where the meta-learner is trained after all base models are trained, in cascade generalization each model is trained sequentially, with each new model incorporating the predictions of the previous models as features (a minimal sketch follows this list).
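Although out of scope here, a two-stage sketch makes the sequential nature of cascade generalization concrete; the models and data below are again illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1 is trained on the raw features alone.
stage1 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Stage 2 sees the raw features plus stage 1's class probabilities.
X_train_aug = np.hstack([X_train, stage1.predict_proba(X_train)])
X_test_aug = np.hstack([X_test, stage1.predict_proba(X_test)])
stage2 = LogisticRegression(max_iter=1000).fit(X_train_aug, y_train)

print(f"cascade accuracy: {stage2.score(X_test_aug, y_test):.3f}")
```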