Welcome to fusilli! Got multimodal data, but not sure how to combine it for your machine learning task? Look no further: this library lets you compare different fusion methods on your multimodal data.
| Problem | Solution |
|---|---|
| You have a multimodal dataset: either two types of tabular data, or one type of tabular data and one type of image data. 🩻 📈 | Ever thought they might be more powerful together? Fusilli can help you find out whether multimodal fusion is right for you! ✨ |
| You've looked at methods for multimodal fusion and thought "wow, that's a lot of code" and "wow, there are so many names for the same concept". 🤔 🆘 | So relatable. Fusilli gives you a simple way to compare multimodal fusion models without trawling through Google Scholar! ✨ |
| You've found a multimodal fusion method you want to try, but you're not sure how to implement it, or it's not quite right for your data. 😵‍💫 🙌 | Fusilli lets you modify existing methods (for example, changing a model's architecture) and provides templates for implementing new ones! ✨ |
- introduction
- fusion_model_explanations
- installation
- data_loading
- experiment_setup
- quick_start

- choosing_model
- modifying_models
- customising_training
- logging_with_wandb
- glossary

- auto_examples/training_and_testing/index
- auto_examples/model_comparison/index
- auto_examples/customising_behaviour/index

- developers_guide
- contributing_examples/index

- fusilli.fusionmodels
- fusilli.data
- fusilli.train
- fusilli.eval
- fusilli.utils