An autoencoder model to extract features from images and obtain their compressed vector representation, inspired by the convolutional VGG16 architecture.
Beta Machine Learning Toolkit
A Python package for an autoencoder-based algorithm to detect anomalies in distributed acoustic sensing (DAS) datasets.
A mini-project series covering fundamental deep learning concepts, built for practice.
Skeleton-based Self-Supervised Feature Extraction for Improved Dynamic Hand Gesture Recognition
Repository made for a summer internship at LNMIIT Jaipur '24.
A simple transformer-based autoencoder model
A PyTorch implementation of a Sparse Autoencoder (SAE) using MSE loss and a KL-divergence sparsity penalty.
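The sparsity penalty in such an SAE is typically the KL divergence between a target activation rate ρ and each hidden unit's mean activation ρ̂. A minimal NumPy sketch of that penalty term (function and parameter names are illustrative, not taken from the repository):

```python
import numpy as np

def kl_sparsity_penalty(hidden_activations, rho=0.05, eps=1e-8):
    """Sum over hidden units of KL(rho || rho_hat_j).

    hidden_activations: (batch, hidden) array of activations in (0, 1),
    e.g. sigmoid outputs. rho is the target mean activation rate.
    """
    rho_hat = hidden_activations.mean(axis=0)   # mean activation per hidden unit
    rho_hat = np.clip(rho_hat, eps, 1 - eps)    # avoid log(0)
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

# The full SAE objective would then be something like:
#   loss = mse(x, x_reconstructed) + beta * kl_sparsity_penalty(h)
```

The penalty is zero when every unit's mean activation equals ρ and grows as units become more active than the target, pushing the hidden code toward sparsity.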
Implements the Tsetlin Machine, Coalesced Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, and Weighted Tsetlin Machine, with support for continuous features, drop clause, Type III Feedback, focused negative sampling, multi-task classifier, autoencoder, literal budget, and one-vs-one multi-class classifier. TMU is writ…
NIRS-VIS is a Master's thesis project for decoding visual stimuli from fNIRS brain data with transformers and autoencoders in PyTorch.
The project presents a deep learning model with an autoencoder-like architecture that uses convolutional layers in both the encoder and the decoder to perform image inpainting on CIFAR-10 images, reaching a mean squared-error value of 0.007867775.
A project leveraging machine learning for the identification and classification of glomeruli in renal biopsy images. Utilizes SegNet and U-Net for segmentation and explores unsupervised clustering for sclerosed glomeruli classification
Autoencoder U-Nets and DCGANs for the colourization of CIFAR-10 images.
Detecting GW signals from extreme mass ratio inspirals using convolutional autoencoders
This project is a practice implementation of an autoencoder. The primary use case is anomaly detection in sales data, but it can be adapted for other purposes. The autoencoder compresses the input data into a lower-dimensional representation and then reconstructs the original input from this representation.
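The compress-then-reconstruct idea behind such anomaly detection can be sketched with the simplest possible autoencoder, a linear one fit via SVD (equivalent to PCA): points the model reconstructs poorly are flagged as anomalies. This is an illustration of the principle, not the repository's implementation:

```python
import numpy as np

def fit_linear_autoencoder(X, k=1):
    """Fit a k-dimensional linear autoencoder via SVD (equivalent to PCA).

    Returns (mean, W) so that encode(x) = (x - mean) @ W.T
    and decode(z) = z @ W + mean.
    """
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, W):
    """Per-sample mean squared reconstruction error."""
    Z = (X - mu) @ W.T      # encode: compress to k dimensions
    X_hat = Z @ W + mu      # decode: reconstruct from the compressed code
    return ((X - X_hat) ** 2).mean(axis=1)

# Rows whose error exceeds a threshold (e.g. mean + 3 * std of the
# training errors) would be flagged as anomalies.
```

A neural autoencoder replaces the linear encode/decode maps with learned nonlinear ones, but the anomaly score is the same reconstruction error.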
Compresses Twemoji emojis down to 32 bytes (8 4-bit floating point numbers).
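Storing a latent vector as 4-bit values, as the description above mentions, amounts to quantizing each component to one of 16 levels and packing two codes per byte. A minimal NumPy sketch using uniform integer quantization (a simplification; the repository's 4-bit floating-point format may differ):

```python
import numpy as np

def quantize_4bit(x, lo=-1.0, hi=1.0):
    """Map floats in [lo, hi] to integer codes 0..15 (4 bits each)."""
    return np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * 15).astype(np.uint8)

def dequantize_4bit(codes, lo=-1.0, hi=1.0):
    """Inverse map: codes 0..15 back to floats in [lo, hi]."""
    return codes.astype(np.float32) / 15 * (hi - lo) + lo

def pack_nibbles(codes):
    """Pack an even-length sequence of 4-bit codes two-per-byte."""
    return bytes((int(codes[i]) << 4) | int(codes[i + 1])
                 for i in range(0, len(codes), 2))
```

Eight 4-bit codes pack into 4 bytes this way; the maximum round-trip error per component is half a quantization step, i.e. (hi − lo)/30.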
Joint variational Autoencoders for Multimodal Imputation and Embedding (JAMIE)
Homeworks from the Deep Learning course, UniPD - DEI, 2021/22.
Anomaly Detection for Bearing Failures Prediction.