All of the projects in this repo relate to the field of artificial intelligence.
- MNIST Classification Project Using ANN & CNN - 99.23% Accuracy
- MNIST Classification Using Conv2D With Dilation Rate 2, 2 Inputs and 1 Output - 99.31% Accuracy
- Food Vision Project With Transfer Learning Using ResNet50V2 & EfficientNetB0
- Multiclass Food Vision Project Without Transfer Learning
- Food Vision Project With Feature Extraction & Fine Tuning Using EfficientNetB0
- Pizza Steak Classification
- ACLImdb Movie Review Sentiment Analysis Using Conv1D & Bidirectional LSTMs
- Cat Vs Dog Classification
- Disaster Tweets Sentiment Classification
- PubMed 200k RCT: Sequential Sentence Classification in Medical Abstracts
- Time Series Forecasting of Bitcoin Prices Using ConvNet, LSTM & N-BEATS
- Fashion MNIST Classification Using ANN
- Used Car Price Prediction on the KaggleX Skill Dataset - Used LightGBM, XGBoost, and DNN
- Reconstructing MNIST Using Vanilla Autoencoders
- Denoising Autoencoders On MNIST dataset
- Colorization using Autoencoders on CIFAR10 dataset
- Implementation of "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" (DCGAN), a paper by Alec Radford, Luke Metz, and Soumith Chintala
- Implementation of InfoGAN
- Implementation of Least Squares GAN
- Implementation of Wasserstein GAN
- Arxiv34k4l - Multi-label Text Classification Project: Arxiv34k4l is a multi-label text classification model built using natural language processing (NLP) techniques. The project uses data sourced from the arXiv database, which contains a vast collection of academic papers spanning various disciplines. The main objective was to develop a model capable of classifying academic papers into multiple categories simultaneously based on their abstracts, reducing the workload of the human reviewers usually involved and automating the process.
- Implementation of ResNet(v1)-20 on CIFAR-10 Dataset: In this project, I implemented and trained a ResNet-20 (Residual Network) model on the CIFAR-10 dataset, based on the seminal paper "Deep Residual Learning for Image Recognition" by He et al. The objective was to classify images into 10 distinct classes using the ResNet-20 architecture, which addresses the vanishing gradient problem through the use of residual blocks and shortcut connections. To enhance the model's performance, I incorporated techniques such as data augmentation, batch normalization, and a custom learning rate scheduler. The model achieved an impressive test accuracy of 90.65%, demonstrating the effectiveness of residual learning for image recognition tasks and underscoring the power of deep neural networks in computer vision.
- Implementation of DenseNet-BC on CIFAR-10 Dataset: In this project, I implemented and trained a DenseNet-BC (Densely Connected Convolutional Network) model on the CIFAR-10 dataset, based on the influential paper "Densely Connected Convolutional Networks" by Gao Huang et al. The goal was to classify images into 10 classes using the DenseNet-BC architecture, which enhances information flow and mitigates the vanishing gradient problem through dense connectivity and bottleneck layers. Owing to computational constraints, training was stopped after 25 of the planned 300 epochs, with model checkpoints saved for analysis; at that point the model had reached a test accuracy of 85.72%, showcasing the robustness and effectiveness of the DenseNet architecture for image recognition tasks. My default setup is DenseNet-BC with dropout, but the configuration can also be adjusted for data augmentation, compression, or bottleneck-only variants, and parameters such as the growth rate, depth, and number of blocks are tunable as well.
- LeNet-5 Implementation: I implemented the LeNet-5 architecture for handwritten digit classification using the MNIST dataset, closely following the seminal paper "Gradient-Based Learning Applied to Document Recognition" by Yann LeCun et al. (1998). The model features two convolutional layers with tanh activation, followed by max pooling for downsampling, and fully connected layers with tanh and softmax activations. Deviations from the original include adjustments in the connection schemes and the use of softmax with categorical cross-entropy instead of radial basis functions and the paper's MAP-based loss function. The model achieved approximately 98.87% accuracy on the test set, demonstrating the robustness of this classic CNN design.
- [mini LLM] Coming Soon. In progress.
- Blue Book for Bulldozers Project - Predict the auction sale price
- Customer Personality Analysis - Analysis of a company's ideal customers
- InVitro Cell Research - Identifying age-related conditions
- Heart Disease Identification
- Predict car sales prices
- Titanic Survivors Prediction Using Machine Learning
- California House Price Prediction
- Spaceship Accident - Predict Alternate Dimension Travellers
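Several of the implementations above (ResNet-20, DenseNet-BC) rely on shortcut connections to keep gradients flowing through deep networks. A minimal NumPy sketch of the idea, using a fully connected residual block with hypothetical weight matrices `W1` and `W2` (illustrative only, not code from any project above):

```python
import numpy as np

def residual_block(x, W1, W2):
    """Forward pass of a toy residual block:
    out = relu(x + W2 @ relu(W1 @ x)).
    The identity shortcut adds the input back after the transform,
    which is what lets very deep networks train without vanishing
    gradients."""
    h = np.maximum(0.0, W1 @ x)          # inner transform + ReLU
    return np.maximum(0.0, x + W2 @ h)   # add shortcut, final ReLU

# Sanity check: with all-zero weights the transform contributes
# nothing, so the block reduces to relu(x) - the shortcut alone.
x = np.array([1.0, -2.0, 3.0])
W = np.zeros((3, 3))
print(residual_block(x, W, W))  # → [1. 0. 3.]
```

Real residual blocks (as in the ResNet-20 project) use convolutions and batch normalization instead of plain matrix products, but the shortcut structure is the same.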
more coming soon...
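As a small illustration of the denoising-autoencoder setup listed above: the training pairs are built by corrupting clean images with noise and asking the model to recover the originals. A hedged NumPy sketch of that corruption step (the `noise_factor` value here is illustrative, not the one used in the project):

```python
import numpy as np

def add_gaussian_noise(images, noise_factor=0.3, seed=0):
    """Corrupt images with pixel values in [0, 1] using Gaussian
    noise. A denoising autoencoder is then trained to map the
    noisy version back to the clean one."""
    rng = np.random.default_rng(seed)
    noisy = images + noise_factor * rng.standard_normal(images.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep a valid pixel range

clean = np.zeros((2, 28, 28))        # stand-in for MNIST digits
noisy = add_gaussian_noise(clean)
print(noisy.shape)                   # → (2, 28, 28)
```

The shape is preserved so (clean, noisy) pairs line up one-to-one as model targets and inputs.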