A Repo to store the Google Colaboratory Notebooks that I have created and shared
Updated Apr 12, 2024 - Jupyter Notebook
A set of Jupyter notebooks
Implementations of different ML tasks on the Kaggle platform with GPUs.
VGG | ResNet | AlexNet | SqueezeNet | DenseNet | Inception
The repository includes Jupyter notebooks with deep learning examples in Python.
Stable diffusion One-click fine tuning colab notebook (A100)
Notebooks of pre-trained models using the HAM10000 dataset
Jupyter notebooks to fine-tune Whisper models on Vietnamese using Colab, Kaggle, and/or AWS EC2
Fine-Tune Your Own Llama 2 Model LOCALLY in a Colab Notebook
Notebooks to fine-tune `bert-small-amharic`, `bert-mini-amharic`, and `xlm-roberta-base` models using an Amharic text classification dataset and the Transformers library
Dive into Diverse Topics with Hands-on Python Notebooks
The notebook shows how deep learning tools (TensorFlow/Keras and PyTorch) work in practice.
This notebook contains the code for fine-tuning the OpenAI GPT-3.5-Turbo API to specialize in answering Algerian BAC students' questions within the Algerian BAC context.
Colab notebook for fine-tuning Microsoft's Phi-2-3B LLM to solve mathematical word problems using QLoRA
Can be used as-is for better slashed-zero recognition (especially with the Consolas font). This repository includes a Jupyter notebook with instructions to train/fine-tune a Tesseract OCR model.
Pre-Training and Fine-Tuning transformer models using PyTorch and the Hugging Face Transformers library. Whether you're delving into pre-training with custom datasets or fine-tuning for specific classification tasks, these notebooks offer explanations and code for implementation.
This repository contains the Jupyter notebooks used to take part in the competitions created for the Artificial Neural Networks and Deep Learning exam at Politecnico di Milano.
This repository contains a notebook for object detection via fine-tuning of a (TF2-friendly) RetinaNet architecture on very few examples of a novel class, after initializing from a pre-trained COCO checkpoint. Training runs in eager mode.
This repo gathers the different notebooks created in the context of Micro Club's 1st wave of internal trainings
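Several of the repositories above follow the same transfer-learning recipe: take a pretrained backbone (VGG, ResNet, etc.), freeze its weights, and train only a new task-specific classification head. A minimal PyTorch sketch of that pattern, using a toy backbone and placeholder class count in place of a real pretrained model and dataset:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10  # placeholder: size of the new target label set

# Toy stand-in for a pretrained backbone; a real notebook would load
# e.g. a torchvision ResNet with pretrained weights instead.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
head = nn.Linear(8, NUM_CLASSES)  # new task-specific classifier
model = nn.Sequential(backbone, head)

# Freeze the backbone so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Optimize only the trainable (head) parameters.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One dummy training step to show the loop structure.
x = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, NUM_CLASSES, (4,))
loss = nn.functional.cross_entropy(model(x), labels)
loss.backward()
optimizer.step()
```

QLoRA-style LLM fine-tuning (as in the Phi-2 and Llama 2 notebooks) replaces the frozen-backbone step with quantized weights plus low-rank adapter layers, but the overall idea of training only a small set of new parameters is the same.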