A CT-scan of your CNN
Updated Mar 27, 2023 - Python
Image to Text conversion
BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based method for learning language representations. It is a bidirectional transformer pre-trained on a large corpus with two objectives: masked language modeling and next sentence prediction.
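The masked language modeling objective mentioned above can be illustrated with a minimal sketch: a fraction of input tokens is replaced by a mask symbol, and the model is trained to recover the originals. This is a simplified, pure-Python illustration of the masking step only (the `mask_tokens` helper, the 15% rate, and the `[MASK]` string follow BERT's convention but are assumptions here, not code from any of the listed repositories):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Randomly replace ~15% of tokens with [MASK], as in BERT's MLM objective.

    Returns the masked sequence and per-position labels: the original token
    at masked positions (what the model must predict), None elsewhere
    (ignored by the training loss).
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)   # hide the token from the model
            labels.append(tok)          # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)         # position excluded from the loss
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, labels = mask_tokens(tokens)
```

In the real pre-training setup the masked positions are fed through the transformer and a softmax over the vocabulary predicts the hidden tokens; this sketch only shows how the training pairs are constructed.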
Image object detection using a pretrained model, served with FastAPI and a Streamlit UI
Port of the original Pascal VOC 2012 multilabel classification model from Caffe to PyTorch
Set of VGG neural net models for TensorFlow, with weights converted from PyTorch
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System (ACL 2022)
Implementation of finetuning BERT with low-rank adaptation (LoRA), using a gradient optimizer and an evolution-strategy optimizer.
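The low-rank idea behind LoRA can be sketched in a few lines: a frozen pretrained weight matrix W is augmented with a trainable update B @ A of rank r, so only the small A and B matrices need training. This is a minimal NumPy illustration under assumed dimensions, not the implementation from the repository above:

```python
import numpy as np

# LoRA: keep the pretrained weight W (d_out x d_in) frozen and learn a
# low-rank update delta_W = B @ A, with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init => delta_W = 0 at start

def forward(x):
    # Effective weight is W + B @ A; only A and B would receive gradients.
    return x @ (W + B @ A).T

x = rng.normal(size=(1, d_in))
y = forward(x)
```

Because B starts at zero, the adapted model is exactly the pretrained model at initialization, and training only touches the (d_out + d_in) * r LoRA parameters instead of the full d_out * d_in matrix.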
This project implements a real-time image and video object detection classifier using pretrained YOLOv3 models.
This repository contains deep learning architectures for the Intel Image Classification dataset.
We trained a CNN (Convolutional Neural Network) to predict sign language; pretrained models are also used as part of this model.
Pretrained Efficient DenseNet Model
Unity project that loads pretrained TensorFlow .pb model files and uses them for prediction
Library for handling atomistic graph datasets focusing on transformer-based implementations, with utilities for training various models, experimenting with different pre-training tasks, and a suite of pre-trained models with huggingface integrations
AI model implementations in TensorFlow
This AI aims to find Yoshi in a photo using a pretrained model
MUBen: Benchmarking the Uncertainty of Molecular Representation Models
Machine Learning Model Pack