{"payload":{"pageCount":4,"repositories":[{"type":"Public","name":"Voice-Conversational-Chatbot","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":2,"forksCount":2,"license":null,"participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,24,0,0,0,0,0,0,0,0,0,0,0,0,7],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-21T00:59:52.502Z"}},{"type":"Public","name":"Phi-3-128k","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,0,0,2,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-13T20:33:54.106Z"}},{"type":"Public","name":"Llama-2-13b-hf","owner":"inferless","isFork":false,"description":"Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":3,"license":null,"participation":[0,0,0,0,14,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,8,0,5,0,4,0,0,0,0,0,0,0,0,0,0,0,0,7,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-13T17:32:16.986Z"}},{"type":"Public template","name":"Llama-2-7b-chat","owner":"inferless","isFork":false,"description":"Llama 2 7B Chat is the smallest chat model in the Llama 2 family of large language models developed by Meta AI. This model has 7 billion parameters and was pretrained on 2 trillion tokens of data from publicly available sources. It has been fine-tuned on over one million human-annotated instruction datasets","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-13T17:25:55.352Z"}},{"type":"Public","name":"RealVisXL_V4.0_Lightning","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":3,"license":null,"participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,17,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-12T19:33:49.450Z"}},{"type":"Public","name":"inferless_template_streaming","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-10T06:16:55.893Z"}},{"type":"Public template","name":"stable-diffusion-2-1","owner":"inferless","isFork":false,"description":"This stable-diffusion-2-1 model is fine-tuned from stable-diffusion-2 (768-v-ema.ckpt) with an additional 55k steps on the same dataset (with punsafe=0.1), and then fine-tuned for another 155k extra steps with punsafe=0.98.","allTopics":["text-to-image"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":6,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-08T11:10:04.675Z"}},{"type":"Public template","name":"stable-diffusion-xl","owner":"inferless","isFork":false,"description":"SDXL consists of an ensemble of experts pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.","allTopics":["text-to-image"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":9,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-08T09:45:03.564Z"}},{"type":"Public template","name":"stable-diffusion-xl-turbo","owner":"inferless","isFork":false,"description":"SDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis. SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the technical report), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality. ","allTopics":["text-to-image"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":3,"forksCount":8,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-06-08T09:11:28.142Z"}},{"type":"Public","name":"Donut-docVQA","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-31T12:09:02.843Z"}},{"type":"Public","name":"Flan-UL2","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-31T11:57:14.734Z"}},{"type":"Public template","name":"Timesfm","owner":"inferless","isFork":false,"description":"TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.","allTopics":["time-series"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":2,"forksCount":2,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-25T07:35:29.377Z"}},{"type":"Public template","name":"google-Paligemma-3b","owner":"inferless","isFork":false,"description":"PaliGemma is a cutting-edge open vision-language model (VLM) developed by Google. It is designed to understand and generate detailed insights from both images and text, making it a powerful tool for tasks such as image captioning, visual question answering, object detection, and object segmentation.","allTopics":["image-text-to-text"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":2,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-21T06:15:17.791Z"}},{"type":"Public","name":"Customer-Service-Voicebot","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":2,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-17T17:47:32.042Z"}},{"type":"Public","name":"YouTube-Video-Summarizer","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-07T17:04:22.953Z"}},{"type":"Public","name":"Llama3-TenyxChat-70B","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-02T14:11:09.695Z"}},{"type":"Public template","name":"Llama-3-GPTQ","owner":"inferless","isFork":true,"description":"Llama 3 is an auto-regressive language model, leveraging a refined transformer architecture.It incorporate supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to ensure alignment with human preferences.","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":5,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-02T13:52:43.191Z"}},{"type":"Public template","name":"Llama-3","owner":"inferless","isFork":false,"description":"Llama 3 is an auto-regressive language model, leveraging a refined transformer architecture.It incorporate supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to ensure alignment with human preferences.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":2,"forksCount":5,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-02T13:52:43.191Z"}},{"type":"Public","name":"MeloTTS","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-30T20:42:48.224Z"}},{"type":"Public template","name":"Mistral-7B","owner":"inferless","isFork":false,"description":"The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":1,"forksCount":6,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-24T08:26:05.253Z"}},{"type":"Public","name":"Musicgen-stereo-melody-large","owner":"inferless","isFork":false,"description":"","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-17T18:57:42.572Z"}},{"type":"Public template","name":"Distil-whisper-large-v2","owner":"inferless","isFork":false,"description":"Distil-Whisper is a distilled version of the Whisper model that is 6 times faster, 49% smaller, and performs within 1% WER on out-of-distribution evaluation sets. This is the repository for distil-large-v2, a distilled variant of Whisper large-v2.","allTopics":["automatic-speech-recognition"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-08T04:14:09.515Z"}},{"type":"Public template","name":"NeuralHermes-2.5-Mistral-7B-GPTQ","owner":"inferless","isFork":false,"description":"NeuralHermes is based on the teknium/OpenHermes-2.5-Mistral-7B model that has been further fine-tuned with Direct Preference Optimization (DPO) using the mlabonne/chatml_dpo_pairs dataset. It surpasses the original model on most benchmarks. It is directly inspired by the RLHF process described by Intel/neural-chat-7b-v3-1.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-07T10:15:28.081Z"}},{"type":"Public template","name":"Openchat-3.5","owner":"inferless","isFork":false,"description":"OpenChat-3.5 is a fined tuned model from Mistral 7B. OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":1,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-07T10:06:07.881Z"}},{"type":"Public template","name":"MS-marco-MiniLM-L-12-v2","owner":"inferless","isFork":false,"description":"MS-marco-MiniLM-L-12-v2 model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. ","allTopics":["text-embedding"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-07T09:51:47.070Z"}},{"type":"Public template","name":"Multilingual-e5-large","owner":"inferless","isFork":false,"description":"This is a sentence embedding model, initialized from xlm-roberta-large and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation.","allTopics":["text-embedding"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":1,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-07T09:35:24.343Z"}},{"type":"Public template","name":"SAM","owner":"inferless","isFork":false,"description":"The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. ","allTopics":["image-segmentation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":1,"forksCount":2,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-07T08:54:59.071Z"}},{"type":"Public template","name":"Neural-chat-7b-v3-1","owner":"inferless","isFork":false,"description":"Neural-chat-7b-v3-1 model is a fine-tuned 7B parameter LLM on the Intel Gaudi 2 processor from the mistralai/Mistral-7B-v0.1 on the open source dataset Open-Orca/SlimOrca. The model was aligned using the Direct Performance Optimization (DPO) method with Intel/orca_dpo_pairs.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-04T17:08:51.622Z"}},{"type":"Public template","name":"Llama-2-70B-Chat-GPTQ","owner":"inferless","isFork":false,"description":"About Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the 70B fine-tuned GPTQ quantized model, optimized for dialogue use cases.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-04T16:56:37.835Z"}},{"type":"Public template","name":"Llama-2-13B-chat-GPTQ","owner":"inferless","isFork":false,"description":"Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the 13B fine-tuned GPTQ quantized model, optimized for dialogue use cases.","allTopics":["text-generation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-04T16:48:20.818Z"}}],"repositoryCount":116,"userInfo":null,"searchable":true,"definitions":[],"typeFilters":[{"id":"all","text":"All"},{"id":"public","text":"Public"},{"id":"source","text":"Sources"},{"id":"fork","text":"Forks"},{"id":"archived","text":"Archived"},{"id":"template","text":"Templates"}],"compactMode":false},"title":"Repositories"}