# vqgan-clip

Here are 30 public repositories matching this topic...

VQGAN and CLIP are two separate machine learning models that can be used together to generate images from a text prompt. VQGAN is a generative adversarial network that is good at generating images resembling its training data (but not from a prompt), and CLIP is a neural network that can determine how well a c…

  • Updated Mar 15, 2022
  • Jupyter Notebook
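The pairing described above works as a feedback loop: CLIP scores how well the current image matches the prompt, and that score is used to nudge VQGAN's latent codes via gradient ascent. As a toy sketch of that loop (assumptions: the one-dimensional `latent`, the quadratic stand-in `clip_similarity`, and the fixed `target` are all hypothetical simplifications; real systems optimize high-dimensional VQGAN latents against an actual CLIP score):

```python
def clip_similarity(latent, target=3.0):
    """Stand-in for CLIP's image-text score: highest when latent == target."""
    return -(latent - target) ** 2

def similarity_grad(latent, target=3.0):
    """Analytic gradient of the stand-in score with respect to the latent."""
    return -2.0 * (latent - target)

def optimize_latent(latent=0.0, lr=0.1, steps=100):
    # Gradient ascent: repeatedly nudge the latent toward a higher
    # similarity score, just as VQGAN+CLIP nudges image latents toward
    # the text prompt. Here the latent converges to the target value.
    for _ in range(steps):
        latent += lr * similarity_grad(latent)
    return latent

print(round(optimize_latent(), 4))  # converges to 3.0
```

In the real pipeline the gradient comes from backpropagating CLIP's similarity score through the image decoder, but the control flow is the same: score, differentiate, update, repeat.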

Translating speech directly to images, without an intermediate text step, is an interesting and useful problem with potential applications in computer-aided design, human-computer interaction, art creation, etc. We have therefore focused on developing a deep-learning, GAN-based model that takes speech as input from the user, analyzes the emotions …

  • Updated Jun 25, 2022
  • Jupyter Notebook
ai-atelier

Based on Disco Diffusion, we have developed a Chinese & English version of the AI art creation software "AI Atelier". We offer both text-to-image models (Disco Diffusion and VQGAN+CLIP) and text-to-text models (GPT-J-6B and GPT-NeoX-20B) as options.

  • Updated Oct 16, 2022
  • Python
