From 2cc67e13df61342e5df0c0fc5f714aca95918fe1 Mon Sep 17 00:00:00 2001
From: Dongxu
Date: Thu, 25 May 2023 08:35:42 +0800
Subject: [PATCH 1/2] Update README.md

---
 projects/blip-diffusion/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/projects/blip-diffusion/README.md b/projects/blip-diffusion/README.md
index d07b6542..ddc1b61e 100644
--- a/projects/blip-diffusion/README.md
+++ b/projects/blip-diffusion/README.md
@@ -1,5 +1,5 @@
 ## BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
-[Paper](), [Demo Site](https://dxli94.github.io/BLIP-Diffusion-website/)
+[Paper](https://arxiv.org/abs/2305.14720), [Demo Site](https://dxli94.github.io/BLIP-Diffusion-website/)
 
 This repo will host the official implementation of BLIP-Diffusion, a text-to-image diffusion model with built-in support for multimodal subject-and-text condition. BLIP-Diffusion enables zero-shot subject-driven generation, and efficient fine-tuning for customized subjects with up to 20x speedup. In addition, BLIP-Diffusion can be flexibly combined with ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications.
From e846c8f8f0115af95cacbd4671b6fcb92f1f9394 Mon Sep 17 00:00:00 2001
From: Dongxu
Date: Thu, 25 May 2023 08:47:34 +0800
Subject: [PATCH 2/2] Update README.md

---
 projects/blip-diffusion/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/projects/blip-diffusion/README.md b/projects/blip-diffusion/README.md
index ddc1b61e..9663220e 100644
--- a/projects/blip-diffusion/README.md
+++ b/projects/blip-diffusion/README.md
@@ -1,5 +1,5 @@
 ## BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
-[Paper](https://arxiv.org/abs/2305.14720), [Demo Site](https://dxli94.github.io/BLIP-Diffusion-website/)
+[Paper](https://arxiv.org/abs/2305.14720), [Demo Site](https://dxli94.github.io/BLIP-Diffusion-website/), [Video](https://youtu.be/Wf09s4JnDb0)
 
 This repo will host the official implementation of BLIP-Diffusion, a text-to-image diffusion model with built-in support for multimodal subject-and-text condition. BLIP-Diffusion enables zero-shot subject-driven generation, and efficient fine-tuning for customized subjects with up to 20x speedup. In addition, BLIP-Diffusion can be flexibly combined with ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications.