
avatar-graph-comfyui


Wanna animate or got a question? Join our Discord

A custom nodes module for creating real-time interactive avatars, powered by the Blender bpy mesh API and the Avatech Shape Flow runtime.

WARNING
We are still making changes to the nodes and demo templates; please stay tuned.

Demo


Interactive avatar demos (viewable on the project page).

How to?

Basic Rigging Workflow Template

1. Creating an eye blink and lipsync avatar


For optimal results, provide a character image with an open mouth and a minimum resolution of 768x768. The higher resolution enables the tool to accurately recognize and work with facial features.

Download: Save the image, and drag it into ComfyUI or Simple Shape Flow
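The 768x768 guideline above can be checked before running the workflow. A minimal sketch (hypothetical helper, not part of this repo):

```javascript
// Hypothetical helper (not part of this repo): the workflow works best
// when the character image is at least 768x768.
function meetsMinimumResolution(width, height, minSide = 768) {
  return Math.min(width, height) >= minSide;
}

console.log(meetsMinimumResolution(1024, 1024)); // true
console.log(meetsMinimumResolution(512, 768));   // false
```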

2. Creating an eye blink and lipsync emoji avatar


Download: Save the image, and drag it into ComfyUI


Download: Save the image, and drag it into ComfyUI or Dog Workflow

Best practices for image input

1. Generate a new character image


We need a character image with an open mouth whose facial features the tool can easily recognize, so please add the following to the prompt: looking at viewer, detailed face, open mouth, [smile], solo, eye-level angle

Download: Character Gen Template

2. Make existing character image mouth open (Inpaint)


To maintain consistency with the base image, it is recommended to utilize a checkpoint model that aligns with its style.

Download: Mouth Open Inpaint Template

Inpaint Demonstration

3. Pose Constraints (ControlNet)


Place the normal and OpenPose images, using the example images as a reference.

Download: ControlNet Gen

Recommended Checkpoint Models

Anime Style SD1.5
Realistic Style SD1.5

Custom Nodes

Expand to see descriptions of all the available nodes.

All Custom Nodes
| Name | Description |
| --- | --- |
| Segmentation (SAM) | Integrated SAM node that lets you directly select segments and create multiple image segment outputs. |
| Create Mesh Layer | Creates a mesh object from the input images (usually a segmented part of the entire image). |
| Join Meshes | Combines multiple meshes into a single mesh object. |
| Match Texture Aspect Ratio | Since the mesh is created with a 1:1 aspect ratio, a re-scale is needed at the end of the operation. |
| Plane Texture Unwrap | Performs mesh face fill and a UV cube projection on the target plane mesh, scaled to bounds. |
| Mesh Modify Shape Key | Given a shape key name and target vertex_group, modifies the transform of a single vertex or all vertices. |
| Create Shape Flow | Creates a runtime Shape Flow graph, allowing interactive inputs to affect shape key values at runtime. |
| Avatar Main Output | The primary output of the .ava file. The embedded Avatar View auto-updates with this node's output. |

Shape Flow

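Shape keys drive the animation: each key stores a target position per mesh vertex, and a weight blends the base mesh toward those targets. A minimal sketch of that idea (illustrative only, not the actual Shape Flow runtime):

```javascript
// Illustrative only -- NOT the actual Shape Flow runtime. A shape key
// stores a target position per vertex; a weight in [0, 1] linearly
// blends each vertex from its base position toward that target.
function applyShapeKey(baseVerts, keyVerts, weight) {
  return baseVerts.map((v, i) =>
    v.map((coord, axis) => coord + weight * (keyVerts[i][axis] - coord))
  );
}

const base = [[0, 0, 0], [1, 0, 0]];
const blink = [[0, -1, 0], [1, -1, 0]]; // eyelid vertices moved down
console.log(applyShapeKey(base, blink, 0.5)); // halfway: y = -0.5 for both vertices
```

At runtime, interactive inputs (e.g. a mouth-open amount) would feed the weight, which is what the Shape Flow graph wires up.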

Installation

Method 1 - Windows

  1. Download the Python environment from here

  2. Unzip it into the ComfyUI directory

  3. Run run_cpu_3.10.bat or run_nvidia_gpu_3.10.bat

  4. Install avatar-graph-comfyui from ComfyUI Manager

Method 2 - macOS/Linux

Make sure your Python environment is Python 3.10.x, as required by the bpy package. We suggest using conda for your ComfyUI Python environment:

conda create --name comfyui python=3.10

conda activate comfyui

pip install -r requirements.txt

Then go to the ComfyUI directory and run:

  1. cd custom_nodes

  2. git clone https://github.com/avatechgg/avatar-graph-comfyui.git

  3. cd avatar-graph-comfyui && python -m pip install -r requirements.txt

  4. Restart ComfyUI with the CORS header enabled: python main.py --enable-cors-header (on macOS: python main.py --force-fp16 --enable-cors-header)

Development

If you are interested in contributing:

The ComfyUI frontend extension's JavaScript lives in avatar-graph-comfyui/js.

Web stack used: VanJS and Tailwind CSS.

Install dependencies:

pnpm i

Run the dev command to start the Tailwind CSS watcher:

pnpm dev

After each change, simply refresh the ComfyUI page to see the update.

P.S. For Tailwind autocomplete, add the following to your VS Code settings.json:
{
    "tailwindCSS.experimental.classRegex": [
        ["class\\s?:\\s?([\\s\\S]*)", "(?:\"|')([^\"']*)(?:\"|')"]
    ]
}

Update blender node types

To update the Blender operation input and output types (stored in blender/input_types.txt), run:

python generate_blender_types.py

FAQ

What is --enable-cors-header used for?

It is used to enable communication between ComfyUI and our editor (https://editor.avatech.ai), which is in charge of animating static characters. The only messages exchanged between them are the character data like the meshes of eyes and mouth, and the JSON format of our editor graph.

When you execute the ComfyUI graph, it sends the character data and the JSON graph to our editor for animating. When you modify and save the graph in our editor, it sends the modified graph back to ComfyUI. To verify this, open js/index.js and log the messages in window.addEventListener("message", ...) and postMessage(message).
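As a rough illustration of inspecting those messages, here is a Node-runnable sketch; the field name "graph" is hypothetical, since the actual payload shape is defined by the editor:

```javascript
// Hypothetical message filter -- the actual payload fields are defined by
// the editor (editor.avatech.ai), so treat "graph" as a placeholder name.
function isEditorGraphMessage(data) {
  return (
    typeof data === "object" &&
    data !== null &&
    typeof data.graph === "object" &&
    data.graph !== null
  );
}

// In js/index.js you could log matching messages like this:
// window.addEventListener("message", (event) => {
//   if (isEditorGraphMessage(event.data)) console.log("editor graph:", event.data);
// });

console.log(isEditorGraphMessage({ graph: { nodes: [] } })); // true
console.log(isEditorGraphMessage("ping"));                   // false
```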

You can also run ComfyUI without --enable-cors-header: execute the ComfyUI workflow, then download the .glb or .gltf file by right-clicking the Avatar Main Output node and choosing the Save File option. However, this disables the real-time character preview in the top-right corner of ComfyUI; feel free to view the file in other software such as Blender.
