We present the InternSVG family, an integrated data–benchmark–model suite for SVG understanding, editing, and generation.
- 🧩 SAgoge Dataset — The largest and most comprehensive multimodal dataset for SVG tasks, spanning icons, long-sequence illustrations, scientific diagrams, and dynamic animations. It provides rich hierarchical structures and diverse attributes, supporting tasks of varied difficulty levels.
- 📊 SArena Benchmark — A companion benchmark offering unified task definitions and standardized evaluation protocols, aligned with SAgoge’s domains and difficulty spectrum. It enables consistent comparison across SVG understanding, editing, and generation tasks.
- 🤖 InternSVG Model — A unified multimodal large language model (MLLM) for SVG understanding, editing, and generation.
- [2025-10-13] 🎉 We release the SArena benchmark on 🤗 Hugging Face.
- [2025-10-13] 👋 We upload the paper and initialize the project.
- Evaluation code
- SArena benchmark
- SAgoge dataset
- Fine-tuning scripts
- Model weights
- Paper
```bash
git clone https://github.com/hmwang2002/InternSVG.git
cd InternSVG
conda create -n internsvg python=3.9 -y
conda activate internsvg
pip install -r requirements.txt

# Install CLIP
pip install git+https://github.com/openai/CLIP.git
```
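As an optional sanity check (a minimal sketch, assuming `requirements.txt` installs PyTorch), you can verify that the environment imports cleanly:

```python
# Optional sanity check for the internsvg environment.
import torch
import clip  # installed from github.com/openai/CLIP

print(torch.__version__)
print(clip.available_models())  # e.g. ['RN50', ..., 'ViT-B/32', ...]
```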
Next, download the ViCLIP checkpoint:
```bash
mkdir sarena_ckpt
cd sarena_ckpt

# Log in first with `huggingface-cli login`; you also need to have been
# granted access to https://huggingface.co/OpenGVLab/ViCLIP.
huggingface-cli download --resume-download OpenGVLab/ViCLIP ViClip-InternVid-10M-FLT.pth --local-dir .
cd ..
```
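To confirm the download completed intact, you can try loading the checkpoint (a hedged sketch; the top-level structure of the `.pth` file is not documented here, so we only inspect it):

```python
# Quick integrity check for the downloaded ViCLIP checkpoint.
import torch

ckpt = torch.load(
    "sarena_ckpt/ViClip-InternVid-10M-FLT.pth",
    map_location="cpu",  # no GPU needed just to inspect the file
)
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])  # peek at the top-level keys
```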
(Optional) If you need to simplify your own SVG code, install svgo:

```bash
conda install nodejs
npm install -g svgo
```
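If you prefer to drive svgo from Python, a minimal sketch is shown below (it assumes `svgo` is on your PATH; `my_svgs/` and `my_svgs_min/` are hypothetical directories):

```python
# Batch-simplify SVG files by shelling out to svgo (must be on PATH).
import subprocess
from pathlib import Path

src_dir = Path("my_svgs")       # hypothetical input directory
dst_dir = Path("my_svgs_min")   # hypothetical output directory
dst_dir.mkdir(exist_ok=True)

for svg in src_dir.glob("*.svg"):
    # `svgo input.svg -o output.svg` writes the optimized copy to output.svg.
    subprocess.run(["svgo", str(svg), "-o", str(dst_dir / svg.name)], check=True)
```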
The SArena benchmark is available on Hugging Face (https://huggingface.co/datasets/InternSVG/SArena). You can download it directly with the Hugging Face CLI:

```bash
hf download InternSVG/SArena SArena.zip --repo-type dataset --resume-download --local-dir PATH_TO_YOUR_DIR
unzip SArena.zip
```
After extraction, you will get:
```
SArena/
├── animation/
│   ├── overall/
│   ├── svg/
│   ├── video/
│   ├── text2sani.jsonl
│   └── video2sani.jsonl
│
├── chemistry/
│   ├── images/
│   ├── svg/
│   ├── img2svg.jsonl
│   └── text2svg.jsonl
│
├── illustration/
│   ├── images/
│   ├── svg/
│   ├── caption.jsonl
│   ├── img2svg.jsonl
│   └── text2svg.jsonl
│
├── Icon/
│   ├── edit/
│   │   └── data/
│   │       ├── color_complex.jsonl
│   │       ├── color_simple.jsonl
│   │       ├── crop.jsonl
│   │       ├── flip.jsonl
│   │       ├── opacity.jsonl
│   │       ├── outline.jsonl
│   │       ├── rotate.jsonl
│   │       ├── scale.jsonl
│   │       ├── styletransform_openmoji.jsonl
│   │       └── translate.jsonl
│   │
│   ├── generation/
│   │   ├── images/
│   │   ├── svg/
│   │   ├── caption.jsonl
│   │   ├── img2svg.jsonl
│   │   └── text2svg.jsonl
│   │
│   └── understanding/
│       └── sarena_un.jsonl
```
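Each `*.jsonl` file contains one JSON record per line. The exact field names are not listed here, so the sketch below inspects the schema of the first record instead of assuming it:

```python
# Minimal sketch: stream records from a SArena task file and inspect its schema.
import json
from pathlib import Path

task_file = Path("SArena/Icon/generation/text2svg.jsonl")  # any task file works

with task_file.open(encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        if i == 0:
            print("fields:", sorted(record.keys()))  # discover the schema
        if i >= 2:  # only peek at the first few records
            break
```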
Template scripts for inference can be found in the `scripts/inference/` folder. For example, for the icon, illustration, and chemistry generation tasks, you can adapt the template below by specifying your own paths and API configuration:
```bash
#!/bin/bash
export PYTHONPATH=$(pwd):$PYTHONPATH

BASE_URL="BASE_URL"
API_KEY="API_KEY"
MODEL_NAME="MODEL_NAME"

TEXT2SVG_TEST_PATH="PATH_TO_TEXT2SVG_TEST_PATH"
IMG2SVG_TEST_PATH="PATH_TO_IMG2SVG_TEST_PATH"
OUTPUT_DIR="PATH_TO_OUTPUT_DIR"

RETRY=1
TEMPERATURE=0.0
MAX_TOKENS=4000
MAX_WORKERS=32

python metrics/inference/inference.py \
    --base_url ${BASE_URL} \
    --api_key ${API_KEY} \
    --model_name ${MODEL_NAME} \
    --text2svg_test_path ${TEXT2SVG_TEST_PATH} \
    --img2svg_test_path ${IMG2SVG_TEST_PATH} \
    --output_dir ${OUTPUT_DIR} \
    --temperature ${TEMPERATURE} \
    --max_tokens ${MAX_TOKENS} \
    --max_workers ${MAX_WORKERS}
```
Then run:

```bash
bash scripts/inference/gen/demo.sh
```
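Under the hood, `metrics/inference/inference.py` calls an OpenAI-compatible endpoint. The sketch below shows what a single Text-to-SVG request looks like; the prompt wording is illustrative only, and `inference.py` remains the authoritative implementation:

```python
# One Text-to-SVG request against an OpenAI-compatible API (illustrative prompt).
from openai import OpenAI

client = OpenAI(base_url="BASE_URL", api_key="API_KEY")

caption = "a red bicycle icon with round wheels"  # hypothetical test caption
response = client.chat.completions.create(
    model="MODEL_NAME",
    temperature=0.0,
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": f"Generate SVG code for the following description:\n{caption}",
    }],
)
print(response.choices[0].message.content)  # raw SVG code returned by the model
```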
For the SVG animation generation task specifically, a template inference script is provided at `scripts/inference/animation/demo.sh`.

When all test samples have been processed, each SVG file needs to be converted into an MP4 video for metric evaluation. Use the script `utils/svg_animate.py` to generate the MP4 files. Note that two resolutions are required: 448×448 and 128×128. Before running, modify the `OUTPUT_DIRS` and `FILE_DIRS` variables in the `run_all_mp()` function. (Notably, if an output path contains `'_128'`, our code automatically renders at 128×128.)
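If you mirror this convention in your own tooling, the rule is a one-liner (the helper name below is hypothetical):

```python
# Hypothetical helper reproducing the path convention used by utils/svg_animate.py:
# output paths containing '_128' are rendered at 128x128, all others at 448x448.
def pick_resolution(output_path: str) -> tuple[int, int]:
    return (128, 128) if "_128" in output_path else (448, 448)

assert pick_resolution("animation/gpt4o/text2sani/video_128") == (128, 128)
assert pick_resolution("animation/gpt4o/text2sani/video") == (448, 448)
```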
The directory structure of the test files is as follows:
```
evaluate/
├── .vscode/
├── animation/gpt4o/
│   ├── text2sani/
│   │   ├── svg/
│   │   ├── video/
│   │   ├── video_128/
│   │   └── output.jsonl
│   └── video2sani/
│       ├── svg/
│       ├── video/
│       ├── video_128/
│       └── output.jsonl
```
The scripts/evaluate/ directory contains template scripts for running evaluation across different domains (e.g., icon, illustration, chemistry, and animation).
Each subfolder corresponds to a specific domain:
```
scripts/evaluate/
├── icon/
│   ├── edit/
│   ├── gen/
│   └── un/
├── illustration/
├── chem/
└── animation/
```
Below is a demo for evaluating generation tasks (Text-to-SVG and Image-to-SVG):
```bash
#!/bin/bash
export PYTHONPATH=$(pwd):$PYTHONPATH

python evaluate_gen.py \
    --model_name "GPT-4o" \
    --text2svg_test_dir "PATH_TO_TEXT2SVG_RESULTS" \
    --img2svg_test_dir "PATH_TO_IMG2SVG_RESULTS" \
    --tokenizer_path "PATH_TO_TOKENIZER" \
    --test_file_path "PATH_TO_TEST_JSONL" \
    --gt_img_dir "PATH_TO_GT_IMAGES" \
    --gt_svg_dir "PATH_TO_GT_SVGS" \
    --caption_path "PATH_TO_CAPTIONS" \
    --bench_name "Icon"
```
If your model does not support either the Text-to-SVG or the Image-to-SVG task, simply set the corresponding test directory argument (`--text2svg_test_dir` or `--img2svg_test_dir`) to an empty string.
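If you want to rasterize generated SVGs yourself, for example to eyeball them against the ground-truth images, one option is cairosvg (an assumed extra dependency, not necessarily what the repo's own rendering utilities use):

```python
# Minimal sketch: rasterize generated SVGs to 448x448 PNGs with cairosvg.
from pathlib import Path

import cairosvg  # assumed extra dependency: pip install cairosvg

svg_dir = Path("PATH_TO_TEXT2SVG_RESULTS")  # directory of generated .svg files
png_dir = Path("rendered_png")              # hypothetical output directory
png_dir.mkdir(exist_ok=True)

for svg in svg_dir.glob("*.svg"):
    cairosvg.svg2png(
        url=str(svg),
        write_to=str(png_dir / (svg.stem + ".png")),
        output_width=448,   # matches the 448x448 resolution used elsewhere
        output_height=448,
    )
```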
We would like to thank Kiyotaka, yinlikestudy, and quentin-77 for their valuable contributions to this project.
The InternSVG model is developed based on InternVL and further fine-tuned with LLaMA-Factory for SVG understanding, editing, and generation tasks.
We also acknowledge the many open-source efforts that have contributed to advancing SVG understanding and generation.
InternSVG is licensed under the Apache License 2.0.
```bibtex
@article{wang2025internsvg,
  title={InternSVG: Towards Unified SVG Tasks with Multimodal Large Language Models},
  author={Wang, Haomin and Yin, Jinhui and Wei, Qi and Zeng, Wenguang and Gu, Lixin and Ye, Shenglong and Gao, Zhangwei and Wang, Yaohui and Zhang, Yanting and Li, Yuanqi and others},
  journal={arXiv preprint arXiv:2510.11341},
  year={2025}
}
```