
feat: show elapsed time after model downloads #427

Merged
bigcat88 merged 1 commit into main from feat/download-elapsed-time on Apr 12, 2026

Conversation

@bigcat88
Contributor

@bigcat88 bigcat88 commented Apr 12, 2026

Summary

  • Print elapsed time (Done in Xs / Xm Ys / Xh Ym Zs) after successful model downloads
  • Covers both httpx/aria2 and HuggingFace Hub download paths
  • Always-on — no flag needed, it's a single line of output

Closes #421

Test plan

  • 6 unit tests for _format_elapsed covering boundary values (sub-minute, minutes, hours)
  • Integration assertion that "Done in" appears in download command output
  • Full test suite passes (632/632)
  • Verified "Done in" is NOT printed on early exits (file exists, unauthorized) or download failures

@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. enhancement New feature or request labels Apr 12, 2026
@coderabbitai

coderabbitai bot commented Apr 12, 2026

📝 Walkthrough

Walkthrough

Adds elapsed-time tracking to the model download command: starts a monotonic timer only when a download proceeds, formats the duration via a new _format_elapsed(seconds: float) helper, and prints a final Done in <elapsed> line after either the Hugging Face or generic download completes.

Changes

  • Download Timing Implementation (comfy_cli/command/models/models.py): Added _format_elapsed(seconds: float) -> str. Start time.monotonic() after the early-exit "file exists" check, stop after the Hugging Face or generic download_file(...) path, and print Done in <elapsed>.
  • Download Timing Tests (tests/comfy_cli/command/models/test_models.py): Imported _format_elapsed and added TestFormatElapsed cases (sub-second, fractional rounding, minute, hour formatting). Updated test_downloader_flag_forwarded to capture the CLI result and assert result.output contains "Done in ".
sequenceDiagram
  participant CLI as "CLI"
  participant Cmd as "models.download\n(command)"
  participant HF as "HuggingFace\ndownloader"
  participant Gen as "Generic\ndownloader"
  participant FS as "Filesystem"

  CLI->>Cmd: invoke download
  Cmd->>Cmd: check file exists
  alt file exists
    Cmd-->>CLI: return early (no timer)
  else file not exists
    Cmd->>Cmd: start time.monotonic()
    opt huggingface path
      Cmd->>HF: download model
      HF->>FS: write file
      HF-->>Cmd: done
    end
    opt generic path
      Cmd->>Gen: download_file(...)
      Gen->>FS: write file
      Gen-->>Cmd: done
    end
    Cmd->>Cmd: stop timer
    Cmd-->>CLI: print "Done in <elapsed>"
  end
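The flow in the diagram above is a standard monotonic-timer pattern: start the clock only once a download is known to proceed, so early exits never print the elapsed line. A rough sketch under assumed names (`download`, `do_download`, and this `_format_elapsed` stub are illustrative, not the repo's actual signatures):

```python
import time

def _format_elapsed(seconds: float) -> str:
    # Stand-in formatter; the real helper produces "Xs" / "Xm Ys" / "Xh Ym Zs".
    return f"{seconds:.1f}s"

def download(path_exists: bool, do_download) -> None:
    if path_exists:
        print("File already exists, skipping")
        return  # early exit: the timer never starts, so "Done in" never prints
    start = time.monotonic()  # monotonic clock is immune to wall-clock jumps
    do_download()
    elapsed = time.monotonic() - start
    print(f"Done in {_format_elapsed(elapsed)}")
```

`time.monotonic()` is preferred over `time.time()` here because NTP adjustments or clock changes during a long download cannot make the elapsed value negative or wildly wrong.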
🚥 Pre-merge checks: ✅ 2 passed
  • Linked Issues check: ✅ Passed. PR implements the requested download elapsed time feature (#421), printing formatted duration after successful downloads without requiring a flag.
  • Out of Scope Changes check: ✅ Passed. All changes are directly aligned with the download time tracking objective; no unrelated modifications detected.



@codecov

codecov bot commented Apr 12, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main     #427      +/-   ##
==========================================
+ Coverage   76.36%   76.43%   +0.07%     
==========================================
  Files          35       35              
  Lines        4261     4274      +13     
==========================================
+ Hits         3254     3267      +13     
  Misses       1007     1007              
Files with missing lines | Coverage Δ
comfy_cli/command/models/models.py | 72.22% <100.00%> (+1.63%) ⬆️


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/comfy_cli/command/models/test_models.py (1)

355-372: 🧹 Nitpick | 🔵 Trivial

Assert the CLI exit code in this integration test.

You already assert output content; adding success status makes this check less flaky and more diagnostic if behavior regresses.

✅ Suggested test hardening
             result = runner.invoke(
                 app,
                 [
                     "download",
                     "--url",
                     "http://example.com/model.bin",
                     "--downloader",
                     "aria2",
                     "--filename",
                     "model.bin",
                 ],
             )

+            assert result.exit_code == 0
             assert mock_dl.called
             _, kwargs = mock_dl.call_args
             assert kwargs.get("downloader") == "aria2"
             assert "Done in " in result.output
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/comfy_cli/command/models/test_models.py` around lines 355 - 372, Add an
assertion that the CLI exited successfully by checking result.exit_code == 0
after calling runner.invoke(app, [...]) in the test (near the existing
assertions that inspect mock_dl and result.output); this ensures the integration
test asserts a successful exit status in addition to checking "Done in " and the
downloader argument from mock_dl.call_args.
comfy_cli/command/models/models.py (1)

322-359: ⚠️ Potential issue | 🟠 Major

Start timing at the actual download call, not before preflight steps.

At Line 322, timing starts before Hugging Face auth probing and potential runtime pip install of huggingface_hub. That over-reports download duration and can be very misleading on first run.

⏱️ Suggested fix
-    start_time = time.monotonic()
-
     if is_huggingface_url and check_unauthorized(url, headers):
         if hf_api_token is None:
             print(
                 f"Unauthorized access to Hugging Face model. Please set the Hugging Face API token using `comfy model download --set-hf-api-token` or via the `{constants.HF_API_TOKEN_ENV_KEY}` environment variable"
             )
             return
         else:
             try:
                 import huggingface_hub
             except ImportError:
                 print("huggingface_hub not found. Installing...")
                 import subprocess

                 from comfy_cli.resolve_python import resolve_workspace_python

                 python = resolve_workspace_python(str(get_workspace()))
                 subprocess.check_call([python, "-m", "pip", "install", "huggingface_hub"])
                 import huggingface_hub

             print(f"Downloading model {model_id} from Hugging Face...")
+            start_time = time.monotonic()
             output_path = huggingface_hub.hf_hub_download(
                 repo_id=repo_id,
                 filename=hf_filename,
                 subfolder=hf_folder_name,
                 revision=hf_branch_name,
                 token=hf_api_token,
                 local_dir=get_workspace() / relative_path,
                 cache_dir=get_workspace() / relative_path,
             )
+            elapsed = time.monotonic() - start_time
             print(f"Model downloaded successfully to: {output_path}")
+            print(f"Done in {_format_elapsed(elapsed)}")
     else:
         print(f"Start downloading URL: {url} into {local_filepath}")
+        start_time = time.monotonic()
         download_file(url, local_filepath, headers, downloader=resolved_downloader)
-
-    elapsed = time.monotonic() - start_time
-    print(f"Done in {_format_elapsed(elapsed)}")
+        elapsed = time.monotonic() - start_time
+        print(f"Done in {_format_elapsed(elapsed)}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy_cli/command/models/models.py` around lines 322 - 359, The timing starts
too early (start_time variable) before preflight actions like auth checks and
optional pip install, so move the start_time assignment to immediately before
the actual download call(s): set start_time right before calling
huggingface_hub.hf_hub_download in the Hugging Face branch and right before
calling download_file in the fallback branch, then compute elapsed and call
_format_elapsed as currently done; update references to start_time so no timing
is measured across the auth/install steps.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 485192e0-e7a3-40fb-9da4-3f08913ddfc1

📥 Commits

Reviewing files that changed from the base of the PR and between b414616 and e522960.

📒 Files selected for processing (2)
  • comfy_cli/command/models/models.py
  • tests/comfy_cli/command/models/test_models.py

@bigcat88 force-pushed the feat/download-elapsed-time branch from e522960 to 76865c9 on April 12, 2026 at 13:02

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/comfy_cli/command/models/test_models.py (1)

351-367: 🧹 Nitpick | 🔵 Trivial

Assert exit code before checking output text.

Line 367 validates output content, but this can mask failures if the command exits non-zero with partial output. Add an explicit result.exit_code == 0 assertion first.

Proposed test hardening
             result = runner.invoke(
                 app,
                 [
                     "download",
                     "--url",
                     "http://example.com/model.bin",
                     "--downloader",
                     "aria2",
                     "--filename",
                     "model.bin",
                 ],
             )

             assert mock_dl.called
             _, kwargs = mock_dl.call_args
             assert kwargs.get("downloader") == "aria2"
+            assert result.exit_code == 0
             assert "Done in " in result.output
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/comfy_cli/command/models/test_models.py` around lines 351 - 367, The
test calls runner.invoke(...) and then checks result.output but does not assert
the process succeeded; add an explicit assert that result.exit_code == 0
immediately after the runner.invoke call (before inspecting result.output or
mock calls) to ensure failures with non-zero exit codes are caught; locate the
invocation of runner.invoke in the test (variable name result) and insert the
exit code assertion there.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy_cli/command/models/models.py`:
- Around line 39-40: The seconds branch can print "60.0s" for inputs like 59.95
because you compare the raw float but format a rounded value; change the logic
so you round first and compare the rounded value: compute rounded_seconds =
round(seconds, 1) (or use round(seconds,1) directly in the condition), then use
that rounded_seconds for the comparison and for the return
f"{rounded_seconds:.1f}s" so values that round to 60.0 will fall through to the
minute formatting; update the code around the existing seconds variable and the
return f"{seconds:.1f}s" line accordingly.

In `@tests/comfy_cli/command/models/test_models.py`:
- Around line 310-311: Remove the extra blank line inside or immediately after
the TestFormatElapsed class definition to satisfy the formatter; locate the
class TestFormatElapsed in tests/comfy_cli/command/models/test_models.py and
ensure there is no unintended empty line between the class declaration and its
first member or the next code block, then run the project formatter (ruff /
black) to verify the change.
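
The rounding issue flagged for models.py above (printing "60.0s" for inputs just under a minute) can be demonstrated with a small sketch. Function names here are illustrative, not the repo's actual code:

```python
def fmt_buggy(seconds: float) -> str:
    # Compares the raw float but formats a rounded value:
    # 59.96 passes the `< 60` check, yet "%.1f" renders it as "60.0s".
    if seconds < 60:
        return f"{seconds:.1f}s"
    minutes, secs = divmod(int(round(seconds)), 60)
    return f"{minutes}m {secs}s"

def fmt_fixed(seconds: float) -> str:
    # Round first, then compare the rounded value, so anything that
    # rounds to 60.0 falls through to minute formatting instead.
    seconds = round(seconds, 1)
    if seconds < 60:
        return f"{seconds:.1f}s"
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}m {secs}s"

print(fmt_buggy(59.96))  # 60.0s  (misleading)
print(fmt_fixed(59.96))  # 1m 0s
```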

---

Outside diff comments:
In `@tests/comfy_cli/command/models/test_models.py`:
- Around line 351-367: The test calls runner.invoke(...) and then checks
result.output but does not assert the process succeeded; add an explicit assert
that result.exit_code == 0 immediately after the runner.invoke call (before
inspecting result.output or mock calls) to ensure failures with non-zero exit
codes are caught; locate the invocation of runner.invoke in the test (variable
name result) and insert the exit code assertion there.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 83ddea12-1d7a-41ec-8ba2-b691d16ae35b

📥 Commits

Reviewing files that changed from the base of the PR and between e522960 and 76865c9.

📒 Files selected for processing (2)
  • comfy_cli/command/models/models.py
  • tests/comfy_cli/command/models/test_models.py

Comment on lines +310 to +311

class TestFormatElapsed:


⚠️ Potential issue | 🟡 Minor

Remove the extra blank line to fix the failing formatter check.

Line 310 has spacing that triggers the ruff format --diff pipeline failure; please run formatter on this file.

🧰 Tools
🪛 Pylint (4.0.5)

[convention] 311-311: Missing class docstring

(C0115)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/comfy_cli/command/models/test_models.py` around lines 310 - 311, Remove
the extra blank line inside or immediately after the TestFormatElapsed class
definition to satisfy the formatter; locate the class TestFormatElapsed in
tests/comfy_cli/command/models/test_models.py and ensure there is no unintended
empty line between the class declaration and its first member or the next code
block, then run the project formatter (ruff / black) to verify the change.

Print "Done in Xs" / "Xm Ys" / "Xh Ym Zs" after successful model
downloads, covering both the httpx/aria2 and HuggingFace Hub paths.

Closes #421
@bigcat88 force-pushed the feat/download-elapsed-time branch from 76865c9 to 0505896 on April 12, 2026 at 13:14

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/comfy_cli/command/models/test_models.py (1)

353-370: 🧹 Nitpick | 🔵 Trivial

Assert command success before checking output text.

Add an exit-code assertion so this test fails fast on command errors instead of only string matching.

✅ Suggested test hardening
             result = runner.invoke(
                 app,
                 [
                     "download",
                     "--url",
                     "http://example.com/model.bin",
                     "--downloader",
                     "aria2",
                     "--filename",
                     "model.bin",
                 ],
             )
 
             assert mock_dl.called
             _, kwargs = mock_dl.call_args
             assert kwargs.get("downloader") == "aria2"
+            assert result.exit_code == 0
             assert "Done in " in result.output
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/comfy_cli/command/models/test_models.py` around lines 353 - 370, Add an
explicit assertion that the CLI command succeeded by checking result.exit_code
== 0 right after invoking runner.invoke(app, [...]) and before any assertions on
result.output; update the test containing runner.invoke, result, and mock_dl to
assert the command exit code (result.exit_code) is 0 so failures surface
immediately before checking "Done in " and mock_dl behavior.
comfy_cli/command/models/models.py (1)

323-360: ⚠️ Potential issue | 🟡 Minor

Start timing at actual download kickoff to keep elapsed time accurate.

Right now the stopwatch starts before pre-download work (auth probe/import/install path). That can over-report “download” duration.

⏱️ Proposed adjustment
-    start_time = time.monotonic()
-
     if is_huggingface_url and check_unauthorized(url, headers):
         if hf_api_token is None:
             print(
                 f"Unauthorized access to Hugging Face model. Please set the Hugging Face API token using `comfy model download --set-hf-api-token` or via the `{constants.HF_API_TOKEN_ENV_KEY}` environment variable"
             )
             return
         else:
             try:
                 import huggingface_hub
             except ImportError:
                 print("huggingface_hub not found. Installing...")
                 import subprocess
 
                 from comfy_cli.resolve_python import resolve_workspace_python
 
                 python = resolve_workspace_python(str(get_workspace()))
                 subprocess.check_call([python, "-m", "pip", "install", "huggingface_hub"])
                 import huggingface_hub
 
             print(f"Downloading model {model_id} from Hugging Face...")
+            start_time = time.monotonic()
             output_path = huggingface_hub.hf_hub_download(
                 repo_id=repo_id,
                 filename=hf_filename,
                 subfolder=hf_folder_name,
                 revision=hf_branch_name,
                 token=hf_api_token,
                 local_dir=get_workspace() / relative_path,
                 cache_dir=get_workspace() / relative_path,
             )
             print(f"Model downloaded successfully to: {output_path}")
     else:
         print(f"Start downloading URL: {url} into {local_filepath}")
+        start_time = time.monotonic()
         download_file(url, local_filepath, headers, downloader=resolved_downloader)
 
     elapsed = time.monotonic() - start_time
     print(f"Done in {_format_elapsed(elapsed)}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy_cli/command/models/models.py` around lines 323 - 360, The timing
currently starts before pre-download work; move the start_time =
time.monotonic() so it is set immediately before the actual network download:
for the Hugging Face branch set start_time right before the call to
huggingface_hub.hf_hub_download (and keep the printed "Downloading model..."
message), and for the generic branch set start_time right before calling
download_file; remove the earlier start_time declaration at the top so elapsed
(used with _format_elapsed) measures only the download phase.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 228bd338-6afa-4b88-86d2-af44a73c8915

📥 Commits

Reviewing files that changed from the base of the PR and between 76865c9 and 0505896.

📒 Files selected for processing (2)
  • comfy_cli/command/models/models.py
  • tests/comfy_cli/command/models/test_models.py

@bigcat88 bigcat88 merged commit ab18427 into main Apr 12, 2026
15 checks passed
@bigcat88 bigcat88 deleted the feat/download-elapsed-time branch April 12, 2026 13:44

Labels

enhancement New feature or request size:S This PR changes 10-29 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Question] Download time tracking flag

1 participant