
Add benchmarks for table rendering #17886

Closed
majiayu000 wants to merge 1 commit into nushell:main from majiayu000:feat/issue-7727-table-rendering-benchmarks

Conversation

@majiayu000

Fixes #7727

Description

Add table rendering benchmarks to benches/benchmarks.rs using the existing tango-bench framework. These benchmarks measure NuTable::draw() performance — the actual rendering pipeline that converts structured data into formatted terminal output.

This fills the gap between existing table data-operation benchmarks (create/get/select/insert) and actual display rendering.

New benchmarks:

  • bench_table_render(rows, cols) — Core rendering at varying dimensions: (10,3), (100,5), (1000,5), (1000,15)
  • bench_table_render_with_theme(name, theme) — Rendering with different themes (basic, rounded, thin, heavy, none) at 100x5
  • bench_table_render_wide(termwidth) — Rendering at varying terminal widths (40, 80, 120, 200) at 100x10

All benchmarks use NuTable directly (not through the eval engine) to isolate rendering from parsing overhead.
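The isolation idea above can be sketched with a toy harness. Note that `render_table` here is a hypothetical stand-in, not the real `NuTable::draw()`, and the actual PR uses the tango-bench framework rather than manual timing; this sketch only illustrates timing rendering of pre-built data at the same (rows, cols) dimensions, with no parsing or eval in the measured path:

```rust
use std::time::Instant;

// Hypothetical stand-in for NuTable::draw(): joins pre-built cells into a
// plain-text grid, so the timed work is rendering only, not parsing/eval.
fn render_table(rows: usize, cols: usize) -> String {
    let data: Vec<Vec<String>> = (0..rows)
        .map(|r| (0..cols).map(|c| format!("cell {r}x{c}")).collect())
        .collect();
    data.iter()
        .map(|row| row.join(" | "))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    // Mirror the (rows, cols) grid from bench_table_render.
    for &(rows, cols) in &[(10, 3), (100, 5), (1000, 5), (1000, 15)] {
        let start = Instant::now();
        let out = render_table(rows, cols);
        println!("{rows}x{cols}: {} bytes in {:?}", out.len(), start.elapsed());
    }
}
```

A real tango-bench benchmark would wrap the rendering closure in the framework's benchmark registration instead of timing it by hand, but the shape of the measured work is the same.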

Release notes summary - What our users need to know

Added tango-bench benchmarks for table rendering (NuTable::draw()) covering different table sizes, themes, and terminal widths.

Tasks after submitting

Add NuTable::draw() benchmarks to measure rendering performance:
- bench_table_render: varies rows/cols (10x3, 100x5, 1000x5, 1000x15)
- bench_table_render_with_theme: basic, rounded, thin, heavy, none
- bench_table_render_wide: terminal widths 40, 80, 120, 200

Signed-off-by: majiayu000 <1835304752@qq.com>
@majiayu000 majiayu000 marked this pull request as ready for review March 25, 2026 06:11
@fdncred
Contributor

fdncred commented Mar 25, 2026

I'm up for this, but there are a lot of changes to the Cargo.lock file. I'd like those changes reverted except for the ones related to this PR. It looks like cargo update was run by mistake.

@flinesse

Careful #17240 (comment)

@fdncred
Contributor

fdncred commented Mar 25, 2026

Careful #17240 (comment)

what's your point? are you saying @majiayu000 is an AI bot?

@flinesse

flinesse commented Mar 25, 2026

I can't claim this with certainty, but the comment on the linked issue and the prior MR attempt raise all kinds of flags.

Just sending up a flare so you can decide how much time you want to invest. Sorry for the vague comment prior.

@fdncred
Contributor

fdncred commented Mar 25, 2026

I can't claim with certainty but the comment on the linked issue and the prior MR attempt raises all kinds of flags.

Just sending up a flare so you can decide how much time you want to invest. Sorry for the vague comment earlier.

yes, indeed. I just saw that. it does raise flags. @majiayu000 who is doing your coding for you?

@majiayu000
Author

@fdncred I use Claude Code for planning issues, and Codex + Claude Code for cross-review, but this may not solve every issue. I am trying to improve my system. Feel free to close my PR if it bothers you or if a PR is not the right way to do this.

@cptpiepmatz
Member

This still has many changes in Cargo.lock for no reason. Also, while I like having more benchmarks, I feel we should first look into making our benchmarking system more useful by reducing as much overhead as possible. We can come back to this once we have a better benchmarking system in place.

@cptpiepmatz cptpiepmatz closed this Apr 3, 2026
