
Add more label benchmarks #71

Merged (4 commits) on Jun 11, 2024
Conversation

@OverloadedOrama (Contributor) commented Jun 7, 2024

Adds the following benchmarks from #36:

  • 🟥CPU🟥 RichTextLabel long text shaping: Display a RichTextLabel with 100+ paragraphs of Lorem Ipsum

  • 🟥CPU🟥 Text Resizing: Create a complex paragraph (lorem ipsum) in a Label. Make a script that resizes it every frame so it has to re-fit the text.

  • 🟥CPU🟥 Container sorting: Make a BoxContainer with 1000 Control children and call queue_sort() for 1000 frames in a row

  • 🟥CPU🟥 Container resizing: Create a random set of nested containers up to 20 levels deep. Every frame, resize the parent container and measure CPU.
    EDIT: Removed the container benchmarks as recommended in Add more label benchmarks #71 (comment).
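
For illustration, the text-resizing benchmark could be sketched along these lines in GDScript (a rough sketch only; the names and structure here are hypothetical and do not reflect the PR's actual code):

```gdscript
extends Label
# Rough sketch of the "Text Resizing" idea: a long lorem-ipsum paragraph in a
# Label that is resized every frame, forcing the text to be re-fitted.
# Hypothetical code, not the PR's implementation.

var _time := 0.0

func _ready() -> void:
	autowrap_mode = TextServer.AUTOWRAP_WORD_SMART
	text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. ".repeat(200)

func _process(delta: float) -> void:
	_time += delta
	# Oscillate the width so the label must re-shape and re-wrap every frame.
	size = Vector2(400.0 + 200.0 * sin(_time * 4.0), size.y)
```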

It seems that "🟥CPU🟥 Text Rendering: Create a label with a huge text (lorem ipsum) with a tiny font size that fills the screen. Measure performance." was already implemented in label.gd.

I also moved label.gd from the rendering folder to a new gui folder for better organization.

One thing I'm not sure of, however: the container sorting benchmark asks for 1000 frames in a row, which as far as I know is not currently possible to test, because the benchmarks are hard-coded to run for 5 seconds. Should I make further changes so that each benchmark can configure how long it takes?

Results on my laptop:

{
  "benchmarks": [
    {
      "category": "Gui > Container",
      "name": "Container Resizing",
      "results": {
        "render_cpu": 0.009149,
        "time": 0.239
      }
    },
    {
      "category": "Gui > Container",
      "name": "Container Sorting",
      "results": {
        "render_cpu": 0.03911,
        "time": 3.84
      }
    },
    {
      "category": "Gui > Label",
      "name": "Label",
      "results": {
        "render_cpu": 0.9346,
        "render_gpu": 0.4779,
        "time": 0.112
      }
    },
    {
      "category": "Gui > Label",
      "name": "Label Autowrap Arbitrary",
      "results": {
        "render_cpu": 0.9912,
        "render_gpu": 0.4784,
        "time": 0.119
      }
    },
    {
      "category": "Gui > Label",
      "name": "Label Autowrap Smart",
      "results": {
        "render_cpu": 1.017,
        "render_gpu": 0.4792,
        "time": 0.151
      }
    },
    {
      "category": "Gui > Label",
      "name": "Label Autowrap Word",
      "results": {
        "render_cpu": 1.033,
        "render_gpu": 0.4797,
        "time": 0.143
      }
    },
    {
      "category": "Gui > Label",
      "name": "Label Resize",
      "results": {
        "render_cpu": 1.06,
        "render_gpu": 1.417,
        "time": 0.133
      }
    },
    {
      "category": "Gui > Label",
      "name": "Rich Text Label",
      "results": {
        "render_cpu": 2.252,
        "render_gpu": 0.7583,
        "time": 0.487
      }
    }
  ],
  "engine": {
    "version": "v4.3.beta1.official",
    "version_hash": "a4f2ea91a1bd18f70a43ff4c1377db49b56bc3f0"
  },
  "system": {
    "cpu_architecture": "x86_64",
    "cpu_count": 12,
    "cpu_name": "AMD Ryzen 5 6600H with Radeon Graphics",
    "os": "Linux"
  }
}

@Calinou (Member) left a comment

Tested locally, it works as expected.

@Calinou (Member) commented Jun 7, 2024

> One thing I'm not sure of, however: the container sorting benchmark asks for 1000 frames in a row, which as far as I know is not currently possible to test, because the benchmarks are hard-coded to run for 5 seconds. Should I make further changes so that each benchmark can configure how long it takes?

V-Sync is disabled in the benchmarks project, so it's possible to render at more than 60 FPS if the CPU/GPU can keep up. If I add print(i) within the loop, I see all numbers from 0 to 999 being printed.
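
The 1000-frame loop under discussion could look roughly like this (an illustrative sketch, not the PR's actual code):

```gdscript
extends BoxContainer
# Rough sketch of the "Container Sorting" benchmark: a BoxContainer with 1000
# Control children whose sorting is re-queued once per frame for 1000 frames.
# With V-Sync disabled, these frames can render much faster than real time.

var _frames := 0

func _ready() -> void:
	for i in 1000:
		add_child(Control.new())

func _process(_delta: float) -> void:
	if _frames < 1000:
		queue_sort()
		print(_frames)  # Confirms every frame from 0 to 999 actually runs.
		_frames += 1
```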

Benchmarks that have Render CPU/GPU reporting disabled have no time limit, so maybe you could disable it for the container benchmarks.

Something strange I noticed though is that this benchmark actually requires 3+ seconds to run on my machine, yet the reported main thread time is only a few milliseconds… This may be a consequence of godotengine/godot#20623.

@OverloadedOrama (Contributor, Author)

> Benchmarks that have Render CPU/GPU reporting disabled have no time limit, so maybe you could disable it for the container benchmarks.

Do you mean the test_render_cpu and test_render_gpu variables? Setting them to false doesn't seem to change anything other than whether the CPU/GPU times are reported at the end of the benchmark; the benchmark still takes 5 seconds. The 5 seconds appear to be hard-coded in manager.gd's run_test() method. If I understand the code correctly, every benchmark that returns a node takes 5 seconds. I think we may need to make this time limit configurable, as some benchmarks in the list require more than 5 seconds, for example the physics benchmarks.
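
One possible shape for a configurable limit (purely hypothetical names; manager.gd's real structure may well differ):

```gdscript
# Hypothetical sketch: benchmarks could declare an optional time_limit
# property that run_test() reads, falling back to the current default.
const DEFAULT_TIME_LIMIT := 5.0  # seconds, the currently hard-coded value

func _get_time_limit(benchmark: Object) -> float:
	var limit = benchmark.get("time_limit")  # null if the property is absent
	return limit if limit != null else DEFAULT_TIME_LIMIT
```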

@Calinou (Member) commented Jun 10, 2024

I don't mind either way, as the time reported by the container sorting benchmark is meaningless due to (what I assume is) godotengine/godot#20623. It may be better to remove this benchmark for now, as it'd be misleading otherwise. We can resurrect the code from this PR if we manage to fix that issue in the engine.

@OverloadedOrama OverloadedOrama changed the title Add GUI benchmarks Add more label benchmarks Jun 11, 2024
@OverloadedOrama (Contributor, Author)

All right, I removed the container benchmarks.

@Calinou (Member) left a comment

Thanks!

@Calinou Calinou merged commit baf024d into godotengine:main Jun 11, 2024
@OverloadedOrama OverloadedOrama deleted the gui-benchmarks branch June 12, 2024 13:33