[FarCry 3] Bad performance with Gallium-Nine #313
Comments
Could you produce a log with NINE_DEBUG=all? (This needs mesa built with --enable-debug to have any output.)
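A typical way to capture such a log might look like the following sketch; the executable name `FC3.exe` is a placeholder, and `WINEDEBUG=-all` is optional, just to keep Wine's own channels out of the file:

```shell
# Capture Gallium Nine debug output to a file (FC3.exe is hypothetical).
# NINE_DEBUG is read by the Nine state tracker and only prints anything
# when mesa was configured with --enable-debug.
NINE_DEBUG=all WINEDEBUG=-all wine FC3.exe 2> nine_debug.log
```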
I did both :D And the NINE_DEBUG=all log is here:
Thank you. I don't see what could possibly cause the bad performance. I notice on the internet that a lot of people complain of bad performance on some CPUs, for example AMD CPUs. Perhaps the game has a fast internal path that only triggers if some criteria are met, and Wine wouldn't satisfy all of them.
So there is no solution for this issue, unfortunately? :-(
A performance analysis with a tool like operf will tell where the time is spent.
It may also be a game sensitive to thread scheduling; in that case, playing with the Linux scheduler may affect performance.
Your Windows screenshot shows the performance on your 12 cores. Could you show the same on Nine with the Gallium HUD (not just the average, but the details for all cores)? It's something like GALLIUM_HUD=cpu0+cpu1+etc.
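As a sketch, the per-core HUD string can be built automatically instead of typed out by hand; the game command `wine FC3.exe` is a placeholder for however you launch it:

```shell
# Build "cpu0+cpu1+...+cpuN" for every logical core reported by nproc,
# then launch with one graph per core plus fps and total CPU usage.
cores=$(seq -s + 0 $(( $(nproc) - 1 )) | sed 's/[0-9]\+/cpu&/g')
GALLIUM_HUD="fps,cpu,${cores}" wine FC3.exe   # FC3.exe is hypothetical
```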
On my old PC with an Nvidia GPU and an Intel CPU, I never had this performance issue. I have a Ryzen; maybe there is something wrong with my CPU. I launch it with … but I don't know if it's a good report ;-)
For starters, it doesn't seem like the CPU is the bottleneck here; at least Nine uses half the wined3d CPU to get (the same) 10 fps. Do you have vsync enabled? I see the "fast" part being ~60 fps. Try disabling it.
Try enabling all possible GALLIUM_HUD graphs, with the hope that one of them might give some hint. Flushes might indicate unwanted synchronization.
Finding performance issues is very tricky. For example, apitrace can do CPU and GPU timing for OpenGL, but not for d3d9. Some of the Nine developers have worked on that, but the pull request has been hanging for years. PIXWin is a program that comes with the D3D SDK and it has nice capabilities, but it is a Windows program and running it under Wine is troublesome.
A somewhat primitive tool I've used a lot is "Helix Mod". It's a d3d9.dll wrapper that allows dumping and replacing shaders. The dumping is done interactively: you use the numpad keys to select a shader, and the selected shader is nullified (it stops working), which might make an object using that shader go black or vanish entirely.
Also, playing with the graphics engine settings sometimes helps: set everything to the lowest, then enable options one by one until you see a dramatic effect. Then apitrace before and after and try to find the difference...
This reminds me: check whether PhysX is disabled, if the game has it.
I tried to set all the GALLIUM_HUD options; I hope it will be clear for you. Disabling vsync helps a little, I gain some fps. Same thing if I set all options to "LOW": I have 20 fps maximum. There isn't PhysX in this game :-)
According to the HUDs, there is a CPU used close to 100% on Windows, but not on Linux. It may be some Linux threading issue (Windows programs have more control over switching between threads). Perhaps try to force the game to run on 4 CPUs, or use the realtime scheduler?
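Both suggestions can be tried from the command line; this is only a sketch, with `FC3.exe` as a placeholder, and the realtime part needs root or CAP_SYS_NICE:

```shell
# Pin the whole Wine process tree to the first four logical CPUs:
taskset -c 0-3 wine FC3.exe

# Or run it under the SCHED_FIFO realtime scheduler at a low RT priority.
# Use with care: a runaway realtime task can starve the rest of the system.
sudo chrt -f 10 wine FC3.exe
```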
A Ryzen CPU works a bit like NUMA: groups of physical cores share caches, thus forming a single node, and there is a heavy penalty when processes move to another node. So the kernel should try to avoid moving processes across nodes. There were options introduced for that... BTW, try using …
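One way to see which logical CPUs share a cache is via standard Linux sysfs paths (`index3` is usually the L3); on a Zen-based Ryzen this reveals the CCX grouping:

```shell
# Print, for each CPU, the set of CPUs it shares its L3 cache with.
# Keeping a game's threads within one such set avoids the cross-node penalty.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
  l3="$(cat "$c/cache/index3/shared_cpu_list" 2>/dev/null || echo '?')"
  printf '%s: shares L3 with CPUs %s\n' "${c##*/}" "$l3"
done
```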
About the screenshot: what we see here is a kind of locking issue, and it is extra complicated because it might be CPU/GPU locking. At least the flushing is not excessive; I've seen wined3d cause a flush after every draw (or double that). CPU locking can sometimes be spotted when using … Notice in the graphs that GPU utilization is about 60% and the shader clock is even at 461 MHz. So basically the core is so underused that it is still running in idle mode. We might need a core mesa/radeonsi developer to look into this issue.
The HUD shows the system CPU usage, not the app CPU usage, so what you see in the screenshot is already the same as htop. Possibly the game uses as many threads as there are CPUs to prepare draw commands. If the delay to switch to a thread once it is ready is a little slow (compared to Windows, where it would be fast thanks to some fine control not available in Linux schedulers), it could cause what we see; hence my comment about investigating there (forcing fewer CPUs, changing the Linux scheduler).
I asked for … From the first image, he gets 22.5% total CPU utilization; with 12 cores, that makes about 270% core usage. In the last picture, summing the CPUs gives around 340% core utilization. The scheduler should not matter at all, since there are enough cores to run all the threads without having to do context switches (I'm joking but...). Usually games run as many threads as there are physical cores, so this leaves half of the virtual cores free for the Linux scheduler to use as it sees fit.
It is far more likely that, e.g., the game tries to be multi-threaded, so it calls D3D concurrently from 6 different threads, while Nine just has a big lock around the threaded code, so it processes each thread one by one in a serialized manner. I think Nine CSMT should handle this much more gracefully (since the lock would be held only while filling the queue, not during the actual processing).
I don't see what you mean by a better locking mechanism for Nine CSMT.
It's not CSMT locking that is the issue here. I'm talking about the case when, e.g., two separate threads issue different drawing commands at the same time. Nine will serialize that: one would block until the other is finished.
If two threads issue commands, they must still make sure the draw commands have the same parameters (shader constants, sampler states, etc.). In practice the app has to use an internal locking system to ensure a thread is the only one sending all of its commands for its draw call.
Whatever. I just gave it as an example of why it should not be an issue. No idea why it's being made into such a big question.
About the issue, what is the best test I can do to help you?
1- With …
2- With …
3- And with …
I'll see what can be obtained through PIXWin. It's a powerful D3D debugger that comes with the 2010 D3D SDK. If anybody else has any ideas... feel free to give them a try.
I found this about performance with FC3:
https://bugs.winehq.org/show_bug.cgi?id=43277
I don't know if it's only with Ryzen CPUs, because when I tried with an old Intel/Nvidia laptop at release, I never had this bad performance. I will retry after work with my laptop.
On Thu, 5 Apr 2018 at 01:40, iiv3 <notifications@github.com> wrote:
1. While it's not 100%, being at 80% is really high usage. I couldn't find the original work, but there seems to be a staging patch that uses shared memory between processes to lower the messaging overhead. You might want to try it.
2. My mistake, I wanted latencytop on the game, but from the snapshot I see that it is <=5 ms max, so this is probably not the bottleneck...
3. That was expected, but it had to be tested.
I'll see what can be obtained through PIXWin. It's a powerful D3D debugger that comes with the 2010 D3D SDK. If anybody else has any ideas... feel free to give them a try.
Hey guys, I can confirm that the issue comes from the AMD CPU. I don't know if it's all AMD CPUs or only Ryzen CPUs. I updated the Wine bug report, but I don't know if there will be a solution. Wine's devs don't care about these problems.
I wonder if it's the same issue as with Path of Exile, where turning on the Engine Multithreading game setting lowers fps greatly. I'm on a Ryzen 1700 + RX 580 and get the following fps and htop CPU usage in that game:
iiv3 and Axel, there is an issue with Ryzen, Wine, and the game's engine. See this comment on the Wine Bugzilla and the following screenshot to see the difference:
Closing as not a nine bug. |
Hi,
As Axel Davy explained here, the performance with Gallium Nine should be close to what I have on Windows, but actually it's far, far away (like Star Wars :P).
In fact, I have the same performance with Nine as with WineD3D. And Gallium Nine is correctly enabled, because the console output shows these lines:
Native Direct3D 9 is active.
For more information visit https://wiki.ixit.cz/d3d9
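A quick way to check for that banner at launch is to capture the console output and grep for it; this is only a sketch, and `FC3.exe` and `game.log` are placeholder names:

```shell
# Run the game, mirroring console output into a log file,
# then confirm Nine's banner line appears in it.
wine FC3.exe 2>&1 | tee game.log
grep -q "Native Direct3D 9 is active" game.log && echo "Nine enabled"
```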
Performance on Windows 7:
https://reho.st/self/a958f0e2e6f0765b1b7b35603b42837a1fc03b93.jpg
Performance with WineD3D:
https://reho.st/self/d8960a1052964986b5ba05552eab4ea19c314e3b.jpg
Performance with Gallium Nine:
https://reho.st/self/480b7bda86c14dc76562f750ba1e52d39cd38f1d.jpg