
FCAT support #100

Open
1 of 5 tasks
illusion0001 opened this issue Jul 25, 2021 · 6 comments

@illusion0001
Contributor

illusion0001 commented Jul 25, 2021

Prerequisites:
Uncompressed 60 Hz video feed
A game running with VSync off that goes above 60 fps, with the color border on screen and a framerate + frametime counter for reference. (Seizure warning)

Software that can display the color borders (for anything newer than D3D11, i.e. Vulkan and D3D12 applications)
On-screen displays that have color border support:
OCAT
RivaTuner
Show the color border on the left side for compatibility reasons when recording and analyzing

More info can be found here: https://www.nvidia.com/en-us/geforce/technologies/fcat/downloads/

You want the FCAT Software download; it contains scripts to analyze the recorded video as well as capture methods.
https://international.download.nvidia.com/geforce-com/international/downloads/fcat-Rev1.55.zip
https://international.download.nvidia.com/geforce-com/international/pdfs/FCAT+4K_Reviewer's_Guide.pdf

Task List:

  • Obtain Sample Clips

https://drive.google.com/drive/folders/1TtAu5kIpfV8cJ8KVTAN0GtWx1esSO6mW

  • Analyze only the leftmost strip of the video, adjustable width × full height, e.g. 30x1080 (see the sketch after this list).


  • Compensate for image noise, similar to the pixel difference we have now, but for the color borders.
  • Read frametime from border length
  • Plot the frametime data as framerate
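
A minimal sketch (not trdrop code) of how the border strip could be sampled: crop the leftmost columns and label every scanline with the nearest reference color. Assumptions: frames arrive as numpy arrays in BGR order (e.g. from cv2.VideoCapture), the strip width is the adjustable value from the first item, and the color table is a partial placeholder eyeballed from the 120.csv snippet further down rather than the official 16-color FCAT sequence.

```python
import numpy as np

STRIP_WIDTH = 30  # "adjustable width", e.g. 30x1080

# Partial, hypothetical color table (RGB); the full sequence is in the FCAT docs.
FCAT_COLORS = {
    "white":    (0xfc, 0xfe, 0xfc),
    "lime":     (0x03, 0xfd, 0x02),
    "blue":     (0x00, 0x00, 0xf8),
    "red":      (0xf9, 0x01, 0x00),
    "teal":     (0x03, 0xd8, 0xd8),
    "navy":     (0x01, 0x01, 0xc2),
    "green":    (0x01, 0xd9, 0x01),
    "aqua":     (0x03, 0xfd, 0xfa),
    "dark red": (0xc2, 0x01, 0x00),
}

def classify_scanlines(frame_bgr: np.ndarray) -> list[str]:
    """Label each scanline of the left border strip with its nearest reference color."""
    strip = frame_bgr[:, :STRIP_WIDTH, ::-1].astype(np.float32)   # crop + BGR -> RGB
    row_means = strip.mean(axis=1)                                # (height, 3), averages out noise
    names = list(FCAT_COLORS)
    refs = np.array([FCAT_COLORS[n] for n in names], dtype=np.float32)
    dists = np.linalg.norm(row_means[:, None, :] - refs[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]
```
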
@cirquit
Owner

cirquit commented Jul 27, 2021

So, can you elaborate a bit more on what you actually want to achieve? Should we be the ones exporting the videos with the colored boxes? It seems to me that we don't do very different things; they just have a different visualization of it.

Or do you want trdrop to be able to render the results of these videos (which, from what I understand, are VR recordings)?

As you also added the item of plotting frametime data as framerate: I think this is very hard to understand for non-professionals and should (if at all) be a non-default option. I also think that discerning framerate and frametime (which will probably be of similar color, as you want to know which video they relate to) will be very hard. I feel we're starting to get into territory which needs a major overhaul of the whole interface/concept of how trdrop works.

@illusion0001
Contributor Author

illusion0001 commented Jul 27, 2021

> So, can you elaborate a bit more on what you actually want to achieve? Should we be the ones exporting the videos with the colored boxes? It seems to me that we don't do very different things; they just have a different visualization of it.

I wanted trdrop to be able to analyze FCAT video and accurately display framerate and frametime just from the border data itself. Take this video as an example: https://youtu.be/toPqfXhEYbA?list=PLLwn03LyuQ4tOV2oAQn9Zmx7xnzAzrX4X

Getting a sample video with an on-screen framerate and frametime counter from the OSD applications above gives us a reference for what the actual frametime data should be. Once we can derive the frametime data from the video alone, the OSD is no longer needed, as we can just analyze the border data and get the frametimes from there.

> As you also added the item of plotting frametime data as framerate

Yes, as this will allow trdrop to display framerate accurately, because it is just converting frametime to framerate,
i.e. 8.3 ms -> 120 fps (1000 / 8.3 ≈ 120.5).
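
A quick sketch of that conversion, just to make the relation explicit (plain Python, not trdrop code):

```python
# Frametime (ms per frame) and framerate (frames per second) are reciprocal.
def frametime_to_fps(frametime_ms: float) -> float:
    return 1000.0 / frametime_ms

def fps_to_frametime(fps: float) -> float:
    return 1000.0 / fps

print(frametime_to_fps(8.3))    # ~120.5 fps
print(fps_to_frametime(120.0))  # ~8.33 ms
```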

> I think this is very hard to understand for non-professionals and should (if at all) be a non-default option. I also think that discerning framerate and frametime (which will probably be of similar color, as you want to know which video they relate to) will be very hard.

We can use the existing plotting methods; we'll only need to implement reading the frametimes from the color borders, plot the frametime data as-is, then convert the resulting frametime data back into framerate and plot that into the framerate graph.
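
A hedged sketch of what "reading the frametimes from the color borders" could look like once each scanline of the strip has a color label (as in the earlier sketch): run-length encode the labels so every bar becomes (color, height), then turn heights into frametimes. The input format and the ms-per-scanline factor are assumptions; the factor is worked out in a later comment.

```python
from itertools import groupby

def bars_from_labels(labels: list[str]) -> list[tuple[str, int]]:
    """Collapse per-scanline color labels (top to bottom) into (color, height) bars."""
    return [(color, sum(1 for _ in run)) for color, run in groupby(labels)]

def bars_to_frametimes_ms(bars: list[tuple[str, int]], ms_per_scanline: float) -> list[float]:
    """One frametime per colored bar; ms_per_scanline depends on the capture."""
    return [height * ms_per_scanline for _, height in bars]

# Example with bar heights of 562 and 482 scanlines, as in the 120.csv snippet below:
bars = bars_from_labels(["white"] * 562 + ["magenta"] * 482)
print(bars)                                    # [('white', 562), ('magenta', 482)]
print(bars_to_frametimes_ms(bars, 0.0148809))  # [~8.36, ~7.17] ms
```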

> Or do you want trdrop to be able to render the results of these videos (which, from what I understand, are VR recordings)?

No no, just standard single-screen recordings.

@illusion0001
Contributor Author

illusion0001 commented Nov 4, 2021

I might have found the formula to convert pixel length to frametime.

> With some simple number crunching, we can easily deduce the frame times for every single frame from the height of the colored bars. As the proper sequence of colors is defined, if we encounter a missing color we can immediately report a dropped frame.

https://web.archive.org/web/20200130015109/http://boostclock.com/guides/fta-guide-10.html
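
If that holds, the conversion is just a constant number of milliseconds per scanline. A sketch under the assumption of a 60 Hz capture with ~1120 total scanlines per refresh (derived from the 120.csv snippet in a later comment, where 40 scanlines ≈ 0.595 ms; these are not spec values):

```python
CAPTURE_HZ = 60
SCANLINES_PER_REFRESH = 1120   # assumption derived from 120.csv, not an FCAT spec value

MS_PER_SCANLINE = (1000.0 / CAPTURE_HZ) / SCANLINES_PER_REFRESH   # ~0.01488 ms

def bar_height_to_frametime_ms(scanlines: int) -> float:
    """Convert the height of one colored bar (in scanlines) to a frametime."""
    return scanlines * MS_PER_SCANLINE

print(bar_height_to_frametime_ms(562))   # ~8.36 ms, matches frame 0 in 120.csv
```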

Frametime-from-camera calculation, may or may not be useful:


http://www.c-cam.be/doc/Archive/FrameRates.pdf
Re-upload in case it goes down: FrameRates.pdf

@illusion0001
Contributor Author

@cirquit
I got some proper sample clips; gonna link them here so they don't get lost in the sea of messages.
Initial results: trdrop detects tearing below 60 fps properly (sometimes), but anything above 60 fps fails.

(screenshots: test_0000001482, test_0000001549)

https://drive.google.com/drive/folders/1TtAu5kIpfV8cJ8KVTAN0GtWx1esSO6mW
Notes: the blackout folder contains FCAT videos without the game content for a small file size, but still uncompressed. The mp4 folder also contains the 61 fps header bug.

Just for fun, here are some analyses with the current algorithm.

Epilepsy warning.
Normal: https://youtu.be/oipKh_DjV98
Delta rendering: https://youtu.be/IAMB9Ql1WJc

@illusion0001
Contributor Author

illusion0001 commented Nov 6, 2021

Gonna post my findings here. This is the scanout of CSS-120fps-static.mp4 (which can be found in the folder linked above), run through the extractor tool from the NVIDIA website.

120.csv
The beginning of most screen refreshes (i.e. video frames) seems to carry a frametime value, calculated from the border height, that belongs to a previous frame? But that can't be right, because frame 0 already has 8.3 ms in it.

Snippet of 120.csv:

```
[frame] [scanlines] [time (ms)] [fps] [frame start (s)] [screen refresh] [color]
0 562 8.363111964 119.5727146 0 0 0xdde1de
1 482 7.172633393 139.4188083 0.008363112 0 0xfa01fa
2 36 0.535715357 1866.662933 0.015535745 0 0xfcff04
3 40 0.595239286 1679.99664 0.016071461 0 0x000000
4 1040 15.47622143 64.61525538 0.0166667 1 0xfcff04
5 40 0.595239286 1679.99664 0.032142921 1 0xfcfefc
6 40 0.595239286 1679.99664 0.032738161 1 0x000000
7 556 8.273826071 120.8630676 0.0333334 2 0xfcfffc
8 476 7.0833475 141.1761882 0.041607226 2 0x03fd02
9 48 0.714287143 1399.9972 0.048690574 2 0x0000f8
10 40 0.595239286 1679.99664 0.049404861 2 0x000000
11 546 8.12501625 123.0766769 0.0500001 3 0x0000f8
12 494 7.351205179 136.0321166 0.058125116 3 0xf90100
13 40 0.595239286 1679.99664 0.065476321 3 0x03d8d8
14 40 0.595239286 1679.99664 0.066071561 3 0x000000
15 546 8.12501625 123.0766769 0.0666668 4 0x02d9d8
16 486 7.232157321 138.2713284 0.074791816 4 0x0101c2
17 48 0.714287143 1399.9972 0.082023974 4 0x01d901
18 40 0.595239286 1679.99664 0.082738261 4 0x000000
19 572 8.511921786 117.4822825 0.0833335 5 0x00d901
20 450 6.696441964 149.3330347 0.091845422 5 0x03fdfa
21 58 0.863096964 1158.618372 0.098541864 5 0xc20100
22 40 0.595239286 1679.99664 0.099404961 5 0x000000
```
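
The time and fps columns above are consistent with that per-scanline model; a small sketch to sanity-check the extractor output, assuming 120.csv is comma-separated with the column order shown in the snippet (adjust the delimiter if the extractor writes something else):

```python
import csv

MS_PER_SCANLINE = (1000.0 / 60) / 1120   # same assumption as in the earlier sketch

with open("120.csv", newline="") as f:
    for row in csv.reader(f):
        if not row or not row[0].strip().isdigit():
            continue   # skip the header row
        scanlines, time_ms, fps = int(row[1]), float(row[2]), float(row[3])
        assert abs(fps - 1000.0 / time_ms) < 0.1                   # fps column = 1000 / time
        assert abs(time_ms - scanlines * MS_PER_SCANLINE) < 0.01   # time column = bar height * ms/scanline
```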

@draconb

draconb commented Oct 15, 2022

Hey, just wondering if you got FCAT support working, or if you're still planning to support it? Also, I checked the v2 branch, but it doesn't seem like it's been updated recently; is that just not public yet, or has progress stalled on it? Great-looking software, thank you both for your hard work on it!
