
Using microseconds on duration/timestamp will result in loss of precision #853

Closed
fwenguang opened this issue Jan 5, 2024 · 5 comments

@fwenguang
Contributor

issues in pytorch: pytorch/pytorch#116830

@briancoutinho
Contributor

briancoutinho commented Jan 5, 2024

That's a fair point: the data for events does come in with nanosecond granularity for start and end timestamps.

AFAIK the main constraint is the Chrome Trace format. We use its 'X' Complete events to represent kernels and operators, and unfortunately the format specifies timing at microsecond resolution only:

> There is an extra parameter dur to specify the tracing clock duration of complete events in microseconds. All other parameters are the same as in duration events.

And all timestamps are in microseconds too:

> ts: The tracing clock timestamp of the event. The timestamps are provided at microsecond granularity.

https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/preview
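To make the truncation concrete, here is a minimal Python sketch (not profiler code) of what emitting an 'X' Complete event with integer-microsecond `ts`/`dur` does to nanosecond inputs. The start timestamp is the 19-digit example value from this thread; the end timestamp is a made-up value for illustration:

```python
# Illustrative sketch only: what integer-microsecond 'ts'/'dur' do to
# nanosecond-granularity timestamps. Field names follow the Chrome Trace
# format; the end timestamp is a hypothetical example value.
import json

start_ns = 1706495062881409718  # nanosecond start timestamp (from the thread)
end_ns = 1706495062881445953    # hypothetical nanosecond end timestamp

event = {
    "ph": "X", "cat": "kernel", "name": "kernel_name", "pid": 0, "tid": 1,
    "ts": start_ns // 1000,             # integer division drops the 718 ns
    "dur": (end_ns - start_ns) // 1000, # true duration 36235 ns becomes 36 us
}
print(json.dumps(event))
```

Here the sub-microsecond remainder (718 ns of `ts`, 235 ns of `dur`) is simply discarded for every event written this way.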

@fwenguang
Contributor Author

fwenguang commented Jan 8, 2024

> That's a fair point: the data for events does come in with nanosecond granularity for start and end timestamps.
>
> AFAIK the main constraint is the Chrome Trace format. We use its 'X' Complete events to represent kernels and operators, and unfortunately the format specifies timing at microsecond resolution only:
>
> > There is an extra parameter dur to specify the tracing clock duration of complete events in microseconds. All other parameters are the same as in duration events.
>
> And all timestamps are in microseconds too:
>
> > ts: The tracing clock timestamp of the event. The timestamps are provided at microsecond granularity.
>
> https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/preview

Yes, ts and dur are at microsecond granularity, but as far as I know they can be floating-point values.

So if there are no other constraints, I think the conversion from nanoseconds (int64) to microseconds (double) should be performed only when saving to a Chrome trace file. Using floating-point values for 'ts' and 'dur' would avoid losing tens or even hundreds of nanoseconds per event.

Additionally, 'ts' needs to use relative timestamps, because timestamps in nanoseconds require 19 digits, while a double can typically represent only about 15-16 significant decimal digits.
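The precision argument above can be sketched in a few lines of Python (illustrative numbers only; the baseline is a hypothetical trace-start timestamp, not anything from the profiler source):

```python
# A 19-digit absolute nanosecond timestamp exceeds the 53-bit significand of
# a double, so converting it directly to a floating-point microsecond value
# rounds away the low digits:
ts_ns = 1706495062881409718
print(ts_ns / 1000.0)  # rounded: the .718 ns digits are not preserved

# Subtracting a trace-start baseline first keeps the value small enough for
# a double to carry the nanosecond digits exactly:
base_ns = 1706495062000000000  # hypothetical trace-start timestamp
print((ts_ns - base_ns) / 1000.0)  # 881409.718
```

This is why both changes go together: the double representation only preserves nanosecond digits once 'ts' is made relative to a baseline.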

@briancoutinho
Contributor

> But they can be floating-point values as far as I know.

Oh I didn't know that. We can try that actually.

> Additionally, 'ts' needs to use relative timestamps, because timestamps in nanoseconds require 19 digits, while a double can typically represent only about 15-16 significant decimal digits.

How about just using a fixed point representation with 3 digits after the decimal point?
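One possible reading of "fixed point with 3 digits after the decimal" is to keep the value as integer microseconds plus a three-digit nanosecond remainder, formatted as a string. This is only a sketch of the suggestion (the helper name is made up, not profiler code):

```python
# Hypothetical fixed-point formatting: integer microseconds with exactly
# three decimal digits carrying the nanosecond remainder. This sidesteps
# double precision entirely, since no floating-point value is involved.
def ns_to_fixed_us(ts_ns: int) -> str:
    us, ns_rem = divmod(ts_ns, 1000)
    return f"{us}.{ns_rem:03d}"

print(ns_to_fixed_us(1706495062881409718))  # "1706495062881409.718"
print(ns_to_fixed_us(36235))                # "36.235"
```

For JSON output the value would have to be emitted as a bare number rather than a quoted string; the string form here just makes the fixed three-decimal shape explicit.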

@fwenguang
Contributor Author

> How about just using a fixed point representation with 3 digits after the decimal point?

Sorry, I don't fully understand that. Which field would be used to store those 3 digits?

If the 'ts' in nanoseconds is 1706495062881409718, the event looked like this before the modification:

{
    "ph": "X", "cat": "kernel", "name": "kernel_name", "pid": 0, "tid": 1,
    "ts": 1706495062881409, "dur": 36,
    "args": {
      ...
    }
  }

What should it be like after the modification?
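For illustration only (this is not the resolution the maintainers chose, just the two ideas from earlier in the thread combined), a relative floating-point 'ts' with fractional 'dur' could look like this; the baseline and duration values are hypothetical:

```python
# Sketch combining the floating-point and relative-timestamp proposals.
# base_ns is a hypothetical trace-start baseline; dur is a made-up value.
import json

ts_ns = 1706495062881409718
base_ns = 1706495062000000000  # hypothetical trace-start timestamp

event = {
    "ph": "X", "cat": "kernel", "name": "kernel_name", "pid": 0, "tid": 1,
    "ts": (ts_ns - base_ns) / 1000.0,  # 881409.718 us, ns digits preserved
    "dur": 36.235,                     # fractional microseconds
}
print(json.dumps(event))
```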

sraikund16 self-assigned this Apr 18, 2024
@sraikund16
Contributor

Resolved in this PR: pytorch/pytorch#123650
