

fix: rebuffering percentage inflated if client ads fail to load #68

Merged
merged 2 commits into from
Apr 4, 2024

Conversation

daytime-em
Collaborator

No description provided.

@daytime-em daytime-em requested a review from a team as a code owner April 2, 2024 22:23
// content. The events all really happen and there really is a short playback interruption, so
// it's good to count it all
if (muxPlayerState == MuxPlayerState.REBUFFERING) {
stateCollector.pause()


question(non-blocking): any concerns with failing to unpause later (e.g., due to other jank conditions), thereby deflating rebuffering?

Collaborator Author


Nope, it would be ok. We technically "unpause" when we get adbreakstart, because MuxPlayerState goes from PAUSED to AD_BREAK_STARTED or something like that.

Collaborator Author

@daytime-em daytime-em Apr 4, 2024


.. and the other part of your question: the rebuffer tracker doesn't check for pause, so it's safe to send from that perspective.

A pause event is also not wrong here, since content stopped playing. After the ad is over, playing will be sent, which will unpause us. media3 makes it easy to protect that process from jank by checking ExoPlayer's state, which we do.
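For illustration, the guard mentioned above might look like this minimal Kotlin sketch. `shouldDispatchPlaying` and the simplified `Player` are hypothetical stand-ins for the SDK's real media3 integration, not actual SDK code:

```kotlin
// Hypothetical sketch of protecting the "unpause" with the player's real state.
// This Player is a simplified stand-in for androidx.media3.common.Player; the
// constant mirrors Player.STATE_READY but is redefined here to stay self-contained.
const val STATE_READY = 3

data class Player(val playbackState: Int, val playWhenReady: Boolean)

// Only report `playing` (which ends our PAUSED state) when the underlying
// player is genuinely ready and intends to play, so late or janky callbacks
// can't flip us back to PLAYING spuriously.
fun shouldDispatchPlaying(player: Player): Boolean =
    player.playbackState == STATE_READY && player.playWhenReady
```

The point of the guard is that the dispatched state always derives from the player's actual state, rather than from the order in which callbacks happen to arrive.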

MuxPlayerState.REBUFFERING, stateCollector.muxPlayerState
)

Assert.assertTrue(


suggestion(non-blocking): As long as this has been smoke tested, I don't think this should be a blocker, but we should probably have at least one test case validating buffering after leaving the ad-playing state, ideally one that adequately represents what can happen in real-world usage, right?

Collaborator Author


I think we'd be ok, because that pause() call will end the rebuffering state at the beginning of the ad break (mentioned in the other comment). The way it "ends the rebuffering state" is twofold: 1) setting our overall muxPlayerState to PAUSED, and 2) if muxPlayerState was REBUFFERING before that, sending rebufferend to the core (and the backend and dashboard).
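The twofold behavior described above can be sketched as a tiny state machine. `StateCollector`, `MuxPlayerState`, and the event names mirror the discussion in this thread, but this is a hypothetical simplification for illustration, not the SDK's actual implementation:

```kotlin
// Hypothetical simplification of the collector behavior discussed above.
enum class MuxPlayerState { PLAYING, PAUSED, REBUFFERING, AD_BREAK_STARTED }

class StateCollector {
    var muxPlayerState = MuxPlayerState.PLAYING
        private set
    val dispatched = mutableListOf<String>()

    fun buffering() {
        muxPlayerState = MuxPlayerState.REBUFFERING
        dispatched += "rebufferstart"
    }

    // pause() closes any in-progress rebuffer before switching to PAUSED,
    // so rebuffering time stops accruing when an ad break begins.
    fun pause() {
        if (muxPlayerState == MuxPlayerState.REBUFFERING) {
            dispatched += "rebufferend"
        }
        muxPlayerState = MuxPlayerState.PAUSED
        dispatched += "pause"
    }

    // The ad break "unpauses" us in the sense discussed above: the state
    // leaves PAUSED without content playback resuming.
    fun adBreakStarted() {
        muxPlayerState = MuxPlayerState.AD_BREAK_STARTED
    }
}
```

With this shape, a stall followed by an ad break produces `rebufferstart`, `rebufferend`, `pause`, so the rebuffer window ends at the ad break instead of spanning it, which is what keeps the rebuffering percentage from inflating.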


@cjpillsbury cjpillsbury left a comment


Approving with a couple of nb callouts.

@daytime-em daytime-em merged commit 9f87c15 into releases/v1.3.0 Apr 4, 2024
2 checks passed
@daytime-em daytime-em deleted the fix/rebuffer-percent-csai branch April 4, 2024 22:24
This was referenced Apr 4, 2024
daytime-em added a commit that referenced this pull request Apr 5, 2024
## New
* `MuxErrorException` now allows you to report non-fatal and business-related errors

## Improvements
* update: Updated MuxCore to version 8.0.0
* update: Updated Android Core to version 1.2.0

## Fixes
* fix: renditionchange sent in situations where rendition was only reset (#73)
* fix: Capture IMA CSAI media failures with LOG events (#67)
* fix: rebuffering percentage inflated if client ads fail to load (#68)


Co-authored-by: Emily Dixon <edixon@mux.com>
Co-authored-by: GitHub <noreply@github.com>