ConvertCueToDOMTree should apply positioning based on cue settings to nodes #86
Looking at the spec, issue #82 is very closely integrated with this one, and it makes more sense to start here. So I'll work on this before that one.
Implementing this is going to add a lot of CSS to the generated DOM tree. Do people think it's worth testing the CSS in each JSON file? Or should we make ConvertCueToDOMTree() able to pass back DOM trees with or without CSS and test it that way? I'm in favor of keeping the CSS in each JSON file: if we're parsing cue settings correctly, we should also be reflecting those settings with the appropriate CSS.
I think it's fine for the JSON to include the CSS; it's meant for test comparison, especially for regressions, not for human readability. The more the tests can cover, the better.
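To make the idea concrete, a JSON fixture entry for such a test might look something like the sketch below. The key names (`cue`, `expectedTree`, `style`, and so on) are purely illustrative, not the project's actual schema, and the CSS values are a made-up example of positioning derived from cue settings.

```json
{
  "cue": "00:00:01.000 --> 00:00:05.000 line:0 align:start\nHello <b>world</b>",
  "expectedTree": {
    "tag": "div",
    "style": {
      "position": "absolute",
      "top": "0%",
      "text-align": "left"
    },
    "children": [
      { "type": "text", "value": "Hello " },
      { "tag": "b", "children": [{ "type": "text", "value": "world" }] }
    ]
  }
}
```

A fixture like this lets a regression test assert the full generated tree, CSS included, with a single deep-equality check.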
Okay, great @humphd, I'll do just that. We also need to run the Unicode Bidirectional Algorithm on the text nodes. I'm looking at using Twitter's implementation of it. Hopefully, Gecko has something like this as well, so we can use a method similar to the one we used with the StringDecoder.
On second thought, it seems like Twitter's implementation requires Ruby on Rails, unless we manually extract the parts of the source code that apply. I'll keep looking.
Actually, for our purposes it might be easiest to code it ourselves. We only have to account for one small section of the spec.
Twitter's implementation still looks like the canonical one most people point to for JS. We just need to figure out the right way to package it per locale, since you have to generate the JS with locale-specific data.
One problem with Twitter's implementation is that you have to know ahead of time which locale you will be using. That isn't really possible with WebVTT: there is a language tag, but it isn't guaranteed to be present. Another solution I'm looking at is to use a Node package that loads the entire Unicode data table and to implement our own function to determine the paragraph embedding level. Once we have the table, it shouldn't be too hard: just loop through the concatenated text nodes and return 0 or 1 depending on whether we find a strong LTR or RTL character first. We also need to make sure whatever we do is compatible with Gecko. I haven't found the details of Gecko's implementation of this yet, although I know it has one.
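The "loop and return 0 or 1" idea above can be sketched roughly as follows. This is a minimal illustration, not the project's code: the character ranges are a small hand-picked subset of the strong-directional classes, whereas a real implementation would consult the full Unicode data table mentioned above (UAX #9 rules P2/P3).

```javascript
// Illustrative subset of strong RTL ranges (Hebrew, Arabic, and related
// presentation forms). A real implementation would use the full Unicode table.
const RTL_RANGES = [
  [0x0590, 0x05ff], // Hebrew
  [0x0600, 0x06ff], // Arabic
  [0x0700, 0x074f], // Syriac
  [0xfb1d, 0xfdff], // Hebrew/Arabic presentation forms
  [0xfe70, 0xfeff], // Arabic presentation forms B
];

function isStrongRTL(cp) {
  return RTL_RANGES.some(([lo, hi]) => cp >= lo && cp <= hi);
}

function isStrongLTR(cp) {
  // Crude approximation: treat Latin, Greek, and Cyrillic letters as strong LTR.
  return (cp >= 0x41 && cp <= 0x5a) ||
         (cp >= 0x61 && cp <= 0x7a) ||
         (cp >= 0x0370 && cp <= 0x04ff);
}

// Scan the concatenated cue text for the first strong directional character
// and return the paragraph embedding level: 0 (LTR) or 1 (RTL).
// Defaults to 0 when no strong character is found, per UAX #9 rule P3.
function paragraphEmbeddingLevel(text) {
  for (const ch of text) {
    const cp = ch.codePointAt(0);
    if (isStrongRTL(cp)) return 1;
    if (isStrongLTR(cp)) return 0;
  }
  return 0;
}
```

For example, `paragraphEmbeddingLevel("Hello")` yields 0, while a string starting with Hebrew or Arabic text yields 1; digits and punctuation are skipped until a strong character is found.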
So it looks like Gecko doesn't expose this functionality to JavaScript components, so it might be better to bundle something with the parser.
Partial fix for this proposed in #99. The remaining part needed to resolve this is applying positioning to regions.
Final fix for this in #109. |
We can do the whole WebVTT processing model in this function.