
Support for other TranslationStrings instances in tokens #164

Status: Closed (wants to merge 3 commits)
Conversation

KirillMysnik (Member):

Also removed the LangStrings.get_strings method, which modified instances in-place and is not used anywhere.

Example

../resource/source-python/translations/new_translations_test.ini

[magic_string]
en="{full_name} is the new president"

[sub 1]
en="{first_name} {last_name}"

[sub 2]
en="Hillary"

[sub 3]
en="Clinton"

../addons/source-python/plugins/new_translations_test.py

from translations.strings import LangStrings


strings = LangStrings("new_translations_test")


tokenized_string = strings['magic_string'].tokenized(
    full_name=strings['sub 1'].tokenized(
        first_name=strings['sub 2'],
        last_name=strings['sub 3']
    )
)

print(tokenized_string.get_string(first_name="Donald", last_name="Trump"))

Result:
Donald Trump is the new president
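For readers outside the codebase: the nesting works because tokenized returns a new instance whose tokens may themselves be translation strings, resolved recursively when the final string is built. A minimal, self-contained sketch of that idea (the Translatable class and its names are illustrative, not the actual Source.Python implementation):

```python
# Illustrative sketch of nested token resolution -- NOT the actual
# Source.Python TranslationStrings implementation.
class Translatable:
    def __init__(self, template):
        self.template = template
        self.tokens = {}

    def tokenized(self, **tokens):
        # Return a copy so the original instance stays untouched
        result = Translatable(self.template)
        result.tokens.update(self.tokens)
        result.tokens.update(tokens)
        return result

    def get_string(self, **extra_tokens):
        resolved = {}
        for name, token in {**self.tokens, **extra_tokens}.items():
            if isinstance(token, Translatable):
                # Nested instances are resolved recursively; the extra
                # tokens are forwarded so they can fill inner slots
                token = token.get_string(**extra_tokens)
            resolved[name] = token
        return self.template.format(**resolved)


magic = Translatable("{full_name} is the new president")
sub1 = Translatable("{first_name} {last_name}")

print(magic.tokenized(full_name=sub1).get_string(
    first_name="Donald", last_name="Trump"))
# -> Donald Trump is the new president
```

Note how the extra tokens passed to get_string are forwarded to the nested instance so they can fill its inner placeholders.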

For reference

http://forums.sourcepython.com/viewtopic.php?f=8&t=1043
http://forums.sourcepython.com/viewtopic.php?f=38&t=1053

Kirill Mysnik added 2 commits on November 11, 2016 at 13:55:
- …tokens
- Removed LangStrings.get_strings
- Added TranslationStrings.tokenized
KirillMysnik (Member, Author):

@Ayuto @invincibleqc @satoon101 What do you guys think? I suspect you might not have been notified of this one.

Ayuto (Member) left a comment:

Looks good, but please take a look at my comments before merging.

"""
result = TranslationStrings()
result.tokens.update(tokens)

Ayuto (Member):

This could just be result.update(self). But we might even want to create a deep copy.
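The distinction matters because an update only copies top-level references. A quick stand-alone illustration of shallow vs. deep copying with plain dicts (standard-library copy module; the dict shapes here are made up for the example):

```python
import copy

# Illustrative only: shows why "result.update(self)" (a shallow copy)
# differs from a deep copy when values are mutable, e.g. a tokens dict.
original = {"en": "{name} wins", "tokens": {"name": "Alice"}}

shallow = dict(original)             # top-level copy, shared inner dict
shallow["tokens"]["name"] = "Bob"    # also mutates original["tokens"]

deep = copy.deepcopy(original)       # fully independent copy
deep["tokens"]["name"] = "Carol"     # original stays untouched

print(original["tokens"]["name"])    # -> Bob
```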

KirillMysnik (Member, Author):

Yes, you're right. This must be a leftover from something bigger; it can indeed be written as result.update(self) now.


exposed_tokens[token_name] = token

# Expose all TranslationsStrings instances in **tokens
Ayuto (Member):

I think we should create a helper method for this duplicate. Maybe something like this:

        # Code...

        # Expose all TranslationStrings instances in self.tokens
        exposed_tokens = {}

        self._update_exposed_tokens(exposed_tokens, language, self.tokens, **tokens)
        self._update_exposed_tokens(exposed_tokens, language, tokens)

        return self[language].format(**exposed_tokens)

    @staticmethod
    def _update_exposed_tokens(exposed_tokens, language, tokens, **kwargs):
        for token_name, token in tokens.items():
            if isinstance(token, TranslationStrings):
                # Pass additional tokens - these will be used to format
                # the string
                token = token.get_string(language, **kwargs)

            exposed_tokens[token_name] = token

KirillMysnik (Member, Author):

I'm not sure about this one... Is there a real need to do so? It's impossible to capture everything this method does in its name, so anybody reading the code will need to look into _update_exposed_tokens anyway.

Ayuto (Member):

There is no need to, but it removes duplicated code. And in my opinion it makes the code easier to read, because the difference between the two calls is easier to see.

@@ -217,12 +217,6 @@ def _replace_escaped_sequences(given_string):
# Return the replaced string
return given_string

def get_strings(self, key, **tokens):
Ayuto (Member):

This method was added because of PR #43.

KirillMysnik (Member, Author):

Okay, now I see where it comes from. But in my opinion, we shouldn't tokenize the original TranslationStrings instances that are stored in LangStrings. How about this:

def get_strings(self, key, **tokens):
    """Return a new TranslationStrings object with updated tokens."""
    strings = TranslationStrings()
    strings.update(self[key])
    strings.tokens.update(tokens)
    return strings
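The point of the copy is that the tokens no longer leak into the instance stored inside LangStrings. A before/after sketch using plain dicts and hypothetical helper names to stand in for the real classes:

```python
# Plain-dict stand-ins for LangStrings/TranslationStrings; the helper
# names below are hypothetical and only illustrate the two behaviors.
stored = {"template": "{x}", "tokens": {}}

def get_strings_inplace(strings, **tokens):
    # Current behavior: tokens stick to the stored instance forever
    strings["tokens"].update(tokens)
    return strings

def get_strings_copy(strings, **tokens):
    # Proposed behavior: tokenize a copy, leave the original clean
    result = {"template": strings["template"],
              "tokens": dict(strings["tokens"])}
    result["tokens"].update(tokens)
    return result

get_strings_copy(stored, x="hello")
print(stored["tokens"])    # -> {} (stored instance unchanged)

get_strings_inplace(stored, x="hello")
print(stored["tokens"])    # -> {'x': 'hello'} (tokens got stuck)
```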

Ayuto (Member):

Well, if you want to prevent in-place updates, you would change the __getitem__ method. But that doesn't really matter, because you always pass all tokens anyway, right?

KirillMysnik (Member, Author):

Yes, everyone should probably pass the complete dict of tokens, but with the current implementation of get_strings those tokens get stuck in LangStrings for no reason.
Regarding __getitem__: if somebody gets the empty TranslationStrings instance stored in LangStrings and does something filthy with its tokens, that's their problem. Creating a new TranslationStrings instance in __getitem__ and then another one while tokenizing it? That's too much instantiation, I'd say. It would be nice to hear opinions from other contributors, too. @Mahi, I believe this method originates from your PR?

I also don't see any reason to keep LangStrings.get_strings now that we have the TranslationStrings.tokenized method.
