
Contributor stats in API #5459

Open
Duncan-Idaho opened this issue Feb 16, 2021 · 5 comments
Labels
backlog This is not on the Weblate roadmap for now. Can be prioritized by sponsorship. enhancement Adding or requesting a new feature.

Comments

@Duncan-Idaho

Is your feature request related to a problem? If so, please describe.
We're exporting translation files to outside contractors and reimporting them later.
This process is automated, and I need a way to extract their contributions, for example to verify their bills.

Many parts of the API expose statistics already, but:

  • On user statistics, we lack the number of words, for example.
  • We have no way to filter statistics per period.

Trying to call the existing contribution stat page is error-prone at best due to the authentication and CSRF mechanisms.
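For context, a rough sketch of querying the existing per-user statistics endpoint mentioned below (the instance URL, user identifier, and token are placeholders; per this issue, the response contains no word counts and the endpoint accepts no date-range filters):

```python
import json
import urllib.request


def user_statistics_url(base_url: str, user: str) -> str:
    # Pure helper so the request target is easy to inspect and test.
    return f"{base_url}/api/users/{user}/statistics/"


def fetch_user_statistics(base_url: str, user: str, token: str) -> dict:
    # Weblate's REST API uses token authentication via this header.
    req = urllib.request.Request(
        user_statistics_url(base_url, user),
        headers={"Authorization": f"Token {token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```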

Describe the solution you'd like

Either:

  • a new API endpoint that receives exactly 4 parameters (start_date, end_date, period and format) and returns exactly what the /counts/:project-slug returns, or
  • an improvement on the api/users/:email/statistics endpoint to include wordcount and support two query string parameters (start_date and end_date)
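A hypothetical sketch of the second option above: the same user statistics endpoint extended with start_date/end_date query parameters. Neither the parameters nor the word-count field exist yet; all names are illustrative only.

```python
from urllib.parse import urlencode


def period_statistics_url(base_url: str, user: str,
                          start_date: str, end_date: str) -> str:
    # Proposed (not yet existing) query parameters for a billing period.
    query = urlencode({"start_date": start_date, "end_date": end_date})
    return f"{base_url}/api/users/{user}/statistics/?{query}"
```

For example, `period_statistics_url("https://weblate.example.com", "alice", "2021-01-01", "2021-01-31")` would target January's contributions only.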

Describe alternatives you've considered

See above

Screenshots

Additional context

@nijel nijel added backlog This is not on the Weblate roadmap for now. Can be prioritized by sponsorship. enhancement Adding or requesting a new feature. labels Feb 16, 2021
@nijel nijel added this to To do in API improvements via automation Feb 16, 2021
@github-actions

This issue has been added to the backlog. It is not scheduled on the Weblate roadmap, but it eventually might be implemented. In case you need this feature soon, please consider helping or push it by funding the development.

@comradekingu
Contributor

comradekingu commented Feb 24, 2021

@Duncan-Idaho Are they working on it concurrently, and is it separated into exclusive parts?

By dividing work into larger deliveries (and also roles), I find the accountability of only having someone to blame (once something goes really wrong) is only worth it to the average translator as a way of avoiding not getting paid.

If you can't account for not being able to work faster than you should, which by all accounts stops with more specific timing, there is no way to average out being held up in review when doing all the work to begin with would be faster, and easier.

Somewhere the rubber has to meet the road: you can either test people against each other and see what the resulting number of errors amounts to, or there could be shorter deadlines with fewer strings per batch.

With where that is going, translators are not about to take the hit on what it costs to average out ability. Nor should they, but that is usually company money.

To combat the idea of strings translated and/or time spent amounting to end value:

It would be nice if each translator had their hourly rate set, and the time it takes to fix the strings of others were then deducted from that translator's payout at said rate. Some portion of this could be awarded to the translator who ends up with the most sane fixes.

Either by actual metrics string-for-string, or by average speed and/or estimated/calculated complexity of the source material for changed strings, there are many ways to arrive at a number.
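The deduct-and-award scheme sketched above boils down to simple arithmetic. All names and the 50% split below are made up purely for illustration:

```python
def payout_adjustment(hourly_rate: float, hours_spent_fixing: float,
                      fixer_share: float = 0.5) -> tuple:
    """Deduct review-fix time at the translator's own hourly rate;
    award a share of the deduction to whoever fixed the strings."""
    deduction = hourly_rate * hours_spent_fixing
    return (deduction, deduction * fixer_share)


# At a 30/hour rate, 2 hours of fixes cost the original translator 60,
# half of which goes to the fixer.
deduction, bonus = payout_adjustment(hourly_rate=30.0, hours_spent_fixing=2.0)
```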

So how do you decide whether a review is good or bad? Asking people to document others' errors for free isn't going to happen, and what company pays more to have its already lost time documented, at the cost of what its better translators charge? Add to that that it is boring work, so why would anyone do it unless push came to shove? If not by way of external contributions once a company gets clever, certainly on the platform itself.

Both of those should run the distinct words in the respective personal translation memory of a translator against a dictionary. And then against the internal terminology/glossary.

If the premise of getting stuff done is using Weblate itself to do it, delivering faster through parallel translation is the obvious benefit, but it isn't geared towards the same accountability of quality. Keeping up with immediate notifications is sort of the same as e-mail digests in a slower format, but doing so while trying to arrive at a consistent result for oneself is way harder.

If a list of contested changes were produced, it would become a lot easier for managers to tell what is going on. Moreover, it would be easier to add "forbidden" words, checks and whatnot to help with actual productivity.
I like the idea of automated logic for reviewing contested changes, but in the end managers are the ones with the money and the ultimate desire for quality, so they decide in the matter.

In summary, review-to-verify should not be a way to gloss over how some people cost the company money.

Some platforms have a bit of this, and I have some screenshots somewhere, but I have never seen it done right.

Per-translator review-status is needed to get fancier than that, but I digress.

@Duncan-Idaho
Author

Maybe I was not clear enough: we're exporting some languages (not all) to a single contractor company. They have their own tools, their own processes, their own way to pay individual translators, etc.

Each time we export, they send us a quote prior to translation (on a per-word basis), and afterwards they send us a bill. This is pretty old-school, and we're looking to move toward a more continuous approach, but this is the current situation.

What I'm looking for is an easy way to see whether the bill roughly matches the upload I received, overall, or whether there's a mistake somewhere (it has happened).
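This rough bill check could be sketched as a simple tolerance comparison between the contractor's per-word bill and word counts derived from Weblate data. All figures and the 5% tolerance are illustrative, not from the issue:

```python
def bill_matches(billed_words: int, observed_words: int,
                 tolerance: float = 0.05) -> bool:
    """Accept the bill if it is within `tolerance` of the word count
    we observed for the contractor's contributions."""
    if observed_words == 0:
        return billed_words == 0
    return abs(billed_words - observed_words) / observed_words <= tolerance


# A bill of 10,400 words against 10,250 observed is within 5%;
# a bill of 12,000 against 10,000 observed is not.
bill_matches(10_400, 10_250)
bill_matches(12_000, 10_000)
```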

I'm not looking to closely micro-manage individuals, quicken the pace, pit them against each other, or impose some inhumane process.

Besides my current situation:

I tend to agree that using only statistics is not the best approach for reviews and analysis, especially at a micro level. They can be misused by some people.

But statistics can still be very useful for high-level analysis and for detecting major issues or inconsistencies. They can help you spot incongruities, which you can then dig into for a more complete understanding.
Therefore I do think this feature request would bring value to Weblate users overall.

@nijel
Member

nijel commented Mar 8, 2021

The reporting is already there (https://docs.weblate.org/en/latest/devel/reporting.html); it's just not exposed in the API.

@luzpaz
Contributor

luzpaz commented Jul 14, 2023

Just curious if there is an ETA on this feature?
